Pre-Generated Images On Internet

By Chongchen Saelee

JPEG is the most commonly used image format on and off the Internet. That’s because it uses one of the best compression algorithms (which may be contested) to make images portable for storage and transfer. But image resolution keeps increasing in detail, and even JPEGs aren’t getting any smaller. The idea has always been to retain the visual data while keeping the storage size small. With JPEG, compression that discards data is called lossy. Contrast that with lossless compression, which retains all the original color data, though at the cost of a larger file. PNG is capable of lossless compression.

So here’s my proposal to have even more efficient image generation:

The images are already on the end-user’s computer. That’s right. But it’s not a bloated library of images. Instead, it’s a library of common pixel patterns, or color data. Think of it like a library of premade geometric shapes for vector graphics. So when a complex shape is requested, it’s just a combination of simpler shapes. And since those simpler shapes are already on the user’s computer, the completed shape is generated very fast.

For example:

Say an image depicts a shape “D” and the user has downloaded this image from a web server. The image contains instructions, similar to vector graphics, to combine patterns A+B+C. The user’s computer references those shape patterns on its end and generates shape D in the image browser/viewer. Now imagine this, but instead of basic geometric shapes, it’s a library of pixel patterns.
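A minimal sketch of that A+B+C idea, assuming a hypothetical local pattern library (the pattern names and the overlay rule are my own illustration, not a real format): the transferred “image file” is just a list of pattern IDs, and all the actual pixels come from the library already on the user’s machine.

```python
import numpy as np

# Hypothetical local pattern library: each entry is a tiny pixel
# pattern already stored on the end-user's computer.
PATTERN_LIBRARY = {
    "A": np.array([[1, 0], [1, 0]], dtype=np.uint8),  # left bar
    "B": np.array([[0, 1], [0, 0]], dtype=np.uint8),  # top-right pixel
    "C": np.array([[0, 0], [0, 1]], dtype=np.uint8),  # bottom-right pixel
}

def render(instructions):
    """Combine the referenced patterns into the final image.

    The downloaded 'image' is only a list of pattern IDs, not pixels;
    the pixel data is looked up in the local library.
    """
    layers = [PATTERN_LIBRARY[name] for name in instructions]
    out = layers[0]
    for layer in layers[1:]:
        out = np.maximum(out, layer)  # overlay each pattern on top
    return out

# The transferred file effectively says: combine A + B + C to get D.
shape_d = render(["A", "B", "C"])
print(shape_d.tolist())  # the full 2x2 "D" shape, assembled locally
```

Here the network only carries three short IDs instead of pixel data, which is the whole point of the proposal.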

By doing it this way, less information is stored in the image files that are transferred. It also improves perceived rendering speed, since the heavy pixel data never has to cross the network.

Or maybe even think of it this way: like in a 3D rendering program, the end user already has all the textures stored on their computer. The image file they download from a web server just references the textures it needs. So when the user loads the image, their computer puts the final image together on their end. Like that, but not 3D, because 3D carries too much overhead in coordinates and other data.

This would be all 2D.

For example, an image of a blue sky over a green grass field would probably be a very small file. Maybe the data can be stored like:

0.5 sky 0.5 grass

The algorithms to generate these would be so advanced that the result could be fine-tuned to look natural. But look at how little data is passed. The image header might not be any bigger either, maybe just the file type, width, and height.
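A toy sketch of decoding that descriptor, under my own assumptions (a made-up texture library where flat RGB colors stand in for real pixel patterns, and a made-up rule that each "fraction name" pair fills that fraction of the image height, top to bottom):

```python
import numpy as np

# Hypothetical local texture library; real patterns would be richer
# than flat colors, but the lookup idea is the same.
TEXTURES = {
    "sky": (135, 206, 235),   # light blue
    "grass": (34, 139, 34),   # green
}

def render_descriptor(descriptor, width, height):
    """Render an image from a tiny text descriptor like '0.5 sky 0.5 grass'.

    Each 'fraction name' pair fills that fraction of the image height
    with the named texture from the local library.
    """
    tokens = descriptor.split()
    bands = list(zip(tokens[0::2], tokens[1::2]))  # [(fraction, name), ...]
    image = np.zeros((height, width, 3), dtype=np.uint8)
    row = 0
    for fraction, name in bands:
        rows = round(float(fraction) * height)
        image[row:row + rows] = TEXTURES[name]
        row += rows
    return image

# The transferred data is the ~18-byte string; the pixels are local.
img = render_descriptor("0.5 sky 0.5 grass", width=4, height=4)
```

The descriptor is a few bytes regardless of resolution, which is where the savings over a pixel format like JPEG would come from.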

And imagine how adult images could be generated on the spot too, since most share the same composition. Or imagine family portraits, where most share the same composition; the only things that need fine-tuning are the individual faces. Imagine patching individual frames in a movie, but now without complete 3D rendering and other cumbersome methods. This is almost like Adobe’s Content Aware Fill, but instead of grabbing data from an existing frame, the data already exists in a big library and you just need to “paint” it all together. After all, most pictures have been composed of the same textures and elements since the dawn of time. Unless an image is abstract, most things depicted are real-world scenes.
