The image is compressed by evolving a population of candidate solutions, each of which is compared to the original image at every generation. The best-matching solutions are selected for mating, and their descendants, which are also slightly mutated, form the next generation.

The solutions are evolved as "genotypes", which are translated into "phenotypes", i.e. images. These are compared to the target image and assigned a "fitness" according to how closely they match.
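The evolutionary loop described above can be sketched roughly as follows. This is a minimal illustration, not the original program: the population size of 50 matches the text, but the toy 4x4 target, the selection of the top 10, and the mutation rate are assumptions.

```python
import random

TARGET = [random.randrange(256) for _ in range(16)]  # toy 4x4 "image" (hypothetical)

def decode(genotype):
    # Placeholder genotype-to-phenotype translation; in the real program
    # this is the grammar-based decoding described below.
    return genotype

def fitness(genotype):
    # Closeness to the target: negative sum of absolute pixel differences,
    # so a perfect match has fitness 0.
    image = decode(genotype)
    return -sum(abs(a - b) for a, b in zip(image, TARGET))

def mutate(genotype, rate=0.05):
    # Randomly replace a few letters of the genotype.
    return [random.randrange(256) if random.random() < rate else g
            for g in genotype]

# 50-individual population, as in the text.
population = [[random.randrange(256) for _ in range(16)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # keep the best matches
    population = parents + [mutate(random.choice(parents))
                            for _ in range(40)]     # mutated descendants

best = max(population, key=fitness)
```

The real run used 10000 generations; a couple of hundred suffice to show the loop converging on a toy target.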

The image on the right is decoded from a genotype consisting of 256 "genes". Each "gene" is a production rule from a single letter to a 2x2 matrix, where each of the four matrix elements is one of 256 letters. Each letter in turn corresponds to one of the genes.

The decoding starts from a 1x1 matrix, and each letter in the matrix is rewritten with the corresponding gene's 2x2 matrix. This process is repeated until the matrix reaches 512x512, at which point the letters are interpreted as brightness values in the final image.
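A sketch of this decoding step, assuming a grammar of 256 rules where rule i maps letter i to a 2x2 block of letters (indices back into the rule set). The random grammar and the function names here are illustrative; the original grammar was not saved.

```python
import random

random.seed(0)
# Hypothetical grammar: rules[i] is the 2x2 right-hand side of rule i,
# stored row-major as four letters in 0..255.
rules = [[random.randrange(256) for _ in range(4)] for _ in range(256)]

def decode(rules, axiom=0, size=512):
    matrix = [[axiom]]                  # start from a 1x1 matrix
    while len(matrix) < size:
        new = []
        for row in matrix:
            top, bottom = [], []
            for letter in row:
                a, b, c, d = rules[letter]
                top += [a, b]           # top row of the 2x2 expansion
                bottom += [c, d]        # bottom row
            new += [top, bottom]
        matrix = new
    return matrix                       # letters read as brightness values

image = decode(rules, size=8)           # small size for demonstration
```

Each pass doubles the side length, so reaching 512x512 from 1x1 takes nine rewriting passes.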

Each of the 256 genes encodes 4 letters (the right-hand side of a rule), and each letter can take 256 values, i.e. one byte. This gives a genome size of 256 × 4 = 1024 bytes. As the original image is about 250 kilobytes, the compression ratio is about 1:250. We may suspect, however, that the grammar does not actually use all 256 rules, in which case the effective compression ratio would be even higher (unfortunately I didn't save the grammar).
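The arithmetic behind the quoted ratio, spelled out (assuming one brightness byte per pixel for the uncompressed 512x512 image):

```python
# Genome: 256 rules, 4 letters each, 1 byte per letter.
genome_bytes = 256 * 4 * 1           # = 1024 bytes
# Uncompressed image: 512x512 pixels, 1 byte per pixel.
image_bytes = 512 * 512 * 1          # = 262144 bytes, about 250 kilobytes
ratio = image_bytes / genome_bytes   # = 256, roughly the quoted 1:250
```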

You can easily see the recurrent use of the same image blocks in several places in the image.

The compression took 10000 generations with a 50-individual population. On a 333 MHz AMD K6-2 (a Pentium-class processor) that amounts to about two days. The jpeg compression took about 1 (one) second! OK, OK, not very practical, but it's the idea that matters, isn't it?

[Images: Original (1:1) | L-System compression (1:250) | jpeg compression (1:60)]

Last modified: Sat Feb 17 15:32:26 EET 2001