
Can't find logic behind png file sizes

I'm saving a large number of small png files for use in a game on a phone, so space is at a premium.

I'm trying to figure out the logic behind the file sizes so I can save things most efficiently, but even after using pngcrush the sizes are totally inconsistent.

I saved a 1x1 image and it takes 3kb. I have another 23x21 image which takes only 2kb. I have two images which are almost the same size, but one takes 6kb and the other takes 13kb. I doubled the image height and copied one image into the empty space of the other and saved that. The combined image is only 11kb!

Why is a 1x1 image larger than a 23x21 image? Why can I combine a 13kb image and a 6kb image and get an 11kb image?

Here are the images I'm talking about (there's a 1x1-pixel image between the first and second images; it's difficult to see, so I'll just give the URL: http://g42.org/temp/png/1x1.png):

  • http://g42.org/temp/png/hat.png

  • http://g42.org/temp/png/1x1.png

  • http://g42.org/temp/png/helmet1.png

  • http://g42.org/temp/png/helmet2.png

  • http://g42.org/temp/png/helmet1_2.png


It's not a compression thing. The problem with the 1x1 image is that it carries metadata (added by Photoshop, it seems): a color profile stored in an iCCP chunk. If you look inside the binary, it's the data between the strings "iCCP" and "IDAT"; remove it and you get a 69-byte file. If you reopen and re-save the file in most image viewers (XnView, for example), or run it through pngcrush, that chunk gets stripped. See it here:
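A PNG file is just an 8-byte signature followed by length-prefixed chunks, so stripping an ancillary chunk like iCCP is mechanical. Here's a minimal sketch in Python (the set of chunk names to drop is illustrative; only IHDR, IDAT and IEND are strictly required):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def strip_ancillary(png, drop=(b"iCCP", b"tEXt", b"zTXt", b"iTXt")):
    """Walk the PNG chunk stream and drop the named ancillary chunks."""
    assert png[:8] == PNG_SIG, "not a PNG file"
    out = bytearray(PNG_SIG)
    pos = 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])  # big-endian data length
        ctype = png[pos + 4:pos + 8]                       # 4-byte chunk type
        end = pos + 12 + length                            # 4 length + 4 type + data + 4 CRC
        if ctype not in drop:
            out += png[pos:end]
        pos = end
    return bytes(out)
```

Dropping a chunk this way doesn't invalidate the rest of the file, since each chunk carries its own CRC and the critical chunks are untouched.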


http://i.stack.imgur.com/fmOdA.png

And regarding the helmet images: besides other informational chunks (ImageReady adds some informational text, as you can see), the difference is due to different formats: the two-helmet image is paletted (8 bits per pixel), while the single helmet is RGB with alpha (32 bits per pixel).
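The raw, pre-compression cost of those two formats is easy to estimate. A back-of-the-envelope sketch in Python (the dimensions here are made up for illustration; the answer doesn't give the real ones):

```python
# hypothetical dimensions, for illustration only
w, h = 46, 21

# paletted: 1 byte per pixel, plus a palette of up to 256 RGB entries (PLTE chunk)
paletted = w * h * 1 + 256 * 3

# RGBA: 4 bytes per pixel, no palette needed
rgba = w * h * 4

# the paletted layout hands DEFLATE far less raw data to begin with
assert paletted < rgba
```

So even before the compressor runs, the paletted image starts from roughly a quarter of the pixel data, which is why two helmets can end up smaller than one.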


PNG compression is based on the same algorithm as zlib (DEFLATE) and is highly sensitive to the data being compressed, so you won't see a consistent relationship between image dimensions and file size. In the case of the combined image, it is still bigger than the smaller image, and given the similarity of the two halves, the compressor was probably able to reuse a lot of the Huffman tree. I don't know enough about the algorithm to say for certain how it ended up smaller than the larger of the two halves.
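The "doubled image smaller than the sum of its halves" effect is easy to reproduce with zlib directly, because DEFLATE can encode the second copy largely as back-references into the first (its match window is 32 KiB). A small demonstration with an arbitrary byte pattern standing in for the pixel data:

```python
import zlib

half = bytes(range(256)) * 8   # 2 KiB standing in for one half of the image
doubled = half + half          # "copy one image into the empty space of the other"

one = len(zlib.compress(half, 9))
two = len(zlib.compress(doubled, 9))

# two copies cost far less than twice one copy:
# the second half compresses almost entirely to back-references
assert two < 2 * one
```

Real images won't be exact byte duplicates, but similar rows and similar halves produce the same effect in a weaker form.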

As long as you are not seeing oddities like the 1x1 image, which you seem to have figured out in the comments, I don't think this will make a lot of sense without extensive study of image compression.


There is a great utility called pngcrush

http://pmt.sourceforge.net/pngcrush/

Compressing to PNG well is a rather difficult task - there are lots of assumptions and strategies to try - do we create a palette, or are we better off without one?

PNGcrush essentially brute-forces 100+ different compression strategies, while at the same time trimming useless tags and sections.
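For example (flags from the pngcrush documentation; `-brute` tries every filter/strategy combination and keeps the smallest result, `-rem alla` removes all removable ancillary chunks except transparency):

```shell
# brute-force all filter/compression combinations and strip ancillary chunks
pngcrush -brute -rem alla input.png output.png

# just report which chunks a file contains, without rewriting it
pngcrush -n -v input.png
```

`-brute` is slow, but for a one-off batch of small game assets the extra minutes are usually worth the bytes saved.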


PNG has several sub-formats: 24-bit with or without alpha, 8-bit paletted (which can include alpha), grayscale, etc., which use different amounts of bytes per pixel and have different "compressibility".

Plus, PNG supports several compression tricks (scanline filters and zlib settings) which affect how well the image data compresses.
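The filter step is worth a quick illustration: PNG filters transform each scanline before compression (e.g. the Up filter stores each byte's difference from the byte directly above it), so that DEFLATE sees long runs instead of slowly changing values. A sketch on a synthetic vertical gradient:

```python
import zlib

# a 64x64 "image": each row is one value brighter than the row above
rows = [bytes([y] * 64) for y in range(64)]

# no filter: just the raw scanlines concatenated
raw = b"".join(rows)

# Up filter: each scanline becomes its byte-wise difference from the previous one
# (the first row is diffed against an all-zero row, per the PNG spec)
up = b"".join(
    bytes((r[i] - p[i]) % 256 for i in range(64))
    for p, r in zip([bytes(64)] + rows, rows)
)

# the filtered data is mostly a constant run of 1s, which DEFLATE loves
assert len(zlib.compress(up, 9)) < len(zlib.compress(raw, 9))
```

Encoders like pngcrush try each of the five filter types per scanline, which is a large part of why their output sizes vary so much.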

On top of that PNG can contain metadata, which sometimes can be pretty large, like some embedded color profiles.

  • ImageAlpha converts images to the most space-efficient PNG8+alpha variant.

  • ImageOptim removes junk metadata and finds best compression parameters.

With a combination of those two your images can be reduced by 30-50%.
