Request for comments on Huffman compression
The implementations of file compressors I have seen always compress arrays of bytes. But they could just as well compress arrays of shorts, or even ints, instead.
If each symbol in a Huffman binary tree represents a byte, we can compress at most 8 bits into a single bit in the optimal case. If each symbol in the tree represents a short, we can compress at most 16 bits into a single bit in the optimal case.
Is that correct?
Can someone update Wikipedia with this additional Huffman encoding information?
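To make this concrete, here is a rough Python sketch (my own simplification, ignoring the cost of transmitting the dictionary) that counts Huffman payload bits for the same data tokenized as 8-bit versus 16-bit symbols:

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: code length in bits} for a frequency table."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {s: 1 for _, _, d in heap for s in d}
    next_id = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: depth + 1 for s, depth in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next_id, merged))
        next_id += 1
    return heap[0][2]

def encoded_bits(data, width):
    """Total Huffman payload bits when data is split into width-byte tokens."""
    tokens = [bytes(data[i:i + width]) for i in range(0, len(data), width)]
    freqs = Counter(tokens)
    lengths = huffman_code_lengths(freqs)
    return sum(lengths[t] * n for t, n in freqs.items())

data = b"abracadabra" * 100
print(encoded_bits(data, 1))   # payload bits with 8-bit tokens
print(encoded_bits(data, 2))   # payload bits with 16-bit tokens
```

On repetitive data like this, the 16-bit tokenization gives a smaller payload, but remember it says nothing about the (much larger) symbol table you would also have to ship.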
The optimal compression is to treat your whole file as a single token and compress it using the zero-length Huffman code. This gives you an infinite compression ratio. Unfortunately, the description of the Huffman code will be quite large.
That is correct, but it isn't as amazing as it sounds.
There are two pieces of data that must be transferred to decode a Huffman-encoded byte stream. The encoded stream (of course) is required, but so is the dictionary that will allow you to properly build your Huffman tree to perform the decoding.
Using larger tokens to encode your data will always result in a smaller encoded stream. Unfortunately, unless you have data with some pretty specific and special characteristics, larger tokens will also cause your dictionary size to grow surprisingly quickly. The degenerate case (referred to by Mark Byers' answer) would result in the entire, uncompressed data stream being a single token and the encoded stream being a single bit, resulting in absolutely no compression.
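For a rough feel of how the dictionary grows, this sketch (illustrative only, using seeded random bytes as a stand-in for typical data) counts the distinct tokens, i.e. dictionary entries, at a few widths:

```python
import random

random.seed(0)  # reproducible stand-in for "typical" data
data = bytes(random.getrandbits(8) for _ in range(4096))

distinct = {}
for width in (1, 2, 4):
    # Each distinct token is one entry the dictionary must describe.
    distinct[width] = len({bytes(data[i:i + width])
                           for i in range(0, len(data), width)})
    print(width, distinct[width])
```

With one-byte tokens the dictionary is capped at 256 entries; at two bytes nearly every pair in the stream becomes its own entry.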
Thus, Huffman coding (like almost everything) is an exercise in tradeoffs. Striking a balance between efficiency of the encoded file and the size of the dictionary can be tricky. I've never performed the actual analysis based on data characteristics to find out what various ideal token sizes might be, but I think bytes tend to be used because it's a simple point to divide on and will generally result in some real compression. I know back in college I did it once as an exercise with four-byte tokens, but I couldn't honestly say that it was somehow better than one-byte tokens.
Of course, it's also possible to cheat and, instead of building the dictionary dynamically to get genuinely greedy compression, you can use a pre-built tree and compress with that. You'd then avoid transmitting the dictionary, but the decoder would also have to have the same dictionary to decode the data.
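A minimal sketch of that idea, with a hypothetical prefix-free code table hard-coded on both sides (the codes here are made up for the example, not derived from any real corpus):

```python
# Both encoder and decoder ship with the same fixed table, so no dictionary
# needs to travel with the stream.
SHARED_CODES = {"a": "0", "b": "10", "c": "110", "d": "111"}  # prefix-free
SHARED_DECODE = {bits: sym for sym, bits in SHARED_CODES.items()}

def encode(text):
    return "".join(SHARED_CODES[ch] for ch in text)

def decode(bits):
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in SHARED_DECODE:   # prefix property makes this unambiguous
            out.append(SHARED_DECODE[cur])
            cur = ""
    return "".join(out)

print(encode("abacad"))            # "01001100111"
print(decode(encode("abacad")))    # "abacad"
```

The tradeoff is exactly as described: nothing to transmit, but the fixed table is only as good as its match to the actual data statistics.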
Arabcoder, your assumptions are correct.
As a side note: a lot of 8-bit Huffman codecs don't compress only the 256 natural symbols of a byte. They also have one or more special symbols, used to detect the end of the Huffman stream or to switch from one Huffman tree to another...
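A tiny sketch of that extended alphabet: the coder works over 257 symbols, with value 256 (one past the largest byte) acting as an end-of-stream marker:

```python
EOF = 256  # one value past the 255 byte maximum

def symbols_for(data: bytes):
    """Yield the token stream a 257-symbol Huffman coder would see."""
    for b in data:
        yield b            # the 256 natural byte symbols, 0..255
    yield EOF              # special end-of-stream symbol

print(list(symbols_for(b"hi")))   # [104, 105, 256]
```

The decoder simply stops when it decodes the EOF symbol, instead of needing an explicit length prefix.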
Absolutely correct. Anyway, there is little use (other than intellectual challenge or training) in implementing compression algorithms yourself, since almost every language ships them in its standard library.
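For example, in Python the standard zlib module (DEFLATE, which itself uses Huffman coding on top of LZ77) does it in two lines:

```python
import zlib

data = b"abracadabra" * 100
packed = zlib.compress(data, 9)            # level 9 = best compression
assert zlib.decompress(packed) == data     # round-trips losslessly
print(len(data), len(packed))
```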
By the way, Huffman coding is always the same as or worse than arithmetic coding. Huffman coding was used for many years because arithmetic coding was patented until recently, and because Huffman is a little bit easier to implement. Today, when designing a new compression algorithm, there isn't much reason to use Huffman anymore; arithmetic coding should be used instead.
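A quick numeric illustration of the gap: for a skewed two-symbol source, Huffman cannot spend less than one whole bit per symbol, while arithmetic coding can approach the source entropy:

```python
from math import log2

p = 0.9                                   # very skewed two-symbol source
entropy = -(p * log2(p) + (1 - p) * log2(1 - p))
huffman_rate = 1.0                        # Huffman: at least 1 bit/symbol
print(round(entropy, 3), huffman_rate)    # 0.469 1.0
```

So on this source an ideal arithmetic coder needs less than half the bits Huffman does; the two only coincide when all symbol probabilities are powers of 1/2.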
Huffman is quite an old compression method and is rarely used on its own anymore; it is one of the basic methods taught in compression courses. Considering that many files like JPEG, PDF or JAR are already compressed, running plain Huffman compression on them won't give you much.
I am saying this because I have done it. And it applies even when you optimize the symbol table a lot.