What is the best compression scheme for a small amount of data, such as 1.66 kB?

The data is stored in a C++ array and consists of repeated 125-bit records, each varying slightly from the next. At the end there are also 8 messages of 12 ASCII characters each. Should I use differential compression within the array, and if so, how?

Or should I apply some other compression scheme to the whole array?


Generally you can compress data that has some sort of predictability or redundancy. Dictionary-based compression (e.g. ZIP-style algorithms) traditionally doesn't work well on small chunks of data because of the need to share the selected dictionary.
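Since the question says the 125-bit records differ only slightly from one another, one way to expose that redundancy before running any general compressor is a differential (XOR-delta) pre-pass over the records. Below is a minimal C++ sketch, assuming each record is stored padded to 16 bytes; the record size and function names are illustrative, not taken from the question.

// Differential (XOR-delta) pre-pass: replace every record after the first
// with its XOR against the previous record. Nearly identical neighbours
// become runs of zero bytes, which a general-purpose compressor handles well.
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kRecordBytes = 16;  // 125 bits padded to 16 bytes (assumption)

std::vector<std::uint8_t> delta_encode(const std::vector<std::uint8_t>& data) {
    std::vector<std::uint8_t> out(data);  // the first record is kept verbatim
    for (std::size_t r = kRecordBytes; r + kRecordBytes <= data.size(); r += kRecordBytes) {
        for (std::size_t j = 0; j < kRecordBytes; ++j) {
            out[r + j] = data[r + j] ^ data[r - kRecordBytes + j];
        }
    }
    return out;
}

// Decoding walks front to back, reconstructing each record from the
// previously decoded one.
std::vector<std::uint8_t> delta_decode(const std::vector<std::uint8_t>& data) {
    std::vector<std::uint8_t> out(data);
    for (std::size_t r = kRecordBytes; r + kRecordBytes <= out.size(); r += kRecordBytes) {
        for (std::size_t j = 0; j < kRecordBytes; ++j) {
            out[r + j] ^= out[r - kRecordBytes + j];
        }
    }
    return out;
}

The eight 12-character ASCII messages at the end could be excluded from this pre-pass and compressed as-is.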

In the past, when I have compressed very small chunks of data with somewhat predictable patterns, I have used SharpZipLib with a custom dictionary. Rather than embed the dictionary in the actual data, I hard-coded the dictionary into every program that needs to (de)compress the data. SharpZipLib gives you both options: using a custom dictionary, and keeping that dictionary separate from the data.

Again, this will only work well if you can predict some patterns in your data ahead of time, so that you can create an appropriate compression dictionary and it is feasible to keep the dictionary itself separate from the compressed data.
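For the C++ case in the question, zlib offers an analogous mechanism: a preset dictionary set with deflateSetDictionary on the compressor and inflateSetDictionary on the decompressor. A minimal sketch, assuming both programs hard-code the same dictionary bytes (the dictionary contents and function name here are placeholders):

#include <vector>
#include <zlib.h>

// Compress a small buffer with a preset dictionary that is never transmitted;
// every program that decompresses the data must hard-code the same bytes and
// feed them to inflateSetDictionary when inflate() returns Z_NEED_DICT.
std::vector<unsigned char> compress_with_dict(const std::vector<unsigned char>& input,
                                              const std::vector<unsigned char>& dict) {
    z_stream strm{};                                   // zero-initialised stream state
    deflateInit(&strm, Z_BEST_COMPRESSION);
    deflateSetDictionary(&strm, dict.data(), static_cast<uInt>(dict.size()));

    std::vector<unsigned char> out(deflateBound(&strm, static_cast<uLong>(input.size())));
    strm.next_in   = const_cast<Bytef*>(input.data());
    strm.avail_in  = static_cast<uInt>(input.size());
    strm.next_out  = out.data();
    strm.avail_out = static_cast<uInt>(out.size());

    deflate(&strm, Z_FINISH);                          // single-shot compress of the whole buffer
    out.resize(strm.total_out);
    deflateEnd(&strm);
    return out;
}

The design point is the same as above: the dictionary lives in the programs, not in the 1.66 kB payload, so none of the small data budget is spent on dictionary overhead.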


You haven't given us enough information to help you. However, I can highly recommend the book Text Compression by Bell, Cleary, and Witten. Don't be fooled by the title; "Text" here just means "lossless", and all the techniques apply to binary data. Because the book is expensive, you might try to get it on interlibrary loan.

Also, don't overlook the obvious Burrows-Wheeler (bzip2) or Lempel-Ziv (gzip, zlib) techniques. It's quite possible that one of these techniques will work well for your application, so before investigating alternatives, try compressing your data with standard tools.
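A quick way to follow that advice from C++ is zlib's one-shot API; the buffer and sizes below are stand-ins for the real 1.66 kB array:

#include <cstdio>
#include <vector>
#include <zlib.h>

int main() {
    std::vector<unsigned char> raw(1660, 0);           // placeholder for the real array
    uLongf compressed_len = compressBound(static_cast<uLong>(raw.size()));
    std::vector<unsigned char> compressed(compressed_len);

    // Single-call DEFLATE; compare compressed_len with raw.size() to see
    // whether a standard tool is already good enough for this data.
    int rc = compress2(compressed.data(), &compressed_len,
                       raw.data(), static_cast<uLong>(raw.size()),
                       Z_BEST_COMPRESSION);
    if (rc == Z_OK) {
        std::printf("raw: %zu bytes, deflated: %lu bytes\n",
                    raw.size(), static_cast<unsigned long>(compressed_len));
    }
    return 0;
}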
