
Is there a way to determine in advance whether a file is a good candidate for compression?

I'm planning a .NET project that involves automated upload of files of widely varying types, from various distributed clients to a constellation of servers, and sometimes the file extension may not match the real file type (long story).

Using HTTP compression will not always be an option, and in this project it is preferable to spend more client processing than bandwidth or server storage. But it would be much better if we could skip the compression step whenever we can tell that compressing would not give worthwhile results.

I know that there is no "right answer", but we would appreciate any ideas.


Filtering by file type is a good idea. Even if some files have the wrong extensions, overall it should be a good bet.

Text files, for example, compress extremely well, while compressing MP3s, JPEGs/GIFs, or DivX files is of little use.
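As a rough illustration of that filter, here is a minimal C# sketch that skips formats which are typically already compressed; the extension list is just an example and would need tuning for your own file mix:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class ExtensionFilter
{
    // Example skip-list of formats that are usually already compressed.
    // The exact contents are an assumption; extend it for your own file mix.
    private static readonly HashSet<string> AlreadyCompressed =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            ".mp3", ".jpg", ".jpeg", ".gif", ".png", ".zip", ".rar",
            ".7z", ".avi", ".mp4", ".divx", ".mkv", ".docx", ".xlsx"
        };

    public static bool LooksCompressible(string path)
    {
        // If the extension is on the skip-list, assume compression won't pay off.
        return !AlreadyCompressed.Contains(Path.GetExtension(path));
    }
}
```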


Given what you say about extensions, I can see a couple of approaches.

First: can you determine the type of the file without using the extension? Lots of file types have standard headers, so you could parse the header and determine whether this is one of the dozen or so common file types you have implemented filters for.
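As an illustration of the header idea, here is a minimal sketch that checks the first few bytes against a handful of well-known magic numbers; the table below covers only a few example signatures, not the full set you would need:

```csharp
using System.IO;
using System.Linq;

static class MagicNumbers
{
    // A few well-known file signatures ("magic numbers"). This table is only
    // an example; a real implementation would cover the dozen or so types
    // you actually care about.
    private static readonly (byte[] Signature, string Type)[] Known =
    {
        (new byte[] { 0xFF, 0xD8, 0xFF },       "jpeg"),
        (new byte[] { 0x89, 0x50, 0x4E, 0x47 }, "png"),
        (new byte[] { 0x47, 0x49, 0x46, 0x38 }, "gif"),
        (new byte[] { 0x50, 0x4B, 0x03, 0x04 }, "zip"),
        (new byte[] { 0x1F, 0x8B },             "gzip"),
        (new byte[] { 0x25, 0x50, 0x44, 0x46 }, "pdf"),
    };

    // Returns a type name if the header matches a known signature, otherwise null.
    public static string DetectType(string path)
    {
        var header = new byte[8];
        using (var fs = File.OpenRead(path))
            fs.Read(header, 0, header.Length);

        foreach (var (signature, type) in Known)
            if (header.Take(signature.Length).SequenceEqual(signature))
                return type;

        return null; // unknown: fall back to another heuristic
    }
}
```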

Second: a simpler heuristic would be to grab, say, 100 bytes from the middle of the file and check whether they are plain ASCII, e.g. each byte has a value between 9 and 126. This will be wrong a certain percentage of the time, will not work for text in many languages, and will not work on Unicode text.
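A minimal sketch of that heuristic, using the 9–126 byte range suggested above; the sample size and the middle-of-file offset are arbitrary:

```csharp
using System.IO;

static class TextHeuristic
{
    // Grab a small sample from the middle of the file and check whether every
    // byte falls in the printable-ASCII-plus-whitespace range (9..126).
    public static bool LooksLikeAscii(string path, int sampleSize = 100)
    {
        using (var fs = File.OpenRead(path))
        {
            if (fs.Length == 0) return false;
            fs.Seek(fs.Length / 2, SeekOrigin.Begin);

            var buffer = new byte[sampleSize];
            int read = fs.Read(buffer, 0, buffer.Length);

            for (int i = 0; i < read; i++)
                if (buffer[i] < 9 || buffer[i] > 126)
                    return false;
            return true;
        }
    }
}
```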


By "in advance", do you mean before you actually compress, or before you send? You might keep some data and base your decision on that: map file types, extensions, and sizes to compression time and final size, and see if you can learn what works.
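One possible shape for that bookkeeping, purely as a sketch (the keying by extension and the 10% threshold are assumptions): keep running totals of original versus compressed size per category and skip categories that historically did not shrink.

```csharp
using System;
using System.Collections.Concurrent;

class CompressionStats
{
    // Running totals of original vs. compressed size, keyed by extension
    // (or by whatever category you can detect reliably).
    private readonly ConcurrentDictionary<string, (long Original, long Compressed)> _totals =
        new ConcurrentDictionary<string, (long Original, long Compressed)>(StringComparer.OrdinalIgnoreCase);

    public void Record(string extension, long originalSize, long compressedSize)
    {
        _totals.AddOrUpdate(extension,
            (originalSize, compressedSize),
            (_, t) => (t.Original + originalSize, t.Compressed + compressedSize));
    }

    // Worth compressing if this category historically shrank by at least 10%,
    // or if we have no data for it yet.
    public bool WorthCompressing(string extension)
    {
        if (!_totals.TryGetValue(extension, out var t) || t.Original == 0)
            return true;
        return (double)t.Compressed / t.Original < 0.9;
    }
}
```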


You could try compressing the file with a very fast compressor. If the fast compressor can't compress it enough, then it is useless to try to recompress it with a better one. Yes, this is a stupid idea, but technically a .zip file could contain a txt file using the "stored" format (so no compression), and that .zip would be highly compressible, so there isn't a magic bullet.
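A sketch of that trial-compression idea using DeflateStream at its fastest setting; the 90% ratio cut-off is just an assumption:

```csharp
using System.IO;
using System.IO.Compression;

static class TrialCompression
{
    // Run the data through a fast compressor and check the resulting ratio.
    // If even the fast pass barely shrinks it, a slower, better compressor
    // is unlikely to be worth the CPU time.
    public static bool IsWorthCompressing(byte[] data, double ratioThreshold = 0.9)
    {
        if (data.Length == 0) return false;

        using (var output = new MemoryStream())
        {
            using (var deflate = new DeflateStream(output, CompressionLevel.Fastest, leaveOpen: true))
                deflate.Write(data, 0, data.Length);

            return (double)output.Length / data.Length < ratioThreshold;
        }
    }
}
```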

(Technically you could measure the entropy of the file, but then, as suggested in "How to calculate the entropy of a file?", just gzip it to test it :-) )
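For reference, byte-level Shannon entropy can be computed in a single pass; values close to 8 bits per byte suggest data that is already compressed or encrypted. This is only a sketch, and how you interpret the number is up to you:

```csharp
using System;

static class Entropy
{
    // Shannon entropy in bits per byte: H = -sum(p_i * log2(p_i)).
    public static double BitsPerByte(byte[] data)
    {
        if (data.Length == 0) return 0;

        var counts = new long[256];
        foreach (byte b in data)
            counts[b]++;

        double entropy = 0;
        foreach (long count in counts)
        {
            if (count == 0) continue;
            double p = (double)count / data.Length;
            entropy -= p * Math.Log(p, 2);
        }
        return entropy; // close to 8.0 means the data already looks random
    }
}
```

Keep in mind the caveat above: a byte-level entropy estimate misses longer-range redundancy, which is exactly why actually running a fast compressor is the more reliable test.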


You could get a pointer by doing a byte frequency analysis, perhaps also with an MTF step to transform local repetition into something more measurable. The cost is cheap: a linear scan of the file.
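A sketch of such an MTF pass: bytes that repeat locally get mapped to small values, so local repetition shows up as a skewed byte distribution that a frequency or entropy measure (like the one above) can pick up. This is just one way to combine the two steps:

```csharp
using System;

static class MoveToFront
{
    // Move-to-front transform over the byte alphabet. Recently seen bytes
    // are encoded as small indices, so runs and local repetition become
    // visible in a simple byte-frequency analysis of the output.
    public static byte[] Transform(byte[] data)
    {
        var alphabet = new byte[256];
        for (int i = 0; i < 256; i++)
            alphabet[i] = (byte)i;

        var output = new byte[data.Length];
        for (int i = 0; i < data.Length; i++)
        {
            byte b = data[i];
            int index = Array.IndexOf(alphabet, b);
            output[i] = (byte)index;

            // Move the symbol to the front of the alphabet.
            Array.Copy(alphabet, 0, alphabet, 1, index);
            alphabet[0] = b;
        }
        return output;
    }
}
```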


You can try compressing the first several KB of each file internally before sending it, and see how many bytes it compresses down to. If the result looks good enough, compress the whole thing before sending it.

One thing you should be careful about with this approach is that many file formats have header-like data in their first few KB that is not representative of the rest of the file. So you might want to increase the sample size, take the sample from another part of the file, or take multiple sub-samples from different parts of the file to form your sample.
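One way to sketch the multi-sample variant: pull a few fixed-size blocks from evenly spaced offsets and trial-compress the concatenation (for example with the DeflateStream sketch earlier in this thread). Block size and count here are arbitrary:

```csharp
using System.IO;

static class Sampler
{
    // Read a few blocks from evenly spaced positions in the file, so that
    // header-like data at the start does not dominate the sample.
    public static byte[] TakeSamples(string path, int blockSize = 4096, int blockCount = 4)
    {
        using (var fs = File.OpenRead(path))
        using (var sample = new MemoryStream())
        {
            var buffer = new byte[blockSize];
            for (int i = 0; i < blockCount; i++)
            {
                long offset = fs.Length * i / blockCount;
                fs.Seek(offset, SeekOrigin.Begin);

                int read = fs.Read(buffer, 0, buffer.Length);
                sample.Write(buffer, 0, read);
            }
            return sample.ToArray();
        }
    }
}
```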

