
GZipStream and DeflateStream produce bigger files

I'm trying to use deflate/gzip streams in C#, but it appears that the files after compression are bigger than before.

For example, I compress a 900 KB docx file, but it produces a 1.4 MB one!

And it happens with every file I tried.

Maybe I'm doing it wrong? Here is my code:

  FileStream input = File.OpenRead(Environment.CurrentDirectory + "/file.docx");
  FileStream output = File.OpenWrite(Environment.CurrentDirectory + "/compressedfile.dat");

  GZipStream comp = new GZipStream(output, CompressionMode.Compress);

  while (input.Position != input.Length)
      comp.WriteByte((byte)input.ReadByte());

  input.Close();

  comp.Close(); // automatically calls Flush on closing
  output.Close();


Such a big difference seems strange to me, but keep in mind that docx is itself ZIP-compressed, so there is no reason to compress it again; the result is usually bigger.
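You can see this effect with in-memory buffers; a quick sketch, where the repeated-'a' input just stands in for the compressible XML inside a docx (assumes a recent .NET with top-level statements):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

static byte[] GZip(byte[] data)
{
    using var ms = new MemoryStream();
    using (var gz = new GZipStream(ms, CompressionMode.Compress))
        gz.Write(data, 0, data.Length);
    return ms.ToArray();
}

// Highly compressible input, standing in for the XML inside a .docx.
byte[] original = Encoding.UTF8.GetBytes(new string('a', 10_000));

byte[] once  = GZip(original); // shrinks dramatically
byte[] twice = GZip(once);     // grows: the input is already near-random

Console.WriteLine($"{original.Length} -> {once.Length} -> {twice.Length}");
```

Compressing once removes almost all the redundancy; compressing the result again has nothing left to exploit, so you only pay the gzip header/trailer overhead.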


Firstly, deflate/gzip streams are remarkably bad at compression when compared to zip, 7z, etc.

Secondly, docx (and all of the MS document formats with an 'x' at the end) are just .zip files anyway. Rename a .docx to .zip to reveal the smoke and mirrors.

So when you run deflate/gzip over a docx, it will actually make the file bigger. (It's like zipping, at a low compression level, a file that was already zipped at a high compression level.)

However, if you run deflate/gzip over HTML, a text file, or anything that isn't already compressed, it will do a pretty good job.
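Incidentally, the byte-by-byte loop in the question works, but the usual pattern wraps the streams in using blocks and copies in bulk; a sketch (the sample file name here is made up):

```csharp
using System.IO;
using System.IO.Compression;

// Create a sample text file so the snippet is self-contained.
File.WriteAllText("sample.txt", new string('x', 5_000));

// using blocks guarantee the gzip stream is flushed and closed even if
// an exception is thrown; CopyTo (.NET 4+) replaces the per-byte loop.
using (FileStream input = File.OpenRead("sample.txt"))
using (FileStream output = File.Create("sample.txt.gz"))
using (GZipStream comp = new GZipStream(output, CompressionMode.Compress))
{
    input.CopyTo(comp);
}
```

Note `File.Create` rather than `File.OpenWrite`: `OpenWrite` does not truncate an existing file, which can leave stale bytes at the end of the output.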


Although it is true, as others have indicated, that the example files you specified are already compressed, the biggest issue is understanding that unlike most compression utilities, the DeflateStream and GZipStream classes simply try to tokenize/compress a data stream without the intelligence to notice that the additional tokens (overhead) are actually increasing the amount of data required. Zip, 7z, etc. are smart enough to know that if the data is largely random entropy (virtually incompressible), they simply store the data "as-is" (stored, not compressed) instead of attempting to compress it further.
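To illustrate, a small sketch using seeded random bytes as a stand-in for incompressible data: the stream still emits its framing overhead even when no compression is possible, so the output comes out slightly larger than the input.

```csharp
using System;
using System.IO;
using System.IO.Compression;

static byte[] GZip(byte[] data)
{
    using var ms = new MemoryStream();
    using (var gz = new GZipStream(ms, CompressionMode.Compress))
        gz.Write(data, 0, data.Length);
    return ms.ToArray();
}

// Random bytes are (virtually) incompressible.
var noise = new byte[10_000];
new Random(42).NextBytes(noise);

byte[] gz = GZip(noise);
Console.WriteLine($"{noise.Length} -> {gz.Length}"); // output is larger
```

An archiver like zip or 7z would detect this and fall back to a "stored" entry; GZipStream has no such per-file fallback, so the overhead always shows up in the output.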


I had the same issue with compressing databases containing jpg data. I tried DotNetZip, a drop-in replacement, and got decent compression (it supports the Compact Framework too!):

MS : 10MB -> 10.0MB
DNZ: 10MB ->  7.6MB


I don't think GZipStream and DeflateStream are intended to compress files. You would probably have better luck with a file compression library like SharpZipLib.
