Most efficient algorithm to compress folder of files
I have a folder of files and would like to losslessly compress it as efficiently as possible.
The files are very similar to one another: the main payload is exactly the same, but a variable-sized header and footer may differ quite a bit between files.
I need to be able to access any of the files very quickly, and also to add new files quickly (without having to decompress the entire folder just to add one file and recompress it). Deletions from the folder are uncommon.
Algorithmic suggestions are fine, though I would prefer to just use an existing library or program for this task.
In this case, since you have specific knowledge of the files, a custom solution would work best: store the static main payload only once, and store each file's header and footer separately (a rough sketch follows the example below). For example, say you have three files:
1.dat
2.dat
3.dat
Store them in the compressed file as:
payload.dat
1.header.dat
1.footer.dat
2.header.dat
2.footer.dat
3.header.dat
3.footer.dat
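Here is a minimal sketch of that layout in Python, assuming the header and footer lengths can be determined from your file format. The names `split_file`, `add_file`, `read_file`, and the length parameters are hypothetical, not an existing library API:

```python
import os

def split_file(path, header_len, footer_len):
    """Split one file into (header, payload, footer) byte strings.
    header_len/footer_len are assumed to be derivable from your file format."""
    with open(path, "rb") as f:
        data = f.read()
    return (data[:header_len],
            data[header_len:len(data) - footer_len],
            data[len(data) - footer_len:])

def add_file(store_dir, name, path, header_len, footer_len):
    """Store the header and footer per file; write the shared payload only once."""
    header, payload, footer = split_file(path, header_len, footer_len)
    payload_path = os.path.join(store_dir, "payload.dat")
    if not os.path.exists(payload_path):  # shared payload is stored a single time
        with open(payload_path, "wb") as f:
            f.write(payload)
    with open(os.path.join(store_dir, f"{name}.header.dat"), "wb") as f:
        f.write(header)
    with open(os.path.join(store_dir, f"{name}.footer.dat"), "wb") as f:
        f.write(footer)

def read_file(store_dir, name):
    """Reassemble the original file from its header, the shared payload, and its footer."""
    parts = []
    for piece in (f"{name}.header.dat", "payload.dat", f"{name}.footer.dat"):
        with open(os.path.join(store_dir, piece), "rb") as f:
            parts.append(f.read())
    return b"".join(parts)
```

With this layout, reading any one file only touches its small header and footer plus the shared payload, and adding a file never requires rewriting the existing entries.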
As far as adding files goes, both zip and 7-Zip support adding new files to an existing archive, so you can use either and simply append new files as needed. Personally, I would recommend 7-Zip, as I've found that in most situations it provides much better compression ratios, though this varies a lot depending on the exact content.
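For instance, if you go the zip route, Python's standard `zipfile` module can open an existing archive in append mode, so adding a file does not rewrite the entries already in it. A minimal sketch (the archive and file names are placeholders):

```python
import zipfile

# Open (or create) the archive in append mode: existing entries are untouched,
# and the new file is added without recompressing the rest of the archive.
with zipfile.ZipFile("bundle.zip", mode="a", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("4.dat")  # path on disk; stored under the same name in the archive

# Reading any single member back is fast, since each entry is indexed.
with zipfile.ZipFile("bundle.zip") as zf:
    data = zf.read("4.dat")
```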
The last time I looked into this, 7-Zip was the best option; I'm not sure whether anything newer has surpassed it.
With this kind of redundant data, most standard compression software should produce very satisfactory results. Do NOT use the standard Windows .zip generator for this, because it compresses each file separately, so the shared payload is never deduplicated across files. 7-Zip (which can compress multiple files together as a solid archive) or tar + gzip will work well here, though.
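As a concrete illustration of compressing the folder as one stream rather than file-by-file, here is a minimal sketch using Python's standard `tarfile` module; the folder and member names are placeholders:

```python
import tarfile

# Compress the whole folder as a single gzip-compressed stream, so similar
# files share one stream instead of being compressed independently.
with tarfile.open("bundle.tar.gz", mode="w:gz") as tar:
    tar.add("data_folder")  # placeholder folder name

# Extracting a single member later:
with tarfile.open("bundle.tar.gz", mode="r:gz") as tar:
    member = tar.extractfile("data_folder/1.dat")  # placeholder member path
    payload = member.read() if member else None
```

One caveat: pulling a single member out of a .tar.gz means decompressing the stream up to that member, so random access is slower than with a per-entry-indexed format like zip, and gzip's small window may not fully exploit redundancy between large payloads the way 7-Zip's solid mode can.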