
Python - Search for items in hundreds of large, gzipped files

Unfortunately, I'm working with an extremely large corpus which is spread across hundreds of .gz files -- 24 gigabytes (packed) worth, in fact. Python is really my native language (hah), but I was wondering whether I've run up against a problem that will necessitate learning a "faster" language?

Each .gz file contains a single document in plain text, is about 56MB gzipped, and about 210MB unzipped.

On each line is an n-gram (bigram, trigram, quadrigram, etc.) and, to the right, a frequency count. I need to create a file that stores, for each quadrigram, its substring frequencies alongside its whole-string frequency count (i.e., 4 unigram frequencies, 3 bigram frequencies, and 2 trigram frequencies, plus the quadrigram's own count, for a total of 10 data points). Each type of n-gram has its own directory (e.g., all bigrams appear in their own set of 33 .gz files).
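To make the sub-gram bookkeeping concrete, here is a minimal sketch of that enumeration (assuming space-separated tokens; the helper name `subgrams` is hypothetical):

```python
def subgrams(quadrigram):
    """Yield the 9 sub-grams of a space-separated quadrigram:
    4 unigrams, then 3 bigrams, then 2 trigrams."""
    tokens = quadrigram.split()
    for n in (1, 2, 3):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

# subgrams("a b c d") yields: a, b, c, d, "a b", "b c", "c d", "a b c", "b c d"
```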

I know an easy, brute-force solution, and I know which module to import to work with gzipped files in Python, but I was wondering if there is an approach that wouldn't take weeks of CPU time? Any advice on speeding this process up, however slightly, would be much appreciated!


It would help to have an example of a few input lines and the expected output. But from what I understand, here are some ideas.

You certainly don't want to process all files every time you process a single file or, worse, a single 4-gram. Ideally you'd go through each file once. So my first suggestion is to maintain an intermediate store of frequencies (those sets of 10 data points), which at first takes only one file into account. When you process the second file, you update the frequencies for items you encounter (and presumably add new items), and you keep going like this, increasing frequencies as you find more matching n-grams. At the end, write everything out.

More specifically, at each iteration I would read a new input file into memory as a map of string to number, where the string is, say, a space-separated n-gram, and the number is its frequency. I would then process the intermediate file from the last iteration, which would contain your expected output (with incomplete values), e.g. "a b c d : 10 20 30 40 5 4 3 2 1 1" (kind of guessing the output you are looking for here). For each line, I'd look up all its sub-grams in the map, update the counts, and write the updated line to a new output file. That file becomes the intermediate input for the next iteration, until all input files have been processed.
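A minimal sketch of one such iteration, assuming tab-separated `ngram<TAB>count` input lines, the colon-separated intermediate format guessed at above, and the hypothetical `subgrams` helper from earlier (file paths are placeholders):

```python
import gzip

def load_counts(path):
    """Read one gzipped n-gram file into a dict of n-gram -> frequency."""
    counts = {}
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            ngram, _, freq = line.rstrip("\n").rpartition("\t")
            counts[ngram] = int(freq)
    return counts

def update_intermediate(prev_path, next_path, counts):
    """Stream the previous intermediate file, add whatever counts the
    newly loaded map contains, and write the updated lines out."""
    with open(prev_path, "rt", encoding="utf-8") as src, \
         open(next_path, "wt", encoding="utf-8") as dst:
        for line in src:
            quadrigram, _, rest = line.rstrip("\n").partition(" : ")
            freqs = [int(v) for v in rest.split()]
            # Slots 0-8 hold the 9 sub-gram counts, in the same order
            # subgrams() yields them; slot 9 is the quadrigram's own
            # count, fixed when the intermediate file was first built.
            for i, sub in enumerate(subgrams(quadrigram)):
                freqs[i] += counts.get(sub, 0)
            dst.write(quadrigram + " : " + " ".join(map(str, freqs)) + "\n")

# One pass over one input file:
# counts = load_counts("bigrams/part-07.gz")
# update_intermediate("intermediate.txt", "intermediate.next.txt", counts)
```

Because only one input file's map is in memory at a time (roughly one unzipped file's worth), each pass is a linear scan with O(1) dict lookups, and the whole job is one pass per input file rather than one pass per quadrigram.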
