Python summing frequencies in a file
I have a large file (950MB) that contains words and frequencies as follows, one per line:
word1 54
word2 1
word3 12
word4 3
word1 99
word4 147
word1 4
word2 6
etc...
I need to sum the frequencies for each word, e.g. word1 = 54 + 99 + 4 = 157, and output the totals to a list/file. What is the most efficient way of doing this in Python?
What I tried was to build a list with each line as a tuple and sum from there, but this crashed my laptop...
Try this:
from collections import defaultdict

d = defaultdict(int)
with open('file') as fh:
    for line in fh:
        word, count = line.split()
        d[word] += int(count)  # count is a string, so convert before summing
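Since you also want the totals in a file, here's a minimal way to write the accumulated dict back out, one word per line (the output filename totals.txt is just an example):

with open('totals.txt', 'w') as out:
    for word, total in sorted(d.items()):
        out.write('%s %d\n' % (word, total))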
You don't have to read the whole file into memory. You could also split the file into multiple smaller files, process each one separately, and merge the resulting frequencies; a sketch of that follows.
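If the accumulated counts themselves ever threaten to outgrow memory, a split-and-merge version might look like this minimal sketch. collections.Counter adds counts when updated with another Counter, which makes merging easy; the chunk filenames here are hypothetical (e.g. produced beforehand by split -l):

from collections import Counter

def count_chunk(path):
    # Sum frequencies within a single chunk file
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            word, count = line.split()
            counts[word] += int(count)
    return counts

total = Counter()
for path in ['chunk_aa', 'chunk_ab', 'chunk_ac']:  # hypothetical chunk files
    total.update(count_chunk(path))  # Counter.update adds counts together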
950MB shouldn't be too much for most modern machines to keep in memory. I've done this plenty of times in Python programs, and my machine has 4GB of physical memory. I can imagine doing the same with less memory too.
You definitely don't want to waste memory if you can avoid it though. A previous post mentioned processing the file line by line and accumulating a result, which is the right way to do it.
If you avoid reading the whole file into memory at once, you only have to worry about how much memory your accumulated result takes, not the file itself. It's possible to process files much larger than the one you mentioned, provided the result you keep in memory doesn't grow too large. If it does, you'll want to start saving partial results as files themselves, but it doesn't sound like this problem requires that.
Here's perhaps the simplest solution to your problem:
result = {}
with open('myfile.txt') as f:
    for line in f:
        word, count = line.split()
        # Add this line's count to whatever total we've seen so far
        result[word] = int(count) + result.get(word, 0)

for word, total in result.items():
    print('%s %d' % (word, total))
If you're on Linux or another UNIX-like OS, use top to keep an eye on memory usage while the program runs.
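If you'd rather check from inside the program itself, the standard library's resource module (Unix only) reports peak memory use; note that ru_maxrss is in kilobytes on Linux but bytes on macOS:

import resource

# Peak resident set size of this process so far
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('peak RSS so far: %d' % peak)  # KB on Linux, bytes on macOS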