
Creating an iterator in Python from a dictionary in a memory-efficient way

I'm iterating through a very large tab-delimited file (millions of lines) and grouping its lines based on the value of some field, e.g.

mydict = defaultdict(list)  # defaultdict needs a factory, or .append() will fail
for line in myfile:
  # Group all lines that have the same field into a list
  mydict[line.field].append(line)

Since "mydict" gets very large, I'd like to make it into an iterator so I don't have to hold it all in memory. How can I make it so instead of populating a dictionary, I will create an iterator that I can loop through and get all these lists开发者_运维问答 of lines that have the same field value?

Thanks.


It sounds like you might want a database. There are a variety of relational and non-relational databases you can pick from (some more efficient than others, depending on what you are trying to achieve), but sqlite (built into Python) would be the easiest.
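A minimal sketch of the sqlite approach. The sample lines, table name, and the assumption that the key is the first tab-delimited column are all illustrative, not from the original question; with a real file you'd pass a path to `sqlite3.connect()` instead of `":memory:"` so the data lives on disk.

```python
import sqlite3

# Sample data standing in for the real tab-delimited file (assumption:
# the grouping key is the first column).
lines = ["a\t1\n", "b\t2\n", "a\t3\n", "b\t4\n", "a\t5\n"]

conn = sqlite3.connect(":memory:")  # use a file path for data larger than RAM
conn.execute("CREATE TABLE lines (field TEXT, line TEXT)")
for line in lines:
    field = line.split("\t")[0]
    conn.execute("INSERT INTO lines VALUES (?, ?)", (field, line))
conn.commit()

def groups(conn):
    # Yield (field, list_of_lines) one group at a time, so only the
    # current group is ever held in Python memory.
    for (field,) in conn.execute(
            "SELECT DISTINCT field FROM lines ORDER BY field"):
        rows = conn.execute("SELECT line FROM lines WHERE field = ?", (field,))
        yield field, [r[0] for r in rows]

for field, group in groups(conn):
    print(field, len(group))
```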

Or, if there are only a small number of line.fields to process, you could just read the files several times.
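The multi-pass idea could look something like this sketch (the path, key set, and first-column key are assumptions): one full read of the file per distinct key, so memory is bounded by the size of the largest single group.

```python
def one_group(path, wanted):
    """Collect only the lines whose first tab-delimited column equals `wanted`
    (assumption: that column is the grouping field)."""
    with open(path) as f:
        return [line for line in f if line.split("\t")[0] == wanted]

# Hypothetical usage, one pass per key:
# for key in ("a", "b"):
#     process(one_group("myfile.txt", key))
```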

But there's no real magic bullet.


"Millions of lines" is not very large unless the lines are long. If the lines are long, you might save memory by storing only their positions in the file (via .tell()/.seek()) instead of the line text itself.
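A sketch of that offset idea, assuming again that the key is the first tab-delimited column: the index maps each key to a list of byte positions, and groups are re-read on demand. Note that iterating a text file with `for line in f` disables `.tell()`, so the sketch reads with `readline()` instead.

```python
from collections import defaultdict

def index_offsets(f):
    # Map each key to the positions where its lines start, instead of
    # storing the (possibly long) lines themselves.
    offsets = defaultdict(list)
    pos = f.tell()
    for line in iter(f.readline, ""):  # readline keeps tell() usable
        offsets[line.split("\t")[0]].append(pos)
        pos = f.tell()
    return offsets

def read_group(f, positions):
    # Seek back and re-read one group's lines on demand.
    group = []
    for pos in positions:
        f.seek(pos)
        group.append(f.readline())
    return group
```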

If the file is sorted by line.field, you could use itertools.groupby().
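For example, assuming the key is the first tab-delimited column and the file is already sorted on it (an in-memory list stands in for the file here), groupby() yields one group at a time, which is exactly the iterator the question asks for:

```python
from itertools import groupby

def key(line):
    # Assumption: the grouping field is the first tab-delimited column.
    return line.split("\t")[0]

sorted_lines = ["a\t1\n", "a\t2\n", "b\t3\n"]  # stands in for the sorted file
for field, group in groupby(sorted_lines, key=key):
    # `group` is itself a lazy iterator over one field's lines
    print(field, list(group))
```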

SQL’s GROUP BY might help for average-sized files (e.g., using sqlite as @wisty suggested).

For really large files you could use MapReduce.

