
Which data structure should be used for reading and storing about 5 million entries from a text file

I have to deal with two text files, each about 1 GB, and compare the data in them. Which data structure should I use for storing the data? Comparing such a huge number of records using dictionaries/hash tables throws an out-of-memory exception. Or should I read and store the data in a database?


Fundamentally, a database would be best for this sort of task: databases are designed to deal with this much data, and more work has gone into optimizing them for that scenario than you are likely to be able to do yourself.

As an alternative, you could use an in-process SQL database such as SQLite, or a NoSQL option such as RavenDB or MongoDB.
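
A minimal sketch of the SQLite route, assuming the System.Data.SQLite provider and one record per line with the whole line as the comparison key (table and file names are placeholders): load both files into an on-disk database, index one table, then let SQL report which keys of A are missing from B.

```csharp
using System;
using System.Data;
using System.Data.SQLite;
using System.IO;

class FileCompare
{
    static void Load(SQLiteConnection conn, string table, string path)
    {
        using (var tx = conn.BeginTransaction())   // one big transaction keeps the bulk insert fast
        using (var cmd = new SQLiteCommand("INSERT INTO " + table + "(key) VALUES (@k)", conn, tx))
        {
            var p = cmd.Parameters.Add("@k", DbType.String);
            foreach (var line in File.ReadLines(path))   // streams the file; never holds 1 GB in memory
            {
                p.Value = line;
                cmd.ExecuteNonQuery();
            }
            tx.Commit();
        }
    }

    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=compare.db"))
        {
            conn.Open();
            new SQLiteCommand("CREATE TABLE a(key TEXT); CREATE TABLE b(key TEXT);", conn)
                .ExecuteNonQuery();

            Load(conn, "a", "file1.txt");
            Load(conn, "b", "file2.txt");

            // Index after loading, then ask SQL for the difference.
            new SQLiteCommand("CREATE INDEX ix_b ON b(key);", conn).ExecuteNonQuery();
            using (var cmd = new SQLiteCommand(
                "SELECT key FROM a WHERE key NOT IN (SELECT key FROM b)", conn))
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
        }
    }
}
```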


.NET Framework 4 provides the memory-mapped files feature (the good old Win32 API has offered it for many years); you can map different parts of the file into separate segments and process them simultaneously.

To work with a memory-mapped file, you must create a view of the entire memory-mapped file or a part of it. You can also create multiple views to the same part of the memory-mapped file, thereby creating concurrent memory. For two views to remain concurrent, they have to be created from the same memory-mapped file.

Multiple views may also be necessary if the file is greater than the size of the application’s logical memory space available for memory mapping (2 GB on a 32-bit computer).
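
A minimal sketch of that approach, assuming .NET 4's System.IO.MemoryMappedFiles and a line-oriented file; the file name and the four-segment split are placeholders, and a real version would have to re-align each segment to the next newline so no record is split between workers.

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Threading.Tasks;

class MappedScan
{
    static void Main()
    {
        long length = new FileInfo("file1.txt").Length;
        const int segments = 4;
        long segmentSize = length / segments;

        using (var mmf = MemoryMappedFile.CreateFromFile("file1.txt", FileMode.Open))
        {
            Parallel.For(0, segments, i =>
            {
                long offset = i * segmentSize;
                long size = (i == segments - 1) ? length - offset : segmentSize;

                // Each worker gets its own view over a different part of the same mapping.
                using (var view = mmf.CreateViewStream(offset, size, MemoryMappedFileAccess.Read))
                using (var reader = new StreamReader(view))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        // process the record, e.g. extract the key for a later comparison
                    }
                }
            });
        }
    }
}
```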


This is a prime example of a job for a database. Depending on your structure, a script will be needed to define its layout and load the values into the database.


If you can sort on some attribute of the records that is also used for your comparison, you could use merge sort to sort the files and then scan them in parallel, with no need to store the entire data set in main memory.

Checking whether a record in the first file is also present in the second file has complexity O(n^2) if you use two nested loops. But if the files are sorted, a single pass over both is enough. Merge sort itself has complexity O(n log n), so the overall complexity is O(n log n), which is better than O(n^2). The comparison pass over the sorted files is sketched below.
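
A minimal sketch of that comparison pass, assuming both files have already been sorted on the key (by merge sort, an external sort, or a command-line sort tool), one key per line, and ordinal string ordering; file names are placeholders.

```csharp
using System;
using System.IO;

class SortedCompare
{
    static void Main()
    {
        using (var a = new StreamReader("file1.sorted.txt"))
        using (var b = new StreamReader("file2.sorted.txt"))
        {
            string ka = a.ReadLine();
            string kb = b.ReadLine();

            while (ka != null && kb != null)
            {
                int cmp = string.CompareOrdinal(ka, kb);
                if (cmp == 0)            // key present in both files
                {
                    ka = a.ReadLine();
                    kb = b.ReadLine();
                }
                else if (cmp < 0)        // key only in file A
                {
                    Console.WriteLine("Missing from B: " + ka);
                    ka = a.ReadLine();
                }
                else                     // key only in file B, skip it
                {
                    kb = b.ReadLine();
                }
            }

            // Anything left in A after B is exhausted is also missing from B.
            while (ka != null)
            {
                Console.WriteLine("Missing from B: " + ka);
                ka = a.ReadLine();
            }
        }
    }
}
```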

I think you can achieve the same result (in terms of speed) using a database, if the records are indexed.
