
Using multiple cores to process a large, sequential file in C++

I have a large file (bigger than RAM, so I can't read it all at once) and I need to process it row by row (in C++). I want to utilize multiple cores, preferably with Intel TBB or Microsoft PPL. I would rather avoid preprocessing the file (e.g. splitting it into 4 parts).

I was thinking about something like using 4 iterators, initialized to positions (0, n/4, 2*n/4, 3*n/4) in the file, etc.

Is this a good solution, and is there a simple way to achieve it?

Or maybe you know of some libraries that support efficient, concurrent reading of streams?

Update:

I did some tests. I/O is not the bottleneck, the CPU is. And I have plenty of RAM for buffers.

I need to parse each record (variable size, approx. 2000 bytes; records are separated by a unique '\0' character), validate it, do some calculations, and write the result to another file (or files).


Since you are able to split it into N parts, it sounds like the processing of each row is largely independent. In that case, I think the simplest solution is to set up one thread that reads the file record by record and places each record into a tbb::concurrent_queue. Then spawn as many threads as you need to pull records off that queue and process them.

This solution is independent of the file size, and if you find you need more (or fewer) worker threads, it's trivial to change the number. But this won't work if there are dependencies between the rows... unless you set up a second pool of "post-processing" threads to handle those, but then things may start to get too complex.
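
A minimal sketch of that layout, assuming the '\0'-separated records from the question. The file name and process_record are placeholders; it uses tbb::concurrent_bounded_queue so the reader blocks instead of running arbitrarily far ahead of the workers:

    #include <fstream>
    #include <string>
    #include <thread>
    #include <vector>
    #include <tbb/concurrent_queue.h>

    // Placeholder for the real per-record work (parse, validate, compute).
    void process_record(const std::string& record) { /* ... */ }

    int main() {
        // Bounded so the reader cannot outrun the workers.
        tbb::concurrent_bounded_queue<std::string> queue;
        queue.set_capacity(1024);

        const unsigned num_workers = std::thread::hardware_concurrency();
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < num_workers; ++i) {
            workers.emplace_back([&queue] {
                std::string record;
                for (;;) {
                    queue.pop(record);          // blocks until a record is available
                    if (record.empty()) break;  // empty record used as a poison pill
                    process_record(record);
                }
            });
        }

        // Single reader thread (here, the main thread): one '\0'-delimited
        // record at a time.
        std::ifstream in("input.dat", std::ios::binary);
        std::string record;
        while (std::getline(in, record, '\0'))
            if (!record.empty()) queue.push(record);

        // One poison pill per worker, then join.
        for (unsigned i = 0; i < num_workers; ++i) queue.push(std::string());
        for (auto& t : workers) t.join();
    }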


My recommendation is to use TBB's pipeline pattern. The first, serial stage of the pipeline reads a desired portion of data from the file; subsequent stages process the data chunks in parallel, and the last stage writes them into another file, possibly in the same order as the data were read.

An example of this approach ships with TBB distributions; see examples/pipeline/square. It uses the "old" interface: the class tbb::pipeline and filters (classes inherited from tbb::filter) that pass data via void* pointers. The newer, type-safe and lambda-friendly "declarative" interface, tbb::parallel_pipeline(), may be more convenient to use.
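
A hedged sketch with the newer interface, matched to the record format from the question. This uses the oneTBB spellings (tbb::filter_mode::serial_in_order etc.; older TBB releases spell the modes tbb::filter::serial_in_order), and the file names and process_record are placeholders:

    #include <fstream>
    #include <string>
    #include <tbb/parallel_pipeline.h>

    // Placeholder for the real per-record work (parse, validate, compute).
    std::string process_record(const std::string& record) { return record; }

    int main() {
        std::ifstream in("input.dat", std::ios::binary);
        std::ofstream out("output.dat", std::ios::binary);

        const std::size_t max_live_tokens = 16;  // limits records in flight

        tbb::parallel_pipeline(max_live_tokens,
            // Serial input stage: reads one '\0'-delimited record per call.
            tbb::make_filter<void, std::string>(tbb::filter_mode::serial_in_order,
                [&in](tbb::flow_control& fc) -> std::string {
                    std::string record;
                    if (!std::getline(in, record, '\0')) {
                        fc.stop();   // end of file: shut the pipeline down
                        return {};
                    }
                    return record;
                }) &
            // Parallel middle stage: the CPU-heavy work runs concurrently.
            tbb::make_filter<std::string, std::string>(tbb::filter_mode::parallel,
                [](const std::string& record) { return process_record(record); }) &
            // Serial in-order output stage: results come out in input order.
            tbb::make_filter<std::string, void>(tbb::filter_mode::serial_in_order,
                [&out](const std::string& result) {
                    out.write(result.data(), result.size());
                    out.put('\0');
                }));
    }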


ianmac already hinted at the seek issue. Your iterator idea is reasonable with a slight twist: initialize them to 0, 1, 2 and 3, and increment each by 4. So the first thread works on items 0, 4, 8, etc. The OS will make sure the file is being fed to your app as quickly as possible. It may be possible to tell your OS that you'll be doing a sequential scan through the file (e.g. on Windows, it's a flag to CreateFile); a sketch of that hint follows.
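
For illustration (a sketch, with "input.dat" as a placeholder): on Windows the flag is FILE_FLAG_SEQUENTIAL_SCAN, and POSIX systems have a comparable hint via posix_fadvise:

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <fcntl.h>
    #include <unistd.h>
    #endif

    int main() {
    #ifdef _WIN32
        // FILE_FLAG_SEQUENTIAL_SCAN tells the Windows cache manager to
        // read ahead aggressively for sequential access.
        HANDLE h = CreateFileA("input.dat", GENERIC_READ, FILE_SHARE_READ,
                               nullptr, OPEN_EXISTING,
                               FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
        if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
    #else
        // POSIX equivalent: advise the kernel of sequential access.
        int fd = open("input.dat", O_RDONLY);
        if (fd >= 0) {
            posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
            close(fd);
        }
    #endif
    }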


In terms of reading from the file, I wouldn't recommend this. Hard drives, as far as I know, can't read from more than one place at a time.

However, processing the data is a different thing entirely, and you can easily do that in multiple threads. (Keeping the data in the correct order also shouldn't be difficult at all.)


You don't say very much about what type of processing you intend to do. It is unclear whether you expect the process to be compute- or I/O-bound, whether there are data dependencies between the processing of different rows, etc.

In any case, parallel reading from four vastly different positions in one large file is likely to be inefficient: the disk head will have to keep seeking back and forth between different areas of the drive, which hurts throughput.

What you might consider instead is reading the file sequentially from start to finish, and fanning out individual rows (or blocks of rows) to worker threads for processing, as in the sketch below.
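
A sketch of the sequential-reader part, assuming the '\0' record separator from the question; "input.dat" and dispatch (the fan-out point, e.g. a queue push) are placeholders:

    #include <fstream>
    #include <string>
    #include <vector>

    // Hypothetical fan-out point: hand one complete record to a worker.
    void dispatch(std::string record) { /* e.g. push onto a work queue */ }

    int main() {
        std::ifstream in("input.dat", std::ios::binary);
        std::vector<char> buf(1 << 20);   // 1 MiB read chunks
        std::string carry;                // partial record spanning chunk boundaries

        while (in.read(buf.data(), buf.size()) || in.gcount() > 0) {
            std::size_t n = static_cast<std::size_t>(in.gcount());
            std::size_t start = 0;
            for (std::size_t i = 0; i < n; ++i) {
                if (buf[i] == '\0') {                 // record separator
                    carry.append(&buf[start], i - start);
                    dispatch(std::move(carry));
                    carry.clear();
                    start = i + 1;
                }
            }
            carry.append(&buf[start], n - start);     // keep the trailing fragment
        }
        if (!carry.empty()) dispatch(std::move(carry));  // final, unterminated record
    }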
