
C++ text file reading performance

I'm trying to migrate a C# program to C++. The C# program reads a 1-5 GB text file line by line and does some analysis on each line. The C# code is like below.

using (var f = File.OpenRead(fname))
using (var reader = new StreamReader(f))
    while (!reader.EndOfStream) {
        var line = reader.ReadLine();
        // do some analysis
    }

For a given 1.6 GB file with 7 million lines, this code takes about 18 seconds.

The first C++ code I wrote for the migration is like below.

ifstream f(fname);
string line;    
while (getline(f, line)) {
    // do some analysis
}

The C++ code above takes about 420 seconds. The second C++ code I wrote is like below.

ifstream f(fname);
char line[2000];
while (f.getline(line, 2000)) {
    // do some analysis
}

The C++ code above takes about 85 seconds.

The last code I tried is C, like below.

FILE *file = fopen ( fname, "r" );
char line[2000];
while (fgets(line, 2000, file) != NULL ) {
    // do some analysis
}
fclose ( file );

The C code above takes about 33 seconds.

Both of the last two versions, which read the lines into a char[] instead of a string, need about 30 more seconds to convert the char[] to a string.

Is there a way to improve the performance of C/C++ code that reads a text file line by line, to match the C# performance? (Added: I'm using Windows 7 64-bit with VC++ 10.0, x64.)


One of the best ways to increase file reading performance is to use memory mapped files (mmap() on Unix, CreateFileMapping() etc on Windows). Then your file appears in memory as one flat chunk of bytes, and you can read it much faster than doing buffered I/O.

For a file larger than a gigabyte or so, you will want to be using a 64-bit OS (with a 64-bit process). I've done this to process a 30 GB file in Python with excellent results.
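To make the idea concrete, here is a minimal sketch of the memory-mapped approach using the POSIX `mmap()` mentioned above; on Windows the equivalent calls are `CreateFileMapping()`/`MapViewOfFile()`. The line-counting loop stands in for "do some analysis" and is only an illustration:

```cpp
// Sketch: scan a memory-mapped file line by line (POSIX mmap()).
// On Windows, use CreateFileMapping()/MapViewOfFile() instead.
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

long count_lines_mmap(const char* fname) {
    int fd = open(fname, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return -1; }
    if (st.st_size == 0) { close(fd); return 0; }
    char* data = static_cast<char*>(
        mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    close(fd);  // the mapping stays valid after closing the descriptor
    if (data == MAP_FAILED) return -1;
    long lines = 0;
    const char* p = data;
    const char* end = data + st.st_size;
    while (p < end) {
        const char* nl = static_cast<const char*>(memchr(p, '\n', end - p));
        if (!nl) { ++lines; break; }  // last line has no trailing newline
        // ... do some analysis on the line [p, nl) ...
        ++lines;
        p = nl + 1;
    }
    munmap(data, st.st_size);
    return lines;
}
```

Each "line" here is just a pointer range into the mapping, so no per-line copy or allocation happens unless the analysis needs one.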


I suggest two things:

Use f.rdbuf()->pubsetbuf(...) to set a bigger read buffer. I've noticed some really significant increases in fstream performance when using larger buffer sizes.

Instead of getline(...) use read(...) to read larger blocks of data and parse them manually.
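Putting the two suggestions together, a rough sketch might look like the following. The 1 MiB buffer sizes are arbitrary choices for illustration, and note that `pubsetbuf()` must be called before `open()` for it to take effect on common implementations:

```cpp
// Sketch: bigger stream buffer via pubsetbuf() + block reads with read(),
// splitting lines manually. Buffer sizes (1 MiB) are illustrative only.
#include <fstream>
#include <string>
#include <vector>
#include <cstring>

long count_lines_blocks(const char* fname) {
    std::ifstream f;
    std::vector<char> iobuf(1 << 20);                  // 1 MiB stream buffer
    f.rdbuf()->pubsetbuf(iobuf.data(), iobuf.size());  // before open()
    f.open(fname, std::ios::binary);
    if (!f) return -1;

    std::vector<char> block(1 << 20);
    std::string carry;  // holds a partial line spanning two blocks
    long lines = 0;
    while (f.read(block.data(), block.size()) || f.gcount() > 0) {
        const char* p = block.data();
        const char* end = p + f.gcount();
        while (p < end) {
            const char* nl =
                static_cast<const char*>(memchr(p, '\n', end - p));
            if (!nl) { carry.append(p, end); break; }
            carry.append(p, nl);  // carry is now one complete line
            // ... do some analysis on carry ...
            ++lines;
            carry.clear();
            p = nl + 1;
        }
    }
    if (!carry.empty()) ++lines;  // final line without trailing newline
    return lines;
}
```

The `carry` string only grows when a line straddles a block boundary, so most lines are processed in place within the block.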


Compile with optimizations. C++ has quite a lot of abstraction overhead that the optimizer will remove; for example, many simple string methods will be inlined. That's probably why your char[2000] version is faster.

