Python: huge file reading using linecache vs. normal file access with open()
I am in a situation where multiple threads read the same huge file, each with its own file pointer. The file has at least 1 million lines, and each line's length varies from 500 to 1500 characters. There will be no "write" operations on the file. Each thread will start reading the same file from a different line. Which is the most efficient way: Python's linecache, a normal readline(), or some other approach?
Have a look at the mmap module: http://docs.python.org/library/mmap.html
It will let you treat the file as a byte array while the OS handles the actual reading and buffering, so each thread can seek to its own offset without the file being loaded into memory more than once.
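For example, here is a minimal sketch of several threads sharing a single read-only mapping. The filename huge_file.txt, the align_to_line helper, the number of threads, and the per-thread line count are assumptions for illustration, not part of the original question:

    import mmap
    import threading

    def align_to_line(mm, offset):
        # Hypothetical helper: advance past the next newline so a thread
        # starts on a line boundary (offset 0 is already aligned).
        if offset == 0:
            return 0
        nl = mm.find(b"\n", offset)
        return nl + 1 if nl != -1 else len(mm)

    def read_lines(mm, start, num_lines):
        # Concurrent reads are safe here because the mapping is read-only.
        pos = start
        for _ in range(num_lines):
            end = mm.find(b"\n", pos)
            if end == -1:
                break
            line = mm[pos:end]  # slicing copies only this line's bytes
            # ... process `line` here ...
            pos = end + 1

    with open("huge_file.txt", "rb") as f:  # assumed filename
        # Map the whole file read-only; the OS pages data in on demand
        # instead of copying everything into Python's heap.
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        # Start four threads at roughly even byte offsets, snapped to
        # line boundaries.
        starts = [align_to_line(mm, len(mm) * i // 4) for i in range(4)]
        threads = [threading.Thread(target=read_lines, args=(mm, s, 1000))
                   for s in starts]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        mm.close()

Note that the GIL still prevents pure-Python line scanning from running in parallel on multiple cores; the win here is that all threads share one OS-level cache of the file instead of each maintaining its own buffered copy.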