How to read a file N lines at a time? [duplicate]
I need to read a big file by reading at most N lines at a time, until EOF. What is the most effective way of doing it in Python? Something like:
with open(filename, 'r') as infile:
    while not EOF:
        lines = [get next N lines]
        process(lines)
One solution would be a list comprehension and the slice operator:
with open(filename, 'r') as infile:
    lines = [line for line in infile][:N]
After this, lines
is a list of the first N lines. However, this loads the complete file into memory. If you don't want that (i.e. if the file could be really large), there is another solution using a generator expression and islice
from the itertools package:
from itertools import islice
with open(filename, 'r') as infile:
    lines_gen = islice(infile, N)
lines_gen
is a generator object that gives you each of those lines and can be used in a loop like this:
for line in lines_gen:
    print(line)
Both solutions give you up to N lines (or fewer, if the file doesn't have that much).
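To see the "or fewer" case concretely: islice simply stops at EOF instead of raising. Here io.StringIO stands in for a short file (an illustrative stand-in, not part of the answers above):

```python
import io
from itertools import islice

infile = io.StringIO("a\nb\nc\n")   # a "file" with only 3 lines
lines = list(islice(infile, 10))    # ask for up to 10 lines
print(lines)   # ['a\n', 'b\n', 'c\n'] — fewer than 10, no error
```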
A file object is an iterator over lines in Python. To iterate over the file N lines at a time, you can use the grouper()
function from the Itertools Recipes section of the documentation. (Also see What is the most "pythonic" way to iterate over a list in chunks?):
try:
    from itertools import izip_longest
except ImportError:  # Python 3
    from itertools import zip_longest as izip_longest

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return izip_longest(*args, fillvalue=fillvalue)
Example
with open(filename) as f:
    for lines in grouper(f, N, ''):
        assert len(lines) == N
        # process N lines here
This code works with any number of lines in the file and any N
. If the file has 1100 lines
and N = 200
, you will get 5 chunks of 200 lines and one final chunk with the remaining 100 lines (padded to length 200 with the fillvalue '').
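A quick sanity check of that padding behaviour, using the Python 3 name zip_longest directly and a list standing in for the 1100 file lines (filtering out the '' fill values to recover the real line counts):

```python
from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

data = [f"line {i}\n" for i in range(1100)]   # stand-in for 1100 file lines
chunks = [[line for line in group if line]    # drop the '' padding
          for group in grouper(data, 200, '')]
print(len(chunks))       # 6 groups in total
print(len(chunks[-1]))   # 100 real lines in the last one
```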
with open(filename, 'r') as infile:
    lines = []
    for line in infile:
        lines.append(line)
        if len(lines) >= N:
            process(lines)
            lines = []
    if len(lines) > 0:
        process(lines)
maybe:
for x in range(N):
    lines.append(f.readline())
I think you should read in fixed-size chunks instead of specifying a number of lines. It makes your code more robust and generic: even if the lines are big, reading a chunk loads only the assigned amount of data into memory.
Refer to this link
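A minimal sketch of the chunk-based idea (read_in_chunks is an illustrative helper name, not taken from the linked page; io.StringIO stands in for a big file):

```python
import io

def read_in_chunks(f, chunk_size=64 * 1024):
    # Yield fixed-size chunks until EOF; memory use is bounded by
    # chunk_size no matter how long individual lines are.
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        yield chunk

f = io.StringIO("x" * 100)   # stand-in for a large open file
sizes = [len(c) for c in read_in_chunks(f, chunk_size=30)]
print(sizes)   # [30, 30, 30, 10]
```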
I needed to read n lines at a time from extremely large files (~1TB), so I wrote a simple package to do this. If you pip install bigread
, you can do:
from bigread import Reader

stream = Reader(file='large.txt', block_size=10)
for i in stream:
    print(i)
block_size
is the number of lines to read at a time.
This package is no longer maintained. I now find it best to use:
with open('big.txt') as f:
    for line_idx, line in enumerate(f):
        print(line)
If you need a memory of previous lines, just store them in a list. If you need to know future lines to decide what to do with the current line, store the current line in a list until you get to that future line...
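For the "memory of previous lines" case, a bounded sliding window can be sketched with collections.deque (the window size of 3 is arbitrary, and io.StringIO stands in for open('big.txt')):

```python
import io
from collections import deque

window = deque(maxlen=3)             # keeps only the last 3 lines seen
f = io.StringIO("a\nb\nc\nd\ne\n")   # stand-in for open('big.txt')
for line in f:
    window.append(line.rstrip("\n"))
    # the current line plus up to 2 previous lines are in `window`
print(list(window))   # ['c', 'd', 'e']
```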
How about a for loop?
with open(filename, 'r') as infile:
    while True:
        lines = []
        for i in range(N):
            line = infile.readline()
            if not line:    # readline() returns '' at EOF
                break
            lines.append(line)
        if not lines:
            break
        process(lines)
You may have to do something as simple as:
lines = [infile.readline() for _ in range(N)]
Update after comments:
lines = [line for line in [infile.readline() for _ in range(N)] if len(line)]
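The filter matters because readline() returns an empty string at EOF rather than raising. Illustrated here with io.StringIO standing in for infile:

```python
import io

infile = io.StringIO("only one line\n")   # a file with fewer than N lines
N = 3
raw = [infile.readline() for _ in range(N)]
print(raw)     # ['only one line\n', '', ''] — '' signals EOF
lines = [line for line in raw if len(line)]
print(lines)   # ['only one line\n']
```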
def get_lines_iterator(filename, n=10):
    with open(filename) as fp:
        lines = []
        for i, line in enumerate(fp):
            if i % n == 0 and i != 0:
                yield lines
                lines = []
            lines.append(line)
        if lines:
            yield lines

for lines in get_lines_iterator(filename):
    print(lines)
It is simpler with islice:
from itertools import islice

def get_lines_iterator(filename, n=10):
    with open(filename) as fp:
        while True:
            lines = list(islice(fp, n))
            if lines:
                yield lines
            else:
                break

for lines in get_lines_iterator(filename):
    print(lines)
Another way to do this:
from itertools import islice

def get_lines_iterator(filename, n=10):
    with open(filename) as fp:
        for line in fp:
            yield [line] + list(islice(fp, n - 1))

for lines in get_lines_iterator(filename):
    print(lines)
If you can read the full file in ahead of time:

infile = open(filename, 'r').readlines()
my_block = [line.strip() for line in infile[:N]]
cur_pos = 0
while my_block:
    print(my_block)
    cur_pos += 1
    my_block = [line.strip() for line in infile[cur_pos*N:(cur_pos+1)*N]]
I was looking for an answer to the same question, but did not really like any of the earlier proposals, so I ended up writing this slightly ugly thing that does exactly what I wanted without using extra libraries.
def test(filename, N):
    with open(filename, 'r') as infile:
        lines = []
        for line in infile:
            lines.append(line.strip())
            if len(lines) == N:
                yield lines
                lines = []
        if len(lines) != 0:
            yield lines