
Sorting large text data

I have a large file (100 million lines of tab-separated values - about 1.5GB in size). What is the fastest known way to sort this based on one of the fields?

I have tried Hive. I would like to see if this can be done faster using Python.


Have you considered using the *nix sort program? In raw terms, it'll probably be faster than most Python scripts.

Use -t $'\t' to specify that the file is tab-separated, -k n to specify the field to sort on (where n is the field number), and -o outputfile if you want to write the result to a new file. Example:

sort -t $'\t' -k 4 -o sorted.txt input.txt

This will sort input.txt on its 4th field and write the result to sorted.txt.
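
If the rest of your workflow is in Python, you can drive the same command from a script with subprocess; a minimal sketch, reusing the example's field number and file names:

import subprocess

# Equivalent to: sort -t $'\t' -k 4 -o sorted.txt input.txt
# Passing the tab as a plain Python string avoids the $'\t' shell quoting.
subprocess.run(['sort', '-t', '\t', '-k', '4', '-o', 'sorted.txt', 'input.txt'],
               check=True)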


You want to build an in-memory index for the file:

  1. create an empty list
  2. open the file
  3. read it line by line (using f.readline()), and store in the list a tuple consisting of the value you want to sort on (extracted with line.split('\t')[n].strip(), where n is the field number) and the offset of the line in the file (which you can get by calling f.tell() before calling f.readline())
  4. close the file
  5. sort the list

Then, to print the sorted file, reopen it and, for each element of your list, use f.seek(offset) to move the file pointer to the beginning of the line and f.readline() to read and print that line.

Optimization: you may want to store the length of the line in the list, so that you can use f.read(length) in the printing phase.

Sample code (optimized for readability, not speed):

def build_index(filename, sort_col):
    # Build a list of (key, offset, length) tuples, one per line, sorted by key.
    index = []
    with open(filename, 'rb') as f:   # binary mode, so offsets and lengths are in bytes
        while True:
            offset = f.tell()
            line = f.readline()
            if not line:
                break
            length = len(line)
            col = line.split(b'\t')[sort_col].strip()
            index.append((col, offset, length))
    index.sort()
    return index

def print_sorted(filename, sort_col):
    index = build_index(filename, sort_col)
    with open(filename, 'rb') as f:
        for col, offset, length in index:
            f.seek(offset)                      # jump to the start of the line
            print(f.read(length).decode().rstrip('\n'))

if __name__ == '__main__':
    filename = 'somefile.txt'
    sort_col = 2
    print_sorted(filename, sort_col)


Split the input up into files small enough to sort in memory. Sort each file in memory, then merge the resulting files.

Merge by reading a block from each of the files being merged, taking the same amount from each and leaving enough memory free for the merged result. Write each merged block out as it is produced, appending successive blocks of merged data onto the output file.

This minimises file I/O and seeking back and forth across the disk.
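
A minimal Python sketch of this split/sort/merge approach, using heapq.merge for the k-way merge step (the chunk size and the sort column are illustrative values you would tune to your memory budget):

import heapq
import itertools
import os
import tempfile

def external_sort(input_path, output_path, sort_col, chunk_lines=1000000):
    def key(line):
        return line.split('\t')[sort_col]

    # Phase 1: cut the input into chunks that fit in memory, sort each
    # chunk, and write it to its own temporary file.
    chunk_paths = []
    with open(input_path) as f:
        while True:
            chunk = list(itertools.islice(f, chunk_lines))
            if not chunk:
                break
            chunk.sort(key=key)
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, 'w') as out:
                out.writelines(chunk)
            chunk_paths.append(path)

    # Phase 2: k-way merge of the sorted chunk files into the output.
    chunks = [open(p) for p in chunk_paths]
    try:
        with open(output_path, 'w') as out:
            out.writelines(heapq.merge(*chunks, key=key))
    finally:
        for fh in chunks:
            fh.close()
        for p in chunk_paths:
            os.remove(p)

Calling external_sort('input.txt', 'sorted.txt', sort_col=3) would then sort on the 4th field, like the sort example above.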


I would store the file in a good relational database, index it on the field you are interested in, and then read the ordered items.
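
For example, with Python's built-in sqlite3 module. This is only a sketch: the table and column names are made up, and it assumes every row has the same number of tab-separated fields:

import csv
import sqlite3

def sort_with_sqlite(input_path, output_path, sort_col, n_cols, db_path='sort.db'):
    conn = sqlite3.connect(db_path)
    conn.execute('CREATE TABLE rows (%s)' %
                 ', '.join('c%d TEXT' % i for i in range(n_cols)))

    # Bulk-load the tab-separated file into the table.
    placeholders = ', '.join('?' * n_cols)
    with open(input_path, newline='') as f:
        conn.executemany('INSERT INTO rows VALUES (%s)' % placeholders,
                         csv.reader(f, delimiter='\t'))
    conn.commit()

    # Index the sort field, then read the rows back in sorted order.
    conn.execute('CREATE INDEX idx_sort ON rows (c%d)' % sort_col)
    with open(output_path, 'w', newline='') as out:
        writer = csv.writer(out, delimiter='\t')
        writer.writerows(conn.execute('SELECT * FROM rows ORDER BY c%d' % sort_col))
    conn.close()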
