Removing duplicated lines from a txt file

I am processing large text files (~20 MB) containing one data entry per line. Most entries are duplicated, and I want to remove these duplicates so that only one copy of each is kept.

Also, to make the problem slightly more complicated, some entries are repeated with an extra bit of info appended. In this case I need to keep the entry containing the extra info and delete the older versions.

e.g. I need to go from this:

BOB 123 1DB
JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB
JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB EXTRA BITS

to this:

JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB EXTRA BITS

NB. the final order doesn't matter.

What is an efficient way to do this?

I can use awk, python or any standard linux command line tool.

Thanks.


How about the following (in Python):

# Sort the lines so duplicates and their extended variants end up adjacent,
# then keep only the longest line of each run of prefixes.
prev = None
for line in sorted(open('file')):
    line = line.strip()
    # Print the previous line only if the current one does not extend it.
    if prev is not None and not line.startswith(prev):
        print(prev)
    prev = line
if prev is not None:
    print(prev)

If you find memory usage an issue, you can do the sort as a pre-processing step using Unix sort (which is disk-based) and change the script so that it doesn't read the entire file into memory.
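For example, after a pre-sort such as sort file > file.sorted (hypothetical file names), a streaming variant of the same idea might look like this:

prev = None
with open('file.sorted') as f:       # already sorted on disk by Unix sort
    for line in f:                   # read one line at a time, not the whole file
        line = line.strip()
        if prev is not None and not line.startswith(prev):
            print(prev)
        prev = line
if prev is not None:
    print(prev)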


awk '{x[$1 " " $2 " " $3] = $0} END {for (y in x) print x[y]}'

If you need to specify the number of columns for different files:

awk -v ncols=3 '
  {
    # build the key from the first ncols fields
    key = "";
    for (i=1; i<=ncols; i++) {key = key FS $i}
    # keep the longest line seen for this key
    if (length($0) > length(x[key])) {x[key] = $0}
  }
  END {for (y in x) print y "\t" x[y]}
'


This variation on glenn jackman's answer should work regardless of the position of lines with extra bits:

awk '{idx = $1 " " $2 " " $3; if (length($0) > length(x[idx])) x[idx] = $0} END {for (idx in x) print x[idx]}' inputfile

Or

awk -v ncols=3 '
  {
    key = "";
    for (i=1; i<=ncols; i++) {key = key FS $i}
    if (length($0) > length(x[key])) x[key] = $0
  }
  END {for (y in x) print x[y]}
' inputfile


This or a slight variant should do:

from pprint import pprint

finalData = {}
for line in open('file'):            # or any other iterable of lines
    parts = line.split()
    # the first three fields form the key; anything after them is the extra info
    key, extra = tuple(parts[:3]), parts[3:]
    # store the entry if the key is new, or overwrite it when extra bits show up
    if key not in finalData or extra:
        finalData[key] = extra

pprint(finalData)

outputs:

{('BOB', '123', '1DB'): ['EXTRA', 'BITS'],
 ('DAVE', '789', '1DB'): [],
 ('JIM', '456', '3DB'): ['AX']}
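
If you want the deduplicated lines themselves rather than the dict, they can be rebuilt from finalData afterwards; a minimal sketch (the 'newfile' name is just an example):

with open('newfile', 'w') as out:
    for key, extra in finalData.items():
        # glue the key fields and any extra bits back into one line
        out.write(' '.join(key + tuple(extra)) + '\n')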


You'll have to define a function to split your line into important bits and extra bits, then you can do:

def split_extra(s):
    """Return a pair: the important bits and the extra bits."""
    ...  # depends on your line format; one possibility is sketched below

data = {}
for line in open('file'):
    impt, extra = split_extra(line)
    # setdefault stores extra for a new key, or returns what is already stored
    existing = data.setdefault(impt, extra)
    if len(extra) > len(existing):
        data[impt] = extra

with open('newfile', 'w') as out:
    for impt, extra in data.items():
        out.write(impt + extra)
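
For the sample data, where the key is the first three space-separated fields, a minimal sketch of split_extra could be the following (assuming each line keeps its trailing newline, so that impt + extra reproduces the full line):

def split_extra(s):
    """Split a line into its first three fields (the key) and the remainder."""
    parts = s.split(' ', 3)                    # at most three splits
    impt = ' '.join(parts[:3]).rstrip('\n')    # key: first three fields
    extra = ' ' + parts[3] if len(parts) > 3 else '\n'
    return impt, extra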


Since you need to keep the extra bits, the fastest way is to first build a set of unique entries (sort -u will do) and then compare the entries against each other, e.g.

if x.startswith(y) and not y.startswith(x)

then keep x and discard y.
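
A rough sketch of that idea (assuming the already-unique entries are in a list named entries, e.g. read from the output of sort -u):

# O(n^2) pairwise comparison over the already-unique entries:
# an entry is kept only if no other entry extends it.
kept = [x for x in entries
        if not any(y != x and y.startswith(x) for y in entries)]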


If you have Perl and want only the last entry for each key to be preserved:

perl -ne '@f = split / /; $kw = shift @f; $kws{$kw} = "@f"; END { print "$_ $kws{$_}" for sort keys %kws }' file.txt > file.new.txt


The function find_unique_lines will work for a file object or a list of strings.

def split_line(s):
    """Return (key, extra-bits list, original line) for one input line."""
    parts = s.strip().split(' ')
    return " ".join(parts[:3]), parts[3:], s

def find_unique_lines(f):
    result = {}
    for key, data, line in map(split_line, f):
        # keep the line if it carries extra bits, or if its key is new
        if data or key not in result:
            result[key] = line
    return result.values()

test = """BOB 123 1DB
JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB
JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB EXTRA BITS""".split('\n')

for line in find_unique_lines(test):
    print(line)

which prints:

BOB 123 1DB EXTRA BITS
JIM 456 3DB AX
DAVE 789 1DB
