How to trim a file - remove the rows whose values are identical in all columns except the first two
I would like some help trimming a file: I need to remove the rows whose values are identical in every column except the first two.
The file I have (tab-delimited, with millions of rows and tens of columns):
Jack Mike Jones Dan Was
1 2 7 3 4
2 3 9 4 8
T T C T T
T M T T T
W A S I S
The file I want (with the rows removed whose values are identical in every column except the first two):
Jack Mike Jones Dan Was
1 2 7 3 4
2 3 9 4 8
T T C T T
W A S I S
Could you give me any hints on this problem? Thanks a lot.
I have received several excellent awk, shell, and perl scripts in a related question. Thanks a lot to the helpers.
awk '{
    val = $3                    # value of the first column to compare
    for (i = 4; i <= NF; i++)   # scan the remaining columns
        if (val != $i) {        # a single difference is enough to keep the row
            print
            break
        }
}'
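For reference, here is how that awk filter behaves on the sample data from the question (the file name sample.tsv is just a placeholder):

```shell
# Build the sample tab-delimited file from the question (sample.tsv is a placeholder name)
printf 'Jack\tMike\tJones\tDan\tWas\n1\t2\t7\t3\t4\n2\t3\t9\t4\t8\nT\tT\tC\tT\tT\nT\tM\tT\tT\tT\nW\tA\tS\tI\tS\n' > sample.tsv

# A row is printed as soon as one column (from the 4th on) differs from $3;
# rows where $3..$NF are all equal never reach the print and are dropped.
awk '{
    val = $3
    for (i = 4; i <= NF; i++)
        if (val != $i) {
            print
            break
        }
}' sample.tsv
```

On the sample this drops only the "T M T T T" row, leaving the other five lines.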
The simplest thing I could come up with (half joking:)
#!/usr/bin/perl
while (<>)
{
    my (undef, undef, @flds) = split;
    print if 1 < scalar keys %{{ map { $_ => 1 } @flds }};
}
Explanation
It leverages a temporary hash table to find unique columns per line. Here goes:
while (<>)               # for each line
{
    # split the line into columns, discarding the first two
    my (undef, undef, @flds) = split;
    my %columns = map { $_ => 1 } @flds;   # insert each value as a key into a hash
    my @uniq_cols = keys %columns;         # get just the keys
    my $uniq_count = scalar @uniq_cols;    # count the keys
    print if 1 < $uniq_count;              # if the count is 1, all columns are the same
}
To be even more explicit, the 'map' call is roughly equivalent to the usual idiom:
# my %columns = map { $_ => 1 } @flds;
my %columns;
foreach my $fld (@flds)
{
    $columns{$fld}++;   # the map version does '$columns{$fld} = 1;' instead
}
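Putting it together, the expanded version can be run as a one-liner against the sample data from the question (sample.tsv is a placeholder name) and produces the desired five-line output:

```shell
# Build the sample tab-delimited file from the question (sample.tsv is a placeholder name)
printf 'Jack\tMike\tJones\tDan\tWas\n1\t2\t7\t3\t4\n2\t3\t9\t4\t8\nT\tT\tC\tT\tT\nT\tM\tT\tT\tT\nW\tA\tS\tI\tS\n' > sample.tsv

# Same hash-of-unique-values idea: a row survives only if its columns
# (past the first two) contain more than one distinct value.
perl -e '
    while (<>) {
        my (undef, undef, @flds) = split;      # drop the first two columns
        my %columns = map { $_ => 1 } @flds;   # hash keys = distinct values
        print if 1 < scalar keys %columns;     # keep rows with >1 distinct value
    }
' sample.tsv
```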
HTH
Try this: perl -ne 'next if /^\w+\W+\w+\W+(\w+)(\W+\1)+\W*$/; print;'
That is, match:
^ beginning of line
\w+ first word
\W+ non-word (like spaces, tabs, etc)
\w+\W+ second word and spaces
(\w+) third word (and remember)
(\W+\1)+ spaces followed by a copy of the third word as many times as necessary
\W* optional trailing spaces
$ end of line
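Applied to the sample data from the question (sample.tsv is a placeholder name), the one-liner skips exactly the all-equal row:

```shell
# Build the sample tab-delimited file from the question (sample.tsv is a placeholder name)
printf 'Jack\tMike\tJones\tDan\tWas\n1\t2\t7\t3\t4\n2\t3\t9\t4\t8\nT\tT\tC\tT\tT\nT\tM\tT\tT\tT\nW\tA\tS\tI\tS\n' > sample.tsv

# Skip lines where the third word is repeated across all remaining columns
perl -ne 'next if /^\w+\W+\w+\W+(\w+)(\W+\1)+\W*$/; print;' sample.tsv
```

Note that \w+ assumes every cell is a non-empty run of word characters; empty fields or values containing punctuation would need one of the split-based approaches above instead.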