Discovering duplicate lines
I've got a file of CSS elements, and I'm trying to check for any duplicate elements, then output the lines that contain the dupes.
###Test
###ABC
###test
##.hello
##.ABC
##.test
bob.com###Test
~qwerty.com###Test
~more.com##.ABC
###Test and ##.ABC already exist in the list, and I'd like a way to output the lines where the duplicates occur in the file, basically duplication checking (case sensitive). So using the above list, I would generate something like this:
Line 1: ###Test
Line 7: bob.com###Test
Line 8: ~qwerty.com###Test

Line 5: ##.ABC
Line 9: ~more.com##.ABC
Something in bash, or maybe perl?
Thanks :)
I was challenged by your problem, so I wrote you a script. Hope you like it. :)
#!/usr/bin/perl
use strict;
use warnings;

sub loadf($);

{
    my @file  = loadf("style.css");
    my @inner = @file;
    my $header = 0;   # has the header line been printed for the current entry?
    my $l1 = 0;       # outer line counter
    my $dc = 0;       # duplicates found for the current entry

    foreach my $line (@file) {
        $l1++;
        $line =~ s/^\s+//;
        $line =~ s/\s+$//;
        my $l2 = 0;   # inner line counter
        foreach my $iline (@inner) {
            $l2++;
            $iline =~ s/^\s+//;
            $iline =~ s/\s+$//;
            next if ($iline eq $line);         # don't match a line against itself
            if ($iline =~ /\b\Q$line\E\b/) {   # \Q...\E keeps # and . literal
                $dc++;
                if (!$header) {
                    print "Line " . $l1 . ": " . $line . "\n";
                    $header = 1;
                }
                print "Line " . $l2 . ": " . $iline . "\n";
            }
        }
        print "\n" unless ($dc == 0);
        $dc = 0;
        $header = 0;
    }
}

sub loadf($) {
    open(my $fh, '<', $_[0]) or die("Couldn't open " . $_[0] . "\n");
    my @file = <$fh>;
    close($fh);
    return @file;
}

__END__
This does exactly what you need. And sorry if it's a bit messy.
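For what it's worth, the filename is hard-coded, so the script expects style.css in the directory you run it from; if you saved the script as, say, dupes.pl (the name is only an example), running it is just:

perl dupes.pl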
This seems to work:
sort -t '#' -k 2 inputfile
It groups them by the part after the # characters:
##.ABC
~more.com##.ABC
###ABC
##.hello
##.test
###test
bob.com###Test
~qwerty.com###Test
###Test
If you only want to see the unique values:
sort -t '#' -k 2 -u inputfile
Result:
##.ABC
###ABC
##.hello
##.test
###test
###Test
This pretty closely duplicates the example output in the question (it relies on some possibly GNU-specific features):
cat -n inputfile |
sed 's/^ *\([0-9]*\)/Line \1:/' |
sort -t '#' -k 2 |
awk -F '#+' '{ if (! seen[$2]) {            # new key: flush the previous group
                   if (count > 1) printf "%s\n", lines
                   count = 0
                   lines = ""
               }
               seen[$2] = 1
               lines = lines "\n" $0; ++count }
         END { if (count > 1) print lines }'
Result:
Line 5: ##.ABC
Line 9: ~more.com##.ABC

Line 1: ###Test
Line 7: bob.com###Test
Line 8: ~qwerty.com###Test
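If you only want to see which keys occur more than once, the same field split works as a Perl one-liner (a sketch; inputfile as above):

perl -F'#+' -lane '$c{$F[1]}++; END { print for grep { $c{$_} > 1 } sort keys %c }' inputfile

For the sample data that prints .ABC and Test.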
I'd recommend using the uniq function from List::MoreUtils if you can install it:
how-do-i-print-unique-elements-in-perl-array
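A minimal sketch of that approach, assuming List::MoreUtils is installed and the input file is named input.css (both assumptions); note that uniq only catches exact duplicates, not the suffix matches from the question:

#!/usr/bin/perl
use strict;
use warnings;
use List::MoreUtils qw(uniq);

open my $fh, '<', 'input.css' or die "Couldn't open input.css: $!";
chomp(my @lines = <$fh>);
close $fh;

# uniq keeps the first occurrence of each value, so the difference
# in counts is the number of exactly-duplicated lines.
my @distinct = uniq @lines;
printf "%d duplicate line(s)\n", scalar(@lines) - scalar(@distinct);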
Here is one way to do it, which is fairly easy to extend to multiple files if need be.
With this file find_dups.pl:
use warnings;
use strict;

my @lines;

while (<>) {                      # read input lines
    s/^\s+//; s/\s+$//;           # trim whitespace
    push @lines, { data => $_, line => $. } if $_;   # store useful data
}

@lines = sort { length $$a{data} <=> length $$b{data} } @lines;   # shortest first

while (@lines) {
    my ($line, @found) = shift @lines;
    my $re = qr/\Q$$line{data}\E$/;   # search token, anchored at the end
    @lines = grep {                   # extract matches from @lines
        not $$_{data} =~ $re && push @found, $_
    } @lines;
    if (@found) {                     # write the report
        print "line $$_{line}: $$_{data}\n" for $line, @found;
        print "\n";
    }
}
then perl find_dups.pl input.css prints:
line 5: ##.ABC
line 9: ~more.com##.ABC

line 1: ###Test
line 7: bob.com###Test
line 8: ~qwerty.com###Test
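On the multiple-file point mentioned above: while (<>) already reads every file named on the command line, but $. keeps counting across files, so per-file line numbers need the close ARGV idiom. A sketch of the modified read loop (the file key is an addition, for labelling the report):

while (<>) {
    s/^\s+//; s/\s+$//;           # trim whitespace
    push @lines, { file => $ARGV, data => $_, line => $. } if $_;
    close ARGV if eof;            # end of current file: resets $. for the next one
}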