Find duplicates with md5sum
I have a double loop that opens a file and uses awk to take the first and second section of each line. The first section is the md5sum of a file and the second is the filename. However, when I run the script to check for duplicate files, file1 finds file1, so it thinks they are duplicates even though they are the same file. Here is my code:
echo start
for i in $(<dump.txt) ; do
md=$(echo $i|awk -F'|' '{print $1}')
file=$(echo $i|awk -F'|' '{print $2}')
for j in $(<dump.txt) ; do
m=$(echo $j|awk -F'|' '{print $1}')
f=$(echo $j|awk -F'|' '{print $2}')
if [ "$md" == "$m" ]; then
echo $file and $f are duplicates
fi
done
done
echo end
The dump file looks like this:
404460c24654e3d64024851dd0562ff1 *./extest.sh
7a900fdfa67739adcb1b764e240be05f *./test.txt
7a900fdfa67739adcb1b764e240be05f *./test2.txt
88f5a6b83182ce5c34c4cf3b17f21af2 *./dump.txt
c8709e009da4cce3ee2675f2a1ae9d4f *./test3.txt
d41d8cd98f00b204e9800998ecf8427e *./checksums.txt
The Entire code is:
#!/bin/sh
func ()
{
if [ "$1" == "" ]; then
echo "Default";
for i in `find` ;
do
#if [ -d $i ]; then
#echo $i "is a directory";
#fi
if [ -f $i ]; then
if [ "$i" != "./ex.sh" ]; then
#echo $i "is a file";
md5sum $i >> checksums.txt;
sort --output=dump.txt checksums.txt;
fi
fi
done
fi
if [ "$1" == "--long" ]; then
echo "--long";
for i in `find` ;
do
#if [ -d $i ]; then
#echo $i "is a directory";
#fi
if [ -f $i ]; then
echo $i "is a file";
fi
done
fi
if [ "$1" == "--rm" ]; then
echo "--rm";
for i in `find` ;
do
#if [ -d $i ]; then
#echo $i "is a directory";
#fi
if [ -f $i ]; then
echo $i "is a file";
fi
done
fi
}
parse () {
echo start
for i in $(<dump.txt) ; do
md=$(echo $i|awk -F'|' '{print $1}')
file=$(echo $i|awk -F'|' '{print $2}')
for j in $(<dump.txt) ; do
m=$(echo $j|awk -F'|' '{print $1}')
f=$(echo $j|awk -F'|' '{print $2}')
#echo $md
#echo $m
if [ "$file" != "$f" ] && [ "$md" == "$m" ]; then
echo Files $file and $f are duplicates.
fi
done
done
echo end
}
getArgs () {
if [ "$1" == "--long" ]; then
echo "got the first param $1";
else
if [ "$1" == "--rm" ]; then
echo "got the second param $1";
else
if [ "$1" == "" ]; then
echo "got default param";
else
echo "script.sh: unknown option $1";
exit;
fi
fi
fi
}
#start script
cat /dev/null > checksums.txt;
cat /dev/null > dump.txt;
getArgs $1;
func $1;
parse;
#end script
It's pretty simple:
if [ "$file" != "$f" ] && [ "$md" = "$m" ]; then
echo "Files $file and $f are duplicates."
fi
Note that I changed the comparison operator from == to =, which is the portable form. I also surrounded the message with double quotes to make it clear that it is a single string and that I don't want word splitting to happen on the two variables file and f.
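A minimal, self-contained sketch of the corrected check. The sample data, the /tmp path, and the use of "while IFS='|' read" (which splits each line without spawning awk, and is also robust against word splitting on whitespace) are my additions, not from the original script:

```shell
# Hypothetical pipe-delimited dump file for the demo.
cat > /tmp/dump_demo.txt <<'EOF'
7a900fdfa67739adcb1b764e240be05f|./test.txt
7a900fdfa67739adcb1b764e240be05f|./test2.txt
c8709e009da4cce3ee2675f2a1ae9d4f|./test3.txt
EOF

# Nested scan: report a pair only when the hashes match
# but the filenames differ (so a file never matches itself).
while IFS='|' read -r md file; do
  while IFS='|' read -r m f; do
    if [ "$file" != "$f" ] && [ "$md" = "$m" ]; then
      echo "Files $file and $f are duplicates."
    fi
  done < /tmp/dump_demo.txt
done < /tmp/dump_demo.txt
```

Each duplicate pair is reported twice (once in each order), exactly as in the original nested-loop design; the filename guard only suppresses the self-match.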
[Update:]
Another way to find duplicates, which is much faster, is to use awk for string processing:
awk -F'|' '
NF == 2 {
if (fname[$1] != "") {
print("Files " fname[$1] " and " $2 " are duplicates.");
}
fname[$1] = $2;
}
' dump.txt
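For example, run against a small pipe-delimited file (the sample data here is mine, not from the original post), the one-pass awk version reports each duplicate pair exactly once:

```shell
# Hypothetical sample input for the demo.
cat > /tmp/dump_awk.txt <<'EOF'
7a900fdfa67739adcb1b764e240be05f|./test.txt
7a900fdfa67739adcb1b764e240be05f|./test2.txt
c8709e009da4cce3ee2675f2a1ae9d4f|./test3.txt
EOF

# fname[] maps each md5 to the first filename seen with it;
# any later line with the same md5 is reported as a duplicate.
awk -F'|' '
NF == 2 {
    if (fname[$1] != "") {
        print("Files " fname[$1] " and " $2 " are duplicates.");
    }
    fname[$1] = $2;
}
' /tmp/dump_awk.txt
# prints: Files ./test.txt and ./test2.txt are duplicates.
```

Because awk reads the file once and does the matching via a hash lookup, this runs in linear time, versus the quadratic cost of the nested shell loops.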
You don't really need a loop, let alone two, if you solve it with awk. awk is something of a nuclear weapon for text processing.
awk -F'|' '{if($1 in a)print "duplicate found:" $0 " AND "a[$1];else a[$1]=$0 }' yourfile
will give you what you need. Of course, you can customize the message text.
See the test below:
kent$ cat md5chk.txt
abcdefg|/foo/bar/a.txt
bbcdefg|/foo/bar2/ax.txt
cbcdefg|/foo/bar3/ay.txt
abcdefg|/foo/bar4/a.txt
1234567|/seven/7.txt
1234568|/seven/8.txt
1234567|/seven2/7.txt
kent$ awk -F'|' '{if($1 in a)print "duplicate found:" $0 " AND "a[$1];else a[$1]=$0 }' md5chk.txt
duplicate found:abcdefg|/foo/bar4/a.txt AND abcdefg|/foo/bar/a.txt
duplicate found:1234567|/seven2/7.txt AND 1234567|/seven/7.txt
Updated with a line-by-line explanation:
awk # the name of the tool/command
-F'|' # declare delimiter is "|"
'{if($1 in a) # if the first column was already saved
print "duplicate found:" $0 " AND "a[$1]; # print the info
else # else
a[$1]=$0 }' # save in an array named a, indexed by the 1st column (md5); the value is the whole line
yourfile # your input file
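As a side note not covered by the answers above: the question's checksums.txt is raw md5sum output, where the hash is always the first 32 characters of each line, so GNU coreutils can find the duplicates without awk at all. This sketch assumes GNU uniq, whose -w (compare only the first N characters) and -D (print all repeated lines) options are GNU extensions:

```shell
# Hypothetical sample in md5sum's own format (hash, space, *filename),
# matching the question's dump.txt.
cat > /tmp/checksums_demo.txt <<'EOF'
404460c24654e3d64024851dd0562ff1 *./extest.sh
7a900fdfa67739adcb1b764e240be05f *./test.txt
7a900fdfa67739adcb1b764e240be05f *./test2.txt
EOF

# Sort, then print every line whose first 32 characters (the md5) repeat.
sort /tmp/checksums_demo.txt | uniq -w32 -D
```

This prints both lines of each duplicate group, so the filenames to inspect appear together.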