Bash: create a report with grep output?
Greetings.
I need to generate a simple report via Bash (or Korn shell) from this raw data:
Test_Version=V2.5.2
Test_Version=V2.6.3
Test_Version=V2.4.7
Test_Version=V2.5.2
Test_Version=V2.5.2
Test_Version=V2.5.1
Test_Version=V2.5.0
Test_Version=V2.3.9
Test_Version=V2.3.1
Ideally, I'd like to get sorted output like this:
Version Count
...
V2.5.0 1
V2.5.1 1
V2.5.2 3
V2.6.3 1
...
I can sort the output like this (raw data is contained in ASCII files):
find . -name "*.VER" -exec grep "Test_Version" '{}' ';' -print | grep -e "Test_Version" | sort -u
But I can't figure out how to count my records in a tabular layout. Any idea how I could do that?
Thanks!!
What about something like:
$ sed 's/.*=//' input.txt | sort | uniq -c
1 V2.3.1
1 V2.3.9
1 V2.4.7
1 V2.5.0
1 V2.5.1
3 V2.5.2
1 V2.6.3
Can tweak it into the exact format from there...
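For example, the output above can be reshaped into the "Version / Count" layout from the question with one more awk stage (a sketch; `input.txt` stands in for whatever file holds the raw lines):

```shell
# Strip everything up to '=', count duplicates, then swap the
# "count version" columns from `uniq -c` into "version<TAB>count".
sed 's/.*=//' input.txt \
  | sort \
  | uniq -c \
  | awk 'BEGIN { print "Version\tCount" } { print $2 "\t" $1 }'
```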
This seems like a job for awk:
Assuming your version information is in the file versions.txt
(you can also omit the filename, in which case awk
reads from stdin):
awk -F= '
{counts[$2]=counts[$2]+1}
END {for (key in counts)
printf "%s\t%d\n", key, counts[key]}
' versions.txt
Explanation:
- -F= tells awk to use the = character as the field separator. Each line of your data is then treated as two fields, of which only the second is used.
- The first statement between braces is executed for each line of input, keeping a count for each occurrence of the second field, which is $2.
- The second statement in braces, preceded by the keyword END, is executed after the last line is processed. It prints the counts for all distinct values of $2.
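One caveat: for (key in counts) iterates in an unspecified order, so pipe the result through sort if you want the versions ordered. A minimal stdin run of the same script might look like:

```shell
# Feed sample lines on stdin; sort afterwards because awk's
# "for (key in counts)" loop visits keys in no guaranteed order.
printf 'Test_Version=V2.5.2\nTest_Version=V2.5.2\nTest_Version=V2.5.1\n' \
  | awk -F= '
    {counts[$2]=counts[$2]+1}
    END {for (key in counts)
           printf "%s\t%d\n", key, counts[key]}
    ' \
  | sort
# V2.5.1	1
# V2.5.2	2
```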
You won't need any greps. Note that piping find's output into awk would give awk the file *names*, not their contents, so let find hand the files to awk directly with -exec (also note that asort() is a GNU awk extension):
find . -name "*.VER" -exec awk -F= '
BEGIN {
  OFS="\t"
}
/Test_Version/ {
  if (!count[$2]++) ver[num++]=$2
}
END {
  print "Version", "Count"
  n=asort(ver)
  for (i=1;i<=n;i++) print ver[i], count[ver[i]]
}' '{}' +
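Since asort() only exists in GNU awk, here is a portable sketch for systems with a POSIX awk: do the counting in awk, sort the body externally, and prepend the header afterwards (assuming the *.VER files contain Test_Version= lines as in the question):

```shell
# POSIX awk has no asort(), so count first, sort the version lines
# with the external sort, and print the header ahead of the result.
find . -name "*.VER" -exec cat '{}' + \
  | awk -F= '/Test_Version/ { count[$2]++ }
             END { for (v in count) print v "\t" count[v] }' \
  | sort \
  | { printf 'Version\tCount\n'; cat; }
```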