Bash loop command until file contains n duplicate entries (lines)
I'm writing a script and I need to create a loop that will execute the same commands until a file contains a specified number of duplicate entries (lines). For example, on each iteration I will echo a random string to the file results, and I want the loop to stop when 10 lines contain the same string.
I thought of something like
while [ `some command here (maybe using uniq)` -lt 10 ]
do
    command1
    command2
    command3
done
Do you have any idea how this problem can be solved? It can't be done with grep, since I don't know which string to look for.
Thank you for your suggestions.
Not the most efficient solution, but this should work:
while [ `sort "$file" | uniq -c | awk '{print $1}' | sort -nr | head -n1` -lt 10 ]
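To make this concrete, here is one way to wrap that test into a complete loop. This is only a sketch: the file name results comes from the question, and echoing string$((RANDOM % 5)) is a hypothetical stand-in for whatever actually produces the lines.

file=results
: > "$file"                                    # start with an empty file
while true
do
    echo "string$((RANDOM % 5))" >> "$file"    # stand-in for the real command(s)
    max=$(sort "$file" | uniq -c | sort -nr | head -n1 | awk '{print $1}')
    [ "$max" -ge 10 ] && break
done
echo "a line now appears $max times"

Note the test runs after the body, so the file is never empty when sort sees it; with an empty file the command substitution yields an empty string and [ "" -lt 10 ] is an error.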
Here's another version, which does it all within one (g)awk process. Also, try to use the $() syntax instead of backticks where possible:
while [ "$(gawk '{!_[$0]++}END{b=asort(_,a);print a[b]}' file)" -lt 10 ]
I would use an associative array, in awk or in Bash 4, and avoid running sort twice plus uniq and head.
Whenever you write your value to the file, increment the corresponding array element.
#!/bin/bash
# Bash 4
declare -A array    # associative array: count per value seen

while true
do
    one_command
    val=$(command_to_output_val | tee -a out_file)    # -a: append rather than overwrite
    if (( ++array[$val] >= 10 ))                      # pre-increment: stop at the 10th occurrence
    then
        break
    fi
done
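Here one_command and command_to_output_val are placeholders. A self-contained test of the same pattern, assuming the values come from a small random range so duplicates accumulate quickly:

#!/bin/bash
# Bash 4 test run: values in 0..4, stop when one of them has appeared 10 times
declare -A array
while true
do
    val=$(( RANDOM % 5 ))        # hypothetical stand-in for command_to_output_val
    echo "$val" >> out_file
    if (( ++array[$val] >= 10 ))
    then
        echo "value $val reached 10 occurrences"
        break
    fi
done

The key design point is that the script counts as it writes, so it never has to re-scan the whole file with sort/uniq on every iteration.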
while [ `sort filename | uniq -c | sort -nr | head -1 | sed -e 's:^ *::' -e 's: .*::'` -lt 10 ]
Not incredibly efficient (the "sort -nr | head -1" bit is particularly sub-optimal), but that's the standard "quick and dirty" solution.
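If the double sort bothers you, awk can track the maximum count itself; a minimal sketch of the same test:

while [ "$(sort filename | uniq -c | awk '$1 > max { max = $1 } END { print max + 0 }')" -lt 10 ]

The END { print max + 0 } also prints 0 for an empty file, so the test doesn't break before the first line is written.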