ksh script optimization
I have a small script that reads each line of a file, retrieves an ID field, runs a utility to get the name, and appends the name at the end. The problem is that the input file is huge (2GB). Since the output is the same as the input with a 10-30 character name appended, it is of the same order of magnitude. How can I optimize it to read large buffers, process in buffers, and then write buffers to the file so that the number of file accesses is minimized?
#!/bin/ksh
while read line
do
    id=`echo ${line}|cut -d',' -f 3`
    NAME=$(id2name ${id} | cut -d':' -f 4)
    if [[ $? -ne 0 ]]; then
        NAME="ERROR"
        echo "Error getting name from id2name for id: ${id}"
    fi
    echo "${line},\"${NAME}\"" >> ${MYFILE}
done < ${MYFILE}.csv
Thanks
You can speed things up considerably by eliminating the two calls to cut in each iteration of the loop. It also might be faster to move the redirection to your output file to the end of the loop. Since you don't show an example of an input line, or what id2name consists of (it's possible it's a bottleneck), or what its output looks like, I can only offer this approximation:
#!/bin/ksh
while IFS=, read -r field1 field2 id remainder    # use appropriate var names
do
    line="$field1,$field2,$id,$remainder"
    # capture the output first so $? below reflects id2name's exit status
    out=$(id2name "$id")
    if [[ $? -ne 0 ]]; then
        NAME="ERROR"
        # if you want this message to go to stderr instead of being included
        # in the output file, include the >&2 as I've done here
        echo "Error getting name from id2name for id: ${id}" >&2
    else
        # warning - reused variables
        IFS=: read -r field1 field2 field3 NAME remainder <<< "$out"
    fi
    echo "${line},\"${NAME}\""
done < "${MYFILE}.csv" > "${MYFILE}"
The OS will do the buffering for you; there's no need to read and write in explicit chunks from the script.
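If the same id can appear on many lines, the per-line id2name call is likely the dominant cost, and memoizing its results may help far more than any I/O tuning. Here is a minimal sketch using a ksh93 associative array, assuming your ksh supports typeset -A and that id2name and the field layout are as above:

#!/bin/ksh
typeset -A name_cache    # requires ksh93 associative arrays

while IFS=, read -r field1 field2 id remainder
do
    if [[ -z "${name_cache[$id]}" ]]; then
        # first time this id is seen: look it up once and remember the result
        out=$(id2name "$id")
        if [[ $? -ne 0 ]]; then
            name_cache[$id]="ERROR"
            echo "Error getting name from id2name for id: ${id}" >&2
        else
            IFS=: read -r f1 f2 f3 cached_name rest <<< "$out"
            name_cache[$id]=$cached_name
        fi
    fi
    echo "$field1,$field2,$id,$remainder,\"${name_cache[$id]}\""
done < "${MYFILE}.csv" > "${MYFILE}"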
Edit:
If your version of ksh doesn't have <<<, try this:

id2name "$id" | IFS=: read -r field1 field2 field3 NAME remainder

(If you were using Bash, this wouldn't work: Bash runs every stage of a pipeline in a subshell, so the variables set by read would be lost. ksh runs the last stage in the current shell.)
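For concreteness, here is a hypothetical example of what the script does; the field layout is assumed, since the question doesn't show a sample line:

# hypothetical input line in ${MYFILE}.csv (id is comma-delimited field 3):
#   foo,bar,1234,more,fields
# hypothetical output of `id2name 1234` (name is colon-delimited field 4):
#   a:b:c:Jane Doe:d
# resulting output line in ${MYFILE}:
#   foo,bar,1234,more,fields,"Jane Doe"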