
search through a file in the style of bash reverse-search-history

I'm trying to write a function which will search through a file in the same manner that reverse-search-history works: the user starts typing, the prompt is updated with the first match, hitting a special key rotates through the other matches, and hitting another special key selects the current match.

I wrote a bash script to do this, but it is awfully slow. I was wondering if I could harness some other unix/bash feature to make it fast, maybe awk? (A rough sketch of what that might look like follows the script below.)

Any ideas would be appreciated.

For this script, TAB rotates through matches, ENTER selects the current match, ESC ends, and BACKSPACE removes the last character in the current search. (Forgive my dodgy bash script; I'm relatively new to bash/unix.)

#!/bin/bash


do_search()
{
        #Record the current screen position
        tput sc
        local searchTerm
        local matchNumber=1
        local totalSearchString
        local TAB_CHAR=$'\t'
        local ESC_CHAR=$'\e'       # the Escape key
        local BS_CHAR=$'\x7f'      # most terminals send DEL (0x7f) for Backspace

        #print initial prompt
        echo -n "(search) '':"

        #-s: input is not echoed to the screen; -n1: read one character at a time
        while IFS= read -r -s -n1 char
        do
                #If ENTER (read -n1 returns an empty string for the Enter key)
                if [ "$char" == "" ]; then
                        if [ "$match" != "" ]; then
                                eval "$match"
                        fi
                        echo ""
                        return 0

                #If BACKSPACE
                elif [ "$char" == "$BS_CHAR" ]; then
                        if [ "$totalSearchString" != "" ]; then
                                totalSearchString=${totalSearchString%?}
                        fi

                #If ESCAPE
                elif [ "$char" == "$ESC_CHAR" ]; then
                        tput el1
                        tput rc
                        return 0

                #If TAB
                elif [ "$char" == "$TAB_CHAR" ]; then
                        matchNumber=$(($matchNumber+1))

                #OTHERWISE
                else
                        totalSearchString="$totalSearchString$char"
                fi

                match=""
                if [ "$totalSearchString" != "" ]; then
                        #This builds up a list of grep statements piping into each other for each word in the totalSearchString
                        #e.g. totalSearchString="blah" will output "| grep blah"
                        #e.g. totalSearchString="blah1 blah2" will output "| grep blah1 | grep blah2"
                        local grepStatements=`echo $totalSearchString | sed 's/\([^ ]*\) */| grep \1 /g'`
                        local cdHistorySearchStatement="cat $1 $grepStatements | head -$matchNumber | tail -1"

                        #Get the match
                        match=`eval "$cdHistorySearchStatement"`
                fi

                #clear the current line
                tput el1
                tput rc

                #re-print prompt & match
                echo -n "(search) '$totalSearchString': $match"
        done
        return 0
}

do_search categories.txt
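
For reference, here is roughly what I imagine the awk route might look like: an untested sketch that reuses do_search's variables, matches every word as a fixed string, and replaces the whole grep/head/tail chain with a single process.

awk -v n="$matchNumber" -v words="$totalSearchString" '
    BEGIN { nw = split(words, w, " ") }
    {
        for (i = 1; i <= nw; i++)
            if (index($0, w[i]) == 0) next   # a word is missing from this line: skip it
        if (++hits == n) { print; exit }     # Nth matching line: print it and stop
    }
' "$1"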


I think bash uses readline for this; why don't you look into using it yourself? I don't know much more about it, sorry, but I thought it might help.
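
For what it's worth, a rough sketch of that idea might look like this (untested; it assumes read -e's readline bindings will search a history list loaded with history -r, which can vary with bash/readline versions, and it reuses categories.txt from the question):

  #!/bin/bash
  # Load the file's lines into the shell's history list, then let `read -e`
  # provide readline editing, so Up-arrow / Ctrl-R search those lines.
  history -c                      # start with an empty history list
  history -r categories.txt       # load each line of the file as a history entry
  read -e -p "(search): " line    # readline-enabled prompt
  printf '%s\n' "$line"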


I don't think this can be made fast enough for interactive use in pure bash (maybe using the complete builtin?). That said, you can try simplifying the commands you're using. Instead of one grep per word, you can use a single grep with an alternation (note that this matches lines containing any of the words rather than all of them):

  grepStatements=$(echo "$totalSearchString" | sed 's/[ ]\+/|/g')
  cdHistorySearchStatement="grep -E '$grepStatements' $1 | ..."

and instead of head -$matchNumber | tail -1 you could use sed -n "${matchNumber}{p;q}", which quits as soon as it has printed the match.
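
Putting the two together, the inner search could collapse to something like this (a sketch that reuses the question's variable names):

  pattern=$(echo "$totalSearchString" | sed 's/[ ]\+/|/g')        # "blah1 blah2" -> "blah1|blah2"
  match=$(grep -E "$pattern" "$1" | sed -n "${matchNumber}{p;q}") # print only the Nth match, then quit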

