
Control Processes Started by Bash Daemon

In bash, I have created a simple daemon to execute commands when my internet connection changes:

#!/bin/bash

doService(){
    while true; do
        checkTheInternetConnection
        sleep 15
    done
}

checkTheInternetConnection(){
    # pseudocode: compare the current connection state to the last check
    if unchanged since last check; then
        return
    else
        someCommand
    fi
}

someCommand(){
    : # do something
}

doService

And this has been working pretty well for what I need it to do.

The only problem is that as part of my "someCommand" and "checkTheInternetConnection" I use other external utilities like arp, awk, grep, head, etc.

However, 99% of the time, I will just need arp.

First question: Is it necessary to keep the other commands open? Is there a way to kill a command once I've already processed its output?


Another question: (MOVED TO A NEW POST) I am having a hell of a time trying to write a "kill all other daemon processes" function. I do not ever want more than one daemon running at once. Any suggestions? This is what I have:

otherprocess=`ps ux | awk '/BashScriptName/ && !/awk/ {print $2}' | grep -Ev "^$$\$"`

WriteLogLine "Checking for running daemons."

if [ "$otherprocess" != "" ]; then
    WriteLogLine "There are other daemons running, killing all others."
    VAR=`echo "$otherprocess" | grep -Ev "^$$\$" | sed 's/^/kill /'`
    eval "$VAR"
else
    WriteLogLine "There are no daemons running."
fi


Can you give more detail on the first question? I think you are asking about running many commands piped together (cat xxx | grep yyy | tail -zzz).

Each command will keep running until its input pipe reaches EOF. So in this example grep will only exit after cat has processed all the input and closed its end of the pipe. But there is a subtlety: cat can only close its end of the pipe once grep has read (or at least buffered) all of the input, because write calls on a full pipe block. You need to keep this in mind while designing your scripts.
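You can watch this termination behaviour directly in bash (a minimal demo, not from the original answer; PIPESTATUS is a bash built-in array):

    yes | head -n 1          # head exits after reading one line
    echo "${PIPESTATUS[@]}"  # prints "141 0": yes died of SIGPIPE (128+13)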

But I don't think you should worry about these utilities. They generally have a low memory footprint, if that is the concern.


For your first question: I don't quite understand it fully, but I can see that you may be asking one of two things.

  1. You run things in a bash function (grep, awk, sed, etc) and because that function is long running, you are afraid that the utilities that you run are somehow remaining open.
  2. You are piping output from one command to another and are afraid that the command stays open after it has finished running.

Neither 1 nor 2 will leave utility commands "open" after they have finished running. You can prove this by inserting

ps -ef | grep "command" | grep -v 'grep'

throughout the code to see just what is running under that name, or

ps -ef | grep "$$" | grep -v 'grep'

which will list out things that the current process has spawned.
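As an aside (not from the original answer): pgrep can do the same checks without the grep -v 'grep' dance, since it never matches itself:

    pgrep -fl "command"    # match against the full command line, print pid and name
    pgrep -lP $$           # list the children spawned by the current shell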

UPDATE:

So, it seems that you are interested in how things run through a pipe. You can see this visually using the following command:

$ ls / | grep bin | grep bin | ps -ef | grep ls
$

compare that with something like:

$ find ~ | grep bin | ps -ef | grep find
$

Notice how the 'ls' is no longer in the process list, but the find is. You may have to add more "grep bin" stages to the pipeline to get the effect. Once the first command has finished writing its output, it exits, even if the rest of the commands are not yet finished. The other commands will finish as they are done processing the output from the first (that is the nature of the pipe).
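Related to this, every stage of a pipeline is started concurrently, not one after the other. A minimal sketch to observe it (assuming GNU ps for the --ppid option):

    sleep 30 | sleep 30 | sleep 30 &   # three long-running stages, backgrounded
    ps --ppid $$ -o pid,comm           # all three sleeps (plus ps itself) already exist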

