
redirecting stdin _and_ stdout to a pipe

I would like to run a program "A", have its output go to the input of another program "B", and also have my stdin go to the input of "B". If program "A" closes, I'd like "B" to continue running.

I can redirect A output to B input easily:

./a | ./b

And I can combine stderr into the output if I'd like:

./a 2>&1 | ./b
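For instance, with echo standing in for ./a, a line written to stderr really does reach the pipe reader after the merge:

```shell
# one line on stdout, one on stderr; 2>&1 sends both down the pipe
{ echo out ; echo err >&2 ; } 2>&1 | sort
# prints "err" then "out"
```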

But I can't figure out how to combine stdin into the output. My guess would be:

./a 0>&1 | ./b

but it doesn't work.

Here's a test that doesn't require us to write any test programs:

$ echo ls 0>&1 | /bin/sh -i
$ a  b  info.txt
$
/bin/sh: Cannot set tty process group (No such process)

If possible, I'd like to do this using only bash redirection on the command line (I don't want to write a C program that forks off child processes and does something complicated every time I want to redirect stdin to a pipe).


This cannot be done without writing an auxiliary program.

In general, stdin could be a read-only file descriptor (heck, it might refer to a read-only file). So you cannot "insert" anything into it.

You will need to write a "helper" program that monitors two file descriptors (say, 0 and 3) in order to read from both and "merge" them. A simple select or poll loop would be sufficient, and you could write it in most scripting languages, but not the shell, I don't think.

Then you can use shell redirection to feed your program's output to descriptor 3 of the "helper".

Since what you want is basically the opposite of "tee", I might call it "eet"...
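Purely as a sketch (bash-only, and with none of the ordering guarantees a real select loop would give), such an "eet" helper can be faked with two cat processes. Here the printf commands stand in for program A's output and for the user's typing, and the trailing cat stands in for program B:

```shell
# hypothetical "eet": merge whatever shows up on fd 0 and fd 3 onto stdout
eet() {
  cat <&3 &   # drain fd 3 (program A's output) in the background
  cat         # drain the real stdin in the foreground
  wait        # return only once both streams hit EOF
}

# A's output arrives on fd 3 via process substitution; stdin stays on fd 0.
# When A exits, the background cat sees EOF and dies, but stdin keeps flowing.
printf 'typed-by-user\n' | eet 3< <(printf 'output-of-a\n') | cat
```

The interleaving between the two streams is whatever the kernel's pipe buffering gives you, which is exactly why a real select/poll loop is the more respectable answer.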

[edit]

If only you could launch "cat" in the background...

But that will fail because a background process cannot read from its controlling terminal (it gets stopped by SIGTTIN). So if you could just detach "cat" from its controlling terminal and run it in the background...

On Linux, "setsid cat" should do it, roughly. But (a) I could not get it to work very well and (b) I really do not have time for this today and (c) it is non-standard anyway.

I would just write the helper program.

[edit 2]

OK, this seems to work:

{ seq 5 ; sleep 2 ; seq 5 ; } | /bin/bash -c 'set -m ; setsid cat ; echo HELLO'

The set -m thing forces bash to enable job control, which apparently is needed to prevent the shell from redirecting stdin from /dev/null.

Here, the echo HELLO represents your "program A". The seq commands (with the sleep in the middle) are just to provide some input. And yes, you can pipe this whole thing to process B.

About as ugly and non-portable a solution as you could ask for...


A pipe has two ends. One is for writing, and that which gets written appears in the other end, which is for reading.

It's a pipe, not a T or Y junction.

I don't think your scenario is possible. Having "stdin going to input of" anything doesn't make sense.


If I understand your requirements correctly, you want this set up (ASCII art to the fore):

o----+----->|  A   |----+---->|  B  |---->o
     |                  ^
     |                  |
     +------------------+

with the additional constraint that if process A closes up shop, process B should be able to continue with the input stream going to B.

This is a non-standard setup, as you realize, and can only be achieved by using an auxiliary program to drive the input to A and B. You end up with some interesting synchronization issues but it will all work remarkably well as long as your messages are short enough.

The plumbing necessary to achieve this is notable - you'll need two pipes, one for the input to A and the other for the input to B, and the output of A will be connected to the input of B as well.

o---->|  C  |---------->|  A   |----+---->|  B  |---->o
         |                          ^
         |                          |
         +--------------------------+

Note that C will be writing the data twice, once to A and once to B. Note, too, that the pipe from A to B is the same pipe as the pipe from C to B.
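In bash, that driver layout can be approximated with tee plus a process substitution, because the substituted process inherits tee's stdout, which is exactly the shared pipe into B. As a sketch, with tr standing in for A and sort standing in for B:

```shell
# the input line reaches B twice over the same pipe:
# once straight from tee (playing C), once transformed by A (tr)
printf 'hello\n' | tee >(tr a-z A-Z) | sort
```

Both "hello" and "HELLO" come out the far end; the relative order of the direct and transformed copies is not guaranteed, which is the synchronization issue mentioned above.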


To make the given test case work, you have to run a while ... read loop that reads from the controlling terminal device /dev/tty inside an sh -c '...' construct.

Note the use of eval (could it be avoided here?), and that multi-line commands entered at the input> prompt will fail.

echo 'ls; export var=myval'  | ( 
stdin="$(</dev/stdin)"
/bin/sh -i -c '
  eval "$1"; 
  while IFS="" read -e -r -p "input> " line; do 
    history -s "${line}"
    eval "${line}"; 
  done </dev/tty
' argv0 "${stdin}"
)

input> echo $var

For a similar problem and the use of named pipes see here:

BASH: Best architecture for reading from two input streams


This can't be done exactly as shown, but to perform your example you can make use of cat's ability to join files together:

cat <(echo ls) - | /bin/sh

(You can do -i, but then you'll have to have another process kill the /bin/sh, as your attempts to Ctrl-C and Ctrl-D out will fail.)

This assumes that you want to pass in your piped input and then accept from stdin. You can also make it so that it does something after stdin is done, or on both sides -- but it won't merge input character-by-character or line-by-line.
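For example, to sandwich stdin between two canned inputs (bash again, since <() is a process substitution; printf stands in for the piped data):

```shell
# cat reads its arguments strictly in order, so the
# output order here is deterministic
printf 'mid\n' | cat <(echo before) - <(echo after)
# prints "before", "mid", "after" in that order
```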


This seems to do what you want:

$ ( ./a <&-; cat ) | ./b

(It's not clear to me whether you want a to get input... this solution sends all input to b.) Of course, in this case the inputs to b are strictly ordered: all of the output of a is sent to b first, then a terminates, then input goes to b. If you want things interleaved, try:

$ ( ./a <&- & cat ) | ./b
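A quick sanity check of the strictly ordered first form, with echo standing in for ./a and cat standing in for ./b:

```shell
# a runs (and finishes) with stdin closed, then cat forwards stdin;
# all of a's output therefore reaches b before any of the piped input
printf 'from-stdin\n' | ( echo from-a <&- ; cat ) | cat
# prints "from-a" then "from-stdin"
```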
