
Running piped subprocesses gives different results when the launch order changes?

I'm running a pipe of commands from a python3 program, using subprocess.*; I didn't want to go through a shell, because I'm passing arguments to my subcommands, and making sure these would not be misinterpreted by the shell would be nightmarish.
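To illustrate the point about skipping the shell (a minimal sketch, not from the original post; the argument string is invented): when arguments are passed as a list, they reach the child program verbatim, with no shell parsing in between.

```python
import sys
from subprocess import run

# An argument full of characters a shell would re-interpret.
tricky = "it's $HOME; spaces & quotes"

# With a list of arguments there is no shell involved, so nothing
# gets expanded, split, or quoted away.
r = run([sys.executable, "-c", "import sys; print(sys.argv[1])", tricky],
        capture_output=True, text=True)
result = r.stdout.strip()
print(result)  # → it's $HOME; spaces & quotes
```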

The subprocess doc gives this example of how to do it:

from subprocess import Popen, PIPE

p1 = Popen(command1, stdout=PIPE)
p2 = Popen(command2, stdin=p1.stdout)
p2.wait()
p1.wait()
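A self-contained version of that pattern, with placeholder producer/consumer commands invented for the demo (the subprocess docs additionally recommend closing the parent's copy of `p1.stdout` so that p1 can receive SIGPIPE if the consumer exits early):

```python
import sys
from subprocess import Popen, PIPE

# Producer: prints the numbers 0..4; consumer: sums the integers on stdin.
producer = [sys.executable, "-c", "print('\\n'.join(str(i) for i in range(5)))"]
consumer = [sys.executable, "-c", "import sys; print(sum(int(l) for l in sys.stdin))"]

p1 = Popen(producer, stdout=PIPE)
p2 = Popen(consumer, stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # drop the parent's read-end reference (per the docs' recipe)
out, _ = p2.communicate()
p1.wait()
result = out.decode().strip()
print(result)  # → 10
```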

This works well. However, I wondered if it would be safer to start the consumer before the producer, so:

p2 = Popen(command2, stdin=PIPE)
p1 = Popen(command1, stdout=p2.stdin)
p2.wait()
p1.wait()

I expected this to behave in exactly the same way, but apparently it does not. The first version works flawlessly; with the second, my program hangs. If I look at the system, I can see that p1 is dead and waiting to be reaped, while p2 hangs forever. Is there a rational explanation for that?


It looks like p2 (the consumer) hangs because its stdin never reaches EOF: the parent process still holds p2.stdin, the write end of the pipe, open, so even after p1 exits, p2 keeps waiting for more input. If the code is modified to close that handle once the producer is done, both processes finish successfully:

p2 = Popen(command2, stdin=PIPE)
p1 = Popen(command1, stdout=p2.stdin)
p1.wait()
p2.stdin.close()
p2.wait()
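A runnable demonstration of the fixed ordering, using placeholder commands invented for the demo (a producer that prints a line, and a consumer that uppercases everything it reads):

```python
import sys
from subprocess import Popen, PIPE

producer = [sys.executable, "-c", "print('hello')"]
consumer = [sys.executable, "-c",
            "import sys; sys.stdout.write(sys.stdin.read().upper())"]

p2 = Popen(consumer, stdin=PIPE, stdout=PIPE)
p1 = Popen(producer, stdout=p2.stdin)
p1.wait()
p2.stdin.close()   # without this close, p2 never sees EOF and blocks forever
out = p2.stdout.read()
p2.wait()
result = out.decode().strip()
print(result)  # → HELLO
```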

I bet this is the Law of Leaky Abstractions in action.

