
How to attach a debugger to a Python subprocess?

I need to debug a child process spawned by multiprocessing.Process(). The pdb debugger seems to be unaware of forking and unable to attach to already-running processes.

Are there any smarter Python debuggers that can be attached to a subprocess?


I've been searching for a simple solution to this problem and came up with this:

import sys
import pdb

class ForkedPdb(pdb.Pdb):
    """A Pdb subclass that may be used
    from a forked multiprocessing child

    """
    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            sys.stdin = open('/dev/stdin')
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin

Use it the same way you might use the classic Pdb:

ForkedPdb().set_trace()
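
For instance, a minimal sketch of driving it from a child process (the worker function here is made up for illustration; ForkedPdb is the class defined above, in the same module):

import multiprocessing

def worker():
    x = 21 * 2
    # Drops into pdb inside the child; run this from a real terminal,
    # since input is read from /dev/stdin.
    ForkedPdb().set_trace()
    print(x)

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()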


Winpdb is pretty much the definition of a smarter Python debugger. It explicitly supports going down a fork; I'm not sure it works nicely with multiprocessing.Process(), but it's worth a try.
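
If you do try it, the entry point is Winpdb's embedded debugger: as far as I recall from its docs (verify against your version), you call rpdb2.start_embedded_debugger('some_password') inside the child process and then attach to it with the Winpdb GUI.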

For a list of candidates to check for support of your use case, see the list of Python Debuggers in the wiki.


This is an elaboration of Romuald's answer that restores the original stdin using its file descriptor, which keeps readline working inside the debugger. In addition, pdb's special handling of KeyboardInterrupt is disabled (via nosigint=True) so that it does not interfere with multiprocessing's SIGINT handler.

import os
import pdb
import sys

class ForkablePdb(pdb.Pdb):

    _original_stdin_fd = sys.stdin.fileno()
    _original_stdin = None

    def __init__(self):
        pdb.Pdb.__init__(self, nosigint=True)

    def _cmdloop(self):
        current_stdin = sys.stdin
        try:
            if not self._original_stdin:
                self._original_stdin = os.fdopen(self._original_stdin_fd)
            sys.stdin = self._original_stdin
            self.cmdloop()
        finally:
            sys.stdin = current_stdin
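
As with Romuald's version, you use it by calling ForkablePdb().set_trace() wherever you want to break in the forked child.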


Building on @memplex's idea, I had to modify it to get it to work with joblib, by setting sys.stdin in the constructor as well as passing the original stdin's file descriptor along via joblib.

import os
import pdb
import signal
import sys
import joblib

_original_stdin_fd = None

class ForkablePdb(pdb.Pdb):
    _original_stdin = None
    _original_pid = os.getpid()

    def __init__(self):
        pdb.Pdb.__init__(self)
        if self._original_pid != os.getpid():
            if _original_stdin_fd is None:
                raise Exception("Must set ForkablePdb._original_stdin_fd to stdin fileno")

            self.current_stdin = sys.stdin
            if not self._original_stdin:
                self._original_stdin = os.fdopen(_original_stdin_fd)
            sys.stdin = self._original_stdin

    def _cmdloop(self):
        try:
            self.cmdloop()
        finally:
            sys.stdin = self.current_stdin

def handle_pdb(sig, frame):
    ForkablePdb().set_trace(frame)

def test(i, fileno):
    global _original_stdin_fd
    _original_stdin_fd = fileno
    # Busy-loop so there is a running worker to interrupt with SIGUSR2.
    while True:
        pass

if __name__ == '__main__':
    print "PID: %d" % os.getpid()
    signal.signal(signal.SIGUSR2, handle_pdb)
    ForkablePdb().set_trace()
    fileno = sys.stdin.fileno()
    joblib.Parallel(n_jobs=2)(joblib.delayed(test)(i, fileno) for i in range(10))
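
Since the SIGUSR2 handler is installed before joblib starts the workers, sending SIGUSR2 to a worker's PID (e.g. kill -USR2 <pid>, with the PID found via ps) should drop that worker into the debugger; this assumes a fork-based backend where the handler is inherited.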


remote-pdb can be used to debug subprocesses. After installation, put the following lines in the code you need to debug:

import remote_pdb
remote_pdb.set_trace()

remote-pdb will print a port number which will accept a telnet connection for debugging that specific process. There are some caveats around worker launch order, where stdout goes when using various frontends, etc. To ensure a specific port is used (it must be free and accessible to the current user), use the following instead:

from remote_pdb import RemotePdb
RemotePdb('127.0.0.1', 4444).set_trace()
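
Then, from another terminal, connect to the printed (or chosen) port, e.g. with telnet 127.0.0.1 4444; netcat or socat work as well, and you get an ordinary pdb prompt inside that process.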

remote-pdb can also be triggered via the built-in breakpoint() in Python 3.7+.
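
For example, assuming remote-pdb's documented environment variables, you can route every breakpoint() in a run to it without touching the code:

PYTHONBREAKPOINT=remote_pdb.set_trace REMOTE_PDB_HOST=127.0.0.1 REMOTE_PDB_PORT=4444 python your_script.py

(your_script.py is a placeholder for whatever spawns your subprocesses.)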


Just use PuDB, which gives you an awesome TUI (a GUI in the terminal) and supports multiprocessing, as follows:

from pudb import forked; forked.set_trace()


An idea I had was to create "dummy" classes to fake the implementation of the methods you are using from multiprocessing:

from multiprocessing import Pool


class DummyPool:
    """Synchronous stand-in for multiprocessing.Pool (mirrors the bits used)."""
    @staticmethod
    def apply_async(func, args=(), kwds={}):
        return DummyApplyResult(func(*args, **kwds))

    def close(self): pass
    def join(self): pass


class DummyApplyResult:
    def __init__(self, result):
        self.result = result

    def get(self):
        return self.result


def foo(a, b, switch):
    # set trace when DummyPool is used
    # import ipdb; ipdb.set_trace()
    if switch:
        return b - a
    else:
        return a - b


if __name__ == '__main__':
    pool = DummyPool()  # switch between Pool() and DummyPool() here
    results = []
    results.append(pool.apply_async(foo, args=(1, 100), kwds={'switch': True}))
    pool.close()
    pool.join()
    results[0].get()
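
The point of the design is that DummyPool runs func synchronously in the current process, so the commented-out set_trace() inside foo behaves like ordinary single-process debugging; once the logic works, switch the one DummyPool() line back to Pool() and the calling code stays unchanged.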


Here is a version of ForkedPdb (Romuald's solution) that works on both Windows and *nix systems.

import sys
import pdb

if sys.platform == "win32":
    import win32console


class MyHandle:
    def __init__(self):
        # Read from the real console input buffer instead of the child's
        # redirected sys.stdin.
        self.screenBuffer = win32console.GetStdHandle(win32console.STD_INPUT_HANDLE)

    def readline(self):
        return self.screenBuffer.ReadConsole(1000)

class ForkedPdb(pdb.Pdb):
    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            if sys.platform == "win32":
                sys.stdin = MyHandle()
            else:
                sys.stdin = open('/dev/stdin')
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin


The problem here is that Python always connects sys.stdin in the child process to os.devnull to avoid contention for the stream. But this means that when the debugger (or a simple input()) tries to connect to stdin to get input from the user, it immediately reaches end-of-file and reports an error.

One solution, at least if you don't expect multiple debuggers to run at the same time, is to reopen stdin in the child process. That can be done by setting sys.stdin to open(0), which always opens the active terminal. This is in fact what the ForkedPdb solution does, but it can be done more simply and in an OS-independent manner like this:

import multiprocessing, sys

def main():
    process = multiprocessing.Process(target=worker)
    process.start()
    process.join()

def worker():
    # Python automatically closes sys.stdin for the subprocess, so we reopen
    # stdin. This enables pdb to connect to the terminal and accept commands.
    # See https://stackoverflow.com/a/30149635/3830997.
    sys.stdin = open(0) # or os.fdopen(0)
    print("Hello from the subprocess.")
    breakpoint()  # or import pdb; pdb.set_trace()
    print("Exited from breakpoint in the subprocess.")

if __name__ == '__main__':
    main()


If you are on a supported platform, try DTrace. Most of the BSD / Solaris / OS X family support DTrace.

Here is an intro by the author. You can use DTrace to debug just about anything.

Here is a SO post on learning DTrace.
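
As a concrete starting point: CPython 3.6+ built with --with-dtrace exposes function-entry/function-return probes, so a one-liner along the lines of dtrace -n 'python$target:::function-entry' -p <pid> (a sketch; probe names and availability depend on your build and platform) can trace calls in an already-running child process without stopping it.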
