Storing the output of an interactive Python script in a file
How do I log everything that is done by a Python script and by all the scripts it calls?
I had several Bash scripts, but I have now written a Python script which calls all of these Bash scripts. I would like to have all the output produced by these scripts stored in a file.
The Python script is interactive, i.e. it contains raw_input calls, so I couldn't just do 'python script.py | tee log.txt' for the whole script, since for some reason the prompts are then not shown on the screen.
Here is an excerpt from the script which calls one of the shell scripts.
import subprocess
import sys

cmd = "somescript.sh"
try:
    retvalue = subprocess.check_call(cmd, shell=True)
except subprocess.CalledProcessError:
    print("script command has failed")
    sys.exit("exit from script")
What do you think could be done here?
Edit
Two subquestions based on Alex's answer:
How do I get the answers to the questions stored in the output file as well? For example, on the line
ok = raw_input(prompt)
the user will be asked the question and I would like the answer logged as well. I read about Popen and communicate but didn't use them, since they buffer the data in memory. Here the amount of output is big, and I need to take care of standard error along with standard output as well. Do you know whether this can be handled with the Popen and communicate method too?
Making Python's own print statements go to both the terminal and a file is not hard:
>>> import sys
>>> class tee(object):
...     def __init__(self, fn='/tmp/foo.txt'):
...         self.o = sys.stdout
...         self.f = open(fn, 'w')
...     def write(self, s):
...         self.o.write(s)
...         self.f.write(s)
...
>>> sys.stdout = tee()
>>> print('hello world!')
hello world!
>>>
$ cat /tmp/foo.txt
hello world!
This should work both in Python 2 and Python 3.
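One practical note (my addition, not part of the original answer): if the script, or a library it uses, ever calls sys.stdout.flush() or isatty(), the minimal tee above will raise AttributeError. A slightly fuller sketch along the same lines simply forwards those calls:

import sys

class tee(object):
    # duplicate every write to the real stdout and to a log file;
    # the log path is just an example
    def __init__(self, fn='/tmp/foo.txt'):
        self.o = sys.stdout
        self.f = open(fn, 'w')
    def write(self, s):
        self.o.write(s)
        self.f.write(s)
    def flush(self):
        self.o.flush()
        self.f.flush()
    def isatty(self):
        return self.o.isatty()

sys.stdout = tee()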
To similarly direct the output from subcommands, don't use

retvalue = subprocess.check_call(cmd, shell=True)

which lets cmd's output go to its regular "standard output", but rather grab and re-emit it yourself, as follows:
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
so, se = p.communicate()
print(so)
retvalue = p.returncode
assuming you don't care about standard error (only standard output) and the amount of output from cmd is reasonably small (since .communicate buffers that data in memory) -- it's easy to tweak if either assumption doesn't match what you actually want.
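For example, if standard error should be captured separately rather than ignored, a small variation (my sketch, still subject to the same in-memory buffering caveat) would be:

p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
so, se = p.communicate()   # each stream is captured in its own string
sys.stdout.write(so)       # re-emit stdout (and into the log, via the tee)
sys.stderr.write(se)       # re-emit stderr separately
retvalue = p.returncode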
Edit: the OP has now clarified the specs in a long comment to this answer:
- How do I get the answers to the questions stored in the output file as well? For example, on the line ok = raw_input(prompt) the user will be asked the question and I would like the answer logged as well.
Use a function such as:
def echoed_input(prompt):
    # ask as usual, then echo the user's answer (with a newline) into the tee's log file
    response = raw_input(prompt)
    sys.stdout.f.write(response + '\n')
    return response
instead of just raw_input in your application code (of course, this is written specifically to cooperate with the tee class I showed above).
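For example (a quick sketch of mine; the prompt text and surrounding logic are made up for illustration):

# sys.stdout has already been replaced with the tee instance from above
ok = echoed_input("Run somescript.sh? [y/n] ")  # the answer is echoed into the log file
if ok.strip().lower() != 'y':
    sys.exit("exit from script")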
- I read about Popen and communicate and didn't use them since they buffer the data in memory. Here the amount of output is big and I need to take care of standard error along with standard output as well. Do you know whether this can be handled with the Popen and communicate method too?
communicate is fine as long as you don't get more output (and standard error) than comfortably fits in memory, say a few gigabytes at most depending on the kind of machine you're using.
If this assumption holds, just recode the above instead as:
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
so, se = p.communicate()
print(so)
retvalue = p.returncode
i.e., just redirect the subcommand's stderr to get mixed into its stdout.
If you DO have to worry about gigabytes (or whatever) coming at you, then
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for line in p.stdout:
    sys.stdout.write(line)
p.wait()
retvalue = p.returncode
(which gets and emits one line at a time) may be preferable (this depends on cmd not expecting anything from its standard input, of course... because, if it is expecting anything, it's not going to get it, and the problem starts to become challenging;-).
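Tying this back to the excerpt in the question, one way to package the line-by-line variant might look like this (run_logged is my own name, not from the answer; Python 2 is assumed, since the script uses raw_input):

import subprocess
import sys

def run_logged(cmd):
    # run cmd, merging its stderr into its stdout and echoing every line
    # through sys.stdout (and therefore into the tee's log file)
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    for line in p.stdout:
        sys.stdout.write(line)
    p.wait()
    return p.returncode

retvalue = run_logged("somescript.sh")
if retvalue != 0:
    print("script command has failed")
    sys.exit("exit from script")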
Python has a tracing module: trace. Usage: python -m trace --trace file.py
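The same module can also be driven from inside a program; a minimal sketch (the traced main() function here is hypothetical):

import trace

def main():
    print("doing work")

# trace=1 prints each line as it executes; count=0 skips the coverage counts
tracer = trace.Trace(trace=1, count=0)
tracer.run('main()')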
If you want to capture the output of any script, then on a *nix-y system you can redirect stdout and stderr to a file:
./script.py >> /tmp/outputs.txt 2>> /tmp/outputs.txt
If you want everything done by the scripts, not just what they print, note that the Python trace module won't trace things done by the external scripts that your Python program executes. The only thing that can trace every action done by a program is something like DTrace, if you are lucky enough to have a system that supports it (OS X's Instruments, for example, is based on DTrace).