Generating movie from python without saving individual frames to files
I would like to create an h264 or divx movie from frames that I generate in a python script in matplotlib. There are about 100k frames in this movie.
In examples on the web [e.g. 1], I have only seen the method of saving each frame as a png and then running mencoder or ffmpeg on these files. In my case, saving each frame is impractical. Is there a way to take a plot generated in matplotlib and pipe it directly to ffmpeg, generating no intermediate files?
Programming with ffmpeg's C API is too difficult for me [e.g. 2]. Also, I need an encoding with good compression, such as x264, as the movie file will otherwise be too large for a subsequent step. So it would be great to stick with mencoder/ffmpeg/x264.
Is there something that can be done with pipes [3]?
[1] http://matplotlib.sourceforge.net/examples/animation/movie_demo.html
[2] How does one encode a series of images into H264 using the x264 C API?
[3] http://www.ffmpeg.org/ffmpeg-doc.html#SEC41
This functionality is now (at least as of 1.2.0, maybe 1.1) baked into matplotlib via the MovieWriter class and its subclasses in the animation module. You also need to install ffmpeg in advance.
import matplotlib.animation as animation
import numpy as np
from pylab import *

dpi = 100

def ani_frame():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    im = ax.imshow(rand(300, 300), cmap='gray', interpolation='nearest')
    im.set_clim([0, 1])
    fig.set_size_inches([5, 5])
    tight_layout()

    def update_img(n):
        tmp = rand(300, 300)
        im.set_data(tmp)
        return im

    # legend(loc=0)
    ani = animation.FuncAnimation(fig, update_img, 300, interval=30)
    writer = animation.writers['ffmpeg'](fps=30)

    ani.save('demo.mp4', writer=writer, dpi=dpi)
    return ani
Documentation for the animation module.
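If you would rather push frames to the encoder yourself instead of going through FuncAnimation, the same writer classes can also be driven directly through their saving context manager and grab_frame. A minimal sketch of that approach (assuming ffmpeg is on your PATH; the filename and frame count are just placeholders):

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(300, 300), cmap='gray', vmin=0, vmax=1)

writer = animation.FFMpegWriter(fps=30)
with writer.saving(fig, 'demo_stream.mp4', dpi=100):
    for _ in range(300):
        im.set_data(np.random.rand(300, 300))  # update the artist in place
        writer.grab_frame()                    # hand the current canvas to ffmpeg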
After patching ffmpeg (see Joe Kington's comments on my question), I was able to pipe PNGs to ffmpeg as follows:
import subprocess
import numpy as np
import matplotlib

matplotlib.use('Agg')
import matplotlib.pyplot as plt

outf = 'test.avi'
rate = 1

cmdstring = ('local/bin/ffmpeg',
             '-r', '%d' % rate,
             '-f', 'image2pipe',
             '-vcodec', 'png',
             '-i', 'pipe:', outf
             )
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

plt.figure()
frames = 10
for i in range(frames):
    plt.imshow(np.random.randn(100, 100))
    plt.savefig(p.stdin, format='png')

# close the pipe so ffmpeg finalizes the file
p.stdin.close()
p.wait()
It would not work without the patch, which trivially modifies two files and adds libavcodec/png_parser.c. I had to manually apply the patch to libavcodec/Makefile. Lastly, I removed '-number' from Makefile to get the man pages to build. The compile options were:
FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
built on Nov 30 2010 20:42:02 with gcc 4.2.1 (Apple Inc. build 5664)
configuration: --prefix=/Users/paul/local_test --enable-gpl --enable-postproc --enable-swscale --enable-libxvid --enable-libx264 --enable-nonfree --mandir=/Users/paul/local_test/share/man --enable-shared --enable-pthreads --disable-indevs --cc=/usr/bin/gcc-4.2 --arch=x86_64 --extra-cflags=-I/opt/local/include --extra-ldflags=-L/opt/local/lib
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0
Converting to image formats is quite slow and adds dependencies. After looking at this page and others, I got it working with raw uncompressed buffers piped to mencoder (an ffmpeg solution is still wanted).
Details at: http://vokicodder.blogspot.com/2011/02/numpy-arrays-to-video.html
import subprocess
import numpy as np

class VideoSink(object):

    def __init__(self, size, filename="output", rate=10, byteorder="bgra"):
        self.size = size
        cmdstring = ('mencoder',
                     '/dev/stdin',
                     '-demuxer', 'rawvideo',
                     '-rawvideo', 'w=%i:h=%i' % size[::-1] + ":fps=%i:format=%s" % (rate, byteorder),
                     '-o', filename + '.avi',
                     '-ovc', 'lavc',
                     )
        self.p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE, shell=False)

    def run(self, image):
        assert image.shape == self.size
        self.p.stdin.write(image.tostring())

    def close(self):
        self.p.stdin.close()
I got some nice speedups.
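A usage sketch for the class above might look like the following (the frame size, frame count, and filename are only placeholders; with the default "bgra" byte order each frame needs four bytes per pixel, which a uint32 array happens to provide):

import numpy as np

size = (480, 640)                     # (height, width) of every frame
sink = VideoSink(size, filename="noise", rate=10, byteorder="bgra")
for _ in range(100):
    # one uint32 per pixel == 4 bytes == one bgra pixel
    frame = (np.random.rand(*size) * 0xFFFFFFFF).astype(np.uint32)
    sink.run(frame)
sink.close()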
These are all really great answers. Here's another suggestion. @user621442 is correct that the bottleneck is typically the writing of the image, so if you are writing png files to your video compressor, it will be pretty slow (even if you are sending them through a pipe instead of writing to disk). I found a solution using pure ffmpeg, which I personally find easier to use than matplotlib.animation or mencoder.
Also, in my case, I wanted to just save the image in an axis, instead of saving all of the tick labels, figure title, figure background, etc. Basically I wanted to make a movie/animation using matplotlib code, but not have it "look like a graph". I've included that code here, but you can make standard graphs and pipe them to ffmpeg instead if you want.
import matplotlib
matplotlib.use('agg', warn=False, force=True)
import matplotlib.pyplot as plt
import subprocess

# create a figure window that is the exact size of the image
# 400x500 pixels in my case
# don't draw any axis stuff ... thanks to @Joe Kington for this trick
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
f = plt.figure(frameon=False, figsize=(4, 5), dpi=100)
canvas_width, canvas_height = f.canvas.get_width_height()
ax = f.add_axes([0, 0, 1, 1])
ax.axis('off')

def update(frame):
    # your matplotlib code goes here
    pass

# Open an ffmpeg process
outf = 'ffmpeg.mp4'
cmdstring = ('ffmpeg',
             '-y', '-r', '30',  # overwrite, 30 fps
             '-s', '%dx%d' % (canvas_width, canvas_height),  # size of image string
             '-pix_fmt', 'argb',  # pixel format of the raw frames
             '-f', 'rawvideo', '-i', '-',  # tell ffmpeg to expect raw video from the pipe
             '-vcodec', 'mpeg4', outf)  # output encoding
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

# Draw 1000 frames and write to the pipe
for frame in range(1000):
    # draw the frame
    update(frame)
    plt.draw()

    # extract the image as an ARGB string
    string = f.canvas.tostring_argb()

    # write to pipe
    p.stdin.write(string)

# Finish up
p.communicate()
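For reference, a purely illustrative update that could replace the placeholder above: it slides a sine curve across the axes (define it right after ax is created, before the frame loop). Anything that mutates artists on ax works the same way:

import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
line, = ax.plot(x, np.sin(x), 'k-', lw=2)
ax.set_xlim(0, 2 * np.pi)
ax.set_ylim(-1.2, 1.2)

def update(frame):
    # shift the sine wave a little on every frame
    line.set_ydata(np.sin(x + frame / 30.0))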
This is great! I wanted to do the same. But I could never compile the patched ffmpeg source (0.6.1) in Vista with a MingW32+MSYS+pr environment... png_parser.c produced Error 1 during compilation.
So I came up with a JPEG solution using PIL. Just put ffmpeg.exe in the same folder as this script. This should work with unpatched ffmpeg under Windows. I had to use the stdin.write method rather than the communicate method recommended in the official subprocess documentation. Note that the second -vcodec option specifies the encoding codec. The pipe is closed by p.stdin.close().
import subprocess
import numpy as np
from PIL import Image

rate = 1
outf = 'test.avi'

cmdstring = ('ffmpeg.exe',
             '-y',
             '-r', '%d' % rate,
             '-f', 'image2pipe',
             '-vcodec', 'mjpeg',
             '-i', 'pipe:',
             '-vcodec', 'libxvid',
             outf
             )
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE, shell=False)

for i in range(10):
    im = Image.fromarray(np.uint8(np.random.randn(100, 100)))
    p.stdin.write(im.tostring('jpeg', 'L'))
    # p.communicate(im.tostring('jpeg', 'L'))

p.stdin.close()
Here is a modified version of @tacaswell's answer. It changes the following:
- It does not require the pylab dependency.
- It fixes several places so that the function is directly runnable. (The original cannot be copied, pasted, and run directly; several spots have to be fixed first.)
Thanks so much for @tacaswell's wonderful answer!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation


def ani_frame():
    def gen_frame():
        return np.random.rand(300, 300)

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    im = ax.imshow(gen_frame(), cmap='gray', interpolation='nearest')
    im.set_clim([0, 1])
    fig.set_size_inches([5, 5])
    plt.tight_layout()

    def update_img(n):
        tmp = gen_frame()
        im.set_data(tmp)
        return im

    # legend(loc=0)
    ani = animation.FuncAnimation(fig, update_img, 300, interval=30)
    writer = animation.writers['ffmpeg'](fps=30)

    ani.save('demo.mp4', writer=writer, dpi=72)
    return ani
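With the imports in place, the whole thing can be run as a script:

if __name__ == '__main__':
    ani_frame()   # writes demo.mp4 to the current directory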