Improved thumbnail extraction from videos
I have been using FFmpeg to grab the middle frame of an H.264 video file and extract a JPG thumbnail for use on a streaming portal. This is done automatically for each uploaded video.
Sometimes that frame happens to be black, or just semantically bad, e.g. a background or blurry shot that doesn't relate well to the video content.
I wonder if I can use OpenCV or some other method/library to programmatically find better thumbnails through facial recognition or frame analysis.
I've run into that problem myself and came up with a very crude-yet-simple algorithm to ensure my thumbnails were more "interesting". How?
- Create several thumbnails, each at a different point in the video, e.g. 5 of them
- Keep the largest file (in bytes) and discard the rest
Why does this work? Because a JPEG of a monotone 'boring' image, like an all-black screen, compresses to a much smaller file than an image with many objects and colors in it.
It's not perfect, but it's a viable 80/20 solution: it solves 80% of the problem with 20% of the work. Coding something that actually analyzes the image content would be considerably more work.
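A minimal shell sketch of that heuristic, assuming ffmpeg and ffprobe are on the PATH and the video path is passed as the first argument (the candidate count of 5 and the file names are arbitrary placeholders):

#!/bin/bash
# Crude "largest JPEG wins" thumbnail picker (sketch, not production code).
in="$1"
# Total duration in seconds, via ffprobe
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$in")
for i in 1 2 3 4 5; do
    # Grab one frame at 1/6, 2/6, ..., 5/6 of the duration
    t=$(echo "$dur * $i / 6" | bc -l)
    ffmpeg -v error -y -ss "$t" -i "$in" -frames:v 1 -q:v 2 "cand_$i.jpg"
done
# The biggest file compressed worst, i.e. holds the most detail: keep it
best=$(ls -S cand_*.jpg | head -n 1)
mv "$best" thumb.jpg
rm -f cand_*.jpg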
Libavfilter has a thumbnail filter, which is meant to pick the most representative frame from a series of frames. I'm not sure how it works, but here are the docs: http://ffmpeg.org/libavfilter.html#thumbnail
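For example (a minimal sketch; input.mp4 and the batch size of 100 are placeholders), this tells the filter to pick the most representative frame out of each batch of 100 frames, and stops after writing the first pick:

ffmpeg -i input.mp4 -vf thumbnail=100 -frames:v 1 thumb.jpg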
In case anyone needs a two-liner (using ffmpeg and ImageMagick):
(This picks at most 20 frames from the video, using gt(scene) to detect transition moments. ffmpeg produces 120-pixel-wide PNGs, and ImageMagick then assembles them into a GIF, because the GIFs ffmpeg produces are notoriously ugly. Note that the ./tmp directory must already exist, since ffmpeg won't create it. It might fail if nothing happens in the movie, but then you shouldn't call it a movie, should you?)
ffmpeg -i "$1" -loglevel error -vf "select=gt(scene\,0.1), scale=120:-1" -frames:v 20 -f image2 -vsync 0 -an ./tmp/img%05d.png
convert -delay 25 -loop 0 ./tmp/img*.png thumb.gif