Java - Xuggle - Best method to get a frame
I've been working with Xuggle for a week, and I wrote a method that extracts a frame from a video at a given time. However, if the video is long, this method takes far too long:
public static void getFrameBySec(IContainer container, int videoStreamId, IStreamCoder videoCoder, IVideoResampler resampler, double sec)
{
    BufferedImage javaImage = new BufferedImage(videoCoder.getWidth(), videoCoder.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    IConverter converter = ConverterFactory.createConverter(javaImage, IPixelFormat.Type.BGR24);
    IPacket packet = IPacket.make();
    while (container.readNextPacket(packet) >= 0)
    {
        if (packet.getStreamIndex() == videoStreamId)
        {
            IVideoPicture picture = IVideoPicture.make(videoCoder.getPixelType(), videoCoder.getWidth(), videoCoder.getHeight());
            int offset = 0;
            while (offset < packet.getSize())
            {
                int bytesDecoded = videoCoder.decodeVideo(picture, packet, offset);
                if (bytesDecoded < 0)
                    throw new RuntimeException("got error decoding video");
                offset += bytesDecoded;
                if (picture.isComplete())
                {
                    IVideoPicture newPic = picture;
                    if (resampler != null)
                    {
                        newPic = IVideoPicture.make(resampler.getOutputPixelFormat(), picture.getWidth(), picture.getHeight());
                        if (resampler.resample(newPic, picture) < 0)
                            throw new RuntimeException("could not resample video");
                    }
                    if (newPic.getPixelType() != IPixelFormat.Type.BGR24)
                        throw new RuntimeException("could not decode video as BGR24 data");
                    javaImage = converter.toImage(newPic);
                    try
                    {
                        double seconds = ((double) picture.getPts()) / Global.DEFAULT_PTS_PER_SECOND;
                        if (seconds >= sec && seconds <= (sec + (Global.DEFAULT_PTS_PER_SECOND)))
                        {
                            File file = new File(Config.MULTIMEDIA_PATH, "frame_" + sec + ".png");
                            ImageIO.write(javaImage, "png", file);
                            System.out.printf("at elapsed time of %6.3f seconds wrote: %s\n", seconds, file);
                            return;
                        }
                    }
                    catch (Exception e)
                    {
                        e.printStackTrace();
                    }
                }
            }
        }
        else
        {
            // This packet isn't part of our video stream, so we silently drop it.
        }
    }
    converter.delete();
}
Do you know a better way to do this?
Well, just from reading your code I can see some optimizations that can be made.
One: first read through the entire file once and build an index of byte offsets and seconds. The function can then look up the offset for the requested second, decode the video starting at that offset, and do the rest of your code from there (see the sketch below).
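A minimal sketch of that indexing pass (a hypothetical helper, not part of your code: it records stream timestamps rather than raw byte offsets, since seekKeyFrame takes timestamps, and it assumes you rewind or re-open the container after the scan):

// Hypothetical pre-scan: map each whole second to the timestamp of the first
// video packet seen for that second. timeBase is stream.getTimeBase().getDouble().
private static TreeMap<Long, Long> buildSecondIndex(IContainer container, int videoStreamId, double timeBase)
{
    TreeMap<Long, Long> index = new TreeMap<Long, Long>();
    IPacket probe = IPacket.make();
    while (container.readNextPacket(probe) >= 0)
    {
        if (probe.getStreamIndex() != videoStreamId)
            continue;
        long second = (long) (probe.getTimeStamp() * timeBase); // stream time-base units -> seconds
        if (!index.containsKey(second))
            index.put(second, probe.getTimeStamp());
    }
    return index; // the container is now at EOF; seek back (or re-open) before decoding
}

// Later, instead of reading from the start of the file every time:
// long ts = index.floorEntry((long) sec).getValue();
// container.seekKeyFrame(videoStreamId, ts, IContainer.SEEK_FLAG_BACKWARDS);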
Another option is to keep your method of reading through the whole file each time, but instead of running all that resampler, newPic, and image-converter code for every frame, check whether the seconds match first. Only if they do, resample and convert the picture into an image. So:
if (picture.isComplete())
{
    try
    {
        double seconds = ((double) picture.getPts()) / Global.DEFAULT_PTS_PER_SECOND;
        if (seconds >= sec && seconds <= (sec + (Global.DEFAULT_PTS_PER_SECOND)))
        {
            // resample the picture
            // convert it to a BufferedImage
            // do the file writing
        }
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
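Filled in with the code from the question, that branch might look like this (just a sketch; converter, resampler, and sec are the ones already in your method):

if (picture.isComplete())
{
    try
    {
        double seconds = ((double) picture.getPts()) / Global.DEFAULT_PTS_PER_SECOND;
        if (seconds >= sec)
        {
            // Only now pay for resampling and conversion.
            IVideoPicture newPic = picture;
            if (resampler != null)
            {
                newPic = IVideoPicture.make(resampler.getOutputPixelFormat(), picture.getWidth(), picture.getHeight());
                if (resampler.resample(newPic, picture) < 0)
                    throw new RuntimeException("could not resample video");
            }
            BufferedImage javaImage = converter.toImage(newPic);
            ImageIO.write(javaImage, "png", new File(Config.MULTIMEDIA_PATH, "frame_" + sec + ".png"));
            return;
        }
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}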
Use the seekKeyFrame option. You can use this function to seek to any time in the video file (time is in milliseconds):
double timeBase = 0;
int videoStreamId = -1;

private void seekToMs(IContainer container, long timeMs) {
    if (videoStreamId == -1) {
        for (int i = 0; i < container.getNumStreams(); i++) {
            IStream stream = container.getStream(i);
            IStreamCoder coder = stream.getStreamCoder();
            if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
                videoStreamId = i;
                timeBase = stream.getTimeBase().getDouble();
                break;
            }
        }
    }
    long seekTo = (long) (timeMs / 1000.0 / timeBase);
    container.seekKeyFrame(videoStreamId, seekTo, IContainer.SEEK_FLAG_BACKWARDS);
}
From there you use your usual while (container.readNextPacket(packet) >= 0) loop to decode the frames and write them to files.
Note: it won't seek to the exact time, only to a nearby key frame, so you'll still need to step through packets from there (but far fewer than before), as in the sketch below.
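Putting the two together, a rough sketch (reusing the decoding code from the question; videoCoder, resampler, and the resample/convert/write step are the same as above):

// Sketch: seek close to the requested second, then decode forward until the
// first complete picture at or past the target time.
seekToMs(container, (long) (sec * 1000));

IPacket packet = IPacket.make();
IVideoPicture picture = IVideoPicture.make(videoCoder.getPixelType(), videoCoder.getWidth(), videoCoder.getHeight());
long targetPts = (long) (sec * Global.DEFAULT_PTS_PER_SECOND);

while (container.readNextPacket(packet) >= 0)
{
    if (packet.getStreamIndex() != videoStreamId)
        continue;
    int offset = 0;
    while (offset < packet.getSize())
    {
        int bytesDecoded = videoCoder.decodeVideo(picture, packet, offset);
        if (bytesDecoded < 0)
            throw new RuntimeException("got error decoding video");
        offset += bytesDecoded;
        if (picture.isComplete() && picture.getPts() >= targetPts)
        {
            // picture is the frame at (or just after) the requested time:
            // resample, convert, and write it exactly as in the question, then stop.
            return;
        }
    }
}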