Parsing frame-by-frame from .mov using ffmpeg
I'm trying to parse H.264 frames from a .mov file. I think I've come to the conclusion that mov.c from the AVFormat part of FFmpeg is the way to go, but mov.c is ~2600 lines of next to uncommented code. I'm looking for examples of how to use FFmpeg, especially for parsing the structure of any file type. It doesn't matter whether it is MPEG-4 or QuickTime Movie, since they are quite similar in structure.
If there are no existing examples (I can't find any), maybe someone who has used it can give me a couple of lines of code, or explain how to get started?
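To be concrete about the kind of example I mean: something at the level of opening the file with libavformat and dumping its layout, like this sketch (not verified, the path is just a placeholder):

    // Sketch (not verified): open a file with libavformat and print its
    // container/stream layout, as a quick way to explore a file's structure
    // without reading mov.c. The path is just a placeholder.
    #include <libavformat/avformat.h>

    static void inspectFile(const char *path)
    {
        AVFormatContext *fmtCtx = NULL;

        av_register_all();                                // register all demuxers/decoders
        if(avformat_open_input(&fmtCtx, path, NULL, NULL) != 0)
            return;                                       // couldn't open file
        if(avformat_find_stream_info(fmtCtx, NULL) < 0) {
            avformat_close_input(&fmtCtx);
            return;                                       // couldn't read stream info
        }
        av_dump_format(fmtCtx, 0, path, 0);               // prints the stream/codec info to stderr
        avformat_close_input(&fmtCtx);
    }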
What I'm trying to do: I use AVCaptureSession to capture samples from the video camera. These samples are then encoded to H.264 and written to file with the help of AVAssetWriter, AVAssetWriterInput and AVAssetWriterInputPixelBufferAdaptor. The reason is that I can't access the hardware H.264 encoder directly, since Apple won't allow this. What I now need to do (I think, I'm not sure) is parse out:
The "mdat"-atom (Movie data, there might be more than one i think) from the .mov file. then the "vide"-atom and then within the vide-atom (Video data sample, there might be more than one). I think there will be several atoms which i belive is the frames. these will be of type "avc1" (that's the type for H264). Please correct me in this because i'm quite sure that i havn't gotten all of this correctly yet.
My question then is: how do I go about parsing out the single frames? I've been reading the documentation and looked at iFrameExtractor (which is not very helpful, since it decodes the frames). I think I've understood it correctly that I'm supposed to use mov.c from FFmpeg's AVFormat, but I'm not sure.
Edit: I'm now trying it like this:
I run the slightly reduced init function from iFrameExtractor, which finds the video stream in the .mov file.
I get the data for the frame like this:
AVPacket packet;
av_read_frame(pFormatCtx, &packet);
NSData *frame;
if(packet.stream_index == videoStream){
    frame = [NSData dataWithBytes:packet.data length:packet.size];
}
videoStream++;
av_free_packet(&packet);
return frame;
I then pass it to a subclass of NSOperation where it is retained while waiting for upload.
But I get an EXC_BAD_ACCESS. Am I doing something wrong when copying the data from the frame? Any ideas? I get the EXC_BAD_ACCESS when I try to set the instance variable NSData *frame using its (nonatomic, retain) property (the crash is reported on the @synthesize line).
I use the following to parse each frame from the .mov file.
-(NSData *)nextFrame {
    AVPacket packet;
    NSData *frame = nil;

    // Keep reading packets until we hit one from the video stream.
    while(!frame && av_read_frame(pFormatCtx, &packet) >= 0) {
        if(packet.stream_index == streamNo) {
            // Copy the packet payload so it survives av_free_packet.
            frame = [[[NSData alloc] initWithBytes:packet.data length:packet.size] autorelease];
        }
        av_free_packet(&packet);
    }
    return frame;
}
Although watch out: av_read_frame does not verify the frames; that is done in the decoding step. This means that the "frames" returned might contain extra information which is not part of the actual frame.
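For reference, a caller could then drain the whole file like this (the extractor and upload names are just placeholders):

    NSData *frame;
    // Keep pulling encoded video packets until the end of the file is reached.
    while((frame = [extractor nextFrame]) != nil) {
        [uploader enqueueFrameData:frame];   // e.g. hand off to an NSOperation for upload
    }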
To init the AVFormatContext *pFormatCtx and AVCodecContext *pCodecCtx I use this code (which I believe is derived from Martin Böhme's example code):
AVCodec *pCodec;

// Register all formats and codecs
av_register_all();

// Open video file
if(avformat_open_input(&pFormatCtx, [moviePath cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL) != 0)
    goto initError; // Couldn't open file

// Retrieve stream information
if(avformat_find_stream_info(pFormatCtx, NULL) < 0)
    goto initError; // Couldn't find stream information

// Find the video stream
streamNo = -1;
for(int i = 0; i < pFormatCtx->nb_streams; i++){
    if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
    {
        streamNo = i;
        break;
    }
}
if(streamNo == -1)
    goto initError; // Didn't find a video stream

// Get a pointer to the codec context for the video stream
pCodecCtx = pFormatCtx->streams[streamNo]->codec;

// Find the decoder for the video stream
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec == NULL)
    goto initError; // Codec not found

// Open codec
if(avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    goto initError; // Could not open codec

return self;

initError:
    NSLog(@"initError in VideoFrameExtractor");
    [self release];
    return nil;
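For completeness, a teardown along these lines should match the setup above (a sketch only, assuming the same ivars, untested):

    // Rough teardown sketch for the contexts set up above (not from the original code).
    - (void)dealloc {
        if(pCodecCtx)
            avcodec_close(pCodecCtx);          // closes the decoder opened with avcodec_open2
        if(pFormatCtx)
            avformat_close_input(&pFormatCtx); // frees the context from avformat_open_input
        [super dealloc];
    }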
Hope this helps someone in the future.
There's a pretty good tutorial on using libavcodec/libavformat here. The bit it sounds like you're interested in is the DoSomethingWithTheImage() function they've left unimplemented.
If you stream H.264 to iOS you need segmented streaming (aka Apple HTTP Live Streaming).
Here is an open source project: http://code.google.com/p/httpsegmenter/