
I am shooting video with the iPhone camera using AVCaptureSession, and my problem is converting each CMSampleBufferRef to H.264 with FFmpeg. Please advise.

My goal is to stream H.264/AAC in an MPEG2-TS container from the iPhone device to a server.

Currently my FFmpeg+libx264 build compiles successfully (I am aware of the GNU license). I would like a demo program.

What I want to know:

1. Is the CMSampleBufferRef to AVPicture conversion actually succeeding?

 avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);

pFrame's linesize and data are not NULL, but its pts is -9233123123, and the same is true for outpic. Because of this, I suspect it is behind the 'non-strictly-monotonic PTS' message.
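
To make question 1 concrete, here is a minimal sketch of the wrapping I am attempting. It assumes the AVCaptureVideoDataOutput is configured for kCVPixelFormatType_32BGRA, and the function name is only for illustration; whether PIX_FMT_BGRA and the buffer's own bytes-per-row are the right choices is exactly what I am unsure about:

#import <CoreVideo/CoreVideo.h>
#include "libavcodec/avcodec.h"

// Illustrative only: point an AVFrame/AVPicture at the camera's BGRA pixel data
// so sws_scale() can read it. Assumes kCVPixelFormatType_32BGRA capture output.
static AVFrame *frameFromPixelBuffer(CVImageBufferRef pixelBuffer)
{
    // Caller must have locked the base address and must keep it locked
    // (and keep pixelBuffer alive) while the returned frame is in use.
    AVFrame *frame = avcodec_alloc_frame();
    frame->data[0]     = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    frame->linesize[0] = (int)CVPixelBufferGetBytesPerRow(pixelBuffer); // not width * 4
    return frame;
}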

2. This log keeps repeating:

encoding frame (size=    0)
encoding frame = ""

'avcodec_encode_video' returning 0 is supposed to be success, but it always returns 0.

I don't know what to do...

2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame(); 
2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
Video encoding
2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
[libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
[libx264 @ 0x1441e00] non-strictly-monotonic PTS
encoding frame (size=    0)
encoding frame 
[libx264 @ 0x1441e00] final ratefactor: 26.74

3. I have to guess that the 'non-strictly-monotonic PTS' message is the cause of all these problems. What does 'non-strictly-monotonic PTS' mean?

~~~~~~~~~ this is the source ~~~~~~~~~~~~~~~~~~~~

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{

    if( !CMSampleBufferDataIsReady(sampleBuffer) )
    {
        NSLog( @"sample buffer is not ready. Skipping sample" );
        return;
    }


    if( [isRecordingNow isEqualToString:@"YES"] )
    {
        lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if( videoWriter.status != AVAssetWriterStatusWriting  )
        {
            [videoWriter startWriting];
            [videoWriter startSessionAtSourceTime:lastSampleTime];
        }

        if( captureOutput == videooutput )
        {
            [self newVideoSample:sampleBuffer];

            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
            CVPixelBufferLockBaseAddress(pixelBuffer, 0); 

            // access the data 
            int width = CVPixelBufferGetWidth(pixelBuffer); 
            int height = CVPixelBufferGetHeight(pixelBuffer); 
            unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer); 

            AVFrame *pFrame; 
            pFrame = avcodec_alloc_frame(); 
            pFrame->quality = 0;

            NSLog(@"pFrame = avcodec_alloc_frame(); ");

//          int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

//          int bytesSize = height * bytesPerRow ;  

//          unsigned char *pixel = (unsigned char*)malloc(bytesSize);

//          unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

//          memcpy (pixel, rowBase, bytesSize);


            int avpicture_fillNum = avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);//PIX_FMT_RGB32//PIX_FMT_RGB8
            //NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s",rawPixelBase, rawPixelBase); 
            NSLog(@"avpicture_fill = %i",avpicture_fillNum);
            //NSLog(@"width = %i,height = %i",width, height);



            // Do something with the raw pixels here 

            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); 

            //avcodec_init();
            //avdevice_register_all();
            av_register_all();





            AVCodec *codec;
            AVCodecContext *c= NULL;
            int  out_size, size, outbuf_size;
            //FILE *f;
            uint8_t *outbuf;

            printf("Video encoding\n");

            /* find the mpeg video encoder */
            codec =avcodec_find_encoder(CODEC_ID_H264);//avcodec_find_encoder_by_name("libx264"); //avcodec_find_encoder(CODEC_ID_H264);//CODEC_ID_H264);
            NSLog(@"codec = %i",codec);
            if (!codec) {
                fprintf(stderr, "codec not found\n");
                exit(1);
            }

            c= avcodec_alloc_context();

            /* put sample parameters */
            c->bit_rate = 400000;
            c->bit_rate_tolerance = 10;
            c->me_method = 2;
            /* resolution must be a multiple of two */
            c->width = 352;//width;//352;
            c->height = 288;//height;//288;
            /* frames per second */
            c->time_base= (AVRational){1,25};
            c->gop_size = 10;//25; /* emit one intra frame every ten frames */
            //c->max_b_frames=1;
            c->pix_fmt = PIX_FMT_YUV420P;

            c->me_range = 16;
            c->max_qdiff = 4;
            c->qmin = 10;
            c->qmax = 51;
            c->qcompress = 0.6f;

            /* open it */
            if (avcodec_open(c, codec) < 0) {
                fprintf(stderr, "could not open codec\n");
                exit(1);
            }


            /* alloc image and output buffer */
            outbuf_size = 100000;
            outbuf = malloc(outbuf_size);
            size = c->width * c->height;

            AVFrame* outpic = avcodec_alloc_frame();
            int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

            //create buffer for the output image
            uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

#pragma mark -  

            fflush(stdout);

//          int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
//          uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
//          
//          //UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
//          CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer];//[image CGImage];
//          
//          CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
//          CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
//          buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);   
//          
//          avpicture_fill((AVPicture*)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);
            avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

            struct SwsContext* fooContext = sws_getContext(c->width, c->height, 
                                                           PIX_FMT_RGB8, 
                                                           c->width, c->height, 
                                                           PIX_FMT_YUV420P, 
                                                           SWS_FAST_BILINEAR, NULL, NULL, NULL);

            //perform the conversion
            sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
            // Here is where I try to convert to YUV

            /* encode the image */

            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            printf("encoding frame (size=%5d)\n", out_size);
            printf("encoding frame %s\n", outbuf);


            //fwrite(outbuf, 1, out_size, f);

            //              free(buffer);
            //              buffer = NULL;      



            /* add sequence end code to have a real mpeg file */
//          outbuf[0] = 0x00;
//          outbuf[1] = 0x00;
//          outbuf[2] = 0x01;
//          outbuf[3] = 0xb7;
            //fwrite(outbuf, 1, 4, f);
            //fclose(f);
            free(outbuf);

            avcodec_close(c);
            av_free(c);
            av_free(pFrame);
            printf("\n");
        }
    }
}

This is because you initialize the AVCodecContext on every iteration of captureOutput:. The AVCodecContext holds the encoder's information and state continuously across the arriving frames, so you should do all of the initialization just once per session (or whenever the height, width, or other parameters change). That will also save you processing time. The messages you get are perfectly valid; they just notify you that the codec was opened and which codec-level decisions were made.
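
Roughly, the split looks like the sketch below. This is only an outline in plain C around the values from your own code; the Encoder struct and function names are made up for illustration, and you would still run sws_scale() into enc.yuv before each encode call:

#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"

typedef struct {
    AVCodecContext *ctx;
    AVFrame        *yuv;        /* reused YUV420P picture, filled by sws_scale() */
    uint8_t        *outbuf;
    int             outbufSize;
    int64_t         frameCount;
} Encoder;

/* Call once, when recording starts. */
static int encoder_open(Encoder *e, int width, int height)
{
    av_register_all();
    AVCodec *codec = avcodec_find_encoder(CODEC_ID_H264);
    if (!codec) return -1;

    e->ctx = avcodec_alloc_context();
    e->ctx->bit_rate  = 400000;
    e->ctx->width     = width;               /* must be a multiple of two */
    e->ctx->height    = height;
    e->ctx->time_base = (AVRational){1, 25};
    e->ctx->gop_size  = 10;
    e->ctx->pix_fmt   = PIX_FMT_YUV420P;
    if (avcodec_open(e->ctx, codec) < 0) return -1;

    e->yuv = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, width, height);
    avpicture_fill((AVPicture *)e->yuv, av_malloc(nbytes),
                   PIX_FMT_YUV420P, width, height);

    e->outbufSize = 100000;
    e->outbuf     = av_malloc(e->outbufSize);
    e->frameCount = 0;
    return 0;
}

/* Call once per captured frame, after sws_scale() has written into e->yuv. */
static int encoder_encode_frame(Encoder *e)
{
    e->yuv->pts = e->frameCount++;   /* strictly increasing: no more PTS warning */
    int size = avcodec_encode_video(e->ctx, e->outbuf, e->outbufSize, e->yuv);
    /* size == 0 for the first few frames only means libx264 is still
       buffering; write/stream e->outbuf when size > 0 */
    return size;
}

/* Call once when recording stops. */
static void encoder_close(Encoder *e)
{
    int size;
    while ((size = avcodec_encode_video(e->ctx, e->outbuf, e->outbufSize, NULL)) > 0) {
        /* write out the delayed frames libx264 was still holding */
    }
    avcodec_close(e->ctx);
    av_free(e->ctx);
    av_free(e->yuv->data[0]);
    av_free(e->yuv);
    av_free(e->outbuf);
}

With the context kept alive for the whole session and a strictly increasing pts stamped on every frame, the 'non-strictly-monotonic PTS' warning should disappear, and avcodec_encode_video should start returning non-zero sizes once the encoder's internal buffer fills.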
