Approach for recording grayscale video on iPhone?
I am building an iPhone app that needs to record grayscale video and save it to the camera roll. I'm stumped as to how best to approach this.
I am thinking along the following lines:
- Use a shader and OpenGL ES to transform the video to grayscale
- Use AVFoundation (AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to the file.
My questions are:
- Is this the right approach (simplest, best performance)?
- If so, what would be the best way to go from OpenGL ES output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
- If not, what would be a better approach?
Any nudge in the right direction is much appreciated!
In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.
You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
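For the grayscale conversion itself, a fragment shader along these lines should work. This is only a sketch: the varying and uniform names (`textureCoordinate`, `videoFrame`) are placeholders that would need to match your own vertex shader and texture setup, and it uses the standard Rec. 601 luminance weights.

```objc
// Hypothetical grayscale fragment shader, embedded as a string constant the way
// the sample application loads its shaders. Names are illustrative.
static NSString *const kGrayscaleFragmentShader = @"\
 varying highp vec2 textureCoordinate;\n\
 uniform sampler2D videoFrame;\n\
 \n\
 void main()\n\
 {\n\
     lowp vec4 color = texture2D(videoFrame, textureCoordinate);\n\
     // Rec. 601 luminance weights\n\
     lowp float luminance = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n\
     gl_FragColor = vec4(vec3(luminance), color.a);\n\
 }";
```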
In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
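A rough sketch of that hand-off might look like the following. This is not the sample's exact code: width, height, and the bound framebuffer are assumed to be set up already, and buffer release/free handling is omitted for brevity.

```objc
// Pull the rendered frame back from the FBO into CPU memory.
GLubyte *rawPixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rawPixels);

// Wrap the bytes in a CVPixelBufferRef for the asset writer.
// kCVPixelFormatType_32BGRA assumes the channel order matches what you read
// back; see the BGRA swizzling note in the update below.
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                             width, height,
                             kCVPixelFormatType_32BGRA,
                             rawPixels,
                             width * 4,
                             NULL, NULL, NULL, // no release callback in this sketch; free rawPixels yourself
                             &pixelBuffer);

// ... hand pixelBuffer to -[AVAssetWriterInputPixelBufferAdaptor
//     appendPixelBuffer:withPresentationTime:] ...

CVPixelBufferRelease(pixelBuffer);
```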
(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance of the encoding part. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write that out as live H.264 video in 640x480 frames on an iPhone 4 at 30 FPS (the maximum camera framerate).
There were a few things I needed to do in order to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as its color format for input pixel buffers. Then you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output when using glReadPixels(). Without this color setting, your video recording framerates will drop to 5-8 FPS on an iPhone 4, whereas with it they easily hit 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
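As a rough illustration of those two points (not the framework's exact code; the writer input variable and shader names are placeholders), the adaptor could be configured something like this, paired with a simple swizzling fragment shader:

```objc
// Ask the pixel buffer adaptor for BGRA input buffers.
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
        (id)kCVPixelBufferPixelFormatTypeKey,
    nil];
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                   sourcePixelBufferAttributes:bufferAttributes];

// Color-swizzling fragment shader sketch: sampling .bgra means the bytes that
// glReadPixels(..., GL_RGBA, GL_UNSIGNED_BYTE, ...) returns are already in BGRA
// order, matching the pixel format requested above.
NSString *const kColorSwizzleFragmentShader = @"\
 varying highp vec2 textureCoordinate;\n\
 uniform sampler2D inputTexture;\n\
 void main()\n\
 {\n\
     gl_FragColor = texture2D(inputTexture, textureCoordinate).bgra;\n\
 }";
```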
Using the GPUImage framework, your above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a target of a GPUImageSaturationFilter with the saturation set to 0, and then attaching a GPUImageMovieWriter as a target of that. The framework will handle the OpenGL ES interactions for you. I've done this, and it works well on all iOS devices I've tested.
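For reference, a minimal sketch of that pipeline might look like the following. The output URL, size, and orientation here are placeholders; check the framework's own examples for the details.

```objc
// Camera -> saturation filter (set to 0 for grayscale) -> movie writer.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
saturationFilter.saturation = 0.0; // saturation of 0 gives grayscale output

NSURL *movieURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"grayscale.m4v"]];
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

[videoCamera addTarget:saturationFilter];
[saturationFilter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];
// ... later, stop recording and, if desired, save the file to the camera roll
// (for example via UISaveVideoAtPathToSavedPhotosAlbum()).
[movieWriter finishRecording];
```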