Alternatives to creating an OpenGL texture from a captured video frame to overlay an OpenGL view over video? (iPhone)
This is mostly relevant for augmented reality type applications. Apple provides information on how to capture video frames (and save them as images if need be) with AVCaptureSession here:
http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html
I know that it is possible to create an OpenGL texture out of a captured video frame and then use that as a background in the OpenGL view over which to overlay other graphics.
I am wondering if there are any alternatives to this method. The method mentioned above may be the best (I don't know if it is), but if there are alternatives to try it would be good to know. For example, is there a way to overlay the OpenGL view directly over the AVCaptureVideoPreviewLayer?
You can indeed layer OpenGL content over something like AVCaptureVideoPreviewLayer, but your performance will suffer. Apple highly recommends that you not overlay non-opaque OpenGL ES content on top of other display elements. From the OpenGL ES Programming Guide for iOS:
For the absolute best performance, your application should rely solely on OpenGL ES to render your content. To do this, size the view that holds your CAEAGLLayer object to match the screen, set its opaque property to YES, and ensure that no other Core Animation layers or views are visible. If your OpenGL ES layer is composited on top of other layers, making your CAEAGLLayer object opaque reduces but doesn’t eliminate the performance cost.
If your CAEAGLLayer object is blended on top of layers underneath it in the layer hierarchy, the renderbuffer’s color data must be in a premultiplied alpha format to be composited correctly by Core Animation. Blending OpenGL ES content on top of other content has a severe performance penalty.
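Following that advice from the guide, the layer setup looks something like the sketch below. This is untested and assumes an EAGL-backed view named `glView` whose backing layer is a `CAEAGLLayer`; the variable names are illustrative.

```objc
// Size the GL view to cover the screen and mark its layer opaque,
// so Core Animation doesn't have to blend it with layers beneath it.
glView.frame = [UIScreen mainScreen].bounds;

CAEAGLLayer *eaglLayer = (CAEAGLLayer *)glView.layer;
eaglLayer.opaque = YES;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8,        kEAGLDrawablePropertyColorFormat,
    nil];
```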
Honestly, it really isn't that hard to pull in the video as a texture and then display that as a billboard behind your 3-D overlay. My sample application here does passthrough of camera video to an OpenGL ES (2.0) texture for display to the screen. With only a few modifications, you could place 3-D content on top of that. This will give you far better performance than trying to draw non-opaque 3-D content on top of an AVCaptureVideoPreviewLayer.
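The texture-upload path described in QA1702 (linked in the question) boils down to a capture delegate like the following. This is a hedged sketch: it assumes the `AVCaptureVideoDataOutput` has been configured for `kCVPixelFormatType_32BGRA`, and `videoTexture` is a GL texture ID you created elsewhere.

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // GL_BGRA matches the capture pixel format, so the frame can be
    // uploaded directly with no CPU-side channel swizzling.
    glBindTexture(GL_TEXTURE_2D, videoTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_BGRA, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // Draw the texture on a full-screen quad, then render 3-D content over it.
}
```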
However, if you are just wanting to display simple static UIViews over OpenGL ES content, that can be done without much of a performance penalty (~5% reduction in framerate in my experience).
Sure, views can be layered together, regardless of content. Layering GL over video is no different from layering 2D over 2D.
Just about the only catch is that you need to render your GL content so that the image produced is premultiplied by alpha (just like all other transparent content on iOS is premultiplied).
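Premultiplied alpha just means each color channel is scaled by the pixel's alpha before compositing (and the matching GL blend setup is `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`). A minimal C illustration of the per-channel math, with a hypothetical helper name:

```c
#include <stdint.h>

/* Premultiply one 8-bit color channel by an 8-bit alpha.
   Core Animation expects the renderbuffer's color data in this
   form when the GL layer is blended over content beneath it.
   The +127 rounds to nearest instead of truncating. */
static uint8_t premultiply_channel(uint8_t channel, uint8_t alpha) {
    return (uint8_t)(((unsigned)channel * alpha + 127) / 255);
}
```

If you render with standard (non-premultiplied) blending, convert on output, e.g. in the fragment shader multiply `color.rgb` by `color.a` before writing it.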