Camera digital zoom in iOS 4.0 and later

How can I implement a digital zoom slider for the camera? I use the following APIs: AVCaptureVideoPreviewLayer, AVCaptureSession, AVCaptureVideoDataOutput, AVCaptureDeviceInput.

I would like to have the same slider that is available in the iPhone 4 camera app.

Thanks in advance for any tips and examples!


I'm a newbie, and I have tried doing a zoom with the AVFoundation framework only, using an AVCaptureVideoPreviewLayer, and I can't make it work either. I think it's because that layer has its own AVCaptureSession which controls its own output; even though I added it as a sublayer of a UIScrollView, it still runs on its own, and the scroll view can't affect the preview layer.
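To make that concrete, here is a minimal sketch of what I would try instead of the scroll view: drive a scale transform on the preview layer from a UISlider. This is untested modern Swift, the class and variable names are mine, and it only zooms the on-screen preview, not the frames the session delivers to its outputs.

import UIKit
import AVFoundation

// Sketch only: slider-driven digital zoom of the preview layer.
class CameraZoomViewController: UIViewController {
    let session = AVCaptureSession()
    var previewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Attach the default camera to the session.
        if let camera = AVCaptureDevice.default(for: .video),
           let input = try? AVCaptureDeviceInput(device: camera),
           session.canAddInput(input) {
            session.addInput(input)
        }

        // The preview layer holds the session; insert it like any other CALayer.
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.masksToBounds = true   // clip the scaled-up layer to the view
        view.layer.addSublayer(previewLayer)

        // A slider from 1x to 4x, like the built-in camera app's zoom control.
        let zoomSlider = UISlider(frame: CGRect(x: 20, y: view.bounds.height - 60,
                                                width: view.bounds.width - 40, height: 40))
        zoomSlider.minimumValue = 1.0
        zoomSlider.maximumValue = 4.0
        zoomSlider.value = 1.0
        zoomSlider.addTarget(self, action: #selector(zoomChanged(_:)), for: .valueChanged)
        view.addSubview(zoomSlider)

        session.startRunning()
    }

    @objc func zoomChanged(_ slider: UISlider) {
        // Scaling the layer zooms only what is drawn on screen;
        // any AVCaptureVideoDataOutput still receives unzoomed frames.
        let zoom = CGFloat(slider.value)
        previewLayer.setAffineTransform(CGAffineTransform(scaleX: zoom, y: zoom))
    }
}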

From WWDC session 419, "Capture from camera using AVFoundation in iOS 5", Brad Ford said: "AVCaptureVideoPreviewLayer does NOT inherit from AVCaptureOutput (like AVCaptureVideoDataOutput does). It inherits from CALayer, so it can be inserted into a Core Animation layer tree (like other layers). In AVFoundation, the AVCaptureSession owns its outputs, but it does NOT own its layers; the layers own the session. So if you want to insert a layer into a view hierarchy, you attach a session to it and forget about it. Then, when the layer tree disposes of itself, it will clean up the session as well."
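Here is my reading of that ownership model in a few lines (a hedged sketch in Swift; the variable names are mine):

import AVFoundation

let session = AVCaptureSession()

// Outputs are owned BY the session: you add them to it.
let dataOutput = AVCaptureVideoDataOutput()
if session.canAddOutput(dataOutput) {
    session.addOutput(dataOutput)
}

// The preview layer is NOT an output; it takes ownership of the session instead.
// Hand the session to the layer, insert the layer, and "forget about it".
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
// someView.layer.addSublayer(previewLayer)   // someView is a placeholder UIView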

I have seen Brad Larson use a combination of OpenGL ES and the AVFoundation framework, with an AVCaptureVideoPreviewLayer where he can adjust the raw data from the camera, at http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios, so I assume that's the place to start. Check out his ColorTrackingCamera app. It uses shaders, which you (and I) don't need for zooming, but I think a similar mechanism can be used to zoom.

Oh, I forgot to mention that Brad Larson does NOT attach the AVCaptureInput to the AVCaptureSession. I can see that he also uses the main thread for his queue instead of creating his own queue on another thread. His OpenGL ES drawFrame method is also how he renders the image; the capture session itself is not doing that. So if you understand more of this, or my assumptions are wrong, please let me know too.

Hope this helps, but since I am new to all of this and to OpenGL ES, I am assuming that library can be used to zoom if we can capture each frame and turn it into a UIImage with a different resolution and/or frame size (see the sketch below).
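Here is what I mean, as a hedged sketch. It uses modern Swift and Core Image instead of OpenGL ES shaders, so it only illustrates the crop-and-rescale mechanism, not Brad Larson's actual code; the class name and zoomFactor property are mine.

import UIKit
import CoreImage
import AVFoundation

// Sketch only: grab each frame, crop the center 1/zoom of it, and hand back
// a UIImage, which is one way to fake a digital zoom without shaders.
class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    var zoomFactor: CGFloat = 2.0          // e.g. driven by a slider
    private let context = CIContext()

    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        // Frames arrive on this serial background queue (Brad Larson uses the
        // main queue instead; either works, with different trade-offs).
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let frame = CIImage(cvPixelBuffer: pixelBuffer)

        // Digital zoom: keep only the center (width/zoom) x (height/zoom) region.
        let extent = frame.extent
        let cropRect = CGRect(x: extent.midX - extent.width / (2 * zoomFactor),
                              y: extent.midY - extent.height / (2 * zoomFactor),
                              width: extent.width / zoomFactor,
                              height: extent.height / zoomFactor)

        guard let cgImage = context.createCGImage(frame.cropped(to: cropRect),
                                                  from: cropRect) else { return }
        let zoomedFrame = UIImage(cgImage: cgImage)
        _ = zoomedFrame   // in a real app, display this on the main queue
    }
}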

Jeff W.
