I'm accessing the camera in iOS and using a session preset like so: captureSession.sessionPreset = AVCaptureSessionPresetMedium;
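A minimal sketch of guarding that assignment with -canSetSessionPreset:, so the session degrades gracefully on hardware that doesn't support the requested preset (the fallback preset chosen here is an assumption):

    #import <AVFoundation/AVFoundation.h>

    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];

    // Only assign a preset the current device/session combination supports.
    if ([captureSession canSetSessionPreset:AVCaptureSessionPresetMedium]) {
        captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    } else {
        // Fallback; AVCaptureSessionPresetLow is broadly available.
        captureSession.sessionPreset = AVCaptureSessionPresetLow;
    }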
I have a video player that needs to play a sequence of videos from the network. The URLs for these videos are not known in advance, as they come from XML or JSON responses from other HTTP requests.
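One way to handle this is AVQueuePlayer, building the queue only after the metadata request returns. A sketch, assuming the response body (responseData) parses to a flat JSON array of URL strings; a real payload shape will differ:

    #import <AVFoundation/AVFoundation.h>

    NSError *jsonError = nil;
    NSArray *urlStrings = [NSJSONSerialization JSONObjectWithData:responseData
                                                          options:0
                                                            error:&jsonError];

    NSMutableArray *items = [NSMutableArray array];
    for (NSString *urlString in urlStrings) {
        NSURL *url = [NSURL URLWithString:urlString];
        if (url) {
            [items addObject:[AVPlayerItem playerItemWithURL:url]];
        }
    }

    // AVQueuePlayer advances to the next item automatically as each one ends.
    AVQueuePlayer *player = [AVQueuePlayer queuePlayerWithItems:items];
    [player play];

URLs that arrive later can be appended with -insertItem:afterItem: without interrupting playback.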
How fast do I need to draw with CVDisplayLink? Am I correct in thinking that, after drawing my scene in my display link callback, if CVGetCurrentHostTime() > outputTime->hostTime, t…
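For context, a sketch of the timing check being described, inside a CVDisplayLink output callback; drawScene() is a stand-in for the actual rendering code:

    #import <Foundation/Foundation.h>
    #import <CoreVideo/CoreVideo.h>

    static void drawScene(void *context) { /* render here */ }

    static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
                                          const CVTimeStamp *inNow,
                                          const CVTimeStamp *inOutputTime,
                                          CVOptionFlags flagsIn,
                                          CVOptionFlags *flagsOut,
                                          void *context)
    {
        drawScene(context);

        // If the current host time is already past this frame's output time,
        // drawing finished after its display deadline, i.e. the frame was late.
        if (CVGetCurrentHostTime() > inOutputTime->hostTime) {
            NSLog(@"Frame missed its output deadline");
        }
        return kCVReturnSuccess;
    }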
I have a CVImageBuffer that comes back with a recorded height of 640px and width of 852px. The bytes per row are 3456. You'll notice that 3456/852px != 4 (it's something like 4.05). After some inspection…
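The mismatch is row padding: 852 × 4 = 3408 bytes of real pixel data, and 3456 is 3408 rounded up to the next multiple of 64, which suggests CoreVideo is aligning each row to a 64-byte boundary. The practical consequence is that the buffer must be walked using the reported bytes-per-row, not width × 4. A sketch (BGRA layout assumed):

    #import <CoreVideo/CoreVideo.h>

    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width       = CVPixelBufferGetWidth(imageBuffer);        // 852
    size_t height      = CVPixelBufferGetHeight(imageBuffer);       // 640
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);  // 3456, includes padding
    uint8_t *base      = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

    for (size_t y = 0; y < height; y++) {
        // Advance by the padded stride; only the first width * 4 bytes are pixels.
        uint8_t *row = base + y * bytesPerRow;
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);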
I have an audio file and a video file containing raw audio and video data respectively. I have successfully played the audio file on iOS using the Core Audio and AudioToolbox frameworks. Now I want to play…
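The question is cut off, so the following is only a guess at the video half: if the goal is to get raw frames on screen, one starting point is wrapping each frame's bytes in a CVPixelBuffer that a rendering path (an OpenGL ES texture, a layer's contents, etc.) can consume. frameBytes, frameWidth, frameHeight, bytesPerRow, and the 32BGRA format are all assumptions about the raw data:

    #import <CoreVideo/CoreVideo.h>

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   frameWidth,
                                                   frameHeight,
                                                   kCVPixelFormatType_32BGRA, // assumed format
                                                   frameBytes,
                                                   bytesPerRow,
                                                   NULL,  // release callback
                                                   NULL,  // release refCon
                                                   NULL,  // buffer attributes
                                                   &pixelBuffer);
    if (result == kCVReturnSuccess) {
        // Hand the buffer to the renderer, then release it.
        CVPixelBufferRelease(pixelBuffer);
    }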
I am capturing video using the AVFoundation framework, with the help of the Apple documentation at http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articl…
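A condensed sketch of the setup that guide walks through: a session, a camera input, and a video data output delivering frames to a delegate (self is assumed to be a class adopting AVCaptureVideoDataOutputSampleBufferDelegate):

    #import <AVFoundation/AVFoundation.h>

    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera
                                                                        error:&error];
    if (input && [session canAddInput:input]) {
        [session addInput:input];
    }

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }

    [session startRunning];

Frames then arrive in -captureOutput:didOutputSampleBuffer:fromConnection: on the queue given above.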
I noticed that accessing the pixels returned by CVPixelBufferGetBaseAddress directly (I'm using two nested for-loops) is about 100 times slower than first allocating a buffer with malloc…
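That gap is plausible because the base address frequently points at memory that is not cached for CPU reads (it may be GPU/DMA-backed), so per-byte loop accesses stall, while memcpy issues wide sequential reads once. A sketch of the fast path being described:

    #import <CoreVideo/CoreVideo.h>
    #include <stdlib.h>
    #include <string.h>

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t height      = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t size        = height * bytesPerRow;

    // One bulk copy out of the pixel buffer into ordinary malloc'd memory.
    uint8_t *copy = malloc(size);
    memcpy(copy, CVPixelBufferGetBaseAddress(pixelBuffer), size);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // The nested for-loops now run against normal cached memory.
    // ... per-pixel work on `copy` ...
    free(copy);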
I'm sure something's wrong with my buffer attributes, but it's not clear to me what -- it's not well documented what's supposed to go there, so I'm guessing based on CVPixelBufferPoolCreate -- a…
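Since the question is cut off before the actual dictionary, here is a sketch of a minimal attribute set that is commonly sufficient for CVPixelBufferPoolCreate; the format and dimensions are example values, and the pool-level attributes (second parameter) can simply be NULL:

    #import <CoreVideo/CoreVideo.h>

    NSDictionary *pixelBufferAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
        (id)kCVPixelBufferWidthKey           : @640,
        (id)kCVPixelBufferHeightKey          : @480,
    };

    CVPixelBufferPoolRef pool = NULL;
    CVReturn result = CVPixelBufferPoolCreate(kCFAllocatorDefault,
                                              NULL, // pool attributes are optional
                                              (__bridge CFDictionaryRef)pixelBufferAttributes,
                                              &pool);
    if (result == kCVReturnSuccess) {
        CVPixelBufferRef buffer = NULL;
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer);
        // ... use buffer, then CVPixelBufferRelease(buffer) / CVPixelBufferPoolRelease(pool)
    }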
I've got an app where the user shoots some video, enters a title for it, and picks a music track. I've got the music dubbing working with AVMutableComposition, but the titling is a bad hack -- just…
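The non-hack route for titles is AVVideoCompositionCoreAnimationTool, which composites a Core Animation layer tree over the video during export. A sketch, assuming videoComposition is an AVMutableVideoComposition already configured for the composition and titleText holds the user's title:

    #import <AVFoundation/AVFoundation.h>
    #import <QuartzCore/QuartzCore.h>

    CGSize renderSize = videoComposition.renderSize;

    CATextLayer *titleLayer = [CATextLayer layer];
    titleLayer.string   = titleText;
    titleLayer.fontSize = 36;
    titleLayer.frame    = CGRectMake(0, renderSize.height - 80, renderSize.width, 60);

    CALayer *videoLayer  = [CALayer layer];
    CALayer *parentLayer = [CALayer layer];
    videoLayer.frame  = CGRectMake(0, 0, renderSize.width, renderSize.height);
    parentLayer.frame = videoLayer.frame;
    [parentLayer addSublayer:videoLayer];
    [parentLayer addSublayer:titleLayer]; // title rendered over the video

    videoComposition.animationTool =
        [AVVideoCompositionCoreAnimationTool
            videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                    inLayer:parentLayer];

The video composition is then assigned to the AVAssetExportSession's videoComposition property alongside the AVMutableComposition.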
I have a QTMovie open in QTKit. I need to get each frame of this video in YV12 format (kYUV420PixelFormat), in real time (i.e. I'm passing it to foreign code which only accepts YV12 and needs to play t…
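One route worth sketching is QTKit's frame-image API, asking for a CVPixelBuffer; whether it will hand back kYUV420PixelFormat directly is not guaranteed, and a QuickTime visual context (QTPixelBufferContextCreate with kCVPixelBufferPixelFormatTypeKey set) may be the more reliable way to force YV12. movie, frameNumber, and timeScale are assumed:

    #import <QTKit/QTKit.h>

    // Ask for the frame as a CVPixelBufferRef rather than an NSImage.
    NSDictionary *attrs =
        [NSDictionary dictionaryWithObject:QTMovieFrameImageTypeCVPixelBufferRef
                                    forKey:QTMovieFrameImageType];

    NSError *error = nil;
    CVPixelBufferRef frame =
        (CVPixelBufferRef)[movie frameImageAtTime:QTMakeTime(frameNumber, timeScale)
                                   withAttributes:attrs
                                            error:&error];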