I am trying to write an iPhone application which will do some real-time camera image processing. I used the example presented in the AVFoundation docs as a starting point: setting up a capture session, making an AVCaptureDeviceInput for the camera, and attaching an AVCaptureVideoDataOutput whose delegate receives each frame.
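That canonical setup, sketched minimally in Objective-C; the preset, queue name, and pixel format below are illustrative choices rather than the asker's, and the class is assumed to adopt AVCaptureVideoDataOutputSampleBufferDelegate:

#import <AVFoundation/AVFoundation.h>

- (void)setupCaptureSession
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Wrap the default camera in an input and attach it to the session.
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) [session addInput:input];

    // Deliver BGRA frames to the delegate on a background serial queue.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    dispatch_queue_t queue = dispatch_queue_create("capture.queue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:queue];
    [session addOutput:output];

    [session startRunning];
}

// Called once per frame; the per-frame image processing goes here.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... process pixelBuffer ...
}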
I know how to use AVAssetReader and AVAssetWriter, and have successfully used them to grab a video track from one movie and transcode it into another. However, I'd like to do this with audio as well.
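Assuming the same AVAssetReader/AVAssetWriter pair already driving the video track (asset, reader, writer, and the serial queue audioQueue below are assumed to exist), the audio leg is analogous: add an AVAssetReaderTrackOutput decoding to PCM and an AVAssetWriterInput encoding to AAC, then pump sample buffers across. A sketch:

// (requires #import <AVFoundation/AVFoundation.h>)
// Read the source audio track as uncompressed PCM; add before [reader startReading].
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
NSDictionary *readerSettings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM) };
AVAssetReaderTrackOutput *audioOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:readerSettings];
[reader addOutput:audioOutput];

// ...and write it back out as AAC.
AudioChannelLayout layout = {0};
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *writerSettings = @{
    AVFormatIDKey : @(kAudioFormatMPEG4AAC),
    AVSampleRateKey : @44100,
    AVNumberOfChannelsKey : @2,
    AVEncoderBitRateKey : @128000,
    AVChannelLayoutKey : [NSData dataWithBytes:&layout length:sizeof(layout)]
};
AVAssetWriterInput *audioInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:writerSettings];
[writer addInput:audioInput];

// Pump sample buffers from reader to writer, exactly as with the video track.
[audioInput requestMediaDataWhenReadyOnQueue:audioQueue usingBlock:^{
    while (audioInput.isReadyForMoreMediaData) {
        CMSampleBufferRef buffer = [audioOutput copyNextSampleBuffer];
        if (!buffer) { [audioInput markAsFinished]; break; }
        [audioInput appendSampleBuffer:buffer];
        CFRelease(buffer);
    }
}];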
I use the following code to create a video from a sequence of images stored in an array (takenImages); duration is the frame duration in seconds.
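The code itself did not survive the excerpt; what follows is a minimal reconstruction of the usual AVAssetWriter approach under stated assumptions: outputURL and size exist, and pixelBufferFromImage:size: is a hypothetical helper (one possible version is sketched at the end of this section):

// (requires #import <AVFoundation/AVFoundation.h>)
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];

NSDictionary *settings = @{ AVVideoCodecKey : AVVideoCodecH264,
                            AVVideoWidthKey : @(size.width),
                            AVVideoHeightKey : @(size.height) };
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                             sourcePixelBufferAttributes:nil];
[writer addInput:input];

[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// One frame per image, `duration` seconds apart (timescale 600 is conventional).
for (NSUInteger i = 0; i < takenImages.count; i++) {
    while (!input.isReadyForMoreMediaData) { [NSThread sleepForTimeInterval:0.05]; } // crude backpressure
    CVPixelBufferRef buffer = [self pixelBufferFromImage:takenImages[i] size:size]; // hypothetical helper
    CMTime time = CMTimeMake((int64_t)(i * duration * 600), 600);
    [adaptor appendPixelBuffer:buffer withPresentationTime:time];
    CVPixelBufferRelease(buffer);
}

[input markAsFinished];
[writer finishWritingWithCompletionHandler:^{ /* movie is ready at outputURL */ }];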
I am trying to develop an iPhone app that processes/filters and records video. I have two sample apps that have aspects of what I need and am trying to combine them.
I'm trying to use AVFoundation to crop videos I'm recording. So let's say I create an AVCaptureVideoPreviewLayer and set its frame to be 300x300.
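For the crop itself, one common technique (a sketch, not necessarily what the asker had in mind) is an AVMutableVideoComposition whose renderSize is the 300x300 crop rectangle, with a layer instruction that translates the source frame so the region of interest lands at the origin. Here recordedURL, croppedURL, cropX, and cropY are assumptions:

// (requires #import <AVFoundation/AVFoundation.h>)
AVAsset *asset = [AVAsset assetWithURL:recordedURL]; // recordedURL is assumed
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

AVMutableVideoComposition *composition = [AVMutableVideoComposition videoComposition];
composition.renderSize = CGSizeMake(300, 300);   // the crop rectangle's size
composition.frameDuration = CMTimeMake(1, 30);

AVMutableVideoCompositionInstruction *instruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

// Shift the frame so the region of interest starts at (0, 0); cropX/cropY are assumed offsets.
AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setTransform:CGAffineTransformMakeTranslation(-cropX, -cropY) atTime:kCMTimeZero];

instruction.layerInstructions = @[layerInstruction];
composition.instructions = @[instruction];

AVAssetExportSession *export =
    [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
export.videoComposition = composition;
export.outputFileType = AVFileTypeQuickTimeMovie;
export.outputURL = croppedURL; // assumed destination
[export exportAsynchronouslyWithCompletionHandler:^{ /* check export.status */ }];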
I have multiple AVAssets, and I create individual AVMutableCompositionTracks for each. I then create an AVMutableComposition, add each AVMutableCompositionTrack to it, and then create an AVAssetExportSession to export the composition.
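In outline, that sequence looks something like this (a sketch; assets is an assumed NSArray of AVAssets and outputURL an assumed destination):

// (requires #import <AVFoundation/AVFoundation.h>)
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

// Append each asset's video track end-to-end on the composition track.
CMTime cursor = kCMTimeZero;
for (AVAsset *asset in assets) {
    AVAssetTrack *source = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    NSError *error = nil;
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                        ofTrack:source
                         atTime:cursor
                          error:&error];
    cursor = CMTimeAdd(cursor, asset.duration);
}

AVAssetExportSession *export =
    [[AVAssetExportSession alloc] initWithAsset:composition
                                     presetName:AVAssetExportPresetHighestQuality];
export.outputURL = outputURL;
export.outputFileType = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{
    if (export.status == AVAssetExportSessionStatusCompleted) {
        // the stitched movie is at outputURL
    }
}];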
Looking at the docs, I should be able to use BGRA for the internal format of a texture. I am supplying the texture with BGRA data (using GL_RGBA8_OES for glRenderbufferStorage, as it seems BGRA there is invalid).
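The usual catch on iOS is that the APPLE_texture_format_BGRA8888 extension accepts BGRA only as the external format; the internal format parameter of glTexImage2D must stay GL_RGBA. A minimal sketch (width, height, and bgraPixels are assumed):

#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGBA,          // internal format: GL_BGRA is rejected here on ES
             width, height,
             0,
             GL_BGRA_EXT,      // external format: BGRA pixel data is fine here
             GL_UNSIGNED_BYTE,
             bgraPixels);      // width/height/bgraPixels are assumed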
I have an AVAudioRecorder that records sound. I also have a label. I would like to update the text on the label every second to show the recording time. How can I do this? You can use the recorder's currentTime property together with an NSTimer that fires once a second.
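A minimal sketch, assuming self.recorder (the AVAudioRecorder), self.timeLabel, and a timer property all exist:

// (requires #import <AVFoundation/AVFoundation.h> and UIKit)
- (void)startMeter
{
    // Fire once a second while recording.
    self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                  target:self
                                                selector:@selector(updateTimeLabel)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)updateTimeLabel
{
    NSTimeInterval t = self.recorder.currentTime; // seconds since recording started
    self.timeLabel.text = [NSString stringWithFormat:@"%02d:%02d", (int)t / 60, (int)t % 60];
}

// Remember to [self.timer invalidate] when recording stops so it does not keep firing.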
// Size the preview to the view's own coordinate space (bounds, not frame)
// and attach it as a sublayer so the camera feed becomes visible.
AVCaptureVideoPreviewLayer *avLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
avLayer.frame = self.view.bounds;
[self.view.layer addSublayer:avLayer];
I am using AVFoundation to create a movie file (mp4) using images and sound files. I have successfully created the movie file using AVAssetWriterInputPixelBufferAdaptor, which appends CVPixelBufferRefs (extracted from the images).
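For reference, the CVPixelBufferRef side usually comes from a helper along these lines (a sketch; the name pixelBufferFromImage:size: is illustrative, and it matches the hypothetical helper referenced earlier in this section):

// (requires #import <AVFoundation/AVFoundation.h> and UIKit)
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image size:(CGSize)size
{
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, (size_t)size.width, (size_t)size.height,
                        kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &buffer);

    CVPixelBufferLockBaseAddress(buffer, 0);
    void *base = CVPixelBufferGetBaseAddress(buffer);

    // Draw the UIImage into the pixel buffer's memory via a CG bitmap context.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(base, (size_t)size.width, (size_t)size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(buffer),
                                                 colorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer; // caller releases with CVPixelBufferRelease
}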