Capturing and manipulating microphone audio with AVCaptureSession?

While there are plenty of tutorials on how to use AVCaptureSession to grab camera data, I can find no information (even on Apple's dev network itself) on how to properly handle microphone data.

I have implemented AVCaptureAudioDataOutputSampleBufferDelegate, and I'm getting calls to my delegate, but I have no idea how the contents of the CMSampleBufferRef I get are formatted. Are the contents of the buffer one discrete sample? What are its properties? Where can these properties be set?

Video properties can be set using [AVCaptureVideoDataOutput setVideoSettings:], but there is no corresponding call for AVCaptureAudioDataOutput (no setAudioSettings or anything similar).
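For context, the setup described above looks roughly like this minimal sketch, assuming ARC and a class that adopts AVCaptureAudioDataOutputSampleBufferDelegate; the captureSession property, queue label, and method name are illustrative:

#import <AVFoundation/AVFoundation.h>

- (void)startAudioCapture {
    // Keep a strong reference (e.g. a property) so the session isn't deallocated while running.
    self.captureSession = [[AVCaptureSession alloc] init];

    // Wrap the default microphone in a capture input.
    NSError *error = nil;
    AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *micInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
    if (!micInput || ![self.captureSession canAddInput:micInput]) {
        NSLog(@"Could not add microphone input: %@", error);
        return;
    }
    [self.captureSession addInput:micInput];

    // The audio data output delivers CMSampleBufferRefs to the delegate on a serial queue.
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    dispatch_queue_t audioQueue = dispatch_queue_create("com.example.audio-capture", DISPATCH_QUEUE_SERIAL);
    [audioOutput setSampleBufferDelegate:self queue:audioQueue];
    if ([self.captureSession canAddOutput:audioOutput]) {
        [self.captureSession addOutput:audioOutput];
    }

    [self.captureSession startRunning];
}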


They are formatted as LPCM! You can verify this by getting the AudioStreamBasicDescription like so:

// Ask the sample buffer for its format description, then for the underlying ASBD.
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
// The ASBD describes the sample rate, channel count, bit depth, and packing of the samples.
const AudioStreamBasicDescription *streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

and then checking the stream description's mFormatID, which will be kAudioFormatLinearPCM.
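Building on that, here is a minimal sketch of one way to read the raw bytes inside the delegate callback (assuming AVFoundation/CoreMedia are imported; copying into an NSMutableData is just one option, and the guard and logging are illustrative):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
    if (asbd == NULL || asbd->mFormatID != kAudioFormatLinearPCM) {
        return; // Not expected from AVCaptureAudioDataOutput, but guard anyway.
    }

    // Each buffer carries a block of frames, not a single discrete sample.
    CMItemCount frameCount = CMSampleBufferGetNumSamples(sampleBuffer);

    // Copy the contiguous LPCM bytes out of the underlying block buffer.
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    NSMutableData *audioData = [NSMutableData dataWithLength:length];
    CMBlockBufferCopyDataBytes(blockBuffer, 0, length, [audioData mutableBytes]);

    // audioData now holds frameCount frames laid out as described by the ASBD
    // (mSampleRate, mChannelsPerFrame, mBitsPerChannel, interleaving flags, and so on).
    NSLog(@"Got %ld LPCM frames at %.0f Hz", (long)frameCount, asbd->mSampleRate);
}

If you want to avoid the extra copy, CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer can fill an AudioBufferList that points at the same data.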
