
iOS LPCM Non-interleaved Audio input with 2 channels: not possible?

In the aurioTouch sample app the RemoteIO audio unit is configured for 2-channel non-interleaved LPCM in the 8.24 fixed-point format. This is the preferred format on the iOS platform, and I assume that's what the hardware ADC is emitting. They even made a comment about this (source):

// set our required format - Canonical AU format: LPCM non-interleaved 8.24 fixed point
outFormat.SetAUCanonical(2, false);

So I would expect that when the application later receives an audio buffer it will have data for two channels packed in its mData member in some order. Something like this:

mData = [L1, L2, L3, L4, R1, R2, R3, R4];

Where L and R represent data from the left and right channel of a stereo microphone. Only it seems that cannot be the case, because SetAUCanonical() doesn't set up enough memory to hold the additional channel:

void    SetAUCanonical(UInt32 nChannels, bool interleaved)
{
    mFormatID = kAudioFormatLinearPCM;
#if CA_PREFER_FIXED_POINT
    mFormatFlags = kAudioFormatFlagsCanonical | (kAudioUnitSampleFractionBits << kLinearPCMFormatFlagsSampleFractionShift);
#else
    mFormatFlags = kAudioFormatFlagsCanonical;
#endif
    mChannelsPerFrame = nChannels;
    mFramesPerPacket = 1;
    mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
    if (interleaved)
        mBytesPerPacket = mBytesPerFrame = nChannels * sizeof(AudioUnitSampleType);
    else {
        mBytesPerPacket = mBytesPerFrame = sizeof(AudioUnitSampleType);
        mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
    }
}

If 'interleaved' is false, it doesn't multiply 'mBytesPerPacket' and 'mBytesPerFrame' by the number of channels. There won't be enough room in the frame to store the extra channel.

So is the sample code just slightly misleading when it asks for 2 channels? Should it just be asking for 1 channel, since that's what it's going to get back anyway:

outFormat.SetAUCanonical(1, false);

Can I just 'fix' SetAUCanonical() like this to make things clearer?:

mChannelsPerFrame = nChannels;
if (!interleaved) {
    mChannelsPerFrame = 1;
    mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
}
mFramesPerPacket = 1;
mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
mBytesPerPacket = mBytesPerFrame = nChannels * sizeof(AudioUnitSampleType);     

Or is there some other reason why you would ask for 2 channels? I don't even think the microphone is a stereo mic.


The built-in mic and headset mic input are both mono.

The Camera Connection kit may have allowed stereo audio input from some USB mics on some newer iOS devices running some previous OS versions, but I haven't seen any reports of this working with the current OS release.

Also, check whether the 2-channel (stereo) non-interleaved format returns 2 buffers to the RemoteIO callback, instead of concatenated data in 1 buffer.


I think you're confusing "interleaved" and "non-interleaved" and how Core Audio gives you that data in AudioBufferLists (ABLs). SetAUCanonical() is doing the right thing. An ABL has a variable-length array of buffers; in the non-interleaved case, each buffer holds the data for a single channel only.


The problem is the (sometimes) misleading variable names. I don't like it either, but here is an explanation of what is going on.

When mFormatFlags is set as non-interleaved (of any form), mChannelsPerFrame specifies the number of channels, and the rest of the fields describe a single channel. Hence you do NOT need to multiply by the number of channels. The proper values are:

mBytesPerPacket = mFramesPerPacket * sizeof(AudioUnitSampleType);  // e.g. sizeof(float)
mBytesPerFrame  = sizeof(AudioUnitSampleType);
