Simultaneous playback (10+) with AudioQueueServices
Can anyone show me how this is done on the iPhone? I am trying to make a game that plays about 12 sounds at exactly the same time, and I can't figure out how to use Audio Queue Services. I understand you have to initialize, add buffers, play them back, and use AudioQueueEnqueueBufferWithParameters to get simultaneous playback, but I don't know how to turn that into code. Anyone with code for this, or someone who could explain it to me, would be amazing!
If you've heard of a Tone Grid/Tone Board, that's exactly what I'm trying to do. I know there are a few apps in the App Store already that do this, but I don't know how it is done.
TLDR; Need help with AudioQueueServices simultaneous playback with 10+ sounds.
Example:
New playback
if (isPlaying == NO) {
    err = AudioQueueNewOutput(&streamFormat, AudioEngineOutputBufferCallback, self, NULL, NULL, 0, &outputQueue);
    if (err != noErr) NSLog(@"AudioQueueNewOutput() error: %d", (int)err);
Enqueue buffers
    outputBuffersToRewrite = 3;
    bufferByteSize = (sampleRate > 16000) ? 2176 : 512; // 40.5 Hz : 31.25 Hz
    for (i = 0; i < 3; i++) {
        err = AudioQueueAllocateBuffer(outputQueue, bufferByteSize, &buffer);
        if (err == noErr) {
            [self generateTone:buffer];
            err = AudioQueueEnqueueBuffer(outputQueue, buffer, 0, NULL);
            if (err != noErr) NSLog(@"AudioQueueEnqueueBuffer() error: %d", (int)err);
        } else {
            NSLog(@"AudioQueueAllocateBuffer() error: %d", (int)err);
            return;
        }
    }
Starting playback
    isPlaying = YES;
    err = AudioQueueStart(outputQueue, NULL);
    if (err != noErr) { NSLog(@"AudioQueueStart() error: %d", (int)err); isPlaying = NO; return; }
} else {
    NSLog(@"Error: audio is already playing back.");
}
Set up stream format fields
    BOOL isHighSampleRate = (sampleRate > 16000);
    int bufferByteSize;
    AudioQueueBufferRef buffer;
    AudioStreamBasicDescription streamFormat;
    streamFormat.mSampleRate       = sampleRate;
    streamFormat.mFormatID         = kAudioFormatLinearPCM;
    streamFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    streamFormat.mBitsPerChannel   = 16;
    streamFormat.mChannelsPerFrame = 1;
    streamFormat.mBytesPerPacket   = 2 * streamFormat.mChannelsPerFrame;
    streamFormat.mBytesPerFrame    = 2 * streamFormat.mChannelsPerFrame;
    streamFormat.mFramesPerPacket  = 1;
    streamFormat.mReserved         = 0;
Audio Queue Services can play multiple sounds at the same time.
From Apple's documentation:
How do I play multiple sounds simultaneously?
Use the interfaces in Audio Queue Services (AudioToolbox/AudioQueue.h). Create one audio queue object for each sound that you want to play. Then specify simultaneous start times for the first audio buffer in each audio queue, using the AudioQueueEnqueueBufferWithParameters function.
The following limitations pertain for simultaneous sounds in iPhone OS, depending on the audio data format:
AAC, MP3, and ALAC (Apple Lossless) audio: You may play multiple AAC, MP3, and ALAC format sounds simultaneously; playback of multiple sounds of these formats will require CPU resources for decoding.
Linear PCM and IMA/ADPCM (IMA4 audio): You can play multiple linear PCM or IMA4 format sounds simultaneously without CPU resource concerns.
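A minimal sketch of the approach Apple describes: one queue per sound, first buffers enqueued with a shared start time. This assumes the queues and buffers have already been created and filled as in the other answer; it only builds on iOS/macOS, the helper name `startSimultaneously` is mine, and the lead-time constant is arbitrary (`mHostTime` is in mach ticks, not nanoseconds, on all hardware).

    #include <AudioToolbox/AudioToolbox.h>
    #include <mach/mach_time.h>

    // Start N independent queues at the same host time so their
    // first buffers begin playing together.
    static void startSimultaneously(AudioQueueRef *queues,
                                    AudioQueueBufferRef *buffers,
                                    int count)
    {
        AudioTimeStamp startTime = {0};
        startTime.mFlags    = kAudioTimeStampHostTimeValid;
        startTime.mHostTime = mach_absolute_time() + 100000000ULL; // small lead time (hypothetical)

        for (int i = 0; i < count; i++) {
            // Enqueue the first buffer with an explicit start time...
            AudioQueueEnqueueBufferWithParameters(queues[i], buffers[i],
                                                  0, NULL,   // no packet descriptions (PCM)
                                                  0, 0,      // no trim frames
                                                  0, NULL,   // no parameter events
                                                  &startTime, NULL);
            // ...and ask every queue to start at that same time.
            AudioQueueStart(queues[i], &startTime);
        }
    }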
A single audio queue can't play multiple sounds simultaneously; it just sends audio data values one by one from its enqueued buffers. So you need to compose all the sounds you want to play simultaneously into a single stream.
How can you do it?
If the data are uncompressed, it is not too hard.
Say you have two different sounds. Each of them can be described as an array of float values from -1 to 1. When sounds are stored in a file or enqueued to an audio queue, they must be represented in the appropriate format. In your case (kAudioFormatLinearPCM with mBitsPerChannel = 16) that means two-byte integers, each value a short int from -32767 to 32767.
So each of the two sounds is an array of shorts, and when you enqueue a buffer you fill it with the values of that array (if the sound is generated dynamically rather than loaded from a file, the array doesn't exist as such, but the values are computed one by one).
To create the "sum" of the two sounds, construct each value of the new array as the average of the corresponding values from the two source arrays, i.e.
resultSound[i] = sound1[i]/2 + sound2[i]/2;
The same applies for any number of sounds.
E.g. to generate a pure harmonic tone you fill the buffer like:
buffer[i] = sin(i * 2 * M_PI * frequency / sampleRate) * 32767;
and to mix two harmonic tones with different frequencies (note that the whole sum must be parenthesized before scaling):
buffer[i] = (sin(i * 2 * M_PI * frequency1 / sampleRate) + sin(i * 2 * M_PI * frequency2 / sampleRate)) * 0.5 * 32767;
Remember that you're dealing with a single audio stream, so you either need to create the combined stream yourself, or you could use the mixer unit which will let you feed it multiple streams. See the "Audio Mixer (MixerHost)" sample code from Apple.