
Using Audio Queue Services to play PCM data over a socket connection

I'm writing a remote desktop client for the iPhone and I'm trying to implement audio redirection.

The client is connected to the server over a socket connection, and the server sends 32K chunks of PCM data at a time.

I'm trying to use Audio Queue Services to play the data, and it plays the first two seconds (one buffer's worth). However, since the next chunk of data hasn't arrived over the socket yet, the next AudioQueueBuffer is empty. When the data does come in, I fill the next available buffer with it and enqueue it with AudioQueueEnqueueBuffer. However, it never plays these buffers.

Does the queue stop playing if there are no buffers in the queue, even if you later add a buffer?

Here's the relevant part of the code:

void
wave_out_write(STREAM s, uint16 tick, uint8 index)
{

    /* No free buffers left: drop this chunk. */
    if(items_in_queue == NUM_BUFFERS){
        return;
    }
    /* Lazily create and start the queue when the first chunk arrives. */
    if(!playState.busy){
        OSStatus status;
        status = AudioQueueNewOutput(&playState.dataFormat, AudioOutputCallback, &playState, CFRunLoopGetCurrent(), NULL, 0, &playState.queue);

        if(status == 0){
            for(int i=0; i<NUM_BUFFERS; i++){
                AudioQueueAllocateBuffer(playState.queue, 40000, &playState.buffers[i]);

            }
            AudioQueueAddPropertyListener(playState.queue, kAudioQueueProperty_IsRunning, MyAudioQueuePropertyListenerProc, &playState);

            status = AudioQueueStart(playState.queue, NULL);
            if(status == 0){
                playState.busy = True;
            }
            else{
                return;
            }
        }
        else{
            return;
        }
    }
    /* Copy this chunk into the next free buffer and enqueue it. */
    playState.buffers[queue_hi]->mAudioDataByteSize = s->size;
    memcpy(playState.buffers[queue_hi]->mAudioData, s->data, s->size);

    AudioQueueEnqueueBuffer(playState.queue, playState.buffers[queue_hi], 0, NULL);
    queue_hi = (queue_hi + 1) % NUM_BUFFERS;
    items_in_queue++;
}


void AudioOutputCallback(void* inUserData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer)
{
    /* A buffer has finished playing; mark its slot as free. */
    PlayState *playState = (PlayState *)inUserData;
    items_in_queue--;
}

Thanks!


Using Core Audio's Audio Queue Services is vastly simplified by ensuring that you always re-enqueue every buffer as soon as it has finished playing, by doing so in the callback that fires when playback of that buffer completes. The audio data should sit in a separate circular buffer as it is received from the network, so that the network and audio code aren't directly coupled.

To ensure that you don't drop audio, queue up a fixed number of buffers with data before starting playback; this acts as a jitter buffer. Do not start playback until all the buffers are queued. As soon as each buffer has finished playing, re-enqueue it immediately with the next packet of data. If no data is available, just queue a buffer of silence; since the arriving audio packets will eventually catch up, this simply trades a little extra latency for less dropped audio.
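A minimal sketch of that "always re-enqueue from the callback" pattern, assuming signed 16-bit PCM and a hypothetical ring_buffer_available/ring_buffer_read pair that the networking code fills (not part of the original code):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

/* Hypothetical circular-buffer helpers, filled by the networking code. */
size_t ring_buffer_available(void);
void   ring_buffer_read(void *dst, size_t bytes);

void AudioOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    UInt32 want = inBuffer->mAudioDataBytesCapacity;

    if (ring_buffer_available() >= want) {
        /* Enough network data buffered: copy the next chunk into the queue buffer. */
        ring_buffer_read(inBuffer->mAudioData, want);
    } else {
        /* Underrun: queue silence instead (zeros are silence for signed 16-bit PCM). */
        memset(inBuffer->mAudioData, 0, want);
    }
    inBuffer->mAudioDataByteSize = want;

    /* Re-enqueue immediately so the queue never runs out of buffers and stops. */
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}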


You may find the answer to this question useful: AudioQueue ate my buffer (first 15 milliseconds of it)


Generally, when using circular audio buffers, you should prevent them from underrunning. If the necessary data isn't available (due to e.g. network congestion), pad the audio data with silence or pause playback.

It could be that once your buffer chain underruns, you need to restart playback. I've never actually underrun an AudioQueue buffer myself, but I remember from Win32 programming that this was the case, so please feel free to correct me if I'm wrong.
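One way to detect that situation is the IsRunning listener the question already registers: when the queue stops after an underrun, a flag can be cleared so playback is started again once data resumes. A sketch only; the "running" field on PlayState is an assumption, not part of the original struct:

#include <AudioToolbox/AudioToolbox.h>

void MyAudioQueuePropertyListenerProc(void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID)
{
    PlayState *playState = (PlayState *)inUserData;
    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);

    if (inID == kAudioQueueProperty_IsRunning &&
        AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size) == noErr) {
        /* Hypothetical flag: remember whether the queue is still running. */
        playState->running = (isRunning != 0);
        /* Later, when new data is enqueued, check this flag and call
           AudioQueueStart(inAQ, NULL) again if the queue has stopped. */
    }
}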


I find it stupid that I can post answers but not comments here if I don't have enough points. I just wanted to add to the following answer:

"It could be that once your buffer chain underruns, you need to restart playback. I've never actually underrunned the AudioQueue buffer, but I remember from Win32 programming that this was the case, so please feel free to correct me if I'm wrong."

I have actually tested this scenario in an audio player I recently wrote. I made a FLAC decoder from scratch, and at the moment it only supports 16-bit songs. If I stumble upon a 24-bit song, I lose sync with the song I'm playing - it won't play at all - and recovering can take an arbitrary amount of time, say 30 seconds as an example. This starves the audio queues really badly, and when I finally begin sending buffers to them again, it takes 30 seconds of silently catching up into the next song before it plays again.

This is just my observation, and I haven't yet put much thought into why I'm seeing this behaviour. Maybe it throws away samples to match the sample count the AudioQueue thinks it should have played by now - the samples it lost during starvation? My audio player seems to fast-forward through the song until it reaches the point where it wants to play again.

EDIT: As long as you post a new buffer for every callback, you will never need to restart playback or anything like that. In my player, if I'm not done processing a buffer when the next buffer is 'called back', the thread for that buffer is blocked until the first buffer is done filling. This is done with NSLock. This is the main reason the audio queues get severely starved when I lose sync because my player doesn't yet understand 24-bit FLACs. NSLock also prevents race conditions when AudioQueue gives you more buffers to fill. I use three buffers with low latency. Too low a latency gives complete silence, so you need to find a 'good' buffer size for your system.
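A rough C sketch of that blocking idea, using a pthread mutex and condition variable in place of NSLock; the decoder-side code that produces data and signals the condition is assumed, not shown:

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t fill_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  fill_ready = PTHREAD_COND_INITIALIZER;
static size_t decoded_bytes = 0;   /* incremented by the decoder thread, which then signals fill_ready */

/* Called from the AudioQueue callback: block until the decoder has produced
   enough data for one buffer, then consume that amount. */
static void wait_for_decoded_data(size_t wanted)
{
    pthread_mutex_lock(&fill_lock);
    while (decoded_bytes < wanted)
        pthread_cond_wait(&fill_ready, &fill_lock);
    decoded_bytes -= wanted;
    pthread_mutex_unlock(&fill_lock);
}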
