
iOS: Sample code for simultaneous record and playback

I'm designing a simple proof of concept for a multitrack recorder.

The obvious starting point is to play from file A.caf to headphones while simultaneously recording microphone input into file B.caf.

This question -- Record and play audio Simultaneously -- points out that there are three levels at which I can work:

  • AVFoundation API (AVAudioPlayer + AVAudioRecorder)
  • Audio Queue API
  • Audio Unit API (RemoteIO)

What is the best level to work at? Obviously the generic answer is to work at the highest level that gets the job done, which would be AVFoundation.

But I'm taking this job on from someone who gave up due to latency issues (he was getting a 0.3 sec delay between the files), so maybe I need to work at a lower level to avoid these issues?

Furthermore, what source code is available to springboard from? I have been looking at the SpeakHere sample ( http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html ). If I can't find something simpler, I will use this.

But can anyone suggest something simpler, or something else entirely? I would rather not work with C++ code if I can avoid it.

Is anyone aware of some public code that uses AVFoundation to do this?

EDIT: AVFoundation example here: http://www.iphoneam.com/blog/index.php?title=using-the-iphone-to-record-audio-a-guide&more=1&c=1&tb=1&pb=1

EDIT(2): Much nicer looking one here: http://www.switchonthecode.com/tutorials/create-a-basic-iphone-audio-player-with-av-foundation-framework

EDIT(3): How do I record audio on iPhone with AVAudioRecorder?


To avoid latency issues, you will have to work at a lower level than AVFoundation. Check out the aurioTouch sample code from Apple; it uses the Remote I/O audio unit.


As suggested by Viraj, here is the answer.

Yes, you can achieve very good results using AVFoundation. Firstly, you need to pay attention to the fact that for both the player and the recorder, activation is a two-step process.

First you prime it.

Then you play it.

So, prime everything. Then play everything.
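For concreteness, here is a minimal sketch of that two-step pattern with AVAudioPlayer and AVAudioRecorder. The file URLs, the recorder settings, and the `player`/`recorder` properties are placeholders I've assumed for illustration; they are not part of the original answer:

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch only: assumes self.player and self.recorder are strong
// properties, and that A.caf is bundled with the app.
- (void)primeAndStart {
    NSError *error = nil;

    // Simultaneous playback + capture needs the PlayAndRecord category.
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setActive:YES error:&error];

    NSURL *playURL = [[NSBundle mainBundle] URLForResource:@"A"
                                             withExtension:@"caf"];
    NSURL *recordURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"B.caf"]];
    NSDictionary *settings = @{ AVFormatIDKey: @(kAudioFormatLinearPCM),
                                AVSampleRateKey: @44100.0,
                                AVNumberOfChannelsKey: @1 };

    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:playURL
                                                         error:&error];
    self.recorder = [[AVAudioRecorder alloc] initWithURL:recordURL
                                                settings:settings
                                                   error:&error];

    // Step 1: prime everything.
    [self.player prepareToPlay];
    [self.recorder prepareToRecord];

    // Step 2: play everything, back to back.
    [self.player play];
    [self.recorder record];
}
```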

This will get your latency down to about 70ms. I tested by recording a metronome tick, then playing it back through the speakers while holding the iPhone up to the speakers and simultaneously recording.

The second recording had a clear echo, which I found to be ~70ms. I could have analysed the signal in Audacity to get an exact offset.

So, in order to line everything up, I just call performSelector:withObject:afterDelay: with a delay of 70.0/1000.0 seconds.
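As a sketch of that call (the 70 ms figure is the offset measured above, and -startLaggingSide is a hypothetical method of your own, not an API):

```objc
// Start one side immediately, then start the other after the measured
// ~70 ms so the two files line up. -startLaggingSide is a hypothetical
// wrapper around whichever call needs to wait
// ([self.player play] or [self.recorder record]).
[self performSelector:@selector(startLaggingSide)
           withObject:nil
           afterDelay:70.0 / 1000.0];
```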

There may be hidden snags; for example, the delay may differ from device to device. It may even differ depending on device activity. It is even possible the thread could get interrupted/rescheduled in between starting the player and starting the recorder.

But it works, and is a lot tidier than messing around with audio queues / units.


I had this problem, and I solved it in my project simply by changing the PreferredHardwareIOBufferDuration parameter of the audio session. I think I have just 6 ms latency now, which is good enough for my app.

Check this answer, which has a good explanation.
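For reference, a sketch of setting that property with the C Audio Session API of that era (since deprecated); the 5 ms request is illustrative, not a value from the answer. On iOS 6 and later, AVAudioSession's -setPreferredIOBufferDuration:error: is the equivalent.

```objc
#import <AudioToolbox/AudioToolbox.h>

static void ConfigureIOBufferDuration(void) {
    // Request a short I/O buffer; the 5 ms value here is illustrative.
    Float32 preferred = 0.005f;
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(preferred), &preferred);

    // The hardware may grant a different duration, so query the actual one.
    Float32 actual = 0.0f;
    UInt32 size = sizeof(actual);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                            &size, &actual);
}
```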
