Write Audio To Disk From IO Unit
Rewriting this question to be a little more succinct.
My problem is that I can't successfully write an audio file to disk from a Remote IO unit.
The steps I took were to:
Open an mp3 file and extract its audio to buffers. I set up an ASBD to use with my graph based on the properties of the graph. I set up and run my graph, looping the extracted audio, and sound successfully comes out of the speaker!
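Roughly, the extraction step looks like this (a simplified sketch rather than my exact code; sourceURL, framesToRead and readABL are placeholders, and CheckError is the same helper used further down):

// Sketch: open the mp3 and read it through ExtAudioFile, converting on the
// way in to the same canonical stereo format the graph uses.
ExtAudioFileRef sourceFile = NULL;
CheckError(ExtAudioFileOpenURL(sourceURL, &sourceFile), "couldn't open source mp3");

CheckError(ExtAudioFileSetProperty(sourceFile,
                                   kExtAudioFileProperty_ClientDataFormat,
                                   sizeof(stereoStreamFormat),
                                   &stereoStreamFormat),
           "couldn't set source client format");

// kAudioFormatFlagsAudioUnitCanonical is non-interleaved, so the buffer list
// needs one AudioBuffer per channel.
UInt32 framesToRead   = 4096;   // placeholder chunk size
UInt32 bytesPerBuffer = framesToRead * stereoStreamFormat.mBytesPerFrame;

AudioBufferList *readABL = calloc(1, offsetof(AudioBufferList, mBuffers) + 2 * sizeof(AudioBuffer));
readABL->mNumberBuffers = 2;
for (UInt32 i = 0; i < 2; i++)
{
    readABL->mBuffers[i].mNumberChannels = 1;
    readABL->mBuffers[i].mDataByteSize   = bytesPerBuffer;
    readABL->mBuffers[i].mData           = malloc(bytesPerBuffer);
}

CheckError(ExtAudioFileRead(sourceFile, &framesToRead, readABL), "read failed");
// framesToRead now holds the number of frames actually read; 0 means end of file

ExtAudioFileDispose(sourceFile);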
What I'm having difficulty with is taking the audio samples from the Remote IO callback and writing them to an audio file on disk, which I am using ExtAudioFileWriteAsync for.
The audio file does get written and bears some audible resemblance to the original mp3, but it sounds very distorted.
I'm not sure if the problem is
A) ExtAudioFileWriteAsync can't write the samples as fast as the IO unit callback provides them.
- or -
B) I have set up the ASBD for the ExtAudioFile reference wrong. I wanted to begin by saving a WAV file. I'm not sure if I have described this properly in the ASBD below.
Secondly, I am uncertain what value to pass for the inChannelLayout parameter when creating the audio file.
And finally, I am very uncertain about what ASBD to use for kExtAudioFileProperty_ClientDataFormat. I had been using my stereo stream format, but a closer look at the docs says this must be PCM. Should this be the same format as the output of the Remote IO? And if so, was I wrong to set the output format of the Remote IO to stereoStreamFormat? A sketch of what I've been considering for the channel layout is below.
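For the channel layout question, this is the direction I've been considering (a sketch only, not code I'm confident in; it would simply replace the NULL I currently pass in startrecording further down):

// Sketch: an explicit stereo channel layout for ExtAudioFileCreateWithURL,
// as an alternative to passing NULL for inChannelLayout.
AudioChannelLayout stereoLayout = {0};
stereoLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

status = ExtAudioFileCreateWithURL(destinationURL,
                                   kAudioFileWAVEType,
                                   &dstFormat,
                                   &stereoLayout,
                                   kAudioFileFlags_EraseFile,
                                   &recordingfileref);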
I realize there's an awful lot in this question, but I have a lot of uncertainties that I can't seem to clear up on my own.
setup stereo stream format
- (void) setupStereoStreamFormat
{
    // AudioUnitSampleType is 8.24 fixed-point (SInt32) on iOS, and the canonical
    // flags describe non-interleaved data, so bytes-per-frame is per channel here.
    size_t bytesPerSample = sizeof (AudioUnitSampleType);

    stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
    stereoStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    stereoStreamFormat.mBytesPerPacket   = bytesPerSample;
    stereoStreamFormat.mFramesPerPacket  = 1;
    stereoStreamFormat.mBytesPerFrame    = bytesPerSample;
    stereoStreamFormat.mChannelsPerFrame = 2;                  // 2 indicates stereo
    stereoStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
    stereoStreamFormat.mSampleRate       = engineDescribtion.samplerate;

    NSLog (@"The stereo stream format:");
}
setup remoteio callback using stereo stream format
AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output,
                     masterChannelMixerUnitloop,
                     &stereoStreamFormat,
                     sizeof(stereoStreamFormat));

AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     masterChannelMixerUnitloop,
                     &stereoStreamFormat,
                     sizeof(stereoStreamFormat));
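The callback below is attached with the usual AURenderCallbackStruct pattern (a sketch for context only; someUnit, someInputBus and engine are placeholders, since the actual graph wiring isn't shown in this listing):

// Sketch of the general pattern: attach masterChannelMixerUnitCallback as the
// render input callback for a bus, passing the Engine instance as inRefCon.
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = masterChannelMixerUnitCallback;
callbackStruct.inputProcRefCon = engine;   // cast back to Engine * inside the callback

AudioUnitSetProperty(someUnit,                         // placeholder AudioUnit
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     someInputBus,                     // placeholder bus number
                     &callbackStruct,
                     sizeof(callbackStruct));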
static OSStatus masterChannelMixerUnitCallback(void *inRefCon,
                                               AudioUnitRenderActionFlags *ioActionFlags,
                                               const AudioTimeStamp *inTimeStamp,
                                               UInt32 inBusNumber,
                                               UInt32 inNumberFrames,
                                               AudioBufferList *ioData)
{
    // ref.equnit;
    //AudioUnitRender(engineDescribtion.channelMixers[inBusNumber], ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    Engine *engine = (Engine *)inRefCon;

    // pull the mixed/EQ'd audio into ioData...
    AudioUnitRender(engineDescribtion.equnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    // ...and, while recording, hand the same buffers to the async file writer
    if (engine->isrecording)
    {
        ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }

    return noErr;
}
**the recording setup**
-(void)startrecording
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    destinationFilePath = [[NSString alloc] initWithFormat:@"%@/testrecording.wav", documentsDirectory];
    destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);

    OSStatus status;

    // prepare a 16-bit signed integer, interleaved stereo file format at 44.1 kHz
    AudioStreamBasicDescription dstFormat;
    dstFormat.mSampleRate       = 44100.0;
    dstFormat.mFormatID         = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags      = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mBytesPerPacket   = 4;
    dstFormat.mBytesPerFrame    = 4;
    dstFormat.mFramesPerPacket  = 1;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel   = 16;
    dstFormat.mReserved         = 0;

    // create the capture file
    status = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &recordingfileref);
    CheckError(status, "couldn't create audio file");

    // set the capture file's client format to the canonical format coming out of the graph
    status = ExtAudioFileSetProperty(recordingfileref, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &stereoStreamFormat);
    CheckError(status, "couldn't set input format");

    ExtAudioFileSeek(recordingfileref, 0);
    isrecording = YES;
    // [documentsDirectory release];
}
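One related thing I'm unsure about: the ExtAudioFile header comments seem to suggest priming ExtAudioFileWriteAsync from a non-realtime context by calling it once with 0 frames and a NULL buffer before the first real write, so its internal buffers exist before the render callback starts writing. A sketch of that call, placed just before setting isrecording above (treat this as my reading of the docs, not something I'm certain of):

// Sketch: prime the async writer before the render callback starts calling
// ExtAudioFileWriteAsync (0 frames, NULL buffer, from this non-realtime context).
CheckError(ExtAudioFileWriteAsync(recordingfileref, 0, NULL),
           "couldn't prime async file writer");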
edit 1
I'm really stabbing in the dark here now, but do I need to use an audio converter, or does kExtAudioFileProperty_ClientDataFormat take care of that?
edit 2
I'm attaching 2 samples of audio. The first is the original audio that I'm looping and trying to copy. The second is the recorded audio of that loop. Hopefully it might give somebody a clue as to what's going wrong.
Original mp3
Problem recording of mp3
After a couple of days of tears and hair-pulling, I have a solution.
In my code, and in other examples I have seen, ExtAudioFileWriteAsync was called in the callback for the Remote IO unit, like so.
**remote IO unit callback**
static OSStatus masterChannelMixerUnitCallback(void *inRefCon,
                                               AudioUnitRenderActionFlags *ioActionFlags,
                                               const AudioTimeStamp *inTimeStamp,
                                               UInt32 inBusNumber,
                                               UInt32 inNumberFrames,
                                               AudioBufferList *ioData)
{
    Engine *engine = (Engine *)inRefCon;

    AudioUnitRender(engineDescribtion.equnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    if (engine->isrecording)
    {
        // writing to disk from here produced the distorted file
        ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }

    return noErr;
}
In this callback I'm pulling audio data from another audio unit that applies EQ and mixes audio.
I moved the ExtAudioFileWriteAsync call from the Remote IO callback to this other callback that the Remote IO pulls from, and the file now writes successfully!
*EQ unit's callback function*
static OSStatus outputCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    // pull from the master mixer into ioData
    AudioUnitRender(engineDescribtion.masterChannelMixerUnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    //process audio here

    Engine *engine = (Engine *)inRefCon;
    OSStatus s;

    if (engine->isrecording)
    {
        // writing from this callback produces a clean file
        s = ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }

    return noErr;
}
In the interest of fully understanding why my solution worked, could somebody explain why writing the data to file from the ioData buffer list of the Remote IO causes distorted audio, while writing the data one step further down the chain results in perfect audio?