
Recording Mono on iPhone in IMA4 format

I'm using the SpeakHere sample app from Apple's developer site to create an audio recording app. I'm attempting to record directly to IMA4 format using the kAudioFormatAppleIMA4 system constant. This is listed as one of the usable formats, but every time I set up my audio format variable and pass it in, I get a 'fmt?' error. Here is the code I use to set up the audio format variable:

#define kAudioRecordingFormat kAudioFormatAppleIMA4
#define kAudioRecordingType kAudioFileCAFType
#define kAudioRecordingSampleRate 16000.00
#define kAudioRecordingChannelsPerFrame 1
#define kAudioRecordingFramesPerPacket 1
#define kAudioRecordingBitsPerChannel 16
#define kAudioRecordingBytesPerPacket 2
#define kAudioRecordingBytesPerFrame 2

- (void) setupAudioFormat: (UInt32) formatID {

    // Obtains the hardware sample rate for use in the recording
    // audio format. Each time the audio route changes, the sample rate
    // needs to get updated.
    UInt32 propertySize = sizeof (self.hardwareSampleRate);

    OSStatus err = AudioSessionGetProperty (
        kAudioSessionProperty_CurrentHardwareSampleRate,
        &propertySize,
        &hardwareSampleRate
    );

    if(err != 0){
        NSLog(@"AudioRecorder::setupAudioFormat - error getting audio session property");
    }

    audioFormat.mSampleRate = kAudioRecordingSampleRate;

    NSLog (@"Hardware sample rate = %f", self.audioFormat.mSampleRate);

    audioFormat.mFormatID           = formatID;
    audioFormat.mChannelsPerFrame   = kAudioRecordingChannelsPerFrame;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = kAudioRecordingFramesPerPacket;
    audioFormat.mBitsPerChannel     = kAudioRecordingBitsPerChannel;
    audioFormat.mBytesPerPacket     = kAudioRecordingBytesPerPacket;
    audioFormat.mBytesPerFrame      = kAudioRecordingBytesPerFrame;

}

And here is where I use that function:

- (id) initWithURL: fileURL {
    NSLog (@"initializing a recorder object.");
    self = [super init];

    if (self != nil) {

        // Specify the recording format. Options are:
        //
        //      kAudioFormatLinearPCM
        //      kAudioFormatAppleLossless
        //      kAudioFormatAppleIMA4
        //      kAudioFormatiLBC
        //      kAudioFormatULaw
        //      kAudioFormatALaw
        //
        // When targeting the Simulator, SpeakHere uses linear PCM regardless of the format
        //  specified here. See the setupAudioFormat: method in this file.
        [self setupAudioFormat: kAudioRecordingFormat];

        OSStatus result =   AudioQueueNewInput (
                                &audioFormat,
                                recordingCallback,
                                self,                   // userData
                                NULL,                   // run loop
                                NULL,                   // run loop mode
                                0,                      // flags
                                &queueObject
                            );

        NSLog (@"Attempted to create new recording audio queue object. Result: %f", result);

        // get the recording format back from the audio queue's audio converter --
        //  the file may require a more specific stream description than was 
        //  necessary to create the encoder.
        UInt32 sizeOfRecordingFormatASBDStruct = sizeof (audioFormat);

        AudioQueueGetProperty (
            queueObject,
            kAudioQueueProperty_StreamDescription,  // this constant is only available in iPhone OS
            &audioFormat,
            &sizeOfRecordingFormatASBDStruct
        );

        AudioQueueAddPropertyListener (
            [self queueObject],
            kAudioQueueProperty_IsRunning,
            audioQueuePropertyListenerCallback,
            self
        );

        [self setAudioFileURL: (CFURLRef) fileURL];

        [self enableLevelMetering];
    }
    return self;
} 

Thanks for the help! -Matt


I'm not sure that all the format flags you're passing are correct; IMA4 (which, IIRC, stands for IMA ADPCM 4:1) is 4-bit (4:1 compression from 16 bits) with some headers.

According to the docs for AudioStreamBasicDescription:

  • mBytesPerFrame should be 0, since the format is compressed.
  • mBitsPerChannel should be 0, since the format is compressed.
  • mFormatFlags should probably be 0, since there is nothing to choose.

According to afconvert -f caff -t ima4 -c 1 blah.aiff blah.caf followed by afinfo blah.caf:

  • mBytesPerPacket should be 34, and
  • mFramesPerPacket should be 64. You might be able to set these to 0 instead.

The reference algorithm in the original IMA spec is not that helpful (it's an OCR of scans; the site also hosts the scans themselves).
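
Putting those values together, a hand-filled ASBD for mono IMA4 in a CAF container might look like the sketch below. The 34-byte packet is consistent with 64 four-bit samples plus a small per-packet header; the variable name is mine, and the sample rate matches the one in the question:

AudioStreamBasicDescription ima4Format = {0};
ima4Format.mFormatID          = kAudioFormatAppleIMA4;
ima4Format.mSampleRate        = 16000.0;  // or the current hardware sample rate
ima4Format.mChannelsPerFrame  = 1;        // mono
ima4Format.mFramesPerPacket   = 64;       // from the afinfo output above
ima4Format.mBytesPerPacket    = 34;       // from the afinfo output above
ima4Format.mBitsPerChannel    = 0;        // 0 for compressed formats
ima4Format.mBytesPerFrame     = 0;        // 0 for compressed formats
ima4Format.mFormatFlags       = 0;        // nothing to choose for IMA4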


On top of what @tc. has already said, it's easier to let Core Audio populate the description automatically from the format ID, like this:

#import <AudioToolbox/AudioToolbox.h>

AudioStreamBasicDescription streamDescription;
UInt32 streamDesSize = sizeof(streamDescription);
memset(&streamDescription, 0, streamDesSize);

// Only the format ID needs to be set; Core Audio fills in the rest.
streamDescription.mFormatID = kAudioFormatiLBC;

// Ask Core Audio to populate the remaining ASBD fields for this format.
OSStatus status = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &streamDesSize, &streamDescription);
assert(status == noErr);

This way you don't have to guess the characteristics of each format. Be warned: although kAudioFormatiLBC needed no additional information in this example, other formats do (usually the number of channels and the sample rate).
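
For the IMA4 case in the question, a minimal sketch of that same pattern might look like the following. The variable names are mine, and the sample rate and channel count are taken from the question's #defines; IMA4 is one of the formats that needs them filled in before the call:

AudioStreamBasicDescription ima4Description;
UInt32 ima4DesSize = sizeof(ima4Description);
memset(&ima4Description, 0, ima4DesSize);
ima4Description.mFormatID         = kAudioFormatAppleIMA4;
ima4Description.mSampleRate       = 16000.0;  // kAudioRecordingSampleRate
ima4Description.mChannelsPerFrame = 1;        // kAudioRecordingChannelsPerFrame (mono)

// Core Audio fills in mFramesPerPacket, mBytesPerPacket, etc.
OSStatus ima4Status = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &ima4DesSize, &ima4Description);
assert(ima4Status == noErr);

// The completed description can then be passed to AudioQueueNewInput.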
