ffmpeg audio and the iPhone
Has anyone been able to make ffmpeg work with Audio Queues? I get an error when I try to create the queue.
ret = avcodec_open(enc, codec);
if (ret < 0) {
    NSLog(@"Error: Could not open video decoder: %d", ret);
    av_close_input_file(avfContext);
    return;
}
if (audio_index >= 0) {
    AudioStreamBasicDescription audioFormat;
    audioFormat.mFormatID = -1;
    audioFormat.mSampleRate = avfContext->streams[audio_index]->codec->sample_rate;
    audioFormat.mFormatFlags = 0;
    switch (avfContext->streams[audio_index]->codec->codec_id) {
        case CODEC_ID_MP3:
            audioFormat.mFormatID = kAudioFormatMPEGLayer3;
            break;
        case CODEC_ID_AAC:
            audioFormat.mFormatID = kAudioFormatMPEG4AAC;
            audioFormat.mFormatFlags = kMPEG4Object_AAC_Main;
            break;
        case CODEC_ID_AC3:
            audioFormat.mFormatID = kAudioFormatAC3;
            break;
        default:
            break;
    }
    if (audioFormat.mFormatID != -1) {
        audioFormat.mBytesPerPacket = 0;
        audioFormat.mFramesPerPacket = avfContext->streams[audio_index]->codec->frame_size;
        audioFormat.mBytesPerFrame = 0;
        audioFormat.mChannelsPerFrame = avfContext->streams[audio_index]->codec->channels;
        audioFormat.mBitsPerChannel = 0;
        if ((ret = AudioQueueNewOutput(&audioFormat, audioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue))) {
            NSLog(@"Error creating audio output queue: %d", ret);
        }
    }
}
The issue is only with the audio; the video is perfect, if only I can figure out how to get Audio Queues to work.
http://web.me.com/cannonwc/Site/Photos_6.html
I thought of RemoteIO, but there isn't much documentation on that.
I will share the code for the complete class with anyone who helps me get it to work.
The idea is to have a single view controller that plays any streaming video passed to it, similar to ffplay on the iPhone but without the SDL overhead.
You could very well be missing some important specifications in the AudioStreamBasicDescription structure: I don't know about ffmpeg, but specifying zero bytes per frame and zero bytes per packet won't work ;) Here is how I would fill the structure, given the sample rate, the audio format, the number of channels, and the bits per sample:
iAqc.mDataFormat.mSampleRate = iSampleRate;
iAqc.mDataFormat.mFormatID = kAudioFormatLinearPCM;
iAqc.mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iAqc.mDataFormat.mBytesPerPacket = (iBitsPerSample >> 3) * iNumChannels;
iAqc.mDataFormat.mFramesPerPacket = 1;
iAqc.mDataFormat.mBytesPerFrame = (iBitsPerSample >> 3) * iNumChannels;
iAqc.mDataFormat.mChannelsPerFrame = iNumChannels;
iAqc.mDataFormat.mBitsPerChannel = iBitsPerSample;
I assume here you are writing PCM samples to the audio device. As long as you know the audio format you are working with, there should be no problem adapting it: the important thing is to remember what all this stuff means. Here I'm working with one sample frame per packet, so the number of bytes per packet coincides with the number of bytes per sample frame.
Most of the problems come up because terms such as "sample" and "sample frame" get used in the wrong contexts: a sample frame can be thought of as the atomic unit of audio data that embraces all the available channels, while a sample refers to a single sub-unit of data composing the sample frame.
For example, in an audio stream of 2 channels with a resolution of 16 bits per sample: a sample will be 2 bytes big (16 bps / 8, or 16 >> 3), while the sample frame also takes the number of channels into account, so it will be 4 bytes big (2 bytes x 2 channels).
IMPORTANT: The theory behind this doesn't apply only to the iPhone, but to audio coding in general. It just happens that Audio Queues ask you for well-defined specifications about your audio stream, and that's good; but you could be asked for bytes instead, so expressing audio data sizes as sample frames is always a good idea: you can always convert your data sizes and be sure about them.