What values to use for packet and frame size with AudioUnit
I am familiarizing myself with how to use AudioUnit to play sounds and am confused by the notions of packets and frames. I would like to know:
what is the definition of a packet and a frame in the context of AudioUnit
what are the trade-offs of using multiple samples per packet, and multiple packets per frame
My reason for asking: In all code samples I have seen so far, a packet is essentially a sample, with typically mBytesPerPacket=4 for a 16-bit stereo stream. And mFramesPerPacket is typically 1, making a frame, a packet, and a sample (albeit a stereo sample) the same concept.
I was expecting a packet and/or a frame to be a buffer of samples, i.e. a group of 256 or 512 consecutive samples that the driver could be pointed at and read linearly. Reducing a frame/packet size to one sample seems to put unnecessary strain on whatever driver is responsible for playing the data. What am I missing?
First, some definitions:
- A sample is a single value of audio data: the signal's amplitude for one channel at one point in time.
- A frame is the set of samples for all channels at a given point in time, one sample per channel. For 16-bit stereo, a frame is 4 bytes.
- A packet is the smallest cohesive group of frames for a given format. For uncompressed linear PCM, a packet is exactly one frame; for compressed formats such as AAC, one packet decodes to many frames.
- A buffer is a group of frames delivered for processing.
You should not confuse packets with frames; for linear PCM, mFramesPerPacket should be set to 1 (compressed formats use larger values). This does not mean that your AudioUnit's render callback fires once per frame. If you want to control how often it fires, set the kAudioSessionProperty_PreferredHardwareIOBufferDuration property to your preferred buffer duration. Setting this property does not guarantee the exact buffer size you ask for, but the system will try to give you something close to that value.