Understanding the constructors of AudioFormat and AudioInputStream, and the start method
I have tried writing a program that plays a sound file, but I have been unsuccessful so far. I am unable to understand some parts of the code:
InputStream is = new FileInputStream("sound file");
AudioFormat af = new AudioFormat(float sampleRate, int sampleSizeInBits, int channels, boolean signed, boolean bigEndian); // I don't understand this constructor
long length; // length in sample frames
// how can I know the length in frames?
AudioInputStream ais = new AudioInputStream(is, af, length);
// open(ais);
// start playing by invoking the start method
- In the constructor of AudioFormat, how can I know the sample rate and sample size in advance? What are channels, and what do the two boolean parameters at the end mean?
- How can I get the value of length (the number of sample frames)?
- Also, how do I invoke the start method? I don't want the data from any line, but from a file kept in a folder (i.e. a Clip).
In addition to the encoding, the audio format includes other properties that further specify the exact arrangement of the data. These include the number of channels, sample rate, sample size, byte order, frame rate, and frame size.

Sounds may have different numbers of audio channels: one for mono, two for stereo. The sample rate measures how many "snapshots" (samples) of the sound pressure are taken per second, per channel. (If the sound is stereo rather than mono, two samples are actually measured at each instant of time: one for the left channel, and another for the right channel; however, the sample rate still measures the number per channel, so the rate is the same regardless of the number of channels. This is the standard use of the term.)

The sample size indicates how many bits are used to store each snapshot; 8 and 16 are typical values. For 16-bit samples (or any other sample size larger than a byte), byte order is important; the bytes in each sample are arranged in either the "little-endian" or "big-endian" style.

For encodings like PCM, a frame consists of the set of samples for all channels at a given point in time, and so the size of a frame (in bytes) is always equal to the size of a sample (in bytes) times the number of channels. However, with some other sorts of encodings a frame can contain a bundle of compressed data for a whole series of samples, as well as additional, non-sample data. For such encodings, the sample rate and sample size refer to the data after it is decoded into PCM, and so they are completely different from the frame rate and frame size.
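To make that concrete, here is a minimal sketch that builds an AudioFormat for CD-quality PCM audio and derives the frame length from a byte count. The sample rate, sample size, channel count, and the data-byte figure are all assumed example values for illustration; in practice they must match your actual file.

```java
import javax.sound.sampled.AudioFormat;

public class FormatDemo {
    public static void main(String[] args) {
        // CD-quality PCM (assumed values for illustration):
        // 44100 samples/sec, 16 bits/sample, 2 channels (stereo),
        // signed samples, little-endian byte order (bigEndian = false).
        AudioFormat af = new AudioFormat(44100f, 16, 2, true, false);

        // For PCM, frame size (bytes) = (bits per sample / 8) * channels.
        System.out.println(af.getFrameSize()); // prints 4

        // Length in sample frames = total audio data bytes / frame size.
        long dataBytes = 1_764_000L; // hypothetical: 10 s of CD audio data
        long lengthInFrames = dataBytes / af.getFrameSize();
        System.out.println(lengthInFrames); // prints 441000
    }
}
```

Note that if you read the file with AudioSystem.getAudioInputStream instead of constructing the stream by hand, all of these values are parsed from the file header for you, which avoids guessing them in advance.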
Link
Probably a better way to approach this is along the lines of the 'Playing a Clip' source code shown in the Java Sound info. page. It makes most of the questions redundant (since we don't need to worry about the fine details when using a Clip).
If you have any further questions after trying the source, let me know.