Audio/Voice Visualization
Hey you Objective-C bods.
Does anyone know how I would go about changing (transforming) an image based on the input from the Microphone on the iPhone?
i.e. When a user speaks into the Mic, the image will pulse or skew.
[edit] Anyone have any ideas? I have (what is basically) a voice recording app. I just wanted something to change as the voice input is provided. I've seen it done in a sample project, but that wasn't with a UIImage. [/edit]
Thanking you!!
Apple put together some great frameworks for this! The AVFoundation and Core Audio frameworks will be the most useful to you.
To get audio level information, AVAudioRecorder is useful. Although it is made for recording, it also provides level data from the microphone. This would be useful for deforming your image based on how loud the user is shouting at their phone ;)
Here is the Apple documentation for AVAudioRecorder: AVAudioRecorder Class Reference
A bit more detail:
// You will need an AVAudioRecorder object (initialized elsewhere with a URL and settings)
AVAudioRecorder *myRecorderObject;

// To be able to get levels data from the microphone you need
// to enable metering for your recorder object, then start recording
myRecorderObject.meteringEnabled = YES;
[myRecorderObject prepareToRecord];
[myRecorderObject record];

// Refresh the meter values, then poll the microphone for levels data (values are in dB)
[myRecorderObject updateMeters];
float peakPower = [myRecorderObject peakPowerForChannel:0];
float averagePower = [myRecorderObject averagePowerForChannel:0];
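For context, here is a minimal sketch of how such a recorder object might be created and polled on a timer. The temporary file URL, the settings dictionary, the self.recorder property, and the levelTimerCallback: selector are illustrative assumptions on my part, not something from the original answer.
// Minimal setup sketch (assumes a strong AVAudioRecorder property named self.recorder)
// #import <AVFoundation/AVFoundation.h> (the format constant comes from Core Audio's headers)
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];

// Record into a throwaway file; only the meter readings are needed here
NSURL *url = [NSURL fileURLWithPath:
              [NSTemporaryDirectory() stringByAppendingPathComponent:@"levels.caf"]];
NSDictionary *settings = @{ AVFormatIDKey         : @(kAudioFormatAppleIMA4),
                            AVSampleRateKey       : @44100.0f,
                            AVNumberOfChannelsKey : @1 };

self.recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
self.recorder.meteringEnabled = YES;
[self.recorder prepareToRecord];
[self.recorder record];

// Poll the levels roughly 20 times per second
[NSTimer scheduledTimerWithTimeInterval:0.05
                                 target:self
                               selector:@selector(levelTimerCallback:)
                               userInfo:nil
                                repeats:YES];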
If you want to see a great example of how an AVAudioRecorder object can be used to get levels data, check out this tutorial.
As far as deforming your image goes, that would be up to an image or graphics library. There are a lot to choose from, including some great ones from Apple, but I am not familiar enough with any of them to recommend one, so that part might be better answered by someone else.
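To sketch the "pulse" part of the question (my own illustration, not from the answer above): in the timer callback you could refresh the meters and map the decibel reading to a scale transform on a plain UIImageView. The recorder and imageView property names are assumptions carried over from the sketch above.
// Called by the NSTimer above; turns the metered power into a scale transform
- (void)levelTimerCallback:(NSTimer *)timer {
    [self.recorder updateMeters];

    // averagePowerForChannel: returns decibels, roughly -160 (silence) to 0 (full scale)
    float power = [self.recorder averagePowerForChannel:0];
    float level = powf(10.0f, power / 20.0f);   // convert dB to a linear 0..1 amplitude

    // Pulse the image between 1.0x and 1.5x depending on loudness
    CGFloat scale = 1.0f + 0.5f * level;
    self.imageView.transform = CGAffineTransformMakeScale(scale, scale);
}
A skew instead of a pulse could be done the same way by feeding the level into the shear components of CGAffineTransformMake.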
Best of luck!
You may try using the gl-data-visualization-view extensible framework to visualize your sound levels.