Get iPhone's camera resolution?
Is there any way to get the resolution of the iPhone's camera? Apparently the 3G has 1200x1600 and the 3GS has 1500x2000, but how do I obtain these values from inside my code (without taking a picture)? I need them to apply an affine transform to the camera preview.
I could detect whether it's a 3G or a 3GS and hardcode these values, but that's just a last resort.
I think your "last resort" is a good solution.
What's wrong with detecting the model? The hardware isn't going to change for either of those models. The only concern is if a 3GSX-R or something comes out with a 16 MP camera, at which point you'd probably have to update your app anyway and could just add another value to the list.
I vote for model detection.
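If you do go the model-detection route, here's a minimal sketch using uname(3); the model strings and the resolution table are hardcoded assumptions based on the values quoted in the question:

#import <UIKit/UIKit.h>
#import <sys/utsname.h>

// Map the machine identifier to a known still-camera resolution.
// "iPhone1,2" is the 3G and "iPhone2,1" is the 3GS; extend the table
// (and ship an update) when new hardware appears.
static CGSize cameraResolutionForCurrentDevice(void)
{
    struct utsname systemInfo;
    uname(&systemInfo);
    NSString *machine = [NSString stringWithCString:systemInfo.machine
                                            encoding:NSUTF8StringEncoding];

    if ([machine isEqualToString:@"iPhone1,2"]) {   // iPhone 3G
        return CGSizeMake(1200.0f, 1600.0f);
    }
    if ([machine isEqualToString:@"iPhone2,1"]) {   // iPhone 3GS
        return CGSizeMake(1500.0f, 2000.0f);
    }
    return CGSizeZero;                              // unknown model
}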
The post is old, but it ranks near the top on Google and still has no valid answer, so here's one more option:
Solution for iOS from 4.x to 7.x
It's all done with the AV Foundation framework. After the AVCaptureSession is configured and started, you can find the video dimensions in [[[[session.inputs lastObject] ports] lastObject] formatDescription].
Here's approximate code:
AVCaptureSession *session = ...;            // configured elsewhere
AVCaptureDevice *videoCaptureDevice = ...;  // e.g. the back camera
AVCaptureDeviceInput *videoInput = ...;     // created from videoCaptureDevice
[session beginConfiguration];
if ([session canAddInput:videoInput]) {
    [session addInput:videoInput];
}
[session commitConfiguration];
[session startRunning];
// This is the clue: read the dimensions from the video port's format description.
AVCaptureInputPort *port = videoInput.ports.lastObject;
if ([port.mediaType isEqualToString:AVMediaTypeVideo]) {
    // videoDimensions is assumed to be a CMVideoDimensions instance variable.
    videoDimensions = CMVideoFormatDescriptionGetDimensions(port.formatDescription);
}
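Once videoDimensions is filled in, it can feed the affine transform the question asks about. A rough sketch, assuming a previewView outlet and portrait orientation (so the sensor's height maps to the on-screen width); the names and the fit-to-view policy are illustrative, not part of the original answer:

// Hypothetical use: a transform mapping video pixel coordinates to points
// in the preview view, e.g. for lining up overlays with detected features.
CGFloat nativeWidth  = (CGFloat)videoDimensions.height;  // portrait swaps axes
CGFloat nativeHeight = (CGFloat)videoDimensions.width;
CGFloat sx = self.previewView.bounds.size.width  / nativeWidth;
CGFloat sy = self.previewView.bounds.size.height / nativeHeight;
CGAffineTransform videoToPreview = CGAffineTransformMakeScale(sx, sy);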
Solution for iOS 8
Apple changed everything again: now you have to subscribe to the AVCaptureInputPortFormatDescriptionDidChangeNotification notification.
Here is the sample:
- (void)initSession
{
    AVCaptureSession *session = ...;
    AVCaptureDevice *videoCaptureDevice = ...;
    // videoInput must be an instance variable: the notification handler reads it later.
    videoInput = ...;
    [session beginConfiguration];
    if ([session canAddInput:videoInput]) {
        [session addInput:videoInput];
    }
    [session commitConfiguration];
    [session startRunning];
    // On iOS 8 the format description is filled in asynchronously,
    // so listen for the change notification instead of reading it right away.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(avCaptureInputPortFormatDescriptionDidChangeNotification:)
                                                 name:@"AVCaptureInputPortFormatDescriptionDidChangeNotification"
                                               object:nil];
}

- (void)avCaptureInputPortFormatDescriptionDidChangeNotification:(NSNotification *)notification
{
    AVCaptureInputPort *port = [videoInput.ports objectAtIndex:0];
    CMFormatDescriptionRef formatDescription = port.formatDescription;
    if (formatDescription) {
        // videoDimensions is assumed to be a CMVideoDimensions instance variable.
        videoDimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
    }
}
Do you really need to know the resolution of the image to set up the affine transform? Since the view is already set up to cover the width of the screen, you can alter the transform based on what fraction of the screen you want to take up, i.e. if you need the preview to be 160 pixels across, just scale it to 50% of the default. I'm doing this in an app now, and it works on new and old iPhones...
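As a concrete illustration of the fraction-of-the-screen idea, here's a minimal sketch assuming the preview comes from a UIImagePickerController (cameraViewTransform is its standard hook; the 160-point target width is just the example figure from above):

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;

// Scale the camera preview by the desired fraction of the screen width,
// without ever asking the hardware for its native resolution.
CGFloat screenWidth  = [UIScreen mainScreen].bounds.size.width;
CGFloat desiredWidth = 160.0f;                       // example target from the answer
CGFloat fraction     = desiredWidth / screenWidth;   // 0.5 on a 320-point screen
picker.cameraViewTransform = CGAffineTransformMakeScale(fraction, fraction);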
Since Apple doesn't let you talk to the hardware directly, there's not really much choice for a legitimate App Store product.
While I think snicker's answer is definitely the correct one, I thought I'd post this for fun.
You could look through the user's stored photo album on the iPhone (I'm making the bold assumption that this can be done programmatically without restrictions!). Assuming that the photos on the phone were mostly taken with the phone itself, you could then work out the average photo resolution and roll with that.
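For completeness, a rough sketch of that idea; it uses the Photos framework as a stand-in for the era-appropriate ALAssetsLibrary, and the "most common size wins" heuristic plus the photo-library permission are assumptions of mine:

#import <Photos/Photos.h>
#import <UIKit/UIKit.h>

// Guess the camera resolution from the most common pixel size among the
// user's photos. Requires photo-library authorization to return anything useful.
static CGSize guessedCameraResolution(void)
{
    PHFetchResult<PHAsset *> *assets =
        [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeImage options:nil];

    NSCountedSet *sizes = [NSCountedSet set];
    for (PHAsset *asset in assets) {
        [sizes addObject:[NSValue valueWithCGSize:
            CGSizeMake(asset.pixelWidth, asset.pixelHeight)]];
    }

    NSValue *mostCommon = nil;
    NSUInteger bestCount = 0;
    for (NSValue *size in sizes) {
        NSUInteger count = [sizes countForObject:size];
        if (count > bestCount) { bestCount = count; mostCommon = size; }
    }
    return mostCommon ? mostCommon.CGSizeValue : CGSizeZero;
}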
Why don't you want to take a picture? Use UIImagePickerController's takePicture method, instructing the user that this is a necessary calibration step. After that, look at the picture's resolution, persist the value, and delete the picture you took. From then on you'll have the resolution and can apply the transform.
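A minimal sketch of that calibration flow, assuming the view controller is the picker's delegate (the NSUserDefaults key is a made-up name):

// UIImagePickerControllerDelegate: read the captured image's size once,
// persist it, and throw the image away.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = info[UIImagePickerControllerOriginalImage];
    CGSize resolution = CGSizeMake(image.size.width * image.scale,
                                   image.size.height * image.scale);

    // Hypothetical key: store the calibrated camera resolution for later use.
    [[NSUserDefaults standardUserDefaults] setObject:NSStringFromCGSize(resolution)
                                              forKey:@"CameraResolution"];

    // takePicture doesn't save to the photo library, so just dismiss the picker.
    [picker dismissViewControllerAnimated:YES completion:nil];
}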