Augmented Reality
I want to draw a pin and information about a place on top of the camera image. Please, can anyone help me? I have done the coding in the app delegate. The code is:
overlay = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
overlay.opaque = NO;
overlay.backgroundColor = [UIColor clearColor];
[window addSubview:overlay];

#define CAMERA_TRANSFORM 1.24299

UIImagePickerController *uip;
@try {
    uip = [[[UIImagePickerController alloc] init] autorelease];
    uip.sourceType = UIImagePickerControllerSourceTypeCamera;
    uip.showsCameraControls = NO;
    uip.toolbarHidden = YES;
    uip.navigationBarHidden = YES;
    uip.wantsFullScreenLayout = YES;
    uip.cameraViewTransform = CGAffineTransformScale(uip.cameraViewTransform,
                                                     CAMERA_TRANSFORM,
                                                     CAMERA_TRANSFORM);
}
@catch (NSException *e) {
    // uip is autoreleased above, so releasing it again here would over-release it.
    uip = nil;
}
@finally {
    if (uip) {
        [overlay addSubview:[uip view]];
        [overlay release];
    }
}
This shows the camera. Now I want to detect the place and put a pin on it that shows information about that place.
Here is a more straightforward recipe to detect the presence of a camera:
BOOL isCameraAvailable = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];
99% of the job is still ahead, I'm afraid. Roughly, you need the following:
- Get the geographical location for the user.
- Get the geographical location for the point of interest (POI) you want to show. You may need to use a 3rd party library like Foursquare, Google maps, or something like that.
- Calculate the distance between the user and the POI. Treating the two points as the legs of a right triangle, the distance is the hypotenuse: d^2 = dx^2 + dy^2.
Note that the distance in spherical geometry is calculated with the haversine formula, but the loss of precision is irrelevant for small distances if we assume cartesian coordinates, so we will just do that.
- Assuming that east is 0º, get the angle from the user to the POI, which is atan2(dy, dx) (y = latitude, x = longitude). dy is, of course, the difference between the latitudes of the user and the POI.
- Get the bearing from the compass and calculate the difference between the user bearing and the angle to the POI.
- The position on screen of the object depends on the bearing and the device orientation. If the user is looking exactly at the POI, paint a label for the POI in the middle of the screen. If there is an offset from the exact angle, multiply offset * (width in pixels / horizontal field of view) to get the offset in pixels for the label representing the point. Do the same for the vertical offset.
- If there is a rotation around the device's X axis, apply a vertical offset.
- If there is a rotation on the axis Y, there will be an update in bearing from the compass.
- If there is a rotation around the Z axis and the object is near, rotate the object by the opposite angle.
- Scale the label according to the distance, with a minimum and a maximum.
To position the labels you may want to use a 3D engine, or rotate them in a circle around your device (x = cx + r*cos(angle), y = cy + r*sin(angle)) and use a billboard effect.
If that sounds like too much work, concentrate on implementing just the response to changes in bearing, using offset * (width in pixels / horizontal field of view). The horizontal field of view is the visible angle of the camera. It is about 180º for humans, 37.5º for the iPhone 3, and (if I remember right) around 45º for the iPhone 4. The width is 320 pixels, so if you are looking 10º away from your target, you have to move the label horizontally 320 * 10 / 37.5 ≈ 85 pixels away from the center.
If the readings from the compass have too much noise, add a low pass filter.
Please go through https://github.com/zac/iphonearkit. It's the best Objective-C code available for this.