
How do I determine what is touched in 3D space from the screen?

How do I use gl.gluUnproject in my OpenGL ES 1.1 Android app to determine what is selected when the user touches the screen?

My understanding is that the touch event results in a line and I have to find the first "thing" it intersects with.

Are there any tutorials on how to do this?


If you are doing 2D-to-3D picking, you need to fiddle with matrices and vectors a bit. gluUnProject is not part of the OpenGL ES 1.1 API itself, so you have to do some of the math yourself.

Ray-object intersection is the way to go then. Timmmm's answer already covers some of it, but there's more. The idea is to turn the 2D touch coordinates into a ray in 3D space; the inverses of the view and projection matrices are needed for that. Once you have the ray, you can use the ray-intersection test of your choice, and of course you need to select the closest hit object, as in Timmmm's point 4. Bounding spheres and bounding boxes are easy to implement, and the internet is full of intersection-test tutorials for them. A sketch of the ray construction follows.
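Here is a minimal sketch of that ray construction, assuming your app keeps its own copies of the (view) modelview and projection matrices it loads into OpenGL ES 1.1 as column-major float[16] arrays; the class and parameter names are my own:

```java
import android.opengl.Matrix;

/** Builds a world-space picking ray from a touch point (hypothetical helper).
 *  modelView should be the view matrix if you want the ray in world space. */
public final class TouchRay {
    public final float[] origin = new float[3];    // ray start (on the near plane)
    public final float[] direction = new float[3]; // unit direction into the scene

    public TouchRay(float touchX, float touchY, int viewWidth, int viewHeight,
                    float[] modelView, float[] projection) {
        // 1. Touch -> normalized device coordinates. Screen Y grows downward,
        //    NDC Y grows upward, so flip it.
        float ndcX = 2f * touchX / viewWidth - 1f;
        float ndcY = 1f - 2f * touchY / viewHeight;

        // 2. Invert the combined projection * modelview matrix.
        float[] pm = new float[16];
        float[] inv = new float[16];
        Matrix.multiplyMM(pm, 0, projection, 0, modelView, 0);
        if (!Matrix.invertM(inv, 0, pm, 0)) {
            throw new IllegalStateException("projection * modelView not invertible");
        }

        // 3. Unproject one point on the near plane (NDC z = -1) and one on the
        //    far plane (NDC z = +1), dividing by w each time.
        float[] near = transform(inv, ndcX, ndcY, -1f);
        float[] far  = transform(inv, ndcX, ndcY,  1f);

        // 4. Origin = near point; direction = normalized (far - near).
        float dx = far[0] - near[0], dy = far[1] - near[1], dz = far[2] - near[2];
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        origin[0] = near[0]; origin[1] = near[1]; origin[2] = near[2];
        direction[0] = dx / len; direction[1] = dy / len; direction[2] = dz / len;
    }

    private static float[] transform(float[] inv, float x, float y, float z) {
        float[] in  = { x, y, z, 1f };
        float[] out = new float[4];
        Matrix.multiplyMV(out, 0, inv, 0, in, 0);
        return new float[] { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
    }
}
```

The inverted matrix can be reused for every touch until the camera or projection changes.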

This picking tutorial is for DirectX, but you might get the idea. The ray-construction part is the most important.

Edit: Android implements its own version of gluUnProject (android.opengl.GLU.gluUnProject). It can be used to create the ray by calling it for the near and far planes (winZ = 0 and 1) and subtracting the near-plane result from the far-plane result to get the ray's direction. The ray origin is the view location. More here.
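A minimal sketch of that approach, assuming the matrices and viewport are available as plain arrays (the helper name is hypothetical):

```java
import android.opengl.GLU;

/** Hypothetical helper: returns {ox, oy, oz, dx, dy, dz} for the picking ray.
 *  modelView/projection are the float[16] matrices you loaded into GL;
 *  viewport is {x, y, width, height} as passed to glViewport. */
static float[] rayFromTouch(float touchX, float touchY,
                            float[] modelView, float[] projection, int[] viewport) {
    float[] near = new float[4];
    float[] far  = new float[4];
    // gluUnProject wants GL window coordinates: flip Y (screen Y grows down).
    float winY = viewport[3] - touchY;

    // winZ = 0 -> near plane, winZ = 1 -> far plane.
    GLU.gluUnProject(touchX, winY, 0f, modelView, 0, projection, 0, viewport, 0, near, 0);
    GLU.gluUnProject(touchX, winY, 1f, modelView, 0, projection, 0, viewport, 0, far, 0);
    // Some Android versions leave the result homogeneous; dividing by w is
    // harmless when w is already 1.
    for (int i = 0; i < 3; i++) { near[i] /= near[3]; far[i] /= far[3]; }

    // Direction = far - near, normalized; origin = the near-plane point.
    float dx = far[0] - near[0], dy = far[1] - near[1], dz = far[2] - near[2];
    float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    return new float[] { near[0], near[1], near[2], dx / len, dy / len, dz / len };
}
```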


I think for most applications you should go for the correct 3D approach: ray casting.

Take the 2D screen location selected by the user and project it into your world space. This gives a 3D ray that originates at the camera and points into the scene. Now you need to perform collision testing in 3D. In most cases this can be accomplished by reducing the objects to a set of simple geometries such as ellipsoids, spheres and boxes, for speed.

Given the input precision of handheld devices, that should already be sufficient. Note that depending on the shape of the object, you might need to use more than one basic primitive. Also, it is pointless to always use the same primitive: a bounding sphere is a very bad approximation of a very long rod, obviously.
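For the sphere case, the test is only a few lines. A minimal sketch (the ray direction is assumed to be unit length; names are my own):

```java
/** Ray vs. bounding-sphere test. Returns the distance t along the ray to the
 *  nearest hit, or -1 if the ray misses the sphere.
 *  ro = ray origin, rd = unit ray direction, c = sphere centre, r = radius. */
static float intersectSphere(float[] ro, float[] rd, float[] c, float r) {
    // Vector from the ray origin to the sphere centre.
    float ox = c[0] - ro[0], oy = c[1] - ro[1], oz = c[2] - ro[2];
    // Project it onto the ray direction.
    float tClosest = ox * rd[0] + oy * rd[1] + oz * rd[2];
    // Squared distance from the centre to the closest point on the ray.
    float d2 = (ox * ox + oy * oy + oz * oz) - tClosest * tClosest;
    if (d2 > r * r) return -1f;                 // ray passes outside the sphere
    float dt = (float) Math.sqrt(r * r - d2);   // half-chord length
    float t = tClosest - dt;                    // nearer of the two hit points
    if (t < 0f) t = tClosest + dt;              // origin is inside the sphere
    return t >= 0f ? t : -1f;                   // behind the origin -> miss
}
```

Run this against every candidate object and keep the hit with the smallest non-negative t: that is the object the user touched.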


It completely depends on what it is that you're rendering. You probably shouldn't use the picking approach, because:

a) It sounds like it will be slow, especially on Android.
b) You might want a larger 'touch area' for small objects, so the user doesn't have to touch precisely where they are.
c) It doesn't really answer the right question: it tells you "what is the top-most graphical item rendered exactly where I touched?", whereas you want to know "which game entity did the user touch, or touch near?"

As I said, it completely depends on your game. If it is a 2D or nearly-2D game then it's simple: just feed the touch coordinates into your game model. For 3D games I would suggest this simple algorithm that I just came up with on the spot (a sketch in code follows the list):

  1. Make a list of all the touchable game objects that might have been touched.
  2. Transform their centre coordinates into screen coordinates. (The maths is described here: http://www.flipcode.com/archives/Plotting_A_3D_Point_On_A_2D_Screen.shtml )
  3. Find the 2D distance to each object, and discard objects with a distance greater than your touch threshold.
  4. Find the closest object according to some distance metric. You'll have to experiment at this point; again, it depends on the nature of the game.
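A minimal sketch of those four steps, using Android's GLU.gluProject for step 2. GameObject is a hypothetical stand-in for your own entity type, with world-space centre fields x, y, z:

```java
import java.util.List;

import android.opengl.GLU;

/** Sketch of steps 1-4; touchables is the list from step 1. */
static GameObject pickNearest(float touchX, float touchY,
                              List<GameObject> touchables,
                              float[] modelView, float[] projection,
                              int[] viewport, float touchRadiusPx) {
    float[] win = new float[3];
    GameObject best = null;
    float bestDist = touchRadiusPx; // step 3: anything farther is discarded
    for (GameObject obj : touchables) {
        // Step 2: project the object's centre into window coordinates.
        GLU.gluProject(obj.x, obj.y, obj.z, modelView, 0, projection, 0,
                       viewport, 0, win, 0);
        // gluProject's Y axis grows upward; flip back to screen coordinates.
        float dx = win[0] - touchX;
        float dy = (viewport[3] - win[1]) - touchY;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        // Step 4: plain 2D pixel distance as the metric; keep the closest.
        if (dist < bestDist) {
            bestDist = dist;
            best = obj;
        }
    }
    return best; // null if nothing was within the touch threshold
}
```

A nice side effect of measuring the distance in screen pixels is that small or distant objects automatically get a touch area of the same on-screen size.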

For exact results, line-intersection tests could be used. There are algorithms for that (search for 'plane line intersection raycasting').
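The plane case is short enough to sketch here (ray direction assumed unit length; names are my own):

```java
/** Ray vs. plane test. The plane is given by a point p on it and a normal n;
 *  returns the distance t along the ray to the hit, or -1 for no hit.
 *  ro = ray origin, rd = unit ray direction. */
static float intersectPlane(float[] ro, float[] rd, float[] p, float[] n) {
    float denom = rd[0] * n[0] + rd[1] * n[1] + rd[2] * n[2];
    if (Math.abs(denom) < 1e-6f) return -1f;   // ray is parallel to the plane
    // t = ((p - ro) . n) / (rd . n)
    float t = ((p[0] - ro[0]) * n[0] + (p[1] - ro[1]) * n[1]
             + (p[2] - ro[2]) * n[2]) / denom;
    return t >= 0f ? t : -1f;                  // hit behind the origin -> miss
}
```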


One thought I had was to cache the results of calling gluProject. This might only be practical when you are dealing with a fixed camera. At the end of any change in camera perspective, I can regenerate this cache of "touchables".

This also has the added benefit of ensuring that I'm only testing input against things that will respond to being touched.
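Roughly what I have in mind, as a sketch (the Touchable type and its x, y, z fields are hypothetical placeholders for my game's entities):

```java
import java.util.ArrayList;
import java.util.List;

import android.opengl.GLU;

/** Sketch of the caching idea: re-project touchable centres once per camera
 *  change, then answer each touch with a cheap 2D lookup. */
final class TouchableCache {
    private static final class Entry {
        Touchable target;
        float screenX, screenY;
    }

    private final List<Entry> entries = new ArrayList<Entry>();

    /** Rebuild the cache; call once after any change in camera perspective. */
    void rebuild(List<Touchable> touchables, float[] modelView,
                 float[] projection, int[] viewport) {
        entries.clear();
        float[] win = new float[3];
        for (Touchable t : touchables) {
            // Project the world-space centre into window coordinates.
            GLU.gluProject(t.x, t.y, t.z, modelView, 0, projection, 0,
                           viewport, 0, win, 0);
            Entry e = new Entry();
            e.target = t;
            e.screenX = win[0];
            e.screenY = viewport[3] - win[1]; // flip back to screen coordinates
            entries.add(e);
        }
    }

    /** Cheap per-touch lookup against the cached 2D positions. */
    Touchable hitTest(float touchX, float touchY, float radiusPx) {
        Touchable best = null;
        float bestDist = radiusPx;
        for (Entry e : entries) {
            float dx = e.screenX - touchX, dy = e.screenY - touchY;
            float dist = (float) Math.sqrt(dx * dx + dy * dy);
            if (dist < bestDist) {
                bestDist = dist;
                best = e.target;
            }
        }
        return best;
    }
}
```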

I'd appreciate any thoughts on this approach!
