Translating mouse X & Y to 3D model coordinates
I'm building a simple 3D drag-and-drop interface in Processing, and want to detect when the mouse rolls over an object. I imagine I need to do some matrix transformations on the 3D model coordinates to bring them into screen space, and so on...
I have a simple version of this working; the problem is that when the camera is moved around the scene, the coordinates I get go haywire.
So how do I translate the tile coordinates into screen space (since screenX() and screenY() aren't giving me what I expect)?
UPDATE: I eventually found two examples from the Processing site on how to do this. Thanks to villintehaspam.
http://processing.org/hacks/hacks:picking
This problem is called picking. Search for mouse picking and you get lots and lots of hits.
Basic theory is this:
- Get x,y coords from the mouse click.
- Convert these to normalized device coordinates, i.e. -1 <= x <= 1, -1 <= y <= 1, with z at the near (or far) clip plane, assuming a standard projection.
- Transform these coordinates by the inverse of the combined projection and view matrices to get world coordinates.
- You now have a ray from the camera position, with the direction towards the world coordinates you just got.
- Make a ray-object intersection test with the objects you want to consider. Choose the object that intersects the ray that is closest to the ray origin (camera position).
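The steps above can be sketched in plain Java. This is a hypothetical minimal example, not the code from the Processing picking hacks linked above: it converts a mouse position to normalized device coordinates and then runs the final ray-object intersection step against a sphere (the simplest intersection test; real scenes would test boxes, triangles, etc.). The unprojection step (multiplying by the inverse view-projection matrix) is assumed to have already produced the ray origin and direction.

```java
public class Picking {

    // Step 2: mouse pixel coords -> normalized device coords (-1..1),
    // with y flipped because screen y grows downward.
    static double[] toNdc(double mouseX, double mouseY, double width, double height) {
        return new double[] { 2.0 * mouseX / width - 1.0,
                              1.0 - 2.0 * mouseY / height };
    }

    // Step 5: ray-sphere intersection.
    // o = ray origin, d = normalized ray direction, c = sphere center, r = radius.
    // Returns the distance t along the ray to the nearest hit, or -1 for a miss.
    static double raySphere(double[] o, double[] d, double[] c, double r) {
        double[] oc = { o[0] - c[0], o[1] - c[1], o[2] - c[2] };
        double b = oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2];   // dot(oc, d)
        double cc = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - r * r;
        double disc = b * b - cc;                                 // discriminant
        if (disc < 0) return -1;                                  // ray misses sphere
        double t = -b - Math.sqrt(disc);                          // nearer root
        return t >= 0 ? t : -1;                                   // hit behind origin = miss
    }

    public static void main(String[] args) {
        // A click in the middle of an 800x600 window maps to NDC (0, 0).
        double[] ndc = toNdc(400, 300, 800, 600);
        System.out.println(ndc[0] + ", " + ndc[1]);               // prints 0.0, 0.0

        // Camera at z=5 looking down -z at a unit sphere at the origin:
        // the ray hits the front of the sphere at distance 4.
        double[] origin = { 0, 0, 5 }, dir = { 0, 0, -1 }, center = { 0, 0, 0 };
        System.out.println(raySphere(origin, dir, center, 1.0));  // prints 4.0
    }
}
```

When several objects pass the intersection test, keep the one with the smallest positive t, which is the object closest to the camera along the ray.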