Finding the surface normal in OpenGL
I would like to know whether I can get the normal of an object's surface from window coordinates. The gluUnProject() function converts window coordinates to object coordinates in 3D space. Likewise, I need a function that gives me the normal of the surface under my mouse pointer. I would be more than happy if OpenGL had a function that could provide this. I would prefer not to do it by intersecting a ray with the object's surface triangles.
I don't think this is possible directly. Why would OpenGL store the normal for every pixel? It doesn't need to, so it's unlikely you can retrieve it. It won't be easy, I'm afraid.
You could write a shader that outputs normals encoded as RGB instead of the face color. If you redraw your scene with that shader active, it will render all the normals. You can then simply look up the color where you want it and recover the normal.
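A minimal sketch of such a shader pair, written in legacy GLSL so it slots into a fixed-function-style program (shown as C strings ready to hand to glShaderSource; the variable names are illustrative, not from the original answer):

    // Vertex shader: forward the eye-space normal to the fragment stage.
    const char* normalVS =
        "varying vec3 vNormal;\n"
        "void main() {\n"
        "    vNormal = gl_NormalMatrix * gl_Normal;\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}\n";

    // Fragment shader: remap the normal from [-1, 1] to [0, 1] and write it as a color.
    const char* normalFS =
        "varying vec3 vNormal;\n"
        "void main() {\n"
        "    vec3 n = normalize(vNormal);\n"
        "    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);\n"
        "}\n";

After re-rendering the scene with this program bound, reading the pixel under the cursor with glReadPixels and computing rgb * 2 - 1 recovers the normal.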
First, some clarification: gluUnProject performs the inverse projection of the OpenGL fixed-function pipeline. If shaders are involved, things get complicated. As a starting point for gluUnProject, one normally retrieves the depth-buffer value at the desired position (using glReadPixels). It is trivial to read a whole block (say a 3x3 array) of depth values and unproject them, leaving you with 9 points, one of them in the center. Using the points to the left and above would already yield a valid normal (so in theory a 2x2 array would suffice), but let's filter it by taking the whole neighborhood into account.
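A minimal sketch of that readback step, assuming the fixed-function matrices are current and that winX/winY (illustrative names) are the cursor position already converted to OpenGL window coordinates with the origin at the lower left:

    // Requires <GL/glu.h> (which pulls in <GL/gl.h>).
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    // Read a 3x3 block of depth values centered on the cursor.
    // Row 0 of the array is the bottom row of the block.
    GLfloat depth[3][3];
    glReadPixels((GLint)winX - 1, (GLint)winY - 1, 3, 3,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth[0][0]);

    // Unproject each sample, giving 9 points in the space the matrices describe.
    GLdouble pts[3][3][3];
    for (int dy = 0; dy < 3; ++dy)
        for (int dx = 0; dx < 3; ++dx)
            gluUnProject(winX - 1 + dx, winY - 1 + dy, depth[dy][dx],
                         modelview, projection, viewport,
                         &pts[dy][dx][0], &pts[dy][dx][1], &pts[dy][dx][2]);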
Let's number them this way, according to the array they came from:
1 8 7
2 0 6
3 4 5
All you have to do is calculate the normals of the planes defined by the triangles (0,1,2), (0,2,3), ..., (0,7,8), (0,8,1), take the sum of those normals (optionally weighted by the inverse of the sample distance), and normalize the resulting vector. Voila, you have determined the world-space normal at the selected point. To get back into object space, multiply by the transposed inverse of the object's matrix (why the transposed inverse? Normals are transformed by the inverse transpose of the modelview matrix, see the OpenGL Programming Guide appendix for an explanation of why, and the inverse of that is... well, I think you can figure that out).
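Continuing the sketch above, the 3x3 block maps onto this numbering (keeping in mind that glReadPixels returns rows bottom-up), and the fan of triangles around the center point gives the filtered normal. This version uses an unweighted sum, which is one of the options the answer mentions:

    #include <cmath>  // std::sqrt

    // pts[dy][dx] holds the 9 unprojected points from the previous sketch.
    struct Vec3 { double x, y, z; };
    auto P = [&](int dy, int dx) {
        return Vec3{ pts[dy][dx][0], pts[dy][dx][1], pts[dy][dx][2] };
    };

    // Center point 0 and ring points 1..8 in the order of the diagram:
    // 1 = upper-left, 2 = left, 3 = lower-left, 4 = bottom, 5 = lower-right,
    // 6 = right, 7 = upper-right, 8 = top.
    Vec3 c = P(1, 1);
    Vec3 ring[8] = { P(2,0), P(1,0), P(0,0), P(0,1), P(0,2), P(1,2), P(2,2), P(2,1) };

    auto sub = [](const Vec3& a, const Vec3& b) {
        return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z };
    };
    auto cross = [](const Vec3& a, const Vec3& b) {
        return Vec3{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    };

    Vec3 n{ 0.0, 0.0, 0.0 };
    for (int i = 0; i < 8; ++i) {
        // Triangle (0, i+1, i+2): the center plus two consecutive ring points,
        // wrapping around after point 8.
        Vec3 t = cross(sub(ring[i], c), sub(ring[(i + 1) % 8], c));
        n.x += t.x; n.y += t.y; n.z += t.z;
    }
    double len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
    // n is the filtered world-space normal; to go to object space, multiply by
    // the transposed inverse of the object's transform as described above.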
The math isn't hard once you have the three points. I would suggest taking a look at the OpenGL SuperBible, 4th edition, code samples (Macintosh and Windows). math3d.* in the shared directory gives you the code to do this; search for m3dFindNormal.
As for storing the normals, that will be up to you.
There are a couple of ways I can think of to do this.
Possibility 1
The user clicks in window space, so you have the window coordinates of the mouse click, (x, y). Now add one pixel in the x direction, (x+1/width, y), and one pixel in the y direction, (x, y+1/height), where width and height are the pixel dimensions of the display. Use gluUnProject on all three of these coordinates; that gives you three points in object space, {A, B, C}. Once you have three points in object space, the normal calculation is relatively simple:
A = ObjectCoordinates(x+1/width, y)
B = ObjectCoordinates(x, y)
C = ObjectCoordinates(x, y+1/height)
BA = (A-B)/|A-B|
BC = (C-B)/|C-B|
N = BA cross BC
where |X| is the magnitude of the vector X and N is the resulting normal.
This may not be fast, and it will have problems when A, B, and C don't all land on the same triangle (which can happen when you click on an object that is very far away), but it should give an acceptable approximation.
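A minimal sketch of Possibility 1, assuming legacy GLU. The helper names are illustrative; the depth value is read per pixel so gluUnProject has a winZ to work with (the description above leaves that step implicit), and one-pixel offsets in window coordinates stand in for the 1/width and 1/height steps:

    #include <cmath>
    #include <GL/glu.h>

    struct Vec3 { double x, y, z; };

    // Hypothetical helper: unproject one window-space pixel into object space.
    static Vec3 unprojectPixel(double wx, double wy)
    {
        GLint viewport[4];
        GLdouble modelview[16], projection[16];
        glGetIntegerv(GL_VIEWPORT, viewport);
        glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
        glGetDoublev(GL_PROJECTION_MATRIX, projection);

        GLfloat z;
        glReadPixels((GLint)wx, (GLint)wy, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);

        Vec3 p;
        gluUnProject(wx, wy, z, modelview, projection, viewport, &p.x, &p.y, &p.z);
        return p;
    }

    // x, y: click position in OpenGL window coordinates (origin at bottom-left).
    static Vec3 normalAtPixel(double x, double y)
    {
        Vec3 A = unprojectPixel(x + 1.0, y);        // one pixel to the right
        Vec3 B = unprojectPixel(x,       y);        // the clicked pixel
        Vec3 C = unprojectPixel(x,       y + 1.0);  // one pixel up

        Vec3 BA = { A.x - B.x, A.y - B.y, A.z - B.z };
        Vec3 BC = { C.x - B.x, C.y - B.y, C.z - B.z };
        Vec3 N  = { BA.y * BC.z - BA.z * BC.y,       // BA cross BC
                    BA.z * BC.x - BA.x * BC.z,
                    BA.x * BC.y - BA.y * BC.x };

        double len = std::sqrt(N.x * N.x + N.y * N.y + N.z * N.z);
        if (len > 0.0) { N.x /= len; N.y /= len; N.z /= len; }
        return N;
    }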
Possibility 2
I am not as sure of this next possibility, but here goes. Create a fragment shader and a vertex shader, and set up a two-pass rendering system: the first pass uses the newly created shaders, and the second pass actually renders your scene. This may slow your program down significantly, because you do two full scene renders per frame.
On the first pass, your vertex shader passes the vertex normal to your fragment shader, and the fragment shader assigns the fragment color (r, g, b) from the normal coordinates it receives (x, y, z). The first pass renders to a texture using these shaders; the second pass renders to the frame buffer. When the user clicks, you take the window-space coordinates of the click, convert them to (u, v) texture coordinates, and sample the texture created by the first pass.
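A minimal sketch of the lookup step, under these assumptions (none of which are spelled out above): the first pass rendered into a framebuffer object named normalFBO, the same size as the window, with a floating-point color attachment so the shader can write the normal's xyz into the color unchanged; requires GL 3.0+ or ARB_framebuffer_object.

    float normal[3];
    glBindFramebuffer(GL_READ_FRAMEBUFFER, normalFBO);
    // Window-system mouse coordinates usually put the origin at the top-left,
    // while glReadPixels expects the origin at the bottom-left, hence the flip.
    glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RGB, GL_FLOAT, normal);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    // normal[0..2] now holds the surface normal written by the first pass.

If the color attachment is a plain 8-bit texture instead, the shader has to pack the normal as n * 0.5 + 0.5 and the readback has to undo that.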
Again, this solution will be slower, but it will most likely give you more accurate data, depending on the texture size you use.
Here is a slightly outdated but free book on GLSL, the OpenGL Shading Language; it describes how to put some of these shaders together.
Here is a tutorial on loading and compiling shaders.
Here is pseudo-code for calculating a normal.
Cheers, Ned