
Why does sign matter in an OpenGL projection matrix?

I'm working on a computer vision problem that requires rendering a 3D model using a calibrated camera. I'm writing a function that breaks the calibrated camera matrix into a modelview matrix and a projection matrix, but I've run into an interesting phenomenon in OpenGL that defies explanation (at least by me).

The short description is that negating the projection matrix results in nothing being rendered (at least in my experience). I would expect that multiplying the projection matrix by any scalar would have no effect, because it transforms homogeneous coordinates, which are unaffected by scaling.

Below is my reasoning why I find this to be unexpected; maybe someone can point out where my reasoning is flawed.

Imagine the following perspective projection matrix, which gives correct results:

    [ a b c 0 ]
P = [ 0 d e 0 ]
    [ 0 0 f g ]
    [ 0 0 h 0 ]

Multiplying this by camera coordinates gives homogeneous clip coordinates:

[x_c]   [ a b c 0 ]   [X_e]
[y_c] = [ 0 d e 0 ] * [Y_e]
[z_c]   [ 0 0 f g ]   [Z_e]
[w_c]   [ 0 0 h 0 ]   [W_e]

Finally, to get normalized device coordinates, we divide x_c, y_c, and z_c by w_c:

[x_n]   [x_c/w_c]
[y_n] = [y_c/w_c]
[z_n]   [z_c/w_c]

Now, if we negate P, the resulting clip coordinates are negated, but since they are homogeneous coordinates, multiplying by any scalar (e.g. -1) shouldn't have any effect on the resulting normalized device coordinates. However, in OpenGL, negating P results in nothing being rendered. I can multiply P by any positive scalar and get the exact same rendered result, but as soon as I multiply by a negative scalar, nothing renders. What is going on here?
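
To make the reasoning concrete, here is a minimal sketch in plain C (with made-up clip coordinates, not taken from any real render) showing that the perspective divide yields identical normalized device coordinates for P and -P:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical clip coordinates, as if produced by P * eye_coords. */
        double clip[4] = { 2.0,  1.0,  4.0,  5.0};  /* (x_c, y_c, z_c, w_c) */
        double neg[4]  = {-2.0, -1.0, -4.0, -5.0};  /* the same point under -P */

        /* The perspective divide cancels the shared -1 factor... */
        printf("NDC from  P: %g %g %g\n",
               clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]);
        printf("NDC from -P: %g %g %g\n",
               neg[0] / neg[3], neg[1] / neg[3], neg[2] / neg[3]);

        /* ...but the sign of w_c itself differs, which matters later. */
        printf("w_c: P -> %g, -P -> %g\n", clip[3], neg[3]);
        return 0;
    }

Both divides print the same normalized device coordinates, which is exactly why the rendered difference surprises me.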

Thanks!


Well, the gist of it is that clipping is done by testing:

-w_c <= x_c <= w_c
-w_c <= y_c <= w_c
-w_c <= z_c <= w_c

Multiplying by a negative value breaks this test: if w_c < 0, then -w_c > w_c, so the interval [-w_c, w_c] is empty and every vertex gets clipped.
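
A small sketch in plain C (the helper name is hypothetical, not an OpenGL call) makes the failure mode explicit:

    #include <stdbool.h>
    #include <stdio.h>

    /* The clip-volume test from the inequalities above. */
    static bool inside_clip_volume(double x, double y, double z, double w) {
        return -w <= x && x <= w &&
               -w <= y && y <= w &&
               -w <= z && z <= w;
    }

    int main(void) {
        /* A visible point: w_c > 0 and all components within [-w_c, w_c]. */
        printf("%d\n", inside_clip_volume( 2.0,  1.0,  4.0,  5.0)); /* prints 1 */
        /* The same point under -P: w_c < 0, so [-w_c, w_c] is empty. */
        printf("%d\n", inside_clip_volume(-2.0, -1.0, -4.0, -5.0)); /* prints 0 */
        return 0;
    }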


I just found this tidbit, which makes progress toward an answer:

From the Red Book, Appendix G:

Avoid using negative w vertex coordinates and negative q texture coordinates. OpenGL might not clip such coordinates correctly and might make interpolation errors when shading primitives defined by such coordinates.

Negating the projection matrix results in a negative w clip coordinate, and apparently OpenGL doesn't like this. But can anyone explain WHY OpenGL doesn't handle this case?

reference: http://glprogramming.com/red/appendixg.html


Reasons I can think of:

  • By negating the projection matrix, the depth values will no longer fall between the zNear and zFar planes of the view frustum (which are necessarily greater than 0).
  • To create window coordinates, the normalized device coordinates are translated/scaled by the viewport. So, if you've used a negative scalar for the clip coordinates, the (now inverted) normalized device coordinates map to window coordinates that are off of your window (to the left and below, if you will); see the sketch after this list.
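
For reference, here is a sketch of the standard NDC-to-window mapping in plain C (assuming a hypothetical 800x600 viewport at the origin, as set by glViewport), illustrating how inverted normalized device coordinates mirror a point toward the lower-left:

    #include <stdio.h>

    /* Standard NDC -> window mapping for a viewport at (x0, y0)
     * with size (width, height). */
    static void ndc_to_window(double xn, double yn,
                              double x0, double y0,
                              double width, double height,
                              double *xw, double *yw) {
        *xw = x0 + (xn + 1.0) * width  / 2.0;
        *yw = y0 + (yn + 1.0) * height / 2.0;
    }

    int main(void) {
        double xw, yw;
        /* A point toward the top-right of an 800x600 viewport... */
        ndc_to_window( 0.5,  0.5, 0.0, 0.0, 800.0, 600.0, &xw, &yw);
        printf("(%.0f, %.0f)\n", xw, yw);  /* (600, 450) */
        /* ...and with the NDC inverted: mirrored to the lower-left. */
        ndc_to_window(-0.5, -0.5, 0.0, 0.0, 800.0, 600.0, &xw, &yw);
        printf("(%.0f, %.0f)\n", xw, yw);  /* (200, 150) */
        return 0;
    }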

Also, since you mentioned using a camera matrix and that you have negated the projection matrix, I have to ask... which parts of the camera matrix are you applying to which OpenGL matrices? Changing the projection matrix with anything other than near/far/fovy/aspect causes all sorts of problems in the depth buffer, including anything that uses z (depth testing, face culling, etc.).

The OpenGL FAQ section on transformations has some more details.
