
raycasting: how to properly apply a projection matrix?

I am currently working on some raycasting in GLSL, which works fine. Anyway, I now want to go from an orthographic projection to a perspective projection, but I am not sure how to do so properly. Are there any good links on how to use a projection matrix with raycasting? I am not even sure what I have to apply the matrix to (probably to the ray direction somehow?). Right now I do it like this (pseudocode):

vec3 rayDir = vec3(0.0, 0.0, -1.0); // down the negative z axis, all rays parallel

but now I would like to use a projection matrix that works similarly to the gluPerspective function, so that I can simply define an aspect ratio, fov, and near and far planes. So basically, can anybody provide me with a chunk of code that sets up a projection matrix similar to what gluPerspective does? And secondly, tell me whether it is correct to multiply it with the ray direction?


For raytracing in the same scene as a standard render, I have found that the following works for getting a scene-space ray from screen coordinates (e.g. render a full-screen quad from [-1,-1] to [1,1], or some sub-area within that range):

Vertex Shader

uniform mat4 invprojview;
uniform float near;
uniform float far;

attribute vec2 pos; // from [-1,-1] to [1,1]

varying lowp vec3 origin;
varying lowp vec3 ray;

void main() {
    gl_Position = vec4(pos, 0.0, 1.0);
    origin = (invprojview * vec4(pos, -1.0, 1.0) * near).xyz;
    ray = (invprojview * vec4(pos * (far - near), far + near, far - near)).xyz;

    // equivalent calculation:
    // ray = (invprojview * (vec4(pos, 1.0, 1.0) * far - vec4(pos, -1.0, 1.0) * near)).xyz
}

Fragment Shader

varying lowp vec3 origin;
varying lowp vec3 ray;

void main() {
    lowp vec3 rayDir = normalize(ray);
    // Do raytracing from origin in direction rayDir
}

Note that you need to provide the inverted projection-view matrix, as well as the near and far clipping distances. I'm sure there's a way to get those clipping distances from the matrix, but I haven't figured out how.

This will define a ray which starts at the near plane, not the camera's position. This gives the advantage of clipping at the same position that OpenGL will clip triangles, making your ray-traced object match the scene. Since the ray variable will be the correct length to reach the far plane, you can also clip there too.
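The same reconstruction can also be done on the CPU, e.g. for mouse picking. Here is a minimal JavaScript sketch of the same two expressions, assuming a flat, column-major 16-element matrix like the one used in the WebGL example further down; the helper names are illustrative, not a standard API:

```javascript
// Multiply a column-major 4x4 matrix (flat array of 16) by a vec4.
function mat4MulVec4(m, v) {
    var r = [0, 0, 0, 0];
    for (var i = 0; i < 4; ++i)
        r[i] = m[i] * v[0] + m[4 + i] * v[1] + m[8 + i] * v[2] + m[12 + i] * v[3];
    return r;
}

// Reconstruct a scene-space ray from a screen position in [-1,1]^2,
// given the inverted projection-view matrix and the clip distances.
// Mirrors the two vertex-shader expressions above.
function screenRay(invprojview, pos, near, far) {
    var o = mat4MulVec4(invprojview, [pos[0], pos[1], -1.0, 1.0]);
    var origin = [o[0] * near, o[1] * near, o[2] * near];
    var d = mat4MulVec4(invprojview,
        [pos[0] * (far - near), pos[1] * (far - near), far + near, far - near]);
    return { origin: origin, ray: [d[0], d[1], d[2]] };
}
```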

As for getting a perspective matrix in the first place (and understanding the mathematics behind it), I always use this reference page:

http://www.songho.ca/opengl/gl_projectionmatrix.html

I recommend looking through the derivation on that site, but in case it becomes unavailable here is the final projection matrix definition:

2n/(r-l)      0      (r+l)/(r-l)      0
    0     2n/(t-b)   (t+b)/(t-b)      0
    0         0     -(f+n)/(f-n)  -2fn/(f-n)
    0         0          -1           0
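As a sanity check, that definition can be written out as a small function building the matrix in the flat, column-major layout OpenGL expects (the function name is an illustration, not a standard API):

```javascript
// Build the OpenGL perspective frustum matrix from the frustum bounds,
// as a flat, column-major array of 16 floats.
function frustumMatrix(l, r, b, t, n, f) {
    return [
        2*n/(r-l),   0,           0,             0,  // column 0
        0,           2*n/(t-b),   0,             0,  // column 1
        (r+l)/(r-l), (t+b)/(t-b), -(f+n)/(f-n), -1,  // column 2
        0,           0,           -2*f*n/(f-n),  0   // column 3
    ];
}
```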


To shoot rays out into the scene, you want to start by putting yourself (mentally) into the world after the projection matrix has been applied. This means that the view frustum is now a 2x2x2 box, known as the canonical view volume. (The opposing corners of the box are (-1, -1, -1) and (1, 1, 1); in OpenGL the near plane maps to z = -1 and the far plane to z = 1.) The rays you generate will (in this post-projection world) start on the near plane and hit the far plane. The "destination" of your first ray should be (-1, 1, 1), the upper-left-hand corner of the far clipping plane. (Subsequent rays' "destinations" are calculated based on the resolution of your viewport.)

Now that you have this ray in the canonical view volume, you need to get it into standard world coordinates. How do you do this? Simple: multiply by the inverse of the projection matrix to get back to eye space, and then by the inverse of the view matrix to get to world space. This will put your rays into the same coordinate system as the objects in your scene, making ray collision testing nice and easy.


In a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points on the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).


The Perspective Projection Matrix looks like this:

r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)   -1    
0              0              -2*f*n/(f-n)    0

where:

a  = w / h   (the aspect ratio of the viewport)
ta = tan( fov_y / 2 )

2 * n / (r-l) = 1 / (ta * a)
2 * n / (t-b) = 1 / ta

If the projection is symmetric, where the line of sight is in the center of the viewport and the field of view is not displaced, then the matrix can be simplified:

1/(ta*a)  0     0               0
0         1/ta  0               0
0         0     -(f+n)/(f-n)   -1    
0         0     -2*f*n/(f-n)    0
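To see why the simplification holds, note that a symmetric frustum has top = near * tan(fov_y/2) and right = top * aspect, so the two general entries reduce to the simplified ones. A quick numeric sketch (the helper name is illustrative):

```javascript
// For a symmetric frustum, derive the bounds from fov and aspect and
// compare the general matrix entries against the simplified ones.
// Returns the two residuals, which should both be ~0.
function checkSymmetric(fovYDeg, aspect, near) {
    var ta = Math.tan(fovYDeg * Math.PI / 360); // tan(fov_y / 2)
    var top = near * ta, right = top * aspect;
    var general0 = 2 * near / (right - (-right)); // == 1 / (ta * aspect)
    var general1 = 2 * near / (top - (-top));     // == 1 / ta
    return [general0 - 1 / (ta * aspect), general1 - 1 / ta];
}
```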


The following function will calculate the same projection matrix as gluPerspective does:

#include <array>
#include <cmath>

const float cPI = 3.14159265f;
float ToRad( float deg ) { return deg * cPI / 180.0f; }

using TVec4  = std::array< float, 4 >;
using TMat44 = std::array< TVec4, 4 >;

TMat44 Perspective( float fov_y, float aspect, float near, float far )
{
    float fn  = far + near;
    float f_n = far - near;
    float r   = aspect;
    float t   = 1.0f / tan( ToRad( fov_y ) / 2.0f );

    return TMat44{ 
        TVec4{ t / r, 0.0f,  0.0f,                 0.0f },
        TVec4{ 0.0f,  t,     0.0f,                 0.0f },
        TVec4{ 0.0f,  0.0f, -fn / f_n,            -1.0f },
        TVec4{ 0.0f,  0.0f, -2.0f*far*near / f_n,  0.0f }
    };
}


See further:

  • Perspective projection and view matrix: Both depth buffer and triangle face orientation are reversed in OpenGL
  • How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?


WebGL example:

<script type="text/javascript">

camera_vert =
"precision mediump float; \n" +
"attribute vec3 inPos; \n" +
"attribute vec3 inCol; \n" +
"varying   vec3 vertCol;" +
"uniform   mat4 u_projectionMat44;" +
"uniform   mat4 u_viewMat44;" +
"uniform   mat4 u_modelMat44;" +
"void main()" +
"{" +
"    vertCol       = inCol;" +
"    vec4 modelPos = u_modelMat44 * vec4( inPos, 1.0 );" +
"    vec4 viewPos  = u_viewMat44 * modelPos;" +
"    gl_Position   = u_projectionMat44 * viewPos;" +
"}";

camera_frag =
"precision mediump float; \n" +
"varying vec3 vertCol;" +
"void main()" +
"{" +
"    gl_FragColor = vec4( vertCol, 1.0 );" +
"}";

glArrayType = typeof Float32Array !="undefined" ? Float32Array : ( typeof WebGLFloatArray != "undefined" ? WebGLFloatArray : Array );

function IdentityMat44() {
  var a=new glArrayType(16);
  a[0]=1;a[1]=0;a[2]=0;a[3]=0;a[4]=0;a[5]=1;a[6]=0;a[7]=0;a[8]=0;a[9]=0;a[10]=1;a[11]=0;a[12]=0;a[13]=0;a[14]=0;a[15]=1;
  return a;
};

function Cross( a, b ) { return [ a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0], 0.0 ]; }
function Dot( a, b ) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
function Normalize( v ) {
    var len = Math.sqrt( v[0] * v[0] + v[1] * v[1] + v[2] * v[2] );
    return [ v[0] / len, v[1] / len, v[2] / len ];
}

var Camera = {};
Camera.create = function() {
    this.pos    = [0, 8, 0.5];
    this.target = [0, 0, 0];
    this.up     = [0, 0, 1];
    this.fov_y  = 90;
    this.vp     = [800, 600];
    this.near   = 0.5;
    this.far    = 100.0;
}
Camera.Perspective = function() {
    var fn = this.far + this.near;
    var f_n = this.far - this.near;
    var r = this.vp[0] / this.vp[1];
    var t = 1 / Math.tan( Math.PI * this.fov_y / 360 );
    var m = IdentityMat44();
    m[0]  = t/r; m[1]  = 0; m[2]  =  0;                              m[3]  = 0;
    m[4]  = 0;   m[5]  = t; m[6]  =  0;                              m[7]  = 0;
    m[8]  = 0;   m[9]  = 0; m[10] = -fn / f_n;                       m[11] = -1;
    m[12] = 0;   m[13] = 0; m[14] = -2 * this.far * this.near / f_n; m[15] =  0;
    return m;
}
function ToVP( v ) { return [ v[1], v[2], -v[0] ]; }
Camera.LookAt = function() {
    var p = ToVP( this.pos ), t = ToVP( this.target ), u = ToVP( this.up );
    var mx = Normalize( [ t[0]-p[0], t[1]-p[1], t[2]-p[2] ] );
    var my = Normalize( Cross( u, mx ) );
    var mz = Normalize( Cross( mx, my ) );
    var eyeInv = [ -this.pos[0], -this.pos[1], -this.pos[2] ];
    var tx = Dot( eyeInv, [mx[0], my[0], mz[0]] );
    var ty = Dot( eyeInv, [mx[1], my[1], mz[1]] );
    var tz = Dot( eyeInv, [mx[2], my[2], mz[2]] ); 
    var m = IdentityMat44();
    m[0]  = mx[0]; m[1]  = mx[1]; m[2]  = mx[2]; m[3]  = 0;
    m[4]  = my[0]; m[5]  = my[1]; m[6]  = my[2]; m[7]  = 0;
    m[8]  = mz[0]; m[9]  = mz[1]; m[10] = mz[2]; m[11] = 0;
    m[12] = tx;    m[13] = ty;    m[14] = tz;    m[15] = 1; 
    return m;
}

// shader program object
var ShaderProgram = {};
ShaderProgram.Create = function( shaderList, uniformNames ) {
    var shaderObjs = [];
    for ( var i_sh = 0; i_sh < shaderList.length; ++ i_sh ) {
        var shaderObj = this.CompileShader( shaderList[i_sh].source, shaderList[i_sh].stage );
        if ( shaderObj == 0 )
          return 0;
        shaderObjs.push( shaderObj );
    }
    if ( !this.LinkProgram( shaderObjs ) )
      return 0;
    this.uniformLocation = {};
    for ( var i_n = 0; i_n < uniformNames.length; ++ i_n ) {
        var name = uniformNames[i_n];
        this.uniformLocation[name] = gl.getUniformLocation( this.prog, name );
    }
    return this.prog;
}
ShaderProgram.Use = function() { gl.useProgram( this.prog ); } 
ShaderProgram.SetUniformMat44 = function( name, mat ) { gl.uniformMatrix4fv( this.uniformLocation[name], false, mat ); }
ShaderProgram.CompileShader = function( source, shaderStage ) {
    var shaderObj = gl.createShader( shaderStage );
    gl.shaderSource( shaderObj, source );
    gl.compileShader( shaderObj );
    return gl.getShaderParameter( shaderObj, gl.COMPILE_STATUS ) ? shaderObj : 0;
} 
ShaderProgram.LinkProgram = function( shaderObjs ) {
    this.prog = gl.createProgram();
    for ( var i_sh = 0; i_sh < shaderObjs.length; ++ i_sh )
        gl.attachShader( this.prog, shaderObjs[i_sh] );
    gl.linkProgram( this.prog );
    return gl.getProgramParameter( this.prog, gl.LINK_STATUS ) ? true : false;
}
        

function drawScene(){

    var canvas = document.getElementById( "camera-canvas" );
    Camera.create();
    Camera.vp = [canvas.width, canvas.height];
    var currentTime = Date.now();   
    var deltaMS = currentTime - startTime;
    Camera.pos = EllipticalPosition( 7, 4, CalcAng( currentTime, 10.0 ) );
        
    gl.viewport( 0, 0, canvas.width, canvas.height );
    gl.enable( gl.DEPTH_TEST );
    gl.clearColor( 0.0, 0.0, 0.0, 1.0 );
    gl.clear( gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT );
    ShaderProgram.Use();
    ShaderProgram.SetUniformMat44( "u_projectionMat44", Camera.Perspective() );
    ShaderProgram.SetUniformMat44( "u_viewMat44", Camera.LookAt() );
        
    ShaderProgram.SetUniformMat44( "u_modelMat44", IdentityMat44() );
    gl.enableVertexAttribArray( prog.inPos );
    gl.bindBuffer( gl.ARRAY_BUFFER, buf.pos );
    gl.vertexAttribPointer( prog.inPos, 3, gl.FLOAT, false, 0, 0 ); 
    gl.enableVertexAttribArray( prog.inCol );
    gl.bindBuffer( gl.ARRAY_BUFFER, buf.col );
    gl.vertexAttribPointer( prog.inCol, 3, gl.FLOAT, false, 0, 0 ); 
    gl.bindBuffer( gl.ELEMENT_ARRAY_BUFFER, buf.inx );
    gl.drawElements( gl.TRIANGLES, 12, gl.UNSIGNED_SHORT, 0 );
    gl.disableVertexAttribArray( prog.inPos );
    gl.disableVertexAttribArray( prog.inCol );
}

var startTime;
function Fract( val ) { 
    return val - Math.trunc( val );
}
function CalcAng( currentTime, intervall ) {
    return Fract( (currentTime - startTime) / (1000*intervall) ) * 2.0 * Math.PI;
}
function CalcMove( currentTime, intervall, range ) {
    var pos = Fract( (currentTime - startTime) / (1000*intervall) ) * 2.0;
    pos = pos < 1.0 ? pos : (2.0 - pos);
    return range[0] + (range[1] - range[0]) * pos;
}    
function EllipticalPosition( a, b, angRag ) {
    var a_b = a * a - b * b;
    var ea = (a_b <= 0) ? 0 : Math.sqrt( a_b );
    var eb = (a_b >= 0) ? 0 : Math.sqrt( -a_b );
    return [ a * Math.sin( angRag ) - ea, b * Math.cos( angRag ) - eb, 0 ];
}

var gl;
var prog;
var buf = {};
function cameraStart() {

    var canvas = document.getElementById( "camera-canvas");
    gl = canvas.getContext( "experimental-webgl" );
    if ( !gl )
      return;

    prog = ShaderProgram.Create( 
      [ { source : camera_vert, stage : gl.VERTEX_SHADER },
        { source : camera_frag, stage : gl.FRAGMENT_SHADER }
      ],
      [ "u_projectionMat44", "u_viewMat44", "u_modelMat44"] );
    if ( prog == 0 )
        return;
    prog.inPos = gl.getAttribLocation( prog, "inPos" );
    prog.inCol = gl.getAttribLocation( prog, "inCol" );

    var sin120 = 0.8660254;
    var pos = [ 0.0, 0.0, 1.0, 0.0, -sin120, -0.5, sin120 * sin120, 0.5 * sin120, -0.5, -sin120 * sin120, 0.5 * sin120, -0.5 ];
    var col = [ 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0 ];
    var inx = [ 0, 1, 2, 0, 2, 3, 0, 3, 1, 1, 3, 2 ];
    buf.pos = gl.createBuffer();
    gl.bindBuffer( gl.ARRAY_BUFFER, buf.pos );
    gl.bufferData( gl.ARRAY_BUFFER, new Float32Array( pos ), gl.STATIC_DRAW );
    buf.col = gl.createBuffer();
    gl.bindBuffer( gl.ARRAY_BUFFER, buf.col );
    gl.bufferData( gl.ARRAY_BUFFER, new Float32Array( col ), gl.STATIC_DRAW );
    buf.inx = gl.createBuffer();
    gl.bindBuffer( gl.ELEMENT_ARRAY_BUFFER, buf.inx );
    gl.bufferData( gl.ELEMENT_ARRAY_BUFFER, new Uint16Array( inx ), gl.STATIC_DRAW );

    startTime = Date.now();
    setInterval(drawScene, 50);
}

</script>

<body onload="cameraStart();">
    <canvas id="camera-canvas" style="border: none;" width="512" height="256"></canvas>
</body>


Don't try to modify your rays. Instead do this:

a) create a matrix using the location/rotation of your camera
b) invert the matrix
c) apply it to all the models in the scene
d) render them using your normal methods

This is actually the way OpenGL does it as well. Rotating the camera to the right is the same as rotating the world to the left.
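For step b), if the camera matrix is a pure rotation plus a translation, the inverse does not require a general matrix inversion: transpose the rotation block and re-derive the translation. A sketch in the flat, column-major layout used elsewhere on this page (it assumes the upper 3x3 really is a rotation; the function name is illustrative):

```javascript
// Invert a rigid transform M = [R | t] (column-major, flat array of 16):
// the inverse is [R^T | -R^T * t].
function invertRigid(m) {
    // Transpose the 3x3 rotation block.
    var inv = [
        m[0], m[4], m[8],  0,
        m[1], m[5], m[9],  0,
        m[2], m[6], m[10], 0,
        0,    0,    0,     1
    ];
    // Translation of the inverse: -R^T * t, where t = (m[12], m[13], m[14]).
    inv[12] = -(m[0]*m[12] + m[1]*m[13] + m[2]*m[14]);
    inv[13] = -(m[4]*m[12] + m[5]*m[13] + m[6]*m[14]);
    inv[14] = -(m[8]*m[12] + m[9]*m[13] + m[10]*m[14]);
    return inv;
}
```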


I answer this after arriving here from a Google search.

The existing answers seem to miss the underlying misunderstanding in the original question.

The idea that you need to apply a projection matrix when raycasting is a misconception.

We create orthogonal raycasts by starting from the view plane and tracing the same direction for each pixel; the origin of the ray changes per pixel.

We create perspective raycasts by starting at the eye position, behind the view plane, and tracing a unique direction for each pixel; i.e. the origin of the ray is fixed and the same for every pixel.

Understand that the projection matrices themselves, and the process they are usually involved in, are derived from raycasting. The perspective matrix encodes a raycast of the kind I described.

Projecting a point onto the screen is casting a ray from the eye to the point and finding the intersection with the view plane...
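The two kinds of raycast described above can be sketched directly, with no matrix at all (names and parameters are illustrative; x and y are normalized screen coordinates in [-1, 1]):

```javascript
// Orthogonal: fixed direction, per-pixel origin on the view plane.
function orthoRay(x, y, halfWidth, halfHeight) {
    return { origin: [x * halfWidth, y * halfHeight, 0], dir: [0, 0, -1] };
}

// Perspective: fixed origin at the eye, per-pixel direction through
// the view plane (placed at z = -1, looking down -z).
function perspectiveRay(x, y, fovYDeg, aspect) {
    var t = Math.tan(fovYDeg * Math.PI / 360); // tan(fov_y / 2)
    var d = [x * t * aspect, y * t, -1];
    var len = Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    return { origin: [0, 0, 0], dir: [d[0]/len, d[1]/len, d[2]/len] };
}
```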

