Writing the correct value in the depth buffer when using ray-casting
I am ray-casting through a 3D texture until I hit a value above my surface threshold. The ray-casting is done inside a cube, and the cube's corners are already in world coordinates, so I don't have to multiply the vertices by the modelview matrix to get the correct position.
Vertex shader
world_coordinate_ = gl_Vertex; // the cube vertices are already in world space, so just pass them through
Fragment shader
vec3 direction = world_coordinate_.xyz - cameraPosition_; // ray from the camera to this fragment on the cube
direction = normalize(direction);
for (float k = 0.0; k < steps; k += 1.0) {
    ....
    pos += direction * delta_step;               // advance along the ray
    float thisLum = texture3D(texture3_, pos).r; // sample the volume
    if (thisLum > surface_)
        ...                                      // hit: shade and stop marching
}
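For completeness, here is a minimal sketch of how these pieces fit together in the fragment shader. The starting point of pos and the uniform/varying declarations are assumptions for illustration (in my shader the omitted lines handle the accumulation), so treat names like texCoord_ as placeholders:
uniform sampler3D texture3_;
uniform vec3  cameraPosition_;
uniform float steps;
uniform float delta_step;
uniform float surface_;

varying vec4 world_coordinate_;
varying vec3 texCoord_;   // illustrative: 3D texture coordinate of the cube's entry face

void main()
{
    vec3 direction = normalize(world_coordinate_.xyz - cameraPosition_);
    vec3 pos = texCoord_;                            // start marching at the entry point
    for (float k = 0.0; k < steps; k += 1.0) {
        pos += direction * delta_step;               // step through the volume
        float thisLum = texture3D(texture3_, pos).r;
        if (thisLum > surface_)
            break;                                   // hit the surface: stop marching
    }
    // ... shade using pos, and write gl_FragDepth as discussed below
}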
Everything works as expected. What I now want is to write the correct value to the depth buffer. The value that is currently written is the depth of the cube's surface, but I want the depth of pos, the hit position inside the 3D texture, to be written instead.
So let's say the cube is placed 10 units from the origin along -z and its size is 10*10*10. My solution, which does not work correctly, is this:
pos *= 10.0;      // scale the texture coordinate up to the cube's world size
pos.z += 10.0;    // move it to the cube's position
pos.z *= -1.0;    // the cube lies along -z
vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
gl_FragDepth = depth;
What was missing was the transform from world space to eye space: applying the view matrix before the projection fixed it. The solution was:
// world space -> eye space (ViewMatrix) -> clip space (projection), then map NDC z from [-1, 1] to [0, 1]
vec4 depth_vec = gl_ProjectionMatrix * ViewMatrix * vec4(pos.xyz, 1.0);
float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
gl_FragDepth = depth;
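A slightly more general form of the same mapping also accounts for a non-default glDepthRange, which the hard-coded (ndc + 1.0) * 0.5 does not. This is just a sketch: ViewMatrix is the same world-to-eye uniform as above, and worldPos stands for pos after it has been mapped back into world coordinates.
uniform mat4 ViewMatrix;

// World-space position -> window-space depth, honoring glDepthRange.
float worldPosToFragDepth(vec3 worldPos)
{
    vec4 clip  = gl_ProjectionMatrix * ViewMatrix * vec4(worldPos, 1.0);
    float ndcZ = clip.z / clip.w;                              // [-1, 1]
    return gl_DepthRange.diff * 0.5 * ndcZ
         + (gl_DepthRange.near + gl_DepthRange.far) * 0.5;     // [near, far]
}

// usage: gl_FragDepth = worldPosToFragDepth(worldPos);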
One thing you might try is to draw a cube directly on top of the cube you're ray-tracing. Give its vertices positions in the same space that your ray-tracing algorithm produces, and perform the same transforms to compute your "depth_vec", only do it in the vertex shader.
This way, you can see where your problems are coming from. Once you get this part of the transform working, you can back-port the transformation sequence into your raytracer. If that doesn't fix everything, it can only be because your ray-tracing algorithm isn't outputting positions in the space you think it is.
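Something along these lines, as a rough sketch (ViewMatrix and the varying name are placeholders for whatever you actually use; the divide by w happens per fragment so the interpolation stays perspective-correct):
Debug vertex shader
uniform mat4 ViewMatrix;   // the same world-to-eye matrix the ray-caster uses
varying vec4 depth_vec_;   // illustrative name

void main()
{
    // The cube's vertices are assumed to be in the same (world) space that the
    // ray-caster's pos gets converted into, so the transform chain is identical.
    depth_vec_ = gl_ProjectionMatrix * ViewMatrix * gl_Vertex;
    gl_Position = depth_vec_;
}
Debug fragment shader
varying vec4 depth_vec_;

void main()
{
    float depth = ((depth_vec_.z / depth_vec_.w) + 1.0) * 0.5;
    gl_FragColor = vec4(vec3(depth), 1.0);   // visualize the depth for comparison
    gl_FragDepth = depth;
}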