OpenGL: Compute eye space coord from window space coord in GLSL?
How do I compute an eye space coordinate from window space (pixel in the frame buffer) coordinates + pixel depth value in GLSL, please (gluUnProject in GLSL, so to speak)?
Looks to be a duplicate of GLSL convert gl_FragCoord.z into eye-space z.
Edit (complete answer):
// inputs: x_coord, y_coord = window position in the [0,1] range,
// samplerDepth = the depth buffer bound as a texture
vec2 xy = vec2(x_coord, y_coord);
vec4 v_screen = vec4(xy, texture(samplerDepth, xy).r, 1.0);
// map [0,1] to [-1,1] NDC, then unproject (inverse() needs GLSL 1.40+;
// the compatibility profile also provides gl_ProjectionMatrixInverse directly)
vec4 v_homo = inverse(gl_ProjectionMatrix) * (2.0 * (v_screen - vec4(0.5)));
vec3 v_eye = v_homo.xyz / v_homo.w; // divide out the homogeneous coordinate
Assuming you've stuck with a fixed pipeline-style model, view and projection, you can just implement exactly the formula given in the gluUnProject man page.
There's no matrix inversion built into GLSL (before GLSL 1.40's inverse()), so ideally you'd do that on the CPU. You therefore need to supply a uniform containing the inverse of your composed model-view-projection matrix. gl_FragCoord is in window coordinates, so you also need to supply the view dimensions.
So, you'd probably end up with something like (coding extemporaneously):
uniform mat4 invertedModelViewProjection; // inverse(projection * modelView), computed on the CPU
uniform vec4 view;                        // viewport: x, y, width, height

vec4 unProjectedPosition = invertedModelViewProjection * vec4(
    2.0 * (gl_FragCoord.x - view[0]) / view[2] - 1.0,
    2.0 * (gl_FragCoord.y - view[1]) / view[3] - 1.0,
    2.0 * gl_FragCoord.z - 1.0,
    1.0);
// divide by .w afterwards: unProjectedPosition.xyz / unProjectedPosition.w
If you've implemented your own analogue of the old matrix stack then you're probably fine inverting a matrix. Otherwise, it's possibly a more daunting topic than you had anticipated and you might be better off using MESA's open source implementation (see invert_matrix, the third function in that file), just because it's well tested if nothing else.
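If you're using GLM (as a later answer here does), a minimal CPU-side sketch might look like the following; the program handle, the projectionMat/viewMat/modelMat matrices and the viewport variables are assumptions, not part of the original answer:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Invert the composed MVP once per frame on the CPU and hand it to the
// shader, together with the viewport rectangle used by the snippet above.
glm::mat4 mvp = projectionMat * viewMat * modelMat;
glm::mat4 invMvp = glm::inverse(mvp);

glUniformMatrix4fv(glGetUniformLocation(program, "invertedModelViewProjection"),
                   1, GL_FALSE, glm::value_ptr(invMvp));
glUniform4f(glGetUniformLocation(program, "view"),
            0.0f, 0.0f, float(viewportWidth), float(viewportHeight));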
Well, a guy on opengl.org has pointed out that the clip space coordinates the projection produces are divided by clipPos.w to compute the normalized device coordinates. When reversing the steps from fragment over NDC to clip space coordinates, you need to reconstruct that w (which happens to be -z of the corresponding view space (camera) coordinate), and multiply the NDC coordinate by that value to compute the proper clip space coordinate (which you can turn into a view space coordinate by multiplying it with the inverse projection matrix).
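For reference, here is the algebra behind the ZEYE macro below (a sketch, assuming the standard gluPerspective-style projection with near plane $n$ and far plane $f$):

$z_{clip} = \frac{f+n}{n-f}\,z_{eye} + \frac{2fn}{n-f}, \qquad w_{clip} = -z_{eye}$

so $z_{ndc} = z_{clip}/w_{clip}$, and solving for the eye-space depth gives

$z_{eye} = -\frac{2fn}{(f+n) + z_{ndc}\,(n-f)}$

which is exactly ZEYE = -(C / (A + D)) with the macros defined in the code.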
The following code assumes that you are processing the frame buffer in a post-process pass. When processing it while rendering geometry, you can use gl_FragCoord.z instead of texture2D(sceneDepth, ndcPos.xy).r.
Here is the code:
uniform sampler2D sceneDepth;
uniform mat4 projectionInverse;
uniform vec2 clipPlanes; // zNear, zFar
uniform vec2 windowSize; // window width, height

#define ZNEAR clipPlanes.x
#define ZFAR clipPlanes.y
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE -(C / (A + D))

void main()
{
    vec3 ndcPos;
    ndcPos.xy = gl_FragCoord.xy / windowSize;
    ndcPos.z = texture2D(sceneDepth, ndcPos.xy).r; // or gl_FragCoord.z when rendering geometry
    ndcPos -= 0.5;
    ndcPos *= 2.0;                                 // [0,1] -> [-1,1] NDC
    vec4 clipPos;
    clipPos.w = -ZEYE;                             // reconstruct w_clip = -z_eye
    clipPos.xyz = ndcPos * clipPos.w;
    vec4 eyePos = projectionInverse * clipPos;     // eye/view space position
    // ... use eyePos.xyz here
}
Basically this is a GLSL version of gluUnProject.
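On the application side you'd fill in those uniforms once per frame; a minimal sketch using GLM (the program handle and the projectionMat/zNear/zFar/width/height variables are assumptions, not from the original answer):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Upload the inverse projection, the clip planes and the window size
// expected by the shader above.
glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "projectionInverse"),
                   1, GL_FALSE, glm::value_ptr(glm::inverse(projectionMat)));
glUniform2f(glGetUniformLocation(program, "clipPlanes"), zNear, zFar);
glUniform2f(glGetUniformLocation(program, "windowSize"), float(width), float(height));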
I just realized that it's unnecessary to do these computations in the fragment shader. You can save a couple of operations by building the window-to-NDC transform on the CPU and multiplying it with the MVP inverse (assuming glDepthRange(0, 1); feel free to edit):
glm::vec4 vp(left, bottom, width, height); // viewport rectangle
glm::mat4 viewportMat = glm::translate(glm::mat4(1.0f),
        glm::vec3(-2.0 * vp.x / vp.z - 1.0, -2.0 * vp.y / vp.w - 1.0, -1.0))
    * glm::scale(glm::mat4(1.0f), glm::vec3(2.0 / vp.z, 2.0 / vp.w, 2.0));
glm::mat4 mvpInv = glm::inverse(mvp);
glm::mat4 vmvpInv = mvpInv * viewportMat; // window coords -> unprojected space in one matrix
shader->uniform("vmvpInv", vmvpInv);
In the shader:
vec4 eyePos = vmvpInv * vec4(gl_FragCoord.xyz, 1);
vec3 pos = eyePos.xyz / eyePos.w;
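Composing viewportMat with the MVP inverse folds the whole window-to-NDC mapping into a single matrix, so per fragment you're down to one mat4 multiply and the homogeneous divide.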
I think all the available answers touch the problem from one aspect only, and khronos.org has a wiki page with a few different cases listed and explained with shader code, so it's worth posting here:
Compute eye space from window space.