
How does this work in computing the depth map?

From this site: http://www.catalinzima.com/?page_id=14

I've always been confused about how the depth map is calculated.

The vertex shader function calculates position as follows:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition  = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.TexCoord = input.TexCoord;            //pass the texture coordinates further
    output.Normal = mul(input.Normal, World);    //get normal into world space

    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;

    return output;
}

What are output.Position.z and output.Position.w? I'm not sure about the maths behind this.

And in the pixel shader there is this line: output.Depth = input.Depth.x / input.Depth.y;

So output.Depth is output.Position.z / output.Position.w? Why do we do this?

Finally, in the point light shader (http://www.catalinzima.com/?page_id=55), the code to convert this output back into a position is:

    //read depth
    float depthVal = tex2D(depthSampler, texCoord).r;

    //compute screen-space position
    float4 position;
    position.xy = input.ScreenPosition.xy;
    position.z = depthVal;
    position.w = 1.0f;

    //transform to world space
    position = mul(position, InvertViewProjection);
    position /= position.w;

Again, I don't understand this. I sort of see why we use InvertViewProjection, since we multiplied by the view projection previously, but setting z to the depth value and w to 1, and then dividing the whole position by w, confuses me quite a bit.


To understand this completely, you'll need to understand how the algebra that underpins 3D transforms works. SO doesn't really support matrix math (or I don't know how to make it do so), so this will have to be without fancy formulas. Here is a high-level explanation, though:

If you look closely, you'll notice that every transformation applied to a vertex position (from model to world to view to clip coordinates) uses 4D vectors. That's right, 4D. Why, when we live in a 3D world? Because in that 4D representation, all the transformations we usually want to apply to vertices are expressible as a matrix multiplication. This is not the case if we stay in a 3D representation. And matrix multiplications are what a GPU is good at.
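
Here is a minimal sketch of that last point (plain NumPy, my own illustration, not from the tutorial): a translation cannot be written as a 3x3 matrix times a 3D vector, but it can be written as a 4x4 matrix times a 4D vector. Note that the HLSL above uses the row-vector convention, mul(vector, matrix); the sketch below uses the column-vector convention, so its matrix is the transpose of what XNA would store.

    # Translation expressed as a single 4x4 matrix multiplication.
    # (Illustrative values; not from the original shaders.)
    import numpy as np

    T = np.array([[1, 0, 0,  5],        # translate by (5, -2, 3)
                  [0, 1, 0, -2],
                  [0, 0, 1,  3],
                  [0, 0, 0,  1]], dtype=float)

    p = np.array([1.0, 2.0, 3.0, 1.0])  # the 3D point (1, 2, 3) with w = 1
    print(T @ p)                        # [6. 0. 6. 1.] -> the translated point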

What does a vertex in 3D correspond to in 4D? This is where it gets interesting. The point (x, y, z) corresponds to the line (a·x, a·y, a·z, a). We can grab any point on this line to do the math we need, and we usually pick the easiest one, a = 1 (that way we don't have to do any multiplication, we just set w = 1).
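
To make that concrete, a tiny sketch (again NumPy, my own example): every non-zero multiple of (x, y, z, 1) lands back on the same 3D point once you divide by w.

    # All points (a*x, a*y, a*z, a) on the 4D line represent the same 3D point.
    import numpy as np

    p = np.array([1.0, 2.0, 3.0, 1.0])  # a = 1, the convenient representative
    q = 2.5 * p                         # a = 2.5, same point in homogeneous terms
    print(q[:3] / q[3])                 # [1. 2. 3.] -> the identical 3D point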

So that answers pretty much all the math you're looking at. To lift a 3D point into 4D, we set w = 1; to get back to 3D from a 4D vector, so we can compare against our standard 3D coordinates, we divide each component by w.
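
Putting the pieces together, here is an end-to-end sketch of the round trip the two shaders perform (NumPy, column-vector convention; the projection values are made up, not taken from the tutorial): project a point, keep z/w as the depth shader does, then undo it with the inverse view-projection followed by the divide by w, exactly the shape of the point-light code above.

    # Round trip: world -> clip -> (screen xy, depth = z/w) -> world again.
    import numpy as np

    # An invertible view-projection matrix (illustrative values only).
    n, f, s = 0.1, 100.0, 1.0                 # near plane, far plane, scale
    VP = np.array([[s, 0, 0,         0],
                   [0, s, 0,         0],
                   [0, 0, f/(f-n), -n*f/(f-n)],
                   [0, 0, 1,         0]], dtype=float)

    world = np.array([2.0, -1.0, 10.0, 1.0])  # world-space point, w = 1

    clip = VP @ world                         # what output.Position holds
    depth = clip[2] / clip[3]                 # output.Depth = z / w
    screen_xy = clip[:2] / clip[3]            # xy after the perspective divide

    # Reconstruction, mirroring the point-light shader:
    pos = np.array([screen_xy[0], screen_xy[1], depth, 1.0])
    pos = np.linalg.inv(VP) @ pos             # mul(position, InvertViewProjection)
    pos /= pos[3]                             # position /= position.w
    print(pos[:3])                            # recovers [ 2. -1. 10.]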

This coordinate system, if you want to dive deeper, is called homogeneous coordinates.
