
The advantages of using a Z-Buffer versus prioritising pixels according to depth

This is a bit more of an academic question. Indeed, I am preparing for an exam and I'm just trying to truly understand this concept.

Allow me to explain the context a bit. The issue at hand is hiding objects (or more specifically, polygons) behind each other when drawing to the screen. A calculation needs to be done to decide which one gets drawn last and therefore appears in the foreground.

In a lecture the other day, my professor stated that prioritising pixels in terms of their depth value was computationally inefficient. He then gave us a short explanation of Z-buffers and how they test the depth values of pixels and compare them with the depth values of pixels already in a buffer. How is this any different from 'prioritising pixels in terms of their depth'?

Thanks!


Deciding which polygon a fragment belongs to is computationally expensive, because it would require finding the closest polygon for every single pixel (and having the entire geometry information available during pixel shading!).

It is easy, almost trivial, to sort entire objects, each consisting of many triangles (a polygon is no more than one or several triangles), according to their depth. This, however, is only a rough approximation: nearby objects will overlap and produce artefacts, so something needs to be done to make it pixel-perfect.
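To make that concrete, here is a minimal sketch of such a coarse sort (my own illustration, not something from the original answer; the triangle representation and the `object_depth` helper are made up for the example):

```python
# Painter's-style coarse sort: order whole objects by a single
# representative depth. An "object" here is just a list of triangles,
# and each triangle is a list of (x, y, z) vertices.

def object_depth(triangles):
    # Average z of all vertices -- only a rough stand-in for "how far
    # away the object is", which is exactly why overlapping objects
    # can still end up drawn in the wrong order per pixel.
    zs = [v[2] for tri in triangles for v in tri]
    return sum(zs) / len(zs)

def sort_back_to_front(objects):
    # Larger z = farther away in this convention; draw far objects first.
    return sorted(objects, key=object_depth, reverse=True)
```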

This is where the z-buffer comes in. If a fragment's calculated depth turns out to be greater than what's already stored in the z-buffer, the fragment is "behind something", so it is discarded. Otherwise, the fragment is written to the color buffer and its depth value is written to the z-buffer. Of course, that means that when 20 triangles are stacked behind each other, the same pixel will be shaded 19 times in vain. Alas, bad luck.
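In code, that per-fragment test is just a compare-and-write against one stored depth per pixel. The following is only a sketch of the inner step of a software rasterizer (the buffer layout and the resolution are assumptions of the example):

```python
import math

WIDTH, HEIGHT = 640, 480
# One depth value per pixel, initialised to "infinitely far away".
z_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, depth, color):
    """Classic z-buffer test: keep the fragment only if it is closer
    than whatever has already been drawn at (x, y)."""
    if depth >= z_buffer[y][x]:
        return False            # behind something already drawn: discard
    z_buffer[y][x] = depth      # remember the new closest depth
    color_buffer[y][x] = color  # and its color
    return True
```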

Modern graphics hardware addresses this by doing the z-test before actually shading a pixel, using the depth interpolated from the triangle's vertices (this optimization is obviously not possible if the depth is computed per pixel in the shader).
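The difference that makes can be sketched like this, reusing the buffers from the snippet above (`expensive_shade` is a stand-in for an arbitrarily costly pixel shader, not a real API):

```python
def write_fragment_early_z(x, y, interpolated_depth, expensive_shade):
    """Early-z sketch: test the depth interpolated from the triangle's
    vertices first, and only pay for shading if the fragment survives."""
    if interpolated_depth >= z_buffer[y][x]:
        return False                           # rejected before any shading work
    z_buffer[y][x] = interpolated_depth
    color_buffer[y][x] = expensive_shade(x, y) # shade only the surviving fragment
    return True
```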

Also, they employ conservative (sometimes hierarchical, sometimes just tiled) optimizations which discard entire groups of fragments quickly. For this, the z-buffer holds some additional information (hidden from you), such as the maximum depth rendered to a 64x64 rectangular area. With this information, the hardware can immediately discard any fragment in that screen area whose depth is greater than that maximum, without actually looking at the stored depths, and it can discard all fragments of a triangle whose vertices all have a greater depth. Because, obviously, there is no way that any of it could be visible.
Those are implementation details, and very platform specific, though.
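Purely to illustrate the idea (the tile size, data layout, and names below are invented; real hardware keeps all of this internal and updates it incrementally rather than rebuilding it):

```python
TILE = 64  # per the 64x64 example above

def build_tile_max(z_buffer, tile=TILE):
    """For each tile, record the farthest depth currently stored in it."""
    h, w = len(z_buffer), len(z_buffer[0])
    return {
        (tx, ty): max(z_buffer[y][x]
                      for y in range(ty, min(ty + tile, h))
                      for x in range(tx, min(tx + tile, w)))
        for ty in range(0, h, tile)
        for tx in range(0, w, tile)
    }

def tile_rejects(tile_max, x, y, depth, tile=TILE):
    """Conservative test: if the fragment is at least as far away as
    everything already drawn in its tile, it cannot win the per-pixel
    test anyway, so it can be thrown away without reading the z-buffer."""
    return depth >= tile_max[(x // tile * tile, y // tile * tile)]
```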

EDIT: Though this is probably obvious, I'm not sure if I made that point clear enough: When sorting to exploit z-culling, you would do the exact opposite of what you do with painter's algorithm. You want the closest things drawn first (roughly, does not have to be 100% precise), so instead of determining a pixel's final color in the sense of "last man standing", you have it in the sense of "first come, first served, and only one served".
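In terms of the earlier sketch, the only thing that changes compared to `sort_back_to_front` is the direction of the sort key; the z-buffer still resolves exact visibility, and early-z throws the occluded fragments away cheaply:

```python
def sort_front_to_back(objects):
    # Opposite of the painter's order: nearest objects first, so that
    # later, farther fragments fail the z test before they are shaded.
    return sorted(objects, key=object_depth)
```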


The first thing you need to understand is what your professor meant by 'prioritising pixels in terms of their depth'. My guess is that it refers to storing all requested fragments for a given screen pixel and then producing the resulting color by choosing the closest fragment. That is inefficient because the Z-buffer allows us to store only a single value per pixel instead of all of them.
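If that guess is right, the contrast can be made concrete with a small sketch (the data layout here is invented purely for illustration):

```python
import math

def resolve_naive(fragments_per_pixel):
    """'Prioritising pixels by depth' as guessed above: keep every fragment
    that lands on a pixel, then pick the closest one at the end.
    Needs unbounded storage and a search per pixel."""
    # fragments_per_pixel: list of (depth, color) tuples for one pixel
    return min(fragments_per_pixel, key=lambda f: f[0])[1]

def resolve_zbuffer(fragment_stream):
    """Z-buffer approach: one depth and one color per pixel, updated in
    place -- constant memory and a single compare per incoming fragment."""
    best_depth, best_color = math.inf, (0, 0, 0)
    for depth, color in fragment_stream:
        if depth < best_depth:
            best_depth, best_color = depth, color
    return best_color
```

Both produce the same color for the same set of fragments; the z-buffer version just never has to remember more than one of them at a time.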
