
3D graphics: Normal mapping vs Bump mapping?

I know that normal mapping describes the process of adding detail to meshes without increasing the polygon count, and that this is achieved by using special normal textures to manipulate the way light is applied to the object. Okay.

  • But what is bump mapping then? Is it just another term for normal mapping?
  • How do the visual results compare? Can both techniques be combined?


Bump Mapping describes a general technique for simulating bumps and wrinkles on the surface of an object. This is normally accomplished by manipulating surface normals when doing lighting calculations.

Normal Mapping is a variation of Bump Mapping in which the surface normals are provided via a texture, with normals embedded into the RGB channels of the image.
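For instance, here is a minimal NumPy sketch of decoding one such texel back into a unit normal, assuming the common 8-bit encoding where each channel maps [-1, 1] onto [0, 255] (the function name is mine, not part of any particular API):

```python
import numpy as np

def decode_normal(rgb):
    """Map an 8-bit RGB normal-map texel back to a unit tangent-space normal.

    Each channel stores a component remapped from [-1, 1] to [0, 255],
    so decoding is n = rgb / 255 * 2 - 1, followed by renormalisation.
    """
    n = np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# A flat surface encodes the "up" normal (0, 0, 1) as the familiar
# light-blue colour (128, 128, 255).
print(decode_normal([128, 128, 255]))   # ~ [0, 0, 1]
```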

Other techniques, such as Parallax Mapping, also fall under Bump Mapping, since they too fake surface relief at shading time rather than by adding geometry.

To answer the second part of the question, they can fairly easily be combined: the base surface normals can come from a normal map and then be modified via another bump mapping technique (see the sketch below).
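As a rough illustration of that layering idea, here is a minimal NumPy sketch; the function name `combine_normals` and the blending rule (add the x/y tilts, keep the base z, renormalise) are my own choices for demonstration, not a canonical formula:

```python
import numpy as np

def combine_normals(base, detail):
    """Combine a base tangent-space normal with a detail perturbation.

    A simple, approximate way to layer two bump techniques: sum the
    x/y tilts, keep the base z component, and renormalise the result.
    """
    combined = np.array([base[0] + detail[0],
                         base[1] + detail[1],
                         base[2]])
    return combined / np.linalg.norm(combined)

base = np.array([0.2, 0.0, 0.98])     # e.g. decoded from a normal map
detail = np.array([0.0, 0.3, 0.95])   # e.g. from a procedural bump function
print(combine_normals(base, detail))
```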


Bump mapping was originally suggested by Jim Blinn back in 1978. His system works by perturbing the normal of a surface using the height of a texel and the heights of the surrounding texels.
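A minimal NumPy sketch of that idea, assuming a greyscale height map stored as a 2D array; the function name and the `scale` parameter are illustrative rather than Blinn's exact formulation:

```python
import numpy as np

def perturbed_normal(height, x, y, scale=1.0):
    """Perturb the surface normal at texel (x, y) from a height map.

    The partial derivatives are estimated with central differences of the
    neighbouring texels, then used to tilt the unperturbed normal (0, 0, 1).
    """
    dhdx = (height[y, x + 1] - height[y, x - 1]) * 0.5
    dhdy = (height[y + 1, x] - height[y - 1, x]) * 0.5
    n = np.array([-scale * dhdx, -scale * dhdy, 1.0])
    return n / np.linalg.norm(n)

# Example: a ramp rising along x tilts the normal away from +x.
height = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
print(perturbed_normal(height, 4, 4))
```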

This is quite similar to DUDV bump mapping (you may recall the original environment-mapped bump mapping introduced in DX6, which was DUDV). That works by pre-calculating the derivatives from above, so you can skip the first stage of the calculation (as it does not change each frame).
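A hedged sketch of that pre-calculation, again assuming a 2D height map; baking the two derivative channels once up front is the part that no longer has to run per frame (the function name is illustrative):

```python
import numpy as np

def precompute_dudv(height, scale=1.0):
    """Bake the height-map derivatives into a two-channel DUDV texture.

    Because the derivatives depend only on the height map, they can be
    computed once offline instead of recomputed every frame.
    """
    du = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5 * scale
    dv = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5 * scale
    return np.stack([du, dv], axis=-1)

height = np.random.default_rng(0).random((64, 64))
print(precompute_dudv(height).shape)   # (64, 64, 2)
```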

Normal mapping is a very similar technique that works by simply replacing the normal at each texel position. Conceptually it's much simpler.

There is another technique that produces "similar" results, called emboss bump mapping. This method uses multipass rendering: you essentially subtract a grayscale heightmap from the previous pass, offset by a small amount based on the light direction.
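A rough single-texture approximation of what those passes compute, sketched in NumPy (the multipass hardware version blends framebuffer passes instead; the function name and the integer texel shift are my simplifications):

```python
import numpy as np

def emboss_pass(height, light_dir_uv, shift=1):
    """Approximate emboss bump mapping on a greyscale height map.

    The height map is subtracted from a copy of itself shifted a texel
    along the projected light direction; edges facing the light come out
    bright, edges facing away come out dark.
    """
    dx = int(round(light_dir_uv[0])) * shift
    dy = int(round(light_dir_uv[1])) * shift
    shifted = np.roll(np.roll(height, dy, axis=0), dx, axis=1)
    return 0.5 + (height - shifted)   # re-bias around mid grey

height = np.random.default_rng(1).random((32, 32))
print(emboss_pass(height, (1.0, 0.0)).shape)   # (32, 32)
```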

There are other ways of emulating surface topology as well.

Elevation mapping uses the height map as an alpha texture and then renders multiple slices through that texture, each with a different alpha test value, to simulate the change in height (see the sketch below). If not performed correctly, however, the individual slices can be very visible.
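A minimal sketch of how the per-slice alpha masks might be built, assuming the height map is normalised to [0, 1]; the function name and slice count are illustrative, and the actual rendering of offset slices is left out:

```python
import numpy as np

def elevation_slices(height, num_slices=8):
    """Build the alpha masks used by elevation (slice) mapping.

    Each slice keeps only the texels whose height reaches that slice's
    level; stacking the slices with a small vertical offset fakes relief.
    With too few slices the individual layers become visible.
    """
    levels = np.linspace(0.0, 1.0, num_slices, endpoint=False)
    return [(height >= level).astype(np.float32) for level in levels]

height = np.random.default_rng(2).random((32, 32))
masks = elevation_slices(height)
print(len(masks), masks[0].shape)   # 8 slices of 32x32 alpha masks
```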

Displacement mapping works by generating a 3D mesh that uses the texture as its basis. This, obviously, massively increases your vertex count.
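For example, a naive one-vertex-per-texel displacement of a flat grid might look like the NumPy sketch below (function name and `amplitude` parameter are mine); the vertex count scaling with texture resolution is exactly why the mesh gets so heavy:

```python
import numpy as np

def displace_grid(height, amplitude=0.1):
    """Turn a height map into a displaced grid mesh (one vertex per texel).

    Every texel becomes a vertex pushed along +z by its height value,
    so the vertex count grows with the texture resolution.
    """
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack([xs / (w - 1), ys / (h - 1),
                      amplitude * height], axis=-1)
    return verts.reshape(-1, 3)

height = np.random.default_rng(3).random((16, 16))
print(displace_grid(height).shape)   # (256, 3): 16x16 texels -> 256 vertices
```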

Steep parallax mapping, relief mapping, etc. are the newest techniques. They work by casting a ray through the heightmap until it intersects the surface. This has the big advantage that a lump which should occlude the texture behind it now actually does: the ray stops where it first hits the heightmap rather than continuing behind that point, so the "closest" texel is always the one displayed.
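A toy NumPy sketch of that ray march, using a fixed step count and a simple analytic bump as the height field; the names, step count, and starting height are assumptions for illustration, not any particular paper's parameters:

```python
import numpy as np

def ray_march_height(height_fn, origin_uv, view_dir, steps=64):
    """March a view ray through a height field until it dips below the surface.

    Start above the surface and step towards it; the first sample whose ray
    height falls below the stored height is the texel that should be shown,
    so nearer bumps correctly occlude the texels behind them.
    """
    u, v, z = origin_uv[0], origin_uv[1], 1.0
    du, dv, dz = view_dir / steps
    for _ in range(steps):
        if z <= height_fn(u, v):
            return u, v          # hit: sample the surface here
        u, v, z = u + du, v + dv, z + dz
    return u, v                  # no hit: fall back to the last sample

# Toy height field: a single bump in the middle of the texture.
bump = lambda u, v: float(np.exp(-((u - 0.5) ** 2 + (v - 0.5) ** 2) * 40.0))
print(ray_march_height(bump, (0.2, 0.5), np.array([0.6, 0.0, -1.0])))
```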
