
Prims vs Polys: what are the pros and cons of each?

I've noticed that most 3d gaming/rendering environments represent solids as a mesh of (usually triangular) 3d polygons. However some examples, such as Second Life, or PovRay use solids built from a set of 3d primitives (cube, sphere, cone, torus etc) on which various operations can be performed to create more complex shapes.

So my question is: why choose one method over the other for representing 3d data?

I can see there might be benefits for complex ray-tracing operations to be able to describe a surface as a single mathematical function (like PovRay does), but SL surely isn't attempting anything so ambitious with their rendering engine.

Equally, I can imagine it might be more bandwidth-efficient to serve descriptions of generalised solids instead of arbitrary meshes, but is it really worth the downside that SL suffers from (ie modelling stuff is really hard, and usually the results are ugly) - was this just a bad decision made early in SL's development that they're now stuck with? Or is it an artefact of what's easiest to implement in OpenGL/DirectX?


EDIT: Having read the answers so far, I'm now thinking that my two examples have very different reasons for using prims:

  • For PovRay, prims may be a side-effect of describing solids as maths functions, which gives benefits for complex ray-tracing.

  • Second Life seems mostly concerned with parametrizing their 3-d elements (both as prims, and as parametric human figures) for performance reasons... it makes perfect sense for an on-line game, I guess.


Higher-level "primitives" (spheres, cubes, etc.) carry with them more semantic information about what exactly they are, along with lower bandwidth/storage requirements: a sphere needs only a center position and a radius, while, say, an icosphere requires as many triangles as are necessary to render the sphere at an acceptable quality.
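To make the storage gap concrete, here's a rough back-of-the-envelope sketch. The 20·4^n icosphere triangle count is standard; the byte figures assume a naive, non-indexed mesh of 32-bit floats, which is an illustrative worst case rather than any engine's actual wire format:

```cpp
#include <cstddef>

// A sphere sent as a primitive: a 3-float center plus a 1-float radius.
constexpr std::size_t kPrimitiveBytes = 4 * sizeof(float); // 16 bytes

// An icosphere starts from an icosahedron (20 faces); each subdivision
// step splits every triangle into 4.
long icosphereTriangles(int level) {
    long tris = 20;
    for (int i = 0; i < level; ++i) tris *= 4;
    return tris;
}

// Naive non-indexed mesh: 3 vertices per triangle, 3 floats per vertex.
long icosphereMeshBytes(int level) {
    return icosphereTriangles(level) * 3 * 3 * sizeof(float);
}
```

At subdivision level 3 that's already 1280 triangles and 46,080 bytes of vertex data versus the 16-byte primitive; an indexed mesh shrinks this considerably, but the gap stays orders of magnitude wide.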

Going with the primitives also allows the client-side engine to adjust its rendering based on local capabilities. If you say "sphere", one client can render with M subdivisions and another with N; if you send the triangles, then the information necessary to re-render at a different resolution is missing. Also, it gives you opportunity to do things such as increase the subdivision count as you move closer to the object.
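A sketch of how a client might turn that idea into a concrete subdivision choice; the distance thresholds here are made-up illustration values, not anything Second Life actually uses:

```cpp
// Choose an icosphere subdivision level from the camera distance:
// nearby spheres get the full triangle budget, far-away ones get
// coarser meshes. Each doubling of distance past 10 units drops a level.
int subdivisionLevel(double distance, int maxLevel = 4) {
    int level = maxLevel;
    for (double threshold = 10.0; threshold <= distance && level > 0;
         threshold *= 2.0)
        --level;
    return level;
}
```

The point is that this decision can only be made client-side if the client still knows the object is a sphere; a pre-tessellated mesh has already baked the choice in.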

I don't know what Linden Labs was thinking, as I have never worked with Second Life, but if I were building something like SL I would probably lean towards the primitives as the definition and transport format because they carry with them more information which can be used for things like re-rendering, hit detection, etc. Of course, in the end they'll be converted to polygons for rendering, but that is an implementation detail.


There are two ways of describing and rendering a 3D object:

  1. Describe the 3D object as polygons (broken into triangles, triangle strips, and so on). Then, do several projections from object-space to screen-space and use some clever math to simulate lighting. Once you're in screen-space, do some more clever math with pixel shaders to simulate better lighting. This is the method used by accelerated graphics APIs, such as Direct3D and OpenGL. All real-time games (like Second Life) use this method or something similar to it.
  2. Describe objects using whatever shape makes sense for that particular object (yes, even 'true' curves or infinite planes are allowed). Get every pixel color by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. This is done using simulations that mirror how light interacts with objects in real life. It takes a very long time and is computationally expensive. Ray-tracers like POVRay use this method.
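Method 2 is why a ray tracer benefits from "true" shapes: against a mathematical sphere the intersection is an exact quadratic, with no tessellation error at any zoom level. A minimal sketch of that test (the `Vec3` helper type is mine, not anything from POVRay):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Exact ray/sphere test: solve |o + t*d - c|^2 = r^2, a quadratic in t.
// Returns the nearest hit distance in front of the ray, or nothing on a miss.
std::optional<double> hitSphere(const Vec3& o, const Vec3& d,
                                const Vec3& c, double r) {
    Vec3 oc{o.x - c.x, o.y - c.y, o.z - c.z};
    double a = dot(d, d);
    double b = 2.0 * dot(oc, d);
    double k = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * k;
    if (disc < 0.0) return std::nullopt;            // ray misses entirely
    double t = (-b - std::sqrt(disc)) / (2.0 * a);  // nearer root
    if (t < 0.0) return std::nullopt;               // sphere behind the ray
    return t;
}
```

A mesh-based renderer would instead have to test the ray against every triangle of the sphere's approximation.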

SL might use primitives in the sense that their engine API may allow you to do something like

Sphere mySphere = engine->createSphere(x,y,z);
mySphere->moveTo(x,y,z);

But those are just helper functions (most modern engines have primitive capability); this does NOT mean they render "primitives". They still render triangles.

So basically, it's not really a question of when do you use "objects" and when do you use "polygons". It's a question of do you want to ray-trace or do you want to render interactively.


It almost certainly has nothing to do with OpenGL, as OpenGL (and DirectX) work with triangles, not with curved surfaces or geometric primitives. As such it's very unlikely that Linden Labs were working with a higher level library that made it easier to render primitives than triangle meshes.

I expect it was almost entirely down to the wish to save bandwidth, since a geometric representation is almost always smaller than the same object represented as tessellated triangles (at the expense of making detailed adjustments expensive or impossible). This is important for an online game with mostly user-created content, as much of the traffic will be the sending of this data to the clients.


A polygon mesh can, by definition, represent any other geometric primitive. Each operation on a mesh (such as finding a normal vector to one of its faces) works the same whether the mesh represents a cube, a sphere or anything else. The downside of this approach is that you need to specifically optimize for edge cases (again, the cube is a good example). Using advanced techniques such as normal maps will reduce this impact, as important object metrics can be precalculated.
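For example, the face-normal computation mentioned above is the same cross product for every triangle, regardless of which primitive the mesh approximates; a small sketch with an obvious hand-rolled 3-vector type:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Face normal of triangle (a, b, c): cross product of two edge vectors,
// normalized. Identical code for a cube face, a sphere patch, or any mesh.
Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 u{b.x - a.x, b.y - a.y, b.z - a.z};
    Vec3 v{c.x - a.x, c.y - a.y, c.z - a.z};
    Vec3 n{u.y * v.z - u.z * v.y,
           u.z * v.x - u.x * v.z,
           u.x * v.y - u.y * v.x};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```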

I can imagine PovRay is a scenario where the ability to express your object as a simple function and to optimize for edge cases can yield huge performance gains, though at the cost of requiring a more complicated scene designer.

Using polygons provides designers with more freedom: you can represent any arbitrary level of detail (LoD adjustment) by simply increasing or decreasing the number of polygons involved.

I don't know why the creators of SL decided to go with primitives, nor do I know the game in detail, but I guess high-end rendering is of secondary concern for it.
