
OpenGL ES Polygon with Normals rendering (Note the 'ES'!)

Ok... imagine I have a relatively simple solid that has six distinct normals but actually has close to 48 faces (8 faces per direction) and there are a LOT of shared vertices between faces. What's the most efficient way to render that in OpenGL?

I know I can place the vertices in an array, then use an index array to render them, but I have to keep breaking my rendering steps down to change the normals (i.e. set normal 1... render 8 faces... set normal 2... render 8 faces, etc.) Because of that I have to maintain an array of index arrays... one for each normal! Not good!
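For reference, here's a minimal sketch of my reading of that batching pattern, assuming OpenGL ES 1.1 fixed-function; all array names and sizes are hypothetical placeholders for the data described above:

```c
/* Per-normal batching: set the current normal, draw the 8 faces that use it.
 * OpenGL ES 1.1 fixed-function; array names and sizes are hypothetical. */
#include <GLES/gl.h>

static GLfloat positions[26 * 3];            /* shared vertex positions   */
static GLfloat faceNormals[6][3];            /* one normal per direction  */
static GLushort indicesPerNormal[6][8 * 3];  /* 8 triangles per direction */

void drawBatchedByNormal(void)
{
    /* The normal array stays disabled; we rely on the current normal. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);

    for (int i = 0; i < 6; ++i) {
        glNormal3f(faceNormals[i][0], faceNormals[i][1], faceNormals[i][2]);
        glDrawElements(GL_TRIANGLES, 8 * 3, GL_UNSIGNED_SHORT,
                       indicesPerNormal[i]);
    }

    glDisableClientState(GL_VERTEX_ARRAY);
}
```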

The other way I can do it is to use separate normal and vertex arrays (or even interleave them), but that means I need a one-to-one ratio of normals to vertices, and the normals would be duplicated roughly 8 times more than they need to be! On something with a spherical or even curved surface nearly every normal is different, but for this it really seems like a waste of memory.
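For comparison, a minimal sketch of that one-to-one interleaved layout (ES 1.1 again; the struct and the counts are illustrative):

```c
/* One-to-one interleaved layout: normals duplicated per vertex.
 * OpenGL ES 1.1; struct name and element counts are illustrative. */
#include <GLES/gl.h>

typedef struct {
    GLfloat pos[3];
    GLfloat norm[3];   /* duplicated for every vertex that shares it */
} Vertex;

static Vertex interleaved[54];     /* 9 unique vertices per face * 6 faces */
static GLushort indices[48 * 3];   /* 48 triangles */

void drawInterleaved(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    /* One interleaved buffer; the stride skips over the other attribute. */
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), interleaved[0].pos);
    glNormalPointer(GL_FLOAT, sizeof(Vertex), interleaved[0].norm);

    glDrawElements(GL_TRIANGLES, 48 * 3, GL_UNSIGNED_SHORT, indices);

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```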

In a perfect world I'd like to have my vertex and normal arrays be different lengths, then when I go to draw my triangles or quads, specify an index into each array for that vertex.

Now the OBJ file format lets you specify exactly that... a vertex array and a normal array of different lengths, then when you specify the face you are rendering, you specify a vertex index and a normal index (as well as a UV coord if you are using textures), which seems like the perfect solution! 48 vertices but only 8 normals, then pairs of indexes defining the shape's faces. But I'm not sure how to render that in OpenGL ES (again, note the 'ES'.) Currently I have to 'denormalize' (sorry for the SQL pun there) the normals back to a one-to-one ratio with the vertex array, then render. That just wastes memory to me.
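For what it's worth, that 'denormalize' step can be a one-time expansion pass at load time. A minimal sketch, with all names hypothetical:

```c
/* Expand OBJ-style (position index, normal index) pairs into one unified
 * interleaved stream that glDrawArrays can consume. All names hypothetical. */
#include <GLES/gl.h>

void denormalize(const GLfloat *positions,   /* nPos * 3 floats          */
                 const GLfloat *normals,     /* nNorm * 3 floats         */
                 const GLushort *posIdx,     /* one per triangle corner  */
                 const GLushort *normIdx,    /* one per triangle corner  */
                 int cornerCount,            /* e.g. 48 triangles * 3    */
                 GLfloat *outInterleaved)    /* cornerCount * 6 floats   */
{
    for (int i = 0; i < cornerCount; ++i) {
        const GLfloat *p = positions + posIdx[i] * 3;
        const GLfloat *n = normals + normIdx[i] * 3;
        GLfloat *out = outInterleaved + i * 6;
        out[0] = p[0]; out[1] = p[1]; out[2] = p[2];   /* position */
        out[3] = n[0]; out[4] = n[1]; out[5] = n[2];   /* normal   */
    }
}
```

A smarter variant would deduplicate identical (position, normal) pairs while expanding, so corners shared within a face still collapse to a single index for glDrawElements.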

Can anyone help? I hope I'm missing something very simple here.

Mark


You're not missing anything. This is how the spec works because this is how most hardware works (i.e. your 'perfect world' is not the hardware's perfect world).

I won't go into the complexities of implementing hardware that would support one index array per attribute, but I will point out one optimization you'd likely lose: the GL potentially uses the single index as a key into a post-transform vertex cache, so it doesn't have to re-transform a vertex it has just processed. A set of indices per vertex makes that optimization significantly more complex.

Regarding memory savings: in your case, you're talking about roughly a cube with each face made of 4 quads, i.e. 8 triangles. So we're talking about 9*6 = 54 unique vertices. If you only have positions and normals, that's 54 * 4 * 3 * 2 = 1296 B of vertex data + 2 * 48 * 3 = 288 B of index data (assuming 4-byte base types for the attributes and GLushort for indices), for a grand total of 1584 B. And that's assuming a non-optimal data format for positions and normals. The alternative is roughly 26*4*3 (pos) + 8*4*3 (norm) + 2*48*3*2 (indices) = 312 + 96 + 576 = 984 B. So you saved about 600 B on this contrived case. Move to more compact attribute types and you get 648 + 288 = 936 B vs 156 + 48 + 576 = 780 B... The difference starts to be negligible.

Why am I bringing this up? Because you should look at your attribute data types if you want to optimize memory consumption.
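To make that concrete, here's a sketch of one possible choice under ES 1.1: GL_SHORT positions and GL_BYTE normals instead of GL_FLOAT. The quantization scale is an assumption, not part of the original answer:

```c
/* Compact attribute types: 2-byte positions, 1-byte normals.
 * OpenGL ES 1.1; the scale factor below is an assumed example. */
#include <GLES/gl.h>

static GLshort positions[54 * 3];  /* positions quantized to short range  */
static GLbyte  normals[54 * 3];    /* normals quantized: 127 maps to ~1.0 */

void setCompactPointers(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_SHORT, 0, positions);
    glNormalPointer(GL_BYTE, 0, normals);
    /* Undo the position quantization via the modelview matrix, e.g.
     * glScalef(1.0f / 1024.0f, 1.0f / 1024.0f, 1.0f / 1024.0f),
     * so the shorts map back to object-space units. */
}
```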

Last, as you noticed yourself, in practical 3D worlds (i.e. not in a world made of boxes) the savings from such a mechanism would be low: very few attributes can actually be shared.
