
General Animation question

I am new to the idea of animating things in a graphics environment, so I would like to clarify what the correct approach is.

(Just to set the scene, although it's not particularly relevant to the question: I'm working with OpenGL ES on iPhone.)

If I go to an artist and ask them to create a 3D model animation of a walking dwarf that won't be dynamic, how will they give me the data? Will they: a) Create a 3D bones model, animate the bone paths in a path list together with timestamps and interpolation type, and then simply define each bone's 3D model? I.e. a walking dwarf would be a spine, hands, arms, legs, feet, neck and head, and the modeller creates parts for each of those bones and gives me the animation path...?


or b) The modeller creates one full model, then deforms it and somehow saves the deformation?

c) I assume no one would actually store 30 models of the same object and then just present those, unless it was a very low-polycount model? Or am I wrong? What is the best object format for 3D animations?

Any other advice/tips on techniques, mechanisms, etc. will be greatly appreciated!


You have basically the right ideas. There are two main approaches, skeletal and non-skeletal, both of which tend to involve supplying keyframes.

With non-skeletal animation, you might be supplied with, say, ten frames of animation to draw while walking and the amount of time it takes to progress from one frame to the next. So it's the exact 3d analogue of the way 2d pixel sprites used to work. You can either work out which frame is currently visible or apply tweening. If you know that you're halfway between a frame where a vertex is at V1 and a frame where the same vertex is at V2, you can position it halfway between V1 and V2. So you're linearly interpolating all vertex positions between frames. This looks a little smoother than just flicking through frames, but does tend to distort geometry a little so you still need the frames to be reasonably dense.
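That per-vertex tweening is just a linear interpolation applied to every vertex. Here's a minimal CPU-side sketch in C; the `Vec3` type and function name are my own invention, not from any particular engine:

```c
#include <math.h>

/* Linearly interpolate one vertex between two keyframe positions.
   t is the normalized progress between the frames, in [0, 1]. */
typedef struct { float x, y, z; } Vec3;

static Vec3 tween_vertex(Vec3 a, Vec3 b, float t)
{
    Vec3 out;
    out.x = a.x + (b.x - a.x) * t;
    out.y = a.y + (b.y - a.y) * t;
    out.z = a.z + (b.z - a.z) * t;
    return out;
}
```

In practice you'd run this over the whole vertex array each frame, writing the results into the interleaved buffer you submit to GL.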

With skeletal animation, the motion is described by the skeleton, which is a series of connected bones. Each keyframe is a particular orientation of the bones. Often this is a hierarchical thing, so to describe the arm you could start by giving the orientation of the upper arm relative to the shoulder, then the lower arm relative to the upper arm, the hand relative to the lower arm, each finger relative to the hand, etc. The advantage of this is that you can perform really good tweening without distortion. The halfway frame is half the rotation, propagated down the bone tree. And if you stick with quaternions for describing orientation, then it's relatively easy to interpolate in terms of 'half the rotation' with good results.

To put actual geometry over the bones, each vertex is associated with one or more bones. You give it a weighted attachment to each bone, e.g. vertices on the lower arm might be 100% attached to the lower arm bone, vertices towards the elbow might be 80% attached to the lower arm bone and 20% to the upper. You can use a weighted sum of where the vertex would be transformed to by each relevant bone to get the actual vertex position. In that way you can get pretty good joints (albeit usually using a more complicated skeleton than my simplified explanation).
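The weighted-sum step can be sketched like this in C (again, hypothetical types and names; bone transforms here are 3x4 row-major affine matrices, and weights are assumed to sum to 1):

```c
/* Weighted vertex skinning: transform the vertex by each bone it is
   attached to, then blend the results by the attachment weights. */
typedef struct { float x, y, z; } Vec3;
typedef struct { float m[3][4]; } BoneXform;   /* 3x4 affine matrix */

static Vec3 xform_point(const BoneXform *b, Vec3 v)
{
    Vec3 out;
    out.x = b->m[0][0]*v.x + b->m[0][1]*v.y + b->m[0][2]*v.z + b->m[0][3];
    out.y = b->m[1][0]*v.x + b->m[1][1]*v.y + b->m[1][2]*v.z + b->m[1][3];
    out.z = b->m[2][0]*v.x + b->m[2][1]*v.y + b->m[2][2]*v.z + b->m[2][3];
    return out;
}

/* Blend n bone transforms of one vertex; weights should sum to 1. */
static Vec3 skin_vertex(Vec3 v, const BoneXform *bones,
                        const float *weights, int n)
{
    Vec3 out = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < n; ++i) {
        Vec3 p = xform_point(&bones[i], v);
        out.x += weights[i] * p.x;
        out.y += weights[i] * p.y;
        out.z += weights[i] * p.z;
    }
    return out;
}
```

This is essentially what GL_OES_matrix_palette (mentioned below in the answer's original context) does for you on the GPU side.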

In iPhone terms, under ES 1.x you're very likely to have to do non-skeletal tweening on the CPU, which isn't as much of a performance problem as you might guess because the PowerVR MBX doesn't actually keep vertex buffer objects in video RAM anyway. As long as you're accumulating your buffer in a PowerVR-friendly format (alignment matters, mostly, interleaving of position/texture coordinates/normals/etc in the prescribed order is also beneficial) then the submission to OpenGL isn't much more expensive than using a vertex buffer object.

Apple supports the GL_OES_matrix_palette extension for skeletal-style animation. For each group of vertices you can supply several modelview matrices, and for each vertex you can set the weighting of each input matrix. There are some implementation limits on the number of matrices that will likely prevent you from doing an entire model as a single set, but you can subdivide as necessary. The benefit is that you can put all your vertex data into a vertex buffer object and leave the driver and GPU to it.

On devices that support ES 2.x, you can do a much better job of non-skeletal tweening with a vertex shader. That'll allow you to use a vertex buffer object and work out the positions on the GPU. Since the ES 2.x hardware supports pushing vertex buffer objects over for full GPU management, that's a big win.

Using the ES 1.x pipeline for skeletal tweening through GL_OES_matrix_palette is likely to work as well as using the programmable pipeline, since you're already able to use vertex buffer objects.
