
I would like to know how an OpenGL driver is implemented, in order to learn OpenGL internals.

I'm learning OpenGL and would really like to know how the interaction with the graphics card works.

I feel that understanding how it is implemented in the graphics driver will teach me the complete internals of OpenGL (with this I can know what stages/factors influence my decisions regarding performance in OpenGL).

Is there any way to proceed down this path? Will exploring the Mesa library help me in this respect? Am I on the right track?


There's an excellent 10-part series explaining exactly this on the ryg blog: http://fgiesen.wordpress.com/2011/07/01/a-trip-through-the-graphics-pipeline-2011-part-1/ It's explained in terms of DirectX, but both APIs are handled quite similarly by the actual driver. Still, one of the best articles describing the performance characteristics of actual hardware is the GPU Gems 2 article: http://developer.nvidia.com/node/52. The article itself is a couple of years old, but it will definitely increase your awareness of the problem space.

Also, studying the NVIDIA bindless graphics extensions ( http://developer.nvidia.com/content/bindless-graphics ) will give you some extra insight, if you understand why they speed things up. And the "Batch batch batch" presentation is a classic on optimizing CPU/GPU interaction ( http://www.nvidia.de/docs/IO/8230/BatchBatchBatch.pdf ).
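To make the batching point concrete, here is a minimal sketch (my own, not taken from the presentation) contrasting one draw call per object with a single instanced call. It assumes an active OpenGL 3.3+ context with a VAO and index buffer already bound; setUniformsForObject(), indexCount and numObjects are hypothetical placeholders:

    /* Naive: one driver round-trip and state validation per object. */
    for (int i = 0; i < numObjects; ++i) {
        setUniformsForObject(i);  /* hypothetical per-object uniform setup */
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
    }

    /* Batched: per-instance data lives in a buffer object; a single call
       submits everything and the GPU iterates over the instances. */
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0,
                            numObjects);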

But I feel obliged to get back to the original question. Ask yourself which comes first: knowing how to program C++, or knowing the internals of GCC. There's good reason almost everyone treats the 3D API as a black box. Drivers differ by vendor (e.g. AMD/NVIDIA) and by hardware, and performance characteristics doubly so. I really recommend just hammering out some OpenGL code and learning by optimizing it. You can either implement a small technique (like parallax occlusion mapping), or, probably better, write a whole scene with different kinds of dynamic lights, shadows, deferred rendering and post-processing. Then set a couple of weeks aside for optimizing just that and see how far you can get.
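When you get to that optimization phase, GPU timer queries are a convenient way to check whether a change actually helped. A minimal sketch, assuming an active OpenGL 3.3+ context (drawScene() is a hypothetical stand-in for the pass being measured):

    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    drawScene();                        /* the pass being measured */
    glEndQuery(GL_TIME_ELAPSED);

    /* Blocks until the result is ready; real code would read it a
       frame later to avoid stalling the pipeline. */
    GLuint64 elapsedNs = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);
    printf("GPU time: %.3f ms\n", elapsedNs / 1.0e6);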

Optimizing 3D rendering really is a bit of a black art, and there are very few "true in every case" answers. The best way to learn is through hard-won experience.

These guidelines are probably as close as anyone can get:

  1. use LOD extensively (meshes, textures and shaders)
  2. try to keep your draw call count as low as possible
  3. try to keep your intermediate buffers as small as possible (in count and size) for deferred rendering
  4. try to do some rendering at half resolution, e.g. particles and post-processing (see the sketch after this list)
  5. always prefer arithmetic over texture access in shaders
  6. always keep in mind that "looks good" trumps "is correct"
  7. prefer algorithmic optimizations over low-level optimizations
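As an illustration of guideline 4, here is a rough sketch of rendering an expensive pass into a half-resolution offscreen target that gets upsampled during the final composite. It assumes an active OpenGL 3.0+ context; width, height and drawParticles() are placeholders, and error/completeness checks are omitted:

    GLuint fbo, halfResTex;

    /* Half-resolution color target, bilinear filtering for the upsample. */
    glGenTextures(1, &halfResTex);
    glBindTexture(GL_TEXTURE_2D, halfResTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width / 2, height / 2, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, halfResTex, 0);

    /* Render the pass at a quarter of the pixel cost... */
    glViewport(0, 0, width / 2, height / 2);
    drawParticles();

    /* ...then sample halfResTex in the full-resolution composite. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, width, height);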


You will have a hard time trying to understand the internals of an OpenGL driver (state tracker in Mesa/Gallium terminology) without being intimate with the OpenGL API.

OpenGL itself is defined in terms of an abstract graphics machine, and it's actually much easier to understand OpenGL from this vantage point than by trying to do it through the driver.
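A few lines of code show what that abstract machine view means in practice: every gl* call reads or mutates state held in the context, and a draw call simply runs the machine against whatever state is currently set. Sketch only; vbo, data, size and vertexCount are assumed to exist:

    glEnable(GL_DEPTH_TEST);             /* flip a switch in the machine */
    glDepthFunc(GL_LEQUAL);              /* configure how that switch behaves */

    glBindBuffer(GL_ARRAY_BUFFER, vbo);  /* place vbo in the ARRAY_BUFFER slot... */
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW); /* ...implicitly targeted here */

    /* "Run the machine" with the accumulated state. */
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);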

Looking at a driver's source code will surely help you understand the bottlenecks associated with that particular driver, and of course it helps to see the patterns recur across other drivers. But it helps a lot more to read the technical documents about the GPUs' architectures.
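If you do go source-diving, it's worth first checking which implementation you are actually talking to, e.g. a Mesa driver versus a vendor one. Assuming an active context and <stdio.h>:

    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));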
