
OpenGL Low-Level Performance Questions

This subject, as with any optimisation problem, comes up a lot, but I just couldn't find what I (think) I want.

A lot of tutorials, and even SO questions, have similar tips, generally covering:

  • Use GL face culling (the OpenGL function, not the scene logic)
  • Only send one matrix to the GPU (the combined projection-model-view), so the MVP multiplication drops from once per vertex to once per model (as it should be)
  • Use interleaved vertices (see the layout sketch after this list)
  • Minimise GL calls as much as possible, batching where appropriate
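
For reference, the CPU side of point 3 boils down to something like the following minimal sketch (assuming a GL 3.x core context with a loader such as GLEW or GLAD already set up; the Vertex struct, vertices and vertexCount are placeholders for your own mesh data):

    #include <cstddef>   // offsetof

    // One struct per vertex, so the position, normal and texcoord of the same
    // vertex sit next to each other in memory (and in the GPU's vertex cache).
    struct Vertex {
        float position[3];
        float normal[3];
        float uv[2];
    };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                 vertices, GL_STATIC_DRAW);

    // One stride for every attribute; the offsets point into the struct.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, position));
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, normal));
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, uv));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);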

And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge), and received almost no performance change.

Whilst I am receiving around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into play.

My CPU is idling at around 20-50% during rendering, so I assume I am GPU bound when it comes to increasing performance.

Note: I am looking into gDEBugger at the moment

Cross posted at Game Development


Point 1 is obvious, as it saves fill rate. If an object's back-facing primitives are processed first, those faces will be omitted. However, modern GPUs tolerate overdraw quite well. I once measured (on a GeForce 8800 GTX) up to 20% overdraw before a significant performance hit. But it's better to save this reserve for things like occlusion culling, rendering of blended geometry and the like.
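
For completeness, the GL side of point 1 is just a couple of calls; the winding order below (GL_CCW, the default) is an assumption and must match how your geometry is actually wound:

    glEnable(GL_CULL_FACE);   // discard back-facing triangles before rasterization
    glCullFace(GL_BACK);      // cull the faces pointing away from the camera
    glFrontFace(GL_CCW);      // counter-clockwise winding counts as front-facing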

Point 2 is, well, pointless. The matrices have never been calculated on the GPU (well, unless you count the SGI Onyx). Matrices were always just a kind of global rendering parameter, calculated on the CPU and then pushed into global registers on the GPU, now called uniforms, so joining them has only very little benefit. In the shader it saves just one additional vector-matrix multiplication (which boils down to 4 MAD instructions), at the expense of less algorithmic flexibility.
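
If you do join them, the combination happens once per model on the CPU. A minimal sketch, assuming GLM for the CPU-side math, a shader declaring "uniform mat4 uMVP;", and that the program is currently bound with glUseProgram:

    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    void uploadMVP(GLuint program, const glm::mat4& model,
                   const glm::mat4& view, const glm::mat4& projection)
    {
        // One matrix product per draw call on the CPU...
        glm::mat4 mvp = projection * view * model;

        // ...and a single mat4 uniform on the GPU; the vertex shader then does
        // just one mat4 * vec4 per vertex.
        GLint loc = glGetUniformLocation(program, "uMVP");  // cache this in real code
        glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
    }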

Point 3 is all about cache efficiency. Data belonging together should fit into a cache line.

Point 4 is about preventing state changes from trashing the caches. But it strongly depends on which GL calls they mean. Changing uniforms is cheap. Switching a texture is expensive. The reason is that a uniform sits in a register, not in some piece of memory that's cached. Switching a shader is expensive because different shaders exhibit different runtime behaviour, thus trashing the pipeline execution prediction, altering memory (and thus cache) access patterns, and so on.
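
To put rough labels on that (illustrative fragments only; loc, texture and otherProgram are placeholders, and the actual cost always depends on driver and hardware):

    glUniform1f(loc, 0.5f);                  // cheap: writes a constant register
    glBindTexture(GL_TEXTURE_2D, texture);   // more expensive: new texture cache contents
    glUseProgram(otherProgram);              // expensive: new pipeline state, new
                                             // execution and memory access patterns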

But those are all micro-optimizations (some of them with huge impact). However, I recommend looking into large-impact optimizations, like implementing an early-Z pass, and using occlusion queries during the early-Z pass for quick discrimination of whole geometry batches. One large-impact optimization, which essentially consists of summing up a lot of point-4-style micro-optimizations, is to sort render batches by expensive GL state. So group everything with common shaders, and within those groups sort by texture, and so on. This state grouping only affects the visible render passes. In the early-Z pass you're only testing outcomes against the Z buffer, so there's only geometry transformation and the fragment shaders just pass through the Z value.
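
A rough sketch of what such a depth pre-pass looks like at the GL level (the draw functions are placeholders for your own render loop, and the depth func of the second pass depends on whether both passes transform vertices identically):

    // Pass 1: depth only - no colour writes, trivial fragment work.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawSceneDepthOnly();       // positions only, simplest possible shaders

    // Pass 2: full shading; only fragments matching the pre-pass depth survive.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);      // depth buffer is already final
    glDepthFunc(GL_LEQUAL);     // or GL_EQUAL if both passes are bit-identical
    drawSceneShaded();          // batches sorted by shader, then texture, ...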


  1. Yes.
  2. Makes no sense, as the driver can combine these matrices for you (it knows they are uniforms, so they will not change during the draw call).
  3. Yes.
  4. Only if you are CPU bound.

The first thing you need to know is where exactly your bottleneck is. "GPU" is not an answer, because the GPU is a complex system. The actual problem might be among these:

  • Shader processing (vertex/fragment/geometry)
  • Fill rate
  • Number of draw calls
  • GPU <-> VMEM (that's where interleaving and smaller textures help)
  • System bus (streaming some data every frame?)

You need to perform a series of tests to pin down the problem. For example, draw everything to a bigger FBO to see if it's a fill-rate problem (or increase the MSAA amount). Or draw everything twice to check for draw-call overload issues.
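
As an example of the fill-rate test, something along these lines (entirely a sketch: width, height and drawFrame() are placeholders, and error handling is reduced to the completeness check):

    #include <cassert>

    // Render the same frame into an offscreen target with four times the pixels.
    // If the frame time scales with the pixel count, you are fill-rate/fragment bound.
    GLuint fbo, colorTex, depthRb;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2 * width, 2 * height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 2 * width, 2 * height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);

    glViewport(0, 0, 2 * width, 2 * height);
    drawFrame();                // the usual render path, timed as before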


Just to add my 2 cents to @kvark's and @datenwolf's answers, I'd like to say that, while the points you mention are 'basic' GPU performance tips, more involved optimization is very application dependent.

In your geometry-heavy test case, you're already pushing 28 million triangles * 40 FPS = 1120 million triangles per second, which is already quite a lot: most GPUs out there (not all, especially Fermi) have a triangle setup rate of 1 triangle per GPU clock cycle. This means that a GPU running at, say, 800 MHz cannot process more than 800 million triangles per second, and this without even drawing a single pixel. NVIDIA's Fermi can process 4 triangles per clock cycle.

If you're hitting this limit (you don't mention your hardware platform), there's not much you can do at the OpenGL/GPU level. All you can do is send less geometry, via more efficient culling (frustum or occlusion) or via a LOD scheme.
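
For the frustum part, the coarse test is small enough to sketch here. This assumes you already have the six frustum planes extracted from the combined projection*view matrix (normalized, normals pointing into the frustum) and a bounding sphere per geometry batch; all names are illustrative:

    struct Plane  { float nx, ny, nz, d; };   // plane equation: nx*x + ny*y + nz*z + d = 0
    struct Sphere { float x, y, z, radius; }; // bounding volume of one geometry batch

    // Returns false only when the sphere lies entirely outside one of the planes,
    // i.e. the whole batch can be skipped without ever touching the GPU.
    bool sphereInFrustum(const Plane frustum[6], const Sphere& s)
    {
        for (int i = 0; i < 6; ++i) {
            float dist = frustum[i].nx * s.x + frustum[i].ny * s.y +
                         frustum[i].nz * s.z + frustum[i].d;
            if (dist < -s.radius)
                return false;   // completely behind this plane: cull the batch
        }
        return true;            // inside or intersecting: submit the draw call
    }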

Another thing is that tiny triangles hurt fill rate, as rasterizers do parallel processing on square blocks of pixels; see http://www.geeks3d.com/20101201/amd-graphics-blog-tessellation-for-all/.


This very much depends on what particular hardware you are running and what the usage scenarios are. OpenGL performance tips make sense for the general case - the library is, after all, an abstraction over many different driver implementations. The driver makers are free to optimize however they want under the hood, so on one device they may remove redundant state changes or perform other optimizations without your knowledge; on another device, they may not. It is best to stick to best practices to have a better chance of good performance over a range of devices.
