
Later OpenGL specs - more than shaders?

I've been programming OpenGL almost entirely against the 2.x specification, and I don't know much about the 3.x and 4.x specs. I read on some forum that the later specs of OpenGL are basically just feeding data to shaders, which do all the real work. That would be in sharp contrast to what I understand OpenGL to be, with shaders as an auxiliary tool for things like lighting, water, and other effects. I have, at best, a very basic understanding of shaders and would not be surprised to be proved wrong on any of the topics mentioned above. I'm just curious and looking for a greater understanding.

Thanks


Your understanding is correct. In OpenGL 3.0, nearly all of the fixed functionality is deprecated in favor of shaders. Even the built-in state accessible from shaders is deprecated, such as the modelview/projection matrices, vertex coordinates, normals, lights, etc.

The basic workflow is supposed to be as follows:

  • Load geometry and per-vertex attributes through VBOs
  • Load textures, shaders
  • Pass stuff like matrices, lights to shaders through uniforms
  • Pass per-vertex values like vertex coordinates, texture coordinates, and normals through generic attribute arrays (in VBOs)
  • Call glDrawElements or glDrawArrays

So most of the OpenGL calls you'll be making are just pushing generic data around. Instead of state functions like glLightfv, you'll be calling generic functions like glUniform4f.
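
To make that concrete, a draw call in the modern style looks roughly like this. This is a minimal C sketch, not code from the post: `prog` is assumed to be an already-linked shader program, `u_color` is a hypothetical uniform, and a loader such as glad or GLEW is assumed to provide the 3.x entry points.

```c
/* Minimal modern-style draw setup (sketch; assumes an OpenGL 3+ core
 * context, a loader for the 3.x entry points, and a linked program
 * `prog`; the vertex data and `u_color` uniform are illustrative). */
void draw_triangle(GLuint prog)
{
    static const GLfloat verts[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);   /* a bound VAO is required in core profile */
    glBindVertexArray(vao);

    /* Geometry goes into a VBO... */
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    /* ...and reaches the shader as a generic attribute array (location 0). */
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);

    /* Anything that used to be fixed-function state is now a uniform. */
    glUseProgram(prog);
    glUniform4f(glGetUniformLocation(prog, "u_color"), 1.0f, 0.5f, 0.0f, 1.0f);

    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```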


If you were already using shaders, not much changes. When you run into something that's missing, you add a matrix multiply to your render code and two lines to your shader (or, more likely, half a line), and life goes on.

If, like me, you were foolish enough to try to use OpenGL for something productive (i.e. rendering scientific data at high frame rates, not gaming), you're in for a little bit of hurt. You're going to need to write (by which I mean cut-and-paste from an example) a couple dozen lines of shader code to replace the fixed-function pipeline, and nearer a hundred lines of setup code to compile, link, and activate those shaders, along with debug output; you won't get it right on the first try.
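
For a sense of what that setup code involves, here is the shape of it: compile, link, check, and print the logs. A hedged C sketch with hypothetical helper names, not the author's actual code; GL headers and a loader are assumed.

```c
/* Shader compile/link boilerplate with debug output (sketch; the
 * helper names are hypothetical, GL headers/loader assumed). */
#include <stdio.h>

static GLuint compile_shader(GLenum type, const char *src)
{
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(sh, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile error:\n%s\n", log);
    }
    return sh;
}

static GLuint link_program(const char *vs_src, const char *fs_src)
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile_shader(GL_VERTEX_SHADER, vs_src));
    glAttachShader(prog, compile_shader(GL_FRAGMENT_SHADER, fs_src));
    glLinkProgram(prog);

    GLint ok = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetProgramInfoLog(prog, sizeof(log), NULL, log);
        fprintf(stderr, "program link error:\n%s\n", log);
    }
    return prog;   /* activate with glUseProgram(prog) */
}
```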

In either case, still nothing will work. Then you find out you need to bind a VAO. You don't actually need to do anything with it (unless you want to use multiple VAOs for state management), you just need one because none of the other attribute/VBO stuff works without it.
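
The fix itself is tiny; something like this once at startup (a sketch, assuming a 3.x core context):

```c
/* One VAO, generated and bound once; after this, the attribute/VBO
 * calls record into it and drawing works again. */
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
```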

When you're done, though, you can start thinking about integrating GPGPU computation using shaders or OpenCL and hand off the data for rendering, all inside the graphics memory.


I think the idea is that geometry is easy. We have beaten geometry to death. Engines exist that handle the geometry just fine.

Now, the real challenge is: how do we make that geometry look more realistic? That kind of work is (best) done with shaders, so that's where the focus of the profession is headed.


The current OpenGL specs still do most of the same geometry as ever. What they no longer do is (automatically) handle the kinds of things that are reasonably easy to do in shaders. Just for example, they no longer automatically interpolate between vertex colors to get the color for every pixel. The interpolation hardware is still there, so the shader is trivial, but still necessary.
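
For illustration, such a passthrough pair might look like the following, shown here as GLSL 1.50 in C string literals; it also includes the matrix multiply mentioned above, and all the names (`u_mvp`, `v_color`, etc.) are illustrative, not from the post.

```c
/* The "trivial but still necessary" shaders: forward a per-vertex
 * color and let the hardware interpolate it (sketch; names are
 * illustrative, GLSL 1.50 / OpenGL 3.2). */
static const char *vs_src =
    "#version 150\n"
    "in vec3 position;\n"
    "in vec3 color;\n"
    "out vec3 v_color;\n"        /* interpolated per fragment */
    "uniform mat4 u_mvp;\n"      /* replaces the fixed-function matrices */
    "void main() {\n"
    "    v_color = color;\n"
    "    gl_Position = u_mvp * vec4(position, 1.0);\n"
    "}\n";

static const char *fs_src =
    "#version 150\n"
    "in vec3 v_color;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    frag_color = vec4(v_color, 1.0);\n"
    "}\n";
```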

