
GLSL + OpenGL: Moving away from the state machine

I started moving one of my projects away from the fixed pipeline. To try things out, I wrote a shader that simply takes the OpenGL matrices and transforms the vertex with them, planning to start calculating my own matrices once I knew that worked. I thought this would be a simple task, but even this does not work.

I started out with this shader for the normal fixed pipeline:

void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

I then changed it to this:

uniform mat4 model_matrix;
uniform mat4 projection_matrix;

void main(void)
{
    gl_Position = model_matrix * projection_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

I then retrieve the OpenGL matrices and pass them to the shader with this code:

[material.shader bindShader];

GLfloat modelmat[16];
GLfloat projectionmat[16];

glGetFloatv(GL_MODELVIEW_MATRIX, modelmat);
glGetFloatv(GL_PROJECTION_MATRIX, projectionmat);

glUniformMatrix4fv([material.shader getUniformLocation:"model_matrix"], 1, GL_FALSE, modelmat);
glUniformMatrix4fv([material.shader getUniformLocation:"projection_matrix"], 1, GL_FALSE, projectionmat);

// ... draw stuff

For some reason this does not draw anything. (I am 95% positive those matrices are correct before I pass them, by the way.) Any ideas?


The problem was that my order of matrix multiplication was wrong. I was not aware that matrix multiplication is not commutative.

The correct order should be:

projection * modelview * vertex
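
With the uniform names from the question (where model_matrix actually holds the modelview matrix), a minimal sketch of the corrected vertex shader looks like this:

uniform mat4 model_matrix;      // actually holds the modelview matrix
uniform mat4 projection_matrix;

void main(void)
{
    // Order matters: the projection is applied last, so it comes first.
    gl_Position = projection_matrix * model_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}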

Thanks to ltjax and doug65536


For the matrix math, try using an external library such as GLM. It also has some basic examples of how to create the necessary matrices and do the projection * view * model transform.
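
As a rough sketch (assuming GLM is available, and using a hypothetical uniform location mvp_loc for a single combined "mvp" matrix, which is not in the original code), building the matrices on the CPU and uploading the combined transform could look like this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build projection, view, and model matrices yourself instead of
// reading them back from the deprecated OpenGL matrix stacks.
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 view       = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // camera position
                                   glm::vec3(0.0f, 0.0f, 0.0f),   // look-at target
                                   glm::vec3(0.0f, 1.0f, 0.0f));  // up vector
glm::mat4 model      = glm::mat4(1.0f);                           // identity for now

// Same order as the fix above: projection * view * model.
glm::mat4 mvp = projection * view * model;

// mvp_loc is a hypothetical location of a "mat4 mvp" uniform in the shader.
glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, glm::value_ptr(mvp));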


Use OpenGL 3.3's shading language. OpenGL 3.3 is roughly comparable to DirectX10, hardware-wise.

Don't use the deprecated functionality. Almost everything in your first void main example is deprecated. You must explicitly declare your inputs and outputs if you expect to use the high-performance code path of the drivers. Deprecated functionality is also far more likely to be full of driver bugs.

Use the newer, more explicit style of declaring inputs and outputs and set them in your code. It really isn't bad. I thought it would be ugly, but it was actually pretty easy (I wish I had just done it earlier).
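
As an illustrative sketch (the attribute and uniform names here are made up, not taken from the original code), a GLSL 3.30 vertex shader with explicit inputs and outputs looks something like this:

#version 330 core

// Explicit vertex inputs instead of gl_Vertex / gl_MultiTexCoord0
layout(location = 0) in vec3 in_position;
layout(location = 1) in vec2 in_texcoord;

// Explicit output instead of gl_TexCoord[0]
out vec2 frag_texcoord;

// Matrices supplied by the application instead of the fixed-function stacks
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;

void main(void)
{
    gl_Position = projection_matrix * modelview_matrix * vec4(in_position, 1.0);
    frag_texcoord = in_texcoord;
}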

FYI, the last time I looked at a lowest common denominator for OpenGL (2012), it was OpenGL 3.3. Practically all video cards from AMD and NVidia with any gaming capability support OpenGL 3.3, and have for a while, so any code you write now for OpenGL 3.3 will work on a typical low-end or better GPU.
