
Am I doing something wrong, or do Intel graphics cards really suck that badly?

I have

VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) on Ubuntu 10.10 Linux.

I'm rendering one static VBO per frame. The VBO holds 30,000 triangles, drawn with 3 lights and one texture, and I'm getting 15 FPS.

Are Intel cards really that bad, or am I doing something wrong?

The drivers are the standard open-source drivers from Intel.

My code:


void init() {
  glGenBuffersARB(4, vbos);  
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, vertXYZ, GL_STATIC_DRAW_ARB);
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 4, colorRGBA, GL_STATIC_DRAW_ARB);
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, normXYZ, GL_STATIC_DRAW_ARB);
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 2, texXY, GL_STATIC_DRAW_ARB);
}

void draw() {
  glPushMatrix();

  const Vector3f O = ps.getPosition();

  glScalef(scaleXYZ[0], scaleXYZ[1], scaleXYZ[2]);
  glTranslatef(O.x() - originXYZ[0], O.y() - originXYZ[1], O.z()
          - originXYZ[2]);

  glEnableClientState(GL_VERTEX_ARRAY);
  glEnableClientState(GL_COLOR_ARRAY);
  glEnableClientState(GL_NORMAL_ARRAY);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
  glVertexPointer(3, GL_FLOAT, 0, 0);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
  glColorPointer(4, GL_FLOAT, 0, 0);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
  glNormalPointer(GL_FLOAT, 0, 0);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
  glTexCoordPointer(2, GL_FLOAT, 0, 0);

  texture->bindTexture();
  glDrawArrays(GL_TRIANGLES, 0, verticesNum);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); // unbind the VBO

  glDisableClientState(GL_VERTEX_ARRAY);
  glDisableClientState(GL_COLOR_ARRAY);
  glEnableClientState(GL_NORMAL_ARRAY);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

  glPopMatrix();
}

EDIT: maybe it's not clear: initialization is in a different function and is called only once.


A few hints:

  • With that number of vertices you should interleave the arrays. Vertex caches usually don't hold more than 1000 entries. Interleaving the data of course implies that all the data is held by a single VBO (see the first sketch after this list).

  • Using glDrawArrays is suboptimal if there are a lot of shared vertices, which is likely the case for a (static) terrain. Draw with glDrawElements instead. You can also use the index array to implement some cheap LOD (see the second sketch after this list).

  • Experiment with the number of indices handed to glDrawElements per call. Try batches of at most 2^14, 2^15 or 2^16 indices. This is again to relieve cache pressure.
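
A minimal sketch of the interleaving, assuming the same per-vertex layout as your four arrays (3 position floats, 4 color floats, 3 normal floats, 2 texture-coordinate floats); the Vertex struct and function names are made up for illustration:

#define GL_GLEXT_PROTOTYPES // one way to get the ARB prototypes on Linux/Mesa
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstddef>          // offsetof

struct Vertex {
    GLfloat x, y, z;        // position
    GLfloat r, g, b, a;     // color
    GLfloat nx, ny, nz;     // normal
    GLfloat s, t;           // texture coordinates
};

static GLuint vbo;

void initInterleaved(const Vertex *vertices, GLsizei verticesNum) {
    // One VBO holds everything; each vertex is one contiguous record.
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(Vertex) * verticesNum,
                    vertices, GL_STATIC_DRAW_ARB);
}

void setInterleavedPointers() {
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    // Same stride for every attribute; the offset selects the field.
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, x));
    glColorPointer(4, GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, r));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, nx));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, s));
}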
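
And a sketch of indexed, batched drawing; indices and indicesNum here are hypothetical — your terrain code would build the GLuint index array so that shared vertices are reused:

#include <algorithm> // std::min

static GLuint ibo;

void initIndices(const GLuint *indices, GLsizei indicesNum) {
    // Upload the index array once into an element-array buffer.
    glGenBuffersARB(1, &ibo);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB,
                    sizeof(GLuint) * indicesNum, indices, GL_STATIC_DRAW_ARB);
}

void drawIndexed(GLsizei indicesNum) {
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
    // Submit at most 2^16 indices per call (one of the batch sizes above).
    const GLsizei batch = 1 << 16;
    for (GLsizei first = 0; first < indicesNum; first += batch) {
        const GLsizei count = std::min(batch, indicesNum - first);
        // The last argument is a byte offset into the bound index buffer.
        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT,
                       (const GLvoid *)(first * sizeof(GLuint)));
    }
}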

Oh, and in your code, these lines at the end:

  glDisableClientState(GL_VERTEX_ARRAY);
  glDisableClientState(GL_COLOR_ARRAY);
  glEnableClientState(GL_NORMAL_ARRAY); 
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

I think you meant the last two of those to be glDisableClientState.


Make sure your system has OpenGL acceleration enabled:

$ glxinfo | grep rendering
direct rendering: Yes

If you get 'no', then you don't have OpenGL acceleration.
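
Note that direct rendering alone doesn't prove you're on the hardware driver. It can also be worth checking the renderer string (the exact string varies, but a software fallback typically reports something like 'Software Rasterizer' instead of the Intel chip):

$ glxinfo | grep "OpenGL renderer"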


Thanks for the answers.

Yeah, I have direct rendering on, according to glxinfo. In glxgears I get something like 150 FPS, and games like Warzone or glest run fast enough. So the problem is probably in my code.

I'll buy a real graphics card eventually anyway, but I wanted my game to work on integrated graphics cards too; that's why I posted this question.
