Weird CPU usage in OpenGL program

In an MFC program I built myself, I'm having some weird problems with CPU usage.

I load a point cloud of around 360k points and everything works fine (I use VBOs, which is the way to do it from what I understand?). I can move it around as I please and notice no adverse effects (CPU usage is very low; the GPU does all the work). But at certain angles and zoom levels I see the CPU spike on one of my cores! I can then change the angle or zoom a little and it drops back to around 0. This is more likely to happen in a large window than a small one.

I measure the program's FPS and it's constantly at 65, but when the CPU spike hits it typically drops by around 10, to 55. I also measure the time SwapBuffers takes: during normal operation it's around 0-1 ms, but once the CPU spike hits it goes up to around 20 ms, so clearly something in that call suddenly becomes very expensive (for the GPU, I guess?). That something is not in the DrawScene function (which is the function one would expect to eat CPU in a poor implementation), so I'm at a bit of a loss.

I know it's not due to the number of visible points, because this can just as easily happen on a sub-section of the data as on the whole cloud. I've tried moving it around to see if it's related to the depth buffer, clipping or similar, but which angles trigger the problem seems entirely random. It does seem somewhat repeatable, though: moving the model to a position that was laggy once will be laggy again when it's moved back there.

I'm very new to OpenGL, so it's not impossible I've made some totally obvious error.

This is what the render loop looks like (it's run in an MFC app via a timer event with a 1 ms period):

    // Clear color and depth buffer bits
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Draw OpenGL scene
    OGLDrawScene();

    unsigned int time1 = timeGetTime();

    // Swap buffers
    SwapBuffers(hdc);

    // Calculate execution time for SwapBuffers
    m_time = timeGetTime() - time1;

    // Calculate FPS
    ++m_cnt;
    if (timeGetTime() - m_lastTime > 1000)
    {
        m_fps = m_cnt;
        m_cnt = 0;
        m_lastTime = timeGetTime();
    }


I've noticed that, at least a while back, ATI drivers tended to spin-wait a little too aggressively, while NVIDIA drivers tend to be interrupt driven. Technically spin-waiting is faster, assuming you have nothing better to do. Unfortunately, these days you probably do have something better to do on another thread.

I think the OP's display drivers may indeed be spin-waiting.
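If you want to confirm that the spike really is the driver spinning on the vertical blank inside SwapBuffers, one quick test is to turn vsync off and see whether the CPU usage disappears. Here's a minimal sketch, assuming the driver exposes the WGL_EXT_swap_control extension (the function pointer must be fetched while an OpenGL context is current; DisableVSync is just a name I picked for the helper):

    #include <windows.h>

    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

    // Try to disable vsync; returns false if the extension isn't available.
    bool DisableVSync()
    {
        // Requires a current OpenGL rendering context on this thread.
        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (!wglSwapIntervalEXT)
            return false;
        return wglSwapIntervalEXT(0) != FALSE;  // 0 = swap without waiting for vblank
    }

If the 100% core usage goes away with the swap interval set to 0, that's a good sign the driver was busy-waiting for vsync rather than doing real work.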


Ok, so I think I have figured out how this can happen.

To begin with, WM_TIMER messages don't seem to be generated more often than every 15 ms at best, even when using timeBeginPeriod(1), at least on my computer. That resulted in the steady 65 fps I was seeing.

As soon as the scene took more than 15 ms to render, SwapBuffers became the limiting factor instead. SwapBuffers seems to busy-wait, which resulted in 100% CPU usage on one core whenever this happened. So it isn't really tied to certain camera angles; the frame time simply varies with how many points are on screen at the time, and the CPU appears to spike whenever rendering happens to take longer than 15 ms and the loop starts waiting in SwapBuffers instead.
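For what it's worth, one common way around the ~15 ms WM_TIMER granularity is to render from the idle part of the message loop instead of from a timer. A rough sketch, assuming a hand-rolled message pump rather than MFC's CWinApp::Run, and a hypothetical RenderFrame() that wraps the clear/draw/swap code shown above:

    MSG msg = {};
    bool running = true;
    while (running)
    {
        // Drain all pending window messages first so the UI stays responsive.
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
                running = false;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        // No messages pending: render a frame right away instead of waiting
        // for the next WM_TIMER tick.
        RenderFrame();
    }

In MFC specifically, overriding CWinApp::OnIdle achieves much the same thing without replacing the message pump.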

On a similar note, does anyone know of a function like "glReadyToSwap" or something similar, i.e. a function that indicates whether the buffers are ready to be swapped? That way I could render with a higher-resolution timer (1 ms, for example) and check each millisecond whether the buffers are ready to swap; if they aren't, just wait another millisecond, so as not to busy-wait.
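I'm not aware of a literal glReadyToSwap, but sync/fence objects (OpenGL 3.2+ or the ARB_sync extension) can approximate the idea: insert a fence after submitting the frame and poll it with a zero timeout before calling SwapBuffers. A rough sketch, assuming the GL 3.2 entry points are loaded (e.g. via GLEW), with SubmitFrame/TrySwap as hypothetical names:

    GLsync frameFence = 0;

    void SubmitFrame()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        OGLDrawScene();
        glFlush();  // make sure the commands are actually submitted
        frameFence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }

    // Call this from the 1 ms timer; returns true once the swap has been done.
    bool TrySwap(HDC hdc)
    {
        if (!frameFence)
            return false;

        // Poll without blocking (timeout of 0): signaled means the GPU has
        // finished this frame's commands.
        GLenum status = glClientWaitSync(frameFence, 0, 0);
        if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED)
        {
            glDeleteSync(frameFence);
            frameFence = 0;
            SwapBuffers(hdc);
            return true;
        }
        return false;  // not ready yet; check again on the next tick
    }

Note that this only tells you the GPU has finished the frame's rendering commands; depending on the swap interval, SwapBuffers itself may still wait for the vertical blank.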
