GPU with software rendering

As far as I understand, it is the GPU's job to render images and decide the values of the pixels to be displayed on the monitor. If I have a monitor connected to a GPU and I want to experiment with a software renderer,

how is the work divided between the GPU and the software renderer? I ask because my monitor is still connected to the GPU, so the GPU must be transmitting the pixel values computed by the software renderer. But is that all the GPU does? Is there a specific way of telling the GPU to "just pass these values on to the monitor"?

I am a newbie, but I would like to know the details.


I believe you have some concepts confused. A GPU (Graphics Processing Unit) is a processor, just like the CPU. Even if you are building a software renderer, you will still interface with the graphics card to set the resolution and draw/bit-blit pixels to the screen.

Here is the difference, at a high level, between hardware and software rendering. I am oversimplifying, since there are vertex/pixel shaders, etc. The GPU can also perform general math for a program; just look at OpenCL.

Software

I have a triangle I would like to draw. Given its three points, I need to figure out the slopes of the edges and which x,y coordinates to fill on the screen. This involves a loop going through each pixel of the triangle and drawing it to the screen.
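That per-pixel loop can be sketched roughly like this (a minimal Python sketch; the "framebuffer" here is just a flat list of pixel values standing in for video memory, and the edge-function test is one common way to decide which pixels fall inside the triangle):

```python
# Minimal software-rasterizer sketch: fill a triangle into an in-memory
# "framebuffer" by looping over pixels, the way a software renderer would.

W, H = 64, 64
framebuffer = [0] * (W * H)  # one integer color value per pixel

def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of the edge (a -> b) the point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def fill_triangle(x0, y0, x1, y1, x2, y2, color):
    # Walk every pixel in the triangle's bounding box and fill the ones
    # that lie on the same side of all three edges.
    minx, maxx = max(min(x0, x1, x2), 0), min(max(x0, x1, x2), W - 1)
    miny, maxy = max(min(y0, y1, y2), 0), min(max(y0, y1, y2), H - 1)
    for y in range(miny, maxy + 1):
        for x in range(minx, maxx + 1):
            w0 = edge(x1, y1, x2, y2, x, y)
            w1 = edge(x2, y2, x0, y0, x, y)
            w2 = edge(x0, y0, x1, y1, x, y)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                framebuffer[y * W + x] = color

fill_triangle(10, 10, 50, 10, 10, 50, 0xFF0000)
```

In hardware rendering, the GPU performs essentially this loop itself, massively in parallel.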

Hardware

I have a triangle I would like to draw. I send the three coordinates to the GPU and tell it to draw the triangle. The GPU figures out which pixels to fill in, etc.

Please understand this is grossly oversimplifying the process.


A GPU is more or less a hardware device that does certain types of vector math REALLY well compared to the CPU. A GPU is far more limited in terms of what it can do than a CPU, but it does what it does quickly.

Unless you are using DirectX or OpenGL, it's most likely that you're doing software rendering. If you are using DirectX or OpenGL, then you probably are using hardware rendering (though there are pure software implementations of both).

Hardware and software rendering both ultimately output to addressable memory known as the framebuffer (which holds the output bitmap to be displayed on the screen); the only difference is which piece of hardware writes that memory (the CPU or the GPU).

The physical memory backing the framebuffer is usually part of the GPU (I'm not sure whether it actually has to be on the GPU; input would be welcome).


In older Windows and DirectX versions you can call the Lock() method of a DirectDraw surface to get a temporary direct pointer to the framebuffer, which you can then access almost like any other bitmap in RAM. The same can be done in DOS through VESA drivers and other APIs, and through the GX driver on Pocket PC/Windows Mobile. I'm not sure about other operating systems.

The framebuffer is an area of memory that the display hardware streams directly to the output device. It usually lives on the video card, but it can be mapped into your application's address space and accessed directly like normal RAM. Read/write speeds differ, though: read speed is typically very low, and write speed is good only if you store consecutive, aligned 32/64/128-bit words. It is usually better to render everything into an offscreen buffer in RAM and then just memcpy its contents into the framebuffer. Some graphics cards support hardware, CPU-independent blitting from a RAM surface to the framebuffer, but it can be buggy and/or slow.
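The render-offscreen-then-copy pattern described above can be sketched like this (the "framebuffer" here is just an ordinary bytearray standing in for the memory-mapped video region; sizes and names are illustrative):

```python
# Sketch of "render into an offscreen buffer, then bulk-copy into the
# framebuffer". A real framebuffer would be a memory-mapped region exposed
# by the video driver; a bytearray stands in for it here.

W, H, BPP = 320, 240, 4          # resolution and bytes per pixel
PITCH = W * BPP                  # bytes per row

backbuffer = bytearray(PITCH * H)   # ordinary RAM: fast to read and write
framebuffer = bytearray(PITCH * H)  # stand-in for video memory

def put_pixel(buf, x, y, rgba):
    off = y * PITCH + x * BPP
    buf[off:off + BPP] = rgba

# Draw into the back buffer; scattered per-pixel accesses are cheap here.
for x in range(W):
    put_pixel(backbuffer, x, 120, b'\xff\x00\x00\xff')

# "Present": one bulk, sequential copy into the framebuffer -- the access
# pattern video memory handles well (the memcpy mentioned above).
framebuffer[:] = backbuffer
```

The key point is that the framebuffer is only ever written in one large sequential burst, never read back.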


It depends on what level you're doing this "software rendering" at. Unless you're writing your own windowing system, the OS will still use the GPU to help it blit all its windows around.

If you want to write a software renderer, the simplest way to implement it is to do your rendering into an in-memory bitmap, and then call into the OS to say, "Hey, can you draw this image to the screen for me?"
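That structure looks roughly like this (a hedged sketch: `present_to_os` is a placeholder for whatever blit API your platform actually offers, e.g. StretchDIBits on Win32 or XPutImage on X11; the gradient is just filler):

```python
# Simplest software-renderer structure: compute every pixel on the CPU
# into an in-memory bitmap, then hand the finished image to the OS.

W, H = 320, 240

def render_frame(t):
    """Software rendering: the CPU computes every pixel of the frame."""
    pixels = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            pixels[y][x] = (x + y + t) % 256  # placeholder gradient
    return pixels

def present_to_os(pixels):
    # Placeholder for the OS blit call ("draw this image to the screen").
    # On a real platform this would wrap e.g. StretchDIBits or XPutImage.
    pass

frame = render_frame(t=0)
present_to_os(frame)
```

In a real program you would run this in a loop, once per frame, with the OS (and ultimately the GPU's scan-out hardware) handling the final trip to the monitor.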
