Insight into how things get printed onto the screen (cout, printf) and the origin of really complex stuff that I can't seem to find in textbooks
I've always wondered this, and still haven't found the answer. Whenever we use `cout` or `printf`, how exactly is that printed on the screen? How does the text come out as it does? (Probably quite a vague question here; I'll work with whatever you give me.) So basically, how are those functions made? Is it assembly? If so, where does that begin? This brings on more questions, like how on earth have they made OpenGL/DirectX functions.
Break it down, people, break it down. :)
Here's one scenario, with abbreviations:
- `printf` or `cout` puts characters into a buffer in the user program's address space.
- Eventually the buffer fills, or perhaps `printf` asks for the buffer to be emptied early. Either way, the I/O library calls the operating system, which copies the contents of the buffer to its own space.
- Supposing that the output file is bound to a terminal, the operating system delivers the characters to the terminal application.
- The terminal app decides that for each character in the buffer, it needs to paint pixels on the screen.
- The terminal app sets up pixel-painting instructions, and it asks a window manager to do this on its behalf. (On Unix these days this is usually an X server.)
- The window manager takes the pixels. If the window is actually visible on the screen, the window manager updates a buffer (called the frame buffer) which holds the visible pixels. The window manager may then notify the operating system, or more likely, the window manager is in cahoots with the operating system and they share the same memory.
- The next time the screen is refreshed, the hardware sees the new bits in the frame buffer, and it paints the screen differently.
- Voilà! You have characters on the screen.
It is amazing that the bear dances at all.
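To see the very first step for yourself, here is a minimal sketch (standard C, with POSIX `sleep` assumed) showing that `printf` really does park characters in a user-space buffer until the library decides, or is told, to hand them to the operating system:

```c
#include <stdio.h>
#include <unistd.h>  /* sleep() -- POSIX assumed */

int main(void) {
    /* When stdout is a terminal it is typically line-buffered, so these
       characters sit in the user-space buffer: no system call yet. */
    printf("hello");
    sleep(2);           /* run it: "hello" usually does not appear yet */
    fflush(stdout);     /* force the buffer out via a write() system call */
    printf(" world\n"); /* the newline triggers another flush */
    return 0;
}
```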
> So basically how are those functions made? Is it assembly? If so, where does that begin? This brings on more questions, like how on earth have they made OpenGL/DirectX functions.
Those functions can be assembly or C; it doesn't change much (and, anyway, you can do in C virtually anything you can do in assembly). The magic ultimately happens at the interface of software and hardware -- how you get there from `printf` and `cout <<` can be as trivial as a few pointer operations (see the 286 example below, or read about `cprintf` further down), or as complex as going through multiple layers of diverse system calls, possibly even going over networks, before eventually hitting your display hardware.
Imagine the following scenarios:
I dig up my old 286 from under the dust and fire up MS-DOS; I compile and run the following program in real mode:
```c
void main(void)
{
    /* Poor man's gotoxy+cprintf imitation -- display "C:" (0x43, 0x3a)
       in silver-on-black letters in the top-left corner of the screen */
    far long* pTextBuf = (far long*)0xb8000L;
    *pTextBuf = 0x073a0743L;
}
```
I connect with my laptop's Windows HyperTerminal to my serial port, which is hooked up with a cable to the back of a SUN box, through which I can access my SUN box's console. From that console I ssh into another box on the network, where I run my program, which does `printf`, piping its output through `more`. The `printf` information has traveled through a pipe through `more`, then through an SSH pseudo-tty over the network to my SUN box, from there through the serial cable onto my laptop, and through Windows' GDI text-drawing functions before finally appearing on my screen.
Adding more detail to Norman's answer, hopefully more in the direction of your original question:
- `printf` and `cout <<` usually perform writes to `stdout` -- typically buffered writes, but that has not always been the case
  - back in the day, various compiler vendors (Borland, Microsoft), especially on DOS, provided you with functions like `cprintf`, which wrote directly to video memory without making any system calls, `memcpy`-style (see my 286 example above) -- more on that further down
- writing to `stdout` is a system call, be it `write` under *nix, `WriteFile` or `WriteConsole` under Windows, INT 21, 9 under DOS, etc. (see the sketch after this list)
- the advantage of going through the `stdout` abstraction is that it allows the operating system to do some internal plumbing and perform redirection (be it to a tty descriptor, to a pipe, to a file, to a serial port, to another machine via a socket, etc.)
  - it also indirectly makes it possible to have multiple applications' `stdout`s coexist on the same screen, e.g. in different windows -- something that would be much harder to do if each application tried to write directly to video memory on its own (like `cprintf` did on DOS -- not what would be called today a true or usable multi-tasking operating system)
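To make that system-call layer concrete, here is a minimal sketch (POSIX assumed) that bypasses `stdio`'s buffering entirely and calls `write` on file descriptor 1 directly -- essentially what the library does on your behalf once its buffer is flushed:

```c
#include <unistd.h>  /* write() -- POSIX assumed */

int main(void) {
    const char msg[] = "hello, world\n";
    /* fd 1 is stdout; sizeof msg - 1 skips the trailing NUL */
    write(1, msg, sizeof msg - 1);
    return 0;
}
```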
- nowadays, a graphical application such as your `rxvt` console window application, PuTTY telnet/ssh client, Windows console, etc. will:
  - read your application's `stdout`:
    - from a tty descriptor (or equivalent) in the case of `rxvt` or of the Windows console
    - from a serial port if you are using something like Realterm to connect to an embedded system or to an older SUN box console
    - from a socket if you are using PuTTY as a telnet client
  - display the information by rendering it graphically, pixel by pixel, into the graphical application's window buffer/device context/etc.
    - this is typically done through yet another layer of abstraction and system calls (such as GDI, OpenGL, etc.)
    - the pixel information ultimately ends up in a linear frame buffer, that is, a dedicated memory range (back in the days of 8MHz CPUs, well before AGP, this area could reside in system RAM; nowadays it can be megabytes and megabytes of dual-port RAM on the video card itself) -- see the frame-buffer sketch after this list
    - the video card (what used to be called a RAMDAC) would periodically read the frame buffer memory range (e.g. 60 times a second when your VGA adapter was set for 60Hz), scanline after scanline (possibly doing palette lookups too), and transmit it to the display as either analogue or digital electrical signals
- back in the day, or even today when you boot your *nix box in single-user mode or go full-screen in a Windows console, your graphics adapter is actually in text mode
  - instead of a linear frame buffer, one (be it the `cprintf` implementation or the OS) writes to the much smaller 80x25 or 80x50 etc. text buffer array, where (e.g. in the case of VGA) only two bytes are necessary to encode each character: its value, such as `A` or `▒` or `♣` (1 byte), as well as its color attributes (1 byte) -- that is, its foreground (4 bits, or 3 bits + brightness bit) and background (4 bits, or 3 bits + blink bit) colors -- see the text-cell sketch after this list
  - for each pixel on each scanline, the RAMDAC:
    - would keep track of which text column and which text row that pixel belongs to
    - would look up that column/row position's character value and attributes
    - would look up the character value against a simple bitmap font definition
    - would see whether the pixel being rendered, in the character value's glyph bitmap definition, should be set to foreground or background, and what color that would be based on the character attribute at that position
    - would possibly flip the foreground and background on even seconds if the blink bit was set, or if the cursor is showing and is at the current position
    - would draw the pixel
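To make the frame-buffer bullet concrete, here is a hedged sketch that pokes pixels straight into a Linux frame buffer through `/dev/fb0`. The resolution and pixel format are assumptions made for brevity; real code would query them with the `FBIOGET_VSCREENINFO` ioctl:

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/fb0", O_RDWR);  /* the kernel's frame buffer */
    if (fd < 0) return 1;
    /* Assumed for brevity: 1024x768 at 32 bits per pixel. */
    const size_t width = 1024, height = 768;
    uint32_t *fb = mmap(NULL, width * height * 4,
                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) return 1;
    /* Paint a 100x100 white square in the top-left corner. */
    for (size_t y = 0; y < 100; y++)
        for (size_t x = 0; x < 100; x++)
            fb[y * width + x] = 0x00FFFFFFu;
    munmap(fb, width * height * 4);
    close(fd);
    return 0;
}
```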
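And for the text-mode bullet, a small sketch of the two-byte VGA cell encoding described above; it reproduces the `0x0743` cell ('C' in silver-on-black) from the 286 example:

```c
#include <stdint.h>
#include <stdio.h>

/* VGA text attribute byte: (blink << 7) | (background << 4) | foreground */
static uint8_t vga_attr(uint8_t fg, uint8_t bg, int blink) {
    return (uint8_t)((blink ? 0x80 : 0x00) | ((bg & 0x07) << 4) | (fg & 0x0F));
}

int main(void) {
    /* 0x07 = light grey ("silver") on black; the low byte of the cell is
       the character, the high byte is the attribute. */
    uint16_t cell = (uint16_t)((vga_attr(0x07, 0x00, 0) << 8) | 'C');
    printf("cell = 0x%04X\n", cell);  /* prints: cell = 0x0743 */
    return 0;
}
```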
Start at the History of Video Cards and GPU pages on Wikipedia for a more in-depth look at how we got where we are today.
Also look at How GPUs Work and How Graphic Cards Work.
Well, they go through a bunch of library functions, and eventually end up calling a write() system call, that sends the data to the appropriate file descriptor, which then causes it to turn up in a read() call in the terminal emulator (or command window shell, if this is Windows). The terminal/shell causes that data to be painted on the screen, probably by way of a bunch more system calls to send it to the graphics system.
Windows and Unix/Linux terminology is quite different, especially the concept of a shell is not at all the same thing in each. But the use of read() and write() calls is pretty similar in both cases.
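A hedged sketch of that plumbing (POSIX assumed): the parent below plays the part of the terminal emulator, reading from the master side of a pseudo-tty whatever a child process writes to its stdout:

```c
#define _XOPEN_SOURCE 600  /* posix_openpt() and friends */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int master = posix_openpt(O_RDWR);  /* allocate a pseudo-tty pair */
    grantpt(master);
    unlockpt(master);

    if (fork() == 0) {                  /* child: the "application" */
        int slave = open(ptsname(master), O_RDWR);
        dup2(slave, STDOUT_FILENO);     /* its stdout is now the pty */
        printf("hello from the child\n");
        fflush(stdout);
        _exit(0);
    }

    char buf[256];                      /* parent: the "terminal" */
    ssize_t n = read(master, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        /* A real terminal emulator would now paint these bytes as pixels. */
        printf("terminal received: %s", buf);
    }
    return 0;
}
```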
System calls are special functions that cause the kernel to do specific things; how they're implemented is pretty magical and very dependent on what sort of processor you have, but usually it's by deliberately triggering a trap or software interrupt (a kind of recoverable processor exception) that hands control to the kernel to tidy up and do the work.
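For the curious, here is roughly what that boundary crossing looks like on one platform -- a sketch assuming x86-64 Linux, where the `syscall` instruction traps into the kernel with the system-call number in `rax`:

```c
#include <stddef.h>

/* Invoke write(2) directly with the syscall instruction, no libc. */
static long raw_write(int fd, const void *buf, size_t len) {
    long ret;
    __asm__ volatile (
        "syscall"                   /* trap into the kernel */
        : "=a"(ret)                 /* return value comes back in rax */
        : "a"(1L),                  /* rax = 1, the write syscall number */
          "D"((long)fd),            /* rdi = file descriptor */
          "S"(buf),                 /* rsi = buffer */
          "d"(len)                  /* rdx = length */
        : "rcx", "r11", "memory");  /* clobbered by the syscall ABI */
    return ret;
}

int main(void) {
    raw_write(1, "hello via raw syscall\n", 22);
    return 0;
}
```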
Crack open the source to glibc and see for yourself.
Short answer: a lot of C code, sprinkled occasionally with some assembler.
The magic really happens in the device driver. The OS presents an interface for application programmers to hook into. This gets massaged somewhat (e.g. buffered) and then sent to the device. The device then takes the common representation and transforms it into signals the particular device can understand. So ASCII gets displayed in some reasonable format on the console, or in a PDF file, or on a printer, or on disk, in the form appropriate for that device. Try something other than ASCII (or UTF-8) that the driver does not understand and you will see what I am talking about.
For things the OS cannot handle (special graphics cards for example) the app writes the data directly to device memory. This is how something like DirectX works (to drastically oversimplify).
Each device driver is different, but they all interface with the OS in the same way, at least within each class of device (disk, NIC, keyboard, etc.).
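To give a feel for that interface, here is a hedged sketch of a minimal Linux misc character device: the kernel routes an application's `write(2)` on `/dev/mydev` into the handler below, which then "massages" the bytes toward the hardware. `mydev_send_byte` is a hypothetical placeholder for real device pokes, and this builds as a kernel module against kernel headers, not as a normal program:

```c
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static void mydev_send_byte(char c) { /* hypothetical hardware poke */ }

static ssize_t mydev_write(struct file *f, const char __user *buf,
                           size_t len, loff_t *off)
{
    char kbuf[64];
    size_t i, n = len < sizeof kbuf ? len : sizeof kbuf;

    if (copy_from_user(kbuf, buf, n))  /* pull bytes from user space */
        return -EFAULT;
    for (i = 0; i < n; i++)
        mydev_send_byte(kbuf[i]);      /* hand each byte to the device */
    return n;                          /* tell write(2) how much we took */
}

static const struct file_operations mydev_fops = {
    .owner = THIS_MODULE,
    .write = mydev_write,
};

static struct miscdevice mydev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "mydev",                  /* shows up as /dev/mydev */
    .fops  = &mydev_fops,
};

module_misc_device(mydev);             /* register on load, remove on unload */
MODULE_LICENSE("GPL");
```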