Rendering CLI and GUI Simultaneously – How to Achieve It


I'm confused when looking into graphics – specifically how operating systems handle them.

I mean, how can a computer render a CLI/console along with a GUI?

GUIs seem completely different from text. So how can we have GUI windows that display text interfaces, i.e. how can we have a CLI inside a modern graphical operating system? That's what I'm mainly trying to get a grip on.

How do graphics get rendered to the display? Is there some sort of memory region that the GPU accesses which holds all the pixel data, and are there systems within the OS that gather the pixel positions of windows and widgets, along with their Z-order, and rasterize them into that memory, which the GPU then scans out to the screen?

And what about CLIs integrated with graphics? How does the OS tell the GPU that a certain part of the screen should display text while the rest displays pixel data?

Best Answer

It seems like your main question is how GUIs and CLIs can exist together at the same time, such as the Windows command console (cmd.exe).

It's actually not as complicated as you might think. First, remember that even a GUI needs to be able to render text, so you can read filenames, labels on controls, or work with text inside of programs. It does this by having font files that contain descriptions of how to turn various characters into glyphs (shapes) on-screen or in a printout.

Once you understand that basic principle, the answer is obvious: you can make a console program on top of a GUI by having an emulation layer that renders the text the way a CLI would and a font file that looks like console text. (Note: I'm not actually claiming that that's the way Windows does it. I don't know how it works under the hood. But it could easily be done that way.)
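A console window on top of a GUI can then be little more than a grid of character cells that the emulation layer keeps up to date and redraws with that kind of glyph rendering. Here is a rough sketch of the emulation side; the class and method names are invented, and this is not a claim about how cmd.exe actually works.

    # A bare-bones "terminal emulator": a fixed grid of character cells plus
    # a cursor. A GUI terminal window keeps a structure like this and, whenever
    # it changes, rasterizes each cell with the font engine into window pixels.
    class TerminalEmulator:
        def __init__(self, cols=40, rows=10):
            self.cols, self.rows = cols, rows
            self.cells = [[" "] * cols for _ in range(rows)]
            self.cur_x = self.cur_y = 0

        def write(self, text):
            for ch in text:
                if ch == "\n":
                    self._newline()
                else:
                    self.cells[self.cur_y][self.cur_x] = ch
                    self.cur_x += 1
                    if self.cur_x == self.cols:     # wrap at the right edge
                        self._newline()

        def _newline(self):
            self.cur_x = 0
            self.cur_y += 1
            if self.cur_y == self.rows:             # scroll: drop the top line
                self.cells.pop(0)
                self.cells.append([" "] * self.cols)
                self.cur_y = self.rows - 1

        def dump(self):
            """Show the character grid; a GUI would draw glyphs instead."""
            return "\n".join("".join(row) for row in self.cells)

    term = TerminalEmulator()
    term.write("C:\\> dir\nreadme.txt  notes.md\nC:\\> _")
    print(term.dump())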

As for the second part of your question, as to how graphics get rendered, yes, there's a special memory location somewhere that holds pixel data. That's called a frame buffer. Back in the old days, you could get direct access to the frame buffer and play around with its pixels directly. Nowadays, that's highly discouraged if not impossible, for various reasons. Instead, you go through higher-level graphic-drawing APIs, where you describe to the system what you want to do and it takes care of the details.
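To illustrate the framebuffer idea, here is a small sketch that treats a flat byte array as a framebuffer and computes where a pixel lives from its coordinates. The 32-bit BGRA layout and the stride value are assumptions for the example, not a description of any particular card.

    # Treat a flat bytearray as a framebuffer: every pixel occupies a fixed
    # number of bytes, and rows follow one another "stride" bytes apart.
    # The offset arithmetic is the whole trick behind plotting a pixel.
    WIDTH, HEIGHT = 640, 480
    BYTES_PER_PIXEL = 4                   # assuming 32-bit BGRA; common, not universal
    STRIDE = WIDTH * BYTES_PER_PIXEL      # bytes from one row to the next

    framebuffer = bytearray(STRIDE * HEIGHT)

    def put_pixel(x, y, r, g, b):
        """Write one pixel's bytes at its computed offset in the framebuffer."""
        offset = y * STRIDE + x * BYTES_PER_PIXEL
        framebuffer[offset:offset + 4] = bytes((b, g, r, 255))   # B, G, R, alpha

    put_pixel(100, 50, 255, 0, 0)         # a red dot at (100, 50)
    offset = 50 * STRIDE + 100 * BYTES_PER_PIXEL
    print(framebuffer[offset:offset + 4]) # b'\x00\x00\xff\xff'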

This has several advantages. First, it means that you don't need to know where the framebuffer is in memory or how exactly its data is formatted. Second, and closely related to the first point, different video card manufacturers can set up their framebuffers in different ways, and you don't need to write specific graphics code for each one; you let the graphics APIs take care of that. And third, closely related to the second point, it means that things can change in the future without breaking existing code.
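Here is a sketch of that second point: the application describes what it wants ("a red pixel at x, y") and a lower layer maps that onto whatever pixel layout the hardware uses. The two formats and the Surface class are purely illustrative.

    # Callers never touch raw bytes; only the Surface knows its pixel layout.
    PIXEL_FORMATS = {
        "BGRA8888": lambda r, g, b: bytes((b, g, r, 255)),
        "RGB565":   lambda r, g, b: ((r >> 3) << 11 | (g >> 2) << 5 | (b >> 3)
                                     ).to_bytes(2, "little"),
    }

    class Surface:
        """A stand-in for a driver-managed framebuffer with its own format."""
        def __init__(self, width, height, fmt):
            self.pack = PIXEL_FORMATS[fmt]
            self.bpp = len(self.pack(0, 0, 0))
            self.stride = width * self.bpp
            self.data = bytearray(self.stride * height)

        def set_pixel(self, x, y, r, g, b):
            off = y * self.stride + x * self.bpp
            self.data[off:off + self.bpp] = self.pack(r, g, b)

    # The application code is identical for both "cards"; only the Surface
    # differs in how its pixels are laid out.
    for fmt in ("BGRA8888", "RGB565"):
        s = Surface(320, 200, fmt)
        s.set_pixel(10, 10, 255, 0, 0)
        print(fmt, s.data[10 * s.stride + 10 * s.bpp:][:s.bpp].hex())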

For example, there was a major change in the way Windows handles forms ("windows") when Vista was introduced, one that you probably never noticed unless you do graphics programming. It used to be that every form got drawn directly to the screen. This meant, among other things, that if you dragged a second form over a visible form and then away again, the form on the bottom had to repaint itself, because its image data was lost.
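Reduced to its essentials, that older model looks something like this: one shared screen buffer, windows painting directly into it, and a repaint request whenever a region gets uncovered. The names and region handling are invented for illustration, not taken from the Windows API.

    # The pre-compositing model in miniature: one shared screen, and windows
    # paint directly into it. When window B uncovers part of window A, that
    # region's pixels are gone, so A gets asked to repaint it.
    class Window:
        def __init__(self, name):
            self.name = name
            self.repaint_count = 0

        def paint(self, screen, region):
            """Redraw only the requested (dirty) region of this window."""
            self.repaint_count += 1
            x0, y0, x1, y1 = region
            for yy in range(y0, y1):
                for xx in range(x0, x1):
                    screen[yy][xx] = self.name

    screen = [["."] * 30 for _ in range(10)]
    a, b = Window("A"), Window("B")

    a.paint(screen, (0, 0, 20, 8))        # A paints itself
    b.paint(screen, (5, 2, 15, 6))        # B is dragged over A, clobbering pixels
    # B moves away: the OS computes the exposed rectangle and asks A to repaint.
    a.paint(screen, (5, 2, 15, 6))
    print("A repainted", a.repaint_count, "times")   # 2: initial + expose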

But Vista introduced a compositing window manager, which basically means that each form gets its own virtual framebuffer somewhere in memory, and it just draws everything to the screen based on position and Z-order. Now you can drag one form in front of another all you want and it won't invalidate the other form's pixel data; forms only need to be repainted when their own content changes. And they were able to implement that and make it work just fine on existing programs because the existing programs all use predefined drawing APIs.
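And here is a matching sketch of the compositing approach: every window keeps its own off-screen buffer, and the compositor rebuilds the screen from those buffers in Z-order, so dragging one window over another never destroys the other's pixels. Again, the names are invented; this is the general technique, not the actual DWM code.

    # The compositing model in miniature: each window owns its own buffer, and
    # the compositor assembles the final screen from those buffers back to
    # front. Moving a window only changes its position; nobody has to repaint.
    class Window:
        def __init__(self, name, x, y, w, h):
            self.name, self.x, self.y = name, x, y
            self.buffer = [[name] * w for _ in range(h)]   # private pixel store

    def composite(windows, width=30, height=10):
        """Blit every window's buffer to the screen in Z-order (back to front)."""
        screen = [["."] * width for _ in range(height)]
        for win in windows:                                # list order = Z-order
            for row, line in enumerate(win.buffer):
                for col, px in enumerate(line):
                    sy, sx = win.y + row, win.x + col
                    if 0 <= sy < height and 0 <= sx < width:
                        screen[sy][sx] = px
        return screen

    a = Window("A", 0, 0, 20, 8)
    b = Window("B", 5, 2, 10, 4)
    composite([a, b])        # first frame: B drawn on top of A
    b.x += 8                 # drag B: just a new position, A's buffer untouched
    frame = composite([a, b])
    print("\n".join("".join(r) for r in frame))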
