Font Rendering – How Font Rendering Works in Browsers

Tags: browser, fonts, rendering

I realize that I know essentially nothing about the way fonts get rendered in my computer.

From what I can observe, font rendering is generally done in a consistent way throughout the system. For instance, the subpixel font hinting settings that I configure in my DE control panel affect text that appears on window borders, in my browser, in my text editor and so on. (I should note that some Java applications show a noticeable difference, so I guess they are using a different font rendering mechanism.)

What I get from the above is that probably all applications that need font rendering make use of some OS (or DE)-wide library.

On the other hand, browsers usually manage their own rendering through a rendering engine, that takes care of positioning various items – including text – according to specific flow rules.

I am not sure how these two facts fit together. I would assume that the browser has to ask the OS to draw a glyph at a given position, but how can it manage the flow of text without knowing beforehand how much space the glyph will take? Are there separate calls to determine the glyph sizes, so that the browser can manage the flow as if characters were little boxes that are later filled in by the OS? (Although that would not account for kerning.) Or is the OS responsible for drawing a whole text area, including text flow? Or does the OS return the rendered glyph as a bitmap and leave it to the application to draw that on the screen?

Best Answer

You are correct that, in general, applications use libraries provided by the OS or a GUI toolkit to do font rendering.

Typical font engines allow a number of modes of operation. For the simple case, an application can ask for a string of text to be drawn at a certain position, and the engine takes care of everything (measurement, positioning, drawing the pixels to the display, etc).
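To make the "engine does everything" mode concrete, here is a toy sketch in Python. The `SimpleFontEngine` class, its character-grid "display", and the one-cell-per-glyph layout are all invented for illustration; no real font engine works this way internally. The point is only the shape of the API: the app supplies a string and a position, and never sees any metrics.

```python
# Toy sketch of the simple mode: the app hands over a string and a
# position, and the engine handles measurement, positioning and
# "rasterization" internally. The character-grid display and the
# SimpleFontEngine class are invented for illustration.

class SimpleFontEngine:
    def __init__(self, width, height):
        # The "display" is just a grid of character cells.
        self.display = [[" "] * width for _ in range(height)]

    def draw_string(self, text, x, y):
        # Measurement and positioning happen inside the engine;
        # the caller never sees glyph metrics. Each glyph is one
        # cell wide in this toy model.
        for i, ch in enumerate(text):
            if 0 <= x + i < len(self.display[0]) and 0 <= y < len(self.display):
                self.display[y][x + i] = ch

engine = SimpleFontEngine(width=20, height=3)
engine.draw_string("Hello", x=2, y=1)
print("".join(engine.display[1]))  # "  Hello" padded with spaces
```

In this mode the app's only job is to decide where the string goes; everything else, including clipping at the display edges, is the engine's problem.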

For applications which require a finer degree of control - browsers or word processors, for example - the engine will expose interfaces where the app can ask for a given piece of text to be measured in advance. The app can then use this knowledge to work out how much text it can fit on a line, where the line-breaks should be, how much room a paragraph will take, etc. The app can still ask the engine to do the actual rendering of the pixels.
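A toy model of this measure-then-lay-out workflow, again in Python. The advance widths and kerning pairs below are made up for illustration; a real engine (FreeType, DirectWrite, Core Text, etc.) reports them per glyph from the font file. Note that the measurement call accounts for kerning, which answers the "little boxes" concern from the question: the app measures whole runs of text, not isolated characters.

```python
# Invented per-glyph advance widths and kerning pairs, standing in for
# the metrics a real font engine would report from the font file.
ADVANCE = {"A": 7, "V": 7, "o": 6, "t": 4, " ": 3}
ADVANCE_DEFAULT = 6
KERNING = {("A", "V"): -1, ("V", "A"): -1, ("t", "o"): -1}

def measure(text):
    """Total advance width of `text`, including kerning adjustments."""
    width = 0
    for i, ch in enumerate(text):
        width += ADVANCE.get(ch, ADVANCE_DEFAULT)
        if i > 0:
            width += KERNING.get((text[i - 1], ch), 0)
    return width

def break_lines(words, max_width):
    """Greedy line breaking: fit as many words per line as measure() allows."""
    lines, current = [], ""
    for word in words:
        candidate = word if not current else current + " " + word
        if measure(candidate) <= max_width or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(measure("AV"))                       # 13: kerning pulls A and V closer
print(break_lines(["AV", "to", "AV"], 20)) # one word per line at this width
```

Because measurement runs over the whole string, `measure("AV")` is narrower than `measure("A") + measure("V")`; the app can break lines correctly without ever treating glyphs as fixed-size boxes.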

(There may be an in-between scenario where the engine can take a maximum-width parameter, and possibly some kerning/padding parameters, and automatically render as much text as it can fit.)

Finally, the font engine might allow the app to take over the final rendering of the text, by returning bitmaps of glyphs pre-rendered at a certain size and letting the app position and composite them onto the final display. Or the engine might even offer to return the raw glyph outline data for rendering with some vector toolkit.
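The bitmap-returning mode can be sketched the same way. The tiny ASCII-art "bitmaps" and their advance widths below are invented; a real engine would return properly rasterized (often anti-aliased) glyph images. What matters is the division of labour: the engine rasterizes, the app tracks the pen position and composites.

```python
# Invented glyph "bitmaps" (rows of '#' and '.') plus an advance width,
# standing in for the rasterized glyphs a real engine would return.
GLYPHS = {
    "I": {"bitmap": ["#", "#", "#"], "advance": 2},
    "L": {"bitmap": ["#.", "#.", "##"], "advance": 3},
}

def composite(text, height=3):
    """App-side compositing: blit each glyph bitmap at the running pen x."""
    width = sum(GLYPHS[ch]["advance"] for ch in text)
    canvas = [["."] * width for _ in range(height)]
    pen_x = 0
    for ch in text:
        glyph = GLYPHS[ch]
        for row, bits in enumerate(glyph["bitmap"]):
            for col, bit in enumerate(bits):
                if bit == "#":
                    canvas[row][pen_x + col] = "#"
        pen_x += glyph["advance"]  # app advances the pen itself
    return ["".join(row) for row in canvas]

for line in composite("IL"):
    print(line)
```

This is roughly what a browser does when it caches rendered glyphs and paints them itself: the engine is consulted once per glyph, and the layout engine owns the pen.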
