Why have hardware-accelerated vector graphics not taken off?

api, graphics, opengl, standards

I'm working on an app that involves real-time manipulation of vector paths at 60fps, and I'm very surprised by how little information there is on the subject. At first, I tried to implement my idea using CoreGraphics, but it didn't perform adequately for my purposes. I then discovered that there was a Khronos standard for hardware-accelerated vector graphics called OpenVG, and thankfully a kind soul had written an OpenGL ES semi-implementation called MonkVG.

But despite the fact that OpenVG is a very practically useful API, it seems more or less abandoned by Khronos. According to Wikipedia, since 2011, the working group "decided to… not make any regular meeting [sic] for further standardization". The documentation, as best I can find, consists of just a single reference card. And what's more, there are barely any examples of OpenVG anywhere on the internet. I can find hundreds of OpenGL tutorials in the blink of an eye, but OpenVG seems conspicuously missing.

You'd think that hardware-accelerated vectors would be more important in today's world of rapidly increasing resolutions, and it does seem that many companies are implementing their own ways of doing this. For example, Qt and Flash have schemes for hardware-accelerated vectors, and many of Adobe's tools have hardware acceleration as an option. But it seems like the wheel is getting reinvented when a standard already exists!

Is there something I'm missing about OpenVG that makes it unsuitable for real-world use? Or is it just that the standard didn't catch on in time and now it's destined for obscurity? Do you think there's room for a standardized API for hardware-accelerated vector graphics in the future, or will it just be easier to use traditional raster-based techniques? Or are vectors simply on their way out, before they were ever in?

Best Answer

Update: see the bottom of this reply.

This answer comes a bit too late, but I hope it sheds light for others (particularly now that the C++ standards committee wants to incorporate Cairo into std):

The reason nobody really cares about "accelerated vector graphics" is because of how GPUs work. GPUs rely on massive parallelization and SIMD capabilities to colour each pixel. AMD typically works in blocks of 8x8 pixels, while NVIDIA cards typically work in 4x4 pixel blocks (this answer originally said 64x64 and 32x32; see the update at the bottom).

Even when rendering a 3D triangle, the GPU works on whole blocks that the triangle touches. So if a triangle doesn't cover all 8x8 pixels in a block (or 4x4 in the case of NVIDIA), the GPU will compute the colour of the uncovered pixels and then discard the results. In other words, the processing power spent on uncovered pixels is wasted. While this seems wasteful, it works incredibly well for rendering large 3D triangles when paired with a massive number of GPU cores (more detailed info here: Optimizing the basic rasterizer).
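To make the waste concrete, here is a toy CPU model of that block-based shading: it counts, for a small arbitrary triangle, how many pixels in each touched 8x8 block are actually inside the triangle and how many get shaded only to be thrown away. The triangle coordinates, framebuffer size and block size are illustrative assumptions, not how any particular GPU is laid out:

```cpp
// Toy model: a GPU that shades whole 8x8 blocks touched by a triangle.
#include <cstdio>

struct P { double x, y; };

// Twice the signed area of triangle (a, b, c); its sign tells which side of ab c lies on.
static double edge(P a, P b, P c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

static bool inside(P p, P a, P b, P c) {
    double w0 = edge(a, b, p), w1 = edge(b, c, p), w2 = edge(c, a, p);
    return (w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0);
}

int main() {
    const int kBlock = 8;                              // assumed shading-block size
    P a{10.3, 12.7}, b{90.1, 30.4}, c{25.6, 95.2};     // an arbitrary small triangle

    long covered = 0, shadedButDiscarded = 0;
    for (int by = 0; by < 128; by += kBlock) {
        for (int bx = 0; bx < 128; bx += kBlock) {
            int blockCovered = 0;
            for (int y = 0; y < kBlock; ++y)
                for (int x = 0; x < kBlock; ++x)
                    if (inside({bx + x + 0.5, by + y + 0.5}, a, b, c))
                        ++blockCovered;
            if (blockCovered > 0) {                    // block is touched: the whole block gets shaded
                covered += blockCovered;
                shadedButDiscarded += kBlock * kBlock - blockCovered;
            }
        }
    }
    std::printf("covered %ld, shaded-but-discarded %ld (%.0f%% waste)\n",
                covered, shadedButDiscarded,
                100.0 * shadedButDiscarded / (covered + shadedButDiscarded));
}
```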

So, when we look back at vector-based rasterization, you'll notice that when drawing lines, even thick ones, there is a massive amount of blank space. A lot of processing power is wasted, and more importantly bandwidth (which is a major cause of power consumption, and often a bottleneck). So, unless you're drawing a horizontal or vertical line whose thickness is a multiple of 8 and which aligns perfectly to 8-pixel boundaries, a lot of processing power and bandwidth will be wasted.

The amount of "waste" can be reduced by calculating the hull to render (as NV_path_rendering does), but the GPU is still constrained to 8x8/4x4 blocks (this is also probably why NVIDIA's GPU benchmarks scale better at higher resolutions: the pixels_wasted / pixels_covered ratio is much lower there).
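The same kind of toy model can illustrate the resolution point: rasterize the same 2-pixel-wide diagonal stroke in assumed 8x8 blocks at 1x, 2x and 4x scale, and the waste percentage drops as resolution grows, because covered pixels grow faster than the pixels discarded along the stroke's edges. Again, all coordinates and sizes are made-up illustration values:

```cpp
// Toy model: waste percentage for one diagonal stroke at several resolutions.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct V { double x, y; };

// Distance from pixel centre p to the segment a-b.
static double distToSegment(V p, V a, V b) {
    double vx = b.x - a.x, vy = b.y - a.y;
    double wx = p.x - a.x, wy = p.y - a.y;
    double t = std::clamp((wx * vx + wy * vy) / (vx * vx + vy * vy), 0.0, 1.0);
    double dx = wx - t * vx, dy = wy - t * vy;
    return std::sqrt(dx * dx + dy * dy);
}

static void report(double scale) {
    const int kBlock = 8;                               // assumed shading-block size
    const int size = static_cast<int>(256 * scale);     // framebuffer grows with resolution
    const double halfWidth = 1.0 * scale;               // stroke width in pixels also grows
    V a{10 * scale, 20 * scale}, b{240 * scale, 230 * scale};

    long covered = 0, wasted = 0;
    for (int by = 0; by < size; by += kBlock)
        for (int bx = 0; bx < size; bx += kBlock) {
            int hit = 0;
            for (int y = 0; y < kBlock; ++y)
                for (int x = 0; x < kBlock; ++x)
                    if (distToSegment({bx + x + 0.5, by + y + 0.5}, a, b) <= halfWidth)
                        ++hit;
            if (hit > 0) { covered += hit; wasted += kBlock * kBlock - hit; }
        }
    std::printf("%gx: covered %ld, wasted %ld, waste %.0f%%\n",
                scale, covered, wasted, 100.0 * wasted / (covered + wasted));
}

int main() {
    double scales[] = {1.0, 2.0, 4.0};
    for (double s : scales) report(s);
}
```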

This is why many people don't even care about "vector hw acceleration". GPUs simply aren't well suited for the task.

NV_path_rendering is the exception rather than the norm, and it introduced the novel trick of using the stencil buffer, which supports compression and can significantly reduce bandwidth usage.
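For the curious, here is a minimal CPU analogue of that "stencil, then cover" idea (GPU details, edge rules and antialiasing are all glossed over, and the self-intersecting path is just an illustrative assumption): pass 1 accumulates winding numbers by rasterizing fan triangles from an anchor point with +1/-1, and pass 2 "covers" every pixel whose counter is non-zero:

```cpp
// CPU sketch of stencil-based path filling: fan triangles accumulate winding
// numbers per pixel, then a cover pass fills wherever the count is non-zero.
#include <cstdio>
#include <vector>

struct P { double x, y; };

static double signedArea2(P a, P b, P c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    const int W = 40, H = 20;
    // A self-intersecting "bowtie" path, already flattened to straight edges.
    std::vector<P> path = {{4, 3}, {36, 16}, {36, 3}, {4, 16}};

    std::vector<int> stencil(W * H, 0);

    // Pass 1: accumulate winding numbers via fan triangles from an anchor.
    P anchor = path[0];
    for (std::size_t i = 0; i < path.size(); ++i) {
        P a = anchor, b = path[i], c = path[(i + 1) % path.size()];
        double area2 = signedArea2(a, b, c);
        if (area2 == 0) continue;                       // degenerate fan triangle
        int sign = area2 > 0 ? +1 : -1;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                P p{x + 0.5, y + 0.5};
                double w0 = signedArea2(a, b, p) * sign;
                double w1 = signedArea2(b, c, p) * sign;
                double w2 = signedArea2(c, a, p) * sign;
                if (w0 >= 0 && w1 >= 0 && w2 >= 0)      // pixel centre inside the fan triangle
                    stencil[y * W + x] += sign;
            }
    }

    // Pass 2: "cover" pass, filling where the winding number is non-zero.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::putchar(stencil[y * W + x] != 0 ? '#' : '.');
        std::putchar('\n');
    }
}
```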

Nonetheless, I remain skeptical of NV_path_rendering, and a bit of googling shows that Qt, when using OpenGL (i.e. the recommended way), is significantly faster than NVIDIA's NV_path_rendering (see: NV Path rendering). In other words, NVIDIA's slides were "accidentally" comparing against the XRender version of Qt. Oops.

Instead of arguing that "all vector drawing with hw acceleration is faster", the Qt developers are more honest and admit that HW-accelerated vector drawing is not always better (see their rendering explained in Qt Graphics and Performance – OpenGL).

And we haven't even touched on "live editing" of vector graphics, which requires triangle-strip generation on the fly. When editing complex SVGs, this could actually add serious overhead.
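As a rough sketch of that per-edit cost, assume a fixed flattening step and a naive triangle fan (real engines adapt the step count and handle concave or self-intersecting paths properly): every time a control point moves, the curve has to be re-flattened into line segments and re-triangulated before the GPU can draw anything:

```cpp
// Sketch of per-edit tessellation: re-flatten a cubic Bezier and rebuild
// triangles, as would happen every frame while a control point is dragged.
#include <cstdio>
#include <vector>

struct P { double x, y; };

// Evaluate a cubic Bezier at parameter t.
static P cubic(P p0, P p1, P p2, P p3, double t) {
    double u = 1.0 - t;
    double b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
    return {b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
            b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y};
}

int main() {
    // Illustrative control points; imagine p1 being dragged every frame.
    P p0{0, 0}, p1{30, 80}, p2{70, 80}, p3{100, 0};

    const int kSegments = 16;                 // fixed flattening step (an assumption)
    std::vector<P> polyline;
    for (int i = 0; i <= kSegments; ++i)
        polyline.push_back(cubic(p0, p1, p2, p3, double(i) / kSegments));

    // Naive fan triangulation of the region enclosed by the curve and its chord,
    // using the first vertex as the fan centre (only valid for convex shapes).
    std::vector<P> triangles;                 // 3 vertices per triangle
    for (std::size_t i = 1; i + 1 < polyline.size(); ++i) {
        triangles.push_back(polyline[0]);
        triangles.push_back(polyline[i]);
        triangles.push_back(polyline[i + 1]);
    }

    std::printf("flattened into %zu segments, %zu triangles (%zu vertices) per edit\n",
                polyline.size() - 1, triangles.size() / 3, triangles.size());
}
```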

Whether it is better or not depends heavily on the application. As for your original question, "why it hasn't taken off", I hope it is now answered: there are many disadvantages and constraints to take into account, which often make people skeptical and may even bias them against implementing one.

Update: it has been pointed out to me that the numbers were completely off base; the mentioned GPUs don't rasterize in 64x64 & 32x32 blocks but rather in 8x8 (= 64) and 4x4 (= 16) pixel blocks. This pretty much nullifies the conclusions of the post. I will update it later with more up-to-date information.