C++ – What design pattern best suits managing handles to objects, without passing handles or Manager around

c++ design-patterns game-development opengl

I'm writing a game in C++ using OpenGL.

For those who don't know: with the OpenGL API you make a lot of calls to things like glGenBuffers and glCreateShader, etc. These return GLuint values, which are unique identifiers for what you just created. The thing being created lives in GPU memory.

Considering that GPU memory is limited, you don't want to create two identical things when they could be shared by multiple objects.

For example, shaders: you link a shader program and then you have a GLuint. When you're done with the shader, you should call glDeleteProgram (or something to that effect).

Now, let's say I have a shallow class hierarchy like:

class WorldEntity
{
public:
    /* ... */
protected:
    ShaderProgram* shader;
    /* ... */
};

class CarEntity : public WorldEntity 
{
    /* ... */
};

class PersonEntity : public WorldEntity
{
    /* ... */
};

Any code I've ever seen would require that all the constructors have a ShaderProgram* passed to them, to be stored in the WorldEntity. ShaderProgram is my class that encapsulates binding a GLuint as the current shader state in the OpenGL context, as well as a few other helpful things you need to do with shaders.

The problem I have with this is:

  • There are a lot of parameters needed to construct a WorldEntity (consider that there might be a mesh, a shader, a bunch of textures, etc., all of which could be shared, so they're passed as pointers)
  • Whatever is creating the WorldEntity needs to know what ShaderProgram it needs
  • This probably requires some sort of gulp EntityManager class that knows what instance of what ShaderProgram to pass to different entities.

So now, because there's a Manager, the classes need to either register themselves with the EntityManager along with the ShaderProgram instance they need, or I need a big-ass switch in the manager that I have to update for every new WorldEntity-derived type.

My first thought was to create a ShaderManager class (I know, Managers are bad) that I pass by reference or pointer to the WorldEntity classes so that they can create whatever ShaderProgram they want via the ShaderManager. The ShaderManager keeps track of already-existing ShaderPrograms, so it can return one that already exists or create a new one if needed.

(I could store the ShaderPrograms keyed by a hash of the filenames of the ShaderProgram's actual source code.)
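A minimal sketch of that caching idea, assuming shared ownership is acceptable; the class and member names here (ShaderManager::get, compileAndLink) are hypothetical, and GLuint comes from whatever loader header you use:

#include <memory>
#include <string>
#include <unordered_map>

class ShaderProgram; // the wrapper class described above

class ShaderManager {
public:
    // Return the existing program for this source pair, or create one.
    std::shared_ptr<ShaderProgram> get(const std::string& vertFile,
                                       const std::string& fragFile)
    {
        const std::string key = vertFile + "|" + fragFile;
        auto it = cache.find(key);
        if (it != cache.end())
            if (auto existing = it->second.lock())
                return existing; // still alive somewhere: share it

        auto created = compileAndLink(vertFile, fragFile); // elided helper
        cache[key] = created; // weak_ptr: the manager doesn't keep it alive
        return created;
    }

private:
    std::shared_ptr<ShaderProgram> compileAndLink(const std::string& vert,
                                                  const std::string& frag); // elided
    std::unordered_map<std::string, std::weak_ptr<ShaderProgram>> cache;
};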

So now:

  • I'm now passing pointers to ShaderManager instead of ShaderProgram, so there are still a lot of parameters
  • I don't need an EntityManager, the entities themselves will know what instance of ShaderProgram to create, and ShaderManager will handle the actual ShaderPrograms.
  • But now I don't know when ShaderManager can safely delete a ShaderProgram that it holds.

So now I've added reference counting to my ShaderProgram class, which deletes its internal GLuint via glDeleteProgram, and I do away with the ShaderManager.
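A minimal sketch of that ref-counted handle, with shared_ptr and a custom deleter standing in for a hand-rolled count (gl* declarations come from your loader header). Note it still assumes a valid GL context whenever the last copy dies, which is exactly the trap discussed below:

#include <memory>

class ShaderProgram {
public:
    explicit ShaderProgram(GLuint linkedProgram)
        : handle(new GLuint(linkedProgram), [](GLuint* p) {
              glDeleteProgram(*p); // runs when the last copy is destroyed
              delete p;
          })
    {
    }

    GLuint id() const { return *handle; }

private:
    std::shared_ptr<GLuint> handle; // copies share one reference count
};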

So now:

  • An object can create whatever ShaderProgram it needs
  • But now there's duplicate ShaderPrograms because there's no external Manager keeping track

Finally, I arrive at one of two decisions:

1. Static Class

A static class that's invoked to create ShaderPrograms. It keeps internal track of ShaderPrograms based on a hash of their filenames.

  • This means I no longer need to pass pointers or references to ShaderPrograms or ShaderManagers around, so fewer parameters
  • The WorldEntities have all the knowledge about the instance of ShaderProgram they want to create

This new static ShaderManager needs to:

  • keep a count of the number of times a ShaderProgram is used, and make ShaderProgram non-copyable, OR
  • have ShaderPrograms count their references and only call glDeleteProgram in their destructor when the count is 0, AND have ShaderManager periodically check for ShaderPrograms with a count of 1 and discard them.

The downsides to this approach I see are:

  1. I have a global static class, which might be a problem. The OpenGL context needs to be created prior to invoking any gl* functions. So potentially, a WorldEntity might be created and try to create a ShaderProgram prior to OpenGL context creation, which will result in a crash.

    The only way around this is back to passing everything around as pointers/references, or having a global GLContext class that can be queried, or holding everything in a class that creates the Context on construction. Or maybe just a global boolean IsContextCreated that can be checked. But I worry that this gives me ugly code everywhere.

    What I can see this devolving to is:

    • The big Engine class that has every other class hidden inside of it so that it can control construction/destruction order appropriately. This seems like a big mess of interface code between the user of the engine and the engine, like a wrapper over a wrapper
    • A whole slew of "Manager" classes that keep track of instances and delete things when necessary. This might be a necessary evil?

  2. When to actually clear ShaderPrograms out of the static ShaderManager? Every few minutes? Every game loop? I'm gracefully handling the re-compiling of a shader in the case where a ShaderProgram was deleted but then a new WorldEntity requests it; but I'm sure there's a better way.

2. A better method

That's what I'm asking for here

Best Answer

"2. A better method. That's what I'm asking for here."

Apologies for the necromancy, but I've seen so many people stumble over similar issues with managing OpenGL resources, including myself in the past. And so much of the difficulty I struggled with, and that I recognize in others, came from the temptation to wrap, and sometimes abstract and even encapsulate, the OGL resources needed to render some analogous game entity.

And the "better way" I found (at least one which ended my particular struggles there) was do things sort of the other way around. That is to say, don't concern yourself with the low-level aspects of OGL in designing your game entities and components and move away from ideas like that your Model has to store like a triangle and vertex primitives in the form of objects wrapping or even abstracting VBOs.

Rendering Concerns vs. Game Design Concerns

There are slightly higher-level concepts than GPU textures, for example, with simpler management requirements, like CPU images (and you need those anyway, at least temporarily, before you can even create and bind a GPU texture). Absent rendering concerns, a model might suffice just storing a property indicating the filename of the file containing the model's data. You can have a "material" component which is higher-level and more abstract, and which describes the properties of that material rather than being a GLSL shader.
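For instance, the entity-side data can be plain CPU state with no GL handles anywhere (these component names are hypothetical, just to show the level of abstraction):

#include <string>

// Rendering-agnostic components: nothing here needs a GL context to
// create, copy, or destroy.
struct ModelComponent {
    std::string meshFile; // which file holds the geometry
};

struct ImageComponent {
    std::string imageFile; // CPU-side image; the renderer may build a GPU texture from it later
};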

And then there is only one place in the codebase concerned with things like shaders and GPU textures and VAOs/VBOs and OpenGL contexts, and that's the implementation of the rendering system. The rendering system might loop through the entities in the game scene (in my case it goes through a spatial index, but it's easier to understand, and a fine starting point, to use a simple loop before implementing optimizations like frustum culling against a spatial index), and it discovers your high-level components like "materials" and "images" and model filenames.

And its job is to take that higher-level data, which isn't directly concerned with the GPU, and load/create/associate/bind/use/deassociate/destroy the necessary OpenGL resources based on what it discovers in the scene and what's happening to the scene. That eliminates the temptation to use things like singletons and static versions of "managers" and whatnot, because now all your OGL resource management is centralized in one system/object in your codebase (though of course you might decompose it into further objects encapsulated by the renderer to make the code more manageable). It also naturally avoids some tripping points, like trying to destroy resources outside of a valid OGL context, since all this stuff (including the destruction of resources) now occurs inside the rendering system, which is always in a valid GL context when invoked in the pipeline.
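A minimal sketch of that shape, with hypothetical Scene/Entity/Material types and the actual GLSL compilation elided:

#include <string>
#include <unordered_map>

class Renderer {
public:
    // Called once per frame, with the GL context guaranteed current.
    void renderScene(const Scene& scene)
    {
        for (const Entity& e : scene.entities()) {
            GLuint program = programFor(e.material()); // created lazily, cached
            glUseProgram(program);
            // ... look up/create VAOs and textures the same way, then draw ...
        }
    }

private:
    GLuint programFor(const Material& m)
    {
        auto it = programs.find(m.key()); // key() identifies the shader the material implies
        if (it != programs.end())
            return it->second;
        GLuint program = buildProgramFor(m); // compile + link GLSL; elided
        programs.emplace(m.key(), program);
        return program;
    }

    GLuint buildProgramFor(const Material& m); // elided
    std::unordered_map<std::string, GLuint> programs; // renderer-local, nothing global
};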

Avoiding Design Changes

Further, that offers a lot of breathing room to avoid costly central design changes. Say you discover in hindsight that some materials require multiple rendering passes (and multiple shaders) to render, like a subsurface scattering pass and shader for skin materials, whereas previously you had conflated a material with a single GPU shader. In that case there's no costly design change to central interfaces used by many things. All you do is update the local implementation of the rendering system to handle this formerly unanticipated case when it encounters skin properties in your higher-level material component.

The Overall Strategy

And that's the overall strategy I use now, and it becomes increasingly helpful the more complex your rendering concerns are. As a downside, it does require a bit more upfront work than injecting your game entities with shaders and VBOs and things like that, and it also couples your renderer more to your particular game engine (or its abstractions; in exchange, though, the higher-level game entities and concepts become completely decoupled from low-level rendering concerns). Your renderer might need things like callbacks to notify it when entities are destroyed, so that it can deassociate and destroy any data it has associated with them (you might use ref-counting or shared_ptr here for shared resources, but just locally, inside the renderer). And you might want some efficient way to associate and deassociate all kinds of rendering data with arbitrary entities in constant time (an ECS tends to provide this off the bat to every system, given how you can associate new component types on the fly; without an ECS it shouldn't be too difficult either). But on the upside, all these kinds of things will likely be useful to systems other than the renderer anyway.
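A minimal sketch of that notification idea (names hypothetical): the renderer queues the dead entity's id and frees the associated GPU data at the start of the next frame, when the context is guaranteed current:

#include <cstdint>
#include <unordered_map>
#include <vector>

class Renderer {
public:
    // May be called from anywhere, even without a current GL context.
    void onEntityDestroyed(std::uint32_t entityId)
    {
        destroyedQueue.push_back(entityId); // defer the actual GL work
    }

    // Called at the start of rendering, with the GL context current.
    void beginFrame()
    {
        for (std::uint32_t id : destroyedQueue) {
            auto it = perEntity.find(id);
            if (it != perEntity.end()) {
                glDeleteVertexArrays(1, &it->second.vao); // safe here
                perEntity.erase(it);
            }
        }
        destroyedQueue.clear();
    }

private:
    struct EntityGpuData { GLuint vao = 0; /* textures, programs, ... */ };
    std::unordered_map<std::uint32_t, EntityGpuData> perEntity;
    std::vector<std::uint32_t> destroyedQueue;
};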

Admittedly the real implementation gets a lot more nuanced than this and might blur these lines a bit more; your engine might want to deal with things like triangles and vertices in areas other than rendering (ex: physics might want such data to do collision detection). But where life started to get a lot easier (at least for me) was embracing this kind of reversal in mindset and strategy as the starting point.

And designing a real-time renderer is very hard in my experience: the hardest thing I've ever designed (and keep re-designing), given the rapid changes in hardware, shading capabilities, and discovered techniques. But this approach does eliminate the immediate concern of when GPU resources can be created/destroyed, by centralizing all of that in the rendering implementation. Even more beneficial for me is that it shifted what would otherwise be costly and cascading design changes (which could spill into code not immediately concerned with rendering) into changes to just the implementation of the renderer itself. And that reduction in the cost of change can add up to enormous savings for something whose requirements shift as rapidly, every year or two, as real-time rendering's do.

Your Shading Example

The way I tackle your shading example is that I don't concern myself with things like GLSL shaders in things like car and person entities. I concern myself with "materials", which are very lightweight CPU objects that just contain properties describing what sort of material it is (skin, car paint, etc.). In my actual case it's a bit more sophisticated, as I have a DSEL similar to Unreal Blueprints for programming shaders in a visual sort of language, but materials don't store GLSL shader handles.
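Something along these lines, hypothetically; the point is that a material is plain CPU data, and the renderer alone decides which shader(s), and how many passes, a given kind implies:

enum class MaterialKind { Skin, CarPaint, Glass /* ... */ };

struct Material {
    MaterialKind kind = MaterialKind::CarPaint;
    float glossiness = 0.5f;
    // No GLuint anywhere: safe to copy, share, or destroy in any scope,
    // GL context or not.
};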

"ShaderPrograms count their references and only call glDeleteProgram in their destructor when the count is 0, AND ShaderManager periodically checks for ShaderPrograms with a count of 1 and discards them."

I used to do similar things back when I was storing and managing these resources kind of "out there in space", outside the renderer. My earliest naive attempts, which just tried to destroy those resources directly in a destructor, often did so outside of a valid GL context (and sometimes I'd even accidentally try to create them in script or something when I wasn't in a valid context), so I needed to defer creation and destruction to points where I could guarantee I was in a valid context, which led to "manager" designs similar to the ones you describe.

All of these issues go away if you store a CPU resource in its place and have the renderer deal with the concerns of GPU resource management. I can't destroy an OGL shader just anywhere, but I can destroy a CPU material anywhere, and easily use shared_ptr and so forth, without getting myself into trouble.

"When to actually clear ShaderPrograms out of the static ShaderManager? Every few minutes? Every game loop? I'm gracefully handling the re-compiling of a shader in the case where a ShaderProgram was deleted but then a new WorldEntity requests it; but I'm sure there's a better way."

Now, that concern is actually tricky even in my case, if you want to efficiently manage the GPU resources and offload them when no longer needed. In my case I can be dealing with massive scenes, and I work in VFX rather than games, where artists might have particularly intense content not optimized for real-time rendering (epic textures, models spanning millions of polygons, etc.).

It's very useful for performance not merely to avoid rendering things when they're offscreen (outside the viewing frustum), but also to offload the GPU resources when they haven't been needed for a while (say the user doesn't look at something way out in distant space for a while).

So the solution I tend to use most often is a sort of "timestamped" solution, though I'm not sure how applicable it is to games. When I start using/binding resources for rendering (e.g. they pass the frustum-culling test), I store the current time with them. Then, periodically, there's a check to see whether those resources have gone unused for a while, and if so, they get unloaded/destroyed (though the original CPU data used to generate the GPU resource is kept until the actual entity storing those components is destroyed, or until those components are removed from the entity). As the number of resources increases and more memory is used, the system becomes more aggressive about unloading/destroying them (the amount of idle time allowed before an old, unused resource is destroyed shrinks as the system becomes more taxed).
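A minimal sketch of that timestamped eviction, using textures as the example (names hypothetical; the CPU-side source data is assumed to live elsewhere so a texture can be rebuilt on demand):

#include <chrono>
#include <cstdint>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

class TextureCache {
public:
    // Called whenever a texture is bound for rendering.
    GLuint use(std::uint64_t key)
    {
        Entry& e = entries.at(key); // assume created earlier from CPU data
        e.lastUsed = Clock::now();  // stamp on every use
        return e.id;
    }

    // Called periodically; memoryPressure in [0, 1], 1 = nearly full.
    void sweep(double memoryPressure)
    {
        // Allowed idle time shrinks as the system becomes more taxed.
        const std::chrono::duration<double> maxIdle(60.0 * (1.0 - 0.9 * memoryPressure));
        const auto now = Clock::now();
        for (auto it = entries.begin(); it != entries.end();) {
            if (now - it->second.lastUsed > maxIdle) {
                glDeleteTextures(1, &it->second.id); // CPU source data kept elsewhere
                it = entries.erase(it);
            } else {
                ++it;
            }
        }
    }

private:
    struct Entry { GLuint id = 0; Clock::time_point lastUsed; };
    std::unordered_map<std::uint64_t, Entry> entries;
};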

I imagine it depends a whole lot on your game's design. If you have a game with a more segmented approach, with smaller levels/zones, then you might be able to load all the resources necessary for a level in advance and unload them when the user goes to the next level (and you'll probably have the easiest time keeping frame rates stable that way). Whereas if you have some massive open-world game that's seamless, you might need a much more sophisticated strategy to control when to create and destroy these resources, and there may be a greater challenge in doing it all without stuttering. In my VFX domain a little hiccup in frame rates isn't as big of a deal (though I do try to eliminate them within reason), since the user isn't going to get a game over as a result of it.

All of this complexity in my case is still isolated to the rendering system, and while I have generalized classes and code to help implement it, there are no concerns about valid GL contexts, and no temptations to use globals or anything like that.
