Definitions
Inversion of control is a design paradigm whose goal is to remove knowledge of concrete implementations from application framework code and give more control to the domain-specific components of your application. In a traditional top-down designed system, the logical flow of the application and the awareness of dependencies flow from the top components, the ones designed first, to the ones designed last. As such, inversion of control is an almost literal reversal of control and dependency awareness in an application.
Dependency injection is a pattern used to create instances of classes that other classes rely on without knowing at compile time which implementation will be used to provide that functionality.
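A minimal sketch of that definition in Python (all names here are hypothetical, chosen for illustration): the service depends only on an abstraction, and the concrete implementation is supplied from outside at run time.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    # The abstraction the service compiles against; no concrete type is named.
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    # One possible implementation; it could be swapped without touching OrderService.
    def send(self, message: str) -> str:
        return f"email: {message}"

class OrderService:
    # Constructor injection: the dependency arrives fully formed;
    # OrderService never decides which Notifier it gets.
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, item: str) -> str:
        return self._notifier.send(f"order placed for {item}")
```

At composition time you might write `OrderService(EmailNotifier())`; a test could pass a stub instead, and neither case requires changing `OrderService`.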
Working Together
Inversion of control can make use of dependency injection because a mechanism is needed to create the components that provide the specific functionality. Other options exist and are used, e.g. activators and factory methods, but frameworks don't need to reference those utility classes when framework classes can simply accept the dependencies they need instead.
Examples
One example of these concepts at work is the plug-in framework in Reflector. The plug-ins have a great deal of control over the system even though the application didn't know anything about them at compile time. A single method is called on each of those plug-ins, Initialize if memory serves, which passes control over to the plug-in. The framework doesn't know what they will do; it just lets them do it. Control has been taken from the main application and given to the component doing the specific work: inversion of control.
The application framework allows access to its functionality through a variety of service providers. A plug-in is given references to the service providers when it is created. These dependencies allow the plug-in to add its own menu items, change how files are displayed, display its own information in the appropriate panels, etc. Since the dependencies are passed by interface, the implementations can change and the changes will not break the code as long as the contract remains intact.
At the time, a factory method was used to create the plug-ins using configuration information, reflection and the Activator object (in .NET at least). Today, there are tools, MEF for one, that allow for a wider range of options when injecting dependencies including the ability for an application framework to accept a list of plugins as a dependency.
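The Activator-style factory described above can be sketched as follows. This is a simplified Python stand-in (the plug-in names and registry are hypothetical): concrete plug-in types are resolved from configuration by name at run time, so the framework has no compile-time knowledge of them. A real .NET implementation would use reflection over type names from a config file rather than an in-process registry.

```python
class GreeterPlugin:
    def initialize(self) -> str:
        return "greeter ready"

class LoggerPlugin:
    def initialize(self) -> str:
        return "logger ready"

# "Configuration": maps names (as they might appear in a config file)
# to the classes that implement them.
PLUGIN_REGISTRY = {
    "GreeterPlugin": GreeterPlugin,
    "LoggerPlugin": LoggerPlugin,
}

def create_plugins(configured_names):
    # The factory instantiates each configured plug-in; callers only ever
    # see the plug-in contract, never the concrete types.
    return [PLUGIN_REGISTRY[name]() for name in configured_names]
```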
Summary
While these concepts can be used and provide benefits independently, together they allow for much more flexible, reusable, and testable code to be written. As such, they are important concepts in designing object oriented solutions.
The only legitimate dependency injection anti-pattern that I'm aware of is the Service Locator pattern, which is an anti-pattern when a DI container is used to implement it.
All of the other so-called DI anti-patterns that I've heard about, here or elsewhere, are just slightly more specific cases of general OO/software design anti-patterns. For instance:
Constructor over-injection is a violation of the Single Responsibility Principle. Too many constructor arguments indicates too many dependencies; too many dependencies indicates that the class is trying to do too much. Usually this error correlates with other code smells, such as unusually long or ambiguous ("manager") class names. Static analysis tools can easily detect excessive afferent/efferent coupling.
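To make the smell concrete, here is an illustrative Python sketch (all names are hypothetical): a constructor with a long dependency list, and one possible refactoring that groups cohesive roles behind a smaller, focused service.

```python
class OrderManagerBad:
    # Seven constructor dependencies: a Single Responsibility smell.
    # The vague "Manager" name is often part of the same problem.
    def __init__(self, repo, mailer, logger, pricing, tax, shipping, audit):
        ...

class CheckoutService:
    # One refactoring: a narrower class that only composes the
    # price-calculation concerns, each injected as a callable.
    def __init__(self, pricing, tax, shipping):
        self._pricing = pricing
        self._tax = tax
        self._shipping = shipping

    def total(self, base: float) -> float:
        return self._shipping(self._tax(self._pricing(base)))
```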
Injection of data, as opposed to behaviour, is a subtype of the poltergeist anti-pattern, with the 'geist in this case being the container. If a class needs to be aware of the current date and time, you don't inject a DateTime, which is data; instead, you inject an abstraction over the system clock (I usually call mine ISystemClock, although I think there's a more general one in the SystemWrappers project). This is not only correct for DI; it is absolutely essential for testability, so that you can test time-varying functions without needing to actually wait on them.
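The clock abstraction might look like the following Python sketch (names like `SystemClock`, `FixedClock`, and `TrialService` are illustrative, mirroring the ISystemClock idea above): production code injects a real clock, while tests inject a frozen one and exercise time-varying logic instantly.

```python
from abc import ABC, abstractmethod
from datetime import datetime

class SystemClock(ABC):
    # Abstraction over "now"; analogous to the ISystemClock interface.
    @abstractmethod
    def utc_now(self) -> datetime: ...

class RealClock(SystemClock):
    def utc_now(self) -> datetime:
        return datetime.utcnow()

class FixedClock(SystemClock):
    # Test double: always reports a preset instant.
    def __init__(self, instant: datetime) -> None:
        self._instant = instant

    def utc_now(self) -> datetime:
        return self._instant

class TrialService:
    # Depends on behaviour (a clock), not data (a DateTime value).
    def __init__(self, clock: SystemClock, expiry: datetime) -> None:
        self._clock = clock
        self._expiry = expiry

    def is_expired(self) -> bool:
        return self._clock.utc_now() > self._expiry
```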
Declaring every life cycle as Singleton is, to me, a perfect example of cargo cult programming and to a lesser degree the colloquially-named "object cesspool". I've seen more singleton abuse than I care to remember, and very little of it involves DI.
Another common error is implementation-specific interface types (with strange names like IOracleRepository) created just to be able to register them in the container. This is in and of itself a violation of the Dependency Inversion Principle (just because it's an interface does not mean it's truly abstract), and it often comes with interface bloat, which violates the Interface Segregation Principle.
The last error I usually see is the "optional dependency", which they did in NerdDinner. In other words, there is a constructor that accepts dependency injection, but also another constructor that uses a "default" implementation. This also violates the DIP and tends to lead to LSP violations as well, as developers, over time, start making assumptions around the default implementation, and/or start new-ing up instances using the default constructor.
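The "optional dependency" shape can be sketched in Python like this (the mailer and service names are hypothetical). In Python the second constructor typically shows up as a default argument that quietly news up a concrete implementation; the fix is to make the dependency required and let composition decide.

```python
class SmtpMailer:
    def send(self, to: str) -> str:
        return f"smtp -> {to}"

class SignupServiceBad:
    # Anti-pattern: the dependency is optional, so callers (and later
    # maintainers) start assuming the SmtpMailer default.
    def __init__(self, mailer=None):
        self._mailer = mailer or SmtpMailer()

class SignupServiceGood:
    # Preferred: the dependency is required; whoever composes the
    # object graph chooses the implementation, not the class itself.
    def __init__(self, mailer):
        self._mailer = mailer

    def signup(self, user: str) -> str:
        return self._mailer.send(user)
```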
As the old saying goes, you can write FORTRAN in any language. Dependency Injection isn't a silver bullet that will prevent developers from screwing up their dependency management, but it does prevent a number of common errors/anti-patterns:
...and so on.
Obviously you don't want to design a framework to depend on a specific IoC container implementation, like Unity or AutoFac. That is, once again, violating the DIP. But if you find yourself even thinking about doing something like that, then you must have already made several design errors, because Dependency Injection is a general-purpose dependency-management technique and is not tied to the concept of an IoC container.
Anything can construct a dependency tree; maybe it's an IoC container, maybe it's a unit test with a bunch of mocks, maybe it's a test driver supplying dummy data. Your framework shouldn't care, and most frameworks I've seen don't care, but they still make heavy use of dependency injection so that it can be easily integrated into the end user's IoC container of choice.
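As a sketch of that point (names are illustrative): the same class can be composed by a container in production or wired by hand in a test with a stub, and the class itself is indifferent to which happened.

```python
class Repository:
    # The abstraction the service depends on.
    def find(self, key: str) -> str:
        raise NotImplementedError

class StubRepository(Repository):
    # A hand-rolled test double; no container involved.
    def __init__(self, data):
        self._data = data

    def find(self, key: str) -> str:
        return self._data[key]

class PriceService:
    def __init__(self, repo: Repository) -> None:
        self._repo = repo

    def describe(self, sku: str) -> str:
        return f"{sku}: {self._repo.find(sku)}"

def test_price_service():
    # The test constructs the dependency tree itself, with dummy data.
    service = PriceService(StubRepository({"apple": "1.00"}))
    assert service.describe("apple") == "apple: 1.00"
```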
DI isn't rocket science. Just try to avoid new and static except when there's a compelling reason to use them, such as a utility method that has no external dependencies, or a utility class that could not possibly have any purpose outside the framework (interop wrappers and dictionary keys are common examples of this).
Many of the problems with IoC frameworks come up when developers are first learning how to use them: instead of actually changing the way they handle dependencies and abstractions to fit the IoC model, they try to manipulate the IoC container to meet the expectations of their old coding style, which often involves high coupling and low cohesion. Bad code is bad code, whether it uses DI techniques or not.
Best Answer
You could split your cache implementation into two parts: a Cache and a CacheStorage.
The CacheStorage is only responsible for storing the cached data, and has a Singleton lifetime. Because it may be accessed by multiple threads concurrently, it should be thread-safe.
The Cache is the dynamic part of the caching mechanism; it has a PerRequest lifetime and depends on both the CacheStorage and the DbContext. It is responsible for reading the settings from the DbContext on first access and updating the CacheStorage's data with them.
This way, the CacheStorage can be kept alive for the duration of the application, while a Cache gets instantiated only when there's a valid DbContext available.
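The split described above might look like this Python sketch (the class names follow the answer; `FakeDbContext` and the settings are hypothetical stand-ins): a thread-safe, long-lived `CacheStorage` holding the data, and a short-lived `Cache` that lazily loads settings from its DbContext into that storage.

```python
import threading

class CacheStorage:
    # Singleton lifetime: lives for the whole application, so access
    # is guarded by a lock for thread safety.
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def set_many(self, items):
        with self._lock:
            self._data.update(items)

class Cache:
    # PerRequest lifetime: created per request, holding both the shared
    # storage and a request-scoped DbContext.
    def __init__(self, storage: CacheStorage, db_context):
        self._storage = storage
        self._db = db_context
        self._loaded = False

    def get_setting(self, key):
        value = self._storage.get(key)
        if value is None and not self._loaded:
            # First access with a cold storage: load settings from the
            # DbContext and push them into the shared storage.
            self._storage.set_many(self._db.load_settings())
            self._loaded = True
            value = self._storage.get(key)
        return value

class FakeDbContext:
    # Stand-in for the real DbContext, for illustration only.
    def load_settings(self):
        return {"theme": "dark"}
```

A second request gets a fresh `Cache` (with its own DbContext) but sees the data already warmed into the shared `CacheStorage`.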