The only legitimate dependency injection anti-pattern that I'm aware of is the Service Locator pattern, which becomes an anti-pattern when a DI framework is used to implement it.
All of the other so-called DI anti-patterns that I've heard about, here or elsewhere, are just slightly more specific cases of general OO/software design anti-patterns. For instance:
Constructor over-injection is a violation of the Single Responsibility Principle. Too many constructor arguments indicates too many dependencies; too many dependencies indicates that the class is trying to do too much. Usually this error correlates with other code smells, such as unusually long or ambiguous ("manager") class names. Static analysis tools can easily detect excessive afferent/efferent coupling.
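A minimal sketch of what that smell looks like in practice (all of the service names here are hypothetical):

public interface IOrderRepository { }
public interface ICustomerRepository { }
public interface IPaymentGateway { }
public interface IShippingCalculator { }
public interface IEmailSender { }
public interface IAuditLogger { }
public interface IInventoryService { }

// Seven constructor arguments is the smell, not the specific services.
public class OrderManager
{
    public OrderManager(
        IOrderRepository orders,
        ICustomerRepository customers,
        IPaymentGateway payments,
        IShippingCalculator shipping,
        IEmailSender email,
        IAuditLogger audit,
        IInventoryService inventory)
    {
        // A class that needs all of this is almost certainly doing the work
        // of three or four smaller classes, each with two or three
        // dependencies of its own.
    }
}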
Injection of data, as opposed to behaviour, is a subtype of the poltergeist anti-pattern, with the 'geist in this case being the container. If a class needs to be aware of the current date and time, you don't inject a DateTime, which is data; instead, you inject an abstraction over the system clock (I usually call mine ISystemClock, although I think there's a more general one in the SystemWrappers project). This is not only correct for DI; it is absolutely essential for testability, so that you can test time-varying functions without needing to actually wait on them.
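A minimal sketch of such a clock abstraction (the ISystemClock name comes from the text above; the members shown are my own assumption):

using System;

public interface ISystemClock
{
    DateTime UtcNow { get; }
}

// Production implementation: delegates to the real system clock.
public class SystemClock : ISystemClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test implementation: time is set explicitly, so time-varying logic can
// be exercised without actually waiting.
public class FakeSystemClock : ISystemClock
{
    public DateTime UtcNow { get; set; }
}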
Declaring every life cycle as Singleton is, to me, a perfect example of cargo cult programming and to a lesser degree the colloquially-named "object cesspool". I've seen more singleton abuse than I care to remember, and very little of it involves DI.
Another common error is implementation-specific interface types (with strange names like IOracleRepository) created just to be able to register them in the container. This is in and of itself a violation of the Dependency Inversion Principle (just because it's an interface does not mean it's truly abstract) and often also involves interface bloat, which violates the Interface Segregation Principle.
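As an illustration (with hypothetical repository names), compare the leaky interface to one named for the role it plays:

using System;

// The name leaks the implementation: consumers "know" they are talking to
// Oracle, so the interface abstracts nothing.
public interface IOracleRepository { }

// A true abstraction is named for its role; the Oracle-specific class is
// just one implementation of it.
public class Customer { }

public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class OracleCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // Oracle-specific data access would go here.
        throw new NotImplementedException();
    }
}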
The last error I usually see is the "optional dependency", which they did in NerdDinner. In other words, there is a constructor that accepts an injected dependency, but also another constructor that creates a "default" implementation. This also violates the DIP, and tends to lead to LSP violations as well, as developers, over time, start making assumptions around the default implementation and/or start new-ing up instances using the default constructor.
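A sketch of the anti-pattern, using hypothetical NerdDinner-style names; the fix is simply to delete the default constructor:

using System;

public interface IDinnerRepository { }
public class SqlDinnerRepository : IDinnerRepository { }

public class DinnerService
{
    private readonly IDinnerRepository _repository;

    // Proper constructor injection: the caller must supply the dependency.
    public DinnerService(IDinnerRepository repository)
    {
        _repository = repository ?? throw new ArgumentNullException(nameof(repository));
    }

    // The "optional dependency": a default constructor that quietly news up
    // a concrete implementation. Over time, callers come to rely on it.
    public DinnerService() : this(new SqlDinnerRepository())
    {
    }
}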
As the old saying goes, you can write FORTRAN in any language. Dependency Injection isn't a silver bullet that will prevent developers from screwing up their dependency management, but it does prevent a number of common errors/anti-patterns:
...and so on.
Obviously you don't want to design a framework to depend on a specific IoC container implementation, like Unity or AutoFac. That is, once again, violating the DIP. But if you find yourself even thinking about doing something like that, then you must have already made several design errors, because Dependency Injection is a general-purpose dependency-management technique and is not tied to the concept of an IoC container.
Anything can construct a dependency tree: maybe it's an IoC container, maybe it's a unit test with a bunch of mocks, maybe it's a test driver supplying dummy data. Your framework shouldn't care, and most frameworks I've seen don't care, but they still make heavy use of dependency injection so that they can be easily integrated into the end user's IoC container of choice.
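For illustration, a hand-wired sketch (with hypothetical types) showing that a plain unit test can compose the same tree a container would:

public interface IDataSource { }
public interface IFormatter { }
public class DummyDataSource : IDataSource { }
public class PlainTextFormatter : IFormatter { }

// A framework class that only declares what it needs...
public class ReportGenerator
{
    private readonly IDataSource _source;
    private readonly IFormatter _formatter;

    public ReportGenerator(IDataSource source, IFormatter formatter)
    {
        _source = source;
        _formatter = formatter;
    }
}

// ...so a test (or any other caller) can build the tree by hand,
// no container required.
public class ReportGeneratorTests
{
    public void Composes_without_a_container()
    {
        var generator = new ReportGenerator(new DummyDataSource(), new PlainTextFormatter());
        // ...assertions against generator...
    }
}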
DI isn't rocket science. Just try to avoid new and static except when there's a compelling reason to use them, such as a utility method that has no external dependencies, or a utility class that could not possibly have any purpose outside the framework (interop wrappers and dictionary keys are common examples of this).
Many of the problems with IoC frameworks come up when developers are first learning how to use them: instead of actually changing the way they handle dependencies and abstractions to fit the IoC model, they try to manipulate the IoC container to meet the expectations of their old coding style, which often involves high coupling and low cohesion. Bad code is bad code, whether it uses DI techniques or not.
I just finished an article today about the repository pattern (with sample implementations).
The thing with most .NET IoC containers is that they support scoping. That is, they can create objects with a limited lifetime. That works very well with HTTP applications, since the scope is the same as the lifetime of an HTTP request.
If you use ASP.NET MVC, you can combine that with built-in features of MVC to trigger the UoW if no errors were detected: http://blog.gauffin.org/2012/06/how-to-handle-transactions-in-asp-net-mvc3/
For any other kind of application, I usually create a scope myself (for instance, to wrap a command):
using (var scope = MyContainer.CreateChildContainer())
{
    // Everything resolved from this scope shares the same limited lifetime.
    using (var uow = scope.Resolve<IUnitOfWork>())
    {
        // do something here
        uow.SaveChanges();
    }
}
The thing is that the repositories etc. do not have to be aware of the unit of work (if you use databases in .NET). The UoW implementation can make sure that all OR/M and database commands are enlisted in the same transaction.
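One way to implement that (a sketch, not the article's actual code) is to back IUnitOfWork with System.Transactions, since ADO.NET connections opened while an ambient TransactionScope is active enlist in it automatically:

using System;
using System.Transactions;

public interface IUnitOfWork : IDisposable
{
    void SaveChanges();
}

public class TransactionScopeUnitOfWork : IUnitOfWork
{
    private readonly TransactionScope _scope = new TransactionScope();

    public void SaveChanges()
    {
        // Mark the ambient transaction as complete; it commits on Dispose.
        _scope.Complete();
    }

    public void Dispose()
    {
        // If SaveChanges was never called, disposing rolls everything back.
        _scope.Dispose();
    }
}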
Best Answer
It depends on the intended lifetime and ownership of your objects. To construct an object of type C, you need an object of type D. Should this D object have the same lifetime as the C object? Then it makes sense to construct the D object in the same scope where the C object is constructed. Should the D object live longer than C? Then you should construct it beforehand, outside that scope, and pass it to the function which constructs C.
Given that your objects B, C and D should have the same lifetime, and that this lifetime should be controlled by an object of type A, it makes sense to let A construct and "wire" them. If the D object should live longer, it must be constructed (and destroyed) outside of A.
If you find that A gets too many responsibilities by managing other, dependent objects like B, A might use a "BFactory" class for this purpose, as @gnat suggested in response to your other question. And if you want to avoid A having to construct that factory, the factory itself could be injected into A through an interface "IBFactory".
And if your system gets really large, because you have not four but 400 classes to manage, then a DI container would be the better choice.
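To make the options concrete, here is a small sketch with hypothetical B, C and D types; A can wire its dependents itself, or delegate that wiring to an injected factory:

public class D { }
public class C { public C(D d) { } }
public class B { public B(C c) { } }

// Option 1: A constructs and wires B and C itself, so their lifetime is
// tied to A's. D is passed in because it lives longer than A.
public class A
{
    private readonly B _b;

    public A(D d)
    {
        _b = new B(new C(d));
    }
}

// Option 2: if the wiring grows, move it behind a factory and inject that.
public interface IBFactory
{
    B Create(D d);
}

public class AWithFactory
{
    private readonly B _b;

    public AWithFactory(IBFactory factory, D d)
    {
        _b = factory.Create(d);
    }
}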