IMHO, constructing "complete" objects via the constructor has always been appealing. Setter injection, however, requires a no-arg constructor, so pre-conditions (e.g. assertions) become a reasonable alternative for guaranteeing "completeness" in the context of DI containers.
To me it is a trade-off between adding complexity and implementing a simple, pragmatic solution. And simplicity always wins.
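To make the trade-off concrete, here is a minimal sketch in Java. All names (`ConstructorGreeter`, `SetterGreeter`, the `greeting` dependency) are hypothetical, chosen only to contrast the two injection styles described above:

```java
import java.util.Objects;

// Constructor injection: the object is "complete" the moment it exists.
class ConstructorGreeter {
    private final String greeting;

    ConstructorGreeter(String greeting) {
        // Completeness is enforced once, at construction time.
        this.greeting = Objects.requireNonNull(greeting, "greeting must be set");
    }

    String greet(String name) {
        return greeting + ", " + name;
    }
}

// Setter injection: the DI container needs a no-arg constructor, so a
// pre-condition guards against use before the dependency was injected.
class SetterGreeter {
    private String greeting;

    SetterGreeter() { }  // required by the container

    void setGreeting(String greeting) {
        this.greeting = greeting;
    }

    String greet(String name) {
        assert greeting != null : "greeting was never injected";  // pre-condition
        return greeting + ", " + name;
    }
}
```

The constructor variant pushes the check to a single place; the setter variant trades that for the pragmatic pre-condition the answer mentions.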
In the case where almost everyone needs to know about a certain data structure, why is Dependency Injection any better than a global object?
Dependency injection is the best thing since sliced bread, while global objects have been known for decades to be the source of all evil, so this is a rather interesting question.
The point of dependency injection is not simply to ensure that every actor who needs some resource can have it, because obviously, if you make all resources global, then every actor will have access to every resource, problem solved, right?
The point of dependency injection is:
- To allow actors to access resources on a need basis, and
- To have control over which instance of a resource is accessed by any given actor.
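The second point, controlling which instance each actor receives, can be sketched as follows. The names here (`Logger`, `PrefixLogger`, `Worker`) are illustrative, not from the answer:

```java
// A resource behind an interface, so any instance can be supplied.
interface Logger {
    void log(String message);
}

class PrefixLogger implements Logger {
    private final String prefix;
    private final StringBuilder sink = new StringBuilder();

    PrefixLogger(String prefix) { this.prefix = prefix; }

    @Override
    public void log(String message) {
        sink.append(prefix).append(message).append('\n');
    }

    String contents() { return sink.toString(); }
}

// The actor declares that it needs *a* Logger; it neither knows nor
// cares which instance the wiring code hands it.
class Worker {
    private final Logger logger;

    Worker(Logger logger) { this.logger = logger; }

    void doWork() { logger.log("work done"); }
}
```

Two `Worker`s can now be wired to two different `Logger` instances in the same program, which is exactly the control a single global object cannot give you.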
The fact that in your particular configuration all actors happen to need access to the same resource instance is irrelevant. Trust me, you will one day need to reconfigure things so that actors have access to different instances of the resource, and then you will realize that you have painted yourself into a corner. Some answers have already pointed out such a configuration: testing.
Another example: suppose you split your application into client-server. All actors on the client use the same set of central resources on the client, and all actors on the server use the same set of central resources on the server. Now suppose, one day, that you decide to create a "standalone" version of your client-server application, where both the client and the server are packaged in a single executable and running in the same virtual machine. (Or runtime environment, depending on your language of choice.)
If you use dependency injection, you can easily make sure that all the client actors are given the client resource instances to work with, while all the server actors receive the server resource instances.
If you do not use dependency injection, you are completely out of luck, as only one global instance of each resource can exist in one virtual machine.
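The standalone scenario can be sketched like this, with hypothetical names (`Resources`, `ClientActor`, `ServerActor`) standing in for whatever your application's central resources are:

```java
// One resource type, of which the standalone build needs TWO live instances.
class Resources {
    final String databaseUrl;
    Resources(String databaseUrl) { this.databaseUrl = databaseUrl; }
}

class ClientActor {
    private final Resources resources;
    ClientActor(Resources resources) { this.resources = resources; }
    String describe() { return "client uses " + resources.databaseUrl; }
}

class ServerActor {
    private final Resources resources;
    ServerActor(Resources resources) { this.resources = resources; }
    String describe() { return "server uses " + resources.databaseUrl; }
}

// "Standalone" wiring: both resource sets coexist in the same JVM.
// A single global Resources instance could not express this at all.
```

The wiring code simply hands `clientResources` to every client actor and `serverResources` to every server actor; with a global, the second instance has nowhere to live.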
Then, you have to consider: do all actors really need access to that resource? Really?
It is possible that you have made the mistake of turning that resource into a god object (so of course everyone needs access to it), or perhaps you are grossly overestimating the number of actors in your project that actually need access to that resource.
With globals, every single line of source code in your entire application has access to every single global resource. With dependency injection, each resource instance is only visible to those actors that actually need it. If the two are the same (the actors that need a particular resource comprise 100% of the lines of source code in your project), then you must have made a mistake in your design. So, either:
- Refactor that great big huge god resource into smaller sub-resources, so that different actors need access to different pieces of it, but rarely does an actor need all of its pieces, or
- Refactor your actors to in turn accept as parameters only the subsets of the problem that they need to work on, so they do not have to consult some great big huge central resource all the time.
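The second refactoring can be sketched in a few lines. `AppContext` and `Greeting` are hypothetical names invented for this illustration:

```java
// Before: a god resource that every actor receives in full.
class AppContext {
    String userName = "alice";
    String locale = "en";
    String theme = "dark";
    // ...and dozens of other unrelated fields...
}

// After: the actor accepts only the subset it actually works on.
class Greeting {
    // Needs exactly one value, so it takes exactly one value,
    // instead of consulting the whole AppContext.
    static String forUser(String userName) {
        return "Hello, " + userName;
    }
}
```

After this change, `Greeting` can be tested and reused without ever constructing the big context object, and the dependency graph honestly reflects who needs what.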
Best Answer
This depends on the programming language, the availability of a mocking framework and your need for mocking those complex objects. For C# and Java, there are frameworks available which allow you to mock out classes without creating interfaces first. (In the environment where I work, we don't use any of those frameworks, so whenever we have to mock a class for a unit test, we are going to create an interface.) In C++, you can avoid the need for interface-based mocking by injecting your "complex class" as a template parameter into every other component which is going to use it (the drawback is you have to templatize those classes, which means a certain amount of overhead).
In dynamically typed languages there is often not even an "interface" language construct, because you can replace an object of one class with an object of a different type as long as the replacement fulfills the implicit contract (i.e. provides methods with the correct names and signatures).
Furthermore, I agree that DI does not work well with infrastructure classes like "strings". See this former PSE question & my answer to it.
I would like to add that your question sounds like "shall I always provide an interface 'just in case'". IMHO it is better to follow the YAGNI principle: start without an interface, and as soon as you need one, maybe for mocking purposes, refactor your code and introduce the interface afterwards.
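That refactoring step is cheap in practice. A minimal Java sketch, using hypothetical names (`MailSender`, `SmtpMailSender`, `FakeMailSender`, `SignupService`): the interface is extracted only once a unit test needs a hand-rolled fake, not "just in case":

```java
// The interface extracted at refactoring time, once mocking became necessary.
interface MailSender {
    String send(String to, String body);
}

// The original concrete class; it simply gains "implements MailSender".
class SmtpMailSender implements MailSender {
    public String send(String to, String body) {
        return "SMTP: sent to " + to;
    }
}

// A hand-rolled fake for unit tests; no mocking framework needed.
class FakeMailSender implements MailSender {
    String lastRecipient;
    public String send(String to, String body) {
        lastRecipient = to;
        return "fake";
    }
}

// The class under test depends on the interface, so either
// implementation can be injected.
class SignupService {
    private final MailSender mail;
    SignupService(MailSender mail) { this.mail = mail; }
    void register(String user) {
        mail.send(user + "@example.com", "welcome");
    }
}
```

Callers that previously received an `SmtpMailSender` barely change, which is why deferring the interface until it is needed costs so little.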