Yes, SOLID is a very good way to design code that can be easily tested. As a short primer:
S - Single Responsibility Principle: An object should do exactly one thing, and should be the only object in the codebase that does that one thing. For instance, take a domain class, say an Invoice. The Invoice class should represent the data structure and business rules of an invoice as used in the system. It should be the only class that represents an invoice in the codebase. This can be further broken down to say that a method should have one purpose and should be the only method in the codebase that meets this need.
By following this principle, you increase the testability of your design by decreasing the number of tests you have to write that test the same functionality on different objects, and you also typically end up with smaller pieces of functionality that are easier to test in isolation.
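To make the Invoice idea concrete, here is a minimal sketch (in Java for brevity; the names and the discount rule are hypothetical) of a class that is the one place in the codebase modeling invoice data and its business rules, and nothing else:

```java
import java.util.List;

// Hypothetical sketch: Invoice is the single place that models
// invoice data and its business rules -- nothing else lives here.
class Invoice {
    private final List<Double> lineItems;
    private final double discountRate; // e.g. 0.1 for 10%

    Invoice(List<Double> lineItems, double discountRate) {
        this.lineItems = lineItems;
        this.discountRate = discountRate;
    }

    // One business rule, implemented (and therefore tested) in one place.
    double total() {
        double sum = lineItems.stream().mapToDouble(Double::doubleValue).sum();
        return sum * (1.0 - discountRate);
    }
}
```

Because the rule exists in exactly one place, a single small test covers it for every consumer of `Invoice`.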
O - Open/Closed Principle: A class should be open to extension, but closed to change. Once an object exists and works correctly, ideally there should be no need to go back into that object to make changes that add new functionality. Instead, the object should be extended, either by deriving it or by plugging new or different dependency implementations into it, to provide that new functionality. This avoids regression; you can introduce the new functionality when and where it is needed, without changing the behavior of the object as it is already used elsewhere.
By adhering to this principle, you generally increase the code's ability to tolerate "mocks", and you also avoid having to rewrite tests to anticipate new behavior; all existing tests for an object should still work on the un-extended implementation, while new tests for new functionality using the extended implementation should also work.
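A small sketch of "extension by plugging in a new dependency implementation" (Java, hypothetical names): the calculator below never changes when a new pricing behavior is added, so its existing tests keep passing untouched.

```java
// Hypothetical sketch: PriceCalculator is closed to modification;
// new behavior arrives as a new implementation of the abstraction.
interface DiscountPolicy {
    double apply(double amount);
}

class NoDiscount implements DiscountPolicy {
    public double apply(double amount) { return amount; }
}

// Added later, without touching PriceCalculator or NoDiscount.
class SeasonalDiscount implements DiscountPolicy {
    public double apply(double amount) { return amount * 0.8; }
}

class PriceCalculator {
    private final DiscountPolicy policy;
    PriceCalculator(DiscountPolicy policy) { this.policy = policy; }
    double priceOf(double base) { return policy.apply(base); }
}
```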
L - Liskov Substitution Principle: A class A, dependent upon class B, should be able to use any X : B (any subtype X of B) without knowing the difference. This basically means that anything you use as a dependency should have the same behavior as seen by the dependent class. As a short example, say you have an IWriter interface that exposes Write(string), which is implemented by ConsoleWriter. Now you have to write to a file instead, so you create FileWriter. In doing so, you must make sure that FileWriter can be used the same way ConsoleWriter was (meaning that the only way the dependent can interact with it is by calling Write(string)), and so any additional information that FileWriter may need to do that job (like the path and file to write to) must be provided from somewhere other than the dependent.
This is huge for writing testable code, because a design that conforms to the LSP can have a "mocked" object substituted for the real thing at any point without changing expected behavior, allowing for small pieces of code to be tested in isolation with the confidence that the system will then work with the real objects plugged in.
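The IWriter example above can be sketched like this (in Java for brevity; `InMemoryWriter` is a hypothetical test double standing in for a file-backed implementation). Note how the extra state a real FileWriter would need arrives through its constructor, never through the dependent:

```java
import java.util.ArrayList;
import java.util.List;

// The dependent only ever calls write(String); every implementation
// must honor that one contract so they are freely substitutable.
interface Writer {
    void write(String text);
}

class ConsoleWriter implements Writer {
    public void write(String text) { System.out.println(text); }
}

// Stand-in for a file-backed writer: anything it needs beyond the
// contract (a path, a buffer) comes from its constructor or fields,
// not from the dependent.
class InMemoryWriter implements Writer {
    final List<String> lines = new ArrayList<>();
    public void write(String text) { lines.add(text); }
}

class ReportGenerator {
    private final Writer writer;
    ReportGenerator(Writer writer) { this.writer = writer; }
    void generate() { writer.write("report body"); }
}
```

`ReportGenerator` can be tested against `InMemoryWriter` and shipped with `ConsoleWriter` without changing a line of it.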
I - Interface Segregation Principle: An interface should have as few methods as is feasible to provide the functionality of the role it defines. Simply put, many small interfaces are better than a few large ones. This is because a large interface has more reasons to change, and causes more changes elsewhere in the codebase than may be necessary.
Adherence to ISP improves testability by reducing the complexity of systems under test and of dependencies of those SUTs. If the object you are testing depends on an interface IDoThreeThings which exposes DoOne(), DoTwo() and DoThree(), you must mock an object that implements all three methods even if the object only uses the DoTwo method. But, if the object depends only on IDoTwo (which exposes only DoTwo), you can more easily mock an object that has that one method.
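The IDoThreeThings vs. IDoTwo comparison looks like this in code (Java sketch, keeping the hypothetical names from the text). Because the narrow interface has a single method, the "mock" in a test can even be a one-line lambda:

```java
// The narrow role: exactly what the client needs, nothing more.
interface DoTwo {
    String doTwo();
}

// The fat role can still exist, composed from narrower ones,
// for implementations that genuinely do all three things.
interface DoThreeThings extends DoTwo {
    String doOne();
    String doThree();
}

class Client {
    private final DoTwo dep;
    Client(DoTwo dep) { this.dep = dep; }
    String run() { return dep.doTwo(); }
}
```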
D - Dependency Inversion Principle: Neither concretions nor abstractions should depend on concretions; both should depend on abstractions. This principle directly enforces the tenet of loose coupling. An object should never have to know what another object IS; it should instead care about what that object DOES. So, the use of interfaces and/or abstract base classes is always to be preferred over the use of concrete implementations when defining properties and parameters of an object or method. That allows you to swap one implementation for another without having to change the usage (if you also follow LSP, which goes hand in hand with DIP).
Again, this is huge for testability, as it allows you, once again, to inject a mock implementation of a dependency instead of a "production" implementation into your object being tested, while still testing the object in the exact form it will have while in production. This is key to unit testing "in isolation".
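A minimal sketch of that mock-vs-production swap (Java, hypothetical names): `CheckoutService` is exercised below with hand-rolled fakes, yet the class under test is byte-for-byte the one that would run in production with a real gateway injected.

```java
// The service depends on the abstraction only; it neither knows
// nor cares whether the gateway is real or a test double.
interface PaymentGateway {
    boolean charge(double amount);
}

class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

    String checkout(double amount) {
        return gateway.charge(amount) ? "paid" : "declined";
    }
}
```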
S = Single Responsibility Principle
So I'd expect to see a well organised folder/file structure and object hierarchy. Each class/piece of functionality should be named so that its purpose is obvious, and it should only contain the logic to perform that task.
If you saw huge manager classes with thousands of lines of code, that would be a sign that single responsibility wasn't being followed.
O = Open/closed Principle
This is basically the idea that new functionality should be added through new classes, with minimal impact on (and minimal modification of) existing functionality.
I'd expect to see lots of use of object inheritance, sub-typing, interfaces and abstract classes to separate the design of a piece of functionality from its actual implementation, allowing others to come along and implement other versions alongside without affecting the original.
L = Liskov substitution principle
This has to do with the ability to treat sub-types as their parent type. This comes out of the box in C# if you are implementing a proper inherited object hierarchy.
I'd expect to see code treating common objects as their base type and calling methods on the base/abstract classes rather than instantiating and working on the sub-types themselves.
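That "code against the base type" style might look like this (Java sketch with hypothetical shape classes): the totaling code never instantiates or inspects a concrete subtype, it only speaks to the abstract base.

```java
import java.util.List;

// Subtypes are handled uniformly through their base type: the sum
// below never needs to know which concrete Shape it is holding.
abstract class Shape {
    abstract double area();
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Totals {
    static double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```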
I = Interface Segregation Principle
This is similar to SRP. Basically, you define smaller subsets of functionality as interfaces and work with those to keep your system decoupled (e.g. a FileManager might have the single responsibility of dealing with file I/O, but it could implement an IFileReader and an IFileWriter, which contain the specific method definitions for the reading and writing of files).
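The FileManager example can be sketched like so (Java, with an in-memory store so it stays self-contained; the role interfaces are renamed `FileReaderRole`/`FileWriterRole` here purely to avoid clashing with `java.io` names). One class fulfills both roles, but each client depends only on the role it actually uses:

```java
import java.util.HashMap;
import java.util.Map;

// Narrow role interfaces: read-only clients take FileReaderRole,
// write-only clients take FileWriterRole.
interface FileReaderRole {
    String read(String path);
}

interface FileWriterRole {
    void write(String path, String contents);
}

// A single class can still own all file I/O (its one responsibility)
// while exposing it through segregated interfaces.
class FileManager implements FileReaderRole, FileWriterRole {
    private final Map<String, String> store = new HashMap<>();
    public String read(String path) { return store.get(path); }
    public void write(String path, String contents) { store.put(path, contents); }
}
```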
D = Dependency Inversion Principle.
Again this relates to keeping a system decoupled. Perhaps you'd be on the lookout for a .NET dependency injection library being used in the solution, such as Unity or Ninject, or a service locator system such as AutoFacServiceLocator.
Best Answer
In general, no. History has shown that the SOLID principles all largely contribute to increased decoupling, which in turn has been shown to increase flexibility in code and thus your ability to be accommodating of change as well as making the code easier to reason about, test, reuse... in short, make your code cleaner.
Now, there can be cases where the SOLID principles collide with DRY (don't repeat yourself), KISS (keep it simple, stupid) or other principles of good OO design. And of course, they can collide with the reality of requirements, the limitations of humans, the limitations of our programming languages, and other obstacles.
In short, the SOLID principles will always lend themselves to clean code, but in some scenarios they'll lend themselves to it less than a conflicting alternative would. They're always good, but sometimes other things are more good.