Yes, SOLID is a very good way to design code that can be easily tested. As a short primer:
S - Single Responsibility Principle: An object should do exactly one thing, and should be the only object in the codebase that does that one thing. For instance, take a domain class, say an Invoice. The Invoice class should represent the data structure and business rules of an invoice as used in the system. It should be the only class that represents an invoice in the codebase. This can be further broken down to say that a method should have one purpose and should be the only method in the codebase that meets this need.
By following this principle, you increase the testability of your design by decreasing the number of tests you have to write that test the same functionality on different objects, and you also typically end up with smaller pieces of functionality that are easier to test in isolation.
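As a minimal sketch of this idea in TypeScript (the answers below use C# and F#; the Invoice fields and the repository here are invented purely for illustration), the invoice's data and rules live in one class, and persistence, a different responsibility, lives in another:

```typescript
// Invoice owns only invoice data and its business rules.
class Invoice {
  constructor(public readonly id: number, private lines: number[]) {}

  // One business rule: the invoice total.
  total(): number {
    return this.lines.reduce((sum, amount) => sum + amount, 0);
  }
}

// Persistence is a separate responsibility, in a separate class.
class InvoiceRepository {
  private store = new Map<number, Invoice>();
  save(invoice: Invoice): void { this.store.set(invoice.id, invoice); }
  find(id: number): Invoice | undefined { return this.store.get(id); }
}
```

Each class now has one reason to change, and Invoice.total() can be tested without touching storage at all.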
O - Open/Closed Principle: A class should be open for extension, but closed for modification. Once an object exists and works correctly, ideally there should be no need to go back into that object to make changes that add new functionality. Instead, the object should be extended, either by deriving from it or by plugging new or different dependency implementations into it, to provide that new functionality. This avoids regression; you can introduce the new functionality when and where it is needed, without changing the behavior of the object as it is already used elsewhere.
By adhering to this principle, you generally increase the code's ability to tolerate "mocks", and you also avoid having to rewrite tests to anticipate new behavior; all existing tests for an object should still work on the un-extended implementation, while new tests for new functionality using the extended implementation should also work.
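One way to picture "extend by plugging in a dependency" is a sketch like the following (TypeScript; DiscountPolicy and the flat-discount rule are illustrative, not from the original). New behaviour arrives as a new implementation, and the existing calculator is never edited:

```typescript
// The pricing rule is the extension point; PriceCalculator stays closed.
interface DiscountPolicy {
  apply(amount: number): number;
}

class NoDiscount implements DiscountPolicy {
  apply(amount: number): number { return amount; }
}

class PriceCalculator {
  constructor(private policy: DiscountPolicy) {}
  price(amount: number): number { return this.policy.apply(amount); }
}

// New functionality is added as a new class; existing code is untouched.
class SeasonalDiscount implements DiscountPolicy {
  apply(amount: number): number { return amount - 10; } // flat 10 off
}
```

Existing tests against PriceCalculator with NoDiscount keep passing unchanged; only new tests for SeasonalDiscount are added.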
L - Liskov Substitution Principle: A class A, dependent upon class B, should be able to use any X:B without knowing the difference. This basically means that anything you use as a dependency should have similar behavior as seen by the dependent class. As a short example, say you have an IWriter interface that exposes Write(string), which is implemented by ConsoleWriter. Now you have to write to a file instead, so you create FileWriter. In doing so, you must make sure that FileWriter can be used the same way ConsoleWriter was (meaning that the only way the dependent can interact with it is by calling Write(string)), and so any additional information that FileWriter needs to do that job (like the path and file to write to) must be provided from somewhere other than the dependent.
This is huge for writing testable code, because a design that conforms to the LSP can have a "mocked" object substituted for the real thing at any point without changing expected behavior, allowing for small pieces of code to be tested in isolation with the confidence that the system will then work with the real objects plugged in.
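The IWriter example above can be sketched like this (TypeScript rather than the answer's implied C#; the in-memory `contents` field stands in for real file I/O to keep it self-contained):

```typescript
interface IWriter {
  write(text: string): void;
}

class ConsoleWriter implements IWriter {
  write(text: string): void { console.log(text); }
}

// FileWriter needs extra information (the path), but it comes from the
// constructor, not from the dependent: callers still only call write(string).
class FileWriter implements IWriter {
  public contents = ""; // stands in for a real file, for brevity
  constructor(private path: string) {}
  write(text: string): void { this.contents += text; }
}

// The dependent works with any IWriter without knowing the difference.
function report(writer: IWriter): void {
  writer.write("report complete");
}
```

Because `report` only sees the IWriter role, a mock implementing `write` can be substituted in a test exactly as FileWriter is substituted for ConsoleWriter here.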
I - Interface Segregation Principle: An interface should have as few methods as is feasible to provide the functionality of the role defined by the interface. Simply put, many small interfaces are better than a few large ones. This is because a large interface has more reasons to change, and causes more changes elsewhere in the codebase that may not be necessary.
Adherence to ISP improves testability by reducing the complexity of systems under test and of dependencies of those SUTs. If the object you are testing depends on an interface IDoThreeThings which exposes DoOne(), DoTwo() and DoThree(), you must mock an object that implements all three methods even if the object only uses the DoTwo method. But, if the object depends only on IDoTwo (which exposes only DoTwo), you can more easily mock an object that has that one method.
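The IDoThreeThings/IDoTwo contrast can be sketched directly (TypeScript; method bodies and the return values are invented for illustration):

```typescript
// The fat interface: a test double must implement all three methods,
// even when the system under test only ever calls doTwo().
interface IDoThreeThings {
  doOne(): void;
  doTwo(): number;
  doThree(): void;
}

// The segregated role: only what the dependent actually uses.
interface IDoTwo {
  doTwo(): number;
}

class Consumer {
  constructor(private dep: IDoTwo) {}
  useIt(): number { return this.dep.doTwo(); }
}

// Mocking the narrow role is a one-liner.
const mock: IDoTwo = { doTwo: () => 42 };
```

With the narrow interface, the mock states exactly what the test cares about and nothing else.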
D - Dependency Inversion Principle: Neither concretions nor abstractions should depend on concretions; both should depend on abstractions. This principle directly enforces the tenet of loose coupling. An object should never have to know what an object IS; it should instead care what an object DOES. So, the use of interfaces and/or abstract base classes is always to be preferred over the use of concrete implementations when defining properties and parameters of an object or method. That allows you to swap one implementation for another without having to change the usage (if you also follow LSP, which goes hand in hand with DIP).
Again, this is huge for testability, as it allows you, once again, to inject a mock implementation of a dependency instead of a "production" implementation into your object being tested, while still testing the object in the exact form it will have while in production. This is key to unit testing "in isolation".
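A small sketch of that injection seam (TypeScript; the Clock abstraction and the greeting rule are invented for illustration): the object under test depends only on an abstraction, so a fixed-value test double slots in exactly where the production concretion would:

```typescript
// The abstraction both sides depend on.
interface Clock {
  now(): number; // hour of day, 0-23
}

// High-level policy depends only on the abstraction.
class Greeter {
  constructor(private clock: Clock) {}
  greet(): string {
    return this.clock.now() < 12 ? "good morning" : "good afternoon";
  }
}

// Production concretion...
const systemClock: Clock = { now: () => new Date().getHours() };

// ...and deterministic test doubles, injected the same way.
const tenAm: Clock = { now: () => 10 };
const threePm: Clock = { now: () => 15 };
```

The Greeter tested with `tenAm` is the exact same class that runs with `systemClock` in production, which is what makes the isolated test meaningful.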
As Telastyn says, comparing the static definitions of functions:
public string Read(int id) { /*...*/ }
to
let read (id:int) = //...
You haven't really lost anything going from OOP to FP.
However, this is only part of the story, because functions and interfaces aren't only referred to in their static definitions. They're also passed around. So let's say our MessageQuery was used by another piece of code, a MessageProcessor. Then we have:
public void ProcessMessage(int messageId, IMessageQuery messageReader) { /*...*/ }
Now we can't directly see the method name IMessageQuery.Read or its parameter int id, but we can get there very easily through our IDE. More generally, the fact that we're passing an IMessageQuery rather than just any function from int to string means we're keeping that id parameter name metadata associated with this function.
On the other hand, for our functional version we have:
let read (id:int) (messageReader : int -> string) = // ...
So what have we kept and lost? Well, we still have the parameter name messageReader, which probably makes the type name (the equivalent of IMessageQuery) unnecessary. But now we've lost the parameter name id in our function.
There are two main ways around this:
Firstly, from reading that signature, you can already make a pretty good guess what's going to be going on. By keeping functions short, simple and cohesive and using good naming, you make it a lot easier to intuit or find this information. Once we got into reading the actual function itself, it'd be even simpler.
Secondly, it's considered idiomatic design in many functional languages to create small types to wrap primitives. In this case, the opposite is happening: instead of replacing a type name with a parameter name (IMessageQuery to messageReader), we can replace a parameter name with a type name. For example, int could be wrapped in a type called Id:
type Id = Id of int
Now our read signature becomes:
let read (id:int) (messageReader : Id -> string) = // ...
Which is just as informative as what we had before.
As a side note, this also provides us some of the compiler protection we had in OOP. Whereas the OOP version ensured we took specifically an IMessageQuery rather than just any old int -> string function, here we have a similar (but different) protection in that we're taking an Id -> string rather than just any old int -> string.
I'd be reluctant to say with 100% confidence that these techniques will always be just as good and informative as having the full information available on an interface, but I think from the above examples, you can say that most of the time, we can probably do just as good a job.
Best Answer
Dependency Inversion in OOP means that you code against an interface which is then provided by an implementation in an object.
Languages that support higher-order functions can often solve simple dependency inversion problems by passing behaviour as a function instead of an object which implements an interface in the OO sense.
In such languages, the function's signature can become the interface, and a function is passed in instead of a traditional object to provide the desired behaviour. The hole in the middle pattern is a good example of this.
It lets you achieve the same result with less code and more expressiveness, as you don't need to implement a whole class that conforms to an (OOP) interface to provide the desired behaviour for the caller. Instead, you can just pass a simple function definition. In short: code is often easier to maintain, more expressive and more flexible when one uses higher order functions.
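The hole in the middle pattern mentioned above can be sketched as follows (TypeScript rather than C#; the connection bookkeeping is invented for illustration): fixed setup and teardown surround a "hole" that the caller fills by passing a function:

```typescript
// Fixed "outside": setup and teardown. The variable middle is passed in.
function withConnection<T>(log: string[], body: (conn: string) => T): T {
  log.push("open");        // setup always runs
  try {
    return body("conn-1"); // the hole in the middle
  } finally {
    log.push("close");     // teardown always runs, even on error
  }
}

// The caller supplies only the behaviour that varies.
const trace: string[] = [];
const shouted = withConnection(trace, conn => conn.toUpperCase());
```

No interface and no class are needed to customize the middle step; the function parameter's signature is the whole contract.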
An example in C#
Traditional approach:
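A sketch of the traditional approach described here (TypeScript standing in for the author's C#; the IFilter interface is named below in the text, while EvenFilter and applyFilter are illustrative names): every filter must be a class implementing the interface:

```typescript
// The OO interface every filter must implement.
interface IFilter {
  matches(item: number): boolean;
}

// Each piece of filtering behaviour requires a whole class.
class EvenFilter implements IFilter {
  matches(item: number): boolean { return item % 2 === 0; }
}

// The caller depends on the interface and must construct a filter object.
function applyFilter(items: number[], filter: IFilter): number[] {
  return items.filter(i => filter.matches(i));
}
```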
With higher order functions:
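A sketch of the higher-order version (TypeScript standing in for the author's C#; the Filter type alias and names are illustrative): the function signature itself becomes the interface, so no filter classes are needed:

```typescript
// The function signature is the interface: no class required.
type Filter = (item: number) => boolean;

function applyFilter(items: number[], filter: Filter): number[] {
  return items.filter(filter);
}

// The caller passes the behaviour directly as a lambda.
const evens = applyFilter([1, 2, 3, 4], item => item % 2 === 0);
```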
Now the implementation and invocation become less cumbersome: we no longer need to implement filter classes or supply an IFilter implementation at all.
Of course, this can already be done with LINQ in C#. I just used this example to illustrate that it's easier and more flexible to use higher order functions instead of objects which implement an interface.