First, let's understand what the issue was:
The Staple job knows about many more methods than it uses today (e.g., the print method), but those methods remain available for it to use, should it ever want to.
However, the Job class "thinks" that the Staple class will be "a good citizen" and never use the print method at all.
There are several potentially big issues here -
- For some reason, the Staple job may start using the print method, by accident or intentionally.
- From then on, either any change to the print method may go untested,
- OR any change to the print method will trigger a regression test of the Staple job as well,
- AND any impact analysis for changes to the print job will necessarily involve an impact analysis of the Staple job too.
This is just the issue of Staple knowing about the print functions. Then there's the case of the Print job knowing all about stapling functions. Same problems.
Very soon, this system will reach a point where any change requires a full-blown impact analysis of each module and a full-blown regression test.
Another problem is that today, on this particular printer, every job that can be printed can also be stapled, and vice versa.
However, tomorrow, there could be a need to install the same firmware on a device that only prints or only staples. What then? The code already assumes that all Jobs are printable and stapleable. So any further granular breakdown / simplification of responsibilities is impossible.
In more recent terms, imagine a class called "AppleDevice" which has functions for MakePhoneCall as well as PlayMusic. Now your problem is while you can easily use this on an iPhone, you cannot use it for an iPod since the iPod cannot make phone calls.
So, the issue is not that the Job class is all-powerful. In fact, that's how it should be, so that it can act as a common link in the entire "workflow" where someone may scan a job, then print it, then staple it etc.
The problem is that the usage of all its methods is not restricted. Anyone and everyone could use and abuse any method whenever they want to, thus making the maintenance difficult.
Hence, the Dependency Injection approach of only telling users "exactly what they need to know, and nothing more" ensures that calling modules only use the code that they are meant to.
A sample implementation would look like:
interface IStapleableJob { void stapleYourself(); }
interface IPrintableJob { void printYourself(); }
class Job implements IStapleableJob, IPrintableJob {
....
}
class Staple {
public static void stapleAllJobs(List<IStapleableJob> jobs) {
for (IStapleableJob job : jobs) job.stapleYourself();
}
}
class Print {
public static void printAllJobs(List<IPrintableJob> jobs) {
for (IPrintableJob job : jobs) job.printYourself();
}
}
Here, even if you pass a Job object to the Staple and Print methods, they don't know that it's a Job, so they cannot call any methods they are not supposed to. Thus, when you make a change to a module, the scope of impact analysis and regression testing stays restricted. That's the problem that ISP solves.
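The sketch above can be filled out into a small runnable program. This is a minimal version of the same idea, keeping the interface and method names from the sample; the Job field, the job names, and the IspDemo driver class are illustrative assumptions:

```java
import java.util.List;

interface IStapleableJob { void stapleYourself(); }
interface IPrintableJob { void printYourself(); }

// Job is still "all-powerful": it can do everything the workflow needs.
class Job implements IStapleableJob, IPrintableJob {
    private final String name;
    Job(String name) { this.name = name; }
    public void stapleYourself() { System.out.println("Stapling " + name); }
    public void printYourself()  { System.out.println("Printing " + name); }
}

class Staple {
    // Sees only IStapleableJob, so it cannot call printYourself().
    public static void stapleAllJobs(List<? extends IStapleableJob> jobs) {
        for (IStapleableJob job : jobs) job.stapleYourself();
    }
}

class Print {
    // Sees only IPrintableJob, so it cannot call stapleYourself().
    public static void printAllJobs(List<? extends IPrintableJob> jobs) {
        for (IPrintableJob job : jobs) job.printYourself();
    }
}

public class IspDemo {
    public static void main(String[] args) {
        List<Job> jobs = List.of(new Job("report"), new Job("invoice"));
        Print.printAllJobs(jobs);   // prints both jobs
        Staple.stapleAllJobs(jobs); // staples both jobs
    }
}
```

Inside stapleAllJobs, the compiler itself enforces the restriction: the jobs are typed as IStapleableJob, so a call to printYourself() simply does not compile.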
Both are correct
The way I read it, the purpose of ISP (Interface Segregation Principle) is to keep interfaces small and focused: all interface members should have very high cohesion. Both definitions are intended to avoid "jack-of-all-trades-master-of-none" interfaces.
Interface segregation and SRP (Single Responsibility Principle) have the same goal: ensuring small, highly cohesive software components. They complement each other. Interface segregation ensures that interfaces are small, focused and highly cohesive. Following the single responsibility principle ensures that classes are small, focused and highly cohesive.
The first definition you mention focuses on implementers, the second on clients. Which, contrary to @user61852, I take to be the users/callers of the interface, not the implementers.
I think that your confusion stems from a hidden assumption in the first definition: that the implementing classes are already following the single responsibility principle.
To me the second definition, with the clients as the callers of the interface, is a better way of getting to the intended goal.
Segregating
In your question you state:
since this way my MyClass is able to implement only the methods it needs ( D() and C() ), without being forced to also provide dummy implementations for A(), B() and C():
But that is turning the world upside down.
- A class implementing an interface does not dictate what it needs in the interface it is implementing.
- The interfaces dictate what methods an implementing class should provide.
- The callers of an interface really are the ones that dictate what functionality they need the interface to provide for them and thus what an implementer should provide.
So when you are going to split IFat into smaller interfaces, which methods end up in which ISmall interface should be decided based on how cohesive the members are.
Consider this interface:
interface IEverythingButTheKitchenSink
{
void DoDishes();
void CleanSink();
void CutGreens();
void GrillMeat();
}
Which methods would you put in ICook and why? Would you put CleanSink together with GrillMeat just because you happen to have a class that does just that and a couple of other things, but nothing like any of the other methods? Or would you split it into two more cohesive interfaces, such as:
interface IClean
{
void DoDishes();
void CleanSink();
}
interface ICook
{
void CutGreens();
void GrillMeat();
}
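Under that split, an implementer picks only the contract it can honour. Here is a minimal sketch, rendered in Java to match the earlier sample (method names camelCased accordingly); the Dishwasher class and its printed messages are illustrative assumptions:

```java
interface IClean {
    void doDishes();
    void cleanSink();
}

interface ICook {
    void cutGreens();
    void grillMeat();
}

// A dishwasher cleans but does not cook, so it implements only IClean.
// No dummy cutGreens()/grillMeat() implementations are forced on it.
class Dishwasher implements IClean {
    public void doDishes()  { System.out.println("dishes done"); }
    public void cleanSink() { System.out.println("sink clean"); }
}
```

Callers that only need cleaning depend on IClean alone, so a change to the cooking methods never touches them.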
Interface declaration note
An interface definition should preferably live on its own in a separate unit, but if it absolutely needs to live with either caller or implementer, it should really be with the caller. Otherwise the caller gets an immediate dependency on the implementer, which defeats the purpose of interfaces altogether. See also: Declaring interface in the same file as the base class, is it a good practice? on Programmers and Why should we place interfaces with classes that use them rather than those that implement them? on StackOverflow.
Best Answer
As Telastyn says, when you compare the static definitions, an IMessageQuery interface with a single string Read(int id) method on the OOP side against a plain int -> string function on the FP side, you haven't really lost anything going from OOP to FP.
However, this is only part of the story, because functions and interfaces aren't only referred to in their static definitions. They're also passed around. So let's say our MessageQuery was read by another piece of code, a MessageProcessor, which takes an IMessageQuery as a parameter. At that call site we can't directly see the method name IMessageQuery.Read or its parameter int id, but we can get there very easily through our IDE. More generally, the fact that we're passing an IMessageQuery, rather than just any old function from int to string, means we're keeping that id parameter name metadata associated with this function.
On the other hand, our functional version's MessageProcessor takes just a plain int -> string function.
So what have we kept and lost? Well, we still have the parameter name messageReader, which probably makes the type name (the equivalent of IMessageQuery) unnecessary. But now we've lost the parameter name id in our function.
There are two main ways around this:
Firstly, from reading that signature, you can already make a pretty good guess at what's going on. By keeping functions short, simple and cohesive, and by using good naming, you make it a lot easier to intuit or find this information. Once we get into reading the actual function itself, it's even simpler.
Secondly, it's considered idiomatic design in many functional languages to create small types to wrap primitives. In this case the opposite is happening: instead of replacing a type name with a parameter name (IMessageQuery to messageReader), we can replace a parameter name with a type name. For example, int could be wrapped in a type called Id, so that our read signature becomes Id -> string, which is just as informative as what we had before.
As a side note, this also gives us some of the compiler protection we had in OOP. Whereas the OOP version ensured we took specifically an IMessageQuery rather than just any old int -> string function, here we have a similar (but different) protection: we're taking an Id -> string rather than just any old int -> string.
I'd be reluctant to say with 100% confidence that these techniques will always be just as good and informative as having the full information available on an interface, but I think from the above examples, you can say that most of the time we can probably do just as good a job.