Here's the actual principle:
Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.
And the excellent Wikipedia summary:
It states that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e., objects of type S may be substituted for objects of type T) without altering any of the desirable properties of that program (correctness, task performed, etc.).
And some relevant quotes from the paper:
What is needed is a stronger requirement that constrains the behavior of sub-types: properties that can be proved using the specification of an object’s presumed type should hold even though the object is actually a member of a subtype of that type...
A type specification includes the following information:
- The type’s name;
- A description of the type's value space;
- For each of the type's methods:
--- Its name;
--- Its signature (including signaled exceptions);
--- Its behavior in terms of pre-conditions and post-conditions.
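These elements of a specification map naturally onto source code plus documentation. As a minimal sketch (the Counter type below is my own illustration, not an example from the paper), the name and signature live in the declaration, while value space, pre-conditions, and post-conditions are usually documented in comments or enforced with checks:

```java
// Hypothetical illustration of a type specification:
//   type name:   Counter
//   value space: the non-negative integers
class Counter {
    private int value = 0; // invariant: value >= 0

    // Method: increment
    //   signature:      int increment(), no exceptions signaled
    //   pre-condition:  none
    //   post-condition: the result is one greater than the previous value
    int increment() {
        value = value + 1;
        return value;
    }

    int current() {
        return value;
    }
}
```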
So on to the question:
Do I understand correctly that the Liskov Substitution Principle cannot be observed in languages where objects can inspect themselves, as is usual in duck-typed languages?
No.
A.class returns a class. B.class returns a class. Since you can make the same call on the more specific type and get a compatible result, LSP holds. The issue is that with dynamic languages, you can still call things on the result expecting them to be there.
But let's consider a statically typed, structural (duck-typed) language. In this case, A.class would return a type with a constraint that it must be A or a subtype of A. This provides the static guarantee that any subtype of A must provide a method T.class whose result is a type satisfying that constraint.
This makes a stronger assertion: LSP holds in languages that support duck typing, and any violation of LSP in something like Ruby stems from ordinary dynamic misuse rather than an incompatibility in the language design.
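Java, for instance, already expresses exactly this constraint through a covariant return type: calling getClass() on a reference of static type A yields a Class&lt;? extends A&gt;, so every subtype satisfies the constraint automatically. A small sketch (the class names A and B are just the ones used above):

```java
class A {}
class B extends A {}

class GetClassDemo {
    static Class<? extends A> classOf(A a) {
        // The compiler statically guarantees the result is A or a subtype of A.
        return a.getClass();
    }
}
```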
To truly understand how strengthening pre-conditions and weakening post-conditions violates the principle Barbara Liskov published, it is best to look at a practical example.
For the demonstration we will need a few classes interacting with each other.
First we have a ValidParent class, which sets some rules about a negate method. The rules for the method are the following:
- it accepts only positive numbers as its parameter (excluding zero) and throws on an invalid value,
- it negates the passed value and returns it - effectively only ever returning a negative number.
I.e. the input parameter MUST be > 0 (pre-condition) and the return value is guaranteed to be < 0 (post-condition).
Besides this the class also contains another method, doSomething, which does some very cool stuff (for demonstration purposes it only writes a line to the console).
class ValidParent
{
    public int negate(int positiveNumber)
    {
        if (!(positiveNumber > 0)) {
            throw new IllegalArgumentException("The method only accepts positive numbers.");
        }
        return positiveNumber * -1;
    }

    public void doSomething()
    {
        System.out.println("Otherwise very useful method.");
    }
}
This class alone is not enough to demonstrate the problems with strengthening pre-conditions and weakening post-conditions; for that we will need the Another class, which uses the ValidParent.
This Another class takes a ValidParent as a dependency and then uses it to conduct some operations in its doOperation method.
Let's imagine you're the programmer of the Another class and you are programming the doOperation method in which you will be using the ValidParent instance. Because of that you need to know how the ValidParent class works, so you look at the documentation and it tells you the following about one of its methods:

The negate method accepts only positive numbers as its parameter (excluding zero) and throws on an invalid value; it negates the passed value and returns it - effectively only ever returning a negative number.

With that in mind you know that should the negate method return a value, it will always be negative and never zero, so you program your doOperation method like this:
class Another
{
    private ValidParent validParent;

    public Another(ValidParent validParent)
    {
        this.validParent = validParent;
    }

    public int doOperation(int positiveDivisor)
    {
        try {
            validParent.doSomething();
            int negated = validParent.negate(positiveDivisor);
            // Safe by ValidParent's post-condition: negated < 0, never zero.
            return 1 / negated;
        } catch (IllegalArgumentException ex) {
            System.out.println(
                "You passed an unsupported value to doOperation method." +
                " Value: " + positiveDivisor + "." +
                " Method only accepts positive values.");
            return 0;
        }
    }
}
And you are instantiating the Another instance as follows:
new Another(new ValidParent());
You run some tests, passing positive, negative and zero values to the doOperation method, and it all works as expected: dividing 1 by the negated value when a valid value is passed, otherwise writing to the console and returning 0. You are happy with your result.
How strengthened pre-condition breaks applications
Some time passes and a new piece of logic is introduced to your system. This logic says that, besides supporting positive values, in one specific place of the program only positive values greater than 10 are supported by the negate method, i.e. you are required to strengthen the pre-condition, because you are narrowing the set of accepted values from all positive numbers to only positive numbers greater than 10.
So you extend the ValidParent and create its child.
class StrengthenedPreConditions extends ValidParent
{
    @Override
    public int negate(int positiveNumber)
    {
        if (!(positiveNumber > 10)) {
            throw new IllegalArgumentException("The method only accepts positive numbers greater than 10.");
        }
        return positiveNumber * -1;
    }
}
So far your code works well, until a new developer joins your team. This new developer is doing some profiling, and during it, by human error, changes the following line:
new Another(new ValidParent());
to:
new Another(new StrengthenedPreConditions());
As expected, the code compiles without any problems, but one day you are watching the console and suddenly see very strange messages:
You passed an unsupported value to doOperation method. Value: 5. Method only accepts positive values.
You passed an unsupported value to doOperation method. Value: 1. Method only accepts positive values.
You passed an unsupported value to doOperation method. Value: 10. Method only accepts positive values.
The messages really are strange, because all three values, 5, 1 and 10, are in fact positive. You start to inspect where the problem is and locate it within the Another class. You navigate to ValidParent and have no idea what is wrong, because ValidParent supports positive values, so it should support these three values as well. But then you realize there's a child of the ValidParent class and you find the issue. By strengthening the pre-conditions you broke another class, which was counting on ValidParent to accept ALL positive values, not just some of them.
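The failure is easy to reproduce in isolation. A condensed sketch of the two classes (same contracts as above, compressed for brevity) shows that a value the base contract accepts is rejected when the subtype is substituted behind the same reference:

```java
class ValidParent {
    // pre-condition: positiveNumber > 0
    public int negate(int positiveNumber) {
        if (!(positiveNumber > 0)) {
            throw new IllegalArgumentException("Only positive numbers accepted.");
        }
        return positiveNumber * -1;
    }
}

class StrengthenedPreConditions extends ValidParent {
    // strengthened pre-condition: positiveNumber > 10
    @Override
    public int negate(int positiveNumber) {
        if (!(positiveNumber > 10)) {
            throw new IllegalArgumentException("Only numbers greater than 10 accepted.");
        }
        return positiveNumber * -1;
    }
}
```

A caller holding a ValidParent reference may legitimately pass 5; with the subtype substituted in, the very same call now throws.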
How weakened post-condition breaks applications
More time passes and for some reason a new class finds its way into your system. Once again, this class is a child of ValidParent and overrides the negate method.
class WeakenedPostCondition extends ValidParent
{
    @Override
    public int negate(int positiveNumber)
    {
        return 0;
    }
}
The method returns 0 and does nothing else. By returning zero, you are weakening the post-condition, which, as stated by ValidParent, is: effectively only ever returning a negative number. But your WeakenedPostCondition class returns 0, a value which is not within the set of values determined by ValidParent.
Let's see how we got to the point of returning only negative values from the negate method in the first place:
- The method returns the int data type.
- The method will never return a positive number - strengthening.
- The method will never return zero - strengthening.
- The method now returns only negative numbers.
By returning zero from the child, you are omitting one strengthening step, thus weakening the post-condition.
As in the previous example with the strengthened pre-condition, a similar mistake happens in your code at some point, replacing:
new Another(new ValidParent());
with:
new Another(new WeakenedPostCondition());
Once again, the code compiles, but one day you receive an email:
java.lang.ArithmeticException: / by zero, thrown by Another::doOperation (Line 16).
You look into the class, once again inspect the ValidParent and see nothing wrong there - there is no possible way ValidParent::negate would ever return zero, so how could a division by zero ever happen? Then you notice the WeakenedPostCondition child and it all clicks.
In this case your application crashed completely, because the Another::doOperation method expected the negated value always to be non-zero and so performed the division (the non-zero value was guaranteed by the ValidParent class), but one of ValidParent's children broke that condition.
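A condensed repro of the crash (the division here is done in int, so the broken post-condition surfaces as Java's ArithmeticException rather than a silent floating-point infinity):

```java
class ValidParent {
    // post-condition: result < 0, never zero
    public int negate(int positiveNumber) {
        if (!(positiveNumber > 0)) {
            throw new IllegalArgumentException("Only positive numbers accepted.");
        }
        return positiveNumber * -1;
    }
}

class WeakenedPostCondition extends ValidParent {
    @Override
    public int negate(int positiveNumber) {
        return 0; // violates the post-condition: result is not < 0
    }
}

class Divider {
    // Relies on the post-condition: negate never returns zero.
    static int reciprocalOfNegated(ValidParent p, int n) {
        return 1 / p.negate(n); // int division: throws ArithmeticException on zero
    }
}
```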
Best Answer
Very simple answer: no.
The point of the LSP is that S should be substitutable for T. So if T implements a delete function, S should implement it too and should perform a delete when called. However, S is free to add additional functionality over and above what T provides. Consumers of a T, when given an S, would be unaware of this extra functionality, but it's allowed to exist for consumers of S directly to utilise. A highly contrived example of how the principle can be violated might be:
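The answer's original code block was not preserved in this copy; the following sketch reconstructs the idea in Java (the Store/ReadOnlyStore names and methods are my own, not the original's): a subtype whose delete compiles and is callable through the base reference, yet silently breaks the behavioural contract.

```java
import java.util.HashMap;
import java.util.Map;

class Store {
    protected final Map<String, String> items = new HashMap<>();

    void put(String key, String value) { items.put(key, value); }

    // Contract: after delete(key), contains(key) is false.
    void delete(String key) { items.remove(key); }

    boolean contains(String key) { return items.containsKey(key); }
}

class ReadOnlyStore extends Store {
    // Violation: the override silently refuses to delete, so consumers
    // holding a Store reference are misled about the object's behaviour.
    @Override
    void delete(String key) { /* intentionally does nothing */ }
}
```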
Slightly more complex answer: no, as long as you don't start affecting the state or other expected behaviour of the base type.
For example, the following would be a violation:
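(The answer's code block was lost in this copy; this sketch reconstructs the idea in Java, with field and method names of my own choosing. Point2D exposes no mutators, while the subtype adds them, so the "immutable" state can be observed changing through a Point2D reference.)

```java
class Point2D {
    protected int x;
    protected int y;

    Point2D(int x, int y) { this.x = x; this.y = y; }

    int getX() { return x; }
    int getY() { return y; }
    // No setters: a Point2D's state is intended to be fixed at construction.
}

class MyPoint2D extends Point2D {
    MyPoint2D(int x, int y) { super(x, y); }

    // Circumvents the intended immutability -- breaks the history constraint.
    void setX(int x) { this.x = x; }
    void setY(int y) { this.y = y; }
}
```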
The type Point2D is immutable; its state cannot be changed. With MyPoint2D, I've deliberately circumvented that behaviour to make it mutable. That breaks the history constraint of Point2D and so is a violation of the LSP.