Here's the actual principle:
Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T.
And the excellent Wikipedia summary:
It states that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e., objects of type S may be substituted for objects of type T) without altering any of the desirable properties of that program (correctness, task performed, etc.).
And some relevant quotes from the paper:
What is needed is a stronger requirement that constrains the behavior of sub-types: properties that can be proved using the specification of an object’s presumed type should hold even though the object is actually a member of a subtype of that type...
A type specification includes the following information:
- The type’s name;
- A description of the type's value space;
- For each of the type's methods:
  - Its name;
  - Its signature (including signaled exceptions);
  - Its behavior in terms of pre-conditions and post-conditions.
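As a rough illustration (the class and method names here are invented, not from the paper), such a specification can be mirrored directly in code, with the value space documented and the pre- and post-conditions written as assertions:

```python
# A hypothetical type specification expressed in code; BoundedCounter
# and increment() are made-up names used only for illustration.
class BoundedCounter:
    """Value space: integers in the range [0, limit]."""

    def __init__(self, limit):
        self.limit = limit
        self.value = 0

    def increment(self):
        """Pre-condition: value < limit.  Post-condition: value grew by 1."""
        assert self.value < self.limit, "pre-condition violated"
        old = self.value
        self.value += 1
        assert self.value == old + 1, "post-condition violated"
```

Under the behavioral-subtyping rule, a subtype may demand less (weaken the pre-condition) and promise more (strengthen the post-condition) and remain substitutable; doing the reverse breaks substitutability.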
So on to the question:
Do I understand correctly that the Liskov Substitution Principle cannot be observed in languages where objects can inspect themselves, as is usual in duck-typed languages?
No.
A.class returns a class. B.class returns a class.
Since you can make the same call on the more specific type and get a compatible result, LSP holds. The issue is that with dynamic languages, you can still call things on the result expecting them to be there.
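A Python sketch of the same point, with type() standing in for Ruby's .class (the method name frobnicate below is made up):

```python
class A:
    pass

class B(A):
    pass

# The same call on the subtype gives a compatible result, so LSP holds
# for this call: both return a class object.
assert isinstance(type(A()), type)
assert isinstance(type(B()), type)

# The dynamic-language hazard: nothing stops a caller from assuming an
# extra method exists on the result.  'frobnicate' is an invented name.
try:
    type(B()).frobnicate()
except AttributeError:
    print("fails at runtime, not at compile time")
```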
But let's consider a statically typed, structural (duck) typed language. In this case, A.class would return a type with the constraint that it must be A or a subtype of A. This provides the static guarantee that any subtype of A must provide a method T.class whose result is a type satisfying that constraint.
This makes an even stronger claim: LSP holds in languages that support duck typing, and any violation of LSP in something like Ruby arises from ordinary dynamic misuse rather than from a language-design incompatibility.
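Python's typing.Protocol can approximate what such a statically checked structural constraint looks like (the protocol and method names here are invented for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasKlass(Protocol):
    """Structural constraint: anything with a klass() method returning a type."""
    def klass(self) -> type: ...

class A:
    def klass(self) -> type:
        return type(self)

class B(A):
    pass

# Any subtype of A inherits klass(), so both satisfy the structural
# constraint; a static checker such as mypy could verify this before runtime.
assert isinstance(A(), HasKlass)
assert isinstance(B(), HasKlass)
```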
It's a lot simpler than that quote makes it sound, accurate as the quote is.
When you look at an inheritance hierarchy, imagine a method which receives an object of the base class. Now ask yourself: are there any assumptions that someone editing this method might make which would be invalid for a subclass?
For example, originally seen on Uncle Bob's site (broken link removed):
// Assumes a Rectangle base class with virtual double Width and Height
// properties and a (width, height) constructor.
public class Square : Rectangle
{
    public Square(double width) : base(width, width)
    {
    }

    public override double Width
    {
        set
        {
            base.Width = value;
            base.Height = value;
        }
        get
        {
            return base.Width;
        }
    }

    public override double Height
    {
        set
        {
            base.Width = value;
            base.Height = value;
        }
        get
        {
            return base.Height;
        }
    }
}
Seems fair enough, right? I've created a specialist kind of Rectangle called Square, which maintains that Width must equal Height at all times. A square is a rectangle, so it fits with OO principles, doesn't it?
But wait, what if someone now writes this method:
public void Enlarge(Rectangle rect, double factor)
{
    rect.Width *= factor;
    rect.Height *= factor;
}
Not cool. But there's no reason the author of this method should have known there could be a problem.
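To make the breakage concrete, here is the same scenario sketched in Python (a translation of the C# above, not the original article's code):

```python
class Rectangle:
    def __init__(self, width, height):
        self._width = width
        self._height = height

    @property
    def width(self):
        return self._width

    @width.setter
    def width(self, value):
        self._width = value

    @property
    def height(self):
        return self._height

    @height.setter
    def height(self, value):
        self._height = value


class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

    # Maintain the invariant width == height by coupling both setters.
    @Rectangle.width.setter
    def width(self, value):
        self._width = self._height = value

    @Rectangle.height.setter
    def height(self, value):
        self._width = self._height = value


def enlarge(rect, factor):
    rect.width *= factor
    rect.height *= factor


square = Square(5)
enlarge(square, 2)
# The caller expected 10 x 10, but the coupled setters double twice:
print(square.width, square.height)  # → 20 20
```

Doubling a 5 x 5 square should give 10 x 10, but the first assignment sets both sides to 10, so the second doubles from 10 and leaves a 20 x 20 square: the subclass silently broke an assumption that held for every plain Rectangle.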
Every time you derive one class from another, think about the base class and what people might assume about it (such as "it has a Width and a Height, and they are independent of each other"). Then ask: "do those assumptions remain valid in my subclass?" If not, rethink your design.
Best Answer
You can answer questions like this by going back to first principles and asking "what is the Liskov substitution principle intended to accomplish?" And the answer is that code that works correctly with the superclass should also work correctly with all subclasses. (For CS theory purposes there is a more specific mathematical definition, but for the workaday programmer that's likely only to be a concern if you are creating a programming language.)
So, why would an abstract class be easier? There really is no situation where using a concrete class means you can't possibly follow the Liskov substitution principle in your subclasses. After all, the superclass defines a contract for the subclasses either way. The fact that it also contains one possible implementation of the contract is neither here nor there.
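A small sketch of that point (the class names are invented): whether the superclass is abstract or concrete, the contract a subclass must honor is the same.

```python
from abc import ABC, abstractmethod

# The contract stated abstractly...
class StackContract(ABC):
    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self):
        """Contract: return the most recently pushed item (LIFO)."""

# ...and the same contract stated concretely.  Either way, a subclass
# that returned items in FIFO order would break what callers rely on.
class ListStack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class CountingStack(ListStack):
    """An LSP-respecting subclass: adds behaviour, keeps the LIFO contract."""
    def __init__(self):
        super().__init__()
        self.pushes = 0

    def push(self, item):
        self.pushes += 1
        super().push(item)
```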
However, one might make the argument that psychologically it's easier to focus on a contract when making an abstract class, and perhaps a concrete class might take an implementation detail and accidentally make that part of the contract. It seems sort of plausible without being really convincing.
As for never using a concrete class: perhaps a good rule of thumb is to be a bit skeptical of any design philosophy that says you should never use a standard, not-totally-broken feature of a language. In this case, the chance that you might, maybe, be more likely to make a mistake designing the contract of a concrete class versus an abstract one would be at best a minor factor in that decision.