The difference between self-types and trait inheritance in Scala

design-patterns object-oriented-design scala

Googling this topic turns up many answers. However, I don't feel that any of them do a good job of illustrating the difference between these two features. So I'd like to try one more time, specifically…

What is something that can be done with self-types and not with inheritance, and vice-versa?

To me, there should be some quantifiable, physical difference between the two, otherwise they are just nominally different.

If trait A extends B, or declares a self-type of B, don't both express that being a B is a requirement? Where is the difference?

Best Answer

If trait A extends B, then mixing in A gives you precisely B plus whatever A adds or extends. In contrast, if trait A has a self reference which is explicitly typed as B, then the ultimate parent class must also mix in B or a descendant type of B (and mix it in first, which is important).

That's the most important difference. In the first case, the precise type of B is crystallised at the point A extends it. In the second, the designer of the parent class gets to decide which version of B is used, at the point where the parent class is composed.
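A minimal sketch of the two forms (the trait and member names here are illustrative, not from any library):

```scala
trait B { def label: String = "B" }
trait Better extends B { override def label: String = "better B" }

// Inheritance: A1 is B plus extras, fixed at the point A1 is defined.
trait A1 extends B

// Self-type: A2 only demands that its eventual host also be a B;
// which B that is gets decided later, by whoever composes the class.
trait A2 { self: B => }

class WithPlainB  extends B      with A2
class WithBetterB extends Better with A2
// class Alone extends A2   // rejected: self-type B is not satisfied
```

Note that `A2` itself never names a concrete `B`; the composing class supplies one, which is exactly the deferred decision described above.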

Another difference arises where A and B provide methods of the same name. Where A extends B, A's method overrides B's. Where A is mixed in after B, A's method simply wins by linearization order.
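A small sketch of that winning-by-position behaviour (names are illustrative):

```scala
trait B { def who: String = "B" }
trait A extends B { override def who: String = "A" }

// A appears after B, so A sits later in C's linearization and its who wins.
class C extends B with A
```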

The typed self reference gives you much more freedom; the coupling between A and B is loose.

UPDATE:

Since you're not clear about the benefit of these differences...

If you use direct inheritance, then you create trait A which is B+A. You have set the relationship in stone.

If you use a typed self reference, then anybody who wants to use your trait A in class C could

  • Mix B and then A into C.
  • Mix a subtype of B and then A into C.
  • Mix A into C, where C is a subclass of B.

And this is not the limit of their options, given the way Scala allows you to instantiate a trait directly with a code block as its constructor.
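Those composition options can be sketched like this (trait and member names are made up for illustration):

```scala
trait B { def name: String = "B" }
trait SubB extends B { override def name: String = "SubB" }

trait A { self: B =>
  // A can use B's members through the self reference.
  def describe: String = s"A over $name"
}

class C1 extends B with A        // mix B, then A, into C
class C2 extends SubB with A     // mix a subtype of B, then A
class BImpl extends B
class C3 extends BImpl with A    // C itself is a subclass of B

// And the on-the-spot option: instantiate a composition directly,
// refining B in the code block that serves as its constructor.
val adHoc = new B with A { override def name: String = "ad hoc B" }
```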

As for the difference between A's method winning because A is mixed in last, and A extending B directly, consider this...

Where you mix in a sequence of traits, whenever method foo() is invoked, the compiler goes to the last trait mixed in to look for foo(), then (if not found) it traverses the sequence to the left until it finds a trait which implements foo() and uses that. A trait also has the option to call super.foo(), which likewise traverses the sequence to the left until it finds an implementation, and so on.
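That leftward traversal can be demonstrated with a pair of stackable traits (a sketch; the names are invented):

```scala
trait Base  { def foo(): String = "base" }
trait Shout extends Base { override def foo(): String = super.foo().toUpperCase }
trait Excl  extends Base { override def foo(): String = super.foo() + "!" }

// Linearization: C, Excl, Shout, Base. A call to foo() starts at Excl
// (last mixed in); each super.foo() moves one step left in the sequence.
class C extends Base with Shout with Excl
```

Here `(new C).foo()` reaches Excl first, whose `super.foo()` lands on Shout, whose `super.foo()` finally lands on Base.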

So if A has a typed self reference to B and the writer of A knows that B implements foo(), A can call foo() (through its self reference) knowing that if nothing else provides foo(), B will. However, the creator of class C has the option to drop in any other trait which implements foo(), and A will get that implementation instead.
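A sketch of that substitution (the decorating trait here is invented for illustration):

```scala
trait B { def foo(): String = "B.foo" }
trait Loud extends B { override def foo(): String = "LOUD " + super.foo() }

trait A { self: B =>
  // foo() dispatches on the final class's linearization, so A gets
  // whichever implementation the composer put in front of B — or B's own.
  def report(): String = s"A sees: ${foo()}"
}

class C1 extends B with A            // A falls back on B's foo()
class C2 extends B with Loud with A  // the composer dropped in Loud instead
```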

Again, this is much more powerful and less limiting than A extending B and directly calling B's version of foo().