Object-Oriented Design – When to Use Traits vs Inheritance and Composition

object-oriented-design

There are three common ways, as far as I know, to implement reusability in OOP:

  1. Inheritance: usually to represent an is-a relationship (a duck is-a bird)
  2. Composition: usually to represent a has-a relationship (a car has-an engine)
  3. Traits (e.g. the trait keyword in PHP): …not really sure about this
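For concreteness, the first two approaches look something like this (a minimal sketch with hypothetical class names):

```scala
// Inheritance: a Duck is-a Bird, so it gets fly() from its superclass.
class Bird { def fly(): String = "flying" }
class Duck extends Bird

// Composition: a Car has-an Engine as a member, and delegates to it.
class Engine { def start(): String = "vroom" }
class Car {
  val engine = new Engine
  def start(): String = engine.start()
}
```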

While it looks to me that traits can implement both has-a and is-a relationships, I am not really sure what kind of modelling they were intended for. What kinds of situations were traits designed for?

Best Answer

Traits are another way to do composition. Think of them as a way to compose all the parts of a class at compile time (or JIT compile time), assembling the concrete implementations of the parts you will need.
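As a minimal sketch of that idea (all names here are hypothetical, not from any particular library), each trait supplies one part, and the concrete class assembles exactly the parts it needs:

```scala
// Each trait contributes one piece of behaviour.
trait Logging {
  def log(msg: String): Unit = println(s"[log] $msg")
}

trait Timestamped {
  def timestamp: Long = System.currentTimeMillis()
}

// The concrete class is composed from its parts at compile time.
class EventRecorder extends Logging with Timestamped {
  def record(event: String): String = {
    log(s"recording $event")
    s"$event@$timestamp"
  }
}
```

The compiler checks that every mixed-in member resolves, so a missing or conflicting part is a compile error rather than a runtime surprise.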

Basically, you want to use traits when you find yourself making classes with different combinations of features. This situation comes up most often for people writing flexible libraries for others to consume. For example, here's the declaration of a unit test class I wrote recently using ScalaTest:

class TestMyClass
  extends WordSpecLike
  with Matchers
  with MyCustomTrait
  with BeforeAndAfterAll
  with BeforeAndAfterEach
  with ScalaFutures

Unit test frameworks have a ton of different configuration options, and every team has different preferences about how they want to do things. By putting the options into traits (which are mixed in using with in Scala), ScalaTest can offer all those options without having to create class names like WordSpecLikeWithMatchersAndFutures, or a ton of runtime boolean flags like WordSpecLike(enableFutures, enableMatchers, ...). This makes it easy to follow the Open/Closed Principle: you can add new features, and new combinations of features, simply by adding a new trait. It also makes it easier to follow the Interface Segregation Principle, because you can put functions that aren't universally needed into their own trait.
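A toy version of that combinatorial situation (hypothetical trait names, not ScalaTest's actual API) shows why mixins beat name mangling or flags:

```scala
// Each optional feature lives in its own trait.
trait SpecBase { def run(): Unit = () }
trait WithMatchers { def matches(a: Any, b: Any): Boolean = a == b }
trait WithFutures { def awaitMillis: Int = 100 }

// Each team declares exactly the combination it wants; adding a new
// feature means adding one trait, not N new framework classes.
class FastSpec extends SpecBase with WithMatchers
class AsyncSpec extends SpecBase with WithMatchers with WithFutures
```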

Traits are also a good way to share common code among several classes that don't belong in the same inheritance hierarchy. Inheritance is a very tightly coupled relationship, and you shouldn't pay that cost if you can help it. Traits are a much more loosely coupled relationship. In my example above, I used MyCustomTrait to share a mock database implementation between several otherwise unrelated test classes.
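A sketch of that pattern (hypothetical names, not the actual MyCustomTrait): the shared fixture lives in one trait, and unrelated classes mix it in without any common superclass.

```scala
// A shared in-memory "mock database" fixture.
trait MockDatabase {
  private val store = scala.collection.mutable.Map.empty[String, String]
  def put(k: String, v: String): Unit = store(k) = v
  def get(k: String): Option[String] = store.get(k)
}

// Otherwise unrelated test classes reuse the fixture without
// being forced into a shared inheritance hierarchy.
class UserServiceTest extends MockDatabase
class OrderServiceTest extends MockDatabase
```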

Dependency injection achieves many of the same goals, but at runtime based on user input rather than at compile time based on programmer input. Traits are also intended more for dependencies that are semantically part of the same class: you're assembling the parts of one class rather than making calls out to other classes with their own responsibilities.

Dependency injection frameworks also wire things together at compile time based on programmer input, but they are largely a workaround for programming languages without proper trait support. Traits bring these dependencies into the realm of the compiler's type checker, with cleaner syntax, a simpler build process, and a clearer distinction between compile-time and runtime dependencies.
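The contrast can be sketched like this (hypothetical names; a simplification of what DI frameworks actually do):

```scala
trait Clock { def now(): Long }

// Dependency injection: the collaborator arrives at runtime
// through the constructor, chosen by whoever wires the object graph.
class InjectedReporter(clock: Clock) {
  def report(): Long = clock.now()
}

// Trait mixin: the dependency is bound at compile time, and the
// type checker guarantees now() is implemented.
trait SystemClock extends Clock {
  def now(): Long = System.currentTimeMillis()
}
class MixedInReporter extends SystemClock {
  def report(): Long = now()
}
```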
