What are the pros and cons of using final methods (in abstract classes)

Tags: extensibility, final, object-oriented-design

For the purpose of writing a coding style guide, how should final methods in software design be judged?

By final I mean it in the object-oriented sense: a class that can be subclassed provides methods, and the class uses special syntax to prevent subclasses from overriding certain of those methods.

In languages like Java, there are several ways to prevent behavior from being modified by subclasses, such as making methods final or using static methods (where applicable).
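
A minimal sketch of both mechanisms (the class and method names below are made up for illustration):

    // Sketch only: PaymentProcessor, process, charge and validate are hypothetical names.
    abstract class PaymentProcessor {

        // final: the class can be subclassed, but this method cannot be overridden.
        public final void process(long amountInCents) {
            validate(amountInCents);
            charge(amountInCents);
        }

        // Subclasses supply only the variable part.
        protected abstract void charge(long amountInCents);

        // static: not dispatched virtually, so it cannot be overridden either
        // (a subclass can only hide it with another static method).
        static void validate(long amountInCents) {
            if (amountInCents <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
        }
    }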

As an example, this style guide has a section "No final methods or classes": https://doc.nuxeo.com/corg/java-code-style/

No Final Methods or Classes

This hinders reusability. Nuxeo is a platform and we never know when it'll be useful to subclass something.

No private or Package-Private Methods or Fields.

This hinders reusability, for the same reason as above.

On the other hand Guava has classes with final methods, like https://guava.dev/releases/19.0/api/docs/com/google/common/collect/AbstractIterator.html
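
For example, AbstractIterator keeps hasNext() and next() final and asks subclasses to implement only computeNext(). A minimal sketch of subclassing it (the LineIterator class below is made up; the AbstractIterator API is Guava's):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.UncheckedIOException;

    import com.google.common.collect.AbstractIterator;

    // Hypothetical subclass: only computeNext() is implemented; hasNext() and next()
    // are final in AbstractIterator, so subclasses cannot break its iteration protocol.
    class LineIterator extends AbstractIterator<String> {
        private final BufferedReader reader;

        LineIterator(BufferedReader reader) {
            this.reader = reader;
        }

        @Override
        protected String computeNext() {
            try {
                String line = reader.readLine();
                return line != null ? line : endOfData(); // endOfData() signals exhaustion
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }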

The JDK (Java) has some classes with very few final methods, like ArrayList, AbstractList, some with several final methods like HashMap, and some with many final methods, like AbstractPipeline.

Some people will say this relates to the Open-closed principle (Clarify the Open/Closed Principle), but articles on that topic usually do not talk about final methods.

Another angle is that this relates to the composition-over-inheritance design debate (Why should I prefer composition over inheritance?), since the question of whether to make methods final only arises once you have decided to provide functionality via inheritance.

Best Answer

I think the arguments presented in Eric Lippert's blog post from 2004 Why Are So Many Of The Framework Classes Sealed? apply to "final methods" (in the Java sense of that term) as well.

Every time you write a method A (in a framework) which calls a non-final method B, A can no longer rely on B doing what it originally did, so A has to be designed far more robustly than if B were final. For example, the implementation of A may have to invest more effort in exception handling (when calling B), the exact behaviour of B and the constraints that apply when overriding it have to be documented more precisely, and A must be tested more thoroughly with different overridden variants of B. Moreover, more thought must be invested in the exact distribution of responsibilities between A and B.
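
A minimal sketch of that situation (ReportGenerator, generate and fetchRows are hypothetical names): A is final and has to code defensively around the overridable B.

    import java.util.List;

    // Sketch only: all names are hypothetical.
    abstract class ReportGenerator {

        // "A": because fetchRows() can be overridden, generate() must defend
        // against misbehaving overrides (exceptions, nulls, broken contracts).
        public final String generate() {
            List<String> rows;
            try {
                rows = fetchRows();
            } catch (RuntimeException e) {
                return "report unavailable: " + e.getMessage();
            }
            if (rows == null) {
                rows = List.of();
            }
            return String.join(System.lineSeparator(), rows);
        }

        // "B": its contract is part of the framework's API; it has to be documented
        // precisely and tested against generate() with different overrides.
        protected abstract List<String> fetchRows();
    }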

In fact, in an abstract class of a framework, the way overrideable methods are used internally becomes part of the API of that framework (see example in the comments by @wchargin). Once the framework has been published to the world, it becomes significantly harder to change the semantics of those methods.

So this makes it a tradeoff: by making B final, you make it easier to create a correct, tested and reliable implementation of method A, and you make it easier to refactor A and B inside the framework later, but you also make it harder to extend A from the outside. And if a framework's implementation guide favors making nothing final, I would be highly sceptical about the reliability of that piece of software.

Let me cite Eric's last paragraph, which applies perfectly here:

Obviously there is a tradeoff here. The tradeoff is between letting developers save a little time by allowing them to treat any old object as a property bag on the one hand, and developing a well-designed, OOPtacular, fully-featured, robust, secure, predictable, testable framework in a reasonable amount of time -- and I'm going to lean heavily towards the latter. Because you know what? Those same developers are going to complain bitterly if the framework we give them slows them down because it is half-baked, brittle, insecure, and not fully tested!


This older question (and its top answer) from 2014 may serve as an excellent answer here as well:

In C#, methods are "final" by default (in the Java meaning of that term), and one has to add the virtual keyword explicitly to make them overrideable. In Java, it is the other way round: every method is "virtual" by default, and one has to mark them as final to prevent this.
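
To illustrate the Java side of that difference (Base and Derived are hypothetical; the C# equivalents are only noted in comments):

    class Base {
        void greet() { System.out.println("hello"); }     // overridable ("virtual") by default
        final void id() { System.out.println("base"); }   // opting out of overriding
    }

    class Derived extends Base {
        @Override
        void greet() { System.out.println("hi"); }         // allowed without any extra keyword
        // void id() { ... }                               // would not compile: id() is final
    }
    // In C#, Base.greet() would have to be declared "virtual" and Derived.greet()
    // "override" to get the same dynamic dispatch.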

The top answer to that former question cites Anders Hejlsberg to explain the different "schools of thought" behind these approaches:

  • the school of thought he calls "academic" ("Everything should be virtual, because I might want to override it someday."), vs.

  • the "pragmatic" school of thought ("We've got to be real careful about what we make virtual.")

Let me finally say that the arguments of the latter school look more convincing to me, but YMMV.
