If you are modifying the base class, then it is not really closed, is it?
Think of the situation where you have released the library to the world. If you then change the behavior of your base class by modifying the overtime factor to 1.5, you have broken the expectations of everyone who uses your code on the assumption that the class was closed.
Really, to make the class closed yet still open for extension, you should be retrieving the overtime factor from an alternative source (a config file, maybe) or providing a virtual method that can be overridden.
If the class were truly closed, then after your change no test cases would fail (assuming you have 100% coverage with all your test cases), and I would assume that there is a test case that checks GetOvertimeFactor() == 2.0M.
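As a minimal sketch of the "virtual method" route (class and method names here are illustrative, not from the original code): the base class stays closed for modification, and the factor is changed by extension instead.

```java
import java.math.BigDecimal;

// Hypothetical example: the overtime factor is an overridable method,
// so nobody needs to edit the base class to change it.
class PayCalculator {
    // Subclasses may override this instead of modifying the base class.
    protected BigDecimal getOvertimeFactor() {
        return new BigDecimal("2.0");
    }

    BigDecimal overtimePay(BigDecimal hourlyRate, int overtimeHours) {
        return hourlyRate.multiply(getOvertimeFactor())
                         .multiply(BigDecimal.valueOf(overtimeHours));
    }
}

// Extension, not modification: existing callers of PayCalculator
// (and their test cases) are untouched.
class DiscountedPayCalculator extends PayCalculator {
    @Override
    protected BigDecimal getOvertimeFactor() {
        return new BigDecimal("1.5");
    }
}
```

Existing tests asserting the 2.0 factor keep passing, because that behavior was extended in a subclass rather than changed in place.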
Don't Over-Engineer
But don't take this open-closed principle to its logical conclusion and make everything configurable from the start (that is over-engineering). Only define the bits you currently need.
The closed principle does not preclude you from re-engineering the object. It just precludes you from changing the currently defined public interface of your object (protected members are part of the public interface). You can still add more functionality, as long as the old functionality is not broken.
Given some divergences between languages, this can be a tricky topic. Thus, I'm formulating the following commentary in a way that tries to be as comprehensive as I can within the realm of OO.
First of all, the so-called "Single Responsibility Principle" is an explicitly declared reflection of the concept of cohesion. Reading the literature of the time (around the 1970s), people were (and still are) struggling to define what a module is, and how to construct one in a way that preserves nice properties. So, they would say "here is a bunch of structures and procedures, I'll make a module out of them", but with no criteria as to why this arbitrary set of things is packaged together, the organization might end up making little sense -- having little "cohesion". Hence, discussions on criteria emerged.
So, the first thing to note here is that, so far, the debate is about organization and its effects on maintenance and understandability (for it matters little to a computer whether a module "makes sense").
Then, someone else (Mr. Martin) came along and applied the same thinking to the unit of a class, as a criterion for deciding what should or should not belong to it, promoting this criterion to a principle, the one being discussed here. The point he made was that "A class should have only one reason to change".
Well, we know from experience that many objects (and many classes) that appear to do "many things" have a very good reason for doing so. The undesirable case is the class that is bloated with functionality to the point of being impenetrable to maintenance. To understand the latter is to see what Mr. Martin was aiming at when he elaborated on the subject.
Of course, after reading what Mr. Martin wrote, it should be clear these are criteria for direction and design, meant to avoid problematic scenarios -- not in any way a call to pursue compliance, let alone strong compliance, especially when "responsibility" is ill-defined (and questions like "does this violate the principle?" are perfect examples of the widespread confusion). Thus, I find it unfortunate that it is called a principle, misleading people into trying to take it to its last consequences, where it would do no good. Mr. Martin himself discussed designs that "do more than one thing" and should probably be kept that way, since separating them would yield worse results. Also, there are many known challenges regarding modularity (and this subject is a case in point); we are not at a point of having good answers even for some simple questions about it.
However, if we extrapolate these ideas, why would Object have a toString method? It's not a Car's or a Dog's responsibility to convert itself to a string, now is it?
Now, let me pause to say something here about toString: there is a fundamental thing commonly neglected when one makes that transition of thought from modules to classes and reflects on what methods should belong to a class. That thing is dynamic dispatch (aka late binding, "polymorphism").
In a world with no "overriding methods", choosing between "obj.toString()" and "toString(obj)" is a matter of syntax preference alone. However, in a world where programmers can change the behavior of a program by adding a subclass with a distinct implementation of an existing method, this choice is no longer one of taste: making a procedure a method also makes it a candidate for overriding, and the same might not be true for "free procedures" (languages that support multi-methods have a way out of this dichotomy). Consequently, it is no longer a discussion about organization only, but about semantics as well. Finally, which class the method is bound to also becomes an impactful decision (and in many cases, so far, we have little more than guidelines to help us decide where things belong, as non-obvious trade-offs emerge from different choices).
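A minimal sketch of that point (Animal and Dog are illustrative names, not from the original): because toString is an overridable method, merely adding a subclass changes the behavior of existing code that calls it, which is what makes the method-versus-free-procedure choice semantic rather than purely syntactic.

```java
// Base class with an overridable method.
class Animal {
    @Override
    public String toString() { return "some animal"; }
}

// Adding this subclass changes what existing code observes when it calls
// toString() on an Animal reference -- no existing code was edited.
class Dog extends Animal {
    @Override
    public String toString() { return "a dog"; }
}
```

Calling `a.toString()` on an `Animal a = new Dog();` dispatches on the runtime class (Dog), whereas a static helper like `describe(Animal a)` would be resolved at compile time and could not be specialized this way.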
Finally, we are faced with languages that carry terrible design decisions, for instance, forcing one to create a class for every little thing. In such languages, the canonical reason and main criterion for having objects (and in class-land, therefore, classes) at all gets blurred and confused. That reason is to have these "objects" that are kind of "behaviors that also behave like data", but that protect their concrete representation (if any) from direct manipulation at all costs -- and that is the main hint for what the interface of an object should be, from the point of view of its clients.
Best Answer
You have shown two extremes ("everything private and all (maybe unrelated) methods in one object" vs. "everything public and no methods inside the object"). IMHO good OO modeling is neither of them; the sweet spot is somewhere in the middle.
One litmus test of which methods or logic belong inside a class, and which belong outside, is to look at the dependencies the methods will introduce. Methods which don't introduce additional dependencies are fine, as long as they fit well with the abstraction of the given object. Methods which do require additional, external dependencies (like a drawing library or an I/O library) are seldom a good fit. Even if you could make those dependencies vanish by using dependency injection, I would still think twice about whether placing such methods inside the domain class is really necessary.
So you should neither make every member public, nor implement a method for every operation on an object inside the class. Here is an alternative suggestion:
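The original code example from this answer is not reproduced here; a minimal sketch of the kind of Cat class it describes might look like the following (the Image type, the name attribute, and the accessor names are all assumptions for illustration):

```java
// Assumed immutable value type used by Cat.
final class Image {
    private final String data;
    Image(String data) { this.data = data; }
    String getData() { return data; }
}

final class Cat {
    private final String name;
    private final Image image;

    Cat(String name, Image image) {
        this.name = name;
        this.image = image;
    }

    // Just enough access for outside code (e.g. a drawing controller or
    // console output) to implement printInfo or draw, without exposing
    // every attribute as a public field.
    String getName() { return name; }
    Image getImage() { return image; }

    @Override
    public String toString() { return "Cat(" + name + ")"; }
}
```

With no setters, a class shaped like this can stay immutable while still supporting external printInfo/draw implementations through its accessors.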
Now the Cat object provides enough logic to let surrounding code easily implement printInfo and draw, without exposing all attributes in public. The right place for these two methods is most probably not a god class CatMethods (since printInfo and draw are most probably different concerns, I think it is very unlikely they belong in the same class).

I can imagine a CatDrawingController which implements draw (and maybe uses dependency injection to get a Canvas object). I can also imagine another class which implements some console output and uses toString (so printInfo may become obsolete in this context). But to make sensible decisions about this, one needs to know the context and how the Cat class will actually be used.

That is actually the way I interpreted Fowler's Anemic Domain Model critique - for generally reusable logic (without external dependencies), the domain classes themselves are a good place, so they should be used for that. But that does not mean implementing all logic there, quite the opposite.

Note also that the example above still leaves room for a decision about (im)mutability. If the Cat class does not expose any setters, and Image is itself immutable, this design allows making Cat immutable (which the DTO approach would not). But if you think immutability is not required or not helpful for your case, you can also go in that direction.