Object-Oriented Design – Should Every Object Know How to Present Itself?

object-oriented

David West, in his book Object Thinking (chapter 10, section 1, sub-section 2), proposed that in an ideal OO environment, every object should be capable of presenting itself upon request; be it to humans (as a GUI), to non-native components (as JSON and/or XML), or to any other interested parties:

Object thinking says that a view (sometimes called an interface)—graphical or otherwise—is a means for an object to communicate with another object and nothing more. The need for a view arises when an object needs to present itself in a “non-native” form to some other object (usually a human being) or application (for example, an XML view for data objects being shared across platforms).

Discovery of the need and the parameters that must be satisfied by a view is manifest in the scenarios in which the object participates. Whenever an object is asked to display itself, it must use a view—a representation—appropriate for the sender of that display message. If, for example, an object is trying to instantiate itself (get a value for itself), it must present a view of itself as an implicit request to a human being (or other service-providing object) for a value. If we are building a GUI that will serve as an intermediary between a software object and a human object, we will use glyphs for display purposes and widgets for interaction purposes.

But which glyphs and widgets need to be included in the GUI? Only those necessary to complete the scenario or scenarios of immediate interest as the application runs. This perspective is counterintuitive for most developers because it suggests that a GUI be defined from the application out.

As an example, consider a brewery. Off to one side are vats filled with beer. At the other end is a complex production line consisting of bottle washers, filler stations, capping machines, and package assemblers. Above it all is a control station that monitors the brewery and notifies human managers of status and problems. Traditional developers are likely to begin their analysis and design of “a brewery management system” from the point of view of the control panel. This is analogous to designing from the interface in.

Object thinking would suggest, instead, that you consider which object is the prime customer of the brewery and all its myriad machines. On whose behalf does the complex maze of equipment exist? The correct business answer is, of course, “The customer.” But an answer more reflective of object thinking is, “The beer.” All scenarios are written from the perspective of the beer, trying to get itself into a bottle, with a cap, placed in a package, and resident in a truck. The control panel is a passive observer of the state of the brewery. If the beer encounters a problem at some point, it’s the responsibility of the beer to request intervention of the human operators by sending a message to the control panel (or machine-specific control panels) requesting an intervention service.

This perspective will simplify GUI design and, more important, eliminate the host of manager and controller objects that seem to inevitably arise when designing from the control panel’s (GUI’s) perspective.

Coming from a beginner in the OO world: should this really be the case?

Having objects that know how to represent themselves could surely reduce the number of controller/manager objects, which West repeatedly says in his book an object thinker should try to avoid at all costs.
But wouldn't abiding by this "rule" break the SRP?

Also (if it does turn out to be the case), given a typical implementation in, say, an Android application: how could one achieve this kind of goal? Should every object we create know how to present itself as a View?

Best Answer

I think this is one of the hardest things to understand about OO design and honestly, I think a lot of authors are wrong about it and/or don't explain it very well. A lot of people get this wrong and never really figure it out. Let's take an example that's not GUI-based but runs into the same pitfall.

In Java, every object has an equals method. You then have collection types like Set and Map that depend on this method to determine when objects should be added to the collection or when they are duplicates. This seems like good OO to a lot of people. The problem is that what you end up with is an object (the collection) whose behavior is determined not by itself but by the objects that it contains. This is a bit like having the passengers on the bus direct where it should go. What if they disagree? This isn't a theoretical problem; it's a really thorny issue where you basically have to break inheritance to prevent bugs in your program. Take a Shape and a ColoredShape. Is a 2x2 square equal to a 2x2 blue square? Shape says 'yes' and ColoredShape says 'no'. Who's right? The answer depends on what you want to happen in your collection. It might be neither, depending on what you are trying to do.
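A minimal sketch of the Shape/ColoredShape conflict (the class bodies are my own illustration, not code from the book) shows how the two equals methods break the symmetry that collections rely on:

```java
import java.util.Objects;

class Shape {
    final int width, height;
    Shape(int width, int height) { this.width = width; this.height = height; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Shape)) return false;
        Shape s = (Shape) o;
        return width == s.width && height == s.height;  // color is irrelevant here
    }

    @Override
    public int hashCode() { return Objects.hash(width, height); }
}

class ColoredShape extends Shape {
    final String color;
    ColoredShape(int width, int height, String color) {
        super(width, height);
        this.color = color;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ColoredShape)) return false;  // a plain Shape can never match
        ColoredShape c = (ColoredShape) o;
        return super.equals(c) && color.equals(c.color);
    }
}

public class EqualsDemo {
    public static void main(String[] args) {
        Shape square = new Shape(2, 2);
        ColoredShape blueSquare = new ColoredShape(2, 2, "blue");
        // Symmetry is broken: a.equals(b) != b.equals(a)
        System.out.println(square.equals(blueSquare));   // true
        System.out.println(blueSquare.equals(square));   // false
    }
}
```

Whether a Set treats these two objects as duplicates now depends on which one it happens to ask first, which is exactly the "passengers steering the bus" problem described above.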

You'll see this come up as a problem again and again. The funny thing is that there's a solution, and it's right next door with Comparable. Objects that implement Comparable have this same conundrum, but now they not only have to determine whether they are equal but also whether they are bigger than another object. It's really intractable outside a very narrow scope of usage. So we have this other thing called Comparator. Its job is to look at two objects and tell the collection which one is bigger. All of the problems that you have trying to do this in the Comparable object disappear.
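A short sketch of that idea (the Item class is illustrative; Comparator and TreeSet are the real java.util types): the caller, not the object, decides what "equal" and "bigger" mean for a given collection.

```java
import java.util.Comparator;
import java.util.TreeSet;

class Item {
    final String name;
    final int price;
    Item(String name, int price) { this.name = name; this.price = price; }
}

public class ComparatorDemo {
    public static void main(String[] args) {
        // Two collections of the same objects, with different externally
        // supplied notions of ordering and duplication.
        TreeSet<Item> byName  = new TreeSet<>(Comparator.comparing((Item i) -> i.name));
        TreeSet<Item> byPrice = new TreeSet<>(Comparator.comparingInt((Item i) -> i.price));

        Item ale   = new Item("ale", 5);
        Item lager = new Item("lager", 5);
        byName.add(ale);  byName.add(lager);
        byPrice.add(ale); byPrice.add(lager);

        System.out.println(byName.size());   // 2: distinct names
        System.out.println(byPrice.size());  // 1: equal prices, so a "duplicate"
    }
}
```

Neither answer is baked into Item itself; each collection gets exactly the comparison semantics its use case needs.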

I don't know this book and I don't know the author, but the example with the beer does not seem helpful at all. How would the beer know whether it should be in a bottle or in a keg, and why would it be making that decision? Its job is to taste good and deliver alcohol to the users' bloodstream. Do we really think breweries work this way? "OK beer, should you be in a bottle or in a keg, and if it's a bottle, should it be a 25 ounce bottle or a 12 ounce bottle?" What's the beer in this case (no pun intended) anyway? Is it a drop of beer? Maybe this is out of context, but I think this gets it wrong, or at the very least it's not adding any illumination to this concept.

Having said all of that, there's an approach to building interfaces that I've used that can simplify things and make them more OO. Essentially, you create an interface that defines the abstract actions that you can take to display the object. You might have an interface called Display with methods like setTitle or setDescription if you are using the standard Java naming pattern. Then your object would have a method display(Display display) (because three times is the charm!) In this approach, the object doesn't need to understand what the interface is — it could be text, binary, SVG, bitmap, whatever — and the interface doesn't need to know about the object. In this way an object can "display itself" without needing to know how the display works. This approach can greatly reduce the number of wrapper classes needed, but it can become cumbersome if you have complex display requirements that vary by object. You can mix it with standard MVC-type approaches to good effect.
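A minimal sketch of that approach, under the assumption that the object is a Beer with a name and ABV (all names here — Display, setTitle, setDescription, TextDisplay — are the illustrative ones from the paragraph above, not a standard API):

```java
// The abstract actions available for presenting an object.
interface Display {
    void setTitle(String title);
    void setDescription(String description);
}

class Beer {
    private final String name;
    private final double abv;
    Beer(String name, double abv) { this.name = name; this.abv = abv; }

    // The object "displays itself" through the abstract interface;
    // it never learns whether the Display is text, SVG, or a widget.
    void display(Display display) {
        display.setTitle(name);
        display.setDescription(abv + "% ABV");
    }
}

// One concrete Display: plain text. An SVG or Android View version
// could implement the same interface without touching Beer.
class TextDisplay implements Display {
    private final StringBuilder out = new StringBuilder();
    public void setTitle(String title) { out.append(title).append('\n'); }
    public void setDescription(String description) { out.append(description).append('\n'); }
    @Override public String toString() { return out.toString(); }
}

public class DisplayDemo {
    public static void main(String[] args) {
        TextDisplay text = new TextDisplay();
        new Beer("Stout", 4.5).display(text);
        System.out.print(text);
    }
}
```

The dependency points only at the small Display abstraction, so adding a new presentation medium means adding one class, not a wrapper per domain object.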
