Given the divergences between languages, this can be a tricky topic, so I'm framing the following commentary in a way that tries to be as comprehensive as I can within the realm of OO.
First of all, the so-called "Single Responsibility Principle" is an explicitly declared reflection of the concept of cohesion. Reading the literature of the time (around the 1970s), people were (and still are) struggling to define what a module is and how to construct modules in a way that preserves nice properties. They would say "here is a bunch of structures and procedures, I'll make a module out of them", but with no criterion for why this arbitrary set of things is packaged together, the organization might end up making little sense, having little "cohesion". Hence, discussions on criteria emerged.
So, the first thing to note here is that, so far, the debate is about organization and its effects on maintenance and understandability (it matters little to a computer whether a module "makes sense").
Then someone else (Mr. Martin) came along and applied the same thinking to the unit of a class, as a criterion for deciding what should or should not belong to it, and promoted this criterion to a principle, the one being discussed here. The point he made was that "A class should have only one reason to change".
Well, we know from experience that many objects (and many classes) that appear to do "many things" have a very good reason for doing so. The undesirable case is the class that is bloated with functionality to the point of being impenetrable to maintenance. To understand the latter is to see what Mr. Martin was aiming at when he elaborated on the subject.
Of course, after reading what Mr. Martin wrote, it should be clear that these are criteria to guide design away from problematic scenarios, not something to be complied with, let alone strictly complied with, especially when "responsibility" is ill-defined (questions like "does this violate the principle?" are perfect examples of the widespread confusion). Thus, I find it unfortunate that it is called a principle, misleading people into taking it to its last consequences, where it does no good. Mr. Martin himself discussed designs that "do more than one thing" and should probably be kept that way, since separating them would yield worse results. Also, there are many known challenges regarding modularity (and this subject is a case of it); we are not at a point of having good answers even for some simple questions about it.
However, if we extrapolate these ideas, why would Object have a toString method? It's not a Car's or a Dog's responsibility to convert itself to a string, now is it?
Now, let me pause to say something here about toString: there is a fundamental thing commonly neglected when one makes that transition of thought from modules to classes and reflects on what methods should belong to a class. That thing is dynamic dispatch (aka late binding, or "polymorphism").
In a world with no "overriding methods", choosing between "obj.toString()" and "toString(obj)" is a matter of syntactic preference alone. However, in a world where programmers can change the behavior of a program by adding a subclass with a distinct implementation of an existing method, this choice is no longer one of taste: making a procedure a method also makes it a candidate for overriding, and the same might not be true of "free procedures" (languages that support multi-methods have a way out of this dichotomy). Consequently, it is no longer a discussion about organization only, but about semantics as well. Finally, which class the method is bound to also becomes an impactful decision (and in many cases, so far, we have little more than guidelines to help us decide where things belong, as non-obvious trade-offs emerge from different choices).
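To make that dispatch point concrete, here is a minimal Java sketch (the class names are mine, purely illustrative): a method participates in late binding and can be overridden, while a "free procedure" (approximated here by a static method) is resolved without looking at the runtime subclass.

```java
// Sketch of method dispatch vs. a "free procedure".
class Animal {
    @Override
    public String toString() { return "some animal"; }
}

class Dog extends Animal {
    // Overriding changes behavior for every caller holding an Animal reference.
    @Override
    public String toString() { return "a dog"; }
}

class DispatchDemo {
    // A static method is not part of the dispatch mechanism:
    // subclasses cannot override it, so adding Dog cannot change its behavior.
    static String describe(Animal a) { return "some animal"; }

    public static void main(String[] args) {
        Animal a = new Dog();
        System.out.println(a.toString());  // late binding selects Dog's version
        System.out.println(describe(a));   // resolved statically, ignores the subclass
    }
}
```

The choice of where to put the behavior is thus a semantic decision, not just an organizational one: only the method form gives subclasses a customization point.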
Finally, we are faced with languages that carry terrible design decisions, for instance, forcing one to create a class for every little thing. Thus, the canonical reason and main criterion for having objects (and, in class-land, classes) at all gets blurred and confused. That reason is to have these "objects" that are kind of "behaviors that also behave like data", but that protect their concrete representation (if any) from direct manipulation at all costs (and that is the main hint for what the interface of an object should be, from the point of view of its clients).
Best Answer
I generally avoid having the class know how to serialize itself, for a couple of reasons. First, if you want to (de)serialize to/from a different format, you now need to pollute the model with that extra logic. If the model is accessed via an interface, then you also pollute the contract.
But what if you want to serialize it to/from a PNG and a GIF? Now the class becomes bloated with format-specific logic.
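The code that originally illustrated this bloat did not survive; the following is my sketch of the problem, with hypothetical names and placeholder bodies in place of real encoding logic:

```java
// Illustrative only: the model class accumulates a pair of methods per format.
class Image {
    private final byte[] pixels;

    Image(byte[] pixels) { this.pixels = pixels; }

    // Placeholder bodies; real PNG/GIF encoding would live here.
    byte[] toPng() { return pixels; }
    byte[] toGif() { return pixels; }
    static Image fromPng(byte[] data) { return new Image(data); }
    static Image fromGif(byte[] data) { return new Image(data); }
    // ...every new format means yet another pair of methods on the model.
}
```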
Instead, I typically like to use a pattern similar to the following:
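The original code block was lost; a minimal sketch of the kind of pattern described (one serializer type per format, outside the model; all names here are hypothetical) might look like:

```java
// The model stays focused on being an image; serialization lives elsewhere.
class Image {
    private final byte[] pixels;

    Image(byte[] pixels) { this.pixels = pixels; }

    byte[] getPixels() { return pixels; }  // state exposed for serializers to read
}

interface ImageSerializer {
    byte[] serialize(Image image);
    Image deserialize(byte[] data);
}

class PngSerializer implements ImageSerializer {
    public byte[] serialize(Image image) {
        /* real PNG encoding would go here */
        return image.getPixels();
    }
    public Image deserialize(byte[] data) {
        /* real PNG decoding would go here */
        return new Image(data);
    }
}

class GifSerializer implements ImageSerializer {
    public byte[] serialize(Image image) {
        /* real GIF encoding would go here */
        return image.getPixels();
    }
    public Image deserialize(byte[] data) {
        /* real GIF decoding would go here */
        return new Image(data);
    }
}
```

Adding a new format now means adding a new ImageSerializer implementation, leaving Image and its contract untouched.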
Now, at this point, one of the caveats of this design is that the serializers need to know the "identity" of the object they're serializing. Some would say that this is bad design, as the implementation leaks outside of the class. The risk/reward of this is really up to you, but you could slightly tweak the classes so that the object exposes its state through one deliberate, narrow point instead.

This is more of a general example, as images usually have metadata that goes along with them: things like compression level, colorspace, etc., which may complicate the process.