What you're talking about is programming to abstractions, which does not imply the Strategy pattern.
Programming to abstractions is always a good practice. There's virtually no additional effort involved in providing an interface or abstract base class for a concrete implementation, and many refactoring tools can now do this automatically for you.
How abstract to make it is a different question.
In your first example, you simply have an abstract type. You could program everything to use a generic `Serializer` or `ISerializer` type and then wire up a single default implementation in an IoC container like Castle or Spring. Simple. Done.
In the second instance, though, you're designing abstract functionality. You're trying to predict how this abstract type might be used in the future, and you are probably going to be wrong. The YAGNI principle applies here; unless you have some reason to believe that your application actually needs a fully-generic "transform" interface which can turn anything into anything else, and unless you've actually scoped out the requirements for it, then you're wasting your time planning for it. Maybe you should just use one of the many existing XML transformation languages if you need that kind of flexibility.
None of this really has anything to do with the strategy pattern. The strategy pattern means that not only are you coding against an abstract type, but that you are choosing which concrete type to use based on information only known at runtime. This is neither explicit nor implicit in your question.
If you only have one implementation, then implementing a strategy pattern is absolutely overkill, unless you know for sure that you will need to support additional strategies almost immediately afterward.
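For contrast, here is what an actual Strategy looks like (hypothetical names): the concrete type is selected from information only known at runtime, here a format string.

```java
interface ExportStrategy {
    String export(String data);
}

class CsvExport implements ExportStrategy {
    public String export(String data) { return "csv:" + data; }
}

class XmlExport implements ExportStrategy {
    public String export(String data) { return "<data>" + data + "</data>"; }
}

class Exporter {
    // Runtime selection is what distinguishes Strategy from
    // merely coding against an abstract type.
    static ExportStrategy forFormat(String format) {
        switch (format) {
            case "csv": return new CsvExport();
            case "xml": return new XmlExport();
            default: throw new IllegalArgumentException("unknown format: " + format);
        }
    }
}
```

The selector is the only place that knows about the concrete types; everything else handles `ExportStrategy`. With a single implementation, this machinery buys you nothing.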
But that does not preclude you from using abstract types. This is generally how most OO applications are designed now, using dependency injection for code and an IoC container for configuration. I'd go so far as to say this is more productive than coding to concrete types, because (a) it's much easier to test and mock, and (b) quickly coding an interface allows you to continue your current task without much distraction, as opposed to going off and writing a whole new class because you can't continue without it.
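Point (a) in practice, as a sketch with made-up names: because the dependency is an interface, a test can substitute a recording stub with no mocking framework at all.

```java
interface MailSender {
    void send(String to, String body);
}

class WelcomeService {
    private final MailSender mail;

    WelcomeService(MailSender mail) { this.mail = mail; }

    void welcome(String user) { mail.send(user, "Welcome, " + user + "!"); }
}

// A test double: records the call instead of sending anything.
class RecordingMailSender implements MailSender {
    String lastTo, lastBody;
    public void send(String to, String body) { lastTo = to; lastBody = body; }
}
```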
So definitely make use of abstract types, but don't overthink things and spend a lot of time fussing about how to generalize them. It's easier to refactor later than it is to work with a bloated interface.
Reflection was created for a specific purpose: to discover the functionality of a class that was unknown at compile time, similar to what the `dlopen` and `dlsym` functions do in C. Any use outside of that should be heavily scrutinized.
Did it ever occur to you that the Java designers themselves encountered this problem? That's why practically every class has an `equals` method. Different classes have different definitions of equality. In some circumstances a derived object could be equal to a base object. In some circumstances, equality could be determined based on private fields without getters. You don't know.
That's why every class that wants custom equality should implement an `equals` method. Eventually you'll want to put the objects into a set, or use them as hash keys, and then you'll have to implement `equals` anyway. Other languages do it differently, but Java uses `equals`. You should stick to the conventions of your language.
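The conventional hand-written version is short and explicit; the class itself states its definition of equality. (`Point` is just an example class.)

```java
import java.util.Objects;

final class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // keep hashCode consistent with equals
    }
}
```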
Also, "boilerplate" code, if put into the correct class, is pretty hard to screw up. Reflection adds additional complexity, meaning additional chances for bugs. In your method, for example, two objects are considered equal if one returns `null` for a certain field and the other doesn't. What if one of your getters returns one of your objects, without an appropriate `equals`? Your `if (!object1.equals(object2))` check will fail. What also makes it bug-prone is that reflection is rarely used, so programmers aren't as familiar with its gotchas.
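A sketch of one variant of the `null` gotcha (all names hypothetical, not the code from the question): a naive reflection-based equality check that calls `equals` on getter results blows up the moment one side returns `null`.

```java
import java.lang.reflect.Method;

class NaiveReflectionEquals {
    static boolean equal(Object a, Object b) throws Exception {
        for (Method m : a.getClass().getMethods()) {
            if (m.getName().startsWith("get")
                    && m.getParameterCount() == 0
                    && !m.getName().equals("getClass")) {
                Object va = m.invoke(a);
                Object vb = m.invoke(b);
                if (!va.equals(vb)) return false; // NPE when va is null
            }
        }
        return true;
    }
}

class Person {
    private final String name;
    Person(String name) { this.name = name; }
    public String getName() { return name; }
}
```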
The convenience of working with a strongly typed object sometimes outweighs the cost of serializing to/from that object. For instance I usually use JSON to transmit MVVM ViewModels over the wire. Once the data is deserialized, I can take advantage of the functionality embedded in the ViewModel class that works on the data. If I were just passing around dumb data, then I agree it probably makes sense to forgo that extra step.
What is dumb data? To me dumb data is a class that has nothing on it but simple properties/fields and any logic is provided by other classes that consume them (cf. Anemic Domain Model or Data Transfer Object).
If instead you are (de)serializing intelligent objects (like my ViewModel example), there is much more to gain. This goes beyond just validation. If you look at the argument for Rich Domain Models versus Anemic Domain Models, the case is that we are putting the logic that operates on the data with the data itself. That's the entire point of using an Object Oriented language. Presumably you're passing the data around to have some function performed on it. Serialize it into a rich object and put the function on that object so all you have to do is invoke the function. I've touched on the idea in a blog post here.
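A hypothetical ViewModel illustrating the point: once a library like Jackson or Gson has deserialized the wire data into this class, callers simply invoke its behavior instead of re-implementing the logic wherever the data travels.

```java
class OrderViewModel {
    // Fields populated by the deserializer.
    public double unitPrice;
    public int quantity;
    public double discountRate;

    // The logic lives with the data -- the Rich Domain Model argument.
    public double total() {
        return unitPrice * quantity * (1.0 - discountRate);
    }

    public boolean isDiscounted() {
        return discountRate > 0.0;
    }
}
```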