You are correct that OOP class hierarchies are very closely related to discriminated unions in F#, and that pattern matching is very closely related to dynamic type tests. In fact, this is how F# compiles discriminated unions to .NET!
Regarding extensibility, there are two sides to the problem (this tension is often called the expression problem):
- OO lets you add new sub-classes, but makes it hard to add new (virtual) functions
- FP lets you add new functions, but makes it hard to add new union cases
That said, F# will give you a warning when you miss a case in pattern matching, so adding new union cases is actually not that bad.
Regarding finding duplications in route choosing - F# will give you a warning when you have a duplicate match, e.g.:
match x with
| Some foo -> printfn "first"
| Some foo -> printfn "second" // Warning on this line as it cannot be matched
| None -> printfn "third"
The fact that "route choice is immutable" might also be problematic. For example, if you wanted to share the implementation of a function between the Foo and Bar cases, but do something else for the Zoo case, you can encode that easily using pattern matching:
match x with
| Foo y | Bar y -> y * 20
| Zoo y -> y * 30
In general, FP is more focused on first designing the types and then adding functions. So it really benefits from the fact that you can fit your types (domain model) in a couple of lines in a single file and then easily add the functions that operate on the domain model.
The two approaches - OO and FP - are quite complementary, and both have advantages and disadvantages. The tricky thing (coming from the OO perspective) is that F# usually uses the FP style as the default. If there really is more need for adding new sub-classes, you can always use interfaces. But in most systems you equally often need to add types and functions, so the choice does not matter that much - and using discriminated unions in F# is nicer.
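The same "types first, then functions" style is available in OO languages too. As a rough sketch (not from the answer above - the Foo/Bar/Zoo names are borrowed from the F# match example, everything else is hypothetical), C++'s std::variant plays the role of a discriminated union and std::visit plays the role of pattern matching:

```cpp
#include <type_traits>
#include <variant>

// Hypothetical C++ analogue of the F# union in the example above
struct Foo { int y; };
struct Bar { int y; };
struct Zoo { int y; };
using Case = std::variant<Foo, Bar, Zoo>;

// Shared handling for Foo and Bar, special handling for Zoo,
// mirroring "| Foo y | Bar y -> y * 20" and "| Zoo y -> y * 30"
int score(const Case& c) {
    return std::visit([](const auto& v) -> int {
        using T = std::decay_t<decltype(v)>;
        if constexpr (std::is_same_v<T, Zoo>)
            return v.y * 30;
        else
            return v.y * 20;
    }, c);
}
```

Note the trade-off is the same as in F#: adding a new function over `Case` is trivial, while adding a new case means revisiting every visitor.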
I'd recommend this great blog series for more information.
What you describe is known as an anemic domain model. As with many OOP design principles (like Law of Demeter etc.), it's not worth bending over backwards just to satisfy a rule.
Nothing wrong with having bags of values, as long as they don't clutter the entire landscape and don't rely on other objects to do the housekeeping they could be doing for themselves.
It would certainly be a code smell if you had a separate class just for modifying properties of Card - if it could be reasonably expected to take care of them on its own. But is it really a job of a Card to know which Player it is visible to? And why implement Card.isVisibleTo(Player p), but not Player.isVisibleTo(Card c)? Or vice versa?
Yes, you can try to come up with some sort of a rule for that, as you did - like Player being more high-level than a Card (?) - but it's not that straightforward to guess, and I'll have to look in more than one place to find the method. Over time it can lead to a rotten design compromise of implementing isVisibleTo on both the Card and Player classes, which I believe is a no-no. Why so? Because I can already imagine the shameful day when player1.isVisibleTo(card1) will return a different value than card1.isVisibleTo(player1).
I think - it's subjective - this should be made impossible by design.
Mutual visibility of cards and players is better governed by some sort of context object - be it Viewport, Deal or Game.
It's not equal to having global functions. After all, there may be many concurrent games. Note that the same card can be used simultaneously at many tables. Shall we create many Card instances for each ace of spades?
I might still implement isVisibleTo on Card, but pass a context object to it and make Card delegate the query. Program to an interface to avoid high coupling.
As for your second example - if the document ID consists only of a BigDecimal, why create a wrapper class for it at all? I'd say all you need is DocumentRepository.getDocument(BigDecimal documentID);
By the way, while absent from Java, there are structs in C#. It's a highly object-oriented language, but no one makes a big deal out of it.
Short answer: that depends, and using smart pointers systematically is just wrong. Think first. I use smart pointers for a lot of things, but they're not right for everything - there is no silver bullet. You'll have to understand your specific implementation to decide whether it's wrong or not. I give some examples in the long answer below.
Long answer:
What makes software poor, regarding object lifetime, is only the lack of clear and precise control.
Because C++ lets you define the lifetime of objects, programmers have to set up ways to manage those lifetimes: how different they can be, and how easy they are to change.
I know a lot of cases where smart pointers are just the wrong answer (or overkill), starting with objects in pools. If objects are managed inside a "master" object that performs the new and delete calls in an isolated way, then that's fine. Don't forget that smart pointers, like any other technique, only hide deletes in a manageable way: they make clear when the delete will be called and make it a rule.
So, the idea here is that as long as the delete call is put in one place - easy to find, easy to understand, etc. - and it's obvious that the people who wrote the code wanted the rules for deleting the object to be uniform (no delete hidden in "special case" code), then it's not poor software design.
Smart pointers are meant to be the "easy answer" to a range of cases where you can't be sure where the delete call should be done. So you have to define how to delete the object and define a rule that triggers that delete. Shared pointers delete once there is no reference left to the object; scoped pointers delete once out of scope; etc. It's easy to use and solves a lot of cases.
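Those two deletion rules can be shown in a few lines (Resource and the two helper functions are illustrative names, not from any real API):

```cpp
#include <memory>

struct Resource { int value = 42; };

// shared_ptr rule: the object is deleted only when the last owner lets go
long owners_after_copy() {
    auto a = std::make_shared<Resource>();
    auto b = a;               // second owner; object stays alive
    a.reset();                // still not deleted - b owns it
    return b.use_count();     // one remaining owner
}

// unique_ptr rule: the object is deleted when its sole owner leaves scope
int read_then_destroy() {
    auto u = std::make_unique<Resource>();
    return u->value;
}                             // delete happens here, deterministically
```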
But like every tool, it's not a silver bullet. As said previously, you can't use smart pointers for objects allocated in pools. In video games, you often know precisely how many objects of each type are allowed at the same time, and the frequency of creation/destruction of those objects. So why do new and delete in this case? You just need to allocate all the objects up front, use them, and delete everything at the end - or simply dump the raw memory.
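A minimal pool might look like this (a sketch with hypothetical names - real game pools usually use raw storage and placement new, but the lifetime-control point is the same): all storage exists up front, there is no per-object new/delete at all, and lifetime is controlled in exactly one place.

```cpp
#include <array>
#include <cstddef>

struct Bullet { float x = 0, y = 0; bool alive = false; };

// Fixed-size pool: the object count is a known, fixed budget
class BulletPool {
    std::array<Bullet, 64> slots_{};     // allocated once, never freed per-object
public:
    Bullet* acquire() {                  // hand out a free slot
        for (auto& b : slots_)
            if (!b.alive) { b.alive = true; return &b; }
        return nullptr;                  // pool exhausted
    }
    void release(Bullet* b) { b->alive = false; }
    std::size_t live() const {
        std::size_t n = 0;
        for (const auto& b : slots_) n += b.alive ? 1 : 0;
        return n;
    }
};
```

A shared_ptr per bullet would add reference-counting overhead and scatter the deletion logic, for objects whose lifetime the pool already controls completely.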
In fact, almost all choices in those cases are driven by hardware or safety or other constraints.
There are no hard and fast rules, just good solutions to specific problems. Especially in C++ as you're the one in charge, not a VM.
If you feel a code smell about your specific case, it might be because the delete calls are done in special or specific cases, not in a generic way - that is poor design. Another thing that should smell is new and delete being used when there is no good reason to prefer heap memory over stack memory. The obvious case is an object that is created and destroyed in the same function. The only case where new/delete is then valid is when the object requires more memory than the stack allows (and that does happen!).
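The stack-versus-heap point in code (again a sketch; the names are made up):

```cpp
#include <vector>

struct Widget { int size = 0; };

// Created and destroyed in the same function: automatic (stack) storage
// is enough - no new/delete, and cleanup is guaranteed on return
int stack_version() {
    Widget w;
    w.size = 8;
    return w.size;
}

// Heap allocation is justified when the data is too big for the stack;
// here std::vector does the heap work (and the delete) for us
std::vector<char> big_buffer() {
    return std::vector<char>(1 << 20);   // 1 MiB - too large for many stacks
}
```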
So, just try to understand exactly why those deletes happen where they do, and if there's no good reason for them being there, you should refactor (if possible).