I guess to answer that we should compare the intentions of both classes and namespaces. According to Wikipedia:
Class
In object-oriented programming, a class is a construct that is used as a blueprint to create instances of itself – referred to as class instances, class objects, instance objects or simply objects. A class defines constituent members which enable these class instances to have state and behavior. Data field members (member variables or instance variables) enable a class object to maintain state. Other kinds of members, especially methods, enable a class object's behavior. Class instances are of the type of the associated class.
Namespace
In general, a namespace is a container that provides context for the identifiers (names, or technical terms, or words) it holds, and allows the disambiguation of homonym identifiers residing in different namespaces.
Now, what are you trying to achieve by putting the functions in a class (statically) or a namespace? I would wager that the definition of a namespace better describes your intention - all you want is a container for your functions. You don't need any of the features described in the class definition. Note that the first words of the class definition are "In object-oriented programming", yet there is nothing object-oriented about a collection of functions.
There are probably technical reasons as well but as someone coming from Java and trying to get my head around the multi-paradigm language that is C++, the most obvious answer to me is: Because we don't need OO to achieve this.
As someone who has read Clean Code and watched the Clean Coders series multiple times, and who often teaches and coaches other people in writing cleaner code, I can indeed vouch that your observations are correct - the metrics you point out are all mentioned in the book.
However, the book goes on to make other points which should also be applied alongside the guidelines that you pointed out. These were seemingly ignored in the code you're dealing with.
This may have happened because your colleague is still in the learning phase, in which case, as much as it is necessary to point out the smells in their code, it's good to remember that they're acting in good faith, learning and trying to write better code.
Clean Code does propose that methods should be short, with as few arguments as possible. But alongside those guidelines, it proposes that we must follow the SOLID principles, increase cohesion and reduce coupling.
The S in SOLID stands for the Single Responsibility Principle, which states that an object should be responsible for only one thing. "Thing" is not a very precise term, so the descriptions of this principle vary wildly. However, Uncle Bob, the author of Clean Code, is also the person who coined this principle, describing it as: "Gather together the things that change for the same reasons. Separate those things that change for different reasons." He goes on to say what he means by reasons to change here and here (a longer explanation here would be too much). If this principle were applied to the class you're dealing with, it is very likely that the pieces that deal with calculations would be separated from those that deal with holding state, by splitting the class into two or more, depending on how many reasons to change those calculations have.
Also, Clean classes should be cohesive, meaning that most of their methods use most of their attributes. As such, a maximally cohesive class is one where all methods use all of its attributes; as an example, in a graphical app you may have a `Vector` class with attributes `Point a` and `Point b`, where the only methods are `scaleBy(double factor)` and `printTo(Canvas canvas)`, both operating on both attributes. In contrast, a minimally cohesive class is one where each attribute is used in one method only, and never more than one attribute is used by each method. On average, a class presents non-cohesive "groups" of cohesive parts - i.e. a few methods use attributes `a`, `b` and `c`, while the rest use `c` and `d` - meaning that if we split the class in two, we end up with two cohesive objects.
Finally, Clean classes should reduce coupling as much as possible. While there are many types of coupling worth discussing here, it seems the code at hand mainly suffers from temporal coupling, where, as you pointed out, the object's methods will only work as expected when they're called in the correct order. As with the two guidelines mentioned above, the solutions to this usually involve splitting the class into two or more cohesive objects. The splitting strategy in this case usually involves patterns like Builder or Factory, and in highly complex cases, state machines.
The TL;DR: The Clean Code guidelines that your colleague followed are good, but only when also following the remaining principles, practices and patterns mentioned by the book. The Clean version of the "class" you're seeing would be split into multiple classes, each with a single responsibility, cohesive methods and no temporal coupling. That is the context in which small methods with few or no arguments make sense.
Best Answer
The simple answer is that you really can't prevent code duplication. You can, however, "fix it" through a difficult, continuous, incremental process that boils down to two steps:
Step 1. Start writing tests on legacy code (preferably using a testing framework)
Step 2. Rewrite/refactor the code that is duplicated using what you've learnt from the tests
You can use static analysis tools to detect duplicated code and for C# there are loads of tools that can do this for you:
Tools like this will help you find points in the code that do similar things. Continue to write tests to determine that they really do; use the same tests to make the duplicated code simpler to use. This "refactoring" can be done in multiple ways, and you can use this list to determine the correct one:
Furthermore, there is a whole book about this topic by Michael C. Feathers, Working Effectively with Legacy Code. It goes in depth into the different strategies you can take to change the code for the better. He has a "legacy code change algorithm" which is not far off from the two-step process above:
The book is a good read if you're dealing with brown-field development, i.e. legacy code that needs to change.
In this case
In the OP's case, I can imagine the untestable code has become a honeypot for "utility methods and tricks" that take several forms:
Take note that there is nothing wrong with these as such, but on the other hand they're usually hard to maintain and change. Extension methods in .NET are static methods, but they are also relatively easy to test.
Before you go through with the refactorings, though, talk with your team about it. They need to be on the same page as you before you proceed with anything, because if you're refactoring something, chances are high you'll cause merge conflicts. So before reworking something, investigate it, and tell your team to treat those code points with caution until you're done.
Since the OP is new to the code there are some other things to do before you should do anything:
Good luck!