Your scenario doesn't sound like an appropriate use for an enumeration. Also, unless the values never change, this should be data-driven (loaded from a file or database rather than hard-coded).
If the values never change, then a key-value pair is more appropriate where the destination is the key and the flight number is the value. This would eliminate any need for a switch statement, as the key can be used to directly and efficiently locate the value.
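For instance, a minimal sketch of the key-value approach in C# (the destinations and flight numbers here are made-up placeholders):

```csharp
using System;
using System.Collections.Generic;

public static class FlightLookup
{
    // Hypothetical data; in a real program this would be loaded from a file or database.
    static readonly Dictionary<string, int> FlightNumbers = new Dictionary<string, int>
    {
        { "London",   201 },
        { "Paris",    202 },
        { "New York", 203 }
    };

    // A direct lookup replaces the whole switch; TryGetValue handles unknown keys gracefully.
    public static bool TryLookup(string destination, out int flightNumber) =>
        FlightNumbers.TryGetValue(destination, out flightNumber);

    public static void Main()
    {
        if (TryLookup("London", out int flightNumber))
            Console.WriteLine(flightNumber); // prints 201
    }
}
```

Adding a new destination is then a one-line (or one-row) change to the data, with no code to touch.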
If you really want to use an enum, the switch statement in C# would look something like this:
// Assumes an enum along the lines of: enum Destination { London, Paris, NewYork }
int flightNumber;
Destination selection = /* some value collected from user input */;

switch (selection)
{
    case Destination.London:
        flightNumber = 201;
        break;
    // etc.
    default:
        throw new ArgumentOutOfRangeException(nameof(selection));
}
Both switch statements and polymorphism have their use. Note, though, that a third option exists (in languages which support function pointers / lambdas and higher-order functions): mapping the identifiers in question to handler functions. This is available in e.g. C, which is not an OO language, and C#, which is*, but not (yet) in Java, which is OO too*.
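In C#, this third option can be sketched with delegates; the enum values and handler bodies below are illustrative placeholders:

```csharp
using System;
using System.Collections.Generic;

public enum Destination { London, Paris }

public static class HandlerMap
{
    // Map each identifier to a handler function instead of switching on it.
    static readonly Dictionary<Destination, Func<string>> Handlers =
        new Dictionary<Destination, Func<string>>
        {
            { Destination.London, () => "Booking flight 201 to London" },
            { Destination.Paris,  () => "Booking flight 202 to Paris"  }
        };

    // Dispatch is a lookup plus a call; no switch, and new cases are just new entries.
    public static string Handle(Destination destination) => Handlers[destination]();

    public static void Main() =>
        Console.WriteLine(Handle(Destination.London)); // prints "Booking flight 201 to London"
}
```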
In some procedural languages (having neither polymorphism nor higher-order functions), switch / if-else statements were the only way to solve a certain class of problems. Many developers, having grown accustomed to this way of thinking, continued to use switch even in OO languages, where polymorphism is often a better solution. This is why it is often recommended to avoid or refactor switch statements in favour of polymorphism.
At any rate, the best solution is always case dependent. The question is: which option gives you cleaner, more concise, more maintainable code in the long run?
Switch statements can often grow unwieldy, having dozens of cases, making their maintenance hard. Since you have to keep them in a single function, that function can grow huge. If this is the case, you should consider refactoring towards a map based and/or polymorphic solution.
If the same switch starts to pop up in multiple places, polymorphism is probably the best option to unify all these cases and simplify the code, especially if more cases are expected to be added in the future: the more places you need to update each time, the more opportunities for error. However, often the individual case handlers are so simple, or there are so many of them, or they are so interrelated, that refactoring them into a full polymorphic class hierarchy is overkill, or results in a lot of duplicated code and/or a tangled, hard-to-maintain class hierarchy. In that case, it may be simpler to use functions / lambdas instead (if your language allows it).
However, if you have a switch in a single place, with only a few cases doing something simple, leaving it as it is may well be the best solution.
*I use the term "OO" loosely here; I am not interested in conceptual debates over what is "real" or "pure" OO.
Best Answer
He is probably an old C hacker and yes, he is talking out of his ass. .NET is not C++; the .NET compiler keeps getting better, and most clever hacks are counter-productive, if not today then in the next .NET version. Small functions are preferable because .NET JIT-compiles each function once, just before it is first used. So if some cases never get hit during the lifecycle of a program, no cost is incurred in JIT-compiling them. Anyhow, if speed is not an issue, there should be no optimizations: write for the programmer first, for the compiler second. Your co-worker will not be easily convinced, so I would prove empirically that better-organized code is actually faster. I would pick one of his worst examples, rewrite it in a better way, and then make sure that your code is faster. Cherry-pick if you must. Then run it a few million times, profile it and show him. That ought to teach him well.
EDIT
Bill Wagner wrote:
Item 11: Understand the Attraction of Small Functions (Effective C#, Second Edition)

Remember that translating your C# code into machine-executable code is a two-step process. The C# compiler generates IL that gets delivered in assemblies. The JIT compiler generates machine code for each method (or group of methods, when inlining is involved), as needed. Small functions make it much easier for the JIT compiler to amortize that cost. Small functions are also more likely to be candidates for inlining. It's not just smallness: simpler control flow matters just as much. Fewer control branches inside functions make it easier for the JIT compiler to enregister variables. It's not just good practice to write clearer code; it's how you create more efficient code at runtime.
EDIT2:
So ... apparently a switch statement is faster and better than a long chain of if/else statements, because the compiler can turn a switch into a jump table or a binary search (constant or logarithmic time), while an if/else chain is evaluated linearly. http://sequence-points.blogspot.com/2007/10/why-is-switch-statement-faster-than-if.html
Well, my favorite approach to replacing a huge switch statement is a dictionary (or sometimes even an array, if I am switching on enums or small ints) that maps values to the functions that get called in response to them. Doing so forces one to remove a lot of nasty shared spaghetti state, but that is a good thing. A large switch statement is usually a maintenance nightmare. So ... with arrays and dictionaries, the lookup takes constant time and little extra memory is wasted.
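As a sketch of the array variant when switching on an enum (the enum values and handlers are made up for illustration):

```csharp
using System;

public enum Destination { London = 0, Paris = 1 }

public static class ArrayDispatch
{
    // One handler per enum value; the enum's integer value is the array index,
    // so dispatch is a constant-time indexed call with no branching at all.
    static readonly Func<string>[] Handlers =
    {
        () => "Handling London",
        () => "Handling Paris"
    };

    public static string Dispatch(Destination destination) => Handlers[(int)destination]();

    public static void Main() =>
        Console.WriteLine(Dispatch(Destination.Paris)); // prints "Handling Paris"
}
```

Each handler is a self-contained function, so the shared state a big switch tends to accumulate has nowhere to hide.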
I am still not convinced that the switch statement is better.