First of all, most allergic reactions to Hungarian notation stem from the fact that consistent application tends to produce very unnatural-looking names. Occasional use of prefixes or suffixes that suggest a certain type should not be considered Hungarian notation unless the name looks very forced.
What you are doing by using "IsResolved" or "ResolvedOn" instead of "Resolved" is disambiguating what information the name conveys (i.e. what a class or object of that name represents, or what a function of that name returns).
To take a few more examples:
- Company
- CompanyName
- Resolved
- IsResolved
- ResolvedOn
When reading code, "Resolved" is the only one of these where I would have no idea what it represents and would need to look it up. Context makes a difference, and I am not claiming I am always correct, but "Resolved" is the only name in that list where I could not tell at a glance whether it looks to be used correctly.
You are comparing variable declarations to #defines, which is incorrect. With a #define, you create a mapping between an identifier and a snippet of source code. The C preprocessor will then literally substitute any occurrence of that identifier with the provided snippet. Writing
#define FOO 40 + 2
int foos = FOO + FOO * FOO;
ends up being the same thing to the compiler as writing
int foos = 40 + 2 + 40 + 2 * 40 + 2;
Think of it as automated copy&paste.
Also, normal variables can be reassigned, while a macro created with #define cannot (although you can re-#define it). The expression FOO = 7 would be a compiler error, since we can't assign to “rvalues”: 40 + 2 = 7 is illegal.
So, why do we need types at all? Some languages apparently get rid of types; this is especially common in scripting languages. However, they usually have something called “dynamic typing”, where variables don't have fixed types, but values do. While this is far more flexible, it's also less performant. C likes performance, so it has a very simple and efficient concept of variables:
There's a stretch of memory called the “stack”. Each local variable corresponds to an area on the stack. Now the question is: how many bytes long does this area have to be? In C, each type has a well-defined size, which you can query via sizeof(type). The compiler needs to know the type of each variable so that it can reserve the correct amount of space on the stack.
Why don't constants created with #define need a type annotation? They are not stored on the stack. Instead, #define creates reusable snippets of source code in a slightly more maintainable manner than copy&paste. Literals in the source code such as "foo" or 42.87 are stored by the compiler either inline as special instructions, or in a separate data section of the resulting binary.
However, literals do have types. A string literal is an array of char, which decays to a char * in most expressions. 42 is an int, but can also be used for shorter types (narrowing conversion). 42.8 would be a double. If you have a literal and want it to have a different type (e.g. to make 42.8 a float, or 42 an unsigned long int), you can use suffixes – a letter after the literal that changes how the compiler treats it. In our case, we might say 42.8f or 42ul.
Some languages have static typing as in C, but the type annotations are optional. Examples are ML, Haskell, Scala, C#, C++11, and Go. How does that work? Magic? No, this is called “type inference”. In C# and Go, the compiler looks at the right-hand side of an assignment and deduces its type. This is fairly straightforward if the right-hand side is a literal such as 42ul: then it's obvious what the type of the variable should be. Other languages have more complex algorithms that also take into account how a variable is used. E.g. if you write x/2, then x can't be a string but must have some numeric type.