Nullable Value Types – When Do the Benefits of Nullable Value Types Outweigh the Cost of Null Pointers?


The question is in the title. Here is the context:

Some people think that the null pointer was a big mistake; Tony Hoare famously apologized for inventing it, calling it his "billion-dollar mistake". Since version 2.0, C# has had nullable value types (int? foo;), which introduce more of the null badness into the language. To confuse the issue, the C# team is considering adding non-nullable reference types too (MyClass! myClass).

So are we trying to increase or decrease the use of null pointers? Why are null pointers bad in the first place? If they are bad, why did the C# team expand the language's ability to use them? If they are good, why might the C# team expand the language's ability to prevent them? When we are reviewing code, how do we decide whether a variable or property should be allowed to be null?

In short: when do the benefits of a nullable value type outweigh the cost of a null pointer?

Best Answer

So C#'s nullable value types aren't adding null pointers. They're an option type: you are explicitly saying "this may be unspecified/null/empty", and callers have to explicitly call .Value (or use a cast) to get the value out. With C++ pointers, C# reference types, and many other constructs, things can be null but normally aren't, so programmers get lazy about checking, and boom: bad times.
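
To make that concrete, here is a minimal sketch of the explicit unwrapping an int? demands (the variable names are just for illustration):

    using System;

    class NullableDemo
    {
        static void Main()
        {
            int? maybeCount = null;   // "a value, or nothing" is part of the type

            // The compiler won't implicitly treat int? as int;
            // you have to acknowledge the empty case somehow.
            if (maybeCount.HasValue)
                Console.WriteLine("Count: " + maybeCount.Value);   // explicit unwrap
            else
                Console.WriteLine("Count was never specified.");

            // The ?? operator gives a safe fallback.
            int withFallback = maybeCount ?? 0;
            Console.WriteLine("Count with fallback: " + withFallback);

            // A cast also unwraps, but throws InvalidOperationException when
            // the value is missing -- a loud, local failure rather than a
            // null dereference far from the cause.
            maybeCount = 42;
            int unwrapped = (int)maybeCount;
            Console.WriteLine("Unwrapped: " + unwrapped);
        }
    }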

And these sorts of types are necessary because sometimes data genuinely is optional, and you have to represent that somehow.

But C# reference types always allow null, even when a value isn't optional, so it falls to the programmer to check every variable defensively. Letting the type system track optionality for you reduces the chance of error while still letting you express optional values when you need them.
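
A short sketch of that contrast, using a hypothetical Order class:

    using System;

    class Order
    {
        // Reference type: the signature gives no hint that this may be null.
        public string CouponCode;

        // Nullable value type: the "?" documents the optionality in the type itself.
        public decimal? Discount;
    }

    class Program
    {
        static void Main()
        {
            Order order = new Order();

            // Nothing stops this at compile time; it throws a
            // NullReferenceException at runtime because CouponCode
            // was never assigned:
            //     int length = order.CouponCode.Length;

            // With decimal?, there is no silent conversion to decimal, so the
            // "no discount" case must be handled at the point of use.
            decimal effective = order.Discount ?? 0m;
            Console.WriteLine("Discount applied: " + effective);
        }
    }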