The point of having static typing is the ability to prove statically that your program is correct with regard to types (note: not completely correct in every sense). If you have a static type system throughout, you can detect most type errors up front.
If you only have partial type information, you can only check the few pieces of the call graph where the type info happens to be complete. Yet you have spent time and effort specifying types for the incomplete parts, where they cannot help you but can give a false sense of security.
To express type information, you need a part of the language that cannot stay excessively simple. Soon you'll find out that info like `int` is not enough; you'll want something like `List<Pair<Int, String>>`, then parametric types, and so on. This can be confusing enough even in the rather simple case of Java.
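For a taste of what that type sublanguage looks like in practice, here is a minimal Python sketch (the function and its data are invented for illustration):

```python
from typing import List, Tuple

# A bare `int` is rarely enough; realistic signatures quickly grow
# nested parametric types such as List[Tuple[int, str]].
def first_labels(pairs: List[Tuple[int, str]]) -> List[str]:
    return [label for _, label in pairs]

print(first_labels([(1, "a"), (2, "b")]))  # ['a', 'b']
```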
Then you'll need to handle this information during both the translation phase and the execution phase, because it's silly to only check for static errors; the user will expect the type constraints to always hold if they are specified at all. Dynamic languages are not particularly fast as it is, and such checks will slow them down even more. A static language can spend serious effort checking types because it only does so once; a dynamic language can't.
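A sketch of what a dynamic runtime would have to do on every call to honor a declared type (`checked_add` is a hypothetical helper, not part of any real checker):

```python
def checked_add(a, b):
    # To enforce declared types, a dynamic runtime must pay for checks
    # like these on every single call; a static compiler checks once.
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("checked_add expects ints")
    return a + b

print(checked_add(2, 3))  # 5
```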
Now imagine adding and maintaining all of this just so that people can sometimes, optionally, use these features, catching only a small fraction of type errors. I don't think it's worth the effort.
The very point of dynamic languages is to have a very small and very malleable framework, within which you can easily do things that are much more involved when done in a static language: various forms of monkey-patching that are used for metaprogramming, mocking and testing, dynamic replacement of code, etc. Smalltalk and Lisp, both very dynamic, took it to such an extreme as to ship environment images instead of building from source. But when you want to ensure that particular data paths are type-safe, add assertions and write more unit tests.
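As a sketch of the kind of monkey-patching meant here, in Python (the class and names are invented for illustration):

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())  # hello

# Replace the method on the class at runtime, e.g. to mock it in a test.
Greeter.greet = lambda self: "mocked"
print(g.greet())  # mocked
```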
Update from 2020: some dynamic languages now support partial typing of sorts. Python allows type hints, to be used by external tools like `mypy`. TypeScript allows mixing with type-oblivious JavaScript. Still, the points above mostly hold.
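A small sketch of such partial typing with Python type hints (function names invented; a checker like mypy sees only the annotated parts, while the interpreter ignores the hints entirely):

```python
def double(n: int) -> int:   # annotated: mypy can check calls to this
    return n * 2

def untyped(x):              # unannotated: mypy treats x as Any
    return double(x)         # passes the checker even if x turns out wrong

print(double(21))  # 42
```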
I believe it is primarily historical baggage.
The most prominent and oldest languages with `null` are C and C++. But there, `null` does make sense: a pointer is still quite a numerical, low-level concept. And as someone else said, in the mindset of C and C++ programmers, having to explicitly state that a pointer can be `null` wouldn't make sense.
Second in line comes Java. Since Java's designers were trying to stay close to C++, so that the transition from C++ to Java would be simpler, they probably didn't want to mess with such a core concept of the language. Also, implementing explicit `null` would have required much more effort, because you have to check whether a non-null reference is actually set properly after initialization.
Most other languages follow Java: they usually copy the way C++ or Java does it, and considering how core a concept the implicit `null` of reference types is, it becomes really hard to design a language that uses explicit `null`.
Best Answer
Those are two independent questions: why do we declare variables, and why do we declare types? Incidentally, the answer to both is: we don't.
There are plenty of statically typed programming languages where you don't need to declare types. The compiler can infer the types from the surrounding context and the usage.
For example, in Scala you can say `val i: Int = 23`, or you could just say `val i = 23`. The two are exactly equivalent: the compiler will infer the type to be `Int` from the initialization expression `23`. Likewise, in C♯, you can say either `int i = 23;` or `var i = 23;`, and they both mean the exact same thing.
This feature is called type inference, and many languages besides Scala and C♯ have it: Haskell, Kotlin, Ceylon, ML, F♯, C++, you name it. Even Java has limited forms of type inference.
In dynamically typed programming languages, variables don't have types at all. Types exist only dynamically at runtime, not statically: only values and expressions have types, and they have them only at runtime.
E.g. in ECMAScript:
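A minimal sketch (the variable name is illustrative):

```javascript
let x = 23;
console.log(typeof x);  // "number": the value 23 has a type
x = "twenty-three";     // legal, because the variable itself has no type
console.log(typeof x);  // "string"
```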
And lastly, in a lot of languages, you don't even need to declare variables at all. e.g. in Ruby:
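A minimal sketch (the variable name is illustrative):

```ruby
# No declaration anywhere: assigning to x is what creates it.
x = 23
puts x  # 23
```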
In fact, that last example is valid in a number of programming languages. The exact same line of code would also work in Python, for example.
So,