I believe it is primarily historical baggage.
The most prominent and oldest languages with null are C and C++. But there, null does make sense: pointers are a fairly numerical, low-level concept. And as someone else said, in the mindset of C and C++ programmers, having to explicitly declare that a pointer can be null doesn't make sense.
Second in line comes Java. Since Java's designers were trying to stay close to C++, so that the transition from C++ to Java would be simpler, they probably didn't want to mess with such a core concept of the language. Also, implementing explicit null would have required much more effort, because the compiler has to check that every non-null reference is actually assigned properly after initialization.
All other languages are the same as Java. They usually copy the way C++ or Java does it, and considering how core a concept the implicit nullability of reference types is, it becomes really hard to design a language that uses explicit null.
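As one illustration of what "explicit null" looks like in practice, Java itself later added java.util.Optional, which makes possible absence part of the type rather than an implicit property of every reference (the findUser method below is a hypothetical example, not from any real API):

```java
import java.util.Optional;

public class ExplicitNull {
    // The return type itself advertises that the result may be absent;
    // a plain String return type carries no such information.
    static Optional<String> findUser(int id) {
        return id == 42 ? Optional.of("alice") : Optional.empty();
    }

    public static void main(String[] args) {
        // The caller is forced to handle the empty case explicitly.
        String name = findUser(7).orElse("unknown");
        System.out.println(name); // prints "unknown"
    }
}
```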
Throughout the java.util.* package there are instances where code is written with package-level protection. For example, this bit of java.lang.String (a constructor in Java 6):
// Package private constructor which shares value array for speed.
String(int offset, int count, char value[]) {
    this.value = value;
    this.offset = offset;
    this.count = count;
}
or this getChars method:
/** Copy characters from this string into dst starting at dstBegin.
    This method doesn't perform any range checking. */
void getChars(char dst[], int dstBegin) {
    System.arraycopy(value, offset, dst, dstBegin, count);
}
The reason for this is that the designers of the code (and java.util.* can be thought of as a rather large library) wanted faster performance at the cost of various safety guarantees: range checks on arrays, direct access to fields that expose the implementation rather than the interface, 'unsafe' access to parts of the class that are otherwise considered immutable via the public methods.
By restricting these methods and fields so they can only be accessed by other classes in the same package (java.lang, in String's case), the designers accepted closer coupling between those classes but avoided exposing these implementation details to the world.
If you are working on an application and don't need to worry about other users of your code, package-level protection isn't something you need to worry about. Beyond various aspects of design purity, you could make everything public and be fine with that.
However, if you are working on a library that is to be used by others (or want to avoid entangling your classes too much for later refactoring), you should use the strictest level of protection your needs afford. Often this means private. Sometimes, however, you need to expose a little bit of the implementation detail to other classes in the same package, to avoid repeating yourself or to make the actual implementation a bit easier, at the expense of coupling the two classes and their implementations. Hence, package-level protection.
A very useful application of dynamic scoping is passing contextual parameters without having to add new parameters explicitly to every function in a call stack.
For example, Clojure supports dynamic scoping via binding, which can be used to temporarily reassign the value of *out* for printing. If you re-bind *out*, then every call to print within the dynamic scope of the binding will print to your new output stream. This is very useful if, for example, you want to redirect all printed output to some kind of debugging log: a do-stuff function called within such a binding will print to the debug output rather than standard out, and notably no output parameter has to be added to do-stuff to enable this.
Note that Clojure's bindings are also thread-local, so you don't have an issue with concurrent usage of this capability. This makes bindings considerably safer than (ab)using global variables for the same purpose.
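Since Clojure's bindings are thread-local, the same pattern can be roughly sketched in Java with ThreadLocal, the closest standard analogue to a dynamic var. This is not Clojure's mechanism, just an illustration of the idea; the DynamicOut class, doStuff, and withOut names are all invented for this sketch:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class DynamicOut {
    // A thread-local "dynamic variable" holding the current output
    // stream, defaulting to standard out -- loosely like Clojure's *out*.
    static final ThreadLocal<PrintStream> OUT =
            ThreadLocal.withInitial(() -> System.out);

    // Deep in the call stack: prints to whatever OUT currently holds,
    // with no output parameter threaded through the calls.
    static void doStuff() {
        OUT.get().println("doing stuff");
    }

    // Temporarily re-binds OUT for the duration of a block,
    // restoring the previous value afterwards -- like (binding ...).
    static void withOut(PrintStream out, Runnable body) {
        PrintStream saved = OUT.get();
        OUT.set(out);
        try {
            body.run();
        } finally {
            OUT.set(saved); // restore the outer binding
        }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream log = new ByteArrayOutputStream();
        // Everything doStuff prints inside this "binding" goes to log.
        withOut(new PrintStream(log), DynamicOut::doStuff);
        System.out.println("captured: " + log.toString().trim());
    }
}
```

Because each thread sees its own value of OUT, two threads can hold different "bindings" concurrently without interfering, which is the same property that makes Clojure's bindings safer than mutating a global variable.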