Technically speaking, Java does have type inference when using generics. Given a generic method like
public <T> T foo(T t) {
return t;
}
The compiler will analyze and understand that when you write
// String
foo("bar");
// Integer
foo(new Integer(42));
a String is going to be returned for the first call and an Integer for the second, based on what was passed as an argument. You will get the proper compile-time checking as a result. In Java 7, you can also get some type inference when instantiating generics, like so:
Map<String, String> foo = new HashMap<>();
Java is kind enough to fill in the empty angle brackets for us. Now, why doesn't Java support type inference as part of variable assignment? At one point, there was an RFE for type inference in variable declarations, but this was closed as "Will not fix" because:
Humans benefit from the redundancy of the type declaration in two ways.
First, the redundant type serves as valuable documentation - readers do
not have to search for the declaration of getMap() to find out what type
it returns.
Second, the redundancy allows the programmer to declare the intended type,
and thereby benefit from a cross check performed by the compiler.
The contributor who closed this also noted that the feature just feels "un-java-like", and I, for one, agree. Java's verbosity can be both a blessing and a curse, but it does make the language what it is.
Of course, that particular RFE was not the end of the conversation. The feature was considered again during Java 7's development, and some test implementations were created, including one by James Gosling himself. It was ultimately shot down once more.
With the release of Java 8, we now get type inference as a part of lambdas as such:
List<String> names = Arrays.asList("Tom", "Dick", "Harry");
Collections.sort(names, (first, second) -> first.compareTo(second));
The Java compiler is able to look at the method Collections#sort(List<T>, Comparator<? super T>) and then at the interface Comparator#compare(T o1, T o2) and determine that first and second should both be Strings, thus allowing the programmer to forgo restating the type in the lambda expression.
In the old days of C, there was no boolean type. People used int for storing boolean data, and it mostly worked: zero was false and everything else was true.
This meant that if you took an int flag = 0; and later did flag++, the value would be true. This would work no matter what the value of flag was (unless you did it so many times that it rolled over and you got back to zero, but let's ignore that): incrementing the flag when its value was 1 would give 2, which was still true.
Some people used this for unconditionally setting a boolean value to true. I'm not sure it ever became idiomatic, but it's in some code.
This never worked for --, because if the value was anything other than 1 (which it could be), decrementing it would still not produce false. And if it was already false (0) and you applied the decrement operator, it wouldn't remain false: it would become -1, which is nonzero and therefore true.
When moving code from C to C++ in the early days, it was very important that C code included in C++ still worked. And so the specification for C++ (section 5.2.6, on page 71) reads:
The value obtained by applying a postfix ++ is the value that the operand had before applying the operator. [Note: the value obtained is a copy of the original value ] The operand shall be a modifiable lvalue. The type of the operand shall be an arithmetic type or a pointer to a complete object type. After the result is noted, the value of the object is modified by adding 1 to it, unless the object is of type bool, in which case it is set to true. [Note: this use is deprecated, see annex D. ]

The operand of postfix -- is decremented analogously to the postfix ++ operator, except that the operand shall not be of type bool.
This is mentioned again in section 5.3.2 (which covers the prefix operators; 5.2.6 was on the postfix ones).
As you can see, this behavior is deprecated (Annex D in the document, page 709) and shouldn't be used. But that's why it exists, and why you may sometimes see it in code. Don't do it.
Best Answer
Without getting in contact with people who were actually involved in these design decisions, I think we're unlikely to find a definitive answer. However, based on the timelines of the development of both Java and C++, I would conjecture that Java's boolean was chosen before, or contemporaneously with, the introduction of bool to C++, and certainly before bool was in wide use. It is possible that boolean was chosen due to its longer history of use (as in Boolean algebra), or to match other languages (such as Pascal) which already had a boolean type.

Historical context
According to Evolving a language in and for the real world: C++ 1991-2006, the bool type was introduced to C++ in 1993.

Java included boolean in its first release in 1995 (Java Language Specification 1.0). The earliest language specification I can find is the Oak 0.2 specification (Oak was later renamed to Java). That Oak specification is marked "Copyright 1994", but the project itself was started in 1991 and apparently had a working demo by the summer of 1992.