Technically speaking, Java does have type inferencing when using generics. With a generic method like
public <T> T foo(T t) {
    return t;
}
The compiler will analyze and understand that when you write
// String
foo("bar");
// Integer
foo(Integer.valueOf(42));
A String is going to be returned for the first call and an Integer for the second, based on the types of the arguments, and you get proper compile-time checking as a result. Additionally, in Java 7, one gets some additional type inference when instantiating generics via the diamond operator, like so
Map<String, String> foo = new HashMap<>();
Java is kind enough to fill in the blank angle brackets for us. Now why doesn't Java support type inferencing as a part of variable assignment? At one point, there was an RFE for type inferencing in variable declarations, but this was closed as "Will not fix" because
Humans benefit from the redundancy of the type declaration in two ways.
First, the redundant type serves as valuable documentation - readers do
not have to search for the declaration of getMap() to find out what type
it returns.
Second, the redundancy allows the programmer to declare the intended type,
and thereby benefit from a cross check performed by the compiler.
The contributor who closed this also noted that the feature just feels "un-java-like", and I'm inclined to agree. Java's verbosity can be both a blessing and a curse, but it does make the language what it is.
Of course that particular RFE was not the end of that conversation. During Java 7, this feature was again considered, with some test implementations being created, including one by James Gosling himself. Again, this feature was ultimately shot down.
With the release of Java 8, we now get type inference as a part of lambdas as such:
List<String> names = Arrays.asList("Tom", "Dick", "Harry");
Collections.sort(names, (first, second) -> first.compareTo(second));
The Java compiler is able to look at the method Collections#sort(List<T>, Comparator<? super T>), then at the interface method Comparator#compare(T o1, T o2), and determine that first and second must be Strings, thus allowing the programmer to forgo restating the type in the lambda expression.
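To make the inference concrete, here is the same sort with the parameter types spelled out explicitly, plus the even terser method-reference form (a runnable sketch):

```java
import java.util.Arrays;
import java.util.List;

public class LambdaInference {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Tom", "Dick", "Harry");
        // What inference saves us from writing: explicitly typed parameters.
        names.sort((String first, String second) -> first.compareTo(second));
        // Terser still: a method reference, with everything inferred.
        names.sort(String::compareTo);
        System.out.println(names); // [Dick, Harry, Tom]
    }
}
```

All three forms compile to the same thing; the compiler works backwards from the Comparator<? super String> target type in every case.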
Let's take a look at Java. Through Java 8, it has no type inference for local variable declarations. This means I frequently have to spell out the type, even when it is perfectly obvious to a human reader what the type is:
int x = 42; // yes I see it's an int, because it's a bloody integer literal!
// Why the hell do I have to spell the name twice?
SomeObjectFactory<OtherObject> obj = new SomeObjectFactory<>();
And sometimes it's just plain annoying to spell out the whole type.
// this code walks through all entries in an "(int, int) -> SomeObject" table
// represented as two nested maps
// Why are there more types than actual code?
for (Map.Entry<Integer, Map<Integer, SomeObject<SomeObject, T>>> row : table.entrySet()) {
    Integer rowKey = row.getKey();
    Map<Integer, SomeObject<SomeObject, T>> rowValue = row.getValue();
    for (Map.Entry<Integer, SomeObject<SomeObject, T>> col : rowValue.entrySet()) {
        Integer colKey = col.getKey();
        SomeObject<SomeObject, T> colValue = col.getValue();
        doSomethingWith(rowKey, colKey, colValue);
    }
}
This verbose static typing gets in the way of me, the programmer. Most type annotations are repetitive line-filler, content-free regurgitations of what we already know. However, I do like static typing, as it can really help with discovering bugs, so switching to dynamic typing isn't always a good answer. Type inference is the best of both worlds: I can omit the irrelevant types, but still be sure that my program (type-)checks out.
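For what it's worth, Java 10's var (which arrived after this complaint was written) removes most of that noise from the table walk above. A sketch, with a concrete String payload standing in for SomeObject<SomeObject, T>:

```java
import java.util.HashMap;
import java.util.Map;

public class VarTableWalk {
    public static void main(String[] args) {
        // A concrete (int, int) -> String table as two nested maps.
        Map<Integer, Map<Integer, String>> table = new HashMap<>();
        table.computeIfAbsent(1, k -> new HashMap<>()).put(2, "cell(1,2)");

        // var infers Map.Entry<Integer, Map<Integer, String>> for row
        // and Map.Entry<Integer, String> for col -- nothing spelled twice.
        for (var row : table.entrySet()) {
            var rowKey = row.getKey();
            for (var col : row.getValue().entrySet()) {
                System.out.println(rowKey + "," + col.getKey() + " -> " + col.getValue());
            }
        }
    }
}
```

The types are still fully static and checked; only their spelling is inferred.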
While type inference is really useful for local variables, it should not be used for public APIs which have to be unambiguously documented. And sometimes the types really are critical for understanding what's going on in the code. In such cases, it would be foolish to rely on type inference alone.
There are many languages that support type inference. For example:
C++: the auto keyword triggers type inference. Without it, spelling out the types for lambdas or for entries in containers would be hell.
C#: you can declare variables with var, which triggers a limited form of type inference. It still handles most cases where you want type inference, and in certain places you can leave out the type completely (e.g. in lambdas).
Haskell, and any language in the ML family. While the specific flavour of type inference used here is quite powerful, you still often see type annotations for functions, and for two reasons: The first is documentation, and the second is a check that type inference actually found the types you expected. If there is a discrepancy, there's likely some kind of bug.
And since this answer was originally written, type inference has become more popular. E.g. Java 10 has finally added C#-style inference. We're also seeing more type systems on top of dynamic languages, e.g. TypeScript for JavaScript, or mypy for Python, which make heavy use of type inference in order to keep the overhead of type annotations manageable.
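A brief sketch of what Java 10's var looks like in practice; note that it applies only to local variables that have initializers:

```java
import java.util.ArrayList;

public class VarBasics {
    public static void main(String[] args) {
        var x = 42;                           // inferred as int
        var names = new ArrayList<String>();  // inferred as ArrayList<String>
        names.add("Tom");
        // var is local-only: it is not allowed on fields, method parameters,
        // or return types, and "var v = null;" does not compile.
        System.out.println(x + " " + names);  // 42 [Tom]
    }
}
```

Public API signatures still spell out their types in full, which addresses the documentation concern from the old RFE.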
Haskell's type system is fully inferrable (leaving aside polymorphic recursion, certain language extensions, and the dreaded monomorphism restriction), yet programmers still frequently provide type annotations in the source code even when they don't need to. Why?
Consider map :: (a -> b) -> [a] -> [b]. Its more general form (fmap :: Functor f => (a -> b) -> f a -> f b) applies to all Functors, not just lists, but it was felt that map would be easier for beginners to understand, so it lives on alongside its bigger brother.
On the whole, the downsides of a statically-typed-but-inferrable system are much the same as the downsides of static typing in general, a well-worn discussion on this site and others (Googling "static typing disadvantages" will get you hundreds of pages of flame-wars). Of course, some of those disadvantages are ameliorated by the smaller quantity of type annotations in an inferrable system. Plus, type inference has its own advantages: hole-driven development wouldn't be possible without it.
Java* proves that a language requiring too many type annotations gets annoying, but with too few you lose out on the advantages I described above. Languages with opt-out type inference strike an agreeable balance between the two extremes.
*Even Java, that great scapegoat, performs a certain amount of local type inference. In a statement like
Map<String, Integer> map = new HashMap<>();
you don't have to repeat the generic type arguments in the constructor call. On the other hand, ML-style languages are typically globally inferrable.