Programming Languages – Tradeoffs of Type Inference

programming-languages type-systems

It seems that all new programming languages, or at least the ones that became popular, use type inference. Even JavaScript got types and type inference through various implementations (ActionScript, TypeScript, etc.). It looks great to me, but I'm wondering if there are any trade-offs, or why, say, Java or the good old languages don't have type inference.

  • When declaring a variable in Go without specifying its type (using var without a type, or the := syntax), the variable's type is inferred from the value on the right-hand side.
  • D allows writing large code fragments without redundantly specifying types, like dynamic languages do. On the other hand, static inference deduces types and other code properties, giving the best of both the static and the dynamic worlds.
  • The type inference engine in Rust is pretty smart. It does more than look at the type of the r-value during initialization; it also looks at how the variable is used afterwards to infer its type (the sketch after this list shows the same idea).
  • Swift uses type inference to work out the appropriate type. Type inference enables a compiler to deduce the type of a particular expression automatically when it compiles your code, simply by examining the values you provide.
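The common thread is that types flow not just from the initial value but sometimes from later usage too. Here is a minimal sketch of my own, in Haskell, where inference is pervasive and nothing below is annotated:

    -- No annotations anywhere; the compiler infers every type,
    -- from the literals and from how each binding is used.
    half x = x / 2            -- use of (/) implies: Fractional a => a -> a
    pluralize w = w ++ "s"    -- (++) with a String literal forces w :: String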

Best Answer

Haskell's type system is fully inferrable (leaving aside polymorphic recursion, certain language extensions, and the dreaded monomorphism restriction), yet programmers still frequently provide type annotations in the source code even when they don't need to. Why?

  1. Type annotations serve as documentation. This is especially true with types as expressive as Haskell's. Given a function's name and its type you can usually make a pretty good guess at what the function does, especially when the function is parametrically polymorphic (see the first sketch after this list).
  2. Type annotations can drive development. Writing a type signature before you write the body of a function feels kind of like test-driven development (second sketch below). In my experience, once you make a Haskell function compile it often works first time. (Of course, this does not obviate the need for automated tests!)
  3. Explicit types can help you check your assumptions. When I'm trying to understand some code that already works, I frequently pepper it with what I believe to be the correct type annotations (third sketch below). If the code still compiles, I know I've understood it. If it doesn't, the error message tells me where my understanding went wrong.
  4. Type signatures let you specialise polymorphic functions. Very occasionally, an API is more expressive or useful if certain functions are not polymorphic. The compiler won't complain if you give a function a less general type than would be inferred. The classic example is map :: (a -> b) -> [a] -> [b]. Its more general form (fmap :: Functor f => (a -> b) -> f a -> f b) applies to all Functors, not just lists. But it was felt that map would be easier to understand for beginners, so it lives on alongside its bigger brother (fourth sketch below).
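First sketch, for point 1 (an example of my own): with a parametrically polymorphic signature, the type practically dictates the implementation, which is what makes it such good documentation.

    -- The signature says it all: the only total function of this
    -- type (up to bottoms) swaps the pair's components.
    swap :: (a, b) -> (b, a)
    swap (x, y) = (y, x)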
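Second sketch, for point 2: commit to the signature first, then let the compiler reject every body that doesn't match it (pairwise is a made-up example):

    -- Step 1: write the type. Step 2: fill in the body under its guidance.
    pairwise :: (a -> a -> b) -> [a] -> [b]
    pairwise f (x:y:rest) = f x y : pairwise f (y:rest)
    pairwise _ _          = []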
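Third sketch, for point 3 (a hypothetical snippet): annotate the parts you think you understand and let the compiler referee.

    -- If either annotation is wrong, the error message says what
    -- the type actually is.
    average xs = total / len
      where
        total = sum xs                   :: Double
        len   = fromIntegral (length xs) :: Double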
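Fourth sketch, for point 4, using the example above: the compiler happily accepts map at a less general type than it would infer.

    import Prelude hiding (map)

    -- fmap's inferred type is Functor f => (a -> b) -> f a -> f b;
    -- restricting it to lists is a deliberate, legal specialisation.
    map :: (a -> b) -> [a] -> [b]
    map = fmap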

On the whole, the downsides of a statically-typed-but-inferrable system are much the same as the downsides of static typing in general, a well-worn discussion on this site and others (Googling "static typing disadvantages" will get you hundreds of pages of flame-wars). Of course, some of said disadvantages are ameliorated by the smaller quantity of type annotations in an inferrable system. Plus, type inference has its own advantages: hole-driven development wouldn't be possible without it.
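To see what hole-driven development looks like in GHC: leave an underscore where you're stuck, and the compiler infers and reports the type the missing piece must have. The error below is the point of the exercise, and recent GHCs even suggest in-scope candidates such as g:

    compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (_ x)
    -- GHC: Found hole: _ :: a -> b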

Java* proves that a language requiring too many type annotations gets annoying, but with too few you lose out on the advantages I described above. Languages with opt-out type inference strike an agreeable balance between the two extremes.

*Even Java, that great scapegoat, performs a certain amount of local type inference. In a statement like Map<String, Integer> map = new HashMap<>();, you don't have to repeat the generic type arguments in the constructor; the diamond (<>) asks the compiler to infer them. On the other hand, ML-style languages are typically globally inferrable.