No, throwing away good analysis results is definitely not the usual approach. For a stark contrast, you may want to check out the Scala compiler. It is organized into execution phases, and you can add plugins that run after particular phases. Placing a plugin after the typer phase is particularly interesting, because that gives you all of the inferred types in a very handy form.
The AST, however, is not so much changed as annotated. The basic structure of the statements, expressions, function definitions, etc. held in your AST remains unchanged, of course. What you really do during type inference is annotate the tree nodes with their respective types.
For example, before type inference ran, you would have an AST with a function node for the function `f`, located somewhere within the AST, with child nodes that represent its arguments and statements/expressions. During type inference you determine the types of those child nodes, i.e. of the statements/expressions, and from them you determine the type of `f` itself. Now you could be satisfied with detecting that `f` returns an `int` and is only ever used in places where an `int` is expected, but there is no reason to lose this information.
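As a sketch of what such annotation can look like, here is a minimal, hypothetical Haskell AST (the names `Ty`, `Expr`, and `typeOf` are made up for illustration, not taken from any particular compiler) whose nodes carry their inferred type:

```haskell
-- A toy type language and a toy AST whose nodes carry their inferred type.
data Ty = TInt | TBool | TFun Ty Ty deriving (Eq, Show)

data Expr
  = Lit Int Ty              -- literal, annotated with its type
  | Var String Ty           -- variable occurrence, annotated
  | App Expr Expr Ty        -- application, annotated with its result type
  deriving Show

-- After inference has run, the type of any subtree is a cheap field
-- access, not a re-run of the inference algorithm:
typeOf :: Expr -> Ty
typeOf (Lit _ t)   = t
typeOf (Var _ t)   = t
typeOf (App _ _ t) = t
```

With this representation, any later phase (a plugin, an IDE, a macro expander) reads types off the tree instead of re-running inference.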
As for interpreter vs. compiler, the benefits of having types annotated on your AST go far beyond that. Consider IDEs, for example: plugin authors are very thankful to have a type-annotated AST available. It is much more efficient to simply know that `f` returns an `int` when implementing code completion, instead of having to run type inference yet again.
You may want to extend your language at some point. If the extension sits anywhere past type inference, it will typically need to rely on inference's results. For example, consider adding type-safe macros, or some kind of semantic checks.
In summary: type information in a strongly typed language is so important that it borders on blasphemy to perform type inference only to throw away its results. You rob yourself and others of information that is both extremely powerful when available and rather costly to generate.
First, the GHC error.

GHC is attempting to unify a few constraints on `x`. First, we use it as a function, so

    x :: a -> b

Next, we use it as an argument to that function:

    x :: a

And finally, we unify it with the original argument expression, so

    x :: (a -> b) -> c -> d
Now `x x` becomes an attempt to unify `t2 -> t1 -> t0` with its own argument type. We can't unify this, since it would require unifying `t2`, the type of the first argument of `x`, with the type of `x` itself, producing an infinite type. Hence our error message.
Next, why not general recursive types? Well, the first point worth noting is the difference between equi- and iso-recursive types:

- equi-recursive types are what you'd expect: `mu X . Type` is exactly equivalent to expanding or folding it arbitrarily.
- iso-recursive types provide a pair of operators, `fold` and `unfold`, which explicitly fold and unfold the recursive definitions of types.
Now, equi-recursive types sound ideal, but they are absurdly hard to get right in complex type systems; they can actually make type checking undecidable. I'm not familiar with every detail of OCaml's type system, but fully equi-recursive types in Haskell could cause the type checker to loop arbitrarily while trying to unify types, and by default Haskell makes sure that type checking terminates. Furthermore, in Haskell, type synonyms are dumb: the most useful recursive types would be defined as synonyms, like

    type T = T -> ()

but synonyms are inlined almost immediately, and you can't inline a recursive type: it's infinite! Recursive type synonyms in Haskell would therefore demand a huge overhaul of how synonyms are handled, which is probably not worth the effort even as a language extension.
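By contrast, recursion guarded by a constructor is accepted, because a `newtype` introduces a fresh type rather than an abbreviation to be expanded. A small sketch (the names `T` and `apply` are mine, not from any library):

```haskell
-- type T = T -> ()        -- rejected: the synonym would expand forever
newtype T = T (T -> ())    -- accepted: the recursion sits behind a constructor

-- Unwrapping has to be explicit, which is exactly the iso-recursive flavour:
apply :: T -> ()
apply t@(T f) = f t
```

The pattern match on `T` is the "unfold" step; applying the constructor is the "fold" step.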
Iso-recursive types are a bit of a pain to use: you more or less have to tell the type checker explicitly how to fold and unfold your types, making your programs more complex to read and write.
However, this is very similar to what you're doing with your `Mu` type: `Roll` is fold, and `unroll` is unfold. So actually, we do have iso-recursive types baked in. Equi-recursive types are just too complex, so systems like OCaml and Haskell force you to pass recurrences through type-level fixpoints.
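To make that concrete, here is the usual Haskell rendering of such a `Mu` type, with self-application (the very thing `x x` needed) made typeable by explicit rolling and unrolling; `selfApply`, `fix'`, and `fact` are illustrative names, not standard functions:

```haskell
newtype Mu a = Roll { unroll :: Mu a -> a }

-- x x is ill-typed, but unroll x x is fine: the unroll is the
-- explicit iso-recursive step the type checker demands.
selfApply :: Mu a -> a
selfApply x = unroll x x

-- That is already enough to build a fixed-point combinator
-- without any built-in recursion:
fix' :: (a -> a) -> a
fix' f = selfApply (Roll (\x -> f (unroll x x)))

fact :: Integer -> Integer
fact = fix' (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Every place the untyped lambda calculus would write `x x`, the typed version writes `unroll x x`; that bookkeeping is the price of iso-recursion.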
Now if this interests you, I'd recommend Types and Programming Languages. My copy is sitting open in my lap as I'm writing this to make sure I've got the right terminology :)
Best Answer
Hindley-Milner type inference is used for Hindley-Milner type systems, a restriction of System F. The interesting feature of HM type systems is that they have parametric polymorphism (a.k.a. generics). That is the single biggest type-system feature that Golang refuses to have.
With that frustrating restriction, HM-style type inference is impossible. Consider an untyped function `f` that calls a method on its argument `a` and returns the result. What is the type of `f`? We might notice that `a` must have a method, so we could use an anonymous interface:

    func f(a interface { method() ??? }) ???

However, we have no idea what the return type is. With type variables we could declare such a type, but Go does not have type variables, so this won't work. While implicit interfaces make some aspects of type inference easier, we now have no way to find out the return type of a function call. An HM system requires all functions to be declared rather than implied, and each name can only have a single type (whereas Go's methods can have different types in different interfaces).
Instead, Go requires that functions are always fully declared, but allows variables to use type inference. This is possible because the right-hand side of an assignment

    variable := expression

already has a known type at that point in the program. This kind of type inference is simple, correct, and linear.