I've read that Go doesn't actually have true type inference in the sense that functional languages such as ML or Haskell have it, but I haven't been able to find an easy-to-understand comparison of the two approaches. Could someone explain in basic terms how type inference in Go differs from type inference in Haskell, and the pros and cons of each?
Type Inference in Golang and Haskell – Key Concepts
Tags: go, haskell, type-systems
Related Solutions
I actually think that return type polymorphism is one of the best features of type classes. After having used it for a while, it is sometimes hard for me to go back to OOP style modeling where I don't have it.
Consider the encoding of algebra. In Haskell we have a type class Monoid (ignoring mconcat):
class Monoid a where
    mempty :: a
    mappend :: a -> a -> a
How could we encode this as an interface in an OO language? The short answer is we can't. That's because the type of mempty is (Monoid a) => a, aka return type polymorphism. Having the ability to model algebra is incredibly useful IMO.*
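To see what this buys you in practice, here is a minimal runnable sketch (using the standard library's Monoid, which nowadays also carries a Semigroup superclass and mconcat, but the idea is the same): the value mempty produces is chosen entirely by the type the surrounding context demands, i.e. the polymorphism is in the return type.

import Data.Monoid (Sum (..))

-- mempty :: Monoid a => a  -- the instance is picked by the expected type
emptyList :: [Int]
emptyList = mempty             -- []

emptySum :: Sum Int
emptySum = mempty              -- Sum {getSum = 0}

main :: IO ()
main = do
    print emptyList
    print emptySum
    print (mempty :: String)   -- ""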
You start your post with the complaint about "Referential Transparency." This raises an important point: Haskell is a value-oriented language. So expressions like read "3" don't have to be understood as things that compute values; they can also be understood as values. What this means is that the real issue is not return type polymorphism: it is values with polymorphic type ([] and Nothing). If the language should have these, then it really has to have polymorphic return types for consistency.
Should we be able to say [] is of type forall a. [a]? I think so. These features are very useful, and they make the language much simpler.
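Concretely, here is a small sketch (nothing beyond the Prelude) of those polymorphic values being used at several different element types:

noInts :: [Int]
noInts = []          -- [] :: forall a. [a], used here at a = Int

noName :: Maybe String
noName = Nothing     -- Nothing :: forall a. Maybe a

main :: IO ()
main = do
    print noInts                    -- []
    print noName                    -- Nothing
    print (length ([] :: [Bool]))   -- the same [] at yet another type; prints 0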
If Haskell had subtype polymorphism, [] could be a subtype of all [a]. The problem is that I don't know of a way of encoding that without having the type of the empty list be polymorphic. Consider how it would be done in Scala (it is shorter than doing it in the canonical statically typed OOP language, Java):
abstract class List[A]
case class Nil[A]() extends List[A]
case class Cons[A](h: A, t: List[A]) extends List[A]
Even here, Nil() is an object of type Nil[A].**
Another advantage of return type polymorphism is that it makes the Curry-Howard embedding much simpler.
Consider the following logical theorems:
t1 = forall P. forall Q. P -> P or Q
t2 = forall P. forall Q. P -> Q or P
We can trivially capture these as theorems in Haskell:
data Either a b = Left a | Right b
t1 :: a -> Either a b
t1 = Left
t2 :: a -> Either b a
t2 = Right
To sum up: I like return type polymorphism, and only think it breaks referential transparency if you have a limited notion of values (although this is less compelling in the ad hoc type class case). On the other hand, I do find your points about MR and type defaulting compelling.
*. In the comments ysdx points out this isn't strictly true: we could re-implement type classes by modeling the algebra as another type, as in this Java:
abstract class Monoid<M> {
    abstract M empty();
    abstract M append(M m1, M m2);
}
You then have to pass objects of this type around with you. Scala has a notion of implicit parameters which avoids some, but in my experience not all, of the overhead of explicitly managing these things. Putting your utility methods (factory methods, binary methods, etc.) on a separate F-bounded type turns out to be an incredibly nice way of managing things in an OO language that has support for generics. That said, I'm not sure I would have grokked this pattern if I didn't have experience modeling things with typeclasses, and I'm not sure other people will.
It also has limitations: out of the box there is no way to get an object that implements the typeclass for an arbitrary type. You have to either pass the values explicitly, use something like Scala's implicits, or use some form of dependency-injection technology. Life gets ugly. On the other hand, it is nice that you can have multiple implementations for the same type: something can be a Monoid in multiple ways. Also, carrying around these structures separately has, IMO, a more mathematically modern, constructive feel to it. So, although I still generally prefer the Haskell way of doing this, I probably overstated my case.
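For a sense of what carrying the algebra around explicitly looks like, here is a minimal Haskell sketch of the same idea as the Java class above; the names (MonoidDict, mconcatWith, and so on) are made up for illustration. Encoded this way, the same type can be a monoid in more than one way:

-- The algebra modeled as an ordinary value that you pass around yourself.
data MonoidDict a = MonoidDict
    { dEmpty  :: a
    , dAppend :: a -> a -> a
    }

sumMonoid :: MonoidDict Int
sumMonoid = MonoidDict 0 (+)

productMonoid :: MonoidDict Int
productMonoid = MonoidDict 1 (*)

-- Every use site has to be told which structure to use.
mconcatWith :: MonoidDict a -> [a] -> a
mconcatWith d = foldr (dAppend d) (dEmpty d)

main :: IO ()
main = do
    print (mconcatWith sumMonoid     [1, 2, 3, 4])   -- 10
    print (mconcatWith productMonoid [1, 2, 3, 4])   -- 24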
Typeclasses with return type polymorphism make this kind of thing easy to handle. That doesn't mean it is the best way to do it.
**. Jörg W Mittag points out this isn't really the canonical way of doing this in Scala. Instead, we would follow the standard library with something more like:
abstract class List[+A] ...
case class Cons[A](head: A, tail: List[A]) extends List[A] ...
case object Nil extends List[Nothing] ...
This takes advantage of Scala's support for bottom types, as well as covariant type parameters. So, Nil is of type Nil, not Nil[A]. At this point we are pretty far from Haskell, but it is interesting to note how Haskell represents the bottom type:
undefined :: forall a. a
That is, it isn't the subtype of all types; it is polymorphically a member of all types.
Yet more return type polymorphism.
Folds over lists consist of three elements: the list to fold over, some accumulator function f, and an initial value. They transform the list a:b:c:[] into a `f` (b `f` (c `f` init)), where init is the initial value, i.e. they replace the cons constructor (:) with your accumulator function and the empty list [] with your supplied initial value.

You can think of your append function as transforming the list x1:x2:..:xn into the list x1:x2:..:xn:ys for some given list ys. This can be done by simply using ys as the replacement for the empty list [] which terminates your xs list.
Your code can be written as
append xs ys = foldr (\x y -> x:y) ys xs
Your accumulator function f has the type a -> [a] -> [a] and is the same as the (:) function, so you could write it as
append xs ys = foldr (:) ys xs
If the first argument xs is the list x1:x2:...:xn, then the result of append is the list x1:x2:...:xn:ys, as required.
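Putting it together, a quick runnable check of the final definition:

append :: [a] -> [a] -> [a]
append xs ys = foldr (:) ys xs

main :: IO ()
main = print (append [1, 2, 3] [4, 5])   -- [1,2,3,4,5]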
Best Answer
See this StackOverflow answer regarding Go's type inference. I'm not familiar with Go myself, but based on this answer it seems like a one-way "type deduction" (to borrow some C++ terminology). It means that if you have:
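x := y + z // short variable declaration: x gets the type of the expression y + z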
then the type of x is deduced by figuring out the type of y + z, which is a relatively trivial thing to do for the compiler. To do this, the types of y and z need to be known a priori: this could be done via type annotations or inferred from the literals assigned to them.

In contrast, most functional languages have type inference that uses all possible information within a module (or function, if the inference algorithm is local) to derive the type of the variables. Complicated inference algorithms (such as Hindley-Milner) often involve some form of type unification (a bit like solving equations) behind the scenes. For example, in Haskell, if you write:
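x y z = y + z   -- no type annotations anywhere (a representative definition)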
then Haskell can infer not just the type of x but also the types of y and z, simply based on the fact that you're performing addition on them. In this case:
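x :: Num a => a -> a -> a   -- y and z are each inferred to have type a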
(The lowercase a here denotes a polymorphic type, often called "generics" in other languages like C++. The Num a => part is a constraint indicating that the type a must support some notion of addition.)

Here's a more interesting example: the fixed-point combinator, which allows any recursive function to be defined:
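fix f = f (fix f)   -- no type signature given; the compiler infers one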
Notice that nowhere have we specified the type of f, nor did we specify the type of fix, yet the Haskell compiler can automatically figure out that:
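fix :: (t -> t) -> t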
This says that f must be a function from some arbitrary type t to the same type t, and that fix is a function that receives a parameter of type t -> t and returns a result of type t.