Is return-type(-only) polymorphism in Haskell a good thing?

functional-programming, haskell

One thing that I've never quite come to terms with in Haskell is how you can have polymorphic constants and functions whose return type cannot be determined by their input type, like

class Foo a where
    foo :: Int -> a

Some of the reasons that I do not like this:

Referential transparency:

"In Haskell, given the same input, a function will always return the same output", but is that really true? read "3" return 3 when used in an Int context, but throws an error when used in a, say, (Int,Int) context. Yes, you can argue that read is also taking a type parameter, but the implicitness of the type parameter makes it lose some of its beauty in my opinion.

Monomorphism restriction:

One of the most annoying things about Haskell. Correct me if I'm wrong, but the whole reason for the MR is that a computation that looks shared might not be shared, because the type parameter is implicit.
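
The classic illustration, as a sketch (it needs NoMonomorphismRestriction to even compile at two result types, which is exactly the situation the MR is there to forbid):

{-# LANGUAGE NoMonomorphismRestriction #-}

import Data.List (genericLength)

-- len *looks* like a value shared between the two components, but its
-- type is Num b => b: under the hood it is a function of a hidden class
-- dictionary, so the list is traversed once at Int and again at Double.
lengths :: [a] -> (Int, Double)
lengths xs = (len, len)
  where
    len = genericLength xs

main :: IO ()
main = print (lengths "abc")   -- (3, 3.0)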

Type defaulting:

Again, one of the most annoying things about Haskell. It happens, e.g., when you pass the result of a function that is polymorphic in its output to a function that is polymorphic in its input. Again, correct me if I'm wrong, but this would not be necessary without functions whose return type cannot be determined by their input type (and without polymorphic constants).
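
A small sketch of what I mean (the commented-out default declaration is only there to make the ambiguity visible):

-- The literals come from fromInteger (polymorphic output) and go straight
-- into print (polymorphic input), so nothing pins down their type.
-- GHC silently defaults them to Integer. Uncommenting the declaration
-- below disables defaulting and turns this into an ambiguous-type error.
-- default ()

main :: IO ()
main = print (2 ^ 10)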

So my question is (running the risk of being stamped as a "discussion question"): Would it be possible to create a Haskell-like language where the type checker disallows these kinds of definitions? If so, what would be the benefits/disadvantages of that restriction?

I can see some immediate problems:

If, say, 2 only had the type Integer, 2/3 would no longer type check with the current definition of /. But in this case, I think type classes with functional dependencies could come to the rescue (yes, I know that this is an extension). Furthermore, I think it is a lot more intuitive to have functions that can take different input types than to have functions that are restricted in their input types while we pass them polymorphic values.
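
A sketch of what I have in mind (Divide is a made-up class, not anything from base, and I'm not claiming this is how / should actually be defined):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- The functional dependency a b -> c says the result type is determined
-- by the argument types, so the arguments themselves can stay monomorphic.
class Divide a b c | a b -> c where
    divide :: a -> b -> c

instance Divide Integer Integer Rational where
    divide x y = fromInteger x / fromInteger y

instance Divide Double Double Double where
    divide = (/)

main :: IO ()
main = print (divide (2 :: Integer) (3 :: Integer))   -- prints 2 % 3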

The typing of values like [] and Nothing seems to me like a tougher nut to crack. I haven't thought of a good way to handle them.

Best Answer

I actually think that return type polymorphism is one of the best features of type classes. After having used it for a while, I sometimes find it hard to go back to OOP-style modeling without it.

Consider the encoding of algebra. In Haskell we have a type class Monoid (ignoring mconcat)

class Monoid a where
   mempty :: a
   mappend :: a -> a -> a

How could we encode this as an interface in an OO language? The short answer is that we can't. That's because the type of mempty is (Monoid a) => a, i.e. return type polymorphism. Having the ability to model algebraic structures like this is incredibly useful, IMO.*
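
To make that concrete, here is roughly the mconcat we set aside, written out (nothing beyond base; Sum comes from Data.Monoid):

import Data.Monoid (Sum(..))

-- mempty :: Monoid a => a. The caller's expected result type picks the
-- instance, which is exactly what an OO interface method cannot express.
mconcat' :: Monoid a => [a] -> a
mconcat' = foldr mappend mempty

main :: IO ()
main = do
    putStrLn (mconcat' ["foo", "bar"])       -- "foobar"
    print (getSum (mconcat' [] :: Sum Int))  -- 0, produced from the type alone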

You start your post with a complaint about "referential transparency". This raises an important point: Haskell is a value-oriented language. Expressions like read "3" don't have to be understood as things that compute values; they can also be understood as values themselves. What this means is that the real issue is not return type polymorphism: it is values with polymorphic type ([] and Nothing). If the language is to have these, then it really has to have polymorphic return types for consistency.

Should we be able to say [] is of type forall a. [a]? I think so. These features are very useful, and they make the language much simpler.
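
A couple of lines to illustrate (ordinary Haskell; nothing special going on):

-- The same polymorphic values, used at whatever type the context demands.
noInts :: [Int]
noInts = []

noName :: Maybe String
noName = Nothing

both :: ([Bool], [Char])
both = ([], [])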

If Haskell had subtype polymorphism, [] could be a subtype of every [a]. The problem is that I don't know of a way of encoding that without having the type of the empty list be polymorphic. Consider how it would be done in Scala (it is shorter than doing it in the canonical statically typed OOP language, Java):

abstract class List[A]
case class Nil[A]() extends List[A]
case class Cons[A](h: A, t: List[A]) extends List[A]

Even here, Nil() is an object of type Nil[A] **

Another advantage of return type polymorphism is that it makes the Curry-Howard embedding much simpler.

Consider the following logical theorems:

 t1 = forall P. forall Q. P -> P or Q
 t2 = forall P. forall Q. P -> Q or P

We can trivially capture these as theorems in Haskell:

data Either a b = Left a | Right b
t1 :: a -> Either a b
t1 = Left
t2 :: a -> Either b a
t2 = Right

To sum up: I like return type polymorphism, and I think it only breaks referential transparency if you have a limited notion of values (although this is less compelling in the ad hoc type class case). On the other hand, I do find your points about the MR and type defaulting compelling.


*. In the comments ysdx points out that this isn't strictly true: we could re-implement type classes by modeling the algebra as another type, like this Java:

abstract class Monoid<M> {
    abstract M empty();
    abstract M append(M m1, M m2);
}

You then have to pass objects of this type around with you. Scala has a notion of implicit parameters which avoids some, but in my experience not all, of the overhead of explicitly managing these things. Putting your utility methods (factory methods, binary methods, etc.) on a separate F-bounded type turns out to be an incredibly nice way of managing things in an OO language that has support for generics. That said, I'm not sure I would have grokked this pattern if I didn't have experience modeling things with type classes, and I'm not sure other people will.

It also has limitations: out of the box, there is no way to get an object that implements the typeclass for an arbitrary type. You have to either pass the values explicitly, use something like Scala's implicits, or use some form of dependency injection. Life gets ugly. On the other hand, it is nice that you can have multiple implementations for the same type: something can be a Monoid in multiple ways. Also, carrying these structures around separately has, IMO, a more mathematically modern, constructive feel to it. So, although I still generally prefer the Haskell way of doing this, I probably overstated my case.

Typeclasses with return type polymorphism make this kind of thing easy to handle. That doesn't mean it is the best way to do it.

**. Jörg W Mittag points out this isn't really the canonical way of doing this in Scala. Instead, we would follow the standard library with something more like:

abstract class List[+A] ...  
case class Cons[A](head: A, tail: List[A]) extends List[A] ...
case object Nil extends List[Nothing] ...

This takes advantage of Scala's support for bottom types, as well as covariant type parameters. So Nil is of type Nil, not Nil[A]. At this point we are pretty far from Haskell, but it is interesting to note how Haskell represents the bottom type:

undefined :: forall a. a

That is, it isn't the subtype of all types; it is, polymorphically, a member of all types. Yet more return type polymorphism.