I actually think that return type polymorphism is one of the best features of type classes. After having used it for a while, it is sometimes hard for me to go back to OOP style modeling where I don't have it.
Consider the encoding of algebra. In Haskell we have a type class Monoid (ignoring mconcat):
class Monoid a where
  mempty :: a
  mappend :: a -> a -> a
How could we encode this as an interface in an OO language? The short answer is we can't. That's because the type of mempty is (Monoid a) => a, aka return type polymorphism. Having the ability to model algebra is incredibly useful IMO.*
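As a tiny illustration (the names emptyString and sumList are just made up for this sketch), the very same mempty picks a different instance depending on the type it is used at:
mempty :: Monoid a => a

emptyString :: String
emptyString = mempty                  -- ""

sumList :: [Int]
sumList = mappend mempty [1, 2, 3]    -- [1,2,3]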
You start your post with the complaint about "Referential Transparency." This raises an important point: Haskell is a value oriented language. So expressions like read "3" don't have to be understood as things that compute values; they can also be understood as values. What this means is that the real issue is not return type polymorphism: it is values with polymorphic types ([] and Nothing). If the language should have these, then it really has to have polymorphic return types for consistency.
Should we be able to say [] is of type forall a. [a]? I think so. These features are very useful, and they make the language much simpler.
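To make that concrete (names made up for the example), the single polymorphic value [] can be used at any element type, with no subtyping involved:
noInts :: [Int]
noInts = []

noStrings :: [String]
noStrings = []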
If Haskell had subtype polymorphism, [] could be a subtype of [a] for every a. The problem is that I don't know of a way of encoding that without having the type of the empty list be polymorphic. Consider how it would be done in Scala (it is shorter than doing it in the canonical statically typed OOP language, Java):
abstract class List[A]
case class Nil[A]() extends List[A]
case class Cons[A](h: A, t: List[A]) extends List[A]
Even here, Nil() is an object of type Nil[A].**
Another advantage of return type polymorphism is that it makes the Curry-Howard embedding much simpler.
Consider the following logical theorems:
t1 = forall P. forall Q. P -> P or Q
t2 = forall P. forall Q. P -> Q or P
We can trivially capture these as theorems in Haskell:
data Either a b = Left a | Right b
t1 :: a -> Either a b
t1 = Left
t2 :: a -> Either b a
t2 = Right
To sum up: I like return type polymorphism, and only think it breaks referential transparency if you have a limited notion of values (although this is less compelling in the ad hoc type class case). On the other hand, I do find your points about MR and type defaulting compelling.
*. In the comments ysdx points out this isn't strictly true: we could re-implement type classes by modeling the algebra as another type. Like the Java:
abstract class Monoid<M> {
  abstract M empty();
  abstract M append(M m1, M m2);
}
You then have to pass objects of this type around with you. Scala has a notion of implicit parameters which avoids some, but in my experience not all, of the overhead of explicitly managing these things. Putting your utility methods (factory methods, binary methods, etc.) on a separate F-bounded type turns out to be an incredibly nice way of managing things in an OO language that has support for generics. That said, I'm not sure I would have grokked this pattern if I didn't have experience modeling things with typeclasses, and I'm not sure other people will.
It also has limitations: out of the box, there is no way to get an object that implements the typeclass for an arbitrary type. You have to either pass the values explicitly, use something like Scala's implicits, or use some form of dependency injection technology. Life gets ugly. On the other hand, it is nice that you can have multiple implementations for the same type: something can be a Monoid in multiple ways. Also, carrying around these structures separately has, IMO, a more mathematically modern, constructive feel to it. So, although I still generally prefer the Haskell way of doing this, I probably overstated my case.
Typeclasses with return type polymorphism make this kind of thing easy to handle. That doesn't mean it is the best way to do it.
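For comparison, a hedged sketch of how "a Monoid in multiple ways" is usually handled on the Haskell side: newtype wrappers such as Sum and Product from Data.Monoid, each carrying its own instance (the names sumTen and productTen are made up for this example):
import Data.Monoid (Sum(..), Product(..))

sumTen :: Sum Int
sumTen = mappend (Sum 4) (Sum 6)              -- Sum {getSum = 10}

productTen :: Product Int
productTen = mappend (Product 2) (Product 5)  -- Product {getProduct = 10}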
**. Jörg W Mittag points out this isn't really the canonical way of doing this in Scala. Instead, we would follow the standard library with something more like:
abstract class List[+A] ...
case class Cons[A](head: A, tail: List[A]) extends List[A] ...
case object Nil extends List[Nothing] ...
This takes advantage of Scala's support for bottom types, as well as covariant type parameters. So, Nil is of type Nil, not Nil[A]. At this point we are pretty far from Haskell, but it is interesting to note how Haskell represents the bottom type:
undefined :: forall a. a
That is, it isn't a subtype of all types; it is polymorphically a member of all types.
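A tiny illustration (names made up): both of these typecheck, even though evaluating either one throws an exception.
anInt :: Int
anInt = undefined

someStrings :: [String]
someStrings = undefined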
Yet more return type polymorphism.
You can't create a pure function called random
that will give a different result every time it is called. In fact, you can't even "call" pure functions. You apply them. So you aren't missing anything, but this doesn't mean that random numbers are off-limits in functional programming. Allow me to demonstrate; I'll use Haskell syntax throughout.
Coming from an imperative background, you may initially expect random to have a type like this:
random :: () -> Integer
But this has already been ruled out because random cannot be a pure function.
Consider the idea of a value. A value is an immutable thing. It never changes and every observation that you can make about it is consistent for all time.
Clearly, random can't produce an Integer value. Instead, it produces an Integer random variable. Its type might look like this:
random :: () -> Random Integer
Except that passing an argument is completely unnecessary: functions are pure, so one random () is as good as another random (). I'll give random, from here on, this type:
random :: Random Integer
Which is all well and fine, but not very useful. You may expect to be able to write expressions like random + 42, but you can't, because it won't typecheck. You can't do anything with random variables, yet.
This raises an interesting question. What functions should exist to manipulate random variables?
This function can't exist:
bad :: Random a -> a
in any useful way, because then you could write:
badRandom :: Integer
badRandom = bad random
Which introduces an inconsistency. badRandom is supposedly a value, but it is also a random number; a contradiction.
Maybe we should add this function:
randomAdd :: Integer -> Random Integer -> Random Integer
But this is just a special case of a more general pattern. You should be able to apply any function to a random thing in order to get other random things, like so:
randomMap :: (a -> b) -> Random a -> Random b
Instead of writing random + 42, we can now write randomMap (+42) random.
If all you had was randomMap, you wouldn't be able to combine random variables together. You couldn't write this function for instance:
randomCombine :: Random a -> Random b -> Random (a, b)
You might try to write it like this:
randomCombine a b = randomMap (\a' -> randomMap (\b' -> (a', b')) b) a
But it has the wrong type. Instead of ending up with a Random (a, b), we end up with a Random (Random (a, b)).
This can be fixed by adding another function:
randomJoin :: Random (Random a) -> Random a
But, for reasons that may eventually become clear, I'm not going to do that. Instead I'm going to add this:
randomBind :: Random a -> (a -> Random b) -> Random b
It's not immediately obvious that this actually solves the problem, but it does:
randomCombine a b = randomBind a (\a' -> randomMap (\b' -> (a', b')) b)
In fact, it's possible to write randomBind in terms of randomJoin and randomMap. It's also possible to write randomJoin in terms of randomBind. But, I'll leave doing this as an exercise.
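If you want to check your answer later, here is one possible sketch (using primed names so they don't clash with the functions above; randomJoin' plays the role of the randomJoin discussed earlier):
randomJoin' :: Random (Random a) -> Random a
randomJoin' x = randomBind x id

randomBind' :: Random a -> (a -> Random b) -> Random b
randomBind' x f = randomJoin' (randomMap f x)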
We could simplify this a little. Allow me to define this function:
randomUnit :: a -> Random a
randomUnit turns a value into a random variable. This means that we can have random variables that aren't actually random. This was always the case though; we could have done randomMap (const 4) random before. The reason defining randomUnit is a good idea is that now we can define randomMap in terms of randomUnit and randomBind:
randomMap :: (a -> b) -> Random a -> Random b
randomMap f x = randomBind x (randomUnit . f)
Ok, now we are getting somewhere. We have random variables that we can manipulate. However:
- It's not obvious how we might actually implement these functions,
- It's quite cumbersome.
Implementation
I'll tackle pseudo-random numbers. It is possible to implement these functions for real random numbers, but this answer is already getting quite long.
Essentially, the way this is going to work is that we are going to pass a seed value around everywhere. Whenever we generate a new random value, we will produce a new seed. At the end, when we're done constructing a random variable, we will want to sample from it using this function:
runRandom :: Seed -> Random a -> a
I'm going to define the Random type like this:
data Random a = Random (Seed -> (Seed, a))
Then, we just need to provide implementations of randomUnit, randomBind, runRandom and random, which is quite straightforward:
randomUnit :: a -> Random a
randomUnit x = Random (\seed -> (seed, x))
randomBind :: Random a -> (a -> Random b) -> Random b
randomBind (Random f) g =
  Random (\seed ->
    let (seed', x) = f seed
        Random g' = g x
    in g' seed')
runRandom :: Seed -> Random a -> a
runRandom seed (Random f) = (snd . f) seed
For random, I'm going to assume there's already a function of the type:
pseudoRandom :: Seed -> (Seed, Integer)
In which case random is just Random pseudoRandom.
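If you want something concrete to play with, here is a minimal sketch, assuming Seed is simply an Integer; the multiplier and increment are just illustrative linear congruential constants, not a recommendation:
type Seed = Integer

pseudoRandom :: Seed -> (Seed, Integer)
pseudoRandom seed = (next, next)
  where
    -- a simple (and not very good) linear congruential step
    next = (6364136223846793005 * seed + 1442695040888963407) `mod` (2 ^ 64)

random :: Random Integer
random = Random pseudoRandom
With this in place, something like runRandom 42 random would produce a single Integer.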
Making things less cumbersome
Haskell has syntactic sugar to make things like this nicer on the eyes. It's called do-notation, and to use it, all we have to do is create an instance of Monad for Random.
instance Monad Random where
  return = randomUnit
  (>>=) = randomBind
Done. randomCombine from before could now be written:
randomCombine :: Random a -> Random b -> Random (a, b)
randomCombine a b = do
  a' <- a
  b' <- b
  return (a', b')
If I was doing this for myself, I would even go one step further than this and create an instance of Applicative. (Don't worry if this makes no sense).
-- liftM and ap come from Control.Monad
instance Functor Random where
  fmap = liftM

instance Applicative Random where
  pure = return
  (<*>) = ap
Then randomCombine could be written:
randomCombine :: Random a -> Random b -> Random (a, b)
randomCombine a b = (,) <$> a <*> b
Now that we have these instances, we can use >>= instead of randomBind, join instead of randomJoin, fmap instead of randomMap, and return instead of randomUnit. We also get a whole load of functions for free.
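For example, replicateM from Control.Monad is one of those free functions. A small sketch (fiveRandoms is just a made-up name) that bundles five pseudo-random Integers into a single random variable:
import Control.Monad (replicateM)

fiveRandoms :: Random [Integer]
fiveRandoms = replicateM 5 random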
Is it worth it? You could argue that getting to this stage, where working with random numbers isn't completely horrendous, was quite difficult and long-winded. What did we get in exchange for this effort?
The most immediate reward is that we can now see exactly which parts of our program are dependent on randomness and which parts are entirely deterministic. In my experience, forcing a strict separation like this simplifies things immensely.
We've assumed so far that we just want a single sample from each random variable that we generate, but if it turns out that in the future we'd actually like to see more of the distribution, this is trivial. You can just use runRandom lots of times on the same random variable with different seeds. This is, of course, possible in imperative languages, but in this case, we can be certain that we aren't going to perform unanticipated IO every time we sample a random variable and we don't have to be careful about initializing state.
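As a rough sketch (assuming, as in the earlier sketch, that a Seed is just a number you can list), sampling the same random variable at several seeds is a one-liner; the seed values here are arbitrary:
distribution :: [Integer]
distribution = map (\seed -> runRandom seed random) [1 .. 10]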
Best Answer
a and b are Numbers, whereas currentDateTime.seconds returns an IO<Number>. Those types are incompatible, you cannot add them together, therefore your function is not well-typed and simply won't compile. At least that's how it's done in pure languages with a static type system, like Haskell. In impure languages like ML, Scala or F#, it is up to the programmer to ensure referential transparency, and of course in dynamically typed languages like Clojure or Scheme, there is no static type system to enforce referential transparency.
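In Haskell terms, a hedged sketch of the same point; currentSeconds here is a hypothetical stand-in for currentDateTime.seconds:
currentSeconds :: IO Integer
currentSeconds = undefined          -- hypothetical clock-reading action

-- Rejected by the type checker: Integer and IO Integer don't mix.
-- broken a b = a + b + currentSeconds

-- Accepted: bind the IO value first; the result is itself an IO Integer.
ok :: Integer -> Integer -> IO Integer
ok a b = do
  s <- currentSeconds
  return (a + b + s)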