The logic behind the use of different arrows (-> <-) in Haskell

haskell language-design

I've been thinking about language design lately, and reading over some of the new things in Haskell (always a nice source of inspiration). I'm struck by the many odd uses of the left <- and right -> arrow operators.

I guess the many different usages come from prior art in math and other languages regarding arrow syntax, but is there some other reason not to try to make the usage more consistent or clear? Maybe I'm just not seeing the big picture?

The right arrow gets used as the type constructor for functions, as the separator between the arguments and body of a lambda expression, as the separator in case alternatives, and in view patterns, which have the form (e -> p).
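To make those four uses concrete, here's a small sketch putting them side by side (the last one needs GHC's ViewPatterns extension; the function names are just illustrative):

```haskell
{-# LANGUAGE ViewPatterns #-}

-- 1. -> as the type constructor for functions,
-- 2. -> as the separator in a lambda expression:
double :: Int -> Int
double = \n -> n * 2

-- 3. -> as the separator in case alternatives:
describe :: Int -> String
describe n = case n of
  0 -> "zero"
  _ -> "nonzero"

-- 4. -> in a view pattern (e -> p): apply `double` to the
--    argument and match the result against the pattern 8.
isFour :: Int -> Bool
isFour (double -> 8) = True
isFour _             = False
```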

The left arrow gets used in do notation as something similar to variable binding, in list comprehensions for the same (I'm assuming they are the same, as list comprehensions look like condensed do blocks), and in pattern guards, which have the form (p <- e).
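Those three uses of <- can likewise be put next to each other (a sketch with made-up names; pattern guards are standard Haskell 2010):

```haskell
-- <- in do notation: bind the result of a monadic action
pairSum :: Maybe Int
pairSum = do
  a <- Just 1
  b <- Just 2
  return (a + b)

-- <- in a list comprehension: draw elements from a list
doubles :: [Int]
doubles = [x * 2 | x <- [1, 2, 3]]

-- <- in a pattern guard (p <- e): match a pattern against
-- the result of an expression
lookupOrZero :: Eq k => k -> [(k, Int)] -> Int
lookupOrZero k xs
  | Just v <- lookup k xs = v
  | otherwise             = 0
```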

Now the last examples for each arrow are just silly! I understand that guards and views serve different purposes, but they have almost identical form except that one is the mirror of the other! I've also always found it kind of odd that regular functions are defined with = but lambdas with ->. Why not use the arrow for both? Or the equals for both?

It also gets pretty odd when you consider that for comparing some calculated value against a constant, there are nearly half a dozen ways to do it:

test x = x == 4

f     x = if test x then g x else h x

f'    4 = g 4
f'    x = h x

f''   x@(test -> True) = g x
f''   x = h x

f'''  x | True <- test x = g x
        | otherwise = h x

f'''' x = case x of
          4 -> g x
          _ -> h x

Variety is the spice of source code though, right?

Best Answer

I've also always found it kind of odd that regular functions are defined with = but lambdas with ->. Why not use the arrow for both? Or the equals for both?

The equality symbol makes sense for named function definitions because it states that applying the named function to its arguments is equivalent to evaluating the right hand side of the equation. Defining lambdas that way makes no sense notationally: x = x + 1 implies x is equal to its successor, which can't be true.
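A quick illustration of that equational reading, using a hypothetical successor function:

```haskell
-- The equation says: for any x, `succ' x` IS `x + 1`, so an
-- application like `succ' 3` can be rewritten to `3 + 1` by
-- plain substitution.
succ' :: Int -> Int
succ' x = x + 1

-- A lambda has no name to put on the left of such an equation;
-- the arrow just separates the bound variable from the body.
succAnon :: Int -> Int
succAnon = \x -> x + 1
```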

The left arrow gets used in do notation as something similar to variable binding, in list comprehensions for the same (I'm assuming they are the same, as list comprehensions look like condensed do blocks), and in pattern guards, which have the form (p <- e).

All of those constructs bind a new variable. Using = here would not make much sense either, because a statement such as x = x + 1 implies that x refers to the same thing on both sides. A variable binding introduces a new x that's distinct from any x that may occur in the right hand side.
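A small sketch of that distinction in do notation (the names here are just for illustration):

```haskell
shadow :: Maybe Int
shadow = do
  let x = 10
  -- The x on the RIGHT of <- is the outer x (10);
  -- the x on the LEFT is a brand-new binding that
  -- shadows it from here on.
  x <- Just (x + 1)
  return x
```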

One alternative would've been to just use -> and place the variable on the right, but I believe placing it on the left is objectively more readable in cultures that read left to right. You scan text by moving your eyes along the left margin and thus placing all the variables on the left makes them easier to find. And although having two different symbols for variable bindings may not seem consistent, there's consistency in the position of the variable that gets bound; it's always on the left.

So that just leaves the question of why <- as opposed to some other symbol like :=. I don't know, but I can guess. Look at what desugared monadic code looks like:

[1, 2, 3] >>= \x -> [x * 2]

Considering that >>= with its arguments swapped is called =<<, it probably seemed natural to use a left-facing arrow.
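As a quick sketch of that flip, using the list monad as one possible instance:

```haskell
-- With >>=, the bound variable sits to the RIGHT of the operator:
r1 :: [Int]
r1 = [1, 2, 3] >>= \x -> [x * 2]

-- With =<< (arguments swapped), the lambda and its variable sit
-- on the LEFT, visually mirroring `x <- [1, 2, 3]` in do notation.
r2 :: [Int]
r2 = (\x -> [x * 2]) =<< [1, 2, 3]
```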
