First, the GHC error. GHC is attempting to unify the constraints it has collected on `x`. First, we use `x` as a function, so `x :: a -> b`. Next, we use `x` as the argument to that same function, so `x :: a`. Unifying the two requires `a ~ a -> b`: the type of `x` would have to contain itself as its own argument type. No finite type satisfies that equation, which is exactly what the occurs check forbids. Hence our error message about being unable to construct an infinite type.
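The same failure is easy to reproduce in OCaml's toplevel; this transcript is illustrative, and the exact error wording varies by compiler version:

```
# fun x -> x x;;
Error: This expression has type 'a -> 'b
       but an expression was expected of type 'a
       The type variable 'a occurs inside 'a -> 'b
```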
Next, why not general recursive types? The first point worth noting is the difference between equi- and iso-recursive types:
- Equi-recursive types are what you'd expect: `mu X . Type` is exactly equivalent to unfolding or folding it arbitrarily many times.
- Iso-recursive types instead provide a pair of operators, `fold` and `unfold`, which explicitly fold and unfold the recursive definitions of types.
Now equi-recursive types sound ideal, but they are absurdly hard to get right in complex type systems; they can actually make type checking undecidable. I'm not familiar with every detail of OCaml's type system, but fully equi-recursive types in Haskell could cause the type checker to loop forever trying to unify types, and by default Haskell makes sure that type checking terminates. (OCaml takes a similar stance: its equi-recursive types are hidden behind the off-by-default -rectypes flag.) Furthermore, type synonyms in Haskell are dumb. The most useful recursive types would be defined like `type T = T -> ()`, but synonyms are inlined almost immediately, and you can't inline a recursive type; it's infinite! Recursive synonyms would therefore demand a huge overhaul of how synonyms are handled, which is probably not worth the effort even as a language extension.
Iso-recursive types are a bit of a pain to use: you more or less have to explicitly tell the type checker how to fold and unfold your types, making your programs more complex to read and write.
However, this is very similar to what you're doing with your `Mu` type: `Roll` is fold, and `unroll` is unfold. So we actually do have iso-recursive types baked in. Equi-recursive types are just too complex, so systems like OCaml and Haskell force you to pass recursion through explicit type-level fixed points.
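To make the Roll/unroll point concrete, here is a sketch in OCaml (the names `mu`, `unroll`, and `fix` are mine, not standard library names): an iso-recursive wrapper type restores well-typed self-application, which is enough to build a fixed-point combinator.

```ocaml
(* Iso-recursive encoding: the data constructor Roll is "fold",
   and unroll is "unfold". *)
type 'a mu = Roll of ('a mu -> 'a)

let unroll (Roll f) = f

(* Self-application is now well typed: x is applied to itself
   through an explicit unroll. *)
let self_apply x = (unroll x) x

(* A fixed-point combinator built from self-application, with an
   eta-expansion (fun v -> ...) so eager evaluation does not diverge. *)
let fix f =
  let g x = f (fun v -> (unroll x) x v) in
  g (Roll g)

(* Recursion without `let rec`: the recursion lives in the type. *)
let fact = fix (fun self n -> if n <= 0 then 1 else n * self (n - 1))
```

Note that every recursive occurrence is mediated by an explicit `Roll`/`unroll`; that bookkeeping is precisely the cost of iso-recursive types that the paragraph above describes.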
Now if this interests you, I'd recommend Types and Programming Languages. My copy is sitting open in my lap as I'm writing this to make sure I've got the right terminology :)
This article is discussed in several places.
To summarize: yes, OCaml is not a Lisp, and no, it is not perfect (whatever that means). I don't think the points mentioned in the blog post are relevant to day-to-day OCaml programmers.
Having studied OCaml, I find it an interesting language which can help you build programs you would not even dare to write in, say, C/C++/Java: for example, have a look at
Frama-C.
For an up-to-date description of OCaml, I encourage you to read about its features: the language promotes strong static type-checking techniques, which lets implementations focus on producing performant yet safe runtimes.
Important: I am no OCaml expert. If you are one and see that I wrote something horribly wrong, please correct me; I'll edit this post accordingly.
Static type checking
False Sense of Security
This is true, but obvious.
Static typing gives you proofs you can trust about a subset of your program's properties. Unless you are willing to go fully formal, an average (non-toy) program will contain programming errors that can only be witnessed at runtime.
That's where dynamic checking techniques come in: the OCaml compiler has flags to generate executables with debugging information, and so on. Alternatively, it can generate code that blindly trusts the programmer and erases type information as much as possible. Programmers who want robust programs should implement dynamic checks explicitly.
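As a trivial illustration (my own example, not from the post): the type `int` says nothing about a divisor being nonzero, so that property has to be checked dynamically.

```ocaml
(* The type system cannot rule out y = 0, so we check it at run time
   and fail loudly instead of trusting the caller. *)
let safe_div x y =
  if y = 0 then invalid_arg "safe_div: division by zero"
  else x / y
```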
The same thing applies to e.g. Common Lisp, but reversed: dynamic types first, with optional type declarations and compiler directives second.
Few Basic Types
Still applies: the core language has not changed (or not dramatically).
Silent Integer Overflow
This is the norm in most languages: integer overflow has to be checked by hand.
I don't know of any library that would type-check operations to verify whether overflow can occur.
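The silent wraparound is easy to demonstrate (a quick sketch; on a 64-bit system OCaml's native int is 63 bits wide):

```ocaml
(* Native ints wrap on overflow: no exception, no warning. *)
let wraps = (max_int + 1 = min_int)
```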
Module Immutability
The author mentions functors, but I fail to see how his example cannot be implemented. Reading the First-Class Modules chapter of https://realworldocaml.org, it seems that modules can be used to compose and build new modules. Of course, modifying an existing module requires source-code modification, but again, this is not unusual among programming languages.
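A small sketch of composing modules with a functor (the module names here are mine): no existing module is modified; a new one is derived.

```ocaml
(* A functor builds a new module out of any modules matching Show. *)
module type Show = sig
  type t
  val show : t -> string
end

module Pair (A : Show) (B : Show) = struct
  type t = A.t * B.t
  let show (a, b) = "(" ^ A.show a ^ ", " ^ B.show b ^ ")"
end

module IntShow = struct
  type t = int
  let show = string_of_int
end

(* Composition: IntShow is untouched; IntPair is a new module. *)
module IntPair = Pair (IntShow) (IntShow)
```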
"Semantically, functions are compiled INLINE"
The reddit thread above disagrees, saying that bindings are resolved at link time. However, this is an implementation detail, and I think the emphasized Semantically refers to the way names are resolved. Example:
let f x y = x + y ;;  (* first definition of f *)
let g a b = f b a ;;  (* g captures this first f *)
let f x y = x * y ;;  (* shadows f; g still sees the old one *)
exit (g 2 3) ;;       (* exit status 5 = 3 + 2 *)
The above program compiles and, when executed, returns 5, because `g` is defined with the first version of `f`, just as if the calling function `g` had inlined the call to `f`. This is not "bad", by the way; it is simply consistent with OCaml's name-shadowing rules.
To summarize: yes, modules are immutable. But they are also composable.
Polymorphism Causes Run-time Type Errors
I can't reproduce the mentioned error; I suspect it was a compiler bug.
No Macros
Indeed, there are no macros, but there are preprocessors (Camlp4, Camlp5, ...).
Minor Language Suckiness
Record field naming hell
True, but you should use modules:
- Two fields of two records have same label in OCaml
- Resolving field names
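The module-based fix looks like this (a minimal sketch with module names of my choosing): each record lives in its own module, so the clashing labels are disambiguated by qualification.

```ocaml
(* Two records with the same `name` label, kept in separate modules. *)
module Person = struct
  type t = { name : string; age : int }
end

module File = struct
  type t = { name : string; size : int }
end

(* Qualifying one label is enough to pick the right record type. *)
let p = { Person.name = "Ada"; age = 36 }
let f = { File.name = "notes.txt"; size = 120 }
```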
Syntax
Still applies (but really, this is just syntax).
No Polymorphism
Still applies, but somehow there are people who prefer this to Lisp's numerical tower (I don't know why). I suppose it helps with type inference.
Inconsistent function sets
See the OCaml Batteries Included project, and in particular BatArray, for an example of `map2` for arrays.
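If pulling in Batteries is overkill, a `map2` for arrays is only a few lines of plain OCaml (a sketch of mine; note the standard library has since gained `Array.map2`, in OCaml 4.03 and later):

```ocaml
(* map2 over arrays: combine a.(i) and b.(i) pointwise,
   in the same spirit as BatArray.map2 / Array.map2. *)
let map2 f a b =
  let n = Array.length a in
  if n <> Array.length b then invalid_arg "map2: length mismatch";
  Array.init n (fun i -> f a.(i) b.(i))
```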
No dynamic variables
Can be implemented:
- http://okmij.org/ftp/ML/dynvar.txt
- http://okmij.org/ftp/ML/index.html#dynvar
Optional ~ arguments suck
By language restriction, you can't mix optional and keyword arguments in Common Lisp either. Does that mean it sucks? (Of course, this can be changed with macros; see e.g. my answer.)
See OCaml's documentation for optional and labeled arguments.
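For reference, here is what optional and labeled arguments look like in OCaml (a toy example of mine; the trailing `()` is the conventional way to let the compiler fill in an omitted optional argument):

```ocaml
(* ?greeting is optional with a default; ~name is a labeled argument.
   The final unit parameter marks the point at which the omitted
   optional defaults are applied. *)
let greet ?(greeting = "Hello") ~name () =
  greeting ^ ", " ^ name ^ "!"
```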
Partial argument application inconsistency
I don't think this is really annoying in practice.
Arithmetic's readability
It holds: OCaml uses separate operators for integer and floating-point arithmetic (+ vs +., and so on). But you can use R or Python for numerical problems if you prefer.
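A quick sketch of what the complaint is about: the two operator sets never mix implicitly, so every conversion must be spelled out.

```ocaml
(* Integer arithmetic uses + * /; float arithmetic uses +. *. /.;
   mixing the two requires an explicit conversion. *)
let mean_int a b = (a + b) / 2
let mean_float a b = (a +. b) /. 2.0
let mean_mixed n x = (float_of_int n +. x) /. 2.0
```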
Silent name conflict resolution
Still applies, but note that this is well documented.
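A sketch of what "silent" means here (module names are mine): a later `open` shadows earlier names without any diagnostic by default.

```ocaml
module A = struct let version = 1 end
module B = struct let version = 2 end

(* Both modules export `version`; the later open wins, silently. *)
open A
open B

let which = version    (* resolves to B.version *)
let first = A.version  (* the shadowed value is still reachable, qualified *)
```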
No object input/output
Still applies.
Implementation, libraries
These keep changing every day: there is no definitive answer.
Finally,
"You should try OCaml (or, better yet, Haskell) even if you think
it sucks and you are not planning to use it. Without it, your
Computer Science education is incomplete, just like it is incomplete
without some Lisp and C (or, better yet, Assembly) exposure."
... still applies.
Best Answer
The first answer is that nobody really knows why languages become popular, and anybody who says otherwise is deluded or has an agenda. (It's often easy to identify why a language fails to become popular, but that's another question.)
With that disclaimer, here are some points that are suggestive, most important first:
The first mature C compiler appeared in 1974; the first mature OCaml compiler appeared in the late 1990s. C has a 25-year head start.
C shipped with Unix, which was the biggest "killer app" of all time. For a long time, every CS department in the world had to have Unix, which meant that every instructor and everyone taking a CS course had an opportunity to be exposed to C. OCaml and ML are still waiting for their first killer app. (MLdonkey is cool, but it's not Unix.)
C fills its niche so well that I doubt there will ever be another low-level language devoted only to systems programming. (To see the evidence in favor, read Dennis Ritchie's paper on the history of C from HOPL II.) It's not even clear what OCaml's niche is, and Standard ML's niche is only a little clearer. So Caml and ML have quite a few competitors, whereas C killed off its only competitor (BLISS).
One of C's great strengths is that its cost model is very predictable: it is easy to look at any small fragment of C code and instantly get an accurate idea of what machine operations will be performed to execute it. OCaml's cost model is much less clear, especially because memory allocation is much less explicit, and the overall cost of memory allocation (the cost of allocation plus the costs incurred during garbage collection) depends on emergent properties like how long objects live and which objects refer to which others. The net result is that performance is hard to predict, and hard to analyze even after the fact. (OCaml's memory-profiling tools are not what they should be.) As a result, OCaml is not good for applications where performance must be very predictable, such as embedded systems.
C is a language with a standard and many compilers. OCaml is a software artifact: the only compiler is from a single source, and the compiler is the standard. And that standard changes with every release. For people who value stability and backward compatibility, a single-source language may represent an unacceptable risk.
Anybody with a halfway-decent undergraduate compilers course and a lot of persistence can write a C compiler that more or less works, with adequate performance. Getting an implementation of OCaml or ML off the ground requires a lot more education, and matching even a naive C compiler's performance requires a lot more work. This means there are far fewer hobbyists messing around with languages like OCaml, so it's harder for the community to develop a deep understanding of how to exploit it.