Language Syntax – Why Most Languages Don’t Support ‘x < y < z’ Comparisons

comparison, language-agnostic, language-features, operators, syntax

If I want to compare two numbers (or other well-ordered entities), I can do so with x < y. If I want to compare three of them, a high-school algebra student would suggest trying x < y < z. The programmer in me then responds with "no, that's not valid; you have to write x < y && y < z".

Most languages I've come across don't seem to support this syntax, which is odd given how common it is in mathematics. Python is a notable exception. JavaScript looks like an exception, but it's really just an unfortunate by-product of left-to-right associativity and implicit conversions: in node.js, 1 < 3 < 2 evaluates to true, because it's really (1 < 3) < 2, i.e. true < 2, i.e. 1 < 2.
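Both behaviors can be reproduced in Python itself, since bool is a subtype of int there as well (a minimal sketch with literal values):

```python
# Python: the chained form has the mathematical meaning.
print(1 < 3 < 2)      # False -- equivalent to (1 < 3) and (3 < 2)
print(1 < 2 < 3)      # True  -- equivalent to (1 < 2) and (2 < 3)

# Forcing the left-associative reading reproduces the JavaScript result:
print((1 < 3) < 2)    # True -- (1 < 3) is True, and True < 2 holds
                      # because True compares as the integer 1
```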

So, my question is this: Why is x < y < z not commonly available in programming languages, with the expected semantics?
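For concreteness, "the expected semantics" here is what Python implements: x < y < z behaves like x < y and y < z, except that y is evaluated only once and the chain short-circuits. A small sketch (loud is a hypothetical helper introduced just to make evaluation visible):

```python
def loud(value):
    """Hypothetical helper that announces when it is evaluated."""
    print(f"evaluating {value}")
    return value

# The shared middle operand is evaluated exactly once:
print(1 < loud(3) < 2)        # prints "evaluating 3" once, then False

# The chain short-circuits: once 1 < 0 fails, loud(5) is never called:
print(1 < loud(0) < loud(5))  # prints "evaluating 0" only, then False
```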

Best Answer

These are binary operators which, when chained, normally and naturally produce an abstract syntax tree like:

[Figure: normal abstract syntax tree for binary operators]

When evaluated (which you do from the leaves up), this produces a boolean result from x < y, and then you get a type error trying to evaluate boolean < z (or, in a weakly typed language, a nonsensical coercion, as in the JavaScript example above). In order for x < y < z to work as you discussed, you have to create a special case in the compiler to produce a syntax tree like:

[Figure: special-case syntax tree]

Not that it isn't possible to do this. It obviously is, but it adds complexity to the parser for a case that doesn't come up all that often. You're essentially creating a symbol that sometimes acts like a binary operator and sometimes effectively acts like a ternary (or n-ary) operator, with all the implications for error handling that entails. That adds a lot of room for things to go wrong that language designers would rather avoid if possible.
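Python's own parser illustrates this special case concretely. A quick sketch using the standard library's ast module (output abbreviated): a chained comparison becomes a single Compare node carrying a list of operators and comparands, while ordinary binary operators nest:

```python
import ast

# A chained comparison is one Compare node, not nested binary nodes:
print(ast.dump(ast.parse("x < y < z", mode="eval").body))
# Compare(left=Name(id='x', ...), ops=[Lt(), Lt()],
#         comparators=[Name(id='y', ...), Name(id='z', ...)])

# An ordinary binary operator chain nests left-associatively:
print(ast.dump(ast.parse("x + y + z", mode="eval").body))
# BinOp(left=BinOp(left=Name(id='x', ...), op=Add(), right=Name(id='y', ...)),
#       op=Add(), right=Name(id='z', ...))
```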
