The ternary operator is widely used, especially for short null checks / defaults:
System.out.println("foo is " + (foo == null ? "not set" : foo));
Some people consider this not as readable as an if/else, but that was not the question.
The bitwise XOR operator is used only in bit processing. If you need a bitwise XOR, there is no way around it.
The logical XOR operator is indeed so rare that I have not seen it in a valid use case in Java in the last ten years. This is also due to the fact that boolean XOR "does not scale" the way || and && do. What I mean:
if( a && b && c && d ) .... // it's clear what the intention is
if( a || b || c || d ) .... // here also
if( a ^ b ^ c ^ d ) .... // ???
In the last case I would guess the coder meant "only one should be true". But XOR is a beast: the coder got the beast, not what (s)he wanted.
That would be an interesting interview question: what is the result of the last if?
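For the curious, here is a minimal sketch (the values are mine) of what the chained ^ actually computes: it is true exactly when an odd number of operands are true, not when exactly one is.

boolean a = true, b = true, c = true, d = false;
// Three operands are true (an odd number), so the chain evaluates to true,
// even though "only one should be true" does not hold.
System.out.println(a ^ b ^ c ^ d); // prints: true

So the answer to the interview question: on booleans, ^ is a parity check.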
Binary
Nand
Let's look at how NAND can be implemented with just AND or OR gates (plus negation).
NAND becomes one of:
!(a && b)
!a || !b
Either of these can be seen as short-circuiting. The reason a NAND operator doesn't exist is that it is easily rewritten as !(a && b).
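A small sketch of that short circuit in action (the traced helper is mine, added only to make the evaluation order visible):

static boolean traced(String name, boolean value) {
    System.out.println("evaluating " + name);
    return value;
}

boolean nand = !(traced("a", false) && traced("b", true));
// Prints only "evaluating a": && short-circuits on the false operand,
// so the NAND result (true) is known without ever evaluating b.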
Xor
XOR is, at its heart, a parity checker. To check the parity of two values, you need to test both values. This is why it fundamentally cannot be short-circuited: you can't tell whether the result is true or false until you have tested all the values.
Looking at how XOR is written with AND and OR gates:
(a || b) && !(a && b)
If a is true, the OR part of the XOR can be short-circuited; however, that also means the AND part cannot be short-circuited.
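Reusing the hypothetical traced helper from the NAND sketch above, you can see that b gets evaluated no matter what a is:

boolean a = true, b = false;
boolean xor = (traced("a", a) || traced("b", b))
           && !(traced("a", a) && traced("b", b));
// If a is true, the || skips its b, but the && part must then test it;
// if a is false, the || itself must test b. Either way b is evaluated
// at least once before the result is known.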
N-ary
N-ary operators take any number of inputs (compared to the binary ones that take just two).
Nand
The n-ary NAND is
!(a && b && c && d ... )
This again can be short-circuited: as soon as one of the operands evaluates to false, the value of the NAND is known to be true.
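Written out as an explicit loop (a sketch; the varargs nand helper is my own naming), the early exit is easy to see:

static boolean nand(boolean... operands) {
    for (boolean operand : operands) {
        if (!operand) {
            return true; // one false operand settles the NAND: stop early
        }
    }
    return false; // every operand was true
}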
Xor
There are two different interpretations of the n-ary XOR:
- An odd number of true operands
- One and only one true operand
The first one is in common usage (see XOR at Wolfram):
For multiple arguments, XOR is defined to be true if an odd number of its arguments are true, and false otherwise. This definition is quite common in computer science, where XOR is usually thought of as addition modulo 2.
From Wikipedia:
Strict reading of the definition of exclusive or, or observation of the IEC rectangular symbol, raises the question of correct behaviour with additional inputs. If a logic gate were to accept three or more inputs and produce a true output if exactly one of those inputs were true, then it would in effect be a one-hot detector (and indeed this is the case for only two inputs). However, it is rarely implemented this way in practice.
The 'one hot' XOR may be short-circuited when evaluating the expression: as soon as a second true value is found, the result is known to be false.
The more common XOR, however, is the "odd number of true values" version, which serves as a parity checker and requires evaluating all the operands to determine the truth of the expression: the last value evaluated can always change the result.
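A sketch contrasting the two interpretations (both helper names are mine): the one-hot check can bail out early, the parity check cannot.

static boolean oneHot(boolean... operands) {
    int trueCount = 0;
    for (boolean operand : operands) {
        if (operand && ++trueCount > 1) {
            return false; // second true found: short circuit, result is known
        }
    }
    return trueCount == 1;
}

static boolean parityXor(boolean... operands) {
    boolean result = false;
    for (boolean operand : operands) {
        result ^= operand; // every operand can still flip the result
    }
    return result;
}

For example, oneHot(true, true, true) is false, while parityXor(true, true, true) is true, matching the chained a ^ b ^ c ^ d above.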
Best Answer
Although there were older precursors, the influential French mathematician René Descartes is usually credited with introducing superscripted exponents (aᵇ) into mathematical writing, in his work La Géométrie, published in 1637. This is the notation still universally used in mathematics today.
Fortran is the oldest programming language widely used for numerical computations that provides an exponentiation operator; it dates to 1954. The exponentiation operation is denoted by a double asterisk **. It should be noted that many computers at that time used 6-bit character encodings that did not provide a caret character ^. The use of ** was subsequently adopted by the creators of various more recent programming languages that offer an exponentiation operation, such as Python.

The first widely adopted character set that contained the caret ^ was the 7-bit ASCII encoding, first standardized in 1963. The oldest programming language I am aware of that used the caret to denote exponentiation is BASIC, which dates to 1964. Around the same time IBM adopted the EBCDIC character encoding, which also includes the caret ^.

The C language came into existence in 1972. It does not provide an exponentiation operator; rather, it supports exponentiation via library functions such as pow(). Therefore no symbol needs to be set aside for exponentiation in C, or in other, later languages in the C family, such as C++ and CUDA.

On the other hand, and uncommonly for programming languages up to that time, C provides symbols for bitwise operations. The number of special characters available in 7-bit ASCII was limited, and since there was a "natural affinity" of other operations to certain special characters, e.g. & for AND and ~ for NOT, there were not all that many choices for the symbol for XOR.

I am not aware of a published rationale provided by Ritchie or Kernighan as to why they chose ^ to denote XOR specifically; Ritchie's short history of C is silent on this issue. A look at the specification for the precursor to C, the language B, reveals that it did not have an XOR operator, but already used all special characters other than ^, $, @, #.

[Update] I sent email to Ken Thompson, creator of B and one of the co-creators of C, inquiring about the rationale for choosing ^ as C's XOR operator, and asking permission to share the answer here. His reply (slightly reformatted for readability):

The use of ^ for exponentiation in "mathematics" that you refer to is actually usage established at a much later date, for typesetting systems such as Knuth's TeX, which dates to 1978, command-line interfaces for algebra systems such as Mathematica, which dates to 1988, and graphing calculators in the early 1990s.

Why did these products adopt the use of ^ for exponentiation? In the case of calculators I suspect the influence of BASIC. Throughout the 1980s it was a very popular first programming language and was also embedded into other software products. The notation therefore would have been familiar to many buyers of the calculators. My memory is vague, but I believe there were even calculators that actually ran simple BASIC interpreters.