XOR vs Exponentiation – Why is the Caret Used for XOR Instead of Exponentiation?

bitwise-operators, history, math, programming-languages

Not that it's really a problem for anyone who has faced this syntactic issue before, but I see a wild amount of confusion stemming from the use of the caret (^) for the XOR operation rather than for the widely accepted mathematical operation of exponentiation.

Of course there are a lot of places where the (mis-)use of the caret is explained and corrected, but I haven't come across any definitive sources as to why the caret was given a different meaning.

Was it a matter of convenience? An accident? Obviously the reasoning could be different for the various languages, so information in any regard would be insightful.

Best Answer

Although there were older precursors, the influential French mathematician René Descartes is usually credited with introducing superscripted exponents (aᵇ) into mathematical writing, in his work La Géométrie, published in 1637. This is the notation still universally used in mathematics today.

Fortran, which dates to 1954, is the oldest programming language widely used for numerical computation that provides an exponentiation operator, denoted by a double asterisk **. It should be noted that many computers of that era used 6-bit character encodings that did not provide a caret character (^). The ** notation was subsequently adopted by the creators of various more recent programming languages that offer exponentiation, such as Python.

The first widely adopted character set that contained the caret ^ was the 7-bit ASCII encoding, first standardized in 1963. The oldest programming language I am aware of that used the caret to denote exponentiation is BASIC, which dates to 1964. Around the same time, IBM adopted the EBCDIC character encoding, which also includes the caret ^.

The C language came into existence in 1972. It does not provide an exponentiation operator; instead, it supports exponentiation via library functions such as pow(). Therefore no symbol needed to be set aside for exponentiation in C, nor in later languages of the C family such as C++ and CUDA.
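
To illustrate the difference (a minimal sketch; the particular operand values are my own choice): exponentiation in C is spelled as a library call, while the caret applies bitwise XOR.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Exponentiation is a library call, not an operator. */
        double p = pow(2.0, 10.0);        /* 1024.0 */

        /* The caret is bitwise XOR, so 2 ^ 10 is not 2 to the 10th. */
        int x = 2 ^ 10;                   /* 0010 ^ 1010 = 1000 = 8 */

        printf("pow(2, 10) = %.0f\n", p); /* prints 1024 */
        printf("2 ^ 10     = %d\n", x);   /* prints 8 */
        return 0;
    }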

On the other hand, and uncommonly for programming languages up to that time, C provides symbols for bitwise operations. The number of special characters available in 7-bit ASCII was limited, and since there was a "natural affinity" of other operations to certain special characters, e.g. & for AND and ~ for NOT, there were not all that many choices left for the symbol denoting XOR.
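
For reference, a minimal sketch of C's bitwise operators (the operand values are arbitrary, chosen only for illustration):

    #include <stdio.h>

    int main(void)
    {
        unsigned a = 0xC, b = 0xA;   /* 1100 and 1010 in binary */

        printf("%X\n", a & b);       /* AND -> 8 (1000) */
        printf("%X\n", a | b);       /* OR  -> E (1110) */
        printf("%X\n", a ^ b);       /* XOR -> 6 (0110) */
        printf("%X\n", ~a);          /* NOT -> FFFFFFF3 with a 32-bit unsigned int */
        return 0;
    }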

I am not aware of a published rationale from Ritchie or Kernighan as to why they chose ^ to denote XOR specifically; Ritchie's short history of C is silent on this issue. A look at the specification of C's precursor, the language B, reveals that B did not have an XOR operator, but already used all of the special characters other than ^, $, @, and #.

[Update] I sent an email to Ken Thompson, creator of B and one of the co-creators of C, inquiring about the rationale for choosing ^ as C's XOR operator, and asking for permission to share his answer here. His reply (slightly reformatted for readability):

From: Ken Thompson
Sent: Thursday, September 29, 2016 4:50 AM
To: Norbert Juffa
Subject: Re: Rationale behind choice of caret as XOR operator in C?

it was a random choice of the characters left.

if i had it to do over again (which i did) i would use the same operator for xor (^) and bit complement (~).

since ^ is now the better known operator, in go, ^ is xor and also complement.
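
Thompson's point about using one operator for both roles is easier to see once one notes that a bitwise complement is simply an XOR with an all-ones word. A minimal C sketch (the value of x is arbitrary):

    #include <stdio.h>

    int main(void)
    {
        unsigned x = 0x12345678;

        /* Complement is XOR with all ones, which is why a single */
        /* operator can serve both roles, as unary ^ does in Go.  */
        printf("%X\n", ~x);        /* EDCBA987 */
        printf("%X\n", x ^ ~0u);   /* EDCBA987 as well */
        return 0;
    }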

The use of ^ for exponentiation in "mathematics" that you refer to is actually usage established at a much later date, by typesetting systems such as Knuth's TeX (which dates to 1978), command-line interfaces to computer algebra systems such as Mathematica (which dates to 1988), and graphing calculators in the early 1990s.

Why did these products adopt ^ for exponentiation? In the case of calculators, I suspect the influence of BASIC: throughout the 1980s it was a very popular first programming language and was also embedded in other software products, so the notation would have been familiar to many buyers of these calculators. My memory is vague, but I believe there were even calculators that ran simple BASIC interpreters.