I know this is an old post, but I saw this post being referenced and dislike the chosen answer's tone.
So I did a bit of investigation!
- DirectX is old. It was first released in 1995, when the graphics world consisted of far more than just Nvidia vs. ATI and DirectX vs. OpenGL. That's over 15 years, people.
- 3dfx Interactive's Glide (one of DirectX's competitors back in the day. OpenGL wasn't meant for gaming back then) used a left-handed coordinate system.
- POV-Ray and RenderMan (Pixar's rendering software), also use a left-handed coordinate system.
- DirectX 9+ can work with both coordinate systems.
- Both WPF and XNA (which work with DirectX behind the scenes) use a right-handed coordinate system.
From this, I can speculate about a couple of things:
- Industry standards aren't as standard as people would like.
- Direct3D was built at a time when everyone did things their own way, and the developers probably didn't know better.
- Left-handedness is optional, but customary in the DirectX world.
- Since conventions die hard, everyone thinks DirectX can only work with left-handedness.
- Microsoft eventually learned, and followed the standard in any new APIs they created.
Therefore, my conclusion would be:
When they had to choose, they didn't know of the standard, chose the 'other' system, and everyone else just went along for the ride.
No shady business, just an unfortunate design decision that was carried along because backward compatibility is the name of Microsoft's game.
Ruby’s model is provided more for convenience than correctness, and it is inconsistent: `array + array` is array concatenation, allowing duplicates, but `array - array` is set difference, removing duplicates: `[1, 1] - [1]` is `[]`, not `[1]`.
`-` is not the inverse of `+`, because it’s not the case that `a + b - b == a` for all `Array` instances `a` and `b`: take `[1] + [1] - [1]`.
`array * fixnum` is defined as iterated array concatenation, but `fixnum * array` is not defined at all.
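These inconsistencies are easy to verify in current Ruby (where `Fixnum` has since been folded into `Integer`):

```ruby
# + concatenates, keeping duplicates:
p [1, 1] + [1]       # => [1, 1, 1]

# - is set difference, removing *all* matching elements:
p [1, 1] - [1]       # => []

# So - does not invert +:
p [1] + [1] - [1]    # => []

# array * Integer repeats the array...
p [1, 2] * 3         # => [1, 2, 1, 2, 1, 2]

# ...but Integer * array raises a TypeError:
begin
  3 * [1, 2]
rescue TypeError => e
  puts e.message
end
```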
For purely array-based operations, I would expect `+` and `-` to be inverses:

    [1, 2] + [3, 1] == [1, 2, 3, 1]
    [1, 2, 3, 1] - [3, 1] == [1, 2]

`-` would remove elements from the tail just as `+` added them. Similarly for `*` and `/`:

    [1, 2] * 3 == [1, 2, 1, 2, 1, 2]
    [1, 2, 1, 2, 1, 2] / 3 == [1, 2]
    [5, 1, 2, 1, 2] / 2 == [1, 2]
`/` would first discard elements from the left until `a.size % b == 0`. Why from the left? Well, I would expect an array modulus operator to satisfy the law:

    a % b == a - (b * (a / b))

And that rule seems to work if you go through a few examples:

    [1, 1] % 2 == [1, 1] - (2 * ([1, 1] / 2)) == []
    [5, 1, 1] % 2 == [5, 1, 1] - (2 * ([5, 1, 1] / 2)) == [5]

This is basically defining division as iterated subtraction.
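A minimal Ruby sketch of these *proposed* semantics — the method names `arr_sub`, `arr_div`, and `arr_mod` are invented for illustration, these are not Ruby's built-in operators, and the behaviour when the trimmed array is not an exact repetition is left undefined, just as in the text:

```ruby
# Hypothetical semantics only -- NOT Ruby's built-in operators.

# a - b removes b's elements from a's tail, inverting concatenation.
def arr_sub(a, b)
  raise ArgumentError, "tail does not match" unless a.last(b.size) == b
  a[0...(a.size - b.size)]
end

# a / n discards elements from the left until the size is a multiple
# of n, then returns the repeating block.
def arr_div(a, n)
  trimmed = a.drop(a.size % n)
  trimmed.first(trimmed.size / n)
end

# a % n, defined so that the law  a % n == a - (n * (a / n))  holds.
def arr_mod(a, n)
  arr_sub(a, arr_div(a, n) * n)  # Array#* here is Ruby's real repetition
end

p arr_sub([1, 2, 3, 1], [3, 1])    # => [1, 2]
p arr_div([1, 2, 1, 2, 1, 2], 3)   # => [1, 2]
p arr_div([5, 1, 2, 1, 2], 2)      # => [1, 2]
p arr_mod([1, 1], 2)               # => []
p arr_mod([5, 1, 1], 2)            # => [5]
```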
There are a couple of consistent and reasonably intuitive interpretations of `array ♦ array`:

- Cartesian product: `[1, 2] ♦ [3, 4] == [1 ♦ 3, 1 ♦ 4, 2 ♦ 3, 2 ♦ 4]`
- Pairwise product: `[1, 2] ♦ [3, 4] == [1 ♦ 3, 2 ♦ 4]`
With a Cartesian product, the size of the result is the product of the sizes of the inputs. This is how list comprehensions and the list monad work in Haskell:

    [x ♦ y | x <- [1, 2], y <- [3, 4]]

    do
      x <- [1, 2]
      y <- [3, 4]
      return (x ♦ y)
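Ruby can express the same Cartesian pairing with the built-in `Array#product`; here `♦` is instantiated as ordinary multiplication:

```ruby
# Cartesian product: 2 * 2 inputs yield 4 results.
p [1, 2].product([3, 4]).map { |x, y| x * y }  # => [3, 4, 6, 8]
```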
A pairwise product also makes sense, in that `([x1, y1, z1] * [x2, y2, z2]).reduce(:+)` would be the dot product of the vectors `[x1, y1, z1]` and `[x2, y2, z2]`. Of course, you would need to define the result when the inputs are of different lengths; in Haskell, the `zipWith` function takes the shorter of the two input lists:

    zipWith (♦) [1, 2] [3, 4, 5]
      == zipWith (♦) [1, 2] [3, 4]
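A rough Ruby equivalent of the pairwise product (the `pairwise` name is invented for illustration) — note that Ruby's built-in `zip` pads the shorter side with `nil` rather than truncating, so this sketch trims both inputs to the shorter length first to mimic `zipWith`:

```ruby
def pairwise(a, b)
  n = [a.size, b.size].min
  a.first(n).zip(b.first(n)).map { |x, y| x * y }
end

p pairwise([1, 2], [3, 4, 5])               # => [3, 8]

# The dot product described above:
p pairwise([1, 2, 3], [4, 5, 6]).reduce(:+)  # => 32
```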
So the answer is that there are several possible interpretations, the choice of which is up to the designers of languages and libraries. As long as they’re self-consistent, none of them is strictly more “right” or “intuitive” than any other. The established convention in array languages is for `array * array` to refer to the pairwise product, because this generalises well to higher dimensions of arrays and to promoting scalars to arrays of the appropriate dimension.
Best Answer
Multiplication is typically complex... unless one of the multiplicands is the base in which the numbers themselves are written.
When working with base-10 math, multiplying by a power of 10 is trivial: append as many zeros as the power of ten has. `2 * 10 = 20` and `3 * 100 = 300`. This is very easy for us. The exact same rule exists in binary.
In binary, `2 * 3 = 6` is `10 * 11 = 110`, and `4 * 3 = 12` is `100 * 11 = 1100`.

For a system already working with bits (ANDs and ORs), operations such as shift and roll already exist as part of the standard tool set. It just happens that translating `N * 2^M` into binary becomes `shift N left by M places`.
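As a quick sanity check (shown here in Ruby, though any language with a `<<` shift operator behaves the same way):

```ruby
n = 0b1011           # 11 in binary
p n * 8              # => 88, ordinary multiplication
p n << 3             # => 88, same result: append three binary zeros
p (n << 3).to_s(2)   # => "1011000"
```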
If we are doing something that isn't a power of 2 in binary, we've got to go back to the old fashioned multiply and add. Granted, binary is a bit 'easier', but a bit more tedious at the same time.
`11 * 14` becomes the following (adapted from Wikipedia's article on the binary multiplier — a good read, as it links to other multiplication algorithms for binary; shifting by powers of two is still much easier):

        1011      (11)
      x 1110      (14)
      ------
        0000      1011 x 0
       1011       1011 x 1, shifted 1 place
      1011        1011 x 1, shifted 2 places
     1011         1011 x 1, shifted 3 places
    --------
    10011010      (154)

You can see we're still doing shifts and adds. But let's change that to `11 * 8` to see how easy it becomes and why we can just skip to the answer:

       1011      (11)
     x 1000      (8)
     ------
    1011000      (88: just 1011 shifted 3 places)

By skipping straight to that last step, we have drastically simplified the entire problem without adding lots of 0s that are still 0s.
Dividing is the same thing as multiplying, just in reverse. Just as `400 / 100` can be summarized as 'cancel the zeros', so too can this be done in binary. Using the example of `88 / 8` from above:

    1011000 / 1000 = 1011      (88 / 8 = 11: cancel the three zeros)

The long way of doing long division in binary is again quite tedious, but for a power of two you can just skip to the answer by, in effect, canceling the zeros.
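The same check for division, as a right shift in Ruby:

```ruby
n = 0b1011000        # 88 in binary
p n / 8              # => 11, ordinary division
p n >> 3             # => 11, same result: drop three binary zeros
p (n >> 3).to_s(2)   # => "1011"
```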
(as a side note, if this is an interesting area for you, you may find browsing the binary tag on Math.SE, well... interesting.)