I think the motivation for language designers to revise existing languages is to introduce innovation while ensuring that their target developer community stays together and adopts the new version: moving an existing community to a new revision of an existing language is more effective than building a new community around a new language. Of course, this forces some developers to adopt the new standard even if they were fine with the old one: in a community you sometimes have to impose certain decisions on a minority if you want to keep the community together.
Also, consider that a general-purpose language tries to serve as many programmers as possible, and it is often applied in new areas it wasn't designed for. So instead of aiming for simplicity and stability of design, the community can choose to incorporate new ideas (even from other languages) as the language moves into new application areas. In such a scenario, you cannot expect to get it right on the first attempt.
This means that languages can undergo deep change over the years, and the latest revision may look very different from the first one. The name of the language is kept not for technical reasons, but because the community of developers agrees to use the old name for a new language. So the name of a programming language identifies the community of its users rather than the language itself.
IMO the reason many developers find this acceptable (or even desirable) is that a gradual transition to a slightly different language is easier and less confusing than a jump to a completely new language that would take them more time and effort to master.
Consider that there are a number of developers who have one or two favourite languages and are not very keen on learning new (radically different) ones. And even for those who do like learning new things, learning a new programming language is always a hard and time-consuming activity.
Also, it can be preferable to be part of a large community with a rich ecosystem rather than of a very small community using a lesser-known language. So, when the community decides to move on, many members follow in order to avoid isolation.
As a side comment, I think that the argument of allowing evolution while maintaining compatibility with legacy code is rather weak: Java can call C code, Scala can easily integrate with Java code, C# can integrate with C++. There are many examples showing that you can easily interface with legacy code written in another language without the need for source-code compatibility.
NOTE
From some answers and comments, I gather that some readers have interpreted the question as "Why do programming languages need to evolve?"
I think this is not the main point of the question, since it is obvious both that programming languages need to evolve and why (new requirements, new ideas). The question is rather "Why does this evolution have to happen inside a programming language instead of spawning many new languages?"
Languages have copied that from C, and for C, Dennis Ritchie explains that initially, in B (and perhaps early C), there was only one operator, &, which depending on the context performed either a bitwise AND or a logical one. Later, each function got its own operator: & for the bitwise AND and && for the logical one. Then he continues:
Their tardy introduction explains an infelicity of C's precedence rules. In B one writes
if (a == b & c) ...
to check whether a equals b and c is non-zero; in such a conditional expression it is better that & have lower precedence than ==. In converting from B to C, one wants to replace & by && in such a statement; to make the conversion less painful, we decided to keep the precedence of the & operator the same relative to ==, and merely split the precedence of && slightly from &. Today, it seems that it would have been preferable to move the relative precedences of & and ==, and thereby simplify a common C idiom: to test a masked value against another value, one must write
if ((a & mask) == b) ...
where the inner parentheses are required but easily forgotten.
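To see why those parentheses matter, here is a small, self-contained C sketch of the pitfall (the names flags and mask are illustrative, not from the quoted text): because == binds tighter than &, dropping the parentheses silently changes the meaning of the test.

    #include <stdio.h>

    int main(void) {
        int flags = 0x04;
        int mask  = 0x04;

        /* Without parentheses this parses as flags & (mask == 0x04),
           i.e. flags & 1, which is 0 here -- not the intended test. */
        if (flags & mask == 0x04)
            printf("taken (probably not what you expected)\n");
        else
            printf("not taken: flags & (mask == 0x04) is %d\n",
                   flags & (mask == 0x04));

        /* The inner parentheses force the masking to happen first. */
        if ((flags & mask) == 0x04)
            printf("taken: the mask bit is set\n");
        return 0;
    }

Most modern compilers will warn about the first test (e.g. GCC and Clang with -Wparentheses), which is a good hint about how common the mistake is.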
Best Answer
The short answer is that this is how two's-complement negation works. It's not overflow, and it wouldn't be detectable without special circuitry in the processor (or equivalent checks in the language runtime).
How Two's-Complement Arithmetic Works
I'll start with the number line, in binary, limiting my wordsize to 3 bits:
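    100  101  110  111  000  001  010  011
     -4   -3   -2   -1    0    1    2    3

The left half of the line holds the negative values: the high-order bit is set, and the values count up toward zero as the bit patterns increase.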
Inside the computer, there is an adder circuit that combines two bits and produces a result plus a carry bit. These circuits are chained together so that the processor can add two entire words. The carry bit from the addition is exposed via a processor status register, but is not normally available to high-level languages.
Some examples:
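    001 + 001 = 010    ( 1 + 1  =  2)
    010 + 011 = 101    ( 2 + 3  = -3)
    111 + 001 = 000    (-1 + 1  =  0, with a carry out of the word)
    100 + 111 = 011    (-4 + -1 =  3, also with a carry out)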
Let's look at those examples individually:
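1 + 1 behaves exactly as expected. 2 + 3 does not fit: the true sum, 5, is larger than the largest representable value (3), so the result wraps around into the negative range; that is signed overflow. -1 + 1 produces a carry out of the top bit, which the processor records in its status register and then discards, leaving the correct answer, 0. And -4 + -1 wraps past the most negative value back around to 3. In every case the adder just adds bit patterns; "overflow" exists only in the signed interpretation of those patterns.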
So why isn't overflow checked? The simplest answer is cost.
At the level of the hardware, addition is unsigned (at least on the three processors that I've programmed). Detecting integer overflow would require a separate set of opcodes for signed math, which would mean more transistors, which could be more profitably used elsewhere. In the early days of computing that was a huge concern; today, maybe not so much, but by now almost everyone is OK with how machine math is implemented.
At the level of the language, cost is still a factor. There is the runtime cost of checking every signed operation for overflow, but there is also a programmer cost: imagine having to wrap every expression (even a for loop's increment) in a try/catch. The .NET runtime apparently gives you the option of enabling this, while Java explicitly does not.
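For comparison, C compilers expose the hardware's overflow detection through intrinsics. This minimal sketch uses the GCC/Clang builtin __builtin_add_overflow; plain + performs no such check (and overflowing signed + is in fact undefined behavior in C):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int a = INT_MAX, b = 1, sum;

        /* Returns nonzero if the mathematically correct result does not
           fit in 'sum'; the wrapped result is stored there regardless. */
        if (__builtin_add_overflow(a, b, &sum))
            printf("overflow detected\n");
        else
            printf("sum = %d\n", sum);
        return 0;
    }

The check is cheap here because the compiler can read the processor's overflow flag directly; a language that raises an exception on every overflowing operation pays this cost on every signed arithmetic instruction.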
How Two's-Complement Negation Works, and Why -MIN_VALUE Equals Itself
In prose: two's-complement negation flips all of the bits in a number and then adds one.
I use a prose definition because that's almost certainly how it actually works in the hardware (although I'm not a hardware engineer, so I can't say for sure, and different architectures might use different techniques).
Let's see what happens with our 3-bit words:
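    011 (+3)  ->  flip: 100, add 1: 101  (-3)
    001 (+1)  ->  flip: 110, add 1: 111  (-1)
    000 ( 0)  ->  flip: 111, add 1: 000  ( 0, the carry out is discarded)
    100 (-4)  ->  flip: 011, add 1: 100  (-4 again)

The last line is the interesting one: the most negative value, -4, negates to itself, because there is no +4 on the 3-bit number line.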
Note that there's no carry involved in that last case, although you could compare the signs of the operand and the result. However, that again would require special circuitry and/or runtime-level checks, to catch a result that will almost never happen.
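Here is a minimal C sketch of the flip-and-add-one rule, using an 8-bit word and unsigned arithmetic to sidestep C's undefined behavior on signed overflow (the variable names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t x = 0x80;                     /* bit pattern of INT8_MIN: 1000 0000 */
        uint8_t negated = (uint8_t)(~x + 1);  /* flip all bits, then add one */

        /* Prints 0x80 -> 0x80: negating the most negative value yields itself. */
        printf("%#04x -> %#04x\n", x, negated);
        printf("%d -> %d\n", (int8_t)x, (int8_t)negated);  /* -128 -> -128 */
        return 0;
    }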
What Are Some Alternatives, and Why Aren't They Used
One alternative is ones' complement, in which negation simply inverts all the bits. The ones'-complement number line for a 3-bit word looks like this:
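    100  101  110  111  000  001  010  011
     -3   -2   -1   -0    0    1    2    3

Note the two representations of zero: 000 and 111.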
According to the linked Wikipedia article, there were machines using ones'-complement arithmetic; I never used one. Again, I'm not a hardware engineer, but I believe that you would need separate operations for addition and subtraction with ones' complement (in addition to separate operations for unsigned math), which again runs into the problem of cost.
The Wikipedia article mentions the problem of "end-around borrow," which may have been an issue with the actual computers that used ones'-complement math, but I don't think it is a necessary problem. I believe that the carry bit could also serve as a borrow bit.
The bigger problem is that you have two values for zero, which is going to cause programmers to create a lot of off-by-one errors when counting, or is going to require a lot of special-case code in the language runtime (eg: a for loop that knows that when it crosses 0 it has to skip to 1/-1).

Another alternative is to use the high-order bit as just a sign bit, with the low-order bits being the same between positive and negative:
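    111  110  101  100  000  001  010  011
     -3   -2   -1   -0    0    1    2    3

Again there are two zeros, 100 and 000, and the negative half of the line runs in the opposite direction from the bit patterns.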
This is how IEEE-754 floating point works. It makes sense when your primary operations are assumed to be multiplication and division, not so much for addition and subtraction. And it still has the issue of two zeros.
Commentary
To me, this question is identical to the questions that express outrage over the fact that 0.10 cannot be represented exactly as a binary floating-point number: both indicate a belief that digital computers should be able to represent the real world exactly. Or, in other words, that computers operate according to the laws of mathematics.
I can understand this belief; what I can't understand is the outrage that people express when the belief is shown to be false. A few moments' reflection should make it apparent that the belief cannot be true: computers work with finite quantities, whereas mathematics deals with continuous relations (I was about to say that everything in the real world is continuous, but figured that someone would bring up quantum mechanics).
Faced with this fundamental truth, computer designers -- and language designers, and application programmers -- have to make trade-offs. You might not like a particular trade-off, but you should seek to understand it rather than simply complain about it. And once you understand the trade-off, you can look for an environment that made a different one.