Ever tried to sum up all numbers from 1 to 2,000,000 in your favorite programming language? The result is easy to calculate manually: 2,000,001,000,000, which is some 900 times larger than the maximum value of a signed 32-bit integer.
C# prints out -1453759936 – a negative value! And I guess Java does the same.
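A minimal snippet to reproduce this (a sketch; C# compiles arithmetic in an unchecked context by default, so the int addition silently wraps):

using System;

int sum = 0;
for (int i = 1; i <= 2000000; i++)
{
    sum += i;  // silently wraps past int.MaxValue (default unchecked context)
}
Console.WriteLine(sum);  // prints -1453759936, not 2000001000000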
That means there are some common programming languages which ignore arithmetic overflow by default (in C#, there are hidden options for changing that: the checked keyword and the /checked compiler switch). That's a behavior which looks very risky to me, and wasn't the crash of Ariane 5 caused by exactly such an overflow?
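For the record, a minimal sketch of what opting in looks like – inside a checked context the same summation throws a System.OverflowException instead of wrapping:

using System;

int sum = 0;
try
{
    checked
    {
        for (int i = 1; i <= 2000000; i++)
        {
            sum += i;  // throws as soon as the total would exceed int.MaxValue
        }
    }
}
catch (OverflowException)
{
    Console.WriteLine("OverflowException thrown – no silent wrap.");
}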
So: what are the design decisions behind such a dangerous behavior?
Edit:
The first answers to this question point out the excessive cost of checking. Let's execute a short C# program to test this assumption:
using System;
using System.Diagnostics;

Stopwatch watch = Stopwatch.StartNew();
checked  // change to "unchecked" for the comparison run
{
    for (int i = 0; i < 200000; i++)
    {
        int sum = 0;
        for (int j = 1; j < 50000; j++)
        {
            sum += j;  // the inner sum peaks at 1,249,975,000, so no overflow actually occurs
        }
    }
}
watch.Stop();
Console.WriteLine(watch.Elapsed.TotalMilliseconds);
On my machine, the checked version takes 11015 ms, while the unchecked version takes 4125 ms. That is, the overflow checks add roughly 6890 ms on top of the 4125 ms spent actually adding the numbers – the checks take more than one and a half times as long as the additions themselves (in total almost 3 times the original time). But spread over the 10,000,000,000 additions, the time taken by a single check is still less than 1 nanosecond. There may be situations where that is important, but for most applications, that won't matter.
Edit 2:
I recompiled our server application (a Windows service analyzing data received from several sensors, with quite a bit of number crunching involved) with the /p:CheckForOverflowUnderflow="false"
parameter (normally I have the overflow check switched on) and deployed it on a device. Nagios monitoring shows that the average CPU load stayed at 17%.
This means that the performance hit found in the made-up example above is totally irrelevant for our application.
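(For reference, /p:CheckForOverflowUnderflow sets an ordinary MSBuild property, so the same setting can also live in the project file; a minimal .csproj fragment:)

<PropertyGroup>
  <CheckForOverflowUnderflow>true</CheckForOverflowUnderflow>
</PropertyGroup>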
Best Answer
There are 3 reasons for this:
The cost of checking for overflows (for every single arithmetic operation) at run-time is excessive.
The complexity of proving that an overflow check can be omitted at compile-time is excessive.
In some cases (e.g. CRC calculations, big number libraries, etc.) "wrap on overflow" is more convenient for programmers – see the sketch below.
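To illustrate that third point, a sketch (not from the original answer): a 32-bit FNV-1a hash deliberately relies on the multiplication wrapping around, so in a project compiled with overflow checks enabled, the wrap has to be requested explicitly with unchecked – a mandatory check would throw instead of producing the hash:

using System;
using System.Text;

Console.WriteLine(Fnv1a32(Encoding.UTF8.GetBytes("hello")).ToString("x8"));

// 32-bit FNV-1a hash: the multiplication is *supposed* to wrap around
static uint Fnv1a32(byte[] data)
{
    uint hash = 2166136261;  // FNV offset basis
    foreach (byte b in data)
    {
        hash ^= b;
        hash = unchecked(hash * 16777619);  // FNV prime; wrapping is the intended behavior
    }
    return hash;
}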