Is Replacing Division with Multiplication Good Practice?

coding-style, language-agnostic, math

Whenever I need division, for example in a condition check, I like to refactor the expression into a multiplication. For example:

Original version:

if(newValue / oldValue >= SOME_CONSTANT)

New version:

if(newValue >= oldValue * SOME_CONSTANT)

Because I think it can avoid two problems (see the sketch after this list):

  1. Division by zero

  2. Overflow when oldValue is very small
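To make these concerns concrete, here is a minimal sketch of both failure modes (the method name and values are hypothetical, chosen only to trigger them):

public static void TestFailureModes()
{
    double newValue = 5;
    double oldValue = 0;
    double SOME_CONSTANT = 2;

    // 1. Division by zero: with doubles this quietly yields positive infinity
    //    (or NaN for 0/0) instead of throwing; with ints it would throw
    //    DivideByZeroException. The multiplication form has no special case.
    Console.WriteLine(newValue / oldValue >= SOME_CONSTANT);   // True (infinity >= 2)
    Console.WriteLine(newValue >= oldValue * SOME_CONSTANT);   // True (5 >= 0)

    // 2. Overflow: dividing by a very small oldValue overflows to infinity,
    //    while the multiplied value stays comfortably in range.
    oldValue = double.Epsilon;
    Console.WriteLine(newValue / oldValue);                    // infinity
    Console.WriteLine(oldValue * SOME_CONSTANT);               // tiny, but finite
}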

Is that right? Is there a problem with this habit?

Best Answer

Two common cases to consider:

Integer arithmetic

Obviously, if you are using integer arithmetic (which truncates), you can get a different result. Here's a small example in C#:

public static void TestIntegerArithmetic()
{
    int newValue = 101;
    int oldValue = 10;
    int SOME_CONSTANT = 10;

    // Integer division truncates: 101 / 10 is 10, which is not > 10.
    if(newValue / oldValue > SOME_CONSTANT)
    {
        Console.WriteLine("First comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("First comparison says it's not bigger.");
    }

    // The multiplication keeps full precision: 101 > 10 * 10 (i.e. 100).
    if(newValue > oldValue * SOME_CONSTANT)
    {
        Console.WriteLine("Second comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("Second comparison says it's not bigger.");
    }
}

Output:

First comparison says it's not bigger.
Second comparison says it's bigger.

Floating point arithmetic

Aside from the fact that division behaves differently when the divisor is zero (integer division throws an exception, while floating-point division quietly yields infinity or NaN; multiplication does neither), it can also pick up slightly different rounding errors and produce a different outcome. Simple example in C#:

public static void TestFloatingPoint()
{
    double newValue = 1;
    double oldValue = 3;
    double SOME_CONSTANT = 0.33333333333333335;

    // 1.0 / 3.0 rounds down to a double just below SOME_CONSTANT.
    if(newValue / oldValue >= SOME_CONSTANT)
    {
        Console.WriteLine("First comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("First comparison says it's not bigger.");
    }

    // 3.0 * SOME_CONSTANT rounds to exactly 1.0, so this comparison succeeds.
    if(newValue >= oldValue * SOME_CONSTANT)
    {
        Console.WriteLine("Second comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("Second comparison says it's not bigger.");
    }
}

Output:

First comparison says it's not bigger.
Second comparison says it's bigger.

In case you don't believe me, you can paste these snippets into a .NET Fiddle and see for yourself.

Other languages may behave differently; bear in mind, however, that C#, like many languages, implements IEEE 754 floating-point arithmetic, so you should get the same results in other standardized runtimes.
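If you want to see exactly where the different outcomes come from, here is a small sketch that prints the values the two expressions actually compare, using the round-trip "G17" format specifier (the printed digits assume IEEE 754 doubles, which is what .NET uses):

public static void ShowRounding()
{
    double SOME_CONSTANT = 0.33333333333333335;

    // "G17" prints enough significant digits to round-trip a double exactly.
    Console.WriteLine((1.0 / 3.0).ToString("G17"));           // 0.33333333333333331
    Console.WriteLine(SOME_CONSTANT.ToString("G17"));         // 0.33333333333333337
    Console.WriteLine((3.0 * SOME_CONSTANT).ToString("G17")); // 1
}

The quotient rounds to a value just below the constant, while the product rounds to exactly 1.0, which is why the two comparisons disagree.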

Conclusion

If you are working greenfield, you are probably OK.

If you are working on legacy code, and the application is a financial or other sensitive application that performs arithmetic and is required to provide consistent results, be very cautious when rearranging operations. If you must change them, be sure that you have unit tests that will detect any subtle changes in the arithmetic.
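As a rough sketch of what such a test could look like (assuming xUnit; the values are taken from the floating-point example above), you can pin the current result of the division-based check before touching it:

using Xunit;

public class RatioComparisonTests
{
    // Pins the current outcome of the division-based check, so a refactor to
    // the multiplication form that changes the result will fail this test.
    [Fact]
    public void DivisionBasedCheck_KeepsItsCurrentResult()
    {
        double newValue = 1;
        double oldValue = 3;
        double SOME_CONSTANT = 0.33333333333333335;

        Assert.False(newValue / oldValue >= SOME_CONSTANT);
    }
}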

If you are just doing things like counting elements in an array or other general computational functions, you will probably be OK. I am not sure the multiplication method makes your code any clearer, though.

If you are implementing an algorithm to a specification, I would not change anything at all, not just because of the problem of rounding errors, but so that developers can review the code and map each expression back to the specification to ensure there are no implementation flaws.