When I test out the difference in time between shifting and multiplying in C, there is no difference. Why

bitwise-operators, c, efficiency

I have been taught that shifting in binary is much more efficient than multiplying by 2^k. So I wanted to experiment, and I used the following code to test this out:

#include <time.h>
#include <stdio.h>

int main() {
    clock_t launch = clock();
    int test = 0x01;
    int runs;

    // simple loop that oscillates between int 1 and int 2
    for (runs = 0; runs < 100000000; runs++) {

        // I first compiled + ran it a few times with this:
        test *= 2;

        // then I recompiled + ran it a few times with this instead:
        // test <<= 1;

        // set back to 1 each time
        test >>= 1;
    }

    clock_t done = clock();
    double diff = done - launch;
    printf("%f\n", diff);
    return 0;
}

For both versions, the printout was approximately 440000, give or take 10000, and there was no (visually, at least) significant difference between the two versions' outputs. So my question is: is there something wrong with my methodology? Should there even be a visible difference? Does this have something to do with the architecture of my computer, the compiler, or something else?

Best Answer

As the other answer says, most compilers will automatically optimize a multiplication by a power of two into a bit shift (or an equivalent cheap instruction).
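You can see this for yourself. Here is a minimal sketch (the function names are made up for illustration): compile it with an optimizing flag, e.g. gcc -O2 -S compare.c, and compare the generated assembly; both functions typically lower to the same instructions.

// Two functions that an optimizing compiler will usually
// translate to identical machine code.
int mul_by_two(int x) {
    return x * 2;
}

int shift_by_one(int x) {
    return x << 1;
}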

This illustrates a general rule of optimization: most hand-applied 'optimizations' actually obscure from the compiler what you really mean, and may even hurt performance.

Only optimize when you have noticed a performance problem and have measured where the problem actually is. (Most of the code we write is not executed often enough to be worth the effort anyway.)
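If you do want to measure something like this yourself, here is a minimal sketch of one way to make the benchmark a little more meaningful. The volatile qualifier and the scaling by CLOCKS_PER_SEC are my additions, not part of your original code: volatile forces the compiler to actually perform every operation instead of folding the loop away.

#include <stdio.h>
#include <time.h>

int main(void) {
    // volatile forces every load and store to happen,
    // so the loop body cannot be optimized out entirely.
    volatile int test = 1;
    long runs;

    clock_t launch = clock();
    for (runs = 0; runs < 100000000L; runs++) {
        test *= 2;    // swap in "test <<= 1;" for the other run
        test >>= 1;   // set back to 1 each time
    }
    clock_t done = clock();

    // Report CPU time in seconds instead of raw clock ticks.
    printf("%f\n", (double)(done - launch) / CLOCKS_PER_SEC);
    return 0;
}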

The big downside of hand-optimizing is that the 'optimized' code is often much less readable. So in your case: use multiplication when you mean to multiply, and use bit shifting when you mean to move bits.
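As a hypothetical illustration of that guideline (both functions are invented for this answer):

// Arithmetic intent: "twice the price" reads naturally as multiplication.
unsigned doubled_price(unsigned price_cents) {
    return price_cents * 2;
}

// Bit-manipulation intent: a shift says exactly what is happening.
unsigned shift_flags_left(unsigned flags) {
    return flags << 1;
}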