Algorithms – Approach to Simplifying an Algorithm

Tags: algorithm-analysis, algorithms

For completely random reasons*, I wrote some code that calculates and displays the following expression:

P = 1 * (2+2) * (3+3+3) * (4+4+4+4) * ... * (n+n+...+n)   [the last factor is n added n times]

Which is equivalent to (n!)^2
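A quick way to see the equivalence (my own restatement, not part of the original question): the i-th factor is i added to itself i times, i.e. i^2, so

```latex
P \;=\; \prod_{i=1}^{n} i^2 \;=\; \left( \prod_{i=1}^{n} i \right)^{2} \;=\; (n!)^2
```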

The version I wrote (listed below the line) isn't terribly efficient, especially in how it calculates the inner product.

How should I go about simplifying the algorithm so it will operate faster and be more easily understood?

And yes, I know I should use something other than int for the variables. That aspect of my code isn't as interesting as re-working the algorithmic approach.

* In the spirit of full disclosure this off-topic question sparked my working on the code


int sequence(int n)
{
  int i, j, accum, prod;

  accum = 1;
  for (i = 2; i <= n; i++)
  {
    prod = i;                   /* start with one copy of i ...        */
    for (j = 1; j <= i - 1; j++)
    {
      prod += i;                /* ... then add i another i-1 times    */
    }
    accum *= prod;              /* prod is now i+i+...+i (i terms)     */
  }
  return accum;
}

Best Answer

How it goes for me usually is to first write code that literally performs the calculation or carries out some operations in the obvious non-clever way. Then I look for patterns where I can substitute "better" code.

Some red flags or "code smells" I see in the example code:

  1. There's one 'for' loop inside another. If we're looping over i and j, maybe we could consider the sum i+j and difference i-j, with appropriate limits. Very handy in quantum mechanics. Probably nonsense for the typical business app. There may be other ways to index things. Even a simple swap of the inner and outer loops might lead to insight, although it probably won't improve speed by itself.

  2. Again with the nested loops: maybe there's a clever way to look at the two-dimensionality as a one-dimensional problem. Precalculate something, perhaps. In the course of trying to improve the code, I may have an insight that lets me change the math or algorithm at an abstract level.
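One concrete way to "precalculate something" for this particular sequence (a sketch of my own, not from the post; `build_table` and `MAX_N` are made-up names) is a small lookup table of (i!)^2 values, which turns each later call into a single array read:

```c
/* (12!)^2 is the largest value of this sequence that fits in a signed
   64-bit integer, so the table can safely stop there. */
#define MAX_N 12

static long long table[MAX_N + 1];

/* Fill table[i] with (i!)^2 via the recurrence (i!)^2 = ((i-1)!)^2 * i * i. */
static void build_table(void)
{
    table[0] = table[1] = 1;
    for (int i = 2; i <= MAX_N; i++)
        table[i] = table[i - 1] * (long long)i * i;
}
```

After `build_table()` runs once, the whole calculation for any 0 <= n <= MAX_N is just `table[n]`.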

  3. Look for obvious or almost-obvious math simplifications. (4+4+4+4) of course is 4*4=4^2, and so for the general term I'd use n^2. Right there, we lose a loop. After that, recognizing that the remaining product 1^2 * 2^2 * 3^2 * ... is just (n!)^2, I'd choose a lookup table. If we weren't adding or multiplying but using trickier things like Legendre polynomials, root-sum-square, or noisy experimental data instead of nice neat integers, then more sophisticated math is needed. But then I work in physics and graphics, not business, so most programmers may never need to get that deep into such things.
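Putting point 3 into code (my own sketch; `sequence_squared` is an illustrative name, and `long long` is used to postpone overflow a little): each inner sum collapses to i*i, and the product of the squares is just the square of the factorial:

```c
/* Each factor (i added to itself i times) is i*i, and multiplying
   1*1 * 2*2 * ... * n*n is the same as squaring n! once at the end. */
long long sequence_squared(int n)
{
    long long factorial = 1;
    for (int i = 2; i <= n; i++)
        factorial *= i;
    return factorial * factorial;   /* (n!)^2 */
}
```

For example, `sequence_squared(4)` computes 24 * 24 = 576, the same value as the original double loop's 1 * 4 * 9 * 16.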

  4. Pluck out things that are done over and over. In your example we want to do "for n times: add n to a running total", and that could be put into a black-box function or object or something. Suppose your "4+4+4+4" and the like weren't just numbers and simple addition but something fancier such as images or audio tracks, with digital filters, FFTs, and modulators. Shovel it into a black box. It may not reduce the big-O complexity right away, but the contents of that black box can always be studied and optimized later. Hopefully we'd reduce its big-O growth, but if not, we're likely to find some optimization to reduce the coefficient. (E.g. it still helps to change an algorithm's execution time from 8*n^3 to 4*n^3.)
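As a sketch of the black-box idea applied to this example (`repeated_sum` and `sequence_boxed` are illustrative names, not from the post):

```c
/* The repeated work -- "add value to a running total, `times` times" --
   lives in its own function.  Its internals can be swapped out later
   (e.g. replaced by value * times) without touching any caller. */
static long long repeated_sum(int value, int times)
{
    long long total = 0;
    for (int k = 0; k < times; k++)
        total += value;
    return total;
}

long long sequence_boxed(int n)
{
    long long accum = 1;
    for (int i = 2; i <= n; i++)
        accum *= repeated_sum(i, i);   /* each factor is i+i+...+i = i*i */
    return accum;
}
```

The caller's loop now reads almost like the math, and any later optimization happens inside `repeated_sum` alone.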

Sometimes, while doing the optimization work and waiting for compiles, I may websurf for other attempts to solve the problem, and find a whole different formula or method of solution, something so good that my algorithm can be ditched, never mind how it's implemented.
