TDD Testing – How to Handle Many Permutations in Test-Driven Development

algorithms, artificial intelligence, tdd, testing

When creating a system like an AI, which can take many different paths very quickly, or really any algorithm that has several different inputs, the possible result set can contain a large number of permutations.

What approach should one take to use TDD when creating a system that outputs many, many different permutations of results?

Best Answer

To take a more practical angle on pdr's answer: TDD is all about software design rather than testing. You use unit tests to verify your work as you go along.

So at the unit-test level you need to design the units so they can be tested in a completely deterministic fashion. You do this by taking anything that makes the unit nondeterministic (such as a random number generator) and abstracting it away. Say we have a naïve example of a method that decides whether a move is good or not:

class Decider {

  public boolean decide(float input, float risk) {

      float inputRand = (float) Math.random(); // cast needed: Math.random() returns a double
      if (inputRand > input) {
         float riskRand = (float) Math.random();
         if (riskRand > risk) {
            return true;
         }
      }
      return false;

  }

}

// The usage:
Decider d = new Decider();
d.decide(0.1337f, 0.1337f);

This method is very hard to test; the only thing you can really verify in a unit test is its bounds, and even that requires many runs to hit them. So instead, let's abstract away the randomizing part by creating an interface and a concrete class that wraps the functionality:

public interface IRandom {

   public float random();

}

public class ConcreteRandom implements IRandom {

   public float random() {
      return (float) Math.random(); // cast needed: Math.random() returns a double
   }

}

The Decider class now needs to use the concrete class through its abstraction, i.e. the interface. This way of doing things is called dependency injection (the example below uses constructor injection, but you can do it with a setter as well):

class Decider {

  IRandom irandom;

  public Decider(IRandom irandom) { // constructor injection
      this.irandom = irandom;
  }

  public boolean decide(float input, float risk) {

      float inputRand = irandom.random();
      if (inputRand > input) {
         float riskRand = irandom.random();
         if (riskRand > risk) {
            return true;
         }
      }
      return false;

  }

}

// The usage:
Decider d = new Decider(new ConcreteRandom());
d.decide(0.1337f, 0.1337f);
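The setter-injection variant mentioned above could look like the sketch below (the setter name is my own choice; IRandom is repeated so the snippet stands on its own):

```java
interface IRandom {
   float random();
}

class Decider {

  private IRandom irandom;

  public void setRandom(IRandom irandom) { // setter injection instead of constructor injection
      this.irandom = irandom;
  }

  public boolean decide(float input, float risk) {
      float inputRand = irandom.random();
      if (inputRand > input) {
         float riskRand = irandom.random();
         return riskRand > risk;
      }
      return false;
  }

}

// The usage:
// Decider d = new Decider();
// d.setRandom(new ConcreteRandom());
```

The trade-off is that with a setter the dependency can be left unset, so constructor injection is usually the safer default.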

You might ask yourself why this "code bloat" is necessary. Well, for starters, you can now mock the behavior of the random part of the algorithm, because Decider now has a dependency that follows IRandom's contract. You could use a mocking framework for this, but the example is simple enough to code yourself:

class MockedRandom implements IRandom {

    public List<Float> floats = new ArrayList<Float>();
    int pos;

   public void addFloat(float f) {
     floats.add(f);
   }

   public float random() {
      float out = floats.get(pos);
      if (pos < floats.size() - 1) { // stay on the last value rather than run off the end
         pos++;
      }
      return out;
   }

}
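Since IRandom has only one abstract method, on Java 8+ you can even skip the hand-rolled mock class for simple cases and pass the canned values via a lambda or method reference. A sketch (queuing the values through an `ArrayDeque` is just my way of feeding them in order; Decider and IRandom are repeated so the snippet compiles on its own):

```java
import java.util.ArrayDeque;
import java.util.Queue;

interface IRandom {
   float random();
}

class Decider {

  private final IRandom irandom;

  public Decider(IRandom irandom) {
      this.irandom = irandom;
  }

  public boolean decide(float input, float risk) {
      float inputRand = irandom.random();
      if (inputRand > input) {
         float riskRand = irandom.random();
         return riskRand > risk;
      }
      return false;
  }

}

class LambdaMockDemo {

  public static void main(String[] args) {
      // Queue the values the fake "random" source should return, in order.
      Queue<Float> canned = new ArrayDeque<>();
      canned.add(1f);
      canned.add(1f);

      // IRandom has a single abstract method, so a method reference
      // satisfies it; each call to random() pops the next canned value.
      Decider decider = new Decider(canned::poll);

      System.out.println(decider.decide(0.1337f, 0.1337f)); // prints "true"
  }

}
```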

The best part is that this can completely replace the "actual" concrete implementation. The code becomes easy to test like this:

private MockedRandom mRandom;
private Decider decider;

@Before
public void setUp() {
  mRandom = new MockedRandom();
  decider = new Decider(mRandom);
}

@Test
public void testDecisionWithLowInput_ShouldGiveFalse() {

  mRandom.addFloat(0f);

  assertFalse(decider.decide(0.1337f, 0.1337f));
}

@Test
public void testDecisionWithHighInputRandButLowRiskRand_ShouldGiveFalse() {

  mRandom.addFloat(1f);
  mRandom.addFloat(0f);

  assertFalse(decider.decide(0.1337f, 0.1337f));
}

@Test
public void testDecisionWithHighInputRandAndHighRiskRand_ShouldGiveTrue() {

  mRandom.addFloat(1f);
  mRandom.addFloat(1f);

  assertTrue(decider.decide(0.1337f, 0.1337f));
}

Hopefully this gives you some ideas on how to design your application so that the permutations can be forced, letting you test all the edge cases.
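To tie this back to the original question: once the randomness is injectable, covering many permutations is just a matter of enumerating tuples of canned values and expected outcomes. A table-driven sketch (plain Java with no test framework; in real code you would likely reach for JUnit's parameterized tests instead):

```java
interface IRandom {
   float random();
}

class Decider {

  private final IRandom irandom;

  public Decider(IRandom irandom) {
      this.irandom = irandom;
  }

  public boolean decide(float input, float risk) {
      float inputRand = irandom.random();
      if (inputRand > input) {
         float riskRand = irandom.random();
         return riskRand > risk;
      }
      return false;
  }

}

class PermutationTable {

  public static void main(String[] args) {
      // Each row: input roll, risk roll, expected decision (1 = true).
      float[][] cases = {
          {0f, 0f, 0f}, // low input roll            -> false
          {1f, 0f, 0f}, // high input, low risk roll -> false
          {1f, 1f, 1f}, // both rolls high           -> true
      };
      for (float[] c : cases) {
          float[] rolls = {c[0], c[1]};
          int[] pos = {0}; // one-element array so the lambda can advance it
          Decider d = new Decider(() -> rolls[pos[0]++]);
          boolean expected = c[2] == 1f;
          if (d.decide(0.1337f, 0.1337f) != expected) {
              throw new AssertionError("failed for rolls " + c[0] + ", " + c[1]);
          }
      }
      System.out.println("all permutations pass");
  }

}
```

Adding a new permutation is then one new row in the table rather than a new test method.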