Testing Systems – How to Test Objects That Are Difficult to Mock

Tags: adapter, development-process, mocking, object-oriented-design, testing

I am working with the following system:

Network Data Feed -> Third Party Nio Library -> My Objects via adapter pattern

We recently had an issue where I updated the version of the library I was using. Among other things, the timestamps (which the third-party library returns as a long) changed from milliseconds after the epoch to nanoseconds after the epoch.
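To give a sense of the scale of this failure mode: the same instant differs by a factor of one million between the two units, which makes it cheap to detect defensively. Here is a minimal sketch (in Python, for brevity; `normalize_to_millis` and its cutoff are my own heuristic, not part of any library):

```python
def normalize_to_millis(ts: int) -> int:
    """Normalize an epoch timestamp to milliseconds.

    Heuristic: millisecond timestamps for current dates are ~13 digits,
    nanosecond timestamps are ~19 digits. Anything at or above the cutoff
    is assumed to be nanoseconds and scaled down. Valid for any date after
    mid-January 1970.
    """
    NANOS_CUTOFF = 10 ** 15  # far beyond any plausible millisecond value
    return ts // 1_000_000 if ts >= NANOS_CUTOFF else ts


millis = 1_500_000_000_000              # an instant in mid-2017, in ms
nanos = 1_500_000_000_000_000_000       # the same instant, in ns

# both units normalize to the same millisecond value
assert normalize_to_millis(millis) == normalize_to_millis(nanos)
```

A sanity check like this at the adapter boundary would have failed loudly on upgrade instead of silently propagating nanoseconds through the system.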

The Problem:

If I write tests that mock the third-party library's objects, my tests will be wrong whenever my understanding of those objects is wrong. For example, I didn't realize the timestamps had changed precision, so my mocks returned data at the wrong precision and the unit tests had to be changed. This is not a bug in the library; it happened because I missed something in the documentation.

The problem is that I cannot be sure what data these structures actually contain, because I cannot generate real ones without a live data feed. The objects are big and complicated and hold many different pieces of data, and the third-party library's documentation is poor.

The Question:

How can I set up my tests to test this behavior? I'm not sure I can solve this issue in a unit test, because the test itself can easily be wrong. Additionally, the integrated system is large and complicated and it's easy to miss something. For example, in the situation above, I had correctly adjusted the timestamp handling in several places, but I missed one of them. The system seemed to be doing mostly the right things in my integration test, but when I deployed it to production (which has a lot more data), the problem became obvious.

I do not have a process for my integration tests right now. Testing is essentially: try to keep the unit tests good, add more tests when things break, then deploy to my test server and make sure things seem sane, then deploy to production. This timestamp issue passed the unit tests because the mocks were created wrong, then it passed the integration test because it didn't cause any immediate, obvious problems. I do not have a QA department.

Best Answer

It sounds like you're already doing due diligence. But ...

At the most practical level: always include a good handful of "full-loop" integration tests of your own code in your suite, and write more assertions than you think you need. In particular, you should have a few tests that perform a full create-read-[do_stuff]-validate cycle.

[TestMethod]
public void MyFormatter_FormatsTimesCorrectly() {

  // this test isn't necessarily about the stream or the external interpreter.
  // but ... we depend on them working how we think they work:
  var stream = new StreamThingy();
  var interpreter = new InterpreterThingy(stream);
  stream.Write("id-123, some description, 12345");

  // this is what you're actually testing. but, it'll also hiccup
  // if your 3rd party dependencies introduce a breaking change.
  var formatter = new MyFormatter(interpreter);
  var line = formatter.GetLine();
  Assert.AreEqual(
    "some description took 123.45 seconds to complete (id-123)", line
  );
}

And it sounds like you're already doing this sort of thing. You're just dealing with a flaky and/or complicated library. In that case, it's good to throw in a few "this is how the library works" tests that both verify your understanding of the library and serve as examples of how to use it.

Suppose you need to understand and depend on how a JSON parser interprets each "type" in a JSON string. It's helpful and trivial to include something like this in your suite:

[TestMethod]
public void JSONParser_InterpretsTypesAsExpected() {
  var json = "{\"nbr\": 11, \"str\": \"22\", \"nll\": null, \"udf\": undefined}";
  var o = new JSONParser().Parse(json);

  Assert.AreEqual(11, o.nbr);
  Assert.AreEqual(typeof(int), o.nbr.GetType());
  Assert.AreEqual("22", o.str);
  Assert.IsNull(o.nll);                // JSON null parses to null
  Assert.IsFalse(o.KeyExists("udf"));  // this parser drops non-standard values
}
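The same idea in runnable form, with a real parser standing in for whatever library you depend on (here, Python's standard-library json module; the specific assertions reflect that module's documented behavior):

```python
import json

# A "learning test": it pins down how we believe the parser behaves,
# so a library upgrade that changes any of this fails here first.
doc = json.loads('{"nbr": 11, "str": "22", "nll": null}')

assert doc["nbr"] == 11 and isinstance(doc["nbr"], int)    # number stays an int
assert doc["str"] == "22" and isinstance(doc["str"], str)  # "22" stays a string
assert doc["nll"] is None                                  # JSON null -> None
assert "udf" not in doc                                    # absent keys stay absent

# A precision check in the spirit of the timestamp bug: a 19-digit
# nanosecond value round-trips through the parser without loss.
big = json.loads('{"ts": 1500000000000000000}')
assert big["ts"] == 1_500_000_000_000_000_000
```

Tests like these cost almost nothing to write and double as executable documentation of the library, which helps when the official docs are poor.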

But secondly, remember that automated testing of any kind, at almost any level of rigor, will still fail to protect you against all bugs. It's perfectly common to add tests as you discover problems. Since you have no QA department, that means many of those problems will be discovered by end users.

And to a significant degree, that's just normal.

And thirdly, when a library changes the meaning of a return value or field without renaming it or otherwise "breaking" dependent code (say, by changing its type), I'd be pretty damn unhappy with that publisher. Even though you should probably have read the changelog (if there is one), you should also pass some of your stress back onto the publisher. I'd argue they need the hopefully-constructive criticism ...
