Unit Testing – Is It Bad Practice for Unit Tests to Be Dependent on Each Other?

unit testing

Let's say I have some unit tests like this:

const assert = require('assert');

let myApi = new Api();

describe('api', () => {

  describe('set()', () => {
    it('should return true when setting a value', () => {
      assert.equal(myApi.set('foo', 'bar'), true);
    });
  });

  describe('get()', () => {
    it('should return the value when getting the value', () => {
      assert.equal(myApi.get('foo'), 'bar');
    });
  });

});

So now I have two unit tests. One sets a value through the API; the other checks that the proper value is returned. However, the second test depends on the first. Should I add a .set() call in the second test, before the get(), with the sole purpose of making the second test independent of anything else?

Also, in this example, should I instantiate myApi once per test instead of once before all the tests?

Best Answer

Yes, it's bad practice. Unit tests need to run independently of each other, for the same reason you want any other function to run independently: so you can treat it as a self-contained unit.

Should I add in a .set() method in the 2nd test before the get() with the sole purpose of making sure the 2nd test is not dependent on anything else?

Yes. However, if these are just bare getter and setter methods, they contain no behavior and you really shouldn't need to test them at all, unless you have a reputation for fat-fingering things in such a way that the getter/setter compiles but sets or gets the wrong field.
