Asserts are useful for telling you about the internal state of the program: for example, that your data structures hold a valid state, e.g., that a Time data structure never holds a value like 25:61:61. The conditions checked by asserts are:
- Preconditions, which assure that the caller keeps its contract,
- Postconditions, which assure that the callee keeps its contract, and
- Invariants, which assure that the data structure always holds some property after the function returns. An invariant is a condition that is both a precondition and a postcondition. (A minimal sketch of all three follows this list.)
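To make those three kinds of checks concrete, here is a minimal sketch of my own (a hypothetical Time class, not taken from any particular codebase) that uses asserts for a precondition, a postcondition, and an invariant:

```python
class Time:
    """Hypothetical 24-hour clock value; invented purely for illustration."""

    def __init__(self, hours, minutes, seconds):
        # Precondition: the caller keeps its contract and passes values in range.
        assert 0 <= hours < 24 and 0 <= minutes < 60 and 0 <= seconds < 60
        self.hours, self.minutes, self.seconds = hours, minutes, seconds
        self._check_invariant()

    def _check_invariant(self):
        # Invariant: a Time never holds something like 25:61:61.
        assert 0 <= self.hours < 24
        assert 0 <= self.minutes < 60
        assert 0 <= self.seconds < 60

    def add_seconds(self, n):
        # Precondition on the argument.
        assert n >= 0
        total = (self.hours * 3600 + self.minutes * 60 + self.seconds + n) % 86400
        self.hours = total // 3600
        self.minutes = (total % 3600) // 60
        self.seconds = total % 60
        # Postcondition: the callee keeps its contract; the invariant still holds.
        self._check_invariant()
```

If any of these asserts fires, you know immediately which contract was broken and by whom, which is exactly the "internal state" information described above.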
Unit tests are useful for telling you about the external behavior of the module. Your Stack may have a consistent state after the push() method is called, but if the size of the stack doesn't increase by three after it is called three times, then that is an error. (Think of the trivial case where an incorrect push() implementation only checks the asserts and exits.)
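As a rough illustration of what such a behavioral test might look like (the Stack class here is a throwaway stand-in so the example is self-contained):

```python
import unittest


class Stack:
    """Throwaway stack used only to make the example self-contained."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def size(self):
        return len(self._items)


class StackTest(unittest.TestCase):
    def test_three_pushes_grow_the_size_by_three(self):
        stack = Stack()
        for value in (1, 2, 3):
            stack.push(value)
        # External behavior: a push() that merely satisfied its asserts and
        # returned without storing anything would still fail this check.
        self.assertEqual(stack.size(), 3)


if __name__ == "__main__":
    unittest.main()
```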
Strictly speaking, the major difference between asserts and unit tests is that unit tests come with test data (values that get the program to run), while asserts do not. That is, you can execute your unit tests automatically, while you cannot say the same for assertions. For the sake of this discussion I've assumed that you exercise the assertions by executing the program under higher-level functional tests, which drive the whole program rather than individual modules the way unit tests do. If you are not talking about automated functional tests as the means to "see real inputs", then clearly the value lies in automation, and the unit tests win. If you are talking about automated functional tests, then see below.
There can be some overlap in what is being tested. For example, a Stack's postcondition may actually assert that the stack size increases by one. But there are limits to what can be performed in that assert: should it also check that the top element is what was just added?
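For instance, a push() postcondition might look like the sketch below (again a hypothetical class of my own); how much work the second assert should do is exactly the overlap in question:

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        old_size = len(self._items)
        self._items.append(item)
        # Postcondition: the size grew by exactly one...
        assert len(self._items) == old_size + 1
        # ...and, if we push the assert further, the new top is the item that
        # was just added. Whether this belongs here or in a unit test is the
        # overlap discussed above.
        assert self._items[-1] is item
```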
For both, the goal is to increase quality. For unit testing, the goal is to find bugs. For assertions, the goal is to make debugging easier by observing invalid program states as soon as they occur.
Note that neither technique verifies correctness. In fact, if you conduct unit testing with the goal of verifying that the program is correct, you will likely come up with uninteresting tests that you already know will pass. It's a psychological effect: you'll do whatever it takes to meet your goal. If your goal is to find bugs, your activities will reflect that.
Both are important, and have their own purposes.
[As a final note about assertions: to get the most value, you need to use them at all critical points in your program, not just in a few key functions. Otherwise, the original source of a problem may be masked and hard to detect without hours of debugging.]
Integration vs. unit tests
You should keep your unit tests and your integration tests completely separated. Your unit tests should test one thing and one thing only, in complete isolation from the rest of your system. A unit is loosely defined, but it usually boils down to a single method or function.
It makes sense to have tests for each unit so you know its algorithm is implemented correctly and, if the implementation is flawed, you immediately know what went wrong and where.
Since unit tests run in complete isolation, you use stub and mock objects to stand in for the rest of your application. Testing all units in isolation is great, but you also need to know whether the units actually work together. This is where integration tests come in: they tell you whether a model is actually stored in the database, or whether a warning is really issued after algorithm X fails.
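As a sketch of that split (save_user and the table layout are invented for the example), the same unit can be exercised both ways: once against a mock in complete isolation, and once against a real, if in-memory, database:

```python
import sqlite3
from unittest.mock import Mock


def save_user(name, connection):
    """Hypothetical unit under test: writes a user row through `connection`."""
    connection.execute("INSERT INTO users (name) VALUES (?)", (name,))
    connection.commit()


# Unit test: the connection is a mock, so the unit runs in complete isolation.
def test_save_user_issues_an_insert_and_commits():
    connection = Mock()
    save_user("alice", connection)
    connection.execute.assert_called_once()
    connection.commit.assert_called_once()


# Integration test: the same unit against a real (here: in-memory) database,
# checking that the row really ends up stored.
def test_save_user_really_stores_the_row():
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (name TEXT)")
    save_user("alice", connection)
    assert connection.execute("SELECT name FROM users").fetchall() == [("alice",)]
```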
Test driven development
Taking it a step back and looking at Test Driven Development (TDD) there are several things to take into account.
- You write your unit test before you actually write the code that makes it pass.
- You make the test pass by writing just enough code to accomplish this.
- Now that the test passes, it is time to take a step back. Is there anything to refactor with this new functionality in place? You can do this safely since everything is covered by tests. (A small sketch of one pass through this cycle follows the list.)
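The fizzbuzz function below is a made-up example, not anything from the discussion above; it just shows the shape of one red-green-refactor pass:

```python
# Step 1 - red: the test is written first and fails, because fizzbuzz()
# does not exist yet (or does not handle multiples of three).
def test_fizzbuzz_returns_fizz_for_multiples_of_three():
    assert fizzbuzz(9) == "Fizz"


# Step 2 - green: write just enough code to make the test pass.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)


# Step 3 - refactor: with the test in place, the implementation can be
# reshaped safely; the test reports immediately if the behavior changes.
```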
Integration first vs Integration last
Integration tests fit into this TDD cycle in one of two ways. I know of people who like to write them beforehand. They call an integration test an end-to-end test, and define an end-to-end test as one that exercises the whole path of a use case (think of setting up the application, bootstrapping it, going to a controller, executing it, checking the result, the output, and so on). Then they start with their first unit test, make it pass, add a second, make it pass, and so on. Slowly, more and more parts of the integration test pass as well, until the feature is finished.
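Such an end-to-end test might look roughly like the sketch below. Every name in it (bootstrap_application, the /users route, find_user) is a hypothetical placeholder rather than any real framework API; the point of this style is that the test is written before any of those pieces exist, so it starts out red and only turns green once the whole feature works:

```python
# Written first, before the application code exists; intentionally red at first.
def test_registering_a_user_end_to_end():
    app = bootstrap_application(config="test")               # assumed helper
    response = app.dispatch("POST", "/users", {"name": "alice"})
    assert response.status_code == 201                        # controller result
    assert app.database.find_user("alice") is not None        # stored side effect
```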
The other style is building a feature unit test by unit test and adding whatever integration tests are deemed necessary afterwards. The big difference between the two is that with integration-test-first you're forced to think about the design of the application up front. This sits somewhat at odds with the premise that TDD is as much about application design as it is about testing.
Practicalities
At my job we have all our tests in the same project, but in different groups. The continuous integration tool runs whatever is marked as a unit test first. Only if those succeed are the slower integration tests (slower because they make real requests, use real databases, etc.) executed as well.
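The answer above doesn't say which tool does that grouping; as one possible way to express it, pytest's custom markers let CI run the fast suite first and the marked integration tests only afterwards (parse_amount is invented for the example):

```python
import pytest


def parse_amount(text):
    """Hypothetical unit under test: parses text like 12.50 into integer cents."""
    euros, cents = text.split(".")
    return int(euros) * 100 + int(cents)


# Fast unit test: no I/O, runs on every build.
def test_parse_amount():
    assert parse_amount("12.50") == 1250


# Marked so the CI tool can defer it, e.g. run `pytest -m "not integration"`
# first and `pytest -m integration` only once that suite is green.
@pytest.mark.integration
def test_parse_amount_against_a_real_source():
    pytest.skip("placeholder: this is where real requests / a real DB would go")
```

(Custom markers like this are normally registered in the pytest configuration; that detail is left out of the sketch.)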
By the way, we usually use one test file per class.
Suggested reading
- Growing Object-Oriented Software, Guided by Tests: this book is an extremely good example of the integration-test-first methodology.
- The Art of Unit Testing, with Examples in .NET: on unit testing, with examples in .NET :D A very good book on the principles behind unit testing.
- Robert C. Martin on TDD (Free articles): Do read the first two articles he linked there as well.
Best Answer
I tend to side with your friend because all too often, unit tests are testing the wrong things.
Unit tests are not inherently bad. But they often test the implementation details rather than the input/output flow. You end up with completely pointless tests when this happens. My own rule is that a good unit test tells you that you just broke something; a bad unit test merely tells you that you just changed something.
An example off the top of my head is one test that got tucked into WordPress a few years back. The functionality being tested revolved around filters that called one another, and the tests were verifying that callbacks would get called in the correct order. But instead of (a) running the chain to verify that callbacks get called in the expected order, the tests focused on (b) reading some internal state that arguably shouldn't have been exposed to begin with. Change the internals and (b) turns red, whereas (a) turns red only if a change to the internals breaks the expected result. (b) was clearly a pointless test in my view.
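To make the contrast concrete with a self-contained sketch (FilterChain is a made-up stand-in for the WordPress filter mechanism, not its real API), compare a behavior-focused test with an internals-focused one:

```python
class FilterChain:
    """Made-up stand-in for the filter mechanism; not the WordPress API."""

    def __init__(self):
        self._callbacks = []

    def add(self, callback):
        self._callbacks.append(callback)

    def run(self):
        for callback in self._callbacks:
            callback()


# (a) Behavior-focused: run the chain and assert on the observable result.
def test_callbacks_run_in_registration_order():
    calls = []
    chain = FilterChain()
    chain.add(lambda: calls.append("first"))
    chain.add(lambda: calls.append("second"))
    chain.run()
    assert calls == ["first", "second"]


# (b) Internals-focused: peeks at private state, so it turns red on any
# refactoring of the internals, even one that preserves the ordering behavior.
def test_callbacks_are_stored_in_an_internal_list():
    chain = FilterChain()
    first, second = (lambda: None), (lambda: None)
    chain.add(first)
    chain.add(second)
    assert chain._callbacks == [first, second]
```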
If you have a class that exposes a few methods to the outside world, the correct things to test, in my view, are those exposed methods only. If you test the internal logic as well, you may end up exposing that logic to the outside world, resorting to convoluted testing methods, or maintaining a litany of unit tests that invariably break whenever you want to change anything.
With all that said, I'd be surprised if your friend is as critical of unit tests per se as you seem to suggest. Rather, I'd gather he's being pragmatic. That is, he has observed that the unit tests that get written are mostly pointless in practice. Quoting: "unit tests tend to be commented out rather than reworked". To me there's an implicit message in there - if they tend to need reworking, it is because they tend to suck. Assuming so, the second proposition follows: developers' time would be better spent writing tests that are harder to get wrong - i.e. integration tests.
As such it's not about one being better or worse. It's just that one is a lot easier to get wrong, and indeed very often wrong in practice.