Integration vs. unit tests
You should keep your unit tests and your integration tests completely separated. A unit test should test one thing, and one thing only, in complete isolation from the rest of your system. A unit is loosely defined, but it usually boils down to a method or a function.
It makes sense to have tests for each unit so you know its algorithm is implemented correctly, and so that if an implementation is flawed you immediately know what went wrong, and where.
Since you test in complete isolation while unit testing, you use stub and mock objects to stand in for the rest of your application. This is where integration tests come in: testing all units in isolation is great, but you also need to know whether the units actually work together. This means knowing whether a model is actually stored in the database, or whether a warning is really issued after algorithm X fails.
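As an illustration of that split, here is a minimal sketch of a unit test that isolates its unit with a mock, assuming a hypothetical UserService/UserRepository pair and Mockito (all names are illustrative); the matching integration test would run the same scenario against a real repository and database:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class UserServiceTest {
    @Test
    public void registeringAUserDelegatesToTheRepository() {
        // The repository (and the database behind it) is replaced by a mock,
        // so this test exercises UserService in complete isolation.
        UserRepository repository = mock(UserRepository.class);
        UserService service = new UserService(repository);

        service.register("alice");

        verify(repository).save("alice");
    }
}
```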
Test driven development
Taking a step back and looking at Test Driven Development (TDD), there are several things to take into account:
- You write your unit test before you actually write the code that makes it pass.
- You make the test pass, writing just enough code to accomplish this.
- Now that the test passes, it is time to take a step back. Is there anything to refactor with this new functionality in place? You can do this safely, since everything is covered by tests (a sketch of this cycle follows below).
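A minimal sketch of that red-green-refactor cycle, assuming JUnit 4 and a hypothetical Stack class: the first test is written before Stack exists (red), just enough code is then written to make it pass (green), and refactoring happens only once everything is green again:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class StackTest {
    @Test
    public void aNewStackIsEmpty() {
        // Red: written first; fails until Stack and isEmpty() exist.
        assertTrue(new Stack().isEmpty());
    }

    @Test
    public void aStackIsNotEmptyAfterAPush() {
        // Green: drives just enough code (a push method) to pass,
        // after which it is safe to refactor.
        Stack stack = new Stack();
        stack.push(1);
        assertFalse(stack.isEmpty());
    }
}
```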
Integration first vs. integration last
Integration tests fit into this TDD cycle in one of two ways. I know of people who like to write them beforehand. They call an integration test an end-to-end test, and define an end-to-end test as a test that completely exercises the whole path of a use case (think of setting up the application, bootstrapping it, going to a controller, executing it, checking the result, the output, etc.). Then they start out with their first unit test, make it pass, add a second, make it pass, and so on. Slowly, more and more parts of the integration test pass as well, until the feature is finished.
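A sketch of what such an end-to-end test might look like, with Application, Request, and Response as placeholder names rather than any real framework:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class RegisterUserEndToEndTest {
    @Test
    public void registeringAUserReturnsAConfirmation() {
        // Set up and bootstrap the whole application...
        Application app = Application.bootstrap("test-config");

        // ...go through a controller by executing a full request...
        Response response = app.handle(Request.post("/users", "name=alice"));

        // ...and check the result and the output.
        assertEquals(200, response.status());
        assertEquals("Welcome, alice", response.body());
    }
}
```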
The other style is building a feature unit test by unit test, and adding whatever integration tests are deemed necessary afterwards. The big difference between the two is that writing the integration test first forces you to think about the design of the application up front; the integration-last style somewhat disagrees with the premise that TDD is as much about application design as it is about testing.
Practicalities
At my job we have all our tests in the same project, split into different groups. The continuous integration tool runs whatever is marked as a unit test first. Only if those succeed are the slower integration tests (slower because they make real requests, use real databases, etc.) executed as well. By the way, we usually use one test file per class.
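One way to mark such groups, assuming JUnit 5 (not necessarily the tooling we actually use): tag the slow tests, have the CI tool run the untagged unit tests first, and run a second, tag-filtered pass for the integration tests only if the first pass succeeds.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class OrderRepositoryIntegrationTest {
    @Tag("integration")
    @Test
    void storesAndReloadsAnOrder() {
        // Talks to a real database, so it only runs in the second,
        // slower CI stage, and only if the unit tests passed.
    }
}
```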
Suggested reading
- Growing Object-Oriented Software, Guided by Tests: this book is an extremely good example of the integration-test-first methodology.
- The Art of Unit Testing, with Examples in .NET: a very good book on the principles behind unit testing.
- Robert C. Martin on TDD (Free articles): Do read the first two articles he linked there as well.
You're overthinking the problem. You want to use a list (an array or some other iterable) for your arguments when it's likely that the consumer of your function already has the values in a list. You want to use variable arguments when you might want to call a function with a different number of arguments.
A classic example of when you should use varargs is C's `printf()`. Using varargs makes sense here because, in the general case, there's no reason to put a bunch of arbitrary values in a list just to print something. A good example of where you'd probably want a list is a `sum()`, where you're probably operating on a bunch of related values that would likely be stashed away together.
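The same distinction sketched in Java (the method names here are made up for illustration):

```java
import java.util.List;

public class ArgumentStyles {
    // Varargs: callers pass a handful of ad-hoc values directly,
    // much like printf's format arguments.
    static String describe(String template, Object... values) {
        return String.format(template, values);
    }

    // A list: callers typically already hold the related values together.
    static int sum(List<Integer> values) {
        int total = 0;
        for (int value : values) {
            total += value;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(describe("%s scored %d", "alice", 42));
        System.out.println(sum(List.of(1, 2, 3))); // -> 6
    }
}
```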
It's not something you need to be dogmatic about or have rules for. Use some common sense and write code that people can follow and that doesn't force you to jump through arbitrary hoops.
Best Answer
Technically, what you should test depends on your test goals. But in general, you should try to test everything that can go wrong.
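The statement under discussion is, judging from the cases below, a defensive copy that deliberately preserves null; a reconstruction for illustration (not necessarily the original code):

```java
// Presumed shape of the constructor statement under test:
// copy the caller's collection defensively, but let null stay null.
this.coll = (coll == null) ? null : new ArrayList<>(coll);
```

With that statement in mind, here is what can go wrong: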
- You might have forgotten to use the `coll` parameter at all, i.e. the statement above is missing. This can be checked by a test that depends on the `coll` provided to the constructor.
- You might have forgotten to clone the parameter. This can be checked by a test that modifies the parameter `coll` and compares the result with the object-owned `coll`: they should be different after the modification.
- You might have forgotten to handle the null case. This can be checked by a test that omits this parameter/provides a null value. Usually we'd expect an exception; here, a null is OK.
- Another thing that could go wrong is that you create a null object or a default instance when a null is encountered. If your code is correct, you want to ensure that an actual null is stored here, not a default instance.
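A minimal sketch of those four tests, assuming the reconstructed statement above lives in the constructor of a hypothetical Widget class with a getColl() accessor (all names are assumptions):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNull;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class WidgetTest {
    @Test
    public void constructorUsesTheCollParameter() {
        List<String> coll = new ArrayList<>(List.of("a", "b"));
        // Fails if the constructor ignores coll entirely.
        assertEquals(coll, new Widget(coll).getColl());
    }

    @Test
    public void constructorClonesTheParameter() {
        List<String> coll = new ArrayList<>(List.of("a"));
        Widget widget = new Widget(coll);
        coll.add("b"); // mutate the caller's list after construction
        // Fails if the constructor stored the list without cloning it.
        assertNotEquals(coll, widget.getColl());
    }

    @Test
    public void nullIsAccepted() {
        new Widget(null); // fails if a null parameter throws
    }

    @Test
    public void nullStaysNullRatherThanBecomingADefaultInstance() {
        assertNull(new Widget(null).getColl());
    }
}
```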
That's already four test cases that would be sensible for this simple line of code. Using a code coverage tool can help to detect uncovered cases in your code, in particular if you also look at branch coverage. Some tools have problems with expression-level control flow (`?:`, `&&`, `||`), so it's better (and more readable for humans, too!) to use statement-level conditionals:
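Applied to the reconstructed statement above, the statement-level version might look like this; the behavior is unchanged, but each branch is now a separate statement that branch-coverage tools can report on individually:

```java
// Same defensive copy as the ternary version, but each branch is a
// distinct statement that coverage tools (and humans) can track.
if (coll == null) {
    this.coll = null;
} else {
    this.coll = new ArrayList<>(coll);
}
```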