I think a calculated expected value results in more robust and flexible test cases. Also, by using good variable names in the expression that calculates the expected result, it is much clearer where the expected result came from in the first place.
Having said that, in your specific example I would NOT trust the "Softcoded" method, because it uses your SUT (system under test) as the input for your calculations. If there's a bug in MyClass where fields are not properly stored, your test will still pass, because your expected value calculation will use the same wrong string as target.GetPath().
My suggestion would be to calculate the expected value where it makes sense, but to make sure the calculation doesn't depend on any code from the SUT itself.
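To illustrate, here is a minimal sketch of the difference, reusing the question's MyClass and GetPath(); the constructor signature and field names are my assumptions:

```csharp
using NUnit.Framework;

[TestFixture]
public class MyClassTests
{
    [Test]
    public void GetPath_CombinesDriveAndFolder()
    {
        string drive = "C:";
        string folder = "Temp";
        var target = new MyClass(drive, folder);

        // Good: the expectation is built from the raw test inputs only.
        string expected = drive + @"\" + folder;

        // Bad: building it from the SUT itself, e.g.
        //   string expected = target.Drive + @"\" + target.Folder;
        // lets a field-storage bug cancel itself out.

        Assert.AreEqual(expected, target.GetPath());
    }
}
```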
In response to the OP's update:
Yes, based on my knowledge of TDD (though my hands-on experience is somewhat limited), I would choose option #3.
Integration vs. unit tests
You should keep your unit tests and your integration tests completely separate. A unit test should test one thing and one thing only, in complete isolation from the rest of your system. A unit is loosely defined, but it usually boils down to a single method or function.
It makes sense to have a test for each unit so you know its algorithm is implemented correctly, and so that if the implementation is flawed, you immediately know what went wrong and where.
Since you test in complete isolation while unit testing, you use stub and mock objects to stand in for the rest of your application. This is where integration tests come in: testing all units in isolation is great, but you also need to know whether the units actually work together.
That means knowing whether a model is actually stored in the database, or whether a warning is really issued after algorithm X fails.
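As a sketch of the isolation part, here is a unit test that uses a hand-rolled mock in place of a real alerting component; all of the type and method names are made up for illustration:

```csharp
using NUnit.Framework;

// The collaborator the unit under test depends on.
public interface IAlerts { void Warn(string message); }

// A hand-rolled mock that simply records the interaction.
public class RecordingAlerts : IAlerts
{
    public string LastWarning;
    public void Warn(string message) => LastWarning = message;
}

// The unit under test: warns through its collaborator when given bad input.
public class Processor
{
    private readonly IAlerts alerts;
    public Processor(IAlerts alerts) { this.alerts = alerts; }

    public void Run(string input)
    {
        if (input == null) alerts.Warn("algorithm X failed");
        // ...real work would happen here...
    }
}

[TestFixture]
public class ProcessorTests
{
    [Test]
    public void Run_WarnsWhenAlgorithmFails()
    {
        var alerts = new RecordingAlerts();
        var processor = new Processor(alerts);

        processor.Run(null); // force the failure path

        // The unit test only verifies the interaction with the collaborator;
        // an integration test would wire up the real alerting component instead.
        Assert.AreEqual("algorithm X failed", alerts.LastWarning);
    }
}
```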
Test driven development
Taking a step back and looking at Test Driven Development (TDD), there are several things to take into account.
- You write your unit test before you actually write the code that makes it pass.
- You make the test pass, writing just enough code to accomplish this.
- Now that the test passes, it is time to take a step back: is there anything to refactor now that this new functionality is in place? You can do this safely since everything is covered by tests (a sketch of one full cycle follows below).
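A minimal sketch of one such red-green-refactor cycle, using a made-up PathBuilder class:

```csharp
using NUnit.Framework;

// Red: the test is written first. PathBuilder.Build doesn't exist yet,
// so this won't even compile -- which counts as a failing test.
[TestFixture]
public class PathBuilderTests
{
    [Test]
    public void Build_JoinsDriveAndFolder()
    {
        Assert.AreEqual(@"C:\Temp", PathBuilder.Build("C:", "Temp"));
    }
}

// Green: just enough code to make the test pass.
public static class PathBuilder
{
    public static string Build(string drive, string folder)
        => drive + @"\" + folder;
}

// Refactor: with the test green, you can now safely clean up the
// implementation (extract constants, remove duplication) and rerun the test.
```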
Integration first vs Integration last
Integration tests fit into this TDD cycle in one of two ways. I know of people who like to write them beforehand. They call an integration test an end-to-end test, and define an end-to-end test as one that exercises the complete path of a use case (think of setting up an application, bootstrapping it, going to a controller, executing it, checking the result, output, etc.). Then they start on their first unit test, make it pass, add a second, make it pass, and so on. Slowly, more and more parts of the integration test pass as well, until the feature is finished.
The other style is building a feature unit test by unit test, and adding whatever integration tests are deemed necessary afterwards. The big difference between the two is that with integration-test-first you're forced to think about the design of the application up front, which sits somewhat at odds with the premise that TDD is as much about letting the application design emerge as it is about testing.
Practicalities
At my job we have all our tests in the same project, but in different groups. The continuous integration tool first runs whatever is marked as a unit test. Only if those succeed are the slower integration tests (slower because they make real requests, use real databases, etc.) executed as well.
We usually use one test file per class, by the way.
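One way to do that grouping is with NUnit's [Category] attribute; the console filter syntax below assumes nunit3-console, and the fixture name is made up:

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryTests
{
    // Marked so the CI tool can postpone it until the fast tests pass.
    [Test, Category("Integration")]
    public void Save_PersistsOrderToRealDatabase()
    {
        /* ...talks to a real database... */
    }
}

// The CI server can then run the fast suite first:
//   nunit3-console MyTests.dll --where "cat != Integration"
// and only on success run the slow one:
//   nunit3-console MyTests.dll --where "cat == Integration"
```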
Suggested reading
- Growing Object-Oriented Software, Guided by Tests: this book is an extremely good example of the integration-test-first methodology.
- The Art of Unit Testing: With Examples in .NET: on unit testing, with examples in .NET :D A very good book on the principles behind unit testing.
- Robert C. Martin on TDD (free articles): do read the first two articles linked there as well.
Best Answer
I personally prefer to use the [TestCase] attribute, where the inputs and expected output are all passed as arguments to the test function, whenever possible.
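Something along these lines (a sketch, reusing the question's hypothetical MyClass):

```csharp
using NUnit.Framework;

[TestFixture]
public class MyClassTests
{
    [TestCase("C:", "Temp", @"C:\Temp")]
    [TestCase("D:", "Work", @"D:\Work")]
    public void GetPath_CombinesDriveAndFolder(string drive, string folder, string expected)
    {
        var target = new MyClass(drive, folder);
        Assert.AreEqual(expected, target.GetPath());
    }
}
```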
There is even a way to specify the expected return value in your TestCase arguments in NUnit, but I don't remember off the top of my head how that works or what its syntax is.
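For what it's worth, I believe this is the ExpectedResult named parameter in NUnit 3: the test method returns the actual value and the runner does the comparison. A sketch, again assuming the hypothetical MyClass:

```csharp
// NUnit 3: the runner compares the method's return value against
// ExpectedResult, so no explicit Assert is needed.
[TestCase("C:", "Temp", ExpectedResult = @"C:\Temp")]
public string GetPath_CombinesDriveAndFolder(string drive, string folder)
{
    return new MyClass(drive, folder).GetPath();
}
```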
In situations where you can't, or where it isn't appropriate to, pass your inputs and outputs in as parameters, I would define them all up front so you can quickly eyeball what you are feeding to your test. I have been burned plenty of times debugging a failing test case, only to find a typo in my input. Grouping all of that together makes things easier to deal with.
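For example (a sketch; Invoice and its methods are made-up names):

```csharp
using NUnit.Framework;

[TestFixture]
public class InvoiceTests
{
    [Test]
    public void Total_SumsAllLineItems()
    {
        // All inputs grouped up front, so a typo is easy to spot at a glance.
        decimal priceA = 9.99m;
        decimal priceB = 24.50m;
        int quantityA = 2;

        // The expectation is calculated from the same raw inputs,
        // never from the SUT itself.
        decimal expectedTotal = quantityA * priceA + priceB;

        var invoice = new Invoice();
        invoice.Add("widget", quantityA, priceA);
        invoice.Add("gadget", 1, priceB);

        Assert.AreEqual(expectedTotal, invoice.Total());
    }
}
```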