It's comparing apples and oranges.
Integration tests, acceptance tests, unit tests, behaviour tests - they are all tests and they will all help you improve your code but they are also quite different.
I'm going to go over each of these kinds of test, give my opinion on them, and hopefully explain why you need a blend of all of them:
Integration tests:
Simply put, these test that different component parts of your system integrate correctly - for example, you might simulate a web service request and check that the result comes back. I would generally use real(ish) static data and mocked dependencies so that the test can be consistently verified.
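As a sketch of that idea - every name here is invented for illustration, and a hand-rolled fake stands in for a mocking library so the snippet is self-contained - an integration test exercises the wiring between a transport layer and real parsing logic, with the external dependency faked behind static data:

```java
// Hypothetical sketch: WeatherClient and Transport are invented names.
interface Transport {
    String get(String path); // returns a raw response body
}

class WeatherClient {
    private final Transport transport;
    WeatherClient(Transport transport) { this.transport = transport; }

    // Integrates transport + parsing: this combination is what the test exercises.
    int temperatureFor(String city) {
        String body = transport.get("/weather?city=" + city);
        return Integer.parseInt(body.trim()); // real parsing logic, not mocked
    }
}

public class WeatherClientIntegrationTest {
    public static void main(String[] args) {
        // Fake transport serving real-ish static data, so runs are repeatable.
        Transport fake = path -> " 21 ";
        WeatherClient client = new WeatherClient(fake);

        int temp = client.temperatureFor("London");
        if (temp != 21) throw new AssertionError("expected 21, got " + temp);
        System.out.println("integration test passed");
    }
}
```

Only the boundary (the transport) is faked; the code paths between the components stay real, which is the point of an integration test.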
Acceptance tests:
An acceptance test should directly correlate to a business use case. It can be huge ("trades are submitted correctly") or tiny ("filter successfully filters a list") - it doesn't matter; what matters is that it is explicitly tied to a specific user requirement. I like to focus on these for test-driven development because it means we have a good reference manual mapping tests to user stories for dev and QA to verify.
Unit tests:
These cover small, discrete units of functionality that may or may not make up an individual user story by themselves. For example, a user story which says that we retrieve all customers when we access a specific web page can be an acceptance test (simulate hitting the web page and check the response), but may also involve several unit tests: verify that security permissions are checked, verify that the database is queried correctly, verify that any code limiting the number of results executes correctly. These are all "unit tests" that aren't a complete acceptance test.
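One of those units - the code limiting the number of results - can be tested in isolation, with no web page or database in sight. This is a sketch with invented names, not code from the question:

```java
import java.util.List;

// Hypothetical sketch: the "limit the number of results" unit from the story.
class ResultLimiter {
    private final int max;
    ResultLimiter(int max) { this.max = max; }

    // A pure unit of logic, testable without the web page or the database.
    <T> List<T> limit(List<T> results) {
        return results.size() <= max ? results : results.subList(0, max);
    }
}

public class ResultLimiterTest {
    public static void main(String[] args) {
        ResultLimiter limiter = new ResultLimiter(2);
        List<Integer> limited = limiter.limit(List.of(1, 2, 3, 4));
        if (limited.size() != 2 || limited.get(0) != 1)
            throw new AssertionError("limiter should keep the first 2 results");
        System.out.println("unit test passed");
    }
}
```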
Behaviour tests:
These define what the flow of an application should be for a specific input. For example, "when a connection cannot be established, verify that the system retries the connection." Again, this is unlikely to be a full acceptance test, but it still allows you to verify something useful.
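That retry example can be sketched as a behaviour test. The `Connection` and `Connector` types are invented for illustration, and a hand-rolled counting fake stands in for a mocking library so the snippet stands alone:

```java
// Hypothetical sketch: Connection and Connector are invented names.
interface Connection {
    boolean open(); // true when the connection is established
}

class Connector {
    static final int MAX_ATTEMPTS = 3;

    // Retries until open() succeeds or MAX_ATTEMPTS is exhausted.
    boolean connect(Connection connection) {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            if (connection.open()) return true;
        }
        return false;
    }
}

public class ConnectorBehaviourTest {
    public static void main(String[] args) {
        // Hand-rolled fake: fails twice, succeeds on the third call, counts calls.
        int[] calls = {0};
        Connection flaky = () -> ++calls[0] >= 3;

        boolean connected = new Connector().connect(flaky);
        if (!connected || calls[0] != 3)
            throw new AssertionError("expected success on the third attempt");
        System.out.println("behaviour test passed");
    }
}
```

Note that the assertion is about the flow (how many attempts were made), not just the final result - that is what makes it a behaviour test.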
These are all my opinions, formed through much experience of writing tests; I don't like to focus on the textbook approaches - rather, focus on what gives your tests value.
How you set up test data is important, and I'd argue both versions are suboptimal.
Every method or class you write has a contract, whether explicit (through documentation, for instance) or implicit (through what the code actually does). This contract is important because it describes what the clients - i.e. the code that uses your class - should expect when using it. Unit testing is a method to programmatically document the contract of the code under test, in a way that ensures the behaviour stays the same even if the implementation changes.
An important characteristic of unit tests is that they want the code under test (CUT) to be isolated from other code. This means you have to be very careful when the CUT has dependencies. If it uses a dependency which has a different reason to change than itself (this is what responsibility means in the single responsibility principle), you'll usually have an abstraction to isolate these two. This abstraction itself has a contract, and in unit testing, you assume that the abstraction on which you depend will behave according to its contract. In unit tests, this generally means that this dependency will be mocked, and you will pilot the mock to behave in a certain way.
Back to your example now. You are unit testing the `UserLogic` class. It has two dependencies that I can see: `User` and `DbContext`. I don't have enough context to know what `User` is, but I'll assume it is some sort of value object. In that case, it is fine to use it directly.
`DbContext` is a different beast. It seems to be an implementation of some sort of persistence. It definitely has a different reason to change than `UserLogic`, which means it should be abstracted by an interface of some sort. I'll assume you already have one called `Context`.
Therefore, I can assume the implementation of `UserLogic.AddUser` looks something like this:
```java
public boolean AddUser(User user) {
    if (context.HasUser(user)) {
        return false;
    }
    context.AddUser(user);
    context.SaveChanges();
    return true;
}
```
The outcome that you want to unit test is as follows: if the `User` has already been added to the `context`, you want to ensure the `Context` hasn't changed (no new users were added, nor were the changes saved).
That description of the outcome tells you pretty much exactly how the unit test should look. What you want to arrange is that the `context` already has a specific `User`. What you want to act on is `AddUser`. What you want to assert is that the `User` was not added and the `Context` was not saved. Therefore, your unit test looks like this (in Java; I'm not too familiar with C# testing libraries):
```java
@Test
public void givenContextAlreadyHasTheUser_whenAddUser_thenTheUserIsNotAddedASecondTime() {
    // Arrange
    Context context = mock(Context.class);
    User user = new User("test@test.com");
    UserLogic userLogic = new UserLogic(context);
    given(context.HasUser(user)).willReturn(true);

    // Act
    userLogic.AddUser(user);

    // Assert
    verify(context, never()).AddUser(any());
    verify(context, never()).SaveChanges();
}
```
As a user of the `UserLogic` class, I can refer to this test to know exactly what the contract of `AddUser` describes in the case where the `User` has already been added to the context, which is what I'm looking for in unit tests.
You're going to find yourself writing a lot more tests, of much more complicated, interesting, and useful behavior, if you can do so simply. So the option that involves the real component is quite valid. Yes, it depends on another component - but everything depends on dozens of other components. If you mock something to within an inch of its life, you're probably depending on a lot of mocking features and test fixtures anyway.
Developers sometimes over-focus on the purity of their unit tests, or developing unit tests and unit tests only, without any module, integration, stress or other kinds of tests. All those forms are valid and useful, and they're all the proper responsibility of developers--not just Q/A or operations personnel further down the pipeline.
One approach I've used is to start with these higher-level runs, then use the data produced from them to construct the long-form, lowest-common-denominator expression of the test. E.g. when you dump the data structure from the `input` produced above, you can easily construct the kind of test that tests at the very lowest level. That way you get a nice mix: a handful of the very most basic, primitive tests (pure unit tests), but you haven't spent a week writing tests at that primitive level. That gives you the time needed to write many more, slightly less atomic tests using the `Parser` as a helper. End result: more tests, more coverage, more corner cases and other interesting cases, better code, and higher quality assurance.
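The workflow above can be sketched concretely. The `Parser` here is an invented stand-in (a key=value string parser) for whatever component the higher-level runs exercise; the point is the pattern of dumping a structure from a high-level run and freezing it as a low-level expectation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: Parser is an invented stand-in for the real component.
class Parser {
    Map<String, Integer> parse(String input) {
        Map<String, Integer> result = new LinkedHashMap<>();
        for (String pair : input.split(",")) {
            String[] kv = pair.split("=");
            result.put(kv[0].trim(), Integer.parseInt(kv[1].trim()));
        }
        return result;
    }
}

public class ParserTests {
    public static void main(String[] args) {
        Parser parser = new Parser();

        // Higher-level run: dump the structure it produces...
        Map<String, Integer> dumped = parser.parse("a=1, b=2");
        System.out.println(dumped);

        // ...then freeze that dump as the expectation of a primitive, low-level test.
        Map<String, Integer> expected = new LinkedHashMap<>();
        expected.put("a", 1);
        expected.put("b", 2);
        if (!dumped.equals(expected))
            throw new AssertionError("low-level expectation drifted: " + dumped);

        // And reuse the Parser as a helper to cheaply cover more cases.
        if (!parser.parse("x=0").containsKey("x"))
            throw new AssertionError("single-pair input should parse");
        System.out.println("all parser tests passed");
    }
}
```

A handful of frozen, primitive expectations anchor the contract; the helper-driven tests then multiply coverage cheaply on top of them.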