Then don't write it.
You have already come to a point where you
- Have the simplest code that passes all tests.
- Are unable to think of edge cases that the code doesn't already cover.
Time to stop worrying about that method.
If you're writing tests for edge cases that you know are already covered, then you're worrying more about the testing part of TDD than the design part. Unfortunately, that's not a very productive use of your time.
Why do you need to know that your method works with negatives? How far do you take that? Do you then test fractions? Do you test odd and even numbers? Each factor of ten? Every possible value of double?
If not then why test negatives? They're not going to act any differently from positives.
You're going to find yourself writing a lot more tests, of much more complicated, interesting, and useful behavior, if you can do so simply. So the option that involves
var input = new Parser().ParseStatement("x = 2 + 3 * a");
is quite valid. It does depend on another component. But everything depends on dozens of other components. If you mock something to within an inch of its life, you're probably depending on a lot of mocking features and test fixtures.
Developers sometimes over-focus on the purity of their unit tests, or on writing unit tests and unit tests only, without any module, integration, stress, or other kinds of tests. All of those test types are valid and useful, and they're all the proper responsibility of developers, not just QA or operations personnel further down the pipeline.
One approach I've used is to start with these higher-level runs, then use the data they produce to construct the long-form, lowest-common-denominator expression of the test. For example, once you dump the data structure produced by the input above, you can easily construct the:
var input = new AssignStatement(
    new Variable("x"),
    new BinaryExpression(
        new Constant(2),
        BinaryOperator.Plus,
        new BinaryExpression(new Constant(3), BinaryOperator.Multiply, new Variable("a"))));
kind of test that tests at the very lowest level. That way you get a nice mix: a handful of the most basic, primitive tests (pure unit tests), without having spent a week writing tests at that primitive level. That frees up the time needed to write many more, slightly less atomic tests that use the Parser as a helper. End result: more tests, more coverage, more corner cases and other interesting cases, better code, and higher quality assurance.
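What makes the two styles interchangeable is that the AST nodes compare by value, so a hand-built expected tree can be asserted equal to whatever the Parser returns. A minimal, self-contained sketch of that idea, using C# records as hypothetical stand-ins for the AST types above (in a real test, the second tree would come from `new Parser().ParseStatement("x = 2 + 3 * a")` rather than being built by hand):

```csharp
using System;

var expected = new AssignStatement(
    new Variable("x"),
    new BinaryExpression(
        new Constant(2),
        BinaryOperator.Plus,
        new BinaryExpression(new Constant(3), BinaryOperator.Multiply, new Variable("a"))));

// Stand-in for the Parser's output; a real test would call the Parser here.
var parsed = new AssignStatement(
    new Variable("x"),
    new BinaryExpression(
        new Constant(2),
        BinaryOperator.Plus,
        new BinaryExpression(new Constant(3), BinaryOperator.Multiply, new Variable("a"))));

// Records compare structurally, so two independently built trees are equal.
Console.WriteLine(expected == parsed ? "trees match" : "trees differ");

// Hypothetical versions of the AST types used in the answer; records give
// value equality for free, which is what makes the assertion style work.
abstract record Expression;
sealed record Constant(int Value) : Expression;
sealed record Variable(string Name) : Expression;
enum BinaryOperator { Plus, Multiply }
sealed record BinaryExpression(Expression Left, BinaryOperator Op, Expression Right) : Expression;
sealed record AssignStatement(Variable Target, Expression Value);
```

If the real AST classes override `Equals` (or are records), a single `Assert.Equal(expected, parser.ParseStatement(...))` covers the whole tree in one line.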
Best Answer
There is no point in making sure that every defect in your system trips exactly one test.
A test suite has one job: verifying that there are no known defects. If there is a defect, it doesn't matter whether one test fails or ten. If you get used to your test suite failing, or try to gauge how "bad" your program is by counting failing tests, you're not using regression testing the right way. All tests should pass before you publish code.
The only valid reason for skipping tests is when they cover incomplete functionality and take an inordinate amount of time that you could put to better use implementing the thing they're supposed to test. (That's only an issue if you don't practice strict test-driven development, but that is a valid choice, after all.)
Otherwise, don't bother trying to make your test suite into an indicator telling you precisely what's wrong. It will never be exact, and it's not supposed to be. It's supposed to protect you from making the same mistake twice, that's all.