I feel there are two issues here. The first is that you didn't realize in advance that your original design might not be the best approach. Had you known this, you might have chosen to develop a quick throw-away prototype or two, to explore the possible design options and assess which is the most promising way forward. In prototyping you need not write production-quality code and need not unit test every nook and cranny (or at all), since your sole focus is on learning, not on polishing the code.
Now, realizing that you need prototyping and experiments rather than starting on production code right away is not always easy, and not even always possible. Armed with the knowledge just gained, you may be able to recognize the need for prototyping next time. Or you may not. But at least you now know that this option is to be considered, and that in itself is important knowledge.
The other issue is, IMHO, with your perception. We all make mistakes, and it is so easy to see in retrospect what we should have done differently. This is just the way we learn. Write off your investment in unit tests as the price of learning that prototyping may be important, and get over it. Just strive not to make the same mistake twice :-)
Starting with this concept:
1) Start with the behavior that you desire. Write a test for it. See test fail.
2) Write enough code to get the test to pass. See all tests pass.
3) Look for redundant / sloppy code -> refactor. See tests still pass. Goto 1
So on step 1, let's say that you want to create a new command (I'm guessing at how the command would work, so bear with me; also, I'll be a bit pragmatic rather than strictly TDD).
The new command is called MakeMyLunch, so you first create a test to instantiate it and get the command name:
@Test
public void instantiateMakeMyLunch() {
    ICommand command = new MakeMyLunchCommand();
    assertEquals("makeMyLunch", command.getCommandName());
}
This fails, forcing you to create the new command class and have it return its name (a purist would say this is two rounds of TDD, not one). So you create the class, have it implement the ICommand interface, and return the command name. Running all tests now shows they all pass, so you proceed to look for refactoring opportunities. Probably none yet.
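The production code for that step could be sketched like this. This is a hypothetical sketch: only getCommandName() is pinned down by the test so far, and the exact shape of ICommand is an assumption (execute() is included because the later tests need it).

```java
// Minimal sketch: just enough code to make the naming test above pass.
interface ICommand {
    String getCommandName();
    void execute();
}

class MakeMyLunchCommand implements ICommand {
    @Override
    public String getCommandName() {
        // Hard-coding the name is fine at this stage: TDD says write the
        // simplest code that makes the current failing test pass.
        return "makeMyLunch";
    }

    @Override
    public void execute() {
        // Deliberately empty; this gets a body in the next red/green cycle.
    }
}

public class NamingSketch {
    public static void main(String[] args) {
        ICommand command = new MakeMyLunchCommand();
        System.out.println(command.getCommandName()); // prints "makeMyLunch"
    }
}
```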
So next you want it to implement execute. You have to ask: how do I know that "MakeMyLunch" successfully "made my lunch"? What changes in the system because of this operation? Can I test for this?
Suppose it is easy to test for:
@Test
public void checkThatMakeMyLunchIsSuccessful() {
    ICommand command = new MakeMyLunchCommand();
    command.execute();
    assertTrue(Lunch.isReady());
}
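A minimal implementation satisfying that state-based test might look as follows. This is a hypothetical sketch: Lunch here is a tiny stand-in holding a "ready" flag, where real code would change whatever observable system state the command is responsible for.

```java
// Hypothetical stand-in for the observable state the command changes.
class Lunch {
    private static boolean ready = false;
    static void markReady() { ready = true; }
    static boolean isReady() { return ready; }
}

class MakeMyLunchCommand {
    void execute() {
        // Just enough behaviour to satisfy the state-based test above.
        Lunch.markReady();
    }
}

public class StateSketch {
    public static void main(String[] args) {
        new MakeMyLunchCommand().execute();
        System.out.println(Lunch.isReady()); // prints "true"
    }
}
```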
Other times this is more difficult, and what you really want to do is test the responsibilities of the subject under test (MakeMyLunchCommand). Perhaps the responsibility of MakeMyLunchCommand is to interact with a Fridge and a Microwave. To test that, you can use a mock Fridge and a mock Microwave (two sample mock frameworks are Mockito and NMock).
In which case you would do something like the following pseudo code:
@Test
public void checkThatMakeMyLunchIsSuccessful() {
    Fridge mockFridge = mock(Fridge.class);
    Microwave mockMicrowave = mock(Microwave.class);
    ICommand command = new MakeMyLunchCommand(mockFridge, mockMicrowave);
    command.execute();
    verify(mockFridge).removeFood();
    verify(mockMicrowave).turnOn();
}
The purist says test the responsibility of your class - its interactions with other classes (did the command open the fridge and turn on the microwave?).
The pragmatist says test for a group of classes and test for the outcome (is your lunch ready?).
Find the right balance that works for your system.
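If a mock framework is not available, the same interaction test can be hand-rolled with small fakes that record the calls they receive. This is a hypothetical sketch; Fridge, Microwave, and the method names are assumptions carried over from the pseudocode above.

```java
// Hand-rolled fakes recording interactions, as an alternative to a mock framework.
interface Fridge { void removeFood(); }
interface Microwave { void turnOn(); }

class RecordingFridge implements Fridge {
    boolean removeFoodCalled = false;
    @Override public void removeFood() { removeFoodCalled = true; }
}

class RecordingMicrowave implements Microwave {
    boolean turnOnCalled = false;
    @Override public void turnOn() { turnOnCalled = true; }
}

class MakeMyLunchCommand {
    private final Fridge fridge;
    private final Microwave microwave;

    MakeMyLunchCommand(Fridge fridge, Microwave microwave) {
        this.fridge = fridge;
        this.microwave = microwave;
    }

    void execute() {
        fridge.removeFood();
        microwave.turnOn();
    }
}

public class InteractionSketch {
    public static void main(String[] args) {
        RecordingFridge fridge = new RecordingFridge();
        RecordingMicrowave microwave = new RecordingMicrowave();
        new MakeMyLunchCommand(fridge, microwave).execute();
        // The "purist" check: did the command talk to its collaborators?
        System.out.println(fridge.removeFoodCalled && microwave.turnOnCalled); // prints "true"
    }
}
```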
(Note: consider that perhaps you arrived at your interface structure too early. Perhaps you can let this evolve as you write your unit tests and implementations, and in step #3 you "notice" the common interface opportunity).
Best Answer
If I understood you correctly, you cannot even write a reliable automated test for your "ghost image" example after you found a solution, since the only way of verifying the correct behaviour is to look at the screen and check that there is no ghost image any more. That gives me the impression your original headline asked the wrong question. The real question should be how to test this kind of UI issue at all.
And the answer is: for several kinds of UI issues, you don't. Sure, one can try to automate making the UI show the problem somehow, and try to implement something like a screenshot comparison, but this is often error-prone, brittle, and not cost-effective.
In particular, "test driving" UI design or UI improvements with automated tests written in advance is essentially impossible. You "drive" UI design by making an improvement, showing the result to a human (yourself, some testers, or a user), and asking for feedback.
So accept that TDD is not a silver bullet, and that for some kinds of issues manual testing still makes more sense than automated tests. If you have a systematic testing process, perhaps with some dedicated testers, the best thing you can do is add this case to their test plan.