I'm going to use Python-like pseudocode based around your test cases to (hopefully) help explain this answer a little, rather than just expositing. But in short: This answer is basically just an example of small, single-function-level dependency injection / dependency inversion, instead of applying the principle to an entire class/object.
Right now, it sounds like your tests are structured something like this:

    def test_3_minute_confirm():
        req = make_request()
        sleep(3 * 60)  # Baaaad
        assertTrue(confirm_request(req))
With implementations something along the lines of:
    def make_request():
        return {'request_time': datetime.now()}

    def confirm_request(req):
        return req['request_time'] >= (datetime.now() - timedelta(minutes=5))
Instead, try forgetting about "now" and tell both functions what time to use. If it's most of the time going to be "now", you can do one of two main things to keep the calling code simple:
- Make time an optional argument that defaults to "now". In Python (and PHP, now that I reread the question) this is simple - default "time" to None (null in PHP) and set it to "now" if no value was given.
- In other languages without default arguments (or in PHP if it becomes unwieldy), it might be easier to pull the guts of the functions out into a helper that does get tested.
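The first option, sketched out with the same hypothetical functions from above (the names and the 5-minute window are just this answer's running example), might look something like:

```python
from datetime import datetime, timedelta

def make_request(when=None):
    # Default to "now" only when the caller didn't supply a time.
    if when is None:
        when = datetime.now()
    return {'request_time': when}

def confirm_request(req, check_time=None):
    if check_time is None:
        check_time = datetime.now()
    # A request is still confirmable within a 5-minute window.
    return req['request_time'] >= (check_time - timedelta(minutes=5))
```

Production code keeps calling `make_request()` and `confirm_request(req)` with no time argument, while tests pass explicit times.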
For example, the two functions above rewritten in the second version might look like this:
    def make_request():
        return make_request_at(datetime.now())

    def make_request_at(when):
        return {'request_time': when}

    def confirm_request(req):
        return confirm_request_at(req, datetime.now())

    def confirm_request_at(req, check_time):
        return req['request_time'] >= (check_time - timedelta(minutes=5))
This way, existing parts of the code don't need to be modified to pass in "now", while the parts you actually want to test can be tested much more easily:
    def test_3_minute_confirm():
        current_time = datetime.now()
        later_time = current_time + timedelta(minutes=3)
        req = make_request_at(current_time)
        assertTrue(confirm_request_at(req, later_time))
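The inverse case is just as easy to cover now. Here's a self-contained sketch (repeating the helpers above so it runs standalone, and using a plain assert in place of the assertFalse helper) checking that a request older than the 5-minute window is rejected:

```python
from datetime import datetime, timedelta

def make_request_at(when):
    return {'request_time': when}

def confirm_request_at(req, check_time):
    return req['request_time'] >= (check_time - timedelta(minutes=5))

def test_6_minute_confirm_fails():
    current_time = datetime.now()
    later_time = current_time + timedelta(minutes=6)  # past the 5-minute window
    req = make_request_at(current_time)
    assert not confirm_request_at(req, later_time)

test_6_minute_confirm_fails()
```

No sleeping, no mocking, no clock manipulation - just values in, values out.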
It also creates additional flexibility for the future, so less needs to change if the important parts are ever reused. Two examples spring to mind: a queue where requests might be created moments too late but should still record the correct time, or data-sanity checks that need to confirm past requests.
Technically, thinking ahead like this might violate YAGNI, but we're not actually building for those theoretical cases - the change is just for easier testing, and merely happens to support them.
> Is there any reason you would recommend one of these solutions over another?
Personally, I try to avoid any sort of mocking as much as possible, and prefer pure functions as above. This way, the code being tested is identical to the code being run outside of tests, instead of having important parts replaced with mocks.
We've had one or two regressions go unnoticed for a good month because that particular part of the code was rarely exercised in development (we weren't actively working on it), it was released slightly broken (so there were no bug reports), and the mocks in the tests were hiding a bug in the interaction between helper functions. A few slight modifications made the mocks unnecessary, let a whole lot more be tested easily, and fixed the tests around that regression.
> Change the system date and time for the test, using exec(). This can work out if you are not afraid to mess up with other things on the server.
This is the one that should be avoided at all costs for unit testing. There's really no way of knowing what kinds of inconsistencies or race conditions might be introduced to cause intermittent failures (for unrelated applications, or co-workers on the same development box), and a failed/errored test might not set the clock back correctly.
Best Answer
No one said unit tests all have to be run on the same platform - but then, no one said you could reach 100% test coverage either.
As a first step, #ifdef out the code, preferably factoring it into a platform-specific function, and write a suitable implementation of that function for x86. However, I don't think it is appropriate to select the code to be compiled based on "unit testing" vs. "not unit testing"; rather, simply select on the architecture: x86 or SPARC.

Now continue working on your tests and on your program. Simply document the "hole" in testing you have created: the function, while unit-tested on x86, is not unit-tested on SPARC. Just don't stall your progress because you haven't yet figured out how to test this particular piece of code.
Assuming you wrote this piece of assembler for a good reason (such as a very frequently used routine called in tight loops), you should now feel uncomfortable. The good news is that unless you were supposed to ship yesterday, you have some time to patch the hole. Here are some ideas:
1) You are probably doing integration testing (or field tests, or anything higher-level than unit tests). If the observed behavior is as per spec, then your assembler code is probably fine. At the end of the day, it's all about the testing conditions: if your integration testing covers most of the operational conditions, then you are OK. Granted, you won't get the same feedback as from unit tests (the loop will be longer, and a failure will not be as precise as "expected this, got that", so it will be harder to investigate), but still: your code is tested.
2) You might have colleagues around who would be happy to review this specific part of the code.
3) You could try using an emulator (QEMU seems to support SPARC architectures). I have no idea whether this would pay off, but why not give it a shot?
4) Are you 100% sure you can't get your hands on the target machine? How much would a defect in this code cost? If it's more than a few days of work, you can probably convince your manager to buy an old Sun workstation to run tests on. Testing always looks too expensive at first sight, but it's like insurance, or an investment. Really, hardware is cheap compared to man-hours, lost business or, even worse, court trials.
To sum up: either you feel you must unit test your code (and in that case, make it happen by ruling out the "impossible for me" part), or you rely on other forms of testing.