However, it implies that large parts of the application's code are not covered by tests. Why? Because if you have units (and you need a lot of units to get your unit tests right) you need code that wires the units together. This code, IMHO, will get complicated enough that it deserves to be tested at a more granular level than integration tests, even while it probably falls into the "dirty hybrids" category:
Your assumption is faulty because you are neglecting a layer of testing - acceptance testing.
Your unit tests cover individual units - the classes and the methods that compose them. This enables you to test methods and classes in isolation to ensure that they behave as expected. Above this lie your integration tests, which test the collaboration between classes and ensure that larger modules (packages, and even inter-package collaboration) work as expected. Finally, your acceptance tests are used to verify and validate your entire system, as assembled, against the user requirements.
Assuming that you have the appropriate unit and integration tests that correspond to requirements and well-defined acceptance criteria and acceptance test plans, then everything in your system is tested. Other aspects of testing - smoke tests, regression tests, and so forth - are simply an appropriate subsampling of the unit, integration, and acceptance tests.
"TDD is a robust way of designing software components ('units') interactively so that their behaviour is specified through unit tests."
That particular quote is also missing something. As I was taught, TDD isn't just about unit tests, but about developing all tests first. That includes not only unit tests, but the necessary acceptance and integration tests as well.
It's comparing apples and oranges.
Integration tests, acceptance tests, unit tests, behaviour tests - they are all tests, and they will all help you improve your code, but they are also quite different.
I'm going to go over each of these kinds of tests as I see them and hopefully explain why you need a blend of all of them:
Integration tests:
Simply put, these test that the different component parts of your system integrate correctly - for example, you might simulate a web service request and check that the result comes back. I would generally use real(ish) static data and mocked dependencies to ensure that the result can be consistently verified.
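Here's a minimal sketch of that idea, assuming a Flask-style application factory that lets you inject a mocked data layer (`create_app`, the `/customers` route, and `all_customers` are all made-up names for illustration):

```python
import unittest
from unittest.mock import Mock

from myapp import create_app  # hypothetical application factory


class CustomerEndpointIntegrationTest(unittest.TestCase):
    def test_get_customers_round_trip(self):
        # Exercise a real request/response cycle through the app, but mock
        # the data layer with static data so the result is always verifiable.
        repo = Mock()
        repo.all_customers.return_value = [{"id": 1, "name": "Ada"}]

        app = create_app(customer_repo=repo)
        client = app.test_client()  # Flask-style test client

        response = client.get("/customers")

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.get_json(), [{"id": 1, "name": "Ada"}])
```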
Acceptance tests:
An acceptance test should directly correlate to a business use case. It can be huge ("trades are submitted correctly") or tiny ("filter successfully filters a list") - it doesn't matter; what matters is that it is explicitly tied to a specific user requirement. I like to focus on these for test-driven development because it means we have a good reference manual mapping tests to user stories for dev and QA to verify.
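For instance, here is a sketch of the tiny "filter successfully filters a list" case, tied directly to a hypothetical user story (`filter_products` and its module are assumed names):

```python
# User story: "As a shopper, I can filter the product list by category."
from myapp.catalog import filter_products  # hypothetical function under test


def test_filter_products_by_category():
    products = [
        {"name": "keyboard", "category": "hardware"},
        {"name": "editor", "category": "software"},
    ]

    result = filter_products(products, category="hardware")

    # The acceptance criterion: only matching products remain.
    assert [p["name"] for p in result] == ["keyboard"]
```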
Unit tests:
For small, discrete units of functionality that may or may not make up an individual user story by themselves. For example, a user story which says that we retrieve all customers when we access a specific web page can be an acceptance test (simulate hitting the web page and check the response), but it may also imply several unit tests (verify that security permissions are checked, verify that the database is queried correctly, verify that any code limiting the number of results executes correctly) - these are all "unit tests" that aren't a complete acceptance test.
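A sketch of two of those unit tests, assuming a hypothetical `CustomerPage` class that takes its security checker and database as injected collaborators:

```python
import unittest
from unittest.mock import Mock

from myapp.pages import CustomerPage  # hypothetical class under test


class CustomerPageUnitTests(unittest.TestCase):
    def test_security_permissions_are_checked(self):
        # A user without view permission should be rejected outright.
        security = Mock()
        security.can_view.return_value = False

        page = CustomerPage(security=security, db=Mock())

        with self.assertRaises(PermissionError):
            page.customers(user="mallory")

    def test_result_limit_is_applied(self):
        # The database returns 500 rows; the page should cap the output.
        db = Mock()
        db.query.return_value = [{"id": n} for n in range(500)]

        page = CustomerPage(security=Mock(), db=db)

        self.assertEqual(len(page.customers(user="alice", limit=50)), 50)
```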
Behaviour tests:
These define what the flow of the application should be for a specific input. For example, "when a connection cannot be established, verify that the system retries the connection." Again, this is unlikely to be a full acceptance test, but it still allows you to verify something useful.
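A sketch of that retry example, using a mock whose side effects fail twice before succeeding (`ReconnectingClient` and its `open` method are assumed names):

```python
from unittest.mock import Mock

from myapp.net import ReconnectingClient  # hypothetical class under test


def test_retries_when_connection_cannot_be_established():
    transport = Mock()
    # The first two connection attempts raise; the third succeeds.
    transport.connect.side_effect = [ConnectionError, ConnectionError, "session"]

    client = ReconnectingClient(transport, max_retries=3)

    assert client.open() == "session"
    assert transport.connect.call_count == 3
```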
These definitions are all my own, drawn from much experience of writing tests; I don't like to focus on the textbook approaches - rather, focus on what gives your tests value.
Best Answer
If your "whole" script is written as a set of functions including "main" function and is ending like
then all you need to do is actually call that main function from your test. If your script parses some command line parameters, you can provide them as well.
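For example, assuming your `main` accepts an argv-style list and returns an exit code (the module and argument names here are illustrative):

```python
from myscript import main  # hypothetical module containing your script


def test_main_runs_with_arguments():
    # Call the entry point directly instead of spawning a new interpreter.
    exit_code = main(["--verbose", "input.txt"])
    assert exit_code == 0
```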
As for REST calls - you use mocks. You either mock the actual call (if it's enough for you to just check that it was made correctly) or you set up a mock REST server and point your app to it.
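A sketch of the first option, patching the HTTP call itself with `unittest.mock` (the module path and `fetch_orders` are assumptions; this presumes `myapp.client` imports and uses `requests`):

```python
from unittest.mock import patch

from myapp import client  # hypothetical module that performs REST calls


@patch("myapp.client.requests.get")
def test_fetch_orders_parses_response(mock_get):
    # Stub out the network: the fake response carries static JSON.
    mock_get.return_value.status_code = 200
    mock_get.return_value.json.return_value = {"orders": []}

    result = client.fetch_orders()  # assumed to return the parsed order list

    assert result == []
    mock_get.assert_called_once()
```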
For example, my app has OpenID authentication, so I created my own simple OpenID server and pointed my application to use it as the provider. That is rather slow, so I use it only when I run scenarios involving authentication. In all other cases I just patch the authentication method (I'm using Tornado, and I patch `get_current_user` to return a predefined user ID).
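A sketch of that patching approach in a Tornado test, assuming a handler class named `MainHandler` and an app factory `make_app` (both hypothetical); `get_current_user` is Tornado's real `RequestHandler` hook:

```python
from unittest.mock import patch

from tornado.testing import AsyncHTTPTestCase

from myapp.app import make_app, MainHandler  # hypothetical names


class AuthenticatedPageTest(AsyncHTTPTestCase):
    def get_app(self):
        return make_app()

    def test_page_loads_for_patched_user(self):
        # Skip the slow OpenID round trip: pretend a user is logged in.
        with patch.object(MainHandler, "get_current_user",
                          return_value="user-42"):
            response = self.fetch("/")

        self.assertEqual(response.code, 200)
```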