It sounds as if your Acceptance tests have taken on properties of Integration tests, and that you are trying to "kill two birds with one stone," as the saying goes.
In the traditional Waterfall model, a single Acceptance test should determine whether a single requirement has been met. If you are developing against a strict SRS document, you may find that even basic input validation is explicitly defined, and by the nature of Acceptance testing it must be verified manually.
In the Agile model, however, an Acceptance test verifies a single user story: a high-level test that confirms a stakeholder's business need has been met. In Agile, fine-grained concerns like input validation are typically understood to be handled, unless that validation is unique or specific to a business need.
Simply put, in either case, your example of verifying that a duplicate record cannot be entered into the database is far too low level for a user story, and one could argue it is a waste of valuable time to elevate it to the importance of an Acceptance test. Quality assurance, or the tester for that feature, should be able to verify that the high-level requirement has been met with no obvious defects.
Your tests need to be split up:
Automated Unit Tests
Your lowest-level tests, typically written and run by the developer to verify the functionality of a specific component or application layer in isolation, independent of other areas of the application, and reproducible so they can be run many times; a minimal sketch follows.
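As a rough illustration (the validator and test names here are hypothetical, not taken from your project), a unit test exercises one piece of logic with no database or network involved:

```python
# Hypothetical validator under test; isolated from any database or UI layer.
def is_valid_email(address: str) -> bool:
    """Very rough email check, purely for illustration."""
    user, _, domain = address.partition("@")
    return bool(user) and "." in domain

def test_accepts_well_formed_address():
    assert is_valid_email("jane@example.com")

def test_rejects_address_without_domain():
    assert not is_valid_email("jane@")
```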
Integration Tests
These tests verify scenarios such as creating a new Person record across all system layers, exercising the integration of every application dependency and confirming that behavior like creating a new Person record, or preventing a duplicate Person record from being created, works end to end.
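Here is a minimal sketch of such a test, using Python's built-in sqlite3 as a stand-in for the real persistence layer (the schema and helper names are hypothetical):

```python
import sqlite3
import pytest

def make_db() -> sqlite3.Connection:
    # In-memory stand-in for the real database; a real suite would hit a
    # dedicated test instance so every layer is exercised.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE person (ssn TEXT PRIMARY KEY, name TEXT NOT NULL)")
    return conn

def create_person(conn: sqlite3.Connection, ssn: str, name: str) -> None:
    conn.execute("INSERT INTO person (ssn, name) VALUES (?, ?)", (ssn, name))

def test_duplicate_person_is_rejected():
    conn = make_db()
    create_person(conn, "123-45-6789", "Jane Doe")
    # A second insert with the same key must fail at the database layer.
    with pytest.raises(sqlite3.IntegrityError):
        create_person(conn, "123-45-6789", "Jane Doe")
```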
The Case for Integration Tests
One of the most valuable aspects of these kinds of tests is that there are strategies not only to automate them, but also to wrap each test in a database transaction so that tests can run in parallel without conflicting. When the test is over, the transaction is rolled back, reverting your database to a clean slate and making the tests reproducible.
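A sketch of that rollback pattern as a pytest fixture, again using sqlite3 (production suites usually do the same thing through their ORM's transaction API):

```python
import sqlite3
import pytest

@pytest.fixture
def db(tmp_path):
    conn = sqlite3.connect(tmp_path / "app.db")
    conn.execute("CREATE TABLE person (ssn TEXT PRIMARY KEY, name TEXT NOT NULL)")
    conn.commit()        # the committed schema is the clean baseline
    yield conn           # the test writes inside an open transaction
    conn.rollback()      # discard everything the test wrote
    conn.close()

def test_creates_person(db):
    db.execute("INSERT INTO person VALUES ('123-45-6789', 'Jane Doe')")
    assert db.execute("SELECT COUNT(*) FROM person").fetchone()[0] == 1
    # After the fixture rolls back, the table is empty again for the next test.
```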
I gather from your question that most of your "Acceptance Tests" aren't terribly interesting and would be better automated. This isn't to say that Acceptance testing shouldn't occur, but it should be done at a much higher level, where the issues you bring up no longer matter.
I worked at a place where the integration tests took 5 hours to run (across 30 machines). I refactored the codebase and wrote unit tests for the new stuff instead. The unit tests took 30 seconds (on 1 machine). Bugs went down too, and so did development time, since granular tests told us exactly what broke.
Long story short, you don't. Full integration tests grow super-linearly as your codebase grows (more code means more tests, and more code means each test takes longer to run, as there's more "integration" to work through). I would argue that anything in the "hours" range loses most of the benefits of continuous integration, since the feedback loop isn't there. Even an order-of-magnitude improvement isn't enough to get you somewhere good, and it's nowhere close to making the suite scalable.
So I would recommend cutting the integration tests down to the broadest, most vital smoke tests. They can then be run nightly, or at some other less-than-continuous interval, which removes much of the performance pressure. Unit tests, which only grow linearly as you add more code (the number of tests increases, but per-test runtime does not), are the way to go for scale; a sketch of the split follows.
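One common way to implement that split is with pytest markers (the marker name and tests here are hypothetical):

```python
import pytest

# Register the marker in pytest.ini so pytest doesn't warn about it:
#   [pytest]
#   markers =
#       smoke: broad end-to-end checks, run nightly

@pytest.mark.smoke
def test_end_to_end_order_flow():
    ...  # broad, vital smoke check kept out of the per-commit run

def test_list_maximum():
    assert max([3, 1, 2]) == 3  # fast, isolated unit test
```

The per-commit build runs `pytest -m "not smoke"`, and the nightly job runs `pytest -m smoke`.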
Best Answer
There are several approaches to attacking the time bloat of automated testing. Some of them, such as running multiple virtual environments with the tests in parallel, or reducing the scope of what is actually tested, may not be suitable for your workload, but they are worth mentioning as possible strategies for other readers in the future.
One of the areas where automated tests can spend a lot of time is getting the test run into the required "test state".
One suboptimal solution is to group the tests together so that you only need to get into that test state once, and then run all of the tests in that state as part of a group. This is flawed, because tests can end up passing only because of artifacts left over by an earlier sequence in the run; an illustration follows.
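To make the hazard concrete, here is a hypothetical pytest sketch of the grouping approach, where the second test passes only because of what the first one left behind:

```python
import pytest

# The "get into the state once" grouping: a module-scoped fixture builds
# the expensive state a single time for every test in the file.
@pytest.fixture(scope="module")
def imported_documents():
    return {"records": ["doc-1", "doc-2"]}  # stands in for a slow import

def test_import_adds_record(imported_documents):
    imported_documents["records"].append("doc-3")  # mutates shared state!
    assert len(imported_documents["records"]) == 3

def test_sees_leftover_artifact(imported_documents):
    # Passes only because the previous test left "doc-3" behind:
    # exactly the hidden coupling the grouping approach risks.
    assert "doc-3" in imported_documents["records"]
```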
A more complete solution is to find a mechanism to quickly recreate your starting environment. Have a look at the systemic effect of importing these documents: are they processed into different configuration files? Are they updating multiple database tables? Then find a more efficient mechanism for injecting this state information into the test environment. This approach can be problematic, though: if the format of the state information changes during product development, recreating it is a substantial job.
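One shape this can take (the paths and names here are hypothetical): run the slow import once, save the resulting database file as a "golden" snapshot, and have each test run copy it into place rather than re-importing:

```python
import shutil
import sqlite3
from pathlib import Path

GOLDEN_DB = Path("fixtures/imported_state.db")  # built once by the slow import

def fresh_database(workdir: Path) -> sqlite3.Connection:
    # Copying a file takes milliseconds; re-running the document import
    # could take minutes. The trade-off: the golden file must be rebuilt
    # whenever the state format changes.
    target = workdir / "app.db"
    shutil.copyfile(GOLDEN_DB, target)
    return sqlite3.connect(target)
```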
The solution I would probably use is a mechanism that allows rollback. It might be a virtual environment that can roll back to a snapshot, or perhaps LVM below the file system; at the start of each test, we just roll back to the starting point, discarding the artifacts from the previous run.
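As a rough sketch of the LVM variant (the volume and mount names are hypothetical, the mount point is assumed to be in /etc/fstab, and snapshot-merge details vary by setup), the test harness would roll the data volume back before each run:

```python
import subprocess

def reset_test_environment() -> None:
    # Unmount so the origin volume is not in use, then merge the snapshot
    # back into it, restoring the pre-test contents.
    subprocess.run(["umount", "/var/lib/testdb"], check=True)
    subprocess.run(["lvconvert", "--merge", "vg0/testdb_snap"], check=True)
    subprocess.run(["mount", "/var/lib/testdb"], check=True)
    # Take a fresh snapshot so the next run can roll back the same way.
    subprocess.run(
        ["lvcreate", "--snapshot", "--size", "1G",
         "--name", "testdb_snap", "vg0/testdb"],
        check=True,
    )
```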