It sounds as if your Acceptance tests have taken on the responsibilities of Integration tests, and that you are trying to "kill two birds with one stone," as the saying goes.
In the traditional Waterfall model, a single Acceptance test determines whether a single requirement has been met. If you are developing against a strict SRS document, you may find that even basic input validation is explicitly specified, and by the nature of Acceptance testing it has to be verified manually.
In the Agile model, however, an Acceptance Test verifies a single user story: a high-level test that confirms a stakeholder's business need has been met. In Agile, concerns as fine-grained as input validation are typically taken as understood, unless that validation is unique or specific to a business need.
Simply put, your example of verifying that a duplicate record is not entered into the database is far too low-level for a user story, and one could argue it is a waste of valuable time to elevate it to the importance of an Acceptance Test. Quality assurance, or the tester for that feature, should be able to verify that the high-level requirement has been met with no obvious defects.
Your tests need to be split up:
Automated Unit Tests
These are your lowest-level tests, typically written and run by the developer, verifying the functionality of a specific component or application layer in isolation from the rest of the application, and reproducible so they can be run many times.
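As a minimal sketch, here is what such a test might look like with Python's built-in unittest module; the validate_email helper is a hypothetical stand-in for whatever component you are testing:

```python
import unittest

def validate_email(address: str) -> bool:
    # Hypothetical helper under test; real validation rules would be stricter.
    return "@" in address and "." in address.split("@")[-1]

class ValidateEmailTest(unittest.TestCase):
    def test_accepts_well_formed_address(self):
        self.assertTrue(validate_email("jane.doe@example.com"))

    def test_rejects_address_without_at_sign(self):
        self.assertFalse(validate_email("jane.doe.example.com"))

if __name__ == "__main__":
    unittest.main()
```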
Integration Tests
These tests, like unit tests, can be automated, but they exercise all system layers together. They verify the integration of all of the application's dependencies while checking, for example, that creating a new Person record, or preventing a duplicate Person record from being created, works correctly.
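As an illustration only, here is a sketch of such a test in Python against an in-memory SQLite database; the persons table and its uniqueness constraint are assumptions standing in for your real schema and data-access layer:

```python
import sqlite3
import unittest

class PersonIntegrationTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database stands in for the real one in this sketch.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE persons (id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
        )

    def tearDown(self):
        self.conn.close()

    def test_creates_new_person(self):
        self.conn.execute("INSERT INTO persons (email) VALUES (?)", ("jane@example.com",))
        count = self.conn.execute("SELECT COUNT(*) FROM persons").fetchone()[0]
        self.assertEqual(count, 1)

    def test_rejects_duplicate_person(self):
        self.conn.execute("INSERT INTO persons (email) VALUES (?)", ("jane@example.com",))
        # The uniqueness constraint should stop the second insert.
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO persons (email) VALUES (?)", ("jane@example.com",))
```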
The Case for Integration Tests
One of the most valuable aspects of these tests is that there are various strategies for automating them. For example, each test can run inside a database transaction so that tests can run in parallel without conflicting, and that transaction can be rolled back when the test is over, reverting your database to a clean slate and making the tests reproducible.
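Here is a minimal sketch of that rollback strategy in Python with sqlite3; the app.db path and the persons table are hypothetical, and a real suite would use your actual database and driver:

```python
import sqlite3
import unittest

DB_PATH = "app.db"  # Hypothetical path to the shared test database.

class TransactionalTestCase(unittest.TestCase):
    def setUp(self):
        # With the default isolation level, sqlite3 opens a transaction
        # before the first write, so everything the test does stays pending.
        self.conn = sqlite3.connect(DB_PATH)

    def tearDown(self):
        # Roll back everything the test wrote, leaving the database untouched.
        self.conn.rollback()
        self.conn.close()

class PersonCreationTest(TransactionalTestCase):
    def test_insert_is_rolled_back_afterwards(self):
        self.conn.execute("INSERT INTO persons (email) VALUES (?)", ("temp@example.com",))
        count = self.conn.execute(
            "SELECT COUNT(*) FROM persons WHERE email = ?", ("temp@example.com",)
        ).fetchone()[0]
        self.assertEqual(count, 1)  # Visible inside this transaction only.
```

Whether such tests can safely run in parallel depends on your database's isolation and locking behaviour; the rollback itself is what keeps each run reproducible.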
I gather from your question that most of your "Acceptance Tests" aren't terribly interesting and would be better automated. This isn't to say that Acceptance testing shouldn't occur, but it should be done at a much higher level, where the issues you bring up no longer matter.
Best Answer
When every run of the test suite may yield a different result, the test is almost completely worthless: when the suite shows you a bug, there is a high chance you cannot reproduce it, and when you try to fix the bug, you cannot verify whether your fix works (or not).
So when you think you need some kind of random number generator to generate your test data, either make sure you always initialize the generator with the same seed, or persist the random test data to a file before feeding it into your test, so you can re-run the test with exactly the same data as the run before. This way, you can transform any non-deterministic test into a deterministic one.
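For example, a small Python helper along these lines (the file name, seed, and value range are arbitrary choices) covers both options, a fixed seed and persisted data that later runs reuse:

```python
import json
import os
import random

SEED = 42                      # Fixed seed: every run generates the same "random" inputs.
DATA_FILE = "test_inputs.json" # Hypothetical file used to persist the generated data.

def load_or_generate_test_data(count: int = 100) -> list:
    # Reuse previously generated data if it exists, so failures stay reproducible.
    if os.path.exists(DATA_FILE):
        with open(DATA_FILE) as f:
            return json.load(f)
    rng = random.Random(SEED)
    data = [rng.randint(-1_000_000, 1_000_000) for _ in range(count)]
    with open(DATA_FILE, "w") as f:
        json.dump(data, f)
    return data
```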
EDIT: Using a random number generator to pick test data is, IMHO, sometimes a sign of being too lazy about picking good test data. Instead of throwing 100,000 randomly chosen values at the code and hoping that this is enough to discover all serious bugs by chance, use your brain, pick 10 to 20 "interesting" cases, and use them in the test suite. This will not only result in better quality tests, but also in a much faster suite.
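A sketch of that idea with Python's unittest, where parse_age is a hypothetical function standing in for the code under test; the handful of boundary and error cases is chosen deliberately rather than generated at random:

```python
import unittest

def parse_age(text: str) -> int:
    # Hypothetical function under test: parse an age field with basic validation.
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

class ParseAgeTest(unittest.TestCase):
    # A few deliberately chosen boundary cases beat thousands of random inputs
    # that mostly exercise the same code path.
    VALID_CASES = {"0": 0, "1": 1, "150": 150, " 42 ": 42}
    INVALID_CASES = ["-1", "151", "", "abc", "4.5"]

    def test_valid_cases(self):
        for text, expected in self.VALID_CASES.items():
            with self.subTest(text=text):
                self.assertEqual(parse_age(text), expected)

    def test_invalid_cases(self):
        for text in self.INVALID_CASES:
            with self.subTest(text=text):
                with self.assertRaises(ValueError):
                    parse_age(text)
```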