A complete answer to this question would be very long. I will try to mention the main points.
To separate concerns, you may be looking at tests to:
A - Validate the database design.
B - Validate that the program(s) are interacting correctly with the database.
The database design validation should be performed by the person(s) who designed the database. The developers (while unit testing) should be concerned more with part (B). The primary difference I see between the two types of tests is the tools used: for (A) you'd use an environment independent of the project code, whereas for (B) you'd of course use the project's code. In the following text, I will mix both.
To answer your specific questions:
Column domain value rules
Each column has an associated data type. Each column must be validated against the business rules to prove that it represents the correct data type. Problems may arise if the column's data type is not compatible with the business requirements, or if the code uses a data type different from the one defined in the database.
For example:
If the column is defined as a small integer, you should not be able to store text in it. This is an important test, especially when the column is optional, since a mismatch could go unnoticed until someone actually enters some data.
Can you store a negative value in the column when the business rules call for it, and is one rejected when they forbid it?
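As an illustration, here is a minimal sketch of such tests using Python's built-in sqlite3 module and unittest. The person table, its columns, and the CHECK constraint are hypothetical stand-ins for your real schema; in a server RDBMS the data type itself would reject the bad value, whereas SQLite needs the CHECK to simulate that behaviour.

```python
import sqlite3
import unittest

class ColumnDomainRuleTests(unittest.TestCase):
    """Hypothetical checks for the column domain value rules described above."""

    def setUp(self):
        # An in-memory database stands in for the real schema under test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("""
            CREATE TABLE person (
                id   INTEGER PRIMARY KEY,
                name TEXT NOT NULL,
                age  INTEGER CHECK (typeof(age) = 'integer' AND age >= 0)
            )
        """)

    def tearDown(self):
        self.conn.close()

    def test_rejects_text_in_numeric_column(self):
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute(
                "INSERT INTO person (name, age) VALUES (?, ?)", ("Alice", "not a number"))

    def test_rejects_negative_value(self):
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute(
                "INSERT INTO person (name, age) VALUES (?, ?)", ("Bob", -1))

if __name__ == "__main__":
    unittest.main()
```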
Column default value rules
Some columns have a default value specification in the DDL (Data Definition Language): if an insert does not provide a value, the database assumes the default. This can be tested by not passing the value and observing what the database actually stores. This test may also include checking nullable columns. It rarely requires a dedicated test, since it can usually be verified from the DDL, unless a unique index is built on the column.
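A sketch of that kind of test, again with Python's sqlite3 and a hypothetical account table whose status column carries a DEFAULT and whose note column is nullable:

```python
import sqlite3
import unittest

class DefaultValueRuleTests(unittest.TestCase):
    """Hypothetical check that omitted columns fall back to their DDL defaults."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("""
            CREATE TABLE account (
                id     INTEGER PRIMARY KEY,
                email  TEXT NOT NULL,
                status TEXT NOT NULL DEFAULT 'active',  -- default under test
                note   TEXT                             -- nullable column
            )
        """)

    def test_status_defaults_to_active_when_not_supplied(self):
        # Insert without status or note and observe what the database stored.
        self.conn.execute("INSERT INTO account (email) VALUES (?)", ("a@example.com",))
        status, note = self.conn.execute(
            "SELECT status, note FROM account WHERE email = ?", ("a@example.com",)).fetchone()
        self.assertEqual(status, "active")  # DEFAULT was applied
        self.assertIsNone(note)             # nullable column stayed NULL

if __name__ == "__main__":
    unittest.main()
```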
Value existence rules
As I understand this, you verify that data inserted or updated shows up as expected in the database.
Row value rules
I am not clear on what this one means exactly.
Size rules
Each column has a size in the database based on how it is defined in the DDL. You want to make sure that any value that fits the requirements (whether entered from the GUI or produced as the output of a computation) is stored correctly in the column. For example, a small integer data type does not allow you to store a value of 5 billion. Likewise, a name defined as VARCHAR2(30) will not accommodate 40 characters, so the business rules must be very clear here, especially when the column is used for aggregating data. You want to test what happens in such situations.
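Here is a minimal sketch of size-rule tests. Note that SQLite does not enforce declared lengths, so the hypothetical customer table uses CHECK constraints to stand in for the enforcement a server RDBMS (Oracle's VARCHAR2(30), a SMALLINT column, and so on) would provide on its own:

```python
import sqlite3
import unittest

class SizeRuleTests(unittest.TestCase):
    """Hypothetical size checks using CHECK constraints as stand-ins."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("""
            CREATE TABLE customer (
                id    INTEGER PRIMARY KEY,
                name  TEXT    CHECK (length(name) <= 30),             -- mimics VARCHAR2(30)
                score INTEGER CHECK (score BETWEEN -32768 AND 32767)  -- mimics SMALLINT
            )
        """)

    def test_name_longer_than_30_characters_is_rejected(self):
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO customer (name) VALUES (?)", ("x" * 40,))

    def test_value_outside_smallint_range_is_rejected(self):
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute(
                "INSERT INTO customer (name, score) VALUES (?, ?)", ("ok", 5_000_000_000))

if __name__ == "__main__":
    unittest.main()
```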
guidelines on how/where to begin
One way of doing this is to come up with a testing plan. You want to ensure you are using the correct version of the specifications (such as requirements documents and Use Cases) and of the software. You also need to coordinate these tests with any tests done during Unit Testing; you may find duplicate tests that you don't need to perform again. You want to take a copy of the database before testing so that you can repeat a specific test if you ever need to; the DBA may help you with this. You also need to check with your team how they document these tests and verify the testing scope with them. You can break the database into logical parts and test each logical part separately. The testing process could begin by studying the DDL of the database and verifying that the columns are defined as required by the business. You should be using the application's software, not any other tool, to perform the majority of the tests. For example, question the following:
Is the column supposed to be of the defined type (there is no point in defining a Name column as Int)?
Is the size compatible with business requirements?
Are all the columns in business requirements found in the database?
Are nullable columns really meant to be optional?
etc.
Next, you could design test cases to test the above. You could use the GUI to do most of the tests.
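For the DDL-verification part, a small script can compare the deployed schema against the definitions taken from the requirements. The sketch below assumes SQLite (PRAGMA table_info); for another engine you would query information_schema instead. The person table and the EXPECTED_COLUMNS mapping are hypothetical placeholders:

```python
import sqlite3
import unittest

# Expected column definitions, hypothetically taken from the approved DDL /
# requirements: column name -> (declared type, NOT NULL expected)
EXPECTED_COLUMNS = {
    "id":    ("INTEGER", True),
    "name":  ("TEXT",    True),
    "email": ("TEXT",    False),
}

class SchemaDefinitionTests(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("""
            CREATE TABLE person (
                id    INTEGER PRIMARY KEY NOT NULL,
                name  TEXT NOT NULL,
                email TEXT
            )
        """)

    def test_columns_match_requirements(self):
        rows = self.conn.execute("PRAGMA table_info(person)").fetchall()
        # table_info rows are (cid, name, type, notnull, default, pk)
        actual = {name: (col_type, bool(notnull)) for _, name, col_type, notnull, *_ in rows}
        self.assertEqual(actual, EXPECTED_COLUMNS)

if __name__ == "__main__":
    unittest.main()
```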
There are other important database tests that you have not mentioned. Those deal with:
1 - Testing that the primary keys are really unique from a business perspective.
2 - Testing that unique indexes (other than the PK) are really unique from a business perspective.
3 - Testing that Foreign Key constraints work.
4 - Testing what happens when a row is deleted and its effect on related rows.
5 - Other tests regarding special database constructs, such as CHECK constraints and triggers, if they exist.
6 - Correct table normalization and that normalized columns hold correct values.
The above is not a complete list but it should get you started.
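To make items 1 to 4 concrete, here is a hedged sketch using Python's sqlite3. The customer/orders tables, the UNIQUE business key, and the ON DELETE CASCADE rule are hypothetical choices made only for illustration (note that SQLite requires PRAGMA foreign_keys = ON before it enforces foreign keys):

```python
import sqlite3
import unittest

class ConstraintTests(unittest.TestCase):
    """Hypothetical checks for key uniqueness, foreign keys and delete effects."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("PRAGMA foreign_keys = ON")   # off by default in SQLite
        self.conn.executescript("""
            CREATE TABLE customer (
                id    INTEGER PRIMARY KEY,
                email TEXT NOT NULL UNIQUE          -- business key under test
            );
            CREATE TABLE orders (
                id          INTEGER PRIMARY KEY,
                customer_id INTEGER NOT NULL
                            REFERENCES customer(id) ON DELETE CASCADE
            );
        """)
        self.conn.execute("INSERT INTO customer (id, email) VALUES (1, 'a@example.com')")

    def test_duplicate_business_key_is_rejected(self):
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO customer (id, email) VALUES (2, 'a@example.com')")

    def test_orphan_child_row_is_rejected(self):
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO orders (customer_id) VALUES (999)")

    def test_deleting_parent_removes_related_rows(self):
        self.conn.execute("INSERT INTO orders (customer_id) VALUES (1)")
        self.conn.execute("DELETE FROM customer WHERE id = 1")
        remaining = self.conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(remaining, 0)

if __name__ == "__main__":
    unittest.main()
```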
Performance testing seems to be popular
It's popular in theory. In practice, I've seen only a handful of automated performance tests done and they were done haphazardly.
How would I test that Build 1.4 is a pass/fail for this test?
You define pass/fail criteria. If you need the tests to be consistent, define some failure criteria and measure against them. If you need the tests to be at least X fast (where X is ideally a figure gathered from usability testing), then test for that. If the criterion is "not more than X worse than the last build" (not recommended for automated tests, since it requires memory of past builds), then test for that.
It is trivial to write up helper code to deal with measurements and/or statistics. But you still need to define the criteria, just like any other test. It's like asking to automatically define pass/fail criteria for business rules - computers do a poor job at that since all they know about business rules is what you tell them.
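As a sketch of what that helper code might look like: the budget number below is a made-up assumption standing in for a threshold you would derive from your own requirements or usability data, and the table and query are placeholders.

```python
import sqlite3
import statistics
import time
import unittest

# Hypothetical pass/fail budget: the query's median time must stay below this.
# In a real suite this number comes from requirements or usability testing.
MAX_MEDIAN_SECONDS = 0.05

class QueryPerformanceTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, payload TEXT)")
        self.conn.executemany(
            "INSERT INTO event (payload) VALUES (?)", (("x" * 100,) for _ in range(10_000)))

    def test_lookup_stays_within_budget(self):
        samples = []
        for _ in range(20):              # repeat and take the median to smooth out noise
            start = time.perf_counter()
            self.conn.execute("SELECT COUNT(*) FROM event WHERE payload LIKE 'x%'").fetchone()
            samples.append(time.perf_counter() - start)
        self.assertLess(statistics.median(samples), MAX_MEDIAN_SECONDS)

if __name__ == "__main__":
    unittest.main()
```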
Best Answer
Personally, it sounds as if your Acceptance tests have taken on properties of Integration tests and you are trying to "kill two birds with one stone", as the saying goes.
In the traditional Waterfall model, a single Acceptance test should determine whether a single requirement has been met. If you are developing against a strict SRS document, you may find that even basic input validation is explicitly specified, and by the nature of Acceptance testing it has to be verified manually.
In the Agile model, however, an Acceptance Test verifies a single user story: a high-level test confirming a high-level stakeholder business need. Typically, in the Agile model, fine-grained concerns such as input validation are taken as understood, unless that validation is unique or specific to a business need.
Simply put, in any case, verifying that a duplicate record cannot be entered into the database is far too low-level for a user story, and one could argue it is a waste of valuable time to elevate it to the importance of an Acceptance Test. Quality assurance, or the tester for that feature, should be able to verify that the high-level requirement has been met with no obvious defects.
Your tests need to be split up:
Automated Unit Tests
These are your lowest-level tests, typically written and run by the developer to verify the functionality of a specific component or application layer in isolation, independent of other areas of the application, and reproducible so they can be run many times.
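For instance, a unit test of the business layer might replace the database with an in-memory fake so the rule under test (here, rejecting a duplicate person) is exercised in isolation. All of the names below (PersonService, InMemoryPersonRepository, and so on) are hypothetical; the point is only the shape of the test:

```python
import unittest

class DuplicatePersonError(Exception):
    pass

class InMemoryPersonRepository:
    """Test double standing in for the real data-access layer."""
    def __init__(self):
        self._people = {}

    def exists(self, email):
        return email in self._people

    def add(self, email, name):
        self._people[email] = name

class PersonService:
    """Hypothetical business-layer component under test."""
    def __init__(self, repository):
        self._repository = repository

    def register(self, email, name):
        if self._repository.exists(email):
            raise DuplicatePersonError(email)
        self._repository.add(email, name)

class PersonServiceUnitTest(unittest.TestCase):
    def test_rejects_duplicate_registration(self):
        service = PersonService(InMemoryPersonRepository())
        service.register("a@example.com", "Alice")
        with self.assertRaises(DuplicatePersonError):
            service.register("a@example.com", "Alice again")

if __name__ == "__main__":
    unittest.main()
```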
Integration Tests
Like unit tests, these tests can cover scenarios such as creating a new Person record, but across all system layers, exercising the integration of all application dependencies and verifying that creating a new Person record, or preventing a duplicate Person record from being created, works correctly.
The Case for Integration Tests
One of the most valuable aspects of these kinds of tests is that there are various strategies not only to automate them, but also to wrap each test in a database transaction so tests can run in parallel without conflicting; the transaction can then be rolled back when the test is over, reverting your database to a clean slate and making the tests reproducible.
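A minimal sketch of that transaction-per-test pattern, assuming Python's sqlite3 and a hypothetical person table (a real suite would connect to the actual integration database rather than an in-memory one):

```python
import sqlite3
import unittest

class PersonIntegrationTest(unittest.TestCase):
    """Each test runs inside a transaction that is rolled back afterwards,
    returning the database to a clean slate and keeping the tests repeatable."""

    def setUp(self):
        # In-memory database keeps the sketch self-contained; a real test
        # would connect to the shared integration database instead.
        # isolation_level=None lets us manage the transaction explicitly.
        self.conn = sqlite3.connect(":memory:", isolation_level=None)
        self.conn.execute("CREATE TABLE person (email TEXT PRIMARY KEY, name TEXT NOT NULL)")
        self.conn.execute("BEGIN")           # the transaction the test runs in

    def tearDown(self):
        self.conn.execute("ROLLBACK")        # undo everything the test changed
        self.conn.close()

    def test_creates_new_person_record(self):
        self.conn.execute("INSERT INTO person VALUES (?, ?)", ("a@example.com", "Alice"))
        count = self.conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]
        self.assertEqual(count, 1)

    def test_prevents_duplicate_person_record(self):
        self.conn.execute("INSERT INTO person VALUES (?, ?)", ("a@example.com", "Alice"))
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO person VALUES (?, ?)", ("a@example.com", "Alice"))

if __name__ == "__main__":
    unittest.main()
```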
I gather that based on your question that most of your "Acceptance Tests" aren't terribly interesting and would be better automated. This isn't to say that Acceptance testing shouldn't occur, but it should be done on a much higher level to where these issues you bring up no longer matter.