Code coverage is a measurement of how many lines/blocks/arcs of your code are executed while the automated tests are running.
Code coverage is collected by using a specialized tool to instrument the binaries with tracing calls, then running a full set of automated tests against the instrumented product. A good tool will give you not only the percentage of the code that is executed, but will also allow you to drill into the data and see exactly which lines of code were executed during a particular test.
Our team uses Magellan - an in-house set of code coverage tools. If you are a .NET shop, Visual Studio has integrated tools to collect code coverage. You can also roll your own tools, as this article describes.
If you are a C++ shop, Intel has some tools that run on Windows and Linux, though I haven't used them. I've also heard there's the gcov tool for GCC, but I don't know anything about it and can't give you a link.
As to how we use it - code coverage is one of our exit criteria for each milestone. We actually have three code coverage metrics: coverage from unit tests (from the development team), coverage from scenario tests (from the test team), and combined coverage.
BTW, while code coverage is a good metric of how much testing you are doing, it is not necessarily a good metric of how well you are testing your product. There are other metrics you should use along with code coverage to ensure quality.
Ideally, you would have business objects that do not directly access the database, but use helper objects or some kind of ORM (Object-relational mapping) framework. Then you can test your BOs without a database, possibly mocking some helper objects. That is probably the cleanest way, because you avoid the complexity of a real DB, and really only test your business logic.
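To make that concrete, here's a minimal sketch in Java using JUnit and Mockito (assumptions on my part - use whatever mocking library your stack has). The `OrderService` business object and its `OrderRepository` helper are hypothetical names invented for illustration:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical helper that would normally hit the database.
interface OrderRepository {
    double priceOf(String sku);
}

// Hypothetical business object under test; it never touches the DB directly.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    // Business rule: orders of 10+ units get a 10% discount.
    double totalFor(String sku, int quantity) {
        double base = repository.priceOf(sku) * quantity;
        return quantity >= 10 ? base * 0.9 : base;
    }
}

public class OrderServiceTest {
    @Test
    public void bulkOrdersGetDiscount() {
        // Mock the helper so no real database is involved.
        OrderRepository repo = mock(OrderRepository.class);
        when(repo.priceOf("WIDGET")).thenReturn(2.0);

        OrderService service = new OrderService(repo);

        // 10 units * 2.0 = 20.0, minus the 10% bulk discount.
        assertEquals(18.0, service.totalFor("WIDGET", 10), 0.0001);
    }
}
```

Because the helper is mocked, the test exercises only the business rule and never touches a real database.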
If you cannot avoid combining business rules and DB access into one class (probably a problematic design, but sometimes hard to avoid), then you have to test against a DB.
Then pretty much the only reasonable option is to have a separate DB for automated testing. Your test methods should delete everything during setup, then load all their data, run the test, and verify the results.
Don't even think about trying to initialise the DB once and then running all tests against the same data. One test will accidentally change the data, and other tests will mysteriously fail. I've done that and regretted it... Each test really must stand on its own.
To do all this, I strongly recommend some kind of DB testing framework. These help you clean the DB, load the necessary data, and compare query results to expected results.
I use DBUnit (for Java), but there are many others for other languages.
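For example, a DBUnit setup that wipes and reloads the test database before each test might look roughly like this (the in-memory H2 URL and the `seed-data.xml` file are assumptions for illustration, not part of my actual setup):

```java
import java.io.File;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CustomerDaoTest {
    private IDatabaseTester tester;

    @Before
    public void setUp() throws Exception {
        // Hypothetical in-memory H2 database dedicated to the test run;
        // point this at whatever separate test DB you use.
        tester = new JdbcDatabaseTester(
                "org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");

        // seed-data.xml is a hypothetical FlatXML file describing the rows
        // this test needs, e.g. <dataset><customer id="1" name="Alice"/></dataset>
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new File("src/test/resources/seed-data.xml"));
        tester.setDataSet(dataSet);

        // CLEAN_INSERT deletes everything in the data set's tables, then
        // reloads the rows, so each test starts from a known state.
        tester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        tester.onSetup();
    }

    @After
    public void tearDown() throws Exception {
        tester.onTearDown();
    }

    @Test
    public void findsCustomerByName() throws Exception {
        // Run the code under test against the freshly loaded data
        // and verify the query results here.
    }
}
```

Since `onSetup()` runs before every test method, each test starts from the same known state and stands on its own.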
It's not clear who owns your code (your employer or the customer), or whether your customer is unwilling to pay for this code or simply unwilling to ship with it. If your customer doesn't care about cross-platform support, for example, they're quite right to be unwilling to pay for something that may benefit them only in the future, or may benefit only your other customers.
Also, if your customer is developing for an embedded platform, every byte of ROM and RAM counts, and again they're right in asking you to eliminate unnecessary code.
So how do you maintain a single, unified, non-divergent code base while still satisfying the needs of this customer? Specialize!
I have a colleague who founded a company whose sole business is the USB software stack. They have to deal with unbelievable cross-platform issues, configurations, and so on. Their solution to this problem is to keep a single code base and automatically specialize it for each customer. The automatic specialization includes things like removing code deemed "unnecessary" on a particular platform. Without knowing more about your problem, your business, or your other customers, that's the path I would recommend for you.
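I don't know the internals of their specializer, but a minimal sketch of the idea in Java - stripping feature-gated regions out of a source file based on which features a given customer gets - might look like this. The `// #if FEATURE(...)` marker syntax and the feature names are invented for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Minimal sketch of a per-customer source specializer: it copies a file,
// dropping any region fenced by "// #if FEATURE(name)" / "// #endif"
// markers whose feature is not enabled for this customer.
// No nesting support - just a sketch of the idea.
public class Specializer {
    public static List<String> specialize(List<String> source, Set<String> enabled) {
        List<String> out = new ArrayList<>();
        boolean skipping = false;
        for (String line : source) {
            String trimmed = line.trim();
            if (trimmed.startsWith("// #if FEATURE(")) {
                String feature = trimmed.substring("// #if FEATURE(".length(),
                                                   trimmed.indexOf(')'));
                skipping = !enabled.contains(feature);
            } else if (trimmed.equals("// #endif")) {
                skipping = false;
            } else if (!skipping) {
                out.add(line);
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        // E.g.: keep only the features this (hypothetical) customer pays for.
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        Set<String> customerFeatures = Set.of("USB2", "WINDOWS");
        Files.write(Path.of(args[1]), specialize(lines, customerFeatures));
    }
}
```

The point is that the specialized output is generated, never hand-edited, so the master code base stays single and non-divergent while each customer ships only the code they need.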