Are you using the entire VM drive for the test? On a fresh VM, your first run starts with no allocated data, so there is nothing the test needs to delete before it can write. On subsequent runs you are likely having to delete (or overwrite) existing blocks and then write, which is what makes the second test slower.
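A quick way to see this effect for yourself: a minimal sketch (not your actual test, and the file name and sizes are made up for illustration) that times a first-ever write against a rewrite of the same blocks. On a thin-provisioned or previously-used volume the rewrite path is where the extra work shows up.

```python
# Hypothetical sketch: time a fresh write vs. a rewrite of the same file.
# Sizes are kept tiny so this runs anywhere; real benchmarks need far more data.
import os
import time
import tempfile

BLOCK = 64 * 1024          # 64 KiB per write
COUNT = 256                # 16 MiB total

def timed_write(path):
    """Write COUNT blocks of zeros to path and return elapsed seconds."""
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(COUNT):
            os.write(fd, b"\0" * BLOCK)
        os.fsync(fd)                     # force it out to the device
    finally:
        os.close(fd)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "bench.dat")
    first = timed_write(target)          # fresh allocation: nothing to reclaim
    second = timed_write(target)         # rewrite: blocks already allocated
    print(f"fresh write: {first:.4f}s")
    print(f"rewrite:     {second:.4f}s")
```

Don't read too much into one run; filesystem caching will swamp a sample this small. The point is only that the two runs exercise different code paths on the storage side.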
Exchange and SQL activity tends toward the frequent, smaller-I/O end of the scale. Exchange also has quite a few larger I/O bursts as attachments are written and pulled. Backup intervals and long-running queries can really play hob as well, and are probably your peak I/O instances. Exchange Online Defrag is our I/O peak for Exchange, and SQL Backups are the I/O peak for our SQL server.
Exchange Online Defrag involves a lot of I/O ops but not much throughput: the average transfer size is small (512 bytes small) and there are a lot of them. The read/write ratio varies tremendously, but for a well-maintained Exchange DB it should be mostly reads. Access will be significantly random, but with enough sequential runs to keep it interesting (no, I don't have exact ratios).
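If you want to replay something shaped like that against your own storage, here's a hypothetical sketch: lots of tiny 512-byte transfers at random sector-aligned offsets, mostly reads. The 90/10 read/write split and the file name are assumptions for illustration, not measured Exchange figures.

```python
# Hypothetical sketch of an online-defrag-like pattern: many 512-byte
# transfers, random offsets, mostly reads with occasional writes.
import os
import random
import tempfile

SECTOR = 512
FILE_SIZE = 4 * 1024 * 1024        # 4 MiB stand-in for the database file
OPS = 2000
READ_RATIO = 0.9                   # "mostly reads" -- assumed, not exact

random.seed(42)
with tempfile.TemporaryDirectory() as d:
    db = os.path.join(d, "mailbox.edb")   # made-up name
    with open(db, "wb") as f:
        f.write(os.urandom(FILE_SIZE))

    reads = writes = 0
    fd = os.open(db, os.O_RDWR)
    try:
        for _ in range(OPS):
            # pick a random sector-aligned offset within the file
            offset = random.randrange(FILE_SIZE // SECTOR) * SECTOR
            os.lseek(fd, offset, os.SEEK_SET)
            if random.random() < READ_RATIO:
                os.read(fd, SECTOR)
                reads += 1
            else:
                os.write(fd, b"\0" * SECTOR)
                writes += 1
    finally:
        os.close(fd)
    print(f"{reads} reads / {writes} writes of {SECTOR} bytes each")
```

In practice you'd point a real load generator at the actual volume; this just makes the shape of the workload concrete.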
SQL Backups involve a variety of transfer sizes, but unlike online defrag the throughput is high as well. Plan for a mix of 512 B to 4 KB transfer sizes. The read/write ratio depends on where the data is ending up! Writes can be very high speed and (depending on the backup script) almost entirely sequential; the reads are going to be essentially 100% random.
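The backup-time pattern above can be sketched too: random reads from the "database" in mixed 512 B to 4 KB chunks, streamed out as purely sequential writes to the "backup" file. File names and sizes here are illustrative assumptions, not anything from a real SQL Server backup.

```python
# Hypothetical sketch of the backup pattern: random reads in, sequential writes out.
import os
import random
import tempfile

CHUNK_SIZES = [512, 1024, 2048, 4096]    # assumed mix of transfer sizes
SOURCE_SIZE = 2 * 1024 * 1024            # 2 MiB stand-in "database"
OPS = 500

random.seed(7)
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "db.mdf")      # made-up names
    dst = os.path.join(d, "db.bak")
    with open(src, "wb") as f:
        f.write(os.urandom(SOURCE_SIZE))

    copied = 0
    with open(src, "rb") as rf, open(dst, "wb") as wf:
        for _ in range(OPS):
            size = random.choice(CHUNK_SIZES)
            # random read anywhere in the source...
            rf.seek(random.randrange(SOURCE_SIZE - size))
            data = rf.read(size)
            # ...but the backup target only ever sees sequential appends
            wf.write(data)
            copied += len(data)
    print(f"backed up {copied} bytes in {OPS} random-read/sequential-write ops")
```

Note how the same job stresses the source and target volumes in completely different ways, which is why the read/write ratio "depends on where the data is ending up".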
General VM activity depends on what's in the VMs. If you've got Exchange or SQL in there, then monitor for that. If by general you mean "general file-serving" such as web or CIFS sharing, well, that depends on what the clients are doing; CAD engineers have very different access patterns than an office full of purchasing clerks. There is no generic I/O pattern for 'general VM activity'. You have to plan for what you actually have in your VMs.
Best Answer
I don't, I use IOZone, which I prefer; it has all sorts of nice ways to visualise the results:
[IOZone sample output graph] (source: iozone.org)
Sure is purdy eh!