Putting Oracle redo logs on DRAM SSD for a heavy write database

Tags: emc, oracle, ssd

I have a Sun M4000 connected to an EMC CX4-120 array with a write-heavy database. Writes peak at around 1200 IO/s and 12MB/s.

According to EMC, I am saturating the write cache on the EMC array.

I think the simplest solution is to move the redo logs to a DRAM-based SSD. This should roughly halve the write load on the EMC array, and the apps won't be stuck waiting on redo writes. Yes, DBWR may become a bottleneck, but the apps won't be waiting for it (the way they do on redo commits!)

I currently cycle through four 4GB redo logs, so even 20GB or so of SSD would make a big difference. Since this is short-term storage that is constantly being overwritten, flash-based SSDs probably aren't a great idea.
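
(For context, relocating the logs themselves is the easy part once a new device is mounted - something along these lines, with hypothetical paths and group numbers:)

```sql
-- Add new redo log groups on the SSD mount (paths/group numbers are made up)
ALTER DATABASE ADD LOGFILE GROUP 5 ('/ssd/redo/redo05.log') SIZE 4G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/ssd/redo/redo06.log') SIZE 4G;

-- Switch logs until the old groups go INACTIVE (and are archived), then drop them
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```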

The M4000 doesn't have any spare drive slots, so a PCIe card would be ideal. Alternatively, I could go external, or move the boot volumes to the EMC and free up the local drives.

Sun sells a Flash Accelerator F20 PCIe card, but that seems to be a cache for some SATA disks rather than a DRAM SSD solution. Details are sketchy: it doesn't list the M4000 as supported, and I'm tired of fighting Sun's phone tree looking for human help. 🙁

Do others agree that a DRAM SSD is the way to go? Any hardware recommendations?

UPDATE
In addition to the info in a comment below, I tried various settings for "commit_write" and it didn't make a difference.
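
(For reference, the settings I tried were along these lines - the exact value combinations are illustrative, but none of them changed the symptoms:)

```sql
-- COMMIT_WRITE is a dynamic parameter (10gR2+); tried batched/async commits
ALTER SYSTEM SET commit_write = 'BATCH,NOWAIT';
-- and the default synchronous behaviour
ALTER SYSTEM SET commit_write = 'IMMEDIATE,WAIT';
```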

Best Answer

First - I guess you have very few disks in the array. 1200 IOPS can easily be supported by 12 spinning disks (100 IOPS per disk is very reasonable). If the cache can't handle it, it means that your sustained write rate of 1200 IOPS is way more than your disks can support.

Anyway, SSD for the redo logs isn't likely to help. First, are your sessions waiting mostly on COMMIT? Check the top wait events in statspack / AWR to verify. I would guess ~95% of your I/O is not to the redo logs at all. For example, a single-row insert into a table with 5 indexes can do 1 I/O to read a table block (that has space for the row), read 5 index blocks (to update them), write 1 data block, 1 undo block and 5 index blocks (or more, if non-leaf blocks are updated), and 1 redo block. So check statspack and look at your wait events; you are likely waiting a lot on both READs and WRITEs for data / indexes. Waiting for reads slows down the INSERT, and the WRITE activity makes the READs even slower - it's the same disks. (BTW - do you really need all the indexes? Dropping the ones that aren't must-haves will speed up the inserts.)
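
If you don't have a statspack/AWR report handy, a cruder sanity check is the cumulative waits since instance startup (assuming you can query the v$ views):

```sql
-- Top non-idle wait events since instance startup; statspack/AWR interval
-- snapshots are more accurate, this is just a quick first look
SELECT *
FROM  (SELECT event,
              total_waits,
              ROUND(time_waited_micro / 1e6) AS seconds_waited
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited_micro DESC)
WHERE  ROWNUM <= 10;
```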

Another thing to check is the RAID layout - is it RAID 1 (mirroring - each write is two writes) or RAID 5 (each write is two reads and two writes for the parity calculation)? RAID 5 is much slower under a write-intensive load.
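
As a back-of-envelope (assuming all 1200 write IOPS land on the same RAID group):

```sql
-- Back-end disk I/Os implied by 1200 host write IOPS
-- (RAID 1/10: 2 writes per host write; RAID 5: 2 reads + 2 writes per host write)
SELECT 1200 * 2 AS raid10_backend_iops,
       1200 * 4 AS raid5_backend_iops
FROM   dual;
-- ~2400 vs ~4800 back-end IOPS; at ~100 IOPS per spindle, that is a lot of disks
```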

BTW - if the disks can't handle the write load, DBWR will be a bottleneck. Your SGA will fill up with dirty blocks, and there will be no room left to read new blocks (like index blocks that need to be processed / updated) until DBWR can write some dirty blocks out to disk. Again, check the statspack / AWR report / ADDM to diagnose the bottleneck, typically based on the top 5 wait events.
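
One rough way to spot that state (again assuming access to the v$ views) is 'free buffer waits' showing up in the wait events, together with a large share of the buffer cache sitting dirty:

```sql
-- How much of the buffer cache is dirty right now (Y = dirty, waiting on DBWR)
SELECT dirty, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY dirty;
```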