Let's start from the top. I've got a large database for a telephony application, about 60 GiB or so.
Rephrase that: I have a pretty small database. Seriously, the days when 60 GB was large were about 10 years ago. Compare that to: I have a financial data database that is 800 GB and growing, with 95% of the data in one table ;)
It'll have four internal HDs, probably for OS and backup, but the biggest change is the attached storage - 12 x 15k drives on an external SAS interface.
Here is what I would do:
- Mirror two discs for boot. Carve off a 64 GB partition for the OS and use the rest for TEMP; you do not want to see a lot of IO there. (A sketch for moving tempdb there follows this list.)
- Mirror the next two discs for log files. If those run high on IO, replace them with SSDs. Given the small amount of changes you have, a small 80 GB SSD should be enough.
- The rest (the 12 external discs) goes into one huge RAID 10.
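For the TEMP partition, a minimal T-SQL sketch of relocating tempdb onto it - the T: drive letter is my assumption; tempdev and templog are the default logical file names, but check yours:

-- Assumption: the TEMP partition is mounted as T:.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdev.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');
-- The new paths take effect at the next SQL Server service restart.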
More important is that you reconfigure your server to use:
- A minimum of 12 data and log files for tempdb. Do NOT autogrow those. Fix their sizes.
- A minimum of 12 log files. No joke. Do not autogrow here, either.
- A minimum of 12 database files. Did I say - no autogrow? (A T-SQL sketch follows this list.)
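As an illustration, a minimal sketch of fixed-size files with autogrow disabled - the database name Telephony, the paths, sizes, and file names are all hypothetical placeholders; repeat the ADD FILE statements up to your target file count:

-- Fix the size of the existing tempdb data file; FILEGROWTH = 0 disables autogrow.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 0);

-- Add further fixed-size tempdb files (repeat up to the target count).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 0);

-- Same pattern for the user database: multiple fixed-size data files in one filegroup.
ALTER DATABASE Telephony
ADD FILE (NAME = data02, FILENAME = 'E:\Data\Telephony_02.ndf', SIZE = 8GB, FILEGROWTH = 0);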
Then, of course, there's always RAID 10 vs RAID 5/6 vs RAID 50/60 to consider.
What, please, is there to consider, given the HUGE performance differences? RAID 10 blows RAID 5/6/50/60 out of the water for anything requiring high IO. RAID 5/6 only make sense if you put in SSD drives - then the significant IO loss will be totally eaten up. Actually, given your trivial database size, it may be financially idiotic to even go with 12 x 15k SAS discs. Get 2 x 200 GB RealSSD drives and you will have about 100 times the IO performance of a RAID 10 over your 30 drives. Given the significant cost of the infrastructure, you may save a LOT of money on the way.
Actually, the smartest thing would be not to order the whole SAS thingy at all - you have 4 drive slots: put the OS on two drives, and use 200 GB SSDs in a mirror on the other two. Finished. And a LOT faster than your SAS stuff, too ;) The joy of having a trivial database size. Check http://www.fastestssd.com for the current state of things. A modern SSD will reach 200 MB/s sustained random rates in that setup, even if not top of the line. This will seriously wipe the floor with the mediocre IO you get from your SAS setup.
Or, in numbers: 30 SAS discs are maybe 4,800 IOPS (that is roughly 160 IOPS per 15k drive). A RealSSD gets up to 50,000 - on one disc - with "weak times" of around 36,000 IOPS. That means ONE SSD is about 7.5 times as fast in slow moments (36,000 / 4,800) as your whole SAS setup, and around 10 times as fast in good times. Ouch.
Be careful to properly align the partitions and properly format the file system (hint: do not use the standard 4 KB allocation unit size - that is stupid for SQL Server; 64 KB is the usual choice).
I could do a massive RAID 10 of 6 disks and throw the entire DB onto it, but I've been considering breaking up TRAFFIC and its index files onto separate partitions - and possibly BILLING as well. In addition, the Everything Else might do well in its own File.
That would be stupid abuse of SQL Server. Given that it does load balancing between files and wants/asks for multiple files per filegroup (one per logical processor), it would not gain anything - au contraire. Separating tables and indices achieves NOTHING if they end up on the same discs anyway. In your case you are better off with one filegroup and 12 files. If you want later scalability, you may want to go with 48 data files to start - that gives you headroom up to 48 processor cores.
You may want to use two filegroups to split off the billing data from the less volatile / less requested data - not for direct speed, but for the privilege of possibly moving it off completely later without a reorganization. That is what I did with my financial database.
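For illustration, a minimal sketch of such a split, assuming a database named Telephony - the filegroup, file, table, and index names are hypothetical placeholders:

-- Create a separate filegroup for the billing data.
ALTER DATABASE Telephony ADD FILEGROUP Billing;

-- Give it its own fixed-size file.
ALTER DATABASE Telephony
ADD FILE (NAME = billing01, FILENAME = 'E:\Data\Telephony_Billing01.ndf', SIZE = 8GB, FILEGROWTH = 0)
TO FILEGROUP Billing;

-- Rebuilding the clustered index moves the whole table onto the new filegroup.
CREATE UNIQUE CLUSTERED INDEX PK_Billing ON dbo.BILLING (BillingID)
WITH (DROP_EXISTING = ON)
ON Billing;

Later, those files can be relocated to other storage without reorganizing the rest of the database.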
Last words: whoever purchased the server made a bad decision hardware-wise. There is no reason to have an external SAS tray for something that small. My database server is from SuperMicro and has 24 disc slots in 2 rack units of height - without any external cage. I don't really want to compare the numbers here, but I bet a lot of money was wasted.
Best Answer
I would suggest RAID10 for heavy database work that isn't mainly read-only, not RAID5 or 6. On a 4-drive RAID6 array, each block write (or partial block write) could be turned into a read (to get the other data block) followed by three writes (one for the initial write, two for the two parity blocks), which can have a significant write performance impact. With RAID10, each (partial) block write is just two writes to disk.
If your database access included very few writes then RAID6 might be preferable for redundancy, as a RAID6 array can survive any two drives failing, whereas a four-drive RAID10 array will only survive 4 of the 6 possible combinations of two failed drives. But you state that you expect the activity to be both read- and write-intensive (and you have good backup and disaster recovery plans, right?).
Obviously make sure you go for 64-bit editions of Windows and SQL Server, otherwise you'll not be able to use all that RAM without performance-draining hacks like PAE.

While we are on memory: I would suggest more if you can. RAM is not expensive these days, even good known-brand ECC-capable RAM, so going up from 8 GB to however much you can fit on the board isn't a bad idea. For most applications in SQL Server (with big enough databases) you will notice the benefit under any appreciable load, since a larger working set held in memory greatly reduces I/O operations for read queries. From your description it sounds like you have a data size and activity pattern that would benefit from however much RAM you can practically throw at it. Many new servers support 16 GB of RAM these days, and 32 GB is not uncommon (and increasingly neither is support for 64 GB, though that is getting into the "specialist" market, so you may have to pay a premium for the extra).
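If you do max out the RAM, it is worth capping SQL Server's memory so the OS keeps some headroom. A minimal sketch - the 28672 MB figure is a made-up example for a hypothetical 32 GB box, not a recommendation:

-- Enable access to advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap SQL Server at ~28 GB on a hypothetical 32 GB box, leaving headroom for the OS.
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;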