If the error log says that CHECKDB completed, then it did - that's the very last thing that CHECKDB does before exiting.
This looks like a client-side issue while printing the results. When I was writing DBCC CHECKDB during SQL Server 2005 development, there were sometimes odd issues with various clients depending on the number of rows of output - e.g. SSMS will only print the first 1000 errors from CHECKDB, otherwise it runs out of memory.
A few things to try:
- run it using WITH ALL_ERRORMSGS, NO_INFOMSGS (this is how I always recommend people run it)
- run it using sqlcmd instead of SSMS
- run it from sqlcmd and pipe the results to a file using the -o switch
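Putting those together, something along these lines (a sketch - the database name, server name, and output path are placeholders you'd substitute):

```sql
-- Run CHECKDB with every error message and without informational chatter.
-- 'YourDatabase' is a placeholder; substitute your database name.
DBCC CHECKDB (N'YourDatabase') WITH ALL_ERRORMSGS, NO_INFOMSGS;
```

From a command prompt, something like `sqlcmd -S YourServer -E -Q "DBCC CHECKDB (N'YourDatabase') WITH ALL_ERRORMSGS, NO_INFOMSGS" -o C:\temp\checkdb.txt` runs the same statement and writes all of the output to a file instead of the SSMS results pane, which sidesteps the client memory problem entirely.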
Thanks
Let's start from the top.
I've got a large database for a telephony application, about 60GiB or so
Rephrase that: I have a pretty small database. Seriously, the time when 60GB was large was about 10 years ago. Compare that to: I have a financial data database that is 800GB and growing, with 95% of the data in one table ;)
It'll have four internal HDs, probably for OS and backup, but the biggest change is the attached storage - 12 x 15k drives on an external SAS interface.
Here is what I would do:
- Mirror two discs for the boot volume. Carve out a 64GB partition for the OS and use the rest for temp space. You do not want to see a lot of IO there.
- Mirror the next two discs for log files. If those run high on IO, replace them with SSDs - given the small amount of changes you have, a small 80GB SSD should be enough.
- Put the rest (the 12 external discs) into one big RAID 10.
More important is that you reconfigure your server to use:
- A minimum of 12 data and log files for tempdb. Do NOT autogrow those - fix their sizes.
- A minimum of 12 log files. No joke. Do not autogrow here, either.
- A minimum of 12 database files. Did I mention - no autogrow?
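As a sketch of what that looks like for one tempdb file (the file name, path, and size below are placeholders - adjust them to your drive layout):

```sql
-- Add one fixed-size tempdb data file; FILEGROWTH = 0 disables autogrow.
-- Repeat with tempdev3 ... tempdev12 (and matching log files) as needed.
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = N'T:\tempdb\tempdev2.ndf',
    SIZE = 4GB,
    FILEGROWTH = 0
);
```

Size every file identically so SQL Server's proportional-fill algorithm spreads the load evenly across them.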
Then, of course, there's always RAID 10 vs RAID 5/6, vs RAID 50/60 to consider.
What, please, is there to consider, given the huge performance difference between RAID 10 and the others? RAID 10 blows RAID 5/6/50/60 out of the water for anything requiring high IO. RAID 5/6 only make sense if you put in SSD drives - then the significant IO loss is totally eaten up. Actually, given your trivial database size, it may be financially idiotic to even go with 12 x 15k SAS discs. Get 2 x 200GB RealSSD drives and you will have roughly 20 times the IO performance of a RAID 10 over your 12 drives. Given the significant cost of the infrastructure, you may save a LOT of money along the way.
Actually, the smartest thing would be to not order the whole SAS tray at all - you have 4 internal drive slots, so put the OS on two drives and a pair of 200GB SSDs in a mirror in the other two. Finished. And a LOT faster than your SAS stuff, too ;) The joy of having a trivial database size. Check http://www.fastestssd.com for the current state of the market. A modern SSD will reach 200MB/s sustained random rates in that setup, even if it is not top of the line. This will seriously wipe the floor with the mediocre IO you get from the SAS setup.
Or, in numbers: 12 x 15k SAS discs are maybe 2,000 IOPS. A RealSSD gets up to 50,000 - on one disc, with "weak times" of around 36,000 IOPS. That means that ONE SSD is roughly 18 times as fast - in slow moments - as your 12-disc setup, and around 25 times as fast in good times. Ouch.
Be careful to properly align the partitions and properly format the file system (hint: do not use the default 4KB allocation unit size - that is stupid for SQL Server; 64KB is the usual choice for data volumes).
I could do a massive RAID 10 of 6 disks and throw the entire DB onto it, but I've been considering breaking up TRAFFIC and its index files onto separate partitions - and possibly BILLING as well. In addition, the Everything Else might do well in its own File.
That would be a stupid abuse of SQL Server. Given that it load-balances between files and wants/asks for multiple files per filegroup (one per logical processor), it would not gain anything - au contraire. Separating data files and indices achieves NOTHING if they end up on the same discs anyway. In your case you are better off with one filegroup and 12 files. If you want later scalability, you may want to start with 48 data files - that gives you room to grow up to 48 processor cores.
You may want to use two filegroups to split off the billing data from the less volatile / less frequently requested data - not for direct speed, but for the privilege of possibly moving it off completely later without reorganization. That is what I did with my financial database.
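A sketch of that filegroup split (the database name, filegroup name, file path, and size are all placeholders):

```sql
-- Create a separate filegroup for the billing data so it can be moved
-- to different storage later without reorganizing the rest of the database.
ALTER DATABASE YourDatabase ADD FILEGROUP Billing;

ALTER DATABASE YourDatabase
ADD FILE (
    NAME = billing1,
    FILENAME = N'S:\data\billing1.ndf',
    SIZE = 10GB,
    FILEGROWTH = 0
) TO FILEGROUP Billing;
```

The billing tables and their indexes then get created with `ON Billing`, so relocating that filegroup later is just a matter of moving its files.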
Last words: whoever purchased the server made a bad hardware decision. There is no reason to have an external SAS tray for something this small. My database server is from SuperMicro and has 24 disc slots in 2 rack units of height - without an external cage. I don't really want to compare the numbers here, but I bet a lot of money was wasted.
Best Answer
The other editions of SQL Server give you things like SQL Server Agent, so you can schedule database maintenance and other jobs.
As long as your database can fit within the limits of the Express edition, you'll be just fine.
SQL Server likes lots of RAM - the more the better. If SQL Server can't keep data in cache, that puts additional load on the disks. You should look at the Web Edition or Workgroup Edition of SQL Server. Those editions have higher limits than the Express edition, but cost less than Standard Edition.
If you do start with the Express edition, you can always upgrade later to the Standard Edition after you purchase the license.