Had the exact same problem. Hardware as below...
- HP DL180-G6 Nearline Server
- 4x 300 GB SAS 15k drives
- 2x 1TB SATA 10k drives
- 2x Xeon 5340 2.53 GHz CPUs (8 cores total)
- 32 GB DDR3 1066 MHz
- HP Storageworks HBA P410 (PCI Express - 1 for all HDDs)
- HP Storageworks HBA P212/Zero (PCI Express - 1 for the external tape drive)
- HP Ultrium LTO 4 external SAS tape drive (800 GB native / 1600 GB compressed)
When we'd run the daily tape backup with tar (reading from /mnt/backup, writing to /dev/st0, the tape), it would basically lock up the whole damn computer. The first service to suffer was MySQL, which would become unreachable through its Unix socket (/var/lib/mysql/mysql.sock), and then processes would crash one by one. Even the terminal (bash prompt) was unusable, and forget about opening anything from within the GUI (GNOME desktop).
The solution was not to use 'nice', but rather 'ionice'. It wasn't CPU load that was the issue but disk load: the disks and the processors are fast enough, but the backbone (hard disk adapter / PCI Express bus / etc.) just could not keep up.
So, here was the fix...
Old tar backup command:
[root@somewhere]# /bin/tar -clpzvf /dev/st0 /mnt/backup
New tar backup command:
[root@somewhere]# /usr/bin/ionice -c2 -n5 /bin/tar -clpzvf /dev/st0 /mnt/backup
For your reference, here is the manpage for the 'ionice' command; it is supported on kernels 2.6.13 and newer:
- http://linux.die.net/man/1/ionice
- ionice priorities in class 2 (best-effort) have 'sane' values between 3 and 5 if you are trying to slow something down without making it take forever, where 3 is moderately slowed down and 5 is heavily slowed down.
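If you want the backup to be even gentler than class 2, there is also the idle class (-c3). A quick sketch of both invocations (using `echo` as a stand-in for the real workload):

```shell
# Best-effort class 2, priority 5 -- the same class/priority used in the fix above.
ionice -c2 -n5 echo "best-effort, prio 5"

# Idle class 3: the process only gets disk time when no one else wants it.
# Gentler still, but on a busy box the backup could stall for a long time.
ionice -c3 echo "idle class"
```

Note that ionice only has an effect with IO schedulers that honor it (CFQ on kernels of that era); with other schedulers the command runs but the priority is ignored.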
This effectively doubled the time it takes to run the tape backup (from half an hour to about an hour), but who cares; it now works as desired.
Question 1 - it depends on what they are writing and reading throughout the day, the volume of data, the nature of the application(s) served by the database, etc.
If by 'business user' you mean this would be a database server supporting an internal business app used by 2000 users (e.g. a call centre CRM application supporting 2000 seats where response time is critical), then a fairly beefy machine with as much RAM as you can cram in and an intelligent disk arrangement will probably do the job, but no-one would be able to offer concrete advice with the information you've given above.
A couple of badly written queries in your business app or lack of maintenance (indexing, storage management etc.) and no amount of server infrastructure is going to make the system perform well.
As with all database backed systems, everything depends on the required use case and the slowest response time you can get away with without impacting the business requirement - requirements for reporting/read centric operations are vastly different to OLTP environments. You need to figure out where the stress is going to occur first before you start buying tin. Clustering is all well and good, but requires an exponentially greater understanding of both MySQL and the application environment you are servicing.
Question 2 - Separate databases can be on separate servers. We had this in consumer web hosting environments, where we ran between 150 and 300 databases per box (standard dual-processor DL380s, if I recall; this was a few years back). Databases doing large volumes were moved to different boxes depending on load requirements. As with any DB server, loads of RAM and intelligent disk management are your friends.
Question 3 - Multiple cores deliver additional speed for parallel queries, but I recall it having diminishing returns quite quickly. Again, it depends on the type of queries and the structure of the DB. Simply put, the odds of a normal-use OLTP database being CPU bound are pretty slim - it's usually IO bound, and it depends on how effective your caching is, how you have structured the disks (separate channels for read & write) and how your data is stored and retrieved (effective indexing, queries etc.). If I were given a choice between a faster CPU and more cores, I would buy buckets of RAM and spend most of my time agonizing over the disk layout :)
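One quick way to check whether a box is IO bound rather than CPU bound is to look at the kernel's iowait counter (`iostat -x` from the sysstat package gives a nicer per-device view, if you have it installed). A minimal sketch reading /proc/stat directly, assuming a Linux host:

```shell
# The "cpu" line in /proc/stat lists cumulative jiffies as:
#   user nice system idle iowait irq softirq ...
# If iowait climbs steadily relative to user+system, the CPUs are sitting
# around waiting for the disks, i.e. the workload is IO bound.
awk '/^cpu /{printf "user=%s system=%s iowait=%s\n", $2, $4, $6}' /proc/stat
```

Sample the line twice a few seconds apart and compare the deltas; the absolute totals only tell you what has happened since boot.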
edit: apart from the huge volume of material and case studies on MySQL performance available online, the High Performance MySQL book is a pretty good starting point...
Best Answer
If you want to be able to tune MySQL to utilize multiple cores, you need to upgrade to MySQL 5.5.
According to MySQL's Whitepaper "What’s New in MySQL 5.5 Performance and Scalability" from December 2010:
The parameters mentioned are innodb_read_io_threads and innodb_write_io_threads.
These settings do not exist before MySQL 5.1.38, and in MySQL 5.1 they are only available through the InnoDB Plugin; in MySQL 5.5 they are part of the built-in InnoDB.
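For reference, this is roughly what the relevant section of my.cnf would look like on MySQL 5.5 (the values of 8 are purely illustrative, not a recommendation; both settings default to 4):

```ini
# /etc/my.cnf (MySQL 5.5+; on 5.1 these require the InnoDB Plugin)
[mysqld]
# Background IO threads for InnoDB read-ahead and dirty-page flushing.
# Raising them can help on boxes with many spindles or fast storage.
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
```

A restart of mysqld is required for these to take effect; they are not dynamic variables.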
Perhaps this blog post will give you more insight into InnoDB scalability on multiple-core servers.