Discs. LOTS of FAST discs behind a proper RAID controller. I personally use a SuperMicro 2 rack unit cage that has space for 24 2.5" discs, together with WD VelociRaptor 10k RPM discs - good enough for me. You can easily stack those boxes to address more discs - the RAID controller I use (Adaptec 5805) can address around 190 discs. When talking high-end databases, with inserts and updates, discs WILL be your issue.
Get X of those (X > 1) for redundancy and set them up as database master/slave (not a MySQL expert here).
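A quick back-of-envelope check on the disc setup above. The figures are assumptions for illustration only: 24 bays per 2U cage, the controller's ~190-disc limit, and 600 GB per 2.5" drive (VelociRaptors shipped in several capacities):

```python
# Sizing sketch for the stacked-cage setup described above.
# All figures are illustrative assumptions, not vendor specs.
BAYS_PER_CAGE = 24      # 2U SuperMicro cage
CONTROLLER_LIMIT = 190  # approx. disc limit of the Adaptec 5805
DISK_GB = 600           # assumed capacity per 2.5" drive

cages_at_limit = CONTROLLER_LIMIT // BAYS_PER_CAGE        # full cages one controller drives
raw_tb = cages_at_limit * BAYS_PER_CAGE * DISK_GB / 1000  # raw capacity, before RAID overhead

print(f"{cages_at_limit} full cages, {raw_tb:.1f} TB raw per controller")
```

So one controller tops out around 100 TB raw with these assumptions - which is why redundancy has to come from multiple boxes, not one controller.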
Network: possibly 1 Gbit internally in the cluster. Before you go 10 Gbit - look at InfiniBand (12 Gbit). With the proper boards that is cheaper than 10 Gbit Ethernet and has better latency.
Then use smaller/other boxes for the front end. Both Supermicro and Tyan have multi-node cages - you can get a 2 rack unit system that is 4 individual computers, each with 2 processors. Cluster the front end ;) Modern processors, thank heaven, can address a significant amount of RAM, so 50 MB per Apache process is not that bad from that side. Get used to machines with 32 or 64 gigabytes of RAM ;)
Alternatively, you may want to look into blades for the front end, but I never could make financial sense of them (WAY too expensive, PLUS the cage - hello?).
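To put the RAM point in numbers - a rough headroom check for one front-end node, with a hypothetical 8 GB reserved for the OS and page cache:

```python
# How many 50 MB Apache processes fit on a 64 GB front-end node.
# The 8 GB OS reservation is an assumed figure for illustration.
RAM_GB = 64
OS_RESERVE_GB = 8
PROC_MB = 50

workers = (RAM_GB - OS_RESERVE_GB) * 1024 // PROC_MB
print(f"~{workers} Apache processes per 64 GB node")
```

Over a thousand workers per node, so memory per process really isn't the limiting factor here.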
pQd's estimate of 7PB seems reasonable, and that's a lot of data for a RDBMS. I'm not sure I've ever heard of someone doing 7PB with any shared disk system, let alone MySQL.
Querying this volume of data with any shared disk system is going to be unusably slow. The fastest SAN hardware maxes out at 20GB/sec even when tuned for large streaming queries. If you can afford SAN hardware of this spec you can afford to use something better suited to the job than MySQL.
In fact, I'm struggling to conceive of a scenario where you could have a budget for a disk subsystem of this spec but not for a better DBMS platform. Even using 600GB disks (the largest 15K 'enterprise' drive currently on the market) you're up for something like 12,000 physical disk drives to store 7PB. SATA disks would be cheaper (and with 2TB disks you would need around 1/3 of the number), but quite a bit slower.
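The arithmetic behind those figures, using decimal units (7 PB = 7,000,000 GB) and the 20 GB/sec SAN ceiling quoted above:

```python
# Drive counts and best-case full-scan time for 7 PB.
# Raw capacity only - no allowance for RAID overhead or hot spares.
PB_GB = 7 * 1_000_000         # 7 PB expressed in (decimal) GB

sas_drives = PB_GB / 600      # 600 GB 15k 'enterprise' drives
sata_drives = PB_GB / 2000    # 2 TB SATA drives
scan_hours = PB_GB / 20 / 3600  # full scan at 20 GB/sec

print(f"{sas_drives:.0f} SAS drives, {sata_drives:.0f} SATA drives, "
      f"{scan_hours:.0f} h for one full scan")
```

Roughly 12,000 SAS spindles versus 3,500 SATA spindles - and even at the theoretical SAN maximum, a single full scan of the data takes about four days.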
A SAN of this spec from a major vendor like EMC or Hitachi would run to many millions of dollars. Last time I worked with SAN equipment from a major vendor, the transfer cost of space on an IBM DS8000 was over £10k/TB, not including any capital allowance for the controllers.
You really need a shared nothing system like Teradata or Netezza for this much data. Sharding a MySQL database might work but I'd recommend a purpose built VLDB platform. A shared nothing system also lets you use much cheaper direct-attach disk on the nodes - take a look at Sun's X4500 (Thumper) platform for one possibility.
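For illustration, the sharded-MySQL route mentioned above boils down to routing each row to one of N independent nodes by hashing its key. This is a minimal sketch with hypothetical node names and shard count:

```python
# Minimal hash-sharding sketch: route a key to one of N
# shared-nothing nodes. Node names and count are hypothetical.
import hashlib

NODES = [f"mysql-node-{i:02d}" for i in range(16)]

def shard_for(key: str) -> str:
    # Stable hash so the same key always lands on the same node.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

print(shard_for("customer:42"))
```

The catch, and the reason a purpose-built VLDB platform is preferable, is everything this sketch leaves out: cross-shard joins, rebalancing when you add nodes, and distributed query planning all land on you.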
You also need to think of your performance requirements.
- What's an acceptable run time for a query?
- How often will you query your dataset?
- Can the majority of the queries be resolved using an index (i.e. are they going to look at a small fraction - say: less than 1% - of the data), or do they need to do a full table scan?
- How quickly is data going to be loaded into the database?
- Do your queries need up-to-date data or could you live with a periodically refreshed reporting table?
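The selectivity question above matters more than it might look at this scale. Even a "selective" query still moves enormous amounts of data:

```python
# Why index selectivity matters at 7 PB: a query touching
# just 1% of the table still reads 70 TB.
PB_GB = 7 * 1_000_000            # 7 PB in (decimal) GB
touched_tb = PB_GB * 0.01 / 1000 # 1% of the table, in TB

print(f"1% of 7 PB = {touched_tb:.0f} TB read per query")
```

So "resolved using an index" needs to mean a far smaller fraction than 1% before queries become cheap.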
In short, the strongest argument against MySQL is that you would be doing backflips to get decent query performance over 7PB of data, if it is possible at all. This volume of data really puts you into shared-nothing territory to make something that will query it reasonably quickly, and you will probably need a platform that was designed for shared-nothing operation from the outset. The disks alone are going to dwarf the cost of any reasonable DBMS platform.
Note: If you do split your operational and reporting databases you don't necessarily have to use the same DBMS platform for both. Getting fast inserts and sub-second reports from the same 7PB table is going to be a technical challenge at the least.
Since your comments suggest you can live with some latency in reporting, you might consider separate capture and reporting systems, and you may not need to keep all 7PB of data in your operational capture system. Consider an operational platform such as Oracle (MySQL may do this with InnoDB) for data capture (again, the cost of the disks alone will dwarf the cost of the DBMS unless you have a lot of users) and a VLDB platform like Teradata, Sybase IQ, RedBrick, Netezza (note: proprietary hardware) or Greenplum for reporting.
Here are my recommendations (your mileage may vary).