The PERC 6/i is a dual-port controller; each port has 4 SAS lanes. On the 8x2.5in R710 chassis, that's a one-to-one mapping of front-panel disks to SAS lanes. On the 3.5in chassis, ports 6 and 7 are unused. With a 4-disk array, you could move 2 disks to slots 4 and 5 to split the workload between channels, although there's still the single processor and memory on the PERC card.
Updating firmware is typically a good idea, and is a fairly painless process (although it does require a reboot).
The number of channels in your RAID card and how fast they run sets the upper limit for how fast you can access your storage.
How many disks per channel you need to provide that storage depends on what kind of I/O this server will be providing. If you're going to be storing things like workstation disk images, you'll hit the performance ceiling a lot faster than if you were storing massive numbers of itty-bitty files accessed randomly.
For significantly random I/O, disk rotational speed has an impact on your disk-to-channel ratio: you'll need more 7.2K RPM disks than 10K RPM disks to reach the same performance.
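The rotational-speed impact can be roughly quantified with a back-of-envelope IOPS model. The seek times below are illustrative figures I've picked for the example, not vendor specs:

```python
# Rough per-disk random IOPS estimate from rotational speed.
# avg_seek_ms values are illustrative assumptions, not vendor specs.
def random_iops(rpm, avg_seek_ms):
    rotational_latency_ms = (60_000 / rpm) / 2  # half a rotation on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(random_iops(7200, 8.5)))    # → 79
print(round(random_iops(10_000, 4.5)))  # → 133
```

So for a purely random workload, it takes very roughly 1.7x as many 7.2K drives to match the 10K drives.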
As for SATA-600 (or 6 Gb SAS), if this RAID server will be connected to the network with 1Gb Ethernet, the difference doesn't matter much at all. The network will saturate before the storage channels will. So take into account how your storage consumers will access this storage. It may be that a single channel with 72 drives is all you need. Or, if you have 10GbE, four channels with 24 disks each may be needed.
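That saturation point is easy to sanity-check with nominal link rates (real throughput is lower after protocol overhead; the 6 Gb/s-per-lane figure is the best case mentioned above):

```python
# Nominal link rates: which side of the box saturates first?
# 1 Gb/s ≈ 125 MB/s; protocol overhead is ignored here.
def mb_per_s(gbit_per_s):
    return gbit_per_s * 1000 / 8

channel = mb_per_s(4 * 6)  # one x4 SAS connector at 6 Gb/s → 3000 MB/s
gige    = mb_per_s(1)      # 1GbE → 125 MB/s
tengig  = mb_per_s(10)     # 10GbE → 1250 MB/s
print(channel, gige, tengig)
```

Even a single x4 channel has more than 20x the bandwidth of a 1GbE link, which is why the network is the bottleneck at gigabit speeds.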
When it comes to buying your disks, take a look at the warranty period. Drives marketed for enterprise use are rated for 24/7/365 operation, whereas desktop-class drives aren't. This matters most in the cheap 7.2K RPM market segment; drives at 10K or 15K RPM are almost always "enterprise" drives.
When building your RAID sets, keep RAID5 rebuild times in mind. 6 TB takes a long time to rebuild, sometimes days, and performance will be degraded during the rebuild. It's better to have more, smaller R5 arrays in a stripe set than fewer, larger R5 arrays.
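A quick estimate shows why rebuilds take so long: the replacement disk has to be written in full, bounded by sustained per-disk throughput. The 100 MB/s figure below is an assumption for illustration, not a spec, and it's a best case with no competing I/O:

```python
# Back-of-envelope RAID5 rebuild time for one failed disk.
# mb_per_s=100 is an assumed sustained rate, not a drive spec.
def rebuild_hours(disk_tb, mb_per_s=100):
    return disk_tb * 1_000_000 / mb_per_s / 3600

print(round(rebuild_hours(6), 1))  # ≈ 16.7 hours, best case
```

Real-world rebuilds run alongside production I/O, which is how a theoretical 17 hours stretches into days.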
SAS vs SATA
Doesn't matter, in my opinion anyway. SAS has a few points going for it that make it better to work with for large storage systems (>48 drives, for instance). A 7.2K RPM SAS drive will perform nearly identically to a 7.2K RPM SATA drive. The market forces an artificial segmentation, where anything at 10K or 15K RPM is almost always SAS and 7.2K is mostly SATA. This is what most "SAS vs SATA" arguments are actually about: drive rotational speed.
Best Answer
That particular RAID controller claims 8-lane PCI Express 2.0 compliance, meaning that effectively you'll already be limited to
8 * 4 Gb/s = 32 Gb/s
(or 4000 MB/s) regardless of what's connected to the RAID card. Each SAS SFF8088 connector will carry 4 SAS lanes over a single cable; when each link runs at the maximum 6 Gb/s port speed, theoretically you indeed get
2 x 4 x 6 Gb/s = 48 Gb/s
worth of bandwidth that the RAID controller can manage. A 6 Gb/s SAS link is shared by the devices connected to it, so if you connect 4 devices and each is stressed equally, you can only get 1.5 Gb/s per individual drive.
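The arithmetic above can be restated as a quick budget. The figures are copied from the answer; 4 Gb/s per PCIe 2.0 lane is the effective rate after 8b/10b encoding overhead:

```python
# Bandwidth budget for the controller described above (all in Gb/s).
pcie_limit = 8 * 4      # 8 PCIe 2.0 lanes x 4 Gb/s effective → 32 Gb/s
sas_total  = 2 * 4 * 6  # 2 connectors x 4 lanes x 6 Gb/s     → 48 Gb/s
per_drive  = 6 / 4      # 4 drives sharing one 6 Gb/s link    → 1.5 Gb/s each

print(min(pcie_limit, sas_total))  # → 32, the host side is the ceiling
print(per_drive)                   # → 1.5
```

In other words, even though the SAS side can theoretically move 48 Gb/s, the PCIe slot caps the whole card at 32 Gb/s.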