Which is better and whether it matters depends on what you want to do with the disk capacity.
If you need to totally isolate the IO between the three virtual disks, then having three RAID groups makes sense. If, instead, individual volumes have IO requirements that would benefit from more peak IOPS/bandwidth than 8 disks can deliver, you may be better off making one larger RAID group (so all the spindles sit in a single pack) and splitting it up into the multiple virtual disks.
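As a rough back-of-the-envelope illustration of the spindle-pooling argument (the per-disk IOPS figure here is an assumed ballpark for a 15k drive, not a measurement):

```python
# Assumed ballpark: ~180 random IOPS per 15k spindle (illustrative only).
IOPS_PER_15K_DISK = 180

disks_per_small_group = 8
total_disks = 24

# Three separate 8-disk groups: each volume is capped at 8 spindles.
peak_per_volume_separate = disks_per_small_group * IOPS_PER_15K_DISK

# One 24-disk group split into virtual disks: a busy volume can borrow
# all 24 spindles while the other volumes are idle.
peak_per_volume_shared = total_disks * IOPS_PER_15K_DISK

print(peak_per_volume_separate)  # 1440
print(peak_per_volume_shared)    # 4320
```

The trade-off is the inverse: in the shared layout, a noisy volume can also steal IO from its neighbours, which is exactly the isolation the three-group layout buys you.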
It would work, but personally, I wouldn't do that with hardware RAID. There are too many different implementations to fully trust any of them in unsupported situations. It was not designed for that; it is a hack, and hacks have potential side effects. You won't know what can go wrong until it is too late. But I am sure there are lots of success stories too.
The biggest danger may simply be that the removed disk has an id saved somewhere, and if you add disks in the machine that have the same id, it might do something stupid. (For a software RAID example of this known issue, look at the difference between the ZFS file system's "zpool detach" and "zpool split". Split was created to support the type of thing you are doing. Detach is what you are thinking of doing.)
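To make the ZFS analogy concrete, a sketch of the two commands (pool and device names here are made up for illustration):

```shell
# "detach" simply drops one side of a mirror. The removed disk still
# carries the old pool's metadata and identity -- the id-collision
# hazard described above if it ever meets that pool again.
zpool detach tank /dev/sdc

# "split" exists for exactly this use case: it peels one device off
# each mirror into a brand-new pool with its own identity, which can
# then be safely imported on another system.
zpool split tank tank-clone /dev/sdc
zpool import tank-clone    # run on the second system
```

With hardware RAID you get the moral equivalent of `detach` at best, with no `split`-style identity change.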
Do you even know for sure that the second system's RAID controller is compatible with the first one? If not, the disk won't work (though the odds are good, since it is a mirror).
If you used software RAID (or hardware supporting 3-way mirrors), you could just add a 3rd disk to the mirror, and as long as you never move the disk back after removing it, you can't get any side effects on the original system. But it is still the wrong way to do it. If you clear out the MBR, superblock, etc. on those old disks and put them in the new system, the RAID controller might see some metadata you missed (unlikely, but who knows for sure?) and try to join it with the array and mess things up. Of course it 'should not', but you never know... this is not a supported situation. RAID was not designed for this, but other things were.
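For Linux software RAID, the 3-way-mirror approach might look roughly like this (device names are hypothetical; this is a sketch, not a tested procedure):

```shell
# Grow a 2-disk md mirror to a temporary 3-way mirror.
mdadm --add /dev/md0 /dev/sdc1            # add 3rd disk as a spare
mdadm --grow /dev/md0 --raid-devices=3    # promote it to an active member
# ...wait for the resync to finish (watch /proc/mdstat), then:
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=2    # shrink back to a 2-way mirror
# Before reusing the disk elsewhere, wipe the md metadata so nothing
# ever tries to re-join it to the original array:
mdadm --zero-superblock /dev/sdc1
```

The `--zero-superblock` step is the part that addresses the leftover-metadata worry above.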
(based on your other question about windows domain servers, I'll assume this one is also about Windows)
On Linux, I would just copy the files (over the network, eSATA, USB, etc.) and reinstall the bootloader. Mac OSX has a tool that does it for you, as a supported feature. Unfortunately, I don't know the best answer on Windows, but you could try the built-in backup and restore feature instead. Or use some other backup software, or use specialized software for copying bootable systems.
Best Answer
The real suggestion I would make is this: unless you are running the Large Hadron Collider, aggregate disk bandwidth generally doesn't matter; IOPS are what matter for the overwhelming majority of workloads. Stop trying to make spinning-rust disks fast - they aren't. Disk is the new tape.
If you need performance, most workloads need IOPS more than bandwidth or capacity. Buy your Dell server with the cheapest SATA drives (to get the carriers), and then replace those cheap SATA drives with the smallest number of Intel 500-series SSDs that meets your capacity needs. Dell's SSD offerings are terribly overpriced compared with Intel SSDs from, say, NewEgg, even though the Intels perform better and are more reliable than whatever Dell is shipping for SSDs (Samsung?).
Make one big RAID-5 array of SSDs. Even just 3 modern MLC SSDs in RAID-5 will absolutely destroy 16 15k spinning rust disks in terms of IOPS, by a factor of 10x or more. Sequential throughput is a non-issue for most applications, but the SSDs will also be 2x faster than spinning disks in that regard. Use large capacity 7.2k SATA disks for backup media or for archiving cold data. You'll spend less money and use less power with SSDs.
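The "factor of 10x or more" claim is easy to sanity-check with rough numbers (the IOPS figures and drive size below are assumed ballparks for illustration, not benchmarks):

```python
# Assumed ballparks: ~180 random IOPS per 15k disk, ~30,000 random
# read IOPS per modern MLC SATA SSD (ignores RAID-5 write penalty).
HDD_IOPS = 180
SSD_IOPS = 30_000

spinning_array_iops = 16 * HDD_IOPS   # 16 x 15k disks
ssd_array_iops = 3 * SSD_IOPS         # 3 SSDs

print(spinning_array_iops)                      # 2880
print(ssd_array_iops)                           # 90000
print(ssd_array_iops // spinning_array_iops)    # ~31x

# RAID-5 usable capacity: one disk's worth of space goes to parity.
ssd_size_gb = 480                     # hypothetical drive size
usable_gb = (3 - 1) * ssd_size_gb
print(usable_gb)                      # 960
```

Even with generous assumptions for the spinning disks, the three-SSD array comes out far ahead on random IO.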
Resistance to SSDs on reliability grounds is largely FUD from conservative storage admins and SAN vendors who love their wasteful million-dollar EMC arrays. Recent "enterprise MLC SSDs" are at least as reliable as mechanical disks, and probably much more reliable (time will tell). Wear leveling makes write lifetime a non-issue, even in the server space. Your biggest worry is firmware bugs rather than hardware failure, which is why I suggest going with Intel SSDs.