It's been mentioned that RAID is not a backup. VERY TRUE. Keep that in mind.
You're using terabyte-sized disks, which increases the chance of hitting an unrecoverable read error (URE), which is a MAJOR PAIN IN THE @#$. RAID 5 becomes almost unusable as disks get larger: one of the three disks fails completely, you replace it, and that's when you discover that one of the "good" disks has a spot that can't be read, so the rebuild fails and you end up restoring everything from backup. We had exactly that happen with a hardware-based RAID (PERC controller).
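Some back-of-the-envelope math shows why (assuming the commonly quoted consumer-drive URE rate of one error per 10^14 bits read, and 4 TB drives as an illustration; check your drives' datasheets, the figures vary):

Rebuilding a 3-disk RAID 5 means reading both surviving disks in full:
2 x 4 TB = 8 x 10^12 bytes ~ 6.4 x 10^13 bits
Expected UREs during the rebuild: 6.4 x 10^13 / 10^14 ~ 0.64
Chance of hitting at least one: 1 - e^(-0.64) ~ 47%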
Your RAID level depends on how you're using the server. I like RAID 1 (mirroring) for most of my purposes. It has very good read times because it can spread reads across both drives, but writes can suffer somewhat; how much depends on your controller and drive speed. Wikipedia's RAID article gives a good rundown of the levels, but no one can tell you definitively what to use without knowing your workload, the server's usage, etc.
Do not use rsync to another disk on the same computer as your backup. If your controller fries or something goes weird on the computer itself (or the machine is damaged by flood, fire, or electrical surge), you risk the backup getting toasted too. Backup means being able to rebuild your data on new hardware if need be after a catastrophic failure.
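If you like rsync, point it at a different machine instead; a minimal sketch (hostname and paths are illustrative):

# Push the data to a separate box over SSH; -a preserves
# permissions and timestamps, -z compresses in transit.
rsync -avz /srv/data/ backuphost:/backups/fileserver/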
If you're referring to a hardware RAID controller built into the motherboard: don't. Don't, don't, don't. Motherboard RAID is cheap and crappy, and worse than any software-implemented RAID. If you're going to the trouble of building a production system with RAID, use either the built-in Linux/BSD software RAID or a good RAID card like one from 3Ware. Personally, for a server I'd get a hardware card and check the specs for features like hot-swap capability and lighted alarms that indicate WHICH DRIVE has failed. There's nothing wrong with the performance or reliability of software RAID, but there are many questions along the lines of "I have a failed drive and don't know which one it is", and if you guess wrong you can break your array or erase the wrong disk. System administration is supposed to have some element of making your life easier (hee hee!), and puzzling out which drive is on which cable is which mount point is not fun. The hardware cards cost $$ but often save you much frustration when you're trying to work out which drive needs replacing.
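If you go the Linux software RAID route, creating a mirror is only a couple of commands; a minimal sketch assuming mdadm and two blank disks at /dev/sdb and /dev/sdc (device names are illustrative):

# Create a 2-disk RAID 1 array as /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Watch the initial sync progress
cat /proc/mdstat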
Don't skimp on hard drive speed; the faster, the better, especially if this is a heavily used server. Today's gigabit LANs can easily make the hard disk the bottleneck for big transfers or heavy sharing: gigabit Ethernet moves roughly 110-120 MB/s in practice, which a single spinning disk can only match on sequential I/O, and nowhere near on random I/O.
Make sure you have a way to monitor the RAID, and make it a point to periodically check the status of your drives.
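For Linux software RAID and ZFS, a few quick checks you can run by hand or from cron (a sketch; device names are illustrative):

# Linux md RAID: overall array state and per-disk detail
cat /proc/mdstat
mdadm --detail /dev/md0
# ZFS: prints only pools with problems, so silence is good news
zpool status -x
# SMART health summary for an individual drive
smartctl -H /dev/sda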
Get a good backup system in place. Any fileserver should have a good backup on a second machine, whether to tape or disk. If your server blows up tomorrow, you should be able to get parts in and start restoring everything from scratch if need be; unless the business issuing the paychecks can survive without its server, in which case I don't know why you'd be worried about RAID.
Hope this helps!
You can use iSCSI for this; it would allow easy migration of the storage off this physical box later, if you choose to do so. At this stage, however, you can export physical storage to your VMs, which reduces the complexity of your setup.
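As an illustration, OpenSolaris-era ZFS could carve out a block device (zvol) and publish it over iSCSI in two commands (a sketch; the size and names are illustrative, and newer stacks replace the legacy shareiscsi property with COMSTAR or ctld configuration):

# Create a 100 GB block device backed by the pool
zfs create -V 100G datapool/vm0
# Legacy OpenSolaris shortcut to export it as an iSCSI target
zfs set shareiscsi=on datapool/vm0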
You can't turn your single-disk/partition ZFS pool into raidz later. What you can do is add disks, create a raidz pool out of them, and zfs send/zfs receive a snapshot from the single-disk pool to the raidz pool.
As I imagine it now, you have a partition to hold your FreeNAS VM. You'd then create another partition, attach it as a virtual hard drive to the FreeNAS VM, create a ZFS filesystem on it, and export it as iSCSI/NFS/CIFS shares.
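Creating the filesystem and sharing it is just a few commands; a sketch assuming a pool named datapool and an OS where ZFS manages the shares directly (FreeNAS handles shares through its own UI instead, and both share properties accept options well beyond a plain "on"):

# Create a filesystem in the pool
zfs create datapool/data
# Export it over NFS and SMB/CIFS
zfs set sharenfs=on datapool/data
zfs set sharesmb=on datapool/data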
However, to get the benefit of ZFS's data-integrity features, you want to migrate this filesystem onto physical disks as soon as possible.
Rough overview of this migration:
On the current, partition-backed ZFS filesystem, create a snapshot:
zfs snapshot datapool/data@migration
Create a raidz pool from the new disks. Remember that you can't add another device to an existing raidz vdev, but you can add another raidz vdev to the pool itself later:
zpool create datapool2 raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0
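Growing the pool later would then look like this (a sketch; device names are illustrative):

# Add a second raidz2 vdev; the pool stripes across both
zpool add datapool2 raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0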
Send/receive the snapshot you created to migrate the data:
zfs send datapool/data@migration | zfs receive datapool2/data
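If the filesystem stays in use during the copy, you can catch up with an incremental send afterwards (a sketch; the second snapshot name is illustrative):

# Snapshot again after the first transfer completes
zfs snapshot datapool/data@migration2
# Send only the changes between the two snapshots
zfs send -i datapool/data@migration datapool/data@migration2 | zfs receive datapool2/data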
To understand this better, read this blogpost.
Best Answer
Performance would be my main concern; even if you give the VM "direct" access to the physical disks, there's still VM-related overhead, and under any decent load you'll hit problems sooner in a VM than you would on a physical machine. But benchmark it; it might work OK for your workload.
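A quick-and-dirty sequential test to compare the VM against bare metal (a sketch assuming GNU dd on Linux and a test mount at /mnt/test; real workloads deserve a proper tool like bonnie++ or fio):

# Sequential write, bypassing the page cache
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096 oflag=direct
# Sequential read of the same file
dd if=/mnt/test/ddtest of=/dev/null bs=1M iflag=direct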
(For the record, I'd definitely lean towards "bad idea", although I reserve "terrible" for things much, much worse than this).