I built a home FreeBSD file server using ZFS.
It is an AMD X2 3200+ with 3GB of RAM and a PCI Express gigabit Ethernet card. The boot drive is an old 400GB disk, and I have four 750GB Seagate drives (one with a different firmware version, just in case).
Booting from ZFS would have been nice (it would make the install simpler), but I used the ZFSOnRoot instructions to set up the root/OS drive with ZFS. (If all the partitions are ZFS, the system doesn't need to fsck any UFS filesystems at boot.) The reason you would want this is that you can then set up all of your partitions (/var, /usr, /tmp, etc.) with different options as required (such as noatime and async for /usr/obj, which speeds up kernel compiles), yet they all share space from a common pool. You can also set up a data drive and give each user a dataset of their own, with different quotas and settings, and then take snapshots (which are cheap on ZFS).
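As a sketch of what that per-user setup looks like (the quota value and snapshot name below are illustrative, not my exact settings):

```shell
# Create a dataset per user under the pool's home tree; each gets
# its own settings, but all draw space from the same shared pool.
zfs create dozer/home/walterp
zfs set quota=250G dozer/home/walterp   # illustrative quota
zfs set atime=off dozer/home/walterp    # per-dataset option
# Snapshots are copy-on-write, so they are nearly free to take:
zfs snapshot dozer/home/walterp@nightly
```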
My home server has a df that looks like:
/dev/ad0s1a 1.9G 744M 1.1G 41% /
devfs 1.0K 1.0K 0B 100% /dev
dozer/data 1.8T 62G 1.7T 3% /data
dozer/home 1.7T 9.6G 1.7T 1% /home
dozer/home/walterp 1.9T 220G 1.7T 11% /home/walterp
tank/tmp 352G 128K 352G 0% /tmp
tank/usr 356G 4.4G 352G 1% /usr
tank/var 354G 2.2G 352G 1% /var
Performance-wise, copying files is really fast. One thing I would note: I have been using ZFS on FreeBSD amd64 systems with 3-4GB of RAM and it has worked well, but from my reading, I'd be worried about running it on an i386 system with 2GB or less of memory.
I ran out of SATA ports on the motherboard, so I have not tried to add any new drives. The initial setup was simple: one command to create the RAIDZ and one to create /home, which was formatted in seconds (IIRC). I'm still using the older version of ZFS (v6), so it has some limitations. It doesn't require drives of equal size, but unlike a Drobo, if you had three 750GB drives and a 1TB drive, the end result would be as if you had four 750GB drives.
One of the big reasons I used ZFS with RAIDZ was the end-to-end checksums. CERN published a paper documenting a test in which they found 200+ uncorrected read errors while running a R/W test over a period of a few weeks (the ECC in retail drives is expected to miss an error roughly once every 12TB read). I'd like the data on my server to be correct. I had a hard crash because of a power outage (someone overloaded the UPS by plugging a space heater into it), but when the system came back, ZFS recovered quickly, without the usual fsck issues.
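If you want to exercise those checksums on a schedule, a scrub reads every block in the pool and repairs anything that fails verification from the RAIDZ parity (pool name from my setup above):

```shell
# Walk every block in the pool, verifying and repairing checksums:
zpool scrub dozer
# The CKSUM column reports checksum errors found per device:
zpool status dozer
```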
I like it because I could then add CUPS to Samba to get a print server. I added a DNS cache and can add other software as I like (I'm thinking about adding SNMP monitoring to the desktops at my house to measure bandwidth usage). For what I spent on the system, I'm sure I could have bought a cheap NAS box, but then I wouldn't have a 64-bit local Unix box to play with. If you like FreeBSD, I'd say go with it. If you prefer Linux, then I'd recommend a Linux solution. If you don't want to do any administration, that is when I would go for the standalone NAS box.
On my next round of hardware upgrades, I'm planning to upgrade the hardware and then install the current version of FreeBSD, which has ZFS v13. V13 is cool because I have a battery-backed RAM disk that I can use for the ZIL log (this makes writes scream). It also has support for using SSDs to speed up the file server (the specs on the new Sun file servers are sweet, and they get that performance from a ZFS system that uses SSDs to make things very quick).
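A rough sketch of how those devices get attached (the da0/da1 device names are hypothetical placeholders for the RAM disk and SSD):

```shell
# Separate ZIL (intent log) on a battery-backed RAM disk,
# so synchronous writes return as soon as they hit fast storage:
zpool add tank log /dev/da0
# SSD as an L2ARC read cache in front of the spinning disks:
zpool add tank cache /dev/da1
```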
EDIT: (Can't leave comments yet).
I pretty much followed the instructions at http://www.ish.com.au/solutions/articles/freebsdzfs. The one major change in 7.X since those instructions were written is that 7.2 came out; if you have 2+ GB of RAM, you should no longer have to add the following three lines to /boot/loader.conf:
vm.kmem_size_max="1024M"
vm.kmem_size="1024M"
vfs.zfs.arc_max="100M"
The instructions also explain how to create a mirror and how to put the system back into recovery mode (mounting with ZFS). After playing with those instructions once or twice, I used the ZFS Administration Guide from Sun (http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf) to better understand what ZFS was. To create my data store, I used a modified version of the pool-creation command on page 91. This being FreeBSD, I had to make a small change:
zpool create dozer raidz /dev/ad4 /dev/ad6 /dev/ad8 /dev/ad10
The names ad4-ad10 were found by doing dmesg | grep 'ata.*master'; these are the SATA hard drives on the system that will be used for the big data partition. On my motherboard, the first device numbers (ad0-ad3) are taken by the PATA ports, and because each SATA drive is a master on its own channel, there are no odd numbers among the SATA drives.
To create the file system, I just did:
zfs create dozer/data
zfs set mountpoint=/data dozer/data
The second command is required because I turned off default mountpoints for shares.
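To double-check the result, zfs list shows each dataset and where it is mounted:

```shell
zfs list -o name,used,avail,mountpoint
```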
Best Answer
There should be no issue sending a Solaris 10 ZFS v22 snapshot to a FreeBSD server supporting v28. Conversely, that saved snapshot, or any snapshot of a clone/descendant of the initial snapshot, can be sent back to the Solaris box with no issue, as long as you never upgrade the ZFS filesystem version on the FreeBSD server.
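A sketch of what that round trip looks like (hostnames and dataset names are made up for illustration):

```shell
# On the Solaris box: snapshot, then stream to the FreeBSD server.
zfs snapshot tank/data@mon
zfs send tank/data@mon | ssh freebsd-host zfs receive backup/data
# Later, send only the changes since @mon as an incremental stream:
zfs send -i tank/data@mon tank/data@tue | \
    ssh freebsd-host zfs receive backup/data
```

As long as the FreeBSD side never runs zfs upgrade on that dataset, the reverse send back to Solaris works the same way with the roles swapped.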
What really matters is the ZFS version, not the OS, given that the (Open)Solaris code base is used on both sides. Preserving upward compatibility for datasets (filesystems, volumes, and snapshots) and pools is likely one of the rules that can't be broken by the ZFS developers.
Note: something like this did happen in the past, but ZFS was still in beta: http://hub.opensolaris.org/bin/view/Community+Group+on/2008042301
The current zfs manual pages state the following about the zfs send stream: