FreeBSD – XenServer 6.1 and ZFS file server

Tags: freebsd, xenserver, zfs

I want a paravirtualized ZFS server for XenServer 6.1 supporting a 6+ TB zpool.

The old templates for XenServer 6.0.2 and FreeBSD 9 don't work.

I have been unsuccessful ("Not a Xen-ELF image…") at building my own FreeBSD 9 / XenServer 6.1 paravirtual combo, even though I've tried every "step by step" tutorial I've found on the intarwebs. Without PV and XenServer Tools you are stuck at a maximum of 3 VHDs, and with a maximum VHD size of 2 TB I can't build a 6 TB zpool, since one of those VHDs has to be the VM's own disk image.

The Solaris 10 template for XenServer 6.1 is "experimental", and I'm not even sure it would work for us.

I have tried both ZFS on Linux and ZFS-FUSE, and while they work, neither is nearly as fast as FreeBSD's ZFS.

So I ask you this:
What is the best option for ZFS on XenServer 6.1?

Has anyone, no kidding, gotten FreeBSD 9 or 9.1-RC fully paravirtualized on XenServer 6.1?
If so: why has no one released a pre-baked virtual appliance or template file?

Thanks all!

Best Answer

Hmmmm.

Well, I have an interesting beast built on Citrix XenServer. I used FreeBSD 9.1 x64 with an HVM kernel.

I used passthrough to expose the FC HBA and an Intel dual-port NIC to the FreeBSD HVM guest. The system boots from a small virtual disk provided by the hypervisor; the rest is installed on the LUNs provided by the SAN. Thus my zpools look like this:

  pool: local
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Feb 11 04:58:53 2013
config:

NAME                     STATE     READ WRITE CKSUM
local                    ONLINE       0     0     0
  raidz1-0               ONLINE       0     0     0
    multipath/DDN-v00p2  ONLINE       0     0     0
    multipath/DDN-v01p2  ONLINE       0     0     0
    multipath/DDN-v02p2  ONLINE       0     0     0

errors: No known data errors

  pool: nas
 state: ONLINE
  scan: scrub repaired 0 in 2h31m with 0 errors on Sun Feb 10 23:22:57 2013
config:

NAME                   STATE     READ WRITE CKSUM
nas                    ONLINE       0     0     0
  raidz1-0             ONLINE       0     0     0
    multipath/DDN-v03  ONLINE       0     0     0
    multipath/DDN-v04  ONLINE       0     0     0
    multipath/DDN-v05  ONLINE       0     0     0
    multipath/DDN-v06  ONLINE       0     0     0
    multipath/DDN-v07  ONLINE       0     0     0
  raidz1-1             ONLINE       0     0     0
    multipath/DDN-v08  ONLINE       0     0     0
    multipath/DDN-v09  ONLINE       0     0     0
    multipath/DDN-v10  ONLINE       0     0     0
    multipath/DDN-v11  ONLINE       0     0     0
    multipath/DDN-v12  ONLINE       0     0     0
  raidz1-2             ONLINE       0     0     0
    multipath/DDN-v13  ONLINE       0     0     0
    multipath/DDN-v14  ONLINE       0     0     0
    multipath/DDN-v15  ONLINE       0     0     0
    multipath/DDN-v16  ONLINE       0     0     0
    multipath/DDN-v17  ONLINE       0     0     0
  raidz1-3             ONLINE       0     0     0
    multipath/DDN-v18  ONLINE       0     0     0
    multipath/DDN-v19  ONLINE       0     0     0
    multipath/DDN-v20  ONLINE       0     0     0
    multipath/DDN-v21  ONLINE       0     0     0
    multipath/DDN-v22  ONLINE       0     0     0
  raidz1-4             ONLINE       0     0     0
    multipath/DDN-v23  ONLINE       0     0     0
    multipath/DDN-v24  ONLINE       0     0     0
    multipath/DDN-v25  ONLINE       0     0     0
    multipath/DDN-v26  ONLINE       0     0     0
    multipath/DDN-v27  ONLINE       0     0     0

errors: No known data errors
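
The multipath/DDN-* names in those pools are GEOM multipath labels for the SAN LUNs. As a rough sketch (not my exact commands), labels like that and a raidz vdev built from them would be set up on FreeBSD along these lines; the da device names below are hypothetical placeholders for whatever your HBA exposes:

    # Load GEOM multipath; add geom_multipath_load="YES" to /boot/loader.conf to persist it.
    kldload geom_multipath

    # Write a multipath label to a LUN that is visible over two paths
    # (da3 and da19 are hypothetical device nodes for the same LUN).
    gmultipath label DDN-v03 /dev/da3 /dev/da19

    # After labelling the remaining LUNs, build a raidz vdev from the labels.
    zpool create nas raidz multipath/DDN-v03 multipath/DDN-v04 multipath/DDN-v05 \
        multipath/DDN-v06 multipath/DDN-v07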

And the NICs:

xn0: flags=8843 metric 0 mtu 1500
        options=503
        ether f2:05:91:2c:bb:8a
        inet 10.1.3.6 netmask 0xffffff00 broadcast 10.1.3.255
        inet6 fe80::f005:91ff:fe2c:bb8a%xn0 prefixlen 64 scopeid 0x6
        nd6 options=29
        media: Ethernet manual
        status: active

lagg0: flags=8843 metric 0 mtu 1500
        options=4019b
        ether 00:15:17:7d:13:ad
        inet 10.1.250.5 netmask 0xffffff00 broadcast 10.1.250.255
        nd6 options=29
        media: Ethernet autoselect
        status: active
        laggproto lacp lagghash l2,l3,l4
        laggport: em1 flags=1c
        laggport: em0 flags=1c
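
For reference, a LACP lagg like this is normally made persistent in /etc/rc.conf. A minimal sketch, assuming the interface names and address from the output above:

    ifconfig_em0="up"
    ifconfig_em1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 10.1.250.5 netmask 255.255.255.0"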

Notice the em interfaces in the lagg. It's quite fast and works great. Provided you have drives attached to a controller that you can pass through to the VM, there's no real need to worry about the whole PV situation.
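
In case it helps anyone trying to reproduce this: on XenServer 6.x, PCI passthrough of an HBA or NIC is done from dom0 with xe by pinning the device's PCI address to the VM. A rough sketch, where the VM UUID and the PCI address 0000:04:00.0 are placeholders you'd replace with your own values:

    # In dom0: find the VM's UUID and the PCI address of the HBA/NIC to pass through.
    xe vm-list params=uuid,name-label
    lspci

    # Pin the device to the VM (multiple devices are comma-separated), then shut the VM
    # down and start it again so the change takes effect.
    xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:04:00.0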