Understanding the concept: Bare-metal VMware ESXi 5.0 with FreeNAS guest VM and iSCSI

iscsi truenas vmware-esxi

I have a test setup (HP MicroServer) with just a single disk at the moment. ESXi 5.0 is installed bare-metal on a USB flash drive, and I've created a FreeNAS 8 VM with a 2GB install, but now I'm at a loss…

In my mind, what I want to do is share the remaining 200GB of the disk flexibly between ESXi (for virtual machines) and network shares (Windows/Linux). Would this be iSCSI storage, and how would I go about it? I've seen that there are many tutorials on setting up iSCSI, but I'm not really sure whether I'm way off-target with what I think I want to achieve.

I'm a relative newbie to VMware and have been reading about iSCSI targets, initiators, etc.

Finally, how does this scale when I add several more disks and want to create a ZFS RAID set? Do I start from scratch?

I appreciate any input/insight you can provide.

Tim.

Best Answer

You can use iSCSI for this; it would allow easy migration of the storage off this physical box later, if you choose to do so. At this stage, however, you can simply export physical storage to your VMs, which reduces the complexity of your setup. You can't turn your single-disk/partition ZFS pool into a raidz pool, but you can add disks later, create a raidz pool from them, and use zfs send/zfs receive to transfer a snapshot from the single-disk pool to the raidz pool.

As I imagine it now, you have a partition holding your FreeNAS VM. You'd then create another partition, attach it as a virtual hard drive to the FreeNAS VM, create ZFS filesystems on it, and export them as iSCSI/NFS/CIFS shares.

What you want to do, however, to benefit from ZFS's data-integrity features, is migrate this filesystem to physical disks as soon as possible.
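A rough sketch of creating and sharing such a filesystem from the FreeNAS shell. The pool and dataset names (`datapool`, `datapool/data`) are examples, not anything specific to your box; in FreeNAS 8 you would normally do this through the web GUI rather than these raw ZFS commands:

```shell
# Create a dataset on the pool backed by the attached virtual disk
# ("datapool" is a placeholder name):
zfs create datapool/data

# Share it over NFS -- ZFS manages the export directly:
zfs set sharenfs=on datapool/data

# Or share it over CIFS/SMB for Windows clients:
zfs set sharesmb=on datapool/data
```

iSCSI export works differently: instead of sharing a filesystem, you would create a zvol (`zfs create -V 50G datapool/iscsivol`) and point the iSCSI target at it, which FreeNAS configures in its GUI.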

Rough overview of this migration:

  1. On the current, partition-backed ZFS filesystem, create a snapshot:

    zfs snapshot datapool/data@migration

  2. From the new disks, create a raidz pool. Remember that you can't add another device to an existing raidz vdev, but you can add another raidz vdev to the pool later:

    zpool create datapool2 raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0
  3. Send/receive the snapshot you created to migrate the data:

    zfs send datapool/data@migration | zfs receive datapool2/data
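
A follow-up step worth knowing, assuming the same example dataset names as above: the initial send can take a while, and any writes made during it won't be in the transferred copy. An incremental send catches up on those changes:

```shell
# Take a second snapshot to capture writes made during the
# initial transfer, then send only the delta between the two:
zfs snapshot datapool/data@final
zfs send -i datapool/data@migration datapool/data@final | \
    zfs receive datapool2/data

# Verify the datasets and snapshots landed on the new pool:
zfs list -t all -r datapool2
```

Once you've confirmed the data is intact on `datapool2`, you can destroy the old single-disk pool and reclaim the partition.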

To understand this better, read this blog post.
