You may also want to consider Citrix (XenSource) XenServer. It provides a self-contained dom0 similar to ESXi, with an easier interface than Xen on CentOS.
If you're just starting to venture into virtualization, XenServer can be a great choice. Particularly if you intend to run Windows on any of the guests, it has been a smoother experience for me than Xen on CentOS/RHEL 5 or KVM on Ubuntu and CentOS.
XenServer is also free, which makes it an excellent choice for proofs of concept (or even production; paid support is available if needed).
Advantages over the other two options:
- Self-contained, low administration overhead
- Small learning curve compared to manual virtualization setup
- Commercial support available if needed
- Better support for Windows guest VMs
- Good Documentation
- VM Management Console for managing VMs (XenCenter)
- Faster development cycle (much of the Xen development is rolled out in new XenServer releases before it makes it into a CentOS or OpenSolaris release)
Disadvantages:
- More of a black-box solution (less ability to customize/change dom0, although it runs a Linux kernel and allows console/SSH access, so you can make changes if you want; see the xe sketch after this list)
- No ZFS support
- VM Management Console (XenCenter) is a Windows Application
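As a taste of the console/SSH access mentioned above, dom0 ships the xe command-line tool, so routine VM operations can be scripted without XenCenter. A minimal sketch (the VM name label is a hypothetical placeholder, and exact output varies by XenServer release):

# List all VMs known to this host/pool
xe vm-list
# Start and then snapshot a guest by its name label
xe vm-start vm=win2008-guest
xe vm-snapshot vm=win2008-guest new-name-label=pre-update

The same commands work remotely with xe -s <host> -u root -pw <password>, which is handy if you'd rather not depend on the Windows-only XenCenter.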
I know this is a really old question, but I've seen it in a few different places. There's always been some confusion about the value expressed in zfs list as it pertains to using zfs send|recv. The problem is that the value expressed in the USED column is actually an estimate of the amount of space that would be freed if that single snapshot were deleted, bearing in mind that earlier and later snapshots may reference the same data blocks.
Example:
zfs list -t snapshot -r montreve/cev-prod | grep 02-21
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
montreve/cev-prod@2018-02-21_00-00-01   878K      -   514G  -
montreve/cev-prod@2018-02-21_sc-daily   907K      -   514G  -
montreve/cev-prod@2018-02-21_01-00-01  96.3M      -   514G  -
montreve/cev-prod@2018-02-21_02-00-01  78.5M      -   514G  -
montreve/cev-prod@2018-02-21_03-00-01  80.3M      -   514G  -
montreve/cev-prod@2018-02-21_04-00-01  84.0M      -   514G  -
montreve/cev-prod@2018-02-21_05-00-01  84.2M      -   514G  -
montreve/cev-prod@2018-02-21_06-00-01  86.7M      -   514G  -
montreve/cev-prod@2018-02-21_07-00-01  94.3M      -   514G  -
montreve/cev-prod@2018-02-21_08-00-01   101M      -   514G  -
montreve/cev-prod@2018-02-21_09-00-01   124M      -   514G  -
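A related trick: you can ask ZFS directly how much space deleting a whole range of snapshots would free, using a dry-run destroy. A sketch, assuming your version supports the % range syntax (the names are the ones listed above):

zfs destroy -nv montreve/cev-prod@2018-02-21_00-00-01%2018-02-21_09-00-01

The -n flag means nothing is actually destroyed; the command just prints the snapshots it would remove and an estimate of the space it would reclaim, which accounts for blocks shared between snapshots in the range.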
To find out how much data will need to be transferred to reconstitute a snapshot via zfs send|recv, use the dry-run option (-n). Taking the above-listed snapshots, try:
zfs send -nv -I montreve/cev-prod@2018-02-21_00-00-01 montreve/cev-prod@2018-02-21_09-00-01
send from @2018-02-21_00-00-01 to montreve/cev-prod@2018-02-21_sc-daily estimated size is 1.99M
send from @2018-02-21_sc-daily to montreve/cev-prod@2018-02-21_01-00-01 estimated size is 624M
send from @2018-02-21_01-00-01 to montreve/cev-prod@2018-02-21_02-00-01 estimated size is 662M
send from @2018-02-21_02-00-01 to montreve/cev-prod@2018-02-21_03-00-01 estimated size is 860M
send from @2018-02-21_03-00-01 to montreve/cev-prod@2018-02-21_04-00-01 estimated size is 615M
send from @2018-02-21_04-00-01 to montreve/cev-prod@2018-02-21_05-00-01 estimated size is 821M
send from @2018-02-21_05-00-01 to montreve/cev-prod@2018-02-21_06-00-01 estimated size is 515M
send from @2018-02-21_06-00-01 to montreve/cev-prod@2018-02-21_07-00-01 estimated size is 755M
send from @2018-02-21_07-00-01 to montreve/cev-prod@2018-02-21_08-00-01 estimated size is 567M
send from @2018-02-21_08-00-01 to montreve/cev-prod@2018-02-21_09-00-01 estimated size is 687M
total estimated size is 5.96G
Yikes! That's a whole heck of a lot more than the USED values. However, if you don't need all of the intermediary snapshots at the destination, you can send a single consolidated incremental (-i rather than -I), which calculates the necessary differential between any two snapshots even if there are others in between.
zfs send -nv -i montreve/cev-prod@2018-02-21_00-00-01 montreve/cev-prod@2018-02-21_09-00-01
send from @2018-02-21_00-00-01 to montreve/cev-prod@2018-02-21_09-00-01 estimated size is 3.29G
total estimated size is 3.29G
This isolates the blocks that were rewritten between the two endpoint snapshots, so only their final state is transferred.
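The dry-run numbers are only estimates. To measure the exact size of a stream without storing it anywhere, you can pipe a real send through wc -c, which simply counts and discards the bytes:

zfs send -i montreve/cev-prod@2018-02-21_00-00-01 montreve/cev-prod@2018-02-21_09-00-01 | wc -c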
But that's not the whole story! zfs send extracts the logical data from the source, so if you have compression enabled on the source filesystem, the estimates are based on the uncompressed data that will need to be sent. For example, taking one incremental snapshot and writing it to disk, you get something close to the estimated value from the dry-run command:
zfs send -i montreve/cev-prod@2018-02-21_08-00-01 montreve/cev-prod@2018-02-21_09-00-01 > /montreve/temp/cp08-09.snap
-rw-r--r-- 1 root root 682M Feb 22 10:07 cp08-09.snap
But if you pass it through gzip, we see that the data is significantly compressed:
zfs send -i montreve/cev-prod@2018-02-21_08-00-01 montreve/cev-prod@2018-02-21_09-00-01 | gzip > /montreve/temp/cp08-09.gz
-rw-r--r-- 1 root root 201M Feb 22 10:08 cp08-09.gz
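Worth knowing: OpenZFS releases newer than the one noted below (0.7 and later) add a compressed-send flag (-c), which keeps blocks in their on-disk compressed form instead of expanding them into the stream, so the stream size is close to the compressed on-disk size without an extra gzip pass. A sketch, assuming a zfs new enough to support it (the output filename is arbitrary):

zfs send -c -i montreve/cev-prod@2018-02-21_08-00-01 montreve/cev-prod@2018-02-21_09-00-01 > /montreve/temp/cp08-09c.snap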
Side note: this is based on ZFS on Linux, version:
- ZFS: Loaded module v0.6.5.6-0ubuntu16
You will find references to optimisations that can be applied to the send stream (-D for a deduplicated stream, -e for more compact embedded-block records), but with this version I haven't observed any impact on the size of the streams generated with my datasets.
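If you want to check this on your own datasets, the effect of those flags is easy to measure by counting the raw stream bytes, as earlier (the snapshots are the ones from this example):

zfs send -i montreve/cev-prod@2018-02-21_08-00-01 montreve/cev-prod@2018-02-21_09-00-01 | wc -c
zfs send -D -i montreve/cev-prod@2018-02-21_08-00-01 montreve/cev-prod@2018-02-21_09-00-01 | wc -c
zfs send -e -i montreve/cev-prod@2018-02-21_08-00-01 montreve/cev-prod@2018-02-21_09-00-01 | wc -c

Comparing the three byte counts shows directly whether -D or -e buys you anything for a given dataset.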
Best Answer
I recommend Nexenta or NexentaStor for this purpose. NexentaStor is more of an appliance: it features a web GUI and doesn't require heavy Unix knowledge. But if I weren't using either, I'd go the Solaris or Solaris-derivative route (Illumos, OpenIndiana).