You will run into the CAP theorem: a distributed system cannot guarantee consistency, availability, and partition tolerance all at the same time.
DRBD / MySQL HA relies on synchronous replication at the block-device level. This is fine while both nodes are available, or while one suffers a temporary fault, is rebooted, etc., and then comes back. The problems start when you get a network partition.
Network partitions are extremely likely when you're running across two datacentres. Essentially, neither party can distinguish a partition from the other node failing. The secondary node doesn't know whether it should take over (the primary has failed) or not (only the link is gone).
While your machines are in the same location, you can add a secondary channel of communication (typically a serial cable or crossover ethernet) to get around this problem - so the secondary knows when the primary is GENUINELY down, rather than just cut off by a network partition.
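To illustrate the redundant-channel idea: a cluster manager such as Corosync can run its heartbeat over two independent rings, one per physical path, so a single link failure is not mistaken for a node failure. A sketch with hypothetical network addresses (Corosync is one option among several; DRBD itself is agnostic about how you do this):

```
# corosync.conf fragment (sketch, hypothetical networks):
# ring 0 runs over the normal LAN, ring 1 over the dedicated
# crossover/secondary link, so heartbeats survive one link failing.
totem {
    version: 2
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0   # primary network
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.0.0.0      # dedicated crossover link
    }
}
```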
The next problem is performance. While DRBD can give decent** performance when your machines have a low-latency connection (e.g. gigabit ethernet - some people use dedicated high-speed networks), the more latency the network has, the longer it takes to commit a transaction***. This is because DRBD needs to wait for the secondary server (when it's online) to acknowledge all the writes before saying "OK" to the application, to ensure durability of writes.
If you do this in different datacentres, you typically have several more milliseconds latency, even if they are close by.
** Still much slower than a decent local IO controller
*** You cannot use MyISAM for a high-availability DRBD system because it doesn't recover properly or automatically from an unclean shutdown - and an unclean shutdown is exactly what a failover looks like.
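To put rough numbers on the latency cost: with synchronous replication, each commit waits at least one network round trip for the peer's acknowledgement, so RTT directly caps the serial commit rate. A back-of-the-envelope sketch (the model and the figures are illustrative assumptions, not DRBD measurements):

```python
def max_commits_per_second(rtt_ms: float, local_commit_ms: float = 0.1) -> float:
    """Upper bound on the serial commit rate when each commit waits one RTT.

    Simplified model: total commit time = local commit cost + one network
    round trip to the synchronous peer. Real systems batch and pipeline
    writes, so treat this only as an order-of-magnitude illustration.
    """
    return 1000.0 / (local_commit_ms + rtt_ms)

# Same rack over gigabit ethernet (~0.2 ms RTT): thousands of commits/s.
print(round(max_commits_per_second(0.2)))  # -> 3333
# Nearby datacentres (~3 ms RTT): only a few hundred commits/s.
print(round(max_commits_per_second(3.0)))  # -> 323
```

A few extra milliseconds of RTT costs you an order of magnitude of serial commit throughput, which is why the datacentre-to-datacentre case hurts so much.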
This looks like a perfectly reasonable design to me. I assume you want to place your virtual machines on datastores backed by the VSA-based HP LeftHand storage. You have the hardware and networking part covered: using two NICs on the vSwitches for the HP LeftHand VSAs, the ESXi iSCSI VMkernel ports, and possibly the guests which access the HP LeftHand cluster is best practice. I also suggest you create resource reservations for the VSAs according to the user manual.
[Edit] I just saw this in the comments on the original question: it is best practice to present RAID-protected storage to the VSA, which is then virtualized and made available in the HP LeftHand cluster. The RAID level depends on your requirements for capacity, performance, and protection; RAID 10 is the way to go in your case. [/Edit]
One thing you need to be careful about is where you place your managers in this HP LeftHand setup. This is very important in a configuration with only two storage nodes! Right now I do not see an HP LeftHand Failover Manager (definitely the preferred option) in your design to maintain quorum in the storage cluster; or are you planning to use the Virtual Manager? Depending on the uplink to your main data center (latency <= 20ms, bandwidth ~100Mbit/s), you might be able to place it there.
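The reason the third manager matters comes down to simple majority arithmetic: quorum requires more than half of the managers to be reachable, so with only two, a single failure stalls the cluster. A hypothetical helper to show the effect (not LeftHand code):

```python
def has_quorum(alive_managers: int, total_managers: int) -> bool:
    """Strict-majority quorum check: more than half must be reachable."""
    return alive_managers > total_managers // 2

# Two managers only: losing either one drops below a majority.
print(has_quorum(1, 2))  # -> False
# Two managers plus a Failover Manager (three total): one failure is survivable.
print(has_quorum(2, 3))  # -> True
```

This is why the Failover Manager (or, less ideally, the Virtual Manager) is essential in a two-node setup: it acts as the tie-breaking third vote.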
PS: You might also want to make use of Remote Copy to replicate your VMs and data into your main data center; everything needed to make it work is already in place.
This article explains the main idea behind Storage Replica, along with all of its prerequisites and features: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/storage-replica-overview
And here is a guide on how to implement volume replication with a stretched cluster: https://www.starwindsoftware.com/blog/how-to-configure-storage-replication-using-windows-server-2016-part-2
Please note that you need Datacenter licenses to get started with Storage Replica.