Because you're running a proxy server on the gateway-vm(s), presumably IP forwarding is disabled on them.
You could enable IP forwarding on your gateway-vm(s) and route the traffic like:
database-vm1 <-> gateway-vm1 <-ipsec/gre-> gateway-vm2 <-> database-vm2
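For example, on each gateway-vm you'd turn on forwarding and bring up the tunnel endpoint; a rough sketch with iproute2, ignoring the IPSec wrapping for brevity and using made-up tunnel addresses:

    # enable IP forwarding (persist it in /etc/sysctl.conf too)
    sysctl -w net.ipv4.ip_forward=1

    # hypothetical GRE tunnel on gateway-vm1; gateway-vm2 gets the mirror image
    ip tunnel add gre1 mode gre local <gateway-vm1-ip> remote <gateway-vm2-ip> ttl 255
    ip addr add 172.16.0.1/30 dev gre1
    ip link set gre1 up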
You would want to be careful that enabling forwarding doesn't let outgoing traffic bypass your proxy servers...unlikely, probably, but something to be aware of as you're setting this up.
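If you want a belt-and-suspenders guard against that, and assuming iptables on the gateway-vms plus the /24 addressing suggested just below, you could restrict forwarding to just the database subnets:

    # only forward traffic between the two database subnets, drop the rest
    iptables -P FORWARD DROP
    iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -j ACCEPT
    iptables -A FORWARD -s 10.10.2.0/24 -d 10.10.1.0/24 -j ACCEPT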
Getting the routing right could be a bit tricky. If you're really using /16s on the networks the database servers are on, then you may need to adjust some things: 10.10.1.5 and 10.10.2.5 are in the same /16, and that will make the routing setup difficult. Perhaps if you just shrink your netmasks to /24s so that they're in different IP networks, it'll make the routes easier to figure out?
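If you go that way, the renumbering could be as simple as shrinking the masks while keeping the host addresses; a sketch, assuming eth0 is the database-facing interface:

    # on database-vm1
    ip addr del 10.10.1.5/16 dev eth0 && ip addr add 10.10.1.5/24 dev eth0
    # on database-vm2
    ip addr del 10.10.2.5/16 dev eth0 && ip addr add 10.10.2.5/24 dev eth0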
I'm a little bit of an odd-ball in the networking world as I like pushing routing decisions farther out to the edge, even out into hosts at times, so I'll mention that you might want to look at something like quagga running OSPF to perhaps simplify some of this. Maybe you won't find it simpler, I'm not sure.
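For what it's worth, a minimal quagga sketch for gateway-vm1, assuming the /24 addressing above and 172.16.0.0/30 on the GRE tunnel (gateway-vm2 would mirror it):

    ! /etc/quagga/ospfd.conf
    router ospf
     network 10.10.1.0/24 area 0
     network 172.16.0.0/30 area 0

With that on both sides, the gateways learn each other's database subnet automatically instead of you maintaining static routes.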
The general idea is that gateway-vm1 needs to have a route for the gateway-vm2<->database-vm2 link in its routing table with a next-hop to gateway-vm2 via the IPSec/GRE tunnel. Similarly, gateway-vm2 needs to have a route for the gateway-vm1<->database-vm1 link in its routing table with a next-hop to gateway-vm1 via the IPSec/GRE tunnel.
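With static routes and the made-up tunnel addresses from the sketch above, that works out to one route per side (the database-vms also need to reach the far subnet via their local gateway-vm, either with a specific route or by using it as their default gateway):

    # on gateway-vm1: the far database subnet lives across the tunnel
    ip route add 10.10.2.0/24 via 172.16.0.2 dev gre1
    # on gateway-vm2: the mirror image
    ip route add 10.10.1.0/24 via 172.16.0.1 dev gre1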
If you're using IPSec/GRE interconnects...one big subnet isn't strictly off the table (you could theoretically bridge in the gateway-vms between the virtual switch and the GRE link), but I certainly wouldn't want to be setting it up that way.
If I were in your shoes, I might re-think my allocation of the physical NICs and use one as an interconnect between the two physical hosts, so you don't have to use IPSec/GRE for the crossover. With that, you could run an OVS instance on each host, connect the crossover NIC to it, and use it as a private interconnect; the database-vms could then talk to each other "directly" on this private interconnect.
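A rough sketch of that with Open vSwitch, assuming eth2 is the crossover NIC and vnet0 is the database-vm's virtual NIC on that host (names will vary with your hypervisor):

    # create the private interconnect bridge and attach the crossover NIC
    ovs-vsctl add-br br-interconnect
    ovs-vsctl add-port br-interconnect eth2
    # attach the database-vm's virtual NIC (your hypervisor may do this for you)
    ovs-vsctl add-port br-interconnect vnet0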
It isn't that the incoming connection isn't coming in, it's that the routing table thinks the return traffic should go through the default gateway.
You should really consider splitting this into 2 different subnets for the LANs between each WAN device and the server's NICs. That would be the proper way to handle this, and if it is a direct connection between the server NIC and a port on the "gateway", then something as simple as a /30 would work for each link.
Use 2 different NICs in the Windows Server, each on its respective subnet with its respective default gateway pointing at its WAN device. But you'd still have to manually fail over the gateway in the routing (for instance, disable one NIC during normal operation, then enable it and disable the other during failover; it would all be manual).
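As a sketch with made-up /30 addressing (WAN1 gateway 192.0.2.1, WAN2 gateway 192.0.2.5), the manual failover from an elevated prompt would look something like:

    :: normal operation: default route via WAN1
    route add 0.0.0.0 mask 0.0.0.0 192.0.2.1 metric 1

    :: failover: drop the WAN1 default and point at WAN2 instead
    route delete 0.0.0.0 mask 0.0.0.0 192.0.2.1
    route add 0.0.0.0 mask 0.0.0.0 192.0.2.5 metric 1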
I don't know if RRAS and dead gateway detection might work here...I've never used them myself, but you may want to look into them too.
Best Answer
What you're looking for can be achieved in two ways.
The first is Policy-Based Routing, where the next-hop choice is made by some policy. For example, this may be a route-map, or simply a packet filter that forwards a packet to a gateway based on the values of its IP header fields.
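To make that concrete, here's what it looks like on Linux with iproute2 and iptables (the addresses and port are made up for the example): packets matching a filter get marked, and marked packets use a table with a different default gateway.

    # mark outbound SMTP in the mangle table
    iptables -t mangle -A PREROUTING -p tcp --dport 25 -j MARK --set-mark 1
    # marked packets consult table 100, which has its own default gateway
    ip rule add fwmark 1 table 100
    ip route add default via 203.0.113.1 table 100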
The second is multiple FIB support. This is when the operating system's IP stack has multiple Forwarding Information Base tables, simply speaking, routing tables. With this method, packets are marked as belonging to a particular FIB based on their IP header values or on their source interface, and each packet is then forwarded according to its routing table. And yes, multiple routing tables can have different gateways, including different default gateways.
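FreeBSD is an example of an OS with real multiple-FIB support; a sketch, assuming net.fibs="2" is set in /boot/loader.conf:

    # populate the second FIB with its own default gateway
    setfib 1 route add default 203.0.113.1
    # run a process whose traffic is routed by FIB 1 instead of FIB 0
    setfib 1 fetch http://example.com/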
So far, neither of these techniques is available in Windows, simply because Windows just isn't a modern network OS.
I'd recommend using an intermediary router that is capable of doing this with either of the two methods I described.