Your thinking around the external BGP advertisements is sound, although, as others have suggested, I'd highly recommend you run BGP with your providers for automated route withdrawal on failure. Consider also the need to take a site offline for maintenance, or the need to add, move or migrate providers in the future.
Having said that, how about an alternative design:
Rather than de-aggregating your prefixes and further polluting the already-bloated DFZ, advertise the same /22 out of both providers. (Search: BGP Anycast)
Rather than stretching a Layer 2 domain across two physically separate sites, make each of those links routed, using something like OSPF with BFD. Your failovers become faster (sub-second), a split-brain between sites doesn't affect anything, and when the day comes that you move one of your sites to a location too far away for Layer 2 adjacency, your design will still stand up.
Configure both your BIG-IP LTMs with the same VIPs, so that either will function regardless of what is happening with your upstream service provider(s). Consider having secondary server pools from the neighbouring site so that in the event of maintenance on the back end, the LTMs can happily continue sending clients to the other site. The VIP subnet would be advertised into OSPF/BGP (preferably from the BIG-IP), making the local LTM the preferred target for traffic from either ISP gateway. During a maintenance window, this route would be withdrawn by the BIG-IP, and the neighbour site would take over. (Search: F5 Route Health Injection)
I don't see anything between the LTMs and the ISP gateways that is stateful (unless the ISP gateways themselves are), so return traffic can leave via the nearest ISP gateway regardless of where it originated. For bonus points, if you take a limited BGP feed from each provider (only their customers and local peers) and share that between your Internet gateways, you will provide the best performance to those clients.
BGP doesn't have to be a complex beast, and like any design: Good documentation should address any concerns about supportability into the future. If your management aren't supportive of building it right, then I'd advise that SaaS is not a space they should be getting into. Outsource to a player like AWS and make it Someone Else's Problem™
VoIP over the public Internet can be a problem, but it usually works well enough most of the time, although there are times when it sucks. Most ISPs have extra-cost features where they will honor some of your QoS markings and policies.
(I know Verizon Business, among others, has some specific packages for QoS, and you may need to adjust the policies and markings you send to match one of its packages. We have a problem where we use multiple ISPs and the QoS packages don't match between them, so we need to fine-tune for the particular ISP, or pick a close-enough package.)
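To make "QoS markings" concrete, here is a minimal sketch of tagging voice traffic with the standard Expedited Forwarding DSCP at the socket level. The EF value (46) is the conventional Diffserv marking for voice; whether your ISP's package honors it is exactly the matching problem described above.

```python
import socket

# Mark outbound RTP/voice traffic as Expedited Forwarding (DSCP 46 / EF),
# which QoS packages commonly map to a priority queue.
# DSCP occupies the top 6 bits of the IP TOS byte: 46 << 2 = 0xB8.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Verify the kernel accepted the marking before sending media.
assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == TOS_EF
sock.close()
```

If an ISP's package expects a different class (say AF41 for video), you would swap in that DSCP value here rather than re-marking in the network.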
The problem arises when the traffic must pass through other ISPs. You have no control over what happens in that case. The larger ISPs will have better (possibly direct) connections to many VoIP providers, as the VoIP providers will try to directly connect to the large ISPs or a Tier 1 ISP that directly connects to the large ISPs.
There may also be a possibility of having a VoIP provider, especially a telco like AT&T or Verizon, connect SIP trunks directly into your data center(s).
The big monkey wrench in the works today is the requirement for E911. You, and the VoIP provider, will need to maintain a database of where each phone is connected so that emergency services get not only the address of a 911 call, but also the floor and section of the floor. (We have spent large sums of money to meet the E911 requirements.) While this requirement is not necessarily in every state today, it is being phased in for all the states, and it is not something you can ignore even if you don't yet have the requirement: it is far easier to put it in place as you roll out VoIP than it is to try to retrofit it later.
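The shape of that location database is simple, even if populating and maintaining it is the expensive part. Here is a hypothetical sketch; every switch name, port, phone ID and address below is invented for illustration, and in a real deployment the phone-to-port mapping would be discovered automatically (e.g. via LLDP/CDP polling).

```python
# Map each switch port to a dispatchable location (often called an
# Emergency Response Location, or ERL), then map phones to ports.
port_to_location = {
    ("sw-hq-3f-01", "Gi1/0/14"): "100 Main St, Floor 3, NE quadrant",
    ("sw-hq-3f-01", "Gi1/0/15"): "100 Main St, Floor 3, NW quadrant",
}

phone_to_port = {
    "SEP0011223344AA": ("sw-hq-3f-01", "Gi1/0/14"),
}

def location_for_phone(phone_id: str) -> str:
    """Return the dispatchable location for a phone, or a safe fallback."""
    port = phone_to_port.get(phone_id)
    return port_to_location.get(port, "UNKNOWN - route call to security desk")

print(location_for_phone("SEP0011223344AA"))
# A phone that has moved to an unmapped port falls through to the fallback,
# which is exactly the gap the E911 rules are meant to close.
```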
Comment 4 in Ron's answer is vitally important and dominant for long-distance data copies over TCP. You just can't fill a large-bandwidth pipe with a single TCP socket over a long (or even medium) distance, because steady-state throughput is capped at the window size divided by the round-trip time. If you are copying a large file long distance, one approach is to split it and use multiple TCP sockets.
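The arithmetic behind that ceiling is worth seeing once. The numbers below are illustrative assumptions: TCP's classic 64 KiB window (its limit without window scaling) over an 80 ms transcontinental path, into a 1 Gbit/s link.

```python
import math

# Steady-state TCP throughput <= window_size / round_trip_time.
window_bytes = 64 * 1024        # 64 KiB: the limit without window scaling
rtt_seconds = 0.080             # assumed 80 ms round trip
link_bits_per_sec = 1_000_000_000  # 1 Gbit/s pipe

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(f"Per-socket ceiling: {max_throughput_bps / 1e6:.1f} Mbit/s")

# How many parallel sockets the split-file approach needs to fill the pipe:
sockets_needed = math.ceil(link_bits_per_sec / max_throughput_bps)
print(f"Parallel sockets to fill 1 Gbit/s: {sockets_needed}")
```

With these numbers a single socket tops out around 6.6 Mbit/s, so you need on the order of 150 parallel streams to saturate the gigabit link. That is why the distance, not the pipe, dominates.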
Another approach is to use file-transfer software designed for that task. Such software either optimizes the TCP window size or uses UDP. There are both public-domain and commercial options available (but specific recommendations are off topic).
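"Optimizes the TCP window size" mostly means sizing the socket buffers to the bandwidth-delay product so the window can grow to fill the pipe. A minimal sketch, again assuming a 1 Gbit/s, 80 ms path; note the kernel may round or clamp the request (Linux, for instance, reports back double the requested value and enforces a system-wide maximum).

```python
import socket

# Size socket buffers to the bandwidth-delay product:
# 1 Gbit/s * 80 ms = 10 MB in flight.
bdp_bytes = int(1_000_000_000 / 8 * 0.080)  # 10,000,000 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Set before connect() so window scaling is negotiated to match.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)

# Check what you actually got rather than assuming the request was honored.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()
```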
Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway. — Andrew S. Tanenbaum
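Tanenbaum's quip holds up under arithmetic. With made-up but plausible numbers (a car carrying 500 LTO-9 tapes at 18 TB native capacity each, on a four-hour drive):

```python
# Sneakernet bandwidth: total bits moved divided by trip time.
tapes = 500
tape_capacity_bytes = 18e12   # assumed 18 TB native per tape (LTO-9)
trip_seconds = 4 * 3600       # assumed 4-hour drive

bandwidth_bps = tapes * tape_capacity_bytes * 8 / trip_seconds
print(f"Effective bandwidth: {bandwidth_bps / 1e12:.0f} Tbit/s")
```

Five terabits per second, with terrible latency. No TCP tuning required.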