It's my impression that many (all?) of the AWS IP ranges have already been blacklisted as spam sources, so the deliverability of mail sent directly from them won't be very good.
Also, using an AWS host as a mail server seems like a bad choice to me, because it may disappear at any moment - along with any undelivered e-mail.
It would make more sense to have each AWS host hand the mail off either to a service that's paid to handle your outbound mail, or to a VPS or traditional server you control that will be available approximately 24x7. That also makes it much easier to set up the SPF record and so on, maximizing the chances that someone will actually see your mail.
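As a sketch of the SPF piece: the TXT record below is hypothetical (the documentation IP `203.0.113.10` stands in for your relay host, and `include:relay-service.example` stands in for whatever paid delivery service you use), but it shows the shape of authorizing exactly those senders and soft-failing everything else.

```
; Hypothetical SPF record - relay IP and include: target are placeholders.
example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 include:relay-service.example ~all"
```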
You might take a look at "Recommendations for SMTP services for massive mailing" or "Sendmail relay out of Amazon EC2?" for more ideas.
The canonical solution to this is not to rely on the end user's IP address, but instead to use a Layer 7 (HTTP/HTTPS) load balancer with "sticky sessions" via a cookie.
Sticky sessions mean the load balancer will always direct a given client to the same backend server. "Via a cookie" means the load balancer (which is itself a fully capable HTTP device) inserts a cookie (which the load balancer creates and manages automagically) to remember which backend server a given client should use.
The main downside to sticky sessions is that backend server load can become somewhat uneven. The load balancer can only distribute load fairly when new connections are made, and given that existing connections may be long-lived in your scenario, in some time periods load will not be distributed entirely fairly.
Just about every Layer 7 load balancer should be able to do this. On Unix/Linux, some common examples are nginx, HAProxy, Apsis Pound, Apache 2.2 with mod_proxy, and many more. On Windows 2008+ there is Microsoft Application Request Routing. As appliances, Coyote Point, loadbalancer.org, Kemp and Barracuda are common at the low end; F5, Citrix NetScaler and others at the high end.
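To make the cookie mechanism concrete, here's a minimal HAProxy sketch (server names and addresses are placeholders). HAProxy inserts a `SRVID` cookie stamped with the chosen server's value, so returning clients land on the same backend:

```
frontend www
    bind *:80
    default_backend app

backend app
    balance roundrobin
    # Insert a SRVID cookie on the first response; route by it afterwards.
    cookie SRVID insert indirect nocache
    server app1 10.0.0.11:8080 cookie app1 check
    server app2 10.0.0.12:8080 cookie app2 check
```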
Willy Tarreau, the author of HAProxy, has a nice overview of load balancing techniques here.
About the DNS Round Robin:
Our intent was for the Round Robin DNS TTL value for our api.company.com (which we've set at 1 hour) to be honored by the downstream caching nameservers, OS caching layers, and client application layers.
It will not be honored. And DNS round robin isn't a good fit for load balancing. If nothing else convinces you, keep in mind that modern clients may prefer one address over all others due to longest-prefix-match address selection, so if a mobile client changes IP address, it may switch to a different RR host.
Basically, it's okay to use DNS round robin for coarse-grained load distribution, by pointing 2 or more RR records at highly available IP addresses handled by real load balancers in active/passive or active/active HA. And if that's what you're doing, you might as well serve those DNS RR records with long Time To Live values, since the underlying IP addresses are already highly available.
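In zone-file terms, that setup might look like this (the addresses are documentation placeholders; each one would be a VIP fronted by an HA load-balancer pair, which is why a long 24-hour TTL is safe):

```
api.company.com.  86400  IN  A  192.0.2.10
api.company.com.  86400  IN  A  192.0.2.20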
I run apache/mod_perl on load-balanced EC2 instances and do code upgrades regularly, just as you describe.
The AWS documentation covers how to add and remove instances from rotation using either the API or the Console, your choice. You'll notice that with my approach, webservers go out of rotation gracefully, so I'm not worrying about whether a particular user request gets killed. As @cyberx86 mentioned, you can use the command
apachectl -k graceful
to restart your Apache server gracefully: existing children finish serving their current request before exiting (apachectl -k graceful-stop does the same for a full shutdown).
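One rotation step could be sketched with the AWS CLI like this (assuming a Classic ELB; the load-balancer name and instance ID are placeholders, and the deploy step is whatever your upgrade actually does):

```
# Take the instance out of rotation so no new requests arrive.
aws elb deregister-instances-from-load-balancer \
    --load-balancer-name my-lb --instances i-0123456789abcdef0

# On the instance: let in-flight requests finish, then deploy.
apachectl -k graceful-stop
# ... deploy the new code, then ...
apachectl -k start

# Put the instance back into rotation.
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-lb --instances i-0123456789abcdef0
```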