I've evaluated Elastic Beanstalk alongside other AWS offerings while trying to improve our hand-rolled AWS instances. I chose not to use it because of complications that would arise migrating my existing application, not because of the offering itself. The catch is that you don't have as much control over application deployment and server configuration. If you are starting a new application, it may be helpful not to deal with those things right now; if you have an existing application, it is more of a challenge to fit it into the Beanstalk model.
Beanstalk provides an offering similar to Heroku and other PaaS vendors, but not much of a benefit over them if you just want to focus on building your application. You do at least get to control the virtualized resources to a greater degree than with other PaaS vendors.
Problems that I was running into with my application(s):
Git-based deployments - I love them, but our repo is 1+ GB, which is rather large to push on a regular basis. That repo also contains about 40 applications (which really should be split out, but that would take some time). Uploading a packaged build could work instead, but most of our applications would take a large amount of work to package.
Integration with other services - From what I've seen, Beanstalk sort of assumes that anything you are connecting to is a single service. This works well if your services are behind an ELB, but ours were separate nodes that we reached through HAProxy running on each application server. If you run your datastores and other services behind a single endpoint, you should be fine.
In my evaluation I also included OpsWorks and CloudFormation. OpsWorks has similar integration issues with how existing automation was working for these applications. CloudFormation didn't do much more than what some Python scripts and Chef were already taking care of for us.
I wound up choosing AWS Auto Scaling groups instead, with some automation provided by Asgard. This was the smallest change to our existing configuration and application code, and it provided the benefit we were looking for: simple management of multiple servers through the AWS API.
The restrictions Elastic Beanstalk puts on your application are actually very helpful. You will have to make sure your application is mostly stateless, exposes an endpoint for a service, and relies on other services for state. If you are trying to build reusable stand-alone services that multiple applications can consume, Beanstalk is a great start.
If/when you get to the point of wanting more configuration control, OpsWorks is a great next step. Its predefined roles should make the transition easy to start with, and it provides an automation framework around Chef to help coordinate provisioning of multiple servers.
Since RDS requires two availability zones when deployed in a VPC, you need to make sure that Beanstalk is able to reach both of them, via Network ACLs as well as the instance-based security group rules.
Only your ELB and your NAT instance/NAT gateway need to be in public subnets; everything else should be in private subnets.
Security groups are stateful and Network ACLs are stateless, so while you only need inbound rules on the security groups, you need to allow BOTH inbound and outbound ports between your Beanstalk subnet and both RDS subnets in the Network ACLs. See Security in Your VPC.
Here is a sample eb create command for creating the Beanstalk environment (replace the square-bracketed strings):
eb create [BEANSTALK_ENVIRONMENT] --instance_type m3.medium --branch_default \
    --cname [BEANSTALK_CNAME] \
    --database --database.engine postgres --database.version [x] \
    --database.size 100 --database.instance db.m4.large \
    --database.password xxxxxxxxx --database.username ebroot \
    --instance_profile [BEANSTALK_EC2_IAM_PROFILE] --keyname [SSH_KEY_NAME] \
    --platform "64bit Amazon Linux 2015.03 v1.3.0 running Ruby 2.2" \
    --region us-east-1 --tags tag1=value1,tag2=value2 \
    --tier webserver --verbose --sample \
    --vpc.id [vpc-xxxxxx] --vpc.dbsubnets [subnet-db000001,subnet-db000002] \
    --vpc.ec2subnets [subnet-ec200001] --vpc.elbsubnets [subnet-elb00001] \
    --vpc.elbpublic --vpc.securitygroups [sg-00000001] \
    --timeout 3600
subnet-db000001 NETWORK ACL RULES:
Inbound: Port Range: 5432, Source [subnet-ec200001 (as IP range)], Allow
Outbound: Port Range: 5432, Destination [subnet-ec200001 (as IP range)], Allow
subnet-db000002 NETWORK ACL RULES:
Inbound: Port Range: 5432, Source [subnet-ec200001 (as IP range)], Allow
Outbound: Port Range: 5432, Destination [subnet-ec200001 (as IP range)], Allow
subnet-ec200001 NETWORK ACL RULES:
Inbound: Port Range: 5432, Source [subnet-db000001 (as IP range)], Allow
Inbound: Port Range: 5432, Source [subnet-db000002 (as IP range)], Allow
Outbound: Port Range: 5432, Destination [subnet-db000001 (as IP range)], Allow
Outbound: Port Range: 5432, Destination [subnet-db000002 (as IP range)], Allow
subnet-elb00001 NETWORK ACL RULES:
Inbound: Port Range: 80, Source 0.0.0.0/0, Allow
Inbound: Port Range: 443, Source 0.0.0.0/0, Allow
Outbound: Port Range: 80, Destination 0.0.0.0/0, Allow
Outbound: Port Range: 443, Destination 0.0.0.0/0, Allow
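As a sketch, rules like the above can be scripted with the AWS CLI's create-network-acl-entry. The ACL ID and CIDR block below are placeholders, assuming acl-db000001 is the Network ACL attached to subnet-db000001 and 10.0.20.0/24 is the IP range of subnet-ec200001:

```shell
# Placeholder IDs/CIDRs -- substitute your own values.
# Inbound: allow Postgres from the Beanstalk EC2 subnet into the DB subnet's ACL.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-db000001 \
    --rule-number 100 \
    --protocol tcp \
    --port-range From=5432,To=5432 \
    --cidr-block 10.0.20.0/24 \
    --rule-action allow \
    --ingress

# Matching outbound rule -- NACLs are stateless, so both directions are required.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-db000001 \
    --rule-number 100 \
    --protocol tcp \
    --port-range From=5432,To=5432 \
    --cidr-block 10.0.20.0/24 \
    --rule-action allow \
    --egress
```

Repeat the pair for the second DB subnet's ACL and for the EC2 subnet's ACL in the opposite direction.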
An additional note about Network ACLs -- because they are stateless, return traffic comes back on an ephemeral port rather than the original port. So you may have to add the following to the inbound AND outbound Network ACLs for subnets with EC2 instances:
Inbound: Port Range: 1024-65535, Source 0.0.0.0/0, Allow
Outbound: Port Range: 1024-65535, Destination 0.0.0.0/0, Allow
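Those ephemeral-port rules can also be scripted; a sketch, assuming acl-ec200001 is a placeholder for the EC2 subnet's Network ACL ID:

```shell
# Allow return traffic on ephemeral ports in both directions (placeholder ACL ID).
aws ec2 create-network-acl-entry --network-acl-id acl-ec200001 \
    --rule-number 200 --protocol tcp --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 --rule-action allow --ingress
aws ec2 create-network-acl-entry --network-acl-id acl-ec200001 \
    --rule-number 200 --protocol tcp --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 --rule-action allow --egress
```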
There are also several useful scenarios in Recommended Network ACL Rules for Your VPC.
I hope this helps.
Best Answer
I'm not familiar with Beanstalk, so take this with a grain of salt.
As I understand it, an A/B deploy strategy works kind of like this:
Databases are terribly stateful and don't take well to being swapped like that. As I've seen it done, step 3 above goes kind of like...
The tricky part here is the database indirection. For that, I suggest using Route53. During the deploy process:
You get the idea.
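As a rough sketch of that Route53 indirection: applications connect to a stable internal CNAME, and the deploy step repoints it at the new database. The zone ID, record name, and RDS endpoint below are all hypothetical:

```shell
# Repoint the internal database alias at the newly promoted database.
# ZEXAMPLE123, db.internal.example.com, and the RDS endpoint are placeholders.
aws route53 change-resource-record-sets \
    --hosted-zone-id ZEXAMPLE123 \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "db.internal.example.com",
          "Type": "CNAME",
          "TTL": 60,
          "ResourceRecords": [
            {"Value": "db-green.xxxxxx.us-east-1.rds.amazonaws.com"}
          ]
        }
      }]
    }'
```

A low TTL keeps the cutover window short; clients pick up the new endpoint on their next DNS lookup.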