NFS – EC2 autoscaling and code deployments

Tags: amazon-ec2, amazon-web-services, autoscaling, deployment, nfs

I'm just starting with an EC2 load balancer and trying to implement auto scaling and fault tolerance. The database is hosted in RDS, sessions are shared, and files are shared using S3.

I can't seem to find the best solution for code deployments. I don't want to use Fabric to re-run deploy commands on all instances.

Updating code on one instance, creating an AMI from it and then restarting instances with that AMI seems like major overkill to me, as we deploy multiple times a day and the whole process is a bit cumbersome.

Ideally I would like to store the application code on a shared (NFS?) volume mounted on all instances and update the code on that volume during deployments. Each instance could watch a particular file for changes and restart its application workers when the file is touched.
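
Roughly what I have in mind on each instance is something like the sketch below (the marker path and the restart command are just placeholders):

```python
# Rough sketch of the per-instance watcher I have in mind: poll a "deploy
# marker" file and restart the workers when its mtime changes.
# The marker path and the service name are placeholders.
import os
import subprocess
import time

MARKER = "/var/www/app/.deployed"                            # touched at the end of each deploy
RESTART_CMD = ["sudo", "service", "app-workers", "restart"]  # placeholder worker service

last_mtime = 0.0
while True:
    try:
        mtime = os.path.getmtime(MARKER)
    except OSError:
        mtime = 0.0          # marker not there yet
    if mtime > last_mtime:
        last_mtime = mtime
        subprocess.check_call(RESTART_CMD)
    time.sleep(10)           # poll every 10 seconds
```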

Is there a way to use NFS and auto-mount a shared EBS volume on all instances?

Or is there a better way to do this?

Summary of what is desired:

I update one EC2 instance / the NFS volume and the rest pick the change up, without the whole process of creating a new AMI, destroying instances and creating new ones. I don't want some instances to go into limbo while application code and database schemas are out of sync. I know the best practice is to write code that supports two consecutive DB schema changes, but we really can't afford that at this point.

Best Answer

I would suggest avoiding NFS shares, as they create a single point of failure for your system, and NFS clients can have trouble recovering from NFS server downtime. NFS is not really the "cloud way" of doing things; the AWS way of sharing data between virtual machines is to use the various decentralized AWS services.

For this particular purpose I would suggest using S3. Create an S3 bucket for your software releases, upload your releases there, and give the virtual machines access so they can download the releases by polling the bucket. I would also suggest using IAM roles to give the production servers read-only access to the bucket without any hard-coded API credentials. This scheme protects your releases and the bucket in case an attacker manages to gain access to your servers: as there are no hard-coded API credentials, they cannot be reverse engineered or abused elsewhere.
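
As a rough sketch of the read-only role setup (role, policy and bucket names below are examples I made up), you can attach an inline policy to the instance role with a few lines of boto3:

```python
# Sketch: grant an EC2 instance role read-only access to the release bucket
# so instances can poll for releases without any hard-coded API credentials.
# Role, policy and bucket names are examples only.
import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-app-releases",    # the bucket itself (ListBucket)
                "arn:aws:s3:::my-app-releases/*",  # the objects in it (GetObject)
            ],
        }
    ],
}

# Attach the policy inline to the role used by the production servers.
iam.put_role_policy(
    RoleName="app-server-role",               # example role name
    PolicyName="release-bucket-read-only",
    PolicyDocument=json.dumps(policy),
)
```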

Build and development environments can be set up in a similar way: if your build or CI server runs on AWS infrastructure, it can have a role with read-write access to the bucket. If your build/CI server is elsewhere, you can set up an IAM group with the appropriate S3 access rights and create an IAM user with API credentials to be used at the external site.
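
For example, a CI job could publish a build with something like the following sketch; the bucket, artifact and key names are illustrative, and the IAM user's access key is expected to come from the environment rather than from code:

```python
# Sketch: upload a build artifact from a CI server to the release bucket.
# Off-AWS, the IAM user's credentials are read from the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY); names below are examples.
import boto3

BUCKET = "my-app-releases"              # example bucket name
ARTIFACT = "build/app-1.2.3.tar.gz"     # example local artifact path
KEY = "releases/app-1.2.3.tar.gz"       # example S3 key

s3 = boto3.client("s3")                 # picks up credentials from the environment or role
s3.upload_file(ARTIFACT, BUCKET, KEY)
print("Published", KEY)
```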

You still need to prepare an AMI that downloads and configures the release as you wish and starts the application, but at least you don't have to repeat this process for every release. A configuration management tool such as Chef, Puppet or Ansible can probably do everything you need here, but you need to allocate some time to familiarize yourself with the tool and model your environment with it.
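
For illustration, the bootstrap baked into the AMI could be as simple as the sketch below, run from user data or an init script at boot; the bucket, paths and worker service name are assumptions on my part:

```python
# Sketch of a boot-time bootstrap baked into the AMI: fetch the newest
# release from S3 (via the instance's IAM role), unpack it, and (re)start
# the application workers. Bucket, paths and service names are assumptions.
import subprocess
import tarfile

import boto3

BUCKET = "my-app-releases"     # example bucket name
PREFIX = "releases/"           # example key prefix
APP_DIR = "/var/www/app"       # example application directory

s3 = boto3.client("s3")        # credentials come from the instance role

# Find the newest release object under the prefix and download it.
objects = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", [])
latest = max(objects, key=lambda o: o["LastModified"])
archive = "/tmp/release.tar.gz"
s3.download_file(BUCKET, latest["Key"], archive)

# Unpack over the application directory and restart the workers.
with tarfile.open(archive) as tar:
    tar.extractall(APP_DIR)
subprocess.check_call(["sudo", "service", "app-workers", "restart"])
```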

The infrastructure (load balancers, auto scaling groups, security groups, IAM roles and so on) can be modeled, created and maintained with AWS CloudFormation. If your environment grows any more complex than a single ELB and a single ASG, I would recommend taking a look at it. By modeling your infrastructure with CloudFormation you can quite easily create and maintain exact copies of your entire environment, for example one for testing/QA and one for production. The infrastructure description is a JSON document that can be kept in your VCS repository.
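
As an illustration only (the template is heavily simplified and the resource and stack names are made up), a stack can be created from such a JSON document with a few lines of code:

```python
# Sketch: create a CloudFormation stack from a (heavily simplified) JSON
# template that lives in version control. Template contents, resource and
# stack names are illustrative only.
import json

import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: one security group for the app tier",
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Allow HTTP from the load balancer",
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                     "CidrIp": "10.0.0.0/16"}
                ],
            },
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="app-production",        # e.g. "app-qa" for an exact copy of the environment
    TemplateBody=json.dumps(template),
)
```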

I hope this helps.
