Check out Elastic Beanstalk. It does most of the work for you. It is a fairly new service from AWS.
Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
It supports many of the common web development languages: PHP, .NET, Java, Python and Ruby. I'm not sure what your preferred language is, but if it is one of the above, you will find it rather easy to deploy from your IDE (Eclipse or Visual Studio).
You can still connect to your instances, even if you deployed them with Beanstalk.
The manual way to do it is simply to set up an instance with your application installed, attached to EBS. The image of your application can then be launched again and again for more instances. You will have to adjust the configuration of the different instances using the user-data field of each instance (again, if this gets too complex, look again at Beanstalk...).
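For instance, a minimal sketch with the boto3 library (the AMI ID, instance type and user-data values here are placeholders):

```python
import boto3

# Launch another copy of your application's image, passing per-instance
# configuration through the user-data field; your app reads it at boot.
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-12345678",                    # placeholder: your application's AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData="role=worker\ndb_host=10.0.0.5",  # placeholder configuration
)
```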
A more automated way is to use CloudFormation, which is basically a script that describes your different servers. Using this script you can launch several instances very easily. You can launch them through the web console or using the CLI tools. I recommend using the open-source Python library boto.
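A rough sketch of launching such a stack with boto3 (the template file and stack name are placeholders):

```python
import boto3

# Create a stack from a CloudFormation template that describes your servers.
cfn = boto3.client("cloudformation")
with open("my-servers.template.json") as f:   # placeholder template
    cfn.create_stack(StackName="my-web-tier", TemplateBody=f.read())
```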
You can also consider using S3 to store and share your web files. You don't need to worry about replicating the files among the instances, and you can also serve them with CloudFront or other CDN services.
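For example, pushing a shared asset to S3 with boto3 (bucket and file names are placeholders):

```python
import boto3

# Upload once; every instance (and CloudFront) can then read it from S3.
boto3.client("s3").upload_file("static/logo.png", "my-site-assets", "static/logo.png")
```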
Another alternative is Route 53, the DNS service of AWS. It has very nice features that allow routing your traffic to different web servers for A/B testing, or based on the latency of your servers (latency-based routing, LBR).
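A sketch of what a latency-based record looks like with boto3 (zone ID, domain and IP are placeholders):

```python
import boto3

# Create a latency-based A record: clients are routed to whichever
# region answers with the lowest latency.
boto3.client("route53").change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",               # placeholder zone ID
    ChangeBatch={"Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "us-east-1",
            "Region": "us-east-1",            # the LBR part
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        },
    }]},
)
```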
If you're using NGINX or another proxy server, it may be related to its `client_max_body_size` directive. To fix it, create a folder `.ebextensions` in the app root, and a file `.ebextensions/01_nginx.config`. Then tell the build script to add this directive while building your environment, by adding this to your newly created `.ebextensions/01_nginx.config`:
```yaml
container_commands:
  01_reload_nginx:
    command: "service nginx reload"

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 25M;
```
Commit your folder to git, if you're using it. Then `eb deploy`, cross your fingers, and try to upload your 6MB PDF. However, in my experience, you sometimes have to completely rebuild the entire environment to make it work.
Best Answer
You've got a couple of options...
Longer session duration
You can set the maximum session duration to up to 12 hours - that may be enough for your long-running tasks.
I'm not sure how you're obtaining your temporary credentials, but you may have to set the session duration there to 12 hours as well, as some tools request tokens valid for only 1 hour by default.
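For instance, if you assume the role yourself through STS, you can request the longer duration explicitly; a sketch with boto3 (the role ARN is a placeholder, and the role's MaxSessionDuration must allow 12 hours):

```python
import boto3

# Ask STS for a 12-hour session instead of the default 1 hour.
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123456789012:role/uploader",  # placeholder
    RoleSessionName="long-upload",
    DurationSeconds=12 * 3600,
)["Credentials"]
```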
Also check out the `get-credentials` script, which may facilitate your workflow. I don't know what exactly you are doing now, but writing temporary creds to `~/.aws/credentials` is usually not the best practice.

Copy to EC2 first
If you can't increase the max duration setting, you may be able to work around the limitation by:

- first copying the data to an EC2 instance, e.g. using `rsync`,
- then uploading from EC2 to S3, taking advantage of the EC2 Instance Role whose credentials are renewed automatically (see the sketch below).

Besides, copying from EC2 to S3 may well be faster.
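On the instance itself no keys are needed; a minimal sketch with boto3 (bucket and file names are placeholders):

```python
import boto3

# boto3 picks up the instance-role credentials from the metadata service
# automatically, and they are renewed behind the scenes.
s3 = boto3.client("s3")
s3.upload_file("/data/big-file.pdf", "my-bucket", "big-file.pdf")
```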
Use EC2 credentials locally
You can also "steal" the EC2 role credentials and use them locally. Check out this `get-instance-credentials` script. These creds are usually good for 6 hours.
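I don't know the linked script's exact contents, but conceptually it boils down to something like this (IMDSv1 shown for brevity):

```python
import json
import urllib.request

# Query the instance metadata service for the role name, then for the
# role's temporary credentials.
base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
role = urllib.request.urlopen(base).read().decode()
creds = json.loads(urllib.request.urlopen(base + role).read().decode())

# Print export lines you can paste into a shell on another machine.
print(f"export AWS_ACCESS_KEY_ID={creds['AccessKeyId']}")
print(f"export AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']}")
print(f"export AWS_SESSION_TOKEN={creds['Token']}")
```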
Now copy and paste those lines to your local, non-EC2 machine.
And verify that the credentials work:
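For example, asking STS who you are (the ARN should point at the instance role):

```python
import boto3

# Should print the instance role's ARN, not your own user's.
print(boto3.client("sts").get_caller_identity()["Arn"])
```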
As you can see, your local non-EC2 machine now has the same privileges as the EC2 instance from which you retrieved the credentials.
Use `aws` CLI to multipart-upload the file

You can split your large file into smaller chunks (see the `split` man page) and use the `aws s3api` multipart-upload sub-commands: `aws s3api create-multipart-upload`, `upload-part` and `complete-multipart-upload`. You can refresh the credentials between parts and retry any failed parts if your credentials expire half-way through.

Create a custom script
You can use the brilliant `boto3` Python AWS SDK to build your own file uploader. It should be fairly simple to write a little multipart uploader that requests new credentials every time they expire, including asking for the MFA.
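Not a definitive implementation, but a minimal sketch of the idea (bucket, key and part size are placeholders; the credential refresh is stubbed out since it depends on how you obtain your tokens):

```python
import boto3
from botocore.exceptions import ClientError

BUCKET, KEY = "my-bucket", "big-file.pdf"   # placeholders
PART_SIZE = 100 * 1024 * 1024               # parts must be >= 5 MB (except the last)

def fresh_client():
    # Stub: recreate the client with newly requested credentials here,
    # e.g. an STS assume_role call that prompts for the MFA code.
    return boto3.client("s3")

s3 = fresh_client()
upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]
parts = []

with open("big-file.pdf", "rb") as f:
    part_no = 1
    while chunk := f.read(PART_SIZE):
        try:
            resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                                  PartNumber=part_no, Body=chunk)
        except ClientError:
            s3 = fresh_client()             # creds expired: refresh and retry the part
            resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                                  PartNumber=part_no, Body=chunk)
        parts.append({"PartNumber": part_no, "ETag": resp["ETag"]})
        part_no += 1

s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                             MultipartUpload={"Parts": parts})
```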
As you can see, you've got many options. Hope that helps :)