The "selected" answer is correct, but I wanted to add some extra information as most people using EB and RDS together should have the same requirement too - even if they don't know it yet.
First question: Why would you want the RDS instance to exist outside the EB environment?
Answer: So that the lifetime of the RDS instance is not tied to the lifetime of the EB environment. i.e. when you remove an environment, you don't want to destroy the DB with it. There are very few reasons why you'd want to actually tie your RDS instance to your environment.
A problem with setting up RDS independently of EB is that the RDS_* environment variables are not automatically populated, so you need to retrieve their values and set them yourself via the web console or .ebextensions. Adding credentials to your code is not recommended, though, as that is a security hole.
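For reference, a sketch of what such an .ebextensions file could look like. The namespace aws:elasticbeanstalk:application:environment is the one EB uses for environment properties; every value below is a placeholder, not a real endpoint or credential:

```yaml
# .ebextensions/rds-env.config -- placeholder values for illustration only
option_settings:
  aws:elasticbeanstalk:application:environment:
    RDS_HOSTNAME: mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com
    RDS_PORT: "3306"
    RDS_DB_NAME: ebdb
    RDS_USERNAME: ebroot
    # Keep RDS_PASSWORD out of version control; set it from the
    # web console or with `eb setenv RDS_PASSWORD=...` instead.
```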
But then, the next problem is if you want to programmatically create environments (such as for blue-green zero downtime deployments) then you need a solution for how to populate the sensitive RDS values (e.g. password) every time. Unfortunately, this requires you to drop further down the AWS stack and use a CloudFormation template.
The ideal solution is an enhancement to EB so that the "use an existing database" link mentioned in the question actually lets you manually associate an existing RDS database and then have the RDS_* environment variables automatically populated again, rather than redirecting you to unhelpful documentation. AWS Support said this has been raised as a feature request but of course no timeframe given.
From the official Nginx Docker image documentation, under "Using environment variables in nginx configuration":

Out-of-the-box, nginx doesn't support using environment variables inside most configuration blocks. But envsubst may be used as a workaround if you need to generate your nginx configuration dynamically before nginx starts.
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./mysite.template:/etc/nginx/conf.d/mysite.template
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
  command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
The mysite.template file may then contain variable references like this:
listen ${NGINX_PORT};
Update:

Note that envsubst also substitutes nginx's own variables, so a line like this:

proxy_set_header X-Forwarded-Host $host;

gets mangled into:

proxy_set_header X-Forwarded-Host ;

To prevent that, I use this trick: I have a script that runs nginx, used in the docker-compose file as the command option for the nginx service. I named it run_nginx.sh:
#!/usr/bin/env bash
export DOLLAR='$'
envsubst < nginx.conf.template > /etc/nginx/nginx.conf
nginx -g "daemon off;"
Because the run_nginx.sh script defines the new DOLLAR variable, nginx's own variables in my nginx.conf.template file now look like this:

proxy_set_header X-Forwarded-Host ${DOLLAR}host;

And my own defined variables look like this:

server_name ${WEB_DOMAIN} www.${WEB_DOMAIN};
My real use case for this is also available here.
Best Answer
One way of copying a file is to do it from the container. If you have a running container, use

docker cp

to transfer the file to your host, in this case the EB instance. Run

docker ps

to get the container ID. If you don't see any output, launch a container based on the image you are interested in. Say your image name is 'aws-beanstalk/current-app':

docker run -ti --rm aws-beanstalk/current-app /bin/bash
Then, from the Docker host, to transfer the file /code/run.py in the container to /tmp on the host:
docker cp containerID:/code/run.py /tmp
The containerID is the one you see after running
docker ps
You can also use
docker exec -ti containerID /bin/bash
to interactively work on an already running container.