To do this, you need to make sure that your IPTables rules are configured properly. Ubuntu generally leaves its servers wide open by default, which is why I still don't recommend it for servers unless you already know how to configure this properly.
I imagine that your iptables -L -nv output looks something like this, yes?
# iptables -L -nv
Chain INPUT (policy ACCEPT 4M packets, 9M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 8M packets, 4M bytes)
pkts bytes target prot opt in out source destination
It's empty and it's wide open. The Ubuntu IPTables HowTo will probably help quite a bit with this. (https://help.ubuntu.com/community/IptablesHowTo)
I recommend something like the following, which allows SSH on any interface but port 6379 only on the loopback interface:
*filter
:INPUT DROP [92:16679]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [203:36556]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -p tcp -m tcp --dport 6379 -j ACCEPT
-A INPUT -i lo -p udp -m udp --dport 6379 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i lo -j ACCEPT
COMMIT
You would then save this file in /etc/iptables.rules.
Obviously, any other ports that you specifically want open should be added.
Note: I've added the specific 6379 lines for clarity. The -A INPUT -i lo -j ACCEPT line right before COMMIT would actually allow this anyway, because all loopback traffic must be accepted for a Linux system to operate properly.
You will also want to load the rules from your /etc/network/interfaces file, to ensure that they are applied when the interface comes up rather than later in the boot process. Adding something like this is recommended:
auto eth0
iface eth0 inet dhcp
pre-up iptables-restore < /etc/iptables.rules
Edit: To load this configuration initially, you need to run the iptables-restore command referenced above:
iptables-restore < /etc/iptables.rules
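If you later adjust the running ruleset and want those changes to survive a reboot, a common pattern is to dump the in-kernel rules back to the same file that the pre-up line loads (the path here assumes the /etc/iptables.rules file above; both commands require root):

```shell
# Persist the current in-kernel ruleset to the file loaded at boot
iptables-save > /etc/iptables.rules

# Verify what is actually loaded
iptables -L -nv
```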
The way we handle this is by creating multiple groups of servers in a layered stack (even if a group currently needs only one instance). The first layer is, naturally, your Elastic Load Balancer.
The second layer is an Auto Scaling Group of web servers (multi-availability zone). These boot a custom AMI designed to be in a proper ready state for this task on startup. (Now that our processes are more mature, we actually boot a generic AMI that can auto-configure itself on startup using Chef.) We also do a git pull of the latest production code repository on startup, so we don't have to create a new AMI with each code deployment. This also allows us to change configurations, such as database hosts, Redis hosts, etc., more easily.
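As a sketch of that boot-time deploy step (the repository path, branch name, and service name here are hypothetical examples, not our actual setup):

```shell
#!/bin/sh
# Hypothetical boot hook (e.g. invoked from rc.local or an init job
# baked into the AMI). Paths and names below are placeholders.
APP_DIR=/srv/app
cd "$APP_DIR" || exit 1

# Pull the latest production code instead of baking it into the AMI
git pull origin production

# Restart the app server so it picks up the new code and any
# environment-specific settings (database host, Redis host, ...)
service myapp restart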
The third layer is for the database and other services such as ElasticSearch and Redis. You can either host all three services on one box and then deal with managing your own MySQL slaves, or host Redis and ElasticSearch on their own box and use Amazon's RDS for your MySQL services. Your choice, based on whether or not you want to manage your own replication/fault-tolerance in MySQL.
Often the simplest way is to use Amazon RDS in a multi-availability zone configuration. We always try to deploy multi-AZ with everything, so we are still up-and-running if a single AZ fails. Then you run a smaller instance to host just Redis and ElasticSearch.
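With the AWS CLI, creating a Multi-AZ RDS instance looks roughly like this (the identifier, instance class, storage size, and credentials are placeholders you would substitute):

```shell
# Create a Multi-AZ MySQL instance; RDS maintains a synchronous
# standby in another availability zone and fails over automatically.
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --engine mysql \
    --db-instance-class db.m1.small \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'changeme' \
    --multi-az
```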
With ElasticSearch, here's a tip we use for Rails installs: install and maintain a complete instance of your app along with ElasticSearch on the box, then build an AMI for this role (or a Chef role). The reason is so that you can run the utility tasks on bootup to create your ElasticSearch indices from scratch when booting a fresh AMI. Then put this instance in a multi-AZ ASG as well, with a min and max of one server. If that box or its AZ dies, the ASG will boot a replacement, which will rebuild its indices on startup and be ready to serve clients.
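That bootup utility step might be as simple as a one-line hook; the rake task name below is a placeholder, since the real one depends on your search library:

```shell
#!/bin/sh
# Hypothetical boot hook for the app + ElasticSearch instance in the ASG.
# Rebuilds the search indices from the database on a fresh boot.
cd /srv/app || exit 1
RAILS_ENV=production bundle exec rake search:reindex
```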
For Redis, there is good news on the horizon: redis-cluster is coming soon, which promises easier management of scaling Redis stores. In the meantime, you can handle your own replication or try Garantia, a hosted scalable Redis server solution that is running a version of redis-cluster in beta now (currently limited to the us-east-1 region). This has the advantage of keeping the same IP addresses in your configurations, no matter what happens to your instance pools.
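Until redis-cluster lands, classic master/slave replication in Redis is a single directive in the replica's redis.conf (the master host and port below are examples):

```
# redis.conf on the replica (Redis 2.x syntax)
slaveof 10.0.1.5 6379
```

The replica will perform a full sync from the master and then stream subsequent writes.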
Finally, to protect the data going to and from your databases, I would recommend building this inside the private portion of a public/private Virtual Private Cloud. That gives you your own private network, isolated from packet sniffers. You can also employ SSL encryption for your MySQL database connections.
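For example, the stock mysql client can be pointed at the server's CA certificate to encrypt the connection (the certificate path, hostname, and user here are placeholders; for RDS, Amazon publishes a downloadable CA bundle):

```shell
# Connect to MySQL over SSL, verifying the server against the CA cert
mysql --ssl-ca=/etc/mysql/ca.pem -h mydb.example.com -u appuser -p
```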
It is not an environmental setting, but a default setting. Check the source code:
Change it in the configuration file: