I have a scalable load balancer set up right now that mostly follows this template:
https://github.com/satterly/AWSCloudFormation-samples/blob/master/LAMP_Multi_AZ.template
After setting the AWS::AutoScaling::AutoScalingGroup CreationPolicy.ResourceSignal.Count key to 0 (which basically allows the stack to finish creating without receiving any success signals), the stack loads and I can see all resources available.
I can see the public DNS of the instances being created, but I cannot SSH into them.
I have opened up SSH access to everyone in my security group rules, and I can confirm this within the AWS console.
I also configured a route for my VPC as recommended within the official AWS docs: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-ssh-troubleshooting/
I can see the instances exist, but it seems that I am restricting my own access somewhere. However, in the AWS EC2 console the settings look the same as on an instance I am able to SSH into.
Here is my JSON template I'm using where you can view all my settings including my VPC, subnets, Security Groups, etc.: https://gist.github.com/dambrogia/e4cd93a64ae6f3a79d4a58d466f144f8
I am receiving a timeout error from the following command (my id_rsa key is valid within EC2):

ssh -i ~/.ssh/id_rsa ec2-user@<ec2_instance>
Best Answer
The problem is that the CloudFormation template creates a RouteTable with the default route 0.0.0.0/0 correctly pointing to the IGW; however, you don't associate the RouteTable with your subnets. What you need to do is add these two Route Table Associations to the template:
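A minimal sketch of the two AWS::EC2::SubnetRouteTableAssociation resources. The logical IDs PublicSubnet1, PublicSubnet2, and PublicRouteTable are assumptions here; match the Refs to the logical IDs actually used in your template:

```json
"PublicSubnet1RouteTableAssociation" : {
  "Type" : "AWS::EC2::SubnetRouteTableAssociation",
  "Properties" : {
    "SubnetId" : { "Ref" : "PublicSubnet1" },
    "RouteTableId" : { "Ref" : "PublicRouteTable" }
  }
},
"PublicSubnet2RouteTableAssociation" : {
  "Type" : "AWS::EC2::SubnetRouteTableAssociation",
  "Properties" : {
    "SubnetId" : { "Ref" : "PublicSubnet2" },
    "RouteTableId" : { "Ref" : "PublicRouteTable" }
  }
}
```

Each association binds one subnet to the route table, so instances launched in that subnet pick up the 0.0.0.0/0 route to the IGW.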
Then update the stack:
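For example, with the AWS CLI (the stack name and template path are placeholders for your own):

```shell
aws cloudformation update-stack \
    --stack-name <your-stack-name> \
    --template-body file://template.json
```

You can also upload the modified template through the CloudFormation console's "Update stack" action instead.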
And re-check the Route Table
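One way to verify is with the AWS CLI, listing the associations on the route tables in your VPC (the VPC ID is a placeholder):

```shell
aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=<vpc-id> \
    --query 'RouteTables[].Associations'
```

Both subnets should now appear as associated with the route table that has the 0.0.0.0/0 route to the IGW.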
Now you should be able to SSH to the instances:
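Using the same command from the question:

```shell
ssh -i ~/.ssh/id_rsa ec2-user@<ec2_instance>
```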
Hope that helps :)