AWS Network ACLs Breaking SSH Connectivity

access-control-list, amazon-vpc, amazon-web-services, firewall, ssh

I'm trying to set up Network ACLs as a second security layer for production instances. It seems that every time I associate a non-default Network ACL with my private subnet, it breaks all SSH connectivity. I'm not sure what I'm doing wrong.

The Network ACL I'm trying to implement looks like this:

[screenshot of the Network ACL rules]

Notice there are 3 SSH rules there (see the sketch after this list):

  • The first allows connections from instances in the local VPC
  • The second allows connections from a peering VPC
  • The third allows connections from my company's office (I realize I can't connect to private instances over SSH without a Customer Gateway… this rule is there for public instances that use the same Network ACL)
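
For reference, here is a minimal boto3 sketch of how those three inbound SSH rules could be created. The NACL ID, rule numbers, and CIDR blocks (10.0.0.0/16 for the local VPC, 192.168.0.0/16 for the peering VPC, and a documentation IP standing in for the office) are placeholders, not my actual values:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values -- substitute your own NACL ID and CIDR blocks.
NACL_ID = "acl-0123456789abcdef0"
SSH_SOURCES = {
    100: "10.0.0.0/16",      # local VPC
    110: "192.168.0.0/16",   # peering VPC
    120: "203.0.113.10/32",  # office IP (placeholder)
}

# Inbound (Egress=False) TCP/22 ALLOW rules, one per source CIDR.
for rule_number, cidr in SSH_SOURCES.items():
    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID,
        RuleNumber=rule_number,
        Protocol="6",        # TCP
        RuleAction="allow",
        Egress=False,
        CidrBlock=cidr,
        PortRange={"From": 22, "To": 22},
    )
```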

A simple experiment to reproduce the results is as follows:

  1. Make sure all instance subnets are using the Default Network ACL (0.0.0.0/0 ALLOW)
  2. Connect via SSH to a peering instance in the peering VPC (192.168.0.x)
  3. SSH into the private instance via private IP (success)
  4. Disconnect from private instance
  5. Change the private subnet's Network ACL to the one above (a boto3 sketch of this swap follows the list)
  6. Reconnect to the private instance (fails)
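
For anyone scripting the experiment, step 5 can be done with boto3's replace_network_acl_association; the subnet and NACL IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

SUBNET_ID = "subnet-0123456789abcdef0"    # private subnet (placeholder)
CUSTOM_NACL_ID = "acl-0123456789abcdef0"  # the NACL above (placeholder)

# Look up the subnet's current NACL association.
resp = ec2.describe_network_acls(
    Filters=[{"Name": "association.subnet-id", "Values": [SUBNET_ID]}]
)
association_id = next(
    assoc["NetworkAclAssociationId"]
    for acl in resp["NetworkAcls"]
    for assoc in acl["Associations"]
    if assoc["SubnetId"] == SUBNET_ID
)

# Point the subnet at the custom NACL (step 5); the call returns a new
# association ID that replaces the old one.
ec2.replace_network_acl_association(
    AssociationId=association_id,
    NetworkAclId=CUSTOM_NACL_ID,
)
```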

I can repeat the above steps using a public instance in the same VPC (10.0.0.x) via SSH from the office, and the same problem occurs.

I have absolutely no idea what's going wrong. Please advise.

Best Answer

The part I was missing has to do with the Outbound Rules. My Outbound Rules were set to PORT 22 192.168.0.0/0 ALLOW. Since Network ACLs are stateless, the return traffic for an inbound SSH session leaves on an ephemeral port (1024-65535), not on port 22, so the reply packets were being dropped.

I opened all Outbound Rules and SSH is working.
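
Rather than allowing all outbound traffic, a tighter fix is to add outbound rules for the ephemeral port range back to the SSH sources: because NACLs are stateless, the responses to an inbound SSH session leave on a high-numbered port, and AWS recommends opening 1024-65535 to cover the ranges used by different clients. A minimal boto3 sketch with placeholder IDs and CIDRs:

```python
import boto3

ec2 = boto3.client("ec2")

NACL_ID = "acl-0123456789abcdef0"  # placeholder

# Outbound (Egress=True) rules allowing return traffic on ephemeral ports,
# one per CIDR that is allowed to SSH in.
for rule_number, cidr in [(100, "10.0.0.0/16"), (110, "192.168.0.0/16")]:
    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID,
        RuleNumber=rule_number,
        Protocol="6",            # TCP
        RuleAction="allow",
        Egress=True,
        CidrBlock=cidr,
        PortRange={"From": 1024, "To": 65535},
    )
```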