Connecting from an AWS Lambda function or service within a VPC to a customer’s private network over VPN tunnel

amazon-lambda, amazon-vpc, amazon-web-services, vpc-peering, vpn

We are currently running AWS Lambda functions within a VPC. For example, we already have a peering connection set up to MongoDB Atlas so that our Lambda functions within the VPC can communicate with our MongoDB Atlas-hosted database.

Now a requirement has come up: a specific service that runs within the same VPC and is triggered by an AWS Lambda function has to access an on-premises network function/host over VPN. Furthermore, that network needs to be able to send responses back to the service, so I assume a site-to-site connection is needed.

The customer has given us the IKE Phase One parameters, the IKE Phase Two (IPsec) parameters, their local peer IP addresses, the accepted VPN communication ports, and the local encryption domains.

They are now asking for our remote peer IP addresses and remote encryption domains.

Question 1: Is what we're trying to achieve feasible on AWS in a VPC? (I'm reading conflicting posts about this.)

Question 2: Am I correct in assuming that tunnel initiation will have to happen from the customer's side, and that we then use network monitoring/polling to keep the tunnel active?

Best Answer

Regarding question 1.

Assuming you are referring to the ability to use an IPsec-based VPN to securely connect to resources located outside AWS: the answer is yes. However, the native AWS implementation does have some restrictions. The first is that it is not possible to specify any aspects of the Phase 1 or Phase 2 configuration settings. Instead, AWS provides the ability to download pre-configured settings for a range of manufacturers, and also provides some good generic examples.

Some good resources are:

AWS Managed VPN Connections - provides details on the AWS VPN Gateway service

Your Customer Gateway - provides information on the settings required on the device outside of AWS
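If you go the AWS-managed route, provisioning boils down to three objects: a customer gateway describing the on-premises peer, a virtual private gateway attached to your VPC, and a VPN connection tying them together. A rough sketch with the AWS CLI follows; the peer IP, BGP ASN, and all resource IDs are placeholders, not values from the question:

```shell
# Customer gateway: represents the on-premises side (peer IP is a placeholder).
aws ec2 create-customer-gateway \
    --type ipsec.1 \
    --public-ip 203.0.113.10 \
    --bgp-asn 65000

# Virtual private gateway: the AWS side of the tunnel, attached to your VPC.
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0

# The VPN connection itself; StaticRoutesOnly if you are not running BGP.
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --options '{"StaticRoutesOnly":true}'

# With static routing, tell AWS which on-premises prefix lives behind the tunnel.
aws ec2 create-vpn-connection-route \
    --vpn-connection-id vpn-0123456789abcdef0 \
    --destination-cidr-block 10.10.0.0/16
```

On the original poster's concrete question: the remote peer IP addresses the customer is asking for are the tunnel outside IP addresses AWS assigns to the VPN connection (visible in the downloadable configuration file), and the remote encryption domain is the VPC CIDR (or subnet range) that should be reachable over the tunnel.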

Regarding question 2.

This is true: if the tunnel drops for some reason, the AWS side cannot initiate it (a VERY annoying limitation if you ask me). However, there are ways around it. Some devices support sending keep-alive packets to keep the tunnel up. For example, Cisco ASAs can use the IP SLA feature to send SLA messages across the tunnel to keep it up. Extract from the sample ASA configuration:

In order to keep the tunnel in an active or always up state, the ASA needs to send traffic to the subnet defined in acl-amzn. SLA monitoring can be configured to send pings to a destination in the subnet and will keep the tunnel active. This traffic needs to be sent to a target that will return a response. This can be manually tested by sending a ping to the target from the ASA sourced from the outside interface. A possible destination for the ping is an instance within the VPC. For redundancy multiple SLA monitors can be configured to several instances to protect against a single point of failure.

Or you can simply arrange for a system on one side to periodically send a ping - via a cron job or scheduled task.
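The cron-based approach can be a single line: pick a host on the far side of the tunnel that answers ICMP and ping it regularly, so there is always traffic matching the tunnel's encryption domain. A minimal sketch, assuming a Linux host inside the VPC; the file path and target IP are placeholders:

```shell
# /etc/cron.d/vpn-keepalive (hypothetical file): ping an on-premises host
# every minute to generate "interesting traffic" that keeps the tunnel up.
* * * * * root /usr/bin/ping -c 3 -W 2 10.10.0.5 > /dev/null 2>&1
```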

Another option is to deploy your own IPsec gateway into AWS, either running on the instance itself or on a separate instance; you then update the route table on your subnet to route traffic to the off-AWS subnets via that instance. This gives you more control over the IPsec settings and behaviour, but is arguably more complex to manage than the native AWS service.
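With a self-managed gateway (e.g. a strongSwan instance, as one possibility), two pieces of VPC plumbing are needed: source/destination checking must be disabled on the gateway instance so it may forward traffic it did not originate, and the subnet's route table must point the on-premises prefix at that instance. Sketched with the AWS CLI; all IDs and the CIDR are placeholders:

```shell
# The gateway instance forwards traffic, so disable src/dst checking on it.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check

# Send the on-premises prefix via the gateway instance's network interface.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.10.0.0/16 \
    --network-interface-id eni-0123456789abcdef0
```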
