I've evaluated Elastic Beanstalk alongside other AWS offerings while trying to improve our hand-rolled AWS instances. The reason I chose not to use it was the complications that would arise in migrating my existing application, not the offering itself. The catch is that you don't have as much control over application deployment and server configuration. If you are starting a new application, it may be helpful not to deal with those things right now; if you have an existing application, it is more of a challenge to fit into the Beanstalk model.
Beanstalk provides an offering similar to Heroku and other PaaS vendors, but not much additional benefit for those who just want to focus on building their application. You do at least get to control the virtualized resources to a greater degree than with other PaaS vendors.
Problems that I was running into with my application(s):
Git-based deployments - I love them, but our repo is 1+ GB, which is rather large to push on a regular basis. This repo also contains about 40 applications (which really should be split out, but that would take some time). Uploading a package could work, but most of our applications would need a large amount of work to be turned into a package.
Integration with other services - From what I've seen, Beanstalk more or less assumes that anything you are connecting to is a single service. This works well if your services are behind an ELB, but ours were separate nodes that we hit through HAProxy running on each application server. If you are running your datastores and other services as single endpoints, you should be fine.
In my evaluation I also included OpsWorks and CloudFormation. OpsWorks had similar integration issues with how our existing automation worked for these applications. CloudFormation didn't do much more than what some Python scripts and Chef were already taking care of for us.
I wound up choosing AWS Auto Scaling groups instead, with some automation provided by Asgard. This was the smallest change to our existing configuration/application code and gave us the benefit we were looking for: simple management of multiple servers through the AWS API.
The restrictions Elastic Beanstalk puts on your application are actually very helpful. You will have to make sure your application is mostly stateless, exposes a service endpoint, and relies on other services for state. If you are trying to build reusable, stand-alone services that multiple applications can consume, Beanstalk is a great start.
If/when you get to the point of wanting more configuration control, OpsWorks is a great next step. The predefined roles should make the transition easy to start with, and it provides an automation framework around Chef to help coordinate provisioning of multiple servers.
First off, to be clear: no, Elastic Beanstalk is not PaaS in the way you are thinking about it. If you break it into pieces, it's really more like virtualized instance templates plus application deployment automation along the lines of Puppet or Chef. Along with this you get automated access to AWS's load balancer service and CloudWatch monitoring, which lets you start new application servers or shut down existing ones based on metrics.
What makes it feel like PaaS is the main selling point: the application deployment system that takes your code and copies it to all the application servers in your cluster.
One of the complaints some people have about PaaS is that the PaaS vendor makes decisions for you about the application environment. This seems to me like the value proposition of PaaS: as a customer you get to concentrate on the application functionality and leave all the other details to the PaaS vendor. You're paying for someone else to manage the infrastructure and provide system administration. For that simplicity you pay them a premium, as in the case of Heroku, which runs its infrastructure on top of EC2 as well, only in a way that's transparent to you.
Amazon is really offering Elastic Beanstalk on top of EC2 and their REST APIs, and not making much of an effort to hide that from you. This is because they make their money via IaaS, and EB is just orchestrating the setup of a group of EC2 resources you could set up yourself, given the time and know-how.
Now, in terms of the specifics of an AMI: again, AMIs are one of the many EC2 pieces employed to facilitate EB. There is nothing magical about an EB AMI - it's just an Amazon Linux AMI preconfigured to work with EB. Like any other AMI, you can start it up in EC2, tweak it, and derive a new customized AMI from your running instance. Amazon Linux is basically a cross between CentOS and Fedora, with paravirtualization patches and preconfigured yum repos maintained by Amazon.
As you probably know, Amazon Linux is already configured to install security patches at boot time. However, running instances are no different from any other server with regard to patching: patching may interrupt service. If you're extremely concerned about security patching, you can always use a container command to set up cron to run yum update --security at some periodicity.
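As a sketch of that container-command approach, something like the following could live in your application's .ebextensions directory. The file name, cron schedule, and cron.d entry are all assumptions, not a tested recipe:

```yaml
# Hypothetical .ebextensions/security-updates.config (a sketch, not verified).
# Installs a cron entry that applies security updates nightly at 04:00.
container_commands:
  01_security_update_cron:
    command: |
      echo '0 4 * * * root yum update --security -y' > /etc/cron.d/security-updates
```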
You can also use the EB API to alter the EB configuration, or to automate the creation of a new EB environment; you can then swap to it once it's up and ready, and shut down the old one. This is described here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
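That CNAME swap can be driven programmatically. Here's a minimal sketch around boto3's swap_environment_cnames call; the environment names are hypothetical, and the live call assumes boto3 is installed with AWS credentials configured:

```python
# Sketch: blue/green swap of Elastic Beanstalk environment CNAMEs.
# With dry_run=True the helper just returns the request parameters,
# so it can be exercised without touching AWS.

def swap_environment_cnames(source_env: str, dest_env: str, dry_run: bool = True):
    """Swap the CNAMEs of two EB environments (old <-> new)."""
    params = {
        "SourceEnvironmentName": source_env,
        "DestinationEnvironmentName": dest_env,
    }
    if dry_run:
        return params
    import boto3  # requires boto3 and configured AWS credentials
    return boto3.client("elasticbeanstalk").swap_environment_cnames(**params)

# Once the new environment is up and healthy:
# swap_environment_cnames("myapp-old", "myapp-new", dry_run=False)
```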
Like the rest of AWS, there is a way to programmatically access and control every non-SaaS feature, so there's nothing stopping you from creating patched AMIs, using them to create new EB environments, and rolling those out. EB isn't going to force configuration specifics on you, nor is it providing you a system administration group to maintain the infrastructure.
Best Answer
As described by AWS, the DNS server is at the base address of your VPC's CIDR plus two - e.g. if the VPC is 192.168.5.0/24, the resolver is 192.168.5.2. This is likely a suitable resolver.
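That "base plus two" rule is easy to compute; here's a small illustration using Python's standard ipaddress module:

```python
# Compute the Amazon-provided DNS resolver address for a VPC:
# the network base address of the CIDR block, plus two.
import ipaddress

def vpc_dns_resolver(cidr: str) -> str:
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + 2)

print(vpc_dns_resolver("192.168.5.0/24"))  # -> 192.168.5.2
print(vpc_dns_resolver("10.0.0.0/16"))     # -> 10.0.0.2
```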
As Michael has pointed out in the comments, 169.254.169.253 is a DNS resolver with a static IP, so it's easier to carry across VPCs. That's one IP below the instance metadata address (169.254.169.254).
You haven't really described exactly what problem you're having, so it's difficult to give you any more advice.