I'm trying to configure an EC2 instance with known SSH host keys on boot, using cloud-init in a CloudFormation template. But now I can't SSH into the server, which makes it tricky to debug 🙂
Here's the UserData part of the EC2 resource from my template:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#cloud-config", "\n",
"ssh_keys:", "\n",
"- rsa_private: |", "\n",
" -----BEGIN RSA PRIVATE KEY-----", "\n",
" MIIEowCBAAKCAQF71D8K9C/+K0a2fO+S9s441kSI44lF5ml++ewD+Mp115x9", "\n",
" /XwwTlvqxCIpxdzpzq4xXEqH48StHyYIjAOPxoS1/QG0Ti6OqU893PpukLdmV", "\n",
" kLZKn2ph4fTT2aMl...", "\n",
" -----END RSA PRIVATE KEY-----", "\n",
"rsa_public: ssh-rsa AAAAB...", "\n",
I also have entries for (ec)dsa_public/private as per the docs.
Is there some quirk in cloud-init where I also have to specify ssh_authorized_keys? I assume my keypair's public key is no longer being pushed onto my EC2 instance by Amazon…
Best Answer
The UserData from your question is processed by cloud-init at the first boot of your instance; because it starts with `#cloud-config`, it is parsed as YAML rather than run as a shell script, so indentation and structure matter. See the documentation on UserData for more details.
If you want to configure a set of public keys for login, you can use AWS::CloudFormation::Init instead.
In the files section, declare the authorized_keys file:
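For example, a minimal sketch of the instance's Metadata (the `ec2-user` owner and the placeholder key strings are assumptions, not values from the original template):

```json
"Metadata" : {
  "AWS::CloudFormation::Init" : {
    "config" : {
      "files" : {
        "/home/ec2-user/.ssh/authorized_keys" : {
          "content" : { "Fn::Join" : ["\n", [
            "ssh-rsa AAAAB3Nza... alice",
            "ssh-rsa AAAAB3Nza... bob"
          ]]},
          "mode"  : "000600",
          "owner" : "ec2-user",
          "group" : "ec2-user"
        }
      }
    }
  }
}
```

Note that cfn-init only applies this metadata when it runs, so the instance's UserData still needs to invoke `/opt/aws/bin/cfn-init`.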
The user data in this example comes from the Mappings section of the CloudFormation template:
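A sketch of how that can fit together — the mapping names (`Scripts`/`Boot`/`Shebang`) and the logical resource ID `Ec2Instance` are hypothetical, but the cfn-init flags are the standard ones:

```json
"Mappings" : {
  "Scripts" : {
    "Boot" : { "Shebang" : "#!/bin/bash -xe" }
  }
},

"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  { "Fn::FindInMap" : [ "Scripts", "Boot", "Shebang" ] }, "\n",
  "/opt/aws/bin/cfn-init -v",
  " --stack ",    { "Ref" : "AWS::StackName" },
  " --resource Ec2Instance",
  " --region ",   { "Ref" : "AWS::Region" }, "\n"
]]}}
```

Here the UserData really is a shell script: it just calls cfn-init, which reads the AWS::CloudFormation::Init metadata of the named resource and writes out the files declared there.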
Please test it first using a different filename than authorized_keys: the slightest mistake in that file will indeed lock you out of your instance.