I inherited an Amazon-hosted infrastructure consisting of a load balancer, a DB server, and several web servers inside an Auto Scaling group, with deployment coordinated by CloudFormation.
The problem is that the spun-up web servers have a puny root volume (8 GB), and sometimes during a web server's lifetime the disk fills up with logs and temporary files and some services stop working.
I found the part where the machine definition is declared (I think):
...
"Properties": {
  "ImageId": {
    "Fn::FindInMap": [
      "AWSRegionArch2AMI",
      {
        "Ref": "AWS::Region"
      },
      {
        "Fn::FindInMap": [
          "AWSInstanceType2Arch",
          {
            "Ref": "InstanceType"
          },
          "Arch"
        ]
      }
    ]
  },
  "InstanceType": {
    "Ref": "InstanceType"
  },
  "SecurityGroups": [
    {
      "Ref": "WebServerSecurityGroup"
    }
  ],
  "KeyName": {
    "Ref": "KeyName"
  },
  "UserData": {
    "Fn::Base64": {
      "Fn::Join": [
...
The instance type is ultimately defined somewhere else, as a template parameter:
"InstanceType": {
"Description": "WebServer EC2 instance type",
"Type": "String",
"Default": "c4.xlarge",
"AllowedValues": [
"t1.micro",
"t2.nano",
# ... more allowed values
"cg1.4xlarge"
],
"ConstraintDescription": "must be a valid EC2 instance type."
},
But I have no clue how to declare the root volume size, or whether it's possible at all; I can't see an EBS volume declared anywhere.
This is a Linux machine, and the root device is /dev/xvda1:
[ec2-user@ip- /]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.6G  6.2G  21% /
devtmpfs        3.7G   60K  3.7G   1% /dev
tmpfs           3.7G     0  3.7G   0% /dev/shm
I tried the solution suggested by @Jason (although replacing sda1 with xvda1), but CloudFormation won't spin up any new machines after doing that.
I can see that the launch configuration for the machine is updated accordingly after I upload the new template with these changes, but new machines won't spin up automatically any more. 🙁
Best Answer
I had been stuck for days trying to change the root-disk size using CloudFormation; this is how it works for me.
If you use /dev/xvda1 the instance will fail to launch; the correct device name is /dev/xvda. Using YAML format:
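A minimal sketch of the relevant launch-configuration fragment. The resource name, the 50 GB size, and the gp2 volume type are illustrative assumptions; adapt them to your template:

```yaml
# Sketch only: resource name, size, and volume type are assumptions.
WebServerLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !Ref WebServerAmi        # hypothetical parameter; use however your template resolves the AMI
    InstanceType: !Ref InstanceType
    KeyName: !Ref KeyName
    SecurityGroups:
      - !Ref WebServerSecurityGroup
    BlockDeviceMappings:
      - DeviceName: /dev/xvda         # the root device name, NOT /dev/xvda1
        Ebs:
          VolumeSize: 50              # root volume size in GiB
          VolumeType: gp2
          DeleteOnTermination: true
```

Note that an updated launch configuration only applies to instances launched after the change; existing instances in the Auto Scaling group keep their original 8 GB root volume until they are replaced.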