I have the same question, and you've answered it for me. I run a mail server, a DNS server, and a web server (the front end to a separate RDS database instance). I used to run it all on one t2.nano instance (not a CPU powerhouse!) without breaking a sweat, with the CPU credit balance locked at 72 without any dips.
Then I added the following four lines to a cron job that ran every minute (each line uses a different --metric-name):
aws cloudwatch ... --value $(($(df --output=avail / | tail -1)*1024))
aws cloudwatch ... --value $(($(df --output=avail /home | tail -1)*1024))
aws cloudwatch ... --value $(free -b | sed -r 's:Mem([^0-9]*([0-9]*)){6}.*:\2:p;d')
aws cloudwatch ... --value $(free -b | sed -r 's:Swap([^0-9]*([0-9]*)){2}.*:\2:p;d')
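For what it's worth, the sed extractions can be written more plainly with awk. A sketch of equivalent one-liners (assuming GNU df and a procps free whose "Mem:" line ends in the "available" column; the variable names are mine, not the original's):

```shell
# Bytes available on the root filesystem (df --output=avail reports 1K blocks, hence *1024):
root_avail=$(($(df --output=avail / | tail -1) * 1024))

# "available" memory: the 7th field of procps free's "Mem:" line
# (the same 6th number the sed expression above extracts):
mem_avail=$(free -b | awk '/^Mem:/ {print $7}')

# Used swap: the 3rd field of the "Swap:" line (the 2nd number after "Swap"):
swap_used=$(free -b | awk '/^Swap:/ {print $3}')
```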
That caused a continuous decrease in my CPU credit balance, so I changed the cron interval to five minutes, which stabilized the balance, with no apparent further decrease or increase. That's ridiculous!
The eventual cure? I figured it was time to upgrade to a t3.nano instance (two vCPUs instead of one), which I did. Now, with the replacement cron job (below) running every minute, the instance accumulates CPU credits at 5/hour. Working the math on the first cron job running every minute, it comes out to roughly 0.4 CPU credits per hour per aws cloudwatch statement.
It turns out you can send multiple metrics in a single aws cloudwatch statement, which executes in about the same time as one of the statements above:
{ cat <<EOF
[
  {
    "MetricName": "EC2 root",
    "Dimensions": [ { "Name": "Instance", "Value": "i-instance-id" } ],
    "Value": $(($(df --output=avail / | tail -1)*1024)),
    "Unit": "Bytes"
  },
  {
    "MetricName": "EC2 home",
    "Dimensions": [ { "Name": "Instance", "Value": "i-instance-id" } ],
    "Value": $(($(df --output=avail /home | tail -1)*1024)),
    "Unit": "Bytes"
  },
  {
    "MetricName": "EC2 free",
    "Dimensions": [ { "Name": "Instance", "Value": "i-instance-id" } ],
    "Value": $(free -b | sed -r 's:Mem([^0-9]*([0-9]*)){6}.*:\2:p;d'),
    "Unit": "Bytes"
  },
  {
    "MetricName": "EC2 swap",
    "Dimensions": [ { "Name": "Instance", "Value": "i-instance-id" } ],
    "Value": $(free -b | sed -r 's:Swap([^0-9]*([0-9]*)){2}.*:\2:p;d'),
    "Unit": "Bytes"
  }
]
EOF
} | aws cloudwatch put-metric-data --namespace MySpace --metric-data file:///dev/stdin
[Note the use of heredoc syntax, which lets shell expressions be evaluated inside what is otherwise a "text" file.]
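A variation on the same batching idea: build the JSON in a shell variable first, so it can be sanity-checked before anything is sent. This is a sketch with a single metric; the namespace and instance id are placeholders, as above:

```shell
# Capture the heredoc output instead of piping it straight to the CLI:
json=$(cat <<EOF
[
  {
    "MetricName": "EC2 root",
    "Dimensions": [ { "Name": "Instance", "Value": "i-instance-id" } ],
    "Value": $(($(df --output=avail / | tail -1)*1024)),
    "Unit": "Bytes"
  }
]
EOF
)

# Refuse to send malformed JSON to CloudWatch:
echo "$json" | python3 -m json.tool >/dev/null || exit 1

# The CLI also accepts the JSON inline, avoiding /dev/stdin entirely:
# aws cloudwatch put-metric-data --namespace MySpace --metric-data "$json"
```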
Who knows what the CloudWatch agent is doing internally. I came here to see whether running the CloudWatch agent would be more efficient than issuing individual aws cloudwatch statements. Apparently not.