http://www.mysqlperformanceblog.com/2010/09/10/cache-miss-storm/
As it turns out, our issue was a cache miss storm, a.k.a. a cache stampede.
We solved it by implementing a 50% cache-expiry scheme. For every item in memcache we create a second item under a similar key with the string 'regenerate' appended. That marker expires at exactly 50% of the main item's expiry time; once it is gone, the next request knows the cached item is approaching expiry and takes on the job of regenerating it.
This prevents a storm of users all trying to regenerate the cache at the same time, and gives our cache the best chance of always staying fresh!
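The scheme above can be sketched in Python. The blog post doesn't show code, so everything here is illustrative: FakeCache is a minimal in-memory stand-in for a real memcache client, and the TTL, key naming, and fetch function are assumptions about how the pattern might be wired up.

```python
import time

# Hypothetical in-memory stand-in for a memcache client; a real client
# would expose similar get/set calls with an expiry argument.
class FakeCache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, expire):
        self._store[key] = (value, time.time() + expire)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, deadline = item
        if time.time() >= deadline:
            del self._store[key]
            return None
        return value


TTL = 600  # illustrative: main item lives 10 minutes

def fetch(cache, key, regenerate):
    """Return the cached value, regenerating early for ONE request.

    The 'regenerate' marker expires at 50% of the main TTL. The first
    request that finds the marker missing re-sets it immediately (so
    concurrent requests keep serving the still-valid main item) and
    then rebuilds the main item itself.
    """
    value = cache.get(key)
    marker = cache.get(key + ':regenerate')
    if value is None or marker is None:
        # Re-set the marker first so other requests don't also regenerate.
        cache.set(key + ':regenerate', 1, expire=TTL // 2)
        value = regenerate()
        cache.set(key, value, expire=TTL)
    return value
```

With this shape, only the request that observes the missing marker pays the regeneration cost; everyone else keeps reading the old value until the fresh one lands.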
A tricky one to track down!
Did you know there is a slow log table? It is called mysql.slow_log:
mysql> show create table mysql.slow_log\G
*************************** 1. row ***************************
Table: slow_log
Create Table: CREATE TABLE `slow_log` (
`start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`user_host` mediumtext NOT NULL,
`query_time` time NOT NULL,
`lock_time` time NOT NULL,
`rows_sent` int(11) NOT NULL,
`rows_examined` int(11) NOT NULL,
`db` varchar(512) NOT NULL,
`last_insert_id` int(11) NOT NULL,
`insert_id` int(11) NOT NULL,
`server_id` int(10) unsigned NOT NULL,
`sql_text` mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'
1 row in set (0.01 sec)
mysql>
There are two variables you must activate to start recording slow queries in mysql.slow_log:
Here is the situation:
mysql> show variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output | TABLE |
+---------------+-------+
1 row in set (0.01 sec)
mysql> show variables like 'slow_query_log';
+----------------+-------+
| Variable_name | Value |
+----------------+-------+
| slow_query_log | OFF |
+----------------+-------+
1 row in set (0.01 sec)
mysql>
log_output already defaults to TABLE. You will need to enable slow_query_log. Here is what you must do:
If your RDS Instance has no DB Parameter Group...
- Spin up an Amazon RDS MySQL Instance that uses a DB Parameter Group.
- Edit the DB Parameter Group by setting the slow_query_log to 1
- Restart the DB Instance
If your RDS Instance has a DB Parameter Group...
- Edit the DB Parameter Group by setting the slow_query_log to 1
- Restart the DB Instance
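If you prefer the AWS CLI over the console, the parameter-group edit above might look like this (the group and instance names are hypothetical; the change only takes effect after the reboot):

```shell
# Set slow_query_log=1 in a custom DB parameter group (names are illustrative).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-slow-log-params \
    --parameters "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=pending-reboot"

# Reboot so the pending parameter change is applied.
aws rds reboot-db-instance --db-instance-identifier my-rds-instance
```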
Once the Instance has been restarted, log in to MySQL and run this query:
mysql> SELECT SLEEP(15);
Since the default for long_query_time is 10 seconds, the query SELECT SLEEP(15); should show up in mysql.slow_log. Read the RDS Documentation to see if there are any limits on the table.
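Once entries start landing in the table, a query along these lines (column names taken from the CREATE TABLE output above) will show the most recent slow queries:

```sql
-- Most recent slow queries first; expect a full table scan,
-- since the CSV-backed table has no indexes.
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;
```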
CAVEAT
Here is a fair warning: notice that mysql.slow_log uses the CSV storage engine. The table has no keys, so while you can query it, every query does a full table scan. It would be great if you could do the following:
ALTER TABLE mysql.slow_log ENGINE=MyISAM;
ALTER TABLE mysql.slow_log ADD INDEX (start_time);
Then you could query date and time ranges very quickly. I tried to run this, and here is what I got:
mysql> ALTER TABLE mysql.slow_log ENGINE=MyISAM;
ERROR 1044 (42000): Access denied for user 'lwdba'@'%' to database 'mysql'
mysql>
Check the RDS Documentation to see if Amazon can do this for you.
Give it a try!
Best Answer
Try experimenting with the Provisioned IOPS that RDS offers. Regardless of instance size, IO is limited, and you can use the Provisioned IOPS setting, at an additional cost, to improve IO performance.
AWS RDS Provisioned IOPS
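For reference, switching an existing instance to Provisioned IOPS can also be done from the AWS CLI. This is a hedged sketch: the instance identifier and IOPS value are illustrative, Provisioned IOPS requires the io1 storage type, and the IOPS value must fall within Amazon's allowed ratio to allocated storage.

```shell
# Move a (hypothetical) instance to Provisioned IOPS storage.
# This incurs extra cost and may take a while to apply.
aws rds modify-db-instance \
    --db-instance-identifier my-rds-instance \
    --storage-type io1 \
    --iops 1000 \
    --apply-immediately
```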