There are two distinct lines of attack to follow when diagnosing these types of errors:
Firstly, the problem may lie in the actual software in use: something is essentially holding connections and not releasing them back, either at all (a hung thread) or not in a reasonable amount of time (a slow query).
The slow query log is very beneficial in diagnosing problems, but your value of 15 seconds is next to useless: if a query is taking 15 seconds then you're pretty screwed, full stop. As a general rule of thumb, I look for queries taking more than one or two seconds to run. Work through whatever shows up in this log using the EXPLAIN keyword and look at what is causing the slowdown (bad joins, sorts that need a temporary table, etc.) - some clever magic with query caching and indexes can often help if it's not possible to go deeper and tinker with the code or database design.
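As a sketch of that workflow - the table and query here are hypothetical, just something pulled from a slow query log for illustration:

```sql
-- Prefix the offending query with EXPLAIN to see the execution plan
EXPLAIN
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
ORDER BY o.created_at;
```

In the output, watch the `type` column for `ALL` (a full table scan on a join) and the `Extra` column for `Using temporary; Using filesort` - both are common culprits that an appropriate index will often fix.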
Also, don't overlook the general query log in MySQL. Whilst you won't want to leave it turned on for long on a production server, it can quickly tell you if, rather than a single query taking an age, a particular function in the software is hammering the database with hundreds of small queries. Obviously the only way to solve this sort of issue is by refactoring the code.
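Since MySQL 5.1 the general log can be toggled at runtime, so you don't need a restart to take a quick capture:

```sql
-- Turn the general query log on just long enough to reproduce
-- the suspect behaviour, then turn it straight back off.
SET GLOBAL general_log = 'ON';
-- ... exercise the application ...
SET GLOBAL general_log = 'OFF';
```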
Secondly, you need to investigate whether the configuration of the software is to blame. How many concurrent connections are you experiencing? What is max_connections actually set to in MySQL? It may be something as simple as Apache serving, say, 100 concurrent requests while MySQL is only configured to accept 20 connections - obviously something is going to give. If you can gauge how much traffic you are expecting to handle, then it just takes some common sense (and occasionally a dash of Google to find the correct setting) to balance all the components.
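A quick way to compare the configured limit against what the server has actually seen:

```sql
SHOW VARIABLES LIKE 'max_connections';    -- configured connection limit
SHOW STATUS LIKE 'Max_used_connections';  -- high-water mark since the server started
SHOW PROCESSLIST;                         -- who is connected right now, and what they're running
```

If `Max_used_connections` is bumping up against `max_connections`, that mismatch with your web server's concurrency is very likely your problem.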
For starters, I placed these lines in /etc/my.cnf
[mysqld]
log-output=TABLE
slow-query-log
slow-query-log-file=slow-queries.log
When you use the slow log with log_output set to TABLE, the table is NOT created directly in /var/lib/mysql. It is created in the mysql schema's folder, /var/lib/mysql/mysql. The storage engine for the default table-based slow log is CSV. You can check this by doing the following:
use mysql
show tables;
You should see the table slow_log:
MySQL> show create table slow_log\G
*************************** 1. row ***************************
Table: slow_log
Create Table: CREATE TABLE `slow_log` (
`start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`user_host` mediumtext NOT NULL,
`query_time` time NOT NULL,
`lock_time` time NOT NULL,
`rows_sent` int(11) NOT NULL,
`rows_examined` int(11) NOT NULL,
`db` varchar(512) NOT NULL,
`last_insert_id` int(11) NOT NULL,
`insert_id` int(11) NOT NULL,
`server_id` int(10) unsigned NOT NULL,
`sql_text` mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'
1 row in set (0.00 sec)
Here is how to convert the slow log table from CSV to MyISAM:
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log ENGINE = MyISAM;
SET GLOBAL slow_query_log = @old_log_state;
Keep in mind that the converted MyISAM table does not have any indexes.
There is a column called 'start_time', which is a timestamp. Feel free to index it like this:
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log ADD INDEX (start_time);
SET GLOBAL slow_query_log = @old_log_state;
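Once the table is MyISAM and indexed, you can query it directly - for example (using the columns shown in the CREATE TABLE above):

```sql
-- Ten slowest queries recorded in the last day
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
WHERE start_time > NOW() - INTERVAL 1 DAY
ORDER BY query_time DESC
LIMIT 10;
```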
Let us know how this worked out, please!
Best Answer
Use the Percona query digest tool to build summaries of the slow query log. This will help you find the slowest of the slow:
http://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html
Then use MySQL's EXPLAIN command to figure out WHY a particular query is slow:
http://dev.mysql.com/doc/refman/5.0/en/explain.html
Hope this helps