you can use this little awk script to do the job:
awk '{ if ($0 ~ /use \[.*\];/) { if ($2 ~ /db1/) { found = 1; } else { found = 0; }} if (found == 1) { print $0; }}' <mysqllogfile>
Just replace db1 with the database name you are searching for. Applied to your example from above, this script will give you:
use [db1];
SELECT ...
# Time: 090226 11:17:34
# User@Host: user1[user1] @ host [10.0.0.3]
# Query_time: 12 Lock_time: 0 Rows_sent: 0 Rows_examined: 4042560
SELECT ...
# Time: 090226 12:32:40
# User@Host: user2[user2] @ host [10.0.0.3]
# Query_time: 8 Lock_time: 0 Rows_sent: 123390 Rows_examined: 812841
I don't know what your operating system is, but awk/gawk is available for multiple OSes.
For starters, I placed these lines in /etc/my.cnf
[mysqld]
log-output=TABLE
slow-query-log
slow-query-log-file=slow-queries.log
When you use the slow log with log_output set to TABLE, the table is NOT created in /var/lib/mysql. It is created in the mysql schema's folder, /var/lib/mysql/mysql. The storage engine for the default table-based slow log is CSV. You can check this by doing the following:
use mysql
show tables;
You should see the table slow_log
MySQL> show create table slow_log\G
*************************** 1. row ***************************
Table: slow_log
Create Table: CREATE TABLE `slow_log` (
`start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`user_host` mediumtext NOT NULL,
`query_time` time NOT NULL,
`lock_time` time NOT NULL,
`rows_sent` int(11) NOT NULL,
`rows_examined` int(11) NOT NULL,
`db` varchar(512) NOT NULL,
`last_insert_id` int(11) NOT NULL,
`insert_id` int(11) NOT NULL,
`server_id` int(10) unsigned NOT NULL,
`sql_text` mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'
1 row in set (0.00 sec)
Here is how to convert the CSV file for the slow log table to MyISAM
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log ENGINE = MyISAM;
SET GLOBAL slow_query_log = @old_log_state;
Keep in mind that the converted MyISAM table does not have any indexes.
There is a column called 'start_time', which is a timestamp. Feel free to index it like this:
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log ADD INDEX (start_time);
SET GLOBAL slow_query_log = @old_log_state;
Let us know how this worked out, please !!!
Best Answer
I have something a little unorthodox if you really want to have a slow query log for a particular database. Keep in mind that what I am about to suggest works for MySQL 5.1.30 and above:
Step 01) Start off by adding these to /etc/my.cnf
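These should be the same settings shown earlier in this thread:

```ini
[mysqld]
log-output=TABLE
slow-query-log
slow-query-log-file=slow-queries.log
```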
log-output lets you specify the output of general logs and slow logs to be tables rather than text files.
Step 02) service mysql restart
There is a table in the mysql schema called slow_log
In mysql run this
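This would be the same check as in the earlier answer:

```sql
use mysql
show create table slow_log\G
```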
It should be a CSV table.
Step 03) Convert it to MyISAM and Index the Table on the start_time column
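These should be the same statements shown earlier: disable the slow log, convert the engine and add the index, then re-enable it.

```sql
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log ENGINE = MyISAM;
ALTER TABLE mysql.slow_log ADD INDEX (start_time);
SET GLOBAL slow_query_log = @old_log_state;
```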
Running show create table slow_log\G again should now show ENGINE=MyISAM with a KEY on start_time.
Notice that one of the columns is db. You should add an additional index as follows:
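As before, the slow log has to be disabled while altering the table:

```sql
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log ADD INDEX (db);
SET GLOBAL slow_query_log = @old_log_state;
```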
From here, you could perform one of three (3) options:
OPTION 1 : Query mysql.slow_log by the database you want
OPTION 2 : Delete all entries in mysql.slow_log that do not come from whateverdbiwant
OPTION 3 : Copy all entries into another slow_log table for you to query from
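A sketch of each option, using whateverdbiwant as the database name and slow_log_archive as a hypothetical name for the copy table. For OPTION 2 and OPTION 3 you may need to disable the slow log first, as in the earlier steps, since the server restricts operations on an active log table:

```sql
-- OPTION 1: query by the database you want
SELECT * FROM mysql.slow_log WHERE db = 'whateverdbiwant';

-- OPTION 2: prune everything that is not from whateverdbiwant
DELETE FROM mysql.slow_log WHERE db <> 'whateverdbiwant';

-- OPTION 3: copy matching entries into a separate table to query from
CREATE TABLE slow_log_archive ENGINE=MyISAM
SELECT * FROM mysql.slow_log WHERE db = 'whateverdbiwant';
```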
Give it a Try !!!
UPDATE 2011-06-20 18:08 EDT
I have an additional unorthodox idea you might like.
Try moving the MyISAM table's .MYD and .MYI files for the slow log over to another disk volume. Then symlink /var/lib/mysql/mysql/slow_log.MYD and /var/lib/mysql/mysql/slow_log.MYI to the new location of the real .MYD and .MYI.
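A sketch of the file shuffle, assuming the other volume is mounted at /other-volume (a hypothetical mount point); stop mysqld first so the table files are closed:

```shell
service mysql stop
mkdir -p /other-volume/slowlog
mv /var/lib/mysql/mysql/slow_log.MYD /other-volume/slowlog/
mv /var/lib/mysql/mysql/slow_log.MYI /other-volume/slowlog/
ln -s /other-volume/slowlog/slow_log.MYD /var/lib/mysql/mysql/slow_log.MYD
ln -s /other-volume/slowlog/slow_log.MYI /var/lib/mysql/mysql/slow_log.MYI
service mysql start
```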
If you thought that was crazy, here is yet another unorthodox idea you might like.
If you have binary logging turned on already, set up replication to move the slow log to another box. How in the world do you do that???
Step 01) Setup replication slave with this option
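A sketch of the slave's my.cnf, assuming you only want mysql.slow_log replicated:

```ini
[mysqld]
replicate-do-table=mysql.slow_log
```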
Step 02) Activate the slow log on the slave
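On the slave, something like:

```sql
SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 'ON';
```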
Step 03) Run this command on the master
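Presumably this is the engine swap that makes the master's copy of the table a no-op (disable the slow log around the ALTER, as before):

```sql
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log ENGINE = BLACKHOLE;
SET GLOBAL slow_query_log = @old_log_state;
```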
By using the BLACKHOLE storage engine, you eliminate disk I/O for the slow log on the master. The slave is set up so that its sole purpose is to collect entries for mysql.slow_log.