My guess is that you have some long-running queries in your application. When they execute, they keep connections checked out of the pool for a long time (relative to the usual usage pattern). That causes the pool to become exhausted and grow up to its maximum, at which point any remaining workers block waiting for a connection to be released.
The first thing will be to track down when this happens: is it a cyclical event, or random? If it's the former you're in luck, as you can be ready when it happens. If you can't determine a pattern, you'll have to be vigilant.
You may be able to figure this out from your website monitoring logs, or from sar on your database server, by looking for correlating spikes.
If you can catch your database while it's under load, execute the following commands on the MySQL server:
SHOW ENGINE INNODB STATUS;
SHOW PROCESSLIST;
The former will print out diagnostic information about the InnoDB engine (you are using InnoDB, right?); the latter will print out the first 100 characters of each query that is executing (SHOW FULL PROCESSLIST shows the whole statement). Look for queries that have been running for a long time, queries generating temporary tables on disk, and queries that are blocked on a resource.
After that, the hard work begins. Use EXPLAIN to estimate the cost of a query and the resources it uses. Avoid queries that require sorting on disk via a temporary table. Look for long-running reporting jobs, or other scheduled maintenance tasks that periodically lock or saturate your database. It could be something as simple as the backup task, or a job that rolls up old purchase-order data.
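As a sketch of what to look for, here is EXPLAIN run against a hypothetical reporting query (the orders table and its columns are made up for illustration):

```sql
-- Hypothetical reporting query: does it have an index it can use?
EXPLAIN
SELECT customer_id, SUM(total)
FROM   orders
WHERE  created_at < NOW() - INTERVAL 90 DAY
GROUP  BY customer_id;

-- In the output, watch the Extra column for "Using temporary" and
-- "Using filesort", and the rows column for large scan estimates:
-- those are the queries that pin connections and saturate the server.
```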
I recommend having these three settings in your /etc/my.cnf:
log_slow_queries
log-queries-not-using-indexes
long_query_time = 1
For a web application doing 20-30 requests per second, you can't afford to have anything show up in these logs.
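Note that the option names above are from older MySQL releases; on newer servers (5.1+, where log_slow_queries was renamed) a roughly equivalent my.cnf fragment looks like this (the log file path is an assumption, adjust it for your system):

```ini
[mysqld]
# Log queries slower than 1 second, plus anything doing a full scan.
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 1
log_queries_not_using_indexes = 1
```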
btw, IMHO it's pointless to increase your connection pool's size beyond its original size, as this will only delay the onset of pool exhaustion by a few seconds at best, and it only puts more pressure on your db right when it can least afford it.
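A back-of-the-envelope illustration of why (the numbers are assumptions, not measurements from your system): if slow queries are pinning connections and the app is checking out roughly 20 connections per second, then 50 extra pool slots drain in a couple of seconds.

```sql
-- 50 hypothetical extra pool connections, drained at ~20 checkouts/sec
-- while the slow queries hold everything else. Headroom bought:
SELECT 50 / 20 AS seconds_of_headroom;  -- about 2.5 seconds
```

The fix has to be the slow queries themselves, not the pool size.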
Best Answer
It depends.
In general, session creation has too little overhead to worry about, but if sessions are created by the same process it is reasonable to make them persistent. If sessions are established by different processes and the connection rate is high enough, you can run into the server's limits with a lot of idle sessions. In that case you have to close idle connections immediately.
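If you can't make the application close its idle connections, one server-side lever is to shorten MySQL's idle timeouts so abandoned sessions are reaped. A sketch (60 seconds is an arbitrary value; tune it to your traffic, and beware of breaking clients that legitimately sit idle):

```ini
[mysqld]
# Kill connections that have been idle longer than 60 seconds.
wait_timeout        = 60
interactive_timeout = 60
```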
If your MySQL server is used by a local web server, you can connect to it via a Unix file socket instead of a TCP socket. This is a bit faster, but the number of simultaneous connections is still limited by the same server option (max_connections).
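For local clients, the socket can be set once in the client section of my.cnf (the path is distro-dependent; the one below is the Debian/Ubuntu default, so check yours):

```ini
[client]
# Connect through the local Unix socket instead of TCP.
socket = /var/run/mysqld/mysqld.sock
```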