From what I can see there, the table seems fairly self-contained (i.e. you don't need to do any LOJs to pull out normalised data), so MyISAM could certainly have a positive effect on access speed.
Secondly, and most importantly, do you have the correct indexes for your queries? Two million rows is a few, but it's not really that many. You need to go carefully through all your SELECT queries and make sure that you have an appropriate index for each one. This will consume a bit of disk space, but the tradeoff is incredibly fast query times.
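To illustrate what "an appropriate index for each query" buys you, here is a minimal sketch using SQLite purely as a stand-in (hypothetical `invoices` table and `status_ndx` index; on MySQL you would verify the same thing with EXPLAIN):

```python
import sqlite3

# Hypothetical invoice table, using SQLite only to illustrate the principle.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoices (invid INTEGER PRIMARY KEY, status TEXT, created_at TEXT)"
)

query = "SELECT * FROM invoices WHERE status = 'OPEN'"

# Without a supporting index, the WHERE clause forces a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before[0][3])  # e.g. "SCAN invoices"

# An index matched to the query's WHERE clause...
conn.execute("CREATE INDEX status_ndx ON invoices (status)")

# ...lets the optimizer switch to an index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH invoices USING INDEX status_ndx (status=?)"
```

Every SELECT whose plan still shows a full scan is a candidate for its own index.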
Thirdly, and this is just a personal preference that doesn't have much to do with your specific problem: NDATA_INVOICE_USER_ELEMENT_ATTRIBUTE1 through NDATA_INVOICE_USER_ELEMENT_ATTRIBUTE50 could be designed a lot smarter. Move them into a table called DATA_INVOICE_USER_ELEMENT_ATTRIBUTES with a PK of (INVID, ATTRIBUTEID) and store them vertically in there, and immediately you've saved yourself 6.25kb of space per row.
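A minimal sketch of that vertical design, using SQLite as a stand-in for MySQL (the sample invoice IDs, attribute IDs, and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One row per (invoice, attribute) instead of 50 mostly-empty columns.
conn.execute("""
    CREATE TABLE DATA_INVOICE_USER_ELEMENT_ATTRIBUTES (
        INVID       INTEGER NOT NULL,
        ATTRIBUTEID INTEGER NOT NULL,
        VALUE       TEXT,
        PRIMARY KEY (INVID, ATTRIBUTEID)
    )
""")

# Only the attributes an invoice actually uses get stored at all.
conn.executemany(
    "INSERT INTO DATA_INVOICE_USER_ELEMENT_ATTRIBUTES VALUES (?, ?, ?)",
    [(1001, 1, "net30"), (1001, 7, "EUR"), (1002, 1, "net60")],
)

# Fetch all attributes for one invoice via the composite PK.
rows = conn.execute(
    "SELECT ATTRIBUTEID, VALUE FROM DATA_INVOICE_USER_ELEMENT_ATTRIBUTES "
    "WHERE INVID = ? ORDER BY ATTRIBUTEID",
    (1001,),
).fetchall()
print(rows)  # [(1, 'net30'), (7, 'EUR')]
```

An invoice with three populated attributes costs three small rows instead of fifty wide columns, which is where the per-row savings come from.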
Please look carefully at the processlist and the 'show engine innodb status'. What do you see ???
Process IDs 1,2,4,5,6,13 are all trying to run COMMIT.
Who is holding up everything ??? Process ID 40 is running a query against large_table.
Process ID 40 has been running for 33 seconds. Process IDs 1,2,4,5,6,13 have been running for less than 33 seconds. Process ID 40 is processing something. What's the holdup ???
First of all, the query is pounding on large_table's clustered index via MVCC.
Within Process IDs 1,2,4,5,6,13 are rows whose MVCC data protects their transaction isolation. Process ID 40 has a query that is marching through rows of data. If there is an index on the field hotspot_id, that key plus the key to the actual row from the clustered index must take an internal lock. (Note: by design, every non-unique index in InnoDB carries both your key, the column you meant to index, and a clustered index key.) This scenario is essentially the Unstoppable Force meeting the Immovable Object.
In essence, the COMMITs must wait until it is safe to apply changes against large_table. Your situation is not unique, not a one-off, not a rare phenomenon.
I actually answered three questions like this on DBA StackExchange. The questions were submitted by the same person about the same problem. My answers were not the solution, but they helped the submitter come to his own conclusion on how to handle his situation.
In addition to those answers, I answered another person's question about deadlocks in InnoDB with regard to SELECTs.
I hope my past posts on this subject help clarify what was happening to you.
UPDATE 2011-08-25 08:10 EDT
Here is the query from Process ID 40
SELECT * FROM `large_table`
WHERE (`large_table`.`hotspot_id` = 3000064)
ORDER BY discovered_at LIMIT 799000, 1000;
Two observations:
You are doing SELECT *. Do you need to fetch every column ? If you need only specific columns, you should name them explicitly, because the temp table of 1000 rows could otherwise be larger than you really need.
The WHERE and ORDER BY clauses usually give away performance issues, or let good table design shine. You need a mechanism that speeds up the gathering of keys before gathering data.
In light of these two observations, there are two major changes you must make:
MAJOR CHANGE #1 : Refactor the query
Redesign the query so that
- keys are gathered from the index
- only 1000 of them are collected
- those keys are joined back to the main table
Here is the new query, which does these three things:
SELECT large_table.* FROM
large_table INNER JOIN
(
SELECT hotspot_id,discovered_at
FROM large_table
WHERE hotspot_id = 3000064
ORDER BY discovered_at
LIMIT 799000,1000
) large_table_keys
USING (hotspot_id,discovered_at);
The subquery large_table_keys gathers the 1000 keys you need. The result of the subquery is then INNER JOINed back to large_table. This way, only keys are read during the scan instead of whole rows. That's still 799,000 rows to read through, though. There is a better way to get those keys, which leads us to...
MAJOR CHANGE #2 : Create Indexes that Support the Refactored Query
Since the refactored query only features one subquery, you only need to make one index. Here is that index:
ALTER TABLE large_table ADD INDEX hotspot_discovered_ndx (hotspot_id,discovered_at);
Why this particular index ? Look at the WHERE clause: hotspot_id is a static value, so all entries for a given hotspot_id form a sequential list in the index. Now look at the ORDER BY clause: the discovered_at column is probably a DATETIME or TIMESTAMP field.
The natural order this presents in the index is as follows:
- Index features a list of hotspot_ids
- Each hotspot_id has an ordered list of discovered_at fields
Making this index also eliminates doing internal sorting of temp tables.
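To see the sort elimination for yourself, here is a small sketch using SQLite as a stand-in (an empty stand-in table is enough for the plan to change; on MySQL you would confirm the same thing with EXPLAIN, where the "Using filesort" note disappears):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE large_table "
    "(id INTEGER PRIMARY KEY, hotspot_id INTEGER, discovered_at TEXT)"
)

query = ("SELECT hotspot_id, discovered_at FROM large_table "
         "WHERE hotspot_id = 3000064 ORDER BY discovered_at LIMIT 799000, 1000")

# Without the composite index, the engine must sort the matching rows itself.
plan_before = [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query)]
# plan_before contains a "USE TEMP B-TREE FOR ORDER BY" step

conn.execute(
    "CREATE INDEX hotspot_discovered_ndx ON large_table (hotspot_id, discovered_at)"
)

# With it, rows come back pre-sorted straight out of the index: no sort step.
plan_after = [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query)]
```

The index hands the engine rows already ordered by (hotspot_id, discovered_at), so the ORDER BY costs nothing extra.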
Please put these two major changes in place and you will see a difference in running time.
Give it a Try !!!
UPDATE 2011-08-25 08:15 EDT
I looked at your indexes. You still need to create the index I suggested.
Best Answer
The problem lies right in innodb_data_file_path.
According to your comment:
innodb_data_file_path = ibdata1:10M:autoextend:max:1024M
The file ibdata1 houses four types of data:
Table Data
Table Indexes
MVCC (Multiversioning Concurrency Control) Data
Table Metadata
There may simply be no space left to write MVCC data around the old values of the row in ttrss_users that needs to be updated. Try removing the size restriction on ibdata1:
Step 01) Change the line in /etc/my.cnf from this
innodb_data_file_path = ibdata1:10M:autoextend:max:1024M
to this
innodb_data_file_path = ibdata1:10M:autoextend
Step 02)
service mysql restart
Step 03) Try your UPDATE statement
Give it a Try !!!
UPDATE 2011-10-21 17:03 EDT
You may want to clean up ibdata1 and keep InnoDB tables outside of ibdata1 going forward.
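One common way to keep new tables out of ibdata1 is to enable innodb_file_per_table in /etc/my.cnf. Note that this only affects tables created (or rebuilt, e.g. via a dump/reload or ALTER TABLE ... ENGINE=InnoDB) after the setting is in place; existing data stays inside ibdata1 until then:

```ini
[mysqld]
# New InnoDB tables get their own .ibd tablespace file instead of
# growing ibdata1; existing tables stay in ibdata1 until rebuilt.
innodb_file_per_table
```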