You have a server with 256M, but you can't use all of that -- remember there's some OS overhead. Add to that the fact that you're overcommitting, as other folks have mentioned, and you'll definitely thrash here. 256M is only enough for a small DB, and 20 connections is a lot with what you've got configured.
1) reduce your max_connections to 4 (you're only using 3 out of 20)
2) tune your query cache better; 8M per query is really large, and 64M total is a lot given your hits/prunes; try a 4M/32M combo and see how it goes. Really, I think a 2M/24M combo would work for you.
3) you have no sorts requiring temp tables, so why is that max_heap_table_size directive in there? Comment it out and use the default
4) do you actually have 128 tables? Try cutting that table_cache in half, to 64 or even 48
5) reduce thread_cache_size to 4
6) optimize those tables to reduce fragmentation
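Put together, those suggestions would look something like this in my.cnf (a sketch using the values suggested above; the query cache variable names assume the classic query cache, so double-check them against your server version before relying on this):

```ini
[mysqld]
max_connections    = 4     # you're only using 3 of 20
query_cache_limit  = 2M    # per-query cap (was 8M)
query_cache_size   = 24M   # total cache (was 64M)
# max_heap_table_size deliberately omitted -- use the default
table_cache        = 64    # was 128
thread_cache_size  = 4
```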
Those are some things to start with. It looks like you threw a bunch of numbers into a config without any actual profiling to know what you needed, and have created a mess; if all else fails, go back to the defaults, get rid of your custom settings, and start over using some performance tuning guides you can find on Google. Get the output of SHOW VARIABLES and SHOW STATUS, find any one of a bajillion tuning guides, and plug your actual, real numbers into their equations; that'll tell you the exact-ish numbers you need to put in your config file.
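For example, one equation that shows up in most of those guides is the query cache hit rate, computed from two SHOW STATUS counters (a sketch; the numbers below are made up, and the "below ~80%" threshold is a rule of thumb, not gospel):

```python
def query_cache_hit_rate(qcache_hits, com_select):
    """Qcache_hits counts SELECTs served from the query cache;
    Com_select counts SELECTs that missed it and ran normally."""
    total = qcache_hits + com_select
    return qcache_hits / total if total else 0.0

# Plug in your real numbers from SHOW STATUS:
rate = query_cache_hit_rate(qcache_hits=150_000, com_select=50_000)
print(f"{rate:.0%}")  # 75% -- below roughly 80%, the cache may not be earning its RAM
```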
Well, I tried a bunch of stuff and researched a bunch of commands, switches, and structures (not surprisingly, the MySQL docs were the most helpful, if extensive: needle, meet haystack). Eventually some of my tricks worked. It turns out that others had thought of the same tricks, though I did not see the two major parts (recovering the table structure, and recovering the data) covered in one place, so I'm posting them together here.
What I had to do was recreate the ibdata1 file. Unfortunately, while the running daemon detects the databases (the directories), it does not pick up the InnoDB tables inside them (the .ibd/.frm files). So what I did was:
- Empty out the data directory (or move the original aside and create an empty one)
- Run the daemon and let it create a new, empty ibdata1 file
- Import the system tables using the SQL scripts in …\MySQL\share
- Create a dummy database and table of the same names
- Copy over the original .frm file
- Use either DESCRIBE or, better, SHOW CREATE TABLE to extract the table structure
- Next, run DISCARD TABLESPACE on the table
- Copy over the original .ibd file
- Then run IMPORT TABLESPACE
- Finally, restart the daemon with innodb-force-recovery=6
- Run mysqldump to extract the structures and data
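In SQL, the discard/import steps above look roughly like this (`mydb` and `mytab` are placeholder names; this assumes the dummy table already exists with the recovered structure):

```sql
USE mydb;

-- Confirm the recovered structure matches the original .frm
SHOW CREATE TABLE mytab;

-- Detach the dummy table's empty .ibd file
ALTER TABLE mytab DISCARD TABLESPACE;

-- (copy the original mytab.ibd into the database directory now)

-- Attach the original data file
ALTER TABLE mytab IMPORT TABLESPACE;
```

After restarting mysqld with innodb-force-recovery=6, something like `mysqldump mydb mytab > mytab.sql` pulls out the structure and data.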
Of course, it was not always smooth. Some tables were fine, but some required dropping the table and database after SHOW CREATE TABLE and using its output to re-create the table before trying to import the data. Others did not even get that far, and I had to manually pull the column names and comments out of the .frm file with a hex editor (though figuring out the data types, attributes, keys, etc. was a crap-shoot). Also, there were plenty of (too many) daemon and client restarts.
(I'm still looking for a tool that will directly parse .frm/.ibd files (or at least display the table structure from a .frm file), but it looks like nobody has bothered to "reverse-engineer" them, even though MySQL is open-source and the file formats are publicly available. It seems everyone is complacent using the official MySQL tools, which creates a great opportunity for data-recovery firms and proprietary/commercial tools.)
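Short of a real parser, a poor man's substitute for the hex-editor step is to scan the .frm for runs of printable bytes, the way the Unix `strings` utility does (a sketch; it surfaces column names and comments, but tells you nothing about types or keys):

```python
import re

def printable_runs(data: bytes, min_len: int = 4):
    """Return runs of at least min_len printable ASCII bytes,
    decoded to str -- the same trick the `strings` tool uses."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Usage: printable_runs(open("mytable.frm", "rb").read())
```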
The key was to always work with the absolute minimum (e.g., only the mysql directory, i.e., the system tables). While that simplified things and made them easier to work with, it also meant recovering one table at a time, which wasn't a big deal for me but could be for some people.
Anyway, out of the many MySQL recovery pages on the Internet that I saw during the past couple of days, a small handful were pretty useful, and I’ll add them once I scour my history to dig them up.
Hope this can help others in a similar situation.
The short answer: it usually isn't worth it.
The "you must know" answer:
First of all, you must understand that MySQL tables get fragmented when rows are updated, so it's a normal situation. When a table is created, say by importing a dump with data, all rows are stored with no fragmentation in many fixed-size pages. When you update a variable-length row, the page containing that row is split into two or more pages to store the changes, and these new pages contain blank space filling the unused room.
This does not impact performance unless, of course, the fragmentation grows too much. What counts as too much fragmentation? Well, let's look at the query you're after:
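The original query isn't reproduced here, but it is a straightforward read of information_schema (a sketch; `mydb` is a placeholder, so adjust the schema filter to taste):

```sql
SELECT TABLE_NAME,
       DATA_LENGTH, INDEX_LENGTH, DATA_FREE,
       DATA_FREE / (DATA_LENGTH + INDEX_LENGTH) AS frag_ratio
FROM   information_schema.TABLES
WHERE  TABLE_SCHEMA = 'mydb'
  AND  ENGINE = 'InnoDB';
```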
The DATA_LENGTH and INDEX_LENGTH columns are the space your data and indexes are using, and DATA_FREE is the total number of unused bytes across all the table's pages (the fragmentation).
Here's an example from a real production table.
In this case we have a table using (896 + 316) = 1212 MB, with 5 MB of free space. That means a "fragmentation ratio" of 5 / 1212 ≈ 0.004, which is a really low ratio.
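The ratio is just DATA_FREE over (DATA_LENGTH + INDEX_LENGTH); a quick sketch, using the production table figures from the example:

```python
def fragmentation_ratio(data_length_mb, index_length_mb, data_free_mb):
    """Fraction of the table's pages that is unused (blank) space."""
    return data_free_mb / (data_length_mb + index_length_mb)

# The production table above: 896 MB data, 316 MB indexes, 5 MB free
ratio = fragmentation_ratio(896, 316, 5)
print(round(ratio, 4))  # 0.0041
```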
I've worked with tables with a ratio near 0.2 (meaning 20% blank space) and never noticed a slowdown in queries; even after optimizing the table, the performance was the same. But running OPTIMIZE TABLE on an 800MB table takes a long time and locks the table for several minutes, which is impractical in production.
So, weighing what you gain in performance against the time spent optimizing, I prefer NOT to optimize.
If you think it's better for storage, check your ratio and see how much space you would actually reclaim by optimizing. It's usually not much, so again, I prefer NOT to optimize.
And if you do optimize, the very next update will create blank space again by splitting a page in two or more. In fact, it's faster to update a fragmented table than an unfragmented one, because on a fragmented table an update won't necessarily split a page.
I hope this helps you.