It was most likely caused by a query needing to read more pages into the buffer pool, and the buffer pool grabbing more memory to accommodate them. This is how SQL Server is supposed to work: if the box experiences memory pressure, the OS will ask SQL Server to give up some memory, which it will do. The customer shouldn't be concerned.
You can use the DMV sys.dm_os_buffer_descriptors to see how much of the buffer pool memory is being used by each database. This snippet will tell you how many clean and dirty (modified since the last checkpoint or since being read from disk) pages from each database are in the buffer pool. You can modify it further.
SELECT
(CASE WHEN ([is_modified] = 1) THEN 'Dirty' ELSE 'Clean' END) AS 'Page State',
(CASE WHEN ([database_id] = 32767) THEN 'Resource Database' ELSE DB_NAME (database_id) END) AS 'Database Name',
COUNT (*) AS 'Page Count'
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id], [is_modified]
ORDER BY [database_id], [is_modified];
GO
I explain this a little more in this blog post: Inside the Storage Engine: What's in the buffer pool?
You could also check out KB 907877 (How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005), which will give you an idea of the breakdown of the rest of SQL Server's memory usage (but not per-database).
Hope this helps!
Interesting question. Please try to identify the bottleneck of your current application first. From your description and some practical experience, I would guess that you have five possible bottlenecks:
- NIC
- RAM
- HD throughput
- CPU (speed)
- CPU (cores)
Now, RAM should not be an issue: since your db is only 2G, you could buffer not only all keys and indexes but actually the entire db in RAM with 2G, if you size the (MyISAM) key_buffer_size and/or innodb_buffer_pool_size accordingly! You would probably be fine even with RAM < db size, because usually not all parts of a db are used at the same time (YMMV).
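As a rough sketch of that sizing (the variable names are standard MySQL options; the values are illustrative assumptions for a ~2G database, not tuned recommendations):

```ini
# my.cnf -- illustrative sizing for a ~2G, mostly-read database
[mysqld]
# InnoDB: caches both data and index pages; sized to hold the whole db
innodb_buffer_pool_size = 2G
# MyISAM: caches index blocks only (data blocks rely on the OS file cache)
key_buffer_size = 512M
```

Whichever engine dominates your tables should get the larger share of memory.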
Of course RAM is also used for memory tables, sorting and ordering, and some join operations, so you should look at the complexity of the queries your database runs, and whether they return very large result sets. I do not know for certain, but I believe neither WordPress nor MediaWiki does anything really complex there. So just get a moderate amount of RAM.
HD is the usual bottleneck for any large database, but yours can be cached in RAM anyway, and you say that you have mostly read operations. For a normal large database, the rule of thumb would be: HD throughput is the main bottleneck, so 1. buy HDs, and 2. do not necessarily buy the fastest ones, but buy many of them. In your case, I'd say it's all cached anyway.
As for cores:
MySQL can indeed take advantage of many cores, but it mainly needs them for complex calculations, stored procedures, and sort and merge operations. Simple queries like "SELECT * FROM table" or even "SELECT * FROM table WHERE ..." will not benefit much from more cores. Many concurrent connections will see only a minor benefit. My guess is you should prefer a faster processor over many cores.
I believe you should check whether the NIC is the main bottleneck, and think about a second (third, fourth, ...) NIC, depending on the amount of traffic on your primary interface.
So, to sum it all up, I'd spend my money on (in that order):
- more than one NIC (if that is indeed a bottleneck)
- a fast processor
- 2-4 cores
- 2-4G RAM, with the option to plug in 8G later (cheaper than cores, anyway)
- the best possible disk subsystem (you do not need much now, but it will help you expand later)
Cheers, Nik.
There are a few thresholds out there for 'too much', though they're special cases.
In 32-bit land, PAE is what allows you to access memory above the 4GB line. The theoretical max for 32-bit machines is 64GB of RAM, which reflects the extra 4 bits PAE adds to physical memory addresses (36 bits total). 64GB is less than 80GB.
From there we get processor-specific issues. Current 64-bit processors use between 40 and 48 bits internally for addressing physical memory, which gives a maximum memory limit of between 1TB and 256TB. Both far more than 80GB.
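The arithmetic behind those limits is just powers of two; a quick sketch (not tied to any particular CPU model):

```python
def max_memory_gb(address_bits: int) -> int:
    """Return the addressable memory in GiB for a given physical address width."""
    return 2 ** address_bits // 2 ** 30  # divide total bytes by one GiB

print(max_memory_gb(36))  # 32-bit + PAE's 4 extra bits: 64 GiB
print(max_memory_gb(40))  # low end of current 64-bit CPUs: 1024 GiB = 1 TiB
print(max_memory_gb(48))  # high end: 262144 GiB = 256 TiB
```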
Unless he has some clear reasons for why SQL Server can't handle that much memory, the base OS and hardware can do so without breaking a sweat.