Sql-server – How much will SQL Server 2005 performance increase with more RAM
performance, sql-server

I'm looking for opinions and resources on how much performance will improve when more RAM is added to a server. What factors come into play? Is there a general calculation that can be performed?
Related Solutions
It was most likely caused by a query wanting to read more pages into the buffer pool, and the buffer pool grabbing more memory to accommodate that. This is how SQL Server is supposed to work. If the box experiences memory pressure, it will ask SQL Server to give up some memory, which it will do. The customer shouldn't be concerned.
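If the customer does want to bound how much memory SQL Server will grab, the usual knob is the max server memory setting. A minimal sketch, assuming a cap of 4096 MB is appropriate for the box (that value is purely illustrative):

-- Cap the memory SQL Server will use for the buffer pool (the 4096 MB value is an assumption; size it for your server)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
GO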
You can use the DMV sys.dm_os_buffer_descriptors to see how much of the buffer pool memory is being used by which database. This snippet will tell you how many clean and dirty (modified since last checkpoint or read from disk) pages from each database are in the buffer pool. You can modify it further.
SELECT
(CASE WHEN ([is_modified] = 1) THEN 'Dirty' ELSE 'Clean' END) AS 'Page State',
(CASE WHEN ([database_id] = 32767) THEN 'Resource Database' ELSE DB_NAME (database_id) END) AS 'Database Name',
COUNT (*) AS 'Page Count'
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id], [is_modified]
ORDER BY [database_id], [is_modified];
GO
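For example, one way to modify it further is to roll the counts up into megabytes per database (data pages are 8 KB each); treat this as a sketch you can adapt:

SELECT
-- Resource database pages show up under database_id 32767
(CASE WHEN ([database_id] = 32767) THEN 'Resource Database' ELSE DB_NAME (database_id) END) AS 'Database Name',
-- 8 KB pages, so pages * 8 / 1024 = MB in the buffer pool
COUNT (*) * 8 / 1024 AS 'Buffer Pool (MB)'
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id]
ORDER BY COUNT (*) DESC;
GO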
I explain this a little more in the blog post "Inside the Storage Engine: What's in the buffer pool?"
You could also check out KB 907877 (How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005), which will give you an idea of the breakdown of the rest of SQL Server's memory usage (but not per-database).
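If you just want a quick snapshot of that breakdown, the command described in the KB takes no arguments and can be run as-is:

-- Dumps memory clerk, buffer pool, and memory-manager details to the results pane
DBCC MEMORYSTATUS;
GO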
Hope this helps!
For a quick-and-dirty test (i.e. no optimization whatsoever!) I enabled the simple Ubuntu apache2 default website (which just says "It works!") over both http and https (self-signed certificate) on a local Ubuntu 9.04 VM and ran the apache benchmark "ab" with 10,000 requests (no concurrency). Client and server were on the same machine/VM:
Results for http ("ab -n 10000 http://ubuntu904/index.html"):
- Time taken for tests: 2.664 seconds
- Requests per second: 3753.69 (#/sec)
- Time per request: 0.266ms
Results for https ("ab -n 10000 https://ubuntu904/index.html"):
- Time taken for tests: 107.673 seconds
- Requests per second: 92.87 (#/sec)
- Time per request: 10.767ms
If you take a closer look (e.g. with tcpdump or Wireshark) at the TCP/IP communication of a single request, you'll see that the http case requires 10 packets between client and server whereas https requires 16: latency is much higher with https. (More about the importance of latency here.)
Adding keep-alive (ab option -k) to the test improves the situation because now all requests share the same connection, i.e. the SSL overhead is lower, but https is still measurably slower:
Results for http with keep-alive ("ab -k -n 10000 http://ubuntu904/index.html"):
- Time taken for tests: 1.200 seconds
- Requests per second: 8334.86 (#/sec)
- Time per request: 0.120ms
Results for https with keep-alive ("ab -k -n 10000 https://ubuntu904/index.html"):
- Time taken for tests: 2.711 seconds
- Requests per second: 3688.12 (#/sec)
- Time per request: 0.271ms
Conclusion:
- In this simple test case, https is much slower than http.
- It's a good idea to enable https support and benchmark your own website to see whether the https overhead is worth paying.
- Use Wireshark to get an impression of the SSL overhead.
Best Answer
That is entirely subjective...
If you're seeing a performance crunch, you need to identify where the bottleneck is and go from there. Windows Performance Monitor can help with that. Randomly throwing hardware at the problem may not help at all (although extra memory generally doesn't hurt).
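As a rough first check on whether memory specifically is the bottleneck before buying RAM, you can look at the buffer-manager counters SQL Server itself exposes. This is only a sketch; the thresholds commonly quoted for these counters (e.g. a page life expectancy of at least 300 seconds) are rules of thumb, not hard limits:

-- Rough memory-pressure check: a persistently low 'Page life expectancy' combined with
-- high 'Lazy writes/sec' suggests the buffer pool is too small and more RAM may help.
SELECT [counter_name], [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND [counter_name] IN ('Page life expectancy', 'Lazy writes/sec', 'Free pages');
GO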