Generally speaking, SQL Server will allocate as much memory as it is allowed to, without using the paging file. This memory is used as a cache for data pages. (There is also a plan cache and some other stuff, but I think that is out of scope for this question.) This allocation has the effect of "using up" all of the free memory on the server and often worries people who are not familiar with SQL Server.
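If you want to see that allocation for yourself, one quick check (assuming you have VIEW SERVER STATE permission) is the process memory DMV:

```sql
-- How much memory the SQL Server process is actually holding,
-- which is usually more meaningful than what Task Manager shows.
SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
```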
Don't worry about the amount of memory; worry about physical page reads (which come from disk) and logical page reads (which come from memory). Logical page reads aren't as problematic as physical page reads because they complete much more quickly, but they still take a finite amount of time. Even if you had a terabyte of RAM and your entire database were cached in memory, it would still take a finite amount of time for the processors to hunt through all of your data. It is always better if SQL Server has less data to look through, even if it is totally cached in RAM.
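You can watch both kinds of reads for any individual query with SET STATISTICS IO; the table name below is just a stand-in for one of yours:

```sql
SET STATISTICS IO ON;

-- Any query will do; dbo.Orders is a hypothetical table.
SELECT COUNT(*) FROM dbo.Orders;

-- The Messages tab then reports something like:
--   Table 'Orders'. ... logical reads 1234, physical reads 567 ...
-- logical reads  = pages served from the data cache
-- physical reads = pages that had to come from disk
SET STATISTICS IO OFF;
```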
Access to data slows down when your data does not fit in RAM and the server has to read it from the disk.
In your situation, it sounds like SQL Server is reading 20 GB of data. That will not fit in 6 GB of data cache, so SQL Server will re-use memory by loading data from disk over top of "old" data that it thinks it no longer needs. If another query, from another user, comes along 1 second later, SQL Server may have to go back to the disk to re-read that "old" data.
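You can see how the data cache is being divided up with sys.dm_os_buffer_descriptors, which exposes one row per 8 KB page in the buffer pool:

```sql
-- Buffer pool usage per database, in MB.
SELECT CASE database_id
           WHEN 32767 THEN 'ResourceDb'
           ELSE DB_NAME(database_id)
       END AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```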
Tuning queries and improving indexing should result in SQL Server having less data to look through. That means the data it does have to look through is more likely to stay in the data cache, in RAM, and less data will have to be read from disk when the required data isn't already cached.
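As a sketch of what "improving indexing" can look like: if a frequent query filters on one column and returns only a couple of others, a covering index lets SQL Server read a narrow structure instead of the whole table. The table and columns here are made up:

```sql
-- Hypothetical example: queries filter on CustomerId and
-- return only OrderDate and Total.
CREATE INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total);

-- This query can now be answered entirely from the index,
-- touching far fewer pages than a scan of dbo.Orders would.
SELECT OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = 42;
```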
Increasing RAM ("throwing hardware at the problem") also improves the chance of keeping that data in RAM, BUT tuning usually buys a larger improvement (maybe a factor of 10x or more, if you are lucky and the indexing is really bad to start with; I've seen improvements of 60x and better) than increasing the amount of RAM available for caching data can (it is usually impossible to increase RAM by more than 2x or 3x without a new server, or without putting a lot of unplanned stress on your VM environment). Granted, it is usually easier/quicker to plug in new RAM chips than to tune queries, but sometimes you do not have any choice.
Another thing that is worth doing is to make sure that your index statistics are being updated regularly. In environments with no DBA, this maintenance often goes by the wayside. Out-of-date statistics can cause the query engine to use inefficient plans, and this is particularly visible/painful with large tables.
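A quick way to check how stale the statistics on a table are, and to refresh them (the table name is hypothetical; FULLSCAN is the thorough-but-slow option):

```sql
-- When was each statistics object on the table last updated?
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');

-- Refresh statistics on one table...
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- ...or, more bluntly, across the whole current database.
EXEC sp_updatestats;
```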
One last observation: restarting the SQL Server will flush all of the cached data, and that data will have to be reloaded from disk as queries come in after the restart. Usually, that means a SQL Server is more sluggish after a restart, not faster. SQL Servers that are faster after a restart have usually been experiencing excessive blocking (restarting the server throws away all of the blocking and blocked connections) or they have been seeing a storm of queries that cause them to read lots and lots of data from disk (oftentimes, these queries come from distressed users who resubmit the same query several times, a few seconds or minutes apart, because things are "taking too long").
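If you suspect the "faster after a restart" pattern is really blocking, you can check for it directly instead of restarting:

```sql
-- Requests that are currently blocked, and who is blocking them.
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time AS wait_time_ms
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```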
Best Answer
Check the statistics SQL Server publishes via its performance counters (how long pages stay in memory, how often it reloads data from disk), and check how long it waits for read IO on the MDF files. It's that simple.
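"How long pages stay in memory" is the Page life expectancy counter, and you can read it from T-SQL rather than Performance Monitor:

```sql
-- Expected seconds a page survives in the buffer pool.
-- A value that keeps falling means the cache is churning.
SELECT object_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```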
The raw amount of memory in use is as useless for gauging anything as the price of the food I ate for lunch today.
Get the Seconds/IO number; that has relevance. When this number increases then - well - your IO subsystem is overloaded. But then you have to be careful, as RAM is useless for caching transaction log writes, for obvious reasons.
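The per-file equivalent of the Seconds/IO (Avg. Disk sec/Read) counter lives in sys.dm_io_virtual_file_stats. A rough per-file read latency, assuming the instance has been up long enough for the averages to be meaningful:

```sql
-- Average read latency per database file since the instance started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads
       END AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;
```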
Pull down to 16 GB, use common sense, then go up as IO starts to get slow. Sadly, your IO is likely also virtualized, so you cannot use the discs properly. Also, we have no idea how large your database is (which is kind of an important number) or how large your hot set is (the data actively in permanent use).
I would go with 16 GB, watching counters and adjusting as needed.
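Capping the instance at 16 GB is a two-statement change; raise the number later if the IO counters say the cache is too small:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool (and other caches) at 16 GB.
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```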