If data is swapped out to disc and later read back into memory, it is often left allocated in the swap area until swap space runs low.
That way, if the same data needs to be swapped out again later and hasn't changed, the OS can simply drop the pages from RAM without writing anything to disc, saving time.
Swap allocated to data that has been read back into memory will be freed:
- when the relevant pages are no longer needed at all (i.e. are freed by the application),
- when the relevant pages are changed (so the copy on disc is no longer up to date), or
- when the machine runs low on swap space, so the kernel clears entries that are already duplicated in RAM to make room.
Look in /proc/meminfo for the line called "SwapCached". This entry counts pages that are present both in RAM and in swap. For instance, picking a small VM at random, the /proc/meminfo virtual file on one of my VMs shows:
SwapTotal:        698816 kB
SwapFree:         624520 kB
SwapCached:        17232 kB
indicating that 74296K of swap space is allocated, but that 17232K worth of those pages are also currently mapped into RAM (so they could be deallocated from swap at a moment's notice if the space is needed by something else).
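If you want to check these figures on your own machine, here is a minimal sketch using only standard tools (grep and awk against /proc/meminfo; all values are in kB):

```sh
# Show the swap-related counters from /proc/meminfo.
grep -E '^Swap(Total|Free|Cached):' /proc/meminfo

# Allocated swap is SwapTotal - SwapFree; SwapCached is the part of
# that which is also currently held in RAM.
awk '/^SwapTotal:/  {t=$2}
     /^SwapFree:/   {f=$2}
     /^SwapCached:/ {c=$2}
     END {printf "allocated: %d kB, of which also in RAM: %d kB\n", t-f, c}' /proc/meminfo
```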
There will also, no doubt, be pages sitting there that were swapped out ages ago and have never been touched since. The kernel will not reload pages from swap just because there is some free RAM to read them back into, as that free RAM might be better used for cache or buffers; pages written to swap are generally only re-read when they are next needed.
If you want to clear out what is in swap, and you have enough free and/or freeable RAM (i.e. free + cache + buffers, less those parts of the cache and buffer counts that are not freeable right this instant), just turn swap off and back on again with swapoff -a && swapon -a.
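As a rough safety check before doing that, you can compare the swap in use against the kernel's estimate of reclaimable RAM. This is only a sketch: it assumes a kernel new enough (3.14+) to report MemAvailable in /proc/meminfo, and that nothing allocates a lot of memory while the swap is being drained:

```sh
#!/bin/sh
# Swap currently in use, and the kernel's estimate of reclaimable RAM (kB).
used_swap=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
available=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)

# Only cycle swap (needs root) if the swapped-out pages will fit in RAM;
# otherwise swapoff may stall badly or wake the OOM killer.
if [ "$available" -gt "$used_swap" ]; then
    swapoff -a && swapon -a
else
    echo "not enough free/freeable RAM for $used_swap kB of swap" >&2
fi
```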
Of course, you could have a memory leak somewhere too, but that is not the only explanation for the behaviour you are seeing.
Double-check that your script translated the raw beancounters correctly (there is a conversion sketch after the list below); according to this, you only have 256 MB of RAM, not 4 GB as your admin tells you.
Concentrate on only two beans:
- privvmpages - the maximum amount of memory your container can allocate (reserve)
- oomguarpages - the guaranteed amount of memory your container will get to actually use. In a tight memory situation on the host, anything using more than that amount will probably be killed.
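Here is a minimal sketch for reading those two beans from inside the container, assuming the usual OpenVZ layout of /proc/user_beancounters, where *pages values are counted in 4 KiB pages and the columns are resource, held, maxheld, barrier, limit, failcnt:

```sh
# Convert the two relevant beans from 4 KiB pages to megabytes so
# they can be sanity-checked against what the admin claims.
awk '$1 == "privvmpages" || $1 == "oomguarpages" {
         printf "%-13s held=%d MB barrier=%d MB limit=%d MB failcnt=%d\n",
                $1, $2*4/1024, $4*4/1024, $5*4/1024, $6
     }' /proc/user_beancounters
```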
Java is notorious for allocating gobs of memory and then never using them, counting on the OS to overcommit. In my experience you need at least a gig of privvmpages to run a JVM reliably, even though only a couple of dozen megabytes will actually be used.
After a couple of months of experimenting and trying to contain privvmpages in VEs running Java, I have personally given up: I just set the barrier to the max, tweak oomguarpages appropriately, and hope for the best ;)
Best Answer
Yes, you can limit Java's memory consumption. See for example here: http://viralpatel.net/blogs/2009/01/jvm-java-increase-heap-size-setting-heap-size-jvm-heap.html
But 256 MB is a very small amount for the Java world; I suggest you get a bigger VPS.
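For completeness, the standard JVM flags for capping the heap look like this. The values and the jar name are only illustrative; also bear in mind that the JVM uses memory beyond the heap (thread stacks, permgen/metaspace, JIT code cache), so the process will consume more than -Xmx:

```sh
# Start with a 64 MB heap and never let it grow past 128 MB.
java -Xms64m -Xmx128m -jar yourapp.jar
```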