TL;DR version: Let Windows handle your memory/pagefile settings. The people at MS have spent a lot more hours thinking about these issues than most of us sysadmins.
Many people seem to assume that Windows pushes data into the pagefile on demand. E.g.: something wants a lot of memory, and there is not enough RAM to fill the need, so Windows begins madly writing data from RAM to disk at the last minute, so that it can free up RAM for the new demands.
This is incorrect. There's more going on under the hood. Generally speaking, Windows maintains a backing store, meaning that it wants to see everything that's in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what's in pagefile is also in RAM; the data was preemptively placed in pagefile to speed up new memory allocation demands.
Describing the specific mechanisms involved would take many pages (see chapter 7 of Windows Internals, and note that a new edition will soon be available), but there are a few nice things to note. First, much of what's in RAM is intrinsically already on the disk - program code fetched from an executable file or a DLL, for example. This doesn't need to be written to the pagefile; Windows can simply keep track of where the bits were originally fetched from. Second, Windows keeps track of how recently each piece of data in RAM was used, and evicts from RAM the data that has gone longest without being accessed.
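That first point - file-backed memory never needs a pagefile write - is easy to see with a memory-mapped file. Here's a toy Python sketch (a simplified model, not the Windows implementation; the file stands in for an executable or DLL): the mapped pages are backed by the original file, so the OS can drop them from RAM at any time and fault them back in from the file later.

```python
import mmap
import os
import tempfile

# Create a throwaway file standing in for an executable or DLL on disk.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"pretend this is program code" * 100)

with open(path, "rb") as f:
    # Map the file read-only: the mapping's backing store is the file itself,
    # so these pages never need to be written to a pagefile.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        first = m[:7]   # touching the page faults it in from the file
        print(first)    # b'pretend'

os.remove(path)
```

The same idea holds for real executables: since the bits already live in the image file, the OS only has to remember where they came from.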
Removing the pagefile entirely can cause more disk thrashing. Imagine a simple scenario where some app launches and demands 80% of existing RAM. This would force currently resident executable code out of RAM - possibly even OS code. Now every time those other apps - or the OS itself (!) - need access to that code, the OS must page it back in from its backing store on disk, leading to heavy thrashing. That's because without a pagefile to serve as backing store for transient data, the only things that can be paged out are executables and DLLs, which had inherent backing stores to start with.
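The "evict what's gone longest without being accessed" behavior described above can be sketched with a toy least-recently-used model (an illustration only - Windows' real working-set trimming is far more sophisticated, and the page names here are made up):

```python
from collections import OrderedDict

class ToyRam:
    """Toy LRU page-eviction model: RAM holds a fixed number of page
    frames; when full, the least-recently-used page goes to backing store."""

    def __init__(self, frames):
        self.frames = frames        # how many pages "RAM" can hold
        self.pages = OrderedDict()  # page name -> data, oldest access first
        self.evicted = []           # pages pushed out to backing store

    def touch(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)  # recently used: move to the back
        else:
            if len(self.pages) >= self.frames:
                # Evict the page that has gone longest without being accessed.
                victim, _ = self.pages.popitem(last=False)
                self.evicted.append(victim)
            self.pages[page] = "data"

ram = ToyRam(frames=3)
for p in ["os", "app1", "app2", "os", "big"]:  # "big" overflows RAM
    ram.touch(p)

print(ram.evicted)  # ['app1'] - the least recently accessed page was evicted
```

Note that "os" survives the eviction because it was touched again just before "big" arrived; "app1" had gone longest without access, so it was the one pushed out.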
There are of course many resource/utilization scenarios. It's possible that yours is one of the scenarios in which removing the pagefile would have no adverse effects, but those are the minority. In most cases, removing or reducing the pagefile will reduce performance under peak resource utilization.
Some references:
dmo noted a recent Eric Lippert post which helps in understanding virtual memory (though it is less directly related to the question). I'm putting it here because I suspect some people won't scroll down to the other answers - but if you find it valuable, you owe dmo a vote, so use the link to get there!
I have been using FreeNAS on a spare machine with 4x 1 TB hard drives (two RAID 1 arrays, so 2 TB usable). It has been up 24/7 for 6 months.
I find it brilliant!
I tested many NAS devices and only got a maximum of 10 Mb/s on a gigabit port - and that was rare; typically it was around 3-4. My main reason for buying a device was to save energy, but two 2-drive NAS units draw more power than a single Celeron system with an 80+% efficient PSU.
With FreeNAS, I have a Celeron-based machine that cost me under £70, and over its internal 100 Mb network card I can easily push 70 Mb/s via Samba.
The most expensive part was the 4-drive enclosure I bought so I could add/remove hard drives easily. It was a bit of a waste of money, but it looks cool!
I cannot complain about it at all and love the system. I did look at Openfiler, but it seemed a bit OTT, and FreeNAS did what I needed...
To the others who recommended it: I'm not saying Openfiler is bad, but FreeNAS suited my needs perfectly. I boot the machine off a USB stick and it works well... The question was "is FreeNAS reliable?" and my answer has to be yes.
The system uses software RAID, and even though the Celeron is a single-core 64-bit chip, it never goes above 60% CPU - even during a RAID rebuild while someone watches an HDTV episode across the network.
To get it working, I downloaded the full ISO, put a 1 GB USB stick in my laptop, used USB pass-through in VMware Workstation, and booted the VM from the ISO. I then used the install option and chose the USB stick. (You can do this on the actual machine, and I have since - but this was my first time using it and I couldn't find a blank CD!)
I put the USB stick into the machine and booted. It worked fine the first time!
The steps to actually get it usable as a NAS were the following:
- Go into disk management and add each of the four drives.
- Go to format and format all drives for software RAID.
- Go to software RAID and add disks 1 and 2, and disks 3 and 4, each to a new RAID 1.
- Go to format and format both of the new RAID arrays with the standard OS filesystem.
- Mount both RAID arrays.
- Set up Samba and choose both of the mount points as shares.
- Set up a couple of users.
Then it was accessible from Windows via \\ip, using the username and password I chose.
I will be looking at Openfiler again soon, as FreeNAS's AD support is lacking a bit; however, for a SOHO / domainless environment, you can't go wrong with FreeNAS.
Edit (by request - this was too big to fit in the comments):
It seems to be solved in version 8.0.4 of FreeNAS (the i386 build; x64 does not seem to work for some reason...).
The problem with the newer version was that I have a 1 GB CF card and the image is 2 GB. I solved it by installing from the ISO onto the CF card (using my PC).
It now knows about all available memory!
Data from sysctl:
So the problem is with 7.3.