TL;DR version: Let Windows handle your memory/pagefile settings. The people at MS have spent a lot more hours thinking about these issues than most of us sysadmins.
Many people seem to assume that Windows pushes data into the pagefile on demand. E.g.: something wants a lot of memory, there is not enough free RAM to satisfy it, so Windows begins madly writing data from RAM to disk at the last minute, so that it can free up RAM for the new demand.
This is incorrect. There's more going on under the hood. Generally speaking, Windows maintains a backing store, meaning that it wants to see everything that's in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what's in pagefile is also in RAM; the data was preemptively placed in pagefile to speed up new memory allocation demands.
Describing the specific mechanisms involved would take many pages (see chapter 7 of Windows Internals, and note that a new edition will soon be available), but there are a few nice things to note. First, much of what's in RAM is intrinsically already on the disk - program code fetched from an executable file or a DLL for example. So this doesn't need to be written to the pagefile; Windows can simply keep track of where the bits were originally fetched from. Second, Windows keeps track of which data in RAM is most frequently used, and so clears from RAM that data which has gone longest without being accessed.
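To make the "intrinsically already on disk" idea concrete, here is a minimal, illustrative Python sketch (standard library only; the temp file is a stand-in for an EXE or DLL). A read-only memory-mapped file behaves like mapped executable code: the OS can discard those pages from RAM at any time and re-read them from the file later, without ever writing them to the pagefile.

```python
import mmap
import os
import tempfile

# Create a small file to act as the "backing store" (a stand-in for an EXE/DLL).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A" * 4096)
    path = f.name

# Map it read-only: the OS already has these bytes on disk, so it can drop the
# pages from RAM under memory pressure and fault them back in from the file -
# no pagefile write is ever needed for them.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_byte = mm[0]  # touching the page faults it into RAM
    mm.close()

os.remove(path)
```

Anonymous memory (a plain heap allocation) has no such file behind it, which is exactly why the pagefile exists: it is the backing store for everything that was not loaded from a file.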
Removing the pagefile entirely can cause more disk thrashing. Imagine a simple scenario where some app launches and demands 80% of existing RAM. This would force current executable code out of RAM - possibly even OS code. Now every time those other apps - or the OS itself (!!) - need access to that data, the OS must page it in from backing store on disk, leading to heavy thrashing. That's because without a pagefile to serve as backing store for transient data, the only things that can be paged out are executables and DLLs, which had inherent backing stores to start with.
There are of course many resource/utilization scenarios. It's possible that yours is one in which removing the pagefile would have no adverse effects, but those are the minority. In most cases, removing or reducing the pagefile will lead to reduced performance under peak resource utilization.
Some references:
dmo noted a recent Eric Lippert post which helps in understanding virtual memory (though it is less directly related to the question). I'm putting it here because I suspect some people won't scroll down to other answers - but if you find it valuable, you owe dmo a vote, so use the link to get there!
The "1.5 times physical RAM" figure is just a guideline. There are some general pointers about pagefile sizing in this TechNet article, which makes the point:
On server systems, a common objective is to have enough RAM so that there is never a shortage and the pagefile is, essentially, not used. On these systems, having a really large pagefile may serve no useful purpose.
However, for some systems (Domain Controllers, Exchange servers) totally disabling the pagefile is not a good idea. It's specifically contraindicated for DCs, and it's a very bad idea for Exchange servers. I've seen the Exchange behaviour described in that article (extreme disk thrashing caused by paging) on an E2K7 server with 32 GB of physical RAM that wasn't all that busy, where someone had set the pagefile size to 1 GB.
I've never found (or heard of) any specific statements that indicate a paging file is necessary for SQL, apart from the general argument that it helps if something else goes rogue and chews up all physical RAM.
Best Answer
Page faults can be divided into major and minor faults.
Major page faults happen when your program, or its data, was swapped out to disk and now needs to be swapped back in from disk. These faults are marked "major" because swapping to/from disk is very slow compared to CPU speed. As you have plenty of free RAM (about 50%), and disabling swapping entirely did not bring any performance back, I think your problem is not related to major faults.
Minor page faults happen when the CPU accesses a virtual memory address whose page is already in RAM but is not yet mapped into the process's page tables (for example, the first touch of a freshly allocated page, or a page shared with another process), so the kernel only has to fix up the mapping rather than read anything from disk. A large number of minor page faults is expected when running a program sporadically and/or when accessing a large amount of memory. The cost can be exacerbated by a multi-socket NUMA topology (as used by your Opterons) when running non-NUMA-aware programs.
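If you want to watch the two counters yourself, here is a minimal sketch using Python's standard `resource` module (Unix-only; on Windows you'd look at the page fault columns in Task Manager instead). The counts are cumulative for the process.

```python
import resource

# getrusage reports cumulative fault counts for this process (Unix):
#   ru_minflt = minor faults (resolved without any disk I/O)
#   ru_majflt = major faults (page had to be read in from disk/swap)
before = resource.getrusage(resource.RUSAGE_SELF)

# Touching freshly allocated memory generates minor faults, not major ones:
buf = bytearray(10 * 1024 * 1024)  # 10 MiB, zero-filled page by page

after = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults grew by:", after.ru_minflt - before.ru_minflt)
print("major faults grew by:", after.ru_majflt - before.ru_majflt)
```

With plenty of free RAM you should see the minor count jump by roughly one fault per touched page, while the major count stays flat.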
If your program is not NUMA-aware, minor page faults and remote-node memory accesses can be the source of your performance problems. To get a rough idea whether this is the case, try running the program on a single-socket machine (or disable all but one socket on your server) and check whether CPU usage is higher than expected.
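As a quick Linux-specific sanity check before such an experiment, you can ask the kernel how many NUMA nodes it actually sees (a minimal sketch; `numactl --hardware` gives the same information in more detail). If this reports only one node, the NUMA-penalty hypothesis can be ruled out immediately.

```python
import glob

# Linux exposes one sysfs directory per NUMA node; counting them tells you
# whether the machine is multi-node at all. On non-NUMA (or non-Linux)
# systems the glob simply matches nothing.
nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
print("NUMA nodes visible to the kernel:", len(nodes) or 1)
```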
Anyway, only the software house producing the program (or someone very experienced with your specific program) can completely answer your question.