So what extra advantages does SSD give you?
- Persistence (don't lose data in power outage)
- Cost is still lower, and will drop very rapidly compared to RAM over time
- No upper limit to size - you'll see 1TB SSDs before you see a COTS server that accepts 1TB of RAM
- Common interface - you can move the SSD to any other computer and connect it, or even use a USB<-->SATA bridge. You can't do that with RAM without checking the motherboard specs, removing existing memory if the slots are full, etc.
- Can add multiple SSDs to one computer, whereas RAM is ultimately limited.
Why buy an SSD instead of just putting more RAM in your server machine?
When I need fast persistent storage, I use SSD.
When I need fast volatile storage I use RAM.
If the UPS fails, or the motherboard fails, or the software crashes the OS, you lose everything in RAM.
There is simply no substitute for persistent storage.
Further, though you state the cost is similar, the cost of high-performance SSDs is going to drop like a rock over the next two years.
Right now it might make sense if you have read only data, or indexes that you don't mind rebuilding, stored completely in RAM.
In cases where the cost and risk are low, you might even perform more aggressive disk caching against a slower hard drive.
But at the end of the day, if you want persistent storage AND performance, you either buy BOTH a slow hard drive and fast RAM, or you buy a high performance SSD.
In general the SSD is going to be cheaper than both the hard drive and RAM together.
But at any rate, SSDs are still niche items. You don't use an SSD unless you have specific needs.
-Adam
I would use rsync: if it is interrupted for any reason, you can restart it easily at very little cost, and it can even resume part-way through a large file. As others mention, it can exclude files easily. The simplest way to preserve most things is to use the -a ('archive') flag. So:
rsync -a source dest
Although UID/GID and symlinks are preserved by -a (see -lpgo), your question implies you might want a full copy of the filesystem information; and -a doesn't include hard links, extended attributes, or ACLs (on Linux), nor the above plus resource forks (on OS X). Thus, for a robust copy of a filesystem, you'll need to include those flags:
rsync -aHAX source dest # Linux
rsync -aHE source dest # OS X
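The restartability mentioned above can be made even stronger with --partial, which keeps partially transferred files so an interrupted run resumes mid-file instead of starting that file over. A minimal runnable sketch (the directory names and the --exclude pattern are illustrative, not from the question):

```shell
set -e
# Scratch source and destination directories for the demonstration.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/keep.txt"
echo "scratch" > "$src/skip.tmp"

# -a preserves most metadata; --partial keeps half-transferred files so a
# re-run resumes where it left off; --exclude skips files matching a pattern.
rsync -a --partial --exclude='*.tmp' "$src"/ "$dst"/

ls "$dst"   # keep.txt was copied; skip.tmp was excluded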
The default cp will start again from scratch, though the -u flag will "copy only when the SOURCE file is newer than the destination file or when the destination file is missing". And the -a (archive) flag will recurse, won't recopy files if you have to restart, and will preserve permissions. So:
cp -au source dest
Use tmpfs and a big swap partition or file. That filesystem will cache data in memory as long as it can, and swap it out to disk when it no longer fits in RAM.
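A minimal sketch of that setup, assuming you want the fast area at /mnt/fast backed by a 32 GB swap file; the sizes and paths are illustrative, and these commands require root:

```shell
# Create and enable a swap file as the spill-over area (size is an example).
fallocate -l 32G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Mount a tmpfs; size= caps how much it may consume (served from RAM,
# with cold pages pushed out to the swap file under memory pressure).
mkdir -p /mnt/fast
mount -t tmpfs -o size=48G tmpfs /mnt/fast
```

Note that tmpfs contents are still volatile: a reboot or crash empties the mount, so this only suits data you can afford to rebuild.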