I have been using FreeNAS on a spare machine with 4x 1TB hard drives (two RAID 1 arrays, so 2TB usable). It has been up 24/7 for 6 months.
I find it brilliant!
I tested many NAS devices and only got a maximum of 10Mb/s on a gigabit port, and that was rare; typically it was around 3-4. My main reason for a dedicated device was to save energy, but two 2-drive NAS units draw more power than a single Celeron system with an 80+% efficiency PSU.
With FreeNAS, I have a Celeron-based machine that cost me under £70, and on the onboard 100Mb card I can easily push 70Mb/s over Samba.
The most expensive part was a 4-drive enclosure I bought to add/remove hard drives easily. A bit of a waste of money, but it looks cool!
I cannot complain about it at all and love the system. I did look at Openfiler, but it seemed a bit OTT, and FreeNAS did what I needed...
To the others who recommended it: I'm not saying Openfiler is bad, but FreeNAS suited my needs perfectly. I boot the machine off a USB stick and it works well. The question was "is FreeNAS reliable?" and my answer has to be yes.
The system uses software RAID, and even though the Celeron is a single-core 64-bit chip, the CPU never goes above 60%, even during a RAID rebuild while someone watches an HDTV episode across the network.
To get it working, I downloaded the full ISO, put a 1GB USB stick in my laptop, used USB pass-through in VMware Workstation, and booted from the ISO. I then used the install option and chose the USB stick. (You can do this on the actual machine, and I have since, but this was my first time using it and I couldn't find a blank CD!)
I put the USB stick into the machine and booted. It worked fine the first time!
Steps to actually get it usable as a NAS:
- Go into disk management and add each of the four drives.
- Go to format and format all four drives for software RAID.
- Go to software RAID and add disks 1 and 2 to one new RAID 1, and disks 3 and 4 to another.
- Go to format and format both new RAID arrays with the standard filesystem (UFS).
- Mount both RAID arrays.
- Set up Samba and share both mount points.
- Set up a couple of users.
Then it was accessible from Windows via \\ip, using the username and password I chose.
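For the curious, those GUI steps map roughly onto FreeBSD's gmirror tooling, which FreeNAS uses under the hood. A sketch of the equivalent commands, assuming the four drives show up as ada0-ada3 (the device names, mirror names, mount points, share name, and user are all assumptions; the web GUI does all of this for you):

```shell
# Assumed device names ada0-ada3 - check yours with: camcontrol devlist
# Build two RAID 1 mirrors from the four drives.
gmirror label -v store0 /dev/ada0 /dev/ada1
gmirror label -v store1 /dev/ada2 /dev/ada3

# Put a UFS filesystem on each mirror and mount it.
newfs /dev/mirror/store0
newfs /dev/mirror/store1
mkdir -p /mnt/store0 /mnt/store1
mount /dev/mirror/store0 /mnt/store0
mount /dev/mirror/store1 /mnt/store1

# Share one mount point over Samba (fragment appended to smb.conf).
cat >> /usr/local/etc/smb.conf <<'EOF'
[store0]
   path = /mnt/store0
   valid users = myuser
   writable = yes
EOF
```

None of this is required if you stick with the web interface, but it helps to know what the GUI is doing when something needs troubleshooting.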
I will be looking at Openfiler again soon, as FreeNAS's AD support is a bit lacking; however, for a SOHO / domainless environment, you cannot go wrong with FreeNAS.
Edit - by request - this was too big to fit in the comments:
It's been mentioned that RAID is not a backup. VERY TRUE. Keep that in mind.
You're using terabyte-sized disks, which increases the chance of an unrecoverable read error (URE), and that is a MAJOR PAIN IN THE @#$. RAID 5 becomes almost unusable as disks get larger: one of the three disks fails completely, you replace it, and during the rebuild you discover that one of the "good" disks has a spot that can't be read from, so you end up rebuilding completely from backup. We had that happen with a hardware-based RAID (PERC controller).
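To put a rough number on that: with a quoted unrecoverable-read-error rate of 1 in 10^14 bits (a typical spec for consumer drives), a back-of-the-envelope calculation looks like this (assuming independent bit errors, which is optimistic):

```python
import math

def p_ure(bytes_read, ure_rate_bits=1e-14):
    """Probability of at least one unrecoverable read error while
    reading `bytes_read` bytes, assuming independent bit errors at
    the quoted URE rate (1 in 1e14 bits for typical consumer drives)."""
    bits = bytes_read * 8
    # P(no error) = (1 - rate)^bits; use the exp/log1p form so the
    # tiny per-bit rate doesn't vanish in floating point.
    return 1.0 - math.exp(bits * math.log1p(-ure_rate_bits))

# Rebuilding a 3-disk RAID 5 of 1TB drives means reading the two
# surviving 1TB disks end to end:
print(f"{p_ure(2 * 10**12):.1%}")  # prints 14.8%
```

So even with "healthy" surviving drives, a 3x1TB RAID 5 rebuild has a double-digit chance of tripping over an unreadable sector, which is why mirrors plus a real backup are attractive at these disk sizes.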
Your RAID level depends on how you're using the server. I like RAID 1 (mirroring) for most of my purposes: it has very good read performance because it can spread reads across drives, but writes can suffer somewhat; how much depends on your controller and drive speed. Go to Wikipedia and search for RAID to get a rundown of the RAID levels; no one can tell you definitively what to use without knowing your workload, the server's usage, etc.
Do not use rsync for a backup on the same computer. If your controller is fried, something goes weird on the computer itself, or the machine is damaged by flooding, fire, or an electrical surge, you risk the backup getting toasted too. Backup means being able to rebuild your data on new hardware if need be after a catastrophic failure.
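What rsync is good for is pulling the data to a second box. A sketch of a nightly cron entry on the backup machine (the hostname nas, user myuser, and both paths are placeholders):

```shell
# crontab fragment on the *backup* machine - pull the share at 02:30
# nightly over ssh. -a preserves permissions and timestamps; --delete
# mirrors deletions so the copy tracks the source.
30 2 * * * rsync -a --delete myuser@nas:/mnt/store0/ /backups/store0/
```

Note that --delete means an accidental deletion on the NAS propagates to the copy on the next run, so pair this with periodic snapshots or keep a second rotation if that worries you.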
If you're referring to a hardware RAID controller built into the motherboard: don't. Don't, don't, don't. Motherboard RAID is cheap and crappy, and worse than any software-implemented RAID. If you want to go to the trouble of building a production system with RAID, use either the built-in Linux/BSD software RAID or a good RAID card like one from 3ware. Personally, for a server, I'd get a hardware card and check the specs for features like hot-swap support and lighted alarms that indicate WHICH DRIVES have failed.

There's nothing wrong with the performance or reliability of software RAID, but there are many questions along the lines of "I have a drive that failed and don't know which one it is", and if you screw it up you can break your data set or erase the wrong data. System administration is supposed to have some element of making your life easier (hee hee!), and puzzling out which drive is on which cable is which mount point is not fun. The hardware cards are $$, but they often save you a lot of frustration when working out which drive needs replacing.
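One cheap mitigation with software RAID: record which physical drive is which before anything fails. A sketch using smartmontools with FreeBSD device names (ada0-ada3 is an assumption; on Linux they'd be sda, sdb, and so on):

```shell
# Print each disk's serial number so you can write it on the drive
# caddy now; when the mirror later reports ada2 as dead, you'll know
# exactly which bay to pull.
for d in ada0 ada1 ada2 ada3; do
    printf '%s: ' "$d"
    smartctl -i "/dev/$d" | grep -i 'serial number'
done
```

Five minutes with a label maker up front beats guessing which of four identical drives to yank during a degraded rebuild.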
Don't skimp on hard drive speed: the faster, the better, especially if this is a heavily used server. Today's gigabit LANs can easily make the hard disk the bottleneck for big transfers or heavy sharing.
Make sure you have a way to monitor the RAID, and make it a point to periodically check the status of your drives.
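On a gmirror-based FreeNAS box, monitoring can be as simple as mailing yourself the mirror status. A sketch (crontab fragment; the schedule and address are placeholders):

```shell
# crontab fragment: mail the mirror status every morning at 07:00.
# A degraded mirror shows up as DEGRADED instead of COMPLETE, so a
# failed disk can't silently sit broken for months.
0 7 * * * gmirror status | mail -s "NAS mirror status" you@example.com
```

The point is less the exact mechanism than the habit: a mirror only protects you if you notice the first disk dying before the second one does.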
Get a good backup system in place. Any fileserver should have a good second-machine backup, whether to tape or disk. If your server blows up tomorrow, you should be able to get parts in and start restoring everything from scratch if need be. (Unless the business issuing the paychecks can survive without its server, in which case I don't know why you'd be worried about RAID.)
Hope this helps!