Yes, I have encountered this with low-end cards and buggy drivers, but no, never on an up-to-date Adaptec-rebranded card. Wow is all I can say. One thing to consider: it may be more a bug with the drives than with the RAID controller.
I don't have a good answer, but since you seem to have exhausted most of your options other than replacing the card (and replacing the drives did the trick), here are a few ideas to consider in your troubleshooting:
The WD drives were RE (RAID Edition) drives, right? Time-limited error recovery matters here: if you don't have it and the drive is attempting to recover a sector on its own, you are going to get a looooong pause from that drive. If the RAID controller is being patient and not dropping the drive, you'll have a big problem on your hands.
Check the SMART data on the drives you removed and see if there is anything interesting.
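For example, something along these lines with smartmontools (assuming the pulled drive shows up as /dev/sdb; adjust the device name to match your system):

    # Dump the full SMART report, including reallocated and pending sector counts
    smartctl -a /dev/sdb

    # Optionally kick off a long self-test and re-check the report when it finishes
    smartctl -t long /dev/sdb

Pay particular attention to attributes like Reallocated_Sector_Ct, Current_Pending_Sector, and UDMA_CRC_Error_Count.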
Another comment about the importance of the time-limited error recovery (TLER) feature, from a NAS / RAID vendor's support team:
As I mentioned before, we always suggest that customers use enterprise-level drives if they are going to use the drives in a RAID setup. Enterprise-level drives have more consistent response times, so the RAID will be safer.
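As a quick sanity check, many drives expose TLER as SCT Error Recovery Control, which you can query (and on some drives set) with smartctl. This is only a sketch: plenty of desktop drives either don't support it at all or forget the setting on a power cycle.

    # Show the current SCT ERC (TLER) read/write timeouts, if the drive supports it
    smartctl -l scterc /dev/sdb

    # Cap error recovery at 7 seconds (values are in tenths of a second)
    smartctl -l scterc,70,70 /dev/sdb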
LVM is actually quite heavily used. Basically, LVM sits above the hardware (driver) layer. It doesn't add any redundancy or reliability on its own (it relies on the underlying storage system for that); instead, it provides a lot of added flexibility and additional features. LVM should never see a disk disappear or fail, because disk failures should be handled by the RAID layer (be it software or hardware). If you lose a disk and can't keep operating (rebuild the RAID, etc.), then you should be going to backups. Trying to recover data from an incomplete array should never be needed (if it is, you need to re-evaluate your entire design).
Among the things you get with LVM are the ability to easily grow and shrink partitions/filesystems, the ability to dynamically allocate new partitions, and the ability to snapshot existing partitions and mount those snapshots read-only or read-write. Snapshots can be incredibly useful, particularly for things like backups.
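As a rough sketch of what that looks like day to day (the volume group vg0 and the LV names here are made up; substitute your own layout):

    # Grow a logical volume by 10 GiB and resize the filesystem in the same step
    lvextend -L +10G -r /dev/vg0/srv

    # Snapshot an LV, mount it read-only for a backup, then throw it away
    lvcreate -s -L 5G -n srv_snap /dev/vg0/srv
    mount -o ro /dev/vg0/srv_snap /mnt/snap
    # ... run the backup against /mnt/snap ...
    umount /mnt/snap
    lvremove /dev/vg0/srv_snap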
Personally, I use LVM for every partition (except /boot) on every box I build, and I've been doing so for the past four years. Dealing with non-LVM'ed boxes is a huge pain when you want to add to or modify your disk layout. If you're using Linux, you definitely want to use LVM.
As for RAID, I don't do servers without RAID. With disks as cheap as they are, I'd go with RAID1 or RAID10. Faster, simpler, and much more robust.
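If you go the Linux software RAID route instead of a controller, that's a one-liner per array with mdadm (device names below are placeholders):

    # Two-disk RAID1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Four-disk RAID10
    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[b-e]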
Honestly though, unless you're wedded to Ubuntu (which I would normally recommend) or the box is performing other tasks, you might want to look into OpenFiler. It turns your box into a storage appliance with a web interface, handles all of the RAID/LVM/etc. for you, and lets you export the storage over SMB, NFS, iSCSI, and so on. Slick little setup.
Best Answer
Yes, ZFS doesn't need hardware RAID and is actually better off without it.
It can provide partial data protection even on a single device when configured to use ditto blocks, but of course it won't survive a full disk failure in that case.
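Ditto blocks are just the copies property on a dataset; roughly (the pool/dataset name is a placeholder):

    # Keep two copies of every block in this dataset; helps against bad sectors
    # on a single disk, but not against losing the whole disk
    zfs set copies=2 tank/data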
You can use it in a mirror configuration (two or more devices) or a RAID-Z one (three or more devices) to survive disk failures; RAID-Z2 and RAID-Z3 protect against two and three concurrent device failures, respectively.
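For example, something like this (pool and device names are placeholders; ideally you'd use /dev/disk/by-id paths rather than sdX names):

    # Two-way mirror: survives one disk failure
    zpool create tank mirror sdb sdc

    # RAID-Z2 across six disks: survives two concurrent disk failures
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg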