LVM is actually quite heavily used. Basically, LVM sits above the hardware (driver) layer. It doesn't add any redundancy or increased reliability (it relies on the underlying storage system to handle reliability). Instead, it provides a lot of added flexibility and additional features. LVM should never see a disk disappear or fail, because the disk failure should be handled by RAID (be it software or hardware). If you lose a disk and can't continue operating (rebuild the RAID, etc), then you should be going to backups. Trying to recover data from an incomplete array should never be needed (if it is, you need to reevaluate your entire design).
Among the things you get with LVM are the ability to easily grow and shrink partitions/filesystems, the ability to dynamically allocate new partitions, the ability to snapshot existing partitions, and the ability to mount those snapshots as read-only or writable partitions. Snapshots can be incredibly useful, particularly for things like backups.
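As a rough sketch of what that flexibility looks like in practice (the volume group name `vg0`, the LV names, and all sizes here are hypothetical examples, not anything from the setup being discussed):

```shell
# Carve a new 20 GB logical volume out of free space in volume group vg0
lvcreate -L 20G -n data vg0
mkfs.ext4 /dev/vg0/data

# Later, grow it by another 10 GB (ext4 can be grown while mounted)
lvextend -L +10G /dev/vg0/data
resize2fs /dev/vg0/data

# Take a snapshot for a consistent backup, mount it read-only,
# then throw it away when the backup is done
lvcreate -s -L 5G -n data_snap /dev/vg0/data
mount -o ro /dev/vg0/data_snap /mnt/snap
# ... run your backup against /mnt/snap ...
umount /mnt/snap
lvremove /dev/vg0/data_snap
```

Note the snapshot gets its own size (5G here): that is how much change on the origin volume it can absorb before it fills up and is invalidated, so size it for the write rate you expect during the backup window.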
Personally, I use LVM for every partition (except /boot) on every box I build, and I've been doing so for the past 4 years. Dealing with non-LVM'ed boxes is a huge pain when you want to add or modify your disk layout. If you're using Linux, you definitely want to use LVM. [Note: the LVM material above has been updated to better explain what it is and how it fits into the storage equation.]
As for RAID, I don't do servers without RAID. With disk prices as cheap as they are, I'd go with RAID 1 or RAID 10: faster, simpler, and much more robust.
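For reference, here is what setting up a four-disk RAID 10 array with Linux software RAID looks like (the device and array names are examples; substitute your own partitions):

```shell
# Create a RAID 10 array from four partitions (device names are examples)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Persist the array definition so it assembles automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Watch the initial sync progress
cat /proc/mdstat
```

You would then typically put LVM on top of /dev/md0 rather than partitioning it directly, which gives you the flexibility described above plus the redundancy of the mirror.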
Honestly though, unless you're wedded to Ubuntu (which I would normally recommend), or if the box is performing other tasks, you might want to look into OpenFiler. It turns your box into a storage appliance with a web interface and will handle all of the RAID/LVM/etc for you, and allow you to export the storage as SMB, NFS, iSCSI, etc. Slick little setup.
There shouldn't really be any issues mixing disk vendors. Linux software RAID is discussed pretty thoroughly here; you shouldn't have to do anything differently just because the drives are from different vendors.
"There are no other special requirements to the devices from which you build your RAID devices - this gives you a lot of freedom in designing your RAID solution. For example, you can build a RAID from a mix of IDE and SCSI devices, and you can even build a RAID from other RAID devices"
(From the Linux raid wiki)
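The "RAID from other RAID devices" point from that quote can be illustrated with mdadm: striping across two mirrors builds RAID 1+0 by hand (device names here are hypothetical):

```shell
# Two independent mirrors...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# ...striped together into one larger, faster device
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```

In practice `--level=10` does this in a single array, but the layered form shows that md really does treat an existing array as just another block device.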
As far as setting up the RAID during the installer goes, you have to use the "alternate" (text-mode) installer; full instructions for this are on the Ubuntu wiki.
Best Answer
You had me at R5 - don't.
The reason is that in the event of a disk failure you have zero protection until you've replaced the disk and the array has rebuilt.
For large, cheap SATA disks this rebuild process can take DAYS - meanwhile you're at the mercy of a second disk failing, at which point it's game over.
Also this type of disk is rarely happy to work solidly 24 hours a day and I've seen rebuilds kill disks - again making the whole thing rather dubious.
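If you do end up sitting through a software-RAID rebuild, you can at least watch its progress and ETA (the array name below is an example):

```shell
# Show resync/rebuild progress and estimated finish time for all md arrays
cat /proc/mdstat

# More detail on a specific array, including which member is rebuilding
mdadm --detail /dev/md0

# Raise the minimum resync speed (KB/s) if you want the rebuild to
# finish sooner at the cost of more I/O load on the live system
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```

The multi-day rebuild window mentioned above is exactly what you're trying to shrink here - the faster the resync finishes, the shorter your exposure to a second failure.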
If you can, use RAID 10 over 5 or 6; if you insist on 5/6, then use 'enterprise' disks rated for a 24/365 duty cycle.