LVM is fairly lightweight for just normal volumes (without snapshots, for example). It's really just a lookup in a fairly small table saying that block X is actually block Y on device Z. I've never done any benchmarking, but I've also never noticed any performance difference between LVM and using the raw device. It adds a small amount of CPU overhead to the disk I/O path, so I really wouldn't expect much difference.
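You can see that lookup table directly with `dmsetup table` (needs root; the device name and numbers below are illustrative, not from a real system):

```shell
# Inspect the device-mapper table behind an LVM logical volume:
#   dmsetup table /dev/mapper/vg0-data
# A typical "linear" target line looks like (illustrative values):
#   0 209715200 linear 8:16 2048
# i.e. LV sectors 0..209715199 map onto device 8:16 starting at sector 2048.

# Once the segment is found, the remap is just an addition:
map_to_physical() {   # usage: map_to_physical <logical_sector> <segment_start_offset>
  echo $(( $1 + $2 ))
}
map_to_physical 4096 2048    # logical sector 4096 lives at physical sector 6144
```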
My gut reaction is that the reason there are no benchmarks is that there just isn't that much overhead in LVM.
The convenience of LVM, and being able to slice and dice and add more drives, IMHO, far outweighs what little (if any) performance difference there may be.
There are up to 3 levels of alignment you need to keep in mind: 1) the volume manager, 2) volume partitioning, 3) the file system. If you are not using LVM, then 1 is irrelevant. If you are not partitioning your volumes with fdisk, then 2 is irrelevant as well. The most important alignment for performance is 3. With proper alignment you may see up to a 15% boost in performance.
For cases 1 and 2, a good general rule is to align to megabyte boundaries.
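A quick way to sanity-check that rule: with 512-byte sectors, 1 MiB is 2048 sectors, so any start sector must be divisible by 2048 (the sector values below are just examples):

```shell
# Check whether a start sector is 1 MiB aligned (512 B sectors, 1 MiB = 2048 sectors)
is_mib_aligned() {
  if [ $(( $1 % 2048 )) -eq 0 ]; then echo aligned; else echo misaligned; fi
}
is_mib_aligned 2048    # aligned   (modern partitioning default)
is_mib_aligned 63      # misaligned (old fdisk DOS-compatible default)
```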
1) LVM usually does a good job by a) placing its metadata at the end of the volume and b) giving you the option of specifying the metadata size (for example "pvcreate -M2 --metadatasize 2048K --metadatacopies 2").
2) If you need to partition any of these volumes with fdisk, then again try to stick to MB boundaries. Modern Linux fdisk versions have this option, as do recent versions of gparted.
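As a sketch, you can see the MB-boundary behaviour with sfdisk on a scratch image file, no root needed (on real hardware you would point it at the disk instead; the file name and sizes are arbitrary):

```shell
# Create a scratch "disk" and partition it starting at sector 2048 (= 1 MiB)
truncate -s 64M disk.img
printf 'label: dos\nstart=2048, type=83\n' | sfdisk -q disk.img
sfdisk -d disk.img | grep start        # first partition starts at sector 2048
```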
3) Aligning the file system is the most important of all. I have experience aligning xfs and ext3 (ext4 should be similar to ext3); you will need to do some math here and then pass the right parameters when creating the file system. Look at the documentation for the specific parameters, namely something called "stripe width". Be careful with the interpretation, though: depending on the fs type it is expressed either in 512B blocks or in bytes, so you will need to do your calculations accordingly. The interpretation also depends on the number of drives in the RAID array and the RAID level. You may also find some useful info in this thread.
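To make the math concrete, here is a sketch with example numbers (a 6-drive RAID-5 with a 64 KiB chunk; your chunk size, drive count and RAID level will differ, so redo the arithmetic for your array):

```shell
# RAID-5 over 6 drives: one drive's worth of each stripe is parity,
# so 5 drives carry data. Full stripe = data_drives * chunk size.
chunk_kb=64; drives=6; data_drives=$(( drives - 1 ))
stripe_kb=$(( chunk_kb * data_drives ))
echo "$stripe_kb"                      # 320 (KiB per full stripe)

# ext3/ext4 express this in filesystem blocks (4 KiB here),
# as mkfs -E stride=...,stripe_width=...
stride=$(( chunk_kb / 4 ))             # 16
stripe_width=$(( stripe_kb / 4 ))      # 80
echo "$stride $stripe_width"

# xfs takes sizes directly, e.g.:  mkfs.xfs -d su=64k,sw=5 /dev/...
```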
You can also specify parameters when mounting a file system that may improve performance even further. Here are the parameters I use with my 18TB xfs file system: "noatime,attr2,nobarrier,logbufs=8,logbsize=256k". But be careful: these are not universal rules, and if used incorrectly they may compromise the reliability of your system (especially "nobarrier").
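In /etc/fstab that would look something like the fragment below (the device path and mount point are placeholders; keep or drop "nobarrier" per the warning above):

```shell
# /etc/fstab fragment (placeholder device and mount point):
# /dev/vg0/big   /srv/big   xfs   noatime,attr2,nobarrier,logbufs=8,logbsize=256k   0 0
```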
Another thing to keep in mind: if you are planning future expansion of any of these RAID arrays, take it into account when you create the file systems, since growing the array will inevitably affect your carefully chosen alignment ;-)
I hope this points you in the right direction. Have fun :-)
Best Answer
An LVM volume is a normal block device, so you can configure per-process I/O priorities on an LVM volume using cgroups. For more information, read this doc: https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt
example:
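A minimal sketch along the lines of that kernel doc, using the cgroup-v1 blkio controller (needs root; the cgroup name "lowprio" and the 253:0 device number are examples - check your LV's numbers with `ls -l /dev/mapper/`):

```shell
# Create a low-priority I/O cgroup and lower its proportional weight
mkdir /sys/fs/cgroup/blkio/lowprio
echo 100 > /sys/fs/cgroup/blkio/lowprio/blkio.weight      # well below the default

# Optionally hard-cap reads on the LVM device (major:minor bytes-per-second)
echo "253:0 10485760" > /sys/fs/cgroup/blkio/lowprio/blkio.throttle.read_bps_device

# Move the current shell (and its children) into the cgroup
echo $$ > /sys/fs/cgroup/blkio/lowprio/cgroup.procs
```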