You should make sure you're running the latest OpenManage, though it's possible the 5/i simply exposes more information than the 5/e does.
I have a Perc 5/i and under OpenManage Server Administrator I can go System -> Storage -> PERC 5/i Integrated -> Virtual Disks and the very last column is Stripe Size.
I'm running OMSA v5.0.0.
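If you prefer the command line, OMSA also ships the `omreport` utility, which shows the same virtual disk details, including Stripe Size. (Controller ID 0 is just an assumption here; list your controllers first to find the right ID.)

```
# List controllers to find the controller ID
omreport storage controller

# Show virtual disk details (including Stripe Size) for controller 0
omreport storage vdisk controller=0
```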
Please use RAID 1+0 with your controller and drive setup. If you need more capacity, a nested RAID level like RAID 50/60 could work. You can get away with RAID 5 on a small number of enterprise SAS disks (8 drives or fewer) because the rebuild times aren't bad. However, RAID 5 across 24 drives would be a terrible mistake. (Oh, and disable the individual disks' write caches... dangerous.)
There are many facets to I/O and local storage performance: I/O operations per second (IOPS), throughput, and storage latency. RAID 1+0 is a good balance between these. Positive aspects here are that you're using enterprise disks, a capable hardware controller and a good number of disks. How much capacity do you require?
You may run into limits to the number of drives you can use within a virtual disk group. PERC/LSI controllers traditionally limited this to 16 drives for single RAID levels and RAID 1+0. The user guide confirms this. You wouldn't be able to use all 24 disks in a single RAID 5 or a single RAID 1+0 group.
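To put rough numbers on the trade-off under that 16-drive group limit (the 600 GB drive size below is purely an assumption for illustration):

```shell
# Hypothetical example: 600 GB drives, 16-drive max per disk group
drive_gb=600

# RAID 1+0 across 16 drives: half the raw capacity goes to mirroring
raid10_gb=$(( 16 / 2 * drive_gb ))
echo "RAID 1+0 (16 drives): ${raid10_gb} GB usable"        # 4800 GB

# Two 12-drive RAID 5 groups: one drive's worth of parity per group
raid5_gb=$(( 2 * (12 - 1) * drive_gb ))
echo "2 x RAID 5 (12 drives each): ${raid5_gb} GB usable"  # 13200 GB
```

That is the usual tension: the RAID 5 layout gives nearly three times the usable space, but with the rebuild-time risk described above.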
Another aspect to consider, depending on your workload, is that you can leverage SSD caching via the LSI CacheCade functionality on certain PERC controllers. It may not be available on your controller, but understanding your I/O patterns will help tailor the storage solution.
As far as ext4 filesystem creation options, much of it will be abstracted by your hardware RAID controller. You should be able to create a filesystem without any special options here. The parameters you're referring to will have more of an impact on a software RAID solution.
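For comparison, here is what you would compute on a software RAID array, where those mkfs options do matter. This is only a sketch; the 64 KB chunk size, 4 KB filesystem block, and 10 data-bearing drives are assumptions you'd replace with your actual geometry:

```shell
# Assumed geometry (relevant for software RAID; hardware RAID hides this)
chunk_kb=64        # md chunk size in KB
block_kb=4         # ext4 block size in KB
data_disks=10      # data-bearing drives (e.g. a 12-drive RAID 6 has 10)

stride=$(( chunk_kb / block_kb ))         # filesystem blocks per chunk
stripe_width=$(( stride * data_disks ))   # blocks per full stripe
echo "stride=${stride} stripe_width=${stripe_width}"   # stride=16 stripe_width=160

# You would then pass these to mkfs, e.g.:
# mkfs.ext4 -E stride=${stride},stripe-width=${stripe_width} /dev/md0
```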
I found an Oracle tuning guide (google docs link) that seemed to imply 1 MB stripes, but I also found a blog post claiming that replication is done in 512 KB blocks.