I'm assuming you mean Megabytes/sec there, not Megabits/sec.
A few things to check, in case any of them are causing unwanted additional traffic or overhead:
- Unbind the standard Windows networking protocols from the iSCSI network.
- Make sure you're using the most up-to-date iSCSI initiator from Microsoft.
- Set jumbo frames to 9k end to end.
- Disable Windows Indexing on that drive if it's enabled; likewise disable antivirus scanning while you're testing performance.
- Check that you're not seeing any errors on the iSCSI NIC or on the switch.
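One quick way to confirm jumbo frames are actually working end to end is a don't-fragment ping sized for the jumbo payload. This is a sketch for a Linux host; TARGET is a placeholder for your iSCSI portal address (on Windows the rough equivalent is ping -f -l 8972):

```shell
# A 9000-byte MTU leaves 8972 bytes of ICMP payload
# (9000 minus a 20-byte IP header minus an 8-byte ICMP header).
PAYLOAD=$((9000 - 20 - 8))
echo "$PAYLOAD"   # → 8972

# With the don't-fragment bit set, this ping only succeeds if every
# hop to the target passes jumbo frames (TARGET is a placeholder):
# ping -M do -c 3 -s "$PAYLOAD" "$TARGET"
```

If the ping fails with a "message too long" error, something in the path (NIC, switch port, or target) is still at a 1500-byte MTU.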
Online Reconfiguration
The technique you used to change rr_min_io is what multipathd does for you under the covers. The user-friendly way to adjust values in a running map is echo reconfigure | multipathd -k
For example, here's a NetApp LUN whose rr_min_io is currently 128:
# dmsetup table
360a98000534b504d6834654d53793373: 0 33484800 multipath 0 1 alua 2 1 round-robin 0 2 1 8:16 128 8:32 128 round-robin 0 2 1 8:64 128 8:48 128
360a98000534b504d6834654d53793373-part1: 0 33484736 linear 251:0 64
/etc/multipath.conf was changed so that rr_min_io is now 1000. Then:
# echo reconfigure | multipathd -k
multipathd> reconfigure
ok
To verify the change:
# dmsetup table
360a98000534b504d6834654d53793373: 0 33484800 multipath 0 1 alua 2 1 round-robin 0 2 1 8:16 1000 8:32 1000 round-robin 0 2 1 8:48 1000 8:64 1000
360a98000534b504d6834654d53793373-part1: 0 33484736 linear 251:0 64
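Rather than eyeballing the table, you can pull the per-path rr_min_io fields out of the dmsetup output to check that every path picked up the new value. A small sketch, assuming the round-robin line format shown above (run here against a copy of the sample line instead of live output):

```shell
# Sample multipath table line; in practice pipe `dmsetup table` in instead
table='360a98000534b504d6834654d53793373: 0 33484800 multipath 0 1 alua 2 1 round-robin 0 2 1 8:16 1000 8:32 1000 round-robin 0 2 1 8:48 1000 8:64 1000'

# Each path appears as "major:minor rr_min_io"; print the distinct values
echo "$table" | grep -oE '[0-9]+:[0-9]+ [0-9]+' | awk '{print $2}' | sort -u
# → 1000
```

A single value in the output means all paths agree; multiple values would mean the reconfigure didn't reach every path group.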
I agree multipathd could do a better job of advertising and reporting the additional variables it uses. Whatever multipathd doesn't report, dmsetup does, but that doesn't necessarily mean that using dmsetup directly is the best way to reconfigure those settings. The reconfigure command works for just about everything.
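For reference, the setting itself lives in /etc/multipath.conf. A minimal stanza might look like the following (putting it in the defaults section is illustrative; you may want it scoped to a devices entry for your array instead):

```
defaults {
    rr_min_io 1000
}
```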
Active-Active load balancing
The deployment guide says your SAN is active-active, but that term gets misused in the industry. In practice it can mean "dual active": a LUN can only be accessed by a single storage processor at any one time, but both controllers can be active and drive distinct LUNs; they just can't load-balance to the same LUN.
Here's the relevant passage from p79, under the load balancing section:
Two sessions with one TCP connection are configured from the host to each controller (one
session per port), for a total of four sessions. The multi-path failover driver balances
I/O access across the sessions to the ports on the same controller. In a duplex
configuration, with virtual disks on each controller, creating sessions using each of the
iSCSI data ports of both controllers increases bandwidth and provides load balancing
Note the plural use of "virtual disks" in the context of a duplex configuration; it doesn't call out the same disk. This appears to be a dual-active deployment. True active-active SANs are usually reserved for Fibre Channel deployments. Maybe iSCSI SANs exist that accomplish this, but I haven't come across one, though I don't deploy iSCSI extensively either.
Best Answer
iSCSI performance depends a lot on the quality of your networking equipment. A few considerations:
On my home OpenSolaris NAS I briefly tested iSCSI, and the performance from my Windows-based initiator was terrible until I switched to jumbo frames.