I manage a lot of SuperMicro servers using the onboard IPMI. I have a love/hate relationship with the shared (aka sideband) ethernet. In general, the way these things work is that LAN1 appears to have 2 (different) MAC addresses - one is for the IPMI interface, the other for your standard Broadcom NIC. Traffic to the IPMI interface (layer 2, based on the MAC address) is magically intercepted below the operating system level and never seen by whatever OS is running.
You've already hit on the one good point for them: less cabling. Now let me cover some of the downsides:
- It's particularly difficult to partition the IPMI interface onto a separate subnet in a secure manner. Since the traffic all goes over the same cable, you (almost) always have to have the IPMI interface and the LAN1 interface on the same IP subnet. On the latest motherboards, the IPMI cards support assigning a VLAN to the IPMI NIC, so you can get some semblance of separation - but the underlying OS can always sniff the traffic for that VLAN. Older BMC controllers don't allow changing the VLAN at all; tools like ipmitool or ipmicfg will ostensibly let you change it, but it just doesn't work.
- You're centralizing your failure points on the system. Reconfiguring a switch and managed to cut yourself off somehow? Congratulations, you've now cut off the primary network connection to your server AND the backup via IPMI. NIC hardware failure? Congratulations, same problem.
- Early SuperMicro IPMI BMCs were notorious for doing wonky things with the network interface. Whether to use the onboard vs. dedicated IPMI port was often determined at power-on (not restart), and would not toggle from there. If you had a power outage and your switch didn't provide power quickly enough, you could end up with the IPMI failing to work because it autodetected the wrong setting.
- I've personally had lots of weird, unexplainable connectivity issues getting the sideband IPMI working reliably. Sometimes I simply couldn't ping the interface IP for a few minutes. Sometimes I'd get a storm of packets on the assigned VLAN, but the traffic all appeared to be dropped.
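On boards that do support changing the IPMI VLAN, the tag can be set through ipmitool's standard LAN-configuration commands. A sketch, assuming channel 1 is the BMC's LAN channel (common, but board-dependent) and VLAN 100 is a hypothetical tag:

```shell
# Show the current LAN settings for channel 1 (includes "802.1q VLAN ID")
ipmitool lan print 1

# Tag the BMC's traffic with VLAN 100 (hypothetical VLAN number)
ipmitool lan set 1 vlan id 100

# Disable VLAN tagging again
ipmitool lan set 1 vlan id off
```

As noted above, some older BMCs will accept these commands and silently ignore them, so verify with `lan print` and a ping from the target VLAN.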
While this has nothing to do with sideband-vs.-dedicated, I'll also note that the tools for accessing host systems are very poorly written. Older IPMI cards don't support anything other than local authentication, making password rotation a total pain. If you're using the KVM-over-IP functionality, you're stuck using an improperly-signed, expired Java applet or a weird Java desktop application that only works on Windows and requires UAC elevation to run. I've found the keyboard entry to be spotty at best, sometimes getting "stuck keys" such that it's impossible to type a password to log in without trying 10 times.
I've eventually managed to get 40+ systems working with this arrangement. Most of my systems are new enough that I could VLAN the IPMI interfaces onto a separate subnet, and I mostly use the serial console via ipmitool, which works very well. For the next generation of servers, I'm looking at Intel's AMT technology with KVM support; as it makes its way into the server space, I can see it replacing IPMI.
There are a couple of issues here:
The "ipmitool" command on it's own uses a local interface to the ipmi controller. This is why you need to load the modules in order to use ipmitool from the same host. If you're on a remote host you can use ipmitool over the network, using something like "ipmitool -I lan -H hostname -U username -P password chassis status", substituing appropriate values for hostname, username and password.
If you're not using the dedicated IPMI controller ethernet port, then you may need to actively tell the IPMI controller to use the onboard ethernet port. These IPMI controllers default to an "auto fallback": if you have an ethernet cable plugged into the dedicated LAN port at the time the IPMI controller is powered up, it will use the dedicated port, otherwise it will fall back. So if you've changed your mind about which port to use, this might be occurring.
The onboard port the IPMI controller piggybacks on is LAN1. Are you sure you're using LAN1? It may not be the same as the interface that your Linux install thinks is eth0.
Finally, I've definitely seen connectivity issues when using IPMI over a non-dedicated port. The way the ethernet controller in the IPMI piggybacks onto your host ethernet port can result in DHCP issues, as well as network card driver crashes. I've also seen the situation where the IPMI IP address on a non-dedicated port is accessible from a remote machine, but not from the local one (which isn't a problem generally, because you can use the ipmitool kernel interface anyway).
I always advocate using a dedicated port where available.
In all cases, to reset the IPMI controller you need to either use the ipmitool interface once you get that working, or physically remove power from the machine (off at the wall/PDU, etc. - turning the machine off with the front button isn't enough, as the IPMI controller is still powered).
Best Answer
Jiri's on the right track with the three options (Dedicated, Share, Failover) for the IPMI interface. The short answer is that yes, you can use LAN1 instead of the dedicated IPMI port, and it generally works that way with the default BIOS settings. It's not possible to run the IPMI on the LAN2 interface.
Here's a more detailed description of the three options:
Dedicated: Always use the dedicated IPMI interface. This is the option you want if you're trying to have the simplest setup, at the expense of additional cabling.
Shared: Always use the LAN1 interface. This is the option you want if you're trying to reduce your cabling to each server, and understand the tradeoffs. Under the covers, there's a virtual switch in hardware that's splitting out traffic to the IPMI card from traffic to the rest of the system; the IPMI card has a separate MAC address to differentiate the traffic. On modern Supermicro boards, you can also set the IPMI traffic to run on a different VLAN from the rest of the system, so you can tag the IPMI traffic. There are some definite security implications to this design; it's not difficult for the main system to access the IPMI network, if you were trying to keep them separated. A failure of the LAN1 interface often means that you lose primary and out-of-band connectivity at the same time.
Failover (factory default): On boot, detect if the dedicated IPMI interface is connected. If so, use the dedicated interface, otherwise fall back to the shared LAN1. I've never found a good use for this option. As best I can tell, this setup is fundamentally flawed - I haven't tested it extensively, but I've heard reports that it'll fail to detect the dedicated interface in many circumstances because the upstream switch isn't passing traffic - for example, after a power outage if the switch and system come up simultaneously, or if the switch is still blocking during spanning tree detection. Combine this with the fact that the check only happens at boot, and it's just generally hard to control which interface you end up using.