I usually check the Cisco Gigabit Ethernet Transceiver Modules Compatibility Matrix and the 10-Gigabit Ethernet Transceiver Modules Compatibility Matrix. However, neither of them contains any reference to the SG500X.
Anyway: regarding the switch, SFP-H10GB-CUxM is listed in the datasheet of the 500 Series as an option for 10G connectivity, so you should have no problem attaching it to the switch. Regarding the NIC, its specification states that it supports the SFF-8431 standard, and according to the 10-Gigabit Ethernet Transceiver Modules Compatibility Matrix the SFP-H10GB-CUxM also supports SFF-8431 (as you can see below), so they should be able to work together.
Regulatory and Standards Compliance
Standards:
• GR-20-CORE: Generic Requirements for Optical Fiber and Optical Fiber Cable
• GR-326-CORE: Generic Requirements for Single-Mode Optical Connectors and Jumper Assemblies
• GR-1435-CORE: Generic Requirements for Multifiber Optical Connectors
• IEEE 802.3: 10-Gigabit Ethernet
• ITU-T G.709: Interfaces for the Optical Transport Network
• ITU-T G.975: GFEC
• ITU-T G.975.1: EFEC
• SFP+ MSA SFF-8431 (Optical Modules, Active Optical Cables, and Passive Twinax cables)
• SFP+ MSA SFF-8461 (Active Twinax cables)
It depends on how the interfaces are bonded.
One approach is active/backup: only one NIC is actually active. If its link goes down, the other NIC takes over the first NIC's MAC address, or the system issues a gratuitous ARP with its own MAC address so that everyone updates their ARP tables.
A close variant of this method is that both NICs are used to send, but only one is used to receive.
Any other configuration requires the cooperation of the switches or the sending parties.
Note that unless the switch and the end device agree on a configuration, you could get some bad behavior. For example, the switch might not know which port actually has which MAC and will instead flood ALL traffic for that MAC. Or you could get a non-functional link.
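To make the gratuitous-ARP failover step concrete, here is a small sketch of what such a frame looks like on the wire. The MAC and IP values are made up for illustration; sender and target IP are both set to the bond's own IP, which is what makes the ARP "gratuitous" and causes neighbors to refresh their ARP entries:

```python
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build a gratuitous ARP request: sender IP and target IP are both
    our own IP, so any host caching that IP updates its ARP entry."""
    eth = struct.pack("!6s6sH",
                      b"\xff" * 6,   # destination: Ethernet broadcast
                      mac,           # source: the surviving NIC's MAC
                      0x0806)        # EtherType: ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1,             # HTYPE: Ethernet
                      0x0800,        # PTYPE: IPv4
                      6, 4,          # HLEN, PLEN
                      1,             # OPER: request
                      mac, ip,       # sender MAC / sender IP
                      b"\x00" * 6,   # target MAC: unknown
                      ip)            # target IP: our own IP again
    return eth + arp

# Hypothetical addresses, for illustration only.
frame = gratuitous_arp(b"\x02\x00\x00\x00\x00\x01", bytes([192, 168, 1, 10]))
print(len(frame))  # 42 (14-byte Ethernet header + 28-byte ARP payload)
```

Actually injecting such a frame would require a raw socket and privileges; the point here is only the packet layout.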
Since you are using Adaptive Load Balancing, I will explain this mode.
Outgoing packets are split based on load.
Incoming packets are a bit trickier. When an ARP request is received, the MAC sent back depends on the requester's IP address. For example, if client A sends an ARP request for your IP, it will get the MAC of NIC 1. Later, when client B sends an ARP request, it will get the MAC of NIC 2. That way clients are split among the available NICs.
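The receive-side splitting described above can be sketched as a toy model (the MAC addresses and class are invented for illustration, not any real bonding API): each new client that ARPs for the bond's IP is answered with the next NIC's MAC, round-robin, and repeat requests from the same client get the same answer.

```python
from itertools import cycle

# Hypothetical MACs for a two-NIC adaptive-load-balancing bond.
NIC_MACS = ["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"]

class AlbBond:
    """Toy model of ALB receive load balancing via ARP replies."""
    def __init__(self, macs):
        self._next = cycle(macs)
        self._table = {}           # client IP -> MAC we answered with

    def arp_reply(self, client_ip: str) -> str:
        # First request from a client picks the next NIC in rotation;
        # later requests stay sticky so its traffic keeps one receive path.
        if client_ip not in self._table:
            self._table[client_ip] = next(self._next)
        return self._table[client_ip]

bond = AlbBond(NIC_MACS)
print(bond.arp_reply("10.0.0.5"))  # client A -> aa:bb:cc:00:00:01
print(bond.arp_reply("10.0.0.6"))  # client B -> aa:bb:cc:00:00:02
print(bond.arp_reply("10.0.0.5"))  # client A again -> same MAC as before
```

A real driver also has to handle clients leaving and links failing (rebalancing the table), which this sketch omits.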
The Ethernet standards (at least for 100 megabit and gigabit; I'm not 100% sure about other speeds) are logically defined in terms of a MAC and a PHY connected via a standardized medium-independent interface.
The MAC handles the transition between frames in a buffer and line rate streams of data. On legacy half-duplex networks it also handles medium access control. On the receive side it performs basic filtering of incoming packets.
The PHY handles stuff that is specific to the physical medium, encoding the data stream into the correct form for the particular physical medium and driving/receiving it from the lines at the correct voltage level.
However, just because the standard splits things up in a particular way does not mean they are required to be implemented that way. There are a number of variants of the medium-independent interface for each speed, trading off bus width against clock speed, and when a MAC and PHY are integrated on the same chip, the standardized medium-independent interface may be eliminated altogether.
In practice, a typical 100M or 1G copper network card (I'm not sure what the exact situation is with faster cards and fiber cards) will have the MAC, PHY, and PCI or PCIe interface integrated onto a single chip. Many higher-end controllers also integrate additional functionality to offload work from the host's network stack. Similarly, for USB network adapters the MAC, PHY, and USB interface will be integrated on a single chip.
On the other hand the embedded world often uses an arrangement where the MAC is integrated as part of the main "system on chip", then connected to a separate PHY over some variant of MII.
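In that split arrangement the MAC also manages the external PHY over the two-wire MDIO bus, using the Clause 22 register set from IEEE 802.3. The sketch below models that register interface in miniature; the register numbers and bit positions are standard, but the register contents are invented for illustration:

```python
# Toy model of the IEEE 802.3 Clause 22 MDIO registers a MAC reads to
# manage an external PHY. Register numbers/bits are standard; the values
# stored here are made up for illustration.
PHY_REGS = {
    0: 0x1140,  # BMCR: basic control (e.g. auto-negotiation enable)
    1: 0x796D,  # BMSR: basic status (bit 2 = link up)
    2: 0x0022,  # PHY identifier, high word (OUI bits)
    3: 0x1611,  # PHY identifier, low word
}

def mdio_read(phy_addr: int, reg: int) -> int:
    """Stand-in for a read cycle on the MDIO management bus."""
    # Both the PHY address and the register number are 5-bit fields.
    assert 0 <= phy_addr < 32 and 0 <= reg < 32
    return PHY_REGS[reg]

# A MAC driver typically polls BMSR to learn whether the link is up.
link_up = bool(mdio_read(0, 1) & (1 << 2))  # BMSR bit 2 = link status
print(link_up)
```

On real hardware the read would toggle the MDC/MDIO pins (or hit a MAC register that does so); everything else about the register layout is as the standard defines it.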