Linux NIC – How to Increase Ring Parameters on a Server

Tags: ethernet, gigabit-ethernet, nic

I used the ethtool utility to increase the RX and TX ring values for the NIC on one of our servers, then ran the following command to check the settings:

ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             2040
RX Mini:        0
RX Jumbo:       8160
TX:             255
Current hardware settings:
RX:             2040
RX Mini:        0
RX Jumbo:       0
TX:             255
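
For reference, ethtool -g only reads the ring settings; the actual increase is done with the capital -G option, and it is capped at the pre-set maximums shown above:

ethtool -G eth0 rx 2040 tx 255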

Can I change the pre-set maximums on the card in some way, or are they a hardware limitation? The NIC we have on the server is:
Broadcom NetXtreme II BCM5709 1000Base-T

Best Answer

Well, there's the example of the pre-set maximum ring buffer figures on Broadcom bnx2 devices being modified in the kernel from 1020 to 2040 a few years ago, so it is possible.

diff --git a/drivers/net/bnx2.h b/drivers/net/bnx2.h
index efdfbc2..62ac83e 100644
--- a/drivers/net/bnx2.h
+++ b/drivers/net/bnx2.h
@@ -6502,8 +6502,8 @@ struct l2_fhdr {
 #define TX_DESC_CNT  (BCM_PAGE_SIZE / sizeof(struct tx_bd))
 #define MAX_TX_DESC_CNT (TX_DESC_CNT - 1)

-#define MAX_RX_RINGS        4
-#define MAX_RX_PG_RINGS        16
+#define MAX_RX_RINGS        8
+#define MAX_RX_PG_RINGS        32
 #define RX_DESC_CNT  (BCM_PAGE_SIZE / sizeof(struct rx_bd))
 #define MAX_RX_DESC_CNT (RX_DESC_CNT - 1)
 #define MAX_TOTAL_RX_DESC_CNT (MAX_RX_DESC_CNT * MAX_RX_RINGS)

You can attempt something similar; I've seen those MAX_RX_RINGS and MAX_RX_PG_RINGS values pushed to 16 and 64 in certain kernel/driver builds. These Broadcom chips are routinely the onboard NICs on Dell PowerEdge and HP ProLiant servers, and a few people in my industry would hack the drivers to make the NICs a bit more usable. Before you do that, though, it makes sense to understand where your performance issues actually are. Also note that other NIC models/drivers ship with bigger ring buffers than this Broadcom.
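
If you want to experiment with the driver itself, the change amounts to patching that header and rebuilding the bnx2 module against your kernel source. A rough sketch, assuming a configured 2.6-era tree under /usr/src/linux (the path and exact build steps are illustrative; your distro's kernel build process may differ):

cd /usr/src/linux
# Raise MAX_RX_RINGS / MAX_RX_PG_RINGS in drivers/net/bnx2.h, then
# prepare the tree and build just that one module:
make oldconfig && make prepare && make scripts
make drivers/net/bnx2.ko
# Swap in the rebuilt module (this drops the link briefly):
rmmod bnx2
insmod drivers/net/bnx2.ko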

Intel:

# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:     4096
RX Mini:    0
RX Jumbo:   0
TX:     4096
Current hardware settings:
RX:     4096
RX Mini:    0
RX Jumbo:   0
TX:     2048

Try profiling your application to see where the drops actually occur. You didn't specify an OS distribution or version, so I can't give much distro-specific advice, but a handy portable tool is dropwatch; it shows whether drops are happening at the link, IP, or application layer.

# dropwatch -l kas

1 drops at tcp_rcv_established+916 (0xffffffff814ae5c6)
2 drops at tcp_v4_rcv+aa (0xffffffff814b78aa)
2 drops at tcp_rcv_established+916 (0xffffffff814ae5c6)
1 drops at skb_copy_datagram_from_iovec+2fe (0xffffffff81455dde)
1 drops at skb_copy_datagram_from_iovec+2fe (0xffffffff81455dde)
2 drops at tcp_v4_rcv+aa (0xffffffff814b78aa)
2 drops at skb_copy_datagram_from_iovec+2fe (0xffffffff81455dde)
1 drops at tcp_v4_rcv+aa (0xffffffff814b78aa)
1 drops at tcp_v4_rcv+aa (0xffffffff814b78aa)
18 drops at unix_stream_connect+1dc (0xffffffff814f4cdc)
2 drops at tcp_v4_rcv+aa (0xffffffff814b78aa)
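
If the drops turn out to be at the driver/ring level rather than in the stack, the NIC's own statistics will usually show it. Counter names vary by driver (on bnx2, rx_fw_discards is the one to watch), but something like:

ethtool -S eth0 | grep -i discard
ip -s link show eth0     # per-interface dropped/overrun counters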