Linux – interrupt coalescing for high-bandwidth packet capture

ethernet, linux, nic, packet-capture, redhat

I have an application that captures packets from an Ethernet card. Once in a while we see dropped packets (we suspect a buffer in the network card or the kernel is being overrun). I am trying to figure out whether turning on interrupt coalescing will help or worsen the situation. On the one hand, there should be less work for the CPU, since there are fewer interrupts to process; on the other hand, if the IRQs are serviced less frequently, there seems to be a higher probability of a buffer being overrun. Does that mean I should turn it on and also increase the rmem_max setting?
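For what it's worth, here is roughly what I was planning to try. This is only a sketch: eth0 stands in for our capture interface, the numbers are guesses, and I don't know which coalescing parameters our driver actually accepts (ethtool -c shows what it reports).

  # Show the current interrupt coalescing settings for the interface
  # (eth0 is a placeholder for the capture interface).
  ethtool -c eth0

  # Raise RX coalescing: interrupt after 50 us or 32 frames, whichever
  # comes first. Parameter names and supported ranges vary by driver.
  ethtool -C eth0 rx-usecs 50 rx-frames 32

  # Raise the ceiling on receive socket buffers so the capture socket
  # can ask for a bigger buffer (the value here is illustrative).
  sysctl -w net.core.rmem_max=33554432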

Updated to include OS/HW details:

Dell PowerEdge 1950, Dual Quad-Core Xeon X5460 @ 3.16GHz
Broadcom NetXtreme II BCM5708
Linux OS

/proc/sys/net/core
  dev_weight                 64
  message_burst              10
  message_cost               5
  netdev_budget              300
  netdev_max_backlog         65536
  optmem_max                 20480
  rmem_default               110592
  rmem_max                   16777216
  rps_sock_overflow_entries  0
  somaxconn                  128
  warnings                   1
  wmem_default               110592
  wmem_max                   16777216
  xfrm_acq_expires           30
  xfrm_aevent_etime          10
  xfrm_aevent_rseqth         2
  xfrm_larval_drop           1
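Since netdev_max_backlog shows up there: I understand /proc/net/softnet_stat indicates whether the kernel's per-CPU backlog queue itself is overflowing, which would be a different bottleneck than the NIC ring. A quick check (the awk one-liner assumes GNU awk for strtonum):

  # One line per CPU; the second hex column counts packets dropped
  # because the backlog queue (net.core.netdev_max_backlog) was full.
  cat /proc/net/softnet_stat

  # Flag CPUs with a nonzero backlog-drop counter (requires gawk).
  awk '{ d = strtonum("0x" $2); if (d) printf "cpu%d: %d drops\n", NR-1, d }' /proc/net/softnet_stat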

Best Answer

Without knowing why you're dropping packets, it's impossible to say whether it will help. Your analysis is fundamentally correct: if interrupts are serviced less often, there is a greater chance of buffers filling up, all else being equal. But until you know where and why you're losing packets, you can't tell whether making that change will improve the situation.
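If you want to narrow that down, the usual first step is to see which layer is counting the drops. Something along these lines; eth0 is a placeholder, and the ethtool -S counter names depend on the driver, so grep broadly:

  # Interface-level counters: the "drop" and "fifo" columns here
  # include ring-buffer overruns the kernel knows about.
  cat /proc/net/dev

  # Driver/firmware counters; look for drop/fifo/discard-style names.
  ethtool -S eth0 | grep -Ei 'drop|fifo|discard|no_buf'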

Personally, I find that putting good-quality NICs with good drivers into a good-quality server makes all my problems go away. It's much cheaper than spending days grovelling through debug data.