I am a little confused about the bandwidth setting on tunnel interfaces between two Cisco devices.
On each end I have Cisco routers with ten gig interfaces connected to my provider.
I have a tunnel connecting the two sites together, but the bandwidth statements don't seem to add up. My txload/rxload look like they are maxed out, yet the traffic on the input/output counters doesn't come close to what the bandwidth should allow.
Am I really choking out the tunnel and not using my full interface bandwidth?
Router 1
Tunnel45 is up, line protocol is up
Hardware is Tunnel
Internet address is z.z.z.z
MTU 17868 bytes, BW 100 Kbit/sec, DLY 50000 usec,
reliability 255/255, txload 255/255, rxload 255/255
Encapsulation TUNNEL, loopback not set
Keepalive set (10 sec), retries 3
Tunnel source x.x.x.x (TenGigabitEthernet3/4), destination y.y.y.y
Tunnel Subblocks:
src-track:
Tunnel45 source tracking subblock associated with TenGigabitEthernet3/4
Set of tunnels with source TenGigabitEthernet3/4, 11 members (includes iterators), on interface <OK>
Tunnel protocol/transport GRE/IP
Key disabled, sequencing disabled
Checksumming of packets disabled
Tunnel TTL 255, Fast tunneling enabled
Tunnel transport MTU 9078 bytes
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
30 second input rate 47213000 bits/sec, 6452 packets/sec
30 second output rate 85312000 bits/sec, 9380 packets/sec
interface Tunnel45
ip address z.z.z.z
load-interval 30
keepalive 10 3
tunnel source TenGigabitEthernet3/4
tunnel destination y.y.y.y
Router 2
Tunnel45 is up, line protocol is up
Hardware is Tunnel
Internet address is z.z.z.z
MTU 9976 bytes, BW 100 Kbit/sec, DLY 50000 usec,
reliability 255/255, txload 255/255, rxload 255/255
Encapsulation TUNNEL, loopback not set
Keepalive set (10 sec), retries 3
Tunnel linestate evaluation up
Tunnel source x.x.x.x (TenGigabitEthernet0/0/4), destination y.y.y.y
Tunnel Subblocks:
src-track:
Tunnel45 source tracking subblock associated with TenGigabitEthernet0/0/4
Set of tunnels with source TenGigabitEthernet0/0/4, 2 members (includes iterators), on interface <OK>
Tunnel protocol/transport GRE/IP
Key disabled, sequencing disabled
Checksumming of packets disabled
Tunnel TTL 255, Fast tunneling enabled
Tunnel transport MTU 9078 bytes
Tunnel transmit bandwidth 2000000 (kbps)
Tunnel receive bandwidth 2000000 (kbps)
30 second input rate 49418000 bits/sec, 5955 packets/sec
30 second output rate 33804000 bits/sec, 4266 packets/sec
interface Tunnel45
ip address z.z.z.z
load-interval 30
keepalive 10 3
tunnel source TenGigabitEthernet0/0/4
tunnel destination y.y.y.y
tunnel bandwidth transmit 2000000
tunnel bandwidth receive 2000000
end
Best Answer
Cisco IOS/NX-OS/etc. software does not derive the bandwidth of a virtual tunnel interface from the physical interface it is sourced from. Instead, it applies a default "bandwidth" value to the interface that depends on the hardware model and the software version it is running (on many devices the default "BW" for a tunnel is only 8 kbps!).
As others have mentioned, this bandwidth statement does not actually limit the traffic throughput of the tunnel interface. Tunnel throughput is limited only by CPU packet-processing capacity (when tunnel encapsulation is not done in hardware, which is usually not a limitation on most Cisco routers unless you are tunneling at scale) and by the forwarding hardware of the physical interface. The only exception would be if BW-based QoS policies or custom routing configurations (e.g., a non-default EIGRP metric setup) were applied to the tunnel interface, but based on the config you have shared that does not appear to be the case.
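To illustrate that exception: a percent-based shaper (hypothetical example, not from your config) is computed against the interface's configured BW, so this is one of the few cases where the BW value has a real operational effect:

```
policy-map TUNNEL-SHAPE
 class class-default
  ! "percent" is a percentage of the interface's configured BW value,
  ! so with the 100 kbps default BW this shaper would throttle the
  ! tunnel to almost nothing
  shape average percent 80
!
interface Tunnel45
 service-policy output TUNNEL-SHAPE
```

Since you have no service-policy on Tunnel45, nothing like this is in play for you.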
The displayed BW, txload, and rxload counters you are worried about are cosmetic only (unless the QoS/routing scenarios above apply) and will not, on their own, limit traffic throughput in any way. If you want the counters to display accurate information, configure the following on each tunnel interface:
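A sketch of the fix, assuming you want the counters scaled to the 10 Gb underlying links (the value is in kbps; adjust it to whatever capacity you actually expect the tunnel to carry):

```
interface Tunnel45
 ! Set the cosmetic/routing BW (kbps) so txload/rxload are computed
 ! against a realistic figure, here 10 Gbps:
 bandwidth 10000000
 ! Optionally align the tunnel transmit/receive bandwidth values too,
 ! as Router 2 already does with its 2000000 kbps statements:
 tunnel bandwidth transmit 10000000
 tunnel bandwidth receive 10000000
```

After this, txload/rxload on `show interface Tunnel45` will be calculated against 10 Gbps instead of the platform default, and your 30-second rates (~47-85 Mbit/s) will show the low load they actually represent.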