Edit 2:
As you mentioned...
ip route 10.1.0.0 255.255.0.0 iface0
Forces the Brocade to proxy-ARP for every destination in 10.1.0.0/16, as if the whole /16 were directly connected to iface0.
I can't speak to Brocade's ARP cache implementation, but I would simply point out the easy solution to your problem... configure your route differently:
ip route 10.1.0.0 255.255.0.0 CiscoNextHopIP
By doing this, you prevent the Brocade from ARP-ing for all of 10.1.0.0/16 (note: you might need to renumber the link between R1 and R2 to be outside 10.1.0.0/16, depending on Brocade's implementation).
Original answer:
I expect that in most, or even all, implementations, there is a hard limit on the capacity of the ARP table.
Cisco IOS CPU routers are only limited by the amount of DRAM in the router, but that is typically not going to be a limiting factor. Some switches (like Catalyst 6500) have a hard limitation on the adjacency table (which is correlated to the ARP table); Sup2T has 1 Million adjacencies.
So, what happens when the ARP cache is full and a packet is offered with a destination (or next-hop) that isn't cached?
Cisco IOS CPU routers don't run out of space in the ARP table, because those ARPs are stored in DRAM. Let's assume you're talking about the Sup2T. Think of it like this: suppose you had a Cat6500 + Sup2T and you configured all the Vlans possible; technically that is
4094 total Vlans - Vlan1002 - Vlan1003 - Vlan1004 - Vlan1005 = 4090 Vlans
Assume you make each Vlan a /24 (so that's 252 possible ARPs), and you pack every Vlan full... that is 1 Million ARP entries.
4090 * 252 = 1,030,680 ARP Entries
Every one of those ARPs would consume a certain amount of memory in the ARP table itself, plus the IOS adjacency table. I don't know exactly what it is, but let's say the total ARP overhead is 10 bytes per entry...
That means you have now consumed roughly 10MB for ARP overhead; that still isn't very much space... if you were that low on memory, you would see something like %SYS-2-MALLOCFAIL.
With that many ARPs and a four-hour ARP timeout, you would have to service over 70 ARPs per second on average; it's more likely that the maintenance on 1 million ARP entries would drain the CPU of the router (potentially triggering CPUHOG messages).
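The arithmetic above is easy to check with a short sketch. The 10-byte per-entry overhead is the same illustrative guess used above, not a measured value:

```python
# Back-of-the-envelope check of the ARP-table arithmetic above.
# The 10-byte per-entry overhead is an illustrative assumption.

usable_vlans = 4094 - 4  # Vlans 1002-1005 are reserved legacy Vlans
arps_per_vlan = 252      # one /24 per Vlan, as assumed above

total_arps = usable_vlans * arps_per_vlan      # 1,030,680 entries
overhead_bytes = total_arps * 10               # ~10 MB of ARP overhead
arp_timeout_s = 4 * 60 * 60                    # four-hour ARP timeout
refresh_rate = total_arps / arp_timeout_s      # ARPs serviced per second

print(total_arps)           # 1030680
print(overhead_bytes // 10**6, "MB")
print(refresh_rate)         # a bit over 71 ARPs/sec on average
```

Even before memory becomes a problem, that steady-state re-ARP rate is what eats the CPU.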
At this point, you could start bouncing routing protocol adjacencies and have IPs that are just unreachable because the router CPU was too busy to ARP for the IP.
I looked at the configs in your Stack Overflow question.
By way of review, this is your topology...
      Ten0/28              Ten0/28
Bldg_L----------------------Bldg_S
F10 S25                    F10 S25
   |                          |
Vlan200                    Vlan400
10.2.0.101/16        10.4.0.101/16
The problem is that Building L's switch proxy-ARPs to resolve 10.4.0.0/16, and Building S's switch proxy-ARPs for 10.2.0.0/16... interface TenGig0/28 (your transit link between the buildings) is answering proxy-ARP requests. Remove those 10-net statics and use...
- Building L:
ip route 10.4.0.0 255.255.0.0 192.168.1.2
- Building S:
ip route 10.2.0.0 255.255.0.0 192.168.1.1
The reason that a route like ip route 10.4.0.0 255.255.0.0 TenGigabit0/28 proxy-ARPs is that when you static-route out an interface like this, you are essentially telling the switch that the entire /16 subnet is directly connected to TenGigabit0/28. Using an IP next-hop only requires an ARP entry for that specific next-hop.
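A toy model (in Python, not vendor code) of the forwarding decision just described; the route dicts and function name are made up for illustration, but the logic matches the behavior above:

```python
# Toy model of why an interface route ARPs for every destination host,
# while a next-hop route needs only one ARP entry.

def arp_target(route, dst_ip):
    """Return the IP the switch must ARP for when forwarding to dst_ip."""
    if route["via"] == "interface":
        # Destination is treated as directly connected: the switch ARPs
        # for the final destination itself -- one entry per host.
        return dst_ip
    # Recursive route: the switch ARPs only for the configured next-hop.
    return route["next_hop"]

# ip route 10.4.0.0 255.255.0.0 TenGigabit0/28
iface_route = {"via": "interface"}
# ip route 10.4.0.0 255.255.0.0 192.168.1.2
nexthop_route = {"via": "next-hop", "next_hop": "192.168.1.2"}

hosts = [f"10.4.0.{i}" for i in range(1, 6)]
print({ip: arp_target(iface_route, ip) for ip in hosts})    # one ARP per host
print({ip: arp_target(nexthop_route, ip) for ip in hosts})  # always 192.168.1.2
```

Scale the host list up to a full /16 and the interface route turns into ~65k ARP entries, versus exactly one for the next-hop route.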
You probably need to move the default gateway to a new interface on the Building L switch, so the whole subnet can default through 10.2.0.101 and either reach 10.4.0.0/16 or the internet.
Sorry to say it, but you are leaving yourself wide open to ARP resource-exhaustion problems when you assign a /16 as a connected subnet... ARP is an unauthenticated protocol, and anyone on the LAN can flood the switch with ARPs; it has no choice but to cache and answer them... even for phantom addresses.
Proactively, you might consider DHCP snooping and dynamic ARP inspection, if your version of FTOS supports them. These features normally require some thought and testing before deployment; however, they are well worth using if you have hundreds of kids with nothing more exciting to do than show off their "hacking" skills. I did a quick search to see whether Force10 supports what Cisco calls port security, but I couldn't find it; port security can be used to limit the number of MACs learned on a switch port.
Best Answer
ARP stores the mapping between IP addresses and their respective MAC addresses.
Since you mention that your switch can ping, I'm going to assume that you mean it's a Layer 3 switch and not a pure Layer 2 switch.
Assuming the setup is as below:
PC1 (.1) --- R1 (.2) --- S1 (.3) ---- (.4) PC2
[Network: 10.10.10.0/24]
[I'm assuming 10.10.10.2 and 10.10.10.3 in R1 and S1 respectively are SVIs and the links in R1 and S1 are all switch-ports]
Pinging from PC1 to PC2 will make an ARP entry in PC2 as: 10.10.10.1 --- PC1
Pinging from S1 to PC2 will make an ARP entry in PC2 as: 10.10.10.3 --- S1
I think your confusion can be cleared up by the fact that when you ping from PC1 to PC2, the packet is bridged through R1 and S1 to PC2 without any of the headers being changed; i.e., the source MAC address is still going to be PC1's.
Now let's assume PC1 and PC2 are in different subnets.
PC1(10.10.10.1) --- (10.10.10.2) R1 (20.20.20.1) --- S1 (20.20.20.2) ---- (20.20.20.3) PC2
[Again, here I'm assuming 20.20.20.2 to be an SVI and the links on S1 to be switch-ports]
When you are pinging from PC1 to PC2 and assuming that the default gateway R1 has been set for PC1, then the ping packet will be sent to R1, who will then route it towards the correct network (20.20.20.0/24 in this case).
So pinging from PC1 to PC2 will make an ARP entry in PC2 as: 20.20.20.1 --- R1
Pinging from S1 to PC2 will make an ARP entry in PC2 as: 20.20.20.2 --- S1
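To make the bridged-versus-routed distinction concrete, here is a minimal Python sketch; the MAC addresses are placeholders I made up, not values from the question:

```python
# Toy model: a bridged frame reaches PC2 with PC1's source MAC intact,
# while a routed frame arrives with the router's egress MAC instead.

PC1_MAC = "aa:aa:aa:aa:aa:01"
R1_MAC  = "aa:aa:aa:aa:aa:02"  # R1's egress interface MAC
PC2_MAC = "aa:aa:aa:aa:aa:04"

def bridge(frame):
    """Layer-2 forwarding: the Ethernet header is left untouched."""
    return dict(frame)

def route(frame, egress_mac, next_hop_mac):
    """Layer-3 forwarding: the router rewrites both MAC addresses,
    while the IP header (addresses) stays the same."""
    out = dict(frame)
    out["src_mac"] = egress_mac
    out["dst_mac"] = next_hop_mac
    return out

frame = {"src_mac": PC1_MAC, "dst_mac": PC2_MAC,
         "src_ip": "10.10.10.1", "dst_ip": "10.10.10.4"}

print(bridge(frame)["src_mac"])                 # still PC1's MAC
print(route(frame, R1_MAC, PC2_MAC)["src_mac"]) # now R1's MAC
```

That rewrite is exactly why PC2's ARP entry points at PC1 in the same-subnet case but at R1 (20.20.20.1) in the routed case.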