Edit 2:
As you mentioned...
ip route 10.1.0.0 255.255.0.0 iface0
Forces the Brocade to proxy-ARP for every destination in 10.1.0.0/16 as if it were directly connected to iface0.
I can't speak to Brocade's ARP cache implementation, but I can point out an easy solution to your problem: configure your route differently:
ip route 10.1.0.0 255.255.0.0 CiscoNextHopIP
By doing this, you prevent the Brocade from ARPing for all of 10.1.0.0/16 (note: depending on Brocade's implementation, you may need to renumber the link between R1 and R2 so it falls outside 10.1.0.0/16).
Original answer:
I expect that in most, or even all, implementations, there is a hard limit on the capacity of the ARP table.
Cisco IOS CPU routers are only limited by the amount of DRAM in the router, but that is typically not going to be a limiting factor. Some switches (like Catalyst 6500) have a hard limitation on the adjacency table (which is correlated to the ARP table); Sup2T has 1 Million adjacencies.
So, what happens when the ARP cache is full and a packet is offered with a destination (or next-hop) that isn't cached?
Cisco IOS CPU routers don't run out of space in the ARP table, because those ARPs are stored in DRAM. Let's assume you're talking about the Sup2T. Think of it like this: suppose you had a Cat6500 + Sup2T and you configured every possible Vlan; technically that is
4094 total Vlans - Vlan1002 - Vlan1003 - Vlan1004 - Vlan1005 = 4090 Vlans
Assume you make each Vlan a /24 (so that's 252 possible ARPs), and you pack every Vlan full... that is 1 Million ARP entries.
4090 * 252 = 1,030,680 ARP Entries
Every one of those ARPs would consume a certain amount of memory in the ARP table itself, plus the IOS adjacency table. I don't know the exact figure, but let's say the total per-entry ARP overhead is 10 bytes...
That means you have now consumed about 10MB for ARP overhead, which still isn't very much space; if you were that low on memory, you would see something like %SYS-2-MALLOCFAIL.
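The back-of-the-envelope math above can be sketched as follows; note that the 252 ARPs per /24 and the 10-byte per-entry overhead are this answer's assumptions, not measured values.

```python
# Estimate worst-case ARP table size on a Cat6500 + Sup2T (assumed figures).
reserved_vlans = 4                  # Vlans 1002-1005 are reserved
usable_vlans = 4094 - reserved_vlans        # 4090 configurable Vlans
arps_per_vlan = 252                 # assumed usable hosts in each /24
total_arps = usable_vlans * arps_per_vlan

overhead_bytes = 10                 # assumed per-entry ARP + adjacency overhead
total_mb = total_arps * overhead_bytes / 1_000_000

print(total_arps)           # 1030680
print(round(total_mb, 2))   # 10.31 -- roughly 10MB of overhead
```

This lands right at the Sup2T's stated 1 million adjacency limit, which is the point: the hard limit is reached before memory consumption becomes interesting.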
With that many ARPs and a four-hour ARP timeout, you would have to service over 70 ARPs per second on average; more likely, the maintenance on 1 million ARP entries would drain the router's CPU (potentially CPUHOG messages).
At this point, you could start bouncing routing protocol adjacencies and have IPs that are just unreachable because the router CPU was too busy to ARP for the IP.
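The ARP maintenance rate quoted above follows directly from the table size and the timeout, assuming entries expire uniformly and each must be re-ARPed once per timeout period:

```python
# Average ARP refresh rate for a full table with a four-hour timeout.
total_arps = 1_030_680
timeout_seconds = 4 * 3600          # four-hour ARP timeout

arps_per_second = total_arps / timeout_seconds
print(round(arps_per_second, 1))    # 71.6 ARPs/s the CPU must service
```

In practice the load is burstier than this average, since hosts that came up together tend to expire together.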
To be clear, there are two sub-issues worth identifying: the gateway and the ports. Moving the ports is relatively easy if you connect the two switches together via an ethernet-switching port (I personally use trunks between all switches), copy the remaining port configurations over, then move cables one at a time. This lets you control which ports (and potentially users/services) go down at what time.
The best answer for the gateway is to set up VRRP for it, but that still requires two short downtimes, and it has enough configuration gotchas that it may not be worth the time invested. I suggest trunking the two switches together, then simply removing the vlan from one and adding it to the other. Using Juniper for both switches, you get the advantage of the commit button: load your command set on each, then just commit on the first, wait until it completes, then commit on the second. Failback is just as easy: rollback on the new switch, wait, rollback on the old.
Best Answer
Routing protocols, such as OSPF, exist to share routes between routers. If your routing is done on a single router, or on a pair of routers connected to the same networks (your layer-3 switches), it doesn't make sense to burn CPU cycles on a routing protocol, since routers inherently know about directly connected networks. Both of your layer-3 switches already have all the routes that the other layer-3 switch has.
OSPF is one choice for a routing protocol, but it is an industry standard, and just about every business-grade router supports it. It is also very well understood by most network engineers, and fairly simple to configure.