cpt_fink and Shane already gave good answers, but I will add $0.02...
Let's assume you flood every frame that comes into a switch; you're essentially turning what we know as a switch into a hub. That causes problems with:
- Security / Privacy: IP networks are largely unencrypted. Flooding frames shares private information with everyone in that broadcast domain... DNS lookups, FTP passwords, the websites you're visiting. All I need is Wireshark on a flooded network, and I know a lot about what is happening. It's also much easier to hijack TCP sessions if you know the TCP sequence numbers in use.
- End user count: If we flooded frames from every user to every other user, there would be a much lower limit on how many users we could pack into one switch, because every user consumes every other user's bandwidth (see next bullet). Ultimately, this means you're spending a lot more money on network administrators, network infrastructure, vendor support, power, rack space, cooling, etc...
- User throughput: If any user on the switch sends line-rate traffic (i.e., as fast as the link allows, up to 1Gbps on a GigabitEthernet link), that line-rate traffic is flooded to every other user on the switch (as well as the broadcast domain). This is terribly inefficient. MAC learning is much better than flooding.
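To make the contrast concrete, here is a minimal sketch (the class and method names are mine, purely for illustration) of the MAC-learning logic a switch applies instead of always flooding:

```python
class LearningSwitch:
    """Toy model: learn source MACs per port, flood only unknown destinations."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source MAC was seen on.
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            # Known destination: forward out exactly one port (unicast).
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood out every other port, which is
        # what a hub does for *all* traffic, all the time.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive("aa:aa", "bb:bb", in_port=0))  # unknown dst -> flood: [1, 2, 3]
print(sw.receive("bb:bb", "aa:aa", in_port=1))  # aa:aa was learned -> unicast: [0]
```

After the first exchange, traffic between the two hosts stops consuming every other user's bandwidth, which is exactly the throughput point above.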
FYI: YLearn provided some practical examples of how flooding causes problems in wired networks.
A final thought:
If switches always flooded instead of learning MAC addresses, switched topologies would no longer be viable in modern networks. Engineers would resort to routing as quickly as possible, because the three issues listed above are deal-killers for many wired network designs.
Interestingly enough, wireless 802.11 only addresses the privacy and security item from the list above; otherwise wifi has the aforementioned problems with "flooding" non-broadcast traffic to all hosts (technically it isn't flooding, but the result is the same). 802.11's privacy and security measures make wifi at least a viable option.
EDIT:
In a comment, Celeritas said:
... you say that switches are more secure, but I seriously doubt that people choose a switch over a hub because they want security, security would be built at a different level. Also it would be nice if the answers didn't use too much jargon, for example what is "linerate"?
I am afraid you have misunderstood why people choose switches over hubs. People choose switches instead of hubs, because all three disadvantages listed above are a problem for wired networks.
802.11 wifi has two of the three problems that hubs have; however, the privacy and security measures built into wifi make it a feasible choice in some situations.
Apologies for confusing you with the term "line-rate". Line-rate means you're sending traffic as fast as the wired NIC can transmit it. On gigabit ethernet links, the achievable frame rate depends on the packet size you send, but line-rate cannot exceed the transmit speed of a GE NIC, which is 1,000,000,000 bits per second.
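As a back-of-the-envelope illustration of that dependence on packet size, here's a quick calculation; the 20 bytes of fixed per-frame overhead (8-byte preamble/SFD plus the 12-byte minimum inter-frame gap) are standard Ethernet, the rest is simple arithmetic:

```python
def max_frames_per_second(frame_bytes, link_bps=1_000_000_000):
    """Upper bound on frames/sec at line-rate on an Ethernet link.

    Every frame on the wire also costs 8 bytes of preamble/SFD plus a
    12-byte minimum inter-frame gap, i.e. 20 bytes of fixed overhead.
    """
    wire_bits = (frame_bytes + 20) * 8
    return link_bps // wire_bits

# Minimum-size (64-byte) frames on GigabitEthernet:
print(max_frames_per_second(64))    # 1488095 frames/sec
# Maximum-size untagged (1518-byte) frames:
print(max_frames_per_second(1518))  # 81274 frames/sec
```

So "line-rate" is always 1Gbps of bits on the wire, but the frame rate (and usable payload throughput) varies enormously with frame size.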
If that classification isn't available on the switch model, then you need to look at something like ACL-based QoS classification (Layers 2, 3, and 4), so that you can match traffic (by IP address range, for example) and tell the hardware how you want to prioritise it with mappings.
CoS is for traffic that stays at layer 2 only. If you need to prioritise traffic that is going to get routed, then you would use DSCP, which is layer 3.
So CoS puts the marking in the Ethernet frame header, and DSCP does it within the IP header. If the traffic gets routed, the Ethernet frame gets stripped, so you would lose your CoS marking and would rely on the DSCP marking within the IP header.
![CoS and DSCP Marking on Ethernet Frame Header and IP Header](https://i.stack.imgur.com/NVdtb.gif)
You achieve more granularity with DSCP compared to CoS because of the difference in bit size: CoS uses only 3 bits to set markings (8 possible values), while DSCP uses 6 bits (64 possible values).
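If you want to see the DSCP side of this from a host's point of view, here's a small sketch using Python's standard socket API. On Linux, a process can set the IP TOS byte with the `IP_TOS` socket option; DSCP occupies the upper 6 bits of that byte (the lower 2 are ECN). The DSCP value 46 (Expedited Forwarding) is a standard class commonly used for voice:

```python
import socket

def dscp_to_tos(dscp):
    """DSCP sits in the upper 6 bits of the IP TOS / Traffic Class byte;
    the lower 2 bits are ECN, left at zero here."""
    assert 0 <= dscp <= 63  # 6 bits -> 64 possible DSCP classes
    return dscp << 2

# CoS, by contrast, is a 3-bit field (PCP) in the 802.1Q VLAN tag:
# 2**3 = 8 CoS values vs. 2**6 = 64 DSCP values.

# Mark a UDP socket's outgoing traffic as EF (DSCP 46):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(46))
print(dscp_to_tos(46))  # 184 (0xB8), the classic EF TOS byte
```

Note this only marks the packets; the switches and routers along the path still have to be configured to map those DSCP values to queues, which is the part the answer above is describing.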
All in all, I would advise you to separate your iSCSI network from your core infrastructure, and then you wouldn't even need to worry about QoS.
Best Answer
You probably didn't create a loop in your network. However, without the peer links, your two switches, instead of acting as one switch, are acting as two independent switches and dropping half of your traffic. That's likely why you couldn't communicate with some of your servers.
A short explanation:
Your vPC sends traffic across both links to each of the switches. With the vPC peer links, the switches can both forward the frames to the destination.
Without the peer link, one of the switches will shut down its links to prevent loops, so half your traffic is not reaching the destination.
One of my former colleagues wrote a more detailed explanation here.