Let's get some terminology out of the way first.
"Fiber channel" (or "fibre channel", as it's more typically known) is a specific networking technology used in storage area networks. I think, when you say "fiber channel", you're really saying "ports on an Ethernet switch that fiber optic cables connect to". If you're really talking about fibre channel then, let me know... (but, to my knowledge, 3Com has never made a fibre channel switch).
The term "stacking" typically refers to an interconnection between Ethernet switches through a (typically proprietary) dedicated interface that extends some significantly high fraction of the capacity switch's switching capability to another switch (extending the "switching fabric" outside the box to another switch). Often, stacking interfaces operate at multi-gigabit speeds (40Gb/sec on the Dell PowerConnect 6200-series switches, for example).
There is no "hard limit" to the number of Ethernet switches in a network. You can add as many as you want. Latency will increase as the number of "hops" between any two endpoints goes up and, obviously, you're increasing complexity and odds of failure as you add more switches.
Switched Ethernet LANs can't scale indefinitely. Excessive broadcasts or flooding of frames to unknown destinations will limit their scale. Either of these conditions can be caused by making a single broadcast domain in an Ethernet LAN too big.
Broadcast traffic is easy to understand, but flooding of frames to unknown destinations is a bit more obscure. If you get so many devices that your switch MAC tables are overflowing, switches will be forced to flood non-broadcast frames out all ports if the destination of the frame doesn't match any entries in the MAC table. If you have a large enough single broadcast domain in an Ethernet LAN with a traffic profile that hosts talk infrequently (that is, infrequently enough that their entries have aged out of the MAC tables on your switches), then you can also get excessive flooding of frames to unknown destinations.
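To make the flooding behavior concrete, here's a toy sketch in Python. The class name, port numbers, and the absurdly small table size are all made up for illustration; real switches hold thousands of MAC entries in hardware. The point is just the decision logic: known destination, forward out one port; unknown (never learned, aged out, or squeezed out of a full table), flood out every port except the one the frame arrived on.

```python
# Toy model of a switch's learn/forward decision. Illustrative only.
import time

class SwitchSketch:
    def __init__(self, ports, mac_table_size=4, aging_seconds=300):
        self.ports = ports
        self.mac_table_size = mac_table_size   # real switches: thousands of entries
        self.aging_seconds = aging_seconds
        self.mac_table = {}                    # mac -> (port, last_seen)

    def learn(self, src_mac, in_port, now=None):
        now = time.time() if now is None else now
        # Age out stale entries first -- this is why infrequent talkers
        # can cause flooding even when the table isn't full.
        self.mac_table = {m: (p, t) for m, (p, t) in self.mac_table.items()
                          if now - t < self.aging_seconds}
        if src_mac in self.mac_table or len(self.mac_table) < self.mac_table_size:
            self.mac_table[src_mac] = (in_port, now)
        # else: table overflow -- frames *to* this host will be flooded

    def forward(self, dst_mac, in_port):
        entry = self.mac_table.get(dst_mac)
        if entry is None:
            # Unknown destination: flood out all ports except the ingress port.
            return [p for p in self.ports if p != in_port]
        return [entry[0]]

sw = SwitchSketch(ports=[1, 2, 3, 4])
sw.learn("aa:aa", in_port=1)
print(sw.forward("aa:aa", in_port=2))  # known destination -> [1]
print(sw.forward("bb:bb", in_port=2))  # unknown destination -> flooded: [1, 3, 4]
```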
At the scale you're talking about (200 computers), there isn't going to be a problem with flooding of frames to unknown destinations. Whether or not you have broadcast problems will depend on the specific protocols and applications in use. If you're using off-the-shelf Microsoft OSes and applications, I'd hazard that your level of broadcast traffic is just fine.
In general, a "star" topology that minimizes "hops" between switches is the most effective Ethernet topology. Placing servers or other highly-utilized resources in the center of the star will minimize traffic overall. If your switches support aggregating multiple links together, you can use this feature to increase bandwidth on inter-switch links. You should be using a tool (even a simple one like MRTG) if your switches support monitoring with SNMP to determine where your bandwidth utilization "hotspots" are.
You can create "loops", provided your switches support the spanning-tree protocol, to handle failures of switches or inter-switch links. That's a little more advanced topic, and not something I'd recommend you approach until you have more experience.
Best Answer
Yes. Using single cables to "cascade" multiple Ethernet switches together does create bottlenecks. Whether or not those bottlenecks are actually causing poor performance, however, can only be determined by monitoring the traffic on those links. (You really should be monitoring your per-port traffic statistics. This is yet one more reason why that's a good idea.)
An Ethernet switch has a limited, but typically very large, amount of internal bandwidth within which to do its work. This is referred to as the switching fabric bandwidth, and today it can be quite large even on very low-end gigabit Ethernet switches (a Dell PowerConnect 6248, for example, has a 184 Gbps switching fabric). Keeping traffic flowing between ports on the same switch typically means (with modern 24- and 48-port Ethernet switches) that the switch itself will not "block" frames flowing at full wire speed between connected devices.
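The back-of-the-envelope test for "non-blocking" is simple: every port sending and receiving at wire speed simultaneously (full duplex counts each direction). Using the 6248's published fabric figure, and counting only its 48 gigabit ports for simplicity (its 10 GbE uplink options would consume additional fabric capacity):

```python
# Sanity-check a switch fabric against worst-case port demand.
ports = 48
port_speed_gbps = 1
required_gbps = ports * port_speed_gbps * 2   # x2: full duplex, both directions
fabric_gbps = 184                             # Dell PowerConnect 6248 spec

print(required_gbps)                  # 96
print(fabric_gbps >= required_gbps)   # True -- non-blocking at wire speed
```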
Invariably, though, you'll need more ports than a single switch can provide.
When you cascade (or, as some would say, "heap") switches with crossover cables, you're not extending the switching fabric from one switch into the other. You're certainly connecting the switches, and traffic will flow, but only at the bandwidth provided by the ports connecting them. If more traffic needs to flow from one switch to another than the single connecting cable can support, frames will be dropped.
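The arithmetic of that bottleneck is stark. Assuming, purely for illustration, 24 gigabit hosts on an edge switch whose traffic all needs to cross a single gigabit cascade cable to reach servers on another switch:

```python
# Worst-case oversubscription of a single-cable cascade. Numbers are illustrative.
hosts_on_edge_switch = 24
host_speed_gbps = 1
cascade_link_gbps = 1

worst_case_demand_gbps = hosts_on_edge_switch * host_speed_gbps
oversubscription = worst_case_demand_gbps / cascade_link_gbps
print(f"{oversubscription:.0f}:1 oversubscribed")  # 24:1
```

In practice hosts rarely all talk at once, which is why cascades often work fine; only your traffic monitoring can tell you whether yours does.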
Stacking connectors are typically used to provide higher-speed switch-to-switch interconnects. In this way you can connect multiple switches with a much less restrictive switch-to-switch bandwidth limitation. (Using the Dell PowerConnect 6200 series again as an example, their stack connections are limited in length to under 0.5 meters, but operate at 40 Gbps.) This still doesn't extend the switching fabric, but it typically offers vastly improved performance compared to a single cascaded connection between switches.
There were some switches (Intel 500 Series 10/100 switches come to mind) that actually extended the switching fabric between switches via stack connectors, but I don't know of any that have such a capability today.
One option that other posters have mentioned is using link aggregation mechanisms to "bond" multiple ports together. This uses more ports on each switch, but can increase switch-to-switch bandwidth. Beware that different link aggregation protocols use different algorithms to "balance" traffic across the links in the aggregation group, and you need to monitor the traffic counters on the individual interfaces in the group to ensure that balancing is really occurring. (Typically some kind of hash of the source / destination addresses is used to achieve a "balancing" effect. Because frames between a single source and destination always move across the same interface, they arrive in order, and the switch doesn't need to queue or track individual traffic flows on the aggregation group's member ports.)
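To see why a given source/destination pair always rides the same member link, and why the "balance" can be lopsided, here's a sketch using a CRC32 hash. The hash choice and function name are illustrative, not any particular vendor's algorithm (real switches hash various header fields in hardware):

```python
# Sketch of hash-based frame distribution across an aggregation group.
import zlib

def member_link(src_mac, dst_mac, num_links):
    """Pick a member link by hashing the source/destination pair."""
    key = f"{src_mac}->{dst_mac}".encode()
    return zlib.crc32(key) % num_links

# Every frame of one conversation deterministically takes the same link,
# so frame order is preserved without per-flow queuing:
assert member_link("aa:aa", "bb:bb", 4) == member_link("aa:aa", "bb:bb", 4)

# But a handful of heavy talkers can easily hash onto the same link,
# leaving other members idle -- hence the advice to watch per-port counters:
flows = [("host1", "server"), ("host2", "server"), ("host3", "server")]
print([member_link(s, d, 4) for s, d in flows])
```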
All of this concern about port-to-port switching bandwidth is one argument for using chassis-based switches. All the linecards in, for example, a Cisco Catalyst 6513 switch, share the same switching fabric (though some line cards may, themselves, have an independent fabric). You can jam a lot of ports into that chassis and get more port-to-port bandwidth than you could in a cascaded or even stacked discrete switch configuration.