FIRST, if you can spend a bit of money, buy a good, fast 48-port GB switch. You want the faster "backplane" -- basically the speed limit inside the switch. I like this one for less than US$600. Plug everything into it. If the devices on the 100-ft switch are closer than 300 feet, plug them in as well; otherwise use a second switch out there.
This should make your little network perform just fine.
An unmanaged version of this switch costs less than US$400, but being able to give the switch an IP and browse to it to see status, speed, etc. is well worth the extra money if you are having issues.
SECOND, if you just can't spend any money, make the little GB switch the hub, and connect the three switches and the server into it. This will make the least number of hops between any two nodes. Get a GB card for the server if you can.
THIRD, the specific answers to the specific questions.
If I connect them with only 1 wire, will that limit bandwidth? E.g., will all 23 computers be limited to the speed of one CAT5e cable?
The speed limitation is the speed of the switch ports and the speed of the switches. CAT5e cable will not be a bottleneck in your network.
DO NOT CONNECT SWITCHES WITH MULTIPLE CABLES, and TAKE CARE TO AVOID LOOPS AND MULTIPLE PATHS. Google "spanning tree" for more information about why.
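If one of the boxes in the chain is a Linux machine acting as a bridge, you can at least make sure STP is turned on so an accidental loop gets blocked instead of melting the network. A sketch only, assuming an existing bridge named br0 (the name is my assumption) and iproute2:

```shell
# Create a bridge with spanning tree enabled (blocks redundant paths
# automatically instead of letting broadcast traffic loop forever):
ip link add name br0 type bridge stp_state 1

# Or enable STP on an existing bridge:
ip link set dev br0 type bridge stp_state 1

# Check the current STP state:
ip -d link show br0
```

Dumb unmanaged switches generally do not run STP at all, which is exactly why you should avoid multiple cables and loops between them by hand.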
Will the speed of computer connected to white switch be same as computer connected to top switch?
If the network is busy, no. You have slow, cheap switches, so they will pass traffic slowly. Therefore, more hops = slower traffic.
Will moving the white switch right next to the top switch and having 16 wires coming 100 feet instead of 1 wire coming 100 feet make it faster?
It should not make any difference.
EDIT 2: Everyone mentions gigabit switches, but will they make any difference with 10/100 network cards? Do I then have to use gigabit cards in every computer too? I could in the server perhaps, but users will be 10/100.
YES, GB in the switch will improve performance even if all of the connections are 100 Mb. The "backplane speed" (i.e. the internal speed) of the switch will be faster, as will any uplinks. And you are really going to want to put a GB card in the server at some point.
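To get a feel for why uplink speed matters, here is a back-of-the-envelope sketch (the function name and file size are my own, and it ignores protocol overhead, so real numbers will be a bit worse):

```python
def transfer_seconds(file_gib: float, link_mbps: float) -> float:
    """Rough time to move a file over a link, ignoring protocol overhead."""
    bits = file_gib * 8 * 1024**3          # file size in bits
    return bits / (link_mbps * 1_000_000)  # link speed in bits per second

# A 4 GiB file over a 100 Mb/s uplink vs a 1 Gb/s uplink:
print(round(transfer_seconds(4, 100)))   # ~344 seconds
print(round(transfer_seconds(4, 1000)))  # ~34 seconds
```

So even with 10/100 cards at the desks, a gigabit uplink between switches keeps one big copy from hogging the shared link for five minutes at a stretch.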
Good luck.
One cheeky and simple way to do this is to use two switches rather than one. You could then uplink half of your devices to one switch and half to the other, and thereby double your total throughput.
If you then require more outgoing throughput from a single server, you could bond the two interfaces with uplinks to both switches in your room. Just make sure you use a bonding mode that is intended for this kind of application (one that uses different MAC addresses for traffic out of the two interfaces).
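On a Linux server this could look roughly like the following. A sketch only: the interface names eth0/eth1 are assumptions, and balance-alb is one mode that fits the bill because it hands out different slave MAC addresses and needs no special switch support:

```shell
# Sketch: bond two NICs into one logical interface (interface names assumed).
modprobe bonding
ip link add bond0 type bond mode balance-alb

# Slaves must be down before they can be enslaved:
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# Verify which mode is active and which slaves joined:
cat /proc/net/bonding/bond0
```

Avoid modes like 802.3ad (LACP) here, since those expect both links to land on the same managed switch (or a stacked pair), which is exactly what this setup does not have.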
We did this in an office where we were stuck with 1 port in each room, and we put a 100Mb switch in each room. It was OK for basic tasks, web surfing, email, etc - but the BIG downside is that if you start doing heavy network traffic (for example, copying a multi-gig file from one office to another) you chew up ALL the bandwidth for two offices, because each office is sharing a single uplink.
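The arithmetic behind that downside is simple: every machine in a room competes for the same uplink. A rough fair-share sketch (function name is mine, and real switches won't share perfectly evenly):

```python
def per_host_share(uplink_mbps: float, active_hosts: int) -> float:
    """Rough fair-share bandwidth per host on one shared uplink."""
    return uplink_mbps / active_hosts

# 8 users in a room behind a single 100 Mb/s uplink:
print(per_host_share(100, 8))  # 12.5 Mb/s each

# One multi-gig file copy counts as an extra, very hungry host for
# BOTH offices involved, since each office shares a single uplink.
```

In practice one bulk transfer tends to grab far more than its fair share, which is why everyone else's web browsing crawls while the copy runs.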
So it really depends on your level of activity, but you may have a larger issue with a contractor taking you for a ride (depending on your definition of "exorbitant"). Yes, it should and will be more expensive to run multiple lines if you want them all in different locations, but if you want to run, say, 4 lines into one location within the room, then the extra costs should be relatively small.
What you've got to weigh up against the cost of the installation is the cost of configuring, maintaining and purchasing multiple switches cascading off each other. If I had a choice I would have paid the extra money to get it done properly (multiple lines into the room), because the hassle of maintaining THAT MANY cascaded switches can become prohibitive.