Networking wire isn't just any old spool of wire. It's rated for the frequencies of the signal going down it (Cat3 for regular phones or 10 Mbit/s Ethernet; Cat5 for 100 Mbit/s Ethernet; Cat5e or Cat6 for 1000 Mbit/s Ethernet), the pairs of wires are twisted in specific ways to reduce the crosstalk between them, there may be shielding to reduce noise from outside, etc.
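Those category ratings boil down to a rated bandwidth per cable class. A rough sketch (nominal figures from the TIA/EIA category specs; the dict and helper function are just illustrative, not any real API):

```python
# Nominal rated bandwidth per cable category, as mentioned above.
# Figures are the standard TIA/EIA ratings; treat this as a sketch.
CATEGORIES = {
    "Cat3":  {"bandwidth_mhz": 16,  "typical_use": "telephone / 10BASE-T"},
    "Cat5":  {"bandwidth_mhz": 100, "typical_use": "100BASE-TX"},
    "Cat5e": {"bandwidth_mhz": 100, "typical_use": "1000BASE-T"},
    "Cat6":  {"bandwidth_mhz": 250, "typical_use": "1000BASE-T"},
}

def min_category(required_mhz):
    """Return the lowest category whose rated bandwidth meets the requirement."""
    for name, spec in CATEGORIES.items():  # dicts preserve insertion order
        if spec["bandwidth_mhz"] >= required_mhz:
            return name
    return None

print(min_category(100))  # → Cat5
```

A random spool of bell wire has no such rating at all, which is the point: it may carry a signal a short distance, but nothing about its twist or impedance is controlled.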
It sounds like you're trying to run networking over a random spool of wire. Don't do that.
Once upon a time, a twisted-pair socket was wired only one way, and the attached electronics couldn't change what each wire did. You were either a network device (hub/bridge/switch/router) or an end device. To electrically connect two network devices together, you needed a different cable than the one used to connect an end device to a network device.
And thus the straight-through and cross-over cables were born.
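That distinction can be sketched in a few lines. The helper names here are hypothetical, but the pin facts are standard 10/100BASE-T wiring: a crossover cable swaps the transmit pair (pins 1, 2) with the receive pair (pins 3, 6), and like devices talking to like devices needed one.

```python
# Straight-through: pin N on one end connects to pin N on the other.
STRAIGHT_THROUGH = {pin: pin for pin in range(1, 9)}

# Crossover: TX+ (1) <-> RX+ (3), TX- (2) <-> RX- (6); other pins unchanged.
CROSSOVER = {**STRAIGHT_THROUGH, 1: 3, 2: 6, 3: 1, 6: 2}

def cable_needed(end_a, end_b):
    """Classic rule of thumb: like devices to like devices need a crossover."""
    network_devices = {"hub", "bridge", "switch", "router"}
    a_is_net = end_a in network_devices
    b_is_net = end_b in network_devices
    return "crossover" if a_is_net == b_is_net else "straight-through"

print(cable_needed("switch", "pc"))      # → straight-through
print(cable_needed("switch", "switch"))  # → crossover
```

The uplink port described below did this same pair swap inside the device, which is why a 'normal' straight-through cable worked on it.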
To avoid a second cable type (one that would invariably lose its label and confuse the bejeebers out of some network person months or years down the road when they pulled it out of the bin), most devices intended to connect to both network devices and end devices had an uplink port that allowed the use of 'normal' cables.
It was as simple as that.
Edit: Google-Fu successful. It WAS ARCnet!
Why weren't switches/hubs designed from the beginning to use crossover cables instead?
Back when the 10BASE-T specification was still under consideration, the twisted-pair architecture most common at the time in office networks was ARCnet. 10BASE-T wasn't ratified as an actual standard until 1990, later than I thought. Connecting ARCnet hubs together looks to have required a cable with flipped pairs compared to the one used for connecting endpoint devices.
Since the standards committee would have been made up of veteran network engineers from the various hardware vendors and other interested parties, they had been dealing with the multiple-cable problem for years and likely considered it the status quo. It is also possible that the 'draft' devices under development by the vendors had their own electrical requirements for the cable, influenced as they were by ARCnet device manufacturers. Clearly the committee didn't consider the use of multiple cable types to be enough of a problem to standardize the practice out of existence.
Best Answer
This is an interesting question, since I've never seen anything that authoritatively states the design decisions behind that choice. Everything I've come across, whether on the Interwebs or in conversation with people smarter than me in this area, seems to indicate two possibilities:
Future Proofing
By the time of the Cat5 spec, we had seen the explosion of data cable runs. Telephone had been using Cat3, or something similar, for some time; serial connections had been run throughout university campuses; ThickNet had spidered its way around; and ThinNet had started to see significant use in microcomputer labs and, in some cases, offices. It was obvious that networked computing equipment was the wave of the future. We had also learned the terrible cost of changing out cabling to meet the demands of longer segments or higher speeds. Let's face it: replacing cabling is a nightmarish chore, and an expensive one.
The notion of limiting this cost by developing a cable that could be run once and left in place for some length of time was definitely an appealing one. So forward-thinking engineers, who were probably tired of replacing wiring, could easily have found it worthwhile to design extra pairs into the spec, especially at a time when the price of bulk copper was relatively low. Which is more expensive: adding four extra wires, or having a team of people remove old wiring and pull new?
Extra Shielding
Since typical Cat5 is UTP (unshielded twisted pair), it does not contain the extra grounded foil to slough off extraneous electromagnetic interference. It has been described to me that, when properly grounded, the unused wires help buffer the in-use pairs in a similar, albeit less effective, way to actual shielding. This could have been an important feature in the long runs and (electrically) noisy environments we were accustomed to cabling at the time.
To me the future proofing argument is the most compelling.