There's no technical reason you can't do this.
I'd probably do something similar under the circumstances, actually. From a purely Linux point of view it's really easy: give each end an IP address from a /30 subnet, which provides exactly 2 usable host addresses, and it becomes a simple point-to-point link.
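As a minimal sketch with iproute2 (the interface name eth1 and the 192.168.0.0/30 subnet are placeholders, not anything from your setup):

```
# On server A: assign the first usable address of the /30
ip addr add 192.168.0.1/30 dev eth1
ip link set eth1 up

# On server B: assign the second (and only other) usable address
ip addr add 192.168.0.2/30 dev eth1
ip link set eth1 up

# Verify connectivity from server A
ping -c 3 192.168.0.2
```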
If you wanted to grow the network, you could get a 10GE switch and put the server-to-server traffic on a separate VLAN. There's some very shiny gear in the Force10 range of switches that can do line-rate 10GE switching with enormous buffers.
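On the server side, tagging that traffic is straightforward with iproute2 (again, eth1, VLAN ID 100, and the addressing are placeholder values; the kernel's 8021q module must be available):

```
# Create a VLAN subinterface for inter-server traffic on VLAN 100
ip link add link eth1 name eth1.100 type vlan id 100
ip addr add 192.168.100.1/24 dev eth1.100
ip link set eth1.100 up
```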
This is an interesting question, since I've never seen anything that authoritatively states the design decisions behind that choice. Everything I've come across, whether on the Interwebs or from conversation with people smarter than me in this area, seems to indicate two possibilities:
- Future proofing
- Extra shielding
Future Proofing
By the time of the Cat5 spec we had seen the explosion of data cable runs. Telephone had been using Cat3, or something similar, for some time; serial connections had been run throughout university campuses; ThickNet had spidered its way around; and ThinNet had started to see significant use in microcomputer labs and, in some cases, offices. It was obvious that networking computing equipment was the wave of the future. We had also learned the terrible costs of changing out cabling to meet the demands of longer segments or higher speeds. Let's face it, replacing cabling is a nightmarish and expensive chore.
The notion of limiting this cost by developing a cable that could be run and left in place for some length of time was definitely an appealing one. So forward-thinking engineers, who were probably tired of replacing wiring, could easily have found it worthwhile to design extra pairs into the spec, especially at a time when the price of bulk copper was relatively low. After all, which is more expensive: adding 4 extra wires, or having a team of people remove old wiring and pull new?
Extra Shielding
Since typical Cat5 is UTP (unshielded twisted pair), it does not contain the grounded foil that sloughs off extraneous electromagnetic interference. It has been described to me that, when properly grounded, the unused wires will help buffer the in-use pairs in a similar, albeit less effective, way to actual shielding. This could have been an important feature given the long runs and (electrically) noisy environments we were accustomed to running cabling through at the time.
To me, the future-proofing argument is the most compelling.
Just about any device built in the last decade should support Auto MDI-X. If you don't have a link, I would suspect the cable first (maybe try connecting it to a switch to see whether you get a link there).
Anyway, you can force MDI-X with `ethtool`; the relevant option documented in the manpage is `mdix on|off|auto`, passed to `ethtool -s`.
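A minimal sketch (eth0 is a placeholder interface name; forcing MDI-X requires driver support, and not all NICs honor it):

```
# Force crossover (MDI-X) mode on eth0; use "off" for MDI or "auto" to restore Auto MDI-X
ethtool -s eth0 mdix on

# Check the currently reported MDI-X status
ethtool eth0 | grep -i mdi
```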