Here's what I do
Label each cable
I have a Brother P-Touch labeler that I use. Each cable gets a label on both ends: if I unplug something from a switch, I want to know where to plug it back in, and vice versa on the server end.
There are two methods you can use to label your cables with a generic labeler. You can run the label along the cable so it can be read easily, or you can wrap it around the cable so that it meets itself and looks like a tag. The former is easier to read; the latter is either harder to read or uses twice as much label, since you type the word twice to make sure it can be read. On mine, long labels get the "along the cable" treatment, and shorter ones get the tag.
You can also buy a specific cable labeler which provides plastic sleeves. I've never used it, so I can't offer any advice.
Color code your cables
I run each machine with bonded network cards. This means I'm using both NICs in each server, and they go to different switches. I have a red switch and a blue switch. All of the eth0s go to the red switch using red cables (and those cables are run to the right), and all of the eth1s go to the blue switch using blue cables (and those cables are run to the left). My network uplink cables are an off color, like yellow, so that they stand out.
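On Linux, the bonding driver exposes the state of a setup like this through a plain-text status file (typically `/proc/net/bonding/bond0`). As a sketch of how you might monitor that both NICs in a bond are actually up, here is a small parser for that file's format; the sample text below is illustrative, not captured from a real machine:

```python
def slave_link_status(bond_text: str) -> dict:
    """Map each slave interface name in bonding status text to its MII status."""
    status = {}
    current = None
    for line in bond_text.splitlines():
        line = line.strip()
        if line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current is not None:
            # Only record MII status lines that follow a slave header,
            # skipping the bond's own top-level MII status.
            status[current] = line.split(":", 1)[1].strip()
            current = None
    return status

sample = """\
Bonding Mode: fault-tolerance (active-backup)
MII Status: up

Slave Interface: eth0
MII Status: up

Slave Interface: eth1
MII Status: down
"""

print(slave_link_status(sample))  # {'eth0': 'up', 'eth1': 'down'}
```

In practice you would read the real file with `open("/proc/net/bonding/bond0").read()` and alert if any slave reports anything other than `up`.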
In addition, my racks have redundant power. I've got a vertical PDU on each side. The power cables plugged into each side have a ring of electrical tape matching that side's color: again, red for right, blue for left. This makes sure that I don't overload a circuit accidentally if things go to hell in a hurry.
Buy your cables
This may ruffle some feathers. Some people say you should cut cables exactly to length so that there is no excess. I say "I'm not perfect, and some of my crimp jobs may not last as long as molded ends", and I don't want to find out at 3 in the morning some day in the future. So I buy in bulk. When I'm first planning a rack build, I determine where, in relation to the switches, my equipment will be. Then I buy cables in groups based on that distance.
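If you buy molded cables in groups by distance, the planning step amounts to rounding each measured run up to a stock length with a little slack. A minimal sketch of that bookkeeping, where the stock lengths and slack value are just example numbers, not a recommendation:

```python
# Example stock molded-cable lengths in meters (illustrative values).
STOCK_LENGTHS_M = [0.5, 1, 2, 3, 5, 7, 10]

def order_list(run_lengths_m, slack_m=0.3):
    """Return {stock_length: quantity} covering every run with slack.

    Each measured run gets slack added, then is rounded up to the
    smallest stock length that covers it.
    """
    order = {}
    for run in run_lengths_m:
        needed = run + slack_m
        stock = next(l for l in STOCK_LENGTHS_M if l >= needed)
        order[stock] = order.get(stock, 0) + 1
    return order

# Four measured runs from devices to their switch, in meters:
print(order_list([0.6, 0.9, 1.5, 2.4]))  # {1: 1, 2: 2, 3: 1}
```

A run longer than the largest stock length would raise `StopIteration` here; a real version would handle that case explicitly.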
When the time comes for cable management, I work with bundles of cable, grouping them by physical proximity (which also groups them by length, since I planned this out beforehand). I use velcro zip ties to bind the cables together, and also to make larger groups out of smaller bundles. Don't use plastic zip ties on anything that you could see yourself replacing. Even if they re-open, the plastic will eventually wear down and not latch any more.
Keep power cables as far from ethernet cables as possible
Power cables, especially clumps of power cables, cause electromagnetic interference (EMI, also known as radio frequency interference, or RFI) in any surrounding cables, including CAT-* cables (unless they're shielded, but if you're using STP cables in your rack, you're probably doing it wrong). Run your power cables away from the CAT5/6, and if you must bring them close, try to cross them at right angles.
Edit
I forgot! I also did a HOWTO on this a long time ago: http://www.standalone-sysadmin.com/blog/2008/07/howto-server-cable-management/
Once upon a time a twisted pair socket was only wired one way and the attached electronics couldn't change what each wire did. You were either a network device (hub/bridge/switch/router) or an end device. In order to electrically connect two network devices together you need a different cable than one used to connect an end device to a network device.
And thus the straight-through and cross-over cables were born.
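For 10/100BASE-T, the difference between the two cable types comes down to one pair swap: a crossover cable lands the transmit pair (pins 1 and 2) on the far end's receive pair (pins 3 and 6), and vice versa, while a straight-through cable passes every pin unchanged. A small sketch of that mapping:

```python
# Pin swap that defines a 10/100BASE-T crossover cable:
# TX pair (1, 2) on one end reaches the RX pair (3, 6) on the
# other, and vice versa. All other pins pass straight through.
CROSSOVER_MAP = {1: 3, 2: 6, 3: 1, 6: 2}

def far_end_pin(pin: int) -> int:
    """Pin on the far connector that a given near-end pin reaches."""
    return CROSSOVER_MAP.get(pin, pin)

for p in (1, 2, 3, 6):
    print(p, "->", far_end_pin(p))
```

(Gigabit Ethernet uses all four pairs and modern PHYs auto-negotiate the crossover via Auto MDI-X, which is why this distinction has largely disappeared.)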
To avoid the use of a second cable type (which would invariably lose its label and confuse the bejeebers out of some network person months or years down the road when they pulled it out of the bin), most devices intended to connect to both network devices and end devices had an uplink port that allowed the use of 'normal' cables.
It was as simple as that.
Edit: Google-Fu successful. It WAS ARCnet!
Why weren't switches/hubs designed from the beginning to use crossover cables instead?
Back when the 10base-T specification was still under consideration, the twisted pair architecture most common at the time in office networks was ARCnet. 10base-T wasn't ratified as an actual standard until 1990, later than I thought. Connecting ARCnet hubs together appears to have required a cable with flipped pairs, different from the one used to connect endpoint devices.
Since the standards committee would have been made up of veteran network engineers from the various hardware vendors and other interested parties, they had been dealing with the multiple cable problem for years and likely considered it status-quo. It is also possible that the 'draft' devices under development by the vendors also had electrical requirements for the cable, influenced as they were by ARCnet device manufacture. Clearly the committee didn't consider the use of multiple cable types to be enough of a problem to standardize the practice out of existence.
Best Answer
There is at least one solution - IEC Lock - that I'm aware of. I think it's based on some kind of spring-friction mechanism and fits existing sockets. I haven't personally seen the need to try any locking solutions, so YMMV.
Places selling those can be found, e.g., by searching Google. Prices are around 3-4 USD, so I'd say it's relatively cheap.