For sure :)
# ip address list dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1e:4f:9b:4a:ab brd ff:ff:ff:ff:ff:ff
    inet 10.10.141.83/24 brd 10.10.141.255 scope global eth0
    inet6 fe80::21e:4fff:fe9b:4aab/64 scope link
       valid_lft forever preferred_lft forever
# ip address add 10.10.141.253/24 dev eth0
# ip address list dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1e:4f:9b:4a:ab brd ff:ff:ff:ff:ff:ff
    inet 10.10.141.83/24 brd 10.10.141.255 scope global eth0
    inet 10.10.141.253/24 scope global eth0
    inet6 fe80::21e:4fff:fe9b:4aab/64 scope link
       valid_lft forever preferred_lft forever
# ping -I 10.10.141.83 10.10.141.253
PING 10.10.141.253 (10.10.141.253) from 10.10.141.83 : 56(84) bytes of data.
64 bytes from 10.10.141.253: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 10.10.141.253: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 10.10.141.253: icmp_seq=3 ttl=64 time=0.038 ms
^C
--- 10.10.141.253 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.034/0.040/0.050/0.010 ms
# ip address delete 10.10.141.253/24 dev eth0
# ping -I 10.10.141.83 10.10.141.253
PING 10.10.141.253 (10.10.141.253) from 10.10.141.83 : 56(84) bytes of data.
From 10.10.141.83 icmp_seq=1 Destination Host Unreachable
From 10.10.141.83 icmp_seq=2 Destination Host Unreachable
From 10.10.141.83 icmp_seq=3 Destination Host Unreachable
^C
--- 10.10.141.253 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3016ms
Actually dead simple. :) (Just kidding, it's always simple if you already know it)
I'm not sure L2 would really work, but with ip neigh
you should be able to modify the ARP cache as well (so much for dead simple).
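To sketch what I mean (the neighbour address and MAC below are made-up examples, and exact options may vary with your iproute2 version), you can list the current neighbour/ARP cache for the interface:
# ip neigh show dev eth0
pin a static entry (10.10.141.1 and the MAC here are just placeholders):
# ip neigh replace 10.10.141.1 lladdr 00:1e:4f:9b:4a:ff dev eth0 nud permanent
and remove it again afterwards:
# ip neigh del 10.10.141.1 dev eth0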
splattne has covered what a patch panel is, and why it's different to a switch.
To answer the last part of your question: the reason host network connections don't go directly to switches generally comes down to ease of management. For example, desk locations on an office floor can be cabled back to a wiring closet patch panel that is labeled with the locations. You can then connect short patches ('tails' or 'whips') between the patch panel and the switch. This makes re-patching desk locations (for user moves, etc.) much simpler, as the desk->patch panel runs don't need to be touched at all.
In a data centre, a similar argument applies. If a server needs to be moved to a different subnet that lives on a different physical switch from the one it's connected to, having intermediate patch panels is very useful. For example, many server rooms have an MDF (main distribution frame); all servers and all switch ports are cabled back to labeled patch panels on this frame. Then, creating a connection between a server and a switch is simply a matter of running a patch between two ports on the frame, rather than needing to have floor tiles lifted to run a new end-to-end patch.
EDIT: To add a few sample cabling topologies:
1) User floors.
[host]<<--patch-->>[floor port]<<--structured cabling-->>[wiring closet patch panel]<<--harnessed/bundled cabling-->>[wiring closet access switch]
2) Data centres, centralised access.
[host]<<--patch-->>[cabinet patch panel]<<--structured cabling-->>[master frame patch panel A]<<--patch-->>[master frame patch panel B]<<--harnessed/bundled cabling-->>[data centre access switch]
Note that in the above you could also have a cabinet patch panel in the switch cabinet; however, when using large modular switches (240+ ports per chassis), providing that many patch panel ports tends to eat up valuable U-space in the cabinet, which is why these connections are often harnessed directly back to the master frame.
3) Data centres, distributed access (end of row).
[host]<<--patch-->>[cabinet patch panel]<<--harnessed/bundled cabling-->>[end of row access switch]
This kind of topology is often used with blade deployments, as the number of blade chassis you have deployed dictates precisely the number of ports you need to provision. Note the reduced physical flexibility, however - hosts must be cabled to switches in the same row. Your logical network design should take this into account.
4) Data centres, distributed access (top of rack).
[host]<<--patch-->>[top of rack access switch]
Potentially useful where you have a very homogeneous data centre with lots of nodes that have identical requirements.
Note these are just some examples - there are plenty of other approaches as well.
You are correct to suspect that your $5 network tester is inadequate to the task. It's fine for verifying that your wire map is correct and that you have connectivity, but it won't detect any of the other problems, like the many varieties of crosstalk.
If you're comfortable learning about problems as they occur, you're probably fine with the testing process you described. Another good test would be to push some actual traffic through each cable (iperf on both ends) and see if any of them run dramatically slower than the rest.
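As a rough sketch (the address below is just a placeholder for whatever the far end of the run is using), start iperf in server mode on one end of the cable:
# iperf -s
then point the client at it from the other end and let it run for 30 seconds or so:
# iperf -c 192.168.1.10 -t 30
Any run that consistently reports well below the expected line rate is worth a closer look.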
The downside is that when some weird network issue occurs, you'll never be sure if it's your wiring or something farther up the stack.
There are companies that lease test equipment. If you could get your hands on a good Fluke meter for a day, it would go a long way toward identifying any of the more esoteric wiring problems.