No, you cannot assume there will be no signal on the line just because you're not deliberately sending something at the network packet layer. There are things called link pulses, and there can be other negotiation going on between the switch and the PHY, and possibly the MAC.
The best thing to do would be to let the MAC and PHY do their jobs. You can use a microcontroller with a MAC and PHY to blip a pin whenever one of your packets is received. You'd write a very bare "driver" that just waits for a packet at the MAC layer, blips a pin, then clears the packet. With such dedicated firmware, the jitter it adds should be quite small.
If you are willing to slow down your packet rate, you can do this even more easily. Most switches and other Ethernet devices have send and/or receive LEDs. These are usually pulse-stretched: the LED is held on for a few tens of milliseconds per packet, which is why sending one every 5 ms won't work, since the LED would never turn off. If you send a packet every 100 ms, the LED should be off again by the time the next packet arrives. You can also wire up your own PHY and set it up to blip a pin briefly for each packet; that's basically what the chip in the switch is doing when it drives the LED.
Both are needed in half-duplex.
Duplex basically means: Two transmission channels, one for sending, one for receiving.
For Ethernet, full duplex means: TX and RX can happen at the same time.
For Ethernet, half duplex means: TX and RX do not happen at the same time, but, the link still being duplex, they use separate channels.
This differs from the use of the word half duplex in other transmission schemes, like serial communications.
This has to do with the origins and definitions of Ethernet. Most of it goes back directly to what was possible 30 years ago; the 10BASE signaling dates back to at least 1981, and 100BASE was just an extension of that. Gigabit Ethernet changes this and does proper full duplex, sending and receiving on all wire pairs simultaneously.
Now, speaking of old-style Ethernet, 10BASE2 and the like: the protocols are hardware-independent, so the same signal would be encoded on optical or electrical transmission channels. Back then, optical channels could not easily switch between sending and receiving. Also, early structured-cabling Ethernet was connected through a hub (not a switch), so CSMA/CD had to be implemented, meaning senders had to be able to listen for incoming transmissions (collisions) during their own sending. And in addition, the early protocol stacks ran on CPUs so wimpy they could not handle transmission and reception at the same time, giving you reason to run half-duplex in an environment that was otherwise perfectly capable of full duplex.
The only thing you might be able to test is the continuity of the cable. If you are just trying to see whether data can traverse the cable, then this is an okay way to test. If you are looking to test data rates, etc., this is not the method you should use.
I have also experienced Layer 1 (physical layer) issues where the cable was fine but the female port (the jack on your Ethernet card or motherboard) had bent contact pins that were not making good contact with the Ethernet connector.
Other than continuity (does point "A" have an electrical connection to point "B"?), the electrical tests you would be able to perform with a multimeter have nothing to do with data rates.
PoE is best tested at the powered device, switch, or router, because part of the IEEE standard for PoE requires the endpoint to negotiate with the end supplying power; no current will be present if you just connect a test lead.