How Hardware Timestamps (Hardstamps) Work

I am currently diving into topics like PTP (IEEE 1588) and precise timestamping for capturing network traffic. Many resources on the Internet describe hardware timestamps ("hardstamps") as a feature of network cards that lets them attach a timestamp to a packet before it enters the (typically non-deterministic) queue towards the application layer.

What are these timestamps like? Are they measured relative to a start signal sent by the OS, or are they absolute timestamps? If absolute, are they synchronized to the OS clock or to a clock managed by the NIC?

Best Answer

The timestamping happens in hardware, based on a clock in the NIC. Depending on your NIC, the timestamp is either written to a NIC register or, on some cards (e.g. the Intel 82580), prepended to the packet buffer. For cards that write the timestamp to a register you have to read the register before the card can timestamp the next packet, which effectively limits your throughput, while the 82580 can timestamp all packets.
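
Roughly, using this on Linux is a two-step affair: an ioctl telling the driver to start timestamping, and a socket option asking the kernel to hand you the raw hardware stamps. A minimal sketch, not a complete program (the interface name and socket are placeholders, error handling is trimmed, and SIOCSHWTSTAMP needs CAP_NET_ADMIN):

/* Sketch: enable hardware RX timestamping on an interface and request
 * raw hardware timestamps on an already-open socket. */
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

static int enable_hw_timestamping(int sock, const char *ifname)
{
    struct hwtstamp_config cfg = {0};
    struct ifreq ifr = {0};

    /* Tell the NIC driver to timestamp all received packets in hardware. */
    cfg.tx_type = HWTSTAMP_TX_OFF;
    cfg.rx_filter = HWTSTAMP_FILTER_ALL;
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&cfg;
    if (ioctl(sock, SIOCSHWTSTAMP, &ifr) < 0)
        return -1;

    /* Ask the kernel to deliver the raw NIC timestamp via SCM_TIMESTAMPING. */
    int flags = SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE;
    return setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags));
}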

In Linux, you will receive the timestamp in a data structure outside of the packet itself. Here's a bit from the kernel docs on it:

These timestamps are returned in a control message with cmsg_level
SOL_SOCKET, cmsg_type SCM_TIMESTAMPING, and payload of type

struct scm_timestamping {
    struct timespec ts[3];
};

The structure can return up to three timestamps. This is a legacy
feature. Only one field is non-zero at any time. Most timestamps
are passed in ts[0]. Hardware timestamps are passed in ts[2].

ts[1] used to hold hardware timestamps converted to system time.
Instead, expose the hardware clock device on the NIC directly as
a HW PTP clock source, to allow time conversion in userspace and
optionally synchronize system time with a userspace PTP stack such
as linuxptp. For the PTP clock API, see Documentation/ptp/ptp.txt.
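
Reading that structure out of the control message looks roughly like this (a sketch only, assuming a socket already configured for hardware timestamping as above):

/* Sketch: receive one packet and pull the raw hardware timestamp out of
 * the SCM_TIMESTAMPING control message described above (ts[2]). */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/errqueue.h>   /* struct scm_timestamping */

#ifndef SCM_TIMESTAMPING
#define SCM_TIMESTAMPING SO_TIMESTAMPING   /* the kernel defines them as equal */
#endif

static void read_one_hw_timestamp(int sock)
{
    char data[2048];
    char ctrl[512];
    struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = ctrl, .msg_controllen = sizeof(ctrl) };

    if (recvmsg(sock, &msg, 0) < 0)
        return;

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMPING) {
            struct scm_timestamping tss;
            memcpy(&tss, CMSG_DATA(c), sizeof(tss));
            /* ts[2] is the raw NIC timestamp; note it is NIC time, not system time. */
            printf("hw timestamp: %lld.%09ld\n",
                   (long long)tss.ts[2].tv_sec, (long)tss.ts[2].tv_nsec);
        }
    }
}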

The timestamp is based on NIC time. You can adjust the NIC clock, but the more common approach is to let it run free (which guarantees monotonic time) and do any adjustment in user space. Adjustment typically means working out the offset to system time and converting all received timestamps to system time. The kernel can (or at least could; it has been a long time since I checked) do this for you, but I believe the recommended way is to do it yourself in user space. In other applications you simply don't care about absolute time, since the relative time between packets is the only important factor.
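
If you do want to map the free-running NIC timestamps onto system time yourself, one way is to read the NIC's PTP hardware clock (exposed as /dev/ptpN, per the PTP clock API mentioned above) and the system clock close together, compute the offset, and apply it to each received timestamp. A rough sketch, assuming /dev/ptp0 is the right device and ignoring drift between samples:

/* Sketch: estimate the offset between the NIC's PTP hardware clock and
 * CLOCK_REALTIME, then use it to convert raw NIC timestamps to
 * approximate system time. Drift and read latency are ignored here. */
#include <stdint.h>
#include <time.h>

/* Map an open /dev/ptpN file descriptor to a dynamic posix clock id,
 * as done in the kernel's testptp.c example. */
#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

static int64_t to_ns(struct timespec t)
{
    return (int64_t)t.tv_sec * 1000000000LL + t.tv_nsec;
}

/* Offset in nanoseconds: system time minus NIC time. */
static int64_t phc_to_sys_offset_ns(int ptp_fd)
{
    struct timespec sys, phc;
    clock_gettime(CLOCK_REALTIME, &sys);
    clock_gettime(FD_TO_CLOCKID(ptp_fd), &phc);
    return to_ns(sys) - to_ns(phc);
}

/* Convert a raw hardware timestamp (ts[2] from SCM_TIMESTAMPING). */
static int64_t hw_ts_to_sys_ns(struct timespec hw, int64_t offset_ns)
{
    return to_ns(hw) + offset_ns;
}

The file descriptor would come from open("/dev/ptp0", O_RDONLY). In practice you would resample the offset regularly, or simply let a PTP stack like linuxptp keep the clocks aligned for you.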

Two colleagues of mine wrote an SLA measurement application relying on this. You'll find the source on GitHub.

There's also MoonGen, a DPDK-based framework with Lua scripting abilities for packet processing. They have an excellent paper that includes information on timestamping.