I am running third-party software on a Windows Server 2008 machine. This client connects to a server via TCP. The average ping time between the two is 1 ms. However, when I check the app's TCP connection in Resource Monitor, it shows the latency as 20 ms.
I ran a packet capture on the client-side interface and the ACK times are <1 ms.
The TCP settings on the client side are as follows:

Receive-Side Scaling State          : enabled
Chimney Offload State               : automatic
NetDMA State                        : enabled
Direct Cache Access (DCA)           : disabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : ctcp
ECN Capability                      : disabled
RFC 1323 Timestamps                 : disabled
I have also set TcpNoDelay and TcpAckFrequency to 1 in the registry for the specific interface. The NIC has offload enabled.
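For reference, the per-interface registry values I set look like this, with {INTERFACE-GUID} standing in for the actual GUID of the NIC's key (found under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces); both values need a reboot to take effect:

```shell
:: {INTERFACE-GUID} is a placeholder for the specific NIC's interface GUID.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{INTERFACE-GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{INTERFACE-GUID}" /v TcpNoDelay /t REG_DWORD /d 1 /f
```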
How does the resource monitor calculate this 20 ms TCP latency?
Are there any other TCP settings in Win2008 that might reduce this latency?
Best Answer
To start with, Resource Monitor/perfmon uses a different measurement method than Wireshark and similar tools do, which is why the latency figures disagree.

Without going into the depths of the Windows API, the difference is caused by post-processing and the low priority of the measurement.

Given that the actual latency is 1 ms, no, there aren't any further settings you could apply, and I'm not aware of any change that would make the perfmon counters display the "correct" latency.

If you like I can dig up the exact reason perfmon reports higher values, but based on previous experience the above is why.
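If you want to sanity-check application-level latency independently of perfmon, a quick sketch like this times round trips at the socket level, in the same spirit as the ACK times in your capture. This is only an illustration over loopback; you'd point the client at the real server endpoint to measure the path you actually care about:

```python
import socket
import threading
import time

# Quick sanity check of socket-level round-trip time over loopback.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
srv.listen(1)
host, port = srv.getsockname()

def echo_once():
    conn, _ = srv.accept()
    while data := conn.recv(1024):  # echo until the client closes
        conn.sendall(data)
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # same intent as TcpNoDelay
cli.connect((host, port))

samples = []
for _ in range(100):
    t0 = time.perf_counter()
    cli.sendall(b"ping")
    cli.recv(1024)
    samples.append((time.perf_counter() - t0) * 1000.0)
cli.close()

avg_ms = sum(samples) / len(samples)
print(f"average round trip over loopback: {avg_ms:.3f} ms")
```

If the numbers from something like this agree with your packet capture rather than with Resource Monitor, that confirms the 20 ms figure is an artifact of how the counter is sampled, not real wire latency.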