Wireshark and other capture tools are not aware of TCP streams during capture. To be, they would have to keep TCP stream state in memory, which would reduce throughput.
That being said, there might be a way. If your packets carry metadata, there might be some identifier (a header, a string, etc.) that appears in all of the packets that have it. If so, you could filter on it with iptables and feed the matches to an NFLOG target, which can be captured with dumpcap, a tool that ships with Wireshark.
For example, if all your metadata packets contain an "X-Metadata" string and you capture on eth0, you could do:
iptables -A INPUT -i eth0 -m string --algo bm --string "X-Metadata" -j NFLOG --nflog-group 1
dumpcap -i nflog:1 -w test.pcap
This will save all packets containing "X-Metadata" to the test.pcap file. Mind you, if there is some other way to identify the packets with metadata, iptables can probably match on that too; there are many filters/extensions.
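For instance, if the marker were a binary pattern rather than ASCII text, the same string match can take a hex pattern (the pattern below is just a placeholder):
iptables -A INPUT -i eth0 -m string --algo bm --hex-string "|DE AD BE EF|" -j NFLOG --nflog-group 1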
You asked a good question. Don't let anyone tell you otherwise.
Regrettably, there is no rule of thumb for the types of protocols that use TCP versus the types that use UDP.
The decision of whether a protocol uses one or the other comes down to whoever wrote/created the protocol in the first place.
If they didn't want to bother writing their own "reliable delivery" system, then they could simply use TCP, which provides all the reliability innately.
If they thought (knowing their own protocol intimately) that they could write a better or more appropriate "reliable delivery" system, then they could build that into the protocol itself and simply use UDP as their transport.
As an example, take a look at a TFTP sample capture over UDP: you'll notice there is a built-in acknowledgement system within TFTP itself (each data block is acknowledged by its block number) -- having both that and TCP's additional acknowledgement system would simply be redundant.
Whereas FTP, which runs over TCP, does not have a built-in acknowledgement system. A user simply requests a file, and the sender sends it. There is a "file transfer complete" notification, but nothing that guarantees each bit of the file was received. FTP relies on TCP's reliability to ensure the file gets all the way across.
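To make the UDP side concrete, here is a minimal, hypothetical sketch (in Python) of TFTP-style reliability built on top of UDP: stop-and-wait, one block in flight, each block acknowledged by number, retransmit on timeout. The port and payloads are made up, and a real protocol would also have to handle things like a lost final ACK:

import socket
import struct
import threading
import time

ADDR = ("127.0.0.1", 5005)  # arbitrary port for this toy demo

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    expected = 1
    while True:
        packet, peer = sock.recvfrom(2048)
        block, = struct.unpack("!H", packet[:2])  # 16-bit block number, as in TFTP
        data = packet[2:]
        if block == expected:
            expected += 1
        sock.sendto(struct.pack("!H", block), peer)  # ACK carries the block number
        if not data:  # empty payload signals end of transfer
            return

def send_reliably(chunks):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)  # retransmission timeout
    for block, chunk in enumerate(chunks, start=1):
        packet = struct.pack("!H", block) + chunk
        while True:
            sock.sendto(packet, ADDR)
            try:
                ack, _ = sock.recvfrom(64)
                if struct.unpack("!H", ack[:2])[0] == block:
                    break  # block acknowledged, send the next one
            except socket.timeout:
                pass  # lost data or lost ACK: retransmit

threading.Thread(target=receiver, daemon=True).start()
time.sleep(0.2)  # give the receiver a moment to bind (toy synchronisation)
send_reliably([b"hello ", b"world", b""])  # empty chunk marks completion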
That said, I looked through the list of ports on the wiki page you linked, and saw a surprising number of protocols that supposedly use both TCP and UDP. This was foreign to me, as I only know of very few that use both (namely, DNS). It may be that there is a TFTP implementation that uses TCP, but if so, I'm afraid I have no exposure to it.
Domain Name System (DNS) is traditionally the protocol referred to when discussing protocols that use both TCP and UDP. It doesn't use these at the same time, mind you. But different functions within DNS might call for TCP vs UDP.
For example, when making a simple A-record resolution request, the "request" and "response" are very lightweight, each fitting in a single packet. As such, this is typically done over UDP.
But if a request or response requires a larger transfer (traditionally anything over 512 bytes, absent the EDNS extension), then DNS uses TCP to ensure "all the bits" get there. This is common with full zone transfer (AXFR) requests.
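As a sketch of that UDP-first, TCP-fallback behaviour, here is a short Python example using the dnspython library; the resolver address 8.8.8.8 is just an example:

import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
response = dns.query.udp(query, "8.8.8.8", timeout=2)
if response.flags & dns.flags.TC:
    # The response was truncated: the answer didn't fit in a UDP message,
    # so repeat the same query over TCP.
    response = dns.query.tcp(query, "8.8.8.8", timeout=2)
print(response.answer)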
OK, maybe I found the answer. I found a table with the following values.
It looks like EstimatedRTT and DevRTT at first have the same value as the SampleRTT (130). The rest can be calculated with the following formulas, with b = 0.25 and a = 0.125:
EstimatedRTT = (1 - a) * EstimatedRTT_last + a * SampleRTT
DevRTT = (1 - b) * DevRTT_last + b * |SampleRTT - EstimatedRTT|
Timeout = EstimatedRTT + 4 * DevRTT
Source: https://www.ukessays.com/essays/it-research/round-trip-time-rtt.php
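These formulas are easy to check with a few lines of Python; the sample RTTs below are made up, and the initialisation matches that table (first EstimatedRTT = first DevRTT = first SampleRTT):

ALPHA, BETA = 0.125, 0.25  # the a and b from the formulas above

def rtt_table(samples):
    est = dev = samples[0]  # first row: EstimatedRTT = DevRTT = SampleRTT
    print(f"Sample={samples[0]}  EstimatedRTT={est:.1f}  DevRTT={dev:.1f}  Timeout={est + 4 * dev:.1f}")
    for s in samples[1:]:
        est = (1 - ALPHA) * est + ALPHA * s
        dev = (1 - BETA) * dev + BETA * abs(s - est)
        print(f"Sample={s}  EstimatedRTT={est:.1f}  DevRTT={dev:.1f}  Timeout={est + 4 * dev:.1f}")

rtt_table([130, 120, 110, 90, 140])  # made-up SampleRTT values in ms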
Update:
OK, thanks to Zac I looked into RFC 6298, where it says (section 2.2):

When the first RTT measurement R is made, the host MUST set
SRTT <- R
RTTVAR <- R/2
RTO <- SRTT + max (G, K*RTTVAR)
where K = 4.
So for the first segment, EstimatedRTT = SampleRTT and DevRTT = SampleRTT/2.
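With SampleRTT = 130 that gives EstimatedRTT = 130, DevRTT = 130/2 = 65, and Timeout = 130 + 4 * 65 = 390.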
So, if I understood it right, the first segment of the table would look like this:
Now, which source should I trust? I think RFC 6298.