I'll try to answer your question directly without going into a huge rant about TCP vs. UDP.
Basically, you need to understand that HTTP and DNS are completely independent applications/protocols. Sometimes you actually need to send a DNS query to a DNS server, and sometimes you don't (when the record is already cached locally on your PC/server).
We do NOT have a DNS record cached.
- http://google.com is entered in the browser.
- Your PC checks the local DNS cache, and sees it does NOT have a record for google.com
- A UDP DNS query is sent to a DNS server, in this case it's most likely your ISP's DNS server.
- The DNS server sends a UDP response back.
- You now have your answer in the form of an IP address, and you can initiate your TCP connection to google.com
- The 3-way handshake occurs between you and google.com (SYN, SYN/ACK, ACK) - if you do not know what this is you can search for "TCP 3 way handshake" and find some good information.
- After the handshake completes, the HTTP exchange happens and the page renders in the form of your favorite search engine.
We HAVE a DNS record cached. There is a very small difference here, but I'm going to include the whole thing so you can see the comparison.
- http://google.com is entered in the browser.
- Your PC checks the local DNS cache, and sees it has a record cached in the form of an IP address.
- You now have your IP address for google.com, and you can initiate your TCP connection to google.com
- The 3-way handshake occurs between you and google.com (SYN, SYN/ACK, ACK)
- After the handshake completes, the HTTP exchange happens and the page renders in the form of your favorite search engine.
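Since the two walkthroughs differ only in their first step, here's a toy sketch of that difference in Python. To be clear, the dict standing in for the ISP's DNS server and the IP address are made up for illustration - no real DNS traffic is involved:

```python
# Toy model of the two lookup flows above (NOT a real resolver --
# the UPSTREAM_DNS dict stands in for your ISP's DNS server, and the
# IP address is invented for the example).

UPSTREAM_DNS = {"google.com": "142.250.80.46"}  # hypothetical answer
local_cache = {}

def resolve(hostname):
    """Return (ip, how_we_got_it): a cache hit skips the UDP round trip."""
    if hostname in local_cache:                  # cached case: no query sent
        return local_cache[hostname], "cache"
    ip = UPSTREAM_DNS[hostname]                  # uncached case: UDP query + response
    local_cache[hostname] = ip                   # remember it for next time
    return ip, "udp query"

ip, how = resolve("google.com")    # first visit: UDP query needed
# -> now the TCP 3-way handshake to `ip` would begin
ip2, how2 = resolve("google.com")  # second visit: served from cache
```

Either way, once you have the IP address the rest of the flow (handshake, then HTTP) is identical.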
So just because you're trying to reach a webpage doesn't mean you have to send a UDP DNS query. DNS is independent, and visiting a webpage is not the only time you'd need to use it. Feel free to comment if you need clarification.
Actually there is some truth in what the other person was saying, though it is largely false.
Is it true (that packets in a TCP connection follow an established path)?
Yes, in general, all packets in a TCP stream will follow the same path through the network - even in the presence of a "diamond" network, all packets in the same stream will be routed down the same side of the diamond. However, it's not true that if that side of the diamond goes down, the TCP stream will time out - in that case it will be rerouted and its path will change.
Why would switches give TCP connections this special treatment?
Here I'll answer why switches and routers will always try to send packets in a stream along the same side of a "diamond", hopefully giving some insight into where your interlocutor got their (flawed) understanding of the situation. The brief answer is "performance".
Background: While it's true that TCP can handle reordered packets successfully, in practice reordering tends to cause significant performance issues. For instance, if the other end sees packets in the order "1 2 4 3", it will notice when it sees packet 4 that it has not yet seen packet 3 and ask the sender to resend it - and it will then receive packet 3 twice (the original plus the resend). This wastes bandwidth (packets are sent twice unnecessarily) and also reduces connection performance, since the sender will assume the "packet loss" it thinks it has seen is due to congestion and slow down its sending.
Therefore most quality switches and routers will go to quite some lengths to avoid packet reordering, and as part of this will try to send all packets from the same TCP stream along the same path (in case one path has higher latency than the other).
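To see why mere reordering looks like loss to the receiver, here's a minimal sketch of a receiver that re-ACKs whenever a segment arrives out of order. This models TCP's duplicate-ACK behaviour very loosely, not any real stack:

```python
def receive(arrival_order):
    """Count the duplicate ACKs a TCP-like receiver would emit.

    The receiver ACKs the highest in-order segment it has seen; every
    out-of-order arrival re-ACKs that same number (a "duplicate ACK"),
    which the sender interprets as possible loss/congestion.
    """
    expected = 1       # next segment we're waiting for
    dup_acks = 0
    buffered = set()   # out-of-order segments held for later
    for seg in arrival_order:
        if seg == expected:
            expected += 1
            while expected in buffered:   # drain anything now in order
                buffered.remove(expected)
                expected += 1
        else:
            buffered.add(seg)
            dup_acks += 1                 # "I'm still waiting for `expected`"
    return dup_acks

receive([1, 2, 3, 4])   # in order: no duplicate ACKs
receive([1, 2, 4, 3])   # reordered: duplicate ACK, possible spurious resend
```

No data was actually lost in the second case, but the sender can't tell the difference - hence the effort routers put into avoiding reordering in the first place.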
Contrary to what others have said, this happens even for core routers. Where there are multiple links being load-balanced over, the router will try to send packets from the same TCP stream over the same link. Although this might seem to require tracking a huge amount of state, in fact it can be done without tracking every stream: the router takes the identifiers of the stream (source/destination IP address, and sometimes source/destination port) and combines them (hashes them) into a single number, which is used to select the link to send the packet on. (To give an example of one vendor's implementation of this feature, search for "ip cef load-sharing algorithm".)
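As a rough illustration of the hashing idea - the MD5 hash and link names here are invented for the sketch; real routers use their own, much faster hash functions:

```python
import hashlib

LINKS = ["link-A", "link-B"]   # the two sides of the "diamond"

def pick_link(src_ip, dst_ip, src_port, dst_port, proto, links=LINKS):
    """Hash the flow identifiers to pick a link (a toy ECMP-style hash;
    real implementations are vendor-specific and not MD5)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return links[int.from_bytes(digest[:4], "big") % len(links)]

# Every packet of one TCP stream hashes to the same link...
a = pick_link("10.0.0.1", "142.250.80.46", 51514, 80, "tcp")
b = pick_link("10.0.0.1", "142.250.80.46", 51514, 80, "tcp")
# ...while a different stream may well land on the other link.
```

Because the hash depends only on fields that are constant for the life of the stream, the router needs no per-stream state at all.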
Doesn't it make the network less reliable in general?
Yes, of course it would, if networks did exactly what your interlocutor described - so they don't do that. The behaviour the other person talked about where a path is dropped and TCP connections using that path time out is not typical: though it may be possible to configure some products to behave like this, in general both TCP and UDP will start to use the alternative route straight away.
When a link goes down, the hashes will be redistributed across the remaining links. This will result in TCP streams changing their paths - which may sometimes result in some reordering - but this is generally acceptable as it happens occasionally, rather than on every packet.
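A sketch of that redistribution, reusing the same per-flow-hash idea (the hash function and addresses are invented for illustration):

```python
import hashlib

def pick_link(flow_key, links):
    """Toy per-flow hash over whichever links are currently up."""
    digest = hashlib.md5(flow_key.encode()).digest()
    return links[int.from_bytes(digest[:4], "big") % len(links)]

flows = [f"10.0.0.{i}|203.0.113.9|tcp" for i in range(100)]

before = {f: pick_link(f, ["link-A", "link-B", "link-C"]) for f in flows}
after = {f: pick_link(f, ["link-A", "link-B"]) for f in flows}   # link-C died

moved = sum(1 for f in flows if before[f] != after[f])
# Flows that hashed to link-C must move; with simple modulo hashing some
# link-A/link-B flows may get reshuffled too -- a one-time reordering risk,
# rather than reordering on every packet.
```

This is the one-off disruption described above: paths change once at failover, and then streams settle onto their new links.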
Other relevant information
The discussion was originally about the difference between handling of TCP and UDP packets.
In fact, all of the above handling is done for UDP packets too: that is, UDP packets from the same "connection" will also typically follow the same path. This is desirable since UDP is often used for realtime media streams such as phone calls - if many of your voice packets arrive out of order, this can result in terrible audio quality (for a phone call, large buffering is highly undesirable, so the receiver will typically drop packets that arrive out of sequence).
Best Answer
It is not the hosts that decide which route a packet will follow; each router in the path makes its own decision.
(Actually, the originating host could use the IP strict source routing option to force packets through a specific route, but it is rarely, if ever, used, and it is totally ignored by routers on the Internet.)
So each router can change the route of packets depending on network conditions (a link dropping, congestion on a link, load balancing...)
What a host can decide is to alter its TCP window (flow control) to modify the rate at which it sends information, but this doesn't affect routing.
Except for Policy-Based Routing, routing is a layer 3 decision that doesn't take layer 4 (TCP/UDP) information into account, so it is performed the same way for TCP, UDP, ICMP, etc.