I would give an answer of "no, but it is remarkably similar."
Here's some history and a largely complete explanation.
Circuits 101
Information networks can route traffic in one of two basic ways: circuit switching or packet switching. Circuit switching offers many more guarantees than packet switching, but those guarantees come at a cost: circuit-switched networks can't degrade gracefully. The classic circuit-switched network is the PSTN, and a virtual circuit would be something like a DS0 on the PSTN.
A DS0 basically works as part of a bundle of connections, usually in a DS1. In a DS1, you have a bundle of DS0s which are transmitted together, frame by frame, in a time-division manner, so each DS0 is guaranteed a specific bandwidth, timeliness, etc. by the underlying network transport.
Another way to look at this is that a physical circuit would be something like a cat6 cable running between two terminals. You can send data back and forth over the wires at guaranteed speeds, and no other communications are going to interfere with that. Indeed, the early telephone networks worked by connecting physical circuits (that is, copper wires) using manual or electromechanical switches. As this was computerized, the circuits were virtualized: digital (as opposed to analog) information was sent down the wires on a time-division basis, again with each circuit reserving a slot in the time-division schedule.
What this means is that circuit switching is more about bandwidth reservation than it is about routing; the former leads to the latter. That is, a circuit reserves bandwidth for the entire life of the connection.
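To make the time-division idea concrete, here is a toy sketch (not a real T1 framer, and it ignores framing bits and signaling) of how DS0 channels are interleaved into DS1 frames. Each channel owns one fixed byte slot per frame, which is why its bandwidth is guaranteed no matter what the other channels send:

```python
# Toy illustration of time-division multiplexing: 24 DS0 channels
# interleaved into DS1 frames. Each channel owns one fixed slot per
# frame, so its share of the link is guaranteed.

NUM_CHANNELS = 24  # a real DS1 carries 24 DS0s

def build_frame(channel_bytes):
    """One DS1 frame: one byte from each DS0, in fixed slot order."""
    assert len(channel_bytes) == NUM_CHANNELS
    return bytes(channel_bytes)

def extract_channel(frames, slot):
    """Receiver side: recover one DS0 by reading its fixed slot."""
    return bytes(frame[slot] for frame in frames)

# Channel 3 sends b"hi"; every other channel sends an idle byte (0x7F).
frames = [
    build_frame([data if ch == 3 else 0x7F for ch in range(NUM_CHANNELS)])
    for data in b"hi"
]
print(extract_channel(frames, 3))  # b'hi'
```

Because the slot assignment never changes for the life of the circuit, no channel can steal another's capacity, which is exactly the reservation property packet switching gives up.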
Why TCP Connections are not Virtual Circuits
TCP/IP is fully packet-switched; it makes no provision for virtual circuits. This is why mechanisms like QoS are often necessary when trunking VoIP (a virtual circuit has built-in QoS guarantees). You have no guarantee that all packets will be routed alike. They may not arrive in the same order. They may not arrive in a timely manner (from a connection-oriented perspective). So you can't really build virtual circuits per se on top of a packet-switched protocol like IP.
TCP comes somewhat close and can in fact work as an imperfect substitute: it offers as many of those guarantees as it can. This is why, when implemented on TCP/IP, H.323 uses TCP connections instead of the virtual circuits the protocol prefers.
But TCP connections still aren't circuits, because they don't reserve bandwidth at every switch along the path between the two nodes.
Of course, TCP connections are more than just datagrams. They include routing information (as does UDP), but they also include the accounting information necessary to reconstruct the stream, in order, on the other side.
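That accounting information is essentially sequence numbering. Here is a minimal sketch of the reordering bookkeeping: each segment carries a byte offset, and the receiver buffers out-of-order segments until the gap is filled. (Real TCP also handles ACKs, windows, and retransmission; this shows only the reassembly step.)

```python
# Minimal sketch of TCP-style in-order reassembly. Each segment carries
# a sequence number (its byte offset in the stream); out-of-order
# segments are buffered until everything before them has arrived.

def reassemble(segments):
    """segments: iterable of (seq, payload) pairs, possibly out of order."""
    buffered = {}   # seq -> payload, held until it becomes deliverable
    expected = 0    # next byte offset we can deliver to the application
    delivered = b""
    for seq, payload in segments:
        buffered[seq] = payload
        # Deliver everything that is now contiguous with the stream so far.
        while expected in buffered:
            chunk = buffered.pop(expected)
            delivered += chunk
            expected += len(chunk)
    return delivered

# Segments arrive out of order, as IP permits:
out_of_order = [(0, b"he"), (4, b"o!"), (2, b"ll")]
print(reassemble(out_of_order))  # b'hello!'
```

The network below TCP is free to deliver the segments in any order; the endpoints alone restore the stream, which is why the guarantees exist only end to end and not hop by hop as in a circuit.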
The Answer
Both TCP and UDP are datagram protocols: they send a packet of data with routing information to routers, with none of the guarantees that a circuit offers. TCP adds accounting information that gives the endpoints a subset of what a circuit would offer, allowing them to handle errors and deliver the data in order, but it is only a subset. Of the datagram protocols, TCP is the closest thing you will find to a virtual circuit, but it is still conceptually and operationally very different.
I'll try to answer your question directly without going into a huge rant about TCP v UDP.
Basically you need to understand that HTTP and DNS are completely independent applications/protocols. Sometimes you actually need to send a DNS query to a DNS server, and sometimes you don't (if the DNS record is cached locally on your PC/server).
We do NOT have a DNS record cached.
- http://google.com is entered in the browser.
- Your PC checks the local DNS cache, and sees it does NOT have a record for google.com
- A UDP DNS query is sent to a DNS server, in this case it's most likely your ISP's DNS server.
- The DNS server sends a UDP response back.
- You now have your answer in the form of an IP address; now you can initiate your TCP connection to google.com.
- The 3-way handshake occurs between you and google.com (SYN, SYN/ACK, ACK) - if you do not know what this is you can search for "TCP 3 way handshake" and find some good information.
- After the handshake completes, the HTTP request is sent and the page renders in the form of your favorite search engine.
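The cache-checking logic in the steps above can be sketched like this (the names and the canned IP address are illustrative, not a real resolver API): consult the local cache first, and only send the UDP query on a miss.

```python
# Sketch of the lookup decision described above: check the local DNS
# cache first; only on a miss does a UDP query go to the DNS server.
# query_dns_server() is a stub standing in for the real UDP exchange,
# and the IP address returned is just an illustrative value.

dns_cache = {}  # hostname -> IP address, stands in for the OS cache

def query_dns_server(hostname):
    """Stub for the UDP query/response to your ISP's DNS server."""
    return {"google.com": "142.250.80.46"}[hostname]  # canned answer

def resolve(hostname):
    if hostname in dns_cache:           # cached: no UDP query needed
        return dns_cache[hostname]
    ip = query_dns_server(hostname)     # cache miss: UDP query goes out
    dns_cache[hostname] = ip            # remember it for next time
    return ip

ip = resolve("google.com")   # first call: reaches the "DNS server"
ip2 = resolve("google.com")  # second call: answered from the cache
# With the IP in hand, the browser opens the TCP connection
# (SYN, SYN/ACK, ACK) and sends the HTTP request.
```

The second scenario below is exactly the cached branch of `resolve`: the UDP step simply never happens.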
We HAVE a DNS record cached. There is a very small difference here, but I'm going to include the whole thing so you can see the comparison.
- http://google.com is entered in the browser.
- Your PC checks the local DNS cache, and sees it has a record cached in the form of an IP address.
- You now have your IP address for google.com; now you can initiate your TCP connection to google.com.
- The 3-way handshake occurs between you and google.com (SYN, SYN/ACK, ACK)
- After the handshake completes, the HTTP request is sent and the page renders in the form of your favorite search engine.
So visiting a webpage does not necessarily mean sending a UDP DNS query first. DNS is independent, and visiting a webpage is not the only time you'd need to use it. Feel free to comment if you need clarification.
Best Answer
Actually there is some truth in what the other person was saying, though it is largely false.
Is it true (that packets in a TCP connection follow an established path)?
Yes, in general, all packets in a TCP stream will follow the same path through the network - even in the presence of a "diamond" network, all packets in the same stream will be routed down the same side of the diamond. However, it's not true that if that side of the diamond goes down, the TCP stream will time out - in that case it will be rerouted and its path will change.
Why would switches give TCP connections this special treatment?
Here I'll explain why switches and routers will always try to send packets in a stream along the same side of a "diamond", which hopefully gives some insight into where your interlocutor got their (flawed) understanding of the situation. The brief answer is "performance".
Background: While it's true that TCP can handle reordered packets successfully, in practice reordering tends to cause significant performance issues. For instance, if the other end sees packets in the order "1 2 4 3", it will notice when it sees packet 4 that it has not yet seen packet 3, and will signal this (with duplicate ACKs) so that the sender resends it - the receiver then gets packet 3 twice (the original, which was merely delayed, plus the resend). This wastes bandwidth (packets are sent twice unnecessarily) and also reduces connection performance, since the sender will assume the "packet loss" it thinks it has seen is due to congestion, and will slow down its sending.
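A toy model makes the duplicate-ACK effect visible. This sketch uses segment numbers instead of byte offsets for readability, and a cumulative ACK means "the next segment I expect is N"; real TCP ACKs byte offsets, but the gap behaviour is the same:

```python
# Toy model of TCP cumulative ACKs. A gap in the arrival order makes
# the receiver repeat the same ACK (a "duplicate ACK"), which the
# sender can misread as packet loss.

def acks_for(arrival_order):
    received = set()
    acks = []
    for seg in arrival_order:
        received.add(seg)
        nxt = 1
        while nxt in received:  # highest contiguous prefix received so far
            nxt += 1
        acks.append(nxt)        # cumulative ACK: "next I expect is nxt"
    return acks

print(acks_for([1, 2, 4, 3]))  # [2, 3, 3, 5] - the repeated 3 is a dup ACK
print(acks_for([1, 2, 3, 4]))  # [2, 3, 4, 5] - in order: no dup ACKs
```

With heavier reordering the duplicate ACKs pile up, triggering an unnecessary retransmission and a congestion-control slowdown, which is exactly the performance cost the routers are trying to avoid.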
Therefore most quality switches and routers will go to quite some lengths to avoid packet reordering, and as part of this will try to send all packets from the same TCP stream along the same path (in case one path has higher latency than the other).
Contrary to what others have said, this happens even for core routers. Where there are multiple links that are being load-balanced over, the router will try to send packets from the same TCP stream over the same link. Although this might seem to require the tracking of a huge amount of state, in fact it can be done without tracking all of the streams: the router takes the identifiers of the stream (source/destination IP address, and sometimes source/destination port), and combines them (hashes them) into a single number. This number is used to select which link to send the packet on. (To give an example of one vendor's implementation of this feature, search for "ip cef load-sharing algorithm".)
Doesn't it make the network less reliable in general?
Yes, of course it would, if networks did exactly what your interlocutor described - so they don't do that. The behaviour the other person described, where a path is dropped and the TCP connections using it time out, is not typical: though it may be possible to configure some products to behave like this, in general both TCP and UDP traffic will start to use the alternative route straight away.
When a link goes down, the hashes will be redistributed across the remaining links. This will cause TCP streams to change their paths - which may sometimes cause some reordering - but this is generally acceptable because it happens occasionally, rather than on every packet.
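Here is a sketch of that per-flow load sharing (real routers use their own, vendor-specific hash functions; CRC32 here is just a stand-in). The flow's identifiers always hash to the same value, so all its packets pick the same link - until the set of links changes:

```python
# Sketch of hash-based per-flow load sharing. The router keeps no
# per-stream state: it hashes the flow identifiers and uses the result
# to pick a link. Same flow -> same hash -> same link, every packet.

import zlib

def pick_link(src_ip, dst_ip, src_port, dst_port, links):
    flow_id = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return links[zlib.crc32(flow_id) % len(links)]

links = ["link-A", "link-B", "link-C"]
flow = ("10.0.0.1", "93.184.216.34", 40000, 443)  # illustrative addresses

# Every packet of this flow takes the same link:
first = pick_link(*flow, links)
assert all(pick_link(*flow, links) == first for _ in range(100))

# If a link fails, the hash is taken over the remaining links, so some
# flows move (a one-off chance of reordering) and then stay put again:
links_after_failure = ["link-A", "link-C"]
moved = pick_link(*flow, links_after_failure)
```

This is why the failure case causes at most a brief disturbance: the flow is deterministically re-mapped onto a surviving link rather than being left to time out.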
Other relevant information
The discussion was originally about the difference between handling of TCP and UDP packets.
In fact, all of the above handling is done for UDP packets too: that is, UDP packets from the same "connection" will also typically follow the same path. This is desirable because UDP is often used for realtime media streams such as phone calls - if many of your voice packets arrive out of order, the result can be terrible audio quality (for a phone call, large buffering is highly undesirable, so the receiver will typically drop packets that arrive out of sequence).