I would give an answer of "no, but it is remarkably similar."
Here's some history and a largely complete explanation.
Circuits 101
Information networks can route traffic in one of two basic ways: circuit switching or packet switching. Circuit switching offers many more guarantees than packet switching, but this comes at a cost: circuit-switched networks can't degrade gracefully. The classic circuit-switched network is the PSTN, and a virtual circuit would be something like a DS0 on the PSTN.
A DS0 basically works as part of a bundle of connections, usually in a DS1. In a DS1, you have a bundle of DS0s which are transmitted together, frame by frame, in a time-division manner, so each DS0 is guaranteed a specific bandwidth, timeliness, etc. by the underlying network transport.
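To make the time-division arithmetic concrete, here is a small sketch using the standard North American T-carrier figures (24 DS0 slots per DS1 frame, one 8-bit sample per slot, 8000 frames per second, one framing bit per frame):

```python
# Illustrative arithmetic for DS0/DS1 rates (standard T-carrier figures).
CHANNELS = 24          # DS0 time slots per DS1 frame
BITS_PER_SLOT = 8      # one 8-bit sample per DS0 per frame
FRAMES_PER_SEC = 8000  # one frame every 125 microseconds
FRAMING_BITS = 1       # one framing bit per DS1 frame

ds0_rate = BITS_PER_SLOT * FRAMES_PER_SEC             # 64,000 bit/s per DS0
frame_bits = CHANNELS * BITS_PER_SLOT + FRAMING_BITS  # 193 bits per frame
ds1_rate = frame_bits * FRAMES_PER_SEC                # 1,544,000 bit/s

print(ds0_rate, frame_bits, ds1_rate)  # 64000 193 1544000
```

Because every DS0 owns its slot in every frame, its 64 kbit/s is guaranteed by construction; nothing another channel does can take it away.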
Another way to look at this is that a physical circuit would be something like a Cat6 cable running between two terminals. You can send data back and forth over the wires at guaranteed speeds, and no other communications are going to interfere with that. Indeed, the early telephone networks worked by connecting physical circuits (that is, copper wires) using manual or electromechanical switches. As this was computerized, the circuits were virtualized: digital (as opposed to analog) information was sent down the wires on a time-division basis, again with each circuit reserving a slot in the time-division schedule.
What this means is that circuit switching is more about bandwidth reservation than it is about routing; the former leads to the latter. In other words, a circuit reserves bandwidth along the path for the entire life of the connection.
Why TCP Connections are not Virtual Circuits
TCP/IP is fully packet-switched; it makes no provisions for virtual circuits. This is why things like QoS are often necessary when trunking VoIP (a virtual circuit has built-in QoS guarantees). You have no guarantee that all packets will be routed alike. They may not arrive in the order they were sent. They may not arrive in a timely manner (from a connection-oriented perspective). So you can't really build virtual circuits per se on top of a packet-switched protocol like IP.
TCP comes somewhat close, and in fact can work as an imperfect substitute. It offers as many of a circuit's guarantees as it can. This is why, when implemented on TCP/IP, H.323 uses TCP connections instead of the virtual circuits the protocol prefers.
But TCP connections still aren't circuits, because they don't reserve bandwidth at every switch along the path between the two endpoints.
Of course, TCP connections are more than just datagrams. They include routing information (as does UDP), but they also include the accounting information necessary to reconstruct the stream, in order, on the other side.
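A minimal sketch of what that accounting information buys: if each segment carries a byte-offset sequence number, the receiver can rebuild the stream in order even when segments arrive out of order or duplicated. This is a simplified model of TCP's sequencing, not the real state machine:

```python
def reassemble(segments):
    """Rebuild a byte stream from (seq, data) pairs that may arrive
    out of order or duplicated. Simplified model of TCP sequencing."""
    stream = bytearray()
    buffered = {}  # seq -> data, held until it becomes contiguous
    expected = 0   # byte offset of the next data we can deliver
    for seq, data in segments:
        if seq >= expected:          # drop duplicates already delivered
            buffered.setdefault(seq, data)
        while expected in buffered:  # deliver everything now contiguous
            chunk = buffered.pop(expected)
            stream += chunk
            expected += len(chunk)
    return bytes(stream)

# Segments arrive out of order, with a duplicate; the stream is still in order.
print(reassemble([(5, b"world"), (0, b"hello"), (5, b"world")]))  # b'helloworld'
```

The network is free to deliver the segments however it likes; the endpoints repair the order themselves. That is endpoint accounting, not a circuit.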
The Answer
Both TCP and UDP are datagram protocols. They send a packet of data with routing information to routers, with none of the guarantees that a circuit offers. TCP offers, at the endpoints, a subset of what a circuit would offer, by adding accounting information that allows the endpoints to handle errors and deliver the data in order, but it is only a subset. Of datagram protocols, TCP is the closest thing one will find to a virtual circuit, but it is still conceptually and operationally very different.
The issue is that you're operating under the notion that each layer is in and of itself a separate, autonomous entity. Understand that IP is not the only packet-delivery protocol in existence; it just happens to be the most common one. Also understand that these "layers" are simply abstraction tools. The takeaway is that each "layer" depends on the one beneath it for some functions, and the layers further down the stack can hand off certain duties (e.g. reliable data transmission) to the layers above them when appropriate, to keep overhead low and performance high. Each lower layer encapsulates data sent from the layers above it on its way down to the wire, and the lower layers decapsulate the data on the way back up the stack to the application.
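The encapsulation idea can be sketched as nothing more than each layer wrapping a header around whatever the layer above handed it (illustrative header names and a toy `|` separator, not real header formats):

```python
def encapsulate(payload, headers):
    """Wrap payload in headers from the top of the stack downward."""
    for header in headers:          # e.g. TCP first, then IP, then Ethernet
        payload = header + b"|" + payload
    return payload

def decapsulate(frame, n_layers):
    """Strip one header per layer on the way back up the stack."""
    for _ in range(n_layers):
        _, frame = frame.split(b"|", 1)
    return frame

frame = encapsulate(b"app data", [b"TCP", b"IP", b"ETH"])
print(frame)                  # b'ETH|IP|TCP|app data'
print(decapsulate(frame, 3))  # b'app data'
```

Each layer only looks at its own header; the payload it carries is opaque to it, which is exactly what makes the layers swappable.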
The paper you linked about the data link layer even states that while its delivery mechanisms are intended to be reliable, they are still best-effort, and there is an assumption that higher-layer protocols (e.g. TCP) will handle retransmission if necessary.
Best Answer
TCP is about as fast as you can make something with its reliability properties. If you only need, say, sequencing and error detection, UDP can be made to serve perfectly well. This is the basis for most real-time protocols such as voice and video streaming, where lag and jitter matter more than "absolute" error correction.
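A sketch of that idea: put a sequence number and a checksum in front of each UDP payload yourself, and you get reordering detection and corruption detection without TCP's retransmission delays. The framing here is hypothetical, not any standard protocol:

```python
import struct
import zlib

def pack(seq, payload):
    """Prefix payload with a 32-bit sequence number and a CRC32 checksum."""
    header = struct.pack("!II", seq, zlib.crc32(payload))
    return header + payload

def unpack(datagram):
    """Return (seq, payload), or None if the checksum fails."""
    seq, crc = struct.unpack("!II", datagram[:8])
    payload = datagram[8:]
    if zlib.crc32(payload) != crc:
        return None              # corrupted: drop it, don't retransmit
    return seq, payload

dgram = pack(7, b"frame of audio")
print(unpack(dgram))                 # (7, b'frame of audio')
print(unpack(dgram[:-1] + b"X"))     # None (corruption detected)
```

A real-time receiver would just skip the bad or late frame and move on, which is exactly the behavior TCP's mandatory retransmission forbids.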
Fundamentally, TCP says its streams can be relied upon eventually. How fast that is depends on the various timers, speeds, etc. The time taken to resolve errors can be unpredictable, but the basic operations are as fast as practicable when there are no errors. If a system knows something about the kinds of errors which are likely, it might be able to do something which isn't possible with TCP. For example, if single-bit errors are especially likely, you can use error-correcting codes for those bit errors; however, this is much better implemented in the link layer. As another example, if short bursts of whole-packet loss are common, you can address this by transmitting each packet multiple times without waiting to detect loss, but obviously this is expensive in bandwidth. Alternatively, slow the transmission down until the error probability is negligible: also expensive in bandwidth. In the end, a protocol has to pay for reliability with either a) bandwidth or b) delay.
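The bandwidth-for-reliability trade can be sketched in a few lines: transmit each packet k times up front, and the receiver keeps the first surviving copy of each. Assuming independent loss with probability p, the chance that all k copies are lost is p**k. This is a toy simulation, not a real protocol:

```python
import random

def send_with_redundancy(packets, k=3, loss_prob=0.3, rng=None):
    """Send each (seq, data) packet k times over a simulated lossy channel;
    the receiver keeps the first surviving copy of each sequence number."""
    rng = rng or random.Random(42)   # fixed seed so the sketch is repeatable
    received = {}
    for seq, data in packets:
        for _ in range(k):           # k-fold redundancy: pay in bandwidth
            if rng.random() >= loss_prob:
                received.setdefault(seq, data)
    return received

pkts = [(i, f"pkt{i}".encode()) for i in range(5)]
print(send_with_redundancy(pkts))
```

With p = 0.3 and k = 3, each packet has only a 2.7% chance of total loss, but you have tripled the bandwidth used, and there is still no guarantee: that residual loss is the delay-free price, exactly the trade the paragraph above describes.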
In implementation terms, you would find that the programmer-centuries invested in TCP will make it faster than anything general you could afford to make, as well as more reliable in the obscure edge cases.
TCP provides: a ubiquitous method of connecting (essential where the communicating systems have no common control) giving a reliable, ordered (and deduplicated), two-way, windowed byte stream with congestion control over arbitrary-distance multi-hop networks.
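Those properties are exactly what the standard sockets API hands you for free; a minimal loopback sketch using only the standard library (port 0 asks the OS for any free port):

```python
import socket
import threading

def serve(listener):
    """Accept one connection and echo the data back upper-cased."""
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

listener = socket.create_server(("127.0.0.1", 0))  # port 0: OS picks a free port
port = listener.getsockname()[1]
t = threading.Thread(target=serve, args=(listener,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"reliable ordered bytes")
    reply = c.recv(1024)   # TCP delivers the bytes reliably and in order

t.join()
listener.close()
print(reply)               # b'RELIABLE ORDERED BYTES'
```

Note everything the application did not have to do here: no sequence numbers, no checksums, no retransmission logic. That is the service TCP sells, paid for in latency when the network misbehaves.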
If an application doesn't require ubiquity (your software runs on both sides), or doesn't need all of TCP's features, many people profitably use other protocols, often on top of UDP. Examples include TFTP (minimalistic, with really inefficient error handling), QUIC, which is designed to reduce overheads (still marked as experimental), and libraries such as lidgren, which offer fine-grained control over exactly which reliability features are required. [Thanks, commenters.]