The maximum window size in terms of segments can be up to 2^30/MSS, where MSS is the maximum segment size. The 2^30 comes from 2^16 * 2^14, as Michael mentioned in his answer. If your network's bandwidth-delay product exceeds the TCP receive window size, the window scaling option is enabled for the TCP connection; most operating systems support this feature. Scaling supports up to a 14-bit multiplicative shift of the window size. You can read the following for a better explanation:
http://en.wikipedia.org/wiki/TCP_window_scale_option
http://www.ietf.org/rfc/rfc1323.txt
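To put concrete numbers on this, here is a quick sanity check in Python (assuming an MSS of 1460 bytes, the common value on Ethernet):

```python
# The window field is 16 bits; window scaling shifts it left by up to 14.
MSS = 1460                        # assumed Ethernet-typical MSS

max_unscaled = 2**16 - 1          # 65535 bytes without scaling
max_scaled = max_unscaled << 14   # maximum shift of 14 bits

print(max_scaled)                 # 1073725440 bytes, just under 2^30
print(max_scaled // MSS)          # ~735,000 segments per window
```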
They are two vastly different mechanisms.
### PSH and the PUSH function
When you send data, your TCP buffers it. So if you send a single character, it won't be sent immediately; TCP waits to see if you have more. But maybe you want it to go straight onto the wire: this is where the PUSH function comes in. If you PUSH data, your TCP will immediately create a segment (or a few segments) and send them out.
But the story doesn't stop there. When the peer TCP receives the data, it will naturally buffer it; it won't disturb the application for each and every byte. This is where the PSH flag kicks in. If a receiving TCP sees the PSH flag, it will immediately push the data to the application.
There's no API to set the PSH flag. Typically it is set by the kernel when it empties the send buffer. From TCP/IP Illustrated:
This flag is conventionally used to indicate that the buffer at the side sending the packet has been
emptied in conjunction with sending the packet. In other words, when the packet with the PSH bit field set left the sender, the sender had no more data to send.
But be aware that Stevens also says:
Push (the receiver should pass this data to the application as soon as
possible—not reliably implemented or used)
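There's no portable way to set PSH from an application, but if the goal is simply to get small writes onto the wire immediately, the closest knob most stacks expose is TCP_NODELAY, which disables Nagle's algorithm. A minimal sketch in Python (host and port are placeholders):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes are sent immediately instead
# of being coalesced into larger segments.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("example.com", 80))  # placeholder peer

# Each send() now tends to produce its own segment; most stacks will
# also set PSH on it, since sending empties the buffer.
sock.send(b"x")
sock.close()
```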
### URG and OOB data
TCP is a stream-oriented protocol. So if you push 64K bytes in on one side, you'll eventually get 64K bytes out on the other. Now imagine you push a lot of data and then have some message that says "Hey, you know all that data I just sent? Yeah, throw that away". The gist of the matter is that once you push data onto a connection, you have to wait for the receiver to get all of it before it gets to the new data.
This is where the URG flag kicks in. When you send urgent data, your TCP creates a special segment in which it sets the URG flag and also the urgent pointer field. This causes the receiving TCP to forward the urgent data on a separate channel to the application (for instance, on Unix your process gets a SIGURG). This allows the application to process the data out of band¹.
As a side note, urgent data is rarely used today and is not very well implemented. It's far easier to use a separate channel or a different approach altogether.
¹: RFC 6093 disagrees with this use of "out of band" and states:
The TCP urgent mechanism is NOT a mechanism for sending "out-of-band"
data: the so-called "urgent data" should be delivered "in-line" to the
TCP user.
But then it goes on to admit:
By default, the last byte of "urgent data" is delivered "out of band"
to the application. That is, it is not delivered as part of the
normal data stream.
An application has to go out of its way and specify e.g. SO_OOBINLINE to get standards-conforming urgent semantics.
If all this sounds complicated, just don't use urgent data.
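For completeness, here is roughly what urgent data looks like at the socket level: a self-contained loopback sketch in Python (port 9999 is an arbitrary placeholder), shown for illustration rather than as a recommendation:

```python
import socket
import threading
import time

def receiver(listener):
    conn, _ = listener.accept()
    time.sleep(0.5)  # crude: let both sends arrive before reading
    # By default the single urgent byte is read out of band:
    print("urgent:", conn.recv(1, socket.MSG_OOB))  # b"!"
    print("normal:", conn.recv(1024))               # b"normal data"
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 9999))
listener.listen(1)
t = threading.Thread(target=receiver, args=(listener,))
t.start()

sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(("127.0.0.1", 9999))
sender.send(b"normal data")
sender.send(b"!", socket.MSG_OOB)  # sets URG and the urgent pointer
sender.close()
t.join()
```

Setting SO_OOBINLINE on the receiving socket instead leaves the urgent byte in the normal stream, which is the standards-conforming behaviour RFC 6093 describes.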
### Best Answer
To give a short answer: the receive window is managed by the receiver, who advertises window sizes to the sender. The window size announces the number of bytes still free in the receiver's buffer, i.e. the number of bytes the sender can still send without waiting for a further acknowledgement from the receiver.
The congestion window is a sender-imposed window that was implemented to avoid overrunning routers in the middle of the network path. The sender, with each segment sent, increases the congestion window slightly, i.e. the sender allows itself more outstanding sent data. But if the sender detects packet loss, it cuts the window in half. The rationale is that the sender assumes packet loss occurred because of a buffer overflow somewhere (which is almost always true), so it wants to keep less data "in flight" to avoid further loss in the future.
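As a rough illustration of that increase/decrease behaviour, here is a toy model in Python. It loosely follows classic Reno-style slow start and congestion avoidance; real implementations (Reno, CUBIC, ...) differ in many details:

```python
MSS = 1460              # assumed maximum segment size
cwnd = 1 * MSS          # congestion window: start small (slow start)
ssthresh = 64 * 1024    # slow-start threshold, an assumed initial value

def on_ack():
    """Grow the window a little with every acknowledged segment."""
    global cwnd
    if cwnd < ssthresh:
        cwnd += MSS                  # slow start: doubles per round trip
    else:
        cwnd += MSS * MSS // cwnd    # congestion avoidance: ~1 MSS per RTT

def on_loss():
    """Shrink on packet loss: 'cut the window in half'."""
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 2 * MSS)
    cwnd = ssthresh

# The sender keeps at most min(cwnd, receive_window) bytes in flight,
# so flow control (receiver) and congestion control (sender) combine.
```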
For more, start here: http://en.wikipedia.org/wiki/Slow-start