Most Cisco devices can only receive (honor) PAUSE frames; they can't send them.
If you are running storage over your network, I can understand why you would be looking at implementing it, and some server/storage vendors even recommend doing so.
Note, however, that PAUSE frames are a very blunt tool: they pause all traffic on the link, so you can't differentiate between packets. Your high-priority packets will be treated the same as low-priority ones. If you run a separate storage network, this is no issue and you can safely enable it.
There is a standard, 802.1Qbb, that enables sending PAUSE frames per class so that not all traffic gets paused.
This article describes how 802.3x works and the implications of running it, such as the extra delay it adds to the RTT of TCP traffic.
In short: flow control makes sure the receiver is never overloaded with more data than it can handle, whereas congestion control is used to avoid congesting the network between sender and receiver.
Flow control: each ACK the receiver sends to the sender includes the current size of the receive window, which states how many more bytes fit into the receive buffer. This procedure is called the sliding window protocol.
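A minimal sketch (not real TCP code) of what the advertised window means for the sender: the usable window is whatever part of the receiver's advertised window is not already occupied by unacknowledged bytes.

```python
# Sketch of receive-window accounting in TCP flow control.
def usable_window(advertised_rwnd: int, bytes_in_flight: int) -> int:
    """Bytes the sender may still send without overrunning the receiver's buffer."""
    return max(0, advertised_rwnd - bytes_in_flight)

# Receiver advertises 65535 bytes; 50000 bytes are sent but unacknowledged,
# so the sender may put at most 15535 more bytes on the wire.
print(usable_window(65535, 50000))  # → 15535
```

As ACKs arrive, bytes leave flight and the receiver re-advertises its window, which is what makes the window "slide".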
Congestion control: after TCP was introduced, it was noticed that too much traffic resulted in a so-called congestion of the network: not all packets could be delivered once the network bandwidth was exceeded. The TCP implementation reacts to missing ACKs by sending these packets again, which only makes the situation worse, as the network becomes even more congested.
The answer to this problem is congestion control. It is quite complex, but basically, as soon as a message can't be delivered (when it isn't acknowledged within a certain timeout, it is considered lost), it is assumed that the packet was lost because of congestion in the network, and the sending rate is reduced. This is also implemented using a sliding window, the congestion window. The actual algorithm is more complex; it nowadays includes the slow start, congestion avoidance, fast retransmit, and fast recovery algorithms.
At any given moment, a TCP implementation is allowed to have at most min(receive window, congestion window)
unacknowledged bytes in flight, respecting both the flow control and the congestion control.
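A toy sketch of the idea (illustrative only, not a real TCP implementation; the names and constants are mine): slow start doubles the congestion window each RTT, congestion avoidance adds one MSS per RTT, and a loss halves it, while the send limit stays capped by min(congestion window, receive window).

```python
# Toy AIMD model of TCP's congestion window, in bytes.
MSS = 1460  # assumed maximum segment size

def next_cwnd(cwnd: int, ssthresh: int, loss: bool) -> tuple[int, int]:
    """Return (new cwnd, new ssthresh) after one RTT."""
    if loss:
        half = max(MSS, cwnd // 2)
        return half, half            # multiplicative decrease on loss
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh    # slow start: exponential growth
    return cwnd + MSS, ssthresh      # congestion avoidance: +1 MSS per RTT

cwnd, ssthresh, rwnd = MSS, 64 * MSS, 32 * MSS
for loss in [False] * 6 + [True] + [False] * 3:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
    limit = min(cwnd, rwnd)  # at most this many unacknowledged bytes in flight
```

Real stacks add fast retransmit and fast recovery on top of this, but the min() cap on outstanding data is the part that ties flow control and congestion control together.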
Best Answer
QoS applies policies to different traffic classes as traffic passes through the device, such as giving priority to certain traffic. But it does not signal the transmitter to pause.
Flow control operates at the interface level and sends a PAUSE to the upstream transmitter telling it to pause transmission (assuming it is also set up to honor flow control messages). This affects all traffic passing over the interface.
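For concreteness, an 802.3x PAUSE frame is tiny: a MAC control frame (EtherType 0x8808) sent to the reserved multicast address 01-80-C2-00-00-01, carrying opcode 0x0001 and a single pause time measured in quanta of 512 bit times. A sketch of the on-wire layout:

```python
import struct

PAUSE_DEST = bytes.fromhex("0180c2000001")  # reserved MAC-control multicast
ETHERTYPE_MAC_CONTROL = 0x8808
OPCODE_PAUSE = 0x0001

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame; pause_quanta is in units of 512 bit times."""
    header = PAUSE_DEST + src_mac + struct.pack("!H", ETHERTYPE_MAC_CONTROL)
    payload = struct.pack("!HH", OPCODE_PAUSE, pause_quanta)
    # pad to the 64-byte Ethernet minimum (less the 4-byte FCS)
    return (header + payload).ljust(60, b"\x00")

# Maximum pause time; a quanta value of 0 un-pauses the link early.
frame = build_pause_frame(bytes.fromhex("001122334455"), 0xFFFF)
```

There is only one timer for the whole link, which is exactly why this form of flow control cannot distinguish traffic classes.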
There is an enhancement called priority flow control which applies flow control per CoS class. The devices negotiate this using DCBX (carried over LLDP) and exchange their QoS configurations; if they match, they can successfully use priority flow control between them.
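The PFC frame reuses the MAC control EtherType but with opcode 0x0101, an 8-bit priority-enable vector, and eight per-class pause timers, which is what lets it pause one class while the others keep flowing. A minimal sketch of the control payload:

```python
import struct

OPCODE_PFC = 0x0101  # 802.1Qbb priority-based flow control opcode

def build_pfc_payload(quanta_by_priority: dict) -> bytes:
    """Opcode, priority-enable vector, then one 16-bit timer per class 0-7."""
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in quanta_by_priority.items():
        enable_vector |= 1 << prio   # mark this class as paused
        timers[prio] = quanta        # pause time in 512-bit-time quanta
    return struct.pack("!HH8H", OPCODE_PFC, enable_vector, *timers)

# Pause only priority 3 (e.g. a storage class) for the maximum time;
# traffic in the other seven classes keeps flowing.
payload = build_pfc_payload({3: 0xFFFF})
```

Which priorities carry lossless traffic is what the DCBX exchange has to agree on before either side starts sending these frames.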
Overview of PFC