Based on this presentation, some of those PAM5 codes (the 2-byte "SSD" and, respectively, the 4-byte combination of "CSReset" and "ESD") mark the start and end of a frame, similar to how J/K and T/R do the same for 100BASE-TX.
The 802.3-2005 standard explains how these are coded relative to normal data:
During data encoding, PCS Transmit utilizes a three-state convolutional encoder. The transition from idle or carrier extension to data is signalled by inserting a SSD, and the end of transmission of data is signalled by an ESD. [...] During idle and carrier extension encoding, special code-groups with symbol values restricted to the set {2, 0, –2} are used. These code-groups are also generated using the transmit side-stream scrambler. However, the encoding rules for the idle, SSD, and carrier extend code-groups are different from the encoding rules for data, CSReset, CSExtend, and ESD code-groups. During idle, SSD, and carrier extension, the PCS Transmit function reverses the sign of the transmitted symbols. This allows, at the receiver, sequences of code-groups that represent data, CSReset, CSExtend, and ESD to be easily distinguished from sequences of code-groups that represent SSD, carrier extension, and idle.
CSReset means "Convolutional State Reset", and SSD/ESD stand for Start-of-Stream Delimiter and End-of-Stream Delimiter.
The actual codes are publicly available [for free] as ASCII Table 40-1 and Table 40-2. The codes are given per byte there; e.g. to send an SSD, you send SSD1 followed by SSD2.
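The distinction quoted above can be sketched in code. This is a toy illustration, not taken from the standard's tables: per the quoted text, idle/SSD/carrier-extend code-groups use only symbols from {2, 0, –2}, while data-side code-groups may use any of the five PAM5 levels.

```python
# Toy sketch (assumption: 4D code-groups as tuples of PAM5 levels).
# Per the quoted 802.3 text, idle-side code-groups are restricted
# to the symbol set {2, 0, -2}; data-side code-groups are not.
IDLE_SIDE_SYMBOLS = {2, 0, -2}

def could_be_idle_side(code_group):
    """True if a code-group uses only {2, 0, -2} -- a necessary
    (not sufficient) condition for idle/SSD/carrier extension."""
    return all(s in IDLE_SIDE_SYMBOLS for s in code_group)

print(could_be_idle_side((2, 0, -2, 2)))   # candidate idle-side code-group
print(could_be_idle_side((1, -2, 0, 2)))   # level 1 present -> data side
```

A real receiver of course uses the full tables plus the sign-reversal rule; this only shows the symbol-set restriction the standard describes.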
Your question is pretty confusing, and it is somewhat misleading, but I will try to clarify.
The capacity to which you refer is called the bandwidth, and it is measured in bits per second (bps), e.g. 100 Mbps. The speed of transfer on a cable is fixed and limited by physics (the speed of light in the medium of the cable), and it is roughly the same for all your cables. For example, the speed of light in a copper cable is about two-thirds the speed of light in a vacuum. The bandwidth is more a function of the device interfaces than of the cable.
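A quick back-of-the-envelope calculation shows what that two-thirds velocity factor means in practice (the exact factor varies by cable construction; 2/3 is the approximation used above):

```python
# Propagation delay over a copper cable, assuming the ~2/3 velocity
# factor mentioned above (actual values vary by cable type).
C_VACUUM = 299_792_458      # speed of light in vacuum, m/s
VELOCITY_FACTOR = 2 / 3     # assumed for typical copper cable

def propagation_delay_ns(length_m):
    """One-way signal travel time over the cable, in nanoseconds."""
    return length_m / (C_VACUUM * VELOCITY_FACTOR) * 1e9

print(round(propagation_delay_ns(100)))  # ~500 ns over a 100 m run
```

Note the delay is independent of the link's bandwidth: a 100 m run takes about half a microsecond whether it carries 10 Mbps or 10 Gbps.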
There are cable standards set by ANSI/TIA/EIA and ISO/IEC. To facilitate the bandwidth of the device interfaces, the cable must meet some parameters. This can get very technical and complicated, which is why the standards bodies have created various cable standards. For example, ANSI/TIA/EIA has categories for copper cabling, and ISO/IEC has cable classes. The various standards define parameters like Insertion Loss, NEXT, FEXT, Return Loss, Propagation Delay, Skew, etc. Depending on the particular set of parameters a cable has, the cable is rated for a maximum frequency it can transmit, e.g. 100 MHz for Category-5e. How the interfaces encode and signal on the cable determines the bandwidth, but a cable must meet the requirements of the interfaces in order to function at the bandwidth of the interfaces.
A big part of whether or not the cable can function correctly at a particular bandwidth is determined by the cable installation. There are standards for this too. For example, ANSI/TIA/EIA 568, Commercial Building Telecommunications Cabling Standard. Poorly installed cable will not function correctly. All components of a cable path (cabling connectors, etc.) must be rated the same, installed properly, and tested with expensive equipment to validate that they perform correctly.
An example of the cable bandwidth would be Category-5e cable. If it is properly installed, the cable can work at 10BASE-T (10 Mbps ethernet), 100BASE-TX (100 Mbps ethernet), and 1000BASE-T (1 Gbps ethernet), but not at 10GBASE-T (10 Gbps ethernet).
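The category-to-frequency relationship above can be sketched as a simple lookup. The Cat5e figure comes from the answer; the Cat6/Cat6a ratings and the ~400 MHz figure for 10GBASE-T are my additions (well-known nominal values, stated here as assumptions):

```python
# Nominal maximum frequency ratings per copper cable category.
# Cat5e (100 MHz) is from the text; Cat6/Cat6a are added for context.
MAX_FREQ_MHZ = {
    "Cat5e": 100,
    "Cat6": 250,
    "Cat6a": 500,
}

def supports(category, required_mhz):
    """True if the category's rating covers the required signaling band."""
    return MAX_FREQ_MHZ[category] >= required_mhz

# 1000BASE-T was designed to fit within Cat5/5e's 100 MHz rating;
# 10GBASE-T signaling needs roughly 400 MHz (assumed figure).
print(supports("Cat5e", 100))   # 1000BASE-T on Cat5e: OK
print(supports("Cat5e", 400))   # 10GBASE-T on Cat5e: not rated
print(supports("Cat6a", 400))   # 10GBASE-T on Cat6a: OK
```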
It is the interfaces of the devices to which the cable connects that determine the bandwidth of the link. For example, the maximum bandwidth on Category-5e cable would be 1 Gbps. If you try to use it with devices that only work at 10 Gbps, it will not work at all. Some people may think that the cable will still transmit at 1 Gbps in this case, but it doesn't work that way: the interfaces on the devices will send data at frequencies that the cable simply cannot reliably handle, and you will receive garbage at the other end. This is where the comparison to a water pipe fails.
BASE indicates baseband signaling - there is no modulated carrier, the frequency starts near zero and extends to a certain cut-off frequency.
BROAD indicates broadband modulation - there is a wide frequency band with a number of carriers modulated with the data (similar to xDSL).
The X in -TX, -SX, ... stands for the 4b/5b (100 Mbit/s) or the (improved) 8b/10b line code (a PCS block code). The R in 10GBASE-SR or -LR stands for the more efficient 64b/66b line code (laRge block). A line code is required to enable clock recovery and bit-level synchronization; without one, the receiver would lose track of the bit boundaries when many identical bits are transmitted in a row.
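The overhead of these block codes is what inflates the on-the-wire symbol rate relative to the data rate. A minimal sketch using the well-known 4b/5b, 8b/10b, and 64b/66b ratios:

```python
# PCS block-code overhead: (data bits, coded bits) per block.
LINE_CODES = {
    "4b/5b":   (4, 5),    # 100BASE-TX / -FX
    "8b/10b":  (8, 10),   # e.g. 1000BASE-X
    "64b/66b": (64, 66),  # 10GBASE-R family
}

def line_rate(data_rate_bps, code):
    """On-the-wire bit rate after applying the block code's overhead."""
    data_bits, coded_bits = LINE_CODES[code]
    return data_rate_bps * coded_bits / data_bits

print(line_rate(100e6, "4b/5b") / 1e6)     # 125.0 -> 25% overhead
print(line_rate(1e9, "8b/10b") / 1e9)      # 1.25  -> 25% overhead
print(line_rate(10e9, "64b/66b") / 1e9)    # 10.3125 -> ~3% overhead
```

This is why 64b/66b is described as "more efficient": its overhead is about 3% versus the 25% of the smaller block codes.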
-T indicates a twisted-pair medium, two used pairs for 10/100 Mbit/s, four pairs for 1 Gbit/s upwards. Speeds beyond 100 Mbit/s use specialized line codes plus scrambling. Mostly, only -T is used - the X in 100BASE-TX was added since it competed with the differing variants 100BASE-T2 and 100BASE-T4 at the time. Modern, single-pair variants use -T1.
-S stands for Short wavelength optical (~850 nm), -L for Long wavelength (~1300 nm), -E for Extra long wavelength (~1500 nm) and so on. With few exceptions, short wavelength is used with multi-mode fiber for short distance (less than 1 km), longer waves with single-mode fiber for long distance (1 km to 100 km). These three wavelengths are selected from the low-absorption bands of silica glass.
I compiled a comprehensive and fairly complete list for Wikipedia a while back, all taken from IEEE 802.3.