Assuming you're untagged, your current stack on the wire is:
- 7B preamble
- 1B SFD
- 6B DMAC
- 6B SMAC
- 2B type
- 20B IPv4
- 20B TCP
- nB application data
- 4B CRC
- 12B IFG
The first important point: you are only counting L2 overhead, but you must also consider L1 overhead.
For Ethernet, the preamble, SFD, and IFG are L1. They are not really bytes carried in the frame, but Ethernet defines them strictly as byte-sized amounts of time.
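As a sketch, the full on-the-wire cost of one frame from the stack above can be computed like this (assuming no VLAN tag and no IP/TCP options, and counting the L1 fields as byte times):

```python
# Sketch: total wire cost of one untagged Ethernet frame carrying TCP/IPv4,
# counting L1 overhead (preamble, SFD, IFG) as byte times.
PREAMBLE, SFD, IFG = 7, 1, 12          # L1 "byte times"
ETH_HDR, CRC = 14, 4                   # DMAC + SMAC + type, FCS
IPV4, TCP = 20, 20                     # headers, no options assumed
MIN_ETH_PAYLOAD = 46                   # short frames are padded up to this

def wire_bytes(app_bytes):
    payload = max(IPV4 + TCP + app_bytes, MIN_ETH_PAYLOAD)
    return PREAMBLE + SFD + ETH_HDR + payload + CRC + IFG

print(wire_bytes(1460))  # full-MSS segment -> 1538 byte times on the wire
```

So a 1460B application payload actually consumes 1538 byte times of line capacity, not 1500.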
For ATM you can't get a very accurate reading with your methodology, because it assumes static overhead. If you try to send a 49B payload, it fits in a single Ethernet frame, but you'll need two ATM cells, and the second cell carries 52B of overhead for just 1B of data.
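The ATM step function is easy to see with a cell count (a sketch assuming plain 53B cells with 48B of payload each, ignoring AAL5 trailers for simplicity):

```python
import math

CELL, CELL_PAYLOAD = 53, 48  # ATM cell: 5B header + 48B payload

def atm_wire_bytes(payload):
    # Each cell carries at most 48B; partial cells are padded out.
    cells = math.ceil(payload / CELL_PAYLOAD)
    return cells * CELL

# 48B fits in one cell; 49B needs two, so overhead jumps sharply:
print(atm_wire_bytes(48))  # 53  -> 5B of overhead
print(atm_wire_bytes(49))  # 106 -> 57B of overhead for 49B of data
```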
This problem does not exist in HDLC, PPP, or Frame Relay, and your methodology will give a decent approximation for those.
If you want really accurate data, you need the size of each packet sent, then calculate what each becomes when serialized onto the given L1 technology with the given L2 encapsulation.
The larger the packets you send, the less sensitive your calculations are to the approximation method, as overhead becomes a smaller contributor.
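To see why larger packets make the approximation less sensitive, compare efficiency (application bytes divided by wire bytes) at a few payload sizes, assuming the same untagged-Ethernet constants as above (38B of fixed per-frame cost plus 40B of IPv4+TCP headers):

```python
FIXED = 7 + 1 + 14 + 4 + 12   # preamble, SFD, Eth header, CRC, IFG
HEADERS = 20 + 20             # IPv4 + TCP, no options assumed

for app in (64, 512, 1460):
    wire = FIXED + HEADERS + app
    print(app, round(app / wire * 100, 1))  # efficiency in %
```

Efficiency climbs from roughly 45% at 64B payloads to about 95% at 1460B, so per-frame overhead matters far less for large packets.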
Is transmission rate the same as bandwidth?
In essence, yes, with some small adjustments for the difference between the clocking of the media ("bandwidth") and the actual rate at which packets are accepted onto the media (i.e., minus minor overheads like Ethernet headers, SDH segmentation, forward error correction, and tunnelling such as VLANs, MPLS, and GRE).
We can do "traffic shaping" in the router so that the packet transmission rate presented to the media is substantially less than the available transmission rate of the media. This is how you can order a 200Mbps service over a gigabit ethernet fibre.
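A shaper of that kind is commonly modeled as a token bucket. A minimal sketch (the rate and burst values here are illustrative assumptions, not a vendor default):

```python
class TokenBucket:
    """Shape traffic to `rate` bytes/s with up to `burst` bytes of credit."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, size, now):
        # Refill credit for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True       # transmit at line rate now
        return False          # queue or drop; average rate stays shaped

# 200 Mbps service on a gigabit port: 25e6 bytes/s, with a 1 MB burst.
shaper = TokenBucket(rate=25e6, burst=1e6)
```

Each frame still serializes at the full gigabit line rate; it's the long-run average that the bucket holds down to the ordered 200Mbps.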
What is throughput
Throughput is the speed visible to the application.
Often we aren't so much interested in the throughput of a single link as in the "end-to-end throughput" between two applications communicating over a path consisting of multiple links and routers. End-to-end throughput can be affected by:
- the lowest transmission rate of the links on the path;
- the path error rate, packet re-ordering, latency, and jitter, which affect the TCP congestion-control algorithm;
- the path MTU and packet re-ordering, which affect operating-system efficiency;
- the choice of TCP congestion-control algorithm itself.
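One well-known rule of thumb for how loss and latency cap end-to-end TCP throughput is the Mathis approximation, throughput ≈ MSS / (RTT · √p). A sketch (the example figures are illustrative):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput in bytes/s (Mathis et al.)."""
    return mss_bytes / (rtt_s * math.sqrt(loss_rate))

# 1460B MSS, 100ms RTT, 0.01% loss: the path, not the link, sets the cap.
bps = mathis_throughput(1460, 0.100, 1e-4) * 8
print(round(bps / 1e6, 1), "Mbps")  # ~11.7 Mbps, regardless of link speed
```

Note how a path with a tiny loss rate and moderate RTT caps a single TCP flow well below even a 100Mbps link.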
Why do we consider transmission delay?
We usually don't. We used to, since getting a 1.5KB packet onto the wire took noticeable time at 9600bps. But at the high data rates of modern networks the number is so small that it can safely be ignored outside of unusual situations.
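The transmission (serialization) delay is just frame size divided by link rate, which makes the old-versus-new contrast easy to show:

```python
def serialization_delay(frame_bytes, link_bps):
    # Time to clock one frame onto the wire, in seconds.
    return frame_bytes * 8 / link_bps

print(serialization_delay(1500, 9600))   # 1.25 s at 9600 bps
print(serialization_delay(1500, 10e9))   # 1.2 microseconds at 10 Gbps
```

A delay of over a second per packet mattered; 1.2 microseconds almost never does.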
Shouldn't we also consider the propagation delay?
Also called "latency". Latency remains important because it is the one performance factor in a global network which isn't improving rapidly. As everything else improves, avoiding latency becomes increasingly important to improving performance. This has effects ranging from avoiding round-trip packet exchanges in application and protocol designs, to fielding new protocols which move data closer to their endpoints, to architectural responses such as content distribution networks.
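Propagation delay is distance divided by signal speed, roughly 2/3 of the speed of light in fibre, so long paths have an irreducible latency floor no amount of bandwidth removes. A sketch (the distance figure is illustrative):

```python
C_FIBRE = 2e8  # m/s, roughly 2/3 the speed of light in vacuum

def one_way_latency_ms(distance_km):
    # Pure propagation time over fibre, ignoring queueing and processing.
    return distance_km * 1000 / C_FIBRE * 1000

# ~6000 km of fibre between New York and London (illustrative figure):
print(round(one_way_latency_ms(6000), 1))  # 30.0 ms one way, ~60 ms RTT floor
```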
Best Answer
Like many things, the devil is often in the details. Depending on how the measurement is taken, yes, it is possible. For instance, if you are using some sort of data compression and an application is measuring the amount of actual (uncompressed) data sent between two endpoints.
With no data compression, the same means of measuring can produce a lower throughput than the actual number of bits sent across the medium. This is because overhead (such as frame or packet headers for example) may not be counted.
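The compression effect is easy to demonstrate: if the application counts uncompressed bytes while the link carries compressed bytes, the measured "throughput" can exceed the wire rate. A sketch using zlib as a stand-in for whatever compression the link applies:

```python
import zlib

app_data = b"A" * 100_000            # highly compressible payload
wire_data = zlib.compress(app_data)  # what actually crosses the link

ratio = len(app_data) / len(wire_data)
print(len(wire_data), round(ratio, 1))
# The application-level byte count is `ratio` times the wire-level one,
# so the application-measured throughput can exceed the link speed.
```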
In my experience, no, it is not normal to expect compression on a network link. However, it is fairly common for forms of compression to exist at higher layers of the OSI model, such as HTTP compression.