Your question seems to be contradictory -- you seem to be saying you want to "create a circuit" without actually "designing a circuit".
I'm going to interpret that as saying you want to "build a complete system, including designing a high-level protocol and laying out a few circuit boards and soldering integrated circuits to the boards and plugging sub-assembly modules into those boards", but you'd rather not "design and fab a full-custom ASIC from scratch" or "design something from dozens of discrete transistors instead of a chip" or "design and do EM simulations and construct a full-custom antenna system and get FCC approval".
I've heard that, at least for low data rates, UWB can be produced with simple circuits built from off-the-shelf chips that were common long before anyone had heard of UWB.
Alas, I don't know of any specific chips you could use for the data rate you want, much less whether off-the-shelf modules exist that use those chips, but I hear such chips do exist. Let me give you some links that might lead to them.
My understanding is that there is currently only one UWB standard: WiMedia's Multiband OFDM, standardized as ECMA-368 and ECMA-369.
My understanding is that "Certified wireless USB" and a potential future version of "Bluetooth" and a potential future version of "Zigbee" are higher-level layers on top of WiMedia's UWB standard.
My understanding is that several chip manufacturers are producing chips that comply with this standard.
I hear that several other chip manufacturers are producing non-ECMA-compliant chips, including Pulse~LINK, DecaWave, IMEC, WiLinx, and Wisair.
Presumably those chips use some other proposed standard or proprietary UWB techniques.
If you can't find an off-the-shelf module, and you find yourself looking for individual chips, I suspect that many of the chips developed for HomePlug might be usable as part of a UWB system.
At one point in my life, I ran the USB business for a big semiconductor company. The best result I remember was an NEC SATA controller capable of pushing 320 Mbps of actual data throughput for mass storage; current SATA drives are probably capable of this or slightly more. This was using BOT (Bulk-Only Transport, a mass-storage protocol that runs over USB).
I could give a technically detailed answer, but I suspect you can deduce it yourself. The thing to see is that this is an ecosystem play: any significant improvement would require somebody like Microsoft to change and optimize their stack, which is not going to happen. Interoperability is far more important than speed, because the existing stacks carefully paper over the mistakes of the slew of devices out there. When the USB 2 spec came out, the initial devices probably didn't conform to it very well, since the spec was buggy, the certification system was buggy, and so on. If you build a home-brew system using Linux or custom USB host drivers for Windows, plus a fast device controller, you can probably get close to the theoretical limits.
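For context on what "the theoretical limits" means here (my numbers, not the answer's): the USB 2.0 spec budgets at most 13 bulk packets of 512 bytes into each 125 µs high-speed microframe, which caps bulk payload throughput well below the 480 Mbit/s signaling rate. A quick sketch of that arithmetic:

```python
# Back-of-the-envelope ceiling for USB 2.0 high-speed bulk transfers.
# Per the USB 2.0 spec, at most 13 512-byte bulk data packets fit in
# one 125 us microframe; everything else is protocol overhead.

MICROFRAMES_PER_SECOND = 8000   # 1 s / 125 us
MAX_BULK_PACKETS = 13           # per microframe (spec limit)
PACKET_PAYLOAD_BYTES = 512      # max bulk packet payload at high speed

max_bytes_per_s = MICROFRAMES_PER_SECOND * MAX_BULK_PACKETS * PACKET_PAYLOAD_BYTES
max_mbps = max_bytes_per_s * 8 / 1e6

print(f"Theoretical bulk ceiling: {max_bytes_per_s / 1e6:.1f} MB/s "
      f"({max_mbps:.0f} Mbit/s of the 480 Mbit/s raw signaling rate)")
```

That works out to roughly 53 MB/s (about 426 Mbit/s), which is why a real-world 320 Mbit/s result is already a respectable fraction of what the bus can ever deliver.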
In terms of streaming, isochronous transfers are supposed to be very fast, but controllers don't implement them very well, since 95% of applications use bulk transfers.
As a bonus insight: if you go and build a hub IC today and follow the spec to the dot, you will sell practically zero chips. If you know all the bugs in the market and make sure your hub IC can tolerate them, you can probably get into the market. I am still amazed at how well USB works today, given the number of bad software stacks and chips out there.
Best Answer
You cannot really infer 54 Mbps knowing only the bandwidth. It's a design tradeoff. You could theoretically build a system that would do twice this throughput in the same bandwidth under ideal conditions. The trade-off involves multiple factors, such as power requirements, implementation complexity, robustness in the face of various types of interference, and of course channel bandwidth.
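One way to see that the rate is a design choice rather than a consequence of bandwidth alone is Shannon's capacity theorem, C = B log2(1 + SNR): the same bandwidth supports very different rates depending on the signal-to-noise ratio you design for. The 20 MHz bandwidth and SNR values below are illustrative assumptions, not numbers from the standard:

```python
import math

# Shannon capacity C = B * log2(1 + SNR) bounds what any modulation
# scheme can achieve in a given bandwidth; real systems sit below it.
def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

for snr_db in (10, 20, 30):
    c = shannon_capacity_bps(20e6, snr_db)
    print(f"SNR {snr_db:2d} dB -> capacity {c / 1e6:.0f} Mbit/s")
```

At a (hypothetical) 30 dB SNR the 20 MHz channel could in principle carry nearly 200 Mbit/s, so 54 Mbit/s is indeed a tradeoff, not a hard limit of the bandwidth.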
So when it comes to "how to get the data rates from 802.11n and 802.11ac" the answer is, I'm afraid, "just look it up", as it comprises several well-reasoned but ultimately arbitrary decisions about all the specifics of that particular modulation scheme.
But to answer the immediate question of why 54 Mbps is the maximum for 802.11g:
802.11g uses OFDM, a pretty complex modulation scheme which employs multiple overlapping orthogonal subcarriers (meaning that at each subcarrier peak, the sidebands of all other subcarriers sum to zero). 802.11g uses 52 subcarriers, 48 of which carry data. These subcarriers are spread over 16.25 MHz (not 22!), and each can be modulated with one of several constellations, of which QAM-64 is the largest. Convolutional codes are used for forward error correction.
To operate at 54 Mbps, 802.11g uses the QAM-64 constellation and a rate-3/4 convolutional code. 48 Mbps is exactly the same but with a more redundant rate-2/3 convolutional code. The QAM-64 constellation lets each subcarrier carry 6 bits of information per symbol, and each symbol lasts 4 µs in 802.11g.
All of the above are well reasoned but ultimately arbitrary numbers, but multiplying them together gives you: 6 bits * 48 subcarriers * 3/4 error correction coding = 216 bits per 4µs. That's 54 million bits per second.
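The same multiplication reproduces the whole 802.11g OFDM rate table (6 through 54 Mbit/s); the modulation and code-rate pairs below are the standard 802.11a/g combinations:

```python
from fractions import Fraction

# Data rate = (coded bits per subcarrier) * (48 data subcarriers)
#           * (code rate) / (4 us symbol time); bits per us == Mbit/s.
SUBCARRIERS = 48
SYMBOL_TIME_US = 4

rates = [  # (modulation, coded bits per subcarrier, code rate)
    ("BPSK",   1, Fraction(1, 2)), ("BPSK",   1, Fraction(3, 4)),
    ("QPSK",   2, Fraction(1, 2)), ("QPSK",   2, Fraction(3, 4)),
    ("QAM-16", 4, Fraction(1, 2)), ("QAM-16", 4, Fraction(3, 4)),
    ("QAM-64", 6, Fraction(2, 3)), ("QAM-64", 6, Fraction(3, 4)),
]

for mod, bits, code_rate in rates:
    data_bits_per_symbol = bits * SUBCARRIERS * code_rate
    mbps = data_bits_per_symbol / SYMBOL_TIME_US
    print(f"{mod:7s} rate {code_rate}: {float(mbps):g} Mbit/s")
```

The last row is the 6 × 48 × 3/4 = 216 bits per 4 µs calculation from above, i.e. 54 Mbit/s.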
These slides on 802.11b/g were very useful for reminding me how this worked.