For a high bit rate in data communication we need high bandwidth. Suppose there are hundreds of carriers in a particular region and they are all allocated microwave frequencies, with minimum-spacing rules applied during allocation. Because of this, the bandwidth each carrier gets is reduced, and by the Shannon–Hartley theorem the achievable bit rate should decrease as well. How, then, do carriers claim to offer high bit rates? Is there a different way in which frequencies are allocated?
Frequency allocation for telecommunication companies
bandwidth, bit rate, communication, data, frequency
Related Solutions
You've just described two separate and entirely valid technologies used in communication theory today: software-defined radio and (for lack of a good general term that I can remember) multi-symbol/level communication.
If we modulate the amplitude of a wave (I think by providing the oscillator different levels of current), can we not sample this wave with some sort of analog to digital converter and process it on the CPU?
Yes - to a degree. You've just described software-defined radio. The basic idea is what you said: dispense with the majority of the radio-frequency equipment and create the modulated sine wave directly from the output of a D/A converter; for the return path, use a similarly fast A/D and plenty of DSP processing on both sides. The current problem is that although processor speeds are measured in gigahertz nowadays, the interface with the analog world hasn't yet reached those speeds. This means that direct waveform creation is limited to low frequencies (which, for communications, is still fearfully high compared to the frequencies 'normal' analog designers worry about). However, if I read my articles correctly, this can still allow removal of some of the intermediate-frequency hardware present in most radios. In the future it may be possible to dispense with more of the hardware.
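To give a concrete flavor of direct waveform generation, here's a minimal numpy sketch of what an SDR transmit path computes before handing samples to the D/A converter. All parameter values here are invented for illustration and assume an ideal DAC:

```python
import numpy as np

# All values below are invented for illustration only.
fs = 1_000_000        # DAC sample rate, Hz
f_carrier = 100_000   # carrier frequency, Hz (well below fs/2)
bit_rate = 10_000     # data rate, bits per second

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
samples_per_bit = fs // bit_rate

# Baseband on-off keying: hold each bit's level for samples_per_bit samples.
baseband = np.repeat(bits, samples_per_bit).astype(float)

# Multiply by the carrier: this sample stream is what the D/A would emit.
t = np.arange(len(baseband)) / fs
waveform = baseband * np.sin(2 * np.pi * f_carrier * t)
```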
If this is possible, why stick to base 2? If we can have a unique value for each measurable amplitude, data transfer rates would skyrocket. Imagine transferring data with base 1024, or even higher. If we could accurately sample the wave (each oscillation), I don't see why the rate of transfer couldn't be equal to the frequency of the wave times the base divided by 2 bits per second (this is probably not correct math).
You're right that it's not perfect, but you definitely have the basic idea down. To give an example we'll stick with Amplitude Modulation. When you're trying to transmit 0 or 1 using AM it's called On-Off Keying (link goes to a site with nice pictures and a description). This works by modulating a pure digital signal - 5 V is '1', 0 V is '0'. You're right that if you have a number of voltage levels you can send more data at once - this is called Amplitude Shift Keying (another nice description with a picture). As you can see, there are multiple voltage levels for the various combinations of bits - 2 bits gives four different voltage levels, 3 bits gives 8, etc.
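To make the level mapping concrete, here's a small Python sketch (the function name and voltage range are my own inventions, just for illustration) that groups a bit stream into symbols and maps each symbol to one of \$2^n\$ evenly spaced voltage levels:

```python
import numpy as np

def bits_to_ask_levels(bits, bits_per_symbol, v_max=5.0):
    """Group a bit stream into symbols and map each symbol to one of
    2**bits_per_symbol evenly spaced voltage levels between 0 and v_max."""
    n_levels = 2 ** bits_per_symbol
    symbols = bits.reshape(-1, bits_per_symbol)
    # Interpret each group of bits as an integer 0 .. n_levels-1.
    values = symbols.dot(2 ** np.arange(bits_per_symbol)[::-1])
    return values * v_max / (n_levels - 1)

bits = np.array([0, 1, 1, 0, 1, 1, 0, 0])
print(bits_to_ask_levels(bits, 2))   # 4 levels: [1.667, 3.333, 5.0, 0.0]
```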
The problem with this and other similar schemes is not theoretical but practical - in a communication channel with noise it's very likely you'll have trouble figuring out exactly what was sent. It's just like with analog signals: if my only valid voltage levels are 0 and 5 V, then if I read 4.3 V I can be reasonably sure it should be 5 V. If I have 1024 valid voltage levels, determining which one was sent gets a lot harder.
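You can see this effect in a quick simulation. The sketch below (all parameters are assumed, not from any real system) sends random levels over the same 0-5 V swing with the same Gaussian noise and decodes to the nearest valid level; with 2 levels errors are essentially nonexistent, while with 1024 levels the receiver almost always guesses wrong:

```python
import numpy as np

rng = np.random.default_rng(0)

def symbol_error_rate(n_levels, noise_std, v_max=5.0, n_trials=100_000):
    """Send random levels, add Gaussian noise, decode to the nearest level."""
    levels = np.linspace(0.0, v_max, n_levels)
    sent = rng.integers(n_levels, size=n_trials)
    received = levels[sent] + rng.normal(0.0, noise_std, n_trials)
    # Nearest-level decision.
    spacing = v_max / (n_levels - 1)
    decoded = np.clip(np.round(received / spacing), 0, n_levels - 1).astype(int)
    return np.mean(decoded != sent)

# Same noise, same 0-5 V swing: 2 levels survive, 1024 levels do not.
print(symbol_error_rate(2, noise_std=0.5))      # ~0 errors
print(symbol_error_rate(1024, noise_std=0.5))   # almost always wrong
```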
Also note that you're not limited to Amplitude Modulation - the same techniques can be applied to Phase Modulated signals (similar to FM), or you can step into the realm of Frequency Shift Keying, where distinct frequencies represent bits (i.e., transmitting '3' in binary might mean sending a 3 kHz sine wave and a 6 kHz sine wave together and separating them at the receiving end, while sending '1' might be just the 3 kHz sine wave).
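Here's a toy sketch of that binary FSK example, with '0' as a 3 kHz tone and '1' as a 6 kHz tone (the sample rate and symbol duration are arbitrary choices of mine):

```python
import numpy as np

# Hypothetical binary FSK: '0' -> 3 kHz tone, '1' -> 6 kHz tone.
fs = 48_000                  # sample rate, Hz (arbitrary)
symbol_time = 0.01           # seconds per bit (arbitrary)
freqs = {0: 3000, 1: 6000}   # tone frequencies from the example above

def fsk_modulate(bits):
    """Emit one tone burst per bit, concatenated into one waveform."""
    t = np.arange(int(fs * symbol_time)) / fs
    return np.concatenate([np.sin(2 * np.pi * freqs[b] * t) for b in bits])

signal = fsk_modulate([1, 0, 1, 1])
```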
And these techniques are already in wide use - GSM cell phones use a form of Frequency Shift Keying called Gaussian Minimum Shift Keying. I do want to clear up one misconception you may have, though: modulation is still used in all of these schemes. The opposite of a modulated signal is a baseband signal (like a bitstream from a serial port). To communicate at any distance over the air you need modulation, period. It's not going away, but how we generate the modulated waveform will change.
I suggest you take a class in Communication Theory if you can - it sounds like you've got the knack for it.
From a theoretical point of view, the maximum capacity of a channel affected by AWGN (Additive White Gaussian Noise) is given by the Shannon–Hartley theorem:
$$ C \leq B \log_2\left(1+\frac{S}{N_0 B}\right) $$
This means you can't push more information than that through a channel of bandwidth \$B = f_{MAX}-f_{MIN}\$ without making the communication unreliable.
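As a quick numerical illustration of the theorem (the numbers here are invented, not taken from any real system):

```python
import numpy as np

def shannon_capacity(bandwidth_hz, signal_power, noise_density):
    """Shannon-Hartley capacity bound in bit/s for an AWGN channel:
    C <= B * log2(1 + S / (N0 * B))."""
    snr = signal_power / (noise_density * bandwidth_hz)
    return bandwidth_hz * np.log2(1 + snr)

# Illustrative numbers only: 1 MHz channel, SNR of 1000 (30 dB).
print(shannon_capacity(1e6, 1e-3, 1e-12))  # ~9.97 Mbit/s
```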
Then we move on to modulations: every modulation has a particular spectral efficiency and a bit error probability. The more levels you use (e.g., 16-QAM vs. QPSK), the more bits you pack into each symbol (= more efficiency), but the more symbol errors you get (which map closely to the bit error rate when Gray coding is used).
The spectrum is directly related to the shaping pulse used by the modulation. A very common one is the raised-cosine pulse (because it introduces no Inter-Symbol Interference), which reduces the spectral efficiency by a factor of \$(1+\alpha)\$.
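Putting the last two paragraphs together, here's a back-of-the-envelope sketch of how the roll-off factor and the bits per symbol trade into a bit rate, under the simplifying assumption that the occupied bandwidth is \$R_s(1+\alpha)\$ (all figures illustrative):

```python
def bit_rate(bandwidth_hz, bits_per_symbol, alpha):
    """Achievable bit rate with raised-cosine pulses: the occupied
    bandwidth is Rs * (1 + alpha), so Rs = B / (1 + alpha)."""
    symbol_rate = bandwidth_hz / (1 + alpha)
    return symbol_rate * bits_per_symbol

# QPSK (2 bits/symbol) vs 16-QAM (4 bits/symbol) in a 1 MHz channel.
print(bit_rate(1e6, 2, alpha=0.35))  # ~1.48 Mbit/s
print(bit_rate(1e6, 4, alpha=0.35))  # ~2.96 Mbit/s
```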
Then we come to channel codes, which can give a huge gain, especially when using concatenated codes like Reed–Solomon + Viterbi-decoded convolutional codes, turbo codes, or LDPC.
Every effort is made to approach the Shannon capacity limit.
Best Answer
According to Wikipedia, 4G networks (IMT-Advanced) are expected to reach a peak spectral efficiency of 15 bps/Hz.
The possibility of transferring more than 1 bps/Hz is a direct consequence of the Shannon–Hartley theorem:
\$C = B \log_2\left(1+\mathrm{SNR}\right)\$
where \$C\$ is the capacity in bits per second and \$B\$ is the bandwidth in hertz.
Achieving 15 bps/Hz thus requires an SNR of at least \$2^{15}-1\$, or about 33,000.
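For the arithmetic, a two-line check (plain Python, nothing system-specific):

```python
import math

# SNR needed for 15 bps/Hz: C/B = log2(1 + SNR)  =>  SNR = 2^15 - 1.
snr_linear = 2**15 - 1
snr_db = 10 * math.log10(snr_linear)
print(snr_linear)           # 32767
print(round(snr_db, 1))     # 45.2 dB
```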