Electronic – Is it sensible to always use larger diameter conductors for carrying smaller signals

noise, physics, signal-to-noise, snr

This question as originally written sounds a little bit insane: it was originally asked to me by a colleague as a joke. I am an experimental NMR physicist. I frequently want to perform physical experiments which ultimately boil down to measuring small AC voltages (~µV) at about 100-300 MHz, while drawing the smallest current possible. We do this with resonant cavities and impedance-matched (50 Ω) coaxial conductors. Because we sometimes want to blast our samples with a kW of RF, these conductors are often quite "beefy": 10 mm diameter coax with high-quality N-type connectors and low insertion loss at the frequency of interest.

However, I think this question is of interest, for the reasons I'll outline below. The DC resistance of modern coax conductor assemblies is typically of the order of 1 Ω/km, and can be neglected for the 2 m of cable I typically use. At 300 MHz, however, the conductor has a skin depth given by

$$
\delta=\sqrt{\frac{2\rho}{\omega\mu}}
$$

of about four microns. If one assumes that the centre of my coax a solid wire (and therefore neglects proximity effects), the total AC resistance is effectively

$$
R_\text{AC}\approx\frac{L\rho}{\pi D\delta},
$$

where D is the total diameter of the cable. For my system, this is about 0.2 Ω. However, holding everything else constant, this naïve approximation implies that AC losses scale as 1/D, which suggests that one would want conductors as large as possible.
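As a sanity check on those numbers, here is a minimal sketch in Python, assuming copper (ρ ≈ 1.7×10⁻⁸ Ω·m, µ ≈ µ₀) and the 2 m, 10 mm figures above; it reproduces the ~4 µm skin depth and lands in the same ballpark as the 0.2 Ω quoted:

```python
import math

rho = 1.7e-8                 # resistivity of copper, ohm-m (assumed)
mu = 4 * math.pi * 1e-7      # permeability, ~mu_0 for copper, H/m
f = 300e6                    # frequency, Hz
length = 2.0                 # cable length, m
D = 10e-3                    # diameter used in the naive formula, m

omega = 2 * math.pi * f
delta = math.sqrt(2 * rho / (omega * mu))       # skin depth
R_ac = length * rho / (math.pi * D * delta)     # naive single-shell resistance

print(f"skin depth = {delta * 1e6:.1f} um")     # ~3.8 um
print(f"R_AC       = {R_ac * 1e3:.0f} mOhm")    # ~290 mOhm, same ballpark as 0.2 ohm
```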

However, the above discussion completely neglects noise. I understand that there are at least three main sources of noise I should consider: (1) thermal (Johnson-Nyquist) noise, induced in the conductor itself and in the matching capacitors in my network, (2) induced noise arising from RF radiation elsewhere in the universe, and (3) shot noise and 1/f noise arising from fundamental sources. I am not sure how the interaction of these three sources (and any I may have missed!) will change the conclusion reached above.

In particular, the expression for the expected Johnson noise voltage,

$$
v_n=\sqrt{4 k_B T R \Delta f},
$$

is essentially independent of the mass of the conductor, which I naïvely find rather odd: one might expect that the larger thermal mass of a real material would provide more opportunity for (at least transiently) induced noise currents. Additionally, everything I work with is RF shielded, but I can't help thinking that the shielding (and the rest of the room) will radiate as a black body at 300 K, and therefore emit some of the RF it is otherwise designed to stop.
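For scale, a minimal numerical sketch of that Johnson-noise expression, using the ~0.2 Ω from above, T = 300 K, and an assumed 1 MHz detection bandwidth:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
R = 0.2              # series AC resistance from above, ohm
bw = 1e6             # detection bandwidth, Hz (assumed; pick your own)

v_n = math.sqrt(4 * k_B * T * R * bw)           # rms Johnson noise voltage
print(f"v_n = {v_n * 1e9:.0f} nV rms")          # ~57 nV rms in 1 MHz
```

On those assumptions the cable's own Johnson noise sits roughly 25 dB below a 1 µV signal, which hints at why the preamplifier, rather than the cable, tends to set the floor.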

At some point, my gut feeling is that these noise processes would conspire to make any increase in the diameter of the conductor used pointless, or downright deleterious. Naïvely, I think this has got to be true, or labs would be filled with absolutely huge cables for use with sensitive experiments. Am I right?

What is the optimum coaxial conductor diameter to use when carrying information consisting of a potential difference of some small magnitude v at an AC frequency f? Is everything so dominated by the limitations of the (GaAs FET) preamplifier that this question is entirely pointless?

Best Answer

You're substantially correct on everything you've mentioned. Bigger cable has lower losses.

Low loss is important in two areas:

1) Noise

The attenuation of a feeder is what adds Johnson noise corresponding to its temperature onto the signal. A feeder of near zero length has near zero attenuation and so near zero noise figure.
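To put a number on that: a matched lossy feeder at physical temperature T has an equivalent input noise temperature Te = (L − 1)·T, so at the 290 K reference its noise figure in dB simply equals its attenuation in dB. A minimal sketch (the 0.5 dB loss is just an illustrative assumption):

```python
import math

def feeder_noise(atten_db, T_phys=290.0):
    """Equivalent input noise temperature and noise figure of a matched,
    lossy feeder at physical temperature T_phys."""
    loss = 10 ** (atten_db / 10)        # linear loss factor L
    T_e = (loss - 1) * T_phys           # noise temperature, K
    nf_db = 10 * math.log10(1 + T_e / 290.0)
    return T_e, nf_db

T_e, nf_db = feeder_noise(0.5)          # e.g. 0.5 dB of cable loss (assumed)
print(f"Te = {T_e:.0f} K, NF = {nf_db:.2f} dB")   # NF equals the loss at 290 K
```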

For runs of up to a meter or several (depending on frequency), the overall noise figure tends to be dominated by the input amplifier rather than the cable, even for cables of pencil diameter (you can get really thin cables, sub-mm even, and with those you do have to worry about meter lengths).
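A rough illustration with made-up but plausible numbers (0.2 dB of loss for 2 m of thin cable, 0.05 dB for a fat one, a 1 dB noise-figure preamp): for a room-temperature cable followed by an amplifier, the Friis cascade collapses to "the dB figures add", so the preamp's 1 dB swamps either cable.

```python
def system_nf_db(cable_loss_db, amp_nf_db):
    # Matched cable at ~290 K followed by an amplifier: Friis gives
    # F_sys = L_cable * F_amp, i.e. the dB figures simply add.
    return cable_loss_db + amp_nf_db

print(system_nf_db(0.20, 1.0))   # thin 2 m cable + preamp -> 1.20 dB system NF
print(system_nf_db(0.05, 1.0))   # fat 2 m cable  + preamp -> 1.05 dB system NF
```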

To get signals down off your roof into the lab, any feasible cable, even an unusually thick one, will be lossy enough that the solution is almost always an LNA on the roof, straight after the antenna.
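The long-haul case, sketched with the general Friis formula and assumed numbers (6 dB of loss for the run off the roof, an LNA with 1 dB noise figure and 30 dB gain): moving the LNA from the lab end to the antenna end is worth about 6 dB of system noise figure.

```python
import math

def friis_nf_db(stages):
    """System noise figure (dB) for a chain of (nf_dB, gain_dB) stages in signal order."""
    f_total, gain_so_far = 1.0, 1.0
    for nf_db, gain_db in stages:
        f_total += (10 ** (nf_db / 10) - 1) / gain_so_far
        gain_so_far *= 10 ** (gain_db / 10)
    return 10 * math.log10(f_total)

cable = (6.0, -6.0)   # 6 dB loss: NF 6 dB, gain -6 dB (assumed long run at ~290 K)
lna   = (1.0, 30.0)   # 1 dB NF, 30 dB gain (assumed)

print(friis_nf_db([lna, cable]))   # LNA on the roof:  ~1.0 dB
print(friis_nf_db([cable, lna]))   # LNA in the lab:   ~7.0 dB
```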

That's why you tend not to see really fat cables in labs: they're not needed for short hops, and they're not sufficient for long drags.

2) High power handling

In a transmitter station, you tend to have the amplifier in the building and the antenna 'out there' somewhere. Putting the amplifier 'out there' as well is usually not an option, so here you do have fat cables, as fat as possible given that they have to remain TEM, without moding. That means <3.5 mm for 26 GHz, <350 mm for 260 MHz, etc.
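A rough sketch of where those size limits come from, assuming an air-dielectric line and the usual rule of thumb that the mean circumference must stay below one wavelength to keep the first higher-order (TE11-like) mode cut off; the exact limit depends on the radius ratio and the dielectric, but it lands close to the figures above.

```python
import math

c = 299_792_458.0   # speed of light, m/s

def rough_max_diameter_mm(f_hz):
    """Very rough overmoding limit for air coax: mean circumference < one
    wavelength, so usable diameter scales as ~lambda/pi."""
    return 1e3 * (c / f_hz) / math.pi

print(f"{rough_max_diameter_mm(26e9):.1f} mm at 26 GHz")    # ~3.7 mm
print(f"{rough_max_diameter_mm(260e6):.0f} mm at 260 MHz")  # ~370 mm
```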

The impedance of the cable matters as well as the size. Have a look at this cable manufacturer's tutorial on why we have different cable impedances: roughly 75 Ω for lowest loss, and 50 Ω as a compromise that has settled in as the standard.
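As a footnote on the 75 Ω figure, here's a minimal sketch (air dielectric assumed) of where it comes from: for a fixed outer radius, conductor loss per unit length goes as (1/a + 1/b)/ln(b/a), and minimising that numerically lands near b/a ≈ 3.6, i.e. Z0 ≈ 77 Ω for an air line; a solid dielectric scales Z0 down by 1/√εr.

```python
import math

def relative_conductor_loss(ratio):
    """Conductor loss of an air-dielectric coax per unit length, up to a constant,
    as a function of b/a (outer/inner radius) with the outer diameter held fixed."""
    return (1 + ratio) / math.log(ratio)

# brute-force scan of b/a to find the minimum-loss geometry
ratios = [1.5 + 0.001 * i for i in range(9000)]
r_opt = min(ratios, key=relative_conductor_loss)
z_opt = 60.0 * math.log(r_opt)      # Z0 of an air line = 60 * ln(b/a), ohms

print(f"optimum b/a ~ {r_opt:.2f}  ->  Z0 ~ {z_opt:.0f} ohm")   # ~3.6 -> ~77 ohm
```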