What are the effects we need to consider when transmitting an extra low voltage (0.1 uV) signal over a ~1 m long cable?

Tags: cables, noise, precision

What are the effects we need to consider when transmitting an extra low voltage (0.1 uV) signal over about 1 meter of cable, either coaxial or shielded twisted pair?

The source is low impedance (~5 ohm), and the receiver would be high impedance amplifier input.
The signal occupies about 10 kHz of bandwidth, somewhere in the 10 kHz to 50 kHz range.

Cable impedance matching should not be an issue at these frequencies; the cable is probably too short for transmission-line effects to matter.

I would guess that shielding efficiency would be a primary concern? Would triboelectric noise be significant under these conditions? Anything else?

EDIT:
The context for the question is working out the noise budget for a complete signal path (physical medium -> sensor -> preamp -> cable -> main amp -> ADC -> digital signal processing gain), to aid in exploring the trade-off of placing the cable after preamp vs after the ADC (in the digital domain).

Best Answer

Shielding efficiency (unless the connection is balanced) and triboelectricity are certainly concerns. Balancing the connection mitigates the shielding-efficiency problem, and there are cables developed for microphone connections that are specifically designed to reduce triboelectric effects.

But so is simple noise: Johnson noise, the thermal noise present in any resistance.

(Shot noise, the statistical variation in current, would be significant at high impedances, but can be ignored here.)

Assuming the bandwidth is 10 kHz to 50 kHz (40 kHz BW) and the receiving amplifier is moderately low noise, say 1 nV/sqrt(Hz), its input noise contribution would be 1 nV/sqrt(Hz) * sqrt(40000 Hz) = 200 nV, or twice the signal amplitude.

The 5 ohm source impedance itself contributes 0.28 nV/sqrt(Hz), or 56 nV in that bandwidth, more than half the signal amplitude.
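The two contributions above can be sketched numerically. This is a minimal check of the figures quoted, assuming room temperature (290 K is an assumption; the exact choice shifts the result by a couple of percent) and the 40 kHz bandwidth and 1 nV/sqrt(Hz) amplifier from the text.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 290.0            # assumed temperature, K
R_src = 5.0          # source impedance, ohm
bandwidth = 40e3     # 10 kHz to 50 kHz

# Johnson noise density of the source: sqrt(4*k*T*R), in V/sqrt(Hz)
e_src = math.sqrt(4 * k_B * T * R_src)
# Amplifier input noise density (moderately low noise example)
e_amp = 1e-9

v_src = e_src * math.sqrt(bandwidth)   # integrated source noise, V rms
v_amp = e_amp * math.sqrt(bandwidth)   # integrated amplifier noise, V rms

print(f"source: {e_src*1e9:.2f} nV/rtHz -> {v_src*1e9:.0f} nV rms")
print(f"amp:    {e_amp*1e9:.2f} nV/rtHz -> {v_amp*1e9:.0f} nV rms")
```

This reproduces the 0.28 nV/sqrt(Hz) and roughly 56 nV source figures, and the 200 nV amplifier contribution, against a 100 nV signal.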

If the lower frequency limit is 10 Hz (not kHz) you'll also have 1/f noise (aka flicker noise) to contend with.


Do you have any signal/noise ratio requirements?


And is there any way you could place a step-up transformer (impedance converter) at the source end?

Say, 1:4 in voltage, 1:16 in impedance? That would give an 80 ohm source impedance or 1.1 nV/sqrt(Hz) noise floor, giving you a fighting chance of amplifying it with a noise figure of 2-3 dB.

A transformer will increase both the signal and the source impedance's own noise (the 56 nV, giving about 5 dB SNR either way). What you gain is that both are now larger than the amplifier's own noise contribution.

A really good amplifier (using discrete PNP transistors) can approach 0.5 nV/sqrt(Hz), which is still about 5 dB above your inherent 0.28 nV/sqrt(Hz) noise floor. Without the transformer it contributes 0.1 uV rms noise in 40 kHz, which would degrade your SNR to roughly 0 dB. (I've never seen ICs better than the 0.7 to 0.8 nV/sqrt(Hz) range.)

But after a 1:4 step-up transformer, this 0.1 uV is added to a 0.4 uV signal and 0.224 uV (56 nV * 4) of source noise.

Sqrt(0.1^2 + 0.224^2) = 0.245 uV, only about 1 dB worse than the source noise alone; you can say the amplifier has a 1 dB noise figure with the transformer, or about 6 dB without it.
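The noise-figure comparison above can be checked in a few lines. This sketch assumes the numbers already in play (100 nV signal, 56 nV source noise, 100 nV amplifier noise from a 0.5 nV/sqrt(Hz) amplifier in 40 kHz) and that the uncorrelated noise sources add in quadrature.

```python
import math

v_sig, v_src, v_amp = 100e-9, 56e-9, 100e-9   # all rms volts

def snr_db(signal, *noises):
    """SNR in dB, with uncorrelated noises summed in quadrature."""
    total = math.sqrt(sum(n**2 for n in noises))
    return 20 * math.log10(signal / total)

# Without transformer: amplifier noise dominates the source noise
snr_bare = snr_db(v_sig, v_src, v_amp)
nf_bare = snr_db(v_sig, v_src) - snr_bare          # ~6 dB noise figure

# With a 1:4 transformer: signal and source noise both scale by 4,
# the amplifier's own noise stays put
snr_xfmr = snr_db(4 * v_sig, 4 * v_src, v_amp)
nf_xfmr = snr_db(4 * v_sig, 4 * v_src) - snr_xfmr  # ~1 dB noise figure

print(f"noise figure: {nf_bare:.1f} dB bare, {nf_xfmr:.1f} dB with transformer")
```

The transformer doesn't change the source-limited SNR at all; it only shrinks the amplifier's share of the total, which is exactly the noise-figure improvement the answer describes.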

(Side note : with transformers for source impedance conversion, vacuum tube based mic amps can still approach the state of the art)


EDIT following question edit: you can then (slightly) improve the S/N ratio using a priori knowledge of the signal frequency (per Andy's leading questions).

If you can accurately (or fairly accurately) know or predict the signal frequency, and its amplitude is the quantity of interest, there are signal processing techniques you can use to extract it from broadband noise. (Useful search terms: PSD or phase-sensitive detector, or lock-in amplifier, for the traditionalist. Nowadays, just digitise the lot, FFT it, and analyse the frequency bins of interest.)
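As an illustration of the lock-in idea, here is a minimal digital sketch: mix the noisy record with a quadrature reference at the known frequency and average. All the numbers (sample rate, 20 kHz tone, noise level) are illustrative assumptions, not taken from the question.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 200e3                      # assumed sample rate, Hz
t = np.arange(int(fs)) / fs     # 1 s of data (integer number of cycles)
f_sig = 20e3                    # known signal frequency, Hz
a_sig = 0.1e-6                  # 0.1 uV amplitude

x = a_sig * np.sin(2 * np.pi * f_sig * t)
x += rng.normal(scale=1e-6, size=t.size)   # broadband noise, >> signal

# Digital lock-in: multiply by a complex reference at f_sig and average;
# the average rejects everything except a narrow band around f_sig
ref = np.exp(-2j * np.pi * f_sig * t)
amp_est = 2 * np.abs(np.mean(x * ref))     # estimate of a_sig

print(f"recovered amplitude: {amp_est*1e6:.3f} uV (true 0.100 uV)")
```

The averaging acts as a very narrow low-pass filter around the reference frequency, which is why the amplitude comes back out of noise that is far larger than the signal itself.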

Absent that, you can filter down to the 10 kHz band of interest (after the low noise amplifier, so the filter's own noise is insignificant). By selecting 1/4 of the original spectrum you can hope to improve the SNR by 6 dB.
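The 6 dB figure follows directly from white-noise power scaling with bandwidth; a two-line check, assuming the 40 kHz and 10 kHz bandwidths from the text:

```python
import math

bw_full, bw_filtered = 40e3, 10e3   # Hz
# White noise power is proportional to bandwidth, so narrowing the band
# by a factor of 4 reduces noise power by 10*log10(4) dB
snr_gain_db = 10 * math.log10(bw_full / bw_filtered)
print(f"SNR improvement from filtering: {snr_gain_db:.1f} dB")
```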