How different can the digital signal from a low-cost CD player be from that of an expensive Hi-Fi CD player?

Tags: audio, digital-audio

According to the high-end Hi-Fi connoisseurs' bibles, perfect sound reproduction requires, in addition to "special cables" and the like, a dedicated CD transport stage that reads the optical disc and outputs the digital signal.

Assuming that the disc is read flawlessly, the signals should be identical.

If there are read errors, the error-correction algorithm is the same in both players, as long as the number of errors is within its correction capacity. Beyond that, any "fix" is simply noise.

As far as I know, and I may be wrong, only a very small number of companies actually produce the CD players' mechanical and optical units, so at least these parts are common to both the cheap and the costly units.

So what makes the difference?

Best Answer

Assuming you read this answer...

Under normal conditions, the data encoded in the digital output is what's on the disc. You can easily verify this by recording it using a soundcard with SPDIF input and comparing it digitally with a rip of the CD made with PC software. I tried this with various CD players and CDs with different amounts of scratches and could confirm that, indeed, it works.
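As a minimal sketch of that comparison, assuming both the SPDIF capture and the rip are 16-bit/44.1 kHz stereo WAV files (the filenames and the use of numpy/soundfile are my own choices, not part of the original test):

```python
# Minimal sketch: check that an SPDIF capture is bit-identical to a rip.
# Assumes 16-bit/44.1 kHz stereo WAVs; "capture.wav" and "rip.wav" are
# placeholder filenames.
import numpy as np
import soundfile as sf

cap, _ = sf.read("capture.wav", dtype="int16")  # soundcard SPDIF recording
rip, _ = sf.read("rip.wav", dtype="int16")      # software rip of the same CD

def first_nonzero(x):
    """Index of the first frame with any non-zero sample."""
    idx = np.flatnonzero(x.any(axis=1))
    return idx[0] if idx.size else 0

# The capture won't start on the same sample as the rip, so align both
# streams on their first non-silent frame before comparing.
cap = cap[first_nonzero(cap):]
rip = rip[first_nonzero(rip):]
n = min(len(cap), len(rip))

diff = np.count_nonzero(cap[:n] != rip[:n])
print(f"compared {n} frames, {diff} differing samples")
# 0 differing samples => the player's digital output is bit-exact
```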

The exception is CD players with a digital volume control that also operates on the digital output. In that case, sample data is multiplied by a constant, which is your volume setting. This can be done properly: ideally, 16-bit data is multiplied by the volume and output as 24-bit digital, or it can be dithered back to 16 bits, which lowers the SNR at low volume settings. However, some players, notably the CD723 and most likely all other players using the same decoder chip, have a buggy volume control which multiplies then truncates without dithering. In addition, it cannot be set to a multiplier of exactly 1, so at full volume it multiplies by something like 0.99 and truncates. This causes an increase in quantization noise which manifests as a loss of detail. It's baked into the digital data, so it's not possible to get rid of it. The only solution is to avoid such players... or rather was, because CD is obsolete.
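To illustrate, here's a small sketch (my own, not from any player's firmware) comparing plain truncation with TPDF-dithered rounding when 16-bit samples are scaled by a volume factor just below 1:

```python
# Sketch: scale 16-bit samples by a volume just below 1, then compare
# plain truncation (the buggy behavior) with TPDF-dithered rounding.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(44100)
x = np.round(2000 * np.sin(2 * np.pi * 997 * t / 44100))  # quiet 997 Hz tone

vol = 0.99                         # the "almost 1" full-volume multiplier
scaled = x * vol                   # exact result, before requantization

trunc = scaled.astype(np.int64)    # buggy: multiply, then drop the fraction
d = rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)
dithered = np.round(scaled + d).astype(np.int64)  # proper: TPDF dither + round

for name, y in (("truncated", trunc), ("dithered ", dithered)):
    err = y - scaled
    print(name, f"error RMS: {np.sqrt(np.mean(err**2)):.3f} LSB")
# The truncation error is correlated with the signal, so it behaves as
# distortion (the "loss of detail"); the dithered error is plain,
# signal-independent noise.
```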

Besides that, the bits that come out are what's on the disc, period.

The same problems can occur with soundcards; it takes quite a bit of effort to ensure the bits coming out are actually the bits being played. Sometimes the driver or OS will sneak in resampling, volume control, or other effects. It is best to check with a loopback recording using the SPDIF input.

The other problem is jitter; that's either "the plague" or "woo-woo" depending on who you ask. In my experience it's quite audible, although various DACs have wildly differing sensitivity to it. A proper DAC should have zero sensitivity to jitter, but that is quite difficult to achieve. When a DAC is sensitive to it, it will sound different with different CD players or digital cables; audiophiles then call the DAC "transparent", when it's actually a flaw in the DAC. However, one might still want to make the DAC's job easier by using a low-jitter source, or at least avoid the most common sources of jitter.
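For a sense of scale, here's a back-of-the-envelope simulation (my own, treating the ~3 ns figure from the scope shot below as RMS jitter for simplicity): on a 10 kHz tone, that much jitter already limits the SNR to well under 16-bit performance.

```python
# Back-of-the-envelope simulation: sample a 10 kHz tone with 3 ns RMS
# of random clock jitter and measure the resulting SNR.
import numpy as np

fs, f0, tj = 44100, 10000, 3e-9
n = np.arange(1 << 16)
t_ideal = n / fs
t_jit = t_ideal + np.random.default_rng(0).normal(0, tj, n.size)

ideal = np.sin(2 * np.pi * f0 * t_ideal)
jittered = np.sin(2 * np.pi * f0 * t_jit)   # what a jitter-sensitive DAC outputs

err = jittered - ideal
snr = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
print(f"SNR limited by jitter: {snr:.1f} dB")  # ~74 dB, vs ~98 dB for clean 16-bit
```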

Here's a scope shot of the CD723's digital output from my archives; it has a very slow rise time and a large amount of jitter, about 3 ns, visible on the scope.

[Scope shot: CD723 SPDIF output, showing slow edges and about 3 ns of jitter]

This is measured by triggering on an edge, then adjusting the timebase delay to display the next edge, so the scope displays the time delay between two edges, which should be constant. In this case it is not constant, so the trace is smeared horizontally.

Jitter on a SPDIF output can be measured with expensive equipment like an Audio Precision analyzer.

The homegrown way to do it is to use a SPDIF receiver which does not attenuate jitter (like the CS8412), feed it a SPDIF signal, and acquire the receiver's WCLK output with a soundcard. If the digital audio runs at 44.1 kHz, the wordclock will be at 44.1 kHz too, so the soundcard should be set to a 192 kHz sampling rate to acquire it. Then you can do an FFT, display it, and look at the phase-noise skirts, or do any other kind of processing you like. It is quite instructive.
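A minimal sketch of that analysis step, assuming the wordclock was captured as a mono 192 kHz file with the placeholder name "wclk.wav":

```python
# Sketch of the analysis step: FFT the recorded wordclock and read the
# level of the phase-noise skirts relative to the 44.1 kHz carrier.
import numpy as np
import soundfile as sf

wclk, fs = sf.read("wclk.wav")                 # mono 192 kHz WCLK recording
spec = np.abs(np.fft.rfft(wclk * np.hanning(len(wclk))))
mag_db = 20 * np.log10(spec / spec.max() + 1e-12)

peak = int(np.argmax(spec))                    # carrier bin, near 44.1 kHz
for off in (10, 100, 1000):                    # Hz offsets from the carrier
    bin_off = int(round(off * len(wclk) / fs))
    print(f"{off:>5} Hz offset: {mag_db[peak + bin_off]:6.1f} dBc")
# A clean clock is one narrow spectral line; jitter spreads energy into
# skirts on both sides of it.
```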

You can also look at the recovered bitclock with a spectrum analyzer, or mix it with a known stable frequency and look at the resulting difference-frequency signal with a soundcard. There are plenty of cheap ways to do it, so if you want to "tweak" a digital source, it would be a good idea to set up a jitter measurement rig to at least have an idea of what you're doing.

In my experience, jitter in a CD player's digital output comes from:

  • Cheap crystal oscillators

That's the #1 cause, because an oscillator is an analog circuit working with rather low-level signals, so it is quite sensitive to noise. It is usually implemented as a crystal slapped onto the big decoder VLSI, which is the worst case, because these chips are very noisy. Add a noisy power supply and a double-sided layout with no ground plane, and you can be sure the performance will be way below what you would get from any €1 canned oscillator powered by a dedicated 30-cent LDO. In addition, crystals are quite sensitive to vibration. The oscillator is usually the weak spot because power-supply noise causes frequency variation, which is integrated into phase noise, whereas downstream sources only add phase noise, which is not integrated.
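A one-line way to see why (standard clock-noise bookkeeping, not from the original answer): phase is the integral of frequency,

$$\phi_{\text{err}}(t) = 2\pi \int_0^t \Delta f(\tau)\,d\tau,$$

so a frequency disturbance $\Delta f$ inside the oscillator keeps accumulating into phase error, while a downstream buffer or gate can only add its own bounded $\Delta\phi$.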

  • Cheap power supplies

The optical pickup's coil actuators must follow the track on the disc, so they draw a current from the supply that depends on the shape of the track, how much the disc vibrates, etc. It's basically a big microphone: any vibration in the device shows up as voltage ripple on the supply rail. You can probe the supply rails and tap the player with a finger to see the effect, for example. If the power supply is too cheap or badly designed, and this supply rail is shared with sensitive components, then you get a problem, and audiophiles will discover it sounds better with those fashionable spiky feet that cost more than the player, or with a brick on top, or after milling the edge of the CD so it is better centered on the hole and vibrates less, etc.

  • Bad design

The digital audio output usually comes straight out of a big noisy VLSI, so the correct way to do it is to re-align the edges with the clean clock from the canned oscillator, using a fast, high-slew-rate flip-flop just before the output. It doesn't cost much. Also, the cable should be driven with a proper 75R termination that takes the driver impedance into account, no coupling cap (otherwise you get intersymbol interference), a proper shield termination to the enclosure, and the other usual practices for high-speed signals on a coax. This never happens; usually you get a 75R resistor, a 10nF cap, and an RCA jack that isn't even grounded to the metal box.
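For a sense of the numbers, here's an illustrative calculation (my own sketch, not a schematic from any real player) of the resistive pad a proper output would use: a 3.3 V CMOS driver, a 75 Ω source impedance, and the consumer SPDIF level of roughly 0.5 Vpp at the load, ignoring the gate's own output impedance.

```python
# Illustrative sizing of a resistive SPDIF output pad: 3.3 V CMOS driver,
# 75 ohm source impedance, ~0.5 Vpp consumer level into a 75 ohm load.
V_DRV, V_LOAD, Z0 = 3.3, 0.5, 75.0

v_th = 2 * V_LOAD        # matched 75R source/load halve the open-circuit swing
k = v_th / V_DRV         # required divider ratio R_shunt / (R_series + R_shunt)
r_shunt = Z0 / (1 - k)   # from Z0 = R_shunt * (1 - k)
r_series = r_shunt * (1 - k) / k

print(f"series: {r_series:.0f} ohms, shunt: {r_shunt:.0f} ohms")
# -> roughly 250R series and 110R shunt; the nearby standard values make
#    a common SPDIF output network.
```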

  • Bad layout

Before 4-layer boards got cheap, the usual approach was a 2-layer PCB without a ground plane, and the digital output signal would often pass through ribbon cables with only one ground wire, shared with other high-speed signals. This also adds plenty of noise.
