The origin of multiplicative noise from a DSLR sensor

Tags: noise, photosensor

I'm trying to understand the behavior of my DSLR camera sensor (Canon 80D). I've taken a picture of a gradient running from the top-right to the bottom-left corner of a square displayed on my monitor. The square is positioned at the right-hand side of the frame, so that it's easy to take the diagonal starting from the top-right corner, which is composed only of green-filtered photosites.

The shot was taken with the lens defocused, so that the monitor's pixel pattern couldn't cause interference effects such as moiré. The ISO sensitivity is set to the lowest value of 100, the exposure is 1/10 s, and the aperture is f/3.2 at f = 24 mm.

What I get is that as the intensity registered by a photosite increases, the noise amplitude also increases. See this plot of the raw data of the diagonal, taken from the CR2 file:

[Plot: raw values along the green diagonal, extracted from the CR2 file]

The fact that noise amplitude is correlated with signal amplitude makes me wonder. Thermal noise should be the same on all the photosites, regardless of their illuminance. Quantization noise wouldn't even be noticeable on this scale of ~10000 counts (and it's also additive). Shot noise also shouldn't be noticeable at this illuminance.

So what is the origin of this multiplicative noise then?


I've done some more captures to find the relation between the mean and the variance of the pixel values. I took 15 shots of a gray gradient, kept every 50th row and column of the resulting data, and computed the mean and variance of the 15 values at each remaining pixel.
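For reference, the per-pixel statistics can be computed roughly like this (a minimal sketch in Python with NumPy; the rawpy loader and the file names are my assumptions, not the exact processing used for the plots below):

```python
import numpy as np
import rawpy  # assumed raw loader; any library exposing the CR2 sensor data works

# Load the 15 raw frames into one stack of shape (frame, row, col).
# The file names are hypothetical placeholders.
frames = []
for i in range(15):
    with rawpy.imread(f"gray_gradient_{i:02d}.CR2") as raw:
        frames.append(raw.raw_image.astype(np.float64))
stack = np.stack(frames)

# Keep every 50th row and column, then compute mean and variance of the
# 15 values at each remaining pixel.
sub = stack[:, ::50, ::50]
mean = sub.mean(axis=0).ravel()
var = sub.var(axis=0, ddof=1).ravel()   # unbiased estimate from 15 samples

# Least-squares line through the (mean, variance) points.  For shot noise
# the slope estimates the gain in DN per electron; the intercept absorbs
# read noise and any uncorrected black level.
slope, intercept = np.polyfit(mean, var, 1)
print(f"estimated gain ≈ {slope:.4f} DN/e-, intercept ≈ {intercept:.1f} DN²")
```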

Here are the results. Blue is the variance, orange the least-squares fit:

[Plot: variance vs. mean]

Blue is the variance/mean ratio, orange the gain estimated from the fit above:

[Plot: variance/mean ratio vs. mean]

The same plot smoothed with a 100-point moving average:

[Plot: smoothed variance/mean ratio vs. mean]
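The ratio and the smoothing are straightforward; continuing from the mean and var arrays in the sketch above (the carried-over names are my assumptions), the 100-point moving average can be done with a plain convolution:

```python
# Sort by mean so the ratio can be plotted (and smoothed) as a function of mean.
order = np.argsort(mean)
ratio = var[order] / mean[order]   # for pure shot noise with no offset this is flat

# 100-point moving average over the mean-sorted ratio.
window = 100
smoothed = np.convolve(ratio, np.ones(window) / window, mode="valid")
```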

Is this consistent with the shot noise explanation given in the answers?


After some more comments I subtracted the DC offset of about 511.9 from all the pixel values, and now the smoothed variance-to-mean ratio (i.e. the estimated gain) as a function of the mean looks like this:

[Plot: smoothed variance/mean ratio vs. mean, after subtracting the DC offset]

So now the answer explaining the noise as shot noise makes sense.
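For completeness, the correction is just one extra subtraction before forming the ratio (continuing the sketch above; 511.9 is the black level quoted, and the variance itself is unaffected by a constant offset):

```python
# Subtract the DC offset (black level) from the mean before taking the ratio.
black_level = 511.9
ratio_corrected = var[order] / (mean[order] - black_level)
smoothed_corrected = np.convolve(ratio_corrected,
                                 np.ones(window) / window, mode="valid")
```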

Best Answer

To amplify a bit on the explanation of "shot noise", remember that we are in the realm where (due to the tiny size of the photosites, the length of the exposure, and the various efficiencies involved) the discrete nature of photons really matters. The sensor is actually counting them.

Now imagine you're photographing a flat field, so that every photosite receives the same amount of illumination (for simplicity, say it's a monochrome sensor). Does that mean that in a given exposure every photosite will record exactly the same number of photons? No! Photons arrive at random, and their arrival is well modeled as a Poisson process.
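A quick way to convince yourself of this is to simulate a flat field: give every photosite exactly the same expected photon count and look at what each one actually records (a sketch using NumPy's Poisson sampler; the numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "flat field": every photosite expects the same 100 photons per exposure.
expected = 100
counts = rng.poisson(lam=expected, size=(4, 8))   # a tiny 4x8 "sensor"
print(counts)
# Despite identical illumination, the recorded counts scatter around 100
# with a standard deviation of roughly sqrt(100) = 10.
print(counts.std(), np.sqrt(expected))
```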

If you turned the illumination way down so that the average number of photons per photosite per exposure was 1, then about 37% of the photosites would record 0 photons, 37% would record 1 photon, 18% would record 2, and 8% would record 3 or more. The standard deviation of this distribution is 1.

[Plot: Poisson distribution, μ = 1]

If you increased the illumination so that the average number of photons per photosite per exposure was 10, then about 92% would record between 5 and 15 photons, with fewer than 1% seeing more than 18. The standard deviation of this distribution is sqrt(10) ≈ 3.16.

[Plot: Poisson distribution, μ = 10]

If you increased again to an average of 100, then about 90% would record between 84 and 117 photons, and the standard deviation is 10.

[Plot: Poisson distribution, μ = 100]
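The percentages above are plain Poisson probabilities and are easy to check (a sketch assuming SciPy is available):

```python
from scipy.stats import poisson

# mu = 1: probability of recording 0, 1, 2, or 3+ photons
print(poisson.pmf([0, 1, 2], 1))                      # ~0.368, 0.368, 0.184
print(poisson.sf(2, 1))                               # P(3 or more) ~ 0.080

# mu = 10: fraction between 5 and 15 photons, and fraction above 18
print(poisson.cdf(15, 10) - poisson.cdf(4, 10))       # ~0.92
print(poisson.sf(18, 10))                             # ~0.007

# mu = 100: fraction between 84 and 117 photons, and the standard deviation
print(poisson.cdf(117, 100) - poisson.cdf(83, 100))   # ~0.91
print(poisson.std(100))                               # 10.0
```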

And the pattern continues. As the illumination increases, there are more and more "rolls of the dice" for a photon to be detected by a photosite or not, and more and more possible values for the measurement to take on, so the absolute magnitude of the noise increases. At the same time, since the standard deviation of a Poisson process is the square root of the mean rate, the relative magnitude of the noise decreases with increasing illumination. Since what we perceive is (more or less) ratios of brightness, this explains why the visible noise goes down.

When you get up to, say, an average illumination of a million photons, the standard deviation is up to 1000... but put another way, that means that practically all of the values lie between 99.8% and 100.2% of the average. That's a far cry from the situation where the average illumination was 10, and we could easily see values between 50% and 150% of the average.
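Tabulating the absolute and relative noise for a few illumination levels makes the pattern explicit (same assumptions as above, NumPy only):

```python
import numpy as np

for mu in (1, 10, 100, 1_000_000):
    sigma = np.sqrt(mu)   # shot-noise standard deviation
    print(f"mean {mu:>9}: sigma = {sigma:8.1f}, relative noise = {sigma / mu:.2%}")
# The absolute noise grows as sqrt(mean), while the relative noise falls
# as 1/sqrt(mean): 100% at a mean of 1, but only 0.1% at a million.
```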