p.1 #20 · Panasonic Diffractive Color Filter/Splitter patent
Now, suppose we instead use white, yellow, and cyan filters (not exactly what's happening here, but similar). Then, what we directly measure at the three pixels (each with its counting-statistics noise) is:
W = R+G+B, dW = sqrt(R+G+B)
Y = R+G, dY = sqrt(R+G)
C = G+B, dC = sqrt(G+B)
What? No. We are measuring W, Y, and C here, not R, G, and B. The way you are adding the noise treats each pixel as if it measured R, G, and B separately and then summed them to get its value of W, Y, or C.
Starting from pixels that actually measure these things directly, we have
W = W, dW = sqrt(W)
Y = Y, dY = sqrt(Y)
C = C, dC = sqrt(C)
Now, the next step becomes
R' = W-C => dR' = sqrt(W+C)
G' = C+Y-W => dG' = sqrt(C+Y+W)
B' = W-Y => dB' = sqrt(W+Y)
(Variances add in quadrature even when the signals themselves are subtracted.)
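A quick Monte Carlo sketch of the propagation above (my own check, not from the patent): simulate Poisson-noisy W, Y, C pixels with assumed equal true intensities R = G = B = 1000, reconstruct R', G', B' by differencing, and compare the measured scatter against the quadrature-sum predictions sqrt(W+C), sqrt(C+Y+W), sqrt(W+Y).

```python
import numpy as np

# Hypothetical equal per-channel intensities, in electrons.
R_true, G_true, B_true = 1000, 1000, 1000
n = 1_000_000  # number of simulated exposures

rng = np.random.default_rng(0)

# Each pixel directly measures W, Y, or C with Poisson (shot) noise.
W = rng.poisson(R_true + G_true + B_true, n)  # W = R+G+B
Y = rng.poisson(R_true + G_true, n)           # Y = R+G
C = rng.poisson(G_true + B_true, n)           # C = G+B

# Reconstruct the color channels by differencing.
R_prime = W - C
G_prime = C + Y - W
B_prime = W - Y

# Variances add in quadrature, even for subtracted signals:
print(R_prime.std(), np.sqrt(5000))  # predicted dR' = sqrt(W+C)
print(G_prime.std(), np.sqrt(7000))  # predicted dG' = sqrt(C+Y+W)
print(B_prime.std(), np.sqrt(5000))  # predicted dB' = sqrt(W+Y)
```

The simulated scatter lands on the quadrature predictions, not on sqrt(C+Y-W) or sqrt(W-Y).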
So, we have increased the noise by sqrt(2) (for R' and B', each combining two measurements) to sqrt(3) (for G', combining three) over what it was in-sensor.
So we've got a 2-3x improvement in signal for a sqrt(2)-sqrt(3) increase in noise. It's a net win.
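Making that bookkeeping explicit (a sketch under the same equal-illumination assumption as above): Y and C pixels collect 2 units of light and W collects 3, versus 1 unit for a conventional R/G/B filter, while the reconstruction noise grows like the square root of the number of pixels combined. The net per-channel SNR factor is therefore sqrt(2) to sqrt(3).

```python
import math

# Signal collected per pixel, relative to a single R/G/B filter pixel:
signal_gain_yc = 2  # Y = R+G, C = G+B each pass two channels
signal_gain_w = 3   # W = R+G+B passes all three

# Reconstruction noise grows like sqrt(#measurements combined):
noise_growth_2 = math.sqrt(2)  # R', B' combine two pixels
noise_growth_3 = math.sqrt(3)  # G' combines three

# Net SNR change = signal gain / noise growth.
snr_gain_low = signal_gain_yc / noise_growth_2   # sqrt(2) ~ 1.41
snr_gain_high = signal_gain_w / noise_growth_3   # sqrt(3) ~ 1.73
print(snr_gain_low, snr_gain_high)
```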