Does anyone know the theory behind this?
The design I'm using right now has 4 individual photocells, each behind a different color filter (orange, blue, green, pink).
Each photocell has a load resistor in the 400 kΩ to 2200 kΩ range. This generates a voltage across the load of between 0.008 V and 0.130 V. Values above 0.130 V are outside the linear region of the photocell, so they are thrown out; values below 0.008 V are below the precision of the ADC, so they are also thrown out (for now — this number will likely decrease further).
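The valid-range check described above could be sketched like this (a minimal sketch; the constant and function names are mine, only the 0.008 V and 0.130 V thresholds come from the post):

```python
# Thresholds taken from the post; readings outside them are discarded.
V_MIN = 0.008   # below this, the reading is under the ADC's precision
V_MAX = 0.130   # above this, the photocell is outside its linear region

def valid_reading(volts):
    """Return the voltage if it falls in the usable range, else None."""
    if V_MIN <= volts <= V_MAX:
        return volts
    return None
```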
Periodically (10 times a second) all four photocells are sampled by the ADC and normalized against a white calibration. Two ratios are then computed: orange/blue to determine white balance and pink/green to determine tint. These ratios are looked up against another calibration table to find the approximate white balance (in EV) and tint.
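The normalize-then-ratio step could look roughly like this (a sketch under assumed names — `white_cal` holds the per-channel readings taken off the white reference; the channel keys and function names are mine):

```python
def normalize(samples, white_cal):
    """Scale each channel's raw voltage by its white-calibration reading."""
    return {ch: samples[ch] / white_cal[ch] for ch in samples}

def color_ratios(norm):
    """The two ratios from the post: orange/blue for white balance,
    pink/green for tint."""
    return norm["orange"] / norm["blue"], norm["pink"] / norm["green"]
```

Under a white light that matches the calibration, both ratios should come out near 1.0, which is what makes them usable as a lookup key into the calibration table.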
1) Is there a difference between using O-B-G-P vs. R-G-B photocells? I figured the tradeoff here was the complexity of deriving white balance and tint from R-G-B data instead of O-B-G-P.
2) In most cases I am using a gray card in front of the sensors to catch the light I am trying to sample. Otherwise it seems to get thrown off by background colors (even if they aren't reflecting a lot of light). I'm still experimenting here: is it better to use a semi-translucent white material like some of the commercial models do?
3) How do you convert between an exponential (2^n) scale like EV and the way white balance is normally expressed (in Kelvin)?
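On the last question, one common bridge (an assumption on my part, not something from the post) is that photographic color shifts behave roughly linearly in mireds (1e6 / Kelvin), while a base-2 log turns a channel ratio into an EV-style stop count:

```python
import math

# Mired conversion: a common, roughly perceptually-linear scale for
# color temperature shifts. (My suggestion, not the post's method.)
def kelvin_to_mired(kelvin):
    return 1e6 / kelvin

def mired_to_kelvin(mired):
    return 1e6 / mired

# A base-2 log converts a multiplicative channel ratio into an
# additive, EV-like (2^n) quantity.
def ratio_to_stops(ratio):
    return math.log2(ratio)
```

The idea would be to calibrate the orange/blue ratio (or its log) against known mired values, then convert the interpolated result back to Kelvin for display.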