| p.2 #20 · New Olympus OM-D announced |
Ken, huh? You obviously know more about this stuff than me, enlighten us, what does this mean "read noise" vs. "quantization noise"... seriously, thanks.
At the risk of dragging off topic...
"Read noise" is the minimum amount of noise from a sensor. If I read out a perfectly dark frame (no exposure at all), rather than getting a uniformly black image there will actually be noise in it. There are other sources of noise as well that depend on other parameters, but no matter what you'll always see at least the read noise. A short write-up:
"Quantization noise" is kind of a misleading term; "quantization error" would be more descriptive. When you measure a pixel value and convert it to a binary number (say 12 bits in this case), somewhere along the way the measurement gets rounded to a whole number. For instance, 12 bits can express the numbers 0 to 4095; if the value being measured was actually 1011.43, we'd record 1011 and there would be an error of 0.43. It turns out that for real-world data, if we look at the rounding errors over a large number of measurements (say 16 million pixels), these "quantization errors" (you could think of them as "rounding errors") look perfectly random and noise-like. Hence the name "quantization noise". Another short write-up:
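You can see this "rounding errors look like noise" effect with a quick sketch. The values below are made up purely for illustration (uniform random readings on a 12-bit scale, nothing to do with any particular sensor); the point is that the errors spread evenly over ±0.5 of a level, with a standard deviation of 1/sqrt(12) ≈ 0.29 levels:

```python
import math
import random

# Illustrative only: fake "analog" readings on a 12-bit (0..4095) scale.
random.seed(42)
readings = [random.uniform(0, 4095) for _ in range(100_000)]

# Quantize each reading to a whole number and keep the rounding error.
errors = [x - round(x) for x in readings]  # each error lies in [-0.5, 0.5]

# For varied real-world signals these errors behave like uniform noise on
# [-0.5, 0.5], whose standard deviation is 1/sqrt(12) ≈ 0.289 levels.
mean_err = sum(errors) / len(errors)
std_err = math.sqrt(sum((e - mean_err) ** 2 for e in errors) / len(errors))
print(f"error std: {std_err:.3f} levels (theory: {1/math.sqrt(12):.3f})")
```

So even though each individual error is a deterministic rounding artifact, over millions of pixels the ensemble is statistically indistinguishable from added noise.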
A natural question when measuring and storing data is "how many bits do I need to measure/store?" The answer is: enough bits that you don't corrupt the measurement with too much "quantization noise". The less noisy the measurement was to begin with, the more bits you need to store; the noisier the measurement, the fewer.
For the example at hand, we are assuming the OM-D is using the sensor from the G3/GX1. From measurements derived from DxO (see sensorgen.info) we know the saturation capacity of the pixels is 12554 electrons and that the read noise is 11.1 electrons. For a 12-bit RAW file, let's say it uses only 3800 of the 4096 available levels (at least that's what Panasonic does in their RAW files); then each level represents 12554/3800 = 3.3 electrons. That means the maximum quantization error (rounding error) is about 1.65 electrons. But the read noise is 11.1 electrons, roughly 7 times as large. So from that we know 12 bits is more than enough to record the sensor data without increasing the noise of the measurement. Going to 14 bits wouldn't help us.
By comparison, the Nikon D7000 has a saturation capacity of 49058 electrons and a read noise of just 3.1 electrons. Making the same assumptions as above, a 12-bit RAW would record 49058/3800 = 12.9 electrons per level, for a maximum quantization error of about 6.5 electrons - more than double the read noise! Hence for the D7000 sensor 12-bit data is not enough, and that's why they offer a 14-bit RAW file for that camera.
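The two comparisons above boil down to the same little calculation, so here it is as a sketch. The sensor numbers are the sensorgen.info values quoted in the post, and the 3800 usable levels is the same Panasonic-style assumption (the function name is mine, not any real API):

```python
USABLE_LEVELS = 3800  # assumption from the post: ~3800 of 4096 12-bit levels used

def quantization_check(name, full_well_e, read_noise_e, levels=USABLE_LEVELS):
    """Compare worst-case rounding error against read noise, in electrons."""
    step = full_well_e / levels      # electrons represented by one RAW level
    max_q_error = step / 2           # worst-case rounding error is half a level
    print(f"{name}: {step:.1f} e-/level, max quantization error "
          f"{max_q_error:.1f} e- vs read noise {read_noise_e} e-")
    return max_q_error < read_noise_e  # True -> bit depth doesn't add noise

quantization_check("G3/GX1 @ 12-bit", 12554, 11.1)  # quantization well below read noise
quantization_check("D7000 @ 12-bit", 49058, 3.1)    # quantization exceeds read noise
```

Running it reproduces the numbers in the post: about 1.65 e- of worst-case rounding error for the G3/GX1 versus 6.5 e- for the D7000, which is why only the latter benefits from a 14-bit RAW.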
Well, way off topic - but you asked
I hope it was slightly more useful than confusing!
Thank you Ken, that was excellent and yes, I do understand better what you are talking about. I remember many posts about the D700 and whether or not to use 12 or 14 bit, some pretty heated. I know nothing about the technology of the sensor, other than what I can see in the final image. I used 14-bit on my D700 but to be honest I never tested to see if it was any better than the 12-bit setting. Sounds like 12-bit will be just fine on the E-M5 based on your information, so thank you for that. One last thought: is there a difference between jpg and RAW files when it comes to your "read" vs. "quantization" noise above?
For me, I'm hoping there is more room in the E-M5 RAW files for adjustments