Ajay C:
Luka and theSuede:
While what theSuede says largely applies in the CCD world, there is no physical pixel-level shifting in the CMOS sensor world.
1. It is very unusual for the pixels (the physical photosites) themselves to be shifted. More than ~20 new sensor designs of various pixel sizes and formats pass through us, and I don't think I have seen a single sensor where the photosite itself was shifted. However, depending on the pixel size, the micro-lenses are shifted radially outwards (most of the time only the X direction is shifted; gapless micro-lenses also help), and smaller pixels need more micro-lens shift than larger pixels. Even then, position-dependent gain algorithms are used to fix the remaining color imbalances. In the industry this is typically referred to as a color correction matrix or a similar term. The idea is to multiply each pixel's response by the inverse of the sensor's vignetting profile, per channel.
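To make the position-dependent gain idea concrete, here is a minimal Python/NumPy sketch. It assumes a toy radial falloff of the form 1 − strength·r², with a different (hypothetical) strength per color channel; real sensors use calibrated per-channel shading maps, not this closed form. The function names and falloff model are my own illustration, not anything from a specific vendor pipeline.

```python
import numpy as np

def shading_gain_map(h, w, strength):
    """Position-dependent correction gain for one color plane.

    Assumes a toy vignetting profile: relative sensitivity falls off as
    1 - strength * r^2, where r goes from 0 at the image center to 1 at
    the corners.  The returned map is the inverse of that profile, so
    multiplying a raw plane by it flattens the field.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((ys - cy) / cy) ** 2 + ((xs - cx) / cx) ** 2
    r2 /= r2.max()                      # 0 at center, 1 at the corners
    response = 1.0 - strength * r2      # assumed per-pixel sensitivity
    return 1.0 / response               # gain that undoes the falloff

def correct_planes(planes, strengths):
    """Apply a per-channel inverse-vignetting gain to separate R/G/B planes."""
    return {ch: planes[ch] * shading_gain_map(*planes[ch].shape, s)
            for ch, s in zip(planes, strengths)}
```

Feeding a simulated vignetted flat field through `correct_planes` returns a uniform plane for each channel, which is exactly what the per-channel "inverse vignetting profile" multiplication is meant to achieve.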
2. About pixels not being rotationally symmetric: modern pixels are drawn to a square format, for example 2 um x 2 um. What does differ between the X and Y directions of a pixel is the fill-factor aspect ratio (the fill factor is the physical opening of the pixel that collects photons). That is, the metal layers on top of the pixel are not symmetrical (i.e. not the same in the X and Y directions). However, the higher up the metal stack you go, the more symmetrical the layers tend to be; e.g. the metal-4 layer tends to be more symmetrical (in X and Y) than, say, metal 2 or metal 3.
3. About the red CFA being larger in size: that is not true in the CMOS world. Before the CFA layers are deposited on the pixels there is no way to tell which pixel represents which color plane, i.e. the sensor is monochromatic, and all the pixels (and their fill factors) are the same size. This means the CFA should be centered on the physical opening of the pixel and on the metal-stack geometry. If you look at a sensor's RGB quantum efficiency (QE) curves, the red channel typically has the lowest QE, i.e. red pixels convert the fewest incident photons to electrons. To obtain pure white (D65 illuminant), the red channel is therefore amplified the most before it is mixed with the two greens (the green channel is actually split into Gr and Gb) and blue. DxO's sensor data includes this amplification metric (from which QE can be derived).
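The "red is amplified the most" point can be sketched in a few lines of Python. The raw responses below are hypothetical numbers I made up for illustration (the weakest channel being red, as is typical); they are not measurements from any particular sensor, and real pipelines apply separate Gr/Gb gains and a full color matrix on top of this.

```python
import numpy as np

def white_balance_gains(raw_rgb):
    """Per-channel gains that make a neutral (e.g. D65) patch come out
    with R = G = B, normalized so the green gain is 1.0.

    The channel with the lowest response -- typically red, because of
    its low QE -- receives the largest gain.
    """
    r, g, b = raw_rgb
    return np.array([g / r, 1.0, g / b])
```

For example, with assumed relative responses of (0.45, 1.0, 0.70) to a neutral patch, red ends up with a gain of roughly 2.2x and blue roughly 1.4x, and multiplying the raw responses by these gains yields equal R = G = B.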
Sorry, I went overboard with some of the details, but I know it will be of use to at least a few!