theSuede
Firstly, I think some of you are thinking about this from the wrong starting point. The output signal (the sensor, the image) has a fixed resolution, and I would guess that's where the confusion originates. Of course the output signal can't have higher resolution than what it physically has (the basis of discrete sampling...), and this might cause a slight logic hiccup when you think about MTF at "frequencies above Nyquist".
But you have to get things in the right order to see what is what. The MTF frequency is the TARGET's frequency as interpreted through the recording device's resolution - and that gives "an output result" regardless of the sensor resolution!
So if you measure (shoot, in this case...) a sine target with a frequency higher than what the recording medium (the sensor, in this case) can register, but the sensor STILL shows some kind of remaining fluctuations (contrast patterns) in the output signal, then you have output above Nyquist - above what the sensor ideally SHOULD be able to record.
The reason for this is that even when a pattern is smaller than a pixel, the value recorded in the pixel is still an integral. To return to the sine wave pattern, imagine that you have a SLIGHTLY higher target frequency than what the sensor can handle. Not much, maybe 10% higher. Ten half-cycles of signal then only cover nine pixel widths...
Then the first pixel will still record a signal that is mostly the first positive part of the sine - and then include 10% of the negative part. The pixel will still be white, only a tiny bit less white than if the signal were a perfect fit.
The second pixel will start measuring about 10% into the negative swing, get the rest of the negative swing and 20% of the next positive part. It will still mostly be black...!
Around pixel number five, you've split the signal polarities between two pixels so that one pixel gets half the negative and half the positive part (resulting in mid gray), and the next pixel gets half the positive and half the negative - ALSO resulting in mid gray. The contrast between those two is close to zero.
Around pixel 9-10 you've gone a full phase-difference cycle, so that you yet again have maximum output swing. Pixel 9 is close to white, 10 is close to black.
This will result in a pattern with a nine-pixel cycle. This pattern goes from almost maximum contrast between one pixel and the next (pixels 0, 9, 18, 27 and so on) to close to zero contrast (pixels 5, 14, 23, 32 and so on).
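The walkthrough above can be sketched numerically. This is a minimal simulation, not anything from the original post: it assumes ideal unit-width box pixels that simply integrate the incoming sine across their width, with ten half-cycles spanning nine pixel widths as described (the function name and the 19-pixel run are just illustration choices):

```python
import numpy as np

# Ten half-cycles of the sine span nine pixel widths, so the period is
# 1.8 pixel widths -- i.e. the signal frequency is 10/9 of Nyquist.
PERIOD = 1.8

def pixel_value(k, period=PERIOD):
    """Value recorded by pixel k: the integral of sin(2*pi*x/period)
    over the pixel's width [k, k+1]."""
    w = 2 * np.pi / period
    return (np.cos(w * k) - np.cos(w * (k + 1))) / w

vals = [pixel_value(k) for k in range(19)]
for k, v in enumerate(vals):
    bar = "#" * int(round(10 * (v + 0.6)))  # crude brightness strip
    print(f"pixel {k:2d}: {v:+.3f} {bar}")
```

Running it shows the nine-pixel beat described above: pixels 0 and 9 record nearly the full positive swing, pixel 1 nearly the full negative swing, while pixel 4 lands at essentially zero (mid gray).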
This is called aliasing, and it can be a very strongly visible effect in a monochrome sensor.
..................
I'm mobile on a new MBP, so I'm sorry I can't simplify the explanations with some graphics. But a clarification that might be necessary for everything to fall into place is that MTF is a model of how a sine signal is modulated by a system, regardless of the system resolution. From a purely mathematical PoV, the MTF is the integral of the image contrast over the integral of the original contrast, without any regard whatsoever to what resolution/frequency they are at.
Understanding that, you might also instinctively grasp that the resulting MTF contrast is the same as the average "interaction result" over a large sample area. So the resulting contrast in the mathematical model is the perfect average over all possible alignments of the pixels and the sine signal.
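One way to see this alignment-independence numerically (my own sketch, again assuming ideal unit-width box pixels): box-integrate a unit sine at a sub-Nyquist frequency, recover its modulation with a least-squares fit over many pixels, and note that the answer is the same for every starting phase - and equals the box aperture's theoretical MTF, the sinc function. `pixel_mtf` is just an illustrative name:

```python
import numpy as np

def pixel_mtf(f, phase=0.0, n=1000):
    """Modulation of a unit-amplitude sine of frequency f (cycles/pixel)
    after box-integration over unit pixels, recovered by a least-squares
    fit at the known frequency."""
    w = 2 * np.pi * f
    k = np.arange(n)
    # exact integral of sin(w*x + phase) over each pixel [k, k+1]
    s = (np.cos(w * k + phase) - np.cos(w * (k + 1) + phase)) / w
    # fit s_k = a*sin(w*k) + b*cos(w*k); the fitted amplitude is hypot(a, b)
    X = np.column_stack([np.sin(w * k), np.cos(w * k)])
    (a, b), *_ = np.linalg.lstsq(X, s, rcond=None)
    return float(np.hypot(a, b))

for phase in (0.0, 1.0, 2.5):
    print(phase, round(pixel_mtf(0.25, phase), 6), round(float(np.sinc(0.25)), 6))
```

For every phase the fitted modulation equals np.sinc(0.25) = sin(pi/4)/(pi/4), about 0.90: shifting the pixel grid against the signal changes which values the individual pixels record, but not the underlying modulation that the MTF describes.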