Re: Canon EF 24-70mm f/4L IS Resolution Tests!
Not really disagreeing with your points about images, Dan. It's only a 15% difference in resolution even at MTF50, and a dozen things other than laboratory resolution tests matter more to any image. Like you, I tend to love my inexpensive primes (the 40 f/2.8 lives on my camera, at least at the moment).
I got interested because I see circular logic applied in discussions of a number of lenses (not particularly this thread, but all over): saying X and Y have equal resolution, and "I'll fix the one with distortion in post." That's perfectly fine, and perfectly correct to do. But in that case the person shouldn't go on arguing that the resolution is equal (or better, or whatever), because the correction has changed it.
Of course, what I still have to do is check whether correcting 1% distortion has a lesser effect than correcting 4%. It may be that the change is similar no matter the percentage of distortion corrected, in which case we're back to square one.
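For what it's worth, that check can be roughed out numerically before making any prints. Here's a minimal Python/numpy sketch of the idea - the cubic radial distortion model, the stripe target, and the 1%/4% strengths are all my own assumptions for illustration, not anyone's measured lens profile. It applies just the resampling step of a distortion correction to a fine-striped target and compares how much fine-detail contrast survives:

```python
import numpy as np

def correct_distortion(img, k):
    """Resample img as if undoing radial distortion modeled as
    r_src = r_dst * (1 + k * r_dst**2), with r normalized to the half-width.
    (A deliberately simple model -- real 'lens corrections' use fitted profiles.)"""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((x - cx) / cx) ** 2 + ((y - cy) / cy) ** 2
    scale = 1.0 + k * r2
    xs, ys = (x - cx) * scale + cx, (y - cy) * scale + cy
    # bilinear interpolation with edge clamping
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx, fy = np.clip(xs - x0, 0.0, 1.0), np.clip(ys - y0, 0.0, 1.0)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

n = 512
stripes = np.tile(np.sin(2 * np.pi * np.arange(n) / 4.0), (n, 1))  # 4-px period

print(f"original contrast (std): {stripes.std():.3f}")
for k in (0.01, 0.04):
    out = correct_distortion(stripes, k)
    patch = out[64:128, 64:128]  # off-center patch, where the remap shifts most
    print(f"k={k}: patch contrast (std) {patch.std():.3f}")
```

How much contrast each strength costs depends on the model and the detail frequency, so the printed numbers are only a starting point - but it's a quick way to see whether 1% and 4% corrections really land in different places before going to the trouble of making prints.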
The cool thing about these issues is that we can each test their significance directly by making prints and comparing - and confirm or eliminate the significance of such things on actual photographs.
One of the first times I discovered this was after I got my first DSLR close to 10 years ago. Trying to climb that new learning curve (despite having shot film for many years) I started to read a lot of online stuff, including many, many gear reviews and tests and lots of discussions in photography forums.
The particular moment of enlightenment had to do with diffraction. (Others followed in areas like sharpening for prints and so forth.) I had read about how diffraction blur has a greater effect on photographs as you stop down (which is, without a doubt, true) and a lot of writing about using optimal apertures that sort of split the difference between better lens performance at somewhat smaller apertures and the onset of diffraction blur, in an effort to get maximum image quality. To keep it simple, the "takeaway" from a lot of the theorizing (which was, I'll stipulate, often based on real data) was more or less that one should shoot "at f/8" (or similar apertures) and steer clear of, say, f/16 if one did not want to compromise sharpness. So I dutifully steered clear of those smaller apertures with my new DSLR, not wanting to compromise my image quality.
The theory is indeed correct. Diffraction blur does have a greater negative effect on sharpness at f/16. However, when I got my first full frame body about 7 years ago, I got the idea of running some lenses through various apertures and seeing what this "deterioration" actually amounted to in photographs. While one cannot argue with the fact that diffraction blur has a greater effect on a shot at f/16 than a shot at f/8, one should be interested in the magnitude of that effect on actual photographs. It was by testing a few lenses this way and then looking closely at the resulting images that I discovered that the effect is basically insignificant even in very large photographs!
So, both truths can be compatible. It is true that diffraction blur diminishes sharpness as you stop down. It is also true that you won't be able to see this difference even in very large and well-made prints. (Well, OK, some of you won't buy my belief that the latter is true - but you can also run the tests yourself and use the visual results to determine how to best incorporate aperture decisions in your photography.)
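For anyone who wants the back-of-the-envelope numbers behind this: the standard approximation for the diameter of the diffraction (Airy) disk is 2.44 × wavelength × f-number. A tiny Python sketch - assuming green light at 550 nm, and a ballpark ~6 µm pixel pitch for a full-frame sensor of that era rather than any manufacturer's spec:

```python
# Airy disk first-minimum diameter: d ~= 2.44 * wavelength * f-number.
# Assumes green light (550 nm); the pixel-pitch comparison is a ballpark.
WAVELENGTH_UM = 0.550  # 550 nm, in micrometres

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for n in (4, 8, 11, 16):
    print(f"f/{n:<2}: Airy disk ~ {airy_diameter_um(n):4.1f} um")
# f/8 gives ~10.7 um and f/16 ~21.5 um -- both already several times a
# typical ~6 um full-frame pixel, which is one reason the step from f/8
# to f/16 looks smaller in a print than the theory-talk suggests.
```

The disk diameter doubles between f/8 and f/16, so the theory's direction is right; whether that doubling is visible in a print is exactly the kind of question only the print itself answers.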
To loop this back to the issue of corrections for distortion in post, here we also have two ideas that seem to be mutually exclusive, but which real world tests (and not just measurements resulting in numbers, as useful and interesting as those numbers can be) might prove to be compatible. Once again, I can say without any doubt that correcting distortion in post will diminish the quality of the image. But the magnitude of the degradation is rarely enough to be visible in a print, even a very large one. (Stipulation: there are limits. While typical adjustments, such as the "lens corrections" now offered in most post-processing software or corrections to align vertical and horizontal lines, are small and almost certain not to create a visible degradation, truly extreme adjustments that grossly warp the original image will create more serious degradation.)
My main point here is that it is important to look at the whole picture. (Literally and figuratively! ;-) I think there are roughly three phases:
1. Prediction, based on understanding of how this works, that an image that is adjusted in these ways will be degraded in ways that can be measured. ("Correction in post will degrade the image.")
2. Measurement of these changes that confirms that the prediction is correct. ("We can measure X amount of degradation when distortions are corrected in post.")
3. Assessing the real world impact of these changes in the image by producing real photographic output - typically prints, though screen-sized jpgs could be used as a lower-standard alternative. ("The print of the corrected image looks great." Or, "Aside from the repaired distortion, there are no visible differences in quality between the two images.")
Since we so often get ourselves locked up in semantic battles in these discussions, in a nutshell what I'm saying is that the theory correctly predicts that distortion adjustments in post will degrade the image, but that prints demonstrate that the degradation is insignificant in all but extreme cases.