#9 · Depth of field revelation
Jason OConnell wrote:
I may be wrong, but I believe sensor size plays a role in apparent depth of field. The larger the sensor, the shallower the depth of field.
Yes, that's wrong. What you are thinking of is DOF for equivalent framing. To get the same framing with a smaller sensor you have to move back from your subject, increasing the distance to the focal plane and thus increasing DOF. That has nothing to do with the sensor itself, though - it's simply that you have to change the camera position to get the same framing.
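To make the distance effect concrete, here's a small sketch using the standard thin-lens near/far-limit formulas. The focal length, aperture, CoC and distances are illustrative assumptions, not values from the post:

```python
# Sketch: DOF grows as you move back from the subject, all else equal.
# Near/far limits computed via the hyperfocal distance.
def dof_mm(f, N, c, s):
    """Total depth of field in mm.
    f: focal length (mm), N: f-number,
    c: circle of confusion (mm), s: subject distance (mm)."""
    H = f ** 2 / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)          # near limit of sharpness
    far = s * (H - f) / (H - s) if s < H else float("inf")  # far limit
    return far - near

# Same 50mm lens at f/2.8, same CoC; only the subject distance changes.
close = dof_mm(f=50, N=2.8, c=0.03, s=2000)       # subject at 2 m
backed_up = dof_mm(f=50, N=2.8, c=0.03, s=3000)   # subject at 3 m
print(round(close), round(backed_up))  # backing up more than doubles the DOF
```

Nothing about the sensor appears in the calculation - only the camera-to-subject distance changed.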
However, to confuse the issue, sensor size does matter, but in the other direction - a larger sensor means more DOF. This is because DOF is a messy concept that deals not with the projected image but with the final output. The term 'circle of confusion' (CoC) describes the largest blur spot in an image that still looks acceptably sharp. Different image formats (i.e. sensor sizes) are assigned different circles of confusion when the final image is output at the same size: a larger image format gets a larger circle of confusion and therefore more DOF.
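Under that convention, format enters the DOF calculation only through the assigned CoC. A quick sketch using the common approximation DOF ≈ 2·N·c·s²/f² (valid when the subject is well short of the hyperfocal distance; the CoC values 0.030 mm for 135 FF and 0.019 mm for APS-C are conventional assumptions):

```python
# Sketch: identical lens settings and subject distance, but each format's
# conventional CoC scales the computed DOF - the larger format's larger
# CoC yields more DOF.
def dof_approx_mm(f, N, c, s):
    """Approximate total DOF (mm); valid for s much less than hyperfocal."""
    return 2 * N * c * s ** 2 / f ** 2

f, N, s = 50, 2.8, 2000                       # 50mm, f/2.8, subject at 2 m
ff = dof_approx_mm(f, N, c=0.030, s=s)        # 135 FF convention
apsc = dof_approx_mm(f, N, c=0.019, s=s)      # APS-C convention
print(round(ff, 1), round(apsc, 1))  # FF shows more DOF at identical settings
```

Since DOF is linear in c here, the ratio of the two results is exactly the ratio of the assigned CoCs.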
There's a decent article that explains the subject here:
Edit: I'll add an old rant of mine on the subject of why I don't like the DOF concept:
Although I do understand why looking at the CoC from the point of view of a final image is practical, it's also complete bullshit as far as the optical theory goes. No, I'm not saying that the DOF equations that use different CoCs are wrong, only that such use confuses the issue (no pun intended).
The circle of confusion is the diameter criterion for maximum permissible unsharpness. I find it thoroughly counterproductive to use it when discussing an optical system; it is only relevant when you have a final image in mind - not what is actually being projected onto the sensor, film or whatever.
Let me explain. Suppose we have two sensors, A and B, that are identical in everything except size, with A larger than B. When you use a lens (whose image circle covers both sizes) to project an image onto the sensors, the projected image on B will be identical to the center of the projected image on A, provided the focusing distance, subject and lens aperture are identical. Or, put another way: if we take a shot with each sensor and then inspect pixel by pixel, we will find that a cropped version of image A is identical to image B. Pixel by pixel.
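The pixel-by-pixel identity is easy to sketch numerically: treat the lens's projected image as a function over the image plane, sample it on two sensors with the same pixel pitch but different areas, and compare the smaller capture to a center crop of the larger one. The pattern function and sensor dimensions are arbitrary stand-ins:

```python
import numpy as np

def projected_image(ys, xs):
    """Stand-in for the optical image the lens projects: intensity as a
    function of position (mm) in the image plane."""
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    return np.sin(0.5 * xx) * np.cos(0.3 * yy) + 0.1 * xx * yy

PITCH = 0.05  # identical pixel pitch (mm) on both sensors

def capture(width_mm, height_mm):
    """Sample the projected image on a sensor centered on the optical axis."""
    xs = np.arange(-width_mm / 2 + PITCH / 2, width_mm / 2, PITCH)
    ys = np.arange(-height_mm / 2 + PITCH / 2, height_mm / 2, PITCH)
    return projected_image(ys, xs)

image_a = capture(36.0, 24.0)   # larger sensor A (135 FF dimensions)
image_b = capture(24.0, 16.0)   # smaller sensor B (~APS-C dimensions)

# Center-cropping A to B's pixel dimensions reproduces B's capture.
dy = (image_a.shape[0] - image_b.shape[0]) // 2
dx = (image_a.shape[1] - image_b.shape[1]) // 2
crop = image_a[dy:dy + image_b.shape[0], dx:dx + image_b.shape[1]]
print(np.allclose(crop, image_b))  # True: same image, pixel by pixel
```

`allclose` rather than exact equality only because the two coordinate grids are built by separate floating-point arithmetic; optically the samples are the same points.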
Looking at these images (at 100%) we'll determine some maximum tolerable unsharpness and call it the circle of confusion. It will, however, be identical in both cases. There will be no difference in the CoC regardless of whether your sensor is an ultra large format one or a tiny 2/3" chip, as long as the pixel pitch is the same. This is of course because the smaller image is simply a crop of the larger one and has not changed in any other way.
Now, when looking at things that way - i.e. when not considering the various complex issues that arise when you produce a final image (resizing, sharpening, viewing distance, eyesight, etc.), which are far too complex to predict and compare - using the same CoC regardless of format is the correct way to go.
What's the argument for using the same CoC? The same reason we look at 100% crops when we want to see how sharp a lens is: resizing, sharpening and other operations are bound to affect the final result significantly. We don't say that some lens is sharp on one format but not on another. Instead we have MTF charts that are a function of spatial resolution - i.e. MTF is expressed not as a function of line pairs but of line pairs per mm.
So I'm not at all sold on the commonly accepted methodology when it comes to DOF. It seems to me much more reasonable to separate the optical properties of the system from the production (and viewing!) of the final image. A crop sensor does precisely what the name implies: it crops the image. A 135 FF sensor is a crop sensor relative to a medium format sensor, and the latter is the same relative to a large format sensor. How you crop the image will in no way change the parts you are keeping. Opening an image shot with a 135 FF sensor in Photoshop and cropping it to the same size as if it had been taken with an APS-C sensor will give you results identical to an image taken with an actual APS-C sensor. So why confuse the issue by talking about a later post-processed (i.e. resized) image? You certainly wouldn't say that you are reducing the sharpness of an image when cropping it, so why would you say that you are reducing the DOF?