Moderated by: Fred Miranda

FM Forums | Canon Forum
Archive 2011 · A stupid question

  
 
Monito
p.2 #1 · A stupid question


JSXX wrote:
I have always heard a FF sensor has a smaller dof than a crop sensor, but the calculator shows 1.84 ft for the 7D and 2.94 ft for the 1Ds?


You heard correctly. dcains's calculator comparison uses the same focal length on both cameras, so the cameras would not be making the same picture.

Assuming you maintain the same distance, if you change only the sensor size (in other words, crop it), you have to enlarge the image more to make the same print size (say 8 x 12). The blur disk is therefore enlarged more, and the crop sensor appears to have less depth of field. But remember, it is not the same picture.

However, if you additionally shorten the focal length by the crop factor so that you can make the same picture, the blur disk changes in the other direction. At the same aperture, the shorter focal length produces a smaller blur disk, and that more than counteracts the extra enlargement.

Thus when making the same picture (same distance for the same perspective, and same framing for the same composition), the smaller sensor has greater depth of field.
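To put numbers on this, here is a quick sketch using the standard hyperfocal-distance formulas. The focal lengths, f-number, subject distance, and CoC values below are illustrative choices, not figures from the thread:

```python
def dof(f_mm, N, s_mm, c_mm):
    """Total depth of field for focal length f, f-number N,
    subject distance s, and circle-of-confusion limit c (all mm)."""
    H = f_mm**2 / (N * c_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)     # near limit
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return far - near

# "Same picture": the crop body uses a 1.6x shorter lens and is judged
# against a 1.6x smaller CoC (more enlargement to the same print size).
ff   = dof(f_mm=80, N=2.8, s_mm=3000, c_mm=0.030)   # full frame
crop = dof(f_mm=50, N=2.8, s_mm=3000, c_mm=0.019)   # APS-C, same framing
print(ff, crop)  # the crop camera's total DOF comes out larger
```

The shorter focal length wins: even with the stricter CoC, the crop camera shows more depth of field, exactly as argued above.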



Mar 08, 2011 at 04:12 PM
dcains
p.2 #2 · A stupid question


That's true if the two scenarios are equalized properly. For example, if the camera distance is the same in both cases, the lenses must be different focal lengths to make the field of view the same. So, the crop camera would require a shorter (wider) lens, because its smaller sensor narrows the field of view by 1.6x at any given focal length. Or, you could use the same lens and move the 1.6x body farther away from the target. When this is done, so both cases have the same field of view, the DOF will be shallower with the larger sensor.


Mar 08, 2011 at 04:13 PM
JSXX
p.2 #3 · A stupid question


Makes sense, didn't think about the image size. Thanks for the explanation guys!


Mar 08, 2011 at 04:23 PM
wickerprints
p.2 #4 · A stupid question


dcains, this is what happens when one relies on a web calculator instead of understanding the DOF theory and equations themselves--one arrives at a wrong conclusion.

The error in the screenshots you have posted is that the circle of confusion criterion is not the same for the 7D and the 1Ds. The former is assumed to be 0.019mm, whereas the latter is assumed to be 0.03mm. However, the latter COC does not take into account what the OP asked, which is whether, if you crop the 1Ds image to retain only the central APS-C portion of the frame, the two images are equivalent.

Under the scenario I stipulated in my previous post in this thread, the cropped full frame sensor and the APS-C sensor will show the exact same DOF. In fact, if the pixel counts are the same as I described in that post, the images would be virtually identical.

The enlargement ratio (i.e., size of the displayed image divided by size of the sensor, also known as print size or print ratio) absolutely does affect DOF. Because APS-C sensors are physically smaller than full frame sensors, if the operator frames the scene identically in both viewfinders (e.g. by choosing a focal length 1.6x longer for the full frame sensor at the same subject distance), then the maximum acceptable COC is smaller for the APS-C sensor because the enlargement ratio is larger. For instance, if we desired to print both resultant images at 8" x 12", then obviously the APS-C image must be enlarged to a greater extent than the full-frame image.

But if, instead of using a 1.6x longer focal length on the full frame sensor, you keep the same focal length on both bodies, crop the central portion out of the full frame image, and enlarge the result alongside the APS-C image--which is the OP's scenario--then the enlargement ratio is identical and thus the COC criterion must be the same.

Therefore, the flaw with using the calculator is that it assumes you are not cropping the full frame image.
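The point about enlargement ratios can be checked with simple arithmetic. A sketch with assumed numbers (a 30 cm print width; 22.3 mm is the nominal Canon APS-C sensor width):

```python
print_width_mm = 300.0          # assumed 30 cm print, illustrative
aps_c_width_mm = 22.3           # nominal Canon APS-C sensor width
ff_width_mm    = 36.0           # full frame sensor width

# Enlarging the whole FF frame vs. the whole APS-C frame: the APS-C
# image needs more enlargement, hence a smaller max CoC.
enlarge_ff   = print_width_mm / ff_width_mm      # ~8.3x
enlarge_crop = print_width_mm / aps_c_width_mm   # ~13.5x

# But cropping the FF image to its central APS-C rectangle first means
# the region being enlarged is 22.3 mm wide in BOTH cases:
enlarge_ff_cropped = print_width_mm / aps_c_width_mm
assert enlarge_ff_cropped == enlarge_crop   # same ratio -> same CoC -> same DOF
```

Same enlargement ratio, same CoC criterion, same depth of field, which is why the calculator's fixed per-camera CoC gives the wrong answer for the cropping scenario.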



Mar 08, 2011 at 04:37 PM
dcains
p.2 #5 · A stupid question


That makes good sense, but aren't you describing a hypothetical scenario when you explain cropping the FF sensor before the image is captured? That's very different from cropping a FF image afterwards and expecting the DOF to change, isn't it?


Mar 08, 2011 at 05:10 PM
garyvot
p.2 #6 · A stupid question


Pull up a chair everyone!


Mar 08, 2011 at 05:38 PM
wickerprints
p.2 #7 · A stupid question


dcains wrote:
That makes good sense, but aren't you describing a hypothetical scenario when you explain cropping the FF sensor before the image is captured? That's very different from cropping a FF image afterwards and expecting the DOF to change, isn't it?


What do you mean by drawing a distinction between cropping before versus after image capture? That is, describe how these two are different in terms of the process of producing an exposure and displaying the resulting image.

My scenario is hypothetical but it is the one that corresponds to the OP's question. The comparison is simple: if you take two images at the same exposure, subject distance, and focal length, the full frame image cropped to the central APS-C boundary is identical to the APS-C image, because the lens doesn't "know" what is put behind it. As long as the image plane is where the lens "expects" it to be, it will project the same image in both situations.

That said, the COC criterion is inextricably tied to the enlargement ratio and the pixel density of the sensor. For example, ignore the APS-C sensor entirely and let's just consider how pixel density and enlargement ratio affect a single, full frame sensor. If the sensor is of poor resolution, say 360 x 240 px (so a density of 100 px/mm^2), then obviously your circle of confusion is very large, on the order of 0.14 mm--this is because your sensor is simply unable to see fine details, no matter how large you attempt to print it. But the max acceptable COC could be even larger, if for some reason you wish to print such images at tiny sizes, like a 15 x 10 mm print (assuming normal viewing distances). The high spatial frequencies are just not seen.

Now, if you have a full frame sensor with 36000 x 24000 px resolution (density = 1,000,000 px/mm^2), then the sensor is unlikely to be the limiting sharpness factor, so your max COC can be very, very small, provided you want to make very large prints. If the prints are modest, say 30 x 20 cm, then your max COC should be around 0.025 mm (again assuming normal viewing distances), because your prints are not taking full advantage of the sensor's resolving capacity. But if you take the same image and enlarge it to, say, 1.5 x 1 m, then suddenly the high spatial frequencies captured by the sensor become visible in the print and your criterion for what is "acceptably sharp" will change accordingly--again, assuming that you are viewing the print from the same distance.

The whole reason why the max COC abstraction exists in the DOF model is because it serves as a proxy variable for three subjective conditions: (1) the resolution of the recording medium; (2) the enlargement ratio; and (3) the print viewing distance. It is formulated and chosen out of convenience, not technical accuracy in understanding what factors play a role in determining DOF. Thus there is a great deal of confusion and inappropriate application of the DOF model because one might choose a max COC that does not properly reflect those subjective conditions.
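The three proxy conditions can be rolled into a back-of-the-envelope formula. This is a sketch assuming the common rule of thumb that the eye resolves roughly 0.2 mm on a print viewed from 25 cm; the function name and numbers are illustrative, not from the post:

```python
def max_coc_mm(sensor_width_mm, print_width_mm, viewing_dist_mm):
    """Max acceptable CoC on the sensor, from the three proxy conditions:
    medium size, enlargement ratio, and viewing distance."""
    # Blur the eye can tolerate on the print scales with viewing distance.
    eye_blur_on_print = 0.2 * (viewing_dist_mm / 250.0)
    enlargement = print_width_mm / sensor_width_mm
    return eye_blur_on_print / enlargement

# Full frame, 30 cm wide print, viewed from 25 cm:
print(round(max_coc_mm(36.0, 300.0, 250.0), 3))  # ~0.024 mm
```

The result lands near the conventional 0.03 mm full-frame figure, which shows where that "standard" number quietly assumes a modest print at normal viewing distance. Change the print size or viewing distance and the criterion changes with it.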



Mar 08, 2011 at 06:11 PM
BrianO
p.2 #8 · A stupid question


Everyone participating in this thread should read wickerprints' above post very carefully, and think about it until they understand it.

He clearly understands the science involved.



Mar 08, 2011 at 06:22 PM
Ian.Dobinson
p.2 #9 · A stupid question


garyvot wrote:
Pull up a chair everyone!


Sh*t, I've run out of popcorn and beer, and the bloody shops are shut.



Mar 08, 2011 at 06:23 PM
Pixel Perfect
p.2 #10 · A stupid question


dcains wrote:
That makes good sense, but aren't you describing a hypothetical scenario when you explain cropping the FF sensor before the image is captured? That's very different from cropping a FF image afterwards and expecting the DOF to change, isn't it?


Yes. Let's consider what the OP asked. Same lens, same subject distance, a 1.6x crop camera and a FF camera.

In this scenario the DoF of the crop shot will appear smaller, since the magnification is larger (the subject will appear larger); no need to look up DoF calculators. If you then crop the FF shot to the same FOV as the crop shot, its DoF will also appear smaller, and the two shots will now look identical.

In the real world, if you printed both shots as taken by each camera, you would need to make the FF print 1.6x larger (talking edge size here) for the subject to appear the same size as in the crop camera's print. And what do we know about printing larger? Apparent DoF gets smaller (this is where the CoC comes in). So if you then crop the larger print down to the size of the 1.6x print, the two will look identical (ignoring other issues like noise, etc.).



Mar 08, 2011 at 09:00 PM
shoenberg3
p.2 #11 · A stupid question


NDP_2010 wrote:
I think the light gathering should be equal (if you do not account for light dropping off at the edges of the glass) If you have a certain intensity hitting 1cm square, you have double the intensity hitting 2cm square, so overall the same intensity / area.

A lot of arguing in this thread, but only this post actually sort of touched on my original question.



Mar 08, 2011 at 11:16 PM
shoenberg3
p.2 #12 · A stupid question


To clarify, my motive for asking this question is because I am worried that a 14mm prime that I will be getting for my 5D will be TOO wide. I was wondering if I crop the image to around 1.6 crop factor, I would get an identical image to what a relatively low mp crop camera (such as 20D) would be getting with the lens. If so, I could sometimes use it as if I am using a 22mm lens, which is a more reasonable wide angle. Of course, I would always have the option to go really wide.


Mar 08, 2011 at 11:57 PM
wickerprints
p.2 #13 · A stupid question


shoenberg3 wrote:
To clarify, my motive for asking this question is because I am worried that a 14mm prime that I will be getting for my 5D will be TOO wide. I was wondering if I crop the image to around 1.6 crop factor, I would get an identical image to what a relatively low mp crop camera (such as 20D) would be getting with the lens. If so, I could sometimes use it as if I am using a 22mm lens, which is a more reasonable wide angle. Of course, I would always have the option to go really wide.


In terms of framing and perspective, cropping an image is identical to selecting a longer focal length, assuming that the subject distance is fixed, and ignoring issues of pixel density. This is what I discussed in my first post in this thread.

A varifocal (zoom) lens demonstrates this fact quite readily. While remaining in the same position, zooming in or out shows that the image simply experiences a change in magnification, not perspective.

Putting the 14mm prime in front of a 5D sensor, then cropping out the central APS-C rectangle, gives you an image whose framing and perspective are equivalent to an image that was taken at 22.4mm on a 20D. The number of pixels in the image will not be the same, however, which may affect your ability to resolve fine detail.
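The framing equivalence above is just multiplication by the crop factor (1.6x is Canon's nominal APS-C factor):

```python
focal_ff = 14.0        # prime mounted on the 5D
crop_factor = 1.6      # Canon APS-C crop factor
equivalent = focal_ff * crop_factor
# Cropping the 5D frame to its central APS-C rectangle frames the scene
# like a 22.4mm lens would on the 20D, at the same subject distance.
print(round(equivalent, 1))  # 22.4
```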



Mar 09, 2011 at 12:30 AM
shoenberg3
p.2 #14 · A stupid question


You are saying that, aside from the minor differences in pixel density, everything else will be the same. That's what I originally inferred in the opening post.
Are there no other subtler factors in play?


Thanks for help



Mar 09, 2011 at 12:52 AM
wickerprints
p.2 #15 · A stupid question


The "minor differences" are not really that minor. The EOS 5D is 4368 x 2912 px. Cropped to APS-C, this results in 2730 x 1820 px images, which is approximately 5 megapixels. The EOS 20D has a resolution of 3504 x 2336 px, which is 8.2 megapixels. The difference is quite significant in terms of spatial resolution, noise performance, and dynamic range.
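The cropped pixel count is easy to verify from the crop factor:

```python
ff_px = (4368, 2912)        # EOS 5D native resolution
crop_factor = 1.6
# Cropping to the central APS-C rectangle shrinks each edge by 1.6x:
cropped = (round(ff_px[0] / crop_factor), round(ff_px[1] / crop_factor))
megapixels = cropped[0] * cropped[1] / 1e6
print(cropped, megapixels)  # (2730, 1820), ~5 MP
```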

But again, in terms of framing and the image that the lens projects onto the sensor, there is no difference. What kind of "subtler factors" do you think might come into play, and for what reason(s)?



Mar 09, 2011 at 01:09 AM
shoenberg3
p.2 #16 · A stupid question


Why would noise performance and dynamic range be affected by differences in pixel count?


Mar 09, 2011 at 01:17 AM
wickerprints
p.2 #17 · A stupid question


shoenberg3 wrote:
Why would noise performance and dynamic range be affected by differences in pixel count?


Noise and dynamic range are affected by pixel density, not pixel count. The more pixels per unit area, the greater the variance in per pixel shot noise at a given exposure. There are also additional losses because in practice, pixels are not perfect collectors of incident light--some light is lost as a result of striking a boundary between adjacent pixels.

Dynamic range is affected because a smaller pixel cannot collect as much light before it becomes saturated. Furthermore, because the measurement of DR is linked to the "noise floor"--i.e., the darkest level of shadow detail that is not obscured by noise--the DR is also negatively correlated with the noise level.

Of course, these effects may be mitigated by improvements in sensor fabrication methods and technology; for example, microlenses improve the focusing of incident light so that less is wasted. But as a general rule, the greater the pixel density, the more per-pixel noise and the less DR one will observe.
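The shot-noise part of this scales in a simple way. A sketch with assumed numbers (the photon counts are hypothetical, not sensor specs): photon arrivals are Poisson, so per-pixel SNR is the square root of the photons collected, and photons collected scale with pixel area at a fixed exposure.

```python
import math

photons_big = 40000     # hypothetical photons collected by the larger pixel
area_ratio = 0.5        # smaller pixel with half the area, same exposure
photons_small = photons_big * area_ratio

# Poisson shot noise: SNR = sqrt(signal)
snr_big = math.sqrt(photons_big)
snr_small = math.sqrt(photons_small)
print(snr_big / snr_small)  # sqrt(2): ~1.41x worse per-pixel SNR
```

Halving the pixel area costs a factor of sqrt(2) in per-pixel SNR before any of the fabrication-level losses mentioned above are even counted.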



Mar 09, 2011 at 01:27 AM