silly inverse square law question.
/forum/topic/1162172/1


curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

Guari wrote:
curious80 wrote:
When you move away from the subject, the light reaching the camera from the subject of course goes down as per the inverse square law. However, at the same time the subject is becoming smaller in your frame, i.e. taking up fewer pixels. Thus the amount of light falling per pixel remains the same. So the exposure remains the same.


Everyday one learns something new

I wonder how it was even possible to make light meters prior to the introduction of the concept of "light falling per pixel".

With all due respect, your reasoning is flawed. It does not respect the laws of physics. And physics does not change with the introduction of new technologies.


Sure, physics doesn't change. In physics this is called "light falling per unit area" or "light intensity". If you don't like a pixel-based description, then let me change it to the following:

"When you move away from the subject, the light reaching the camera from the subject of course goes down as per the inverse square law. However, at the same time the subject is becoming smaller in your frame, i.e. taking up less area on the sensor. Thus the amount of light falling per unit area remains the same. So the exposure remains the same." - same thing as before.



tedwca
Registered: Dec 31, 2002
Total Posts: 306
Country: United States

curious80 wrote:
PeterBerressem wrote:
The parts of the reflected light spreading out in different directions don't matter at all. You / the camera / sensor only receive the rays which are directed towards you, and this doesn't change with distance.


The rays that come in your "direction" spread out just like the rays that go from the light source to the subject. As you move your camera away, the number of rays that fall on the lens surface decreases. This is just like what happens when you move a subject away from the light source. Light is light; it travels and acts the same way in both cases.

However, for the sake of argument, let's assume that what you are saying is true. Assume we are taking a picture of a white square 10 feet away, printed on a black wall. And let's say that from 10 feet the white square covers the center half of the frame/sensor, with all black around it. Now let's move the camera back to 20 feet. You would agree that the square would now cover only 1/4 of the area it did before. You are claiming that the amount of light reaching the lens from the subject, i.e. the white square, has not changed at all. However, that amount of light is now being concentrated on a smaller portion of the sensor. So if what you are claiming were true, the white square should become brighter, because we would have the same total amount of light covering a smaller area. In reality that does not happen, because the total light hitting the lens is now less, and that is why the brightness and exposure remain the same as before.

If you are still not convinced, then I put my hands up in the air and give up. What I have described is simple, accurate physics of what is happening.


So, based on your theory, what happens when you step back, but zoom in on the subject so it takes up the same space in the frame?



curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

tedwca wrote:
curious80 wrote:
PeterBerressem wrote:
The parts of the reflected light spreading out in different directions don't matter at all. You / the camera / sensor only receive the rays which are directed towards you, and this doesn't change with distance.


The rays that come in your "direction" spread out just like the rays that go from the light source to the subject. As you move your camera away, the number of rays that fall on the lens surface decreases. This is just like what happens when you move a subject away from the light source. Light is light; it travels and acts the same way in both cases.

However, for the sake of argument, let's assume that what you are saying is true. Assume we are taking a picture of a white square 10 feet away, printed on a black wall. And let's say that from 10 feet the white square covers the center half of the frame/sensor, with all black around it. Now let's move the camera back to 20 feet. You would agree that the square would now cover only 1/4 of the area it did before. You are claiming that the amount of light reaching the lens from the subject, i.e. the white square, has not changed at all. However, that amount of light is now being concentrated on a smaller portion of the sensor. So if what you are claiming were true, the white square should become brighter, because we would have the same total amount of light covering a smaller area. In reality that does not happen, because the total light hitting the lens is now less, and that is why the brightness and exposure remain the same as before.

If you are still not convinced, then I put my hands up in the air and give up. What I have described is simple, accurate physics of what is happening.


So, based on your theory, what happens when you step back, but zoom in on the subject so it takes up the same space in the frame?


First, it's not a theory; these are simple physics facts.

As for zooming, that is easy. I am sure you are aware that a 100mm f/2.8 lens has twice as large an aperture diameter as a 50mm f/2.8 lens, so the aperture area of a 100mm f/2.8 lens is 4 times that of a 50mm f/2.8 lens. If you step back and then zoom in, you capture the same amount of total light as before. This is because although you have moved back, you have now increased your light-capturing area by a factor of 4, and thus grab as many rays in total as before. Incidentally, this is why lenses are marked with relative aperture size (i.e. f-stops) rather than actual aperture size - longer focal lengths need larger physical apertures to get the same subject exposure as shorter focal lengths.
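That diameter/area arithmetic is easy to check: the entrance-pupil diameter of a lens is its focal length divided by the f-number.

```python
import math

def aperture_area_mm2(focal_length_mm, f_number):
    diameter = focal_length_mm / f_number  # entrance-pupil diameter in mm
    return math.pi * (diameter / 2) ** 2   # aperture area in mm^2

a50 = aperture_area_mm2(50, 2.8)    # ~250 mm^2 at 50mm f/2.8
a100 = aperture_area_mm2(100, 2.8)  # ~1002 mm^2 at 100mm f/2.8
print(a100 / a50)                   # 4.0 -- four times the light-gathering area
```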



RustyBug
Registered: Feb 02, 2009
Total Posts: 12541
Country: United States

curious80 wrote:
because the total light hitting the lens is now less and that is why the brightness and exposure remains identical






curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

RustyBug wrote:
curious80 wrote: because the total light hitting the lens is now less and that is why the brightness and exposure remains identical





Man, you have to read the whole thing. The total light reaching the lens from that subject is less, but that light is now spread over a smaller section of the sensor. So the light intensity on the sensor remains the same, and thus the exposure remains the same. There - I have now said the same thing around 5 times in 5 different ways. Please refer to my white square on the black wall example above, and tell me what problem you find in that description.



RustyBug
Registered: Feb 02, 2009
Total Posts: 12541
Country: United States

curious80 wrote:
Man you have to read the whole thing.


Trust me ... I did, several times.

As to the white square / black wall ... I'm still trying to figure out how both you and Neil Armstrong can take a picture of the moon in accordance with your theory regarding ISL @ camera to subject distance.

Let's see, Neil is shooting from 5 feet away ... we are shooting from (238,900 x 5,280) 1,261,392,000 feet away.




curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

RustyBug wrote:
curious80 wrote:
Man you have to read the whole thing.


Trust me ... I did, several times.

As to the white square / black wall ... I'm still trying to figure out how both you and Neil Armstrong can take a picture of the moon in accordance with your theory regarding ISL @ camera to subject distance.

Let's see, Neil is shooting from 5 feet away ... we are shooting from (238,900 x 5,280) 1,261,392,000 feet away.


In this case we also have to accommodate the fact that Neil Armstrong is no longer capturing the whole subject but just a portion of it. If you understand what I said in the white square on the black wall example, then I can extend it to accommodate this case.



RustyBug
Registered: Feb 02, 2009
Total Posts: 12541
Country: United States

By my calculations the ISL falloff variance between Neil @ 5 ft and us @ earth would be 6.23 x 10^-19.

Or conversely, that it would be 1.6 x 10^18 brighter @ Neil's camera than ours.

I think you're kinda diggin' your own grave here.



curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

RustyBug wrote:
By my calculations 2^30.25 = 1,276,901,417 ... suggesting that the illumination falling on Neil's camera would be 30 stops brighter than that which falls on ours. Even allowing for your concentration theory to apply, I'm pretty sure that Hasselblad didn't send up a camera / lens combination that could accommodate a 30 stop variance from what we observe / use here.


Nope. Unlike us, Neil Armstrong is getting the light from a very small portion of the moon. Think of it this way: when we are looking at the moon / capturing the moon, the portion of the moon that Neil Armstrong is capturing will be a tiny speck in our picture. The light reaching us from that tiny portion of the moon would indeed be just 1/1,276,901,417th of what Neil Armstrong gets (or whatever the factor comes out to be). However, we are getting light from a much larger portion of the moon, and that's why we still get the same amount of light per unit area on the sensor as Neil Armstrong gets.


RustyBug wrote:
....
I think you're kinda diggin' your own grave here.



Haha, not really. Physics only defines one way for light to travel, and that's the only one we can use to understand the phenomenon here.



Guari
Registered: May 16, 2012
Total Posts: 1249
Country: United Kingdom

Curious, you are mixing things up.

Light emitted by a constant-output point light source suffers from spherical divergence (the inverse square law), due to the fact that a net amount of photons is scattered over a spheroid of increasing area as the distance from the source increases. In photographic terms, this is incident light.

To a viewer, once an object reflects light, that light is effectively collimated relative to the spatial position of the viewer. That means that if, from his position, the viewer is able to register a number of photons, those photons will always be the same, regardless of how close or far away he is from the subject.

Collimated light does not register an inverse square law decay, because you are always looking at the same amount of light, no matter how far or near you are from it. 10 photons will always be 10 photons, no matter how far they have to travel. This is the reason why exposure never changes.

This might seem counterintuitive to photographers, but that is because as we step away, more "background" is allowed into the frame for a constant focal length. That background may have a higher or lower brightness, giving the perception of changing illumination of the subject. But as long as you are measuring your subject and not the background, the intensity does not change, i.e. your exposure remains constant for the brightness of your desired subject.

A way of thinking of collimated light would be a laser, or a huge wall with a constant luminance. It does not matter how far away from or near to the wall you get; the amount of light it emits is constant. Photographic subjects are not point light sources, even though we do use point light sources to illuminate our subjects.



RustyBug
Registered: Feb 02, 2009
Total Posts: 12541
Country: United States

Guari wrote:
Curious, you are mixing things up.

Light emitted by a constant-output point light source suffers from spherical divergence (the inverse square law), due to the fact that a net amount of photons is scattered over a spheroid of increasing area as the distance from the source increases. In photographic terms, this is incident light.

To a viewer, once an object reflects light, that light is effectively collimated relative to the spatial position of the viewer. That means that if, from his position, the viewer is able to register a number of photons, those photons will always be the same, regardless of how close or far away he is from the subject.

Collimated light does not register an inverse square law decay, because you are always looking at the same amount of light, no matter how far or near you are from it. 10 photons will always be 10 photons, no matter how far they have to travel. This is the reason why exposure never changes.

This might seem counterintuitive to photographers, but that is because as we step away, more "background" is allowed into the frame for a constant focal length. That background may have a higher or lower brightness, giving the perception of changing illumination of the subject. But as long as you are measuring your subject and not the background, the intensity does not change, i.e. your exposure remains constant for the brightness of your desired subject.

A way of thinking of collimated light would be a laser, or a huge wall with a constant luminance. It does not matter how far away from or near to the wall you get; the amount of light it emits is constant. Photographic subjects are not point light sources, even though we do use point light sources to illuminate our subjects.



BINGO !!!

Thank you.



curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

Let's do one last thought experiment, and then I am out of here. I really must be doing something more productive with my time than engaging in an endless discussion on exposure.

Suppose for convenience that the moon is a big flat disc and Neil Armstrong is sitting on it, looking down and taking a picture of it. Let's say his picture covers a 5 ft x 5 ft portion of the moon's surface. Far, far away on earth we have our camera ready, about to take a picture of the moon. Unfortunately, space mice come in and eat all of the moon around that 5x5 ft portion before we can take the picture. Now we are only left with that patch to take a picture of. First of all, we wouldn't even be able to see the patch: by the time the light reaches us it would be so dim that we would not be able to register it. Secondly, the thing would be so small in terms of spatial resolution that neither our sensor nor the eye could capture it. But that's all that's left of the moon, and we are desperate to take a picture of it, so we work very, very hard and come up with a sensor with high enough resolution and high enough sensitivity to capture that portion. Then we will see that the amount of light reaching us from that portion is only 1 part in 1.6 x 10^18 (or whatever the factor comes out to be) of the amount of light that Neil Armstrong's camera was getting.
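The factor itself can be computed directly from the distances quoted earlier (treating the round figures in this thread as approximations):

```python
earth_to_moon_ft = 238_900 * 5_280  # miles x ft/mile = 1,261,392,000 ft
neil_to_patch_ft = 5.0

# Flux from a fixed 5x5 ft patch falls with the square of the distance to it.
falloff = (neil_to_patch_ft / earth_to_moon_ft) ** 2
print(falloff)      # ~1.6e-17 -- our flux from the patch, relative to Neil's
print(1 / falloff)  # ~6.4e16  -- how many times more of it Neil's camera got
```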



curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

Guari wrote:
Curious, you are mixing things up.

Light emitted by a constant-output point light source suffers from spherical divergence (the inverse square law), due to the fact that a net amount of photons is scattered over a spheroid of increasing area as the distance from the source increases. In photographic terms, this is incident light.

To a viewer, once an object reflects light, that light is effectively collimated relative to the spatial position of the viewer. That means that if, from his position, the viewer is able to register a number of photons, those photons will always be the same, regardless of how close or far away he is from the subject.

Collimated light does not register an inverse square law decay, because you are always looking at the same amount of light, no matter how far or near you are from it. 10 photons will always be 10 photons, no matter how far they have to travel. This is the reason why exposure never changes.

This might seem counterintuitive to photographers, but that is because as we step away, more "background" is allowed into the frame for a constant focal length. That background may have a higher or lower brightness, giving the perception of changing illumination of the subject. But as long as you are measuring your subject and not the background, the intensity does not change, i.e. your exposure remains constant for the brightness of your desired subject.

A way of thinking of collimated light would be a laser, or a huge wall with a constant luminance. It does not matter how far away from or near to the wall you get; the amount of light it emits is constant. Photographic subjects are not point light sources, even though we do use point light sources to illuminate our subjects.




I totally agree that collimated light does not register an inverse square law decay. However, the light going from the subject to the camera is not collimated light. It spreads just like light going from any light source to a subject.

Again, let's take the white square on black wall example. Let's assume what you are saying is true, and say we have 1000 photons reaching our lens from the white square. Now step back to twice the distance. You will agree that the white square now takes up only 1/4 of the area on our sensor, but from what you are saying, the sensor is still getting 1000 photons. So that means we now have 1000 photons striking a much smaller area than before. If that were indeed true, the image of the square would become brighter, which does not happen. So could you please explain how your assumption could still be correct?
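Putting numbers on the two competing claims (the photon counts are illustrative only):

```python
# Image brightness = photons reaching the sensor / area of the subject's image.

def brightness(photons_at_lens, image_area_units):
    return photons_at_lens / image_area_units

# Inverse square law: at 2x the distance, 1/4 the photons land on 1/4 the area.
print(brightness(1000, 4.0))  # 250.0 -- at the original distance
print(brightness(250, 1.0))   # 250.0 -- at 2x the distance: same brightness

# The "collimated" claim: the same 1000 photons land on 1/4 the area.
print(brightness(1000, 1.0))  # 1000.0 -- the square would render 4x brighter
```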



curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

RustyBug wrote:

BINGO !!!

Thank you.


Not really; it was a nice try, but this is not it.



curious80
Registered: Jun 18, 2010
Total Posts: 1245
Country: United States

I am going off for today, but before I leave, Guari, here is a quiz for you. Let's say we have two walls: one is our black wall with a 2x2 white square on it, and the other is an all-white wall. Let's first take an image of both walls from 5 feet, and say that the 2x2 square fills the frame in both images. Let's also say that we are getting 1000 photons entering our lens in each case. Now let's move back to 10 feet. In one case we are now capturing a larger 4x4 white area in the image. In the other case we have a smaller square captured in the image, with a black border around it. Now tell me the number of photons entering the lens in each of the two cases. I think the answer to that question should convince you that what you are saying cannot be true.



Guari
Registered: May 16, 2012
Total Posts: 1249
Country: United Kingdom

curious80 wrote:
Let's do one last thought experiment, and then I am out of here. I really must be doing something more productive with my time than engaging in an endless discussion on exposure.

Suppose for convenience that the moon is a big flat disc and Neil Armstrong is sitting on it, looking down and taking a picture of it. Let's say his picture covers a 5 ft x 5 ft portion of the moon's surface. Far, far away on earth we have our camera ready, about to take a picture of the moon. Unfortunately, space mice come in and eat all of the moon around that 5x5 ft portion before we can take the picture. Now we are only left with that patch to take a picture of. First of all, we wouldn't even be able to see the patch: by the time the light reaches us it would be so dim that we would not be able to register it. Secondly, the thing would be so small in terms of spatial resolution that neither our sensor nor the eye could capture it. But that's all that's left of the moon, and we are desperate to take a picture of it, so we work very, very hard and come up with a sensor with high enough resolution and high enough sensitivity to capture that portion. Then we will see that the amount of light reaching us from that portion is only 1 part in 1.6 x 10^18 (or whatever the factor comes out to be) of the amount of light that Neil Armstrong's camera was getting.


Yes, because in this scenario, under the conditions you list, you have made the object essentially a point source by reducing its size. Point sources suffer from spherical divergence. This happens when the viewer-to-object distance is big enough relative to the size of the subject. The moon is big enough that this won't happen; it would be the case if we were viewing the moon from Neptune.

If that were the case, then Neil Armstrong would have had to use a nonexistent, super-low-ISO emulsion to capture the moon, since, following your logic, the moon would be vastly brighter for him, as the Armstrong-to-moon distance is far less than that of an earth photographer shooting the moon from Kuala Lumpur.

The moon is not a point source. It does not emit light, it reflects it.



Guari
Registered: May 16, 2012
Total Posts: 1249
Country: United Kingdom

curious80 wrote:
I am going off for today, but before I leave, Guari, here is a quiz for you. Let's say we have two walls: one is our black wall with a 2x2 white square on it, and the other is an all-white wall. Let's first take an image of both walls from 5 feet, and say that the 2x2 square fills the frame in both images. Let's also say that we are getting 1000 photons entering our lens in each case. Now let's move back to 10 feet. In one case we are now capturing a larger 4x4 white area in the image. In the other case we have a smaller square captured in the image, with a black border around it. Now tell me the number of photons entering the lens in each of the two cases. I think the answer to that question should convince you that what you are saying cannot be true.


Of course it is different. You are right about that.

My point, as I wrote above, is that the number (or luminance) will be constant for a given subject from a particular vantage point. This is because we are now measuring reflectivity, and our subject is not a point source.

This is exactly what I meant when I said that it might seem counterintuitive to photographers. In your scenario, you are basically allowing the "background" (or, to rephrase it, anything that is not your subject) to modify exposure. And I put emphasis on this because I was referring strictly to the subject (the white box, in your example). I could measure how many photons were reflected by one of the sides of your white box, and only from the white box. If I were to walk back and re-measure from a distance with the same instrumentation, then I would have a different count, as I would be measuring the same subject plus "something else". That "something else" is what basically modifies my exposure, not a diminishing amount of light due to the inverse square law. The light that reaches the viewer reflected from the subject, for a given spatial position of the viewer, effectively behaves as collimated light.

In photographic terms, the only way to ensure that I measure the same white box from a distance without background contamination would be to use a different eyepiece on a reflective light meter as I increased my subject-to-viewer distance. This is not due to a change in area, but just to ensure that I measure the white box, only the white box, and nothing else. This is the essence of spot metering. And this is in essence what you do when you walk away from your subject and zoom in with your lens. Your exposure will never change as long as you are measuring the same object from the same line of sight, regardless of viewer-to-subject distance.

I hope this helps.







RustyBug
Registered: Feb 02, 2009
Total Posts: 12541
Country: United States

Curious80

Take your 5 ft @ 2x2 scenario and errant application of ISL, and rather than using it to explain what you think you are observing when you move back to 10 ft @ 4x4 ... instead, move to 2.5 ft @ 1x1 for both walls. Both walls would still be filling the frame (as with your 2x2), and according to your theory of ISL applicability, the exposure would need to be changed due to your increased proximity to your subject. Move closer yet again, and your exposure (according to your ISL application theory) would change again. Here is where your theory of applying ISL as you've suggested ... falls apart.

Simple observation clearly shows us that this does NOT occur. So while you feel as though you have successfully shown / explained your case through an explanation of an observed phenomenon ... how would you now explain the NO CHANGE in your exposure when moving to 1/2 or 1/4 the distance (each scenario still filling the frame with the same white wall)?

You mentioned earlier that physics defines how light travels, and that doesn't change. In that regard, you are correct. Light travels in a straight line (wave-like motion). This is the reason why AI = AR (angle of incidence = angle of reflection).

The point @ ISL is that for a multi-directional point source, the amount of light being emitted is being radially dispersed (much like straight spokes extending from a hub) and yes, it "spreads" as it moves further away from its origin, and from that point source ISL will apply.

But, those "spokes" (i.e. rays) of light have direction and they travel in a straight line, until they strike another object ... at which time a portion of the energy is absorbed, and a portion is reflected. (transmission and refraction may also apply, but aren't immediately pertinent to this dialogue.) That reflection is contingent upon the angle of incidence at which the object was struck, and the corresponding angle of reflection will be the same.

Light travels in a straight line (observe a laser pointer) ... we just tend to work with a whole lot of them traveling in different directions simultaneously.

This is why Guari's point is correct. The radial distribution of light emanating from a point source sends light out in multiple directions, and just as the spokes of a wheel are farther apart at the rim than at the hub, ISL applies to the point source's light rays over the distance involved. But just as the individual spokes travel straight from the hub, so do the light rays traveling away from the point source.

I appreciate your perspective that the nature and characteristics of light are unchanging ... something I espouse as well. However, your attempt to apply the radial dispersion of light from a point light source to how light travels once sent in a given direction from that dispersion, and to its subsequent reflection, is an errant application of ISL. Granted, the number of rays/photons STRIKING an object depends on ISL relative to the object's distance from the light source (as the light "spreads" in accordance with ISL), but that number of rays/photons STRIKING / REFLECTING (AI=AR) will remain constant regardless of your camera position (light source unchanged) ... continuing to travel in the straight lines derived from AI=AR.

As Guari has pointed out, the object reflecting light is NOT a (radiating) point light source, and as such ISL does not apply ... AI=AR does.

Light travels in a straight line (wave-like motion).

Like Guari ... HTH



FYI ... you might want to pick up a copy of "Light Science and Magic"

http://www.amazon.com/Light-Science-Magic-Fourth-Introduction/dp/0240812255



RDKirk
Registered: Apr 11, 2004
Total Posts: 8976
Country: United States

This is why Guari's point is correct. The radial distribution of light emanating from a point source sends light out in multiple directions, and just as the spokes of a wheel are farther apart at the rim than at the hub, ISL applies to the point source's light rays over the distance involved. But just as the individual spokes travel straight from the hub, so do the light rays traveling away from the point source.

I hadn't thought about the "spokes of the wheel" analogy, but I like it. If you happen to be on a particular spoke, the spoke stays the same regardless of how far along it you travel.



Guari
Registered: May 16, 2012
Total Posts: 1249
Country: United Kingdom

Yes, I'm glad to see that there is an easier way to explain it; I never thought of the spokes, but it's a perfect analogy.

As long as you are along the path of the spoke, it doesn't matter how far away from or near the subject you are; the brightness is the same, and it doesn't fall off per the inverse square law.

Good stuff, Rusty.


