
Moderated by: Fred Miranda


FM Forums | Post-processing & Printing | Join Upload & Sell


Archive 2013 · Calibrated Monitor vs. Eye / Brain Accommodation ... ???
p.1 #1 · Calibrated Monitor vs. Eye / Brain Accommodation ... ???

Many of us understand that when it comes to WB we can't trust our eyes because of the way our eye/brain will accommodate or adjust to its environment and/or what we are looking at.

But how long does it typically take for us to re-adjust to a change? For instance, good ol' Windows installs some updates overnight, I start up this morning, and as everything reloads my monitor calibration program kicks in and I see the display go from what appeared white & bright to what now looks blue and dim.

My initial gut response is "something's gone wrong, it looked white before (word document, etc.), now it looks blue" ... but I know that nothing is wrong as this has happened many times before and I go on about my business for 20 minutes or so looking at a variety of things. Then, when I go back to look at those things that previously "looked blue and dim" to me (which also were perceived as white & bright pre-calibration), they are now again perceived as a very satisfying bright white.

Nothing has changed, and I'm not inquiring about a problem, because there isn't a problem in play here. It's similar to whenever I do a recalibration and compare the before/after: the after always seems "too blue and dim". Of course, that is just the amount of correction being applied to render neutrality, etc. ... my question is, does anyone have any insight into how long it takes for our eye/brain accommodation to "make the switch", so to speak? We hear about allowing time for our monitors to warm up before calibrating, but is there also a "lag time" that we need to allow for ourselves to adjust to the changes before doing any "by eye" adjustments?

I'm not overly worried about it, because I first do my WB "by the numbers" rather than by my eye, then apply any "by eye" aesthetic afterwards. I also work fairly slowly, so I'm pretty sure that I get accommodated ... but still, I am curiously wondering how long it takes for us to accommodate to such changes when we recalibrate or maybe when we re-enter our workstation environment after being in other lighting environments, or when we change environments with our mobile viewing/editing.


Mar 22, 2013 at 12:13 PM
p.1 #2 · Calibrated Monitor vs. Eye / Brain Accommodation ... ???

That's a complicated question since it contains three main parameters.

1) time
2) intensity
3) %-of-vision angle

High intensity light makes you shift quicker. Larger percent of view makes you shift quicker.

I am, however, quite surprised that you think the monitor goes "blue" after applying the profile. Most screens at native settings have a CWB temp of 6700K+, more blue than even the normal sRGB screen standard (6500K). Profiling should lower the CWB in 99% of all cases, not increase it.

If you're not on a hardware calibration setup, you shouldn't have any shift at all between profiled/not profiled. Any shift you get means you're losing tonal range, since you're letting a software LUT handle a change in WB. This should be done in the monitor hardware. Even the cheapest crap monitor (with WB controls) out there does this better than an 8-bit LUT.

The important thing is that you're not separating monitor / ambient by too much. It's not good to have a 3000K halogen desk lamp next to a 6500K screen. It confuses the average impression even when you're staring squarely at the screen.

For most home-based workstations, it's a lot better to aim for a non-standard CWB, like 5800K or close to it. This makes the difference between ambient and monitor smaller, and it makes moving around in your working space a lot more time-invariant. The proof-station standard of 5000K is generally too warm for general use.
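To make the 5800K-vs-6500K comparison concrete, here's a small sketch using Tanner Helland's well-known curve-fit from color temperature to approximate sRGB values. The function `kelvin_to_rgb` is an illustrative helper (an approximation, not a colorimetric conversion), but it shows why a 5800K white point reads as warmer, i.e. relatively less blue, than 6500K:

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate sRGB of a white point at the given color temperature,
    using Tanner Helland's curve-fit (an approximation only)."""
    t = kelvin / 100.0
    # Red channel
    if t <= 66:
        r = 255.0
    else:
        r = 329.698727446 * ((t - 60) ** -0.1332047592)
    # Green channel
    if t <= 66:
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        g = 288.1221695283 * ((t - 60) ** -0.0755148492)
    # Blue channel
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda v: int(max(0.0, min(255.0, v)))
    return clamp(r), clamp(g), clamp(b)

# 5800K sits noticeably closer to warm indoor ambient than 6500K:
print(kelvin_to_rgb(5800), kelvin_to_rgb(6500))
```

The lower target simply reduces the blue component of "white", shrinking the gap between the screen and typical room lighting.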

If you create a flat medium gray surface image and view it full-screen on the monitor, you should be able to move around the room and do other stuff, then look back to the screen without perceiving any obvious "tint" on the gray image that covers it.
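Generating that flat gray reference is trivial; a minimal sketch with Pillow (the resolution here is arbitrary, match your monitor's):

```python
from PIL import Image

# A full-screen neutral reference: flat 128,128,128 medium gray.
gray = Image.new("RGB", (1920, 1080), (128, 128, 128))
gray.save("gray_reference.png")  # open this full-screen for the check
```

Display the saved file full-screen, walk around the room, and look back: any tint you perceive on it is your own adaptation, not the image.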

Mar 22, 2013 at 01:46 PM
p.1 #3 · Calibrated Monitor vs. Eye / Brain Accommodation ... ???

The brain can discern even small differences in a side-by-side comparison of two samples, then calibrates to what it intellectually thinks is more correct for the mental baseline the scene is judged against.

This is better illustrated at capture, in how the eye doesn't see the color biases the camera records. For example, under trees the light is greenish due to reflections off the foliage, but an untrained eye will not see it because the brain, seeing neutral / familiar content, dynamically adapts to it. A photo taken under trees from a camera baseline of Daylight WB will record the green bias the photographer's brain didn't recognize, because it adapted to the subject's neutral colors or familiar skin tone.

In the People Forum or Critique, portraits taken under trees are often posted with an uncorrected greenish bias, which makes the red-dominant skin look oddly flat (dimensionally) and dull, as if covered by a gray veil, with hyper-saturated foliage. Apparently the color vision of the photographer who took and posted the image with a green bias, and of many commenters viewing the image on calibrated monitors, adapted from staring at the image, because: 1) the photographer didn't correct it with + magenta in PP, and; 2) the comments about the "flat" appearance suggest changes in lighting, not understanding that the root of the problem is WB. In most cases a +25 Magenta photo filter correction neutralizes the color and normalizes the previously "flat" look of the skin.
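Since magenta is the complement of green, the effect of a +magenta correction can be sketched as pulling the green channel down. The function `add_magenta` below and its `amount * 0.5` scaling are hypothetical stand-ins, not Photoshop's actual Photo Filter math (which blends a filter color in a luminosity-aware way), but they show the direction of the fix:

```python
import numpy as np

def add_magenta(img, amount=25):
    """Crude stand-in for a +magenta correction: attenuate green.
    `amount` loosely mimics a 0-100 filter density; the 0.5 scale
    factor is an arbitrary assumption, tuned by eye in practice."""
    out = img.astype(np.float64)
    out[..., 1] -= amount * 0.5  # magenta = less green
    return np.clip(out, 0.0, 255.0).astype(np.uint8)

# A greenish "under the trees" pixel drifts back toward neutral:
greenish = np.full((1, 1, 3), (120, 135, 118), dtype=np.uint8)
corrected = add_magenta(greenish, amount=25)
```

Red and blue are untouched; only the excess green is pulled back toward the other two channels.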

So it's a safe bet that whenever you sit and stare at an image with a bright-color background or content on a monitor, your eyes will adapt dynamically based on it. That's why it is a good idea to take a break every minute or so and look around the room, and to surround the monitor area with a neutral wall illuminated with a similar D65 white point as the calibrated monitor. That way, when you look up and away from the monitor, your brain will have a chance to "recalibrate" to the familiar surroundings of the room. When you look back at the image again you will, until your eyes get skewed by its bright colors, see it more objectively.

D50 vs D65 viewing standard: In graphic arts we used expensive isolated viewing booths for judging color, which were gray and illuminated with high-CRI 5000K sources overhead for reflected subjects, and matched sources in light boxes and projectors for transparencies. But since D65 is used for calibrated monitors, an editing space illuminated with high-CRI D65 fluorescents would be ideal as a "comparison" reference for digital editing.

I haven't installed D65 lighting in the room where I edit photos, but what I'll sometimes do as a reference, when editing a color-critical image that I suspect is skewing my color vision as I stare at it, is to create a 128,128,128 square on a new layer that I can toggle on and off over the image I'm editing.

For example, if editing a portrait taken using Daylight WB under trees, the gray card the subject is holding will look "normal" by eye on the monitor if I expect the card to be gray. Why? My brain will adapt color perception to it. But then, if I turn on the 128,128,128 reference patch on the layer above it, the card in the photo will look green by comparison, and my brain will understand there's a color cast (relative to the technically neutral WB on the comparison layer).
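The toggleable reference-patch layer can be sketched in a few lines; `with_reference_patch` is a hypothetical helper standing in for the Photoshop layer, assuming the image is an RGB array:

```python
import numpy as np

def with_reference_patch(img, size=64):
    """Return a copy of the image with a 128,128,128 square in the
    top-left corner -- a crude stand-in for toggling a gray-patch
    layer above the photo in Photoshop."""
    out = img.copy()
    out[:size, :size, :] = 128
    return out

# A uniformly greenish-cast "photo" with the patch switched on:
photo = np.full((200, 200, 3), (120, 135, 118), dtype=np.uint8)
check = with_reference_patch(photo)
# Next to true 128,128,128, the green bias of the photo's "gray card"
# pixels becomes obvious by comparison.
```

Toggling between `photo` and `check` on screen plays the same role as switching the comparison layer on and off.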

Using the Camera Custom WB as a Baseline Reference:

I do a similar comparison whenever I set Custom WB in the camera. As a baseline, I always start with the camera on the Daylight setting when shooting the gray card filling the frame. Then I set Custom WB off that frame, then shoot it a second time with Custom WB for comparison.

Standing under trees with my color vision adapted to the + green light, the playback of the card, which is actually + green, will look neutral on the camera. That's why a photographer not intellectually aware of how trees / grass / barns / brick walls cause color biases usually won't notice them when shooting: his brain sees the light as "normal / neutral" and tunes out color casts, such as red shadows when a portrait is taken against a brick wall.

In the field, when toggling between my first Daylight WB frame and second Custom WB frame, the Custom WB frame will initially look + magenta by comparison on the camera playback, because my overall perception is skewed + green. But because I understand intellectually that Custom WB is the more "technically" correct color (if the goal is capture from a technically neutral baseline), the longer I look at it the more "neutral" my brain will see it. If I stare at that Custom WB reference frame on the back of the camera until it seems neutral, then look up at the actual subject, I'm more likely to see the greenish cast by comparison, but almost immediately my eyes will again adapt to the + green ambient.

The reason I always set Custom WB when possible and shoot before / after Daylight / Custom comparison shots is that, back on my computer, comparing the shots gives me a very good idea of what the color of the ambient light was relative to Daylight. Knowing that helps me decide intellectually / creatively how to skew the color balance off the technically neutral capture baseline.

Context Clues in a Photo Affect Perception of what seems "Normal" on faces

Something I came to realize in portraiture when critiquing photos is how context affects perception of WB. For example, if someone posts a portrait taken by a candle / campfire / fireplace, and the skin tones are seen to be much warmer than technically neutral, it will seem perfectly "normal" if the context of the light source is also seen in the photo. But if someone posts a tightly cropped H&S shot with the same warmer-than-neutral color balance, it will not seem "normal", because the context of the source being warmer (relative to the capture baseline) isn't grasped.

What happens is that the viewer of the photo, when seeing a wider shot, grasps the context intellectually and adapts their baseline for what seems "normal". In the tighter-cropped H&S, if there is nothing to provide a clue where it was taken, the warm skin will not seem normal.

Even in a sequence of campfire photos, wide > med > close-up of faces, what will seem most "normal" is for the WB in the photo to gradually shift from warm > neutral as the crops get tighter, because that's exactly how the brain perceived the faces in person when focused on them alone - the eyes see the color cast but "filter" it out and see the face more normally.

Because of the adaptive nature of vision, when shooting in situations like that, where I realize the crop in the photo will change the perception of "normal", the RAW capture will always record from the baseline of the source. How that file appears when opened will be affected by the camera WB setting. In a situation like a campfire, I will set the camera to Daylight WB but also shoot a face in the light of the fire holding a gray card.

Back home on the computer, the files will initially have a warm ambience from the Daylight baseline used in the camera. But by clicking on the card in the test shot, I can "snap" the card and face to neutral. That will make the background perceptually incorrect and the face too "cool" looking in shots where the fire is seen.

The solution to that dilemma? I take the test file with the gray card and make a copy of the RAW. One I balance for the ambience I want in the background, labeling it "Background"; the other, labeled "Normal Faces", I balance to how I remember perceiving the faces: not as warm as in the Daylight WB shots SOOC, but not as cool as when "snapped" to neutral via the card. Somewhere halfway in-between, determined by eye.

Then, for each photo I edit, I also duplicate the RAW file into two copies. To one I apply the "Background" test-file meta-data, and to the other the "Normal Faces" meta-data. Then I load the two adjusted files into Photoshop on separate layers and blend them together with masks. I put the background copy on the bottom and the faces copy on top, and blend in the "normalized" faces by opening a black-filled mask selectively in the areas, like faces, where the brain would adapt and see them more normally in person.

If the photos aren't very critical, I streamline the process by adjusting one RAW copy to the "faces" meta-data of the adjusted test shot, then once in PS I dupe it and warm it with a warming photo filter. I do it that way because "normalized" faces are the most critical reference point perceptually in the photo. I then blend the "normalized" and "warmed up" layers by eye the same way.

What the dual-layer editing tries to do is simulate how color perception would shift if you walked up to a campfire from a distance, then sat around it and looked closely at the faces. From a distance the faces would seem warmer than "normal"; around the fire, with your vision adapted to the warm glow, they would still have a warm glow, but not as much as initially perceived.

The best "blend" varies with the crop and context and is trivial to find by before / after comparison by eye when the opacity sliders on the two layers are adjusted until it "looks right".
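The masked two-layer blend can be sketched as a per-pixel linear mix. The function `blend_wb_layers` and the toy arrays below are illustrative assumptions, not the actual Photoshop operation, but the math is the same as painting a black-filled layer mask open over the faces:

```python
import numpy as np

def blend_wb_layers(background, faces, mask):
    """Blend the 'Normal Faces' version over the 'Background' version.
    mask: float array in [0, 1]; 1.0 fully reveals the faces layer,
    0.0 keeps the background layer (a black-filled PS layer mask
    painted open with white in the face areas)."""
    m = mask[..., np.newaxis]  # broadcast the 2-D mask over RGB
    out = background * (1.0 - m) + faces * m
    return np.clip(out, 0.0, 255.0).astype(np.uint8)

# Warm "Background" render vs cooler "Normal Faces" render (toy data):
bg = np.full((4, 4, 3), (200, 160, 120), dtype=np.float64)
fc = np.full((4, 4, 3), (180, 170, 160), dtype=np.float64)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0  # "face" region opened up in the mask
result = blend_wb_layers(bg, fc, mask)
```

Intermediate mask values (or lowering the top layer's opacity) give the in-between renderings described above, so "looks right" is just a choice of mix ratio per region.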

It's an example of how shooting digitally and shooting in RAW change the entire workflow of how I capture images and control color, at capture and during editing. The baseline I'm editing to is the set of mental notes I take when shooting of my impression of the WB. In a case like a campfire, I'll see and want to retain a warm glow, but not as much as Daylight WB will record on the faces. In a situation like under the trees, I'll detect the green bias when setting Custom WB on the camera and want to eliminate it completely. It's situational and subjective, and just a new technical tool to use in the overall creative process.

Mar 22, 2013 at 02:01 PM
p.1 #4 · Calibrated Monitor vs. Eye / Brain Accommodation ... ???

Gotcha, thanks.

The delta between my 6500K setting and the ambient probably also explains why I "see blue" to a varying degree. I had noticed that sometimes when I calibrate (Spyder 3 Elite) the before/after is very noticeable to me (perceived blue shift) and other times it is much less. I'll attribute that to the ambient variance (window light / time of day/night) that offsets (or not) the color of my nearest (warm) lamp in this particular environment.

As mentioned, I've always trusted my calibration, so no worries, but that does make sense of why I "see blue" in this environment. My other workstation has much less natural light influence, my nearest lamp is 6500K, and the monitor is hooded; the "I see blue" environment is for my casual efforts. When I'm doing critical work, I move to the workstation.

I might add, after I've had some time to adjust and then go back to look at the before/after calibration comp, the comparison appears to be warm/neutral rather than the neutral/cool perception at the initial calibration change. Just another interesting aspect of how we see things relatively rather than absolutely ... kinda like Antelope Canyon, etc.


Mar 22, 2013 at 02:36 PM
