>>>> Indicates corrections <<<<<
Let's not lose sight of the fact that the capture of a digital image is done with three very narrow band filters over the sensor sites. Each "bucket" of the sensor is either red, green or blue and sees the world as a monochromatic tonal range in that color. That's exactly the same way color separation has always been done: on the graphic arts cameras I operated in the 1970s, on the drum scanners I managed in the 1980s, and on the camera I use today. I also used color filters with my B&W photography and understand how they work: a narrow band filter reproduces anything the same color as the filter lighter in tone than the eye sees it, and all other colors darker.
The camera is simply recording the scene in terms of its red, green and blue components and then converting the individual red, green and blue cell site values into RGB pixel values based on a mathematical model of human visual response. A camera's gamut can be profiled by shooting a color reference chart with known color values (as defined by Lab coordinates) within the range of vision. Those known values are then compared with how the camera actually records them. The camera profiling process has little practical use, except in situations such as copying art on a copy camera, because the profile will vary depending on the color temp of the source illuminating the chart.
I did that exercise with my first Kodak DC290 digital camera in 2001 when I had access to all the necessary tools. The process is pretty simple. You shoot an IT-8 color target which is produced with stringent quality control.
http://super.nova.org/PhotoClass/Part7/IT8.jpg
Accompanying the target is a data file which describes the color of each patch. You then run the camera TIFF file through an application which compares the chart data file with how the camera actually reproduced the target.
http://super.nova.org/PhotoClass/Part7/ICCCamProfile.jpg
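For anyone curious what that comparison actually does, here is a rough Python sketch of the idea. The patch values are invented stand-ins, not data from my chart, and the error metric is the simple CIE 1976 delta-E.

import numpy as np

# Each row is one chart patch: [L, a, b] as listed in the IT-8 data file.
reference_lab = np.array([[50.0,  60.0,  40.0],   # a red patch
                          [70.0, -50.0,  45.0],   # a green patch
                          [40.0,  20.0, -55.0]])  # a blue patch

# The same patches as the camera actually rendered them (also invented here).
measured_lab = np.array([[48.5,  66.0,  38.0],
                         [69.0, -48.0,  47.0],
                         [42.0,  15.0, -62.0]])

# CIE 1976 delta-E is just the Euclidean distance between the two Lab values.
delta_e = np.sqrt(np.sum((reference_lab - measured_lab) ** 2, axis=1))

print("per-patch delta-E:", np.round(delta_e, 2))
print("average: %.2f  worst: %.2f" % (delta_e.mean(), delta_e.max()))

Profiling software does this sort of comparison for every patch and builds the profile from the differences.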
That camera had an odd quirk. Color looked normal, in the perceptual sense, if I used daylight or the built-in flash, but if I used my Vivitar 285HV flashes, blue and neutral gray tones would get a cyan cast and reds would become oversaturated. The colors of my jeans and Mac CPU are more accurately reflected in the photo on the left, which uses the camera profile. The plot of the profile revealed the reason for the odd color shift:
http://super.nova.org/PhotoClass/Part7/ICC_3D/CompareProfiles2D.jpg
http://super.nova.org/PhotoClass/Part7/ICC_3D/290Profile.gif
Note how the camera gamut as defined by the chart test falls outside of the Lab space in the blue and red corners? I suspected the odd color shifts were due to the fact the Vivitar flash was producing UV and IR wavelengths the camera wasn't filtering out. The camera saw the UV as just a brighter tone of blue and the IR as a more intense shade of red because it was just recording relative brightness in the color channels through the filters over the sensor sites, not color in the literal sense. Those arbitrary RGB values in the camera file only became odd looking when mapped via icc color management to a working space and my monitor profile with relative colorimetric or perceptual rendering intent. They looked odd because the perceptual rendering intent apparently assumes all the RGB values the camera captures fit inside the visual spectrum. I didn't see the same color shifts in photos taken with the built-in flash because it had a heavy UV filter over it. With a bit of web searching I located a professor at RIT who taught a class on profiling using a DC290 and exchanged several e-mails with him. He confirmed my suspicions. The Vivitar flash didn't cause the same problem on my other digital cameras because they had better UV/IR cut filters over the sensor. The solution with my DC290 was to add UV-absorbing mylar to the flash head.
Reality Check: You need to realize that any camera profile made from a printed chart or transparency is based on the limited CMYK gamut of that target, extrapolated into outer space by the profile generation software, and is only valid for identical lighting conditions. It's not the "real" color because all color is relative. That's what I mean when I say all color reproduction is "fake" or an illusion. Color perception, and the fact that we think a flat 2D contrast pattern in a photo represents real objects, are perceptual illusions. The real genius of icc based color management is that it takes that into account with the rendering intents used to map colors. But as my DC290 example shows, it's also based on certain assumptions about how RGB maps to the gamut of human vision perceptually.
The fact color reproduction works at all has to do with the physiology of the eye and brain.
>>> Correction per Hermie: There are only L, M and S cones (long, middle and short wavelength sensitive cones). Trichromatic signals are transformed to opponent signals, allowing more efficient signal transmission. The processing of cone signals is analogous to the a and b channels in Lab. Differencing of the cone signals allows construction of red-green (L−M+S) and yellow-blue (L+M−S) opponent signals. <<<<<< Having different points of reference for any color allows its hue to be triangulated by the brain in much the same way as you can plot your position on a map by taking simple compass bearings off two different mountain peaks and then drawing lines on those bearings from each peak out into the map: your location will be where the lines intersect.
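To put that correction in concrete terms, here is a tiny Python sketch of the opponent-signal idea. The cone responses and the equal weighting are invented for illustration, not a physiological model.

# Made-up L, M, S cone responses for a single spot in the scene.
L, M, S = 0.7, 0.5, 0.2

luminance   = L + M        # achromatic signal, roughly the L channel in Lab
red_green   = L - M + S    # opponent signal, roughly Lab's a axis
yellow_blue = L + M - S    # opponent signal, roughly Lab's b axis

print(round(luminance, 2), round(red_green, 2), round(yellow_blue, 2))   # 1.2 0.4 1.0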
The rods of the eye are also monochromatic, sensitive to a narrow band in the green/blue area of the spectrum and about 3000x more sensitive than the cones to light energy. That adds a third point of reference to human vision, a very sensitive range of brightness. There are actually far more rods than cones in the eye. If you hold your arm out and stick up your thumb, the cones in your eye would be concentrated in an area about >>> twice <<<<< the size of your thumbnail, 2% of your field of view, with the rods covering the rest of your field of vision.
The physiology of the eye and the process of human vision explain some of the things you may just accept as givens about digital photography without really knowing why, such as the shape of the Lab gamut (much larger in the green region) and the fact that a digital camera sensor with a Bayer pattern has twice as many green sensor wells as red and blue. Both exist to mimic the response of the human eye. I don't know for certain why Bayer decided on doubling the number of green sites, but I suspect it was a clever workaround to avoid the signal-to-noise problem which would occur if a single green site were amplified twice as much as the red and blue ones.
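If you want to see the pattern itself, here is a short Python sketch that tiles the usual RGGB layout across a toy sensor; the dimensions are arbitrary.

import numpy as np

height, width = 4, 6
cfa = np.empty((height, width), dtype='<U1')

# RGGB layout: even rows alternate R,G; odd rows alternate G,B.
cfa[0::2, 0::2] = 'R'
cfa[0::2, 1::2] = 'G'
cfa[1::2, 0::2] = 'G'
cfa[1::2, 1::2] = 'B'

print(cfa)
print({c: int(np.sum(cfa == c)) for c in 'RGB'})   # green sites outnumber red and blue 2:1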
There are physical design considerations which affect the dynamic range of a sensor site. They are analogous to a bucket in that the larger the bucket is, the more water it can hold. Because there might be both very dark and very light areas in a photo, the process of filling the buckets is like aiming a fire hose at some while filling others with an eye-dropper (the real one, not the one in Photoshop). The process of filling the buckets stops, ideally, at the precise moment the first of the millions of buckets on the sensor gets filled to the brim. Then all the buckets are dumped and the contents measured.
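Here is the bucket analogy as a toy Python simulation. The full-well capacity and light levels are invented numbers, purely to show how an overfilled site loses highlight detail.

import numpy as np

full_well = 1000.0                               # hypothetical bucket capacity
light = np.array([5.0, 300.0, 950.0, 4000.0])    # relative light hitting four sites
exposure = 1.0                                   # shutter/aperture scaling

charge = np.minimum(light * exposure, full_well) # overfilled buckets just spill over
print(charge)                                    # [5. 300. 950. 1000.]
print(charge >= full_well)                       # the last bucket clipped: detail lost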
Ironically, a cheap point-n-shoot camera with a live preview continually dumps and resets the sensor and, by constantly measuring the contents, can know, down to the pixel level, when any of the buckets have been overfilled, and provide feedback to correct the exposure. But a high-end camera, slave to the optical viewfinder, must meter off the light in the viewfinder in roughly defined zones and then take a WAG at the exposure. The best it can do is try to warn the photographer, via the post facto histogram and over-exposure warning, what the state of the exposure was. Thus a $300 P&S will often do as good a job or better of getting auto-exposure correct than a $3,000 pro body.
How many bits the camera uses to convert the analog voltage of the sensor dump into a numeric value representing a range of brightness, as seen through the red, green and blue filters, affects the gradation of tone. The four-bit processors in use back when I started programming business applications in the late 1970s could only describe a tonal scale with 16 steps. The lowest value, 0, would be black, the highest, 15, white, with 14 gray tones in between. Nowadays a camera uses 14 bits and there are many more discrete steps in the gray scale, but 0 is still pure black and the highest value pure white. Once we get past the RAW converter, we have a camera file with RGB values for each pixel, assigned within the range the bit depth can express, based on the mathematical model of how RGB values map to human vision.
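The arithmetic is simple enough to check in a couple of lines of Python:

# An n-bit converter can describe 2**n discrete levels, with 0 as pure black
# and 2**n - 1 as pure white.
for bits in (4, 8, 12, 14):
    levels = 2 ** bits
    print(f"{bits:2d}-bit: {levels:6d} levels, black = 0, white = {levels - 1}")
# The 4-bit case gives 16 levels: black, white, and 14 gray steps in between.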
In the days before icc managed color, the RGB values in the file would directly drive the video card and the monitor with no intervention by the application. That is still true for applications such as web browsers which are not icc-aware. Back in the early days of desktop color, the pros were editing color separations of transparencies and color prints made on scanners, working on Macs with 5000K / 1.8 gamma Radius PressView monitors. The color balance was checked visually and empirically against the same chart printed on an offset press.
In the early 1980s, when running a Hell DC300 drum scanner which output "hard dot" litho CMYK film ready for printing, the basic calibration exercise involved printing an optimized standard color target on the offset press with production ink and paper and then scanning it. The scanner gray and color balance controls were then adjusted so that the resulting film accurately rendered the target: a CMYK > CMYK round-trip.
There was color management, and it worked quite well within the realm of professional graphic arts, provided everyone used standardized viewing conditions and SWOP standard inks, because it was based not around what a monitor could display but rather on what the final result on the press would be: monitors were adjusted by eye, perceptually, to match the press sheet as closely as possible. All things considered, in a closed-loop environment like our offset printing operation, where the designers and pressmen shared identical viewing conditions, it worked as well as icc based color management. Every light bulb in the entire facility was a 5000K Chroma, so even if you looked at color outside of the viewing booth environment the color temp was still the same.
The Internet and digital cameras started to change the paradigm of color communication several years before icc based color management became universally available on the desktop. The native color balance of an un-calibrated CRT monitor on a typical PC at that time was around 8,000-9,000K with a gamma of 2.2. Microsoft and HP, focused on the business market rather than graphic arts like Apple, collaborated to create the sRGB standard in 1996 based around the gamut of monitors at that time. The specification called for a white point of 6500K (D65) and a gamma of 2.2. Needless to say, color edited on a Mac didn't look the same on a PC. Apple had introduced icc based color management into its OS in 1992, but it wasn't until Windows 98 that Microsoft actually incorporated icc based profiling into its OS. But because the Internet shifted the medium of communication from the printed page to the monitor, and PCs outnumbered Macs, sRGB became the de facto standard for web viewing for a very practical reason: it was a close match to most CRT monitors, so even an unmanaged file would look OK.
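As an aside, the "gamma 2.2" in the sRGB spec is actually a piecewise curve (linear near black, then a 2.4 exponent with an offset) that approximates a plain 2.2 power law. A few lines of Python show how close the two are; the sample values are arbitrary.

def srgb_encode(linear):
    # Piecewise sRGB encoding curve.
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

for linear in (0.001, 0.05, 0.18, 0.5, 1.0):
    approx = linear ** (1 / 2.2)   # simple gamma-2.2 approximation
    print(f"linear {linear:5.3f}: sRGB {srgb_encode(linear):.4f}  gamma-2.2 {approx:.4f}")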
With early digital cameras - my office owned one of the first, a 0.3 MP Apple QuickTake 100 - there was no choice of color space. Later cameras offered the choice of sRGB or AdobeRGB, but that choice defined the working space gamut the camera JPG values would be mapped to, not the range the camera could record, which, as described above, is primarily a function of the filtration over the sensor sites.
In a digital camera RAW workflow, the first point at which a color space is assigned in the icc model is when the digital values (from 0 up to the maximum the camera's bit depth allows, calculated from the analog voltages recorded at the sensor sites) are assigned to a working space. A RAW camera file has no defined icc gamut: no color space in the icc lexicon. The RAW file contains luminance and color information, not discrete RGB pixel values. The point at which RGB values are first assigned is when the working space is selected. Since whatever values the camera captured are mapped to the working space, any colors the camera might have recorded outside of the working gamut are mapped to the outer boundary of the working space. The beauty of the icc workflow is that you can pick whichever working space you want and the color management driving the conversion will adjust the color values in a way that perceptually the colors look realistic: that bright red fire engine or stop light will be mapped to the appropriate RGB values within that space. To the extent a camera profile created from an IT8 or Macbeth ColorChecker is interjected into the RAW > working space conversion, it affects the outcome by modifying the values within the boundary of the working gamut to make them more accurately reflect the chart values in the test file. The camera profile will not affect the outer boundary of the working space, and will only be accurate for photos taken in the same color of light.
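A stripped-down Python sketch of that boundary behavior: the 3x3 matrix here is a made-up stand-in for a camera-to-working-space conversion, not a real profile, but it shows how a color the camera recorded beyond the working gamut can only end up clamped to the edge of that space.

import numpy as np

# Hypothetical camera-to-working-space matrix (illustrative values only).
camera_to_working = np.array([[ 1.6, -0.4, -0.2],
                              [-0.3,  1.5, -0.2],
                              [-0.1, -0.3,  1.4]])

camera_rgb = np.array([[0.10, 0.20, 0.30],    # an ordinary color
                       [0.95, 0.05, 0.05]])   # a very saturated red

working_rgb = camera_rgb @ camera_to_working.T
out_of_gamut = (working_rgb < 0) | (working_rgb > 1)
working_rgb = np.clip(working_rgb, 0.0, 1.0)  # clamp to the working-space boundary

print(np.round(working_rgb, 3))
print(out_of_gamut.any(axis=1))               # [False  True]: the red hit the boundary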
Implied in the design of the icc system is the assumption that people actually know how it works and can make intelligent choices of working space based on image content and output medium. That is the point of this thread. The RGB and CMYK gamuts are analogous to an RGB apple and a CMYK pear: different sizes and different shapes. If you owned a fruit mail-order business and wanted to stock one size of box to ship either fruit, or a combination of both, you'd need one that could hold either shape but not have so much room left over that the fruit would rattle around during shipping. The same is true when picking a working space: the working space is the box.
A RAW file can be opened in Adobe Camera Raw (ACR) and evaluated in different working spaces. The histogram and preview will show clipping due to exposure. Adobe offers this advice (a short code sketch of what those warnings check follows the quote):
Clipping occurs when the color values of a pixel are higher than the highest value or lower than the lowest value that can be represented in the image. Overbright values are clipped to output white, and overdark values are clipped to output black. The result is a loss of image detail.
• To see which pixels are being clipped with the rest of the preview image, select Shadows or Highlights options at the top of the histogram. Or press U to see shadow clipping, O to see highlight clipping.
• To see only the pixels that are being clipped, press Alt (Windows) or Option (Mac OS) while dragging the Exposure, Recovery, or Blacks sliders.
For the Exposure and Recovery sliders, the image turns black, and clipped areas appear white. For the Blacks slider, the image turns white and clipped areas appear black. Colored areas indicate clipping in one color channel (red, green, blue) or two color channels (cyan, magenta, yellow).
Note: In some cases, clipping occurs because the color space that you are working in has a gamut that is too small. If your colors are being clipped, consider working in a color space with a large gamut, such as ProPhoto RGB.
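In code terms, the shadow and highlight warnings described above boil down to flagging pixels that sit at the extremes of the encodable range. A minimal Python sketch, using a made-up 2x2 image rather than a real file:

import numpy as np

image = np.array([[[255, 250, 248], [128,  64,  32]],
                  [[  0,   0,   0], [255, 255, 255]]], dtype=np.uint8)

highlight_clip = np.all(image == 255, axis=-1)   # blown out in every channel
shadow_clip    = np.all(image == 0,   axis=-1)   # blocked up in every channel

print("highlight-clipped pixels:", int(highlight_clip.sum()))   # 1
print("shadow-clipped pixels:   ", int(shadow_clip.sum()))      # 1

ACR's colored warnings go a step further and report clipping in individual channels, but the principle is the same.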
But what the ACR preview isn't showing is whether or not the working space is large enough to hold the CMYK printer gamut. The ideal display for ACR would be something similar to soft proofing in Photoshop, where the CMYK printer profile could be inserted into the ACR screen preview color management workflow with a separate out-of-gamut warning showing in the preview for any colors in the CMYK output the working space will clip. That of course can be done now by simply opening the file in Photoshop in the selected working space and applying soft proofing before doing anything else, but the ability to do it in ACR would save a step.
Color is adjusted perceptually when the working gamut is displayed on a monitor. When working in a gamut larger than that of the monitor, you are actually manipulating the color outside of the monitor gamut by remote control. For example, you might have a red which is 100% of the saturation your LCD monitor can display, but only 50% of what the working space can define. That means you can make the red more saturated in the working space, but the monitor can't accurately display it. So what happens? All the other colors your monitor can display will change instead, to try to simulate, within the limited monitor gamut, how all the colors will look in relative terms perceptually.
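A crude Python illustration of that "remote control" effect, with deliberately simplified mappings standing in for what a real CMM does: straight clipping leaves the in-gamut colors alone, while a perceptual-style compression shifts everything so the relationships survive.

import numpy as np

working_rgb = np.array([[1.40, 0.10, 0.10],    # a red the monitor can't show
                        [0.80, 0.60, 0.20]])   # an in-gamut color

clipped = np.clip(working_rgb, 0.0, 1.0)       # relative-colorimetric-style: only the
                                               # out-of-gamut red changes

scale = 1.0 / working_rgb.max()                # perceptual-style: scale everything down
compressed = working_rgb * scale               # so the brightest value just fits

print(clipped)
print(np.round(compressed, 3))                 # every color moved, ratios preserved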
When you print and convert to CMYK, the RGB values will also be remapped in ways a monitor can't display. What gets mapped to CMYK isn't what is actually seen on the monitor, which is just a perceptual simulation of the wider working space: the RGB > CMYK transform goes directly from the working space you can't really see on the monitor to a CMYK space you can't actually see until you make a test print. Soft proofing is mostly valuable to the extent it identifies out-of-gamut output colors by graying them out. By selectively adjusting saturation until the warning disappears the clipping can be eliminated, but the image on the screen will still not exactly match the output.
The downside of working in a gamut larger than actually needed is that much of the manipulation of the file is shown in relative, perceptual ways on the monitor. Using a monitor with the widest gamut you can afford will allow you to see more of the actual manipulation rather than a perceptual simulation. But that also has a downside if you are preparing files for the web for people with smaller gamut monitors. Finding the best workflow for the work you do just requires a bit of time trying the options to see which is most convenient. If you only ever post the files on the web, then working in sRGB would save the step of converting. If you shoot and edit only for offset printing and the Internet, then AdobeRGB would fit both. At the current time it is only the gamut of some ink jet printers which exceeds AdobeRGB.
The most objective way to judge printer output with various workflows is to print a carefully prepared test file. It is possible to download standard test files from CIE and other sources:
http://www.colour.org/tc8-03/test_images.html
The choice of working space doesn’t affect the underlying RAW values in the file so there is no irreversible penalty for making a bad working space choice if the RAW file is retained. Worst case the file would need to be re-edited.
Canon offers a more empirical approach to color in DPP.
http://super.nova.org/TP/Styles480sRGB.jpg
A RAW file can be opened in DPP in various working spaces and then various style profiles can be applied interactively, which will change the internal color mapping within the working space. That allows the user to empirically pick the one which best matches the content of the photo and the desired mood you want it to evoke in the mind of the viewer. That, not color by the numbers, is what really matters most from my practical point of view.