You never need to consider CMYK when printing precisely because profile-based color management handles the translation automatically, similar to how in a car with an automatic transmission you don't need to think about shifting. But it only works ideally if the printer profile is accurate.
How does one determine if a profile is accurate?
If one makes a print BEFORE doing any editing based on what is seen on a calibrated monitor, one can see more objectively how the color values in the file differ in appearance from what is on screen, both in the normal view and when soft proofing with that printer's profile. I would call that process one of "calibrating" expectations of what the printer gamut is capable of.
After making dozens or hundreds of prints the traits of a printer will become obvious, and once aware of the differences, large or minor, most will incorporate corrections into the workflow. All I was suggesting is shortening the learning curve of "calibrating expectations" by doing a few controlled tests with known neutral content, like a gray scale in a test shot.
If the photo is a woman in a red dress you'd have difficulty seeing any gray balance error in the profile driving the printing process. Having her hold a gray scale and color patch target in the photo would make it easier to see the differences, by virtue of being able to easily lay the actual targets used in the photo over the print for direct comparison.
The same holds true for monitor calibration. Spend $300 on a calibration tool, use it to calibrate the monitor, and the brain, having faith in the technology, then trusts what is seen by eye as the "correct" color. But what if some operator or system error keeps the calibration process from being performed correctly? How would one know? If, for example, an incorrect calibration process created a slight green bias, your eyes and brain will adapt to it when looking at the monitor and not see it. That's the reason you want a neutral wall around the computer. Taking a break from staring at the monitor screen and looking at the wall will "recalibrate" the brain's color perception. So while you might not see a green cast after staring at an image, you will be more likely to notice it after "recalibrating" perception by looking away at the neutral wall.
One of the reasons I set Custom WB at capture whenever possible, and include a gray card and color target in a test shot after setting it, is that when I first open the image I know, from my camera's baseline, that the color is technically neutral and I should not see any color cast in the neutral content.
Most scenes don't have neutral content to evaluate. If shooting a woman sitting on a red car in a parking lot without any reference target, it might be easy to spot a green bias due to a monitor calibration error when opening the file, because the tires and pavement you expect to be neutral look green. But if the shot was instead taken on grass under the shade of a tree there may be a combination of problems in play, not technical ones, but human perceptual variables. The first is the fact that the lighting will have a green bias the viewer usually will not be aware of when shooting. Without any control target, a photographer using Daylight WB will produce a file in the camera which faithfully records the green bias they didn't realize was there. When they get home, open the file on their perfectly calibrated monitor, and look at it expecting the neutral content to look neutral, their eyes will adapt to the monitor image and they will not detect the green bias there either.
I've seen that scenario in hundreds of photos posted on forums here. An outdoor photo under trees taken with Daylight WB will have more saturated green foliage, which looks good, but dull gray human skin tones resulting from green light on the warm-toned skin. The lighting will appear flat and low in contrast. Why do they post photos that way? Their brains adapt perception when looking at the calibrated monitor. The problem isn't the camera, or the monitor, it's the fact that human color perception adapts.
Taking the scenario to the next step, printing, what will happen? The file might look perfectly balanced on screen, but when printed the print will show the green bias that is actually there. Why? Because the printer doesn't have a brain driven by expectations and memory of what things look like. It will simply take the RGB values in the file, green bias included, and reproduce them accurately per the printer profile.
Change the scenario slightly. Have the woman next to the car hold a gray card and MacBeth color target, but still shoot with Daylight WB. When the file is opened on the computer the eyes will adapt and not notice the green in the car or the woman, but they will be more likely to notice the bias in the neutrals of the gray card, or to notice that the greens seem more saturated and the reds duller on the MacBeth target than normally seen and expected in a daylight shot. Adding the target gives the brain a known set of objects to "recalibrate" to. Having a neutral wall and room lighting similar to the white point of the monitor works similarly. After staring at a shot with a lot of green grass in it for 5 minutes your color perception will get biased. Taking a break from the screen and looking at the wall you "trust" as neutral sends your brain to a "neutral corner", which allows you to see the color more objectively when next looking at the monitor.
Setting Custom WB works similarly. In the under-tree scenario, if Custom WB was set on a gray card held in the green-biased light, the camera will add the needed + magenta correction to make the card and content neutral at capture. It will not affect the recorded RAW values, but it will tell the software displaying the image on the calibrated monitor to apply the same + magenta correction.
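The gist of that correction can be sketched numerically. This is an illustration only, not the camera's actual algorithm: I'm assuming a simple per-channel gain model where the gray-card sample is normalized to its green channel, and the sample values are invented.

```python
# Simplified sketch of how Custom WB could derive channel gains from a
# gray-card sample taken under green-biased light. Illustrative only;
# real cameras store WB as metadata and use more involved math.

def wb_multipliers(card_rgb):
    """Return (r_gain, g_gain, b_gain) that make the card neutral,
    normalized so the green channel is unchanged."""
    r, g, b = card_rgb
    return (g / r, 1.0, g / b)

def apply_wb(pixel, gains):
    """Apply the per-channel gains to one RGB pixel."""
    return tuple(round(v * k) for v, k in zip(pixel, gains))

# Gray card photographed under green-biased light: G reads high.
card = (180, 200, 170)
gains = wb_multipliers(card)

# The card itself becomes neutral (R = G = B) after correction.
print(apply_wb(card, gains))   # -> (200, 200, 200)
```

Boosting R and B relative to G is exactly a "+ magenta" shift, which is why a green cast is countered with magenta.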
When setting Custom WB in camera you are, in essence, shifting what you trust as the "accurate" color from the monitor to the camera. Let's assume for the sake of example the monitor calibration got messed up or has drifted since the last calibration. Open an image of a gray card shot after setting the Custom WB and you would expect it to be R=G=B. Expecting that to be true, you will see the card as neutral even if the monitor has a bias. How can you see if the screen has a bias? Click the card image with the correction eyedropper tool.
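That "click test" amounts to checking whether the sampled card pixels are equal across channels. A minimal sketch under my own assumptions (the tolerance value is arbitrary, and real eyedropper tools average a small pixel neighborhood):

```python
# Sketch of the eyedropper "click test": sample the gray-card area and
# report whether it is neutral (R = G = B within a tolerance) and, if
# not, which way the cast leans. The tolerance of 2 is an assumption.

def click_test(samples, tol=2):
    """samples: list of (R, G, B) pixels from the card area.
    Returns (is_neutral, avg_rgb)."""
    n = len(samples)
    avg = tuple(sum(p[c] for p in samples) / n for c in range(3))
    return max(avg) - min(avg) <= tol, avg

def describe_cast(avg):
    """Name the cast along the green/magenta axis."""
    r, g, b = avg
    if g > (r + b) / 2:
        return "green bias"
    if g < (r + b) / 2:
        return "magenta bias"
    return "neutral"

# A card rendered with a slight green bias fails the test.
neutral, avg = click_test([(118, 126, 117), (119, 127, 118)])
print(neutral, describe_cast(avg))   # -> False green bias
```

The point of the test is that the numbers don't adapt the way your eyes do: the card either reads R=G=B or it doesn't.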
If a monitor is calibrated correctly there shouldn't be a color shift on the card or the photo content in a Custom WB shot. Exposure may shift if the card isn't exposed close to its reflective value, but the colors shouldn't shift. Including the MacBeth color chart and the gray card in the test shot makes it easier to see any color shifts when that simple test is done. What the click on the card also does, in the event the monitor calibration isn't 100% perfect, is make the image under consideration appear, in terms of gray balance, as it would on a perfectly calibrated monitor.
With the simple expedient of setting Custom WB and then verifying it with the eyedropper tool, the file as opened will be technically neutral and displayed neutral, taking the variable of adaptive human color perception out of the loop. "Trust" regarding what is correct color shifts from what is seen on the monitor (from the baseline of thinking it is perfectly calibrated) to trusting that the camera will set Custom WB correctly off the gray card. The difference? The camera WB baseline can be verified with the eyedropper test on the card.
How do you verify your monitor calibration is correct? Compare it to the camera baseline. Take a subject outdoors on a clear sunny day, free from any reflected color casts, and have them hold a gray card and color chart. Take a shot with Daylight WB. Use that shot to set Custom WB off the card in the center circle of the viewfinder (Canon cameras), then take a second shot. Comparing the two frames will show you any difference between the Daylight WB baseline in camera on a clear day and Custom WB. They should be similar in appearance and in eyedropper values on the card. Open the files on the computer. Do the "click test" on both. If the camera did WB correctly the card should be R=G=B and no shift in color will be seen. Open the file in Photoshop or whatever you use with your normal workflow, but make no changes based on appearance, and print it as you normally do. Compare the print with the actual targets the subject was holding.
That's a test you only need to do once to evaluate monitor and printer. The image of the target on the print will likely not be an exact match to the colors in the chart, but the neutrals in the chart and the gray card should be a match if the printer profile got the "recipe" for converting RGB neutrals to CMYK neutrals correct on your printer / paper combination.
If you get in the habit of including standard targets in your test shots, then whenever you encounter problems you can't figure out, you can make a print of the test shot from that session with the targets in a similar manner, then lay out that print, your baseline test print, and the actual targets and compare the differences. Since there would be no user modifications in the loop based on monitor appearance, any changes between the two prints would logically be the result of some variance in the printing due to a different batch of ink, paper, a change in printer calibration, user error (wrong profile selected), etc.
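Comparing the baseline print and the session print can be made objective by scanning both and differencing the measured patch values. A rough sketch, assuming you already have an average RGB reading per chart patch from each scan (the patch names, readings, and tolerance are invented for illustration):

```python
# Sketch: diff the same chart patches measured from a baseline print
# scan and a later session print scan. All values here are invented;
# in practice each reading would be an average over the patch area.

def print_drift(baseline, session, tol=3):
    """Return {patch: (dR, dG, dB)} for patches whose session reading
    differs from the baseline by more than tol in any channel."""
    drift = {}
    for patch, base_rgb in baseline.items():
        deltas = tuple(s - b for s, b in zip(session[patch], base_rgb))
        if any(abs(d) > tol for d in deltas):
            drift[patch] = deltas
    return drift

baseline = {"white": (242, 242, 241), "mid_gray": (121, 121, 122)}
session  = {"white": (243, 241, 242), "mid_gray": (118, 127, 119)}

# Only the mid-gray patch shows a meaningful (greenish) shift here.
print(print_drift(baseline, session))   # -> {'mid_gray': (-3, 6, -3)}
```

Because both prints were made without appearance-based edits, any patch that drifts points to the printing side of the chain rather than to your eyes or your monitor.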
Without standard targets in the workflow it is difficult to objectively see or solve workflow problems. Even when Custom WB isn't practical, a shot of the card and target in a test shot, corrected with the eyedropper when opened, gets you to the same neutral baseline for starting the editing process. If, as suggested, you make a print at that point you can see which changes in print appearance are due to the screen / printer gamut differences more objectively than after tweaking contrast and saturation. The baseline print becomes a road map showing what needs improvement.
I'm not suggesting you do that with every photo you print, but if you've never tried the ideas suggested here you might find trying them at least once edifying, in a way you can relate to and verify with your eyes by direct comparison. A quirk of human perception is that it adapts rapidly to any single gamut it is exposed to, but presented with two different images it will detect even minor differences between the actual targets and the print and screen images of them.