Moderated by: Fred Miranda
FM Forums | Alternative Gear & Lenses | Join Upload & Sell


Archive 2014 · Post Processing / Down Sizing Thread
p.1 #1 · Post Processing / Down Sizing Thread

Hopefully people can share their post processing and down sizing methods in one collective place - here.

~ o ~ o ~ o ~ o ~ o ~ o ~

from Sebboh's post:

here's a link to denoir's downsizing routine: http://www.fredmiranda.com/forum/topic/936822/0&year=2010#8846510

if you don't use PS this may be helpful:

finally, here is phillip reeve's method (though i think it's changed since this thread):

~ o ~ o ~ o ~ o ~ o ~ o ~

hopefully someone can share how they process portraits with high resolving alt lenses.

Jan 05, 2014 at 07:45 PM
Samuli Vahonen
p.1 #2 · Post Processing / Down Sizing Thread

During 2013 I moved from step sharpening to single-operation resizing with a higher quality algorithm. I have automated it all in a virtual machine running Linux; however, the virtual machine is currently unavailable as it lives on a Mac Pro hard drive which is also unavailable, so I can only describe the parts of the process without the automation (looping over the files in a directory etc.). The example scripts are for the Windows command prompt; for Linux, change the "\" character to "/". There are also small variations per camera; these examples are for the Sony A7.

1. First I copy the images from the SD/CF card to the computer, then I back them up. This is because I will be modifying the files (EXIF).

2. Then I create a subdirectory for each lens I shot with, and move the RAW files into those lens-specific subdirectories (using a program like Adobe Bridge, IrfanView or Iridient Developer to browse and move the images).

3. Then I add generic EXIF data, as alternative lenses don't tell the camera their focal length etc. For this I use files like this one (example Rokkor58.txt - and yes, I'm well aware that APEX numbers should actually be used in MaxApertureValue, but this works better since the Sony guys didn't read the EXIF standard very carefully: they use F-numbers as in the example below, NOT APEX numbers):
-EXIF:LensModel=Minolta MC Rokkor-PG 1:1.2 f=58mm
-EXIF:LensInfo=58 58 12/10 12/10
-EXIF:Copyright=Samuli Vahonen

With this kind of command:
exiftool -overwrite_original -@ ..\Rokkor58.txt Rokkor58\*.ARW

4. Once this generic lens info is on the RAW files, it's time to add the shooting aperture, for example:
exiftool -overwrite_original -n -EXIF:FNumber=1.2 Rokkor58\_DSC1848.ARW

Naturally this is done for all files. If you have taken lots of shots at different apertures, use a subdirectory for each aperture and then use "f1.2\*.ARW"-style syntax.
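
The per-aperture variant can be scripted. A minimal sketch, assuming aperture subdirectories named like "f1.2" and "f2.8" (hypothetical names, not necessarily Samuli's); it only echoes each exiftool command as a dry run - remove "echo" to actually execute:

```shell
#!/bin/sh
# Dry-run sketch: derive the aperture from each subdirectory name
# ("f1.2" -> "1.2") and print the exiftool command that would tag
# every ARW file inside it. Remove "echo" to run for real.
for d in f*/; do
    d=${d%/}        # "f1.2/" -> "f1.2"
    ap=${d#f}       # "f1.2"  -> "1.2"
    echo exiftool -overwrite_original -n -EXIF:FNumber="$ap" "$d"/*.ARW
done
```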

5. Now all the lens-specific EXIF info is in place, and I move the RAW files from the subdirectories back to the main directory.

6. Next I add GPS coordinates to the RAW images. When shooting outdoors I almost always record a GPS trail with a Garmin Oregon. If the camera stores time zone and daylight saving (the Sony A7 does both), adding GPS coordinates is very easy (if not, a little more work is needed - see the exiftool documentation):
exiftool -overwrite_original -geotag "Track_05-JAN-14.gpx" .\

7. Then I rename all images into a format showing shooting date, time, lens, shooting aperture and ISO value:
exiftool "-Filename<${DateTimeOriginal}_${FocalLength}@f${FNumber}_ISO${ISO}.%e" -d %Y%m%d_%H%M%S%%-c *.ARW

With the Rokkor 58 example, a resulting file name is for example:
20140105_113343_58.0mm@f1.2_ISO100.ARW

8. Then I rename the files (in Linux by looping through them and using mv and sed; on Windows I use "Bulk Rename"), replacing the "58.0mm" part with a short, simple lens name, e.g. "Rokkor58" ==> 20140105_113343_Rokkor58@f1.2_ISO100.ARW
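
That rename loop can be sketched in plain shell (my sketch, not Samuli's exact script); it echoes the mv commands as a dry run - remove "echo" to really rename:

```shell
#!/bin/sh
# Dry-run sketch of the step-8 rename: swap the "58.0mm" part of each
# filename for the short lens name "Rokkor58" using sed. Remove "echo"
# to actually rename the files.
for f in *.ARW; do
    new=$(printf '%s\n' "$f" | sed 's/58\.0mm/Rokkor58/')
    [ "$f" != "$new" ] && echo mv "$f" "$new"
done
```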

9. Then the files are imported into RAW software (on Mac I used Apple Aperture, on Windows Adobe Lightroom). White balance, black level etc. are adjusted, and the files are exported as full-size 16-bit TIFF files in Adobe RGB.

10. TIFF-files are resized in ImageMagick.
mogrify -path TIFF1280x972 -filter Lagrange -resize 1280x972 -colorspace sRGB -profile "C:\Windows\System32\spool\drivers\color\sRGB Color Space Profile.icm" -format TIF *.TIF

In my Linux virtual machine this is done with a script which restores the EXIF from the original TIFF to the resized TIFF. For this temporary Windows machine I didn't bother to write a script; I just copy the EXIF to the new files manually:
exiftool -overwrite_original -tagsFromFile 20140105_113343_Rokkor58@f1.2_ISO100.TIF TIFF1280x972\20140105_113343_Rokkor58@f1.2_ISO100.TIF

11. Then I use an Adobe Photoshop droplet (drag and drop the files onto it and it does everything automatically), which performs a few operations:
- (optional - many people prefer the artifacts created using a gamma 2.2-2.5 colorspace) convert to a gamma 1.0 profile
- USM (250%, 0.2px)
- convert the profile to sRGB
- switch mode from 16-bit to 8-bit
- save to JPG

12. Finally I remove some crud from the EXIF (thumbnail, Photoshop tag group etc.) and add the copyright (for some reason Lightroom removes the EXIF:Copyright tag from its output TIFFs even when the RAW files have the info):
exiftool -overwrite_original -EXIF:Copyright="Samuli Vahonen" -EXIF:Software=ImageMagick -Thumbnailimage= -Photoshop:All= -MakerNotes:All= -XMP:All= -APP14:All= *.jpg

It sounds rather complicated, but when everything is automated and the scripts are easily copy-pastable, this takes < 1 minute. Step 9 is the step that takes the majority of the time anyway.

A few considerations/variations on the process:
  1. Image size: in step 10, depending on size and image content, you may prefer different resize filters (try Lanczos, Lagrange and Mitchell - Lanczos is sharpest, Mitchell softest; read more on the ImageMagick filters page). For the step 11 USM, try amounts from 250% to 400%, but I doubt you can ever get it working with a radius of 0.3px or larger, and 0.1px doesn't work either - 0.2px just works best. I have also not been able to replicate exactly the same effect in ImageMagick (which is the reason I keep Photoshop in the process).
  2. Skipping the Photoshop part: it's possible to get the final JPG directly from ImageMagick - just change the "-format" parameter to "JPG" and add the parameter "-quality 89" (89 is the JPG quality; if you get artifacts at 89, try 92 - it's extremely rare to need more than 92, as even the softest gradients tend to work well at 92).
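
Variation 2 spelled out as a command (a sketch: the "JPG1280x972" output directory is a placeholder of mine, and the "-profile" argument from the step-10 command can be kept as-is). It is echoed as a dry run since it would otherwise rewrite files - remove "echo" to execute (requires ImageMagick):

```shell
#!/bin/sh
# Dry-run sketch of variation 2: final JPGs straight from ImageMagick,
# skipping the Photoshop droplet. Identical to the step-10 mogrify apart
# from "-format JPG" and "-quality 89". Remove "echo" to run for real.
echo mogrify -path JPG1280x972 -filter Lagrange -resize 1280x972 \
  -colorspace sRGB -format JPG -quality 89 "*.TIF"
```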


Jan 05, 2014 at 11:56 PM
p.1 #3 · Post Processing / Down Sizing Thread

I looked into capture sharpening (CS) some time back and found that many practitioners rely on this initial 'pass'. Jeff Schewe is on record as saying that if you don't attend to CS 'you are leaving image quality on the table'. He seems to believe this is the best stage at which to get started on sharpening. Me too.

As a Sony FF user with 'good' lenses my starting point in ACR for general detailed work is Tim Ashley's RX1 settings: 60/0.7/70/20

The amount I do in PS is shrinking as ACR/LR get better, so I pay close attention to WB/sharpening/noise and use everything else sparingly - down to 5-10% in PS now. I use it only to refine colour (using Joseph Holmes profiles) and to add contrast to parts of the tone distribution, as ACR is limited in what it can do here - I often want mid tones from level 75-170 to have more contrast, for example, and I use Tony Kuyper's luminosity mask actions for this, working on layers with blending modes: Soft Light, Hard Light, sometimes Multiply or Screen for tone lifting/dropping. I also believe in matching colour spaces to images, and use ProPhoto in ACR into JH's DCAM3 in PS before fine-tuning with assigned chroma profiles.

After that I simply flatten and downsize in 50% decrements, then take a close look at the final web-size image, sometimes adding a layer with a small amount of Smart Sharpen using some protective settings and settling on a low opacity percentage. In general I don't do this last step, nor do I see intermediate sharpening as a good idea for me, provided I get CS right.
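
The 50%-decrement schedule can be sketched as a loop: halve while the next half is still at least the target, then do one exact resize. The 7360px source and 1200px target here are hypothetical numbers (an a7R-width file down to a typical web size), not the poster's:

```shell
#!/bin/sh
# Sketch of downsizing in 50% decrements: halve the width while the
# half is still >= the target, then one final resize to the exact
# target. Only prints the plan; the resizing itself happens in PS.
w=7360        # hypothetical source width
target=1200   # hypothetical web target
while [ $((w / 2)) -ge "$target" ]; do
    w=$((w / 2))
    echo "resize to ${w}px"
done
echo "final resize to ${target}px"
```

For these numbers the plan is 3680px, then 1840px, then a final resize to 1200px.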

Finally, I convert to sRGB, switch Mode from 16 to 8 bit, and save as whatever grade of JPEG I need or can use.

I would also be very interested in a7r owners' sharpening techniques. So far I find I need less of the basic formula, especially Detail but also Amount... I see radius as subject-dependent to a large degree. The files look very good OOC to start with, and it might take some time to settle on what is preferable.

ken, for portraits I use less of the strong controls of Amount and Detail and start with standard radius of 1.0, then fine-tune. But hey, so many here are better than I am.

Jan 06, 2014 at 04:00 AM
Peter Le
p.1 #4 · Post Processing / Down Sizing Thread

Interesting.... "Tag"

Jan 06, 2014 at 04:17 AM


p.1 #5 · Post Processing / Down Sizing Thread

Lately I have been using step sharpening/downsizing, but a weaker version of it. It also depends on the camera: on the M9 I will use a very weak script, on Canon files something a bit stronger, and Sony somewhere in the middle. On occasion I also use Andreas Resch's Websharpener script and find it rather handy:

Jan 06, 2014 at 05:33 AM
Phillip Reeve
p.1 #6 · Post Processing / Down Sizing Thread

After images are imported into LR and culled down to a few, I usually apply a preset; my most often used is this one:

[screenshot: DRODark_noProcessing by reevedata, on Flickr]

[screenshot: DRODark by reevedata, on Flickr]

Sometimes I make further corrections, but most often I don't, and I rarely spend more than 2 or 3 minutes processing a picture.

Until yesterday I used the resizing method described in the link above.
Yesterday I came upon a script which does the same job with just one click from LR, which is much less work than doing it in PS, plus it can write EXIF info on the frame.
It is still in development and my workflow isn't final yet, but to me this is great news.

Jan 06, 2014 at 08:19 AM
Sami Ruusunen
p.1 #7 · Post Processing / Down Sizing Thread

Before I started using high resolution screens I used the step sharpening method which I learned from juzaphoto.com many years ago. It worked quite well for landscape photos where the whole photo area is sharp. In portraits I just sharpened the eyes and sometimes hair.

After iPads and other high resolution tablets became popular, I started doing things the easy way:

Landscape photos or other photos where the whole image area is sharp:
- Export the photo from Lightroom or Aperture (mild sharpening at the export stage for cameras with an OLPF)
- Resize the photo to 2048x1536 in Photoshop (bicubic sharper) and add some Smart Sharpen in Photoshop

Portraits or any paid work with shallow DOF:
- Export the photo from LR/Aperture (no sharpening at the export stage)
- Extract the sharp areas into their own image in Photoshop
- Resize the sharp-part image to final size with bicubic sharper & the non-sharp part to final size with bicubic/bicubic softer
- Combine the two images so the sharp part is on the top layer, and add some Smart Sharpen if needed

Currently I optimize my photos to 4096x2160 and let the viewing software do the final step, since I can't possibly know what kind of screen or software the viewer is going to have.

The Smart Sharpen introduced in PS CS5 and the 100% preview when resizing photos (PS CC) help me a lot to understand how the final image will look.

The biggest problem currently is not the images themselves but the way they are displayed in the web browser. Viewed on a high resolution screen, most of the great photos posted for the web look like crap if the website is not built to scale the photos correctly for higher resolution screens.

Jan 06, 2014 at 11:06 AM

p.1 #8 · Post Processing / Down Sizing Thread

Tag, very interesting.

Jan 07, 2014 at 05:44 PM


