Thanks again!
sinizter wrote:
Really amazing.
I would love to know the workflow on how to achieve something like this....
I'll give it a try. Each image is different in subtle ways, of course, but they all have a lot in common.
1. Acquire light-frame images (photos of the target). In ideal conditions, this takes only a few nights (even for me, an exposure freak). This was a horrible stretch of weather and general imaging conditions, so I acquired data over four months. Ick! This involves more than just pointing the camera/scope at the object being imaged, since it's imperative to keep it precisely pointed at the same place (within a small fraction of a pixel). To do this, we "guide," using the software to glom onto a star (with a separate chip in the camera, or a separate camera/guidescope piggybacking on the main scope; for this image, I used the camera's second, internal chip). This is not always possible, even with the best equipment, since wind and/or squirrelly skies will defeat all efforts. I want to take many, many exposures, each of significant duration, both so that I can toss the bad ones and still have lots to work with, and because you need a lot of individual subexposures to combine them with statistical methods that effectively reduce noise. All images taken through the Ha and OIII filters were 30 minutes in duration, and all taken through the luminance, red, green and blue filters were 15 minutes long.
2. Along the way, you take flat-frame images. These allow you to correct for flaws in the optical train, including mild vignetting and dust (particularly dust on the glass covering the imaging chip). This is done at dawn (you get an even, flat light then), and you want many for each filter (and for each side of the meridian, since the camera rotates 180 degrees).
3. You also acquire (and constantly update) a library of dark frames. These super-sensitive chips have "dark current" running through them, which causes thousands of little white spots (of varying intensity) to appear on the chip over time. This dark current is repeatable (not counting the inevitable noise), so we take a lot of individual exposures with the shutter closed, at night, combine them to reduce the noise, and then subtract the resulting master dark from the individual light frames to largely eliminate the effects of the dark current. Dark current isn't noise, but it is unwanted signal.
4. Acquisition of the light, dark and flat frames is done automatically; before I go to bed at night (I live in Seattle, USA, and my equipment is in the South Australia desert), I tell the software what I want to do that night, and it does it (weather permitting, of course). When I've acquired enough data, it's time to process it, which takes many hours.
5. The first steps are the so-called "pre-processing": using the master flats and master darks to get clean subexposures, then combining the subexposures to reduce noise, creating a master for each filter. (This image used data taken through six filters; this tedious process took several hours, since I have many dozens of subexposures and each of the six sets of filtered data has to be pre-processed separately.) Minimal sketches of the calibration and stacking steps appear after this list.
6. Then comes the fun part. For this image, I wanted to use the narrow-band data (Oxygen III, which passes light emitted only by doubly-ionized oxygen atoms, and Hydrogen Alpha, which passes light emitted only by ionized hydrogen atoms), mixing it into the luminance layer and all three primary color channels. This enhances the detail in the luminance layer and helps the color data show up when the luminance layer is applied (see the blending sketch below).
7. Once I've created the hybrid masters, I combine the new red, green and blue channels into an RGB color image. Then I spend a lot of time carefully stretching the histogram (the raw master frames show only a few stars; the nebula is so much dimmer that a 16-bit linear image typically won't show any of it; see the stretch sketch below). Then I spend some time getting rid of light gradients, correcting the color balance, filtering out remaining noise and faulty pixels, and increasing the saturation.
8. Once I'm satisfied that I'm reasonably close with the RGB layer, I do the same sort of things to the luminance layer (other than saturation, of course, since this is a gray-scale image). We use "luminance layering" for most of our images; the RGB layer carries the color data, and the luminance layer gives the final image its detail, much like analog color TV worked.
9. Then I layer the luminance on top of the color layer, and I have the first version of the real image. I spent significant time working with this image, sharpening the luminance layer, getting the color to come through correctly, fixing subtle flaws that I hadn't seen before, etc. (A sketch of the luminance layering follows this list.)
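To make steps 2, 3 and 5 concrete, here's a minimal calibration sketch in Python (numpy and astropy), assuming the master dark and master flat have already been built and saved as FITS files. The file names are placeholders, and real processing software does considerably more:

import numpy as np
from astropy.io import fits

# Load one raw light frame and the matching masters (placeholder names).
light = fits.getdata("light_Ha_30min_001.fits").astype(np.float64)
master_dark = fits.getdata("master_dark_30min.fits").astype(np.float64)
master_flat = fits.getdata("master_flat_Ha.fits").astype(np.float64)

# Subtract the dark current recorded with the shutter closed, then
# divide by the flat field (normalized to unit mean) to undo
# vignetting and dust shadows in the optical train.
flat_norm = master_flat / np.mean(master_flat)
calibrated = (light - master_dark) / flat_norm

fits.writeto("calibrated_Ha_001.fits", calibrated, overwrite=True)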
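Combining the calibrated subexposures is where taking lots of frames (step 1) pays off. Here's a sketch of sigma-clipped averaging, assuming the frames have already been registered so the stars line up:

import glob
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clip

# Load every calibrated, registered subexposure for one filter
# into a 3-D stack indexed as (frame, y, x).
frames = [fits.getdata(p).astype(np.float64)
          for p in sorted(glob.glob("calibrated_Ha_*.fits"))]
stack = np.stack(frames)

# Reject per-pixel outliers (satellite trails, cosmic-ray hits)
# beyond 3 sigma, then average the survivors; random noise falls
# roughly as the square root of the number of frames.
clipped = sigma_clip(stack, sigma=3, axis=0)
master = np.ma.mean(clipped, axis=0).filled(np.nan)

fits.writeto("master_Ha.fits", master, overwrite=True)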
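For step 6, mixing the narrow-band masters into the broadband channels can be expressed as a per-channel weighted sum. Ha lands in the deep red and OIII in the blue-green, so each goes into the matching channels; the weights below are made-up starting points that would really be tuned by eye:

import numpy as np
from astropy.io import fits

def blend(broadband, narrowband, weight):
    # weight=0 keeps the original channel; weight=1 replaces it.
    return (1.0 - weight) * broadband + weight * narrowband

# Stacked masters from the previous step (placeholder file names).
r = fits.getdata("master_R.fits").astype(np.float64)
g = fits.getdata("master_G.fits").astype(np.float64)
b = fits.getdata("master_B.fits").astype(np.float64)
lum = fits.getdata("master_L.fits").astype(np.float64)
ha = fits.getdata("master_Ha.fits").astype(np.float64)
oiii = fits.getdata("master_OIII.fits").astype(np.float64)

r_hybrid = blend(r, ha, 0.30)
g_hybrid = blend(g, oiii, 0.20)
b_hybrid = blend(b, oiii, 0.25)
lum_hybrid = blend(lum, 0.5 * (ha + oiii), 0.20)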
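The stretch in step 7 is what lifts the nebula out of the bottom of the histogram. One common curve is the midtones transfer function, which maps 0 to 0, a chosen midtone m to 0.5, and 1 to 1; the m value here is an arbitrary example:

import numpy as np
from astropy.io import fits

def mtf(x, m):
    # Midtones transfer function: boosts faint signal strongly
    # while leaving the black point and white point fixed.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

img = fits.getdata("master_L.fits").astype(np.float64)

# Normalize to [0, 1]; in linear data the nebula sits in a tiny
# sliver just above the sky background.
img = (img - img.min()) / (img.max() - img.min())

stretched = mtf(img, m=0.02)
fits.writeto("master_L_stretched.fits", stretched, overwrite=True)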
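And the luminance layering from steps 8 and 9 can be emulated by moving into a lightness/chroma color space, swapping in the processed luminance, and converting back. This sketch leans on scikit-image's CIELAB conversion as a stand-in for what dedicated LRGB combination tools do; it assumes a stretched H x W x 3 color image and a stretched luminance master, both scaled to [0, 1]:

import numpy as np
from astropy.io import fits
from skimage.color import rgb2lab, lab2rgb

rgb = fits.getdata("rgb_stretched.fits").astype(np.float64)  # H x W x 3
lum = fits.getdata("master_L_stretched.fits").astype(np.float64)

# The RGB layer supplies the chromaticity (a*, b*); the luminance
# layer supplies the detail by replacing lightness (L*, 0..100).
lab = rgb2lab(rgb)
lab[..., 0] = lum * 100.0
lrgb = np.clip(lab2rgb(lab), 0.0, 1.0)

fits.writeto("final_lrgb.fits", lrgb, overwrite=True)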
It's a labor of love, but it gives me a tremendous thrill to end up with such a pretty picture of something that looks like a fuzzy blob when looking at it through the scope with my eyepieces.
I hope this helps! This is not for the faint-of-heart!
Mark