Archive 2010 · Post Processing Techniques

  
 
denoir
p.10 #1 · Post Processing Techniques


AhamB wrote:
The test image. In PS both are indeed neutral so it looks like Firefox is rendering your image test_denoir with a green cast here for some reason...


Monitor color profile... Firefox is not properly color managed.


Regarding the relevance of the test image: "line pairs" are not often seen in nature, so I wonder how useful it will be to look at the appearance of a set of line pairs processed/resized in different ways. Are you hoping to find some "golden mean" conversion procedure that creates the fewest artifacts while retaining the most detail?


Heh, resolution is resolution regardless of what you show. Line pairs are a simple representation of spatial frequency, and that's why they're commonly used for tests. MTF charts are another example. That does not, however, mean that they're only applicable to test charts.

Here's an example from an old post - MTF, theory vs practice:

This is the 35/1.4 Lux ASPH @ f/5.6:
http://peltarion.eu/img/comp/mtf/mtf-lux35_56.jpg

We're interested in the 40 lp/mm lines.

In expanded form:
http://peltarion.eu/img/comp/mtf/lux35_s40_flat.jpg


Here's the test image:

http://peltarion.eu/img/comp/mtf/lux35-198.jpg


If we superimpose the sagittal MTF data on the image:
http://peltarion.eu/img/comp/mtf/lux35-198b.jpg

Here's a crop of an object that sits both in the high-resolution zone and in the zone where the resolution drops due to field curvature. The arrow indicates the sagittal (radial) direction:

http://peltarion.eu/img/comp/mtf/lux35-198_c1.jpg

We do the same for the tangential curves:
http://peltarion.eu/img/comp/mtf/lux35-198c.jpg

And an example of tangential blur:
http://peltarion.eu/img/comp/mtf/lux35-198_c2.jpg

The MTF charts are measured on line pairs, either with a plain test chart or through interferometry. That does not, however, mean that they are applicable only to images that contain tightly packed pairs of lines like the ones shown in the example above.

The same is true for an image constructed for testing resizing methods. The idea is that you can generalize from the results.
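
For anyone who wants to reproduce that kind of target, here is a rough sketch of how one could generate it (Python/NumPy rather than the Matlab used in this thread; the pixel pitch follows the M9 figures quoted further down, everything else is made up for illustration):

import numpy as np

# Sensor model from the thread: 36 mm wide, 5212 px wide (Leica M9), so ~144.8 px/mm
PX_PER_MM = 5212 / 36.0

def line_pair_band(lp_per_mm, width, height, lo=128, hi=0):
    # One line pair = one 'lo' line plus one 'hi' line (50% gray and black here).
    half_period = PX_PER_MM / (2.0 * lp_per_mm)   # width of a single line, in pixels
    x = np.arange(width)
    phase = np.floor(x / half_period).astype(int) % 2
    row = np.where(phase == 0, lo, hi).astype(np.uint8)
    return np.tile(row, (height, 1))

# Four bands: 10, 20, 40 and ~72 lp/mm (the sensor maximum), stacked vertically
target = np.vstack([line_pair_band(f, 1200, 300) for f in (10, 20, 40, 72)])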



Aug 15, 2011 at 12:06 PM
Toothwalker
p.10 #2 · Post Processing Techniques


denoir wrote:
I generated an image in Matlab with four sets of line pairs - 10 lp/mm, 20 lp/mm, 40 lp/mm and (sensor max) lp/mm. I used the M9 sensor as a model - i.e 24x36mm that produces 5212x3468 pixel images. Sensor max in that case is 72 lp/mm. For each set the lines go from 50% gray to black.


I notice that the gray lines are two pixels wider than the white lines, except at 72 lp/mm.
Not that it matters much for this test.


Here's the full image - run your script on it.


I don't have a single script, as I am in a phase of testing and evaluation. The following examples are all the result of a single downsize operation.

Bicubic resize in PSP 7.
Horrendous aliasing.

Bicubic resize in PSP X3.
Much better suppression of aliasing, but with deviating grayscale values. The mean pixel value is much higher than that of all other results. (This is the same algorithm that lost the green of the Berlin green fringing.)

Bicubic resize in Matlab.
The best of the three. Apparently, there is no uniform definition/implementation of bicubic resizing. All four bicubic results (including your Photoshop output) are different.


FFT resize in Matlab.
Here I made a script that approaches the ideal brick-wall filter, i.e. a sinc filter in the spatial domain or a rectangular window in the frequency domain. It has higher resolution than any other method and uncompromising anti-alias filtering, but unfortunately the ringing in the spatial domain (clearly noticeable around the text) renders it less suitable for actual applications.
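
Toothwalker's actual script isn't posted, but the brick-wall idea can be sketched roughly like this (Python/NumPy on a grayscale float array; cropping the centered spectrum is the rectangular window in the frequency domain):

import numpy as np

def fft_downsize(img, out_h, out_w):
    # Ideal (brick-wall) low-pass: keep only the low-frequency part of the spectrum.
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y0 = h // 2 - out_h // 2
    x0 = w // 2 - out_w // 2
    Fc = F[y0:y0 + out_h, x0:x0 + out_w]
    out = np.fft.ifft2(np.fft.ifftshift(Fc)).real
    # Compensate for the size-dependent FFT normalization so brightness is preserved
    return out * (out_h * out_w) / (h * w)

Expect visible ringing around hard edges with this approach, which is exactly the drawback described above.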

This one looks useful:
Lanczos3 kernel in Matlab.
It avoids the ringing of the sinc filter at the expense of mild aliasing in the 20 lp/mm field.
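
Outside Matlab the same filter is widely available; for example Pillow's LANCZOS resampling is a 3-lobe Lanczos. A rough sketch (the kernel definition plus a library call; the file name is hypothetical):

import numpy as np
from PIL import Image

def lanczos3(x, a=3):
    # Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, zero outside
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# In practice you just let the library do it, e.g. with Pillow:
img = Image.open("test_target.png")                    # hypothetical file name
small = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)
small.save("test_target_lanczos3.png")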
Aug 15, 2011 at 01:43 PM
denoir
p.10 #3 · Post Processing Techniques


Toothwalker wrote:
I notice that the gray lines are two pixels wider than the white lines, except at 72 lp/mm.
Not that it matters much for this test.


Oops, my bad. An error in the generating script (Matlab indices start at 1, not 0).

Fixed:
Full image
Resize bicubic (PS)
Resize denoir


I don't have a single script, as I am in a phase of testing and evaluation. The following examples are all the result of a single downsize operation.



Interesting, but you can see that in all the bicubic cases (except the Photoshop & PSP ones) the 72 lp/mm lines are continuous gradients - i.e. you don't see them as line pairs any more. There's also a drop in overall contrast.

What would one want ideally from a resize algorithm?

1) The preservation of detail. On your resized image you'd still want to see the 40 lp/mm stuff even though the picture size only allows for 20 lp/mm. This is possible by boosting the micro contrast enough (i.e. sharpening) so that any weighted resize (like bi-cubic) becomes more like direct sub-sampling (reducing the impact of the averaging).

2) The preservation of contrast - both absolute and relative. In the example above, you'd want the gradients to remain continuous gradients and to have the same start and end points.

3) The reduction of aliasing.


In the case above, the image has been resized by a factor of about 4.3. It can correctly show up to about 17 lp/mm. So the ideal resize would look something like this:
http://peltarion.eu/img/comp/moire/test_ideal.jpg


In principle you represent everything up to the maximum spatial frequency that the resized image can show, and everything above that at the maximum frequency (i.e. lines one pixel wide). That's the theory anyway. The problem is in coming up with the method that performs the best possible mapping between lp_max_original=>lp_max_resized.
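
The Nyquist bookkeeping above in code form (trivial, but it makes the "about 17 lp/mm" explicit):

sensor_nyquist = 72.0        # lp/mm, M9 sensor maximum used for the test image
resize_factor = 4.3          # linear downsize factor in the example above
target_nyquist = sensor_nyquist / resize_factor
print(round(target_nyquist, 1))   # ~16.7 lp/mm, i.e. the "about 17 lp/mm" figure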



Aug 15, 2011 at 02:25 PM
Toothwalker
p.10 #4 · Post Processing Techniques


denoir wrote:
Oops, my bad. An error in the generating script (Matlab indices start at 1, not 0).

Fixed:
Full image
Resize bicubic (PS)
Resize denoir


Interesting, but you can see that in all the bicubic cases (except the Photoshop & PSP ones) the 72 lp/mm lines are continuous gradients - i.e. you don't see them as line pairs any more.


That is good, because we don't want to see line pairs that should not be there.


There's also a drop in overall contrast.

What would one want ideally from a resize algorithm?

1) The preservation of detail. On your resized image you'd still want to see the 40 lp/mm stuff even though the picture size only allows for 20 lp/mm. This is possible by boosting the micro contrast enough (i.e. sharpening) so that any weighted resize (like bi-cubic) becomes more like direct sub-sampling (reducing the impact of the averaging).

2) The preservation of contrast - both absolute and relative. In the example above, you'd want the gradients to remain continuous gradients and to have the same start and end
...

I disagree. Since the image can support up to about 17 lp/mm, it cannot support 20 lp/mm, 40 lp/mm, or 72 lp/mm. Thus I don't want to see the 40 lp/mm stuff, nor even the 20 lp/mm stuff. You are talking about spatial frequencies that cannot be represented by the smaller image. Hence any detail in these fields after downsizing has to be an artifact. The ideal resize would show line pairs in the 10 lp/mm field, and smooth continuous gradients with the proper contrast in all other fields - without any high-frequency pattern superimposed.




Aug 15, 2011 at 03:05 PM
shoenberg3
p.10 #5 · Post Processing Techniques


Denoir: Is there a reason why you convert to lab mode before doing the sharpening steps?




Aug 15, 2011 at 03:26 PM
AhamB
p.10 #6 · Post Processing Techniques


shoenberg3 wrote:
Denoir: Is there a reason why you convert to lab mode before doing the sharpening steps?

He doesn't only convert to Lab; he also selects the Lightness channel, so that the color saturation is completely untouched by the sharpening. When sharpening is applied to the color channels as well, saturation is affected.
Demonstration of the differences between Lab and RGB mode sharpening: https://www.fredmiranda.com/forum/topic/860134/113#8733724

An alternative method is to sharpen a separate (duplicate) layer and set blend mode to luminosity.
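
The same idea outside Photoshop, as a rough sketch (this is not AhamB's actual workflow; it uses YCbCr as a stand-in for the Lab Lightness channel, which illustrates the same "leave the color alone" point):

from PIL import Image, ImageFilter

def sharpen_luma_only(img, radius=1.0, percent=150, threshold=3):
    # Unsharp-mask only the luma channel so saturation and hue stay untouched
    y, cb, cr = img.convert("YCbCr").split()
    y = y.filter(ImageFilter.UnsharpMask(radius=radius, percent=percent,
                                         threshold=threshold))
    return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")

sharp = sharpen_luma_only(Image.open("photo.jpg"))     # hypothetical file name
sharp.save("photo_sharp.jpg")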




Aug 15, 2011 at 03:35 PM
AhamB
p.10 #7 · Post Processing Techniques


denoir wrote:
Monitor color profile... Firefox is not properly color managed.


I know that Firefox's color management implementation isn't perfect, but that doesn't explain why the bicubic resized test image was perfectly neutral in the browser (while the one resized with your routine had a green cast).




Aug 15, 2011 at 03:41 PM
shoenberg3
p.10 #8 · Post Processing Techniques


Is there an advantage to doing it in Lab mode, as opposed to your alternative method (duplicate layer)? It seems the latter method would be much quicker.


Aug 15, 2011 at 03:42 PM
AhamB
p.10 #9 · Post Processing Techniques


Have a look at those images in the thread I linked to. If you like the boost in saturation, you can sharpen in RGB mode, but if you want to keep the colors as they are, Lab (or separate layer in luminosity blend mode) is needed. In some cases the difference is almost imperceptible, but in others RGB sharpening causes a bit of a color cast in parts of the image.


Aug 15, 2011 at 03:45 PM
shoenberg3
p.10 #10 · Post Processing Techniques


I think you misunderstood my second question. I was wondering if there would be any difference between doing it in Lab and doing it on a separate layer in luminosity blend mode, as the latter method seems simpler.


Aug 15, 2011 at 03:46 PM
denoir
p.10 #11 · Post Processing Techniques


Toothwalker wrote:
I disagree. Since the image can support up to about 17 lp/mm, it cannot support 20 lp/mm, 40 lp/mm, and 72 lp/mm. Thus I don't want to see the 40 lp/mm stuff and not even the 20 lp/mm stuff. You are talking about spatial frequencies that cannot be represented by the smaller image. Hence any detail in these fields after downsizing has to be an artifact. The ideal resize would show line pairs in the 10 lp/mm field, and smooth continuous gradients with the proper contrast in all other fields - without any high-frequency pattern superimposed.


Not at all. The only place where it is problematic is when you have strictly periodic patterns. Otherwise you're just doing a subsampling. Say for instance that you photograph a gravel road. The individual particles of the gravel may very well be finer than the 17 lp/mm that you can show directly. However, by a proper resize & sharpen you can boost them to be visible at the 17 lp/mm level.

Here's an example that I've posted in a previous discussion. First a 100% crop (M9 so 72 lp/mm maximum resolving power):

http://peltarion.eu/img/comp/mtf/A_crop.jpg

Now we take a look at standard bicubic resize:

http://peltarion.eu/img/comp/mtf/A_bicubic.jpg

As you can see, a lot of it is lost, but not all of it. It still does better than 17 lp/mm because it smears the high-frequency components across multiple pixels.

Now let's take a look at the multi-level sharpen & resize version:

http://peltarion.eu/img/comp/mtf/A_step.jpg

As you can see, a lot more detail has been recovered, and in the image you can definitely see detail that had spatial frequencies > 17 lp/mm in the original crop. It restores the texture that gets lost in the plain bi-cubic resize.
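
denoir's exact multi-level recipe isn't spelled out in this post, so treat the following as a hypothetical sketch of the general "sharpen a little, downsize a little, repeat" idea (Python/Pillow; the step factor and unsharp-mask amounts are invented):

from PIL import Image, ImageFilter

def stepped_sharpen_resize(img, target_w, step=0.7, radius=0.8, percent=120):
    # Alternate light sharpening with moderate downsizing instead of one big resize.
    usm = ImageFilter.UnsharpMask(radius=radius, percent=percent, threshold=2)
    while img.width * step > target_w:
        img = img.filter(usm)
        img = img.resize((int(img.width * step), int(img.height * step)),
                         Image.BICUBIC)
    target_h = round(img.height * target_w / img.width)
    return img.resize((target_w, target_h), Image.BICUBIC).filter(usm)

small = stepped_sharpen_resize(Image.open("A_crop.jpg"), 600)   # illustrative only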

Of course, there is a question of personal aesthetic preference. My own is to try to keep as accurate a representation of the real-world subject as possible, as well as to preserve the lens rendering characteristics. If you have an image like the one above, with high micro contrast in the fine detail (~20-40 lp/mm), then I'd like that to be seen in the final image as well.


shoenberg3 wrote:
I think you misunderstood my second question. I was wondering if there would be any difference between doing it in Lab and doing it on a separate layer in luminosity blend mode, as the latter method seems simpler.


No difference in principle, but IMO Lab mode is much easier to use, with no need to mess around with multiple layers.


AhamB wrote:
Have a look at those images in the thread I linked to. If you like the boost in saturation, you can sharpen in RGB mode, but if you want to keep the colors as they are, Lab (or separate layer in luminosity blend mode) is needed. In some cases the difference is almost imperceptible, but in others RGB sharpening causes a bit of a color cast in parts of the image.


The primary benefit of doing it on the L channel only is to minimize the color noise and CA, if any.



Aug 15, 2011 at 04:38 PM
theSuede
p.10 #12 · Post Processing Techniques


I'd like to add three things about what has been written over the last few pages (I didn't read too diligently, I just skimmed it...).

1) White balance in raw converters is NEVER, EVER done in Lab. That would be very performance-degrading, and actually a very inaccurate way to correct colour.
You have to do WB in the first raw stage, before you even put the raw values through any interpolation. Done at this stage, you get as close to zero conversion artefacting as the value accuracy will allow.

2) Most of the resampling colour faults you see in pictures with high spatial frequencies stem from the fact that you are trying to apply linear theory math to RGB values that are both S-curve- and gamma-treated. You can get around some of those issues by actually doing the resampling in Lab, but you can never get around the fact that you're trying to interpolate between values that are already messed up (have the wrong scene-related contrast).

Working on curve- and gamma-treated values will always have a large impact on the relative detail contrast. The contrast that remains after a rescale depends on the average value of the surrounds - midrange contrasts get a boost, and bright areas are often undersharpened, dark areas oversharpened.
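
To make point 2 concrete, here is a rough sketch of resampling in (approximately) linear light: decode the gamma, resize, re-encode (Python/NumPy/Pillow; a plain 2.2 power curve stands in for the exact sRGB transfer function, and the 8-bit intermediate is a shortcut a real pipeline would avoid):

import numpy as np
from PIL import Image

def resize_linear_light(path, scale, gamma=2.2):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    linear = rgb ** gamma                              # undo the display gamma
    lin8 = Image.fromarray(np.uint8(np.clip(linear, 0, 1) * 255 + 0.5))
    small = lin8.resize((int(lin8.width * scale), int(lin8.height * scale)),
                        Image.LANCZOS)
    out = (np.asarray(small, dtype=np.float64) / 255.0) ** (1.0 / gamma)
    return Image.fromarray(np.uint8(np.clip(out, 0, 1) * 255 + 0.5))

# compare against a straight resize of the gamma-encoded file to see the difference
resize_linear_light("photo.jpg", 0.25).save("photo_linear_resize.jpg")  # hypothetical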

Unfortunately, there's no (practical) way to do things "in the right order" until we have raw-converters that actually give you that option. Until then, we have to make do with what's "visually pleasing", and this is somewhat of an arcane art - it isn't quantifiable.

3) As in most other things, the large software manufacturers are terrified of things that "can go wrong". So almost every editing action that's available is tuned for safety (non-aliasing, non-artefacting) and for performance - performance as in "time spent" or "processing cycles spent", not as in "accuracy or pictorial performance". The same goes for resampling algorithms. Bicubic is chosen because of its almost total resistance to aliasing in natural scenes (not test charts), even though there are other easy ways to do it with much higher accuracy and not much more processing power involved.

Lanczos in the 3- and 5-lobe versions is very good, almost as good as a complete sinc reconstruction (which would be the same as the FFT resize Toothwalker showed).



Aug 15, 2011 at 05:46 PM
Toothwalker
p.10 #13 · Post Processing Techniques


denoir wrote:
Not at all. The only place where it is problematic is when you have a strictly periodic patterns.


A periodic pattern just reflects one of the constituent frequencies of a normal photographic image. In terms of audio, you want to hear evidence of a 72-Hz tone in an audio file sampled at 33 Hz. You won't hear any, and if you do, you are listening to artifacts such as a tone at a lower frequency due to aliasing.


Otherwise you're just doing a subsampling. Say for instance that you photograph a gravel road. The individual particles of the gravel may very well be finer than the 17 lp/mm that you can show directly. However, by a proper resize & sharpen you can boost them to be visible at the 17 lp/mm level.


An individual pebble is represented by a broad distribution of spatial frequencies, and after the resize only those <17 lp/mm can remain. All frequency content >17 lp/mm is necessarily lost.


Here's an example that I've posted in a previous discussion. First a 100% crop (M9 so 72 lp/mm maximum resolving power):

http://peltarion.eu/img/comp/mtf/A_crop.jpg

Now we take a look at standard bicubic resize:

http://peltarion.eu/img/comp/mtf/A_bicubic.jpg

As you can see a lot of it is lost, but not all of it. It still does better than 17 lp/mm because it smears the high frequency components across multiple pixels.

Now let's take a look at the multi-level sharpen & resize version:

http://peltarion.eu/img/comp/mtf/A_step.jpg

As you can see a lot more detail has been recovered and you can in the image definitely see detail that have spatial frequencies > 17 lp/mm in the original crop.
...Show more

I will not comment on the standard bicubic resize, because only Photoshop knows what is happening there. Concerning your multi-level sharpen & resize version: if the detail is indeed an accurate representation of the real-world subject, you did a good job.
It appears you did, because a single lanczos3 resize followed by a single sharpening step looks similar:

http://toothwalker.org/temp/fm/A_crop_lanczos3.jpg



Of course, there is a question of personal aesthetic preference. My own is to try to keep as accurate a representation of the real-world subject as possible, as well as to preserve the lens rendering characteristics. If you have an image like the one above, with high micro contrast in the fine detail (~20-40 lp/mm), then I'd like that to be seen in the final image as well.


You can't see it, because it is not there.





Aug 15, 2011 at 06:09 PM
denoir
p.10 #14 · Post Processing Techniques


Toothwalker wrote:
You can't see it, because it is not there.


No, I don't think you've understood what I mean. My fault, I'm sure. I'm not suggesting that you can cheat Mr. Nyquist.

However, consider two things. First, what happens when you sample at an insufficient frequency? Aliasing. You don't end up with nothing - you end up with something of a lower frequency. You still record a signal - it just isn't an accurate representation of the full high-frequency signal.

Second, consider how bicubic (and most other) downsampling works. Each target pixel is a weighted combination of neighboring pixels in the source image, so it's not a plain selection of every Nth pixel.

Let's take a look at a pixel-level example. First up is Photoshop bicubic resize. We start with a 16x16 image and reduce it to 8x8 and finally to 4x4. Note that 8x8 is the last level where we can theoretically capture the high-frequency pattern - the lines have a width of just one pixel.


http://peltarion.eu/img/comp/moire/C_bicubic.jpg

You can see the effect of the blending of neighboring pixels. In the final target 4x4 you can still see a hint of the separation between the lines but the contrast is very low.

If we instead apply sharpening before the 8x8 and after the 4x4 resize we now get this:
http://peltarion.eu/img/comp/moire/C_sharpen.jpg

Here we can see the black/white alternating lines again - at half the frequency, as expected. The point, however, is that we can see them as separate lines and not as a gray blur.

To demonstrate this with a better example, consider a mixed 16x16 image that contains both N lp/mm and 2*N lp/mm, where 2*N is the maximum supported since those lines are only one pixel wide.

Bicubic first:
http://peltarion.eu/img/comp/moire/D_bicubic.jpg

With sharpening after resize:
http://peltarion.eu/img/comp/moire/D_sharpen.jpg

So, what do we have here? Well, the N lp/mm lines have been accurately reproduced - they were 2 pixels wide, and they are 1 pixel wide now. More interesting are the former 2*N lp/mm lines. Their frequency has been halved, so they're at the N lp/mm level now. Thanks to the downsampling we've doubled their period, but they're still lines, and the diagonal line through them is also visible.

Now this may seem extremely obvious, but there's an interesting aspect to it. Bicubic resize is usually used rather than direct subsampling to avoid unwanted aliasing, but more importantly to avoid subsampling at the 'wrong' position. If you have lines that are 1 pixel wide and you pick every second line, you'll end up with something completely black or something completely white. That's no good. However, if you use bicubic in combination with a contrast increase (i.e. sharpening) you get the best of both worlds. First, you don't get the all-black or all-white result, but a gray goo in which the signal at half the frequency is barely visible. Then you nuke it with sharpening to increase the contrast, and you end up with a clean high-contrast signal at half the frequency of the original. You end up with the maximum resolution at high contrast.
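
A numerical stand-in for the 16x16 example (hypothetical Python/Pillow; denoir's crops were made in Photoshop, and the unsharp-mask settings here are arbitrary):

import numpy as np
from PIL import Image, ImageFilter

# 16x16 pattern of vertical black/white line pairs, each line 2 px wide
row = np.array([0, 0, 255, 255] * 4, dtype=np.uint8)
img16 = Image.fromarray(np.tile(row, (16, 1)), mode="L")

# plain bicubic chain: 16 -> 8 -> 4 (the 4x4 should come out as low-contrast gray)
plain4 = img16.resize((8, 8), Image.BICUBIC).resize((4, 4), Image.BICUBIC)

# sharpen before the 8x8 step and again after the 4x4 step, as described above
usm = ImageFilter.UnsharpMask(radius=1, percent=300, threshold=0)
sharp4 = (img16.filter(usm)
               .resize((8, 8), Image.BICUBIC)
               .resize((4, 4), Image.BICUBIC)
               .filter(usm))

print(np.asarray(plain4))    # compare the contrast of the two 4x4 results
print(np.asarray(sharp4))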

Toothwalker wrote:
It appears you did, because a single lanczos3 resize followed a single sharpening step looks similar:

http://toothwalker.org/temp/fm/A_crop_lanczos3.jpg


That does look good. That's Matlab's kernel implementation to be used with imresize, right? I'll have to take a closer look at it. The problem with my method is that it's not general enough. The results are inconsistent depending on the camera used and on the final resolution.



Aug 15, 2011 at 07:36 PM
Toothwalker
p.10 #15 · Post Processing Techniques


denoir wrote:
To demonstrate this with a better example, consider a mixed 16x16 image that contains both N lp/mm and 2*N lp/mm where 2*N is the maximum supported as they're only one pixel wide.

Bicubic first:
http://peltarion.eu/img/comp/moire/D_bicubic.jpg

With sharpening after resize:
http://peltarion.eu/img/comp/moire/D_sharpen.jpg

So, what do we have here? Well, the N lp/mm lines have been accurately reproduced - they were 2 pixels wide while they are 1 pixel now. More interesting is the former 2*N lp/mm lines. The frequency has been halved so they're at the N lp/mm level now. Thanks to the down sampling we've doubled the period, but they're still lines and the diagonal line
...

When you say "they are still lines", the immediate question is: "What are still lines?" The original lines? No, it can't be, because there are four black lines in the right half of the original and only two in the resized version. Your resize suggests that the left and right halves have a similar pattern, which is clearly not an accurate representation of the original. Ergo, you show artefacts. The bicubic version is of higher fidelity.



That does look good. That's matlab's kernel implementation to be used with imresize, right?


Yup.

To summarize, if you insist that the fine line pattern at the left side of this target
http://toothwalker.org/temp/fm/pattern.jpg
should leave detail after downsampling, you are in fact objecting to Mr. Nyquist. Do we want to see lines in smaller versions? No, of course not.

Concerning the few isolated lines in the right half, these reflect your pebbles. The white lines are one pixel wide, like in the left half. Do we want to see these lines in smaller versions? Yes, of course. Something like this

http://toothwalker.org/temp/fm/pattern_lanczos3.jpg

and subsequently one can apply sharpening to boost edge contrast.

There is no high-frequency content preserved in the right half that has been removed in the left half. We are looking at low-frequency information that was there from the start.







Aug 16, 2011 at 10:51 AM
denoir
p.10 #16 · Post Processing Techniques


OK, I see I'm still not getting my point across. Let's go back to basics. What does Mr. Nyquist tell us?

If we have a signal, we must sample it at double the maximum frequency to be able to perfectly reconstruct it:
http://peltarion.eu/img/comp/moire/E_nyq.jpg

If the red dots are the samples, then we can reconstruct this sine curve. What happens if we do this instead - sample at a lower frequency?
http://peltarion.eu/img/comp/moire/E_nyq2.jpg

When we reconstruct the signal we can't reproduce the high frequency, but we still get a signal:
http://peltarion.eu/img/comp/moire/E_nyq3.jpg

What I'm saying is that this is a much better reconstruction (although of course imperfect) than a flat line at 0 would be - which you are saying is the optimal answer. It definitely preserves more information than just not bothering with any reconstruction just because you can't represent as high a frequency as the original signal had.
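
The same point in numbers (a small NumPy sketch; 72 and 33 are just the figures from the audio analogy earlier in the thread):

import numpy as np

f_signal = 72.0                      # original "tone" frequency
f_sample = 33.0                      # sample rate well below 2 * f_signal
t = np.arange(0, 1, 1.0 / f_sample)  # one second of sample instants
samples = np.sin(2 * np.pi * f_signal * t)

# the identical samples are produced by a lower-frequency sine: 72 folds down to 6
f_alias = abs(f_signal - round(f_signal / f_sample) * f_sample)
alias = np.sin(2 * np.pi * f_alias * t)
print(f_alias)                          # 6.0
print(np.allclose(samples, alias))      # True: the sampled data cannot tell them apart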



Aug 16, 2011 at 12:38 PM
kwalsh
p.10 #17 · Post Processing Techniques


denoir wrote:
What I'm saying is that this is a much better reconstruction (although of course imperfect) than a flat line at 0 would be - which you are saying is the optimal answer.


I'm not sure you'll find universal agreement with that. In fact, for repeating patterns that aren't a perfect integer multiple of the sample rate, the result is called moiré, which is rarely considered desirable.

I think you are driving at the fact that perhaps aliased information isn't always a bad thing aesthetically, which seems reasonable in some cases. I think what Paul is driving at is that plenty of detail is recoverable within the Nyquist bandwidth for small isolated impulse functions in the image (e.g. pebbles) if you use the proper kernel, so there is more risk than benefit in allowing aliasing to occur. Apologies to both of you if I'm failing to parse your intent and just muddying the waters by injecting myself here!

It definitely preserves more information than just not bothering with any reconstruction just because you can't represent as high a frequency as the original signal had.

Hmmm... I have a hard time with this interpretation. What is "information"? If I alias and cause visually objectionable moiré across part of the image, have I "preserved" information, or have I in fact irreversibly corrupted the lower-frequency scene information with aliased artifacts that have no representation in the real scene? Isn't that more akin to destroying information?

Certainly in some cases the aliased "false information" may be visually pleasing or may increase "apparent detail", but is this really "preserving" information or just serendipitous image enhancement that could equivalently and more safely be performed with a proper downsampling kernel and sharpening?

I may be influenced here by my "day job" in which your aliasing proposal is considered an unmitigated disaster clearly corrupting information. The only case in which I know aliasing is used to any advantage is radar signal processing, but in those cases multiple sample rates are used along with things akin to the Chinese remainder theorem to resolve the ambiguity - these techniques depend on the signal source being quasi-stationary which isn't at all applicable to image data. (And I exclude undersampling of bandpass signals here which is sometimes somewhat erroneously considered aliasing despite the fact there is no ambiguity in the sampled data).

There are pitfalls in drawing parallels between communications sampling theory and what "looks good" in image processing - so I've been watching this discussion with interest. But the concept of "preserving information" through the introduction of ambiguity due to improperly bandlimited downsampling raised the DSP hairs on the back of my neck to the point that I felt like posting.

Thanks to both of you for the continuing good read!

Ken



Aug 16, 2011 at 01:20 PM
wfrank
p.10 #18 · Post Processing Techniques


denoir wrote:
What I'm saying is that this is a much better reconstruction (although of course imperfect) than a flat line at 0 would be - which you are saying is the optimal answer.


I believe that what Toothwalker is saying is that 50% gray, not 0, is a better representation of the existing lines than "new" lines with a different width - given that the lines are too narrow. With your "sampling" the result would be the latter, if interpreted as e.g. a horizontal line approximation in a jpg image. Your sine curve would build up the left part of his example image, the one with the too-narrow lines.

Sorry if I missed your point, and thanks for this enlightening discussion.



Aug 16, 2011 at 01:27 PM
Toothwalker
p.10 #19 · Post Processing Techniques


denoir wrote:
Ok, I see I'm still not getting my point through. Let's go back to basics. What does Mr Nyquist tell us?

If we have a signal we must sample it at the double maximum frequency to be able to perfectly reconstruct it:
http://peltarion.eu/img/comp/moire/E_nyq.jpg

If the red dots are the samples, then we can reconstruct this sinus curve. What happens if we do this - sample at a lower frequency?
http://peltarion.eu/img/comp/moire/E_nyq2.jpg

When we reconstruct the signal we can't reproduce the high frequency, but we still get a signal:
http://peltarion.eu/img/comp/moire/E_nyq3.jpg

What I'm saying is that this is a much better reconstruction (although of course imperfect) than a flat line
...

What I am saying is that, when you have no way of knowing what happened, it is better to be silent than to fabricate a story.


It definitely preserves more information than just not bothering with any reconstruction just because you can't represent as high a frequency as the original signal had.


OK, so you actually like aliasing. You could have said that earlier.

The observer has no way of knowing whether the resulting low-frequency signal is an aliased version of a higher frequency in the original or an accurate representation of that low-frequency component. It is just to avoid this ambiguity that anti-alias filtering is widely applied in digital signal processing.

Your red curve reconstructs something that is not present in the original. That is no preservation of useful information, but an artifact.



Aug 16, 2011 at 01:46 PM
AhamB
p.10 #20 · Post Processing Techniques


shoenberg3 wrote:
I think you misunderstood my second question. I was wondering if there would be any difference between doing it in Lab and doing it on a separate layer in luminosity blend mode, as the latter method seems simpler.


If you want your sharpening to be destructive (you're sure that you're not going to tweak it at some later stage), you may as well forego creating extra layers and work in Lab.



Aug 16, 2011 at 02:37 PM