FM Forums | Alternative Gear & Lenses | Join Upload & Sell


Archive 2013 · More Mpix or perfect per-pixel quality?
  
 
theSuede · p.6 #1


Yes, the correct wording would be "half as much AGAIN"; my actual meaning and intention was of course "150% of baseline accuracy", i.e. "50% better than" the non-downsampled Bayer-based image.

Any aliasing in the downsampled image is due to:
1) I enlarged the crop with "nearest neighbor" so that you can see individual pixels. This is not interpolation; it just takes one pixel and blows it up into a 2x2 block where all four pixels contain the same value.

2) The base image is very, very close to perfect. That inherently means that the projected object image is sampled in squares - and this gives what looks like aliasing. To get less aliasing, you have to lower per-pixel accuracy.

A perfectly non-aliased image has a Gaussian blur of about 0.3-0.4 pixels applied before it is sampled - my image sample doesn't. Our baseline viewing reference for digital images is square pixel ratios - and this gives aliasing even when done correctly. There is no way to avoid this, except to increase blur.

If you look at the actual information in the pixels, you can see that each and every one contains the average of the light falling in the square that the pixel "covers" on the target. Getting rid of all aliasing means that you either blur the image, OR modify the information so that it no longer conforms to square pixel ratios.
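The "nearest neighbor" blow-up described in 1) can be sketched in a couple of numpy lines (an illustration of my own, not from the original post; the 2x2 array and its values are made up):

```python
import numpy as np

# A tiny 2x2 "image"; each value is the average light over its square pixel area.
img = np.array([[10, 200],
                [60, 120]], dtype=float)

# "Nearest neighbor" 2x enlargement: every pixel becomes a 2x2 block of
# identical values - no new values are invented, so it is not interpolation.
enlarged = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

print(enlarged)
```

Any hard edge in `img` stays a hard edge in `enlarged`, which is why the blown-up crop shows the square-sampled structure so plainly.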



Mar 28, 2013 at 07:15 PM
Mescalamba · p.6 #2


Hm..

When an image is recorded, a certain DR is recorded. How can it become "more"?

I know that noise "disappears" if you downsize a lot. But that can't bring back, for example, burned-out highlights or blocked shadows?

That "DR" is more along the lines of the image seeming "cleaner where it wasn't"? Because I can't imagine how you could get more data from a place where there is NO data.

So the only DR increase is simply that downsized data has less noise?



Mar 28, 2013 at 10:39 PM
theSuede · p.6 #3


Yes, noise is the limitation on detail in shadows. Detail is the definition of DR: an image detail with a black/white object contrast has to give a stronger signal than the background noise. When they're equally strong, you have 0dB SNR. The difference in Ev between the whitepoint (clipping) and the 0dB SNR point is the DR.

Downsampling increases the SNR of image detail in each pixel. By downsampling you include more light energy in each target pixel area. More light = lower Poisson noise. It also lowers the electronic noise power by averaging the e-noise from all the original pixels that go into a single new pixel in the smaller image.
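A toy numpy sketch of that averaging effect (my own illustration, with an arbitrary noise level of 8 units): averaging each 2x2 block of a flat noisy patch cuts the noise standard deviation by about sqrt(4) = 2x:

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat grey patch with uncorrelated read noise, sigma = 8 (arbitrary units).
patch = 100 + rng.normal(0, 8, size=(400, 400))

# Downsample by averaging each 2x2 block into one pixel.
small = patch.reshape(200, 2, 200, 2).mean(axis=(1, 3))

print(patch.std(), small.std())  # noise std drops by ~sqrt(4) = 2x
```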



Mar 29, 2013 at 10:07 AM
Paul Gardner · p.6 #4


theSuede wrote:
Yes, noise is the limitation on detail in shadows. Detail is the definition of DR, an image detail that has an object contrast of black / white has to give a stronger signal than the background noise. When they're equally strong, you have 0dB SNR. The difference in Ev between the whitepoint (clipping) and the 0dB SNR point is DR.

And once again we are back to the requirements for 16 bit ADCs to resolve the bits at the low end. The high end is adequately resolved with 14 bits. If we had a camera with 20 bit ADCs I think
…



Mar 29, 2013 at 11:17 AM
Toothwalker · p.6 #5


theSuede wrote:
Yes, noise is the limitation on detail in shadows. Detail is the definition of DR, an image detail that has an object contrast of black / white has to give a stronger signal than the background noise. When they're equally strong, you have 0dB SNR. The difference in Ev between the whitepoint (clipping) and the 0dB SNR point is DR.

Downsampling increases the SNR of image detail in each pixel. By downsampling you include more light energy in each target pixel area. More light = lower Poisson noise. It also lowers the electronic noise power by averaging the e-noise from all
…

Whether downsampling increases the SNR depends on the signal and noise characteristics. If signal and noise have the same spectrum, it does not. In a digital image, however, the signal spectrum falls off towards the Nyquist frequency of the sensor whereas the noise spectrum is whitish. In that case the lowpass filter associated with the downsampling operation filters out more noise than signal, and hence the SNR increases. (This is another way of formulating what you wrote.) One can just apply a lowpass filter without any downsampling, with the same result.

It brings up an interesting discussion. As far as I know, SNR is a meaningless concept without an associated bandwidth. In many fields of science it is defined as the ratio of signal power to noise power, measured in the frequency band of the signal. Removing noise outside the signal band does not count as an SNR improvement.
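An illustrative sketch of that spectral argument (a toy example of my own, using an arbitrary slow sinusoid as the band-limited signal and a crude 5-tap moving average standing in for the lowpass implied by downsampling):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
t = np.arange(n)

# Band-limited "signal": a slow sinusoid, well below Nyquist.
signal = np.sin(2 * np.pi * t / 200)
noise = rng.normal(0, 1, n)   # white noise, flat spectrum
noisy = signal + noise

# Simple 5-tap moving-average lowpass.
kernel = np.ones(5) / 5
filtered = np.convolve(noisy, kernel, mode='same')

def snr(x, s):
    """SNR in dB of estimate x against the clean signal s."""
    err = x - s
    return 10 * np.log10((s**2).mean() / (err**2).mean())

print(snr(noisy, signal), snr(filtered, signal))  # SNR goes up after filtering
```

Because the sinusoid lives far below the filter cutoff while the noise is spread evenly, the filter removes much more noise power than signal power, exactly as described above.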





Mar 29, 2013 at 11:50 AM
kwalsh · p.6 #6


Paul Gardner wrote:
And once again we are back to the requirements for 16 bit ADCs to resolve the bits at the low end. The high end is adequately resolved with 14 bits. If we had a camera with 20 bit ADCs I think the DR thinking would change considerably.


The ADC isn't the problem; the pixel read circuitry and pixel well depth are the limits. No current camera is limited by a 14-bit ADC, and only the very best Sony sensors even come close to needing 14 bits. The 14-bit mode in many cameras is just for marketing purposes; the extra bits serve no purpose at all.

Do the math, a 20-bit ADC won't do diddly. In fact, it is a completely senseless suggestion because even the deepest wells in the industry (Canon 1DX) are only 90,000 electrons. That would be 17-bits if you had a read noise of 1 electron (and you don't, read noise on the 1DX is over 30 electrons).
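That arithmetic, spelled out (using the numbers as given in the post):

```python
import math

fwc = 90_000     # deepest well in the example above (Canon 1DX, electrons)
read_noise = 30  # stated 1DX read noise (electrons)

# Bits to span the full well at 1 e- steps - only relevant if read noise were ~1 e-:
print(math.log2(fwc))               # ~16.5 bits
# Distinguishable levels are FWC / read_noise; bits actually needed:
print(math.log2(fwc / read_noise))  # ~11.6 bits - a 14-bit ADC is already ample
```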

And to demonstrate how irrelevant the ADC typically is, the H3DII 50 has your lusted after 16-bit ADC and a dynamic range of 11 EV...

Big changes in DR are more likely to come from different sensor architectures. Fuji has done dual-sensitivity pixels with some success - DR is certainly improved significantly, but at a resolution cost. On the academic side, very small 1-bit pixels hold promise as well, and they would naturally have a highlight roll-off too.

Increasing ADC bit depth, however, isn't going to solve anything.

Ken



Mar 29, 2013 at 11:58 AM
kwalsh · p.6 #7


Toothwalker wrote:
It brings up an interesting discussion. As far as I know, SNR is a meaningless concept without an associated bandwidth. In many fields of science it is defined as the ratio of signal power to noise power, measured in the frequency band of the signal. Removing noise outside the signal band does not count as an SNR improvement.


A point frequently overlooked. Related is any sort of equalization/high-pass filtering applied to recover the signal (think sharpening, demosaicing, clarity controls, etc.). If you care about fine detail and noise, then an optical AA filter comes into the equation as well, not to mention the MTF of the lens used (especially contrast and micro-contrast). I call this the "super-zoom problem": a low-transmission, low-contrast, low-micro-contrast lens results in more gain and high-pass filtering in post-processing, which amplifies the noise more than if you had used a better lens.



Mar 29, 2013 at 12:06 PM
zhangyue · p.6 #8


Toothwalker wrote:
Whether downsampling increases the SNR depends on the signal and noise characteristics. If signal and noise have the same spectrum, it does not. In a digital image, however, the signal spectrum falls off towards the Nyquist frequency of the sensor whereas the noise spectrum is whitish. In that case the lowpass filter associated with the downsampling operation filters out more noise than signal, and hence the SNR increases. (This is another way of formulating what you wrote.) One can just apply a lowpass filter without any downsampling, with the same result.

It brings up an interesting discussion. As far as
…
This is approaching my field now: if the image is a band-limited signal, downsampling will improve SNR; if it is not, downsampling will introduce aliasing, and I am not sure that will really help the image. I think it is fair to say that in most real-world shooting it is beneficial.

However, one thing I am not so sure about is the effect of camera shake - can 4 blurred pixels make up one sharp one? I feel I can get a sharp image more easily with 12M than with 36M downsized. But since this cannot be scientifically verified, it may not be a fact.

This topic really depends on your priorities in photography. If you are trying to achieve fidelity, I see no reason you wouldn't want pixels, considering DR is so high at base ISO even with high pixel density. For me, I don't feel the need. A 6M interesting photo has a lot more value than an 80M uninteresting one. I never feel an 80M interesting photo really adds more value over, say, a 12M one.

But that is just me.



Mar 29, 2013 at 05:07 PM
zhangyue · p.6 #9


kwalsh wrote:
The ADC isn't the problem, the pixel read circuitry and pixel well depth are the limits. All current cameras are not limited by a 14-bit ADC and only the very best Sony sensors even come close to needing 14-bit. Many cameras with a 14-bit mode are just for marketing purposes, the extra-bits serve no purpose at all.

Do the math, a 20-bit ADC won't do diddly. In fact, it is a completely senseless suggestion because even the deepest wells in the industry (Canon 1DX) are only 90,000 electrons. That would be 17-bits if you had a read noise of 1 electron
…

+1 - the bottleneck is not the ADC, but the sensor itself. Nowadays it is not hard to get even 24-bit ADC performance with DR of more than 120dB.

The thing is, if you don't need high pixel counts but just pixel quality, apply an optical LP filter and let the camera do the oversampling and noise shaping in the digital domain; you could achieve more DR with even the existing sensors. But that is throwing away pixels. And DR is really pretty good as it is now.





Mar 29, 2013 at 05:21 PM
kwalsh · p.6 #10


zhangyue wrote:
The thing is if you don't need high pixel, but just pixsel quality, apply a LP filter in optical, and let camera to do the over sampling and noise shaping in digital domain, you could achiveve more DR with even existing sensor. But that is shrow away pixsel. And DR is really pretty good as is now


And we drag further off topic...

I'm not sure those techniques would work well here. Noise shaping (at least the kind I'm familiar with) is achieved by putting the quantizer in a feed-back loop where the feed-back loop steers the quantization levels. Which works great when your quantizer is an ADC. In this case the performance limiting quantizer is actually physics - photon/electron counts and their Poisson statistics are setting the fundamental limits in dynamic range and SNR. I don't see how you can noise shape that.
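The Poisson limit is easy to illustrate numerically (a toy simulation of my own, not a sensor model): the SNR of a photon count grows only as the square root of the collected light, and no quantizer trickery changes that.

```python
import numpy as np

rng = np.random.default_rng(2)

def photon_snr(mean_photons, n=100_000):
    """SNR of a pixel collecting `mean_photons` on average (Poisson statistics)."""
    counts = rng.poisson(mean_photons, size=n)
    return counts.mean() / counts.std()

# 100x the photons gives only 10x the SNR: SNR ~ sqrt(N).
print(photon_snr(100))     # ~10
print(photon_snr(10_000))  # ~100
```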

As a side note also remember optical LP filters really stink compared to electrical LP filters. Better to just over-sample to the degree that the lens PSF does the job for you (though as someone pointed out this can be a very high resolution for the center of a sharp wide aperture lens).



Mar 29, 2013 at 05:35 PM
 

zhangyue · p.6 #11


kwalsh wrote:
And we drag further off topic...

I'm not sure those techniques would work well here. Noise shaping (at least the kind I'm familiar with) is achieved by putting the quantizer in a feed-back loop where the feed-back loop steers the quantization levels. Which works great when your quantizer is an ADC. In this case the performance limiting quantizer is actually physics - photon/electron counts and their Poisson statistics are setting the fundamental limits in dynamic range and SNR. I don't see how you can noise shape that.

As a side note also remember optical LP filters really stink compared to electrical
…

Not an expert here (I've designed some high-speed delta-sigma converters, not imaging processors). Just throwing out some thoughts - I haven't really thought through how to apply delta-sigma to this, but I feel it could be done.

Even with no noise shaping, oversampling a band-limited signal would still be beneficial - though that is essentially what we are talking about here: downsizing a 36M file to 12M.

As technology marches on, I can see increasing sensor density becoming the trend for increasing DR (if we have reached the physical limits of the sensor itself). But how much DR do we really need - are we just chasing numbers?








Mar 29, 2013 at 05:51 PM
zhangyue · p.6 #12


To the OP: I feel we have almost reached a stage where you can have both pixel count and quality. The question is, do you really need it (the pixels)? I don't. 12M is fine, 18-24M is enough. Even the DR is enough for me.








Mar 29, 2013 at 06:09 PM
Mescalamba · p.6 #13


zhangyue wrote:
To op, I feel we have almost reached to a stage that you can have both. pixel count and quality. The question is do you really need it (pixel)? I don't. 12M is fine, 18-24M is enough. Even DR is enough for me.


That "almost" is the important part. It depends on how demanding a person you are. That's pretty much what this poll is about.

I won't jump to conclusions. This poll probably shows that in our particular corner of the internet, we are very demanding about the technical aspects of photography and image quality. Probably more than the big companies' marketing would like.

If it were a poll among photographers who pretty much don't go on forums and just make a living taking photos, it would most likely be different. Or maybe not. I think that's one thing I will never know.



Mar 29, 2013 at 08:20 PM
theSuede · p.6 #14


The demands on the ADC are actually quite easy to define...

Modern sensors like those in the D7100/D7000, D800, 5Dmk3 and so on often have FWC values (how much light/charge they can store per pixel) of about 40-70,000 e-.
That in itself might suggest that you need 16 bits to cover all bases (2^16 = 65,536). Anything more than that would be like trying to split single electrons - which would be quite futile.

But no sensor today with an FWC that high has a read noise (e- read error margin) lower than ~1/15,000 of the maximum signal (FWC). The absolute leaders in that category are still the D800/D7000 and Pentax K5-II class sensors. They have an FWC of about 50k e- and a minimum electronic read noise at base ISO of about 3.0 e-.
50,000 : 3.0 = 16,600:1

14 bits gives 2^14 = 16,384 levels.
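Worked out (same numbers as above):

```python
import math

fwc = 50_000      # D800-class full-well capacity (electrons)
read_noise = 3.0  # base-ISO electronic read noise (electrons)

ratio = fwc / read_noise
print(ratio)             # ~16,667:1 distinguishable levels
print(math.log2(ratio))  # ~14.0 EV - right at what 14 bits (16,384) can encode
```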

You might gain some extremely small amount of DR by going to a 16-bit converter, but then you also commit to either using twice as many A/D converters - or halving pixel throughput (and thereby also halving the camera's fps).

In ANY situation other than a well-exposed base-ISO image, having more than 14 bits of A/D resolution is a total waste. It might even hurt performance, due to increased bus rates and other small incremental losses that increase electronic noise.



Mar 29, 2013 at 10:49 PM
theSuede · p.6 #15


Toothwalker wrote:
Whether downsampling increases the SNR depends on the signal and noise characteristics. If signal and noise have the same spectrum, it does not. In a digital image, however, the signal spectrum falls off towards the Nyquist frequency of the sensor whereas the noise spectrum is whitish. In that case the lowpass filter associated with the downsampling operation filters out more noise than signal, and hence the SNR increases. (This is another way of formulating what you wrote.) One can just apply a lowpass filter without any downsampling, with the same result.

It brings up an interesting discussion. As far as
…

Yes, DR is a somewhat hard-to-grasp metric when you look at the connection between raw data and the finished image.
The SNR that DR is based on is a single-pixel metric that doesn't include ANY of the processing you need to do to convert the raw data into a human-viewable image.

Comparing the D7100 and the D7000 (or indeed the 5Dmk2 and the mk1), the figures don't really match up when you look at finished images...
Since the D7100 (and the 5Dmk1) is sharper "per pixel" than its comparison - with an imaginary "perfect" lens - other parameters come into play too. Bayer interpolation has to be done slightly differently, and you need less sharpening. Both of those actually increase DR as we see it in a real image: an increase in shadow-zone detail and in how deep into the shadows we can go without image detail drowning in noise.



Mar 29, 2013 at 11:13 PM
RustyBug · p.6 #16




So when we output to screen or print ... are we ultimately limited to DR @ 2^8 @ screen? or 2^ @ print?



Mar 30, 2013 at 03:55 AM
kwalsh · p.6 #17


Well, the screen usually applies a gamma to its 8-bit data. That said, unless you view in a darkened room on a really good monitor, don't expect a contrast ratio much better than around 250:1 or 500:1.

For prints, getting past 100:1 is a challenge unless you use glossy paper and controlled lighting.

Regardless, display dynamic range is much, much lower than what cameras capture. Burn/dodge, tone mapping and tone curves are how we get printable images from high-dynamic-range captures. Or we intentionally block shadows or blow highlights for a more "graphic" representation. So despite the low dynamic range of display media, most people want more DR in their capture media so they can use that shadows slider to great effect.
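Converting those contrast ratios to stops (EV is just log2 of the ratio; ratios as quoted above):

```python
import math

# Display contrast ratios expressed as stops of dynamic range.
for medium, contrast in [("good monitor, dark room", 500),
                         ("typical monitor", 250),
                         ("print", 100)]:
    print(medium, round(math.log2(contrast), 1), "EV")
```

So even an ideal viewing setup shows roughly 7-9 stops, well short of what modern sensors capture - hence the tone mapping.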

Ken



Mar 30, 2013 at 04:08 AM
Paul Gardner · p.6 #18


Without going into the math, 14 bits does not leave enough bits at the LSB end to handle the dark end of the well depth. Why do you think MF is 16-bit? Not for the high end, but for the low end. An 18-20 bit ADC would give us more room at the LSB. Then, if necessary, we could use a high-pass filter to deal with shot effect noise. Granted, this would lower the fps, but for landscape cameras, WHO CARES? Most landscape photographers shoot maybe 1 frame every 5 or 10 minutes. ADCs are cheap; it is the battery requirements that also restrict frame rates and shots per charge. So I only get 50-100 images per charge - again, so what?


Mar 30, 2013 at 03:12 PM
mpmendenhall · p.6 #19


Paul Gardner wrote:
Without going into the math, 14 bits does not have enough bits left on the LSB to handle the dark end of the well depth. Why do you think that MF is 16 bit?


You might not go into the math, but fortunately theSuede and kwalsh have in their posts above, explaining why a bunch more bits won't help. At the low end there's simply no more real information to gain, no matter how finely you divide the signal - when the noise level (from ADC readout noise and, even if that's perfectly dealt with, fundamental electron-counting statistics) is bigger than your LSB precision, you gain practically nothing from finer divisions. MF backs don't need 16 bits; it's a handy marketing point, and might slightly (?) simplify hardware/software design by operating on integer byte multiples.
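A toy numerical illustration of the point (my own sketch: a 3 e- read noise and arbitrary quantization steps standing in for ADC LSB sizes): quantizing more finely than the noise floor barely changes the total error.

```python
import numpy as np

rng = np.random.default_rng(3)

true_signal = rng.uniform(0, 50_000, size=200_000)                # electrons
noisy = true_signal + rng.normal(0, 3.0, size=true_signal.size)   # 3 e- read noise

def rms_err(step):
    """Total RMS error vs the true signal after quantizing with LSB = step e-."""
    quantized = np.round(noisy / step) * step
    return np.sqrt(((quantized - true_signal) ** 2).mean())

# LSB of 1 e- (finer than the noise) vs 3 e- (at the noise floor):
print(rms_err(1.0), rms_err(3.0))  # nearly identical - the noise dominates
```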

Then if necessary we could use a highpass filter to deal with shot effect noise.

Can you explain what you mean by this? The problem with noise is that it's indistinguishable from the actual signal. There isn't a special kind of filter that can remove the noise without cutting out just as much from whatever parts of the signal share the same bandwidth. You can smooth out noise all you want with filters, but you'll be blurring out your desired picture just as much.



Mar 30, 2013 at 03:28 PM
RustyBug · p.6 #20




Are there multiple (connotative/denotative) definitions of DR?

When I think of DR, I think about the range of recordable information from dark to light, and the number of stops between those two extremes. Yet much of the discussion concerns noise and degrees of refinement/distinction/tonal separation within the range that was captured.

I get that using more bits allows a greater degree of refinement (significant digits/calculations/etc.) and smoother tonal transitions ... but how does that create more "range" (i.e. stops) between darkest and lightest?

So, is my concept of DR "whack", or am I missing something here?



Mar 30, 2013 at 05:26 PM