Taylor Sherman wrote:
I wonder if they didn't have the processing power to do a more traditional, lossless compression on the RAW files. The space savings going from 14bpp to 8bpp is fairly substantial, but of course with a lossless compression, many images would have only been about 50% of the max possible size anyway.
Regarding processing power: venturing into speculation land, I suspect the driving constraint for Sony was either buffer space or data throughput. The algorithm they are using has two convenient properties: it can be pipelined (it doesn't need all of the data staged in a buffer but can be applied as the data is streamed), and it produces compressed data with a fixed upper bound on its size (the 36 MB we see). While many lossless algorithms can be pipelined, none of them can guarantee a fixed-size upper bound on the output; in fact, most can produce output that is larger than the input for worst-case data. Implementations typically fall back to storing the data uncompressed in those cases, but that requires either buffering all of the input or reprocessing the bloated output back into raw form for final storage.

But at this point I'm digressing too far into computer science geekery, so I'll leave it at that. I would assume Sony benchmarked a representative set of images and concluded this algorithm met their design constraints whereas lossless ones did not.
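For the curious, here is a minimal Python sketch of the kind of fixed-rate, streamable block codec I mean. The block layout (per 16 samples: 11-bit max, 11-bit min, their 4-bit positions, and 7-bit deltas) follows what third-party reverse engineering has reported for this style of format, but the tone curve and corner-case handling here are purely illustrative; treat it as a sketch of the idea, not Sony's actual code.

```python
# Illustrative sketch (not camera firmware): a fixed-rate, streamable block
# codec. Every block of 16 samples compresses to exactly 16 bytes, so the
# output size is known up front and no full-frame buffer is required.

def tone_curve_14_to_11(x: int) -> int:
    """Toy companding step: map a 14-bit sample to 11 bits.
    A real camera would use a carefully shaped curve; a plain shift is
    enough here to show the fixed-rate property."""
    return min(x >> 3, 2047)

def encode_block(samples_14bit):
    """Pack exactly 16 samples into exactly 16 bytes (128 bits):
    11-bit max, 11-bit min, 4-bit index of max, 4-bit index of min,
    and a 7-bit delta from the min for each of the other 14 samples."""
    assert len(samples_14bit) == 16
    s = [tone_curve_14_to_11(v) for v in samples_14bit]
    i_max = max(range(16), key=lambda i: s[i])
    i_min = min(range(16), key=lambda i: s[i])
    if i_max == i_min:                      # flat block: pick distinct slots
        i_min = (i_max + 1) % 16
    span = max(s[i_max] - s[i_min], 1)
    shift = max(0, span.bit_length() - 7)   # deltas must fit in 7 bits;
                                            # a decoder can recompute this
                                            # from max and min, so it is
                                            # not stored

    bits = 0
    bits = (bits << 11) | s[i_max]
    bits = (bits << 11) | s[i_min]
    bits = (bits << 4) | i_max
    bits = (bits << 4) | i_min
    for i in range(16):
        if i in (i_max, i_min):
            continue
        delta = (s[i] - s[i_min]) >> shift  # lossy: coarser steps in busy blocks
        bits = (bits << 7) | min(delta, 127)
    return bits.to_bytes(16, "big")         # always exactly 16 bytes out

def encode_stream(samples_14bit):
    """Pipelined use: consume 16 samples at a time, emit 16 bytes at a time.
    Assumes the sample count is a multiple of 16."""
    out = bytearray()
    for i in range(0, len(samples_14bit), 16):
        out += encode_block(samples_14bit[i:i + 16])
    return bytes(out)
```

The point is that every 16 input samples become exactly 16 output bytes (8 bits per sample), so a 36 MP frame always lands at roughly 36 MB and the encoder never needs to look beyond the current block, whereas a general-purpose lossless codec can expand on worst-case input and therefore needs a fallback path.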
jhenderson0107 wrote:
All analog-to-digital converters are imperfect. The ENOB (effective number of bits) of state-of-the-art 14-bit converters sampling at ~180 MHz (as in the A7R) is very close to 11 bits, so there is little justification to encode more than 11 bits into the compressed RAW image. The use of 11-bit samples within the signal processing chain, in conjunction with the delta encoding, is well proportioned and fully justifiable, and is almost certainly visually lossless in all but extreme corner cases.
The ENOB may be around 11.5 or so, so there may be some signal being lost as well, but I agree that the algorithm should be visually lossless for well-exposed images and that for everything but the deep shadows we're probably losing mostly noise. If I'm not mistaken, I recall mention that Sony may also be baking deconvolution sharpening into the RAW, so the definition of what counts as raw may be changing anyway. But for peace of mind, and for being able to push exposure in post-processing, I personally would still prefer losslessly compressed data. Storage is cheap while paranoia is taxing. I think at this point most of us are fairly convinced that the encoding is good enough, but I'm not sure many of us would choose this encoding if there were a lossless option in the menu.
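To put some rough numbers on the ENOB argument (my own back-of-envelope figures, using the standard q/sqrt(12) quantization-noise model and a uniform 11-bit step rather than Sony's actual tone curve):

```python
# Back-of-envelope check: how much noise does re-quantizing 14-bit data to
# 11 bits add, relative to the noise the ADC already contributes?
import math

FULL_SCALE = 2 ** 14          # 14-bit DN range

def adc_noise_dn(enob_bits: float) -> float:
    """Total ADC noise (std dev, in 14-bit DN) implied by a given ENOB:
    by definition it matches the quantization noise of an ideal
    `enob_bits`-bit converter over the same full scale."""
    return FULL_SCALE / (2 ** enob_bits * math.sqrt(12))

def effective_bits(noise_dn: float) -> float:
    """Invert the same relation: noise std dev -> effective bits."""
    return math.log2(FULL_SCALE / (noise_dn * math.sqrt(12)))

requant_step = 2 ** (14 - 11)                 # 8 DN per 11-bit code
requant_noise = requant_step / math.sqrt(12)  # ~2.31 DN added by the 11-bit step

for enob in (11.0, 11.5):
    adc = adc_noise_dn(enob)
    total = math.hypot(adc, requant_noise)    # independent noises add in quadrature
    print(f"ENOB {enob}: ADC noise {adc:.2f} DN, "
          f"after 11-bit requantization {total:.2f} DN "
          f"-> {effective_bits(total):.2f} effective bits")
```

By that flat-step estimate you give up somewhere between half a bit and a bit of effective resolution. In practice the tone curve reportedly uses finer steps in the shadows and coarser ones in the highlights, where shot noise dominates, so the real-world loss should be smaller than this, which is consistent with "visually lossless outside the deep shadows."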