One last thought: is there a difference between JPEG and RAW files when using your above "read" vs. "quantization"?
Yes, definitely. JPEG is all about quantization as well, but it isn't quite so simple as it is with a linear RAW file.
In the RAW example, the point was that we quantized at the level where there was no more information available from the sensor. Finer-grained quantization would only measure noise more accurately, not provide any more information about scene detail. It was all about recording everything the sensor could "see".
JPEG doesn't really care about noise at all; it is based on a model of what humans see, not what the sensor sees. As a result it actually throws away a whole bunch of valid information from the sensor on the basis that a human won't be able to tell the difference. JPEG does two transformations of the data to achieve this efficiently.
First, it goes from RGB to what's called YCbCr: basically one luminance channel (Y) and two chrominance channels (Cb and Cr), sort of like the LAB color space in Photoshop. It does this because human vision is much more sensitive to luminance information (tones) than it is to chrominance (colors), so the algorithm can throw away a lot of the Cb and Cr data without us noticing. Noise reduction algorithms work the same way: you can heavily filter the chrominance channels and viewers won't notice much. It also transforms the data from linear to a gamma curve (more levels at lower values, fewer levels at higher values), which again matches how we perceive images.
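To make that color step concrete, here's a minimal numpy sketch (my own illustration, not taken from any particular encoder). The matrix is the standard BT.601/JFIF one used by baseline JPEG; real encoders also clamp to 0..255 and work in integer math:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert full-range 8-bit RGB to the YCbCr used by baseline JPEG (JFIF/BT.601).

    rgb: array of shape (..., 3) with values in 0..255.
    Returns an array of the same shape holding Y, Cb, Cr.
    """
    m = np.array([
        [ 0.299,     0.587,     0.114   ],  # Y: mostly green, matching human luminance sensitivity
        [-0.168736, -0.331264,  0.5     ],  # Cb: blue-difference channel
        [ 0.5,      -0.418688, -0.081312],  # Cr: red-difference channel
    ])
    ycc = rgb @ m.T
    ycc[..., 1:] += 128.0  # offset chroma so "no color" sits at mid-scale
    return ycc

def subsample_420(chroma):
    """4:2:0 chroma subsampling: average each 2x2 block into one sample.

    This alone discards 75% of each chroma channel before any further
    quantization happens, and viewers rarely notice.
    """
    h, w = chroma.shape
    h, w = h - h % 2, w - w % 2  # trim to even dimensions
    return chroma[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```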
Next, it transforms the data from the spatial domain to a frequency domain (think of numbers based not on their position but on different scales of detail over an area: one number represents the finest detail, the next slightly larger detail, and so on). It does this because we are much more sensitive to coarse detail than fine detail, so it can heavily quantize (round, throw away data) the highest frequencies (finest detail) while preserving more data from the lowest.
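Here's a rough sketch of that step, again just for illustration. The quantization table is the example luminance table from the JPEG spec (Annex K); real encoders scale it up or down with the "quality" setting:

```python
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal 8x8 DCT-II basis; the 2D transform of a block is C @ block @ C.T
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

# Example luminance quantization table from the JPEG spec (Annex K).
# Step sizes grow toward the bottom-right corner: the finest detail
# (highest frequencies) gets rounded the most coarsely.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

def quantize(block):
    """8x8 block of 0..255 luma samples -> quantized integer DCT coefficients."""
    coeffs = C @ (block - 128.0) @ C.T       # level-shift, then 2D DCT
    return np.round(coeffs / Q).astype(int)  # the lossy step: coarse rounding

def reconstruct(q):
    """Approximately invert: rescale the coefficients and apply the inverse DCT."""
    return C.T @ (q * Q) @ C + 128.0

# A smooth gradient block: after quantization only a handful of
# low-frequency coefficients survive; the rest round to zero,
# which is exactly what makes the data cheap to store.
block = np.tile(np.linspace(100, 160, 8), (8, 1))
print(np.count_nonzero(quantize(block)))
```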
So there is plenty of quantization going on in JPEG; that quantization is the whole basis of its efficient "lossy" compression (the quantization is the "lossy" part). The difference is that what counts as "acceptable" quantization has nothing to do with what the camera can "see" but rather what the viewer can "see". Most cameras have very high quality JPEG settings available, and these let us still zoom in big without seeing much loss in detail, and do some post processing with no ill effects. But the whole compression model is based on our vision, and more extreme post processing breaks the whole premise. Probably the worst one is doing a B&W conversion from a JPEG with fairly strong channel mixing/selective color conversion. This directly takes chroma data (which is very heavily compressed in JPEG and which we normally can't perceive well) and translates it into luminance data (which we do see very well), and the previously "invisible" chroma compression artifacts suddenly become obvious.
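To put a number on that last point: substituting the standard YCbCr-to-RGB equations into a channel-mixed B&W conversion shows exactly how much of each (heavily compressed) chroma channel ends up in the final tones. The little helper below is purely my own illustration; the 1.402/1.772 coefficients are the standard BT.601 inverse-transform ones:

```python
# Inverse of the JPEG (BT.601 full-range) YCbCr transform, per pixel:
#   R = Y + 1.402 * (Cr - 128)
#   G = Y - 0.344136 * (Cb - 128) - 0.714136 * (Cr - 128)
#   B = Y + 1.772 * (Cb - 128)
# A B&W conversion is mono = wr*R + wg*G + wb*B, so substituting gives
# the gain with which each chroma channel leaks into the mono tones.

def chroma_leak(wr, wg, wb):
    """Gains pulling Cb/Cr errors into a channel-mixed B&W image (weights sum to 1)."""
    cb_gain = wb * 1.772 - wg * 0.344136
    cr_gain = wr * 1.402 - wg * 0.714136
    return round(cb_gain, 3), round(cr_gain, 3)

print(chroma_leak(0.299, 0.587, 0.114))  # plain luminosity mix: (0.0, 0.0),
                                         # chroma compression errors stay invisible
print(chroma_leak(1.0, 0.0, 0.0))        # "red filter" B&W: (0.0, 1.402), every unit
                                         # of Cr quantization error lands in the tones at 1.4x
```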
Fundamentally there is a limit to how high we can turn up the "quality" knob in lossy JPEG compression: there will always be sensor information left behind, and we'll notice it if we apply heavy PP to the file. If, on the other hand, no PP will be applied to the file, the information "lost" by the JPEG algorithm isn't visible, and so no harm done.
From this you can understand why RAW is far preferable for image capture (it captures all the information the sensor recorded, with no implied rendering intent that might result in information being thrown away), while JPEG is actually just fine for sending to the print house (by definition the final output is for a human viewer, so the compression model is perfectly sound).
Well, that was very long-winded for a short question. I guess the short answer is that quantization in a RAW file is based on the limits of what the sensor can see, whereas quantization in a JPEG file is based on the limits of what we can see.