An offset ML (microlens) layer does not have integer "steps" in its correction. What you do is a linear scaling of the microlens pitch and then a placement offset.
Like: The APS width is 23.766 mm split over 6080 px line widths. Then you make a mask for the ML deposit/erosion that has 6080 "bumps" over 23.760 mm (6.00 µm smaller) and place it centered on the substrate.
The offset will then be a gradual increase in the pixel-center to ML-center distance, starting at zero at the optical axis (sensor center) and reaching 3.00 µm at the short edge. At half this image height (23.766/4 = 5.941 mm) the offset will be 1.50 µm; at 90% image height it will be 0.90*3.00 = 2.70 µm. A smooth increase with image height from the center.
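To make the arithmetic explicit, here is a small Python sketch (my own illustration; the helper name and the purely-linear-offset assumption are mine, only the dimensions come from the example above) that reproduces the 1.50/2.70/3.00 µm figures:

```python
# Sketch of the linear ML-mask scaling described above (numbers from the example,
# assuming a perfectly linear offset growing from the optical axis).
sensor_width_mm = 23.766   # APS sensor width
ml_mask_width_mm = 23.760  # ML mask drawn 6.00 µm narrower, centered on the substrate
pixels = 6080

pixel_pitch_um = sensor_width_mm / pixels * 1000   # ~3.909 µm pixel pitch
ml_pitch_um = ml_mask_width_mm / pixels * 1000     # very slightly smaller ML pitch

def ml_offset_um(fraction_of_image_height):
    """Pixel-center to ML-center offset at a given fraction (0..1) of the
    distance from the optical axis to the short edge (hypothetical helper)."""
    half_width_um = sensor_width_mm / 2 * 1000
    shrink_ratio = 1 - ml_mask_width_mm / sensor_width_mm
    return fraction_of_image_height * half_width_um * shrink_ratio

for f in (0.0, 0.5, 0.9, 1.0):
    print(f"{f*100:3.0f}% image height -> {ml_offset_um(f):.2f} µm offset")
```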
My hunch is that it is related to the lossy compression. If you throw a couple of bits of information away, there are bound to be places where you can see that, and very gradually changing sky-colours would fit that scenario.
In a flat surface, the lossy compression has almost exactly zero impact on the tone resolution of ARW2. Posterization in surfaces like skies and other colored flats is almost always the result of applying color transforms to 8-bit converted data (and it's present in monitor presentations of 16-bit data too; even float32 is affected!).
In the raw data, the noise is way stronger than the value step (about 3x at base ISO, even with the worst ARW compression step). But apply even the slightest amount of noise reduction to this - and in this case even the raw Bayer interpolation counts as "slight NR", since it's a blurring transform! - and your base data is stairstepped. But WAY below the threshold of visibility.
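A toy NumPy sketch (the gradient, block size and seed are my own choices; only the "noise about 3x the step" ratio comes from the argument above) of why that stairstepping stays invisible: a gradation that spans only one raw step comes back as a smooth ramp after blurring, rather than as two flat bands, because the noise dithers the quantization:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "perfectly even" gradation spanning exactly ONE quantizer step:
# without noise it can only quantize to two flat bands.
n = 100_000
ramp = np.linspace(100.0, 101.0, n)
step = 1.0                                  # coarse raw value step
noise = rng.normal(0.0, 3.0 * step, n)      # sensor noise ~3x the step

quant_clean = np.round(ramp / step) * step              # two hard bands
quant_noisy = np.round((ramp + noise) / step) * step    # steps dithered by noise

# Stand-in for "slight NR" / Bayer interpolation: average blocks of pixels.
block_means = quant_noisy.reshape(10, -1).mean(axis=1)

print("clean quantization, distinct levels:", np.unique(quant_clean))
print("noisy + blurred, block means:", np.round(block_means, 2))
```

The block means recover values between the raw steps (roughly 100.05, 100.15, ... 100.95), so whatever staircase remains after the blur is a small fraction of a raw step - well below visibility.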
What makes this smooth surface (with solid patch tones) break up into "bands" of differing brightness/hue is the steps in converting the base data to a presentation format like sRGB or Adobe RGB. Especially if you do it in an 8-bit limited presentation. This is especially noticeable in blue and purple, where the luminance-bearing channels (R, G) are very coarse.
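To put a rough number on "very coarse", here is an illustrative sketch (the patch values for a deep-blue sky gradient are invented; only the sRGB transfer curve is the standard one) counting how many distinct 8-bit codes each channel actually gets:

```python
import numpy as np

def srgb_encode(x):
    # Standard sRGB transfer curve: linear [0,1] -> nonlinear [0,1]
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

# Hypothetical deep-blue "sky" gradient in linear light:
# B carries most of the gradient, R and G stay low (that's what makes it blue).
n = 1000
t = np.linspace(0.0, 1.0, n)
linear_rgb = np.stack([
    0.020 + 0.010 * t,   # R (low, but luminance-bearing)
    0.030 + 0.015 * t,   # G (low, but luminance-bearing)
    0.200 + 0.100 * t,   # B
], axis=1)

codes = np.round(srgb_encode(linear_rgb) * 255).astype(np.uint8)
for i, ch in enumerate("RGB"):
    print(ch, "distinct 8-bit codes across the gradient:", len(np.unique(codes[:, i])))
```

Across the same 1000-pixel gradient, R and G end up with only around a dozen codes each, so every step in those channels covers a wide band - and those are exactly the channels that carry the brightness.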
So even a mathematically "perfect" conversion may include (presentation) banding if the base data was close to a perfectly even gradation.
First comes the data conversion from raw color to a standard RGB space, and you get banding that's bordering on visible. After that, push the converted, slightly banded data through a monitor profile - which will have another set of threshold levels for the gradation.
Doing this will almost inevitably create round-off errors, so you may get real banding: a gradation where a surface that was originally perfectly even from dark to bright instead shows distinct brighter-darker-brighter-darker bands that don't coincide with the chroma band cutoffs.
Since the image chain in bad implementations of CM (like Apple's ColorSync, and of course also several of the implementations you can choose from in Windows) is often limited to 8-bit even if the base data is 16-bit or even float, you get banding on screen even with high tone-resolution material.
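As an illustration of what the double 8-bit rounding does (the two gamma values below are placeholders standing in for a working-space encoding and a monitor profile, not real ICC curves), here a smooth gradation is quantized to 8 bits twice; the second rounding merges some levels and leaves bands of uneven width:

```python
import numpy as np

n = 4096
ramp = np.linspace(0.2, 0.4, n)   # a perfectly even dark-to-bright gradation

# Stage 1: encode into an 8-bit presentation space (placeholder gamma 2.2).
stage1 = np.round(255 * ramp ** (1 / 2.2)) / 255

# Stage 2: push the already-quantized data through a "monitor profile" curve
# (placeholder display gamma 2.6) and round to 8 bits again.
stage2 = np.round(255 * (stage1 ** 2.2) ** (1 / 2.6))

print("stage 1 distinct levels:", len(np.unique(stage1)))
levels, widths = np.unique(stage2, return_counts=True)
print("stage 2 distinct levels:", len(levels))
print("stage 2 band widths in pixels:", widths)   # uneven widths after the second rounding
```

Keeping the chain at 16-bit or float until the very last step avoids that second rounding, which is exactly the problem with the 8-bit-limited CM implementations above.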