Canon has already reduced the weight of the 400/2.8 by 53% from the first EF version to the current EF IS III, and the 600/4 by 49%, even though the newer lenses support additional features such as IS. I think this is quite a remarkable achievement. People are really lucky to live in an age where making beautiful images has become this much easier. I think it is good that the camera manufacturers put their efforts into the fast superteles, as it is obvious when looking at the results, e.g. in nature photography competitions, that by far the nicest-looking images are usually taken with these lenses and not the smaller-aperture variants. (Of course it is also true that the average photographer using a 400/2.8 is likely to be more skilled than one who uses an inexpensive telezoom or a superzoom compact camera, because one doesn't usually make such a commitment, both financial and in the discomfort of transporting the gear, on a moment's whim.)
Compensating for low light by taking a series of exposures and averaging them has the problem that the resulting image shows discontinuous movement, i.e., it looks a bit as if the subject were moving under a flashing light. In a single longer exposure you get smooth movement blur. The stacked result can of course be processed into something less ghastly by modeling how the subjects in the frame move, estimating their trajectories and interpolating between frames, but then why not just capture a long exposure and get a nicer result? If the subject is to be frozen in time, the multiple-exposure approach runs into difficulties. Imagine a set of lanterns on water, and capture a series of exposures with the intention of combining them for a better signal-to-noise ratio. This only works if the subjects present the same appearance to the camera from shot to shot. If a lantern rotates during the series, the exposures cannot be combined unless the software also builds a model of the rotation (for each lantern, of course). The same applies when there are multiple people in the frame. Now, you can say that this already magically works and that I'm just a luddite, but the reality is that the resulting images are really poor quality. There is no display resolution low enough that the photos wouldn't look bad.
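To make the trade-off concrete, here is a minimal sketch (in Python with NumPy, a toy simulation, not anything a camera actually runs) of why frame averaging helps with noise but fails on motion: noise in static areas drops roughly as the square root of the number of frames, while anything that moves between frames gets smeared and loses detail. The scene, noise level and frame count are made-up values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FRAMES = 16          # number of short exposures in the burst
NOISE_SIGMA = 20.0     # per-frame noise (arbitrary units)

def render_scene(t):
    """Toy 1-D 'scene': a static background plus a bright spot that
    drifts between frames (think of a lantern moving on the water)."""
    x = np.arange(256)
    background = 50.0 + 30.0 * np.sin(x / 20.0)
    moving_spot = 200.0 * np.exp(-((x - (100 + 3 * t)) ** 2) / 50.0)
    return background + moving_spot

# Capture a burst of noisy frames.
frames = [render_scene(t) + rng.normal(0, NOISE_SIGMA, 256) for t in range(N_FRAMES)]

# Naive stacking: average the frames without any motion modeling.
stacked = np.mean(frames, axis=0)

# Noise in a static region drops by roughly 1/sqrt(N)...
static = slice(0, 50)
print("single-frame noise :", np.std(frames[0][static] - render_scene(0)[static]))
print("stacked noise      :", np.std(stacked[static] - render_scene(0)[static]))
print("expected reduction : ~1/sqrt(N) =", 1 / np.sqrt(N_FRAMES))

# ...but the moving spot is smeared across every position it occupied,
# which is exactly the loss of subject detail described above.
spot_region = slice(90, 160)
print("true peak (frame 0):", render_scene(0)[spot_region].max())
print("stacked peak       :", stacked[spot_region].max())
```

Running this, the static background noise falls by about a factor of four, while the peak of the moving spot collapses to roughly half its true brightness: the averaging buys light at the cost of the very detail you wanted to freeze, unless per-subject motion modeling is added on top.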
For example
https://www.blog.google/products/pixel/see-light-night-sight/
The top banner is uncomfortable to look at because it is not sharp. The portrait of three people below it, where they show the original series of shots on the left and the final result on the right, has the same problem as the banner: no fine detail even at this extremely low display resolution, and my brain immediately tells me that there is something wrong with the image. The tones are poor, the light looks unnatural, and there are no finer features on the subjects. The people in the photo look a bit like ghosts. I cannot see how this approach could be considered to compete with results obtained using a large-sensor camera. I can see how it might from the perspective of someone who never looks at images larger than a phone screen and who has never used a dedicated camera suitable for this type of photography, but the bar is set really low there. The result can only have the level of detail that is consistent across the different photos: if the expression changes, the subject cannot have a sharp mouth area, and if the subject turns, there is no way to combine the multiple exposures taken from different angles and retain correct subject detail.
ML and DSLR cameras work just fine for "computational imaging." People have been using algorithms to stitch photos, focus stacking to extend the depth of field, combining multiple different exposures with layers and masks, and even automatic HDR techniques for 15+ years now. The cameras themselves have had built-in algorithms to deal with high-contrast situations (e.g. Nikon's D-Lighting). In fact similar technologies existed in minilabs that printed from scanned film negatives 20 years ago. What you can't do with a typical ML or DSLR is combine a sequence of rapidly taken photos to supposedly freeze movement while collecting a lot of light at the same time, but the issue with that approach is that the moving subject typically does not have the same shape from shot to shot, so the averaging results in the loss of detail. What the DSLR or ML can do is freeze the subject in one short exposure and collect enough light for a nice photo that shows the subject in a consistent way, as it was captured within a single short exposure. Combining multiple exposures in camera is something the manufacturers could implement in dedicated cameras as well if they wanted to, but it may not be worth the sacrifices needed in other areas of the camera (a fast-readout sensor leads to lower dynamic range, so single captures at low ISO would not have quite the same fidelity).
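As an illustration of the kind of multi-shot combining that has long been possible on a computer with files from a dedicated camera, here is a short sketch using OpenCV's built-in MTB alignment and Mertens exposure fusion. The file names are placeholders, and no ghost/motion handling is attempted, so anything that moved between frames will still smear, exactly as argued above.

```python
import cv2

# A bracketed series shot on a DSLR/ML camera (under-, normal-, over-exposed).
# Placeholder file names for illustration.
files = ["bracket_under.jpg", "bracket_normal.jpg", "bracket_over.jpg"]
images = [cv2.imread(f) for f in files]

# Correct small hand-held shifts between frames (translation-only alignment).
align = cv2.createAlignMTB()
align.process(images, images)

# Exposure fusion: blends the best-exposed parts of each frame directly,
# without recovering the camera response or tone mapping.
merge = cv2.createMergeMertens()
fused = merge.process(images)  # float32 result, roughly in [0, 1]

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```

Note that the alignment step only compensates for camera shake; it does nothing about subjects that move or change shape within the series, which is the limitation the paragraph above is about.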
What the dedicated ILC camera with a large sensor can do is produce consistent images, captured at a specific time, with high image quality. There is no computerized fudging to deal with inconsistencies between shots, and if multiple exposures are to be combined, the responsibility for resolving the inconsistencies lies with the person doing the post-processing. It usually helps a great deal to have an intelligent human being doing this work, because they can see what looks good and what doesn't, and how the images can be combined so that the result looks right to the viewer and preserves as much detail as possible. The advantage of the human being is that they know the subject and how it is supposed to look. They also know the artistic objectives. Photography is an art form, you know, and replacing the photographer's vision, creativity and skill with that of a camera phone sold in the same form to billions of other users just doesn't seem like the solution. Why would the photographer want to replace their art with that created by some programmers working in a lab, producing code that "guesses" the shape, features and emotional expression of the subject from multiple images in each of which the subject looks different? I don't get it.
Apple's efforts are not that great either. Looking here
https://www.apple.com/fi/iphone-11-pro/
the ultrawide-angle image is obviously soft in the outer 20% of the radius of the image circle; large areas of the image look blurred even though one would not think this is possible considering how few pixels the image shown on the web page has. The portrait of the woman in red looks completely artificial: the color of the dress and the sky doesn't resemble anything seen in real life. In the night-mode portrait of a woman, the woman could have been a wax sculpture or a plastic doll and the viewer wouldn't be able to tell the difference from the image. Again the color is nothing like what one would see in person. Replacing a complex background with white? How cute. Again, there is nothing here that cannot be implemented by algorithms on a computer afterwards, and this type of thing has been done for as long as digital editing has existed. They make films nowadays where the actors don't see each other or the scenery. They don't shoot those films on mobile phones.