Digital Cameras Finally Close to Joining the Revolution
Computational photography might not be exclusive to smartphones for much longer, and exciting possibilities lie ahead
One of the more fascinating things about tech is that important moments in a product category's evolution sometimes arrive quietly. Digital photography had one such moment a few days ago with the official announcement of the Nikon Z9, the company's first full-frame mirrorless camera built around a stacked sensor. It's a truly remarkable product and a return to form for the Japanese manufacturer: it offers a 45 Megapixel sensor, burst shooting at 20 FPS at full resolution or 120 FPS at 10 Megapixels, and 8K/30 video capability, among other impressive features. But what this camera does not have is, oddly, even more important: a traditional mechanical shutter.
This changes everything.
The Z9 is the first of a new generation of digital cameras designed from the start in a different way, one much closer to how the photography subsystems of modern smartphones are built. It is fast enough to do away with a mechanical shutter entirely, relying on a fully electronic one instead, and its most important capturing elements - the sensor itself, the logic board controlling it and the dedicated RAM it needs - are all "bundled" or "stacked" into a single unit placed right next to the camera's image processor. It's not the first time stacked sensors like this have been used, but it is the first time one has been combined with a fully electronic shutter, so the whole system works less like a traditional digital camera and more like a capable "camera smartphone" does.
This brings into sharp focus (pun intended), for the first time, the trend that has defined advanced smartphone cameras over the last four years or so: computational photography. Ever since Google made it clear that software can play a very important role in capturing photos with much greater dynamic range in a variety of lighting conditions, almost every other smartphone manufacturer has been working on that basic concept, improving it and expanding it in different directions. Apple, Samsung, Huawei and others have all offered smartphone models capable of delivering incredible photographs using hardware that's decidedly inferior to that of a proper digital camera, let alone a Z9.
This is why Nikon's new model is so important: it's the evolutionary step digital cameras needed to take in order to become capable of using computational photography techniques. With an electronic shutter and a stacked sensor in place, all that's missing is a powerful enough processor to perform the kind of magic a recent Pixel, iPhone or Galaxy S can pull off with much smaller sensors and lenses. Nikon's Expeed 7 processor is most likely not powerful enough to combine 8-10 frames of different exposure values into a proper HDR image at 45 Megapixels, but it might well manage it at, say, 10 or 12 Megapixels.
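To make the idea concrete, here is a minimal sketch of the kind of multi-frame merge such a processor would have to run: several already aligned frames shot at different exposures are blended per pixel, with weights favoring well-exposed values. This illustrates the general technique only, not Nikon's (or Google's) actual pipeline; the merge_exposures function, its sigma parameter and the synthetic bracket below are hypothetical, and real implementations add alignment, denoising and tone mapping on top.

```python
# Illustrative sketch of multi-frame exposure merging - NOT Nikon's actual pipeline.
# Assumes the burst frames are already aligned and normalized to floats in [0, 1].
import numpy as np

def merge_exposures(frames, sigma=0.2):
    """Blend differently exposed frames of the same scene into one image.

    Each pixel is weighted by how close it is to mid-grey in each frame,
    so dark frames contribute usable highlights and bright frames
    contribute usable shadows - the essence of an HDR-style merge.
    """
    stack = np.stack(frames, axis=0)                       # (N, H, W, 3)
    luma = stack.mean(axis=-1, keepdims=True)              # rough per-pixel luminance
    weights = np.exp(-((luma - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-8                  # normalize across the burst
    return (weights * stack).sum(axis=0)                   # fused image, still in [0, 1]

# Toy usage: three synthetic "exposures" of the same small scene.
rng = np.random.default_rng(0)
scene = rng.random((4, 6, 3))
bracket = [np.clip(scene * ev, 0.0, 1.0) for ev in (0.5, 1.0, 2.0)]
fused = merge_exposures(bracket)
print(fused.shape, float(fused.min()), float(fused.max()))
```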
Nikon is, in fact, using deep learning algorithms for scene detection and advanced autofocus, so a fair amount of processing is already taking place. It would not be much of a leap to add a number of computational photography functions via a firmware update in the future. Failing that, Nikon's next camera could incorporate a processor powerful enough to do this at full resolution, or a co-processor designed specifically for the operations computational photography relies on.
Some photography purists will, of course, cry foul at this prospect. Traditional high-end digital photography is all about capturing a frame that is as natural and true-to-life as possible, as unprocessed as it can be; processing happens later, on a computer with specialized software. Smartphone photography, on the other hand, is mostly about quickly delivering impressive results fit for posting on social media, which is exactly what HDR or low-light photography provides. Post-processing is rarely, if ever, performed on smartphone photos, so they have to look as spectacular as possible straight off the device (even if that often means they are not exactly realistic). These are two decidedly different approaches that many deemed highly unlikely to ever converge.
The truth of the matter, though, is that there is no real reason why this should be an "either/or" situation. Digital camera manufacturers could easily build controls that enable or disable computational photography functions on these powerful devices, and nobody who did not want or need them would have to use them. What these functions would really offer is choice: amateur photographers would not need to learn computer post-processing in order to get impressive HDR images, while professional photographers could use the same functions in practical, challenging situations, such as more convincing low-light or even near-dark captures.
Computational photography coming to advanced digital cameras is a win-win scenario. It's the kind of development that could rewrite the rules of modern photography in the same way digital cameras rewrote the rules of traditional photography two decades ago. In some respects, it's not even a matter of "if" computational photography is coming to this product category, but "when" - and the first manufacturer to properly implement it in a powerful model will reap the PR benefits of that move. Any bets on who that will be?