The Computational Revolution Begins with Camera-to-Cloud RAW.

In the wake of its acquisition of Frame.io, Adobe has announced new Camera to Cloud integrations, with the Fujifilm X-H2S set to be the first stills camera to natively send images to the cloud. Although this may appear to be a niche feature, look beyond the headlines and it has the potential to represent a generational shift. That is because it lets users save files straight to the cloud, much as smartphone services such as Google Photos already do, and it is the capabilities that follow from this that matter.

Storage has been one of the camera's most significant limitations; I would argue it is its single biggest one. In the analog world, significant sums were invested in expediting the workflow from photographer to newsroom, which meant physically retrieving the film before developing and publishing it. In a world with a dearth of visual reportage, a photo scoop was literally headline-grabbing.

The advent of digital photography caused a paradigm shift by enabling the instantaneous transmission of images, to the delight of newsrooms worldwide. However, the storage problem persisted. The 1991 Kodak DCS-100, the first DSLR, paired a Nikon F3 with a digital back tethered to an external unit housing the battery, hard disk, and monitor. Fujifilm had introduced the first fully integrated digital camera in 1988 with the DS-1P, which offered genuine local storage on a removable memory card.

And that is the paradigm in which we have been ensnared: storage on memory cards. The implosion of the camera market is a direct consequence of the smartphone cannibalizing sales, and this isn't because smartphones can't produce quality photographs. They do. However, consumers value internet connectivity much more highly, and the definition of "quality" is nebulous, to say the least.

The smartphone's success lies in producing photographs that are "good enough." This isn't a playing field camera manufacturers can compete on, because most people don't want to carry both a phone and a camera; the consumer mass market is well and truly over. What camera manufacturers have concentrated on instead is technological innovation: substantially improved hardware that generates much higher quality photographs. Any nod to connectivity has been notional. We have WiFi and Bluetooth incorporated in pretty much every model on the market, but these are basic in application, intended for one-off transfers or auto-backup. The camera remains the central device, with the smartphone adjunct to it.

In the meantime, smartphone manufacturers have been far more innovative in what they actually do with the images they capture, and so came the advent of computational photography. This was principally developed to compensate for limited image quality; recall that the 2012 iPhone 5 shipped with a tiny 8-megapixel sensor, in striking contrast to the abilities of Nikon's 36-megapixel full-frame D800.

Combining multiple exposures can considerably reduce image artifacts, and any that remain can be concealed by refining and resizing for social media. This fundamental concept extends to HDR, panorama, and night modes, to name a few applications. With Google extending the concept of the RAW file to computational (multi-shot) RAW, you have the premise for a step change in processing. Except, of course, camera manufacturers are essentially neglecting this space. Yes, there are panorama modes, but they are rudimentary and JPEG-based. To be fair, Olympus has been one of the few manufacturers to attempt to innovate; the OM-5 has a multitude of modes, such as handheld High Res Shot, Live Neutral Density (ND), focus stacking, Live Composite, Live Bulb, Interval Shooting/Time Lapse Movie, Focus Bracketing, and HDR.
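The core idea behind multi-shot modes is simple: averaging several aligned exposures suppresses random sensor noise by roughly the square root of the frame count. A minimal sketch in NumPy (using simulated noisy frames, since the function names and values here are illustrative, not any manufacturer's pipeline):

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned exposures to suppress random sensor noise.

    Averaging N frames reduces the standard deviation of random noise by
    roughly a factor of sqrt(N) -- the principle behind multi-shot
    computational modes such as night mode.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst of 16 noisy captures of the same flat scene.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 128.0)                # "true" scene luminance
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

merged = stack_frames(burst)
# Residual noise in the merged frame is roughly a quarter of a single
# frame's, since sqrt(16) = 4.
```

Real pipelines add alignment, ghost rejection, and tone mapping on top, but this averaging step is where the quality gain originates.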

Computational Photography Solutions for Cameras

So what can manufacturers truly do to stem the tide of smartphone innovation? There is a need to progress beyond the paradigm offered by existing camera firmware of a stand-alone device designed solely to capture images. One solution is to mirror the smartphone and do this in-camera by producing an Android ILC. Samsung attempted this with the Galaxy NX, and, more recently, Zeiss introduced the ZX1. Neither was a mainstream success. Camera manufacturers appear unwilling to offer their hardware up for smartphone integration, yet they are also unwilling to profoundly revise existing firmware to enable an app store of algorithms for computational processing.

Nikon has the potential to employ a hybrid of this approach through its inventive NX MobileAir app. This recognizes that you won't have a laptop with you and that a cable connection is preferable. In a workflow not dissimilar to Photo Mechanic, there is a catalog of images to which you can add IPTC metadata, perform rudimentary adjustments, and then upload to the internet (such as to an FTP server). This can be for a single photo, for collections, or completely automated.

None of this is extraordinary, except that it is achieved from a camera connected directly to a smartphone. Quite what is happening, and where, remains somewhat opaque, although the RAW files cannot be leaving the camera (given their size), which means either the camera exports a JPEG for editing on the smartphone or all the editing happens in-camera.

Given that Fujifilm has long offered in-camera RAW processing (and has even used the camera to do this when attached to a PC), I'd like to believe it is the latter. This would imply the camera renders the image before uploading it to a remote server, keeping image processing on the camera.

Could it also offer the potential to incorporate new image processing algorithms that could be controlled from the smartphone, potentially using a plug-in paradigm where programmers could access lower-level processing options to create new libraries of functions? Could this be divided into both camera (RAW) and smartphone (JPEG) operations? There is genuine scope to establish a co-dependent paradigm of smartphone-camera existence that manufacturers could exploit.

Camera to Cloud

And this brings us to Adobe's announcement of Camera to Cloud (C2C) integration for the Fujifilm X-H2S. This is the first genuine attempt to address the camera storage problem by moving the RAW files off the camera as soon as feasible. So how does the service work?

Frame.io was developed to facilitate real-time collaboration on video editing, and one aspect of that is real-time upload. If you have a Sony A7R IV, your uncompressed RAWs will weigh in at more than 100 megabytes each, which would take a prohibitively long time to transmit over a 3G connection. It's the rollout of 5G, with rates in the 15 to 30 megabytes per second range (and ultimately much higher), that has the potential to open up new opportunities, meaning those images could be uploaded as you capture them.
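The arithmetic behind that claim is worth making explicit. As a back-of-the-envelope sketch (the 3G throughput figure is an assumption; real-world rates vary widely):

```python
def upload_seconds(file_mb, rate_mb_per_s):
    """Back-of-the-envelope upload time for a single RAW file."""
    return file_mb / rate_mb_per_s

RAW_MB = 100      # roughly an uncompressed Sony A7R IV RAW
THREE_G = 0.4     # assumed ~3 Mbps of usable 3G throughput = 0.4 MB/s
FIVE_G = 20.0     # mid-range of the 15-30 MB/s 5G figure above

print(round(upload_seconds(RAW_MB, THREE_G)))   # 250 seconds over 3G
print(round(upload_seconds(RAW_MB, FIVE_G)))    # 5 seconds over 5G
```

At roughly five seconds per frame, uploads can genuinely keep pace with shooting; at four minutes per frame, they cannot.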

All of a sudden, the notion of using your camera, smartphone, or PC to process your images seems like old news. Storing your photos in the cloud and then using a smartphone or web app to process them makes much more sense. While Adobe has delivered its subscription apps via the web for more than a decade, it hasn't been able to deliver a fully featured dedicated web app in the way that Canva, Pixlr, or Fotor do, although Adobe Express is a nod in this direction, and the company is bringing Photoshop to the browser very soon, and for free. When it arrives, it will be transformative, providing photographers with familiar web-based tools.

Building on this foundation would be a series of automated workflows for standard computational techniques; for example, the camera could identify the next five images as a focus stack or HDR, which would then automatically activate the appropriate workflow and generate the output image. This could be downloaded directly back to the smartphone for immediate use or sent into another workflow for client upload, social media posting, or backup.
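The workflow described above is hypothetical, but its shape is easy to sketch: the camera tags a burst of uploads, and a cloud dispatcher routes each tagged group to the matching pipeline. Everything here (the tag names, the `Upload` type, the placeholder processing functions) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    filename: str
    tags: set = field(default_factory=set)   # e.g. {"focus_stack"}, set in-camera

def focus_stack(files):
    """Placeholder for a real focus-stacking pipeline."""
    return f"stacked({'+'.join(files)})"

def hdr_merge(files):
    """Placeholder for a real HDR merge."""
    return f"hdr({'+'.join(files)})"

WORKFLOWS = {"focus_stack": focus_stack, "hdr": hdr_merge}

def dispatch(uploads):
    """Group tagged uploads and route each group to its cloud workflow."""
    groups = {}
    for u in uploads:
        for tag in u.tags & WORKFLOWS.keys():
            groups.setdefault(tag, []).append(u.filename)
    return {tag: WORKFLOWS[tag](files) for tag, files in groups.items()}

# The camera marks the next five frames as one focus stack.
burst = [Upload(f"IMG_{i:02}.RAF", tags={"focus_stack"}) for i in range(5)]
result = dispatch(burst)
# result maps "focus_stack" to the merged output of all five frames.
```

The design point is that the camera only needs to attach metadata; all the heavy computation happens server-side, within existing firmware limitations.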

The smartphone's breakthrough was integrating the camera; real-time RAW could see the integration of the web. Not only does this vision of the future move the computational element off-camera, it also allows the paradigm to fit within existing firmware limitations. Pervasive 5G is the adhesive that binds the highest quality imaging (the camera) to best-in-class image processing, outstripping the smartphone at every level.

Quite how camera-to-cloud will develop remains to be seen, and the Fujifilm X-H2S is presently the only camera that supports upload in this manner. However, anticipate this to change as the service expands and Adobe develops the opportunities. Getting other camera manufacturers on board is critical; it could be via Adobe’s Camera to Cloud or other similar services. Either way, getting to a critical mass is essential to completely realize the potential. Roll on the future!
