Photograph: Gustave Le Gray – The Great Wave, Sète (1857).
There are possibly millions of photographs floating around the Internet labeled as HDR photography. Many, if not most, of these photographs show the tell-tale signs of “HDR Photography”: choked light in the highlights and the shadows, and wide halos around the edges. Calling these HDR Photographs is quite simply wrong. H=High, D=Dynamic, R=Range refers to the dynamic range of the scene, not what you see in the photograph. When a photograph is taken, the dynamic range inherent in the scene is converted to what the sensor or the film can record. This is not a new phenomenon. In the 19th century, due to the limitations of the emulsions, photographers could not record the tonal range of a landscape with the information in the sky intact. The scene had high dynamic range, and the film, or the emulsion, was incapable of recording the tonal range in the sky in a single exposure.
Masters like Gustave Le Gray dealt with this problem by taking two photographs, one for the sky and one for the land or seascape, and then printing the sky from the properly exposed sky negative. Later, when panchromatic film arrived with more even sensitivity to colors and a greater recording range, the problem was mostly handled by careful exposure and, where appropriate, by the choice of developer and development process. This allowed film to record a wider portion of the scene's dynamic range, but nobody called the resulting negative or the photograph HDR. The scene they photographed still contained a very high dynamic range.
Each generation of film, negative or slide, brought better handling of the scene brightness range. Yet, under demanding conditions, photographers resorted to special developers or simply accepted the fact that there was a limit to what the film could record. Of course, the Le Gray technique was still available to those who wanted to use it.
Enter digital photography, with its capabilities and limitations. Early sensors did not offer the dynamic range of good film, but each new generation of sensors has been able to record a greater dynamic range.
At this point it is important to talk about the recorded image, digital or film. Directly captured photographs yield images that are device-referred: what you see is determined by the capability of the device you are using. Part of the problem also lies in the way we record digital images, in 8-bit to 14-bit formats, converting them to at best 16-bit files. In an 8-bit image there are 256 shades that define each of the primary colors, Red, Green, and Blue. Although the best current digital cameras do not record in 16-bit, they will likely do so in the not too distant future. In a 16-bit image each of the RGB channels can use 65,536 shades to represent its color. This is still device-referred, and although a 16-bit image can carry much more information, it is still woefully inadequate to encode the scene brightness faithfully. Some new cameras even create “HDR” on the fly by exposing several frames and combining them into a single photograph that accommodates the bright highlights as well as the shadow detail. Here in-camera processing may produce a more faithful photograph, at least as a starting point, free from artificial grunge halos or the other “HDR” artifacts people cannot resist injecting. (See another photographer’s views and information on in-camera HDR.)
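As a quick check, those shade counts come straight from the bit depth: an n-bit channel can encode 2ⁿ distinct levels.

```python
# Shades per color channel as a function of bit depth.
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit channel: {2 ** bits} shades per color")
```

An 8-bit channel gives 256 shades and a 16-bit channel 65,536, which is why a jump in bit depth multiplies, rather than adds to, the number of available tones.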
With the accessibility of HDR image formats (there are several, such as Radiance and OpenEXR), the possibility of creating files that are scene-referred emerged. The technology we call HDR goes back to the mid-1980s, when Greg Ward developed the Radiance format. In 1997 Paul Debevec presented in his paper a way to create HDR images from photographs, and the rest is history, as they say. The HDR file format—and I am not talking about what is called HDR Photography—is a 32-bit floating point image format. The jump from 16-bit to 32-bit should not be construed as “twice as much as 16-bit”. The critical change is the “floating point” encoding of the image. Where 8- and 16-bit formats can use only whole numbers, like Red=32, Green=120, and Blue=200, the floating point format allows each color to carry decimal fractions, like .562902. There are different HDR file formats (again, not the photographs you see), and depending on the format they can contain roughly 10 to 76 orders of magnitude of dynamic range (a 1 followed by 10 to 76 zeroes), now that is big! This gives the HDR file format the ability to contain an incredible range of scene brightness information. Since it contains full scene brightness information, this format is called “scene-referred”, and the file can be processed just as if we were “photographing” that scene at different exposures.
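To see why floating point matters, here is a small sketch contrasting the fixed ladder of integer levels with the range a float can span. (Python's `float` is 64-bit, but the magnitudes below are those of the 32-bit floats used in HDR files.)

```python
import math

# Integer formats: a fixed ladder of levels, nothing outside it.
levels_8bit = 2 ** 8      # 256 levels per channel
levels_16bit = 2 ** 16    # 65536 levels per channel

# 32-bit floats span magnitudes from about 1.2e-38 to 3.4e38,
# roughly 76 orders of magnitude, so a candle flame and direct
# sunlight can sit in the same file without clipping either.
f32_min, f32_max = 1.2e-38, 3.4e38
span = math.log10(f32_max) - math.log10(f32_min)
print(round(span))  # about 76 orders of magnitude
```

The point is not the extra bits but the encoding: the float ladder's rungs get farther apart as values grow, matching how brightness works, instead of clipping at a hard ceiling.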
This is all great, but the brightness range in an HDR file (not the photograph!) is enormously wider than what our measly display monitors can handle. What are we to do? Well, one option is to shell out upwards of $40,000 for the Professional Reference Monitor PRM-4200 from Dolby Labs. Dolby acquired the BrightSide technology and improved upon it to offer this magnificent, and magnificently expensive, HDR display. This is not a real solution for mere mortals! The second option is to map the tones in the HDR file into a 16-bit or 8-bit file, which the current crop of displays can do a decent job of presenting to most viewers.
You see, there is indeed an HDR image format, but we cannot view it unless it is converted to LDR, Low Dynamic Range. The creation of the HDR image file from a series of digital captures about 2 stops apart is an almost purely scientific process. At this stage there are few options beyond reducing chromatic aberration, aligning the images, or eliminating the ghosting that may be introduced by slightly moving objects, like a tree branch. Then the algorithm goes and creates the best scene-referred HDR file it can. No artistic involvement there; just gather the information and store it in a format from which it can be extracted in different ways according to need. So the “real HDR photograph” is a bundle of information that cannot be viewed in its entirety using standard equipment. In the process of creating the HDR file we may view parts of it, as the Photomatix viewer provides, for instance.
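That merging step can be sketched in a few lines. This is a simplified illustration only, assuming a linear sensor response and already-aligned frames; `merge_exposures`, its weighting scheme, and the sample pixel values are my own inventions, not any particular tool's algorithm. Real software also recovers the camera response curve, aligns frames, and removes ghosts.

```python
def merge_exposures(frames, exposure_times):
    """frames: lists of 8-bit pixel values, one list per exposure.
    Returns relative scene radiance per pixel (floating point)."""
    def weight(z):
        # Trust mid-tones most; clipped shadows/highlights least.
        return min(z, 255 - z) + 1e-6

    radiance = []
    for pixel_values in zip(*frames):
        # Each frame's estimate of radiance is value / exposure time;
        # combine them with a weighted average.
        num = sum(weight(z) * (z / t)
                  for z, t in zip(pixel_values, exposure_times))
        den = sum(weight(z) for z in pixel_values)
        radiance.append(num / den)
    return radiance

# Two exposures of the same 3-pixel scene, 2 stops (4x) apart:
dark = [10, 60, 255]    # 1/400 s: highlights intact, shadows noisy
light = [40, 240, 255]  # 1/100 s: shadows clean, last pixel clipped
hdr = merge_exposures([dark, light], [1 / 400, 1 / 100])
```

Note how a pixel reading 10 in the short exposure and 40 in the one four times longer yields the same radiance estimate from both frames; the weighting only matters where one frame is clipped or noisy.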
What is erroneously called an “HDR Photograph” is the result of tone-mapping, and different tone-mapping engines may produce slightly, or significantly, different results. In this conversion several things are brought under control: overall scene contrast and brightness range is one; another is micro contrast, which controls the contrast between adjacent areas of different tonality. Due to the nature of the micro-contrast and smoothing adjustments, halos can be introduced, intentionally or unintentionally. This is what most “HDR Photographs” seem to strive to show, and that is why I asserted that this result has nothing to do with HDR recording; this kind of photograph need not even come from multiple exposures. Indeed, you will find many examples of “single file HDR” which mainly go after the altered look rather than capturing a wide dynamic range. Whether you like this look, also called “the grunge look” or the surreal look, is a matter of taste. My taste in photography and in the proper use of the tools makes them unattractive to me. Not because they are HDR, which they are not. I was there at the beginning of the “HDR revolution”: I used to manually blend a linear conversion with one developed for the shadows, from a single capture. (See related posts.) That was essentially an attempt to do what Le Gray, the original HDR photographer, did in the 19th century! That method is now incorporated into software like Lightroom or Adobe Camera Raw, which extract the full 14-bit information and let us render it as we would like by using the appropriate sliders. Creating a linear conversion has become quite obscure, as most software does not even show the linearly encoded image, although I suspect Breeze Browser may still provide a linear conversion. (I checked; it still provides Linear Conversion as well as a “combined” option.)
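As an illustration of the global half of tone-mapping, here is a minimal sketch using the well-known Reinhard operator L/(1+L); the function name and sample values are mine. It compresses an unbounded radiance range into the display's range while preserving tonal order. The halos come from the local, micro-contrast stage that real tone-mappers layer on top of a global curve like this one.

```python
def tonemap_reinhard(radiance):
    """Globally compress scene radiance into display range [0, 1)."""
    return [x / (1.0 + x) for x in radiance]

# Radiance values spanning nearly five orders of magnitude:
scene = [0.01, 0.5, 4.0, 500.0]
display = tonemap_reinhard(scene)
# Every value now fits an 8-bit display after scaling by 255,
# and the brightness ordering of the scene is preserved.
```

A curve like this, applied carefully, is what yields the “properly exposed photograph” look; pulling everything toward the mid-tones and exaggerating local contrast is what yields the grunge.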
The main use of HDR technology is to capture as much brightness information from a scene as possible and then decide how best to present the scene, on screen or in print. The current trend toward the “grunge look” is the result of the early tools not being very capable and some early users not getting the hang of them, creating unwanted halos; believe me, in those days halos were unwanted. They took the “artistic” defense and claimed that it was their interpretation, and the technology known as HDR became forever, and regrettably, linked to that look. A well-done photograph using an HDR intermediary step and careful tone-mapping should look like, well, a properly exposed photograph with rich tonal range and detail. That was, and to me still is, the main purpose of developing this technology. HDR is also used in films, yet we do not recall any wide halos in the Matrix or Spider-Man series, both of which used HDRI. Much CGI special effects work also uses HDRI, with none of the tell-tale signs of “grunge”.
Let us call a spade a spade. Grunge may be a look, but it is not HDR. Sure, poorly handled tone-mapping after HDR creation may yield wide halos, but not because the source is HDR; it is because the user is either not careful with the tone-mapping process or is making an “artistic” statement. It still raises the question: why go through all the trouble of capturing that immense range and then pull everything to the mid-tones, augmented by grunge halos? You can easily do that with practically any photograph.
Imagine, in the film days, Kodak or Fuji announcing a “New HDR Film Capable of Recording an 18-Stop Range of Brightness”. Would we have taken photographs with that film and then immediately started to scratch the edges? It sounds silly, right? Of course it does, because it is silly to scratch the film that contains that gorgeous photograph. Why do we feel compelled to do exactly the same thing, digitally, to our gorgeous photographs?