Increasing the dynamic range of photos

Correction 15 - increasing the dynamic range of the photo

Author: NON. Publication date: July 04, 2010. Category: Photo processing in Photoshop.

One great way to visually improve a photo is to increase its dynamic range. This Photoshop lesson will show you how to give each element in a photo its appropriate dynamic range using the Levels tool.

Let's open the original photo. It's a beautiful location, but the photo is pretty lifeless. Let's fix the photo while working in Photoshop. The idea is this: we will select mountains, water and grass one by one and increase their dynamic range. As a result, our photo should sparkle with bright colors.

Let's create a copy of the original layer - Ctrl+J .

We need to select the mountains. For this task I chose the Polygonal Lasso tool (L); use whichever selection tool you are comfortable with. Select the mountains. The selection does not have to be very precise, since in subsequent steps we can easily correct any inaccuracy.

Open the Levels tool (Ctrl+L). As the histogram shows, there are empty gaps at its left and right ends.

Increase the dynamic range of the mountains: pull the left slider to the right and the right slider to the left. Move the middle slider slightly to the right to darken the mountains a little.
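
For the curious, what the three Levels input sliders do can be sketched in a few lines of Python/numpy; the black point, white point and gamma values below are illustrative, and the file name is hypothetical:

```python
import numpy as np
from PIL import Image

def levels(img, black=0, white=255, gamma=1.0):
    """Rough equivalent of Photoshop's Levels input sliders.

    black/white: input black and white points (0-255).
    gamma: midtone slider; >1 lightens midtones, <1 darkens them.
    """
    x = img.astype(np.float32)
    # Stretch the chosen input range to the full 0-255 range.
    x = (x - black) / max(white - black, 1)
    x = np.clip(x, 0.0, 1.0)
    # Midtone (gamma) correction.
    x = x ** (1.0 / gamma)
    return (x * 255).astype(np.uint8)

# Example: pull the endpoints in and darken the midtones slightly,
# as the lesson does for the mountains (values are illustrative).
photo = np.array(Image.open("mountains.jpg"))   # hypothetical file name
adjusted = levels(photo, black=20, white=235, gamma=0.9)
Image.fromarray(adjusted).save("mountains_levels.jpg")
```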

Remove the selection (Ctrl+D); this is what we get.

At the second stage, select the water in the same way and call Levels again (Ctrl+L). As you can see, there is a gap on the left side of the histogram.

Move the right slider to the left, and move the middle slider a little to the left.

Remove the selection (Ctrl+D). This is what we get. As you can see, there are dark stripes along the edges of the banks that should not be there. Let's fix this flaw.

Select the Eraser ( E ). Set the brush hardness closer to zero.

Let's process the dark stripes along the edges of the river. This is what we got.

At the third stage of our correction, we will select the grass.

Call “ Levels ” again - Ctrl+L , move the sliders as in the first step.

Here is our final image.

Compare the image before and after processing the photo in Photoshop . As you can see, in a very simple way you can achieve very impressive results.

If you don’t want to miss interesting lessons on photo processing, subscribe to the newsletter.

Dynamic range. Part 1

Instead of a beautiful sky in a sunset photo, did you end up with a white spot? Or maybe, on the contrary, you managed to capture the sunset, but there is only a black background below? Have you photographed a person in front of a window, and behind him a white veil has formed in the frame? It's time to figure out where these errors come from and how to fix them!

You've probably noticed that sometimes it is very difficult to show both the bright sun and dark details in a frame: either the sky turns out to be overexposed, or the lower part of the frame becomes too dark. Why is this happening? The fact is that the camera is capable of perceiving a limited range of brightness. We're talking about dynamic range. In the days of photographic film, this concept was called “photographic latitude.”

Lack of dynamic range in the frame: the sky is “lost”, replaced by a white spot.

The sky is preserved, all details are included in the dynamic range.

When is dynamic range most likely to be lacking?

In practice, photographers are constantly faced with the problem of insufficient dynamic range. First of all, it will be noticeable when shooting contrasting scenes.

A classic example is shooting at sunset. It will not be so easy to capture both the bright sun and the shaded areas at the bottom of the frame, the ground. The lack of range is also felt when photographing in backlit conditions (for example, if you are shooting indoors in front of a window).

All areas that are not included in the dynamic range in the image appear either too light or dark and lack all details. This, of course, leads to a loss of image quality and technical defects.

Some examples of high dynamic range scenes:

Almost any landscape

Some city sketches

Taking pictures with the Moon; night shooting in the city

Portraits in backlight

What is the dynamic range of a camera? How to measure it?

So, dynamic range (DR) is a characteristic of a camera that is responsible for the range of brightness it can show in one frame. Typically, manufacturers do not indicate this parameter in the technical specifications of the camera. However, it can be measured by looking at how much detail in the dark and light areas of the frame a particular camera can convey.

Compare: a smartphone camera has a narrow dynamic range, while a Nikon D810 DSLR has a wide one.

A shot taken with a smartphone camera. Details are lost in both light areas (sky) and dark areas (bushes). Instead, there are white and black spots in the photo. This is an example of narrow dynamic range.

A shot taken with a DSLR camera. Details are preserved both in light areas (all shades of the sky are visible) and in dark areas. This is an example of a fairly wide dynamic range.

In addition, there are special laboratories that measure camera characteristics, for example DXOmark, which has a large database of tested cameras. Note that this laboratory measures dynamic range at minimum ISO values, so at higher ISO settings the picture may change somewhat.

Dynamic range is measured in exposure stops (EV). The more stops of exposure a camera can display in a photograph, the wider its dynamic range. For example, the Nikon D7200 has a dynamic range of 14.6 EV (according to DXOmark). This is an excellent result, however, it is worth noting that in general the dynamic range is usually higher in cameras with full-frame sensors, such as Nikon D610, Nikon D750, Nikon D810. But the dynamic range of compact cameras can be only 10 EV, and even less for smartphones.
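
To get a feel for what these numbers mean, each stop doubles the brightness ratio a camera can record, so the figures above translate into contrast ratios like this (a quick illustrative calculation):

```python
# Each stop (EV) of dynamic range doubles the brightness ratio a camera can
# record between its darkest and brightest usable tones.
ratio_d7200   = 2 ** 14.6   # ≈ 24,800:1 for the 14.6 EV quoted above
ratio_compact = 2 ** 10     # ≈ 1,000:1 for a 10 EV compact camera
print(f"{ratio_d7200:,.0f} : 1 vs {ratio_compact:,.0f} : 1")
```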

Note that the potential of DSLR cameras (including their dynamic range) can only be assessed when working with RAW files. After all, many in-camera settings will affect JPEG images. For example, the camera can greatly increase the contrast of images, narrowing the dynamic range. On the other hand, many cameras can artificially expand it when shooting in JPEG, but more on that later.

How to lose dynamic range in a photo? Common mistakes

Even if a camera has a wide dynamic range, it doesn't guarantee that your photos will show all the detail in the dark and bright areas. Let's look at the main mistakes photographers make that lead to a significant reduction in dynamic range and poor detail elaboration.

  • Exposure errors. Exposure errors inevitably mean that either overexposed or pitch-black areas will appear in the photo. Even a wide dynamic range will not save a frame ruined by incorrect exposure.

Let's look at an example of an overexposed frame:

Theoretically, the dynamic range of the camera should have been enough for this scene, but there was a loss of detail in the bright areas of the frame (in the sky) due to incorrectly adjusted exposure. The frame turned out too bright.

The opposite situation is that the frame is underexposed and dark.

This time the details were lost in the dark areas of the frame.

  • Processing errors. Rough processing of photos on a computer or the use of in-camera image processing filters can greatly reduce the dynamic range in your shots. Therefore, do not overdo contrast enhancement, color saturation, exposure correction, and so on.

Original frame: all details are preserved thanks to the wide DR and correct exposure of the image.

The photographer overdid it with processing - details in dark and light areas were lost.

Fitting into the dynamic range

Often, even when shooting complex scenes with large differences in brightness, you don’t have to resort to any complex tricks to expand the dynamic range. You just need to wisely use what the camera can provide.

  • Choose suitable shooting conditions . To get high-quality shots, you need to choose suitable lighting conditions. Often the photographer drives himself into conditions in which it is almost impossible to take a high-quality photograph. Instead of trying to capture a scene that is too contrasty, it's worth considering whether it might be better to choose a different angle, different time of shooting, or different lighting. For example, the sunset sky will balance in brightness with the earth after sunset. By the way, it’s not always worth taking the sun into the frame. Think about whether you can do without it. This way you will be able to avoid unnecessary overexposure. This also applies to shooting portraits in front of a window. It is enough to take a couple of steps from the window and shoot from the side of it - the bright window will not be overexposed, and beautiful side lighting will fall on your model.

When taking this photo, I did not include the rising Sun, which is located slightly to the right of the frame, in the composition. This way I saved myself from overexposure in the area of the solar disk.

When shooting a portrait outdoors, you don’t have to include the sun in the frame. The main thing is to get beautiful lighting from it.

  • Watch the exposure. As we have already said, in order to preserve maximum detail, a photograph must be exposed correctly. Pay attention to the exposure settings, use appropriate metering modes and the histogram. Always review the captured frames and check their brightness; if necessary, shoot additional takes lighter or darker so that you have plenty to choose from later.

  • Take photos in RAW. If you're shooting a complex scene, it's always better to have some wiggle room. The RAW format gives you plenty of headroom, because all the information recorded by the camera's sensor is saved. When processing, you can lighten the dark areas of the photo or even slightly "stretch out" the detail in the light areas of the frame. Note that RAW allows you to lighten dark areas much better than darken light ones; therefore, to protect the highlights, photographers sometimes deliberately make frames darker than necessary, so that later during processing they can "pull" the needed detail out of the shadows. Almost any modern RAW converter can do this, including Nikon Capture NX-D. We will prepare a special material on expanding dynamic range with its help.

Shadows are “stretched out” in the RAW converter

In the next part of the lesson we will talk about the possibilities of expanding dynamic range. Some of them are hidden in the camera itself and are available to any photographer. Stay with us!

Dynamic Range Enhancement #1

In this lesson, I will show how to create the illusion of expanded dynamic range in a photograph (in the final image it does not, of course, actually expand; the tonal range of individual areas is simply redistributed). This is not HDR in the sense in which the term is applied to pictures hypertrophied by various detailing plug-ins. Such processing should be used only where it is really necessary, and for our picture there is no such need. We will work with standard Adobe Photoshop CS5 tools.

To realize the effect of expanded dynamic range, we need three JPEG images taken with exposure bracketing of ±2 stops, or three TIFF or PSD images converted from one RAW file with different exposures. How to do this is described in the lesson "Camera Raw for Beginners #12".

You can, of course, pull out detail in the shadows and highlights while still working in Adobe Camera Raw, but the algorithms behind the Fill Light and Exposure compensation parameters, which are based on much the same unsharp masking as the Unsharp Mask filter, are still imperfect, and the output picture quality is not very good. In Photoshop we can fully control the process and suppress unwanted artifacts in time.

Load the pictures into one file in Photoshop as layers. You should get this sandwich of layers: the bottom layer is the picture with normal exposure, the second is the picture exposed at +2 stops, and the top layer is the picture exposed at -2 stops.

Now turn off the visibility of the top two layers and look at the original picture with normal exposure. We see that there is a lack of detail in the shadows and highlights. Let's fix the situation.

First turn on the visibility of the +2EV layer and go to it. From this image we will take the detail for the shadow areas. We will not use masks; instead we will use the layer style, or rather the blending options. To do this, open the Layer Style window. We are interested in the lower block with the Blend If tonal-range sliders (circled in red).

Then split the right slider, which is responsible for light tones, into two halves (hold down Alt and drag one half of the slider with the mouse). Move this half of the slider to the far left position.

Now the shadows in the image have become much lighter, and details have appeared in them. True, the image has become low-contrast, but we will fix this in the following steps.

Now turn on the visibility of the top layer

From this layer we will take the detail for the light areas. Call up the Layer Style window by double-clicking on the layer outside its name and thumbnail. Now split the left slider of the top layer, which is responsible for dark tones, and move its half to the far right position.

We received a low-contrast image, but it contains information in both light and dark areas.
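
For those who think in code, here is a rough numpy sketch of what these split Blend If sliders do, assuming three aligned 8-bit frames exported with hypothetical file names; the linear opacity ramps below are only an approximation of Photoshop's blending:

```python
import numpy as np
from PIL import Image

def luminance(img):
    # Simple Rec. 709 luma as a stand-in for Photoshop's blend gray value.
    return img[..., 0] * 0.2126 + img[..., 1] * 0.7152 + img[..., 2] * 0.0722

# Hypothetical file names for the bracketed frames.
normal = np.array(Image.open("exp_0ev.tif")).astype(np.float32) / 255
plus2  = np.array(Image.open("exp_plus2ev.tif")).astype(np.float32) / 255
minus2 = np.array(Image.open("exp_minus2ev.tif")).astype(np.float32) / 255

# Splitting the "This Layer" highlight slider across the full range makes the
# +2 EV layer fade out linearly as its own tones approach white...
w_plus = 1.0 - luminance(plus2)     # opaque in shadows, transparent in highlights
# ...and splitting the shadow slider makes the -2 EV layer fade out toward black.
w_minus = luminance(minus2)         # opaque in highlights, transparent in shadows

result = normal.copy()
result = result * (1 - w_plus[..., None]) + plus2 * w_plus[..., None]
result = result * (1 - w_minus[..., None]) + minus2 * w_minus[..., None]

Image.fromarray((np.clip(result, 0, 1) * 255).astype(np.uint8)).save("blended.tif")
```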

To make the image look good, we will use the HDR Toning command. But first we need to create a duplicate of the image, since we will still need the layered structure. A duplicate can be created from the History palette.

When the command is applied, the image layers are merged into the background layer, and the image itself may not look very good at first. This is because the command applies default settings that are not appropriate for this particular image.

We need to increase detail and adjust the tonal range. In each specific case the settings are chosen individually; for this image they turned out like this:

The image itself began to look much better, but there is one unpleasant moment - halos around contrasting objects on a uniform background (foliage, lights and wires). This is very clearly visible in the image fragment at 100% scale.

Let's get rid of these artifacts. Go back to the layered picture, create a new empty layer on top of the others and use the Apply Image command. Select the duplicate image as the source, with RGB as the channel. Now our image with HDR Toning applied appears on this layer.

Let's name this layer Dark (meaning the halos it handles) and set its blending mode to Darken.

Then duplicate the layer, name it Light (again referring to halos), and change its blending mode to Lighten. Reduce the opacity of this layer until the light halos are reduced to an acceptable level. It is best to control this process at 100% image scale. Below I have placed two images side by side for comparison.
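
Darken and Lighten are simple per-channel minimum and maximum operations, so the halo-suppression layers can be sketched roughly like this (a toy numpy example with random data standing in for the real layers; the 0.5 opacity is illustrative):

```python
import numpy as np

def blend_darken(base, layer, opacity=1.0):
    """Photoshop's Darken: keep whichever pixel is darker, channel by channel."""
    return base * (1 - opacity) + np.minimum(base, layer) * opacity

def blend_lighten(base, layer, opacity=1.0):
    """Photoshop's Lighten: keep whichever pixel is lighter, channel by channel."""
    return base * (1 - opacity) + np.maximum(base, layer) * opacity

# Toy data standing in for the layered image (base) and the HDR-toned duplicate.
rng = np.random.default_rng(0)
base  = rng.random((4, 4, 3)).astype(np.float32)
toned = rng.random((4, 4, 3)).astype(np.float32)

# "Dark" layer at 100%: light halos from toning cannot get brighter than the base.
step1 = blend_darken(base, toned)
# "Light" layer at reduced opacity: bring back lighter toned detail only partially.
result = blend_lighten(step1, toned, opacity=0.5)
print(result.shape)
```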

Now duplicate the top layer again, set the blending mode to Normal and opacity to 100%. Then we will create a mask on it.

Using a black brush at 100% opacity, paint over the areas of the layer mask where you want to get rid of ghosting; here it is the sky area. Its color may change slightly, but this is not a big deal.

Now switch the image to Lab mode, allowing the layers to be flattened, and create a Curves adjustment layer. Again, the curves will differ in each case. For this image, I slightly increased the contrast of the Lightness channel and increased the steepness of the curves in channels a and b. As a result, the image gained contrast and saturation while keeping all the detail.

The result is this picture:

In principle, we can stop there. If you want to increase local contrast, you can do it as follows: create a layer from the flattened image on top. To do this, press the key combination CTRL+ALT+SHIFT+E.

Then apply the Unsharp Mask filter, but with a small Amount and a large Radius. Dan Margulis calls this method HIRALOAM (High Radius, Low Amount).

We obtain an image with enhanced local contrast.

Now we change the opacity of the top layer to taste, switch the image back to RGB mode and save.

Expansion of "dynamic range"

Why expand the dynamic range at all? To clearly see as many details as possible in both the light and dark areas. The human eye can do this without much difficulty, but unfortunately a camera sensor cannot.

Here is an example of such a photograph and an example of what can be done from it in 10 minutes of work in Photoshop:

PS full-size screenshots are located under the thumbnails.

The end result does not claim to be a “masterpiece” and was deliberately made a little “more than necessary” in order to show what can be achieved.

Minimum of what is needed: one RAW file (as in this case)
It is advisable to have: several RAW files with different exposures (-2EV, 0EV, +2EV)

What software was used: Lightroom 3.6, Photoshop CS5, and plugins from Nik Software: Color Efex Pro 4 and Dfine 2.0

I will describe it step by step:

1) Primary processing of the original in Lightroom using the "Beginning" preset (which I wrote about in this article): setting White Balance to Auto, slightly increasing sharpness and saturation, slightly reducing noise and correcting the optical distortions of the lens. The full "picture" of how the preset works is shown here

2) Next, make a virtual copy of the processed frame in Lightroom: right-click on the original and select Create Virtual Copy. Darken the copy (by moving the Exposure slider to the left) until the sky looks decent; for the copy you can also shift the White Balance slightly to the left to give the sky a bluer tint. Brighten the original until all the buildings are clearly visible. (You can do it the other way around and darken the original while lightening the copy; there is no difference.) Now select both frames and open them in Photoshop as Layers.

3) Place a light layer on top, a dark layer on the bottom. Click on the light layer ( 1 ), select the Magic Wand tool ( 2 ), Add to Selection ( 3 ), Tolerance 20% ( 4 ) and start clicking on the most exposed areas of the sky.

When all areas are selected, you can zoom in and check small areas that may not have been captured.

When the selection is finished, click the Add Layer Mask button while holding down Alt: the selected areas will be filled with black on the mask. (Without Alt, the unselected areas would be filled with black instead.)

4) We enlarge the image so that the places where the mask “worked” can be clearly seen.

Here you can clearly see that the mask has hard edges, which is not what we want, since the transition of one layer into the other is too rough. To correct this, you need to "soften" the mask. Right-click on the mask and select Refine Mask.

5) Try specifying the same parameters as mine. If you are not satisfied with the result of the "softening", experiment with your own settings.

It is advisable to output the final result to a new layer (Output To: New Layer with Layer Mask) so that the original mask remains unchanged in case you want to try the "softening" again.

6) Now the blending of the top layer into the bottom one looks much more pleasant.
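
Roughly the same light/dark blend can be sketched in code: a luminosity threshold stands in for the Magic Wand, and a Gaussian blur stands in for Refine Mask's softening (the file names, the 0.92 threshold and the blur radius are all assumptions):

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# Hypothetical file names for the two Lightroom versions of the same RAW.
light = np.array(Image.open("light_version.tif")).astype(np.float32) / 255
dark  = np.array(Image.open("dark_version.tif")).astype(np.float32) / 255

# Rough stand-in for the Magic Wand step: pick near-blown pixels of the light frame.
luma = light @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
sky_selection = (luma > 0.92).astype(np.float32)   # 1 where the sky is blown out

# The layer mask hides (black) the selected sky on the top, light layer.
mask = 1.0 - sky_selection
# Rough stand-in for Refine Mask "softening": feather the hard edge.
mask = gaussian_filter(mask, sigma=4)

result = light * mask[..., None] + dark * (1.0 - mask[..., None])
Image.fromarray((np.clip(result, 0, 1) * 255).astype(np.uint8)).save("merged.tif")
```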

7) If you find areas that you forgot to select in step 3, click on the mask and pick a soft brush. Choose black or white depending on what you want to do (black hides that area of the top layer; white shows it), a size small enough to paint the details carefully, and an Opacity of 40-60%, and start painting.

8) If you click on the mask with the Alt key pressed, the mask itself will be shown in the main window.

It will be clearly visible which parts are painted well and which are not. To return to the normal view, Alt-click on the mask again.

9) Next, you can slightly darken very light areas of the houses by choosing a large brush size and Opacity of 10-20%. After that my mask looked like this:

10) When you are finished with the mask and happy with the overall look, combine all the layers into one: go to the Layer menu and select Flatten Image.

11) At this stage, you can save the finished result, which will look like this

It turned out better than it was... didn't it?

12) If you have Nik Software plugins installed, such as Color Efex Pro 4 and Dfine 2.0, then you can go further... Launch Color Efex Pro 4 and select Tonal Contrast.

Move the sliders until you like the result displayed on the right inside the plugin. For Contrast Type, Fine is preferable, but if you want a stronger effect, try the other options in this menu. In this example, I set the sliders almost to maximum to show the "power" of Tonal Contrast. Click OK and you get a new layer with the result of Tonal Contrast.

13) If you look at the screenshot in step 9, you will see that the upper right corner is slightly sloppily "smeared" with white. At step 10 this smudge is barely noticeable, but after processing the image with Tonal Contrast it comes out.

We check the image for similar flaws and fix them with the Clone Stamp tool.

14) A frequent "side effect" of Tonal Contrast is increased noise, especially with the Standard or Strong contrast type. Noise is suppressed very well using Dfine 2.0.

15) The result of the plugin can be assessed by turning off the top layer (click its "eye" icon); it is especially visible when an area is greatly enlarged.

If you are satisfied with the result of the noise reduction, combine all the layers into one once more and save the finished result. The saved file will be automatically imported into Lightroom, from where it can be exported to JPEG and posted on the Internet.

For not very complex photographs, it takes me 5-10 minutes for all this work. More complex options, where there are a lot of “tricky” elements, require much more time... sometimes 20-30 minutes... but you can get a good result.

Here is the original again, as it was at the very beginning, and what version I got after the steps described above:

And for comparison, the options obtained in step 11 (before applying plugins from Nik Software ) and after them:

And my final version, which looks more or less natural without going off scale with the sliders in Color Efex Pro

If someone wants to share their options for “expanding the dynamic range,” write in the comments (or better yet, make a similar article, I (and everyone else) will be happy to read it and take note of the proposed moves).

If you have any comments or tips on how to improve/simplify achieving the final result, write and we’ll discuss

And finally, as a little treat, here are a couple of my photographs obtained this way:

It seems to have worked out well. The first photo (the Town Hall building in Hamburg) was also taken from 1 frame; the second (Shanghai Airport Terminal) is made of 3 frames (-2EV, 0EV, +2EV)


Artificially increasing the dynamic range of an image.


Working on the shadows

First we'll work on the shadows to correct the exposure.

Step 1

Open the digital photo you want to edit in Photoshop. It is recommended to use a photo with good exposure. Any image compression (eg JPEG compression) should be minimal. The quality of the result largely depends on the original image.

Step 2

Create a new Levels adjustment layer by selecting it from the Layer > New Adjustment Layer menu. In the Levels dialog box, adjust the middle slider on the Input scale so that the shadows are properly exposed. Don't worry if the highlights look a little overexposed.

Step 3

Select the layer mask of the Levels adjustment layer by clicking on the mask thumbnail in the Layers palette, which should result in a white frame appearing around it.

Step 4

Open the Apply Image tool from the Image menu and apply the following settings:

• Layer: Background
• Channel: RGB
• Invert: Checked
• Blending: Multiply or Normal (both do the same thing)
• Opacity: 100%

Step 5

The layer mask thumbnail should now look like an inverted monochrome version of the image. This layer adjusts shadows, so we'll rename it "Shadows".

Working on the highlights

Now we will correct the highlights.

Step 6

Duplicate the current layer and name the duplicate "Highlights". We will work on correcting the exposure of the highlights.

Step 7

Select the layer mask of the Highlights adjustment layer, then invert it by pressing Ctrl+I.

Step 8

Open the Levels dialog box by double-clicking on the Highlights levels adjustment layer icon and position the middle slider on the input value scale so that the highlights are well exposed.
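
In essence, the two adjustment layers are a lightening Levels move masked by the inverted image and a darkening move masked by the image itself. A minimal numpy sketch of that idea, with an assumed file name and illustrative gamma values:

```python
import numpy as np
from PIL import Image

def gamma_lift(img, gamma):
    """Stand-in for moving the middle Levels slider (gamma > 1 lightens)."""
    return np.clip(img, 0, 1) ** (1.0 / gamma)

photo = np.array(Image.open("photo.jpg")).astype(np.float32) / 255  # hypothetical name
luma = photo @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)

# "Shadows" layer: a lightening Levels adjustment masked by the inverted image,
# so it acts mostly on dark pixels (this is what Apply Image with Invert builds).
shadow_mask = (1.0 - luma)[..., None]
result = photo * (1 - shadow_mask) + gamma_lift(photo, 1.6) * shadow_mask

# "Highlights" layer: the mirror case, a darkening adjustment masked by the
# (non-inverted) luminosity, so it acts mostly on bright pixels.
highlight_mask = luma[..., None]
result = result * (1 - highlight_mask) + gamma_lift(result, 0.7) * highlight_mask

Image.fromarray((np.clip(result, 0, 1) * 255).astype(np.uint8)).save("tonemapped.jpg")
```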

Restoring contrast and saturation

As a result of the manipulations performed, the contrast and saturation decreased. Below is a simple trick to restore the original level of contrast and saturation.

Step 9

The processed photo should now show more detail than the original, but it is easy to notice that the correction has weakened the contrast. This is easy to fix: simply create a new Brightness/Contrast adjustment layer by selecting it from the Layer > New Adjustment Layer menu and increase the contrast. Don't forget that this layer should be at the top of the Layers palette.

Step 10
Copy the background layer (Ctrl+J) and move it to the top of the Layers palette, then change its blending mode to Saturation and adjust the strength of the effect by choosing a suitable opacity for this layer.

Final result

Before and After
Compare images before and after artificial dynamic range enhancement

The Shadows/Highlights tool versus artificial dynamic range expansion

What is the dynamic range of a camera, and how can it benefit a photographer?

Dynamic range is one of the many parameters that everyone looks at when buying or discussing a camera. This term is often used in various reviews along with the parameters of noise and matrix resolution. What does this term mean?

It should be no secret that the dynamic range of a camera is the camera's ability to recognize and simultaneously convey light and dark details of the scene being photographed.

In more detail, a camera's dynamic range is the range of tones it can recognize between black and white. The greater the dynamic range, the more of these tones can be recorded and the more detail can be extracted from the dark and light areas of the scene being filmed.

Dynamic range is usually measured in exposure values, or stops. While it seems obvious that capturing as many tones as possible is important, for most photographers the priority remains creating a pleasing image, and that does not mean every detail must be visible. For example, if the dark and light details of the image are diluted with gray undertones rather than black or white, the entire picture will have very low contrast and look rather dull and boring. The key is to know the limits of the camera's dynamic range and understand how to use it to create photographs with a good level of contrast and without blown highlights or blocked-up shadows.

What does the camera see?

Each pixel in the image represents one photodiode on the camera sensor. Photodiodes collect photons of light and convert them into electrical charge, which is then converted into digital data. The more photons that are collected, the larger the electrical signal and the brighter the pixel will be in the image. If the photodiode does not collect any photons of light, then no electrical signal will be created and the pixel will be black.

However, sensors come in a variety of sizes and resolutions, and they are manufactured using different technologies that affect the size of each sensor's photodiodes.

If we consider photodiodes as cells, then we can draw an analogy with filling. An empty photodiode will produce a black pixel, while 50% full will show gray and 100% full will be white.

Mobile phones and compact cameras, for example, have very small image sensors compared to DSLRs. This means they also have much smaller photodiodes on the sensor. So even though both a compact camera and a DSLR may have a 16-million-pixel sensor, their dynamic range will differ.

The larger the photodiode, the greater its ability to store photons of light compared to a smaller photodiode in a smaller sensor. This means that the larger the physical size, the better the diode can record data in light and dark areas

The most common analogy is that each photodiode is like a bucket that collects light. Imagine 16 million buckets collecting light compared to 16 million cups. The buckets have a larger volume, so they can collect more light before they fill up. The cups have a much smaller capacity, so they fill to the brim (saturate) with far fewer photons; accordingly, a pixel reaches full brightness with far fewer photons than it would with the larger photodiodes.
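
The bucket analogy can be put into numbers: a standard way to estimate a sensor's engineering dynamic range is the base-2 logarithm of the "bucket" capacity divided by the noise floor. A tiny sketch with made-up electron counts:

```python
import math

def sensor_dr_stops(full_well_e, read_noise_e):
    """Approximate dynamic range of one photodiode in stops:
    the ratio of the 'bucket' capacity to the noise floor, in base-2 log."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative (made-up) numbers for a large and a small photodiode:
print(f"Large pixel: {sensor_dr_stops(60000, 4):.1f} EV")   # ≈ 13.9 EV
print(f"Tiny pixel:  {sensor_dr_stops(5000, 3):.1f} EV")    # ≈ 10.7 EV
```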

What does this mean in practice? Cameras with smaller sensors, such as those found in smartphones or consumer compacts, have less dynamic range than even the smallest system cameras or DSLRs that use larger sensors. However, it's important to remember what affects your images is the overall level of contrast in the scene you're photographing.

In a scene with very low contrast, there may be little or no difference in the tonal range captured by a cell phone camera and a DSLR. Both cameras' sensors are capable of capturing the full range of tones in a scene if the lighting is set correctly. But when shooting high-contrast scenes, it will be obvious that the greater the dynamic range, the greater the number of halftones it can convey. And since larger photodiodes have a better ability to record a wider range of tones, they therefore have a greater dynamic range.

Let's see the difference with an example. In the photographs below you can observe differences in the reproduction of halftones by cameras with different dynamic ranges under the same conditions of high contrast lighting.

What is bit depth?

Bit depth is closely related to dynamic range and dictates how many tones the camera can reproduce in an image. Although digital photos are full color by default, the camera sensor doesn't actually record color directly; it simply records a digital value for the amount of light. For example, a 1-bit image contains the simplest "instruction" for each pixel, so there are only two possible results: a black pixel or a white pixel.

A 2-bit image already allows four different levels (2×2). If both bits are on, the pixel is white; if both are off, it is black. The two remaining combinations give two more tones, so a two-bit image produces black and white plus two shades of gray.

If the image is 4-bit, there are therefore 16 possible combinations to produce different results (2x2x2x2).

When it comes to discussions of digital imaging and sensors, the most common references are to 12-, 14-, and 16-bit sensors, capable of recording 4096, 16384, and 65536 different tones respectively. The greater the bit depth, the more luminance or hue values can be recorded by the sensor.
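
The tone counts above are simply powers of two, which a one-liner can confirm:

```python
for bits in (1, 2, 8, 12, 14, 16):
    print(f"{bits:>2}-bit: {2 ** bits:>6} tones per channel")
# 8-bit JPEG -> 256, 12-bit RAW -> 4096, 14-bit RAW -> 16384, 16-bit -> 65536
```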

But there is a catch here too. Not all cameras are capable of producing files with the color depth that the sensor can produce. For example, on some Nikon cameras, the source files can be either 12-bit or 14-bit. The extra data in 14-bit images means that the files tend to have more detail in the highlights and shadows. Since the file size is larger, more time is spent on processing and saving. Saving raw images from 12-bit files is faster, but the tonal range of the image is compressed. This means that some very dark gray pixels will appear as black, and some light tones may appear as full white.

When you shoot in JPEG format, the files are compressed even more. JPEG images are 8-bit files consisting of 256 different brightness values, so many of the fine details available for editing in the original files shot in RAW format are completely lost in the JPEG file.

Thus, if a photographer has the opportunity to get the most out of the entire possible dynamic range of the camera, then it is better to save the sources in a “raw” form - with the maximum possible bit depth. This means that your photos will store the most information about highlights and shadows when it comes to editing.

Why is understanding a camera's dynamic range important for a photographer? Based on the available information, several applied rules can be formulated, adhering to which increases the likelihood of obtaining good and high-quality images in difficult photographic conditions and avoiding serious errors and omissions.

  • It's better to shoot a frame slightly lighter than slightly darker. Detail in bright areas is easier to pull back because it is not as noisy as detail in the shadows. Of course, this rule applies only when the exposure is otherwise more or less correct.
  • When metering exposure, it is often better to sacrifice detail in the shadows and to handle the highlights more carefully.
  • If there is a large difference in the brightness of individual parts of the composition being photographed, the exposure should be measured in the dark part. In this case, it is advisable to level out the overall brightness of the image surface as much as possible.
  • The optimal time for shooting is considered to be morning or evening, when the light is distributed more evenly than at noon.
  • Portrait photography will be better and easier if you use additional lighting using off-camera flashes (for example, buy modern on-camera flashes https://photogora.ru/cameraflash/incameraflash).
  • All other things being equal, you should use the lowest possible ISO value.



Dynamic HDR

Dynamic range (DR) is the range of brightness within which we can still recognize (see) objects.

You may ask: is there really anything where we DO NOT SEE? Of course there is, even more than we can imagine. When we step out of a dark room into the light, at first we see nothing; we cannot recognize objects, everything looks white to us. The eye must adapt, and only then do we begin to distinguish those same objects. Likewise, when we enter a dark room from bright light, we see nothing until our eyes get used to it. This is that same dynamic range, beyond which you CANNOT GO.

A stop in photography is a doubling of the amount of light, that is, when the scene becomes twice as bright. The dynamic range of human vision is about 14 stops. For comparison, let's see how many such stops there are outside on a sunny day: about 20.

Full DR of the scene in EV (stops)

Stops are designated "EV". In the brightest places outside on a sunny day it is +10 EV, and in the deepest shade -10 EV, for a total of 20 EV. This is why our eyes need time to adapt when we go from bright light into a dark room and vice versa: our vision cannot cover such a large difference in illumination at once. Below is the dynamic range (DR) of human vision; it is equal to 14 stops, or 14 EV.

Human DR: 14 EV

Why did nature do this to us? It's simple: so that we don't peep from the street into uncurtained windows; we wouldn't see anything in there anyway because of the narrowness of our dynamic range. That's a joke, of course. Seriously, in everyday life 14 EV of dynamic range is enough. For example, on a cloudy day in nature there are only about 6 stops, so we can easily see and distinguish all objects.

Below is a sunny-day scene showing our perception in different conditions. It is clear that while we are in a dark room, we cannot see what is brightly lit: it is all perceived as one white mass. And while we are in the light, we cannot distinguish what is in the shadow: it is all perceived as one black mass.

Areas of the DR that a person does NOT see

Two areas are clearly visible: 3 stops in the shadows, which we do not see (everything there looks black to us), and another 3 stops in the light, where everything looks white. But as soon as we go out into the light and wait for our eyes to adapt, we see EVERYTHING in the highlights, although we then lose 6 stops in the shadows. And vice versa: as soon as we go into a dark room, over time we begin to see everything there, but we lose 6 stops in the highlights. This is the dynamic range (DR) of our vision, and it is equal to 14 stops, or 14 EV.
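
The arithmetic behind the picture is simple; a tiny sketch using the article's round numbers:

```python
scene_ev  = 20   # sunny-day scene, from +10 EV to -10 EV
vision_ev = 14   # dynamic range of human vision

clipped = scene_ev - vision_ev   # 6 stops do not fit at any one moment
print(clipped // 2, "stops lost in the shadows and", clipped // 2, "in the highlights")
```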

EV scale from -7 to +7 (the 14-stop range of human vision)

A point-and-shoot camera has a dynamic range of only 4-6 stops. Now it becomes clear why such cameras produce such low-quality images: they can take normal photographs only when the differences in illumination are very small. DSLRs have not gone much further in this regard; their dynamic range is 6-8 stops. Let's see how that looks on a very bright sunny day.

Areas of the DR that the camera does NOT see

You can see what a small range the camera captures: most of the scene falls into pure dark or pure light, where nothing can be made out. In total that is only about 8 stops (8 EV), which is very limiting. Most cameras have roughly this dynamic range (DR).

Shoot in RAW format ONLY.

What does RAW give us? In post-processing on a computer, detail can be recovered from a RAW file in both the highlights and the shadows, roughly 1-2 EV on each side. As a result, after processing, the JPEG will end up covering 9-12 EV, which is not bad. If you shoot with medium-format cameras, you immediately get 12 stops of dynamic range, but here we are talking about regular RAW shots taken on regular DSLRs.

Then manufacturers went further: they began making cameras that can expand the dynamic range themselves, and they called this feature HDR. The camera shoots THREE JPEG files automatically: one 3 stops darker (-3 EV), one normal (0 EV), and one 3 stops lighter (+3 EV), and then combines them into a single file. What do we get? 3+6+3 = 12 stops in total; that is, a simple inexpensive camera can produce a 12 EV picture, as if it had been shot on an expensive medium-format camera.

It is clear that this only works for static scenes; for moving subjects we are left with the range that your camera's sensor can produce on its own. Below are the three shots and then the combined result.

The output will be THREE summed images combined into one JPEG, thereby expanding the dynamic range (HDR) to 12 stops.

The camera then outputs an ordinary 6-stop JPEG, but one into which those 12 stops have already been squeezed.

How do you cram 12 stops into six? Very simply: everything dark that fell beyond -3 EV is made lighter, and everything bright that fell beyond +3 EV is made darker.
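
The camera does this merge internally; on a computer the same kind of bracket can be merged, for example, with OpenCV's Mertens exposure fusion (a rough sketch; the file names are assumed and the frames must be aligned):

```python
import cv2
import numpy as np

# Hypothetical file names for a -3 / 0 / +3 EV bracket of a static scene.
frames = [cv2.imread(f) for f in ("bracket_m3ev.jpg", "bracket_0ev.jpg", "bracket_p3ev.jpg")]

# Mertens exposure fusion blends the best-exposed parts of each frame directly
# into one displayable image, similar in spirit to what in-camera HDR does.
fusion = cv2.createMergeMertens().process(frames)

cv2.imwrite("hdr_fused.jpg", np.clip(fusion * 255, 0, 255).astype(np.uint8))
```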

As you can see, HDR technology lets the camera output a JPEG with a dynamic range of 12 EV (isn't that fantastic?), so it can now capture what it previously could not.

Some cameras can output not only JPEG, but also RAW files in HDR; as a result, another 3-4 stops can then be added to the final processed file.

However, as we can see, there are still areas not covered by HDR; here you will need manual bracketing, that is, not three but five or seven shots at different EVs, then merged together to obtain the full dynamic range.

Thus an ordinary six-stop JPEG, assembled manually on a computer from seven photographs at different EVs, will in this case cover 20 EV, a dynamic range even wider than that of human vision.
