Figure 3.0.1. Before
Precision does not mean accuracy. Accuracy is how well precision reflects reality.
—Lane H. Decker
In this chapter, I will explore the concept and teach you the techniques of image harvesting. This is the practice of shooting multiple images of the same subject while changing not only the exposure but also the shutter speed, focus point, and image structure. This practice ensures that your source files will contain the optimum aesthetic aspects of exposure, DOF, bokeh, and focus, so that you can later combine the best of each to create a single, final image.
The human eye is an amazing biological optical system. It can discern detail in light as dim as moonlight and as bright as noon’s direct sunlight at the equator. It also acts as a motion sensor, sees events as they happen, and changes focus so rapidly that everything, near to far, appears in focus, all while adjusting for changes in the light intensity.
A digital still camera, on the other hand, does not have all the capabilities of the human eye. It is simply an image-capturing device that records a very small part of what we see. It records only fractions of seconds, while the majority of what we see is witnessed in continuous motion. The human eye has the ability to see multiple objects at different distances and see them all in focus. A digital still camera cannot. If you are using a fixed lens system, such as an SLR, where the film/sensor plane and the lens are permanently locked into a relationship with each other, you simply cannot have two objects that are at different distances both be in focus. No amount of stopping the lens down will change this.
The human eye is capable of adjusting for changes in light and has the ability to determine detail in both extremely low and high light situations. Cameras, on the other hand, always seek to expose for 18% gray. This means that if you take a picture of a white wall and then a black wall, when viewed, neither image would be black or white; they would both be 18% gray. Additionally, no matter how sensitive the light meter in the camera, it cannot determine what area of the image is the most important. Only the photographer can do that.
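To make the 18% gray behavior concrete, here is a toy averaging meter in Python. It is a hedged sketch, not any camera's actual metering firmware (real meters use matrix, center-weighted, or spot patterns), and the `metered_exposure` and `render` helpers are hypothetical names for illustration only:

```python
# Toy averaging light meter -- an illustration, not real camera firmware.
# It shows why a white wall and a black wall both render as the same
# middle gray when the meter assumes every scene averages 18% reflectance.

MIDDLE_GRAY = 0.18  # the reflectance the meter assumes it is looking at

def metered_exposure(scene):
    """Choose an exposure gain that maps the scene's average to 18% gray."""
    return MIDDLE_GRAY / (sum(scene) / len(scene))

def render(scene):
    """Apply the metered exposure to every 'pixel' in the scene."""
    gain = metered_exposure(scene)
    return [lum * gain for lum in scene]

white_wall = [0.90, 0.90, 0.90]  # a bright, uniform subject
black_wall = [0.04, 0.04, 0.04]  # a dark, uniform subject

print(render(white_wall))  # every value lands at (approximately) 0.18
print(render(black_wall))  # and so does every value here
```

Both walls come out identical middle gray, which is exactly why only the photographer, not the meter, can decide what matters in the frame.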
So how do you replicate what you saw with a device that does not record the image the way you experienced it when you saw and shot it? The answer is this: rather than shoot one image and hope for the best, consider harvesting many images and combining them into one. This is the best way I know to create an image that looks like what the eye saw—not just what the camera captured.
An interesting plainness is the most difficult thing to achieve.
—Ludwig Mies van der Rohe
The image that will be the outcome of this lesson, Hearing the Whisper of the Green Fairy (Figure 3.0.2), was harvested rather than captured in one image. It is also the first time that I went beyond mere High Dynamic Range (HDR) capture and moved into exploring Extended Dynamic Range (ExDR) photography, although I did not know it at the time.
Figure 3.0.2. After
I decided that while revisiting this image harvesting chapter, I should mention my belief that the importance of expanding dynamic range may be getting lost in the rush to use HDR imagery simply to make images look strange. I believe that HDR photography is frequently overused, misused, and abused. I read a blog that said that if you do not like HDR images, then you do not understand Photoshop. You should not have to understand the technology used to create an image (even though you may) to appreciate it. If an image is simply strange and does not move you, you know that it is not the culmination of the photographer’s vision; it is simply manipulation.
If you don’t like change, you’re going to like irrelevance even less.
—General Eric Shinseki
Though HDR traditionally refers to widening the range of exposure by shooting multiple images, there is more to this concept than merely using tonemapping software to create the “grunge” look. I believe that shooting multiple images for exposure is just a small part of high dynamic range thinking.
When Welcome to Oz was first published, there were some critics who thought image harvesting unnecessary, too time consuming, or irrelevant. Now, software exists (Helicon Focus, CS4, and CS5) that creates one completely focused image from several partially focused ones by combining the focused areas of each. In addition, Photoshop automatically aligns multiple images, and there is tonemapping software for assembling images based on exposure (HDRSoft’s very aggressive Photomatix and Nik Software’s kinder, gentler HDR Efex Pro). Obviously, the concept of harvesting multiple images for specific needs has taken hold and entered the mainstream of digital processing techniques.
As you work through the chapters in this book, keep in mind that you are not limited by the medium in which you work; you are limited only by your imagination and your personal vision. As you have seen in the first two chapters, and you are about to see in this and the next chapter, owning a fast lens cannot solve problems of selective focus, and stopping the lens down in a fixed lens/capture plane system (such as an SLR) cannot bring objects at different distances from the sensor plane into focus all at once. That is simply physics.
My wish for you is that you capture images because you are so taken by what you see that you have no choice but to press the shutter. And I understand how frustrating it is when the limitations of the technology keep you from achieving the fulfillment that you realize when you know that your voice has been heard. Because what you see with your eyes is not what your camera sees when you capture an image, I invite you to extend the dynamic range of those images in such a way that no one will be aware of the manipulation when you are done.
The term XDR was coined by John Paul Caponigro when he and I produced an Acme Educational tutorial DVD to teach his techniques and belief that there were many ways to extend the exposure range (XDR) of an image. What I have observed, through the course of my creative work, is that we should be going beyond extending only exposure range and should be looking to extend the dynamic range of all that makes up image structure: time, specific focus points, multiple objects in focus, blur, color, etc. I call that ExDR. In the next two chapters, you will be exploring this concept.
When I first encountered the subject of Hearing the Whisper of the Green Fairy, I stared at those leaves for an hour, captured 87 images, and out of those, I chose to use only four. I made so many captures because I shot some at different exposures, some at different focus points, and some were a combination of both. When capturing those images, I was practicing “preemptive Photoshop”—making informed decisions about the way I was shooting (different exposures, focus points, DOF, etc.) so that I got it right in the camera at the time of capture. This allowed me a large number of options when I sat down later at my computer. Remember to approach Photoshop as a noun and not a verb. You cannot always fix it in Photoshop. Photoshop is only a means to an end. Although your goal should always be to get it right in the camera, when the camera cannot give you your vision, adapt to the situation and give yourself as many choices as you can when you are taking the picture, so that later you are not limited by captures you lack.
I have found that real pixels are infinitely better than artificially generated ones. The image that I chose as the main, or base, image for this lesson was one in which I liked the relationship between its main point of focus—the leaf in the upper part of the picture—and the overall image. From the three other captures, I took those parts that I felt resolved areas in the base image that were issues to me. The issue areas were the leaf in the foreground, the cluster of other leaves in the mid-ground, and the absence of interesting detail in the background. After making the composite, I enhanced it as if it were a single image.
Before you dive into all that follows, think about whether you are creating a believable improbability or a believable probability. Ponder this as you work your way through this lesson.
As with any image, the first steps are to analyze the image, determine which problems need to be addressed, and decide on the appropriate workflow. As discussed earlier, workflow is a dynamic thing and is specific to each image. In image harvesting, you must first choose the images you will use, then:
• fix any problems with the base image.
• combine the desired elements taken from the other images.
• do color correction and aesthetic image manipulation.
It is wise to solve the biggest problems first. Here, for example, the biggest issue is not correcting the color cast of all four images; it is getting one image from the four. It is easier to color correct one final image than to do each separately.
Try to develop a workflow that minimizes artifacting. Every time you do something in Photoshop, you clip or dump some data, and that causes artifacts. While some of these are visually appealing, some are not. And, as I have previously mentioned, all artifacting is cumulative and can be multiplicative. One way to minimize this is to work in 16-bit and in the ProPhoto color space. This image’s workflow is approached with that in mind.
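As a hedged illustration of why bit depth matters here, this Python sketch round-trips a tone curve (a gamma push and its exact inverse, re-quantized each step, standing in for any pair of opposing edits) at 8-bit and at 16-bit precision and counts how many distinct tones survive. It is a toy model, not Photoshop's internal math:

```python
# Toy bit-depth experiment -- not Photoshop's internals. A gamma push and
# its exact inverse (a stand-in for any pair of opposing edits) are applied
# ten times, re-quantizing to integers each step, at 8-bit and at 16-bit.
# Counting the distinct tones that survive shows how low bit depth dumps
# data cumulatively.

def roundtrip(levels, bit_depth, passes=10):
    top = 2 ** bit_depth - 1
    vals = list(levels)
    for _ in range(passes):
        vals = [round((v / top) ** 2.2 * top) for v in vals]        # push
        vals = [round((v / top) ** (1 / 2.2) * top) for v in vals]  # undo
    return vals

tones_8 = list(range(256))             # every 8-bit tone
tones_16 = list(range(0, 65536, 257))  # the same 256 tones on a 16-bit scale

survivors_8 = len(set(roundtrip(tones_8, 8)))
survivors_16 = len(set(roundtrip(tones_16, 16)))

# 16-bit preserves far more of the original 256 tones than 8-bit does
print(survivors_8, survivors_16)
```

The 8-bit file collapses many shadow tones into each other after repeated edits (seen later as banding), while the 16-bit file's finer quantization keeps nearly all of them distinct.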
Here are the four harvested images that make up the palette for this lesson:
• Image 1: This is the base image and contains the central focus point of the final image, the large leaf at the top (the primary focus area) (Figure 3.1.1).
Figure 3.1.1. Image 1: the base image
• Image 2: A nearly identical photo with the focus on a cluster of leaves beneath the large leaf at the top (the secondary focus area) (Figure 3.1.2).
Figure 3.1.2. Image 2: focus on the leaf cluster
• Image 3: A slightly different image, where a single leaf below the cluster is in focus (the tertiary focus area) (Figure 3.1.3).
Figure 3.1.3. Image 3: focus on a single leaf
• Image 4: An image taken from a different angle that contains three leaves in focus and a nice pattern for the background (Figure 3.1.4). (See the Image Harvesting sidebar)
Figure 3.1.4. Image 4: interesting background
This is the image map of what you will be doing (Figure 3.1.5).
Figure 3.1.5. Image map of combining images
There are numerous reasons for creating a composite image and they range from duplicating the reality of what you originally saw to creating an image of something that exists only in your mind’s eye. For instance, if you want a shallow DOF because you like the bokeh at f/2.8, and your vision of the image includes having the objects in front of the focus point also be in focus, you will need to combine multiple images into one. Also, you need to combine multiple images if you want multiple objects that are at different distances from the sensor plane to be in focus. It is this last goal that I had in mind for Hearing the Whisper of the Green Fairy.
When combining multiple images into one, keep in mind:
• All of the image manipulation decisions you make should be made based on the way the eye works when it “sees.” As I discussed in Chapter 1, the eye goes to patterns that it recognizes first: areas of light to areas of dark, high contrast to low contrast, high sharpness to low sharpness, in-focus to blur (which is different from high sharpness to low sharpness), and high saturation of color to low saturation of color.
• You cannot simply combine captures taken with different f-stops to put multiple objects in focus that are at different distances from the sensor plane. (See Chapters 1 and 2.) If you want to do this, then you need to have a common point of focus in each image that will be used in the composite.
• It is in the digital domain, both at capture and in the digital darkroom, that impossible is just an opinion. You are limited only by your ingenuity and imagination.
• Most importantly, always solve the biggest problems first and work down to the smallest, i.e., correct from global to granular.
By using the Shift-click method, the copied image will pin register with the base image. In other words, Photoshop will place the copied image exactly on top of the base image. Here is the layer order, from the bottom up:
Photoshop should be set to Snap To Guides (on the View > Snap To menu), and the viewing mode should be set to Full Screen with Menu Bar. Keyboard shortcut F displays the image on top of a neutral gray background.
In the base image, the viewer’s eye is drawn away from the main leaf by a distracting branch. This is the biggest problem, because it will show up in everything that you do until you remove it. Prior to CS5, the only way to do this was by using the Patch tool. CS5 has something better: Content-Aware Fill. (Quite a moniker!) It works by removing a selected element and filling it with detail that matches the surrounding area. It can seamlessly generate bushes, trees, clouds, etc., and fill the selected area so that, for the most part, you will be hard pressed to see that something has been done.
Working global to granular, the biggest issues in this image, besides making four images into one, are the sins of our base image—specifically, the unwanted branches and dead leaves (Figure 3.2.1).
Figure 3.2.1. The image map for removing unwanted elements
The Polygonal Lasso tool is one of three Lasso tools. The others are the standard Lasso and the Magnetic Lasso tools. By pressing Shift + L, you can cycle through the Lasso tools to pick the desired one. The Polygonal Lasso is best for extracting pieces of images that have reasonably straight edges or a semi-polygonal shape, while the Magnetic Lasso is best for complicated figures. To use the Polygonal Lasso, click specific points around an image, and Photoshop will draw straight lines from point to point. Also, if you hold down the Shift key while clicking, you can constrain those lines to horizontal, vertical, and 45° angles. To undo the last point you clicked, press the Delete/Backspace key. For this image, I will have you use the Polygonal Lasso.
Figure 3.2.2. The selection of areas to be removed
Figure 3.2.3. Choose Content-Aware in the Fill options
Figure 3.2.4. The image after the unwanted areas are removed
If you need to touch up an area that you missed or there are areas where there is a noticeable edge after you run Content-Aware Fill, simply use the Clone Stamp tool or the Spot Healing brush to touch it up.
I chose the base image because I liked the way the leaf in the upper part of the composition (the central focal point) related to the overall image. But the image was shot at f/5.6, so its DOF is fairly shallow, and the mid and foreground are out of focus. I chose this DOF for aesthetic reasons; I wanted just the leaf cluster to be in sharp focus. As I have already discussed, stopping a lens down increases the area of acceptable out-of-focus, but it does not increase the number of elements that are actually in focus. Therefore, if I had chosen a base image in which I had stopped the lens down when I made the capture, I would have brought unwanted objects into acceptable out-of-focus, which would have taken the viewer’s eye away from the leaf cluster.
Even though this series of images was captured with a camera on a stable tripod, whenever you change the focus, you readjust the position of the elements within the lens. Depending on the lens, this may move the image slightly to the right or left. You will correct for that by replacing the mid-ground leaves with the in-focus leaves from Image 2 (Figures 3.3.1 and 3.3.2).
Figure 3.3.1. Out-of-focus leaves to be replaced
Figure 3.3.2. In-focus leaves that will be used instead
There are two ways to align an image: 1) manually, by changing the layer you want to move to the Difference blend mode, selecting the Move tool (V), and moving the layer to the placement you want with the arrow keys, or 2) by using Auto Align Layers. Auto Align Layers is the first tool to which I go when I want to align images for a composite. When it works, it works wonderfully. But frequently, it changes the orientation of the image in a way that I find unacceptable. Also, if there are too many blurred areas and too few hard edges, it will not work at all. In this lesson, you will work around Auto Align Layers’ penchant for wanting to rotate the entire image, as well as looking at the Difference blend mode approach when Auto Align Layers will not line things up. Both approaches are contained in the 100ppi version of the file that I created while writing this chapter.
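What you are doing by eye with the Difference blend mode, where the preview goes black wherever the layers line up, can be sketched as a brute-force search for the integer offset that minimizes the summed absolute difference. This toy Python example works on one-dimensional "scanlines" and is only illustrative; Photoshop's Auto-Align Layers is far more sophisticated:

```python
# Toy alignment-by-difference search (illustrative only; Auto-Align Layers
# is far more sophisticated). When two layers line up, |top - bottom| is
# black everywhere, so the best offset is the one that minimizes the
# summed absolute difference -- exactly what you judge by eye in
# Difference mode.

def shift(row, dx):
    """Shift a 1-D 'scanline' right by dx pixels, padding with black."""
    if dx >= 0:
        return [0] * dx + row[:len(row) - dx]
    return row[-dx:] + [0] * (-dx)

def difference_energy(a, b):
    """The sum of what the Difference blend mode would display."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_offset(base, layer, search=5):
    """Try every offset in [-search, search]; keep the darkest result."""
    return min(range(-search, search + 1),
               key=lambda dx: difference_energy(base, shift(layer, dx)))

base  = [0, 0, 10, 80, 200, 80, 10, 0, 0, 0]  # a sharp edge
layer = [0, 0, 0, 0, 10, 80, 200, 80, 10, 0]  # same edge, 2 px to the right

print(best_offset(base, layer))  # -2: the layer must move 2 px left
```

Note that the search relies on hard edges; if a layer is mostly blur, many offsets produce nearly the same difference energy, which mirrors why Auto-Align Layers fails on blurred, edge-poor images.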
Figure 3.3.3. MID_LEAVES layer set to 50% opacity to compare the alignment of the two layers
Figure 3.3.4. MID_LEAVES layer set to Difference
Figure 3.3.5. After moving the MID_LEAVES layer to better align with the BASE layer
Figure 3.3.6. The image map
Figure 3.3.7. The layer mask
Figure 3.3.8. Before the brushwork
Figure 3.3.9. After the brushwork
Auto Align Layers was very good in CS4, but it is great in CS5. However, it still has some limitations. First, if you select multiple layers to align, it will do just that, which means that it may change the orientation and relationship of all of the layers. That translates into potentially having to crop the image, which is to be avoided whenever possible. What happens when you align the layers is shown in Figure 3.3.10. You can work around this limitation by aligning your chosen elements in a separate file, then bringing those elements back into the original file after the alignment is done. This is the next thing that you will do.
Figure 3.3.10. The result of the Auto Align Layers function in CS5
Figure 3.3.11. The guides in place for the lower front leaf
Figure 3.3.12. Drawing a selection using the Marquee tool
Figure 3.3.13. Creating a new file
Figure 3.3.14. The MID_LEAVES_B group copied into the new file
Figure 3.3.15. The Auto-Align Layers dialog box
Figure 3.3.16. The image after using Auto-Align
Figure 3.3.17. Image map
Figure 3.3.18. Layer mask
Figure 3.3.19. Before
Figure 3.3.20. After
The next step is to replace the base image’s out-of-focus foreground leaf with the in-focus leaf from the FRONT_LEAF layer. The issue with this leaf is that the camera position was moved from where it was when the base and mid-leaf images were captured, so the sensor-plane-to-subject relationship changed. When you change the angle between the camera and the subject, you also change perspective, and the distortions that were caused in relation to the base image need to be corrected. (For this reason, this is something to avoid if at all possible.) In this case, a correction is needed, and you will make it using Free Transform and then the Warp feature, which was new in Photoshop CS2. Ben Willmore, in his book Photoshop CS2: Up to Speed (Peachpit, 2005), wrote, “The new Warp feature allows you to bend and distort images almost as if they were printed on Silly Putty.”
New to CS5 is the Puppet Warp tool, which takes the Warp feature to a new level of controllability. But the simplest way is usually the best way, and for this image, a simple warp will do.
You have undoubtedly noticed that everything you do to an image is a bit of a dance. For everything you do, there will be some change elsewhere in the image that you may or may not like. As in almost everything you do in Photoshop, the global moves you make will need granular corrections until there is no more work to be done.
Figure 3.4.1. Making a selection with the Marquee tool
Figure 3.4.2. The FG_LEAF layer set to Difference
Figure 3.4.3. Moving the FG_LEAF layer to align it with the layer below
Figure 3.4.4. Moving the Reference point to the tip of the leaf
Figure 3.4.5. Moving the top anchor point to align the stems
Figure 3.4.6. Moving the right anchor point to align the leaf
Figure 3.4.7. Moving the bottom anchor point to align the leaf
Figure 3.4.8. Moving the left anchor point to align the leaf
Figure 3.4.9. The Warp grid
Figure 3.4.10. Moving the right corner handle to align the right side of the leaf
Figure 3.4.11. Moving the upper left second control handle to align the leaf
Figure 3.4.12. Moving the lower left second control handle to align the leaf
Figure 3.4.13. Moving the upper right second control handle to align the leaf
Figure 3.4.14. Finishing the warp and the resulting layer
Figure 3.4.15. The layer after changing the blend mode to Normal
Figure 3.4.16. The image map
Figure 3.4.17. The resulting layer mask
Figure 3.4.18. Before the new layer
Figure 3.4.19. After the new layer
You are going to use the BG_PARTS layer to put some image structure elements in the upper right corner of the background, but the leaves in BG_PARTS look bluer than the ones in BASE IMAGE, so you must first match the colors of the two layers.
The colors in the images are very subtle. I suggest you open the Histogram palette, set it to All Channels View, and see what happens when you repeatedly click the visibility icon for this layer.
Figure 3.5.1. GREEN_LEAVES_16BIT.psd
Figure 3.5.2. BASE IMAGE
In the final step of the assembly, you will use some of the background details from the BG_PARTS layer.
Figure 3.6.1. The new BG_LEAF layer
Figure 3.6.2. Drag the BG_LEAF layer up and to the right
Figure 3.6.3. Selecting the Transform tool on BG_LEAF
Figure 3.6.4. Rotating the layer 45°
You now have the two building blocks you need to construct the background. Bear in mind—and make use of—layer order and different levels of opacity.
The composite image is now complete. The focal points on the various leaves are all correct, and the upper-right corner has the desired background detail (Figures 3.6.5 and 3.6.6).
Figure 3.6.5. Before
Figure 3.6.6. After
You now have the main image that is the sum of all of the harvested parts. Before starting the next set of aesthetic corrections, you need to clean up the file structure before merging all active layers into one while keeping all the previous layers intact, i.e., doing “the Move” (see Chapter 1).
In both cases, the result is a master layer that contains all the previous layers. Name it MASTER_1 (Figure 3.6.7).
Figure 3.6.7. The new master layer MASTER_1
When you do “the Move” in CS2 and above, the name “Stamp Visible” appears in the History palette.
As you have done in the previous two chapters, you will now deal with the image’s color cast attributable to the SLR sensor. This color cast is caused by data interpolation errors that occur when the CCD or CMOS sensor’s Bayer array data is post-processed, and it occurs in all digitally captured images. (For a complete discussion of color cast correction, see Chapter 1.)
Figure 3.7.1. The image after changing the blend mode to Difference
Figure 3.7.2. Three sample points on the image
Compare the image before (Figure 3.7.3) and after the color cast correction (Figure 3.7.4).
Figure 3.7.3. The image before the color cast correction
Figure 3.7.4. The image after the color cast correction
Now that you have one image with which to work, it is time to map out what aesthetic manipulations you want to make. At the moment that this image took me, I began pre-visualizing it as I wanted it to become, i.e., I asked myself what I might need to do to create the image that I saw in my mind’s eye. I captured, or harvested, images the way I did so that when I sat down at the computer to work, I could create the vision that I had in mind.
As I have previously discussed, there is a very specific way that the eye sees and moves across an image. In this picture, you want the eye to go first to the top leaf, then to the leaf in the middle, then to the foreground leaf, then to the back set of leaves, and finally, to the background.
Here is the image map that I created (Figure 3.7.5).
Figure 3.7.5. The image map
If you look at all three of the corrections you just made, you should observe that the BP adjustment adds a little coolness to the image (Figure 3.8.1), the WP adjustment lightens and slightly warms up the image (Figure 3.8.2), and the MP adjustment adds even more warmth (Figure 3.8.3). Also, take a look at what all three corrections look like combined (Figure 3.8.4).
Figure 3.8.1. The image with the BP adjustment
Figure 3.8.2. The image with the WP adjustment
Figure 3.8.3. The image with the MP adjustment
Figure 3.8.4. The image with all three adjustments
Because you are trying to replicate the reality of the light (shadows are bluer than lit areas, warm colors appear to move forward, and cool colors appear to recede), minimize the cumulative and possibly multiplicative effects of image manipulation artifacting, and undo some of the limitations of the technology that was used, you will use the color cast of the file to your advantage.
Compare the image before (Figure 3.8.5) and after the brushwork (Figure 3.8.6). Also look at the resulting layer mask (Figure 3.8.7).
Figure 3.8.5. The image before the brushwork
Figure 3.8.6. The image after the selective brushwork
Figure 3.8.7. The layer mask for the brushwork
Objects photographed in sunlight tend to have a blue cast, and if they are in shadow, they are even bluer. You will use the Nik Software Color Efex Pro 3.0 Skylight filter to color correct the Green Fairy image. This filter removes the blue cast and makes the colors more realistic.
Figure 3.9.1. Before the Skylight filter
Figure 3.9.2. After the Skylight filter
Figure 3.9.3.
You create this layer to preserve the color decisions that you have made to this point. In the next series of steps, you will address issues of contrast and sharpness using the Luminosity blend mode that may affect the image’s saturation. By creating this layer with the Color blend mode, the color choices you made will be preserved.
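A rough sketch of why a Color-mode layer protects those decisions: the Color blend mode keeps the lightness of the layers beneath and takes hue and saturation from the Color-mode layer. The Python below approximates this with the standard library's HLS model, which is not Photoshop's exact luminosity math, so treat it as an analogy rather than the real compositing formula:

```python
# Approximate Color blend mode using the standard library's HLS model.
# Photoshop's actual Color mode uses its own luminosity math, so this is
# an analogy only: keep lightness from the base, take hue and saturation
# from the color layer.

import colorsys

def color_blend(base_rgb, color_rgb):
    """Result = base's lightness + color layer's hue and saturation."""
    _, base_l, _ = colorsys.rgb_to_hls(*base_rgb)
    layer_h, _, layer_s = colorsys.rgb_to_hls(*color_rgb)
    return colorsys.hls_to_rgb(layer_h, base_l, layer_s)

base = (0.30, 0.55, 0.20)  # a mid-toned green
tint = (0.80, 0.60, 0.20)  # the warmer color decision you want to protect

result = color_blend(base, tint)
print(result)  # keeps the base's lightness but wears the tint's hue
```

Because lightness stays with the underlying layers, a later Luminosity-mode contrast move changes tone without disturbing the hue and saturation you locked in.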
Let me reiterate. The human eye goes first to patterns it recognizes in light areas and then in dark ones. The eye also tracks from high to low contrast, in-focus to blur, high to low sharpness (which is different than in-focus to blur), and then high to low saturation of color. In this next step, you are going to play with contrast and sharpness.
Contrast is often confused with contrast ratio. Contrast is the difference in brightness between the light and dark areas of a picture. If there is a large difference between the two, the image has high contrast. Contrast ratio is the luminance (brightness), as measured on a meter, of the whitest pixel divided by the luminance of the blackest pixel. By adjusting that ratio, you can increase or decrease the contrast in an image.
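The distinction reduces to a one-line computation. In this hedged Python sketch, `contrast_ratio` is a hypothetical helper and the luminance values are arbitrary meter readings:

```python
# Hedged sketch: contrast ratio as a single number. The luminance values
# are arbitrary meter readings; contrast_ratio is a hypothetical helper.

def contrast_ratio(luminances):
    darkest = min(v for v in luminances if v > 0)  # guard against division by zero
    return max(luminances) / darkest

flat_scene   = [40, 50, 60, 55, 45]   # low contrast:  60 / 40 = 1.5
punchy_scene = [5, 50, 240, 180, 12]  # high contrast: 240 / 5 = 48.0

print(contrast_ratio(flat_scene), contrast_ratio(punchy_scene))
```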
Sharpening (or specifically in Photoshop, Unsharp Masking) is the increase of apparent edge contrast accomplished by increasing contrast on either side of the pixels’ edges. In this way, contrast and sharpness are related. It is worth noting, however, that contrast is normally spoken of on a global scale, and sharpness is highly localized.
When an image is sharpened using Unsharp Masking, Photoshop does not really sharpen the image. Rather, it creates an illusion of sharpening by increasing contrast at the edges of objects within the image. This illusion (called the Craik-O’Brien-Cornsweet Edge illusion) is created when an edge of an element in an image is modified by lightening one side and darkening the other. After this is done, the area next to the darkened side of the edge appears darker in tone and the area next to the lighter side appears lighter. (For a visual example of this, go to http://en.wikipedia.org/wiki/Cornsweet_illusion.)
The technique of Unsharp Masking is not a new one. It actually had its origins in, and derived its name from, a darkroom printing technique developed in the 1930s. This technique involved printing a normal negative sandwiched with a blurred positive copy of that negative. The result was a print in which the edges of the image elements had the illusion of higher contrast and, therefore, sharpness. Photoshop does this mathematically by manipulating pixels.
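That darkroom recipe translates almost directly into code: blur a copy, take the difference between the original and the blur (the "unsharp mask"), and add a scaled amount of it back. This Python sketch works on a single scanline with a simple box blur; Photoshop layers radius and threshold controls on top of the same idea, so treat this as the principle, not the product:

```python
# Unsharp Masking reduced to its core: blur a copy, then add back a scaled
# difference between the original and the blur. This toy version works on
# one scanline with a box blur; Photoshop adds radius/threshold controls
# and a better blur.

def box_blur(row, radius=1):
    """A crude blur: each pixel becomes the mean of its neighborhood."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(row, amount=1.0, radius=1):
    """original + amount * (original - blurred): contrast added at edges."""
    blurred = box_blur(row, radius)
    return [orig + amount * (orig - soft) for orig, soft in zip(row, blurred)]

edge = [50, 50, 50, 200, 200, 200]  # one scanline containing an edge
sharpened = unsharp_mask(edge, amount=0.8)

# Only the pixels flanking the edge change: the dark side gets darker
# (50 -> ~10) and the light side gets lighter (200 -> ~240) -- the
# Craik-O'Brien-Cornsweet effect described in the text.
print(sharpened)
```

Note that no new detail appears anywhere; only the values at the edge move apart, which is the "illusion of sharpening" the text describes.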
Though contrast and sharpness (or more specifically, Unsharp Masking) are technically similar in digital imagery, they differ in the way they can interact in an image. This is why you will be addressing the issues of selective aesthetic contrast and sharpness on two different layers. I will have you start with selective contrast.
Figure 3.10.1. Zoom in to the upper left leaves in Tonal Contrast
Figure 3.10.2. The image after applying Tonal Contrast globally
Figure 3.10.3. Before the Tonal Contrast layer
Figure 3.10.4. After the Tonal Contrast layer
When copying layer masks to layers where sharpening and contrast are applied, you may see things that were not apparent before copying and applying the layer mask. This is what happened when I first worked with this image. Just tighten up the layer mask with the techniques that were discussed in Chapter 1.
Figure 3.11.1. Zoom in to the main leaf area
Figure 3.11.2. The shadows are blocked up
Figure 3.11.3.
There is a bit of a dance between Contrast Only, Tonal Contrast, and Sharpening. Every image is different and, frequently, that difference determines what the layer order should be. The reason for separating Contrast, Tonal Contrast, and Sharpening into separate layers has to do with the multiplicative nature of artifacting. If you add sharpening to a layer that has any form of contrast applied to it, you will cause serious artifacting that will show up as noticeable halos around the edges of the objects in the image and will also aggravate any noise in the image. By separating out the different forms of sharpening and contrast, you can use layer opacity to blend them together.
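The "blend them together" step is simpler than it sounds: at Normal blend mode, layer opacity is just a linear mix of the layer and what lies beneath it, which is why it is a gentle dial for dosing an effect. A minimal Python sketch, with hypothetical pixel rows:

```python
# Layer opacity at Normal blend mode is just a linear mix -- which is why
# it is a gentle dial for dosing a contrast or sharpening layer. The pixel
# rows here are hypothetical.

def blend_with_opacity(base, layer, opacity):
    """result = (1 - opacity) * base + opacity * layer, per pixel."""
    return [(1 - opacity) * b + opacity * t for b, t in zip(base, layer)]

original   = [100, 100, 100]
too_strong = [60, 100, 140]  # an over-cooked contrast move on a copy

# Dropping the layer to 50% opacity halves the effect:
print(blend_with_opacity(original, too_strong, 0.5))  # [80.0, 100.0, 120.0]
```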
You can also increase contrast with the Brightness/Contrast sliders (Image > Adjustments > Brightness/Contrast) or with Curves adjustment layers, a better choice than Photoshop’s Brightness/Contrast. You increase contrast with a Curves adjustment layer by creating such a layer and clicking on the center point of the curve to lock it. You then place the mouse pointer over the curve, two grid lines up and right, and you drag that point up and to the left to increase the contrast until it is visually pleasing. Don’t drag too far; you need but a small correction. Click OK. Create a layer mask filled with black, and do the appropriate brushwork. In my opinion, however, the Nik Software algorithm is superior to the one in Photoshop and you now own it.
Figure 3.11.4. Before the Contrast Only layer
Figure 3.11.5. After the Contrast Only layer
Some believe that you should sharpen an image only to allow for an increased output size. Others believe that sharpening is the last thing you do before printing. In my experience neither is always true. What I have experienced is that most people do not know how to correctly sharpen an image once, let alone how to do it correctly multiple times.
Whether TIFF, JPEG, or RAW, all digitally captured files require some form of sharpening. This is due to the interpolation process that occurs when the data is converted from its original form to the form on which you will work.
In addition to sharpening the original digital capture, another consideration is aesthetic, or selective, sharpening, which you are going to do in this step. There is also sharpening for output, which is one of the last things you will do to an image, but not always the very last. So an image may be sharpened as many as three times.
It is important to understand some of the problems associated with sharpening. As I discussed at the beginning of this section, Unsharp Masking does not really sharpen an image. It adds visually appealing artifacts that create the illusion of increased sharpness. There are other issues. When you view an image on your monitor, you view a file that is 240 to 360ppi (pixels per inch) on a 72ppi screen. Such images will be printed at either 720, 1440, 2880, or 5760dpi (dots per inch). So what looks right on the screen is probably under-sharpened for output, and what looks over-sharpened on the screen is probably about right. How do you know how much to over-sharpen? You do not—and that is a problem.
There is more: flat planes, lines, foliage, blur, and skin tones all need to be sharpened differently, because each requires different amounts of sharpening at different values in order to minimize artifacting and image noise, and to maintain a pleasing visual experience. In addition, viewing distance, paper type, printer output resolution, printer type, and the amount of dot gain (the expansion of the ink droplet on the paper) are all factors that affect sharpness. Because of this, I generally sharpen using Approach 1 below, because the Nik software is able to detect if something has been sharpened more than once and adjusts accordingly. You should be aware, however, that I also use some or all of the types of sharpening in the Photoshop Only approach.
In this step, I will show you two ways to sharpen an image: one with third party software and one with Photoshop. Both are contained in the 100ppi version of the file that I created while writing this chapter. I recommend that you explore both approaches.
If you do not have the Nik Software, you can download a fully functional demo at www.niksoftware.com. Also, if you do not want to play with this software, skip to Approach 2: Sharpening with Photoshop Only.
Figure 3.12.1.
Figure 3.12.2.
Figure 3.12.3.
Figure 3.12.4.
Figure 3.12.5.
Before moving on, compare the image before the NIK_SHARPEN layer (Figure 3.12.6), after the NIK_SHARPEN layer (Figure 3.12.7), and then after adding the layer mask (Figure 3.12.8).
Figure 3.12.6. Before the NIK_SHARPEN layer
Figure 3.12.7. After the NIK_SHARPEN layer
Figure 3.12.8. After adding the layer mask to the NIK_SHARPEN layer
There are two reasons that it is critical to make the layer mask as tight as possible and to remove any unwanted areas of sharpness. First, whenever you sharpen an image, you add contrast to pixel edges, which tends to enhance noise. Second, you do not want to sharpen areas that you want blurred. In this next step, you will tighten up the layer mask.
Figure 3.12.9. The image in Quick Mask view
Figure 3.12.10. Cleaning up the mask on the upper right
Figure 3.12.11. Tightening up the layer mask in the midleaf area
Figure 3.12.12. The final layer mask
Figure 3.12.13. The image after lowering the opacity to 59%
In this approach, you will sharpen using three different methods: High Pass, Lab, and Smart Sharpen for lens blur.
Figure 3.12.14. The HIGHPASS layer desaturated
Figure 3.12.15.
Figure 3.12.16.
Figure 3.12.17. The image after applying the High Pass filter
If you did not do Approach 1, do steps 12 through 18 from Approach 1: Using the Nik Software Sharpener Pro 3.0 filter. If you did Approach 1, just copy the layer mask from the NIK_SHARPEN layer.
High Pass sharpening is good for sharpening only the edges in an image, but not the image structures in which the noise tends to be found. Because you need to use the Overlay or Soft Light blend mode to do this type of sharpening, using the Luminosity blend mode to avoid sharpening color is not an option. When sharpening with the High Pass filter, you should always desaturate the layer first.
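A simplified numeric sketch of the High Pass plus Overlay combination (Python with NumPy; normalized 0-to-1 values, not Photoshop's actual implementation) shows why it touches only the edges: the High Pass layer sits at neutral mid-gray everywhere the image is flat, and Overlay leaves those areas alone.

```python
import numpy as np

# A toy one-dimensional "image" with a single edge.
img = np.full(20, 0.2)
img[10:] = 0.8

# High Pass: original minus blurred, offset around mid-gray, so flat
# areas come out exactly neutral (0.5).
blurred = np.convolve(img, np.ones(3) / 3.0, mode="same")
high_pass = 0.5 + (img - blurred)

# Overlay blend of the High Pass layer over the original image.
overlay = np.where(
    img < 0.5,
    2.0 * img * high_pass,
    1.0 - 2.0 * (1.0 - img) * (1.0 - high_pass),
)

print(np.allclose(overlay[2:8], img[2:8]))         # True: flat areas unchanged
print(overlay[9] < img[9], overlay[10] > img[10])  # the edge gains contrast
```

Because Overlay reads the High Pass layer's deviation from mid-gray, any color left in that layer would shift hues as well, which is why desaturating it first matters.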
To sharpen in the Lab space requires converting the file from the RGB space to the Lab space.
I learned this sharpening technique and the specific approach to increasing saturation in Step 10 from Dan Margulis. I highly recommend his book, Photoshop LAB Color: The Canyon Conundrum and Other Adventures in the Most Powerful Colorspace (Peachpit, 2005).
Many of the limitations of sharpening have to do with the light artifacts it inserts, rather than the dark ones. The following action separates the two functions; the darkening is on the middle layer and the lightening on top, and the action automatically cuts the lightening in half.
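The idea behind the action can be sketched numerically (Python with NumPy; an illustration of the principle, not the action itself): split the unsharp-mask correction into its darkening and lightening halves, then apply the lightening at half strength.

```python
import numpy as np

img = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])
blurred = np.convolve(img, np.ones(3) / 3.0, mode="same")
delta = img - blurred  # the raw unsharp-mask correction

darkening = np.minimum(delta, 0.0)   # the dark-halo half
lightening = np.maximum(delta, 0.0)  # the light-halo half

amount = 1.5
sharpened = img + amount * (darkening + 0.5 * lightening)

# The dark halo gets the full amount; the light halo only half of it.
print(abs(sharpened[2] - img[2]) > abs(sharpened[3] - img[3]))  # True
```

In the layered version, keeping the two halves on separate layers also means each can be masked or faded independently.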
Figure 3.12.18. Converting the image to Lab color
Figure 3.12.19. Starting He-Man Sharpening with High Pass
Figure 3.12.20. Setting the Radius to 1.5 pixels
Lower the Amount of sharpening until you remove the crispiness (or hardness) and halos around the edges of the image. I chose 229% (Figure 3.12.21).
Figure 3.12.21. Lowering the Amount to reduce halos
The Gaussian Blur Removal option is a good call if the image on which you are working is slightly out of focus. This is very similar to the regular unsharp masking technique.
Figure 3.12.22. Using Smart Sharpen Lens Blur, Radius at 1.7 pixels and Amount at 229%
Figure 3.12.23. The image after copying the HIGHPASS mask to the SMART_SHARPEN_LENS layer
Each type of sharpening sharpens the image in a slightly different way, and each has a benefit. However, if you were to apply all three types of sharpening to the same layer at the amounts used in the steps above, this is what it would look like (Figures 3.12.24.1 and 3.12.24.2). The artifacting that you see here is not only cumulative, it is multiplicative as well. Let me give you an extremely hypothetical example. If you were to make an adjustment to a layer and it caused a 5% image quality decrease due to artifacting, and you then made a second adjustment to that same layer that caused another 5% decrease, the total decrease in quality would not be merely 5 + 5 = 10%; it would be more like (5 + 5) × 5, for a 50% quality decrease. It has been my observation that this multiplicative effect of artifacting tends to be a larger issue when dealing with adjustments to contrast and sharpening.
Figure 3.12.24.1. The same three sharpening methods applied to one layer
Figure 3.12.24.2. A close-up of the same three sharpening methods applied to one layer
By separating the different forms of sharpening onto different layers and then using opacity to blend them together, you get the benefits of each without undue exacerbation of the artifacting (Figure 3.12.25). This approach also lets you change the order of the sharpening layers as well as their individual amounts.
Figure 3.12.25. All three sharpening methods applied to different layers to reduce artifacting
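The difference can be sketched numerically (Python with NumPy; one sharpening method applied twice stands in for the three methods): re-sharpening an already-sharpened layer compounds the halos, while blending separately sharpened layers by opacity averages them.

```python
import numpy as np

def unsharp(img, amount):
    blurred = np.convolve(img, np.ones(3) / 3.0, mode="same")
    return img + amount * (img - blurred)

img = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])

# Everything on one layer: the second pass sharpens the first pass's
# halos along with the image, so the artifacts compound.
stacked = unsharp(unsharp(img, 1.0), 1.0)

# Two separate layers blended at 50% opacity each: the halos average
# rather than compound.
blended = 0.5 * unsharp(img, 1.0) + 0.5 * unsharp(img, 1.0)

print(stacked.max() > blended.max())  # True: the stacked halo is brighter
```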
Whichever way you go—using just one approach to sharpening or combining High Pass, Lab, Smart Sharpen, and/or Nik Software Sharpener Pro—do the following.
Figure 3.13.1. Adjusting the Curves to increase saturation while in Lab color mode
Figure 3.13.2. Adjusting the “b” channel to increase saturation
Figure 3.13.3. The image after the layer opacity is lowered to 56%
The “L” of Lab is the Lightness or Luminance channel, “A” is Red to Green, and “B” is Yellow to Blue. Keep in mind that the settings used here are only for this image. As a rule, I never move the anchor points more than three grid points to the left or right, depending on whether it is to the top or bottom of the curve. As much as possible, I try to keep movement to one grid point. The positive values, or the top of the curve, address warm colors, and the negative values, or bottom part of the curve, adjust cooler colors.
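Numerically, steepening the a and b curves amounts to scaling those channels away from their neutral midpoint, which increases chroma while leaving lightness alone. A hedged sketch (Python with NumPy; plain Lab triples, not a color-managed conversion):

```python
import numpy as np

# Two muted colors as L, a, b triples (a: green-to-red, b: blue-to-yellow).
lab = np.array([
    [52.0, 10.0, -24.0],
    [70.0, -8.0,  30.0],
])

slope = 1.3  # a steeper-than-45-degree line through the a/b midpoint
boosted = lab.copy()
boosted[:, 1:] *= slope  # scale a and b away from neutral; leave L alone

chroma_before = np.hypot(lab[:, 1], lab[:, 2])
chroma_after = np.hypot(boosted[:, 1], boosted[:, 2])

print(np.all(chroma_after > chroma_before))   # True: more saturated
print(np.allclose(boosted[:, 0], lab[:, 0]))  # True: lightness untouched
```

Because a and b are scaled by the same factor, the ratio between them, and therefore the hue, is preserved; only the distance from neutral gray grows.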
In this step, you are going to create the dappling of light that occurs when sunlight travels through the leaves of a tree. You will do this by using two Curves adjustment layers and by changing the blend modes to Multiply and Screen.
You are now at the point in this workflow to add dark into this image. You should begin with the Curves adjustment layer set to the Multiply blend mode. You will add more darkness than lightness, so it makes sense to cover the most ground first. Also, by addressing the “dark” issues, it will be easier to see where the additional areas of light should go to create the dappled light effect.
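The arithmetic behind the two blend modes makes the division of labor clear; as a hedged sketch (Python with NumPy; normalized 0-to-1 values), Multiply can only darken and Screen can only lighten:

```python
import numpy as np

base = np.array([0.3, 0.5, 0.8])
layer = base.copy()  # a Curves layer with a flat curve duplicates the image

multiplied = base * layer                  # Multiply: base times blend
screened = 1 - (1 - base) * (1 - layer)    # Screen: inverted multiply

print(np.all(multiplied <= base))  # True: Multiply never lightens
print(np.all(screened >= base))    # True: Screen never darkens
```

This is why one Curves adjustment layer in each mode, revealed through brushwork on black-filled masks, can paint in dappled shadow and dappled light without either layer fighting the other.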
Notice that the lighting of the image has begun to look the way you want it. That is because everything you have been doing up to this point is to reinforce, through the use of color, sharpness, and contrast, the image in your mind’s eye. What you will do next is to remove everything that is not your vision of the image.
Figure 3.14.1. The image before brushing in the D2L_MULTI layer
Figure 3.14.2. The image after the initial brushwork
Figure 3.14.3. The beginning layer mask for D2L_MULTI
Figure 3.14.4. The layer mask after more brushwork
Figure 3.14.5. Brushing in the leaf cluster in the lower right
Figure 3.14.6. Before the brushwork
Figure 3.14.7. After the brushwork
Figure 3.14.8. The final layer mask
Figure 3.14.9. The layer mask for the L2D_SCREEN layer
Figure 3.14.10. The image after initial brushwork
Figure 3.14.11. The layer mask
Figure 3.14.12. The image after brushwork
Figure 3.14.13. The layer mask
Figure 3.14.14. The image after brushwork
Figure 3.14.15. The layer mask after all brushwork
Figure 3.14.16. The image after all brushwork
To add the final touch to this image, you will create the effect of a dappled ray of sunlight hitting the various leaves. But first, there is one problem you need to address. The file on which you are working is in 16-bit, and the Render Lighting Effects filter works only in 8-bit.
You do not need the Render Lighting Effects filter to light this image, because you have already done that. You will use the Render Lighting Effects filter because it creates ambient light by adding gray to the haze in an image. This quality is what is needed to finish the look of light coming through tree branches in a forest.
It seems a shame to have come all this way in 16-bit ProPhoto RGB only to have to go to 8-bit Adobe RGB. So here is a work-around for this issue.
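Why the round trip is worth a work-around can be sketched numerically (Python with NumPy; illustrative, not how Photoshop converts): dropping 16-bit data to 8-bit collapses neighboring tones together, and converting back cannot recover them.

```python
import numpy as np

# Three distinct neighboring 16-bit tones...
neighbors = np.array([1000, 1001, 1002], dtype=np.uint16)

# ...collapse to a single 8-bit level (65,536 levels down to 256)...
as_8bit = (neighbors // 257).astype(np.uint8)

# ...and converting back to 16-bit cannot separate them again.
back = as_8bit.astype(np.uint16) * 257

print(np.unique(neighbors).size)  # 3
print(np.unique(back).size)       # 1: the tonal distinction is gone
```

This quantization is where banding in smooth gradients comes from, which is why the work-around confines the 8-bit excursion to a single layer.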
Figure 3.15.1. Moving the light to the upper leaf
Figure 3.15.2. Reducing the size of the light
Figure 3.15.3. Adjusting the Intensity and Ambience
You now have two layers named RENDER_LIGHT: a Smart Filter layer, which you can readjust should the need arise, and a snapshot of that Smart Filter layer. You do this because the Render Lighting Effects filter works only in 8-bit, so the Smart Filter will not transfer over.
Figure 3.15.4. The layer mask after brushwork
Figure 3.15.5. The resulting image after brushwork
Figure 3.15.6. The image before the opacity adjustment
Figure 3.15.7. The image after the opacity adjustment
I am always doing things I can’t do; that’s how I get to do them.
—Pablo Picasso
Image harvesting, or ExDR, is a way to recreate what you originally saw in spite of, and by understanding, the limitations of camera technology. The real limitation is not the camera; it does what it does. You are limited only by your imagination, but if you are open to the impossible and view it merely as an opinion, you can do anything.
Is the image that you just created a believable improbability or a believable probability? Is your answer the same as it was at the beginning of this lesson? If your opinion has changed, you will realize that your feeling for any image may change as you work with it, and that is okay as long as it suits your vision and retains your voice.
In the next chapter, you will further explore the concept of ExDR for focus, blur, exposure, and image structure. The goal will be not only to recreate your original vision, but to cause shape to become the unwitting ally of color in your pursuit to guide the viewer’s unconscious eye.
Figure 3.16.1. The final image