image

Figure 3.1 © Lee Jordan (www.flickr.com/people/leejordan).

Chapter 3
Fixing Lens Problems

 

“So distorted and thin, where will it end?”
—Joy Division

 

 

Postproduction begins with the lens. While aspects such as set design, lighting, and wardrobe all combine to create the overall mood of a shot, the camera lens determines what will actually be seen, discarding everything that doesn't come under its gaze. This process of culling is repeated throughout the postproduction chain, most notably through editing, and is irreversible — you can't recover detail that wasn't originally shot.

The lens itself introduces its own compendium of problems. Lenses are precision instruments, but at the level of detail required by film and video photography, no lens is perfect.

This chapter will provide ways to fix common problems introduced by lenses and the handling of the camera itself.

Lens Distortion

Imperfections in the camera lens can distort the photographed scenes in several ways. Because it's impossible to manufacture a lens to have perfect curvature, the images that are recorded can be squashed or stretched in some way.

The two most common types of distortion are barrel and pincushion. With barrel distortion, the image appears to bulge outward, whereas pincushion is the opposite, almost sucking the picture into its center.

Barrel and pincushion distortion can be detected by examining vertical and horizontal lines in the image. If lines that should be straight (such as the edge of a building) curve toward the picture's center, that is usually evidence of pincushion distortion. Lines that curve outward, on the other hand, are indicative of barrel distortion.

image

Figure 3.2 Barrel distortion makes lines bulge outward.

image

Figure 3.3 Pincushion distortion makes lines bend inward.

image

Figure 3.4 Using a fisheye lens usually leads to barrel distortion. © Joel Meulemans, Exothermic Photography (www.flickr.com/people/Exothermic).

How to Correct Barrel or Pincushion Distortion

Barrel and pincushion distortion are both quick and simple to fix, using the right filters.

1. Pick a reference frame, and then identify a line or edge in the image that appears curved but that should be perfectly straight, either horizontally or vertically. For best results, choose something toward the edge of the picture.

2. If the reference line curves outward (implying barrel distortion), apply a pinch filter to the footage; otherwise, apply a punch filter (for pincushion distortion).

3. Adjust the parameters of the filter until the distortion goes away. If possible, use an on-screen guide line (or just use the edge of this book) for the most accurate results. Be careful not to go too far or you will end up with the opposite type of distortion.

4. You may need to crop or resize the corrected footage if the filter changes the picture size as part of this process.
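The pinch/punch adjustment in steps 2 and 3 amounts to a radial remap of the image. As a rough illustration (not tied to any particular package, and with invented names), here is a minimal NumPy sketch of a first-order radial correction; the coefficient `k` plays the role of the filter's strength parameter:

```python
import numpy as np

def correct_radial_distortion(img, k):
    """Remove simple radial lens distortion from an image.

    k > 0 acts as a 'pinch' (corrects barrel distortion);
    k < 0 acts as a 'punch' (corrects pincushion distortion).
    Uses nearest-neighbour sampling for simplicity.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Normalised coordinates relative to the image centre.
    u, v = (xs - cx) / cx, (ys - cy) / cy
    r2 = u * u + v * v
    # For each output pixel, sample the distorted source position.
    src_x = np.clip(np.round(cx + u * (1 + k * r2) * cx), 0, w - 1).astype(int)
    src_y = np.clip(np.round(cy + v * (1 + k * r2) * cy), 0, h - 1).astype(int)
    return img[src_y, src_x]
```

In practice you would adjust `k` interactively until your reference line straightens, exactly as in step 3 above; real lens-correction tools add higher-order terms to the same basic model.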

image

Figure 3.5

image

Figure 3.6 Barrel distortion can be seen along the edge.

image

Figure 3.7 Pushing the filter too far causes pincushion distortion.

The good thing about lens distortions is that they are consistent. If you work out the exact parameters needed to fix lens distortion for a specific shot, then you can usually apply exactly the same parameters to any footage that used that particular lens. Be careful, though, as zoom lenses can exhibit different distortions at different focal lengths.

TIP

If the focal length of the lens varies over the course of the shot (as is the case with crash zooms), then the amount of distortion will also vary across the shot. In this case you will need to keyframe the filter parameters to match the changing focal length.

Warping

If you find that the lens has distorted the image in such a way that a simple application of pinch or punch filters doesn't eliminate the problem, you will need to invest in more exotic tools. These are typically classed as “warping” tools, and they work by creating a grid of points on the image that can be moved around, stretching the pixels underneath.

image

Figure 3.8 Warping software, such as the Mesh Warp effect in Adobe After Effects (www.adobe.com), can be used for more sophisticated lens distortion correction.

Vignettes

Vignetting, which is when there is more light at the center of an image than at the edges, may be caused by a number of factors. The most common is a lens with multiple elements (or even a single lens element with multiple filters). Light striking the edge of the front-most element is

image

Figure 3.9 Vignettes can ruin a shot or enhance it. © Paul Hart (www.flickr.com/photos/atomicjeep).

not refracted enough to pass through the last element and is diminished by the time it reaches the recording medium (i.e., the film or CCD element). The resulting image tends to fade out from the center, occasionally reaching black at the corners. A reduction of brightness is typically seen, as well as a corresponding reduction of saturation.

image

Figure 3.10 A vignette shown on a simple pattern.

From an aesthetic perspective, vignettes may not necessarily be undesirable, as they can draw the viewer's eyes to focus on the center of the frame. From a technical standing, though, it is generally best to avoid producing images with vignettes in-camera, as they degrade the image and can reduce options for color correction later on.

How to Add a Vignette

Digitally adding a vignette to footage is easy, and the results can approximate “real” vignettes very well.

1. Load the footage and create a very small circular mask centered on the image.

2. Adjust the mask so that the edge softness extends toward the corners of the frame.

3. You may need to invert the mask to affect just the region outside of it.

4. Apply a saturation color correction to the masked region, and reduce the saturation to 0.

5. Apply a brightness color correction to the masked region, and reduce the brightness to 0.

6. Adjust the size and softness of the mask, and increase the saturation and brightness parameters to get the desired result.

7. Render out the sequence.
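For systems that expose masks numerically, the mask-plus-brightness recipe above can be approximated in a few lines. This is an illustrative NumPy sketch with invented parameter names, darkening luminance only; a full version would also pull saturation down through the same mask:

```python
import numpy as np

def add_vignette(img, radius=0.75, softness=0.5, strength=1.0):
    """Darken an image toward its corners using a soft radial mask.

    img: float array in [0, 1], grayscale (H, W) or colour (H, W, 3).
    radius: where the falloff begins (0 = centre, 1 = corner).
    softness: how gradually the falloff reaches the corners.
    strength: how dark the corners get (1.0 = full effect).
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Distance from centre, normalised so the corners sit at 1.0.
    r = np.hypot((xs - cx) / cx, (ys - cy) / cy) / np.sqrt(2)
    # Mask is 1 inside `radius`, fading to 0 over `softness`.
    mask = np.clip(1 - (r - radius) / max(softness, 1e-6), 0, 1)
    falloff = 1 - strength * (1 - mask)
    if img.ndim == 3:
        falloff = falloff[..., None]
    return img * falloff
```

Tweaking `radius`, `softness`, and `strength` corresponds to step 6 of the recipe.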

image

Figure 3.11

TIP

If you are using a pixel-based masking system (such as Photoshop's), you can recreate the soft edge either by “feathering” the mask several times or by heavily blurring it. An alternative approach is to create an alpha channel using a circular gradient from black at the center to white at the edges.

How to Remove a Vignette

Vignettes are destructive to an image, but in some cases it's possible to undo the damage.

Removing vignetting from footage can be tricky. The main problem is that detail toward the dark regions of the vignette can be lost, making it impossible to recover.

1. Load the footage and examine the pixel values in the vignette areas.

2. Create a very small circular mask centered on the image.

3. Adjust the mask so that the edge softness extends toward the corners of the frame.

4. You may need to invert the mask to affect just the region outside of it.

5. Apply a brightness color correction to the masked region, and increase the brightness until the image has the same brightness at the edges as it does in the center.

6. Apply a saturation color correction to the masked region, and increase the saturation until the image has the same saturation at the edges as it does in the center.

7. Adjust the size and softness of the mask, and tweak the color settings until the vignette cannot be seen.

8. You may need to crop and resize the image if there is substantial loss of detail at the corners.

9. Render out the sequence.
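Numerically, removing a vignette is the inverse of adding one: gain the edges back up by dividing out the falloff. The sketch below (NumPy, invented names) assumes the falloff can be modelled by the same soft radial mask; as noted above, detail that was crushed to black in-camera cannot be brought back by any amount of gain:

```python
import numpy as np

def remove_vignette(img, radius=0.75, softness=0.5, strength=1.0):
    """Invert a modelled radial brightness falloff by gaining up the
    edges. Parameters describe the vignette being removed."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot((xs - cx) / cx, (ys - cy) / cy) / np.sqrt(2)
    mask = np.clip(1 - (r - radius) / max(softness, 1e-6), 0, 1)
    # Guard against division by zero where the vignette reached black.
    falloff = np.maximum(1 - strength * (1 - mask), 1e-3)
    if img.ndim == 3:
        falloff = falloff[..., None]
    return np.clip(img / falloff, 0.0, 1.0)
```

Adjusting the three parameters until the vignette disappears mirrors step 7 of the recipe.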

image

Figure 3.12

Camera Shake

Although it may not always be noticeable, camera shake happens all the time. Small tremors in the position of the lens, either from mechanical vibration of the lens elements or camera body or from less-than-smooth camera moves (or both), translate into displacement of the image across a sequence. The longer the focal length of the lens, the more exaggerated the effect tends to be.

Digital compositors have had to deal with this problem for years, and the result is that it is fairly easy to correct with modern digital tools.

image

Figure 3.13 Movement of the camera over the course of a single exposure can cause extreme blurriness. © Jessica Merz.

Film Shake

One of the oddities about camera shake in film cameras is that even though it can be much more pronounced than with other media (the movement of the film itself in relation to the lens compounds the motion of the shaky camera), it isn't really noticeable when viewed in a theater. The reason for this is that film projectors have an inherent shake to some degree, which the audience subconsciously compensates for. When you watch projected film, it is only when scrutinizing the edge of the frame, or looking at something that should have a fixed position (such as captions or titles), that you really notice the erratic motion.

Once you view such footage on a monitor or screen, the unsteady motion becomes much more obvious, largely because the edge of the screen is much more prominent and provides a stronger point of reference.

How to Remove Camera Shake

This is a common compositing technique, known as “stabilization.”

Removing camera shake from a sequence takes a bit of time to master, but can be done quickly with practice. It involves two stages: calculating how the camera moves in relation to the scene, and then reversing that motion.

1. Load the footage into a tracking system.

2. Pick a point in the image that should be completely still and track its motion across the entire shot.

3. You may need to tweak the results or add more points, for example, if something moves in front of the point you've tracked at any time during the shot. The resulting tracking data give the motion of the camera during the shot.

4. This tracking data can now be applied to the original footage (you may need to invert the data to get the desired result, depending upon your software).

5. Zoom and crop the image (if necessary) so that the edge of the frame doesn't encroach on the footage, and render the sequence out.
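Steps 2 through 4 can be illustrated end to end with a translation-only tracker. This NumPy sketch does an exhaustive template match around the chosen point (real trackers are far more sophisticated, handling rotation, scale, and sub-pixel motion) and then applies the inverted motion; all names here are invented for the example:

```python
import numpy as np

def track_offset(ref, frame, top, left, size=16, search=8):
    """Locate a small template from `ref` inside `frame` by brute-force
    sum-of-squared-differences search within +/- `search` pixels.
    Returns the (dy, dx) displacement of the feature."""
    tpl = ref[top:top + size, left:left + size].astype(np.float64)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > frame.shape[0] or x + size > frame.shape[1]:
                continue
            cand = frame[y:y + size, x:x + size].astype(np.float64)
            ssd = np.sum((cand - tpl) ** 2)
            if ssd < best:
                best, best_dy, best_dx = ssd, dy, dx
    return best_dy, best_dx

def stabilize(frames, top, left, size=16, search=8):
    """Shift every frame so the tracked feature stays where it was in
    frame 0 (translation-only stabilisation, step 4 of the recipe)."""
    ref = frames[0]
    out = [ref.copy()]
    for f in frames[1:]:
        dy, dx = track_offset(ref, f, top, left, size, search)
        out.append(np.roll(f, (-dy, -dx), axis=(0, 1)))  # reverse the motion
    return out
```

Note that `np.roll` wraps pixels around the frame edge, which is exactly why step 5 (zoom and crop) exists in a real pipeline.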

image

Figure 3.14

Picking the right point to track can make all the difference to the success of the final result, so it may be worth experimenting with different points. Ideal points will vary depending upon the specific tracking software you use, but as a rule, try to pick something close to the center, with strong, high-contrast vertical and horizontal edges. For static camera shots, features such as the corners of windows or items of furniture make excellent candidates.

Sometimes an area outside of the picture is known to be stable and can serve as an excellent tracking point. For instance, some film cameras burn a reference marker at the edge of the frame, independent of the position of the film itself, which can be used to at least eliminate the motion of the film weaving as it passes across the camera's gate.

Although not ideal from the point of view of quality loss, it may be necessary to repeat the process several times on the rendered output to ensure perfectly smooth motion.

TIP

Camera shake typically implies a loss of sharpness due to motion blur, as well as the more apparent motion problems. For the best results, it may be necessary to apply some form of motion blur reduction after removing the shake. See Chapter 10 for more information.

How to Add Camera Shake

Camera shake can be added to footage to imbue it with energy, to add drama, or simply for the sake of continuity.

The secret to adding believable camera shake to a more stable shot is to use footage that naturally has the shaky motion that you're after.

1. Locate some reference footage that already has the characteristics of camera shake (in terms of frequency and amount) that you want to add to the stable footage.

2. Pick a point in the image that should be completely still and track its motion across the entire shot.

3. You may need to tweak the results or add more points, for example, if something moves in front of the point you've tracked at any time during the shot. The resulting data give the motion of the camera during the shot.

4. Save the tracking data.

5. Load the stable shot into the tracking system.

6. Apply the saved tracking data to the shot (you may need to split up the shot or loop the data if the reference shot was shorter than the stable one).

7. Zoom and crop the image (if necessary) so that the edge of the frame doesn't encroach on the footage, and render the sequence out.
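Step 6 is straightforward once the tracked offsets are saved: displace each stable frame by the corresponding reference offset, looping the data if the stable shot is longer. A minimal NumPy sketch (invented names, whole-pixel offsets only):

```python
import numpy as np

def apply_shake(frames, offsets):
    """Apply per-frame (dy, dx) displacements (e.g. tracked from a
    handheld reference shot) to an otherwise stable sequence.
    Offsets loop if the sequence is longer than the reference."""
    out = []
    for i, f in enumerate(frames):
        dy, dx = offsets[i % len(offsets)]
        out.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return out
```

As with stabilization, the wrap-around at the frame edge is why step 7 crops the result.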

image

Figure 3.15

How to Smooth Unsteady Shots

Even the infamous Steadicam can produce results that leave a lot to be desired.

1. Load the footage into a tracking system.

2. Pick a point in the image that should be completely still and track its motion across the entire shot.

3. You may need to tweak the results or add more points, for example, if something moves in front of the point you've tracked at any time during the shot. The resulting tracking data give the motion of the camera during the shot.

4. You now need to separate the shake from the desired motion. There are a couple of ways to do this.

a. If your tracking software has the capability, you can duplicate the tracking data you've just collected, smooth (or blur) it, and then subtract that from the original tracking data.

b. The other option is to completely remove all camera motion and then manually recreate the original camera move (using the pan-and-scan technique in Chapter 9) afterward. If you opt for this approach, make sure that you don't crop the footage at any point (you may need to increase the rendering region of your workspace to prevent this from happening).

image

5. You should now be left with tracking data that contain just the motion that needs to be removed. Apply them to the original footage (you may need to invert the data to get the desired result, depending upon your software).

6. If necessary, apply a pan-and-scan to recreate additional motion.

7. Zoom and crop the image (if necessary) so that the edge of the frame doesn't encroach on the footage, and render the sequence out.
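Option 4a — smoothing the tracked path and subtracting it from the raw data — can be sketched with a simple moving average. Here the path is a single axis of tracked positions (run it once for x and once for y); the window size is an invented starting point:

```python
import numpy as np

def shake_component(path, window=9):
    """Separate high-frequency shake from an intended camera move by
    subtracting a moving average of the tracked path (step 4a).
    `path` is a 1-D array of tracked positions along one axis."""
    path = np.asarray(path, dtype=np.float64)
    kernel = np.ones(window) / window
    pad = window // 2
    # Edge-pad so the smoothed path has the same length as the input.
    padded = np.pad(path, pad, mode='edge')
    smooth = np.convolve(padded, kernel, mode='valid')
    # What remains after removing the smooth move is the shake;
    # apply the negative of this to the footage.
    return path - smooth
```

A larger `window` treats slower wobbles as shake to be removed; a smaller one preserves more of the original motion.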

TIP

A very common problem with shot footage, especially for film shoots (which tend to involve a lot of swapping camera parts), is for a hair to get caught in the gate and then make a cameo appearance in the resulting image. This can have several side effects, such as causing scratches, but all of these are generally fixable. See Chapter 4 for more information on film damage.

Focus

Getting good focus during a shoot involves so much skill that an entire profession (focus pulling) is devoted to it. Because of this, it's inevitable that sometimes shots are not quite as focused as they should be, resulting in images that may be “soft” (lacking sharpness). Sharpness may be most noticeably lost at the lens, but it is also diminished during the process of digitization (or analog-to-digital conversion).

It is not possible to truly refocus images digitally; the loss of quality is inherent, and short of a reshoot it can never be recovered. The easiest way to visualize why this happens is to think

image

Figure 3.17 © Keven Law (www.flickr.com/people/6616454910N00/).

about nighttime photographs that have distant lights in them. The lights that are close to the focal plane will appear as tiny points in the image. The farther the lights are from the focal plane, the more they start to look like circles of color (and the bigger those circles get). Well, that's actually true of any single point of anything ever photographed, whether it's a streetlight or the edge of a table. The farther away from the focal plane that point is, the more distorted it becomes. In the same way that it's impossible to shrink those large circles of light back down into points, it's impossible to recover detail for any part of an image that is not on the focal plane. Things become a little more complicated when you also account for the fact that the brighter those points of light are, the more they overpower the rest of the image.

image

Figure 3.18 The farther from the focal plane, the less defined the edges. The recorded size of a single point is known as the circle of confusion.

Having said that, it is possible to use digital tools to at least fake sharpness in a convincing way. The sharpening process (often called “aperture correction” when it applies to analog-to-digital conversions) basically finds and then increases the prominence of edges in an image. It requires a delicate balance, though — too much sharpening and telltale problems such as ringing or aliasing can appear.

TIP

Although Chapters 5 and 6 relate to video and digital footage respectively, they contain techniques that are useful for fixing problems caused by too much sharpening, such as ringing and aliasing.

Digital Sharpening

Anyone who has dabbled with digital photography is probably aware of the range of sharpening tools available. To get the greatest benefit from them, it is important to be aware of two things: how we perceive edges and how sharpening algorithms work.

Perceived sharpness is incredibly subjective. It depends on how good your eyesight is as well as things like how bright the subject is, how much motion there is, and how close you are to the subject. Most importantly, it depends on contrast. Images with high-contrast edges will almost always appear to have better definition. Therefore, one of the most important things to do when analyzing footage is to optimize your viewing conditions. This means that you need to make sure that the screen you view the footage on is bright, that there is little ambient light in the room, and that what you're viewing is set at 100%, thus ensuring that it is not being resized by the software to fit the screen. You should also check that your screen itself doesn't apply any sharpening to what you're looking at — unfortunately, many modern screens do this by default.

Another subtle issue is that we are much more sensitive to changes in luminance than we are to chroma. In terms of sharpness, this means that sharpening luminance is much more important than sharpening chroma.

The other key to mastering this process involves understanding how the sharpening algorithms work. Sharpening processes come in a variety of flavors, but are mostly based upon a simple premise: they exaggerate edges. Imagine an edge in an image as a see-saw, with the height of one side compared to the other as the relative sharpness of the edge. A sharpening algorithm adds a little bit of weight to the see-saw, so that the height difference becomes greater. At its most basic, this will result in every edge in the image appearing more pronounced (which isn't necessarily a good thing). Some algorithms can be a little better behaved than that, only affecting edges that already have a minimum degree of sharpness, for example. Others actually overexaggerate the sharpness of pixels on an edge — imagine that someone dug a hole in the ground underneath the see-saw to coax a bit more of an incline out of it.

image

Figure 3.19 Looking at the edge profile of a blurred edge, a sharp edge, and an oversharpened edge reveals digital artifacts that might otherwise be difficult to spot.

There are significant problems with just applying a stock sharpening algorithm to a piece of footage, even though it is often a quick and easy fix. First of all, they almost always affect the footage in a uniform way. This means that every pixel of every frame is treated as a potential edge, with little regard to where it is with respect to the focal plane. Secondly, they tend to analyze the RGB values to determine where there are edges, which ignores the fact that we are more interested in luminance values than chroma values, and also ignores the fact that different amounts of sharpening may be required for the shadows than for the highlights. And, finally, they can lead to destructive artifacts within the footage, particularly if there is a lot of fast motion in the image.
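The see-saw analogy maps directly onto the classic unsharp mask: blur a copy of the image, and add the difference back to exaggerate every edge. A minimal NumPy sketch follows — deliberately uniform and luminance-only, so it exhibits exactly the "treats every pixel the same" problem described above:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Basic unsharp mask on a float image in [0, 1]: exaggerate edges
    by adding back the difference between the image and a blurred
    copy of itself. `amount` scales the effect."""
    img = img.astype(np.float64)
    # 3x3 box blur with edge padding (sum of nine shifted views).
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    blur = sum(p[y:y + h, x:x + w]
               for y in range(3) for x in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)
```

Pushing `amount` too high produces the overshoot ("ringing") shown in the oversharpened edge profile of Figure 3.19.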

How to Repair Out-of-Focus Shots

The following technique works well in most situations, and at a minimum will give you much better results than just applying a stock sharpening filter.

1. Load in the footage, and make sure it is set to be viewed at 100%.

2. Separate the luminance channel from the color channels. This will form the basic mask. (Splitting into Lab channels and using the L channel is the preferred method for doing this, but it is not available in all software packages, in which case you may have to make do with splitting into HSB and working with the B channel.)

3. Duplicate the luminance channel a couple of times so that you have three copies.

4. Key each of the luminance channel copies so that one contains just shadow information, one has just mid-tones, and the last just highlights. These will form your three main masks to apply to the original footage.

a. If there are specific regions in the images that need extra sharpening, such as an object or character the audience will be paying attention to, it will be worth creating additional masks to isolate those areas.

b. If the focal point changes significantly during the shot, create a circular mask with a very soft edge and apply a dynamic move on it to roughly track the focal point of the lens throughout the shot.

5. Apply an edge detection filter to the three main masks, and then blur the results by about 200–300%. This will focus the sharpening effects on just the edges.

6. Apply a series of sharpen filters to the original image, using each of the masks in turn. If your software “bakes in” the sharpening operations to the original image as you apply them, be sure to set the sharpening strength to a very low value in each case, and then repeat this step until you get the desired result.

7. Render out the sequence.
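The core idea of steps 2 through 6 — sharpening different tonal ranges by different amounts — can be sketched compactly. This illustrative NumPy version (invented names and thresholds) sharpens only the mid-tones of a luminance channel, leaving shadows and highlights untouched; a full implementation would use three soft masks and an edge mask as described above:

```python
import numpy as np

def masked_sharpen(luma, amount=1.0, lo=0.25, hi=0.75):
    """Sharpen a float luminance channel in [0, 1] only where pixel
    values fall inside the tone range [lo, hi] (the mid-tone mask)."""
    luma = luma.astype(np.float64)
    h, w = luma.shape
    # 3x3 box blur with edge padding.
    p = np.pad(luma, 1, mode='edge')
    blur = sum(p[y:y + h, x:x + w]
               for y in range(3) for x in range(3)) / 9.0
    # Hard tone mask: 1 inside [lo, hi], 0 outside. A production
    # version would feather this, per step 5.
    mask = ((luma >= lo) & (luma <= hi)).astype(np.float64)
    return np.clip(luma + amount * mask * (luma - blur), 0.0, 1.0)
```

Running this once per tonal range, each with its own `amount`, reproduces the separate shadow/mid-tone/highlight treatment the recipe calls for.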

image

Figure 3.20

TIP

Generally, if you're going to be performing a lot of other fixes to a piece of footage, you should leave the sharpening part until the end.

Sharpness from Noise

You can increase the perception of sharpness by adding a small amount of noise. To exploit this fact, mask out any regions representing the focal points, and then add some noise to the luminance channel.
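Adding grain is a one-liner in most systems; as a hedged NumPy sketch (invented names, applied to a whole channel rather than through a focal-point mask):

```python
import numpy as np

def add_grain(img, amount=0.02, seed=0):
    """Add zero-mean Gaussian noise to a float image in [0, 1];
    fine grain can read as extra texture on slightly soft footage.
    `seed` is fixed here only to make the example repeatable."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, amount, img.shape)
    return np.clip(noisy, 0.0, 1.0)
```

In practice you would apply this through the inverted focal-point mask, so the in-focus subject stays clean.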

image

Figure 3.21 Detail of the slightly out-of-focus shot.

image

Figure 3.22 The final mask.

image

Figure 3.23 After sharpening.

image

Figure 3.24 Adding a small amount of noise can increase the apparent sharpness.

Color Fringing

Color fringes are halos or bands of color around objects in images. There's a ton of debate as to what exactly is the cause of color fringing in any given situation, but it can usually be attributed to either a lens or recording medium imperfection.

The good news is that they are largely geometric in nature — you won't often see large fringing in one part of an image and not in another — and thus fairly simple to fix.

image

Figure 3.25 © Josef F. Stuefer (www.flickr.com/photos/josefstuefer).

How to Remove Color Fringes

Most color fringes can be removed by shrinking them down.

1. Load the footage and identify the primary RGB colors of the fringe. For example, purple fringes are a combination of red and blue.

2. Separate the red, green, and blue channels.

3. Scale the channels containing the fringe slightly (red and blue channels for a purple fringe), until the fringe cannot be seen.

4. Crop the image, if necessary, to remove any borders at the edge of the frame.

5. Render out the sequence.
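Step 3 — scaling individual channels about the image center — can be sketched as follows. This NumPy version uses nearest-neighbour resampling and invented names; the default scale factors are illustrative starting points for a purple fringe, not fixed values:

```python
import numpy as np

def shrink_channel(channel, scale):
    """Resample a single 2-D colour channel, scaled about the image
    centre by `scale` (values slightly above 1 pull the channel
    inward, collapsing a fringe at the edges)."""
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    src_y = np.clip(np.round(cy + (ys - cy) / scale), 0, h - 1).astype(int)
    src_x = np.clip(np.round(cx + (xs - cx) / scale), 0, w - 1).astype(int)
    return channel[src_y, src_x]

def defringe(img, red_scale=1.002, blue_scale=1.002):
    """Shrink the red and blue channels of an (H, W, 3) RGB image
    slightly to collapse a purple fringe, leaving green untouched."""
    out = img.copy()
    out[..., 0] = shrink_channel(img[..., 0], red_scale)
    out[..., 2] = shrink_channel(img[..., 2], blue_scale)
    return out
```

Because the fringe sits at high-contrast edges far from the center, even scale factors this small can be enough; adjust per channel until the fringe disappears.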

image

Figure 3.26

image

Figure 3.27 Close-up of a color fringe. © Andrew Francis (www.xlargeworks.com).

image

Figure 3.28 After removing the color fringe.

Lens Flare

Love them or hate them, lens flares are everywhere. They are the result of light reflecting off the surface of individual lens elements. Traditionally, they would be seen when pointing the lens directly at a bright light source (such as the sun), but the effect is easy to simulate digitally through filters. Ironically, lens flares are now so synonymous with computer-generated images that many cinematographers try to avoid them wherever possible just for this reason.

image

Figure 3.29 © Andrew Francis (www.xlargeworks.com).

Lens flare can also be more subtle, with a small but noticeable amount of stray light falling on the lens and contaminating the shot, which may not be noticed until long after the shot is in the can. Lens flares distort the contrast, luminance, saturation, and hues of the areas they fall on, sometimes so much so that even color-correcting the affected areas cannot recover the original detail. They also have a tendency to move around a lot over the course of a shot, but as we'll see, this actually helps in the process of trying to remove them.

How to Remove Lens Flare

Removing lens flare from footage is a laborious process, but the results can be worth it.

1. Load the footage.

2. First examine how the lens flare moves in relation to everything else in the shot. Because the most common source of lens flare is the sun, footage with lens flare tends to contain a lot of sky.

3. Create a “clean plate” to use throughout the shot. Identify a frame that contains most of the imagery in the shot (you may need to pick several frames at regular intervals to fulfill this function) and clone from them to create a single frame with no discernible lens flare.

a. Where possible, mask parts of the lens flare and color-correct them so that they match the surrounding image.

b. For the rest of the image, clone areas from adjacent frames or from elsewhere in the current frame until the lens flare has been completely removed from the image.

4. Step through each frame, cloning areas from the clean plate on that frame until traces of the flare have gone.

5. Once you reach the end, play through the cloned result in real time, looking for inconsistencies and cloning artifacts that may have arisen. Fix these by using more cloning (you may also need to clone from the original footage if you get into a bit of a mess) until the lens flare is removed and there are no cloning artifacts.

6. Render out the sequence.
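The per-frame cloning of step 4 is an interactive, artist-driven process, but its core operation — copying clean-plate pixels into the flare-affected region — is trivial to express. A minimal NumPy sketch (invented name; `mask` marks the flare pixels on this frame):

```python
import numpy as np

def patch_from_clean_plate(frame, clean, mask):
    """Replace flare-affected pixels (mask == True) with pixels cloned
    from a pre-built clean plate (step 4 of the recipe)."""
    out = frame.copy()
    out[mask] = clean[mask]
    return out
```

In a real shot the mask is drawn or keyed per frame, and a soft-edged blend rather than a hard replacement avoids visible cloning seams.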

image

Figure 3.30

image

Figure 3.31 Close-up of lens flare on a frame.

image

Figure 3.32 Cloning out the lens flare.

Simulating Camera Filters

One of the benefits of working in the digital realm is that it's very easy to simulate certain effects (with an unprecedented degree of control) that would otherwise require additional filters attached to the lens during the shoot, at additional expense and with added limitations.

Table 3.1 describes how to replicate some of the most popular camera filters easily (where possible) using generic digital processes (note that these are intended as starting points rather than definitive recipes).

Table 3.1

Photographic Filter       Digital Equivalent
Neutral density           Brightness + contrast
Polariser                 None
Colour                    Tint
Starfactor                Starburst filter
Soft focus/diffusion      Blur with soft circular mask
UV                        None