CHAPTER 9

Matching Shots

This chapter is devoted to one of the basic colorist tasks: matching shots. This skill is critical because it is the colorist’s job to ensure that in any scene, all of the shots look like they happened in the same place and time, even though they may have been shot over the course of several hours or days, or even in different locations. Especially in a dramatic piece, the audience must not be pulled out of the story because color, contrast, or luminance levels shift from shot to shot inside what is supposed to be a single contiguous scene.

Shot matching is one of the least subjective skills a colorist must master: either the shots match or they don’t. There are numerous strategies and tips to make these matches easier, but the more experienced you become, the more you simply rely on your eyes. Until then, matching shots is valuable training in forcing a very specific look onto a shot, with a clearly definable and quantifiable result.

Many times, the need to match cameras is due to some technical mistake during production, but matching shots still needs to occur with even the most skilled director of photography and most diligent crew. The reasons for these matches often have to do with the quality or color temperature of natural light changing over time. But it can also happen on a shooting stage with completely controlled lighting. Sometimes it can happen between lens changes—for example, the actual exposure of a face in a wide shot may match the close-up, but because of how the surrounding light levels are perceived, the wide shot may still need to be adjusted to match the perceived brightness. Stephen Nakamura describes just such a case from grading David Fincher’s Panic Room. In one scene, the perception of the light level of the wide shot was affected by the light bouncing off of a large wall. But in the close-up, even though the exposure values of the skin on the face were identical, the skin tones seemed brighter, because the eye wasn’t taking in as much of the wall.

I presented the colorists with four matching scenes (which are also available on the DVD). The first pair consists of two shots of the lions in front of the Art Institute of Chicago: one shot with proper white balance (though fairly warm) and one balanced blue. The second pair includes an interview clip and a B-roll shot that need to cut back to back. The third pair includes two clips from the same interview, shot outdoors as lighting conditions changed. And the final one is a seemingly impossible match of two horribly overexposed and poorly white balanced images of the Chicago Water Tower.

Matching the Lions of the Art Institute

Craig Leffel, of Chicago post house Optimus, starts us out with his take on the match of the Art Institute lions (Figures 9.1–9.4).

Leffel begins by analyzing the images and correcting the “base” image. “I’m looking at these shadows,” he explains as he points to the black areas above the three colored banners, “since they’re the darkest shadows that I can see the fastest. This,” he says, pointing to the shadowed archways above the doors, “is also a good place to see texture; to see if I’m cranking the blacks too hard or too harshly, this stuff will look pretty awful pretty fast. I’m trying to get the blacks not to look milky and trying to get some richness, but richness with separation. Just adding contrast to an image and just crushing the blacks is not the same as trying to get tonal separation and get richness. Especially when I’m working off something that I know is a piece of tape, I try to separate out as much dynamic range as I can. The way I discern that is by the black to midtone relationship and then the midtone to highlight relationship. And to me, when you start out—it’s one thing where you finish—but where you start, it’s nice to have as much range between each stage of black, gamma, and white as you can without any clipping, crushing … just get as full a tonal range as you can. Imagining it’s a photograph and trying to see every bit of the tone from 16 steps of gray that you can or more. Kind of a Zone System kind of a thing. Whenever I’m doing an image I’m always thinking about—not literally the Zone System—but that’s pretty much how I judge an image.”
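
For readers who want to see Leffel’s black/gamma/white relationship in concrete numbers, here is a minimal Python sketch. It is not taken from any grading system; the lift/gamma/gain formula below is just one common simplified model, and the settings are illustrative.

```python
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Map normalized values (0.0-1.0) through a simple lift/gamma/gain model:
    lift raises or lowers the blacks, gain scales the whites, and gamma bends
    the midtones between them."""
    x = np.clip(x, 0.0, 1.0)
    out = gain * (x + lift * (1.0 - x))            # lift pivots around white
    out = np.clip(out, 0.0, 1.0) ** (1.0 / gamma)  # gamma shapes the midtones
    return np.clip(out, 0.0, 1.0)

# A 16-step gray ramp, in the spirit of Leffel's Zone System analogy.
ramp = np.linspace(0.0, 1.0, 16)
# Deepen the blacks slightly and open the midtones; the white point stays put.
print(lift_gamma_gain(ramp, lift=-0.05, gamma=1.15, gain=1.0).round(3))
```

The printed ramp illustrates the point Leffel is making: the darker steps spread apart while the top of the ramp stays just short of clipping.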

image

Fig. 9.1 The “base” shot, though it’s a little warm.

image

Fig. 9.2 Tektronix WVR7100 screengrab. Upper left: YRGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: vectorscope zoomed 5x.

image

Fig. 9.3 The “cool” shot.

image

Fig. 9.4 Tektronix WVR7100 screengrab. Upper left: YRGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: vectorscope zoomed 5x.

Just adding contrast to an image and just crushing the blacks is not the same as trying to get tonal separation and get richness.

– Craig Leffel, Optimus

With the base image looking the way he wants (Figures 9.5 and 9.6), he grabs a still to begin working on the match. Very quickly, without referring to the scopes, Leffel has a pretty close match.

Leffel ignores the fact that the sky in the “cool” image is radically off, knowing that he’ll deal with that later. He switches from the split to cutting back and forth between the still store and the correction. “I have a blue shift in my shadows if you look at the bottom of that lion. The color of the building is right, except the contrast is wrong. You can also see the blue in the shadows in the doorways and in the guy’s jacket on the stairs. However, in a case like this, with a mis-balanced camera, there’s going to be a trade-off of what compromises you’re willing to make in order to make most of the image feel good.”

Definition

Zone System: A photographic concept espoused heavily by famous landscape photographer Ansel Adams. The system was designed to be used for the initial negative exposure right through the final print. It is a system designed to properly expose prints by envisioning, describing, and targeting the exposures of certain tonal “zones.” There are several books on the subject, including Adams’s original The Negative: Exposure and Development (New York Graphic Society). This is a great book for learning to understand tonal ranges and contrast.

Still Store

Any good color correction application or system should have some method of storing and recalling visuals to which you can refer. There are a number of important ways to use a still store. Several colorists grabbed stills throughout their corrections so that they could judge whether the direction they were heading was improving the image. Others grabbed stills of shots they were trying to match exactly or of scenes in which they were trying to maintain continuity. Also, the still store can be used to maintain consistency over long-form programs.

Learn to use the still store in your application using the keyboard shortcuts. Experiment with ways to use your still store or reference images to improve your corrections. It may seem like pulling these stills and referring to them will slow you down, but they can keep you from straying too far down an unproductive path.

Also, as you see by the example of the colorists throughout the book, you need to decide in which cases you want to cut back and forth between the still store and the live image, or whether you want to wipe between it and your working image. Some prefer one method and some prefer the other, but most of them use both methods at one time or another.

DaVinci Resolve’s Color screen, where most of the correction is done, has a dedicated area devoted to pulling stills and using them. Apple Color has an entire “room” devoted to stills.

“So,” I ask Leffel, “making a perfect match won’t be possible in this case because one or more of the color channels has either become clipped or compressed in one area and not another?”

“Yes,” he responds. “So you have to say, ‘I want to get as much as I can get right.’ Like this is already better, just to take that blue out of the black.”

There’s going to be a trade-off of what compromises you’re willing to make in order to make most of the image feel good.

– Craig Leffel, Optimus

I ask him what he sees as the difference in the images. “It’s mostly red gamma. But if you start worrying about that particular detail, you’re going to lose the rest of it. It’s more an overall perception thing. If you just try and watch the whole image, not trying to see too many details, what your eyeball is going to catch is the overall hue shift. Your eyeball is not going to catch, necessarily, that change in the doorway.”

I ask Leffel to describe what he did as he cuts back and forth between the corrected and uncorrected cool image. “I looked at that blue and said, ‘Most of that is happening in the brightest parts of the picture or gain’ and I immediately tried to take out that blue tone and lean more towards the target image overall—throwing warmth in. Once I had that even remotely close, I started dialing contrast in. So then it was time to hit blacks and gamma and dial in some contrast. Then working black and gamma against each other to try to get full tonal separation again in the shadows and the midtones so that I wasn’t crushing or hitting anything too hard.”

image

Fig. 9.5 First Leffel got the “base” image to a place where he was comfortable with it.

image

Fig. 9.6 Data from the Primary room.

I press him further, asking, “By saying ‘full tonal separation and working blacks against mids,’ you mean how far you pull down the blacks and how high you pull up the mids or how high you bring up the blacks and how low you pull down the mids? And you’re doing that with both hands. Then you do that on the other side with the highlights?”

“Exactly,” he responds. “You open the midtones and darken the whites. It’s a lot easier to match an image if you have some full tones to work with so I added black. I added gamma. I added color saturation. I mostly manipulated midtones. I brought the black down, but I also brought the whole midtone down. You can see that the white values don’t change a whole lot, but the midtones and the blacks do.”

image

Fig. 9.7 Primary correction for cool image.

image

Fig. 9.8 Data from Primary room.

The tonal separation really makes the detail pop. “It looks like you can see individual bricks in the façade,” I comment.

I still use the Zone System every day.

– Craig Leffel, Optimus

“Absolutely, and that midtone kind of really stretches out. One of the things I tell colorists is that you have to discern rather quickly: where’s the white? What’s a white point? If you think of the whitest points and the darkest points, and then everything else is kind of midtone. Then if you manipulate that midtone and think of midtone as a curve that you’re kind of sliding down, you can sort of round this image out to have some richness. So you’ve added a bunch of black and stretched out the image, not to the point where it’s harsh or that you’re clipping anything unnecessarily—in an image like this you kind of have to clip, but—you’ve stretched it out to have dynamic range: a black, a little-bit-higher than black, a middle gray, a slightly-higher than middle gray, something approaching white and then white. If you can get 16 steps of gray into an image, you’re doing a great job … or at least my buddy Ansel Adams said so,” Leffel jokes. “I still use the Zone System every day. I’m really surprised that I do, but I come from printing photographs and the mark of a good printer of photographs is tonal separation. If the creative direction is to eliminate it, then of course that’s what you do, but as a base way to color correct or as a base way to approach an image, I always approach it as a full tone image,” Leffel concludes.

image

Fig. 9.9 Secondary correction, pulling added warmth out of the sky.

image

Fig. 9.10 Data from Secondary room.

Bob Sliga also took on the challenge of the Art Institute lions. His approach was that—even though the “base” image wasn’t ideal—he would treat that as the “hero” grade and would match directly to the uncorrected, slightly warm shot. This is almost the reverse of Kassner’s approach later in the chapter.

I like to look at the vectorscope blown up … as far as I can go because it helps me find a neutral black and a neutral white.

– Bob Sliga

“I look at the waveform monitor, vectorscope [Figures 9.2 and 9.4]. I like to look at the vectorscope blown up a lot. I blow it up as far as I can go because it helps me find a neutral black and a neutral white. I also look at the RGB Parade display, then I look at the picture. What I have up right now is in the still store; I’ve saved a picture that I want to match to. The white balance is extremely different. The exposure level is different on the scope. I can see where I have to put the signals in order to help match the image. I’ll utilize the wipe to the reference image, then I’ll rotate the split so I have a little bit more of the picture,” he says as he rotates the wipe so that it goes from the lower left corner to the upper right corner.

“So, I go to primary in and I’m just going to brighten this up really high. So one of the things I’m looking for is a match in the waveform. Then it’ll be by eye after that. So you kind of get it in the ballpark of the overall video level. I’m also looking at the black level; how we’re higher over here, so it’s not balanced out. So as I come back over here to my parade display what I’m trying to do is balance these off as close as possible. To do this, I’m going to start by making my blacks black.”

Sliga uses the shadow trackball to balance blacks. “You can see as I move around what happens on the vectorscope. You want to have things coming out of the center. We still have a big-time white balance difference and I’m not even looking at the monitor. I can do this in the joyball area and move all three tonal ranges or I can come over here to the advanced side (the Advanced tab in the Primary room) and grab the channels one at a time. I’m going to bring my red lift down just a little bit more in the blacks. Now that we’re in the Advanced tab, it’s easier than moving three joyballs at once. This is just another way of doing this.”
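
As a rough numeric analogue to what Sliga is doing with the red lift (a sketch under my own assumptions, not Apple Color’s actual controls or math), balancing the blacks amounts to sampling a region that should read as neutral black, measuring how far each channel sits above the darkest one, and subtracting those offsets:

```python
import numpy as np

def balance_blacks(img, patch):
    """Neutralize the blacks of a float RGB image (H, W, 3 in 0-1).

    `patch` is a pair of slices covering a region the colorist judges should
    be neutral black (for example, the shadowed archways).  We measure how far
    each channel sits above the darkest channel there and subtract that
    offset, which is the numeric equivalent of pulling, say, red lift down.
    """
    shadow = img[patch].reshape(-1, 3).mean(axis=0)  # mean RGB of the patch
    offsets = shadow - shadow.min()                  # per-channel lift above black
    return np.clip(img - offsets, 0.0, 1.0)

# Hypothetical usage: rows 700-720, columns 200-260 cover a shadow area.
# balanced = balance_blacks(frame, (slice(700, 720), slice(200, 260)))
```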

Sliga switches from adjusting the red, green, and blue shadow levels to working on highlights. “We’re just going to try to get the highlights in the ballpark,” Sliga states as he checks the waveform monitor. “We need to take some of the blueness out of this, which I can look to do in one of a couple ways. First, I’m going to start with the blue gamma and bring it down into this area here. Then I’m going to bring the red gamma up a tad and then go back and forth.”

Sliga changes gears again and jumps from the Advanced tab corrections back to the trackball for midtones. “This is one of those places where it’s easier to do with the trackballs, so I’m going to do the rest of the correction over here. We’re getting warmer overall. We’re probably not going to match it totally 100 percent exactly, but the idea is to get it pretty darn close and we should be able to. If we had to use windows and that, we could. Remember I’m just in the primary in room for this right now. So I’m just going to add a bit of color to it. Looks like we have a little bit of a green balance,” he says as he adjusts the highlight trackball. “I’m doing this by eye at this point. Then I’ll come back to the gamma with a little bit more green. Now I’m using shadow sat [saturation] and pulling down some of the saturation that was building up in the shadows.”

With most of the work done in the Primary room (Figures 9.11–9.14), Sliga moves to secondaries, explaining, “I’m going to use the Saturation curve. What I really want to do is deal with that yellow that’s coming in to the warmth of the bricks.” He pulls the saturation down on the yellow vector of the curve (Figure 9.15).
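
Sliga’s Saturation curve move, pulling down the yellow vector, can be approximated in a few lines. The sketch below is my own illustration (using matplotlib’s HSV conversion, not anything inside Apple Color); it reduces saturation only for hues near yellow, with a smooth falloff so neighboring hues are left alone.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def desaturate_hue(img, center_deg=60.0, width_deg=30.0, amount=0.5):
    """Pull saturation down around one hue, like dragging the yellow point of
    a hue-vs-saturation curve.  `center_deg` is the target hue (yellow is
    roughly 60 degrees), `width_deg` the falloff width, and `amount` how much
    saturation to remove at the center (0 = none, 1 = all)."""
    hsv = rgb_to_hsv(np.clip(img, 0.0, 1.0))
    hue_deg = hsv[..., 0] * 360.0
    # Circular distance from the target hue, then a linear falloff to zero.
    delta = np.abs(hue_deg - center_deg)
    dist = np.minimum(delta, 360.0 - delta)
    weight = np.clip(1.0 - dist / width_deg, 0.0, 1.0)
    hsv[..., 1] *= 1.0 - amount * weight
    return hsv_to_rgb(hsv)

# e.g. toned_down = desaturate_hue(frame, center_deg=60, width_deg=35, amount=0.4)
```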

image

Fig. 9.11 The shot Sliga used for his match.

image

Fig. 9.12 The primary correction.

image

Fig. 9.13 The main primary data.

image

Fig. 9.14 The data from the Advanced tab of the Primary room.

“When you’re actually doing matching like this, you end up trying a lot of things to make it happen because you’re forcing one into the other. And so this one here, by pulling the yellow out, it got our stone [the foundation of the lion] a lot closer, except we’ve got a little bit of color up in there that’s different,” he explains, pointing to the building façade between the lion and the first doorway. “So I came back up here figuring I could get away with a gain change, which gets it in the ballpark. And if we wipe between the two just to see where we’re at—the building itself is pretty darn close.”

image

Fig. 9.15 (a) The first secondary correction, pulling yellow out of the façade. (b) Data for the first secondary correction.

Using another secondary, Sliga tries to bring the color of the lions closer. He positions the split screen diagonally across the lion, then goes to the Saturation curve and moves the cyan point, moving it up and down radically. “That’s the wrong point. That ain’t gonna work, so I’ll try the green point.” Sliga lifts and lowers the green saturation point radically, seeing that it is affecting the right portion of the image. He settles into a lowered saturation on green. “Maybe somewhere in that area,” he determines. “And if we go to the still store and wipe across … I pulled too much out, but we can go back to that and raise it a little.” A few minor tweaks to the Saturation curve later, and his match of the lion is complete (Figure 9.16).

Sliga then adds another correction to match the sky, pulling a luminance HSL qualification and a circular garbage matte (Figure 9.17b).
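
Conceptually, that third secondary is a luminance key intersected with a circular garbage matte. Below is a hedged sketch of the idea; the threshold, circle placement, and Rec. 709 luma weights are my assumptions, not values from Sliga’s session.

```python
import numpy as np

def sky_matte(img, luma_min=0.8, center=(0.5, 0.2), radius=0.5, softness=0.15):
    """Build a matte from a luminance qualifier limited by a circular garbage
    matte: qualify the bright sky, then restrict the key to a soft-edged
    circle near the top of frame.  Returns per-pixel strength in 0-1."""
    h, w = img.shape[:2]
    luma = img @ np.array([0.2126, 0.7152, 0.0722])           # Rec. 709 luma
    luma_key = np.clip((luma - luma_min) / max(1e-6, 1.0 - luma_min), 0.0, 1.0)

    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx / w - center[0], yy / h - center[1])   # normalized distance
    circle = np.clip((radius - dist) / softness, 0.0, 1.0)    # soft-edged circle

    return luma_key * circle

# The matte then limits a correction, for example:
# matte = sky_matte(frame)[..., None]
# result = frame * (1 - matte) + graded_sky_version * matte
```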

image

Fig. 9.16 (a) The second secondary, pulling saturation out of the cool image. (b) The data for the second secondary.

image

Fig. 9.17 (a) The third secondary, pulling warmth out of the sky. (b) The matte and vignette for the third secondary.

Neal Kassner takes the next crack at matching the lions. His initial corrections are to the cool lions. “All right, the first thing I’m going to do is try to balance the blacks a little bit. And I’m looking at a combination of the waveform and the vectorscope. I’m going to warm up the gammas a little. Now, I don’t know what kind of stone that is [referring to the façade of the building], but I know it’s not as yellow as one or as blue as the other, so I’m just going to try to make it neutral. I’m also going to wind down the overall gain and see what that does to the sky.”

image

Fig. 9.18 The diagonal split between the “base” shot and the corrected “cool” shot. The wipe goes just under the red Ansel Adams banner and above the leftmost arched doorway. Notice the “cut” high on the lion’s legs.

Then he switches over to the warmer lion shot. “So now what I have to do is go the other way with it. And I’m going to take some of the warmth out of the low lights and also out of the midrange. Okay, so this is where it’s getting there, but it’s not close, so I’ll cut back and forth between this and the still. What I’m going to do is match the luminance using the waveform.”

In order to get the contrast ratios right between the two images, Kassner plays the gamma and highlights off each other: he brings gamma down and highlights up, then brings the shadows up, then pushes shadows up and highlights down at the same time before the correction settles into a comfortable range for him. “Okay, the luminance is closer than it was. Now I’m going to concentrate on color.

“Now I’m running into a situation where I like this better than that,” he says, preferring his semiadjusted shot 2 over his completed correction on shot 1.

Kassner starts his grade over using grade 3 in the timeline. I ask him if he’s trying to get the shapes in the waveform to line up. “Exactly. So now there’s a color cast … a little cyan. It appears to be mostly in the gammas. Now I’m looking at the vectorscope, just trying to match the shapes a little better. It almost looks like there’s a black stretch going on in this grade. This,” he says, pointing at the shadow area on the waveform monitor, “up to here is a fairly close match, color aside, just luminance. But then, this,” he points at the high midtones, “is getting stretched out more. If I just go and bring up the highlights, it’s also dragging up the lowlights with it. So if I work the two against each other … now we’re getting someplace. That’s actually a little bit closer.” Kassner has been moving the gammas down and the highlights up at the same time. “Then there’s just a question of trimming the colors. A little overall hue correction would be a good cheat.”

image

Fig. 9.19 Kassner chose to balance to the cooler Art Institute shot entirely in primary. This is the primary grade for the warm or “base” shot.

image

Fig. 9.20 Data from the “base” image in the Primary room.

image

Fig. 9.21 “Cool” image graded slightly in primary.

image

Fig. 9.22 Data for the “cool” image in Primary room.

image

Fig. 9.23 Split between shots, “base” image is on the bottom.

Janet Falcon of Shooters Post is next up. She starts in on the correction with her eyes almost completely on the video monitor.

A lot of color correction is about defining edges and contrast and being able to see what you want to see.

– Janet Falcon, Shooters Post

“I’m just trying to get it somewhere close to a starting point before I bother going back and forth.” Falcon wipes between the “correct” and cool lion, deciding, “There’s way too much red in the blacks. I need to brighten this up. This one (in the façade) still looks blue. There’re actually variations of color (across the front of the building), cooler to warmer shades. And this one looks like it’s painted all one color … flat. So this one doesn’t look as good to me. This one looks more realistic because there are different shades. There are lighter areas and darker areas. This one looks flat, so I’m trying to make this one look like that. So basically I need more yellow in the highlights because there’s too much blue in the highlights. Then put a little blue back in the lowlights.” Falcon points at the middle of the doorway arch closest to the lion, commenting, “I’m going back and forth between looking here for blacks, here for gammas,” as she points to the top edge of the same archway. “And up here for whites,” she says, pointing to the far right square of the building façade above the far right archway. “It’s a little pinker.”

Falcon points out that as you get closer to getting a match, it’s easy to forget which side of the correction you’re adjusting. She explains that on a DaVinci, when you’re wiped over a still, you see a green bar so you know you’re on the reference frame. See Figures 9.24–9.27.

image

Fig. 9.24 Primary correction on the warm, “base” image.

image

Fig. 9.25 Data from the primary correction to the “base” image.

image

Fig. 9.26 Primary correction on the “cool” image of the Art Institute.

image

Fig. 9.27 Data from the primary correction to the “cool” image.

image

Fig. 9.28 Secondary correction to the hue and saturation of the lion.

With the buildings matching fairly closely, Falcon pulls a secondary HSL key for the lion and matches it as well (Figures 9.28–9.30). She also adds a simple HSL qualification to both images to correct for the clipped sky. Those corrections are nearly identical to those done by the previous colorists in this chapter.

Falcon offers a final tip at the end of her matching session: “A lot of color correction is about defining edges and contrast and being able to see what you want to see.”

image

Fig. 9.29 Data for the secondary correction.

image

Fig. 9.30 Split screen with “cool” Art Institute image on top.

Matching Scene to Scene

This footage is from a project I edited. In that project these two shots—the interview scene of the woman (Figure 9.33) and the B-roll shot of her with her son (Figure 9.31)—were not cutting together well. I tried trimming the shots one way and then the other with no success. Finally, I decided to color correct the shots so that they’d match better. That was the solution. Figuring that our panel of experts could match them better than I could, I included the scenes in the sessions for the book.

Bob Sliga takes the first crack at matching the scenes. He starts by correcting the interview scene first. “I’m bringing the whites down out of clip. Then I balanced my blacks and brought them down a bit, which got me to here (Figure 9.35). I forgot I even did it. Sometimes it seems like my hands think for me.”

image

Fig. 9.31 Source footage of B-roll shot. Image courtesy Exclaim Entertainment.

image

Fig. 9.32 Tektronix WVR7100 screengrab. Upper left: RGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: RGB Parade zoomed in to show black balance.

image

Fig. 9.33 Source footage of interview shot. Image courtesy Exclaim Entertainment.

image

Fig. 9.34 Tektronix WVR7100 screengrab. Upper left: RGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: RGB Parade zoomed in to show black balance.

image

Fig. 9.35 (a) Primary correction to interview scene. (b) Data for Primary room.

With a basic correction to the interview scene (Figure 9.35), Sliga turns his attention to the shot of the mother and son. “Okay, so now we come over here. I’m going to balance him out too. I’m just going to pull the blacks down to zero. I’m going to bring the overall warmth of this down a little bit in the gain because we see how high that is,” Sliga remarks, referring to the red channel in the RGB Parade scope being much brighter than blue or green (Figure 9.32).

A lot of times I’ll advance the clip to the next scene and then back it up one frame to see how the shot ends.

– Bob Sliga

“I’m going to choose to do it this time on the individual channel. It’s a little easier.” Sliga brings the gain of the red channel down, but not so much that it is perfectly even with blue and green. “It is still slightly higher, which it should be because the image is mostly skin tone,” he explains. Then he plays the shot through (Figure 9.36). “A lot of times I’ll advance the clip to the next scene and then back it up one frame to see how the shot ends.”

Sliga returns to the interview shot. “This shot is a lot warmer than the other shot. I could pull the warmth out. People generally look better warmer, so I try to use the warmth to its advantage. I’m going to try to richen her up first, then I’ll match the other to this.

“I’m going to keep what I’ve got here and go to the secondary room. The reason is that if I like where I’m at in primaries, but I want to do some more, then I don’t have to sacrifice what I’ve already done. There’s more than one way to color correct with this software and it all depends on the type of job and the type of work that you’re doing. I’m going to richen her up a bit by pulling the gammas down a bit. I’m going to warm it up a tad. The black is looking nice and black and we’ve got a nice clean white back here. I just richened it up a bit, okay? And by doing that, the saturation kind of came into play on its own … I added more saturation by just darkening it down. So I’m going to keep this and hit Control-I, which will make a still of this.”

image

Fig. 9.36 (a) Primary correction to B-roll scene. (b) Data for Primary room.

image

Fig. 9.37 Data for Advanced tab of Primary room.

Sliga continues with his explanation of his workflow. “Then I’m going to call up the other scene and cut back and forth to the still. I’m more of a cut person instead of using a wipe. First thing I’m going to do is richen this up, warm it up a little here. I’m going to leave primaries where they’re at and I’m going to come into secondaries.”

Sliga enables secondary, but doesn’t qualify anything at all, using it as another layer of primary. “I’ll richen it up a bit,” he continues, pulling down gamma. Then he warms the image by dragging the midtone wheel toward red/yellow. “Now I’ll kick up the whites a bit, bring my blacks back down. That’s going to be a little too warm, I have a feeling,” he speculates as he brings red back down in the mids. “Let’s just see where this is at,” he says as he hits Control-U, checking his match. “So I’ve made this a little bit too warm in comparison,” he says, altering the shot slightly (Figure 9.39).

TIP

Sliga explains his workflow for setting his “hero grade”: “I copy it to grade number one. There’s a reason why I use grade one. It’s a quick check. Because we have four grades available, what I’ll do is I’ll always drag the real grade that I want into grade one. And if I go to the Final Print room and choose add all, I can see instantly that I didn’t load the correct grade in because there’s a column in Final Print that shows which grade has been selected.”

image

Fig. 9.38 (a) Secondary correction used “unqualified” as essentially an additional layer of primary correction. (b) Data from Secondary room.

image

Fig. 9.39 (a) Secondary correction used “unqualified” as essentially an additional layer of primary correction. (b) Data from Secondary room.

CBS’s Neal Kassner is next up with this match. Kassner starts out by still storing the shot of mother and son, then begins correcting the interview shot. Unfortunately, the grades for this match were not saved, so I don’t have imagery to accompany the narrative, but I felt there was some good information in having him talk through the match.

“First thing I’m going to do is bring the blacks down a little bit and the gammas down a lot. Bring up the saturation. Move the gamma toward red. Just to kind of get it roughed in. Maybe bring the highlights down just a little bit to protect the window. She needs a little more red in the highlights, I think. Maybe drop the master gamma a little bit to give it a little more contrast. Skin tone is a little bit different on the vectorscope. Now I’ve got it closer on the vectorscope, but it looks wrong. So I’m not going to go with that. The real warm tone in the background elements are a little misleading,” Kassner explains.

I point out that there are a lot of colors in the shot that are similar to flesh tones. “Yeah,” he agrees, “it’s pretty monochromatic. What I’m going to do now is something I should have done in the first place, and that is balance the blacks a little better, ’cause I’m assuming she’s wearing a black dress.

“I’m going to use the shot of her and the little boy as what I’m matching to. I’m just going to keep clicking back and forth between that shot and the one I’m working on. I’m using just the primary controls on this. And at this point mostly the gamma. And every once in a while, I’ll glance over at the vectorscope more than the waveform monitor. So I’m playing with the gain and the gamma just to get the skin tones to look fairly close. There’s different lighting so the contrast is going to be a little different. That’s a closer match than it was. We still have a lot of yellow stuff in there. The door and the lampshade are probably where that’s coming from.”

I ask Kassner if it’s times like this that he has to depend more on his eye than the scopes. “Exactly,” he confirms. “At this point, I’m just relying mainly on the picture monitor to get her face looking close in the two-shot. And once again it’s the gammas and the gains where I’m doing most of my work here. But what I’m a little bit concerned with is that in the interview shot her cheeks are starting to blow out. So I’ll back off on that, but then the overall luminance of her face is a little darker. So it’s really just a question of walking back and forth until you get it to look right.”

Matching When Lighting Changes in a Scene

These shots (Figures 9.40 and 9.42) are from a documentary I produced about my family’s bicycle trip across the United States. During the interview, which I shot on BetaSP without lights, the sun started to go down, so the beginning and end of the interview look somewhat different. The color temperature didn’t actually change much, but the contrast as it got closer to dusk definitely changed.

image

Fig. 9.40 Source footage of interview from early in the day.

image

Fig. 9.41 Tektronix WVR7100 screengrab. Upper left: RGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: RGB Parade waveform zoomed in to show black balance.

image

Fig. 9.42 Source footage of interview from later in the day.

image

Fig. 9.43 Tektronix WVR7100 screengrab. Upper left: RGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: RGB Parade waveform zoomed in to show black balance.

image

Fig. 9.44 Early interview footage with primary correction.

image

Fig. 9.45 Data from Primary room.

image

Fig. 9.46 Later interview footage with primary correction.

image

Fig. 9.47 Data from Primary room.

Nolo Digital’s Mike Matusek starts by correcting the first shot from the interview, and then he corrects the second shot to match the first. “This would be a combination of midtones and blacks that I’d bring down. Midtone may not have enough range,” he says as his eyes go back and forth between the scopes and the monitor while he adjusts.

I ask him what the challenge is in getting these shots to match. “I think you said that the sun was out in this first image and then it started to go down in this second image,” he replies. “The first shot has more contrast and the highlight of his right side is up, so I’ll probably put a window on the left side. Probably increase the contrast on [the second shot] to try to get them closer, then just match the flesh tone” (Figures 9.44–9.47).

Matusek puts a window on the left side and lowers the brightness of the background and the face highlight. “See? That’s all it really needed” (Figures 9.48 and 9.49).
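
Matusek’s window is essentially a soft-edged mask that confines a brightness change to one side of the frame. The sketch below is a generic illustration of that idea, not Nolo Digital’s or any system’s implementation; the edge position and gain values are placeholders.

```python
import numpy as np

def soft_left_window(shape, x_edge=0.5, softness=0.1):
    """A simple 'power window' covering the left of frame: 1.0 inside the
    window, 0.0 outside, with a soft horizontal falloff centered at `x_edge`
    (both expressed as fractions of frame width)."""
    h, w = shape[:2]
    x = np.linspace(0.0, 1.0, w)
    falloff = np.clip((x_edge - x) / softness + 0.5, 0.0, 1.0)  # 1 left, 0 right
    return np.tile(falloff, (h, 1))

def darken_inside(img, window, gain=0.85):
    """Lower brightness only where the window is active."""
    w3 = window[..., None]
    return img * (1.0 - w3) + (img * gain) * w3

# e.g. shaded = darken_inside(frame, soft_left_window(frame.shape, x_edge=0.45), gain=0.8)
```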

image

Fig. 9.48 (a) Early interview footage with secondary correction. (b) Data from Secondary room.

image

Fig. 9.49 The split shows how close skin tones and background tonality match (and this Picasso-like image is sure to make my brother laugh).

Matching AND Saving!

Here’s a pretty impressive “save” of a horribly overexposed and poorly white balanced shot. Alpha Dog’s Terry Curren made great use of Avid Symphony’s Channel Blending capabilities. I’d warned Curren that I had a tricky shot for him, and he knew it as soon as he saw it. “This is the bad guy!” Curren laughs. “This is the one we were waiting for (Figure 9.52). Well, the first thing is, it’s clipped. It’s obviously clipped up there, so that’s a drag.”

“Obviously, one of the advantages we have is that you can go into the channels and look at individual channels and see … now the blue’s (Figure 9.56) actually got a nice image compared to the green and the red (Figures 9.54 and 9.55), which are really messed up. The red is actually way blown out, so I will knock the red down and build some of that channel back with the blue. And the same thing with the green channel. Then I’ll add a little bit more blue back in the mids. Even though the whites and the blacks end up even, the mids have this little angle to them (slightly higher reds, mid greens, lower blues). I don’t know why, but it just works out that way.”
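
What Curren describes, knocking a blown channel down and rebuilding it from a healthier one, is a channel blend. The following is a minimal sketch of the operation, not Symphony’s actual Channel Blending code, and the mix weights are illustrative guesses rather than Curren’s settings.

```python
import numpy as np

def blend_channel(img, target, donor, mix):
    """Rebuild a damaged channel by mixing in a healthier one: the output
    `target` channel becomes a weighted mix of the original target and the
    donor channel.  Channel indices: 0 = R, 1 = G, 2 = B."""
    out = img.copy()
    out[..., target] = (1.0 - mix) * img[..., target] + mix * img[..., donor]
    return np.clip(out, 0.0, 1.0)

# Knock the blown-out red down by rebuilding it partly from the cleaner blue,
# then do the same for green (weights are guesses for illustration only):
# fixed = blend_channel(frame, target=0, donor=2, mix=0.6)
# fixed = blend_channel(fixed, target=1, donor=2, mix=0.4)
```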

image

Fig. 9.50 Source footage of Chicago Water Tower “base” image.

image

Fig. 9.51 Tektronix WVR7100 screengrab. Upper left: YRGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: vectorscope zoomed 5x.

image

Fig. 9.52 Source footage of Chicago Water Tower poorly white balanced and overexposed.

image

Fig. 9.53 Tektronix WVR7100 screengrab. Upper left: YRGB Parade. Upper right: composite waveform. Lower right: vectorscope. Lower left: vectorscope zoomed 5x.

image

Fig. 9.54 The red channel of the “base” water tower image.

image

Fig. 9.55 The green channel of the “base” water tower image.

image

Fig. 9.56 The blue channel of the “base” water tower image.

Curren switches to the blown-out, poorly white balanced version of the shot. “Now comes the fun. Once again, I’m just going to get down out of the high areas first,” Curren explains, pulling the whites down on the master curve. “Now, you can check the channels and it’s exactly inverted from the other one. The green channel is the hot one (Figure 9.58), and the blue channel (Figure 9.59) is also blown.”

“I’m going to do the same thing I did on the other one in channels, only in the opposite direction. Now we know the red channel is the good one” (Figure 9.57), he explains as he blends red with the blue and green channels.

“Still got too much blue,” he states as he goes to the red curve and pulls the high/mid reds up a bit. “This is one of those cases where you have to start messing up the other one to get them to match.” He adds a chroma blur from GenArts’ Sapphire plug-ins to better match the difference in contrast between the two shots.

I ask what the point of the blur was in matching the shots. Curren explains, “Basically, I used a chroma blur and I went in and blurred vertically because I was seeing all the sharp edges. If we go in and look at the channels individually, you can see these hard edges in here. But the blue is not. The blue is a little softer. So I did a vertical blur on the two channels that were nasty, because the hard edges aren’t going that way [horizontally].”
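
Curren’s fix can be approximated as a one-dimensional blur applied vertically to just the damaged channels. The sketch below is a rough stand-in for the effect he describes, not the GenArts Sapphire plug-in; note that blurring RGB channels directly also softens some luminance detail, whereas a true chroma blur works on color-difference channels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def vertical_channel_blur(img, channels, sigma=2.0):
    """Blur the selected channels vertically only (axis 0), leaving the clean
    channel untouched.  `channels` are indices (0 = R, 1 = G, 2 = B) and
    `sigma` is the blur strength in pixels; both values are placeholders."""
    out = img.copy()
    for c in channels:
        out[..., c] = gaussian_filter1d(img[..., c], sigma=sigma, axis=0)
    return out

# e.g., blur whichever two channels show the hard edges:
# softened = vertical_channel_blur(frame, channels=(1, 2), sigma=3.0)
```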

image

Fig. 9.57 The red channel of the poorly white balanced, overexposed water tower image.

image

Fig. 9.58 The green channel of the poorly white balanced, overexposed water tower image.

image

Fig. 9.59 The blue channel of the poorly white balanced, overexposed water tower image.

image

Fig. 9.60 Water tower “base” shot corrected in Avid Symphony Nitris Primary.

image

Fig. 9.61 Water tower “base” shot in Channel Blending tab of Symphony Nitris.

image

Fig. 9.62 Scopes of the correction.

image

Fig. 9.63 Blue balanced water tower shot corrected in Avid Symphony Primary.

image

Fig. 9.64 Blue-balanced Water Tower shot in Channel Blending tab of Symphony Nitris (image does not include GenArts’ Sapphire blur).

image

Fig. 9.65 Scopes of correction.

DaVinci Resolve has a tool similar to Avid Symphony’s Channel Blending tab. In Resolve, the RGB Mixer tab (Figure 9.66) is on the Color screen in the Primary section. If you are interested in attempting to match Terry’s Symphony correction, you can accomplish it in the Lite version of Resolve.
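
Whatever the interface, an RGB mixer boils down to a 3×3 matrix applied to every pixel: each output channel is a weighted sum of the three input channels. This generic sketch uses illustrative weights, not a Resolve or Symphony preset.

```python
import numpy as np

# Each row defines one output channel as a weighted sum of the input channels.
# The identity matrix would leave the image untouched; these rows rebuild red
# and green partly from blue (weights chosen only to illustrate the idea).
mix = np.array([
    [0.4, 0.0, 0.6],   # output R = 0.4*R + 0.6*B
    [0.0, 0.6, 0.4],   # output G = 0.6*G + 0.4*B
    [0.0, 0.0, 1.0],   # output B = B
])

def rgb_mix(img, matrix):
    """Apply a channel-mix matrix to a float RGB image of shape (H, W, 3)."""
    return np.clip(img @ matrix.T, 0.0, 1.0)
```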

image

Fig. 9.66 DaVinci Resolve’s RGB Mixer tab works similarly to Avid Symphony’s Channel Blending tab.

Matching Conclusion

So many of the guys—and Janet—have such a depth of experience that most of them did these matches very much by eye. Less experienced colorists will find that one of the greatest ways to match shots is by using the split screens and the RGB Parade waveform monitors, matching the various “shapes” in the trace between each color channel or cell.

Another good way to assist your eye in matching is rapidly cutting back and forth. At first, you can look at the overall image and try to ascertain the differences, but as you get closer, your eye will have to isolate various tonal ranges as the shots cut back and forth, so that you can determine whether the thing making one shot look redder than the other, for example, is red coming from the shadows, the midtones, or the highlights.
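
If you want a numeric crutch while your eye is still training, the same comparison can be made by sampling each channel at shadow, midtone, and highlight percentiles in both shots. The sketch below is my own illustration of that idea, not a tool from any of the colorists’ systems.

```python
import numpy as np

def tonal_range_report(shot_a, shot_b):
    """Compare two float RGB frames per channel at shadow, midtone, and
    highlight percentiles: a numeric stand-in for eyeballing the RGB Parade
    'shapes' while cutting back and forth between the shots."""
    rows = ((5, "shadows (5%)"), (50, "midtones (50%)"), (95, "highlights (95%)"))
    for pct, name in rows:
        a = np.percentile(shot_a.reshape(-1, 3), pct, axis=0)
        b = np.percentile(shot_b.reshape(-1, 3), pct, axis=0)
        d = b - a
        print(f"{name:>16}:  R {d[0]:+.3f}  G {d[1]:+.3f}  B {d[2]:+.3f}")

# A positive red difference in the shadows row, for example, suggests the second
# shot's extra redness is coming from the blacks rather than the highlights.
```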

And for a completely different viewpoint, Company 3’s Stefan Sonnenfeld questions the need to match at all, saying, “I have people who will not let me use stills to match. I rarely use film stills. There are a lot of people who will put up a still, take the still reference and just meticulously match all throughout. I do not even use stills. Now and then I do because people insist on it, but most of the time I do my thing and then watch it and watch it in context. That is what it is all about. It is not just technical perfection. This is where a lot of people fall short or flat. There is a lot more to it. There are guys like Michael Bay who will literally get mad at me if I start to try and match up things. It is not realistic. When you are in an action scene and there is smoke and fire and car crashes and guns and this and that it is haphazard. It is craziness. Why would every little piece of image have to look the same? It is boring when it is one canvas. It is one-dimensional, not three-dimensional. But once in a while, it is appropriate.”
