Key Terms

Additive Color Mixing

Aliasing

Anti-Aliasing

ATSC (Advanced Television Systems Committee)

CMYK Color Space

Color Depth

Color Space

Compression

Cropping

DTV (Digital Television)

DVB (Digital Video Broadcasting)

EDTV (Enhanced Definition Television)

Field/Frame

Frame Rate

GIF

Graphics

HDTV (High-Definition Television)

Image

Image Optimization

Interlaced Scanning

JPEG

Moving Image

Native Resolution

NTSC (National Television System Committee)

PAL (Phase Alternating Line)

Pixel

Pixel Count

PNG

Progressive Scanning

Raster Image

Refresh Rate

Resampling

Resolution

RGB Color Space

Scaling

SDTV (Standard Definition Television)

SECAM (Sequential Color with Memory)

Still Image

Subtractive Color Mixing

TIFF

Vector Graphic

The whole is greater than the sum of its parts.

—origins unknown (but often attributed to Gestalt psychology, 1920s)

Chapter Highlights

This chapter examines:

  • The nature of computer graphics and digital images
  • The difference between raster image and vector graphic file formats
  • Raster image variables such as aliasing, color depth, color space, resampling, resolution, scaling, and compression
  • Display screen standards and scanning methods
  • Industry standards for Digital Television (DTV) and Digital Cinema

Graphics and Images

William Fetter coined the term computer graphics in 1960 while working as a graphic designer and art director for the Boeing Company. Today, this phrase describes processes in which pictorial data is encoded and displayed by computers and digital devices. Computer graphics are generally divided into two main categories: graphics and images.

Graphics

A graphic is any type of visual presentation that can be displayed on a physical surface such as a sheet of paper, wall, poster, blackboard, or computer monitor. Graphics are a product of human imagination and are typically created by hand or with computer-assisted drawing and design tools. Graphics include things like stick figures, symbols, numbers, drawings, typography, logos, web buttons, illustrations, and line art (see Figure 9.1). A graphic designer is a media arts professional who creates graphics for use in print or electronic media.

Figure 9.1 This assortment of graphics includes clipart, logos, line art, and symbols.

Images

An image is a two- or three-dimensional representation of a person, animal, object, or scene in the natural world (see Figure 9.2). Images can be still or moving. A still or static image is one that is fixed in time. A moving image—or time-based image—is one that changes over time. Photographs, maps, charts, and graphs typically fall into the still image category. Broadcast television, digital video, and motion pictures are examples of moving images. As we’ll see later in this chapter, moving images are made by presenting a sequence of still images in rapid succession to simulate motion. In reality, there are no moving images, only the optical illusion of movement created by the systematic and sequential presentation of static images.

Digital Imaging

A film camera uses a plastic strip, coated with a light-sensitive emulsion, to record a scene composed by the photographer. The film negative that’s produced is real and can be handled, held up to the light, or passed along “physically” to someone else. In the digital world, everything is reduced to a number, including graphics. For example, a digital camera uses an optical image sensor to convert light into electrons (electrical energy). The electrical signal is then converted into a digital recording and saved as a binary file made up of zeros and ones. While a binary file cannot be touched or held up to the light, it is every bit as real to the computer as the film negative is to the photographer (see Figure 9.3).

Figure 9.2 A large collection of photographic images.

Figure 9.3 Left: In analog photography, the negative is used to make photographic prints. Right: Because digital images are recorded numerically as binary data, they cannot be directly touched or viewed. A digital device or computer is required to render a binary image for output to a display screen or a printer.

Two methods are commonly used to digitally encode and display computer graphics. The first approach, called bitmap or raster imaging, uses pixels to define the structure of a digital image. Tiny squares of color, like tiles in a mosaic, make up the graphic. Depending on the number of pixels, or squares, per inch, you may not even notice them in the final digital or printed graphic. The second approach, called vector imaging, uses mathematically constructed paths to define a graphic’s visual structure. In other words, it records a graphic as a group of interrelated points, lines, curves, and shapes. Table 9.1 compares some of the differences between the two methods.

Table 9.1 Raster Images vs. Vector Graphics

  • Image structure: Raster images are defined using pixels, square picture elements each representing a single color value. Vector graphics are defined using paths, geometric regions defined by points, lines, curves, and shapes.
  • Editing software: Raster images are edited in Adobe Photoshop, GIMP, Corel Painter, and Corel Paint Shop Pro. Vector graphics are edited in Adobe Illustrator, Adobe Flash, CorelDRAW, and Adobe FreeHand.
  • Primary output channels: Raster is the best format for low-resolution electronic display and is used in digital photography, video, and web pages. Vector is the best format for high-resolution printing and prepress applications and is also used for rendering 2D or 3D computer animation.
  • Ideal for: Raster suits images with lots of color information and complexity. Vector suits simple drawings, line art, clipart, and logos.
  • Scalability: Raster images are resolution-dependent (a fixed number of pixels), which means image quality deteriorates when enlarged. Vector graphics are resolution-independent, which means images can be resized without losing detail or clarity.
  • Common file formats: Raster formats include .bmp, .gif, .jpg, .png, and .tif. Vector formats include .eps, .svg, .swf, and .wmf.
  • File size: Raster files are typically large, but they can be compressed to reduce file size. Vector files are relatively small, as vector encoding is highly efficient.
Figure 9.4 Thirty-four red pixels are used to produce this bitmap graphic of a capital letter H.

Raster Images

A raster image is formed by dividing the area of an image into a rectangular matrix of rows and columns comprised of pixels (see Figure 9.4). A pixel, short for picture element, is a square area of light representing a single point in a raster image. Every pixel in a raster image is exactly the same size and contains a single color value that’s typically stored as a 24-bit string of binary data. The total number of pixels in a raster image is fixed. In order to make a raster image physically larger, more pixels have to be added to the raster matrix. Likewise, pixels need to be discarded when making a raster image smaller. The width and height of a raster image are determined by how many pixels each row and column contains.
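
The row-and-column structure described here maps naturally onto a two-dimensional array in software. As a minimal sketch (using Python with the NumPy library, and made-up dimensions and colors), a raster image in memory might look like this:

import numpy as np

# An 8 x 8 raster matrix: 8 rows (scan lines) by 8 columns of pixels.
# Each pixel holds a single color value: 8 bits each for red, green, and blue.
image = np.zeros((8, 8, 3), dtype=np.uint8)

image[2, 3] = (255, 0, 0)   # the pixel at row 2, column 3 becomes pure red
image[:, 0] = (0, 0, 255)   # the entire first column becomes pure blue

print(image.shape)          # (8, 8, 3): height, width, and color channels

Resizing this image would mean building a new array with more or fewer pixels, which is exactly the resampling problem discussed later in the chapter.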

On their own, pixels are relatively meaningless, but when combined with hundreds, thousands, and even millions of other pixels, complex patterns and photorealistic images can be formed. German psychologist Max Wertheimer (1880–1943) developed the concept of perceptual grouping to explain the human tendency to perceive whole shapes and patterns from an arrangement of smaller particles of visual information. This concept is commonly expressed as “the whole is greater than the sum of its parts.” Take a look at the image in Figure 9.5 to see perceptual grouping at work.

The mosaic facade in this photograph is made up of thousands of individually colored tiles carefully arranged to form an exquisitely detailed composite image. It takes very little effort on the part of the viewer to overlook the individual pieces of glass and stone used by the artist. Instead, we’re much more inclined to perceive the scene holistically, forming the impression intended by the artist’s careful and purposeful arrangement.

Figure 9.5 From a distance, the individual pixels forming this image are barely perceptible to the naked eye. In fact, our brain works hard to achieve and maintain a holistic impression. Up close, however, we can see that many small pieces of visual information went into forming this 19th-century mosaic of Christ, the good shepherd.

Source: Historien d’art (own work) [public domain], via Wikimedia Commons.

Figure 9.6 Digital image pixels are much smaller than the bits of tile used in mosaic art. You have to zoom in really close on this image in order to see the pixels used to form it.

This technique of using tiny bits of colored material to form a composite visual impression dates back to about 3000 BC and is still used today in the print and electronic media industries to convey visual information. In Figure 9.6 we see a digital photograph of a fruit and vegetable basket. This image too is constructed of individual colored tiles. Millions of them, in fact! These tiny pixels can be seen by zooming in on the image using a photo-editing program such as Adobe Photoshop.

Resolution

Resolution is the term most often used to describe the image quality of a raster image and refers to the size and quantity of the pixels the image contains. In the illustration in Figure 9.7, the first drawing of the triangle has only three dots, making it a low-resolution image.

As more picture elements are added, the quality of the image improves considerably, moving it along a continuum from low to high resolution.

Simply put, the more pixels you have in a given area (for example, in a square inch), the more information you have, and the higher the resolution of the image. In the example in Figure 9.8, artist Peter Roche used nearly 10,000 Jelly Belly jellybeans to form this portrait of President Ronald Reagan (left). By comparison, the official White House portrait of President Reagan (right) consists of more than four million pixels.

Jellybeans are a creative medium for artistic expression, but they are quite large compared to the pixels used to form a digital image. Because of this, the image detail in the jellybean artwork pales in comparison to the resolution of the actual photograph. The pixels in the digital photo are so small that they are undetectable with the naked eye.

Figure 9.7 As you move from left to right, this sequence of graphics progresses from low resolution to high resolution as more visual detail is provided. Interestingly, it only takes three dots to trick the eye into perceiving the shape of a triangle. Remember the principle of psychological closure, which was discussed in chapter 4?

Figure 9.8 The first portrait of President Ronald Reagan (left) was commissioned by the Jelly Belly Candy Company. It was later donated to the Ronald Reagan Presidential Library in Simi Valley, California.

Source: Courtesy of the Ronald Reagan Presidential Foundation and the Jelly Belly Candy Company; artist: Peter Roche.

Tech Talk

Color Space We refer to natural sunlight as white light because it appears to the human eye to be colorless. But if you’ve ever held a prism in front of a window on a sunny day, or if you’ve seen a rainbow, you know it’s possible to separate white light into a dazzling display of various colors. As light travels through a prism, it’s refracted (bent), causing the beam of white light to break apart into its component color wavelengths (see Figure 9.9).

As you learned in grade school, you can mix primary colors to get any color you want. Just add red, yellow, and blue together for any purpose, right? Not exactly. First, those colors are traditional in art, but the pigments used in printing need to be exact, and printers use somewhat different colors. Second, printing relies on a process called subtractive color mixing: the pigments absorb colors, so when you put all the colors together, you theoretically get black: each pigment absorbs a different range of light, so no light is reflected back to your eyes. Computer and television displays, on the other hand, emit light. White light is formed by adding all the colors of the rainbow together. In the absence of light, the image or pixels on an electronic display appear black. This process is called additive color mixing.

RGB Color Model (or Mode)

The primary colors of light are red, green, and blue (RGB). By adjusting the intensity of each, you can produce all the colors in the visible light spectrum (see Figure 9.10). You get white if you add all the colors equally and black by removing all color entirely from the mix. If you were to look at a monitor such as an LCD (liquid crystal display) under a microscope, you’d see that each pixel really displays only the three primary colors of light. How these colors are arranged in a pixel depends on the type of monitor, but in an LCD, they are arranged as stripes. In additive color mixing, red and green make yellow. If you fill a graphic with intense yellow in Photoshop, the pixels really display stripes of intense red and green, with no blue. The individual points of color are tiny, so our brains add the colors together into yellow. If you are designing for electronic display, you will probably create RGB images (see Figure 9.11, left).

Figure 9.9 The primary and secondary colors of white light become visible when refracted by a glass prism.

CMYK Color Model (or Mode)

In printing, the primary colors are cyan, magenta, and yellow (CMY) (see Figure 9.11, right). You produce colors by combining pigments in paints, inks, or dyes. If you combine equal amounts of each primary color, you should get black, right? At least in theory! To help produce “pure black” (as opposed to just darkening the imprint), printers add premixed black. To print a full-color image using the CMYK process, each page goes through four presses, each inked with a primary color or black pigment. The letter K refers to black and comes from the term key plate, a black printing plate. If you are designing for print, you will likely create CMYK images. The challenge, of course, is that you’ll be creating them on an RGB color monitor!

Color Depth

The term color depth refers to how many different shades of color a computer or device can utilize when capturing or rendering a digital image. When you capture an image with a digital camera, camcorder, or scanner, the device encodes light into electrical energy and then into bits for storage in a format the computer can process and understand. Computers reverse the process by decoding bits back into electrical energy and into light impulses that are then rendered on a digital display. The more bits you assign to each color sample or pixel, the greater its color depth will be. For example, in a 24-bit RGB image, 24 bits are assigned to each pixel—8 bits for the red channel, 8 for green, and 8 for blue—producing a large color palette of roughly 16.8 million colors. Twenty-four-bit color is often referred to as true color because it surpasses the number of colors the human eye can effectively discern. People are limited to a palette of roughly 10 million colors (see Figure 9.12).
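
A few lines of Python confirm the arithmetic behind the 24-bit palette and show how a single color is written as the six-digit hexadecimal code described in Figure 9.10 (a minimal sketch using only the standard library):

shades_per_channel = 2 ** 8              # 8 bits per channel = 256 shades
palette_size = shades_per_channel ** 3   # three channels: red, green, blue
print(palette_size)                      # 16777216, roughly 16.8 million colors

# A 24-bit RGB color is often written as a six-digit hexadecimal code,
# two digits per channel. An intense yellow, for example:
r, g, b = 255, 255, 0
print('#{:02X}{:02X}{:02X}'.format(r, g, b))   # prints #FFFF00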

Figure 9.10 In Adobe Photoshop, the color picker is used for selecting or creating colors. In RGB color space (shown here), the amount of red, green, and blue in a color determines its particular hue. Each of the more than 16 million colors in the 24-bit RGB color palette is identified by a unique hexadecimal color code—a six-digit combination of letters (A–F) and numbers (0–9).

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Some display systems can render an even higher color depth (up to 48 bits), but the 24-bit color standard is currently the most common and will give you a sufficiently large palette for multimedia applications.

Figure 9.11 RGB color space is used in multimedia design (Web, animation, television, etc.), while CMYK color space is used in four-color printing.

Figure 9.12 The possible color combinations for any pixel in an eight-bit graphic (or a 24-bit display). If you follow the arrows through every possible data combination, you’ll get 256 (or 2^8) possibilities for each color channel. Combining channels—256 possibilities for red × 256 for green × 256 for blue, or 2^24 combinations—you’d have about 16.8 million possible combinations.

Source: Susan A. Youngblood.

Defining the Raster Image

Like the tiny bits of tile in a mosaic, a pixel is the smallest definable element of a raster image. Because of this, image editors rarely have to deal with physical units of measurement like inches, centimeters, and picas. Instead, editors usually measure graphics in pixels, and pixel count and density determine the physical size and quality of an image.

Pixel Dimensions

When we talk about pixel dimensions, we’re not talking about the size of an individual pixel. Instead, we use the term pixel dimensions to describe the size of a raster image, expressed as the number of pixels along the x-axis (width) by the number of pixels along the y-axis (height). For example, an 800 × 600 pixel image contains 800 pixels across the image from left to right and 600 pixels across the image from top to bottom.

Pixel Count

Pixel count is the total number of pixels in a raster matrix. To determine the pixel count, multiply the horizontal and vertical pixel dimensions. The 30 × 18 pixel image in Figure 9.13 has a pixel count of 540 pixels.
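
Pixel count is simple enough to check in a line or two of Python; here is the arithmetic for the Figure 9.13 example:

width_px, height_px = 30, 18
print(width_px * height_px)    # 540 pixels in the raster matrix
print(3648 * 2736)             # 9980928: about 10 megapixels (the camera example below)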

Pixel Density or Resolution

We express the pixel density or display resolution of a raster image in pixels per inch (ppi)—this is pixels per linear inch (across or down), not square inch. Although the pixels in a given electronic display are a fixed size, the physical size of an image’s pixels can vary from image to image. The more pixels you have per inch, the smaller each pixel will be.

Figure 9.13 Pixel count is determined by multiplying the number of pixels across a digital image by the number of pixels high.

Figure 9.14 This chart displays the resolution sizes and compression settings for the Canon G12 digital camera.

Source: Canon G12 User Guide.

Resolution determines the maximum size at which you can print an image. In order to produce a high-quality print of any size, digital photographs need a pixel density of at least 300 ppi—that’s 90,000 pixels in a square inch! Generally speaking, the more pixels you have relative to the image’s dimensions, the bigger the print you will be able to make without sacrificing image quality. To illustrate this point, let’s consider the Canon G12 digital camera, which has an effective resolution of about 10 million pixels (total pixel count) (see Figure 9.14). With such a large-capacity image sensor, the G12 can produce a photograph with a recorded pixel count of 3,648 × 2,736 pixels. That’s a lot of pixels! Dividing both pixel dimensions by 300 allows you to determine the maximum size of a photographic print that can be made from this image with good results.

3,648 pixels ÷ 300 pixels/inch = 12.16 inches
2,736 pixels ÷ 300 pixels/inch = 9.12 inches

A photographer won’t always need to produce a print this large, but having lots of pixels to work with is always better than not having enough.
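
The print-size calculation above is easy to generalize; here is a quick sketch in Python:

def max_print_inches(width_px, height_px, ppi=300):
    """Largest high-quality print size for a given pixel count."""
    return width_px / ppi, height_px / ppi

print(max_print_inches(3648, 2736))            # (12.16, 9.12) inches at 300 ppi
print(max_print_inches(3648, 2736, ppi=150))   # (24.32, 18.24): larger, lower-quality print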

In multimedia work, we’re much more concerned with display resolution and bandwidth than we are with print resolution. Until recently, most television and computer monitors came with a display resolution of either 72 or 96 ppi to conform to the original ppi standards as set forth by Apple/Macintosh and Microsoft/Windows respectively. Although display resolutions have improved in recent years, 72 ppi remains the industry standard for video and the Web. On a 72 ppi monitor, each pixel in a 72 ppi image will be displayed by one pixel on the screen. You can go as high as 96 ppi, but anything more than this is simply a waste of file bandwidth and will do little to increase the overall quality of an image that’s displayed electronically.

Scaling

Many software applications allow you to scale an image within an open document by selecting it and adjusting one of eight resizing handles along the outer edge. But raster images are resolution dependent, which means they contain a fixed number of pixels. Resizing (or scaling) a raster image without redefining the structure and pixel count of the array (resampling) can ruin your image.

When you resize this way, you don’t change the image matrix (the image’s pixel dimensions) or the amount of data stored. You only decrease or increase the size of your pixels. When you scale an image upward (make it larger), each pixel is enlarged, and you lose image detail and sharpness. The more you enlarge a raster image, the softer and fuzzier it becomes. For this reason, professionals try to avoid enlarging raster images (see Figure 9.15).

Downscaling a raster image (making it smaller) is done far more often—and with better results. As you shrink an image, pixels become smaller and more tightly compacted together. In some cases, downscaling an image may actually improve image clarity because the pixel density (resolution) is artificially increased. In short, upscaling almost always leads to a bad result and downscaling usually works out okay. There is, however, a better alternative for resizing a raster image (see Figure 9.16).

Figure 9.15 Scaling is the act of resizing a digital image to make it appear smaller or larger on screen.

Source: Sarah Beth Costello.

Figure 9.16 A) Scaling: Upscaling often results in a noticeable loss of image quality (increased blurriness). When downscaling a high-resolution image, image degradation is rarely a concern. B) Resampling: The original image was too big to fit on this page. I used Adobe Photoshop to resize (and resample) it to the version you see printed here. Photoshop offers you a choice of five resampling algorithms. Because this image was intended for print, I kept the resolution set to 300 ppi. If I wanted to publish it to the Web, I would have chosen 72 ppi. Source: Neale Cousland/shutterstock.com. C) Cropping: The two images on the right were achieved by cropping the original photo (top left). Cropping is a photo-editing technique used to delete portions of an image in order to enhance the focus of a main subject or improve composition. With cropping, pixels in the unwanted portion of the image are permanently deleted. The remaining pixels are retained with their original color values intact. A cropped image will always, by definition, be smaller than the original; however, this reduction in size is due to the deletion of image content (pixels) and not to scaling or resampling.

Resampling

Resampling changes the size of a raster image by increasing or decreasing the image’s pixel count. While on the surface this sounds like a simple process, you must remember that each pixel represents a single color value. If you add pixels to an already defined image, what color do you assign to them, and where do you place them? Which pixels get shifted to make room for the new ones? Likewise, if you delete pixels from an image to make it smaller, which ones get tossed and which ones get to stay? Resampling deals with these challenges by using algorithms to analyze each pixel’s color information and using this data to reconstruct an entirely new raster structure. Depending on which resampling method and algorithm you use, some of the original image data may be retained, but much of it will be discarded and replaced. For this reason, you should make a backup copy of your original image before applying changes.

When you resample to enlarge an image, you still lose detail and sharpness. Given the nature of raster images, this just can’t be avoided. However, resampling provides more options and typically yields better results than scaling alone.
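
Most imaging libraries expose resampling choices much like Photoshop’s five algorithms. As a rough sketch of resampling in practice, here is how it might look with the Python Pillow library (the filename is hypothetical):

from PIL import Image   # the Pillow imaging library

original = Image.open('photo.jpg')   # hypothetical source file
print(original.size)                 # pixel dimensions, e.g., (4288, 2848)

# Rebuild the raster at half the pixel count in each dimension. NEAREST,
# BILINEAR, BICUBIC, and LANCZOS are different resampling algorithms;
# LANCZOS generally preserves the most detail when downscaling.
smaller = original.resize(
    (original.width // 2, original.height // 2),
    resample=Image.LANCZOS,
)
smaller.save('photo_small.jpg')   # keep the original as a backup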

Anti-Aliasing

Raster images are also known for producing aliasing artifacts, the visibly jagged distortions along the edge of a line. Aliasing is a stair-step effect caused by using square pixels to define objects with curves or diagonal lines (see Figure 9.17). You can easily see the effect when looking at text on the screen of a small digital device such as a cell phone.

Anti-aliasing smooths out the edges of jagged type by blending the color transition points, such as the pixels along the edges of a letter. The only major drawback to this is that it increases file size somewhat. In most cases, it’s better to have a clean graphic and accept the slightly larger file size. Anti-aliasing typically works best on larger type as the jagged edges of the type are more visible.
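
One common way to produce anti-aliased artwork is supersampling: draw the graphic at several times its target size, then downscale it with a smoothing filter so that edge pixels average into intermediate colors. A rough sketch with Pillow (the shapes, sizes, and filenames are illustrative):

from PIL import Image, ImageDraw

# Draw a circle at 4x the target size, then shrink it with a smoothing
# filter; the blended edge pixels soften the stair-step effect.
big = Image.new('RGB', (400, 400), 'white')
ImageDraw.Draw(big).ellipse((40, 40, 360, 360), fill='black')
smooth = big.resize((100, 100), resample=Image.LANCZOS)
smooth.save('circle_antialiased.png')

# For comparison, the same circle drawn directly at target size shows
# hard, jagged edges along its curves.
jagged = Image.new('RGB', (100, 100), 'white')
ImageDraw.Draw(jagged).ellipse((10, 10, 90, 90), fill='black')
jagged.save('circle_aliased.png')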

Figure 9.17 Left: The stair-step effect known as aliasing is seen in a close-up view of the letter B. Aliasing is most pronounced along the curved segments of a stroke. Right: An anti-aliasing algorithm is applied. Anti-aliasing softens the perceived jaggedness of a raster image by blending pixels along the edge of the stroke.

Tech Talk

Compression In an uncompressed file format, the computer records the individual value of each pixel. Examples of this include the BMP and uncompressed TIFF formats. While these formats give you access to a lot of information, graphics saved in these formats tend to be quite large. Compression can help with this. There are two basic types of compression: lossless, which looks for more efficient ways to store the data without losing any information—kind of like putting your sleeping bag in a stuff sack—and lossy, which reduces file size by discarding data you might not need at the moment.

JPEG is the most common lossy format used in multimedia production. Released in 1992 by the Joint Photographic Experts Group, the JPEG standard was designed to reduce the file size of photographic images. File size was a critical issue at the time because computer hard drives were much smaller, and processor speeds weren’t nearly as fast as they are today. In addition, data transfer rates were particularly slow online—a fast modem at the time was around 56 kilobits per second. Bits and bytes were precious, and the new JPEG standard greatly improved the photo-imaging workflow. When applied to a raster image, the JPEG compression algorithm evaluates each pixel, looking for ways to “compress” redundant color information into a more efficiently written and structured data file. For example, the high-resolution photo in Figure 9.18 was taken on a clear and sunny day and contains a lot of blue pixels.

The original image size is 4,288 × 2,848 pixels, but notice how the first 100 rows of this image contain largely the same shade of blue. We can compute a rough estimate of the uncompressed file size of this sample as follows:

Pixel Count: 4,288 pixels per row × 100 rows = 428,800 pixels
File Size: 428,800 pixels × 24 bits = 10,291,200 bits (or about 1.3 megabytes)

Saved in an uncompressed TIFF format, this tiny section of blue sky takes up nearly 1.3 MB of storage space. Using JPEG compression, the same picture information can be rewritten to a new data file that’s 68 KB or less.
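
The file-size estimate above is straightforward to reproduce; a quick check in Python:

# Uncompressed size of the 100-row strip of sky described above.
pixels = 4288 * 100             # pixels per row x number of rows
bits = pixels * 24              # 24 bits of color data per pixel

print(pixels)                   # 428800 pixels
print(bits)                     # 10291200 bits
print(bits / 8 / 1_000_000)     # about 1.29 megabytes uncompressed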

In photography, you can control the amount of JPEG compression both when you take the photo and later when you work within an image-editing program. For example, most digital cameras provide at least three JPEG quality settings: normal (greatest amount of compression, smallest file size, poorest image quality); fine; and superfine (least amount of compression, largest file size, best image quality). Photoshop, one of the most popular professional image-editing programs, offers 12 preset levels of JPEG compression (see Figure 9.19 and Figure 9.20).
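
Image-editing libraries expose the same trade-off as a numeric quality setting. As a sketch using the Python Pillow library (the filenames are hypothetical), you could compare several compression levels like this:

import os
from PIL import Image

photo = Image.open('wheat_field.jpg')   # hypothetical source image

# Pillow's JPEG quality setting runs from about 1 (heaviest compression)
# to 95 (lightest), similar in spirit to a camera's normal/fine/superfine.
for quality in (25, 75, 95):
    name = f'wheat_q{quality}.jpg'
    photo.save(name, quality=quality)
    print(name, os.path.getsize(name), 'bytes')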

Figure 9.18 This high-resolution photograph contains lots of redundant color information (blue sky and golden wheat), making it a great candidate for JPEG compression.

Figure 9.19 The Save for Web and Devices feature in legacy versions of Adobe Photoshop allows you to compare the effect of different amounts of JPEG compression on file size and image quality. Notice how file size and quality decrease as the amount of compression steadily increases.

Figure 9.20 Another example of JPEG compression at work.

It’s possible to overcompress an image to the point where its visual integrity is noticeably compromised. The resulting imperfections are called compression artifacts. Image optimization tools such as the Save for Web & Devices feature in Photoshop provide a handy way to test and compare compression settings before applying them. Image optimization is critically important for images on the Web because file size is directly related to bandwidth. The smaller the image file, the less time it takes to download from the Internet and render on screen. The goal of image optimization is to create the best-looking image possible, with the smallest file size and no compression artifacts.

Raster Image Formats

When you create and work with raster images, you have many options for saving your files. As you work with a file, you should save it in a format that supports layers, probably as a PSD or TIFF with layers, so you can change it in the future or resave the image in a new format for another purpose. But when you prepare a raster image to incorporate into a multimedia project, you’ll need a flattened, compressed image. Selecting the right format for your project is important: you need a good-looking image that takes up as little space as possible. Don’t fall into the habit of editing JPEGs. Remember, it’s a lossy format. Each time you save it, you lose information. Whenever possible, you should work in an uncompressed or lossless format. Always keep a backup copy. You never know when you might need to reedit the graphic.

When you select the file format and quality settings—how much to compress the image—consider whether the format is lossy (loses information during compression) or lossless. Also consider the nature of your image. Does it have many color variations that need to be captured, such as a photograph? Is it mostly lines and blocks of uniform color? Do you need transparent pixels because you plan to float the image over other elements of a website? You have more than 50 formats to choose from, but here are three of the most common formats used for still images in multimedia:

  • GIF offers 256 colors and transparency (transparent pixels) and is a lossless compression format. It is common for logos and other images with lines and solid blocks of color. It supports interlacing, so every odd line of pixels loads, then every even line loads, making graphics seem to appear faster (users see the full-sized, half-loaded graphic before the rest of the pixels appear).
  • JPEG offers 16.8 million colors but does not support transparency (has no transparent pixels). It is a lossy compression format and is used most often for photographs. This format does not support interlacing.
  • PNG offers 16.8 million colors and transparency, but you can choose to use fewer colors to save file space (PNG 8, or PNG with eight-bit color). It is a lossless compression format and is common for a wide range of images, including favicons (the small web page icons in browser tabs). Some older web browsers don’t support it (Internet Explorer prior to version 4); such browsers have mostly, but not completely, fallen out of use. PNG files can be very small, but for photographs with many colors, they may be larger than comparable JPEGs. This format supports interlacing.

Another option you have with both the GIF and PNG formats is dithering, or scattering the pixels to achieve blending without using as many colors. Dithering is useful if you have an image with a drop shadow and want to superimpose it cleanly on a background.
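
As a rough sketch of these options in code, here is how the format and palette choices above might look using the Python Pillow library (the filenames are hypothetical):

from PIL import Image

art = Image.open('logo.png').convert('RGB')   # hypothetical source image

# PNG-8: reduce the image to a 256-color palette; Floyd-Steinberg
# dithering scatters pixels to fake smooth blends with fewer colors.
png8 = art.convert('P', palette=Image.ADAPTIVE, colors=256,
                   dither=Image.FLOYDSTEINBERG)
png8.save('logo_png8.png')

# The same image as a 256-color GIF and as a lossy JPEG for comparison.
art.convert('P', palette=Image.ADAPTIVE).save('logo.gif')
art.save('logo.jpg', quality=80)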

Vector Graphics

Vector imaging defines the area of a picture using paths made up of points, lines, curves, and shapes. Each vector path forms the outline of a geometric region containing color information. Because paths can be mathematically resized, vector graphics can be scaled up or down without losing any picture clarity. Clipart and typefaces (fonts) are often created and stored as vector graphics because designers want the ability to scale them to any size (see Figure 9.21).

The concept behind vector graphics is like painting with numbers. A paint-by-number set normally includes a black-and-white line drawing and a set of numbered paints. The artist fills in each numbered region with the appropriate color, carefully staying within the bordered outline of each defined area. When all the regions are filled with color, the picture is complete. As with raster images, the phenomenon of perceptual grouping leads us to ignore the individual paths used to form the holistic impression (see Figure 9.22).

Figure 9.21 Vector graphics have crisp edges with no aliasing. They can be resized up or down to any size without negative consequences.

Figure 9.22 To complete a paint-by-number piece like this one, each numbered region must be filled in with a single color. In similar fashion, vector graphics are rendered geometrically using paths defined by points, lines, curves, and shapes.

Source: Courtesy of Pam Snow, Mesa, Arizona.

Vector graphics can render curves and diagonal lines that are crisp, smooth, and sharp. Aliasing is not a problem because pixels are not used in their construction. So vector graphics are an ideal choice for prepress applications requiring higher-resolution pictures with finer line detail.

When you enlarge a raster image, the file size grows in proportion to the size of the image: as you add pixels to the array, you need more data to represent the image. Because vector encoding uses mathematical equations to record visual information, the size of a vector data file stays consistent, regardless of how large or small you make the graphic. If you are creating a still graphic, you can enlarge the graphic to any size you want and then rasterize it, saving it to whichever file format suits your purpose best.
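
Under the hood, scaling a vector path really is just multiplication. A toy sketch in Python (the triangle coordinates are made up) shows why no detail is gained or lost:

# A 'path' stored as a list of (x, y) anchor points: a simple triangle.
triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def scale_path(points, factor):
    """Resize a vector path by multiplying every coordinate by a factor."""
    return [(x * factor, y * factor) for x, y in points]

print(scale_path(triangle, 100))   # poster-sized: still just three points
print(scale_path(triangle, 0.5))   # thumbnail-sized: still three points

# The data stays the same size at any scale; the math does the resizing
# when the graphic is finally rasterized for output.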

You could also use vector graphics to create an animation, such as with Flash. Instead of drawing every separate frame of your project—with 24 frames appearing each second—you could create two different graphics for a segment and let your animation software mathematically interpolate the positions of the components in the in-between frames (a technique known as tweening).
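
Linear tweening is simple interpolation. Here is a minimal sketch in Python, assuming a graphic moving horizontally between two keyframes over one second at 24 fps:

def tween(start, end, frames):
    """Interpolate the in-between values for a span of frames."""
    return [start + (end - start) * i / (frames - 1) for i in range(frames)]

# The animator supplies only the two keyframes (x = 0 and x = 120);
# the software generates the 22 in-between positions.
print(tween(0, 120, 24))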

Display Screen Standards

Television and computer monitors, tablets, and smartphones (basically any digital device with a visual display screen) have a fixed number of pixels (see Figure 9.23). We call a screen’s fixed pixel dimensions its native resolution. When a computer monitor is set to its native resolution, images displayed on screen are said to be pixel perfect because the actual number of pixels in the image source matches the number of pixels used to render it on screen. People who are visually impaired or have reduced vision may choose to magnify images on screen by either zooming in using the application software or by changing the display settings of the monitor to a lower resolution. While doing so enlarges images, making them easier to see, there is a tradeoff. The quality of images deteriorates as you move further and further away from the native resolution of the monitor. The reason for this is that the computer has to recreate the image on screen using more pixels than were in the original source. Through a process called interpolation, the computer mathematically generates new values for each pixel in the magnified image, as it is scaled up to fill a larger region of pixel real estate. This process, and its consequences, are similar to the concept of resampling, which was covered earlier in the chapter in Figure 9.16.

Figure 9.23 A comparison of display screen sizes and resolutions for Apple tablets and smartphones over time.

Manufacturers are constantly striving to increase the number of pixels in digital screens while simultaneously squeezing them into smaller physical spaces—thereby increasing the number of pixels per inch (ppi). As I worked on the first edition of this book, I used a laptop computer with a 17-inch screen and a native resolution of 1920 × 1200 pixels at 133 ppi. Today, I am using a slightly smaller 15-inch laptop with a second external monitor attached. The pixel resolution of the laptop screen is 2880 × 1800 at 220 ppi. In this example, the screen size decreased while the native resolution of the monitor drastically improved.

When a raster image is displayed in full size on a screen that’s set to the native resolution of the monitor, there’s a one-to-one correlation between the pixel data in the source file and the pixels rendered on screen. In this scenario, the source image will look its very best and is said to be “pixel perfect.” Unfortunately, user preferences can quickly get in the way of viewing a pixel perfect image every time. For instance:

  1. A user may not have his or her screen set to the native resolution of the monitor. On my laptop, I can choose from a number of display resolutions, including: 640 × 480, 720 × 480, 800 × 600, 1024 × 768, 1280 × 1024, 1680 × 1050, and finally, 1920 × 1200 (native). Choosing a display setting that’s lower than the native resolution of the monitor produces the effect of zooming in. This is a helpful option for someone like me, whose vision is less than perfect. However, the benefit of enlarging the text and icons on screen has two potentially negative tradeoffs: a) it reduces the desktop real estate or screen space, and b) it compromises the quality of the screen image (the image becomes fuzzier as you stray further from the native resolution of the monitor).
  2. A user may have the screen set to its native resolution but may be zoomed in on an active document window. For example, you could be viewing an online newspaper article using a web browser like Firefox, Chrome, Internet Explorer, or Safari. Most browsers allow you to zoom in on a page to get a better view of the content. Doing so enlarges the view of both text and images; however, with each increase you’ll lose clarity, particularly with images (see Figure 9.24).

In both these examples, there’s no longer a one-to-one correlation between the native raster structure of the source image and the display screen. The image is no longer pixel perfect and will have to be scaled up or down to conform to the raster structure of the screen. And as we’ve already learned, scaling alters the quality of a raster image, especially when it is enlarged.

Aspect Ratio

In addition to describing the screen attributes in absolute terms (screen size and native resolution), monitors are classified by their aspect ratio. Aspect ratio is an indicator of the proportional relationship of the width to the height of the screen and is depicted with the expression x:y, where x equals the number of units wide and y equals the number of units high. While the physical size of a display screen can vary, the aspect ratio remains constant. The two most common aspect ratios in use today are 4:3 and 16:9. The standard 4:3 (pronounced 4 by 3) aspect ratio predates television and produces a familiar and somewhat boxy-looking shape. The other popular aspect ratio is 16:9 and is usually referred to as widescreen because it more closely matches the shape of a theatrical movie screen. While television and computer monitors are available in many shapes and sizes, they almost always conform to either a 4:3 or 16:9 aspect ratio.
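
Reducing pixel dimensions to an x:y aspect ratio is a greatest-common-divisor problem; a quick sketch in Python:

from math import gcd

def aspect_ratio(width, height):
    """Reduce pixel dimensions to the x:y form described above."""
    divisor = gcd(width, height)
    return f'{width // divisor}:{height // divisor}'

print(aspect_ratio(1920, 1080))   # 16:9 (widescreen)
print(aspect_ratio(800, 600))     # 4:3 (standard)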

Figure 9.24 Top: A web page is viewed natively in actual size. Bottom: This close-up view of the same page is achieved using the browser’s zoom control.

Figure 9.25 A comparison of some common display resolutions used in multimedia design. Do you notice how the aspect ratio varies?

Moving Images

Many people’s daily experience is filled with moving images. Some come from televisions and movie theaters. Others come from personal computers, game systems, mobile phones, handheld devices, even GPS (Global Positioning System) interfaces and self-checkout kiosks at supermarkets. Regardless of the content, they are all based on the same basic principles. Let’s look at how this technology works.

Raster Scanning

In the Western world, people tend to process visual information from left to right and from top to bottom. Think about it! When you compose a letter or note, you generally start in the upper left-hand corner of the page and work your way down one line at a time from left to right. In a word-processing program such as Microsoft Word, you do basically the same thing. You press a key to produce an imprint of a character on the page, causing the cursor to advance to the next space on the line. You advance one character at a time until you reach the end of a line, at which point the cursor jumps to the beginning of the next line of text. A sheet of paper has a fixed number of lines on it, so when you reach the end of a page, a new page is added and the process continues.

Raster scanning works in much the same way, only faster. In television and computer display systems, individual video frames and computer images are reproduced on the screen, one pixel at a time, in a process called scanning. An electron beam or impulse mechanism illuminates each screen pixel as it progresses through the raster matrix. Each row of pixels is called a scan line. A scanning cycle is one complete pass of all of the scan lines in the display. When the scanning beam reaches the last pixel on the last scan line, it moves to the top and begins the next cycle. A frame is one complete scanning pass of all of the lines in a picture, or one complete scanning cycle.

The refresh rate is the number of complete scanning cycles per second and is measured in Hertz (Hz), a unit of frequency equal to one cycle per second. If the refresh rate is below 50 Hz, the image will appear to flicker. Most displays have refresh rates of 60 Hz or more. The faster the refresh rate, the sharper the image quality and the less eyestrain the user will experience. The larger the screen, the higher the refresh rate should be. Large computer monitors typically have a refresh rate of 85 Hz or higher.

Progressive Scanning

Contemporary computer monitors and some televisions reproduce images using progressive scanning, consecutively scanning the lines of the picture from top to bottom, just as you type on a typewriter. Progressive scanning helps combat eyestrain, which is why it’s a given on computer monitors. That’s not, however, necessarily the case for television.

Interlaced Scanning

Early television standards adopted a method of raster scanning called interlaced scanning to minimize both bandwidth use and flickering. With an interlace system, each frame of an image is captured in two parts and transmitted separately, one field at a time. The odd lines are scanned first, followed by a second pass of the even lines. So you’re really only seeing half of each new image at once, but the screen draws so quickly you don’t notice.

Interlaced scanning reduces the bandwidth requirements of standard broadcast television by half compared to a progressively scanned image. Using this standard helped cut the cost of broadcast equipment and, perhaps more importantly, freed up valuable broadcasting bandwidth. While broadcasting standards have changed with the move to Digital Television, interlaced signals are still an integral part of the broadcast mix (see Figure 9.26).

Figure 9.26 Broadcast television images are typically interlaced (left) while video on the Web is often de-interlaced (right), delivered progressively.

Fields

One complete scanning pass of either the odd or even scan lines is called a field. So two fields, the odd and even, produce one frame. As you can imagine, the electronic raster scanning process has to be fast to give you a good picture. Let’s say you’re watching a movie on a television with interlacing and a frame rate of 30 frames per second (usually stated as 30 fps) that has 480 lines in its raster (a comparatively small number). This means that one scanning pass of the 240 odd-numbered scan lines, or one field, occurs in just 1/60th of a second. Double that to get a full frame. Put another way, 14,400 scan lines of picture information are rendered on your television monitor every second.
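
The arithmetic behind that example is easy to verify; a quick check in Python:

lines_per_frame = 480
frames_per_second = 30

lines_per_field = lines_per_frame // 2        # 240 odd (or even) lines
fields_per_second = frames_per_second * 2     # one field every 1/60 second

print(lines_per_frame * frames_per_second)    # 14400 scan lines per second
print(round(1 / fields_per_second, 4))        # 0.0167 seconds per field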

Television and Cinema Standards

Since multimedia projects are often viewed on televisions, you need to consider television standards. A great deal of money goes into supporting the infrastructure of terrestrial broadcasting systems, so countries have developed technical standards—some more widely adopted than others—for the production of television-related equipment. While it doesn’t always work out this way, such standards help to ensure that consumers have access to equipment that’s compatible with the delivery systems used by content providers for program distribution. In an ideal world, every nation would use the same standards for every type of electronic technology, but this just isn’t the case. Just as it’s sometimes hard to get two people to agree on something, it’s even harder to get the governing bodies of entire nations to agree on a universal set of technical specifications.

Great Ideas

The Illusion of Apparent Motion

The foundation of all moving image technology rests on the ability of the human eye and brain to process a series of rapidly projected frames or scan lines as a continuous and uninterrupted picture. The motion we observe on the screen, whether in a movie house or on a television or computer monitor, is a perceptual illusion. Film, video, and moving digital images appear to have motion because of the phenomenon of short-range apparent motion. Our brains perceive motion when we encounter successive images that have small changes between them in much the same way that we process and make sense of physical movements in the real world.

In order to pull off the illusion of motion we get from film, video, and animation, individual pictures in a sequence must advance quickly. If the frame rate is set too low, the transition from one image to the next will appear jerky or stilted. The target speed is known as the flicker fusion threshold, the frequency at which the momentary flicker of intermittent light between each frame disappears from human perception.

Early on, the motion picture film industry adopted a frame rate of 24 fps as an international standard. However, an image pulsating at 24 fps is well below the flicker fusion threshold for human perception. To compensate, a projector displays each frame of motion picture film twice. A rotating shutter momentarily blocks out the projector’s light each time the frame is advanced and between each repeated exposure of a single frame, fixing the image in one spot and keeping us from seeing a blur. We don’t notice the brief black spots because our retinas retain each visual impression of light for a fraction of a second—just long enough to span the gaps between each frame. The result is a flicker-free viewing experience for the audience.

Figure 9.27 Moving images are a perceptual illusion, achieved by the rapid projection of individual still frames of film or video.

As you develop media products to be used on multiple devices—computers, phones, game systems, and so on—keep in mind how those products will work and look on each device, including television.

Flashback

The Legacy of Analog Television

Television signals used to be broadcast in analog formats. In analog broadcasts, continuous waves carried the sound and picture information.

Much of the world has made the switch to digital formats, and many (but not all) remaining countries have plans to switch to digital formats before 2020. There are three major analog television standards: the NTSC (National Television System Committee) standard developed in the United States, the PAL (Phase Alternating Line) standard developed in Germany, and the SECAM (Sequential Color with Memory) standard developed in France.

The most common analog formats had a 4:3 aspect ratio, set to mimic the dimensions of the film that was in use at the time when television was born. Later, movie industries moved to wider-format film to offer viewers something different from television in their theater experience. With the release of high-definition Digital Television, the aspect ratio was changed to the now popular 16:9 widescreen format, bringing television back into conformity with the theatrical viewing experience.

Digital Television

Digital Television (DTV) offers many advantages over legacy analog formats. Content created for digital media is more fluid: it can be easily repurposed and distributed through secondary channels of communication, making DTV more compatible with computer and Internet-based systems and services.

DTV also offers less signal interference and uses less bandwidth than an equivalent analog television broadcast, which is an advantage because the amount of broadcast bandwidth is finite. The switch to DTV has meant that more stations can be broadcast in the same viewing area, while using the same or less bandwidth as analog television.

Figure 9.28 Television entertainment technologies have evolved rapidly in recent years. The standalone single-piece television receiver your parents may remember can’t compete with today’s high-tech home theater system, complete with wall-mounted flat-screen monitor and 5.1 surround sound.

DTV also offers the option of using a 16:9 format, similar to that used in the movie theater, as well as high-definition (HD) video—video with over twice the resolution of the old NTSC standard. When professionals shoot and edit television programs digitally, the DTV infrastructure preserves the quality of the original material during transmission. In order to transmit digital content through an analog system, programs must first be downconverted to an analog format, resulting in a loss of image quality.

ATSC

The United States adopted the ATSC (Advanced Television Systems Committee) terrestrial broadcasting standard in 1996. In the same year, WRAL in Raleigh, North Carolina, became the first television station in the country to begin broadcasting a high-definition television (HDTV) signal. The U.S. transition to HDTV was fraught with many delays and took more than a decade to complete. On June 12, 2009, U.S. analog transmissions ceased and NTSC broadcasting officially ended for all full-power television stations in the United States.

Table 9.2 ATSC Television Formats

The NTSC format has a fixed resolution, aspect ratio, scan mode, and frame rate. The newer ATSC standard is more fluid, providing up to 18 different display formats, which are categorized into three groups: standard definition television (SDTV), enhanced definition television (EDTV), and HDTV (see Table 9.2). ATSC emphasizes progressive scanning and square pixels, bringing television technology closer to current standards for computer imaging. It also improves audio distribution, enabling a theater-style experience with 5.1-channel Dolby Digital Surround Sound. The ATSC standard has been adopted in much of the Americas and in U.S. territories. Canada made the switch in 2011, and Mexico is preparing for the switch and is simulcasting in both digital and analog formats. Other countries also have introduced the ATSC format but have not fully switched. And, of course, the ATSC is working on new standards: as ATSC 2.0 comes out, look for features such as video on demand and possibly even 3D programming.

DVB

The DVB (Digital Video Broadcasting) terrestrial broadcasting standard was established in 1997. The following year, the first commercial DVB broadcast was transmitted in the United Kingdom. Because European consumers depend more on cable and satellite distribution for television and less on over-the-air terrestrial broadcasting, their transition to a DTV infrastructure has been easier. The DVB standard has been adopted throughout Europe and in Australia. Not all areas have followed the U.S. model of a decade-long digital conversion. Berlin, the capital of Germany, made the change on August 4, 2003, making it the first city to convert to a DTV-only terrestrial broadcasting system.

Digital Cinema Standards

In 2002, a consortium of Hollywood film studios formed the Digital Cinema Initiatives, LLC (DCI) to standardize the system specifications for digital production and projection of motion picture films in theaters. The DCI standard paved the way for the long-awaited transition from film-based production and distribution of motion pictures to digital production and theatrical projection and has greatly influenced the industry practices used today by digital filmmakers. One of the first DCI standards involved adapting the consumer and broadcast HD format to a slightly wider screen size that more closely aligned to existing film formats. HD has a resolution of 1920 × 1080 and an aspect ratio of 16:9. By comparison, the DCI HD equivalent has a resolution of 2048 × 1080 and an aspect ratio of 17:9. The DCI HD standard was termed 2K because the horizontal resolution is approximately 2,000 pixels.

The DCI standard also includes two advanced HD formats known respectively as 4K and 8K. The 4K DCI standard is 4096 × 2160 and contains more than four times as many pixels as HD. The 8K DCI standard is 8192 × 4320—roughly 16 times the resolution of HD.
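
A few lines of Python make the pixel-count comparisons concrete (the exact multiples depend on whether you compare against consumer HD or DCI 2K):

hd_pixels = 1920 * 1080   # consumer HD as the baseline

for name, w, h in [('2K', 2048, 1080), ('4K', 4096, 2160), ('8K', 8192, 4320)]:
    print(name, w * h, 'pixels:', round(w * h / hd_pixels, 1), 'times HD')
# 4K works out to about 4.3 times the pixels of consumer HD, and 8K to
# about 17.1 times (or exactly 16 times DCI 2K).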

In addition, the ATSC standard is continuing to evolve. ATSC 3.0, also known as Next Generation Broadcast Television, is being developed to address growing needs for improved compression and increased bandwidth efficiency to accommodate 4K over-the-air transmission and delivery of 4K content to tablets and mobile devices.

In 2015, the Blu-ray Disc Association released technical specifications for the Ultra HD Blu-ray format. This standard paved the way for manufacturers to design and sell the next generation of UHD (ultra-high-definition) Blu-ray players to consumers for viewing movies in 4K or in the original DVD or Blu-ray formats.

Figure 9.29 Top: The resolution of SD and HD is compared to the newer ultra HD format. Bottom: The Sony FDR-AX100 ultra HD camcorder is one of many new devices recently introduced by manufacturers in response to the newer 4K standard.

Source: rmnoa357/Shutterstock.com.

Chapter Summary

Digital images are constructed of thousands to millions of tiny visual elements called pixels that are arranged in rows and columns on a screen to form a composite visual representation. The more pixels in an image, the higher its resolution and perceived quality. Digital imaging would not be possible without light. Digital images are captured using light-sensitive sensors and displayed using electronic monitors that scan the image onto the screen with light. Since the primary colors of light are red, green, and blue, RGB color space is the standard in digital imaging, whereas CMYK color space is the printing industry standard. The way an image or graphic is defined—either as separate pixels or as mathematical relationships—constrains what you can do with it. And as you create, transform, and incorporate images and graphics into multimedia projects, the final format you choose affects how good the user’s experience will be. The final format should also guide your workflow, as you don’t want to risk starting off working at a lower resolution than your final format needs to be.
