Chapter 23

Digital colour reproduction

Sophie Triantaphillidou

All images © Sophie Triantaphillidou unless indicated.

INTRODUCTION

There are numerous books and references entirely dedicated to digital colour imaging and reproduction; it is impossible to encompass all aspects of the subject in only one chapter. Relevant information on the subject of colour has been provided in various sections of the book up until this stage. Chapter 3 discusses light sources, Chapter 4 introduces basic colour vision and Chapter 5 is dedicated to an introduction to colour science. The latter chapter discusses colour terminology, models of colour vision, the basics of colorimetry and colour appearance modelling, as well as the objectives of colour reproduction and instruments used for colour measurement. Also, sections of Chapters 9, 14, 15 and 16 discuss how colour is formed in various imaging media, whilst Chapter 22 is dedicated to photographic colour reproduction. Later, in Chapter 25 we discuss how colour is communicated in various digital image workflows and in Chapter 26 how colour can be managed using International Color Consortium (ICC)-based colour management systems (CMS).

The reproduction of colour in digital imaging systems is a complex operation. Its understanding relies on knowledge of the underlying principles of physical and psychophysical phenomena related to colour perception, and colour measurement and definition, as well as knowledge of the physical principles, capabilities and limitations of image recording, storage, transmission and output systems.

Let’s consider a ‘colour scene’ as a spatially varying spectral distribution. When the scene is imaged by a still digital acquisition device, the resulting digital colour image is made out of spatially and spectrally sampled distributions that have been integrated over a number of spectral bands and over a time interval. Each digitized spectrally weighted integral corresponds to a colour channel in the digital image:

$$g_n(x, y) = \int_t \int_\lambda f(i, j, t, \lambda)\, w_n(\lambda)\, d\lambda\, dt \qquad (23.1)$$

where f is the spectral distribution of the two-dimensional focused plane in the scene, at a spatial location (i, j) and time interval t; λ is the wavelength and $w_n(\lambda)$ is the spectral weighting of the nth band; $g_n(x, y)$ is the encoded intensity of the nth channel in the image at the corresponding discrete location (x, y). The number of spectral bands over which the scene is integrated corresponds to the number of colour channels, and thus the same number of encoded intensities (i.e. pixel values) is used to describe the colour at the image location (x, y).

The reproduction of the digital image from an output imaging device is essentially the physical rendering of the encoded intensities to produce a ‘colour image’. The encoded image is thus converted back to a spatially varying spectral distribution: a spectral radiance/luminance distribution in a displayed image, or spectral reflectance distribution in a printed image.

The encoded image intensities represent the colour coordinates of the image in the colour space (see Chapter 5) in which it is colour encoded. The early sections in this chapter are concerned with the definition and specification of digital colour spaces and will introduce different colour space encodings that have been proposed, or standardized by various standards organizations in order to communicate digital colour images in a consistent and unambiguous manner. Transformation of an image from one space and encoding to another requires knowledge of the colour space characteristics – for example, the colour space primaries, white point, transfer function – as well as the specification of the encoding method and range. If the colour space is device dependent (i.e. defined by the characteristics of a specific imaging device), then the characterization of the device is necessary to reveal such characteristics – that is, to relate the device-dependent colour coordinates to CIE colour space coordinates and to further achieve such a transformation. Therefore, several sections of the chapter will discuss device characterization of capturing, display and printing imaging systems.

The range of colours capable of being reproduced by a particular device and/or medium, or encompassed by a particular colour space, or occupied by a particular scene or image, is referred to as the colour gamut of the device, space, scene or image. Colour gamuts vary between systems and media and are strongly influenced by the viewing conditions, i.e. illumination, surround and background. For example, the appearance of some bright colours displayed on a computer monitor in a dark environment cannot be replicated in a printed magazine viewed in a typically dimly lit room in a home, because the gamuts of the media under these viewing conditions are very different. Such gamut mismatches are very common and unavoidable in image workflows. They are dealt with using gamut mapping techniques, which alter the original image colour coordinates to ones that a given medium can reproduce. The last sections of the chapter will introduce the principles behind some common gamut mapping techniques.

COLOUR SPACE AND COLOUR ENCODING

In Chapter 5 we defined colour space as an n-dimensional geometrical model, where colours are specified by their vector coordinates (i.e. colour coordinates). These coordinates describe the position of the colour within the specific colour space only (see Figure 5.12). According to the CIE, three dimensions are enough to describe colour; thus most colour spaces are three-dimensional. Examples of colour spaces include the CIEXYZ, CIELAB and CIELUV spaces, and continuous (i.e. not quantized) RGB spaces with a particular set of red, green and blue additive primaries. In digital imaging applications, the specification of a colour space provides information about its geometrical representation, i.e. the ‘volume’ and ‘shape’ of the space. However, it does not specify how the digital colour coordinates (pixel values) can be interpreted. For this reason, the colour spaces need to be encoded.

Colour space encoding is the digital encoding of a colour space. Specification of the colour space encoding includes information about the digital encoding method and the encoding range that the space covers. The digital encoding method specifies the relationship between continuous colour space values and the corresponding encoded (digital pixel) values. In most cases the digital encoded values are linearly related to the continuous colour space coordinates and are integer values – although non-linear encoding and/or floating-point digital encoding methods are used in some applications. The encoding range is essentially the range of digital code values that are available and relates directly to the bit-depth characteristics. Therefore for 8-bit encoding the encoding range is 0–255, for 12-bit encoding it is 0–4095 and for 16-bit encoding it is 0–65,535. It is important to note that multiple encodings for a single colour space may be defined. A particular additive RGB colour space, such as the sRGB colour space, which will be discussed later in the chapter, may have different colour space encodings – for example, 8-bit sRGB and 10-bit extended sRGB (e-sRGB).
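To make the encoding-range arithmetic concrete, here is a minimal Python sketch of linear integer encoding and decoding at different bit depths; the function names are illustrative only, and colour space values are assumed to lie in the range 0.0–1.0:

```python
import numpy as np

def encode(values, bits=8):
    # Linearly quantize continuous colour space values in [0.0, 1.0]
    # to integer code values in [0, 2**bits - 1].
    max_code = 2 ** bits - 1
    return np.round(np.clip(values, 0.0, 1.0) * max_code).astype(int)

def decode(codes, bits=8):
    # Map integer code values back to continuous values in [0.0, 1.0].
    return np.asarray(codes) / (2 ** bits - 1)

v = np.array([0.0, 0.5, 1.0])
print(encode(v, 8))    # [    0   128   255]
print(encode(v, 12))   # [    0  2048  4095]
print(encode(v, 16))   # [    0 32768 65535]
```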

Finally, a colour image encoding is the encoding of the colour values for a digital image. This includes the specification of a colour space encoding, i.e. the colour space and colour space encoding in which the image is represented, together with any further information to interpret the pixel values, such as the reference image viewing environment and reference imaging medium. Multiple colour image encodings can be related to one colour space encoding. According to the related standard ISO 22028-1:2004 ‘it is important that the colour is specified by a complete colour image encoding, rather than simply defining a colour space encoding’. The standard specifies what should be included in a colour image encoding definition in order to achieve unambiguous colour image representation. Readers should refer to it for further information.

In the following sections we will discuss the various types of digital colour spaces, introducing the characteristics of some colour spaces and colour space encodings that have been standardized by various organizations (i.e. standard colour spaces) to help the communication of colour information through imaging chains.

CLASSIFICATION OF COLOUR SPACES

There are a variety of colour spaces used in digital imaging, as different spaces are suitable for different applications. Figure 23.1 illustrates an example of an imaging chain, where various imaging components are related to different colour spaces. There are several ways of classifying them.

The first classification is concerned with the intrinsic nature of the colour space; specifically whether the colour space is based on additive or subtractive mixtures of its related primaries, or is derived from an additive or a subtractive colour space. According to this classification we can group colour spaces as follows.


Figure 23.1   A typical imaging chain and related colour spaces.

RGB (red, green, blue)

These are additive-based spaces and relate directly to the trichromatic theory, i.e. the mixing of three primaries to obtain all possible colours (see Chapter 5). RGB spaces are very common since they are used by digital capturing devices, such as digital cameras and scanners, and soft displays, such as cathode ray tube (CRT) displays and liquid crystal displays (LCDs). The RGB primaries of such spaces are based on a real or hypothetical input or output device. RGB colour spaces are non-linearly related to visual perception (non-uniform), which means that equal steps within the colour space are not perceived as equal by the human visual system (HVS).

CMY(K) (cyan, magenta, yellow, black)

These are subtractive-based colour spaces; they are mainly used in printing and hard-copy output. They use three subtractive primaries, CMY (dyes at full saturation), to create all possible colours by absorbing (subtracting) various wavelengths. The fourth, black, component is included to improve the density range and thus image contrast, as well as the available colour gamut (see Chapter 16). The transformation between CMY(K) and RGB colour values is very complex. It is most often performed via an intermediate space, i.e. a CIE colour space (see later in the chapter). CMY(K) spaces are also non-linearly related to visual perception.

HSL (hue, saturation, lightness) – and similar

There are various colour spaces falling into this category, such as HSL (L standing for lightness), HSI (I standing for intensity) and HSV (V standing for value), and others. Most of these are straightforward transformations from RGB spaces and thus they are non-linear with respect to human vision. They are mainly used in computer graphics and image-processing applications because of one main advantage: they separate the three perceptual attributes of colour, i.e. hue, saturation and lightness (see Chapter 5), so that each one can potentially be treated individually.

YCC (luminance, chrominance 1, chrominance 2) – and similar

There are various colour spaces falling into this category, such as: Photo YCC, used in the (now obsolete) Kodak PhotoCD file format (see Chapter 17); YCbCr, which is a digital standard used in the JPEG compression schemes; and YIQ and YUV, which are analogue-based spaces for NTSC and PAL TV respectively. They are linear transformations of RGB spaces and non-linear with human vision. Such spaces separate the luminance (Y) from the chrominance (C1, C2) components and are widely used in image transmission where compression is important, i.e. they allow the compression of the luminance channel to be treated separately from that of the chrominance channels. The chrominance channels are usually more compressed since this is more tolerated by the HVS than compression of the luminance channel (see Chapter 5).
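As an illustration of luminance–chrominance separation, the following sketch applies the full-range JFIF (Rec. 601) RGB-to-YCbCr matrix used in baseline JPEG; the function name is hypothetical and the inputs are assumed to be non-linear R′G′B′ values in the range 0–1:

```python
import numpy as np

# Full-range JFIF (Rec. 601) RGB -> YCbCr matrix: Y is the luma channel,
# Cb and Cr are the blue- and red-difference chroma channels.
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    # rgb: non-linear R'G'B' values in [0, 1].
    y, cb, cr = M @ np.asarray(rgb, dtype=float)
    return y, cb + 0.5, cr + 0.5   # offset the chroma channels into [0, 1]

print(rgb_to_ycbcr([1.0, 0.0, 0.0]))   # saturated red: low Y, high Cr
```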

CIE colour spaces

In Chapter 5 we introduced the CIEXYZ system, which in this context can be considered as a colour space. It is linearly related to a specific set of RGB primaries and is based on the ‘standard colorimetric observer’. The CIEXYZ space is perceptually non-uniform, but defines colours by a set of coordinates that are meaningful to the HVS.

Two other CIE colorimetric spaces, CIELAB and CIELUV, are used in colour imaging, with CIELAB being the more commonly used by far. Both spaces are non-linear transformations of the CIEXYZ system and are approximately uniform with respect to human perception. The latter feature is an asset, as their coordinates describe colour in a perceptual manner – provided the white point is known.

Finally, CIE colour appearance spaces, such as CIECAM97s and CIECAM02, specify colour using appearance coordinates (see Chapter 5). Colour image values for lightness, chroma and hue, for example (J, C and h respectively), are directly related to the visual appearance of the image. They are derived from colorimetric spaces but take into account additional parameters relating to the viewing environment, such as surround and background illumination.

CIE colour spaces are not used natively by any digital imaging device; rather, they serve as intermediate spaces connecting digital colour spaces that are device dependent (see below). They are also used for the definition of image coordinates in a device-independent, visually meaningful manner, and in colour management workflows (see Chapter 26, ICC colour management architecture) as device connection spaces (DCSs).

The second classification is concerned with whether the colour space is related to CIE specifications. According to this classification, digital colour spaces can be grouped as follows.

Colorimetric colour spaces

There is a specified relationship between colour space coordinates of these spaces and respective CIE colorimetric values and thus they are considered device independent. A known transformation allows conversion of device colour coordinates to and from a colorimetric CIE space. Standard CIE colorimetric colour spaces, such as CIEXYZ or CIELAB, fall into this category. Colour spaces other than the CIE spaces can also be classified as colorimetric, such as additive RGB or luminance–chrominance colour spaces, provided that their relationship with CIE colorimetry is specified. The information needed to define a colorimetric space is:

•  CIE colour spaces:
   –  name of colour space (e.g. CIEXYZ);
   –  white point (e.g. D65).

•  RGB colour spaces:
   –  chromaticities of the RGB primaries;
   –  white point;
   –  transfer functions (e.g. gamma function – see Chapter 21).

•  Luminance–chroma colour spaces:
   –  relevant information about the additive colour space from which they are derived;
   –  transformation matrix that relates YC1C2 to RGB coordinates.

Colour appearance spaces

As mentioned earlier, CIE colour appearance spaces specify colour using appearance coordinates, again in a device-independent manner. The definition of a colour appearance space includes the specification of the particular colour appearance model upon which the colour space is based, together with the colour coordinates of the image data.

Device-dependent colour spaces

Most imaging devices produce colour in a device-dependent manner, since their output is related to the individual primaries of the device (see Chapters 14–16), as well as other device-specific parameters. Device-dependent colour spaces are therefore linked to the characteristics of a real, or an idealized, imaging device and have no defined relationship with CIE colorimetry. This means that the transformation between space colour coordinates and a CIE space is not specified. Some device-dependent colour spaces can be characterized, i.e. measurements may be used to define (or characterize) their relationship to CIE colorimetry. In such cases, they can be considered as colorimetric colour spaces.

There are two main classes of device-dependent colour spaces. Input (or sensor) device-dependent colour spaces are device-specific RGB colour spaces attached to the capturing device and capturing conditions. They can be specified by:


•  spectral sensitivity of the camera/scanner (see later);

•  transfer function;

•  scene/original illumination.

Output device-dependent colour spaces are linked to a particular output device, a display or a printer. Display device-dependent RGB spaces are specified by:

•  chromaticity of the RGB primaries;

•  white point;

•  transfer functions (e.g. gamma function – see Chapter 21).

Printer device-dependent colour spaces, such as a printer CMY(K), are specified by the relationship between the input encoded values and the corresponding output colour values.

IMAGE STATE

This section explains the image state in the context of particular digital imaging workflows. The image state indicates the colour image encoding (in a particular encoded colour space) within the workflow, and provides information about the rendering state of the image data. Figure 23.2 illustrates a generic diagram showing the relationship between different colour encodings.

As indicated in Figure 23.2, the image data can be represented in sensor, scene-referred, original-referred and output-referred colour encodings. The latter two states are also referred to as image-referred encodings.

Sensor colour encoding

When a scene (or an image) is captured by a digital camera (or scanner), the first image state is a sensor state, which is device dependent. That means that it is encoded to RGB device-specific coordinates which, as mentioned above, depend on the sensor characteristics (spectral sensitivities, transfer characteristics) and illumination. When images are archived, or progress through the imaging workflow in sensor colour encoding, the image data has no connection with the original scene colour representation, unless the capturing system is characterized so that sensor characteristics and illumination are known. There is no standardized RGB sensor encoding colour space, and it is unlikely that there will be one, since different manufacturers employ their proprietary filters and thus the sensor sensitivities differ. Also, with digital cameras, the illumination is scene dependent.

Scene-referred colour encoding

The image data originating from a scene capture can pass from a sensor colour encoding to a scene-referred colour encoding by employing a transformation that takes into account the sensor characterization, white balancing and exposure adjustments, i.e. it is device and/or image specific. Scene-referred colour encodings are representations of the estimated colorimetric coordinates of the objects in the original scene. The coordinates may be represented in many different ways, for example as encoded scene colour values in a CIE space or in terms of the response of an idealized capturing device having a specified relationship with a CIE space. An example of the latter is the standard RIMM RGB (see later on). Note that the image colorimetry of the scene-referred image data may contain some inaccuracies due to dynamic range limitations of the capturing device, noise, quantization and other sources of error, and also due to modifications employed deliberately to alter the scene colorimetry (for example, simulating different scene lighting).


Figure 23.2   Image state flowchart showing relationships between various types of colour encodings.

Scene-referred image data is not readily rendered by an output device and this is why the scene-referred colour space encodings are referred to as unrendered. The advantage of storing image data in such encodings is that data can potentially be rendered for any output device in a meaningful way, especially when it is encoded in high bit depths. Such encodings are used for archiving and transformation purposes.

Output-referred colour encoding

A colour rendering transformation, including tone mapping and gamut mapping based upon a rendering intent (see Chapter 26), is required to transform scene-referred image data to an output-referred encoding. Output-referred colour encodings are linked to a particular real, or virtual, output device and viewing conditions. Output-referred image data may be represented by a colour encoding derived from CIE colorimetry and will produce the intended colour appearance when viewed in the reference viewing environment. Examples of ways that output-referred image data may be represented include encoding colour values using a CIE colorimetric space (in which case they are unrendered) or a standard rendered colour space encoding derived from CIE colorimetry (e.g. sRGB or ROMM RGB – see later), or by characterized device-dependent control signals for a particular soft-copy (RGB) or hard-copy output device (CMYK).

It is important to note that output-referred image data can be obtained directly by a transformation from sensor data, without passing via a scene-referred colour encoding, as illustrated in Figure 23.2. In such cases images are intended for a specific output device and viewing conditions. For example, sRGB standard output-referred image data is frequently stored by digital cameras to be readily displayed in the reference sRGB conditions. In the case of printing, the printing systems will generally perform a colour re-rendering transformation to transform the sRGB colour values to those appropriate for the particular output device and assumed viewing conditions.

Original-referred colour encoding

Images encoded to original-referred colour encoding have coordinates that are representative of colour coordinates of a two-dimensional hard-copy image (photographic print, slide or artwork). Thus, the source is not a scene but an original image. The characteristics of the original-referred encoded image data are directly related to the characteristics of the original image colorimetry, in terms of a CIE colour space such as CIEXYZ or CIELAB, an idealized measurement device such as a Status A densitometer (see Chapter 8), or in terms of device-dependent control signals for a particular capturing device (camera or scanner).

Since the encoded image represents a two-dimensional hard-copy or soft-copy image, the resulting image data should be treated as original-referred image data rather than scene-referred image data. In this case, it is usually unnecessary to apply a colour-rendering transformation to the image data for purposes of determining output-referred image data since the original image has already been colour rendered. However, it may be desirable to apply a colour re-rendering transform to account for the differences between the media/viewing condition characteristics of the original image and the final output-referred image. As with the scene-referred colour encoding, images stored in original-referred colour encodings can be accessed for later use without needing to commit to a specific output.

STANDARD COLOUR SPACES AND COLOUR SPACE ENCODINGS

There are a number of colour space encodings which have been standardized by international organizations to facilitate the communication of images through imaging chains. Some of them are de facto rather than official standards, with their specification being available in the public domain. Standard colour space encodings have a specified relationship with CIE colorimetry (they can be transformed to/from CIE coordinates) and are based on real or idealized (hypothetical) input or output devices. The most widely known are introduced in the following sections. Figure 23.3 compares the u′, v′ chromaticity coordinates of their primaries, on the (nearly) uniform CIE u′, v′ chromaticity diagram.


Figure 23.3   u′, v′ colour space representation of the family of sRGB colour spaces, the RIMM/ROMM RGB and the Adobe RGB 1998.

sRGB (standard RGB) and sRGB-related colour space encodings

The standard RGB colour space encoding (sRGB) is an IEC standard (IEC 61966-2-1:1999). It was originally designed by HP and Microsoft as the default colour space encoding for the Internet during the mid-1990s. Since then, it has received wide adoption in the consumer imaging industry. sRGB is an output-referred colour space encoding, based on typical CRT display primaries (identical to ITU-R BT.709-3, used in High Definition (HD) TV monitors) and transfer function. The standard specifies a black digital code of 0 and a white of 255 for 24-bit (8 bits/channel) encoding. The strength of sRGB is its simplicity, the fact that it is based on a real device, and that it is inexpensive to implement and computationally efficient. It is available in most consumer digital cameras and also implemented in the majority of LCDs, which allow an sRGB setting to simulate a typical CRT output.

Table 23.1 presents the CIE x, y chromaticities of the sRGB reference primaries and white point (D65), and Table 23.2 the sRGB reference display and reference viewing conditions. It is important to note that images encoded using sRGB colour space encoding maintain their colour appearance only under the reference display and viewing conditions.

The reference display transfer function is described by the equation below, where $V'_{RGB}$ is the normalized digital count and $V_{RGB}$ is the normalized output luminance (see also Chapter 21, Figure 21.11):

$$V_{RGB} = (V'_{RGB})^{2.2} \qquad (23.2)$$

The encoding transformation between 8-bit sRGB values and CIEXYZ involves the following steps (see characterization of displays later in this chapter for further explanations):

Table 23.1   CIE x, y chromaticities of the sRGB primaries (ITU-R BT.709-3) and white point (D65)

     Red     Green   Blue    White (D65)
x    0.6400  0.3000  0.1500  0.3127
y    0.3300  0.6000  0.0600  0.3290

Table 23.2   sRGB reference display and viewing conditions

sRGB reference display
Display luminance level: 80 cd m–2
Display white point: D65 (x = 0.3127, y = 0.3290)
Display model offset (R, G, B): 0.0
Display input/output characteristic (gamma; see Eqn 23.2): 2.2

Reference viewing conditions
Reference background (part of the display surrounding the image): 20% of the display luminance level (16 cd m–2), D65
Reference surround (area surrounding the display): 20% of the reference ambient level (4.1 cd m–2), D50
Reference proximal field: 20% of the display luminance level (16 cd m–2), D65
Reference ambient illuminance level: 64 lux
Reference ambient white point: D50 (x = 0.3457, y = 0.3585)
Reference veiling glare: 0.2 cd m–2
Reference observer: CIE 2° standard colorimetric observer

1.   Scaling of the sRGB pixel values to colour space values ranging from 0.0 to 1.0.

2.   Linearization of the sRGB scaled values using the reference display transfer function.

3.   Linear transformation from linear sRGB scaled values to CIE XYZ scaled values (ranging from 0.0 to 1.0) using the 3 × 3 sRGB matrix.

The inverse transformation is obtained by reversing steps 1–3, using the inverse sRGB matrix (see later) for the linear transformation from CIE XYZ to linear sRGB values.
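A minimal Python sketch of these steps is given below. It uses the published linear sRGB-to-XYZ matrix for D65 and the piecewise IEC decoding curve, which closely approximates the gamma-2.2 reference display characteristic; the function names are illustrative:

```python
import numpy as np

# Linear sRGB -> CIE XYZ matrix for the D65 white point (IEC 61966-2-1).
M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])

def srgb8_to_xyz(rgb8):
    # Step 1: scale 8-bit code values to colour space values in 0.0-1.0.
    v = np.asarray(rgb8) / 255.0
    # Step 2: linearize using the sRGB transfer function (piecewise IEC
    # curve, closely approximating the gamma-2.2 reference display).
    lin = np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)
    # Step 3: linear transformation to scaled CIE XYZ via the sRGB matrix.
    return M_SRGB @ lin

def xyz_to_srgb8(xyz):
    # Inverse: reverse the three steps using the inverse sRGB matrix.
    lin = np.clip(np.linalg.inv(M_SRGB) @ np.asarray(xyz), 0.0, 1.0)
    v = np.where(lin <= 0.0031308, 12.92 * lin,
                 1.055 * lin ** (1 / 2.4) - 0.055)
    return np.round(v * 255).astype(int)

print(srgb8_to_xyz([255, 255, 255]))       # ~[0.9505, 1.0, 1.089] (D65)
print(xyz_to_srgb8([0.9505, 1.0, 1.089]))  # ~[255, 255, 255]
```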

While sRGB is suitable for the needs of display and Internet colour imaging, the colour gamut achievable with the sRGB primaries is rather narrow compared to the colour range of many non-CRT applications, such as digital printing and photofinishing. The native transfer characteristics of LCDs do not obey the power law and are normally incompatible with sRGB; also, some recent LCDs accept a 12-bit encoded input. Further, the colour ranges produced by the sensors incorporated in modern digital cameras and scanners, i.e. sensor colour gamuts, usually exceed the sRGB gamut; thus, when sensor encoded data are colour rendered to sRGB, the original gamut is reduced irreversibly. Figure 23.4 illustrates an example of a number of ‘lost’ colours of a typical colour test chart when sensor encoded values originating from a commercial slide scanner are transformed to sRGB.

Gamut limitations, as well as the restrictive 8-bit sRGB encoding, have led to various extensions of the sRGB encoding standard. Here is a brief summary of them:

•   e-sRGB (an I3A standard, PIMA 7667:2002) is a family of output-referred colour encodings (e-sRGB10, e-sRGB12 and e-sRGB16, for 10, 12 and 16 bits per channel respectively), based on a virtual additive colour device with an extended RGB gamut. The sRGB set of primaries, transfer function and white point are used, but the standard allows for a larger colour space value range, extending from –0.53 to 1.68 (instead of 0.0 to 1.0), thus providing a gamut that is larger than the sRGB gamut, along with minimum 10 bits per channel quantization. The viewing conditions match those of sRGB. While both sRGB and e-sRGB are output referred, e-sRGB provides additional flexibility for high-quality printing compared to sRGB.

•   bg-sRGB (IEC 61966-2-1, Amendment 1) is an extended-gamut encoding similar to e-sRGB, standardized by the IEC.

•   scRGB (IEC 61966-2-2) is another extended gamut colour encoding based on the sRGB primaries. It is a scene-referred colour encoding, with linear transfer characteristics. Since it is scene referred, the white point luminance is based on the absolute white point luminance of the scene, i.e. it is scene dependent. The white point chromaticities correspond to D65. The digital code values are based on 16 bits per channel encoding. A non-linear version, the scRGB-nl, is also described in the same standard, with a power transfer function based on that of sRGB.


Figure 23.4   The Kodak Q60 colour test chart (top row) and colours in the red, green, blue channels and the combined RGB image (two bottom rows) that are lost during transformation from scanner sensor RGB to sRGB. Lost colours are indicated with white pixels on black on the individual channel images and with colour pixels in the combined RGB image.

sYCC and sYCC-related

The standard YCC encoding, sYCC (IEC 61966-2-1, Amendment 1), is the standardized encoding of the YCbCr colour space used in JPEG compression. It is based on the sRGB primaries and is also an output-referred encoding, but with a hypothetical extended (larger than the sRGB) colour gamut. sYCC code values are obtained from sRGB encoded images via a linear luma–chroma (3 × 3 matrix – see later) transformation. In JPEG compression the sRGB image data is converted to sYCC values prior to the actual discrete cosine transform (DCT; see Chapter 29). Saturated colours that lie outside the sRGB gamut can be stored in a standard JPEG file by directly encoding the image data as YCC, rather than first clipping/mapping the image to the sRGB gamut. Imaging applications that retain the extended gamut image data can enable output devices, such as inkjet printers, to make use of them.

scYCC-nl (IEC 61966-2-2) is based on the non-linear version of scRGB (scRGB-nl) and is connected to it via a linear transformation (the same used to transform sRGB to sYCC). However, as with scRGB-nl, this is a scene-referred colour encoding with the absolute white point luminance being scene dependent. It is quantized to 12 bits per channel; thus the 16-bit scRGB-nl RGB data is down-sampled.

ROMM RGB and RIMM RGB

ROMM RGB (ISO 22028-2:2006) is an output-referred colour encoding. It achieves an extended gamut by using theoretical rather than physical primaries, based on a hypothetical (print) output. Its transfer function is non-linear but differs from that of the sRGB encoding. There are three quantization options, at 8, 12 or 16 bits per channel. The reference medium white and black point chromaticities are those of D50 (suitable for print viewing and graphic arts) and the reference medium white and black point luminances are equal to 142 and 0.5 cd m–2 respectively. Linear transformation from ROMM RGB to CIE XYZ tristimulus values of the image is achieved via the ROMM RGB matrix.

RIMM RGB (ISO 22028-3:2006) is a scene-referred colour encoding, having the same primaries as ROMM RGB and also achieving an extended colour gamut. The colour space range is from 0.0 to 2.0 (instead of 0.0 to 1.0 for ROMM RGB) and has a non-linear transfer function based on that of sRGB. The colour space white point is D50, with a luminance of 15,000 cd m–2 to accommodate exterior scenes.

It is essential to note that colour spaces with extended primaries have the advantage of covering the colours reproduced by most devices, but they can also compromise colour accuracy, as part of the encoding range is reserved for colours that rarely occur. Additionally, they need to be quantized to large bit depths to avoid quantization effects such as contouring and posterization.

Adobe RGB 1998

The Adobe RGB colour space encoding was developed by Adobe Systems in 1998. It is a de facto standard, used widely by the photographic industry mainly because it is the native colour space of the Adobe Photoshop® image editing software. It is an output-referred 8 bits per channel colour encoding with primaries similar to those used by NTSC (the TV system formerly used in the USA and other countries) and non-linear transfer characteristics based on a CRT output with a gamma of 2.2. The RGB primaries are not dissimilar to those of sRGB (see Figure 23.3), although they encompass a slightly larger colour gamut, primarily in the cyan–green region. This dissimilarity is often accentuated by representing the RGB chromaticities on the non-uniform CIE x, y chromaticity diagram. The reference display white point corresponds to the illuminant D65 and the white point luminance is 160 cd m–2. The reference viewing conditions are similar (but not exactly the same) to those of sRGB. The specification of Adobe RGB 1998, including the transformation from/to CIE XYZ coordinates, is available from Adobe Systems.

DEVICE COLOUR CHARACTERIZATION

Calibration and colour characterization of imaging devices are essential processes in ensuring consistent colour reproduction in digital imaging chains. Calibration is the process of setting and maintaining the imaging device at fixed settings which correspond to known colour responses – for example, grey balancing scanner RGB responses, i.e. ensuring that equal RGB signals correspond to neutral scanner responses (R = G = B = f(Y)), or setting the display white point to the illuminant D65, i.e. ensuring that the colour temperature of the display white point is approximately equal to 6504 K.

Colour characterization is concerned with defining the relationship between device-dependent colour coordinates and the corresponding device-independent, CIE colorimetric coordinates. Device characterization helps to transport colours between imaging devices in a meaningful manner. It is used to predict the colour output from specific input signals, or to predict the required input signal for obtaining a specific colour output. It is important to note that, once a device is calibrated to specific settings, the characterization is valid only for these settings.

The characterization model can be defined in two directions: the forward and the inverse models. For input devices, the forward model is a mapping from the device-independent coordinates of the original scene or medium to the corresponding device-dependent output signals. For output devices, it is a mapping from device-dependent input signals to the rendered device-independent colour that is produced by these signals. In both cases the inverse model is used to determine the required input in order to obtain a desired response.

A wide range of different models have been developed for the purpose of colour characterization, but none gives optimum results for all types of devices. Most models have been developed by first measuring a certain number of colours on the device/medium to be characterized and then defining a mathematical relationship that enables the transformation of any colour from the device space to a CIE colour space, or vice versa. These transformations are referred to as colour transformations.

We can broadly classify the types of models developed for device characterization as follows:

1.   Physical models, which are based on various physical properties of the device, such as spectral sensitivity, absorbance, reflectance of the device or medium.

2.   Numerical models, in which a set of coefficients is derived by regression, using a number of colour samples represented both in device-dependent and in device-independent coordinates.

3.   Look-up tables with interpolations, connecting device-dependent colour coordinates to device-independent coordinates for a large number of colour samples. The values are interpolated to create missing values for all intermediate colour coordinates.

4.   Neural networks, where the device-dependent data for a number of colour samples is the input to the network and the device-independent data is the neural network’s approximation of the device responses, or vice versa. Characterization methods based on neural networks will not be described in this chapter. Nevertheless, they have become more and more popular in recent years due to the computational speed of contemporary computer systems.

Colour transformations may involve elements from more than one of the above types. For example, entries on a colour look-up table used as the input to a colour display can be generated directly from measured data, or by employing a function derived from a numerical model.

Physical models

Physical models are often based on the spectral characteristics of the device. For example, in input devices the spectral sensitivities of the camera’s or scanner’s sensor can be used to predict the response signals of the device. Spectral sensitivity functions of digital cameras can be measured by imaging monochromatic light at a series of single wavelengths, or narrow wavebands (using a monochromator – Chapter 5), and recording the linearized (i.e. gamma-corrected – Chapter 21) responses of each channel. These functions are usually normalized to their maximum values. Alternatively, they can be obtained from the manufacturer of the device. Manufacturer data are, however, generic and do not account for variations from device to device, or for temporal changes in the devices’ characteristics.

Provided that the spectral sensitivity (or responsivity), Si(λ), of the ith channel (usually three, i.e. red, green and blue) is known, the channel’s response signal, Di, is given by:

$$D_i = k \int_\lambda P(\lambda)\, R(\lambda)\, S_i(\lambda)\, d\lambda + o \qquad (23.3)$$

where P(λ) is the spectral radiance of the illuminant used during capture, R(λ) is the spectral reflectance or spectral transmittance of the input colour stimulus, and k and o are scaling and offset parameters respectively. The integration is carried out over the range of wavelengths, λ, to which the device is sensitive. A numerical sketch of Eqn 23.3 is given after the list below. In digital cameras, the spectral responsivity S(λ) itself depends on various device-specific parameters, mainly on:

•  the spectral sensitivity of the CCD or CMOS detector;

•  the colour filters used for colour separation;

•  the spectral transmittance of the infrared blocking filter that might be included in the camera;

•  the spectral transmittance of the micro-optics of the sensor;

•  noise.
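Below is the numerical sketch of Eqn 23.3 promised above. The flat illuminant, neutral reflector and Gaussian channel sensitivities are invented purely for illustration:

```python
import numpy as np

wl = np.arange(380, 781, 5, dtype=float)      # wavelength samples, nm
d_lambda = 5.0                                # sampling interval, nm

# Invented data: flat illuminant P, neutral 50% reflector R and Gaussian
# channel sensitivities S_i peaking in the red, green and blue bands.
P = np.ones_like(wl)
R = np.full_like(wl, 0.5)
def gauss(peak, width=40.0):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)
S = np.stack([gauss(600), gauss(540), gauss(450)])

k, o = 1.0, 0.0                               # scaling and offset (Eqn 23.3)

# D_i = k * integral of P * R * S_i d(lambda) + o, as a sampled sum.
D = k * (P * R * S).sum(axis=1) * d_lambda + o
print(D)                                      # one response per channel
```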

In Chapter 5 we showed that the spectral distributions of the sample and the illuminant are related to colorimetric CIE XYZ values via an equation similar to Eqn 23.3 (see Eqn 5.8). The main difference is that the colorimetric value in Eqn 5.8 is obtained by using, instead of the channel sensitivity Si(λ), the corresponding colour matching function. Thus, it is important to note that, if the camera sensitivity functions are equal to (or a linear transformation of) the CIE colour matching functions describing the response of the standard colorimetric observer, the camera response will be colorimetric, and thus device independent. Devices that fulfil this so-called Luther–Ives condition are referred to as colorimetric devices. In such cases, and in the absence of noise, a unique matrix, M, can be derived that relates the device-dependent signals D1, D2, D3 (i.e. R, G, B) to the device-independent CIE tristimulus values X, Y, Z:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = M \begin{bmatrix} D_1 \\ D_2 \\ D_3 \end{bmatrix}, \qquad M = \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix} \qquad (23.4)$$

The coefficients $a_{1,1} \ldots a_{3,3}$ of the matrix M are constant and can be used to transform all possible colour coordinates from one colour space to the other. Matrix M is obtained by:

$$M = A\, S^{T} (S\, S^{T})^{-1} \qquad (23.5)$$

where S is a 3 × n matrix whose three rows contain the sampled spectral sensitivities of the device (n samples, usually taken at 5–10 nm intervals) and A is a 3 × n matrix containing the sampled CIE 1931 colour matching functions (plotted in Figure 5.15). T denotes the transpose of the matrix and –1 the inverse of the matrix.

$$S = \begin{bmatrix} S_1(\lambda_1) & \cdots & S_1(\lambda_n) \\ S_2(\lambda_1) & \cdots & S_2(\lambda_n) \\ S_3(\lambda_1) & \cdots & S_3(\lambda_n) \end{bmatrix}, \qquad A = \begin{bmatrix} \bar{x}(\lambda_1) & \cdots & \bar{x}(\lambda_n) \\ \bar{y}(\lambda_1) & \cdots & \bar{y}(\lambda_n) \\ \bar{z}(\lambda_1) & \cdots & \bar{z}(\lambda_n) \end{bmatrix} \qquad (23.6)$$

λ1 … λn denote the sample points of the spectral data. Note that, in case the camera is characterized for specific lighting conditions, or the imaging device is a scanner with a fixed illuminant, S in Eqns 23.5 and 23.6 should be substituted by the product of the spectral sensitivities and the spectral power distribution of the reference illuminant (i.e. S(λ) × P(λ)).
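A minimal numpy sketch of Eqn 23.5 follows. The colour matching functions are random stand-ins and the sensor is constructed so that the Luther–Ives condition holds exactly, which lets the result be verified:

```python
import numpy as np

def characterization_matrix(S, A):
    # Eqn 23.5: M = A S^T (S S^T)^-1, the least-squares matrix mapping
    # device responses to CIE tristimulus values.
    # S: 3 x n sampled device spectral sensitivities (rows = channels).
    # A: 3 x n sampled CIE colour matching functions.
    return A @ S.T @ np.linalg.inv(S @ S.T)

n = 81                           # e.g. 380-780 nm at 5 nm intervals
A = np.random.rand(3, n)         # stand-in colour matching functions
T = np.array([[0.9, 0.2, 0.1],
              [0.1, 1.0, 0.0],
              [0.0, 0.1, 0.8]])
S = np.linalg.inv(T) @ A         # a 'colorimetric' sensor: A = T S exactly
M = characterization_matrix(S, A)
print(np.allclose(M, T))         # True - the linear transform is recovered
```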


Figure 23.5   Relative spectral sensitivities of two commercial digital SLR cameras.
Adapted from Fairchild et al. (2008); reproduced with permission of M. D. Fairchild

Although the design of colorimetric input devices has been explored, in practice it is very difficult to achieve since it requires large dynamic ranges and high signal-to-noise ratios, as well as very narrow-band filters. A potential problem with the use of narrow-band filters is metamerism, where stimuli that appear identical to the eye may result in different device responses, or vice versa. The spectral sensitivities of input devices are unlikely to be linear combinations of the CIE 1931 colour matching functions and therefore a different matrix M1, M2, M3, …, Mn links each individual set of device coordinates to the corresponding colorimetric values. A single matrix, $\hat{M}$, can be estimated with coefficients obtained by ‘best fitting’ the camera sensitivities to the colour matching functions (see ‘Regression’ section below). This is only an approximation, but for many applications it may be sufficient. Typical spectral sensitivities of commercial digital cameras are shown in Figure 23.5.

Physical characterization models of additive emissive displays use the spectral radiance output of each colour channel. The spectral radiance emitted by a display, Srgb(λ), is a function of the linearized input digital counts:

$$S_{rgb}(\lambda) = R'\, S_{r\_max}(\lambda) + G'\, S_{g\_max}(\lambda) + B'\, S_{b\_max}(\lambda) \qquad (23.7)$$

where Sr_max(λ), Sg_max(λ) and Sb_max(λ) are the spectral radiances emitted from the display primaries (i.e. the red, green and blue channels respectively) and R′, G′, B′ are the digital input counts for the red, green and blue channels respectively, linearized with respect to luminance (see Chapter 21). Typical spectral radiances of CRT displays are illustrated in Figure 15.2. Spectral radiances can be measured using a spectroradiometer (see Chapter 5).

When the display characterization model is based on this relationship, there are three assumptions that are made:

•   Channel independence – each of the display channels operates independently of the other two and thus there is a separate contribution of spectral radiance for each channel, i.e. no cross-talk.

•   Chromaticity constancy – the chromaticity of the spectral radiance function is independent of the intensity of the input signal.

•   Spatial independence – the output from one spatial location of the display does not affect another spatial location.

Although monitors adhere to the above assumptions to varying degrees, models based on the spectral properties of the displays are widely used for characterization.

In a properly set-up display, the output spectral radiance for each linearized R′, G′, B′ digital input count can be defined from the maximum spectral radiance of each channel by:

$$\begin{bmatrix} S_{rgb}(\lambda_1) \\ \vdots \\ S_{rgb}(\lambda_n) \end{bmatrix} = \begin{bmatrix} S_{r\_max}(\lambda_1) & S_{g\_max}(\lambda_1) & S_{b\_max}(\lambda_1) \\ \vdots & \vdots & \vdots \\ S_{r\_max}(\lambda_n) & S_{g\_max}(\lambda_n) & S_{b\_max}(\lambda_n) \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \qquad (23.8)$$

λ1 … λn denote the sample points of the spectral radiances.

Since the display is an additive device, the corresponding colorimetric values can be derived by employing a 3 × 3 transformation matrix, M, with coefficients equal to the CIE tristimulus values of the display primaries:

Note that the mathematical symbol ‘^’ over a matrix, as in $\hat{M}$, denotes an estimated matrix M.

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_{r\_max} & X_{g\_max} & X_{b\_max} \\ Y_{r\_max} & Y_{g\_max} & Y_{b\_max} \\ Z_{r\_max} & Z_{g\_max} & Z_{b\_max} \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \qquad (23.9)$$

The inverse transformation, used to predict the required normalized, linear-in-luminance R′, G′, B′ values for any given set of tristimulus values, is obtained by:

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = M^{-1} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (23.10)$$

where M–1 is the inverse of matrix M described in Eqn 23.9. An example of display characterization using a physical model is described later in the chapter.
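The sketch below illustrates the forward (Eqn 23.9) and inverse (Eqn 23.10) display models, assuming hypothetical measured tristimulus values for the three primaries at maximum output and perfect channel additivity:

```python
import numpy as np

# Hypothetical measured XYZ tristimulus values of the red, green and blue
# primaries at maximum output; they form the columns of M (Eqn 23.9).
M = np.array([[41.2, 35.8, 18.1],    # X_r_max, X_g_max, X_b_max
              [21.3, 71.5,  7.2],    # Y_r_max, Y_g_max, Y_b_max
              [ 1.9, 11.9, 95.1]])   # Z_r_max, Z_g_max, Z_b_max

def display_to_xyz(rgb_lin):
    # Forward model: linearized R'G'B' values in [0, 1] -> CIE XYZ.
    return M @ np.asarray(rgb_lin)

def xyz_to_display(xyz):
    # Inverse model (Eqn 23.10): CIE XYZ -> required linearized R'G'B'.
    return np.linalg.inv(M) @ np.asarray(xyz)

white = display_to_xyz([1.0, 1.0, 1.0])   # assumes channel additivity
print(white)                              # XYZ of the display white point
print(xyz_to_display(white))              # ~[1.0, 1.0, 1.0]
```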

There are a number of physical models developed to estimate the colorimetry of printers. Most of them are rather complicated and are based on modelling the interaction between light, colorants and printing medium at microscopic or macroscopic levels. Some well-known physical models used for printer characterization are:

•   Beer–Bouguer model – used to predict the transmission of light through transparent colorants and media; related to continuous-tone printing on reflection media. One of the drawbacks of this model is that it does not account for light scattering within the layers of the colorants.

•   Kubelka–Munk model – not dissimilar to the Beer–Bouguer model, it is employed to predict the reflectance of translucent or opaque colorants and takes into account light scattering within the printing medium.

•   Neugebauer model – used to model half-tone printing where each primary colorant in the half-tone process is a spatial array of dots (see Chapter 16).

Green and MacDonald (2002), Sharma (2003) and Kang (2006) – see Bibliography – provide information on the above physical models and examples of model-based characterization of printers.

Numerical models

The aim of the numerical models for device characterization is to define a matrix M used for colour transformation between device-dependent and device-independent colour values by measuring a number of colour samples in the device and recording their coordinates in both device-dependent and CIE spaces. For digital capturing devices, a number of colour patches with measured tristimulus values are employed as the input to the device and the corresponding device RGB values are recorded. For display and hard-copy devices, sample colour patches are created by inputting a range of device values and measuring the tristimulus coordinates of the rendered colour, using a colorimeter, a spectrophotometer or other colour instruments (see Chapter 5).

Once the matrix is derived, the colour transformation is achieved by using Eqn 23.4. As mentioned above, if the spectral characteristics of the device and the CIE colour matching functions are linearly related, the coefficients of the non-singular matrix M are constant and can be used to transform accurately all possible colour coordinates from one colour space to the other. In such a case, the coefficients can be derived by measuring only three known colour samples and solving the resulting simultaneous equations. By matrix algebra, the solution to the sets of linear equations is given by (see ‘Regression’ section below):

$$a_X = D^{-1} X, \qquad a_Y = D^{-1} Y, \qquad a_Z = D^{-1} Z \qquad (23.11)$$

where D is a 3 × 3 matrix with the device-dependent code values (such as R, G and B) for the three samples:

$$D = \begin{bmatrix} R_1 & G_1 & B_1 \\ R_2 & G_2 & B_2 \\ R_3 & G_3 & B_3 \end{bmatrix}$$

X is a vector with the three corresponding X tristimulus values for the three samples:

$$X = \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix}$$

Y is a vector with the three corresponding Y tristimulus values for the three samples:

$$Y = \begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \end{bmatrix}$$

Z is a vector with the three corresponding Z tristimulus values for the three samples:

$$Z = \begin{bmatrix} Z_1 \\ Z_2 \\ Z_3 \end{bmatrix}$$

Then, the transformation matrix M is given by:

$$M = \begin{bmatrix} a_X^{T} \\ a_Y^{T} \\ a_Z^{T} \end{bmatrix} \qquad (23.12)$$

The inverse of the transformation described by Eqn 23.4 is often required, to predict the code values which will render a colour having a set of known CIE XYZ tristimulus values. This is achieved by using the inverse of the matrix M:

$$\begin{bmatrix} D_1 \\ D_2 \\ D_3 \end{bmatrix} = M^{-1} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (23.13)$$
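A short numerical sketch of this exact three-sample solution (Eqns 23.11–23.13) follows; the device code values and tristimulus measurements are invented for illustration:

```python
import numpy as np

# Hypothetical device RGB code values for three measured samples (rows).
D = np.array([[200.0,  50.0,  40.0],
              [ 60.0, 180.0,  70.0],
              [ 50.0,  60.0, 190.0]])
# Corresponding (invented) CIE tristimulus values of the three samples.
X = np.array([30.1, 22.4, 19.8])
Y = np.array([17.5, 35.2, 11.9])
Z = np.array([ 3.2, 10.8, 77.4])

# Solve the simultaneous equations of Eqn 23.11; each solution vector
# becomes one row of the transformation matrix M (Eqn 23.12).
M = np.vstack([np.linalg.solve(D, t) for t in (X, Y, Z)])

print(M @ D[0])                    # forward (Eqn 23.4): ~[30.1, 17.5, 3.2]
print(np.linalg.inv(M) @ [30.1, 17.5, 3.2])   # inverse (Eqn 23.13): ~D[0]
```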

In most cases, the spectral characteristics of the device are not a linear combination of the CIE 1931 colour matching functions and therefore a matrix, $\hat{M}$, is estimated using a regression method. Regression is used to correlate the colour coordinates of the selected colour samples in both device and CIE colour spaces.

The steps to derive a numerical model include:

•   The selection of the source and the destination colour spaces used to describe the coordinates of the colour samples. For example, scanner RGB to CIE XYZ, or log RGB to log CIE XYZ.

•   The selection of the number and location (in terms of their lightness, hue and chroma) of training colour samples, i.e. the colour samples that will be used in the regression to estimate $\hat{M}$.

•   The choice of the regression equation.

•   The evaluation of the estimated results.

Regression

The most common method for estimating the coefficients of the transformation matrix $\hat{M}$ is the least squares method. The best fit in the least squares method is the estimated model for which the sum of squared residuals has its smallest value. A residual is the difference between the measured value and the value estimated by the model.

Regression is performed on a selected number of samples (training samples), n, with measured colour specifications both in the source (device) and destination (CIE) colour spaces. It is based on the assumption that the correlation between colour spaces can be approximated by a set of simultaneous equations. For a unique solution to the equations, the number of colour samples, n, must be higher than the number of terms, m, in the regression (n > m). Once the coefficients in the equations are derived (i.e. the coefficients forming matrix $\hat{M}$), one can use the simultaneous equations with the source variables (e.g. RGB) to compute the destination coordinates (e.g. CIE XYZ). The device data are the independent variables in the regression and the tristimulus values are the dependent variables.

Regression can be linear or of a higher order, such as polynomial, the latter being an application of linear regression with m variables, where m is actually greater than the number of independent variables, i.e. three R, G and B, in the case of RGB systems.

The general form of the linear regression with m variables is given by:

$$d = a_1 q_1 + a_2 q_2 + \cdots + a_m q_m \qquad (23.14)$$

And in vector notation:

$$d = A^{T} Q \qquad (23.15)$$

where d is the dependent variable. Vector Q has m elements indicating the number of polynomial terms: each of these is an independent variable (R, G, B), or a product of independent variables (RG, GB, RB, R², G², B², RGB, etc.). A is a vector with the corresponding coefficients. The number of coefficients is equal to the number of polynomial terms, m.

An example of applying polynomial regression to three independent variables corresponding to the device coordinates, R, G and B, with m = 4 polynomial terms, would be when q1 = R, q2 = G, q3 = B and q4 = RGB. The q values are therefore derived directly from the three independent variables, i.e. from the measured device-dependent coordinates. The output-dependent variable d represents the corresponding colour value in the destination space (CIE X, Y or Z tristimulus value).

For the X tristimulus value and using n measured samples, Eqn 23.14 would be represented by:

$$X_i = a_1 R_i + a_2 G_i + a_3 B_i + a_4 R_i G_i B_i, \qquad i = 1, \ldots, n \qquad (23.16)$$

In matrix form, Eqn 23.16 above becomes:

$$\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix} = \begin{bmatrix} R_1 & G_1 & B_1 & R_1 G_1 B_1 \\ R_2 & G_2 & B_2 & R_2 G_2 B_2 \\ \vdots & \vdots & \vdots & \vdots \\ R_n & G_n & B_n & R_n G_n B_n \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix} \qquad (23.17)$$

And in vector notation:

$$X = Q^{T} A_X \qquad (23.18)$$

where X is a vector with the X tristimulus values for n samples, Q is a 4 × n matrix (four terms for n number of samples) and AX is a vector of the four corresponding coefficients.

If the number of X tristimulus values of the colour samples in vector X is less than the number of unknowns in vector AX, there is no unique solution to the simultaneous equations. Thus, for this example the number of samples n has to be greater than or equal to 4.

In the case of the least squares fit, the coefficients in vector AX are those that minimize the error between the estimated and measured values of X. If the number of samples is greater than (or equal to) the number of coefficients, the product of Q with its transpose (i.e. QQT) can be inverted (it is non-singular), and the required coefficients can be obtained by:

$$A_X = (Q\, Q^{T})^{-1} Q\, X \qquad (23.19)$$

Equation 23.19 is repeated for all three tristimulus values of the samples, X, Y and Z. Finally, the transformation matrix $\hat{M}$ is obtained by:

$$\hat{M} = \begin{bmatrix} A_X^{T} \\ A_Y^{T} \\ A_Z^{T} \end{bmatrix} \qquad (23.20)$$

or:

$$\hat{M}^{T} = (Q\, Q^{T})^{-1} Q \begin{bmatrix} X & Y & Z \end{bmatrix} \qquad (23.21)$$

Note that Eqns 23.19 and 23.20 are analogous to Eqns 23.11 and 23.12. The transformation from device-dependent coordinates to CIE tristimulus values for the above example is achieved by:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \hat{M} \begin{bmatrix} R \\ G \\ B \\ RGB \end{bmatrix} \qquad (23.22)$$

Table 23.3 shows examples of equations used for colour space conversion. Generally, the accuracy of the polynomial approximation improves as the number of polynomial terms increases. Higher-order polynomials, however, may give poor results in practice (see ‘Scanner characterization’ section below).

When the colour conversion needs to work inversely (i.e. CIE XYZ to RGB), the matrix of coefficients for the inverse transformation has to be sought. This is achieved via Eqns 23.16–23.22, by exchanging the position of the input and output data in the regression. That means that the regression is applied to the CIE tristimulus values, which now become the independent variables, and the RGB data, which are in this case the dependent variables.

An example of polynomial regression for input device characterization is given later in the chapter.
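The sketch below implements this polynomial regression for the m = 4 example above (q1 = R, q2 = G, q3 = B, q4 = RGB); the training data are simulated stand-ins for real measurements:

```python
import numpy as np

def poly_terms(rgb):
    # Build the m = 4 terms q1 = R, q2 = G, q3 = B, q4 = RGB for n samples.
    r, g, b = rgb.T
    return np.stack([r, g, b, r * g * b])      # Q: 4 x n matrix

# Simulated training data: n = 50 samples in device RGB (0-1) and their
# 'measured' CIE XYZ values (a linear device is simulated for simplicity).
rng = np.random.default_rng(0)
rgb = rng.random((50, 3))
xyz = rgb @ np.array([[0.41, 0.36, 0.18],
                      [0.21, 0.72, 0.07],
                      [0.02, 0.12, 0.95]]).T

Q = poly_terms(rgb)
# Least-squares coefficients (Eqn 23.19), solved for X, Y and Z at once;
# the result is M-hat transposed (Eqns 23.20 and 23.21).
M_hat_T = np.linalg.inv(Q @ Q.T) @ Q @ xyz     # 4 x 3
est = Q.T @ M_hat_T                            # estimated XYZ (Eqn 23.22)
print(np.abs(est - xyz).max())                 # residual of the fit
```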

Look-up tables with interpolation

Colour space transformation can be achieved using multi-dimensional look-up tables (LUTs) that map device-dependent to device-independent colour coordinates, and vice versa. Generally, a relatively large number of colour samples are measured (training samples) in both device-dependent and device-independent spaces. The measured colours constitute a subset of the total number of colours available in the device to be characterized. For example, in an RGB device, the number of available colours at 8 bits per channel quantization is 2⁸ × 2⁸ × 2⁸ = 16,777,216. This is clearly too large a number of colours to be evaluated by direct measurement. Instead, interpolation between the nearest measured points is used to estimate all the colours that could be encountered in either colour space.

Various mathematical algorithms are used for interpolation. The purpose of such algorithms is to estimate the output colour coordinate for a corresponding known input coordinate, provided that two or more colour coordinates are known in both input and output colour spaces. The relative geometrical distances from the known points to the point to be determined are used as weights to estimate the value of the unknown coordinate. Thus, colour space coordinates are measured at regular intervals to create a lattice of points which are used to evaluate an interpolation function. The interpolation error is the distance between the coordinate calculated by the interpolation and the value of the interpolation function at that colour coordinate.

Since input as well as output colour spaces are usually three-dimensional (3D), interpolation methods are required that exploit various ways of subdividing a 3D space, such as the cube representing the input colour space. Examples of 3D interpolation methods used for device characterization include trilinear, prism, pyramid and tetrahedral interpolation. In trilinear interpolation the cube is not segmented, whereas in the latter three the cube is subdivided into two, three and four segments respectively.

Generally, 3D interpolation methods are an extension of linear (one-dimensional, 1D) and bilinear (2D) interpolation. In Figure 23.6, linear interpolation in one dimension is used to determine the y coordinate of a point with coordinates x, y from two points with known coordinates x1, y1 and x2, y2. The value of y, or f(x), is given by:

$$y = f(x) = y_1 + (x - x_1)\, \frac{y_2 - y_1}{x_2 - x_1} \qquad (23.23)$$

Table 23.3   Examples of equations used for colour space conversion (m is the number of polynomial terms)



Figure 23.6   Linear interpolation.

The interpolation error is given by the distance between the value of y and the interpolated value yi.

In bilinear interpolation performed in two dimensions, we have a function of two variables, f(x, y), and we thus need four known points. Further, a trilinear interpolation is achieved by applying the linear interpolation seven times. It repeats the bilinear interpolation (3 × linear) twice to determine two points on two opposite sides of a cube and then performs it once more to determine the geometrical location of the unknown point on the line connecting these two points (Eqn 23.24). Figure 23.7 illustrates the application of trilinear interpolation to determine point p with coordinates x, y, z, from the eight available points of the cube: p000, p001, p010, p011, p100, p101, p110 and p111. Refer to Kang (2006) for more details.

$$\begin{aligned} p = \; & p_{000}(1-\Delta x)(1-\Delta y)(1-\Delta z) + p_{100}\,\Delta x\,(1-\Delta y)(1-\Delta z) \\ & + p_{010}(1-\Delta x)\,\Delta y\,(1-\Delta z) + p_{001}(1-\Delta x)(1-\Delta y)\,\Delta z \\ & + p_{110}\,\Delta x\,\Delta y\,(1-\Delta z) + p_{101}\,\Delta x\,(1-\Delta y)\,\Delta z \\ & + p_{011}(1-\Delta x)\,\Delta y\,\Delta z + p_{111}\,\Delta x\,\Delta y\,\Delta z \end{aligned} \qquad (23.24)$$


Figure 23.7   Trilinear interpolation.

Adapted from Kang (2006)

where:

$$\Delta x = \frac{x - x_0}{x_1 - x_0}, \qquad \Delta y = \frac{y - y_0}{y_1 - y_0}$$

and

$$\Delta z = \frac{z - z_0}{z_1 - z_0}$$

The characterization of a device using LUTs with interpolation is carried out as follows. First, the lattice is created by subdividing the colour space. This is achieved by measuring the training colour samples at regular intervals in the input space and obtaining the corresponding output coordinates. The lattice essentially partitions the colour space into sub-volumes. When the colour transformation is performed, the input colour is located into a sub-volume and then the interpolation is executed to derive the output colour coordinate.
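The following sketch implements a regular 3D LUT look-up with trilinear interpolation, performing the seven linear interpolations described above; an identity lattice is used only to verify the routine:

```python
import numpy as np

def trilinear_lut(lut, rgb):
    # Look up an output colour in a regular 3D LUT with trilinear
    # interpolation. lut: (k, k, k, 3) lattice mapping device RGB in [0,1]
    # to output coordinates; rgb: one input colour in [0,1].
    k = lut.shape[0]
    pos = np.clip(np.asarray(rgb), 0.0, 1.0) * (k - 1)
    i0 = np.minimum(pos.astype(int), k - 2)      # lower lattice indices
    dx, dy, dz = pos - i0                        # fractional offsets
    x, y, z = i0
    # Seven linear interpolations: four along z, two along y, one along x.
    c00 = lut[x, y,     z] * (1 - dz) + lut[x, y,     z + 1] * dz
    c01 = lut[x, y + 1, z] * (1 - dz) + lut[x, y + 1, z + 1] * dz
    c10 = lut[x + 1, y,     z] * (1 - dz) + lut[x + 1, y,     z + 1] * dz
    c11 = lut[x + 1, y + 1, z] * (1 - dz) + lut[x + 1, y + 1, z + 1] * dz
    c0 = c00 * (1 - dy) + c01 * dy
    c1 = c10 * (1 - dy) + c11 * dy
    return c0 * (1 - dx) + c1 * dx

# Identity LUT on a 17-point lattice: interpolation returns the input.
g = np.linspace(0.0, 1.0, 17)
lut = np.stack(np.meshgrid(g, g, g, indexing='ij'), axis=-1)
print(trilinear_lut(lut, [0.2, 0.55, 0.83]))     # ~[0.2, 0.55, 0.83]
```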

In characterization techniques using LUTs with interpolation it is important that the input colour space is represented and subdivided in a perceptually uniform manner prior to input into the LUT, to avoid perceptually important interpolation errors. This can be achieved by transforming the input data using a non-linear function which approximates the eye's response (e.g. logarithmic, or raised to the exponent of 1/3), or by using gamma-corrected data directly. The number of samples measured during the characterization process may range from 120 to 500. A larger LUT will reduce the interpolation error at the transformation stage, but requires more measurements (more colours have to be measured to create the lattice) and more computer memory during its implementation.

LUTs that describe characterization models of devices are commonly used by ICC colour profiles for colour transformation (see Chapter 26), since applying colour transformations via LUTs is usually faster than running a model function for every pixel of the image. However, a LUT is fixed and cannot be modified at run time.

Evaluation of the characterization model

Colour differences, such as CIELAB ∆E*ab, as well as CIE94 and CIEDE2000 (see Chapter 5 and Appendix B), are employed to evaluate the error of the characterization model. The mean, median, maximum and 95th percentile colour differences are obtained, both for the training set of samples and for a testing set of colour samples. The latter have not been used to build the model but are used to test it, so they normally return a higher error than that returned by the training set of colours. The error measure selected to evaluate the model performance should be perceptually uniform, so consideration needs to be given to the colour difference formula employed to estimate it.

Although the mean colour difference is often the first reported value, it can be misleading, since the error is usually not normally distributed across the samples. The median colour difference is a more appropriate metric to represent the ‘average’ model performance. Histograms of the frequency of the colour differences versus the magnitude of the difference are plotted to illustrate the distribution of the error (see the example in Figure 23.8). Colorimetric errors are often reported as separate ∆L*, ∆C*ab and ∆H*ab differences, and plots such as ∆E*ab versus L*, C* and h*, or versus ∆L*, ∆C*ab and ∆H*ab, can reveal systematic tendencies in the distribution of the error. An example is illustrated in Figure 23.9. Based on this type of analysis, corrections to improve the characterization model are often implemented.
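A minimal sketch of such an evaluation, assuming the measured and model-estimated colours are available as NumPy arrays of CIELAB triplets and using the simple Euclidean ∆E*ab (CIE76) formula:

```python
import numpy as np

def delta_e_ab(lab_measured, lab_estimated):
    """Euclidean CIELAB colour difference, Delta E*ab (CIE76)."""
    diff = np.asarray(lab_measured, float) - np.asarray(lab_estimated, float)
    return np.sqrt(np.sum(diff ** 2, axis=-1))

def summarize_error(lab_measured, lab_estimated):
    """Summary statistics suggested in the text for a set of samples."""
    de = delta_e_ab(lab_measured, lab_estimated)
    return {"mean": de.mean(), "median": np.median(de),
            "max": de.max(), "95th percentile": np.percentile(de, 95)}
```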

The performance of the characterization model is also often evaluated by using statistical error measures. For example, the performance of a colour transformation matrix derived by polynomial regression between device-dependent and CIE XYZ tristimulus values can be tested by calculating the correlation coefficient r of the regression by:

image

Figure 23.8   Distribution of ∆E*ab in scanner characterization.

image

Figure 23.9   CIE L* versus ∆E*ab for all training colours. Large colorimetric errors are associated with low lightness colours.

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{n\,\sigma_x\,\sigma_y}$$

where xi and yi are the individual estimated and original values respectively for each tristimulus value, x̄ and ȳ are the mean estimated and original values, n is the number of training colour samples, and σx and σy are the standard deviations of the estimated and original values. A separate coefficient is calculated for each tristimulus value. The problem with using the correlation coefficient to test such a characterization model is the non-uniformity of the CIE XYZ colour system. Also, the correlation coefficient does not give any account of the location or distribution of the error; it is only an estimator of the accuracy of the fitted function.
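For one tristimulus channel, r can be computed as below (a sketch assuming NumPy arrays of estimated and original values; note that σx and σy as defined above are population standard deviations, i.e. divided by n, which is NumPy's default):

```python
import numpy as np

def channel_correlation(estimated, original):
    """Correlation coefficient r for one tristimulus channel (X, Y or Z)."""
    x, y = np.asarray(estimated, float), np.asarray(original, float)
    covariance = np.mean((x - x.mean()) * (y - y.mean()))  # sum divided by n
    return covariance / (x.std() * y.std())                # np.std also uses 1/n
```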

CHARACTERIZATION OF DISPLAYS

In CRT displays, phosphors excited by three modulated electron guns emit an additive mixture of red, green and blue light. The assumptions mentioned earlier in the section ‘Physical models’, in addition to the assumption of temporal stability, must hold for the following simple procedure to be used to characterize the display.

The calibration and characterization of displays are performed in a dark environment with all ambient lights off. The display has to warm up for a period of 30–60 minutes to allow stabilization of luminance and chromaticity (see Chapter 15). Before the characterization starts, the desired correlated colour temperature must be selected. The brightness and contrast controls of the display then have to be set to fixed values. The aim is to achieve the maximum contrast range, with the darkest possible black point and no loss in the available luminance levels. This is carried out visually, preferably by an experienced observer, with the aid of simple black-and-white test patches, or by displaying several typical pictorial images and adjusting until pleasing tone reproduction is achieved.

The characterization of a display system is a two-step operation:

•   Evaluation of the transfer characteristics of the CRT display, i.e. definition of the relationship between input pixel values and output luminance, for the red, green and blue channels.

•   Derivation of the matrix, M, of coefficients used for transforming R′, G′, B′ linearized – with respect to display luminance – pixel values to CIE XYZ tristimulus values.

The evaluation of the R, G and B transfer functions of display systems is achieved by taking luminance measurements from a series of displayed ramps for each primary colour. This relationship is usually modelled by a function obeying the power law. Details on measuring CRT transfer characteristics, along with models used to describe the transfer relationship, are given in Chapter 21. A calibrated and warmed-up colorimeter can be used for the purpose.
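As a simple illustration, a plain gain–gamma power law can be fitted to a measured ramp for one channel. This is a sketch only: the fuller display models are given in Chapter 21, and the ramp readings below are hypothetical stand-ins for colorimeter measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(d, gain, gamma):
    """Normalized luminance as a function of normalized digital count."""
    return gain * np.power(d, gamma)

d = np.linspace(0.0, 1.0, 18)   # normalized digital counts of the displayed ramp
L = 0.98 * d ** 2.3             # hypothetical normalized luminance readings
(gain, gamma), _ = curve_fit(power_law, d, L, p0=(1.0, 2.2))
```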

The coefficients of the colour transformation matrix, M, are obtained by using the colorimeter and measuring each channel’s peak output chromaticities, x, y and luminance, Y, from the central part of the display. The measurement setup is illustrated in Figure 21.7, with the central part of the display faceplate displaying the colour of interest and the surrounding area set to display its complementary. The required coefficients are the tristimulus values of the display primaries described in Eqn 23.9. They are obtained by:

image

where xr, yr and zr are the chromaticities of the red primary, xg, yg and zg are the chromaticities of the green primary, xb, yb and zb are the chromaticities of the blue primary, and Lr_max, Lg_max and Lb_max are the maximum luminances of the red, green and blue channels respectively. The coefficients are normalized to the range 0.0–1.0 by scaling so that the Y values sum to 1.
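A minimal sketch of building M (assuming NumPy; the chromaticities and peak luminances in the example are hypothetical measurements, not values from the text):

```python
import numpy as np

def primary_tristimulus(x, y, L):
    """Tristimulus values of a primary from its chromaticities and peak luminance."""
    return np.array([x / y * L, L, (1 - x - y) / y * L])

def display_matrix(chroma_r, chroma_g, chroma_b, L_r, L_g, L_b):
    """Matrix M of Eqn 23.9: columns are the primaries' tristimulus values,
    scaled so that the Y values sum to 1 (white maps to Y = 1)."""
    M = np.column_stack([
        primary_tristimulus(*chroma_r, L_r),
        primary_tristimulus(*chroma_g, L_g),
        primary_tristimulus(*chroma_b, L_b),
    ])
    return M / (L_r + L_g + L_b)

# Hypothetical measured chromaticities and peak luminances (cd/m^2)
M = display_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), 21.3, 71.5, 7.2)
```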

The colorimetry of a set of RGB colour coordinates sent to the CRT is predicted by the forward model (Figure 23.10), involving:

•   The application of the transfer function of the display to each of the R, G and B input digital counts to obtain linear in luminance R′, G′ and B′ values. This can be achieved by applying the model display transfer function, or by creating three LUTs with the actual measured (and interpolated) channel responses.

•   The tristimulus values of the display colours are then obtained by implementing Eqn 23.9.

The success of the characterization can be tested by converting a test set of colour patches with known CIE XYZ tristimulus values to display RGB through the inverse characterization model and measuring the displayed colours using a colorimeter. The inverse transformation (Figure 23.10) is achieved by reversing the steps listed above, i.e. applying the inverse matrix transformation described in Eqn 23.10 and then applying the inverse display transfer function to the linear in luminance R′, G′, B′ values to obtain input R, G, B digital counts.
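The two directions might be sketched as follows, with a single gamma value standing in for the measured per-channel transfer functions and simple clipping of out-of-range values (all names illustrative):

```python
import numpy as np

def forward(rgb, gamma, M):
    """Digital counts (0-1) -> CIE XYZ: transfer function, then Eqn 23.9."""
    return M @ np.power(np.asarray(rgb, float), gamma)

def inverse(xyz, gamma, M):
    """CIE XYZ -> digital counts: inverse matrix (Eqn 23.10), then inverse transfer."""
    rgb_lin = np.linalg.solve(M, np.asarray(xyz, float))  # applies M^-1
    return np.power(np.clip(rgb_lin, 0.0, 1.0), 1.0 / gamma)
```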

Finally, the discrepancy between the original and measured displayed colours is calculated using a colour difference formula, as indicated earlier in this chapter. CRT characterization may result in average colour differences as low as 1.0 CIE ∆E*ab, which is considered the threshold of visibility in uniform colour patches. In imaging, this threshold rises for complex scenes: the limit of visibility for displayed images is between 2.5 and 3.5 ∆E*ab, and the limit of acceptability is around 5.0–6.0 ∆E*ab.

In the characterization of LCD technologies most assumptions made for the CRT hold to a degree, and thus a characterization model based on that of the CRT described above can be used. In backlit active-matrix LCDs, light from a source at the back of the display is polarized by linear polarizers with a liquid crystal substrate between them, and the polarized light passes through a set of RGB filters to create an additive colour mixture (see Chapter 15). A primary difference between CRT and LCD characteristics is that the LCD has a native transfer function which is sigmoid (hyperbolic). However, many LCDs have built-in correction tables to mimic the CRT response. It is thus best to obtain the display transfer characteristics by direct measurement rather than by adopting a CRT model with no prior knowledge of the system. Another important difference is that the black-point luminance (minimum luminance) is comparatively high, due to the backlight source. For this reason a more accurate transformation between linearized input pixel values and CIE tristimulus values is achieved when the tristimulus values of the black point of the display are taken into account. The transformation in Eqn 23.9 is replaced by:

image

Figure 23.10   Forward and inverse display characterization.

image

where k_min denotes the black point.
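One common black-point-corrected formulation (stated here as an assumption of what Eqn 23.11 expresses; details vary between models) builds the matrix from the primaries' tristimulus values measured above the black level and adds the black-point tristimulus values back after matrixing:

```python
import numpy as np

def forward_lcd(rgb_lin, M_above_black, xyz_black):
    """Linearized RGB -> XYZ with black-point correction: the matrix maps to
    tristimulus values above the black level; the black point is added back."""
    return M_above_black @ np.asarray(rgb_lin, float) + np.asarray(xyz_black, float)
```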

Another consideration when characterizing an LCD system is the viewing-angle dependency of the display. Measurements are therefore angle dependent – taken from an axis perpendicular to the display, but also separately at other angles – and have to be carried out with a colorimeter having a very small reading area (aperture). Such colorimeters are usually designed specifically for LCD measurements. Finally, the grey balance of the LCD for R = G = B inputs is relatively poor, as is the display's chromaticity constancy. Models that compensate for the lack of chromaticity constancy introduce cross-terms in the non-linear transfer function to describe interactions between the R, G and B channels.

CHARACTERIZATION OF DIGITAL INPUT DEVICES

Various methods can be employed for the characterization of input devices, such as desktop and film scanners. These include the use of linear or polynomial regression to derive a colour transformation matrix, the creation of LUTs with interpolation between measured points, dye concentration modelling and neural networks. In this section characterization using regression is discussed. This method requires relatively simple equipment and an input colour target. The scanner characterization using regression consists of:

•   Grey balancing the red, green and blue scanner signals, i.e. by setting R = G = B = f(Y) for the neutral patches; here Y is the patch relative luminance. This step is not compulsory.

•   Deriving a 3 × m matrix through polynomial regression to selected samples with known colour specifications in both source (CIE XYZ) and destination colour spaces (grey balanced R′, G′, B′ values). The matrix is implemented for the colour transformation between device responses and CIE coordinates.

The polynomial regression method for device characterization is constrained to a single set of dyes (best to characterize for specific film/print dyes), illuminant and observer. Thus, during characterization it is advisable to employ a test chart produced on the same photographic medium as the media to be scanned later on. As indicated earlier in the ‘Numerical models’ section, the number of training colour samples, n, must be equal to or greater than the number of polynomial terms, m.

Standard test targets such as the ANSI IT8.7 (Figure 23.11) can be used for the purpose. The target provides uniform mapping in the CIELAB colour space and includes a 24-step grey ramp. The number as well as the location of the training samples on the target are important in the characterization. The training set may consist of half of the total number of colour samples on the target; these should cover the CIELAB space of the scanning medium uniformly. Thus, the training set may consist of every other sample on the target. Measurements from the target are performed with a reflection spectrophotometer (or a transmission one for film targets) to obtain the original CIE XYZ tristimulus values for the training set of colours. Measurements are carried out for the standard illuminant and the CIE colorimetric observer of choice.

image

Figure 23.11   The Kodak Q60 target based on the design of the ANSI IT8.7 (top) and the Digital ColorChecker SD (bottom) used for scanner and digital camera characterization.

The target is scanned using selected working scanner settings (e.g. optical resolution, gamma = 1). Automatic selection of white and black point as well as automatic exposure should be avoided. These should be set manually when pre-scanning the target and kept constant during scanning.

Grey balanced RGB values (R′, G′, B′) are obtained by:

1.   Measuring the RGB scanner responses for the neutral patches of the target (see Chapter 21 for measuring the OECF of input devices). The relationship between relative input luminances and scanner responses can be used to build three LUTs used in the next step. This can be achieved using linear or spline interpolation.

2.   Implementing the three LUTs to linearize – with respect to input luminance – the RGB scanner responses.

The grey-balanced target scan is then processed: R′, G′, B′ values for the training set of samples are obtained by averaging a number of pixels from the central area of each sample.

The colour transformation matrix, 3 × m, can be derived by implementing Eqns 23.19 and 23.20. Q in Eqn 23.19 is the matrix of independent variables, of size n × m (m is the number of polynomial terms, for n training samples; see the example in Eqns 23.16–23.18 for m = 4). Examples of polynomials with different numbers of terms are given in Table 23.3. The terms x, y, z, etc. in the table represent the grey-balanced pixel values (R′, G′, B′) and/or their products (R′G′, R′B′, G′B′, R′G′B′, etc.). X in Eqns 23.18 and 23.19 is the vector of dependent variables, of size n (either X, Y or Z), which contains the corresponding tristimulus value for the n training samples. Finally, AX, AY and AZ are the derived corresponding vectors of coefficients (1 × m) used to build the 3 × m colour correction matrix (see Eqns 23.20 and 23.21).

The implementation of the matrix is achieved using the equivalent of Eqn 23.22. In this equation a 3 × 4 transformation matrix has been derived using polynomial regression with m = 4.
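An illustrative sketch of the derivation and implementation (assuming m = 4 with the terms R′, G′, B′ plus a constant, one possible choice of terms; the term selection and all names are assumptions, not taken from Table 23.3):

```python
import numpy as np

def terms_m4(rgb):
    """m = 4 polynomial expansion: R', G', B' and a constant term."""
    r, g, b = rgb
    return np.array([r, g, b, 1.0])

def derive_matrix(rgb_train, xyz_train):
    """Least-squares solution for the 3 x m colour correction matrix,
    one set of coefficients per tristimulus channel (AX, AY, AZ)."""
    Q = np.array([terms_m4(c) for c in rgb_train])            # n x m
    A, *_ = np.linalg.lstsq(Q, np.asarray(xyz_train, float), rcond=None)
    return A.T                                                # 3 x m

def apply_matrix(A, rgb):
    """Implement the colour transformation for one grey-balanced pixel."""
    return A @ terms_m4(rgb)                                  # estimated X, Y, Z
```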

The performance of the derived matrix can be assessed by examining the colour differences between the original CIELAB values and those estimated via the transformation, for the training set and for a testing set of colour samples. The latter may be the colour patches in the test target which were not used in the regression. Figure 23.8 illustrates the distribution of ∆E*ab colour differences for a training and a testing set of colours when implementing a 3 × 14 colour matrix to characterize the responses of a film scanner. Figure 23.9 shows how these colour differences vary according to the lightness of the individual colour samples.

The best polynomial used for characterizing an input device varies with the device characteristics and measurement accuracy. As mentioned earlier in the chapter, input device responses are usually not linearly related to CIE colorimetry; thus a higher than first order polynomial may be required in the regression to derive a suitable matrix for colour transformations. High-order transformation equations may return lower colorimetric errors for the training set of colour samples, but can lead to poor performance when applied to independent data – such as the testing set of colours – or the scanned image data. This is a result of fitting the noise present in the measurement in addition to the desired systematic trends.

For the inverse colour transformation, i.e. from CIE XYZ to scanner RGB, polynomial regression can be used to derive a new matrix of coefficients. The regression in this case is applied to the tristimulus data (independent variables) and (one at a time) to R, G, B values (dependent variables) to compute a new set of coefficients. The derived polynomial will not map the sample points to their exact original value.

The framework for the characterization of digital still cameras is similar to that for scanners. The characterization of cameras is, however, more ambiguous, mainly because the lighting conditions during capture are often uncontrolled and can vary widely. It is advisable to characterize the camera for a set of common illuminants, or to characterize for one only and use the camera's white balance settings to set the white point equal to that illuminant. Similar test charts to those employed for characterizing scanners can be used. Uniform illumination must be ensured during the recording of the target, setting a viewing/illuminating geometry of 0/45 (see Chapter 5). Since camera lenses do not transmit light uniformly across the capture frame, a grey card may be recorded prior to characterization to trace any spatial non-uniformities, which can later be corrected. It is useful to sample the grey card densities at set distances, as the card itself might not be totally uniform.
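A minimal sketch of such a non-uniformity (flat-field) correction, assuming the grey-card capture and the image are NumPy arrays of the same shape; this simple per-pixel gain correction presumes the card itself is uniform:

```python
import numpy as np

def flat_field_correct(image, grey_card):
    """Divide out the lens/illumination falloff traced by a grey-card capture."""
    card = np.asarray(grey_card, float)
    gain = card / card.mean(axis=(0, 1), keepdims=True)  # relative sensitivity map
    return np.asarray(image, float) / gain
```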

GAMUT MAPPING

The range of reproducible colours of digital imaging systems varies between devices, media and viewing conditions. Therefore, colour gamut mapping is a necessary step of the imaging workflow. Gamut mapping deals with the adjustment of the colours of an input device, or an encoded image, to fit the colour range of the reproduction device and media. It is performed via gamut mapping algorithms (GMAs), the aims of which vary with application.

image

Figure 23.12   Gamut mapping in the imaging workflow. Diagram adapted from Green and MacDonald (2002)

The main factors affecting the performance of GMAs are the characteristics of the original and reproduction systems, the colour space in which the gamut mapping is taking place and the characteristics of the image to be mapped.

Gamut mapping is performed on images originating from, and reproduced on, devices which have been characterized (or profiled – see Chapter 26). The success of the characterization is important, since characterization errors may be perpetuated further during mapping. Figure 23.12 illustrates where gamut mapping takes place in the imaging workflow.

Gamut mapping aims

There are many possible ways to map colours from one system to another. The choice of method (and thus of GMA) is based on the objective of colour reproduction (see Chapter 5). One way to narrow these objectives is to split them into two broad categories: accurate and pleasing reproduction. The aim of accurate reproduction is to maintain the appearance of the colours and to render them as close as possible to the original image colours, whereas the aim of pleasing reproduction is to achieve an overall pleasing image regardless of the original. For the purposes of ICC colour management systems four rendering intents have been defined: perceptual, saturation, media-relative colorimetric and ICC-absolute colorimetric. For each, different gamut mapping techniques are employed. Details on these intents are provided in Chapter 26. Briefly, for the perceptual rendering intent, GMAs which preserve the contrast of the image are employed. This rendering intent is particularly applicable to pictorial images, where the overall pleasantness of the reproduction is more important than its colorimetric accuracy. The saturation rendering intent employs algorithms which preserve the vividness of the colours but do not necessarily maintain their hue. In the media-relative colorimetric rendering intent, gamut mapping aims to maintain the colorimetry of the original in-gamut colours but adapts them with regard to the white point of the reference medium. Finally, in the ICC-absolute colorimetric intent the in-gamut colours remain colorimetrically unchanged, with no adaptation to the white point of the output medium. The first two rendering intents can be classified as ‘pleasing’, whereas the latter two are classified as ‘accurate’.

Gamut mapping techniques

There are two possibilities for performing gamut mapping: device-to-device, which is an image-independent mapping between the gamuts of the original and reproduction devices; and image-to-device, which is an image-dependent mapping, resulting in the smallest possible distortions in the image.

Implementation of gamut mapping requires information on the gamuts of the original and reproduction systems. Thus, before implementation, it is necessary first to compute the gamut boundary descriptor (GBD). This can be achieved via the characterization models of the devices. Specific methods for defining image gamuts as well as media gamuts are available. Then, the intersection between the gamut boundary and a given line along which the mapping is to be carried out, the line gamut boundary, has to be identified.

The majority of GMAs are designed to work with perceptual attributes of colour, such as lightness, chroma, colourfulness, saturation and hue (see Chapter 5). Most proposed algorithms work in the CIELAB or CIELUV colour spaces, and the mapping is thus carried out in the dimensions of those spaces. However, the representation of hue in these spaces does not always correlate well with perceived hue, especially in the blue region. More recently, CIE colour appearance spaces and their coordinates have been used for mapping.

GMAs can be broadly divided into two types: gamut clipping and gamut compression. Gamut clipping algorithms only change colours which are located outside the reproduction gamut (Figure 23.13). Gamut clipping may be carried out before or after lightness compression (see later). This method minimizes the colour shift and maintains (as much as possible) image saturation, but some details of the image can be lost because many colours may be mapped to the same point on the reproduction boundary. Gamut compression algorithms change the position of all colours from the original gamut, so as to distribute the differences caused by gamut mismatches across the entire range of colours. This method aims to preserve the relative relationship between colours. Therefore, image details are not necessarily lost at the gamut boundary. Other unwanted artefacts can, however, be caused because of the changes in the in-gamut colours (e.g. unwanted decrease in saturation).

image

Figure 23.13   Gamut clipping vs. gamut compression.

Most existing GMAs start with lightness mapping. A good reproduction should maintain the tones of the main regions in an image, whereas the tones of the less important parts should be compressed. There are various types of lightness mapping technique, such as clipping, soft clipping, linear compression and the use of sigmoid functions (similar in shape to the characteristic curve), which preserve not only overall contrast but also detail in the dark image areas. Figure 23.14 shows different types of lightness compression.

Chroma mapping can be achieved via linear or non-linear functions (such as sigmoid and knee functions). A sigmoid non-linear scaling function maps chroma in three regions: in the low-chroma region, colorimetry and low-end contrast are preserved by mapping input values directly to output values; in the middle region, chroma is increased to compensate for the loss in chromatic contrast associated with gamut compression; in the high-chroma region, out-of-gamut chroma is compressed into the destination gamut.
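The sketch below illustrates these ideas with a linear lightness compression and a knee-type chroma compression. These are illustrative functions only, under assumed parameter names, and not the CIE-recommended algorithms discussed at the end of the chapter:

```python
import numpy as np

def compress_lightness(L, out_min, out_max, in_min=0.0, in_max=100.0):
    """Linearly map the input lightness range onto the output medium's range."""
    scale = (out_max - out_min) / (in_max - in_min)
    return out_min + (np.asarray(L, float) - in_min) * scale

def compress_chroma_knee(C, in_max, out_max, knee=0.8):
    """Preserve chroma below the knee point; linearly compress the rest
    of the input chroma range into the remaining output gamut."""
    C = np.asarray(C, float)
    C_knee = knee * out_max
    slope = (out_max - C_knee) / (in_max - C_knee)
    return np.where(C <= C_knee, C, C_knee + (C - C_knee) * slope)
```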

image

Figure 23.14   Different types of lightness compression.
Diagram adapted from Green and MacDonald (2002)

Finally, as the tolerance of the human visual system to shifts in hue is very small, most GMAs aim to preserve the hue as much as possible, but there are some which prioritize the reduction of chroma losses.

The CIE (TC8-3) recommends guidelines that cover numerous aspects of GMA evaluation, including test images, media, viewing conditions, measurement, gamut boundary calculation, colour spaces and experimental methods. Two GMAs are currently recommended by the CIE:

1.   HPMDE (hue-preserving minimum ∆E*ab) keeps colours lying in the intersection of the original and reproduction gamuts unchanged, and modifies original out-of-gamut colours by clipping them to the point in the reproduction gamut that results in the smallest ∆E*ab within a plane of constant hue angle.

2.   SGCK is also hue preserving; it uses chroma-dependent sigmoid lightness compression and a piecewise linear compression toward the lightness cusp of the reproduction gamut.

Details on these algorithms can be found in Green and MacDonald (2002) and Sharma (2003).

BIBLIOGRAPHY

Adobe RGB 1998 Color Image Encoding, version 2005-05, 2005. Adobe Systems Inc.

Day, E.A., Taplin, L., Berns, R.S., 2004. Colorimetric characterization of a computer-controlled liquid crystal display. Color Research and Application 29 (5), 365–373.

Fairchild, M.D., Wyble, D.R., Johnson, G.M., 2008. Matching image color from different cameras. Proceedings of the SPIE/IS&T’s Electronic Imaging 2008: Image Quality & System Performance Conference, Vol. 6808.

Green, P., MacDonald, L. (Eds.), 2002. Colour Engineering: Achieving Device Independent Colour. Wiley, Chichester, UK.

IEC 61966-2-1:1999, 1999. Multimedia Systems and Equipment – Colour Measurement and Management, Part 2-1: Colour Management – Default RGB Colour Space – sRGB.

IEC 61966-2-2:2003, 2003. Multimedia Systems and Equipment – Colour Measurement and Management, Part 2-2: Colour Management – Extended RGB Colour Space – scRGB.

IEC 61966-2-4:2006, 2006. Multimedia Systems and Equipment – Colour Measurement and Management, Part 2-4: Colour Management – Extended-Gamut YCC Colour Space for Video Applications – xvYCC.

ISO 22028-1:2004, 2004. Photography and Graphic Technology – Extended Colour Encodings for Digital Image Storage, Manipulation and Interchange, Part 1: Architecture and Requirements.

ISO 17321-1:2006, 2006. Graphic Technology and Photography – Colour Characterisation of Digital Still Cameras (DSCs), Part 1: Stimuli, Metrology and Test Procedures.

ISO/TS 22028-2:2006, 2006. Photography and Graphic Technology – Extended Colour Encodings for Digital Image Storage, Manipulation and Interchange, Part 2: Reference Output Medium Metric RGB Colour Image Encoding (ROMM RGB).

ISO/TS 22028-3:2006, 2006. Photography and Graphic Technology – Extended Colour Encodings for Digital Image Storage, Manipulation and Interchange, Part 3: Reference Input Medium Metric RGB Colour Image Encoding (RIMM RGB).

Kang, H.R., 2006. Computational Color Technology. SPIE, WA, USA.

MacDonald, L.W., 1993. Gamut mapping in perceptual colour space. Proceedings of the IS&T/SID’s 1st Color Imaging Conference, 193–196.

Sharma, G. (Ed.), 2003. Digital Color Imaging Handbook. CRC Press, Boca Raton, FL, USA.

Spaulding, K., 1999. Requirements for unambiguous specification of a color encoding: ISO 22028-1. Proceedings of the IS&T/SID’s 10th Color Imaging Conference, 106–111.

Süsstrunk, S., Buckley, R., Swen, S., 1999. Standard RGB color spaces. Proceedings of the IS&T/SID’s 7th Color Imaging Conference, 127–134.

Triantaphillidou, S., 2001. Aspects of Image Quality in the Digitisation of Photographic Collections. Ph.D. thesis, University of Westminster, UK.

The number of spectral bands used to integrate the spatially varying spectral distributions in the scene may originally be more than the number of the colour channels in the digitally encoded image.
