Chapter 1

Introduction to the imaging process

Elizabeth Allen

All images © Elizabeth Allen unless indicated.

INTRODUCTION

The second half of the nineteenth century saw the progression from early experiments with light-sensitive compounds to the first cameras and photographic films becoming available to the general population. Photography developed from the minority use of the camera obscura as a painter’s tool, through the first fleeting glimpse of a photographic image in a beaker containing silver compounds after light exposure, to the permanent rendering of the image, and then to the invention of the negative–positive photographic process still used today to produce an archival image. A camera using roll film, the Kodak, became available to the public in 1888 and brought photographic media to the masses. By the beginning of the twentieth century, silver halide materials sensitive to all visible wavelengths of the electromagnetic spectrum were being produced, yielding tonally acceptable images and paving the way for the development of practical colour imaging processes, beginning with the Autochrome plate in 1907.

Since this period, the ability to capture, manipulate and view accurate images of the world around us has become something that we take for granted. There are few places in the world where images are not a part of daily life. We use them to record, express, represent, manipulate and communicate ideas and information. The diverse range of applications for imaging leads to a multitude of functions for the image: as a tool of coercion in advertising, a means to convey a visual language or an aesthetic in art, a method of visualization and analysis in science, to communicate and symbolize in journalism, or simply as a means to record and capture, and sometimes enhance, the experiences of everyday life. The manufacturing industries rely on images for a multitude of purposes, from visualization during the design and development of prototypes to the inspection of manufactured components as part of industrial control processes. Photography and applied imaging techniques using visible and non-visible radiation are fundamental to some fields: in medicine for diagnosis and monitoring of the progress of disease and treatment; and in forensic science, to provide objective records in legal proceedings and for subsequent analysis.

The art and science of photography and imaging have been developed across multiple disciplines, as a result of necessity, research and practice. The imaging process results in an image that will be observed; therefore consideration of the imaging chain, in practice or theory, must include the observer. The numerous functions of images, however, mean that the approaches to and requirements of the imaging process vary widely. Whatever the function of the image, anyone involved in the practice of imaging cannot avoid the need to acquire technical skills and some knowledge and understanding of the theory behind the imaging process.

Knowledge of factors affecting all stages of the imaging chain allows manipulation of the final image through informed selection of materials and processes. More in-depth study of the fundamental science of imaging, as well as being interesting and diverse, serves to enhance the practical imaging process, as understanding can be gained of the mechanisms involved, and processes and systems can be characterized and controlled to produce required and predictable results. Study of imaging science encompasses the nature of light, radiometry and photometry, vision science and visual perception, optics, colour science, chemistry, psychophysics, and much more besides. It provides methodologies for the assessment of imaging systems and tackles the complex issue of the evaluation of image quality.

The greatest change in our approach to imaging since the development of the colour process has occurred in the last 25 years, with the burgeoning growth of digital imaging technologies. The first consumer electronic camera system was introduced to the public in 1981, but it took the development of the personal computer, and its leap to widespread use, before digital imaging became practical. The internet has caused and facilitated an exponential increase in image production and dissemination. Imaging has grown to embrace computer science and computer graphics in a symbiotic relationship in which each discipline uses elements of the others. Digital image processing finds application in many areas from the aesthetic enhancement of images to analysis in medical applications. The immediacy of digital imaging has raised our expectations; it is likely that this, along with the efficiency of the digital imaging process and the ease of manipulation of digital images which are, after all, just arrays of numbers, will mean that the traditional photographic process may eventually be entirely replaced.

Keeping up with the changes in technology hence requires the acquisition of different types of knowledge: technical skills in computing, for example, and an understanding of the qualities of information represented by discrete data in digital images, compared to the continuous representation of information used in analogue (silver halide) imaging. It is clear that the need for new practices and alternative approaches will continue as the technology develops further. It is important, however, in trying to understand the new technologies, not to forget where it all started. Although technology has changed the way in which we produce and view images, much of the core science upon which the foundations of photography were built remains important and relevant. Indeed, some of that science has become more, not less, important to our understanding of digital systems.

THE IMAGING PROCESS

The word ‘photography’ is etymologically derived from the phrase ‘to draw with light’. Modern electronic imaging techniques are commonly classed under the umbrella term ‘digital imaging’ to distinguish them from more traditional silver halide photography; however, both are based on the same core principle: the use of light to produce a response in a light-sensitive material, which may then be rendered permanent and viewed as an image of the original scene. Detailed comparisons will be drawn between the analogue and digital processes throughout this book; an overview of both is therefore presented below.

During image capture, light from a scene is refracted by a lens and focused on to an image plane containing a light-sensitive material. Refraction is the deviation of a light ray as it passes from one material to another with different optical properties, and is a result of a change in its velocity as it moves between materials of different densities (see Chapters 2 and 6). The effects of refraction can be seen in the distortion of an object when viewed from behind a glass of water. Figure 1.1 illustrates the refraction of light rays through a simple positive lens to produce an inverted image in sharp focus on the image plane.

The amount of light falling on an image sensor is controlled at exposure by a combination of aperture (the area of the lens through which light may pass) and shutter speed (the amount of time that the shutter in front of the focal plane is open). This relationship is described by the reciprocity equation: H = Et, where E is the illuminance in lux, t is the time of exposure and H is the exposure in lux-seconds. At each exposure level, a range of possible aperture (f-number) and shutter speed combinations will produce the same overall exposure. Choice of a particular combination will affect the depth of field and sharpness/motion blur in the final image. Traditionally a single increment in the scale of possible values for both shutter speed and aperture is termed a ‘stop’ (although many cameras offer half-stop intervals). Each change of a single stop in either aperture or shutter speed scale represents a halving or doubling of the amount of light falling on the sensor. The photometry of image formation is dealt with in detail in Chapter 6 and exposure estimation is the subject of Chapter 12.
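The reciprocity relationship and the stop scale can be illustrated in a few lines of code; this is a minimal sketch, and the illuminance and shutter-time values are invented for the purpose of the example rather than taken from any real metering situation.

    # Sketch of the reciprocity law H = E * t and of equivalent stop combinations.
    def exposure(illuminance_lux, time_s):
        """Photometric exposure H in lux seconds."""
        return illuminance_lux * time_s

    # One stop on either scale halves or doubles the light reaching the sensor.
    E, t = 400.0, 1 / 125            # illuminance (lux) and shutter time (s), illustrative values
    H = exposure(E, t)

    # Opening the aperture by one stop (E doubles) while halving the exposure time
    # leaves the overall exposure unchanged.
    H_equivalent = exposure(2 * E, t / 2)
    assert abs(H - H_equivalent) < 1e-9
    print(H, H_equivalent)           # both 3.2 lux seconds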

Image formation occurs when the material changes or produces a response in the areas where it is exposed, in some way proportional to the amount of radiation falling on it. In traditional photographic materials, exposed light-sensitive silver halide crystals (silver chloride, bromide or iodide) form a latent image (see Chapter 13). Latent in this context means ‘not yet visible’. The latent image is actually a minute change on the surface of the exposed crystal, where a small number of silver ions have been converted to silver atoms, and is not visible to the naked eye. It is also not yet permanent. In an image sensor, electromagnetic radiation falls on to pixels (a contraction of ‘picture elements’), which are discrete light sensors arranged in a grid. Each pixel accumulates charge proportional to the amount of light falling on it.

Figure 1.1   Image formation using a simple lens.

After exposure, the image is processed. For simplicity we consider the monochrome process, as colour is discussed in more detail later in this chapter. In photographic chemical processing the latent image is developed (Table 1.1). During the development process silver halide crystals containing latent image are reduced to metallic silver. The silver forms tiny specks, known as photographic grains, and these appear as black in the final image. The full range of tones produced in a greyscale image is a result of different densities of clusters of grains, and the image density in any area is related to the amount of light that has fallen on it. The image tones at this stage will be negative compared to the original scene. After the development time is complete, the material is placed in a stop bath to prevent further development before the image is fixed.

In digital imaging, the processing will vary depending upon the type of sensor being used (see Chapter 9). In a charge-coupled device (CCD), the charge is transferred off the sensor (‘charge coupling’), amplified and sent to an analogue-to-digital converter. There the signal is sampled at discrete intervals corresponding to individual pixels and quantized, meaning that each sample is allocated a discrete integer value, which later defines its pixel value. The newer complementary metal oxide semiconductor (CMOS) image sensors perform this processing on the chip and output digital values.
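A minimal sketch of the sampling and quantization step may make the idea concrete; the analogue values below are invented, and an ideal 8-bit converter is assumed.

    import numpy as np

    # Idealized analogue levels read from a row of pixels (arbitrary units, 0-1).
    analogue_signal = np.array([0.02, 0.10, 0.35, 0.36, 0.80, 0.99])

    # Quantization to 8 bits: each sample is mapped to one of 2**8 = 256 integer
    # levels, and these integers become the pixel values.
    levels = 2 ** 8
    pixel_values = np.clip(np.round(analogue_signal * (levels - 1)), 0, levels - 1).astype(np.uint8)
    print(pixel_values)              # [  5  26  89  92 204 252]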

Table 1.1   Monochrome photographic process

PROCESS          OUTCOME
Exposure         Formation of latent image
Processing:
  Development    Latent image amplified and made visible
  Stop bath      Development stopped
  Fixing         Unused silver halides converted into soluble compounds which dissolve in the fixing agent
  Washing        Soluble chemicals removed
  Drying         Water removed

At this stage, the nature of the image represented by the two systems is quite different. In silver halide materials, the random arrangement of silver halide crystals in the photographic emulsion means that a continuous range of tones may be represented and silver halide methods are often referred to as analogue imaging. The digital image, however, is a grid of non-overlapping pixels, each of which is represented by an integer number that corresponds to its intensity. It cannot represent continuous tones in the same way as silver halide materials because the data are discrete. Various techniques, introduced later in this chapter, are therefore used to simulate the appearance of continuous tones.

The final step in the imaging process is image perpetuation, during which the image is rendered permanent. The silver halide image is made permanent by fixation, a process by which all remaining unexposed silver halide crystals are made soluble and washed away. A digital image is made permanent by saving it as a unique digital image file. The image is then output in some way for viewing, either as a print, a transparency or as a digital image on a computer screen. In silver halide processes this involves exposing the negative on to the print material and again processing and fixing the image. A comparison of the analogue and digital imaging processes is illustrated in Figure 1.2.

IMAGE CONTROL

By careful control of every stage of the imaging process, the photographer is able to manipulate the final image produced. Image control requires an understanding of the characteristics of the imaging material and system, of composition, and of the behaviour and manipulation of light, tone and colour, as well as the technical skill to combine all of these factors for the required result. Such skills may be acquired by practice and experimentation, but having an understanding of the theory and the science behind systems and techniques allows true mastery of the process.

Control of image shape

When capturing an original scene, the photographer controls composition of the image to be projected on to the focal plane of the camera in a variety of ways. The format of the camera selected (the size of the image sensing area) will determine not only the design of the camera and therefore the capabilities of the system, but also the size, quality and aspect ratio of the final image. Large-format cameras (also known as technical or view cameras, with an image format of 5 × 4 inches) are designed to allow camera movements: independent physical manipulation of the planes containing the lens and the imaging sensor, enabling the photographer to change the size, magnification and perspective of image elements and the position of the plane of sharp focus. Image viewpoint is one of the key factors influencing composition, as this controls not only the positioning of different subjects and the perspective within the scene (the relationship between the relative size and position of objects), but also whether the image is in portrait or landscape format (if using a rectangular image format). Chapter 11 covers camera movements and camera systems in detail.

Figure 1.2   Analogue and digital imaging processes.

Image shape is also controlled by the focal length of the lens being used. In a simple positive lens the focal length is the distance from the lens to the rear principal focus, defined as the point on the optical axis at which the lens brings a distant object to sharp focus. The focal length is determined by the curvature, thickness and refractive indices of optical components, and this in turn defines the angle by which light rays are deviated (refracted) as they pass through them. This determines the field angle of view, as illustrated in Figure 1.3. Geometric optics is the subject of Chapter 6.

Figure 1.3   Angle of view of a lens.

The angle of view determines the amount of the original scene covered by the lens. Standard lenses for each format have an angle of view of around 50°. Wider angle lenses have shorter focal lengths and cover more of the original scene, hence often displaying distortion around the periphery of the image. Longer focal lengths cover a much smaller area of the original scene and may therefore show less off-axis curvilinear distortion.
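For a simple lens focused on a distant scene, the angle of view follows from the focal length and the sensor (or film) diagonal as 2 arctan(d/2f). The short sketch below assumes the 43.3 mm diagonal of the 35 mm full-frame format purely for illustration.

    import math

    def angle_of_view(focal_length_mm, sensor_diagonal_mm=43.3):
        """Approximate diagonal angle of view, in degrees, for a simple lens."""
        return math.degrees(2 * math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

    for f in (24, 50, 200):          # wide-angle, standard and long focal lengths
        print(f, "mm:", round(angle_of_view(f)), "degrees")
    # 24 mm: 84 degrees, 50 mm: 47 degrees, 200 mm: 12 degrees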

Depth of field

For a given lens focal length and focus setting, there is a plane of optimum focus in the scene, a particular distance (depth) away from the camera, that produces a sharp image exactly at the focal plane. There are also zones in front of and behind this plane that will produce acceptably sharp images. This zone of sharp focus is referred to as depth of field and can be an important aspect of composition. A shallow depth of field will contain a small range of planes of sharp focus, all other planes being out of focus, hence isolating and emphasizing a subject of interest. Depth of field is influenced by lens focal length, the distance to the focused object and the lens aperture.
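The interaction of these three factors can be sketched with the usual thin-lens approximations based on the hyperfocal distance; the formulae and the 0.03 mm circle of confusion assumed below are conventional textbook values rather than anything specific to this chapter.

    def depth_of_field(focal_mm, f_number, subject_dist_mm, coc_mm=0.03):
        """Approximate near and far limits of acceptable sharpness (thin-lens model)."""
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = subject_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_dist_mm - 2 * focal_mm)
        if subject_dist_mm >= hyperfocal:
            far = float("inf")
        else:
            far = subject_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_dist_mm)
        return near, far

    # A 50 mm lens focused at 3 m: stopping down from f/2 to f/16 expands the zone of sharpness.
    for N in (2, 16):
        near, far = depth_of_field(50, N, 3000)
        print(f"f/{N}: {near / 1000:.2f} m to {far / 1000:.2f} m")
    # f/2: 2.80 m to 3.23 m
    # f/16: 1.92 m to 6.92 m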

Tone and contrast

As well as the position and size of scene elements, the photographer may also influence the composition by controlling the tone and colour of elements relative to each other. Tone in the original scene is defined by the intensity of light reflected from an object. The tone and contrast of the scene and reproduced image may be manipulated in a variety of ways, for aesthetic purposes, or to work within the limitations of a device or system.

Lighting control in the original scene

When white light reaches a surface, some wavelengths are absorbed and some are reflected or transmitted. Tone is controlled by the surface properties of the subject and the nature and intensity of the light source illuminating it. The contrast of the scene is the ratio between the brightest and darkest tones in the scene and the range of possible intensity levels in between, and may be controlled or manipulated at a number of stages in the imaging chain. Again, this is affected by the absorption characteristics of the surfaces being illuminated, but also by their position relative to the scene illuminants. Therefore the photographer can use lighting techniques to change image contrast. By adding light sources or changing the angle or distance of subjects relative to illuminants, the difference between highlight and shadow can be compressed or expanded.

The tone reproduction of a device or material describes how the range of intensities in the original scene is mapped to those in the final image (see Chapter 21). Tone reproduction is limited by many factors in the imaging chain, including the dynamic range of the image sensor – that is, its ability to record and represent a range of densities or intensities. Dynamic range in silver halide materials is also dependent upon exposure level. Selection of a particular type of film or image sensor will influence the range of possible tones at capture. Because both photographic and digital imaging processes involve an imaging chain containing multiple stages and devices, there are a number of points after image capture at which tone reproduction may be manipulated. Developing agents used in photographic processing, the setting up and calibration of output devices, and the selection of printing materials and processes all help to define the range of tones possible in the output image. Digital image processing software makes tonal adjustment simple and interactive through manipulation of the image’s tonal curve or histogram.
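As a minimal sketch of such an adjustment, the code below applies a simple power-law (gamma-style) tone curve to an 8-bit greyscale patch; the pixel values are invented, and real software would normally work from the image histogram or an interactively edited curve.

    import numpy as np

    def apply_tone_curve(image_8bit, gamma):
        """Map 8-bit pixel values through a power-law (gamma) tone curve."""
        normalized = image_8bit.astype(np.float64) / 255.0
        return np.round(255.0 * normalized ** gamma).astype(np.uint8)

    dark_mid_tone_patch = np.full((2, 3), 64, dtype=np.uint8)
    lightened = apply_tone_curve(dark_mid_tone_patch, gamma=0.5)   # gamma < 1 lifts the mid-tones
    print(lightened)                                               # every value becomes 128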

Colour

Colour imaging involves an analysis stage in which the response of a sensor to narrow bands of wavelengths is recorded; it is followed by a synthesis stage, where the measured values are converted to a response in an output device, to produce a colour that matches to a certain degree the visual appearance of the original. The rendition of colour in an image, whether digital or analogue, is dependent upon a variety of factors. The complex mechanisms by which the human visual system perceives colour mean that no colour reproduction will be identical to the colours perceived in the original scene. The aim in colour reproduction is therefore to produce consistent and acceptable colour; what is judged acceptable is influenced by colour preference and ‘memory colours’, which are often quite different from the actual colours of the subjects with which they are associated. Fundamental colour science is introduced in Chapter 5. Colour reproduction is the subject of Chapters 22 and 23.

The colours recorded from an original scene are a combination of the wavelengths present in the illuminant (the spectral power distribution) and the spectral reflectance characteristics of the surface on which the light falls, coupled with the spectral responsivity of the sensor. Choice of light source and appropriate sensor are therefore important factors in determining final colour quality of the image. The wavelengths reaching the sensor can be altered by filtering the light sources using coloured ‘gels’ or using optical filters over the lens.
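In simplified form, the response of one channel of a sensor is modelled by multiplying, wavelength by wavelength, the illuminant’s spectral power distribution by the surface reflectance and the channel’s spectral sensitivity, and summing over the visible range. The sketch below uses an invented, very coarsely sampled set of spectra purely to show the calculation.

    import numpy as np

    # Coarse five-band sampling of the visible spectrum (illustrative numbers only).
    wavelengths_nm   = np.array([450, 500, 550, 600, 650])
    illuminant_power = np.array([0.90, 1.00, 1.00, 0.95, 0.90])   # spectral power distribution
    reflectance      = np.array([0.10, 0.20, 0.70, 0.80, 0.60])   # a yellowish surface
    red_sensitivity  = np.array([0.00, 0.05, 0.30, 0.90, 0.70])   # 'red' channel responsivity

    # Channel response: sum over wavelengths of illuminant x reflectance x sensitivity.
    red_response = np.sum(illuminant_power * reflectance * red_sensitivity)
    print(round(red_response, 3))    # 1.282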

Colour reproduction through the imaging chain is managed by the selection of appropriate reproduction materials and devices. In photographic imaging, colour is manipulated by filtration at the capture stage (through the use of different colour-sensitive layers in emulsions) and also at the output stage (by filtration during printing). Digital image processing provides a powerful tool in the processing of colour in digital images, allowing simple manipulation of global colour balance, specific ranges of colours, or localized areas within the image. The many different stages in the digital imaging chain and the diverse range of devices and technologies available mean that colour translation between devices is complicated. The colour gamuts and colour reproduction of input and output devices are highly device dependent, meaning that the same pixel values may produce different colours on different devices, depending upon their characteristics and age, and the way they are set up and calibrated.

This problem has led to the development of colour management systems (see Chapter 26), which manage the process of converting pixel values into the output colour values of different devices. Gaining a proper understanding of digital colour management, a continually evolving discipline, requires some knowledge of colour science, the nature of light and the human visual system. Colour management relies on the absolute specification of colour as the human visual system perceives it. This specification (CIE colorimetry) is based upon colour matching experiments performed using human observers and standardized in 1931, many years before the extent of digital imaging today could be truly imagined. While colour management in a photographic system involves an understanding of colour filtration and the colour reproduction characteristics of a particular slide film or film/paper combination, digital colour management involves the careful calibration and characterization of all input and output devices, coupled with an understanding of the different digital representations of colour through the imaging chain, and software to perform the colour processing.
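A heavily simplified sketch of device characterization follows: linearized RGB signals for one particular display are related to device-independent CIE XYZ values by a 3 × 3 matrix measured for that device. The matrix below is invented for illustration; a real colour management module works with measured ICC profiles, look-up tables and chromatic adaptation, none of which is shown here.

    import numpy as np

    # Hypothetical characterization matrix for one display, mapping its linear RGB
    # signals to CIE XYZ. Real values would come from measuring the device.
    rgb_to_xyz = np.array([
        [0.41, 0.36, 0.18],
        [0.21, 0.72, 0.07],
        [0.02, 0.12, 0.95],
    ])

    linear_rgb = np.array([0.5, 0.4, 0.1])   # a linearized pixel value for this display
    xyz = rgb_to_xyz @ linear_rgb            # device-independent description of the colour
    print(xyz)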

THE ORIGINS OF PHOTOGRAPHY

Camera obscura

Since the development of systematic schools of philosophy in Ancient Greece around 500 BC, man has found enduring fascination in attempting to understand the nature of light and its behaviour. In China at around this time, Mo Tzu, considered one of the first great Chinese philosophers, emphasized the importance of pragmatism in philosophical thought. His followers began measuring and observing the behaviour of light using flat and curved mirrors. It is believed that they also discovered the camera obscura, which was later further developed by Alhazen (AD 965–1040), a mathematician and physicist born in Basra, Iraq. The camera obscura produced the first projected images of the real world and may therefore be viewed as the starting point for photography. Camera obscura literally means ‘dark chamber’. When light passes from outside a light-tight chamber or box through a pinhole, an inverted image is formed on the opposite surface. By the seventeenth century, the camera obscura had been adapted by placing a lens in front of the aperture, and portable versions became a tool for painters, allowing them to accurately trace landscapes.

Early experiments

It was natural that the emphasis on developments in science and technology in Europe during the Industrial Revolution (beginning in Britain in the latter part of the eighteenth century) would lead to attempts to ‘capture’ light and obtain a permanent image using the camera obscura. In 1727, Johann Heinrich Schulze, a German university professor, discovered that light exposure caused silver nitrate to darken; this was an important step forward in the search for a light-sensitive material. In 1777, the Swedish chemist Karl Wilhelm Scheele observed the same effect using silver chloride. Thomas Wedgwood (1771–1805), son of Josiah, the famous potter, and Sir Humphry Davy experimented with the use of paper soaked in silver nitrate placed in the camera obscura. The images were not permanent, however, and a fixing agent was not found during Wedgwood’s lifetime.

Towards the development process – the Daguerreotype

In 1822, a French inventor, Joseph Nicéphore Niépce, obtained the first permanent photographic image. He was interested in trying to copy drawings on to transparent paper by producing a reproduction using some sort of light-sensitive material. He coated a pewter plate with asphaltum (a substance that hardens on exposure to light), exposed the plate, and then removed the remaining unexposed and unhardened asphalt in a solvent. Although crude, this was the first permanent photograph; he later used the same technique and then etched the pewter image using acid to produce a printing plate.

Niépce met Louis-Jacques-Mandé Daguerre in 1826. Daguerre was an artist whose working life began as a theatre scenery painter, but he had progressed to painting large panoramic landscapes. He became interested in the blending of art and science in the experiments of Niépce. He had for some time been attempting to copy the images produced by the camera obscura, and they continued their investigations in partnership until Niépce died in 1833. Daguerre carried on with his experiments, but found the asphalt process slow, an image requiring an exposure of approximately 8 hours, and he therefore concentrated again on using silver salts. With a coating of silver iodide on a silver backing, the exposure was reduced to as little as half an hour. This exposure produced an invisible latent image, which was then ‘developed’ over a tray of heated mercury before being fixed using common salt. He presented this process to a joint meeting of the Academy of Science and the Academy of Art in 1839 as the Daguerreotype, and it quickly became a commercial success. The process was further developed by the use of sodium hyposulphite as a fixing agent (sodium hyposulphite, the popular name for sodium thiosulphate, was discovered by Sir John Herschel in 1819 to be a solvent for silver halides and is still the basis of the ‘hypo’ used today).

The length of exposure made the process more suited to still-life and landscape subjects; however, the earliest known photographic portrait was taken by an American, Samuel Morse, at around this time using Daguerre’s technique. Despite the long exposures, portrait studios began opening throughout Europe and America, and the length of exposure was reduced to less than a minute by 1840 with the use of a new ‘portrait’ lens and a change from silver iodide to more sensitive silver bromoiodide.

The negative–positive process

During the same period, in England, William Henry Fox-Talbot was also experimenting with the camera obscura, building on the work of Schulze, Davy and Wedgwood. Investigating both silver nitrate and silver chloride, he found that exposures could be dramatically reduced by using separate applications of silver nitrate and sodium chloride and exposing the paper while still wet. He called his method photogenic drawing and described it in a paper presented to the Royal Society shortly after Daguerre had unveiled the Daguerreotype in Paris. This was followed by the development of the Calotype process, which Fox-Talbot patented in 1841. The Calotype used paper sensitized with silver iodide, and gallic acid to develop the image. This combination reduced the necessary exposure time to approximately a minute in bright sunlight. The key difference between this and the Daguerreotype was that where Daguerre had produced a positive image, Fox-Talbot produced the first paper negative, which was then contact printed to produce a positive. The images produced by this early negative–positive process, being from a paper negative, were not as sharp as the Daguerreotype and therefore did not gain the same commercial success in professional portraiture. The obvious advantage of a negative–positive process, however, was that many reproductions could be obtained from a single negative; compared with the ‘one-shot’ results of the Daguerreotype, this meant that it quickly became the basis of modern photography.

Attempts were made to use the same technique with a transparent base to produce a sharper negative. Niépce de St Victor, a cousin of Nicéphore, obtained good, but unreliable, results using albumen (egg white) to hold the silver halide on glass, but these were superseded by the development of the wet-collodion process by Frederick Scott Archer in 1851. Collodion, containing potassium bromide and potassium iodide, was coated on to a glass plate and allowed to set. The plate was then immersed in a silver nitrate solution in darkness, producing light-sensitive silver bromide and silver iodide. It was then placed in a holder, put in the camera and exposed. It was developed immediately following exposure using a solution of pyrogallol, vinegar and water, before being fixed in hypo, washed and dried. The technique relied on the collodion being wet and hence required the immediate processing of the plates before the solvents evaporated. This meant that location photography required a portable darkroom close to the camera and, with the weight of the glass plates, made photography a cumbersome process. Early wet-collodion negatives were printed on albumen paper, but collodion paper was soon produced, before the first use of gelatin in printing paper in 1885.

Modern materials

The use of gelatin in photographic materials was first attempted in 1868, and by 1873 sensitized gelatin was available to photographers. Gelatin remains an important constituent of photographic materials today. As well as creating a fine suspension of silver halide crystals throughout the emulsion, gelatin has certain special characteristics which enable and enhance the photographic process, and to date no satisfactory alternative has been found. Gelatin emulsions are made more light sensitive by heating, meaning that exposure times can be dramatically reduced. Additionally, the physical characteristics of gelatin mean that while it dissolves in warm water, making a solution easy to paint on to a backing such as a glass plate, it ‘gels’ as it cools, becoming hard if water is removed. The use of gelatin means that the sensitive material can be dry when exposed and need not be developed immediately if it is kept light-tight. Gelatin-based photographic materials quickly took over from the less convenient wet-collodion process, and indeed are still used in contemporary photography.

In 1885, Carbutt, of Philadelphia, produced the first sheet film using sheets of celluloid coated with the gelatin-based emulsion. Roll film quickly followed, using a cast nitrocellulose base developed by George Eastman and Henry Reichenbach, and alongside it a new camera, the Kodak, in 1888. This produced circular images 2½ inches in diameter, 100 exposures to a roll. These two developments allowed photography to become not only more portable but also something that could appeal to the masses. Travel overseas was becoming a popular pursuit for those who could afford it, and it was natural that these first tourists would want to record the diverse scenes they encountered.

These materials were the basis for modern materials, and their structures and properties have been refined rather than dramatically changed. Cellulose triacetate or acetate–butyrate is now commonly used as a film base, although some manufacturers use newer synthetic polymers, such as polyester. Film formats have changed as a result of developments in camera design, but the structure remains the same.

PHOTOGRAPHIC IMAGING TODAY

Modern photographic materials consist of an emulsion containing a suspension of light-sensitive silver halide crystals (chloride, bromide or iodide) in gelatin, coated on to a flexible and stable transparent plastic or paper backing (Figure 1.4). The emulsion manufacturer’s control of the physical characteristics of the material, such as the size, shape and surface area of the crystals, the nature of the silver halides used, and the arrangement and relative quantities of halides within a crystal, determines the photographic properties of the emulsion in terms of contrast, speed, resolving power (resolution), sharpness and graininess. These subjects are dealt with in detail in later chapters; their influence on image quality and imaging system performance is the subject of Chapter 19.

Figure 1.4   Structure of a monochrome photographic film.

Characteristics of photographic materials

The speed of a photographic material determines the exposure required to obtain a specific silver density, and is therefore necessary in exposure metering. Larger silver halide crystals have a higher probability of absorbing enough electromagnetic energy to form a latent image; therefore, higher speed films tend to have larger grain. A number of systems have been developed for the measurement of film speed; however, the one most commonly used is the International Organization for Standardization (ISO) standard system of arithmetic and logarithmic speeds. Black and white photographic materials are commonly available with arithmetic speeds from ISO 50 up to ISO 3200, with the increase in speed indicating higher sensitivity. Each time ISO speed is doubled, it represents a decrease of a single stop in required exposure. Monochrome film often contains two emulsion layers. Different silver halide crystals will be used in the two layers, the larger crystals reacting faster and therefore increasing the speed of the film. The finer crystals in the slower emulsion enhance the tonal range and are capable of recording finer detail. Speed and sensitivity are the subject of Chapter 20.

The image contrast describes the range and distribution of tones produced in the final image and will depend upon the original scene contrast ratio, but is also limited by the range of densities that the material can produce. A high-contrast material will produce an image containing mainly highlights and shadows with fewer mid-tones. A lower contrast material will record more information in the mid-tones and will have a smaller density range from minimum to maximum (‘dynamic range’). The silver halide crystals in a single emulsion generally vary in size (the emulsion is ‘polydisperse’). The random dispersion of the crystals, and their number, size and shape, mean that a continuously varying range of tones can be produced, as different sized crystals will form latent images at different exposure levels. This will ultimately determine the contrast of the material. The material contrast can, however, be altered during photographic processing by suitable selection of developing agents. Study of the tone reproduction characteristics of materials and systems is an important part of performance evaluation (see Chapter 21).

Developed silver specks in photographic emulsions tend to form random clumps as a result of their distribution and this leads to the visual effect of photographic grain. It causes a visual sensation of non-uniformity in an area of uniform tone similar to the effects of noise, and the subjective perception of this is referred to as ‘graininess’. Graininess depends upon density level. At low densities, there is much less clumping and at high densities the visual system cannot distinguish between the individual grains; therefore, graininess is much more visible in mid-tones. The graininess of a negative is more important than that of a printing paper due to the enlargement required for printing. Granularity is an objective measure of the minute fluctuations of densities in an image. Factors during the development process such as developing agent and degree of development also affect image granularity.

The resolution of an imaging material determines its ability to represent fine detail. In practical terms it has traditionally been measured using resolving power. The resolving power of the entire system is affected by the characteristics of the imaging material and the optical limitations of the lens, and is measured in practice by imaging a test chart containing closely spaced horizontal and vertical black and white bars at a range of spatial frequencies. The smallest pattern that can be accurately resolved, i.e. where light and dark bars can still be differentiated, defines the highest spatial frequency that the sensor can represent and is often expressed in cycles per millimetre. The resolving power of the material is affected by the contrast, graininess and turbidity of the emulsion (the level and area of diffusion of light through a photographic emulsion as it is scattered by silver halide crystals). It is determined by the point spread function of the material, which describes the average size and shape of the dispersed image of a point of light in the emulsion.

Image sharpness refers to both the subjective impression produced by the image of an edge on an observer and an objective image measure. Fundamentally we judge the sharpness of an image at edges, which are localized areas of high contrast, containing a sudden decrease or increase in density. The turbidity of photographic emulsions, however, means that there is not an abrupt change but a gradual change as a result of light diffusion through the material. An objective measure that equates to sharpness is termed acutance and is evaluated by obtaining density profiles through the edge. Acutance depends upon the shape and spread of the edge. In photographic materials acutance is often enhanced as a result of chemical ‘adjacency’ effects at either side of the edge causing the dark side of the edge to be overdeveloped and the light side underdeveloped. Acutance is influenced by a number of factors, including type of developer, degree of development and type of emulsion.

A key difference between the first films developed towards the end of the nineteenth century and what is available today is their sensitivity to different wavelengths of radiation (spectral sensitivity). Silver halide materials have a natural sensitivity to the short wavelength end of the visual spectrum, including ultraviolet, violet and blue. These early photographic materials were therefore only capable of recording some of the light from the exposure and would not record longer green or red wavelengths. Images produced on these materials would represent blue objects as very light in tone and any objects containing green or red as very dark or black. Hermann Wilhelm Vogel, a chemist working with the collodion process in Berlin in 1873, discovered that by adding a minute amount of dye to the collodion, he could sensitize it to yellow light, producing better tonal rendition. These materials were known as orthochromatic. By 1905 panchromatic materials sensitive to the entire visible spectrum were available. Today, black and white materials sensitive to the far red and infrared regions of the electromagnetic spectrum are available.

Capturing colour

Isaac Newton first hypothesized that white light was made up of a mixture of wavelengths after observing, in 1664, the way in which a glass prism could split a beam of sunlight into a spectrum of coloured light. Using a second prism he found that he could recombine the dispersed light into a single beam of white light. He also discovered that it was possible to obtain light of a single colour by masking off the rest of the spectrum.

In 1802, Thomas Young showed that white light could be matched by an appropriate mixture of three lights containing narrow bands of wavelengths from the blue, green and red parts of the spectrum. He suggested that rather than the eye containing receptors for every hue, it contained only three different types of photoreceptor, each of which was sensitive to different wavelengths of light. His ideas were further extended by Hermann von Helmholtz 50 years later, who showed, using colour matching experiments, that in people with normal colour vision it was indeed possible to use only three wavelengths to match all other colours within the normal visible range. Helmholtz proposed that the receptors were sensitive to broad bands of short (blue), medium (green) and long (red) wavelengths. The combination of the responses of the three types of receptor would be interpreted by the brain as a single colour, the nature of which would be determined by the relative strength of response of each. This became known as the Young–Helmholtz theory of colour vision. The photoreceptors are known as cone receptors and are indeed sensitive to light from the three different parts of the visible spectrum as proposed (although this is a somewhat simplified version of the actual mechanism of colour vision, and is now recognized as only a single stage; see Chapter 5 for details of the opponent theory of colour vision). Red, green and blue are known as the primaries. Matching colours by mixing three primaries in this way is known as trichromacy and is the fundamental principle on which both photographic and digital colour imaging are based.

Figure 1.5   (a) Additive mixing of red, green and blue light. (b) Subtractive colour synthesis using cyan, magenta and yellow colorants.

A Scotsman, James Clerk Maxwell, demonstrated in 1861 that this theory could be used as a basis for producing colour photographs. He produced three separate photographic images of some tartan ribbon by exposure through red, green and blue filters, which he then printed to produce positive lantern slides. Filters of the same colour as those used at image capture were then placed in front of their respective positives; when light was projected through them and the three images were registered, a colour photograph of the original scene was produced. This was an additive process of colour mixing, using the addition of light of the three primaries to produce all other colours. If equal amounts of the three primaries are added together, they produce white light. Figure 1.5a illustrates the additive system of colour reproduction.

The three colours created by mixing any two of the primaries, cyan, magenta and yellow, are the complementary colours, sometimes called the secondaries, or subtractive primaries. Overlaying different amounts of these three colorants on paper is known as subtractive colour mixing. Subtractive methods are based upon the absorption of light, each secondary colour absorbing (or subtracting) light of the colour that is opposite to it in the diagram in Figure 1.5b, i.e. yellow absorbs blue, etc. When cyan, magenta and yellow are combined, they subtract all of the three additive primaries and black is produced.
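Both principles reduce to very simple arithmetic on normalized intensities: additive mixing sums the red, green and blue light, while an idealized subtractive system starts from white and removes whatever each colorant absorbs (cyan absorbing red, magenta green and yellow blue). The sketch below assumes ideal block dyes, which real inks and dyes only approximate.

    import numpy as np

    # Additive mixing: equal amounts of red, green and blue light sum to white.
    red   = np.array([1.0, 0.0, 0.0])
    green = np.array([0.0, 1.0, 0.0])
    blue  = np.array([0.0, 0.0, 1.0])
    print(red + green + blue)                # [1. 1. 1.] -> white

    # Idealized subtractive mixing: each colorant subtracts its complementary
    # primary from white light (cyan removes red, magenta green, yellow blue).
    def subtractive_mix(cyan, magenta, yellow):
        white = np.array([1.0, 1.0, 1.0])
        absorbed = np.array([cyan, magenta, yellow])   # fractions of R, G, B absorbed
        return white - absorbed

    print(subtractive_mix(1.0, 1.0, 1.0))    # [0. 0. 0.] -> black
    print(subtractive_mix(0.0, 1.0, 1.0))    # [1. 0. 0.] -> red (magenta + yellow)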

After the development of panchromatic black and white film it was possible to produce emulsions sensitive to different wavelengths of light. It was proposed that multilayer colour materials could be produced by adding substances to the emulsion that would produce coloured dyes on development. By adding an extra stage to the photographic process before fixation, in which the developed silver was bleached away, transparent coloured photographic layers were left. In 1935, Eastman Kodak produced Kodachrome, the first practical multilayer film, using this process.

Today the majority of both film and paper colour photographic materials are based upon the integral tripack structure, in which red-, green- and blue-sensitive silver halide emulsions produce layers of cyan, magenta and yellow dyes. These layers combine to subtract wavelengths of light from white light when it is projected through a piece of film, either when viewing a transparency or printing from a colour negative. In print materials the layers of complementary dyes subtract wavelengths from white light reflected off the white paper backing.

DIGITAL IMAGING

Early digital images

The first digital images were used in the newspaper industry. Pictures were sent by submarine cable across the Atlantic between London and New York in the 1920s, being reproduced at the other end using a specially adapted telegraph printer. They were digital in that the images were coded into discrete values and reconstructed using five levels of grey. The development of digital imaging into a process that can easily be used as an alternative to traditional photographic processes has, however, relied on the development of the computer. By the late 1970s cameras based upon the CCD as an image sensor had been developed for research use, but they had not yet appeared in the public domain. The first electronic camera available to the public was the Mavica, announced by Sony in August 1981.

The CCD was originally developed in 1969 as a shift register memory store for computers. It is based upon the use of doped silicon, which is a semiconductor: at absolute zero (zero kelvin) it behaves as an insulator, but it becomes an electrical conductor at room temperature if energy is applied to it. The CCD image sensor consists of metal oxide semiconductor (MOS) capacitors arranged in a grid, each corresponding to a pixel position. On light exposure, an electrode on top of each capacitor causes an electric charge proportional to the light intensity to be collected in the silicon substrate underneath. Charge coupling occurs after exposure: the charge at a pixel position is passed from one pixel to the next until it is transferred off the chip. The signal is then amplified, sampled, quantized and encoded to become a stream of digital data.

The development of digital computers began in the 1940s with ideas that would later be incorporated into the design of the central processing unit (CPU). Progressive developments of the transistor, high-level programming languages, the integrated circuit, operating systems and the microprocessor led to the introduction of the IBM personal computer in 1981. Developments in hardware and software have meant that a large percentage of the population in Western countries now have access to a computer, and are able to capture, view, manipulate and transmit images digitally.

Parallel to developments in computer technology, resulting in smaller components and more streamlined, sophisticated but user-friendly systems, digital cameras have become smaller and single area arrays have become larger. Image sensors were initially smaller than a full frame of the ‘equivalent’ photographic format, as they were expensive and difficult to manufacture in larger sizes. An early issue was therefore resolution, which was limited by pixel size but also by the size of CCD area arrays. However, the manufacturing process has improved dramatically, allowing full-frame equivalents to film formats in the last few years. In the consumer market, the smaller image format has meant that the image diagonal is reduced and lenses can be placed closer to the sensor, resulting in camera bodies that have become progressively more compact. As electronic components have become more miniaturized, the processing on the camera has become more complex, allowing the user to select specific zones within the image to meter from, different metering modes, ISO speed ratings, white balance of neutrals, resolution, file format and compression. Less complex but even smaller digital cameras are now standard features of most mobile phones.

The CMOS sensor

In recent years a new class of image sensor has been developed. Although still silicon-based technology, CMOS sensors differ from CCDs in that the charge is amplified at each pixel site and analogue-to-digital (A/D) conversion is carried out on the chip itself. Digital data are then transported off the chip. One advantage of this arrangement is that each pixel can be addressed and read out individually. CMOS sensors are also cheaper than CCDs to manufacture, less likely to contain defects and consume less power, which is of particular importance in a digital camera in terms of shutter lag and refresh time.

Because of the extra circuitry at each pixel site, the image sensing area of the pixel is smaller than that of an equivalent CCD pixel. This means that the ratio of image signal to the noise generated by the electronic components is lower, and therefore images from early CMOS sensors were noisier and of lower quality than those from CCDs. Until a few years ago, CMOS image sensors were common in mobile phones but used less in digital cameras. Improvements in the technology and the use of advanced image-processing techniques on the sensor have meant that CMOS is now the choice in a number of high-end professional cameras, with full-frame area arrays equivalent to the 35 mm film format (for example, the Canon EOS-1Ds Mark III, which has a 24 × 36 mm sensor containing approximately 21.1 million effective pixels).

Colour digital capture

Early digital cameras produced monochrome digital images, the sensor sampling intensities from the original scene. Trichromacy was used to produce colour in digital cameras using additive capture of three separate images corresponding to the red, green and blue content of an original scene. This was achieved by placing dichroic filters in front of three separate sensors that were exposed using a beamsplitter. This gave red, green and blue pixel values, which were then combined to represent the colour in the scene. Cameras based on this system are bulky however, as the camera body needs enough space to house the beamsplitter and three sensors. This system has been virtually replaced in the majority of digital cameras by the use of a single sensor on which coloured filters are overlaid. They are commonly arranged in a Bayer pattern, illustrated in Figure 1.6, and are each sensitive to a band of wavelengths of red, green or blue light.

After capture of a single value at each pixel site, values for the other two colour channels are created by interpolating between the values for each channel from surrounding pixels. The interpolation process produces estimates of the values that would have been produced had they actually been sampled; this does not improve resolution and, because interpolation is an averaging and therefore a blurring process (see Chapters 14, 24 and 27), it can result in a loss of image quality. A point to note in the Bayer array is that there are twice as many green-sensitive pixels as red- or blue-sensitive ones. This is partly because the peak sensitivity of the human visual system is to green light, at around 555 nanometres in normal daylight conditions, meaning that errors in the green channel are more noticeable to us than errors in the blue or red channels.
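A minimal sketch of the interpolation (demosaicing) step is given below for a single missing green value, using plain bilinear averaging of the four green neighbours; the mosaic values are invented, and real cameras use considerably more sophisticated, edge-aware algorithms.

    import numpy as np

    # A tiny Bayer mosaic (one value captured per pixel). Filter pattern:
    #   R G R G
    #   G B G B
    #   R G R G
    #   G B G B
    mosaic = np.array([
        [120,  60, 118,  62],
        [ 58,  30,  61,  33],
        [119,  63, 121,  60],
        [ 62,  31,  59,  29],
    ], dtype=float)

    def interpolate_green(mosaic, row, col):
        """Estimate the missing green value at a red or blue site by
        averaging its four green neighbours (bilinear demosaicing)."""
        neighbours = [mosaic[row - 1, col], mosaic[row + 1, col],
                      mosaic[row, col - 1], mosaic[row, col + 1]]
        return sum(neighbours) / len(neighbours)

    # Green estimate at the blue pixel in row 1, column 1:
    print(interpolate_green(mosaic, 1, 1))   # (60 + 63 + 58 + 61) / 4 = 60.5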

Figure 1.6   Arrangement of RGB sensors in a Bayer colour filter array.

A more recent development in colour capture technology is the Foveon™ sensor, which in 2002 became available in the Sigma SD9 camera. The Foveon™ sensor has an arrangement much closer to that of integral tripack silver halide materials, making use of the fact that red, green and blue wavelengths of light penetrate to different depths in a silicon substrate. Values are captured for all three channels at every pixel site, meaning that it produces high-quality, full-resolution colour images.

Other digital devices

The digital imaging chain consists of a number of devices other than the digital camera. The individual technologies and alternatives available at each stage of the imaging chain will be considered in more detail in later chapters. So far we have considered the digital camera as capture device; however, an alternative device for digital capture is the scanner, used to digitize photographic images. Flat-bed scanners were originally developed for the reflection scanning of print materials, although many now incorporate a transparency hood for scanning film materials. Dedicated film scanners provide higher quality scans from transparent media, as they are designed to record image data with a greater dynamic range (range of densities from shadow to highlight) than that encountered in print materials. Scanning film will always produce a better quality scan than scanning a print, as the print is a second-generation reproduction of the original scene. Drum scanners are at the professional end of the market, with the highest resolution and dynamic range. Like the digital camera, scanners use an RGB additive system of colour representation, with individual image sensors filtered for red, green and blue wavelengths of light.

A key device in the imaging chain is the computer monitor. Until recently it was based on cathode ray tube (CRT) technology developed from television systems, in which pixels are produced by different combined intensities of red, green and blue phosphors. Today CRT displays have been largely replaced by alternative technologies, of which currently the most common is liquid crystal display (LCD) technology. In LCD devices the image is formed from the combination of RGB filtered pixels illuminated by a backlight. The monitor is an output device and, like the digital camera and scanner, represents colour using the additive combination of RGB pixel values. Each pixel on screen is made up of a group of the three colours, sometimes termed a triad. Because the monitor is used for the viewing and editing of images, it is vital that it is set up and calibrated to produce accurate and repeatable colour.

The technologies available for digital print production are diverse; however, all use a subtractive system of colour reproduction, combining different amounts of cyan, magenta and yellow colorants deposited at pixel locations. In many, a black colorant is also used to produce accurate neutral tones, rather than relying on maximum amounts of the other three colorants to create neutrals. In recent years the range of possible colours capable of being printed (the colour gamut) has been extended by using extra inks: either lighter versions of the existing inks, as in many photo-quality inkjet printers, or additional colours, such as the orange and green of the six-colour Hexachrome™ process.

Image display devices and digital printers are the subjects of Chapters 15 and 16.

DIGITAL IMAGE REPRESENTATION

Finding methods to describe and evaluate images and image characteristics is at the heart of image science. So far we have considered the development of, and differences between, the analogue and digital imaging processes. To understand the implications of these differences, it is necessary to look at the way in which the two types of image are represented. The original scene may be described by a two-dimensional function f(x, y), in which the (x, y) coordinates describe spatial position within the image frame and the function value is proportional to the intensity at that position in the original scene.

In a photographic imaging process the values of f(x, y) may be represented by measured values of the density of silver at any location in the image. The position of measurement may be taken from anywhere in the image, and the density values also change continuously. This continuous range of represented tones is a result of the random distribution through the depth of the emulsion of tiny photographic grains overlaying each other. For this reason photographic images are referred to as continuous-tone images.

In a digital image, the image is represented by a grid of distinct non-overlapping pixels. Each individual pixel is addressed by its spatial coordinates in terms of rows and columns. The image function f(x, y) is often replaced by the function p(i, j), where i and j are the row and column number and p is the pixel value. The pixels can only take particular values and are usually represented as a scale of integers. A colour image will have several values (usually a triplet such as RGB) representing each colour as a combination of the intensities of three channels. The pixel value means different things to different devices, for example the intensity of RGB coloured phosphors on a CRT, or the amount of CMYK coloured inks laid down by an inkjet printer. Figure 1.7 demonstrates this difference in representation between photographic and digital methods.
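The discrete representation maps directly on to an array in code: a greyscale image is a two-dimensional array of integers indexed by row and column, and a colour image simply carries a triplet of values at each pixel position. The pixel values below are arbitrary.

    import numpy as np

    # A 3 x 4 greyscale image: p[i, j] is an 8-bit integer, 0 = black, 255 = white.
    p = np.array([
        [  0,  64, 128, 255],
        [ 32,  96, 160, 224],
        [ 16,  80, 144, 208],
    ], dtype=np.uint8)
    print(p[1, 2])                   # pixel value at row 1, column 2 -> 160

    # A colour image holds an (R, G, B) triplet at each pixel position.
    colour = np.zeros((3, 4, 3), dtype=np.uint8)
    colour[1, 2] = (255, 128, 0)     # an orange pixel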

Figure 1.7   Analogue and digital image representation. Original image from Master Kodak PhotoCD.

If a small area of smoothly changing tone in an image is represented by a function in which the x-axis represents spatial location across one dimension of the image and the y-axis represents density, as if a cross-sectional slice had been taken through it, a continuous-tone image function might look like the one in Figure 1.8a, whereas a discrete function might look like that in Figure 1.8b.

Note that the tones in the continuous (density) function are inverted compared to those in the discrete representation next to it, as high density values relate to dark areas in the image, while high pixel values in a digital image indicate lighter tones. The discrete spacing of image samples in the digital image produces a single tone at each pixel location, and these values are also restricted to discrete levels. The effect has been exaggerated by the low number of grey levels represented. The discrete spatial positions and tonal values define two characteristics of the digital image: spatial resolution and bit depth, or grey-level resolution.

Figure 1.8   Continuous and discrete image functions.

Spatial resolution

In digital images, spatial resolution is a complex issue. The ability to represent fine detail is a combination of the number of pixels and the size of the pixels relative to the size of the imaging area. The resolution of the image is the number of pixels horizontally and vertically, and is defined at the beginning of the imaging process by the number of pixels on the original capture device, but it may be changed by up-sampling (interpolating) or down-sampling to resize the image. As well as the inherent resolution of the image, each digital device has its own resolution. Again, the input or output resolution in printers and scanners can be changed by interpolation. The resolution of input and output devices is commonly described in pixels per inch (ppi) or an equivalent measure such as dots per inch (dpi). The ppi of input devices is one factor limiting their ability to reproduce fine detail or high spatial frequencies.
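The distinction between pixel count and real detail can be illustrated with a simple nearest-neighbour up-sampling sketch (assuming NumPy): interpolation increases the number of pixels, but each new pixel is derived from existing ones, so no new scene information is added.

import numpy as np

# A tiny 2 x 2 greyscale image (8-bit pixel values).
small = np.array([[ 10, 200],
                  [ 90, 250]], dtype=np.uint8)

# Nearest-neighbour up-sampling to 4 x 4: each pixel is simply repeated,
# so the pixel count rises but no new scene detail is created.
big = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

print(big.shape)   # (4, 4)
print(big)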

Required output resolution depends upon the device being used. If an image is displayed on a computer monitor, with groups of RGB phosphors representing each pixel, at a spatial resolution of 72 ppi or above, then the eye perceives a continuous-tone image. For printed images higher resolutions are necessary. Many printing technologies use ‘half-tones’ or ‘digital half-tones’ to create the illusion of continuous tone, by using groups of ink dots to represent a single pixel; this is described later in the book. In this case the number of pixels printed per inch may be defined in ‘lines per inch’, a line being a group of ink dots corresponding to a single pixel; the terminology comes from the press industry. The required printing resolution for the print industry is usually quoted as 300 dpi, while a resolution of around 240 dpi has been found adequate for printing on desktop inkjet printers. This subject is covered in more detail in Chapters 16 and 25.
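As a simple worked example of output resolution, the sketch below estimates the largest print dimensions obtainable from a given pixel count at the 300 dpi commonly quoted for the press and the roughly 240 dpi adequate for desktop inkjet printing; it ignores any resampling performed by the printer driver.

def print_size_inches(pixels_wide, pixels_high, output_ppi):
    """Largest print dimensions (in inches) for a given pixel count and
    output resolution, with no resampling."""
    return pixels_wide / output_ppi, pixels_high / output_ppi

# A 3072 x 2048 pixel image at press and desktop inkjet resolutions.
print(print_size_inches(3072, 2048, 300))   # approximately (10.2, 6.8) inches
print(print_size_inches(3072, 2048, 240))   # approximately (12.8, 8.5) inches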

Spatial resolution has long been considered one of the major limitations of digital imaging compared with analogue processes; however, as sensor technology improves, pixel sizes are approaching the scale of the photographic image point, and digital sensor resolution is now considered adequate for many professional photographic applications.

Bit depth

A digital image is encoded as binary data, with a string of binary digits representing a single pixel. A binary digit (bit) can only take a value of 0 or 1 and the arrangement of digits will define the pixel value. The number of binary digits used for each pixel will define how many unique codes can be created and therefore how many different pixel values can be represented. For example, exactly 2 bits can produce the following four unique codes and no more – 00, 01, 10 and 11 – and can therefore be used to represent four values.

The visual importance of an adequate number of grey levels in a digital image can be clearly seen in Figure 1.9.

The image in Figure 1.9a, showing a 21-step grey scale, illustrates an interesting visual phenomenon. At the top of the grey scale the greys change continuously, providing a smooth gradation of tone from black to white. The bottom half of the grey scale shows a series of steps in tonal value, with the visual difference between each adjacent pair appearing equal. These steps have been produced from a digital image, and the pixel values within each step in the middle and bottom strips of the grey scale are uniform. In the horizontal region in the middle of the image the steps have no white boundary between them, yet the image appears ‘scalloped’, as if there were an edge at each step: the area on the darker side of each boundary appears darker than the rest of its step, and the area on the lighter side appears lighter than its step. This is believed to be caused by a ‘sharpening’ process carried out during the processing of the visual signal by the brain, and the visual effect is known as Mach banding. In Figure 1.9b, which is represented by 5 bits per pixel (bpp), noticeable jumps appear as contours over the whole of the image. Because the human visual system is so sensitive to jumps in tone, particularly in relatively uniform areas, image values must be quantized to a fine enough level to produce the appearance of continuous tone (Chapter 21).

The number of different levels or pixel values that may be represented is given by the expression: levels = 2^b, where b is the number of bits allocated per pixel. The maximum number of intensity levels that can be differentiated by the human visual system at a particular luminance level has been approximated at around 180; a greyscale digital image therefore needs at least this number of individual tonal values to appear continuous in tone. An 8-bit pixel may take 2^8 = 256 different values, which leaves some levels spare for tonal manipulation, and an 8-bit image is therefore the minimum standard used to produce photographic quality. The pixels in an 8-bit image are commonly displayed in image-processing applications on a range from 0 (black) to 255 (white). Modern capture devices often allow 16 bits per pixel for improved tonal rendition; however, their use may be limited by the functions available in image-processing software and by the range of image file formats that will encode 16-bit images.
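This relationship is easily verified in a few lines of code; the function below simply evaluates 2^b for a range of bit depths.

def grey_levels(bits_per_pixel):
    """Number of distinct tonal values representable with b bits: 2^b."""
    return 2 ** bits_per_pixel

for b in (1, 2, 5, 8, 16):
    print(b, "bits:", grey_levels(b), "levels")
# 1 bit gives 2 levels, 5 bits the 32 levels of Figure 1.9b,
# 8 bits the 256 levels of Figure 1.9a, and 16 bits 65,536 levels.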

Figure 1.9   Bit depth and tone reproduction in digital images. (a) Eight-bit image = 256 grey levels. (b) Five-bit image = 32 grey levels.

Colour representation

Colour images are represented by separate channels, each of which is allocated the same number of bits as the equivalent greyscale image. An RGB image using 8 bits per channel (24 bits per pixel in total) can therefore represent 256 × 256 × 256 combinations, approximately 16.7 million individual colours; RGB may also be encoded at 16 bits per channel. Colour images can exist in a variety of colour spaces other than RGB, including CMY for printing. More intuitive colour spaces represent pixels in terms of hue, saturation and intensity (or lightness), and variations on the same theme. The colour discrimination of the human visual system is not as sensitive as its tonal discrimination, and hence in certain applications it is useful to separate tonal information from colour. Colour spaces such as CIELAB represent the image using three channels, in which a pixel has one coordinate for lightness and two relating to chroma, representing red–green and blue–yellow opponent channels. Colour spaces and colour management between devices are dealt with in Chapters 23 and 26.
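The separation of tonal from colour information can be sketched with a simple luma and colour-difference encoding using the ITU-R BT.601 weighting of R, G and B; note that this is only an illustration of the principle and is not the CIELAB transform, which additionally models the non-linearity of visual response.

def rgb_to_luma_chroma(r, g, b):
    """Split an 8-bit RGB pixel into one luminance-like value and two
    colour-difference values (ITU-R BT.601 luma weights). This illustrates
    the tone/colour separation; it is not CIELAB."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma: tonal information
    cb = b - y                               # blue-yellow difference
    cr = r - y                               # red-green difference
    return y, cb, cr

print(rgb_to_luma_chroma(200, 120, 40))   # a warm orange: luma ~135, Cb negative, Cr positive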

File size and file formats

The file size of a digital image is a function of the number of pixels and the number of bits allocated to each pixel. It is important to distinguish between the size of the uncompressed raw image data and the size of the file when the image is saved in a particular format, which may be smaller or larger depending upon the compression method used and on what other information the file contains, such as a header, EXIF metadata, layers or alpha channels.

File size (bits) = number of pixels × number of bits per pixel

Dividing this figure by 8 converts it to bytes; dividing by a further (1024 × 1024) converts it to megabytes. The number of pixels depends upon the required resolution, but to produce an image approximately equivalent to a 35 mm film frame requires around 3072 × 2048 pixels. For a 24-bit RGB image this gives a file size of 3072 × 2048 × 24 = 150,994,944 bits, which is 18,874,368 bytes, or 18 MB.
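The calculation can be expressed directly in code; the function below reproduces the 35 mm-equivalent example from the text.

def file_size_megabytes(pixels_wide, pixels_high, bits_per_pixel):
    """Uncompressed image data size: pixels x bits per pixel, converted
    from bits to bytes (/8) and then to megabytes (/1024/1024)."""
    bits = pixels_wide * pixels_high * bits_per_pixel
    return bits / 8 / (1024 * 1024)

# The 24-bit RGB example from the text, roughly equivalent to 35 mm film.
print(file_size_megabytes(3072, 2048, 24))   # 18.0 MB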

The size of digital image files has important implications throughout the imaging chain, in terms of speed of transmission, image processing and storage. If the image is to be displayed on the internet, file size matters even more because of the speed at which it must be transmitted. A reduction in either spatial resolution or bit depth will reduce file size, with an associated reduction in image quality, so a balance must be struck between limiting file size and maintaining image quality. File size is, however, less of an issue than it used to be: advances in computer technology mean that cheaper hard drives with larger capacities are available, small enough to be portable, alongside developments in other portable storage media such as USB memory sticks, and the widespread use of broadband internet access means that transmission of images is also quicker.

The different image file formats available have different properties, which will be dealt with later in the book. The most useful formats are standardized, meaning that an image file can be opened in different applications and across platforms. Examples of commonly used standard or de facto standard storage formats include Tagged Image File Format (TIFF), Joint Photographic Experts Group (JPEG), Graphics Interchange Format (GIF), Photoshop format (PSD) and JPEG 2000. They may be broadly divided into those that compress losslessly, such as TIFF, and those that apply lossy compression, such as JPEG, giving a smaller file size with a loss in quality. Compression is achieved by the removal of redundancy (unnecessary data or information) from the image, either by organizing the data more efficiently or by removing less visually important information.
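As an illustrative sketch only, assuming the third-party Pillow and NumPy packages are installed and using placeholder file names, the code below saves the same image data losslessly as an LZW-compressed TIFF and lossily as JPEG at two quality settings.

import numpy as np
from PIL import Image

# A synthetic 8-bit RGB image used purely as a stand-in for real data.
array = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
image = Image.fromarray(array)

# Lossless: TIFF with LZW compression preserves every pixel value exactly.
image.save("example.tif", compression="tiff_lzw")

# Lossy: JPEG discards visually less important information; lower quality
# settings give smaller files at the cost of visible artefacts.
image.save("example_q85.jpg", quality=85)
image.save("example_q30.jpg", quality=30)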

An important development in file formats has been the introduction of the RAW file. This is a file containing sensor data that has undergone minimal processing after analogue-to-digital conversion (ADC). It is as close as possible to the data ‘as seen’ by the image sensor and can be opened in image-processing software for more accurate processing before being converted to a standard format such as TIFF. Currently the RAW files captured by digital cameras are proprietary and differ in structure and content; for example, Canon uses CRW, Nikon NEF and Fujifilm RAF. Conversion software is supplied with each camera, but it is likely that a standardized format will emerge in future years, and recent versions of Adobe Photoshop contain a plug-in converter, Camera Raw, which can convert the majority of RAW formats.
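Purely as an illustrative sketch, assuming the third-party Python packages rawpy and imageio are installed and using placeholder file names (this is not the manufacturers' own conversion software, nor Adobe Camera Raw), a RAW file can be converted to a standard format along these lines.

import rawpy
import imageio

# Open a proprietary RAW file (NEF, CRW, RAF, ...) and apply a default
# demosaic, white balance and colour conversion, then save a standard format.
with rawpy.imread("capture.nef") as raw:
    rgb = raw.postprocess()          # returns an 8-bit RGB NumPy array

imageio.imwrite("capture_converted.tiff", rgb)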

Chapter 17 describes commonly used image file formats; image compression is the subject of Chapter 29.

IMAGING CHAINS

The imaging chain describes the stages, in terms of devices or processes, involved from original scene to final output. The methods available to record, manipulate, store and transmit images are diverse and may be entirely based on traditional silver halide-based materials (here referred to as ‘photographic’) and techniques, or electronic processes using digital image sensors and devices. Inevitably, comparisons are made between the two types of system and the images produced, but to truly understand them is to realize that they are very different in nature and that sometimes direct comparisons are misleading. ‘Hybrid’ imaging chains contain aspects of both and can therefore combine some of the advantages of both. The concept of analogue and digital imaging chains is illustrated in Figure 1.10.

Figure 1.10   Imaging chains.

An example of a hybrid imaging chain acquires images on photographic film, digitizes them in a film scanner, and then processes and outputs them using an inkjet printer. This combines the initial quality of the film image with the convenience of having a digital record of the image to process. An image on photographic material has a shelf-life determined entirely by the physical properties of the material and the way in which it is stored; a digital image is simply an array of numbers and can theoretically be stored forever, as long as copies are archived. With the improvements in digital capture technology, it is likely that an entirely digital route will eventually become the most commonly used. However, alongside the move towards digital imaging chains the imaging process has come full circle, with high-end printing devices such as the Fuji Pictrography™ ‘writing’ digital images on to silver halide-based materials. Because the colour layers are semi-transparent, the image has the colour and archival properties of photographic materials, yet has been obtained and processed by the more immediate and versatile digital route.

In evaluating the merits of the two imaging chains, it is necessary to consider the different nature of image representation, alongside the processes involved. Photographic technologies are a known quantity, and photographic science is a well-established discipline in which a variety of performance measures have been developed, to allow evaluation of systems and materials. Photographic images are high quality and continuous tone, and the achievable results are known and repeatable. However, chemical processing is messy, time-consuming, expensive and bad for the environment. It is likely that this will be a key factor in the eventual demise of the use of photographic materials and processes.

The discrete nature of digital data results in a trade-off between file size and image quality. Certain artefacts are inherent in digital data as a result of that discrete nature, and further artefacts may be created as the image moves through the devices in the imaging chain, especially if lossy compression is used. The discrete nature of the data is also its strength, however: because the image is represented by an array of numbers, image processing is achieved by numerical manipulation. Consider this alongside the explosion in the use of the internet over the last ten years. We have become accustomed to the immediacy of digital imaging, the ability to view images as soon as they are captured, real-time image processing on screen and fast transmission to the other side of the world over the internet. With each new development to improve image quality, digital imaging is set to take over from analogue in all but a minority of applications.

EVALUATING IMAGE QUALITY

Image quality is the subject of Chapter 19, hence only a brief introduction is given here. Image quality has been defined by Engeldrum (2000) as the ‘integrated set of perceptions of the overall degree of excellence of an image’. Perceived image quality is the result of a complex interaction of the responses of the human visual system to a variety of image attributes. These attributes are not independent of each other; an increase in one attribute may serve to enhance or suppress the visual response to another and therefore it is logical to consider them as a whole. There are a number of physical attributes of images which may be measured and used as benchmarks for image quality and these are summarized in Table 1.2. It is important to recognize that measurement of a single attribute provides a limited correlation with actual perceived image quality. A number of image quality metrics have hence been developed which take into account the combined effects of multiple variables.

Digital imaging has led to differences in approach to the measurement of image quality. This is partly because of the input of different disciplines in the development and evaluation of digital technologies and processes. The process of sampling means that there is a loss of information from the original scene simply because the image is digital. At all stages in the imaging chain, image data is manipulated and processed, causing artefacts and errors in the final result. Lossy compression also results in differences in the output image. A simple approach to the evaluation of an image or a process is to measure the difference between the original and the output image. A variety of measures of numerical differences such as mean absolute error, root mean square error and peak signal-to-noise ratio have been developed and applied in the evaluation of compression algorithms. These are generally termed distortion metrics.
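As an illustration of such distortion metrics, the following sketch (assuming NumPy) computes mean absolute error, root mean square error and peak signal-to-noise ratio between an original image and a degraded copy; the test data here is synthetic.

import numpy as np

def distortion_metrics(original, degraded, max_value=255.0):
    """Mean absolute error, root mean square error and peak signal-to-noise
    ratio (in dB) between an original image and a degraded reproduction."""
    diff = original.astype(np.float64) - degraded.astype(np.float64)
    mae  = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    psnr = 20 * np.log10(max_value / rmse) if rmse > 0 else float("inf")
    return mae, rmse, psnr

# Compare an 8-bit image with a noisy copy of itself.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
noisy = np.clip(original + rng.normal(0, 5, original.shape), 0, 255)
degraded = noisy.astype(np.uint8)
print(distortion_metrics(original, degraded))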

Simple objective measures of image attributes do not always correlate well with subjective image quality. The weakness in such a straightforward approach is that the measures evaluate the imaging chain without the observer. A true measure of image quality must define the quality of the image as it appears to the observer. The quantification of subjective perception is termed psychophysics, which measures and describes the relationships between physical stimuli and subjective response. In psychophysical experiments, image quality is quantified by the responses of observers to changes in images, providing scales of values on which images are ranked, differences in the scale relating to perceived differences. Such experiments may investigate the effects of changing one or a multitude of image attributes, or the effect of changing the viewing conditions or environment, or may provide a scale of overall perceived image quality. Studies may be conducted to identify the point at which a change in an image is noticeable (just-noticeable difference) and relate it to changes in physical attributes. This is the measurement of image fidelity. The study of overall image quality involves observers identifying preferred image quality.

Table 1.2   Physical measures of image quality

ATTRIBUTE | PHYSICAL MEASURE
Tone (contrast) | Tone reproduction curve, characteristic curve, density, density histogram, pixel values
Colour | Chromaticity, colour difference, colour appearance models
Resolution (detail) | Resolving power (cycles per mm, dpi, ppi, lpi)
Sharpness (edges) | Acutance, PSF, LSF, MTF
Noise (graininess) | Granularity, noise power spectrum, autocorrelation function, standard deviation, RMSE
Other | DQE, information capacity, file size, life expectancy (years)

It is impractical for manufacturers to rely entirely on the results of psychophysical experiments, however, as they are time-consuming to set up, perform and analyse. Image quality metrics have therefore been developed to model the predicted response of human observers. By combining models of the responses of the human visual system with other physical measures of image attributes, such metrics can be used to estimate image fidelity or image quality without the inconvenience of psychophysical experiments. They are tested against data from psychophysical experiments to see how well they correlate with perceived quality, and a variety of metrics is often evaluated over the course of an image quality study.

BIBLIOGRAPHY

Dainty, J.C., Shaw, R., 1974. Image Science. Academic Press, London, UK.

Engeldrum, P.G., 2000. Psychometric Scaling. Imcotek Press, Winchester, USA.

Gonzalez, R.C., Woods, R.E., 2002. Digital Image Processing. Prentice-Hall, New Jersey, USA.

Graham, R., 1998. Digital Imaging. Whittles Publishing, Scotland, UK.

Hunt, R.W.G., 1994. The Reproduction of Colour. Fountain Press, Kingston-upon-Thames, UK.

Jacobson, R.E.J., Ray, S.F.R., Attridge, G.G., Axford, N.R., 2000. The Manual of Photography, ninth ed. Focal Press, Oxford, UK.

Keelan, B.W., 2002. Handbook of Image Quality: Characterization and Prediction. Marcel Dekker, New York, USA.

Langford, M.L., Bilissi, E.B., 2007. Advanced Photography. Focal Press, Oxford, UK.

Neblette, C.B., 1970. Fundamentals of Photography. Litton Educational Publishing, New York, USA.
