1
First Contact

It could be said of photography what Hegel said of philosophy: “No other art, no other science is exposed to this ultimate degree of contempt based on the belief that one can take possession of them all at once” [BOU 65].

1.1. Toward a society of the image

To say that, over these last 30 years, a real revolution has taken the world of photography by storm and deeply modified the multiple technical, economic, industrial and societal aspects in which it develops would be an understatement.

From a technical perspective, the replacement of analog silver film by solid-state digital sensors, which tentatively began 40 years ago, emulated a transition from analog to digital found in many other fields (the telephone, television, printing, etc.). It could certainly have been no more than a significant but ultimately natural advance, of little impact for the user (is the user really conscious of the transition to digital terrestrial television, or of the phototypesetting of newspapers?). However, it has profoundly modified the concept of photography itself, bringing forward several types of original devices: first the point-and-shoot that fits in a pocket and can be forgotten, then the mobile phone and the tablet, which the photographic industry would gladly repudiate as illegitimate and somewhat degenerate children if they did not hold the promise of an inexhaustible market.

The consequences of this technical mutation have proved devastating for the established economy of photography. Major players (Kodak, Agfa, Fuji, Ilford, Minolta, etc.) which seemed to rule empires were forced to fall back on uncertain niches, to regroup and sometimes to disappear. Newcomers have settled in, whose photographic culture was often thin (Sony, Samsung), sometimes non-existent (Nokia, Apple, etc.). Other players are building kingdoms from scratch around online image services or their social networks, but these are people with Internet, computing and telecoms backgrounds, not photographic ones. While the chemical industries that produced films and derived products have naturally suffered greatly, processing laboratories, when they have not disappeared entirely, have had to be totally converted, and distribution has begun a profound transformation, whether it relates to materials or to services.

The reconfigurations of the industrial fabric, far from being completed, continue today, involving ever more closely players who once ignored each other: some coming from imaging, but many who prospered with no relation to the photography professions. On-board electronics experts, creators of integrated circuits, software developers, and key network and mobile phone players have been transformed into designers of camera bodies and objectives.

Society’s activities are themselves deeply affected by these reconfigurations. Have they accompanied the industrial mutation or are they one of its causes? It is not up to us to say. But the societal mutations of photography are equally significant and irreversible. They reveal themselves in several ways, but first of all it is the generalization of the use of photography that is striking. Generalization within the family: photography is no longer the prerogative of the pater familias, as at the beginning of the previous century, or of adults and older adolescents, as at its end. It is now a personal attribute that everyone uses, children as well as grandparents. Society as a whole is also exposed to it: there is no niche, regardless of wealth or education, that escapes it, no population, urban or rural, that does not take part in it. And its distribution is remarkably diffuse over the globe; no country is excluded, at least in the component of its population exposed to modern life.

At this point, we should recall, by rereading the famous analysis that Bourdieu and his collaborators published fifty years ago [BOU 65], how much has changed in half a century. While there is no evidence that photography is no longer the “average art” they deplored at the time, it is obvious that the family, cultural and socio-professional categories that then allowed typical behaviors with regard to photography to be identified are now completely blurred. Attitudes are surprisingly similar between a class of Parisian students on a field trip, a bus of older Japanese tourists in Capri, or the crowd gathered for the return of a rock star at Wembley Stadium. Photography is ubiquitous and permanent; it is spontaneous, individual and collective, frantic and viral, intimate and shared, and very few can escape it.

This universal infatuation with photography profoundly affects its usages. Certainly, whole swathes of the photographic “culture” remain almost unchanged (art photography, event photography (family or public), professional press, catalogue and advertising photography, scientific photography, etc.), and for these areas the reader would find Bourdieu’s analysis relevant word for word. But old facets tend to grow or become distorted, and new facets emerge: leisure or travel photography now free of “old-fashioned” conformity stereotypes (at the expense of a “young” conformity?); everyday-life photography; detailed, anecdotal, sensational, observational, unusual photography; self-portraiture alone or with others; microcosm and microcommunity photography and its culmination in the narcissistic selfie. These new forms combine an often simplified practice of photography with modern means of instantaneous, remote and mass communication; they rely on a formidable technique, both hidden and exhibited, following the rules of the best marketing. MMS and the Internet are the natural extensions of the photographic image; social networks work as their magnifier, unless they are their very purpose. It remains undecided nowadays whether YouTube, Facebook, Twitter, Instagram, Tumblr, Picasa and Flickr must be counted or not among “photography products”.

The figures say everything about this unprecedented evolution, but they explain nothing. It would be naive to imagine that it owes everything to the advent of the digital image and of its natural creation tool, the still camera. It is, however, doubtful that such a development would ever have occurred to such an extent without a simple and universal image acquisition system, fully compatible with modern communication and information processing means, having been made available. Contrary to film cameras, digital cameras play this key role.

The objective of this book is not to extend this sociological study, which is worthy nevertheless, but to explain how the digital camera works by examining in detail each of the components that constitute it.

1.1.1. A bit of vocabulary in the form of zoology

The generic term we will subsequently use the most is the term “camera”, short for “photographic camera”. This term covers the various forms of these devices, film or digital, in their various current versions. What are these forms today? We have illustrated them in Figures 1.1 and 1.2.

The SLR: SLR stands for single lens reflex, expressing that the same lens is used for viewing and for the image, as opposed to other cameras that use two different optical paths. The SLR is the long-standing reference of the film photography camera, especially in the historical 24 × 36 mm format. The optical path is switched conventionally either toward the sensor or toward the viewfinder by means of a moving mirror and a prism with a pentagonal cross-section. It has interchangeable lenses. Its digital version appeared very early on the market (from the 1990s), but with small-sized sensors, generally smaller than 24 × 36 mm. Since around 2010, it has been available with a sensor of size 24 × 36 mm.

The compact camera: this is a complete photography system of small size, which may be slipped into a pocket or a small bag so that it can be carried everywhere without any discomfort. Its lens is fixed, usually with a variable and retractable focal length. The viewing is done with a screen and not through an eyepiece. The smaller compacts are the size of a credit card. Many compacts are very simple and very intuitive to operate, but the range also offers almost professional compacts which present all the functionalities of an SLR with a very reduced size and allow, for example, work to be prepared that will eventually be continued later with a more sophisticated camera once the parameters have been defined. The compact is the first digital camera to have conquered a place on the market, in the early 1980s.

image

Figure 1.1. The four main architectures of general public digital cameras. From left to right:
- the compact: fixed objective, no eyepiece, viewing through the back screen;
- the bridge: fixed objective, viewing through an eyepiece in an optical path returned by a tilting mirror, display on the back screen after shooting;
- the single lens reflex (SLR): interchangeable objective, viewing through an eyepiece in an optical path returned by a tilting mirror, display on the back screen after shooting;
- the hybrid: interchangeable objective, viewing through an eyepiece in an electronic path or on the back screen, no tilting mirror.
Intermediate solutions exist (for example, the live view function of SLRs, which allows the image to be displayed on the back screen during focusing adjustments)

image

Figure 1.2. Professional cameras: on the left, 24 × 36 mm format SLR. In the center: medium-format camera. On the right: view camera. The diagrams are not to scale; the sizes are indicative

The bridge: this also has a fixed lens, usually a zoom, but a body, a build and an optical path similar to those of an SLR. It typically uses a prism and a moving mirror in the optical circuit allowing a reflex viewing. Its name comes from its intermediary positioning between compact and reflex. It appeared on the market in 1995 and has suffered a strong decline in its distribution in the 2010s.

The hybrid camera: this looks like an SLR because of its interchangeable objectives and its often advanced functionalities, but it does not use a prism or a moving reflex mirror in the optical path, the viewing being carried out through an electronic eyepiece. Its body therefore offers a smaller size than the SLR, but its performance and its usage are very close to those of the SLR. Technical reasons have delayed its appearance on the market where it had no significant presence until about 2010.

The medium format: this is a camera whose sensor (traditionally the film) is larger than the 24 × 36 mm. In its film version, it uses sensitive surfaces in spools and takes pictures of 4 × 4 cm, 6 × 6 cm or 6 × 7 cm. A number of digital medium formats have been available on the market for a few years, but at generally high costs, which reserve them for professional or semi-professional purposes.

View cameras: for formats beyond the medium format, cameras are referred to as view cameras (formats from 9 × 12 cm to 20 × 25 cm), which make use of plates or individually packaged film sheets. View cameras are reserved for professional applications: architecture, fashion, works of art, etc. As of 2015, there were in fact no digital sensors on the market adapted to view cameras. The very large-dimension sensors which could suit view cameras are used especially in scientific fields: microelectronics, astronomy, particle physics and remote sensing. They are still often prototypes made of mosaics of juxtaposed sensors. Moreover, for applications that allow it, very large images (typically 50,000 × 50,000 pixels and beyond) are obtained by moving a sensor (linear or matrix) with robotic mechanisms, as in biology or for capturing works of art.

Photoscopes: we will also discuss in this book the sensors that perform the photographic function of computers and tablets, as well as that of mobile phones. These devices are very similar in architecture and design to the smaller compacts. They differ from them, on the one hand, by automating most of the functions and, on the other hand, by the intensive use of communication and computing functions. They thus appear ahead in numerous technical aspects compared to their cousins solely dedicated to photography. Although the limited quality of the images they provide and the limited freedom they afford the photographer expose them to the condescension of part of the community of photographers, they are gradually becoming the most important source of pictures for the huge market that we have described above. As such, they receive the utmost attention of manufacturers and of component and software developers, and this attention is bearing fruit: they now achieve amazing performances. We will look especially at all the innovations they propose, as these are good markers of trends in photography. We will refer to them either as photoscopes or as mobile phones.

Among the new terms, and along with photoscope, the acronym “DC” is often found, which is generically used to refer to digital cameras in all their forms.

Finally, it should be noted that none of the above terms, either in French or in English, are included in the recent Vocabulaire Technique de la Photographie, [CAR 08], which reflects rather well the gap which remains within the world of photography between those who design cameras and those who use them.

In the English vocabulary, the term camera is universally recognized. It covers any device that can capture a picture (either still or moving). In the case of the new electronic cameras, more concise forms have been proposed, making use of the letters D (digital) or E (electronic) associated with acronyms that are not always very explicit, such as digital still camera (DSC), electronic still picture camera (ESPC), electronic still picture imaging (ESPI) and digital single lens reflex (DSLR).

This standardized vocabulary for photography is the subject of a recently completed ISO standard [ISO 12], but it is still seldom followed.

1.1.2. A brief history of photography

As we move forward, we have entered the videosphere, a technical and moral revolution which does not mark the peak of the “society of the spectacle” but its end. [DEB 92]

The technical components allowing the capture of an image were all available at the beginning of the 19th century, some for a long time already: the camera obscura, which constitutes the body of the view camera, had been known since antiquity and was particularly familiar to the artists of the Renaissance; the lens, which greatly enhances the captured luminous flux, can be traced back several millennia before our era but has only been really useful for the formation of images since the 12th century; and the photosensitive components, either negative (such as silver chloride) or positive (such as the bitumen of Judea, a mixture of natural hydrocarbons), were familiar to chemists at the end of the 17th century. In addition, the laws of propagation and the mysteries of light and of color had been correctly mastered for two hundred years for the former and fifty for the latter.

The first photography tests, which can be dated to 1812, were made by Nicéphore Niépce. However, while an image could by then be correctly captured, at the expense of very long exposure times, it was not stable and disappeared too quickly. Efforts therefore focused on two directions: on the one hand, improving the sensitivity of the receptors; on the other hand, and above all, preserving the image after its formation.

The first photograph from life was made in 1826 by Nicéphore Niépce of his surroundings: “View from the Window at Le Gras”. It was achieved on a pewter plate coated with bitumen and required an exposure time of 8 h.

Seeking to improve his process, Niépce tested a very large number of media and developers, the best being silver plates and iodine vapors. He entered into a partnership with Louis Daguerre in 1829 to develop what would become an industrial process: the daguerreotype. It used silver salts on a copper plate, developed with iodine vapors. Niépce died in 1833. In 1839, the daguerreotype was presented to the public and Arago officially presented photography at the Academy of Sciences. This was the beginning of the immediate commercial success of the daguerreotype, and of photography among the general public.

In Italy and in England, William Fox Talbot worked in parallel on photographic recording processes, focusing in particular on reproduction on paper (paper coated with sodium chloride and silver nitrate, fixed with potassium salts). He achieved his first photos in 1835. In 1840 he developed the paper negative, which allows several photographs to be reproduced from a single original. He generalized the use of hyposulphite of soda as a fixer and patented an original process in 1841: the calotype.

It was nevertheless the daguerreotype, free of rights, which spread widely, more than the calotype, penalized by its patent. The calotype would take its revenge later, since the principle of the photographic negative was, for at least 100 years, at the heart of the photographic industry.

Another major breakthrough was made significantly later by George Eastman, in the United States. In 1884, he proposed photographic media not on glass but on flexible celluloid cut into strips: film was born. In its wake (1888), he proposed a very compact camera body using this film that could take 100 pictures in a row. Individual photography was ready for the general public, who no longer wanted the inconvenience of bulky camera bodies, tripods and boxes of heavy and fragile photographic plates.

This evolution was confirmed by the release of the Kodak Brownie in 1900, a camera priced at $1.00 that made it possible to take 20 photos, each costing 25 cents. The market shifted from the camera to the film: the consumable became the engine of the market.

However, for most cameras, especially those of quality, sensitive surfaces remained quite large, with sides on the order of 10 cm. It was not until the Leica, in 1925, that the small format became widespread, popularizing in the process the 24 × 36 mm.

However, color photography remained to be invented. The “autochrome” process of the brothers Auguste and Louis Lumière, patented in 1903 and commercialized in 1907, was the starting point. The autochrome used potato starch, whose colored grains mixed with lampblack acted as the sensitive support. The sensitivity was nonetheless very low (equivalent to a few ISO today) and the development process was complex. The quality of the image was nevertheless exceptional and would have nothing to envy of our best emulsions, as can be seen in pictures still available a hundred years later.

The color emulsions Kodacolor (1935) and then Agfacolor (1936) appeared much later on the market and stood out in turn by the simplicity of their use and the quality of the images they provided.

In 1948, Edwin Land conceived a process that enabled the instantaneous development of a positive, the Polaroid, whose color version appeared in 1963. Despite its success, it never really competed with the emulsions developed in laboratories, which benefited from an extremely dense network of distributors.

With regard to cinema, it was Thomas Edison who filmed the first movies, from 1891 to 1894, with the kinetograph, which used the flexible film recently invented by Eastman, to which he added side perforations for transport (he thus imposed the 35 mm width that would provide the basis for the success of the 24 × 36 mm format). As early as 1895, the Lumière brothers improved on the kinetograph with a very large number of patents and gave it its well-known popular momentum.

In the field of image transmission (the forerunner of video), the German engineer Paul Nipkow was the first to study the process of analyzing a picture with a perforated disk. His work, which began in 1884, would however only be presented to the public in 1928. Edouard Belin introduced a process for transmitting photographs, first by cable in 1908 and then by telephone in 1920: the belinograph. Meanwhile, the Russian Vladimir Zworykin filed a patent for the iconoscope in 1923, based on the principle of the Nipkow disk. In 1925, John Logie Baird performed the first experimental transmission of animated pictures in London.

The first digital cameras were born at the beginning of 1975 within the Kodak laboratories. Distributed at first in a very confidential manner, rather as laboratory instruments, they first found professional use in areas very different from professional photography: insurance companies, real estate agencies, etc. They did not really compete with film cameras, which gave much better quality images. The consumer market was however very quickly conquered, followed gradually by the professional market. From the 1990s onwards, the digital camera market became more significant than the analog market, and companies relying on film disappeared one after another (AgfaPhoto went bankrupt in 2005; Eastman Kodak had to file for bankruptcy in 2012) or converted to digital (Fujifilm).

1.2. The reason for this book

In recent years, the evolution of digital photographic equipment has been considerable. It has affected the most important components of cameras (such as the sensors) and the specialized press has widely reported the progress obtained. However, the evolution has also affected aspects much less accessible to the public, either because they are too technical, because they appear as mere accessories at first analysis, or often because they are hidden within products and belong to manufacturers’ secrets. These advances relate to very varied scientific fields (optics, electronics, mechanics, materials science, computer science) and as a result do not fully find their place in specialized technical journals. In order to be used by photographers, they often require long recontextualizations that explain their role and their principles of operation. It is in this spirit that this text has been written: it reviews all the functions essential to the proper operation of the camera and explains the solutions proposed to implement them.

But before addressing these major features of the camera, we will situate, in global terms, photography within the field of science, present a few key features particularly important for the formation of the image, and introduce a bit of the vocabulary and formalism that will accompany us throughout the book. We will use this quick description to give an overview of the various chapters that follow and to indicate the manner in which we have chosen to address each problem.

1.3. Physical principle of image formation

Photography is a matter of light, and as such we will have to speak of optics, the science of light. There are numerous books covering this area, often excellent despite being a bit old [BOR 70, PER 94, FOW 90]. A few important elements concerning light, its nature and its propagation should be recalled, which will allow us to place photography within the major chapters of physics.

1.3.1. Light

Light is an electromagnetic radiation in a narrow window of frequencies. Today, it can be addressed by the formalisms of classical physics or by quantum or semi-quantum approaches. Photography is overwhelmingly explained using classical approaches: image formation is very well described in terms of geometrical optics, and fine phenomena concerning resolution are explained by the theory of diffraction, most often in a scalar and, if necessary, in a vector representation. Diffraction theory takes its rightful place when addressing the ultimate limits of resolution, one of the key problems of photography. The polarization of light occurs in a few specific points that we will take care to point out. The concept of coherence is one of the finest subtleties of photography, especially in the field of microphotography.

Only the basic phenomenon of the transformation of the photon into an electron within the photodetector relies on more advanced theories, since it is based on the photoelectric effect, which can only be explained with the help of quantum theory. However, we will not need quantum theory in this book, once the fundamental result of the photoelectric effect is accepted: the exchange of energy between a photon and an electron, provided the photon carries at least the energy necessary to change the state of the electron.

1.3.2. Electromagnetic radiation: wave and particle

Light is electromagnetic radiation perceived by the human visual system. It covers a range of wavelengths from 400 to around 800 nm, and therefore a frequency range from 7.5 × 10¹⁴ down to 3.75 × 10¹⁴ Hz, which corresponds to a transmission window of the atmosphere, on the one hand, and to a maximum of the solar emission, on the other hand.

In its wave representation, light is a superposition A of monochromatic, ideal, plane-waves characterized by their frequency ν, their direction of propagation k and their phase ϕ [PER 94]. We will represent it thereafter by a formula such as:

[1.1] A(r, t) = Σᵢ aᵢ cos(2πνᵢt − kᵢ · r + ϕᵢ)

In its particle representation, a photon of frequency ν (and therefore of wavelength λ = c/ν, with c = 299,800,000 m s⁻¹ the speed of light) carries an energy E = hν, where h is Planck’s constant (h = 6.626 × 10⁻³⁴ J s), that is, between 2.5 × 10⁻¹⁹ and 5 × 10⁻¹⁹ J or, in more convenient units, between about 1.55 eV and 3.1 eV, since the charge of the electron is 1.602 × 10⁻¹⁹ coulombs [BOR 70, NEA 11].
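These orders of magnitude are easy to verify numerically. The short sketch below (ours, not part of the original text) recomputes the photon energies at the two ends of the visible range from the constants just given:

```python
# Photon energy E = h*nu = h*c/lambda, evaluated over the visible range (400-800 nm).
H = 6.626e-34         # Planck's constant (J s)
C = 2.998e8           # speed of light (m/s)
E_CHARGE = 1.602e-19  # elementary charge (C), converts joules to electron-volts

def photon_energy(wavelength_m):
    """Return (energy in J, energy in eV) for a photon of the given wavelength."""
    e_joule = H * C / wavelength_m
    return e_joule, e_joule / E_CHARGE

for wl in (400e-9, 800e-9):
    e_j, e_ev = photon_energy(wl)
    print(f"{wl * 1e9:.0f} nm: {e_j:.2e} J = {e_ev:.2f} eV")
```

Running it reproduces the bounds quoted above: about 5 × 10⁻¹⁹ J (3.1 eV) at 400 nm and 2.5 × 10⁻¹⁹ J (1.55 eV) at 800 nm.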

A photon therefore carries a very low energy, and a normally lit scene involves a very large number of photons. It is important for image formation to consider what relationship these photons have relative to each other: these are aspects of coherence, which will be discussed in section 2.6. Let us note from now on that photography is first and foremost concerned with incoherent optics. We will also examine in section 9.7 the polarization properties of waves, but these properties are only marginally involved in photography.

Color is one of the most striking manifestations of the diversity of the frequency content of electromagnetic waves. It is also one of the most complex challenges of photography. Chapter 5 will be dedicated to it, where we will have to introduce numerous notions of physiological optics to account for the richness of human perception, which ultimately governs the choices concerning image quality.

The corpuscular aspects of the photon will be at the heart of the chapter dedicated to the photodetector (Chapter 3), as well as of the one concerning the noise affecting the image signal (Chapter 7). The wave aspects will be used to address the properties of propagation through the lens (Chapter 2), image quality (Chapter 6) and very prospective aspects related to the improvement of images, which we will see in Chapter 10.

The wave is either emitted or reflected by the object of interest to the photographer. It then travels freely in space, and then in a fashion guided by the optical system, which turns it into an image on the sensor. The simplest instrument to create an image is the pinhole camera (or camera obscura), known since antiquity as a (discreet) observation system, which uses no lens but a simple hole as its image formation system. We will examine the image constructed by the pinhole camera because it is a very convenient model which allows the simple treatment of numerous computer vision problems and remains widely used today.

1.3.3. The pinhole

The dark room (or camera obscura, or pinhole camera) is a box pierced with a small hole (of diameter d) at the center of one of its sides, with a focusing screen on the opposite side (Figure 1.3). The principle of the pinhole is well explained in the geometrical optics approximation. A detailed analysis of the pinhole is available in [LAV 03]. In this approximation, light travels in a straight line without diffraction. An inverted image of an object placed in front of the camera forms on the screen, at a scale given by the ratio between the depth p′ of the box and the distance p of the object from the hole. If the object is at a great distance p compared to the depth p′ of the box, each of its points generates a small spot homothetic to the hole, of diameter ε = (p + p′)d/p. It is thus advantageous to keep this hole as small as possible to ensure a sharp image. But the energy reaching the screen is directly proportional to the area of the hole, and for a bright pinhole camera this hole should therefore be wide. It should be noted that any object placed in the “object space” will be imaged in an identical manner regardless of its distance to the pierced side, as soon as p ≫ p′. As a consequence, there is no notion of focusing with a pinhole camera, except for very close points (p′ ≳ p), which are affected by a proportionally greater blur.

Under what conditions can the rectilinear propagation of light from geometrical optics be applied to the pinhole camera? Diffraction phenomena, which become measurable as soon as the dimensions of the openings are comparable to the wavelength, must be negligible. Let us calculate, for a wavelength λ, the diameter of the diffraction figure of a circular opening d observed on the screen (therefore at distance p′). This diameter ε′, limited to the first ring of the Airy disk (the diffraction figure of a circular aperture; this point will be examined in detail in section 2.6), is equal to ε′ = 1.22 λp′/d, and the diffraction figure will have the same extension as the geometric spot previously calculated if d ≈ √(1.22 λp′). For a green wavelength (λ = 0.5 × 10⁻⁶ m) and a camera with a depth of 10 cm, the diffraction spot equals the geometric optics spot for a hole on the order of 0.2 mm, well below what is used in practice. We are thus generally entitled to ignore diffraction in the analysis of a pinhole image.
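The trade-off between the geometric spot (which shrinks with the hole) and the diffraction spot (which grows as the hole shrinks) can be checked with a short sketch of ours. It assumes, as a reading of the discussion above, a distant object (geometric spot ≈ d), a diffraction spot of 1.22 λp′/d, and a balanced diameter d = √(1.22 λp′):

```python
import math

def pinhole_spots(d, p_prime, wavelength=0.5e-6):
    """Geometric spot (~ d for a distant object) and diffraction spot 1.22*lambda*p'/d."""
    geometric = d                                  # distant object: spot ~ hole diameter
    diffraction = 1.22 * wavelength * p_prime / d  # first ring of the Airy disk on the screen
    return geometric, diffraction

def balanced_diameter(p_prime, wavelength=0.5e-6):
    """Hole diameter for which the two spots are equal: d = sqrt(1.22*lambda*p')."""
    return math.sqrt(1.22 * wavelength * p_prime)

d_opt = balanced_diameter(0.10)  # 10 cm deep box, green light
print(f"balanced hole diameter: {d_opt * 1e3:.2f} mm")
```

For the 10 cm box of the text, this gives about 0.25 mm, consistent with the “order of 0.2 mm” quoted above.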

1.3.4. From pinholes to photo cameras

1.3.4.1. The role of the lens

In order to increase the amount of light received on the screen, we replace the opening of the pinhole camera by a spherical lens: we build a camera. Between conjugate planes of the lens, all the rays issued from an object point M converge at an image point M′ (approximate stigmatism of lenses in the paraxial approximation). This solves the energy problem, since the whole aperture of the lens is now used to capture the light issued from M. The price of this gain is a selectivity among the object planes that are seen sharply: only those close to the plane conjugate to the screen are sharp; the others will be more or less blurred. Consequently, it is important to provide a mechanism to ensure this focusing.

image

Figure 1.3. Pinhole – On the left: diagram of the formation of the image in a pinhole camera. On the right: if the object point M is far away from the camera (p′ ≪ p), its image M′ is a circle of diameter approximately equal to that of the opening: ε = d

1.3.4.2. Image formation

The focal plane in a camera is selected by translating the lens along its optical axis so as to vary its distance to the image plane. This movement can be automated if the camera has a telemetry function that measures the distance to the object (section 9.5). In order for the points M and M′ to be conjugate, they must verify the conjugation relations. Let us recall them. These are the Snell–Descartes relations (with origin at the optical center) or Newton’s (with origin at the focal points) [FOW 90]. With s = p − f and s′ = p′ − f (Figure 1.5):

[1.2] 1/p + 1/p′ = 1/f (Descartes)    and    ss′ = f² (Newton)

Let us also recall the magnification relations. The transversal magnification G and the longitudinal magnification Γ of the image, which will be useful later, are equal to:

[1.3] G = −p′/p = −s′/f = −f/s and Γ = dp′/dp = −G²

The sign “−” reminds us that the image is reversed. The other planes become increasingly blurry as we gradually move away from P, upstream as well as downstream. It is a defect well known to any photographer (see Figure 1.4).
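These relations are easy to verify numerically. The sketch below, with distances taken as positive on both sides of the lens, checks that the Descartes and Newton forms agree and that the expressions of G coincide; the values chosen are illustrative.

```python
# Check of the conjugation relations [1.2] and the transversal
# magnification [1.3] for a thin lens; p and p' are positive distances
# measured from the optical centre.
f = 0.05                          # focal length: 50 mm
p = 2.0                           # object distance: 2 m
p_prime = 1 / (1 / f - 1 / p)     # Descartes: 1/p + 1/p' = 1/f
s, s_prime = p - f, p_prime - f   # Newton coordinates (origins at the focal points)
print(s * s_prime, f ** 2)        # Newton: s * s' = f^2

G = -p_prime / p                  # transversal magnification; the sign marks the inversion
print(G, -f / s, -s_prime / f)    # the three expressions of G agree
```

For a distant object (p ≫ f), G is small and negative: the image is strongly reduced and inverted.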

image

Figure 1.4. The depth of field is limited: the sharpness of details decreases regularly upstream and downstream of the plane chosen for focusing

We can see in Figure 1.5, on the left, what happens for a point Q located in a plane at distance q ≠ p from the lens. Its image converges at Q′, in a plane Θ different from Π. The image spot of Q on Π is all the larger as the aperture is wide and the point Q is far from the focus plane. The opening of the lens is in practice controlled by a diaphragm (Figure 1.5, on the right).

The calculation of the depth of field of a photographic lens can be found in section 2.2. If, with a lens of focal length f fitted with a diaphragm of diameter D, observing an object at distance p (see Figure 1.5), an image spot of diameter ε can be tolerated, then, noting Δ the depth of field, the distance between the two extreme focusing points Q1 and Q2, upstream and downstream of P, and in the case of a distant object (p ≫ f) and a low blur (ε small), it yields:

[1.4] Δ ≈ 2εp²/(Df)

expressing that the depth of field varies inversely with the aperture of the diaphragm D and with the focal length f, but grows very fast when the object recedes toward infinity (p large).

image

Figure 1.5. Camera – On the left: the focus is set on the image of P, in the plane Π, by varying the “shift” of the lens (s′). A point Q gives a sharp image Q′ in a plane Θ, but in the plane Π it gives a blurred image. On the right: the amount of blur depends directly on the size of the diaphragm, which limits the inclination of the rays on the axis. A small aperture can decrease the blurring

1.3.4.3. Object and image position

In practice, the purpose is to obtain the image of an object at a distance L from the observer with a camera of focal length f. Two equations are thus available: s + s′ + 2f = L and ss′ = f², which lead to a second-degree equation with, under the constraint that L > 4f (real object and image), two solutions:

[1.5] s1,2 = [(L − 2f) ± √((L − 2f)² − 4f²)]/2, s1 corresponding to the sign +

These two solutions correspond to symmetric dispositions where the object of one becomes the image of the other and vice versa (Figure 1.6). Most often in photography, a focal length f (of a few centimeters) is chosen small compared to L (of a few meters) and a distant object is observed. The solution s1 is then adopted, which results in a magnification significantly smaller than 1: G = f/s1. The object is then at the position s ≈ L, the image plane being roughly merged with the image focal plane: p′ ≈ f.

In the case of macro-photography (photography that consists of strongly enlarging a small object), in order for the magnification G = s′/f to be maximum, the object must be brought close to the object focal point, so that s′ is the largest possible, the closest to L − 2f. The configuration on the right of Figure 1.6 is then adopted.
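The two conjugate positions can be computed by solving the quadratic derived from s + s′ + 2f = L and ss′ = f²; the function name below is ours, for illustration.

```python
import math

def conjugate_positions(L, f):
    """The two roots of s**2 - (L - 2*f)*s + f**2 = 0 (requires L > 4*f).
    The large root is the ordinary-photography case, the small one the macro case."""
    if L <= 4 * f:
        raise ValueError("no real solution: L > 4f is required")
    disc = math.sqrt((L - 2 * f) ** 2 - 4 * f ** 2)
    s1 = ((L - 2 * f) + disc) / 2   # s1 close to L: distant object, G = f/s1 << 1
    s2 = ((L - 2 * f) - disc) / 2   # s2 close to 0: object near the focal point (macro)
    return s1, s2

s1, s2 = conjugate_positions(L=2.0, f=0.05)
print(s1, s2)             # roughly 1.90 and 0.0013
print(s1 * s2, 0.05 ** 2) # the product of the roots is f^2
```

The symmetry of the two dispositions shows up directly in the algebra: swapping s and s′ exchanges one solution for the other.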

1.3.4.4. Lens aperture

Although the diaphragm D was introduced to control the energy flow, its role in the quality of the image is clearly evident in relation [1.4]. It is therefore a key element in the adjustment of a photographic system, generally controlled by means of a ring on the objective (see Figure 1.7).

image

Figure 1.6. The two solutions to conjugate two planes at a distance L with a lens of focal length f are symmetrical. When L ≫ f, we are faced with an ordinary photography situation, the image plane being practically in the focal plane (situation on the left). The other solution (situation on the right) is the case of macrophotography, where a large magnification for small objects is achieved

It is usual to call the diameter D of the diaphragm the lens aperture, and the ratio between the focal length f and the physical diameter D the f-number: N = f/D. Thus a 50 mm lens with an f-number N = 4 has a diaphragm open to a diameter of D = 12.5 mm. An f-number N and an aperture f/N therefore refer to the same concept.

Moreover, the diaphragm controls the energy received by the sensor. This energy is proportional to the free area of the diaphragm, thus to the square of D. It is therefore convenient to propose a range of apertures following a geometric progression with a common ratio of √2, in order to divide the energy received by 2 at each step.

image

Figure 1.7. The diaphragm ring on a camera lens

The most commonly proposed apertures are thus:

1/1, 1/1.4, 1/2, 1/2.8, 1/4, 1/5.6, 1/8, 1/11, 1/16, 1/22, 1/32, 1/45, 1/64

The largest apertures (1/1, 1/1.4) are reserved for high-quality objectives, because they operate with rays very far from the optical axis and have to compensate for various aberrations (see section 2.8). The smallest apertures (1/32 and 1/64) can only be used under very strong brightness conditions or with very long exposure times.
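The progression of ratio √2 can be generated directly; note that the commercial markings are rounded values (5.6 for 5.66, 11 for 11.3, and so on).

```python
import math

# Aperture scale: geometric progression of ratio sqrt(2); each step halves
# the free area of the diaphragm and therefore the energy received.
f_numbers = [math.sqrt(2) ** k for k in range(13)]   # N = 1, 1.41, 2, 2.83, ..., 64
print([round(N, 1) for N in f_numbers])

# Diaphragm diameter D = f/N for a 50 mm lens at the first few stops
focal = 50.0  # mm
print([round(focal / N, 1) for N in f_numbers[:5]])  # 50.0, 35.4, 25.0, 17.7, 12.5 mm
```

The last value printed (12.5 mm at N = 4) matches the 50 mm lens example given above.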

We will have the opportunity to revisit these definitions of the aperture in section 2.4.4.2, where we will bring some clarification to these simple forms.

1.3.4.5. Photography and energy

The aperture of the diaphragm D controls the luminous flux arriving at the photodetector; the relationship between the received energy and the aperture is quadratic. The shutter also intervenes in determining the amount of energy that builds the image, by controlling the duration of the exposure. To make a full energy appraisal, the quality of the optical components that constitute the lenses should also be taken into account, but their attenuation of the energy flow is usually negligible, because objectives are nowadays carefully coated so that very few stray reflections can occur.

Finally, the last item that should be taken into account to describe the relation between the incident energy and the value of the resulting image is the process of converting photons into signal. This process is very complex and will be discussed in detail in Chapter 4. Let us very briefly outline it here.

In film photography, the conversion of photons into an image on film goes through the various stages of the photochemistry of the exposure first, followed by those of developing and fixing. In order to reduce the numerous parameters of these stages, standard processing conditions were made available, which allowed a single fixed number to be associated, for common usage, with a commercial product (a specific film), fully describing the result of the conversion: the film sensitivity.

Although the conversion process is now very different in digital photography, since it involves stages of electronic amplification that we will examine in detail in Chapter 3, the notion of sensitivity, very similar to that defined for photographic film, has been retained for solid sensors. Here again, it expresses the capability of the sensor to generate a signal as a function of the flow of incident light, and, as for film photography, a greater sensitivity will a priori come at the expense of a lesser image quality.

This sensitivity, as defined by the standardization body, the ISO, generally varies between values of 25 and 3,000 for film and between values of 100 and 100,000 in the case of solid sensors.

We will denote by N the aperture f-number, by τ the exposure time and by S the sensitivity of the receiver.

Relying on the three variables above, the formula which ensures a correct energy evaluation is of the form:

[1.6] S · τ/N² = constant

While the three terms S, τ and N can compensate for one another, each controls a particular aspect of image quality:

  • – the sensitivity S, as we have said, is notably responsible for the noise that affects the image and it will be advantageous to choose it as low as possible;
  • – the exposure time τ controls motion blurring and will have to be adapted to both the movement of the objects in the scene and to the movements of the camera;
  • – the f-number N controls the depth of field: if N is low (a wide open aperture), the depth of field will be small (equation [1.4]).
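As a sketch of this compensation, assuming relation [1.6] takes the standard form in which S·τ/N² stays constant for equally exposed images; the function name and values below are illustrative.

```python
import math

def exposure_index(S, tau, N):
    """Assumed form of relation [1.6]: S * tau / N**2.
    Settings with the same index produce the same image brightness."""
    return S * tau / N ** 2

# Opening the diaphragm by one stop (N divided by sqrt(2)) lets the
# exposure time be halved at constant sensitivity: the index is unchanged.
a = exposure_index(S=100, tau=1 / 60, N=4)
b = exposure_index(S=100, tau=1 / 120, N=4 / math.sqrt(2))
print(a, b)  # equal up to rounding
```

The same trade works on the third variable: doubling S permits halving τ or closing the diaphragm by one stop, at the cost of more noise.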

It is the art of the photographer to balance these terms so as to convey the particular effect he wishes to achieve. Rules of use and common sense, associated with a rough classification of the scene expressed as a priority choice (“aperture” or “speed” priority) or as a theme (“Sport”, “Landscape” or “Portrait”, for example), allow settings to be derived from the measured conditions (average brightness, focus distance) in more or less automatic modes.

1.3.4.6. Further, color

It would not be possible to discuss photography without addressing the sensitive issue of color, halfway between physics on the one hand and human perception on the other. We will do so in Chapter 5. We will show the complexity of any representation of a chromatic signal and the difficulty of defining color spaces that are simultaneously faithful to the observed stimuli and sufficiently broad for any image to be exchanged and retouched without betraying the perception the photographer had of it. We will see that technology offers several solutions, not at all equivalent, each leading to difficult compromises. We will also see why the photographer must imperatively pay attention to white balance.

1.4. Camera block diagram

Nonetheless, the photographic camera cannot be reduced to these major areas of physics, important as they are. The camera is a very complex technological tool which assembles multiple elementary components responsible for specific functions: to measure the distance to the target, to measure the distribution of energy, to ensure the conversion of the optical signal, to select the chromatic components, to archive the signal in memory, to stabilize the sensor during shooting, etc. It is through these features that one should also study the camera and it is according to this schema that part of this book is organized.

The elementary functions are grouped in the diagram of Figure 1.8. We can recognize:

  • – the optical block, a lens or most often an objective consisting of several lenses. It is responsible for forming the reduced image of the scene. Its role is essential for the quality of the image, and all the functions of the camera aim to make the best use of its capabilities. We will examine its functioning in Chapter 2;
  • – a sensor, the heart of the acquisition. In analog photography it was film, but in this book it will of course be a CCD or CMOS semiconductor matrix. Chapter 3 will be dedicated to it;
  • – the range finder, which measures the distance of the objects in the scene; from its measurements the focus distance is derived. It has significantly evolved since the era of film photography. It will be examined in section 2.2, and we will come back to it in section 9.5;
  • – the photometer, which measures the brightness received from the scene (section 4.4.1). It is sometimes confused with the range finder because in some systems these two sensors work in a coupled manner (the contrast is measured where the focus is achieved). It has similarly evolved in recent years;
  • – the shutter, which adjusts the duration of the exposure and may be mechanical, electromechanical or electronic. It is a functionally essential accessory, but not a widely publicized one. We will discuss this point in section 9.4;
  • – the diaphragm, associated with the shutter, which controls the amount of light received by the sensor during exposure. Its role is fundamental, but it has hardly evolved in recent years;
  • – finally, a moving device that ensures the optical conjugation between the plane of the target object and that of the image on the sensor, in accordance with the instructions provided by the range finder (section 2.2).
image

Figure 1.8. Schematic layout of a camera, whether it be analog (captured by film) or digital (solid sensor)

Other accessories, often useful, are frequently associated with the camera. We will also engage in their study. We will consider:

  • – optical filters (infrared and ultraviolet) and anti-aliasing, placed over the sensor (section 3.3.2);
  • – additional objectives or accessories that modify the properties of the lens to suit a particular purpose: converters/extenders, macro photography, etc. (section 9.7.1).

This functional description, viewed through the eyes of the photographer, must be supplemented with the new components introduced by digital sensors. First of all, we will concentrate on the processor, which controls all the features of the camera. On the one hand, it manages the information coming from the various sensors (range finders, photometers); on the other hand, it controls the settings: focus, aperture, exposure time. Finally, it ensures the proper functioning of the sensor and the dialog with the user. Its major function is to recover the signal originating from the measurement, to shape it into an image and, in particular, to ensure its compatibility with storage, transmission and archiving systems. The digital processor will be the subject of section 9.1.

Along with the processor, numerous auxiliary features deserve our attention: the energy source, the display screen, and the memory. They will also find their place in Chapter 9.

Finally, we could not conclude this book without mentioning the algorithms and the software that process images. We will do so in Chapter 10. We will examine those that create images within the body of the camera, but also those that are transferred to the host computer and that allow the quality of images to be improved or the functionalities of the camera to be extended. This chapter gives an important place to very prospective developments that explore new domains of application for photography, made possible by the intensive use of computation to obtain images that sensors alone are not capable of delivering: images with extended resolution or dynamics, images with perfect focus at any distance, images correcting the effects of camera movement. This chapter is an open door to the new field of computational imaging, which prefigures the camera of the future.
