Chapter 5

Integral Imaging

Integral imaging, also called integral photography, offers a true 3D image. The features of such a true 3D perception are:

a. binocular disparity as treated in Section 1.1;

b. the focusing point (accommodation) matching the convergence as described in Section 1.2; and

c. motion parallax.

The first two requirements are known from stereopsis as treated in Chapter 1, where it was mentioned that a mismatch of accommodation and vergence can cause discomfort at viewing distances that are too short. Stereoscopic and autostereoscopic displays do not meet requirement (b), as the focusing point is always on the display screen, regardless of whether the left and right eye images convey a disparity on the retina pertaining to a different distance or depth. Motion parallax describes the ability of the viewer to move around the displayed 3D image, perceiving the object differently according to the viewer's changing location and viewing angle. Such different views are also encountered in autostereoscopic displays with lenticulars or barriers, as treated in Sections 3.1–3.4, where a special arrangement of pixels was needed for the different views presented in the different viewing zones of the lenticular. Although the distance between the viewer and points on the screen increases as the viewer moves to the side, full motion parallax is not ensured, as the picture is restricted to the plane of the screen and the viewer to the line along the viewing zones. So depth perception in autostereoscopic displays is enhanced in comparison to stereoscopic displays, but it does not reach full motion parallax, nor is the viewer free to choose the viewing position.

Integral imaging was invented in 1908 by G. Lippmann, a French physicist investigating photography [1, 2].

All three requirements for a true 3D image are met by integral imaging, holography, and volumetric displays. The mechanical 3D displays in Section 7.3 also provide true 3D images: the turntable and the traveling ripple for several viewers, and the deformable membrane for a single viewer. Integral imaging applies lenticulars, as we shall see, for the same reason as autostereoscopic displays. Contrary to holography, it uses natural incoherent light, and it is capable of providing true 3D images for a group of viewers whose members may even change their positions. Therefore, integral imaging has attracted attention for special applications in medicine, security, defense, advertising, and design, as well as for 3D TV.

5.1 The Basis of Integral Imaging

Integral imaging has a pickup stage, also called a capture stage [3, 4], shown in Figure 5.1, in which the image of an object is recorded on a pickup plate. The convex lenses of the lenticular produce elemental images in front of each lens in the same way as the viewing zones of autostereoscopic displays in Chapter 3 are obtained. An elemental image possesses Pe pixels, and the total pixel count Pt in front of the lenticular is

(5.1) Pt = Pe Pl

where Pl is the number of lenses. For an enhanced image quality, Pe is a large number, and it is multiplied by a number Pl that is usually larger than six for an extended side view of the object, resulting in a considerably larger number of pixels than in a conventional 2D display. This pixel count represents the challenge of integral imaging.
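
A quick calculation conveys the scale of this challenge. The following minimal sketch of Equation 5.1 uses an assumed pixel count per elemental image; the lens count is borrowed from prototype I in Table 5.1.

```python
# Equation 5.1: total pixel count Pt = Pe * Pl of the pickup plate.

def total_pixels(pe: int, pl: int) -> int:
    """Pt = Pe * Pl."""
    return pe * pl

pe = 100 * 100   # assumed pixels per elemental image (illustration only)
pl = 132 * 100   # lenses in a 132 x 100 array (Table 5.1, prototype I)
print(f"Pt = {total_pixels(pe, pl):,} pixels")        # Pt = 132,000,000 pixels
print(f"2D full HD for comparison: {1920 * 1080:,}")  # 2,073,600
```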

Figure 5.1 The pickup stage of integral imaging for real and virtual images.

The pickup plate is a graphic medium such as a photographic plate or a CCD.

The gap g between the lenticular and the pickup plate determines the location and the nature of the elemental images. According to the lens law

(5.2) 1/z + 1/g = 1/f

where z is the distance to the object and f is the focal length of the lenses in Figure 5.1. For z > f the gap is g > 0, indicating a real image in front of the lenses, while zv < f, indicated by a dashed line in Figure 5.1, leads to gv < 0, that is, a virtual image on the other side of the lenses. In the latter case the pickup plate also lies on the other side of the lenses.
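
The sign of the gap follows directly from the lens law. A minimal sketch, with the focal length and object distances assumed for illustration (mm):

```python
# Gap g from the lens law, Equation 5.2: 1/z + 1/g = 1/f.
# g > 0: real elemental image in front of the lens; g < 0: virtual image.

def gap(z: float, f: float) -> float:
    return 1.0 / (1.0 / f - 1.0 / z)

f = 3.3                  # assumed focal length in mm
print(gap(50.0, f))      # z > f: g = +3.53 mm, real image
print(gap(2.0, f))       # z < f: g = -5.08 mm, virtual image
```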

There are more object points in a plane perpendicular to the drawing plane. The generation of elemental images in this plane requires lenticulars also arranged in the direction vertical to the drawing plane; these are provided by the crossed version of lenticulars in Figure 3.16. The depth z of a point on an object in Figure 5.2 manifests itself in the distance d between neighboring elemental images of this object point on the pickup plate. The location of the object point is indicated by the distance x from the axis of the pertinent lens, while the elemental image points are given by the distances y1 and y2 from the axes. From the dot–dashed lines in two triangles in Figure 5.2 we obtain

(5.3a) y1 = (g/z) x

and

(5.3b) y2 = (g/z)(pl − x)

where pl is the pitch of the lenses. The distance of the elemental image points is

(5.3c) d = pl + y1 + y2

which yields with Equations 5.3a and 5.3b

(5.4a) d = pl (1 + g/z)

or

(5.4b) z = g pl/(d − pl)

Figure 5.2 The difference d between image points originating from different depths and capture of hidden areas A at the object.

The larger the distance z to the object, the smaller is d; for z → ∞, d approaches pl. This implies that for increasing z the image points crowd toward the axes of the lenses. Therefore, for larger z a viewer can no longer resolve differences in distances or depths, which limits the perception of depth in integral images. This corresponds to human vision, which also can no longer resolve differences in depth at larger depths, as outlined at the beginning of Chapter 1. As a result we note that the elemental images contain all the geometrical information about an object, including its depth, even though this information is contained in a plane, the pickup plate, also called the picture plate. As a consequence, the elemental images can be reproduced as a 2D image in an LCD.
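
Equations 5.4a and 5.4b can be tried out numerically. The sketch below uses an assumed pitch and gap and shows how d converges toward pl as z grows, which is exactly why large depths become indistinguishable:

```python
# Equation 5.4a: d = pl * (1 + g/z); Equation 5.4b inverts it to recover z.
# Pitch and gap are assumed values in mm.

def spacing(z: float, g: float, pl: float) -> float:
    return pl * (1.0 + g / z)

def depth(d: float, g: float, pl: float) -> float:
    return g * pl / (d - pl)

pl, g = 1.0, 3.0
for z in (10.0, 100.0, 1000.0):
    d = spacing(z, g, pl)
    print(f"z = {z:6.0f} mm -> d = {d:.4f} mm, recovered z = {depth(d, g, pl):.0f} mm")
# d approaches pl = 1 mm for large z, so small changes in depth become
# unmeasurable on the pickup plate.
```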

According to the lens law, the points on the object at the distance z in Figure 5.2 are sharp on the pickup plane at the distance g, while a point A at the distance zA > z is in focus at distance l from the lens with

(5.5) 1/zA + 1/l = 1/f

For zA > z we obtain from Equations 5.5 and 5.2 that l < g, as drawn in Figure 5.2. This means that the focused image of A lies closer to the lenses, so A is projected out of focus onto the pickup plane, which results in blurring of A. This blur increases with increasing distance of A. Again this limits the applicability of the pickup stage for larger depths zA. However, as we already know from Equation 1.2, there is a range around the focus distance where the defocus is still acceptable. This range is the depth of focus in Figure 1.2. The point A can only be seen in the lower portion of the pickup plate.
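
The growth of the defocus with the depth of A can be made concrete with the lens law; all values below are illustrative assumptions in mm:

```python
# Equation 5.5: a point A at depth zA focuses at l with 1/zA + 1/l = 1/f.
# For zA > z the focus lies at l < g, so A is blurred on the pickup plane.

def image_distance(obj: float, f: float) -> float:
    return 1.0 / (1.0 / f - 1.0 / obj)

f, z = 3.3, 50.0
g = image_distance(z, f)      # pickup plane, in focus for depth z
for zA in (50.0, 100.0, 500.0):
    l = image_distance(zA, f)
    print(f"zA = {zA:5.0f} mm: l = {l:.3f} mm, defocus g - l = {g - l:.3f} mm")
```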

The reconstruction stage or display stage shown in Figure 5.3 consists of a transparent display, as a rule an LCD, into which the 2D-type information of the capture plate has been transferred. In front of this transparent display lies the same lenticular at the same distance g as in the pickup stage. The display stage is lit by incoherent and diffuse light from the right. The diffuse nature of the light provides each lens with a wide-angle bundle of rays sampling the display plate from all directions. Finally the light passes the transparent display plate, and an image of the original 3D object is reconstructed in the space on the left side of the lenticular. This image in space is visible from various viewing angles. The rays are able to achieve this as they travel in the reverse direction through all components of the pickup stage. Of course, the blur introduced in the pickup stage is still present in the reconstructed image. If the geometric data of the pickup stage, such as the pitch and the radius of curvature of the lenses and the gap g, are not precisely maintained in the display stage, additional blur is introduced. However, in the reconstructed image, too, there is a depth of focus in which the blur is still acceptable.

Figure 5.3 Display or reconstruction stage for integral imaging.

5.2 Enhancement of Depth, Viewing Angle, and Resolution of 3D Integral Images

In integral imaging the perception of depth is limited by out-of-focus images originating from a narrow depth of field, by a narrow viewing angle given by the f-number of the lenses (defined as the focal length divided by the diameter of the aperture), and by limited resolution. The small area of an elemental image is supposed to represent the greater part of the entire object, which requires a large and not-easy-to-realize pixel density. Remedies for all of these challenges will be presented in the following sections. There are solutions with only one or with several integral imaging devices.

5.2.1 Enhancement of Depth

The two integral imaging displays in Figure 5.4 carry 3D information of the same object but are focused on two different depths, called central depth planes [5–7]. The beam splitter, here working as a beam combiner, superimposes the two elemental images, which the viewer perceives as overlaid images, each focused at a different depth. The eyes can focus on one central depth plane while still glimpsing the remainder of the image; when they then focus on the other central depth plane, they have the sensation of an enhanced depth. Figure 5.5a and 5.5b shows the letter A in front of the letter B. With only one display plate focused on the depth of letter A, the viewer sees the image in Figure 5.5a, where B is out of focus and hence broken up. With two display plates, one focused on A and the other on B, the viewer sees Figure 5.5b, where both letters are clearly in focus.

Figure 5.4 Two integral imaging devices and beam splitter for enhancement of depth.

Figure 5.5 Integral images (a) from a single image plate focused on A and (b) from two image plates focused on A and B.

The working of a beam splitter (combiner) is explained in Figure 5.6 [8]. The beam splitter receives two separate images from the left side. One is marked L and is polarized vertically; the other, marked R, is polarized horizontally. If there is a voltage across the TN cell in front of L, its LC molecules orient themselves parallel to the electric field. Since there is then no birefringence, the L-polarization passes unchanged. If the TN cell in front of R carries no voltage, the LC molecules are in the twisted position and rotate the R-polarization into the vertical position. At the exits of the TN cells both images exhibit the same vertical polarization and can pass the polarizer at the end. Thus the two images are combined in the same polarization state. In our examples the LC splitter is used as the described combiner.

Figure 5.6 An LC image splitter (combiner).

In Figure 5.7 [9] two or more display plates with different depths are stacked on top of each other and placed in front of a backlight. The distances of the stacked display devices to the lens array differ, with the difference corresponding to the different depths. A black pixel in one display blocking the passage of light has to be avoided, as then the information in the pertinent pixel of the other display could not pass to the lens array. Therefore the displays must have a white background with gray-shaded, rather than fully black, information on top.

Figure 5.7 Depth enhancement by stacked display plates.

Around each central depth plane of the two displays in Figures 5.7 and 5.9 one encounters a range with a still acceptable focus, the field of focus. For a natural sensation of depth, the two fields of focus have to overlap, which is called marginal depth overlapping, as demonstrated in Figures 5.7 and 5.9.

For an experiment, the individual real elemental images with a white background for display device 1 and display device 2 are shown in Figure 5.8a and 5.8b respectively. The overlapping objects stemming from the two display devices are shown in Figure 5.9 as the viewer perceives them. The elemental images that are not fully black are translucent, so, if on top, all points are still visible.

Figure 5.8 The real elemental images in (a) display device 1 and (b) display device 2.

Figure 5.9 The overlapping real images stemming from the display devices 1 and 2 in Figure 5.8a and 5.8b.

The system may be extended to a multilayer display with a multitude of depths. The quality of the image perceived by the viewer may be degraded as the light has to pass through at least two elemental images.

Instead of real images, also virtual images can be used which may decrease the volume of the experimental setup. This possibility is further investigated in Section 5.3 on integral videography.

Finally we investigate a one-device solution which enhances the depth cue by placing a phase mask between the lenticular and the observer. The system is analyzed with the irradiation function in the spatial xy-plane. Due to various aberrations, such as diffraction or optical path length errors, a lens does not, as a rule, provide a perfect spherical wave at its output. This can be expressed by an irradiation function called the complex generalized pupil function ([10], p. 146)

(5.6) 𝒫(x, y) = P(x, y) exp[j(2π/λ)W(x, y)]

where a simple focusing error is described by the aberration function

(5.7) W(x, y) = W20 (x² + y²)/w²

This phase contains the errors in the optical path length with Hopkins' defocus coefficient [11]

(5.8) W20 = (w²/2)(1/z + 1/g − 1/f)

where w is the radius of the lenslets and, for the pickup stage in Figure 5.1, z is the distance to the object and g is the gap between the lenticular and the pickup plate. P(x, y) in Equation 5.6 describes the circular aperture of a lens.
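
Note that W20 vanishes when the lens law, Equation 5.2, is satisfied exactly. A minimal numeric sketch with assumed geometry (mm):

```python
# Hopkins' defocus coefficient, Equation 5.8:
#   W20 = (w**2 / 2) * (1/z + 1/g - 1/f)
# W20 = 0 when 1/z + 1/g = 1/f (Equation 5.2) holds exactly.

def w20(w: float, z: float, g: float, f: float) -> float:
    return 0.5 * w**2 * (1.0 / z + 1.0 / g - 1.0 / f)

w, f, z = 0.5, 3.3, 50.0             # assumed lenslet radius, focal length, depth
g_focus = 1.0 / (1.0 / f - 1.0 / z)  # gap satisfying the lens law
print(w20(w, z, g_focus, f))         # ~0: in focus
print(w20(w, 100.0, g_focus, f))     # object moved to z = 100 mm: defocus error
```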

For an enlarged depth of focus it was proposed in [12] to introduce an additional phase factor into 𝒫(x, y). This has the advantage that the magnitude of 𝒫(x, y) remains constant, resulting in no loss of light; a loss would occur by changing the amplitude.

The additional phase factor is realized by a phase mask with the phase function

(5.9) φ(x) = α sgn(x)|x/w|^k

where α is an adjustable factor and k a design parameter [13, 14]. Due to sgn x the phase is an asymmetric function of x, which has led to the term asymmetric phase mask. With the asymmetric phase mask in front of the lens facing the viewer, the generalized pupil function for the x- and y-directions is

(5.10) 𝒫(x, y) = P(x, y) exp[j(2π/λ)W(x, y)] exp{j[φ(x) + φ(y)]}

To extend the depth of focus one strives not for a perfectly focused lens, but for distributing focus-like conditions over a larger distance.

A perfectly focused lens possesses a δ-distribution as its point spread function (PSF), which can be approximated by a narrow and high rectangle. The Fourier transform of this rectangle is the optical transfer function (OTF); the modulus of the OTF is called the modulation transfer function (MTF).

The OTF can also be obtained from the autocorrelation function of 𝒫(x, y).

The Fourier transform of the rectangle is a sin x/x function and the MTF is |sin x/x|. So the rationale of the phase change is to find an MTF approximating |sin x/x| in such a way that the pertinent PSF best approximates the δ-distribution, that is, a focused lens. This is the case if the rectangular PSF exhibits a small width 2T, because then the first zero of its MTF, located at x = π/T, lies at a large x-value.
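
This relation between the width of the rectangular PSF and the first zero of its MTF can be checked numerically; a small sketch:

```python
import numpy as np

# A rectangular PSF of width 2T has an MTF proportional to |sin(T*x)/(T*x)|,
# with its first zero at x = pi/T.  Halving T (a PSF closer to a
# delta-distribution) pushes the first zero twice as far out -- the effect
# the phase mask in Figure 5.10b,c aims at.
x = np.linspace(1e-6, 20.0, 200001)
for T in (1.0, 0.5):
    s = np.sin(T * x) / (T * x)
    first_zero = x[np.where(np.diff(np.sign(s)) != 0)[0][0]]
    print(f"T = {T}: first zero at x = {first_zero:.3f} (pi/T = {np.pi / T:.3f})")
```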

The MTF in Figure 5.10a of a regular defocused lens with no phase mask reveals the sin x/x character and its dependence on three values of the defocus coefficient W20. The smaller the value of W20, the further away from the origin is the first zero of the MTF. Figure 5.10b and 5.10c depict the MTF with a phase mask with k = 4 and k = 5, respectively. The shifting of the first zero far away from the origin has been achieved in both cases, as intended. The selection of the defocus coefficient W20 reveals slightly higher values of the MTF for smaller values of W20. The values of W20 can be adjusted by the geometrical parameters in Equation 5.8.

Figure 5.10 (a) The MTF of a defocused lens for various defocus coefficients W20. The MTF for a lens with a phase mask (b) with k = 4 and (c) with k = 5 in Equation 5.9.

The altogether low values of the MTF in Figure 5.10a and 5.10b off the origin indicate a low luminance and other degraded properties, since the side lobes of sin x/x are missing.

The PSF for k = 4 is shown in Figure 5.11. It was measured as the peak luminance or irradiance versus the position where it occurred. Due to a wider region of larger luminance, the depth of field has been enlarged to nine times that of a lenticular without the phase mask, as indicated by the gray rectangle. Obviously the asymmetric phase mask is a powerful means of enhancing the depth of field in the pickup stage and the depth of focus in the reconstruction stage.

Figure 5.11 The PSF of a defocused lens with an asymmetric phase mask with k = 4 in Equation 5.9.

5.2.2 Enlargement of Viewing Angle

Figure 5.12 demonstrates the dashed viewing zone generated by the sectors of light with angle α emitted from an elemental image through a lenslet with pitch pl. The viewing zone is defined as the sector in Figure 5.12 which fully contains all sectors of light emitted by the elemental images in the section of length pl. A triangle with angle α/2 provides tan(α/2) = pl/2g, or [15]

(5.11) α = 2 arctan(pl/2g)

as the angle of the viewing zone.
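
Equation 5.11 is easily evaluated; the pitch and gap below are assumed values in mm:

```python
import math

# Viewing zone angle, Equation 5.11: alpha = 2 * arctan(pl / (2 * g)).

def viewing_angle_deg(pl: float, g: float) -> float:
    return math.degrees(2.0 * math.atan(pl / (2.0 * g)))

print(viewing_angle_deg(1.0, 3.3))   # ~17 degrees
print(viewing_angle_deg(1.0, 1.9))   # ~30 degrees: a smaller gap widens alpha
```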

Figure 5.12 The viewing zone in front of an array of lenses.

The viewing angle could be enlarged by shrinking the f-number of the lenslets (focal length/diameter of aperture). However, such a lenslet diminishes the resolution and the depth of the image. Therefore other means for enlarging the viewing angle have to be found.

This viewing angle α can be expanded by adding to the structure in Figure 5.12 a second display device in the slanted position shown in Figure 5.13. The second device adds an additional viewing sector by means of a beam splitter [16]. The tilting angle toward the vertical approximately equals the angle of the added sector. The images obtained in this way exhibit image flipping, which we encountered in the discussion of Figure 3.7 pertaining to a slanted lenticular. Image flipping occurs when moving gradually from one image into the neighboring one while the transition to the new image is not gradual but abrupt. In [17] this flipping is avoided by putting vertical barriers between each display panel and the lenticular along the edges of the individual lenses.

Figure 5.13 The doubling of the viewing zone by a second integral imaging device.

The arrangement in Figure 5.13 can be expanded to multidisplay systems as in Figure 5.14a and 5.14b with the displays on a linear or, as shown in the figures, on a curved structure [18].

Figure 5.14 Enhanced viewing angle by (a) integral images in real mode or (b) integral images in a virtual mode stemming from integral imaging devices on a curved surface.

Each additional display adds an additional sector to the viewing zone. The system in Figure 5.14a with the concave structure generates a real image, while in Figure 5.14b with the convex structure a virtual image is obtained. We shall investigate the use of virtual images in more detail in Section 5.4. Flipping of images has to be avoided in the same way as above.

5.2.3 Enhancing Resolution

Enhancing resolution is based on the obvious idea of adding more pixels to an image. This is achieved by superimposing two devices whose elemental images are not exactly on top of each other but shifted by half the image pitch in the horizontal and vertical directions. The two integral imaging devices according to Figure 5.4 implement just that by placing these two different elemental images into the image devices and adding them with the beam splitter [19]. This doubles the resolution of the combined elemental images. For a uniform distribution of pixel density, a precise alignment of the two elemental image grids together with the beam splitter is required.
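
The principle can be sketched as two sampling grids offset by half a pitch that interleave into a grid of twice the density. This is an illustration of the sampling idea only, not of the optical superposition with the beam splitter:

```python
import numpy as np

# Two pixel grids sampling the same image, the second shifted by half a
# pixel pitch in x and y, interleave into a quincunx grid with twice the
# sample density.

def interleave(grid_a: np.ndarray, grid_b: np.ndarray) -> np.ndarray:
    h, w = grid_a.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = grid_a    # samples of device 1
    out[1::2, 1::2] = grid_b    # device 2, shifted by half a pitch in x and y
    # the remaining positions would be filled by interpolation
    return out

a, b = np.random.rand(4, 4), np.random.rand(4, 4)
print(interleave(a, b).shape)   # (8, 8): doubled sampling density
```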

A very powerful but also more costly enlargement of resolution is achieved by the three projectors in Figure 5.15a and 5.15b [20]. Figure 5.15a shows how the sectors of light emitted by the projectors create in each lens of the lenticular three beams of incoming light from three different angles. These deflected beams manifest themselves as three prints of light on the surfaces of the lenses. This is further elucidated in Figure 5.15b. The projectors throw the elemental images onto the lens array and triple the resolution without the need for larger lenses. The increase in information is provided by the different views of the object allotted to the projectors. The image generated by the projectors does not need a screen, as it is built up in space, so no device with a large pixel count is required. A pixel count in space of 1700 × 850, greater than SXGA, was realized on an area of 800 × 400 mm², resulting in 4.5 pixels per mm². Figure 5.16 shows five views of this full parallax image.

Figure 5.15 Enhancement of resolution of integral imaging by projectors superimposing three images: (a) overview of structure; (b) optical details.

Figure 5.16 Five views from different directions of an image in the structure in Figure 5.15.

5.3 Integral Videography

Integral videography (IV) is an extension of integral imaging allowing video and interactive content to be viewed by several observers. It can work with just a flat panel screen with a lenticular in front, or with one or several projectors. It is true 3D or, in other words, a full parallax system. Integral videography may have its first applications in education, medical surgery, video conferencing, technical design, and entertainment long before full motion parallax TV broadcasting becomes feasible.

We start with the system in Figure 5.17 [21] which requires as basic components an LCD with a lenticular containing the full 3D information in the form of elemental images. The backlight of the LCD creates rays emanating from each lens, which reconstructs the light field in which the object is visible. This represents the virtual 3D object. Each lens emits five rays pertaining to the five images in Figure 5.18; this means the upper and lower, the left and right, as well as the center views. For this full parallax view the lenticular has to possess arrays of lenses in the horizontal and vertical directions as shown in Figure 3.16, corresponding to the elemental images arranged in these two directions. That way, 3D objects are perceived even if the display tilts or the viewer's head or position moves. Therefore the IV set is fit for hand-held use.

Figure 5.17 An integral video system (IV) using an LCD with a lens array.

Figure 5.18 The five views for the IV system in Figure 5.17.

The lens pitch and the number of rays per lens are important design parameters. The number of rays per lens corresponds to the number of views per elemental image behind each lens. These views are obtained by an equal number of cameras in Figure 5.19 and are fed into the elemental images. To enhance the realism of the images, the number of rays, that is, the resolution, was increased, while the lens pitch was kept as small as possible, which enhances the viewing angle. The lenses have a diameter of 0.5 mm.

Figure 5.19 Cameras for capture of an object.

A 3 in display with 88 rays per lens and a 5 in display with 60 rays per lens were built. Their characteristics are listed in Table 5.1.

Table 5.1 Technical data for a 3D IV display with 88 rays per lens (prototype I) and 60 rays per lens (prototype II).

Parameter                    Prototype I     Prototype II
LCD size (inches)            3               5
LCD resolution (pixels)      480 × 800       1280 × 768
Number of lenses             132 × 100       256 × 192
Viewing angle (degrees)      15              30
Number of rays (rays/lens)   88              60
Color filter arrangement     RGB stripe      Special
Sensor                       Accelerometer   Range sensor

Color moiré, a problem for IV, is caused by the RGB pixel structure of the LCD and manifests itself as a color that changes with viewing position. As it causes eye fatigue, it has to be reduced. To this end, the colors of all subpixels under a lens are combined into one elemental color and the lenses are arranged in a delta configuration as in Figure 5.20. The delta configuration renders the viewing of the color more equal from all directions. The resulting LCD has one-third of the color resolution but three times the number of rays, while color moiré is reduced.

Figure 5.20 The delta configuration of the pixels in an IV system.

The accelerometer mentioned in Table 5.1 is used to detect the display tilt in the hand-held version; the ultrasonic range sensors detect movements of the user's hand with respect to the 3D object.

The preparation of an integral image from the rays of light is depicted in Figure 5.19. The angle under which the rays capture the object corresponds to the emission angle Φ in Figure 3.8 for the lens in the lenticular of an autostereoscopic display. The further process steps are listed in Figure 5.21. They start with the 88 rays for a part of the object in the 3 in display and the 60 rays for a part of the object in the 5 in display. Rearranging in Figure 5.21 means placing the portions of the object captured by a particular ray at the corresponding place underneath the lens, as sketched below. This process, as a rule, takes too long for real-time operation. It is accelerated by a pre-preparation step in which all stationary portions of an object are placed at their appropriate locations underneath the lenses. That way, in a dynamic image only the changing positions of the rays have to be allocated in real-time processing.
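
A hedged sketch of this rearranging step and of the static/dynamic split; the array names and the layout (n × n rays per lens) are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Rearranging: the sample captured by ray r of lens (row j, column i) is
# written to the corresponding offset underneath that lens.

def rearrange(captured: np.ndarray, n: int) -> np.ndarray:
    """captured[r, j, i]: sample of ray r for lens (j, i); n*n rays per lens."""
    rays, lenses_y, lenses_x = captured.shape
    plate = np.zeros((lenses_y * n, lenses_x * n))
    for r in range(rays):
        ry, rx = divmod(r, n)          # ray index -> offset within the cell
        plate[ry::n, rx::n] = captured[r]
    return plate

# Pre-preparation for real time: the static scene is rearranged once; per
# frame, only rays whose content changed are re-allocated.
def update(plate_static, captured, changed_rays, n):
    plate = plate_static.copy()
    for r in changed_rays:
        ry, rx = divmod(r, n)
        plate[ry::n, rx::n] = captured[r]
    return plate

plate = rearrange(np.random.rand(64, 132, 100), n=8)   # 8*8 = 64 rays per lens
print(plate.shape)                                     # (1056, 800)
```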

Figure 5.21 Process steps for the rays capturing the images (IP = Integral Photography).

An approach with a convex mirror array, based on the intriguing idea of simultaneously using a real and a virtual image, is depicted in Figure 5.22 [22]. The convex mirror array can be fabricated as a convex lens array coated with a reflective Al layer. The convex mirror array behaves optically like a concave lens array: both devices transfer an incoming beam of parallel light into an outgoing diverging beam. However, the similarity does not extend to the focal length. The focal length f of a convex lens is

(5.12) flens = R/(n − 1)

where R is the radius of curvature and n is the refractive index of the lens material. The focal length of a convex mirror with the same radius of curvature is

(5.13) fmirror = R/2

For the same R and n = 1.5, the value of flens is four times larger than fmirror. The viewing angle is wider for a small f-number. Therefore the array of mirrors provides a larger viewing angle than the array of lenses, which explains the selection of a convex array of mirrors.
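
A two-line check of the factor of four; the radius of curvature is an assumed value:

```python
# Equations 5.12 and 5.13: f_lens = R / (n - 1), f_mirror = R / 2.
R, n = 1.6, 1.5                   # assumed radius of curvature (mm) and index
f_lens, f_mirror = R / (n - 1.0), R / 2.0
print(f_lens, f_mirror)           # 3.2 mm vs. 0.8 mm
print(f_lens / f_mirror)          # 4.0: the mirror's shorter focal length
                                  # yields the wider viewing angle
```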

Figure 5.22 The IV system with a convex mirror array and a projector.

The projector needs a plate with elemental images for a full parallax solution. The idea for enhancing the depth is to place two images on that plate, one real and the other virtual. How the pertaining pickup plate is obtained will be outlined using an array of convex lenses; the transfer to a convex array of mirrors is then demonstrated with the result for the lenses. Figure 5.23a and b shows the pickup stages according to Figure 5.1 for the elemental images in the cases where the distance z of the object is > f and < f. The first case leads to a real image at the distance zr > 0 in Figure 5.23a, whereas the second case leads to a virtual image at the distance zv < 0 in Figure 5.23b. We want to deal with both images simultaneously, which is accomplished by combining the two elemental image plates into one plane. This is done in Figure 5.23c, where the two elemental image plates for the real (r) and the virtual (v) images are both placed on the same side as, and at the same distance from, the lenses, while all the other distances are preserved. Figure 5.23c represents the reconstruction stage according to Figure 5.3. The light for the reconstruction is provided by a projector shining onto the combined elemental image plane. In the first example in this section the light was provided by the backlight of an LCD carrying the elemental images. In Figure 5.23c the reconstructed virtual image lies in front of the lenticular facing the projector, while the real image is situated behind the lenticular.

Figure 5.23 The pickup stage for the real image (a) and for the virtual image (b) and the combination of the two pickup stages into a reconstruction stage for the real and virtual images (c).

When the corresponding concave lens array, which is optically equivalent to the convex mirror array, is used, the virtual and real images are interchanged. So the real image lies in front of, and the virtual image behind, the convex mirror array.

The setup of this IV projector system is shown in Figure 5.22. The viewer looks at the image from the side of the projector, which is an uncomfortable viewing position. This is avoided by the arrangement in Figure 5.24, which includes a beam splitter combining the two images, the real and the virtual one, reflected from the mirror array. That way, the viewer can move freely without being restricted by the projector.

Figure 5.24 The IV system with a real and a virtual image using a beam splitter allowing a comfortable viewing position.

The viewer perceives the real image in front of the virtual image. The two images are selected such that they represent the objects at different depths. Thus the perception of depth is considerably enhanced to a region beyond the two depths of the images.

As an example, the number "3" was placed at the location of the real image and the letter "D" at the location of the virtual image in front of the "3." Figure 5.25 depicts the elemental images of the two objects separately, while Figure 5.26 presents the two elemental images on top of each other as seen from different perspectives. The image of "D" lying in front of "3" is clearly perceived.

Figure 5.25 The elemental images for “3” and “D” presented separately.

Figure 5.26 The objects “3” behind “D”: (a) view from the center, (b) view from the left; and (c) view from the right.

The color projector in Figure 5.24 provides a structured image with 1024 × 768 pixels, which contains RGB pixels with a pitch of 0.0805 mm measured at the surface of the mirror array. Each elemental mirror is sized 0.987 × 0.987 mm²; its focal length is 0.8 mm, while that of the lens array before coating was 3.3 mm. Each mirror covers about 12 pixels of the projected image. In this design example the usually combined arrangement of elemental images and lens or mirror array is split: the elemental images originate from the projector and are shone onto the separately located mirror array, requiring a precise overlapping adjustment of both. One can also envision the projector providing only the beam of light to elemental images placed at the mirrors.

The approach in [22] introduces an ingenious simultaneous use of the real and the virtual image to enhance depth and resolution with only one projector.

For a closer inspection of selectable areas of an IV display, viewing direction-controlled integral imaging has been proposed [23]. As an enhanced viewing angle is not essential for this application, one can choose a large pitch in the lenticular, allowing for a larger luminance. Figure 5.27a and b explains the working of the direction control. Light from a point light source is collimated by the first lens and directed toward a movable aperture of width A which crops the beam. This lens can be considered to perform an inverse Fourier transform. Therefore, at the aperture in its focal plane f, the angle of the incoming ray represents a spatial frequency, and the aperture works as a bandpass filter in the frequency domain, which limits the diverging angle of the beam to the incoming angle. Hence this location of the aperture provides an excellent cut-off for any diverging portions of the beam. This is also true for a different angle of incoming light when the aperture, still in the focal plane of the lens, is moved, as shown in Figure 5.27b. The second lens, again with focal length f, receives the beam, now tilted down from the horizontal by the angle Φ with

(5.14) tan Φ = yc/f

where yc is the distance by which the center of the aperture has been shifted downward from the optical axis of the first lens. The angle Φ and the triangle leading to Equation 5.14 can be found in Figure 5.27b. The beam leaves the second lens under the same angle Φ but now in an upward direction, according to the laws of geometric optics. Then the beam passes a spatial light modulator (SLM), realized by an LCD, which contains the elemental images of the 3D object. The viewer perceives the 3D object under the viewing direction Φ. The diverging angle Θ of the beam exiting the LCD, that is, the viewing angle of the image, can be expressed by

(5.15) Θ = 2 arctan(A/2f)

Figure 5.27 (a,b) The control of the viewing direction for an IV display.

In Figure 5.27a, yc = 0 and hence Φ = 0; yc = 0 means that the center of the aperture lies on the optical axis of the first lens. It can be shown that the height yp of the point source equals the shift of the viewing axis with Φ = 0 in Figure 5.27a. With the tilted viewing angle in Figure 5.27b the focal point on the viewing direction Φ is also shifted by yp down from the optical axes of the lenses. So different heights of the point light source manifest themselves as a parallel shift of the viewing axis. On the object side the change in yp has to be realized by directing the light to different areas of the object.

Neither Φ nor Θ depends upon the location yp of the point source. Therefore all point sources of the object are perceived by the viewer under the same geometrical processing conditions. The viewing direction Φ at a given f is, by Equation 5.14, determined only by yc.

The movable aperture may be realized by an LCD in which the pixels can be switched into the transparent or the light-blocking state. This is known from the switchable barriers of autostereoscopic displays in Figure 3.41. The viewing angle Θ at a given f is again determined only by the opening A of the aperture. If the need arises, an LCD would also allow A to be changed in order to increase or decrease the width of the field of view.
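
A minimal sketch of Equations 5.14 and 5.15; the focal length, shifts, and aperture width are assumed values:

```python
import math

# Equation 5.14: tan(Phi) = yc / f  (viewing direction from the aperture shift)
# Equation 5.15: Theta = 2 * arctan(A / (2 * f))  (viewing angle from opening A)

f = 150.0                                     # assumed focal length in mm
for yc in (0.0, 10.0, 20.0):
    phi = math.degrees(math.atan(yc / f))
    print(f"yc = {yc:4.1f} mm -> viewing direction Phi = {phi:.1f} deg")

A = 20.0                                      # assumed aperture width in mm
theta = math.degrees(2.0 * math.atan(A / (2.0 * f)))
print(f"A = {A} mm -> viewing angle Theta = {theta:.1f} deg")
```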

The experimental setup uses incoherent white light to illuminate the object through a collimating lens with f = 150 mm. The array of point light sources is generated by an array of pinholes with a hole spacing of 1 mm. A diffuser placed between the light source and the pinholes guarantees that the light exiting the pinholes exhibits a large diverging angle. For viewing different areas of the object with a different yc, different pinholes of the array have to be activated while the other pinholes are blocked. A small f-number of the two Fresnel lenses used in the 4f system ensures a small loss of light at the lenses. The widths of the beams through the lenses and the width of the aperture are 140 mm × 140 mm.

The 4f system lends itself to be combined with a position tracking device for the viewer as the viewer's position and viewing direction are directly related to the parameters yp and Φ of the 4f system.

In an experiment the letter Y and an inverted Y placed next to each other but at different depths were used as objects. Figure 5.28a–c depicts how a viewer perceives the objects when seen from beneath (a), from the center (b), and from above (c). The placement of the aperture and the position of the viewer are drawn in the left column next to the figures. In Figure 5.28a the sequence of Y and the inverted Y is changed compared to that in Figure 5.28b and c. This is only possible in a 3D case. Finally the entire system is depicted in Figure 5.29a and 5.29b with different viewing angles.

Figure 5.28 (a–c) A letter Y and an inverted Y at different depths seen from three directions through different apertures shown in the left column.

Figure 5.29 The concept of the IV system with controllable viewing direction (a) central and (b) slanted.

5.4 Convertible 2D/3D Integral Imaging

A 2D display will for a long time remain an essential means of pictorial communication. Therefore the forthcoming 3D image technology has to be switchable to a 2D state. As 2D is not feasible with a lenticular in front of the screen, the lenticulars necessary for 3D must be placed somewhere else, leading to a modified integral imaging approach. We encountered the lenticulars for integral imaging in Sections 5.1–5.3, while pinholes were introduced in Section 5.3.

We first investigate a 2D/3D approach in which a polymer-dispersed liquid crystal (PDLC) cell is switched off for a 2D display and switched on for a 3D presentation [24–26]. Figure 5.30a and 5.30b shows the 2D and the 3D operations. A lenticular and a PDLC cell are placed behind the transmissive LCD facing the viewer. In Figure 5.30a the light from a light source and a collimating lens is scattered in the PDLC cell (Chapter 2, ref. [3], p. 145) when no voltage is applied across the cell. This light illuminates first the lens array and then the pixels of the LCD with diffuse light from all directions. The LCD contains a regular 2D image, which it presents in the conventional way with this diffuse light. The loading of a 2D image into the LCD has to be synchronized with the switching off of the PDLC.

Figure 5.30 Integral imaging devices for 2D/3D conversion with a PDLC cell: (a) for the 2D mode; (b) for the 3D mode.

In the 3D mode in Figure 5.30b a voltage is applied across the PDLC cell, orienting all LC molecules parallel to the electric field. This renders the PDLC cell transparent. Now the lenticular is exposed to a collimated beam of light which converges to the focal points of the lenses, as drawn in Figure 5.30b. This creates the point light sources needed for the elemental images which have been loaded into the LCD. A 3D image is generated in front of the LCD.

This system exhibits a good light efficiency. However, the gap g = 2f between the elemental images in the transmissive display and the lenticular is relatively large, and the stretch for the collimation of the light is even larger, resulting in a thick device. Therefore this approach is not very practical, and not fit at all for mobile displays.

A system without the PDLC cell is depicted in Figure 5.31a and 5.31b [24, 27]. There are two kinds of LED arrays as light sources: one, drawn in black, behind the diffusers for the 2D mode, and one, drawn in gray, behind the pinholes for the 3D mode. Figure 5.32 represents the essence of the structure as a modified reconstruction stage.

Figure 5.31 The 2D/3D conversion for integrated imaging with LEDs: (a) for the 3D mode; (b) for the 2D mode.

Figure 5.32 The modified integral image structure for 2D/3D conversion.

In the 3D mode in Figure 5.31a the light is emitted from the pinholes in front of the gray LEDs, is focused by the lenticulars, and forms point light sources in front of the elemental images in the LCD, as in the previous solution. As the pixels receive light from a bundle of directions, a 3D image is perceived in space, no matter where the viewer is positioned.

On the other hand, in the 2D mode in Figure 5.31b the light originates from the black LEDs behind the diffusers, where it is scattered. It reaches the lens and the LCD in this scattered state, which is only able to provide a 2D image. Synchronized with the turning on of the black LEDs, a 2D image is written into the LCD.

In an experiment the modified structure in Figure 5.32 was used for integral imaging with the lenticular behind the elemental images. It is important to note that the relatively large distance of 69.3 mm between the pinholes and the lenticular has the beneficial effect of doubling the number of point light sources behind the LCD. This is due to two beams of light entering the lenses at different angles. As the pitch of the lenses was 1 mm and the pitch of the point light sources approximately 0.5 mm, a relatively large number of 14 × 14 pixels is contained in an elemental image.

As a 3D object, the number "3" located 20 mm in front of the point light sources and the letter "D" 20 mm behind them were displayed, as demonstrated in Figure 5.33. One of the objects plays the role of a real image and the other of a virtual image. Figure 5.33 shows five views of the two objects; the perspective is clearly visible, as depending on the viewing direction "3" appears ahead of or behind "D." The images of the two objects represent the real and virtual displays for the modified integral imaging structure. For the conventional integral imaging structure the use of the real and the virtual image was introduced by the same group of researchers [22].

Figure 5.33 Five views of the two objects, a "3" and a "D."

The drawback of this LED approach for 2D/3D switching is again the thickness of the device.

In the search for a thinner device, a solution with a pinhole array on a polarizer (PAP) as in Figure 5.34 was proposed [28]. The backlight becomes polarized while passing the PAP, with the exception of the light centered in the pinholes, which the polarizer does not cover. If the polarization switcher in Figure 5.34 is set to a linear polarization perpendicular to the polarization of the PAP, only the light from the pinholes can pass the switcher. That way, the LC panel with the elemental images is lit by an array of point light sources. This is the typical situation for the generation of a 3D display for a viewer in front of the LCD.

Figure 5.34 The structure of a PAP 2D/3D converter.

If the polarization switcher is turned into the direction of polarization of the PAP, the PAP plane creates a planar light wave which is only able to produce a 2D display, provided a conventional 2D image has been loaded into the LCD.

This 2D/3D converter is thin, since the long collimation stretch of light is no longer needed and the gap between the LCD and the PAP can be kept small. It is also a low-cost solution. However, the light efficiency is poor, because in the 3D mode the polarization switcher has to absorb or reflect 20–30% of the light, and only the light stemming from the pinholes is allowed to pass. Together with other losses, only 10% of the light finally reaches the LCD. Only in the 2D mode does the system possess a high light efficiency.

A further version of the 2D/3D converter, in Figure 5.35a and 5.35b, uses an array of fibers to create the point light sources [29]. This avoids the lossy suppression of light by a polarizer. The light exiting at the ends of a stack of fibers illuminates the point sources. The fibers are waveguides which can guide light even through the bent structure in Figure 5.35a and 5.35b, which reduces the thickness of the system.

Figure 5.35 The 2D/3D conversion (a) with a backlight behind fibers and (b) with point light sources guided through fibers.

In the 2D mode in Figure 5.35a, light source 1 behind the LCD is turned on and provides diffused light for the LCD by shining the light through the gaps between the fibers. This is the type of light needed for a 2D display. The fibers may generate some non-uniformity in the light at the LCD.

In the 3D mode, light source 2, consisting of a transparent plate at the fibers, generates a point light source for each fiber. These point light sources can exhibit a high luminance and can be placed very accurately at the LCD. The array of point light sources is required for the 3D mode.

For both modes the luminance can be very high. The system's volume is medium: the device is thicker than the PAP approach but thinner than the PDLC version. Deterioration of the uniformity of light in the 2D mode is the main drawback.

Finally we consider the 2D/3D conversion in Figure 5.36a and 5.36b with two LC panels and two types of light sources [30]. Two stacked translucent LCDs, called panel 1 and panel 2, are employed.

Figure 5.36 A 2D/3D converter with two LCDs (a) in the 3D mode and (b) in the 2D mode.

In the 3D mode, panel 2 is switched transparent, so it does not participate in the image processing. The elemental images are displayed in panel 1. The lens array in front of panel 1 generates the 3D image from panel 1 in the conventional way. Figure 5.36a shows the 3D object, a number "3" in front of the letter "D." The 3D image can exhibit a high resolution, because a bundle of rays exits each lens, where each ray carries different information. As in all cases where a lens array is applied, the seam lines of the array may be visible.

There is only a moderate loss of light, as the transparent panel 2, being an LCD, still loses some light.

On the other hand, in the 2D mode, panel 1 works as a transmissive display and provides a white area backlight for panel 2, into which a 2D image has been loaded. So the viewer perceives the 2D image of panel 2. Figure 5.36b shows a flower as the 2D image on panel 2. This image can offer the high luminance and high contrast of an LCD.

The thickness of this two-panel structure is reduced because all the components can be densely stacked, while still preserving the appropriate gap g between the lenticular and the elemental images.

Acknowledgments

The author gratefully acknowledges permission to reproduce figures and tables granted by the institution and publisher named below. The sources of the figures and tables are also listed below together with their corresponding numbers in this book.

Society for Information Display (SID)

SID – Symposia and Conferences
SID 02 p. 1415, figure 2b reproduced as Figure 5.6
SID 06 p. 183, figures 2, 6, 7 reproduced as Figures 5.7, 5.9, 5.8
SID 09 p. 611, figures 1a,b, 11 reproduced as Figures 5.15a,b, 5.16
SID 08 p. 749, figures 3, 4, 6 reproduced as Figures 5.17, 5.18, 5.19
    p. 750, table reproduced as Table 5.1
    p. 753, figures 2, 3, 6a–c reproduced as Figures 5.25, 5.24, 5.26
SID 09 p. 607, figures 1, 2, 6a–c reproduced as Figures 5.29, 5.28a,b, 5.27
SID 06 p. 1146, figures 1a,b, 2, 3, 4, 7 reproduced as Figures 5.30a,b, 5.32, 5.31a,b, 5.33

Springer Verlag

B. Javidi, F. Okano, and J.Y. Son (eds.)

Three Dimensional Imaging, Visualization and Display, 2009

p. 33, figures 2.8a–c, 2.10 reproduced as Figures 5.10a–c, 5.11
p. 45, figures 3.4, 3.5a,b reproduced as Figures 5.4, 5.5a,b
p. 47, figures 3.6, 3.7, 3.9 reproduced as Figures 5.12, 5.13, 5.14a,b
p. 69, figures 4.9, 4.11, 4.12 reproduced as Figures 5.34, 5.35a,b, 5.36a,b

References

1. Lippmann, M.G. (1908) Epreuve reversible donnant la sensation du relief. J. Phys., 7 (4th ser.), 821.

2. Lippmann, M.G. (1908) La photographie integrale. CR Acad. Sci., Paris, 146, 446.

3. Ives, H.E. (1931) Optical properties of a Lippmann lenticulated sheet. J. Opt. Soc. Am., 21, 171.

4. Davies, N. et al. (1994) Design and analysis of an image transfer system using microlens arrays. Opt. Eng., 33 (11), 3624.

5. Lee, B. et al. (2002) Theoretical analysis for three-dimensional integral imaging systems with double devices. Appl. Opt., 41(23), 4856.

6. Min, S.W. et al. (2004) Analysis of an optical depth converter used in a three-dimensional integral imaging system. Appl. Opt., 43, 4539.

7. Min, S.W. et al. (2009) Integral imaging using multiple display devices, in Three Dimensional Imaging, Visualization and Display (eds. B. Javidi, F. Okano, and J.Y. Son), Springer, Berlin.

8. Nam, F.H. et al. (2002) Autostereoscopic 3D display apparatus using projectors and LC image splitter. SID 02, p. 1415.

9. Kim, V. et al. (2006) Continuous depth enhanced integral imaging using multilayered display devices. SID 06, p. 182.

10. Goodman, J.W. (2005) Introduction to Fourier Optics, 3rd edn, Roberts, Greenwood Village, CO.

11. Hopkins, H.H. (1955) The frequency response of a defocused optical system. Proc. R. Soc. Lond., Ser. A, 231, 91.

12. Dowski, E.R. and Cathey, W.T. (1995) Extended depth-of-field through wavefront coding. Appl. Opt., 34, 1859.

13. Castro, A. and Ojeda-Castañeda, C. (2004) Asymmetric phase mask for extended depth of field. Appl. Opt., 43, 3474.

14. Castro, A. et al. (2009) High depth-of-focus integral imaging with asymmetric phase mask, in Three-Dimensional Imaging, Visualization and Display (eds. B. Javidi, F. Okano, and J.Y. Son), Springer, Berlin.

15. Choi, H. et al. (2005) Improved analysis on the viewing angle of integral imaging. Appl. Opt., 44 (12), 2311.

16. Min, S.W. et al. (2003) Enhanced three-dimensional integral imaging system by use of double display devices. Appl. Opt., 42, 4186.

17. Choi, H. et al. (2003) Multiple viewing-zone integral imaging using dynamic barrier array for three-dimensional display. Opt. Express, 11(8), 927.

18. Kim, Y. et al. (2004) Viewing angle enhanced integral imaging system using a curved lens array. Opt. Express, 12 (3), 421.

19. Kim, Y. et al. (2007) Resolution enhanced three-dimensional integral imaging using double display devices. IEEE Lasers and Electro-Optics Society Annual Meeting, Orlando, FL, USA, paper TuW3, p. 356.

20. Sakai, H. et al. (2009) Autostereoscopic display based on enhanced photography using overlaid multiple projectors. SID 09, p. 611.

21. Oikawa, M. et al. (2008) Sample applications suitable for features of integral videography. SID 08, p. 748.

22. Kim, Y. et al. (2008) Projection-type integral imaging system using mirror array. SID 08, p. 752.

23. Park, J.H. et al. (2009) Viewing direction controllable three-dimensional display based on integral imaging. SID 09, p. 601.

24. Cho, S.W. et al. (2006) A convertible two-dimensional–three dimensional display based on a modified integral imaging technique. SID 06, p. 1146.

25. Park, J.H. et al. (2004) Depth enhanced three-dimensional–two-dimensional convertible display based on modified integral imaging. Opt. Lett., 29 (23), 2734.

26. Park, J.H. et al. (2005) Resolution enhanced three-dimension-two-dimension convertible display based on integral imaging. Opt. Express, 13(6), 1875.

27. Choi, S.W. et al. (2006) Convertible two-dimensional-three-dimensional display using an LED array based on integral imaging. Opt. Lett., 31(19), 2852.

28. Choi, H. et al. (2006) A thin 3D-2D convertible integral imaging system using a pinhole array on a polarizer. Opt. Express, 14(12), 5183.

29. Kim, Y. et al. (2007) Three dimensional integral display using plastic optical fibers. Appl. Opt., 46(29), 7149.

30. Choi, H. et al. (2005) Wide viewing angle 3D/2D convertible display system using two display devices and a lens array. Opt. Express, 13(21), 8424.
