7
Technologies to Improve Image Information Quality

In this book, it has been explained that image information is composed of light intensity, space, wavelength, and time, and that the role of imaging is to obtain each factor with a level of information quality sufficient for the goal of the imaging system. Since imaging systems are used for various applications, the information they are required to provide is not always the same. As the most important information varies according to the purpose of the imaging system, technologies to improve the performance of the important factors have been developed. In this chapter, some examples that advance the information quality of each factor are described.

7.1 Light Intensity Information

Light intensity is the most important information, and its quality is characterized by both sensitivity and dynamic range (DR).

7.1.1 Sensitivity

Sensors measure the amount of light arriving at each built-in coordinate point, and sensitivity is central to their performance. Although sensors have so far been discussed only as a sensor chip, they are actually mounted in imaging systems after being bonded in packages sealed with transparent glass, as shown in Figure 7.1a.

Therefore, losses are incurred both at the optical stage before the light arrives at the silicon surface and at the stage of light and signal charge after penetration into the silicon. This is explained in stages. (1) Incident light decreases by about 5%–10% through reflection at both surfaces of the sealing glass, as shown in Figure 7.1a. Antireflection coatings are sometimes used to recover this loss depending on the application, but they are generally not used because of their cost. (2) Absorption in the sealing glass is negligible except in the ultraviolet (UV) region. (3) On arriving at the sensor, part of the light is reflected at the surface of the on-chip lens (OCL). The OCL is sometimes covered with a material having a low refractive index to suppress this loss. (4) Further, the OCL and the on-chip color filter (OCF) absorb part of the light. Since the role of the OCF is to restrict the wavelength region of the light that passes through it, and the spectral response is directly related to color performance, the OCF must absorb the light that should not be transmitted through the filter of that color. (5) Absorption and reflection by a passivation film and an interlayer isolation film follow. (6) On arriving at the photodiode (PD), there is reflection at the silicon surface. A mirror-polished silicon wafer surface shows a gross* reflectance of 30%–40% in the visible region, similar to metal. Since this impact is by no means small, an antireflection (AR) film is often formed, as mentioned in Section 5.1.2. (7) Before that, light that falls on the area outside the OCL and is not led to the aperture area is also lost.

* However, the reflection is not due to free carriers.


Figure 7.1

Loss factor of sensitivity: (a) packaged sensor; (b) sensor chip; (c) around PD; (d) diagram of p-well depth dependency of spectral response.

(8) On entering the silicon, light absorption starts photoelectric conversion, but the generated electrons are prone to annihilation by recombination with the high-density holes in the p layer near the surface, as shown in Figure 7.1c. From the viewpoint of dark current suppression, a high impurity concentration of the surface p+ layer is desirable. Conversely, from the viewpoint of suppressing signal charge loss by recombination, it is preferable that the p+ layer has low density and is thin. Therefore, a balanced design is necessary. (9) In the case of a sensor formed in a p-well on an n-type substrate, as shown in Figure 7.1c, the signal charges generated in the n-type substrate are discharged. Since a greater depth L, the distance from the surface to the electronic dividing ridge, allows signal charges generated in deeper areas to be collected, the sensitivity at longer wavelengths increases, as mentioned in Section 2.2.2. In the case of sensors formed on a p-type substrate, all the generated signal charges can contribute, as shown in Section 2.2.1.

The ratio of the number of signal charges that contribute to sensitivity to the number of incident photons is called the quantum efficiency (QE). Specifically, the QE referred to the number of photons incident on the image area or a pixel is called the external QE, while the QE referred to the number of photons penetrating into the silicon is called the internal QE.
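To make these ratios concrete, the following minimal sketch, written in Python with purely assumed numbers for the optical losses and the charge collection efficiency, chains the loss stages described above and computes the external and internal QE of a pixel. None of the values are measured data.

    # Illustrative calculation of external and internal QE for one pixel.
    # Every numerical value below is an assumption for illustration only.

    photons_incident = 10_000          # photons arriving at the pixel

    # Optical transmission before the silicon (sealing glass, OCL/OCF,
    # interlayer films, silicon surface with an AR film), as fractions.
    transmissions = [0.95, 0.90, 0.95, 0.85]

    photons_into_silicon = photons_incident
    for t in transmissions:
        photons_into_silicon *= t      # cumulative optical loss

    collection_efficiency = 0.90       # fraction of generated electrons collected
    signal_electrons = photons_into_silicon * collection_efficiency

    external_qe = signal_electrons / photons_incident
    internal_qe = signal_electrons / photons_into_silicon

    print(f"photons into silicon: {photons_into_silicon:.0f}")
    print(f"external QE = {external_qe:.2f}, internal QE = {internal_qe:.2f}")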

As shown in Figure 7.1, while there are many factors that affect sensitivity, the more cost-effective technologies have been adopted. The development of technologies that make effective use of photons and signal charges, such as the OCL proposed in the early 1980s, AR films, backside illumination (BSI), and advanced front-side illumination (FSI), is ongoing, as mentioned in Chapter 5. Noise reduction technologies, which are as important as sensitivity for the signal-to-noise ratio (SNR), are also still being developed, as described in Chapter 5.

7.1.2 Dynamic Range

In this section, the DR, which is the range of light intensity information that a sensor can capture, is discussed. The DR is defined as the ratio of the number of signal electrons at the saturation level to that of the dark noise, as shown in Figure 3.6. The DR is also defined as the ratio of the maximum illuminance at which image information can be obtained without saturation to the illuminance at which the SNR equals unity. In a linear system, since the number of signal electrons is proportional to the light intensity, both definitions of the DR agree. Because it is difficult to greatly improve the DR in a linear system using state-of-the-art technology, a nonlinear approach such as logarithmic conversion of the photocurrent1 or the combination of multiple images is often employed. Nonlinear systems have issues such as image lag, SNR, and temperature characteristics, along with complicated signal processing depending on the application, especially color applications. Low-light performance should never be sacrificed for the sake of obtaining information on highly illuminated objects.
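As a worked example of the first definition, the short sketch below converts an assumed full-well capacity and dark noise level into a DR expressed both as a ratio and in decibels, the unit in which DR is usually quoted; the numbers are illustrative and not taken from any particular sensor.

    import math

    # Illustrative dynamic range (DR) calculation for a linear sensor.
    full_well_electrons = 20_000   # signal electrons at saturation (assumed)
    dark_noise_electrons = 2.5     # dark noise in electrons rms (assumed)

    dr_ratio = full_well_electrons / dark_noise_electrons
    dr_db = 20 * math.log10(dr_ratio)   # DR is usually quoted in decibels

    print(f"DR = {dr_ratio:.0f}:1  ({dr_db:.1f} dB)")  # 8000:1, about 78 dB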

The following sections discuss some examples of techniques that improve the DR. Additionally, the pulse output sensor described in Section 5.3.3.2.3 is one of the methods used to increase the DR.

7.1.2.1 Hyper-D CCD

The hyper-D CCD,2 which was proposed in 1995, can transfer twice the number of signal charge packets of a conventional CCD by forming a double-density VCCD, as shown in Figure 7.2, so that a short-exposure signal can be handled along with the normal one. Saturated signals in the normal exposure are replaced by nonsaturated signals in the short exposure, and they are synthesized by signal processing. Thus, image information can be obtained under much higher illumination than with conventional CCDs. Examples of a captured image are shown in Figure 7.3. While the darker scene cannot be captured well under the same exposure conditions as the highlighted scene using a conventional CCD camera, both areas are captured well by a hyper-D CCD camera. With this signal processing, the linear relationship between light intensity and image signal is not retained over the whole image. This is a successful example of the flexibility, capability, and high functionality offered by combining the device with the electronic system, which was not possible in earlier film camera systems.
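The synthesis step can be pictured with the minimal sketch below, which assumes a short exposure of 1/16 of the normal one, a 12-bit saturation level of 4095, and invented pixel values; it simply substitutes rescaled short-exposure samples where the normal exposure has saturated, which is the essence of the processing described above, not the actual camera algorithm.

    # Sketch of dual-exposure synthesis for wide DR capture.
    # Threshold, exposure ratio, and sample values are assumptions.

    def synthesize(normal, short, exposure_ratio=16, saturation=4095):
        """Combine a normal-exposure and a short-exposure signal per pixel."""
        out = []
        for n, s in zip(normal, short):
            if n < saturation:
                out.append(float(n))                   # normal exposure is valid
            else:
                out.append(float(s) * exposure_ratio)  # rescale short exposure
        return out

    normal_line = [100, 2500, 4095, 4095]   # last two pixels are saturated
    short_line  = [6, 156, 300, 1000]       # same pixels, 1/16 exposure
    print(synthesize(normal_line, short_line))
    # -> [100.0, 2500.0, 4800.0, 16000.0]; in the camera, the combined signal
    #    is further compressed, so the overall response is no longer linear.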


Figure 7.2

Conceptual diagram of hyper-D CCD: (a) conventional CCD; (b) hyper-D CCD.

7.1.2.2 CMOS Image Sensor with Lateral Overflow Capacitor

The operational principle of this device3,4 is explained in Figure 7.4. As shown in Figure 7.4a, one field-effect transistor (FET) (M3) and one storage capacitor (CS) are added to a four-transistor (4-Tr) pixel configuration composed of a PD, readout transistor (M1), reset transistor (M2), amplifying (or drive) transistor (M4), and row select transistor (M5). The operation is shown in Figure 7.4b. At time t1, by turning M2 and M3 on, the floating diffusion (FD) and CS are reset to the voltage source to start the exposure. After the reset operation, the noise N2 existing in the combined capacitor [FD + CS] is output and stored in off-chip memory at time t2. During the exposure period, M3 is kept on so that any oversaturated signal charges overflowing from the PD due to high illuminance are stored in the FD and CS, as shown at time t3. Before the signal charges are read out from the PD, M3 is turned off. At this time, part of the oversaturated charges and the noise N2 existing in the FD are output as noise N1 at time t4. Readout of the nonsaturated signal charge S1 from the PD to the FD follows, and (S1 + N1) is output at time t5. The output voltage corresponding to S1 is obtained as the difference between the output voltages of N1 and (S1 + N1). Next, M3 is turned on to sum the (oversaturated signal charges + N2) at time t3 and the signal charge S1 to get (S2 + N2), that is, the summation of all signal charges and the initial noise N2. The summed charge is output using (FD + CS) as the charge detection capacitor at time t6, and the output voltage corresponding to the total signal charge S2 is obtained as the difference from the output voltage of noise N2 stored in the off-chip memory. Because the noise charge N2 at time t6 is the summation of N2 at time t2 and the dark current generated at the FD during the exposure period, it is not exactly the same as N2 at time t2. While frame memory is necessary to store N2 of each pixel at time t2, there is a proposal to substitute N2 with that of the next frame to avoid installing the memory. Although the reset noise cannot be canceled in this case because there is no correlation between the different reset operations, the signal level in the oversaturated situation is high, so the operation can be considered highly tolerant of this noise.
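A minimal sketch of the signal arithmetic, with assumed output voltages for the sampling points t2 to t6, is shown below; it reproduces only the subtractions that yield S1 and S2 and is not a model of the actual circuit.

    # Signal arithmetic of the lateral-overflow-capacitor pixel,
    # following the sampling points t2-t6 described above.
    # All voltage values are assumed, in arbitrary units.

    v_N2    = 0.050   # noise of the combined capacitor (FD + CS), stored at t2
    v_N1    = 0.900   # FD alone: part of overflow charge + noise, sampled at t4
    v_S1_N1 = 1.450   # FD alone after PD readout: (S1 + N1), sampled at t5
    v_S2_N2 = 2.480   # FD + CS after summation: (S2 + N2), sampled at t6

    S1 = v_S1_N1 - v_N1      # nonsaturated signal from the photodiode
    S2 = v_S2_N2 - v_N2      # total signal including the overflow charge

    print(f"S1 = {S1:.3f}, S2 = {S2:.3f}")
    # The camera uses S1 for low illuminance and S2 (with the appropriate
    # capacitor-ratio scaling) for high illuminance, extending the DR to
    # around 100 dB as reported in the cited papers.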


Figure 7.3

(See color insert) Examples of captured images demonstrating dynamic range: (a) conventional CCD; (b) hyper-D CCD.


Figure 7.4

Wide dynamic range CMOS image sensor with lateral overflow capacitor: (a) pixel configuration; (b) schematic diagram of operational principle. (Reprinted with permission from Akahane, N., Sugawa, S., Adachi, S., Mori, K., Ishiuchi, T., and Mizobuchi, K., IEEE Journal of Solid-State Circuits, 41, 851–856, 2006.)

7.2 Space Information

The improvement of space (position) information is nothing less than progress in spatial resolution. The most direct way to enhance the Nyquist frequency is to increase the pixel number. The pixel interpolation array, described in Section 5.2.3.1, increased the horizontal resolution by devising the pixel array without increasing the pixel number. This sensor was produced for video cameras in the early 1980s. Because the number of scanning lines is decided by the television system format, there was no need to increase the vertical resolution. A sensor that extends this approach to the vertical resolution as well, for digital camera use, is the pixel interleaved array CCD (PIA CCD).5

Square and interleaved pixel arrays in real space are shown in Figure 7.5a and b, the interleaved array corresponding to the square array rotated by 45°. The vertical, horizontal, and diagonal pixel pitches in the square array are p, p, and p/√2, respectively. Conversely, the vertical and horizontal pitches in the interleaved array are shortened to p/√2, while the diagonal pitch is p, as shown in Figure 7.5b.

The Nyquist frequencies obtained by Equation 6.1 are shown in frequency space in Figure 7.5c. In the interleaved array, the Nyquist frequency is higher than that of the square array in the vertical and horizontal directions, while that in the diagonal direction is lower. Thus, part of the higher resolution in the diagonal direction of the square array is reallocated to the vertical and horizontal directions in the interleaved array. Since the configuration is only rotated by 45°, the sampling density, that is, the information density, is the same, but its weighting among directions is changed.
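These relationships can be checked with the short calculation below, which evaluates 1/(2 × pitch) for the effective pitches of both arrays; the pixel pitch p is set to 1 in arbitrary units.

    import math

    # Nyquist frequencies for a square array and a 45-degree interleaved
    # array, both with pixel pitch p (monochrome case, Figure 7.5).
    p = 1.0  # pixel pitch, arbitrary unit

    def nyquist(pitch):
        return 1.0 / (2.0 * pitch)

    square = {
        "horizontal/vertical": nyquist(p),                  # pitch p
        "diagonal":            nyquist(p / math.sqrt(2)),   # pitch p/sqrt(2)
    }
    interleaved = {
        "horizontal/vertical": nyquist(p / math.sqrt(2)),   # shortened to p/sqrt(2)
        "diagonal":            nyquist(p),                  # pitch p
    }
    print("square     :", square)
    print("interleaved:", interleaved)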

So is there much point in this? The answer is “yes.” Watanabe et al.6 report that the human eye has higher sensitivity in the vertical and horizontal directions. Additionally, the paper on the PIA5 showed that vertical and horizontal line structures appear statistically more often than other angles in a huge number of images. Thus, allocating resolution in accordance with the characteristics of the human eye and the statistical nature of objects is desirable.


Figure 7.5

Pixel arrays in real space and Nyquist frequency of each array: (a) pixel square array; (b) pixel interleaved array; (c) Nyquist frequency.

The explanation so far is for the monochrome case, in which each pixel is correlated with its neighboring pixels and a reasonable response or output can be expected from each pixel. However, the situation is quite different in the case of a one-chip color system, that is, one color filter on one pixel.

Before the discussion proceeds, some points concerning the characteristics of the human eye should be made. As mentioned in Section 6.4, the human eye and brain perceive color as a set of stimulus values of the cones S, M, and L. The wavelength that contributes most, that is, the wavelength of highest sensitivity, is around 550 nm, which is from green to yellow-green. Thus, the pixel that contributes most to sensitivity and resolution is the green filter pixel; therefore, the green pixel array is important.

First, the Bayer configuration filter applied to a square pixel array sensor is discussed. The Nyquist frequency of each of G, R, and B is considered, as in the previous monochrome case, by focusing on the array of each color.

As shown in Figure 7.6a, the G pixel configuration is an interleaved array whose vertical and horizontal pitch is p, the same as in the monochrome case. Therefore, the Nyquist frequency of green in the vertical and horizontal directions is 1/2p, the same as that of the monochrome case, indicated by B/W in Figure 7.6b. Thus, the combination of a square array sensor and the Bayer configuration is well matched.* The pitches of R and B in the vertical and horizontal directions are 2p, and the Nyquist frequency is 1/4p, as shown in the figure.

* This is not surprising because the concept of Bayer’s invention (Reference [6] in Chapter 1) is to arrange the resolution contributory color in a checkerwise fashion.


Figure 7.6

(a) Pixel square array with Bayer configuration color filter; (b) Nyquist frequency of each color.

Conversely, in the combination of an interleaved pixel array sensor and the Bayer configuration shown in Figure 7.7a, the G pixel configuration is a square array with a pitch of √2p in the vertical and horizontal directions; therefore, the Nyquist frequency is 1/2√2p, as shown in Figure 7.7b. This is a degradation to one-half of the monochrome case and is lower than that of the square pixel array sensor, while the Nyquist frequency in the diagonal direction is as high as 1/2p. As the pitch of R and B in the vertical and horizontal directions is √2p, the same as G, their Nyquist frequency is also 1/2√2p.
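The following sketch repeats the same calculation for the G pixels of both combinations, using the effective G pitches stated above (p is again 1 in arbitrary units), and reproduces the factor-of-two difference in the horizontal and vertical Nyquist frequencies.

    import math

    # G-pixel Nyquist frequencies when a Bayer filter is applied to a square
    # array and to an interleaved array of pixel pitch p (Figures 7.6, 7.7).
    p = 1.0

    def nyquist(pitch):
        return 1.0 / (2.0 * pitch)

    # Square array + Bayer: G pixels form an interleaved (checkerboard)
    # lattice; the effective horizontal/vertical pitch stays p.
    g_square_hv = nyquist(p)                       # 1/(2p), same as monochrome

    # Interleaved array + Bayer: G pixels form a square lattice of pitch sqrt(2)p.
    g_interleaved_hv = nyquist(math.sqrt(2) * p)   # 1/(2*sqrt(2)p)
    g_interleaved_diag = nyquist(p)                # diagonal pitch p -> 1/(2p)

    print(f"G on square array (H/V)      : {g_square_hv:.3f}")        # 0.500
    print(f"G on interleaved array (H/V) : {g_interleaved_hv:.3f}")   # 0.354
    print(f"G on interleaved array (diag): {g_interleaved_diag:.3f}") # 0.500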

This is thus another technology whose advantage, valid in a straightforward way for monochrome image capture, turns out quite differently for color image capture with a single-chip color system.

Since many objects are nearly achromatic and the spectral response of the green color filter overlaps those of the red and blue filters, degradation to this level is infrequent. However, rather complicated signal processing is required, as well as a wider overlap of the spectral responses between colors, which tends to degrade the hue accuracy. Because an interleaved array configuration essentially assumes that every pixel has a proper level of response, many elements must be combined and tuned, such as the spectral response of each color, the color configuration, and the algorithms for color and luminance signal processing. Therefore, depending on the color distribution of the objects, the expected effect may not appear or may even fall to a low level.7


Figure 7.7

(a) Pixel interleaved array with Bayer configuration color filter; (b) Nyquist frequency of each color. (Reprinted with permission from Kosonocky, W., Yang, G., Ye, C., Kabra, R., Xie, L., Lawrence, J., Mastrocolla, V., Shallcross, F., and Patel, V., Proceedings of the IEEE Solid-State Circuits Conference Digest of Technical Papers, 11.3, pp. 182–183, San Francisco, CA, 1996.)

The ratio of the pixel numbers of R, G, and B is 1:2:1 in the Bayer configuration; however, there is an example in which resolution and sensitivity were improved by increasing the proportion of G pixels to a ratio of 1:6:1 for camcorder applications.8

7.3 Time Information

The improvement of time information essentially means higher time resolution.

7.3.1 Frame-Based Sensors

In the frame-based integration mode, which almost all image sensors employ and in which exposure is carried out with a predefined period and exposure time, a higher time resolution and a higher modulation transfer function (MTF) can be obtained by using a higher frame rate and a shorter exposure period, as mentioned in Section 6.3. To that end, the task is to realize a higher frame rate (frames/s: fps) or a higher output pixel rate (pixels/s). The approaches taken in frame-based sensors are as follows:

  1. Parallel output by multiple channels

  2. High-speed digital output by column-parallel analog-to-digital converter (ADC)

  3. Burst-type sensor with the required number of built-in frame memories

7.3.1.1 Parallel Output–Type Sensor

In this method, because the pixel signal output is divided among multiple parallel output channels, the total output rate can be increased in proportion to the number of output channels, as shown in the conceptual diagrams in Figure 7.8a and b, which illustrate a 4-channel output CCD and an 8-channel output CMOS sensor, respectively. There is an example of a CMOS sensor with 128-channel parallel output,9 with 64 channels each at the top and bottom. It should be noted that, regardless of the sensor type, fixed-pattern noise tends to arise as output differences due to variations in the output channel characteristics, so countermeasures are necessary.
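As a rough illustration of why parallel outputs are needed, the sketch below estimates the number of channels for an assumed 1-Mpixel sensor at 1000 fps with an assumed per-channel rate of 125 Mpixel/s; all figures are hypothetical.

    # Rough output-rate estimate for a parallel-output sensor.
    # Pixel count, frame rate, and per-channel rate are assumptions.
    pixels_per_frame = 1_000_000       # 1 Mpixel sensor (assumed)
    frame_rate_fps   = 1_000           # target frame rate (assumed)

    required_pixel_rate = pixels_per_frame * frame_rate_fps    # pixels per second
    per_channel_rate    = 125_000_000                          # 125 Mpixel/s (assumed)

    channels_needed = -(-required_pixel_rate // per_channel_rate)  # ceiling division
    print(f"required rate   : {required_pixel_rate / 1e9:.1f} Gpixel/s")
    print(f"channels needed : {channels_needed}")   # 8 parallel output channels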


Figure 7.8

Examples of parallel output sensors: (a) CCD; (b) CMOS sensor.

7.3.1.2 Column-Parallel ADC-Type Sensor

This type of sensor is described in detail in Section 5.3.3.2.2. As mentioned there, various types of column-parallel ADC sensors have been reported, and a high output rate of gigapixels per second has been achieved. Continuous progress is expected in accordance with the applications.

7.3.1.3 Burst-Type Sensor

All the sensors described thus far produce continuous output with continuous capturing. In contrast, a burst-type sensor is equipped with on-chip frame memories to store the captured image signal charges, giving priority to a higher frame rate by avoiding the output operation at each frame, which requires driving and output time. The first example is the burst CCD,10 proposed in 1996.

A two-by-two pixel configuration is shown in Figure 7.9. Signal charges generated in the PD are collected and stored in the potential well under the G1 gate. To capture successive frames, they are transferred in series into the serial-parallel register of the pixel by way of the G3 gate channel. The signal charges of the following frame are also transferred into the serial-parallel register. After the serial-parallel register is filled with the signals of five frames, the charge packets are transferred in parallel from the serial-parallel register to the parallel register. By repeating this action, the 5 × 6 frame memory is filled. This operation is continued until the desired phenomenon is observed. During the operation, when the signal charges are transferred in parallel from the serial-parallel register to the parallel register, the five signal charge packets in the last row of the parallel register of the pixel above are transferred to the serial-parallel register of the lower pixel. At each serial transfer of new signal charges from the PD to the serial-parallel register, the five signal charges transferred from the memory of the pixel above are likewise transferred in series to the dumping drain D.


Figure 7.9

Pixel configuration of burst-type CCD (serial-parallel memory).

Since this sensor has a register that can store 30 signal charge packets at each pixel, the number of image frames that can be captured is 30. This is an issue of the range of time information, in terms of the accuracy and range of the four factors mentioned in Section 1.1.

While this sensor aimed to realize 10⁶ fps, the frame rate accomplished at the time of the conference presentation was 3 × 10⁵ fps. The in situ storage image sensor (ISIS)12 was devised following the advice that a linear CCD-type memory should be employed to pursue a higher frame rate.11 This advice was based on the insight that the serial-parallel register of the above sensor made this goal difficult to achieve, an idea almost impossible to conceive for those who have never actually developed CCDs.

As shown in Figure 7.10, a linear CCD-type memory with 103 stages is formed at each pixel, and 10⁶ fps is achieved. Signal charges under a photogate sensor are transferred to the linear CCD memory every 1 μs, and the signal charges already in the memory are shifted at the same time; the head charge packet, on arriving at the drain, is discharged. This operation is repeated until the desired phenomenon is observed. In 2011, this sensor was improved13 to realize a frame rate as high as 1.6 × 10⁷ fps by using BSI for high sensitivity together with charge multiplication. However, it does have issues, such as the increased cooling load on the system to suppress the heat caused by high-frequency driving of the CCD, which is a large capacitance.
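The storage principle common to these burst sensors, keeping only the most recent frames and discharging the oldest packet to the drain, can be modeled as a simple circular buffer, as in the sketch below; the stage count follows the ISIS example above, while the frame data are dummies.

    from collections import deque

    # Behavioral sketch of in-pixel burst memory: keep only the most recent
    # N frames, discarding the oldest, until a trigger stops the capture.

    N_STAGES = 103   # number of in-pixel memory stages (ISIS example)

    memory = deque(maxlen=N_STAGES)   # oldest element is dropped automatically

    def capture_frame(pixel_value):
        """Transfer a new charge packet into the memory; when the memory is
        full, the head packet is discharged (here: silently discarded)."""
        memory.append(pixel_value)

    # Continuous capture until the phenomenon of interest is observed...
    for frame_number in range(10_000):
        capture_frame(frame_number)

    # ...then readout: the memory holds only the last N_STAGES frames.
    print(len(memory), memory[0], memory[-1])   # 103 9897 9999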


Figure 7.10

Pixel configuration of ISIS (linear CCD-type frame memory). (Reprinted with permission from Etoh, T., Poggemann, D., Ruckelshausen, A., Theuwissen, A., Kreider, G., Folkerts, H., Mutoh, H. et al., Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers, 2.7, pp. 46–47, San Francisco, CA, 2002.)

7.3.1.4 Coexistence Type of Burst and Continuous Imaging Modes

A burst-type CMOS image sensor has also been developed. By exploiting the strengths of the CMOS sensor, analog frame memory areas are formed separately from the image area on the sensor,14 which has 400(H) × 256(V) pixels, as shown in Figure 7.11.

On the top and bottom of the image area, frame memories of 128 frames/pixel are formed for temporary storage. Signals are read out from the image area to the memory at high speed through 32 signal lines in each column, that is, four pixels per output line. The sample-and-hold circuits of a CDS, arranged as circuits within each pixel, provide a global shutter function. Interestingly, it can operate in both a burst imaging mode of 10⁷ fps (1 Tpixel/s) storing 128 frames of 10⁵ pixels and a continuous imaging mode of 7.8 kfps (780 Mpixels/s) of 10⁵ pixels with parallel analog output. Together with the additional measures included to achieve high speed, it is an impressive imager filled with ideas.


Figure 7.11

Architecture of high-speed CMOS sensor. (Reprinted with permission from Tochigi, Y., Hanzawa, K., Kato, Y., Kuroda, R., Mutoh, H., Hirose, R., Tominaga, H., Takubo, K., Kondo, Y., Sugawa, S., Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers, 22.2, pp. 382–384, San Francisco, CA, 2012.)


Figure 7.12

Event-driven sensor: (a) pixel schematic and the role of each block; (b) log I and reconstructed log signals; (c) temporal transition of A·d(log I). (Reprinted with permission from Lichtsteiner, P., Posch, C., Delbruck, T., Journal of Solid-State Circuits, 43, 566–576, 2008.)

7.3.2 Event-Driven Sensor

The sensor described here15 is the second example in this book that does not belong to “almost all image sensors.” It is not a frame-based sensor in which signal charges are integrated for a prefixed exposure period; its principle of operation is completely different except for the photoelectric conversion.

As shown in Figure 7.12a, a pixel is composed of three blocks.16 In (1), the signal generation block, the photocurrent through the PD is not integrated but monitored constantly. The voltage at the node connected to the cathode of the PD and the source of the feedback transistor, Mfb, is amplified by an inverting amplifier, and the output is connected to the gate of Mfb to form an amplified and log-transformed voltage output of the photocurrent I. This output is transferred to (2), the amplification and differentiation block, and is temporally differentiated, so that only the component that varies with time is amplified. The output is transmitted to (3), the quantization block, and is monitored by an increase/decrease checking comparator, which emits an ON or OFF pulse signal, in accordance with an increase or decrease of the input, when a predefined amount of change is observed. The signal is thus generated as pulses indicating an increase or decrease by the predefined amount, only at the time of change and only at the changed pixel; there is no signal showing the light intensity directly. In that context, this sensor has commonality* with the pulse output sensor described in Section 5.3.3.2.3. Therefore, the light intensity, or the change of light intensity information, is quantized in these sensors, differently from “almost all sensors,” while the time information is not quantized.

Thus, the obtained information is only the pixel address, the time, and the direction (increase or decrease) in which the predefined amount of change occurred. That signals are generated only at the pixels and at the times at which the input has changed is a major feature of this sensor, and is quite different from frame-based sensors. Therefore, redundancy is significantly reduced compared with the frame-based mode, in which signals are read out from every pixel at every frame regardless of whether the input changes. The signal pulses generated in the quantization block are also used for the reset operation of the second block, from which the next monitoring starts. If the output of the first block, log I or Vp, changes as shown by the solid line in Figure 7.12b, the output of the second block, A·d(log I), appears. When it reaches the ON or OFF threshold, which is the predefined amount of change, a signal pulse is emitted by the third block, as shown in Figure 7.12c, and the output of the second block is reset. From the time information obtained when the predefined change occurred, as shown in Figure 7.12c, the signal shown by the dotted line in Figure 7.12b can be obtained as a reconstructed temporal transition of log I or Vp. In this reconstruction, while it can be said that the light intensity information is quantized by the predefined amount of change of the amplified differential signal, the time information is obtained with a resolution as high as that determined by the circuit characteristics. Thus, the time information is not quantized in this sensor, since it is not a frame-based sensor. In this sensor, the quantized factors, or built-in coordinate points, are space and the amount of change of the time-derivative of the log-transformed voltage output of the amplified photocurrent I. Therefore, the signal output of the sensor is not the amount of integrated signal charge S(r, t), but the time T at which the amount of change of the time derivative of the log-transformed voltage output of the amplified photocurrent I reaches a predetermined quantity ±A·∆[∂log I/∂t]q at pixel rk, that is, T(±A·∆[∂log I/∂t]q, rk), where A is the voltage gain of the amplification.

* Neither of these sensors belongs to the frame-based integration mode type; both emit a pulse signal and reset the monitored quantity to restart when the light intensity information changes by a predefined amount.

The time resolution of this sensor is better than 10 μs. The DR is 120 dB, owing to the logarithmic output and the differencing circuit. Because only pixels whose input has varied emit a signal pulse, a moving object forms a cluster of such pixels and can be recognized as a moving object in real time. This sensor has 26 transistors per pixel.
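A behavioral model of a single pixel, with assumed gain, threshold, and intensity waveform (none taken from the actual device), is sketched below; it emits ON/OFF events whenever the amplified change of log I since the last reset crosses the threshold, which is the operation described above.

    import math

    # Behavioral model of one event-driven (temporal-contrast) pixel.
    # Gain, threshold, and the intensity waveform are assumptions.

    A = 20.0            # voltage gain of the differencing amplifier (assumed)
    THRESHOLD = 0.3     # event threshold on A * d(log I) (assumed)

    def pixel_events(intensities, times):
        """Return (time, polarity) events for one pixel's intensity samples."""
        events = []
        ref_log_i = math.log(intensities[0])     # reference set at the last reset
        for t, i in zip(times[1:], intensities[1:]):
            d = A * (math.log(i) - ref_log_i)    # amplified change of log I
            if d >= THRESHOLD:
                events.append((t, "ON"))
                ref_log_i = math.log(i)          # reset the differencing block
            elif d <= -THRESHOLD:
                events.append((t, "OFF"))
                ref_log_i = math.log(i)
        return events

    # Assumed intensity ramp followed by a decay, sampled every 10 microseconds.
    times = [k * 10e-6 for k in range(200)]
    intensities = [100 * (1.0 + 0.02 * k) if k < 100
                   else 300 * math.exp(-0.01 * (k - 100)) for k in range(200)]
    for t, pol in pixel_events(intensities, times)[:5]:
        print(f"{t * 1e6:6.0f} us  {pol}")

    # log I can be reconstructed from the event times by adding or subtracting
    # THRESHOLD / A at each ON or OFF event (the dotted line in Figure 7.12b).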

Readers are directed to the website of the research institute of this sensor,17 where various interesting moving images are shown on the homepage.

7.4 Color and Wavelength Information

The physical quantity named “wavelength” exists in the natural world, whereas “color” is a perception generated by the human eye and brain. As mentioned in Section 6.4, since it is quite difficult to obtain images with physically accurate wavelength information using current technology, a subjective color reproduction technique is commonly used for applications viewed by the human eye. In this field, single-sensor color camera systems represented by the primary colors R, G, and B or by the complementary colors magenta, yellow, cyan, and green, and three-sensor camera systems with the primary colors, are used.

7.4.1 Single-Chip Color Camera System

Color reproduction by three or four colors is, by its nature, an approximation. Therefore, it is possible to express more subtle shades of color by adding a new color with an appropriate spectral response; however, this usually has side effects such as degradation of the SNR. As system designs are determined by which characteristics should be featured within a balanced overall performance, designs unsuitable for high sensitivity, which is the priority for common imaging systems, are not widely adopted. In digital still camera systems, the Bayer configuration color filter remains dominant.

7.4.2 Multiband Camera System

While the color information of a pixel in a single-chip color system is represented by a set of R, G, and B signals, a multiband camera can obtain much more detailed color or wavelength information, as mentioned in Section 6.4.

As shown in Figure 7.13, multiple color filters mounted on a turret are set in front of the photographic lens in a multiband camera system, and still images are captured through each color filter in turn. It has been clarified that, in principle, 99% of color information can be obtained with three kinds of color filters.18 However, as it is quite difficult to realize such an ideal combination of color filters, a practical solution was proposed in which 99% of the color information is obtained using five kinds of commercially available pigment color filters. While this system was developed for up to 8-band cameras, there is also an example of a 16-band camera.19

7.4.3 Hyperspectral Imaging System

A hyperspectral imaging system carries the multiband camera concept to its limit. While the objective of a multiband camera with a small number of bands is to obtain higher color information, that of a hyperspectral imaging system is precise and wide-ranging wavelength information. A hyperspectral imaging system can be considered a camera with a built-in spectroscopic device such as a grating,20 rather than a multiband camera with a restricted number of color filters. Hyperspectral imaging systems with a 5 nm bandwidth and around 100 bands have been developed, and such a system can be considered the ultimate multiband camera.

The operational principle of hyperspectral imaging is shown in Figure 7.14. A linear portion of the whole image is passed through an optical slit and is dispersed by a spectroscopic device in the vertical direction of the image sensor. Focusing on one vertical line on the sensor, the spectrum is obtained from the longest wavelength at the top edge pixel to the shortest wavelength at the bottom pixel. On completion of the readout of one frame, the linear image portion is shifted to the next position, which is captured in the following step in the same manner. By scanning the two-dimensional image from the top line to the bottom line, a hyperspectral image is obtained. Since the principle is the same as that of a multiband camera, it is not suitable for real-time reproduction.
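The line-by-line assembly of the data cube can be sketched as follows; the cube dimensions and the read_sensor_frame() stand-in are assumptions, with random numbers in place of real spectra.

    # Sketch of pushbroom hyperspectral capture: each sensor readout gives
    # one spatial line with a full spectrum per pixel; scanning line by line
    # builds a (lines x pixels x bands) data cube.
    import numpy as np

    N_LINES, N_PIXELS, N_BANDS = 480, 640, 100   # e.g. ~100 bands of ~5 nm width

    def read_sensor_frame(line_index: int) -> np.ndarray:
        """Stand-in for one readout: rows = wavelength, columns = position."""
        rng = np.random.default_rng(line_index)
        return rng.random((N_BANDS, N_PIXELS))

    cube = np.empty((N_LINES, N_PIXELS, N_BANDS))
    for line in range(N_LINES):                  # scan the scene line by line
        frame = read_sensor_frame(line)          # (bands, pixels) for this line
        cube[line] = frame.T                     # store as (pixels, bands)

    spectrum = cube[240, 320]                    # full spectrum of one scene point
    print(cube.shape, spectrum.shape)            # (480, 640, 100) (100,)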

Because of this principle, targets are not limited to the visible region. Although the optical system and sensor must be chosen accordingly, the applicable range is wide, extending from UV and visible to near-infrared, infrared, and far-infrared imaging. Hyperspectral imaging is applied in a wide range of areas, such as quality and safety assessment of agricultural products and food, medical science, biotechnology, life science, and remote sensing, and further advancement is anticipated.


Figure 7.13

Schematic diagram of multiband camera.


Figure 7.14

Operational principle of hyperspectral imaging.

References

1. S.G. Chamberlain, J.P.Y. Lee, A novel wide dynamic range silicon photodetector and linear imaging array, IEEE Transactions on Electron Devices, ED-31(2), 175–182, 1984.

2. H. Komobuchi, A. Fukumoto, T. Yamada, Y. Matsuda, T. Kuroda, 1/4 inch NTSC format hyper-D range IL-CCD, in IEEE Workshop on CCDs and Advanced Image Sensors, April 20–22, Dana Point, CA, 1995, http://www.imagesensors.org/Past%20Workshops/1995%20Workshop/1995%20Papers/02%20Komobuchi%20et%20al.pdf (accessed January 10, 2014).

3. N. Akahane, S. Sugawa, S. Adachi, K. Mori, T. Ishiuchi, K. Mizobuchi, A sensitivity and linearity improvement of a 100 dB dynamic range CMOS image sensor using a lateral overflow integration capacitor, in 2005 Symposium on VLSI Circuits, Digest of Technical Papers, pp. 62–65, June 16–18, Kyoto, Japan, 2005.

4. N. Akahane, S. Sugawa, S. Adachi, K. Mori, T. Ishiuchi, K. Mizobuchi, A sensitivity and linearity improvement of a 100-dB dynamic range CMOS image sensor using a lateral overflow integration capacitor, IEEE Journal of Solid-State Circuits, 41(4), 851–856, 2006.

5. T. Yamada, K. Ikeda, Y. Kim, H. Wakoh, T. Toma, T. Sakamoto, K. Ogawa, et al., A progressive scan CCD image sensor for DSC applications, IEEE Journal of Solid-State Circuits, 35(12), 2044–2054, 2000.

6. A. Watanabe, T. Mori, S. Nagata, K. Hiwatashi, Spatial sine-wave responses of the human visual system, Vision Research, 8, 1245–1263, 1968.

7. H. Miyahara, The picture quality improve technology for consumer video camera, Journal of ITE, 63(6), 731–734, 2009.

8. Sony Corporation. http://www.sony.jp/products/Consumer/handycam/PRODUCTS/special/02cmos.html (accessed January 10, 2014).

9. B. Cremers, M. Agarwal, T. Walschap, R. Singh, T. Geurts, A high speed pipelined snapshot CMOS image sensor with 6.4 Gpixel/s data rate, in Proceedings of 2009 International Image Sensor Workshop, p. 9, June 22–28, Bergen, Norway, 2009, http://www.imagesensors.org/Past%20Workshops/2009%20Workshop/2009%20Papers/030_paper_cremers_cypress_gs.pdf (accessed January 10, 2014).

10. W. Kosonocky, G. Yang, C. Ye, R. Kabra, L. Xie, J. Lawrence, V. Mastrocolla, F. Shallcross, V. Patel, 360 × 360-element very-high frame-rate burst-image sensor, in Proceedings of the IEEE Solid-State Circuits Conference Digest of Technical Papers, 11.3, pp. 182–183, February 8–10, San Francisco, CA, 1996.

11. T. Kuroda, Private communication at Kyoto Research Laboratory, Panasonic Corporation, April 1996.

12. T. Etoh, D. Poggemann, A. Ruckelshausen, A. Theuwissen, G. Kreider, H. Folkerts, H. Mutoh, et al., A CCD image sensor of 1 Mframes/s for continuous image capturing of 103 frames, in Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers, 2.7, pp. 46–47, February 3–7, San Francisco, CA, 2002.

13. T. Etoh, D. Nguyen, S. Dao, C. Vo, M. Tanaka, K. Takehara, T. Okinaka, et al., A 16 Mfps 165 kpixel backside-illuminated CCD, in Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers, 23.4, pp. 406–408, February 20–24, San Francisco, CA, 2011.

14. Y. Tochigi, K. Hanzawa, Y. Kato, R. Kuroda, H. Mutoh, R. Hirose, H. Tominaga, K. Takubo, Y. Kondo, S. Sugawa, A global-shutter CMOS image sensor with readout speed of 1 Tpixel/s burst and 780 Mpixel/s continuous, in Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers, 22.2, pp. 382–384, February 19–23, San Francisco, CA, 2012.

15. P. Lichtsteiner, C. Posch, T. Delbruck, A 128 × 128 120 dB 30 mW asynchronous vision sensor that responds to relative intensity change, in Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers, 27.9, pp. 508–510, February 6–9, San Francisco, CA, 2006.

16. P. Lichtsteiner, C. Posch, T. Delbruck, A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor, IEEE Journal of Solid-State Circuits, 43(2), 566–576, 2008.

17. T. Delbruck, Dynamic vision sensor (DVS): Asynchronous temporal contrast silicon retina, siliconretina, 2013. http://siliconretina.ini.uzh.ch/wiki/index.php (accessed January 10, 2014).

18. Y. Yokoyama, T. Hasegawa, N. Tsumura, H. Haneishi, Y. Miyake, New color management system on human perception and its application to recording and reproduction of art (I)—Design of image acquisition system, Journal of SPIJ, 61(6), 343–355, 1998.

19. M. Yamaguchi, T. Teraji, K. Ohsawa, T. Uchiyama, H. Motomura, Y. Murakami, N. Ohyama, Color image reproduction based on the multispectral and multi-primary imaging: Experimental evaluation, Proceedings of SPIE, 4663, 15–26, 2002.

20. Shin, Satori. http://www.nikko-pb.co.jp/nk_comm/mok08/html/images/1203g61.pdf (accessed January 10, 2014).
