4 Image Restoration

Many processes involved in acquiring and processing an image can degrade it and reduce its quality. Image restoration aims to recover and enhance such degraded images. As with image enhancement, several mathematical models of image degradation are commonly used in image restoration.

The sections of this chapter are arranged as follows:

Section 4.1 presents some reasons and examples of image degradation. The sources and characteristics of several typical noises and their probability density functions are analyzed and discussed.

Section 4.2 discusses a basic and general image degradation model and the principle for solving it. The method of diagonalization of circulant matrices is provided.

Section 4.3 introduces the basic principle of unconstrained restoration and focuses on the inverse filtering technique and its application to eliminate the blur caused by uniform linear motion.

Section 4.4 introduces the basic principle of constrained restoration and two typical methods: Wiener filter and constrained least square restoration.

Section 4.5 explains how to use the method of human–computer interaction to improve the flexibility and efficiency of image restoration.

Section 4.6 presents two groups of techniques for image repairing. One group is image inpainting, for restoring small-sized areas; the other is image completion, for filling relatively large-scale regions.

4.1 Degradation and Noise

In order to discuss image restoration, some degradation factors, especially noise, should be introduced.

4.1.1 Image Degradation

There are several modalities used to capture images, and for each modality, there are many ways to construct images. Therefore, there are many reasons and sources for image degradation. For example, the imperfections of the optical system limit the sharpness of images. The sharpness is also limited by the diffraction of electromagnetic waves at the aperture stop of the lens. In addition to inherent reasons, blurring caused by defocusing is a common misadjustment that limits the sharpness in images. Blurring can also be caused by unexpected motions and vibrations of the camera system, which cause objects to move more than a pixel during the exposure time.

Mechanical instabilities are not the only reason for image degradation. Other degradations result from imperfections in the sensor electronics or from electromagnetic interference. A well-known example is row jittering caused by a maladjusted or faulty phase-locked loop in the synchronization circuits of the frame buffer. This malfunction causes random fluctuations at the start of the rows, resulting in corresponding position errors. Bad transmission along video lines can cause echoes in images by reflections of the video signals. Electromagnetic interference from other sources may cause fixed movement in video images. Finally, with digital transmission of images over noisy wires, individual bits may flip and cause inhomogeneous, erroneous values at random positions.

The degradation due to the process of image acquisition is usually denoted as blurriness, which is generally band-limiting in the frequency domain. Blurriness is a deterministic process and, in most cases, has a sufficiently accurate mathematical model to describe it.

The degradation introduced by the recording process is usually denoted as noise, which is caused by measurement errors, counting errors, etc. In contrast to blurring, noise is a statistical process, so the cause of the noise affecting a particular image is sometimes unknown; at most, one can have a limited understanding of the statistical properties of the process.

Some degradation processes, however, can be modeled. Several examples are illustrated in Figure 4.1.

(1) Figure 4.1(a) illustrates nonlinear degradation, in which the originally smooth and regular patterns become irregular. The development of photographic film can be modeled by this kind of degradation.

(2) Figure 4.1(b) illustrates degradation caused by blurring. For many practically used optical systems, the degradation caused by the aperture diffraction belongs to this category.

(3) Figure 4.1(c) illustrates degradation caused by (fast) object motion. Along the motion direction, the object patterns are smeared out and overlapped. If the object moves by more than one pixel during the exposure, the blur effect becomes visible.

Figure 4.1: Four common examples of degradation.

(4) Figure 4.1(d) illustrates degradation caused by adding noise to the image. This is a kind of random degradation, which will be detailed in the following. The original image has been overlapped by random spots that either darken or brighten the initial scene.

4.1.2 Noise and Representation

What is noise? Noise is generally considered to be a disturbing or annoying signal superimposed on the desired signal. For example, “snow” on a television picture or blurred printing degrades our ability to see and comprehend the content. Noise is an aggravation.

Noise is an annoyance to us. Unfortunately, noise cannot be completely relegated to the arena of pure science or mathematics (Libbey, 1994). Since noise primarily affects humans, their reactions must be included in at least some of the definitions and measurements of noise. The spectrum and characteristics of noise determine how much it interferes with our aural concentration and reception. In TV, black specks in the picture are much less distracting than white specks. Several principles of psychology, including the Weber–Fechner law, help to explain and define the way that people react to different aural and visual disturbances.

Noise is one of the most important sources of image degradation. In image processing, the noise encountered is often produced during the acquisition and/or transmission processes. While in image analysis, the noise can also be produced by image preprocessing. In all cases, the noise causes the degradation of images.

4.1.2.1 Signal-to-Noise Ratio

In many applications, it is not important whether the noise is random or regular; people are usually more concerned with its magnitude. One of the major indications of the quality of an image is its signal-to-noise ratio (SNR). The classical formula for the signal-to-noise ratio, derived from information theory, is

$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{E_s}{E_n}\right) \tag{4.1}$$

where $E_s$ and $E_n$ denote the signal and noise energies (powers), respectively.

This is actually stated in terms of a power ratio. In specific scientific and technical disciplines, there are often variations on this fundamental relationship. For example, the following definition of SNR has been used in the process of generating images (Kitchen and Rosenfeld, 1981):

$$\mathrm{SNR} = \left(\frac{C_{ob}}{\sigma}\right)^{2} \tag{4.2}$$

where $C_{ob}$ is the contrast between the object and the background and $\sigma$ is the standard deviation of the noise. More examples can be found in Chapter 8, where SNR is used as an objective fidelity criterion.
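To make the two definitions concrete, the following NumPy sketch computes eq. (4.1) from signal and noise energies and eq. (4.2) from a contrast value and noise standard deviation; the arrays and parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

def snr_db(signal, noise):
    """Eq. (4.1): SNR in decibels from the signal and noise energies."""
    e_s = np.sum(np.asarray(signal, dtype=float) ** 2)   # signal energy E_s
    e_n = np.sum(np.asarray(noise, dtype=float) ** 2)    # noise energy E_n
    return 10.0 * np.log10(e_s / e_n)

def snr_contrast(c_ob, sigma):
    """Eq. (4.2): SNR from object/background contrast and noise standard deviation."""
    return (c_ob / sigma) ** 2

# Illustrative example: a flat "object" region corrupted by Gaussian noise
rng = np.random.default_rng(0)
clean = np.full((64, 64), 120.0)
noise = rng.normal(0.0, 10.0, clean.shape)

print("SNR (dB), eq. (4.1):", snr_db(clean, noise))
print("SNR, eq. (4.2), C_ob = 40, sigma = 10:", snr_contrast(40.0, 10.0))
```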

4.1.2.2 Probability Density Function of Noise

The spatial noise descriptor describes the statistical behavior of the gray-level values in the noise component of the image degradation model. These values can be treated as random variables, characterized by a probability density function (PDF). The PDFs of some typical noise types are described in the following.

Gaussian noise arises in an image due to factors such as electronic circuit noise and sensor noise which are caused by poor illumination and/or high temperature. Because of its mathematical tractability in both spatial and frequency domains, Gaussian (normal) noise models are used frequently in practice. In fact, this tractability is so convenient that the Gaussian models are often used in situations in which they are marginally applicable at best.

The PDF of a Gaussian random variable, z, is given by (see Figure 4.2)

$$p(z) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left[-\frac{(z-\mu)^{2}}{2\sigma^{2}}\right] \tag{4.3}$$

where z represents the gray-level value, μ is the mean of z, and σ is its standard deviation (σ² is called the variance of z).
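As an illustration of eq. (4.3), here is a minimal sketch that corrupts a gray-level image with additive Gaussian noise of mean μ and standard deviation σ; the flat test image and the parameter values are assumptions made only for demonstration.

```python
import numpy as np

def add_gaussian_noise(image, mu=0.0, sigma=10.0, seed=None):
    """Corrupt a gray-level image with additive noise whose PDF is eq. (4.3)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(mu, sigma, image.shape)        # samples drawn from eq. (4.3)
    noisy = image.astype(float) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)    # keep values in the 8-bit range

# Example: flat mid-gray image, zero-mean noise with sigma = 20
flat = np.full((128, 128), 128, dtype=np.uint8)
noisy = add_gaussian_noise(flat, mu=0.0, sigma=20.0, seed=1)
print(noisy.mean(), noisy.std())   # close to 128 and 20, up to clipping effects
```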

Uniform Noise The uniform density is often used as the basis for numerous random number generators that are used in simulations. The PDF of the uniform noise is given by (see Figure 4.3)

Figure 4.2: The PDF of a Gaussian noise.

Figure 4.3: The PDF of a uniform noise.

$$p(z) = \begin{cases} 1/(b-a) & \text{if } a \le z \le b \\ 0 & \text{otherwise} \end{cases} \tag{4.4}$$

The mean and variance of this density are given by

$$\mu = (a+b)/2 \tag{4.5}$$

$$\sigma^{2} = (b-a)^{2}/12 \tag{4.6}$$
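A short sketch, with assumed values of a and b, that draws samples with the PDF of eq. (4.4) and checks the sample mean and variance against eqs. (4.5) and (4.6):

```python
import numpy as np

a, b = -20.0, 20.0                         # assumed noise range
rng = np.random.default_rng(0)
z = rng.uniform(a, b, size=1_000_000)      # samples with the PDF of eq. (4.4)

print("sample mean:", z.mean(), "  eq. (4.5):", (a + b) / 2)
print("sample var: ", z.var(),  "  eq. (4.6):", (b - a) ** 2 / 12)
```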

Impulse Noise Impulse (salt-and-pepper) noise is found in situations where quick transients, such as faulty switching, take place during image acquisition. Impulse noise also occurs in CMOS cameras with “dead” transistors that produce no output or always output the maximum value, and in interference microscopes at points on a surface with a locally high slope that returns no light (Russ, 2002). The PDF of (bipolar) impulse noise is given by (see Figure 4.4)

$$p(z) = \begin{cases} P_a & \text{for } z = a \\ P_b & \text{for } z = b \\ 0 & \text{otherwise} \end{cases} \tag{4.7}$$

If b > a, the gray-level value b appears as a light dot in the image, and level a appears as a dark dot. If either $P_a$ or $P_b$ is zero, the impulse noise is called unipolar. If neither probability is zero, and especially if they are approximately equal, the impulse noise values resemble salt-and-pepper granules randomly distributed over the image (they appear as white and black dots superimposed on the image). For this reason, bipolar impulse noise is also called salt-and-pepper noise or shot-and-spike noise.

Impulse noise can be either negative or positive. Scaling is usually a part of the image digitizing process. Because impulse corruption is usually large compared with the strength of the image signal, impulse noise is generally digitized as the extreme (pure black or white) values in an image. Thus, it is usually assumed that a and b are “saturated” values, that is, they take the minimum and maximum allowed values in the digitized image. As a result, negative impulses appear as black (pepper) points and positive impulses as white (salt) points in an image. For an 8-bit image, this means that a = 0 (black) and b = 255 (white).

Figure 4.4: The PDF of an impulse noise.
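The following sketch simulates bipolar impulse noise following eq. (4.7), with the saturated values a = 0 (pepper) and b = 255 (salt) for an 8-bit image as described above; the probabilities $P_a$ and $P_b$ used here are illustrative.

```python
import numpy as np

def add_salt_pepper(image, pa=0.02, pb=0.02, seed=None):
    """Bipolar impulse noise, eq. (4.7): pepper (a = 0) with prob. pa, salt (b = 255) with prob. pb."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    u = rng.random(image.shape)
    out[u < pa] = 0                        # negative impulses -> black (pepper)
    out[(u >= pa) & (u < pa + pb)] = 255   # positive impulses -> white (salt)
    return out

gray = np.full((128, 128), 128, dtype=np.uint8)
corrupted = add_salt_pepper(gray, pa=0.05, pb=0.05, seed=2)
print("fraction of pepper:", np.mean(corrupted == 0))
print("fraction of salt:  ", np.mean(corrupted == 255))
```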

4.2 Degradation Model and Restoration Computation

Restoration computation depends on the model of degradation.

4.2.1 Degradation Model

A simple and typical image degradation model is shown in Figure 4.5. In this model, the degradation process is modeled as a system/operator H acting on the input image f(x, y). It operates, together with an additive noise n(x, y), to produce the degraded image g(x, y). According to this model, the image restoration is the process of obtaining an approximation of f(x, y), given g(x, y) and the knowledge of the degradation operator H.

In Figure 4.5, the input and the output have the following relation

$$g(x, y) = H[f(x, y)] + n(x, y) \tag{4.8}$$
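As a minimal sketch of the model in eq. (4.8), the following code assumes H is a linear, position-invariant blur implemented by circular convolution with a small averaging kernel (an illustrative assumption) and adds Gaussian noise as n(x, y):

```python
import numpy as np

def degrade(f, h, sigma_n=8.0, seed=None):
    """g(x, y) = H[f(x, y)] + n(x, y), with H taken to be circular convolution with kernel h."""
    rng = np.random.default_rng(seed)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))  # H[f]
    return blurred + rng.normal(0.0, sigma_n, f.shape)                           # + n(x, y)

# Assumed example: a bright square blurred by a 5 x 5 averaging kernel plus noise
f = np.zeros((64, 64))
f[24:40, 24:40] = 200.0
h = np.ones((5, 5)) / 25.0
g = degrade(f, h, sigma_n=8.0, seed=0)
```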

The degradation system may have the following four properties (suppose n(x, y) = 0):

4.2.1.1 Linear

Assuming that k1 and k2 are constants and f1(x, y) and f2(x, y) are two input images,

$$H[k_1 f_1(x, y) + k_2 f_2(x, y)] = k_1 H[f_1(x, y)] + k_2 H[f_2(x, y)] \tag{4.9}$$

4.2.1.2 Additivity

If k1= k2= 1, eq. (4.9) becomes

$$H[f_1(x, y) + f_2(x, y)] = H[f_1(x, y)] + H[f_2(x, y)] \tag{4.10}$$

Equation (4.10) says that the response of a sum of two inputs equals the sum of the two responses in a linear system.

Figure 4.5: A simple image degradation model.

4.2.1.3 Homogeneity

If f2(x, y) = 0, eq. (4.9) becomes

$$H[k_1 f_1(x, y)] = k_1 H[f_1(x, y)] \tag{4.11}$$

Equation (4.11) says that the response of the product of a constant and an input equals the product of the response of the input and the constant.

4.2.1.4 Position Invariant

If for any f(x, y) and any a and b, there is

$$H[f(x-a, y-b)] = g(x-a, y-b) \tag{4.12}$$

Equation (4.12) says that the response of any point in the image depends only on the value of the input at that point, not the position of the point.
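These properties can be checked numerically for a candidate operator H. The sketch below does so for circular convolution with an assumed averaging kernel, which satisfies eq. (4.9) and eq. (4.12) up to floating-point error:

```python
import numpy as np

def H(f, h):
    """Candidate degradation operator: 2-D circular convolution with kernel h."""
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))

rng = np.random.default_rng(0)
f1, f2 = rng.random((32, 32)), rng.random((32, 32))
h = np.ones((3, 3)) / 9.0                  # assumed blur kernel
k1, k2 = 2.0, -0.5

# Linearity, eq. (4.9)
print("linear:", np.allclose(H(k1 * f1 + k2 * f2, h),
                             k1 * H(f1, h) + k2 * H(f2, h)))

# Position invariance, eq. (4.12): shifting the input just shifts the output
a, b = 5, 7
print("position invariant:", np.allclose(H(np.roll(f1, (a, b), axis=(0, 1)), h),
                                          np.roll(H(f1, h), (a, b), axis=(0, 1))))
```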

4.2.2 Computation of the Degradation Model

Consider first the 1-D case. Suppose that two functions f(x) and h(x) are sampled uniformly to form two arrays of dimensions A and B, respectively. For f(x), the range of x is 0, 1, 2, ..., A − 1; for h(x), the range of x is 0, 1, 2, ..., B − 1. g(x) can be computed by convolution. To avoid overlap between the individual periods, the period M can be chosen as M ≥ A + B − 1. Denoting f_e(x) and h_e(x) as the extended functions (extended with zeros), their convolution is

$$g_e(x) = \sum_{m=0}^{M-1} f_e(m)\,h_e(x-m) \qquad x = 0, 1, \ldots, M-1 \tag{4.13}$$

Since the periods of both f_e(x) and h_e(x) are M, g_e(x) also has this period. Equation (4.13) can therefore be written in matrix form as

$$\mathbf{g} = \mathbf{H}\mathbf{f}: \qquad \begin{bmatrix} g_e(0) \\ g_e(1) \\ \vdots \\ g_e(M-1) \end{bmatrix} = \begin{bmatrix} h_e(0) & h_e(-1) & \cdots & h_e(-M+1) \\ h_e(1) & h_e(0) & \cdots & h_e(-M+2) \\ \vdots & \vdots & & \vdots \\ h_e(M-1) & h_e(M-2) & \cdots & h_e(0) \end{bmatrix} \begin{bmatrix} f_e(0) \\ f_e(1) \\ \vdots \\ f_e(M-1) \end{bmatrix} \tag{4.14}$$

From the periodicity, it is known that he(x) = he(x+ M). So, H in eq. (4.14) can be written as

$$\mathbf{H} = \begin{bmatrix} h_e(0) & h_e(M-1) & \cdots & h_e(1) \\ h_e(1) & h_e(0) & \cdots & h_e(2) \\ \vdots & \vdots & & \vdots \\ h_e(M-1) & h_e(M-2) & \cdots & h_e(0) \end{bmatrix} \tag{4.15}$$

H is a circulant matrix, in which the last element in each row is identical to the first element in the next row and the last element in the last row is identical to the first element in the first row.
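For small M, the circulant structure of eqs. (4.14) and (4.15) can be built and verified directly. The sketch below constructs H from an assumed zero-extended kernel h_e and compares Hf with the circular convolution of eq. (4.13):

```python
import numpy as np

def circulant(h_e):
    """Build the M x M circulant matrix H of eq. (4.15): entry (x, m) = h_e((x - m) mod M)."""
    M = len(h_e)
    return np.array([[h_e[(x - m) % M] for m in range(M)] for x in range(M)])

# Assumed example: A = 5, B = 3, so M >= A + B - 1 = 7
A, B, M = 5, 3, 7
f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([0.5, 0.3, 0.2])
f_e = np.concatenate([f, np.zeros(M - A)])   # zero-extended functions
h_e = np.concatenate([h, np.zeros(M - B)])

H = circulant(h_e)
g_matrix = H @ f_e                                                 # eq. (4.14)
g_conv = np.real(np.fft.ifft(np.fft.fft(f_e) * np.fft.fft(h_e)))   # circular convolution, eq. (4.13)
print(np.allclose(g_matrix, g_conv))                               # True
```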

Extending the above results to the 2-D case is direct. The extended functions f_e(x, y) and h_e(x, y) are

$$f_e(x, y) = \begin{cases} f(x, y) & 0 \le x \le A-1 \ \text{and} \ 0 \le y \le B-1 \\ 0 & A \le x \le M-1 \ \text{or} \ B \le y \le N-1 \end{cases} \tag{4.16}$$

$$h_e(x, y) = \begin{cases} h(x, y) & 0 \le x \le C-1 \ \text{and} \ 0 \le y \le D-1 \\ 0 & C \le x \le M-1 \ \text{or} \ D \le y \le N-1 \end{cases} \tag{4.17}$$

The 2-D form corresponding to eq. (4.13) is

$$g_e(x, y) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f_e(m, n)\,h_e(x-m, y-n) \qquad x = 0, 1, \ldots, M-1;\ \ y = 0, 1, \ldots, N-1 \tag{4.18}$$

When considering the noise, eq. (4.18) becomes

$$g_e(x, y) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f_e(m, n)\,h_e(x-m, y-n) + n_e(x, y) \qquad x = 0, 1, \ldots, M-1;\ \ y = 0, 1, \ldots, N-1 \tag{4.19}$$

Equation (4.19) can be expressed in matrices as

$$\mathbf{g} = \mathbf{H}\mathbf{f} + \mathbf{n} = \begin{bmatrix} \mathbf{H}_0 & \mathbf{H}_{M-1} & \cdots & \mathbf{H}_1 \\ \mathbf{H}_1 & \mathbf{H}_0 & \cdots & \mathbf{H}_2 \\ \vdots & \vdots & & \vdots \\ \mathbf{H}_{M-1} & \mathbf{H}_{M-2} & \cdots & \mathbf{H}_0 \end{bmatrix} \begin{bmatrix} f_e(0) \\ f_e(1) \\ \vdots \\ f_e(MN-1) \end{bmatrix} + \begin{bmatrix} n_e(0) \\ n_e(1) \\ \vdots \\ n_e(MN-1) \end{bmatrix} \tag{4.20}$$

where each Hi is constructed from the i-th row of the extension function he(x, y),

$$\mathbf{H}_i = \begin{bmatrix} h_e(i, 0) & h_e(i, N-1) & \cdots & h_e(i, 1) \\ h_e(i, 1) & h_e(i, 0) & \cdots & h_e(i, 2) \\ \vdots & \vdots & & \vdots \\ h_e(i, N-1) & h_e(i, N-2) & \cdots & h_e(i, 0) \end{bmatrix} \tag{4.21}$$

where Hi is a circulant matrix. Since the blocks of H are subscripted in a circular manner, H is called a block-circulant matrix.
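For tiny M and N, the block-circulant matrix of eqs. (4.20) and (4.21) can be assembled explicitly and checked against the 2-D circular convolution of eq. (4.18); the sketch below does this in the noise-free case (n = 0), with assumed example arrays. Note that H has MN × MN entries, so this is only practical for very small images.

```python
import numpy as np

def block_circulant(h_e):
    """Assemble the MN x MN block-circulant H of eq. (4.20) from h_e(x, y)."""
    M, N = h_e.shape
    H = np.zeros((M * N, M * N))
    for x in range(M):            # block row index
        for m in range(M):        # block column index: this block is H_{(x - m) mod M}
            i = (x - m) % M
            # Each block is the N x N circulant of eq. (4.21), built from row i of h_e.
            block = np.array([[h_e[i, (y - n) % N] for n in range(N)] for y in range(N)])
            H[x * N:(x + 1) * N, m * N:(m + 1) * N] = block
    return H

M, N = 4, 4                        # assumed tiny extended image size
rng = np.random.default_rng(0)
f_e = rng.random((M, N))
h_e = np.zeros((M, N))
h_e[:2, :2] = 0.25                 # small extended blur kernel

H = block_circulant(h_e)
g_matrix = (H @ f_e.ravel()).reshape(M, N)                            # eq. (4.20) with n = 0
g_conv = np.real(np.fft.ifft2(np.fft.fft2(f_e) * np.fft.fft2(h_e)))   # eq. (4.18)
print(np.allclose(g_matrix, g_conv))                                  # True
```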

4.2.3 Diagonalization

Diagonalization is an effective way to solve eq. (4.14) and eq. (4.20).

4.2.3.1 Diagonalization of Circulant Matrices

For k = 0, 1, . . ., M– 1, the eigenvectors and eigenvalues of the circulant matrix H are

$$\mathbf{w}(k) = \begin{bmatrix} 1 \\ \exp\!\left(j\dfrac{2\pi}{M}k\right) \\ \vdots \\ \exp\!\left(j\dfrac{2\pi}{M}(M-1)k\right) \end{bmatrix} \tag{4.22}$$

$$\lambda(k) = h_e(0) + h_e(M-1)\exp\!\left(j\frac{2\pi}{M}k\right) + \cdots + h_e(1)\exp\!\left(j\frac{2\pi}{M}(M-1)k\right) \tag{4.23}$$

Taking the M eigenvectors of H as columns, an M × M matrix W can be formed

$$\mathbf{W} = [\,\mathbf{w}(0) \ \ \mathbf{w}(1) \ \cdots \ \mathbf{w}(M-1)\,] \tag{4.24}$$

The orthogonality properties of w assure the existence of the inverse matrix of W. The existence of W^{-1}, in turn, assures the linear independence of the columns of W (the eigenvectors of H). Therefore, H can be written as

$$\mathbf{H} = \mathbf{W}\mathbf{D}\mathbf{W}^{-1} \tag{4.25}$$

Here, D is a diagonal matrix whose elements D(k, k) = λ(k).
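Equations (4.22)–(4.25) can be verified numerically for a small circulant matrix: the columns of W are eigenvectors of H, and the λ(k) of eq. (4.23) coincide with the DFT values of h_e. A sketch under these assumptions:

```python
import numpy as np

M = 8
rng = np.random.default_rng(0)
h_e = rng.random(M)

# Circulant H as in eq. (4.15): entry (x, m) = h_e((x - m) mod M)
H = np.array([[h_e[(x - m) % M] for m in range(M)] for x in range(M)])

# W of eq. (4.24): column k is the eigenvector w(k) of eq. (4.22)
k = np.arange(M)
W = np.exp(1j * 2 * np.pi * np.outer(k, k) / M)

# lambda(k) of eq. (4.23); it equals the DFT of h_e
lam = np.array([sum(h_e[(-m) % M] * np.exp(1j * 2 * np.pi * m * kk / M)
                    for m in range(M)) for kk in k])
print(np.allclose(lam, np.fft.fft(h_e)))           # True: eq. (4.23) is the DFT of h_e

# H = W D W^{-1}, eq. (4.25)
D = np.diag(lam)
print(np.allclose(H, W @ D @ np.linalg.inv(W)))    # True
```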

4.2.3.2 Diagonalization of Block-Circulant Matrices

Define a matrix W of size MN × MN, containing M × M blocks, each of size N × N. The (i, m)-th partition (block) of W is

$$\mathbf{W}(i, m) = \exp\!\left(j\frac{2\pi}{M}im\right)\mathbf{W}_N \qquad i, m = 0, 1, \ldots, M-1 \tag{4.26}$$

where WN is an N × N matrix, whose elements are

$$\mathbf{W}_N(k, n) = \exp\!\left(j\frac{2\pi}{N}kn\right) \qquad k, n = 0, 1, \ldots, N-1 \tag{4.27}$$

Using the result for circulant matrices yields (note H is a block-circulant matrix)

$$\mathbf{H} = \mathbf{W}\mathbf{D}\mathbf{W}^{-1} \tag{4.28}$$

Furthermore, the transpose of H, denoted H^T, can be represented with the help of the complex conjugate of D, denoted D*:

$$\mathbf{H}^{T} = \mathbf{W}\mathbf{D}^{*}\mathbf{W}^{-1} \tag{4.29}$$

4.2.3.3 Effect of Diagonalization

In the 1-D case (without noise), substituting eq. (4.25) into eq. (4.14) and left-multiplying both sides by W^{-1} yields

$$\mathbf{W}^{-1}\mathbf{g} = \mathbf{D}\mathbf{W}^{-1}\mathbf{f} \tag{4.30}$$

The products W^{-1}f and W^{-1}g are both M-dimensional column vectors. The k-th element of W^{-1}f is denoted F(k) and is given by

$$F(k) = \frac{1}{M}\sum_{i=0}^{M-1} f_e(i)\exp\!\left(-j\frac{2\pi}{M}ki\right) \qquad k = 0, 1, \ldots, M-1 \tag{4.31}$$

Similarly, the k-th element of W^{-1}g is denoted G(k) and is given by

$$G(k) = \frac{1}{M}\sum_{i=0}^{M-1} g_e(i)\exp\!\left(-j\frac{2\pi}{M}ki\right) \qquad k = 0, 1, \ldots, M-1 \tag{4.32}$$
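Since F(k) and G(k) are (up to the factor 1/M) the DFTs of f_e and g_e, and D(k, k) = λ(k), eq. (4.30) reduces to the element-wise relation G(k) = λ(k)F(k). The following sketch, with assumed example arrays, confirms this numerically in the noise-free 1-D case:

```python
import numpy as np

M = 8
rng = np.random.default_rng(1)
f_e = rng.random(M)
h_e = rng.random(M)

# Circulant H (eq. (4.15)) and the noise-free observation g = Hf (eq. (4.14))
H = np.array([[h_e[(x - m) % M] for m in range(M)] for x in range(M)])
g_e = H @ f_e

F = np.fft.fft(f_e) / M          # F(k) of eq. (4.31)
G = np.fft.fft(g_e) / M          # G(k) of eq. (4.32)
lam = np.fft.fft(h_e)            # lambda(k) of eq. (4.23), i.e., the diagonal of D

# Element-wise form of eq. (4.30): W^{-1} g = D W^{-1} f
print(np.allclose(G, lam * F))   # True
```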
