CHAPTER 7


DIGITAL FILTERS AND DISCRETE FOURIER TRANSFORM

7.1 INTRODUCTION

If we define a filter as a device that discriminates among incoming frequencies, then filters can be implemented, for example, by mechanical, acoustical, pneumatic, or electrical elements. In the past few decades, discrete-time algorithms employing computational elements have played an increasing role in filter applications. Most commonly, these elements also incorporate discrete numerical values (as opposed to continuous circuit variables1) to represent signals. For simplicity we refer to these devices as digital filters. In this chapter, we deal with some of the issues that arise in the design of digital filters. Many of the ideas of digital signal processing pertain directly to digital filter design. However, in addition, many of the filtering concepts developed for the design of analog electrical filters [11] apply equally well to digital filter design. Section 7.2 is a review of these concepts. In Section 7.3 we show how simple mathematical transformations bridge the gap between the analog and digital world. The remainder of this chapter focuses on methods that have been primarily applied to discrete-time linear systems. Of particular interest is the application of the discrete Fourier transform (DFT) to filter theory and applications. Both filtering and the DFT are widely used for speech-processing applications.

The traditional analog filter design consists of two major portions: the approximation problem and the synthesis problem. Approximation refers to the development of an analytic expression that satisfies the constraints of the design problem (e.g., a ratio of polynomials such that the frequency response is low pass with a 3-dB point at 4 kHz). Synthesis refers to the implementation of the filter, for instance as a cascade of second-order sections. In most books on analog filter design methods, the synthesis problem is the primary focus and occupies the main part of the book. This is because analog circuits (consisting of resistors, inductors, capacitors, and operational amplifiers) have components whose numerical values are often too difficult or costly to specify with great accuracy. Thus, the trick in analog design is to create structures that are the least sensitive to component errors.

Digital filter design also requires both approximation and synthesis, but the situation is somewhat reversed: the approximation problem, how to find filter parameters to satisfy a given design criterion, predominates. Good design technique in the analog world is very important because the designer knows that the components are never perfect, so the search for a synthesis method that is least vulnerable to such imperfections is an integral part of the design mystique. In the early days of digital filter design, this accuracy problem could also be critical because of the possible bad effects of finite word lengths (in some cases only 12 bits or even fewer were available). Nowadays, the availability of 32-bit word lengths and floating-point capabilities at greatly reduced cost means that the designer of digital filters can choose highly accurate components (although low-cost, high-volume components still often use shorter word widths).

A filter function has relevance in both the time and frequency domains; complete specification in the frequency domain completely specifies behavior in the time domain, and vice versa. The inverse Fourier transform of the frequency response is precisely the time response of the filter to an impulse (a delta function) applied at time zero.

Synthesis of digital filters can be divided into FIR (finite-impulse response) and IIR (infinite-impulse response) methods. A nonrecursive filter has an output that is a function of the input samples and is not a function of previous output samples. Such filters have only zeros in the complex z plane and are always FIR; that is, the response to a unit sample (a sequence with a value of one at time zero and zero for all other times) is zero after a finite number of samples. A recursive filter has an output that is a function of the input samples and previous output samples. Such a filter has, in general, both poles and zeros in the complex z plane, and in general has an impulse response (unit sample response) that does not diminish to zero after any finite number of samples; an example is a response such as h(n) = 0.5^n.
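The distinction is easy to see in code. The following Python sketch (with arbitrary illustrative coefficients, not a designed filter) applies a unit sample to a nonrecursive filter and to a one-pole recursive filter; the FIR output dies out after the last coefficient, whereas the recursive output 0.5^n merely decays.

import numpy as np

def fir_filter(x, b):
    # Nonrecursive (FIR): y(n) depends only on present and past inputs.
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(b)):
            if n - k >= 0:
                y[n] += b[k] * x[n - k]
    return y

def one_pole(x, a=0.5):
    # Recursive (IIR): y(n) = x(n) + a*y(n-1); unit-sample response is a**n.
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (a * y[n - 1] if n > 0 else 0.0)
    return y

unit = np.zeros(8)
unit[0] = 1.0
print(fir_filter(unit, [1.0, 0.5, 0.25]))  # zero after three samples (FIR)
print(one_pole(unit))                      # 0.5**n, never exactly zero (IIR)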

Our discussion will necessarily be very brief; our aim is to point the reader to the large body of knowledge that has been developed over most of the past century. Thus, some specifics are omitted, and appropriate references are given instead.

7.2 FILTERING CONCEPTS

The most common application of a filter is to permit a given band of frequencies to pass through relatively undisturbed while all other frequencies are severely suppressed. However, many other applications exist. In this section we describe some of the desired characteristics of several well-known filters.

Most commonly, filters are designed to be low pass (passing frequencies below some cutoff point), high pass, bandpass, or band reject. To determine the functional relation between the filter and frequency, we introduce the filter transfer function, H(ω). In classical analog filter design, H is a complex function of the complex variable s. (The radian frequency ω = 2πf is a real variable; it turns out to be very useful to define the complex variable s with the stipulation s = α + jω, so that the value of the radian frequency is simply the distance along the vertical axis in the complex s plane.) The squared magnitude of H must be real, so it is convenient to establish our design criterion in terms of this squared-magnitude function. The classic solutions to this problem consist of a number of design families, most notably the Butterworth, Chebyshev, elliptic, and Bessel filters. All of these are IIR designs.

We first study the simple case of a low-pass Butterworth filter that satisfies the equation

|H(ω)|^2 = 1 / [1 + (ω/ωc)^{2n}]    (7.1)

FIGURE 7.1 Butterworth frequency response for different n. From [2].


Several facts can be deduced from Eq. 7.1. First, the squared-magnitude function is unity when ω = 0 and approaches zero as ω approaches infinity. For all values of the parameter n, the function is always one-half at ω = ωc, but for larger values of n, the transition is sharper, as seen in Fig. 7.1.

Figure 7.1 plots the magnitude, not the squared magnitude, of Eq. 7.1. Thus, at the cutoff frequency ωc, the function |H| is 0.707 or, in logarithmic terms, −3 dB (decibels; see Section 13.2.4).
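A few lines of Python make the cutoff behavior concrete; the magnitude below is Eq. 7.1 evaluated with an assumed normalized cutoff ωc = 1.

import numpy as np

def butterworth_mag(w, n, wc=1.0):
    # Magnitude response of Eq. 7.1 (square root of the squared magnitude).
    return 1.0 / np.sqrt(1.0 + (w / wc)**(2 * n))

for n in (1, 2, 4, 8):
    # At the cutoff the magnitude is 0.707 (-3 dB) for every n;
    # beyond it, larger n gives a sharper fall-off.
    print(n, butterworth_mag(1.0, n), butterworth_mag(2.0, n))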

It may be desirable, in many cases, to have a sharper transition region. For a given filter complexity, the Chebyshev filter2 can fulfill this role as an alternative to the Butterworth filter.

The Chebyshev filter squared-magnitude function is specified by

|H(ω)|^2 = 1 / [1 + ε^2 V_n^2(ω/ωc)]    (7.2)

where ε is a parameter that sets the passband ripple and V_n(x) is a Chebyshev polynomial of order n that can be generated by the recursion formula

V_{n+1}(x) = 2x V_n(x) − V_{n−1}(x)    (7.3)

with

V_0(x) = 1,   V_1(x) = x    (7.4)

The Chebyshev polynomial has the property of equal ripple over a given range of x, as seen in Fig. 7.2 for the case n = 4.
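The recursion is easy to run; this Python sketch builds V_4 and samples it on [−1, 1], where the equal-ripple behavior of Fig. 7.2 shows up as values bounded by ±1.

import numpy as np

def cheb_poly(n):
    # Build V_n via the recursion V_{n+1}(x) = 2x V_n(x) - V_{n-1}(x),
    # starting from V_0 = 1 and V_1 = x.
    V, Vprev = np.poly1d([1.0, 0.0]), np.poly1d([1.0])
    if n == 0:
        return Vprev
    for _ in range(n - 1):
        V, Vprev = np.poly1d([2.0, 0.0]) * V - Vprev, V
    return V

V4 = cheb_poly(4)              # 8x^4 - 8x^2 + 1
x = np.linspace(-1.0, 1.0, 9)
print(V4(x))                   # all sampled values lie between -1 and +1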


FIGURE 7.2 Chebyshev frequency response for n = 4. From [2].

The parameter n, for either the Butterworth or Chebyshev filter, determines the complexity of the function. For a given value of n, therefore, the relative merits of the two filter types can be compared. It can be shown that, for a given complexity, the Chebyshev function has a sharper transition from passband to stop band. Comparisons are a bit tricky, since the Butterworth function is monotonic whereas the Chebyshev function oscillates.

Filters based on Jacobian elliptic functions have equal ripple in both the passband and the stop band, with an even sharper transition. Such a response is shown in Fig. 7.3.

Thus, for a given complexity, the elliptic filter frequency response most closely resembles a rectangle (of the three filters described), whereas the Butterworth is the “worst” of the three in this respect. However, the magnitude response of a given filter is not the only criterion for comparison. In many applications, the phase response may be equally or even more important. For now it is sufficient to say that the phase response of elliptic and Chebyshev filters, and the associated transient responses, make these filters unsuitable, for example, for the design of bandpass filter banks for most audio applications. Of the three, the Butterworth filter is most often the designer's choice.


FIGURE 7.3 Jacobian elliptic frequency response. From [2].


FIGURE 7.4 Comparison of the group delay for four filter types: A= Butterworth, B = Chebyshev, C = elliptic, D = Bessel. From [2].

If phase response is overwhelmingly important, a design that focuses on the phase response might outdo all three of the above designs. Indeed, such filters have been designed and built. One design that results in a good transient response is the Bessel filter, so called because it is proportional to the inverse of the Bessel polynomial, defined by the recursion [12]

B_n(ω) = (2n − 1) B_{n−1}(ω) + ω^2 B_{n−2}(ω)    (7.5)

with B_0(ω) = 1 and B_1(ω) = ω + 1.

A measure of the goodness of the transient response is the graph of the group delay of the filter function versus frequency, where group delay is defined as the negative derivative of the phase with respect to the radian frequency. Figure 7.4 shows the group-delay graphs for the four filters we have discussed.

We see from this figure that, except for the Bessel design, the group delay peaks strongly near the transition regions. This means that frequencies in that region are delayed markedly more than lower or higher frequencies. In one experiment [9], channel vocoders were simulated with each of the four types. It was found that the Chebyshev and elliptic designs caused an unacceptably high reverberation of the vocoded speech signal.

Lerner [6] showed that both a high degree of phase linearity and reasonably selective passbands could be attained. The specific function is given by Eq. 7.6:

L(s) = Σ_{i=1}^{m} D_i (s + a) / [(s + a)^2 + d_i^2]    (7.6)

where D_1 = 1/2, D_m = (−1)^{m+1}/2, D_i = (−1)^{i+1} for the remaining terms, d_i is the resonant frequency of the ith pole, and −a is the common real part of the poles.

The low-pass designs of the previous section can be transformed to produce high pass, bandpass, and band stop filters. For example, we might want to transform a low-pass design that passed frequencies from 0 to 1 Hz to a bandpass filter with Ωl as the lower cutoff frequency and Ωu as the upper cutoff. To accomplish this we need to map the variable s of the original function into the following function of s, Ωl, and Ωu.3

s → (s^2 + Ωl Ωu) / [s(Ωu − Ωl)]    (7.7)

Analog continuous-time filter design essentially always resulted in IIR filters, given the components that were available. Discrete-time implementation has expanded the design possibilities for FIR filters. FIR filters can be designed with a perfectly linear phase by constraining the coefficients to be symmetric, since the coefficients are in one-to-one correspondence with the impulse response.
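A small Python check of the linear-phase property (using an arbitrary symmetric coefficient set, not a designed filter): the group delay comes out flat at (len(h) − 1)/2 samples.

import numpy as np

h = np.array([1.0, 3.0, 5.0, 3.0, 1.0])     # symmetric impulse response
N = 1024
H = np.fft.fft(h, N)                        # zero-padded DFT of h
phase = np.unwrap(np.angle(H[:N // 2]))     # phase over 0..pi
w = 2.0 * np.pi * np.arange(N // 2) / N
group_delay = -np.diff(phase) / np.diff(w)  # -d(phase)/d(omega), in samples
print(group_delay[:4])                      # constant 2.0 = (len(h) - 1)/2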

7.3 TRANSFORMATIONS FOR DIGITAL FILTER DESIGN

The preceding section introduced a number of useful filter structures. How can these functions be synthesized as discrete-time filters? An approximation criterion is needed, since a discrete-time implementation can never be identical to the continuous-time filter functions. The work of Hurewicz [5] provides a technique for doing this in an impulse invariant way. By this we mean that the (discrete-time) response to a unit pulse of the derived digital filter will be samples of the impulse response of the associated analog filter. Let's begin with a simple single-pole analog function. The impulse response of this simple system is the inverse Laplace transform of this function.

K(s) = 1/(s + a),   k(t) = e^{−at},  t ≥ 0    (7.8)

If we sample k(t) at equally spaced intervals nT and take the z transform of the resulting sequence, we obtain

H(z) = Σ_{n=0}^{∞} e^{−anT} z^{−n} = 1 / (1 − e^{−aT} z^{−1})    (7.9)

Thus, H(z) is the impulse invariant z transform corresponding to the simple single-pole system of Eq. 7.8. For more complex systems in which the s-plane poles are single (not multiple) poles and in which the impulse response is a real function, this leads to the correspondence

L(s) = Σ_i A_i / (s − s_i)   ↔   H(z) = Σ_i A_i / (1 − e^{s_i T} z^{−1})    (7.10)

where L(s) is the function of Eq. 7.6. Thus, we have specified a digital Lerner filter in terms of the components of an analog Lerner filter.
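As a numeric sanity check of Eqs. 7.8 and 7.9, the following Python sketch (with assumed values a = 2 and T = 0.1) runs the recursion implied by H(z) and compares its unit-sample response against samples of e^{−at}.

import numpy as np

a, T = 2.0, 0.1
n = np.arange(8)
k_samples = np.exp(-a * n * T)      # samples of k(t) = exp(-a t) at t = nT

# H(z) = 1/(1 - exp(-aT) z^-1) is the recursion y(n) = x(n) + exp(-aT) y(n-1).
y = np.zeros(len(n))
for i in range(len(n)):
    x_i = 1.0 if i == 0 else 0.0    # unit sample input
    y[i] = x_i + (np.exp(-a * T) * y[i - 1] if i > 0 else 0.0)

print(np.allclose(y, k_samples))    # True: the digital filter is impulse invariant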

7.4 DIGITAL FILTER DESIGN WITH BILINEAR TRANSFORMATION

The frequency response of an impulse invariant digital filter is an aliased version of the continuous frequency response from which it was derived. We need to examine a given design carefully to determine if this aliasing is detrimental.

Aliasing is best understood by studying Fig. 7.5. In (a) the analog response curve is almost totally contained within the Nyquist frequency range. Thus, the discrete-time version, (b), which must be periodic, is nearly identical to the analog response in that region. However, in (c) the analog response extends beyond the Nyquist range, and the resultant discrete-time response, (d), is significantly different from the desired analog response.

A wideband filter is thus more likely to cause problems when digitized by impulse invariance. An alternative approach depends on the bilinear transformation, which maps the s plane into the z plane, with the jω axis mapping into the unit circle. This mapping is

s = (1 − z^{−1}) / (1 + z^{−1})    (7.11)

which maps z = 1 to s = 0, z = e^{jπ/2} to s = j, and z = e^{jπ} = −1 to s = ∞. The analog frequency ωA maps into the digital frequency ωD according to

ωA = tan(ωD T / 2)    (7.12)
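This Python sketch (with the sampling interval T arbitrarily set to 1) tabulates the warping of Eq. 7.12: low frequencies map nearly linearly, while the entire analog axis is compressed into the Nyquist interval, so no aliasing can occur.

import numpy as np

T = 1.0                                   # assumed sampling interval
wD = np.linspace(0.0, 0.95 * np.pi, 6)    # digital frequencies (rad/sample)
wA = np.tan(wD * T / 2.0)                 # Eq. 7.12
for d, a in zip(wD, wA):
    # For small wD, wA is approximately wD*T/2 (tan x ~ x); as wD*T
    # approaches pi, wA grows without bound.
    print(f"wD = {d:5.3f}  ->  wA = {a:8.3f}")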


FIGURE 7.5 Effect of aliasing. Note that the total response is the sum of the curves shown; the grey hatching indicates the alias distortion.

Other techniques exist for designing discrete-time filters with arbitrary properties. One powerful approach employs linear programming and the Remez exchange algorithm [7]; it is an iterative method that leads to an optimum design in the sense that both passband and stop band are equiripple and the transition from passband to stop band is most rapid. We have seen that elliptic filters also have this optimum property, but the Remez method produces a pure FIR design, so that linear phase can coexist with a good magnitude response.

7.5 THE DISCRETE FOURIER TRANSFORM

Through the early 1960s both the Fourier transform and its mathematical cousin, the Laplace transform, were widely used as theoretical aids in signal processing, but they were not employed much in actual computation. The hardware needed to implement these functions was quite expensive and quite “klugey,” but soon after it was shown how to do filtering by computer, Cooley and Tukey [1] introduced the fast Fourier transform (FFT) and sparked an explosion of theoretical and practical results in many fields. Such activity, in parallel with great advances in computers, meant that signal-processing algorithms could be implemented orders of magnitude faster with hardware orders of magnitude smaller than what was used a few decades ago. Heideman et al. [4] showed that the concept of the FFT was understood by Gauss as well as by Good [3], but until the Cooley-Tukey paper and its digestion by signal-processing folk, no great new works resulted.

In this section we introduce the DFT and work exclusively with this digital version, trusting the interested reader to seek the many fine books and papers on the more general subject of transforms. In this section we outline some of the theoretical issues and in Section 7.6 we focus on the FFT.

The DFT of a finite duration sequence x(n), 0 ≤ n ≤ N – 1, is defined as

X(k) = Σ_{n=0}^{N−1} x(n) W^{nk},   k = 0, 1, ..., N − 1    (7.13)

with W = e^{−j(2π/N)}. Comparing this definition with the definition (Eq. 6.2) of the z transform, we can think of the DFT as a finite version of the z transform, evaluated at N equally spaced points on the unit circle. The inverse DFT can also be computed; that is, given the sequence X(k), 0 ≤ k ≤ N − 1, we obtain

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) W^{−nk},   n = 0, 1, ..., N − 1    (7.14)
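Eqs. 7.13 and 7.14 translate directly into a brute-force Python implementation; the check against numpy's FFT assumes the W = e^{−j(2π/N)} convention used above.

import numpy as np

def dft(x):
    # Brute-force Eq. 7.13: X(k) = sum_n x(n) W**(nk), W = exp(-j 2 pi / N).
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi / N)
    return np.array([np.sum(x * W**(n * k)) for k in range(N)])

def idft(X):
    # Brute-force Eq. 7.14: x(n) = (1/N) sum_k X(k) W**(-nk).
    N = len(X)
    k = np.arange(N)
    W = np.exp(-2j * np.pi / N)
    return np.array([np.sum(X * W**(-k * n)) for n in range(N)]) / N

x = np.random.randn(16)
print(np.allclose(dft(x), np.fft.fft(x)))   # True: same convention as numpy
print(np.allclose(idft(dft(x)), x))         # True: the inverse recovers x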

The DFT computation yields the spectrum of a finite sequence; hence its great importance in signal-processing applications. Spectrum analysis can, of course, be implemented by means of a bank of analog or digital filters. There are many variations on the specific nature of spectrum analysis; these can conveniently be related to filter-bank parameters, such as the number of filters, the frequency range of the filter bank, and the amplitude and phase responses of the individual filters. In many cases, the use of a DFT can emulate (or approximately emulate) the properties of a given filter bank and, through the FFT, perform the computation more efficiently. In Section 7.6 we illustrate how the FFT gets its speed, but it is useful to show here the savings that can be obtained. We state without proof (for the present) the computational properties of an FFT.

TABLE 7.1 Relations between a Sequence and Its DFT

If N is a power of 2, then the DFT can be computed in N log2 N operations. This contrasts with the brute-force DFT computation, which takes N^2 operations. Thus, for example, if N = 1024 (so that log2 N = 10), the savings is a factor of about 100. The computational cost of a brute-force DFT is in the same ballpark as that of a filter-bank spectral analysis.

The DFT can be taken, in general, of a sequence of complex numbers. Table 7.1 relates the initial sequence x(n) with its DFT X(k).
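The kinds of relations such a table summarizes are easy to verify numerically; this Python sketch checks two standard ones (for the conventions of Eq. 7.13): a real sequence has a conjugate-symmetric DFT, and a real, even sequence has a purely real DFT.

import numpy as np

N = 8
x = np.random.randn(N)                            # a real sequence
X = np.fft.fft(x)
k = np.arange(N)
print(np.allclose(X, np.conj(X[(N - k) % N])))    # X(k) = X*(N - k)

xe = np.array([4.0, 2.0, 1.0, 0.5, 0.0, 0.5, 1.0, 2.0])  # even: x(n) = x(N - n)
print(np.allclose(np.fft.fft(xe).imag, 0.0))      # its DFT is purely real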

In measuring the spectrum of a signal, we want to know the frequency range to be covered, the number of frequencies measured, and the resolving power of the algorithm. In general, a real-life signal such as speech or music, a radar or sonar signal, or some biological signal (e.g., an electrocardiogram) is of sufficient duration that we can say that it goes on forever. Furthermore, it usually changes character as time progresses. The researcher wanting to perform spectral analysis by DFT must decide, based on her or his insights about the signal, on the duration of the DFT, on the time between successive applications of the DFT, and on the sampling rate of the discretized signal (assuming that the data were obtained by sampling an analog signal). Additionally, the researcher must choose a window that is appropriate for the problem. For example, the parameters of a speech-production model can change over a time span of 10–20 ms. Thus, an appropriate window size could be 20 ms, but with a DFT update every 10 ms. If L is the assumed time duration of the window and M is the number of samples in the window, then L = MT, where T is the sampling interval.

The choice of the DFT size N is dictated by the desired spectral resolution and in general can be greater than M. This is easily implemented by padding with zeros. Therefore, if it is desired that the frequency spacing between adjacent DFT samples is δF, the sampling rate R = (δF)N. The frequency range covered by the DFT is therefore R. However, for an input that is real (the imaginary component is zero), the DFT is symmetric about R/2 so that the nonredundant DFT result yields a frequency range of R/2.
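The bookkeeping can be made concrete with hypothetical numbers for a speech analysis (an assumed 8-kHz sampling rate and a 20-ms window):

R = 8000.0             # sampling rate, samples/s (assumed)
T = 1.0 / R            # sampling interval
L = 0.020              # 20-ms window
M = int(L / T)         # M = 160 samples in the window
N = 1024               # DFT size, zero-padded up from M
dF = R / N             # spacing between adjacent DFT samples: 7.8125 Hz
print(M, dF, R / 2.0)  # real input: nonredundant range is R/2 = 4000 Hz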


FIGURE 7.6 Circular convolution of two eight-point sequences. Only y(0), y(2), y(4), and y(7) are shown. All outer circles carry the same sequence, but the inner circles rotate for the different values. Convolution consists of summing the products of adjacent values on the two circles, to give the value of the output named in the center.

In Chapter 6 we showed that the sampling of an analog signal in the time domain results in a periodic function in the frequency domain. In the DFT both time and frequency are sampled; thus, both x(n) and X(k) are periodic, and Eqs. 7.13 and 7.14 are periodic with period N. Consequently, if we begin with a perfectly finite-length sequence of length N and compute its DFT, the inverse DFT will be periodic in N. As a result, the product of two DFTs results in the circular convolution of the time sequences. This is readily visualized in Fig. 7.6.


FIGURE 7.7 Linear convolution of two finite-length sequences by DFT.

For each one-sample rotation of the inner circle, the convolution is the sum of all products of samples facing each other. Thus, for example, the first term of the convolution is given by

y(0) = Σ_{m=0}^{7} x1(m) x2[(0 − m) mod 8]    (7.15)

whereas the term for n = 2 is

y(2) = Σ_{m=0}^{7} x1(m) x2[(2 − m) mod 8]    (7.16)

The DFT can implement the linear convolution required for FIR filtering operations by judicious padding of two finite-length sequences with the proper number of zeros [10]. An example is shown in Fig. 7.7.

Given two sequences of length N1 and N2, pad each sequence with zeros so that the two padded sequences have length N1 + N2 − 1. Then, by multiplying the DFTs of the two sequences and taking the inverse DFT of this product, one obtains the linear convolution y(n). This technique is useful, for example, when one wishes to implement an FIR filter with many terms in its impulse response.
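In Python the padding rule is one line, since numpy's FFT zero-pads when given an explicit length:

import numpy as np

x1 = np.array([1.0, 2.0, 3.0])            # N1 = 3
x2 = np.array([1.0, 1.0, 1.0, 1.0])       # N2 = 4
N = len(x1) + len(x2) - 1                 # N1 + N2 - 1 = 6
X1 = np.fft.fft(x1, N)                    # fft(x, N) zero-pads x to length N
X2 = np.fft.fft(x2, N)
y = np.fft.ifft(X1 * X2).real             # linear convolution via DFT product
print(y)                                  # [1. 3. 6. 6. 5. 3.]
print(np.convolve(x1, x2))                # the same result, computed directly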

To perform filtering by DFT for a very long input waveform, one must perform overlap-add (sectioned) convolution; the technique is described in [10] and in many other books and papers.
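A minimal overlap-add sketch in Python (with an arbitrary block length; the section size and filter are illustrative only):

import numpy as np

def overlap_add(x, h, block=8):
    # Convolve a long x with a short h, one section at a time.
    N = block + len(h) - 1                   # DFT size for each section
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]         # next section of the input
        yseg = np.fft.ifft(np.fft.fft(seg, N) * H).real
        end = min(start + N, len(y))
        y[start:end] += yseg[:end - start]   # tails overlap and add
    return y

x = np.random.randn(50)
h = np.array([0.5, 0.3, 0.2])
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True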

7.6 FAST FOURIER TRANSFORM METHODS

There is a great variety of FFT algorithms. They can all be derived from successive applications of a single operation: representing a one-dimensional string of numbers as a two-dimensional array. If we have an N-point sequence, the integer N is either a prime or a composite number. If N is prime, it cannot be expressed as a product of smaller integers. If N is composite, it can be expressed as the product N1N2. If either or both of N1 and N2 are composite, further reduction is possible. For example, we can express the number 60 as 12 × 5 or 3 × 4 × 5 or 2 × 2 × 3 × 5, and so on. The term radix is commonly used to describe this decomposition. If N can be expressed as a power of a single integer r, the FFT algorithm is called a radix r algorithm. The term mixed radix means that not all factors of N are identical.

The computational advantage of the reductions just described comes directly from the fact that this two-dimensional decomposition is more efficient than a direct one-dimensional DFT with the same number of input samples. This stems from the observation that a two-dimensional DFT can be implemented, for example, by performing one-dimensional DFTs on all columns to obtain a new matrix and then performing one-dimensional DFTs on all rows of the new matrix. For an array of N1 columns and N2 rows, the total computation is of the order N1N2(N1 + N2) operations, which may be appreciably smaller than the (N1N2)^2 operations of a direct computation; for example, with N1 = N2 = 32 the direct count is about a million operations, whereas the row–column decomposition needs about 65,000. Be aware that the advantages of this kind of reduction hold only for very special cases, such as the DFT algorithm.

Let's now derive the mathematics of this DFT trick. Let the one-dimensional index be n, as usual. Letting the column index be m and the row index be l (M is the number of columns and L is the number of rows), we get

n = Ml + m,   m = 0, 1, ..., M − 1,   l = 0, 1, ..., L − 1    (7.17)

We now perform the two-dimensional DFT and choose r and s as the transformed variables; these can be recomposed to yield the single variable

k = Lr + s,   s = 0, 1, ..., L − 1,   r = 0, 1, ..., M − 1    (7.18)

We are now in a position to express the DFT samples X(k) = X(s, r) as the transform of x(n) = x(l, m) by simply substituting Eqs. 7.17 and 7.18 into the definition of the DFT, giving

X(s, r) = Σ_{l=0}^{L−1} Σ_{m=0}^{M−1} x(l, m) W^{(Ml+m)(Lr+s)}    (7.19)

Expanding W^{(Ml+m)(Lr+s)}, observing that W^{MLlr} = W^{Nlr} = 1, and properly associating terms, we obtain

X(s, r) = Σ_{m=0}^{M−1} W^{mLr} W^{ms} [ Σ_{l=0}^{L−1} x(l, m) W^{Mls} ]    (7.20)

Notice that the L-fold sum is the DFT of the mth column of the array, having the kernel W^M. Thus the first step in our computational procedure would be as follows.

1. Compute the L-point DFT of each column. The result is a function of s and m; call it q(s, m). Equation 7.20 can now be written as

X(s, r) = Σ_{m=0}^{M−1} W^{mLr} [W^{ms} q(s, m)]    (7.21)

2. Obtain a new array h(s, m) by multiplying every q(s, m) by its twiddle factor W^{ms}. Equation 7.21 now reduces to

X(s, r) = Σ_{m=0}^{M−1} h(s, m) (W^L)^{mr}    (7.22)

Equation 7.22 is recognized to be the M-point DFT of each row, with the row index s. Thus the final step in the procedure is as follows.

3. Compute the M-point DFT of each row of the h(s, m) matrix, with W^L as kernel.
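The three steps map directly into Python. The sketch below (for the assumed factorization N = LM with L = 3, M = 4, and the kernel convention of Eq. 7.13) checks the result against a direct DFT.

import numpy as np

def dft(x):
    # Direct evaluation of Eq. 7.13.
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * n * k / N)) for k in range(N)])

def two_factor_fft(x, L, M):
    N = L * M
    W = np.exp(-2j * np.pi / N)
    a = x.reshape(L, M)           # element (l, m) holds x(Ml + m), as in Eq. 7.17
    q = np.fft.fft(a, axis=0)     # step 1: L-point DFT of each column -> q(s, m)
    s = np.arange(L)[:, None]
    m = np.arange(M)[None, :]
    h = q * W**(m * s)            # step 2: twiddle factors W**(ms) -> h(s, m)
    Xsr = np.fft.fft(h, axis=1)   # step 3: M-point DFT of each row -> X(s, r)
    return Xsr.T.reshape(N)       # reassemble k = Lr + s, as in Eq. 7.18

x = np.random.randn(12)
print(np.allclose(two_factor_fft(x, L=3, M=4), dft(x)))   # True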


FIGURE 7.8 Flow chart of an eight-point FFT. Note that the outputs on the right-hand side are in a different order from the inputs on the left-hand side. From [2].

Several results emerge from the procedure. If N is the highly composite number 2^I, where I is an integer, the DFT may be decomposed into I dimensions, each being a two-point DFT. In most applications such a restriction is no problem, since, as we mentioned previously, padding with zeros simply alters the numerical meaning of the frequencies corresponding to the k values.

Another result of interest is that of row–column permutation. If the computation is performed in place, the ordering of the resulting k values is, in general, different from the ordering of the incoming n values. The advantage of an in-place algorithm is that no subsidiary memory is needed; when the computation is done, the result occupies the same locations as the input. However, the result is no longer in the same order; an example is shown in Fig. 7.8.

Each two-point DFT is represented as a nodal point in the figure and the twiddle factors are shown as arrows. Each time a two-point DFT is computed, the result is stored back into the same memory elements. Finally, as seen in the figure, the indices of the result are bit reversed, so that, for example, register number 3, which carries the input f3 (011), will wind up holding the output sample F6 (110).

Bit reversal may be avoided by using extra memory to hold intermediate results; an example is shown in Fig. 7.9.
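A short Python helper makes the permutation explicit for N = 8 (three bits), reproducing the f3 → F6 example:

def bit_reverse(i, bits):
    # Reverse the low-order `bits` bits of index i.
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

for i in range(8):
    j = bit_reverse(i, 3)
    print(f"{i} ({i:03b})  ->  {j} ({j:03b})")
# Register 3 (011) ends up holding output sample 6 (110), as in Fig. 7.8.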

7.7 RELATION BETWEEN THE DFT AND DIGITAL FILTERS

Since both DFTs and filter banks are capable of performing spectral analysis, it is fair to inquire what the mathematical relations are between these methods. Figure 7.10 shows a filter-bank implementation of a sliding DFT.

Assume that the system is initially at rest. Then, it is easy to see that the initial N arrivals of the signal samples will produce the required DFT output samples. Now consider the Nth sample. Because of the N-sample delay, the delayed zeroth sample cancels the original contribution of the zeroth sample, so that the outputs will now register the contributions from sample 1 through sample N. Thus, each set of outputs at every sample point corresponds to the DFT of a sequence that has been slid over to the right by one sample.
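A recursive Python sketch of the sliding DFT (one common convention; the N-sample delay plays the cancelling role described above, and each bin costs one complex multiply per sample):

import numpy as np

def sliding_dft(x, N):
    # Each new sample updates all N bins:
    # X_k <- (X_k + x(n) - x(n - N)) * exp(+j 2 pi k / N).
    k = np.arange(N)
    rot = np.exp(2j * np.pi * k / N)
    Xk = np.zeros(N, dtype=complex)
    delay = np.zeros(N)                    # N-sample delay line
    outputs = []
    for n, xn in enumerate(x):
        Xk = (Xk + xn - delay[n % N]) * rot
        delay[n % N] = xn
        outputs.append(Xk.copy())
    return outputs

x = np.random.randn(32)
outs = sliding_dft(x, 8)
print(np.allclose(outs[-1], np.fft.fft(x[-8:])))   # True: DFT of the last 8 samples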

For the filter-bank implementation of Fig. 7.10, the number of multiplications per point is N. Thus the filter-bank implementation seems more efficient than the sliding FFT. However, the FFT computation can be hopped. Some examples of hopping are shown in Fig. 7.11.

Notice that the hopped measurement is simply a sampling of the sliding measurement; an equivalent result is obtained by sampling the filter-bank outputs. The effects can be treated by standard aliasing arguments: each channel of the FFT is equivalent to one of the (complex) bandpass filter arms of Fig. 7.10, and the subsampled version of it will experience aliasing similar to the illustration in Fig. 7.5. A smaller hop, corresponding to more frequent sampling, results in larger separation between primary and alias images, and hence less distortion of the subband signal. Figure 7.12 shows examples of such effects.


FIGURE 7.9 Flow chart of an eight-point FFT; no bit reversal, not in place. From [2].

No overlap results in severe aliasing; a 2:1 overlap significantly decreases aliasing distortion, and a 4:1 overlap (perhaps enhanced by windowing) can reduce the effects to insignificance.

7.8 EXERCISES

  • 7.1 Prove that n = 4 is an appropriate value for a digital Butterworth filter with a 3-dB attenuation at 1250 Hz, and more than a 20-dB attenuation at 2000 Hz. Assume a sampling rate of 10 kHz.
  • 7.2 Suppose that you had the choice of a Butterworth or Chebyshev design of a low-pass filter. Discuss the criteria you might use to decide on a final design.
  • 7.3 In a radix 2 FFT, prove that the number of complex multiplications is N/2 log2N.
  • 7.4 An analog signal is sampled at 10 kHz, and 23 samples are digitized and stored in computer memory. It is desired to obtain a spectrum by means of the DFT that covers the range 0–5 kHz, with at least 7-Hz resolution. Specify an algorithm to perform this task.


    FIGURE 7.10 Filter-bank implementation of a sliding DFT. From [8].

  • 7.5 Prove that the configuration of Fig. 7.10 produces the same output as that of a sliding DFT. Which implementation (Fig. 7.10 or the sliding DFT) is more efficient?
  • 7.6 Consider the sequence x(0) = 1, x(1) = 2, x(2) = 3, x(3) = 4, x(4) = 5, and x(n) = 0 for all other values of n.
    • (a) Compute the sequence by linearly convolving x(n) with itself.
    • (b) Compute the sequence by circularly convolving x(n) with itself.
    • (c) Compute the circular convolution of a suitably modified version of x(n) to obtain the exact result of part (a).


    FIGURE 7.11 Three examples of hopped FFTs. From [8].


    FIGURE 7.12 Aliasing effects of hopped FFTs. From [8].

  • 7.7 Figure 7.13 shows the spectrum of an analog signal.

    The signal x(t) is sampled at equally spaced intervals T = 0.5 s, thus creating the signal x(nT).

    • (a) Draw the spectrum Xd(f) of x(nT).
    • (b) Is x(t) recoverable from x(nT)? If so, how?
  • 7.8 In the analog domain, an ideal differentiator is defined by its transform H(ω) = jω. In the discrete domain it is convenient to define a band-limited ideal differentiator as shown in Fig. 7.14.
    • (a) Find the unit sample response of the discrete domain differentiator.
    • (b) Sketch the result for ωc = π.
    • (c) Is the unit sample response causal?
    • (d) Is the unit sample response real?
  • 7.9 We are given a block of 512 samples and want to compute the DFT at 15 values of k. The samples were obtained by sampling an analog signal at 10,240 samples/s. The 15 values of interest uniformly cover the DFT output from X(150) to X(164).


    FIGURE 7.13 Spectrum of an analog signal.


    FIGURE 7.14 Ideal differentiator in the discrete domain.

    Assume that you have a choice of a radix 2 FFT or a straightforward (slow) DFT. Compare the running times of the two approaches.

  • 7.10 Let's take the DFT X(k) of the finite sequence x(n), for N = 8. Now, form a new sequence specified as Y(k) = X(k) for even values of k and zero for odd values of k.
    • (a) If y(n) is the inverse DFT of Y(k), find y(n) for n = 0, 1, 2, ..., 7 in terms of the original sequence x(n).
    • (b) If x(n) has the values 0, 1, 2, 3, 3, 2, 1, 0, find the numerical values of y(n) for n = 0, 1, 2, 3, 4, 5, 6, 7.

BIBLIOGRAPHY

  1. Cooley, J. W., and Tukey, J. W., “An algorithm for the machine calculation of complex Fourier series,” Math. Comput. 19: 297–301, 1965.
  2. Gold, B. and Rader, C. M., Digital Processing of Signals, McGraw–Hill, New York, 1969; republished by Krieger, Malabar, FL, 1983.
  3. Good, I. J., “The interaction algorithm and practical Fourier series,” J. R. Statist. Soc. B 20: 361–372, 1960.
  4. Heideman, M., Johnson, D. H., and Burrus, C. S., “Gauss and the history of the fast Fourier transform,” ASSP Mag. 1(4): 14–21, 1984.
  5. Hurewicz, W., “Filters and servo systems with pulsed data,” in H. M. James, N. B. Nichols, and R. S. Phillips, eds., Theory of Servomechanisms, Chap. 5, Radiation Laboratory Series, McGraw–Hill, New York, 1947.
  6. Lerner, R. M., “Band-pass filters with linear phase,” Proc. IEEE 52: 249–268, 1964.
  7. Parks, T. W. and McClellan, J. H., “A program for the design of linear phase finite impulse response digital filters,” IEEE Trans. Audio Electroacoust. AU-20: 195–199, 1972.
  8. Rabiner, L. R. and Gold, B., Theory and Application of Digital Signal Processing, Prentice–Hall, Englewood Cliffs, N.J., 1975.
  9. Rader, C. M., unpublished notes, 1964.
  10. Stockham, T. J., Jr., “High speed convolution and correlation,” AFIPS Proc. 28: 229–233, 1966.
  11. Storer, J. E., Passive Network Synthesis, McGraw–Hill, New York, 1957.
  12. Weinberg, L., Network Analysis and Synthesis, McGraw–Hill, New York, 1962.

1There is a class of devices in which analog values are sampled in time; an example of such a device is a switched capacitor filter.

2Sometimes this name is spelled Tschebyscheff, corresponding to an alternate transliteration from the Cyrillic. Because of this spelling, the transfer function is often given as T(s).

3To visualize how this transformation could have such an effect, begin with a simple low-pass function such as 1/(1 + s).
