1 Introduction: Signal Digitizing and Digital Processing

The approach used to discuss digital processing of signals in this book is special. As the title of the book suggests, the central issue concerns the performance of signal digitizing and processing in a way that provides for the elimination of negative effects due to aliasing. The term ‘digital alias-free signal processing’ is introduced and actually covers a wide subject area. As this term also raises some questions, it needs to be explained.

1.1 Subject Matter

Signals originating as continuous-time variables, usually considered as analog signals, might be and often are processed directly using analog electronics. However, prior to processing, these signals could also be converted into their digital counterparts. Using analog-to-digital conversions for digitizing the signals prior to processing has many well-known advantages and is usually preferable. When the signals are digitized, the obtained digital signals are processed according to the concepts of digital signal processing (DSP) technology. Such an approach is becoming increasingly popular as digital computer applications spread out into new fields and there is growing dependence on them. It is helpful that basically the same principles and techniques are used in both areas of signal processing and computing. However, signals can still be treated by analog signal processing techniques whenever application of digital techniques is either impossible or technically and economically unreasonable.

Therefore the analog and digital approaches can complement each other as well as compete. There are still relatively wide areas where signals are processed in an analog way simply because the available digital techniques are not applicable under given conditions or are not good enough. Application of digital techniques is limited. The dominant and most important limitation is the highest signal sample rate that is achievable under given specific conditions. It is well defined and violation of this limitation leads to distortion of the signal processing results due to frequency overlapping or the so-called aliasing effect. Attempts to eliminate the harmful impact of aliasing have led to the development of advanced digital technologies for signal processing, specifically to the development of an innovative technology called ‘digital alias-free signal processing’, or DASP. This strengthens the competitiveness of digital techniques considerably. The successful use of special digitizing techniques for the elimination of aliasing has been important in showing the significance of digitizing in the whole process of signal digital processing. Many other benefits could be obtained similarly by focusing on digitizing and matching it to the needs of signal processing, as suggested by DASP. This book provides answers to questions as to what can be achieved in this way and how the signal digitizing process needs to be altered to gain these benefits.

While the application range of the suggested approach is rather wide and various benefits could be gained in this way, specific aspects need to be taken into account. This is the subject covered by the whole book. In this chapter, initial comments are made to clarify these aspects, the basic one being the attitude towards digitization of signals.

The most frequent applications of the traditional DSP technology belong to the entertainment sphere, forming the basis for audio, digital radio, TV and various other multimedia systems. These techniques are also widely used for building fixed-line and mobile phone sets. Less visible are DSP industrial applications, especially because they are often presented as embedded systems. However, the role of digital signal processing techniques in modern telecommunication, instrumentation, industrial control, biomedical, radar, navigation and many other data acquisition and processing areas is significant.

An attempt is made in this book to focus the discussion primarily on industrial applications of the considered digital technology. The industrial tasks for signal processing are typically challenging and cover a wide range of signal processing conditions. Signals have to be processed in time, frequency, modulation and spatial domains. The frequency range to be covered is very wide, extending from ultra-low frequencies up to several GHz. Processing is often multidimensional and in real time. The signal digitizing and digital processing problems considered in this book are related to these industrial applications through the described methods, algorithms, hardware and software tools.

The latest trend in the use of digital techniques for signal processing is towards the development of ambient intelligence systems, including sensor systems and networks. They require the realization of massive data acquisition functions supplying information from large clusters of distributed signal sources, and this function becomes increasingly important. Recently it has been found that indirectly randomized nonuniform signal processing techniques are also well suited for application in this field. The first results obtained are discussed in Chapter 7. Amazingly, the deliberate indirect randomization of sampling in this particular case can be used for purposes not related to the prevention of aliasing. The sampling randomization approach, in this case, helps to separate and distance the sampling operation from the main part of the analog-to-digital conversion structure, in order to make it an extremely simple operation that can easily be executed remotely, close to the signal sources.

Thus the subjects discussed in the book cover a wide area. For already explained reasons, much attention is focused on the basic operations of signal analog-to-digital conversions. Various approaches to randomization and pseudorandomization of signal sampling and quantizing processes are discussed in detail, including issues of adapting signal digitization to conditions of specific processing. The subjects of indirect sampling randomization and hybrid periodic/nonuniform sampling are discussed. Digital processing of signals is regarded, in general, as being based on processes that decompose signals into their component parts. Signal parameter estimation, correlation and the spectral analysis of signals, represented digitally in the alias-free way, are described. The application of various signal transforms, including nonorthogonal transforms of nonuniformly sampled signals, is studied. Much attention is given to spectral analysis and signal waveform reconstruction based on it.

It can be seen from this outline of the subject area that a large part of the topics covered is not directly related to the issue of alias-free signal processing emphasized in the title of the book. Nevertheless, all of these subjects belong to the technology of digital alias-free signal processing. Indeed, to achieve the elimination of signal distortions caused by aliasing, signals need to be digitized in a special way. Once that is done, digital representation of the original signals becomes specific. This fact has to be taken into account when processing these signals in various ways. It often turns out that the algorithms developed for processing nonuniformly sampled signals are also well suited for improved processing of periodically sampled signals under some specific difficult conditions. Furthermore, the techniques used for randomization of the sampling operations are similar to those used for randomization of the quantization operations. In the case of quantizing time intervals, they are identical. This explains why many of the topics described in the book, which at first glance may seem not to be related to the problems of avoiding aliasing, actually belong to the subject area covered by the digital alias-free signal processing technology and therefore are discussed here.

The comments given in this introductory chapter explain the approach used in this book to studies of signal processing carried out in a digital manner. It is assumed that the readers of this book are familiar with DSP basics.

DASP, of course, is a recently developed part of DSP. Integration of DASP into the general theory of DSP still has to be done. As DSP, a widely used mature technology, is well described in many excellent textbooks, there is no need here to discuss the basics of these traditional digital techniques once again. However, the subject area of this book, while significantly differing from classical DSP, is also closely related to it. For that reason, the traditional methods and techniques for processing signals digitally are discussed, but are considered in the light of their relationship to specific nontraditional DASP techniques.

1.2 Digitizing Dictates Processing Preconditions

At first glance, the alias-free digital techniques are invariant with regard to their applications. Referring to the customary classification of DSP application areas, the techniques are applicable for nonparametric, model-based and statistical signal processing. The original analog signals are always converted into their digital counterparts and the obtained digital signals are then processed as required. On the other hand, the conditions for various types of applications might differ to a large extent and, consequently, the digital techniques used for applications in various areas usually need to be specific. To organize the process of signal processing in the best possible way, conversion of the original analog signals into their digital counterparts should be carried out while taking their exact characteristics into account, for there are usually several ways to represent a signal digitally. Some of the digital representations may turn out to be better for some applications than others. It is certainly beneficial to learn how to digitize a signal under a given set of conditions so that the best results are obtained. Therefore signal digitizing should be optimized whenever possible. A large part of the book is dedicated to describing various techniques for signal sampling and quantizing, including issues of optimizing digitization by matching signal digitizing techniques to the conditions of digital processing dictated by specific applications. It should be kept in mind that the digitizing approach used determines the preconditions for subsequent processing of the obtained digital signals. However, this fact becomes meaningful and can be exploited beneficially only if the digitization processes can be flexibly adjusted to the needs of subsequent processing of the digital signals. As explained in Chapter 2, randomization is used as a tool for achieving this.

1.2.1 Connecting Computers to the Real-life World

As the real-life world is basically analog, so are most of the signals reflecting observed processes. Computers, on the other hand, are digital. Therefore there is a gap between the real world and computers. Signal processing techniques have the responsibility for filling this gap. To accomplish that, the original signals have to be converted into the digital form first. Only after that can these digital signals be transferred to computers either directly or after performing some preprocessing of the obtained raw digital signals.

This outlook on the basic function of digital technology for signal processing is used in the following chapters to generalize the approach to studies of the considered topics. The progress achieved in converting various types of analog signals into their digital counterparts and in processing them under difficult conditions is weighed against the requirements of the general task of connecting computers (or other digital computing and data transmission devices) to the real-life world.

The involved preprocessing functions, while secondary, are also crucial. Their contribution to linking the signal sources to computers is often invaluable. For this reason, much attention is paid to a careful consideration of special software/hardware subsystems or devices used to perform the needed preprocessing. They are usually capable of doing the job in a cost-effective way, helping the computers to carry out the required signal processing and providing the information sought.

How preprocessing is organized depends on the conditions and the specific work being done. In the present case, signals are digitized in a specific nontraditional way. Consequently, the techniques used for the raw signal preprocessing described in the following chapters are unusual.

In cases where a computer is used for decision making in a control system, the developed code, representing the reaction of the computer to the information carried by the input signals, is transformed again into a digital signal and digital techniques are used for executing the generated commands. In such cases the computer is connected to the analog world both by its input and by its output. The feedback calculated and presented in the digital form at the output then usually has to be converted back to the analog form whenever digital signals are not acceptable to the objects under control. It is assumed that traditional techniques can be used in this case, so this topic is not considered here.

1.2.2 Widening of the Digital Domain

One of the basic objectives of these research and development activities has been widening of the digital domain over the area where analog signal processing techniques are still used almost exclusively. To reach this goal, effort should apparently be focused on replacing the mixed analog–digital or analog techniques by digital ones. The basic part of this digital domain is formed by low-frequency digital applications. Therefore the direction in which the digital domain should be further expanded is towards higher frequency applications. That is the reason why special attention is paid in this book to digital techniques related to processing radio frequency and microwave signals.

The task of processing signals digitally at higher and higher frequencies has been attractive. However, it is not easy to make progress there. The most serious obstacle preventing progress in this direction so far has been the aliasing effect that inevitably accompanies the applications of classical digital techniques. This dictates the necessity to filter off all frequencies above half of the sampling frequency or, in other words, to use a sampling frequency at least twice as high as the upper frequency in the spectrum of the signal to be processed. The aliasing-induced corruption of the signal is the penalty for not meeting this requirement. In the case of the traditional approach to signal sampling, when the sample values are taken periodically there is no way to increase the upper frequency in the signal spectra and still avoid aliasing except to go to higher and higher sampling rates. Thus the uppermost rate of taking signal sample values that is achievable at reasonable cost using the currently available microelectronic device manufacturing technologies determines the highest frequencies that could be handled digitally. As these manufacturing technologies are being continuously improved, the upper frequency limit of the digital domain is also going up. However, the digital domain enlarges in this way relatively slowly.

The alternative is to use a fundamentally different approach to the problem of avoiding aliasing. As shown later, avoiding frequency overlapping has to be based either on sufficiently high-frequency periodic sampling or on a nonuniform digital representation of the signals. Needless to say, such an irregular digital representation of signals drastically differs from the traditionally used regular one. Furthermore, these differences do not only concern the signal sampling. When and if the original analog signals are converted into sequences of their sample values placed on the time axis nonuniformly, the subsequent digital processing has to be organized in such a way that it takes into account the specifics of the used digitization process. The exceptionally important role of signal digitizing needs to be recognized. Paying sufficient attention to this problem and using the right approach to represent signals in a digital form are crucial.

1.2.3 Digital Signal Representation

To convert an original continuous-time or analog signal into a corresponding digital signal, two essential operations have to be performed. Firstly, a sequence of instantaneous values of the signal, measured at discrete time instants, has to be formed. Secondly, the obtained signal readings, usually considered as signal sample values, are to be rounded off in order to express them in a numeric form.

The operation of sample value taking is referred to as ‘sampling’. The rounding-off operation is referred to as ‘quantizing’. The sequence of time instants at which the samples are obtained represents a stream of events, which can be depicted graphically as a sampling point process. The results of the quantization operation are used to obtain the code of signal sample values.
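
As a concrete illustration of these two operations (a minimal sketch in Python; the signal, the sampling point process and the quantization step are illustrative choices, not taken from the book), the following lines take a sequence of instantaneous values at given time instants and round them off to a numeric grid:

import numpy as np

def digitize(signal, instants, q_step):
    # Sampling: take the instantaneous values of the signal at the given instants.
    sample_values = signal(instants)
    # Quantizing: round the sample values off to integer multiples of q_step.
    codes = np.round(sample_values / q_step).astype(int)
    # Encoding: the integer codes are the digital representation; codes * q_step
    # gives the corresponding quantized values.
    return codes, codes * q_step

# Illustrative example: a 1 kHz sine observed through an 8 kHz periodic sampling
# point process and a quantization step corresponding to 8 bits over a +/-1 range.
instants = np.arange(0.0, 2e-3, 1 / 8000.0)
codes, quantized = digitize(lambda t: np.sin(2 * np.pi * 1e3 * t), instants, q_step=2 / 256)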

The digital signal obtained as a result of all the mentioned operations (sampling, quantization and encoding) is the digital substitute for the original analog signal. It is clear that in an ideal case they should be equivalent. However, in reality the digital signals always differ to some extent from the originals. In fact there is always some impact of the analog-to-digital conversion procedures on the features of the obtained sampled and quantized signals. The patterns of the sampling point processes and the specific quantization techniques used have a considerable impact on the properties of the digital signals. Under certain conditions it is possible to reconstruct the original signals from the digital ones with a high degree of precision. Nevertheless, the fact remains that the features of the analog signals and their digital counterparts differ.

In general, the requirements needed for signal analog-to-digital conversions vary over a wide range. What is good for low-frequency applications is not necessarily also good for applications at higher frequencies. The requirements for analog-to-digital conversions at low frequencies more often than not do not pose a problem. No special efforts usually have to be undertaken to meet them. That changes as signal frequencies increase. More and more attention then has to be paid to enable the sampling and quantizing operations to take place. Some of the requirements for their technical realization are obvious: those concerning the precision of sample-taking timing, sampling instant jitter and the quantization threshold settling time. However, much more significant are the requirements for digital representation of the original signals.

The discovery of the essential relationships characterizing signal sampling, quantization and digital processing processes, made a long time ago, has been a dramatic achievement. As that part of history has been well documented, including references to contributions made by the involved researchers, there is no need to discuss the matter in detail here. Only the core of these relationships will be mentioned, the essential and most famous sampling theorem. It has a very high practical value as it defines, in a simple and clear way, the basic condition for full recovery of the information carried by the original analog signals from their sample values taken at discrete time instants.

The end result of the sampling theorem is very well known and is often used in engineering practice for setting up proper working conditions for correct functioning of analog/digital electronic devices. The theorem also serves in making a choice between analog and digital approaches to a specific problem. Accordingly, the digital techniques can be used when the frequency at which sample values are taken from a signal is at least twice as high as the upper frequency of the signal to be processed. As the highest achievable sampling rate depends on the technical perfection of the electronic devices available for implementation of signal analog-to-digital conversions, the sampling theorem evidently determines the boundary limiting the field where signals could be processed in a digital manner. How wide that field is at any given moment apparently depends on the achievable quality of the microelectronic elements being produced at that time.

These considerations and widely accepted conclusions are of course true. However, a significant fact is more often than not overlooked. It is the fact that the conclusions of the sampling theorem in engineering practice are often considered in a simplified form, stating simply that the sampling rate has to be at least twice as high as the highest frequency present in the spectrum of the band-limited signal. The conditions for signal processing in reality might differ from this simple case substantially. The point is that this basic version of the theorem has been derived and actually holds fully only in cases where band-limited signals are sampled equidistantly. In other words, the simplified interpretation of the theorem is valid for the classical DSP approach developed a long time ago. As use of the periodic sampling approach is still overwhelming, it is often not realized that there could be any other type of digital version of the respective analog signals. Nevertheless, as shown later, that is the case.
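
The reason why this simplified statement is tied to equidistant sampling can be written out in one line. If the sample values are taken periodically at the instants t_n = n/f_s, a sinusoid at frequency f produces exactly the same sample values as sinusoids at all frequencies f + k f_s:

\cos\bigl(2\pi (f + k f_s)\, t_n + \varphi\bigr) = \cos\bigl(2\pi f\, t_n + \varphi\bigr), \qquad t_n = n/f_s, \quad k = 0, \pm 1, \pm 2, \ldots

Hence the digital signal is unambiguous only if the analog signal is confined to a band narrower than f_s/2, which is precisely the condition the simplified interpretation relies upon.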

The considerations mentioned above lead to the conclusion that the specifics of the used analog-to-digital conversions become increasingly important with widening of the frequency band within which the analog signals have to be processed digitally. It is clear that attempts to enlarge the digital domain in the direction of higher frequencies can be successful only when much more attention is paid to signal digitization processes than is customarily done for digitizing in relatively low frequency DSP applications. This is true both for sampling and quantization principles and their technical implementations. For example, while the discrepancies between the expected and actual sampling instants (sampling instant jitter) usually can be and are neglected in audio frequency applications, these discrepancies have to be kept in the range of a few picoseconds or even smaller in cases where the signal upper frequency reaches hundreds of MHz. To achieve that, the involved digitizers should have special designs.

The necessity to process signals digitally at higher and higher frequencies is strongly motivated. Much money and effort has been invested in attempts to meet growing demands in this area. It is not an easy task. When looking for the obstacles that slow down expansion of the DSP application field in the direction of higher frequencies, it can easily be seen that the bottleneck is analog-to-digital conversions. Indeed, processing 16 or 32 bit words at clock frequencies measured in hundreds of MHz is easier than providing analog signal conversions into digital signals within a sufficiently wide dynamic range at very high sampling rates.

The basic parameters of analog-to-digital converters (ADCs) that characterize their precision, dynamic range and the frequency range of the input signals are the quantization bit rate and the sampling rate. The dynamic range and the bandwidth, usually limited to half of the sampling frequency, are clearly traded off, and this parameter combination might serve as an indicator of achievable performance levels. Increasing the sampling rates typically leads to considerable narrowing of the system dynamic ranges to figures unacceptable for many applications. Consequently, there is often a deficiency in the DSP subsystem dynamic range for high-frequency and low-noise applications and insufficiency in sampling rates (or bandwidth) for high-precision applications.
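
As a point of reference for this trade-off (a standard figure of merit, not a result specific to this book), the ideal signal-to-quantization-noise ratio of a b-bit uniform quantizer driven by a full-scale sinusoid is

\mathrm{SNR} \approx 6.02\, b + 1.76\ \mathrm{dB},

so each additional bit buys roughly 6 dB of dynamic range, while the usable bandwidth of the classical arrangement remains limited to half of the sampling frequency; converters with higher sampling rates usually deliver fewer effective bits.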

To achieve progress in the development of high-performance digital technologies for signal processing at significantly increased frequencies, two different approaches could be used in parallel. In addition to the ongoing process of improving the DSP hardware/software tools built on the basis of well-established DSP principles, innovative techniques for signal digitizing and fully digital processing, based on nontraditional signal digitizing concepts, could be developed and used for a very wide variety of applications.

This book is devoted to the exploration of this second approach. As shown in Chapters 2, 3 and in numerous other places in the book, an effective approach to digital processing of wideband signals is based on resolving the problem of frequency overlapping, observed as the aliasing effect. It is shown that elimination of aliasing opens up the possibility of handling signals digitally at frequencies exceeding half of the sampling frequency and that this approach is applicable widely. It is based on the application of various nonuniform sampling techniques. Much effort has been spent in the book on studies and descriptions of this quite fruitful anti-aliasing approach. Using it leads to solutions of various essential engineering problems.

1.2.4 Complexity Reduction of Systems

Digitizing techniques suitable for signal digitizing at very high frequencies are obviously much needed for the development of fully digital signal processing techniques in the frequency range up to several GHz. Such techniques are essential for many important applications including telecommunications. Attempts to use the traditional techniques, even when possible, lead to complicated and costly system designs. In the following chapters nontraditional digital techniques are investigated that might be successfully used both for expanding the digital domain and for reducing the complexity of various systems. These two types of benefits go hand in hand and are usually obtained whenever the discussed techniques, based on nonuniform sampling, pseudo-randomized quantizing and matched processing algorithms, are used correctly. Note that the second type of benefit, the simplification of designs, can often also be obtained in cases where the signals to be processed contain components that do not exceed a certain limit of relatively low frequencies and there are no principal obstacles preventing application of the classic DSP techniques.

The fact that much attention in the book is paid to elimination of aliasing might have a misleading side effect. It might lead to the wrong impression that the suggested and discussed special digital techniques are suited and recommended exclusively for applications related to processing radio frequency and microwave signals. That is definitely not so. A large part of the discussed techniques could be used with good results for developing signal processing devices and systems that are less complicated than the more traditional ones. Other benefits apart from elimination of aliasing could be targeted and gained.

For example, substantial complexity reduction of designs and data compression could be achieved by using the special quantization techniques and their application is actually invariant to signal frequencies. Unlike deterministic quantization, randomized and pseudo-randomized quantization provide unbiased results. Therefore, such quantizing could be used to reduce quantization errors, leading to the reduction of the quantization bit rate. It is also appropriate for applications where it is desirable to use rough quantization. This quantization mode is not only of practical interest from the viewpoint of reducing the bit streams representing quantized signals. What is more important is the fact that ADCs, when they contain only a few comparators, can be built as extremely broadband devices. They are well suited for building low-power multichannel systems and their performance might be enhanced by applying oversampling and low-pass filtering in combination with randomization of threshold levels.

It is also possible to gain by exploiting the most remarkable properties of pseudo-randomized quantization. It can be performed in such a way that the corresponding quantization noise has very advantageous properties not provided by deterministic quantization. As shown in Chapter 5, this noise is distributed uniformly, is decorrelated from the input signal, has no spurious frequencies and has a constant power spectral density over the whole frequency range independently from the input signal and the number of threshold levels used. Consequently, application of such pseudo-randomized quantization often turns out to be optimal in the sense that the required performance is achieved by processing the minimum number of bits.
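
A minimal numerical sketch of this idea is given below, assuming subtractive dithering as one simple way of randomizing the quantization levels; the step size and pseudorandom dither used here are illustrative and do not reproduce the specific schemes of Chapter 5.

import numpy as np

rng = np.random.default_rng(seed=1)            # pseudorandom (reproducible) dither source
q = 0.1                                        # quantization step

def quantize_deterministic(x):
    # Ordinary rounding to the nearest quantization level.
    return q * np.round(x / q)

def quantize_pseudorandomized(x):
    # Subtractive dither: add a pseudorandom offset uniform over one step,
    # round off, then subtract the same offset again.
    d = rng.uniform(-q / 2, q / 2, size=np.shape(x))
    return q * np.round((x + d) / q) - d

# A constant input lying between two quantization levels: deterministic rounding
# is biased towards the nearer level, the randomized version is unbiased on average.
x = np.full(100_000, 0.033)
print(quantize_deterministic(x).mean())        # -> 0.0    (biased)
print(quantize_pseudorandomized(x).mean())     # -> ~0.033 (unbiased on average)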

Many methods and algorithms initially developed for dealing with nonuniformly sampled signals have proved to be quite useful for resolving essential problems typical for processing signals belonging to the lower frequency range. Use of nonorthogonal transforms might be mentioned as an example illustrating this. Their first application was reconstruction of nonuniformly sampled signal waveforms. Then it was discovered that they are also good for processing signals at extremely low frequencies. For example, they can be used to remove the negative effect caused by cutting off part of a signal period when, according to the classical definition, the processing should be carried out over an integer number of signal periods (or the periods of their separate components). These transforms are also useful for decomposing signals into several basic parts simultaneously or for extraction of signal components under conditions where the spectra of the components partially overlap.

To add to the given examples, the systems mentioned above for massive data acquisition will be discussed again. They are remarkable as a showcase demonstrating that sometimes it is beneficial to use nonuniform sampling techniques for purposes other than the elimination of aliasing. While such massive data acquisition systems might be used widely, the applications related to data acquisition from multiple sources of biomedical signals are especially well suited. A specific nonuniform remote sampling procedure, based on waveform crossings, is then performed in order to digitize signals as close to each of the distributed signal sources as possible, and to do it in a very simple low-power way as described in Chapter 11. The output of such a sampler, at each sampling event, is a single pulse generated so that its position in time reflects the corresponding instantaneous value of the particular input signal. These pulses are transmitted over wire or radio links to the master parts, where the whole input signal waveform is reconstructed. Application of these specific nonuniform techniques for remote sampling leads to substitution of the standard multiplexing of analog signals by gathering multiple output signals taken off remote sampling units distributed over a wide area. Using this specific kind of nonuniform sampling results in a number of benefits. The number of signal sources from which data could be acquired in this way, in comparison with the case where analog signal multiplexing is used, is increased dramatically, up to several hundreds of such sources at least.

All these examples show that it is crucial to digitize signals in a way best suited to the particular case of signal processing. How to do this is considered in the following chapters.

Recognition of the extremely important role that digitizing plays in the whole process of digital processing of analog signals is the reason why much more attention has been paid than usual to the issues of digitizing analog signals in this book. Flexible digitizing adaptable to the specific conditions of the given signal processing task is considered here as the key factor in the fruitful application of digital techniques and for obtaining in this way various significant benefits.

1.3 Approach to the Development of Signal Processing Systems

When a task for signal processing turns up, the basic concern is usually to find the best algorithm prescribing how the required processing has to be done. More often than not little attention is paid to the input signal format, assuming that it is given as a digital signal or can be easily converted to it if the signal originally is analog. That is the most often applied traditional DSP approach and in many cases there is evidently nothing wrong with it as it often leads to good results. However, this is true only conditionally.

This traditional approach is based on the assumption that there are no problems in converting the analog input signal into its digital counterpart. While that is more or less the case under conditions typical for processing relatively low frequency signals, in more demanding signal digitizing cases the situation might be quite different. Indeed, when an extremely wide dynamic range has to be achieved, when the signal to be processed is wideband and contains components at high frequencies, and in many other cases, the analog-to-digital conversion of the input signal may prove to be the crucial stage in the whole processing chain. The point is that signal digitization is a vital component of digital processing and this fact is fully recognized in this book. This is the basic difference between the techniques offered and discussed here for alias-free digital processing of signals and the widely used techniques based on periodic sampling and fixed-threshold quantizing. It leads to a specific approach to signal digitizing and processing. This approach will be considered more closely.


Figure 1.1 Closed-loop analysis and definition of digital signal processing system designs

It makes sense to approach the analysis and definition of the designs of a digital signal processing system in the way suggested in Figure 1.1. The starting point for the development of a design concept of a signal processing system is the specification of its output. Such a specification has to be given in terms of the functions to be fulfilled and the performance quality to be provided for. The organization of signal digitization procedures, based on selection of the most suitable sampling and quantization modes, is carefully considered next, targeting satisfaction of the specific processing requirements. The decisions taken at this stage are based on knowledge of the area of various possible sampling and quantization techniques, their capabilities, advantages and limitations. After it has become clear how the signal digitizing should be carried out and how the digital signal will be defined, the development cycle can be concluded by choosing or defining the algorithm for solving the given specific task with the specific digital signal features taken into account.

The choice of good algorithms for processing the digitized signals is of course important for developing any digital signal processing system. However, these should not be set up before it is clear how the input signal will look digitally. For instance, algorithms good for processing periodically sampled signals in most cases are not directly applicable for processing nonuniformly sampled signals.

The idea of this closed-loop design development approach is simple and universal. However, it would not make much sense to apply it for optimization of traditional DSP systems. The limitations in matching the common analog-to-digital conversion process to the specifics of the given task for signal processing would prevent good results from being obtained. The problem is that the commonly used periodic sampling and fixed-threshold quantization operations, constituting the basis of a typical ADC, are rigid. Little could be done in an attempt to adapt them to the specific conditions of a given signal processing case. It is only possible to vary the time intervals between signal sample taking instants at sampling and to vary the precision of the sample value rounding-off at quantization.

There are many additional ways that the operations of signal digitizing can be diversified if they are deliberately randomized. Good and not so good effects might result from this. The involved processes and the relationships dictating them are quite complicated. This book aims to reveal many of the basic ones.

1.4 Alias-free Sampling Option

The effect of aliasing, as well as the popular means of avoiding it, is of course well known. The negative consequences caused by aliasing are traditionally accepted as unavoidable. Actually, this is not so. It is possible to avoid overlapping of signal spectral components and to distinguish them without increasing the mean sampling rate. Consider how that could be achieved.

1.4.1 Anti-aliasing Irregularity of Sampling

Assume that a digital data set, representing a signal sample value sequence, is given. This is shown graphically in Figure 1.2. Look at these signal samples and try to imagine what the signal from which they have been taken looks like. It is hard to do that. The digital sample values have to be processed to reconstruct the original signal they belong to. In this particular case, the indicated sine function 1 (solid line) is found to fit the data. Therefore it should be the signal from which the sample values have been taken. However, if the reconstruction process is continued, it becomes clear that there are other sinusoids at differing frequencies, which can also be drawn exactly through the same sample value points as the first. All these sinusoids (dotted curves) are aliases and the overlapping of their sample values is aliasing. The aliasing effect leads to an uncertainty. Indeed, sine waves at all of the indicated frequencies fit the given sample values equally well.


Figure 1.2 Overlapping of a periodically sampled signal component and its possible aliases

To ensure that the correct original analog signal can be recovered from a digital signal, the bandwidth of the signal should not exceed half of the sampling frequency. If all spectral components outside this limited frequency band were filtered off the original signal, or if more signal sample values were taken within the same time interval, there would be no uncertainty. Either of these possible actions imposes limitations on the bandwidth of the signal that can be sampled at the given sampling frequency without corruption due to aliasing. Apparently, if some other way could be found to avoid aliasing, special application-oriented digital processing of signals would be possible in a much broader frequency range. That would open up a broad area of new beneficial digital signal processing applications.

However, it is not clear whether sampling that is not corrupted by aliasing is feasible at all. In an attempt to find an alternative approach to realization of this operation, look at the diagrams of Figure 1.2 again. Notice that the time intervals between the taken signal sample values are of equal length. Now try to vary these intervals. The sample values of all indicated sinusoidal curves become different, even for small changes in the distances between them. That clearly is interesting as it means that taking signal sample values irregularly disturbs the aliasing phenomenon.

To see, in a more detailed way, the consequences of this fact look at Figure 1.3. The signal shown (solid line) is the same one as given in Figure 1.2. The lower frequency sine function is again sampled and the corresponding data set is obtained. However, the distances between the sampling instants along the time axis now differ. They are irregular. Amazingly, this proves to be very useful. Indeed, as can easily be seen, now only one sine function can be drawn exactly through the points indicating the signal sample values. The sinusoidal curves at other frequencies simply do not fit them.


Figure 1.3 Only the signal waveform (solid line curve) exactly fits the nonuniformly spaced sample values

The results of this simple experiment suggest that the digital signals formed by using the nonuniform sampling operation should have features strongly differing from typical features of the digital signals obtained in the cases when signals are sampled periodically. Actually, this presumption is true. The content of the following chapters confirms this fact.

Studies of this kind of sampling show that nonuniform sampling of sinusoids at different frequencies provides differing data sets. Therefore irregularly or nonuniformly sampled signals have no completely overlapping aliases like those observed at periodic sampling. Consequently, it can be expected that application of nonuniform sampling should open up the possibility of distinguishing all spectral components of the signal, even if their frequencies substantially exceed the mean sampling rate. Studies, including experimental studies, confirm that this theoretical expectation is true. Real systems have been developed, built and exploited that are capable of processing signals fully digitally in a frequency range many times exceeding the mean sampling rate. Some of these are described in Chapter 11.
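
This observation can also be checked numerically. In the following sketch (the frequencies, sampling rate and amount of irregularity are illustrative choices, not taken from the figures), a sinusoid and one of its aliases produce identical sample values when sampled periodically, but clearly different ones as soon as the sampling instants are shifted irregularly:

import numpy as np

fs, f0 = 10.0, 3.0                      # sampling rate and signal frequency (arbitrary units)
f_alias = f0 + fs                       # one of the aliases of f0 under periodic sampling
n = np.arange(20)

t_periodic = n / fs
t_irregular = t_periodic + np.random.default_rng(0).uniform(0.0, 0.3 / fs, n.size)

s = lambda f, t: np.sin(2 * np.pi * f * t)

print(np.allclose(s(f0, t_periodic), s(f_alias, t_periodic)))    # True: samples overlap (aliasing)
print(np.allclose(s(f0, t_irregular), s(f_alias, t_irregular)))  # False: the data sets differ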

Taking signal sample values irregularly eliminates the basic conditions for aliasing to take place. That is a very desirable property of such sampling. Therefore it seems that nonuniform sampling should be preferable. On the other hand, it is also evident that the end result of nonuniform sampling, the sequence of nonuniformly taken signal sample values, differs significantly from the result of periodic sampling.

Even such a superficial consideration of the illustration of nonuniform sampling effects leads to the conclusion that this special approach to sampling has advantages and disadvantages. Apparently no definitive conclusions could be drawn on the grounds of the diagrams given in Figure 1.3 alone. The issue of nonuniform sampling is clearly too complicated for that.

1.4.2 Sparse Nonuniform Sampling

Intuitively it might be hard to accept the idea that sometimes it could be possible to take signal sample values at a rate below twice the upper frequency of that signal and yet be able to recover the essential information carried by it. At first glance it seems that the signal sample values need to be taken often enough to keep track of the signal changes in time. To do that, the time intervals between successive sampling instants should be sufficiently short to ensure that the signal increment during the sampling interval does not exceed a certain limit. This kind of reasoning leads to the conclusion that the sampling rate has to be at least twice as high as the highest frequency present in the signal.

However, it is also possible to look at this problem in a different way. Whenever a continuous-in-time signal is to be digitized and the best sampling technique has to be found for this, the spectrum of the signal, the acceptable mean sampling rate and subsequent processing of the digitized signal need to be considered. The sampling operation should be carried out in such a way that the sequence of signal samples obtained is as closely related to the original signal as possible. However, there are other considerations that usually have to be taken into account as well. To explain what specifically is required, consider the simple example illustrated by Figure 1.4.

Figure 1.4 displays the panel of a Virtual Instrument of the DASP Lab System described in Section 2.4. This hardware/software system contains a special digitizer and a PC. The digitizer converts the analog input signals into the digital form and then this digital signal is analysed by one of the software instruments. The system operates using the digital alias-free signal processing algorithms and the mean sampling rate is equal to 80 MS/s (megasamples per second). The particular instrument shown is a Vector spectrum analyser. Normally the analog input signal to be analysed would be converted by a digitizer into a stream of nonuniformly taken sample values and then this digital signal would be analysed. However, the DASP Lab System could also be used in the rapid prototyping mode. In this case, to illustrate application of the nonuniform sampling processes, the input signal is synthesized and the Vector spectrum analyser of the DASP Lab System is used to obtain the spectrogram and the reconstructed waveform shown. The frequencies of the signal components are indicated on the given spectrogram.


Figure 1.4 Digital signal analysis and waveform reconstruction in the frequency range up to 1.2 GHz

Note that the frequencies of the signal components might have been chosen and placed on the frequency axis arbitrarily. As the highest frequency in this particular case is 1.185 GHz, the required sampling rate would have to be at least 2.370 GHz if the signal were sampled periodically. However, in the case of this example, the signal is sampled nonuniformly, so the task of spectrum analysis and reconstruction of the signal waveform is resolved by using sparse sampling at a mean rate of 80 MS/s.

It is possible to reconstruct the signal waveform from such a sparse sequence of sample values because the signal is ergodic and quasi-stationary. Its parameters do not vary during the time period it is being observed. Under these conditions, a reduced number of independent sample values is needed to reconstruct it by estimating all three parameters (amplitude, frequency and phase angle) of all signal components. In this case, the time intervals between the sampling instants might be large; the mean sampling rate used is 80 MS/s. This means that it is approximately 30 times lower than it would have to be in the case of periodic sampling. Therefore about 30 times fewer signal sample values have been taken during the time interval the signal has been observed. The spectrogram of this particular example contains components in a wide frequency range and is shown in the upper window of the instrument panel given in Figure 1.4, while the reconstructed signal waveform is displayed in the lower window.
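
A minimal sketch of such parameter estimation is given below, assuming for simplicity that the component frequency is already known (estimating the frequency as well would add a search over candidate frequencies); the amplitude and phase of a 1.185 GHz sinusoid are recovered by linear least squares from nonuniform samples taken at a mean rate of 80 MS/s, far below twice the signal frequency. The sampling scheme and figures are illustrative only.

import numpy as np

rng = np.random.default_rng(2)
f = 1.185e9                                    # component frequency, Hz
mean_rate = 80e6                               # mean sampling rate, samples per second
N = 512
t = np.cumsum(rng.uniform(0.5, 1.5, N)) / mean_rate    # nonuniform sampling instants

a_true, phi_true = 0.7, 0.4                    # "unknown" amplitude and phase
x = a_true * np.cos(2 * np.pi * f * t + phi_true)      # sparse nonuniform sample values

# Linear least squares for the in-phase and quadrature amplitudes at frequency f
A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
c, s = np.linalg.lstsq(A, x, rcond=None)[0]
print(np.hypot(c, s), np.arctan2(-s, c))       # -> approximately 0.7 and 0.4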

The example demonstrates that once aliasing is somehow eliminated, the rate of sampling required for reconstruction of the original signal does not depend on the highest frequency component of it. For instance, even a much lower sampling frequency than 80 MS/s could be used for analysis and reconstruction of the signal considered above because it is stationary and parameters of it do not change during the observation time.

This does not contradict the sampling theorem. If the sampling process is periodic, as this theorem assumes it to be, then the sampling frequency has to be high enough. Otherwise there will be aliasing and it will be impossible to estimate the signal parameters. The situation changes completely if aliasing is taken out by introducing nonuniform sampling and estimating the signal parameters in an appropriate way. Then signal sample values can be taken at a much slower rate. Use of periodic sampling under the same conditions would result in taking many more sample values at a sampling rate about 30 times higher. Apparently the excess sample values would not add information in this particular case; they would serve mainly to resolve the uncertainty caused by overlapping signal spectral components, although the additional sample values would also help to reduce the impact of noise present in the signal.

This leads to the conclusion that using nonuniform sampling based anti-aliasing techniques is quite beneficial under the given conditions. Nonuniform sampling makes it possible to compress data significantly, so much simpler electronic circuitry is needed to complete the task. Just imagine how much more complicated the hardware would need to be in order to execute the periodic sampling operation at the required frequency of 2.370 GHz and to perform vector spectrum analysis of the extremely wideband digital signal.

Therefore, as the example suggests, avoiding aliasing in some other way not based on the use of high-frequency sampling should lead to a reduction in the requirements for the mean sampling rate and to other related benefits. In other words, the introduction of such sparse sampling should result in lifting the frequency limit to some higher level and in widening the digital domain in the direction of higher frequencies. In addition, the application of sparse sampling might also be good from other points of view. For instance, if a particular task for signal processing could be resolved by processing fewer signal sample values, data compression would take place, and that is always beneficial.

Application of sparse sampling (or undersampling) makes sense if the conditions are right. However, it has to be kept in mind that this sparse sampling is also necessarily nonuniform as only this kind of sampling would lead to suppression of aliasing. Consequently, processing digital signals obtained as a result of this kind of sampling has to be carried out in a special way that is suitable for handling nonuniformly sampled signals, which is a significantly more difficult task than processing periodically sampled signals.

Of course, there are also application limitations for nonuniform sparse sampling, but they differ from the limitations characterizing application conditions for periodic sampling. In the case of nonuniform sampling, the limitations on the lowest sampling rate are imposed by signal parameter variation dynamics rather than by the upper frequency of their spectra. This changes the attitude to establishing the required parameters for the sampling drivers used. The signal nonstationarity issue becomes the primary consideration and analysis of the expected signal behaviour has to be carried out to determine the requirements for the designs of the sampling driver, including the required mean sampling rate.


Figure 1.5 A typical block of signal sample values taken from a signal nonuniformly

1.4.3 Nonuniform Sampling Events

The crucial issue of timing nonequidistant sampling events will be mentioned in order to show some typical problems for nonuniform sampling and possible approaches to their resolution. A part of a signal sample value sequence belonging to the signal shown above is given in Figure 1.5. It can be seen that the intervals between the sampling instants vary. This represents a problem as the exact positions of all signal samples on the time axis have to be fixed, in addition to providing the related sample values. In general, the digital signals formed in the case of nonuniform sampling should contain twice as many bits as the digital signals obtained in the case of periodic sampling. The necessity to measure the sampling time instants and to spend twice as many bits on the digital description of a signal is a burden in itself, but it is especially worrisome in the light of the additional computational complexity it causes.

However, more detailed consideration of this situation reveals that there is a much better approach to this problem; in fact it is possible to avoid doubling data volumes that have to be processed when sampling is performed nonuniformly. A special approach to realization of nonuniform sampling has to be used for that.

In principle, there are two options: measuring each sampling instant digitally or performing the sampling operation at predetermined time instants. The first option, measuring digitally the instant at which each signal sample has been taken, is a quite demanding engineering task, especially if the required time resolution is taken into account. Indeed, the period of a 1 GHz signal, for example, is 1 nanosecond. To sample such a signal, the smallest time digit obviously has to be equal to a few picoseconds.
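
A rough slope argument (a standard bound rather than a result specific to this chapter) shows why picosecond resolution is needed: the error caused by a timing error \delta t when sampling a sinusoid of amplitude A and frequency f is at worst

|\Delta x| \le 2\pi f A\, |\delta t|,

so at f = 1 GHz even \delta t = 1 ps corresponds to a worst-case error of about 0.6 % of the amplitude.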


Figure 1.6 Impact of sampling jitter on spectral analysis of signals: (a) estimated amplitude spectrum of a signal in the case when the mean square value of the jitter is equal to 5 ps; (b) the spectrogram of the same signal obtained in the case when the mean square value of the jitter is 20 ps

The second option is much better. To realize it, the required sampling point process has to be generated and the instants at which the signal sample values have to be taken need to be memorized. This information is then used both for driving the sampler and for digital processing of the sampled signal. The data indicating the exact instant of each sampling event are kept in the memory. Therefore only one digital number per sample value taken appears at the output of the digitizer performing the sampling operation in this case. As this second approach to timing nonuniform sampling events is much easier to realize, it is now almost always used.
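
A minimal sketch of this second option is given below, assuming additive pseudorandom sampling on a fine timing grid as one possible way of generating the sampling point process (the mean interval, grid step and spread are illustrative figures). The instants are generated and stored in advance, so the same data can drive the sampler and later be reused by the processing algorithms without having to be measured:

import numpy as np

def sampling_point_process(n_points, mean_interval_ticks, spread_ticks, seed=0):
    # Additive pseudorandom sampling: each interval is the mean interval plus a
    # pseudorandom number of timing-grid ticks; the instants are known exactly.
    rng = np.random.default_rng(seed)
    increments = mean_interval_ticks + rng.integers(-spread_ticks, spread_ticks + 1, n_points)
    return np.cumsum(increments)

# 10 ns mean interval on a 100 ps timing grid (illustrative figures only).
tick = 100e-12
instants_ticks = sampling_point_process(1024, mean_interval_ticks=100, spread_ticks=40)
t = instants_ticks * tick      # the same instants, in seconds, stored for later use
                               # both by the sampler driver and by the processing algorithms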

According to this approach, the signal sample values are to be taken at exactly predetermined time instants. However, in reality there is always some discrepancy between the dictated and the actual sampling instants. In other words, the sampling instants jitter. Apparently this jittering has to be kept within certain margins. The impact of sampling instant jittering depends on the specific signal processing taking place. This is illustrated by the spectrograms given in Figure 1.6. It can be seen that in this particular case stronger jitter leads to a significantly increased noise floor in the spectrograms.

The impact of sampling jittering is potentially even more damaging than indicated above. Not only does the average noise level increase as a result of such jittering but it might also lead to peaks at some spurious frequencies.

Imperfections of wideband signal sampling timing are measured in picoseconds. This indicates the scale of the engineering problems that have to be resolved in order to achieve high performance of digitizers based on nonuniform sampling of signals. They are quite serious and the required quality of the involved electronic designs is rather high. It is almost impossible to predict the behaviour of this kind of device theoretically. That is especially true in cases where these devices have to operate in a wide temperature range. Therefore, to obtain the data characterizing the expected performance of the digitizers, their performance has to be studied experimentally. That has been done. The obtained results and the engineering experience accumulated in this area confirm the feasibility of using such nonuniform sampling in a rather wide frequency range. The upper frequencies of signals digitally processed in this way, at the time of writing this book, might reach the level of a few GHz.

1.5 Remarks in Conclusion

To conclude this first introductory chapter, some remarks will be made to summarize the basic message.

Although the quality of a modern ADC is high, signal digitization nevertheless often represents the weakest link in the chain of successive signal conversions from data acquisition to the performance of the required digital transformations. In fact, digital signals, once obtained, can be processed over a much wider frequency range than that over which analog signals can currently be digitized. Therefore, digitizing actually represents the potential DSP bottleneck. This is true not only because the digitization processes determine the ultimately achievable speed of signal processing. The signal sampling and quantizing operations also impact on the quality of signal processing and this impact is much stronger than is usually realized. These facts lead to the conclusion that digitizing signals deserves much more attention than it usually receives. Although there is steady progress leading to the production of better and better microelectronic devices for signal digitizing, progress is relatively slow and costly as it is based mainly on improvements of the involved semiconductor manufacturing technologies.

Another approach to the problem of widening the domain where signals are fully processed digitally is described in this book. It is based on application of the specific DASP technology and exploits its typical advantages including elimination of aliasing. While in many DSP application cases the simplest approach to digitizing provides results that are good enough, in cases where the signals to be handled digitally have extreme parameters, either in the frequency, time or spatial domains, the necessity to ensure sufficient flexibility of digitizing becomes crucial. Providing this flexibility is a cornerstone of the DASP methodology. The DASP hardware and software tools form the basis for advanced flexibly adaptable digital representation of signals and matched algorithms for their processing. This technology targets the execution of digitizing in the best possible way in a rather broad frequency range. Specific nontraditional techniques have to be and often are used to achieve the required functional and parametric capabilities. Both the similarities and differences between the generic DSP and special DASP techniques are discussed in the following chapters, with emphasis on the potential benefits often obtained by correct application of the suggested techniques.

However, widening of the digital domain does not exhaust the potential benefits obtained by skilful use of DASP. In general, it is recommended that application of these techniques be considered under conditions where classical DSP is inapplicable and/or where it is essential to make signal processing more cost efficient, with cost understood in terms of money, hardware volume, weight and processing time. Application of DASP, when realized correctly, should lead to the replacement of complicated microwave circuitry by substantially simpler medium-frequency microelectronics and to the substitution of analog signal processing blocks by digital ones. That translates into simplification of designs and, in turn, into manufacturing cost reductions, as such devices are typically considerably simpler and could be built on the basis of much cheaper microelectronic chips. Various secondary benefits might be achieved as well, such as data compression or reduction of the bit flow to be processed, decorrelation of signals and their processing errors, elimination of various systematic errors, widening of the dynamic range achieved by taking out spurious frequencies, etc.

The signal dynamics (nonstationarity, the speed of signal parameter variations), rather than the upper frequency present in the signal spectrum, serves as the criterion limiting application of this technology in specific cases. Correct application of DASP usually requires that a certain number of signal sample values, needed to resolve the given signal processing task, be taken within a given period of time. When the signal parameters vary slowly in time, the mean sampling rate might be relatively low regardless of the highest frequency present in the signal spectrum. On the other hand, if the signal parameters vary rapidly, the applicability of nonuniform sampling in that particular case has to be checked.
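As a rough illustration of this point, the sketch below (an illustrative assumption, not an algorithm taken from this book) shows that with randomized, nonuniform sampling a slowly varying sinusoid whose frequency lies well above the mean sampling rate can still be located without aliasing, provided enough samples are gathered over the observation interval:

```python
# Illustrative sketch: additive random sampling at a mean rate far below the
# signal frequency, followed by a direct (DFT-like) amplitude estimate over a
# frequency grid. With nonuniform sampling instants the true tone dominates
# the estimate instead of being folded onto an alias frequency.
import numpy as np

rng = np.random.default_rng(1)

f0 = 870.0            # signal frequency, Hz (well above the mean sampling rate)
mean_rate = 200.0     # mean sampling rate, Hz
n_samples = 2000

# Additive random sampling: each sampling interval is drawn around the mean period.
intervals = rng.uniform(0.5, 1.5, n_samples) / mean_rate
t = np.cumsum(intervals)
x = np.cos(2 * np.pi * f0 * t + 0.7)

freqs = np.arange(1.0, 1000.0, 1.0)
amp = np.array([2.0 * abs(np.mean(x * np.exp(-2j * np.pi * f * t))) for f in freqs])

print("estimated frequency:", freqs[np.argmax(amp)], "Hz")   # close to 870 Hz
```

The mean sampling rate here is only 200 Hz, yet the estimate peaks near the true 870 Hz tone; what matters is the total number of samples collected over the observation interval, not the relation between the mean rate and the highest signal frequency.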

One of the typical DASP drawbacks is the need for special algorithms. Unfortunately, the wealth of algorithms and computer programs developed for DSP more often than not cannot be used directly for DASP; usually special and more complicated algorithms have to be developed and used. In addition, not all DASP applications perform as well as comparable DSP applications in the lower frequency range. These are the principal limitations; other limitations will probably be eliminated sometime in the future, when research finds more effective methods and algorithms.

It might seem that there are some contradictions between DASP and DSP. For instance, processing of a signal sampled at a mean rate lower than the upper frequency of its spectrum, considered normal in the case of DASP, is simply excluded as impossible in the case of DSP. In fact there are no contradictions: the theory of DASP supplements rather than contradicts the theory and practice of generic DSP. The cornerstone of DASP is the attitude towards signal digitization. Both basic operations of digitization, sampling as well as quantizing, are considered to be of prime importance, and adapting them to the conditions of signal processing helps in obtaining better results. However, little can be done to vary the classic equidistant sampling and fixed-threshold quantizing techniques. As such rigid digitizing more often than not cannot be adapted to the specifics of DSP, more flexible methods for executing the basic digitizing operations are needed. Randomization of these operations proves to be an effective instrument for making them flexible. The algorithms used for processing signals digitized in this special way naturally have to be matched to the specifics of the digitization.
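The same idea applied to the quantizing operation can be illustrated by the following sketch (illustrative values only, not a design from this book): adding pseudo-random dither in front of a deliberately coarse quantizer turns the systematic, signal-dependent quantization error into noise-like error that averages out:

```python
# Illustrative sketch of randomized (dithered) quantizing. A fixed-threshold
# quantizer maps a constant input to the nearest level, so averaging many
# readings cannot reduce the error. With uniform dither of one quantization
# step added before quantizing, the average of the readings converges to the
# true value.
import numpy as np

rng = np.random.default_rng(2)

q = 0.25                     # quantization step (deliberately coarse)
x = 0.618                    # constant input lying between two levels
n = 10000

def quantize(v):
    return q * np.round(v / q)

plain = quantize(np.full(n, x))                         # fixed-threshold quantizing
dithered = quantize(x + rng.uniform(-q / 2, q / 2, n))  # randomized quantizing

print("true value         :", x)
print("plain, averaged    :", plain.mean())     # stuck at the nearest level (0.5)
print("dithered, averaged :", dithered.mean())  # close to 0.618
```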

Thus randomization is crucial for DASP. An introduction to this specific way of digitizing is given in the following chapter. It is shown there that the idea is not, in fact, original: there have been many earlier attempts to use this approach in order to achieve specific benefits, and the early experience gained in this field is described. However, there is a problem. Randomization of digitizing leads to both good and bad consequences, and the difficulty is how to benefit from such deliberate randomization while at the same time avoiding the degradation of signal processing quality that it causes. This is not easy to achieve, and a really in-depth understanding of the processes involved is essential in order to resolve this problem. In fact, the full story of this book is about just that.

