Chapter 10
Nonlinear Processing

M. Holters and L. Köper

10.1 Fundamentals

Linear systems, that is, filters, play an important role in audio signal processing. Linearity here means that the superposition property holds: denote the output of a system excited with input signals $x_1(n)$ and $x_2(n)$ with $y_1(n)$ and $y_2(n)$, respectively. If the system maps the superposition of the input signals $x(n) = a_1 x_1(n) + a_2 x_2(n)$ to the corresponding superposition $y(n) = a_1 y_1(n) + a_2 y_2(n)$ at the output (for any $x_1, x_2, a_1, a_2$), it is considered linear. Systems that do not fulfill this property are accordingly called nonlinear. One important consequence of linearity, when combined with time invariance, is that such systems can be fully described in the frequency domain by their transfer function. Therefore, frequency components present in the input signal can be amplified or attenuated and phase-shifted, but no new components can appear in the output. This is not the case for nonlinear systems, where in particular components at multiples of the input frequency (harmonics) can appear. Figure 10.1 visualizes this important difference.

We have already seen nonlinear systems, namely the dynamic range controllers of Chapter 8. However, when only considering short time spans (in relation to the attack and release times), they can be approximated by simply scaling the signal, a linear operation. This chapter will instead focus on nonlinear systems where such an approximation is not fruitful.

An important distinction between nonlinear systems is whether they only act on the current input signal value (static, memoryless, or stateless system) or also contain an internal state (dynamic or stateful system). The latter can exhibit arbitrary behavior, while the former are more restricted and allow some general observations. We therefore first focus on static nonlinear systems.

A static nonlinear system can be described by a single function f mapping input to output by

(10.1)  $y(n) = f(x(n)).$

Figure 10.1 Comparison of a linear system and a nonlinear system when excited with a single sinusoid.

Now obviously, if $x(n)$ is periodic with period $N_0$, then so is $y(n)$. In particular, consider a sinusoidal input $x(n) = \sin(\Omega_0 n)$ of frequency $\Omega_0 = 2\pi/N_0$. Because the output $y(n)$ then also has period $N_0$, Fourier theory tells us that it can be decomposed as

(10.2)  $y(n) = A_0 + A_1 \sin(\Omega_0 n - \varphi_1) + A_2 \sin(2\Omega_0 n - \varphi_2) + A_3 \sin(3\Omega_0 n - \varphi_3) + \cdots,$

i.e. it consists of sinusoids at multiples of the input frequency $\Omega_0$. While the DC component $A_0$ is usually ignored, the component at $\Omega_0$ is referred to as the fundamental and the remaining components are called its harmonics.

In contexts where a linear system is desired, the effect of a nonlinearity is therefore also called harmonic distortion. It is commonly measured by the total harmonic distortion (THD)

(10.3)  $\mathrm{THD} = \sqrt{\dfrac{A_2^2 + A_3^2 + \cdots}{A_1^2 + A_2^2 + A_3^2 + \cdots}},$

the ratio of the combined power of all harmonics to the total signal power (without the DC component). Usually, the THD is given in dB. While it quantifies the deviation from linearity for systems that should ideally be linear, it is of little use to characterize systems which are intentionally nonlinear, as the relative levels of the different harmonics have a strong influence on the resulting timbre.
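As a small illustration, the following Python sketch estimates the THD of Eq. (10.3) by reading the harmonic amplitudes off an FFT of a recorded signal; the function name, the window, and the number of harmonics considered are arbitrary choices of this sketch, not part of any standard.

```python
import numpy as np

def thd(x, fs, f0, num_harmonics=10):
    # estimate harmonic amplitudes A_1 ... A_K from a windowed FFT
    window = np.hanning(len(x))
    spectrum = np.abs(np.fft.rfft(x * window))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    A = np.array([spectrum[np.argmin(np.abs(freqs - k * f0))]
                  for k in range(1, num_harmonics + 1)])
    # Eq. (10.3): combined power of the harmonics over the total power (without DC)
    return np.sqrt(np.sum(A[1:] ** 2) / np.sum(A ** 2))

# example: a hard-clipped 441 Hz sine at fs = 44.1 kHz
fs, f0 = 44100, 441.0
n = np.arange(fs)
y = np.clip(1.5 * np.sin(2 * np.pi * f0 * n / fs), -1.0, 1.0)
print(20 * np.log10(thd(y, fs, f0)), "dB")
```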

Typical audio signals of interest comprise more than a single sinusoid. Single notes of musical instruments themselves consist of a fundamental and its harmonics. Any new components introduced by a static nonlinear system will still be harmonics, so the tonal quality remains; the appearance of new harmonics or the amplification of existing ones will only make the sound brighter or even harsher. For input signals containing more than a single sinusoid and its harmonics, however, additional components appear, spaced by the differences of the input signal frequencies. An example is given in Fig. 10.2, where the same static nonlinear system is excited with a single sinusoid at 261.63 Hz (C4, top), two sinusoids at 261.63 Hz and 392 Hz (C4 and G4, middle), and three sinusoids at 261.63 Hz, 329.63 Hz, and 392 Hz (C4, E4, and G4, i.e. a C major chord, bottom). As is apparent, additional components in the input signal quickly yield a very dense output spectrum. For that reason, intentional nonlinear processing is usually limited to single notes or chords with very few notes (e.g. power chords on an electric guitar).


Figure 10.2 Effect of a static nonlinearity on a single sinusoid, two sinusoids, and three sinusoids.
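To make this concrete, the following rough sketch passes one and two sinusoids through a generic saturating nonlinearity (tanh is used purely as a stand-in) and counts how many FFT bins carry significant energy; the drive factor and the threshold are arbitrary assumptions of the example.

```python
import numpy as np

fs = 44100
n = np.arange(fs)
c4 = np.sin(2 * np.pi * 261.63 * n / fs)
g4 = np.sin(2 * np.pi * 392.00 * n / fs)

for name, x in [("C4 alone", c4), ("C4 + G4", c4 + g4)]:
    y = np.tanh(3.0 * x)                            # stand-in static nonlinearity
    mag = np.abs(np.fft.rfft(y * np.hanning(len(y)))) / len(y)
    significant = np.sum(mag > 1e-4)                # rough count of occupied bins
    print(f"{name}: {significant} significant bins")
```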

10.2 Overdrive, Distortion, Clipping

Overdrive, distortion, and clipping effects are heavily used in many kinds of musical devices. The terms overdrive and distortion are by no means exactly defined. However, most musicians will agree on describing overdrive as a softly saturating amplification of low-level signals, which results in an operating point covering the linear as well as the nonlinear regions of the characteristic curve. A distortion effect, in contrast, is mainly operated in its nonlinear region. This leads to a harsher sound with a lot of harmonic frequency content. To design and analyze such effects, an understanding of the underlying nonlinear processing is crucial. A very basic clipping circuit can be obtained by connecting antiparallel diodes to the signal path. Figure 10.3 illustrates a soft-clipping lowpass filter using only two additional diodes.


Figure 10.3 First‐order diode clipper.

Disregarding the diodes, the transfer function of the remaining linear lowpass filter is simply found to be

(10.4)  $H(s) = \dfrac{1}{1 + RCs}$

with the corresponding differential equation

(10.5)  $\dot{y}(t) = \dfrac{1}{RC}\,\bigl(x(t) - y(t)\bigr).$

However, including the nonlinearity introduced by the diodes in the transfer function is not straightforward. The nonlinear relation between the voltage over and the current through a diode is given by Shockley's law

(10.6)  $I_d = I_s \left( e^{V_d/(\eta v_t)} - 1 \right)$

with $I_d$ as the current through and $V_d$ the voltage over the diode, $I_s$ as the reverse saturation current, $v_t$ as the thermal voltage, and $\eta$ as the ideality factor. Applying Kirchhoff's voltage and current laws to the circuit from Fig. 10.3 and assuming identical diodes, a first-order nonlinear differential equation,

(10.7)  $\dot{y}(t) = \dfrac{1}{RC}\,\bigl(x(t) - y(t)\bigr) - \dfrac{2 I_s}{C} \sinh\!\left(\dfrac{y(t)}{\eta v_t}\right),$

can be derived.

Although this equation describes the system in its entirety, implementing it in a digital processing unit is not straightforward, because the nonlinear differential equation needs to be solved numerically for each and every time step. This computational effort increases further for more complicated systems. Hence, the question arises whether the system can somehow be separated into a linear and a nonlinear part. Looking at the system from Fig. 10.3, the most intuitive idea is to first apply a linear lowpass filter and afterwards a nonlinear mapping function. Figure 10.4 shows such a system. The discrete input signal $x(n)$ is filtered by a linear lowpass filter and afterwards fed into the nonlinearity. Note that this separation into a linear stateful filter and a static nonlinear mapping function does not perfectly recreate the dynamic behavior of nonlinear stateful filters such as in Eq. (10.7). However, many nonlinear systems can be sufficiently approximated by approaches based on Fig. 10.4.


Figure 10.4 Combination of linear filtering and nonlinear static mapping.
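A minimal sketch of the structure of Fig. 10.4 is given below: the lowpass of Eq. (10.4) is discretized with the bilinear transform and followed by a static saturating function. The component values and the use of tanh as a stand-in for the diode characteristic are assumptions made purely for illustration.

```python
import numpy as np
from scipy.signal import bilinear, lfilter

def lowpass_then_clip(x, fs, R=2.2e3, C=10e-9):
    # H(s) = 1 / (1 + RCs), Eq. (10.4), discretized via the bilinear transform
    b, a = bilinear([1.0], [R * C, 1.0], fs=fs)
    v = lfilter(b, a, x)        # linear, stateful part
    return np.tanh(v)           # static nonlinear mapping f

fs = 44100
n = np.arange(fs)
x = 2.0 * np.sin(2 * np.pi * 220.0 * n / fs)   # overdriven input sine
y = lowpass_then_clip(x, fs)
```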

As shown for the simple example in Fig. 10.3, nonlinear processing of such dynamic systems is very tedious. Hence, we will focus on the use of static nonlinear mapping functions to create overdrive and distortion effects. A simple soft-clipping characteristic curve [Sche80] is given by the equation

(10.8)  $f(x) = \begin{cases} 2x & \text{for } 0 \le x < \tfrac{1}{3}, \\[2pt] \dfrac{3 - (2 - 3x)^2}{3} & \text{for } \tfrac{1}{3} \le x < \tfrac{2}{3}, \\[2pt] 1 & \text{for } \tfrac{2}{3} \le x \le 1. \end{cases}$

A corresponding hard-clipping characteristic curve

(10.9)  $f(x) = \begin{cases} 2x & \text{for } 0 \le x < \tfrac{1}{2}, \\[2pt] 1 & \text{for } \tfrac{1}{2} \le x \le 1 \end{cases}$

can be obtained by removing the quadratic term from the soft-clipping equation. Figure 10.5 shows both the soft- and hard-clipping mapping functions corresponding to Eq. (10.8) and Eq. (10.9) in symmetrical application, $f(x) = -f(-x)$ for $x < 0$. The output signal of such a system given a sinusoidal input of 1 kHz and an amplitude of 0.8 V is depicted in Fig. 10.6. Applying such symmetrical nonlinear functions to a sinusoidal input signal results in an output signal which contains only odd harmonics of the original signal. Furthermore, hard-clipping nonlinearities will produce a higher harmonic frequency content compared with the smoother soft-clipping nonlinearity. However, if the characteristic curve is asymmetric, e.g. Eq. (10.8) with $f(x) = x$ for $x < 0$, the output signal will contain both even and odd harmonics. Stateful nonlinear filters might also add non-harmonic frequency content to the signal. This is, for example, the case for self-oscillating systems.


Figure 10.5 Static characteristic curve of a symmetrical soft‐clipping (left) and hard clipping (right) nonlinearity.


Figure 10.6 Output signal for sinusoidal input with soft‐clipping (left) and hard clipping (right) nonlinearity.
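The two characteristic curves of Eq. (10.8) and Eq. (10.9), extended symmetrically as in Fig. 10.5, can be sketched in a few lines of Python:

```python
import numpy as np

def soft_clip(x):
    # Eq. (10.8), applied symmetrically via f(x) = -f(-x)
    s, a = np.sign(x), np.abs(x)
    y = np.where(a < 1/3, 2 * a,
        np.where(a < 2/3, (3 - (2 - 3 * a) ** 2) / 3, 1.0))
    return s * y

def hard_clip(x):
    # Eq. (10.9): gain of two, limited to [-1, 1]
    return np.clip(2 * np.asarray(x, dtype=float), -1.0, 1.0)

n = np.arange(441)
x = 0.8 * np.sin(2 * np.pi * 1000.0 * n / 44100.0)   # 1 kHz sine, amplitude 0.8
y_soft, y_hard = soft_clip(x), hard_clip(x)
```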

Another nonlinear processing technique similar to clipping is called wavefolding. It also uses a static nonlinear mapping function, but instead of saturating the input signal, parts of the input signal are folded back onto themselves. The name wavefolding has its origin in the early days of electronic sound synthesis: a wavefolder takes a sine or cosine wave as an input and produces an output wave with high harmonic frequency content [Roa79]. The wavefolding process is depicted in Fig. 10.7. Using the upper linear mapping function, the signal can be inverted. Applying this inversion only to certain parts of the input signal results in an output signal similar to that shown at the bottom of Fig. 10.7. The resulting output signal has a high amount of harmonic frequency content. Because the input signal to such wavefolders is typically a sinusoid with no harmonic frequency content, this technique is often used for additive synthesis.


Figure 10.7 Wavefolding using nonlinear static mapping functions.
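A minimal single-stage wavefolder in the spirit of Fig. 10.7 is sketched below; the folding threshold and the use of a single reflection stage are simplifying assumptions (analog wavefolders typically cascade several such stages).

```python
import numpy as np

def wavefold(x, threshold=0.5):
    # reflect any portion of the signal that exceeds +/- threshold
    x = np.asarray(x, dtype=float)
    over = np.abs(x) > threshold
    return np.where(over, np.sign(x) * (2 * threshold - np.abs(x)), x)

n = np.arange(441)
x = np.sin(2 * np.pi * 1000.0 * n / 44100.0)
y = wavefold(1.5 * x)     # drive the sine into the folding region
```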

10.3 Nonlinear Filters

Describing nonlinear systems as a combination of linear filters and static nonlinear mapping functions is sufficient for many applications. However, sometimes it is necessary to implement the nonlinearities directly into the filter, which results in a stateful nonlinear filter. For example, by adding saturating nonlinearities, a digital filter can be constructed that has a more ‘analog’‐like sound [Ros92]. The stability of these nonlinear systems cannot be as easily determined as for linear systems. We have to take into account the fact that the pole locations are now dependent on the state of the filter. Therefore, stability analysis will make use of the instantaneous pole locations of the filter. We will investigate a stateful nonlinear filter by looking at a second‐order filter [Cho20], as depicted in Fig. 10.8. The difference equation for the output can be directly constructed from the block diagram with

(10.10)  $y(n) = b_0 u(n) + b_1 u(n-1) + b_2 u(n-2) - a_1 y(n-1) - a_2 y(n-2).$

Transforming the difference equation into the frequency domain, the transfer function

(10.11)  $H(z) = \dfrac{b_0 z^2 + b_1 z + b_2}{z^2 + a_1 z + a_2}$

is obtained. This leads to the pole locations

(10.12)  $p_{1,2} = -\dfrac{a_1}{2} \pm \sqrt{\dfrac{a_1^2}{4} - a_2}.$

For further analysis, it is helpful to bring the system into a state‐space representation. Therefore we define the states to be the outputs of the delay blocks. This leads to the discrete‐time state‐space system:

(10.13)  $\begin{pmatrix} x_1(n+1) \\ x_2(n+1) \end{pmatrix} = \begin{pmatrix} -a_1 & 1 \\ -a_2 & 0 \end{pmatrix} \begin{pmatrix} x_1(n) \\ x_2(n) \end{pmatrix} + \begin{pmatrix} b_1 - a_1 b_0 \\ b_2 - a_2 b_0 \end{pmatrix} u(n), \qquad y(n) = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} x_1(n) \\ x_2(n) \end{pmatrix} + b_0 u(n).$

Note that the stability of this linear system requires all eigenvalues of the system matrix to have a magnitude smaller than one. A nonlinear version of this filter can be achieved by inserting nonlinear mapping functions, as mentioned in Section 10.2, into the filter. The nonlinear blocks are put directly behind the delay blocks. The output equation changes to

(10.14)  $y(n) = b_0 u(n) + f\bigl(b_1 u(n-1) - a_1 y(n-1) + f(b_2 u(n-2) - a_2 y(n-2))\bigr).$

The nonlinear state‐space model results in

(10.15)  $\begin{pmatrix} x_1(n+1) \\ x_2(n+1) \end{pmatrix} = h\!\left(\begin{pmatrix} x_1(n) \\ x_2(n) \end{pmatrix}\right) + \begin{pmatrix} b_1 - a_1 b_0 \\ b_2 - a_2 b_0 \end{pmatrix} u(n), \qquad y(n) = f(x_1(n)) + b_0 u(n)$

with

(10.16)  $h_1(x_1(n), x_2(n)) = -a_1 f(x_1(n)) + f(x_2(n)),$
(10.17)  $h_2(x_1(n), x_2(n)) = -a_2 f(x_1(n)).$

Because the poles of this nonlinear system depend on the state of the system, stability can be analyzed by looking at the instantaneous pole locations of the filter. Stability can be assured if all possible instantaneous pole locations have a magnitude strictly smaller than one. Therefore, the Lyapunov stability [Che04] of the system can be analyzed. Note that Lyapunov stability is more restrictive than BIBO stability, because a Lyapunov-stable system will only have poles inside the unit circle at any given point in time. A system is considered Lyapunov stable if the eigenvalues of the Jacobian of the discrete-time system matrix have magnitudes strictly smaller than one. The Jacobian of the nonlinear state-space system yields

(10.18)  $J = \begin{pmatrix} -a_1 f'(x_1(n)) & f'(x_2(n)) \\ -a_2 f'(x_1(n)) & 0 \end{pmatrix}.$

Figure 10.8 Linear (left) and nonlinear (right) second‐order filter.

Consequently, there are two restrictions for the system to be stable. The backward coefficients have the same restrictions as for the linear system, namely $a_1 + a_2 > -1$ and $a_1 - a_2 < 1$ for real poles and $a_1 < \sqrt{2(a_2 + 1)}$ for complex conjugate poles. Furthermore, the first derivative of the nonlinear mapping function must not exceed a value of one at any operating point. Many saturating nonlinear functions fulfill this requirement; for example, a hyperbolic tangent, as in Fig. 10.9, can be used.


Figure 10.9 Hyperbolic tangent mapping function.

To illustrate the characteristics of such nonlinear filters, we take the second-order filter from Fig. 10.8 as an example. The filter coefficients are set to $a_1 = -1.89$, $a_2 = 0.9$, $b_0 = b_2 = 0.02$, and $b_1 = 0.04$, which results in a lowpass configuration of the filter. The frequency response of the linear filter can be seen in Fig. 10.10. In this example, we use a linear sine sweep with amplitude 1 and a frequency range from 50 Hz to 4000 Hz as the input signal to the filters. The output signal over time and frequency can be seen in the waterfall representations shown in Fig. 10.11 and Fig. 10.12. The lowpass behavior of the filters can be observed in both figures, because the higher frequencies of the sine sweep receive a higher attenuation. The nonlinear filter additionally produces filtered harmonics of the input signal. This feature can be used to create sounds with a more analog-like feel, such as saturated transistor or tube stages.
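A direct per-sample implementation of Eq. (10.14) with the coefficients above and a tanh saturator might look as follows; this is only a sketch, and passing the identity as f recovers the linear filter of Eq. (10.10).

```python
import numpy as np

def nonlinear_biquad(u, b0, b1, b2, a1, a2, f=np.tanh):
    # Eq. (10.14): second-order filter with f placed after each delay element
    y = np.zeros(len(u))
    for n in range(len(u)):
        u1 = u[n - 1] if n >= 1 else 0.0
        u2 = u[n - 2] if n >= 2 else 0.0
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y[n] = b0 * u[n] + f(b1 * u1 - a1 * y1 + f(b2 * u2 - a2 * y2))
    return y

fs = 44100
n = np.arange(fs)
u = np.sin(2 * np.pi * 220.0 * n / fs)
y = nonlinear_biquad(u, b0=0.02, b1=0.04, b2=0.02, a1=-1.89, a2=0.9)
```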


Figure 10.10 Frequency response of linear second‐order filter.


Figure 10.11 Waterfall presentation of a linear filtered sinesweep.


Figure 10.12 Waterfall presentation of a nonlinear filtered sinesweep.

10.4 Aliasing and its Mitigation

So far, we have ignored one important aspect of nonlinear processing: the newly introduced signal components, especially higher harmonics, may exceed the Nyquist limit at half the sampling frequency. This will result in aliasing distortion, which is typically undesired. We may first observe that when considering a continuous-time input signal $\bar{x}(t)$, the order of sampling and applying a static nonlinear mapping $f$ may be reversed: first applying $f$ gives $\bar{y}(t) = f(\bar{x}(t))$, which, after sampling with sampling rate $f_\mathrm{s}$, results in $y(n) = \bar{y}(n/f_\mathrm{s}) = f(\bar{x}(n/f_\mathrm{s}))$. However, by first sampling, we get $x(n) = \bar{x}(n/f_\mathrm{s})$, which is then mapped to the same $y(n) = f(x(n)) = f(\bar{x}(n/f_\mathrm{s}))$. This holds true even if $\bar{y}(t)$ is not band-limited to $f_\mathrm{s}/2$.

As an extreme case, we consider the mapping function $f(x) = \operatorname{sgn}(x)$, which corresponds to applying an infinite gain and then hard clipping to the range $[-1, 1]$. Applied to a sinusoidal input, the output becomes a square wave. Figure 10.13a depicts the corresponding spectrum for an input frequency of 1318.5 Hz (the note E6). As the harmonics only decay slowly with frequency, their level above the Nyquist limit of 22.05 kHz (marked by a vertical line) for the common sampling rate of $f_\mathrm{s} = 44.1$ kHz is still significant. Consequently, the sampled signal of Fig. 10.13b contains many strong aliased components. These will be audible as both a noise floor and inharmonic tones.


Figure 10.13 Spectra of (a) continuous-time signal $\bar{y}(t) = \operatorname{sgn}(\sin(2\pi \cdot 1318.5\,\mathrm{Hz} \cdot t))$ with a marker at the Nyquist frequency 22.05 kHz and (b) sampled signal $y(n) = \bar{y}(n/44.1\,\mathrm{kHz})$.

The conceptually simplest approach to reducing this aliasing distortion is to increase the sampling rate. Typically, the input signal will be upsampled before and downsampled to the original sampling rate after the nonlinearity, where the resampling includes appropriate interpolation and decimation filters. A corresponding system is shown in Fig. 10.14. The effectiveness of this method depends on how fast the harmonics decay with frequency and, obviously, on the oversampling factor $L$. For the example above, the spectra obtained by oversampling with different factors $L$ are shown in Fig. 10.15.


Figure 10.14 Operation of a nonlinear system at a sampling frequency increased by factor $L$, where $H_\mathrm{I}(z)$ and $H_\mathrm{D}(z)$ denote the interpolation and decimation filter, respectively.
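The structure of Fig. 10.14 can be sketched with scipy's polyphase resampler, which supplies the interpolation and decimation filters $H_\mathrm{I}(z)$ and $H_\mathrm{D}(z)$ internally; the choice of $L$ and of the nonlinearity below are assumptions of the example.

```python
import numpy as np
from scipy.signal import resample_poly

def oversampled_nonlinearity(x, f, L=8):
    x_up = resample_poly(x, up=L, down=1)     # upsampling incl. interpolation filter
    y_up = f(x_up)                            # nonlinearity at the higher rate
    return resample_poly(y_up, up=1, down=L)  # decimation filter and downsampling

fs = 44100
n = np.arange(fs)
x = np.sin(2 * np.pi * 1318.5 * n / fs)       # E6, as in Fig. 10.13
y = oversampled_nonlinearity(x, f=np.sign, L=8)
```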


Figure 10.15 Spectra of $y(n) = \operatorname{sgn}(\sin(2\pi \cdot 1318.5\,\mathrm{Hz} \cdot n/(L \cdot 44.1\,\mathrm{kHz})))$ for different values of $L$.

It is clearly visible that with increasing $L$, the aliasing distortion is reduced. For less extreme nonlinear systems, the harmonics typically decay faster and the effect of oversampling will be even more pronounced.

However, owing to increasing computational costs, the oversampling factor usually cannot be made arbitrarily large. Therefore, various other strategies to reduce the aliasing distortion have been developed, e.g. [Esq15, Esq16a, Esq16b, Mul17]. In the case of a memoryless nonlinear system, an attractive approach is based on approximating a continuous‐time system [Par16, Bil17a, Bil17b], which will be explained in the following. Perfect aliasing suppression can be obtained by carrying out the processing in the continuous‐time domain, i.e. by converting the input signal to its continuous representation, applying the nonlinearity to it, and then sampling it after appropriate lowpass filtering. While theoretically perfect, this is clearly impractical. However, by crude approximation of this process, we can obtain a practical implementation that can considerably lower the aliasing distortion.

First, we replace the exact continuous‐time representation of the input signal by a piecewise linear approximation:

(10.19)  $\tilde{x}(t) = x(n) + (t f_\mathrm{s} - n) \cdot \bigl(x(n+1) - x(n)\bigr) \quad \text{where } n = \lfloor t f_\mathrm{s} \rfloor.$

To this we apply the nonlinear mapping to obtain

(10.20)  $\tilde{y}(t) = f(\tilde{x}(t)).$

Before sampling this, we need to apply a lowpass to suppress frequency content above the Nyquist limit. In the simplest case, we average over one sampling interval (corresponding to convolution with a rect), i.e.

(10.21)  $y(n) = f_\mathrm{s} \int_{(n-1)/f_\mathrm{s}}^{n/f_\mathrm{s}} \tilde{y}(t)\,\mathrm{d}t = f_\mathrm{s} \int_{(n-1)/f_\mathrm{s}}^{n/f_\mathrm{s}} f(\tilde{x}(t))\,\mathrm{d}t.$

Now we apply integration by substitution and rewrite this as

(10.22)  $y(n) = f_\mathrm{s} \int_{x(n-1)}^{x(n)} f(\tilde{x})\, \frac{\mathrm{d}t}{\mathrm{d}\tilde{x}}\, \mathrm{d}\tilde{x}$

and exploit the fact that, thanks to the piecewise linear approximation,

(10.23)  $\frac{\mathrm{d}t}{\mathrm{d}\tilde{x}} = \left(\frac{\mathrm{d}\tilde{x}}{\mathrm{d}t}\right)^{-1} = \frac{1}{f_\mathrm{s} \cdot \bigl(x(n) - x(n-1)\bigr)}$

is easy to compute. We thus obtain

(10.24)  $y(n) = \frac{1}{x(n) - x(n-1)} \int_{x(n-1)}^{x(n)} f(\tilde{x})\,\mathrm{d}\tilde{x}.$

Finally, by the fundamental theorem of calculus, we may rewrite this as

(10.25)  $y(n) = \frac{F(x(n)) - F(x(n-1))}{x(n) - x(n-1)},$

where $F$ denotes the antiderivative of $f$. The method is therefore also referred to as antiderivative antialiasing. If the antiderivative cannot be derived in closed form, it can be precomputed numerically and tabulated.

Assuming the evaluation of $F$ to be approximately as expensive as the evaluation of $f$, and memorizing $F(x(n))$ to be reused as $F(x(n-1))$ in the next time step, the antialiased system only requires two additional subtractions and one additional division compared with the non-antialiased system. There is one minor complication though: when $x(n) \approx x(n-1)$, the denominator of Eq. (10.25) becomes nearly (or even exactly) zero, which will result in numerical problems. In the limit $x(n) \to x(n-1)$, Eq. (10.25) reduces to $f(x(n))$ or equally $f(x(n-1))$. Thus, one could use either of those in case $x(n) \approx x(n-1)$, but as will be explained momentarily, their mean $\tfrac{1}{2}\bigl(f(x(n)) + f(x(n-1))\bigr)$ is the most consistent choice. To summarize, antiderivative antialiasing is given by

(10.26)  $y(n) = \begin{cases} \tfrac{1}{2}\bigl(f(x(n)) + f(x(n-1))\bigr) & \text{if } x(n) \approx x(n-1), \\[4pt] \dfrac{F(x(n)) - F(x(n-1))}{x(n) - x(n-1)} & \text{otherwise.} \end{cases}$
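A vectorized sketch of Eq. (10.26) for the hard clipper $f(x) = \max(-1, \min(1, x))$, whose antiderivative $F(x) = x^2/2$ for $|x| \le 1$ and $|x| - 1/2$ otherwise is available in closed form, could look as follows; the threshold eps is an arbitrary choice of the sketch.

```python
import numpy as np

def f(v):                     # hard clipper
    return np.clip(v, -1.0, 1.0)

def F(v):                     # antiderivative of the hard clipper
    return np.where(np.abs(v) <= 1.0, 0.5 * v * v, np.abs(v) - 0.5)

def adaa_hard_clip(x, eps=1e-9):
    x = np.asarray(x, dtype=float)
    x1 = np.concatenate(([0.0], x[:-1]))          # x(n-1), zero initial state
    diff = x - x1
    mean = 0.5 * (f(x) + f(x1))                   # branch for x(n) ~ x(n-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        adaa = (F(x) - F(x1)) / diff              # Eq. (10.25)
    return np.where(np.abs(diff) < eps, mean, adaa)
```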

While the approach thus derived is attractive owing to its simplicity, the interpolation and decimation filters are far from ideal brick-wall filters. This will lead to both an unwanted lowpass filtering of the desired signal components and an imperfect suppression of the image spectra. To first analyze the unwanted lowpass filtering, we consider the effect of applying the antiderivative antialiasing to a linear system, choosing in particular $f(x) = x$. With $F(x) = \tfrac{1}{2}x^2$, we find

(10.27)  $\frac{F(x(n)) - F(x(n-1))}{x(n) - x(n-1)} = \frac{\tfrac{1}{2}x(n)^2 - \tfrac{1}{2}x(n-1)^2}{x(n) - x(n-1)} = \frac{\tfrac{1}{2}\bigl(x(n) - x(n-1)\bigr)\bigl(x(n) + x(n-1)\bigr)}{x(n) - x(n-1)} = \tfrac{1}{2}\bigl(x(n) + x(n-1)\bigr),$

which equals our choice for the $x(n) \approx x(n-1)$ case, thereby justifying it. We notice that the antialiasing introduces a half-sample delay and the expected lowpass filtering. Concerning the imperfect suppression of the image spectra, note that the frequency response of the rect filter has zeros at the multiples of the sampling rate $f_\mathrm{s}$, i.e. those components that would be aliased to DC are suppressed perfectly. Components that are aliased to low frequencies still see very high suppression. However, components just above the Nyquist limit are only attenuated by a meager 3 dB, leaving strong aliased components at high frequencies.

These effects can be seen in Fig. 10.16a, where the same example of $f(x) = \operatorname{sgn}(x)$ excited with a sine at 1318.5 Hz as before is now subject to antiderivative antialiasing. Compared with Fig. 10.13b, one clearly sees that the aliasing distortion at low frequencies is greatly reduced, while the strongest components at high frequencies see almost no reduction. At the same time, the desired signal components undergo a slight lowpass filtering, as can be seen from the comparison to their desired levels marked by crosses.


Figure 10.16 Spectra of output obtained with antiderivative antialiasing for different values of $L$, where crosses mark the desired harmonics.

As the antiderivative antialiasing works especially well at low frequencies, it is beneficial to combine it with oversampling to effectively broaden the frequency range of satisfactory aliasing suppression in addition to the aliasing reduction by the oversampling itself. The results obtained from two‐ and four‐times oversampling are shown in Fig. 10.16b and Fig. 10.16c. Compared with the case of oversampling alone (Fig. 10.15a and Fig. 10.15b), the effectiveness of antiderivative antialiasing is obvious.

While the piecewise linear interpolation is required to make the integration by substitution in Eq. (10.22) feasible, alternatives for the decimation filter are possible. In [Par16], a tri (triangular) filter is explored, while [Bil17a, Bil17b] view Eq. (10.25) as a discrete-time approximation of differentiating the antiderivative $F(x)$ and explore the use of higher-order antiderivatives and different differentiation schemes. This makes it possible to strike different balances between computational complexity, aliasing suppression, and alteration of the desired signal components.

Unfortunately, antiderivative antialiasing is only applicable to memoryless nonlinear systems. In [Hol20], an extension to a class of stateful systems is proposed. However, it is limited to systems which can be written in a particular form where all nonlinear functions depend only on a single scalar input. An alternative approach is discussed in [Mul17], where the system output is calculated based not only on the system state, but also on its derivative, which enables an additional smoothing that can reduce aliasing. In particular, it can achieve high suppression of the harmonics around the sampling rate but has little effect on even higher harmonics. It is therefore mainly effective for nonlinear systems with relatively quickly decaying harmonics.

10.5 Virtual Analog Modeling

Constructing nonlinear systems directly in the digital domain is a sound and flexible method. However, oftentimes there is a need to create a digital model of an existing analog circuit. The field of virtual analog modeling provides many different approaches and techniques for creating suitable digital models from analog reference circuits. These approaches can be categorized into three model types: blackbox models, graybox models, and whitebox models. A blackbox model is based solely on the input and output data of the analog circuit. In whitebox modeling approaches, all information regarding the analog circuit is known and can be used; this includes input and output data as well as the voltage and current relations at every point in the circuit. A graybox model lies in between: it has access to more information than just input and output data, but this information is limited nonetheless. In this chapter, we will give a brief introduction to the most commonly used whitebox modeling approaches, i.e. wave digital filters and state-space modeling. Although we will not give a detailed analysis of graybox and blackbox modeling approaches, we will briefly reference some of the most used techniques. A commonly applied graybox modeling approach is the use of Wiener–Hammerstein models. These models divide the overall model into several linear and nonlinear parts. This approach is similar to that shown in Fig. 10.4, where we separated a nonlinear filter into a linear filter with a subsequent static nonlinearity. This and similar graybox modeling approaches are applied successfully in [Kem08, Eic16, Fra13]. In the domain of blackbox modeling, only input and output data are available for the model construction. To model such time-series data, certain artificial neural network structures can be used, the most common one being the recurrent neural network, which is used fruitfully in [Wri19]. In [Par19], a neural network is combined with a whitebox state-space modeling approach.

10.5.1 Wave Digital Filters

Wave digital filters can be used to transform an analog circuit from the Kirchhoff domain to the so-called wave domain. This representation allows efficient and precise modeling of linear as well as nonlinear analog circuits. A circuit element is described in the wave domain by an incident and a reflected wave with a corresponding port resistance. A port is characterized by its port voltage $v_0$ and its port current $i_0$. The corresponding wave variables are constructed as a linear combination of port voltage and current with the port resistance as a parameter. The incident and reflected wave are defined as [Fet86]

(10.28)  $a_0 = v_0 + R_0 i_0, \qquad b_0 = v_0 - R_0 i_0.$

This can be written in matrix notation as

(10.29)  $\begin{pmatrix} a_0 \\ b_0 \end{pmatrix} = \begin{pmatrix} 1 & R_0 \\ 1 & -R_0 \end{pmatrix} \begin{pmatrix} v_0 \\ i_0 \end{pmatrix}.$

Solving Eq. (10.29) for $v_0$ and $i_0$, the port voltage and current can be obtained from the wave variables with

(10.30a)  $v_0 = \frac{a_0 + b_0}{2},$
(10.30b)  $i_0 = \frac{a_0 - b_0}{2 R_0}.$

With these definitions, circuit elements can now be transformed from the Kirchhoff to the wave domain. Rather than covering all possible circuit elements in the wave domain, we restrict ourselves to the most common ones to give a brief overview and a basic understanding. We start with the simple example of transforming a resistor into the wave domain. A resistor is described in the Kirchhoff domain by Ohm's law:

(10.31)  $v_0 = R \cdot i_0.$

Combining this with Eq. (10.30a) and Eq. (10.30b) and solving for the reflected wave $b_0$ yields

(10.32)  $b_0 = \frac{R - R_0}{R + R_0}\, a_0.$

This is the so‐called unadapted form of a resistor in the wave domain. Most elements can be adapted by parameterizing the port resistance upper R 0 to a suitable value. In this case, we can set upper R 0 equals upper R and we obtain b 0 equals 0 as the adapted form of a resistor in the wave domain [Wer16].

Similar to this derivation, we can obtain the wave‐domain representation of a capacitor. Starting with the differential equation

(10.33)  $i_0 = C\, \frac{\mathrm{d} v_0}{\mathrm{d} t},$

we can use Eqs. (10.30a) and (10.30b) to obtain a differential equation depending on the wave variables with

(10.34)  $\frac{\mathrm{d}}{\mathrm{d}t}\bigl(a_0(t) + b_0(t)\bigr) = \frac{1}{R_0 C}\bigl(a_0(t) - b_0(t)\bigr).$

Going further, the continuous-time differential equation needs to be discretized. Many discretization schemes might be used here, all with certain advantages and drawbacks. Here we choose one of the most common ones, the trapezoidal rule:

(10.35)  $y(n) = y(n-1) + \frac{T}{2}\bigl(f(t_n, y(n)) + f(t_{n-1}, y(n-1))\bigr),$

where $y' = f(t, y)$ is the corresponding differential equation and $T$ the sampling interval. After discretization with the trapezoidal rule, the difference equation for the reflected wave in the time domain yields [Wer16]

(10.36)  $b_0(n) = -\frac{T - 2 R_0 C}{T + 2 R_0 C}\, b_0(n-1) + \frac{T - 2 R_0 C}{T + 2 R_0 C}\, a_0(n) + a_0(n-1).$

This unadapted form can be adapted as well by setting the port resistance to $R_0 = \frac{T}{2C}$, which results in the adapted wave-domain representation

(10.37)  $b_0(n) = a_0(n-1).$

Note that instead of applying the trapezoidal rule in the time domain, we could also transform the differential equation to the frequency domain and use the bilinear transform for discretization, which will ultimately lead to the same result.

Other algebraic or reactive elements can be derived similarly. Table 10.1 comprises the wave‐domain representation of the most common linear circuit elements.

Table 10.1 Wave-domain representation of common linear circuit elements

Element | Port resistance | Wave equation
Resistor $R$ | $R_0 = R$ | $b_0 = 0$
Capacitor $C$ | $R_0 = \frac{T}{2C}$ | $b_0(n) = a_0(n-1)$
Inductor $L$ | $R_0 = \frac{2L}{T}$ | $b_0(n) = -a_0(n-1)$
Ideal voltage source $e(n)$ | not adaptable | $b_0(n) = 2 e(n) - a_0(n)$
Ideal current source $j(n)$ | not adaptable | $b_0(n) = 2 R_0 j(n) - a_0(n)$

Table 10.2 Parallel and series adaptors

Element | Port resistance | Wave equation
Two-port parallel adaptor | $R_0 = R_1$ | $\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}$
Two-port series adaptor | $R_0 = R_1$ | $\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}$
Three-port parallel adaptor | $R_0 = \frac{R_1 R_2}{R_1 + R_2}$ | $\begin{pmatrix} b_0 \\ b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} 0 & \frac{R_2}{R_1 + R_2} & \frac{R_1}{R_1 + R_2} \\ 1 & -\frac{R_1}{R_1 + R_2} & \frac{R_1}{R_1 + R_2} \\ 1 & \frac{R_2}{R_1 + R_2} & -\frac{R_2}{R_1 + R_2} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}$
Three-port series adaptor | $R_0 = R_1 + R_2$ | $\begin{pmatrix} b_0 \\ b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} 0 & -1 & -1 \\ -\frac{R_1}{R_1 + R_2} & \frac{R_2}{R_1 + R_2} & -\frac{R_1}{R_1 + R_2} \\ -\frac{R_2}{R_1 + R_2} & -\frac{R_2}{R_1 + R_2} & \frac{R_1}{R_1 + R_2} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}$

As an example of a nonlinear circuit element, we will derive a wave-domain representation of a simple diode. An ideal diode can be described by Shockley's law given in Eq. (10.6). For the sake of simplicity, we will set the ideality factor to $\eta = 1$. Transforming Shockley's law into the wave domain using Eqs. (10.30a) and (10.30b), we obtain the implicit relation

(10.38)  $\frac{a_0 - b_0}{2 R_0} = I_s \left( e^{(a_0 + b_0)/(2 v_t)} - 1 \right).$

For use in a wave digital filter, the wave equation should be given in explicit form. However, owing to the exponential, Eq. (10.38) cannot simply be brought into an explicit formulation. Luckily, we can bring Eq. (10.38) into a form which can be explicitly solved for the reflected wave $b_0$ if we make use of the Lambert W function $\omega(x)$. The resulting wave equation yields

(10.39)  $b_0 = a_0 + 2 R_0 I_s - 2 v_t\, \omega\!\left( \frac{R_0 I_s}{v_t}\, e^{(a_0 + R_0 I_s)/v_t} \right)$

with $\omega$ as the Lambert W function [Wer16].
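Eq. (10.39) maps directly to a few lines using scipy's Lambert W implementation; the saturation current and thermal voltage below are placeholder values for a generic small-signal diode at room temperature.

```python
import numpy as np
from scipy.special import lambertw

def diode_reflected_wave(a0, R0, Is=2.52e-9, vt=25.85e-3):
    # Eq. (10.39) with the ideality factor set to one; for very large incident
    # waves the exponential may overflow, which practical implementations
    # avoid by evaluating the Lambert W term in the log domain
    arg = (R0 * Is / vt) * np.exp((a0 + R0 * Is) / vt)
    return a0 + 2 * R0 * Is - 2 * vt * np.real(lambertw(arg))
```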

Another important element of wave digital filters is the adaptor. Adaptors are used to connect wave digital filter elements with each other. This can be done in a series or a parallel connection. In a two‐port parallel connection, the voltage and current relations can be easily obtained with

(10.40a)  $v_0 = v_1,$
(10.40b)  $i_0 = -i_1.$

Inserting again the definition of the wave variables, we can construct an unadapted wave equation for a two‐port parallel adaptor

(10.41)  $\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \begin{pmatrix} -\frac{R_0 - R_1}{R_0 + R_1} & \frac{2 R_0}{R_0 + R_1} \\ \frac{2 R_1}{R_0 + R_1} & \frac{R_0 - R_1}{R_0 + R_1} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}.$

Choosing $R_0 = R_1$ for both port resistances, an adapted form can be found with

(10.42)  $\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}.$

The wave equation for a two-port series adaptor can be obtained similarly by using $v_0 = -v_1$ and $i_0 = i_1$ as the voltage and current relations.

For many applications, adaptors will be needed which have more than two ports. Most commonly used are three‐port adaptors. Furthermore, three‐port adaptors can also be used as building blocks for N‐port adaptors [Wer16]. Consequently, we will only deal with the derivation of three‐port series or parallel adaptors.

The voltage and current relations of a three‐port parallel adaptor are given by

(10.43a)  $v_0 = v_1 = v_2,$
(10.43b)  $i_0 + i_1 + i_2 = 0.$

Similarly to the two-port adaptor, this can be brought into an adapted wave equation

(10.44)  $\begin{pmatrix} b_0 \\ b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} 0 & \frac{R_2}{R_1 + R_2} & \frac{R_1}{R_1 + R_2} \\ 1 & -\frac{R_1}{R_1 + R_2} & \frac{R_1}{R_1 + R_2} \\ 1 & \frac{R_2}{R_1 + R_2} & -\frac{R_2}{R_1 + R_2} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}$

with the adapted port resistance $R_0 = \frac{R_1 R_2}{R_1 + R_2}$. All adapted wave equations for two-port as well as for three-port series and parallel adaptors can be found in Table 10.2.
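As an illustration, the adapted three-port parallel adaptor of Eq. (10.44) reduces to a single matrix–vector product; the function name and example values below are arbitrary.

```python
import numpy as np

def parallel_adaptor_3port(a, R1, R2):
    # reflected waves (b0, b1, b2) from incident waves (a0, a1, a2), Eq. (10.44)
    s = R1 + R2
    S = np.array([[0.0,     R2 / s,  R1 / s],
                  [1.0,    -R1 / s,  R1 / s],
                  [1.0,     R2 / s, -R2 / s]])
    return S @ np.asarray(a, dtype=float)

b0, b1, b2 = parallel_adaptor_3port([0.5, -0.2, 0.1], R1=1e3, R2=2.2e3)
```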

We will conclude this section with a simple example of how to construct a wave digital filter model from a given circuit schematic. An easy nonlinear circuit can be seen in Fig. 10.17: an asymmetrical second-order diode clipper. The wave digital filter structure of this circuit can be seen on the left-hand side of Fig. 10.18. Constructing such a wave digital filter structure is subject to certain restrictions and building rules that make the filter realizable. These restrictions can be best understood if we introduce the concept of connection trees for wave digital filters. The structure of Fig. 10.18 can be transformed into a tree-based topology comprising several kinds of elements, namely the root, the leaves, and the adaptors. The corresponding connection tree of the wave digital filter can be seen on the right-hand side of Fig. 10.18. The root of a wave digital filter has no upward-facing connection to other elements, an adaptor has one upward-facing connection and one or more downward-facing ports, and a leaf is an element containing only one upward-facing connection. A non-adaptable element, such as a voltage source, should always be the root of the connection tree; it must not be placed as a leaf or adaptor. This can lead to complications if the analog circuit contains more than one non-adaptable element. For voltage and current sources, this can be solved by combining the source with its adjacent adaptor. The voltage source can be absorbed into a series connection, resulting in a two-port block with the wave equation

(10.45)  $\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \begin{pmatrix} -\frac{R_0 - R_1}{R_0 + R_1} & -\frac{2 R_0}{R_0 + R_1} \\ -\frac{2 R_1}{R_0 + R_1} & \frac{R_0 - R_1}{R_0 + R_1} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} + \begin{pmatrix} -\frac{2 R_0}{R_0 + R_1} \\ -\frac{2 R_1}{R_0 + R_1} \end{pmatrix} e,$

where $e$ is the source voltage. This wave equation can be adapted by setting $R_0 = R_1$, yielding the adapted wave equation

(10.46)  $\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \end{pmatrix} e,$

which can now be used as an adaptor in the connection tree. Going further, the nonlinear elements also restrict the construction of the wave digital filter. Note that we restricted ourselves in this example to a circuit with only one nonlinear element. The reason for this is that a nonlinear element should always be placed at the root of the wave digital filter. Constructing a filter with more nonlinear elements runs into several problems, whose solutions are a major research area in wave digital filter design. One approach to dealing with multiple nonlinear or non-adaptable elements is the use of so-called $R$-type adaptors, which have successfully been used to model these kinds of circuits [Wer16]. For the sake of simplicity, we will stick with our simple example in Fig. 10.17. One major advantage of using binary connection trees, like that of Fig. 10.18, is that the graph cannot contain a delay-free loop. This property directly ensures realizable wave digital filter structures. From the tree structure in Fig. 10.18, a realizable signal flow graph can be derived, which can finally be used to compute the output signal of the wave digital filter.


Figure 10.17 Second‐order diode clipper.


Figure 10.18 Wave digital filter structure of second‐order diode clipper with corresponding connection tree.
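Before turning to the full clipper, the per-sample evaluation order of such a structure (leaves reflect waves towards the root, the root reflects them back towards the leaves) can be illustrated with an even smaller, purely linear example: an RC lowpass built from a voltage source with its series resistor, treated as one adapted leaf whose reflected wave is simply $e(n)$, and a capacitor leaf, joined by the unadapted two-port parallel adaptor of Eq. (10.41). This is only a sketch with placeholder component values, not the diode clipper of Fig. 10.17.

```python
import numpy as np

def wdf_rc_lowpass(e, fs, R=2.2e3, C=10e-9):
    T = 1.0 / fs
    R0, R1 = R, T / (2.0 * C)        # port resistances (Table 10.1)
    a1 = 0.0                         # wave stored in the capacitor, Eq. (10.37)
    v_out = np.zeros(len(e))
    for n in range(len(e)):
        a0 = e[n]                    # source with series R, adapted: reflects e(n)
        # unadapted two-port parallel adaptor, second row of Eq. (10.41)
        b1 = 2.0 * R1 / (R0 + R1) * a0 + (R0 - R1) / (R0 + R1) * a1
        v_out[n] = 0.5 * (a1 + b1)   # capacitor voltage via Eq. (10.30a)
        a1 = b1                      # capacitor reflects this wave next sample
    return v_out

fs = 44100
n = np.arange(fs)
y = wdf_rc_lowpass(np.sin(2 * np.pi * 220.0 * n / fs), fs)
```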

10.5.2 State‐space Approaches

One of the most widely used approaches in the domain of virtual analog modeling is to derive a nonlinear state-space model from a given circuit schematic. This can be done very systematically by using, for example, the nodal DK method proposed by David Yeh in [Yeh10]. The resulting nonlinear state-space model has the form

(10.47a)  $x(n) = A\, x(n-1) + B\, u(n) + C\, i(n),$
(10.47b)  $y(n) = D\, x(n-1) + E\, u(n) + F\, i(n),$
(10.47c)  $v(n) = G\, x(n-1) + H\, u(n) + K\, i(n),$

with $u(n)$ as the input vector, $y(n)$ as the output vector, and $v(n)$ and $i(n)$ as the voltages over and currents through the nonlinear circuit elements. The matrices $A$, $B$, $C$, $D$, $E$, $F$, $G$, $H$, $K$ describe the dynamics of the system and $f$ is a nonlinear function containing all voltage–current relations of the nonlinear elements. In contrast to the nodal DK method, we will derive the state-space model with an approach introduced in [Hol15]. We start with the description of the individual circuit elements. Any circuit element is described by the equations

(10.48a)  $M_{v,e}\,v_e + M_{i,e}\,i_e + M_{x,e}\,x_e + M_{\dot{x},e}\,\dot{x}_e + M_{q,e}\,q_e = u_e,$
(10.48b)  $f_e(q_e) = 0,$

where $v_e$ and $i_e$ are the port voltages and currents, $x_e$ and $\dot{x}_e$ are the element's state and state-derivative vectors, $q_e$ is the auxiliary vector, $u_e$ the source vector, and $f_e$ the element's nonlinear voltage–current relationship. The coefficient matrices of common circuit elements are given in Table 10.3.

Table 10.3 Coefficient matrices and nonlinear functions of common circuit elements

For each element, the entries are $M_{v,e}$, $M_{i,e}$, $M_{x,e}$, $M_{\dot{x},e}$, $M_{q,e}$, $u_e$, and, where present, $f_e(q_e)$; empty matrices indicate that the element has no state or auxiliary variables.

Voltage source $v_s$:  $M_{v,e} = (1)$, $M_{i,e} = (0)$, $u_e = (v_s)$; $M_{x,e}$, $M_{\dot{x},e}$, $M_{q,e}$ empty.

Current source $i_s$:  $M_{v,e} = (0)$, $M_{i,e} = (-1)$, $u_e = (i_s)$; $M_{x,e}$, $M_{\dot{x},e}$, $M_{q,e}$ empty.

Resistor $R$:  $M_{v,e} = (-1)$, $M_{i,e} = (R)$, $u_e = (0)$; $M_{x,e}$, $M_{\dot{x},e}$, $M_{q,e}$ empty.

Capacitor $C$:  $M_{v,e} = \begin{pmatrix} C \\ 0 \end{pmatrix}$, $M_{i,e} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, $M_{x,e} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$, $M_{\dot{x},e} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}$, $u_e = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$; $M_{q,e}$ empty.

Inductor $L$:  $M_{v,e} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $M_{i,e} = \begin{pmatrix} 0 \\ L \end{pmatrix}$, $M_{x,e} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}$, $M_{\dot{x},e} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$, $u_e = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$; $M_{q,e}$ empty.

Diode $D$:  $M_{v,e} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $M_{i,e} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, $M_{q,e} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$, $u_e = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $f_e(q_e) = I_s\left(e^{q_{e,1}/v_t} - 1\right) - q_{e,2}$; $M_{x,e}$, $M_{\dot{x},e}$ empty.

A description of the whole circuit can now be obtained by combining all individual coefficient matrices and nonlinear functions, as well as the voltage, current, state, state-derivative, and source vectors, into one system. The vectors are simply stacked, e.g. $v = \begin{pmatrix} v_{e,1}^T & v_{e,2}^T & \cdots & v_{e,N}^T \end{pmatrix}^T$. The coefficient matrices are combined into one block-diagonal matrix of the form

(10.49)  $M_v = \begin{pmatrix} M_{v,e,1} & 0 & \cdots & 0 \\ 0 & M_{v,e,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & M_{v,e,N} \end{pmatrix}.$

The nonlinear functions are collected in the vector

(10.50)  $f\!\begin{pmatrix} q_{e,1} \\ q_{e,2} \\ \vdots \\ q_{e,N} \end{pmatrix} = \begin{pmatrix} f_{e,1}(q_{e,1}) \\ f_{e,2}(q_{e,2}) \\ \vdots \\ f_{e,N}(q_{e,N}) \end{pmatrix}.$
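
As an illustration of how Table 10.3 and Eq. (10.49) translate into practice, here is a minimal sketch in Python, assuming NumPy and SciPy are available; the function names, the dictionary layout, and the component values are illustrative assumptions, not part of the method itself.

    import numpy as np
    from scipy.linalg import block_diag

    def voltage_source():
        # One row of Eq. (10.48a): v = v_s; the actual source value enters
        # through the u vector at runtime (placeholder zero here).
        return dict(Mv=np.array([[1.0]]), Mi=np.array([[0.0]]),
                    Mx=np.zeros((1, 0)), Mxd=np.zeros((1, 0)),
                    Mq=np.zeros((1, 0)), u=np.zeros(1))

    def resistor(R):
        # One row: -v + R*i = 0, no states, no auxiliary variables.
        return dict(Mv=np.array([[-1.0]]), Mi=np.array([[R]]),
                    Mx=np.zeros((1, 0)), Mxd=np.zeros((1, 0)),
                    Mq=np.zeros((1, 0)), u=np.zeros(1))

    def capacitor(C):
        # Two rows: C*v - x = 0 and i - xdot = 0 (the state x is the charge).
        return dict(Mv=np.array([[C], [0.0]]), Mi=np.array([[0.0], [1.0]]),
                    Mx=np.array([[-1.0], [0.0]]), Mxd=np.array([[0.0], [-1.0]]),
                    Mq=np.zeros((2, 0)), u=np.zeros(2))

    def diode():
        # Two rows coupling (v, i) to the auxiliary vector q; f_e(q_e) = 0
        # supplies the exponential diode law separately.
        return dict(Mv=np.array([[1.0], [0.0]]), Mi=np.array([[0.0], [1.0]]),
                    Mx=np.zeros((2, 0)), Mxd=np.zeros((2, 0)),
                    Mq=-np.eye(2), u=np.zeros(2))

    # Example element list (component values are illustrative only).
    elements = [voltage_source(), resistor(2.2e3), capacitor(10e-9), diode(), diode()]

    # Block-diagonal stacking as in Eq. (10.49), and stacking of the source vectors.
    Mv = block_diag(*[e["Mv"] for e in elements])
    Mi = block_diag(*[e["Mi"] for e in elements])
    u = np.concatenate([e["u"] for e in elements])

Stacking the remaining matrices ($M_x$, $M_{\dot{x}}$, $M_q$) works the same way; the zero-column matrices of elements without states or auxiliary variables simply contribute no columns.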

Going further, all constraints introduced by the circuit's elements can be described by

(10.51a)  $M_v v + M_i i + M_x x + M_{\dot{x}} \dot{x} + M_q q = u,$
(10.51b)  $f(q) = 0.$

Applying Kirchhoff's voltage and current laws, the circuit topology can be incorporated with the additional equations $T_v v = 0$ and $T_i i = 0$, where the matrices $T_v$ and $T_i$ are derived using standard network analysis techniques. This leads to the nonlinear differential equation system:

(10.52a)  $\begin{pmatrix} M_v & M_i & M_x & M_{\dot{x}} & M_q \\ T_v & 0 & 0 & 0 & 0 \\ 0 & T_i & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} v \\ i \\ x \\ \dot{x} \\ q \end{pmatrix} = \begin{pmatrix} u \\ 0 \\ 0 \end{pmatrix},$
(10.52b)  $f(q) = 0.$
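
The bordered matrix of Eq. (10.52a) can be assembled directly from the stacked element matrices and the topology matrices. Below is a minimal sketch, assuming NumPy arrays Mv, Mi, Mx, Mxd, Mq, Tv, Ti of compatible sizes have already been built (the function name is illustrative).

    import numpy as np

    def assemble_system(Mv, Mi, Mx, Mxd, Mq, Tv, Ti):
        # Left-hand-side matrix of Eq. (10.52a): element constraints on top,
        # Kirchhoff voltage law (Tv) and current law (Ti) rows below.
        top = np.hstack([Mv, Mi, Mx, Mxd, Mq])
        kvl = np.hstack([Tv, np.zeros((Tv.shape[0], top.shape[1] - Tv.shape[1]))])
        kcl = np.hstack([np.zeros((Ti.shape[0], Mv.shape[1])), Ti,
                         np.zeros((Ti.shape[0],
                                   top.shape[1] - Mv.shape[1] - Ti.shape[1]))])
        return np.vstack([top, kvl, kcl])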

From this point, it is possible to derive a continuous-time state-space model and, with subsequent discretization, a discrete-time model. However, the equation system can also be discretized directly. For this, we use the trapezoidal rule for time discretization:

(10.53)  $\hat{x}(n) = \hat{x}(n-1) + \frac{T}{2}\left(\hat{\dot{x}}(n) + \hat{\dot{x}}(n-1)\right),$

where $T$ is the sampling interval, $\hat{x}$ the discrete approximation of the state at time $nT$, and $\hat{\dot{x}}$ the corresponding approximation of the state derivative $\dot{x}$. With the introduction of the canonical states,

(10.54)  $\bar{x}(n) = \hat{x}(n) + \frac{T}{2}\,\hat{\dot{x}}(n),$

we can make use of the substitutions

(10.55a)  $\hat{\dot{x}}(n) = \frac{1}{T}\left(\bar{x}(n) - \bar{x}(n-1)\right),$
(10.55b)  $\hat{x}(n) = \frac{1}{2}\left(\bar{x}(n) + \bar{x}(n-1)\right),$

and the definitions $\bar{M}_{x'} = \frac{1}{T} M_{\dot{x}} + \frac{1}{2} M_x$ and $\bar{M}_x = \frac{1}{T} M_{\dot{x}} - \frac{1}{2} M_x$ to construct a discrete-time system of the form

(10.56a)  $\begin{pmatrix} M_v & M_i & \bar{M}_{x'} & M_q \\ T_v & 0 & 0 & 0 \\ 0 & T_i & 0 & 0 \end{pmatrix} \begin{pmatrix} \bar{v}(n) \\ \bar{i}(n) \\ \bar{x}(n) \\ \bar{q}(n) \end{pmatrix} = \begin{pmatrix} \bar{M}_x\,\bar{x}(n-1) + \bar{u}(n) \\ 0 \\ 0 \end{pmatrix},$
(10.56b)  $f(\bar{q}(n)) = 0,$

where $\bar{v}(n)$, $\bar{i}(n)$, $\bar{q}(n)$, and $\bar{u}(n)$ are the discrete-time values at time $nT$. The linear part of this discrete-time equation system does not determine the unknowns uniquely; however, its general solution can be written as

(10.57)  $\begin{pmatrix} \bar{v}(n) \\ \bar{i}(n) \\ \bar{x}(n) \\ \bar{q}(n) \end{pmatrix} = \begin{pmatrix} D_v \\ D_i \\ A \\ D_q \end{pmatrix} \bar{x}(n-1) + \begin{pmatrix} E_v \\ E_i \\ B \\ E_q \end{pmatrix} \bar{u}(n) + \begin{pmatrix} F_v \\ F_i \\ C \\ F_q \end{pmatrix} z(n).$

Here, $z(n)$ is an arbitrary vector that depends on the chosen solution and has as many entries as $f(\bar{q}(n))$ has components. Finally, we can extract from $\bar{v}$ and $\bar{i}$ only the quantities of interest, resulting in a nonlinear state-space system like in Eqs. (10.47a) to (10.47d):

(10.58a)  $\bar{x}(n) = A\,\bar{x}(n-1) + B\,\bar{u}(n) + C\,z(n),$
(10.58b)  $y(n) = D\,\bar{x}(n-1) + E\,\bar{u}(n) + F\,z(n),$
(10.58c)  $\bar{q}(n) = D_q\,\bar{x}(n-1) + E_q\,\bar{u}(n) + F_q\,z(n),$
(10.58d)  $f(\bar{q}(n)) = 0.$
We will demonstrate this approach using again the clipping circuit example from Fig. 10.3. Ordering the circuit elements as voltage source, resistor, capacitor, first diode, and second diode, the following coefficient matrices are found:

(10.59)  $M_v = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & C & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \quad M_i = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & R & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \quad M_x = \begin{pmatrix} 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$
$M_{\dot{x}} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \quad M_q = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \quad u = \begin{pmatrix} u_{\mathrm{in}} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.$

The matrices $T_v$ and $T_i$ can be constructed using $T_v v = 0$ and $T_i i = 0$, yielding

(10.60)  $T_v = \begin{pmatrix} -1 & 1 & 1 & 0 & 0 \\ -1 & 1 & 0 & 1 & 0 \\ 1 & -1 & 0 & 0 & 1 \end{pmatrix} \quad T_i = \begin{pmatrix} 1 & 0 & 1 & 1 & -1 \\ 0 & 1 & -1 & -1 & 1 \end{pmatrix}.$

After time discretization with a sampling rate of $f_s = 48\,\mathrm{kHz}$, we can derive

(10.61)  $\bar{M}_{x'} = \begin{pmatrix} 0 \\ 0 \\ -0.5 \\ -48000 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \quad \bar{M}_x = \begin{pmatrix} 0 \\ 0 \\ 0.5 \\ -48000 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.$
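
These values follow directly from the definitions below Eq. (10.55); a minimal sketch, assuming NumPy, with the capacitor entries of $M_x$ and $M_{\dot{x}}$ taken from Eq. (10.59):

    import numpy as np

    fs = 48000.0
    T = 1.0 / fs
    Mx  = np.array([0.0, 0.0, -1.0,  0.0, 0.0, 0.0, 0.0, 0.0])  # M_x, Eq. (10.59)
    Mxd = np.array([0.0, 0.0,  0.0, -1.0, 0.0, 0.0, 0.0, 0.0])  # M_xdot, Eq. (10.59)

    Mxp   = Mxd / T + 0.5 * Mx  # \bar{M}_{x'}: (0, 0, -0.5, -48000, 0, 0, 0, 0)
    Mxbar = Mxd / T - 0.5 * Mx  # \bar{M}_x:   (0, 0,  0.5, -48000, 0, 0, 0, 0)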

In the next step, the equation system from Eq. (10.56a) needs to be solved. The general solution of a system of the form $Ax = b$ that has no unique solution can be constructed from the nullspace $N$ of $A$ together with one particular solution $p$,

(10.62)  $x = p + N z,$

where $z$ is an arbitrary vector. From this general solution, we can directly obtain the system matrices $A$, $B$, $C$, $D_q$, $E_q$, $F_q$. The matrices $D$, $E$, and $F$ can be extracted from $D_v$, $E_v$, and $F_v$, i.e. we take the third row of these matrices, which corresponds to the voltage across the capacitor. The resulting matrices are

(10.63)  $A = \begin{pmatrix} -1 \end{pmatrix} \quad B = \begin{pmatrix} 0 \end{pmatrix} \quad C = \begin{pmatrix} 94\cdot 10^{-3} & 0 \end{pmatrix}$
$D = \begin{pmatrix} 0 \end{pmatrix} \quad E = \begin{pmatrix} 0 \end{pmatrix} \quad F = \begin{pmatrix} 1 & 0 \end{pmatrix}$
$D_q = \begin{pmatrix} 0 \\ 96000 \\ 0 \\ 0 \end{pmatrix} \quad E_q = \begin{pmatrix} 0 \\ 1\cdot 10^{-3} \\ 0 \\ 0 \end{pmatrix} \quad F_q = \begin{pmatrix} 0 & 1 \\ -5.512\cdot 10^{-3} & 1 \\ 0 & -1 \\ 1 & 0 \end{pmatrix}.$
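
The nullspace construction of Eqs. (10.62) and (10.57) can be carried out numerically. Below is a minimal sketch assuming NumPy/SciPy, where S stands for the assembled left-hand-side matrix of Eq. (10.56a) and b for one right-hand-side vector; these names and the helper function are illustrative assumptions.

    import numpy as np
    from scipy.linalg import null_space

    def general_solution(S, b):
        # One particular (minimum-norm) solution of S x = b and a basis of the
        # nullspace of S; together they give x = p + N z as in Eq. (10.62).
        p = np.linalg.lstsq(S, b, rcond=None)[0]
        N = null_space(S)
        return p, N

Applying this column by column to the right-hand sides associated with $\bar{M}_x \bar{x}(n-1)$ and $\bar{u}(n)$ yields the coefficient matrices of Eq. (10.57); the rows belonging to $\bar{x}(n)$ and $\bar{q}(n)$ then give $A$, $B$, $C$ and $D_q$, $E_q$, $F_q$, respectively.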

The output of the nonlinear state-space system can now be computed by finding, at each time step, a vector $z(n)$ that is consistent with Eqs. (10.58c) and (10.58d). This vector is then used to evaluate the linear part of the system, Eqs. (10.58a) and (10.58b). Note that the coefficient matrices may also take different values, owing to the non-uniqueness of the nonlinear equation system from Eq. (10.56a); they depend on the chosen particular solution and nullspace basis.
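
As a final illustration, the following minimal sketch (Python/NumPy) runs the resulting model of Eq. (10.63) sample by sample: a plain Newton iteration solves Eqs. (10.58c)/(10.58d) for $z(n)$, after which Eqs. (10.58a)/(10.58b) are evaluated. The diode parameters $I_s$ and $v_t$, the iteration count, and the test input are illustrative assumptions; a robust implementation would add damping or step limiting to the Newton solver.

    import numpy as np

    # System matrices from Eq. (10.63).
    A, B = -1.0, 0.0
    C = np.array([94e-3, 0.0])
    D, E = 0.0, 0.0
    F = np.array([1.0, 0.0])
    Dq = np.array([0.0, 96000.0, 0.0, 0.0])
    Eq = np.array([0.0, 1e-3, 0.0, 0.0])
    Fq = np.array([[0.0, 1.0],
                   [-5.512e-3, 1.0],
                   [0.0, -1.0],
                   [1.0, 0.0]])

    Is, vt = 1e-15, 25e-3   # illustrative diode parameters

    def f(q):
        # Nonlinear functions of the two diodes, cf. Table 10.3.
        return np.array([Is * (np.exp(q[0] / vt) - 1.0) - q[1],
                         Is * (np.exp(q[2] / vt) - 1.0) - q[3]])

    def f_jac(q):
        # Jacobian df/dq, needed for the Newton iteration.
        return np.array([[Is / vt * np.exp(q[0] / vt), -1.0, 0.0, 0.0],
                         [0.0, 0.0, Is / vt * np.exp(q[2] / vt), -1.0]])

    def process(u_in, newton_steps=20):
        x = 0.0                          # canonical state
        z = np.zeros(2)                  # warm-started from the previous sample
        y = np.zeros(len(u_in))
        for n, u in enumerate(u_in):
            p = Dq * x + Eq * u          # constant part of q(n), Eq. (10.58c)
            for _ in range(newton_steps):
                q = p + Fq @ z
                J = f_jac(q) @ Fq        # Jacobian with respect to z
                z = z - np.linalg.solve(J, f(q))
            y[n] = D * x + E * u + F @ z   # output, Eq. (10.58b)
            x = A * x + B * u + C @ z      # state update, Eq. (10.58a)
        return y

    # Example: process a 1 kHz sine at a 48 kHz sampling rate.
    n = np.arange(480)
    y = process(0.5 * np.sin(2 * np.pi * 1000.0 / 48000.0 * n))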

10.6 Exercises

1. Fundamentals

  1. What can be said about the output of a memoryless nonlinear system excited with a periodic input?
  2. Let $f(x) = |x|$ be a static nonlinear mapping excited with $x(t) = \cos(\omega_0 t)$. Compute the Fourier coefficients of the resulting output $y(t) = |\cos(\omega_0 t)|$. What can be said about the THD?

2. Overdrive, Distortion, Clipping

  1. Derive the nonlinear differential equation for the first-order diode clipper assuming identical diodes. Hint: $i_d = I_s\left(e^{v_d/(\eta v_t)} - 1\right)$, $\sinh(x) = \frac{e^x - e^{-x}}{2}$.
    Schematic illustration of the first-order diode clipper.
  2. What is the difference between soft‐ and hard clipping nonlinearities? How can they be used to create a distortion or overdrive effect?

3. Nonlinear Filters

  1. How can we extend a linear filter design to give it a more natural sound?
  2. How does the stability of nonlinear filters differ from that of linear filters?

4. Aliasing and its Mitigation

  1. Assume a nonlinear system whose harmonics roll off with frequency approximately as $1/f$. When operated at a sampling rate of 44.1 kHz, the aliasing distortion is deemed too high. By what factor, approximately, is the aliasing distortion below 22.05 kHz reduced when the sampling rate is doubled?
  2. Apply antiderivative antialiasing to the memoryless systems described by the mapping functions of Eqs. (10.8) and (10.9).

5. Virtual Analog Modeling

  1. Give an overview of the three main modeling approaches.
  2. How is a circuit element modeled in the wave domain? State the connection between the wave and Kirchhoff domains.

References

  1. [Bil17a] S. Bilbao, F. Esqueda, J.D. Parker, and V. Välimäki: Antiderivative antialiasing for memoryless nonlinearities. IEEE Signal Processing Letters, 24(7):1049–1053, 2017.
  2. [Bil17b] S. Bilbao, F. Esqueda, and V. Välimäki: Antiderivative antialiasing, Lagrange interpolation and spectral flatness. In 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pages 141–145, New Paltz, NY, USA, Oct 2017.
  3. [Che04] G. Chen: Stability of nonlinear systems. Encyclopedia of RF and Microwave Engineering, pages 4881–4896, 2004.
  4. [Cho20] J. Chowdhury: Stable structures for nonlinear biquad filters. In Proceedings of the 23rd International Conference on Digital Audio Effects (DAFx‐20), pages 94–100, Vienna, Austria, 2020.
  5. [Eic16] F. Eichas and U. Zölzer: Virtual analog modeling of guitar amplifiers with Wiener‐Hammerstein models. In 44th Annual Convention on Acoustics, Munich, Germany, 2018.
  6. [Esq15] F. Esqueda, V. Välimäki, and S. Bilbao: Aliasing reduction in soft‐clipping algorithms. In Proc. 23rd European Signal Process. Conf. (EUSIPCO), pages 2059–2063, Nice, France, 2015.
  7. [Esq16a] F. Esqueda, V. Välimäki, and S. Bilbao: Antialiased soft clipping using an integrated bandlimited ramp. In Proceedings of the 24th European Signal Processing Conference (EUSIPCO), pages 1043–1047, Budapest, Hungary, 2016.
  8. [Esq16b] F. Esqueda, S. Bilbao, and V. Välimäki: Aliasing reduction in clipped signals. IEEE Transactions on Signal Processing, 64(20):5255–5267, 2016.
  9. [Mul17] R. Müller and T. Hélie: Trajectory anti‐aliasing on guaranteed‐passive simulation of nonlinear physical systems. In Proceedings of the 20th International Conference on Digital Audio Effects (DAFx‐17), pages 87–94, Edinburgh, UK, 2017.
  10. [Par16] J.D. Parker, V. Zavalishin, and E. Le Bivic: Reducing the aliasing of nonlinear waveshaping using continuous‐time convolution. In Proceedings of the 19th International Conference on Digital Audio Effects (DAFx‐16), pages 137–144, Brno, Czech Republic, 2016.
  11. [Par19] J.D. Parker, F. Esqueda, and A. Bergner: Modelling of nonlinear state‐space systems using a deep neural network. In Proceedings of the 22nd International Conference on Digital Audio Effects (DAFx‐19), Birmingham, UK, 2019.
  12. [Fet86] A. Fettweis: Wave digital filters: Theory and practice. Proceedings of the IEEE, 74(2):270–327, 1986.
  13. [Fra13] Fractal Audio Systems: Multipoint iterative matching and impedance correction technology, 2013.
  14. [Hol15] M. Holters and U. Zölzer: A generalized method for the derivation of non‐linear state‐space models from circuit schematics. In Proceedings of the 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 2015.
  15. [Hol20] M. Holters: Antiderivative antialiasing for stateful systems. Applied Sciences, 10(1), 2020.
  16. [Kem08] C. Kemper: Musical instrument with acoustic transducer. “https://www.google.com/patents/US20080134867”, 2008.
  17. [Roa79] C. Roads: A tutorial on non‐linear distortion or waveshaping synthesis. Computer Music Journal, 3(2):29–34, 1979.
  18. [Ros92] D. Rossum: Making digital filters sound “analog”. In Proceedings of the International Computer Music Conference (ICMC), pages 30–33, 1992.
  19. [Sche80] M. Schetzen: The Volterra and Wiener Theories of Nonlinear Systems. Robert Krieger Publishing, 1980.
  20. [Wer16] K.J. Werner: Virtual Analog Modeling of Audio Circuits Using Wave Digital Filters. PhD thesis, Stanford University, 2016.
  21. [Wri19] A. Wright, E. Damskägg, and V. Välimäki: Real‐time black‐box modelling with recurrent neural networks. In Proceedings of the 22nd International Conference on Digital Audio Effects (DAFx‐19), Birmingham, UK, 2019.
  22. [Yeh10] D.T. Yeh, J.S. Abel, and J.O. Smith: Automated physical modeling of nonlinear audio circuits for realtime audio effects part I: Theoretical development. IEEE Trans. Audio, Speech and Language Process., 18(4):728–737, 2010.