Modeling Interconnected Systems: A Functional Perspective

Stanley R. Liberty , in The Electrical Engineering Handbook, 2005

7.3 System Identification

In a system identification problem, one is normally asked to identify the mathematical input/output relation S from input/output data. In theory, this is straightforward if the system is truly linear (Chang, 1973), but in practice it may be difficult due to noisy data and finite-precision arithmetic. The nonlinear case is difficult in general because series approximations must be made that may not converge rapidly enough for large-signal modeling (Bello et al., 1973). In both linear and nonlinear identification techniques, random inputs are commonly used as probes (Bello et al., 1973; Cooper and McGillem, 1971; Lee and Schetzen, 1961). The identification of S is then carried out by mathematical operations on the system input and output autocorrelations and the cross-correlations between input and output.

This section does not contain a discussion of such techniques, but some observations are presented. It should be noted that if only input/output information is assumed known (as is the case in standard identification techniques), then system identification does not supply any information on the system structure or the types or values of components. In such cases, the component connection model cannot be used. In many applications, however, connection information and component type information are available. In such cases, the component connection model may be applicable. Indeed, for identification schemes using random inputs, it is easily shown that the component correlation functions and the overall system correlation functions are related by:

(7.8) $R_{yy} = L_{21} R_{bb} L_{21}^T + R_{yu} L_{22}^T + L_{22} R_{uy} - L_{22} R_{uu} L_{22}^T$.

(7.9) $R_{aa} = L_{11} R_{bb} L_{11}^T + L_{11} R_{bu} L_{12}^T + L_{12} R_{ub} L_{11}^T + L_{12} R_{uu} L_{12}^T$.

(7.10) $L_{21} R_{bu} = R_{yu} - L_{22} R_{uu}$.

(7.11) $R_{ab} = L_{11} R_{bb} + L_{12} R_{ub}$.

Here $R_{jk}$ denotes the cross-correlation of signal $j$ and signal $k$. If the left inverse of $L_{21}$ exists, then the subsystem correlation functions can be determined and used to identify the subsystems via standard identification techniques. In general, however, this left inverse does not exist, and either the component type information or the pseudo-inverse of $L_{21}$ must be used to obtain subsystem models.
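To illustrate the last remark, the sketch below uses Eq. (7.10) to recover the component cross-correlation $R_{bu}$ from system-level correlations via the Moore–Penrose pseudo-inverse of $L_{21}$. All matrices, dimensions and the white probe input are hypothetical choices made only to keep the example self-contained.

```python
import numpy as np

# Hypothetical connection matrices and correlation data, used only to
# illustrate Eq. (7.10):  L21 @ R_bu = R_yu - L22 @ R_uu.
rng = np.random.default_rng(0)
n_b, n_u, n_y = 4, 2, 5                        # component outputs, system inputs, system outputs
L21 = rng.standard_normal((n_y, n_b))          # full column rank with probability one
L22 = rng.standard_normal((n_y, n_u))

R_bu_true = rng.standard_normal((n_b, n_u))    # "unknown" component cross-correlation
R_uu = np.eye(n_u)                             # white probe input
R_yu = L21 @ R_bu_true + L22 @ R_uu            # measured system-level correlation, Eq. (7.10)

# Recover R_bu.  A left inverse of L21 exists only when L21 has full column
# rank; otherwise the pseudo-inverse gives a minimum-norm solution and
# component type information is needed to resolve the remaining ambiguity.
R_bu_est = np.linalg.pinv(L21) @ (R_yu - L22 @ R_uu)
print(np.allclose(R_bu_est, R_bu_true))        # True when the left inverse exists
```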


URL:

https://www.sciencedirect.com/science/article/pii/B9780121709600500840

Partial-update adaptive filters

Kutluyıl Doğançay , in Partial-Update Adaptive Signal Processing, 2008

4.9.6 Partial-update RLS simulations

Consider a system identification problem where the unknown system has transfer function:

(4.148) $H(z) = 1 - 0.8z^{-1} + 0.3z^{-2} + 0.2z^{-3} + 0.1z^{-4} - 0.2z^{-5} + 0.1z^{-6} + 0.4z^{-7} - 0.2z^{-8} + 0.1z^{-9}$

The input signal $x(k)$ is an AR(1) Gaussian process with $a = 0.8$. The system output is measured in additive zero-mean white Gaussian noise with variance $\sigma_n^2 = 10^{-4}$. The RLS parameters are set to $N = 10$, $\lambda = 0.999$ and $\epsilon = 10^{-4}$. The convergence performance of the RLS algorithm and its partial-update versions has been measured by the misalignment, which is defined by:

(4.149) $\dfrac{\| w(k) - h \|_2^2}{\| h \|_2^2}$

where h is the true system impulse response.
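For concreteness, the following is a minimal full-update RLS sketch of the experiment described above. It assumes only the quantities already stated (the AR(1) input, the impulse response of Eq. (4.148) as reconstructed here, and the parameter values); the partial-update variants would differ only in which coefficients are updated at each iteration.

```python
import numpy as np

# Minimal full-update RLS sketch of the identification experiment above.
# h is the impulse response of H(z) in Eq. (4.148).
rng = np.random.default_rng(0)
h = np.array([1, -0.8, 0.3, 0.2, 0.1, -0.2, 0.1, 0.4, -0.2, 0.1])
N, lam, eps = 10, 0.999, 1e-4
sigma_n = 1e-2                       # noise standard deviation, sigma_n^2 = 1e-4

K = 2000
v = rng.standard_normal(K)
x = np.zeros(K)
for k in range(1, K):                # AR(1) input with a = 0.8
    x[k] = 0.8 * x[k - 1] + v[k]

w = np.zeros(N)
P = np.eye(N) / eps                  # inverse correlation matrix estimate
misalignment = np.zeros(K)
for k in range(N, K):
    u = x[k - N + 1:k + 1][::-1]     # regressor [x(k), ..., x(k-N+1)]
    d = h @ u + sigma_n * rng.standard_normal()   # noisy system output
    e = d - w @ u                    # a priori error
    g = P @ u / (lam + u @ P @ u)    # RLS gain vector
    w = w + g * e                    # coefficient update
    P = (P - np.outer(g, u @ P)) / lam
    misalignment[k] = np.sum((w - h) ** 2) / np.sum(h ** 2)   # Eq. (4.149)
```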

Figures 4.1 and 4.2 show the time evolution of the misalignment for the full-update and partial-update RLS algorithms for one realization of the input and noise signals. The partial-update parameters are S = 10 for periodic partial updates and M = 1 for the other partial-update algorithms. The error magnitude bound for the set-membership partial-update RLS algorithm is set to γ = 0.02. The selective-partial-update and set-membership partial-update RLS algorithms use the simplified coefficient selection criterion. From Figure 4.1 we observe that, as expected, the initial convergence of the periodic-partial-update RLS algorithm is roughly S = 10 times slower than that of the full-update RLS algorithm. The sequential and stochastic-partial-update RLS algorithms both exhibit divergent behaviour, as evidenced by spikes in their misalignment curves. These spikes occur whenever the time-averaged partial-update correlation matrix R_M becomes ill-conditioned due to the use of sparse partial-update regressor vectors I_M(k)x(k) in the update of R_M. The likelihood of ill-conditioning increases with smaller M and λ (in this case M = 1).

Figure 4.1. Time evolution of misalignment for data-independent partial-update RLS algorithms: (a) Periodic-partial-update RLS; (b) sequential-partial-update RLS; (c) stochastic-partial-update RLS.

Figure 4.2. Time evolution of misalignment for data-dependent partial-update RLS algorithms: (a) Selective-partial-update RLS; (b) set-membership partial-update RLS; (c) coefficient update indicator for set-membership partial-update RLS.

Even though the selective-partial-update and set-membership partial-update RLS algorithms are not entirely immune to ill-conditioning of R_M, the data-dependent nature of the coefficient selection is likely to reduce the frequency of ill-conditioning. Indeed, in Figure 4.2 there are no visible misalignment spikes that can be attributed to ill-conditioning of R_M. The selective-partial-update and set-membership partial-update RLS algorithms appear to perform well compared with the sequential and stochastic-partial-update RLS algorithms. Figure 4.2(c) shows the sparse time updates (iterations at which partial-update coefficients are updated) of the set-membership partial-update RLS algorithm, a feature of set-membership adaptive filters that leads to reduced power consumption.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123741967000106

Approaches to partial coefficient updates

Kutluyıl Doğançay , in Partial-Update Adaptive Signal Processing, 2008

2.7.1 Example 1: Convergence performance

We use the system identification problem in Section 2.3.1 to simulate the set-membership partial-update NLMS algorithm. The input signal $x(k)$ is a Gaussian AR(1) process. The bound on the magnitude of the a posteriori error is set to $\gamma = 0.01$ for $2 \le M \le 5$ and to $\gamma = 0.1$ for $M = 1$. A larger error bound is required for $M = 1$ to ensure stability. Referring to (2.132), we see that $\gamma$ in effect controls the convergence rate of the set-membership partial-update NLMS algorithm. In general, the larger $\gamma$, the slower the convergence rate. For $a = 0$ (white Gaussian input), $a = 0.5$ and $a = 0.9$, the learning curves of the NLMS, set-membership NLMS and set-membership partial-update NLMS algorithms are shown in Figure 2.26 for comparison purposes. We observe that the set-membership algorithms achieve a smaller steady-state MSE than the NLMS algorithm.

Figure 2.26. MSE curves for full-update and set-membership partial-update NLMS algorithms for Gaussian AR(1) input with: (a) a = 0 (white); (b) a = 0.5 ; and (c) a = 0.9 .
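A rough sketch of one common form of the set-membership partial-update NLMS recursion is given below. It is not a reproduction of the update in (2.131)–(2.132); the step-size rule, the M-max coefficient selection and the regularization constant are illustrative assumptions.

```python
import numpy as np

def sm_pu_nlms_step(w, u, d, M, gamma, eps=1e-6):
    """One illustrative set-membership partial-update NLMS iteration.

    Only the M coefficients with the largest regressor magnitudes are updated,
    and no update is made when the a priori error is already within gamma.
    """
    e = d - w @ u
    if abs(e) <= gamma:                    # error already acceptable: skip update
        return w, e
    idx = np.argsort(np.abs(u))[-M:]       # M-max (selective) coefficient subset
    mu = 1.0 - gamma / abs(e)              # data-dependent step size
    w = w.copy()
    w[idx] += mu * e * u[idx] / (eps + u[idx] @ u[idx])
    return w, e
```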


URL:

https://www.sciencedirect.com/science/article/pii/B9780123741967000088

Detection, Classification, and Estimation in the (t,f) Domain

In Time-Frequency Signal Analysis and Processing (Second Edition), 2016

12.3.1 Problem Description

A discrete-time system identification problem can be stated as follows:

(12.3.1) $y[n] = \sum_{k} q[n-k]\, x[k] + \epsilon[n],$

where x[n] is the transmitted signal, q[n] is the impulse response of a linear time-invariant (LTI) system, ϵ[n] is additive noise, and y[n] is the received signal. The problem is to identify the LTI system transfer function Q(f), i.e., the Fourier transform of q[n], given the input and output signals x[n] and y[n].

The conventional method for solving the above problem is the least-squares solution, which is equivalent to the cross-spectral method in the stationary case: the system transfer function Q(f) can be estimated by (see, e.g., Refs. [24,25])

(12.3.2) $Q(f) = \dfrac{S_{xy}(f)}{S_{xx}(f)},$

where S_xy(f) is the cross-spectrum of x[n] and y[n], and S_xx(f) is the auto-spectrum of x[n]. When the additive noise ϵ[n] in Eq. (12.3.1) is a zero-mean Gaussian process that is statistically independent of the input signal x[n], the estimate in Eq. (12.3.2) is asymptotically unbiased, but its performance is limited by the noise variance or the signal-to-noise ratio (SNR). When the SNR is low, the performance of the estimate in Eq. (12.3.2) is poor, as we will see later. Since the auto-spectrum of the input signal x[n] appears in the denominator of the estimate equation (12.3.2), the input signal is, in general, chosen as a pseudorandom signal with a flat spectrum. With such input signals, noise reduction prior to system identification is not applicable.
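As a reference point, the conventional estimate of Eq. (12.3.2) can be formed directly from Welch-type spectral estimates. The sketch below is illustrative only: the FIR channel, signal length and noise level are assumed values.

```python
import numpy as np
from scipy.signal import csd, welch, lfilter

# Illustrative cross-spectral estimate of Q(f) as in Eq. (12.3.2).
rng = np.random.default_rng(0)
q = np.array([1.0, 0.5, -0.3, 0.2])          # hypothetical LTI impulse response
x = rng.uniform(-1, 1, 8192)                  # pseudorandom, roughly flat-spectrum input
y = lfilter(q, 1.0, x) + 0.1 * rng.standard_normal(x.size)   # Eq. (12.3.1)

f, Sxy = csd(x, y, fs=1.0, nperseg=256)       # cross-spectrum of x and y
_, Sxx = welch(x, fs=1.0, nperseg=256)        # auto-spectrum of x
Q_hat = Sxy / Sxx                             # estimate of Q(f), Eq. (12.3.2)
```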

In the following, we introduce a different technique [26] for the system identification problem. The main idea is as follows. Instead of a pseudorandom signal x[n], chirp-type signals are transmitted as training signals; these have wideband characteristics in the frequency domain but are concentrated in the joint time-frequency ((t,f)) domain. The (t,f) concentration property usually survives passage through an LTI system (this will be seen later). Since a joint (t,f) distribution usually spreads noise and localizes signals, in particular chirps, the receiver may use a (t,f) analysis technique to map the received signal y[n] from the time domain into the joint (t,f) domain. In this way, the SNR can be significantly increased in the joint (t,f) domain [27]. Furthermore, (t,f) filtering can be applied in the (t,f) plane to reduce the noise, so that the SNR in the time domain is increased and the subsequent system identification, after denoising, is improved. Some applications of this approach can be found in Refs. [28–31].
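The toy sketch below conveys the flavour of this approach, substituting a simple STFT magnitude threshold for the (t,f) filtering of Ref. [26]; the chirp parameters, the channel and the threshold are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import chirp, lfilter, stft, istft, csd, welch

# Toy illustration: chirp training signal, crude (t,f) masking of the noisy
# received signal, then the cross-spectral estimate (12.3.2) on the result.
fs, n = 1000.0, 4096
t = np.arange(n) / fs
x = chirp(t, f0=10, t1=t[-1], f1=400, method='linear')   # chirp training signal
q = np.array([1.0, 0.4, -0.2])                            # hypothetical LTI channel
rng = np.random.default_rng(1)
y = lfilter(q, 1.0, x) + 0.5 * rng.standard_normal(n)     # noisy received signal

# Map y to the (t,f) plane and keep only the strong bins; the chirp stays
# concentrated there while the additive noise is spread out.
f, tt, Y = stft(y, fs=fs, nperseg=256)
_, y_den = istft(Y * (np.abs(Y) > 0.2 * np.abs(Y).max()), fs=fs, nperseg=256)

m = min(n, y_den.size)                                    # align signal lengths
ff, Sxy = csd(x[:m], y_den[:m], fs=fs, nperseg=256)
_, Sxx = welch(x[:m], fs=fs, nperseg=256)
Q_hat = Sxy / Sxx                                         # estimate of Q(f) after denoising
```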


URL:

https://www.sciencedirect.com/science/article/pii/B9780123984999000121

Introduction

Kutluyıl Doğançay , in Partial-Update Adaptive Signal Processing, 2008

1.2.2 Adaptive inverse system identification

Figure 1.3 illustrates the adaptive inverse system identification problem. Comparison of Figures 1.2 and 1.3 reveals that adaptive inverse system identification requires an adaptive filter to be connected to the input and noisy output of the unknown system in the reverse direction. The use of the D-sample delay z^{-D} for the desired adaptive filter response ensures that the adaptive filter will be able to approximate the inverse system for non-minimum-phase or maximum-phase linear systems. For such systems the stable inverse system is non-causal and has an infinite impulse response (IIR). A (causal) FIR adaptive filter can approximate the stable inverse only if it is sufficiently long (i.e. N is sufficiently large) and D is non-zero. If the unknown system is a minimum-phase linear system, then no delay is required for the desired response (i.e. one can set D = 0).

Figure 1.3. Adaptive inverse system identification.

The adaptive filter uses an appropriate norm of the error signal e(k) = d(k − D) − y(k) as a measure of accuracy for inverse system identification. The adaptive filter coefficients are adjusted iteratively in order to minimize the error norm in a statistical sense. At the end of the minimization process the adaptive filter converges to an estimate of the inverse of the unknown system. Due to the presence of noise at the unknown system output, the inverse system estimate is not identical to the zero-forcing solution that ignores the output noise.

An important application of inverse system identification is channel equalization. Fast data communication over bandlimited channels causes intersymbol interference (ISI). The ISI can be eliminated by an adaptive channel equalizer connected in cascade with the communication channel (the unknown system). The adaptive channel equalizer is an adaptive filter performing inverse channel identification for the unknown communication channel. The delayed channel input signal d(k − D) is made available to the channel equalizer during intermittent training intervals so that the adaptation process can learn any variations in the inverse system model due to channel fading. Outside the training intervals the equalizer usually switches to a decision-directed mode, whereby the error signal e(k) is obtained from the difference between the equalizer output y(k) and its nearest neighbour in the channel input signal constellation.
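A minimal sketch of training-mode inverse identification with an LMS-adapted FIR equalizer is given below; the channel, the filter length N, the delay D and the step size are illustrative choices rather than values from the text.

```python
import numpy as np

# Illustrative LMS inverse system identification (training-mode equalizer).
rng = np.random.default_rng(0)
channel = np.array([0.5, 1.0, -0.6])       # hypothetical non-minimum-phase channel
N, D, mu = 16, 8, 0.01                      # equalizer length, delay, step size

K = 5000
d_in = rng.choice([-1.0, 1.0], size=K)      # transmitted (training) symbols
x = np.convolve(d_in, channel)[:K]          # channel output observed by the filter
x += 0.01 * rng.standard_normal(K)          # additive output noise

w = np.zeros(N)
for k in range(N, K):
    u = x[k - N + 1:k + 1][::-1]            # equalizer regressor
    y = w @ u                               # equalizer output
    e = d_in[k - D] - y                     # error against delayed channel input
    w += mu * e * u                         # LMS coefficient update
```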


URL:

https://www.sciencedirect.com/science/article/pii/B9780123741967000076

ADAPTIVE IDENTIFICATION OF STOCHASTIC TRANSMISSION CHANNELS

L.H. Sibul , ... E.L. Titlebaum , in Adaptive Systems in Control and Signal Processing 1983, 1984

MOTIVATION AND APPLICATIONS

We note that the formulation of the system identification problem in terms of operator equations, as shown by Equation 1, encompasses a fairly wide range of models. It is well known that the solution of both deterministic and stochastic linear state-space equations can be expressed by integral operators as in Equation 1. By deterministic state-space equations we mean equations in which the state transition matrices are deterministic, and by stochastic state-space equations we mean those in which the state transition matrices contain elements that are stochastic processes (Adomian and Sibul, 1981; Sibul, 1967, 1968, 1970; Sibul and Comstock, 1977). It can be shown that the usual autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) models are subsets of Equation 1.

The need for identification of the operators in Equation 1 arises in signal extraction and detection problems. Let the received signal vector v(t) be given by

(2) $v(t) = \pounds\, u(t) + n(t)$

where £ is a matrix operator, u(t) is the signal to be detected, and n(t) is the interfering noise with the covariance matrix Q(t, τ). To illustrate the basic connection between the identification problem and signal detection/extraction problem, let us assume that v(t) is a Gaussian process. In this example we assume that signal and noise are complex, zero mean, mutually uncorrelated processes. It can be shown (Sibul and Sohie, 1983) that the maximum likelihood detector computes the log likelihood function given by

(3) $\ell = \left[ E\{\pounds\, U\, \pounds^H\} \left[ E\{\pounds\, U\, \pounds^H\} + Q \right]^{-1} v \right]^H Q^{-1} v$

where U = u u^H. The factors [E{£ U £^H} + Q]^{-1} and Q^{-1} can be adaptively determined from the received data; however, E{£ U £^H} represents prior knowledge which must be either known or determined through the identification procedure. The above comments provide the basic motivation for the problem addressed in this paper.


URL:

https://www.sciencedirect.com/science/article/pii/B9780080305653500472

Adaptive Filtering

Nasser Kehtarnavaz , in Digital Signal Processing System Design (Second Edition), 2008

L6.3 Lab Experiments

1. Build a VI graphically to implement the inverse system identification problem shown in Figure L6-13 by modifying the system identification VI appearing in Figure L6-5. Generate the desired signal by setting the delay equal to one-half the order of the unknown system. Verify the inverse system identification VI for system orders 12 and 16.

Figure L6-13. Inverse system identification.

2. Build a VI to implement the LMS VI shown in Figure L6-2 in a hybrid fashion by using the MathScript feature.

3. Build a VI to implement the time-varying channel shown in Figure L6-9 by using the MathScript feature.

4. Build a VI to implement and verify the noise cancellation system shown in Figure L6-8 in a hybrid fashion as in (2) and (3).


URL:

https://www.sciencedirect.com/science/article/pii/B9780123744906000064

LATTICE STRUCTURES FOR FACTORIZATION OF SAMPLE COVARIANCE MATRICES

B. Friedlander , in Adaptive Systems in Control and Signal Processing 1983, 1984

1 INTRODUCTION

In recent years there has been a growing interest in lattice structures and their applications to estimation, signal processing, system identification and related problems. An extensive literature exists on the theory and practice of lattice filters [1]–[8]. In this paper we present a selective review of lattice structures in linear least-squares estimation. The central theme of our discussion will be the role of lattice structures in factorization of sample covariance matrices. Due to space limitations we will defer most of the details to the references.

Many adaptive signal processing techniques are based on the solution of the following prototype linear prediction problem: let $y_t$ be a discrete-time stationary zero-mean process. We are interested in predicting the current value of this process from past measurements. A linear predictor of order $N$ will have the form:

(1) $\hat{y}_{t|t-1} = -\sum_{i=1}^{N} A_{N,i}\, y_{t-i}$

where $\hat{y}_{t|t-1}$ is the predicted value of $y_t$ based on data up to time $t-1$, and $\{A_{N,i},\ i = 1, \ldots, N\}$ are the predictor coefficients. The difference between the actual value of the process and its predicted value will be called the prediction error (of order $N$):

(2) $\varepsilon_{N,t} = y_t - \hat{y}_{t|t-1} = y_t + \sum_{i=1}^{N} A_{N,i}\, y_{t-i}$

The least-squares predictor is designed to minimize the sum of squared prediction errors $\sum_{t=0}^{T} \varepsilon_{N,t}'\, \varepsilon_{N,t}$ over some time interval. The index $T$ will be added when necessary to specify the time interval over which the errors are minimized (e.g., $\varepsilon_{N,t}(T)$, $A_{N,i}(T)$). Using this notation we rewrite (2) as

(3) $[\,\varepsilon_{N,0}(T), \ldots, \varepsilon_{N,T}(T)\,] = [\,I,\ A_{N,1}(T), \ldots, A_{N,N}(T)\,]\, Y_{N+1,T+1}$

where $Y_{N+1,T+1}$ is the pre-windowed data matrix whose $i$-th row is $[\,0,\ldots,0,\ y_0,\ldots,y_{T-i}\,]$, $i = 0, \ldots, N$ (cf. (6)).

As we will see later, the prediction error $\varepsilon_{N,T}(T)$ can be computed recursively in time and in order, using a simple lattice structure [4],[7]. The vectors of prediction errors of orders $p = 0, 1, \ldots, N$ can be combined into a single matrix $E^{\varepsilon}_{N+1,T+1}$, giving

(5) $E^{\varepsilon}_{N+1,T+1} = U^{A}_{N,T}\, Y_{N+1,T+1}$

A basic property of least-squares prediction errors is that they are uncorrelated with past data. In other words

(6) $[\,0, \ldots, 0,\ \varepsilon_{p,0}(T{-}N{+}p), \ldots, \varepsilon_{p,T-N+p}(T{-}N{+}p)\,]\,[\,0, \ldots, 0,\ y_0, \ldots, y_{T-i}\,]' = 0 \quad \text{for } i > N - p$

Using this property it is straightforward to check that $E^{\varepsilon}_{N+1,T+1} Y'_{N+1,T+1}$ is a lower triangular matrix. Post-multiplying (5) by $Y'_{N+1,T+1}$ we get

(7) $U^{A}_{N,T}\, Y_{N+1,T+1} Y'_{N+1,T+1} = E^{\varepsilon}_{N+1,T+1} Y'_{N+1,T+1}$

where $R_{N,T} = Y_{N+1,T+1} Y'_{N+1,T+1}$ is the (pre-windowed) sample covariance matrix. It also follows that

(8) $R_{N,T}^{-1} = \left( E^{\varepsilon}_{N+1,T+1} Y'_{N+1,T+1} \right)^{-1} U^{A}_{N,T}$

The lower triangular matrix $E^{\varepsilon} Y'$ can be normalized to have unity along the diagonal by premultiplying it by $D^{\varepsilon,-1}_{N,T}$, where

(9) $D^{\varepsilon}_{N,T} = \operatorname{diag}\{\, R^{\varepsilon}_{N,T}, \ldots, R^{\varepsilon}_{0,T-N} \,\}$

where $R^{\varepsilon}_{p,T-N+p}$ is the sum of squared prediction errors (of order $p$). Equation (8) can now be written as

(10) $R_{N,T}^{-1} = \left( D^{\varepsilon,-1}_{N,T} E^{\varepsilon}_{N+1,T+1} Y'_{N+1,T+1} \right)^{-1} D^{\varepsilon,-1}_{N,T}\, U^{A}_{N,T}$

Note that the right-hand side of this equation is a product of a unit-diagonal lower triangular matrix, a diagonal matrix and a unit-diagonal upper triangular matrix. By the symmetry of $R_{N,T}$ it follows that the two triangular matrices are simply transposes of each other, i.e.,

(11) $R_{N,T}^{-1} = \left( U^{A}_{N,T} \right)' D^{\varepsilon,-1}_{N,T}\, U^{A}_{N,T}$

We conclude that the least-squares predictor coefficients can be computed by performing an LDU decomposition of the inverse of the sample covariance matrix.
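To make this statement concrete, the following NumPy sketch (with arbitrary synthetic data and orders) checks that the unit-diagonal upper factor obtained from a Cholesky-based LDU decomposition of $R_{N,T}^{-1}$ reproduces the pre-windowed least-squares predictor coefficients.

```python
import numpy as np

# Sketch: the order-N least-squares predictor coefficients appear in the
# unit-diagonal upper factor of an LDU decomposition of R_{N,T}^{-1}
# (cf. Eqs. (3), (7), (11)).  Data and sizes are illustrative only.
rng = np.random.default_rng(0)
N, T = 3, 500
y = rng.standard_normal(T + 1)

# Pre-windowed data matrix Y_{N+1,T+1}: row i is y delayed by i samples,
# with zeros in the first i positions (cf. Eq. (6)).
Y = np.zeros((N + 1, T + 1))
for i in range(N + 1):
    Y[i, i:] = y[:T + 1 - i]

R = Y @ Y.T                                     # sample covariance, Eq. (7)

# Direct least-squares forward predictor: minimise ||Y[0] + A @ Y[1:]||^2.
A = np.linalg.lstsq(Y[1:].T, -Y[0], rcond=None)[0]

# LDU-type factorisation of R^{-1} via its Cholesky factor:
# R^{-1} = G G' with G lower triangular, so G' equals the unit-diagonal
# upper factor of Eq. (11) scaled row-wise by the diagonal matrix.
G = np.linalg.cholesky(np.linalg.inv(R))
U = G.T / np.diag(G)[:, None]                   # normalise rows to unit diagonal

print(np.allclose(U[0, 1:], A))                 # first row gives [1, A_{N,1}, ..., A_{N,N}]
```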

Inverting both sides of equation (11) gives a UDL decomposition of the covariance matrix

(12) $R_{N,T} = \left( U^{A}_{N,T} \right)^{-1} D^{\varepsilon}_{N,T} \left( U^{A}_{N,T} \right)^{-T} = \left( L^{\alpha}_{N,T} \right)' D^{\varepsilon}_{N,T}\, L^{\alpha}_{N,T}$

where

(13) $\alpha_{i,i}(T) = 1$

Comparison of (10) and (12) provides an immediate interpretation for the entries of the matrix L N , T α :

(14) $L^{\alpha}_{N,T} = D^{\varepsilon,-1}_{N,T} E^{\varepsilon}_{N+1,T+1} Y'_{N+1,T+1}$

In other words, the entries of the UDL factors of the sample covariance matrix are simply normalized cross-correlations between prediction errors and the data:

(15) $\alpha_{p,i}(T) = [\,0, \ldots, 0,\ \varepsilon_{p,0}(T{-}N{+}p), \ldots, \varepsilon_{p,T-N+p}(T{-}N{+}p)\,]\,[\,0, \ldots, 0,\ y_0, \ldots, y_{T-i}\,]'$

Lattice structures provide efficient computational procedures for such cross correlations, as will be discussed further in section 3 (see also [4],[5],[7]). These structures involve the backward predictor in addition to the forward predictor discussed so far.

Let us define the backward predictor

(16) $\hat{y}_{t-N-1|t-1} = -\sum_{i=1}^{N} B_{N,N+1-i}\, y_{t-i}$

and the corresponding backward prediction error

(17) $r_{N,t-1} = y_{t-N-1} - \hat{y}_{t-N-1|t-1} = \sum_{i=1}^{N} B_{N,N+1-i}\, y_{t-i} + y_{t-N-1}$

By following the same steps as in the case of the forward predictor, it can be shown that

(18) $E^{r}_{N+1,T+1} = \begin{bmatrix} r_{0,0}(T) & \cdots & r_{0,T}(T) \\ \vdots & & \vdots \\ r_{p,0}(T) & \cdots & r_{p,T}(T) \\ \vdots & & \vdots \\ r_{N,0}(T) & \cdots & r_{N,T}(T) \end{bmatrix}$

or

$L^{B}_{N,T}\, Y_{N+1,T+1} = E^{r}_{N+1,T+1}$

(19) $R_{N,T}^{-1} = \left( L^{B}_{N,T} \right)' D^{r,-1}_{N,T}\, L^{B}_{N,T},$

(20) $D^{r}_{N,T} = \operatorname{diag}\{\, R^{r}_{0,T}, \ldots, R^{r}_{N,T} \,\},$

where $R^{r}_{p,T}$ is the sum of squared backward prediction errors. By inverting (19) we get

(21) $R_{N,T} = \left( L^{B}_{N,T} \right)^{-1} D^{r}_{N,T} \left( L^{B}_{N,T} \right)^{-T} = \left( U^{\beta}_{N,T} \right)' D^{r}_{N,T}\, U^{\beta}_{N,T}$

where

(22) $\beta_{i,i}(T) = 1$

and

(23) $U^{\beta}_{N,T} = D^{r,-1}_{N,T} E^{r}_{N+1,T+1} Y'_{N+1,T+1}$

To summarize: The coefficients of the backward predictor can be computed by performing a UDL decomposition of the inverse of the sample covariance matrix. The entries of the LDU factor of the covariance matrix itself are normalized cross correlations between backward prediction errors and the data. More precisely,

(24) $\beta_{p,i}(T) = [\,r_{p,0}(T), \ldots, r_{p,T}(T)\,]\,[\,0, \ldots, 0,\ y_0, \ldots, y_{T-i}\,]'$

In the next two sections we will briefly describe some lattice structures for computing the factors $U^{A}$, $L^{B}$, $L^{\alpha}$, $U^{\beta}$.


URL:

https://www.sciencedirect.com/science/article/pii/B9780080305653500381

Convergence and stability analysis

Kutluyıl Doğançay , in Partial-Update Adaptive Signal Processing, 2008

3.3.3 Simulation examples for steady-state analysis

In this section we assess the accuracy of the steady-state analysis results by way of simulation examples. We consider a system identification problem where the unknown system is an FIR system of order $N - 1$ and the additive noise at the system output $n(k)$ is statistically independent of the input $x(k)$. Thus, in the signal model used in the steady-state analysis we have $v(k) = n(k)$, and $w_o$ is the FIR system impulse response. We observe that the steady-state EMSE is not influenced by the unknown system characteristics; it is determined only by the input signal statistics and the method of partial coefficient updates employed. For this reason the choice of the FIR system is immaterial as long as its length does not exceed the adaptive filter length $N$. The impulse response of the unknown FIR system to be identified has been set to:

(3.62) $h = [\,1,\ 1.2,\ 0.8,\ 0.3,\ 0.4,\ 0.2,\ 0.1,\ 0.3,\ 0.1,\ 0.05\,]^T$

The adaptive filter length is $N = 10$, which matches the length of the unknown FIR system. The additive output noise $v(k)$ is i.i.d. zero-mean Gaussian with variance $\sigma_v^2 = 0.001$.

The steady-state MSE values are estimated by ensemble-averaging the adaptive filter output errors over 400 independent trials and taking the time average of 5000 ensemble-averaged values after convergence. Recall that the steady-state MSE is given by the sum of the steady-state EMSE and the variance of $v(k)$. To speed up convergence, the adaptive filter coefficients are initialized to the true system impulse response (i.e. $w(0) = h$). Two input signals are considered, viz. an i.i.d. Gaussian input signal with zero mean and unit variance, and a correlated AR(1) Gaussian input signal with zero mean and unit variance:

(3.63) $x(k) = a\, x(k-1) + \nu(k)$

where $a = 0.9$ and $\nu(k)$ is zero-mean i.i.d. Gaussian with variance $1 - a^2$ so that $\sigma_x^2 = 1$. The eigenvalue spread of this correlated Gaussian signal is $\chi(R) = 135.5$, which indicates strong correlation.
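The sketch below generates such a unit-variance AR(1) input and computes the eigenvalue spread of its $N \times N$ autocorrelation matrix, both from the analytic model $r(m) = a^{|m|}$ and from a sample estimate; these are illustrative computations that should roughly reproduce the quoted spread.

```python
import numpy as np
from scipy.linalg import toeplitz

a, N = 0.9, 10
rng = np.random.default_rng(0)

# Unit-variance AR(1) input: x(k) = a x(k-1) + nu(k), var(nu) = 1 - a^2.
nu = np.sqrt(1 - a**2) * rng.standard_normal(100_000)
x = np.zeros_like(nu)
for k in range(1, x.size):
    x[k] = a * x[k - 1] + nu[k]

# Analytic autocorrelation r(m) = a^|m| and its sample estimate.
R = toeplitz(a ** np.arange(N))
r_hat = np.array([x[:x.size - m] @ x[m:] / (x.size - m) for m in range(N)])
R_hat = toeplitz(r_hat)

eig, eig_hat = np.linalg.eigvalsh(R), np.linalg.eigvalsh(R_hat)
print(eig.max() / eig.min(), eig_hat.max() / eig_hat.min())   # eigenvalue spread chi(R)
```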

In Figures 3.1 and 3.2 the theoretical and simulated MSE values of the periodic-partial-update LMS algorithm are depicted for the i.i.d. and correlated Gaussian inputs, respectively. The period of coefficient updates is set to S = 5. The theoretical MSE is computed using the small step-size approximation (3.34a) and the asymptotic approximation (3.34b). For both input signals the asymptotic approximation appears to be more accurate than the small step-size approximation for large μ. This is intuitively expected since the small step-size approximation assumes a small μ.

Figure 3.1. Theoretical and simulated MSE of periodic-partial-update LMS (S = 5) for i.i.d. Gaussian input. Theoretical MSE is computed using: (a) small μ approximation (3.34a); and (b) asymptotic approximation (3.34b).

Figure 3.2. Theoretical and simulated MSE of periodic-partial-update LMS (S = 5) for AR(1) Gaussian input with eigenvalue spread 135.5. Theoretical MSE is computed using: (a) small μ approximation (3.34a); and (b) asymptotic approximation (3.34b).
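As a point of reference, a minimal sketch of a periodic-partial-update LMS iteration (the full LMS update applied only once every S iterations) is shown below; the function signature and step-size handling are illustrative assumptions.

```python
import numpy as np

def periodic_pu_lms_step(w, u, d, mu, k, S):
    """One illustrative periodic-partial-update LMS iteration: the full LMS
    update is applied only at iterations k = 0, S, 2S, ...; otherwise the
    coefficients are left unchanged."""
    e = d - w @ u
    if k % S == 0:
        w = w + mu * e * u
    return w, e
```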

The theoretical and simulated MSE values of the sequential-partial-update LMS algorithm are shown in Figures 3.3 and 3.4 for the i.i.d. and correlated Gaussian inputs, respectively. The number of coefficients to be updated is M = 1 (i.e. one out of ten coefficients is updated at each LMS iteration). The theoretical MSE is computed using the small step-size approximation (3.38a) and the asymptotic approximation (3.38b), which are identical to those used for periodic partial updates. The asymptotic approximation yields slightly more accurate results than the small step-size approximation for large μ.

Figure 3.3. Theoretical and simulated MSE of sequential-partial-update LMS (M = 1) for i.i.d. Gaussian input. Theoretical MSE is computed using: (a) small μ approximation (3.38a); and (b) asymptotic approximation (3.38b).

Figure 3.4. Theoretical and simulated MSE of sequential-partial-update LMS (M = 1) for AR(1) Gaussian input with eigenvalue spread 135.5. Theoretical MSE is computed using: (a) small μ approximation (3.38a); and (b) asymptotic approximation (3.38b).

The MSE values of the M-max LMS algorithm are shown in Figures 3.5 and 3.6 for the i.i.d. and correlated Gaussian inputs, respectively. The number of coefficients to be updated is set to M = 4. The theoretical MSE is computed using the small step-size approximation (3.40a) and the asymptotic approximation (3.40b), which are identical to those used for periodic partial updates. For data-dependent partial coefficient updates, the input signal correlation reduces the accuracy of the MSE approximations. One of the reasons for this is that the definitions of β_M in (3.26) and (3.39) are not identical for correlated inputs.

Figure 3.5. Theoretical and simulated MSE of M-max LMS (M = 4) for i.i.d. Gaussian input. Theoretical MSE is computed using: (a) small μ approximation (3.40a); and (b) asymptotic approximation (3.40b).

Figure 3.6. Theoretical and simulated MSE of M-max LMS (M = 4) for AR(1) Gaussian input with eigenvalue spread 135.5. Theoretical MSE is computed using: (a) small μ approximation (3.40a); and (b) asymptotic approximation (3.40b).
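For reference, a minimal sketch of an M-max LMS iteration is given below; the selection rule simply keeps the M regressor entries of largest magnitude, and the interface is an illustrative assumption.

```python
import numpy as np

def m_max_lms_step(w, u, d, mu, M):
    """One illustrative M-max LMS iteration: update only the M coefficients
    whose regressor entries have the largest magnitude."""
    e = d - w @ u
    idx = np.argpartition(np.abs(u), -M)[-M:]   # indices of the M largest |u_i|
    w = w.copy()
    w[idx] += mu * e * u[idx]
    return w, e
```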

For the selective-partial-update NLMS algorithm the theoretical and simulated MSE values are shown in Figures 3.7 and 3.8. The number of coefficients to be updated is set to M = 1. The theoretical MSE is computed using the two asymptotic approximations in (3.50a) and (3.50b). For i.i.d. input, (3.50b) produces a better approximation of the steady-state MSE. On the other hand, strong input correlation and a small M/N ratio tend to diminish the accuracy of the approximations. This is common to all data-dependent partial coefficient update methods.

Figure 3.7. Theoretical and simulated MSE of selective-partial-update NLMS (M = 1) for i.i.d. Gaussian input. Theoretical MSE is computed using: (a) asymptotic approximation 1 (3.50a); and (b) asymptotic approximation 2 (3.50b).

Figure 3.8. Theoretical and simulated MSE of selective-partial-update NLMS (M = 1) for AR(1) Gaussian input with eigenvalue spread 135.5. Theoretical MSE is computed using: (a) asymptotic approximation 1 (3.50a); and (b) asymptotic approximation 2 (3.50b).

Figures 3.9 and 3.10 show the theoretical and simulated MSE values of the M-max NLMS algorithm. The number of coefficients to be updated is M = 1. The theoretical MSE is computed using the asymptotic approximations given in (3.61). For i.i.d. input, the second asymptotic approximation in (3.61b) produces the best approximation of the steady-state MSE. The third approximation in (3.61c) gives the best MSE approximation for the strongly correlated AR(1) input signal. We again observe that strong input correlation adversely impacts the accuracy of the approximations.

Figure 3.9. Theoretical and simulated MSE of M-max NLMS (M = 1) for i.i.d. Gaussian input. Theoretical MSE is computed using asymptotic approximations: (a) (3.61a); (b) (3.61b); and (c) (3.61c).

Figure 3.10. Theoretical and simulated MSE of M-max NLMS (M = 1) for AR(1) Gaussian input with eigenvalue spread 135.5. Theoretical MSE is computed using asymptotic approximations: (a) (3.61a); (b) (3.61b); and (c) (3.61c).


URL:

https://www.sciencedirect.com/science/article/pii/B978012374196700009X

Parameter Estimation of Chaotic Systems Using Density Estimation of Strange Attractors in the State Space

Yasser Shekofteh , ... Sajad Jafari , in Recent Advances in Chaotic Systems and Synchronization, 2019

Abstract

In this chapter, the focus will be on parameter estimation methods for chaotic systems. A so-called density estimation approach will be considered, and its application to a chaotic system identification problem will be described. The estimation method is based on modeling the attractor distribution in the state space using a Gaussian mixture model (GMM). The purpose of modeling in the state space is to overcome some problems that arise in traditional parameter estimation methods for chaotic systems. The learning phase and evaluation phase of the parameter estimation method will be described in detail, and some information criteria will be reported for obtaining a proper model of the attractor. Experimental results will then be shown to verify the success of the parameter estimation procedure.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128158388000078