Amir H. Hadian‐Rasanan1 and Jamal Amani Rad2
1Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Evin, Tehran, Iran
2Department of Cognitive Modeling, Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
Brain activity reconstruction is the field of studying and reconstructing human brain activity from imaging data such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). This field of neuroscience has attracted a lot of attention recently, and different tools and algorithms have been developed for this purpose. Because deep learning methods have produced outstanding results in many scientific problems, some researchers have applied them to this task and found that they are a good tool for reconstructing brain activity. Although deep learning methods are accurate and useful, they all share a major problem: the learning algorithm is treated as a black box. Moreover, there is no meaningful way to explain why the employed deep learning methods obtain good results. Fortunately, this problem can be addressed by using a suitable mathematical model. Among the various mathematical tools available for modeling this phenomenon, one of the most useful is the inverse model. An inverse problem is a mathematical framework used to extract information about an (unknown) system from observed measurements of that system. In fact, the problem of recovering information about the sources of electrical currents from EEG data can be treated as an inverse problem. Figure 15.1 shows how brain activity reconstruction can be considered as an inverse model.
The procedure of measuring the electrical currents and applying preprocessing algorithms to these raw data to prepare them for monitoring is called the forward problem. Conversely, the procedure of estimating the neural activity sources corresponding to a set of measured data is called the inverse problem. It is worth mentioning that in brain activity reconstruction the forward and inverse problems are closely related: if the forward problem is not solved accurately, the measured data are not accurate enough, and the inverse problem cannot be solved accurately from such noisy data.
In addition to brain activity reconstruction, inverse problems, and especially the determination of a source parameter from additional information, have many applications in different fields of science and engineering. Generally, evaluating a source parameter in a parabolic partial differential equation from additional information is used to specify the unknown attributes of a region by measuring the properties of its boundary or of a specific part of the domain. These unknown attributes contain valuable information about the (unknown) system, but measuring them directly is expensive or impossible [1].
Despite the wide applicability of inverse problems and source parameter evaluation, these problems have some challenging aspects, including existence of a solution, uniqueness of the solution, and instability of the solution [2]. In the following, we explain each of these major issues briefly:
In Section 15.1.1, we state the problem that is considered in this chapter.
In this chapter, we study the following nonlinear inverse problem of simultaneously finding an unknown function and an unknown source parameter from the following parabolic equation:
with initial condition
and boundary condition
subject to an overspecified condition at a point in the spatial domain
In order to overcome the nonlinearity of the problem and to find the two unknown functions simultaneously, a transformation is used. By employing a pair of transformations, Eqs. 15.1–15.4 can be rewritten in the following form:
with initial condition
and boundary condition
subject to
which yields
As can be seen, with this transformation the nonlinear equation 15.1 is transformed into the linear equation 15.5, and the role of the source parameter is transferred to the new unknown function. As mentioned in Section 15.1, this problem has special importance in different fields of science, and many researchers have therefore developed different numerical algorithms to solve it. Some of these efforts are reviewed in Section 15.1.2.
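Before turning to the literature, it may help to record a classical instance of this reduction, stated here in generic textbook notation that may differ from Eqs. 15.1–15.8 (cf. [1]). If the original problem is to find $u(x,t)$ and $p(t)$ such that

$$u_t = \Delta u + p(t)\,u + f(x,t),$$

together with initial and boundary data and the overspecified condition $u(x^{*},t) = E(t)$, then introducing

$$r(t) = \exp\!\left(-\int_0^{t} p(s)\,ds\right), \qquad v(x,t) = r(t)\,u(x,t),$$

turns the equation into the linear problem $v_t = \Delta v + r(t)\,f(x,t)$, in which the overspecified condition becomes $r(t) = v(x^{*},t)/E(t)$, and the source parameter is recovered afterwards from $p(t) = -r'(t)/r(t)$.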
Since finding a source parameter plays a significant role in different fields of science, this problem has attracted much attention over the past decade, and many researchers have analyzed its different aspects. For example, the existence and uniqueness of solutions to these problems, as well as further applications, are discussed in [3–9]. Several authors have developed numerical algorithms for simulating this problem; these include various classes of basic numerical schemes such as finite difference, meshless, and spectral methods. We summarize some of the numerical schemes that have been used to solve this problem as follows:
In addition to the above‐mentioned methods, other researchers have used different approaches for recovering the control parameter in the inverse problem, such as the method of lines [43] and predictor–corrector methods [44].
Weighted residual methods have attracted much attention in recent years. There are various types of weighted residual methods, such as the Tau, Galerkin, Petrov–Galerkin, and collocation methods. The collocation algorithm is a powerful method based on the expansion of an unknown function in terms of basis functions, and using different basis functions allows us to solve many problems with different dynamics [45–48]. In addition, Chebyshev polynomials and their variants, such as rational and fractional Chebyshev functions, are a suitable choice of basis functions, and many researchers have used these polynomials in their approaches [49–56]. In this chapter, the Chebyshev collocation method is used for the spatial approximation, which yields a linear algebraic system of equations, while a θ‐weighted finite difference scheme is used for the temporal approximation. The rest of this chapter is organized as follows. In Section 15.2, we introduce the different parts of the proposed method, namely the weighted residual method, the collocation method, and function approximation using Chebyshev polynomials. In Section 15.3, we apply the proposed method to Eqs. 15.5–15.8, and in Section 15.4, we report the numerical experiments of solving Eqs. 15.1–15.4. Finally, a conclusion is given in Section 15.5.
Before explaining how to solve the model in the form of Eqs. 15.5–15.8, we focus on the numerical tools used in this chapter. First, the weighted residual method and the collocation algorithm, as a special case of the weighted residual method, are introduced; afterward, the properties of Chebyshev polynomials and function approximation using these polynomials are presented.
Spectral and pseudo‐spectral methods are frequently used to solve problems in many fields, such as computational neuroscience [57], cognitive science [58], biology [59], fluid dynamics [60], and engineering [61]. There is no obvious historical starting point for these methods, but their main idea is arguably inspired by Fourier analysis. In 1820, Navier employed an approach based on expanding the unknown function in sine functions for solving the Laplace equation; he expanded the unknown function of two variables in the following form:
in which the coefficients can be obtained using the orthogonality of the sine functions. Because the computational cost of this method was high and its accuracy was not acceptable, it was not used frequently at that time. With the growth of available computational resources, these methods have attracted much attention, and researchers have developed them in different directions, such as error estimation, convergence and stability analysis, and parallelization. The weighted residual methods, of which the spectral methods are a special case, are a powerful approach for approximating the solution of differential equations. Before we explain the procedure and the basics of the weighted residual methods, we should recall the definition of the weight function, which is presented in Definition 15.1.
In order to illustrate the weighted residual method, consider a multidimensional (nonlinear) partial differential equation as follows:
with the boundary condition
where is a (nonlinear) partial differential operator, operates on the boundary conditions, is the domain of the equation, and is the boundary of the domain. In order to approximate the solution of the above equation, we select a set of basis functions and assume the solution to be a finite sum of these basis functions multiplied by unknown coefficients. Therefore, we obtain the following approximation:
By substituting Eq. 15.13 into Eqs. 15.11 and 15.12, the following equations, called the residual functions, are obtained:
Because the approximate solution and the exact solution are not equal, the residuals are not identically zero; the weighted residual method forces the residuals to be as close to zero as possible.
To obtain the unknown coefficients, the weighted inner products of the residuals in Eqs. 15.15 and 15.16 with a set of test functions are required to vanish. So we have
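In generic form, this orthogonality condition reads

$$\big\langle R,\ \psi_j \big\rangle_{w} \;=\; \int_{\Omega} R(\mathbf{x})\,\psi_j(\mathbf{x})\,w(\mathbf{x})\,d\mathbf{x} \;=\; 0, \qquad j = 1,\dots,N,$$

with an analogous condition for the boundary residual; here $R$ denotes the equation residual, $\psi_j$ the test functions, and $w$ a positive weight function. These symbols are generic placeholders and need not coincide with the chapter's notation.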
where the weights are positive weight functions. The choice of test functions determines the type of spectral method. Here, a brief review of the different types of the weighted residual method is presented in terms of the residual function:
Therefore, this method forces the residual function to be zero over all subdomains.
In addition to the above‐mentioned methods, some new pseudo‐spectral methods based on a new family of Lagrange functions have recently been introduced [63,64]. In order to solve our model, we employ the collocation algorithm; in the following, this algorithm is explained in detail as a special case of the weighted residual method.
The collocation algorithm is the simplest and most common weighted residual method and is used to solve problems in many areas of science. In 1934, Slater and Kantorovich employed this method for the first time, and about three years later Frazer, Jones, and Skan used it for approximating the solution of ordinary differential equations. Lanczos illustrated the significance of choosing the basis functions and collocation points in 1938. From 1957 to 1964, studies by Clenshaw, Norton, and Wright revived this method. As mentioned, the collocation method is obtained by choosing the Dirac delta function as the test function. The Dirac delta function has some special properties that should be recalled; its definition and essential properties are presented below.
This method is a member of the weighted residual methods that forces the residual to be zero at selected points of the domain. As a result of choosing collocation points and Dirac delta functions as the test functions, the residual must vanish at those points. Therefore, if we take , Eqs. 15.16 and 15.17 become:
Now, if we select equations from Eqs. 15.22 and 15.23, a linear or nonlinear system of equations is obtained, and the unknown coefficients are specified by solving this system. The pseudo‐code of the collocation algorithm is presented in Algorithm 15.1.
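As a concrete illustration of these steps, the following minimal sketch solves a simple two‐point boundary value problem by collocation with shifted Chebyshev basis functions; the model equation, basis size, and node placement are illustrative choices and not the chapter's Algorithm 15.1.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative collocation solve of u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
# The equation, basis size, and node placement are assumptions of this sketch.
N = 12                                              # number of shifted Chebyshev basis functions
f = lambda x: np.pi**2 * np.sin(np.pi * x)          # manufactured right-hand side (exact u = -sin(pi x))

basis = [C.Chebyshev([0] * k + [1], domain=[0, 1]) for k in range(N)]   # shifted T_k on [0, 1]

def row(x, deriv=0):
    """Values (or deriv-th derivatives) of all basis functions at the point x."""
    return np.array([(p.deriv(deriv)(x) if deriv else p(x)) for p in basis])

m = N - 2                                           # interior collocation points (Chebyshev-Gauss)
nodes = 0.5 * (1 + np.cos(np.pi * (2 * np.arange(m) + 1) / (2 * m)))

A = np.vstack([row(x, deriv=2) for x in nodes] + [row(0.0), row(1.0)])  # residual rows + boundary rows
rhs = np.concatenate([f(nodes), [0.0, 0.0]])
coeffs = np.linalg.solve(A, rhs)                    # unknown expansion coefficients

print(abs(row(0.5) @ coeffs + np.sin(np.pi * 0.5))) # error against the exact solution at x = 0.5
```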
Orthogonal polynomials play a significant role in function approximation, spectral methods, and especially in the collocation algorithm. These functions have many applications in different fields of science and engineering. For example, in pattern recognition, many support vector machine algorithms have been developed based on orthogonal functions [65–69], and some algorithms for signal and image representation also use orthogonal functions [70]. In the field of complex networks, computing the reliability of a network has special importance, and some approaches based on orthogonal functions have recently been employed for this purpose [71]. In spectral methods, the choice of basis functions has a significant effect on the convergence and accuracy of the algorithm, and orthogonal polynomials have suitable properties for this role. The general properties that the basis functions should have are as follows:
Some classes of orthogonal functions possess all of the above‐mentioned properties simultaneously. One of these suitable properties is completeness over an arbitrary closed finite interval, which we explain in Theorem 15.1.
It is worth mentioning that not all classes of orthogonal functions are complete; the above theorem applies only to orthogonal functions that are solutions of a Sturm–Liouville differential equation. One such family is the Chebyshev polynomials, which have been used frequently in many areas of numerical analysis and scientific computing. The Chebyshev polynomials are a special case of the Gegenbauer and Jacobi polynomials and come in four kinds. The Chebyshev polynomials of the first kind are solutions of the following Sturm–Liouville differential equation:
The solutions of this differential equation depend on . The th solution is , where , and the range of is . Therefore, denoting the th solution by , we have:
where is the th member of the Chebyshev polynomials of the first kind. All classical orthogonal polynomials can be obtained by a recursive formula; for these polynomials it is as follows:
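In the standard normalization, this three‐term recurrence reads

$$T_0(x) = 1, \qquad T_1(x) = x, \qquad T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \qquad n \ge 1.$$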
In addition to the recursive formula, there are also direct formulas for the Chebyshev polynomials. For example, the following series gives the th Chebyshev polynomial:
Or
As mentioned above, the Chebyshev polynomials have the orthogonality property. These functions are orthogonal over the interval with respect to the following weight function:
where is the Kronecker delta function, and
Another important property of the Chebyshev polynomials is that the th Chebyshev polynomial has real roots. The following theorem explains this fact formally.
Because these polynomials are orthogonal over the interval , while the problem may be defined over another interval, we should use a transformation that either shifts the problem to this interval or shifts the Chebyshev polynomials to the interval of the problem. If we want to shift these functions from to , we can use the following transformation:
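In its standard affine form (written here in generic notation $a$, $b$ for the endpoints of the target interval), this map and the resulting shifted polynomials are

$$x = \frac{(b-a)\,t + (b+a)}{2}, \qquad T^{*}_n(x) = T_n\!\left(\frac{2x - (a+b)}{b-a}\right), \qquad t \in [-1,1],\ x \in [a,b].$$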
where and . In this chapter, we use the shifted Chebyshev polynomials of the first kind on . The remaining ingredient, which is essential for the spectral method, is the differentiation of these polynomials. In order to calculate the derivatives of the Chebyshev polynomials, we can use an operational matrix of differentiation. Denoting the operational matrix of the first derivative of the shifted Chebyshev polynomials by , its elements can be obtained as follows:
in which
It is worth mentioning that higher‐order derivatives of the Chebyshev polynomials can be obtained from powers of this operational matrix. For example, in order to obtain the second derivative of the Chebyshev polynomials, one only needs to compute the square of the matrix.
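A small numerical sketch of this idea is given below; it builds the differentiation matrix for the shifted Chebyshev basis on [0, 1] generically and checks that its square reproduces the second derivative. The construction is a generic one and not necessarily the chapter's closed‐form entries.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch: operational matrix D such that d/dx T*_k(x) = sum_j D[k, j] T*_j(x) for the
# shifted Chebyshev polynomials T*_k on [0, 1]. Built generically with numpy rather
# than from closed-form entries.
N = 6
basis = [C.Chebyshev([0] * k + [1], domain=[0, 1]) for k in range(N)]

D = np.zeros((N, N))
for k, Tk in enumerate(basis):
    dcoef = Tk.deriv().coef                  # derivative expressed in the same shifted basis
    D[k, :len(dcoef)] = dcoef

# Higher-order derivatives follow from powers of D, e.g. D @ D for the second derivative.
x = 0.3
values = np.array([p(x) for p in basis])
second_exact = np.array([p.deriv(2)(x) for p in basis])
print(np.allclose((D @ D) @ values, second_exact))   # True: D^2 reproduces the second derivative
```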
In order to approximate functions using shifted Chebyshev polynomials, some definitions and theorems are needed, which are presented in the rest of this section.
It is known that any function can be expanded as follows:
where denotes shifted Chebyshev polynomials to and
that is,
Now, let us assume that
is a finite‐dimensional subspace which, as mentioned in Theorem 15.1, is a complete subspace of [47,51]. Let us define the ‐orthogonal projection such that for any function :
It is clear that is the best approximation of in and can be expanded as [72]:
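The following minimal sketch illustrates such a truncated expansion numerically, computing the coefficients of a sample function from the discrete orthogonality of the Chebyshev–Gauss nodes; the sample function and truncation level are illustrative assumptions.

```python
import numpy as np

# Sketch: truncated shifted-Chebyshev expansion of f(x) = exp(x) on [0, 1], with the
# coefficients obtained from discrete orthogonality at Chebyshev-Gauss nodes.
f = np.exp
N, m = 10, 64                                    # truncation level, quadrature nodes
theta = np.pi * (np.arange(m) + 0.5) / m
x = 0.5 * (1 + np.cos(theta))                    # Chebyshev-Gauss nodes mapped to [0, 1]

# c_k = (2/m) * sum_j f(x_j) cos(k * theta_j), with the k = 0 coefficient halved.
c = np.array([(2.0 / m) * np.dot(f(x), np.cos(k * theta)) for k in range(N)])
c[0] *= 0.5

def f_N(xq):
    """Evaluate sum_k c_k T*_k(xq) using T*_k(x) = cos(k * arccos(2x - 1))."""
    th = np.arccos(2.0 * np.asarray(xq) - 1.0)
    return sum(c[k] * np.cos(k * th) for k in range(N))

print(abs(f_N(0.37) - f(0.37)))                  # truncation error of the projection
```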
In this part, we demonstrate how to apply the numerical methods explained in Section 15.2. First, we discretize the temporal dimension of the problem using a finite difference scheme, and then we apply the collocation algorithm to the spatial dimensions. Many researchers have developed different finite difference schemes to approximate fractional or integer‐order derivatives of an (unknown) function, and others have combined these schemes with other methods to solve various problems. The basic idea of the finite difference method is inspired by the Taylor series. In this chapter, we use a θ‐weighted finite difference scheme for the temporal approximation. The stability of such a scheme is a challenging issue in numerical methods; the Crank–Nicolson scheme is the most famous θ‐weighted finite difference scheme and is unconditionally stable.
The domain of the problem is . In order to discretize the temporal dimension, the time interval is divided into steps with time step size . Note that , and we denote by and by . By applying the θ‐weighted finite difference scheme to Eq. 15.5, we obtain the following equation:
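In generic form, for a linear spatial operator $\mathcal{L}$ and source term $F$ (placeholders standing for the right‐hand side of Eq. 15.5), the θ‐weighted scheme and its rearranged update read

$$\frac{v^{n+1} - v^{n}}{\Delta t} = \theta\left(\mathcal{L}v^{n+1} + F^{n+1}\right) + (1-\theta)\left(\mathcal{L}v^{n} + F^{n}\right), \qquad 0 \le \theta \le 1,$$

$$\left(I - \theta\,\Delta t\,\mathcal{L}\right) v^{n+1} = \left(I + (1-\theta)\,\Delta t\,\mathcal{L}\right) v^{n} + \Delta t\left(\theta F^{n+1} + (1-\theta) F^{n}\right),$$

where, in the usual convention, θ = 0 gives the explicit scheme, θ = 1 the fully implicit scheme, and θ = 1/2 the Crank–Nicolson scheme.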
in which . If we choose , the scheme is explicit; if , the scheme is implicit; and if , the Crank–Nicolson scheme is obtained. In the above equation, the goal is to obtain the solution at the new time level, so by rearranging it we have:
In order to approximate in different time steps, we expand it as follows:
or equivalently in the matrix form:
where
and
It is worth mentioning that denotes the Chebyshev polynomials shifted to the interval . Therefore, if we consider the expansion of Eq. 15.41 for , we obtain:
or equivalently in the matrix form:
We construct the residual functions as follows:
By choosing collocation points, substituting Eqs. 15.41–15.45 into the residual functions, setting them equal to zero, and then selecting equations from them, we obtain a system of linear algebraic equations, which can be solved for the unknown coefficients in each time step using any linear solver, such as LU decomposition or successive over‐relaxation (SOR).
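To make this assembly concrete, the sketch below advances a simple one‐dimensional heat equation with a known source using the θ‐weighted step in time and shifted Chebyshev collocation in space. The model problem, boundary data, and parameters are illustrative stand‐ins; the chapter's system additionally carries the coupling to the overspecified condition.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch: theta-weighted time stepping of v_t = v_xx + g(x, t) on [0, 1] with
# homogeneous Dirichlet boundaries, using shifted Chebyshev collocation in space.
# The model problem and all data are illustrative stand-ins for Eqs. 15.5-15.8.
N, dt, theta = 14, 1e-3, 0.5                       # basis size, time step, Crank-Nicolson
g = lambda x, t: (np.pi**2 - 1) * np.exp(-t) * np.sin(np.pi * x)   # manufactured source
exact = lambda x, t: np.exp(-t) * np.sin(np.pi * x)                # manufactured solution

basis = [C.Chebyshev([0] * k + [1], domain=[0, 1]) for k in range(N)]
nodes = 0.5 * (1 + np.cos(np.pi * (2 * np.arange(N - 2) + 1) / (2 * (N - 2))))

B  = np.array([[p(x) for p in basis] for x in nodes])             # basis values at interior nodes
B2 = np.array([[p.deriv(2)(x) for p in basis] for x in nodes])    # second derivatives there
Bc = np.array([[p(x) for p in basis] for x in (0.0, 1.0)])        # boundary rows

# Fit the initial condition, then march the theta-weighted scheme in time.
a = np.linalg.solve(np.vstack([B, Bc]),
                    np.concatenate([exact(nodes, 0.0), [0.0, 0.0]]))
t = 0.0
for _ in range(100):
    lhs = np.vstack([B - dt * theta * B2, Bc])                    # collocated rows + boundary rows
    rhs = np.concatenate([(B + dt * (1 - theta) * B2) @ a
                          + dt * (theta * g(nodes, t + dt) + (1 - theta) * g(nodes, t)),
                          [0.0, 0.0]])
    a = np.linalg.solve(lhs, rhs)                                 # new expansion coefficients
    t += dt

print(abs(B @ a - exact(nodes, t)).max())                         # error after 100 steps
```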
The aim of this problem is to find the unknown source parameter. To obtain it, from Eqs. 15.1 and 15.4 we have
Note that for , we can write
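In the classical formulation sketched in Section 15.1.1, the source parameter follows from logarithmic differentiation of the auxiliary function r(t); a minimal numerical sketch of that recovery on a uniform time grid is given below, where both the relation $p(t) = -r'(t)/r(t)$ and the test data are assumptions of the sketch.

```python
import numpy as np

# Sketch: recover the source parameter from r(t) = exp(-int_0^t p(s) ds) through
# p(t) = -r'(t)/r(t), using a finite-difference derivative on a uniform time grid.
dt = 1e-3
t = np.arange(0.0, 1.0 + dt, dt)
p_true = 1.0 + t**2                           # hypothetical exact source parameter
r = np.exp(-(t + t**3 / 3.0))                 # r(t) for this p, integrated analytically

p_rec = -np.gradient(np.log(r), dt)           # p(t) = -(d/dt) ln r(t)
print(abs(p_rec - p_true).max())              # small, apart from O(dt) endpoint error
```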
In this section, we test the presented algorithm from different aspects. For this purpose, five examples with fixed parameter values are provided to test the accuracy, efficiency, stability, and convergence of the proposed algorithm. Because inverse problems are usually ill‐posed, the stability of the algorithm should be checked. To check the stability of the proposed algorithm, some noise is added to the overspecified condition. We add 1%, 3%, and 5% noise to as follows:
in which is a random number in the interval , where and denotes the noisy condition. On the other hand, the convergence and the order of convergence of the algorithm are demonstrated by increasing the number of collocation points: it is shown that as the number of collocation points increases, the error of the approximated solutions decreases. To compute the error of the approximated solutions, and also to compare our results with each other and with other methods in the literature, the following norms are used:
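A minimal sketch of both the noise contamination and the error measures is given below; the multiplicative noise model and the discrete maximum and L2 norms are standard choices that are assumed to match the chapter's formulas only in form, and the data E(t) are hypothetical.

```python
import numpy as np

# Sketch of the stability test and the error measures used to report the results.
rng = np.random.default_rng(0)

def add_noise(E, percent):
    """Contaminate the overspecified data E with uniformly distributed noise of the given percentage."""
    sigma = rng.uniform(-1.0, 1.0, size=np.shape(E))
    return E * (1.0 + percent / 100.0 * sigma)

def error_norms(approx, exact):
    """Discrete maximum norm and root-mean-square (L2-type) norm of the error."""
    e = np.abs(approx - exact)
    return e.max(), np.sqrt(np.mean(e**2))

t = np.linspace(0.0, 1.0, 11)
E = np.exp(-t)                                 # hypothetical overspecified condition
for pct in (1, 3, 5):                          # the 1%, 3%, and 5% noise tests from the text
    print(pct, error_norms(add_noise(E, pct), E))
```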
It is worth mentioning that all computations were performed in Maple. Moreover, in our computations we set and .
Let us consider the following inverse problem as the first test example
where and the exact solution is given as follows:
This equation was solved in [1] by Mohebbi and Abbasi using a compact finite difference scheme. Here, we solve it with the presented method, using in our computations for this example. In Table 15.1, the absolute error of the presented numerical algorithm for the unknown function is compared with that of the compact finite difference scheme of [1]; the comparison of the absolute errors for the source parameter is presented in Table 15.2. It can be concluded from Tables 15.1 and 15.2 that the proposed method is more accurate than finite difference solvers.
Table 15.1 A comparison between absolute error obtained by the methods of [1] and the presented method for in test problem 1.
Compact [1] | Present method | Present method | |
= 100, = 40 | = 10, = 18 | = 10, = 25 | |
0.1 | |||
0.2 | |||
0.3 | |||
0.4 | |||
0.5 | |||
0.6 | |||
0.7 | |||
0.8 | |||
0.9 |
Table 15.2 A comparison between absolute error obtained by the methods of [1] and the presented method for in test problem 1.
Exact | Compact [1] | Present method | Present method | |
= 100, = 40 | = 10, = 18 | = 10, = 25 | ||
0.1 | 1.01 | |||
0.2 | 1.04 | |||
0.3 | 1.09 | |||
0.4 | 1.16 | |||
0.5 | 1.25 | |||
0.6 | 1.36 | |||
0.7 | 1.49 | |||
0.8 | 1.64 | |||
0.9 | 1.81 | |||
1.0 | 2.00 |
As can be seen in Tables 15.1 and 15.2, the results of the proposed method are considerably more accurate than those of the finite difference method presented in [1]. Another aspect of the proposed method that should be checked is stability. Figure 15.2 shows the stability of the presented algorithm: it presents the absolute errors of the proposed method for the first test problem, using different numbers of collocation points and various percentages of noise in the overspecified condition. As shown in Figure 15.2, the accuracy of the proposed method is about for the noisy data, which is acceptable for this problem. We can conclude from Figure 15.2 that the presented approach is stable for different numbers of collocation points and different temporal steps.
Moreover, the convergence of the algorithm is also important. Figure 15.3 shows the convergence of the suggested numerical method: Figure 15.3a plots the residual error of this example for different numbers of collocation points, and Figure 15.3b plots the L2 norm of the error function for different numbers of collocation points. As can be seen from Figure 15.3, the convergence of the algorithm is exponential.
In the second example, consider the following one‐dimensional inverse problem:
where , and the exact solution is given as follows:
Similar to the previous example, this problem was solved in [1] by the compact finite difference method. Here, by applying the presented numerical algorithm with to this example, we obtain high‐accuracy results, which are presented in Tables 15.3 and 15.4. In Table 15.3, the absolute error of the proposed method for the unknown function is compared with the compact finite difference results of [1]; the corresponding comparison for the source parameter is provided in Table 15.4. As can be seen in Tables 15.3 and 15.4, we obtain a more accurate approximate solution using fewer spatial points and time steps.
Table 15.3 A comparison between absolute error obtained by the methods of [1] and the presented method for in test problem 2.
Compact [1] | Present method | Present method | |
= 1000, = 50 | = 10, = 15 | = 10, = 20 | |
0.1 | |||
0.2 | |||
0.3 | |||
0.4 | |||
0.5 | 0 | 0 | 0 |
0.6 | |||
0.7 | |||
0.8 | |||
0.9 |
Table 15.4 A comparison between absolute error obtained by the methods of [1] and the presented method for in test problem 2.
Exact | Compact [1] | Present method | Present method | |
= 1000, = 50 | = 10, = 15 | = 10, = 25 | ||
0.1 | ||||
0.2 | ||||
0.3 | ||||
0.4 | ||||
0.5 | ||||
0.6 | ||||
0.7 | ||||
0.8 | ||||
0.9 |
Similar to test problem 1, we test the stability of the proposed algorithm by adding noise to the overspecified condition. For different percentages of noise, we apply the proposed method with various numbers of collocation points. The results are presented in Figure 15.4, which shows the stability of the proposed approach.
Moreover, the convergence of the proposed method is shown in Figure 15.5, which plots the decrease of the error and residual functions: as the number of collocation points increases, the residual and the error of the approximated solution decrease exponentially.
In the third example, we consider the following two‐dimensional inverse problem:
in which and its exact solution is:
Shivanian and Jafarabadi developed a spectral meshless method for this problem in [23]. In Table 15.5, the results of the proposed method, obtained using , are compared with those of the meshless method of [23] in terms of the error for the unknown function and the source parameter, for different numbers of collocation points and time steps. Table 15.5 shows that the presented algorithm is more accurate than the meshless method introduced in [23].
Table 15.5 A comparison between the presented method and the method of [23] in the sense of for test problem 3.
Meshless [23] | Present method | Meshless [23] | Present method | |
, |
As can be seen from Table 15.5, the presented method has a higher convergence rate in comparison with the method of [23]. The convergence of the presented method is illustrated in Figure 15.6: in addition to Table 15.5, the corresponding error values obtained using different numbers of collocation points are shown there.
On the other hand, to show that this method is stable, the algorithm is tested with noisy data. The results obtained by adding different percentages of noise to the overspecified condition are presented in Figure 15.7, in which the number of collocation points is . The results obtained using 20 collocation points are presented in Figure 15.8. In both cases, the number of time steps equals 10.
In the fourth example, we consider another two‐dimensional inverse problem as follows:
in which and its exact solution is
This equation was solved by a splitting finite difference method in [16] and also by a greedy MLPG meshless method in [24]. The best solution obtained by the method of [16] has an accuracy of about , using time steps and 50 nodes in each spatial dimension. In Tables 15.6 and 15.7, the results of the proposed method with are reported.
Table 15.6 Accuracy obtained for the unknown function using time steps in test problem 4.
0.1 | 0.1 | 0.1 | |||
0.2 | 0.2 | 0.2 | |||
0.3 | 0.3 | 0.3 | |||
0.4 | 0.4 | 0.4 | |||
0.5 | 0.5 | 0.5 | |||
0.6 | 0.6 | 0.6 | |||
0.7 | 0.7 | 0.7 | |||
0.8 | 0.8 | 0.8 | |||
0.9 | 0.9 | 0.9 |
Table 15.7 Accuracy obtained for the source parameter using time steps in test problem 4.
Exact | = 5 | = 10 | = 15 | |
0.1 | 2.894 829 081 924 352 375 188 292 173 51 | |||
0.2 | 2.778 597 241 839 830 166 078 928 005 36 | |||
0.3 | 2.650 141 192 423 996 896 016 255 686 67 | |||
0.4 | 2.508 175 302 358 729 682 175 147 047 16 | |||
0.5 | 2.351 278 729 299 871 853 151 349 212 19 | |||
0.6 | 2.177 881 199 609 491 025 124 632 331 84 | |||
0.7 | 1.986 247 292 529 523 478 375 450 611 42 | |||
0.8 | 1.774 459 071 507 532 395 420 462 468 60 | |||
0.9 | 1.540 396 888 843 050 336 199 873 436 40 |
In addition to the high accuracy of the algorithm, its convergence can be concluded from Tables 15.6 and 15.7. Moreover, to further illustrate the convergence of the presented algorithm, the corresponding error values obtained using different numbers of collocation points are shown in Figure 15.9.
On the other hand, to show that the proposed method is stable, the algorithm is tested with noisy data. The results obtained by adding various percentages of noise to the overspecified condition are presented in Figure 15.10, in which the number of collocation points is . The results obtained using 20 collocation points are presented in Figure 15.11.
As the final example, the following three‐dimensional inverse problem is considered as fifth test example:
where , and it has the following exact solution:
Dehghan solved this equation in [13] using a seven‐point finite difference scheme. The best accuracy reported in [13] is for and for , obtained using time steps and 50 nodes in each spatial dimension. In this example, the computations are done with . Table 15.8 compares the presented algorithm with the method of [13] in terms of the error norm.
Table 15.8 Results for different time steps and different numbers of collocation points for test problem 5 in the sense of .
, | 0.3375 | 0.5656 |
, | ||
, |
As can be seen in Table 15.8, the presented method obtains more accurate results with far fewer time steps and collocation points in the spatial dimensions in comparison with the finite difference method of [13]. Moreover, Table 15.8 indicates the convergence of the algorithm, because the infinity norm of the error decreases as the number of time steps and collocation points increases. On the other hand, the stability of the presented algorithm is tested by adding various percentages of noise to the overspecified condition; the results are illustrated in Figures 15.12–15.14.
In this chapter, a multidimensional inverse problem of finding a source parameter was considered, and the connection between brain activity reconstruction and evaluating a source parameter in an inverse problem was illustrated. In order to approximate the solution of this inverse problem, a finite difference/pseudo‐spectral method was developed: a θ‐weighted finite difference scheme was used to discretize the temporal dimension, and a pseudo‐spectral scheme based on the collocation algorithm was used to approximate the spatial dimensions. Moreover, for the function approximation, the Chebyshev polynomials of the first kind were selected as basis functions. The presented algorithm was tested for convergence and stability. To test the convergence of the algorithm, we showed that by increasing the number of basis functions and collocation points, the value of the residual decreases. To show the stability of the presented method, noise was added to the overspecified condition and the resulting numerical results were reported. In addition, the computational experiments confirmed the high accuracy of the proposed method in comparison with other methods, such as meshless methods, finite difference methods, and other relevant methods in the literature. Another important feature of the presented algorithm is that it can accurately recover both linear and nonlinear source parameters.