
Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain differential equations, often involving the use of the Fast Fourier Transform. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (for example, as a Fourier series which is a sum of sinusoids) and then to choose the coefficients in the sum in order to satisfy the differential equation as well as possible.

Spectral methods and finite element methods are closely related and built on the same ideas; the main difference between them is that spectral methods use basis functions that are nonzero over the whole domain, while finite element methods use basis functions that are nonzero only on small subdomains. In other words, spectral methods take a global approach while finite element methods use a local approach. Partially for this reason, spectral methods have excellent error properties, with so-called "exponential convergence" being the fastest possible when the solution is smooth. However, there are no known three-dimensional single-domain spectral shock-capturing results (shock waves are not smooth).[1] In the finite element community, a method where the degree of the elements is very high or increases as the grid parameter h decreases to zero is sometimes called a spectral element method.

Spectral methods can be used to solve ordinary differential equations (ODEs), partial differential equations (PDEs) and eigenvalue problems involving differential equations. When applying spectral methods to time-dependent PDEs, the solution is typically written as a sum of basis functions with time-dependent coefficients; substituting this in the PDE yields a system of ODEs in the coefficients which can be solved using any numerical method for ODEs. Eigenvalue problems for ODEs are similarly converted to matrix eigenvalue problems.

Spectral methods were developed in a long series of papers by Steven Orszag starting in 1969 including, but not limited to, Fourier series methods for periodic geometry problems, polynomial spectral methods for finite and unbounded geometry problems, pseudospectral methods for highly nonlinear problems, and spectral iteration methods for fast solution of steady state problems. The implementation of the spectral method is normally accomplished with a collocation, Galerkin, or tau approach.

Spectral methods are computationally less expensive than finite element methods, but become less accurate for problems with complex geometries and discontinuous coefficients. This increase in error is a consequence of the Gibbs phenomenon.

Examples of spectral methods
A concrete, linear example

Here we presume an understanding of basic multivariate calculus and Fourier series. If g(x,y) is a known, complex-valued function of two real variables, and g is periodic in x and y (that is, g(x,y)=g(x+2π,y)=g(x,y+2π)) then we are interested in finding a function f(x,y) so that

\( \left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)f(x,y)=g(x,y)\quad \text{for all } x,y \)

where the expression on the left is the sum of the second partial derivatives of f in x and y (that is, the Laplacian of f). This is the Poisson equation, and can be physically interpreted as some sort of heat conduction problem, or a problem in potential theory, among other possibilities.

If we write f and g in Fourier series:

\( f=:\sum a_{j,k}e^{i(jx+ky)} \)
\( g=:\sum b_{j,k}e^{i(jx+ky)} \)

and substitute into the differential equation, we obtain this equation:

\( \sum -a_{j,k}(j^2+k^2)e^{i(jx+ky)}=\sum b_{j,k}e^{i(jx+ky)} \)

We have exchanged partial differentiation with an infinite sum, which is legitimate if we assume for instance that f has a continuous second derivative. By the uniqueness theorem for Fourier expansions, we must then equate the Fourier coefficients term by term, giving

\( (*) a_{j,k}=-\frac{b_{j,k}}{j^2+k^2} \)

which is an explicit formula for the Fourier coefficients \( a_{j,k} \).
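As a quick illustration (the particular right-hand side here is our own choice, not part of the original example), take \( g(x,y)=\sin(3x)\sin(4y) \). Every Fourier mode of this g has \( j^2+k^2=3^2+4^2=25 \), so (*) gives

\( f(x,y)=-\frac{1}{25}\sin(3x)\sin(4y), \)

which can be checked directly to satisfy the Poisson equation above.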

With periodic boundary conditions, the Poisson equation possesses a solution only if \( b_{0,0}=0 \) (note that formula (*) is not defined for j = k = 0). Therefore we can freely choose \( a_{0,0} \), which will be equal to the mean of the solution. This corresponds to choosing the integration constant.

To turn this into an algorithm, only finitely many frequencies are solved for. This introduces an error which can be shown to be proportional to \( h^n \), where h:=1/n and n is the highest frequency treated.
Algorithm

Compute the Fourier transform \( (b_{j,k}) \) of g.
Compute the Fourier transform \( (a_{j,k}) \) of f via the formula (*).
Compute f by taking an inverse Fourier transform of \( (a_{j,k}) \).

Since we are only interested in a finite window of frequencies (of size n, say), this can be done using a Fast Fourier Transform algorithm. Therefore, globally the algorithm runs in time O(n log n).
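A minimal sketch of these three steps in Python with NumPy's FFT routines is given below; the grid size, the test right-hand side, and the name solve_poisson_periodic are illustrative choices rather than part of the algorithm itself.

import numpy as np

def solve_poisson_periodic(g):
    # Solve f_xx + f_yy = g on [0, 2*pi)^2 with periodic boundary conditions.
    # g is sampled on an equispaced n-by-n grid; the returned f is normalised
    # to have zero mean, i.e. a_{0,0} = 0 as discussed above.
    n = g.shape[0]
    b = np.fft.fft2(g)                    # Fourier coefficients b_{j,k} of g
    j = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers 0,...,n/2-1,-n/2,...,-1
    jj, kk = np.meshgrid(j, j, indexing="ij")
    denom = jj**2 + kk**2
    denom[0, 0] = 1.0                     # avoid division by zero at j = k = 0
    a = -b / denom                        # formula (*): a_{j,k} = -b_{j,k}/(j^2 + k^2)
    a[0, 0] = 0.0                         # free integration constant: choose zero mean
    return np.real(np.fft.ifft2(a))       # back to physical space

# Example: g = -2 sin(x) sin(y) has the exact zero-mean solution f = sin(x) sin(y).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
g = -2.0 * np.sin(X) * np.sin(Y)
f = solve_poisson_periodic(g)
print(np.max(np.abs(f - np.sin(X) * np.sin(Y))))   # error close to machine precision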
A concrete, nonlinear example

We wish to solve the forced, transient, nonlinear Burgers' equation using a spectral approach.

Given u(x,0) on the periodic domain \( x\in\left[0,2\pi\right) \), find \( u \in \mathcal{U} \) such that

\( \partial_{t} u + u \partial_{x} u = \rho \partial_{xx} u + f \quad \forall x\in\left[0,2\pi\right), \forall t>0 \)

where ρ is the viscosity coefficient. In weak conservative form this becomes

\( \langle \partial_{t} u , v \rangle = \langle \partial_x \left(-\frac{1}{2} u^2 + \rho \partial_{x} u\right) , v \rangle + \langle f, v \rangle \quad \forall v\in \mathcal{V}, \forall t>0 \)

where \( \langle f, g \rangle := \int_{0}^{2\pi} f(x) \overline{g(x)}\,dx \) denotes the \( L^{2} \) inner product on the periodic domain. Integrating by parts and using periodicity gives

\( \langle \partial_{t} u , v \rangle = \langle \frac{1}{2} u^2 - \rho \partial_{x} u , \partial_x v\rangle+\langle f, v \rangle \quad \forall v\in \mathcal{V}, \forall t>0. \)

To apply the Fourier-Galerkin method, choose both

\( \mathcal{U}^N := \left\{ u : u(x,t)=\sum_{k=-N/2}^{N/2-1} \hat{u}_{k}(t) e^{i k x}\right\} \)

and

\( \mathcal{V}^N :=\text{ span}\left\{ e^{i k x} : k\in -N/2,\dots,N/2-1\right\} \)

where \( \hat{u}_k(t):=\frac{1}{2\pi}\langle u(x,t), e^{i k x} \rangle \). This reduces the problem to finding \( u\in\mathcal{U}^N \) such that

\( \langle \partial_{t} u , e^{i k x} \rangle = \langle \frac{1}{2} u^2 - \rho \partial_{x} u , \partial_x e^{i k x} \rangle + \langle f, e^{i k x} \rangle \quad \forall k\in \left\{ -N/2,\dots,N/2-1 \right\}, \forall t>0. \)

Using the orthogonality relation \( \langle e^{i l x}, e^{i k x} \rangle = 2 \pi \delta_{lk} \) where \( \delta_{lk} \) is the Kronecker delta, we simplify the above three terms for each k to see

\( \begin{align} \langle \partial_{t} u , e^{i k x}\rangle &= \langle \partial_{t} \sum_{l} \hat{u}_{l} e^{i l x} , e^{i k x} \rangle = \langle \sum_{l} \partial_{t} \hat{u}_{l} e^{i l x} , e^{i k x} \rangle = 2 \pi \partial_t \hat{u}_k, \\ \langle f , e^{i k x} \rangle &= \langle \sum_{l} \hat{f}_{l} e^{i l x} , e^{i k x}\rangle= 2 \pi \hat{f}_k, \text{ and} \\ \langle \frac{1}{2} u^2 - \rho \partial_{x} u , \partial_x e^{i k x} \rangle &= \langle \frac{1}{2} \left(\sum_{p} \hat{u}_p e^{i p x}\right) \left(\sum_{q} \hat{u}_q e^{i q x}\right) - \rho \partial_x \sum_{l} \hat{u}_l e^{i l x} , \partial_x e^{i k x} \rangle \\ &= \langle \frac{1}{2} \sum_{p} \sum_{q} \hat{u}_p \hat{u}_q e^{i \left(p+q\right) x} , i k e^{i k x} \rangle - \langle \rho i \sum_{l} l \hat{u}_l e^{i l x} , i k e^{i k x} \rangle \\ &= -\frac{i k}{2} \langle \sum_{p} \sum_{q} \hat{u}_p \hat{u}_q e^{i \left(p+q\right) x} , e^{i k x} \rangle - \rho k \langle \sum_{l} l \hat{u}_l e^{i l x} , e^{i k x} \rangle \\ &= - i \pi k \sum_{p+q=k} \hat{u}_p \hat{u}_q - 2\pi\rho{}k^2\hat{u}_k. \end{align} \)

Assemble the three terms for each k to obtain

\( 2 \pi \partial_t \hat{u}_k = - i \pi k \sum_{p+q=k} \hat{u}_p \hat{u}_q - 2\pi\rho{}k^2\hat{u}_k + 2 \pi \hat{f}_k \quad k\in\left\{ -N/2,\dots,N/2-1 \right\}, \forall t>0. \)

Dividing through by \( 2\pi \), we finally arrive at

\( \partial_t \hat{u}_k = - \frac{i k}{2} \sum_{p+q=k} \hat{u}_p \hat{u}_q - \rho{}k^2\hat{u}_k + \hat{f}_k \quad k\in\left\{ -N/2,\dots,N/2-1 \right\}, \forall t>0. \)

With Fourier-transformed initial conditions \( \hat{u}_{k}(0) \) and forcing \( \hat{f}_{k}(t) \), this coupled system of ordinary differential equations may be integrated in time (using, e.g., a Runge-Kutta technique) to find a solution. The nonlinear term is a convolution, and there are several transform-based techniques for evaluating it efficiently. See the references by Boyd and Canuto et al. for more details.
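As a rough sketch of how this system might be integrated in practice, assuming zero forcing, a simple sine initial condition, and a classical fourth-order Runge-Kutta time step (none of which come from the derivation above), the convolution sum can be evaluated pseudospectrally by transforming to physical space, squaring, and transforming back; no dealiasing is applied here.

import numpy as np

N = 128                                    # number of Fourier modes
rho = 0.05                                 # viscosity coefficient
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers -N/2,...,N/2-1

u_hat = np.fft.fft(np.sin(x))              # \hat{u}_k(0) for u(x,0) = sin(x)

def rhs(u_hat):
    # d/dt u_hat_k = -(i k / 2) (u^2)_k - rho k^2 u_hat_k, with zero forcing;
    # the FFT of u^2 replaces the convolution sum over p + q = k (aliased).
    u = np.real(np.fft.ifft(u_hat))
    return -0.5j * k * np.fft.fft(u * u) - rho * k**2 * u_hat

dt = 1.0e-3
for _ in range(int(1.0 / dt)):             # integrate to t = 1 with classical RK4
    k1 = rhs(u_hat)
    k2 = rhs(u_hat + 0.5 * dt * k1)
    k3 = rhs(u_hat + 0.5 * dt * k2)
    k4 = rhs(u_hat + dt * k3)
    u_hat = u_hat + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

u = np.real(np.fft.ifft(u_hat))            # solution u(x, 1) on the grid

In practice the quadratic term is usually dealiased, for example with the 3/2 (zero-padding) rule; see the references by Boyd and Canuto et al. mentioned above.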


A relationship with the spectral element method

One can show that if g is infinitely differentiable, then the numerical algorithm using Fast Fourier Transforms will converge faster than any polynomial in the grid size h. That is, for any n>0, there is a \( C<\infty \) such that the error is less than \( Ch^n \) for all sufficiently small values of h. We say that the spectral method is of order n, for every n>0.
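A quick numerical illustration of this behaviour, reusing the solve_poisson_periodic sketch from the linear example above with the smooth test solution \( f(x,y)=e^{\sin x}e^{\cos y} \) (our own choice of test case), might look as follows.

import numpy as np

for n in (8, 16, 32, 64):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f_true = np.exp(np.sin(X)) * np.exp(np.cos(Y))
    # g is the Laplacian of f_true, computed analytically
    g = ((np.cos(X)**2 - np.sin(X)) + (np.sin(Y)**2 - np.cos(Y))) * f_true
    f = solve_poisson_periodic(g)          # defined in the sketch above
    err = np.max(np.abs((f - f.mean()) - (f_true - f_true.mean())))
    print(n, err)

The printed errors fall far faster than any fixed power of h = 1/n, consistent with the statement above.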

Because a spectral element method is a finite element method of very high order, there is a similarity in the convergence properties. However, whereas the spectral method is based on the eigendecomposition of the particular boundary value problem, the spectral element method does not use that information and works for arbitrary elliptic boundary value problems.


See also

Discrete element method
Gaussian grid
Pseudo-spectral method
Spectral element method
Galerkin method
Collocation method
