Ordinary differential equation

In mathematics, an ordinary differential equation (ODE) is an equation in which there is only one independent variable and one or more derivatives of a dependent variable with respect to the independent variable, so that all the derivatives occurring in the equation are ordinary derivatives.[1][2]

A simple example is Newton's second law of motion, which relates the displacement of an object to time under an applied force and leads to the differential equation

\( m \frac{\mathrm{d}^2 x(t)}{\mathrm{d}t^2} = F(x(t)),\, \)

for the motion of a particle of constant mass m. In general, the force F depends upon the position x(t) of the particle at time t, and thus the unknown function x(t) appears on both sides of the differential equation, as is indicated in the notation F(x(t)).[3][4][5][6]

Ordinary differential equations are distinguished from partial differential equations, which involve partial derivatives of functions of several variables.

Ordinary differential equations arise in many different contexts including geometry, mechanics, astronomy and population modelling. Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert and Euler.

Much study has been devoted to the solution of ordinary differential equations. In the case where the equation is linear, it can be solved by analytical methods. Unfortunately, most of the interesting differential equations are non-linear and, with a few exceptions, cannot be solved exactly. Approximate solutions are obtained using numerical methods on a computer (see numerical ordinary differential equations).
The trajectory of a projectile launched from a cannon follows a curve determined by an ordinary differential equation that is derived from Newton's second law.
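As a concrete illustration of such a numerical treatment, the following is a minimal sketch assuming Python with SciPy's solve_ivp and an illustrative drag-free projectile of unit mass; the specific values and function names are assumptions for the example only.

# Minimal sketch (assuming SciPy): integrate Newton's second law m x'' = F
# for a drag-free projectile of unit mass; all values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81  # gravitational acceleration in m/s^2

def rhs(t, state):
    # state = (x, y, vx, vy); the force gives the accelerations (0, -g)
    x, y, vx, vy = state
    return [vx, vy, 0.0, -g]

v0 = 20.0  # launch speed, 45 degree angle (illustrative)
state0 = [0.0, 0.0, v0 * np.cos(np.pi / 4), v0 * np.sin(np.pi / 4)]

sol = solve_ivp(rhs, (0.0, 2.0), state0, max_step=0.01)
print(sol.y[0, -1], sol.y[1, -1])  # approximate position at t = 2 s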

Definitions
Ordinary differential equation

Let y be an unknown function

\( y\colon \mathbb{R} \to \mathbb{R} \)

in x with \( y^{(n)} \) the nth derivative of y, and let F be a given function

\( F\colon\mathbb{R}^{n+1}\rightarrow\mathbb{R}, \)

then an equation of the form

\( F(x,y,y',\ \dotsc,\ y^{(n-1)})=y^{(n)} \)

is called an ordinary differential equation of order n.[7][8] If y is an unknown vector valued function

\( y\colon \mathbb{R} \to \mathbb{R}^m, \)

it is called a system of ordinary differential equations of dimension m (in this case, \( F\colon\mathbb{R}^{mn+1}\rightarrow\mathbb{R}^m \)).

More generally, an implicit ordinary differential equation of order n takes the form

\( F\left(x, y, y', y'',\ \dotsc,\ y^{(n)}\right) = 0 \)

where \( F\colon\mathbb{R}^{n+2}\rightarrow\mathbb{R} \) depends on \( y^{(n)} \).[9] To distinguish the above case from this one, an equation of the form

\( F\left(x, y, y', y'',\ \dotsc,\ y^{(n-1)}\right) = y^{(n)} \)

is called an explicit differential equation.

A differential equation not depending on x is called autonomous.

A differential equation is said to be linear if F can be written as a linear combination of the derivatives of y together with a constant term, all possibly depending on x:

\( y^{(n)} = \sum_{i=0}^{n-1} a_i(x) y^{(i)} + r(x) \)

with \( a_i(x) \) and \( r(x) \) continuous functions in x.[10][11][12] The function \( r(x) \) is called the source term; if \( r(x)=0 \) then the linear differential equation is called homogeneous, otherwise it is called non-homogeneous or inhomogeneous.[13][14]
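As a small symbolic illustration (a sketch assuming the SymPy library; the coefficients are arbitrary choices), the inhomogeneous linear equation y' = -2y + cos x can be solved in closed form, and dropping the source term would leave the homogeneous equation:

# Sketch (assuming SymPy): a first-order linear ODE y' = a(x)*y + r(x)
# with the illustrative choices a(x) = -2 and source term r(x) = cos(x).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), -2 * y(x) + sp.cos(x))  # inhomogeneous (r != 0)
print(sp.dsolve(ode, y(x)))
# the result contains C1*exp(-2*x), the general solution of the
# homogeneous equation y' = -2*y, plus one particular solution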
Solutions

Given a differential equation

\( F(x, y, y', \dotsc, y^{(n)}) = 0 \)

a function u: I ⊂ R → R is called a solution or integral curve for F, if u is n-times differentiable on I, and

\( F(x,u,u',\ \dotsc,\ u^{(n)})=0 \quad x \in I. \)

Given two solutions u: J ⊂ R → R and v: I ⊂ R → R, u is called an extension of v if I ⊂ J and

\( u(x) = v(x) \quad x \in I.\, \)

A solution which has no extension is called a maximal solution. A solution defined on all of R is called a global solution.

A general solution of an n-th order equation is a solution containing n arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to satisfy given initial conditions or boundary conditions.[15] A singular solution is a solution which cannot be obtained by assigning definite values to the arbitrary constants in the general solution.[16]
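A short symbolic sketch of this distinction, assuming SymPy and the illustrative equation y'' + y = 0: the general solution carries two arbitrary constants, and imposing initial conditions selects a particular solution.

# Sketch (assuming SymPy): general solution with two arbitrary constants,
# then a particular solution obtained by fixing initial conditions.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)  # a second-order equation, so n = 2

general = sp.dsolve(ode, y(x))   # y(x) = C1*sin(x) + C2*cos(x)
particular = sp.dsolve(ode, y(x),
                       ics={y(0): 1, y(x).diff(x).subs(x, 0): 0})
print(general)     # contains the arbitrary constants C1 and C2
print(particular)  # Eq(y(x), cos(x))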
Applications

Ordinary differential equations provide the basic mathematical framework for describing how objects and phenomena in the natural and social sciences evolve and vary. Many principles and laws in physics, chemistry, biology, engineering, aerospace, medicine, economics and finance can be expressed as ordinary differential equations, such as Newton's laws of motion, Newton's law of universal gravitation, the law of conservation of energy, models of population growth, ecological competition, the spread of infectious diseases, genetic variation, stock trends, interest rates and changes in market equilibrium prices. Understanding and analysing such problems largely reduces to studying the ordinary differential equations that describe the corresponding mathematical models, so the theory and methods of ordinary differential equations are widely used across the sciences.

Examples
Main article: Examples of differential equations
Existence and uniqueness of solutions

There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are
Peano existence theorem: assumption, F continuous; conclusion, local existence only.
Picard–Lindelöf theorem: assumption, F Lipschitz continuous; conclusion, local existence and uniqueness.

which are both local results.
Global uniqueness and maximum domain of solution

When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:

Theorem[17] For each initial condition \( (x_0,y_0) \) there exists a unique maximum (possibly infinite) open interval

\( I_{\max} = \left]x_-,x_+\right[,\ x_\pm \in \mathbb{R} \cup \{\pm\infty\},\ x_0 \in I_{\max} \)

such that any solution satisfying this initial condition is a restriction of the solution with domain \( I_{\max} \) that satisfies this initial condition.

In the case that \( x_\pm \neq \pm \infty \) , there are exactly two possibilities

explosion in finite time: \( \lim_{x \to x_\pm} ||y(x)|| = \infty \)
leaves domain of definition: \( \lim_{x \to x_\pm} y(x) \in \partial \bar{\Omega} \)

where \( \Omega \) is the open set in which F is defined, and \( \partial \bar{\Omega} \) is its boundary.

Note that the maximum domain of the solution

is always an interval (to have uniqueness)
may be smaller than \( \mathbb{R} \)
may depend on the specific choice of \( (x_0,y_0). \)

Example \( y' = y^2 \)

This means that \( F(x,y)=y^2 \), which is \( C^1 \) and therefore locally Lipschitz continuous in y, satisfying the hypotheses of the Picard–Lindelöf theorem.

Even in such a simple setting, the maximum domain of the solution cannot be all of \( \mathbb{R} \), since the solution is

\( y(x) = \frac{y_0}{(x_0-x)y_0+1} \)

which has maximum domain:

\( \begin{cases} \mathbb{R} & y_0 = 0 \\ \left]-\infty, x_0+\frac{1}{y_0}\right[ & y_0 > 0 \\ \left]x_0+\frac{1}{y_0},+\infty\right[ & y_0 < 0 \end{cases} \)

This shows clearly that the maximum interval may depend on the initial conditions.

The domain of y could be taken as being \( \mathbb{R} \setminus \left\{ x_0+\frac{1}{y_0} \right\} \), but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.

The maximum domain is not \( \mathbb{R} \) because \( \lim_{x \to x_\pm} ||y(x)|| = \infty \), which is one of the two possible cases according to the above theorem.
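The blow-up can also be observed numerically; a brief sketch assuming SciPy, with the illustrative choice \( x_0 = 0 \) and \( y_0 = 1 \), so that the solution only exists on \( ]-\infty, 1[ \):

# Sketch (assuming SciPy): y' = y^2 with y(0) = 1 blows up at x = 1, so an
# adaptive integrator cannot be carried past the end of the maximum domain.
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda x, y: y**2, (0.0, 2.0), [1.0])
print(sol.status)    # a nonzero status indicates the solver gave up early
print(sol.t[-1])     # the last reachable x stalls just below 1
print(sol.y[0, -1])  # y has grown very large near the blow-up point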

Reduction to a first order system

Any differential equation of order n can be written as a system of n first-order differential equations. Given an explicit ordinary differential equation of order n (and dimension 1),

\( F\left(x, y, y', y'',\ \dotsc,\ y^{(n-1)}\right) = y^{(n)} \)

define a new family of unknown functions

\( y_i := y^{(i-1)}.\! \)

for i from 1 to n.

The original differential equation can be rewritten as the system of differential equations with order 1 and dimension n given by

\( \begin{array}{rcl} y_1^\prime&=&y_2\\ y_2^\prime&=&y_3\\ &\vdots&\\ y_{n-1}^\prime&=&y_n\\ y_n^\prime&=&F(x,y_1,\dotsc,y_n). \end{array} \)

which can be written concisely in vector notation as

\( \mathbf{y}^\prime=\mathbf{F}(x,\mathbf{y}) \)

with

\( \mathbf{y}:=(y_1,\dotsc,y_n) \)

and

\( \mathbf{F}(x,y_1,\dotsc,y_n)=(y_2,\dotsc,y_n,F(x,y_1,\dotsc,y_n)). \)
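As an illustration (a sketch assuming SciPy; the equation y'' = -y is an arbitrary example), the order-2 equation is rewritten as a two-dimensional first-order system and integrated numerically:

# Sketch (assuming SciPy): y'' = -y rewritten with y1 = y, y2 = y' as the
# first-order system y1' = y2, y2' = F(x, y1, y2) = -y1.
import numpy as np
from scipy.integrate import solve_ivp

def system(x, Y):
    y1, y2 = Y
    return [y2, -y1]

sol = solve_ivp(system, (0.0, 2 * np.pi), [1.0, 0.0], t_eval=[2 * np.pi])
print(sol.y[:, -1])  # close to [1, 0]; the exact solution is y(x) = cos(x)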

Linear ordinary differential equations
Main article: Linear differential equation

A well-understood particular class of differential equations is that of linear differential equations. We can always reduce an explicit linear differential equation of any order to a system of differential equations of order 1

\( y_i'(x) = \sum_{j=1}^n a_{i,j}(x) y_j + b_i(x) \, \mathrm{,} \quad i = 1,\dotsc,n \)

which we can write concisely using matrix and vector notation as

\( \mathbf{y}'(x) = \mathbf{A}(x) \mathbf{y}(x) + \mathbf{b}(x) \)

with

\( \mathbf{y}(x):=(y_1(x),\dotsc,y_n(x)) \)
\( \mathbf{b}(x):=(b_1(x),\dotsc,b_n(x)) \)
\( \mathbf{A}(x):=(a_{i,j}(x)) \, \mathrm{,} \quad i,j = 1,\dotsc,n. \)

Homogeneous equations

The set of solutions for a system of homogeneous linear differential equations of order 1 and dimension n

\( \mathbf{y}'(x) = \mathbf{A}(x) \mathbf{y}(x) \)

forms an n-dimensional vector space. Given a basis for this vector space \( \mathbf{z}_1(x), \dotsc, \mathbf{z}_n(x) \) , which is called a fundamental system, every solution \( \mathbf{s}(x) \) can be written as

\( \mathbf{s}(x) = \sum_{i=1}^{n} c_i \mathbf{z}_i(x). \)

The n × n matrix

\( \mathbf{Z}(x) := (\mathbf{z}_1(x), \dotsc, \mathbf{z}_n(x)) \)

is called a fundamental matrix. In general there is no method to explicitly construct a fundamental system, but if one solution is known, d'Alembert reduction can be used to reduce the dimension of the differential equation by one.
Nonhomogeneous equations

The set of solutions for a system of inhomogeneous linear differential equations of order 1 and dimension n

\( \mathbf{y}'(x) = \mathbf{A}(x) \mathbf{y}(x) + \mathbf{b}(x) \)

can be constructed by finding a fundamental system \( \mathbf{z}_1(x), \dotsc, \mathbf{z}_n(x) \) of the corresponding homogeneous equation and one particular solution \( \mathbf{p}(x) \) of the inhomogeneous equation. Every solution \( \mathbf{s}(x) \) of the nonhomogeneous equation can then be written as

\( \mathbf{s}(x) = \sum_{i=1}^{n} c_i \mathbf{z}_i(x) + \mathbf{p}(x). \)

A particular solution to the nonhomogeneous equation can be found by the method of undetermined coefficients or the method of variation of parameters.

Concerning second order linear ordinary differential equations, it is well known that

\( y = e^{\int s \, dx } \Rightarrow y'' + Py' + \left ( -s' -s^2 -sP \right ) y = 0 . \)

So, if \( y_h \) is a solution of \( y'' + Py' + Qy = 0 \), then there exists \( s = \frac{y_h'}{y_h} \) such that \( Q = -s' - s^2 - sP \).

So, if \( y_h \) is a solution of \( y'' + Py' + Qy = 0 \), then a particular solution \( y_p \) of \( y'' + Py' + Qy = W \) is given by:

\( y_p = y_h \int { \left ( { 1 \over y_h^2} \int W y_h e^{\int P \,dx \,} \, dx \right ) e^{- \int P \, dx } \, dx } \) .[18]
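A small symbolic check of this quadrature formula, assuming SymPy; the concrete choices P = 0, Q = -1, W = e^{2x} and y_h = e^x are illustrative only:

# Sketch (assuming SymPy): verify the formula for y_p on the illustrative
# equation y'' - y = exp(2*x), whose homogeneous equation has y_h = exp(x).
import sympy as sp

x = sp.symbols('x')
P, Q, W = sp.Integer(0), sp.Integer(-1), sp.exp(2 * x)
y_h = sp.exp(x)

inner = sp.integrate(W * y_h * sp.exp(sp.integrate(P, x)), x)
y_p = y_h * sp.integrate(inner / y_h**2 * sp.exp(-sp.integrate(P, x)), x)

print(sp.simplify(y_p))                                         # exp(2*x)/3
print(sp.simplify(y_p.diff(x, 2) + P*y_p.diff(x) + Q*y_p - W))  # 0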

Fundamental systems for homogeneous equations with constant coefficients

If a system of homogeneous linear differential equations has constant coefficients

\( \mathbf{y}'(x) = \mathbf{A} \mathbf{y}(x) \)

then we can explicitly construct a fundamental system. Collected into a matrix, the fundamental system satisfies the matrix differential equation

\( \mathbf{Y}' = \mathbf{A} \mathbf{Y} \)

with solution as a matrix exponential

\( e^{x \mathbf{A}} \)

which is a fundamental matrix for the original differential equation. To explicitly calculate this expression we first transform A into Jordan normal form

\( e^{x \mathbf{A}} = e^{x \mathbf{C}^{-1} \mathbf{J} \mathbf{C}} = \mathbf{C}^{-1} e^{x \mathbf{J}} \mathbf{C} \)

and then evaluate the Jordan blocks

\( J_i = \begin{bmatrix} \lambda_i & 1 & \; & \; \\ \; & \ddots & \ddots & \; \\ \; & \; & \ddots & 1 \\ \; & \; & \; & \lambda_i \end{bmatrix} \)

of J separately as

\( e^{x \mathbf{J_i}} = e^{\lambda_i x} \begin{bmatrix} 1 & x & \frac{x^2}{2} & \dotso & \frac{x^{n-1}}{(n-1)!} \\ \; & \ddots & \ddots & \ddots & \vdots \\ \; & \; & \ddots & \ddots & \frac{x^2}{2} \\ \; & \; & \; & \ddots & x \\ \; & \; & \; & \; & 1 \end{bmatrix} \) .
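Numerically, the matrix exponential (and hence a fundamental matrix) can be evaluated directly; a sketch assuming SciPy's expm, with an illustrative choice of A:

# Sketch (assuming SciPy): the fundamental matrix exp(x*A) of y' = A y,
# here for the illustrative constant matrix of y'' = -y in first-order form.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

x = 1.0
Y = expm(x * A)                  # fundamental matrix evaluated at x
print(Y)                         # equals [[cos x, sin x], [-sin x, cos x]]

y0 = np.array([1.0, 0.0])
print(Y @ y0)                    # the solution with y(0) = y0, evaluated at x = 1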

General case

To solve

\( y'(x) = A(x)y(x)+b(x) \) with \( y(x_0) = y_0 \) (here \( y(x) \) is a vector or matrix, and \( A(x) \) is a matrix),

let U(x) be the solution of y'(x) = A(x)y(x) with \( U(x_0) = I \) (the identity matrix). After substituting y(x) = U(x)z(x), the equation y'(x) = A(x)y(x)+b(x) simplifies to U(x)z'(x) = b(x). Thus,

\( \mathbf{y}(x) = U(x)\mathbf{y}_0 + U(x)\int_{x_0}^x U^{-1}(t)\mathbf{b}(t)\,dt \)

If \( A(x_1) \) commutes with \( A(x_2) \) for all \( x_1 \) and \( x_2 \), then \( U(x) = e^{\int_{x_0}^x A(t)\,dt} \) (and thus \( U^{-1}(x) = e^{-\int_{x_0}^x A(t)\,dt} \)), but in the general case there is no closed-form solution, and an approximation method such as the Magnus expansion may have to be used.
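A numerical sketch of this variation-of-constants formula, assuming NumPy/SciPy and the commuting special case of a constant A (so that U(x) reduces to the matrix exponential); the matrix A, the forcing b and the initial data are illustrative assumptions:

# Sketch (assuming NumPy/SciPy): evaluate y(x) = U(x) y0 + U(x) * int U^{-1} b
# for a constant A, with U(x) = expm((x - x0) * A), and compare against a
# direct numerical solution of y' = A y + b; all data below are illustrative.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, 0.0])
x0, x1 = 0.0, 1.0
y0 = np.array([0.0, 0.0])

U = lambda x: expm((x - x0) * A)

integral, _ = quad_vec(lambda t: np.linalg.solve(U(t), b), x0, x1)
y_formula = U(x1) @ y0 + U(x1) @ integral

sol = solve_ivp(lambda t, y: A @ y + b, (x0, x1), y0,
                t_eval=[x1], rtol=1e-10, atol=1e-12)
print(y_formula)      # variation-of-constants result
print(sol.y[:, -1])   # direct numerical solution; should agree closely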
Theories of ODEs
Singular solutions

The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century did it receive special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (starting in 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field which was worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
Reduction to quadratures

The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that the differential equation meets its limitations very soon unless complex numbers are introduced. Hence analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter the real question was to be, not whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and if so, what are the characteristic properties of this function.
Fuchsian theory
Main article: Frobenius method

Two memoirs by Fuchs (Crelle, 1866, 1868) inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869, although his method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those followed in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve which remains unchanged under a rational transformation, so Clebsch proposed to classify the transcendent functions defined by the differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.
Lie's theory

From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact.

Lie's group theory of differential equations has two principal merits: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.[19]

A general approach to solving differential equations uses their symmetry properties, the continuous infinitesimal transformations that map solutions to solutions (Lie theory). Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear (partial) differential equations, to generate integrable equations, to find their Lax pairs, recursion operators and Bäcklund transforms, and finally to find exact analytic solutions.

Symmetry methods have been applied to differential equations arising in mathematics, physics, engineering, and many other disciplines.
Sturm–Liouville theory
Main article: Sturm–Liouville theory

Sturm–Liouville theory is a theory of eigenvalues and eigenfunctions of linear operators defined in terms of second-order homogeneous linear equations, and is useful in the analysis of certain partial differential equations.
Software for ODE solving

FuncDesigner (free license: BSD, uses Automatic differentiation, also can be used online via Sage-server)
odeint - A C++ library for solving ordinary differential equations numerically
VisSim - a visual language for differential equation solving
Mathematical Assistant on Web - online solving of first order (linear and with separated variables) and second order linear differential equations (with constant coefficients), including intermediate steps in the solution.
DotNumerics: Ordinary Differential Equations for C# and VB.NET Initial-value problem for nonstiff and stiff ordinary differential equations (explicit Runge-Kutta, implicit Runge-Kutta, Gear’s BDF and Adams-Moulton).
Online experiments with JSXGraph
Maxima computer algebra system (GPL)
COPASI a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB a matrix-programming software (MATrix LABoratory)
GNU Octave a high-level language, primarily intended for numerical computations.

See also

Numerical ordinary differential equations
Difference equation
Matrix differential equation
Laplace transform applied to differential equations
Boundary value problem
List of dynamical systems and differential equations topics
Separation of variables
Method of undetermined coefficients


Notes

^ Kreyszig (1972, p. 1)
^ Simmons (1972, p. 2)
^ Kreyszig (1972, p. 64)
^ Simmons (1972, pp. 1,2)
^ Halliday & Resnick (1977, p. 78)
^ Tipler (1991, pp. 78–83)
^ Harper (1976, p. 127)
^ Kreyszig (1972, p. 2)
^ Simmons (1972, p. 3)
^ Harper (1976, p. 127)
^ Kreyszig (1972, p. 24)
^ Simmons (1972, p. 47)
^ Harper (1976, p. 128)
^ Kreyszig (1972, p. 24)
^ Kreyszig (1972, p. 78)
^ Kreyszig (1972, p. 4)
^ Boscain & Chitour (2011, p. 21)
^ Polyanin, Andrei D.; Valentin F. Zaitsev (2003). Handbook of Exact Solutions for Ordinary Differential Equations, 2nd. Ed.. Chapman & Hall/CRC. ISBN 1-58488-297-2.
^ Dresner (1999, p. 9)

References

Halliday, David; Resnick, Robert (1977), Physics (3rd ed.), New York: Wiley, ISBN 0-471-71716-9
Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8.
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition)", Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
Simmons, George F. (1972), Differential Equations with Applications and Historical Notes, New York: McGraw-Hill
Tipler, Paul A. (1991), Physics for Scientists and Engineers: Extended version (3rd ed.), New York: Worth Publishers, ISBN 0-87901-432-6
Boscain, Ugo; Chitour, Yacine (2011), Introduction à l'automatique (in French)
Dresner, Lawrence (1999), Applications of Lie's Theory of Ordinary and Partial Differential Equations, Bristol and Philadelphia: Institute of Physics Publishing

Bibliography

Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw-Hill.
Hartman, Philip, Ordinary Differential Equations, 2nd Ed., Society for Industrial & Applied Math, 2002. ISBN 0-89871-510-5.
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
E.L. Ince, Ordinary Differential Equations, Dover Publications, 1958, ISBN 0-486-60349-0
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications, ISBN 0-486-49510-8
Ibragimov, Nail H (1993). CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1-3. Providence: CRC-Press. ISBN 0-8493-4488-3.
Teschl, Gerald. Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society.
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.

External links

Differential Equations at the Open Directory Project (includes a list of software for solving differential equations).
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.

