
Hamilton's principle

In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action (see that article for historical formulations). It states that the dynamics of a physical system is determined by a variational problem for a functional based on a single function, the Lagrangian, which contains all physical information concerning the system and the forces acting on it. The variational problem is equivalent to and allows for the derivation of the differential equations of motion of the physical system. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and has even been extended to quantum mechanics, quantum field theory and criticality theories.

Mathematical formulation

Hamilton's principle states that the true evolution \[ \mathbf{q}(t) \] of a system described by N generalized coordinates \[ \mathbf{q} = \left( q_{1}, q_{2}, \ldots, q_{N} \right) \] between two specified states \[ \mathbf{q}_{1} \ \stackrel{\mathrm{def}}{=}\ \mathbf{q}(t_{1}) \] and \[ \mathbf{q}_{2} \ \stackrel{\mathrm{def}}{=}\ \mathbf{q}(t_{2}) \] at two specified times \[ t_{1} \] and \[ t_{2} \] is a stationary point (a point where the variation is zero) of the action functional

\[ \mathcal{S}[\mathbf{q}] \ \stackrel{\mathrm{def}}{=}\ \int_{t_{1}}^{t_{2}} L(\mathbf{q}(t),\dot{\mathbf{q}}(t),t)\, dt \]

where \[ L(\mathbf{q},\dot{\mathbf{q}},t) \] is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in \[ \mathcal{S} \]. The action \[ \mathcal{S} \] is a functional, i.e., something that takes as its input a function and returns a single number, a scalar. In terms of functional analysis, Hamilton's principle states that the true evolution of a physical system is a solution of the functional equation

\[ \frac{\delta \mathcal{S}}{\delta \mathbf{q}(t)}=0 \]
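Since the action is just a definite time integral of the Lagrangian, it is straightforward to evaluate numerically for any candidate path. The sketch below (plain Python; the helper name `action` is ours, not a standard API) approximates \[ \mathcal{S} \] by the trapezoidal rule for a one-dimensional free particle.

```python
# Sketch (illustrative, not from the text): evaluate the action S[q] for a
# 1-D particle by discretizing the time integral of the Lagrangian.

def action(lagrangian, q, qdot, t1, t2, n=1000):
    """Approximate S = integral of L(q, qdot, t) dt over [t1, t2] (trapezoid rule)."""
    h = (t2 - t1) / n
    total = 0.0
    for i in range(n + 1):
        t = t1 + i * h
        value = lagrangian(q(t), qdot(t), t)
        weight = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += weight * value
    return total * h

# Free particle of mass m: L = (1/2) m qdot^2, no potential term.
m, v = 2.0, 3.0
L = lambda q, qdot, t: 0.5 * m * qdot**2
straight = lambda t: v * t          # uniform motion, the true path
straight_dot = lambda t: v

S = action(L, straight, straight_dot, 0.0, 1.0)
# For constant qdot the integrand is constant, so S = (1/2) m v^2 T, i.e. 9.0 here.
```

For constant-velocity motion the trapezoidal rule is exact, which makes this a convenient check on the quadrature.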

Euler-Lagrange equations for the action integral

Requiring that the true trajectory \[ \mathbf{q}(t) \] be a stationary point of the action functional \[ \mathcal{S} \] is equivalent to a set of differential equations for \[ \mathbf{q}(t) \] (the Euler-Lagrange equations), which may be derived as follows.

Let \[ \mathbf{q}(t) \] represent the true evolution of the system between two specified states \[ \mathbf{q}_{1} \ \stackrel{\mathrm{def}}{=}\ \mathbf{q}(t_{1}) \] and \[ \mathbf{q}_{2} \ \stackrel{\mathrm{def}}{=}\ \mathbf{q}(t_{2}) \] at two specified times \[ t_{1} \] and \[ t_{2} \], and let \[ \boldsymbol\varepsilon(t) \] be a small perturbation that is zero at the endpoints of the trajectory

\[ \boldsymbol\varepsilon(t_{1}) = \boldsymbol\varepsilon(t_{2}) \ \stackrel{\mathrm{def}}{=}\ 0 \]

To first order in the perturbation \[ \boldsymbol\varepsilon(t) \], the change in the action functional \[ \delta\mathcal{S} \] would be

\[ \delta \mathcal{S} = \int_{t_{1}}^{t_{2}}\; \left[ L(\mathbf{q}+\boldsymbol\varepsilon,\dot{\mathbf{q}} +\dot{\boldsymbol{\varepsilon}})- L(\mathbf{q},\dot{\mathbf{q}}) \right]dt = \int_{t_{1}}^{t_{2}}\; \left( \boldsymbol\varepsilon \cdot \frac{\partial L}{\partial \mathbf{q}} + \dot{\boldsymbol{\varepsilon}} \cdot \frac{\partial L}{\partial \dot{\mathbf{q}}} \right)\,dt \]

where we have expanded the Lagrangian L to first order in the perturbation \[ \boldsymbol\varepsilon(t) \].

Applying integration by parts to the last term results in

\[ \delta \mathcal{S} = \left[ \boldsymbol\varepsilon \cdot \frac{\partial L}{\partial \dot{\mathbf{q}}}\right]_{t_{1}}^{t_{2}} + \int_{t_{1}}^{t_{2}}\; \left( \boldsymbol\varepsilon \cdot \frac{\partial L}{\partial \mathbf{q}} - \boldsymbol\varepsilon \cdot \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right)\,dt \]

The boundary conditions \[ \boldsymbol\varepsilon(t_{1}) = \boldsymbol\varepsilon(t_{2}) \ \stackrel{\mathrm{def}}{=}\ 0 \] cause the first term to vanish

\[ \delta \mathcal{S} = \int_{t_{1}}^{t_{2}}\; \boldsymbol\varepsilon \cdot \left(\frac{\partial L}{\partial \mathbf{q}} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right)\,dt \]

Hamilton's principle requires that this first-order change \[ \delta \mathcal{S} \] is zero for all possible perturbations \[ \boldsymbol\varepsilon(t) \], i.e., the true path is a stationary point of the action functional \[ \mathcal{S} \] (either a minimum, maximum or saddle point). This requirement can be satisfied if and only if

\[ \frac{\partial L}{\partial \mathbf{q}} - \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}} = 0 \]

These equations are called the Euler-Lagrange equations for the variational problem.
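The stationarity condition can also be seen concretely by numerical experiment: perturb the true path by an amount proportional to ε, with the perturbation vanishing at the endpoints, and check that the change in the action shrinks quadratically in ε. The self-contained sketch below (illustrative, free particle, our own helper names) does exactly this.

```python
import math

# Sketch (not from the text): check that the first-order change in the
# action vanishes for the true path of a free particle.  Perturbing the
# straight line by eps * sin(pi t / T) (zero at both endpoints) should
# change S only at second order, so delta_S scales like eps**2.

m, v, T, N = 1.0, 2.0, 1.0, 2000

def action(eps):
    """S[q] for q(t) = v t + eps sin(pi t / T), using the exact derivative qdot."""
    h = T / N
    total = 0.0
    for i in range(N + 1):
        t = i * h
        qdot = v + eps * (math.pi / T) * math.cos(math.pi * t / T)
        value = 0.5 * m * qdot**2
        total += (0.5 if i in (0, N) else 1.0) * value  # trapezoid weights
    return total * h

S0 = action(0.0)
d1 = action(0.1) - S0    # delta S at eps = 0.10
d2 = action(0.05) - S0   # delta S at eps = 0.05
# Quadratic scaling: halving eps should divide delta S by about 4.
```

Here the first-order (linear in ε) contribution integrates to zero exactly, so the observed change is purely second order, as the derivation above predicts.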

The conjugate momentum \[ p_{k} \] for a generalized coordinate \[ q_{k} \] is defined by the equation \[ p_{k} \ \stackrel{\mathrm{def}}{=}\ \frac{\partial L}{\partial \dot{q}_{k}} \].

An important special case of these equations occurs when L does not contain a generalized coordinate \[ q_{k} \] explicitly, i.e.,

if \[ \frac{\partial L}{\partial q_{k}}=0 \], the conjugate momentum \[ p_{k} \ \stackrel{\mathrm{def}}{=}\ \frac{\partial L}{\partial \dot{q}_{k}} \] is constant.

In such cases, the coordinate \[ q_{k} \] is called a cyclic coordinate. For example, if we use polar coordinates r, θ to describe the planar motion of a particle, and if L does not depend on θ, the conjugate momentum is the conserved angular momentum.
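As a quick numerical sanity check (a sketch, not from the text): a free particle moving along the straight line x = 1, y = t has a Lagrangian independent of the polar angle φ, so the conjugate momentum p_φ = m r² φ̇ should stay constant along the motion. The constants and helper names below are illustrative.

```python
import math

# Sketch: verify numerically that the conjugate momentum of the cyclic
# coordinate phi, p_phi = m r^2 phi_dot, is conserved for the straight-line
# motion x(t) = 1, y(t) = t of a free particle.

m, h = 1.5, 1e-4

def phi(t):
    return math.atan2(t, 1.0)   # polar angle of the point (1, t)

def p_phi(t):
    r2 = 1.0 + t * t                                 # r^2 = x^2 + y^2
    phi_dot = (phi(t + h) - phi(t - h)) / (2 * h)    # central difference
    return m * r2 * phi_dot

values = [p_phi(t) for t in (0.5, 1.0, 2.0, 5.0)]
# Analytically phi_dot = 1/(1 + t^2) = 1/r^2, so every value equals m.
```

The shrinking φ̇ exactly compensates the growing r², which is the cyclic-coordinate conservation law in action.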
Example: Free particle in polar coordinates

Trivial examples help to appreciate the use of the action principle via the Euler-Lagrange equations. A free particle (mass m and velocity v) in Euclidean space moves in a straight line. Using the Euler-Lagrange equations, this can be shown in polar coordinates as follows. In the absence of a potential, the Lagrangian is simply equal to the kinetic energy

\[ L = \frac{1}{2} mv^2= \frac{1}{2}m \left( \dot{x}^2 + \dot{y}^2 \right) \]

in orthonormal (x,y) coordinates, where the dot represents differentiation with respect to the curve parameter (usually the time, t). Therefore, upon application of the Euler-Lagrange equations,

\[ \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{x}} \right) - \frac{\partial L}{\partial x} = 0 \qquad \Rightarrow \qquad m\ddot{x} = 0 \]

And likewise for y. Thus the Euler-Lagrange formulation can be used to derive Newton's laws.

In polar coordinates (r, φ) the kinetic energy and hence the Lagrangian becomes

\[ L = \frac{1}{2}m \left( \dot{r}^2 + r^2\dot{\varphi}^2 \right). \]

The radial (r) and angular (φ) components of the Euler-Lagrange equations become, respectively,

\[ \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{r}} \right) - \frac{\partial L}{\partial r} = 0 \qquad \Rightarrow \qquad \ddot{r} - r\dot{\varphi}^2 = 0 \]

\[ \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{\varphi}} \right) - \frac{\partial L}{\partial \varphi} = 0 \qquad \Rightarrow \qquad \ddot{\varphi} + \frac{2}{r}\dot{r}\dot{\varphi} = 0. \]

The solution of these two equations is given by

\[ r = \sqrt{(a t + b)^2 + c^2} \]

\[ \varphi = \arctan \left( \frac{a t + b}{c} \right) + d \]

for a set of constants a, b, c, d determined by initial conditions. Thus, indeed, the solution is a straight line given in polar coordinates: a is the speed, c is the distance of closest approach to the origin, and d is the angle of motion.
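These claims can be verified numerically (a sketch with arbitrary illustrative constants): substitute r(t) and φ(t) into the two Euler-Lagrange equations using finite-difference derivatives, and confirm that r cos(φ − d) equals the constant c, i.e. the path is a straight line at distance c from the origin.

```python
import math

# Sketch: verify the claimed polar-coordinate solution of the free particle.
# The constants a, b, c, d are arbitrary illustrative values.
a, b, c, d, h = 1.0, 0.2, 0.5, 0.3, 1e-3

r = lambda t: math.sqrt((a * t + b) ** 2 + c ** 2)
phi = lambda t: math.atan((a * t + b) / c) + d

def d1(f, t):
    """First derivative by central difference."""
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t):
    """Second derivative by central difference."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

for t in (0.0, 0.7, 1.5):
    radial = d2(r, t) - r(t) * d1(phi, t) ** 2                    # ~ 0
    angular = d2(phi, t) + (2 / r(t)) * d1(r, t) * d1(phi, t)     # ~ 0
    line = r(t) * math.cos(phi(t) - d)                            # equals c
```

Both residuals vanish to within finite-difference accuracy, and r cos(φ − d) = c holds exactly, confirming the straight-line interpretation.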
Comparison with Maupertuis' principle

Hamilton's principle and Maupertuis' principle are occasionally confused and both have been called (incorrectly) the principle of least action. They differ in three important ways:

their definition of the action...

Maupertuis' principle uses an integral over the generalized coordinates known as the abbreviated action \[ \mathcal{S}_{0} \ \stackrel{\mathrm{def}}{=}\ \int \mathbf{p} \cdot d\mathbf{q} \] where \[ \mathbf{p} = \left( p_{1}, p_{2}, \ldots, p_{N} \right) \] are the conjugate momenta defined above. By contrast, Hamilton's principle uses \[ \mathcal{S} \], the integral of the Lagrangian over time.

the solution that they determine...

Hamilton's principle determines the trajectory \[ \mathbf{q}(t) \] as a function of time, whereas Maupertuis' principle determines only the shape of the trajectory in the generalized coordinates. For example, Maupertuis' principle determines the shape of the ellipse on which a particle moves under the influence of an inverse-square central force such as gravity, but does not describe per se how the particle moves along that trajectory. (However, this time parameterization may be determined from the trajectory itself in subsequent calculations using the conservation of energy.) By contrast, Hamilton's principle directly specifies the motion along the ellipse as a function of time.

...and the constraints on the variation.

Maupertuis' principle requires that the two endpoint states \[ \mathbf{q}_{1} \] and \[ \mathbf{q}_{2} \] be given and that energy be conserved along every trajectory. By contrast, Hamilton's principle does not require the conservation of energy, but does require that the endpoint times \[ t_{1} \] and \[ t_{2} \] be specified as well as the endpoint states \[ \mathbf{q}_{1} \] and \[ \mathbf{q}_{2} \].

Action principle for classical fields

The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravity.

The Einstein field equations are obtained by applying the variational principle to the Einstein-Hilbert action.

The path of a body in a gravitational field (i.e. free fall in spacetime, a so-called geodesic) can be found using the action principle.
Hamilton's principle applied to deformable bodies

Hamilton's principle is an important variational principle in elastodynamics. As opposed to a system composed of rigid bodies, deformable bodies have an infinite number of degrees of freedom and occupy continuous regions of space; consequently, the state of the system is described by using continuous functions of space and time. The extended Hamilton's principle for such bodies is given by

\[ \int_{t_{1}}^{t_{2}} \left[ \delta W_e + \delta T - \delta U \right] dt = 0 \]

where T is the kinetic energy, U is the elastic energy, W_e is the work done by external loads on the body, and t_1 and t_2 are the initial and final times. If the system is conservative, the work done by external forces may be derived from a scalar potential V. In this case,

\[ \delta \int_{t_{1}}^{t_{2}} \left[ T - (U + V) \right] dt = 0. \]

This is called Hamilton's principle and it is invariant under coordinate transformations.
Action principle in quantum mechanics and quantum field theory

In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all imaginable paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, that gives the probability amplitudes of the various outcomes.

Although equivalent to Newton's laws in classical mechanics, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. In particular, it is fully appreciated and best understood within quantum mechanics. Richard Feynman's path integral formulation of quantum mechanics is based on a stationary-action principle, using path integrals. Maxwell's equations can be derived as conditions of stationary action.


