Hellenica World

In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side (and, if the setting is a Euclidean space, the inequality is strict whenever the triangle is non-degenerate).[1][2]

In Euclidean geometry and some other geometries the triangle inequality is a theorem about distances. In Euclidean geometry, for right triangles it is a consequence of Pythagoras' theorem, and for general triangles a consequence of the law of cosines, although it may be proven without these theorems. The inequality can be viewed intuitively in either \( \mathbb{R}^2 \) or \( \mathbb{R}^3 \). The figure at the right shows three examples beginning with clear inequality (top) and approaching equality (bottom). In the Euclidean case, equality occurs only if the triangle has a 180° angle and two 0° angles, making the three vertices collinear, as shown in the bottom example. Thus, in Euclidean geometry, the shortest distance between two points is a straight line.
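Both cases, the strict inequality for a genuine triangle and the equality for collinear vertices, can be checked directly with plane coordinates. The following Python sketch (the helper `triangle_inequality_holds` is defined here for illustration) is a numeric check, not a proof:

```python
import math

def triangle_inequality_holds(p, q, r):
    """Check d(p, q) + d(q, r) >= d(p, r) for points in the plane."""
    return math.dist(p, q) + math.dist(q, r) >= math.dist(p, r)

# A non-degenerate triangle: the inequality is strict.
p, q, r = (0.0, 0.0), (1.0, 1.0), (2.0, 0.0)
slack = math.dist(p, q) + math.dist(q, r) - math.dist(p, r)
print(triangle_inequality_holds(p, q, r), slack > 0)   # True True

# Collinear points (a 180-degree "triangle"): equality.
p, q, r = (0.0, 0.0), (1.0, 0.0), (2.0, 0.0)
print(math.dist(p, q) + math.dist(q, r) == math.dist(p, r))   # True
```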

In spherical geometry, the shortest distance between two points is an arc of a great circle, but the triangle inequality holds provided the restriction is made that the distance between two points on a sphere is the length of a minor spherical line segment (that is, one with central angle in [0, π]) with those endpoints.[3][4]

The triangle inequality is a defining property of norms and measures of distance. The property must therefore be established as a theorem for any function proposed as a norm or distance on a particular space: for example, on the real numbers, Euclidean spaces, the Lp spaces (p ≥ 1), and inner product spaces.

Euclidean geometry
Euclid's construction for proof of the triangle inequality for plane geometry.

Euclid proved the triangle inequality for distances in plane geometry using the construction in the figure.[5] Beginning with triangle ABC, an isosceles triangle is constructed with one side taken as BC and the other equal leg BD along the extension of side AB. It then is argued that angle β > α, so side AD > AC. But AD = AB + BD = AB + BC so the sum of sides AB + BC > AC. This proof appears in Euclid's Elements, Book 1, Proposition 20.[6]

Right triangle
Isosceles triangle with equal sides AB = AC divided into two right triangles by an altitude drawn from one of the two base angles.

A specialization of this argument to right triangles is:[7]

In a right triangle, the hypotenuse is greater than either of the two sides, and less than their sum.

The second part of this theorem is already established above for any side of any triangle. The first part is established using the lower figure. In the figure, consider the right triangle ADC. An isosceles triangle ABC is constructed with equal sides AB = AC. From the triangle postulate, the angles in the right triangle ADC satisfy:

\( \alpha + \gamma = \pi /2 \ . \)

Likewise, in the isosceles triangle ABC, the angles satisfy:

\( 2\beta + \gamma = \pi \ . \)

Therefore,

\( \alpha = \pi/2 - \gamma ,\ \mathrm{while} \ \beta= \pi/2 - \gamma /2 \ , \)

and so, in particular,

\( \alpha < \beta \ . \)

That means side AD opposite angle α is shorter than side AB opposite the larger angle β. But AB = AC. Hence:

\( \overline{AC} > \overline{AD} \ . \)

A similar construction shows AC > DC, establishing the theorem.
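The two parts of the theorem can be spot-checked numerically. The following Python sketch samples random right triangles with legs u, v and hypotenuse h; it illustrates, but of course does not prove, the statement:

```python
import math
import random

random.seed(0)
# Sample right triangles with legs u, v and hypotenuse h = sqrt(u^2 + v^2).
for _ in range(1000):
    u = random.uniform(0.1, 10.0)
    v = random.uniform(0.1, 10.0)
    h = math.hypot(u, v)
    assert h > u and h > v   # the hypotenuse exceeds either side...
    assert h < u + v         # ...but is less than their sum
print("ok")
```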

An alternative proof (also based upon the triangle postulate) proceeds by considering three positions for point B:[8] (i) as depicted, which is the case to be proven; (ii) B coincident with D, which would mean the isosceles triangle had two right angles as base angles plus the vertex angle γ, violating the triangle postulate; or (iii) B interior to the right triangle between points A and D, in which case angle ABC is an exterior angle of the right triangle BDC and therefore larger than π/2, so the other base angle of the isosceles triangle also is greater than π/2 and the two base angles sum to more than π, again violating the triangle postulate.

This theorem establishing inequalities is sharpened by Pythagoras' theorem to the equality that the square of the length of the hypotenuse equals the sum of the squares of the other two sides.
Relationship with shortest paths
The arc length of a curve is defined as the least upper bound of the lengths of polygonal approximations.

The triangle inequality can be used to prove that the shortest curve between two points in Euclidean geometry is a straight line. First, the triangle inequality can be extended by mathematical induction to arbitrary polygonal paths, showing that the total length of such a path is no less than the length of the straight line between its endpoints. Thus no polygonal path between two points is shorter than the line between them.

The result for polygonal paths implies that no curve can have an arc length less than the distance between its endpoints. By definition, the arc length of a curve is the least upper bound of the lengths of all polygonal approximations of the curve. The result for polygonal paths shows that the straight line between the endpoints is shortest of all the polygonal approximations. Because the arc length of the curve is greater than or equal to the length of every polygonal approximation, the curve itself cannot be shorter than the straight line path.[9]
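The polygonal-path step lends itself to a direct numerical check. The sketch below (with a small `path_length` helper defined here) compares a detour through several waypoints against the straight segment between its endpoints:

```python
import math

def path_length(points):
    """Total length of the polygonal path through the listed points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# A detour between (0, 0) and (6, 0) through several waypoints:
path = [(0, 0), (1, 2), (3, 1), (4, 4), (6, 0)]
straight = math.dist(path[0], path[-1])
# By induction on the triangle inequality, no polygonal path is shorter
# than the straight segment between its endpoints.
print(path_length(path) >= straight)   # True
```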
Some practical examples of the use of the inequality

Consider a triangle whose sides are in an arithmetic progression and let the sides be a, a + d, a + 2d. Then the triangle inequality requires that

\( 0<a<2a+3d \, \)
\( 0<a+d<2a+2d \, \)
\( 0<a+2d<2a+d \, \)

To satisfy all these inequalities requires

\( a>0 \quad \mathrm{and} \quad -\frac{a}{3}<d<a \ . \)[10]

When d is chosen such that d = a/3, the sides are a, 4a/3, and 5a/3, so the triangle is always similar to the right triangle with sides 3, 4, 5 (a Pythagorean triple).

Now consider a triangle whose sides are in a geometric progression and let the sides be \( a, ar, ar^2 \). Then the triangle inequality requires that

\( 0<a<ar+ar^2 \, \)
\( 0<ar<a+ar^2 \, \)
\( 0<ar^2<a+ar \, \)

The first inequality requires a > 0; consequently each inequality can be divided through by a, eliminating that factor. With a > 0, the middle inequality only requires r > 0. This leaves the first and third inequalities, which require

\( \begin{align} r^2+r-1 & {} >0 \\ r^2-r-1 & {} <0 \end{align} \, \)

The first of these quadratic inequalities requires r to exceed the positive root of the quadratic equation \( r^2 + r - 1 = 0 \), i.e. r > φ − 1 where φ is the golden ratio. The second quadratic inequality requires r to lie between 0 and the positive root of the quadratic equation \( r^2 - r - 1 = 0 \), i.e. 0 < r < φ. The combined requirements result in r being confined to the range

\( \varphi - 1 < r < \varphi \quad \mathrm{and} \quad a > 0 \ . \)[11]

When the common ratio is chosen such that r = √φ, it generates a right triangle that is always similar to the Kepler triangle.
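The Kepler-triangle claim rests on the identity φ² = φ + 1, so the squared sides a²(1, φ, φ²) satisfy the Pythagorean relation. A short numerical check:

```python
import math

phi = (1 + math.sqrt(5)) / 2              # golden ratio: phi^2 = phi + 1
a = 1.0
sides = (a, a * math.sqrt(phi), a * phi)  # geometric progression, ratio sqrt(phi)

# Squared sides are a^2 * (1, phi, phi^2); since phi^2 = phi + 1 they
# satisfy the Pythagorean relation, so this is the Kepler triangle.
assert math.isclose(sides[0] ** 2 + sides[1] ** 2, sides[2] ** 2)
# The ratio sqrt(phi) lies inside (phi - 1, phi), so the triangle
# inequality holds for all three sides.
assert all(s < sum(sides) - s for s in sides)
print("ok")
```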
Normed vector space
Triangle inequality for norms of vectors; z is the vector sum of vectors x and y.

In a normed vector space V, one of the defining properties of the norm is the triangle inequality:

\( \displaystyle \|x + y\| \leq \|x\| + \|y\| \quad \forall \, x, y \in V \)

that is, the norm of the sum of two vectors is at most as large as the sum of the norms of the two vectors. This is also referred to as subadditivity. For any proposed function to behave as a norm, it must satisfy this requirement.[12]

If the normed space is Euclidean, or, more generally, strictly convex, then \( \|x+y\|=\|x\|+\|y\| \) if and only if the triangle formed by x, y, and x + y is degenerate, that is, x and y lie on the same ray: x = 0, or y = 0, or \( x = \alpha y \) for some \( \alpha>0 \). This property characterizes strictly convex normed spaces, such as the \( \ell_p \) spaces with \( 1<p<\infty \). However, there are normed spaces in which it fails. For instance, consider the plane with the \( \ell_1 \) norm (the Manhattan distance), and take x = (1, 0) and y = (0, 1). Then the triangle formed by x, y, and x + y is non-degenerate, but

\( \|x+y\|=\|(1,1)\|=|1|+|1|=2=\|x\|+\|y\|. \)
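This Manhattan-norm counterexample can be confirmed in a few lines (with a small `l1` helper defined here):

```python
def l1(v):
    """Manhattan (l1) norm: sum of absolute values of the components."""
    return sum(abs(c) for c in v)

x, y = (1, 0), (0, 1)
s = (x[0] + y[0], x[1] + y[1])   # x + y = (1, 1)
# The triangle x, y, x + y is non-degenerate, yet equality holds:
print(l1(s) == l1(x) + l1(y))    # True: 2 == 1 + 1
```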


Example norms

Absolute value as norm for the real line. To be a norm, the absolute value must satisfy the triangle inequality for any real numbers x and y:

\( |x + y| \leq |x|+|y|, \)

which it does.

The triangle inequality is useful in mathematical analysis for determining the best upper estimate on the size of the sum of two numbers, in terms of the sizes of the individual numbers.

There is also a lower estimate, which can be found using the reverse triangle inequality which states that for any real numbers x and y:

\( |x-y| \geq \bigg||x|-|y|\bigg|. \)

Inner product as norm in an inner product space. If the norm arises from an inner product (as is the case for Euclidean spaces), then the triangle inequality follows from the Cauchy–Schwarz inequality as follows: Given vectors x and y, and denoting the inner product as \( \langle x, y \rangle \):[13]

\( \|x + y\|^2 = \langle x + y, x + y \rangle \)
\( = \|x\|^2 + \langle x, y \rangle + \langle y, x \rangle + \|y\|^2 \)
\( \le \|x\|^2 + 2|\langle x, y \rangle| + \|y\|^2 \)
\( \le \|x\|^2 + 2\|x\|\|y\| + \|y\|^2 \) (by the Cauchy–Schwarz inequality)
\( = \left(\|x\| + \|y\|\right)^2 \)

where the last form is a consequence of:

\( \|x\|^2 + 2\|x\|\|y\| + \|y\|^2 = \left(\|x\| + \|y\|\right)^2 \ .\)

Taking the square root of the final result gives the triangle inequality.
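Each step of the chain above (first Cauchy–Schwarz, then the resulting triangle inequality) can be sampled numerically on random vectors; the sketch below is an illustration under floating-point tolerance, not a proof:

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Norm induced by the inner product."""
    return math.sqrt(dot(u, u))

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(3)]
    y = [random.uniform(-5, 5) for _ in range(3)]
    # Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
    assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-9
    # Triangle inequality for the induced norm:
    z = [a + b for a, b in zip(x, y)]
    assert norm(z) <= norm(x) + norm(y) + 1e-9
print("ok")
```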

P-norm: a commonly used norm is the p-norm:

\( \|x\|_p = \left( \sum_{i=1}^n |x_i|^p \right) ^{1/p} \ , \)

where the \( x_i \) are the components of vector x. For p = 2 the p-norm becomes the Euclidean norm:

\( \|x\|_2 = \left( \sum_{i=1}^n |x_i|^2 \right) ^{1/2} = \left( \sum_{i=1}^n x_{i}^2 \right) ^{1/2} \ , \)

which is Pythagoras' theorem in n-dimensions, a very special case corresponding to an inner product norm. Except for the case p=2, the p-norm is not an inner product norm, because it does not satisfy the parallelogram law. The triangle inequality for general values of p is called Minkowski's inequality.[14] It takes the form:

\( \|x+y\|_p \le \|x\|_p + \|y\|_p \ .\)
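Minkowski's inequality can likewise be sampled for several values of p ≥ 1; here `p_norm` simply implements the definition given above:

```python
import random

def p_norm(v, p):
    """The p-norm from the definition above."""
    return sum(abs(c) ** p for c in v) ** (1 / p)

random.seed(2)
# Minkowski's inequality, sampled for several values of p >= 1:
for p in (1, 1.5, 2, 3, 10):
    for _ in range(200):
        x = [random.uniform(-5, 5) for _ in range(4)]
        y = [random.uniform(-5, 5) for _ in range(4)]
        s = [a + b for a, b in zip(x, y)]
        assert p_norm(s, p) <= p_norm(x, p) + p_norm(y, p) + 1e-9
print("ok")
```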

Metric space

In a metric space M with metric d, the triangle inequality is a requirement upon distance:

\( d(x, z) \le d(x, y) + d(y, z) \ , \)

for all x, y, z in M. That is, the distance from x to z is at most as large as the sum of the distance from x to y and the distance from y to z.

The triangle inequality is responsible for most of the interesting structure on a metric space, namely, convergence; the remaining requirements for a metric are comparatively simple. For example, the fact that any convergent sequence in a metric space is a Cauchy sequence is a direct consequence of the triangle inequality: choose any \( x_n \) and \( x_m \) such that \( d(x_n, x)<\varepsilon/2 \) and \( d(x_m, x)<\varepsilon/2 \), where \( \varepsilon>0 \) is given and arbitrary (as in the definition of a limit in a metric space). Then by the triangle inequality \( d(x_n, x_m) \leq d(x_n, x) + d(x_m, x) < \varepsilon/2 + \varepsilon/2 = \varepsilon \), so that the sequence \( \{x_n\} \) is a Cauchy sequence by definition.
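The ε/2 argument can be made concrete on the real line with the sequence x_n = 1/n converging to 0 under d(x, y) = |x − y|; the sketch below checks the bound d(x_n, x_m) ≤ d(x_n, 0) + d(0, x_m) < ε directly over a tail of the sequence:

```python
# The epsilon/2 argument on the real line: x_n = 1/n converges to 0 under
# d(x, y) = |x - y|.  Choose N so that d(x_n, 0) < eps/2 for all n >= N;
# the triangle inequality then bounds d(x_n, x_m) below eps.
eps = 1e-3
N = int(2 / eps) + 2          # ensures 1/N < eps/2
for n in range(N, N + 50):
    for m in range(N, N + 50):
        x_n, x_m = 1 / n, 1 / m
        assert abs(x_n - x_m) <= abs(x_n - 0) + abs(0 - x_m) < eps
print("ok")
```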
Reverse triangle inequality

The reverse triangle inequality is an elementary consequence of the triangle inequality that gives lower bounds instead of upper bounds. For plane geometry the statement is:[15]

Any side of a triangle is greater than the difference between the other two sides.

In the case of a normed vector space, the statement is:

\( \bigg|\|x\|-\|y\|\bigg| \leq \|x-y\|, \)

or for metric spaces, | d(y, x) − d(x, z) | ≤ d(y, z). This implies that the norm ||–|| as well as the distance function d(x, –) are Lipschitz continuous with Lipschitz constant 1, and therefore are in particular uniformly continuous.
Reversal in Minkowski space

In the usual Minkowski space and in Minkowski space extended to an arbitrary number of spatial dimensions, assuming null or timelike vectors in the same time direction, the triangle inequality is reversed:

\( \|x+y\| \geq \|x\| + \|y\| \quad \forall \, x, y \in V \) such that \( \|x\|, \|y\| \geq 0 \) and \( t_x, t_y \geq 0 \).

A physical example of this inequality is the twin paradox in special relativity.
See also

Subadditivity
Minkowski inequality

External links

Triangle inequality at ProofWiki

Notes

^ Wolfram MathWorld - http://mathworld.wolfram.com/TriangleInequality.html
^ Mohamed A. Khamsi, William A. Kirk (2001). "§1.4 The triangle inequality in ℝn". An introduction to metric spaces and fixed point theory. Wiley-IEEE. ISBN 0-471-41825-0.
^ Oliver Brock, Jeff Trinkle, Fabio Ramos (2009). Robotics: Science and Systems IV. MIT Press. p. 195. ISBN 0-262-51309-9.
^ Arlan Ramsay, Robert D. Richtmyer (1995). Introduction to hyperbolic geometry. Springer. p. 17. ISBN 0-387-94339-0.
^ Harold R. Jacobs (2003). Geometry: seeing, doing, understanding (3rd ed.). Macmillan. p. 201. ISBN 0-7167-4361-2.
^ "Euclid's Elements, Book 1, Proposition 20". Euclid's Elements. Dept. of Mathematics and Computer Science, Clark University. 1997. Retrieved 2010-06-25.
^ Claude Irwin Palmer (1919). Practical mathematics for home study: being the essentials of arithmetic, geometry, algebra and trigonometry. McGraw-Hill. p. 422.
^ Alexander Zawaira, Gavin Hitchcock (2009). "Lemma 1: In a right-angled triangle the hypotenuse is greater than either of the other two sides". A primer for mathematics competitions. Oxford University Press. ISBN 0-19-953988-X.
^ John Stillwell (1997). Numbers and Geometry. Springer. ISBN 978-0-387-98289-2. p. 95.
^ Wolfram|Alpha. "input: solve 0<a<2a+3d, 0<a+d<2a+2d, 0<a+2d<2a+d,". Wolfram Research. Retrieved 2010-09-07.
^ Wolfram|Alpha. "input: solve 0<a<ar+ar2, 0<ar<a+ar2, 0<ar2<a+ar". Wolfram Research. Retrieved 2010-09-07.
^ Rainer Kress (1988). "§3.1: Normed spaces". Numerical analysis. Springer. p. 26. ISBN 0-387-98408-9.
^ John Stillwell (2005). The four pillars of geometry. Springer. p. 80. ISBN 0-387-25530-3.
^ Karen Saxe (2002). Beginning functional analysis. Springer. p. 61. ISBN 0-387-95224-1.
^ Anonymous (1854). "Exercise I. to proposition XIX". The popular educator; fourth volume. Ludgate Hill, London: John Cassell. p. 196.


