Summation
Summation is the operation of adding a sequence of numbers; the result is their sum or total. If numbers are added sequentially from left to right, any intermediate result is a partial sum, prefix sum, or running total of the summation. The numbers to be summed (called addends, or sometimes summands) may be integers, rational numbers, real numbers, or complex numbers. Besides numbers, other types of values can be added as well: vectors, matrices, polynomials and, in general, elements of any additive group (or even monoid). For finite sequences of such elements, summation always produces a well-defined sum (possibly by virtue of the convention for empty sums).
Summation of an infinite sequence of values is not always possible, and when a value can be given for an infinite summation, this involves more than just the addition operation, namely also the notion of a limit. Such infinite summations are known as series. Another notion involving limits of finite sums is integration. The term summation has a special meaning related to extrapolation in the context of divergent series.
The summation of the sequence [1, 2, 4, 2] is an expression whose value is the sum of the members of the sequence. In the example, 1 + 2 + 4 + 2 = 9. Since addition is associative, the value does not depend on how the additions are grouped; for instance, (1 + 2) + (4 + 2) and 1 + ((2 + 4) + 2) both have the value 9, so parentheses are usually omitted in repeated additions. Addition is also commutative, so permuting the terms of a finite sequence does not change its sum (for infinite summations this property may fail; see absolute convergence for conditions under which it still holds).
There is no special notation for the summation of such explicit sequences, as the corresponding repeated addition expression will do. There is only a slight difficulty if the sequence has fewer than two elements: the summation of a sequence of one term involves no plus sign (it is indistinguishable from the term itself) and the summation of the empty sequence cannot even be written down (but one can write its value "0" in its place). If, however, the terms of the sequence are given by a regular pattern, possibly of variable length, then a summation operator may be useful or even essential. For the summation of the sequence of consecutive integers from 1 to 100 one could use an addition expression involving an ellipsis to indicate the missing terms: 1 + 2 + 3 + ... + 99 + 100. In this case the reader easily guesses the pattern; however, for more complicated patterns, one needs to be precise about the rule used to find successive terms, which can be achieved by using the summation operator "Σ". Using this notation the above summation is written as:
\( \sum_{i=1}^{100}i. \)
The value of this summation is 5050. It can be found without performing 99 additions, since it can be shown (for instance by mathematical induction) that
\( \sum_{i=1}^ni = \frac{n(n+1)}2 \)
for all natural numbers n. More generally, formulas exist for many summations of terms following a regular pattern.
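For example, the induction step amounts to adding the next term n + 1 to both sides:
\( \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}, \)
which is the same formula with n replaced by n + 1.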
The term "indefinite summation" refers to the search for an inverse image of a given infinite sequence s of values for the forward difference operator, in other words for a sequence, called antidifference of s, whose finite differences are given by s. By contrast, summation as discussed in this article is called "definite summation".
Notation
Capital-sigma notation
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, ∑, an enlarged form of the upright capital Greek letter Sigma. This is defined as:
\( \sum_{i=m}^n x_i = x_m + x_{m+1} + x_{m+2} +\cdots+ x_{n-1} + x_n. \)
Here, i represents the index of summation; \( x_i \) is an indexed variable representing each successive term in the series; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index i is incremented by 1 for each successive term, stopping when i = n.
Here is an example showing the summation of squares (each term raised to the power of 2):
\( \sum_{i=3}^6 i^2 = 3^2+4^2+5^2+6^2 = 86. \)
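In computational terms, a capital-sigma sum corresponds to a simple loop over the index. The following Python sketch is an illustration only (the variable names are arbitrary) and evaluates the example above:
# Evaluate sum_{i=3}^{6} i^2 by looping over the index i.
total = 0
for i in range(3, 7):   # range(3, 7) yields i = 3, 4, 5, 6
    total += i ** 2
assert total == 86      # 3^2 + 4^2 + 5^2 + 6^2 = 9 + 16 + 25 + 36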
Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in:
\( \sum x_i^2 = \sum_{i=1}^n x_i^2. \)
One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:
\( \sum_{0\le k< 100} f(k) \)
is the sum of f(k) over all (integer) k in the specified range,
\( \sum_{x\in S} f(x) \)
is the sum of f(x) over all elements x in the set S, and
\( \sum_{d|n}\;\mu(d) \)
is the sum of μ(d) over all positive integers d dividing n.[1]
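These generalized notations have natural analogues in, for example, Python, where a sum over all values satisfying a condition can be written as a comprehension. The sketch below is purely illustrative; the set S, the function f (used here in place of μ), and the value of n are arbitrary choices:
f = lambda x: x + 1                    # an arbitrary example function
S = {1, 4, 9}                          # an arbitrary example set
n = 12

sum_over_range    = sum(f(k) for k in range(100))                      # sum over 0 <= k < 100
sum_over_set      = sum(f(x) for x in S)                               # sum over x in S
sum_over_divisors = sum(f(d) for d in range(1, n + 1) if n % d == 0)   # sum over d dividing n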
There are also ways to generalize the use of many sigma signs. For example,
\( \sum_{\ell,\ell'} \)
is the same as
\( \sum_\ell\sum_{\ell'}. \)
A similar notation is applied when it comes to denoting the product of a sequence, which is similar to its summation, but which uses the multiplication operation instead of addition (and gives 1 for an empty sequence instead of 0). The same basic structure is used, with ∏, an enlarged form of the Greek capital letter Pi, replacing the ∑.
Special cases
It is possible to sum fewer than 2 numbers:
If the summation has one summand x, then the evaluated sum is x.
If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if m = n in the definition above, then there is only one term in the sum; if m > n, then there is none.
Formal definition
If iterated function notation is defined, e.g. \( f^2(x) \equiv f(f(x)) \), and is regarded as the more primitive notion, then summation can be defined in terms of iterated functions as:
\( \left\{b+1,\sum_{i=a}^b g(i)\right\} \equiv \left( \{i,x\} \rightarrow \{ i+1 ,x+g(i) \}\right)^{b-a+1} \{a,0\} \)
Here the curly braces denote a 2-tuple and the right arrow denotes a function mapping a 2-tuple to a 2-tuple. The function is applied b - a + 1 times to the tuple {a, 0}; the second component of the resulting tuple is the desired sum.
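A minimal Python sketch of this definition, with the hypothetical helper name iterated_sum, applies the map {i, x} -> {i + 1, x + g(i)} the required number of times:
def iterated_sum(g, a, b):
    # Start from the pair (a, 0) and apply (i, x) -> (i + 1, x + g(i))
    # exactly b - a + 1 times; the second component is then sum_{i=a}^{b} g(i).
    i, x = a, 0
    for _ in range(b - a + 1):
        i, x = i + 1, x + g(i)
    return x

assert iterated_sum(lambda i: i, 1, 100) == 5050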
Measure theory notation
In the notation of measure and integration theory, a sum can be expressed as a definite integral,
\( \sum_{k=a}^b f(k) = \int_{[a,b]} f\,d\mu \)
where [a,b] is the subset of the integers from a to b, and where μ is the counting measure.
Fundamental theorem of discrete calculus
Indefinite sums can be used to calculate definite sums with the formula[2]:
\( \sum_{k=a}^b f(k)=\Delta^{-1}f(b+1)-\Delta^{-1}f(a) \)
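For example, an antidifference of f(k) = k is \( \Delta^{-1}f(k) = \frac{k(k-1)}{2} \), since \( \frac{(k+1)k}{2} - \frac{k(k-1)}{2} = k \). The formula then gives
\( \sum_{k=a}^b k = \frac{(b+1)b}{2} - \frac{a(a-1)}{2}, \)
which for a = 1 recovers \( \frac{b(b+1)}{2} \).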
Approximation by definite integrals
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any:
increasing function f:
\( \int_{s=a-1}^{b} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a}^{b+1} f(s)\ ds. \)
decreasing function f:
\( \int_{s=a}^{b+1} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a-1}^{b} f(s)\ ds. \)
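For example, applying the bounds for a decreasing function to f(i) = 1/i with \( a \ge 2 \) gives
\( \ln\frac{b+1}{a} \le \sum_{i=a}^{b} \frac{1}{i} \le \ln\frac{b}{a-1}. \)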
For more general approximations, see the Euler–Maclaurin formula.
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance
\( \frac{b-a}{n}\sum_{i=0}^{n-1} f\left(a+i\frac{b-a}n\right) \approx \int_a^b f(x)\ dx, \)
since the right-hand side is by definition the limit of the left-hand side as \( n\to\infty \). However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
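As a purely numerical illustration (with a hypothetical helper name and an arbitrary choice of f), the following Python sketch compares the left Riemann sum of f(x) = x^2 on [0, 1] with the exact integral 1/3 for several values of n:
def left_riemann_sum(f, a, b, n):
    # (b - a)/n * sum_{i=0}^{n-1} f(a + i*(b - a)/n)
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

for n in (10, 100, 1000):
    approx = left_riemann_sum(lambda x: x ** 2, 0.0, 1.0, n)
    print(n, approx, abs(approx - 1 / 3))   # the error shrinks as n grows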
Identities
The formulas below involve finite sums; for infinite summations see the list of mathematical series.
General manipulations
\( \sum_{n=s}^t C\cdot f(n) = C\cdot \sum_{n=s}^t f(n) \), where C is a constant
\( \sum_{n=s}^t f(n) + \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) + g(n)\right] \)
\( \sum_{n=s}^t f(n) - \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) - g(n)\right] \)
\( \sum_{n=s}^t f(n) = \sum_{n=s+p}^{t+p} f(n-p) \)
\( \sum_{n=s}^j f(n) + \sum_{n=j+1}^t f(n) = \sum_{n=s}^t f(n) \)
\( \left(\sum_{i=k_0}^{k_1} a_i\right)\left(\sum_{j=l_0}^{l_1} b_j\right) = \sum_{i=k_0}^{k_1}\sum_{j=l_0}^{l_1} a_ib_j \)
\( \sum_{i=k_0}^{k_1}\sum_{j=l_0}^{l_1} a_{i,j} = \sum_{j=l_0}^{l_1}\sum_{i=k_0}^{k_1} a_{i,j} \)
\( \sum_{n=0}^t f(2n) + \sum_{n=0}^t f(2n+1) = \sum_{n=0}^{2t+1} f(n) \)
\( \sum_{n=0}^t \sum_{i=0}^{z-1} f(z\cdot n+i) = \sum_{n=0}^{z\cdot t+z-1} f(n) \)
\( \sum_{n=s}^t \ln f(n) = \ln \prod_{n=s}^t f(n) \)
\( c^{\left[\sum_{n=s}^t f(n) \right]} = \prod_{n=s}^t c^{f(n)} \)
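As a quick sanity check, the index-shift and product-of-sums manipulations above can be verified numerically for small concrete values; the Python sketch below uses arbitrary example functions and bounds:
f = lambda n: n * n                    # an arbitrary example function
s, t, p = 2, 7, 3                      # arbitrary bounds and shift

# Index shift: sum_{n=s}^{t} f(n) = sum_{n=s+p}^{t+p} f(n - p)
assert sum(f(n) for n in range(s, t + 1)) == sum(f(n - p) for n in range(s + p, t + p + 1))

# The product of two sums equals the double sum of pairwise products
a = [1, 2, 3]
b = [4, 5]
assert sum(a) * sum(b) == sum(x * y for x in a for y in b)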
Some summations of polynomial expressions
\( \sum_{i=m}^n 1 = n+1-m \)
\( \sum_{i=1}^n \frac{1}{i} = H_n \) (see Harmonic number)
\( \sum_{i=m}^n i = \frac{(n+1-m)(n+m)}{2} \) (see arithmetic series)
\( \sum_{i=0}^n i = \sum_{i=1}^n i = \frac{n(n+1)}{2} \) (special case of the arithmetic series)
\( \sum_{i=0}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6} \)
\( \sum_{i=0}^n i^3 = \left(\frac{n(n+1)}{2}\right)^2 = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4} = \left[\sum_{i=1}^n i\right]^2 \)
\( \sum_{i=0}^n i^4 = \frac{n(n+1)(2n+1)(3n^2+3n-1)}{30} = \frac{n^5}{5} + \frac{n^4}{2} + \frac{n^3}{3} - \frac{n}{30} \)
\( \sum_{i=0}^n i^p = \frac{(n+1)^{p+1}}{p+1} + \sum_{k=1}^p\frac{B_k}{p-k+1}{p\choose k}(n+1)^{p-k+1} \) where \( B_k \) denotes the kth Bernoulli number (with the convention \( B_1 = -\tfrac{1}{2} \))
The following formulas are manipulations of \( \sum_{i=0}^n i^3 = \left(\sum_{i=0}^n i\right)^2 \) generalized to begin the sum at any natural number \( m \in \mathbb{N} \):
\( \left(\sum_{i=m}^n i\right)^2 = \sum_{i=m}^n ( i^3 - im(m-1) ) \)
\( \sum_{i=m}^n i^3 = \left(\sum_{i=m}^n i\right)^2 + m(m-1)\sum_{i=m}^n i \)
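For example, with m = 2 and n = 3 the second of these identities reads \( 2^3 + 3^3 = 35 = (2+3)^2 + 2\cdot 1\cdot(2+3) \).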
Some summations involving exponential terms
In the summations below, x is a constant not equal to 1.
\( \sum_{i=m}^{n-1} x^i = \frac{x^m-x^n}{1-x} \) (m < n; see geometric series)
\( \sum_{i=0}^{n-1} x^i = \frac{1-x^n}{1-x} \) (geometric series starting at 1)
\( \sum_{i=0}^{n-1} i x^i = \frac{x-nx^n+(n-1)x^{n+1}}{(1-x)^2} \)
\( \sum_{i=0}^{n-1} i 2^i = 2+(n-2)2^{n} \) (special case when x = 2)
\( \sum_{i=0}^{n-1} \frac{i}{2^i} = 2-\frac{n+1}{2^{n-1}} \) (special case when x = 1/2)
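The formula for \( \sum_{i=0}^{n-1} i x^i \) can be obtained from the geometric series by differentiating term by term with respect to x and multiplying by x, since
\( x\,\frac{d}{dx}\left(\frac{1-x^n}{1-x}\right) = \frac{x-nx^n+(n-1)x^{n+1}}{(1-x)^2}. \)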
Some summations involving binomial coefficients
There are a great many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to the basic techniques alone). Some of the most basic ones are the following.
\( \sum_{i=0}^n {n \choose i} = 2^n \)
\( \sum_{i=1}^{n} i{n \choose i} = n2^{n-1} \)
\( \sum_{i=0}^{n} i!\cdot{n \choose i} = \lfloor n!\cdot e \rfloor \) (for \( n \ge 1 \))
\( \sum_{i=0}^{n-1} {i \choose k} = {n \choose k+1} \)
\( \sum_{i=0}^n {n \choose i}a^{(n-i)} b^i=(a + b)^n, \) the binomial theorem
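For example, the first of these identities follows from the binomial theorem by setting a = b = 1, and the second follows by differentiating \( (1+x)^n = \sum_{i=0}^n {n \choose i} x^i \) with respect to x, which gives \( \sum_{i=1}^{n} i{n \choose i} x^{i-1} = n(1+x)^{n-1} \), and then setting x = 1.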
Growth rates
The following are useful approximations (using theta notation):
\( \sum_{i=1}^n i^c = \Theta(n^{c+1}) \) for real c greater than −1
\( \sum_{i=1}^n \frac{1}{i} = \Theta(\log n) \) (see Harmonic number)
\( \sum_{i=1}^n c^i = \Theta(c^n) \) for real c greater than 1
\( \sum_{i=1}^n \log(i)^c = \Theta(n \cdot \log(n)^{c}) \) for non-negative real c
\( \sum_{i=1}^n \log(i)^c \cdot i^d = \Theta(n^{d+1} \cdot \log(n)^{c}) \) for non-negative real c, d
\( \sum_{i=1}^n \log(i)^c \cdot i^d \cdot b^i = \Theta (n^d \cdot \log(n)^c \cdot b^n) \) for real b > 1 and non-negative real c, d
See also
Einstein notation
Checksum
Product (mathematics)
Kahan summation algorithm
Iterated binary operation
Summation equation
Basel problem: \( \sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6} \)
Notes
^ Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet (i through q) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see x instead of k in the above formulae involving k. See also typographical conventions in mathematical formulae.
^ "Handbook of discrete and combinatorial mathematics", Kenneth H. Rosen, John G. Michaels, CRC Press, 1999, ISBN 0-8493-0149-1
Further reading
Nicholas J. Higham, "The accuracy of floating point summation", SIAM J. Scientific Computing 14 (4), 783–799 (1993).
External links
Summation, PlanetMath.org.
Derivation of Polynomials to Express the Sum of Natural Numbers with Exponents