We learned that series are, in some sense, "infinite sums". Do the same rules as for finite sums also apply to infinite sums, like removing brackets (associative law) and re-arranging terms (commutative law)? The answer is: generally no, but in certain cases yes! The upcoming articles will tell you when the answer is yes and when it is no. A small spoiler ahead: adding series and multiplying them by a constant is always allowed, provided that the series converge.
In the article "Limit theorems" we proved the sum rule for sequences
$$\lim_{n\to\infty}(a_n+b_n)=\lim_{n\to\infty}a_n+\lim_{n\to\infty}b_n,$$
which holds if $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ converge. Analogous rules hold for convergent series, since a series is just a sequence of partial sums. More precisely, if $\sum_{k=1}^{\infty}a_k$ and $\sum_{k=1}^{\infty}b_k$ converge and $c\in\mathbb{R}$, then:
$$\sum_{k=1}^{\infty}(a_k+b_k)=\sum_{k=1}^{\infty}a_k+\sum_{k=1}^{\infty}b_k\qquad\text{and}\qquad\sum_{k=1}^{\infty}c\,a_k=c\sum_{k=1}^{\infty}a_k$$
In addition, a series $\sum_{k=1}^{\infty}a_k$ converges whenever the sub-series over odd and even indices, $\sum_{k=1}^{\infty}a_{2k-1}$ and $\sum_{k=1}^{\infty}a_{2k}$, converge. In that case,
$$\sum_{k=1}^{\infty}a_k=\sum_{k=1}^{\infty}a_{2k-1}+\sum_{k=1}^{\infty}a_{2k}$$
More generally, within a convergent series $\sum_{k=1}^{\infty}a_k$, we can set brackets and split
$$\sum_{k=1}^{\infty}a_k=\sum_{l=1}^{\infty}\left(\sum_{k=n_l}^{n_{l+1}-1}a_k\right)$$
Here, $(n_l)_{l\in\mathbb{N}}$ is a strictly monotonically increasing sequence of natural numbers with $n_1=1$, where $n_l$ indexes the first summand within the $l$-th bracket-sum. Conversely, for a divergent bracketed series $\sum_{l=1}^{\infty}\left(\sum_{k=n_l}^{n_{l+1}-1}a_k\right)$, we also have divergence of $\sum_{k=1}^{\infty}a_k$.
For partial sums we have
$$\left(\sum_{k=1}^{n}a_k\right)\cdot\left(\sum_{j=1}^{n}b_j\right)=\sum_{k=1}^{n}\sum_{j=1}^{n}a_k b_j$$
Multiplying two series is much harder: sometimes it works and sometimes it does not. We will cover the details later.
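For finite partial sums, multiplying out term by term is always allowed. Here is a small Python sanity check of this finite product formula; the concrete sequences $a_k = 1/2^k$ and $b_k = 1/3^k$ are example choices of ours, not taken from the text:

```python
# Check that the product of two partial sums equals the finite double sum
# sum_{k<=n} sum_{j<=n} a_k * b_j (a finite computation, always valid).
a = [1 / 2**k for k in range(1, 11)]   # example terms a_k = 1/2^k
b = [1 / 3**k for k in range(1, 11)]   # example terms b_k = 1/3^k

lhs = sum(a) * sum(b)
rhs = sum(a_k * b_j for a_k in a for b_j in b)
assert abs(lhs - rhs) < 1e-12
```

For infinite series, however, the analogous step needs justification; this is exactly where the multiplication of series becomes subtle.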
There is no general associative or commutative law for series: for finite sums, one may re-arrange terms and set brackets arbitrarily and still get the same result. For infinite sums (series), this does not work in general. However, there are criteria that tell us when it works and when it does not.
Theorem (sum rule for series)
Let $\sum_{k=1}^{\infty}a_k$ and $\sum_{k=1}^{\infty}b_k$ be two convergent series. Then
$$\sum_{k=1}^{\infty}(a_k+b_k)=\sum_{k=1}^{\infty}a_k+\sum_{k=1}^{\infty}b_k$$
Proof (sum rule for series)
We have:
$$\sum_{k=1}^{\infty}(a_k+b_k)=\lim_{n\to\infty}\sum_{k=1}^{n}(a_k+b_k)=\lim_{n\to\infty}\left(\sum_{k=1}^{n}a_k+\sum_{k=1}^{n}b_k\right)=\lim_{n\to\infty}\sum_{k=1}^{n}a_k+\lim_{n\to\infty}\sum_{k=1}^{n}b_k=\sum_{k=1}^{\infty}a_k+\sum_{k=1}^{\infty}b_k$$
We are allowed to use the limit theorem $\lim_{n\to\infty}(x_n+y_n)=\lim_{n\to\infty}x_n+\lim_{n\to\infty}y_n$ in the third step, since the series $\sum_{k=1}^{\infty}a_k$ and $\sum_{k=1}^{\infty}b_k$ converge, so the sequences of partial sums $\left(\sum_{k=1}^{n}a_k\right)_{n\in\mathbb{N}}$ and $\left(\sum_{k=1}^{n}b_k\right)_{n\in\mathbb{N}}$ converge, i.e. their limits exist.
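As a numerical illustration of the sum rule, here is a short Python sketch; the geometric series with terms $1/2^k$ (value $1$) and $1/3^k$ (value $1/2$) are example choices of ours:

```python
# Sum rule, numerically: partial sums of sum_k (a_k + b_k) approach
# (limit of sum a_k) + (limit of sum b_k).
# Example: a_k = 1/2^k (series value 1), b_k = 1/3^k (series value 1/2).
n = 50
partial_a = sum(1 / 2**k for k in range(1, n + 1))
partial_b = sum(1 / 3**k for k in range(1, n + 1))
partial_sum = sum(1 / 2**k + 1 / 3**k for k in range(1, n + 1))

assert abs(partial_sum - (partial_a + partial_b)) < 1e-12  # finite sums split exactly
assert abs(partial_sum - 1.5) < 1e-9                       # close to 1 + 1/2
```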
Theorem (factor rule for series)
Let $\sum_{k=1}^{\infty}a_k$ be a convergent series and $c\in\mathbb{R}$ a real number. Then
$$\sum_{k=1}^{\infty}c\,a_k=c\sum_{k=1}^{\infty}a_k$$
Proof (factor rule for series)
We have:
$$\sum_{k=1}^{\infty}c\,a_k=\lim_{n\to\infty}\sum_{k=1}^{n}c\,a_k=\lim_{n\to\infty}c\sum_{k=1}^{n}a_k=c\lim_{n\to\infty}\sum_{k=1}^{n}a_k=c\sum_{k=1}^{\infty}a_k$$
We are allowed to use the limit theorem $\lim_{n\to\infty}c\,x_n=c\lim_{n\to\infty}x_n$ since $\sum_{k=1}^{\infty}a_k$ converges, so the limit $\lim_{n\to\infty}\sum_{k=1}^{n}a_k$ exists.
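The factor rule can likewise be checked numerically; the series $\sum_k 1/2^k = 1$ and the constant $c = 3$ are example choices of ours:

```python
# Factor rule, numerically: sum_k c*a_k = c * sum_k a_k.
# Example: a_k = 1/2^k (series value 1) and c = 3.
c = 3.0
n = 50
scaled = sum(c * (1 / 2**k) for k in range(1, n + 1))
unscaled = sum(1 / 2**k for k in range(1, n + 1))

assert abs(scaled - c * unscaled) < 1e-12  # constants factor out of finite sums
assert abs(scaled - 3.0) < 1e-9            # close to 3 * 1
```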
Proof (splitting rule for series)
This is a consequence of the sum rule above. We take a look at the series $\sum_{k=1}^{\infty}a_{2k-1}$ and $\sum_{k=1}^{\infty}a_{2k}$. They are given by the partial sums
$$\sum_{k=1}^{n}a_{2k-1}=a_1+a_3+\ldots+a_{2n-1}\qquad\text{and}\qquad\sum_{k=1}^{n}a_{2k}=a_2+a_4+\ldots+a_{2n}$$
We can create two new sequences $(\tilde{a}_k)_{k\in\mathbb{N}}$ and $(\tilde{b}_k)_{k\in\mathbb{N}}$ by extracting the elements from $(a_{2k-1})_{k\in\mathbb{N}}$ and $(a_{2k})_{k\in\mathbb{N}}$ and "filling up the gaps" with zeros:
$$\tilde{a}_k=\begin{cases}a_k&k\text{ odd}\\0&k\text{ even}\end{cases}\qquad\qquad\tilde{b}_k=\begin{cases}0&k\text{ odd}\\a_k&k\text{ even}\end{cases}$$
The corresponding sequences of partial sums are then
$$\sum_{k=1}^{2n}\tilde{a}_k=\sum_{k=1}^{n}a_{2k-1}\qquad\text{and}\qquad\sum_{k=1}^{2n}\tilde{b}_k=\sum_{k=1}^{n}a_{2k}$$
Since $\sum_{k=1}^{\infty}a_{2k-1}$ and $\sum_{k=1}^{\infty}a_{2k}$ converge, the series $\sum_{k=1}^{\infty}\tilde{a}_k$ and $\sum_{k=1}^{\infty}\tilde{b}_k$ converge as well, with
$$\sum_{k=1}^{\infty}\tilde{a}_k=\sum_{k=1}^{\infty}a_{2k-1}\qquad\text{and}\qquad\sum_{k=1}^{\infty}\tilde{b}_k=\sum_{k=1}^{\infty}a_{2k}$$
The sum rule implies convergence of $\sum_{k=1}^{\infty}(\tilde{a}_k+\tilde{b}_k)$. Now $\tilde{a}_k+\tilde{b}_k=a_k$ for all $k\in\mathbb{N}$. Hence, $\sum_{k=1}^{\infty}a_k$ has to converge as well, with
$$\sum_{k=1}^{\infty}a_k=\sum_{k=1}^{\infty}(\tilde{a}_k+\tilde{b}_k)=\sum_{k=1}^{\infty}\tilde{a}_k+\sum_{k=1}^{\infty}\tilde{b}_k=\sum_{k=1}^{\infty}a_{2k-1}+\sum_{k=1}^{\infty}a_{2k}$$
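The splitting rule can be illustrated numerically as well; the concrete series $\sum_k 1/2^k$, whose odd-index and even-index parts are geometric with values $2/3$ and $1/3$, is an example choice of ours:

```python
# Splitting rule, numerically: the sum over odd indices plus the sum over
# even indices equals the full series. Example: a_k = 1/2^k.
n = 50
odd_part = sum(1 / 2**k for k in range(1, n + 1) if k % 2 == 1)
even_part = sum(1 / 2**k for k in range(1, n + 1) if k % 2 == 0)
total = sum(1 / 2**k for k in range(1, n + 1))

assert abs(odd_part + even_part - total) < 1e-12
assert abs(odd_part - 2 / 3) < 1e-9   # sum of 1/2^(2k-1) is 2/3
assert abs(even_part - 1 / 3) < 1e-9  # sum of 1/2^(2k) is 1/3
```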
For finite sums, the associative law of addition allows us to set brackets arbitrarily. For instance,
$$a_1+a_2+a_3+a_4=(a_1+a_2)+(a_3+a_4)=a_1+(a_2+a_3)+a_4$$
Analogously,
$$1-1+1-1=(1-1)+(1-1)=1+(-1+1)-1=0$$
For "infinite sums", we need to pay attention. Consider
$$\sum_{k=0}^{\infty}(-1)^k=1-1+1-1+1-1\pm\ldots$$
The sequence of partial sums for this series is
$$(1,0,1,0,1,0,\ldots)$$
This means the partial sums "jump" between $1$ and $0$, so the series diverges ($0$ and $1$ are accumulation points). Setting brackets can, however, lead to a series converging to $0$:
$$(1-1)+(1-1)+(1-1)+\ldots=0+0+0+\ldots=0$$
So if a series diverges, we cannot simply set brackets as we wish! Conversely, within a convergent series we cannot simply remove brackets, since we can turn the series converging to $0$ above back into a divergent series by removing brackets: for
$$(1-1)+(1-1)+(1-1)+\ldots$$
(which converges to $0$), removing brackets yields
$$1-1+1-1+1-1\pm\ldots$$
(which diverges).
To obtain the limit $1$, we can set the brackets as follows:
$$1+(-1+1)+(-1+1)+\ldots=1+0+0+\ldots=1$$
Achieving any limit other than $0$ or $1$ does not work, since the partial sums for every setting of brackets can only take the values $0$ or $1$.
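The jumping partial sums and the two bracketings above can be reproduced in a few lines of Python; the cutoff of 20 terms is an arbitrary choice of ours:

```python
# Partial sums of 1 - 1 + 1 - 1 +- ... jump between 1 and 0, while
# (1-1)+(1-1)+... has only zero brackets and 1+(-1+1)+(-1+1)+... sums to 1.
terms = [(-1) ** k for k in range(20)]   # 1, -1, 1, -1, ...

partials = []
s = 0
for t in terms:
    s += t
    partials.append(s)
assert partials[:4] == [1, 0, 1, 0]      # the partial sums jump

pair_brackets = [terms[i] + terms[i + 1] for i in range(0, 20, 2)]
assert all(b == 0 for b in pair_brackets)  # every bracket (1-1) is 0

shifted = [terms[0]] + [terms[i] + terms[i + 1] for i in range(1, 19, 2)]
assert sum(shifted) == 1                   # 1 + 0 + 0 + ... = 1
```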
Consider a converging series $\sum_{k=1}^{\infty}a_k$, which is an "infinite sum" $a_1+a_2+a_3+\ldots$. The corresponding sequence of partial sums is
$$(s_n)_{n\in\mathbb{N}}=(a_1,\;a_1+a_2,\;a_1+a_2+a_3,\;\ldots)$$
What happens if we set brackets? We could, for instance, combine every two neighbouring elements:
$$(a_1+a_2)+(a_3+a_4)+(a_5+a_6)+\ldots$$
This leads to the series $\sum_{l=1}^{\infty}(a_{2l-1}+a_{2l})$. The corresponding sequence of partial sums is
$$(s_2,s_4,s_6,\ldots)$$
This is a subsequence of the original sequence of partial sums. Now, since the series $\sum_{k=1}^{\infty}a_k$ converges, the sequence of its partial sums converges, and hence every subsequence converges as well (and to the same limit). So $\sum_{l=1}^{\infty}(a_{2l-1}+a_{2l})$ has the same limit as the original series. In this case, we can set brackets as we wish!
If we set brackets within a series and then consider the "bracketed series", then the partial sum sequence of the "bracketed series" is a subsequence of the original sequence of partial sums. Now
- If a sequence converges, every subsequence converges.
- If a subsequence diverges, the original sequence also diverges.
Since setting brackets leads to a subsequence of partial sums, we have that:
- Within converging series, brackets can be set arbitrarily.
- Within diverging series, brackets can be removed arbitrarily.
Or, summarized as a theorem:
Theorem (brackets in series)
Let $\sum_{k=1}^{\infty}a_k$ be a series and let $(n_l)_{l\in\mathbb{N}}$ be a strictly monotonically increasing sequence of natural numbers with $n_1=1$. If $\sum_{k=1}^{\infty}a_k$ converges, then the bracketed series $\sum_{l=1}^{\infty}\left(\sum_{k=n_l}^{n_{l+1}-1}a_k\right)$ converges to the same limit. Conversely, if the bracketed series diverges, then $\sum_{k=1}^{\infty}a_k$ diverges as well.
Proof (brackets in series)
Let $\sum_{k=1}^{\infty}a_k$ be a series with partial sums $s_n=\sum_{k=1}^{n}a_k$. Introducing brackets, we obtain $\sum_{l=1}^{\infty}\left(\sum_{k=n_l}^{n_{l+1}-1}a_k\right)$, where $(n_l)_{l\in\mathbb{N}}$ is a strictly monotonically increasing sequence of natural numbers with $n_1=1$. The number $n_l$ is the index of the first summand of the $l$-th bracket. The corresponding sequence of partial sums now reads:
$$S_L=\sum_{l=1}^{L}\left(\sum_{k=n_l}^{n_{l+1}-1}a_k\right)=\sum_{k=1}^{n_{L+1}-1}a_k=s_{n_{L+1}-1}$$
This is a subsequence of the original sequence of partial sums $(s_n)_{n\in\mathbb{N}}$. A subsequence converges to the same limit as the original sequence. So:
- If $(s_n)_{n\in\mathbb{N}}$ converges, then the subsequence $(S_L)_{L\in\mathbb{N}}$ has to converge to the same limit.
- If $(S_L)_{L\in\mathbb{N}}$ diverges, then $(s_n)_{n\in\mathbb{N}}$ cannot be convergent in the first place. So it must diverge, as well.
So in converging series we can set brackets, and in diverging series we can remove brackets as we wish.
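The subsequence argument from the proof can be observed concretely; the series $\sum_k 1/2^k$ and the pairwise bracketing are example choices of ours:

```python
# For a convergent series (here a_k = 1/2^k), bracketing every two
# neighbouring terms yields the partial sums s_2, s_4, s_6, ..., a
# subsequence of the original partial sums with the same limit.
a = [1 / 2**k for k in range(1, 41)]

originals = []
s = 0.0
for x in a:
    s += x
    originals.append(s)        # s_1, s_2, s_3, ...

bracketed = []
S = 0.0
for i in range(0, 40, 2):
    S += a[i] + a[i + 1]       # one bracket (a_{2l-1} + a_{2l}) at a time
    bracketed.append(S)        # S_1 = s_2, S_2 = s_4, ...

assert all(abs(bracketed[l] - originals[2 * l + 1]) < 1e-15 for l in range(20))
assert abs(bracketed[-1] - 1.0) < 1e-9   # both converge to the series value 1
```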
What?! The sum of all natural numbers is equal to -1/12?
There are several YouTube videos and also some articles (here is a German one [1]) where people claim to have proven that the sum of all natural numbers equals $-\frac{1}{12}$:
$$1+2+3+4+\ldots=\sum_{k=1}^{\infty}k=-\frac{1}{12}$$
This is obviously wrong! For the series above, the sequence of partial sums $s_n=\sum_{k=1}^{n}k=\frac{n(n+1)}{2}$ diverges quadratically to $+\infty$. It does not even attain any negative value. How do people come up with the $-\frac{1}{12}$, then? The answer is: by violating the associative law. All we have to do is to compute with divergent series as if they had a value (which is not allowed). Recall the sum formula for the geometric series:
$$\sum_{k=0}^{\infty}q^k=\frac{1}{1-q}$$
Plugging in $q=-1$ "yields"
$$1-1+1-1\pm\ldots=\frac{1}{1-(-1)}=\frac{1}{2}$$
Question: What is wrong in the line above?
In addition, for the series $1-2+3-4\pm\ldots$ we have the "identity"
$$(1-1+1-1\pm\ldots)\cdot(1-1+1-1\pm\ldots)=1-2+3-4\pm\ldots$$
which together with the "value" $\frac{1}{2}$ from above "yields"
$$1-2+3-4\pm\ldots=\left(\frac{1}{2}\right)^2=\frac{1}{4}$$
Question: And what is wrong, here?
Multiplying out and factoring out is not allowed for diverging infinite sums: both factors $1-1+1-1\pm\ldots$ diverge, so their product must not be multiplied out term by term.
Now write $S=1+2+3+4+\ldots$ for the sum of all natural numbers. Subtracting the series $1-2+3-4\pm\ldots$ from the original series $S$ yields
$$S-(1-2+3-4\pm\ldots)=0+4+0+8+0+12+\ldots=4\cdot(1+2+3+\ldots)=4S$$
Using the "value" $\frac{1}{4}$ from above, we get $S-\frac{1}{4}=4S$, i.e. $-3S=\frac{1}{4}$. If we divide this equation by $-3$, we get $S=-\frac{1}{12}$.
Question: And what is wrong with this step?
After all those illegal steps, we get
$$1+2+3+4+\ldots=-\frac{1}{12}$$
q.e.d. (or rather w.t.f.)
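A few lines of Python make the actual behaviour of this series plain; the cutoff of 100 terms is an arbitrary choice of ours:

```python
# The partial sums of 1 + 2 + 3 + ... equal n(n+1)/2: they grow
# quadratically and are never negative, let alone equal to -1/12.
partials = []
s = 0
for n in range(1, 101):
    s += n
    partials.append(s)

assert all(p == n * (n + 1) // 2 for n, p in zip(range(1, 101), partials))
assert min(partials) > 0      # never attains a negative value
assert partials[-1] == 5050   # s_100 = 100 * 101 / 2
```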
For convergent series $\sum_{k=1}^{\infty}a_k$ and $\sum_{k=1}^{\infty}b_k$ as well as $c\in\mathbb{R}$ we have the following computational rules:
$$\sum_{k=1}^{\infty}(a_k+b_k)=\sum_{k=1}^{\infty}a_k+\sum_{k=1}^{\infty}b_k\qquad\text{and}\qquad\sum_{k=1}^{\infty}c\,a_k=c\sum_{k=1}^{\infty}a_k$$
In linear algebra, the notion of a vector space is introduced, which is roughly speaking "a set of elements, where we are allowed to add any two elements or multiply an element by a constant $c\in\mathbb{R}$". The set of all real-valued sequences $(a_k)_{k\in\mathbb{N}}$ is such a set, where we can add elements or multiply them by a constant. So it is a vector space. The subset which includes all sequences $(a_k)_{k\in\mathbb{N}}$ for which the series $\sum_{k=1}^{\infty}a_k$ converges is a vector space again (we do not leave it by adding elements or multiplying by a constant). Such a subset is also called a subspace. The map assigning to each such sequence $(a_k)_{k\in\mathbb{N}}$ the limit of the series $\sum_{k=1}^{\infty}a_k$ preserves addition and scalar multiplication: adding two series leads to addition of the limits, and scaling the series by a constant leads to a scaling of the limit by the same constant. Maps which preserve addition and scaling are also called linear maps, so this limit map is a linear map.
- ↑ For instance, this Spiegel article (in German)