15 Advanced Techniques
The theory of infinite series is both deep and rich: there is much more we could say in this short chapter. With an eye towards calculus, however, we must march onwards, and so in this final chapter we collect some odds and ends about series that will prove useful in that quest. In particular, we prove the dominated convergence theorem, allowing one to work simultaneously with limits and infinite sums, as well as Abel's Lemma, an application of summation by parts.
15.1 Switching Limits and Sums
It’s rather common in practice to end up with an infinite sequence of infinite series. For example, if \(f(x)=\sum_k a_k x^k\) is a power series, then one might be interested in evaluating this function along a sequence \(x_n\) of input values within its interval of convergence. This would produce the sequence of values \(f(x_n)=\sum_{k}a_k (x_n)^k\), and if \(x_n\to x\), it’s natural to wonder whether \(\sum_{k}a_k (x_n)^k\to \sum_{k}a_k x^k\). Unpacking this, we are asking if \[\lim_n \sum_{k}a_k (x_n)^k=\sum_{k}a_k (\lim _n x_n)^k\]
By the limit laws this is the same as asking whether \(\lim_n \sum_{k}a_k (x_n)^k=\sum_{k}\lim_n a_k (x_n)^k\), and so more abstractly we are asking if we can switch the order of a limit and a sum.
QUESTION: If \(a_{n,k}\) is a sequence depending on two variables \(n\) and \(k\), when is \(\lim_n\sum_{k\geq 0}a_{n,k}=\sum_{k\geq 0}\lim_n a_{n,k}\)?
Unfortunately this is subtle: it’s sometimes true and sometimes false:
Example 15.1 (When you can switch a limit and a sum) Consider the geometric series \(\sum_{k\geq 0} x^k\), and let \(a_n\) be a convergent sequence in \((-1,1)\), with limit \(a\in(-1,1)\). For each fixed \(n\) the series \(\sum_{k\geq 0}(a_n)^k\) converges, with sum \(\frac{1}{1-a_n}\) by the formula for a geometric series. Thus, taking the limit as \(n\to\infty\) of these sums yields \[\lim_n\sum_{k\geq 0}(a_n)^k=\lim_n\frac{1}{1-a_n}=\frac{1}{1-\lim a_n}=\frac{1}{1-a}\] by the limit theorems. But this is precisely the value of \(\sum_{k\geq 0}a^k=\frac{1}{1-a}\). And again, looking at each term \(a^k\) individually, by the limit theorems \[a^k=(\lim_n a_n)^k = \lim_n (a_n)^k\] Putting these together, we see \(\sum_{k\geq 0}\lim_n (a_n)^k = \frac{1}{1-a}\) and so
\[\lim_n\sum_{k\geq 0}(a_n)^k=\frac{1}{1-a}=\sum_{k\geq 0}\lim_n (a_n)^k\]
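As a quick numerical sketch (not part of the formal development), we can watch the two sides of this equality approach one another in Python, using the hypothetical sequence \(a_n = 1/2 - 1/(n+2)\), which converges to \(a=1/2\) inside \((-1,1)\):

```python
# Sketch: compare "sum first, then limit" with "limit first, then sum"
# for the geometric series, along the sequence a_n = 1/2 - 1/(n+2) -> 1/2.
def geometric_sum(x, terms=200):
    """Partial sum of sum_{k>=0} x^k; 200 terms is plenty for |x| <= 1/2."""
    return sum(x**k for k in range(terms))

inner_then_limit = [geometric_sum(0.5 - 1/(n+2)) for n in (10, 100, 1000)]
limit_then_sum = geometric_sum(0.5)  # series built from the term-wise limits
print(inner_then_limit[-1], limit_then_sum)  # both near 1/(1 - 1/2) = 2
```

Both quantities approach \(\frac{1}{1-1/2}=2\), as the example predicts.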
Example 15.2 (When you can’t switch a limit and a sum) Written without summation notation, consider the following \[\begin{align*} 1 &= \frac{1}{2}+\frac{1}{2}\\ &=\frac{1}{4}+\frac{1}{4}+\frac{1}{4}+\frac{1}{4}\\ &=\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8} \end{align*}\]
Each row sums to \(1\), but the limit of each term \(1/2^n\to 0\). So, if we took the limit of the terms first, we would get \(1=0+0+0+\cdots +0 = 0\). Nonsense! Writing this precisely in summation notation, we define
\[a_{n,k}=\begin{cases} 1/2^n & 0\leq k<2^n\\ 0 & \mathrm{else} \end{cases} \]
Then each of the rows above is the sum \(1=\sum_{k\geq 0}a_{n,k}\) for \(n=1,2,3\). Since each of these sums equals \(1\), the limit of the sums is \(1\); but the sum of the term-wise limits is zero, so the two disagree.
\[1=\lim_n \sum_{k\geq 0}a_{n,k} \neq \sum_{k\geq 0}\lim_n a_{n,k}=\sum_{k\geq 0}0=0\]
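We can see this failure concretely with a short Python computation over the triangular array above:

```python
# The triangular array a_{n,k} = 1/2^n for k < 2^n, and 0 otherwise.
def a(n, k):
    return 1 / 2**n if k < 2**n else 0.0

# Each row sums to exactly 1 (2^n copies of 1/2^n) ...
row_sums = [sum(a(n, k) for k in range(2**n)) for n in range(1, 11)]

# ... but every fixed column tends to 0 as n grows, so the
# "sum of the limits" is 0, not 1.
column_values = [a(50, k) for k in range(10)]  # each is 1/2^50, nearly 0
print(row_sums, column_values)
```

The row sums stay pinned at \(1\) while every column heads to \(0\): exchanging the limit and the sum here is simply wrong.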
So, it’s hopefully clear that to be able to use series in realistic contexts, we are in desperate need of a theorem which tells us when we can interchange limits and summations. The precise theorem giving these conditions is sometimes called Tannery’s Theorem, but we shall refer to it by its more descriptive name, Dominated Convergence.
Theorem 15.1 (Dominated Convergence for Series) Let \(a_{n,k}\) be a double sequence such that \(\lim_n a_{n,k}\) exists for each \(k\), and \(\sum_{k\geq 0}a_{n,k}\) converges for each \(n\). Then if
- There is an \(M_k\) with \(|a_{n,k}|\leq M_k\) for all \(n\).
- \(\sum M_k\) is convergent.
It follows that \(\sum_k\lim_n a_{n,k}\) is convergent, and \[\lim_n\sum_k a_{n,k}=\sum_k\lim_n a_{n,k}\]
Proof. For simplicity of notation define \(a_{\infty,k}=\lim_n a_{n,k}\). First, we show that \(\sum_k a_{\infty,k}\) converges. Since \(|a_{n,k}|\leq M_k\) for all \(n\), this bound persists in the limit: \(|a_{\infty,k}|=\lim_n |a_{n,k}|\leq M_k\). Thus, by comparison we see \(\sum_k |a_{\infty,k}|\) converges, and hence so does \(\sum_k a_{\infty,k}\).
Now, the main event. Let \(\epsilon>0\). To show that \(\lim_n \sum_k a_{n,k}=\sum_k a_{\infty,k}\) we will show that there is some \(N\) beyond which these two sums always differ by less than \(\epsilon\).
Since \(\sum_k M_k\) converges, by the Cauchy criterion there is some \(L\) where \[\sum_{k\geq L}M_k<\frac{\epsilon}{3}\]
For arbitrary \(n\), we compute
\[\begin{align*} \left|\sum_{k\geq 0} a_{n,k}-\sum_{k\geq 0}a_{\infty,k}\right| &= \left|\sum_{k< L} (a_{n,k}-a_{\infty,k})+\sum_{k\geq L}a_{n,k}-\sum_{k\geq L}a_{\infty,k}\right|\\ &\leq \left|\sum_{k< L} (a_{n,k}-a_{\infty,k})\right|+\left|\sum_{k\geq L}a_{n,k}\right|+\left|\sum_{k\geq L}a_{\infty,k}\right|\\ &\leq \sum_{k< L}|a_{n,k}-a_{\infty,k}| +\sum_{k\geq L}|a_{n,k}|+\sum_{k\geq L}|a_{\infty,k}|\\ &\leq \sum_{k< L}|a_{n,k}-a_{\infty,k}|+ 2\sum_{k\geq L}M_k\\ & < \sum_{k< L}|a_{n,k}-a_{\infty,k}|+\frac{2\epsilon}{3} \end{align*}\]
That is, for an arbitrary \(n\) we can bound the difference essentially in terms of the first \(L\) terms: the tail contributes less than \(2\epsilon/3\), uniformly in \(n\). But for each of these \(L\) terms, we know that \(a_{n,k}\to a_{\infty,k}\), so we can make each difference as small as we like. For each \(k<L\), choose \(N_k\) such that \(|a_{n,k}-a_{\infty,k}|<\epsilon/3L\) whenever \(n>N_k\), and then take
\[N=\max\{N_0,N_1,\ldots N_{L-1}\}\]
Now, for any \(n>N\) we are guaranteed that \(|a_{n,k}-a_{\infty,k}|<\epsilon/3L\) for each \(k<L\), and thus that
\[\sum_{k<L}|a_{n,k}-a_{\infty,k}|< L\frac{\epsilon}{3L}=\frac{\epsilon}{3}\]
Combining with the above, we now have for all \(n>N\), \[\left|\sum_{k\geq 0} a_{n,k}-\sum_{k\geq 0}a_{\infty,k}\right|<\epsilon\] as required.
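To see the theorem in action numerically, here is a Python sketch with the hypothetical array \(a_{n,k}=\frac{1}{k^2+1/n}\) for \(k\geq 1\). It is dominated by \(M_k = 1/k^2\), a convergent series, so dominated convergence guarantees the sums converge to \(\sum_{k\geq 1} 1/k^2 = \pi^2/6\):

```python
import math

# a_{n,k} = 1/(k^2 + 1/n) is dominated by M_k = 1/k^2, so Tannery's
# theorem says lim_n sum_k a_{n,k} = sum_k lim_n a_{n,k} = pi^2/6.
def inner_sum(n, terms=100_000):
    """Truncated sum over k of a_{n,k}, for a fixed n."""
    return sum(1 / (k**2 + 1/n) for k in range(1, terms))

limits_of_sums = [inner_sum(n) for n in (1, 10, 1000)]
sum_of_limits = math.pi**2 / 6
print(limits_of_sums[-1], sum_of_limits)
```

As \(n\) grows, the inner sums climb toward \(\pi^2/6\approx 1.6449\), exactly as the theorem predicts.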
There is a natural version of this theorem for products as well; though we will not need it in this course, we state it here anyway.
Theorem 15.2 (\(\bigstar\) Dominated Convergence for Products) For each \(k\) let \(a_{n,k}\) be a function of \(n\), and assume the following:
- For each \(k\), the sequence \(a_{n,k}\) converges to some limit \(a_{\infty,k}\).
- For each \(n\), \(\prod_{k\geq 0} (1+a_{n,k})\) is convergent.
- There is an \(M_k\) with \(|a_{n,k}|\leq M_k\) for all \(n\).
- \(\sum M_k\) is convergent.
Then \(\prod_{k\geq 0}(1+a_{\infty,k})\) is convergent, and \[\lim_n\prod_{k\geq 0} (1+a_{n,k})=\prod_{k\geq 0}(1+a_{\infty,k})\]
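A numerical sketch of the product version, with the hypothetical choice \(a_{n,k}=x_n/2^k\) where \(x_n=1+1/n\to 1\) (so \(M_k = 2/2^k\) is a summable bound):

```python
# Product version of dominated convergence: a_{n,k} = x_n / 2^k with
# x_n = 1 + 1/n -> 1, dominated by the summable bound M_k = 2/2^k.
def product(x, terms=60):
    """Truncated product of (1 + x/2^k) over k; 60 factors is ample."""
    p = 1.0
    for k in range(terms):
        p *= 1 + x / 2**k
    return p

along_sequence = [product(1 + 1/n) for n in (1, 10, 10_000)]
at_the_limit = product(1.0)
print(along_sequence[-1], at_the_limit)
```

The products along the sequence close in on the product evaluated at the limit, mirroring the statement above.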
15.2 \(\bigstar\) Double Sums
A useful application of dominated convergence is switching the order of summation in a double sum. Given a double sequence, one may want to define a double sum
\[\sum_{m,n\geq 0}a_{m,n}\]
But, how should one do this? Because we have two indices, there are two possible orders we could attempt to compute this sum:
\[\sum_{n\geq 0}\sum_{m\geq 0}a_{m,n} \hspace{0.5cm}\textrm{or}\hspace{0.5cm}\sum_{m\geq 0}\sum_{n\geq 0}a_{m,n}\]
Definition 15.1 (Double Sum) Given a double sequence \(a_{m,n}\) its double sum \(\sum_{m,n\geq 0}a_{m,n}\) is defined if both orders of iterated summation converge, and are equal. In this case, the value of the double sum is defined to be their common value:
\[\sum_{m,n\geq 0}a_{m,n}:=\sum_{n\geq 0}\sum_{m\geq 0}a_{m,n} =\sum_{m\geq 0}\sum_{n\geq 0}a_{m,n}\]
We should be worried from previous experience that in general these two things need not be equal, so the double sum may not exist! Indeed, we can make this worry precise, by seeing that to relate one to the other is really an exchange of order of limits:
\[\sum_{m\geq 0}=\lim_M \sum_{0\leq m\leq M}\hspace{1cm}\sum_{n\geq 0}=\lim_N \sum_{0\leq n\leq N}\]
And so, expanding the above with these definitions (and using the limit laws to pull a limit out of a finite sum) we see
\[\sum_{n\geq 0}\sum_{m\geq 0}a_{m,n}=\lim_N \sum_{0\leq n\leq N}\left(\lim_M \sum_{0\leq m\leq M}a_{m,n}\right)\] \[=\lim_N\lim_M\left( \sum_{0\leq n\leq N}\sum_{0\leq m\leq M} a_{m,n}\right)=\lim_N\lim_M \sum_{\begin{smallmatrix}0\leq m\leq M\\ 0\leq n\leq N\end{smallmatrix}}a_{m,n}\]
Where in the final line we have put both indices under a single sum to indicate that it is a finite sum, and the order does not matter. Doing the same with the other order yields the exact same finite sum, but with the order of limits reversed:
\[\sum_{m\geq 0}\sum_{n\geq 0}a_{m,n}=\lim_M\lim_N \sum_{\begin{smallmatrix}0\leq m\leq M\\ 0\leq n\leq N\end{smallmatrix}}a_{m,n}\]
Because this is an exchange-of-limits-problem, we can hope to provide conditions under which it is allowed using Tannery’s theorem.
Theorem 15.3 Let \(a_{m,n}\) be a double sequence, and assume that either \[\sum_{m\geq 0}\sum_{n\geq 0}|a_{m,n}|\hspace{1cm}\textrm{or}\hspace{1cm}\sum_{n\geq 0}\sum_{m\geq 0}|a_{m,n}|\] converges. Then the double sum \(\sum_{m,n\geq 0}a_{m,n}\) exists, meaning both iterated sums converge and are equal: \[\sum_{m\geq 0}\sum_{n\geq 0}a_{m,n}=\sum_{n\geq 0}\sum_{m\geq 0}a_{m,n}\]
Exercise 15.1 (Cauchy’s Double Summation Formula) Use Dominated Convergence to prove the double summation formula (Theorem 15.3).
Hint: without loss of generality, assume that \(\sum_{m\geq 0}\sum_{n\geq 0}|a_{m,n}|\) converges. Set \(M_m=\sum_{n\geq 0}|a_{m,n}|\) and show the various hypotheses of Dominated Convergence apply.
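As a sanity check on Theorem 15.3 (a computation, not a proof), the array \(a_{m,n}=2^{-m}3^{-n}\) is absolutely summable, and Python confirms that both orders of (truncated) iterated summation agree:

```python
# a_{m,n} = 1/(2^m 3^n) is absolutely summable, so both iterated sums
# converge to the same value: (sum 1/2^m)(sum 1/3^n) = 2 * (3/2) = 3.
M = N = 40
rows_first = sum(sum(1/(2**m * 3**n) for n in range(N)) for m in range(M))
cols_first = sum(sum(1/(2**m * 3**n) for m in range(M)) for n in range(N))
print(rows_first, cols_first)  # both near 3
```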
15.3 Summation by Parts
Perhaps you remember from calculus 2 the formula for integration by parts, which states
\[\int_a^b u dv = uv\big|_a^b -\int_a^b v du\]
We will of course prove this later, after we have defined the integral! But there is also a discrete version of this, for sums, which will prove useful beforehand. In fact, summation by parts is a statement about finite sums, so we could have proven it way back in the very first chapter of this book on the field axioms (as we did for the finite geometric series). But we were busy enough back then, so the duty falls to us now, in this odds-and-ends chapter of advanced techniques.
Theorem 15.4 For sequences \(\{a_n\}\) and \(\{b_n\}\), we have \[ \sum_{k=m}^n (a_{k+1} - a_k) b_{k+1} + \sum_{k=m}^n a_k (b_{k+1} - b_k) = a_{n+1}b_{n+1} - a_m b_m. \]
The good news is that the proof is remarkably simple, now that we have the concept of a telescoping series.
Proof. Combining the two terms on the left (which are finite sums, so this is no trouble), we obtain \[ \sum_{k=m}^n \left( b_{k+1} a_{k+1} - b_{k+1} a_k + a_k b_{k+1} - a_k b_k \right) = \sum_{k=m}^n (b_{k+1} a_{k+1} - a_k b_k). \]
This is a telescoping sum, and it simplifies to \(a_{n+1}b_{n+1} - a_m b_m\) after all the cancellations.
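Since the identity in Theorem 15.4 holds for arbitrary sequences, a quick Python check on random data is an easy sanity test (again, not a proof):

```python
import random

# Numerical check of summation by parts (Theorem 15.4) on random sequences.
random.seed(0)
a = [random.random() for _ in range(12)]
b = [random.random() for _ in range(12)]
m, n = 2, 9

lhs = sum((a[k+1] - a[k]) * b[k+1] for k in range(m, n+1)) \
    + sum(a[k] * (b[k+1] - b[k]) for k in range(m, n+1))
rhs = a[n+1] * b[n+1] - a[m] * b[m]
print(lhs, rhs)  # equal up to floating-point rounding
```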
Summation by parts is often used in a slightly different form known as Abel’s Lemma, named after the Norwegian mathematician Niels Abel.
Theorem 15.5 (Abel’s Lemma) Let \(\{a_n\}\) and \(\{b_n\}\) be sequences and let \(s_n = \sum_{k\leq n}a_k\) denote the \(n\)th partial sum of the series corresponding to the sequence \(\{a_n\}\). Then for every \(m < n\) we have \[ \sum_{k=m+1}^n a_k b_k = s_n b_n - s_m b_m - \sum_{k=m}^{n-1} s_k (b_{k+1} - b_k). \]
Proof. We apply the summation by parts formula to the sequences \(\{s_n\}\) and \(\{b_n\}\), to obtain \[ \sum_{k=m}^{n-1} (s_{k+1} - s_k) b_{k+1} + \sum_{k=m}^{n-1} s_k (b_{k+1} - b_k) = s_n b_n - s_m b_m. \]
Since \(s_k\) is the sequence of partial sums of \(a_k\), observe that \(s_{k+1} - s_k = a_{k+1}\), and so \[ \sum_{k=m}^{n-1} a_{k+1} b_{k+1} + \sum_{k=m}^{n-1} s_k (b_{k+1} - b_k) = s_n b_n - s_m b_m. \]
Replacing \(k\) with \(k - 1\) in the first sum and bringing the second sum to the right, we get our result.
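Abel's Lemma is likewise an identity of finite sums, so it too submits to a quick numerical check in Python:

```python
import random

# Numerical check of Abel's Lemma (Theorem 15.5) on random data.
random.seed(1)
a = [random.random() for _ in range(15)]
b = [random.random() for _ in range(15)]
s = [sum(a[:k+1]) for k in range(15)]  # s[k] = a[0] + ... + a[k]
m, n = 3, 12

lhs = sum(a[k] * b[k] for k in range(m+1, n+1))
rhs = s[n]*b[n] - s[m]*b[m] - sum(s[k]*(b[k+1] - b[k]) for k in range(m, n))
print(lhs, rhs)  # equal up to floating-point rounding
```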
This allowed Abel to produce another powerful test for convergence of a series:
Theorem 15.6 (Abel’s Test) If the series \(\sum_{k=1}^\infty x_k\) converges, and if \((y_k)\) is a monotone decreasing nonnegative sequence, then the series \(\sum_{k=1}^\infty x_k y_k\) converges.
Exercise 15.2 Prove Abel’s test.
Hint: Use Abel’s Lemma to observe that \(\sum_{k=1}^n x_k y_k = s_n y_{n+1} + \sum_{k=1}^n s_k (y_k - y_{k+1})\), where \(s_n = x_1 + x_2 + \cdots + x_n\) is the \(n\)th partial sum of \(\sum x_k\). Then use the comparison test to argue that \[\sum_{k=1}^\infty s_k (y_k - y_{k+1})\] converges absolutely, and show how this leads directly to a proof of Abel’s Test.
Our main application of this result will be to understanding the continuity of power series at their endpoints, in a future chapter. But summation by parts makes quick work of many other calculations that might have otherwise been performed through a lengthy induction.
Here we take a look at one example: the summing of integer powers.
Example 15.3 (Summing Integers) For any \(n\in\NN\), \(\sum_{k=1}^n k = \frac{n(n+1)}{2}\).
Let \(a_k = k\) and \(b_k = k - 1\) (this choice makes the boundary term \(a_1 b_1 = 0\)). Then each of the differences \(a_{k+1} - a_k\) and \(b_{k+1} - b_k\) equals \(1\), so by summation by parts with \(m=1\), we have \[\sum_{k=1}^n (1)(k) + \sum_{k=1}^n (k)(1) = (n+1)(n).\] This equality can be simplified to \[2 \sum_{k=1}^n k = n(n+1).\] Dividing by two gives the claim \[1 + 2 + \ldots + n = \frac{n(n+1)}{2}.\]
Example 15.4 (Summing Squares) For any \(n\in\NN\), \(\sum_{k=1}^nk^2=\frac{n(n+1)(2n+1)}{6}\).
To see this, let \(a_k = k^2\) and \(b_k = k - 1\). In this case, \[a_{k+1} - a_k = (k+1)^2 - k^2 = 2k + 1,\] and \[b_{k+1} - b_k = 1.\] So by the summation by parts formula, we have \[\sum_{k=1}^n (2k+1)k + \sum_{k=1}^n (k^2)(1) = (n+1)^2 n.\]
Simplifying a bit, we get \[3 \sum_{k=1}^n k^2 + \sum_{k=1}^n k = n(n+1)^2.\]
Since \(\sum_{k=1}^n k = \frac{n(n+1)}{2}\) from the previous example, after some algebra we end up with the desired result \[\sum_{k=1}^n k^2 = 1^2 + 2^2 + \ldots + n^2 = \frac{n(n+1)(2n+1)}{6}.\]
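Both closed forms derived above are easy to verify computationally; a short Python loop checks them exactly for a range of \(n\):

```python
# Exact integer check of the closed forms for sum of k and sum of k^2.
for n in (1, 5, 10, 100):
    assert sum(range(1, n+1)) == n*(n+1)//2
    assert sum(k**2 for k in range(1, n+1)) == n*(n+1)*(2*n+1)//6
print("formulas verified")
```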
Exercise 15.3 (Summing Cubes) Prove that the following formula holds for the sum of cubes \[1^3+2^3+\cdots+n^3= \left(1+2+\cdots + n\right)^2\]
Hint: follow the suggested steps below:
- Let \(a_k=k^2\) and \(b_k=(k-1)^2\), and apply summation by parts.
- Simplify the left side with algebra, and the sum of squares
- Divide both sides by 4, and recognize the right as the square of what we got when summing \(\sum_{k=1}^n k\).
15.4 Problems
Exercise 15.4 Let \(E(x)=\sum_{k\geq 0} x^k/k!\). Prove that if \(x_n\to x\) is a convergent sequence, then \[\lim_n E(x_n)= E(x)\] using dominated convergence.
Exercise 15.5 Another nice application involving the series \(\sum_{k\geq 0}\frac{x^k}{k!}\) is proving
\[ \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n = \sum_{k=0}^{\infty} \frac{x^k}{k!}. \]
Hint: Setting \(f_k(n) = \binom{n}{k} \frac{x^k}{n^k}\), it can be shown that \(f_k = \lim_{n \to \infty} f_k(n) = \frac{x^k}{k!}\) and Dominated Convergence can be applied with the upper bound \(M_k = \frac{|x|^k}{k!}\).
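For intuition before attempting the proof, a Python computation (a numerical check only, not the requested argument) shows the two sides agreeing at a sample point \(x=1.7\):

```python
import math

# Numerical comparison of (1 + x/n)^n for large n against the
# truncated series sum x^k / k!, at the sample point x = 1.7.
x = 1.7
series = sum(x**k / math.factorial(k) for k in range(60))
approx = (1 + x/10**6) ** 10**6
print(approx, series, math.exp(x))  # all three agree closely
```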
Exercise 15.6 Use Dominated Convergence to prove that
\[\frac{1}{2}=\lim_n \left[\frac{1+2^n}{2^n\cdot 3 +4}+\frac{1+2^n}{2^n\cdot 3^2 +4^2}+\frac{1+2^n}{2^n\cdot 3^3 +4^3}+\cdots\right]\]
- Write in summation notation, and give a formula for the terms \(a_k(n)\)
- Show that \(\lim_n a_k(n) =\frac{1}{3^k}\)
- Show that for all \(n\), \(|a_k(n)|\leq \frac{2}{3^k}\)
Use these facts to show that the hypotheses of dominated convergence hold true, and then use the theorem to help you take the limit.
Exercise 15.7 (Applying the Double Sum) Since switching the order of limits involves commuting terms that are arbitrarily far apart, techniques like double summation allow one to prove many identities that are rather difficult to show directly. We will make a crucial use of this soon, in understanding exponential functions. But here is a first example:
For any \(k\in\NN\) and any \(z\) with \(|z|<1\), prove the following equality of infinite sums:
\[\frac{z^{1+k}}{1-z}+\frac{(z^2)^{1+k}}{1-z^2}+\frac{(z^3)^{1+k}}{1-z^3}+\cdots =\frac{z^{1+k}}{1-z^{1+k}}+\frac{z^{2+k}}{1-z^{2+k}}+\frac{z^{3+k}}{1-z^{3+k}}+\cdots\]
Hint: first write each side as a summation: \[\sum_{n\geq 1}\frac{z^{n(k+1)}}{1-z^n}=\sum_{m\geq 1}\frac{z^{m+k}}{1-z^{m+k}}\]
Then setting \(a_{m,n}=z^{n(m+k)}\), show that Cauchy summation applies to the double sum \(\sum_{m,n\geq 1} a_{m,n}\), and compute the sum in each order, arriving at the claimed equality.