$$ \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\FF}{\mathbb{F}} \renewcommand{\epsilon}{\varepsilon} % ALTERNATE VERSIONS % \newcommand{\uppersum}[1]{{\textstyle\sum^+_{#1}}} % \newcommand{\lowersum}[1]{{\textstyle\sum^-_{#1}}} % \newcommand{\upperint}[1]{{\textstyle\smallint^+_{#1}}} % \newcommand{\lowerint}[1]{{\textstyle\smallint^-_{#1}}} % \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\uppersum}[1]{U_{#1}} \newcommand{\lowersum}[1]{L_{#1}} \newcommand{\upperint}[1]{U_{#1}} \newcommand{\lowerint}[1]{L_{#1}} \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} % extra auxiliary and additional topic/proof \newcommand{\extopic}{\bigstar} \newcommand{\auxtopic}{\blacklozenge} \newcommand{\additional}{\oplus} \newcommand{\partitions}[1]{\mathcal{P}_{#1}} \newcommand{\sampleset}[1]{\mathcal{S}_{#1}} \newcommand{\erf}{\operatorname{erf}} $$

8  Monotone Convergence

Highlights of this Chapter: We prove the monotone convergence theorem, our first theorem that tells us a sequence converges without our having to first know its limiting value. We show how to use this theorem to find the limits of various recursively defined sequences, including several important examples.

  • We prove the infinite sequence of roots \(\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}\) converges to the golden ratio.
  • We prove the sequence \(\left(1+\frac{1}{n}\right)^n\) converges to the number \(e=2.71828\ldots\)
  • We begin a treatment of irrational exponents, by looking at the limit of sequences with rational exponents.

8.1 Monotone Convergence

The motivation for inventing sequences is to work with infinite processes, where we have a precise description of each finite stage, but cannot directly grasp the “completed” state “at infinity”. In earlier sections we computed a few specific limits, and then showed how to find new, more complicated limits via algebra, given that you know the values of some simpler ones.

But what we haven’t done, since our original motivating discussion with the nested intervals theorem, is actually return to the part of the theory we are most interested in: rigorously assuring that certain sequences converge, without knowing the value of their limit ahead of time! The most useful theorem in this direction is the monotone convergence theorem, which deals with monotone sequences.

Definition 8.1 (Monotone Sequences) A sequence \(s_n\) is monotone increasing (or more precisely, monotone non-decreasing) if \[m\leq n\,\,\,\implies\,\,\, s_m\leq s_n\] A sequence is monotone decreasing (non-increasing) if \[m\leq n\,\,\,\implies\,\,\, s_m\geq s_n\]

Note: constant sequences are monotone: both monotone increasing and monotone decreasing.

The original inspiration for a monotone sequence is the sequence of upper bounds or lower bounds from a collection of nested intervals: as the intervals get smaller, the lower bounds monotonically increase, and the upper bounds monotonically decrease. The monotone convergence theorem guarantees that such sequences always converge. Its proof is below, but could actually be extracted directly from Theorem 3.3.

Theorem 8.1 (Monotone Convergence) Let \(s_n\) be a monotone bounded sequence. Then \(s_n\) is a convergent sequence.

Proof. Here we consider the case that \(s_n\) is monotone increasing, and leave the decreasing case as an exercise. Let \(S=\{s_n\mid n\in\NN\}\). Then \(S\) is nonempty, and is bounded above (by any upper bound for the sequence \(s_n\), which we assumed is bounded). Thus by completeness, it has a supremum \(s=\sup S\).

We claim that \(s_n\) converges to \(s\). To prove this, choose \(\epsilon>0\), and note that as \(s\) is the least upper bound, \(s-\epsilon\) is not an upper bound for \(S\), so there must be some \(N\) where \(s_N>s-\epsilon\). But \(s_n\) is monotone increasing, so if \(n>N\) it follows that \(s_n\geq s_N\). Recalling that for all \(n\) we know \(s_n\leq s\) (since \(s\) is an upper bound), we have found some \(N\) where for all \(n>N\) we know \(s-\epsilon< s_n \leq s\). This further implies \(|s_n-s|<\epsilon\), which is exactly the definition of convergence! Thus

\[s_n\to s\] So it is a convergent sequence, as claimed.
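The \(\epsilon\)–\(N\) bookkeeping in this proof can be made concrete numerically. Below is a minimal illustrative sketch (not part of the proof) using the monotone increasing bounded sequence \(s_n = 1 - 1/n\), whose supremum is \(1\): for a given \(\epsilon\) we locate an \(N\) with \(s_N > 1-\epsilon\), and monotonicity keeps every later term within \(\epsilon\) of the supremum.

```python
# Illustrative sketch of the monotone convergence argument for the
# concrete sequence s_n = 1 - 1/n, which increases to its supremum 1.
def s(n):
    return 1 - 1 / n

epsilon = 0.001

# Find the first index N where s_N exceeds sup - epsilon; since the
# sequence is monotone increasing, all later terms do too.
N = next(n for n in range(1, 100_000) if s(n) > 1 - epsilon)
print(N)
```

From this index onward, every term lies in the window \((1-\epsilon, 1]\), exactly as in the proof above.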

Though straightforward to prove, this theorem has tons of applications: it assures us that many of the recursively defined sequences that show up in practice, though difficult to describe explicitly, actually do converge, and thus we may rigorously reason about their limits. We will give several interesting examples below.

8.2 Application: Defining Irrational Powers

We have already defined rational powers of a number in terms of iterated multiplication/division and the extraction of roots: but how does one define a power with an arbitrary real exponent? We can use sequences to do this! To motivate this, let’s consider the example of defining \(2^\pi\). We can write \(\pi\) as the limit of a sequence of rational numbers, for instance

\[3,\,3.1,\,3.14,\,3.141,\,3.1415\,\ldots\]

And since rational exponents make sense, from this we can produce a sequence of exponentials

\[2^3,\, 2^\frac{31}{10},\, 2^{\frac{314}{100}},\,2^{\frac{3141}{1000}},\,2^{\frac{31415}{10000}},\ldots\]

Then we may ask if this sequence has a limit: if it does, it’s natural to try to define that limiting value to be two to the power of pi. To make sure this makes sense, we need to check two potential worries:

  • Does this sequence converge?
  • Does the limit depend on the particular sequence chosen?

For example, if you tried to define \(3^{\sqrt{2}}\) using the Babylonian sequence for \(\sqrt{2}\), and your friend tried to use the sequence coming from its continued fraction, you’d better get the same number if this is a reasonable thing to define! Because we are in the section on monotone convergence, we will restrict ourselves at the moment to monotone sequences, though we will see later that we can dispense with this if desired.

Proposition 8.1 If \(r_n\to x\) is a monotone sequence of rational numbers converging to \(x\), and \(a>0\) then the sequence \(a^{r_n}\) converges.

Proof. We treat the case \(a>1\) (the case \(a=1\) gives a constant sequence; for \(0<a<1\), note \(a^{r_n}=(1/a)^{-r_n}\), and \(-r_n\) is again monotone, reducing to the case of base \(1/a>1\)). Recall that for a fixed base \(a>1\), exponentiation by rational numbers is monotone increasing, so \(r<s\) implies \(a^r<a^s\).
Thus, given a monotone sequence \(r_n\), the exponentiated sequence \(a^{r_n}\) remains monotone (for monotone increasing we see \(r_n\leq r_{n+1}\implies a^{r_n}\leq a^{r_{n+1}}\), and the inequalities are reversed if \(r_n\) is monotone decreasing).

Now that we know \(a^{r_n}\) is monotone, we only need to see that it is bounded in order to apply monotone convergence. Again we have two cases, and will deal here with the monotone increasing case. As \(r_n\to x\) and \(x\) is a real number, there must be some natural number \(N>x\). Thus \(N\) is greater than \(r_n\) for all \(n\), and so \(a^N\) is greater than \(a^{r_n}\): our sequence is bounded above by \(a^N\). Thus all the hypotheses of monotone convergence are satisfied, and \(\lim a^{r_n}\) exists.

Now that we know such sequences make sense, we wish to clear up any potential ambiguity, and show that if two different sequences both converge to \(x\), the value we attempt to assign to \(a^x\) as a limit is the same for each. As a lemma in this direction, we look at sequences converging to zero.

Exercise 8.1 Let \(r_n\) be any sequence of rationals converging to zero. Then for any \(a>0\) we have \[\lim a^{r_n}=1\]

Corollary 8.1 If \(r_n,s_n\) are two monotone sequences of rationals each converging to \(x\), then \[\lim a^{r_n}=\lim a^{s_n}\] for any \(a>0\).

Proof. Let \(z_n=r_n-s_n\), so that \(z_n\to 0\). Because \(r_n\) and \(s_n\) are monotone, we know \(\lim a^{r_n}\) and \(\lim a^{s_n}\) exist. And by the exercise above, we have \(a^{z_n}\to 1\). Noting that \(r_n=s_n+z_n\) and that the laws of exponents apply for rational exponents, we have \[a^{r_n}=a^{s_n+z_n}=a^{s_n}a^{z_n}\] But as all quantities in question converge we can use the limit theorems to compute:

\[\begin{align*} \lim a^{r_n}&=\lim a^{s_n+z_n}\\ &=\lim a^{s_n}a^{z_n}\\ &=(\lim a^{s_n})(\lim a^{z_n})\\ &=\lim a^{s_n} \end{align*}\]

Thus, we can unambiguously define the value of \(a^x\) as the limit of any monotone sequence \(a^{r_n}\) without specifying the sequence itself.
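The motivating example \(2^\pi\) can be sketched numerically. Below is an illustrative Python snippet (floating-point exponentiation stands in for the rational-power definition, so this is a numerical sketch rather than a construction): we exponentiate \(2\) by the monotone increasing rational truncations of \(\pi\)'s decimal expansion and watch the values increase toward a limit.

```python
from fractions import Fraction

# Monotone increasing rational truncations of pi's decimal expansion.
truncations = ["3", "3.1", "3.14", "3.141", "3.1415", "3.14159", "3.141592"]
rationals = [Fraction(t) for t in truncations]

# Exponentiate 2 by each rational approximation.  Floating-point **
# stands in here for the rational-root definition of a^(p/q).
powers = [2.0 ** float(r) for r in rationals]
for r, p in zip(rationals, powers):
    print(f"2^{r} = {p}")
```

The printed values increase toward \(2^\pi \approx 8.82497\ldots\), consistent with the monotone convergence argument above.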

Definition 8.2 (Irrational Powers) Let \(a>0\) and \(x\in\RR\). Then we define \(a^x\) as a limit \[a^x=\lim a^{r_n}\] for \(r_n\) any monotone sequence of rational numbers converging to \(x\).

Perhaps upon reading this definition to yourself you wonder, is the restriction to monotone sequences important, or just an artifact of our currently limited toolset?
Once we build more tools we will see the latter is the case; you will show on homework that arbitrary convergent sequences \(r_n\to x\) can be used to unambiguously define \(a^x\).

8.3 Application: Recursive Sequences

The monotone convergence theorem is particularly well suited to working with recursive sequences, as one may aim to prove such a sequence is monotone and bounded by induction. This guarantees the limit exists, at which point we can rigorously give that limiting value a name, and use the limit theorems to find its value.

8.3.1 The Golden Ratio

Consider the recursive sequence defined by \(s_{n+1}=\sqrt{1+s_n}\) starting from \(s_0=1\):

\[s_0=1\] \[s_1=\sqrt{1+\sqrt{1}}\] \[s_2=\sqrt{1+\sqrt{1+\sqrt{1}}}\] \[\ldots\]

Because such sequences follow a regular pattern, we can use a shorthand notation with ellipsis for their terms. For example, in the original sequence above, writing the first couple steps of the pattern followed by an ellipsis

\[\sqrt{1+\sqrt{1+\sqrt{1+\ldots}}}\]

we take to mean the sequence of terms \(s_n\) defined by the recursion \(s_{n+1}=\sqrt{1+s_n}\). Thus, writing \(\lim \sqrt{1+\sqrt{1+\cdots}}\) means the limit of the sequence implicitly defined by this infinite expression.

Exercise 8.2 Here are some other infinite expressions defined by recursive sequences: can you give the recursion relation they satisfy? \[\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}}\]

\[\frac{1}{1+\frac{1}{1+\cdots}}\]

\[\cos(\cos(\cos(\cdots\cos(5)\cdots)))\]

For none of these sequences is it at all clear how to find the limiting value from scratch, or how we could possibly apply the limit theorems concerning field operations and inequalities. But recursive sequences are perfectly set up for induction, and hence for monotone convergence! We can build a sort of recipe for dealing with them:

Recursive Sequence Operation Manual:

  • Prove it is bounded, by induction.
  • Prove it is monotone, by induction.
  • Use monotone convergence to conclude it is convergent.
  • Use the recursive definition, and the limit theorems, to find an equation satisfied by the limit.
  • Solve that equation, to find the limit.

A beautiful and interesting example of this operations manual is carried out below:

Proposition 8.2 The sequence \(\sqrt{1+\sqrt{1+\cdots}}\) converges to the golden ratio.

Proof. The infinite expression \(\sqrt{1+\sqrt{1+\cdots}}\) defines the recursive sequence \(s_{n+1}=\sqrt{1+s_n}\) with \(s_0=1\).

Step 1: \(s_n\) is monotone increasing, by induction. First we show that \(s_1>s_0\). Using the formula, \(s_1=\sqrt{1+1}=\sqrt{2}\), which is larger than \(s_0=1\). Next, we assume for induction that \(s_n>s_{n-1}\) and use this to prove that \(s_{n+1}>s_n\). Starting from our induction hypothesis, we add one to both sides, yielding \(1+s_n>1+s_{n-1}\), and then take the square root (which preserves the inequality, by Proposition 2.5) to get \[\sqrt{1+s_n}>\sqrt{1+s_{n-1}}\] But now, we simply note that the term on the left is the definition of \(s_{n+1}\) and the term on the right is the definition of \(s_n\). Thus we have \(s_{n+1}>s_n\) as claimed, and our induction proof works for all \(n\).

Step 2: \(s_n\) is bounded, by induction. It is hard to guess an upper bound for \(s_n\) without doing a little calculation, but plugging the first few terms into a calculator shows them to be less than \(2\), so we might try to prove \(\forall n\,\, s_n<2\). The base case is immediate as \(s_0=1<2\), so assume for induction \(s_n<2\). Then \(1+s_n<3\) and so \(\sqrt{1+s_n}<\sqrt{3}\), and \(\sqrt{3}<2\) (as \(3<2^2=4\)), so our induction has worked, and the entire sequence is bounded above by \(2\).

Conclusion: \(s_n\) converges! We have proven the sequence \(s_n\) is both monotone increasing and bounded above by \(2\). Thus the monotone convergence theorem assures us that there exists some \(L\) with \(s_n\to L\). It only remains to figure out what number this is!

Step 3: The Limit Theorems Because truncating the beginning of a sequence does not change its limit, we see that \(\lim s_n=\lim s_{n+1}=L\). But applying the limit theorems to \(s_{n+1}=\sqrt{1+s_n}\), we see that as \(s_n\to L\), it follows that \(1+s_n\to 1+L\) and thus that \(\sqrt{1+s_n}\to \sqrt{1+L}\). This gives us an equation that \(L\) must satisfy!
\[\sqrt{1+L}=L\] Simplifying, this becomes \(1+L=L^2\), which has solutions \((1\pm \sqrt{5})/2\). So far this argument only tells us that one of these numbers must be our limit \(L\): to figure out which, we bring in more information. Every term of our sequence is positive, so the limit is nonnegative, and this singles out the positive root:

\[\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}=\frac{1+\sqrt{5}}{2}\approx 1.618\ldots\]

This number is known as the golden ratio.
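The operations manual above guarantees convergence; as a purely numerical sanity check (not a substitute for the proof), we can iterate the recursion in Python and watch it climb toward the golden ratio:

```python
import math

# Iterate s_{n+1} = sqrt(1 + s_n) starting from s_0 = 1.
s = 1.0
for _ in range(60):
    s = math.sqrt(1 + s)

# Compare with the golden ratio (1 + sqrt(5)) / 2.
phi = (1 + math.sqrt(5)) / 2
print(s, phi)
```

After a few dozen iterations the two values agree to machine precision, and every iterate stays below the bound \(2\) established in Step 2.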

Example 8.1 The final step of the proof above suggests a way one might find a recursive sequence to use as a calculating tool: if we started with the golden ratio \[\phi =\frac{1+\sqrt{5}}{2}\] we could observe that \(\phi\) solves the quadratic equation \(1+L=L^2\), and hence \(L=\sqrt{1+L}\). This sets up a recursive sequence, as we can plug this relation into itself over and over:

\[L=\sqrt{1+L}=\sqrt{1+\sqrt{1+L}}=\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}\]

This immediately suggests the recursion \(s_{n+1}=\sqrt{1+s_n}\) as a candidate for generating a sequence whose limit is the original root.

Exercise 8.3 Find a recursive sequence whose limit is the positive real root of \(x^2-2x-5\). Then prove that your proposed sequence actually converges to this value.

Exercise 8.4 What number is this? \[\sqrt{1-2\sqrt{1-2\sqrt{1-2\sqrt{\cdots}}}}\]

8.3.2 \(\sqrt{2}\)

Recall that the Babylonian sequence converging to \(\sqrt{2}\) was recursively defined: starting from the side length \(x_0=2\) of a \(2\times 1\) rectangle, we replace it with the average of the two sides (giving a rectangle closer to a square). As a formula this is the process \(x\mapsto \left(x+\frac{2}{x}\right)/2\), from which we produce a recursive sequence

\[x_{n+1}=\frac{x_n+\frac{2}{x_n}}{2}\]

Our goal here is to provide a proof of convergence using the strategy laid out above. First, we aim to show that \(x_n\) is monotone decreasing; to make the algebra simpler, we begin with a little lemma simplifying the condition:

Exercise 8.5 Let \(x_n\) be a sequence of positive numbers satisfying the babylonian recurrence relation.
Show that \(x_{n+1}<x_n\) if and only if \(2<x_n^2\).

Proposition 8.3 Starting from \(x_0=2\), the recursive procedure for \(x_n\) defines a monotone decreasing sequence.

Proof. We proceed by induction. For the base case we compute \(x_1=\frac{2+\frac{2}{2}}{2}=\frac{3}{2}\) so \(x_1<x_0\) as required. For the inductive step we assume \(x_n<x_{n-1}\) and aim to show \(x_{n+1}<x_n\). By the exercise above, this is equivalent to assuming that \(2<x_{n-1}^2\) and using this to prove that \(2<x_n^2\).

Writing out the recursive definition of \(x_n\) we see

\[\begin{align*} x_n^2-2 &= \left(\frac{x_{n-1}+\frac{2}{x_{n-1}}}{2}\right)^2-2\\ &=\frac{x_{n-1}^2+4+\frac{4}{x_{n-1}^2}}{4}-2\\ &=\frac{x_{n-1}^2-4+\frac{4}{x_{n-1}^2}}{4}\\ &=\left(\frac{x_{n-1}-\frac{2}{x_{n-1}}}{2}\right)^2 \end{align*}\]

The final expression is a square, so it is nonnegative, and it equals zero only when \(x_{n-1}=\frac{2}{x_{n-1}}\), that is, when \(x_{n-1}^2=2\). This is ruled out by the inductive hypothesis \(2<x_{n-1}^2\), so in fact \(x_n^2-2>0\), which is exactly the claim \(2<x_n^2\). Thus \(x_n\) is monotone decreasing.

The next step in our process is to prove the sequence is bounded.

Exercise 8.6 Prove that the sequence with \(x_0=2\) satisfying the babylonian recurrence relation is bounded below by 1.

Now, since \(x_n\) is monotone and bounded it converges to some limiting value \(L\). Since truncating the beginning of a sequence has no effect on the eventual limit, we know \(\lim x_{n+1}=\lim x_n = L\), but expanding the first term’s recursive definition, this implies that

\[\lim \frac{x_n+\frac{2}{x_n}}{2}=L\]

Since we know \(x_n\to L\) and \(L\neq 0\) (as the sequence is bounded below by \(1\), so \(L\geq 1\)), we can use the limit theorems to compute

\[\lim\frac{x_n+\frac{2}{x_n}}{2} = \frac{\lim x_n + \frac{2}{\lim x_n}}{2}=\frac{L+\frac{2}{L}}{2}\]

Thus, whatever the limiting value \(L\) is it must satisfy the equation

\[\frac{L+\frac{2}{L}}{2}=L\]

Multiplying by \(2L\) to clear denominators we see

\[L^2+2=2L^2\,\implies\, 2=L^2\]

We know \(L\) is positive, and the only positive solution to this equation is \(L=\sqrt{2}\). So we have another proof of the convergence of the Babylonian procedure!
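As with the golden ratio, we can watch the Babylonian procedure converge numerically. The sketch below (illustrative only) iterates the recursion from \(x_0=2\) and exhibits the monotone decreasing terms approaching \(\sqrt{2}\):

```python
import math

# Babylonian recursion x -> (x + 2/x) / 2 starting from x_0 = 2.
x = 2.0
terms = [x]
for _ in range(8):
    x = (x + 2 / x) / 2
    terms.append(x)

# The terms decrease monotonically toward sqrt(2) = 1.41421356...
print(terms)
```

The convergence is strikingly fast: just a handful of iterations already agree with \(\sqrt{2}\) to machine precision, and every term stays at or above the lower bound \(1\) from Exercise 8.6.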

8.4 \(\bigstar\) The Number \(e\)

In this section we aim to study, and prove the convergence of, the following sequence of numbers \[\left(1+\frac{1}{n}\right)^n\] We will later see that the limit of this sequence is the number \(e\). (Indeed, many authors take this sequence itself as the definition of \(e\), as it is perhaps the first natural-looking sequence limiting to this special value. We will instead define \(e\) in terms of exponential functions to come, and then later show its value coincides with this limit.)

We begin by proving the sequence \(a_n=\left(1+\frac{1}{n}\right)^n\) is monotone, as a prelude to applying monotone convergence.

Example 8.2 The sequence \(a_n=\left(\frac{n+1}{n}\right)^n\) is monotone increasing.

Proof. To show \(a_n\) is increasing we will show that the ratio \(\frac{a_n}{a_{n-1}}\) is greater than 1. Simplifying,

\[\frac{a_n}{a_{n-1}}=\frac{\left(\frac{n+1}{n}\right)^n}{\left(\frac{n}{n-1}\right)^{n-1}}=\left(\frac{n+1}{n}\right)^n\left(\frac{n-1}{n}\right)^{n-1}\]

Multiplying by \(\frac{n-1}{n}\) and its inverse we can make the powers on each of these terms the same, and combine them:

\[=\left(\frac{n+1}{n}\right)^n\left(\frac{n-1}{n}\right)^{n}\frac{n}{n-1}=\left(\frac{n^2-1}{n^2}\right)^n\frac{n}{n-1}\] Simplifying what is in parentheses, we notice that we are actually in a perfect situation to apply Bernoulli’s Inequality (Exercise 2.6) to help us estimate this term. Recall this says that if \(r\) is any number such that \(1+r\) is positive, \((1+r)^n\geq 1+nr\). When \(n\geq 2\) we can apply this to \(r=-\frac{1}{n^2}\), yielding \[\frac{a_n}{a_{n-1}}=\left(1-\frac{1}{n^2}\right)^n\frac{n}{n-1}\geq \left(1-\frac{n}{n^2}\right)\frac{n}{n-1}\] \[=\frac{n-1}{n}\frac{n}{n-1}=1\]

Thus \(\frac{a_n}{a_{n-1}}\geq1\), so \(a_n\geq a_{n-1}\) and the sequence is monotone increasing for all \(n\), as claimed.
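The Bernoulli-inequality argument above predicts that every ratio \(a_n/a_{n-1}\) is at least \(1\). A quick numerical spot-check (illustrative, not part of the proof) confirms this:

```python
# Spot-check the ratio argument: a_n / a_{n-1} >= 1 for
# a_n = (1 + 1/n)^n, so the sequence is monotone increasing.
def a(n):
    return (1 + 1 / n) ** n

ratios = [a(n) / a(n - 1) for n in range(2, 100)]
print(min(ratios))  # the smallest ratio is still at least 1
```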

Next we need to show that \(a_n\) is bounded above. Computing terms numerically, it seems that \(a_n\) is bounded above by 3, but of course no amount of computation can substitute for a proof. And after a bit of trying, it seems hard to prove directly that it actually is bounded above.

So instead, we will employ a bit of an ingenious trick. We will study a second sequence, which appears very similar to the first:

\[b_n=\left(\frac{n+1}{n}\right)^{n+1}\]

Indeed, this is just our sequence \(a_n\) multiplied by one extra factor of \(\frac{n+1}{n}\)! But this extra factor changes its behavior a bit: computing the first few terms, we see that it appears to be decreasing:

\[b_1=\left(1+1\right)^2=4,\,\,b_2=\left(1+\frac{1}{2}\right)^3=\frac{27}{8}= 3.375,\,\,b_3=\left(1+\frac{1}{3}\right)^4\approx 3.1604\]

Indeed, a proof that it is decreasing can be constructed following a strategy identical to that used for \(a_n\) in Example 8.2.

Exercise 8.7 The sequence \(b_n=\left(\frac{n+1}{n}\right)^{n+1}\) is monotone decreasing.

Now that we understand the behavior of \(b_n\) we can use it to prove that \(a_n\) is bounded above:

Corollary 8.2 The sequence \(a_n=\left(1+\frac{1}{n}\right)^n\) is convergent.

Proof. Note that the sequences \(a_n\) and \(b_n\) are related by \[b_n=\left(\frac{n+1}{n}\right)^{n+1}=a_n\left(\frac{n+1}{n}\right)\] Since \(\frac{n+1}{n}>1\) we see that \(b_n>a_n\) for all \(n\). But \(b_n\) is decreasing, so \(b_n\leq b_1=2^2=4\), and so \(a_n\) is bounded above by \(4\). As \(a_n\) is also monotone increasing (Example 8.2), the monotone convergence theorem guarantees it converges.

Note that we can also readily see that \(b_n\) is itself convergent (though we did not actually need that fact for our analysis of \(a_n\)): we proved it is monotone decreasing, and it is a sequence of positive terms, so it is trivially bounded below by zero!

We can also see that \(a_n\) and \(b_n\) have the same limit, using the limit theorems. Since \(\frac{1}{n}\to 0\), we know that \(1+\frac{1}{n}\to 1\), and hence that

\[\begin{align*} \lim b_n &= \lim \left[a_n\left(\frac{n+1}{n}\right) \right]\\ &= (\lim a_n)\cdot\left(\lim \frac{n+1}{n}\right) \\ &=\lim a_n \end{align*}\]

As mentioned previously, we will later see that this limit is the number called \(e\). But believing for a moment that we should be interested in this particular limit, having the two sequences \(a_n\) and \(b_n\) lying around actually proves quite practically useful for estimating its value.

Since \(\lim a_n = e = \lim b_n\) and \(a_n<b_n\) for all \(n\), the number \(e\) is contained in every interval \(I_n=[a_n,b_n]\), and hence is the unique point in the intersection of these nested intervals:

Corollary 8.3 \[\{e\}=\bigcap_{n\geq 1}\left[\left(1+\frac{1}{n}\right)^n,\left(1+\frac{1}{n}\right)^{n+1}\right]\]

Taking any finite \(n\), this interval gives us both an upper and lower bound for \(e\): for example

\[n=10 \implies 2.59374\leq e\leq 2.85311\] \[n=100 \implies 2.7048\leq e\leq 2.73186\] \[n=1,000\implies 2.71692\leq e\leq 2.71964\] \[n=1,000,000\implies 2.71826\leq e\leq 2.71829\]

Thus we know the decimal expansion of \(e\) begins \(2.7182\ldots\)
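The table of bounds above is easy to regenerate. The sketch below (illustrative) computes the endpoints of the nested intervals of Corollary 8.3 for a few values of \(n\), showing them tighten around \(e\):

```python
import math

# Endpoints of the nested intervals [(1+1/n)^n, (1+1/n)^(n+1)],
# each of which contains e (Corollary 8.3).
for n in [10, 100, 1000, 1_000_000]:
    lower = (1 + 1 / n) ** n
    upper = (1 + 1 / n) ** (n + 1)
    print(n, lower, upper)
```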

8.5 Problems