7  Limit Laws

Highlights of this Chapter: We develop techniques for bounding limits by inequalities, and computing limits using the field axioms. We use these techniques to prove two interesting results:

  • The Babylonian sequence approximating $\sqrt{2}$ truly does converge to this value.
  • Given any real number, there exists a sequence of rational numbers converging to it.

Now that we have a handle on the definition of convergence and divergence, our goal is to develop techniques to avoid using the definition directly, wherever possible (finding values of N for an arbitrary ε is difficult, and not very enlightening!)

The natural first set of questions to investigate then are how our new definition interacts with the ordered field axioms: can we learn anything about limits and inequalities, or limits and field operations? We tackle both of these in turn below.

7.1 Limits and Inequalities

Proposition 7.1 (Limits of nonnegative sequences) Let $a_n$ be a convergent sequence of nonnegative numbers. Then $\lim a_n$ is nonnegative.

Proof. Assume for the sake of contradiction that $a_n \to L$ but $L < 0$. Since $L$ is negative, we can find a small enough epsilon (say, $\varepsilon = |L|/2$) such that the entire interval $(L - \varepsilon, L + \varepsilon)$ consists of negative numbers.

The definition of convergence says that for this $\varepsilon$, there must be an $N$ where for all $n > N$ we know $a_n$ lies in this interval. Thus we've concluded that for large enough $n$, the term $a_n$ must be negative! This is a contradiction, as $a_n$ is a nonnegative sequence.

Exercise 7.1 If $a_n$ is a convergent sequence with $a_n \geq L$ for all $n$, then $\lim a_n \geq L$. Similarly, prove that if $a_n$ is a convergent sequence with $a_n \leq U$ for all $n$, then $\lim a_n \leq U$.

This exercise provides the following useful corollary, telling you that if you can bound a sequence, you can bound its limit.

Corollary 7.1 (Inequalities and Convergence) If $a_n$ is a convergent sequence with $L \leq a_n \leq U$ for all $n$, then $L \leq \lim a_n \leq U$.

In fact, a kind of converse of this is true as well: if a sequence converges, then we know the limit ‘is bounded’ (as it exists, as a real number, and those can’t be infinite). But this is enough to conclude that the entire sequence is bounded!

Proposition 7.2 (Convergent Sequences are Bounded) Let $s_n$ be a convergent sequence. Then there exists a $B$ such that $|s_n| < B$ for all $n \in \mathbb{N}$.

Proof. Let $s_n \to L$ be a convergent sequence. Then we know for any $\varepsilon > 0$ eventually the sequence stays within $\varepsilon$ of $L$. So for example, choosing $\varepsilon = 1$, this means there is some $N$ where for $n > N$ we are assured $|s_n - L| < 1$, or equivalently $-1 < s_n - L < 1$. Adding $L$,

$$L - 1 < s_n < L + 1$$

Thus, we have both upper and lower bounds for the sequence after $N$, and all we are left to worry about is the finitely many terms before this. For an upper bound on these we can just take the max of $s_1, \ldots, s_N$, and for a lower bound we can take the min.

Thus, to get an overall upper bound, we can take $$M = \max\{s_1, s_2, \ldots, s_N, L + 1\}$$

and for an overall lower bound we can take

$$m = \min\{s_1, s_2, \ldots, s_N, L - 1\}$$

Then for all $n$ we have $m \leq s_n \leq M$, so the sequence $s_n$ is bounded: taking $B = \max\{|m|, |M|\} + 1$ gives $|s_n| < B$ for all $n$.
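To see the recipe in action numerically, here is a minimal sketch (an illustration, not part of the proof) using the hypothetical sequence $s_n = 2 + (-1)^n/n$, which converges to $L = 2$; with $\varepsilon = 1$ the cutoff $N = 1$ already works, and the max/min construction above produces explicit bounds.

```python
# Illustration of the boundedness construction, for the hypothetical
# sequence s_n = 2 + (-1)^n / n, which converges to L = 2.

def s(n: int) -> float:
    return 2 + (-1) ** n / n

L = 2.0
N = 1  # with eps = 1, |s_n - L| = 1/n < 1 for every n > N

# Finite sanity check that the tail really stays inside (L - 1, L + 1).
assert all(abs(s(n) - L) < 1 for n in range(N + 1, 10_000))

# Overall bounds: compare the finitely many early terms against L +/- 1.
M = max([s(n) for n in range(1, N + 1)] + [L + 1])  # overall upper bound
m = min([s(n) for n in range(1, N + 1)] + [L - 1])  # overall lower bound
B = max(abs(m), abs(M)) + 1                         # single bound: |s_n| < B

assert all(m <= s(n) <= M for n in range(1, 10_000))
print(f"m = {m}, M = {M}, B = {B}")
```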

Theorem 7.1 (The Squeeze Theorem) Let $a_n, b_n$ and $c_n$ be sequences with $a_n \leq b_n \leq c_n$ for all $n$. Then if $a_n$ and $c_n$ are convergent, with $\lim a_n = \lim c_n = L$, then $b_n$ is also convergent, and $\lim b_n = L$.

Proof. Choose $\varepsilon > 0$. Since $a_n \to L$, we can choose $N_a$ such that $n > N_a$ implies $|a_n - L| < \varepsilon$, and similarly, since $c_n \to L$, there's an $N_c$ with $n > N_c$ implying $|c_n - L| < \varepsilon$. Set $N = \max\{N_a, N_c\}$ and note that for any $n > N$ this means $-\varepsilon < a_n - L < \varepsilon$ and $-\varepsilon < c_n - L < \varepsilon$. Since $a_n \leq c_n$ by assumption, we can string these inequalities together to get

$$-\varepsilon < a_n - L \leq c_n - L < \varepsilon$$

But we know more than this: in fact, $a_n \leq b_n \leq c_n$, and subtracting $L$ allows us to squeeze $b_n - L$ directly into the chain above:

$$-\varepsilon < a_n - L \leq b_n - L \leq c_n - L < \varepsilon$$

Ignoring the terms with $a_n$ and $c_n$, this says $-\varepsilon < b_n - L < \varepsilon$, or $|b_n - L| < \varepsilon$. Thus $b_n \to L$ as claimed.

7.1.1 Example Computations

The squeeze theorem is incredibly useful in practice as it allows us to prove the convergence of complicated looking sequences by replacing them with two (hopefully simpler) sequences, an upper and lower bound. To illustrate, let’s look back at ?exr-another-seq-converges, and re-prove its convergence.

Example 7.1 ($\frac{n}{n^2+1}$ converges to $0$.) Since we are trying to converge to zero, we want to bound this sequence above and below by sequences that converge to zero. Since $n$ is always positive, a natural lower bound is the constant sequence $0, 0, 0, \ldots$.

One first thought for an upper bound may be $\frac{n}{n+1}$: it's easy to prove that $\frac{n}{n^2+1} < \frac{n}{n+1}$ (as we've made the denominator smaller), and so we have bounded our sequence: $0 < a_n < \frac{n}{n+1}$. Unfortunately this does not help us, as $\lim \frac{n}{n+1} = 1$ (), so the two bounds do not squeeze $a_n$ to zero!

Another attempt at an upper bound may be $1/n$: we know this goes to zero (), and it is also an upper bound: $$\frac{n}{n^2+1} < \frac{n}{n^2} = \frac{1}{n}$$

Thus since $\lim 0 = 0$ and $\lim \frac{1}{n} = 0$, we can conclude via squeezing that $\lim \frac{n}{n^2+1} = 0$ as well.
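As a quick numerical sanity check of this squeeze (purely illustrative; the argument above is the proof), we can tabulate a few terms alongside the upper bound $1/n$:

```python
# Finite sanity check for Example 7.1: 0 < n/(n^2 + 1) < 1/n for all n >= 1,
# and the upper bound 1/n visibly shrinks to 0.
for n in [1, 10, 100, 1000, 10_000]:
    a_n = n / (n ** 2 + 1)
    upper = 1 / n
    assert 0 < a_n < upper           # the two squeeze inequalities from the text
    print(f"n = {n:>6}   0 < {a_n:.8f} < {upper:.8f}")
```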

This theorem is particularly useful for calculating limits involving functions whose values are difficult to compute. While we haven't formally introduced the sine function yet in this class, we know (and will later confirm) that $-1 \leq \sin(x) \leq 1$ for all $x \in \mathbb{R}$. We can use this to compute many otherwise difficult limits:

Example 7.2 ($s_n = \frac{\sin n}{n}$ converges to $0$.) Since $-1 \leq \sin(x) \leq 1$ we know $0 \leq |\sin x| \leq 1$ for all $x$, and thus $$0 \leq \left|\frac{\sin n}{n}\right| \leq \frac{1}{n}$$

Since both of these bounding sequences converge to zero, we know the original does as well, by the squeeze theorem.

This sort of estimation can be applied to even quite complicated looking limits:

Example 7.3 Compute the following limit: $$\lim \left(\frac{n^2 \sin(n^3 - 2n + 1)}{n^3 + n^2 + n + 1}\right)^n$$

Let's begin by estimating as much as we can: we know $|\sin(x)| \leq 1$, so we can see that

$$\left|\frac{n^2 \sin(n^3 - 2n + 1)}{n^3 + n^2 + n + 1}\right| < \frac{n^2}{n^3 + n^2 + 1}$$

Next, we see that by shrinking the denominator we can produce yet another overestimate:

$$\frac{n^2}{n^3 + n^2 + 1} < \frac{n^2}{n^3} = \frac{1}{n}$$

Bringing back the $n$th power,

$$\left|\frac{n^2 \sin(n^3 - 2n + 1)}{n^3 + n^2 + n + 1}\right|^n < \frac{1}{n^n}$$

And, unpacking the definition of absolute value:

$$-\frac{1}{n^n} < \left(\frac{n^2 \sin(n^3 - 2n + 1)}{n^3 + n^2 + n + 1}\right)^n < \frac{1}{n^n}$$

It now suffices to prove that $1/n^n$ converges to zero, as we've squeezed our sequence with it. But this is easiest to do with another squeeze: namely, since $n^n \geq 2^n$ for $n \geq 2$, we see $0 < 1/n^n \leq 1/2^n$, and we already proved that $1/2^n \to 0$, so we're done!

$$\lim \left(\frac{n^2 \sin(n^3 - 2n + 1)}{n^3 + n^2 + n + 1}\right)^n = 0$$
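A quick numerical check (illustrative only) agrees; note that the exact argument of the sine is irrelevant here, since the only fact used about it is the bound $|\sin(x)| \leq 1$:

```python
import math

# Finite sanity check for Example 7.3: the sequence is trapped between
# -1/n^n and 1/n^n, both of which collapse to 0 extremely fast.
def term(n: int) -> float:
    frac = n ** 2 * math.sin(n ** 3 - 2 * n + 1) / (n ** 3 + n ** 2 + n + 1)
    return frac ** n

for n in [2, 5, 10, 20]:
    value = term(n)
    bound = 1 / n ** n
    assert -bound < value < bound     # the squeeze inequalities from the text
    print(f"n = {n:>2}   |term| = {abs(value):.3e}   1/n^n = {bound:.3e}")
```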

Exercise 7.2 Use the squeeze theorem to prove that lim(n321n33n3+5)2n+7=0

A nice corollary of the squeeze theorem tells us when a sequence converges by estimating its difference from the proposed limit:

Exercise 7.3 Let $a_n$ be a sequence, and $L$ be a real number. If there exists a sequence $\alpha_n$ where $|a_n - L| \leq \alpha_n$ for all $n$, and $\alpha_n \to 0$, then $\lim a_n = L$.

This is useful because, unpacking the definition of absolute value (), a sequence $\alpha_n$ with $-\alpha_n \leq a_n - L \leq \alpha_n$ can be thought of as giving "error bounds" on the difference of $a_n$ from $L$. In this language, the proposition says that if we can bound the error between $a_n$ and $L$ by a sequence going to zero, then $a_n$ must actually go to $L$.

7.2 Limits and Field Operations

Just like inequalities, the field operations themselves play nicely with limits.

Theorem 7.2 (Constant Multiples) Let $s_n$ be a convergent sequence, and $k$ a real number. Then the sequence $ks_n$ is convergent, and $\lim ks_n = k \lim s_n$.

Proof. We distinguish two cases, depending on $k$. If $k = 0$, then $ks_n$ is just the constant sequence $0, 0, 0, \ldots$ and $k \lim s_n = 0$ as well, so the theorem is true.

If $k \neq 0$, we proceed as follows. Denote the limit of $s_n$ by $L$, and let $\varepsilon > 0$. Choose $N$ such that $n > N$ implies $|s_n - L| < \frac{\varepsilon}{|k|}$ (we can do so, as $s_n \to L$). Now, for this same value of $N$, choose arbitrary $n > N$ and consider the difference $|ks_n - kL|$:

$$|ks_n - kL| = |k(s_n - L)| = |k||s_n - L| < |k| \frac{\varepsilon}{|k|} = \varepsilon$$

Thus, $ks_n \to kL$ as claimed!
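The only real content of this proof is the rescaled tolerance $\varepsilon/|k|$. Here is a small numerical sketch of that bookkeeping, using a hypothetical sequence $s_n = 1/n^2 \to 0$ and a brute-force search for $N$ over a finite range (purely for illustration): any $N$ that works for $s_n$ at tolerance $\varepsilon/|k|$ automatically works for $ks_n$ at tolerance $\varepsilon$.

```python
# Illustration of the eps/|k| trick (hypothetical sequence, finite check only).
def s(n: int) -> float:
    return 1 / n ** 2   # s_n -> L = 0

L, k, eps = 0.0, 7.0, 0.01
HORIZON = 10_000        # we only ever test finitely many terms here

# Find an N with |s_n - L| < eps/|k| for every tested n > N.
N = next(N for N in range(1, HORIZON)
         if all(abs(s(n) - L) < eps / abs(k) for n in range(N + 1, HORIZON)))

# That same N controls the rescaled sequence k * s_n at tolerance eps.
assert all(abs(k * s(n) - k * L) < eps for n in range(N + 1, HORIZON))
print(f"N = {N} works at tolerance eps/|k| = {eps / abs(k):.5f}, hence for eps = {eps}")
```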

Doing a similar calculation for the sum of sequences requires an $\varepsilon/2$-type argument:

Theorem 7.3 (Limit of a Sum) Let $s_n, t_n$ be convergent sequences. Then the sequence of term-wise sums $s_n + t_n$ is convergent, with $\lim(s_n + t_n) = \lim s_n + \lim t_n$.

This is a great example of a classic proof technique known as an ε/2 argument that we will use many times.

Proof. Let $\varepsilon > 0$ be arbitrary. Since we know that both $s_n$ and $t_n$ converge, we can provide notation for their limits: specifically, $\lim s_n = S$ and $\lim t_n = T$. Since $s_n \to S$, there exists some $N_s$ such that for any $n > N_s$, $|s_n - S| < \frac{\varepsilon}{2}$. Similarly, since $t_n \to T$, there exists some $N_t$ so that for any $n > N_t$, $|t_n - T| < \frac{\varepsilon}{2}$. Let's set $N$ equal to the maximum of the set $\{N_s, N_t\}$. This means that if $n > N$, $$|s_n - S| + |t_n - T| < \varepsilon$$ According to the triangle inequality, we also know that $$|(s_n - S) + (t_n - T)| \leq |s_n - S| + |t_n - T|$$ so by combining the previous two inequalities we know that $$|(s_n - S) + (t_n - T)| < \varepsilon \implies |(s_n + t_n) - (S + T)| < \varepsilon$$ This is exactly the definition of convergence, saying that $\lim(s_n + t_n) = S + T = \lim s_n + \lim t_n$.
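Here is a small numerical sketch of the $\varepsilon/2$ bookkeeping with two hypothetical sequences (a finite check, not a proof): each sequence gets its own cutoff at tolerance $\varepsilon/2$, and the larger of the two cutoffs controls the sum at tolerance $\varepsilon$.

```python
# Illustration of the eps/2 argument (hypothetical sequences, finite check only).
s = lambda n: 1 + 1 / n          # s_n -> S = 1
t = lambda n: 2 - 1 / n ** 2     # t_n -> T = 2
S, T, eps = 1.0, 2.0, 1e-3
HORIZON = 100_000                # we only ever test finitely many terms

def cutoff(seq, limit, tol):
    """Smallest N (within the horizon) with |seq(n) - limit| < tol for all tested n > N."""
    return next(N for N in range(1, HORIZON)
                if all(abs(seq(n) - limit) < tol for n in range(N + 1, HORIZON)))

N_s = cutoff(s, S, eps / 2)      # cutoff for s_n at tolerance eps/2
N_t = cutoff(t, T, eps / 2)      # cutoff for t_n at tolerance eps/2
N = max(N_s, N_t)

# By the triangle inequality, the sum is now within eps of S + T past N.
assert all(abs((s(n) + t(n)) - (S + T)) < eps for n in range(N + 1, HORIZON))
print(f"N_s = {N_s}, N_t = {N_t}; N = {N} works for the sum at eps = {eps}")
```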

Corollary 7.2 (Limit of a Difference) Let $s_n, t_n$ be convergent sequences. Then $s_n - t_n$ is convergent and $\lim(s_n - t_n) = \lim s_n - \lim t_n$.

Proof. Rewrite $s_n - t_n$ as $s_n + (-t_n)$. Note that since $t_n$ is convergent we know the multiple $-t_n$ is convergent, with $\lim t_n = t$ implying $\lim(-t_n) = -t$ by (). Now using the limit of sums (), we see that since $s_n$ and $-t_n$ are convergent so is $s_n + (-t_n)$, and $$\lim(s_n + (-t_n)) = \lim s_n + \lim(-t_n) = \lim s_n + (-1)\lim t_n = \lim s_n - \lim t_n$$

The case of products is a little more annoying to prove, but the end result is the same - the limit of a product is the product of the limits.

Theorem 7.4 (Limit of a Product) Let $s_n, t_n$ be convergent sequences. Then the sequence of term-wise products $s_nt_n$ is convergent, with $\lim(s_nt_n) = (\lim s_n)(\lim t_n)$.

Proof (Sketch). Let $s_n \to S$ and $t_n \to T$ be two convergent sequences and choose $\varepsilon > 0$. We wish to find an $N$ beyond which we know $s_nt_n$ lies within $\varepsilon$ of $ST$.

To start, we consider the difference $|s_nt_n - ST|$ and we add zero in a clever way:

$$|s_nt_n - ST| = |s_nt_n - s_nT + s_nT - ST| = |(s_nt_n - s_nT) + (s_nT - ST)|$$

Applying the triangle inequality, we can break this apart:

$$|s_nt_n - ST| \leq |s_nt_n - s_nT| + |s_nT - ST| = |s_n||t_n - T| + |s_n - S||T|$$

The second term here is easy to bound: if $T = 0$ then it's just literally zero, and if $T \neq 0$ then we can make it as small as we want: we know $s_n \to S$, so we can make $|s_n - S|$ smaller than anything we need (like $\varepsilon/|T|$, or even $\varepsilon/(2|T|)$ if necessary).

For the first term, we see it includes a factor of the form $|t_n - T|$, which we know we can make as small as we need by choosing sufficiently large $N$. But it's being multiplied by $|s_n|$, and we need to make sure the whole thing can be made small, so we should worry: what if $|s_n|$ is getting really big? This isn't actually a worry - we know $s_n$ is convergent, so it's bounded, so there is some $B$ where $|s_n| < B$ for all $n$. Now we can make $|t_n - T|$ as small as we like (say, smaller than $\varepsilon/B$ or $\varepsilon/(2B)$, or whatever we need).

Since each of these terms can be made as small as we need individually, choosing a large enough $N$ we can make them both simultaneously small, so the whole difference $|s_nt_n - ST|$ is small (less than $\varepsilon$), which proves convergence.
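The two estimates in this sketch are easy to check numerically. With hypothetical sequences $s_n = 3 + 1/n$ and $t_n = 2 + (-1)^n/n$, and the bound $B = 4 \geq |s_n|$, the inequality $|s_nt_n - ST| \leq B|t_n - T| + |T||s_n - S|$ coming from the display above holds term by term (a finite sanity check, not a proof):

```python
# Finite check of the key product estimate (hypothetical sequences).
s = lambda n: 3 + 1 / n           # s_n -> S = 3
t = lambda n: 2 + (-1) ** n / n   # t_n -> T = 2
S, T = 3.0, 2.0
B = 4.0                           # a bound with |s_n| <= B for all n >= 1

for n in range(1, 10_000):
    lhs = abs(s(n) * t(n) - S * T)
    rhs = B * abs(t(n) - T) + abs(T) * abs(s(n) - S)
    assert lhs <= rhs             # triangle inequality + boundedness, as in the text
print("product estimate verified for n = 1 .. 9999")
```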

Exercise 7.4 Write the sketch of an argument above in the right order, as a formal proof.

Corollary 7.3 If $p$ is a positive integer, then $$\lim \frac{1}{n^p} = 0$$ Hint: Induction on the power $p$.

The next natural case to consider after sums and differences and products is quotients. We begin by considering the limit of a reciprocal:

Proposition 7.3 (Limit of a Reciprocal) Let $s_n$ be a convergent nonzero sequence with a nonzero limit. Then the sequence $1/s_n$ of reciprocals is convergent, with $$\lim \frac{1}{s_n} = \frac{1}{\lim s_n}$$

Proof (Sketch). Write $s = \lim s_n$. For any $\varepsilon > 0$, we want to show that when $n$ is very large, we can make $$\left|\frac{1}{s_n} - \frac{1}{s}\right| < \varepsilon$$

We can get a common denominator and rewrite this as $$\left|\frac{1}{s_n} - \frac{1}{s}\right| = \frac{|s - s_n|}{|s s_n|}$$

Since $s_n$ is not converging to zero, we should be able to bound it away from zero: that is, find some $m > 0$ such that $|s_n| > m$ for all $n \in \mathbb{N}$ (we'll have to prove we can actually do this). Given such an $m$, we see the denominator satisfies $|s s_n| > m|s|$, and so $$\left|\frac{1}{s_n} - \frac{1}{s}\right| < \frac{|s_n - s|}{m|s|}$$ We want this less than $\varepsilon$, so all we need to do is choose $N$ big enough that $|s_n - s|$ is less than $\varepsilon m |s|$ and we're good.

Exercise 7.5 Turn the sketch argument for $\lim \frac{1}{s_n} = \frac{1}{\lim s_n}$ above into a formal proof.

From here, it's quick work to understand the limit of a general quotient.

Theorem 7.5 (Limit of a Quotient) Let $s_n, t_n$ be convergent sequences, with $t_n \neq 0$ and $\lim t_n \neq 0$. Then the sequence $s_n/t_n$ of quotients is convergent, with $$\lim \frac{s_n}{t_n} = \frac{\lim s_n}{\lim t_n}$$

Proof. Since $t_n$ converges to a nonzero limit, by the limit of a reciprocal () we know that $1/t_n$ converges, with limit $1/\lim t_n$. Now, we can use the limit of a product () for the product $s_n \cdot \frac{1}{t_n}$:

$$\lim \frac{s_n}{t_n} = \lim \left(s_n \cdot \frac{1}{t_n}\right) = (\lim s_n)\left(\lim \frac{1}{t_n}\right) = \lim s_n \cdot \frac{1}{\lim t_n} = \frac{\lim s_n}{\lim t_n}$$

Finally, we look at square roots. We have already proven () that nonnegative numbers have square roots, and so given a nonnegative sequence $s_n$ we can consider the sequence $\sqrt{s_n}$ of its roots. Below we see that the limit concept respects roots just as it does the other field operations:

Theorem 7.6 (Root of Convergent Sequence) Let $s_n > 0$ be a convergent sequence, and $\sqrt{s_n}$ its sequence of square roots. Then $\sqrt{s_n}$ is convergent, with $$\lim \sqrt{s_n} = \sqrt{\lim s_n}$$

Proof (Sketch). Assume $s_n \to s$, and fix $\varepsilon > 0$. We seek an $N$ where $n > N$ implies $|\sqrt{s_n} - \sqrt{s}| < \varepsilon$. This looks hard, because the fact we know is about $s_n - s$ and the fact we need is about $\sqrt{s_n} - \sqrt{s}$.
But what if we multiply and divide by $\sqrt{s_n} + \sqrt{s}$, so we can simplify using the difference of squares?

$$|\sqrt{s_n} - \sqrt{s}| \cdot \frac{\sqrt{s_n} + \sqrt{s}}{\sqrt{s_n} + \sqrt{s}} = \frac{|s_n - s|}{\sqrt{s_n} + \sqrt{s}}$$

This has the quantity $|s_n - s|$ that we know about in it! We know we can make this as small as we like by the assumption $s_n \to s$, so as long as the denominator does not go to zero, we can make the whole expression small!

Proof (Formal). Let $s_n$ be a positive sequence with $s_n \to s$ and assume $s \neq 0$ (we leave that case for the exercise below). Let $\varepsilon > 0$, and choose $N$ such that if $n > N$ we have $|s_n - s| < \varepsilon\sqrt{s}$.

Now for any $n$, rationalizing the numerator we see $$|\sqrt{s_n} - \sqrt{s}| = \frac{|s_n - s|}{\sqrt{s_n} + \sqrt{s}} < \frac{|s_n - s|}{\sqrt{s}}$$

where the last inequality comes from the fact that $\sqrt{s_n} > 0$ by definition, so $\sqrt{s} + \sqrt{s_n} > \sqrt{s}$. When $n > N$ we can use the hypothesis that $|s_n - s| < \varepsilon\sqrt{s}$ to see $$|\sqrt{s_n} - \sqrt{s}| < \frac{|s_n - s|}{\sqrt{s}} < \frac{\varepsilon\sqrt{s}}{\sqrt{s}} = \varepsilon$$

And so, $\sqrt{s_n}$ is convergent, with limit $\sqrt{s}$.
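The entire formal proof hangs on the single estimate $|\sqrt{s_n} - \sqrt{s}| \leq |s_n - s|/\sqrt{s}$; a quick numerical check of it, with a hypothetical positive sequence, looks like this (finite sanity check only):

```python
import math

# Check |sqrt(s_n) - sqrt(s)| <= |s_n - s| / sqrt(s) for the hypothetical
# positive sequence s_n = 2 + (-1)^n / n, which converges to s = 2.
s_limit = 2.0
for n in range(1, 10_000):
    s_n = 2 + (-1) ** n / n
    lhs = abs(math.sqrt(s_n) - math.sqrt(s_limit))
    rhs = abs(s_n - s_limit) / math.sqrt(s_limit)
    assert lhs <= rhs   # the rationalizing estimate from the proof
print("square-root estimate verified for n = 1 .. 9999")
```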

Exercise 7.6 Prove that if $s_n \to 0$ is a sequence of nonnegative numbers, then the sequence of roots also converges to zero: $\sqrt{s_n} \to 0$.

Hint: you don’t need to rationalize the numerator or do fancy algebra like above

Together, this suite of results provides an effective means of calculating limits from simpler pieces. They are often referred to together as the limit theorems:

Theorem 7.7 (The Limit Theorems) Let $a_n$ and $b_n$ be any two convergent sequences, and $k \in \mathbb{R}$ a constant. Then $$\lim ka_n = k \lim a_n$$ $$\lim(a_n \pm b_n) = (\lim a_n) \pm (\lim b_n)$$ $$\lim a_nb_n = (\lim a_n)(\lim b_n)$$

If $b_n \neq 0$ and $\lim b_n \neq 0$, $$\lim \frac{a_n}{b_n} = \frac{\lim a_n}{\lim b_n}$$ And, if $a_n \geq 0$, then $\sqrt{a_n}$ is convergent, with $$\lim \sqrt{a_n} = \sqrt{\lim a_n}$$
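As a quick illustration of how these rules combine (on a made-up example, not one from the text): the limit theorems give $\lim \frac{3 + 1/n}{2 - 1/n^2} = \frac{3 + 0}{2 - 0} = \frac{3}{2}$ with no $\varepsilon$ in sight, and a short numerical check agrees:

```python
# Numerical check that (3 + 1/n) / (2 - 1/n^2) approaches 3/2,
# the value predicted by the limit theorems (hypothetical example).
for n in [1, 10, 100, 1000, 100_000]:
    value = (3 + 1 / n) / (2 - 1 / n ** 2)
    print(f"n = {n:>7}   s_n = {value:.8f}   |s_n - 1.5| = {abs(value - 1.5):.2e}")
```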

7.2.1 Example Computations

Example 7.4 Compute the limit of the following sequence $s_n$:

sn=3n3+n62n2+5n3n2+1

Example 7.5 Compute the limit of the sequence sn sn=12n+n21n2n+1

7.3 Applications

7.3.1 Babylon and $\sqrt{2}$

We know that $\sqrt{2}$ exists as a real number (), and we know that the Babylonian procedure produces excellent rational approximations to this value (), in the precise sense that the numerator squares to just one more than twice the square of the denominator.

Now we finally have enough tools to combine these facts, and prove that the Babylonian procedure really does converge to $\sqrt{2}$.

Theorem 7.8 Let $s_n = \frac{p_n}{q_n}$ be a sequence of rational numbers where both $p_n, q_n \to \infty$ and for each $n$, $p_n^2 = 2q_n^2 + 1$. Then $s_n \to \sqrt{2}$.

Proof. We compute the limit of the sequence $s_n^2$. Using that $p_n^2 = 2q_n^2 + 1$, we can replace the numerator and do algebra to see $$s_n^2 = \frac{p_n^2}{q_n^2} = \frac{2q_n^2 + 1}{q_n^2} = 2 + \frac{1}{q_n^2}.$$

Now, as by assumption $q_n \to \infty$, we have that $q_n^2 = q_n \cdot q_n$ also diverges to infinity (), and so its reciprocal converges to $0$ (?prp-diverge-to-infty-equliv-converge-to-zero). Thus, using the limit theorem for sums, $$\lim \frac{p_n^2}{q_n^2} = \lim\left(2 + \frac{1}{q_n^2}\right) = 2 + \lim \frac{1}{q_n^2} = 2$$

That is, the sequence of squares converges to $2$. Now we apply the square root theorem () to the sequence $s_n^2$, and conclude that

  • $s_n = \sqrt{s_n^2}$ converges.
  • $\lim s_n = \lim \sqrt{s_n^2} = \sqrt{\lim s_n^2} = \sqrt{2}$

This provides a rigorous justification of the Babylonians' assumption that if you are patient, and compute more and more terms of this sequence, you will always get better and better approximations of the square root of $2$.
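To watch this happen in exact arithmetic, here is a short computational sketch. It assumes the rectangle description of the Babylonian procedure used earlier in the text: keep the area fixed at $2$, and repeatedly replace one side by the average of the two sides, starting from the $1 \times 2$ rectangle. The fractions $p_n/q_n$ that appear satisfy $p_n^2 = 2q_n^2 + 1$ and close in on $\sqrt{2} = 1.41421356\ldots$ rapidly.

```python
from fractions import Fraction

# A sketch of the Babylonian procedure in exact rational arithmetic,
# assuming the rectangle description: keep the area equal to 2 and
# replace one side by the average of the two sides.
side_a, side_b = Fraction(1), Fraction(2)   # the starting 1 x 2 rectangle

for step in range(1, 6):
    side_a = (side_a + side_b) / 2          # new side: the average of the old sides
    side_b = 2 / side_a                     # adjust the other side to keep area 2
    p, q = side_a.numerator, side_a.denominator
    assert p * p == 2 * q * q + 1           # the relation p_n^2 = 2 q_n^2 + 1
    print(f"s_{step} = {p}/{q} = {float(side_a):.12f}")
```

Five steps already agree with $\sqrt{2}$ to more than ten decimal places; the theorem above is what guarantees this pattern continues forever.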

Exercise 7.7 Build a sequence that converges to $\sqrt{n}$ by following the Babylonian procedure, starting with a rectangle of area $n$.

7.3.2 Rational and Irrational Sequences

Combining the squeeze theorem and limit theorems with the density of the (ir)rationals allows us to prove the existence of certain sequences that will prove quite useful:

Theorem 7.9 For every $x \in \mathbb{R}$ there exists a sequence $r_n$ of rational numbers with $r_n \to x$.

Proof. Let $x \in \mathbb{R}$ be arbitrary, and consider the sequence $x + \frac{1}{n}$. Because the constant sequence $x, x, x, \ldots$ and the sequence $1/n$ are convergent, by the limit theorem for sums we know $x + \frac{1}{n}$ is convergent and $$\lim\left(x + \frac{1}{n}\right) = x + \lim \frac{1}{n} = x$$

Now for each $n \in \mathbb{N}$, by the density of the rationals we can find a rational number $r_n$ with $x < r_n < x + \frac{1}{n}$. This defines a sequence of rational numbers squeezed between $x$ and $x + \frac{1}{n}$: thus, by the squeeze theorem we have

$$x < r_n < x + \frac{1}{n} \implies \lim r_n = x$$
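The proof is completely constructive, and easy to mimic numerically. One concrete (hypothetical) choice of rational in the interval is $r_n = \frac{\lfloor nx \rfloor + 1}{n}$, which satisfies $x < r_n \leq x + \frac{1}{n}$, with the upper inequality strict whenever $nx$ is not an integer; a sketch, approximating $x = \sqrt{2}$:

```python
import math
from fractions import Fraction

# A concrete choice of rationals squeezed against x: r_n = (floor(n*x) + 1) / n.
# Here x = sqrt(2) (as a float) serves as the illustrative target.
x = math.sqrt(2)

for n in [1, 10, 100, 1000, 10_000]:
    r_n = Fraction(math.floor(n * x) + 1, n)
    assert x < r_n < x + 1 / n        # the squeeze from the proof
    print(f"n = {n:>6}   r_n = {r_n}   r_n - x = {float(r_n) - x:.2e}")
```

Any other rule for picking a rational in the interval would work just as well; the squeeze theorem does all the work.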

Through a similar argument, using the density of the irrationals () we find the existence of a sequence of irrational numbers converging to any real number.

Exercise 7.8 For every $x \in \mathbb{R}$ there exists a sequence $y_n$ of irrationals with $y_n \to x$.

7.4 Problems

7.4.1 Infinity

Given the formal definition of divergence to infinity as meaning that the sequence eventually gets larger than any fixed number, we can formulate analogs of the limit theorems for such divergent sequences. We will not need any of these in the main text, but it is good practice to attempt their proofs:

Exercise 7.9 If $s_n \to \infty$ and $k > 0$, then $ks_n \to \infty$.

Exercise 7.10 If $t_n$ diverges to infinity, and $s_n$ either converges, or also diverges to infinity, then $s_n + t_n \to \infty$.

Exercise 7.11 If $t_n$ diverges to infinity, and $s_n$ either converges to a positive limit, or also diverges to infinity, then $s_nt_n \to \infty$.

Note that there is not an analog of the division theorem: if $s_n \to \infty$ and $t_n \to \infty$, with only this knowledge we can learn nothing about the quotient $s_n/t_n$.

Exercise 7.12 Give examples of sequences $s_n, t_n$, both diverging to infinity, where $$\lim \frac{s_n}{t_n} = 0 \qquad \lim \frac{s_n}{t_n} = 2 \qquad \lim \frac{s_n}{t_n} = \infty$$

These limit laws are the precise statement behind the "rules" often seen in a calculus course, where students may write $2 + \infty = \infty$, $\infty + \infty = \infty$, or $\infty \cdot \infty = \infty$, but they may not write $\infty/\infty$. (If you are looking at this last case and thinking l'Hospital, we'll get there in ?thm-Lhospital!)