$$ \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\FF}{\mathbb{F}} \renewcommand{\epsilon}{\varepsilon} % ALTERNATE VERSIONS % \newcommand{\uppersum}[1]{{\textstyle\sum^+_{#1}}} % \newcommand{\lowersum}[1]{{\textstyle\sum^-_{#1}}} % \newcommand{\upperint}[1]{{\textstyle\smallint^+_{#1}}} % \newcommand{\lowerint}[1]{{\textstyle\smallint^-_{#1}}} % \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\uppersum}[1]{U_{#1}} \newcommand{\lowersum}[1]{L_{#1}} \newcommand{\upperint}[1]{U_{#1}} \newcommand{\lowerint}[1]{L_{#1}} \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} % extra auxiliary and additional topic/proof \newcommand{\extopic}{\bigstar} \newcommand{\auxtopic}{\blacklozenge} \newcommand{\additional}{\oplus} \newcommand{\partitions}[1]{\mathcal{P}_{#1}} \newcommand{\sampleset}[1]{\mathcal{S}_{#1}} \newcommand{\erf}{\operatorname{erf}} $$

30  Elementary Functions

We derive the integrals of familiar functions, and along the way finally discover a formula for the logarithm.

Using the fundamental theorem of calculus we can effortlessly find the integrals of many important functions, simply because we know their derivatives! This is much quicker than working directly from the definition (as one can appreciate by looking at the optional chapter examples in the last part).

30.1 Polynomials and Power Series

We begin with perhaps the first example one sees in a Calculus I course.

Proposition The function \(x\) is integrable on any interval \([a,b]\subset\RR\), and \[\int_{[a,b]}x = \frac{b^2}{2}-\frac{a^2}{2}\]

Proof. The function \(x\) is continuous, thus it is integrable. We know by the power rule for differentiation that \((x^2)^\prime = 2x\). Thus by linearity of the derivative, if \(F(x)=x^2/2\) then \(F^\prime(x)=x\), and we can use this antiderivative to evaluate the integral via the fundamental theorem: \[\int_{[a,b]}x=F(b)-F(a)=\frac{b^2}{2}-\frac{a^2}{2}\]

This technique of inverting the power rule works for all \(n\neq -1\), giving a general formula.

Theorem 30.1 For any \(n>0\) the function \(x^n\) is integrable on any interval \([a,b]\subset\RR\). For negative \(n\neq -1\), \(x^n\) is not defined at \(0\) but is integrable on any interval \([a,b]\) not containing zero. In both cases, there is a uniform formula for the integral: \[\int_{[a,b]}x^n = \frac{1}{n+1}x^{n+1}\Bigg|_{[a,b]}\]

Proof. The monomial \(x^n\) is continuous, hence integrable, and has as an antiderivative \(\frac{x^{n+1}}{n+1}\). Thus we can use this to evaluate the integral \[\int_{[a,b]}x^n = \frac{x^{n+1}}{n+1}\Bigg|_{[a,b]}\]

Using linearity of the integral, this gives an immediate calculation of the antiderivative of any polynomial:

Theorem 30.2 If \(p(x)=\sum_{k=0}^N a_k x^k\) is any polynomial, then \(p\) is integrable on any interval \([a,b]\) and \[\int_{[a,b]} p(x) = \sum_{k=0}^{N}\frac{a_k}{k+1}x^{k+1}\Bigg|_{[a,b]}\]

Proof. Polynomials are continuous, hence integrable. We know an antiderivative of each term, so we compute one term at a time via linearity of the integral on continuous functions: \[\int_{[a,b]} \left(a_0+a_1x+a_2x^2+\cdots + a_N x^N\right) = \int_{[a,b]}a_0 + \int_{[a,b]}a_1x +\int_{[a,b]}a_2x^2+\cdots +\int_{[a,b]}a_N x^N\] \[=a_0\int_{[a,b]}1 + a_1\int_{[a,b]}x+a_2\int_{[a,b]}x^2+\cdots+a_N\int_{[a,b]}x^N\] \[=a_0 x\Bigg|_{[a,b]}+a_1\frac{x^2}{2}\Bigg|_{[a,b]}+a_2\frac{x^3}{3}\Bigg|_{[a,b]}+\cdots + a_N \frac{x^{N+1}}{N+1}\Bigg|_{[a,b]}\]
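As a quick numerical sanity check (a sketch outside the formal development), here is a short Python snippet comparing a midpoint Riemann sum for a sample polynomial against the antiderivative formula above; the polynomial, the interval, and the `midpoint_sum` helper are all illustrative choices, not part of the text.

```python
# Compare a midpoint Riemann sum against the antiderivative formula of
# Theorem 30.2 for a sample polynomial (illustrative check only).

def midpoint_sum(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with n midpoint rectangles."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

coeffs = [3.0, 2.0, -5.0]                                                  # p(x) = 3 + 2x - 5x^2
p = lambda x: sum(c * x**k for k, c in enumerate(coeffs))
P = lambda x: sum(c * x**(k + 1) / (k + 1) for k, c in enumerate(coeffs))  # antiderivative

a, b = -1.0, 2.0
print(midpoint_sum(p, a, b))   # ≈ -3.0
print(P(b) - P(a))             # ≈ -3.0
```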

30.1.1 Power Series

As we remember well from differentiation, things that are true for finite sums don’t always carry over easily to the limit. Indeed, the proof of the differentiability of polynomials was identical to their integration above, a direct corollary of linearity. But we cannot use linearity to conclude things about limits, so for power series we instead needed to refine our tools of dominated convergence. A similar strategy goes through without fail here for integration: one can use dominated convergence for integrals (Theorem 28.6) to prove that power series can be integrated term by term within their radius of convergence.

But the fundamental theorem also makes a much easier technique available to us, given that we know the differentiation case! We follow that line of reasoning here.

Proposition 30.1 Let \(\sum a_n x^n\) be a power series with radius of convergence \(r\). Then the series \(\sum \frac{a_n}{n+1}x^{n+1}\) also has radius of convergence \(r\).

Proof. As in the differentiation case, we prove this here under the assumption that the ratio test succeeds in computing the radius of convergence of the original series. (As an exercise, show everything still works even if the ratio test fails, by running the fully general argument with the limsup form of the root test.) So for any \(x\in(-r,r)\) \[\lim \left|\frac{a_{n+1}}{a_n}\right||x|<1\]

We now compute the ratio test for our new series \(\sum \frac{a_{n}}{n+1}x^{n+1}\): the ratio in question is

\[\frac{\frac{a_{n+1}}{n+2}x^{n+2}}{\frac{a_n}{n+1}x^{n+1}}=\left(\frac{a_{n+1}}{a_n}\right)\left(\frac{n+1}{n+2}\right)x\]

Since \((n+1)/(n+2)\to 1\) we can compute the overall limit using the limit theorems and see we end up with the exact same limit as for the original series! Thus integrating term by term does not change the radius of convergence at all.

Having confirmed that \(\sum \frac{a_n}{n+1}x^{n+1}\) converges wherever the original series does, we can give a direct proof of the term-by-term integrability of power series, avoiding the use of dominated convergence:

Theorem 30.3 (Integrating Power Series) Let \(f(x)=\sum a_n x^n\) be a power series with radius of convergence \(r\). Then \(f\) is integrable on \((-r,r)\) and for any \(0< x < r\) \[\int_{[0,x]}f = \sum_{n\geq 0} \frac{a_n}{n+1}x^{n+1}\]

Proof. The function \(\sum a_n x^n\) is continuous on \((-r,r)\) and thus integrable by Theorem 28.1. Define \(F(x)= \sum \frac{a_n}{n+1}x^{n+1}\). This converges on \((-r,r)\) by Proposition 30.1, and defines a differentiable function on this interval whose derivative can be calculated term by term, giving \[F^\prime(x)=\left(\sum_{n\geq 0}\frac{a_n}{n+1}x^{n+1}\right)^\prime = \sum_{n\geq 0} a_n x^n = f(x)\] Thus \(F\) is an antiderivative of \(f\), and by the fundamental theorem of calculus \[\int_{[0,x]}f = F(x)-F(0)\] Since \(F\) has no constant term, \(F(0)=0\), and so \(\int_{[0,x]}f = \sum_{n\geq 0} \frac{a_n}{n+1}x^{n+1}\) as claimed.
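To see the theorem in action numerically, here is a small sketch (again illustrative only) integrating the exponential series \(\sum x^n/n!\) term by term and comparing with a direct Riemann-sum approximation; the `midpoint_sum` helper and the sample point are assumptions of the example.

```python
# Term-by-term integration of the exponential series on [0, x], compared to a
# direct numerical approximation of the integral of exp (illustrative only).
import math

def midpoint_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

x = 0.7
# a_n = 1/n!, so the integrated series is sum of x^(n+1) / ((n+1) * n!)
term_by_term = sum(x**(n + 1) / ((n + 1) * math.factorial(n)) for n in range(30))

print(term_by_term)                    # ≈ exp(0.7) - 1 ≈ 1.01375
print(midpoint_sum(math.exp, 0.0, x))  # same value, up to Riemann-sum error
```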

30.2 Exponentials and Trigonometric Functions

We put a good amount of work into defining the exponential and trigonometric functions from their functional equations, proving they are continuous, and eventually finding their differentiation laws. Now we reap some of the benefits, and find their integrals as effortlessly as we did those of the polynomials.

Theorem 30.4 (The Natural Exponential) The natural exponential \(\exp(x)\) is integrable and \[\int_{[a,b]}\exp = \exp(b)-\exp(a)\]

Proof. The function \(\exp\) is continuous, hence integrable, and is its own derivative. Thus it is also its own antiderivative, and \[\int_{[a,b]}\exp = \exp\Bigg|_{[a,b]}=\exp(b)-\exp(a)\]

Theorem 30.5 Let \(E(x)\) be any exponential function. Then \(E\) is integrable, and \[\int_{[a,b]}E = \frac{E(b)-E(a)}{E^\prime(0)}\]

Proof. We proved that every exponential is continuous and differentiable, with \(E^\prime(x)\) a multiple of \(E(x)\): specifically, \(E^\prime = E^\prime(0)E\). Dividing by the constant value \(E^\prime(0)\) and using linearity of the derivative, we see \[\left(\frac{E(x)}{E^\prime(0)}\right)^\prime = \frac{1}{E^\prime(0)}\left(E(x)\right)^\prime = \frac{1}{E^\prime(0)}E^\prime(0)E(x)=E(x)\]

Thus we’ve found an antiderivative! The fundamental theorem of calculus then quickly finishes the job: \[\int_{[a,b]}E = \frac{E(x)}{E^\prime(0)}\Bigg|_{[a,b]}=\frac{E(b)-E(a)}{E^\prime(0)}\]
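As a concrete instance of this formula, here is a brief numerical sketch (not part of the formal development) checking it for the exponential \(E(x)=2^x\), using the standard fact \(E^\prime(0)=\ln 2\) as an assumption of the example.

```python
# Check the formula of Theorem 30.5 for E(x) = 2^x, where E'(0) = ln(2)
# (assumed here for the sake of the example).
import math

def midpoint_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

E = lambda x: 2.0 ** x
a, b = 1.0, 3.0
print(midpoint_sum(E, a, b))        # ≈ 8.656
print((E(b) - E(a)) / math.log(2))  # (8 - 2) / ln 2 ≈ 8.656
```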

We succeed equally quickly for the basic trigonometric functions:

Theorem 30.6 (Integrating Sine and Cosine) \[\int_{[a,b]}\cos = \sin(b)-\sin(a)\hspace{1cm}\int_{[a,b]}\sin = \cos(a)-\cos(b)\]

Proof. We know \(\sin^\prime=\cos\) and \(\cos^\prime =-\sin\) from previous homework. Thus \(\cos\) has \(\sin\) as an antiderivative, and \[\int_{[a,b]}\cos = \sin\Bigg|_{[a,b]}=\sin(b)-\sin(a)\] Similarly, \(\sin\) has \(-\cos\) as an antiderivative, so \[\int_{[a,b]}\sin = -\cos\Bigg|_{[a,b]}=-\left(\cos(b)-\cos(a)\right)=\cos(a)-\cos(b)\]

These two formulae, together with the trigonometric identities, are enough to fully determine all trigonometric integrals.
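For instance, here is a quick numerical check of both formulae on a sample interval (a sketch outside the formal development; the interval and the `midpoint_sum` helper are chosen for illustration).

```python
# Numerical check of the sine and cosine integrals on a sample interval.
import math

def midpoint_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = 0.3, 2.0
print(midpoint_sum(math.cos, a, b), math.sin(b) - math.sin(a))  # integral of cos
print(midpoint_sum(math.sin, a, b), math.cos(a) - math.cos(b))  # integral of sin
```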

30.3 Logarithms

In this section we finally develop a formula for the logarithm. We proved some time ago that it exists: we had already shown that the inverses of exponentials are logarithms, and we proved that exponentials exist. But this did not give us any way to compute a logarithm. This is in stark contrast to the exponential, where right from the beginning we had some means of computing it (via the cumbersome task of taking limits of rational powers), and where we now have an extremely efficient power series. We will remedy all of this here, by studying the simple function \(1/x\).

We already know there is something rather unique about this function, because it is the only case where the power rule fails us, and we can’t simply use our knowledge of differentiation to invert. To start, we prove a rather simple lemma that is key to unlocking the logarithm properties:

Lemma 30.1 If \([a,b]\) is an interval of positive numbers and \(k>0\), then the integrals of \(1/x\) over the intervals \([a,b]\) and \([ka,kb]\) are equal.

Proof. Intuitively this is plausible: when we multiply by \(k\) the length of our interval increases by a factor of \(k\) but the height of our function \(1/x\) decreases by a factor of \(k\) at each point, leaving the area fixed.

To prove it, we (surprise!) invoke the fundamental theorem of calculus. Let \(F\) be an antiderivative of \(1/x\), so \(F^\prime(x)=1/x\). Then look at \(F(kx)\): taking its derivative, we see by the chain rule \[\left(F(kx)\right)^\prime = F^\prime(kx)(kx)^\prime=kF^\prime(kx)\] and, using that \(F^\prime(x) = 1/x\), \[kF^\prime(kx)=k\frac{1}{kx}=\frac{1}{x}\]

Thus both \(F(x)\) and \(F(kx)\) are antiderivatives of \(1/x\)! This means we can use either to evaluate our integral: so, using \(F(kx)\), \[\int_{[a,b]}\frac{1}{x} = F(kx)\Bigg|_{[a,b]}=F(kb)-F(ka)\] But this quantity is exactly an antiderivative of \(1/x\) evaluated at \(kb\) and \(ka\), so \[F(kb)-F(ka)=F(x)\Bigg|_{[ka,kb]}=\int_{[ka,kb]}\frac{1}{x}\]

Stringing these equalities together yields the result.
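The scale-invariance in this lemma is easy to see numerically as well; here is a brief sketch (illustrative only, with an arbitrary interval and scale factor).

```python
# The integral of 1/x over [a, b] agrees with the integral over [ka, kb].

def midpoint_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

inv = lambda x: 1.0 / x
a, b, k = 2.0, 5.0, 7.0
print(midpoint_sum(inv, a, b))          # ≈ 0.9163
print(midpoint_sum(inv, k * a, k * b))  # same value
```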

We can immediately use this to show the integral of \(1/x\) has the logarithm property.

Theorem 30.7 (The Logarithm as an Integral) The function \(f(x)=1/x\) is integrable on \((0,\infty)\), and its integral \[L(x)=\int_{[1,x]}\frac{1}{t}\] satisfies the law of logarithms \(L(xy)=L(x)+L(y)\) for \(x,y>1\).

Proof. The function \(1/x\) is continuous on \((0,\infty)\) so it is integrable on any closed subinterval \([a,b]\). Let \(x,y>1\) and consider the interval \([1,xy]\). We can decompose this into two intervals \([1,x]\cup [x,xy]\) and so by subdivision \[L(xy)=\int_{[1,xy]}\frac{1}{t}=\int_{[1,x]}\frac{1}{t}+\int_{[x,xy]}\frac{1}{t}\] The first of these terms is by definition \(L(x)\), and the second can be calculated via our lemma: \[\int_{[x,xy]}\frac{1}{t}=\int_{[1,y]}\frac{1}{t}=L(y)\] Thus, \(L(xy)=L(x)+L(y)\), as claimed!
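A quick numerical illustration of this law (a sketch, with sample values of \(x\) and \(y\) and a simple Riemann-sum helper standing in for the integral):

```python
# The law of logarithms for L(x) = integral of 1/t over [1, x].

def midpoint_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

L = lambda x: midpoint_sum(lambda t: 1.0 / t, 1.0, x)
x, y = 3.0, 5.0
print(L(x * y))     # ≈ 2.708
print(L(x) + L(y))  # same value, up to Riemann-sum error
```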

Exercise 30.1 Confirm this also works for arbitrary \(x,y>0\) if we interpret our integral as an oriented integral (Definition 25.4).

Using the fundamental theorem we can easily calculate the derivative of this logarithm at \(1\):

Corollary 30.1 (The Natural Logarithm) The integral of \(1/x\) is the natural logarithm: \[\log(x)=\int_{[1,x]}\frac{1}{t}\]

Proof. By the fundamental theorem, \(L^\prime(x) = \frac{1}{x}\), so evaluating at \(1\), \[L^\prime(x)\Bigg|_{x=1}=\frac{1}{1}=1\] Since \(L\) satisfies the law of logarithms (Theorem 30.7), it is a logarithm, and the only logarithm whose derivative at \(x=1\) equals \(1\) is the natural logarithm.

As is, this is not a very useful formula for the logarithm, as we can’t use antidifferentiation to compute it: the function we care about is defined as the antiderivative! But this does have dramatic implications: we can use this to derive a formula for the logarithm, via power series.

Theorem 30.8 (Logarithm Power Series) The function \(\log(1+x)\) has a power series representation for \(x\in(-1,1)\) \[\log(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}-\cdots\]

Proof. The geometric series converges on \((-1,1)\) to \[\frac{1}{1-x}=\sum_{n\geq 0}x^n\] Making the substitution \(x\mapsto -x\) we can rewrite this as \[\frac{1}{1+x}=\sum_{n\geq 0}(-x)^n =\sum_{n\geq 0}(-1)^n x^n\]

This power series still converges on \((-1,1)\), and so is integrable term-by-term along this entire interval: \[\int_{[0,x]}\frac{1}{1+t}=\int_{[0,x]}\sum_{n\geq 0}(-1)^n t^n=\sum_{n\geq 0 }\int_{[0,x]}(-1)^n t^n = \sum_{n\geq 0}(-1)^n\frac{x^{n+1}}{n+1}\]

Since the integral of \(1/x\) is \(\log(x)\), it is easy to see that an antiderivative of \(1/(1+x)\) is \(\log(1+x)\), by differentiating with the chain rule: \[\left(\log(1+x)\right)^\prime = \frac{1}{1+x}(1+x)^\prime = \frac{1}{1+x}\]

Thus, by the fundamental theorem, \(\int_{[0,x]}\frac{1}{1+t} = \log(1+x)-\log(1)=\log(1+x)\), and our power series is indeed a logarithm!

\[\log(1+x)= \sum_{n\geq 0}(-1)^n\frac{x^{n+1}}{n+1} = x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots\]
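As a sketch of how usable this series is, here its partial sums are compared against a library logarithm at a sample point inside the interval of convergence (the number of terms and the sample point are arbitrary choices).

```python
# Partial sums of log(1 + x) = x - x^2/2 + x^3/3 - ... versus math.log.
import math

def log1p_series(x, terms=60):
    """Partial sum of x - x^2/2 + x^3/3 - ... (for |x| < 1)."""
    return sum((-1) ** n * x ** (n + 1) / (n + 1) for n in range(terms))

x = 0.5
print(log1p_series(x))  # ≈ 0.405465
print(math.log(1 + x))  # 0.405465...
```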

Example 30.1 (Integrating the Tangent) The function \(\tan(x)=\sin(x)/\cos(x)\) is continuous on \((-\pi/2,\pi/2)\) and hence integrable on any \([a,b]\subset(-\pi/2,\pi/2)\). To find the value of the integral, we notice that

\[\tan(x)=\frac{\sin(x)}{\cos(x)}=\frac{1}{\cos(x)}\sin(x)=\frac{-1}{\cos(x)}(\cos(x))^\prime\]

Since \(1/x\) is the derivative of the natural logarithm, we see this is the result of a chain rule: \[\left(\log(\cos(x))\right)^\prime = \frac{1}{\cos(x)}(\cos(x))^\prime\] and so \[-\left(\log(\cos(x))\right)^\prime = \tan(x)\] We have found an antiderivative for the tangent, so the fundamental theorem yields its integral: \[\int_{[a,b]}\tan(x)=-\log(\cos(x))\Bigg|_{[a,b]}=-\log(\cos(b))+\log(\cos(a))\] Using the rules of logarithms we can simplify this a bit to \[\int_{[a,b]}\tan(x)=\log\left(\frac{\cos(a)}{\cos(b)}\right)\]
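A quick numerical check of this formula on a sample interval inside \((-\pi/2,\pi/2)\) (illustrative only; the interval and Riemann-sum helper are assumptions of the example):

```python
# Compare a Riemann sum for tan against log(cos(a)/cos(b)).
import math

def midpoint_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = 0.2, 1.3
print(midpoint_sum(math.tan, a, b))         # ≈ 1.2985
print(math.log(math.cos(a) / math.cos(b)))  # same value
```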

30.4 Inverse Trigonometric Functions

Understanding the inverse trigonometric functions will prove exceedingly useful to us in our end goal of calculating \(\pi\): we have defined \(\pi\) as a particular input to the trigonometric functions (the first positive input at which sine is zero, for example), and so we don’t have a way to compute \(\pi\) by plugging something into a function: we’ve had to resort to methods like Newton’s approximation scheme, which requires a lot of calculation since we are working with a power series!

Our lives would be much easier if we had functions that yielded \(\pi\) as an output at some simple input: we could then simply derive a means of computing such a function! The natural candidates are the inverse trigonometric functions, and so we take a moment to study these here.

30.4.1 \(\bigstar\) The ArcSine

Our first fundamental problem, of course, is that we have no idea how to get a formula for the inverse trigonometric functions! To get one, we will use the fact that we understand differentiation quite well, and then apply the fundamental theorem.

Proposition 30.2 The derivative of the inverse sine function is \[(\arcsin x)^\prime = \frac{1}{\sqrt{1-x^2}}\]

Proof. Let \(f(x)=\arcsin(x)\). Then where defined, \(f(\sin(\theta))=\theta\) by definition, and we may differentiate via the chain rule: on the left side \[\frac{d}{d\theta}f(\sin(\theta))=f^\prime(\sin(\theta))\cos(\theta)\] and on the right \(\frac{d}{d\theta} \theta=1\). Equating these and solving for \(f^\prime\) yields \[f^\prime(\sin(\theta))=\frac{1}{\cos(\theta)}\] The only remaining problem is that we want to know \(f^\prime\) as a function of \(x\), and we only know its value implicitly, as a function of \(\sin(\theta)\). But setting \(x=\sin\theta\), we can express \(\cos\theta = \sqrt{1-x^2}\) via the Pythagorean identity \(\sin^2\theta+\cos^2\theta =1\). Thus

\[f^\prime(x)=\frac{1}{\sqrt{1-x^2}}\]

Before integration this would have been a mere curiosity. But armed with the fundamental theorem, this is an extremely powerful fact: indeed, it directly gives us a representation as an integral:

Corollary 30.2 The inverse sine function is given on the interval \([0,1)\) by the integral \[\arcsin(x)=\int_{[0,x]}\frac{1}{\sqrt{1-t^2}}\]

Proof. Since \((\arcsin x)^\prime =\frac{1}{\sqrt{1-x^2}}\), the inverse sine is an antiderivative of \(\frac{1}{\sqrt{1-x^2}}\), and also \(\sin(0)=0\) implies \(\arcsin(0)=0\), so it is zero at \(x=0\). Thus, it is exactly the area function \[\arcsin(x)=\int_{[0,x]}\frac{1}{\sqrt{1-t^2}}\]
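A short numerical sketch of this integral representation at a sample point (illustrative only; the point and the Riemann-sum helper are chosen for the example):

```python
# arcsin(x) as the integral of 1/sqrt(1 - t^2) over [0, x], at a sample point.
import math

def midpoint_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

x = 0.6
print(midpoint_sum(lambda t: 1.0 / math.sqrt(1 - t * t), 0.0, x))  # ≈ 0.6435
print(math.asin(x))                                                # 0.6435...
```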

Exercise 30.2 Carry out the analogous reasoning to derive an integral expression for the inverse cosine function.

30.4.2 The ArcTangent

Proposition 30.3 \[(\arctan x)^\prime=\frac{1}{1+x^2}\]

Proof. We again proceed by differentiating the identity \(\arctan(\tan\theta)=\theta\). This yields \(\arctan^\prime(\tan\theta)\frac{1}{\cos^2\theta}=1\), and multiplying through by \(\cos^2\theta\) we can solve for the derivative of the arctangent: \[\arctan^\prime(\tan\theta)=\cos^2\theta\]

The only problem is again that we have the derivative implicitly, as a function of \(\tan\theta\), and we need it in terms of just an abstract variable \(x\). Setting \(x=\tan\theta\) we see that \(x^2=\tan^2\theta\) and (using the Pythagorean identity) \(x^2+1=\tan^2\theta+1 =\frac{1}{\cos^2\theta}\). Thus \[\cos^2\theta = \frac{1}{1+x^2}\] and putting these two together, we reach what we are after:

\[\arctan^\prime(x)=\frac{1}{1+x^2}\]

Proposition 30.4 The inverse function \(\arctan(x)\) to the tangent \(\tan(x)=\sin(x)/\cos(x)\) admits an integral representation \[\arctan(x)=\int_{[0,x]}\frac{1}{1+t^2}\]

Proof. This follows as \(\arctan^\prime(x)=1/(1+x^2)\), so both \(\arctan\) and this integral have the same derivative. As antiderivatives of the same function, they differ by a constant. Finally, this constant is equal to zero, as \(\arctan(0)=0\) and \(\int_{[0,0]}\frac{1}{1+t^2} =0\), being an integral over a degenerate interval.

This integral expression is quite nice: the arctangent, like the logarithm, is the integral of a rather simple rational function. But as with the arcsine, an integral expression is rather difficult to use for computing actual values: we’d need to actually compute (or estimate) some Riemann sums. So it’s helpful to look for other expressions as well, and here the arctangent has a particularly nice power series.

Recall the geometric series \[\frac{1}{1-x}=\sum_{n\geq 0}x^n\]

We can substitute \(-x^2\) for the variable here to get a series for \(1/(1+x^2)\):

\[\frac{1}{1+x^2}=\sum_{n\geq 0}(-x^2)^n=\sum_{n\geq 0}(-1)^nx^{2n}\] \[=1-x^2+x^4-x^6+x^8-\cdots\]

This power series has radius of convergence \(1\) (inherited from the original geometric series) and converges at neither endpoint. We know from the above that this function is the derivative of the arctangent, so we should integrate it!

\[\arctan(x)=\int_{[0,x]}\frac{1}{1+t^2}\,dt = \int_{[0,x]} \sum_{n\geq 0}(-1)^nt^{2n}\,dt\]

Inside its radius of convergence we can exchange the order of the sum and the integral:

\[\begin{align*} \int_{[0,x]} \left(\sum_{n\geq 0}(-1)^nt^{2n}\right)\,dt&=\sum_{n\geq 0}\int_{[0,x]}(-1)^nt^{2n}\,dt\\ &=\sum_{n\geq 0}(-1)^n\int_{[0,x]}t^{2n}dt\\ &=\sum_{n\geq 0}(-1)^n \frac{x^{2n+1}}{2n+1} \end{align*}\]

Theorem 30.9 For \(x\in(-1,1)\), \[\arctan(x)=\sum_{n\geq 0}(-1)^n\frac{x^{2n+1}}{2n+1}\] \[=x-\frac{x^3}{3}+\frac{x^5}{5}-\frac{x^7}{7}+\frac{x^9}{9}-\cdots\]
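As with the logarithm series, here is a brief sketch comparing partial sums of this series against a library arctangent at a point inside the radius of convergence (the sample point and number of terms are arbitrary choices).

```python
# Partial sums of arctan(x) = x - x^3/3 + x^5/5 - ... versus math.atan.
import math

def arctan_series(x, terms=60):
    """Partial sum of x - x^3/3 + x^5/5 - ... (for |x| < 1)."""
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

x = 0.5
print(arctan_series(x))  # ≈ 0.463648
print(math.atan(x))      # 0.463648...
```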