27 Elementary Functions
In this chapter we look at how to find derivatives of functions which are defined not by explicit formulas, but by functional equations: exponentials, logarithms, and trigonometric functions.
27.1 Exponentials & Logs
Proposition 27.1 Let \(E(x)\) be an exponential function. Then \(E\) is differentiable on the entire real line, and \[E^\prime(x) = E^\prime(0)E(x)\]
First we show that this formula holds so long as \(E\) is actually differentiable at zero. Thus, differentiability at a single point is enough to ensure differentiability everywhere and fully determine the formula!
Proof. Let \(x\in\RR\), and \(h_n\to 0\). Then we compute \(E^\prime(x)\) by the following limit: \[E^\prime(x)=\lim \frac{E(x+h_n)-E(x)}{h_n}\]
Using the property of exponentials and the limit laws, we can factor an \(E(x)\) out of the entire numerator:
\[=\lim \frac{E(x)E(h_n)-E(x)}{h_n}=E(x)\lim \frac{E(h_n)-1}{h_n}\]
But, \(E(0)=1\) so the limit here is actually the *derivative of \(E\) at zero*!
\[E^\prime(x)=E(x)E^\prime(0)\]
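Before moving on, here is a quick numerical sanity check (not part of the proof): using \(E(x)=2^x\) as a concrete exponential, a small symmetric difference quotient approximates \(E^\prime(x)\), and it matches \(E^\prime(0)E(x)\). The helper `diff_quotient`, the step size `h`, and the sample points are illustrative choices.

```python
# Numerical sanity check of E'(x) = E'(0) E(x) for the concrete exponential E(x) = 2^x.
# The step size h and the sample points are illustrative choices.

def base2(x: float) -> float:
    return 2.0 ** x

def diff_quotient(f, x: float, h: float = 1e-6) -> float:
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

E_prime_at_zero = diff_quotient(base2, 0.0)  # numerically close to log(2) = 0.693...

for x in [-1.0, 0.5, 2.0]:
    direct = diff_quotient(base2, x)      # estimate of E'(x) directly
    formula = E_prime_at_zero * base2(x)  # the formula E'(0) E(x)
    print(f"x = {x:5.2f}   E'(x) = {direct:.6f}   E'(0)E(x) = {formula:.6f}")
```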
Next, we tackle the slightly more subtle problem of showing that \(E\) is in fact differentiable at zero. This is tricky because all we have assumed is that \(E\) is continuous and satisfies the law of exponents: how are we going to pull differentiability out of this? The trick has two parts: (1) show the right and left hand limits defining the derivative exist, and (2) show they're equal.
Proof. STEP 1: Show that the left and right hand limits defining the derivative exist: \(E\) is convex (Exercise 21.4), so its difference quotient at \(0\) is monotone increasing (Proposition 5.1), and hence the limit \(\lim_{h\to 0^-}\) exists (as a sup) and \(\lim_{h\to 0^+}\) exists (as an inf), by the corollary that convexity guarantees these one-sided limits.
STEP 2: Now that we know each of these limits exists, let's show they are equal using the definition:
To compute the left-hand limit, we can choose any sequence approaching \(0\) from below: if \(h_n\) is a positive sequence with \(h_n\to 0\), then \(-h_n\) will do:
\[\lim_{h\to 0^-}\frac{E(h)-1}{h}=\lim \frac{E(-h_n)-1}{-h_n}\] And since \(E(-h)=1/E(h)\) (a consequence of the law of exponents), we see \(E(-h_n)=1/E(h_n)\). Thus \[\begin{align*} \lim \frac{E(-h_n)-1}{-h_n}&=\lim \frac{\frac{1}{E(h_n)}-1}{-h_n}\\ &=\lim\frac{1-E(h_n)}{-h_n}\frac{1}{E(h_n)}\\ &=\lim \frac{E(h_n)-1}{h_n}\frac{1}{E(h_n)} \end{align*}\]
But, since \(E\) is continuous (by definition) and \(E(0)=1\) (which follows from the law of exponents) the limit theorems imply \[\lim \frac{1}{E(h_n)}=\frac{1}{\lim E(h_n)}=\frac{1}{E(\lim h_n)}=\frac{1}{E(0)}=1\] Thus, \[\begin{align*} &\lim \left(\frac{E(h_n)-1}{h_n}\frac{1}{E(h_n)}\right)\\&= \left(\lim \frac{E(h_n)-1}{h_n}\right)\left(\lim\frac{1}{E(h_n)}\right)\\ &=\lim \frac{E(h_n)-1}{h_n}\\ \end{align*}\] But this last limit is exactly the right-hand limit from above, since \(h_n>0\) and \(h_n\to 0\). Stringing all of this together, we finally see \[\lim_{h\to 0^-}\frac{E(h)-1}{h}=\lim_{h\to 0^+}\frac{E(h)-1}{h}\] Thus, by Proposition 17.1 we see that since both one sided limits exist and are equal the entire limit exists: \(E\) is differentiable at \(0\).
Having done this work we can immediately calculate the derivative of logarithms, using the fact that they are inverses of exponentials:
Proposition 27.2 Let \(L\) be a logarithm function, then \(L\) is differentiable and \[L^\prime(x)=\frac{L^\prime(1)}{x}\]
Proof. Let \(L\) be a logarithm, with inverse the exponential \(E(x)\). We know that \(E\) is differentiable and \(E^\prime(x)\neq 0\), as it's a constant multiple of the everywhere-positive \(E(x)\) itself. Thus by Theorem 23.7 the function \(L\) is also differentiable. Choosing \(b>0\) and setting \(L(b)=a\) gives \[L^\prime(b)=\frac{1}{E^\prime(a)}=\frac{1}{E^\prime(0)E(a)}=\frac{1}{E^\prime(0) b}\] where the last equality follows as \(L(b)=a\) implies \(E(a)=b\).
Finally, we remove the mention of \(E\) from the answer and express everything in terms of the logarithm itself: evaluating the formula above at \(b=1\) (where \(L(1)=0\), since \(E(0)=1\)) gives \(L^\prime(1)=\frac{1}{E^\prime(0)}\), and substituting this in gives the claimed form.
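A similar sanity check is possible here (again illustrative only, using the base-ten logarithm from Python's standard library): its difference quotients line up with \(L^\prime(1)/x\).

```python
from math import log10

def diff_quotient(f, x: float, h: float = 1e-6) -> float:
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

L_prime_at_one = diff_quotient(log10, 1.0)   # numerically close to 1/log(10) = 0.434...

for x in [0.5, 2.0, 10.0]:
    print(f"x = {x:5.2f}   L'(x) = {diff_quotient(log10, x):.6f}   L'(1)/x = {L_prime_at_one / x:.6f}")
```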
27.1.1 The Natural Exponential and Natural Log
When studying the functional equations for logs and exponentials we saw there is not one solution but a whole family of them. While the functional equation itself gave no preference to any exponential over any other, the derivative does: the formula \(E^\prime(x)=E^\prime(0)E(x)\) is simplest for the exponential with \(E^\prime(0)=1\), and calculus singles this one out as the most natural.
Definition 27.1 We write \(\exp(x)\) for the exponential function which has \(\exp^\prime(0)=1\). This exponential satisfies the simple differential identity \[\exp^\prime(x)=\exp(x)\]
Note that by the chain rule we know such a thing exists so long as any exponential exists. If \(E(x)\) is any exponential then \(E(x/E^\prime(0))\) has derivative \(1\) at \(x=0\)!
A similar story plays out for logarithms; the functional equation itself had many logarithm solutions, but calculus picks out one of these as clearly the most natural:
Definition 27.2 (The Natural Log) We write \(\log\) for the logarithm function which has \(\log^\prime(1)=1\). This logarithm satisfies the simple differential identity \[\log^\prime(x)=\frac{1}{x}\]
Furthermore the two notions of “naturalness” picking out a logarithm and an exponential are compatible with one another!
Corollary 27.1 The natural exponential and natural log are inverses of one another.
We will make much use of this pair of special functions, and their exceedingly simple differentiation rules. As a first application, we give a re-proof of the power rule avoiding difficult limiting arguments:
Theorem 27.1 (\(\bigstar\) The General Power Rule) Let \(a\in\RR\) and \(f(x)=x^a\). Then \(f\) is differentiable for all \(x>0\), and \[(x^a)^\prime = ax^{a-1}\]
Proof. Let \(\exp\) be the natural exponential, and \(\log\) be the natural log. Then \(\exp(\log(x))=x\), and so \(\exp(\log(x^a))=x^a\). Using the property of logarithms and powers (Corollary 21.2) this simplifies to
\[x^a=\exp(\log(x^a))=\exp(a\log(x))\]
By the chain rule,
\[\begin{align*} \left[\exp(a\log(x))\right]^\prime &= \exp(a\log(x))\left[a\log(x)\right]^\prime\\ &= \exp(a\log(x)) a \log^\prime(x)\\ &= \exp(a\log(x))a\frac{1}{x} \end{align*}\]
But, recalling that \(\exp(a\log(x))=\exp(\log(x^a))=x^a\) this simplifies to
\[=x^a a\frac{1}{x}=ax^{a-1}\]
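For instance (an illustrative check only, with the irrational exponent \(a=\sqrt{2}\) chosen to emphasize that no rational-power argument is involved), a difference quotient of \(x^a=\exp(a\log x)\) agrees numerically with \(ax^{a-1}\).

```python
from math import sqrt, exp, log

a = sqrt(2)  # an irrational exponent

def power(x: float) -> float:
    return exp(a * log(x))  # x^a, written as exp(a log x)

def diff_quotient(f, x: float, h: float = 1e-6) -> float:
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [0.5, 1.0, 4.0]:
    print(f"x = {x}   (x^a)' = {diff_quotient(power, x):.6f}   a x^(a-1) = {a * x ** (a - 1):.6f}")
```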
27.1.2 Finding a Series Representation
To work with the natural exponential efficiently, we need to find a formula that lets us compute it. And this is exactly what power series are good at! However, the theory of power series is a little tricky, as we saw in the last chapter. Not every function has a power series representation, but if a function does, there’s only one possibility:
Proposition 27.3 If the natural exponential has a power series representation, then it is \[p(x)=\sum_{k\geq 0}\frac{x^k}{k!}\]
Proof. We know the only candidate series for a function \(f(x)\) is \(\sum_{k\geq 0}\frac{f^{(k)}(0)}{k!}x^k\), so for \(\exp\) this is
\[p(x)=\sum_{k\geq 0}\frac{\exp^{(k)}(0)}{k!}x^k\]
However, we know that \(\exp^\prime=\exp\) and so inductively \(\exp^{(k)}=\exp\), and so \[\exp^{(k)}(0)=\exp(0)=1\] Thus \[p(x)=\sum_{k\geq 0}\frac{1}{k!}x^k\]
So, while we know \(\exp\) exists, we are still dealing in hypotheticals: we do not yet know whether it is representable by a power series! The first step to fixing this is to show that the proposed series at least converges.
Proposition 27.4 The series \(p(x)=\sum_{k\geq 0}\frac{x^k}{k!}\) converges for all \(x\in\RR\).
Proof. This series converges for all \(x\in\RR\) by the Ratio test, as \[\lim\Big|\frac{x^{n+1}/(n+1)!}{x^n/n!}\Big|=\lim \frac{|x|}{n+1}=0<1\]
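To see this rapid convergence concretely, here is a small illustrative sketch: for a fixed \(x\) it prints the size of the terms \(x^n/n!\) together with the ratio \(|x|/(n+1)\) of successive terms, which drops below \(1\) once \(n\) passes \(|x|\). The choice \(x=10\) is arbitrary.

```python
from math import factorial

# Terms of the series sum x^n / n! at a fixed x, with the successive-term ratio |x|/(n+1).
x = 10.0
for n in range(0, 31, 5):
    term = abs(x) ** n / factorial(n)
    ratio = abs(x) / (n + 1)
    print(f"n = {n:2d}   |term| = {term:12.4e}   ratio to next term = {ratio:.3f}")
```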
Now, all that remains is to show that \(p(x)=\exp(x)\). One means to do this is by a direct calculation: show that this series satisfies the law of exponents.
Exercise 27.1 (Power Series & Law of Exponents) Show that for any \(x,y\in\RR\) \[\left(\sum_{n\geq 0}\frac{x^n}{n!}\right)\left(\sum_{m\geq 0}\frac{y^m}{m!}\right)=\left(\sum_{k\geq 0}\frac{(x+y)^k}{k!}\right)\] Thus, the power series \(\sum x^n/n!\) satisfies the law of exponents, and defines an exponential function.
However this computation is a rather complicated manipulation of products and double sums, so we present an alternative approach. Since \(p\) is a power series, showing \(p(x)=\exp(x)\) really means showing that the limit of its partial sums equals \(\exp(x)\), or
\[\forall x\in\RR\,\,\, \exp(x)=\lim_N p_N(x)\]
For any finite partial sum \(p_N\), we know that it is not exactly equal to \(\exp(x)\) (as this finite sum is just a polynomial!). Thus there must be some error term \(R_N = \exp-p_N\), or
\[\exp(x)=p_N(x)+R_N(x)\]
This is helpful, as we know from the previous chapter how to calculate such an error, using the Taylor Error Formula: for each fixed \(x\in\RR\) and each fixed \(N\in\NN\), there is some point \(c_N\in[0,x]\) such that
\[R_N(x)=\frac{\exp^{(N+1)}(c_N)}{(N+1)!}x^{N+1}\]
And, to show the power series becomes the natural exponential in the limit, we just need to show this error tends to zero!
Proposition 27.5 As \(N\to\infty\), for any \(x\in\RR\) the Taylor error term for the exponential goes to zero: \[R_N(x)\to 0\]
Proof. Fix some \(x\in\RR\). Then for an arbitrary \(N\), we know \[R_N(x)=\frac{\exp^{(N+1)}(c_N)}{(N+1)!}x^{N+1}\] where \(c_N\in[0,x]\) is some number that we don’t have much control over (as it came from an existence proof: Rolle’s theorem in our derivation of the Taylor error). Because we don’t know \(c_N\) explicitly, it's hard to directly compute the limit and so instead we use the squeeze theorem:
We know that \(\exp\) is an increasing function: thus, the fact that \(0\leq c_N\leq x\) implies that \(1=\exp(0)\leq \exp(c_N)\leq \exp(x)\), and multiplying this inequality through by \(\frac{x^{N+1}}{(N+1)!}\) yields the inequality
\[\frac{x^{N+1}}{(N+1)!}\leq R_N(x)=\exp(c_N)\frac{x^{N+1}}{(N+1)!}\leq \exp(x)\frac{x^{N+1}}{(N+1)!}\]
(Here I have assumed that \(x\geq 0\): if \(x<0\) then the inequalities reverse for even values of \(N\) as \(x^{N+1}\) is negative and we are multiplying through by a negative number. But this does not affect the fact that the error term \(R_N(x)\) is still sandwiched between the two.)
So now our problem reduces to showing that the upper and lower bounds converge to zero. Since \(\exp(x)\) is a constant (remember, \(N\) is our variable here as we take the limit), a limit of both the upper and lower bounds comes down to just finding the limit
\[\lim_N \frac{x^{N+1}}{(N+1)!}\]
But this is just the \(N+1\)st term of the power series \(p(x)=\sum_{n\geq 0}x^n/n!\) we studied above! And since this power series converges, we know that as \(n\to\infty\) its terms must go to zero (the divergence test). Thus
\[\lim_N \frac{x^{N+1}}{(N+1)!}=0\hspace{1cm}\lim_N \exp(x)\frac{x^{N+1}}{(N+1)!}=0\]
and so by the squeeze theorem, \(R_N(x)\) converges and
\[\lim_N R_N(x)=0\]
Now we have all the components together at last: we know that \(\exp\) exists, we have a candidate power series representation, that candidate converges, and the error between it and the exponential goes to zero!
Theorem 27.2 The natural exponential is given by the following power series \[\exp(x)=\sum_{k\geq 0}\frac{x^k}{k!}\]
Proof. Fix an arbitrary \(x\in\RR\). Then for any \(N\) we can write \[\exp(x)=p_N(x)+R_N(x)\] where \(p_N\) is the partial sum of \(p(x)=\sum_{k\geq 0}x^k/k!\) and \(R_N(x)\) is the error. Since we have proven both \(p_N\) and \(R_N\) converge, we can take the limit of both sides using the limit theorems (and, as \(\exp(x)\) is constant in \(N\), clearly \(\lim_N \exp(x)=\exp(x)\)):
\[\begin{align*} \exp(x)&=\lim_N(p_N(x)+R_N(x))\\ &= \lim_N p_N(x)+\lim_N R_N(x)\\ &= p(x)+0\\ &= \sum_{k\geq 0}\frac{x^k}{k!} \end{align*}\]
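As an illustration of the theorem (not part of the proof), we can compare the partial sums \(p_N(x)\) against the exponential as computed by Python's `math.exp`; the agreement improves rapidly with \(N\). The test value \(x=2\) is an arbitrary choice.

```python
from math import exp, factorial

def p(N: int, x: float) -> float:
    """Partial sum p_N(x) = sum_{k=0}^{N} x^k / k!."""
    return sum(x ** k / factorial(k) for k in range(N + 1))

x = 2.0
for N in [2, 5, 10, 15]:
    print(f"N = {N:2d}   p_N({x}) = {p(N, x):.12f}   exp({x}) = {exp(x):.12f}")
```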
It's incredible in and of itself to have such a simple, explicit formula for the natural exponential. But this is just the beginning: this series actually gives us a means to express all exponentials:
Theorem 27.3 Let \(E(x)\) be an arbitrary exponential function. Then \(E\) has a power series representation on all of \(\RR\) which can be expressed for some real nonzero \(c\) as
\[E(x)=\sum_{n\geq 0} \frac{c^n}{n!}x^n\]
Proof. Because \(E\) is an exponential we know \(E\) is differentiable, and that \(E^\prime(x)=E^\prime(0)E(x)\) for all \(x\). Note that \(E^\prime(0)\) is nonzero; else we would have \(E^\prime(x)=0\) constantly, and so \(E(x)\) would be constant. Set \(c=E^\prime(0)\).
Now, inductively take derivatives at zero: \[E^\prime(0)=c\hspace{1cm}E^{\prime\prime}(0)=c^2\hspace{1cm}E^{(n)}(0)=c^n\]
Thus, if \(E\) has a power series representation it must be \[\sum_{n\geq 0}\frac{c^n}{n!}x^n=\sum_{n\geq 0}\frac{1}{n!}(cx)^n\]
This is just the series for \(\exp\) evaluated at \(cx\): since \(\exp\) exists and is an exponential, so is the function \(\exp(cx)\) (as it's defined just by a substitution), and its derivative at zero is also \(c\). Both \(E(x)\) and \(\exp(cx)\) therefore satisfy \(y^\prime = cy\) and equal \(1\) at zero, so (by the same quotient-rule argument as Proposition 27.6 below) their ratio is constant and equal to \(1\). That is, \(E(x)=\exp(cx)\), and \(E\) is indeed given by the series above.
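For a concrete illustration (the choice of base and the cutoff of 40 terms are both illustrative), take \(E(x)=2^x\), so \(c=E^\prime(0)=\log 2\): summing the series at \(cx\) reproduces \(2^x\).

```python
from math import factorial, log

def series_exp(x: float, terms: int = 40) -> float:
    """Partial sum of sum_n x^n / n!; 40 terms is plenty for double precision here."""
    return sum(x ** n / factorial(n) for n in range(terms))

c = log(2.0)  # the value of E'(0) for E(x) = 2^x
for x in [0.5, 1.0, 3.0]:
    print(f"x = {x}   series at cx = {series_exp(c * x):.10f}   2^x = {2.0 ** x:.10f}")
```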
Unfortunately our newfound tool does not apply so well to giving a formula for the logarithm: power series centered at zero are defined on a symmetric interval \((-r,r)\) about \(0\), but the domain of the logarithm is \((0,\infty)\). Thus there is no power series centered at zero that equals \(\log(x)\) on its whole domain! We will come up with formulas to compute the logarithm later on, first as an integral, and then as a series (that converges only for some values of \(x\)).
27.1.3 \(\bigstar\) Existence of Exponentials: an Alternative Proof
Our argument above used that we had previously confirmed the existence of exponential functions, together with the Taylor Error formula to find a series representation. But as often happens, the amount of new technology we have developed along the way gives a new self-contained means of both proving the existence of exponentials, and constructing their series in one stroke! We give this alternative argument here.
The idea essentially turns some of our previous reasoning on its head: we start by looking at solutions to the equation \(y^\prime = y\) and (1) show they satisfy the law of exponents, then (2) construct an explicit solution as a power series. First, a helpful lemma about this differential equation:
Proposition 27.6 Let \(f,g\) be two solutions to the differential equation \(y^\prime =y\). Then they are constant multiples of one another.
Proof. Consider the function \(h(x)=\tfrac{f(x)}{g(x)}\). Differentiating with the quotient rule,
\[\begin{align} h^\prime(x)&=\frac{f^\prime(x)g(x)-f(x)g^\prime(x)}{g(x)^2}\\ &= \frac{f(x)g(x)-f(x)g(x)}{g(x)^2}\\ &=\frac{0}{g(x)^2}\\ &=0 \end{align}\]
Thus \(h^\prime(x)=0\) for all \(x\), which implies \(h=f/g\) is a constant function, and \(f\) is a constant multiple of \(g\) as claimed.
Now we’re ready for the main theorem:
Theorem 27.4 Let \(g\) be any differentiable function which solves \(g^\prime = g\) and has \(g(0)=1\). Then \(g\) is an exponential.
Proof. Let \(g\colon\RR\to\RR\) solve \(Y^\prime = Y\) and satisfy \(g(0)=1\). We wish to show that \(g(x+y)=g(x)g(y)\) for all \(x,y\in\RR\).
So, fix an arbitrary \(y\), and consider the two sides separately, defining the functions \(L(x)=g(x+y)\) and \(R(x)=g(x)g(y)\).
Differentiating,
\[\begin{align*} L^\prime(x)&=\left(g(x+y)\right)^\prime\\ &=g(x+y)(x+y)^\prime\\ &=g(x+y)\\ &=L(x) \end{align*}\]
\[\begin{align*} R^\prime(x)&=\left(g(x)g(y)\right)^\prime\\ &=(g(x))^\prime g(y)\\ &=g(x)g(y)\\ &=R(x) \end{align*}\]
Thus, both \(L\) and \(R\) satisfy the differential equation \(Y^\prime=Y\). Our previous proposition implies they are constant multiples of one another,
\[\frac{L(x)}{R(x)}=k\hspace{1cm} \forall x\in\RR\]
To find this constant we evaluate at \(x=0\) where (using \(g(0)=1\)) we have \[L(0)=g(0+y)=g(y)\] \[R(0)=g(0)g(y)=g(y)\]
They are equal at \(0\) so the constant is \(1\):
\[\frac{L(x)}{R(x)}=\frac{L(0)}{R(0)}=\frac{g(y)}{g(y)}=1\] \[\implies L=R\]
But these two functions are precisely the left and right side of the law of exponents for \(g\). Thus their equality is equivalent to \(g\) satisfying the law of exponents for this fixed value of \(y\):
\[\forall x,\,\, L(x)=g(x+y)=g(x)g(y)=R(x)\]
As \(y\) was arbitrary, this holds for all \(y\), and \(g\) is an exponential.
This proof does not establish the existence of a solution to this equation; it only says that if you have a solution, then it's an exponential. But we may now use the theory of power series to directly construct a solution!
Proposition 27.7 The series \(E(x)=\sum_{n\geq 0 }\frac{x^n}{n!}\) satisfies \(E^\prime(x)=E(x)\) and \(E(0)=1\). Thus, it defines an exponential function.
Proof. This series converges on the entire real line via the ratio test (as checked above). Thus it defines a continuous and differentiable function on \(\RR\), which can be differentiated term-by-term (Theorem 26.1) to yield \[\begin{align*} E^\prime(x)&=\left(1+x+\frac{x^2}{2}+\frac{x^3}{6}+\cdots+\frac{x^n}{n!}+\cdots\right)^\prime\\ &=\left(1\right)^\prime+\left(x\right)^\prime+\left(\frac{x^2}{2}\right)^\prime+\left(\frac{x^3}{6}\right)^\prime+\cdots + \left(\frac{x^n}{n!}\right)^\prime+\cdots\\ &= 0 + 1 + x + \frac{3x^2}{6}+\cdots + \frac{n x^{n-1}}{n!}+\cdots\\ &= 1+ x+ \frac{x^2}{2}+\cdots+ \frac{x^{n-1}}{(n-1)!}+\cdots\\ &= E(x) \end{align*}\]
Finally, plugging in zero yields \(E(0)=1+0+\frac{0^2}{2!}+\cdots = 1\), finishing the argument.
27.1.4 The Number \(e\)
Recalling our work with irrational exponents, we know that exponentials are powers: if \(E\) is an exponential with \(E(1)=a\), then we may write \(E(x)=a^x\) for any \(x\in\RR\) (defined as a limit of rational exponents). So, our special exponential \(\exp\) comes with a special number as its base.
Definition 27.3 We denote by the letter \(e\) the base of the exponential \(\exp(x)\): that is, \(e=\exp(1)\), and \[\exp(x)=e^x\]
What is this natural base? We can estimate its value using the power series representation for \(\exp\), and the Taylor error formula.
Proposition 27.8 The base of the natural exponential is between \(2\) and \(3\).
Proof. The series defining \(e\) has all positive terms, so \(e\) is greater than any partial sum. Thus \[2=1+1=\frac{1}{0!}+\frac{1}{1!}< \sum_{k\geq 0}\frac{1}{k!}=e\] so we have the lower bound. To get the upper bound, we need to come up with a computable upper bound for our series. This turns out to be not that difficult: as the factorial grows so quickly, we can produce many upper bounds by finding something that grows more slowly than the factorial and summing its reciprocals instead. For instance, when \(k\geq 2\) \[k(k-1)\leq k!\]
and so,
\[e=\sum_{k\geq 0}\frac{1}{k!}=1+1+\sum_{k\geq 2}\frac{1}{k!}\leq 1+1+\sum_{k\geq 2}\frac{1}{k(k-1)}\]
But this upper bound now is our favorite telescoping series! After a rewrite with partial fractions, we directly see that it sums to \(1\). Plugging this in,
\[e<1+1+1=3\]
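A quick numerical illustration of this comparison (purely for intuition; the cutoff \(K=20\) is arbitrary): the telescoping tail \(\sum_{k\geq 2}1/(k(k-1))\) approaches \(1\), while the factorial tail \(\sum_{k\geq 2}1/k!\) sits well below it.

```python
from math import factorial

K = 20  # arbitrary cutoff for the partial sums
tail_factorial = sum(1 / factorial(k) for k in range(2, K + 1))
tail_telescope = sum(1 / (k * (k - 1)) for k in range(2, K + 1))

print(f"sum of 1/k!       for k = 2..{K}: {tail_factorial:.6f}")
print(f"sum of 1/(k(k-1)) for k = 2..{K}: {tail_telescope:.6f}   (telescopes to 1 in the limit)")
```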
How can we get a better estimate? Since we do have a convergent infinite series just sitting here defining \(e\) for us, the answer seems obvious - why don’t we just sum up more and more terms of the series? And of course - that is part of the correct strategy, but it’s missing one key piece. If you add up the first 10 terms of the series and you get some number, how can you know how accurate this is?
Just because the first two digits are \(2.7\), who is to say that after adding a million more terms (all of which are positive) it won’t eventually become \(2.8\)? To give us any confidence in the value of \(e\) we need a way of measuring how far off any of our partial sums could be.
Our usual approach is to try and produce sequences of upper and lower estimates: nested intervals of error bars to help us out. But here we have only one sequence (and producing even a single upper bound above was a bit of work!) so we need to look elsewhere. It turns out, the correct tool for the job is the Taylor Error formula once more!
Proposition 27.9 Adding up the terms of the series expansion of \(e\) through \(k=N\) results in an estimate of the true value accurate to within \(3/(N+1)!\).
Proof. The number \(e\) is defined as \(\exp(1)\), and so using \(x=1\) we are just looking at the old equation
\[\exp(1)=p_N(1)+R_N(1)\]
Where \(R_N(1)=\exp(c_N)\frac{1^{N+1}}{(N+1)!}\) for \(c_N\in[0,1]\). Since \(\exp\) is increasing, we can bound \(\exp(c_N)\) below by \(\exp(0)=1\) and above by \(\exp(1)=e\), and \(e\) above by \(3\): thus
\[\frac{1}{(N+1)!}\leq R_N(1)\leq \frac{3}{(N+1)!}\]
And so the difference \(|e-p_N(1)|=|R_N(1)|\) is bounded above by \(3/(N+1)!\).
This gives us a readily computable, explicit estimate. For instance, adding up the series through the \(N=5\) term yields
\[1+1+\frac{1}{2}+\frac{1}{6}+\frac{1}{24}+\frac{1}{120}\approx 2.71666\ldots\]
and the total error between this and \(e\) is less than \(\frac{3}{6!}=\frac{1}{240}=0.0041666\ldots\). Thus we can be confident that the first digit after the decimal is a 7, as \(2.7167-0.0042=2.7125\leq e\leq 2.7167+0.0042=2.7209\).
Adding up five more terms, to \(N=10\) gives
\[1+1+\frac{1}{2}+\frac{1}{3!}+\cdots+\frac{1}{10!}=2.71828180114638\ldots\]
now with a maximal error of \(3/11!=0.000000075156\ldots\). This means we are now absolutely confident in the first six digits:
\[e\approx 2.718281\]
Pretty good, for only having to add eleven fractions together! That's the sort of calculation one could even manage by hand.
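The sketch below (illustrative only, using the standard library's `factorial`) reproduces these two estimates, printing the partial sum \(p_N(1)\) alongside the guaranteed error bound \(3/(N+1)!\).

```python
from math import factorial

def partial_sum(N: int) -> float:
    """p_N(1) = sum_{k=0}^{N} 1/k!, the truncated series for e."""
    return sum(1 / factorial(k) for k in range(N + 1))

for N in [5, 10]:
    estimate = partial_sum(N)
    bound = 3 / factorial(N + 1)
    print(f"N = {N:2d}   p_N(1) = {estimate:.14f}   guaranteed error < {bound:.2e}")
```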
27.2 Trigonometric Functions
Take derivatives of identities.
Corollary 27.2 \[\lim_{x\to 0}\frac{\sin x}{x}=1\]
Definition of NATURAL trigonometric functions - this gives a natural period \(\tau\) and half period \(\pi\).
27.2.1 Finding Series Representations
Find power series satisfying these.
Prove these series directly satisfy the trigonometric functional equations, using complex exponentials.
Prove the Viète formula for \(\pi\) (page 381 in AMAZING).
27.3 Problems
Exercise 27.2 (Approximating \(\pi\) with Newton’s Method) The first zero of \(\cos(x)\) is \(\pi/2\), so one might hope to use Newton’s method to produce an approximation for \(\pi\). Show the sequence \[x_{n+1}=N(x_n)=x_n+\frac{\cos(x_n)}{\sin(x_n)}\] starting at \(x_0=1\) converges to \(\pi/2\), and use a calculator to compute the first couple terms.
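One possible way to carry out the calculator portion of this exercise (a sketch only; here Python's `math.cos` and `math.sin` play the role of the calculator, and five iterations is an arbitrary stopping point):

```python
from math import cos, sin, pi

# Newton's method for the first zero of cos(x), starting from x_0 = 1.
x = 1.0
for n in range(1, 6):
    x = x + cos(x) / sin(x)
    print(f"x_{n} = {x:.15f}")

print(f"pi/2 = {pi / 2:.15f}")
```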
This of course is not very satisfying as we had to use a calculator to find values of \(\sin\) and \(\cos\)! But we know enough to approximate these values with a series expansion.
Exercise 27.3 How many terms of the series expansions of \(\sin\) and \(\cos\) are needed to evaluate them at \(x=1\) to within \(0.0001\)? Use this many terms of the series expansion to approximate the terms appearing in the first two iterations of Newton's method \[1, N(1), N(N(1))\] What is your approximate value for \(\pi\) resulting from this?
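For anyone who wants to check their answer numerically, here is one possible sketch. It assumes the familiar series \(\sin x=\sum_{k\geq 0}(-1)^k x^{2k+1}/(2k+1)!\) and \(\cos x=\sum_{k\geq 0}(-1)^k x^{2k}/(2k)!\); the cutoff `TERMS = 8` is an illustrative choice, and the exercise asks you to justify how many terms are really needed.

```python
from math import factorial

TERMS = 8  # illustrative cutoff; the exercise asks how many terms are actually required

def sin_series(x: float) -> float:
    """Truncated series sum_k (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1) for k in range(TERMS))

def cos_series(x: float) -> float:
    """Truncated series sum_k (-1)^k x^(2k) / (2k)!."""
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k) for k in range(TERMS))

x = 1.0
for _ in range(2):  # the two Newton iterations N(1) and N(N(1))
    x = x + cos_series(x) / sin_series(x)

print(f"approximate value of pi: {2 * x:.10f}")
```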