21 Elementary Functions
Highlights of this Chapter: we introduce the idea of defining functions by a Functional Equation specifying how a function should behave, instead of specifying how to compute it. Following this approach, we give rigorous definitions for exponentials, logarithms, and trigonometric functions, and investigate some of their consequences. With these definitions in hand, we are able to define the class of Elementary Functions, familiar from calculus and the sciences.
At the heart of real analysis is the study of functions. But which functions should we study? Polynomials are a natural class built from the field operations, and power series are a natural thing to look at given polynomials and the concept of a limit. But there are many, many other functions out there, and we should wonder which among them are worthy of our attention. Looking to history as a guide, we see millennia of use of trigonometric functions, and centuries of use of exponentials and logarithms. Indeed these functions are not only important to the origins of analysis but also to its modern development. In this chapter we will not focus on how to compute such functions, but rather on the more pressing question of how to even define them: if all we have available to us are the axioms of a complete ordered field how do we rigorously capture aspects of circles in the plane (trigonometry) or continuous growth (exponentials)? The key is the idea of a functional equation: something that will let us define a function by how it behaves, instead of by directly specifying a formula to compute it.
21.1 Exponentials & Logs
Definition 21.1 (The Law of Exponents) A function \(E\colon\RR\to\RR\) satisfies the law of exponents if for every \(x,y\in\RR\) \[E(x+y)=E(x)E(y)\] An exponential function is a continuous nonconstant solution to the law of exponents.
Definition 21.2 (The Law of Logarithms) A function \(L\) satisfies the law of logarithms if for every \(x,y>0\), \[L(xy)=L(x)+L(y)\] A logarithm is a continuous nonconstant solution to the law of logarithms.
21.1.1 Properties of Exponentials
Example 21.1 If \(E\) satisfies the law of exponents and evaluates to zero at any point, then \(E\) is the zero function.
Proof. Let \(E\) be an exponential function and assume there is some \(z\in\RR\) such that \(E(z)=0\). Then for any \(x\in\RR\) we may write \(x=x-z+z=(x-z)+z=y+z\) for \(y=x-z\in\RR\). Evaluating \(E(x)\) using the law of exponents, \[E(x)=E(y+z)=E(y)E(z)=E(y)\cdot 0 =0\]
Example 21.2 If \(E(x)\) is an exponential and \(s\neq 0\) is a real number, then \(x\mapsto E(sx)\) is also an exponential function.
Proof.
Exercise 21.1 If \(L(x)\) is a logarithm and \(s\neq 0\) is a real number, then \(x\mapsto L(x)/s\) is also a logarithm.
Some of the most useful consequences of the law of exponents are a collection of results that let us express exponentials as powers for certain inputs. The following collection of propositions and exercises guide us through the most general case: that if \(x\in\RR\) and \(r\) is any rational number, we can compute \(E(rx)\) in terms of \(E(x)\) as \(E(rx)=E(x)^r\).
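Though no part of the rigorous development, a quick numerical check can make this claim concrete. Here we take \(E(x)=2^x\) as a stand-in for an abstract exponential (a choice for illustration only) and test \(E(rx)=E(x)^r\) on a few sampled rationals:

```python
import math
from fractions import Fraction

# Sanity check of E(r*x) = E(x)**r, with E(x) = 2**x standing in for an
# abstract exponential function (an assumption for illustration).
E = lambda x: 2.0 ** x

x = 1.7
for r in [Fraction(3, 1), Fraction(-2, 1), Fraction(1, 4), Fraction(5, 8)]:
    lhs = E(float(r) * x)      # E(r*x)
    rhs = E(x) ** float(r)     # E(x)**r
    assert math.isclose(lhs, rhs, rel_tol=1e-12), (r, lhs, rhs)
print("E(r*x) == E(x)**r on sampled rationals")
```

Of course, a finite sample proves nothing; the point of the propositions below is to establish the identity for every rational \(r\).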
Proposition 21.1 (Exponentials on \(\NN\)) Let \(E(x)\) be any exponential function and \(n\in\NN\). Then \(E(nx)=E(x)^n\).
Proof.
Exercise 21.2 (Exponentials on \(\ZZ\)) Let \(E(x)\) be an exponential function and \(n\in\ZZ\). Then \(E(nx)=E(x)^n\).
Proposition 21.2 (Exponentials and \(1/n\)) Let \(E(x)\) be an exponential function and \(n\in\NN\). Then \(E(x/n)=E(x)^{1/n}\).
Proof.
Exercise 21.3 (Exponentials on \(\QQ\)) Let \(E(x)\) be an exponential function and \(r\in\QQ\). Then \(E(rx)=E(x)^r\).
Corollary 21.1 (Exponentials that agree at a point) If \(E,F\) are two exponential functions which agree at some nonzero \(x\in\RR\), then they are equal everywhere.
Proof.
Proposition 21.3 Let \(E\) be an exponential function. Then the range of \(E\) is \((0,\infty)\).
Proof.
Exercise 21.4 (Convexity of exponentials) Prove that exponential functions are convex (Definition 5.8): their secant lines lie above their graphs.
One can prove analogous results for logarithms, which we leave as an exercise:
Exercise 21.5 Let \(L(x)\) be a logarithm and \(r\in\QQ\) a rational number. Then \(L(x^r)=r L(x)\).
Corollary 21.2 If \(L\) is a logarithm, then for any \(x>0\) and any \(a\in\RR\), \(L(x^a)=a L(x)\).
Proof. Let \(a\in\RR\) and \(r_n\to a\) be a sequence of rationals converging to \(a\). By definition \(x^a = \lim x^{r_n}\), and by Exercise 21.5, we know \(L(x^{r_n})=r_n L(x)\). Using these and continuity, \[L(x^a)=L (\lim x^{r_n})=\lim L(x^{r_n})=\lim r_n L(x)=aL(x)\]
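The limiting argument in this proof can be watched numerically. Below, \(L=\log\) (Python's natural logarithm) stands in for an abstract logarithm, \(a=\sqrt{2}\), and the rationals \(r_n\) are decimal truncations of \(a\); all of these specific choices are our own, for illustration:

```python
import math
from fractions import Fraction

# Watch r_n * L(x) = L(x**r_n) converge to a * L(x) as rationals r_n -> a.
# L = math.log stands in for an abstract logarithm (an assumption).
x, a = 3.0, math.sqrt(2)
for n in range(1, 6):
    r = Fraction(math.floor(a * 10**n), 10**n)  # rational truncation of a
    print(float(r), math.log(x ** float(r)))    # r_n * log(x), approaching a*log(x)
print("limit:", a * math.log(x), "=", math.log(x ** a))
```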
This implies something about the range of a logarithm: given any \(x\) where \(L(x)\neq 0\) the range must contain all multiples of \(L(x)\), which is the entire real line!
Corollary 21.3 If \(L\) is a logarithm function, its range is the entire real line \(\RR\).
21.1.2 Existence of Exponentials & Logs
The properties above showed us how to compute \(E\) on rational multiples of a given input \(x\) as powers: we can use this to express the exponential function itself in terms of powers of a fixed base.
Definition 21.3 (The Base of an Exponential) If \(E\) is any exponential function, its value at \(1\) is called its base.
Using this terminology, we can rephrase Exercise 21.3 to say that if \(E\) is an exponential of base \(a\), and \(r\in\QQ\) then \[E(r)=E(1)^r=a^r\]
So, on rational inputs, an exponential function is completely determined by its base as powers of that base.
To prove the existence of an exponential function with base \(a\), our strategy is to prove that \(r\mapsto a^r\) is uniformly continuous on the rational numbers, and then apply the continuous extension theorem.
Theorem 21.1 (The Existence of Exponentials) For any base \(a>0\), the function \(E(x)=a^x\), defined as the continuous extension of \(p/q\mapsto a^{p/q}\) from \(\QQ\) to \(\RR\), is an exponential function.
Proof.
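As a computational aside, this definition suggests how one might approximate \(a^x\) in practice: evaluate rational powers \(a^{p/q}\) at rationals \(p/q\) near \(x\). A short Python sketch, using \(a=2\), \(x=\pi\), and the classical rational approximations \(22/7\) and \(355/113\) of \(\pi\) (our own choices, not from the text):

```python
import math
from fractions import Fraction

# Approximate 2**pi by rational powers 2**(p/q) with p/q -> pi, computed
# as an integer power followed by a q-th root.
a, x = 2, math.pi
for q in [7, 113]:
    r = Fraction(x).limit_denominator(q)        # 22/7, then 355/113
    approx = (a ** r.numerator) ** (1.0 / r.denominator)
    print(r, "->", approx)
print("float 2**pi:", a ** x)
```

Each rational power is defined purely by field operations and roots, so this really is the continuous-extension definition in action, just truncated to finitely many steps.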
Even better, this gives a complete classification of exponential functions:
Proposition 21.4 Let \(E\) be any exponential function. Then \(E(x)=a^x\) for some \(a>0\).
Proof.
This is a big deal! Starting from the single property of the law of exponents we completely characterized the continuous nonconstant solutions, rigorously justifying the existence of exponential functions. To do so we used several tools that we’ve developed over the course, from the density of rationals to the existence of rational powers, limits, and continuous extension theorems. However, while we now rigorously know of their existence, we are no closer to being able to compute specific values, such as \(2^\pi\). This will require further work developing a power series for exponentials, to come. For now, we focus on the theoretical implications of their existence, and use this to conclude the existence of logarithm functions.
Proposition 21.5 Let \(E\) be an exponential function. Then \(E\) is monotone, and hence invertible.
Proof. We work in the case that the base is greater than 1: the remaining case is similar, and left as an exercise. Since \(E\) is continuous, by Proposition 19.2 it's enough to prove that \(E\) is monotone on rational inputs. So let \(x<y\) be rational; we wish to show that \(E(y)>E(x)\), or equivalently that \(E(y)/E(x)>1\). Write \(y=x+z\) for \(z>0\), and observe \(E(y)=E(x+z)=E(x)E(z)\), so \(E(y)/E(x)=E(z)\), and thus it suffices to prove that \(E(z)>1\) for \(z>0\). Since \(x,y\in\QQ\) so is \(z\), so \(z=p/q\) and we only need to see \(E(p/q)>1\).
But \(E(p/q)=E(1)^{p/q}\), and as \(E(1)>1\) (by assumption) it follows that \(E(1)^p>1\) and hence that \((E(1)^p)^{1/q}>1\), so we are done. Thus \(E\) is strictly monotone increasing, so 1-1, and invertible.
Theorem 21.2 (Exponentials and Logarithms are Inverses) Let \(E(x)\) be an exponential function. Then its inverse function is a logarithm.
Proof. Let \(E\) be an exponential function, and \(L\) be its inverse. Because \(E\) is continuous, Theorem 18.4 implies that \(L\) is also continuous and nonconstant, so we just need to show \(L\) satisfies the law of logarithms. Since the range of \(E\) is \((0,\infty)\), that is the domain of \(L\), so we must check for any \(a,b>0\) that \(L(ab)=L(a)+L(b)\).
With \(a,b\) in the range of \(E\) we may find \(x,y\) with \(E(x)=a\) and \(E(y)=b\), and (by the definition of \(L\) as the inverse) \(L(a)=x\) and \(L(b)=y\). By the law of exponents for \(E\) we see \(ab = E(x)E(y)=E(x+y)\), and as \(L\) and \(E\) are inverses, \(L(E(x+y))=x+y\). Putting this all together gives what we need: \[L(ab)=L(E(x)E(y))=L(E(x+y))=x+y=L(a)+L(b)\]
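A quick numerical spot check of this conclusion, with \(E=\exp\) and \(L=\log\) as concrete stand-ins for the abstract inverse pair (an assumption for illustration):

```python
import math

# Check the law of logarithms L(ab) = L(a) + L(b) for L = log, the
# inverse of the exponential E = exp.
for a, b in [(2.0, 3.0), (0.5, 10.0), (1e-3, 1e6)]:
    assert math.isclose(math.log(a * b), math.log(a) + math.log(b), rel_tol=1e-12)
    assert math.isclose(math.exp(math.log(a)), a, rel_tol=1e-12)  # inverse pair
print("law of logarithms holds on samples")
```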
This simple argument already tells us a lot: we see that many logarithm functions must exist (one as the inverse of each exponential)! In light of this it's natural to define the base of a logarithm in terms of its corresponding exponential: if \(E\) has base \(a\) it sends \(1\) to \(a\), so its inverse should send \(a\) to \(1\).
Definition 21.4 The base of a logarithm \(L\) is the real number \(a\) such that \(L(a)=1\).
We should check this definition makes sense: first, since the range of every logarithm is \(\RR\), there is always at least one input sent to \(1\). To uniquely pick out a base requires that there is only one such input, which follows immediately if we can check that a logarithm is monotone.
Exercise 21.6 Following the style of argument in Proposition 21.5, show that any logarithm function \(L\) is strictly monotone.
We already know that there exists one logarithm of every positive base, as the inverse of the corresponding exponential. But could there be other, yet-undiscovered solutions to the law of logarithms? In fact there are not; we can prove (using continuity and density) that a logarithm function is completely determined by its base, giving a complete classification of logarithms.
Exercise 21.7 If \(L,G\) are two logarithm functions with the same base, then they are equal everywhere.
Corollary 21.4 (The Existence of Logarithms) Logarithm functions exist, and every logarithm is the inverse of an exponential.
21.2 Trigonometric Functions
To motivate a functional-equation definition of the trigonometric functions, recall that the sine and cosine addition and subtraction formulas can be derived from a diagram in the unit circle: these identities capture the essential behavior of sine and cosine, and so make a natural candidate for a defining property.
Definition 21.5 (Angle Identities) A pair of functions \((c,s)\) is trigonometric if it is a continuous nonconstant solution to the angle identities \[s(x-y)=s(x)c(y)-c(x)s(y)\] \[c(x-y)=c(x)c(y)+s(x)s(y)\]
Given this definition of trigonometric functions modeling sine and cosine, we can define the auxiliary trigonometric functions familiar from precalculus:
Definition 21.6 (Other Trigonometric Functions) Given a trigonometric pair \(s,c\) we define the tangent function \(t(x)=s(x)/c(x)\), as well as the secant \(1/c(x)\), cosecant \(1/s(x)\) and cotangent \(1/t(x)\).
Just like for exponentials and logs, we don't expect this to pick out a unique pair of functions; rather, there may be many solutions to the angle identities (corresponding to the different units we could use to measure angles).
Exercise 21.8 Prove that if \(s(x),c(x)\) are a trigonometric pair then so are \(s(kx), c(kx)\) for any constant \(k>0\).
21.2.1 Trigonometric Identities
A good warm-up to functional equations is using them to prove some identities! I'll do the first one for you.
Lemma 21.1 (Values at Zero) If \(s,c\) are trigonometric, then we can calculate their values at \(0\): \[s(0)=0\hspace{1cm}c(0)=1\]
Proof. Setting \(x=y\) in the first identity immediately gives the first claim: \[s(0)=s(x-x)=s(x)c(x)-c(x)s(x)=0\]
Evaluating the second functional equation also at \(x=y\) \[c(0)=c(x-x)=c(x)c(x)+s(x)s(x)=c(x)^2+s(x)^2\]
From this we can see that \(c(0)\neq 0\): if it were zero, we would have \(c(x)^2+s(x)^2=0\) for every \(x\); since both \(c(x)^2\) and \(s(x)^2\) are nonnegative, this forces \(c(x)=s(x)=0\) for all \(x\), making both functions constant and contradicting the definition. Now, plug \(x=0\) into what we've derived, and use that we know \(s(0)=0\):
\[c(0)=c(0)^2+s(0)^2=c(0)^2\]
Finally, since \(c(0)\) is nonzero we may divide by it, which gives \(c(0)=1\) as claimed.
An important corollary showed up during the proof here, when we observed that \(c(0)=c(x)^2+s(x)^2\): now that we know \(c(0)=1\), we see that \((c,s)\) satisfy the Pythagorean identity!
Corollary 21.5 (Pythagorean Identity) If \(s,c\) are trigonometric, then for every \(x\in\RR\) \[s(x)^2+c(x)^2=1\]
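A numerical spot check, taking the classical pair \((s,c)=(\sin,\cos)\) as a stand-in for an abstract trigonometric pair (an assumption; the corollary applies to any pair in the text's sense):

```python
import math

# Spot check of the Pythagorean identity s(x)**2 + c(x)**2 == 1
# for the classical pair (sin, cos).
for x in [0.0, 0.3, 1.0, -2.5, 100.0]:
    assert math.isclose(math.sin(x)**2 + math.cos(x)**2, 1.0, rel_tol=1e-12)
print("s(x)^2 + c(x)^2 == 1 on samples")
```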
Continuing this way, we can prove many other trigonometric identities, several of which will be useful to us later.
Exercise 21.9 (Evenness and Oddness) If \(s,c\) are trigonometric, then \(s\) is odd and \(c\) is even:
\[s(-x)=-s(x)\hspace{1cm}c(-x)=c(x)\]
Exercise 21.10 (Angle Sums) If \(s,c\) are trigonometric, then for every \(x,y\in\RR\) \[s(x+y)=c(x)s(y)+s(x)c(y)\] \[c(x+y)=c(x)c(y)-s(x)s(y)\]
Setting \(x=y\) in the above yields as corollaries the double angle identities.
Corollary 21.6 (Double Angles) If \(s,c\) satisfy the angle sum identities, then for any \(x\in\RR\), \[s(2x)=2s(x)c(x)\hspace{1cm}c(2x)=c(x)^2-s(x)^2\]
Another useful identity derivable from this work is the half angle identity
Lemma 21.2 (Half Angles) If \(s,c\) are trigonometric functions, then \[c(x)^2=\frac{1+c(2x)}{2}\]
Proof. Using the angle sum identity we see \[c(2x)=c(x)c(x)-s(x)s(x)=c(x)^2-s(x)^2\] Then applying the Pythagorean identity \[\begin{align*} c(2x)&=c(x)^2-s(x)^2\\ &=c(x)^2-(1-c(x)^2)\\ &= 2c(x)^2-1 \end{align*}\]
Re-arranging yields the claimed identity.
Exercise 21.11 If \(s,c\) are trigonometric functions, prove that \[s(x)^2=\frac{1-c(2x)}{2}\]
These two identities are often rewritten by replacing \(x\) with \(x/2\) and taking a square root (valid when the left-hand sides are nonnegative):
\[c\left(\frac{x}{2}\right)=\sqrt{\frac{1+c(x)}{2}} \hspace{1cm} s\left(\frac{x}{2}\right)=\sqrt{\frac{1-c(x)}{2}} \]
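A numerical check of these square-root forms, again with \((\sin,\cos)\) standing in for an abstract pair, on inputs where \(x/2\) lies in \((0,\pi/2)\) so both left-hand sides are nonnegative:

```python
import math

# Check the square-root half-angle identities on inputs where
# cos(x/2) and sin(x/2) are both nonnegative (their validity range).
for x in [0.1, 1.0, 2.0, 3.0]:   # x/2 in (0, pi/2), both sides nonnegative
    assert math.isclose(math.cos(x/2), math.sqrt((1 + math.cos(x)) / 2), rel_tol=1e-12)
    assert math.isclose(math.sin(x/2), math.sqrt((1 - math.cos(x)) / 2), rel_tol=1e-12)
print("half-angle identities verified on samples")
```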
21.2.2 Periodicity
The most distinctive feature of trigonometric functions is their periodicity, so we should hope to derive it from the angle identities. To do so, we first need to show the functions even have a period. Why must solutions to the angle identities be periodic?
Lemma 21.3 Let \(c,s\) be a trigonometric pair. Then \(c\) has a root.
Proof. Since the cosine is not constant and \(c(0)=1\), there must be some \(t_0\) for which \(c(t_0)\neq 1\). Since \(c^2+s^2=1\) we see that \(-1\leq c(x)\leq 1\), so in fact \(c(t_0)<1\).
If \(c(t_0)\) is negative, we are done by the intermediate value theorem - there is a zero between 0 and \(t_0\). So, we may assume \(0<c(t_0)<1\), and define the sequence
\[c_n=c(2^n t_0)\]
To show \(c\) is somewhere negative (and thus has a root, by the intermediate value theorem argument) it suffices to see that some \(c_n\) is negative, and for this it suffices to show that \(L=\inf \{c_n\}\) is negative (note the infimum exists as the set \(\{c_n\}\) is bounded below by \(-1\)).
First, notice that the half angle identity implies \(2c_0^2-1=c_1\). For \(x\in(0,1)\), the quantity \(2x^2-x-1=(2x+1)(x-1)\) is negative: plugging in \(c_0\) yields \(2c_0^2-c_0-1<0\), that is, \(c_1=2c_0^2-1<c_0\). Thus \(c_0\) is not the smallest term in our sequence, and we can truncate it without changing the infimum: \[\inf_{n\geq 0}\{c_n\}=\inf_{n\geq 0}\{c_{n+1}\}\]
Using again the half angle identity, \(2c_n^2-1=c_{n+1}\), so
\[L=\inf\{c_n\}=\inf\{c_{n+1}\}=\inf\{2c_n^2-1\}=2\inf\{c_n^2\}-1\]
If our sequence were never negative, then \(\inf\{c_n\}=L\geq 0\) and \(\inf\{c_n^2\}=L^2\). Combining with the above, this implies \(L=2L^2-1\), whose only nonnegative solution is \(L=1\) (which we know is not the infimum, as \(c_0<1\)). Thus this is impossible, so it must be that \(L<0\), and our sequence eventually reaches a negative term.
Applying the intermediate value theorem to \(c\) on the interval between \(t_0\) (where \(c(t_0)>0\)) and \(2^n t_0\) (where \(c(2^n t_0)<0\)) furnishes a zero.
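The recurrence \(c_{n+1}=2c_n^2-1\) at the heart of this proof is easy to run numerically. The starting values below are our own samples; in each case the iteration reaches a negative term, just as the proof guarantees:

```python
# Iterate the double-angle map c_{n+1} = 2*c_n**2 - 1 from c0 in (0,1)
# until a negative term appears, and report how many steps it took.
def first_negative_index(c0):
    c, n = c0, 0
    while c >= 0:
        c = 2 * c * c - 1
        n += 1
    return n

for c0 in [0.9, 0.99, 0.5, 0.1]:
    print(c0, "-> negative after", first_negative_index(c0), "steps")
```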
This shows that cosine has a zero somewhere. Because it will be convenient below, we carry this reasoning a little farther and show that cosine actually has a first positive zero.
Lemma 21.4 There is a \(z>0\) such that \(c(z)=0\), but the cosine is positive on the interval \([0,z)\): that is, \(z\) is the first zero of the cosine.
Proof. Let \(x\) be a zero of the cosine function. Since the cosine is even we know \(-x\) is also a zero; and since \(c(0)=1\) we know \(x\neq 0\), so at least one of \(\pm x\) is positive. Thus, the cosine has at least one positive real root.
Let \(R=\{x>0\mid c(x)=0\}\) be the set of all positive roots of the cosine function. We prove this set has a minimum element, which is the first zero. Since \(R\) is nonempty (our first observation) and bounded below by zero (by definition), completeness implies \(r=\inf R\) exists. For every \(n\in\NN\), since \(r+1/n\) is not a lower bound of \(R\) we may choose some \(x_n\in R\) with \(r\leq x_n\leq r+1/n\). By the squeeze theorem \(x_n\to r\), and by continuity of the cosine this implies \[\lim c(x_n)=c(\lim x_n)=c(r)\]
However each \(x_n\) is a zero of cosine by definition! Thus this is the constant sequence \(0,0,\ldots,\) which converges to \(0\). All together this means \(c(r)=0\), and so \(r\in R\). But if the infimum is an element of the set then that set has a minimum element, so \(r\) is the smallest positive zero of the cosine!
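For the classical cosine this first positive zero is of course \(\pi/2\), and the approximation scheme implicit in the proof (squeeze the zero between points where cosine is positive and negative) becomes bisection in code:

```python
import math

# Locate the first positive zero of cosine by bisection: cos(0) = 1 > 0
# and cos(2) < 0, and cos has exactly one zero in [0, 2].
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if math.cos(mid) > 0:
        lo = mid     # zero lies to the right
    else:
        hi = mid     # zero lies to the left
print("first zero of cos:", hi, "vs pi/2 =", math.pi / 2)
```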
It turns out that simply knowing the existence of a single zero of the cosine function is enough to resolve everything.
Exercise 21.12 (Periodicity of \(s,c\)) The functions \(s(x)\) and \(c(x)\) are periodic, with the same period \(T>0\).
Hint This period is four times the first zero of cosine: letting \(z\) be that zero and \(T=4z\), show that \(c(T)=1\) and \(s(T)=0\), and then that \(c(x+T)=c(x)\) and \(s(x+T)=s(x)\) for all \(x\in\RR\).
It is customary and convenient to work with the half period \(P=T/2\) of a trigonometric pair instead of the period itself. This has a nice intrinsic characterization in terms of the functions:
Exercise 21.13 Let \(s,c\) be a trigonometric pair with half period \(P\). Then \(P\) is the first positive zero of \(s(x)\).
We can also use the work done to prove periodicity to show that a trigonometric pair consists of two copies of the same function, shifted with respect to one another:
Exercise 21.14 Let \(s,c\) be a trigonometric pair of half period \(P\). Then \[s(x+P)=c(x)\]
21.2.3 Existence of Trigonometric Functions
So far we have derived many properties that trigonometric functions must have, if they exist, but we still need to confront existence itself. As for exponentials, we study their behavior on a dense set and use a continuous extension theorem. We can use our earlier work on trigonometric identities to understand their values on the dense set of dyadic rationals:
Exercise 21.15 Prove that if \(s,c\) are a trigonometric pair, the values \(c(x),s(x)\) at some fixed \(x\in\RR\) fully determine the values of \(c,s\) on dyadic rational multiples of \(x\), or points of the form \(mx/2^n\) for \(m\in\ZZ\), \(n\in\NN\).
This has a nice corollary: if there is a trigonometric pair of period \(P\), then it is unique!
Proposition 21.6 (Uniqueness Given a Single Value) Continuous solutions to the angle sum identities are determined by their values at any single nonzero input: in particular, there is at most one trigonometric pair for each possible period \(P>0\).
Proof. Let \(\mathbb{D}\) be the set of dyadic rationals, and choose some nonzero \(x\in\RR\). Then \(x\mathbb{D}=\{xm/2^n\mid m\in\ZZ,n\in\NN\}\) is a dense subset of \(\RR\), and the values of \(c,s\) on \(x\mathbb{D}\) are fully determined by the values \(c(x),s(x)\) by Exercise 21.15. Hence continuity fully determines the values of \(c,s\) at all other inputs.
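This determination of dyadic values can be carried out concretely. Starting from the single value \(c(1)=-1\) and repeatedly applying the half-angle identity (taking the positive square root, which is valid here since the pair of half period 1 is nonnegative on \([0,1/2]\)), we recover the values of \(c\) at \(1/2, 1/4, \ldots\), and they match \(\cos(\pi x)\), the classical pair of half period 1:

```python
import math

# Reconstruct c on dyadic rationals 1/2**n from c(1) = -1 alone, via the
# half-angle identity c(x/2) = sqrt((1 + c(x))/2) with the positive root.
c = {1.0: -1.0}
x = 1.0
for _ in range(10):                       # fill in c(1/2), c(1/4), ..., c(1/1024)
    c[x / 2] = math.sqrt((1 + c[x]) / 2)
    x = x / 2
for d, value in sorted(c.items()):
    assert math.isclose(value, math.cos(math.pi * d), abs_tol=1e-12)
print("dyadic values match cos(pi*x)")
```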
By the ability to rescale trigonometric functions (Exercise 21.8), we can prove the existence of trig functions with arbitrary period so long as we can show at least one trigonometric pair exists. So here for simplicity we will focus on trigonometric functions with half period 1, and prove a useful lemma:
Lemma 21.5 Let \(s,c\) satisfy the angle sum identities and have \(s(1)=0\), \(c(1)=-1\). Then for any nonzero dyadic rational \(d=m/2^n\in(0,1]\), we have \[\frac{s(d)}{d}< 4\]
Proof.
Proposition 21.7 Let \(s,c\) satisfy the angle sum identities with \(s(1)=0\), \(c(1)=-1\). Then \(s\) is uniformly continuous on the dyadic rationals.
Proof. Choose \(\epsilon>0\), and set \(\delta=\min(\epsilon/4,2)\) (the second bound guarantees \(|v|\leq 1\) below). Given any \(x,y\) with \(|x-y|<\delta\), we aim to show that \(|s(x)-s(y)|<\epsilon\). It's helpful to rewrite with a change of variables \(u=\tfrac{x+y}{2}\) and \(v=\tfrac{x-y}{2}\), so \(x=u+v\) and \(y=u-v\). Then applying the angle identities we see
\[\begin{align*} s(x)-s(y)&=s(u+v)-s(u-v)\\ &=s(u)c(v)+s(v)c(u)-[s(u)c(v)-s(v)c(u)]\\ &= 2 s(v)c(u) \end{align*}\]
Since \(|c(u)|\leq 1\), this implies
\[|s(x)-s(y)|= |2 s(v)c(u)|\leq 2|s(v)|\]
By the above lemma we know a bound for \(s\): for any dyadic rational \(v\in [-1,1]\) we have \(|s(v)|\leq 4|v|\) (the lemma handles \(v>0\), the case \(v<0\) follows by oddness of \(s\), and \(v=0\) is trivial), and thus
\[|s(x)-s(y)|\leq 2|s(v)|\leq 8|v|=8\left|\frac{x-y}{2}\right|=4|x-y|\]
But \(|x-y|<\delta\) implies \(4|x-y|<4\delta \leq \epsilon\), as required.
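The proof actually establishes the Lipschitz-type bound \(|s(x)-s(y)|\leq 4|x-y|\). We can probe it numerically for the half-period-1 pair, taking \(s(x)=\sin(\pi x)\) as a stand-in (its true Lipschitz constant is \(\pi<4\), so the bound has room to spare):

```python
import math
import random

# Random spot check of |s(x) - s(y)| <= 4|x - y| for s(x) = sin(pi*x).
random.seed(0)
s = lambda x: math.sin(math.pi * x)
for _ in range(10_000):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    assert abs(s(x) - s(y)) <= 4 * abs(x - y) + 1e-12
print("|s(x)-s(y)| <= 4|x-y| on random samples")
```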
Theorem 21.3 There exists a trigonometric pair of half period 1.
Proof. Define \(s,c\) on the dyadic rationals by setting \(s(1)=0\), \(c(1)=-1\) and imposing the angle sum identities. By the previous proposition, the resulting values of \(s\) define a uniformly continuous function, and hence \(c(x)=s(x+1/2)\) is also uniformly continuous on the dyadic rationals. Since this is a dense subset of \(\RR\), the continuous extension theorem applies and there exists a continuous extension \(\tilde{s},\tilde{c}\) of these functions to the entire real line.
It remains only to check that this continuous extension satisfies the angle sum identities. Let \(x,y\in\RR\) be arbitrary, and by density we may choose sequences \(x_n\to x\), \(y_n\to y\) of dyadic rationals. Then \(x_n-y_n\to x-y\) and we may compute \(\tilde{c}(x-y)\) using continuity:
\[\tilde{c}(x-y)=\tilde{c}(\lim (x_n-y_n))=\lim\tilde{c}(x_n-y_n)=\lim c(x_n-y_n)\]
Now we know that \(c,s\) themselves satisfy the angle sum identities on the dyadic rationals, so \[c(x_n-y_n)=c(x_n)c(y_n)+s(x_n)s(y_n)\] and using the limit laws and continuity we see
\[\begin{align*} \lim c(x_n-y_n)&=\lim \left[c(x_n)c(y_n)+s(x_n)s(y_n)\right]\\ &=\lim\left[c(x_n)c(y_n)\right]+\lim\left[s(x_n)s(y_n)\right]\\ &=[\lim c(x_n)][\lim c(y_n)]+[\lim s(x_n)][\lim s(y_n)] \end{align*}\]
Since \(x_n\to x\) and \(y_n\to y\) these final limits equal the value of the continuous extensions \(\tilde{s},\tilde{c}\) there:
\[[\lim c(x_n)][\lim c(y_n)]+[\lim s(x_n)][\lim s(y_n)]=\tilde{c}(x)\tilde{c}(y)+\tilde{s}(x)\tilde{s}(y)\]
Putting this all together, we see that \(\tilde{s},\tilde{c}\) satisfy the angle sum identity
\[\tilde{c}(x-y)=\tilde{c}(x)\tilde{c}(y)+\tilde{s}(x)\tilde{s}(y)\]
as required. An analogous argument shows the same for \(\tilde{s}(x-y)\); thus these form a trigonometric pair.
Theorem 21.4 For every \(P>0\), there exists a unique trigonometric pair \(s,c\) with half period \(P\).
Proof. Let \(S(x),C(x)\) be the trigonometric pair of half period \(1\). Then \(s(x)=S(x/P)\) and \(c(x)=C(x/P)\) are a trigonometric pair (since they are rescalings of a known pair: Exercise 21.8) and have \(s(P)=S(P/P)=S(1)=0\), \(c(P)=C(P/P)=C(1)=-1\), implying \(P\) is the half period as claimed. Uniqueness follows from Proposition 21.6.
21.3 \(\bigstar\) The Class of Elementary Functions
21.4 Problems
21.4.1 Infinite Products and Sums
As an application of our study of exponentials and logs, we see that the theory of infinite products reduces to that of infinite sums. Indeed, let \(a_k\) be a sequence of positive terms defining the infinite product \(\prod_k a_k\) as the limit of the partial products \(p_n = \prod_{k\leq n} a_k\). Applying a logarithm converts the partial products into partial sums, \(L(p_n)=\sum_{k\leq n}L(a_k)\), so by continuity and invertibility the product converges to a positive limit exactly when the corresponding series of logarithms converges.
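A small numerical illustration of this reduction, using the sample product \(\prod_k (1+2^{-k})\) (our own choice) and the natural logarithm:

```python
import math

# Partial products of a_k = 1 + 1/2**k equal exp of the partial sums of
# log(a_k): the product converges iff the series of logarithms does.
terms = [1 + 2.0**-k for k in range(1, 40)]
product = 1.0
for a in terms:
    product *= a
log_sum = sum(math.log(a) for a in terms)
assert math.isclose(product, math.exp(log_sum), rel_tol=1e-12)
print("prod a_k =", product, "= exp(sum log a_k)")
```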
21.4.2 Trigonometric
Exercise 21.16 (Summing \(\sin kx\)) Prove the following trigonometric identity following the steps below: \[\sum_{k=1}^n \sin(kx)=\frac{\sin\left(\frac{n}{2}x\right)\sin\left(\frac{n+1}{2}x\right)}{\sin\frac{x}{2}}\]
1. Use angle sum and difference identities to prove \[2\sin(kx)\sin\left(\tfrac{x}{2}\right)=\cos\left((k-\tfrac{1}{2})x\right)-\cos\left((k+\tfrac{1}{2})x\right)\]
2. For \(k\in\{1,\ldots,n\}\), sum these up as a telescoping series.
3. Again use sum and difference identities to show the resulting right hand side \(\cos\left(\tfrac{x}{2}\right)-\cos\left((n+\frac{1}{2})x\right)\) is equal to \(2\sin\left(\frac{n}{2}x\right)\sin\left(\frac{n+1}{2}x\right)\).
4. Divide by \(2\sin(x/2)\) to get \(\sum_{1\leq k\leq n}\sin(kx)\) alone.
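Once proved, the identity is easy to sanity-check numerically (sample values of \(n\) and \(x\) below are our own, chosen with \(\sin(x/2)\neq 0\)):

```python
import math

# Numerical check of the closed form for sum_{k=1}^n sin(kx).
def sine_sum(n, x):
    return sum(math.sin(k * x) for k in range(1, n + 1))

def closed_form(n, x):
    return math.sin(n * x / 2) * math.sin((n + 1) * x / 2) / math.sin(x / 2)

for n in [1, 5, 20]:
    for x in [0.3, 1.0, 2.7]:
        assert math.isclose(sine_sum(n, x), closed_form(n, x), abs_tol=1e-12)
print("identity verified on samples")
```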