$$ \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\FF}{\mathbb{F}} \renewcommand{\epsilon}{\varepsilon} % ALTERNATE VERSIONS % \newcommand{\uppersum}[1]{{\textstyle\sum^+_{#1}}} % \newcommand{\lowersum}[1]{{\textstyle\sum^-_{#1}}} % \newcommand{\upperint}[1]{{\textstyle\smallint^+_{#1}}} % \newcommand{\lowerint}[1]{{\textstyle\smallint^-_{#1}}} % \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\uppersum}[1]{U_{#1}} \newcommand{\lowersum}[1]{L_{#1}} \newcommand{\upperint}[1]{U_{#1}} \newcommand{\lowerint}[1]{L_{#1}} \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} % extra auxiliary and additional topic/proof \newcommand{\extopic}{\bigstar} \newcommand{\auxtopic}{\blacklozenge} \newcommand{\additional}{\oplus} \newcommand{\partitions}[1]{\mathcal{P}_{#1}} \newcommand{\sampleset}[1]{\mathcal{S}_{#1}} \newcommand{\erf}{\operatorname{erf}} $$

20  Elementary Functions

Highlights of this Chapter: we introduce the idea of defining functions by a Functional Equation specifying how a function should behave instead of specifying how to compute it. Following this approach, we give rigorous definitions for exponentials, logarithms, and trigonometric functions, and investigate some of their consequences. With these definitions in hand, we are able to define the field of Elementary Functions, familiar from calculus and the sciences.

At the heart of real analysis is the study of functions. But which functions should we study? Polynomials are a natural class built from the field operations, and power series are a natural thing to look at given polynomials and the concept of a limit. But there are many, many other functions out there, and we should wonder which among them are worthy of our attention. Looking to history as a guide, we see millennia of use of trigonometric functions, and centuries of use of exponentials and logarithms. Indeed these functions are not only important to the origins of analysis but also to its modern development. In this chapter we will not focus on how to compute such functions, but rather on the more pressing question of how to even define them: if all we have available to us are the axioms of a complete ordered field how do we rigorously capture aspects of circles in the plane (trigonometry) or continuous growth (exponentials)? The key is the idea of a functional equation: something that will let us define a function by how it behaves, instead of by directly specifying a formula to compute it.

20.1 Warm Up: What is Linearity?

We know how to express linear functions already using the field axioms, as maps \(f(x)=kx\) for some real number \(k\). To speak of linear functions functionally however, we should not give a definition telling us how to compute their values (take the input, and multiply by a fixed constant \(k\)) but rather by what they’re for: by the defining property of linearity.

The most important property of a linear function is that it distributes over addition (think of how we use linear maps, say, in Linear Algebra). So, in the 1800s, Cauchy singled out this property as a functional equation characterizing linearity.

Definition 20.1 (Cauchy’s Functional Equation for Linearity) A function \(f\colon\RR\to\RR\) satisfies Cauchy’s functional equation if for all \(x,y\in\RR\), \[f(x+y)=f(x)+f(y)\]

Note that, by induction, a solution to Cauchy’s functional equation ‘distributes’ over any finite sum: \(f(x_1+x_2+\cdots+x_n)=f(x_1)+f(x_2)+\cdots + f(x_n)\). Cauchy’s idea works: we can completely characterize the concept of a Linear Function from \(\RR\) to \(\RR\) via this functional equation together with continuity.

Theorem 20.1 (Characterizing Linear Functions) If \(f\) is a continuous solution to Cauchy’s functional equation, then \(f(x)=kx\) for some \(k\in\RR\).

Exercise 20.1 Prove Theorem 20.1, following the outline below.

Let \(f\colon\RR\to\RR\) be a continuous function where \(f(x+y)=f(x)+f(y)\).

  • Prove that \(f(n)=nf(1)\) for all \(n\in\NN\).
  • Extend this to negative integers.
  • Show that \(f(1/n)=\frac{1}{n}f(1)\) for \(n\in\NN\). *Hint: use that \(\tfrac{1}{n}+\tfrac{1}{n}+\cdots+\tfrac{1}{n}=1\), with \(n\) terms.*
  • From the above, deduce that \(f(r)=rf(1)\) for every rational \(r=p/q\) with \(p\in\ZZ\), \(q\in\NN\).
  • Now use continuity! If \(k=f(1)\), then \(f(r)=kr\) on \(\QQ\); use the density of the rationals to conclude \(f(x)=kx\) for all \(x\in\RR\).
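The outline above can be sanity-checked numerically. Below is a short sketch where a concrete additive function stands in for an abstract solution; the slope \(3.5\) is an arbitrary illustrative choice, not anything from the text.

```python
from fractions import Fraction

def f(x):
    # A concrete continuous solution of Cauchy's equation (illustrative choice).
    return 3.5 * x

k = f(1)  # additivity forces the slope to be f(1)

# f distributes over finite sums, as in the induction step.
xs = [0.25, -1.5, 2.0, 3.75]
assert abs(f(sum(xs)) - sum(f(x) for x in xs)) < 1e-9

# On rational inputs r = p/q, additivity alone forces f(r) = r * f(1).
for r in [Fraction(1, 3), Fraction(-7, 4), Fraction(22, 7)]:
    assert abs(f(float(r)) - float(r) * k) < 1e-9
```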

20.2 Exponentials

Exponential functions occur all across mathematics and the sciences, representing any kind of growth that compounds multiplicatively as time progresses linearly. That is, the core feature of exponentials underlying their ubiquity is the law of exponents \(a^{m+n}=a^ma^n\) turning the addition of \(m\) and \(n\) into the multiplication of \(a^m\) and \(a^n\). Following Cauchy’s lead, we will single this out and use it to define a class of functions via functional equation.

Definition 20.2 (The Law of Exponents) A function \(E\colon\RR\to\RR\) satisfies the law of exponents if for every \(x,y\in\RR\) \[E(x+y)=E(x)E(y)\] An exponential function is a continuous nonconstant solution to the law of exponents.

This just rigorously spells out what we want exponential functions to be. We still have to prove they exist! But before doing that, we pause to gain some comfort with the functional equation definition, and derive a few basic properties that exponentials must have.

Proposition 20.1 If \(E\) satisfies the law of exponents and evaluates to zero at any point, then \(E\) is the zero function.

Proof. Let \(E\) satisfy the law of exponents and assume there is some \(z\in\RR\) such that \(E(z)=0\). Then for any \(x\in\RR\) we may write \(x=x-z+z=(x-z)+z=y+z\) for \(y=x-z\in\RR\). Evaluating \(E(x)\) using the law of exponents, \[E(x)=E(y+z)=E(y)E(z)=E(y)\cdot 0 =0\] Since \(x\) was arbitrary, \(E\) is the zero function.

Proposition 20.2 If \(E\) is any exponential function, then \(E(0)=1\), and \(E(-x)=1/E(x)\) for all \(x\).

Proof. The number \(0\) has the property that \(0+0=0\). Plugging this into the exponential property, we find \[E(0)=E(0+0)=E(0)E(0)\] By the previous proposition, \(E(0)\) is nonzero (an exponential is nonconstant, so it cannot vanish anywhere), so we can divide by it, leaving \(1=E(0)\). For the second part, we begin with the identity \(x+(-x)=0\). Exponentiating gives \[1=E(0)=E(x+(-x))=E(x)E(-x)\] We can then divide by \(E(x)\), giving the result \(E(-x)=\frac{1}{E(x)}\).
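Both facts are easy to check numerically against the familiar candidate \(E(x)=2^x\) (the base \(2\) here is an arbitrary illustrative choice; we have not yet proven such functions exist):

```python
def E(x):
    return 2.0 ** x  # candidate exponential; base 2 is an arbitrary choice

# The law of exponents on a few sample inputs
for x, y in [(0.5, 1.25), (-2.0, 3.0), (1.0, 1.0)]:
    assert abs(E(x + y) - E(x) * E(y)) < 1e-9

# E(0) = 1 and E(-x) = 1/E(x), as derived above
assert E(0.0) == 1.0
for x in [0.3, 1.7, -2.5]:
    assert abs(E(-x) - 1.0 / E(x)) < 1e-12
```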

Exercise 20.2 If \(E(x)\) is an exponential and \(s\neq 0\) is a real number, then \(x\mapsto E(sx)\) is also an exponential function.

20.2.1 Existence

Here we show that exponential functions exist, and fully characterize them. We’ve already done plenty of work understanding rational and irrational powers, so we can make perfect sense of the expression \(a^x\) for arbitrary \(a>1\) and real \(x\). Furthermore, in algebra and calculus classes we have certainly treated such expressions as exponentials: we’ve used the law of exponents on their powers without worry! But now, in our rigorous mindset, there is much more to do: we need to confirm that our rather complicated definition of \(x\mapsto a^x\) really is (1) continuous and nonconstant and (2) satisfies the law of exponents.

To do so we make use of all our previous work with exponents: namely the following facts (the first two are from our initial investigation into numbers and operations, the second two from when we studied Monotone Convergence):

  • If \(r=p/q\) then \(a^r\) is defined as \(a^{p/q}=\sqrt[q]{a^p}\).
  • If \(r,s\) are rational numbers \(a^{r+s}=a^ra^s\), so powers satisfy the law of exponents on rational inputs.
  • If \(r_n\) is a monotone increasing sequence of rational numbers \(a^{r_n}\) converges.
  • If \(r_n\) is a sequence of positive rationals with \(r_n\to 0\) then \(a^{r_n}\to 1\).

From these we can prove an important lemma that helps us make a rigorous definition of the function \(a^x\):

Lemma 20.1 (Irrational Powers) If \(x\in\RR\) and \(r_n\) is any monotone increasing sequence of rational numbers with \(r_n\nearrow x\), then \(a^{r_n}\) converges, and the limit is the same for every such sequence.

Proof. Let \(s_n\) and \(r_n\) be two monotone increasing sequences of rationals converging to \(x\). We know (by monotone convergence) that \(a^{r_n}\) and \(a^{s_n}\) both converge, so let’s name their limits \(\lim a^{r_n}=R\) and \(\lim a^{s_n}=S\). We wish to prove \(R=S\).

Defining \(z_n=r_n-s_n\), note that \(z_n\in\QQ\) and \(r_n= s_n+z_n\); using the law of exponents for rational numbers, \[a^{r_n}=a^{s_n+z_n}=a^{s_n}a^{z_n}\]

Applying the limit law for differences we see \(\lim z_n = \lim r_n-\lim s_n= x-x=0\), and so \(a^{z_n}\to 1\) (our earlier fact covers positive terms; for negative terms write \(a^{z_n}=1/a^{-z_n}\) and use the limit law for quotients). Thus all three of the sequences above converge, and we can use the limit law for products:

\[R=\lim a^{r_n}=\lim\left(a^{s_n}a^{z_n}\right)=\left(\lim a^{s_n}\right)\left(\lim a^{z_n}\right)=\left(\lim a^{s_n}\right)(1)=S\]

Because the limiting value of \(a^x\) does not depend on which sequence we take, we can define irrational powers by the recipe “take any monotone increasing sequence of rationals converging to \(x\) and compute the limit,” without worrying that different people will get different answers.

Definition 20.3 (Raising to the Power of \(x\)) Given a positive \(a\neq 1\), we define the function \(x\mapsto a^x\) via

\[E(x)=\begin{cases} a^x & x\in\QQ\\ \lim a^{r_n} & x\not\in \QQ,\textrm{ where } r_n\in\QQ,\ r_n \nearrow x \end{cases}\]
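This definition can be simulated numerically: for \(x>0\) the truncated decimal expansions \(r_n=\lfloor 10^n x\rfloor/10^n\) form a monotone increasing sequence of rationals converging to \(x\). (Floating-point `a ** r` stands in for the \(q\)-th-root definition of rational powers here, so this is only an illustration of the definition, not a construction.)

```python
import math
from fractions import Fraction

def rational_truncations(x, n):
    """Monotone increasing rationals r_1 <= r_2 <= ... converging to x > 0."""
    return [Fraction(math.floor(x * 10**k), 10**k) for k in range(1, n + 1)]

a, x = 2.0, math.sqrt(2)  # illustrative target: 2^sqrt(2)
powers = [a ** float(r) for r in rational_truncations(x, 12)]

# a^{r_n} is monotone increasing (since a > 1) and settles down to a^x.
assert all(p <= q for p, q in zip(powers, powers[1:]))
assert abs(powers[-1] - a ** x) < 1e-9
```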

Theorem 20.2 (Existence: Powers Are Exponentials) Exponential functions exist.
Precisely, for any positive \(a\neq 1\), the function \(E(x)=a^x\) is a continuous nonconstant function which satisfies the law of exponents \(a^{x+y}=a^xa^y\) for all \(x,y\in\RR\).

We prove these two claims separately. To simplify things, we work with the case \(a>1\): for \(a<1\) one can either perform analogous arguments, or write \(a=1/b\) for \(b>1\) and work with \(E(x)=1/b^x\), where we already fully understand \(b^x\).

Proposition 20.3 The function \(a^x\) satisfies the law of exponents.

Proof (Laws of Exponents). Let \(x,y\in\RR\); we wish to show that \(a^{x+y}=a^xa^y\). Note that if both \(x\) and \(y\) are rational we are done, as we already know that the law of exponents holds for rational powers. So, the interesting case is when at least one is irrational, where the function definition involves limits.

Let’s continue with the case where both \(x\) and \(y\) are irrational (we’ll see the case where only one is irrational also follows from the same logic). To define \(a^x\) and \(a^y\) we need to choose monotone sequences of rational numbers \(x_n\to x\) and \(y_n\to y\): then we have \(a^x=\lim a^{x_n}\) and \(a^y=\lim a^{y_n}\). Since both of these sequences converge, we can use the limit law for products to conclude

\[\lim a^{x_n}a^{y_n}=\left(\lim a^{x_n}\right)\left(\lim a^{y_n}\right)=a^xa^y\]

But since \(x_n\) and \(y_n\) are rational numbers, we know the law of exponents holds for them: \(a^{x_n}a^{y_n}=a^{x_n+y_n}\) for each \(n\). By the limit law for sums we know that \(x_n+y_n\to x+y\); furthermore, it’s a sequence of rational numbers (since \(x_n\) and \(y_n\) are) and it’s monotone increasing (since \(x_n\) and \(y_n\) are). This means (by definition!) \(\lim a^{x_n+y_n}=a^{x+y}\).

Stringing these equalities together yields the law of exponents for \(x\) and \(y\):

\[a^{x+y}=\lim a^{x_n+y_n}=\lim \left(a^{x_n}a^{y_n}\right)=\left(\lim a^{x_n}\right)\left(\lim a^{y_n}\right)=a^xa^y\]

We can use the same argument when only one of the numbers is irrational: if \(x\in\QQ\) but \(y\not\in\QQ\), we still need to choose a monotone sequence \(y_n\to y\) of rationals, but there’s an obvious choice of sequence for \(x\): the constant sequence \(x,x,x,\ldots\), which is rational, monotone (because it’s constant), and converges to \(x\). Running the same argument as above with these two sequences yields \(a^{x+y}=a^xa^y\) as well.

Exercise 20.3 Prove that the exponential function \(a^x\) with \(a>1\) is monotone increasing: that is, if \(x < y\), then \(a^x\leq a^y\).

Hint: we know it’s monotone on rational inputs, so the interesting cases are again when at least one input is irrational (and the argument for both irrational can be generalized to include the other case). Write down monotone increasing sequences, truncate them until you can ensure \(x_n<y_n\) for all \(n\), and then apply the limit laws.

We will additionally need (for a later argument) that the exponential is strictly increasing: that is, if \(x<y\) then \(a^x<a^y\) (that is, the equals case is impossible), so we’ll prove that now as well.

Lemma 20.2 If \(x<y\) then \(a^x<a^y\), when \(a>1\).

Proof. This is equivalent to showing \(1<a^y/a^x\), which, since we know the law of exponents holds, means showing \(1<a^{y-x}\). That is, our problem is equivalent to proving that for any \(z>0\), the exponential \(a^z\) is strictly greater than 1.

First note this is clearly true for rational \(z=p/q\), as \(a>1\) implies \(a^p>1^p=1\), which implies \(\sqrt[q]{a^p}>\sqrt[q]{1}=1\). For irrational \(z\) we proceed by choosing a monotone increasing rational sequence \(z_n\nearrow z\). Since \(z>0\), picking \(\epsilon=z\) in the definition of convergence lets us truncate the sequence and assume its terms are all positive. So (possibly re-labeling the indices) we may assume without loss of generality that \(z_n>0\) for all \(n\). Since \(z_n\) is monotone increasing, \(z_n\geq z_1\) for all \(n\), and since the exponential is monotone on rational inputs, \[z_1\leq z_n\implies a^{z_1}\leq a^{z_n}\]

Thus, by the inequality for limits, we see \(a^{z_1}\leq \lim a^{z_n}=a^z\). Since \(z_1\) is rational we know \(a^{z_1}\) is strictly greater than \(1\), so \(a^z\) is as well.

We are now ready to prove continuity. The sequence criterion looks suspiciously similar to our definition of \(a^x\), so this sounds like it might be easy. But as with many things in analysis, there are details to be considered: the definition of our function \(a^x\) involves monotone rational sequences, whereas the definition of continuity requires us to consider arbitrary sequences. So, we need to bridge this gap, using techniques similar to those in the proof above.

Proof (Continuity). Let \(x\in\RR\) and let \(x_n\to x\) be an arbitrary sequence; we wish to show that \(a^{x_n}\) converges to \(a^x\). We proceed by contradiction, assuming it does not. Our goal is to throw away terms of this sequence until we get something nicer (less arbitrary) to work with.

Negating the definition of convergence, there must be some bad \(\epsilon\) where for every \(N\) there is an \(n>N\) where \(a^{x_n}\) differs from \(a^x\) by more than \(\epsilon\). Taking \(N=1,N=2, N=3,\ldots\) we can build a subsequence \(x_{n_{k}}\) where every single term \(a^{x_{n_k}}\) is more than \(\epsilon\) away from \(a^x\). But we can go even further: recalling that every sequence has a monotone subsequence, we can throw away more terms until we have a subsequence \(x_{n_{k_{\ell}}}\) which is monotone and has \(a^{x_{n_{k_{\ell}}}}\) not converging to \(a^x\).

Phew! That’s a lot of subsequences. It’s annoying to carry them all around in print, so we will just rename things: let’s call this sequence \(y_\ell\). Since the original sequence \(x_n\) converged to \(x\) and this is a subsequence, we know \(y_\ell\to x\) as well. And it’s much closer to something we might know about (it’s monotone, and the definition of \(a^x\) involves monotone sequences). The only thing left to confront is rationality: we have no idea if the terms \(y_\ell\) are rational (and they need not be). So we are going to do a very cool trick to replace this with a different sequence. (For concreteness, suppose \(y_\ell\) is monotone increasing; the decreasing case is analogous.)

By the density of the rationals we can find a rational \(r_\ell\) between each pair \(y_\ell, y_{\ell+1}\). This defines a sequence of rational numbers with \(y_\ell\leq r_\ell\leq y_{\ell+1}\), which converges to \(x\) by the squeeze theorem (since \(\lim y_\ell =\lim y_{\ell+1}=x\)). This was useful: we learned something about the \(r\) sequence using something we know about the \(y\) sequence. But the sequences are interleaved:

\[y_1\leq r_1\leq y_2\leq r_2\leq y_3\leq r_3\leq y_4\leq r_4\leq y_5\leq r_5\leq y_6\leq\cdots\]

So we can also think of the \(y\) sequence as trapped between terms of the \(r\) sequence: \(r_{\ell-1}\leq y_{\ell}\leq r_{\ell}\). Because the exponential is monotone (increasing, for \(a>1\)) this implies that \(a^{r_{\ell-1}}\leq a^{y_\ell}\leq a^{r_\ell}\). But now we know the convergence of the outer two sequences: \(\lim a^{r_\ell}=a^x\) by definition, as \(r_\ell\to x\) is a monotone rational sequence, and the same holds for \(a^{r_{\ell-1}}\), as shifting the index by one doesn’t change convergence. Thus by the squeeze theorem,

\[\lim a^{y_\ell}=a^x\]

But this is a contradiction! The terms \(y_\ell\) were specifically chosen so that \(a^{y_\ell}\) was always further than \(\epsilon\) from \(a^x\), so it cannot converge to \(a^x\).

We’ve done it! We’ve rigorously confirmed all the calculations we’ve done from pre-calculus onwards involving the law of exponents: it really does hold for the continuous function \(a^x\), even at irrational powers! Before moving onwards, it’s useful to pause for a minute and put our newfound knowledge to the test, proving a couple of other facts about the exponential.

Corollary 20.1 The exponential function \(a^x\) is one-to-one on its entire domain.

Proof. Let \(x\neq y\) be real numbers, we want to show \(a^x\neq a^y\).
By trichotomy, we know either \(x<y\) or \(x>y\). In the first case, by strict monotonicity, \(a^x<a^y\), and in the second \(a^x>a^y\). That is, in both cases \(a^x\neq a^y\), so we are done.

Exercise 20.4 Prove that the range of the exponential function \(a^x\) is all positive real numbers: that is, for any positive \(y\), show there is some \(x\) where \(a^x=y\).

Hint: can you find some \(n\in\NN\) where \(a^n>y\)? If so, can you modify the idea to get an \(m\) with \(a^{-m}<y\)? Once you have these two values, can you apply a theorem about continuity?
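The hint’s strategy is constructive: bracket \(y\) between two integer powers of \(a\) and then repeatedly halve the bracket, exactly the bisection argument behind the Intermediate Value Theorem. A sketch for \(a>1\) (the values \(a=2\), \(y=10\) are arbitrary illustrative choices):

```python
def solve_exponential(a, y, tol=1e-12):
    """Find x with a**x == y, for a > 1 and y > 0, via bisection."""
    lo, hi = 0.0, 1.0
    while a ** lo > y:   # find -m with a^{-m} < y
        lo -= 1.0
    while a ** hi < y:   # find n with a^{n} > y
        hi += 1.0
    # Monotonicity + continuity: bisect the bracket [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if a ** mid < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = solve_exponential(2.0, 10.0)
assert abs(2.0 ** x - 10.0) < 1e-9
```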

Exercise 20.5 (Convexity of exponentials) Prove that exponential functions are convex (Definition 5.8): their secant lines lie above their graphs.

20.3 Logarithms

We’ve completely put the theory of exponential functions on a rigorous footing, so it’s time to do the same for logarithms. We define logarithms as we did exponentials: by a functional equation telling us what they are for.

Definition 20.4 (The Law of Logarithms) A function \(L\) satisfies the law of logarithms if for every \(x,y>0\), \[L(xy)=L(x)+L(y)\] A logarithm is a continuous nonconstant solution to the law of logarithms.

Exercise 20.6 Let \(L(x)\) be a logarithm and \(r\in\QQ\) a rational number. Prove directly from the functional equation that \(L(x^r)=r L(x)\).

One might initially be concerned: we don’t have a nice candidate function on the rationals that we know satisfies this and just need to extend, so how are we going to prove that such functions exist? Happily, this case turns out to be much less technical than it looks, because we can put all the hard work we did above to good use!

Theorem 20.3 (Logarithms Exist, and are Inverses to Exponentials) Let \(E(x)\) be an exponential function. Then its inverse function is a logarithm.

Proof. Let \(E\) be an exponential function, and \(L\) be its inverse. Because \(E\) is continuous, Theorem 16.5 implies that \(L\) is also continuous and nonconstant, so we just need to show \(L\) satisfies the law of logarithms. Since the range of \(E\) is \((0,\infty)\) this means we must check for any \(a,b>0\) that \(L(ab)=L(a)+L(b)\).
With \(a,b\) in the range of \(E\) we may find \(x,y\) with \(E(x)=a\) and \(E(y)=b\), and (by the definition of \(L\) as the inverse) \(L(a)=x\) and \(L(b)=y\). By the law of exponents for \(E\) we see \(ab = E(x)E(y)=E(x+y)\), and as \(L\) and \(E\) are inverses, \(L(E(x+y))=x+y\). Putting this all together gives what we need: \[L(ab)=L(E(x)E(y))=L(E(x+y))=x+y=L(a)+L(b)\]

Definition 20.5 The base of a logarithm \(L\) is the real number \(a\) such that \(L(a)=1\). That is, the log base \(a\) is the inverse of \(a^x\).
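The standard library’s `math.log(x, a)` behaves (numerically) as the inverse of \(a^x\), so it can be used to sanity-check the law of logarithms and Exercise 20.6; the base \(a=2\) is an arbitrary illustrative choice.

```python
import math

a = 2.0

def L(x):
    return math.log(x, a)  # numerically, the inverse of E(x) = a**x

# L inverts the exponential on sample points
for x in [0.5, 1.0, 3.25]:
    assert abs(L(a ** x) - x) < 1e-12

# The law of logarithms: L(xy) = L(x) + L(y)
for x, y in [(2.0, 3.0), (0.1, 45.0), (7.5, 7.5)]:
    assert abs(L(x * y) - (L(x) + L(y))) < 1e-9

# Exercise 20.6 in action: L(x^r) = r L(x) for rational r
assert abs(L(5.0 ** 0.75) - 0.75 * L(5.0)) < 1e-9

# The base of L: L(a) = 1
assert abs(L(a) - 1.0) < 1e-12
```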

20.4 \(\blacklozenge\) Trigonometric Functions

Like for the exponential and logarithm functions, to propose a rigorous definition of the trigonometric functions we require them to satisfy the trigonometric identities. To make a specific choice, we take the angle difference identities.

Definition 20.6 (Angle Identities) A pair of two functions \((c,s)\) are trigonometric if they are a continuous nonconstant solution to the angle identities \[s(x-y)=s(x)c(y)-c(x)s(y)\] \[c(x-y)=c(x)c(y)+s(x)s(y)\]

Definition 20.7 (Other Trigonometric Functions) Given a trigonometric pair \(s,c\) we define the tangent function \(t(x)=s(x)/c(x)\), as well as the secant \(1/c(x)\), cosecant \(1/s(x)\) and cotangent \(1/t(x)\).

It may seem strange at first: is this really enough to fully nail down trigonometry? It turns out it is: if \(s,c\) satisfy these identities then they actually satisfy all the usual trigonometric identities! It’s good practice working with functional equations to confirm some of this, which is laid out in the exercises below. I’ll start it off by confirming that such functions at least take the right values at zero.
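As a numerical preview (existence is only proven later, as a final project option), the familiar `math.sin` and `math.cos` do satisfy the angle difference identities up to floating-point rounding, so the definition is not vacuous:

```python
import math

s, c = math.sin, math.cos  # the familiar candidates for a trigonometric pair

for x in [0.0, 0.7, 2.5, -1.3]:
    for y in [0.0, 0.4, -2.2]:
        # s(x - y) = s(x)c(y) - c(x)s(y)
        assert abs(s(x - y) - (s(x) * c(y) - c(x) * s(y))) < 1e-12
        # c(x - y) = c(x)c(y) + s(x)s(y)
        assert abs(c(x - y) - (c(x) * c(y) + s(x) * s(y))) < 1e-12

# The values at zero, which the functional equation alone forces
assert s(0.0) == 0.0 and c(0.0) == 1.0
```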

Lemma 20.3 (Values at Zero) If \(s,c\) are trigonometric, then we can calculate their values at \(0\): \[s(0)=0\hspace{1cm}c(0)=1\]

Proof. Setting \(x=y\) in the first immediately gives the first claim \[s(0)=s(x-x)=s(x)c(x)-c(x)s(x)=0\]

Evaluating the second functional equation also at \(x=y\) \[c(0)=c(x-x)=c(x)c(x)+s(x)s(x)=c(x)^2+s(x)^2\]

From this we can see that \(c(0)\neq 0\): if it were zero, we would have \(c(x)^2+s(x)^2=0\) for all \(x\); since both \(c(x)^2\) and \(s(x)^2\) are nonnegative, this forces \(c(x)=s(x)=0\) for every \(x\), making both constant and contradicting the definition. Now plug \(0\) into what we’ve derived, and use that we know \(s(0)=0\):

\[c(0)=c(0)^2+s(0)^2=c(0)^2\]

Finally, since \(c(0)\) is nonzero we may divide by it, which gives \(c(0)=1\) as claimed.

An important corollary showed up during the proof here, when we observed that \(c(0)=c(x)^2+s(x)^2\): now that we know \(c(0)=1\), we see that \((c,s)\) satisfy the Pythagorean identity!

Exercise 20.7 (Pythagorean Identity) If \(s,c\) are trigonometric, then for every \(x\in\RR\) \[s(x)^2+c(x)^2=1\]

Continuing this way, we can prove many other trigonometric identities; several of the most useful (including the double angle identity, which we will need later) are laid out in the exercises below.

Exercise 20.8 (Evenness and Oddness) If \(s,c\) are trigonometric, then \(s\) is odd and \(c\) is even: \[s(-x)=-s(x)\hspace{1cm}c(-x)=c(x)\]

Exercise 20.9 (Angle Sums) If \(s,c\) are trigonometric, then for every \(x\in\RR\) \[s(x+y)=c(x)s(y)+s(x)c(y)\] \[c(x+y)=c(x)c(y)-s(x)s(y)\]

Exercise 20.10 (Double Angles) If \(s,c\) satisfy the angle sum identities, then for any \(x\in\RR\), \[s(2x)=2s(x)c(x)\]

Another useful pair of identities we’ll need are the ‘Half Angle Identities’:

Lemma 20.4 If \(s,c\) are trigonometric functions, then \[c(x)^2=\frac{1+c(2x)}{2}\]

Proof. Using the angle sum identity we see \[c(2x)=c(x)c(x)-s(x)s(x)=c(x)^2-s(x)^2\] Then applying the Pythagorean identity \[\begin{align*} c(2x)&=c(x)^2-s(x)^2\\ &=c(x)^2-(1-c(x)^2)\\ &= 2c(x)^2-1 \end{align*}\]

Re-arranging yields the claimed identity.

Exercise 20.11 If \(s,c\) are trigonometric functions then \[s(x)^2=\frac{1-c(2x)}{2}\]

Just like for exponentials and logs, we don’t expect this to pick out a unique pair of functions; rather, there may be many solutions to the angle identities (corresponding to different units we could measure angles with).

Exercise 20.12 Prove that if \(s(x),c(x)\) are a trigonometric pair then so are \(s(kx), c(kx)\) for any constant \(k>0\).
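A quick numerical check of this exercise with \(k=3\) (an arbitrary choice): the rescaled pair still satisfies both angle identities.

```python
import math

k = 3.0  # arbitrary rescaling constant

def s(x):
    return math.sin(k * x)

def c(x):
    return math.cos(k * x)

# Both angle difference identities survive the substitution x -> kx.
for x in [0.2, 1.1, -0.8]:
    for y in [0.5, -1.7]:
        assert abs(s(x - y) - (s(x) * c(y) - c(x) * s(y))) < 1e-12
        assert abs(c(x - y) - (c(x) * c(y) + s(x) * s(y))) < 1e-12
```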

To prove the existence of trigonometric functions, we’ll follow a similar path to exponentials: we’ll propose a pair of functions, confirm that they are continuous and nonconstant, and verify that they satisfy the trig identities. This is one of the options for the final project, for those interested!