A theorem, due to Henri Joris, states that if $n$ and $m$ are co-prime positive integers, and $f:\mathbb{R}\to\mathbb{R}$ is a function such that $f^n$ and $f^m$ are both smooth, then $f$ itself is smooth.

## The discussion

First observe that we can write $\mathbb{R} = N \cup D$, the disjoint union of the set $N$ on which $f = 0$ and the set $D$ on which $f \neq 0$.

Automatically $f\in C^\infty(\mathrm{int}(N))$. Since $D$ is equivalently characterized by $f^n \neq 0$, and $f^n$ is smooth, we also have that $D$ is open. On $D$, since $n,m$ are co-prime, there exist integers $a,b$ such that $1 = an + bm$. Since $f \neq 0$ on $D$, the functions $(f^n)^a$ and $(f^m)^b$ are in $C^\infty(D)$, being integer powers of non-vanishing smooth functions. Therefore on the open set $D$ we have $f = f^{an + bm} = (f^n)^a (f^m)^b \in C^\infty$.
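As a concrete sanity check (not part of the original argument), here is a small Python sketch computing such a Bézout pair $(a, b)$ via the extended Euclidean algorithm and verifying the identity $f = (f^n)^a (f^m)^b$ at a sample nonzero value; the helper `bezout` is our own.

```python
from math import gcd

def bezout(n, m):
    """Extended Euclidean algorithm: return (a, b) with a*n + b*m == gcd(n, m)."""
    if m == 0:
        return 1, 0
    a, b = bezout(m, n % m)
    return b, a - (n // m) * b

n, m = 3, 2  # co-prime exponents, as in Joris' theorem
assert gcd(n, m) == 1
a, b = bezout(n, m)
assert a * n + b * m == 1  # here (a, b) == (1, -1)

# On D, where f != 0, negative powers are harmless and f = (f^n)^a * (f^m)^b.
fz = 1.7  # a sample nonzero value f(z)
assert abs((fz ** n) ** a * (fz ** m) ** b - fz) < 1e-12
```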

Where Joris' Theorem is nontrivial is at the frontier $N \cap \overline{D}$. Fix $y$ an arbitrary point of $N\cap \overline{D}$ for the remainder of this post.

Without loss of generality we will assume throughout that $n$ is odd.

### Finite order case

Let us first treat the case where $y$ is a finite order zero of $f^n$. By the Malgrange preparation theorem there exists some $k_n$ such that $f^n(x) = (x-y)^{k_n} g_n(x)$, where $g_n\in C^\infty$ is non-vanishing in a neighborhood of $y$. Since $(f^n)^m = (f^m)^n$, we have that $f^{mn}$ also vanishes to finite order at $y$, and therefore so does $f^m$. Define $k_m$ and $g_m$ analogously. We must then have \[ k_n m = k_m n, \qquad (g_n)^m = (g_m)^n.\]

Now, since $g_n$ is non-vanishing near $y$, we can take its $n$th root (since $n$ is odd this root is unique), and the function $\sqrt[n]{g_n}$ is smooth near $y$.

On the other hand, since $n,m$ are co-prime, but $k_n m = k_m n$, we must have that $k_n$ is divisible by $n$. In particular, we can find $l_n$ such that $k_n = l_n n$. This means that near $y$ we can write $f(x) = (x-y)^{l_n} \sqrt[n]{g_n(x)}$ which is smooth.
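The divisibility step (that $k_n m = k_m n$ with $\gcd(n,m) = 1$ forces $n \mid k_n$) can also be sanity-checked by brute force; the following Python sweep is our own illustration, not part of the proof.

```python
from math import gcd

# If gcd(n, m) == 1 and k_n * m == k_m * n for some integer k_m,
# then n must divide k_n.
for n in range(1, 15):
    for m in range(1, 15):
        if gcd(n, m) != 1:
            continue
        for k_n in range(100):
            if (k_n * m) % n == 0:  # i.e. an integer k_m = k_n * m / n exists
                assert k_n % n == 0
```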

### Infinite order case

The main non-trivial part of Joris' theorem is the case when $f$ vanishes to infinite order at $y$. Notice that in the finite order case, the argument is basically the same as what would've been done for the analogous theorem in the analytic category, where locally we can always represent a function by its power series. For analytic functions, however, this would be the end of the story, since a non-zero analytic function cannot vanish to infinite order at a point. For smooth functions, infinite order vanishing is possible, and this causes some problems.

To overcome this, instead of looking at the Taylor series expansion at one point $y$, we will look at the Taylor series *on a neighborhood of $y$.*

Let us start by introducing some further notation. By $N_0$ we will refer to the set of points at which the smooth function $f^n$ vanishes to infinite order. It is the intersection of a family of closed sets, and hence is closed also. Our argument in the previous sections guarantees that $f$ is smooth on the complement $\mathbb{R}\setminus N_0$. Define the functions $f_k$ by \begin{equation} f_k(x) = \begin{cases} f^{(k)}(x) & x \not\in N_0 \newline 0 & x \in N_0. \end{cases} \end{equation}

Given $y\in \partial N_0$, it suffices to check that each $f_k$ is differentiable at $y$. We note that away from $\partial N_0$, $f_{k+1}$ is the derivative of $f_k$. And since $N_0$ is closed and $f_k$ vanishes on it, if $f_k$ is differentiable at $y$ then necessarily $f_k'(y) = 0$; hence $f_{k+1}$ would be everywhere the derivative of $f_k$, and the claim would follow.

To prove that $f_k$ is differentiable at $y$, let $z \in \mathbb{R}\setminus N_0$. Since $N_0$ is closed, there exists $z'\in N_0$ minimizing the distance to $z$. Let $x$ be a point between $z$ and $z'$. By definition we have \[ \int_x^z f_{k+1}(s) ~\mathrm{d}s = f_k(z) - f_k(x), \] since the segment joining $z$ and $x$ is by construction contained within $\mathbb{R}\setminus N_0$, and hence we can appeal to the fundamental theorem of calculus. Taking limits on both sides as $x$ tends to $z'$, and using that $f_k(z') = 0$, we get \[ |f_k(z)| = | f_k(z) - f_k(z') | \leq \int_{z'}^z |f_{k+1}(s)| ~\mathrm{d}s.\] By continuity of $f_{k+1}$, which vanishes at $z'$, the integral is $o(|z - z'|) = o(|z - y|)$. And hence $f_k$ is differentiable at $y$ with derivative $0$.
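To see the gluing in action, consider the standard flat function $f(x) = e^{-1/x^2}$ (our illustrative choice, for which $N_0 = \{0\}$): both $f_0$ and the glued derivative $f_1$ satisfy $|f_k(z)| = o(|z|)$ near the origin, matching the conclusion of the lemma.

```python
import math

def f0(x):
    # flat function: vanishes to infinite order at 0, so N_0 = {0}
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

def f1(x):
    # glued first derivative: f'(x) = 2 x^{-3} e^{-1/x^2} off N_0, and 0 on N_0
    return 2.0 * x ** -3 * math.exp(-1.0 / x ** 2) if x != 0 else 0.0

# |f_k(z)| / |z| is tiny near 0, consistent with f_k'(0) = 0
for z in (0.05, 0.1, 0.2):
    assert abs(f0(z)) / z < 1e-6
    assert abs(f1(z)) / z < 1e-6
```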

By the above Lemma, it suffices to show that the functions $f_k$ are continuous for all $k$. Before doing the general case, let's do an example.

Suppose $n = 3$ and $m = 2$, for ease of computation. Let $y\in \partial N_0$ and $z\in \mathbb{R}\setminus N_0$. We know that as $z$ approaches $y$, the following are true:

- $f^2(z) \to 0 \implies f(z) \to 0$ (So that $f_0$ is continuous.)
- $f_0(z) f_1(z) \to 0$ and $f_0(z) f_0(z) f_1(z) \to 0$. (This comes from the first derivatives of $f^2$ and $f^3$ being continuous.)
- $f_1(z) f_1(z) + f_0(z) f_2(z) \to 0$ and $2 f_0(z) f_1(z) f_1(z) + f_0(z) f_0(z) f_2(z) \to 0$. (This comes from the second derivatives of $f^2$ and $f^3$ being continuous.)
- $3 f_1(z) f_2(z) + f_0(z) f_3(z) \to 0$ and $2 f_1(z) f_1(z) f_1(z) + 6 f_0(z) f_1(z) f_2(z) + f_0(z) f_0(z) f_3(z) \to 0$. (This comes from the third derivatives of $f^2$ and $f^3$ being continuous.)

The final line implies \begin{multline} f_1(z) \biggl[ - f_1(z) f_1(z) + 3[\underbrace{f_1(z) f_1(z) + f_0(z) f_2(z)}_{\to 0}]\biggr] \newline + \underbrace{f_0(z) \left[ 3 f_1(z) f_2(z) + f_0(z) f_3(z)\right]}_{\to 0} \to 0. \notag \end{multline} This implies that $f_1(z) \to 0$ as $z \to y$ and hence $f_1$ is also continuous.
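The algebra in this display can be checked symbolically. The following sympy computation (ours, not part of the post) verifies that the displayed combination equals $2f_1^3 + 6 f_0 f_1 f_2 + f_0^2 f_3$, which in turn is $(f^3)'''/3$ written in the $f_j$.

```python
import sympy as sp

f0, f1, f2, f3 = sp.symbols('f0 f1 f2 f3')

# The combination displayed above...
combo = f1 * (-f1**2 + 3 * (f1**2 + f0 * f2)) + f0 * (3 * f1 * f2 + f0 * f3)

# ...matches the expression from the final bullet point:
assert sp.expand(combo - (2 * f1**3 + 6 * f0 * f1 * f2 + f0**2 * f3)) == 0

# Cross-check: that expression is one third of (f^3)''' for a generic smooth f.
x = sp.Symbol('x')
f = sp.Function('f')
subs = {f0: f(x), f1: f(x).diff(x), f2: f(x).diff(x, 2), f3: f(x).diff(x, 3)}
assert sp.expand(combo.subs(subs) - sp.diff(f(x)**3, x, 3) / 3) == 0
```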

An important fact to point out here is that to guarantee $f_1$ is continuous, so that $f$ is *once* continuously differentiable, we used information concerning the continuity of *up to three derivatives* of $f^2$ and $f^3$.
This derivative loss is unavoidable: for example, consider the function
\[ f(x) = |x|^{\frac34}.\]
This function has a cusp singularity at the origin and is not differentiable there.
But $f^2(x) = |x|^{\frac32}$ is $C^1$, and $f^3(x) = |x|^{\frac94}$ is $C^2$.
One can in principle quantify the derivative loss incurred and obtain a finite differentiability version of Joris' theorem; one such argument has been given by Fedja Nazarov.
Here we will instead insist on infinite differentiability to sidestep these finite derivative loss issues.
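A quick numeric illustration of the derivative loss (ours): the difference quotient of $|x|^p$ at the origin is $\mathrm{sign}(h)\,|h|^{p-1}$, which diverges for $p = 3/4$ but vanishes for $p = 3/2$; and the second derivative of $|x|^{9/4}$ is a constant multiple of $|x|^{1/4}$, which extends continuously by $0$.

```python
def dq(p, h):
    # difference quotient of |x|^p at 0: (|h|^p - 0) / h = sign(h) * |h|^(p - 1)
    return (abs(h) ** p - 0.0) / h

# p = 3/4: the quotient blows up, so f(x) = |x|^(3/4) is not differentiable at 0
assert abs(dq(0.75, 1e-8)) > 10

# p = 3/2 (= f^2): the quotient tends to 0; the derivative (3/2) sign(x) |x|^(1/2)
# is continuous, so f^2 is C^1
assert abs(dq(1.5, 1e-8)) < 1e-3

# p = 9/4 (= f^3): (d^2/dx^2) |x|^(9/4) = (9/4)(5/4) |x|^(1/4) -> 0 as x -> 0
assert (9 / 4) * (5 / 4) * abs(1e-8) ** 0.25 < 0.1
```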

By taking higher and higher derivatives, we can analogously argue that $f_2, f_3, \ldots$ and so on are all continuous.
Instead of these *ad hoc* computations, however, let us see if we can make the linear algebra involved more systematic.

### The infinite order case, continued

To do so, let us denote by $\mathscr{R}$ the ring of real valued functions on $\mathbb{R}$, with pointwise addition and multiplication, and note that $C^0$ is a subring of $\mathscr{R}$. Notice that by definition we can write the $k$th derivative $(f^n)^{(k)}$ in terms of $\{f_0, f_1, \ldots, f_k\}$. This algebraic relation allows us to rephrase our differentiability property as algebraic conditions (which is what we did in the bullet points above). To be more precise: consider the ring of formal power series $\mathscr{R}[[\zeta]]$, as well as the subring of formal power series $C^0[[\zeta]]$. We are given an element $\mathfrak{f}\in \mathscr{R}[[\zeta]]$, given by \[ \mathfrak{f} = \sum f_j \zeta^j.\] A computation using Taylor series shows that, up to the factorial normalizations $f_j \mapsto f_j / j!$ and an overall factor of $k!$ (neither of which affects any continuity statement), the $k$th derivative of $f^n$ coincides with the coefficient of $\zeta^k$ in the power series $\mathfrak{f}^n$. And so our knowledge that $f^n, f^m\in C^\infty$ translates to the algebraic statement that $\mathfrak{f}^n$ and $\mathfrak{f}^m$ are both in $C^0[[\zeta]]$. Our goal is to deduce that this implies all the coefficients of $\mathfrak{f}$ are in $C^0$.
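Concretely, with the Taylor normalization $f_j = f^{(j)}/j!$ the correspondence is exact: the coefficient of $\zeta^k$ in $\mathfrak{f}^n$ equals $(f^n)^{(k)}/k!$. The following sympy check (ours) verifies this for $n = 3$, $k = 3$, and a generic smooth $f$.

```python
import sympy as sp

x, zeta = sp.symbols('x zeta')
f = sp.Function('f')
n, K, k = 3, 4, 3  # the truncation order K only needs to be >= k

# Taylor-normalized jet: frak(f) = sum_j f^(j)(x)/j! * zeta^j
F = sum(sp.diff(f(x), x, j) / sp.factorial(j) * zeta**j for j in range(K + 1))

coeff = sp.expand(F**n).coeff(zeta, k)             # coefficient of zeta^k in frak(f)^n
target = sp.diff(f(x)**n, x, k) / sp.factorial(k)  # (f^n)^(k) / k!
assert sp.expand(coeff - target) == 0
```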

One key feature of the following argument is that, by lifting to formal power series (actually elements of the jet bundle), we can forget about the formal relationship "$f_k' = f_{k+1}$" (which is an "integrability condition" in the jet bundle picture). This gives us more objects to work with and simplifies the computations involved.

It is convenient to introduce the following notation: let \[ \mathfrak{g}_k = \sum_{j = 0}^k f_j \zeta^j \] denote the $k$th order finite jet associated to $\mathfrak{f}$. As a convention we will require that \[ \mathfrak{g}_{-1} = 0.\] Our argument is based on the following computation: \begin{equation} (\mathfrak{f} - \mathfrak{g}_k)^r(\mathfrak{f} - \mathfrak{g}_j)^s = \sum_{\tilde{r} = 0}^r \sum_{\tilde{s} = 0}^s \binom{r}{\tilde{r}} \binom{s}{\tilde{s}} \mathfrak{f}^{r + s - \tilde{r} - \tilde{s}} (- \mathfrak{g}_k)^{\tilde{r}} (- \mathfrak{g}_j)^{\tilde{s}} \end{equation}
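This identity is just the binomial theorem applied twice, once to each factor; as a sanity check (ours), sympy confirms it for a sample pair of exponents, treating $\mathfrak{f}$, $\mathfrak{g}_k$, $\mathfrak{g}_j$ as commuting indeterminates.

```python
import sympy as sp

F, Gk, Gj = sp.symbols('F G_k G_j')  # stand-ins for frak(f), frak(g)_k, frak(g)_j
r, s = 3, 2

lhs = (F - Gk)**r * (F - Gj)**s
rhs = sum(
    sp.binomial(r, rt) * sp.binomial(s, st)
    * F**(r + s - rt - st) * (-Gk)**rt * (-Gj)**st
    for rt in range(r + 1) for st in range(s + 1)
)
assert sp.expand(lhs - rhs) == 0
```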

We will approach the argument by induction. More precisely, we will prove that, for every $k \geq 0$, the following two hypotheses (which together we will refer to as (H:k)) hold:

- $\mathfrak{g}_k \in C^0[[\zeta]]$
- $\mathfrak{g}_{k-1} \mathfrak{f}^j \in C^0[[\zeta]]$ for every $j \geq 0$.

By definition, (H:k) for every $k$ immediately implies that $f_k \in C^0$ for every $k$, and the theorem would be proved.

*Base case*: Since $f^n\in C^\infty \subset C^0$, and since when $n$ is odd (as we assumed) the function $\sqrt[n]{\cdot}$ is continuous, we have that $f_0 = \sqrt[n]{f^n} \in C^0$ (note that $f^n = 0$ on $N_0$, so $f_0 = f$ everywhere). This implies $\mathfrak{g}_0 \in C^0[[\zeta]]$.
Additionally, by definition, $\mathfrak{g}_{-1} \mathfrak{f}^j = 0$ is in $C^0[[\zeta]]$.

*Induction step*: Assume now that (H:K) holds for some $K \geq 0$.
Note that, since $m$ and $n$ are coprime, there exists some $\rho\in \mathbb{N}$ such that every number greater than or equal to $\rho$ can be decomposed as $an + bm$ for $a,b\in \mathbb{N}\cup \{0\}$. For any $r\geq \rho$ we then have $\mathfrak{f}^r = (\mathfrak{f}^n)^a (\mathfrak{f}^m)^b \in C^0[[\zeta]]$.
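By the Sylvester–Frobenius ("Chicken McNugget") theorem one can in fact take $\rho = (n-1)(m-1)$; the following Python check (our own illustration) confirms this bound for small coprime pairs, and that $\rho - 1 = nm - n - m$ is not representable.

```python
from math import gcd

def representable(t, n, m):
    # can t be written as a*n + b*m with a, b nonnegative integers?
    return any((t - a * n) % m == 0 for a in range(t // n + 1))

for n in range(2, 10):
    for m in range(2, 10):
        if gcd(n, m) != 1:
            continue
        rho = (n - 1) * (m - 1)
        # every number >= rho is representable...
        assert all(representable(t, n, m) for t in range(rho, rho + 50))
        # ...while rho - 1 = n*m - n - m is not (Sylvester's theorem)
        assert not representable(rho - 1, n, m)
```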

We claim first that (H:K) implies that $\mathfrak{g}_K \mathfrak{f}^j \in C^0[[\zeta]]$ for every $j$. We leave the proof of this claim as a separate lemma below.

Assuming this claim, consider equation (2) with $k = K$, $r = \rho$, and $s = 0$. Of the terms appearing on the right side of (2), every term can be written as $\mathfrak{g}_K^\alpha \mathfrak{f}^\beta$, which is in $C^0[[\zeta]]$ either due to our claim (and the fact that $\mathfrak{g}_K \in C^0[[\zeta]]$), or, in the only case when $\alpha = 0$, due to the fact that $\beta = \rho$ and by construction $\mathfrak{f}^\rho \in C^0[[\zeta]]$. This implies \[ f_{K+1}^\rho \zeta^{(K+1)\rho} + \cdots = (\mathfrak{f} - \mathfrak{g}_{K})^\rho \in C^0[[\zeta]] \] or that $f_{K+1}^\rho \in C^0$. Without loss of generality we can assume $\rho$ is odd, and hence we can take the $\rho$th root to get $f_{K+1} \in C^0$, which implies $\mathfrak{g}_{K+1} \in C^0[[\zeta]]$ and hence (H:K+1) holds.

We now prove the claim that (H:K) implies $\mathfrak{g}_K \mathfrak{f}^j \in C^0[[\zeta]]$ for every $j$.

It suffices to show that $f_K f_j^s \in C^0$ for every $j, s \geq 1$; for the mixed terms like $f_K f_j f_l$ we can write \[ f_K f_j f_l = ( f_K^3 f_j^3 f_l^3)^{1/3} = [ f_K (f_K f_j^3) (f_K f_l^3) ]^{1/3} \in C^0.\]
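The cube-root trick for the mixed terms is elementary but worth a quick numeric check (ours); for negative values one uses the odd real cube root, which preserves signs.

```python
# f_K * f_j * f_l recovered from f_K, f_K * f_j^3, and f_K * f_l^3 via a cube root
fK, fj, fl = 0.7, 1.3, 2.1
lhs = (fK * (fK * fj**3) * (fK * fl**3)) ** (1 / 3)
assert abs(lhs - fK * fj * fl) < 1e-9
```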

We prove this claim by induction on $j$. First observe that the claim holds trivially by our hypothesis for $j \leq K$. Consider equation (2) with $k = K - 1$, $r \geq \rho$, a $j$ for which the claim holds, and $s$ arbitrary. The right hand side of equation (2) takes the form \[ \sum_{\tilde{r} = 1}^r \sum_{\tilde{s} = 0}^s \binom{r}{\tilde{r}} \binom{s}{\tilde{s}} \underbrace{\mathfrak{f}^{r + s - \tilde{r} - \tilde{s}} (- \mathfrak{g}_{K-1})^{\tilde{r}}}_{\in C^0[[\zeta]] \text{ by (H:K)}} (- \mathfrak{g}_j)^{\tilde{s}} + \sum_{\tilde{s} = 0}^s \binom{s}{\tilde{s}} \mathfrak{f}^{r + s - \tilde{s}} (-\mathfrak{g}_j)^{\tilde{s}}.\] Using that $r + s - \tilde{s} \geq \rho$, and that $f_K \mathfrak{g}_j^{\tilde{s}} \in C^0[[\zeta]]$ by the induction hypothesis on $j$, we conclude \[ f_K (\mathfrak{f} - \mathfrak{g}_{K-1})^r (\mathfrak{f} - \mathfrak{g}_j)^s \in C^0[[\zeta]].\] The coefficient of the lowest order term of this expression is \[ (f_K)^{r+1} f_{j+1}^s, \] which then must be in $C^0$. Without loss of generality we can assume $r$ is even, so that $r + 1$ is odd. Since $s$ is arbitrary, applying this with $s(r+1)$ in place of $s$ and taking the unique continuous $(r+1)$st root, we get that $f_K f_{j+1}^s \in C^0$ as needed.