Lie Groups

I like to think of Lie groups as manifolds with additional group structure (as opposed to many common texts which think about Lie groups as groups with smooth structures). One way in which Lie groups are important is that the additional group structure gives a family of preferred diffeomorphisms between distinct points of the manifold; this means that certain geometric structures can be carried between points, and analysis can be done in a way respecting this level of homogeneity introduced by the group structure. An interesting aspect, however, is that in general the "homogeneity" coming from Lie groups is not unique, and this manifests in some interesting problems when trying to do analysis on Lie groups.

Basic definition

Let $G$ be a smooth manifold. We say that $G$ is a Lie group if there are

  • a product map $\mu: G\times G \to G$ that is smooth and satisfies the associativity rule $\mu(x,\mu(y,z)) = \mu(\mu(x,y),z)$;
  • an inverse map $\iota: G\to G$ that is a smooth involution;
  • a distinguished identity element $e\in G$ such that $\mu(e,x) = \mu(x,e) = x$ and $\mu(x,\iota(x)) = e$ for every $x \in G$.

Given $g \in G$, we can further define the family of smooth mappings indexed by $g$

  • $\rho_g(x) := \mu(x,g)$,
  • $\lambda_g(x) := \mu(g,x)$.

We have that obviously $\rho_g(\rho_h(x)) = \rho_{\rho_g(h)}(x)$ and similarly for $\lambda$. For convenience we will also use the shorthand that, when $g,h\in G$, we write $gh \in G$ for the element $\mu(g,h)$ and we write $g^{-1}\in G$ for the element $\iota(g)$. Note that $\rho_g(x) = xg$ and $\lambda_g(x) = gx$. (So $\rho$ and $\lambda$ stand for right- and left- multiplication respectively.)

The simplest example of a Lie group is the Euclidean space $\mathbb{R}^n$. The product map is given by \[ \mu(x,y) = x + y \] and the inverse map is given by \[ \iota(x) = -x.\] Notice that $\mu$ is symmetric in its arguments, representing the commutativity of the group operation on $\mathbb{R}^n$. In general, however, commutativity is not assumed. For much of the discussion in this post, it is helpful to keep in mind the following, alternative group structure on $\mathbb{R}^2$.

Example  

Our underlying manifold is still $G = \mathbb{R}^2$. Given $x, y\in \mathbb{R}^2$, the coordinate components we label by $x_1, x_2$ and $y_1, y_2$ respectively. The group multiplication is given by \[ \mu(x,y) = (x_1 + y_1, e^{x_1} y_2 + x_2) .\] This group law can also be realized by identifying $x\in \mathbb{R}^2$ with the matrix \[ \begin{pmatrix} e^{x_1} & x_2 \newline 0 & 1 \end{pmatrix} \] with $\mu$ representing matrix multiplication.

We see then that the origin $(0,0)\in \mathbb{R}^2$ is the identity element, and the inverse map is given by $(x_1, x_2)\mapsto (-x_1, - e^{-x_1} x_2)$.

One sees easily that this group is not commutative: \[ \mu((1,1),(0,1)) = (1, 1 + e), \qquad \mu((0,1),(1,1)) = (1, 2).\]
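These algebraic claims are easy to sanity-check numerically. Below is a minimal sketch (using NumPy; the helper names `mu` and `inv` are ad hoc, chosen to mirror the notation above) verifying associativity at sample points, the identity and inverse properties, and the non-commutativity just computed.

```python
import numpy as np

def mu(x, y):
    """Group product on R^2: mu(x, y) = (x1 + y1, e^{x1} y2 + x2)."""
    return np.array([x[0] + y[0], np.exp(x[0]) * y[1] + x[1]])

def inv(x):
    """Group inverse: iota(x) = (-x1, -e^{-x1} x2)."""
    return np.array([-x[0], -np.exp(-x[0]) * x[1]])

e = np.array([0.0, 0.0])
x, y, z = np.array([1.0, 1.0]), np.array([0.0, 1.0]), np.array([-0.5, 2.0])

assert np.allclose(mu(x, mu(y, z)), mu(mu(x, y), z))          # associativity
assert np.allclose(mu(e, x), x) and np.allclose(mu(x, e), x)  # identity
assert np.allclose(mu(x, inv(x)), e)                          # inverses
print(mu(x, y), mu(y, x))  # [1. 3.718...] vs [1. 2.]: not commutative
```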

Invariant fields and measures

Observe that given any $g, h\in G$, the diffeomorphisms $\lambda_{g h^{-1}}$ and $\rho_{h^{-1} g}$ both send $h$ to $g$, and respect the group structure. This means that we can consistently define left/right-invariant geometric objects by pushing forward or pulling back using the left/right invariance.

Example    [Left-invariant vector fields]
Given an arbitrary Lie group $G$, let $\mathring{v} \in T_e G$ be a tangent vector at the identity element. We can define a vector field $v$ over $G$ by requiring that, at the point $g\in G$, \[ T_g G \ni v_g = \mathrm{d}(\lambda_g)_e(\mathring{v}) \] is the pushforward by the left multiplication $\lambda_g$ from $T_e G$ to $T_g G$ of the vector $\mathring{v}$. This vector field is obviously well-defined. It is smooth since $\mu$ is a smooth map and hence $\mathrm{d}(\lambda_g)$ varies smoothly with its parameter. And by the composition rules of differentials (chain rule), we see that automatically $v$ satisfies \[ \mathrm{d}(\lambda_g)_{h} v_h = v_{gh}. \]

An immediate consequence is that any Lie group is parallelizable: take a basis $\{ \mathring{v}_1, \ldots, \mathring{v}_n\}$ of $T_e G$, and extend them to left-invariant fields by the above procedure.

Example  

Returning to the Lie group of Example 1, consider the basis $\{ \partial_1, \partial_2 \}$ of $T_e G$. What are the left and right invariant bases generated using the group structure?

Left invariant. First we compute the differential of $\lambda_g$. Relative to the coordinates, we have that \[ \partial_1 \lambda_g = (1, 0) , \quad \partial_2 \lambda_g = (0, e^{g_1}). \] So the two left invariant vector fields are given in coordinates by \[ \{ \partial_1, e^{g_1} \partial_2 \}. \]

Right invariant. Similarly, we compute the differential of $\rho_g$, which we find to be \[ \partial_1 \rho_g = (1, e^{x_1}g_2), \qquad \partial_2 \rho_g = (0,1). \] Evaluating at the identity ($x = e$, so $x_1 = 0$) and pushing forward the basis, the two right-invariant vector fields are \[ \{ \partial_1 + g_2 \partial_2, \partial_2 \}. \]
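Since these are mechanical Jacobian computations, one can also let a computer algebra system carry them out. Here is a sketch using SymPy (the symbol and function names are ad hoc): the columns of each Jacobian, evaluated at the identity, are the coordinate expressions of the pushed-forward basis vectors.

```python
import sympy as sp

x1, x2, g1, g2 = sp.symbols('x1 x2 g1 g2')

def mu(a, b):
    """The group product of Example 1."""
    return sp.Matrix([a[0] + b[0], sp.exp(a[0]) * b[1] + a[1]])

x = sp.Matrix([x1, x2])
g = sp.Matrix([g1, g2])

lam = mu(g, x)   # left multiplication by g
rho = mu(x, g)   # right multiplication by g

# Jacobians with respect to x, evaluated at the identity x = (0, 0)
d_lam_e = lam.jacobian(x).subs({x1: 0, x2: 0})
d_rho_e = rho.jacobian(x).subs({x1: 0, x2: 0})

print(d_lam_e)  # Matrix([[1, 0], [0, exp(g1)]])  ->  frame {d_1, e^{g1} d_2}
print(d_rho_e)  # Matrix([[1, 0], [g2, 1]])       ->  frame {d_1 + g2 d_2, d_2}
```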

Notice how the left and right invariant fields, even though they are generated from the same basis elements at the origin, are different in general. In addition to the method described in Example 2, there is another way of creating left/right invariant vector fields.

Remark    [Invariant fields from group action]

Let $\gamma: (-1,1) \to G$ be a smooth curve such that $\gamma(0) = e$. Consider the family of mappings $\rho_{\gamma(s)}(x) = \mu(x, \gamma(s))$. For every fixed $x$, we can consider the derivative \[ \frac{d}{ds} \rho_{\gamma(s)}(x) \Big|_{s = 0}. \] Since $\gamma(0) = e$, the curve $s \mapsto \rho_{\gamma(s)}(x)$ passes through $x$ at $s = 0$, so this derivative is a tangent vector in $T_x G$, and the construction generates a vector field on $G$. We claim that this vector field is left invariant. (Notice that it is generated from a one-parameter family of right multiplications.)

The idea is simple: left and right multiplications commute! For every $g,h\in G$ we have $\lambda_g(\rho_h(x)) = \rho_h(\lambda_g(x))$. This follows immediately from the definition and associativity. So we have that \[ \rho_{\gamma(s)}(g) = \lambda_g (\rho_{\gamma(s)}(e)) \] and so, using that $\rho_{\gamma(s)}(e) = \gamma(s)$, \[ \frac{d}{ds} \rho_{\gamma(s)}(g) = \frac{d}{ds} \lambda_g(\rho_{\gamma(s)}(e)) = \mathrm{d}(\lambda_g)_{\gamma(s)} \cdot \gamma'(s) .\] We see now that evaluating at $s = 0$ this construction is exactly the same as that of Example 2, with $\gamma'(0)$ playing the role of $\mathring{v}$.

Example    [Right-invariant volume form]
Given an arbitrary Lie group $G$, let $\mathring{\omega} \in \Lambda^n T^\star_e G$ be a top-degree form. We can define a volume form on $G$ that is invariant under pull-back by the mappings $\rho_g$ by requiring that \[ \omega_g = (\rho_{g^{-1}}^\star)_g \mathring{\omega}. \]
Example  

For the Lie group of Example 1, we can start with $\mathring{\omega} = \mathrm{d}x_1\wedge \mathrm{d}x_2$ in the standard coordinate basis. Then we get two different invariant volume forms, depending on whether we use the left or the right action.

Left invariant. Using the computations we have in Example 3, we see that the left-invariant volume form is \[ e^{-x_1} \mathrm{d}x_1 \wedge \mathrm{d} x_2.\]

Right invariant. Similarly, we see that the right-invariant volume form is \[ \mathrm{d}x_1 \wedge \mathrm{d}x_2.\]
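The invariance claims can be checked directly: pulling back a form $f(x)\,\mathrm{d}x_1\wedge\mathrm{d}x_2$ along a map $\Phi$ replaces the coefficient $f$ by $f(\Phi(x))\det D\Phi$. A sketch of this check (SymPy; helper names ad hoc):

```python
import sympy as sp

x1, x2, g1, g2 = sp.symbols('x1 x2 g1 g2')
x = sp.Matrix([x1, x2])

def mu(a, b):
    """The group product of Example 1."""
    return sp.Matrix([a[0] + b[0], sp.exp(a[0]) * b[1] + a[1]])

lam = mu(sp.Matrix([g1, g2]), x)   # left translation by g
rho = mu(x, sp.Matrix([g1, g2]))   # right translation by g

def pullback_coeff(f, Phi):
    """Coefficient of Phi^*(f dx1 ^ dx2), namely f(Phi(x)) * det(D Phi)."""
    return sp.simplify(f.subs([(x1, Phi[0]), (x2, Phi[1])], simultaneous=True)
                       * Phi.jacobian(x).det())

print(pullback_coeff(sp.exp(-x1), lam))    # exp(-x1): unchanged under lambda_g
print(pullback_coeff(sp.Integer(1), rho))  # 1: unchanged under rho_g
```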

Each of the two invariant volume forms induces an invariant measure with respect to which we can integrate functions. These are usually called the left- and right- Haar measures. Note that the measures are only well-defined up to multiplication by a non-zero constant.

Remark    [Invariant metrics]
The same constructions above in Examples 2 and 5 can be used to construct other left/right invariant objects. As an example, following the computations of Example 3, the inverse Riemannian metric $(\partial_1\otimes \partial_1) + (e^{2 g_1} \partial_2 \otimes \partial_2)$ is left invariant. The corresponding left-invariant Riemannian metric is $\mathrm{d}x_1 \otimes \mathrm{d}x_1 + e^{-2 x_1} \mathrm{d}x_2\otimes \mathrm{d}x_2$. One easily sees that the corresponding volume form is exactly $ e^{-x_1} \mathrm{d}x_1 \wedge \mathrm{d}x_2$, precisely the one found in Example 6.
Remark    [Role of involution]

Observe that the mapping $\iota$ swaps left and right: we have that $\iota\circ \rho_g = \lambda_{g^{-1}} \circ \iota$. This means that pushing forward (or pulling back) a left-invariant object using the involution $\iota$ changes it to a right-invariant object and vice versa.

We can exhibit this by means of an example. Going back to the Lie group of Example 1, the differential of $\iota$ is \[ \partial_1 \iota = (-1, e^{-x_1} x_2), \qquad \partial_2 \iota = (0, - e^{-x_1}). \] So if we start with the left-invariant vector fields $\{ \partial_1, e^{g_1} \partial_2 \}$, we see that pushing forward by $\iota$ gives us \[ \partial_1 \mapsto - \partial_1 - g_2 \partial_2, \qquad e^{g_1} \partial_2 \mapsto - \partial_2 \] sending them to the negative of the right-invariant fields found in Example 3.

Definition    [Modular function]

Let $\nu$ denote a right Haar measure for our Lie group $G$, defined using the method in Example 5. (We write $\nu$ to avoid a clash with the product map $\mu$.) Now, using again that left and right translations commute, if we take the pushforward measure under a left multiplication, that is \[ (\lambda_g)_\star \nu, \] this new measure is still a right Haar measure. Since the right Haar measure is unique up to rescaling, there must then be a positive function $\Delta: G\to (0,\infty)$ such that \[ (\lambda_g)_\star \nu = \Delta(g) \nu. \] This function $\Delta$ is called the modular function of $G$.

Notice that by the uniqueness statement for Haar measures, $\Delta(g)$ is well-defined, independent of the representative $\nu$ chosen (both sides scale the same). If $\Delta$ is a constant (and hence necessarily 1, since $\Delta(e) = 1$ by definition), then we see that $\nu$ is also left invariant, and in this case the set of left- and right- invariant measures agree. When this happens, we say that the Lie group is unimodular.
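For the group of Example 1 the modular function is easy to make explicit. Taking the right Haar measure $\nu$ to be Lebesgue measure (Example 6), we have $(\lambda_g)_\star \nu (A) = \nu(\lambda_{g^{-1}}(A))$, and the constant Jacobian of $\lambda_{g^{-1}}$ gives $\Delta(g) = e^{-g_1}$ under the convention above; in particular this group is not unimodular. A sketch of the computation (SymPy, ad hoc names):

```python
import sympy as sp

x1, x2, g1, g2 = sp.symbols('x1 x2 g1 g2')
x = sp.Matrix([x1, x2])

def mu(a, b):
    """The group product of Example 1."""
    return sp.Matrix([a[0] + b[0], sp.exp(a[0]) * b[1] + a[1]])

g_inv = sp.Matrix([-g1, -sp.exp(-g1) * g2])   # iota(g)
lam_g_inv = mu(g_inv, x)                      # lambda_{g^{-1}}

# (lambda_g)_* Leb (A) = Leb(lambda_{g^{-1}}(A)); the Jacobian of lambda_{g^{-1}}
# is constant, so its determinant is the modular function Delta(g).
print(sp.simplify(lam_g_inv.jacobian(x).det()))   # exp(-g1)
```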

Warning  

A left invariant object is not automatically killed by the Lie derivative along a left invariant vector field! The Lie derivative associated to left invariance is the one along a right invariant vector field, and vice versa. This is immediate from Remark 4.

One sees this also in Example 3. The commutator (Lie bracket) of the left-invariant vector fields is \[ [\partial_1, e^{g_1} \partial_2 ] = e^{g_1} \partial_2 ,\] while for the right-invariant fields \[ [\partial_1 + g_2 \partial_2, \partial_2 ] = - \partial_2. \]

The exception is scalar objects: a function that is either left or right invariant must be constant, and hence killed by all derivations. (Note that due to the lack of a preferred Riemannian metric on general Lie groups, one cannot use Hodge duality to conclude the same about volume forms, as we can see that the Lie derivative of the left invariant volume form $e^{-x_1} \mathrm{d}x_1 \wedge \mathrm{d} x_2$ by the left invariant vector field $\partial_1$ is not zero.)
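The two bracket computations above are quick to verify symbolically; here is a minimal sketch (SymPy, representing a vector field by its list of coordinate coefficients; the helper `bracket` is ad hoc).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = (x1, x2)

def bracket(X, Y):
    """Lie bracket [X, Y] of vector fields given as coefficient lists over (d_1, d_2)."""
    return [sp.simplify(sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
                            for j in range(2))) for i in range(2)]

L1, L2 = [1, 0], [0, sp.exp(x1)]   # left-invariant frame of Example 3
R1, R2 = [1, x2], [0, 1]           # right-invariant frame of Example 3

print(bracket(L1, L2))   # [0, exp(x1)]  i.e.  e^{x1} d_2
print(bracket(R1, R2))   # [0, -1]       i.e.  -d_2
```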

In view of the previous Warning, one expects that objects that are both left- and right- invariant are somehow special. A manifestation of this is the following statement, that says that differential forms that are both left- and right- invariant are "constant".

Lemma  
If $\omega$ is a differential form on $G$ that is both left- and right- invariant, then $\omega$ is a closed form.
Proof:
Let $p$ be the degree of $\omega$, and let $X_0, \ldots, X_{p}$ be left-invariant vector fields. The definition of the exterior derivative gives that \[ \mathrm{d}\omega(X_0, \ldots, X_p) = \frac12 \sum_{i} (-1)^i X_i[ \omega(X_0, \ldots, \hat{X}_{i}, \ldots, X_p) ] + \frac12 \sum_{i} (-1)^{i} (\mathcal{L}_{X_i}\omega)( X_0, \ldots, \hat{X}_{i}, \ldots, X_p). \] Since $\omega$ is left invariant, the scalar $\omega(X_0, \ldots, \hat{X}_i, \ldots, X_p)$ is left invariant and hence constant, so acting on it with $X_i$ yields zero; the first sum vanishes. Since $\omega$ is right invariant, we must also have $\mathcal{L}_{X_i} \omega = 0$, since it is the Lie derivative of a right invariant object by a left invariant vector field; the second sum vanishes too. As the left-invariant fields span every tangent space and $\mathrm{d}\omega$ is tensorial, this shows $\mathrm{d}\omega = 0$.

Exponential map

The invariant vector fields constructed above allow us to define an exponential map.

Let $g \in G$ be an arbitrary point, and let $\mathring{v} \in T_g G$. There exists a unique extension $v$ of $\mathring{v}$ to a left (right) invariant vector field on $G$. Denote by $c(t)$ the integral curve of $v$ through $g$; that is, $c:\mathbb{R}\to G$ is such that

  • $c(0) = g$
  • $ \dot{c}(t) = v_{c(t)} $

The left (right) exponential map of $G$, based at $g$, is the map \[ \exp_g : T_g G \to G \] given by \[ \exp_g(\mathring{v}) = c(1) \] where $c$ is constructed as above.

Now, as $v$ is defined by pushing $\mathring{v}$ forward by diffeomorphisms, its value at any fixed $h\in G$ is linear in $\mathring{v}$. Hence as we change $\mathring{v}$, the equations satisfied by the corresponding $c$ have coefficients depending smoothly on $\mathring{v}$. Therefore by the standard existence theory for ordinary differential equations, we see that $\exp_g$ is a smooth map.

The linearity of $v$ in $\mathring{v}$ tells us that if $c(t)$ is the integral curve for $\mathring{v}$, then the integral curve for $k \mathring{v}$ is $c(kt)$. Extending this shows that the derivative $\mathrm{d} (\exp_g)_0 : T_0 T_g G \cong T_g G \to T_g G$ is the identity map. And so by the inverse function theorem there exists an open neighborhood $N$ of the origin in $T_g G$ such that $\exp_g$ is a diffeomorphism of $N$ to some neighborhood of $g$.
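For the group of Example 1 the left exponential map based at the identity can be written down explicitly: the left-invariant extension of $a\partial_1 + b\partial_2 \in T_e G$ is $a\partial_1 + b e^{x_1}\partial_2$, so its integral curve through $e$ solves $\dot c_1 = a$, $\dot c_2 = b e^{c_1}$. A sketch verifying the resulting closed form (SymPy; assuming $a \neq 0$):

```python
import sympy as sp

t, b = sp.symbols('t b')
a = sp.Symbol('a', nonzero=True)   # assume a != 0 for the closed form below

# Candidate integral curve through e of the left-invariant field a*d_1 + b*e^{x1}*d_2
c1 = a * t
c2 = b * (sp.exp(a * t) - 1) / a

assert sp.simplify(sp.diff(c1, t) - a) == 0               # c1' = a
assert sp.simplify(sp.diff(c2, t) - b * sp.exp(c1)) == 0  # c2' = b * e^{c1}
assert c1.subs(t, 0) == 0 and c2.subs(t, 0) == 0          # c(0) = e

# Evaluating at t = 1 gives the left exponential map at the identity:
print(c1.subs(t, 1), sp.simplify(c2.subs(t, 1)))  # a,  b*(exp(a) - 1)/a
```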

Connections

We can also consider left-invariant connections. (Similarly right invariant ones.) Given a frame $X_1, \ldots, X_n$, a linear connection $\nabla$ is given by the connection coefficients $\Gamma^k_{ij}$ describing \[ \nabla_{X_i} X_j = \sum_k \Gamma^k_{ij} X_k. \] If the connection is left-invariant and we take $X_1, \ldots, X_n$ to also be left-invariant (for example, constructed from a basis of $T_e G$), then the $\Gamma^k_{ij}$ are necessarily constants (depending on the initial basis).

Noting that the space of left-invariant vector fields (which for convenience let us denote by $\mathfrak{G}_{\lambda}$) is finite dimensional (and we can identify it with $T_e G$), we see that the above implies that left-invariant connections are in one-to-one correspondence with bilinear mappings \[ \mathfrak{G}_{\lambda} \times \mathfrak{G}_{\lambda} \to \mathfrak{G}_{\lambda}. \] Note that for a torsion free connection, we need that $\nabla_X Y - \nabla_Y X = [X,Y]$; this constrains the antisymmetric part of the above mapping to be equal to 1/2 the corresponding structural constants of the Lie algebra $\mathfrak{G}_{\lambda}$.

More generally, however, let us return briefly to the construction of the exponential map. There we chose the integral curves $c(t)$ of invariant vector fields as "preferred" curves. To join this with the notion of connections, note that if $c$ is a geodesic, we have that $\nabla_{\dot{c}} \dot{c} = 0$. So we naturally see that connections with the bilinear map vanishing along the diagonal are those for which the left-invariant fields are geodesic. (Note, if a map is bilinear and vanishes along the diagonal, then by polarization we must have that the bilinear map is antisymmetric in its arguments.)

Example  

Let us go back to the Lie group of Example 1, for which we found in Example 3 the right-invariant basis \[ e_1 = \partial_1 + g_2 \partial_2, \quad e_2 = \partial_2.\] The geodesy condition (that all right-invariant fields are geodesic) requires that \[ \Gamma_{ij}^k \] be antisymmetric in the indices $i$ and $j$. This leaves $\Gamma_{12}^\star$ and $\Gamma_{21}^\star$ as the only possibly non-vanishing terms, and they must be negatives of each other. For the connection to be torsion free, we further need compatibility with the structural constants, which takes the form \[ \Gamma_{12}^1 - \Gamma_{21}^1 = 0, \qquad \Gamma_{12}^2 - \Gamma_{21}^2 = -1. \] Combined with antisymmetry, this forces the only nonvanishing connection coefficients to be \[ \Gamma_{12}^2 = -\frac12, \qquad \Gamma_{21}^2 = \frac12 .\]
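The little linear system above can be handed to a computer algebra system as a check; a sketch (SymPy, with the symbol `G_ij_k` standing for $\Gamma_{ij}^k$ in the right-invariant frame):

```python
import sympy as sp

idx = (1, 2)
Gamma = {(i, j, k): sp.Symbol(f'G_{i}{j}_{k}') for i in idx for j in idx for k in idx}

eqs = []
# geodesy of the right-invariant fields: the bilinear map vanishes on the diagonal,
# equivalently Gamma_{ij}^k is antisymmetric in i and j
for i in idx:
    for j in idx:
        for k in idx:
            eqs.append(sp.Eq(Gamma[i, j, k] + Gamma[j, i, k], 0))
# torsion-freeness against the structure constants [e1, e2] = -e2
eqs.append(sp.Eq(Gamma[1, 2, 1] - Gamma[2, 1, 1], 0))
eqs.append(sp.Eq(Gamma[1, 2, 2] - Gamma[2, 1, 2], -1))

print(sp.solve(eqs, list(Gamma.values())))
# everything vanishes except G_12_2 = -1/2 and G_21_2 = 1/2
```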

This example also shows that requiring the right-invariant fields to be geodesic, together with torsion-freeness, may be incompatible with (inverse) metric compatibility.

A right-invariant (inverse) (semi-)Riemannian metric will necessarily take the form \[ a e_1 \otimes e_1 + b (e_2 \otimes e_2) + c (e_1 \otimes e_2 + e_2 \otimes e_1 ) \] with $a$, $b$, $c$ constants. For a connection to be metric compatible, we need that $\nabla$ of the metric vanishes. This requires \[ 2 a \Gamma_{\star 1}^1 e_1 \otimes e_1 + a \Gamma_{\star 1}^2 (e_1 \otimes e_2 + e_2 \otimes e_1) + 2 b \Gamma_{\star 2}^2 e_2 \otimes e_2 + b \Gamma_{\star 2}^1 (e_1 \otimes e_2 + e_2 \otimes e_1) + 2 c \Gamma_{\star 1}^2 e_2 \otimes e_2 + 2c \Gamma_{\star 2}^1 e_1 \otimes e_1 + c (\Gamma_{\star 1}^1 + \Gamma_{\star 2}^2)(e_1 \otimes e_2 + e_2 \otimes e_1) = 0. \] Simplifying, we see that we need \[ \begin{aligned} a \Gamma_{\star 1}^1 + c \Gamma_{\star 2}^1 & = 0 \newline b \Gamma_{\star 2}^2 + c \Gamma_{\star 1}^2 & = 0 \newline a \Gamma_{\star 1}^2 + b \Gamma_{\star 2}^1 + c (\Gamma_{\star 1}^1 + \Gamma_{\star 2}^2) &= 0 \end{aligned} \] Plugging in what we know about the coefficients, these reduce to \[ b \Gamma_{12}^2 = c \Gamma_{21}^2 = 0 = a \Gamma_{21}^2 = c \Gamma_{12}^2 ,\] which means that the only solution is the trivial one.

On the other hand, it may be the case that we don't have an invariant inverse semi-Riemannian metric, yet still have an invariant symmetric bilinear form; this is the case where the form is degenerate. Let $\theta_1, \theta_2$ be the dual one forms to $e_1, e_2$, so that $\theta_i(e_j) = \delta_{ij}$. Since $\theta_j(e_k)$ is constant, we have \[ 0 = \nabla_{e_i} (\theta_j(e_k)) = (\nabla_{e_i}\theta_j)(e_k) + \theta_j( \sum_{\ell} \Gamma_{ik}^{\ell} e_{\ell}), \] so we conclude (as usual) \[ \nabla_{e_i} \theta_j = - \sum_k \Gamma^j_{ik} \theta_k. \] The compatibility condition for a symmetric bilinear form \[ a \theta_1 \otimes \theta_1 + b \theta_2\otimes \theta_2 + c (\theta_1 \otimes \theta_2 + \theta_2 \otimes \theta_1 ) \] can be analogously computed to read \[ \begin{aligned} a \Gamma_{\star 1}^1 + c \Gamma_{\star 1}^2 &= 0 \newline b \Gamma_{\star 2}^2 + c \Gamma_{\star 2}^1 & = 0 \newline a \Gamma_{\star 2}^1 + b \Gamma_{\star 1}^2 + c \Gamma_{\star 1}^1 + c \Gamma_{\star 2}^2 &= 0 \end{aligned} \] Unlike the case of the inverse metric, one sees that the geodesy and torsion free conditions are now compatible with the solution $a = 1$ (or indeed any constant) and $b = c = 0$. This solution is in fact the well-known Killing form for our group. Notice that the form is still only semi-definite, and is a degenerate metric.
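As a cross-check, the Killing form can be computed directly from the structure constants $[e_1, e_2] = -e_2$ of the right-invariant frame; a sketch (SymPy):

```python
import sympy as sp

# ad_X written as a matrix acting on coordinates in the basis (e1, e2),
# using [e1, e2] = -e2 from Example 3.
ad_e1 = sp.Matrix([[0, 0], [0, -1]])   # ad_e1(e1) = 0,   ad_e1(e2) = -e2
ad_e2 = sp.Matrix([[0, 0], [1, 0]])    # ad_e2(e1) = e2,  ad_e2(e2) = 0

ads = [ad_e1, ad_e2]
K = sp.Matrix(2, 2, lambda i, j: (ads[i] * ads[j]).trace())   # Killing form K(e_i, e_j)
print(K)   # Matrix([[1, 0], [0, 0]]), i.e. a = 1, b = c = 0
```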

Function spaces

Using what we have discussed so far we can define some of the classical function spaces on Lie groups.

Lebesgue spaces

The presence of the Haar measures gives canonical measures (depending on whether you are working with left- or right- invariant definitions) on the manifold. We can use this to define the usual Lebesgue spaces $L^p$ on the manifold.

A crucial thing to keep in mind is that, for general Lie groups, the left- and right- Haar measures can give rise to unequal spaces. Consider what we worked out in Example 6. The right-invariant Haar measure is identical to the Lebesgue measure on $\mathbb{R}^2$, and so admits the usual $L^p$ functions. But the left-invariant Haar measure carries an exponential weight. So if $\phi$ is any compactly supported, non-negative (and not identically zero) continuous function on $\mathbb{R}$, and $\psi(t) = \int_{-\infty}^t \phi(s) ~\mathrm{d}s$, we see that the function \[ (x_1, x_2) \mapsto \psi(x_1) \phi(x_2) \] is in $L^p$ with respect to the left Haar measure for any $p \in [1,\infty]$, while it is not in $L^p$ with respect to the right Haar measure for any $p\in [1,\infty)$.

On the other hand, the function \[ (x_1, x_2) \mapsto \frac{\phi(x_2)}{1 + |x_1|^2} \] is in $L^p$ with respect to the right Haar measure for any $p \in [1,\infty]$, while it is not in $L^p$ with respect to the left Haar measure for any $p \in [1,\infty)$.
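These integrability claims can also be illustrated numerically; the sketch below treats only the $p = 1$ case of the first example, and uses a Gaussian in place of the compactly supported bump $\phi$ for convenience, so it is an illustration rather than a proof.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# Stand-in bump phi(s) = exp(-s^2), with psi(t) = int_{-inf}^t phi(s) ds.
psi = lambda t: 0.5 * np.sqrt(np.pi) * (1.0 + erf(t))

# f(x1, x2) = psi(x1) * phi(x2) has product structure, so compare the x1-factors.
left_factor, _ = quad(lambda t: psi(t) * np.exp(-t), -np.inf, np.inf)
print(left_factor)      # finite: integrable against the left Haar weight e^{-x1}
for R in (10, 100, 1000):
    val, _ = quad(psi, -10.0, R)
    print(R, val)       # grows roughly like sqrt(pi) * R: diverges without the weight
```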

Hölder spaces

By means of the invariant vector fields, we have a homogeneous way of looking at derivatives of functions. Using the inherent smooth structure of the underlying manifold we can speak of continuity and differentiability. To measure a function's derivatives, however, we need to know how we are taking the derivatives. So we can fix a basis $\mathcal{B}$ of the (left/right) invariant vector fields $\mathfrak{G}$ (recall that this is finite dimensional). We can then define the $C^k$ seminorm of a function $f$, for $k$ a non-negative integer, to be \[ \sup_{Z_\star} \sup_G | Z_1 Z_2\cdots Z_k f| \] where the inner supremum is taken over points on $G$ and the outer supremum is over all strings $Z_1, \ldots, Z_k$ drawn from $\mathcal{B}$.

Note that since $\mathfrak{G}$ is a finite-dimensional vector space, the families of seminorms defined relative to different choices of basis $\mathcal{B}$ are necessarily comparable, with a constant depending on the number of derivatives $k$.

The semi-norms are however not necessarily comparable between the left- and right- varieties. Again let us consider the Lie group of Example 1 with the invariant fields found in Example 3. Let $\phi$ again be a compactly supported smooth function on the real numbers. Consider the function \[ (x_1, x_2) \mapsto \phi(x_2).\] Using the basis of right-invariant fields found in Example 3, this function has bounded $C^1$ seminorm. However, using the basis of left-invariant fields found there, this function has unbounded $C^1$ seminorm. Similarly, if we instead consider the function \[ (x_1, x_2) \mapsto \phi(x_1) \sin(x_2) \] this function has bounded $C^1$ seminorm with respect to the left-invariant fields, but its $C^1$ seminorm relative to the right-invariant fields is unbounded (the derivative along $\partial_1 + x_2\partial_2$ contains a term growing linearly in $x_2$).
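The derivative computations behind these claims are easy to reproduce symbolically; a sketch (SymPy, with $\phi$ kept as an abstract function):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
phi = sp.Function('phi')   # stands in for the compactly supported bump

def apply_field(coeffs, f):
    """Apply the vector field coeffs[0]*d_1 + coeffs[1]*d_2 to the function f."""
    return sp.expand(coeffs[0] * sp.diff(f, x1) + coeffs[1] * sp.diff(f, x2))

left  = ([1, 0], [0, sp.exp(x1)])   # left-invariant frame of Example 3
right = ([1, x2], [0, 1])           # right-invariant frame of Example 3

f = phi(x2)
print([apply_field(Z, f) for Z in right])  # [x2*phi'(x2), phi'(x2)]: bounded (compact support)
print([apply_field(Z, f) for Z in left])   # [0, exp(x1)*phi'(x2)]: unbounded as x1 -> +infinity

g = phi(x1) * sp.sin(x2)
print([apply_field(Z, g) for Z in left])   # [phi'(x1)*sin(x2), exp(x1)*phi(x1)*cos(x2)]: both bounded
print([apply_field(Z, g) for Z in right])  # first entry contains x2*phi(x1)*cos(x2): unbounded in x2
```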

We can also augment the classical $C^k$ spaces with their Hölder counterparts. For this, however, we need to have a way of measuring distances on $G$. Note that, crucially, for defining Hölder spaces we only care about "local" notions of distance; we don't really care about points that are further than a fixed distance apart.

So fixing a basis of $\mathfrak{G}$ (either left or right invariant), we can examine the (left/right) exponential map.

Below let's focus on the case of left-invariance for convenience. The group properties mean that if $\mathring{v}\in T_g G$, and if $v$ is its invariant extension, then $\lambda_h( \exp_g(\mathring{v})) = \exp_{hg}(v_{hg})$. This means that the open neighborhood on which the exponential map is a diffeomorphism is also left-invariant.

In other words, one can think of there being a bounded open neighborhood $N$ of the origin in $\mathfrak{G}$ such that for any $g\in G$, $\exp_g: N \to G$ is a diffeomorphism onto its image, which we will call $B_g$.

This allows us to define a distance function between $g$ and points in $B_g$. Choose any basis of $\mathfrak{G}$. A point $h \in B_g$ can be uniquely identified with a linear combination of said basis, using the above diffeomorphism. We can then define the distance $d(g,h)$ to be the $\ell_2$ norm of the coefficients of this linear combination. (We can use any $\ell_p$; they are all equivalent.) We note that $B_g$ depends only on the Lie group structure, and not on the basis chosen for $\mathfrak{G}$, and hence the distance functions corresponding to two different bases are also comparable.

Then for $\alpha\in (0,1]$ we can define the $\alpha$-Hölder seminorms exactly analogously to the classical case: \[ f\mapsto \sup_{g\in G} \sup_{h\in B_g\setminus\{g\}} \frac{ | f(h) - f(g) |}{d(g,h)^\alpha} .\]

Again, the seminorms are comparable between different basis representations of left (right) invariant vector fields, but the left and right versions are not in general comparable. Again, let $\phi$ be a bump function so that it equals $1$ on $[0,10]$ and vanishes outside $[-10,20]$. Relative to the bases defined in Example 3 we can consider the two functions \[ (x_1, x_2) \mapsto \phi(x_2) \sin(x_2) , \quad \text{ and } (x_1, x_2) \mapsto \phi(x_1) \sin(x_2) \] to see that the Hölder spaces thus defined are not equal between the left- and right- variants.
