
Regularity of calibrated sub-actions for circle expanding maps and Sturmian optimization

Published online by Cambridge University Press:  27 April 2022

RUI GAO*
Affiliation:
College of Mathematics, Sichuan University, Chengdu 610064, China

Abstract

In this short and elementary note, we study some ergodic optimization problems for circle expanding maps. We first observe that if a function is not far from being convex, then its calibrated sub-actions are closer to convex functions in a certain effective way. As an application of this simple observation, for the circle doubling map, we generalize a result of Bousch saying that translations of the cosine function are uniquely optimized by Sturmian measures. Our argument follows the main line of Bousch’s original proof, while some technical parts are simplified by the observation mentioned above, and no numerical calculation is needed.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1 Introduction

Let X be a compact metric space and let $C(X)$ denote the collection of real-valued continuous functions on X. Let $T:X\to X$ be a continuous map. For the topological dynamical system $(X,T)$ , let ${\mathcal {M}}(X,T)$ denote the collection of T-invariant Borel probability measures on X. The ergodic optimization problem for $(X,T)$ asks, for a given $f\in C(X)$ , what the extreme values (maximum or minimum) of $\mu \mapsto \int _{X} f\,{{d}}\mu $ over $\mu \in {\mathcal {M}}(X,T)$ are, and which measures attain them. For an overview of the research topic of ergodic optimization, one may refer to Jenkinson’s survey papers [10, 11] and Bochi’s survey paper [2]. See also Contreras, Lopes, and Thieullen [6], and Garibaldi [9].

Since

$$ \begin{align*} \min_{\mu\in{\mathcal{M}}(X,T)}\int_{X} f\,{{d}}\mu=-\max_{\mu\in{\mathcal{M}}(X,T)}\int_{X} (-f)\,{{d}}\kern-1pt\mu, \end{align*} $$

to be definite, we shall mainly focus on the maximum problem and denote

$$ \begin{align*} \beta(f):=\max_{\mu\in {\mathcal{M}}(X,T)}\int_{X} f\,{{d}}\kern-1pt\mu, \end{align*} $$
$$ \begin{align*} {\mathcal{M}}_{\mathrm{max}}(f):=\bigg\{\mu\in {\mathcal{M}}(X,T) : {\int_{X}} f\,{{d}}\kern-1pt\mu=\beta(f) \bigg\}. \end{align*} $$

Measures contained in ${\mathcal {M}}_{\mathrm{max}}(f)$ are called maximizing measures of f for $(X,T)$ . Given $f\in C(X)$ , a basic tool for studying maximizing measures is what is usually referred to as Mañé’s lemma, which aims at establishing the existence of some $g\in C(X)$ , called a sub-action of f, such that

$$ \begin{align*} {\tilde{f}}:= f+g-g\circ T\le \beta(f) \end{align*} $$

holds. Its usefulness rests on the following simple observation: once such a g exists, $\beta (f)=\beta ({\tilde {f}})$ and ${\mathcal {M}}_{\mathrm{max}}(f)={\mathcal {M}}_{\mathrm{max}}({\tilde {f}})$ ; moreover, for any $\mu \in {\mathcal {M}}_{\mathrm{max}}(f)$ , ${\tilde {f}}=\beta (f)$ holds on the support of $\mu $ . For a review of results on the existence or construction of sub-actions, one may refer to [11, §6]; see also [6, Remark 12].

Given $f\in C(X)$ , a particular way to construct a sub-action g of f is to solve the equation below:

(1) $$ \begin{align} g(x)+\beta(f)=\max_{Ty =x}(f(y)+g(y)) \quad \text{for all }\, x\in X. \end{align} $$

A solution $g\in C(X)$ to equation (1) is automatically a sub-action of f, and it is called a calibrated sub-action of f. It is well known that when $(X,T)$ is expanding and f is Hölder continuous, for the one-parameter family of potentials $(tf)_{t>0}$ , the zero-temperature limit points (as $t\to +\infty $ ) of their equilibrium states are contained in ${\mathcal {M}}_{\mathrm{max}}(f)$ ; see, for example, [11, Theorem 4.1] and the references therein. Calibrated sub-actions arise naturally in this limit process; see, for example, [6, Proposition 29]. For expanding $(X,T)$ and Hölder continuous f, the first proofs of the existence of calibrated sub-actions were given by, among others, Savchenko [12] and Bousch [3], with different approaches. Savchenko’s proof is based on the zero-temperature limit argument mentioned above, while Bousch treats equation (1) as a fixed point problem and applies the Schauder–Tychonoff fixed point theorem.

In the first half of this paper, for a smooth expanding system $(X,T)$ , given an observable in $C(X)$ with certain smoothness, we are interested in the regularity (or smoothness) of its calibrated sub-actions. For simplicity, let us focus on the simplest case, in which $T:X\to X$ is a linear circle expanding map; more precisely, $X={\mathbb {T}}:={\mathbb {R}}/{\mathbb {Z}}$ is the unit circle, and $Tx=dx$ for $x\in {\mathbb {T}}$ , where $d\ge 2$ is an integer. Our main results in this part are Theorem 2.3 and Corollary 2.4; roughly, they read as follows.

Theorem A. For $f\in C({\mathbb {T}})$ identified as a function on ${\mathbb {R}}$ of period $1$ , if f can be written as a sum of a convex function and a quadratic function, then any calibrated sub-action g of f for the $x\mapsto dx$ system can be written in the same form. In particular, both one-sided derivatives of g exist everywhere and have at most countably many discontinuities, and the set of points where g is not differentiable is also at most countable.

It might be interesting to compare the statement above with the following result of Bousch and Jenkinson [4, Theorem B]: for the $x\mapsto 2x$ system, there exists a real-analytic function such that any sub-action of it fails to be $C^{1}$ .

In the second half of this paper, as an application of Theorem 2.3, we shall generalize, with a simpler proof, a classic result of Bousch [3] roughly stated below.

Theorem. (Bousch [3, Théorème A])

For each $\omega \in {\mathbb {T}}$ , the function $x\mapsto \cos 2\pi (x-\omega )$ has a unique maximizing measure for the $x\mapsto 2x$ system; moreover, the maximizing measure is Sturmian, that is, supported on a semi-circle.

Our main technical result in the second half of this paper is Theorem 3.5. As a corollary of it, we can easily prove Theorem 3.8, which generalizes [3, Théorème A]. A slightly weaker and easier-to-read version of Theorem 3.8 is as follows.

Theorem B. Let $f:{\mathbb {T}}\to {\mathbb {R}}$ be a $C^{2}$ function with the following properties, where f is identified as a function on ${\mathbb {R}}$ of period $1$ :

  • $f(x+\tfrac 12)=-f(x)$ ;

  • $f(x)=f(-x)$ ;

  • $f^{\prime \prime }\le 0$ on $[-\tfrac 14,\tfrac 14]$ ;

  • $f(0)>{1}/{40}\cdot \max _{x\in {\mathbb {T}}}f^{\prime \prime }(x)$ .

Then for each $\omega \in {\mathbb {T}}$ , the function $x\,{\mapsto}\,f(x-\omega )$ has a unique maximizing measure for the $x\mapsto 2x$ system; moreover, the maximizing measure is Sturmian.

To prove Theorem 3.5, we follow Bousch’s proof of [3, Théorème A], while Theorem 2.3 is used to simplify some technical steps in Bousch’s argument, especially [3, Lemme Technique 2]. No numerical calculation is needed in our argument.

The paper is organized as follows. In §2, we mainly prove Theorem 2.3. The statements are given in §2.1 and the proof is given in §2.2. In §3, we generalize [3, Théorème A]; the main results are Theorem 3.5 and its corollary Theorem 3.8. In §3.1, as preparation, we recall the ideas in Bousch’s proof of [3, Théorème A]. In §3.2, with the help of Theorem 2.3, we prove Theorem 3.5. Finally, Theorem 3.8 is proved in §3.3.

The following notation and conventions are used throughout the paper.

  • For a topological space X, $C(X)$ denotes the collection of real-valued continuous functions on X.

  • For a metric space X, $\mathop{\mathrm{Lip}}(X)$ denotes the collection of real-valued Lipschitz functions on X.

  • For a (one-dimensional) smooth manifold (possibly with boundary) X, $C^{k}(X)$ denotes the collection of real-valued functions that are continuously differentiable up to order k, where $k\ge 1$ is an integer.

  • ${\mathbb {T}}:={\mathbb {R}}/{\mathbb {Z}}$ denotes the unit circle. The following notation will be used when there is no ambiguity.

    • – For $x\in {\mathbb {T}}$ and $y\in {\mathbb {R}}$ , $x + (y~\mathrm {mod}~1)\in {\mathbb {T}}$ will be denoted by $x+y$ for short.

    • – For $a\in {\mathbb {R}}$ and $b\in (a,a+1)$ , the arc $\{x ~\mathrm {mod}~1: x\in [a,b]\}\subset {\mathbb {T}}$ will be denoted by $[a,b]$ for short.

  • A function $f:{\mathbb {T}}\to {\mathbb {R}}$ will be frequently identified as a function on ${\mathbb {R}}$ of period $1$ .

  • ‘Lebesgue almost everywhere’ is abbreviated as ‘a.e.’.

2 Regularity of calibrated sub-actions

Throughout this section, the dynamical system is fixed to be

$$ \begin{align*} T_{d}:{\mathbb{T}}\to{\mathbb{T}}, \quad x\mapsto dx, \end{align*} $$

where $d\ge 2$ is an integer. We focus on the regularity of calibrated sub-actions for this system. Let us begin by introducing the following notation, which expresses the calibrated sub-action equation (1) in a more convenient way. Define ${\mathscr {M}}_{d}:C({\mathbb {T}})\to C({\mathbb {T}})$ as follows:

(2) $$ \begin{align} {\mathscr{M}}_{d} f(x):=\max_{T_{d}y=x}f(y)=\max_{0\le k<d}f\bigg(\frac{x+k}{d}\bigg)\quad \text{for all } f\in C({\mathbb{T}}), \end{align} $$

where, in the expression $f(({x+k})/{d})$ , f is considered as a function on ${\mathbb {R}}$ of period $1$ . Note that by definition, for $f\in C({\mathbb {T}})$ ,

$$ \begin{align*} g \ \text{is a calibrated sub-action of}\ f\ {\iff}\ g\in C({\mathbb{T}}) \quad\text{and}\quad g+\beta(f)={\mathscr{M}}_{d} (f+g). \end{align*} $$

Also note that $\mathop{\mathrm{Lip}}({\mathbb {T}})$ is ${\mathscr {M}}_{d}$ -invariant. The following is well known.

Lemma 2.1. For every $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ , there exists $g\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ such that g is a calibrated sub-action of f. Moreover, if f has a unique maximizing measure, then g is uniquely determined by f up to an additive constant.

For the existence of g, see, for example, [6, Proposition 29(iii)]. For the uniqueness part, see, for example, [3, Lemme C], which is stated for $d=2$ , but the proof there is easily adapted to arbitrary $d\ge 2$ or more general systems; see also [9, Proposition 6.7].
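As an illustration of Lemma 2.1 (a numerical sketch of our own, not part of the paper’s argument), one can approximate a calibrated sub-action on a grid by iterating the normalized operator $g\mapsto {\mathscr {M}}_{d}(f+g)-\max {\mathscr {M}}_{d}(f+g)$ ; the subtracted constant then approximates $\beta (f)$ . We take $f(x)=\cos 2\pi x$ and $d=2$ , for which $\beta (f)=1$ , since the Dirac measure at the fixed point $0$ is invariant and $f(0)=\max f$ . Grid size, interpolation scheme, and iteration count are ad hoc choices.

```python
import numpy as np

N, d = 4096, 2
x = np.arange(N) / N
f = np.cos(2 * np.pi * x)                      # samples of the observable

def periodic_interp(h, pts):
    """Evaluate grid samples h of a 1-periodic function at points pts."""
    xp = np.arange(N + 1) / N
    return np.interp(pts % 1.0, xp, np.append(h, h[0]))

def M(h):
    """Discretized (M_d h)(x) = max_{0<=k<d} h((x+k)/d), cf. equation (2)."""
    return np.max([periodic_interp(h, (x + k) / d) for k in range(d)], axis=0)

g, beta = np.zeros(N), 0.0
for _ in range(500):
    u = M(f + g)
    beta = u.max()          # normalizing constant; approximates beta(f)
    g = u - beta            # normalized so that max g = 0

residual = np.max(np.abs(g + beta - M(f + g)))  # calibration defect on the grid
```

The residual measures how far the computed pair $(g,\beta )$ is from satisfying $g+\beta ={\mathscr {M}}_{2}(f+g)$ on the grid; it is limited by the discretization rather than by the mathematics.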

2.1 Statement of results

To characterize the regularity of calibrated sub-actions and to state Theorem 2.3 later on, we need the following quantity $\eta (\cdot )$ to measure how far a continuous function is away from being convex.

Definition 2.1. Given $f\in C({\mathbb {R}})$ , let $\eta (f)$ be the infimum of $a\in {\mathbb {R}}$ (we adopt the convention that $\inf \varnothing =+\infty $ ) such that the following holds:

$$ \begin{align*} f(x+\delta)+f(x-\delta)-2f(x) \ge -a\delta^{2} \quad\text{for all } x,\delta\in{\mathbb{R}}. \end{align*} $$

By definition, the following is evident.

Lemma 2.2. Given $f\in C({\mathbb {R}})$ and $a\in {\mathbb {R}}$ , $\eta (f)\le a$ if and only if $x\mapsto f(x)+({a}/{2}) x^{2}$ is convex on ${\mathbb {R}}$ .

It will be useful to introduce the following notation closely related to $\eta (\cdot )$ . Given $\delta>0$ , $x\in {\mathbb {R}}$ , and $f\in C({\mathbb {R}})$ , denote

(3) $$ \begin{align} {\xi_{\delta}^{x}(f):=2f(x)-f(x+\delta)-f(x-\delta),} \end{align} $$
(4) $$ \begin{align} \xi_{\delta}^{*}(f):=\sup_{x\in{\mathbb{R}}}\xi_{\delta}^{x}(f). \end{align} $$

Note that by definition,

(5) $$ \begin{align} \eta(f)=\sup_{\delta>0}\delta^{-2}\xi_{\delta}^{*}(f) \quad \text{for all } f\in C({\mathbb{R}}). \end{align} $$

Also note that $\eta (f)$ , $\xi _{\delta }^{x}(f)$ , and $\xi _{\delta }^{*}(f)$ are well defined for $f\in C({\mathbb {T}})$ considered as a function on ${\mathbb {R}}$ of period $1$ . It might be mentioned that when $f\in C({\mathbb {T}})$ , $\xi _{\delta }^{*}(f)\ge 0$ for any $\delta>0$ , and therefore $\eta (f)\ge 0$ , although we shall not use these facts. To see $\xi _{\delta }^{*}(f)\ge 0$ , let us consider f as a continuous function on ${\mathbb {R}}$ of period $1$ . Then f attains its maximum on ${\mathbb {R}}$ at some x, and hence $\xi _{\delta }^{*}(f)\ge \xi _{\delta }^{x}(f)\ge 0$ .
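For a concrete example (our choice), take $f(x)=\cos 2\pi x$ : then $\xi _{\delta }^{x}(f)=2\cos (2\pi x)(1-\cos 2\pi \delta )$ , so $\xi _{\delta }^{*}(f)=2(1-\cos 2\pi \delta )$ and, by equation (5), $\eta (f)=\sup _{\delta>0}2(1-\cos 2\pi \delta )/\delta ^{2}=4\pi ^{2}$ . A quick numerical check:

```python
import numpy as np

x = np.arange(4000) / 4000                     # grid on the circle; contains 0
f = lambda t: np.cos(2 * np.pi * t)

def xi_star(delta):
    """xi*_delta(f) = sup_x [2 f(x) - f(x+delta) - f(x-delta)], cf. (3)-(4)."""
    return np.max(2 * f(x) - f(x + delta) - f(x - delta))

deltas = np.geomspace(1e-4, 0.5, 200)
ratios = np.array([xi_star(dl) / dl ** 2 for dl in deltas])
eta_est = ratios.max()                         # estimate of eta(f), cf. (5)
```

The estimate approaches $4\pi ^{2}$ as the smallest sampled $\delta $ tends to $0$ , in line with the supremum in equation (5) being attained in the limit $\delta \to 0$ for this f.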

Now we are ready to state our main result in this section.

Theorem 2.3. Given $d\ge 2$ , let $f\in C({\mathbb {T}})$ and suppose that there exists a calibrated sub-action $g\in C({\mathbb {T}})$ of f for the $x\mapsto dx$ system. Then we have

(6) $$ \begin{align} \xi_{\delta}^{*}(g)\le \xi_{d^{-1}\delta}^{*}(f)+\xi_{d^{-1}\delta}^{*}(g) \quad \text{for all } \delta>0, \end{align} $$
(7) $$ \begin{align} \eta(g)\le (d^{2}-1)^{-1}\cdot\eta(f). \end{align} $$

The proof of Theorem 2.3 will be given in the next subsection. Let us first state a corollary of it, which indicates, under the regularity condition $\eta (f)<+\infty $ , how regular a calibrated sub-action of f can be. It should be mentioned that, as the referee pointed out to the author, in [3, Lemme B], Bousch already showed that for any $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ and any calibrated sub-action $g\in C({\mathbb {T}})$ of f, g is automatically Lipschitz and $lip(g)\le {lip(f)}/({d-1})$ holds, where $lip(\cdot )$ denotes the Lipschitz constant of a function (although Bousch only considered $d=2$ , his argument works equally well for general $d\ge 2$ ). The corollary below asserts that, under the additional assumption $\eta (f)<+\infty $ , we can say more about the regularity of g.

Corollary 2.4. Given $d\ge 2$ , let $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ satisfy $\eta (f)<+\infty $ and let $g\in C({\mathbb {T}})$ be a calibrated sub-action of f for the $x\mapsto dx$ system. Then the following hold.

  • Both of the one-sided derivatives $g_{\pm }^{\prime }(x):=\lim \nolimits _{\Delta \searrow 0}(({g(x\pm \Delta )-g(x)})/{\pm \Delta })$ of g exist at each $x\in {\mathbb {T}}$ .

  • $g_{\pm }^{\prime }$ are of bounded variation on ${\mathbb {T}}$ ; that is to say, identifying $g_{\pm }^{\prime }$ as functions on ${\mathbb {R}}$ of period $1$ , $g_{\pm }^{\prime }$ are of bounded variation on finite subintervals of ${\mathbb {R}}$ .

  • Given $x\in {\mathbb {T}}$ , $g^{\prime }(x)$ does not exist if and only if $g_{+}^{\prime }(x)>g_{-}^{\prime }(x)$ ; the set $\{x\in {\mathbb {T}}: g^{\prime }(x) \text{ does not exist}\}$ is at most countable.

Proof. By equation (7) in Theorem 2.3, $\eta (g)<+\infty $ . Then, according to Lemma 2.2, identifying g as a function on ${\mathbb {R}}$ of period $1$ , $x\mapsto g(x)+({\eta (g)}/{2})x^{2}$ is convex on ${\mathbb {R}}$ . All the statements follow from basic properties of convex functions.

2.2 Proof of Theorem 2.3

This subsection is devoted to the proof of Theorem 2.3. Let us begin with some common elementary properties shared by $\eta (\cdot )$ and the functionals defined in equations (3) and (4).

Lemma 2.5. Let X be a topological space and recall that $C(X)$ denotes the vector space of real-valued continuous functions on X. Let $\Gamma $ denote the collection of $\gamma : C(X)\to {(-\infty ,+\infty ]}$ satisfying the following assumptions (we adopt the convention $0\cdot (+\infty )=0$ below):

  • $\gamma (f+g)\le \gamma (f)+\gamma (g)$ ;

  • $\gamma (\max \{f,g\})\le \max \{\gamma (f),\gamma (g)\}$ ;

  • $\gamma (af+b)=a\gamma (f)$ for $a\ge 0$ , $b\in {\mathbb {R}}$ .

Then $\Gamma $ has the following properties:

  • if $a,b\ge 0$ and $\alpha ,\beta \in \Gamma $ , then $a\alpha +b\beta \in \Gamma $ ;

  • if $\Lambda \subset \Gamma $ , then $\sup _{\gamma \in \Lambda }\gamma \in \Gamma $ .

Moreover, for $X={\mathbb {R}}$ , $\xi _{\delta }^{x}\in \Gamma $ for any $\delta>0$ , $x\in {\mathbb {R}}$ . As a result, $\xi _{\delta }^{*}\in \Gamma $ for any $\delta>0$ and hence $\eta \in \Gamma $ .

Proof. All the statements are more or less evident and can be easily proved by checking the definition directly. Let us only verify the relatively less obvious fact that

$$ \begin{align*} \xi_{\delta}^{x}(\max\{f,g\}) \le \max\{\xi_{\delta}^{x}(f),\xi_{\delta}^{x}(g)\}, \end{align*} $$

and the rest is left to the reader. To prove the inequality above, denote $h:=\max \{f,g\}$ . If $h(x)=f(x)$ , then

$$ \begin{align*} \xi_{\delta}^{x}(h)={ 2f(x)- h(x-\delta)-h(x+\delta)} \le \xi_{\delta}^{x}(f), \end{align*} $$

where the ‘ $\le $ ’ is due to $f\le h$ ; otherwise, $h(x)=g(x)$ and the same argument shows that $\xi _{\delta }^{x}(h)\le \xi _{\delta }^{x}(g)$ . The proof is done.

The key ingredient in the proof of Theorem 2.3 is the simple fact below. Recall the operator ${\mathscr {M}}_{d}$ defined in equation (2).

Proposition 2.6. Given $d\ge 2$ and $f\in C({\mathbb {T}})$ , the following hold:

(8) $$ \begin{align} \xi_{\delta}^{*}({\mathscr{M}}_{d} f)\le \xi_{d^{-1}\delta}^{*}(f) \quad \text{for all } \delta>0; \end{align} $$
(9) $$ \begin{align} \eta( {\mathscr{M}}_{d} f)\le d^{-2}\cdot\eta(f). \end{align} $$

Proof. Identify f as a function on ${\mathbb {R}}$ of period $1$ . For $0\le k<d$ , let $f_{k}\in C({\mathbb {R}})$ be defined by $f_{k}(x):=f(({x+k})/{d})$ . By definition, ${\mathscr {M}}_{d} f= \max \nolimits _{0\le k<d}f_{k}$ , and

$$ \begin{align*} \xi_{\delta}^{*}(f_{k}) = \xi_{d^{-1}\delta}^{*}(f) \quad \text{for all }0\le k<d, \delta>0. \end{align*} $$

It follows that

$$ \begin{align*} \xi_{\delta}^{*}({\mathscr{M}}_{d} f)=\xi_{\delta}^{*}(\max\limits_{0\le k<d}f_{k})\le \max_{0\le k<d}\xi_{\delta}^{*}(f_{k})=\xi_{d^{-1}\delta}^{*}(f), \end{align*} $$

where the ‘ $\le $ ’ is due to Lemma 2.5. This completes the proof of equation (8). Equation (9) follows from equations (8) and (5) immediately.
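For instance (an illustrative check, not needed for the proof), with $d=2$ and $f(x)=\cos 2\pi x$ one has ${\mathscr {M}}_{2} f(x)=\max (\cos \pi x,-\cos \pi x)=|\cos \pi x|$ , and equation (8) can be verified numerically; in this example equality holds at $x=0$ .

```python
import numpy as np

x = np.arange(4000) / 4000
f = lambda t: np.cos(2 * np.pi * t)
d = 2
Mf = lambda t: np.max([f((t + k) / d) for k in range(d)], axis=0)  # |cos(pi t)|

def xi_star(h, delta):
    # xi*_delta(h) over the grid, cf. equations (3) and (4)
    return np.max(2 * h(x) - h(x + delta) - h(x - delta))

# equation (8): xi*_delta(M_d f) <= xi*_{delta/d}(f)
ok = all(xi_star(Mf, dl) <= xi_star(f, dl / d) + 1e-9 for dl in (0.02, 0.1, 0.3))
```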

Now we are ready to prove Theorem 2.3.

Proof of Theorem 2.3

Since $g+\beta (f)={\mathscr {M}}_{d}(f+g)$ , equation (6) follows from equation (8) in Proposition 2.6 and Lemma 2.5 directly. To prove equation (7), first note that by equations (6) and (5), the following holds for any $\delta>0$ :

$$ \begin{align*} \xi_{\delta}^{*}(g)\le \xi_{d^{-1}\delta}^{*}(f)+\xi_{d^{-1}\delta}^{*}(g)\le (d^{-1}\delta)^{2}\cdot\eta(f)+ \xi_{d^{-1}\delta}^{*}(g). \end{align*} $$

Iterating this inequality and noting that $\xi _{d^{-n}\delta }^{*}(g)\to 0$ as $n\to \infty $ , we obtain that

$$ \begin{align*} \xi_{\delta}^{*}(g) \le \sum_{n=1}^{\infty} (d^{-n}\delta)^{2}\cdot \eta(f) =\delta^{2}\cdot(d^{2}-1)^{-1}\cdot\eta(f). \end{align*} $$

Since $\delta>0$ is arbitrary, equation (7) follows.

3 Sturmian optimization

In this section, we fix $d=2$ and denote $Tx=2x$ , $x\in {\mathbb {T}}$ . For this system, we shall show that certain functions are uniquely maximized by Sturmian measures, which generalizes the result of Bousch [3, Théorème A]. The results are stated in Theorem 3.5 and its consequences, Corollary 3.6 and Theorem 3.8. Recall that for any closed semicircle $S\subset {\mathbb {T}}$ (that is, $S=[\gamma ,\gamma +\tfrac 12]$ for some $\gamma \in {\mathbb {R}}$ ), there exists a unique T-invariant Borel probability measure supported on S, called the Sturmian measure on S. Bullett and Sentenac [5] studied Sturmian measures systematically. See also [8, §6] for the connection of Sturmian measures with symbolic dynamics.
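To make this concrete (an illustrative computation of our own; the systematic theory is due to Bullett and Sentenac), Sturmian measures of rational rotation number are supported on periodic orbits of T contained in the semicircle. The sketch below enumerates the periodic orbits of $x\mapsto 2x$ of period at most $6$ lying in the closed semicircle $S=[\tfrac 14,\tfrac 34]$ ; only the $2$ -cycle $\{\tfrac 13,\tfrac 23\}$ survives, and the Sturmian measure on this S is the uniform measure on that orbit (rotation number $\tfrac 12$ ).

```python
from fractions import Fraction

def orbit(p, q):
    """Forward orbit of the period-q point p/(2^q - 1) under x -> 2x (mod 1)."""
    x = Fraction(p, 2 ** q - 1)
    pts = set()
    while x not in pts:
        pts.add(x)
        x = (2 * x) % 1
    return frozenset(pts)

def inside(pts, a):
    """True iff the whole orbit lies in the closed semicircle [a, a + 1/2]."""
    return all(a <= y <= a + Fraction(1, 2) for y in pts)

a = Fraction(1, 4)
found = set()
for q in range(1, 7):
    for p in range(2 ** q - 1):
        o = orbit(p, q)
        if inside(o, a):
            found.add(o)
```

Exact rational arithmetic via `Fraction` avoids any floating-point boundary issues at the endpoints of the closed arc.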

3.1 Bousch’s criterion of Sturmian condition

This subsection is devoted to reviewing the basic ideas of Bousch’s proof of [3, Théorème A].

Definition 3.1. (Bousch [3, p. 497])

Let $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ and let $g\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ be a calibrated sub-action of f for the $x\mapsto 2x$ system. Denote

(10) $$ \begin{align} R(x)=R_{f,g}(x):=f(x)+g(x)-f\big(x+\tfrac{1}{2}\big)-g\big(x+\tfrac{1}{2}\big), \quad x\in{\mathbb{T}}. \end{align} $$

If $\{x\in {\mathbb {T}} :R(x)=0\}$ consists of a single pair of antipodal points, then we say that $(f,g)$ satisfies the Sturmian condition. We also say that f satisfies the Sturmian condition if $(f,g)$ satisfies the Sturmian condition for some g.

By definition, it is easy to see that the following holds; it is a combination of the Proposition of [3, p. 497] and [3, Lemme C].

Lemma 3.1. (Bousch [3])

If $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ satisfies the Sturmian condition, then f admits a unique maximizing measure. Moreover, this measure is a Sturmian measure, and up to an additive constant, there exists a unique calibrated sub-action of f.

Remark. As a corollary, if $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ satisfies the Sturmian condition, then the zero-temperature limit (as $t\to +\infty $ ) of the equilibrium states of $tf$ ( $t>0$ ) exists. See, for example, [11, Theorem 4.1] for details.

The basic idea of Bousch [3] for verifying the Sturmian condition is summarized below.

Lemma 3.2. Let $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ and let $g\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ be a calibrated sub-action of f. Assume that for R defined by equation (10), there exist $a\in {\mathbb {R}}$ and $b\in (a,a+\tfrac 12)$ with the following properties:

  1. (S1) $R>0$ on $[a,b]$ ;

  2. (S2) R is strictly decreasing on $[b,a+\tfrac 12]$ .

Then $(f,g)$ satisfies the Sturmian condition.

Let us provide a proof for completeness.

Proof. By definition, $R(x+\tfrac 12)=-R(x)$ , so that $\{x\in {\mathbb {T}}: R(x)=0\}$ is non-empty and consists of pairs of antipodal points. On the other hand, assumptions (S1) and (S2) imply that $\{x\in {\mathbb {T}}: R(x)=0\}\cap [a,a+\tfrac 12]$ contains at most one point. The conclusion follows.

To get an effective test for assumption (S1) in Lemma 3.2, Bousch made the following observation in his proof of [3, Lemme Technique 1]. Recall the notation introduced in equations (3) and (4).

Lemma 3.3. Let $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ and let $g\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ be a calibrated sub-action of f. Then for any $n\ge 2$ and any $x\in {\mathbb {T}}$ , we have

(11) $$ \begin{align} -2\bigg(g(x)-g\bigg(x+\frac{1}{2}\bigg)\bigg) \le \max_{y\in T^{-(n-1)}(x+{1}/{2})} \sum_{k=2}^{n} \xi_{2^{-k}}^{T^{n-k}y}(f) + \xi_{2^{-n}}^{*}(g). \end{align} $$

Let us reproduce Bousch’s proof for completeness.

Proof. We follow the beginning of the proof of [3, Lemme Technique 1]. Without loss of generality, assume that $\beta (f)=0$ for simplicity, so that equation (1) becomes

$$ \begin{align*} g(x)=\max_{Ty=x}(f(y)+g(y))\quad \text{for all } x\in{\mathbb{T}}. \end{align*} $$

To prove equation (11), fix $x\in {\mathbb {T}}$ , denote $x_{1}=x+\tfrac {1}{2}$ , and for each $n\ge 2$ , choose $x_{n}\in T^{-1}x_{n-1}$ such that

$$ \begin{align*} g(x_{n-1})=f(x_{n})+g(x_{n}) \end{align*} $$

inductively on n. Noting that $T(x_{n}\pm 2^{-n})=x_{n-1}\pm 2^{-n+1}$ , we have

$$ \begin{align*} g(x_{n-1}+2^{-n+1}) \ge f(x_{n}+2^{-n})+ g(x_{n}+2^{-n}), \end{align*} $$
$$ \begin{align*} g(x_{n-1}-2^{-n+1})\ge f(x_{n}-2^{-n})+ g(x_{n}-2^{-n}). \end{align*} $$

It follows that for each $n\ge 2$ ,

$$ \begin{align*} \xi_{2^{-(n-1)}}^{x_{n-1}}(g) \le \xi_{2^{-n}}^{x_{n}}(f) +\xi_{2^{-n}}^{x_{n}}(g), \end{align*} $$

and therefore

$$ \begin{align*} -2\bigg(g(x)-g\bigg(x+\frac{1}{2}\bigg)\bigg) = \xi_{2^{-1}}^{x_{1}}(g) \le \sum_{k=2}^{n} \xi_{2^{-k}}^{x_{k}}(f)+\xi_{2^{-n}}^{x_{n}}(g). \end{align*} $$

The proof of equation (11) is completed.

3.2 Sufficient conditions for the Sturmian condition

In this subsection, based on Lemma 3.2 and Theorem 2.3, we shall prove Theorem 3.5 and Corollary 3.6, which provide sufficient conditions for the Sturmian condition that are easy to check for specific observable f. Let us begin with a simple fact that will be used to verify assumption (S2) in Lemma 3.2.

Lemma 3.4. The following holds for any $f\in \mathop{\mathrm{Lip}}({\mathbb {R}})$ with $\eta (f)<+\infty $ and any $\delta>0$ :

$$ \begin{align*} f^{\prime}(x)-f^{\prime}(x+\delta)\le \eta(f)\cdot\delta \quad \text{a.e.}~x\in{\mathbb{R}}. \end{align*} $$

Proof. Let $\tilde {f}(x):=f(x)+({\eta (f)}/{2})x^{2}$ . Then by Lemma 2.2, $\tilde {f}$ is convex on ${\mathbb {R}}$ , so that for any $\delta>0$ ,

$$ \begin{align*} f^{\prime}(x)-f^{\prime}(x+\delta)-\eta(f)\cdot\delta =\tilde{f}^{\prime}(x)-\tilde{f}^{\prime}(x+\delta)\le 0 \quad \text{a.e.}~x\in{\mathbb{R}}. \end{align*} $$
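For example (a numerical illustration with the choice $f(x)=\cos 2\pi x$ , for which $\eta (f)=4\pi ^{2}$ ), the bound of Lemma 3.4 reads $f^{\prime }(x)-f^{\prime }(x+\delta )\le 4\pi ^{2}\delta $ ; indeed $f^{\prime }(x)-f^{\prime }(x+\delta )=4\pi \cos (2\pi x+\pi \delta )\sin (\pi \delta )\le 4\pi \sin (\pi \delta )\le 4\pi ^{2}\delta $ . A direct check:

```python
import numpy as np

eta = 4 * np.pi ** 2                                 # eta(f) for f(x) = cos(2*pi*x)
fp = lambda t: -2 * np.pi * np.sin(2 * np.pi * t)    # f'

x = np.linspace(0.0, 1.0, 20001)
# Lemma 3.4: f'(x) - f'(x + delta) <= eta(f) * delta (a.e.; here everywhere)
ok = all(np.max(fp(x) - fp(x + dl)) <= eta * dl + 1e-12
         for dl in (0.01, 0.1, 0.3, 0.5))
```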

Our main technical result in this section is the following. Its statement might look complicated, so we shall present a simplified version in Corollary 3.6 and further deduce another result in Theorem 3.8.

Theorem 3.5. Let $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ satisfy $\eta (f)<+\infty $ . Suppose that there exist $a\in {\mathbb {R}}$ and $b\in (a,a+\tfrac 12)$ such that equations (12) and (13) below hold:

(12) $$ \begin{align} f(x)-f\bigg(x+\frac{1}{2}\bigg)-\frac{1}{2}\cdot\max_{y\in T^{-1}(x+{1}/{2})}\xi_{{1}/{4}}^{y}(f)>\frac{1}{96}\eta(f)\quad \text{for all } x\in [a,b]; \end{align} $$
(13) $$ \begin{align} f^{\prime}(x)-f^{\prime}\big(x+\tfrac{1}{2}\big)<-\tfrac{1}{6}\eta(f) \quad\text{a.e.}~x\in \big[b,a+\tfrac{1}{2}\big]. \end{align} $$

Then f satisfies the Sturmian condition.

Proof. Let $g\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ be a calibrated sub-action of f. Note that by equation (7), $\eta (g)\le \tfrac 13\eta (f)$ . It suffices to verify the two assumptions (S1) and (S2) in Lemma 3.2.

Verifying (S1). First, taking $n=2$ in equation (11) yields that

$$ \begin{align*} g(x)-g\big(x+\tfrac{1}{2}\big)\ge - \tfrac{1}{2}\cdot\max_{y\in T^{-1}(x+{1}/{2})}\xi_{{1}/{4}}^{y}(f) - \tfrac{1}{2}\cdot \xi_{{1}/{4}}^{*}(g). \end{align*} $$

Second, due to equation (5) and $\eta (g)\le \tfrac 13\eta (f)$ ,

$$ \begin{align*} \xi_{{1}/{4}}^{*}(g)\le \frac{1}{16}\eta(g)\le \frac{1}{48}\eta(f). \end{align*} $$

Combining the inequalities in the two displayed lines above with equation (12) and the definition of R, assumption (S1) is verified.

Verifying (S2). It suffices to show that $R^{\prime }<0$ a.e. on $[b,a+\tfrac 12]$ . Identify g as a function on ${\mathbb {R}}$ of period $1$ . Applying Lemma 3.4 to g with $\delta =\tfrac 12$ , we have

$$ \begin{align*} g^{\prime}(x)-g^{\prime}(x+\tfrac{1}{2})\le \tfrac{1}{2}\eta(g) \quad \text{a.e.}~x\in{\mathbb{R}}. \end{align*} $$

Combining this with $\eta (g)\le \tfrac 13\eta (f)$ , equation (13), and the definition of R, the conclusion follows.

To simplify the expressions in equations (12) and (13), let us add certain symmetry ((A0) in Definition 3.2) to f and introduce the following families of functions. Then the statement of Theorem 3.5 can be reduced to Corollary 3.6 below.

Definition 3.2. Given $a\in {\mathbb {R}}$ , $b\in (a,a+\tfrac 12)$ and $v\in {\mathbb {R}}$ , let ${\mathcal {A}}_{a,b}^{v}$ denote the collection of $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ such that the assumptions (A0), (A1), (A2) below hold.

  1. (A0) $f(x)+f(x+\tfrac 12)=2v$ for $x\in {\mathbb {T}}$ .

  2. (A1) $2f(x)-v -\max _{y\in {\mathbb {T}}} f(y)> ({1}/{96})\eta (f)$ for $x\in [a,b]$ .

  3. (A2) $f^{\prime }(x)<- ({1}/{12})\eta (f)$ for a.e. $x\in [b,a+\tfrac 12]$ .

Moreover, denote ${\mathcal {A}}_{a,b}:=\bigcup _{v\in {\mathbb {R}}}{\mathcal {A}}_{a,b}^{v}$ .

Remark. By definition, if $f\in {\mathcal {A}}_{a,b}^{v}$ , then for $f_{\omega }(x):=f(x-\omega )$ , $f_{\omega } \in {\mathcal {A}}_{a+\omega ,b+\omega }^{v}$ for any $\omega \in {\mathbb {R}}$ . Moreover, ${\mathcal {A}}_{a,b}$ is a ‘cone’ in the sense that if $f,g\in {\mathcal {A}}_{a,b}$ and $t>0$ , then $tf, f+g\in {\mathcal {A}}_{a,b}$ . This can be easily checked by definition and Lemma 2.5.

In the statement below, by saying that a T-invariant measure is a minimizing measure of $f\in C({\mathbb {T}})$ , we mean that it is a maximizing measure of $-f$ .

Corollary 3.6. For ${\mathcal {A}}_{a,b}$ defined in Definition 3.2, let $f\in {\mathcal {A}}_{a,b}$ . Then for any $\omega \in {\mathbb {R}}$ , $f_{\omega }(x):=f(x-\omega )$ satisfies the Sturmian condition. As a result, $f_{\omega }$ admits a unique maximizing measure and a unique minimizing measure, and both of them are Sturmian measures.

Remark. Results in [1, Theorem 1] and [7, Theorem 4.1] are of the same style as Corollary 3.6 but cannot be recovered by it.

Proof. Let $f\in {\mathcal {A}}_{a,b}^{v}$ for some $v\in {\mathbb {R}}$ . According to the remark preceding the corollary, to show that $f_{\omega }$ satisfies the Sturmian condition, it suffices to prove that f satisfies the Sturmian condition. First, since f satisfies (A0),

$$ \begin{align*} f(x)-f\big(x+\tfrac{1}{2}\big)-\tfrac{1}{2}\cdot\xi_{{1}/{4}}^{y}(f)=2f(x)-v-f(y)\quad \text{for all } x,y\in{\mathbb{T}}. \end{align*} $$

Combining this with assumption (A1), equation (12) holds. Second, combining assumptions (A0) with (A2), equation (13) holds. Therefore, by Theorem 3.5, f satisfies the Sturmian condition.

Finally, since $f_{\omega }$ satisfies the Sturmian condition, by Lemma 3.1, it admits a unique maximizing measure which is Sturmian; since $f_{\omega }=2v-f_{\omega +1/2}$ , the minimizing measure of $f_{\omega }$ is also unique and Sturmian.

3.3 A family of functions looking like cosine

As a concrete application of Corollary 3.6, we can recover the result of Bousch [3, Théorème A]: for $f(x)=\cos 2\pi x$ , $\eta (f)=4\pi ^{2}$ and it is easy to check that $f\in {\mathcal {A}}_{-1/8,1/8}$ (or $f\in {\mathcal {A}}_{-{1}/{10},{1}/{10}}$ ).
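The membership $f\in {\mathcal {A}}_{-1/8,1/8}$ for $f(x)=\cos 2\pi x$ can be checked at the extreme points, and the margin is thin: with $v=0$ , $\max f=1$ , and $\eta (f)=4\pi ^{2}$ , assumption (A1) at $x=\pm \tfrac 18$ reads $\sqrt {2}-1\approx 0.4142>\pi ^{2}/24\approx 0.4112$ , while assumption (A2) on $[\tfrac 18,\tfrac 38]$ reads $-2\pi \sin 2\pi x\le -\sqrt {2}\pi \approx -4.443<-\pi ^{2}/3\approx -3.290$ . A numerical confirmation (our own sketch):

```python
import numpy as np

eta, v, fmax = 4 * np.pi ** 2, 0.0, 1.0              # data for f(x) = cos(2*pi*x)
f  = lambda t: np.cos(2 * np.pi * t)
fp = lambda t: -2 * np.pi * np.sin(2 * np.pi * t)

a, b = -1 / 8, 1 / 8
# (A1) on [a, b]: 2 f(x) - v - max f > eta/96; the margin must be positive
A1_margin = np.min(2 * f(np.linspace(a, b, 10001)) - v - fmax) - eta / 96
# (A2) on [b, a + 1/2]: f'(x) < -eta/12; the margin must be negative
A2_margin = np.max(fp(np.linspace(b, a + 1 / 2, 10001))) + eta / 12
```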

To give a sufficient condition for the Sturmian condition that is more explicit than Corollary 3.6, by mimicking the behavior of the cosine function, we add more properties to the observable f and introduce the following family of functions. A sufficient condition will be given in Theorem 3.8.

Definition 3.3. Let ${\mathcal {B}}$ denote the collection of $f\in \mathop{\mathrm{Lip}}({\mathbb {T}})$ with the following properties.

  • $x\mapsto f(x)+f(x+\tfrac {1}{2})$ is constant on ${\mathbb {T}}$ .

  • $f(-x)=f(x)$ for $x\in {\mathbb {T}}$ .

  • $\eta (f)<+\infty $ .

  • f is concave on $[-\tfrac 14,\tfrac 14]$ .

By definition, the following is evident, where ${\mathrm {esssup}}$ ( ${\mathrm {essinf}}$ ) means the essential supremum (infimum) relative to Lebesgue measure.

Lemma 3.7. ${\mathcal {B}}$ is a subset of $\{f\in C^{1}({\mathbb {T}}): f^{\prime }\in \mathop{\mathrm{Lip}}({\mathbb {T}}),f^{\prime }(0)=0\}$ . Moreover,

$$ \begin{align*} \eta(f)={\mathrm{esssup}}_{x\in{\mathbb{T}}}f^{\prime\prime}(x) =-{\mathrm{essinf}}_{x\in{\mathbb{T}}}f^{\prime\prime}(x) \quad \text{for all } f\in{\mathcal{B}}. \end{align*} $$

Theorem 3.8. Denote $\kappa :={7}/{96}-{\sqrt {3}}/{36}>0$ . Let $f\in {\mathcal {B}}$ and suppose that

$$ \begin{align*} f(0)-f(\tfrac{1}{4})>\kappa\cdot\eta(f). \end{align*} $$

Then $x\,{\mapsto}\,f(x-\omega )$ satisfies the Sturmian condition for any $\omega \in {\mathbb {R}}$ .

Remark.

  • ${7}/{96}-{\sqrt {3}}/{36}<0.02481<{1}/{40}$ ; however,

    $$ \begin{align*} \sup_{f\in {\mathcal{B}}} \frac{f(0)-f({1}/{4})}{\eta(f)}=\frac{1}{32}= 0.03125, \end{align*} $$
    and the supremum is attained at $f\in {\mathcal {B}}$ defined by $f(x)=-x^{2}$ , $x\in [0,\tfrac 14]$ .
  • For $f(x)=\cos 2\pi x\in {\mathcal {B}}$ , $(f(0)-f(\tfrac 14))/\eta (f)={1}/{4\pi ^{2}}>0.02533$ .
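The numerical claims in the remark can be checked directly. The sketch below verifies $\kappa <1/40$ ; the value ${1}/{32}$ for $f(x)=-x^{2}$ , for which $f(0)-f(\tfrac 14)={1}/{16}$ and, via Lemma 3.7 and the symmetries of ${\mathcal {B}}$ , $\eta (f)=2$ (both values computed by hand here, as an assumption of the sketch); and the cosine ratio ${1}/{4\pi ^{2}}$ :

```python
import math
from fractions import Fraction

kappa = 7/96 - math.sqrt(3)/36
assert 0.0248 < kappa < 0.02481 < 1/40

# f(x) = -x^2 on [0, 1/4]: f(0) - f(1/4) = 1/16 and eta(f) = esssup f'' = 2,
# since f'' = -2 on the concave part and +2 elsewhere by the symmetries of B.
ratio_parabola = Fraction(1, 16) / 2
assert ratio_parabola == Fraction(1, 32)

# f(x) = cos(2*pi*x): f(0) - f(1/4) = 1 and eta(f) = 4*pi^2.
ratio_cos = 1 / (4 * math.pi**2)
assert ratio_cos > 0.02533 > kappa   # cosine satisfies the hypothesis of Theorem 3.8
```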

Proof. Without loss of generality, assume that $f(\tfrac 14)=0$ for simplicity. It suffices to find $c\in (0,\tfrac 14)$ such that $f\in {\mathcal {A}}_{a,b}$ for $a=-c$ , $b=c$ . Assumption (A0) is automatically satisfied for $v=0$ . Since f is decreasing on $[0,\tfrac 12]$ and since $f(-x)=f(x)$ , assumption (A1) is reduced to $2f(c)-f(0)>({1}/{96})\eta (f)$ . Since $f^{\prime }$ is decreasing on $[0,\tfrac 14]$ and since $f^{\prime }(\tfrac 12-x)=f^{\prime }(x)$ , assumption (A2) is reduced to $f^{\prime }(c)<-({1}/{12})\eta (f)$ . Moreover, $h:=f(0)-f$ satisfies all the assumptions in Lemma 3.9 below, and assumptions (A1), (A2) are further reduced to assumptions (H1), (H2) in Lemma 3.9. Then, applying Lemma 3.9 to this h, the conclusion follows.

Lemma 3.9. Let $h:[0,\tfrac 14]\to {\mathbb {R}}$ satisfy the following.

  • h is $C^{1}$ , and $h^{\prime }$ is Lipschitz and increasing.

  • $h(0)=h^{\prime }(0)=0$ .

  • $h(\tfrac 14)>\kappa \cdot \eta $ for $\kappa :={7}/{96}-{\sqrt {3}}/{36}$ and $\eta :={\mathrm {esssup}}_{x\in [0,1/4]}h^{\prime \prime }(x)$ .

Then there exists $c\in (0,\tfrac 14)$ such that

  • (H1) $h(\tfrac 14)-2h(c)>{\eta }/{96}$ ;

  • (H2) $h^{\prime }(c)>{\eta }/{12}$ .

Proof. Without loss of generality, we may assume $\eta =1$ , because all the assumptions on h are positively homogeneous in h. Then $h^{\prime \prime }\le 1$ a.e. Let $b\in [0,\tfrac 14]$ be maximal such that $h^{\prime }(b)\le {1}/{12}$ . Since $h^{\prime }(0)=0$ and $h^{\prime \prime }\le 1$ a.e., we have $h^{\prime }(x)\le x$ , so $b\ge {1}/{12}$ and

$$ \begin{align*} h^{\prime}(x)\le \begin{cases} x ,\quad & 0\le x\le \dfrac{1}{12}, \\[3pt] \dfrac{1}{12} ,\quad& \dfrac{1}{12}< x\le b, \\[6pt] \dfrac{1}{12}+x-b ,\quad& b< x\le \dfrac{1}{4}. \end{cases}\! \end{align*} $$

It follows that

$$ \begin{align*} h(b)= \int_{0}^{b} h^{\prime}(x)\,{{d}} x \le \frac{b}{12}-\frac{1}{288}, \end{align*} $$
$$ \begin{align*} h\bigg(\frac{1}{4}\bigg)=h(b)+\int_{b}^{{1}/{4}}h^{\prime}(x)\,{{d}}x\le \frac{5}{288}+\frac{1}{2}\cdot\bigg(\frac{1}{4}-b\bigg)^{2}. \end{align*} $$

Then from

$$ \begin{align*} \frac{7}{96}-\frac{\sqrt{3}}{36}< h\bigg(\frac{1}{4}\bigg)\le \frac{5}{288}+\frac{1}{2}\cdot\bigg(\frac{1}{4}-b\bigg)^{2}, \end{align*} $$

we deduce that $b<b_{0}:=({5-2\sqrt {3}})/{12}<\tfrac 14$ , and hence $h(b)< {b_{0}}/{12}-{1}/{288}$ . It follows that

$$ \begin{align*} h\bigg(\frac{1}{4}\bigg) -2h(b)>\frac{7}{96}-\frac{\sqrt{3}}{36} - \frac{b_{0}}{6}+\frac{1}{144}=\frac{1}{96}. \end{align*} $$

Then the conclusion follows by choosing $c>b$ sufficiently close to b: by the maximality of b, $h^{\prime }(c)>{1}/{12}$ for every $c>b$ , which gives (H2), while (H1) holds by continuity of h once $c-b$ is small enough.
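The exact arithmetic in the proof involves only numbers of the form $p+q\sqrt {3}$ with $p,q$ rational, so it can be verified mechanically. The sketch below checks that $b_{0}=({5-2\sqrt {3}})/{12}$ solves ${5}/{288}+\tfrac 12(\tfrac 14-b)^{2}={7}/{96}-{\sqrt {3}}/{36}$ and that ${7}/{96}-{\sqrt {3}}/{36}-{b_{0}}/{6}+{1}/{144}={1}/{96}$ exactly (the helper functions here are ad hoc, introduced only for this check):

```python
from fractions import Fraction as F

# Represent p + q*sqrt(3) exactly as a pair (p, q) of rationals.
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def mul(a, b):  # (p + q*sqrt3)(r + s*sqrt3) = (pr + 3qs) + (ps + qr)*sqrt3
    return (a[0]*b[0] + 3*a[1]*b[1], a[0]*b[1] + a[1]*b[0])
def scale(a, c): return (a[0]*c, a[1]*c)

kappa = (F(7, 96), F(-1, 36))   # 7/96 - sqrt(3)/36
b0 = (F(5, 12), F(-1, 6))       # (5 - 2*sqrt(3))/12

# Check: 5/288 + (1/4 - b0)^2 / 2 == kappa, so b < b0 in the proof.
t = sub((F(1, 4), F(0)), b0)    # 1/4 - b0 = (sqrt(3) - 1)/6
lhs = add((F(5, 288), F(0)), scale(mul(t, t), F(1, 2)))
assert lhs == kappa

# Check the final display: kappa - b0/6 + 1/144 == 1/96 exactly.
final = add(sub(kappa, scale(b0, F(1, 6))), (F(1, 144), F(0)))
assert final == (F(1, 96), F(0))
```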

Motivated by Theorem 3.8, let us end this subsection with the following question that might be of interest.

Question. What is the optimal (that is, smallest) constant $\kappa>0$ such that, whenever $f\in {\mathcal {B}}$ and $f(0)-f(\tfrac 14)>\kappa \cdot \eta (f)$ , the function $x\mapsto f(x-\omega )$ satisfies the Sturmian condition for every $\omega \in {\mathbb {R}}$ ?

Acknowledgements

The author thanks Zeng Lian for introducing the research topic of ergodic optimization to him. The author thanks Weixiao Shen for valuable suggestions. The author also thanks Bing Gao for helpful discussions. Finally, the author would like to thank the anonymous referee for valuable remarks.

References

[1] Anagnostopoulou, V., Díaz-Ordaz, K., Jenkinson, O. and Richard, C. Sturmian maximizing measures for the piecewise-linear cosine family. Bull. Braz. Math. Soc. (N.S.) 43(2) (2012), 285–302.
[2] Bochi, J. Ergodic optimization of Birkhoff averages and Lyapunov exponents. Proceedings of the International Congress of Mathematicians—Rio de Janeiro 2018 (Invited Lectures, III). Eds. B. Sirakov, P. Ney de Souza and M. Viana. World Scientific Publishing, Hackensack, NJ, 2018, pp. 1825–1846.
[3] Bousch, T. Le poisson n'a pas d'arêtes. Ann. Inst. Henri Poincaré Probab. Stat. 36(4) (2000), 489–508.
[4] Bousch, T. and Jenkinson, O. Cohomology classes of dynamically non-negative ${C}^k$ functions. Invent. Math. 148(1) (2002), 207–217.
[5] Bullett, S. and Sentenac, P. Ordered orbits of the shift, square roots, and the devil's staircase. Math. Proc. Cambridge Philos. Soc. 115(3) (1994), 451–481.
[6] Contreras, G., Lopes, A. O. and Thieullen, P. Lyapunov minimizing measures for expanding maps of the circle. Ergod. Th. & Dynam. Sys. 21(5) (2001), 1379–1409.
[7] Fan, A., Schmeling, J. and Shen, W. ${L}^{\infty }$-estimation of generalized Thue–Morse trigonometric polynomials and ergodic maximization. Discrete Contin. Dyn. Syst. 41(1) (2021), 297–327.
[8] Fogg, N. P. Substitutions in Dynamics, Arithmetics and Combinatorics (Lecture Notes in Mathematics, 1794). Eds. V. Berthé, S. Ferenczi, C. Mauduit and A. Siegel. Springer-Verlag, Berlin, 2002.
[9] Garibaldi, E. Ergodic Optimization in the Expanding Case: Concepts, Tools and Applications (SpringerBriefs in Mathematics). Springer, Cham, 2017.
[10] Jenkinson, O. Ergodic optimization. Discrete Contin. Dyn. Syst. 15(1) (2006), 197–224.
[11] Jenkinson, O. Ergodic optimization in dynamical systems. Ergod. Th. & Dynam. Sys. 39(10) (2019), 2593–2618.
[12] Savchenko, S. V. Homological inequalities for finite topological Markov chains. Funktsional. Anal. i Prilozhen. 33(3) (1999), 91–93.