
Estimating the Hausdorff measure using recurrence

Published online by Cambridge University Press:  12 January 2024

Łukasz Pawelec*
Affiliation:
Department of Mathematics and Mathematical Economics, SGH Warsaw School of Economics, Warszawa, Poland (lpawelec@impan.pl) Institute of Mathematics, Polish Academy of Sciences, Warszawa, Poland

Abstract

We show a new method of estimating the Hausdorff measure of a set from below. The method requires computing the subsequent closest return times of a point to itself.

Type: Research Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on Behalf of The Edinburgh Mathematical Society.

1. Introduction

Let (X, d) be a separable metric space and $(T,\mu)$ a transformation preserving a Borel probability measure. The classical Poincaré lemma in this setting gives that

\begin{equation*}\displaystyle\liminf_{n\to\infty}d(x,T^n(x))=0 \;\mbox{ for} \,\mu-\mbox{almost every}\, x.\end{equation*}

Historically, the first attempt at strengthening this result came in a paper by M. Boshernitzan [1], who proved that $d(x,T^n(x))\approx n^{-1/\alpha}$, where α is the Hausdorff dimension of X. Precisely speaking, he gave two results, which we state now.

For a dynamical system (X, T) preserving a probability Borel measure µ:

(1.1)\begin{align} &\mbox{If}\ H_\alpha(X) \lt +\infty, \mbox{then }\; \liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(x),x) \lt +\infty, \quad\mbox{ for}\, \mu-\mbox{a.e.}\ x. \end{align}
(1.2)\begin{align} &\mbox{If}\ H_\alpha(X) =0, \;\mbox{then } \liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(x),x) =0, \quad\mbox{ for}\, \mu-\mbox{a.e.} x. \end{align}

The second result from that paper states that if the preserved probability measure $\mu=H_\alpha$, then

(1.3)\begin{equation} \liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(x),x) \leq 1, \quad\mbox{ for}\, \mu-\mbox{a.e.} x. \end{equation}

There has been a lot of development in this area; for an introduction to quantitative recurrence, see e.g. [3].

In this paper, we will be interested in showing some new bounds on the recurrence speed. We will prove a generalisation of Boshernitzan’s result, but the main new idea is to show how to use this improved result to obtain a lower estimate of the Hausdorff measure of a fractal set. We discuss this on a simple example. An upcoming paper with M. Urbański [4] shows a more interesting application, namely to Cantor sets defined by the so-called Denjoy maps (i.e. we show a lower bound on the Hausdorff measure of the minimal set occurring for a $\mathcal{C}^{1+\alpha}$ diffeomorphism of the circle which is only semi-conjugate to a rotation).

The idea of the method comes from the author’s PhD Thesis.

The paper is organised as follows. In the next section, we give the needed definitions, state the relevant theorems and sketch the new technique. In $\S$ 3, we demonstrate the method of estimating the Hausdorff measure on an example. $\S$ 4 contains additional comments, improvements and limitations of the method. Finally, $\S$ 5 is devoted to the proof of Theorem 3.

2. Definitions and theorems

Throughout this paper, we will assume that (X, d) is a metric space and $T\colon X\to X$ a Borel measurable map; µ is a T-invariant, ergodic Borel probability measure on X.

As we are working with subtle measure estimates, it seems prudent to state precisely the definitions used in this paper.

We will use the (most common) version of the definition of the Hausdorff measure.

Definition 1. The outer Hausdorff measure is defined as

\begin{equation*}H_\alpha(Y)= \lim_{r\to 0} \inf\left\{\sum_{k=1}^\infty (\operatorname{diam} U_k)^\alpha : \forall_k \operatorname{diam} U_k \lt r \,{\rm{and }}\ Y\subset \bigcup_{k=1}^\infty U_k\right\},\end{equation*}

where the infimum is taken over all countable covers of Y satisfying the stated conditions. By Carathéodory’s extension this gives the (typical) Hausdorff measure.

The next definition is also standard.

Definition 2. The Hausdorff dimension of the set Y is given by the formula

\begin{equation*}\dim_H(Y)=\inf\{\alpha \geq 0 : H_\alpha(Y)=0\}.\end{equation*}

We will now state a new version of Boshernitzan’s estimate (1.3). In contrast to his result we do not assume that the preserved measure $\mu=H_\alpha$.

Theorem 3. With the assumptions on the dynamical system as above, for any α > 0 for which $H_\alpha$ is σ-finite on X and for µ-almost every $x\in X$ we have

(2.1)\begin{align} &\liminf_{n\to \infty} n\Big(d(T^n(x),x)\Big)^\alpha\leq g(x):=\limsup\limits_{r\to 0}\frac{H_\alpha(B(x,r))}{\mu(B(x,r))}. \end{align}

Remark. Note that g(x) may be equal to 0 or $+\infty$. The statement still holds. Also, due to the way this theorem is used below, the σ-finiteness assumption is not restrictive at all. (We want to give a bound from below on the $H_\alpha(X)$, so infinite measure only helps us.)

The rather simple proof utilises the idea by M. Boshernitzan and some techniques from ergodic theory. We postpone it till the last section.

This result shows that the behaviour of the recurrence may be governed by the Hausdorff measure of the space. We will try to apply this in a reverse manner: if we can compute or estimate the lower limit of the speed of recurrence, then this gives us some information on the Hausdorff measure.

More precisely, if we can show that the lower limit on the LHS of (2.1) is positive for some α > 0, then either $H_\alpha(X)=+\infty$ or we will get the lower bound on the density (and so on the α-Hausdorff measure of the space). Also, both cases trivially give $\dim_H(X)\geq \alpha$.

Regarding the dimension, note that there is a unique value $\alpha^*\in [0,+\infty]$ such that

\begin{align*} \liminf_{n\to \infty} n\big(d(T^n(x),x)\big)^\alpha = +\infty&, \quad\mbox{ for all}\ \alpha \lt \alpha^*\quad \mbox{and}\\ \liminf_{n\to \infty} n\big(d(T^n(x),x)\big)^\alpha = 0&, \quad\mbox{ for all}\ \alpha \gt \alpha^*. \end{align*}

Theorem 3 now gives that $\dim_H(X)\geq \alpha^*$.

Note that a priori we may take any map on the space, as long as it preserves some ergodic Borel probability measure µ. However, we ought to choose a map with poor mixing properties, because of a result that requires another well-known definition.

Definition 4. We say that a dynamical system has an exponential decay of correlations in Lipschitz–continuous functions (denoted by $\mathcal{L}$), if there exist $\gamma \in (0,1)$ and $C \lt +\infty$, such that for all $g\in \mathcal{L}$, all $f \in L_1(\mu)$ and every $n\in {\mathbb N}$, we have

(2.2)\begin{equation} \left|\mu\left(f\circ T^n\cdot g\right)-\mu(g)\cdot\mu(f)\right| \leq C\gamma^n||g||_\mathcal{L}\mu(|f|), \end{equation}

where $||\cdot||_\mathcal{L}$ denotes the typical norm of the space of Lipschitz functions.

A simplified version (with stronger assumptions) of Theorem 3.1 from [6] states that

Theorem 5. With the assumptions on the dynamical system as above, if $\mu \approx H_\alpha$ and the system has an exponential decay of correlation in Lipschitz–continuous functions, then

(2.3)\begin{equation} \liminf_{n\to \infty} \;\left(n\ln\ln n \right)^{1/\alpha}d(T^n(x),x) =0, \end{equation}

which is the opposite of what we want (namely a positive lower limit). Thus, for the map to be useful to our method it needs to be slowly mixing. Typical examples of such maps include the irrational rotations on $\mathcal{S}^1$, Feigenbaum maps or the adding machine map, which we utilise below.

3. Example

Our example will be arguably the simplest of fractal sets – the one-third Cantor set. We will estimate from below the dimensional density g(x) for all values of α. As it turns out, we will get a meaningful result for α equal to the Hausdorff dimension of the Cantor set, leading to a bound on the Hausdorff dimension and the Hausdorff measure, both from below.

As mentioned, we will utilise a so-called adding machine map. We recall the definition now.

Every point x in the Cantor set C has a unique coding $(x_n)_{n=1}^{\infty}$ using symbols 0 and 1. The first symbol is 0 if the point is to the left of $1/2$ and 1 if it is to the right. The second symbol decides if the point is on the left or on the right of the second level segments, etc. The relation between coding and the point on the real line is $\displaystyle x=\sum\limits_{n=1}^{+\infty} \frac{2x_n}{3^n}$. It follows that the (Euclidean) distance between points x and y is given by a formula $\displaystyle |x-y|= 2\left|\sum\limits_{n=1}^{+\infty} \frac{x_n-y_n}{3^n}\right|$.
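The coding formulas above are easy to check numerically. A minimal sketch (the helper name `point` and the truncation of the expansion are mine, not the paper's):

```python
def point(code):
    """Map a finite 0/1 coding (x_1, x_2, ...) to the real number
    sum 2*x_n / 3^n, truncating the tail of the expansion."""
    return sum(2 * x / 3 ** (n + 1) for n, x in enumerate(code))

# [100...] lies to the right of 1/2: its value is 2/3
print(point((1, 0, 0, 0)))
# Euclidean distance between [100...] and [010...]: |2/3 - 2/9| = 4/9
print(abs(point((1, 0, 0, 0)) - point((0, 1, 0, 0))))
```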

The map T on the coding space is defined by an inductive scheme:

  (A) Start with the first symbol: n = 1.

  (B) If the symbol $x_n= 0$, then add 1 to it (new $(Tx)_n=1$) and finish.

  (C) If the symbol $x_n=1$, then make it equal to 0 (new $(Tx)_n=0$), increase n by 1, and return to (B).

In other words – we scan the code for the first digit of $(x_n)$ equal to 0, set it to 1 and set all the previous digits (i.e. $(x_k)$ for k < n) to 0.

Note that this ‘program’ will run indefinitely if our point x has code $[111\ldots]$ (i.e. if x = 1), but mathematically this is not an issue, as we may set $T(1)=0$.

This map is called an adding machine, because it is equivalent to adding 1 to the first digit of a binary number, where the digits are written in reverse order (Figure 1). This transformation is a piecewise isometry and it preserves the Cantor measure µ (defined to be equally distributed on the cylinders of the same level/size).
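The inductive scheme (A)-(C) can be sketched in code, acting on a finite truncation of the coding (the function name is mine, not the paper's):

```python
def adding_machine(code):
    """One step of the adding machine on a finite list of 0/1 symbols:
    flip leading 1s to 0 until the first 0 is found, then set that 0 to 1."""
    code = list(code)
    for n, bit in enumerate(code):
        if bit == 0:        # step (B): add 1 and finish
            code[n] = 1
            return code
        code[n] = 0         # step (C): carry and move to the next symbol
    return code             # the point [111...] wraps around: T(1) = 0

print(adding_machine([0, 0, 0]))  # [1, 0, 0]
print(adding_machine([1, 1, 0]))  # [0, 0, 1]
print(adding_machine([1, 1, 1]))  # [0, 0, 0]
```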

Figure 1. Adding machine transformation on a Cantor set. The map in the neighbourhood of the point $1111\ldots$ is drawn only up to the cylinder of length 3.

Let us start computing the recurrence rate by taking the point $z^0=0=[0000\ldots]$ and denote the forward iterates as $T^n(z^0)=z^n$.

\begin{equation*}z^1=\frac{2}{3}=[100\ldots],\;z^2=\frac{2}{9}=[010\ldots],\;z^3=\frac{8}{9}=[110\ldots],\;z^4=\frac{2}{27}=[0010\ldots].\end{equation*}

To calculate the lower limit (LHS) of (2.1), we only need to look at the subsequent closest returns, i.e. we can ignore all n for which there exists k < n such that $|T^k(z)-z|\leq|T^n(z)-z|.$ For our point $z^0$ (and in fact any starting point) it is clear that those returns occur at iterates that are powers of 2. More precisely,

\begin{align*}\left|T^{2^n}(z^0)-z^0\right|=\frac{2}{3^{n+1}}, &\hskip1cm\mbox{ for all}\ n\geq 0, \\ \left|T^{k}(z^0)-z^0\right| \gt \frac{2}{3^{n+1}}, &\hskip1cm \mbox{for all}\ 0 \lt k \lt 2^{n}. \end{align*}
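Both displayed relations are easy to confirm numerically for small n; the sketch below (my own code, working on truncated codings) iterates the adding machine starting from $z^0$:

```python
def step(code):
    """Adding machine on a finite 0/1 coding (first entry is x_1)."""
    code = list(code)
    for i, b in enumerate(code):
        if b == 0:
            code[i] = 1
            return code
        code[i] = 0
    return code

def point(code):
    return sum(2 * x / 3 ** (i + 1) for i, x in enumerate(code))

N = 12                       # truncation depth of the coding
z0 = [0] * N
z = list(z0)
dist = []                    # dist[k-1] = |T^k(z0) - z0|
for k in range(1, 2 ** 6 + 1):
    z = step(z)
    dist.append(abs(point(z) - point(z0)))

for n in range(6):
    k = 2 ** n
    # return at time k = 2^n is at distance 2 / 3^(n+1) ...
    assert abs(dist[k - 1] - 2 / 3 ** (n + 1)) < 1e-12
    # ... and no earlier iterate comes that close
    assert all(d > 2 / 3 ** (n + 1) for d in dist[:k - 1])
print("closest returns of z^0 occur exactly at k = 1, 2, 4, 8, ...")
```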

Taking any α > 0, we get the following

(3.1)\begin{equation} \liminf_{k\to +\infty} k\left|T^k(z^0)-z^0\right|^\alpha = \lim_{n\to+\infty} 2^n\Big(\frac{2}{3^{n+1}}\Big)^\alpha = \lim_{n\to+\infty} \frac{2^\alpha}{3^\alpha}\left(\frac{2}{3^{\alpha}}\right)^n. \end{equation}

Obviously, $z^0$ is not a typical point in this system. However, the general calculation is not that different. Take any point $x\in C$ and look at its code – $[x_1x_2x_3\ldots]$. As before, we only need to look at iterates of the form $2^n$. The point $T^{2^n}(x)$ will have the first n symbols identical to those of x, and the $(n+1)$st symbol will be different. What we do not control are the later symbols, which can lower the distance slightly; e.g. the distance between $[100\ldots]$ and $[010\ldots]$ is equal to $4/9$. However, it is easy to write down all the possibilities.

\begin{align*}\left|T^{2^n}(x)-x\right|=\frac{2}{3^{n+1}} &\hskip1cm\mbox{ if}\, x_{n+1}=0, \\ \left|T^{2^n}(x)-x\right|=\frac{4}{3^{n+2}} &\hskip1cm\mbox{ if}\ x_{n+1}=1\, \mbox{and}\ x_{n+2}=0,\\ \left|T^{2^n}(x)-x\right| \gt \frac{2}{3^{n+1}} &\hskip1cm\mbox{ if}\ x_{n+1}=1\, \mbox{and}\ x_{n+2}=1. \end{align*}

To sum up – the worst case is when we add 1 at a place where a symbol 1 is followed by a 0.
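The three cases admit a quick numerical check (my own sketch; T is applied exactly $2^n$ times to codings with prescribed symbols $x_{n+1}$, $x_{n+2}$, here with n = 2):

```python
def step(code):
    """Adding machine on a finite 0/1 coding (first entry is x_1)."""
    code = list(code)
    for i, b in enumerate(code):
        if b == 0:
            code[i] = 1
            return code
        code[i] = 0
    return code

def point(code):
    return sum(2 * x / 3 ** (i + 1) for i, x in enumerate(code))

def dist_after(code, steps):
    """Euclidean distance |T^steps(x) - x| on truncated codings."""
    y = list(code)
    for _ in range(steps):
        y = step(y)
    return abs(point(y) - point(code))

n = 2
d1 = dist_after([1, 1, 0, 0, 0, 0], 2 ** n)   # x_{n+1} = 0
d2 = dist_after([0, 0, 1, 0, 0, 0], 2 ** n)   # x_{n+1} = 1, x_{n+2} = 0
d3 = dist_after([0, 0, 1, 1, 0, 0], 2 ** n)   # x_{n+1} = 1, x_{n+2} = 1
print(abs(d1 - 2 / 3 ** (n + 1)) < 1e-12)   # True: first case
print(abs(d2 - 4 / 3 ** (n + 2)) < 1e-12)   # True: second (worst) case
print(d3 > 2 / 3 ** (n + 1))                # True: third case
```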

Repeating (3.1) for a general point we get a slightly worse estimate

(3.2)\begin{equation} \liminf_{k\to +\infty} k\left|T^k(x)-x\right|^\alpha \geq \lim_{n\to +\infty}\Big(\frac{4}{9}\Big)^\alpha\left(\frac{2}{3^{\alpha}}\right)^n=\lim_{n\to +\infty}\Big(\frac{4}{9}\Big)^\alpha\left(3^{\log_32-\alpha}\right)^n.\end{equation}

So if we take any $\alpha \lt \log_32$, we see that the lower limit is infinite, so by Boshernitzan’s result (1.1) the Hausdorff measure $H_\alpha(C)$ is infinite, and hence the Hausdorff dimension $\dim_H(C)\geq \log_32$.

Take $\alpha=\log_32$ and the Cantor measure µ. Now Theorem 3 gives that either $H_{\log_32}$ is not σ-finite on C (thus $H_{\log_32}(C)=+\infty$) or $g(x)\geq \big(\frac{4}{9}\big)^\alpha$ for all x (where $g(x)=\frac{dH_\alpha}{d\mu}$). So

(3.3)\begin{equation} H_{\log_32}(C) = \int_C g(x) d\mu(x) \geq \mu(C)\left(\frac{4}{9}\right)^{\log_32}\approx 0.6. \end{equation}

This is not a very strong result – in reality $H_{\log_32}(C)=1$ – but on the other hand, the estimate has been obtained with little effort. The next section is dedicated to comments on improving this lower bound.
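As a sanity check of the arithmetic behind (3.3), computed with $\alpha=\log_32$ (my own computation):

```python
import math

alpha = math.log(2, 3)       # Hausdorff dimension of the 1/3 Cantor set
bound = (4 / 9) ** alpha     # the lower bound appearing in (3.3)
print(alpha)                 # 0.6309...
print(bound)                 # 0.5995..., i.e. roughly 0.6
```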

Note that it is easy to apply this technique to other self-similar sets which allow symbolic coding, e.g. the Sierpiński triangle. Unfortunately, the suboptimality of the lower bound may (and typically will) remain.

On the other hand, the coding is not strictly necessary. If a system has slow recurrence properties, then one can get meaningful results as well. An example of this is in a paper with M. Urbański [4], where the underlying dynamics is that of an irrational rotation on a circle (which, for numbers with bad Diophantine properties, is in fact slowly recurrent).

4. Improvements and comments

4.1. Changing the metric

In the calculation above we used the Euclidean metric on the real line. However, on the Cantor set, there is another natural metric, coming from the symbolic representation. Define

\begin{equation*}d(x,y) = d\big((x_n),(y_n)\big) = 3^{1-\min\{k\geq 1\, :\, x_k\neq y_k \}}.\end{equation*}

Then, the diameter of C stays equal to 1. Also, the diameters of all the cylinder sets in this metric are equal to their diameters in the Euclidean one, and the Hausdorff measure (and dimension) are exactly as in the Euclidean case.

Let us check what happens to our recurrence estimates if we take this metric.

For any $z\in C$ we trivially get

\begin{align*}d\big(T^{2^n}(z), z\big)=\frac{1}{3^{n}}, &\hskip1cm\mbox{ for all}\ n\geq 0, \\ d\big(T^{k}(z), z\big)\geq \frac{1}{3^{n}}, &\hskip1cm\mbox{ for all}\ 0 \lt k \lt 2^{n}. \end{align*}

Inserting this into the liminf estimates yields

(4.1)\begin{equation} \liminf_{k\to +\infty} k\,d\big(T^k(x),x\big)^\alpha = \lim_{n\to +\infty}2^n\big(3^{-n}\big)^\alpha=\lim_{n\to +\infty}\big(3^{\log_32-\alpha}\big)^n.\end{equation}

Now, setting $\alpha=\log_32$ we get the estimate on the density $g(x)\geq1$, which in turn gives $H_\alpha(C)\geq1$.

We see that using this metric we get the optimal estimate.
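This computation can be verified directly; the sketch below (my own code, combining the symbolic metric with the adding machine on truncated codings) checks that $k\,d(T^k(x),x)^\alpha$ equals 1 along the closest-return subsequence $k=2^n$ for $\alpha=\log_32$:

```python
import math

def step(code):                       # adding machine on a finite coding
    code = list(code)
    for i, b in enumerate(code):
        if b == 0:
            code[i] = 1
            return code
        code[i] = 0
    return code

def d_sym(x, y):
    """The symbolic metric: 3^(1 - first index where the codings differ)."""
    for k, (a, b) in enumerate(zip(x, y), start=1):
        if a != b:
            return 3.0 ** (1 - k)
    return 0.0

alpha = math.log(2, 3)
x0 = [0, 1, 1, 0, 1, 0, 0, 1, 0, 0]   # an arbitrary starting coding
x = list(x0)
vals = {}
for k in range(1, 2 ** 7 + 1):
    x = step(x)
    vals[k] = k * d_sym(x, x0) ** alpha

# along the subsequence k = 2^n the quantity is 1 (up to rounding)
print([round(vals[2 ** n], 10) for n in range(8)])
```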

4.2. Irremovable obstacle

Let us return to the Euclidean metric. One could ask a very natural question – would some different map yield a better estimate?

While it is possible that there exists a map with even slower recurrence, there seems to be no chance of improving this up to the optimal lower bound. This is shown by a result of Boshernitzan and Delecroix [2], which we will utilise below.

To see the problem, let us try to apply our method to a circle S of length 1. To get the best bound we need to find a map T on the circle (preserving some probability measure µ) for which:

(4.2)\begin{equation} \liminf_{n\to \infty} nd(T^n(x),x) \geq 1, \end{equation}

for µ-a.e. $x\in S$. This would prove that $H_1(S)\geq 1$.

First, let us see what we should assume about the measure. Its support needs to be the entire circle (otherwise we get nonsense). Also, the dimension of the measure needs to be 1 (for the same reason). Finally, as the circle is geometrically identical at every point, so should the measure be, leaving us only with the Lebesgue measure. The last argument is not precise at all, but we are not actually proving anything here, so for the sake of clarity let us leave it like that.

There are still plenty of maps preserving the Lebesgue measure. The simplest of those are the rotations by an angle γ, denoted by Rγ. Then the recurrence speed does not depend on the starting point, but on the continued fraction expansion of γ. This is a very well-studied subject. By a classic result of Khinchin, the slowest return speed happens for the rotation by the golden mean (minus one), $\varphi = \frac{\sqrt{5}-1}{2}$. Khinchin’s theorem also states that

(4.3)\begin{equation} \liminf_{n\to \infty} nd(R_\gamma^n(x),x) \leq \frac{1}{\sqrt{5}}, \end{equation}

with equality for $\gamma=\varphi$.

This shows that taking only the rotations, we have no chance of realising (4.2). Boshernitzan and Delecroix generalise this to prove (in [2]) that inequality (4.3) holds for all maps preserving the Lebesgue measure. This shows that the method presented here has an irremovable obstacle to achieving the best bound, at least on the circle; their proof suggests this would happen on every space.

Actually, their proof indicates that there exists a constant correction term (depending on the dimension of the space, and perhaps slightly on its geometry) which one could apply to get the correct measure (for the circle this would be $\frac{1}{\sqrt{5}}$). Unfortunately, making this argument precise would require very general results on the optimal packing of points in rather arbitrary sets.

4.3. Dependence on the dimension

Their proof also suggests that the scale of the suboptimality of the lower bound (i.e. the difference between the obtained result and the true Hausdorff measure) depends on the dimension. In fact, it should shrink to zero as the dimension goes to zero.

We cannot prove this general result here. What we can do, however, is show the phenomenon for basic Cantor sets.

Let us compute the lower bound on the measure of the Cantor sets of varying dimensions. Fix $0 \lt a \lt \frac{1}{2}$. The Cantor set Ca in question is given by the maps:

(4.4)\begin{equation} f_0(x)=ax, \qquad f_1(x)=ax+(1-a)=a(x-1)+1. \end{equation}

The Hausdorff dimension of this set is easily computed: $\dim_H(C_a)=\frac{\log(2)}{-\log(a)}$.

We define the coding as before, take the same adding machine map and repeat the calculation from the example. For any $x\in C_a$ we get (here $1-2a$ is the size of the gap between intervals of the same order, to which we may add the length of the next-level cylinder, $a^2$):

(4.5)\begin{equation} \left|T^{2^n}(x)-x\right|\geq a^{n}(1-2a+a^2), \end{equation}

where the inequality becomes an equality for those n for which $x_{n+1}=1$ and $x_{n+2}=0$ (exactly as before). Putting this into the lower limit yields

(4.6)\begin{equation} \liminf_{k\to +\infty} k\left|T^k(x)-x\right|^\alpha \geq \lim_{n\to +\infty}2^n\left(a^n(1-2a+a^2)\right)^\alpha=\lim_{n\to +\infty}(1-a)^{2\alpha}\left(2^{1+\alpha\log_2(a)}\right)^n. \end{equation}

Put $\alpha=\frac{\log(2)}{-\log(a)}$, so that the sequence is constant. Then, we see

(4.7)\begin{equation} H_\alpha(C_a)\geq (1-a)^{2\alpha}. \end{equation}

This expression goes to one as a goes to zero (note that then so does α). Thus the difference between the true Hausdorff measure (equal to one in this case) and our estimate does disappear in the limit.
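This behaviour can be observed numerically; a short sketch (my own code, the helper name is hypothetical) evaluating the bound (4.7) for shrinking a:

```python
import math

def lower_bound(a):
    """The lower bound (1 - a)^(2*alpha) from (4.7), with
    alpha = log 2 / (-log a) the dimension of C_a."""
    alpha = math.log(2) / (-math.log(a))
    return (1 - a) ** (2 * alpha)

for a in (1 / 3, 0.1, 0.01, 0.001):
    print(a, lower_bound(a))   # increases towards 1 as a decreases
```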

5. Proofs

The proof of Theorem 3 is divided into a few steps. First we prove

Proposition 6. With the assumptions on the dynamical system as in Theorem 3, in addition suppose that $H_\alpha \ll \mu$ for some α > 0, and denote the corresponding density by $g:=\frac{dH_\alpha}{d\mu}$. Then for µ-almost every $x\in X$ we have

(5.1)\begin{equation} \liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(x),x) \leq (\operatorname{esssup} g)^{1/\alpha}. \end{equation}

Remark. Note that g is the reciprocal of the usually considered density $\frac{d\mu}{dH_\alpha}$.

Proof. First, if g is unbounded, then the inequality is trivial. Denote $\beta:=\frac{1}{\alpha}$ and $s:=\operatorname{esssup}\, g \lt +\infty$. In this notation we need to show that $\mu(D)=0$, where

(5.2)\begin{equation} D = \{x\in X : \liminf_{n\to\infty} \;n^\beta d(T^n(x),x) \gt s^\beta\}. \end{equation}

Take any ɛ > 0 and define

(5.3)\begin{equation} D_\varepsilon:=\{x\in X : n^\beta d(T^n(x),x) \gt \left(\frac{s}{1-\varepsilon}\right)^\beta, \mbox{for all}\ n\geq 1 \,\mbox{such that}\ d(T^n(x),x) \lt \varepsilon\}. \end{equation}

It suffices to show that for any ɛ > 0, this set has µ-measure zero.

Assume the opposite, i.e. $\mu(D_\varepsilon) \gt 0$ for some fixed ɛ. We will prove that this implies $H_\alpha(D_\varepsilon) \gt 0$.

Define $\tau(x)$ as the first return map of a point x into Dɛ. This map preserves the conditional measure ν, defined as

(5.4)\begin{equation} \nu(A) = \frac{\mu(A\cap D_\varepsilon)}{\mu(D_\varepsilon)}. \end{equation}

Now, if $H_\alpha(D_\varepsilon) =0$, then by a result of Boshernitzan cited in the introduction (1.2), we have

(5.5)\begin{equation}\liminf_{k\to+\infty} k^\beta d(\tau^k(x),x)=0 \,\mbox{for}\ \nu-\mbox{a.e.}\ x \in D_\varepsilon.\end{equation}

Denote by $n_k(x)$ the time of the k-th return of x to Dɛ. Then $\tau^k(x) = T^{n_k(x)}(x)$ and also

(5.6)\begin{equation} \lim_{k\to\infty} \frac{k}{n_k(x)} = \mu(D_\varepsilon) \end{equation}

for ν-a.e. x, by the ergodic theorem. Combining the two limits above we get

(5.7)\begin{equation} \liminf_{k\to+\infty} \mu(D_\varepsilon)^\beta \cdot n_k(x)^\beta d(T^{n_k(x)}(x),x)=0 \,\mbox{for}\ \nu-\mbox{a.e.} x \in D_\varepsilon \end{equation}

which contradicts the definition of Dɛ. Thus we know that $H_\alpha(D_\varepsilon) \gt 0$.

As the Hausdorff measure is positive, there must exist a measurable, non-empty subset $U\subset D_\varepsilon$ with $\operatorname{diam} U \lt \varepsilon$ satisfying $ (1-\varepsilon)(\operatorname{diam} U)^\alpha \leq H_\alpha(U)$. (If all subsets of Dɛ satisfy the opposite inequality, then this trivially violates the definition of the Hausdorff measure). Put $u=\mu(U)$ and $r=\operatorname{diam} U$. From the definition of density g

\begin{equation} \nonumber H_\alpha(U) = \int_X {\mathbb{1}}_{U} \,dH_\alpha = \int_X g {\mathbb{1}}_{U} \,d\mu \leq \operatorname{esssup} g \cdot \mu(U)= s\mu(U).\end{equation}

Combining the two inequalities on U gives

(5.8)\begin{equation} (1-\varepsilon) r^\alpha \leq s\cdot u. \end{equation}

Since T preserves µ, we can show that

(5.9)\begin{equation} T^{-n}U \cap U \neq \emptyset \,\mbox{for some } n \leq \frac{1}{u}. \end{equation}

Indeed, if all those intersections were empty, then the sets U, $T^{-1}(U), \ldots, T^{-[1/u]}(U)$ would be pairwise disjoint, and so

\begin{equation*}\mu\bigg(\bigcup_{i=0}^{[1/u]}T^{-i}(U)\bigg) = \bigg(\Big[\frac{1}{u}\Big]+1\bigg)\cdot u \gt 1,\end{equation*}

a contradiction.

Take n for which the intersection is non-empty and any $x\in T^{-n}U \cap U$. Then

(5.10)\begin{equation} d(T^n(x),x) \leq \ \mbox{diam }U = r \lt \varepsilon,\end{equation}

so this n satisfies the condition in the definition of $D_\varepsilon$. Using (5.9) and (5.10), then (5.8), we get

\begin{align*} n^\beta d(T^n(x),x) &\leq \left(\frac{1}{u}\right)^\beta\cdot r \leq \left(\frac{s}{1-\varepsilon}\right)^\beta\frac{1}{r}\cdot r=\left(\frac{s}{1-\varepsilon}\right)^\beta, \end{align*}

which contradicts the definition of Dɛ and ends the proof.

As the next step we will ‘localise’ the result above, obtaining:

Proposition 7. With the assumptions as in Proposition 6, we have for µ-almost every $x\in X$

(5.11)\begin{equation} \liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(x),x) \leq g(x)^{1/\alpha}. \end{equation}

Remark 8. The density g is defined only almost everywhere, so g(x) really means

\begin{equation*} g(x) = \lim_{r\to 0} (\operatorname{esssup} g|_{B(x,r)}). \end{equation*}

Remark 9. This result for $X=[0,1]$ (α = 1) has been proved in [5]. That proof, however, works only in a 1-dimensional space.

Proof of Proposition 7

We will use basic ergodic properties as in the previous proof. Fix x and r > 0 and consider S – the first return map to the ball $B(x,r)$. S preserves the conditional measure ν, defined as

(5.12)\begin{equation} \nu(A) = \frac{\mu(A\cap B(x,r))}{\mu(B(x,r))}. \end{equation}

The density of this new measure is related to the old density:

(5.13)\begin{equation} h = \frac{dH_\alpha}{d\nu} = g|_{B(x,r)} \cdot \mu(B(x,r)). \end{equation}

Using Proposition 6 for a system $\left(B(x,r), \mathcal{F}|_{B(x,r)}, \nu, d|_{B(x,r)}, S\right)$ we get

(5.14)\begin{equation} \liminf_{k\to \infty} \;k^{1/\alpha}d(S^k(y),y) \leq (\operatorname{esssup} h)^{1/\alpha}. \end{equation}

Denote by $n_k(y)$ the time of the k-th return of y to $B(x,r)$. Then $S^k(y) = T^{n_k(y)}(y)$ and, from the ergodic theorem,

(5.15)\begin{equation} \lim_{k\to\infty} \frac{k}{n_k(y)} = \mu(B(x,r)) \quad\mbox{ for}\, \mu-\mbox{a.e.} y.\end{equation}

Note that the closest returns (to itself) of a point $y\in B(x,r)$ for the original system have to occur within the sequence $n_k(y)$. Thus, the limit in (5.14) transforms to

(5.16)\begin{equation} \liminf_{k\to \infty} \;\left(\frac{k}{n_k(y)}\right)^{1/\alpha} n_k(y)^{1/\alpha}\cdot d(T^{n_k(y)}(y),y)\geq \mu(B(x,r))^{1/\alpha}\cdot\liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(y),y). \end{equation}

It remains to combine (5.13), (5.14) and (5.16), obtaining

(5.17)\begin{equation} \mu(B(x,r))^{1/\alpha}\cdot\liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(y),y) \leq (\operatorname{esssup} g|_{B(x,r)})^{1/\alpha}\cdot \mu(B(x,r))^{1/\alpha}. \end{equation}

Letting r → 0 we finish the proof.

We may finally conclude the proof of the theorem.

Proof of Theorem 3

As the result is proved µ-almost everywhere, we may assume that all points considered satisfy $x\in\operatorname{supp} \mu$. Take any α > 0 and the Hausdorff measure $H_\alpha$. Recall that we assume $H_\alpha$ to be σ-finite. There exists a decomposition $H_\alpha = H^s + H^c$, where $H^s\bot \mu$ and $H^c \ll \mu$.

If $H^c=0$, then by the definition of singular measures there exists a set A such that $\mu(A)=1$ and $H_\alpha(A)=H^s(A)=0$. Define a new measure-preserving system only on A, that is, one consisting only of points whose entire orbit stays in A. As A has full measure, this new system has all the properties of the original one. Then by the result of Boshernitzan cited in the introduction (1.2),

\begin{equation*}\liminf_{n\to \infty} \;n^{1/\alpha}d(T^n(x),x) =0, \quad\mbox{ for}\, \mu-\mbox{a.e.} x. \end{equation*}

Thus the limit is not larger than g(x), whatever the latter may be.

If $H^s=0$, then the result follows from Proposition 7.

Finally, if both the singular part and the absolutely continuous part exist, then – as above – take a set A of full µ-measure for which $H^s(A)=0$ and define the system restricted to A. Within this new system $H_\alpha\ll \mu$ and this case has already been solved above.

Acknowledgements

The research of Łukasz Pawelec was supported in part by the National Science Centre, Poland, Grant OPUS21 ‘Holomorphic dynamics, fractals, thermodynamic formalism’, 2021/41/B/ST1/00461. The author would like to thank the referee for helpful comments and for spotting an omission in one of the proofs.

References

[1] Boshernitzan, M., Quantitative recurrence results, Invent. Math. 113 (1993), 617–631.
[2] Boshernitzan, M. and Delecroix, V., From a packing problem to quantitative recurrence in [0, 1] and the Lagrange spectrum of interval exchanges, Discrete Anal. 10 (2017), 1–25.
[3] Barreira, L. and Saussol, B., Hausdorff dimension of measures via Poincaré recurrence, Commun. Math. Phys. 219(2) (2001), 443–463.
[4] Pawelec, Ł. and Urbański, M., Estimating Hausdorff measure for Denjoy maps, Nonlinearity 36(11) (2023), 6224–6238.
[5] Ho Choe, G., Recurrence of transformations with absolutely continuous invariant measures, Appl. Math. Comput. 129(2–3) (2002), 501–516.
[6] Pawelec, Ł., Iterated logarithm speed of return times, Bull. Aust. Math. Soc. 96(3) (2017), 468–478.