
Convergence of blanket times for sequences of random walks on critical random graphs

Published online by Cambridge University Press:  09 January 2023

George Andriopoulos*
Affiliation:
NYU-ECNU Institute of Mathematical Sciences at NYU Shanghai, China.

Abstract

Under the assumption that sequences of graphs equipped with resistances, associated measures, walks and local times converge in a suitable Gromov-Hausdorff topology, we establish asymptotic bounds on the distribution of the $\varepsilon$ -blanket times of the random walks in the sequence. The precise nature of these bounds ensures convergence of the $\varepsilon$ -blanket times of the random walks if the $\varepsilon$ -blanket time of the limiting diffusion is continuous at $\varepsilon$ with probability 1. This result enables us to prove annealed convergence in various examples of critical random graphs, including critical Galton-Watson trees and the Erdős-Rényi random graph in the critical window. We highlight that proving continuity of the $\varepsilon$ -blanket time of the limiting diffusion relies on the scale invariance of a finite measure that gives rise to realizations of the limiting compact random metric space, and therefore we expect our results to hold for other examples of random graphs with a similar scale invariance property.

Type
Paper
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

A simple random walk on a finite connected graph $G$ with at least two vertices is a reversible Markov chain that starts at some vertex $v\in G$ , and at each step moves with equal probability to any vertex adjacent to its present position. The mixing time and the cover time of the random walk are among the most extensively studied graph parameters. The mixing time measures how long it takes for the distribution of the Markov chain to come within a small total variation distance of the unique invariant measure, while the cover time is the expected time until every vertex has been visited. To these parameters, Winkler and Zuckerman [Reference Winkler and Zuckerman39] added the $\varepsilon$ -blanket time variable (an exact definition will be given later in (1.4)): the least time by which the walk has spent at every vertex at least an $\varepsilon$ fraction of the time it would be expected to spend there at stationarity. The $\varepsilon$ -blanket time of $G$ is then defined as the expected $\varepsilon$ -blanket time variable maximized over the starting vertex.

The motivation for introducing and studying the blanket time arises mainly from applications in computer science. For example, suppose that limited access to a source of information is randomly transferred from (authorized) user to user in a network. How long does it take for each user to hold the information for as long as it is supposed to? Answering this question, under the assumption that each user has to be actively processing the information equally often, involves the blanket time. More broadly, viewing the internet as a (directed) graph in which every edge represents a link, a web surfer can be regarded as a walker who visits and records sites at random. In a procedure that resembles Google’s PageRank, one wishes to rank a website according to the amount of time such walkers spend on it. A way to produce such an estimate is to rank the website according to its number of visits. The blanket time is the first time at which we expect this estimate to become relatively accurate.

Obviously, for every $\varepsilon \in (0,1)$ , the $\varepsilon$ -blanket time is larger than the cover time since one has to wait for all the vertices to have been visited at least once. Winkler and Zuckerman [Reference Winkler and Zuckerman39] made the conjecture that, for every $\varepsilon \in (0,1)$ , the $\varepsilon$ -blanket time and the cover time are equivalent up to universal constants that depend only on $\varepsilon$ and not on the particular underlying graph $G$ . This conjecture was resolved by Ding, Lee and Peres [Reference Ding, Lee and Peres23] who provided a highly non-trivial connection between those graph parameters and the discrete Gaussian free field (GFF) on $G$ using Talagrand’s theory of majorizing measures. Recall that the GFF on $G$ with vertex set $V$ is a centred Gaussian process $(\eta _v)_{v\in V}$ with $\eta _{v_0}=0$ , for some $v_0\in V$ , and covariance structure given by the Green kernel of the random walk killed at $v_0$ .

Recent years have witnessed a growing interest in the geometric and analytic properties of random graphs, motivated partly by applications in research areas ranging from sociology and systems biology to interacting particle systems, and partly by the need for convincing models of real-world networks. One aspect of this development is the study of the metric structure and connectivity of random graphs at criticality, that is, precisely when a giant component emerges whose size is proportional to the number of vertices of the graph. Several examples of trees, including critical Galton-Watson trees, possess the Brownian Continuum Random Tree (CRT) as their scaling limit. A programme [Reference Bhamidi, Broutin, Sen and Wang8] launched in the last few years aims to prove that the maximal components of a number of fundamental random graph models in the critical regime, with distances scaling like $n^{1/3}$ , fall into the domain of attraction of the Erdős-Rényi random graph. Their scaling limit is a multiple of the scaling limit of the Erdős-Rényi random graph in the critical window, that is, a tilted version of the Brownian CRT in which a finite number of vertices have been identified. Two of the examples that belong to the Erdős-Rényi universality class are the configuration model in the critical scaling window and critical inhomogeneous random graphs, where different vertices have different proclivities to form edges, as shown in the recent works [Reference Bhamidi and Sen9] and [Reference Bhamidi, Sen and Wang10] respectively.

In [Reference Croydon, Hambly and Kumagai19], Croydon, Hambly and Kumagai established criteria for the convergence of mixing times for random walks on general sequences of finite graphs. Furthermore, they applied their mixing time results in a number of examples of random graphs, such as self-similar fractal graphs with random weights, critical Galton-Watson trees, the critical Erdős-Rényi random graph and the range of high-dimensional random walk. Motivated by their approach, starting with the strong assumption that the sequences of graphs, associated measures, walks and local times converge appropriately, we provide asymptotic bounds on the distribution of the blanket times of the random walks in the sequence.

To state the aforementioned assumption, we continue by introducing the graph-theoretic framework in which we work. Firstly, let $G=(V(G),E(G))$ be a finite connected graph with at least two vertices, where $V(G)$ denotes the vertex set of $G$ and $E(G)$ denotes the edge set of $G$ . We endow $G$ with a symmetric weight function $\mu ^G\;:\;V(G)^2\rightarrow \mathbb{R}_{+}$ that satisfies $\mu _{xy}^G\gt 0$ if and only if $\{x,y\}\in E(G)$ . Now, the weighted random walk associated with $(G,\mu ^G)$ is the Markov chain $((X^G_t)_{t\ge 0},\mathbb{P}_{x}^G,x\in V(G))$ with transition probabilities $(P_G(x,y))_{x,y\in V(G)}$ given by

\begin{equation*} P_{G}(x,y)\;:\!=\;\frac {\mu ^G_{xy}}{\mu _x^G}, \end{equation*}

where $\mu ^G_x=\sum _{y\in V(G)} \mu _{xy}^G$ . One can easily check that this Markov chain is reversible and has stationary distribution given by

\begin{equation*} \pi ^G(A)\;:\!=\;\frac {\sum _{x\in A} \mu _x^G}{\sum _{x\in V(G)} \mu _x^G}, \end{equation*}

for every $A\subseteq V(G)$ . The process $X^G$ has corresponding local times $(L_t^G(x))_{x\in V(G),t\ge 0}$ given by $L_0^G(x)=0$ , for every $x\in V(G)$ , and, for $t\ge 1$

(1.1) \begin{equation} L_t^G(x)\;:\!=\;\frac{1}{\mu _x^G} \sum _{i=0}^{t-1} \textbf{1}_{\{X_i^G=x\}}, \end{equation}

for every $x\in V(G)$ . The simple random walk on this graph is a Markov chain with transition probabilities $(P(x,y))_{x, y\in V(G)}$ given by

\begin{equation*} P(x,y)\;:\!=\;\frac {\textbf {1}_{\{\{x,y\}\in E(G)\}}}{\textrm {deg}(x)}, \end{equation*}

where $\textrm{deg}(x)\;:\!=\; |\{y\in V(G)\;:\;\{x,y\}\in E(G)\} |$ . The simple random walk is reversible and has stationary distribution given by

\begin{equation*} \pi (A)\;:\!=\;\frac {\sum _{x\in A} \textrm {deg}(x)}{2 |E(G)|}, \end{equation*}

for every $A\subseteq V(G)$ . It has corresponding local times as in (1.1) normalized by $\textrm{deg}(x)$ .

To endow $G$ with a metric, we can choose $d_G$ to be the shortest path distance, which collects the total weight accumulated in the shortest path between a pair of vertices in $G$ . But this is not the most convenient choice in many cases. Another typical graph distance that arises from the view of $G$ as an electrical network equipped with conductances $(\mu ^G_{xy})_{\{x,y\}\in E(G)}$ is the so-called resistance metric. For $f, g\;:\;V(G)\rightarrow \mathbb{R}$ let

(1.2) \begin{equation} \mathcal{E}_G(f,g)\;:\!=\;\frac{1}{2} \sum _{\substack{x,y\in V(G):\\ \{x,y\}\in E(G)}} (f(x)-f(y))(g(x)-g(y)) \mu _{xy}^G \end{equation}

denote the Dirichlet form associated with the process $X^G$ . Note that the sum in the expression above counts each edge twice. One can interpret $\mathcal{E}_G(f,f)$ in terms of electrical networks as follows. Given a voltage $f$ on the network, the current flow $I$ associated with $f$ is defined as $I_{xy}\;:\!=\;\mu _{xy}^G (f(x)-f(y))$ , for every $\{x,y\}\in E(G)$ . Then, the energy dissipated per unit time in a wire connecting $x$ and $y$ is $\mu ^G_{xy} (f(x)-f(y))^2$ , so $\mathcal{E}_G(f,f)$ is the total energy dissipation of $G$ . We define the resistance operator on disjoint sets $A, B\subseteq V(G)$ through the formula

(1.3) \begin{equation} R_{G}(A,B)^{-1}\;:\!=\;\inf \{\mathcal{E}_G(f,f)\;:\;f\;:\;V(G)\rightarrow \mathbb{R}, f|_{A}=0, f|_{B}=1\}. \end{equation}

Now, the distance on the vertices of $G$ defined by $R_G(x,y)\;:\!=\;R_G(\{x\},\{y\})$ , for $x\neq y$ , and $R_G(x,x)\;:\!=\;0$ is indeed a metric on the vertices of $G$ . For a proof and a treatise on random walks on electrical networks see [Reference Levin and Peres34, Chapter 9].
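For a numerical sanity check (an illustration, not from the paper), on a finite network the variational formula (1.3) is equivalent to the standard identity $R_G(x,y)=L^{+}_{xx}+L^{+}_{yy}-2L^{+}_{xy}$ , where $L^{+}$ is the Moore-Penrose pseudoinverse of the weighted graph Laplacian, the matrix of the Dirichlet form (1.2). The path graph below is an assumed toy example.

```python
import numpy as np

# Assumed toy network: path 0 - 1 - 2 with unit conductances.
n = 3
mu = np.zeros((n, n))
mu[0, 1] = mu[1, 0] = 1.0
mu[1, 2] = mu[2, 1] = 1.0

# Weighted Laplacian: the Dirichlet form (1.2) is E(f, f) = f^T Lap f.
Lap = np.diag(mu.sum(axis=1)) - mu

# Effective resistance via the pseudoinverse (equivalent to (1.3)).
Lp = np.linalg.pinv(Lap)

def R(x, y):
    return Lp[x, x] + Lp[y, y] - 2 * Lp[x, y]
```

Two unit resistors in series give $R(0,2)=2$ and $R(0,1)=R(1,2)=1$; in particular the triangle inequality holds with equality on this tree, consistent with $R_G$ being a metric.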

Writing $\tau _{\textrm{cov}}^G$ for the first time at which every vertex of $G$ has been visited, $\mathbb{E}_x \tau _{\textrm{cov}}^G$ denotes the mean of this quantity when the random walk starts at $x\in V(G)$ . Define the cover time by

\begin{equation*} t_{\textrm {cov}}^G\;:\!=\;\max _{x\in V(G)} \mathbb {E}_x \tau _{\textrm {cov}}^G. \end{equation*}

For some $\varepsilon \in (0,1)$ , define the $\varepsilon$ -blanket time variable by

(1.4) \begin{equation} \tau _{\textrm{bl}}^G(\varepsilon )\;:\!=\;\inf \{t\ge 0\;:\;m^G L_t^G(x)\ge \varepsilon t, \ \forall x\in V(G)\}, \end{equation}

where $m^G$ is the total mass of the graph with respect to the measure $\mu ^G$ , i.e. $m^G\;:\!=\;\sum _{x\in V(G)} \mu _x^G$ . Taking the mean over the random walk started from the worst possible vertex defines the $\varepsilon$ -blanket time, i.e.

\begin{equation*} t_{\textrm {bl}}^G(\varepsilon )\;:\!=\;\max _{x\in V(G)} \mathbb {E}_x \tau _{\textrm {bl}}^G(\varepsilon ). \end{equation*}
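The following Monte Carlo sketch (an illustration with assumed parameters, not from the paper) estimates the cover time variable and the $\varepsilon$ -blanket time variable (1.4) for the simple random walk on a cycle. There $\pi$ is uniform and $\textrm{deg}\equiv 2$ , so $m^G=2n$ , $L_t^G(x)$ is half the number of visits to $x$ , and the condition $m^G L_t^G(x)\ge \varepsilon t$ reduces to every vertex having been visited at least $\varepsilon t/n$ times.

```python
import random

def cover_and_blanket(n, eps, rng):
    """One run on the n-cycle: return (tau_cov, tau_bl(eps)).

    For the simple random walk on the cycle, m^G = 2n and L_t(x) = visits/2,
    so (1.4) reads: n * visits_x >= eps * t for every vertex x.
    """
    visits = [0] * n
    x, t, cover = 0, 0, None
    while True:
        visits[x] += 1
        t += 1
        if cover is None and min(visits) > 0:
            cover = t
        if cover is not None and min(visits) * n >= eps * t:
            return cover, t
        x = (x + rng.choice((-1, 1))) % n

rng = random.Random(1)
n, eps = 10, 0.3
runs = [cover_and_blanket(n, eps, rng) for _ in range(200)]
mean_cover = sum(c for c, _ in runs) / len(runs)
mean_blanket = sum(b for _, b in runs) / len(runs)
# Path by path, tau_bl(eps) >= tau_cov, as noted above; for the n-cycle the
# expected cover time from any start is n(n-1)/2.
```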

Let us use the notation $\asymp$ to denote equivalence up to universal constant factors and $\asymp _{\varepsilon }$ to denote equivalence up to universal constant factors that depend only on $\varepsilon$ .

Theorem 1.1. (Ding, Lee, Peres [Reference Ding, Lee and Peres23]). For any finite connected graph $G=(V(G),E(G))$ with at least two vertices and any $\varepsilon \in (0,1)$ ,

\begin{equation*} t_{\textrm {cov}}^G\asymp |E(G)|\left (\mathbb {E} \max _{x\in V(G)} \eta _x\right )^2\asymp _{\varepsilon } t_{\textrm {bl}}^G(\varepsilon ), \end{equation*}

where $(\eta _x)_{x\in V(G)}$ is a centred Gaussian process with $\eta _{x_0}=0$ , for some $x_0\in V(G)$ , and

\begin{equation*} \left (\mathbb {E}(\eta _x-\eta _y)^2\right )_{x,y\in V(G)}=\left (R_G(x,y)\right )_{x,y\in V(G)}. \end{equation*}
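To make the Gaussian process of Theorem 1.1 concrete, the sketch below (an assumed toy example: a 4-cycle with unit conductances, pinned at vertex 0) samples such a GFF and checks its covariance structure empirically. Pinning $\eta _{x_0}=0$ forces, by polarization of $\mathbb{E}(\eta _x-\eta _y)^2=R_G(x,y)$ , the covariance $\textrm{Cov}(\eta _x,\eta _y)=\tfrac{1}{2}(R_G(x,x_0)+R_G(y,x_0)-R_G(x,y))$ .

```python
import numpy as np

# Assumed toy network: 4-cycle with unit conductances, GFF pinned at vertex 0.
n = 4
mu = np.zeros((n, n))
for i in range(n):
    mu[i, (i + 1) % n] = mu[(i + 1) % n, i] = 1.0
Lap = np.diag(mu.sum(axis=1)) - mu

# Effective resistances from the Laplacian pseudoinverse.
Lp = np.linalg.pinv(Lap)
R = np.array([[Lp[x, x] + Lp[y, y] - 2 * Lp[x, y] for y in range(n)]
              for x in range(n)])

# Covariance of the GFF pinned at 0 (polarization of E(eta_x - eta_y)^2 = R).
Sigma = 0.5 * (R[:, [0]] + R[[0], :] - R)

# Sample 100000 realizations via Cholesky on the unpinned vertices.
rng = np.random.default_rng(0)
chol = np.linalg.cholesky(Sigma[1:, 1:])
eta = np.vstack([np.zeros((1, 100000)),
                 chol @ rng.standard_normal((n - 1, 100000))])

# Empirical check: E(eta_x - eta_y)^2 should match R(x, y).
emp = np.array([[np.mean((eta[x] - eta[y]) ** 2) for y in range(n)]
                for x in range(n)])
```

On the unit 4-cycle, $R_G(x,y)$ is the parallel combination of the two arcs, e.g. $R_G(0,1)=\tfrac{1\cdot 3}{1+3}=\tfrac{3}{4}$ , which the empirical matrix reproduces up to Monte Carlo error.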

Secondly, let $(K,d_K)$ be a compact metric space and let $\pi$ be a Borel probability measure of full support on $(K,d_K)$ . Take $((X_t)_{t\ge 0},\mathbb{P}_x,x\in K)$ to be a $\pi$ -symmetric Hunt process that admits jointly continuous local times $(L_t(y))_{y\in K,t\ge 0}$ . A Hunt process is a strong Markov process that possesses useful properties such as the right-continuity and the existence of the left limits of sample paths (for definitions and other properties see [Reference Fukushima, Oshima and Takeda26, Appendix A.2]). Analogously, it is possible to define the $\varepsilon$ -blanket time variable of $K$ as

(1.5) \begin{equation} \tau _{\textrm{bl}}(\varepsilon )\;:\!=\;\inf \{t\ge 0\;:\;L_t(x)\ge \varepsilon t, \ \forall x\in K\} \end{equation}

and check that it is a non-trivial quantity (see Proposition 4.1).

The following assumption encodes the information that, properly rescaled, the discrete state spaces, invariant measures, random walks and local times converge to $(K,d_K)$ , $\pi$ , $X$ and $(L_t(x))_{x\in K,t\in [0,T]}$ respectively, for some fixed $T\gt 0$ . This formulation will be described in terms of the extended Gromov-Hausdorff topology constructed in Section 2.

Assumption 1. Fix $T\gt 0$ . Let $(G^n)_{n\ge 1}$ be a sequence of finite connected graphs that have at least two vertices, for which there exist sequences of real numbers $(\alpha (n))_{n\ge 1}$ and $(\beta (n))_{n\ge 1}$ , such that

\begin{equation*} \!\left (\!\alpha (n) G^n,\pi ^n,\left (\alpha (n) X^n_{\beta (n) t}\!\right )_{t\in [0,T]},\left (\alpha (n) L^n_{\beta (n) t}(x)\!\right )_{\substack {x\in V(G^n), \\ t\in [0,T]}}\right )\!\longrightarrow\! \left (\!\left (K,d_K,\rho \right ),\pi,X,\left (L_t(x)\right )_{\substack {x\in K, \\ t\in [0,T]}}\,\right ) \end{equation*}

in the sense of the extended pointed Gromov-Hausdorff topology, where $\alpha (n) G^n\;:\!=\;(G^n,\alpha (n) d_{G^n},\rho ^n)$ for distinguished points $\rho ^n\in V(G^n)$ and $\rho \in K$ at which $X^n$ and $X$ start respectively. In the above expression the definition of the discrete local times is extended to all positive times by linear interpolation.

The convergence in the display above is in distribution; even when the spaces are deterministic, the objects on both sides are random because of the processes and their local times. In most of the examples that will be discussed later, we will consider random graphs. In this context, we want to verify that the previous convergence holds in distribution. Our first conclusion is the following.

Theorem 1.2. Suppose that Assumption 1 holds in such a way that the time and space scaling factors satisfy $\alpha (n) \beta (n)=m^{G^n}$ , for every $n\ge 1$ . Then, for every $\varepsilon \in (0,1)$ , $\delta \in (0,1)$ and $t\in [0,T]$ ,

(1.6) \begin{equation} \limsup _{n\rightarrow \infty } \mathbb{P}_{\rho ^n}^n \left (\beta (n)^{-1} \tau _{\textrm{bl}}^n(\varepsilon )\le t\right )\le \mathbb{P}_{\rho } \left (\tau _{\textrm{bl}}(\varepsilon (1-\delta ))\le t\right ), \end{equation}
(1.7) \begin{equation} \liminf _{n\rightarrow \infty } \mathbb{P}_{\rho ^n}^n \left (\beta (n)^{-1} \tau _{\textrm{bl}}^n(\varepsilon )\le t\right )\ge \mathbb{P}_{\rho } \left ({\tau }_{\textrm{bl}}(\varepsilon (1+\delta ))\lt t\right ), \end{equation}

where $\mathbb{P}_{\rho ^n}^n$ is the law of $X^n$ on $G^n$ , started from $\rho ^n$ , and $\mathbb{P}_{\rho }$ is the law of $X$ on $K$ , started from $\rho$ .

The mapping $\varepsilon \mapsto \tau _{\textrm{bl}}(\varepsilon )$ is increasing on $(0,1)$ , so it possesses left and right limits at each point. It is also clear that if $\tau _{\textrm{bl}}(\varepsilon )$ is continuous with probability 1 at $\varepsilon$ , then letting $\delta \to 0$ in both (1.6) and (1.7) yields the corollary below.

Corollary 1.3. Suppose that Assumption 1 holds in such a way that the time and space scaling factors satisfy $\alpha (n) \beta (n)=m^{G^n}$ , for every $n\ge 1$ . Then, for every $\varepsilon \in (0,1)$ ,

\begin{equation*} \beta (n)^{-1} \tau ^n_{\textrm {bl}}(\varepsilon )\to \tau _{\textrm {bl}}(\varepsilon ) \end{equation*}

in distribution, if $\tau _{\textrm{bl}}(\varepsilon )$ is continuous with probability 1 at $\varepsilon$ .

To demonstrate our main results, consider first $T$ , a critical Galton-Watson tree (with finite variance $\sigma ^2$ for the offspring distribution). The following result was obtained by Aldous for the cover time of the simple random walk (see [Reference Aldous3, Proposition 15]); we state it with the blanket time in place of the cover time, since the two parameters are equivalent up to universal constants, as was conjectured in [Reference Winkler and Zuckerman39] and proved in [Reference Ding, Lee and Peres23].

Theorem 1.4. (Aldous [Reference Aldous3]). Let $T$ be a critical Galton-Watson tree (with finite variance $\sigma ^2$ for the offspring distribution). For any $\delta \gt 0$ there exists $A=A(\delta,\varepsilon,\sigma ^2)\gt 0$ such that

\begin{equation*} \mathbb {P}\big ( A^{-1} k^{3/2}\le t_{\textrm {bl}}^{T}(\varepsilon )\le A k^{3/2} \,\big |\, |T|\in [k,2k] \big )\ge 1-\delta, \end{equation*}

for every $\varepsilon \in (0,1)$ .

In what follows $\mathbb{P}_{\rho ^n}$ , $n\ge 1$ , as well as $\mathbb{P}_{\rho }$ denote the annealed measures, that is, the probability measures obtained by integrating out the randomness of the state spaces involved. Our contribution refines the preceding tightness result on the order of the blanket time.

Theorem 1.5. Let $\mathcal{T}_n$ be a critical Galton-Watson tree (with finite variance for the aperiodic offspring distribution) conditioned to have total progeny $n+1$ . Fix $\varepsilon \in (0,1)$ . If $\tau _{\textrm{bl}}^n(\varepsilon )$ is the $\varepsilon$ -blanket time variable of the simple random walk on $\mathcal{T}_n$ , started from its root $\rho ^n$ , then

\begin{equation*} \mathbb {P}_{\rho ^n}\left (n^{-3/2} \tau _{\textrm {bl}}^n(\varepsilon )\le t\right )\to \mathbb {P}_{\rho }\left (\tau _{\textrm {bl}}^e(\varepsilon )\le t\right ), \end{equation*}

for every $t\ge 0$ , where $\tau _{\textrm{bl}}^e(\varepsilon )\in (0,\infty )$ is the $\varepsilon$ -blanket time variable of the Brownian motion on the Brownian CRT $\mathcal{T}_e$ , started from the root $\rho \in \mathcal{T}_e$ . Equivalently, for every $\varepsilon \in (0,1)$ , $n^{-3/2} \tau _{\textrm{bl}}^n(\varepsilon )$ under $\mathbb{P}_{\rho ^n}$ converges weakly to $\tau _{\textrm{bl}}^e(\varepsilon )$ under $\mathbb{P}_{\rho }$ .

Now let $G(n,p)$ be the random subgraph of the complete graph on $n$ vertices labelled by $[n]\;:\!=\;\{1,\ldots,n\}$ obtained by $p$ -bond percolation. If $p=n^{-1}+\lambda n^{-4/3}$ for some $\lambda \in \mathbb{R}$ , that is, when we are in the so-called critical window, the largest connected component $\mathcal{C}_1^n$ , rooted at its first ordered vertex, say $\rho ^n$ , converges as a graph, suitably rescaled, to a random compact metric space $\mathcal{M}$ that can be constructed directly from the Brownian CRT $\mathcal{T}_e$ (see the work of [Reference Addario-Berry, Broutin and Goldschmidt2]). The following result on the blanket time of the simple random walk on $\mathcal{C}_1^n$ is due to Barlow, Ding, Nachmias and Peres [Reference Barlow, Ding, Nachmias and Peres7].

Theorem 1.6. (Barlow, Ding, Nachmias, Peres [Reference Barlow, Ding, Nachmias and Peres7]). Let $\mathcal{C}_1^n$ be the largest connected component of $G(n,p)$ , $p=n^{-1}+\lambda n^{-4/3}$ , $\lambda \in \mathbb{R}$ fixed. For any $\delta \gt 0$ there exists $B=B(\delta,\varepsilon )\gt 0$ such that

\begin{equation*} \mathbb {P}\big ( B^{-1} n\le t_{\textrm {bl}}^{\mathcal {C}_1^n}(\varepsilon )\le B n \big ) \ge 1-\delta, \end{equation*}

for every $\varepsilon \in (0,1)$ .

Our contribution again refines the preceding tightness result on the order of the blanket time.

Theorem 1.7. Fix $\varepsilon \in (0,1)$ . If $\tau _{\textrm{bl}}^{n}(\varepsilon )$ is the $\varepsilon$ -blanket time variable of the simple random walk on $\mathcal{C}_1^n$ , started from its root $\rho ^n$ , then

\begin{equation*} \mathbb {P}_{\rho ^n}\left (n^{-1} \tau _{\textrm {bl}}^n(\varepsilon )\le t\right )\to \mathbb {P}_{\rho }\left (\tau _{\textrm {bl}}^{\mathcal {M}}(\varepsilon )\le t\right ), \end{equation*}

for every $t\ge 0$ , where $\tau _{\textrm{bl}}^{\mathcal{M}}(\varepsilon )\in (0,\infty )$ is the $\varepsilon$ -blanket time variable of the Brownian motion on $\mathcal{M}$ , started from its root $\rho$ .

The paper is organized as follows. In Section 2, we introduce the extended Gromov-Hausdorff topology and derive some useful properties. In Section 3, we present Assumption 2, a weaker sufficient assumption for the case when the sequence of spaces is equipped with resistance metrics. In Section 4, we prove Theorem 1.2 under Assumption 1. In Section 5, we verify the assumptions of Corollary 1.3, and therefore prove convergence of blanket times for the examples of critical random graphs mentioned above, thus proving Theorems 1.5–1.7.

2. Extended Gromov-Hausdorff topologies

In this section we define an extended Gromov-Hausdorff distance between quadruples consisting of a compact metric space, a Borel probability measure, a time-indexed right-continuous path with left-hand limits and a local time-type function. This allows us to make precise the assumption under which we are able to prove convergence of blanket times for the random walks on various models of critical random graphs. In Lemma 2.3, we give an equivalent characterization of the assumption that will be used in Section 4 when proving distributional limits for the rescaled blanket times. Also, Lemma 2.4 will be useful when it comes to checking that the examples we treat satisfy the assumption.

Let $(K,d_K)$ be a non-empty compact metric space. For a fixed $T\gt 0$ , let $X^K$ be a path in $D([0,T],K)$ , the space of càdlàg functions, i.e. right-continuous functions with left-hand limits, from $[0,T]$ to $K$ . We say that a function $\lambda$ from $[0,T]$ onto itself is a time change if it is strictly increasing and continuous. Let $\Lambda$ denote the set of all time changes. If $\lambda \in \Lambda$ , then $\lambda (0)=0$ and $\lambda (T)=T$ . We equip $D([0,T],K)$ with the Skorohod metric $d_{J_1}$ defined as follows:

\begin{equation*} d_{J_1}(x,y)\;:\!=\;\inf _{\lambda \in \Lambda } \bigg \{\sup _{t\in [0,T]} |\lambda (t)-t|+\sup _{t\in [0,T]} d_{K} (x(\lambda (t)),y(t))\bigg \}, \end{equation*}

for $x,y\in D([0,T],K)$ . The idea behind going from the uniform metric to the Skorohod metric $d_{J_1}$ is to say that two paths are close if they are uniformly close in $[0,T]$ , after allowing small perturbations of time. Moreover, $D([0,T],K)$ endowed with $d_{J_1}$ becomes a separable metric space (see [Reference Billingsley11, Theorem 12.2]). Let $\mathcal{P}(K)$ denote the space of Borel probability measures on $K$ . If $\mu,\nu \in \mathcal{P}(K)$ we set

\begin{equation*} d_{P}(\mu,\nu )=\inf \{\varepsilon \gt 0\;:\;\mu (A)\le \nu (A^{\varepsilon })+\varepsilon \textrm { and }\nu (A)\le \mu (A^{\varepsilon })+\varepsilon,\textrm { for any }A\in \mathcal {M}(K)\}, \end{equation*}

where $\mathcal{M}(K)$ is the set of all closed subsets of $K$ . This expression gives the standard Prokhorov metric between $\mu$ and $\nu$ . Moreover, it is known (see [Reference Daley and Vere-Jones22, Appendix A.2.5]) that $(\mathcal{P}(K),d_{P})$ is a Polish metric space, i.e. a complete and separable metric space, and that the topology generated by $d_P$ is exactly the topology of weak convergence, i.e. convergence against bounded continuous functionals.
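On a finite metric space the infimum defining $d_P$ can be computed by brute force over all subsets. The sketch below (an assumed toy example: three points on the real line, with the closed enlargement $A^{\varepsilon }=\{x\;:\;d(x,A)\le \varepsilon \}$; conventions for $A^{\varepsilon }$ differ only in a boundary case that does not matter here) recovers the intuition that moving mass $m$ across distance $d\gt m$ costs $\varepsilon =m$ .

```python
from itertools import combinations

# Assumed toy example: three points on the real line, two discrete measures.
pts = [0.0, 1.0, 2.0]
d = lambda i, j: abs(pts[i] - pts[j])
mu = [0.5, 0.5, 0.0]   # mass at points 0 and 1
nu = [0.5, 0.0, 0.5]   # mass at points 0 and 2

def prokhorov_holds(eps):
    """Check mu(A) <= nu(A^eps) + eps and nu(A) <= mu(A^eps) + eps for all A."""
    idx = range(len(pts))
    for r in range(1, len(pts) + 1):
        for A in combinations(idx, r):
            Aeps = [j for j in idx if any(d(i, j) <= eps for i in A)]
            if sum(mu[i] for i in A) > sum(nu[j] for j in Aeps) + eps:
                return False
            if sum(nu[i] for i in A) > sum(mu[j] for j in Aeps) + eps:
                return False
    return True

# Bisect for the infimum over eps (prokhorov_holds is monotone in eps).
lo, hi = 0.0, 3.0
for _ in range(50):
    mid = (lo + hi) / 2
    if prokhorov_holds(mid):
        hi = mid
    else:
        lo = mid
d_P = hi
# Here mass 0.5 sits at distance 1, so d_P = 0.5 (witnessed by A = {1}).
```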

Let $\pi ^K$ be a Borel probability measure on $K$ and $L^K=(L_t^K(x))_{x\in K,t\in [0,T]}$ be a jointly continuous function of $(t,x)$ taking non-negative real values. Let $\mathbb{K}$ be the collection of quadruples $(K,\pi ^K,X^K,L^K)$ . We say that two elements $(K,\pi ^K,X^K,L^K)$ and $(K',\pi ^{K'},X^{K'},L^{K'})$ of $\mathbb{K}$ are equivalent if there exists an isometry $f\;:\;K\rightarrow K'$ such that

  • $\pi ^{K}\circ f^{-1}=\pi ^{K'}$ ,

  • $f\circ X^K=X^{K'}$ , which is shorthand for $f(X_t^K)=X_t^{K'}$ , for every $t\in [0,T]$ ,

  • $L_t^{K'}\circ f=L_t^K$ , for every $t\in [0,T]$ , which is shorthand for $L_t^{K'}(f(x))=L_t^K(x)$ , for every $t\in [0,T]$ , $x\in K$ .

To avoid overcomplicating our notation, we will often identify an equivalence class of $\mathbb{K}$ with a particular element of it. We now introduce a distance $d_{\mathbb{K}}$ on $\mathbb{K}$ by setting

\begin{align*} d_{\mathbb{K}}&((K,\pi ^{K},X^{K},L^{K}),(K',\pi ^{K'},X^{K'},L^{K'})) \\[5pt] &\;:\!=\;\inf _{Z,\phi,\phi ',\mathcal{C}}\bigg \{d_{P}^Z(\pi ^K\circ \phi ^{-1},\pi ^{K'}\circ{\phi '}^{-1})+d_{J_{1}}^Z(\phi (X_t^K),\phi '(X_{t}^{K'})) \\[5pt] &+\sup _{(x,x')\in \mathcal{C}}\bigg (d_{Z}(\phi (x),\phi '(x'))+\sup _{t\in [0,T]} |L_{t}^{K}(x)-L_{t}^{K'}(x')|\bigg )\bigg \}, \end{align*}

where the infimum is taken over all metric spaces $(Z,d_Z)$ , isometric embeddings $\phi \;:\;K\rightarrow Z$ , $\phi '\;:\;K'\rightarrow Z$ and correspondences $\mathcal{C}$ between $K$ and $K'$ . A correspondence between $K$ and $K'$ is a subset of $K\times K'$ , such that for every $x\in K$ there exists at least one $x'$ in $K'$ such that $(x,x')\in \mathcal{C}$ and conversely for every $x'\in K'$ there exists at least one $x\in K$ such that $(x,x')\in \mathcal{C}$ . In the above expression $d^Z_P$ is the standard Prokhorov distance between Borel probability measures on $Z$ , and $d_{J_1}^Z$ is the Skorohod metric $d_{J_1}$ between càdlàg paths on $Z$ .
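To see how correspondences metrize closeness of spaces, recall that for compact metric spaces $d_{GH}(K,K')=\tfrac{1}{2}\inf _{\mathcal{C}} \textrm{dis}(\mathcal{C})$ , where $\textrm{dis}(\mathcal{C})=\sup \{|d_K(x_1,x_2)-d_{K'}(x'_1,x'_2)|\;:\;(x_1,x'_1),(x_2,x'_2)\in \mathcal{C}\}$ (the characterization cited from [Reference Burago, Burago and Ivanov12] below). For tiny finite spaces this infimum can be brute-forced; the two three-point spaces are an assumed toy example.

```python
from itertools import product

def gh_distance(dK, dK2):
    """Brute-force d_GH between two finite metric spaces given as distance matrices."""
    n, m = len(dK), len(dK2)
    pairs = list(product(range(n), range(m)))
    best = float('inf')
    for mask in range(1, 1 << len(pairs)):
        C = [pairs[k] for k in range(len(pairs)) if (mask >> k) & 1]
        # C must be a correspondence: both coordinate projections are onto.
        if {x for x, _ in C} != set(range(n)) or {y for _, y in C} != set(range(m)):
            continue
        # Distortion of C: worst mismatch of distances over pairs of matched points.
        dis = max(abs(dK[x1][x2] - dK2[y1][y2])
                  for (x1, y1) in C for (x2, y2) in C)
        best = min(best, dis)
    return best / 2

# Assumed toy spaces: three points on a path vs. an equilateral triple.
path = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

Any correspondence must match the two endpoints of the path to points of the triangle, incurring distortion at least $|2-1|=1$, so $d_{GH}=\tfrac{1}{2}$ here, achieved by the identity matching.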

In the following proposition we check that $d_{\mathbb{K}}$ indeed defines a metric and that the resulting metric space is separable. The latter fact will be used repeatedly later when applying Skorohod’s representation theorem to sequences of random graphs in order to prove statements regarding their blanket times or cover times. Before proceeding to the proof of Proposition 2.1, let us first make a few remarks about the ideas behind the definition of $d_{\mathbb{K}}$ . The first term, along with the Hausdorff distance on $Z$ between $\phi (K)$ and $\phi '(K')$ , is that used in the Gromov-Hausdorff-Prokhorov distance for compact metric spaces (see [Reference Abraham, Delmas and Hoscheit1, Section 2.2, (6)]). However, in our definition of $d_{\mathbb{K}}$ we did not include the Hausdorff distance between the embedded compact metric spaces $K$ and $K'$ , since it is absorbed by the first part of the third term in the expression for $d_{\mathbb{K}}$ . Recall here the equivalent definition of the standard Gromov-Hausdorff distance via correspondences as a way to relate two compact metric spaces (see [Reference Burago, Burago and Ivanov12, Theorem 7.3.25]). The motivation for the second term comes from [Reference Croydon16], where the author defined a distance between pairs of compact length spaces (for a definition of a length space see [Reference Burago, Burago and Ivanov12, Definition 2.1.6]) and continuous paths on those spaces. As we will see later, the restriction to length spaces is not necessary for proving that $d_{\mathbb{K}}$ is a metric. Considering càdlàg paths instead of continuous paths and replacing the uniform metric with the Skorohod metric $d_{J_1}$ allows us to prove separability without assuming that $(K,d_K)$ is a non-empty compact length space. The final term was first introduced in [Reference Duquesne and Gall24, Section 6] to define a distance between spatial trees equipped with a continuous function.

Proposition 2.1. $(\mathbb{K},d_{\mathbb{K}})$ is a separable metric space.

Proof. That $d_{\mathbb{K}}$ is non-negative and symmetric is obvious. To prove that it is also finite, for any choice of $(K,\pi ^K,X^K,L^K)$ , $(K',\pi ^{K'},X^{K'},L^{K'})$ consider the disjoint union $Z=K\sqcup K'$ of $K$ and $K'$ , keeping the original metrics within $K$ and $K'$ . Then, set $d_{Z}(x,x')\;:\!=\;\textrm{diam}_{K}(K)+\textrm{diam}_{K'}(K')$ , for any $x\in K$ , $x'\in K'$ , where

\begin{equation*} \textrm {diam}_{K}(K)=\sup _{y,z\in K} d_K(y,z) \end{equation*}

denotes the diameter of $K$ with respect to the metric $d_K$ . Since $K$ and $K'$ are compact their diameters are finite. Therefore, $d_Z$ is finite for any $x\in K$ , $x'\in K'$ . To conclude that $d_{\mathbb{K}}$ is finite, simply suppose that $\mathcal{C}=K\times K'$ .

Next, we show that $d_{\mathbb{K}}$ is positive-definite. Let $(K,\pi ^K,X^K,L^K), (K',\pi ^{K'},X^{K'},L^{K'})$ be in $\mathbb{K}$ such that $d_{\mathbb{K}}((K,\pi ^K,X^K,L^K),(K',\pi ^{K'},X^{K'},L^{K'}))=0$ . Then, for every $\varepsilon \gt 0$ there exist $Z,\phi,\phi ',\mathcal{C}$ such that the sum of the quantities inside the infimum in the definition of $d_{\mathbb{K}}$ is bounded above by $\varepsilon$ . Furthermore, there exists $\lambda _{\varepsilon }\in \Lambda$ such that the sum of the quantities inside the infimum in the definition of $d_{J_1}^Z$ is bounded above by $2 \varepsilon$ . Recall that for every $t\in [0,T]$ , $L^K_t\;:\;K\to \mathbb{R}_{+}$ is a continuous function and since $K$ is a compact metric space, then it is also uniformly continuous. Therefore, there exists a $\delta \in (0,\varepsilon ]$ such that

(2.1) \begin{equation} \sup _{\substack{x_1,x_2\in K:\\ d_{K}(x_1,x_2)\lt \delta }} \sup _{t\in [0,T]} |L_t^K(x_1)-L_t^K(x_2)|\le \varepsilon . \end{equation}

Now, let $(x_i)_{i\ge 1}$ be a dense sequence of distinct elements in $K$ . Since $K$ is compact, there exists an integer $N_{\varepsilon }$ such that the collection of open balls $(B_{K}(x_i,\delta ))_{i=1}^{N_{\varepsilon }}$ covers $K$ . Defining $A_1=B_{K}(x_1,\delta )$ and $A_i=B_{K}(x_i,\delta )\setminus \cup _{j=1}^{i-1} B_{K}(x_j,\delta )$ , for $i=2,\ldots,N_{\varepsilon }$ , we obtain a disjoint cover $(A_i)_{i=1}^{N_{\varepsilon }}$ of $K$ . Define a function $f_{\varepsilon }\;:\;K\rightarrow K'$ by setting

\begin{equation*} f_{\varepsilon }(x)\;:\!=\;x'_{\!\!i} \end{equation*}

on $A_i$ , where $x'_{\!\!i}$ is chosen such that $(x_i,x'_{\!\!i})\in \mathcal{C}$ , for $i=1,\ldots,N_{\varepsilon }$ . Note that by definition $f_{\varepsilon }$ is a measurable function defined on $K$ . For any $x\in K$ , such that $x\in A_i$ for some $i=1,\ldots,N_{\varepsilon }$ , we have that

(2.2) \begin{align} d_{Z}(\phi (x),\phi '(f_{\varepsilon }(x)))&=d_{Z}(\phi (x),\phi '(x'_{\!\!i})) \nonumber \\[5pt] &\le d_{Z}(\phi (x),\phi (x_i))+d_{Z}(\phi (x_i),\phi '(x'_{\!\!i}))\le \delta +\varepsilon \le 2 \varepsilon . \end{align}

From (2.2), it follows that for any $x\in K$ and $y\in K$

\begin{align*} |d_{Z}(\phi (x),\phi (y))-d_{Z}(\phi '(f_{\varepsilon }(x)),\phi '(f_{\varepsilon }(y)))|&\le d_{Z}(\phi (y),\phi '(f_{\varepsilon }(y)))+d_{Z}(\phi (x),\phi '(f_{\varepsilon }(x))) \\[5pt] &\le 2 \varepsilon +2 \varepsilon =4 \varepsilon . \end{align*}

This immediately yields

(2.3) \begin{equation} \sup _{x,y\in K} |d_{K}(x,y)-d_{K'}(f_{\varepsilon }(x),f_{\varepsilon }(y))|\le 4 \varepsilon . \end{equation}

From (2.3), we deduce the bound

(2.4) \begin{equation} d_{P}^{K'}(\pi ^{K}\circ f_{\varepsilon }^{-1},\pi ^{K'})\le 5 \varepsilon \end{equation}

for the Prokhorov distance between $\pi ^K\circ f_{\varepsilon }^{-1}$ and $\pi ^{K'}$ in $K'$ . Using (2.1) and the fact that the last quantity inside the infimum in the definition of $d_{\mathbb{K}}$ is bounded above by $\varepsilon$ , we deduce

(2.5) \begin{equation} \sup _{x\in K,t\in [0,T]} |L_t^K(x)-L_t^{K'}(f_{\varepsilon }(x))|\le 2 \varepsilon . \end{equation}

Using (2.2) and the fact that the second quantity in the infimum is bounded above by $\varepsilon$ , we deduce that for any $t\in [0,T]$

\begin{align*} d_{Z}(\phi '(f_{\varepsilon }(X^K_{\lambda _{\varepsilon }(t)})),\phi '(X_t^{K'}))&\le d_{Z}(\phi '(f_{\varepsilon }(X^K_{\lambda _{\varepsilon }(t)})),\phi (X^K_{\lambda _{\varepsilon }(t)}))+d_{Z}(\phi (X^K_{\lambda _{\varepsilon }(t)}),\phi '(X_t^{K'})) \\[5pt] &\le 2 \varepsilon +2 \varepsilon =4 \varepsilon . \end{align*}

Therefore,

(2.6) \begin{equation} \sup _{t\in [0,T]} d_{K'}(f_{\varepsilon }(X_{\lambda _{\varepsilon }(t)}^K),X_t^{K'})\le 4 \varepsilon . \end{equation}

Using a diagonalization argument, we can find a sequence $(\varepsilon _n)_{n\ge 1}$ such that $f_{\varepsilon _n}(x_i)$ converges to some limit $f(x_i)\in K'$ , for every $i\ge 1$ . From (2.3) we immediately get that $d_{K}(x_i,x_j)=d_{K'}(f(x_i),f(x_j))$ , for every $i,j\ge 1$ . By [Reference Burago, Burago and Ivanov12, Proposition 1.5.9], this map can be extended continuously to the whole of $K$ . This shows that $f$ is distance-preserving. Reversing the roles of $K$ and $K'$ , we can also find a distance-preserving map from $K'$ to $K$ . Hence $f$ is an isometry. We now check that $\pi ^K\circ f^{-1}=\pi ^{K'}$ , $L_t^{K'}\circ f=L_t^K$ , for all $t\in [0,T]$ , and $f\circ X^K=X^{K'}$ . Since $f_{\varepsilon _n}(x_i)$ converges to $f(x_i)$ in $K'$ , we can find $\varepsilon '\in (0,\varepsilon ]$ such that $d_{K'}(f_{\varepsilon '}(x_i),f(x_i))\le \varepsilon$ , for $i=1,\ldots,N_{\varepsilon }$ . Recall that $(x_i)_{i=1}^{N_{\varepsilon }}$ is an $\varepsilon$ -net in $K$ . Then, for any $x\in A_i$ with $i\in \{1,\ldots,N_{\varepsilon }\}$ , using (2.3) and the fact that $f$ is an isometry, we deduce

(2.7) \begin{equation} d_{K'}(f_{\varepsilon '}(x),f(x))\le d_{K'}(f_{\varepsilon '}(x),f_{\varepsilon '}(x_i))+d_{K'}(f_{\varepsilon '}(x_i),f(x_i))+d_{K'}(f(x_i),f(x))\le 7 \varepsilon . \end{equation}

This, combined with (2.4) implies

\begin{equation*} d_{P}^{K'}(\pi ^K\circ f^{-1},\pi ^{K'})\le d_{P}^{K'}(\pi ^K\circ f^{-1},\pi ^{K}\circ f_{\varepsilon '}^{-1})+d_{P}^{K'}(\pi ^K\circ f_{\varepsilon '}^{-1},\pi ^{K'})\le 12 \varepsilon . \end{equation*}

Since $\varepsilon \gt 0$ was arbitrary, $\pi ^{K}\circ f^{-1}=\pi ^{K'}$ . Moreover, from (2.5) and (2.7) we have that

\begin{align*} &\sup _{x\in K,t\in [0,T]} |L_t^K(x)-L_t^{K'}(f(x))| \\[5pt] &\le \sup _{x\in K,t\in [0,T]} |L_t^{K}(x)-L_t^{K'}(f_{\varepsilon '}(x))|+\sup _{x\in K,t\in [0,T]} |L_t^{K'}(f_{\varepsilon '}(x))-L_t^{K'}(f(x))| \\[5pt] &\le 2 \varepsilon +\sup _{\substack{x_1',x_2'\in K':\\ d_{K'}(x_1',x_2')\le 7\varepsilon }} \sup _{t\in [0,T]} |L_t^{K'}(x_1')-L_t^{K'}(x_2')|. \end{align*}

Now, this, the uniform continuity of $L^{K'}$ (replace $L^K$ by $L^{K'}$ in (2.1)) and the arbitrariness of $\varepsilon \gt 0$ give $L_t^{K'}\circ f=L_t^K$ , for all $t\in [0,T]$ . Finally, we verify that $f\circ X^K=X^{K'}$ . For any $t\in [0,T]$

\begin{equation*} d_{K'}(f(X_{\lambda _{\varepsilon }(t)}^K),X_t^{K'})\le d_{K'}(f(X_{\lambda _{\varepsilon }(t)}^K),f_{\varepsilon '}(X_{\lambda _{\varepsilon }(t)}^K))+d_{K'}(f_{\varepsilon '}(X_{\lambda _{\varepsilon }(t)}^K),X_t^{K'})\le 7 \varepsilon +4 \varepsilon =11 \varepsilon, \end{equation*}

where we used (2.6) and (2.7). Therefore,

(2.8) \begin{equation} \sup _{t\in [0,T]} d_{K'}(f(X_{\lambda _{\varepsilon }(t)}^K),X_t^{K'})\le 11 \varepsilon . \end{equation}

Recall that $\sup _{t\in [0,T]} |\lambda _{\varepsilon }(t)-t|\le 2 \varepsilon$ . From this and (2.8), it follows that for every $t\in [0,T]$ , there exists a sequence $(z_n)_{n\ge 1}$ , such that $z_n\rightarrow t$ and $d_{K'}(f(X_{z_n}^{K}),X_t^{K'})\rightarrow 0$ , as $n\rightarrow \infty$ . If $t$ is a continuity point of $f\circ X^K$ , then $d_{K'}(f(X_{z_n}^{K}),f(X_t^K))\rightarrow 0$ , as $n\rightarrow \infty$ . Thus, $f(X_t^K)=X_t^{K'}$ . If $f\circ X^K$ has a jump at $t$ and $(z_n)_{n\ge 1}$ has a subsequence $(z_{n_k})_{k\ge 1}$ , such that $z_{n_k}\ge t$ , for every $k\ge 1$ , then $d_{K'}(f(X_{z_{n_k}}^{K}),X_t^{K'})\rightarrow 0$ and, by right-continuity, $d_{K'}(f(X_{z_{n_k}}^{K}),f(X_t^K))\rightarrow 0$ , as $k\rightarrow \infty$ . Therefore, $f(X_t^K)=X_t^{K'}$ . Otherwise, $z_n\lt t$ for $n$ large enough, and $d_{K'}(f(X_{z_n}^{K}),f(X_{t-}^{K}))\rightarrow 0$ , as $n\rightarrow \infty$ , which implies $f(X_{t-}^{K})=X_t^{K'}$ . In summary, we have shown that at every $t$ either $f(X^K_t)=X_t^{K'}$ or $f(X^K_{t-})=X_t^{K'}$ . But, since $X^{K'}$ is càdlàg, $f\circ X^K=X^{K'}$ . This completes the proof that the quadruples $(K,\pi ^K,X^K,L^K)$ and $(K',\pi ^{K'},X^{K'},L^{K'})$ are equivalent in $(\mathbb{K},d_{\mathbb{K}})$ , and consequently that $d_{\mathbb{K}}$ is positive-definite.

For the triangle inequality we follow the proof of [Reference Burago, Burago and Ivanov12, Proposition 7.3.16], which proves the triangle inequality for the standard Gromov-Hausdorff distance. Let $\mathcal{K}^i=(K^i,\pi ^i,X^i,L^i)$ be an element of $(\mathbb{K},d_{\mathbb{K}})$ for $i=1,2,3$ . Suppose that

\begin{equation*} d_{\mathbb {K}}(\mathcal {K}^1,\mathcal {K}^2)\lt \delta _1. \end{equation*}

Thus, there exists a metric space $Z_1$ , isometric embeddings $\phi _{1,1}\;:\;K^1\rightarrow Z_1$ , $\phi _{2,1}\;:\;K^2\rightarrow Z_1$ and a correspondence $\mathcal{C}_1$ between $K^1$ and $K^2$ such that the sum of the quantities inside the infimum that defines $d_{\mathbb{K}}$ is bounded above by $\delta _1$ . Similarly, if

\begin{equation*} d_{\mathbb {K}}(\mathcal {K}^2,\mathcal {K}^3)\lt \delta _2, \end{equation*}

there exists a metric space $Z_2$ , isometric embeddings $\phi _{2,2}\;:\;K^2\rightarrow Z_2$ , $\phi _{3,2}\;:\;K^3\rightarrow Z_2$ and a correspondence $\mathcal{C}_2$ between $K^2$ and $K^3$ such that the sum of the quantities inside the infimum that defines $d_{\mathbb{K}}$ is bounded above by $\delta _2$ . Next, we set $Z=Z_1\sqcup Z_2$ to be the disjoint union of $Z_1$ and $Z_2$ and we define a distance on $Z$ in the following way. Let $d_{Z|Z_i\times Z_i}=d_{Z_i}$ , for $i=1,2$ , and for $x\in Z_1$ , $y\in Z_2$ set

\begin{equation*} d_{Z}(x,y)\;:\!=\;\inf _{z\in K^2} \{d_{Z_1}(x,\phi _{2,1}(z))+d_{Z_2}(\phi _{2,2}(z),y)\}. \end{equation*}
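As an aside, the triangle inequality for this glued distance can be verified by a short computation; the only non-trivial cases are the mixed ones, and in the representative case $x,z\in Z_1$ , $y\in Z_2$ the sketch reads:

```latex
% Triangle inequality for d_Z in the mixed case x, z \in Z_1, y \in Z_2:
\begin{align*}
d_Z(x,y) &= \inf_{w\in K^2}\bigl\{d_{Z_1}(x,\phi_{2,1}(w))+d_{Z_2}(\phi_{2,2}(w),y)\bigr\} \\
&\le d_{Z_1}(x,z)+\inf_{w\in K^2}\bigl\{d_{Z_1}(z,\phi_{2,1}(w))+d_{Z_2}(\phi_{2,2}(w),y)\bigr\} \\
&= d_Z(x,z)+d_Z(z,y).
\end{align*}
```

The remaining mixed case, with two points in the same piece and the intermediate point in the other, follows similarly, using that $\phi _{2,1}$ and $\phi _{2,2}$ are isometries onto their images.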

It is obvious that $d_Z$ is symmetric and non-negative. It is also straightforward to check that $d_Z$ satisfies the triangle inequality. Identifying points at zero distance and slightly abusing notation, we turn $(Z,d_{Z})$ into a metric space, which comes with isometric embeddings $\phi _i$ of $Z_i$ into $Z$ , for $i=1,2$ . Using the triangle inequality for the Prokhorov metric on $Z$ , we obtain

\begin{align*} &d_P^Z(\pi ^1\circ (\phi _1\circ \phi _{1,1})^{-1},\pi ^3\circ (\phi _2\circ \phi _{3,2})^{-1}) \\[5pt] &\quad \le d_P^Z(\pi ^1\circ (\phi _1\circ \phi _{1,1})^{-1},\pi ^2\circ (\phi _1\circ \phi _{2,1})^{-1})+d_P^Z(\pi ^2\circ (\phi _1\circ \phi _{2,1})^{-1},\pi ^3\circ (\phi _2\circ \phi _{3,2})^{-1}). \end{align*}

Now, since $\phi _1(\phi _{2,1}(y))=\phi _2(\phi _{2,2}(y))$ , for all $y\in K^2$ , we deduce

(2.9) \begin{equation} d_P^Z(\pi ^1\circ (\phi _1\circ \phi _{1,1})^{-1},\pi ^3\circ (\phi _2\circ \phi _{3,2})^{-1}) \le d_P^{Z_1}(\pi ^1\circ \phi _{1,1}^{-1},\pi ^2\circ \phi _{2,1}^{-1})+d_P^{Z_2}(\pi ^2\circ \phi _{2,2}^{-1},\pi ^3\circ \phi _{3,2}^{-1}). \end{equation}

A similar bound also applies for the embedded càdlàg paths. Namely, using the same methods as above, we deduce

(2.10) \begin{equation} d_{J_1}^Z((\phi _1\circ \phi _{1,1})(X^1),(\phi _2\circ \phi _{3,2})(X^3))\le d_{J_1}^Z(\phi _{1,1}(X^1),\phi _{2,1}(X^2))+d_{J_1}^Z(\phi _{2,2}(X^2),\phi _{3,2}(X^3)). \end{equation}

Now, let

\begin{equation*} \mathcal {C}\;:\!=\;\{(x,z)\in K^1\times K^3\;:\;(x,y)\in \mathcal {C}_1,(y,z)\in \mathcal {C}_2,\textrm { for some }y\in K^2\}. \end{equation*}

Observe that $\mathcal{C}$ is a correspondence between $K^1$ and $K^3$ . Then, if $(x,z)\in \mathcal{C}$ , there exists $y\in K^2$ such that $(x,y)\in \mathcal{C}_1$ and $(y,z)\in \mathcal{C}_2$ , and noting again that $\phi _1(\phi _{2,1}(y))=\phi _2(\phi _{2,2}(y))$ , for all $y\in K^2$ , we deduce

(2.11) \begin{equation} d_Z(\phi _1(\phi _{1,1}(x)),\phi _2(\phi _{3,2}(z)))\le d_{Z_1}(\phi _{1,1}(x),\phi _{2,1}(y))+d_{Z_2}(\phi _{2,2}(y),\phi _{3,2}(z)). \end{equation}

Using the same arguments one can prove a corresponding bound involving $L^i$ , $i=1,2,3$ . Namely, if $(x,z)\in \mathcal{C}$ , there exists $y\in K^2$ such that $(x,y)\in \mathcal{C}_1$ and $(y,z)\in \mathcal{C}_2$ , and moreover

(2.12) \begin{equation} \sup _{t\in [0,T]}|L_t^1(x)-L_t^3(z)|\le \sup _{t\in [0,T]} |L_t^1(x)-L_t^2(y)|+\sup _{t\in [0,T]} |L_t^2(y)-L_t^3(z)|. \end{equation}

Putting (2.9), (2.10), (2.11) and (2.12) together gives

\begin{equation*} d_{\mathbb {K}}(\mathcal {K}^1,\mathcal {K}^3)\le \delta _1+\delta _2, \end{equation*}

and the triangle inequality follows. Thus, $(\mathbb{K},d_{\mathbb{K}})$ forms a metric space.

To finish the proof, we need to show that $(\mathbb{K},d_{\mathbb{K}})$ is separable. Let $(K,\pi,X,L)$ be an element of $\mathbb{K}$ . First, let $K^n$ be a finite $n^{-1}$ -net of $K$ , which exists since $K$ is compact. Furthermore, we can endow $K^n$ with a metric $d_{K^n}$ , such that $d_{K^n}(x,y)\in \mathbb{Q}$ and $|d_{K^n}(x,y)-d_K(x,y)|\le n^{-1}$ , for every $x,y \in K^n$ . Since $K^n$ is a finite $n^{-1}$ -net of $K$ , we can choose a partition $(A_x)_{x\in K^n}$ of $K$ , such that $x\in A_x$ and $\textrm{diam}_{K}(A_x)\le 2 n^{-1}$ . We can even choose the partition in such a way that $A_x$ is measurable for every $x\in K^n$ (see for example the definition of $(A_i)_{i=1}^{N_{\varepsilon }}$ after (2.1)). Next, we construct a Borel probability measure $\pi ^n$ on $K^n$ that assigns rational mass to each point, i.e. $\pi ^n(\{x\})\in \mathbb{Q}$ and $|\pi ^n(\{x\})-\pi (A_x)|\le n^{-1}$ . Define $\varepsilon _n$ by

\begin{equation*} \varepsilon _n\;:\!=\;\sup _{\substack {s,t\in [0,T]:\\ |s-t|\le n^{-1}}} \sup _{\substack {x,x'\in K:\\ d_{K}(x,x')\le n^{-1}}} |L_s(x)-L_t(x')|. \end{equation*}

By the joint continuity of $L$ , $\varepsilon _n\rightarrow 0$ , as $n\rightarrow \infty$ . Let $0=s_0\lt s_1\lt \cdots \lt s_r=T$ be a set of rational times such that $|s_{i+1}-s_i|\le n^{-1}$ , for $i=0,\ldots,r-1$ . Choose $L_{s_i}^n(x)\in \mathbb{Q}$ with $|L_{s_i}^n(x)-L_{s_i}(x)|\le n^{-1}$ , for every $x\in K^n$ . We interpolate linearly between this finite collection of rational time points in order to define $L^n$ on the whole domain $K^n\times [0,T]$ . Let $\mathcal{C}^n\;:\!=\;\{(x,x')\in K\times K^n\;:\;d_K(x,x')\le n^{-1}\}$ . Clearly $\mathcal{C}^n$ defines a correspondence between $K$ and $K^n$ . Let $(x,x')\in \mathcal{C}^n$ and $s\in [s_i,s_{i+1}]$ , for some $i=0,\ldots,r-1$ . Then, using the triangle inequality we observe that

(2.13) \begin{equation} |L_s^n(x)-L_s(x')|\le |L_s^n(x)-L_s(x)|+|L_s(x)-L_s(x')|\le |L_s^n(x)-L_s(x)|+\varepsilon _n. \end{equation}

Since we interpolated linearly between the rational time points to define $L^n$ on the whole of $K^n\times [0,T]$ , we have that

(2.14) \begin{equation} |L_s^n(x)-L_s(x)|\le |L_{s_{i+1}}^n(x)-L_s(x)|+|L_{s_i}^n(x)-L_s(x)|. \end{equation}

Applying the triangle inequality again yields

\begin{align*} |L_{s_i}^n(x)-L_s(x)|&\le |L_{s_i}^n(x)-L_{s_i}(x)|+|L_{s_i}(x)-L_s(x)| \\[5pt] &\le n^{-1}+\varepsilon _n. \end{align*}

The same upper bound applies for $|L_{s_{i+1}}^n(x)-L_s(x)|$ , and from (2.13) and (2.14) we conclude that for $(x,x')\in \mathcal{C}^n$ and $s\in [s_i,s_{i+1}]$ , for some $i=0,\ldots,r-1$ ,

\begin{equation*} |L_s^n(x)-L_s(x')|\le 2 n^{-1}+3 \varepsilon _n. \end{equation*}

For $X\in D([0,T],K)$ and $A\subseteq [0,T]$ put

\begin{equation*} w(X;\;A)\;:\!=\;\sup _{s,t\in A} d_K(X_t,X_s). \end{equation*}

Now, for $\delta \in (0,1)$ , define the càdlàg modulus to be

\begin{equation*} w'(X;\;\delta )\;:\!=\;\inf _{\Sigma } \max _{1\le i\le k} w(X;\;[t_{i-1},t_i)), \end{equation*}

where the infimum is taken over all partitions $\Sigma =\{0=t_0\lt t_1\lt \cdots \lt t_k=T\}$ , $k\in \mathbb{N}$ , with $\min _{1\le i\le k}(t_i-t_{i-1})\gt \delta$ . A function lies in $D([0,T],K)$ if and only if $w'(X;\;\delta )\to 0$ , as $\delta \to 0$ . For $n\in \mathbb{N}$ , let $B_n$ be the set of functions taking a constant value in $K^n$ on each interval $[(u-1)T/n,uT/n)$ , $u=1,\ldots,n$ , as well as a value in $K^n$ at time $T$ . Take $B=\cup _{n\ge 1} B_n$ , and observe that it is countable. Clearly, putting $z=(z_u)_{u=0}^{n}$ , with $z_u=uT/n$ , for every $u=0,\ldots,n$ , gives $0=z_0\lt z_1\lt \cdots \lt z_n=T$ . Let $T_z\;:\;D([0,T],K)\to D([0,T],K)$ be the map defined as follows. For $X\in D([0,T],K)$ , let $T_zX$ take the constant value $X(z_{u-1})$ on the interval $[z_{u-1},z_u)$ , for $1\le u\le n$ , and the value $X(T)$ at $t=T$ . From an adaptation of [Reference Billingsley11, Lemma 3, p.127] to càdlàg paths taking values in metric spaces, we have that

(2.15) \begin{equation} d_{J_1}(T_zX,X)\le T n^{-1}+w'(X;\;T n^{-1}). \end{equation}

Also, there exists $X^n\in B_n$ , for which

(2.16) \begin{equation} d_{J_1}(T_zX,X^n)\le T n^{-1}. \end{equation}

Combining (2.15) and (2.16), we have that

\begin{equation*} d_{J_1}(X^n,X)\le d_{J_1}(X^n,T_zX)+d_{J_1}(T_zX,X)\le 2 T n^{-1}+w'(X;\;T n^{-1}). \end{equation*}

With the choice of the sequence $(K^n,\pi ^n,X^n,L^n)$ , we find that

\begin{equation*} d_{\mathbb {K}}((K^n,\pi ^n,X^n,L^n),(K,\pi,X,L))\le (4 +2 T) n^{-1}+3 \varepsilon _n+w'(X;\;T n^{-1}). \end{equation*}

Recalling that $w'(X;\;T n^{-1})\to 0$ , as $n\to \infty$ , and noting that our sequence was drawn from a countable subset of $\mathbb{K}$ completes the proof of the proposition.
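As an aside, the discretization map $T_z$ used in the proof above is elementary to implement. The following sketch (with illustrative names, not taken from the original text; a path is represented simply as a function on $[0,T]$ ) evaluates $T_zX$ on the grid $z_u=uT/n$ :

```python
def discretize(X, T, n):
    """Return T_z X for the grid z_u = u*T/n: a cadlag step function taking
    the constant value X(z_{u-1}) on [z_{u-1}, z_u) and the value X(T) at T."""
    def TzX(t):
        if t >= T:
            return X(T)
        u = int(t * n / T)        # index u with z_u <= t < z_{u+1}
        return X(u * T / n)       # value at the left endpoint of the interval
    return TzX

# Example: X(t) = t^2 on [0, 1], discretized on a grid of mesh 1/4.
X = lambda t: t * t
TzX = discretize(X, T=1.0, n=4)
print(TzX(0.3))   # X(0.25) = 0.0625
print(TzX(1.0))   # X(1.0) = 1.0
```

By construction $T_zX$ belongs to the corresponding set of step functions as soon as its values are additionally snapped to the net $K^n$ , which is what produces the element $X^n\in B_n$ in (2.16).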

Fix $T\gt 0$ . Let $\tilde{\mathbb{K}}$ be the space of quadruples of the form $(K,\pi ^K,X^K,L^K)$ , where $K$ is a non-empty compact metric space with a distinguished point $\rho$ , $\pi ^K$ is a Borel probability measure on $K$ , $X^K=(X^K_t)_{t\in [0,T]}$ is a càdlàg path on $K$ and $L^K=(L_t(x))_{x\in K,t\in [0,T]}$ is a jointly continuous positive real-valued function of $(t,x)$ . We say that two elements of $\tilde{\mathbb{K}}$ , say $(K,\pi ^K,X^K,L^K)$ and $(K',\pi ^{K'},X^{K'},L^{K'})$ , are equivalent if and only if there is a root-preserving isometry $f\;:\;K\to K'$ , i.e. one with $f(\rho )=\rho '$ , such that $\pi ^K\circ f^{-1}=\pi ^{K'}$ , $f\circ X^K=X^{K'}$ and $L_t^{K'}\circ f=L_t^K$ , for every $t\in [0,T]$ . It is possible to define a metric $d_{\tilde{\mathbb{K}}}$ on the equivalence classes of $\tilde{\mathbb{K}}$ by imposing in the definition of $d_{\mathbb{K}}$ that the infimum is taken over all correspondences that contain $(\rho,\rho ')$ . The incorporation of distinguished points into the extended Gromov-Hausdorff topology leaves the proof of Proposition 2.1 unchanged, and it is possible to show that $(\tilde{\mathbb{K}},d_{\tilde{\mathbb{K}}})$ is a separable metric space.

The aim of the following lemmas is to establish a sufficient condition for Assumption 1 to hold, as well as to show that if Assumption 1 holds then we can isometrically embed the rescaled graphs, measures, random walks and local times into a random common metric space such that they all converge to the relevant objects. Even when $G^n$ and $K$ are deterministic, the common metric space will depend on the realizations of $X^n$ and $X$ . To be more precise we formulate this last statement in the next lemma.

Lemma 2.2. If Assumption 1 is satisfied, then we can find isometric embeddings of $(V(G^n),\alpha (n) d_{G^n})_{n\ge 1}$ and $(K,d_K)$ into a common metric space $(F,d_F)$ such that

(2.17) \begin{equation} \lim _{n\rightarrow \infty } d_H^F(V(G^n),K)=0, \qquad \lim _{n\to \infty } d_F(\rho ^n,\rho )=0, \end{equation}

where $d_H^F$ is the standard Hausdorff distance between $V(G^n)$ and $K$ , regarded as subsets of $(F,d_F)$ ,

(2.18) \begin{equation} \lim _{n\rightarrow \infty } d_P^F(\pi ^n,\pi )=0, \end{equation}

where $d_P^F$ is the standard Prokhorov distance between Borel probability measures on $(F,d_F)$ ,

(2.19) \begin{equation} \lim _{n\rightarrow \infty } d_{J_1}^F(\alpha (n) X^n,X)=0, \end{equation}

where $d_{J_1}^F$ is the Skorohod $J_1$ metric on $D([0,T],F)$ , the space of càdlàg paths in $(F,d_F)$ . Also,

(2.20) \begin{equation} \lim _{\delta \rightarrow 0} \limsup _{n\rightarrow \infty } \sup _{\substack{x^n\in V(G^n),x\in K: \\ d_F(x^n,x)\lt \delta }} \sup _{t\in [0,T]} |\alpha (n) L^n_{\beta (n) t}(x^n)-L_t(x)|=0. \end{equation}

For simplicity we have identified the measures and the random walks in $V(G^n)$ with their isometric embeddings in $(F,d_F)$ .

Proof. Since Assumption 1 holds, for each $n\ge 1$ we can find metric spaces $(F_n,d_n)$ , isometric embeddings $\phi _n\;:\;V(G^n)\rightarrow F_n$ , $\phi _n'\;:\;K\rightarrow F_n$ and correspondences $\mathcal{C}^n$ (that contain $(\rho ^n,\rho )$ ) between $V(G^n)$ and $K$ such that (identifying the relevant objects with their embeddings)

(2.21) \begin{align} d_{P}^{F_n}(\pi ^n,\pi )+d_{J_1}^{F_n}(\alpha (n) X^n,X)+\sup _{(x,x')\in \mathcal{C}^n}\bigg (d_n(x,x')+\sup _{t\in [0,T]} |\alpha (n) L^n_{\beta (n) t}(x)-L_t(x')|\bigg )\le \varepsilon _n, \end{align}

where $\varepsilon _n\rightarrow 0$ , as $n\rightarrow \infty$ . Now, let $F=\sqcup _{n\ge 1} F_n$ be the disjoint union of the spaces $F_n$ , and define a distance on $F$ as follows. Let $d_{F|F_n\times F_n}=d_{n}$ , for $n\ge 1$ , and, for $x\in F_n$ , $x'\in F_{n'}$ with $n\neq n'$ , set

\begin{equation*} d_F(x,x')\;:\!=\;\inf _{y\in K}\{d_n(x,y)+d_{n'}(y,x')\}. \end{equation*}

Like the distance defined in the proof of the triangle inequality in Proposition 2.1, this distance is symmetric and non-negative, so identifying points at zero distance turns $(F,d_F)$ into a metric space, which comes with natural isometric embeddings of $(V(G^n),\alpha (n) d_{G^n})_{n\ge 1}$ and $(K,d_K)$ . In this setting, under the appropriate isometric embeddings, (2.17), (2.18) and (2.19) readily follow from (2.21). Thus, it only remains to prove (2.20). For every $x\in V(G^n)$ , since $\mathcal{C}^n$ is a correspondence between $V(G^n)$ and $K$ , there exists an $x'\in K$ such that $(x,x')\in \mathcal{C}^n$ . Then, (2.21) implies that $d_F(x,x')\le \varepsilon _n$ . Now, let $(y,y')\in \mathcal{C}^n$ , $(z,z')\in \mathcal{C}^n$ and note that

\begin{align*} &\sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(y)-L_{\beta (n) t}^n(z)|\\[5pt] &\le \sup _{t\in [0,T]} |\alpha (n) L_{\beta (n) t}^n(y)-L_t(y')|+\sup _{t\in [0,T]} |\alpha (n) L_{\beta (n) t}^n(z)-L_t(z')|+\sup _{t\in [0,T]} |L_t(y')-L_t(z')|\\[5pt] &\le 2 \varepsilon _n+\sup _{t\in [0,T]} |L_t(y')-L_t(z')|. \end{align*}

For any $\delta \gt 0$ and $y, z\in V(G^n)$ , such that $\alpha (n) d_{G^n}(y,z)\lt \delta$ , we have that

\begin{equation*} d_K(y',z')\le d_F(y,y')+d_F(z,z')+\alpha (n) d_{G^n}(y,z)\lt 2 \varepsilon _n+\delta . \end{equation*}

Therefore,

(2.22) \begin{align} &\sup _{\substack{y,z\in V(G^n): \\ \alpha (n) d_{G^n}(y,z)\lt \delta }} \sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(y)-L_{\beta (n) t}^n(z)|\nonumber \\[5pt] &\le 2 \varepsilon _n+\sup _{\substack{y,z\in K: \\ d_K(y,z)\lt 2 \varepsilon _n+\delta }} \sup _{t\in [0,T]} |L_t(y)-L_t(z)|. \end{align}

Also, for every $x\in K$ there exists an $x'\in V(G^n)$ such that $(x',x)\in \mathcal{C}^n$ , and furthermore $d_F(x',x)\le \varepsilon _n$ . Let $x^n\in V(G^n)$ such that $d_F(x^n,x)\lt \delta$ . Then,

\begin{equation*} d_F(x^n,x')\le d_F(x^n,x)+d_F(x',x)\lt 2 \varepsilon _n+\delta . \end{equation*}

More generally, if we denote by $B_F(x,r)$ the open ball of radius $r$ centred at $x$ , we have the inclusion

\begin{equation*} B_F(x,\delta )\cap V(G^n)\subseteq B_F(x',2 \varepsilon _n+\delta )\cap V(G^n). \end{equation*}

For $x\in K$ , and $x'\in V(G^n)$ with $d_F(x',x)\le \varepsilon _n$ , using (2.21), we deduce

\begin{align*} &\sup _{t\in [0,T]} |\alpha (n) L_{\beta (n) t}^n(x^n)-L_t(x)|\\[5pt] &\le \sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(x^n)-L_{\beta (n) t}^n(x')|+\sup _{t\in [0,T]} |\alpha (n) L_{\beta (n) t}^n(x')-L_t(x)|\\[5pt] &\le \varepsilon _n+\sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(x^n)-L_{\beta (n) t}^n(x')|. \end{align*}

Since $x^n \in B_F(x',2 \varepsilon _n+\delta )\cap V(G^n)$ , taking the supremum over all $x^n\in V(G^n)$ and $x\in K$ , for which $d_F(x^n,x)\lt \delta$ and using (2.22), we deduce

\begin{align*} &\sup _{\substack{x^n\in V(G^n),x\in K: \\ d_F(x^n,x)\lt \delta }} \sup _{t\in [0,T]} |\alpha (n) L_{\beta (n) t}^n(x^n)-L_t(x)|\\[5pt] &\le \varepsilon _n+\sup _{\substack{y,z\in V(G^n): \\ \alpha (n) d_{G^n}(y,z)\lt 2 \varepsilon _n+\delta }} \sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(y)-L_{\beta (n) t}^n(z)|\\[5pt] &\le 3 \varepsilon _n+\sup _{\substack{y,z\in K: \\ d_K(y,z)\lt 4 \varepsilon _n+\delta }} \sup _{t\in [0,T]} |L_t(y)-L_t(z)|. \end{align*}

Using the continuity of $L$ and the fact that $\varepsilon _n\rightarrow 0$ , as $n\rightarrow \infty$ , we obtain

(2.23) \begin{equation} \limsup _{n\rightarrow \infty } \sup _{\substack{x^n\in V(G^n),x\in K: \\ d_F(x^n,x)\lt \delta }} \sup _{t\in [0,T]} |\alpha (n) L_{\beta (n) t}^n(x^n)-L_t(x)|\le \sup _{\substack{y,z\in K: \\ d_K(y,z)\le \delta }} \sup _{t\in [0,T]} |L_t(y)-L_t(z)|. \end{equation}

Appealing to the continuity of $L$ once more, the right-hand side converges to 0, as $\delta \rightarrow 0$ . Thus, (2.20) holds, which finishes the proof of Lemma 2.2.

In the process of proving (2.20) we established a useful equicontinuity property. We state and prove this property in the next corollary.

Corollary 2.3. Fix $T\gt 0$ and suppose that Assumption 1 holds. Then,

(2.24) \begin{equation} \lim _{\delta \rightarrow 0} \limsup _{n\rightarrow \infty } \sup _{\substack{y,z\in V(G^n): \\ \alpha (n) d_{G^n}(y,z)\lt \delta }} \sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(y)-L_{\beta (n) t}^n(z)|=0. \end{equation}

Proof. As noted when deriving (2.23), using the continuity of $L$ ,

\begin{equation*} \limsup _{n\rightarrow \infty } \sup _{\substack {y,z\in V(G^n): \\ \alpha (n) d_{G^n}(y,z)\lt \delta }} \sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(y)-L_{\beta (n) t}^n(z)|\le \sup _{\substack {y,z\in K: \\ d_K(y,z)\le \delta }} \sup _{t\in [0,T]} |L_t(y)-L_t(z)|. \end{equation*}

Sending $\delta \rightarrow 0$ gives the desired result.

Next, we prove a converse of Lemma 2.2: if (2.17)-(2.20) hold, then Assumption 1 also holds.

Lemma 2.4. Suppose that (2.17)-(2.20) hold. Then so does Assumption 1.

Proof. There exist isometric embeddings of $(V(G^n),\alpha (n) d_{G^n})_{n\ge 1}$ and $(K,d_K)$ into a common metric space $(F,d_F)$ , under which the assumptions (2.17)-(2.20) hold. Since (2.17) gives the convergence of spaces under the Hausdorff metric, (2.18) gives the convergence of measures under the Prokhorov metric and (2.19) gives the convergence of paths under $d_{J_1}$ , it only remains to check the uniform convergence of local times. Set $\delta _n\;:\!=\;n^{-1}+d_H^F(V(G^n),K)$ and let $\mathcal{C}^n$ be the set of all pairs $(x,x')\in K\times V(G^n)$ , for which $d_F(x,x')\le \delta _n$ . By the definition of the Hausdorff distance, $\mathcal{C}^n$ is a correspondence for every $n\ge 1$ , and (2.17) ensures that $\delta _n\to 0$ , as $n\to \infty$ . Then, for $(x,x')\in \mathcal{C}^n$

\begin{equation*} \sup _{t\in [0,T]} |\alpha (n) L_{\beta (n) t}^n(x')-L_t(x)|\le \sup _{\substack {x^n\in V(G^n),x\in K: \\ d_F(x^n,x)\le \delta _n}} \sup _{t\in [0,T]} |\alpha (n) L^n_{\beta (n) t}(x^n)-L_t(x)|, \end{equation*}

and using (2.20) completes the proof.

3. Local time convergence

To check that Assumption 1 holds, we need to verify the convergence of the rescaled local times in (2.20), as suggested by Lemma 2.2. Due to work done in a more general framework in [Reference Croydon, Hambly and Kumagai20], we can weaken the local convergence statement of (2.20) and replace it by the equicontinuity condition of (2.24). In (1.3) we defined a resistance metric on a graph viewed as an electrical network. Next, we give the definition of a regular resistance form and its associated resistance metric for an arbitrary non-empty set, which is a combination of [Reference Croydon, Hambly and Kumagai20, Definition 2.1] and [Reference Croydon, Hambly and Kumagai20, Definition 2.2].

Definition 3.1. Let $K$ be a non-empty set. A pair $(\mathcal{E},\mathcal{K})$ is called a regular resistance form on $K$ if the following six conditions are satisfied.

  1. (i) $\mathcal{K}$ is a linear subspace of the collection of functions $\{f\;:\;K\to \mathbb{R}\}$ containing constants, and $\mathcal{E}$ is a non-negative symmetric quadratic form on $\mathcal{K}$ such that $\mathcal{E}(f,f)=0$ if and only if $f$ is constant on $K$ .

  2. (ii) Let $\sim$ be an equivalence relation on $\mathcal{K}$ defined by saying $f\sim g$ if and only if the difference $f-g$ is constant on $K$ . Then, $(\mathcal{K}/\sim,\mathcal{E})$ is a Hilbert space.

  3. (iii) If $x\neq y$ , there exists $f\in \mathcal{K}$ such that $f(x)\neq f(y)$ .

  4. (iv) For any $x,y\in K$ ,

    (3.1) \begin{equation} R(x,y)\;:\!=\;\sup \left \{\frac{|f(x)-f(y)|^2}{\mathcal{E}(f,f)}\;:\;f\in \mathcal{K}, \ \mathcal{E}(f,f)\gt 0\right \}\lt \infty . \end{equation}
  5. (v) For any $f\in \mathcal{K}$ , $\bar{f}\;:\!=\;(f\wedge 1)\vee 0\in \mathcal{K}$ , and $\mathcal{E}(\bar{f},\bar{f})\le \mathcal{E}(f,f)$ .

  6. (vi) $\mathcal{K}\cap C_0(K)$ is dense with respect to the supremum norm on $C_0(K)$ , where $C_0(K)$ denotes the space of compactly supported, continuous (with respect to $R$ ) functions on $K$ .

It is the first five conditions that have to be satisfied in order for the pair $(\mathcal{E},\mathcal{K})$ to define a resistance form. If in addition the sixth condition is satisfied then $(\mathcal{E},\mathcal{K})$ defines a regular resistance form.

Note that the fourth condition can be rewritten as $R(x,y)^{-1}=\inf \{\mathcal{E}(f,f)\;:\;f\in \mathcal{K},f(x)=0, f(y)=1\}$ , and it can be proven that $R$ is actually a metric on $K$ (see [Reference Kigami31, Proposition 3.3]). It also clearly resembles the effective resistance on $V(G)$ as defined in (1.3). More specifically, taking $\mathcal{K}\;:\!=\;\{f\;:\;V(G)\to \mathbb{R}\}$ and $\mathcal{E}_G$ as defined in (1.2), one can prove that the pair $(\mathcal{E}_G,\mathcal{K})$ satisfies the six conditions of Definition 3.1, and is therefore a regular resistance form on $V(G)$ with associated resistance metric given by (1.3). For a detailed proof of this fact see [Reference Fukushima, Oshima and Takeda26, Example 1.2.5]. Finally, in this setting, a regular resistance form paired with a suitable Borel measure gives rise to a regular Dirichlet form, and standard theory then yields the existence of an associated Hunt process $X=((X_t)_{t\ge 0},\mathbb{P}_x,x\in K)$ that is defined uniquely everywhere (see [Reference Fukushima, Oshima and Takeda26, Theorem 7.2.1] and [Reference Kigami31, Theorem 9.9]).
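Definition 3.1(iv) and the discrete effective resistance (1.3) can be connected through the standard linear-algebraic identity $R(x,y)=(e_x-e_y)^{\top}L^{+}(e_x-e_y)$ , where $L$ is the weighted graph Laplacian and $L^{+}$ its Moore-Penrose pseudoinverse. The following snippet, included purely as an illustration (the toy networks and function names are ours, not the paper's), computes the resistance metric of two small networks:

```python
import numpy as np

def effective_resistance(cond):
    """Resistance metric of a finite connected network, given a symmetric
    matrix of edge conductances, via the Laplacian pseudoinverse."""
    L = np.diag(cond.sum(axis=1)) - cond   # weighted graph Laplacian
    Lp = np.linalg.pinv(L)                 # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    # R(x, y) = Lp[x, x] + Lp[y, y] - Lp[x, y] - Lp[y, x]
    return d[:, None] + d[None, :] - Lp - Lp.T

# Path 0-1-2 with unit conductances: unit resistors in series add up.
path = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(effective_resistance(path)[0, 2])   # 2.0

# Triangle with unit conductances: 1 Ohm in parallel with 2 Ohms.
tri = np.ones((3, 3)) - np.eye(3)
print(effective_resistance(tri)[0, 1])    # 2/3
```

For the path, the two unit resistors in series give $R(0,2)=2$ ; for the triangle, one unit resistor in parallel with two in series gives $R(0,1)=2/3$ .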

Suppose that the discrete state spaces $(V(G^n))_{n\ge 1}$ are equipped with resistances $(R_{G^n})_{n\ge 1}$ as defined in (1.3) and that the limiting non-empty metric space $K$ , that appears in Assumption 1, is equipped with a resistance metric $R$ as in Definition 3.1, such that

  • $(K,R)$ is compact,

  • $\pi$ is a Borel probability measure of full support on $(K,R)$ ,

  • $X=((X_t)_{t\ge 0},\mathbb{P}_x,x\in K)$ admits jointly continuous (with respect to $R$ ) local times $L=(L_t(y))_{y\in K,t\ge 0}$ .

In the following additional assumption we input the information encoded in the first three conclusions of Lemma 2.2, adapted to the present probabilistic setting. For simplicity, as before, we identify the various objects with their embeddings.

Assumption 2. Fix $T\gt 0$ . Let $(G^n)_{n\ge 1}$ be a sequence of finite connected graphs that have at least two vertices, for which there exist sequences of real numbers $(\alpha (n))_{n\ge 1}$ and $(\beta (n))_{n\ge 1}$ , such that

\begin{equation*} \left (\left (V(G^n),\alpha (n) R_{G^n},\rho ^n\right ),\pi ^n,\left (\alpha (n) X^n_{\beta (n) t}\right )_{t\in [0,T]}\right )\longrightarrow \left (\left (K,R,\rho \right ),\pi,X\right ) \end{equation*}

in distribution with respect to the extended pointed Gromov-Hausdorff topology, where $\rho ^n\in V(G^n)$ and $\rho \in K$ are distinguished points at which $X^n$ and $X$ start respectively. Furthermore, suppose that for every $\varepsilon \gt 0$ and $T\gt 0$ ,

(3.2) \begin{equation} \lim _{\delta \to 0} \limsup _{n\to \infty } \mathbb{P}_{\rho ^n}^n\left (\sup _{\substack{y,z\in V(G^n): \\ \alpha (n) R_{G^n}(y,z)\lt \delta }} \sup _{t\in [0,T]} \alpha (n) |L_{\beta (n) t}^n(y)-L_{\beta (n) t}^n(z)|\ge \varepsilon \right )=0. \end{equation}

It is Assumption 2 that we have to verify in the examples of random graphs we will consider later. As we prove below in the last lemma of this subsection, if Assumption 2 holds, then the finite-dimensional local times converge in distribution (see (2.20) in Lemma 2.2), which is one of the conditions for Assumption 1 to be satisfied. Given that $(V(G^n),\alpha (n) R_{G^n})_{n\ge 1}$ and $(K,R)$ can be isometrically embedded into a common metric space $(F,d_F)$ such that the law of $X^n$ under $\mathbb{P}_{\rho ^n}^n$ converges weakly to the law of $X$ under $\mathbb{P}_{\rho }$ on $D([0,T],F)$ (see Lemma 2.2), we can couple $X^n$ started from $\rho ^n$ and $X$ started from $\rho$ on a common probability space such that $(\alpha (n) X^n_{\beta (n) t})_{t\in [0,T]}\to (X_t)_{t\in [0,T]}$ in $D([0,T],F)$ , almost surely. Denote by $\mathbb{P}$ the joint probability measure under which the convergence above holds. Proving the convergence of finite-dimensional distributions of local times is then an application of three lemmas that appear in [Reference Croydon, Hambly and Kumagai20], which we summarize below. For every $x\in F$ and $\delta \gt 0$ , introduce the function $f_{\delta,x}(y)\;:\!=\;\max \{0,\delta -d_F(x,y)\}$ .

Lemma 3.2. (Croydon, Hambly, Kumagai [Reference Croydon, Hambly and Kumagai20]). Under Assumption 2,

  1. (i) $\mathbb{P}$ -a.s., for each $x\in K$ and $T\gt 0$ , as $\delta \to 0$ ,

    \begin{equation*} \sup _{t\in [0,T]} \left |\frac {\int _{0}^{t} f_{\delta,x}(X_s) ds}{\int _K f_{\delta,x}(y) \pi (dy)} -L_t(x) \right |\to 0. \end{equation*}
  2. (ii) $\mathbb{P}$ -a.s., for each $x\in K$ , $T\gt 0$ and $\delta \gt 0$ , as $n\to \infty$ ,

    \begin{equation*} \sup _{t\in [0,T]} \left |\frac {\int _{0}^{t} f_{\delta,x}(X_s) ds}{\int _K f_{\delta,x}(y) \pi (dy)} -\frac {\int _{0}^{t} f_{\delta,x}(\alpha (n) X_{\beta (n) s}^n) ds}{\int _{V(G^n)} f_{\delta,x}(y) \pi ^n(dy)} \right |\to 0. \end{equation*}
  3. (iii) For each $x\in K$ and $T\gt 0$ , if $x^n\in V(G^n)$ is such that $d_F(x^n,x)\to 0$ , as $n\to \infty$ , then

    \begin{equation*} \lim _{\delta \to 0} \limsup _{n\to \infty } \mathbb {P}\left (\sup _{t\in [0,T]} \left |\frac {\int _{0}^{t} f_{\delta,x}(\alpha (n) X_{\beta (n) s}^n) ds}{\int _{V(G^n)} f_{\delta,x}(y) \pi ^n(dy)}-\alpha (n) L_{\beta (n) t}^n(x^n)\right |\gt \varepsilon \right )=0. \end{equation*}

By applying the conclusions of Lemma 3.2, one deduces that for any $x\in K$ and $T\gt 0$ , if $x^n\in V(G^n)$ is such that $d_F(x^n,x)\to 0$ , as $n\to \infty$ , then $(\alpha (n) L^n_{\beta (n) t}(x^n))_{t\in [0,T]}\to (L_t(x))_{t\in [0,T]}$ in $\mathbb{P}$ -probability in $C([0,T],\mathbb{R})$ . This result extends to finite collections of points, which is enough to establish the convergence of finite-dimensional distributions of local times.
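To see why the mollified occupation density of Lemma 3.2 is the natural discrete analogue of a local time, note that on a finite vertex set, once $\delta$ is smaller than the minimum spacing between vertices, $f_{\delta,x}$ is supported on $\{x\}$ alone, and the ratio in part (i) collapses to the occupation time at $x$ divided by $\pi (\{x\})$ . A deterministic sketch of this reduction (the toy trajectory and all names are illustrative, not from the original text):

```python
def mollified_local_time(times, states, t, x, pi, delta, dist):
    """Ratio from Lemma 3.2 for a piecewise-constant trajectory:
    (integral of f_{delta,x}(X_s) ds over [0,t]) / (integral of f_{delta,x} d pi).
    `times`/`states` encode a step path: X_s = states[i] on [times[i], times[i+1])."""
    f = lambda y: max(0.0, delta - dist(x, y))
    numer = 0.0
    for i, s in enumerate(states):
        a, b = times[i], times[i + 1]
        numer += f(s) * max(0.0, min(b, t) - a)   # time spent at state s before t
    denom = sum(f(y) * pi[y] for y in pi)
    return numer / denom

# Three vertices 0, 1, 2 on a line, uniform measure, delta below the spacing 1.
pi = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}
dist = lambda a, b: abs(a - b)
times, states = [0.0, 1.2, 2.0, 3.0], [0, 1, 2]   # 1.2 time units at vertex 0
ratio = mollified_local_time(times, states, 3.0, 0, pi, 0.5, dist)
print(ratio)   # 3.6 = (time at vertex 0) / pi({0}) = 1.2 * 3
```

Since $f_{\delta,0}$ vanishes off vertex $0$ , the ratio is exactly the normalized occupation time, independently of the mollification parameter $\delta \in (0,1)$ .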

Lemma 3.3. Suppose that Assumption 2 holds, and that the finite collections $(x_i^n)_{i=1}^{k}$ in $V(G^n)$ , $n\ge 1$ , satisfy $d_F(x_i^n,x_i)\to 0$ , as $n\to \infty$ , for some $(x_i)_{i=1}^{k}$ in $K$ . Then,

(3.3) \begin{equation} (\alpha (n) L_{\beta (n) t}^n(x_i^n))_{i=1,\ldots,k,t\in [0,T]}\to (L_t(x_i))_{i=1,\ldots,k,t\in [0,T]}, \end{equation}

in distribution in $C([0,T],\mathbb{R}^k)$ .

4. Blanket time-scaling and distributional bounds

In this section, we show that under Assumption 1, and as a consequence of the local time convergence in Lemma 3.3, we are able to establish asymptotic bounds on the distribution of the blanket times of the graphs in the sequence. The argument for the cover time-scaling was first given in [Reference Croydon17, Corollary 7.3], restricted to the unweighted Sierpiński gasket graphs; it applies to any other model as long as the relevant assumptions are satisfied. First, let us check that the $\varepsilon$ -blanket time variable of $K$ as defined in (1.5) is well-defined.

Proposition 4.1. Fix $\varepsilon \in (0,1)$ . For every $x\in K$ , $\mathbb{P}_x$ -a.s. we have that $\tau _{\textrm{bl}}(\varepsilon )\in (0,\infty )$ .

Proof. Fix $x\neq y$ . There is a strictly positive $\mathbb{P}_x$ -probability that $L_t(x)\gt 0$ for $t$ large enough, which is a consequence of [Reference Marcus and Rosen36, Lemma 3.6]. From the joint continuity of $(L_t(z))_{z\in K,t\ge 0}$ , there exist $r\equiv r(x)$ , $\delta \equiv \delta (x)\gt 0$ and $t_{*}\equiv t_{*}(x)\lt \infty$ such that

(4.1) \begin{equation} \mathbb{P}_x\left (\inf _{z\in B(x,r)} L_{t_{*}}(z)\gt \delta \right )\gt 0. \end{equation}

Now, let $\tau _{x,y}(t_{*})\;:\!=\;\inf \{t\gt t_{*}+\tau _x\;:\;X_t=y\}$ , where $\tau _x\;:\!=\;\inf \{t\gt 0\;:\;X_t=x\}$ is the hitting time of $x$ . In other words, $\tau _{x,y}(t_{*})$ is the first hitting of $y$ by $X$ after time $t_{*}+\tau _x$ . The commute time identity for resistance forms, derived in the proof of [Reference Croydon, Hambly and Kumagai20, Lemma 2.9], yields

\begin{equation*} \mathbb {E}_z \tau _w\le \mathbb {E}_z \tau _w+\mathbb {E}_w \tau _z=R(z,w) \pi (K), \end{equation*}

for every $z,w\in K$ , which in turn implies that $\mathbb{E}_x \tau _y\lt \infty$ . Here $\pi (K)=1$ . Applying this observation about the finite first moments of hitting times, it is easy to check that $\tau _{x,y}(t_{*})\lt \infty$ , $\mathbb{P}_y$ -a.s., and also that

\begin{equation*} \mathbb {P}_y\left (\inf _{z\in B(x,r)} L_{\tau _{x,y}(t_{*})}(z)\gt \delta \right )\gt 0. \end{equation*}

The latter follows from an application of (4.1) and the Strong Markov property. To conclude that the former holds, it only remains to show that $\mathbb{E}_y \tau _{x,y}(t_{*})\lt \infty$ , which is immediate from the bound:

(4.2) \begin{align} \mathbb{E}_y \tau _{x,y}(t_{*})&\le t_*+\mathbb{E}_y \tau _x+\sup _{z\in K} \mathbb{E}_z \tau _y \nonumber \\[5pt] &\le t_*+\pi (K) R(x,y)+\pi (K) \sup _{z,w\in K} R(z,w) \le t_*+2 \sup _{z,w\in K} R(z,w)\lt \infty . \end{align}

The additivity of local times and the Strong Markov property implies that

(4.3) \begin{equation} \liminf _{t\to \infty } \inf _{z\in B(x,r)} \frac{L_t(z)}{t}\ge \liminf _{n\to \infty } \left (\sum _{i=1}^{n}\xi _i^1\right ) \left (\sum _{i=1}^{n} \xi _i^2\right )^{-1}, \end{equation}

where $(\xi _i^1)_{i\ge 1}$ are independent random variables distributed as $\inf _{z\in B(x,r)} L_{\tau _{x,y}(t_{*})}(z)$ and $(\xi _i^2)_{i\ge 1}$ are independent copies of $\tau _{x,y}(t_{*})$ . The strong law of large numbers along with (4.1) yields that the right-hand side of the inequality above converges to

\begin{equation*} \mathbb {E}_y \left [\inf _{z\in B(x,r)} L_{\tau _{x,y}(t_{*})}(z)\right ] (\mathbb {E}_y \tau _{x,y}(t_{*}))^{-1}, \end{equation*}

$\mathbb{P}_y$ -a.s. Using basic properties of the resolvents of killed processes of resistance forms (e.g. [Reference Croydon18, (6)-(8), p. 1945]), the way the stopping times are defined, and the upper bound in (4.2),

\begin{align*} \mathbb{E}_y \left [L_{\tau _{x,y}(t_{*})}(x)\right ] (\mathbb{E}_y \tau _{x,y}(t_{*}))^{-1} &\ge \mathbb{E}_x L_{\tau _y}(x) (\mathbb{E}_y \tau _{x,y}(t_{*}))^{-1} \\[5pt] &\ge \frac{R(x,y)}{t_*+R(x,y)+\sup _{z,w\in K} R(z,w)}, \end{align*}

and therefore, the joint continuity of local times allows us to deduce from (4.3) that

(4.4) \begin{equation} \liminf _{t\to \infty } \inf _{z\in B(x,r)} \frac{L_t(z)}{t}\ge \varepsilon _{*}(x), \end{equation}

$\mathbb{P}_y$ -a.s., for some $\varepsilon _*(x)\in (\varepsilon,1)$ . To extend this statement, which holds uniformly over $B(x,r)$ , $\mathbb{P}_y$ -a.s., to the whole of $K$ , we use the compactness of $K$ . Consider the open cover $(B(x,r(x)))_{x\in K}$ of $K$ , which admits a finite subcover $(B(x_i,r_i))_{i=1}^{N}$ . Since, for each $i$ , the corresponding right-hand side of (4.4) is greater than $\varepsilon$ , $\mathbb{P}_y$ -a.s., the result follows as

\begin{equation*} \liminf _{t\to \infty } \inf _{x\in K} \frac {L_t(x)}{t}\ge \min _{1\le i\le N} \liminf _{t\to \infty } \inf _{z\in B(x_i,r_i)} \frac {L_t(z)}{t}, \end{equation*}

which implies that there exists $t_0\lt \infty$ , not depending on $x$ , such that $L_{t_0}(x)\ge \varepsilon t_0$ , for every $x\in K$ , and recalling the definition of the $\varepsilon$ -blanket time variable of $K$ in (1.5), we deduce that $\tau _{\textrm{bl}}(\varepsilon )\le t_0\lt \infty$ .
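To illustrate the ergodic phenomenon underlying the proof above, the following sketch (not part of the argument; the choice of a 5-cycle, the step counts and the seed are all arbitrary) simulates a simple random walk on a cycle and tracks $\min _x L_t(x)/(\pi (x) t)$ , the quantity whose liminf exceeding $\varepsilon$ makes the $\varepsilon$ -blanket time finite:

```python
import random

def blanket_ratio(n_vertices, steps, seed=0):
    """Simple random walk on a cycle of odd length: return
    min_x L_t(x) / (pi(x) * t), where pi is the (uniform) stationary law."""
    rng = random.Random(seed)
    counts = [0] * n_vertices
    pos = 0
    for _ in range(steps):
        pos = (pos + rng.choice((-1, 1))) % n_vertices
        counts[pos] += 1
    expected = steps / n_vertices  # pi(x) * t with pi(x) = 1/n
    return min(c / expected for c in counts)

short = blanket_ratio(5, 1_000)
long_ = blanket_ratio(5, 200_000)
print(short, long_)  # the second ratio is close to 1 for large t
```

As $t$ grows, the minimum ratio approaches 1, so for any $\varepsilon \in (0,1)$ the $\varepsilon$ -blanket time of the graph is finite, in line with Proposition 4.1.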

4.1 Proof of Theorem 1.2

We are now ready to prove one of our main results.

Proof of Theorem 1.2. Let $\varepsilon \in (0,1)$ , $\delta \in (0,1)$ and $t\in [0,T]$ . Suppose that $t\lt{\tau }_{\textrm{bl}}(\varepsilon (1-\delta ))$ . Then, there exists a $y\in K$ for which $L_t(y)\lt \varepsilon (1-\delta ) t$ . Using the Skorohod representation theorem, we can suppose that the conclusions of Lemma 2.2 hold in an almost-surely sense. From (2.17), there exists $y^n\in V(G^n)$ such that, for $n$ large enough, $d_F(y^n,y)\lt 2 \varepsilon$ . Then, the local convergence at (2.20) implies that, for $n$ large enough

\begin{equation*} \alpha (n) L_{\beta (n) t}^n(y^n)\le L_t(y)+\varepsilon \delta t. \end{equation*}

Thus, for $n$ large enough, it follows that $\alpha (n) L_{\beta (n) t}^n(y^n)\le L_t(y)+\varepsilon \delta t\lt \varepsilon t$ . Using the time and space scaling identity, we deduce $m^{G^n} L^n_{\beta (n) t}(y^n)\lt \varepsilon \beta (n) t$ , for $n$ large enough, which in turn implies that $\beta (n) t\le \tau _{\textrm{bl}}^n(\varepsilon )$ , for $n$ large enough. As a consequence, we get that $\tau _{\textrm{bl}}(\varepsilon (1-\delta ))\le \liminf _{n\rightarrow \infty } \beta (n)^{-1} \tau _{\textrm{bl}}^n(\varepsilon )$ , which proves (1.6).

Assume now that $\tau _{\textrm{bl}}(\varepsilon (1+\delta ))\lt t$ . Then, for some $\tau _{\textrm{bl}}(\varepsilon (1+\delta ))\le t_0\lt t$ , it is the case that $L_{t_0}(y)\ge \varepsilon (1+\delta ) t_0$ , for every $y\in K$ . As in the previous paragraph, using the Skorohod representation theorem, we suppose that the conclusions of Lemma 2.2 hold almost-surely. From (2.17), for every $y^n\in V(G^n)$ , there exists a $y\in K$ such that, for $n$ large enough, $d_F(y^n,y)\lt 2 \varepsilon$ . From the local convergence statement at (2.20), we have that, for $n$ large enough

\begin{equation*} \alpha (n) L_{\beta (n) t_0}^n(y^n)\ge L_{t_0}(y)-\varepsilon \delta t_0. \end{equation*}

Therefore, for $n$ large enough, it follows that $\alpha (n) L^n_{\beta (n) t_0}(y^n)\ge L_{t_0}(y)-\varepsilon \delta t_0\ge \varepsilon t_0$ , for every $y\in K$ . As before, using the time and space scaling identity yields $m^{G^n} L^n_{\beta (n) t_0}(y^n)\ge \varepsilon \beta (n) t_0$ , for every $y^n\in V(G^n)$ and large enough $n$ , which in turn implies that $\beta (n) t_0\ge \tau ^n_{\textrm{bl}}(\varepsilon )$ , for $n$ large enough. As a consequence, we get that $\limsup _{n\rightarrow \infty } \beta (n)^{-1} \tau ^n_{\textrm{bl}}(\varepsilon )\le \tau _{\textrm{bl}}(\varepsilon (1+\delta ))$ , from which (1.7) follows.

5. Examples

In this section we demonstrate that it is possible to apply our main results in a number of examples where the graphs and the limiting spaces are random. These examples include critical Galton-Watson trees and the critical Erdős-Rényi random graph, and we expect the method to apply to other examples. The aforementioned models of sequences of random graphs exhibit a mean-field behaviour at criticality, in the sense that the scaling exponents for the walks, and consequently for the local times, are a multiple of the volume and the diameter of the graphs. At the start of each subsection, we briefly survey some of the key features of the example that will be helpful when verifying Assumption 2. Our method for proving continuity of the blanket time of the limiting diffusion is generic, in the sense that it applies to any random metric measure space equipped with a $\sigma$ -finite measure that generates realizations of the space in such a way that rescaling the $\sigma$ -finite measure by a constant factor generates the same space with its metric and measure perturbed by a multiple of this constant factor. For that reason, we believe our results transfer easily to Galton-Watson trees with critical offspring distribution in the domain of attraction of a stable law with index $\alpha \in (1,2)$ (see [Reference Gall33, Theorem 4.3]) and to random stable looptrees (see [Reference Curien and Kortchemski21, Theorem 4.1]). We also hope our work will serve as a stepping stone towards the more delicate problem of establishing convergence in distribution of the rescaled cover times of the discrete-time walks in each application of our main result. See [Reference Croydon17, Remark 7.4] for a thorough discussion of the demanding nature of this project.

5.1 Critical Galton-Watson trees

We start by briefly describing the connection between critical Galton-Watson trees and the Brownian continuum random tree (CRT). Let $\xi$ be a mean 1 random variable, whose distribution is aperiodic (its support generates the lattice $\mathbb{Z}$ , not just a strict subgroup of $\mathbb{Z}$ ), with variance $0\lt \sigma _{\xi }^2\lt +\infty$ and exponential moments, i.e. there exists a constant $\lambda \gt 0$ such that $\mathbb{E}(e^{\lambda \xi })\lt \infty$ . Let $\mathcal{T}_n$ be a Galton-Watson tree with offspring distribution $\xi$ conditioned to have $n+1$ vertices, which is well-defined by the aperiodicity of the distribution of $\xi$ . Then, it is the case that

(5.1) \begin{equation} \left (V(\mathcal{T}_n),n^{-1/2} d_{\mathcal{T}_n}\right )\rightarrow \left (\mathcal{T}_e,\left (2/\sigma _{\xi }\right ) d_{\mathcal{T}_e}\right ), \end{equation}

in distribution with respect to the Gromov-Hausdorff distance between compact metric spaces, where $d_{\mathcal{T}_n}$ is the shortest path distance on the vertex set $V(\mathcal{T}_n)$ (see [Reference Aldous4] and [Reference Gall32]). To describe the limiting object in (5.1), let $(e_t)_{0\le t\le 1}$ denote the normalized Brownian excursion, which is informally a linear Brownian motion, started from zero, conditioned to remain positive on $(0,1)$ and to return to zero at time 1. We extend the definition of $(e_t)_{0\le t\le 1}$ by setting $e_t=0$ , if $t\gt 1$ . We define a distance $d_{\mathcal{T}_e}$ on $[0,1]$ by setting

(5.2) \begin{equation} d_{\mathcal{T}_e}(s,t)=e(s)+e(t)-2 \min _{r\in [s\wedge t,s\vee t]} e(r). \end{equation}

Introducing the equivalence relation $s\sim t$ if and only if $e(s)=e(t)=\min _{r\in [s\wedge t,s\vee t]} e(r)$ and defining $\mathcal{T}_e\;:\!=\;[0,1]/\sim$ it is possible to check that $(\mathcal{T}_e,d_{\mathcal{T}_e})$ is almost-surely a compact metric space. Moreover, $(\mathcal{T}_e,d_{\mathcal{T}_e})$ is a random real tree, called the CRT. For the notion of compact real trees coded by functions and a proof of the previous result see [Reference Gall33, Section 2]. There is a natural Borel probability measure upon $\mathcal{T}_e$ , $\pi ^e$ say, which is the image measure on $\mathcal{T}_e$ of the Lebesgue measure on $[0,1]$ by the canonical projection of $[0,1]$ onto $\mathcal{T}_e$ .
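For illustration only, the pseudo-metric (5.2) can be evaluated directly on a discretised excursion. The sketch below (where a tent function merely stands in for a realization of $e$ , and the grid size is arbitrary) computes $d_{\mathcal{T}_e}(s,t)$ and checks the identifications that define $\mathcal{T}_e$ :

```python
def tree_distance(exc, s, t):
    """d(s, t) = e(s) + e(t) - 2 * min_{r in [s ^ t, s v t]} e(r), for an
    excursion given by its values on the uniform grid k/N, 0 <= k <= N."""
    n = len(exc) - 1
    i, j = round(s * n), round(t * n)
    lo, hi = min(i, j), max(i, j)
    return exc[i] + exc[j] - 2 * min(exc[lo:hi + 1])

# a tent function standing in for a realization of the excursion e
N = 1000
exc = [min(k, N - k) / N for k in range(N + 1)]

assert tree_distance(exc, 0.2, 0.2) == 0       # same point of the tree
assert tree_distance(exc, 0.0, 1.0) == 0       # 0 ~ 1: the endpoints are glued
assert abs(tree_distance(exc, 0.2, 0.4) - 0.2) < 1e-9  # distance along a branch
print("pseudo-metric checks passed")
```

The second assertion reflects the equivalence relation $s\sim t$ : times with $e(s)=e(t)=\min _{r\in [s\wedge t,s\vee t]} e(r)$ project to the same point of the tree.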

Upon almost-every realization of the metric measure space $(\mathcal{T}_e,d_{\mathcal{T}_e},\pi ^e)$ , it is possible to define a corresponding Brownian motion $X^e$ . The way this can be done is described in [Reference Croydon13, Section 2.2]. Now if we denote by $\mathbb{P}_{\rho ^n}^{\mathcal{T}_n}$ the law of the simple random walk in $\mathcal{T}_n$ , started from a distinguished point $\rho ^n$ , and by $\pi ^n$ the stationary probability measure, then as it was shown in [Reference Croydon15] the scaling limit in (5.1) can be extended to the distributional convergence of

\begin{equation*} \bigg (\big (V(\mathcal {T}_n),n^{-1/2} d_{\mathcal {T}_n},\rho ^n\big ),\pi ^n,\mathbb {P}_{\rho ^n}^{\mathcal {T}_n}\big ((n^{-1/2} X^n_{\lfloor n^{3/2} t \rfloor })_{t\in [0,1]}\in \cdot \big )\bigg ) \end{equation*}

to $ ( (\mathcal{T}_e,d_{\mathcal{T}_e},\rho ),\pi ^e,\mathbb{P}_{\rho }^e )$ , where $\mathbb{P}_{\rho }^e$ is the law of $X^e$ , started from a distinguished point $\rho$ . This convergence, described in [Reference Croydon15], holds after embedding all the relevant objects nicely into a Banach space. We can reformulate this result in terms of the pointed extended Gromov-Hausdorff topology that incorporates distinguished points. Namely

(5.3) \begin{equation} \left (\left (V(\mathcal{T}_n),n^{-1/2} d_{\mathcal{T}_n},\rho ^n\right ),\pi ^n,\left (n^{-1/2} X^n_{\lfloor n^{3/2} t \rfloor }\right )_{t\in [0,1]}\right )\longrightarrow \left (\left (\mathcal{T}_e,d_{\mathcal{T}_e},\rho \right ),\pi ^e,\left (X^e_{(\sigma _{\xi }/2) t}\right )_{t\in [0,1]}\right ), \end{equation}

in distribution in an extended pointed Gromov-Hausdorff sense.

Next, we introduce the contour function of $\mathcal{T}_n$ . Informally, it encodes the trace of the motion of a particle that starts from the root at time $t=0$ and then explores the tree from left to right, moving continuously at unit speed along its edges. Formally, we define a function first for integer arguments as follows:

\begin{equation*} f(0)=\rho ^n. \end{equation*}

Given $f(i)=v$ , we define $f(i+1)$ to be, if possible, the leftmost child of $v$ that has not yet been visited. If every child of $v$ has already been visited, we let $f(i+1)$ be the parent of $v$ . Then, the contour function of $\mathcal{T}_n$ is defined as the distance of $f(i)$ from the root $\rho ^n$ , i.e.

\begin{equation*} V_n(i)\;:\!=\;d_{\mathcal {T}_n}(\rho ^n,f(i)), \qquad 0\le i\le 2 n. \end{equation*}

The function $V_n$ is only defined for integer arguments. To map intermediate values of $f$ into $V(\mathcal{T}_n)$ extend $f$ to $[0,2 n]$ by taking $f(t)$ to be $f(\lfloor t\rfloor )$ or $f(\lceil t\rceil )$ , whichever is further away from the root. The following theorem is due to Aldous.

Theorem 5.1. (Aldous [Reference Aldous4]). Let $v_n$ denote the normalized contour function of $\mathcal{T}_n$ , defined by

\begin{equation*} v_n(s)\;:\!=\;\frac {V_n(2 n s)}{\sqrt {n}}, \qquad 0\le s\le 1. \end{equation*}

Then, the following convergence holds in distribution in $C([0,1])$ :

\begin{equation*} v_n\xrightarrow {(d)} v\;:\!=\;\frac {2}{\sigma _{\xi }} e, \end{equation*}

where $e$ is a normalized Brownian excursion.
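The contour exploration described above amounts to a depth-first traversal that records the height after each edge crossing, so a tree with $n$ edges produces a contour with $2n+1$ grid values. A minimal illustrative implementation (the adjacency-list input format is our own choice, not the paper's):

```python
def contour_function(children, root=0):
    """Contour V_n(i), 0 <= i <= 2n, of a plane tree given as a dict mapping
    each vertex to its ordered (left-to-right) list of children."""
    heights = [0]
    def explore(v, h):
        for w in children.get(v, []):
            heights.append(h + 1)   # traverse the edge down to the child w
            explore(w, h + 1)
            heights.append(h)       # traverse the same edge back up
    explore(root, 0)
    return heights

# root 0 with children 1 and 2; vertex 1 with a single child 3 (n = 3 edges)
tree = {0: [1, 2], 1: [3]}
print(contour_function(tree))  # [0, 1, 2, 1, 0, 1, 0], of length 2n + 1
```

Each edge is crossed exactly twice (once down, once up), which is why the contour has length $2n$ and why $m(\mathcal{T}_n)=2n$ appears as the natural volume normalization later on.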

An essential tool in what follows will be a universal concentration estimate of the fluctuations of local times that holds uniformly over compact time intervals. Since $(\mathcal{T}_n)_{n\ge 1}$ is a collection of graph trees it follows that the shortest path distance $d_{\mathcal{T}_n}$ , $n\ge 1$ is identical to the resistance metric on the vertex set $V(\mathcal{T}_n)$ , $n\ge 1$ . For the statement of this result let

\begin{equation*} r(\mathcal {T}_n)\;:\!=\;\sup _{x,y\in V(\mathcal {T}_n)} d_{\mathcal {T}_n}(x,y) \end{equation*}

denote the diameter of $\mathcal{T}_n$ in the shortest path distance and $m(\mathcal{T}_n)$ denote the total mass of $\mathcal{T}_n$ . Here $m(\mathcal{T}_n)=2 n$ . Also, we introduce the rescaled shortest path distance $\tilde{d}_{\mathcal{T}_n}(x,y)\;:\!=\;r(\mathcal{T}_n)^{-1} d_{\mathcal{T}_n}(x,y)$ .

Theorem 5.2. (Croydon [Reference Croydon17]). For every $T\gt 0$ and any rooted tree $(V(\textbf{t}),d_{\textbf{t}},\rho ^{\textbf{t}})$ , there exist constants $c_1$ , $c_2\gt 0$ not depending on $\textbf{t}$ such that

(5.4) \begin{equation} \sup _{y,z\in V(\textbf{t})} \mathbb{P}_{\rho ^{\textbf{t}}}^{\textbf{t}} \left (r(\textbf{t})^{-1} \sup _{t\in [0,T]} \left |L_{r(\textbf{t}) m(\textbf{t}) t}^{\textbf{t}}(y)-L_{r(\textbf{t}) m(\textbf{t}) t}^{\textbf{t}}(z) \right |\ge \lambda \sqrt{\tilde{d}_{\textbf{t}}(y,z)}\right )\le c_1 e^{-c_2 \lambda } \end{equation}

for every $\lambda \ge 0$ . Moreover, the constants can be chosen in such a way that only $c_1$ depends on $T$ .

We remark here that the product $m(\mathcal{T}_n) r(\mathcal{T}_n)$ , that is the product of the volume and the diameter of the graph, which is also the maximal commute time of the random walk, gives the natural time-scaling for the various models of sequences of critical random graphs we are going to consider. The concentration estimate of Theorem 5.2 is a version of [Reference Croydon17, (V.3.28)] for graphs. The last ingredient of which we are going to make considerable use is the tightness of the sequence $||v_n||_{H_{\alpha }}$ of Hölder norms, for some $\alpha \gt 0$ . The proof of Theorem 5.3 is based on Kolmogorov’s continuity criterion (and an inspection of its proof to obtain uniformity in $n$ ). Indeed, the following result can be obtained for any $\alpha \in (0,1/2)$ .

Theorem 5.3. (Janson and Marckert [Reference Janson and Marckert29]). There exists $\alpha \in (0,1/2)$ such that for every $\varepsilon \gt 0$ there exists a finite real number $K_{\varepsilon }$ such that

(5.5) \begin{equation} \mathbb{P}\left (\sup _{0\le s\neq t\le 1} \frac{|v_n(s)-v_n(t)|}{|t-s|^{\alpha }}\le K_{\varepsilon }\right )\ge 1-\varepsilon, \end{equation}

uniformly in $n$ .

Remark 5.4. Building upon [Reference Gittenberger27], Janson and Marckert proved this precise estimate on the geometry of the trees when the offspring distribution has finite exponential moments. Relaxing this strong condition to only a finite variance assumption, the recent work of Marzouk and more specifically [Reference Marzouk37, Lemma 1] implies that Theorem 5.3 holds for the normalized height function of $\mathcal{T}_n$ , which constitutes an alternative encoding of the trees.

In this context, we make use of the full machinery provided by the theorems above in order to prove that the local times are equicontinuous with respect to the annealed law, which is formally defined for suitable events as

(5.6) \begin{equation} \mathbb{P}_{\rho ^n}(\!\cdot\!)\;:\!=\;\int \mathbb{P}_{\rho ^n}^{\mathcal{T}_n}(\!\cdot\!) P(d \mathcal{T}_n). \end{equation}

Proposition 5.5. For every $\varepsilon \gt 0$ and $T\gt 0$ ,

\begin{equation*} \lim _{\delta \rightarrow 0} \limsup _{n\to \infty } \mathbb {P}_{\rho ^n} \left (\sup _{\substack {y,z\in V(\mathcal {T}_n): \\ n^{-1/2} d_{\mathcal {T}_n}(y,z)\lt \delta }} \sup _{t\in [0,T]} n^{-1/2} |L_{n^{3/2} t}^n(y)-L_{n^{3/2} t}^n(z)|\ge \varepsilon \right )=0. \end{equation*}

Proof. Let us define, similarly to $d_{\mathcal{T}_e}$ , the distance $d_{v_n}$ on $[0,1]$ by setting

\begin{equation*} d_{v_n}(t_1,t_2)\;:\!=\;v_n(t_1)+v_n(t_2)-2 \min _{r\in [t_1\wedge t_2,t_1\vee t_2]} v_n(r). \end{equation*}

Using the terminology introduced to describe the CRT, $\mathcal{T}_n$ equipped with $n^{1/2} d_{v_n}$ , where $t_1$ and $t_2$ are identified if and only if

\begin{equation*} v_n(t_1)=v_n(t_2)=\min _{r\in [t_1\wedge t_2,t_1\vee t_2]} v_n(r), \end{equation*}

coincides with the tree coded by $n^{1/2} v_n$ . We denote by $p_{v_n}\;:\;[0,1]\rightarrow \mathcal{T}_n$ the canonical projection that maps every time point in $[0,1]$ to its equivalence class on $\mathcal{T}_n$ .

Given $t_1,t_2\in [0,1]$ , with $2 n t_1$ and $2 n t_2$ integers, such that $p_{v_n}(t_1)=y$ and $p_{v_n}(t_2)=z$ , let $u\in [t_1\wedge t_2, t_1\vee t_2]$ with $\min _{r\in [t_1\wedge t_2,t_1\vee t_2]} v_n(r)=v_n(u)$ . From Theorem 5.3, there exist $K\gt 0$ and $\alpha \gt 0$ , such that

(5.7) \begin{align} d_{v_n}(t_1,t_2)=(v_n(t_1)-v_n(u))+(v_n(t_2)-v_n(u))&\le K(|t_1-u|^{\alpha }+|u-t_2|^{\alpha }) \nonumber \\[5pt] &\le 2 K |t_1-t_2|^{\alpha } \end{align}

with probability arbitrarily close to 1, where the last inequality follows from the concavity of $t^{\alpha }$ . We condition on $v_n$ , assuming that it satisfies (5.7). The total length of the path between $y$ and $z$ , using (5.7), is

\begin{equation*} V_n(2 n t_1)+V_n(2 n t_2)-2 \min _{r\in [t_1\wedge t_2,t_1\vee t_2]} V_n(2 n r)=n^{1/2} d_{v_n}(t_1,t_2)\le 2 K n^{1/2} |t_1-t_2|^{\alpha }. \end{equation*}

Hence, by Theorem 5.2, if we denote by

\begin{equation*} ||L^n(x)||_{\infty,[0,T]}\;:\!=\;\sup _{t\in [0,T]} |L_t^n(x)|, \qquad x\in V(\mathcal {T}_n), \end{equation*}

the supremum norm of $L^n(x)\;:\;[0,T]\to \mathbb{R}_{+}$ , for any fixed $p\ge 2$ ,

\begin{align*} &\mathbb{E}_{\rho ^n}^{\mathcal{T}_n} \left |\left |r(\mathcal{T}_n)^{-1} \left (L_{r(\mathcal{T}_n) m(\mathcal{T}_n) \cdot }^n(y)-L_{r(\mathcal{T}_n) m(\mathcal{T}_n) \cdot }^n(z)\right )\right |\right |_{\infty,[0,T]}^p \\[5pt] =&\int _{0}^{\infty } \mathbb{P}_{\rho ^n}^{\mathcal{T}_n} \left (\sup _{t\in [0,T]} r(\mathcal{T}_n)^{-1} \left |L_{r(\mathcal{T}_n) m(\mathcal{T}_n) t}^n(y)-L_{r(\mathcal{T}_n) m(\mathcal{T}_n) t}^n(z) \right |\ge \varepsilon ^{1/p}\right ) d \varepsilon \\[5pt] \le &\; c_1 \int _{0}^{\infty } e^{-c_2 \frac{\varepsilon ^{1/p}}{\sqrt{r(\mathcal{T}_n)^{-1} d_{\mathcal{T}_n}(y,z)}}} d \varepsilon . \end{align*}

Changing variables via $\lambda ^{1/p}=\frac{\varepsilon ^{1/p}}{\sqrt{r(\mathcal{T}_n)^{-1} d_{\mathcal{T}_n}(y,z)}}$ yields

\begin{align*} &\int _{0}^{\infty } e^{-c_2 \frac{\varepsilon ^{1/p}}{\sqrt{r(\mathcal{T}_n)^{-1} d_{\mathcal{T}_n}(y,z)}}} d \varepsilon \\[5pt] &=(r(\mathcal{T}_n)^{-1} d_{\mathcal{T}_n}(y,z))^{p/2} \int _{0}^{\infty }e^{-c_2 \lambda ^{1/p}} d \lambda \le c_3 (r(\mathcal{T}_n)^{-1} d_{\mathcal{T}_n}(y,z))^{p/2}, \end{align*}

where $c_3$ is a constant depending only on $p$ . Therefore,

(5.8) \begin{equation} \mathbb{E}_{\rho ^n}^{\mathcal{T}_n} \left |\left |r(\mathcal{T}_n)^{-1} \left (L_{r(\mathcal{T}_n) m(\mathcal{T}_n) \cdot }^n(y)-L_{r(\mathcal{T}_n) m(\mathcal{T}_n) \cdot }^n(z)\right )\right |\right |_{\infty,[0,T]}^p \le c_3 (r(\mathcal{T}_n)^{-1} d_{\mathcal{T}_n}(y,z))^{p/2}. \end{equation}
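As a sanity check on the change of variables above, note that it evaluates in closed form: substituting $u=c_2 \lambda ^{1/p}$ gives $\int _0^{\infty } e^{-c_2 \lambda ^{1/p}} d\lambda =\Gamma (p+1)/c_2^p$ , so one may take $c_3=\Gamma (p+1)/c_2^p$ . A small numerical sketch (the values of the constant and of $p$ are arbitrary) confirms this:

```python
import math

def tail_integral(c, p, steps=300_000):
    """Trapezoidal approximation of int_0^infty exp(-c * x**(1/p)) dx."""
    upper = (50.0 / c) ** p          # the integrand is ~ e^-50 beyond this
    h = upper / steps
    total = 0.5 * (1.0 + math.exp(-c * upper ** (1.0 / p)))
    for k in range(1, steps):
        total += math.exp(-c * (k * h) ** (1.0 / p))
    return h * total

p = 2
for c in (1.0, 2.0):
    closed_form = math.gamma(p + 1) / c ** p   # Gamma(p+1) / c^p
    assert abs(tail_integral(c, p) - closed_form) / closed_form < 1e-3
print("change of variables verified")
```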

Conditioned on the event that $v_n$ satisfies (5.7), the total length of the path between $y$ and $z$ is bounded above by

\begin{equation*} 2 K n^{1/2} |t_1-t_2|^{\alpha }, \end{equation*}

and consequently the diameter of $\mathcal{T}_n$ is bounded above by a multiple of $n^{1/2}$ . More specifically,

\begin{equation*} r(\mathcal {T}_n)\le 2 K n^{1/2}. \end{equation*}

Moreover, $m(\mathcal{T}_n)=2n$ . Hence, by (5.8), we have shown that, conditioned on $v_n$ satisfying (5.7), for any fixed $p\ge 2$ ,

\begin{align*} \mathbb{E}_{\rho ^n}^{\mathcal{T}_n} \left |\left |n^{-1/2} \left (L_{n^{3/2} \cdot }^n(y)-L_{n^{3/2} \cdot }^n(z) \right )\right |\right |_{\infty,[0,T]}^p &\le c_4 (n^{-1/2} d_{\mathcal{T}_n}(y,z))^{p/2} 2^{-\alpha p} \\[5pt] &\le c_5 \left |t_1-t_2\right |^{\alpha p/2}. \end{align*}

Choosing $p$ such that $\alpha p\ge 4$ , this is at most, except in the trivial case $t_1=t_2$ ,

\begin{equation*} c_5|t_1-t_2|^2. \end{equation*}

This holds for all $t_1$ and $t_2$ , with $2 n t_1$ and $2 n t_2$ integers, such that $p_{v_n}(t_1)=y$ and $p_{v_n}(t_2)=z$ . Since the local time process is interpolated linearly between these time points, it also holds for every $t_1$ , $t_2\in [0,1]$ . Using the moment condition (13.14) of [Reference Billingsley11, Theorem 13.5] yields that, on the event that $v_n$ satisfies (5.7), the sequence

\begin{equation*} \big |\big |n^{-1/2} L^n_{n^{3/2} \cdot }\left (p_n(t_1)\right )\big |\big |_{\infty, [0,T]} \end{equation*}

is tight in $C ([0,1];\; C[0,T] )$ . This gives subsequential convergence of the discrete tours $(v_n,r_n)$ associated with $(n^{-1/2} \mathcal{T}_n,n^{-1/2} L_{n^{3/2}\cdot }^n)$ , to $(v,r)$ , where

\begin{equation*} r_n(t)\;:\!=\;n^{-1/2} L^n_{n^{3/2} \cdot }\left (p_{v_n}(t)\right ), \end{equation*}

and $r$ is a continuous $C[0,T]$ -valued function. This convergence is as functions on $C[0,1]\times C([0,1];\;C[0,T])$ . By a straightforward extension of [Reference Croydon14, Proposition 2.4] (there the tours took values in $C(\mathbb{R}_{+},\mathbb{R}_{+})\times C(\mathbb{R}_{+},\mathbb{R}^d)$ ), it implies, along the relevant subsequence, the convergence of $(n^{-1/2} \mathcal{T}_n,n^{-1/2} L^n_{n^{3/2} \cdot })$ to $(\mathcal{T}_e,r)$ , which in turn implies that (3.2) holds along that subsequence. Moreover, since the first part of Assumption 2 holds, we can obtain that the limit of the rescaled local times on $\mathcal{T}_n$ is indeed the local time process of the Brownian motion on the CRT, see Lemma 3.3. This identifies the limit, and so we can extend from subsequential to full convergence to conclude the proof of the proposition.

5.1.1 Itô’s excursion theory of Brownian motion

We recall some key facts of Itô’s excursion theory of reflected Brownian motion. Our main interest here lies in the scaling property of the Itô excursion measure. Let $(L_t^0)_{t\ge 0}$ denote the local time process at level 0 of the reflected Brownian motion $(|B_t|)_{t\ge 0}$ , which can be defined by the approximation

\begin{equation*} L_t^0=\lim _{\varepsilon \rightarrow 0}\frac {1}{2 \varepsilon } \int _{0}^{t} \unicode{x1D7D9}_{[0,\varepsilon ]}(|B_s|) ds, \end{equation*}

for every $t\ge 0$ , a.s.
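The occupation-density approximation above has a discrete counterpart that is easy to simulate: for a reflected simple random walk on the diffusive scale, the rescaled number of visits to 0 and the $\varepsilon$ -window occupation estimator track the same quantity. A seeded sketch (all parameters are illustrative, and the two estimators agree only approximately at finite $n$ ):

```python
import random

# seeded reflected simple random walk; on the diffusive scale, both
# estimators below target the local time of |B| at level 0
rng = random.Random(42)
n = 400_000
eps = 0.1
band = eps * n ** 0.5
s = zeros = window = 0
for _ in range(n):
    s += rng.choice((-1, 1))
    if s == 0:
        zeros += 1
    if abs(s) <= band:          # reflected walk |S_k| lies in [0, eps * sqrt(n)]
        window += 1

visits_estimate = zeros / n ** 0.5              # renormalised visits to 0
density_estimate = window / (2 * eps * n)       # (1/2eps) * occupation of [0, eps]
print(visits_estimate, density_estimate)
```

Both quantities are of order one and close to each other, illustrating that the window average $(2\varepsilon )^{-1}\int _0^t \unicode{x1D7D9}_{[0,\varepsilon ]}(|B_s|) ds$ recovers the local time at 0 as $\varepsilon \to 0$ .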

The local time process at level 0 is increasing, and its set of points of increase coincides with the set of time points at which the reflected Brownian motion is equal to zero. Now, introducing the right-continuous inverse of the local time process at level 0, i.e.

\begin{equation*} \tau _k\;:\!=\;\inf \{t\ge 0\;:\;L_t^0\gt k\}, \end{equation*}

for every $k\ge 0$ , we have that the set of points of increase of $(L_t^0)_{t\ge 0}$ is contained in the set

\begin{equation*} \{\tau _k\;:\;k\ge 0\}\cup \{\tau _{k-}\;:\;k\in D\}, \end{equation*}

where $D$ is the countable set of discontinuities of the mapping $k\mapsto \tau _k$ . Then, for every $k\in D$ we define the excursion $(e_k(t))_{t\ge 0}$ with excursion interval $(\tau _{k-},\tau _k)$ away from 0 as

\begin{equation*} e_k(t)\;:\!=\; \begin {cases} |B_{t+\tau _{k-}}| & \textrm {if } 0\le t\le \tau _k-\tau _{k-}, \\[5pt] 0 & \textrm {if } t\gt \tau _k-\tau _{k-}. \end {cases} \end{equation*}

Let $E$ denote the space of excursions, namely the space of functions $e\in C(\mathbb{R}_+,\mathbb{R}_+)$ , satisfying $e(0)=0$ and $\zeta (e)\;:\!=\;\sup \{s\gt 0\;:\;e(s)\gt 0\}\in (0,\infty )$ . By convention $\sup \emptyset =0$ . Observe here that for every $k\in D$ , $e_k\in E$ , and moreover $\zeta (e_k)=\tau _k-\tau _{k-}$ .
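The decomposition of the reflected path into excursions away from 0 can be mimicked exactly in discrete time by cutting a non-negative path at its zero set; the following self-contained sketch does this (the example path is arbitrary):

```python
def excursions(path):
    """Split a non-negative path (list of values) into its excursions away
    from 0, i.e. its maximal runs of strictly positive values."""
    out, current = [], []
    for v in path:
        if v > 0:
            current.append(v)
        elif current:
            out.append(current)
            current = []
    if current:             # an excursion still in progress at the endpoint
        out.append(current)
    return out

path = [0, 1, 2, 1, 0, 0, 1, 0, 2, 1]
print(excursions(path))  # [[1, 2, 1], [1], [2, 1]]
```

The lengths of these runs play the role of the lifetimes $\zeta (e_k)=\tau _k-\tau _{k-}$ , up to the obvious boundary conventions at the end of the path.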

The main theorem of Itô’s excursion theory adapted in our setting is the existence of a $\sigma$ -finite measure $\mathbb{N} (d e)$ on the space of positive excursions of linear Brownian motion, such that the point measure

\begin{equation*} \sum _{k\in D}\delta _{(k,e_k)}(ds \ de) \end{equation*}

is a Poisson measure on $\mathbb{R}_{+}\times E$ , with intensity $ds \otimes \mathbb{N}(d e)$ . The Itô excursion measure has the following scaling property. For every $a\gt 0$ consider the mapping $\Theta _a\;:\;E\rightarrow E$ defined by setting $\Theta _a(e)(t)\;:\!=\;\sqrt{a}e(t/a)$ , for every $e\in E$ , and $t\ge 0$ . Then, we have that

(5.9) \begin{equation} \mathbb{N}\circ \Theta ^{-1}_a=\sqrt{a}\, \mathbb{N}. \end{equation}

Versions of the Itô excursion measure $\mathbb{N}(d e)$ under different conditionings are possible. For example, one can define conditionings with respect to the height or the length of the excursion. For our purposes, we focus on the fact that there exists a unique collection of probability measures $(\mathbb{N}_s\;:\;s\gt 0)$ on $E$ , such that $\mathbb{N}_s(\zeta =s)=1$ , for every $s\gt 0$ , and moreover for every measurable set $A\subseteq E$ ,

(5.10) \begin{equation} \mathbb{N}(A)=\int _{0}^{\infty } \mathbb{N}_s(A) \frac{ds}{2 \sqrt{2 \pi s^3}}. \end{equation}

In other words, $\mathbb{N}_s(d e)$ is the Itô excursion measure $\mathbb{N}(d e)$ , conditioned on the event $\{\zeta =s\}$ . We write $\mathbb{N}_1=\mathbb{N}(\cdot |\zeta =1)$ to denote the law of the normalized Brownian excursion. It is straightforward from (5.9) and (5.10) to check that $\mathbb{N}_s$ satisfies the scaling property

(5.11) \begin{equation} \mathbb{N}_s\circ \Theta _a^{-1}=\mathbb{N}_{as}, \end{equation}

for every $s\gt 0$ and $a\gt 0$ .
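The map $\Theta _a$ and its basic properties are straightforward to verify on concrete excursions: it multiplies heights by $\sqrt{a}$ , multiplies lifetimes by $a$ , and satisfies $\Theta _a\circ \Theta _b=\Theta _{ab}$ , consistent with (5.9) and (5.11). A sketch at the level of functions (the sine excursion merely stands in for a Brownian excursion sample):

```python
import math

def theta(a, e):
    """Brownian scaling of an excursion: (Theta_a e)(t) = sqrt(a) * e(t / a)."""
    return lambda t: math.sqrt(a) * e(t / a)

def lifetime(e, grid=10_000, horizon=100.0):
    """Crude zeta(e) = sup{s : e(s) > 0}, evaluated on a grid."""
    return max((k * horizon / grid for k in range(grid + 1)
                if e(k * horizon / grid) > 0), default=0.0)

# toy excursion of lifetime 1, standing in for a Brownian excursion
e = lambda t: math.sin(math.pi * t) if 0.0 <= t <= 1.0 else 0.0

a, b = 4.0, 2.25
scaled = theta(a, e)
assert abs(lifetime(scaled) - a * lifetime(e)) < 0.05      # zeta scales by a
assert abs(scaled(2.0) - math.sqrt(a) * e(0.5)) < 1e-12    # heights scale by sqrt(a)
# composition: Theta_a Theta_b = Theta_{ab}
lhs, rhs = theta(a, theta(b, e)), theta(a * b, e)
assert all(abs(lhs(t) - rhs(t)) < 1e-12 for t in (0.1, 1.0, 5.0))
print("scaling checks passed")
```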

To conclude our recap on Itô’s excursion theory, we highlight the fact that for every $t\gt 0$ the process $(e(t+r))_{r\ge 0}$ is Markov under the conditional probability measure $\mathbb{N}(\cdot |\zeta \gt t)$ . The transition kernel of the process is the same as that of a Brownian motion killed upon the first time it hits zero.

5.1.2 Continuity of blanket times of Brownian motion on the CRT

We are primarily interested in proving continuity of the $\varepsilon$ -blanket time variable of the Brownian motion on the CRT. The mapping $\varepsilon \mapsto \tau _{\textrm{bl}}^e(\varepsilon )$ is increasing in $(0,1)$ , so it possesses left-hand and right-hand limits. We let

\begin{equation*} \mathcal {A}_{\varepsilon }\;:\!=\;\left \{(\mathcal {T}_e)_{e\in E}\;:\;\mathbb {P}_{\rho }^{e}\left (\tau _{\textrm {bl}}^e(\varepsilon \!-\!)=\tau _{\textrm {bl}}^e(\varepsilon \!+\!)\right )=1\right \} \end{equation*}

denote the collection of random trees coded by positive excursions that have continuous blanket time variable at $\varepsilon \in (0,1)$ almost-surely with respect to $\mathbb{P}_{\rho }^{e}$ , the law of the corresponding Brownian motion on $\mathcal{T}_e$ .

Moreover, $\varepsilon \mapsto \tau _{\textrm{bl}}^e(\varepsilon )$ has at most countably many discontinuities, $\mathbb{P}_{\rho }^e$ -a.s., being a real-valued monotone function defined on an interval. Recalling the definition of Itô’s (unconditioned) excursion measure $\mathbb{N}$ in (5.10), by Fubini, we immediately get

\begin{equation*} \int _{0}^{1} \int _E \mathbb {P}_{\rho }^{e}\left (\tau _{\textrm {bl}}^e(\varepsilon \!-\!)\neq \tau _{\textrm {bl}}^e(\varepsilon \!+\!)\right ) \mathbb {N}(d e) d \varepsilon =\mathbb {E}_{\mathbb {P}_{\rho }} \left [\int _{0}^{1} \unicode {x1D7D9} \left \{\tau _{\textrm {bl}}^e(\varepsilon \!-\!)\neq \tau _{\textrm {bl}}^e(\varepsilon \!+\!)\right \} d \varepsilon \right ]=0, \end{equation*}

where $\mathbb{E}_{\mathbb{P}_{\rho }}$ denotes the expectation with respect to the annealed law, which is formally defined for suitable events as

(5.12) \begin{equation} \mathbb{P}_{\rho }(\!\cdot\!)\;:\!=\;\int _E \mathbb{P}_{\rho }^{e}(\!\cdot\!) \mathbb{N}(d e). \end{equation}

Therefore, denoting the Lebesgue measure on the real line by $\lambda$ as usual, we deduce that for $\lambda$ -a.e. $\varepsilon \in (0,1)$ ,

\begin{equation*} \int _E \left (1-\mathbb {P}_{\rho }^{e}\left (\tau _{\textrm {bl}}^e(\varepsilon \!-\!)=\tau _{\textrm {bl}}^e(\varepsilon \!+\!)\right )\right ) \mathbb {N}(d e)=0. \end{equation*}

The fact that $\mathbb{N}(d e)$ is a sigma-finite measure on $E$ yields that for $\lambda$ -a.e. $\varepsilon \in (0,1)$ , $\mathbb{N}$ -a.e. $e\in E$ ,

(5.13) \begin{equation} \mathbb{P}_{\rho }^{e}\left (\tau _{\textrm{bl}}^e(\varepsilon \!-\!)=\tau _{\textrm{bl}}^e(\varepsilon \!+\!)\right )=1. \end{equation}

Thus, we have inferred that for $\lambda$ -a.e. $\varepsilon \in (0,1)$ , $\mathbb{N}$ -a.e. $e\in E$ , $\mathcal{T}_e\in \mathcal{A}_{\varepsilon }$ . For our purposes, we need to improve this statement so that it holds for every $\varepsilon \in (0,1)$ .

For a fixed positive excursion $e$ compactly supported on $[0,\zeta ]$ , consider the random real tree $((\mathcal{T}_e,d_{\mathcal{T}_e}),\pi ^e)$ , where $d_{\mathcal{T}_e}$ is defined as in (5.2) and $\pi ^e$ is the image measure on $\mathcal{T}_e$ of the Lebesgue measure on $[0,\zeta ]$ by the canonical projection $p_e$ of $[0,\zeta ]$ onto $\mathcal{T}_e$ . Recall the mapping introduced in Section 5.1.1,

\begin{equation*} \Theta _a(e)(t)=\sqrt {a} e(t/a),\qquad t\in [0,a \zeta ]. \end{equation*}

Applying $\Theta _a$ to $e$ for some $a\gt 1$ results in perturbing $d_{\mathcal{T}_e}$ by a factor of $\sqrt{a}$ and $\pi ^e$ by a factor of $a$ . To be more precise, consider the set $A\cap \textrm{Sk}(\mathcal{T}_{\Theta _a(e)})$ , where $\textrm{Sk}(\mathcal{T}_{\Theta _a(e)})\;:\!=\;\cup _{x,y\in \mathcal{T}_{\Theta _a(e)}} [x,y]$ ,

\begin{equation*} [x,y]\;:\!=\;\{z\in \mathcal {T}_{\Theta _a(e)}\;:\;d_{\mathcal {T}_{\Theta _a(e)}}(x,y)=d_{\mathcal {T}_{\Theta _a(e)}}(x,z)+d_{\mathcal {T}_{\Theta _a(e)}}(z,y)\}\setminus \{x,y\} \end{equation*}

is the path interval, and $A\in \mathcal{B}(\mathcal{T}_{\Theta _a(e)})$ . In particular, if $\mathcal{T}'\subseteq \mathcal{T}_{\Theta _a(e)}$ is a countable dense subset, we have that

(5.14) \begin{equation} \left \{A\cap \textrm{Sk}(\mathcal{T}_{\Theta _a(e)})\;:\;A\in \mathcal{B}(\mathcal{T}_{\Theta _a(e)})\right \}=\sigma (\{[x,y]\;:\;x,y\in \mathcal{T}'\}). \end{equation}

For $s$, $t\in p_{\Theta _a(e)}^{-1}(\mathcal{T}')\subseteq [0,a\zeta ]$ such that $p_{\Theta _a(e)}(s)=x$ and $p_{\Theta _a(e)}(t)=y$, observe that

(5.15) \begin{equation} d_{\mathcal{T}_{\Theta _a(e)}}(x,y)=\sqrt{a} d_{\mathcal{T}_e}(\tilde{x}_a,\tilde{y}_a), \end{equation}

where $\tilde{x}_a$, $\tilde{y}_a\in \mathcal{T}_e$ are such that $p_e(s/a)=\tilde{x}_a$ and $p_e(t/a)=\tilde{y}_a$. The scaling property of the Lebesgue measure then implies

\begin{align*} \pi ^{\Theta _a(e)}([x,y])&=\lambda (\{r\in [0,a\zeta ]\;:\;p_{\Theta _a(e)}(r)\in [x,y]\}) \\[5pt] &=\lambda (\{r\in [0,a\zeta ]\;:\;p_e(r/a)\in [\tilde{x}_a,\tilde{y}_a]\}) \\[5pt] &=a \lambda (\{r\in [0,\zeta ]\;:\;p_e(r)\in [\tilde{x}_a,\tilde{y}_a]\}). \end{align*}

Therefore,

(5.16) \begin{equation} \pi ^{\Theta _a(e)}([x,y])=a \pi ^e([\tilde{x}_a,\tilde{y}_a]). \end{equation}

For simplicity, for the random real tree $\mathcal{T}= ((\mathcal{T}_e,d_{\mathcal{T}_e}),\pi ^e)$, we write $\Theta _a \mathcal{T}$ to denote the resulting random real tree $((\mathcal{T}_{\Theta _a(e)},d_{\mathcal{T}_{\Theta _a(e)}}),\pi ^{\Theta _a(e)})$, where $d_{\mathcal{T}_{\Theta _a(e)}}$ and $\pi ^{\Theta _a(e)}$ satisfy (5.15) and (5.16) respectively.
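For bookkeeping, the effect of $\Theta _a$ on the objects attached to $\mathcal{T}$ can be collected in one place; the local-time relation is the one verified in the remainder of this subsection, and the heuristic behind the exponent $3/2$ is the standard one that, on a resistance space, time scales like the product of the resistance and measure scales:

```latex
% Effect of \Theta_a, a > 1, on the tree, its measure, and the
% associated Brownian motion (heuristic summary; the text verifies
% the local-time relation via the occupation density formula).
\begin{aligned}
d_{\mathcal{T}_{\Theta_a(e)}} &= \sqrt{a}\, d_{\mathcal{T}_e},
&\qquad
\pi^{\Theta_a(e)} &= a\, \pi^{e},\\
\text{time scale} &= \sqrt{a}\cdot a = a^{3/2},
&\qquad
L^{\Theta_a(e)}_t &\overset{(d)}{=} \sqrt{a}\, L^{e}_{a^{-3/2} t}.
\end{aligned}
```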

Next, if the Brownian motion $(X_t^e)_{t\ge 0}$ on $\mathcal{T}$ admits local times $(L_t^e(x))_{x\in \mathcal{T},t\ge 0}$ that, $\mathbb{P}_{\rho }^e$ -a.s., are jointly continuous in $(x,t)$ , then it is the case that the Brownian motion on $\Theta _a \mathcal{T}$ admits local times distributed as

\begin{equation*} (\sqrt {a} L_{a^{-3/2} t}^e(x))_{x\in \mathcal {T},t\ge 0} \end{equation*}

that, $\mathbb{P}_{\rho }^{\Theta _a(e)}$-a.s., are jointly continuous in $(x,t)$. To justify this, it takes two steps to check that they satisfy, $\mathbb{P}_{\rho }^{\Theta _a(e)}$-a.s., the occupation density formula (see [Reference Croydon, Hambly and Kumagai20, Lemma 2.4] and the references within the proof of (b) and (2.6)):

\begin{align*} \int _{[x,y]} \sqrt{a} L_{a^{-3/2 } u}^e(z) \pi ^{\Theta _a(e)}(d z)&= \int _{[\tilde{x}_a,\tilde{y}_a]} a^{3/2} L_{a^{-3/2} u}^e(z) \pi ^ e(d z) \\[5pt] &=\int _{0}^{a^{-3/2} u} a^{3/2} \unicode{x1D7D9}_{[\tilde{x}_a,\tilde{y}_a]}(X_k^e) dk \\[5pt] &=\int _{0}^{u} \unicode{x1D7D9}_{[\tilde{x}_a,\tilde{y}_a]}(X_{a^{-3/2} k}^e) dk, \end{align*}

for every $u\ge 0$, where the first equality is obtained by (5.16), and the second holds, $\mathbb{P}_{\rho }^e$-a.s., by the occupation density formula applied to $(L_t^e(x))_{x\in \mathcal{T},t\ge 0}$. In addition, for $a\gt 1$,

\begin{equation*} \{X^e_{a^{-3/2} t}, \mathbb {P}_{\rho }^e(\,\cdot \,;\; [\tilde {x}_a,\tilde {y}_a])\}\overset {(d)}=\{X^{\Theta _a(e)}_{a^{-3/2} t}, \mathbb {P}_{\rho }^{\Theta _a(e)}(\,\cdot \,;\; [x,y])\}, \end{equation*}

where $\overset{(d)}=$ means equality in distribution (to justify why the processes are equal in law, see the definition of a speed motion on a compact real tree after Proposition 1.9 in [Reference Athreya, Eckhoff and Winter6]), which brings us to our second step, confirming that, $\mathbb{P}_{\rho }^{\Theta _a(e)}$ -a.s.,

\begin{equation*} \int _{[x,y]} \sqrt {a} L_{a^{-3/2 } u}^e(z) \pi ^{\Theta _a(e)}(d z)=\int _{0}^{u} \unicode {x1D7D9}_{[\tilde {x}_a,\tilde {y}_a]}(X_{a^{-3/2} k}^e) dk =\int _{0}^{u} \unicode {x1D7D9}_{[x,y]}(X_k^{\Theta _a(e)}) dk, \end{equation*}

for every open line segment $[x,y]\subseteq \mathcal{T}_{\Theta _a(e)}$, with $x$, $y\in \mathcal{T}'$, and every $u\ge 0$. In view of (5.14), this can be seen to hold for any $A\cap \textrm{Sk}(\mathcal{T}_{\Theta _a(e)})$ with $A\in \mathcal{B}(\mathcal{T}_{\Theta _a(e)})$.

Now, for every $\varepsilon \in (0,1)$ and every $a\gt 1$, the blanket time variable of the Brownian motion on $\Theta _a \mathcal{T}$, as defined in (1.5), satisfies

\begin{align*} \tau ^{\Theta _a(e)}_{\textrm{bl}}(a^{-1} \varepsilon ) &\,{\buildrel (d) \over =}\, \inf \{t\ge 0\;:\;\sqrt{a} L_{a^{-3/2} t}^e(x)\ge \varepsilon a^{-1} t,\ \forall x\in \mathcal{T}_e\} \\[5pt] &\,{\buildrel (d) \over =}\, \inf \{t\ge 0\;:\;L_{a^{-3/2} t}^e(x)\ge \varepsilon a^{-3/2} t,\ \forall x\in \mathcal{T}_e\}\ \,{\buildrel (d) \over =}\, a^{3/2} \tau _{\textrm{bl}}^e(\varepsilon ). \end{align*}

This implies that $ \mathcal{T}\in \mathcal{A}_{\varepsilon }$ if and only if $\Theta _a \mathcal{T}\in \mathcal{A}_{a^{-1} \varepsilon }$. In other words, $\tau _{\textrm{bl}}^e$ is continuous at $\varepsilon$, $\mathbb{P}_{\rho }^{e}$-a.s., if and only if $\tau _{\textrm{bl}}^{\Theta _a(e)}$ is continuous at $a^{-1} \varepsilon$, $\mathbb{P}_{\rho }^{\Theta _a(e)}$-a.s. Using the precise way in which the blanket times above relate, as well as the scaling properties of the usual and the normalized Itô excursion measures, we prove the following proposition.

Proposition 5.6. For every $\varepsilon \in (0,1)$ , $\mathbb{N}$ -a.e. $e\in E$ , $\tau _{\textrm{bl}}^e(\varepsilon )$ is continuous at $\varepsilon$ , $\mathbb{P}_{\rho }^e$ -a.s. Moreover, $\mathbb{N}_1$ -a.e. $e\in E$ , $\tau _{\textrm{bl}}^e(\varepsilon )$ is continuous at $\varepsilon$ , $\mathbb{P}_{\rho }^e$ -a.s.

Proof. Fix $\varepsilon \in (0,1)$. We choose $a\gt 1$ in such a way that $a^{-1} \varepsilon \in \Omega _0$, where $\Omega _0$ is the full $\lambda$-measure set of values for which the assertion in (5.13) holds. Namely, $\mathbb{N}$-a.e. $e\in E$, $\mathcal{T}\in \mathcal{A}_{a^{-1} \varepsilon }$. Using the scaling property of Itô's excursion measure as quoted in (5.9) yields that $\sqrt{a} \mathbb{N}$-a.e. $e\in E$, $\Theta _a \mathcal{T}\in \mathcal{A}_{a^{-1} \varepsilon }$, and consequently $\mathbb{N}$-a.e. $e\in E$, $\mathcal{T}\in \mathcal{A}_{\varepsilon }$, where we exploited the fact that $\Theta _a \mathcal{T}\in \mathcal{A}_{a^{-1} \varepsilon }$ if and only if $\mathcal{T}\in \mathcal{A}_{\varepsilon }$. Since $\varepsilon$ was arbitrary, this establishes our first conclusion.

What remains is to prove a similar result with $\mathbb{N}(d e)$ replaced by its version conditioned on the length of the excursion. Following the same steps we used to prove (5.13), we infer that for $\lambda$-a.e. $\varepsilon \in (0,1)$, $\mathbb{N}(\cdot |\zeta \in [1,2])$-a.e. $e\in E$, $\mathcal{T}\in \mathcal{A}_{\zeta ^{-1} \varepsilon }$, and consequently for $\lambda$-a.e. $\varepsilon \in (0,1)$, $\mathbb{N}(\cdot |\zeta \in [1,2])$-a.e. $e\in E$, $\Theta _{\zeta } \mathcal{T}\in \mathcal{A}_{\varepsilon }$. Using the scaling property of the normalized Itô excursion measure quoted in (5.11), we deduce that for $\lambda$-a.e. $\varepsilon \in (0,1)$, $\mathbb{N}_1$-a.e. $e\in E$, $\mathcal{T}\in \mathcal{A}_{\varepsilon }$, where $\mathbb{N}_1$ is the law of the normalized Brownian excursion. To conclude, we proceed using the same argument as in the first part of the proof. Fix an $\varepsilon \in (0,1)$ and choose $a\gt 1$ such that $a^{-1} \varepsilon \in \Phi _0$, where $\Phi _0$ is the full $\lambda$-measure set of values for which the assertion “$\mathbb{N}_1$-a.e. $e\in E$, $\mathcal{T}\in \mathcal{A}_{\varepsilon }$” holds. Namely, $\mathbb{N}_1$-a.e. $e\in E$, $\mathcal{T}\in \mathcal{A}_{a^{-1} \varepsilon }$, which from the scaling property of the normalized Itô excursion measure yields $a \mathbb{N}_1$-a.e. $e\in E$, $\Theta _a \mathcal{T}\in \mathcal{A}_{a^{-1} \varepsilon }$. As before, this gives us that $\mathbb{N}_1$-a.e. $e\in E$, $\mathcal{T}\in \mathcal{A}_{\varepsilon }$, or in other words that $\mathbb{N}_1$-a.e. $e\in E$, $\tau _{\textrm{bl}}^e(\varepsilon )$ is continuous at $\varepsilon$, $\mathbb{P}_{\rho }^e$-a.s.

Since the space in the convergence in (5.3) is separable, we can use Skorohod’s coupling to deduce that there exists a common metric space $(F,d_F)$ and a joint probability measure $\tilde{\mathbb{P}}$ such that, as $n\to \infty$ ,

\begin{equation*} d_H^F(V(\tilde {\mathcal {T}}_n),\tilde {\mathcal {T}}_e)\to 0, \qquad d_P^F(\tilde {\pi }^n,\tilde {\pi }^e)\to 0, \qquad d_F(\tilde {\rho }^n,\tilde {\rho })\to 0, \qquad \tilde {\mathbb {P}}\textrm { -a.s.}, \end{equation*}

where $((V(\mathcal{T}_n),d_{\mathcal{T}_n},\rho ^n),\pi ^n)\,{\buildrel (d) \over =}\, ((V(\tilde{\mathcal{T}_n}),d_{\tilde{\mathcal{T}_n}},\tilde{\rho }^n),\tilde{\pi }^n)$ and $((\mathcal{T}_e,d_{\mathcal{T}_e},\rho ),\pi ^e)\,{\buildrel (d) \over =}\, ((\tilde{\mathcal{T}_e},d_{\tilde{\mathcal{T}_e}},\tilde{\rho }),\tilde{\pi }^e)$. Moreover, the law of $X^n$ under $\mathbb{P}_{\tilde{\rho }^n}^{\tilde{\mathcal{T}_n}}$ converges weakly to the law of $X^e$ under $\mathbb{P}_{\tilde{\rho }}^{\tilde{e}}$ on $D([0,1],F)$. In Proposition 5.5, we proved equicontinuity of the local times with respect to the annealed law. Reexamining the proof of Lemma 3.3, one can see that in this case $L^n$ under $\mathbb{P}_{\tilde{\rho }^n}(\!\cdot\!)\;:\!=\; \int \mathbb{P}_{\tilde{\rho }^n}^{\tilde{\mathcal{T}_n}}(\!\cdot\!) d \mathbb{\tilde{P}}$ converges weakly to $L$ under $\mathbb{P}_{\tilde{\rho }}(\!\cdot\!)\;:\!=\;\int \mathbb{P}_{\tilde{\rho }}^{\tilde{e}}(\!\cdot\!) d \mathbb{\tilde{P}}$ in the sense of the local convergence stated in (3.3). It was this precise statement that was used extensively in the derivation of asymptotic distributional bounds for the rescaled blanket times in Section 4.1. The statement of Theorem 1.2 then translates as follows. For every $\varepsilon \in (0,1)$, $\delta \in (0,1)$ and $t\in [0,1]$,

\begin{equation*} \limsup _{n\rightarrow \infty } \int \mathbb {P}_{\tilde {\rho }^n}^{\tilde {\mathcal {T}_n}} \left (n^{-3/2} \tau _{\textrm {bl}}^n(\varepsilon )\le t\right ) d \mathbb {\tilde {P}}\le \int \mathbb {P}_{\tilde {\rho }}^{\tilde {e}} \left (\tau _{\textrm {bl}}^e(\varepsilon (1-\delta ))\le t\right ) d \mathbb {\tilde {P}}, \end{equation*}
\begin{equation*} \liminf _{n\rightarrow \infty } \int \mathbb {P}_{\tilde {\rho }^n}^{\tilde {\mathcal {T}_n}} \left (n^{-3/2} \tau _{\textrm {bl}}^n(\varepsilon )\le t\right ) d \mathbb {\tilde {P}}\ge \int \mathbb {P}_{\tilde {\rho }}^{\tilde {e}} \left ({\tau }_{\textrm {bl}}^e(\varepsilon (1+\delta ))\lt t\right ) d \mathbb {\tilde {P}}. \end{equation*}

From Proposition 5.6 and the dominated convergence theorem we have that for every $\varepsilon \in (0,1)$ and $t\in [0,1]$ ,

\begin{align*} \lim _{\delta \to 0} \int \mathbb{P}_{\tilde{\rho }}^{\tilde{e}} \left (\tau _{\textrm{bl}}^e(\varepsilon (1\pm \delta ))\le t\right ) d \mathbb{\tilde{P}} &=\int \mathbb{P}_{\tilde{\rho }}^{\tilde{e}} \left ({\tau }_{\textrm{bl}}^e(\varepsilon )\le t\right ) d \mathbb{\tilde{P}} \\[5pt] &=\int \mathbb{P}_{\rho }^e\left (\tau _{\textrm{bl}}^e(\varepsilon )\le t\right ) \mathbb{N}(d e) =\mathbb{P}_{\rho }\left (\tau _{\textrm{bl}}^e(\varepsilon )\le t\right ). \end{align*}

Therefore, we deduce that for every $\varepsilon \in (0,1)$ and $t\in [0,1]$ ,

\begin{align*} \lim _{n\to \infty } \mathbb{P}_{\rho ^n} \left (n^{-3/2} \tau _{\textrm{bl}}^n(\varepsilon )\le t\right )&=\lim _{n\to \infty } \int \mathbb{P}_{\rho ^n}^{\mathcal{T}_n} \left (n^{-3/2} \tau _{\textrm{bl}}^n(\varepsilon )\le t\right ) \mathbb{P}(d \mathcal{T}_n) \\[5pt] &=\mathbb{P}_{\rho }\left (\tau _{\textrm{bl}}^e(\varepsilon )\le t\right ). \end{align*}

In the theorem below, which is a restatement of Theorem 1.5, we state the $\varepsilon$-blanket time variable convergence result we have just proved.

Theorem 5.7. Fix $\varepsilon \in (0,1)$ . If $\tau _{\textrm{bl}}^n(\varepsilon )$ is the $\varepsilon$ -blanket time variable of the random walk on $\mathcal{T}_n$ , started from its root $\rho ^n$ , then

\begin{equation*} \mathbb {P}_{\rho ^n}\left (n^{-3/2} \tau _{\textrm {bl}}^n(\varepsilon )\le t\right )\to \mathbb {P}_{\rho }\left (\tau _{\textrm {bl}}^e(\varepsilon )\le t\right ), \end{equation*}

for every $t\ge 0$ , where $\tau _{\textrm{bl}}^e(\varepsilon )\in (0,\infty )$ is the $\varepsilon$ -blanket time variable of the Brownian motion on $\mathcal{T}_e$ , started from $\rho$ . Equivalently, for every $\varepsilon \in (0,1)$ , $n^{-3/2} \tau _{\textrm{bl}}^n(\varepsilon )$ under $\mathbb{P}_{\rho ^n}$ converges weakly to $\tau _{\textrm{bl}}^e(\varepsilon )$ under $\mathbb{P}_{\rho }$ .

5.2 The critical Erdős-Rényi random graph

Our interest in this section shifts to the Erdős-Rényi random graph at criticality. Take $n$ vertices labelled by $[n]$ and place an edge between each pair independently with probability $p\in [0,1]$. Denote the resulting random graph by $G(n,p)$, and let $p=c/n$ for some $c\gt 0$. This model exhibits a phase transition in its structure for large $n$, as was discovered in the groundbreaking work of Erdős and Rényi [Reference Erdős and Rényi25]. With probability tending to $1$, when $c\lt 1$ the largest connected component has size $O(\log n)$, whereas when $c\gt 1$ a giant component emerges that contains a positive proportion of the vertices. In the critical case $c=1$, they showed that the largest components of $G(n,p)$ have size of order $n^{2/3}$.

We will focus here on the critical case $c=1$, and more specifically, on the critical window $p=n^{-1}+\lambda n^{-4/3}$, $\lambda \in \mathbb{R}$. The most significant result in this regime was proven by Aldous [Reference Aldous5]. Fix $\lambda \in \mathbb{R}$ and let $(C_i^n)_{i\ge 1}$ denote the sequence of the component sizes of $G(n,n^{-1}+\lambda n^{-4/3})$. For reasons that are inherent in understanding the structure of the components, we track the surplus of each one, that is, the number of edges that must be removed in order to obtain a tree. Let $(S_i^n)_{i\ge 1}$ be the sequence of the corresponding surpluses.

Theorem 5.8. (Aldous [Reference Aldous5]). As $n\to \infty$ ,

(5.17) \begin{equation} \left (n^{-2/3} (C_i^n)_{i\ge 1},(S_i^n)_{i\ge 1}\right )\longrightarrow \left ((C_i)_{i\ge 1},(S_i)_{i\ge 1}\right ) \end{equation}

in distribution, where the convergence of the first sequence takes place in $\ell ^2_{\downarrow }$ , the set of positive, decreasing sequences $(x_i)_{i\ge 1}$ with $\sum _{i=1}^{\infty } x_i^2\lt \infty$ . For the second sequence it takes place in the product topology.

The limit is described by stochastic processes that encode various aspects of the structure of the random graph. Consider a Brownian motion with parabolic drift, $(B^{\lambda }_t)_{t\ge 0}$ , where

(5.18) \begin{equation} B^{\lambda }_t\;:\!=\;B_t+\lambda t-\frac{t^2}{2} \end{equation}

and $(B_t)_{t\ge 0}$ is a standard Brownian motion. Then, the limiting sequence $(C_i)_{i\ge 1}$ has the distribution of the ordered sequence of lengths of excursions of the process $B^{\lambda }_t-\inf _{0\le s\le t} B^{\lambda }_s$, that is, the parabolic Brownian motion reflected at its minimum. Finally, $(S_i)_{i\ge 1}$ is recovered as follows. Draw the graph of the reflected process, scatter points on the plane according to a rate-$1$ Poisson process, and keep those that fall between the $x$-axis and the graph. Then, $S_i$ is the number of points that fall under the excursion of length $C_i$. Observe that the distribution of the limit $(C_i)_{i\ge 1}$ depends on the particular value of $\lambda$ chosen.
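The limit object just described is straightforward to simulate. Below is a minimal sketch (the Euler discretization, grid parameters and function names are our own choices, not taken from the paper): it generates the reflected process $B^{\lambda }_t-\inf _{0\le s\le t} B^{\lambda }_s$ on a grid and reads off the ranked excursion lengths.

```python
import math
import random

def reflected_parabolic_bm(lam, T=10.0, dt=1e-3, seed=1):
    """Euler scheme for W_t = B^lam_t - min_{s<=t} B^lam_s on a grid.

    B^lam_t = B_t + lam*t - t^2/2, so the drift over [t, t+dt] is
    (lam - t)*dt.  Returns the list [W_dt, W_2dt, ...].
    """
    rng = random.Random(seed)
    n = int(T / dt)
    b, running_min, W = 0.0, 0.0, []
    for i in range(n):
        t = i * dt
        b += (lam - t) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        running_min = min(running_min, b)
        W.append(b - running_min)  # reflection at the running minimum
    return W

def excursion_lengths(W, dt=1e-3):
    """Ranked lengths of the excursions of W above 0 (grid version)."""
    lengths, run = [], 0
    for w in W:
        if w > 0.0:
            run += 1
        elif run:
            lengths.append(run * dt)
            run = 0
    if run:
        lengths.append(run * dt)
    return sorted(lengths, reverse=True)
```

The ranked lengths approximate $(C_i)_{i\ge 1}$; scattering unit-rate Poisson points under the graph of $W$ and counting them per excursion would likewise approximate $(S_i)_{i\ge 1}$.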

The scaling limit of the largest connected component of the Erdős-Rényi random graph in the critical window arises as a tilted version of the CRT following a procedure introduced in [Reference Addario-Berry, Broutin and Goldschmidt2]. Given $\mathcal{P}$, a subset of the upper half-plane that contains only finitely many points in any compact set, and a positive excursion $e$, we define $\mathcal{P}\cap e$ as the set of points from $\mathcal{P}$ that fall under the graph of $e$. We construct a “glued” metric space $\mathcal{M}_{e,\mathcal{P}}$ as follows. For each point $(t,x)\in \mathcal{P}\cap e$, let $u_{(t,x)}$ be the unique vertex $p_e(t)\in \mathcal{T}_e$ and $v_{(t,x)}$ be the unique vertex on the path from the root to $u_{(t,x)}$ at distance $x$ from the root. Let $E_{\mathcal{P}}=\{(u_{(t,x)},v_{(t,x)})\;:\;(t,x)\in \mathcal{P}\cap e\}$ be the finite set that consists of the pairs of vertices to be identified. Let $\{u_i,v_i\}_{i=1}^{k}$ be the $k$ pairs of points that belong to $E_{\mathcal{P}}$. We define a quasi-metric on $\mathcal{T}_e$ by setting:

(5.19) \begin{equation} d_{\mathcal{M}_{e,\mathcal{P}}}(x,y)\;:\!=\;\min \left \{d_{\mathcal{T}_e}(x,y),\inf _{i_1,\ldots,i_r} \left \{d_{\mathcal{T}_e}(x,u_{i_1})+\sum _{j=1}^{r-1} d_{\mathcal{T}_e}(v_{i_j},u_{i_{j+1}})+d_{\mathcal{T}_e}(v_{i_r},y)\right \}\right \}, \end{equation}

where the infimum is taken over all positive integers $r$ and all subsets $\{i_1,\ldots,i_r\}\subseteq \{1,\ldots,k\}$; moreover, note that the indices $i_1,\ldots,i_r$ can be chosen to be distinct. The metric defined above gives the shortest distance between $x,y\in \mathcal{T}_e$ when we glue the vertices $v_i$ and $u_i$ for $i=1,\ldots,k$. It is clear that $d_{\mathcal{M}_{e,\mathcal{P}}}$ defines only a quasi-metric, since $d_{\mathcal{M}_{e,\mathcal{P}}}(u_i,v_i)=0$ while $u_i\neq v_i$, for every $i=1,\ldots,k$. We define an equivalence relation on $\mathcal{T}_e$ by setting $x\sim _{E_{\mathcal{P}}} y$ if and only if $d_{\mathcal{M}_{e,\mathcal{P}}}(x,y)=0$. This makes the vertex identification explicit, and $\mathcal{M}_{e,\mathcal{P}}$ is defined as

\begin{equation*} \mathcal {M}_{e,\mathcal {P}}\;:\!=\;(\mathcal {T}_e/\sim _{E_{\mathcal {P}}},d_{\mathcal {M}_{e,\mathcal {P}}}). \end{equation*}

To endow $\mathcal{M}_{e,\mathcal{P}}$ with a canonical measure, let $p_{e,\mathcal{P}}$ denote the canonical projection from $\mathcal{T}_e$ to the quotient space $\mathcal{T}_e/\sim _{E_{\mathcal{P}}}$. We define $\pi _{e,\mathcal{P}}\;:\!=\;\pi ^e\circ p_{e,\mathcal{P}}^{-1}$, where $\pi ^e$ is the image measure on $\mathcal{T}_e$ of the Lebesgue measure $\lambda$ on $[0,\zeta ]$ by the canonical projection $p_e$ of $[0,\zeta ]$ onto $\mathcal{T}_e$. So, $\pi _{e,\mathcal{P}}=(\lambda \circ p_e^{-1}) \circ p_{e,\mathcal{P}}^{-1}$, that is, $\pi _{e,\mathcal{P}}$ is the image of $\lambda$ under the composition $p_{e,\mathcal{P}}\circ p_e$.
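On a finite set of points, the gluing in (5.19) is just a shortest-path computation: keep the original distances and add zero-length connections at the identified pairs. A toy sketch (the labels and the Floyd-Warshall approach are illustrative, not part of the paper's construction):

```python
import itertools

def glued_metric(points, d, pairs):
    """Quasi-metric after gluing: d plus zero-length edges at `pairs`.

    points: list of labels; d: dict with d[x, y] = original distance;
    pairs: list of (u, v) to be identified.  Runs Floyd-Warshall on
    the complete graph, so every route through glued pairs is allowed.
    """
    INF = float("inf")
    dist = {(x, y): (0.0 if x == y else d.get((x, y), d.get((y, x), INF)))
            for x in points for y in points}
    for u, v in pairs:                  # gluing: u and v at distance 0
        dist[u, v] = dist[v, u] = 0.0
    for k, i, j in itertools.product(points, repeat=3):
        if dist[i, k] + dist[k, j] < dist[i, j]:
            dist[i, j] = dist[i, k] + dist[k, j]
    return dist
```

For instance, on the path $0-1-2-3$ with unit edge lengths, gluing $(0,3)$ leaves $d(1,2)=1$ but reduces $d(0,2)$ from $2$ to $1$; taking the quotient by the distance-$0$ relation then produces the analogue of $\mathcal{M}_{e,\mathcal{P}}$.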

For every $\zeta \gt 0$ , as in [Reference Addario-Berry, Broutin and Goldschmidt2], we define a tilted excursion of length $\zeta$ to be a random variable that takes values in $E$ whose distribution is characterized by

\begin{equation*} \mathbb {P}(\tilde {e}\in \mathcal {E})=\frac {\mathbb {E}\left (\unicode {x1D7D9}_{\{e\in \mathcal {E}\}}\exp \left (\int _{0}^{\zeta } e(t) dt\right )\right )}{\mathbb {E}\left (\exp \left (\int _{0}^{\zeta } e(t) dt\right )\right )}, \end{equation*}

for every measurable $\mathcal{E}\subseteq E$. We note here that the $\sigma$-algebra on $E$ is the one generated by the open sets with respect to the supremum norm on $C(\mathbb{R}_{+},\mathbb{R}_{+})$. Write $\mathcal{M}^{(\zeta )}$ for the random compact metric space distributed as $(\mathcal{M}_{\tilde{e},\mathcal{P}},2 d_{\mathcal{M}_{\tilde{e},\mathcal{P}}})$, where $\tilde{e}$ is a tilted Brownian excursion of length $\zeta$ and $\mathcal{P}$ is a Poisson point process on $\mathbb{R}_{+}^2$ of unit intensity with respect to the Lebesgue measure, independent of $\tilde{e}$.

We now give an alternative description of $\mathcal{M}_{\tilde{e},\mathcal{P}}$, for which the full details can be found in [Reference Addario-Berry, Broutin and Goldschmidt2, Proposition 20]. From the construction, it is easy to see that the number $|\mathcal{P}\cap \tilde{e}|$ of vertex identifications is a Poisson random variable with mean $\int _{0}^{\zeta } \tilde{e}(u) du$. Given that $|\mathcal{P}\cap \tilde{e}|=k$, the time coordinate $u$ of an identification point has density

\begin{equation*} \frac {\tilde {e}(u)}{\int _{0}^{\zeta } \tilde {e}(t) dt} \end{equation*}

on $[0,\zeta ]$, and given $u_{(t,x)}$, its pair $v_{(t,x)}$ is uniformly distributed on $[0,\tilde{e}(u_{(t,x)})]$. The other $k-1$ vertex identifications are distributed in the same way, independently of the pair $(u_{(t,x)},v_{(t,x)})$.
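This two-stage description can be turned directly into a sampler for the identification points. The sketch below works with a discretized excursion (the grid representation, the inversion sampling and the helper names are our own):

```python
import bisect
import math
import random

def sample_identifications(e, dt, rng):
    """Sample the identification points of a discrete excursion.

    e: list of heights e[0..n-1] on a grid of mesh dt.  As in the text:
    the number of points is Poisson(area under e), the time coordinate
    has density e(u)/area, and the height is uniform on [0, e(u)].
    """
    area = sum(e) * dt
    # Poisson(area) by inversion of the cumulative distribution
    k, p = 0, math.exp(-area)
    cdf, target = p, rng.random()
    while cdf < target:
        k += 1
        p *= area / k
        cdf += p
    # cumulative areas, for sampling the time coordinate
    cum, s = [], 0.0
    for h in e:
        s += h * dt
        cum.append(s)
    pairs = []
    for _ in range(k):
        i = bisect.bisect_left(cum, rng.random() * area)
        u = i * dt                     # time coordinate (left endpoint)
        v = rng.random() * e[i]        # height, uniform on [0, e(u)]
        pairs.append((u, v))
    return pairs
```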

With this notation in place, we are in a position to write the limit of the largest connected component, say $\mathcal{C}_1^n$, as $\mathcal{M}^{(C_1)}$, where $C_1$ has the distribution of the length of the longest excursion of the parabolic Brownian motion (5.18) reflected at its minimum. Moreover, the longest excursion, conditioned to have length $C_1$, is distributed as a tilted excursion $\tilde{e}$ of length $C_1$. The following convergence is a simplified version of [Reference Addario-Berry, Broutin and Goldschmidt2, Theorem 2]. As $n\to \infty$,

(5.20) \begin{equation} \left (n^{-2/3} C_1^n,\left (V(\mathcal{C}_1^n),n^{-1/3} d_{\mathcal{C}_1^n}\right )\right )\longrightarrow \left (C_1,\left (\mathcal{M},d_{\mathcal{M}}\right )\right ), \end{equation}

in distribution, where, conditional on $C_1$, $\mathcal{M}\,{\buildrel (d) \over =}\,\mathcal{M}^{(C_1)}$. The convergence of the associated stationary probability measures, say $\pi ^n$, was not directly proven in [Reference Croydon16], although most of the work in this direction has been done; see in particular [Reference Croydon16, Lemma 6.3]. Moreover, it was shown in [Reference Croydon16] that the discrete-time simple random walks $X^{\mathcal{C}_1^n}$ on $\mathcal{C}_1^n$, started from a distinguished vertex $\rho ^n$, satisfy a distributional convergence of the form

(5.21) \begin{equation} \left (n^{-1/3} X_{\lfloor n t \rfloor }^{\mathcal{C}_1^n}\right )_{t\ge 0}\to \left (X_t^{\mathcal{M}}\right )_{t\ge 0}, \end{equation}

where $X^{\mathcal{M}}$ is a diffusion on $\mathcal{M}$, started from a distinguished point $\rho \in \mathcal{M}$. To describe the scaling limit of the associated random walks, the process on $\mathcal{M}$ is built in terms of a “fused” resistance form, i.e. viewing $\mathcal{T}_{\tilde{e}}$ as an electrical network equipped with the natural resistance form $(\mathcal{E}_{\mathcal{T}_{\tilde{e}}},\mathcal{F}_{\mathcal{T}_{\tilde{e}}})$, and then “fusing” the disjoint pairs of vertices $\{u_i,v_i\}_{i=1}^{J}$. (Note that $J$ is a random variable.) For a concise construction, we follow the two steps just described. The unique resistance form $(\mathcal{E}_{\mathcal{T}_{\tilde{e}}},\mathcal{F}_{\mathcal{T}_{\tilde{e}}})$ on $\mathcal{T}_{\tilde{e}}$ satisfies

\begin{equation*} d_{\mathcal {T}_{\tilde {e}}}^{-1}(x,y)=\inf \{\mathcal {E}_{\mathcal {T}_{\tilde {e}}}(f,f)\;:\;f\in \mathcal {F}_{\mathcal {T}_{\tilde {e}}},f(x)=0,f(y)=1\}. \end{equation*}

Moreover, the result in [Reference Kigami30, Theorem 5.4] yields that $(\mathcal{E}_{\mathcal{T}_{\tilde{e}}},\mathcal{F}_{\mathcal{T}_{\tilde{e}}})$ is a regular Dirichlet form on $L^2(\mathcal{T}_{\tilde{e}},\pi ^{\tilde{e}})$ . Recall that the canonical projection from $\mathcal{T}_{\tilde{e}}$ into the quotient space $\mathcal{T}_{\tilde{e}}/\sim _{E_{\mathcal{P}}}$ was denoted by $p_{\tilde{e},\mathcal{P}}$ . Then, define

(5.22) \begin{equation} \mathcal{E}_{\mathcal{M}}(f,f)\;:\!=\;\mathcal{E}_{\mathcal{T}_{\tilde{e}}}(f\circ p_{\tilde{e},\mathcal{P}},f\circ p_{\tilde{e},\mathcal{P}}), \qquad \forall f\in \mathcal{F}_{\mathcal{M}}, \end{equation}

where $\mathcal{F}_{\mathcal{M}}\;:\!=\;\{f\;:\;f\circ p_{\tilde{e},\mathcal{P}}\in \mathcal{F}_{\mathcal{T}_{\tilde{e}}}\}=\{f\circ p_{\tilde{e},\mathcal{P}}^{-1}\;:\;f\in \mathcal{F}_{\mathcal{T}_{\tilde{e}}},\ f|_{\{u_i,v_i\}} \textrm{ constant, }\forall i=1,\ldots,J\}$. The form $(\mathcal{E}_{\mathcal{M}},\mathcal{F}_{\mathcal{M}})$ is the “fused” form, and [Reference Croydon16, Proposition 2.1] ensures it is indeed a resistance form on $\mathcal{M}$. Eventually, it is shown that $(\mathcal{E}_{\mathcal{M}},\mathcal{F}_{\mathcal{M}})$ is a regular Dirichlet form on $L^2(\mathcal{M},\pi _{\tilde{e},\mathcal{P}})$. Given that $(\mathcal{E}_{\mathcal{M}},\mathcal{F}_{\mathcal{M}})$ is a resistance form, the function $R_{\mathcal{M}}$ defined by setting $R_{\mathcal{M}}(\hat{x},\hat{y})$ to be equal to the supremum in (3.1) is the associated resistance metric on $\mathcal{M}$, where we have also used the notation $\hat{x}\;:\!=\;p_{\tilde{e},\mathcal{P}}(x)$ for $x\in \mathcal{T}_{\tilde{e}}$. In [Reference Croydon16, Lemma 2.2], it was shown that $R_{\mathcal{M}}$ and $d_{\mathcal{M}}$ are equivalent, i.e.

(5.23) \begin{equation} \frac{d_{\mathcal{M}}(\hat{x},\hat{y})}{(4 J+1)!}\le R_{\mathcal{M}}(\hat{x},\hat{y})\le d_{\mathcal{M}}(\hat{x},\hat{y}), \qquad \forall \hat{x}, \hat{y}\in \mathcal{M}. \end{equation}
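The effect of fusing on resistances is easy to check numerically on small unit-conductance networks. The sketch below (a hand-rolled solver; not the paper's construction) computes the effective resistance via the variational characterization, with fusing implemented by simply relabelling the two vertices to a single one:

```python
def effective_resistance(edges, a, b):
    """Effective resistance between a and b, unit conductances.

    Fix f(a) = 0, f(b) = 1, solve the discrete Laplace equation at the
    remaining vertices, and return R = 1 / E(f, f), where E is the
    Dirichlet energy.  Fusing two vertices = relabelling them.
    """
    nodes = sorted({x for e in edges for x in e})
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    # augmented Laplacian system A f = rhs
    A = [[0.0] * (n + 1) for _ in range(n)]
    for u, v in edges:
        i, j = idx[u], idx[v]
        A[i][i] += 1.0; A[j][j] += 1.0
        A[i][j] -= 1.0; A[j][i] -= 1.0
    # boundary conditions f(a) = 0, f(b) = 1
    for v, val in ((a, 0.0), (b, 1.0)):
        i = idx[v]
        A[i] = [0.0] * (n + 1)
        A[i][i], A[i][n] = 1.0, val
    # Gauss-Jordan elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    f = [A[i][n] / A[i][i] for i in range(n)]
    energy = sum((f[idx[u]] - f[idx[v]]) ** 2 for u, v in edges)
    return 1.0 / energy
```

For the path $0-1-2-3$ with unit edges, $R(0,1)=d(0,1)=1$; relabelling $3\mapsto 0$ (fusing the endpoints) turns the path into a triangle, and $R(0,1)$ drops to $2/3$, in line with $R_{\mathcal{M}}\le d_{\mathcal{M}}$ in (5.23).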

Now, we describe how to generate a connected component on a fixed number of vertices, for which the full details can be found in [Reference Addario-Berry, Broutin and Goldschmidt2, Lemma 6] and [Reference Addario-Berry, Broutin and Goldschmidt2, Lemma 7]. To any such component we can associate a spanning subtree, the depth-first tree, via the following algorithm. The initial step places the vertex with label $1$ in a stack and declares it open. In the next step, vertex $1$ is declared explored and removed from the top of the stack, where we place, in increasing order, the neighbours of $1$ that have not yet been seen (open or explored), declaring them open. We proceed inductively, and the algorithm finishes when the set of open vertices becomes empty. The resulting graph, consisting of the edges between each vertex explored at a given step and the vertices first seen at that step, is clearly a tree. For a connected graph $G$ with $m$ vertices, we refer to this tree as the depth-first tree and write $T(G)$. For $i=0,\ldots,m-1$, let $X(i)$ be the number of vertices seen but not yet fully explored at step $i$. The process $(X(i)\;:\;0\le i\lt m)$ is called the depth-first walk of the graph $G$.

Let $\mathbb{T}_m$ be the set of (ordered) trees labelled by $[m]$. For $T\in \mathbb{T}_m$, its associated depth-first tree is $T$ itself. We call an edge permitted by the depth-first procedure run on $T$ if its addition produces the same depth-first tree. Exactly $X(i)$ edges are permitted at step $i$, and therefore the total number of permitted edges is given by

\begin{equation*} a(T)\;:\!=\;\sum _{i=0}^{m-1} X(i), \end{equation*}

which is called the area of $T$. Given a tree $T$ and a connected graph $G$, $T(G)=T$ if and only if $G$ can be obtained from $T$ by adding a subset of the edges permitted by the depth-first procedure. Therefore, writing $\mathbb{G}_T$ for the set of connected graphs $G$ that satisfy $T(G)=T$, we have that $\{\mathbb{G}_T\;:\;T\in \mathbb{T}_m\}$ is a partition of the connected graphs on $[m]$, and that the cardinality of $\mathbb{G}_T$ is $2^{a(T)}$, since every permitted edge is either included or not.
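The depth-first procedure, walk and area are straightforward to code. A sketch (the tie-breaking convention, i.e. which open vertex sits on top of the stack, is our reading of the description above):

```python
def depth_first_walk(adj):
    """Depth-first tree T(G) and depth-first walk X of a connected
    graph given as {vertex: set of neighbours}.  X[i] is the number of
    vertices seen but not yet explored when the i-th vertex is explored."""
    root = min(adj)
    stack, seen = [root], {root}
    tree_edges, X = [], []
    while stack:
        v = stack.pop()              # v is explored at this step
        X.append(len(stack))         # open vertices, excluding v
        new = sorted(u for u in adj[v] if u not in seen)
        for u in reversed(new):      # smallest label ends up on top
            stack.append(u)
            seen.add(u)
            tree_edges.append((v, u))
    return tree_edges, X

def area(X):
    """a(T) = sum_i X(i), the number of permitted edges."""
    return sum(X)
```

For the triangle on $[3]$, the depth-first tree is the star centred at $1$, $X=(0,1,0)$ and $a(T)=1$: the single permitted edge $\{2,3\}$ accounts for the two graphs (star and triangle) sharing this depth-first tree, in line with $|\mathbb{G}_T|=2^{a(T)}$.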

Back to the question of how to generate a connected component, write $G_m^p$ for the graph with the same distribution as $G(m,p)$ conditioned to be connected. We thus focus on generating $G_m^p$ instead.

Lemma 5.9. (Addario-Berry, Broutin, Goldschmidt [Reference Addario-Berry, Broutin and Goldschmidt2]). Fix $p\in (0,1)$ . Pick a random tree $\tilde{T}_m^p$ that has a “tilted” distribution which is biased in favour of trees with large area. Namely, pick $\tilde{T}_m^p$ in such a way that

\begin{equation*} \mathbb {P}(\tilde {T}_m^p=T)\propto (1-p)^{-a(T)}, \qquad T\in \mathbb {T}_m. \end{equation*}

Use $\rho ^m$ to denote the root of $\tilde{T}_m^p$ . Add to $\tilde{T}_m^p$ each of the $a(\tilde{T}_m^p)$ permitted edges independently with probability $p$ . Call the graph generated $\tilde{G}_m^p$ . Then, $\tilde{G}_m^p$ has the same distribution as $G_m^p$ .

Given $\mathcal{P}_n$ , a random subset of $\mathbb{N}\times \mathbb{N}$ in which every point is contained independently of the others with probability $p$ , we define $\mathcal{P}_n\cap X_n$ as the set of points from $\mathcal{P}_n$ that fall under the graph of $X_n$ , where conditional on $C_1^n$ , $(X_n(i)\;:\;0\le i\lt C_1^n)$ is the depth-first walk of $\tilde{T}_n^p$ in such a way that $|\tilde{T}_n^p|$ has the same distribution as $C_1^n$ . We write

\begin{equation*} \mathcal {P}_n\cap X_n\;:\!=\;\left \{(i,k)\in \mathcal {P}_n\;:\;i\lt C_1^n, k\le X_n(i)\right \}. \end{equation*}

For each point $(i,k)\in \mathcal{P}_n\cap X_n$, let $u^n_{(i,k)}$ be the unique vertex visited by the depth-first walk of $\tilde{T}_n^p$ at time $i$ and $v^n_{(i,k)}$ be the unique vertex lying in the $(X_n(i)-k+2)$-th position of the stack at time $i$. Let $E_{\mathcal{P}_n}=\{(u^n_{(i,k)},v^n_{(i,k)})\;:\;(i,k)\in \mathcal{P}_n\cap X_n\}$ be the finite set of the pairs of vertices to be joined by an edge. Lemma 5.9 implies that

\begin{equation*} \mathcal {C}_1^n\overset {(d)}=(V(\tilde {T}_n^p),E(\tilde {T}_n^p)\cup E_{\mathcal {P}_n}), \end{equation*}

which permits us to suppose that the objects $\tilde{T}_n^p$, $\mathcal{P}_n$ and $\mathcal{C}_1^n$ belong to the same probability space, on which the preceding equality holds almost surely as ordered graphs labelled by $[C_1^n]$. (Note that in this case $|\tilde{T}_n^p|=C_1^n$ almost surely.)

Based on [Reference Addario-Berry, Broutin and Goldschmidt2, Lemma 19], for the random set of points introduced above, we have that

\begin{equation*} \{(n^{-2/3} i, n^{-1/3} k)\;:\;(i,k)\in \mathcal {P}_n\}\cap (n^{-1/3} X_n(\lfloor n^{2/3} \cdot \rfloor ))\to \mathcal {P}\cap \tilde {e}, \end{equation*}
\begin{equation*} (n^{-1/3} X_n(\lfloor n^{2/3} t\rfloor ))_{t\in [0,1]}\to (\tilde {e}(t))_{t\in [0,1]}, \end{equation*}

simultaneously in distribution, where $\tilde{e}$ is defined as the tilted excursion of length $C_1$ and $\mathcal{P}$ as the Poisson process whose intensity measure is the Lebesgue measure on $\mathbb{R}_{+}^2$, independent of $\tilde{e}$. The first convergence is with respect to the Hausdorff distance between compact sets. Since the random variables involved are integer-valued, there exists a random variable $N$ such that for all $n\ge N$,

\begin{equation*} \left |\{(n^{-2/3} i, n^{-1/3} k)\;:\;(i,k)\in \mathcal {P}_n\}\cap (n^{-1/3} X_n(\lfloor n^{2/3} \cdot \rfloor ))\right |=\left |\mathcal {P}\cap \tilde {e}\right |=J, \end{equation*}

for some $J\ge 0$. Although we will not present the full details here, [Reference Addario-Berry, Broutin and Goldschmidt2] (see the proof of Lemma 2.2 from that article) gives the distributional convergence:

(5.24) \begin{equation} \left (V(\mathcal{C}_1^n),n^{-1/3} d_{\mathcal{C}_1^n},\rho ^n,\{u_i^n,v_i^n\}_{i=1}^{J}\right )\longrightarrow \left (\mathcal{M},d_{\mathcal{M}},\rho,\{u_i,v_i\}_{i=1}^{J}\right ), \end{equation}

in the pointed extended Gromov-Hausdorff topology with marked points (add $\sum _{i=1}^J (d_Z(\phi (u_i^n),\phi '(u_i))+d_Z(\phi (v_i^n),\phi '(v_i)))$ inside the infimum in the definition of $d_{\mathbb{K}}$ in Section 2), where $\{u_i^n,v_i^n\}_{i=1}^{J}$, $n\ge 1$, are the elements of $\{(u^n_{(\hat{i},\hat{k})},v^n_{(\hat{i},\hat{k})})\;:\;(\hat{i},\hat{k})\in \{(n^{-2/3} i, n^{-1/3} k)\;:\;(i,k)\in \mathcal{P}_n\}\cap (n^{-1/3} X_n(\lfloor n^{2/3} \cdot \rfloor ))\}$ that have been identified, and $\{u_i,v_i\}_{i=1}^{J} \in \{(u_{(t,x)},v_{(t,x)})\;:\;(t,x)\in \mathcal{P}\cap \tilde{e}\}$ are distinct. In [Reference Croydon18], it was shown that “fusing” resistance forms at disjoint pairs of points is continuous with respect to the Gromov-Hausdorff topology. Consequently, given (5.24), which provides the convergence of the spaces and marked points, we apply [Reference Croydon18, Proposition 8.4] to obtain that Assumption 2 is verified:

(5.25) \begin{equation} \left (\left (V(\mathcal{C}_1^n),n^{-1/3} R_{\mathcal{C}_1^n},\rho ^n\right ),\pi ^n,\left (n^{-1/3} X_{\lfloor n t \rfloor }^{\mathcal{C}_1^n}\right )_{t\ge 0}\right )\longrightarrow \left (\left (\mathcal{M},R_{\mathcal{M}},\rho \right ),\pi ^{\mathcal{M}},X^{\mathcal{M}}\right ), \end{equation}

where $R_{\mathcal{C}_1^n}$ (see (1.3)) and $R_{\mathcal{M}}$ (see (5.23)) are the resistance metrics on $\mathcal{C}_1^n$ and $\mathcal{M}$ respectively.
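To make the role of the resistance metric concrete: for a finite graph with unit conductances, the quantity $R(y,z)$ appearing in (1.3) coincides with the effective resistance of the associated electrical network, which can be computed by solving a grounded Laplacian system. The following self-contained sketch (our own illustration, not notation from the paper) does this for small graphs:

```python
def effective_resistance(edges, a, b):
    # Effective resistance between nodes a and b of a graph with unit
    # conductances: solve the Laplacian system L v = (1_a - 1_b) with node b
    # grounded (v[b] = 0); then R(a, b) = v[a].
    nodes = sorted({u for e in edges for u in e})
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        i, j = idx[u], idx[v]
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    rhs = [0.0] * n
    rhs[idx[a]] = 1.0
    rhs[idx[b]] = -1.0
    # Ground node b: replace its row by v[b] = 0 to remove the null space.
    g = idx[b]
    L[g] = [1.0 if k == g else 0.0 for k in range(n)]
    rhs[g] = 0.0
    # Plain Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(L[r][col]))
        L[col], L[piv] = L[piv], L[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = L[r][col] / L[col][col]
            rhs[r] -= f * rhs[col]
            for c in range(col, n):
                L[r][c] -= f * L[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = rhs[r] - sum(L[r][c] * v[c] for c in range(r + 1, n))
        v[r] = s / L[r][r]
    return v[idx[a]]
```

For the triangle, a unit edge in parallel with a two-edge path gives $R=2/3$, matching the parallel law $(1^{-1}+2^{-1})^{-1}$.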

In what follows we give a detailed description of how the results proved in Section 5.1 can be transferred. We denote by $\tilde{V}_m=(\tilde{V}_m(i)\;:\;0\le i\le 2 m)$ the contour process of $\tilde{T}_m^p$ , and by $\tilde{v}_m=((m/\zeta )^{-1/2} \tilde{V}_m(2 (m/\zeta ) s)\;:\;0\le s\le \zeta )$ its rescaled contour process of length $\zeta$ . We start by showing that, for some $\alpha \gt 0$ , the sequence $||\tilde{v}_m||_{H_{\alpha }}$ of Hölder norms is tight.

Lemma 5.10. Suppose that $p = p(m)$ is chosen in such a way that $m p^{2/3}\to \zeta$ , as $m\to \infty$ . There exists $\alpha \gt 0$ such that

(5.26) \begin{equation} \lim _{M\to \infty } \liminf _{m\to \infty } \mathbb{P}\left (\sup _{0\le s\neq t\le 1}\frac{|\tilde{v}_m(s)-\tilde{v}_m(t)|}{|t-s|^{\alpha }}\le M\right )=1. \end{equation}

Proof. It will suffice to prove the lemma in the case $\zeta = 1$ , since the general result follows by Brownian scaling. Let $T_m$ be a uniform random tree on $[m]$ . Write $V_m$ and $v_m$ for its associated contour process and normalized contour process respectively. We note here that Theorem 5.3 is stated in the more general framework of size-conditioned Galton-Watson trees with critical offspring distribution that has finite variance and exponential moments. If the offspring distribution is Poisson with mean $1$ , then the conditioned tree is a uniformly distributed labelled tree (e.g. [Reference Goldschmidt28, Proposition 2.3]). Then, by Lemma 5.9,

\begin{equation*} \mathbb {P}\left (\sup _{0\le s\neq t\le 1}\frac {|\tilde {v}_m(s)-\tilde {v}_m(t)|}{|t-s|^{\alpha }}\ge M\right )=\frac {\mathbb {E}\left [\unicode {x1D7D9}_{\left \{\sup _{0\le s\neq t\le 1}\frac {|v_m(s)-v_m(t)|}{|t-s|^{\alpha }}\ge M\right \}} (1-p)^{-a(T_m)}\right ]}{\mathbb {E}\left [(1-p)^{-a(T_m)}\right ]}. \end{equation*}

Using the Cauchy-Schwarz inequality, we have that

(5.27) \begin{align} &\mathbb{P}\left (\sup _{s,t\in [0,1]}\frac{|\tilde{v}_m(s)-\tilde{v}_m(t)|}{|t-s|^{\alpha }}\ge M\right ) \nonumber \\[5pt] &\le \frac{\mathbb{P}\left (\sup _{s,t\in [0,1]}\frac{|v_m(s)-v_m(t)|}{|t-s|^{\alpha }}\ge M\right )^{1/2} \left (\mathbb{E}\left [(1-p)^{-2 a(T_m)}\right ]\right )^{1/2}}{\mathbb{E}\left [(1-p)^{-a(T_m)}\right ]}. \end{align}

Since $m p^{2/3}\to 1$ , as $m\to \infty$ , there exists $c\gt 0$ such that $p\le c m^{-3/2}$ , for every $m\ge 1$ . Since $T_m$ is a uniform random tree on $[m]$ , from [Reference Addario-Berry, Broutin and Goldschmidt2, Lemma 14] we can find universal constants $K_1$ , $K_2\gt 0$ such that

(5.28) \begin{equation} \mathbb{E}\left [(1-p)^{-\xi a(T_m)}\right ]\lt K_1 e^{K_2 c^2 \xi ^2}, \end{equation}

for fixed $\xi \gt 0$ . Recall that $a(T_m)=\sum _{i=0}^{m-1} X_m(i)$ , where $(X_m(i)\;:\;0\le i\le m)$ is the depth-first walk associated with $T_m$ (for convenience we have put $X_m(m)=0$ ). From [Reference Marckert and Mokkadem35, Theorem 3] we know that, as $m\to \infty$

\begin{equation*} (m^{-1/2} X_m(\lfloor m t \rfloor ))_{t\in [0,1]}\to (e(t))_{t\in [0,1]}, \end{equation*}

in distribution in $D([0,1],\mathbb{R}_{+})$ , where $(e(t))_{t\in [0,1]}$ is a normalized Brownian excursion. Writing

\begin{equation*} (1-p)^{-a(T_m)}=(1-p)^{-\sum _{i=0}^{m-1} X_m(i)}=(1-p)^{-m^{3/2} \int _{0}^{1} m^{-1/2} X_m(\lfloor m t \rfloor ) dt} \end{equation*}

and using that the sequence $(1-p)^{-a(T_m)}$ is uniformly integrable, we deduce that

(5.29) \begin{equation} \mathbb{E}\left [(1-p)^{-a(T_m)}\right ]\to \mathbb{E}\left [\exp \left (\int _{0}^{1} e(u) du\right )\right ]\gt 0, \end{equation}

as $m\to \infty$ . Thus, for $m$ large enough, $ (\mathbb{E} [(1-p)^{-2 a(T_m)} ] )^{1/2}/\mathbb{E} [(1-p)^{-a(T_m)} ]$ is bounded by a universal constant, see (5.28) and (5.29). To conclude, taking first $m\to \infty$ and then $M\to \infty$ , the desired result follows from (5.27) and Theorem 5.3.

It is now immediate to check that the local times $(L^m_t(x))_{x\in V(G_m^p),t\ge 0}$ of the corresponding simple random walk on $G_m^p$ are equicontinuous under the annealed law. The proof of the next lemma relies heavily on the methods used to establish Proposition 5.5, and therefore we make use of the parts that remain unchanged, presenting only what differs.

Recall that the graph generated by the process of adding $\textrm{Bin}(a(\tilde{T}_m^p),p)$ number of surplus edges to $\tilde{T}_m^p$ was denoted by $\tilde{G}_m^p$ . We view $\tilde{G}_m^p$ as the metric space $\tilde{T}_m^p$ that includes the edges (of length 1) that have been added and we equip it with the resistance metric $R_{\tilde{G}_m^p}$ defined by (1.3).

Lemma 5.11. Suppose that $p = p(m)$ is chosen in such a way that $m p^{2/3}\to \zeta$ , as $m\to \infty$ . For every $\varepsilon \gt 0$ and $T\gt 0$ ,

\begin{equation*} \lim _{\delta \rightarrow 0} \limsup _{m\to \infty } \mathbb {P}_{\rho ^m} \Bigg (\sup _{\substack {y,z\in V(G_m^p): \\ m^{-1/2} R_{G_m^p}(y,z)\lt \delta }} \sup _{t\in [0,T]} m^{-1/2} \big | L_{m^{3/2} t}^m(y)-L_{m^{3/2} t}^m(z) \big | \ge \varepsilon \, \bigg | \, s(G_m^p)=s \Bigg )=0. \end{equation*}

Proof. It will suffice to prove the lemma in the case $\zeta = 1$ , since the general result follows by Brownian scaling. From Lemma 5.10, given $t_1$ , $t_2\in [0,1]$ , with $2 m t_1$ and $2 m t_2$ integers, such that $p_{\tilde{v}_m}(t_1)=y$ and $p_{\tilde{v}_m}(t_2)=z$ , there exist $M\gt 0$ and $\alpha \gt 0$ such that

(5.30) \begin{equation} d_{\tilde{v}_m}(t_1,t_2)=\tilde{v}_m(t_1)+\tilde{v}_m(t_2)-2 \min _{r\in [t_1\wedge t_2,t_1\vee t_2]} \tilde{v}_m(r)\le 2 M |t_1-t_2|^{\alpha } \end{equation}

with probability arbitrarily close to 1, cf. (5.7). Conditioned on $\tilde{v}_m$ satisfying (5.30), the resistance between $y$ and $z$ in $\tilde{G}_m^p$ is at most the total length of the path between $y$ and $z$ in $\tilde{T}_m^p$ . Therefore,

\begin{align*} R_{\tilde{G}_m^p}(y,z)\le d_{\tilde{T}_m^p}(y,z)=m^{1/2} d_{\tilde{v}_m}(t_1,t_2)\le 2 M m^{1/2} |t_1-t_2|^{\alpha }, \end{align*}

which indicates that, on the event that (5.30) holds, the maximum resistance of $\tilde{G}_m^p$ is bounded above by a multiple of $m^{1/2}$ . More specifically, $r(\tilde{G}_m^p)\le M m^{1/2}$ . Moreover, $m(\tilde{G}_m^p)=2 E(\tilde{G}_m^p)=2 (s(\tilde{G}_m^p)+m-1)$ . An application of Theorem 5.2, which was originally formulated for the local times of random walks on weighted graphs in terms of the resistance metric, yields

\begin{equation*} \mathbb {E}_{\rho ^m}^{\tilde {G}_m^p} \left [\left |\left |m^{-1/2} \left (L_{m^{3/2} \cdot }^m(y)-L_{m^{3/2} \cdot }^m(z) \right )\right |\right |_{\infty,[0,T]}^p\bigg | s(\tilde {G}_m^p)=s\right ] \le \tilde {c}_5 \left |t_1-t_2\right |^{\alpha p/2}, \end{equation*}

conditional on $\tilde{v}_m$ satisfying (5.30), for any fixed $p\ge 2$ , cf. (5.8). Since the discrete local times process is interpolated linearly between the integer time points $2 m t_1$ and $2 m t_2$ , the statement is also valid for every $t_1$ , $t_2\in [0,1]$ . The rest of the proof is finished in the manner of Proposition 5.5, and therefore we omit it.

For notational simplicity, the next result is stated for the largest connected component in the critical window. In fact, it holds for the family of $i$ -th largest connected components, $i\ge 1$ . As before, let us denote by $\mathcal{C}_1^n$ the largest connected component of $G(n,n^{-1}+\lambda n^{-4/3})$ , and by $(L_t^n(x))_{x\in V(\mathcal{C}_1^n),t\ge 0}$ the local times of the simple random walk on $\mathcal{C}_1^n$ .

Proposition 5.12. For every $\varepsilon \gt 0$ and $T\gt 0$ ,

\begin{equation*} \lim _{\delta \rightarrow 0} \limsup _{n\to \infty } \mathbb {P}_{\rho ^n} \left (\sup _{\substack {y,z\in V(\mathcal {C}_1^n): \\ n^{-1/3} R_{\mathcal {C}_1^n}(y,z)\lt \delta }} \sup _{t\in [0,T]} n^{-1/3} |L_{n t}^n(y)-L_{n t}^n(z)|\ge \varepsilon \right )=0. \end{equation*}

Proof. Fix $\varepsilon \gt 0$ , $\delta \gt 0$ and $T\gt 0$ . In the random graph $G(n,p)$ , conditional on $C_1^n$ ,

\begin{equation*} \mathcal {C}_1^n\overset {(d)}=G^p_{C_1^n}, \end{equation*}

where as above $p=n^{-1}+\lambda n^{-4/3}$ , for fixed $\lambda \in \mathbb{R}$ . Note that $n p\to 1$ , as $n\to \infty$ . By (5.20) and Skorohod’s representation theorem, there exists a probability space carrying random variables $\tilde{C}_1^n$ , $\tilde{\mathcal{C}}_1^n$ , $n\ge 1$ , and $\tilde{C}_1$ , $\tilde{\mathcal{M}}$ , such that $(\tilde{C}_1^n,\tilde{\mathcal{C}}_1^n)\overset{(d)}=(C_1^n,\mathcal{C}_1^n)$ and $(\tilde{C}_1,\tilde{\mathcal{M}})\overset{(d)}=(C_1,\mathcal{M})$ , with $n^{-2/3} \tilde{C}_1^n\to \tilde{C}_1$ , as $n\to \infty$ , in the almost-sure sense. Conditioning on the size and the surplus of $\mathcal{C}_1^n$ , denote by $B_n^{\delta }$ the measurable event

\begin{equation*} B_n^{\delta }\;:\!=\;\left \{\sup _{\substack {y,z\in V(\mathcal {C}_1^n): \\ n^{-1/3} R_{\mathcal {C}_1^n}(y,z)\lt \delta }} \sup _{t\in [0,T]} n^{-1/3} |L_{n t}^n(y)-L_{n t}^n(z)|\ge \varepsilon \right \}, \end{equation*}

for large enough constants $A$ (which appears in (5.31)) and $S$ (which appears in (5.32)), note that

(5.31) \begin{align} \mathbb{P}_{\rho ^n} (B_n^{\delta }) \le &\int \mathbb{P}_{\rho ^n}^{\mathcal{C}_1^n} (B_n^{\delta }; A^{-1} n^{2/3}\le C_1^n\le A n^{2/3}) P(d \mathcal{C}_1^n) \nonumber \\[5pt] &+P(C_1^n\notin [A^{-1} n^{2/3}, A n^{2/3}]) \end{align}

Since $\tilde{C}_1^n$ and $p=p(n)$ are such that $\tilde{C}_1^n p^{2/3}\to \tilde{C}_1$ , as $n\to \infty$ , in the almost-sure sense, we can bound (5.31) by

(5.32) \begin{align} &\mathbb{P}_{\rho ^n} \left (\sup _{\substack{y,z\in V(G_{C_1^n}^p): \\ (C_1^n)^{-1/2} R_{G_{C_1^n}^p}(y,z)\lt \delta '}} \sup _{t\in [0,T']} (C_1^n)^{-1/2} |L_{(C_1^n)^{3/2} t}^{n}(y)-L_{(C_1^n)^{3/2} t}^{n}(z)|\ge \varepsilon '\bigg |s(G_{C_1^n}^p)\le S\right ) \nonumber \\[5pt] &+\mathbb{P}(C_1^n\notin [A^{-1} n^{2/3}, A n^{2/3}])+\mathbb{P}(S_1^n\gt S), \end{align}

for appropriate $\varepsilon '\gt 0$ , $\delta '\gt 0$ and $T'\gt 0$ that only depend on $\varepsilon$ , $\delta$ , $T$ and $A$ . By Theorem 5.17,

(5.33) \begin{equation} \lim _{A\to \infty } \limsup _{n\to \infty } \mathbb{P}(C_1^n\notin [A^{-1} n^{2/3}, A n^{2/3}])=0. \end{equation}

Furthermore, as $n\to \infty$ ,

\begin{equation*} S_1^n\overset {(d)} \longrightarrow \textrm {Poi}\left (\int _{0}^{\zeta } \tilde {e}(u) du\right ), \end{equation*}

where $\textrm{Poi} (\!\int _{0}^{\zeta } \tilde{e}(t) dt )$ denotes a Poisson random variable with mean the area under a tilted excursion of length $\zeta$ (see [Reference Addario-Berry, Broutin and Goldschmidt2, Corollary 23]). As a consequence, tightness of the process that encodes the surplus of $\mathcal{C}_1^n$ follows:

(5.34) \begin{equation} \lim _{S\to \infty } \limsup _{n\to \infty } \mathbb{P}(S_1^n\gt S)=0. \end{equation}

The proof is finished by combining (5.33) and (5.34) with the equicontinuity result of Lemma 5.11, see (5.32).

5.2.1 Continuity of blanket times of Brownian motion on $\mathcal{M}$

To prove continuity of the $\varepsilon$ -blanket time of the Brownian motion on $\mathcal{M}$ , we first define a $\sigma$ -finite measure on the product space of positive excursions and random set of points of $\mathbb{R}_{+}^2$ . Throughout this section we denote the Lebesgue measure on $\mathbb{R}_{+}^2$ by $\ell$ . We define $\textbf{N}(d(e,\mathcal{P}))$ by setting:

(5.35) \begin{equation} \textbf{N}(d e,|\mathcal{P}|=k, (d x_1,\ldots,d x_k)\in A_1\times \ldots \times A_k)\;:\!=\;\int _{0}^{\infty } f_L(l) \mathbb{N}_l (d e) \frac{e^{-1}}{k!} \prod _{i=1}^{k} \frac{\ell (A_i\cap A_e)}{\ell (A_e)}, \end{equation}

where $f_L(l)\;:\!=\;1/\sqrt{2 \pi l^3}$ , $l\gt 0$ , gives the density of the length of the excursion $e$ (the integral in (5.35) being understood with respect to $f_L(l)\, dl$ ) and $A_e\;:\!=\;\{(t,x)\;:\;0\le x\le e(t)\}$ denotes the area under its graph. In other words, the measure first picks an excursion length according to $f_L$ and, given $L=l$ , it picks a Brownian excursion of that length. Then, independently of $e$ , it chooses a number $k$ of points according to a Poisson distribution with unit mean, and distributes them uniformly on the area under the graph of $e$ .

It turns out that this is an easier measure to work with when applying our scaling argument to prove continuity of the blanket times. Also, as we will see later, the canonical measure $\textbf{N}^{t,\lambda }(d(e,\mathcal{P}))$ is absolutely continuous with respect to $\textbf{N}$ ; the former first picks, at time $t$ , a tilted Brownian excursion $e$ of a randomly chosen length $l$ , and then independently of $e$ chooses a number of points distributed as a Poisson random variable with mean $\int _{0}^{l} e(u) du$ , which as before are distributed uniformly on the area under the graph of $e$ . To fully describe this measure let $\mathbb{N}^{t,\lambda }$ denote the measure (for excursions starting at time $t$ ) associated to $B^{\lambda }_t-\inf _{0\le s\le t} B^{\lambda }_s$ , first introduced by Aldous in [Reference Aldous5]. We note that $\mathbb{N}^{t,\lambda }=\mathbb{N}^{0,\lambda -t}$ and thus it suffices to write $\mathbb{N}^{0,\lambda }$ for every $\lambda \in \mathbb{R}$ . For every measurable subset $A$ ,

\begin{equation*} \mathbb {N}^{0,\lambda }(A)=\int _{0}^{\infty } \mathbb {N}^{0,\lambda }_l(A) f_{L}(l) F_{\lambda }(l) \mathbb {N}_l\left (\exp \left (\int _{0}^{l} e(u) du\right )\right ), \end{equation*}

where $\mathbb{N}_{l}^{0,\lambda }$ is a shorthand for the excursion measure $\mathbb{N}^{0,\lambda }$ , conditioned on the event $\{\tilde{L}=l\}$ and $F_{\lambda }(l)\;:\!=\;\exp (\!-\!1/6 (\lambda ^3+(l-\lambda )^3 ) )$ . For simplicity, let

\begin{equation*} g_{\tilde {L}}(l,\lambda )\;:\!=\;f_{L}(l) F_{\lambda }(l) \mathbb {N}_l\left (\exp \left (\int _{0}^{l} e(u) du\right )\right ). \end{equation*}

In analogy with (5.35) we characterize $\textbf{N}^{t,\lambda }(d(e,\mathcal{P}))$ by setting:

(5.36) \begin{align} &\textbf{N}^{t,\lambda }(d e,|\mathcal{P}|=k, (d x_1,\ldots,d x_k)\in A_1\times \ldots \times A_k) \nonumber \\[5pt] \;:\!=\; &\int _{0}^{\infty } g_{\tilde{L}}(l,\lambda -t) \mathbb{N}^{t,\lambda }_l (d e) \exp \left (-\int _{0}^{l} e(u) du\right )\frac{\left (\int _{0}^{l} e(u) du\right )^k}{k!} \prod _{i=1}^{k} \frac{\ell (A_i\cap A_e)}{\ell (A_e)}. \end{align}

After calculations that involve the use of the Cameron-Martin-Girsanov formula [Reference Revuz and Yor38, Chapter IX, (1.10) Theorem] (for the entirety of those calculations one can consult [Reference Addario-Berry, Broutin and Goldschmidt2, Section 5]), one deduces that

\begin{equation*} \mathbb {N}_l^{t,\lambda }(d e)=\exp \left (\int _{0}^{l} e(u) du\right ) \frac {\mathbb {N}_l(d e)}{\mathbb {N}_l\left (\exp \left (\int _{0}^{l} e(u) du\right )\right )}, \end{equation*}

and as a consequence the following expression for the Radon-Nikodym derivative is valid:

(5.37) \begin{equation} \frac{d \textbf{N}^{t,\lambda }}{d \textbf{N}}=\frac{F_{\lambda -t}(l) \left (\int _{0}^{l} e(u) du\right )^k/k!}{e^{-1}/k!}=\exp \left (1-\frac{1}{6}\left ((\lambda -t)^3+(l-\lambda +t)^3\right )\right ) \left (\int _{0}^{l} e(u) du\right )^k. \end{equation}
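To make the first equality in (5.37) explicit, one compares (5.36) with (5.35) factor by factor; a sketch of the intermediate computation (with $\int_0^l e$ abbreviating $\int_0^l e(u)\,du$) reads:

```latex
\frac{d \mathbf{N}^{t,\lambda}}{d \mathbf{N}}
 = \underbrace{\frac{g_{\tilde L}(l,\lambda-t)}{f_L(l)}}_{=\,F_{\lambda-t}(l)\,\mathbb{N}_l\left(e^{\int_0^l e}\right)}
   \cdot
   \underbrace{\frac{\mathbb{N}^{t,\lambda}_l(de)}{\mathbb{N}_l(de)}}_{=\,e^{\int_0^l e}\big/\,\mathbb{N}_l\left(e^{\int_0^l e}\right)}
   \cdot
   \frac{e^{-\int_0^l e}\left(\int_0^l e\right)^k/k!}{e^{-1}/k!}
 = e\, F_{\lambda-t}(l) \left(\int_0^l e\right)^k,
```

after which substituting the definition of $F_{\lambda-t}$ gives the exponential on the right-hand side of (5.37).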

Recall that for every $b\gt 1$ , the mapping $\Theta _b\;:\;E\to E$ is defined by setting

\begin{equation*} \Theta _b(e)(t)\;:\!=\;\sqrt {b} e(t/b), \qquad t\in [0,b\zeta ], \end{equation*}

for every $e\in E$ . As we saw in Subsection 5.1.2, it acts on the real tree coded by $e$ scaling its distance and measure appropriately, see (5.15) and (5.16). Recall the alternative description of the “glued” metric space $\mathcal{M}_{e,\mathcal{P}}$ , where $e$ is a Brownian excursion of length $\zeta$ and $\mathcal{P}$ is a Poisson point process on $\mathbb{R}_{+}^2$ of unit intensity with respect to the Lebesgue measure independent of $e$ . The number $|\mathcal{P}\cap e|$ of vertex identifications is a Poisson random variable with mean $\int _{0}^{\zeta } e(u) du$ . As a result, the number of vertex identifications $|\mathcal{P}\cap \Theta _b(e)|$ has law given by a Poisson distribution with mean

\begin{equation*} \int _{0}^{b \zeta } \sqrt {b} e(u/b) du=b^{3/2} \int _{0}^{\zeta } e(u) du. \end{equation*}

Moreover, conditioned on $|\mathcal{P}\cap e|$ , the coordinates of a point $(u_{(t,x)},v_{(t,x)})$ in $\mathcal{P}\cap e$ are distributed as follows: $u_{(t,x)}$ has density proportional to $e(u)$ and, conditioned on $u_{(t,x)}$ , $v_{(t,x)}$ is uniformly distributed on $[0,e(u_{(t,x)})]$ . Then, conditioned on $|\mathcal{P}\cap \Theta _b(e)|$ , the coordinates of a point $(u^b_{(t,x)},v^b_{(t,x)})$ in $\mathcal{P}\cap \Theta _b(e)$ are such that $u^b_{(t,x)}$ is equal in law to $b u_{(t,x)}$ and, conditioned on $u^b_{(t,x)}$ , $v^b_{(t,x)}$ is uniformly distributed on $[0,\sqrt{b}\, e(u_{(t,x)})]$ . From now on, we use $\Theta _b(e,\mathcal{P})$ to denote the mapping from the product space of positive excursions and sets of points of the upper half plane onto itself that applies $\Theta _b$ to $e$ and repositions the collection of points in $\mathcal{P}$ as described above.
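The scaling identity used above, $\int_0^{b\zeta}\sqrt{b}\,e(u/b)\,du=b^{3/2}\int_0^{\zeta}e(u)\,du$, is a deterministic change of variables valid for any path. As a sanity check, here is a small numerical sketch (the quadratic profile stands in for an excursion and is our own choice, not the paper's):

```python
def scaled_excursion_areas(e, b, n=200000):
    # Midpoint-rule evaluation of both sides of
    #   int_0^{b*zeta} sqrt(b) * e(u/b) du = b^{3/2} * int_0^zeta e(u) du
    # for zeta = 1, illustrating how Theta_b rescales the area under e.
    h_left = b / n
    lhs = sum(b ** 0.5 * e(((i + 0.5) * h_left) / b) for i in range(n)) * h_left
    h_right = 1.0 / n
    rhs = b ** 1.5 * sum(e((i + 0.5) * h_right) for i in range(n)) * h_right
    return lhs, rhs
```

For $e(t)=t(1-t)$ on $[0,1]$ both sides equal $b^{3/2}/6$ up to quadrature error.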

From the definition of the quasi-metric $d_{\mathcal{M}_{e,\mathcal{P}}}$ in (5.19), we have that under the application of $\Theta _b$ , it rescales like $b^{1/2} d_{\mathcal{M}_{e,\mathcal{P}}}$ , a statement that should be understood in accordance with (5.15). Let $\mathcal{L}(\mathcal{T}_e)$ denote the set of leaves of $\mathcal{T}_e$ , that is the set of points $\sigma \in \mathcal{T}_e$ , such that $\mathcal{T}_e\setminus \{\sigma \}$ is connected, i.e. the complement of the set of leaves is the skeleton of $\mathcal{T}_e$ . In particular, $\mathcal{L}(\mathcal{T}_e)$ is uncountable, and $\pi ^e(\mathcal{L}(\mathcal{T}_e))=\zeta$ . Consider the set

\begin{equation*} I=\{\sigma \in \mathcal {L}(\mathcal {T}_e)\;:\;p_{e,\mathcal {P}}(\sigma )\in B\}, \end{equation*}

for a measurable subset $B$ of $\mathcal{M}_{e,\mathcal{P}}$ , where $p_{e,\mathcal{P}}$ is the canonical projection from $\mathcal{T}_e$ to the resulting quotient space after the vertex identifications, made explicit by the equivalence relation $\sim _{E_{\mathcal{P}}}$ . We endowed $\mathcal{M}_{e,\mathcal{P}}$ with the measure $\pi _{e,\mathcal{P}}$ , that is the image measure on $\mathcal{M}_{e,\mathcal{P}}$ of $\pi ^e$ on $\mathcal{T}_e$ by the canonical projection $p_{e,\mathcal{P}}$ of $\mathcal{T}_e$ onto $\mathcal{M}_{e,\mathcal{P}}$ . Then, by definition $\pi _{e,\mathcal{P}}(B)=\pi ^e(I)$ , and consequently $\pi _{\Theta _b(e,\mathcal{P})}(B)=\pi ^{\Theta _b(e)}(I)$ . As we examined before, under the application of $\Theta _b$ , $\pi ^{\Theta _b(e)}$ rescales like $b \pi ^e$ , where once again this should be understood according to (5.16) and the notation that was introduced in the course of its derivation. Finally, since $\mathbb{N}\circ \Theta _b^{-1}=\sqrt{b} \mathbb{N}$ , see (5.9), and using the fact that $\ell (A_i\cap A_e)/\ell (A_e)$ in (5.35) is scale invariant under $\Theta _b$ , we have that

\begin{equation*} \textbf {N}\circ \Theta _b^{-1}=\sqrt {b}\, \textbf {N}. \end{equation*}

Therefore, considering $\textbf{N}$ instead of $\textbf{N}^{t,\lambda }$ is advantageous, as it enjoys the same scaling property as $\mathbb{N}$ .
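Schematically, the claim follows by disintegrating $\mathbf{N}$ over the excursion coordinate: writing $\mathbb{N}(de)=\int_0^\infty f_L(l)\,\mathbb{N}_l(de)$ for the Itô measure and $K(e,\cdot)$ for the point-marking kernel in (5.35) (our shorthand), one has

```latex
\mathbf{N}\circ\Theta_b^{-1}\bigl(de,\,\cdot\,\bigr)
 = \bigl(\mathbb{N}\circ\Theta_b^{-1}\bigr)(de)\,K(e,\cdot)
 = \sqrt{b}\,\mathbb{N}(de)\,K(e,\cdot)
 = \sqrt{b}\,\mathbf{N}\bigl(de,\,\cdot\,\bigr),
```

where the first equality uses the $\Theta_b$-invariance of $K$ (the number of points is Poisson with unit mean regardless of $e$, and $\ell(A_i\cap A_e)/\ell(A_e)$ is scale invariant), and the second uses $\mathbb{N}\circ\Theta_b^{-1}=\sqrt{b}\,\mathbb{N}$, see (5.9).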

We now have all the ingredients to prove continuity of the blanket times of the Brownian motion on $\mathcal{M}$ . We adapt arguments that have already been used in establishing Proposition 5.6. Let $\tau _{\textrm{bl}}^{e,\mathcal{P}}(\varepsilon )$ denote the $\varepsilon$ -blanket time of the Brownian motion $X^{e,\mathcal{P}}$ on $\mathcal{M}_{e,\mathcal{P}}$ started from a distinguished vertex $\bar{\rho }$ , for some $\varepsilon \in (0,1)$ . Integrating the law of $\tau _{\textrm{bl}}^{e,\mathcal{P}}(\varepsilon )$ , $\varepsilon \in (0,1)$ , against the $\sigma$ -finite measure $\textbf{N}$ as in (5.12), and using Fubini and the monotonicity of the blanket times, yields

\begin{equation*} \mathbb {P}_{\bar {\rho }}^{e,\mathcal {P}}\left (\tau _{\textrm {bl}}^{e,\mathcal {P}}(\varepsilon \!-\!)=\tau _{\textrm {bl}}^{e,\mathcal {P}}(\varepsilon \!+\!)\right )=1, \end{equation*}

$\lambda$ -a.e. $\varepsilon$ , $\textbf{N}$ -a.e. $(e,\mathcal{P})$ , where $\mathbb{P}_{\bar{\rho }}^{e,\mathcal{P}}$ denotes the law of $X^{e,\mathcal{P}}$ started from $\bar{\rho }$ . The rest of the argument relies on improving such a statement to hold for every $\varepsilon \in (0,1)$ .

In the transformed “glued” metric space $\mathcal{M}_{\Theta _b(e,\mathcal{P})}$ , the Brownian motion admits $\mathbb{P}_{\bar{\rho }}^{\Theta _b(e,\mathcal{P})}$ -a.s. jointly continuous local times $(\sqrt{b} L_{b^{-3/2} t}(x))_{x\in \mathcal{M}_{e,\mathcal{P}},t\ge 0}$ . This is enough to infer that, for every $\varepsilon \in (0,1)$ and $b\gt 1$ , the continuity of the $\varepsilon$ -blanket time variable of $\mathcal{M}_{e,\mathcal{P}}$ is equivalent (in law) to the continuity of the $b^{-1} \varepsilon$ -blanket time variable of $\mathcal{M}_{\Theta _b(e,\mathcal{P})}$ . Consequently, as in the proof of Proposition 5.6, applying our scaling argument implies

\begin{equation*} \mathbb {P}_{\bar {\rho }}^{e,\mathcal {P}}\left (\tau _{\textrm {bl}}^{e,\mathcal {P}}(\varepsilon \!-\!)=\tau _{\textrm {bl}}^{e,\mathcal {P}}(\varepsilon \!+\!)\right )=1, \end{equation*}

$\textbf{N}$ -a.e. $(e,\mathcal{P})$ . Recall that, conditional on $C_1$ , $\mathcal{M}\,{\buildrel (d) \over =}\,\mathcal{M}^{(C_1)}$ , where $C_1$ is the length of the longest excursion of the process defined in (5.18), which is distributed as a tilted excursion of that length. Then, applying again our scaling argument as in the end of the proof of Proposition 5.6, conditional on $C_1$ , we deduce

\begin{equation*} \mathbb {P}_{\rho }^{\mathcal {M}}\left (\tau _{\textrm {bl}}^{\mathcal {M}}(\varepsilon \!-\!)=\tau _{\textrm {bl}}^{\mathcal {M}}(\varepsilon \!+\!)\right )=1, \end{equation*}

$\textbf{N}_{C_1}$ -a.e. $(e,\mathcal{P})$ , where $\textbf{N}_l$ is the version of $\textbf{N}$ defined in (5.35) conditioned on the event $\{L=l\}$ . Since the canonical measure $\textbf{N}^{0,\lambda }_{C_1}$ is absolutely continuous with respect to $\textbf{N}_{C_1}$ as it was shown in (5.37), the above also yields that conditional on $C_1$ , $\textbf{N}^{0,\lambda }_{C_1}$ -a.e. $(e,\mathcal{P})$ , $\tau _{\textrm{bl}}^{\mathcal{M}}(\varepsilon )$ is continuous at $\varepsilon$ , $\mathbb{P}_{\rho }^{\mathcal{M}}$ -a.s.

Fix $\varepsilon \in (0,1)$ . Here, for a particular real value of $\lambda$ and conditional on $C_1$ ,

(5.38) \begin{equation} \mathbb{P}_{\rho }(\!\cdot\!)\;:\!=\;\int \mathbb{P}_{\rho }^{\mathcal{M}}(\!\cdot\!) \textbf{N}_{C_1}^{0,\lambda }(d(e,\mathcal{P})), \end{equation}

formally defines the annealed measure for suitable events. Given the continuity of $\tau _{\textrm{bl}}^{\mathcal{M}}(\varepsilon )$ at $\varepsilon$ , $\mathbb{P}_{\rho }^{\mathcal{M}}$ -a.s., and Proposition 5.12, the desired annealed convergence follows by applying Theorem 1.2 in exactly the same manner as in the proof of Theorem 5.7 at the end of Subsection 5.1.2. For clarity, we restate Theorem 1.7.

Theorem 5.13. Fix $\varepsilon \in (0,1)$ . If $\tau _{\textrm{bl}}^{n}(\varepsilon )$ is the $\varepsilon$ -blanket time variable of the random walk on $\mathcal{C}_1^n$ , started from its root $\rho ^n$ , then

\begin{equation*} \mathbb {P}_{\rho ^n}\left (n^{-1} \tau _{\textrm {bl}}^n(\varepsilon )\le t\right )\to \mathbb {P}_{\rho }\left (\tau _{\textrm {bl}}^{\mathcal {M}}(\varepsilon )\le t\right ), \end{equation*}

for every $t\ge 0$ , where $\tau _{\textrm{bl}}^{\mathcal{M}}(\varepsilon )\in (0,\infty )$ is the $\varepsilon$ -blanket time variable of the Brownian motion on $\mathcal{M}$ , started from $\rho$ .
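As an illustration of the finite- $n$ quantity $\tau_{\textrm{bl}}^{n}(\varepsilon)$ in Theorem 5.13, the following toy simulation (our own sketch; function names and parameter choices are not from the paper) samples $G(n, n^{-1}+\lambda n^{-4/3})$, extracts the largest component, and runs a simple random walk until every vertex $v$ has been visited at least $\varepsilon\, t\, \pi(v)$ times, where $\pi(v)\propto \deg(v)$ is the stationary measure:

```python
import random

def largest_component_blanket_time(n, lam=0.0, eps=0.1, seed=0, max_steps=10**6):
    # Sample G(n, p) in the critical window p = 1/n + lam * n^(-4/3) and
    # extract the largest connected component (a toy analogue of C_1^n).
    rng = random.Random(seed)
    p = 1.0 / n + lam * n ** (-4.0 / 3.0)
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = [s], [s]
        seen.add(s)
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    comp.append(y)
                    stack.append(y)
        comps.append(comp)
    comp = max(comps, key=len)
    if len(comp) < 2:
        return None  # degenerate sample; no walk to run
    deg = {v: len(adj[v]) for v in comp}
    total_deg = sum(deg.values())
    # Simple random walk from an arbitrary root, counting visits to each vertex.
    visits = {v: 0 for v in comp}
    x = comp[0]
    for t in range(1, max_steps + 1):
        x = rng.choice(adj[x])
        visits[x] += 1
        # eps-blanket condition: every vertex visited at least eps * t * pi(v)
        # times, with pi(v) = deg(v) / total_deg the stationary measure.
        if all(visits[v] >= eps * t * deg[v] / total_deg for v in comp):
            return t
    return None
```

Rescaling the returned time by $n^{-1}$ over many independent samples would approximate the limiting law on the right-hand side of the theorem, though the convergence is slow.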

Acknowledgements

I would like to thank my supervisor Dr David Croydon for suggesting the problem, his support and many useful discussions.

Funding

Supported by EPSRC grant Number EP/HO23364/1.

References

Abraham, R., Delmas, J.-F. and Hoscheit, P. (2013) A note on the Gromov-Hausdorff-Prokhorov distance between (locally) compact metric measure spaces. Electron. J. Probab. 18(14) 1–21.
Addario-Berry, L., Broutin, N. and Goldschmidt, C. (2012) The continuum limit of critical random graphs. Probab. Theory Related Fields 152(3-4) 367–406.
Aldous, D. (1991) Random walk covering of some special trees. J. Math. Anal. Appl. 157(1) 271–283.
Aldous, D. (1993) The continuum random tree. III. Ann. Probab. 21(1) 248–289.
Aldous, D. (1997) Brownian excursions, critical random graphs and the multiplicative coalescent. Ann. Probab. 25(2) 812–854.
Athreya, S., Eckhoff, M. and Winter, A. (2013) Brownian motion on $\Bbb{R}$ -trees. Trans. Amer. Math. Soc. 365(6) 3115–3150.
Barlow, M. T., Ding, J., Nachmias, A. and Peres, Y. (2011) The evolution of the cover time. Combin. Probab. Comput. 20(3) 331–345.
Bhamidi, S., Broutin, N., Sen, S. and Wang, X. Scaling limits of random graph models at criticality: Universality and the basin of attraction of the Erdős-Rényi random graph. Preprint available at arXiv:1411.3417.
Bhamidi, S. and Sen, S. (2020) Geometry of the vacant set left by random walk on random graphs, Wright’s constants, and critical random graphs with prescribed degrees. Random Struct. Algor. 56(3) 676–721. Preprint available at arXiv:1608.07153.
Bhamidi, S., Sen, S. and Wang, X. (2017) Continuum limit of critical inhomogeneous random graphs. Probab. Theory Related Fields 169(1-2) 565–641.
Billingsley, P. (1999) Convergence of Probability Measures, second ed. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc.
Burago, D., Burago, Y. and Ivanov, S. (2001) A Course in Metric Geometry, Graduate Studies in Mathematics, vol. 33. American Mathematical Society.
Croydon, D. A. (2008) Convergence of simple random walks on random discrete trees to Brownian motion on the continuum random tree. Ann. Inst. Henri Poincaré Probab. Stat. 44(6) 987–1019.
Croydon, D. A. (2009) Hausdorff measure of arcs and Brownian motion on Brownian spatial trees. Ann. Probab. 37(3) 946–978.
Croydon, D. A. (2010) Scaling limits for simple random walks on random ordered graph trees. Adv. Appl. Probab. 42(2) 528–558.
Croydon, D. A. (2012) Scaling limit for the random walk on the largest connected component of the critical random graph. Publ. Res. Inst. Math. Sci. 48(2) 279–338.
Croydon, D. A. (2015) Moduli of continuity of local times of random walks on graphs in terms of the resistance metric. Trans. London Math. Soc. 2(1) 57–79.
Croydon, D. A. (2018) Scaling limits of stochastic processes associated with resistance forms. Ann. Inst. Henri Poincaré Probab. Stat. 54(4) 1939–1968.
Croydon, D. A., Hambly, B. M. and Kumagai, T. (2012) Convergence of mixing times for sequences of random walks on finite graphs. Electron. J. Probab. 17(3) 1–32.
Croydon, D. A., Hambly, B. M. and Kumagai, T. (2017) Time-changes of stochastic processes associated with resistance forms. Electron. J. Probab. 22(82) 1–41.
Curien, N. and Kortchemski, I. (2014) Random stable looptrees. Electron. J. Probab. 19(108) 1–35.
Daley, D. J. and Vere-Jones, D. (2008) An Introduction to the Theory of Point Processes: General Theory and Structure, second ed. Probability and its Applications (New York). Springer.
Ding, J., Lee, J. R. and Peres, Y. (2012) Cover times, blanket times, and majorizing measures. Ann. Math. 175(3) 1409–1471.
Duquesne, T. and Le Gall, J.-F. (2005) Probabilistic and fractal aspects of Lévy trees. Probab. Theory Related Fields 131(4) 553–603.
Erdős, P. and Rényi, A. (1960) On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 17–61.
Fukushima, M., Oshima, Y. and Takeda, M. (2011) Dirichlet Forms and Symmetric Markov Processes, De Gruyter Studies in Mathematics, 19, extended ed. Walter de Gruyter & Co.
Gittenberger, B. (2003) A note on “State spaces of the snake and its tour—convergence of the discrete snake” [J. Theoret. Probab. 16(4) (2003), 1015–1046] by J.-F. Marckert and A. Mokkadem. J. Theoret. Probab. 16(4) 1063–1067.
Goldschmidt, C. (2016) A short introduction to random trees. Mongolian Math. J. 20 53–72.
Janson, S. and Marckert, J.-F. (2005) Convergence of discrete snakes. J. Theoret. Probab. 18(3) 615–647.
Kigami, J. (1995) Harmonic calculus on limits of networks and its application to dendrites. J. Funct. Anal. 128(1) 48–86.
Kigami, J. (2012) Resistance forms, quasisymmetric maps and heat kernel estimates. Mem. Amer. Math. Soc. 216(1015) vi+132.
Le Gall, J.-F. (1993) The uniform random tree in a Brownian excursion. Probab. Theory Related Fields 96(3) 369–383.
Le Gall, J.-F. (2006) Random real trees. Ann. Fac. Sci. Toulouse Math. 15(1) 35–62.
Levin, D. A. and Peres, Y. (2017) Markov Chains and Mixing Times, second ed. American Mathematical Society. With contributions by Elizabeth L. Wilmer, and a chapter on “Coupling from the past” by James G. Propp and David B. Wilson.
Marckert, J.-F. and Mokkadem, A. (2003) The depth first processes of Galton-Watson trees converge to the same Brownian excursion. Ann. Probab. 31(3) 1655–1678.
Marcus, M. B. and Rosen, J. (1992) Sample path properties of the local times of strongly symmetric Markov processes via Gaussian processes. Ann. Probab. 20(4) 1603–1684.
Marzouk, C. (2020) Scaling limits of discrete snakes with stable branching. Ann. Inst. Henri Poincaré Probab. Stat. 56(1) 502–523.
Revuz, D. and Yor, M. (1999) Continuous Martingales and Brownian Motion, Grundlehren der Mathematischen Wissenschaften, 293, third ed. Springer-Verlag.
Winkler, P. and Zuckerman, D. (1996) Multiple cover time. Random Struct. Algor. 9(4) 403–411.