1. Introduction
Given a random variable $\zeta$ with values in $(0,\infty)$ , the soft-killing inverse first-passage-time problem for Brownian motion consists of finding a function $b \,:\ [0,\infty ) \to \mathbb{R}$ such that the stopping time $\tau_b \, :\!= \, \inf\big\{t \geq 0 \,:\ \int_0^t \textbf{1}_{({-}\infty, b(s))}(X_s) \,\textrm{d} s > U \big\}$ has the same distribution as $\zeta$ , where $(X_t)_{t\geq 0}$ is a Brownian motion and U an independent exponential random variable with rate 1. This means that b needs to satisfy
$$\mathbb{P}_{\mu} \left( \tau_b > t \right) = \mathbb{P}\left( \zeta > t \right) \quad \text{for all } t \geq 0, \tag{1.1}$$
where $\mu$ denotes the initial distribution of the Brownian motion $(X_t)_{t\geq 0}$ . We refer to this problem as the soft-killing (inverse first-passage-time) problem.
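Before turning to the analytic structure, it may help to see the defining mechanism of $\tau_b$ at work: an exponential clock U is run against the occupation time of the path below b. The following Monte Carlo sketch is our own illustration, not part of the paper; the Euler step, the horizon, and the boundary passed in are arbitrary choices.

```python
import numpy as np

def simulate_tau_b(b, x0, T, dt, rng):
    """One realization of tau_b = inf{t >= 0 : int_0^t 1_{X_s < b(s)} ds > U}
    along an Euler discretization of the Brownian path on [0, T].
    Returns T + dt if no killing occurs before T (right-censored)."""
    U = rng.exponential(1.0)            # independent Exp(1) clock
    x, t, occ = x0, 0.0, 0.0            # occ = occupation time below b
    while t < T:
        if x < b(t):                    # path currently below the boundary
            occ += dt
            if occ > U:
                return t                # soft killing happens now
        x += np.sqrt(dt) * rng.standard_normal()
        t += dt
    return T + dt                       # censored: tau_b > T
```

With $b \equiv +\infty$ the occupation time equals t, so $\tau_b = U$ is standard exponential and $\mathbb{P}(\tau_b > 1) = \textrm{e}^{-1}$; this gives a quick consistency check of the discretization.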
The existence of solutions for this soft-killing problem was established in [Reference Ettinger, Evans and Hening9] in the case where the initial distribution of $X_0$ admits a bounded, strictly positive, twice continuously differentiable density with bounded derivatives and the survival function $g\,:\ [0,\infty) \to [0,1]$ , $g (t) \, :\!= \, \mathbb{P}\left( \zeta > t \right)$ , is twice continuously differentiable and fulfills the condition
Note that differentiating the required condition (1.1) for $t> 0$ yields (cf. Lemma 3.1)
which shows that (1.2) is close to being a necessary condition for the existence of continuous solutions. The question of uniqueness for the soft-killing inverse first-passage-time problem for Brownian motion, together with another proof of existence, was answered in [Reference Ettinger, Hening and Wong10] under suitable conditions on the initial distribution and the survival function g. Again, the methods therein are mainly analytic, relying on an associated partial differential equation for which the existence and uniqueness of weak solutions can be shown.
The classical inverse first-passage-time problem for Brownian motion is to find a function b such that the first-passage-time $\tau_b^{\text{fp}} \, :\!= \, \inf\{t > 0 \,:\ X_t < b(t)\}$ has the given distribution of $\zeta$ . This means that b needs to satisfy $\mathbb{P}_{\mu}(\tau_b^{\text{fp}} > t) = \mathbb{P}(\zeta > t)$ for all $t \geq 0$ , where $\mu$ denotes the initial distribution of the Brownian motion $(X_t)_{t\geq 0}$ . The classical problem can be seen as the limit case of the soft-killing problem in the sense that, when U is replaced by an exponential random variable with rate $\lambda >0$ , the stopping time $\tau_b$ becomes the first-passage-time $\tau_b^{\text{fp}}$ of b as $\lambda \to \infty$ . The existence of so-called barrier solutions of the classical problem, which give rise to lower semicontinuous solutions, was shown in [Reference Anulova3]. Much later, the uniqueness of the classical problem for a one-sided boundary was tackled in [Reference Chen, Cheng, Chadam and Saunders5, Reference Cheng, Chen, Chadam and Saunders6] by connecting the problem with a certain partial differential equation. In [Reference Cheng, Chen, Chadam and Saunders6] the authors studied the existence and uniqueness of viscosity solutions for a related variational inequality; in [Reference Chen, Cheng, Chadam and Saunders5] the authors proved that the unique solution to the classical problem can be extracted from the solution of this variational inequality if $\zeta$ has no atoms. A probabilistic approach to general uniqueness is given in [Reference Ekström and Janson8], consisting of a connection to an optimal stopping problem and the solution of a time-discrete version of it. The counterpart, for the classical problem, of the stochastic-ordering approach used in the present work to establish uniqueness and properties of solutions can be found in [Reference Klump and Kolb20].
Sufficient criteria for the continuity of solutions can be found in [Reference Chen, Cheng, Chadam and Saunders5, Reference Ekström and Janson8, Reference Potiron22]. Higher regularity was studied in [Reference Chen, Chadam and Saunders4]. A relation of the classical problem to integral equations for b and g can be found in [Reference Jaimungal, Kreinin and Valov15, Reference Peskir and Shiryaev21]. An overview of inverse first-passage-time problems is given by [Reference Abundo2]. Another related issue is the modification of the classical problem, where b and $\zeta$ are given and we seek the initial distribution $\mu$ such that the distribution of $\tau_b^{\text{fp}}$ equals the distribution of $\zeta$ . This problem has been studied in [Reference Jackson, Kreinin and Zhang14, Reference Jaimungal, Kreinin and Valov16, Reference Jaimungal, Kreinin and Valov17].
Both the classical and soft-killing problems can be seen as special cases of the problem of finding stopping times with given laws as in [Reference Dudley and Gutmann7].
In this contribution we present a new, more probabilistic, proof of existence and uniqueness for the soft-killing inverse first-passage-time problem. In our approach we approximate the marginal distribution of the Brownian motion at a given time, conditioned on the event of no killing up to that time. A major tool for gaining control over our approximation is the usual stochastic ordering. We will show that this approximation allows us to extract a sequence of functions that converges to a continuous solution, and that this automatically proves uniqueness. This direct approach has the following advantages:
-
It allows us to remove the assumption from [Reference Ettinger, Hening and Wong10] that the initial distribution $\mu$ of the Brownian motion has a continuous and strictly positive density contained in the Sobolev space $H^2(\mathbb{R})$ .
-
It leads to a possible numerical approximation of the solution due to the fact that the method is already discrete in time; see, for example, Figure 1. This raises the question of a rigorous study from the numerical point of view in order to obtain reliable results.
-
The method immediately extends to a large class of diffusion processes, as we point out in the last section. Thus, it gives in some sense an answer to the conjecture in [Reference Ettinger, Hening and Wong10]. As one of the main motivations from [Reference Ettinger, Evans and Hening9] is related to financial mathematics, it is relevant to allow larger classes of processes in order to have more flexibility in the modeling process.
-
It relies on comparatively elementary probabilistic arguments and fully avoids methods from the theory of partial differential equations.
The paper is organized as follows. In Section 2 we present our main result regarding existence and uniqueness, sketch the basic idea, and motivate the relevant notation. In Section 3 we carry out the proof of our main result, for which the auxiliary statements are proved in Section 4. In Section 5 we argue that our methods for proving the main result extend to a larger class of Markov processes. In Section 6 we briefly present the Monte Carlo method used to obtain the simulations of Figure 1.
2. Main result and notation
For a probability measure $\mu$ we denote by $\mathbb{P}_{\mu}$ a probability measure under which the process $(X_t)_{t\geq 0}$ is a Brownian motion with initial distribution $\mu$ . As usual we set $\mathbb{P}_x=\mathbb{P}_{\delta_x}$ . If $\mu$ is only a sub-probability measure we define $\mathbb{P}_{\mu}:=\mu(\mathbb{R})\cdot \mathbb{P}_{\mu/\mu(\mathbb{R})}$ . We call a function $g\,:\ [0,\infty) \to [0,1]$ a survival distribution if $g(t) = \mathbb{P}\left( \zeta > t \right)$ for a random variable $\zeta >0$ . For a probability measure $\mu$ and a survival distribution g we denote the set of functions which solve the soft-killing inverse first-passage-time problem for Brownian motion with respect to the survival distribution g and a Brownian motion with initial distribution $\mu$ by
Theorem 2.1. (Existence and uniqueness for continuous solutions.) Let $\mu$ be a probability measure equivalent to the Lebesgue measure. Furthermore, let g be a continuously differentiable survival distribution satisfying (1.2). Then there exists exactly one continuous $b \in \textrm{ifptk} (g,\mu)$ .
At this point let us give an introduction to the basic idea of our approach and our notation. If b is a continuous function and $\mu$ a probability measure then
The integral in (2.1) can be approximated by a Riemann-type sum. For this, let $\delta \, :\!= \, \delta^{(n)} \, :\!= \, 2^{-n}$ , with $n\in \mathbb{N}$ . Taking the mesh grid $[0,t]\cap \delta^{(n)} \mathbb{N}_0$ results in the expression
Using the Markov property, this can be written inductively as
and $Q_{0}^{b,n}(\mu) = \mu$ , where, for $t\geq 0$ and a sub-probability measure $\mu$ , the operator $P_t$ is defined by
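Here $P_t$ acts, by the Markov property, as convolution with a centered Gaussian of variance t (this is also why the closure of the stochastic order under convolutions applies to $P_t$ below). For later experiments it is useful to have a grid version; the following sketch, with illustrative grid and quadrature choices of our own, applies one such step to a density.

```python
import numpy as np

def heat_step(grid, dens, delta):
    """One application of P_delta to a (sub-probability) density sampled on a
    uniform grid: convolve with the N(0, delta) kernel by quadrature."""
    dx = grid[1] - grid[0]
    kernel = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2.0 * delta))
    kernel /= np.sqrt(2.0 * np.pi * delta)
    return kernel @ dens * dx           # density of P_delta(mu) on the grid
```

For instance, one step of size 0.5 applied to the standard normal density should reproduce the N(0, 1.5) density up to quadrature error.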
In the soft-killing problem the function b is unknown, and therefore we have to modify the terms involving b. One natural approach is to replace these values by values chosen such that the relation to g is satisfied exactly at the points $(k\delta^{(n)})_{k=1,\dots, \ell}$ . We choose the sequence $(q^n_k)_{k=1,\ldots , \ell}$ iteratively in the following way:
In order to condense the scheme of (2.2) and (2.3), let us define, for a finite measure $\mu$ , $t>0$ , and $\alpha\in [\textrm{e}^{-t}\mu (\mathbb{R} ) , \mu (\mathbb{R})]$ , the reweighted measure
where $q_\alpha^t(\mu) \, :\!= \, \sup\{q \in \mathbb{R} \,:\ \int_\mathbb{R}\exp\!\big\{{-}t\textbf{1}_{({-}\infty,q)}(x)\big\}\mu(\textrm{d} x) \geq \alpha\}$ with $\sup \emptyset \, :\!= \, -\infty$ is the reweighting threshold.
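For a measure given by a density on a grid the threshold is easy to compute: writing m for the total mass and F for the distribution function, the mass left after reweighting with threshold q is $m - (1-\textrm{e}^{-t})F(q)$ , so $q_\alpha^t(\mu)$ sits where $F(q) = (m-\alpha)/(1-\textrm{e}^{-t})$ . A hedged numerical sketch (the grid discretization is our own choice, not part of the paper):

```python
import numpy as np

def reweight(grid, dens, t, alpha):
    """R_alpha^t applied to a density on a uniform grid: locate the threshold
    q with m - (1 - e^{-t}) F(q) = alpha, then damp the mass strictly below q
    by the factor e^{-t}.  Returns (q, reweighted density)."""
    dx = grid[1] - grid[0]
    cdf = np.cumsum(dens) * dx                    # F on the grid
    level = (cdf[-1] - alpha) / (1.0 - np.exp(-t))
    q = grid[np.searchsorted(cdf, level)]         # threshold q_alpha^t
    out = np.where(grid < q, np.exp(-t) * dens, dens)
    return q, out
```

By construction the reweighted measure has total mass $\alpha$ up to grid error, which is the property used repeatedly in the proofs below.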
Let g be a survival distribution fulfilling (1.2). As an abbreviation we round a timepoint $t\geq 0$ down to $\delta^{(n)} \mathbb{N}_0$ by $\lfloor t\rfloor_n \, :\!= \, \lfloor t/\delta^{(n)}\rfloor\delta^{(n)}$ .
We can write the iterative scheme of (2.2) and (2.3) with the help of the reweighting operation $R^t_\alpha$ and define
which shall serve as our approximation to the unknown $Q_t^b(\mu)$ . Corresponding to the substitution of the value $b(k\delta^{(n)})$ by $q_k^n$ , the function
can be thought of as an approximation of the unknown solution b. On the other hand, (1.3) suggests that the function
is also a possible approximation for b. By the definition of the reweighting thresholds, both $q^{(n)}(t)$ and $a^{(n)}(t)$ are certain quantiles. We will see in Lemma 3.2 that the relation between them is that $q^{(n)}(t)$ is a quantile corresponding to a difference quotient of g, whereas $a^{(n)}(t)$ is a quantile governed by the derivative of g.
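The whole iterative scheme behind $Q_t^{+,n}(\mu)$ and the thresholds $q^n_k$ — alternate a heat step of size $\delta^{(n)}$ with a reweighting down to mass $g(k\delta^{(n)})$ — can be sketched on a grid. Everything concrete below (the grid, $\mu = N(0,1)$ , and the test case $g(t) = \textrm{e}^{-t/2}$ , which satisfies (1.2) since $-g' = g/2 \leq g$ ) is our own illustrative choice:

```python
import numpy as np

def scheme(g, T, n, grid):
    """Iterate Q^{+,n} at times k*delta, delta = 2^{-n}: a heat step P_delta
    followed by the reweighting R^{delta}_{g(k delta)}, starting from the
    standard normal density; record the thresholds q_k^n along the way."""
    delta, dx = 2.0 ** (-n), grid[1] - grid[0]
    dens = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)      # density of mu
    kernel = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * delta))
    kernel /= np.sqrt(2 * np.pi * delta)
    thresholds = []
    for k in range(1, int(round(T / delta)) + 1):
        dens = kernel @ dens * dx                           # heat step P_delta
        cdf = np.cumsum(dens) * dx
        level = (cdf[-1] - g(k * delta)) / (1 - np.exp(-delta))
        q = grid[np.searchsorted(cdf, level)]               # threshold q_k^n
        dens = np.where(grid < q, np.exp(-delta) * dens, dens)
        thresholds.append(q)
    return np.array(thresholds), dens
```

By construction the iterate after k steps has mass $g(k\delta^{(n)})$ up to discretization error, which serves as a consistency check; the recorded thresholds play the role of $q^{(n)}$ on the grid $\delta^{(n)}\mathbb{N}$ .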
We use stochastic ordering in combination with the operations $P_t$ and $R_\alpha^t$ in order to study the relation of $Q_{t}^{+,n}(\mu)$ and $Q_{t}^{b}(\mu)$ . Let $\mathcal{P}$ denote the set of probability measures on $\mathbb{R}$ . For any two measures $\mu, \nu \in \mathcal{P}$ we say $\mu$ is dominated by $\nu$ in the usual stochastic order, and write $\mu \preceq_{\operatorname{st}} \nu$ , if $\mu (({-}\infty ,c]) \geq \nu(({-}\infty,c])$ for all $c \in \mathbb{R}$ . This concept extends naturally to finite measures of equal total mass $\mu(\mathbb{R}) = \nu(\mathbb{R})$ , so the notation does not presuppose that the measures involved are probability measures unless mentioned otherwise. Since the usual stochastic order is closed under convolutions (see [Reference Shaked and Shanthikumar23, Theorem 1.A.3]), the operator $P_t$ preserves this order relation, i.e. if $\mu \preceq_{\operatorname{st}} \nu$ , then
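Numerically, $\mu \preceq_{\operatorname{st}} \nu$ is just a pointwise comparison of distribution functions. A small sketch of our own, for measures of equal mass represented by densities on a common grid:

```python
import numpy as np

def st_dominated(grid, dens_mu, dens_nu):
    """True if mu <=_st nu, i.e. F_mu(c) >= F_nu(c) at every grid point
    (measures of equal total mass assumed; small tolerance for quadrature)."""
    dx = grid[1] - grid[0]
    return bool(np.all(np.cumsum(dens_mu) * dx >= np.cumsum(dens_nu) * dx - 1e-9))
```

For example, N(0,1) is dominated by its shift N(1,1) in this order, but not conversely.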
With the uniqueness at hand, the statements used to prove the main result can be summarized as follows.
Theorem 2.2. Let $\mu$ be a probability measure equivalent to the Lebesgue measure. Furthermore, let g be a continuously differentiable survival distribution satisfying (1.2), and $b\in \textrm{ifptk} (g,\mu)$ the unique continuous solution. For fixed $t>0$ , $Q_t^{+,n}(\mu) \searrow \mathbb{P}_{\mu} \left( X_t \in \cdot\,, \tau_b > t \right)$ as $n\to\infty$ in the sense of weak convergence, where $\searrow$ refers to the usual stochastic order. Additionally, on compact intervals $q^{(n)}$ and $a^{(n)}$ converge uniformly to b as $n\to\infty$ , with $a^{(n)} \searrow b$ .
3. Proofs of Theorems 2.1 and 2.2
We begin with the following elementary statement from [Reference Ettinger, Hening and Wong10, Lemma 4.2], which shows that, under appropriate conditions, we can recover the boundary function from $Q_t^b(\mu)$ and g.
Lemma 3.1. ([Reference Ettinger, Hening and Wong10].) Assume that $b \,:\ [0,\infty ) \to \mathbb{R}$ is continuous, $\mu \in \mathcal{P}$ has no atoms, and g is a differentiable survival distribution such that $b \in \textrm{ifptk}(g, \mu)$ . Then $-g'(t) = Q_t^b(\mu)(({-}\infty, b(t)))$ for all $t\geq 0$ .
With Lemma 3.1 in mind, as a first step we establish the correct behavior of the mass of $Q_t^{+,n}(\mu)$ below $q^{(n)}(t)$ , which will later lead to the convergence of $q^{(n)}(t)$ .
Lemma 3.2. Let g be a continuously differentiable survival distribution fulfilling (1.2). As $n\to\infty$ , $Q_t^{+,n}(\mu)(({-}\infty, q^{(n)}(t))) \to -g'(t)$ uniformly in $t\in [0,T]$ .
We first want to make the following observation regarding the reweighting operator $R_\alpha^t$ . If $\mu$ is a non-atomic sub-probability measure, $R_\alpha^t(\mu)$ is again non-atomic and we have $R_\alpha^t(\mu)(\mathbb{R}) = \alpha$ and
Proof of Lemma 3.2. Recall the notation $\delta \, :\!= \, \delta^{(n)}$ . We set $D \, :\!= \, \cup_{n\in\mathbb{N}} D_n$ with $D_n \, :\!= \, \{k\delta^{(n)} \,:\ k\in \mathbb{N}_0\}$ . Recall the definition of $Q_t^{+,n}(\mu)$ from (2.5). In view of (3.1) we have, for $t>0$ ,
Since g fulfills (1.2), we have $-g' \leq g$ . Furthermore, $g'$ is uniformly continuous on $[0, T]$ . For $\varepsilon > 0$ let n be large enough that $\vert u-r \vert < \delta^{(n)}$ implies $\vert g'(u) -g'(r) \vert \leq \varepsilon$ . We can deduce by the mean value theorem and the elementary inequality $\vert 1 - h / (\textrm{e}^{h}-1) \vert \leq h$ for $h>0$ that
for $t\in [0,T]$ . Letting $n\to\infty$ yields the statement, since $\varepsilon$ can be chosen arbitrarily small.
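As a side note, the elementary inequality $\vert 1 - h/(\textrm{e}^{h}-1) \vert \leq h$ used in this proof reduces, after clearing denominators (and using $\textrm{e}^h - 1 \geq h$ for nonnegativity), to $1 - \textrm{e}^{-h} \leq h$ ; a quick numerical check:

```python
import numpy as np

# Check |1 - h/(e^h - 1)| <= h on a range of h > 0; np.expm1 evaluates
# e^h - 1 without cancellation for small h.
h = np.linspace(1e-8, 10.0, 100_000)
gap = np.abs(1.0 - h / np.expm1(h))
assert np.all(gap <= h)
```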
Now we shift our attention to the relation between the discretized measures $Q_t^{+,n}(\mu)$ from (2.5) and the measure $Q_{t}^b(\mu)$ from (2.1).
Lemma 3.3. Let $\mu$ be a probability measure and g a survival distribution fulfilling (1.2). Then, for $b\in \textrm{ifptk}(g, \mu)$ , $Q_{t}^b(\mu) \preceq_{\operatorname{st}} Q^{+,n+1}_{t}(\mu) \preceq_{\operatorname{st}} Q^{+,n}_t(\mu)$ for $t\geq 0$ .
In order to prove Lemma 3.3 we have to take into account the effect of $P_t$ and $R_\alpha^t$ on the usual stochastic order. We already mentioned in (2.8) that $P_t$ preserves the order. Regarding $R_\alpha^t$ , we make use of the following properties, the proofs of which can be found in Section 4.
Lemma 3.4. Let $\mu, \nu$ be finite measures with $\mu(\mathbb{R}) = \nu(\mathbb{R})$ and $\mu \preceq_{\operatorname{st}} \nu$ . Let $t > 0$ and $\alpha \in [\textrm{e}^{-t}\mu(\mathbb{R}), \mu(\mathbb{R})]$ , and assume that $R_\alpha^t(\mu)(\mathbb{R}) = R_\alpha^t(\nu)(\mathbb{R})$ . Then $R_\alpha^t(\mu) \preceq_{\operatorname{st}} R_\alpha^t(\nu)$ .
Lemma 3.5. Let $t,s,u,v > 0$ , $\mu$ be a finite measure, and $\beta \in [\textrm{e}^{-t}\mu(\mathbb{R}), \mu(\mathbb{R})]$ , $\alpha \in [\textrm{e}^{-s}\beta, \beta]$ . Then $R_\alpha^s \circ P_u \circ R_\beta^t \circ P_v(\mu) \preceq_{\operatorname{st}} R_\alpha^{s+t} \circ P_{u+v}(\mu)$ .
Proof of Lemma 3.3. In the proof we drop the dependency on b in the notation and write $Q_t(\mu) = Q_t^b(\mu)$ . Furthermore, we write
As a preparatory step we claim that, for $t \geq s > 0$ ,
We abbreviate $\delta = \delta^{(n)}$ , write $q \, :\!= \, q_{g(t)}^{t-s}(P_{t-s} Q_s(\mu))$ , and let $c \geq q$ . Then
by the Markov property. Since $R_{g(t)}^{t-s}(P_{t-s}Q_s(\mu))(\mathbb{R}) = g(t) = Q_{t}(\mu)(\mathbb{R})$ , it follows that $R_{g(t)}^{t-s}(P_{t-s}Q_s(\mu))(({-}\infty, c]) \leq Q_{t}(\mu)(({-}\infty , c])$ . Now let $c < q$ . Then
This shows that $Q_{t}(\mu) \preceq_{\operatorname{st}} R_{g(t)}^{t-s}(P_{t-s}Q_s(\mu))$ .
As the next step we abbreviate $S^{+,n}_k(\mu) \, :\!= \, R^{\delta^{(n)}}_{g(k\delta^{(n)})} \circ P_{\delta^{(n)}} \circ \cdots \circ R^{\delta^{(n)}}_{g(\delta^{(n)})} \circ P_{\delta^{(n)}}(\mu)$ for $k\in\mathbb{N}$ , and show by induction over k that
for $k\in\mathbb{N}$ . For this, we assume that $Q_{(k-1)\delta^{(n)}}(\mu) \preceq_{\operatorname{st}} S_{k-1}^{+,n}(\mu)$ .
Thus, we can deduce by (3.2), Lemma 3.4, and (2.8) that
For the second inequality, assume that $S^{+,n+1}_{2(k-1)}(\mu) \preceq_{\operatorname{st}} S^{+,n}_{k-1}(\mu)$ . We then have, using Lemma 3.5, Lemma 3.4, and (2.8),
The desired (3.3) follows inductively, since for $k=0$ all the inequalities are fulfilled.
As the final step, for $t>0$ we now have, by (3.2), (3.3), and (2.8),
Furthermore, by Lemma 3.5, (3.3), and (2.8),
which completes the proof.
Observe that it directly follows from the definition of $R_\alpha^t$ that, if $\alpha \in [\textrm{e}^{-t}\beta\mu(\mathbb{R}), \beta\mu(\mathbb{R})]$ with $\beta >0$ ,
This and the Markov property lead to the following alternative representations for $Q_t^{+,n}(\mu)$ .
Remark 3.1. Let $n\in \mathbb{N}$ . By the definitions of $Q_t^{+,n}$ and the reweighting operator, it follows inductively that
By (3.4), another representation is
where $\alpha_k = g(k\delta)/g((k-1)\delta)$ for $k\in \mathbb{N}$ .
Now we aim to use the stochastic inequality in order to obtain limits for $Q_t^{+,n}(\mu)$ and $q^{(n)}(t)$ , and compare the limits to $Q_t^b(\mu)$ and b.
Lemma 3.6. Let $\mu \in \mathcal{P}$ be equivalent to Lebesgue measure, g be a differentiable survival distribution fulfilling (1.2), and $b\in \textrm{ifptk}(g, \mu)$ be continuous. For every $t\geq 0$ there exists a sub-probability measure $Q^+_t(\mu)$ such that:
-
(i) $Q_t^{+,n} \to Q^+_t(\mu)$ in the sense of weak convergence as $n\to\infty$ ;
-
(ii) $Q_t^b(\mu) \preceq_{\operatorname{st}} Q_t^+(\mu)$ ;
-
(iii) $Q^+_t(\mu)$ is equivalent to the Lebesgue measure and $Q^+_t(\mu)(\mathbb{R}) = g(t)$ ;
-
(iv) $q^{(n)}(t) \to a(t)$ as $n\to\infty$ for every $t\geq 0$ , where a(t) is the unique value determined by $Q^+_t(\mu)(({-}\infty, a(t))) = -g'(t)$ ; and
-
(v) $a(t) \geq b(t)$ for all $t\geq 0$ .
Proof. In the proof we drop the dependency on b in the notation and write $Q_t(\mu) = Q_t^b(\mu)$ . In the case $t=0$ we have, by definition, $Q_0^+(\mu) \, :\!= \, Q_0^{+,n}(\mu) = \mu = Q_0(\mu)$ , and thus from now on we assume $t>0$ .
To prove (i) and (iii), by Remark 3.1 we have
for every measurable $A\subseteq \mathbb{R}$ . By the upper bound of this inequality, the collection $(Q_t^{+,n}(\mu))_{n\in\mathbb{N}}$ , seen as finite measures, is tight since $\mathbb{P}_{\mu} \left( X_{t} \in \cdot\, \right)$ is tight. Since $Q_t^{+,n}(\mu)(\mathbb{R}) = g(t)$ we can deduce by Prokhorov’s theorem that $(Q_t^{+,n}(\mu))_{n\in\mathbb{N}}$ is relatively compact. Let $\sigma_t$ be an accumulation point of $(Q_t^{+,n}(\mu))_{n\in\mathbb{N}}$ in the sense of weak convergence. Then, by the portmanteau theorem and (3.5) we have, for all closed sets $F\subseteq \mathbb{R}$ ,
and, for all open sets $U\subseteq \mathbb{R}$ ,
Since the measures are regular, it follows that $\textrm{e}^{-t}\mathbb{P}_{\mu} \left( X_{t} \in A \right) \leq \sigma_t(A) \leq \mathbb{P}_{\mu} \left( X_{t} \in A \right)$ for every measurable $A\subseteq\mathbb{R}$ , which implies that $\sigma_t$ is equivalent to the Lebesgue measure.
By Lemma 3.3, for every $c \in \mathbb{R}$ the sequence $Q_t^{+,n}(\mu)(({-}\infty, c])$ is monotonic in n. Since $\sigma_t$ is equivalent to the Lebesgue measure, the portmanteau theorem shows that this sequence converges to $\sigma_t(({-}\infty, c])$ along the corresponding subsequence, and hence, by monotonicity, along the whole sequence. In particular, the accumulation point is unique, which already means that $Q_t^{+,n}(\mu)$ converges weakly to $Q_t^{+}(\mu) \, :\!= \, \sigma_t$ .
To prove (ii), since $Q_{t}(\mu) \preceq_{\operatorname{st}} Q_t^{+,n}(\mu) $ by Lemma 3.3, this ordering is preserved in the limit $n\to\infty$ , and thus $Q_{t}(\mu) \preceq_{\operatorname{st}} Q_t^{+}(\mu)$ .
To prove (iv) and (v), due to the fact that $Q_t^{+}(\mu)$ is equivalent to the Lebesgue measure and g fulfills (1.2), we can find a unique value a(t) such that
where the last equality is due to Lemma 3.1. By the inequality $Q_t(\mu) \preceq_{\operatorname{st}} Q_t^{+}(\mu)$ , it follows that $b(t) \leq a(t)$ . Since $Q_t^{+}(\mu)$ is equivalent to the Lebesgue measure, this directly implies that $c_n \to a(t)$ for every sequence $(c_n)_{n\in\mathbb{N}}$ with $Q_t^{+,n}(\mu)(({-}\infty, c_n]) \to -g'(t)$ (for the details, see Lemma 4.4). Hence, considering Lemma 3.2, we have $q^{(n)}(t) \to a(t)$ .
We continue with a study of the function a, which is our candidate for a continuous solution of the soft-killing inverse first-passage-time problem.
Let $d_{\operatorname{P}}$ denote the Prokhorov metric for probability measures defined in (B.1). The following statement will let us deduce that the function a is continuous.
Lemma 3.7. Let g be a differentiable survival distribution fulfilling (1.2), and let $\mu \in \mathcal{P}$ . Then $d_{\operatorname{P}} \left( g(t)^{-1} Q^+_{t} (\mu) , g(s)^{-1}Q^+_{s}(\mu) \right) \leq 2\vert t-s \vert^{1/4}$ for every $t,s \geq 0$ .
In order to prove Lemma 3.7 we will bound the effect of $R_\alpha^t$ in the Prokhorov metric by the inequality $d_{\operatorname{P}} \leq d_{\operatorname{TV}}$ (see (B.3)) and the following statement, whose proof can be found in Section 4.
Lemma 3.8. Let $\mu, \nu\in \mathcal{P}$ and $t,s>0$ . Then, for all $\alpha \in [\textrm{e}^{-t},1]$ ,
Proof of Lemma 3.7. Since the Prokhorov metric metrizes the weak convergence, it suffices to show a bound of the type $d_{\operatorname{P}} \left( g(t)^{-1}Q^{+,n}_{t}(\mu) , g(s)^{-1}Q^{+,n}_{s}(\mu) \right) \leq 2\vert t-s \vert^{1/4} + \varepsilon_n$ , where $(\varepsilon_n)_{n\in\mathbb{N}}$ is a sequence converging to zero. We achieve this by showing bounds with respect to the total variation distance and using the inequality $d_{\operatorname{P}} \leq d_{\operatorname{TV}}$ from (B.3). Without loss of generality, assume that $t\geq s$ and $\vert t-s \vert \leq 1$ . By the triangle inequality we observe that
To generate a bound for I, we first abbreviate $\nu = g(\lfloor s\rfloor_n)^{-1}Q^{+,n}_{\lfloor s\rfloor_n}(\mu)$ . Observe that, in view of Remark 3.1, with $\alpha_k \, :\!= \, g(k\delta)/g((k-1)\delta)$ we obtain
Further, we can write
Recall that $h(t) = -({\partial}/{\partial t})\log(g(t))$ . Then $g(t) = \exp\!\big\{{-}\int_0^{t}h(y) \,\textrm{d} y\big\}$ , and thus
Iteratively comparing (3.7) with (3.8) yields, in view of Lemma 3.8,
as $0\leq h \leq 1$ , since g fulfills (1.2).
To find a bound for II, we now observe that, by Corollary B.1,
For a bound for III, in a similar manner to above we have
This means that, with an application of Lemma 3.8,
As the last step, by putting (3.10), (3.9), and (3.11) together, and in view of the triangle bound in (3.6) and the assumption that $\vert t-s \vert\leq 1$ , we obtain
which completes the proof.
Corollary 3.1. Let g be a continuously differentiable survival distribution fulfilling (1.2), and $\mu \in \mathcal{P}$ equivalent to the Lebesgue measure. Then the function $a\,:\ [0,\infty) \to \mathbb{R}$ from Lemma 3.6 is continuous.
Proof. Lemma 3.7 yields in particular that $t \to Q_t^+(\mu)$ is continuous in the sense of weak convergence. Since $Q_t^+(\mu)$ is equivalent to the Lebesgue measure for every $t\geq 0$ , this directly implies that $\lim_{s \to t}a(s) = a(t)$ (for the details, see Lemma 4.4).
Recall that our approach was motivated by the discretization of the integral into a Riemann-type sum involving the function $q^{(n)}$ . In order to bring this discretization together with the original form of the integral we use the fact that the function $q^{(n)}$ converges uniformly.
Lemma 3.9. Let g be a continuously differentiable survival distribution fulfilling (1.2), and $\mu \in \mathcal{P}$ equivalent to the Lebesgue measure. Recall the function $a\,:\ [0,\infty)\to\mathbb{R}$ implicitly defined by $Q_t^+(\mu)(({-}\infty, a(t))) = -g'(t) $ . For $T>0$ , the functions $q^{(n)}(t)$ and $a^{(n)}(t)$ defined in (2.6) and (2.7) converge uniformly to the function a(t) in $t\in [0,T]$ .
In the proof of Lemma 3.9 we will use that $t\mapsto Q_t^{+,n}$ is continuous in the sense of weak convergence, which is easily deduced from the following auxiliary statement. The proof is to be found in Section 4.
Lemma 3.10. Let $\mu\in \mathcal{P}$ , and let $g\,:\ [0,\infty) \to [0,1]$ be continuous with $\textrm{e}^{-t} < g(t) < 1$ for every $t>0$ . Then the mapping $[0,\infty) \to \mathcal{P}$ , $t \mapsto R_{g(t)}^t(P_t\mu)$ is continuous in the sense of weak convergence, where we identify $R_1^0(\mu) = \mu$ .
Proof of Lemma 3.9. Recall that $a^{(n)}(t)$ is implicitly defined by
which is possible since $Q_t^{+,n}(\mu)$ is equivalent to the Lebesgue measure and g fulfills (1.2). By Lemma 3.10 we can deduce that $t \mapsto Q_t^{+,n}(\mu)$ is continuous in the sense of weak convergence. Now, analogously to the proof of Corollary 3.1, it can be seen that $a^{(n)}$ is continuous. By the ordering of Lemma 3.3 we have $a(t) \leq a^{(n+1)}(t) \leq a^{(n)}(t)$ for every $t\geq 0$ . Since $Q_t^{+,n}(\mu)(({-}\infty, a^{(n)}(t))) \to Q_t^+(\mu)(({-}\infty, a(t)))$ , it directly follows that $a^{(n)}(t) \to a(t)$ (for the details, see Lemma 4.4). In view of this, and by Dini’s theorem and the continuity of a and $a^{(n)}$ , it follows that $\sup_{t \in [0,T]}\vert a^{(n)}(t) - a(t) \vert \to 0$ as $n\to\infty$ . We complete the proof by showing that $\sup_{t \in [0,T]}\vert q^{(n)}(t) - a^{(n)}(t) \vert \to 0$ as $n\to\infty$ . As preparation for this, we fix $T>0$ and claim that there exists a compact set $K_T \subset \mathbb{R}$ , only depending on T, such that, for all n large enough, $q^{(n)}(t), a^{(n)}(t) \in K_T$ for all $t\in [0,T]$ . In order to see this, we begin as follows. By Prokhorov’s theorem the collection of measures $(P_t\mu)_{t\in [0,T]}$ is tight. Thus, for $\varepsilon > 0$ , by Remark 3.1 we can find $k(\varepsilon) > 0$ such that, for all $n\in \mathbb{N}$ and $t\in [0,T]$ ,
Thus, we have $Q^{+,n}_t(\mu)((q^{(n)}(t), \infty)) \leq \varepsilon$ whenever $q^{(n)}(t) > k(\varepsilon)$ , and similarly $Q^{+,n}_t(\mu)(({-}\infty, q^{(n)}(t))) \leq \varepsilon$ whenever $q^{(n)}(t) < -k(\varepsilon)$ . For a function f, write $\Vert f \Vert_{[0,T]} \, :\!= \, \sup_{t\in [0,T]}\vert f(t) \vert$ . We have, by Lemma 3.2,
as $n\to\infty$ . On the other hand, we have
Now, note that, due to (1.2) and the continuity of g and $g'$ ,
In view of the above, for n large enough we necessarily have that $q^{(n)}(t) \leq k(\varepsilon_T)$ and $q^{(n)}(t) \geq - k(\varepsilon_T)$ for all $t\in [0,T]$ . For $a^{(n)}(t)$ , we have
Hence, analogously to above, we necessarily have that $\vert a^{(n)}(t) \vert \leq k(\varepsilon_T)$ for all $t\in [0,T]$ . This yields the claim by setting $K_T \, :\!= \, [-k(\varepsilon_T), k(\varepsilon_T)]$ . As the next step, assume that
Then there would exist $\eta > 0$ , a subsequence $(n_k)_{k\in\mathbb{N}}$ of $\mathbb{N}$ , and a converging sequence $(t_k)_{k\in\mathbb{N}}$ contained in [0,T] such that
for all $k\in\mathbb{N}$ . We write $t_0 \, :\!= \, \lim_{k\to\infty}t_k$ and observe that, since $q^{(n)}(t), a^{(n)}(t) \in K_T$ for all $t\in [0,T]$ , we can assume without loss of generality that $\lim_{k\to\infty}q^{(n_k)}(t_k)$ and $\lim_{k\to\infty}a^{(n_k)}(t_k)$ exist. Write $c_1 \, :\!= \, \min\!(\lim_{k\to\infty}q^{(n_k)}(t_k), \lim_{k\to\infty}a^{(n_k)}(t_k))$ and $c_2 \, :\!= \, \max\!(\lim_{k\to\infty}q^{(n_k)}(t_k), \lim_{k\to\infty}a^{(n_k)}(t_k))$ . By (3.13) it follows that $\vert c_1 - c_2 \vert \geq \eta >0$ . Now let
and observe that, by Remark 3.1,
since $P_t\mu$ is continuous in $t \in [0,T]$ and is equivalent to Lebesgue measure for every $t\geq 0$ . But on the other hand we have, in view of Lemma 3.2,
Consequently, the assumption in (3.12) has to be false, and it follows that
which completes the proof.
We are now able to prove that the continuous function a is indeed a solution to the soft-killing inverse first-passage-time problem.
Proposition 3.1. Let $\mu \in \mathcal{P}$ be equivalent to the Lebesgue measure. Furthermore, let g be a continuously differentiable survival distribution fulfilling (1.2). Recall the function $a\,:\ [0,\infty) \to \mathbb{R}$ implicitly defined by $Q_t^+(\mu)(({-}\infty, a(t))) = -g'(t) $ . Then
-
(i) $Q_t^a(\mu) = Q_t^+(\mu)$ for every $t\geq 0$ , and
-
(ii) $a \in \textrm{ifptk}(g,\mu)$ .
Proof. By Lemma 3.9, $q^{(n)}$ converges to a uniformly on [0, t]. Further, we have that, almost surely, $\int_0^t\textbf{1}_{\{0\}}(a(s) - X_s) \,\textrm{d} s = 0$ . By Remark 3.1, the dominated convergence theorem, and Lemma A.1, we obtain that $\lim_{n\to\infty}Q_t^{+,n}(\mu)(A)$ equals
which means in particular that $Q_t^{+}(\mu) = Q_t^a(\mu)$ , and thus $a\in \textrm{ifptk}(g,\mu)$ .
Now we can prove the existence and uniqueness of continuous solutions.
Proof of Theorem 2.1. Denote the time spent by the Brownian motion below a boundary function b by $\Gamma_t^b \, :\!= \, \int_0^t\textbf{1}_{({-}\infty, b(r))}(X_r) \,\textrm{d} r$ . From Proposition 3.1 and Corollary 3.1 it follows that the function $a\,:\ [0,\infty) \to \mathbb{R}$ implicitly defined by $Q_t^a(\mu)(({-}\infty, a(t))) = -g'(t)$ is a continuous solution in $\textrm{ifptk}(g,\mu)$ . Now let $b \in \textrm{ifptk}(g,\mu)$ be continuous. From Proposition 3.1 we know that $Q_t^a(\mu) = Q_t^+(\mu)$ . In view of Lemma 3.6 we have $a \geq b$ pointwise. Consequently, we also have $\Gamma_t^a \geq \Gamma_t^b$ . By
we see that $\Gamma_t^a = \Gamma_t^b$ almost surely. If $a \neq b$ then, since $a \geq b$ and both functions are continuous, the Brownian path would spend strictly more time below a than below b with positive probability, which is a contradiction, and thus $a=b$ .
In view of the uniqueness, the limiting behavior of the approximating objects is summarized in the statement of Theorem 2.2.
Proof of Theorem 2.2. Since a is the unique continuous solution, we have, by Proposition 3.1 and Lemma 3.6, that $Q_t^{+,n} (\mu) \to Q_t^a (\mu)$ as $n\to\infty$ in the sense of weak convergence. The monotonicity of $Q_t^{+,n}(\mu)$ follows from Lemma 3.3. The uniform convergence of $q^{(n)}$ and $a^{(n)}$ on compact intervals follows from Lemma 3.9. The monotonicity of $a^{(n)}$ was shown in the proof of Lemma 3.9.
4. Proofs of auxiliary statements
4.1. Proof of Lemma 3.4
Proof of Lemma 3.4. From $\mu \preceq_{\operatorname{st}} \nu$ it follows that $q_\alpha^t(\nu) \geq q_\alpha^t(\mu)$ , and thus
Now, observe that, for $c < q_\alpha^t(\nu)$ ,
On the other hand, for $c\geq q_\alpha^t(\nu)$ ,
Since $R_\alpha^t(\mu)(\mathbb{R}) = R_\alpha^t(\nu)(\mathbb{R})$ , this means that $R_\alpha^t(\mu)(({-}\infty,c]) \geq R_\alpha^t(\nu)(({-}\infty,c])$ , which shows that $R_\alpha^t(\mu) \preceq_{\operatorname{st}} R_\alpha^t(\nu)$ .
4.2. Proof of Lemma 3.5
In order to prove Lemma 3.5 we show the following two separate statements concerning the reweighting operation, the convolution operator, and the usual stochastic order.
Lemma 4.1. Let $\mu$ be a finite measure, $t,s>0$ , and $\alpha \in [\textrm{e}^{-t}\mu(\mathbb{R}),\mu(\mathbb{R})]$ , and assume that $R_{\alpha}^t(\mu)(\mathbb{R}) = \alpha$ . Then $P_s R_{\alpha}^t(\mu) \preceq_{\operatorname{st}} R_\alpha^t(P_s\mu)$ .
Proof. Abbreviate $q \, :\!= \, q_{\alpha}^t ( P_s \mu)$ . First, let $c\geq q$ . Then
Since $P_s R_\alpha^t(\mu)(\mathbb{R}) = \alpha = R_\alpha^t(P_s\mu)(\mathbb{R})$ , we have $P_s R_\alpha^t(\mu)(({-}\infty, c]) \geq R_\alpha^t(P_s\mu)(({-}\infty, c])$ .
Now let $c < q$ . Then
which completes the proof.
Lemma 4.2. Let $t,s > 0$ , $\mu$ be a non-atomic finite measure, $\beta \in [\textrm{e}^{-t}\mu(\mathbb{R}), \mu(\mathbb{R})]$ , and $\alpha \in [\textrm{e}^{-s}\beta, \beta]$ . Then $R_\alpha^s(R_\beta^t(\mu)) \preceq_{\operatorname{st}} R_\alpha^{s+t}(\mu)$ .
Proof. First, note that
Let $c < q^{s+t}_\alpha(\mu)$ . Then
For $c \geq q_\alpha^{s+t}(\mu)$ ,
Since $R_\alpha^s(R_\beta^t(\mu))(\mathbb{R}) = \alpha = R_\alpha^{s+t}(\mu)(\mathbb{R})$ , this shows the desired result.
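Lemma 4.2 admits the same kind of numerical sanity check; the sketch below is our own illustration with arbitrary admissible parameters, comparing the CDFs of $R_\alpha^s(R_\beta^t(\mu))$ and $R_\alpha^{s+t}(\mu)$ for a standard Gaussian $\mu$ .

```python
import numpy as np

def reweight(xs, dens, t, alpha):
    """Grid version of R_alpha^t: multiply the density by e^{-t} below the
    quantile q, chosen so that the total remaining mass equals alpha."""
    dx = xs[1] - xs[0]
    cdf = np.cumsum(dens) * dx
    k = np.searchsorted(cdf, (cdf[-1] - alpha) / (1.0 - np.exp(-t)))
    out = dens.copy()
    out[:k] *= np.exp(-t)
    return out

xs = np.linspace(-8, 8, 40001)
dx = xs[1] - xs[0]
f = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)  # mu = N(0,1)
# beta in [e^{-t}, 1] and alpha in [e^{-s} beta, beta]
t, s, beta, alpha = 0.4, 0.3, 0.9, 0.8
lhs = np.cumsum(reweight(xs, reweight(xs, f, t, beta), s, alpha)) * dx
rhs = np.cumsum(reweight(xs, f, s + t, alpha)) * dx
# R_alpha^s(R_beta^t(mu)) <=_st R_alpha^{s+t}(mu): composed CDF dominates
print(np.all(lhs >= rhs - 1e-3))
```

Note that both measures end up with total mass $\alpha$ , so the CDFs coincide in the right tail; the tolerance again accounts for the grid error.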
Proof of Lemma 3.5. Using Lemmas 4.1, 4.2, and 3.4, we can deduce that
which completes the proof.
4.3. Proof of Lemma 3.8
Let $\mu$ and $\nu$ be absolutely continuous with respect to the Lebesgue measure with densities f and g. Then
is the total variation distance from (B.2) (for example, see [Reference Gibbs and Su11]). We prepare for the proof of Lemma 3.8 with the following.
Lemma 4.3. Let $\mu \in \mathcal{P}$ be absolutely continuous with respect to the Lebesgue measure, and $t>0$ . Then, for all $\alpha \in [\textrm{e}^{-t},1]$ , $d_{\operatorname{TV}} \left( \alpha^{-1}R_\alpha^t(\mu) , \mu \right) \leq 1 - \alpha$ .
Proof. Let $\mu = f\,\textrm{d} x$ . Then the density of $\alpha^{-1}R_\alpha^t(\mu)$ is given by $\alpha^{-1}\exp\!\big\{{-}t\textbf{1}_{({-}\infty, q_\alpha^t(\mu))}\big\}f$ . By the representation in (4.1) and the computation in (3.1), we have
This completes the proof.
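The bound of Lemma 4.3 is easy to check numerically from the density representation just given. The following sketch is our own illustration with arbitrary parameters $t$ and $\alpha$ , discretizing $\mu = \mathcal{N}(0,1)$ on a grid and evaluating $d_{\operatorname{TV}}$ via the $L^1$ formula for densities.

```python
import numpy as np

xs = np.linspace(-8, 8, 40001)
dx = xs[1] - xs[0]
f = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)    # density of mu = N(0,1)
t, alpha = 0.7, 0.8                            # alpha in [e^{-t}, 1]
# q_alpha^t(mu): mass strictly below q equals (1 - alpha) / (1 - e^{-t}),
# so that e^{-t} mu((-inf, q)) + mu([q, inf)) = alpha
cdf = np.cumsum(f) * dx
k = np.searchsorted(cdf, (1.0 - alpha) / (1.0 - np.exp(-t)))
h = f.copy()
h[:k] *= np.exp(-t)                            # density of R_alpha^t(mu)
tv = 0.5 * np.sum(np.abs(h / alpha - f)) * dx  # d_TV(alpha^{-1} R_alpha^t(mu), mu)
print(tv, tv <= 1 - alpha)                     # the bound 1 - alpha = 0.2 holds
```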
Proof of Lemma 3.8. From the coupling representation in (B.2) it directly follows that $d_{\operatorname{TV}} \left( P_t\mu , P_t\nu \right) \leq d_{\operatorname{TV}} \left( \mu , \nu \right)$ for probability measures $\mu, \nu$ . Thus, we can deduce by Lemma 4.3 and the triangle inequality that
4.4. Proof of Lemma 3.10
In order to prove Lemma 3.10 we will use the following elementary statement.
Lemma 4.4. Let $\nu_n \to \nu$ in distribution, where $\nu$ is a probability measure equivalent to Lebesgue measure. Let $\alpha \in (0,1)$ and $(c_n)_{n\in \mathbb{N}}$ be a sequence with $\nu_n(({-}\infty, c_n]) \to \alpha$ . Then $c_n \to c_\alpha$ , where $c_\alpha$ is uniquely determined by $\nu(({-}\infty, c_\alpha]) = \alpha$ .
Proof. Assume that $\limsup_{n\to\infty} c_n > c_\alpha$ . Then there exist a subsequence $(c_{n_k})_{k\in \mathbb{N}}$ and $\varepsilon > 0$ such that $c_{n_k} > c_\alpha + \varepsilon$ for all $k\in\mathbb{N}$ . This implies that, by the portmanteau theorem,
which is a contradiction. The reasoning for $\liminf_{n\to\infty} c_n \geq c_\alpha $ is analogous, which yields the statement.
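As a toy illustration of Lemma 4.4 (our own example, not from the text), take $\nu = \mathcal{N}(0,1)$ , $\nu_n = \mathcal{N}(1/n,1)$ , $\alpha = 0.7$ , and choose $c_n$ with $\nu_n(({-}\infty,c_n]) = \alpha + 1/n \to \alpha$ ; then $c_n \to c_\alpha = \Phi^{-1}(\alpha)$ .

```python
from statistics import NormalDist

alpha = 0.7
c_alpha = NormalDist().inv_cdf(alpha)  # c_alpha with nu((-inf, c_alpha]) = alpha

def c_n(n):
    # threshold with nu_n((-inf, c_n]) = alpha + 1/n, where nu_n = N(1/n, 1)
    return 1 / n + NormalDist().inv_cdf(alpha + 1 / n)

for n in (10, 100, 1000):
    print(n, abs(c_n(n) - c_alpha))    # the error shrinks towards 0
```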
Proof of Lemma 3.10. For $t>0$ , since $P_t\mu$ is equivalent to the Lebesgue measure we have, by Lemma 4.4, $\lim_{s\to t}q_{g(s)}^{s}(P_{s}\mu) = q_{g(t)}^t(P_t\mu)$ . Because of this, as $s\to t$ we have, for continuous and bounded $f\,:\ \mathbb{R}\to\mathbb{R}$ ,
due to the continuity of the paths, the fact that the event $\{X_t = q_{g(t)}^t(P_t\mu)\}$ has probability 0, and the dominated convergence theorem. For $t = 0$ the statement is clear, since $\exp\!\big\{{-}t\textbf{1}_{({-}\infty, q_{g(t)}^t(P_t\mu))}(x)\big\} \to 1$ as $t \to 0$ .
5. Markov processes with soft killing
In this section we propose a generalization of the results from Brownian motion to certain Markov processes. Suppose that $(X_t)_{t\geq 0}$ is a Markov process on a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$ with transition semigroup $(P_t)_{t\geq 0}$ . Then the inverse first-passage-time problem with soft killing can be stated for this Markov process in the same way as before. Given a random variable $\zeta$ with values in $(0,\infty)$ we search for a function $b\,:\ [0,\infty) \to \mathbb{R}$ such that (1.1) is fulfilled, i.e.
where $\mu$ is again the initial distribution of $(X_t)_{t\geq 0}$ . In order to generalize the approach from Brownian motion to this case, it turns out that we merely have to impose the following requirements on the semigroup $(P_t)_{t\geq 0}$ , where, as usual, we understand $P_t$ as an operator on the space of sub-probability measures via the relation
for continuous and bounded functions $f\,:\ \mathbb{R}\to\mathbb{R}$ .
Remark 5.1. In order to obtain uniquely determined quantiles from the reweighting operator $R^t_\alpha$ in (2.4), and to pass this property through the approximation limit $Q_t^+ (\mu)$ , we should assume that $P_t \mu$ is equivalent to the Lebesgue measure for every initial measure $\mu$ . Furthermore, for the continuity of $Q_t^{+,n}(\mu)$ in t we should impose that there is a version of $(X_t)_{t\geq 0}$ with continuous sample paths, as this is used in the proof of Lemma 3.10. Moreover, for the properties in Lemma 3.3 we need $P_t$ to preserve the usual stochastic ordering, which can be established by a suitable coupling for strong Markov processes with continuous sample paths. For the result of Lemma 3.7, on the one hand we want to use that $d_{\operatorname{TV}} \left( P_t\mu , P_t\nu \right) \leq d_{\operatorname{TV}} \left( \mu , \nu \right)$ , which holds true for Markov kernels in general. On the other hand, we require that for a tight collection $\mathcal{S}$ of probability measures we have
as $t\to 0$ . This is sufficient for the proof of Lemma 3.7 since the family
is tight.
In order to reduce these conditions to natural requirements, we list the following sufficient properties, which induce the conditions discussed above; in particular, (5.2) can be deduced from (iii), as pointed out in Remark B.1.
- (i) $P_t\delta_x$ is equivalent to the Lebesgue measure for every $x\in \mathbb{R}$ and $t>0$ .
- (ii) $(X_t)_{t\geq 0}$ admits a version which almost surely has continuous sample paths.
- (iii) The process is locally uniformly continuous in probability, i.e. for every $\varepsilon > 0$ and every compact subset $K \subset \mathbb{R}$ , $\lim_{t\rightarrow 0}\sup_{x\in K}\mathbb{P}_{x} \left( \vert X_t - X_0 \vert > \varepsilon \right) = 0$ .
Then there is exactly one continuous $b\,:\ [0,\infty) \to \mathbb{R}$ such that (5.1) is fulfilled. This shows that the inverse first-passage-time problem with soft killing has a unique solution for a large class of diffusion processes. The validity of such a generalization was conjectured in [Reference Ettinger, Hening and Wong10], albeit with conditions imposed directly on the coefficients of the process.
6. Monte Carlo method for an approximate solution
In the following we present our Monte Carlo method for simulating the discrete approximations shown in Figure 1.
Let $g(t) = \mathbb{P}\left( \zeta > t \right)$ be the continuously differentiable survival function of a random variable $\zeta >0$ , let $\mu$ be a probability measure, and assume that g fulfills (1.2).
Let $(X^1_t, \ldots, X^N_t)_{t\geq 0}$ be an N-dimensional Brownian motion with initial configuration $(X_0^1, \ldots, X_0^N) \sim \mu^{\otimes N}$ . For $n\in \mathbb{N}$ let timepoints $(t_k^n)_{k\in \mathbb{N}}$ be given by $t_k^n \, :\!= \, k\cdot 2^{-n} = k \delta^{(n)}$ . We define the weighting process $(\hat{w}_k)_{k\in \mathbb{N}_0} = (\hat{w}_k^1, \ldots, \hat{w}_k^N)_{k\in \mathbb{N}_0}$ inductively by $\hat{w}_0^i \, :\!= \, {1}/{N}$ for any $i \in \{1,\ldots,N\}$ and, for $k\in\mathbb{N}$ ,
where
Heuristically, $\hat{q}^{(n)}_k$ is an empirical version of $q^{(n)}(k\delta^{(n)})$ from (2.6). A proof of the validity of this choice, namely that $\hat{q}^{(n)}_k \to q^{(n)}(k\delta^{(n)})$ almost surely as $N\to \infty$ , can be found in [Reference Klump18, Theorem 3.3.2].
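The scheme can be sketched in a few lines of Python. Since the displayed update formulas for $\hat{w}_k$ and $\hat{q}^{(n)}_k$ are not reproduced above, the following is our reading of the construction and should be treated as an illustrative sketch rather than the exact algorithm: after each Brownian step of size $\delta^{(n)}$ , the empirical threshold $\hat{q}^{(n)}_k$ is chosen so that discounting by $\textrm{e}^{-\delta^{(n)}}$ the weights of all particles below it makes the total weight equal $g(k\delta^{(n)})$ . The function name `soft_killing_quantiles` is ours.

```python
import numpy as np

def soft_killing_quantiles(g, mu_sampler, T=1.0, n=6, N=10**5, rng=None):
    """Monte Carlo sketch of the discrete approximation \\hat q^{(n)}:
    N weighted Brownian particles; at each t_k = k * 2^{-n} the weights of
    particles below the threshold \\hat q_k are discounted by e^{-delta},
    with \\hat q_k chosen so the total weight matches g(t_k)."""
    rng = rng or np.random.default_rng(0)
    delta = 2.0 ** (-n)
    x = mu_sampler(N, rng)            # initial positions ~ mu
    w = np.full(N, 1.0 / N)           # initial weights 1/N
    qs = []
    for k in range(1, int(T / delta) + 1):
        x = x + np.sqrt(delta) * rng.standard_normal(N)  # Brownian step
        order = np.argsort(x)
        cw = np.cumsum(w[order])      # cumulative weight, left to right
        # total weight after discounting mass below q: S - (1 - e^{-delta}) W(q)
        target = (cw[-1] - g(k * delta)) / (1.0 - np.exp(-delta))
        j = np.searchsorted(cw, target)
        qs.append(x[order][min(j, N - 1)])
        w[order[:j]] *= np.exp(-delta)  # soft killing below the threshold
    return np.array(qs)

# example: zeta ~ Exp(1/2), i.e. g(t) = exp(-t/2), and X_0 ~ N(0,1)
qs = soft_killing_quantiles(lambda t: np.exp(-t / 2),
                            lambda N, r: r.standard_normal(N))
print(qs[:4])
```

For $g(t) = \textrm{e}^{-t/2}$ the hazard rate equals $1/2 < 1$ , in keeping with the restriction behind (1.2); a survival function decaying faster than the unit killing rate allows could not be matched by this scheme.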
A simulation of $\hat{q}^{(n)}$ for certain distributions, with parameters $n=6$ and $N = 10^6$ , can be seen in Figure 1.
While the soft-killing problem intrinsically kills continuously in time, our discretization procedure provides an approximation of the continuous boundary at discrete time points only. In contrast, for the classical inverse first-passage-time problem the continuous, piecewise linear approximation of [Reference Zucca and Sacerdote25] yields a Monte Carlo algorithm that approximates the barrier with error bounds. While several methods are known for numerically approximating solutions of the classical problem (see, e.g., [Reference Abundo1, Reference Gür and Pötzelberger12, Reference Iscoe and Kreinin13, Reference Klump19, Reference Song and Zipkin24, Reference Zucca and Sacerdote25]), the problem of obtaining numerical approximations of solutions of the soft-killing inverse first-passage-time problem has not been treated in the literature until now. A more detailed study is therefore still needed in order to obtain reliable results.
Appendix A. Approximation of a Riemann integral
We found no direct source in the literature for the following basic result about Riemann integrals, which is used in the proof of Theorem 3.1. For completeness we give a proof here.
Lemma A.1. Let $f\,:\ [0,T] \to \mathbb{R}$ be a continuous function with $\int_0^T\textbf{1}_{f^{-1}(\{0\})}(s)\,\textrm{d} s = 0$ . Furthermore, let $f_n \to f$ uniformly on [0, T]. Then, for any sequence of partitions $(Z_n)_{n\in\mathbb{N}}$ of [0, T] with mesh tending to zero (this means that $Z_n = \{t_0^n, \ldots, t_{m_n}^n\}$ , where $0 = t_0^n < \cdots < t_{m_n}^n = T$ and $\lim_{n\to\infty}\max_{i=1,\ldots,m_n}\vert t_i^n - t_{i-1}^n \vert = 0$ ),
Proof. Let $(Z_n)_{n\in\mathbb{N}}$ be such a sequence of partitions. Let $D \, :\!= \, \{t \in [0,T] \,:\ s \mapsto \textbf{1}_{({-}\infty, 0)}(f(s)) \text{ is discontinuous at } t\}$ . We have $D \subset f^{-1} (\{0\})$ , and thus, by the assumption, the mapping $s \mapsto \textbf{1}_{({-}\infty, 0)}(f(s))$ is almost everywhere continuous on [0, T]. By Lebesgue’s criterion for Riemann integrability it follows that we have
For $\varepsilon > 0$ let
For this $\varepsilon$ , let n be large enough that $\sup_{t\in [0,T]}\vert f_n(t) - f(t) \vert < \varepsilon$ . Then
as $n\to\infty$ , since $\phi_\varepsilon \circ f$ is continuous and thus Riemann integrable. Now, by letting $\varepsilon \to 0$ we get, by the dominated convergence theorem,
since $\int_0^T\textbf{1}_{f^{-1}(\{0\})}(s)\,\textrm{d} s = 0$ . Thus the desired statement follows.
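A numerical illustration of the lemma (our own toy example): take $f(t) = \cos(2\pi t)$ on $[0,1]$ , whose zero set is finite, and uniform perturbations $f_n = f + 1/n$ ; the Riemann sums of $\textbf{1}_{({-}\infty, 0)}(f_n)$ over partitions of mesh $1/(4n)$ approach $\int_0^1 \textbf{1}_{({-}\infty, 0)}(f(s))\,\textrm{d} s = 1/2$ .

```python
import numpy as np

def riemann_sum(fn, m):
    # Riemann sum of 1_{(-inf,0)}(fn) over the uniform partition of [0,1]
    # with m cells, evaluated at the right endpoints t_i = i/m
    ts = np.linspace(0, 1, m + 1)[1:]
    return np.sum((fn(ts) < 0) / m)

for n in (10, 100, 1000):
    fn = lambda t, n=n: np.cos(2 * np.pi * t) + 1.0 / n  # uniform perturbation
    print(riemann_sum(fn, 4 * n))  # increases towards 1/2
```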
Appendix B. Bounds and the use of probability metrics
In this work we use the Prokhorov metric
where $B^\varepsilon \, :\!= \, \{x\in \mathbb{R} \,:\ \inf_{y\in B}\vert x-y \vert \leq \varepsilon\}$ , and the total variation distance
where the coupling representation can be found in [Reference Gibbs and Su11]. We have the following bounds [Reference Gibbs and Su11]:
where $d_{\operatorname{W}}$ is the Wasserstein metric.
The following is a direct consequence of (B.4).
Corollary B.1. Let $\mathcal{N}(0,t)$ denote the normal distribution with mean 0 and variance $t>0$ . Then, for every probability measure $\mu$ , $d_{\operatorname{P}} \left( \mathcal{N}(0,t) * \mu , \mu \right) \leq t^{1/4}$ .
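A one-line derivation sketch: assuming (B.4) is the standard comparison $d_{\operatorname{P}} \leq \sqrt{d_{\operatorname{W}}}$ from [Reference Gibbs and Su11], the synchronous coupling $(X, X + \sqrt{t}Z)$ , with $X \sim \mu$ and $Z \sim \mathcal{N}(0,1)$ independent, gives

```latex
d_{\operatorname{W}}\left( \mathcal{N}(0,t) * \mu , \mu \right)
  \leq \mathbb{E}\left[ \big\vert \sqrt{t}\, Z \big\vert \right]
  = \sqrt{2t/\pi} \leq \sqrt{t},
\qquad \text{so} \qquad
d_{\operatorname{P}}\left( \mathcal{N}(0,t) * \mu , \mu \right)
  \leq \sqrt{ d_{\operatorname{W}}\left( \mathcal{N}(0,t) * \mu , \mu \right) }
  \leq t^{1/4}.
```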
Remark B.1. Furthermore, (B.4) implies that for an initial distribution $\mu$ and a Markov process $(X_t)_{t\geq 0}$ we have, for every compact set $K\subset \mathbb{R}$ such that $\mu (K) \geq 1 -\varepsilon$ , $\varepsilon \in (0,1)$ ,
If $\mathcal{S}$ is a tight family of probability measures and $(X_t)_{t\geq 0}$ is locally uniformly continuous in probability at $t=0$ , this implies that $\limsup_{t\to 0}\sup_{\mu\in\mathcal{S}}d_{\operatorname{P}}(\mathbb{P}_{\mu} \left( X_t \in \cdot \, \right),\mu) = 0$ . This property was required in (5.2).
Funding information
There are no funding bodies to thank relating to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.