
On mixed fractional stochastic differential equations with discontinuous drift coefficient

Published online by Cambridge University Press:  01 December 2022

Ercan Sönmez*
Affiliation:
University of Klagenfurt
*Postal address: Department of Statistics, Universitätsstraße 65–67, 9020 Klagenfurt, Austria. Email: ercan.soenmez@aau.at

Abstract

We prove existence and uniqueness for the solution of a class of mixed fractional stochastic differential equations with discontinuous drift driven by both standard and fractional Brownian motion. Additionally, we establish a generalized Itô rule valid for functions with an absolutely continuous derivative and applicable to solutions of mixed fractional stochastic differential equations with Lipschitz coefficients, which plays a key role in our proof of existence and uniqueness. The proof of such a formula is new and relies on showing the existence of a density of the law under mild assumptions on the diffusion coefficient.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Consider an autonomous stochastic differential equation (SDE)

(1.1) \begin{align}{\mathrm{d}} X_t = a(X_t) \, {\mathrm{d}} t + b(X_t) \, {\mathrm{d}} W_t + c(X_t) \, {\mathrm{d}} B_t^H,\end{align}

with $a,b,c \,:\, \mathbb{R} \to \mathbb{R}$ being measurable functions, W a standard Brownian motion, and $B^H$ a fractional Brownian motion with Hurst index $H \in \big(\frac12,1\big)$ .

If $c \equiv 0$ the corresponding SDE falls into the Markovian setting and can be investigated by means of Itô theory (see [Reference Karatzas and Shreve4, Reference Leobacher and Szölgyenyi9, Reference Veretennikov27, Reference Zvonkin31] and references therein), whereas if $b \equiv 0$ we find ourselves in the purely fractional case. Since H satisfies $H \in \big(\frac12 ,1\big)$, we can define stochastic integrals with respect to fractional Brownian motion using a pathwise approach. A variety of methods have been developed and used in order to study such (stochastic) differential equations (see [Reference Hu, Liu and Nualart3, Reference Kleptsyna, Kloeden and Anh5, Reference Kubilius6, Reference Lin11, Reference Lyons12, Reference Nualart and Ouknine20, Reference Nualart and Ouknine21, Reference Nualart and Răşcanu22, Reference Ruzmaikina24, Reference Young28, Reference Zähle30]), in particular borrowing ideas and results from the theory of deterministic (geometric) differential equations.

A suitable motivation for considering mixed SDEs arises from applications in financial mathematics. Including both standard and fractional Brownian motion for the purpose of modeling randomness in a financial market makes the model more flexible, and in particular enables us to capture and distinguish between two sources of randomness. Typically, standard Brownian motion models white noise possessing no memory, whereas fractional Brownian motion models noise exhibiting long-range dependence.

Questions regarding the existence and uniqueness of a solution to mixed SDEs have been addressed in [Reference Guerra and Nualart2, Reference Kubilius7, Reference Mishura and Posashkov15, Reference Mishura and Shevchenko16, Reference Mishura and Shevchenko17] under certain regularity assumptions on the coefficients a, b, c; see the aforementioned references and Section 2 for details. A main contribution of this work is the establishment of existence and uniqueness for solutions to mixed SDEs with irregular drift, in which the drift coefficient is allowed to be discontinuous.

In the Markovian case $c \equiv 0$ , considerable effort has been made in the study of the corresponding SDE with discontinuous drift coefficient; see, e.g., [Reference Leobacher and Szölgyenyi9, Reference Leobacher and Szölgyenyi10, Reference Müller-Gronbach and Yaroslavtseva18, Reference Zvonkin31] and references therein, just to mention a few. Comparatively little is known for purely fractional SDEs, i.e. $b \equiv 0$ , with discontinuous drift coefficient; see [Reference Fan and Zhang1, Reference Mishura13, Reference Suo and Yuan25] for the case $H > \frac12$ . In [Reference Mishura13, Theorem 3.5.14] the existence of a strong solution is proven for purely fractional SDEs with additive noise, where the drift coefficient is given by the discontinuous function $a(x) = \operatorname{sign} (x)$ for all $x \in \mathbb{R}$ and the Hurst index H is restricted to $H \in \big(\frac12, ({1+\sqrt{5}})/{4}\big)$ ; see also [Reference Mishura and Nualart14, Theorem 1] for a related result. In this paper, using a significantly different approach, we provide results on the existence and uniqueness of solutions to mixed SDEs, where we allow the drift coefficient to be in the more general class of piecewise Lipschitz continuous functions and we include all $H \in \big(\frac{1}{2},1\big)$ .

The main obstacle faced in studying mixed SDEs arises from the fact that the two stochastic integrals involved in (1.1) have crucially different natures. The integral with respect to standard Brownian motion is a classical Itô integral, while the integral with respect to fractional Brownian motion is a pathwise Riemann–Stieltjes integral.

Let us outline the strategy for achieving our main result. Whereas the proofs in [Reference Mishura and Shevchenko17] use approximation theory and partially the approach in [Reference Nualart and Răşcanu22], we will borrow ideas from the purely non-fractional case and make them accessible in the setting of mixed SDEs. A key idea is to use a transformation technique originating from [Reference Leobacher and Szölgyenyi9, Reference Veretennikov26, Reference Veretennikov27]. We will employ a transformation used in [Reference Leobacher and Szölgyenyi10]. More precisely, to reduce the question to the existence and uniqueness of solutions of classical equations, the SDE is transformed in such a way that the discontinuity of the drift is removed, whereas the (regularity) properties of both the diffusion and fractional coefficient are preserved. This approach, however, comes with challenges. In order to apply the transform we need a generalized Itô formula valid for convex functions with absolutely continuous derivative, which has not been available in this setting so far. Therefore, another main contribution of this paper is to establish such a formula. Here we provide a novel proof, which is also new in the Markovian case. Our approach partially combines the procedures in [Reference Fan and Zhang1, Reference Leobacher, Reisinger and Stockinger8]: we first provide results on the absolute continuity of the law of mixed stochastic differential equations, which are then used to prove a variant of a generalized Itô formula.

The rest of this paper is organized as follows. In Section 2 we rigorously formulate the problem under consideration and state and prove our result on existence and uniqueness. As already mentioned, the proof invokes the Itô formula generalized to convex functions. In Section 3 we give a proof of a classical Itô formula for mixed SDEs, in order to make the paper self-contained. Subsequently, Section 4 is devoted to the study of the existence of a density of the law of mixed SDEs, which finally enables us to prove a generalized Itô formula.

2. Existence and uniqueness

Let $T \in (0, \infty)$ and $\big(\Omega, \mathcal{F}, (\mathbb{F}_t)_{t \in [0, T]}, P\big)$ be a filtered probability space satisfying the usual conditions. Suppose that $W=(W_t)_{t \in [0, T]}$ is a standard Brownian motion and $B^H=\big(B^H_t\big)_{t \in [0, T]}$ is a fractional Brownian motion with Hurst parameter $H \in \big(\frac{1}{2},1\big)$ independent of W, i.e. $B^H$ is a centered Gaussian process with covariance function $R_H$ given by

\[R_H(t,s) = \mathbb{E} [ B_t^H B_s^H] = \tfrac{1}{2} \big( t^{2H} + s^{2H} - |t-s|^{2H} \big) , \qquad t,s \in [0,T].\]

We can, for example, take $(\mathbb{F}_t)_{t \in [0, T]}$ to be a right-continuous filtration containing the natural filtration generated by W, $B^H$ , and all null sets with respect to P.
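For illustration, the covariance function $R_H$ can be used directly to sample fractional Brownian motion on a finite grid. The following minimal sketch (not taken from the paper; the function name fbm_path, the grid size, and the jitter term are ad hoc choices) draws an exact Gaussian sample of $B^H$ at n grid points via a Cholesky factorization of the covariance matrix $\big(R_H(t_i, t_j)\big)_{i,j}$, together with an independent standard Brownian path W:

```python
import numpy as np

def fbm_path(H, T=1.0, n=500, seed=0):
    """Exact sample of fractional Brownian motion at n equidistant points of
    (0, T], using R_H(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2 and a
    Cholesky factorization (O(n^3); for illustration only)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)                   # grid without t = 0, where B_0^H = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    R = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))  # tiny jitter for numerical stability
    return t, L @ rng.standard_normal(n)

# Two independent driving noises as in (1.1)/(2.1), with Hurst index H in (1/2, 1):
t, B_H = fbm_path(H=0.7)
W = np.cumsum(np.sqrt(t[0]) * np.random.default_rng(1).standard_normal(len(t)))
```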

Let $a,b,c \,:\, \mathbb{R} \to \mathbb{R}$ be measurable functions. We consider the following mixed stochastic differential equation:

(2.1) \begin{align}\begin{split}X_t & = X_0+ \int_0^t a(X_s) \, {\mathrm{d}} s + \int_0^t b(X_s) \, {\mathrm{d}} W_s + \int_0^t c(X_s) \, {\mathrm{d}} B_s^H , \qquad t \in [0,T], \\ X_0 & = \xi \in \mathbb{R}.\end{split}\end{align}

Under the assumptions that a, b, c are Lipschitz and c is differentiable with bounded and Lipschitz continuous derivative c ′, it is known from [Reference Mishura and Shevchenko17, Theorem 3.1] that (2.1) admits a unique solution. In fact, its proof shows that the following holds.

Lemma 2.1. Assume that there exists $K \in (0, \infty)$ such that, for all $x,y,x_1,x_2,x_3,x_4 \in \mathbb{R}$ ,

\begin{align*}| a(x) - a(y)| + | b(x) - b(y)| + | c(x) - c(y)| & \leq K|x-y|, \\| c(x_1) - c(x_2) - c(x_3) + c(x_4) | & \leq K |x_1 - x_2 - x_3 + x_4| \\& \quad + K |x_1 - x_3| \big( |x_1 - x_2| + |x_3 - x_4| \big) .\end{align*}

Then (2.1) admits a unique solution.

We first show that Lemma 2.1 implies the following result, which is in accordance with [Reference Mishura and Shevchenko17, Theorem 3.1].

Proposition 2.1. Assume that there exists $K \in (0, \infty)$ such that, for all $x,y \in \mathbb{R}$ ,

\begin{equation*}| a(x) - a(y)| + | b(x) - b(y)| + | c(x) - c(y)| \leq K|x-y|.\end{equation*}

Moreover, assume that the function c has a bounded Lebesgue density $c^{\prime} \,:\, \mathbb{R} \to \mathbb{R}$ which is Lipschitz continuous. Then (2.1) admits a unique solution.

Proof. Let $x_1,x_2,x_3,x_4 \in \mathbb{R}$ , and $M_1, M_2, \ldots$ be unspecified constants. By assumption,

\begin{equation*}c(x_1) - c(x_3) = \int_{x_3}^{x_1} c^{\prime} (u) \, {\mathrm{d}} u= \int_0^1 (x_1 - x_3) c^{\prime} \big( \theta x_1 + (1-\theta) x_3 \big) \, {\mathrm{d}}\theta .\end{equation*}

Thus,

\begin{align*}c(x_1) - c(x_2) & - c(x_3) + c(x_4) \\& = \int_0^1 (x_1 - x_3) c^{\prime} \big( \theta x_1 + (1-\theta) x_3 \big) \, {\mathrm{d}}\theta - \int_0^1 (x_2 - x_4) c^{\prime} \big( \theta x_2 + (1-\theta) x_4 \big) \, {\mathrm{d}}\theta \\& = \int_0^1 (x_1 - x_2 - x_3 + x_4) c^{\prime} \big( \theta x_2 + (1-\theta) x_4 \big) \, {\mathrm{d}}\theta \\& \quad + \int_0^1 (x_1 - x_3) \big( c^{\prime} ( \theta x_1 + (1-\theta) x_3 ) - c^{\prime} ( \theta x_2 + (1-\theta) x_4 ) \big) \, {\mathrm{d}}\theta .\end{align*}

From this and the assumptions on c′, we obtain

\begin{align*}| c(x_1) - c(x_2) & - c(x_3) + c(x_4) | \\& \leq M_1 | x_1 - x_2 - x_3 + x_4 | \\& \quad + |x_1 - x_3| \cdot \int_0^1 M_2 | \theta x_1 + (1- \theta) x_3 - \theta x_2 - (1- \theta) x_4| \, {\mathrm{d}}\theta \\& \leq M_3 | x_1 - x_2 - x_3 + x_4 | + M_4 |x_1 - x_3|( |x_1 - x_2| + |x_3 - x_4|) .\end{align*}

Thus, Proposition 2.1 follows from Lemma 2.1.

Let us first recall the definition of the fractional integral appearing in (2.1), which is an extension of the Stieltjes integral (see [Reference Zähle29]). Let $a,b \in \mathbb{R}$ with $a<b$ . For $\alpha \in (0,1)$ and a function $f \,:\, [a,b] \to \mathbb{R}$ , the Weyl derivatives, denoted by $D_{a+}^\alpha f$ and $D_{b-}^\alpha f$ , are defined by

\begin{align*}D_{a+}^\alpha \,f (x) & = \frac{1}{\Gamma (1- \alpha)} \bigg( \frac{f(x)}{(x-a)^\alpha} + \alpha \int_a^x \frac{f(x)-f(y)}{(x-y)^{\alpha+1}} \, {\mathrm{d}} y \bigg) , \qquad x\in (a,b), \\[5pt]D_{b-}^\alpha \,f (x) &= \frac{({-}1)^\alpha}{\Gamma (1- \alpha)} \bigg( \frac{f(x)}{(b-x)^\alpha} + \alpha \int_x^b \frac{f(x)-f(y)}{(y-x)^{\alpha+1}} \, {\mathrm{d}} y \bigg) , \qquad x\in (a,b),\end{align*}

provided that $D_{a+}^\alpha f \in L_p$ and $D_{b-}^\alpha f \in L_p$ for some $p \geq 1$ , respectively. The convergence of the above integrals at the singularity $y=x$ holds pointwise for almost all $x \in (a,b)$ if $p=1$ and in the $L_p$ sense if $p \in (1, \infty)$ . Denote by $g_{b-}$ the function given by $ g_{b-}(x) = g(x) - g(b{-})$ , $x \in (a,b)$ . Assume that $D_{a+}^\alpha f \in L_1$ and $D_{b-}^{1-\alpha} g_{b-} \in L_\infty$ . Then the generalized Stieltjes or fractional integral of f with respect to g is defined as

\[\!\int_a^b f(x) \, {\mathrm{d}} g(x) \,:\!=\, ({-}1)^\alpha \int_a^b D_{a+}^\alpha f(x) D_{b-}^{1-\alpha} g_{b-} (x) \, {\mathrm{d}} x.\]

Let $\alpha \in \big(1-H, \frac{1}{2}\big)$ and $\lambda \in (0,1]$ . In the following, denote by $W_0^{\alpha, \infty}$ the space of measurable functions $g \,:\, [0,T] \to \mathbb{R}$ such that

\begin{equation*} \| g\|_{\alpha, \infty} \,:\!=\, \sup_{t \in [0,T]} \bigg( |g(t)| + \int_0^t \frac{|g(t) - g(s)|}{(t-s)^{\alpha +1}} \, {\mathrm{d}} s \bigg) < \infty ,\end{equation*}

and denote by $C^\lambda$ the space of $\lambda$ -Hölder continuous functions $g \,:\, [0,T] \to \mathbb{R}$ equipped with the norm

\[ \|g\|_\lambda \,:\!=\,\sup_{t \in [0,T]}|g(t)| + \sup_{0 \leq s < t \leq T}\frac{|g(t) - g(s)|}{(t-s)^{\lambda}}.\]

We have, for all $\varepsilon \in (0, \alpha)$ , $ C^{\alpha + \varepsilon} \subset W_0^{\alpha, \infty} \subset {C^{\alpha - \varepsilon}}$ . Let $f \in C^\lambda$ and $g \in C^\mu$ with $\lambda, \mu \in (0,1]$ such that $\lambda + \mu >1$ . It is a well-known result, see [Reference Zähle29, Theorem 4.2.1], that under this condition the fractional integral $\int_0^T f(x) \, {\mathrm{d}} g(x)$ exists and agrees with the corresponding Riemann–Stieltjes integral. In particular, we have $D_{0+}^\alpha f \in L_1$ and $D_{T-}^{1-\alpha} g_{T-} \in L_\infty$ .
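For the reader's convenience we recall the routine estimate behind the last assertion, which is standard and not taken from the paper: for $\alpha \in (1-\mu, \lambda)$ (such $\alpha$ exist since $\lambda + \mu > 1$) and $x \in (0,T)$,

\[ \big| D_{0+}^\alpha f (x) \big| \leq \frac{\| f\|_\lambda}{\Gamma (1- \alpha)} \bigg( x^{-\alpha} + \alpha \int_0^x (x-y)^{\lambda - \alpha -1} \, {\mathrm{d}} y \bigg) = \frac{\| f\|_\lambda}{\Gamma (1- \alpha)} \bigg( x^{-\alpha} + \frac{\alpha}{\lambda - \alpha}\, x^{\lambda - \alpha} \bigg) ,\]

which is integrable on (0, T) since $\alpha < 1$, so that $D_{0+}^\alpha f \in L_1$; an analogous bound, using $\mu + \alpha > 1$, shows that $D_{T-}^{1-\alpha} g_{T-}$ is bounded.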

Assume that $Y=(Y_t)_{t \in [0,T]}$ satisfies $Y \in C^\lambda$ almost surely for all $\lambda \in (0, \frac12)$ . Then, according to the aforementioned remarks, the integral $\int_0^T c(Y_r) \, {\mathrm{d}} B_r^H$ is well-defined when $c \,:\, \mathbb{R} \to \mathbb{R}$ is Lipschitz continuous. Let $\lambda \in \big(0, \frac12\big)$ and $\beta \in (0,H)$ , with $\lambda+\beta >1$ . Similarly to [Reference Fan and Zhang1, (3.8)], we can prove the estimate

(2.2) \begin{align}\bigg| \int_s^t c(Y_r) \, {\mathrm{d}} B_r^H \bigg| & \leq K \|B^H\|_\beta \bigg(\!\int_s^t |c(Y_r)| (r-s)^{-\alpha} (t-r)^{\alpha + \beta -1} \, {\mathrm{d}} r \nonumber \\& \quad + \|c\|_1 \| Y\|_\lambda \int_s^t (r-s)^{\lambda - \alpha} (t-r)^{\alpha + \beta -1} \, {\mathrm{d}} r \bigg)\end{align}

for $\alpha \in \big(1-H, \frac12\big)$ and some constant $K \in (0, \infty)$ .
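For later use we note that both integrals on the right-hand side of (2.2) are Beta integrals: substituting $r = s + (t-s) u$ gives

\[ \int_s^t (r-s)^{\lambda - \alpha} (t-r)^{\alpha + \beta -1} \, {\mathrm{d}} r = B(\lambda - \alpha + 1, \alpha + \beta) \, (t-s)^{\lambda + \beta} ,\]

and, similarly, the first integral is of order $(t-s)^{\beta}$ whenever $c(Y)$ is bounded on [s, t]. This elementary computation is the source of the rate $\varepsilon^{\beta^{\prime} + \beta}$ appearing in (4.2) below.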

The goal of this paper is to study mixed SDEs with irregular coefficients. In particular, the function a is allowed to be discontinuous. It turns out that we can prove existence and uniqueness for such SDEs under the following assumption.

Assumption 2.1.

  (i) The function a is piecewise Lipschitz according to [Reference Leobacher and Szölgyenyi9, Definition 2.1], and its discontinuity points are given by $\xi_1 < \cdots < \xi_k \in \mathbb{R}$ for some $k \in \mathbb{N}$ , i.e. a is Lipschitz continuous on each of the intervals $({-}\infty, \xi_1)$ , $(\xi_k, \infty)$ , and $(\xi_j,\xi_{j+1})$ , $1 \leq j \leq k-1$ .

  (ii) The function b is Lipschitz continuous on $\mathbb{R}$ and $b (\xi_i) \neq 0$ for all $i \in \{1, \ldots, k\}$ .

  (iii) The function c is Lipschitz continuous with bounded derivative c ′ which is Lipschitz continuous on $\mathbb{R}$ as well, and $c (\xi_i) = 0$ for all $i \in \{1, \ldots, k\}$ .

We stress that these assumptions are satisfied by a variety of (practical) examples. One such is given by the SDE ${\mathrm{d}} X_t = -\operatorname{sign}\! (X_t) \, {\mathrm{d}} t + {\mathrm{d}} W_t + X_t \, {\mathrm{d}} B_t^H$ , $X_0 = \xi \in \mathbb{R}$ .
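Indeed, for this equation the drift $a(x) = -\operatorname{sign}(x)$ is Lipschitz continuous on $({-}\infty, 0)$ and on $(0, \infty)$ with the single discontinuity point $\xi_1 = 0$, the diffusion coefficient $b \equiv 1$ satisfies $b(\xi_1) = 1 \neq 0$, and the fractional coefficient $c(x) = x$ is Lipschitz continuous with constant derivative $c^{\prime} \equiv 1$ and $c(\xi_1) = 0$, so that Assumption 2.1(i)–(iii) hold. Purely for illustration (the paper makes no claims about numerical schemes for (2.1), and the names fgn_increments and euler_mixed as well as the step size below are ad hoc choices), this example can be discretized by an explicit Euler-type scheme, with the fractional increments sampled exactly via a Cholesky factorization of the fractional Gaussian noise covariance:

```python
import numpy as np

def fgn_increments(H, n, dt, rng):
    """Increments of fractional Brownian motion on a uniform grid, sampled via a
    Cholesky factorization of the fractional-Gaussian-noise covariance
    dt^{2H} * ((m+1)^{2H} - 2 m^{2H} + |m-1|^{2H}) / 2 at lag m (illustration only)."""
    k = np.arange(n)
    rho = 0.5 * (np.abs(k + 1)**(2 * H) - 2 * np.abs(k)**(2 * H) + np.abs(k - 1)**(2 * H))
    C = dt**(2 * H) * rho[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(C + 1e-12 * np.eye(n)) @ rng.standard_normal(n)

def euler_mixed(xi=1.0, H=0.7, T=1.0, n=1000, seed=0):
    """Explicit Euler-type sketch for dX = -sign(X) dt + dW + X dB^H, X_0 = xi."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = np.sqrt(dt) * rng.standard_normal(n)      # Brownian increments
    dB = fgn_increments(H, n, dt, rng)             # fractional increments
    X = np.empty(n + 1)
    X[0] = xi
    for k in range(n):
        X[k + 1] = X[k] - np.sign(X[k]) * dt + dW[k] + X[k] * dB[k]
    return X

print(euler_mixed()[-1])   # terminal value of one simulated path
```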

Our main theorem now reads as follows.

Theorem 2.1. Under Assumption 2.1, (2.1) admits a unique strong solution.

Proof. Let $\Theta = \{ \xi_1, \ldots, \xi_k\}$ and $U = \mathbb{R} \setminus \Theta$ . Recall from [Reference Müller-Gronbach and Yaroslavtseva18, Lemma 7] that there is a function $G \,:\, \mathbb{R} \to \mathbb{R}$ satisfying the following:

  • G is Lipschitz continuous, and differentiable on $\mathbb{R}$ with $0 < \inf_{x \in \mathbb{R}} G^{\prime}(x) \leq \sup_{x \in \mathbb{R}} G^{\prime}(x) < \infty$ ;

  • G has an inverse $G^{-1} \,:\, \mathbb{R} \to \mathbb{R}$ that is Lipschitz continuous and differentiable on $\mathbb{R}$ with $G(\xi_i)=\xi_i$ for $i=1, \ldots, k$ ;

  • the derivative G ′ of G is Lipschitz continuous on $\mathbb{R}$ ;

  • the derivative G ′ of G has a bounded Lebesgue density $G^{\prime\prime} \,:\, \mathbb{R} \to \mathbb{R}$ that is piecewise Lipschitz with discontinuity points given by $\xi_1 < \cdots < \xi_k$ such that $\tilde{a} = \big(G^{\prime} \cdot a + \frac{1}{2} G^{\prime\prime} \cdot b^2\big) \circ G^{-1}$ and $\tilde{b} = (G^{\prime} \cdot b) \circ G^{-1}$ are Lipschitz continuous.
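For orientation only (the explicit form is not needed in what follows): in the case of a single discontinuity point $\xi_1$, transforms of this type, as used in [Reference Leobacher and Szölgyenyi9, Reference Leobacher and Szölgyenyi10], are, up to modifications ensuring $G(x) = x$ for large $|x|$, of the schematic form

\begin{equation*}G(x) = x + \alpha \, (x - \xi_1) |x - \xi_1| \, \phi \bigg( \frac{x - \xi_1}{c_0} \bigg) , \qquad \alpha = \frac{a(\xi_1 {-}) - a(\xi_1 {+})}{2 b^2(\xi_1)} ,\end{equation*}

with a sufficiently smooth bump function $\phi$ supported in $[-1,1]$ satisfying $\phi(0) = 1$ and a small parameter $c_0 \in (0, \infty)$. Then $G(\xi_1) = \xi_1$, $G^{\prime}(\xi_1) = 1$, and $G^{\prime\prime}(\xi_1 {\pm}) = \pm 2 \alpha$, and this choice of $\alpha$ is precisely what makes $\tilde{a}$ continuous at $\xi_1$. The construction in [Reference Müller-Gronbach and Yaroslavtseva18, Lemma 7] differs in its details but serves the same purpose.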

Define, for all $x \in \mathbb{R}$ ,

\begin{equation*}f(x) \,:\!=\, \frac{{\mathrm{d}}}{{\mathrm{d}} x} G^{\prime} \big( G^{-1} (x) \big) = G^{\prime\prime} \big( G^{-1} (x) \big) \cdot \frac{1}{G^{\prime} \big( G^{-1} (x) \big)} = h(x) \cdot j(x),\end{equation*}

with

\[h(x) =G^{\prime\prime} \big( G^{-1} (x) \big) , \qquad j(x) = \frac{1}{G^{\prime} \big( G^{-1} (x) \big)}. \]

By the assumptions listed above, f is bounded. Moreover, the function j is bounded and differentiable (on U) with bounded derivative, and thus the function j is Lipschitz continuous. The function h is bounded and, as a composition of Lipschitz continuous functions, Lipschitz continuous on U; hence the function f is Lipschitz continuous on U as a product of bounded, Lipschitz continuous functions. Similarly, the function g with

\begin{equation*}g(x) \,:\!=\, \frac{{\mathrm{d}}}{{\mathrm{d}} x} c \big( G^{-1} (x) \big) = c^{\prime} \big( G^{-1} (x) \big) \cdot \frac{1}{G^{\prime} \big( G^{-1} (x) \big)} ,\qquad x \in \mathbb{R},\end{equation*}

is bounded and Lipschitz continuous on U. Now, for all $x \in \mathbb{R}$ , let $\tilde{c} (x)= c \big( G^{-1} (x) \big) \cdot G^{\prime} \big( G^{-1} (x) \big)$ . Then, $\tilde{c}$ is differentiable in U and, for $x \in U$ , we have

\begin{align*}\tilde{c} ^{\prime}(x)& =\frac{{\mathrm{d}}}{{\mathrm{d}} x} \tilde{c} (x) = c \big( G^{-1} (x) \big) \cdot f(x) + g(x) \cdot G^{\prime} \big( G^{-1} (x) \big) \\& = c \big( G^{-1} (x) \big) \cdot G^{\prime\prime} \big( G^{-1} (x) \big) \cdot \frac{1}{G^{\prime} \big( G^{-1} (x) \big)} + c^{\prime} \big( G^{-1} (x) \big)\cdot \frac{1}{G^{\prime} \big( G^{-1} (x) \big)} \cdot G^{\prime} \big( G^{-1} (x) \big) .\end{align*}

Then, by the considerations above, the function $\tilde{c} ^{\prime}$ is bounded and Lipschitz continuous on U. Now consider the extension $\tilde{c}^{\prime} \,:\, \mathbb{R} \to \mathbb{R}$ of $\tilde{c}^{\prime}$ that we define by setting

\[ \tilde{c} ^{\prime}(\xi_i) =c \big( G^{-1} (\xi_i) \big) \cdot G^{\prime\prime} \big( G^{-1} (\xi_i) \big)\cdot \frac{1}{G^{\prime} \big( G^{-1} (\xi_i) \big)} + c^{\prime} \big( G^{-1} (\xi_i) \big)\]

for all $i \in \{1, \ldots, k\}$ . By construction and by Assumption 2.1(iii) we have, for all $i \in \{1, \ldots, k\}$ ,

\begin{equation*}\tilde{c}^{\prime} (\xi_i {+}) = c (\xi_i + ) \cdot G^{\prime\prime} (\xi_i{+}) \frac{1}{G^{\prime}(\xi_i{+})} + c^{\prime} (\xi_i {+}) = c^{\prime} (\xi_i ) = c^{\prime} (\xi_i {-}) = \tilde{c}^{\prime} (\xi_i {-}).\end{equation*}

Thus, the function $\tilde{c}^{\prime}$ is continuous and piecewise Lipschitz, and hence Lipschitz continuous by [Reference Leobacher and Szölgyenyi9, Lemma 2.6]. Moreover, the function G is constructed in the proof of [Reference Müller-Gronbach and Yaroslavtseva18, Lemma 7] in such a way that $G(x) = x$ for all $x \in \mathbb{R}$ with $|x| > K$ , where $K \in (0, \infty)$ is some constant. Thus, we have $G^{\prime\prime}(x) = 0$ for every such x. Hence, $\tilde{c}^{\prime}$ is bounded, as G ′′ has compact support. We conclude that the function $\tilde{c}$ defined above admits a bounded Lebesgue density that is Lipschitz continuous. Now consider the (transformed) SDE given by

\begin{equation*}{\mathrm{d}} Z_t = \tilde{a} (Z_t) \, {\mathrm{d}} t + \tilde{b} (Z_t) \, {\mathrm{d}} W_t + \tilde{c} (Z_t) \, {\mathrm{d}} B_t^H, \qquad Z_0 = G(\xi) \in \mathbb{R} .\end{equation*}

By Proposition 2.1, this SDE admits a unique solution. Moreover, the inverse $G^{-1}$ of G inherits the smoothness from G, i.e. it satisfies the conditions in Itô’s formula, Theorem 4.1 below. Applying Theorem 4.1 to $G^{-1}$ and setting $X_t= G^{-1} (Z_t)$ , $t\in[0,T]$ , we obtain that the process $(X_t)_{t \in [0,T]}$ satisfies

\begin{equation*}{\mathrm{d}} X_t = a (X_t) \, {\mathrm{d}} t + b (X_t) \, {\mathrm{d}} W_t + c (X_t) \, {\mathrm{d}} B_t^H, \qquad X_0 = \xi \in \mathbb{R} .\end{equation*}

This completes the proof.

3. Itô’s formula for mixed SDEs

In the following, assume that $a,b,c \,:\, \mathbb{R} \to \mathbb{R}$ are Lipschitz, and c is differentiable with bounded and Lipschitz continuous derivative c ′, so that, by [Reference Mishura and Shevchenko17, Theorem 3.1], the solution to

\begin{align*}X_t & = X_0+ \int_0^t a(X_s) \, {\mathrm{d}} s + \int_0^t b(X_s) \, {\mathrm{d}} W_s + \int_0^t c(X_s) \, {\mathrm{d}} B_t^H ,\qquad t \in [0,T], \\X_0 & = \xi \in \mathbb{R} \end{align*}

exists and is unique.

Theorem 3.1. Let $f \,:\, \mathbb{R} \to \mathbb{R}$ be twice continuously differentiable. Then, almost surely,

\begin{align*} f (X_t)& = f(X_0) + \int_0^t f^{\prime}(X_s) b(X_s) \, {\mathrm{d}} W_s + \int_0^t f^{\prime}(X_s) c(X_s) \, {\mathrm{d}} B^H_s \\& \quad + \int_0^t \big( \tfrac{1}{2}b^2(X_s) f^{\prime\prime}(X_s) +a(X_s) f^{\prime}(X_s) \big) \, {\mathrm{d}} s, \qquad t \in [0, T] .\end{align*}

Proof. By the usual localization argument (see the proof of [Reference Karatzas and Shreve4, Theorem 3.3]) we may assume that f has compact support and that f, f ′, f ′′ are bounded. Fix $t \in (0, T]$ and a sequence $\big(\Pi^n = \big\{ 0=t_0^n < t^n_1< \cdots< t^n_m = t\big\}\big)_{n \in \mathbb{N}}$ , $m \in \mathbb{N}$ , of partitions of [0, t] with $\max_{1 \leq k \leq m} |t_k^n - t_{k-1}^n| \to 0$ , $n \to \infty$ . For notational simplicity we will suppress the index n and simply write $\Pi = \{ 0=t_0 < t_1< \cdots< t_m = t\}$ . By Taylor expansion,

\begin{align*}f(X_t) - f(X_0)& = \sum_{k=1}^{m} \big( f\big(X_{t_k}\big) - f\big(X_{t_{k-1}}\big) \big) \\& = \sum_{k=1}^{m} f^{\prime}\big(X_{t_{k-1}}\big) \big(X_{t_k} - X_{t_{k-1}}\big)+ \frac{1}{2}\sum_{k=1}^{m} f^{\prime\prime}(\eta_{{k}})\big(X_{t_k} - X_{t_{k-1}}\big)^2 ,\end{align*}

with $\eta_{{k}}\,:\!=\, X_{t_{k-1}} + \theta_k \big(X_{t_k} - X_{t_{k-1}}\big)$ for some random variable $\theta_k = \theta_k (\omega) \in [0,1]$ , $\omega \in \Omega$ . We write $f(X_t) - f(X_0) = J_0 + J_1 + J_2 + \frac{1}{2} J_3$ , with

\begin{align*}J_0 &= \sum_{k=1}^m f^{\prime}\big(X_{t_{k-1}}\big) \int_{t_{k-1}}^{t_k} a(X_s) \, {\mathrm{d}} s, \\J_1 &= \sum_{k=1}^m f^{\prime}\big(X_{t_{k-1}}\big) \int_{t_{k-1}}^{t_k} c(X_s) \, {\mathrm{d}} B^H_s, \\J_2 &= \sum_{k=1}^m f^{\prime}\big(X_{t_{k-1}}\big) \int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s, \\J_3 & = \sum_{k=1}^{m} f^{\prime\prime}(\eta_{{k}})\big(X_{t_k} - X_{t_{k-1}}\big)^2 .\end{align*}

Observe that $J_0$ converges to the Lebesgue–Stieltjes integral $\int_0^t f^{\prime}(X_s) a(X_s) \, {\mathrm{d}} s$ as $n \to \infty$ , almost surely. Now we turn to the term $J_1$ . By [Reference Mishura and Shevchenko17, Theorem 3.1], for all $\alpha \in \big(1-H, \frac{1}{2}\big)$ we have $ X = (X_t)_{t \in [0,T]} \in W_0^{\alpha, \infty}$ almost surely. Therefore, $X \in C^{\frac{1}{2}-}$ and we conclude that, by the assumptions made, $ f^{\prime}(X) c(X) \in C^{\frac{1}{2}-}$ almost surely. Thus, by [Reference Mishura13, Theorem 2.1.7], the Riemann–Stieltjes integral $ \int_0^t f^{\prime}(X_s) c(X_s) \, {\mathrm{d}} B_s^H$ exists and equals the limit $\lim_{n \to \infty} J_1$ , almost surely.

Now we consider the term $J_2$ . Define $Y_s \,:\!=\, f^{\prime}(X_s)$ , $s \in [0,T]$ , which we are going to approximate by

\[ Y_s^\Pi = f^{\prime}(X_0) \textbf{1}_{\{0\}} (s)+ \sum_{k=1}^m f^{\prime}\big(X_{t_{k-1}}\big) \textbf{1}_{(t_{k-1}, t_k]} (s) , \qquad s\in [0,t] .\]

By Itô’s isometry we have

\begin{equation*}\mathbb{E} \bigg[ \bigg(\!\int_0^t b(X_s) \big(Y_s^\Pi - Y_s\big) \, {\mathrm{d}} W_s \bigg) ^2 \bigg]= \mathbb{E} \bigg[\!\int_0^t b^2(X_s) \big(Y_s^\Pi - Y_s\big)^2 \, {\mathrm{d}} s \bigg] \to 0\end{equation*}

as $n \to \infty$ , by the dominated convergence theorem. Thus, $ J_2 \to \int_0^t b(X_s) f^{\prime}(X_s) \, {\mathrm{d}} W_s$ , $n \to \infty$ , in $L^2$ , i.e.

\[ \mathbb{E} \bigg[ \bigg( J_2 - \int_0^t b(X_s) f^{\prime}(X_s) \, {\mathrm{d}} W_s \bigg) ^2 \bigg] \to 0,\qquad n \to \infty.\]

It remains to consider the expression $J_3$ . We begin by writing $ J_3 = J_4 + J_5 + J_6 + J_7 + $ $ J_8 + J_9$ , with

\begin{align*}J_4 &= \sum_{k=1}^m f^{\prime\prime}(\eta_{{k}})\bigg(\int_{t_{k-1}}^{t_k} c(X_s) \, {\mathrm{d}} B^H_s\bigg) ^2, \\J_5 &= \sum_{k=1}^m f^{\prime\prime}({\eta_{k}}) \bigg(\!\int_{t_{k-1}}^{t_k} a(X_s) \, {\mathrm{d}} s\bigg) ^2, \\J_6 &= \sum_{k=1}^m f^{\prime\prime}({\eta_{k}}) \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s\bigg) ^2, \\J_7 & = 2\sum_{k=1}^{m} f^{\prime\prime}(\eta_{{k}}) \bigg(\!\int_{t_{k-1}}^{t_k} a(X_s) \, {\mathrm{d}} s\bigg)\bigg(\int_{t_{k-1}}^{t_k} c(X_s) \, {\mathrm{d}} B^H_s\bigg) , \\J_8 & = 2\sum_{k=1}^{m} f^{\prime\prime}(\eta_{{k}}) \bigg(\!\int_{t_{k-1}}^{t_k} a(X_s) \, {\mathrm{d}} s\bigg)\bigg(\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s\bigg) , \\J_9 & = 2\sum_{k=1}^{m} f^{\prime\prime}(\eta_{{k}}) \bigg(\int_{t_{k-1}}^{t_k} c(X_s) \, {\mathrm{d}} B^H_s\bigg)\bigg(\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s\bigg) .\end{align*}

We first estimate $|J_4|$ . In order to do this, first define $W_T^{1-\alpha, \infty}$ for $\alpha \in \big(0, \frac{1}{2}\big)$ as the space of measurable functions $g \,:\, [0,T] \to \mathbb{R}$ such that

\[ \| g \| _{1-\alpha, \infty, T} \,:\!=\, \sup_{0 < s < t < T} \bigg( \frac{|g(t) - g(s)|}{(t-s)^{1-\alpha }} + \int_s^t \frac{|g(y) - g(s)|}{(y-s)^{2-\alpha }} \, {\mathrm{d}} y \bigg) < \infty .\]

We have the relation $ C^{1-\alpha + \varepsilon} \subset W_T^{1-\alpha, \infty} \subset C^{1-\alpha}$ for every $\varepsilon \in (0, \infty)$ . Recall that $ (X_t)_{t \in [0,T]} \in W_0^{\alpha, \infty}$ almost surely. Since c is assumed to be Lipschitz, we have $ \big( c(X_t)\big)_{t \in [0,T]} \in W_0^{\alpha, \infty}$ almost surely. Moreover, by the remarks above, $ \big(B^H_t\big)_{t \in [0,T]} \in W_T^{H-\varepsilon, \infty}$ for all $\varepsilon \in (0, \infty)$ almost surely. Consequently, by [Reference Nualart and Răşcanu22, Proposition 4.2] we have

\[\bigg(\!\int_0^t c(X_s) \, {\mathrm{d}} B^H_s \bigg)_{t \in [0,T]} \in C^{H - \varepsilon} \quad\text{for all}\ \varepsilon \in (0, \infty)\]

almost surely. Let $\gamma \in \big(\frac12 , H-\varepsilon\big)$ . From this, together with the boundedness of f ′′, for some constant $K \in (0, \infty)$ we obtain

\[ | J_4| \leq K \sum_{k=1}^{m} ( {t_k} - {t_{k-1}} ) ^{2\gamma} \to 0, \qquad n \to \infty,\]

almost surely. We continue by estimating $|J_5| + |J_7|$ . Recall that the mapping $ [0,T] \ni t \mapsto \int_0^t a(X_s) \, {\mathrm{d}} s$ is continuous, of bounded variation, and that the mapping $ [0,T] \ni t \mapsto \int_0^t c(X_s) \, {\mathrm{d}} B^H_s$ is continuous, almost surely. Thus,

\begin{equation*}|J_5| + |J_7| \leq K \bigg( \max_{1 \leq k \leq m} \bigg|\int_{t_{k-1}}^{t_k} c(X_s) \, {\mathrm{d}} B^H_s\bigg|+ \max_{1 \leq k \leq m}\bigg|\int_{t_{k-1}}^{t_k} a(X_s) \, {\mathrm{d}} s\bigg| \bigg) \to 0, \qquad n \to \infty,\end{equation*}

almost surely. The same argument shows that $|J_8|$ converges to 0 as $n \to \infty$ , almost surely. Now we estimate $|J_9|$ . Recall that

\[\bigg(\!\int_0^t c(X_s) \, {\mathrm{d}} B^H_s \bigg)_{t \in [0,T]} \in C^{H - \varepsilon} \quad\text{for all}\ \varepsilon \in (0, \infty)\]

almost surely. We now combine this with the fact that

\[\bigg(\!\int_0^t b(X_s) \, {\mathrm{d}} W_s \bigg)_{t \in [0,T]} \in C^{\frac{1}{2}-}\]

almost surely. Indeed, choose $\alpha \in (0,H)$ and $\beta \in \big(0, \frac{1}{2}\big)$ with $\alpha + \beta >1$ . Employing the Hölder continuity, we easily get, for some constants,

\begin{align*}|J_9| & \leq K_1 \sum_{k=1}^m |t_k - t_{k-1}|^\alpha|t_k - t_{k-1}|^\beta \\& \leq K_2 \max_{1 \leq k \leq m} |t_k - t_{k-1}|^{\alpha + \beta -1} \to 0, \qquad n \to \infty,\end{align*}

almost surely, since $\alpha + \beta >1$ . It remains to consider

\[J_6 = \sum_{k=1}^m f^{\prime\prime}(\eta_{k}) \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg) ^2.\]

Define

\[J^{\prime}_6 = \sum_{k=1}^m f^{\prime\prime}\big(X_{t_{k-1}}\big) \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg) ^2.\]

Then,

\[| J_6 - J^{\prime}_6| \leq \max_{1 \leq k \leq m} |f^{\prime\prime}(\eta_{k}) - f^{\prime\prime}\big(X_{t_{k-1}}\big)|\sum_{k=1}^m\bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg) ^2 .\]

Note that $(b(X_s))_{s \in [0,T]}$ is adapted and continuous, and hence progressively measurable. Moreover, since $\mathbb{E} \big[\! \int_0^t b^2(X_s) \, {\mathrm{d}} s \big] < \infty$ by assumption, we get that the process $\big(\!\int_0^t b(X_s) \, {\mathrm{d}} W_s \big)_{t \in [0,T]}$ is a martingale with respect to the underlying filtration. Thus, by [Reference Karatzas and Shreve4, Lemma 1.5.9],

\[\mathbb{E} \Bigg[ \sum_{k=1}^m \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg)^2 \Bigg] \leq K\]

for some constant, and from the Cauchy–Schwarz inequality we further get that

\begin{equation*}\mathbb{E} \big[| J_6 - J^{\prime}_6|\big] \leq \bigg\{\mathbb{E} \bigg[ \Big( \max_{1 \leq k \leq m} |f^{\prime\prime}(\eta_{k}) - f^{\prime\prime}\big(X_{t_{k-1}}\big) | \Big) ^2 \bigg]\bigg\}^{1/2} \to 0, \qquad n \to \infty,\end{equation*}

by the dominated convergence theorem and the fact that f ′′ and X are continuous. Now define

\[J^{\prime}_7 \,:\!=\, \sum_{k=1}^m f^{\prime\prime}\big(X_{t_{k-1}}\big)\int_{t_{k-1}}^{t_k} b^2(X_s) \, {\mathrm{d}} s .\]

By the martingale property, see also [Reference Karatzas and Shreve4, p. 32], we have

\begin{align*}\mathbb{E} \big[| J^{\prime}_6 - J^{\prime}_7|^2\big]& = \mathbb{E} \Bigg[ \bigg| \sum_{k=1}^m f^{\prime\prime}\big(X_{t_{k-1}}\big) \bigg( \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg) ^2 - \int_{t_{k-1}}^{t_k} b^2(X_s) \, {\mathrm{d}} s\bigg) \bigg|^2 \Bigg] \\& = \mathbb{E} \Bigg[\sum_{k=1}^m \big(f^{\prime\prime}\big(X_{t_{k-1}}\big)\big) ^2 \bigg( \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg) ^2 -\int_{t_{k-1}}^{t_k} b^2(X_s) \, {\mathrm{d}} s\bigg)^2 \Bigg] \\& \leq 2K \mathbb{E} \Bigg[\sum_{k=1}^m \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg) ^4 +\sum_{k=1}^m \bigg(\int_{t_{k-1}}^{t_k} b^2(X_s) \, {\mathrm{d}} s\bigg)^2 \Bigg] \\& \leq 2K \mathbb{E} \Bigg[ \sum_{k=1}^m \bigg(\!\int_{t_{k-1}}^{t_k} b(X_s) \, {\mathrm{d}} W_s \bigg) ^4 +\bigg(\int_{0}^{t} b^2(X_s) \, {\mathrm{d}} s\bigg) \max_{1 \leq k \leq m} \bigg(\int_{t_{k-1}}^{t_k} b^2(X_s) \, {\mathrm{d}} s\bigg)\Bigg] \\& \to 0, \qquad n \to \infty,\end{align*}

by [Reference Karatzas and Shreve4, Lemma 1.5.10] and the dominated convergence theorem. Overall, we conclude that $ J_3 \to \int_0^t f^{\prime\prime}(X_s) b^2(X_s) \, {\mathrm{d}} s$ , $n \to \infty$ , in $L^1$ . All these findings imply the result by standard arguments.

4. A generalized Itô formula for mixed SDEs

The goal of this section is a novel proof of the following new variant of Itô’s formula that differs from the one presented in the previous section.

Theorem 4.1. Let $\Theta = \{ \xi_1, \ldots, \xi_k\}$ be as in the proof of Theorem 2.1. Let $f \,:\, \mathbb{R} \to \mathbb{R}$ be a continuously differentiable function such that, for all $x \in \mathbb{R} \setminus \Theta$, the second derivative f ′′(x) exists and the function $f^{\prime\prime} \,:\, \mathbb{R} \setminus \Theta \to \mathbb{R}$ is continuous and bounded. For definiteness, extend f ′′ to $\mathbb{R}$ in such a way that $f^{\prime\prime} \,:\, \mathbb{R} \to \mathbb{R}$ is measurable. Moreover, let X be as in Theorem 3.1, where we assume that $b(\xi_i) \neq 0$ for all $i=1, \ldots, k$ . Then, almost surely,

\begin{align*}f (X_t)& = f(X_0) + \int_0^t f^{\prime}(X_s) b(X_s) \, {\mathrm{d}} W_s + \int_0^t f^{\prime}(X_s) c(X_s) \, {\mathrm{d}} B^H_s \\& \quad + \int_0^t \big( \tfrac{1}{2}b^2(X_s) f^{\prime\prime}(X_s) +a(X_s) f^{\prime}(X_s) \big) \, {\mathrm{d}} s, \qquad t \in [0, T] .\end{align*}

Remark 4.1. In proving Theorem 4.1 we shall make use of the localization argument, as executed in the proof of Theorem 3.1. Let X be as in Theorem 4.1, and denote by $\| \cdot \|_t$ , $t \in [0,T]$ , a norm such that $\| X\|_t$ is almost surely finite. Choose a sequence of non-decreasing stopping times $(T_n)_{n \in \mathbb{N}}$ with the property that $\| X\|_t \leq n$ for all $t \in [0, T_n]$ , $n \in \mathbb{N}$ . Then, it suffices to establish Theorem 4.1 for the stopped process $X_t^{(n)} \,:\!=\, X_{t \wedge T_n}$ , $t \in [0,T]$ , $n \in \mathbb{N}$ . Therefore, without loss of generality, in our proofs we will often assume that $\sup_{t \in [0,T]} \| X\|_t$ is bounded by some constant $K \in (0, \infty)$ . A frequent choice for the norm will be $\| X\|_t = |X_t|$ or $\| X\|_t = {|X_t - X_s|}/{(t-s)^\lambda}$ , $0 \leq s <t$ , for $\lambda \in (0,1]$ .

As mentioned in the introduction, we present a proof of this variant of Itô’s formula for mixed SDEs which combines ideas from [Reference Fan and Zhang1, Reference Leobacher, Reisinger and Stockinger8]. The first essential step is to establish the existence of a density of the law of $X_t$ for every $t \in (0,T]$, under only weak assumptions on the diffusion coefficient b. In particular, we do not require non-degeneracy conditions, nor do we require any assumptions on the fractional coefficient c.

We proceed by studying the existence of a density, which shortly after enables us to provide a proof of our main result in this section. We first introduce some notation used throughout this section. For $h \in \mathbb{R}$ and $m \in \mathbb{N}$ we define $\Delta_h$ to be the difference operator with respect to h, and $\Delta_h^m$ to be the difference operator of order m:

\[ \Delta_h f(x) = f(x+h) - f(x), \qquad \Delta_h^m f(x) = \Delta_h \big(\Delta^{m-1}_h f\big) (x) ,\]

for every function $f \,:\, \mathbb{R} \to \mathbb{R}$ and $x \in \mathbb{R}$ . Moreover, for $\gamma \in (0,m)$ we set $\mathcal{C}_b^\gamma$ to be the closure of bounded smooth functions with respect to the norm

\[ \| f\|_{\mathcal{C}_b^\gamma} \,:\!=\, \|f \|_\infty + \sup_{ |h| \leq 1} \frac{\|\Delta_h^m f \|_\infty}{|h|^\gamma},\]

where $\| \cdot \|_\infty$ denotes the sup norm. Our second main result of this section reads as follows.

Lemma 4.1. Let X be as in Theorem 3.1. Assume that $\| X\|_{\beta^{\prime}} \leq M$ for some constant $M\in (0,\infty)$ and all $\beta^{\prime} \in \big(0,\frac12\big)$ . For all $t \in (0,T]$ , the law of $X_t$ admits a density with respect to the Lebesgue measure on the set $D_b = \{ x \in \mathbb{R}\,:\, b(x) \neq 0\}$ . In particular, $P(X_t =x) = 0$ for all $x \in D_b$ .

Note that, using similar arguments, we could also prove that Lemma 4.1 holds for the set $\{ x \in \mathbb{R}\,:\, c(x) \neq 0\}$, but this is not important for our purposes. The proof of Lemma 4.1 closely follows the approach in [Reference Fan and Zhang1, Section 4]. We will invoke the following two results: Lemma 4.2 is the statement of [Reference Fan and Zhang1, Lemma 4.6] in dimension one; Lemma 4.3 is due to [Reference Romito23, Section 2].

Lemma 4.2. Let $\rho \,:\, \mathbb{R} \to [0, \infty)$ be a continuous function and $\delta \in (0, \infty)$ . We write $D_\delta = \{ x \in \mathbb{R} \,:\, \rho(x) \leq \delta \}$ , and define a function $h_\delta \,:\, \mathbb{R} \to [0, \delta]$ with

\[h_\delta (x) = (\!\inf \{ |x-z| \,:\, z \in D_\delta \} ) \wedge \delta, \qquad x \in \mathbb{R},\]

where we use the convention that $\inf \{ |x-z| \,:\, z \in D_\delta \} = 0$ if $ D_\delta = \emptyset$ . Then, $h_\delta$ has support in $\mathbb{R} \setminus D_\delta$ and is globally Lipschitz continuous with Lipschitz constant 1. Moreover, for a probability measure $\mu$ on $\mathbb{R}$ , if for some $\delta >0$ the measure $\mu_\delta$ given by ${{\mathrm{d}} \mu_\delta}/{{\mathrm{d}} \mu} = h_\delta$ admits a density, then $\mu$ has a density on the set $\{ x \in \mathbb{R}\,:\, \rho(x) >0 \}$ .
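To illustrate the definition, if $\rho(x) = |x|$ then $D_\delta = [-\delta, \delta]$ and $h_\delta(x) = \min \{ (|x| - \delta)^+, \delta \}$, which vanishes on $D_\delta$, equals $\delta$ for $|x| \geq 2 \delta$, and is Lipschitz continuous with constant 1.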

Lemma 4.3. Let $\mu$ be a finite measure on $\mathbb{R}$ . Assume that there exist $m \in \mathbb{N}$ , $\gamma \in (0, \infty)$ , $s \in (\gamma, m)$ , and a constant $K \in (0, \infty)$ such that, for all $\phi \in \mathcal{C}_b^\gamma$ and $h \in \mathbb{R}$ with $|h | \leq 1$ ,

\[ \bigg| \int_\mathbb{R} \Delta_h^m \phi(x) \, {\mathrm{d}} \mu (x) \bigg|\leq K |h|^s \| \phi \|_{\mathcal{C}_b^\gamma} .\]

Then, $\mu$ has a density with respect to the Lebesgue measure on $\mathbb{R}$ .

In proving Lemma 4.1, the goal is to apply Lemma 4.2 with $P_{X_t}$ and $\rho (x) = |b(x)|$ , $x \in \mathbb{R}$ , where $P_{X_t}$ denotes the law of $X_t$ . Lemma 4.3 will be used to deduce that the measure $\mu_\delta$ in Lemma 4.2 admits a density. First, we establish some auxiliary results. In what follows we write

\[Y(\varepsilon) = X_{T-\varepsilon} + b\big(X_{T-\varepsilon}\big) (W_T - W_{T- \varepsilon}) + c\big(X_{T-\varepsilon}\big) \big(B_T^H - B^H_{T- \varepsilon}\big)\]

for $\varepsilon \in (0,T)$ . Let us now briefly recall some basic results concerning the representation of fractional Brownian motion in terms of Brownian motion, which will play a role in the remainder of this section. There exists a standard Brownian motion $B=(B_t)_{t \in [0,T]}$ such that $B_t^H = \int_0^t K_H(t,s) \, {\mathrm{d}} B_s$ , $t \in [0,T]$ , where $K_H$ denotes the following square integrable kernel:

\[K_H(t,s) = c_H \bigg( \frac{t}{s} \bigg)^{H-\frac12} (t-s)^{H-\frac12} - \bigg(H- \frac12\bigg)s^{\frac12 - H} \int_s^t u^{H- \frac{3}{2}} (u-s)^{H-\frac12} \, {\mathrm{d}} u,\]

with $s \in (0,t)$ and some appropriate constant $c_H$ ; see [Reference Nualart19] for details. We will assume that the underlying filtration $\mathcal{F}$ is such that B is $\mathcal{F}$ -adapted. Moreover, the processes W and B are independent by assumption.

Lemma 4.4. Let $\xi = X_{T-\varepsilon} + c\big(X_{T-\varepsilon}\big)\int_0^{T- \varepsilon} \big( K_H(T,s) - K_H(T-\varepsilon,s) \big) \, {\mathrm{d}} B_s$ and $\eta = X_{T-\varepsilon}$ . For all $u \in \mathbb{R}$ we have

\begin{equation*}\mathbb{E} \big[\!\exp\!\big( \mathrm{i}u Y(\varepsilon) \big) \mid \mathbb{F}_{T- \varepsilon}\big]= \exp\!\bigg( \mathrm{i}u \xi - \frac12 u^2 \bigg( b^2(\eta) \varepsilon+ c^2(\eta) \int_{T- \varepsilon}^T K_H^2 (T,s) \, {\mathrm{d}} s \bigg) \bigg) ,\end{equation*}

i.e., given $\mathbb{F}_{T- \varepsilon}$ the random variable $Y(\varepsilon)$ is conditionally Gaussian with mean $\xi$ and variance $b^2(\eta) \varepsilon + c^2(\eta) \int_{T- \varepsilon}^T K_H^2 (T,s) \, {\mathrm{d}} s$ .

Proof. We note that both $W_T - W_{T- \varepsilon}$ and $\int_{T- \varepsilon}^T K_H (T,s) \, {\mathrm{d}} B_s$ are independent of $\mathbb{F}_{T- \varepsilon}$ ; moreover, $X_{T-\varepsilon}$ is $\mathbb{F}_{T- \varepsilon}$ -measurable. Thus,

\begin{align*}& \mathbb{E} \bigg[\!\exp\!\bigg( \mathrm{i}u b\big(X_{T-\varepsilon}\big) (W_T - W_{T- \varepsilon}) + \mathrm{i}u c(X_{T- \varepsilon}) \int_{T- \varepsilon}^T K_H (T,s) \, {\mathrm{d}} B_s \bigg) \, \big| \, \mathbb{F}_{T- \varepsilon}\bigg] \\& \quad = \mathbb{E} \bigg[\!\exp\!\bigg( \mathrm{i}u b(y) (W_T - W_{T- \varepsilon}) + \mathrm{i}u c(y) \int_{T- \varepsilon}^T K_H (T,s) \, {\mathrm{d}} B_s \bigg)\bigg]\bigg|_{y = \eta} \\& \quad = \exp \Bigg( {-} \frac{\big( b^2(y) \varepsilon + c^2(y) \int_{T- \varepsilon}^T K^2_H (T,s) \, {\mathrm{d}} s \big)}{2} u^2 \Bigg)\Bigg|_{y = \eta}.\end{align*}

From the integral representation of fractional Brownian motion, it consequently follows that

\begin{align*}\mathbb{E} [\!\exp(& \mathrm{i}u Y(\varepsilon) ) \mid \mathbb{F}_{T- \varepsilon}] \\& = \exp (\mathrm{i}u \xi) \mathbb{E} \bigg[\!\exp\!\bigg( \mathrm{i}u b(\eta) (W_T - W_{T- \varepsilon})+ \mathrm{i}u c(\eta) \int_{T- \varepsilon}^T K_H (T,s) \, {\mathrm{d}} B_s \bigg) \, \big| \, \mathbb{F}_{T- \varepsilon} \bigg] \\& = \exp \Bigg( \mathrm{i}u\xi - \frac{\big( b^2(\eta) \varepsilon + c^2(\eta) \int_{T- \varepsilon}^T K^2_H (T,s) \, {\mathrm{d}} s \big)}{2} u^2 \Bigg) .\end{align*}

Lemma 4.5. Let $\beta \in (0,H)$ and $\beta^{\prime} \in \big(0, \frac12\big)$ . Assume that $\| X\|_{\beta^{\prime}} \leq M$ for some constant $M\in (0,\infty)$ . Then $\mathbb{E} \big[ |X_T - Y(\varepsilon)|\big] \leq K \Big( \varepsilon + \varepsilon^{\beta^{\prime} + \beta} + \varepsilon^{\beta^{\prime} + \frac12} \Big)$ for some constant $K\in (0, \infty)$ .

Proof. Throughout this proof we denote by $M_1, M_2, \ldots$ unspecified positive and finite constants. We have

\begin{align*}& |X_T - Y(\varepsilon)| \\& \quad = \bigg| \int_{T- \varepsilon}^T a(X_r) \, {\mathrm{d}} r + \int_{T- \varepsilon}^T ( c(X_r) - c(X_{T- \varepsilon}) ) \, {\mathrm{d}} B_r^H + \int_{T- \varepsilon}^T ( b(X_r) - b(X_{T- \varepsilon}) ) \, {\mathrm{d}} W_r \bigg| .\end{align*}

Due to our assumptions and the Lipschitz continuity of a, we obtain the estimate

(4.1) \begin{align}\bigg| \int_{T- \varepsilon}^T a(X_r) \, {\mathrm{d}} r \bigg| \leq M_1 \varepsilon.\end{align}

Furthermore, according to (2.2) we estimate

(4.2) \begin{align}& \bigg| \int_{T- \varepsilon}^T \big( c(X_r) - c(X_{T- \varepsilon}) \big) \, {\mathrm{d}} B_r^H \bigg| \nonumber \\& \quad \leq M_2 \| B^H\|_\beta \bigg(\!\int_{T- \varepsilon}^T \big|c(X_r) - c\big(X_{T-\varepsilon}\big)\big|(r-T+\varepsilon)^{-\alpha} (T-r)^{\alpha + \beta -1} \, {\mathrm{d}} r \nonumber \\& \qquad \qquad \qquad \qquad+ M_3 \int_{T- \varepsilon}^T (r-T + \varepsilon)^{\beta^{\prime} - \alpha} (T-r)^{\alpha + \beta -1} \, {\mathrm{d}} r \bigg) \nonumber \\& \quad \leq M_4 \| B^H\|_\beta \bigg(\!\int_{T- \varepsilon}^T (r-T+\varepsilon)^{\beta^{\prime}-\alpha} (T-r)^{\alpha + \beta -1} \, {\mathrm{d}} r + M_5 \varepsilon^{\beta^{\prime} + \beta} \bigg) \nonumber \\& \quad \leq M_6 \| B^H\|_\beta \varepsilon^{\beta^{\prime} + \beta}.\end{align}

In addition, the Burkholder–Davis–Gundy inequality gives us

(4.3) \begin{align}\mathbb{E}\Bigg[ \bigg| \int_{T- \varepsilon}^T \big( b(X_r) - b(X_{T- \varepsilon}) \big) \, {\mathrm{d}} W_r \bigg| \Bigg]& \leq M_7 \mathbb{E}\Bigg[ \bigg(\!\int_{T- \varepsilon}^T \big| b(X_r) - b(X_{T- \varepsilon})\big| ^2 \, {\mathrm{d}} r \bigg) ^{\frac12} \Bigg] \nonumber \\& \leq M_8 \mathbb{E}\Bigg[ \bigg(\!\int_{T- \varepsilon}^T (r-T + \varepsilon)^{2\beta^{\prime}} \, {\mathrm{d}} r \bigg) ^{\frac12} \Bigg]\leq M_{9} \varepsilon^{\beta^{\prime} + \frac12}.\end{align}

Recall that there is a random variable A with finite moments of every order such that $ | B_v^H -B_u^H | \leq A |v-u| ^\beta$ for all $u,v \in [0,T]$ almost surely; see, e.g., [Reference Nualart and Răşcanu22, Lemma 7.4]. Using this, it is easy to see that, for every $k \in \mathbb{N}$ , $ \mathbb{E} \big[ \| B^H \|_{\beta}^k \big] \leq M_{10}$ . Combining this with (4.1), (4.2), and (4.3) completes the proof.

Proof of Lemma 4.1. Without loss of generality we only prove that the law of $X_T$ is absolutely continuous on the set $D_b$ . The goal is to apply Lemma 4.2. To this end, define the function $\rho \,:\, \mathbb{R} \to [0, \infty)$ to be $\rho(x) = |b(x)|$ , $x \in \mathbb{R}$ , and the measure $\mu_\delta$ given by ${\mathrm{d}}\mu_\delta (z) = h_\delta (z) \, {\mathrm{d}} P_{X_T} (z)$ . It suffices to prove that $\mu_\delta$ admits a density with respect to the Lebesgue measure, and we will make use of Lemma 4.3 in order to show this. According to the latter, it suffices to find $m \in \mathbb{N}$ , $\gamma \in (0, \infty)$ , and $s \in (\gamma, m)$ such that

(4.4) \begin{equation}\big| \mathbb{E} \big[ h_\delta (X_T) \Delta_h^m \phi (X_T) \big] \big|\leq K |h|^s \| \phi\|_{\mathcal{C}_b^\gamma}\end{equation}

for all $h \in \mathbb{R}$ with $|h| \leq 1$ , for all $\phi \in \mathcal{C}_b^\gamma$ , and some constant $K \in (0, \infty)$ . The specific choice of m, $\gamma$ , and s will be given at the end of this proof. In the following, as before we denote by $M_1, M_2, \ldots$ unspecified positive and finite constants. Using the notation and results of Lemmas 4.2 and 4.5, we estimate

(4.5) \begin{align}\big| \mathbb{E} \big[ h_\delta (X_T) \Delta_h^m \phi (X_T) \big] \big|& \leq \big| \mathbb{E} \big[ \big( h_\delta (X_T) - h_\delta \big(X_{T-\varepsilon}\big) \big) \Delta_h^m \phi (X_T) \big] \big| \nonumber \\& \quad + \big| \mathbb{E} \big[ h_\delta \big(X_{T-\varepsilon}\big) \big( \Delta_h^m \phi (X_T) -\Delta_h^m \phi ( Y (\varepsilon) ) \big) \big] \big| \nonumber \\& \quad + \big| \mathbb{E} \big[ h_\delta \big(X_{T-\varepsilon}\big) \Delta_h^m \phi ( Y (\varepsilon) ) \big] \big| \nonumber \nonumber \\& \leq M_1 \| \phi\|_{\mathcal{C}_b^\gamma}|h|^\gamma \mathbb{E} [ |X_T - X_{T-\varepsilon}|] + M_2 \| \phi\|_{\mathcal{C}_b^\gamma}\mathbb{E} [ |X_T - Y(\varepsilon)|]^\gamma \nonumber \\& \quad + \big| \mathbb{E} \big[h_\delta \big(X_{T-\varepsilon}\big) \mathbb{E} \big[ \Delta_h^m \phi \big( Y (\varepsilon) \big) \mid \mathbb{F}_{T-\varepsilon} \big] \big] \big| \nonumber \\& \leq M_3 \| \phi\|_{\mathcal{C}_b^\gamma}\Big( |h|^\gamma \varepsilon^{\beta^{\prime}} + \Big( \varepsilon + \varepsilon^{\beta^{\prime} + \beta} + \varepsilon^{\beta^{\prime}+\frac12} \Big)^\gamma \Big) \nonumber \\& \quad + \big| \mathbb{E} \big[h_\delta \big(X_{T-\varepsilon}\big) \mathbb{E} \big[ \Delta_h^m \phi \big( Y (\varepsilon) \big) \mid \mathbb{F}_{T-\varepsilon} \big] \big] \big| .\end{align}

Recall that, by Lemma 4.4, $Y(\varepsilon) \mid \mathbb{F}_{T-\varepsilon} \sim \mathcal{N} \big(\xi, \sigma^2 (\eta)\big)$ with

\[\sigma^2 (y) = b^2(y) \varepsilon + c^2(y) \int_{T- \varepsilon}^T K_H^2 (T,s) \, {\mathrm{d}} s ,\qquad y \in \mathbb{R},\]

and that $h_\delta (y) = 0$ for all $y \in \mathbb{R}$ with $|b(y)| \leq \delta$ . Let $p_y \,:\, \mathbb{R} \to \mathbb{R}$ be the density of a Gaussian distribution with mean zero and variance $\sigma^2 (y)$ . Then

\begin{equation*}\sup_{y \in \mathbb{R} \,:\, |b(y)| \geq \delta} \int_\mathbb{R} \bigg| \frac{{\mathrm{d}}^k}{{\mathrm{d}} z^k} p_y(z) \bigg| \, {\mathrm{d}} z\leq M_4 \sup_{y \in \mathbb{R} \,:\, |b(y)| \geq \delta} \big(\sigma^2(y)\big)^{-\frac{k}{2}}\leq M_4 \delta^{-k} \varepsilon^{-\frac{k}{2}}\end{equation*}

for every $k \in \mathbb{N}$, where for the second inequality we used that $\sigma^2(y) \geq b^2(y) \varepsilon \geq \delta^2 \varepsilon$ whenever $|b(y)| \geq \delta$. From this, we obtain

\begin{equation*}\big| \mathbb{E} \big[h_\delta \big(X_{T-\varepsilon}\big) \mathbb{E} [ \Delta_h^m \phi \big( Y (\varepsilon) \big) \mid \mathbb{F}_{T-\varepsilon} ] \big] \big|\leq M_5\| \phi\|_{\mathcal{C}_b^\gamma}|h|^m \varepsilon^{-\frac{m}{2}}.\end{equation*}

Combining the latter inequality with (4.5) gives

\begin{align*}\big| \mathbb{E} \big[ h_\delta (X_T) \Delta_h^m \phi (X_T) \big] \big|& \leq M_6 \| \phi\|_{\mathcal{C}_b^\gamma} \Big( |h|^\gamma \varepsilon^{\beta^{\prime}} + \Big( \varepsilon + \varepsilon^{\beta^{\prime} + \beta} + \varepsilon^{\beta^{\prime}+\frac12} \Big)^\gamma + |h|^m \varepsilon^{-\frac{m}{2}} \Big) \\& \leq M_7 \| \phi\|_{\mathcal{C}_b^\gamma} \Big( |h|^\gamma \varepsilon^{\beta^{\prime}} +\varepsilon^{(\beta^{\prime}+\frac12) \gamma}+ |h|^m \varepsilon^{-\frac{m}{2}} \Big) .\end{align*}

Now it is not difficult to see that the assumptions of Lemma 4.3 are satisfied. For example, we can set $m=4$ , $\gamma=1$ , $\varepsilon = |h|^{\frac{4}{3}}$ , and choose $\beta^{\prime} \in \big(0, \frac12\big)$ such that $\big(\beta^{\prime} + \frac12\big) \frac{4}{3} >1$ , i.e. $\beta^{\prime} \in \big(\frac14, \frac12\big)$. Indeed, with this choice the three terms in the last display are of order $|h|^{1 + \frac{4\beta^{\prime}}{3}}$, $|h|^{\frac{4}{3}(\beta^{\prime} + \frac12)}$, and $|h|^{\frac{4}{3}}$, respectively, so that (4.4) holds with $s = \min\big\{ 1 + \frac{4\beta^{\prime}}{3}, \frac{4}{3}\big(\beta^{\prime} + \frac12\big), \frac{4}{3} \big\} \in (\gamma, m)$, and the proof is complete.

Now we are in position to prove Theorem 4.1.

Proof of Theorem 4.1. The proof borrows ideas from the theory of approximate identities as outlined in [Reference Leobacher, Reisinger and Stockinger8, Section A]. Without loss of generality we assume that f has compact support, implying in our case that f, f ′, and f ′′ are bounded, and we assume that f ′′ has one discontinuity point at $\xi_1=0$ . Recall that $\| X\|_{\beta^{\prime}} < \infty$ for all $\beta^{\prime} \in \big(0,\frac12\big)$ almost surely. According to Remark 4.1, we may assume that $\| X\|_{\beta^{\prime}} \leq M$ for some constant $M\in (0,\infty)$ . According to [Reference Leobacher, Reisinger and Stockinger8, Lemma 1-3], we can choose a sequence $(\phi_n)_{n \in \mathbb{N}}$ of twice continuously differentiable non-negative functions such that, for $f_n \,:\!=\, f * \phi_n$ , $n \in \mathbb{N}$ , $ \lim_{n \to \infty} \| f - f_n \|_\infty + \| f^{\prime} - f_n^{\prime} \|_\infty = 0$ and $ \lim_{n \to \infty} | f^{\prime\prime}(x) - f^{\prime\prime}_n(x) | = 0$ for all continuity points x of f ′′, and $\| f^{\prime\prime}_n\|_\infty \leq K$ for some constant $K \in (0, \infty)$ independent of $n \in \mathbb{N}$ . Let $t \in [0,T]$ . By Theorem 3.1,

(4.6) \begin{align}f_n (X_t)& = f_n(X_0) + \int_0^t f^{\prime}_n(X_s) b(X_s) \, {\mathrm{d}} W_s + \int_0^t f^{\prime}_n(X_s) c(X_s) \, {\mathrm{d}} B^H_s \nonumber \\& \quad + \int_0^t \big( \tfrac{1}{2}b^2(X_s) f^{\prime\prime}_n(X_s) +a(X_s) f^{\prime}_n(X_s) \big) \, {\mathrm{d}} s.\end{align}

By our assumptions we obtain convergence:

\begin{align*}\lim_{n \to \infty} \int_0^t f^{\prime}_n(X_s) b(X_s) \, {\mathrm{d}} W_s & = \int_0^t f^{\prime}(X_s) b(X_s) \, {\mathrm{d}} W_s, \\\lim_{n \to \infty} \int_0^t f^{\prime}_n(X_s) a(X_s) \, {\mathrm{d}} s & = \int_0^t f^{\prime}(X_s) a(X_s) \, {\mathrm{d}} s,\end{align*}

where the first convergence holds uniformly in probability and the second holds almost surely.

Now we turn to the fractional integral in (4.6). By assumption, $ f^{\prime}(X) c(X) \in C^{\frac12 -}$ , so that the fractional integral $\int_0^t f^{\prime}(X_s) c(X_s) \, {\mathrm{d}} B_s^H$ exists and agrees with the Riemann–Stieltjes integral. In particular, we have $D_{0+}^\alpha f^{\prime}(X) c(X) \in L_1$ , where $\alpha \in \big(1-H, \frac12\big)$ , and $D_{0+}^\alpha f^{\prime}_n(X) c(X)$ converges in $L_1$ to $D_{0+}^\alpha f^{\prime}(X) c(X)$ . Thus, by dominated convergence and the definition of the fractional integral,

\begin{align*}\lim_{n \to \infty} \int_0^t f^{\prime}_n(X_s) c(X_s) \, {\mathrm{d}} B_s^H& = \lim_{n \to \infty} ({-}1)^\alpha \int_0^t D_{0+}^\alpha \big( f^{\prime}_n(X) c(X) \big) (s)D_{t-}^{1-\alpha} B_{t-}^H (s) \, {\mathrm{d}} s \\& = ({-}1)^\alpha \int_0^t D_{0+}^\alpha \big( f^{\prime}(X) c(X) \big) (s) D_{t-}^{1-\alpha} B_{t-}^H (s) \, {\mathrm{d}} s \\& = \int_0^t f^{\prime}(X_s) c(X_s) \, {\mathrm{d}} B_s^H .\end{align*}

It remains to consider the term in (4.6) with the second derivative. In order to prove its convergence we will make use of Lemma 4.1. We write

\begin{equation*}\frac12 \int_0^t f^{\prime\prime}_n(X_s) b^2(X_s) \, {\mathrm{d}} s= \frac12 \int_0^t \textbf{1}_{\big\{ |X_s| > \frac{1}{n} \big\}}f^{\prime\prime}_n(X_s) b^2(X_s) \, {\mathrm{d}} s+ \frac12 \int_0^t \textbf{1}_{\big\{ |X_s| \leq \frac{1}{n} \big\}}f^{\prime\prime}_n(X_s) b^2(X_s) \, {\mathrm{d}} s .\end{equation*}

From dominated convergence we derive

\[ \lim_{n \to \infty} \frac12 \int_0^t \textbf{1}_{\big\{ |X_s| > \frac{1}{n} \big\}}f^{\prime\prime}_n(X_s) b^2(X_s) \, {\mathrm{d}} s= \frac12 \int_0^t \textbf{1}_{\{ X_s \neq 0 \}}f^{\prime\prime}(X_s) b^2(X_s) \, {\mathrm{d}} s.\]

From Fatou’s lemma, for every sequence $(h_n)_{n \in \mathbb{N}}$ of measurable, non-negative and bounded functions with $\| h_n \|_\infty \leq K$ for all $n \in \mathbb{N}$ and some constant $K \in (0, \infty)$ , we obtain

\begin{align*}\limsup_{n \to \infty} \int_0^t \textbf{1}_{\big\{ |X_s| \leq \frac{1}{n} \big\}}h_n(X_s) b^2(X_s) \, {\mathrm{d}} s& \leq \int_0^t\limsup_{n \to \infty} \textbf{1}_{\big\{ |X_s| \leq \frac{1}{n} \big\}}h_n(X_s) b^2(X_s) \, {\mathrm{d}} s \\& \leq \int_0^t \textbf{1}_{\{ |X_s| = 0 \}} \limsup_{n \to \infty} h_n(X_s) b^2(X_s) \, {\mathrm{d}} s \\& \leq K \int_0^t \textbf{1}_{\{ X_s = 0 \}} b^2(X_s) \, {\mathrm{d}} s.\end{align*}

Now, by Lemma 4.1,

\[ \mathbb{E} \bigg[\! \int_0^t \textbf{1}_{\{ X_s = 0 \}} b^2(X_s) \, {\mathrm{d}} s \bigg]= b^2 (0) \int_0^t P(X_s=0) \, {\mathrm{d}} s = 0,\]

which yields $\int_0^t \textbf{1}_{\{ X_s = 0 \}} b^2(X_s) \, {\mathrm{d}} s =0$ almost surely. Therefore,

\[ \limsup_{n \to \infty} \int_0^t \textbf{1}_{\big\{ |X_s| \leq \frac{1}{n} \big\}}h_n(X_s) b^2(X_s) \, {\mathrm{d}} s = 0,\]

and in the case of the particular choice $h_n = |f^{\prime\prime}_n - f^{\prime\prime}|$ ,

\[ \limsup_{n \to \infty} \int_0^t \textbf{1}_{\big\{ |X_s| \leq \frac{1}{n} \big\}}\big| f^{\prime\prime}_n (X_s) - f^{\prime\prime}(X_s) \big|b^2(X_s) \, {\mathrm{d}} s = 0.\]

Finally, we get that

\begin{align*}\bigg| \frac12 \int_0^t f^{\prime\prime}_n(X_s) b^2(X_s) \, {\mathrm{d}} s& - \frac12 \int_0^t f^{\prime\prime}(X_s) b^2(X_s) \, {\mathrm{d}} s \bigg| \\& \leq \frac12 \bigg| \int_0^t\textbf{1}_{\big\{ |X_s| > \frac{1}{n} \big\}} \big( f^{\prime\prime}_n(X_s) - f^{\prime\prime}(X_s) \big) b^2(X_s) \, {\mathrm{d}} s \bigg| \\& \quad + \frac12 \int_0^t\textbf{1}_{\big\{ |X_s| \leq \frac{1}{n} \big\}} \big| f^{\prime\prime}_n(X_s) - f^{\prime\prime}(X_s) \big| b^2(X_s) \, {\mathrm{d}} s\to 0 \end{align*}

as $n \to \infty$ . Overall, we have shown that the assertion follows by letting $n \to \infty$ in (4.6).

Acknowledgements

The author would like to thank David Nualart for some interesting discussions. The author discussed this project with Michaela Szölgyenyi during an early stage.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Fan, X. and Zhang, S. (2021). Moment estimates and applications for SDEs driven by fractional Brownian motions with irregular drifts. Bull. Sci. Math. 170, 1–33.
Guerra, J. and Nualart, D. (2008). Stochastic differential equations driven by fractional Brownian motion and standard Brownian motion. Stoch. Anal. Appl. 26, 1053–1075.
Hu, Y., Liu, Y. and Nualart, D. (2016). Rate of convergence and asymptotic error distribution of Euler approximation schemes for fractional diffusions. Ann. Appl. Prob. 26, 1147–1207.
Karatzas, I. and Shreve, S. E. (1998). Brownian Motion and Stochastic Calculus. Springer, New York, pp. 47–127.
Kleptsyna, M. L., Kloeden, P. E. and Anh, V. V. (1998). Existence and uniqueness theorems for stochastic differential equations with fractal Brownian motion. Probl. Peredachi Inf. 34, 51–61.
Kubilius, K. (2000). The existence and uniqueness of the solution of the integral equation driven by fractional Brownian motion. Lith. Math. J. 40, 104–110.
Kubilius, K. (2002). The existence and uniqueness of the solution of an integral equation driven by a p-semimartingale of special type. Stoch. Process. Appl. 98, 289–315.
Leobacher, G., Reisinger, C. and Stockinger, W. Well-posedness and numerical schemes for McKean–Vlasov equations and interacting particle systems with discontinuous drift. Preprint, arXiv:2006.14892.
Leobacher, G. and Szölgyenyi, M. (2016). A numerical method for SDEs with discontinuous drift. BIT Numer. Math. 56, 151–162.
Leobacher, G. and Szölgyenyi, M. (2017). A strong order $1/2$ method for multidimensional SDEs with discontinuous drift. Ann. Appl. Prob. 27, 2383–2418.
Lin, S. J. (1995). Stochastic analysis of fractional Brownian motions. Stochastics 55, 121–140.
Lyons, T. (1994). Differential equations driven by rough signals (I): An extension of an inequality of L. C. Young. Math. Res. Lett. 1, 451–464.
Mishura, Y. (2008). Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer, New York.
Mishura, Y. and Nualart, D. (2004). Weak solutions for stochastic differential equations with additive fractional noise. Statist. Prob. Lett. 70, 253–261.
Mishura, Y. and Posashkov, S. (2007). Existence and uniqueness of solution of mixed stochastic differential equation driven by fractional Brownian motion and Wiener process. Theory Stoch. Process. 13, 152–165.
Mishura, Y. and Shevchenko, G. (2011). Rate of convergence of Euler approximations of solution to mixed stochastic differential equation involving Brownian motion and fractional Brownian motion. Random Oper. Stoch. Eq. 19.
Mishura, Y. and Shevchenko, G. (2012). Mixed stochastic differential equations with long-range dependence: Existence, uniqueness and convergence of solutions. Comput. Math. Appl. 64, 3217–3227.
Müller-Gronbach, T. and Yaroslavtseva, L. (2020). On the performance of the Euler–Maruyama scheme for SDEs with discontinuous drift coefficient. Ann. Inst. H. Poincaré Prob. Statist. 56, 1162–1178.
Nualart, D. (2006). The Malliavin Calculus and Related Topics. Springer, New York.
Nualart, D. and Ouknine, Y. (2002). Regularization of differential equations by fractional noise. Stoch. Process. Appl. 102, 103–116.
Nualart, D. and Ouknine, Y. (2003). Stochastic differential equations with additive fractional noise and locally unbounded drift. In Stochastic Inequalities and Applications, eds. E. Giné, C. Houdré and D. Nualart. Springer, New York, pp. 353–365.
Nualart, D. and Răşcanu, A. (2002). Differential equations driven by fractional Brownian motion. Collect. Math. 53, 55–81.
Romito, M. (2018). A simple method for the existence of a density for stochastic evolutions with rough coefficients. Electron. J. Prob. 23, 1–43.
Ruzmaikina, A. A. (2000). Stieltjes integrals of Hölder continuous functions with applications to fractional Brownian motion. J. Statist. Phys. 100, 1049–1069.
Suo, Y. and Yuan, C. Weak convergence of path-dependent SDEs driven by fractional Brownian motion with irregular coefficients. Preprint, arXiv:1907.02293.
Veretennikov, A. J. (1981). On strong solutions and explicit formulas for solutions of stochastic integral equations. Sbornik: Math. 39, 387–403.
Veretennikov, A. J. (1984). On stochastic equations with degenerate diffusion with respect to some of the variables. Math. USSR-Izvestiya 22, 173–180.
Young, L. C. (1936). An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 67, 251–282.
Zähle, M. (1998). Integration with respect to fractal functions and stochastic calculus. I. Prob. Theory Relat. Fields 111, 333–374.
Zähle, M. (1999). On the link between fractional and stochastic calculus. In Stochastic Dynamics, eds. H. Crauel and M. Gundlach. Springer, New York, pp. 305–325.
Zvonkin, A. K. (1974). A transformation of the phase space of a diffusion process that removes the drift. Math. USSR-Sbornik 22, 129–149.