1. Introduction
A stochastic process $Z=\{Z_t\}_{t\geq0}$ is said to be a tempered stable (TS) subordinator if it is a driftless subordinator with the Lévy measure $\nu(\mathrm{d} y)=\theta y^{-\alpha-1}{\mathrm{e}}^{-\beta y}\,\mathrm{d} y$ , $y>0$ , where $\alpha\in(0,1)$ and $\beta,\theta\in{\mathbb R}^+$ . In this case, we call the distribution of $Z_1$ a tempered stable distribution with parameters $\alpha,\beta,\theta$ , and denote it by $\mathop{\mathrm{TS}}\nolimits(\alpha,\beta,\theta)$ . In addition, a process $X=\{X_t\}_{t\geq0}$ is said to be a TS-based Ornstein–Uhlenbeck (OU-TS) process if it is a solution to the following stochastic differential equation:
where $\delta>0$ and $\rho>0$ . For any $t\geq0$ and $\tau>0$ , we have
Qu et al. [2] proposed an exact simulation algorithm for $X_{t+\tau}$ given $X_t$. In addition, they separately gave another algorithm that is available only for the case of $\alpha=\frac{1}{2}$. Here we correct both algorithms and report the results of some numerical experiments.
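Recall that, since $Z$ is a driftless subordinator, the Lévy–Khintchine formula gives its Laplace transform in closed form:
\[
{\mathbb E}\big[{\mathrm{e}}^{-uZ_t}\big]=\exp\{-t\Phi(u)\},\qquad \Phi(u)\;:\!=\;\int_0^\infty\big(1-{\mathrm{e}}^{-uy}\big)\,\nu(\mathrm{d} y)=\frac{\theta\Gamma(1-\alpha)}{\alpha}\big[(u+\beta)^{\alpha}-\beta^{\alpha}\big],\qquad u\geq0,
\]
where $\Gamma(\cdot)$ is the Gamma function; this Laplace exponent $\Phi$ reappears in Section 2.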
2. Mathematical background and algorithms
The infinitesimal operator ${\mathcal A}$ of $X$ is given by
where $f\colon{\mathbb R}^+\times{\mathbb R}^+\to{\mathbb R}$ is differentiable in $x$ and $t$. We can derive this by applying [1, (6.36)] to the stochastic differential equation (1). In [2, (3.2)], the representation of ${\mathcal A}$ was given incorrectly as follows:
Thus, all the subsequent arguments in [2] must be corrected, but this error does not affect the case of $\rho=1$. Now, we fix $t\geq0$ and $\tau>0$, and define a process $Y=\{Y_s\}_{t\leq s\leq t+\tau}$ as
where $\kappa\in{\mathbb R}$, and $\Phi$ is the Laplace exponent of $Z$, i.e. $\Phi(u)\;:\!=\;\int_0^\infty(1-{\mathrm{e}}^{-uy})\,\nu(\mathrm{d} y)$. If ${\mathcal A} f(x,t)=0$, then the process $f(X_t,t)$ is a martingale; from this fact, we can see that $Y$ is a martingale. For any $\eta\in{\mathbb R}^+$, taking $\kappa=\eta{\mathrm{e}}^{-\delta(t+\tau)}$, we obtain
In view of [2, (3.7)–(3.9)], (2) implies
where $w\;:\!=\;{\mathrm{e}}^{-\delta\tau}$ , $\Gamma(\cdot)$ is the Gamma function, and
Equation (3) can be obtained by replacing $\theta$ and $\beta$ in [2, Theorem 3.1] with $\rho^{\alpha-1}\theta$ and $\beta/\rho$, respectively. Thus, [2, Algorithm 3.2] can be corrected by replacing every occurrence of $\theta$ and $\beta$ there in the same way. As for the correction of [2, Algorithm 3.4], we only need to change the distribution of $\widetilde{IG}$ into
where $\mathop{\mathrm{IG}}\nolimits(\mu,\lambda)$ denotes the inverse Gaussian (IG) distribution with mean parameter $\mu$ and rate parameter $\lambda$. Furthermore, [2, Proposition 6.1] can be corrected with the same replacements of $\theta$ and $\beta$ as above, but additionally the function $h(\cdot)$ needs to be replaced by $h(\cdot/\rho)$.
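For clarity on this parametrization (assuming the standard mean–rate form of the inverse Gaussian law), $\mathop{\mathrm{IG}}\nolimits(\mu,\lambda)$ has the density
\[
f(x;\mu,\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}\,\exp\bigg({-}\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\bigg),\qquad x>0,
\]
so that the mean is $\mu$ and the variance is $\mu^{3}/\lambda$.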
Remark 1. Writing the solution to (1) as $X^{\rho,X_0}$ to emphasize the dependence on $\rho>0$ and the initial value $X_0>0$, we have $X^{\rho,X_0}=\rho X^{1,X_0/\rho}$ for any $\rho>0$ and $X_0>0$. Thus, the algorithm for $\rho=1$ can be generalized to any case of $\rho>0$.
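As a minimal illustration of Remark 1 (a sketch, not the authors' code), a transition sampler for general $\rho>0$ can be built on top of a $\rho=1$ sampler; here sample_outs_rho1 is a hypothetical function implementing the corrected [2, Algorithm 3.2] with $\rho=1$.

```matlab
% Sketch based on Remark 1: X^{rho,X0} = rho * X^{1, X0/rho}.
% 'sample_outs_rho1' is a hypothetical function drawing X_{t+tau} given X_t
% for the rho = 1 case (corrected version of [2, Algorithm 3.2]).
function Xnext = sample_outs(Xt, tau, alpha, beta, theta, delta, rho)
    Xnext = rho * sample_outs_rho1(Xt/rho, tau, alpha, beta, theta, delta);
end
```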
3. Numerical results
As can be seen in [2, Tables 1 and 2], the errors remain small enough even when the original algorithms in [2] are used, as long as only the means are computed. Thus, we compute the second moments instead and compare the results of the original and corrected algorithms.
Here, we run simulations for an OU-TS process with $\alpha=0.25$. For the other parameters, we set $\delta=0.2$, $\beta=0.5$, and $\theta=0.25$, and vary $\rho$ over the values 0.5, 1, 2, and 5. We set $X_0=10.0$ and simulate $X_{0.5}$; next, we simulate $X_1$ using the simulated value of $X_{0.5}$, and repeat this until we obtain $X_5$. We carry out this simulation one million times and compare the mean square of the simulated values of $X_5$ with the second moment ${\mathbb E}[X_5^2]$, which we can calculate by using [2, (3.5)] and (2) as follows:
where $w={\mathrm{e}}^{-5\delta}$ . The algorithms were coded in MATLAB (R2022b).
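The structure of this experiment can be sketched as follows (assuming the hypothetical transition sampler sample_outs from the sketch after Remark 1; this is not the authors' code).

```matlab
% Sketch of the experiment described above; 'sample_outs' is the
% hypothetical transition sampler from the sketch after Remark 1.
alpha = 0.25; delta = 0.2; beta = 0.5; theta = 0.25; rho = 2;  % rho in {0.5, 1, 2, 5}
tau = 0.5; nSteps = 10;        % simulate X at times 0.5, 1.0, ..., 5.0
nPaths = 1e6;                  % one million simulation runs
X = 10.0 * ones(nPaths, 1);    % X_0 = 10.0
for k = 1:nSteps
    for i = 1:nPaths
        X(i) = sample_outs(X(i), tau, alpha, beta, theta, delta, rho);
    end
end
Estim = mean(X.^2);            % mean square of the simulated values of X_5
% Diff and Error in Table 1 are then Estim - E[X_5^2] and
% 100 * (Estim - E[X_5^2]) / E[X_5^2], with E[X_5^2] from the display above.
```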
The simulation results are given in Table 1. Note that “Estim” in the third column represents the mean square of the one million simulation results, and “Diff” and “Error” in the fourth and fifth columns are defined as $\mathrm{Diff} \;:\!=\; \mathrm{Estim} - {\mathbb E}[X_5^2]$ and $\mathrm{Error} \;:\!=\; ({\mathrm{Diff}}/{{\mathbb E}[X_5^2]})\times100$, respectively. The last three columns display the results for the original algorithm, [2, Algorithm 3.2]. As seen in Table 1, the errors of the corrected algorithm are small enough regardless of the value of $\rho$, whereas this is not the case for the original algorithm. Furthermore, similar results were obtained for OU-TS with $\alpha=0.75$ and for the OU-IG (inverse Gaussian) case, but they are not tabulated here.
Funding information
Takuji Arai gratefully acknowledges the financial support of the MEXT Grants-in-Aid for Scientific Research (C), Nos. 18K03422 and 22K03419. Yuto Imai also gratefully acknowledges the financial support of the MEXT Grant-in-Aid for Young Researchers, No. 21K13327.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.