
On the splitting and aggregating of Hawkes processes

Published online by Cambridge University Press:  09 December 2022

Bo Li*
Affiliation:
Nankai University
Guodong Pang*
Affiliation:
Rice University
*Postal address: School of Mathematics and LPMC, Nankai University. Email address: libo@nankai.edu.cn
**Postal address: Department of Computational Applied Mathematics and Operations Research, George R. Brown College of Engineering, Rice University. Email address: gdpang@rice.edu

Abstract

We consider the random splitting and aggregating of Hawkes processes. We present the random splitting schemes using the direct approach for counting processes, as well as the immigration–birth branching representations of Hawkes processes. From the second scheme, it is shown that random split Hawkes processes are again Hawkes. We discuss functional central limit theorems (FCLTs) for the scaled split processes from the different schemes. On the other hand, aggregating multivariate Hawkes processes may not necessarily be Hawkes. We identify a necessary and sufficient condition for the aggregated process to be Hawkes. We prove an FCLT for a multivariate Hawkes process under a random splitting and then aggregating scheme (under certain conditions, transforming into a Hawkes process of a different dimension).

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Hawkes processes were first introduced in [Reference Hawkes9, Reference Hawkes10] as an extension of the Poisson process, which has the so-called self-exciting effect; that is, the occurrence of an event will increase the probability of future events. Hawkes processes have been widely used to model various applications, for example, in finance [Reference Bacry, Mastromatteo and Muzy2, Reference Hawkes11] and internet traffic and queueing [Reference Chen4, Reference Daw and Pender6, Reference Gao and Zhu7, Reference Koops, Saxena, Boxma and Mandjes14]. There are extensive studies of Hawkes processes, including the exact distributional properties [Reference Hawkes and Oakes12, Reference Oakes16] for exponential-type kernel functions, and limit theorems in both conventional scaling [Reference Bacry, Delattre, Hoffmann and Muzy1, Reference Bacry, Mastromatteo and Muzy2, Reference Chen4, Reference Karabash and Zhu13] and large-intensity scaling [Reference Gao and Zhu7, Reference Gao and Zhu8, Reference Li and Pang15].

In this paper, we investigate the random splitting/sampling and aggregating/superposition of Hawkes processes. The random splitting/sampling of point processes has been an important topic in stochastic models. For example, one arrival stream of customers may require different types of services, and the same packet in a communication network may be sent out simultaneously on several outgoing links. Similarly, input into a service or communication system can come from aggregating several sources [Reference Sriram and Whitt17, Reference Sriram and Whitt18]. See further discussions in [Reference Whitt21, Chapter 9]. Splitting and aggregating of standard point processes are well understood in the literature [Reference Whitt19, Reference Whitt20, Reference Whitt21].

The Poisson branching representation [Reference Hawkes and Oakes12], also known as the immigration–birth representation, has been a fundamental tool in the analysis of Hawkes processes. This representation says that for each individual (generation), the number of individuals (children) produced by this individual over time, called the next generation, is a simple and conditional Poisson process with an intensity that is a functional of the counting process of the individual’s generation. It is known that random splitting/sampling of Poisson processes results in independent Poisson processes, each component with a rate equal to that of the original process multiplied by the splitting probability. By contrast, each sub-counting process resulting from the random splitting/sampling of a Hawkes process cannot be regarded as a Hawkes process itself, despite the conditional Poisson property for each generation. Intuitively, the jumping intensity of a sub-counting process depends on the history of the original Hawkes process, which requires information strictly greater than that provided by the sub-counting process. However, we show that the vector-valued splitting Hawkes process is a multidimensional Hawkes process (Propositions 3.2 and 5.1). It is also clear that the split processes are no longer independent, but their dependence structure is not at all obvious.

We thus aim to understand the split processes of a Hawkes process and their dependence structure. We provide two representations of the (scaled) split processes, one directly using the original Hawkes process, and the other using the non-homogeneous conditional Poisson processes in each generation from the Poisson branching representation. For ease of exposition, we start with the splitting of a one-dimensional Hawkes process; we discuss how the splitting schemes work and show the equivalence of the limits in the functional central limit theorems (FCLTs) derived from them, which rely on the existing results in Chapter 9.5 of Whitt [Reference Whitt21] and in [Reference Bacry, Delattre, Hoffmann and Muzy1]. (See the discussions in Section 3.) We next aim to understand the aggregation/superposition of a multivariate Hawkes process and show when the aggregated process can still be a Hawkes process (Section 4).

We then consider the scheme of first splitting and then aggregating a multivariate Hawkes process, which may transform it into a point process of a different dimension (Section 5). We identify conditions under which the transformed process is again Hawkes. We prove an FCLT for the transformed process, which is a Brownian motion limit with a surprisingly simple covariance function. We also discuss how that relates to the known result in the special case of a Hawkes process.

1.1. Notation

All random variables and processes are defined on a common complete probability space $\big(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P}\big)$ . Throughout the paper, ${\mathbb N}$ denotes the set of natural numbers; ${\mathbb R} ({\mathbb R}_{+})$ denotes the space of real (nonnegative) numbers. Let ${\mathbb D}={\mathbb D}({\mathbb R}_{+},{\mathbb R})$ denote the ${\mathbb R}$ -valued function space of all càdlàg functions on ${\mathbb R}_+$ . The expression $({\mathbb D},J_{1})$ denotes the space ${\mathbb D}$ equipped with the Skorokhod $J_{1}$ topology (see [Reference Billingsley3]), which is complete and separable. ${\mathbb D}^{n}$ denotes the space of n-dimensional vector-valued càdlàg functions, endowed with the weak Skorokhod $J_{1}$ topology [Reference Whitt21], for which we write $({\mathbb D}^{n},J_{1})$ . $L^{2}(\mathbb{P})$ denotes the space of random variables with finite second moment. For an integrable function $f\;:\;{\mathbb R}\to{\mathbb R}$ , its $L^{1}$ norm is denoted by $\|f\|_{1}$ . The symbols $\to$ and $\Rightarrow$ mean convergence of real numbers and convergence in distribution, respectively. For a matrix $M=(M_{ij})_{i,j}$ , we denote by $\textrm{ent}_{ij}M=M_{ij}$ its (i, j)th entry, while $\textrm{row}_{k}M$ and $\textrm{col}_{m}M$ denote its kth row vector and mth column vector, respectively. $M^{\textrm{T}}$ denotes the transpose of M; I denotes the identity matrix; e denotes the column vector of 1s of the appropriate dimension. For a vector a, $\textrm{diag}(a)$ denotes the diagonal matrix with the elements of the vector a on the main diagonal. The expression $f*g(t)\;:\!=\;\int_{0}^{t}f(t-s)g(s)ds$ denotes the convolution of f and g on ${\mathbb R}_{+}$ . Additional notation is introduced in the paper whenever necessary.

2. Preliminaries on Hawkes processes

A d-dimensional Hawkes process, $N=\{N(t), t\geq0\}$ with $N=\big(N_{k}\big)_{k=1,\dots,d}$ , is formally defined as an ${\mathbb N}^{d}$ -valued simple counting process with conditional intensity

(2.1) \begin{equation}\lambda_{k}(t)=\lambda_{k0}+\sum_{k^{\prime}=1}^{d}\sum_{j=1}^{N_{k^{\prime}}(t)}H_{kk^{\prime}}(t-\tau_{k^{\prime}j})=\lambda_{k0}+\sum_{k^{\prime}=1}^{d}\int_{0}^{t}H_{kk^{\prime}}(t-s)N_{k^{\prime}}(ds), \quad t\ge 0,\end{equation}

for every $k=1,\cdots,d$ , where $\tau_{k^{\prime}j}$ is the jth event time of $N_{k^{\prime}}$ ; $\lambda_{k0}\ge0$ is a constant called the baseline intensity of the kth subprocess; and $H_{kk^{\prime}}\;:\;{\mathbb R}_{+}\to{\mathbb R}_{+}$ is called the mutually exciting function or the kernel function, also known as the cross-exciting function for $k\neq k^{\prime}$ and the self-exciting function for $k=k^{\prime}$ .
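As an illustration, the conditional intensity (2.1) suggests a simulation scheme via Ogata's thinning algorithm. The following Python sketch assumes exponential kernels $H_{kk^{\prime}}(t)=\alpha_{kk^{\prime}}e^{-\beta t}$ for concreteness (this choice, and the names `lam0`, `alpha`, `beta`, are ours, not part of the definition above); with decaying kernels, the total intensity right after each step dominates the intensity until the next event, which makes the thinning bound valid.

```python
import numpy as np

def simulate_hawkes(lam0, alpha, beta, T, rng=None):
    """Ogata thinning for a d-dimensional Hawkes process with
    exponential kernels H_{kk'}(t) = alpha[k, k'] * exp(-beta * t)."""
    rng = np.random.default_rng(rng)
    d = len(lam0)

    def intensity(t, times, marks):
        # lambda_k(t) in (2.1): baseline plus excitation from past events
        return np.array([
            lam0[k] + sum(alpha[k, marks[j]] * np.exp(-beta * (t - times[j]))
                          for j in range(len(times)))
            for k in range(d)])

    times, marks = [], []
    t = 0.0
    while True:
        lam_bar = intensity(t, times, marks).sum()  # dominating rate
        if lam_bar <= 0:
            break
        t += rng.exponential(1.0 / lam_bar)         # candidate event time
        if t > T:
            break
        lam = intensity(t, times, marks)            # true intensity at candidate
        if rng.uniform() * lam_bar <= lam.sum():    # accept with prob lam/lam_bar
            k = rng.choice(d, p=lam / lam.sum())    # component k w.p. lambda_k/lambda
            times.append(t)
            marks.append(k)
    return np.array(times), np.array(marks)
```

The returned `times` are the event times of the superposed process and `marks` record which coordinate $N_{k}$ each event belongs to.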

Assumption A1. For all $k,k^{\prime}=1,\cdots,d$ , we have

(2.2) \begin{equation}\int_{0}^{\infty}H_{kk^{\prime}}(t)dt<\infty,\end{equation}

and the spectral radius $\rho(\|H\|_{1})$ of the matrix

\begin{equation*}\|H\|_{1} \;:\!=\;\Big(\int_{0}^{\infty}H_{kk^{\prime}}(t)dt\Big)_{k,k^{\prime}}\end{equation*}

satisfies $\rho(\|H\|_{1})<1$ .
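Assumption A1 is straightforward to check numerically once the matrix $\|H\|_{1}$ is available. A minimal sketch, again under the illustrative assumption of exponential kernels $H_{kk^{\prime}}(t)=a_{kk^{\prime}}e^{-bt}$ , so that $\int_{0}^{\infty}H_{kk^{\prime}}(t)dt=a_{kk^{\prime}}/b$ :

```python
import numpy as np

# Hypothetical 2x2 amplitudes and common decay rate (our choices)
a = np.array([[0.5, 0.2],
              [0.3, 0.4]])
b = 1.0
H1 = a / b                               # the matrix ||H||_1 in Assumption A1

rho = max(abs(np.linalg.eigvals(H1)))    # spectral radius rho(||H||_1)
stable = rho < 1                         # Assumption A1 holds iff rho < 1
```

For this matrix the eigenvalues are 0.7 and 0.2, so the stability condition holds.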

The condition in (2.2) is also called the non-explosion criterion in [Reference Bacry, Delattre, Hoffmann and Muzy1, Reference Daley and Vere-Jones5]. It is easy to calculate the mean of N(t):

(2.3) \begin{equation}\mathbb{E}[N(t)]=\mathbb{E}\Big[\int_{0}^{t}\lambda(s)ds\Big]=\Big(\int_{0}^{t}\big(I+\varphi*I(s)\big)ds\Big)\cdot\lambda_{0}, \quad t \ge 0\,.\end{equation}

Here $\lambda_{0}=\big(\lambda_{k0}\big)_{k}$ is the constant vector of baseline intensities in (2.1), $\varphi*I(s)$ is a matrix with $\varphi_{kk^{\prime}}*1(s)=\int_{0}^{s}\varphi_{kk^{\prime}}(s-u)\,du$ at its $(k,k^{\prime})$ th entry (we abuse notation to let $1(\cdot)$ indicate a constant function equal to one), and $\varphi$ is a $d\times d$ matrix defined as an $L^{1}(dt)$ limit of the following series:

(2.4) \begin{equation}\varphi(t)=H(t)+H*H(t)+H*H*H(t)+\cdots,\end{equation}

where $F*G$ is defined as the matrix having as each entry

\begin{equation*}\textrm{ent}_{ij} (F*G(t)) =\sum_{k} F_{ik}*G_{kj}(t)=\sum_{k} \int_{0}^{t}F_{ik}(t-s)G_{kj}(s)\,ds\end{equation*}

for matrix-valued functions F, G; see e.g. [Reference Bacry, Delattre, Hoffmann and Muzy1, Theorem 2]. The function $\varphi$ can also be understood as the renewal density of the function H and satisfies the (matrix) renewal equation

\begin{equation*}\varphi(t)=H(t)+\int_{0}^{t}H(t-s)\varphi(s)ds.\end{equation*}

In addition, under Assumption A1,

(2.5) \begin{equation} I+\|\varphi\|_{1}=(I-\|H\|_{1})^{-1}. \end{equation}
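The identity (2.5) can be sanity-checked numerically by truncating the series (2.4) at the level of $L^{1}$ norms, since $\|\varphi\|_{1}=\sum_{j\ge1}\|H\|_{1}^{j}$ when $\rho(\|H\|_{1})<1$ . A sketch with a hypothetical $2\times2$ matrix $\|H\|_{1}$ :

```python
import numpy as np

H1 = np.array([[0.5, 0.2],
               [0.3, 0.4]])    # example ||H||_1 with spectral radius 0.7 < 1

# ||varphi||_1 = ||H||_1 + ||H||_1^2 + ... , the L1 norms of the series (2.4)
phi1 = sum(np.linalg.matrix_power(H1, j) for j in range(1, 200))

lhs = np.eye(2) + phi1                     # I + ||varphi||_1
rhs = np.linalg.inv(np.eye(2) - H1)        # (I - ||H||_1)^{-1}
```

The truncation error after 200 terms is of order $0.7^{200}$ , far below floating-point precision.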

The functional law of large numbers for the Hawkes process N reads as follows [Reference Bacry, Delattre, Hoffmann and Muzy1, Theorem 1]: under Assumption A1,

(2.6) \begin{equation}\sup_{t\in[0,1]}\Big\|\bar{N}_{T}(t)- (I-\|H\|_{1})^{-1}\cdot\lambda_{0}t\Big\|\to 0\quad \mbox{as}\quad T\to\infty,\end{equation}

almost surely and in $L^{2}(\mathbb{P})$ , where $\bar{N}_{T}(t)\;:\!=\;T^{-1}N(Tt)$ . The FCLT for the Hawkes process N is stated as follows [Reference Bacry, Delattre, Hoffmann and Muzy1, Theorem 2]: under Assumption A1,

(2.7) \begin{equation}\hat{N}_{T}(t)\;:\!=\;\sqrt{T}\big(\bar{N}_{T}(t)- \mathbb{E}[\bar{N}_{T}(t)]\big)\Rightarrow \hat{N}(t) \quad \mbox{in}\quad ({\mathbb D}^{d},\, J_{1}),\end{equation}

as $T\to \infty$ , where

(2.8) \begin{equation} \hat{N}(t)\;:\!=\; (I-\|H\|_{1})^{-1}\cdot\Sigma^{1/2}\cdot W, \end{equation}

W is a d-dimensional standard Brownian motion, and

(2.9) \begin{equation}\Sigma=\textrm{diag}\big((I-\|H\|_{1})^{-1}\cdot\lambda_{0}\big).\end{equation}
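For a concrete instance of (2.6)–(2.9), the following sketch computes the FLLN rate vector $(I-\|H\|_{1})^{-1}\lambda_{0}$ and the per-unit-time covariance matrix of the Brownian limit $\hat{N}$ ; the numerical values of $\|H\|_{1}$ and $\lambda_{0}$ are hypothetical.

```python
import numpy as np

H1 = np.array([[0.5, 0.2],
               [0.3, 0.4]])          # example ||H||_1, spectral radius 0.7 < 1
lam0 = np.array([1.0, 2.0])          # example baseline intensity vector

inv = np.linalg.inv(np.eye(2) - H1)
mean_rate = inv @ lam0               # slope of the FLLN limit in (2.6)
Sigma = np.diag(mean_rate)           # the diagonal matrix in (2.9)
cov_rate = inv @ Sigma @ inv.T       # cov(N_hat(t), N_hat(t)) / t from (2.8)
```

The limit $\hat{N}$ in (2.8) is a centered Gaussian process with $\textrm{cov}\big(\hat{N}(t),\hat{N}(s)\big)=(t\wedge s)\,(I-\|H\|_{1})^{-1}\Sigma(I-\|H\|_{1}^{\textrm{T}})^{-1}$ , which `cov_rate` tabulates.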

3. Splitting of a one-dimensional Hawkes process

We now describe the random splitting mechanism of Hawkes processes. We first focus on the splitting of a one-dimensional Hawkes process for the sake of exposition. The splitting of a d-dimensional Hawkes process can be derived similarly and will be discussed in Section 5 together with aggregation. We provide two representations of the split processes: the first using the method in Chapter 9.5 of Whitt [Reference Whitt21], and the second using the immigration–birth branching representation. Note that for the case $d=1$ , Assumption A1 reduces to $\|H\|_{1}\in(0,1)$ .

3.1. The first representation

Let N be the Hawkes process in (2.1) with $d=1$ and self-exciting function $H\;:\;{\mathbb R}_{+}\to{\mathbb R}_{+}$ . Denote by $\{\xi_{j},j\ge1\}$ the splitting variables: whenever $\xi_{j}=m$ , the jth event, occurring at $\tau_j$ , is assigned to the mth split process. Under this standard splitting, the process N splits into n sub-counting processes, denoted by $N^{(m)}$ :

(3.1) \begin{equation}N^{(m)}(t)=\sum_{j=1}^{N(t)}\boldsymbol{1}(\xi_{j}=m)\quad \text{for every $m=1,2,\cdots,n$, and $t\ge0$.}\end{equation}

We assume that $\{\xi_{j},j\ge1\}$ is a sequence of independent and identically distributed (i.i.d.) variables, independent of N, with

(3.2) \begin{equation}\mathbb{P}\big(\xi_{j}=m\big)=p^{(m)}\quad \text{and}\quad \sum_{m=1}^{n}p^{(m)}=1.\end{equation}

By the independence between N and $\{\xi_j\}_{j}$ , it is easy to see that

(3.3) \begin{equation}\mathbb{E}[N^{(m)}(t)]=p^{(m)}\mathbb{E}[N(t)]=\lambda_{0}\, p^{(m)}\int_{0}^{t}\big(1+\varphi*1(s)\big)ds.\end{equation}

(Here $\varphi$ is an ${\mathbb R}_+$ -valued function.)
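The splitting scheme (3.1)–(3.2) is easy to mimic empirically: mark each event with an i.i.d. draw from $\{p^{(m)}\}$ and count per label. In the sketch below the uniform event times stand in for an arbitrary simple point process, since the relation $\mathbb{E}[N^{(m)}(t)]=p^{(m)}\mathbb{E}[N(t)]$ in (3.3) uses only the independence of the marks from N; all parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in event times of a simple counting process N on [0, 10]
event_times = np.sort(rng.uniform(0.0, 10.0, size=10_000))

# Splitting probabilities p^{(m)} and i.i.d. marks xi_j as in (3.2)
p = np.array([0.5, 0.3, 0.2])
xi = rng.choice(len(p), size=len(event_times), p=p)

# Split counts N^{(m)}(T) as in (3.1), evaluated at the horizon T
counts = np.array([(xi == m).sum() for m in range(len(p))])
frac = counts / len(event_times)    # should approximate p, by (3.3)
```

With $10^{4}$ events the empirical fractions match $p$ to within a few standard errors.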

We consider the scaled process indexed by T, and all the variables are marked with an additional subscript T; that is, $N_{T}$ is a Hawkes process with intensity process $\lambda_{T}(\cdot)$ whose baseline intensity in (2.1) is $\lambda_{0,T}$ , and the kernel function H stays the same. The splitting variables are denoted by $\{\xi_{j,T},j\ge1\}$ with distribution $\{p^{(m)}_{T}\}_{m}$ .

Assumption A2. Assume that, for some $\lambda_{0},p^{(m)}>0$ with $\sum_{m=1}^{n}p^{(m)}=1$ ,

\begin{equation*}\lambda_{0,T}\to \lambda_{0}\quad \text{and}\quad p^{(m)}_{T}\to p^{(m)}\quad \mbox{as}\quad T\to\infty.\end{equation*}

For every $m=1,2,\cdots,n$ , define

\begin{equation*}\bar{N}^{(m)}_{T}(t)\;:\!=\; \frac{1}{T}N^{(m)}_{T}(Tt) = \frac{1}{T} \sum_{j=1}^{N_{T}(Tt)} \boldsymbol{1}(\xi_{j,T}=m)\end{equation*}

and the diffusion-scaled processes

(3.4) \begin{equation}\hat{N}^{(m)}_{T}(t)\;:\!=\;\sqrt{T}\Big(\bar{N}^{(m)}_{T}(t)- \mathbb{E}\big[\bar{N}^{(m)}_{T}(t) \big] \Big)\quad \text{and}\quad \hat{S}^{(m)}_{T}(t)\;:\!=\;\frac{1}{\sqrt{T}} \sum_{j=1}^{\lfloor Tt \rfloor} \Big(\boldsymbol{1}(\xi_{j,T}=m) -p^{(m)}_{T}\Big),\end{equation}

where $\lfloor t \rfloor$ represents the largest integer no larger than $t\in{\mathbb R}_+$ . Then we have the first representation

(3.5) \begin{equation}\hat{N}^{(m)}_{T}(t)= \hat{S}^{(m)}_{T} ( \bar{N}_{T}(t)) + p^{(m)}_{T} \hat{N}_{T}(t).\end{equation}

Note that in the representation (3.5), the process $\hat{N}^{(m)}_{T}$ consists of two components: $\hat{S}^{(m)}_{T}$ represents the oscillation from the splitting scheme, and $p^{(m)}_{T}\hat{N}_{T}$ is the oscillation inherited from the original counting process and proportional to the splitting probability. Note that the processes $ \hat{S}^{(m)}_{T}$ and $ \hat{N}_{T}$ are independent, and in the limit we see that the two components $ \hat{S}^{(m)}_{T}\circ \bar{N}_{T}$ and $ p^{(m)}_{T}\hat{N}_{T}$ converge to two independent processes as $T\to \infty$ .

By applying [Reference Whitt21, Theorem 9.5.1], under Assumptions A1 and A2, provided with the limit for $\bar{N}_{T}$ in (2.6) and the limit for $\hat{N}_{T}$ in (2.8), we obtain the following.

Proposition 3.1. Let $(\hat{N}^{(m)}_{T})_{m}$ be the diffusion-scaled process in (3.4). Assume that $\|H\|_{1}\in(0,1)$ and Assumption A2 hold. We have

\begin{equation*}\big(\hat{N}^{(m)}_{T}\big)_{m}\Rightarrow \big(\hat{N}^{(m)}\big)_{m} \quad \mbox{in}\quad ({\mathbb D}^{n}, J_{1}) \quad \mbox{as}\quad T \to \infty,\end{equation*}

with

(3.6) \begin{equation}\begin{split}\hat{N}^{(m)}=&\ \frac{\lambda_{0}^{1/2}}{(1-\|H\|_{1})^{1/2}} \hat{S}^{(m)}+p^{(m)}\frac{\lambda_{0}^{1/2}}{(1-\|H\|_{1})^{3/2}}W,\end{split}\end{equation}

where $(\hat{S}^{(m)})_m$ is an n-dimensional Brownian motion with covariance function

\begin{equation*}\textrm{cov}\big(\hat{S}^{(m)}(t),\hat{S}^{(m')}(s)\big)=\big(\!-\!p^{(m)}p^{(m')}+p^{(m)}\delta_{mm'}\big)(t\wedge s),\end{equation*}

and W is a standard Brownian motion, independent of $(\hat{S}^{(m)})_m$ .

Therefore, the limit $\big(\hat{N}^{(m)}\big)_{m}$ is an n-dimensional Brownian motion with covariance

(3.7) \begin{equation}\textrm{cov}\big(\hat{N}^{(m)}(t),\hat{N}^{(m')}(s)\big)=(t\wedge s)\bigg(\frac{\lambda_{0}p^{(m)}(\delta_{mm'}-p^{(m')})}{1-\|H\|_{1}} + \frac{\lambda_{0}p^{(m)}p^{(m')}}{(1-\|H\|_{1})^{3}}\bigg).\end{equation}
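The limit covariance (3.7) is easy to tabulate; the following sketch computes it for hypothetical parameter values. Summing all entries recovers $\lambda_{0}(1-\|H\|_{1})^{-3}$ , the variance coefficient of the FCLT limit of the original one-dimensional process, since the splitting fluctuations cancel in the aggregate.

```python
import numpy as np

def split_limit_cov(lam0, p, h1):
    """Per-unit-time covariance matrix of the Brownian limit in (3.7):
    cov(N_hat^{(m)}(t), N_hat^{(m')}(t)) / t, for splitting probabilities
    p and h1 = ||H||_1 in (0, 1)."""
    p = np.asarray(p, dtype=float)
    # splitting-noise term: lam0 * p_m (delta_{mm'} - p_{m'}) / (1 - h1)
    term1 = lam0 * (np.diag(p) - np.outer(p, p)) / (1.0 - h1)
    # inherited-fluctuation term: lam0 * p_m p_{m'} / (1 - h1)^3
    term2 = lam0 * np.outer(p, p) / (1.0 - h1) ** 3
    return term1 + term2

C = split_limit_cov(lam0=1.0, p=[0.6, 0.4], h1=0.5)
```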

3.2. The second representation

Recall the immigration–birth branching representation with chronological levels for the Hawkes process N described in (2.1), which generalizes the one proposed by [Reference Hawkes and Oakes12]. Basically, the points of N are categorized virtually into those from an exogenous arrival process, called immigrants, which form the first generation, and those generated by existing points, called children, whose chronological levels (generations) are defined accordingly. Thus we rewrite

(3.8) \begin{equation}N(t)=\sum_{l\geq1} N_{l}(t),\end{equation}

where $N_{1}(t)=\sup\{j\geq1, \tau_{1j}\leq t\}$ is a Poisson process with parameter $\lambda_{0}$ representing the arrival rate of the immigrants, and $\tau_{1j}$ denotes the arrival time of the jth immigrant. For $l\geq1$ , $N_{l+1}(t)=\sup\{j\geq1, \tau_{(l+1)j}\leq t\}$ , representing the individuals of $(l+1)$ th generation, is an inhomogeneous Poisson process with intensity, by conditioning on $\mathscr{F}_{l}(t)$ , as follows:

(3.9) \begin{equation}\lambda_{l}(t)=\sum_{j=1}^{N_{l}(t)}H(t-\tau_{lj})=\int_{0}^{t}H(t-s)N_{l}(ds)\in\mathscr{G}_{l}(t),\end{equation}

where $\mathscr{G}_{l}(t)=\sigma\{N_{l}(s), s\leq t\}$ and $\mathscr{F}_{l}(t)=\bigvee_{1\leq l'\leq l}\mathscr{G}_{l'}(t)$ , and where $\tau_{lj}$ denotes the birth time of the jth child of the lth generation, which is produced by some individual in the $(l-1)$ th generation. Here $\mathscr{G}_{l}(t)$ represents the information produced by the lth generation, $\mathscr{F}_{l}(t)$ represents the information up to the lth generation, and $\mathscr{F}_{\infty}(t)=\bigvee_{l\ge1}\mathscr{F}_{l}(t)$ collects the information of all the generations up to time t, which includes not only the occurrence times of events but also the virtually defined generation information and hence is strictly larger than the information generated by N itself. Under the non-explosion assumption (2.2), $\lambda_{l}$ and $N_{l}$ are finite-valued and can be constructed pathwise. By conditioning and the additive property of the intensity for the independent counting processes, N in (3.8) is a simple counting process with conditional intensity in (2.1).

To describe the split processes with the branching generations, we further let $\{\xi_{lj}, j\ge1\}$ be the i.i.d. splitting variables for the individuals from $N_{l}$ , which have the same distributional properties as $\{\xi_{j},j\geq1\}$ in (3.1). Recall $p^{(m)}$ in (3.2). Let

(3.10) \begin{equation}N^{(m)}_{l}(t) \;:\!=\;\sum_{j=1}^{N_{l}(t)}\boldsymbol{1}(\xi_{lj}=m), \quad \text{for every $m=1,2,\cdots,n$}.\end{equation}

Then $N^{(m)}=\sum_{l\ge1} N^{(m)}_{l}$ .

Proposition 3.2. $\big(N^{(m)}\big)_{m}$ is an n-dimensional Hawkes process, where the baseline intensity vector is $\big(p^{(m)}\lambda_{0}\big)_{m}$ and the $(m,m^{\prime})$ th entry of the mutually exciting matrix is $p^{(m)}H$ ; that is, its intensity is given by

(3.11) \begin{equation}\lambda^{(m)}(t)=p^{(m)}\lambda(t)=p^{(m)}\lambda_{0}+\sum_{m'=1}^{n}\int_{0}^{t}\big(p^{(m)}H(t-s)\big)N^{(m')}(ds),\end{equation}

for every $m=1,2,\cdots,n$ , where $\lambda$ is the intensity for N in (2.1).

Remark 3.1. We remark that although the split process $\big(N^{(m)}\big)_{m}$ is a multivariate Hawkes process, the cross-exciting function $H_{mm'}$ in the definition (2.1) takes the special form $p^{(m)}H$ in (3.11), independent of $m^{\prime}$ .

Proof. We show that $\big(N^{(m)}\big)_{m}$ is a counting process with conditional jumping intensity $\big(\lambda^{(m)}\big)_{m}$ by evaluating this intensity directly.

For every l, notice that given $\{\mathscr{F}_{l}(t)\}_{t\ge0}$ , $\big(N^{(m)}_{l+1}\big)_{m}$ is a vector of Poisson processes with intensities $\lambda^{(m)}_{l}=p^{(m)}\lambda_{l}$ , independent across m. Therefore, conditioning on $\mathscr{F}_{\infty}(t)$ , the jumping intensity of $N^{(m)}$ at t is

\begin{equation*}\begin{split}\lambda^{(m)}(t)\;:\!=\;&\ \sum_{l\geq1}\lambda^{(m)}_{l-1}(t)=p^{(m)}\lambda(t)=p^{(m)}\Big(\lambda_{0}+\int_{0}^{t}H(t-s)N(ds)\Big)\\[5pt]=&\ p^{(m)}\lambda_{0}+\sum_{m'=1}^{n}\int_{0}^{t}\big(p^{(m)}H(t-s)\big) N^{(m')}(ds),\end{split}\end{equation*}

which is a process adapted to the natural filtration of $(N^{(m)})_{m}$ . This proves Proposition 3.2.

Recall $\hat{N}^{(m)}_T$ defined in (3.4). By applying [Reference Bacry, Delattre, Hoffmann and Muzy1, Theorem 2] to Proposition 3.2, we obtain the following result.

Proposition 3.3. Suppose that $\|H\|_{1}\in(0,1)$ and Assumption A2 hold. We have

\begin{equation*}\big(\hat{N}^{(m)}_{T}\big)_{m}\Rightarrow \big(\hat{N}^{(m)}\big)_{m} \quad \mbox{in}\quad ({\mathbb D}^{n}, J_{1}) \quad \mbox{as}\quad T \to \infty,\end{equation*}

where $\big(\hat{N}^{(m)}\big)_{m}$ is a standard n-dimensional Brownian motion with covariance matrix

(3.12) \begin{equation}\big(I-\tilde{\Xi}\big)^{-1}\cdot\tilde{\Sigma}\cdot\big(I-\tilde{\Xi}^{\textrm{T}}\big)^{-1},\end{equation}

where $\displaystyle\tilde{\Xi}=\big(p^{(m)}\|H\|_{1}\big)_{mm'}$ and $\displaystyle \tilde{\Sigma}=\textrm{diag}(p)\frac{\lambda_{0}}{1-\|H\|_{1}}$ .

Proposition 3.4. The limits in Propositions 3.1 and 3.3 are equivalent in distribution.

Proof. To check the equivalence it suffices to show that the covariance functions in (3.7) coincide with the matrix in (3.12).

By definition we have

\begin{equation*}\tilde{\Xi}=\big(p^{(m)}\|H\|_{1}\big)_{mm'}=\textrm{diag}(p)\cdot\text{ones}(n)\cdot \|H\|_{1},\end{equation*}

where $\text{ones}(n)$ denotes the n-dimensional square matrix with 1 for all its entries. Under Assumption A1, the spectral radius of $\tilde{\Xi}$ is $\|H\|_{1}\in(0,1)$ . Therefore,

\begin{equation*}\tilde{\Xi}^{j}=\textrm{diag}(p)\cdot\text{ones}(n)\cdot \|H\|_{1}^{j},\end{equation*}

where we use the fact $\sum_{m=1}^{n}p^{(m)}=1$ in the identity. Thus,

\begin{equation*}\big(I-\tilde{\Xi}\big)^{-1}=I+\sum_{j\ge1}\tilde{\Xi}^{j}=I+\textrm{diag}(p)\cdot\text{ones}(n)\cdot \frac{\|H\|_{1}}{1-\|H\|_{1}},\end{equation*}

and from (2.8) and (2.9),

\begin{equation*}\tilde{\Sigma}=\textrm{diag}\Big(\big(I-\tilde{\Xi}\big)^{-1}\big(p^{(m)}\lambda_{0}\big)_{m}\Big)=\textrm{diag}(p)\frac{\lambda_{0}}{1-\|H\|_{1}}.\end{equation*}

We then have

\begin{equation*}\begin{split}&\ \big(I-\tilde{\Xi}\big)^{-1}\cdot\tilde{\Sigma}\cdot\big(I-\tilde{\Xi}^{\textrm{T}}\big)^{-1}\\[5pt]=&\ \bigg(I+\textrm{diag}(p)\cdot\text{ones}(n)\cdot \frac{\|H\|_{1}}{1-\|H\|_{1}}\bigg)\textrm{diag}\bigg(\frac{\lambda_{0}p^{(m)}}{1-\|H\|_{1}}\bigg)\\[5pt]& \qquad \times\bigg(I+\text{ones}(n)\cdot \frac{\|H\|_{1}}{1-\|H\|_{1}}\cdot\textrm{diag}(p)^{\textrm{T}}\bigg)\\[5pt]=&\ \frac{\lambda_{0}}{1-\|H\|_{1}}\,\textrm{diag}(p)+\lambda_{0}\frac{2\|H\|_{1}-\|H\|_{1}^{2}}{(1-\|H\|_{1})^{3}}\, p\cdot p^{\textrm{T}},\end{split}\end{equation*}

which coincides with the matrix in (3.7).
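The matrix identity established in this proof can also be confirmed numerically. The sketch below builds the covariance (3.12) directly from $\tilde{\Xi}$ and $\tilde{\Sigma}$ and compares it with the closed form (3.7); the parameter values are hypothetical.

```python
import numpy as np

lam0, h1 = 1.0, 0.5               # example baseline intensity and ||H||_1
p = np.array([0.6, 0.4])          # example splitting probabilities
n = len(p)

# Matrices in (3.12): Xi~ has (m, m') entry p^{(m)} ||H||_1 (constant rows),
# and Sigma~ = diag(p) * lam0 / (1 - ||H||_1).
Xi = np.outer(p, np.ones(n)) * h1
Sigma = np.diag(p) * lam0 / (1.0 - h1)
inv = np.linalg.inv(np.eye(n) - Xi)
cov_312 = inv @ Sigma @ inv.T

# Closed form from (3.7)
cov_37 = (lam0 * (np.diag(p) - np.outer(p, p)) / (1.0 - h1)
          + lam0 * np.outer(p, p) / (1.0 - h1) ** 3)
```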

3.2.1. Another perspective via decomposed processes.

We have from [Reference Bacry, Delattre, Hoffmann and Muzy1, Lemma 4] that

\begin{equation*}N^{(m)}(t)-\mathbb{E}\big[N^{(m)}(t)\big]=X^{(m)}(t)+p^{(m)}Y(t),\end{equation*}

where $X^{(m)}(t)=N^{(m)}(t)-\int_{0}^{t}\lambda^{(m)}(s)ds$ and

\begin{equation*}X(t)=\sum_{m=1}^{n}X^{(m)}(t)=N(t)-\int_{0}^{t}\lambda(s)ds, \qquad Y(t)=\int_{0}^{t}\varphi(t-s)X(s)ds.\end{equation*}

In addition to the process $N^{(m)}_{l}$ in the proof of Proposition 3.2, we further define

\begin{equation*}X^{(m)}_{l}(t)=N^{(m)}_{l}(t)-\int_{0}^{t}\lambda^{(m)}_{l-1}(s)ds=N^{(m)}_{l}(t)-p^{(m)}\int_{0}^{t}\lambda_{l-1}(s)ds.\end{equation*}

Then we also have $X^{(m)}=\sum_{l\ge1}X^{(m)}_{l}$ . By the conditional independence of the $X^{(m)}_{l}$ for $m=1,\dots,n$ and the zero covariance for different l and $l^{\prime}$ by definition, we obtain immediately the following covariance function of $X^{(m)}$ : for $m,m'=1,\dots,n$ and $t, s\ge 0$ ,

\begin{equation*}\begin{split}&\ \textrm{cov}\Big(X^{(m)}(t),X^{(m')}(s)\Big)= \mathbb{E}\bigg[\sum_{l\ge1}X^{(m)}_{l}(t)\sum_{l'\ge1}X^{(m')}_{l'}(s)\bigg]\\[5pt]=&\ \delta_{mm'}\sum_{l\ge1}\mathbb{E}\Big[X^{(m)}_{l}(t)X^{(m)}_{l}(s)\Big]= \delta_{mm'}\sum_{l\ge1}\mathbb{E}\bigg[\int_{0}^{t\wedge s}\lambda^{(m)}_{l-1}(u)du\bigg]\\[5pt]=&\ \delta_{mm'}p^{(m)}\sum_{l\ge1}\int_{0}^{t\wedge s}H^{(l-1)}*\lambda_{0}(u)du= \delta_{mm'}p^{(m)}\lambda_{0}\int_{0}^{t\wedge s}\big(1+\varphi*1(u)\big)du,\end{split}\end{equation*}

where $\delta_{mm'}=1$ if $m=m'$ and $\delta_{mm'}=0$ if $m\neq m'$ . Notice that $\big(X^{(m)}\big)_{m}$ is a martingale with respect to the natural filtration $\big\{\sigma\big\{\big(N^{(m)}(s)\big)_{m}, s\leq t\big\}, t\ge0 \big\}$ , and its subprocesses $X^{(m)}_{l}$ do not jump at the same time.

Again we use the same scaling for the processes and quantities indexed by T. With the representations in the proposition, we can define $\big(N^{(m)}_{T}\big)_{m}$ and write

(3.13) \begin{equation}\hat{N}^{(m)}_{T}(t)\;:\!=\;\frac{1}{\sqrt{T}}\Big(N^{(m)}_{T}(Tt)-\mathbb{E}\big[N^{(m)}_{T}(Tt)\big]\Big)= \hat{X}^{(m)}_{T}(t) + p^{(m)}_{T} \hat{Y}_{T}(t), \quad t\ge 0,\end{equation}

where for every $m=1,2,\cdots,n$ and $l\ge1$

\begin{equation*}\hat{X}^{(m)}_{l,T}(t)=\frac{1}{\sqrt{T}}X^{(m)}_{l,T}(Tt)=\frac{1}{\sqrt{T}}\Big(N^{(m)}_{l,T}(Tt)-\int_{0}^{Tt}\lambda^{(m)}_{l-1,T}(s)ds\Big),\end{equation*}

with

(3.14) \begin{equation}\hat{X}^{(m)}_{T}(t)=\sum_{l\ge1}\hat{X}^{(m)}_{l,T}(t),\qquad \hat{X}_{T}(t)=\sum_{m=1}^{n}\hat{X}^{(m)}_{T}(t),\qquad \hat{Y}_{T}(t)=\int_{0}^{t} T \varphi(T(t-s))\hat{X}_{T}(s)ds.\end{equation}

Then we have from the immigration–birth representation that

(3.15) \begin{align} \mathbb{E}\Big[\bar{N}^{(m)}_{T}(t)\Big]=&\ \lambda_{0,T}p^{(m)}_{T}\cdot \int_{0}^{t}\big(1+\varphi*1(Ts)\big)ds,\end{align}
(3.16) \begin{align}\textrm{cov}\Big(\hat{X}^{(m)}_{T}(t),\hat{X}^{(m')}_{T}(s)\Big)=&\ \delta_{mm'}\lambda_{0,T}p^{(m)}_{T}\cdot \int_{0}^{t\wedge s}\big(1+\varphi*1(Tu)\big)du.\end{align}

It is clear that given $\|H\|_{1}\in(0,1)$ and Assumption A2, by (2.5), we obtain that this covariance converges as $T\to \infty$ to

\begin{equation*}\delta_{mm'}\lambda_{0}p^{(m)} (1+\|\varphi\|_{1}) (t\wedge s) = \delta_{mm'}\lambda_{0}p^{(m)}(1-\|H\|_1)^{-1} (t\wedge s). \end{equation*}

We observe that the two components in the expression for $\hat{N}^{(m)}_{T}$ in (3.13) are intrinsically correlated, so that it appears to be more complicated than the first expression in (3.5); however, the martingale convergence method as in [Reference Bacry, Delattre, Hoffmann and Muzy1] can be applied and results in the following proposition (proof details are omitted for brevity).

Proposition 3.5. Under the conditions of $\|H\|_{1}\in(0,1)$ and Assumption A2, $\big(\hat{X}^{(m)}_{T}\big)_{m}\Rightarrow \big(\hat{X}^{(m)}\big)_{m}$ in $({\mathbb D}^{n}, J_1)$ as $T\to\infty$ , where $\big(\hat{X}^{(m)}\big)_{m}$ is an n-dimensional Brownian motion with covariance function

\begin{equation*}\textrm{cov}\Big(\hat{X}^{(m)}(t),\hat{X}^{(m')}(s)\Big)=(t\wedge s)\cdot\delta_{mm'}\frac{\lambda_{0} p^{(m)}}{1-\|H\|_{1}}\,.\end{equation*}

Thus, $\hat{Y}_{T}\Rightarrow \hat{Y}$ in $({\mathbb D}, J_1)$ as $T\to\infty$ , where

\begin{equation*}\hat{Y}=\frac{\|H\|_{1}}{1-\|H\|_{1}}\hat{X}=\frac{\|H\|_{1}}{1-\|H\|_{1}}\sum_{m=1}^{n}\hat{X}^{(m)}.\end{equation*}

As a consequence, the process $\big(\hat{N}^{(m)}\big)_{m}$ can also be represented as

(3.17) \begin{equation}\hat{N}^{(m)}= \frac{\lambda_{0}^{1/2}}{(1-\|H\|_{1})^{1/2}}\Big(\sqrt{p^{(m)}}W^{(m)}\Big)+p^{(m)}\frac{\lambda_{0}^{1/2}\|H\|_{1}}{(1-\|H\|_{1})^{3/2}} W,\end{equation}

where $\big(W^{(m)}\big)_{m}$ is a standard n-dimensional Brownian motion, and $W=\sum_{m=1}^{n}\sqrt{p^{(m)}}W^{(m)}$ .

Remark 3.2. We observe that we can rewrite the expression for the limit $\hat{N}^{(m)}$ in (3.17) as

(3.18) \begin{equation}\hat{N}^{(m)}= \frac{\lambda_{0}^{1/2}}{(1-\|H\|_{1})^{1/2}} \hat{S}^{(m)}+p^{(m)}\frac{\lambda_{0}^{1/2}}{(1-\|H\|_{1})^{3/2}}W,\end{equation}

where $\hat{S}^{(m)}\;:\!=\;\sqrt{p^{(m)}}W^{(m)}- p^{(m)}W$ . It can be checked by direct calculations that the two components $(\hat{S}^{(m)})_{m}$ and W in the second expression in (3.18) are independent. Moreover, for every m and $m^{\prime}$ ,

\begin{equation*}\begin{split} & \textrm{cov}\big(\hat{S}^{(m)}(t),\hat{S}^{(m')}(s)\big) \\[5pt]&= \sqrt{p^{(m)}p^{(m')}}\mathbb{E}\big[W^{(m)}(t)W^{(m')}(s)\big]-p^{(m)}p^{(m')}\mathbb{E}\big[W^{(m)}(t)W^{(m)}(s)\big]\\[5pt]&\qquad -p^{(m)}p^{(m')}\mathbb{E}\big[W^{(m')}(t)W^{(m')}(s)\big]+ p^{(m)}p^{(m')}\mathbb{E}\big[W(t)W(s)\big]\\[5pt]&=p^{(m)}\big(\delta_{mm'}-p^{(m')}\big)(t\wedge s),\end{split}\end{equation*}

which is exactly the covariance function of $(\hat{S}^{(m)})_m$ in Proposition 3.1. This also proves the equivalence of the limit in (3.17) and that in Proposition 3.1, and thus equivalence with that in Proposition 3.3.
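The direct calculation in this remark amounts to a finite-dimensional covariance computation, which can be checked mechanically: writing $\hat{S}^{(m)}$ in the basis of the independent motions $W^{(j)}$ gives a coefficient matrix whose Gram matrix must equal $\big(p^{(m)}(\delta_{mm'}-p^{(m')})\big)_{mm'}$ , and whose product with $(\sqrt{p^{(j)}})_{j}$ must vanish (independence of $(\hat{S}^{(m)})_m$ and W). A sketch with hypothetical probabilities:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # example splitting probabilities

# S_hat^{(m)} = sqrt(p_m) W^{(m)} - p_m W with W = sum_j sqrt(p_j) W^{(j)},
# so S_hat^{(m)} = sum_j c[m, j] W^{(j)} with coefficient matrix:
c = np.diag(np.sqrt(p)) - np.outer(p, np.sqrt(p))

# cov(S^{(m)}(t), S^{(m')}(t)) / t = (c c^T)_{mm'}
cov_S = c @ c.T
target = np.diag(p) - np.outer(p, p)     # p_m (delta_{mm'} - p_{m'})

# cov(S^{(m)}(t), W(t)) / t = (c sqrt(p))_m, which should vanish
cov_SW = c @ np.sqrt(p)
```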

4. Aggregating Hawkes processes

Splitting a Hawkes process results in a multivariate Hawkes process with a special mutually exciting function. However, the aggregation of a multivariate Hawkes process is not necessarily a Hawkes process. In the following, we identify a necessary and sufficient condition for the aggregation of a multivariate Hawkes process to be a one-dimensional Hawkes process.

Let $N=(N_{k})_{k}$ be a multivariate Hawkes process with conditional intensity in (2.1). The associated aggregating process of N is denoted by

(4.1) \begin{equation}A(t)=\sum_{k=1}^{d}N_{k}(t).\end{equation}

Then, similarly to the proof of Proposition 3.2, by conditioning on the natural filtration of $(N_{k})_{k}$ , the conditional jumping intensity is given by

\begin{equation*}\lambda_{A}(t)=\sum_{k=1}^{d}\lambda_{k}(t)=\sum_{k=1}^{d}\lambda_{k0}+ \sum_{k^{\prime}=1}^{d}\int_{0}^{t}\Big(\sum_{k=1}^{d}H_{kk^{\prime}}(t-s)\Big) N_{k^{\prime}}(ds),\end{equation*}

which is a process adapted to the natural filtration of $\big(N_{k}\big)_{k}$ . Note that the filtration generated by the vector process N is strictly larger than that of A. Thus, to make $\lambda_{A}$ above adapted to the natural filtration of A, an immediate observation is the following property.

Proposition 4.1. $A=\{A(t), t\ge0\}$ is a one-dimensional Hawkes process if and only if $\tilde{H}\;:\!=\;\sum_{k=1}^{d}H_{kk^{\prime}}$ is a function independent of $k^{\prime}$ , under which the conditional intensity for A is

\begin{equation*}\lambda_A(t) =\sum_{k=1}^{d}\lambda_{k0}+\int_{0}^{t}\tilde{H}(t-s)A(ds)=e^{\textrm{T}}\cdot\lambda_{0}+\int_{0}^{t}\tilde{H}(t-s)A(ds).\end{equation*}

Remark 4.1. As a very special case, suppose that the $N_{k}$ are independent one-dimensional Hawkes processes with conditional intensity $\lambda_k(t) = \lambda_{k0} + \int_0^t H_k(t-s) N_k(ds)$ . Then

\begin{equation*}\lambda_A(t) = \sum_{k=1}^{d}\lambda_{k0}+ \sum_{k=1}^{d}\int_{0}^{t}H_{k}(t-s)N_{k}(ds).\end{equation*}

Thus, the aggregating process A is a Hawkes process with conditional intensity $\lambda_A(t) = e^{\textrm{T}}\cdot\lambda_{0}+\int_{0}^{t}H(t-s)A(ds)$ only if $H_k\equiv H$ .

Next we give another example to illustrate the property. Let N be a two-dimensional Hawkes process with positive baseline intensities $\lambda_{10},\lambda_{20}>0$ and with positive mutually exciting functions $(H_{ij})_{ij}$ . Then the associated aggregating process A is a Hawkes process if and only if $H_{11}+H_{21}=H_{12}+H_{22}$ .
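For kernels sharing a common shape, the condition in this example reduces to an equality of column sums of the amplitude matrix. A sketch under the illustrative assumption $H_{kk^{\prime}}(t)=a_{kk^{\prime}}e^{-bt}$ (our parametrization, not required by Proposition 4.1), where $H_{11}+H_{21}=H_{12}+H_{22}$ becomes $a_{11}+a_{21}=a_{12}+a_{22}$ :

```python
import numpy as np

def aggregation_is_hawkes(a, tol=1e-12):
    """For kernels H_{kk'}(t) = a[k, k'] * g(t) with one common shape g,
    H~ = sum_k H_{kk'} is independent of k' iff the column sums of a
    are all equal (the condition of Proposition 4.1)."""
    col_sums = a.sum(axis=0)
    return bool(np.all(np.abs(col_sums - col_sums[0]) < tol))

a_yes = np.array([[0.3, 0.1],
                  [0.2, 0.4]])   # columns both sum to 0.5: A is Hawkes
a_no = np.array([[0.3, 0.1],
                 [0.2, 0.3]])    # column sums 0.5 and 0.4: A is not Hawkes
```

Note that for kernels of different shapes the functions themselves must agree, not merely their $L^{1}$ norms, so this numeric check applies only under the common-shape assumption.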

Let $\hat{A}_{T}$ be the scaled aggregating process of the Hawkes process N,

\begin{equation*}\hat{A}_{T}(t)=\frac{1}{\sqrt{T}}\big(A(Tt)-\mathbb{E}\big[A(Tt)\big]\big)=e^{\textrm{T}}\cdot \hat{N}_{T}(t).\end{equation*}

Proposition 4.2. Under Assumption A1, we have

(4.2) \begin{equation}\hat{A}_{T}\Rightarrow \hat{A}= e^{\textrm{T}}\cdot\big(I-\|H\|_{1}\big)^{-1}\cdot\Sigma^{1/2}\cdot W,\end{equation}

where W is a standard Brownian motion and $\Sigma=\textrm{diag}\big((I-\|H\|_{1})^{-1}\lambda_{0}\big)$ is the diagonal matrix in (2.9).

If, in addition, $\tilde{H}\;:\!=\;\sum_{k=1}^{d}H_{kk^{\prime}}$ is a function independent of $k^{\prime}$, then $\|\tilde{H}\|_{1}\in(0,1)$ and $\hat{A}$ is a Brownian motion with diffusion coefficient $\sum_{k=1}^{d}\lambda_{k0}\big(1-\|\tilde{H}\|_{1}\big)^{-3}$.

Proof. The limit result for $\hat{A}_{T}$ in (4.2) is a direct consequence of (2.7) for $\hat{N}_{T}$ and the continuous mapping theorem, which is a one-dimensional Brownian motion with diffusion coefficient

\begin{equation*}e^{\textrm{T}}\cdot\big(I-\|H\|_{1}\big)^{-1}\cdot\Sigma\cdot\big(I-\|H\|_{1}^{T}\big)^{-1}\cdot e.\end{equation*}

For the special case where $\sum_{k}H_{kk^{\prime}}$ is independent of $k^{\prime}$, since A is then a Hawkes process, we can directly apply [Reference Bacry, Delattre, Hoffmann and Muzy1, Theorem 2] to the one-dimensional Hawkes process A with the second expression for $\lambda_A(t)$ in Proposition 4.1 and obtain the diffusion coefficient as stated. We next verify that the expression can also be obtained from that in (4.2). We have $e^{\textrm{T}}\cdot H(t)=e^{\textrm{T}} \tilde{H}(t)$ and

\begin{equation*}e^{\textrm{T}}\cdot \|H\|_{1}=e^{\textrm{T}} \|\tilde{H}\|_{1},\end{equation*}

which shows that $e^{\textrm{T}}$ is a left eigenvector of the kernel matrix $\|H\|_{1}$ with eigenvalue $\|\tilde{H}\|_{1}$. Therefore, $\|\tilde{H}\|_{1}\in(0,1)$ by Assumption A1,

\begin{equation*}e^{\textrm{T}}\cdot\|H\|_{1}^{j}=e^{\textrm{T}}\big(\|\tilde{H}\|_{1}\big)^{j}\quad \text{and}\quad e^{\textrm{T}}\cdot\big(I-\|H\|_{1}\big)^{-1}=e^{\textrm{T}}\big(1-\|\tilde{H}\|_{1}\big)^{-1},\end{equation*}

and the diffusion coefficient for $\hat{A}$ is

\begin{equation*}\begin{split}&\ e^{\textrm{T}}\cdot(I-\|H\|_{1})^{-1}\cdot\Sigma\cdot(I-\|H\|_{1}^{\textrm{T}})^{-1}\cdot e= e^{\textrm{T}}\cdot\Sigma\cdot e \big(1-\|\tilde{H}\|_{1}\big)^{-2}\\[5pt]=&\ e^{\textrm{T}}\cdot \big(I-\|H\|_{1}\big)^{-1}\cdot \lambda_{0}\big(1-\|\tilde{H}\|_{1}\big)^{-2}=e^{\textrm{T}}\cdot \lambda_{0}\big(1-\|\tilde{H}\|_{1}\big)^{-3}.\end{split}\end{equation*}

Here we make use of the fact that $\textrm{diag}(u)\cdot e=u$ for a vector u.
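The chain of identities in this proof can be checked numerically. In the following sketch, the kernel-norm matrix $\|H\|_{1}$ and the baseline vector $\lambda_{0}$ are arbitrary hypothetical choices with equal column sums, so that the condition of Proposition 4.1 holds:

```python
import numpy as np

# Hypothetical data: a kernel-norm matrix ||H||_1 with equal column sums
# (the aggregation condition) and a baseline rate vector.
K = np.array([[0.2, 0.1],
              [0.3, 0.4]])          # both column sums equal 0.5 = ||H~||_1
lam0 = np.array([1.0, 2.0])
e = np.ones(2)
c = K.sum(axis=0)[0]                # ||H~||_1

I = np.eye(2)
Sigma = np.diag(np.linalg.solve(I - K, lam0))   # diag((I - ||H||_1)^{-1} lambda_0)

# e^T (I - ||H||_1)^{-1} Sigma (I - ||H||_1^T)^{-1} e ...
lhs = e @ np.linalg.solve(I - K, Sigma @ np.linalg.solve(I - K.T, e))
# ... should equal e^T lambda_0 (1 - ||H~||_1)^{-3}
rhs = lam0.sum() / (1.0 - c) ** 3
```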

5. Splitting and aggregating multivariate Hawkes processes

Now we consider randomly splitting and aggregating a multivariate Hawkes process $N=\big(N_{k}\big)_{k}$ in (2.1). Let $\{\xi_{kj}\}_{k,j}$ be the splitting variables: whenever $\xi_{kj}=m$ , the jth individual of the kth component counting process $N_k$ (occurring at $\tau_{kj}$ ) is assigned to the mth sub-counting process. Then the $\mathbb{N}^{d}$ -valued Hawkes process $\big(N_{k}\big)_{k}$ splits into an $\mathbb{N}^{d\times n}$ -valued process $\big(N^{(m)}_{k}\big)_{k,m}$ and

\begin{equation*}N^{(m)}_{k}(t)=\sum_{j=1}^{N_{k}(t)}\boldsymbol{1}(\xi_{kj}=m)\quad \text{for every} \, k,\, m,\end{equation*}

where $\{\xi_{kj}\}_{j}$ are i.i.d. variables, independent of N, with distribution

\begin{equation*}\mathbb{P}\big(\xi_{kj}=m\big)=p^{(m)}_{k}\quad \text{for every}\, k,\, m,\quad \text{where}\quad \sum_{m=1}^{n}p^{(m)}_{k}=1\quad \text{for every}\, k.\end{equation*}

We consider the following aggregated process:

\begin{equation*}A^{(m)}(t)=\sum_{k=1}^{d}N^{(m)}_{k}(t), \quad t\ge 0.\end{equation*}

By the procedure of splitting and then aggregating, we have transformed a d-dimensional Hawkes process into an n-dimensional counting process.
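The bookkeeping of this splitting-then-aggregating scheme follows directly from the definitions above. The sketch below illustrates it with plain Poisson counts standing in for the Hawkes components (the bookkeeping itself does not depend on the law of N) and a hypothetical splitting matrix p:

```python
import numpy as np

rng = np.random.default_rng(1)

d, n = 3, 2                          # d component processes, n sub-processes
p = np.array([[0.3, 0.7],
              [0.5, 0.5],
              [0.2, 0.8]])           # splitting probabilities p_k^{(m)}; rows sum to 1

# Placeholder event counts N_k(t) at a fixed time t (Poisson stands in for
# Hawkes here; only the splitting bookkeeping is illustrated).
N = rng.poisson(lam=40.0, size=d)

# xi_{kj} = m assigns the j-th event of component k to sub-process m.
N_split = np.zeros((d, n), dtype=int)
for k in range(d):
    marks = rng.choice(n, size=N[k], p=p[k])     # i.i.d. splitting variables
    N_split[k] = np.bincount(marks, minlength=n)

A = N_split.sum(axis=0)              # aggregated processes A^{(m)} = sum_k N_k^{(m)}
```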

Before we proceed to study the properties of the split and aggregated processes, we discuss some potential applications. Such schemes often arise when demands must be re-categorized before they can be processed. In an insurance company, demands may arrive as a bivariate Hawkes process comprising home insurance and automobile insurance, each of which may be split into claims and new personal/commercial services, to be processed by the corresponding service departments. In a remanufacturing facility, different products may arrive as a multivariate Hawkes process and then be split and aggregated, given the different component reprocessing needs, in order to be reprocessed at the corresponding machines. Similarly, in a data center, jobs may arrive as a multivariate Hawkes process and must be regrouped in order to be processed at separate parallel servers because of computational requirements or constraints.

Similarly to Propositions 3.2 and 4.1, the Hawkes property is preserved for the splitting process indexed by (k, m), and the aggregated process $A^{(m)}$ is a Hawkes process under certain conditions.

Proposition 5.1. The following properties hold:

  1. (i) $\big(N^{(m)}_{k}\big)_{k,m}$ is an $\mathbb{N}^{d\times n}$ -valued Hawkes process with conditional intensity

    \begin{equation*}\lambda^{(m)}_{k}(t)=p^{(m)}_{k}\lambda_{k}(t)=p^{(m)}_{k}\lambda_{k0}+\sum_{k^{\prime},m'}\int_{0}^{t}\Big(p^{(m)}_{k}H_{kk^{\prime}}(t-s)\Big)N_{k^{\prime}}^{(m')}(ds),\end{equation*}
    where $(\lambda_{k})_{k}$ is the intensity for N in (2.1).
  2. (ii) $\big(A^{(m)}\big)_{m}$ is an n-dimensional Hawkes process if and only if $\tilde{H}^{(m)}\;:\!=\;\sum_{k=1}^{d}p^{(m)}_{k}H_{kk^{\prime}}$ is a function independent of $k^{\prime}\in\mathcal{L}^{(m)}\;:\!=\;\{k, p_{k}^{(m)}>0\}$ , under which the conditional intensity of $\big(A^{(m)}\big)_{m}$ is given by

    \begin{equation*}\lambda^{(m)}_{A}(t)=\Big(\sum_{k=1}^{d}p^{(m)}_{k}\lambda_{k0}\Big)+\sum_{m'=1}^{n}\int_{0}^{t}\tilde{H}^{(m)}(t-s)A^{(m')}(ds).\end{equation*}

Remark 5.1. A d-dimensional Hawkes process is always assumed to be irreducible, that is, $\mathbb{E}\big(N_{k}(\infty)\big)>0$ for every k. The independence condition on $\tilde{H}$ in Proposition 4.1 is required for every $k^{\prime}$. However, in the splitting and aggregating case, $\mathbb{P}\big(N^{(m)}_{k}(\infty)=0\big)=1$ whenever $p_{k}^{(m)}=0$ (that is, $k\notin\mathcal{L}^{(m)}$); the necessity and sufficiency of the condition can be checked by evaluating the intensity at the second occurrence time of $A^{(m)}$ in Proposition 5.1(i), which should remain unchanged for different values of $k^{\prime}$.

For example, let $d=n$ and let $\{m_{k}\}_{k}$ be a permutation of $\{1,2,\cdots,d\}$ with $p^{(m_{k})}_{k}=1$. Then we have from this assumption that $N^{(m)}_{k}(\infty)=0$ for $m\neq m_{k}$ and $A^{(m_{k})}=N_{k}$ for every k. This means that $(A^{(m)})_{m}$ defines a permuted Hawkes process, which simply relabels the components. It is of course Hawkes; however,

\begin{equation*}\sum_{k=1}^{d}p_{k}^{(m_{k_{0}})}H_{kk^{\prime}}=p^{(m_{k_{0}})}_{k_{0}}H_{k_{0}k^{\prime}}=H_{k_{0}k^{\prime}}\end{equation*}

depends on $k^{\prime}$ for every $k_{0}$.

We consider the scaled process indexed by T, and all the variables are marked with additional subscripts T; that is, $N_{T}=\big(N_{k,T}\big)_{k}$ is a Hawkes process whose baseline intensity is $\lambda_{0,T}=(\lambda_{k0,T})_{k}\in{\mathbb R}_{+}^{d}$ , and the kernel matrix function $H\in{\mathbb R}_{+}^{d\times d}$ stays the same. The splitting variables $\{(\xi_{kj,T})_{k}\}_{j}$ have subscripts T and distribution matrix $(p^{(m)}_{k,T})_{k,m}\in{\mathbb R}_{+}^{d\times n}$ , which results in the splitting process $\big(N^{(m)}_{k,T}\big)_{k,m}$ . The average process and the diffusion-scaled process are defined by

\begin{equation*}\bar{N}_{T}(t)=\Big(\frac{1}{T}N^{(m)}_{k,T}(Tt)\Big)_{k,m}\quad \text{and}\quad \hat{N}_{T}(t)=\Big(\hat{N}^{(m)}_{k,T}(t)\Big)_{k,m}=\sqrt{T}\Big(\bar{N}_{T}(t)-\mathbb{E}\big[\bar{N}_{T}(t)\big]\Big).\end{equation*}

Assumption A3. Assume that for some $\lambda_{k0},p^{(m)}_{k}\ge0$ with $\sum_{m=1}^{n}p^{(m)}_{k}=1$ for every k,

\begin{equation*}\lambda_{k0,T}\to \lambda_{k0}\quad \text{and}\quad p^{(m)}_{k,T}\to p^{(m)}_{k}\quad \text{as}\quad T\to\infty.\end{equation*}

We are interested in the FCLT of the aggregating process $(A^{(m)}_{T})_{m}$ , defined by

\begin{equation*}\hat{A}^{(m)}_{T}=\sum_{k=1}^{d}\hat{N}^{(m)}_{k,T}.\end{equation*}

Theorem 5.1. Suppose Assumptions A1 and A3 hold. We have

\begin{equation*}\big(\hat{A}^{(m)}_{T}\big)_{m}\Rightarrow\big(\hat{A}^{(m)}\big)_{m}\quad \text{in}\quad ({\mathbb D}^{n},J_{1}),\end{equation*}

where $(\hat{A}^{(m)})_{m}$ is an n-dimensional Brownian motion with covariance function

\begin{equation*}\begin{split} \textrm{cov}\big(\hat{A}^{(m)}(t),\hat{A}^{(m')}(s)\big)=&\ (t\wedge s)\times\Big(\textrm{diag}\big(p^{\textrm{T}}\cdot\Sigma\cdot e\big)-p^{\textrm{T}}\cdot\Sigma\cdot p\\[5pt]&\quad +p^{\textrm{T}}\cdot(I-\|H\|_{1})^{-1}\cdot\Sigma\cdot(I-\|H\|_{1}^{\textrm{T}})^{-1}\cdot p\Big),\end{split}\end{equation*}

where $\Sigma=\textrm{diag}\big((I-\|H\|_{1})^{-1}\lambda_{0}\big)$ is the diagonal matrix in (2.8).

Proof. Applying [Reference Bacry, Delattre, Hoffmann and Muzy1, Lemma 4] to $\big(N^{(m)}_{k,T}\big)_{k,m}$, we have the representation

(5.1) \begin{equation}\hat{N}^{(m)}_{k,T}(t)=\hat{X}^{(m)}_{k,T}(t)+ p^{(m)}_{k,T}\hat{Y}_{k,T}(t)\end{equation}

where

\begin{equation*}\begin{gathered}\hat{X}^{(m)}_{k,T}(t)=\frac{1}{\sqrt{T}}\Big(N^{(m)}_{k,T}(Tt)-\int_{0}^{Tt}\lambda^{(m)}_{k,T}(s)ds\Big),\\[5pt]\hat{X}_{T}(t)=\big(\hat{X}_{k,T}(t)\big)_{k}=\sum_{m=1}^{n}\big(\hat{X}^{(m)}_{k,T}(t)\big)_{k}\quad \text{and}\quad \hat{Y}_{k,T}(t)=\int_{0}^{t}\textrm{row}_{k}\big(\varphi(T(t-s))\big)\hat{X}_{T}(s)ds,\end{gathered}\end{equation*}

and where $\varphi$ is the matrix function defined in (2.4). Moreover, similarly to (3.15) and (3.16),

\begin{equation*}\begin{split}\mathbb{E}\big[\bar{N}^{(m)}_{k,T}(t)\big]=&\ p^{(m)}_{k,T}\cdot \textrm{ent}_{k}\Big(\int_{0}^{t}\big(I+\varphi*1(Tu)\big)du\cdot\lambda_{0,T}\Big),\\[5pt]\textrm{cov}\Big(\hat{X}^{(m)}_{k,T}(t),\hat{X}^{(m')}_{k^{\prime},T}(s)\Big)=&\ \delta_{kk^{\prime}}\delta_{mm'}p^{(m)}_{k,T}\cdot \textrm{ent}_{k}\Big(\int_{0}^{t\wedge s}\big(I+\varphi*1(Tu)\big)du\cdot\lambda_{0,T}\Big).\end{split}\end{equation*}

Therefore, we have

\begin{equation*}\Big(\big(\hat{X}^{(m)}_{k,T}\big)_{k,m},\big(\hat{Y}_{k,T}\big)_{k}\Big)\Rightarrow\Big(\big(\hat{X}^{(m)}_{k}\big)_{k,m},\big(\hat{Y}_{k}\big)_{k}\Big),\end{equation*}

where, for some standard Brownian motion $(W^{(m)}_{k})_{k,m}$ ,

\begin{equation*}\hat{X}^{(m)}_{k}=\sqrt{p^{(m)}_{k}\Sigma_{kk}}\, W^{(m)}_{k},\quad \hat{Y}_{k}=\textrm{row}_{k}\big(\|\varphi\|_{1}\big)\sum_{m=1}^{n}\Big(\hat{X}^{(m)}_{k^{\prime}}\Big)_{k^{\prime}},\quad \text{and}\quad \hat{N}^{(m)}_{k}=\hat{X}^{(m)}_{k}+p^{(m)}_{k}\hat{Y}_{k}.\end{equation*}

A direct calculation shows that

\begin{equation*}\begin{split}\textrm{cov}\Big(\hat{N}^{(m)}_{k}(t),&\ \hat{N}^{(m')}_{k^{\prime}}(s)\Big)=(t\wedge s)\times\Big(\delta_{kk^{\prime}}\Sigma_{kk}p^{(m)}_{k}\big(\delta_{mm'}-p^{(m')}_{k^{\prime}}\big)\\[5pt]&\ + p^{(m)}_{k}p^{(m')}_{k^{\prime}}\textrm{ent}_{kk^{\prime}}\big((I-\|H\|_{1})^{-1}\cdot\Sigma\cdot(I-\|H\|_{1}^{\textrm{T}})^{-1}\big)\Big).\end{split}\end{equation*}

Summing over k and $k^{\prime}$ gives the covariance function for $\big(\hat{A}^{(m)}\big)_{m}$.
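The final summation step can be verified numerically: summing the per-component covariances over k and $k^{\prime}$ reproduces the matrix expression of Theorem 5.1. All inputs below ($\|H\|_{1}$, $\lambda_{0}$, p) are hypothetical choices satisfying Assumption A1:

```python
import numpy as np

# Hypothetical inputs: d = 3 components, n = 2 sub-processes.
K = np.array([[0.10, 0.20, 0.05],
              [0.20, 0.10, 0.10],
              [0.15, 0.10, 0.20]])   # ||H||_1, spectral radius < 1
lam0 = np.array([1.0, 2.0, 0.5])
p = np.array([[0.3, 0.7],
              [0.5, 0.5],
              [0.2, 0.8]])
d, n = p.shape

I = np.eye(d)
Sigma = np.diag(np.linalg.solve(I - K, lam0))
M = np.linalg.solve(I - K, Sigma @ np.linalg.inv(I - K.T))  # (I-K)^{-1} Sigma (I-K^T)^{-1}

# Entrywise sum of cov(N_k^{(m)}, N_{k'}^{(m')}) over k, k' (per unit time).
C_sum = np.zeros((n, n))
for m in range(n):
    for mp in range(n):
        for k in range(d):
            for kp in range(d):
                C_sum[m, mp] += ((k == kp) * Sigma[k, k] * p[k, m]
                                 * ((m == mp) - p[kp, mp])
                                 + p[k, m] * p[kp, mp] * M[k, kp])

# Matrix form from Theorem 5.1.
C_mat = np.diag(p.T @ Sigma @ np.ones(d)) - p.T @ Sigma @ p + p.T @ M @ p
```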

Remark 5.2. From the proof, we have a second representation for $\big(\hat{N}^{(m)}_{k}\big)_{k,m}$ :

(5.2) \begin{equation}\hat{N}^{(m)}_{k}= \Sigma_{kk}^{1/2}\hat{S}^{(m)}_{k}+p^{(m)}_{k}\textrm{row}_{k}\big((I-\|H\|_{1})^{-1}\big)\Sigma^{1/2}W,\end{equation}

where $\big(W^{(m)}_{k}\big)_{k,m}$ is the standard Brownian motion in the proof, and

\begin{equation*}W_{k}=\sum_{m=1}^{n}\sqrt{p^{(m)}_{k}}W^{(m)}_{k},\quad \hat{S}^{(m)}_{k}=\sqrt{p^{(m)}_{k}}W^{(m)}_{k}-p^{(m)}_{k}W_{k},\quad \text{and}\quad W=(W_{k})_{k}\in{\mathbb R}^{d}.\end{equation*}

Then $\big(W_{k}\big)_{k}$ is a d-dimensional standard Brownian motion; $\big\{(\hat{S}^{(m)}_{k})_{m}, \,k=1,2,\cdots,d\big\}$ is a sequence of Brownian motions, independent of $(W_{k})_{k}$ and independent across k; and

\begin{equation*}\textrm{cov}\big(\hat{S}^{(m)}_{k}(t),\hat{S}^{(m')}_{k}(s)\big)=(t\wedge s)\times p^{(m)}_{k}\big(\delta_{mm'}-p^{(m')}_{k}\big).\end{equation*}
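The stated covariance of $\big(\hat{S}^{(m)}_{k}\big)_{m}$, together with its independence from $W_{k}$, can be checked directly from the defining linear combinations; the probability vector below is a hypothetical choice:

```python
import numpy as np

# Hypothetical splitting probabilities for a fixed component k.
pk = np.array([0.2, 0.3, 0.5])           # (p_k^{(m)})_m, sums to 1
r = np.sqrt(pk)

# S^{(m)} = sqrt(p^{(m)}) W^{(m)} - p^{(m)} W_k, with W_k = sum_m sqrt(p^{(m)}) W^{(m)};
# i.e. S = A g for a standard normal vector g, with the matrix A below.
A = np.diag(r) - np.outer(pk, r)

cov_S = A @ A.T                           # covariance of S at t = 1
target = np.diag(pk) - np.outer(pk, pk)   # p^{(m)} (delta_{mm'} - p^{(m')})

# W_k = r . g has unit variance, and cov(S^{(m)}, W_k) = A r should vanish.
cov_SW = A @ r
```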

Under the conditions of Proposition 5.1(ii), $(A^{(m)}_{T})_{m}$ becomes a multivariate Hawkes process. Thus, [Reference Bacry, Delattre, Hoffmann and Muzy1, Theorem 2] can be applied directly and we have a second representation for its covariances.

Proposition 5.2. Suppose $\tilde{H}^{(m)}_{T}=\sum_{k=1}^{d}p^{(m)}_{k,T}H_{kk^{\prime}}$ is independent of $k^{\prime}$, and Assumptions A1 and A3 hold. Then the covariance function for $\hat{A}=\big(\hat{A}^{(m)}\big)_{m}$ in Theorem 5.1 can also be represented as

(5.3) \begin{equation}\big(I-\tilde{\Xi}\big)^{-1}\cdot\textrm{diag}\Big(\big(I-\tilde{\Xi}\big)^{-1}\cdot p^{\textrm{T}}\cdot \lambda_{0}\Big)\cdot\big(I-\tilde{\Xi}^{\textrm{T}}\big)^{-1},\end{equation}

where $\tilde{\Xi}=p^{\textrm{T}}\cdot\textrm{col}_{1}\big(\|H\|_{1}\big)\cdot e_{n}^{\textrm{T}}$ , and $e_{n}$ is the n-dimensional column vector of ones.

Proof. By assumption, $(A^{(m)}_{T})_{m}$ is a Hawkes process with conditional intensity

\begin{equation*}\lambda^{(m)}_{A,T}(t)=\Big(\sum_{k=1}^{d}p^{(m)}_{k,T}\lambda_{k0,T}\Big)+\sum_{m'=1}^{n}\int_{0}^{t}\tilde{H}^{(m)}_{T}(t-s)A^{(m')}_{T}(ds).\end{equation*}

The representation for the covariance function in (5.3) follows from (2.7) by [Reference Bacry, Delattre, Hoffmann and Muzy1, Theorem 2]. In the following, we show that (5.3) coincides with the covariance in Theorem 5.1 in this special case.

Defining $\displaystyle \nu=p^{\textrm{T}}\cdot \textrm{col}_{1}\big(\|H\|_{1}\big)$ and $c_{0}=\nu^{\textrm{T}}\cdot e_{n}$ , we have

(5.4) \begin{equation}\tilde{\Xi}=\nu\cdot e_{n}^{\textrm{T}},\quad p^{\textrm{T}}\cdot \|H\|_{1}=\nu\cdot e_{d}^{\textrm{T}},\quad \text{and}\quad e_{d}^{\textrm{T}}\cdot \|H\|_{1}=c_{0}\cdot e_{d}^{\textrm{T}}\end{equation}

by the fact that $p\cdot e_{n}=e_{d}$. Thus, $e_{d}^{\textrm{T}}$ is a left eigenvector of $\|H\|_{1}$ with eigenvalue $c_{0}$, and $c_{0}\in(0,1)$ by Assumption A1. We further have, for $\tilde{\Xi}$,

\begin{equation*}\tilde{\Xi}^{j}=\nu\cdot e_{n}^{\textrm{T}}\big(e_{n}^{\textrm{T}}\cdot \nu\big)^{j-1}=c_{0}^{j-1}\cdot\tilde{\Xi}\quad \text{and}\quad \big(I-\tilde{\Xi}\big)^{-1}=I+ \frac{1}{1-c_{0}}\nu\cdot e_{n}^{\textrm{T}},\end{equation*}

and

\begin{equation*}\big(I-\tilde{\Xi}\big)^{-1}\cdot p^{\textrm{T}}\cdot\lambda_{0}=p^{\textrm{T}}\cdot\lambda_{0}+\frac{1}{1-c_{0}}\nu\cdot e_{n}^{\textrm{T}}\cdot p^{\textrm{T}}\cdot\lambda_{0}=p^{\textrm{T}}\cdot\lambda_{0}+\frac{e_{d}^{\textrm{T}}\cdot \lambda_{0}}{1-c_{0}}\nu.\end{equation*}

Therefore, we have

(5.5) \begin{align}& \big(I-\tilde{\Xi}\big)^{-1}\cdot\textrm{diag}\Big(\big(I-\tilde{\Xi}\big)^{-1}\cdot p^{\textrm{T}}\cdot \lambda_{0}\Big)\cdot\big(I-\tilde{\Xi}^{\textrm{T}}\big)^{-1} \nonumber \\[5pt] & \quad = \textrm{diag}\Big(p^{\textrm{T}}\cdot\lambda_{0}+\frac{e_{d}^{\textrm{T}}\cdot \lambda_{0}}{1-c_{0}}\nu\Big)+ \frac{1}{(1-c_{0})^{2}}\nu\cdot e_{n}^{\textrm{T}}\cdot \Big(p^{\textrm{T}}\cdot\lambda_{0}\cdot\nu^{\textrm{T}}+\frac{e_{d}^{\textrm{T}}\cdot \lambda_{0}}{1-c_{0}}\nu\cdot\nu^{\textrm{T}}\Big) \nonumber \\[5pt]&\qquad +\frac{1}{1-c_{0}}\Big(p^{\textrm{T}}\cdot\lambda_{0}\cdot \nu^{\textrm{T}}+ \nu\cdot\lambda_{0}^{\textrm{T}}\cdot p+ 2\frac{e_{d}^{\textrm{T}}\cdot \lambda_{0}}{1-c_{0}}\nu\cdot\nu^{\textrm{T}}\Big) \nonumber \\[5pt]& \quad = \textrm{diag}\Big(p^{\textrm{T}}\cdot\lambda_{0}+\frac{e_{d}^{\textrm{T}}\cdot \lambda_{0}}{1-c_{0}}\nu\Big)+ \frac{1}{1-c_{0}}\Big(p^{\textrm{T}}\cdot\lambda_{0}\cdot\nu^{\textrm{T}}+\nu\cdot\lambda_{0}^{\textrm{T}}\cdot p\Big)+\big(\nu\cdot \nu^{\textrm{T}}\big)\Big(\frac{2e_{d}^{\textrm{T}}\cdot\lambda_{0}}{(1-c_{0})^{2}} +\frac{e_{d}^{\textrm{T}}\cdot\lambda_{0}}{(1-c_{0})^{3}}\Big),\end{align}

where we make use of the fact that $\textrm{diag}(u)\cdot e=u$ for the vector u in the calculation.

On the other hand, for the covariance function in Theorem 5.1 in this special case, we have

(5.6) \begin{align}p^{\textrm{T}}\cdot\big(I-\|H\|_{1}\big)^{-1}& =p^{\textrm{T}}+ \sum_{j\ge1} p^{\textrm{T}}\cdot \|H\|_{1}^{j}=p^{\textrm{T}}+ \sum_{j\ge1} \nu\cdot e_{d}^{\textrm{T}}\cdot \|H\|_{1}^{j-1} \nonumber \\[5pt]& =p^{\textrm{T}}+ \frac{1}{1-c_{0}}\, \nu\cdot e_{d}^{\textrm{T}},\end{align}

where the fact from (5.4) is used. Thus, by the definition $\Sigma\cdot e_{d}=(I-\|H\|_{1})^{-1}\cdot\lambda_{0}$ , we have

(5.7) \begin{equation}\begin{split}p^{\textrm{T}}\cdot \Sigma\cdot e_{d}=&\ p^{\textrm{T}}\cdot\big(I-\|H\|_{1}\big)^{-1}\cdot\lambda_{0}=p^{\textrm{T}}\cdot\lambda_{0}+\nu\frac{e_{d}^{\textrm{T}}\cdot\lambda_{0}}{1-c_{0}} \,,\\[5pt]p^{\textrm{T}}\cdot \Sigma\cdot e_{d}\cdot\nu^{\textrm{T}}=&\ p^{\textrm{T}}\cdot\lambda_{0}\cdot\nu^{\textrm{T}}+ \frac{e_{d}^{\textrm{T}}\cdot\lambda_{0}}{1-c_{0}} \big(\nu\cdot\nu^{\textrm{T}}\big) \,,\\[5pt]e_{d}^{\textrm{T}}\cdot\Sigma\cdot e_{d}=&\ e_{d}^{\textrm{T}}\cdot\big(I-\|H\|_{1}\big)^{-1}\cdot\lambda_{0}=\frac{e_{d}^{\textrm{T}}\cdot\lambda_{0}}{1-c_{0}}\,,\end{split}\end{equation}

where the fact that $e_{d}^{\textrm{T}}\cdot\|H\|_{1}=c_{0}\cdot e_{d}^{\textrm{T}}$ is used.

Therefore, by applying the identities in (5.6) and (5.7), we obtain

\begin{equation*}\begin{split}&\ \textrm{diag}\big(p^{\textrm{T}}\cdot\Sigma\cdot e\big)+p^{\textrm{T}}\cdot(I-\|H\|_{1})^{-1}\cdot\Sigma\cdot(I-\|H\|_{1}^{\textrm{T}})^{-1}\cdot p-p^{\textrm{T}}\cdot\Sigma\cdot p\\[5pt]=&\ \textrm{diag}\big(p^{\textrm{T}}\cdot\Sigma\cdot e\big)+ \frac{1}{1-c_{0}}\Big(p^{\textrm{T}}\cdot\Sigma\cdot e_{d}\cdot\nu^{\textrm{T}}+ \nu\cdot e_{d}^{\textrm{T}}\cdot \Sigma\cdot p\Big) \\[5pt] & \qquad + \frac{1}{(1-c_{0})^{2}}\Big(\nu\cdot e_{d}^{\textrm{T}}\cdot\Sigma\cdot e_{d}\cdot \nu^{\textrm{T}}\Big)\\[5pt]=&\ \textrm{diag}\Big(p^{\textrm{T}}\cdot\lambda_{0}+\nu\frac{e_{d}^{\textrm{T}}\cdot\lambda_{0}}{1-c_{0}}\Big)+ \frac{1}{1-c_{0}}\Big(p^{\textrm{T}}\cdot \lambda_{0}\cdot \nu^{\textrm{T}}+ \nu\cdot \lambda_{0}^{\textrm{T}}\cdot p+ \nu\cdot \nu^{\textrm{T}} \frac{2e_{d}^{\textrm{T}}\cdot\lambda_{0}}{(1-c_{0})}\Big) \\[5pt] & \qquad +\nu\cdot \nu^{\textrm{T}}\Big(\frac{e_{d}^{\textrm{T}}\cdot\lambda_{0}}{(1-c_{0})^{3}}\Big).\end{split}\end{equation*}

Comparing with the expression in (5.5), we conclude the equivalence.
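The equivalence established in this proof can also be confirmed numerically. In the sketch below, $\|H\|_{1}$ is built with identical columns, which makes $\sum_{k}p^{(m)}_{k}H_{kk^{\prime}}$ independent of $k^{\prime}$ so that the condition of Proposition 5.2 holds; p and $\lambda_{0}$ are hypothetical choices:

```python
import numpy as np

# Hypothetical inputs: identical columns of ||H||_1 make the condition hold.
u = np.array([0.2, 0.1, 0.3])
K = np.outer(u, np.ones(3))          # ||H||_1 = u e_d^T, spectral radius 0.6 < 1
lam0 = np.array([1.0, 2.0, 0.5])
p = np.array([[0.3, 0.7],
              [0.5, 0.5],
              [0.2, 0.8]])
d, n = p.shape
e_d, e_n = np.ones(d), np.ones(n)

I_d, I_n = np.eye(d), np.eye(n)
Sigma = np.diag(np.linalg.solve(I_d - K, lam0))
M = np.linalg.solve(I_d - K, Sigma @ np.linalg.inv(I_d - K.T))

# Covariance (per unit time) from Theorem 5.1.
C1 = np.diag(p.T @ Sigma @ e_d) - p.T @ Sigma @ p + p.T @ M @ p

# Covariance (5.3) for the n-dimensional Hawkes process (A^{(m)})_m.
nu = p.T @ K[:, 0]                   # nu = p^T col_1(||H||_1)
Xi = np.outer(nu, e_n)               # Xi~ = nu e_n^T
R = np.linalg.inv(I_n - Xi)
C2 = R @ np.diag(R @ (p.T @ lam0)) @ R.T
```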

Acknowledgements

We wish to thank the anonymous referees for their very helpful comments.

Funding information

Bo Li is supported by the National Natural Science Foundation of China (#11871289). Guodong Pang is partly supported by the National Science Foundation of the USA (DMS-2216765).

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Bacry, E., Delattre, S., Hoffmann, M. and Muzy, J. F. (2013). Some limit theorems for Hawkes processes and application to financial statistics. Stoch. Process. Appl. 123, 2475–2499.
Bacry, E., Mastromatteo, I. and Muzy, J.-F. (2015). Hawkes processes in finance. Market Microstruct. Liquidity 01, article no. 1550005.
Billingsley, P. (1999). Convergence of Probability Measures, 2nd edn. John Wiley, New York.
Chen, X. (2021). Perfect sampling of Hawkes processes and queues with Hawkes arrivals. Stoch. Systems 11, 264–283.
Daley, D. J. and Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes, Vol. I, Elementary Theory and Methods, 2nd edn. Springer, New York.
Daw, A. and Pender, J. (2018). Queues driven by Hawkes processes. Stoch. Systems 8, 192–229.
Gao, X. and Zhu, L. (2018). Functional central limit theorems for stationary Hawkes processes and application to infinite-server queues. Queueing Systems 90, 161–206.
Gao, F. and Zhu, L. (2018). Some asymptotic results for nonlinear Hawkes processes. Stoch. Process. Appl. 128, 4051–4077.
Hawkes, A. G. (1971). Point spectra of some mutually exciting point processes. J. R. Statist. Soc. B [Statist. Methodology] 33, 438–443.
Hawkes, A. G. (1971). Spectra of some self-exciting and mutually exciting point processes. Biometrika 58, 83–90.
Hawkes, A. G. (2017). Hawkes processes and their applications to finance: a review. Quant. Finance 18, 193–198.
Hawkes, A. G. and Oakes, D. (1974). A cluster process representation of a self-exciting process. J. Appl. Prob. 11, 493–503.
Karabash, D. and Zhu, L. (2015). Limit theorems for marked Hawkes processes with application to a risk model. Stoch. Models 31, 433–451.
Koops, D. T., Saxena, M., Boxma, O. J. and Mandjes, M. (2018). Infinite-server queues with Hawkes input. J. Appl. Prob. 55, 920–943.
Li, B. and Pang, G. (2022). Functional limit theorems for nonstationary marked Hawkes processes in the high intensity regime. Stoch. Process. Appl. 143, 285–339.
Oakes, D. (1975). The Markovian self-exciting process. J. Appl. Prob. 12, 69–77.
Sriram, K. and Whitt, W. (1985). Characterizing superposition arrival processes and the performance of multiplexers for voice and data. In Proc. IEEE Global Telecommunications Conference, New Orleans, Vol. 2, Institute of Electrical and Electronics Engineers, Piscataway, NJ, pp. 25.4.1–7.
Sriram, K. and Whitt, W. (1986). Characterizing superposition arrival processes in packet multiplexers for voice and data. IEEE J. Sel. Areas Commun. 4, 833–846.
Whitt, W. (1972). Limits for the superposition of m-dimensional point processes. J. Appl. Prob. 9, 462–465.
Whitt, W. (1985). Queues with superposition arrival processes in heavy traffic. Stoch. Process. Appl. 21, 81–91.
Whitt, W. (2002). Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and Their Application to Queues. Springer, New York.