
Characterization theorems for pseudo cross-variograms

Published online by Cambridge University Press:  24 April 2023

Christopher Dörr*
Affiliation:
University of Mannheim
Martin Schlather*
Affiliation:
University of Mannheim
*Postal address: Institute of Mathematics, University of Mannheim, 68131 Mannheim, Germany.

Abstract

Pseudo cross-variograms appear naturally in the context of multivariate Brown–Resnick processes, and are a useful tool for analysis and prediction of multivariate random fields. We give a necessary and sufficient criterion for a matrix-valued function to be a pseudo cross-variogram, and further provide a Schoenberg-type result connecting pseudo cross-variograms and multivariate correlation functions. By means of these characterizations, we provide extensions of the popular univariate space–time covariance model of Gneiting to the multivariate case.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

With increasing availability of multivariate data and considerable improvements in computational feasibility, multivariate random fields have become a significant part of geostatistical modelling in recent years.

These random fields are usually assumed to be either second-order stationary or intrinsically stationary. An m-variate random field

\[\{\boldsymbol{Z}(\boldsymbol{x})=(Z_1(\boldsymbol{x}),\ldots,Z_m(\boldsymbol{x}))^\top, \boldsymbol{x}\in{{\mathbb{R}}}^d \}\]

is called second-order stationary if it has a constant mean and if its auto- and cross-covariances

\[C_{ij}(\boldsymbol{h}) = {{{ Cov}}}(Z_i(\boldsymbol{x}+\boldsymbol{h}), Z_j(\boldsymbol{x})), \quad i,j=1,\ldots,m,\]

exist and are functions of the lag $\boldsymbol{h}$ only. It is called intrinsically stationary if the increment process $\{\boldsymbol{Z}(\boldsymbol{x}+\boldsymbol{h})-\boldsymbol{Z}(\boldsymbol{x}), \boldsymbol{x} \in {{\mathbb{R}}}^d\}$ is second-order stationary for all $\boldsymbol{h} \in {{\mathbb{R}}}^d$. In this case, the function ${\boldsymbol {\tilde\gamma}}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$,

\[\tilde\gamma_{ij}(\boldsymbol{h})= \dfrac12{{{ Cov}}}(Z_i(\boldsymbol{x}+\boldsymbol{h})-Z_i(\boldsymbol{x}),Z_j(\boldsymbol{x}+\boldsymbol{h})-Z_j(\boldsymbol{x})), \quad i,j=1,\ldots,m,\]

is well-defined and is called a cross-variogram [19]. If we additionally assume that $Z_i(\boldsymbol{x}+\boldsymbol{h})-Z_j(\boldsymbol{x})$ is square-integrable and that ${Var}(Z_i(\boldsymbol{x} + \boldsymbol{h}) - Z_j(\boldsymbol{x}))$ does not depend on $\boldsymbol{x}$ for all $\boldsymbol{x}, \boldsymbol{h}\in {{\mathbb{R}}}^d$, $i,j=1,\ldots,m$, then we can also define the so-called pseudo cross-variogram ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ [20] via

\begin{equation*}\gamma_{ij}(\boldsymbol{h})= \dfrac12{Var}(Z_i(\boldsymbol{x}+\boldsymbol{h})-Z_j(\boldsymbol{x})), \quad \boldsymbol{x},\boldsymbol{h} \in {{\mathbb{R}}}^d, \quad i,j=1,\ldots,m.\end{equation*}

Obviously, the diagonal entries of pseudo cross-variograms and cross-variograms coincide, and are given by the univariate variograms $\gamma_{ii}(\boldsymbol{h})= \frac12{Var}(Z_i(\boldsymbol{x}+\boldsymbol{h})-Z_i(\boldsymbol{x}))$, $i=1,\ldots,m$ [15, 16].
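To fix ideas, the following Python sketch implements the classical moment estimator of a pseudo cross-variogram on a regular one-dimensional grid. The toy field, the shifts, and the function name are our own illustrative choices; they do not stem from the article or its references.

```python
import numpy as np

def pseudo_cross_variogram(Z, max_lag):
    """Classical moment estimator gamma_ij(h) ~ mean((Z_i(x+h) - Z_j(x))**2) / 2
    on a regular 1-d grid; Z has shape (m, n), and jointly centred components
    are assumed, as in the definition above."""
    m, n = Z.shape
    gamma = np.zeros((max_lag + 1, m, m))
    for h in range(max_lag + 1):
        for i in range(m):
            for j in range(m):
                diff = Z[i, h:] - Z[j, :n - h]        # Z_i(x+h) - Z_j(x)
                gamma[h, i, j] = 0.5 * np.mean(diff ** 2)
    return gamma

# Toy bivariate field Z_i(x) = W(x + s_i): one random walk W observed with
# componentwise shifts, so that gamma_ij(h) = 0.5 * |h + s_i - s_j| exactly.
rng = np.random.default_rng(0)
W = np.cumsum(rng.normal(size=10_000))
shifts = [0, 3]
Z = np.array([W[s : s + 9_000] for s in shifts])
print(pseudo_cross_variogram(Z, max_lag=2)[1])   # compare with 0.5*|1 + s_i - s_j|
```

Note that the estimate at lag 1 is asymmetric, $\hat\gamma_{12}(1) \neq \hat\gamma_{21}(1)$, in line with the discussion below.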

Both cross- and pseudo cross-variograms are commonly used in geostatistics to capture the degree of spatial dependence [5]. There is some controversy as to which one to use, since both have their benefits and drawbacks. The cross-variogram, on the one hand, is well-defined under weaker assumptions, but in practical applications its estimation requires measurements of the quantities of interest at identical locations [5]. Moreover, it only reproduces the symmetric part of a cross-covariance function of a stationary random field; see e.g. [30]. The pseudo cross-variogram, on the other hand, can capture asymmetry, and provides optimal co-kriging predictors without imposing any symmetry assumption on the cross-dependence structure [6, 29], but it is difficult to interpret in practice, since it involves differences of generally distinct physical quantities; cf. the account in [6].

From a theoretical perspective, pseudo cross-variograms are interesting objects, since they are not only found in multivariate geostatistics but also appear naturally in extreme value theory in the context of multivariate Brown–Resnick processes [8, 21]. In contrast to cross-variograms, however, pseudo cross-variograms are not yet sufficiently well understood. So far, elementary properties are known [7], such as their relation to cross-variograms and cross-covariance functions [20, 22], their applicability to co-kriging [6, 20, 29], and their limiting behaviour [21, 22], but a concise necessary and sufficient criterion for a matrix-valued function to be a pseudo cross-variogram is missing. This lack of an equivalent characterization makes it immensely difficult to show the validity of a function as a pseudo cross-variogram (cf. e.g. [13, p. 239]), unless it can be traced back to an explicit construction of a random field as in [5] or [21].

Equivalent characterizations are well known for univariate variograms (see e.g. [11], [15]), and involve the notion of conditional negative definiteness. These characterizations are intimately connected to a result which can mainly be attributed to Schoenberg [2, 28], implying that a function $\gamma\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}$ is a univariate variogram if and only if $\exp({-}t\gamma)$ is a correlation function for all $t >0$ [11]. Such a characterization of $\gamma$ in the multivariate case, however, seems to be untreated in the geostatistical literature. For cross-variograms, there is a result for the ‘if’ part [14, Theorem 10]. The ‘only if’ part is false in general; see e.g. [27, Remark 2]. See also [7].

The aim of this article is to fill these gaps. The key ingredient is to apply a stronger notion of conditional negative definiteness for matrix-valued functions than the one predominant in the geostatistical literature. We discuss this notion in Section 2, and provide a first characterization of pseudo cross-variograms in these terms. This characterization leads to a Schoenberg-type result in terms of pseudo cross-variograms in Section 3, thus making a case for proponents of pseudo cross-variograms, at least from a theoretical standpoint. In Section 4 we apply this characterization and illustrate its power by extending versions of the very popular space–time covariance model of Gneiting [10] to the multivariate case.

Our presentation here is carried out in terms of pseudo cross-variograms in their original stationary form as introduced above. It is important to note that, with the exception of Corollary 2, all results presented here involving pseudo cross-variograms or conditionally negative definite matrix-valued functions as defined below also hold for their respective non-stationary versions or kernel-based forms, by straightforward adaptations. A non-stationary version of Corollary 2 is also available. The proofs follow the same lines.

2. Conditional negative definiteness for matrix-valued functions

Real-valued conditionally negative definite functions are essential to characterizing univariate variograms. A function $\gamma\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}$ is a univariate variogram if and only if $\gamma(\textbf{0})=0$ and $\gamma$ is conditionally negative definite, that is, $\gamma$ is symmetric and for all $n \ge 2$, $\boldsymbol{x}_{\textbf{1}},\ldots,\boldsymbol{x}_{\boldsymbol{n}} \in {{\mathbb{R}}}^d$, $a_1,\ldots,a_n \in {{\mathbb{R}}}$ such that $\sum_{k=1}^n a_k=0$, the inequality $\sum_{i=1}^n\sum_{j=1}^n a_i \gamma(\boldsymbol{x}_{\boldsymbol{i}}-\boldsymbol{x}_{\boldsymbol{j}})a_j \le 0$ holds [15]. An extended notion of conditional negative definiteness for matrix-valued functions is part of a characterization of cross-variograms. A function ${\boldsymbol {\tilde\gamma}}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ is a cross-variogram if and only if ${\boldsymbol {\tilde\gamma}}(\textbf{0}) =\textbf{0}$, ${\boldsymbol {\tilde\gamma}}(\boldsymbol{h})={\boldsymbol {\tilde\gamma}}({-}\boldsymbol{h})={\boldsymbol {\tilde\gamma}}(\boldsymbol{h})^\top$, and

(1) \begin{equation} \sum_{i=1}^n\sum_{j=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top {\boldsymbol {\tilde\gamma}}(\boldsymbol{x}_{\boldsymbol{i}}-\boldsymbol{x}_{\boldsymbol{j}})\boldsymbol{a}_{\boldsymbol{j}} \le 0\end{equation}

for $n \ge 2$, $\boldsymbol{x}_{\textbf{1}},\ldots,\boldsymbol{x}_{\boldsymbol{n}} \in {{\mathbb{R}}}^d$, $\boldsymbol{a}_{\textbf{1}},\ldots,\boldsymbol{a}_{\boldsymbol{n}} \in {{\mathbb{R}}}^m$ such that $\sum_{k=1}^n \boldsymbol{a}_{\boldsymbol{k}} =\textbf{0}$ [14]. A function satisfying condition (1) is called an almost negative definite matrix-valued function in [31, p. 40].

A pseudo cross-variogram ${\boldsymbol \gamma}$ has similar, but only necessary, properties; see [7]. It holds that $\gamma_{ii}(\textbf{0})=0$ and $\gamma_{ij}(\boldsymbol{h})=\gamma_{ji}({-}\boldsymbol{h})$, $i,j=1,\ldots,m$. Additionally, a pseudo cross-variogram is an almost negative definite matrix-valued function as well, but inequality (1), loosely speaking, cannot enforce non-negativity on the secondary diagonals. We therefore consider the following stronger notion of conditional negative definiteness; see [9].

Definition 1. A function ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ is called conditionally negative definite if

(2a) \begin{align}\gamma_{ij}(\boldsymbol{h})=\gamma_{ji}({-}\boldsymbol{h}), \quad i,j=1,\ldots,m,\end{align}
(2b) \begin{align}\sum_{i=1}^n\sum_{j=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}}-\boldsymbol{x}_{\boldsymbol{j}})\boldsymbol{a}_{\boldsymbol{j}} \le 0, \end{align}

for all $n \in \mathbb{N}$ , $\boldsymbol{x}_{\textbf{1}},\ldots,\boldsymbol{x}_{\boldsymbol{n}} \in {{\mathbb{R}}}^d$ , $\boldsymbol{a}_{\textbf{1}},\ldots,\boldsymbol{a}_{\boldsymbol{n}} \in {{\mathbb{R}}}^m$ such that

\[\textbf{1}_{\boldsymbol{m}}^\top\sum_{k=1}^n \boldsymbol{a}_{\boldsymbol{k}} =0\]

with $\textbf{1}_{\boldsymbol{m}}\,:\!=\, (1,\ldots, 1)^\top \in {{\mathbb{R}}}^m$ .

Obviously, the set of conditionally negative definite matrix-valued functions is a convex cone which is closed under integration and under pointwise limits, whenever these exist. In the univariate case, the concepts of conditionally and almost negative definite functions coincide, reproducing the traditional notion of real-valued conditionally negative definite functions. The main difference between the two concepts is the broader spectrum of vectors for which inequality (2b) has to hold: only the sum of all components of the test vectors has to vanish, instead of each component of their sum. This modification in particular includes sets of linearly independent vectors in the pool of admissible test vector families, resulting in more restrictive conditions on the secondary diagonals. Indeed, choosing $n=2$, $\boldsymbol{x}_{\textbf{1}}=\boldsymbol{h} \in {{\mathbb{R}}}^d$, $\boldsymbol{x}_{\textbf{2}}=\textbf{0}$, and $\boldsymbol{a}_{\textbf{1}}=\boldsymbol{e}_{\boldsymbol{i}}$, $\boldsymbol{a}_{\textbf{2}}=-\boldsymbol{e}_{\boldsymbol{j}}$ in Definition 1, with $\{\boldsymbol{e}_{\textbf{1}},\ldots,\boldsymbol{e}_{\boldsymbol{m}}\}$ denoting the canonical basis of ${{\mathbb{R}}}^m$, we obtain $\gamma_{ij}(\boldsymbol{h}) \ge 0$ for a conditionally negative definite function ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ with $\gamma_{ii}(\textbf{0}) = 0$, $i,j=1,\ldots,m$, matching the non-negativity of a pseudo cross-variogram. In fact, the latter condition on the main diagonal and the conditional negative definiteness property are sufficient to characterize pseudo cross-variograms.
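The defining inequality (2b) is easily checked numerically. The following sketch is our own illustration: it uses the pseudo cross-variogram $\gamma_{ij}(h) = \frac12\lvert h + s_i - s_j\rvert$ of the shifted Brownian field $Z_i(x) = W(x + s_i)$ and evaluates the quadratic form for random admissible test vectors.

```python
import numpy as np

# gamma_ij(h) = 0.5*|h + s_i - s_j| is the pseudo cross-variogram of the
# shifted field Z_i(x) = W(x + s_i), W standard Brownian motion, and is
# asymmetric in h whenever s_i != s_j.
s = np.array([0.0, 0.7])
m = len(s)
Gamma = lambda h: 0.5 * np.abs(h + s[:, None] - s[None, :])   # m x m matrix

rng = np.random.default_rng(1)
n = 6
x = rng.normal(size=n)          # locations x_1, ..., x_n in R
a = rng.normal(size=(n, m))     # test vectors a_1, ..., a_n
a[0, 0] -= a.sum()              # enforce 1_m^T sum_k a_k = 0

quad = sum(a[i] @ Gamma(x[i] - x[j]) @ a[j] for i in range(n) for j in range(n))
print(quad <= 1e-10)            # True: inequality (2b) holds
```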

Theorem 1. Let ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ . Then there exists a centred Gaussian random field $\boldsymbol{Z}$ on ${{\mathbb{R}}}^d$ with pseudo cross-variogram ${\boldsymbol \gamma}$ if and only if $\gamma_{ii}(\textbf{0})=0$ , $i=1,\ldots,m$ , and ${\boldsymbol \gamma}$ is conditionally negative definite.

Proof. The proof is analogous to the univariate one in [15]. Let $\boldsymbol{Z}$ be an m-variate random field with pseudo cross-variogram ${\boldsymbol \gamma}$. Obviously, $\gamma_{ii}(\textbf{0}) = 0$ and $\gamma_{ij}(\boldsymbol{h})=\gamma_{ji}({-}\boldsymbol{h})$ for all $\boldsymbol{h} \in {{\mathbb{R}}}^d$, $i,j=1,\ldots, m$. Define an m-variate random field $\tilde{\boldsymbol{Z}}$ via $\tilde Z_i(\boldsymbol{x})=Z_i(\boldsymbol{x})-Z_1(\textbf{0})$, $\boldsymbol{x} \in {{\mathbb{R}}}^d$, $i=1,\ldots,m$. Then $\boldsymbol{Z}$ and $\tilde{\boldsymbol{Z}}$ have the same pseudo cross-variogram, and

\[{{{ Cov}}}(\tilde Z_i(\boldsymbol{x}),\tilde Z_j(\boldsymbol{y})) = \gamma_{i1}(\boldsymbol{x})+\gamma_{j1}(\boldsymbol{y})-\gamma_{ij}(\boldsymbol{x}-\boldsymbol{y})\]

(see also [22, equation (6)]), that is,

\[{{{ Cov}}}(\tilde{\boldsymbol{Z}}(\boldsymbol{x}), \tilde{\boldsymbol{Z}}(\boldsymbol{y})) = \boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top + \textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{y}) - {\boldsymbol \gamma}(\boldsymbol{x} - \boldsymbol{y}), \quad \boldsymbol{x}, \boldsymbol{y} \in {{\mathbb{R}}}^d,\]

with $\boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\,:\!=\, (\gamma_{11}(\boldsymbol{x}),\ldots,\gamma_{m1}(\boldsymbol{x}))^\top$. For $\textbf{1}_{\boldsymbol{m}}^\top \sum_{k=1}^n \boldsymbol{a}_{\boldsymbol{k}}=0$, we thus have

\begin{align*} 0 & \le {Var}\Biggl(\sum_{i=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top\tilde{\boldsymbol{Z}}(\boldsymbol{x}_{\boldsymbol{i}})\Biggr)\\ &=\sum_{i=1}^n \sum_{j=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top \bigl(\boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x}_{\boldsymbol{i}})\textbf{1}_{\boldsymbol{m}}^\top +\textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{x}_{\boldsymbol{j}})- {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}} -\boldsymbol{x}_{\boldsymbol{j}})\bigr) \boldsymbol{a}_{\boldsymbol{j}}\\& = -\sum_{i=1}^n \sum_{j=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}} -\boldsymbol{x}_{\boldsymbol{j}}) \boldsymbol{a}_{\boldsymbol{j}}.\end{align*}

Now let ${\boldsymbol \gamma}$ be conditionally negative definite and $\gamma_{ii}(\textbf{0})=0$ , $i=1,\ldots,m$ . Let $\boldsymbol{a}_{\textbf{1}},\ldots, \boldsymbol{a}_{\boldsymbol{n}} \in {{\mathbb{R}}}^m, \boldsymbol{x}_{\textbf{1}}, \ldots,\boldsymbol{x}_{\boldsymbol{n}} \in {{\mathbb{R}}}^d$ be arbitrary, $\boldsymbol{x}_{\textbf{0}}=\textbf{0} \in {{\mathbb{R}}}^d$ and

\[\boldsymbol{a}_{\textbf{0}} = \Biggl({-}\textbf{1}_{\boldsymbol{m}}^\top\sum_{k=1}^n \boldsymbol{a}_{\boldsymbol{k}}, 0, \ldots,0\Biggr)\in {{\mathbb{R}}}^{m}.\]

Then

\begin{align*}0 &\le - \sum_{i=0}^n \sum_{j=0}^n \boldsymbol{a}_{\boldsymbol{i}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}}-\boldsymbol{x}_{\boldsymbol{j}}) \boldsymbol{a}_{\boldsymbol{j}}\\&= - \sum_{i=1}^n \sum_{j=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}} -\boldsymbol{x}_{\boldsymbol{j}}) \boldsymbol{a}_{\boldsymbol{j}}- \sum_{i=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top{\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}} -\boldsymbol{x}_{\textbf{0}}) \boldsymbol{a}_{\textbf{0}}- \sum_{j=1}^n \boldsymbol{a}_{\textbf{0}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\textbf{0}} -\boldsymbol{x}_{\boldsymbol{j}}) \boldsymbol{a}_{\boldsymbol{j}} \\& \quad- \boldsymbol{a}_{\textbf{0}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\textbf{0}} -\boldsymbol{x}_{\textbf{0}}) \boldsymbol{a}_{\textbf{0}}.\end{align*}

Since $\gamma_{11}(\textbf{0})=0$ , and $\boldsymbol{a}_{\textbf{0}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\textbf{0}} -\boldsymbol{x}_{\boldsymbol{j}}) \boldsymbol{a}_{\boldsymbol{j}} =\boldsymbol{a}_{\boldsymbol{j}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{j}}) \boldsymbol{a}_{\textbf{0}}$ due to property (2a), we get that

\begin{align*}0 &\le - \sum_{i=0}^n \sum_{j=0}^n \boldsymbol{a}_{\boldsymbol{i}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}} -\boldsymbol{x}_{\boldsymbol{j}}) \boldsymbol{a}_{\boldsymbol{j}} \\&= \sum_{i=1}^n \sum_{j=1}^n \boldsymbol{a}_{\boldsymbol{i}}^\top\bigl(\boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x}_{\boldsymbol{i}})\textbf{1}_{\boldsymbol{m}}^\top + \textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{x}_{\boldsymbol{j}}) -{\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}} -\boldsymbol{x}_{\boldsymbol{j}})\bigr) \boldsymbol{a}_{\boldsymbol{j}},\end{align*}

that is,

\[(\boldsymbol{x},\boldsymbol{y}) \mapsto \boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top + \textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{y}) -{\boldsymbol \gamma}(\boldsymbol{x} -\boldsymbol{y})\]

is a matrix-valued positive definite kernel. Let

\[\{\boldsymbol{Z}(\boldsymbol{x})=(Z_1(\boldsymbol{x}),\ldots, Z_m(\boldsymbol{x}))^\top, \boldsymbol{x} \in {{\mathbb{R}}}^d\}\]

be a corresponding centred Gaussian random field. We have to show that ${Var}(Z_i(\boldsymbol{x}+\boldsymbol{h}) - Z_j(\boldsymbol{x}))$ is independent of $\boldsymbol{x}$ for all $\boldsymbol{x}, \boldsymbol{h} \in {{\mathbb{R}}}^d$ , $i,j=1,\ldots,m$ . We even show that $\boldsymbol{x} \mapsto Z_i(\boldsymbol{x}+\boldsymbol{h}) - Z_j(\boldsymbol{x})$ is weakly stationary for $i,j=1,\ldots,m$ :

\begin{align*}& {{{ Cov}}}(Z_i(\boldsymbol{x}+\boldsymbol{h}) - Z_j(\boldsymbol{x}), Z_i(\boldsymbol{y}+\boldsymbol{h}) - Z_j(\boldsymbol{y}))\\&\quad =\gamma_{i1}(\boldsymbol{x}+\boldsymbol{h}) + \gamma_{i1}(\boldsymbol{y}+\boldsymbol{h}) -\gamma_{ii}(\boldsymbol{x}-\boldsymbol{y}) +\gamma_{j1}(\boldsymbol{x}) + \gamma_{j1}(\boldsymbol{y}) - \gamma_{jj}(\boldsymbol{x}-\boldsymbol{y})\\&\quad\quad-\gamma_{j1}(\boldsymbol{x}) - \gamma_{i1}(\boldsymbol{y}+\boldsymbol{h}) + \gamma_{ji}(\boldsymbol{x}-\boldsymbol{y}-\boldsymbol{h}) -\gamma_{i1}(\boldsymbol{x}+\boldsymbol{h}) - \gamma_{j1}(\boldsymbol{y}) +\gamma_{ij}(\boldsymbol{x}+\boldsymbol{h}-\boldsymbol{y})\\&\quad = -\gamma_{ii}(\boldsymbol{x}-\boldsymbol{y})- \gamma_{jj}(\boldsymbol{x}-\boldsymbol{y})+ \gamma_{ji}(\boldsymbol{x}-\boldsymbol{y}-\boldsymbol{h})+\gamma_{ij}(\boldsymbol{x}-\boldsymbol{y}+\boldsymbol{h}).\end{align*}

Theorem 1 answers the questions raised in [7, p. 422] and also settles a question in [13, p. 239] in a more general framework with regard to the intersection of the sets of pseudo and cross-variograms. It turns out that this intersection is trivial in the following sense.

Corollary 1. Let

\begin{align*} \mathcal{P}&=\{{\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}\mid {\boldsymbol \gamma}\ \text{pseudo cross-variogram} \},\\ \mathcal{C}&=\{{\boldsymbol {\tilde\gamma}}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}\mid {\boldsymbol {\tilde\gamma}}\ \text{cross-variogram} \}. \end{align*}

Then we have

(3) \begin{equation} \mathcal{P} \cap \mathcal{C} = \{\textbf{1}_{\boldsymbol{m}}\textbf{1}_{\boldsymbol{m}}^\top \gamma \mid \gamma\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}\ \text{variogram} \}. \end{equation}

Proof. Let ${\boldsymbol \gamma} \in \mathcal{P} \cap \mathcal{C}$ . Without loss of generality, assume $m=2$ . Since ${\boldsymbol \gamma} \in \mathcal{P}\cap \mathcal{C}$ , for $n=2$ , $\boldsymbol{x}_{\textbf{1}}=\boldsymbol{h}$ , $\boldsymbol{x}_{\textbf{2}}=\textbf{0}$ , $\boldsymbol{a}_{\textbf{1}},\boldsymbol{a}_{\textbf{2}} \in {{\mathbb{R}}}^2$ with $\textbf{1}_{\textbf{2}}^\top \sum_{k=1}^2 \boldsymbol{a}_{\boldsymbol{k}}=0$ , using the symmetry of ${\boldsymbol \gamma}$ and ${\boldsymbol \gamma}(\textbf{0})=\textbf{0}$ we have

\begin{align*} 0 &\ge \sum_{i=1}^2\sum_{j=1}^2 \boldsymbol{a}_{\boldsymbol{i}}^\top {\boldsymbol \gamma}(\boldsymbol{x}_{\boldsymbol{i}}-\boldsymbol{x}_{\boldsymbol{j}})\boldsymbol{a}_{\boldsymbol{j}} \\ &= 2a_{11}a_{21} \gamma_{11}(\boldsymbol{h})+2a_{12}a_{22}\gamma_{22}(\boldsymbol{h})+2(a_{11}a_{22}+a_{12}a_{21})\gamma_{12}(\boldsymbol{h}) .\end{align*}

Choosing $\boldsymbol{a}_{\textbf{1}}=({-}1,0)^\top$ , $\boldsymbol{a}_{\textbf{2}}= (1-k,k)^\top$ , $k \ge 2$ , and applying the Cauchy–Schwarz inequality due to ${\boldsymbol \gamma} \in \mathcal{C}$ gives

(4) \begin{align}0 \le \gamma_{11}(\boldsymbol{h}) \le \dfrac{-k}{1-k}\gamma_{12}(\boldsymbol{h}) \le \dfrac{1}{1-{{1}/{k}}}\sqrt{\gamma_{11}(\boldsymbol{h})}\sqrt{\gamma_{22}(\boldsymbol{h})}.\end{align}

By symmetry, we also have

(5) \begin{align}0 \le \gamma_{22}(\boldsymbol{h}) &\le \dfrac{ -k}{1-k}\gamma_{12}(\boldsymbol{h}) \le \dfrac{1}{1-{{1}/{k}}}\sqrt{\gamma_{11}(\boldsymbol{h})}\sqrt{\gamma_{22}(\boldsymbol{h})}.\end{align}

Assume first that, without loss of generality, $\gamma_{11}(\boldsymbol{h})=0$ . Then $\gamma_{12}(\boldsymbol{h}) = 0$ and $\gamma_{22}(\boldsymbol{h}) =0$ due to inequalities (4) and (5). Suppose now that $\gamma_{11}(\boldsymbol{h}), \gamma_{22}(\boldsymbol{h}) \neq 0$ . Letting $k \rightarrow \infty$ in inequalities (4) and (5) yields $\gamma_{11}(\boldsymbol{h}) = \gamma_{22}(\boldsymbol{h})$ . Inserting this into inequality (5) gives

\[\gamma_{22}(\boldsymbol{h}) \le \dfrac{1}{1-{{1}/{k}}}\gamma_{12}(\boldsymbol{h}) \le \dfrac{1}{1-{{1}/{k}}}\gamma_{22}(\boldsymbol{h})\]

and consequently, letting $k \rightarrow \infty$, $\gamma_{12}(\boldsymbol{h})=\gamma_{22}(\boldsymbol{h})=\gamma_{11}(\boldsymbol{h})$, which yields the result.

Remark 1. Let $\boldsymbol{Z}$ be a random field on ${{\mathbb{R}}}^d$ with cross-variogram ${\boldsymbol {\tilde\gamma}} \in \mathcal{P} \cap \mathcal{C}$. Then the pseudo and cross-variogram of $\boldsymbol{Z}$ do not necessarily coincide; in fact, the pseudo cross-variogram might not even exist. For instance, let $\boldsymbol{Y}$ be a random field on ${{\mathbb{R}}}^d$ with cross-variogram ${\boldsymbol {\tilde\gamma}}$, and take $Z_i(\boldsymbol{x})\,:\!=\, Y_i(\boldsymbol{x})+U_i$ for i.i.d. random variables $U_i$, $i=1,\ldots,m$, whose variance does not exist. However, if $\boldsymbol{Z}$ is a random field whose pseudo cross-variogram exists and is of the form (3), then we have $2\gamma_{ij}(\textbf{0})={Var}(Z_i(\boldsymbol{x})-Z_j(\boldsymbol{x}))=0$ for all $\boldsymbol{x} \in {{\mathbb{R}}}^d$, $i,j=1,\ldots,m$. Consequently, the difference between $Z_i(\boldsymbol{x})$ and $Z_j(\boldsymbol{x})$ is almost surely constant for all $\boldsymbol{x} \in {{\mathbb{R}}}^d$, $i,j=1,\ldots,m$, implying that the cross- and pseudo cross-variograms of $\boldsymbol{Z}$ coincide in that case.

Corollary 1 can also be proved by means of a result in [21]. In fact, Theorem 1 enables us to reproduce their result, which was originally derived in a stochastic manner, by a direct proof.

Corollary 2. Let ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ be a pseudo cross-variogram. Then ${\boldsymbol \gamma}$ fulfils

\[\Bigl(\sqrt{\gamma_{ii}(\boldsymbol{h})}-\sqrt{\gamma_{ij}(\boldsymbol{h})}\Bigr)^2 \le \gamma_{ij}(\textbf{0}), \quad \boldsymbol{h} \in {{\mathbb{R}}}^d, \quad i,j=1,\ldots,m.\]

Proof. Without loss of generality, assume $m=2$ . We present a proof for $i=1, j=2$ and $\gamma_{11}(\boldsymbol{h})$ , $\gamma_{12}(\boldsymbol{h}) > 0$ . Then, for $n=2$ , $\boldsymbol{x}_{\textbf{1}}=\boldsymbol{h}$ , $\boldsymbol{x}_{\textbf{2}}=\textbf{0}$ , $\boldsymbol{a}_{\textbf{1}},\boldsymbol{a}_{\textbf{2}} \in {{\mathbb{R}}}^2$ with $\textbf{1}_{\textbf{2}}^\top \sum_{k=1}^2 \boldsymbol{a}_{\boldsymbol{k}}=0$ , we have

(6) \begin{align} 0 &\ge a_{11}a_{21} \gamma_{11}(\boldsymbol{h})+a_{12}a_{22}\gamma_{22}(\boldsymbol{h}) \notag \\ &\quad + a_{11}a_{22}\gamma_{12}(\boldsymbol{h}) + a_{12}a_{21}\gamma_{21}(\boldsymbol{h}) + (a_{11}a_{12}+a_{21}a_{22})\gamma_{12}(\textbf{0}).\end{align}

Assuming $a_{12}=0$ , $a_{22}>0$ and $a_{11} + a_{22} = -a_{21} >0$ , inequality (6) simplifies to

(7) \begin{align} \gamma_{12}(\textbf{0}) &\ge -\dfrac{a_{11}}{a_{22}} \gamma_{11}(\boldsymbol{h}) + \dfrac{a_{11}}{a_{11}+a_{22}}\gamma_{12}(\boldsymbol{h}) \notag \\&= -x \gamma_{11}(\boldsymbol{h}) + \dfrac x{1+x}\gamma_{12}(\boldsymbol{h})\end{align}

for $x \,:\!=\, {{a_{11}}/{a_{22}}}$ . Maximization of the function

\[x \mapsto -x \gamma_{11}(\boldsymbol{h}) + \dfrac x{1+x}\gamma_{12}(\boldsymbol{h}),\quad x > -1,\]

leads to

\[x^{\ast} = \sqrt{\dfrac{\gamma_{12}(\boldsymbol{h})}{\gamma_{11}(\boldsymbol{h})}} -1.\]

Inserting $x^\ast$ into (7) gives

\begin{align*} \gamma_{12}(\textbf{0}) &\ge -\biggl(\sqrt{\dfrac{\gamma_{12}(\boldsymbol{h})}{\gamma_{11}(\boldsymbol{h})}}-1\biggr) \gamma_{11}(\boldsymbol{h}) + \left( \dfrac {\sqrt{\frac{\gamma_{12}(\boldsymbol{h})}{\gamma_{11}(\boldsymbol{h})}} -1}{1+\sqrt{\frac{\gamma_{12}(\boldsymbol{h})}{\gamma_{11}(\boldsymbol{h})}} -1} \right)\gamma_{12}(\boldsymbol{h}) \\ &= \bigl(\sqrt{\gamma_{11}(\boldsymbol{h})}-\sqrt{\gamma_{12}(\boldsymbol{h})}\bigr)^2.\end{align*}
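As a quick numerical illustration of Corollary 2 (our own, based again on the shifted Brownian field $Z_i(x) = W(x+s_i)$ with $s_1 = 0$, $s_2 = 0.7$, so that $\gamma_{11}(h) = \frac12\lvert h \rvert$ and $\gamma_{12}(h)=\frac12\lvert h - 0.7\rvert$):

```python
import numpy as np

# gamma_11(h) = 0.5*|h| and gamma_12(h) = 0.5*|h - 0.7| from the shifted
# Brownian field Z_i(x) = W(x + s_i) with s_1 = 0, s_2 = 0.7.
g11 = lambda h: 0.5 * np.abs(h)
g12 = lambda h: 0.5 * np.abs(h - 0.7)

h = np.linspace(-5.0, 5.0, 2001)
lhs = (np.sqrt(g11(h)) - np.sqrt(g12(h))) ** 2
print(lhs.max(), "<=", g12(0.0))  # the bound gamma_12(0) = 0.35 is attained at h = 0.7
```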

3. A Schoenberg-type characterization

The stochastically motivated proof of Theorem 1 contains an important relation between matrix-valued positive definite kernels and conditionally negative definite functions that we have not yet emphasized. Due to its significance, we formulate it as a separate lemma. As is readily seen, the assumption on the main diagonal stemming from our consideration of pseudo cross-variograms can be dropped, resulting in a matrix-valued version of Lemma 3.2.1 in [2].

Lemma 1. Let ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ be a matrix-valued function with $\gamma_{ij}(\boldsymbol{h})=\gamma_{ji}({-}\boldsymbol{h})$ , $i,j=1,\ldots,m$ . Define

\[ \boldsymbol{C}_{\boldsymbol{k}}(\boldsymbol{x},\boldsymbol{y})\,:\!=\, \boldsymbol{\gamma_{\boldsymbol{k}}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top + \textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\boldsymbol{k}}^\top(\boldsymbol{y}) -{\boldsymbol \gamma}(\boldsymbol{x} -\boldsymbol{y})-\gamma_{kk}(\textbf{0})\textbf{1}_{\boldsymbol{m}}\textbf{1}_{\boldsymbol{m}}^\top \]

with $\boldsymbol{\gamma_{\boldsymbol{k}}}(\boldsymbol{h})=(\gamma_{1k}(\boldsymbol{h}),\ldots,\gamma_{mk}(\boldsymbol{h}))^\top$ , $k \in \{1,\ldots,m\}$ . Then $\boldsymbol{C}_{\boldsymbol{k}}$ is a positive definite matrix-valued kernel for $k \in \{1,\ldots,m\}$ if and only if ${\boldsymbol \gamma}$ is conditionally negative definite. If $\gamma_{kk}(\textbf{0}) \ge 0$ for $k=1,\ldots,m$ , then

\[ \tilde{\boldsymbol{C}}_{\boldsymbol{k}}(\boldsymbol{x},\boldsymbol{y})\,:\!=\, \boldsymbol{\gamma_{\boldsymbol{k}}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top + \textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\boldsymbol{k}}^\top(\boldsymbol{y}) -{\boldsymbol \gamma}(\boldsymbol{x} -\boldsymbol{y}) \]

is a positive definite matrix-valued kernel for $k \in \{1,\ldots,m\}$ if and only if ${\boldsymbol \gamma}$ is conditionally negative definite.

The kernel construction in Lemma 1 leads to a matrix-valued version of Schoenberg’s theorem [2].

Theorem 2. A function ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ is conditionally negative definite if and only if $\exp^\ast({-}t{\boldsymbol \gamma})$ , with $\exp^\ast({-}t{\boldsymbol \gamma}(\boldsymbol{h}))_{ij} \,:\!=\, \exp({-}t\gamma_{ij}(\boldsymbol{h}))$ , is positive definite for all $t >0$ .

Remark 2. Theorem 2, in the form presented here, has recently been formulated in [9] in terms of conditionally positive definite matrix-valued functions with complex entries. Nonetheless, we present an alternative proof of the ‘only if’ part, which relies on the kernel construction in Lemma 1 and follows the lines of the proof of Theorem 3.2.2 in [2].

Proof of Theorem 2. Assume that ${\boldsymbol \gamma}$ is conditionally negative definite. Then

\[(\boldsymbol{x},\boldsymbol{y}) \mapsto \boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top + \textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{y}) -{\boldsymbol \gamma}(\boldsymbol{x} -\boldsymbol{y}) -\gamma_{11}(\textbf{0})\textbf{1}_{\boldsymbol{m}}\textbf{1}_{\boldsymbol{m}}^\top\]

is a positive definite kernel due to Lemma 1. Since positive definite matrix-valued kernels are closed with regard to sums, Hadamard products, and pointwise limits (see e.g. [27]), the kernel

\begin{align*}(\boldsymbol{x},\boldsymbol{y}) &\mapsto \exp^\ast\bigl(t\boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top + t\textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{y}) -t{\boldsymbol \gamma}(\boldsymbol{x} -\boldsymbol{y})-t\gamma_{11}(\textbf{0})\textbf{1}_{\boldsymbol{m}}\textbf{1}_{\boldsymbol{m}}^\top\bigr) \\&=\exp({-}t\gamma_{11}(\textbf{0}))\exp^\ast\bigl(t\boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top + t\textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{y}) -t{\boldsymbol \gamma}(\boldsymbol{x} -\boldsymbol{y})\bigr)\end{align*}

is again positive definite for all $t >0$ . The same holds true for the kernel

\[(\boldsymbol{x},\boldsymbol{y}) \mapsto \exp^*\bigl({-}t\boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top - t\textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{y})\bigr),\]

since

\[\exp^\ast\bigl({-}t\boldsymbol{\gamma}_{\textbf{1}}(\boldsymbol{x})\textbf{1}_{\boldsymbol{m}}^\top - t\textbf{1}_{\boldsymbol{m}}\boldsymbol{\gamma}_{\textbf{1}}^\top(\boldsymbol{y})\bigr)_{ij}=\exp({-}t\gamma_{i1}(\boldsymbol{x}))\exp({-}t\gamma_{j1}(\boldsymbol{y})), \quad i,j=1,\ldots,m,\]

with the product separable structure implying positive definiteness. Again using the stability of positive definite kernels under Hadamard products, the first part of the assertion follows.

Assume now that $\exp^\ast({-}t{\boldsymbol \gamma})$ is a positive definite function for all $t>0$ . Then

\[\exp({-}t\gamma_{ij}(\boldsymbol{h}))=\exp({-}t\gamma_{ji}({-}\boldsymbol{h})),\]

and thus

\[\biggl( \dfrac{1 - {{e}}^{-t\gamma_{ij}}}t\biggr)_{i,j=1,\ldots,m}= \dfrac{\textbf{1}_{\boldsymbol{m}} \textbf{1}_{\boldsymbol{m}}^\top - \exp^\ast({-}t{\boldsymbol \gamma})}t\]

is a conditionally negative definite function. The assertion follows for $t\rightarrow 0$ .

Combining Theorems 1 and 2, and recalling that the classes of matrix-valued positive definite functions and covariance functions for multivariate random fields coincide, we immediately get the following characterization of pseudo cross-variograms.

Corollary 3. A function ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ is a pseudo cross-variogram if and only if $\exp^\ast({-}t{\boldsymbol \gamma})$ is a matrix-valued correlation function for all $t>0$ .

Corollary 3 establishes a direct link between matrix-valued correlation functions and pseudo cross-variograms. Together with Corollary 1, it shows that the cross-variograms for which Theorem 10 in [14] holds are necessarily of the form (3), and it explains the findings in the first part of Remark 2 in [27].
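Corollary 3 also suggests a simple numerical sanity check (our own illustration, reusing the shifted Brownian example from Section 2): for every $t>0$ and every finite design, the matrix with blocks $\exp^\ast({-}t{\boldsymbol\gamma}(\boldsymbol{x}_{\boldsymbol{k}} - \boldsymbol{x}_{\boldsymbol{l}}))$ must be positive semidefinite.

```python
import numpy as np

s = np.array([0.0, 0.7])
Gamma = lambda h: 0.5 * np.abs(h + s[:, None] - s[None, :])  # pseudo cross-variogram

rng = np.random.default_rng(2)
x = rng.normal(size=5)            # locations x_1, ..., x_5
n = len(x)

for t in (0.1, 1.0, 10.0):
    # nm x nm matrix with m x m blocks exp(-t * gamma(x_k - x_l))
    C = np.block([[np.exp(-t * Gamma(x[k] - x[l])) for l in range(n)]
                  for k in range(n)])
    print(t, np.linalg.eigvalsh(C).min() >= -1e-10)   # True for every t
```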

Remark 3. The correspondence of the proofs of the matrix-valued and real-valued versions of Schoenberg’s theorem (Theorem 2 here and Theorem 3.2.2 in [2]) is no coincidence. Since [2] does not impose any assumption on the underlying domain X of the negative definite function considered there, we could also choose $X= {{\mathbb{R}}}^d \times \{1,\ldots,m\}$, which translates into Theorem 2. The same holds true for Lemma 1. With this ‘dimension expansion’ view, it is no surprise that the pseudo cross-variogram turns out to be the natural multivariate analogue of the variogram from a theoretical standpoint. Due to our interest in pseudo cross-variograms, we chose a stochastic, pseudo cross-variogram driven derivation of Lemma 1 and Theorem 2.

As demonstrated in Theorem 3.2.3 in [2], Schoenberg’s theorem can be further generalized for conditionally negative definite functions with non-negative components in terms of componentwise Laplace transforms. Here we present the corresponding matrix-valued version explicitly for clarity, and combine it with our previous results concerning pseudo cross-variograms. With regard to Remark 3, and since we have already presented the matrix-valued proof of Theorem 2, we omit the proof here.

Theorem 3. Let $\mu$ be a probability measure on $[0,\infty)$ such that

\[ 0 < \int_0^\infty s \,{{d}} \mu(s) < \infty. \]

Let $\mathcal{L}$ denote its Laplace transform, that is,

\[ \mathcal{L}\mu(x)=\int_0^\infty \exp({-}sx) \,{{d}} \mu(s),\quad x \in [0,\infty). \]

Then ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow [0,\infty)^{m \times m}$ is conditionally negative definite if and only if $(\mathcal{L}\mu(t\gamma_{ij}))_{i,j=1,\ldots,m}$ is positive definite for all $t >0$ . In particular, ${\boldsymbol \gamma}$ is a pseudo cross-variogram if and only if $(\mathcal{L}\mu(t\gamma_{ij}))_{i,j=1,\ldots,m}$ is an m-variate correlation function for all $t >0$ .

Corollary 4. Let ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow [0,\infty)^{m \times m}$ be a matrix-valued function. If ${\boldsymbol \gamma}$ is a pseudo cross-variogram, then for all $\lambda >0$ , the function $\boldsymbol{C}\,:\, {{\mathbb{R}}}^d \rightarrow {{\mathbb{R}}}^{m \times m}$ with

(8) \begin{equation} \boldsymbol{C}(\boldsymbol{h})=\bigl((1 + t\gamma_{ij}(\boldsymbol{h}))^{-\lambda}\bigr)_{i,j=1,\ldots,m}, \quad \boldsymbol{h} \in {{\mathbb{R}}}^d, \end{equation}

is a correlation function of an m-variate random field for all $t >0$ . Conversely, if a $\lambda >0$ exists such that $C_{ii}(\textbf{0})=1$ , $i=1,\ldots,m$ , and such that $\boldsymbol{C}$ is positive definite for all $t >0$ , then ${\boldsymbol \gamma}$ is a pseudo cross-variogram.

Proof. Choose

\[ \mu({{d}} s)=\dfrac1{\Gamma(\lambda)}\exp({-}s)s^{\lambda-1} \mathbb{1}_{(0,\infty)}(s) \,{{d}} s \]

in Theorem 3.
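The same numerical check as above applies to the Cauchy-type family (8); a minimal sketch under the same illustrative shifted Brownian example:

```python
import numpy as np

s = np.array([0.0, 0.7])
Gamma = lambda h: 0.5 * np.abs(h + s[:, None] - s[None, :])  # pseudo cross-variogram
lam, t = 1.5, 2.0                                            # lambda > 0, t > 0

x = np.linspace(-2.0, 2.0, 5)
n = len(x)
C = np.block([[(1.0 + t * Gamma(x[k] - x[l])) ** (-lam) for l in range(n)]
              for k in range(n)])
print(np.linalg.eigvalsh(C).min() >= -1e-10)   # True: (8) is positive definite
```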

Similarly to Theorem 3, we can translate the ‘univariate’ result that Bernstein functions operate on real-valued conditionally negative definite functions [2] to the matrix-valued case, which can thus be used to derive novel pseudo cross-variograms from known ones. Again, we omit the proof for the reasons given above.

Proposition 1. Let ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^d \rightarrow [0,\infty)^{m\times m}$ be conditionally negative definite. Let $g\,:\, [0, \infty) \rightarrow [0,\infty)$ denote the continuous extension of a Bernstein function. Then $\boldsymbol{g} \circ {\boldsymbol \gamma}$ with $((\boldsymbol{g} \circ {\boldsymbol \gamma})(\boldsymbol{h}))_{ij}\,:\!=\, (g\circ \gamma_{ij})(\boldsymbol{h})$ , $i,j=1,\ldots,m$ , is conditionally negative definite. In particular, if $g(0)=0$ and ${\boldsymbol \gamma}$ is a pseudo cross-variogram, then $\boldsymbol{g} \circ \boldsymbol\gamma$ is again a pseudo cross-variogram.
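A small sketch illustrating Proposition 1 (our own) with the Bernstein function $g(x)=\sqrt{x}$, which satisfies $g(0)=0$, applied componentwise to the illustrative pseudo cross-variogram used above:

```python
import numpy as np

s = np.array([0.0, 0.7])
m = len(s)
Gamma = lambda h: 0.5 * np.abs(h + s[:, None] - s[None, :])  # pseudo cross-variogram
g = np.sqrt          # Bernstein function with g(0) = 0, applied componentwise

rng = np.random.default_rng(3)
n = 6
x = rng.normal(size=n)
a = rng.normal(size=(n, m))
a[0, 0] -= a.sum()   # 1_m^T sum_k a_k = 0

quad = sum(a[i] @ g(Gamma(x[i] - x[j])) @ a[j] for i in range(n) for j in range(n))
print(quad <= 1e-10)  # True: g o gamma is again conditionally negative definite
```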

4. Multivariate versions of Gneiting’s space–time model

Schoenberg’s result is often an integral part of proving the validity of univariate covariance models. Here we use its matrix-valued counterparts derived in the previous section to naturally extend covariance models of Gneiting type to the multivariate case.

Gneiting’s original space–time model is a univariate covariance function on ${{\mathbb{R}}}^d\times {{\mathbb{R}}}$ defined via

(9) \begin{equation} G(\boldsymbol{h},u)= \dfrac1{\psi(\lvert u\rvert^2)^{d/2}}\varphi\biggl(\dfrac{\lVert \boldsymbol{h}\rVert^2}{\psi(\lvert u\rvert^2)}\biggr), \quad (\boldsymbol{h},u) \in {{\mathbb{R}}}^d \times {{\mathbb{R}}},\end{equation}

where $\psi\,:\, [0, \infty) \rightarrow (0,\infty)$ is the continuous extension of a Bernstein function, and $\varphi\,:\, [0,\infty) \rightarrow [0,\infty)$ is the continuous extension of a bounded completely monotone function [10]. For convenience, we simply speak of bounded completely monotone functions henceforth. Model (9) is very popular in practice due to its versatility and its ability to model space–time interactions; see [24] for a list of several applications. Its special structure has attracted and still attracts interest from a theoretical perspective as well, resulting in several extensions and refinements of the original model (9); see e.g. [17], [18], [23], [25], [32]. Only recently, specific simulation methods have been proposed [1] for the so-called extended Gneiting class, a special case of [32, Theorem 2.1],

\begin{equation*} G(\boldsymbol{h},\boldsymbol{u})= \dfrac1{(1+\gamma(\boldsymbol{u}))^{d/2}}\varphi\biggl(\dfrac{\lVert \boldsymbol{h}\rVert^2}{1+\gamma(\boldsymbol{u})}\biggr), \quad (\boldsymbol{h},\boldsymbol{u}) \in {{\mathbb{R}}}^d \times {{\mathbb{R}}}^l,\end{equation*}

with $\gamma$ denoting a continuous variogram. One of these methods is based on an explicit construction of a random field which does not require the continuity assumption on $\gamma$ [1], and which can be transferred directly to the multivariate case via pseudo cross-variograms.
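For concreteness, the following sketch evaluates the univariate model (9) with the illustrative choices $\psi(x)=(1+x)^{\alpha}$, $\alpha \in (0,1]$, and $\varphi(x)=\exp({-}cx)$; these particular choices are ours and serve only as an example of a Bernstein and a bounded completely monotone function, respectively.

```python
import numpy as np

# Gneiting's model (9): psi(x) = (1 + x)**alpha is the continuous extension of
# a Bernstein function (strictly positive), phi(x) = exp(-c*x) is bounded
# completely monotone; d is the spatial dimension.
def gneiting(h, u, d, alpha=0.5, c=1.0):
    psi = (1.0 + np.abs(u) ** 2) ** alpha
    return np.exp(-c * np.dot(h, h) / psi) / psi ** (d / 2)

h = np.array([1.0, -0.5])
print(gneiting(h, 0.0, d=2), gneiting(h, 2.0, d=2))  # decays as |u| grows
```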

Theorem 4. Let R be a non-negative random variable with distribution $\mu$ , $\boldsymbol{\Omega} \sim N(\textbf{0},\mathbf{1}_{d \times d})$ with $\mathbf{1}_{d \times d} \in {{\mathbb{R}}}^{d \times d}$ denoting the identity matrix, $U \sim U(0,1)$ , $\Phi\sim U(0,2\pi)$ , and let $\boldsymbol{W}$ be a centred, m-variate Gaussian random field on ${{\mathbb{R}}}^l$ with pseudo cross-variogram ${\boldsymbol \gamma}$ , all independent. Then the m-variate random field $\boldsymbol{Z}$ on ${{\mathbb{R}}}^d \times {{\mathbb{R}}}^l$ defined via

\[ Z_i(\boldsymbol{x},\boldsymbol{t})=\sqrt{-2\log(U)}\cos\biggl(\sqrt{2R}\langle\boldsymbol{\Omega},\boldsymbol{x}\rangle + \dfrac{\lVert \boldsymbol{\Omega} \rVert}{\sqrt{2}}W_i(\boldsymbol{t})+\Phi \biggr),\!\!\quad (\boldsymbol{x},\boldsymbol{t}) \in {{\mathbb{R}}}^d\times {{\mathbb{R}}}^l,\quad i =1,\ldots,m, \]

has the extended Gneiting-type covariance function

(10) \begin{equation} G_{ij}(\boldsymbol{h},\boldsymbol{u})= \dfrac1{(1+\gamma_{ij}(\boldsymbol{u}))^{d/2}}\varphi\biggl(\dfrac{\lVert \boldsymbol{h}\rVert^2}{1+\gamma_{ij}(\boldsymbol{u})}\biggr),\quad (\boldsymbol{h},\boldsymbol{u}) \in {{\mathbb{R}}}^d\times {{\mathbb{R}}}^l, \quad i,j=1,\ldots,m,\end{equation}

where $\varphi$ denotes a bounded completely monotone function.

Proof. The proof follows the lines of the proof of Theorem 3 in [1]. In the multivariate case, the cross-covariance function reads

\begin{align*}{{{ Cov}}}(Z_i(\boldsymbol{x},\boldsymbol{t}),Z_j(\boldsymbol{y},\boldsymbol{s}))&= {{\mathbb{E}}} \cos\biggl(\sqrt{2R}\langle\boldsymbol{\Omega},\boldsymbol{x}-\boldsymbol{y}\rangle + \dfrac{\lVert \boldsymbol{\Omega}\rVert}{\sqrt{2}}(W_i(\boldsymbol{t})-W_j(\boldsymbol{s}))\biggr),\end{align*}

$(\boldsymbol{x},\boldsymbol{t}), (\boldsymbol{y},\boldsymbol{s}) \in {{\mathbb{R}}}^d \times {{\mathbb{R}}}^l$, $i,j=1,\ldots,m$. Due to the assumptions, $W_i(\boldsymbol{t})-W_j(\boldsymbol{s})$ is a Gaussian random variable with mean zero and variance $2\gamma_{ij}(\boldsymbol{t}-\boldsymbol{s})$. Proceeding further as in [1] gives the result.

Theorem 4 provides a multivariate extension of the extended Gneiting class, and lays the foundations for a simulation algorithm for an approximately Gaussian random field with the respective cross-covariance function; see [1]. The existence of a Gaussian random field with a preset pseudo cross-variogram ${\boldsymbol \gamma}$ and the possibility of sampling from it are ensured by Theorem 1 and Lemma 1, respectively.
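A minimal sketch of one spectral draw from Theorem 4 (our own illustration): we take $R \sim \mathrm{Exp}(1)$, which corresponds to $\varphi(x) = 1/(1+x)$, and realize $\boldsymbol{W}$ as the shifted Brownian field $W_i(t) = B(t + s_i)$ with pseudo cross-variogram $\gamma_{ij}(u) = \frac12\lvert u + s_i - s_j\rvert$. A single draw is not Gaussian; as discussed in [1], normalized sums of many independent copies yield an approximately Gaussian field with cross-covariance (10).

```python
import numpy as np

rng = np.random.default_rng(4)
d = 2
shifts = np.array([0.0, 0.7])   # W_i(t) = B(t + s_i), B Brownian, so that
                                # gamma_ij(u) = 0.5 * |u + s_i - s_j|
m = len(shifts)

def sample_Z(points):
    """One spectral draw of Z from Theorem 4 at the space-time `points`
    (pairs (x, t) with x in R^d, t in R), with R ~ Exp(1), i.e.
    phi(x) = 1/(1 + x)."""
    R, U, Phi = rng.exponential(), rng.uniform(), rng.uniform(0, 2 * np.pi)
    Omega = rng.normal(size=d)
    # Exact simulation of Brownian motion at the (sorted) required times.
    tt = np.sort(np.unique([t + s for (_, t) in points for s in shifts]))
    vals = np.concatenate([[0.0], np.cumsum(rng.normal(size=len(tt) - 1)
                                            * np.sqrt(np.diff(tt)))])
    B = dict(zip(tt, vals))
    amp = np.sqrt(-2.0 * np.log(U))
    return np.array([[amp * np.cos(np.sqrt(2.0 * R) * np.dot(Omega, x)
                                   + np.linalg.norm(Omega) / np.sqrt(2.0)
                                   * B[t + shifts[i]] + Phi)
                      for (x, t) in points] for i in range(m)])

pts = [(np.zeros(d), 0.0), (np.array([1.0, 0.0]), 0.5)]
print(sample_Z(pts))   # averaging products of many draws approximates (10)
```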

Due to our results in previous sections, Theorem 4 can easily be generalized further, replacing the exponent $d/2$ of the denominator in equation (10) with a general parameter $r \ge d/2$.

Corollary 5. Let ${\boldsymbol \gamma}\,:\, {{\mathbb{R}}}^l \rightarrow {{\mathbb{R}}}^{m \times m}$ be a pseudo cross-variogram. Then the function $\boldsymbol{G}\,:\, {{\mathbb{R}}}^d \times {{\mathbb{R}}}^l \rightarrow {{\mathbb{R}}}^{m \times m}$ with

(11) \begin{equation} G_{ij}(\boldsymbol{h},\boldsymbol{u})= \dfrac1{(1+\gamma_{ij}(\boldsymbol{u}))^{r}}\varphi\biggl(\dfrac{\lVert \boldsymbol{h}\rVert^2}{1+\gamma_{ij}(\boldsymbol{u})}\biggr), \quad (\boldsymbol{h},\boldsymbol{u}) \in {{\mathbb{R}}}^d\times {{\mathbb{R}}}^l, \quad i,j=1,\ldots,m,\end{equation}

is positive definite for $r \ge {{d}/{2}}$ and a bounded completely monotone function $\varphi$ .

Proof. We proved the assertion for $r={{d}/{2}}$ in Theorem 4. Now let $\lambda > 0$ and $r = \lambda + {{d}/{2}}$ . Then the matrix-valued function (11) is the componentwise product of positive definite functions of the form (8) and (10), and consequently positive definite itself.
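Corollary 5 can again be probed numerically on examples; a sketch with $r > d/2$ and our usual illustrative ingredients:

```python
import numpy as np

s = np.array([0.0, 0.7])
Gamma = lambda u: 0.5 * np.abs(u + s[:, None] - s[None, :])  # pseudo cross-variogram
phi = lambda x: np.exp(-x)        # bounded completely monotone
d, r = 2, 2.5                     # r >= d/2

def G_block(h, u):
    den = 1.0 + Gamma(u)          # 1 + gamma_ij(u), componentwise
    return phi(np.dot(h, h) / den) / den ** r

rng = np.random.default_rng(5)
X = rng.normal(size=(4, d))       # spatial locations
T = rng.normal(size=4)            # temporal locations
C = np.block([[G_block(X[k] - X[l], T[k] - T[l]) for l in range(4)]
              for k in range(4)])
print(np.linalg.eigvalsh(C).min() >= -1e-10)   # True: (11) is positive definite
```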

Even further refinements of Corollary 5 are possible. We can replace $\textbf{1}_{\boldsymbol{m}}\textbf{1}_{\boldsymbol{m}}^\top + {\boldsymbol \gamma}$ in (11) with general conditionally negative definite matrix-valued functions, at the price of restricting $\varphi$ to a subclass of completely monotone functions, the so-called generalized Stieltjes functions of order $\lambda$. This leads to a multivariate version of a result in [17]. A bounded generalized Stieltjes function $S\,:\, (0,\infty) \rightarrow [0,\infty)$ of order $\lambda >0$ has a representation

\[ S(x) = a + \int_0^\infty \dfrac1{(x + v)^\lambda}\,{{d}} \mu(v), \quad x>0, \]

where $a\ge 0$, and the so-called Stieltjes measure $\mu$ is a positive measure on $(0,\infty)$ such that $\int_{(0,\infty)}v^{-\lambda}\,{{d}} \mu(v) < \infty$ [17]. As for completely monotone functions, in the following we do not distinguish between a generalized Stieltjes function and its continuous extension. Several examples of generalized Stieltjes functions can be found in [3] and [17].

Theorem 5. Let $S_{ij}, i,j=1,\ldots,m$ , be generalized Stieltjes functions of order $\lambda >0$ . Let the associated Stieltjes measures have densities $\varphi_{ij}$ such that $(\varphi_{ij}(v))_{i,j=1,\ldots,m}$ is a symmetric positive semidefinite matrix for all $v > 0$ . Let

\[ \boldsymbol{g}\,:\, {{\mathbb{R}}}^d \rightarrow [0,\infty)^{m \times m},\quad \boldsymbol{f}\,:\, {{\mathbb{R}}}^l \rightarrow (0,\infty)^{m \times m} \]

be conditionally negative definite functions. Then the function $\boldsymbol{G}\,:\, {{\mathbb{R}}}^d\times {{\mathbb{R}}}^l \rightarrow {{\mathbb{R}}}^{m \times m}$ with

\[ G_{ij}(\boldsymbol{h},\boldsymbol{u})= \dfrac1{f_{ij}(\boldsymbol{u})^r}S_{ij}\biggl(\dfrac{g_{ij}(\boldsymbol{h})}{f_{ij}(\boldsymbol{u})}\biggr), \quad (\boldsymbol{h},\boldsymbol{u}) \in {{\mathbb{R}}}^d\times{{\mathbb{R}}}^l, \quad i,j=1,\ldots,m, \]

is an m-variate covariance function for $r \ge \lambda$ .

Proof. We follow the proof in [17]. It holds that

\begin{align*} G_{ij}(\boldsymbol{h},\boldsymbol{u})&= \dfrac a{f_{ij}(\boldsymbol{u})^r} + \dfrac 1{f_{ij}(\boldsymbol{u})^{r-\lambda}} \int_0^\infty \dfrac1{(g_{ij}(\boldsymbol{h}) + vf_{ij}(\boldsymbol{u}))^\lambda}\varphi_{ij}(v) \,{{d}} v .\end{align*}

The function $x \mapsto {{1}/{x^\alpha}}$ is completely monotone for $\alpha \ge 0$, and is thus the Laplace transform of a measure on $[0,\infty)$ [26, Theorem 1.4]. Therefore $(1/f_{ij}^r)_{i,j=1,\ldots,m}$ and $(1/f_{ij}^{r-\lambda})_{i,j=1,\ldots,m}$ are positive definite functions due to Theorem 2, as mixtures of positive definite functions. Furthermore, we have

\[\dfrac1{(g_{ij}(\boldsymbol{h}) + vf_{ij}(\boldsymbol{u}))^\lambda}= \dfrac1{\Gamma(\lambda)}\int_0^\infty {{e}}^{-sg_{ij}(\boldsymbol{h})}{{e}}^{-svf_{ij}(\boldsymbol{u})}s^{\lambda -1} \,{{d}} s.\]

The functions $\bigl({{e}}^{-sg_{ij}(\boldsymbol{h})}\bigr)_{i,j=1,\ldots,m}$ and $\bigl({{e}}^{-svf_{ij}(\boldsymbol{u})}\bigr)_{i,j=1,\ldots,m}$ are again positive definite due to Theorem 2 for all $s,v >0$ , and so is their componentwise product. Since positive definite functions are closed under integration,

\[\biggl(\dfrac1{(g_{ij}(\boldsymbol{h}) + vf_{ij}(\boldsymbol{u}))^\lambda}\biggr)_{i,j=1,\ldots,m}\]

is positive definite for all $v > 0$ . Therefore the function

\[\biggl(\dfrac1{(g_{ij}(\boldsymbol{h}) + vf_{ij}(\boldsymbol{u}))^\lambda}\varphi_{ij}(v)\biggr)_{i,j=1,\ldots,m}\]

is also positive definite for all $v > 0$. Combining the above arguments proves the claim.
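To make Theorem 5 concrete, the following sketch (our own; it requires SciPy) uses the bounded generalized Stieltjes functions $S_{ij}(x) = b_{ij}\bigl(1 - x\,{{e}}^x E_1(x)\bigr)$ of order $\lambda = 1$, which have Stieltjes densities $\varphi_{ij}(v) = b_{ij} v {{e}}^{-v}$ with a positive semidefinite matrix $(b_{ij})$, together with illustrative choices of $g_{ij}$ and $f_{ij}$, and checks positive definiteness on a random design.

```python
import numpy as np
from scipy.special import exp1   # exponential integral E_1

# S_ij(x) = b_ij * (1 - x * e^x * E_1(x)) = b_ij * int_0^inf v e^{-v}/(x+v) dv:
# bounded generalized Stieltjes functions of order lambda = 1 with Stieltjes
# densities phi_ij(v) = b_ij * v * e^{-v}; B = (b_ij) symmetric PSD as required.
B = np.array([[1.0, 0.6], [0.6, 1.0]])

def S(x):
    x = np.maximum(x, 1e-12)          # avoid 0 * E_1(0); S is continuous at 0
    return B * (1.0 - x * np.exp(x) * exp1(x))

s = np.array([0.0, 0.7])
g = lambda h: 0.5 * np.abs(h + s[:, None] - s[None, :])        # CND, >= 0
f = lambda u: 1.0 + 0.5 * np.abs(u + s[:, None] - s[None, :])  # CND, > 0
r = 1.0                                                        # r >= lambda

rng = np.random.default_rng(6)
h_pts, u_pts = rng.normal(size=5), rng.normal(size=5)
C = np.block([[S(g(h_pts[k] - h_pts[l]) / f(u_pts[k] - u_pts[l]))
               / f(u_pts[k] - u_pts[l]) ** r
               for l in range(5)] for k in range(5)])
print(np.linalg.eigvalsh(C).min() >= -1e-10)   # True: G is positive definite
```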

Theorem 5 provides a very flexible model. In a space–time framework, it allows for different cross-covariance structures in both space and time, and it does not require assumptions such as continuity of the conditionally negative definite functions involved, or isotropy, which distinguishes it from the multivariate Gneiting-type models presented in [4] and [12], respectively.

Funding information

C. D. was supported by the German Research Foundation (DFG) via RTG 1953: Statistical Modelling of Complex Systems and Processes – Advanced Nonparametric Approaches.

Competing interests

The authors declare that no competing interests arose during the preparation or publication of this article.

References

[1] Allard, D., Emery, X., Lacaux, C. and Lantuéjoul, C. (2020). Simulating space–time random fields with nonseparable Gneiting-type covariance functions. Statist. Comput. 30, 1479–1495.
[2] Berg, C., Christensen, J. P. R. and Ressel, P. (1984). Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions. Springer, New York.
[3] Berg, C., Koumandos, S. and Pedersen, H. L. (2021). Nielsen’s beta function and some infinitely divisible distributions. Math. Nachr. 294, 426–449.
[4] Bourotte, M., Allard, D. and Porcu, E. (2016). A flexible class of non-separable cross-covariance functions for multivariate space–time data. Spat. Statist. 18, 125–146.
[5] Chen, W. and Genton, M. G. (2019). Parametric variogram matrices incorporating both bounded and unbounded functions. Stoch. Environ. Res. Risk Assess. 33, 1669–1679.
[6] Cressie, N. and Wikle, C. K. (1998). The variance-based cross-variogram: you can add apples and oranges. Math. Geol. 30, 789–799.
[7] Du, J. and Ma, C. (2012). Variogram matrix functions for vector random fields with second-order increments. Math. Geosci. 44, 411–425.
[8] Genton, M. G., Padoan, S. A. and Sang, H. (2015). Multivariate max-stable spatial processes. Biometrika 102, 215–230.
[9] Gesztesy, F. and Pang, M. (2016). On (conditional) positive semidefiniteness in a matrix-valued context. Available at arXiv:1602.00384.
[10] Gneiting, T. (2002). Nonseparable, stationary covariance functions for space–time data. J. Amer. Statist. Assoc. 97, 590–600.
[11] Gneiting, T., Sasvári, Z. and Schlather, M. (2001). Analogies and correspondences between variograms and covariance functions. Adv. Appl. Prob. 33, 617–630.
[12] Guella, J. C. (2020). On Gaussian kernels on Hilbert spaces and kernels on hyperbolic spaces. Available at arXiv:2007.14697.
[13] Ma, C. (2011). A class of variogram matrices for vector random fields in space and/or time. Math. Geosci. 43, 229–242.
[14] Ma, C. (2011). Vector random fields with second-order moments or second-order increments. Stoch. Anal. Appl. 29, 197–215.
[15] Matheron, G. (1972). Leçon sur les fonctions aléatoires d’ordre 2. Tech. Rep. C-53, MINES Paristech – Centre de Géosciences.
[16] Matheron, G. (1973). The intrinsic random functions and their applications. Adv. Appl. Prob. 5, 439–468.
[17] Menegatto, V. A. (2020). Positive definite functions on products of metric spaces via generalized Stieltjes functions. Proc. Amer. Math. Soc. 148, 4781–4795.
[18] Menegatto, V. A., Oliveira, C. P. and Porcu, E. (2020). Gneiting class, semi-metric spaces and isometric embeddings. Constr. Math. Anal. 3, 85–95.
[19] Myers, D. E. (1982). Matrix formulation of co-kriging. J. Int. Assoc. Math. Geol. 14, 249–257.
[20] Myers, D. E. (1991). Pseudo-cross variograms, positive-definiteness, and cokriging. Math. Geol. 23, 805–816.
[21] Oesting, M., Schlather, M. and Friederichs, P. (2017). Statistical post-processing of forecasts for extremes using bivariate Brown–Resnick processes with an application to wind gusts. Extremes 20, 309–332.
[22] Papritz, A., Künsch, H. R. and Webster, R. (1993). On the pseudo cross-variogram. Math. Geol. 25, 1015–1026.
[23] Porcu, E., Bevilacqua, M. and Genton, M. G. (2016). Spatio-temporal covariance and cross-covariance functions of the great circle distance on a sphere. J. Amer. Statist. Assoc. 111, 888–898.
[24] Porcu, E., Furrer, R. and Nychka, D. (2021). 30 years of space–time covariance functions. Wiley Interdiscip. Rev. Comput. Statist. 13, e1512.
[25] Qadir, G. and Sun, Y. (2022). Modeling and predicting spatio-temporal dynamics of ${\textbf{PM}}_{2.5}$ concentrations through time-evolving covariance models. Available at arXiv:2202.12121.
[26] Schilling, R. L., Song, R. and Vondracek, Z. (2012). Bernstein Functions, 2nd rev. and ext. edn. De Gruyter, Berlin.
[27] Schlather, M. (2010). Some covariance models based on normal scale mixtures. Bernoulli 16, 780–797.
[28] Schoenberg, I. J. (1938). Metric spaces and positive definite functions. Trans. Amer. Math. Soc. 44, 522–536.
[29] Ver Hoef, J. and Cressie, N. (1993). Multivariable spatial prediction. Math. Geol. 25, 219–240.
[30] Wackernagel, H. (2003). Multivariate Geostatistics: An Introduction with Applications, 3rd edn. Springer, Berlin.
[31] Xie, T. (1994). Positive definite matrix-valued functions and matrix variogram modeling. Doctoral thesis, The University of Arizona.
[32] Zastavnyi, V. P. and Porcu, E. (2011). Characterization theorems for the Gneiting class of space–time covariances. Bernoulli 17, 456–465.