
Costa’s concavity inequality for dependent variables based on the multivariate Gaussian copula

Published online by Cambridge University Press:  12 April 2023

Fatemeh Asgari*
Affiliation:
University of Isfahan
Mohammad Hossein Alamatsaz*
Affiliation:
University of Isfahan
*Postal address: Department of Statistics, Faculty of Mathematics and Statistics, University of Isfahan, Isfahan 81746-73441, Iran.

Abstract

An extension of Shannon’s entropy power inequality when one of the summands is Gaussian was provided by Costa in 1985, known as Costa’s concavity inequality. We consider the additive Gaussian noise channel with a more realistic assumption, i.e. the input and noise components are not independent and their dependence structure follows the well-known multivariate Gaussian copula. Two generalizations for the first- and second-order derivatives of the differential entropy of the output signal for dependent multivariate random variables are derived. It is shown that some previous results in the literature are particular versions of our results. Using these derivatives, concavity of the entropy power, under certain mild conditions, is proved. Finally, special one-dimensional versions of our general results are described which indeed reveal an extension of the one-dimensional case of Costa’s concavity inequality to the dependent case. An illustrative example is also presented.

© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $h(\mathbf{Y})=-\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}}$ denote the differential entropy of a random vector $\mathbf{Y}$ with probability density function (PDF) $f_{\mathbf{Y}}(\mathbf{y};\ t)$ depending on a real parameter t. The entropy power of an m-variate random vector $\mathbf{Y}$ is defined by

\begin{equation*} N(\mathbf{Y})=\frac{\mathrm{e}^{({2}/{m})h(\mathbf{Y})}}{2\pi \mathrm{e}},\end{equation*}

which was first introduced by Shannon [Reference Shannon13]. One of the most important inequalities in information theory is the entropy power inequality (EPI), which gives a lower bound for the differential entropy of the sum of the independent random vectors $\mathbf{X}$ and $\mathbf{Y}$ as $N(\mathbf{X}+\mathbf{Y})\geq N(\mathbf{X})+N(\mathbf{Y})$ . The first complete proof of the EPI was given in [Reference Stam15]; in its development, [Reference Stam15] proved an equality called de Bruijn’s identity. This identity links Fisher information with Shannon’s differential entropy (see [Reference Blachman5]). Consider the additive Gaussian noise channel model

(1) \begin{equation} \mathbf{Y}=\mathbf{X}+\mathbf{W}_{t},\end{equation}

in which the input signal $\mathbf{X}=(X_{1},\ldots,X_{m})^\top$ and the additive noise $\mathbf{W}_{t}=(W_{t,1},\ldots,W_{t,m})^\top$ are two m-variate random vectors and $\mathbf{W}_{t}$ is normally distributed with mean vector $\mathbf{0}$ and covariance matrix

(2) \begin{equation}\boldsymbol{\Sigma}_{\mathbf{W}_{t}} =\left( {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}t& \sigma_{12}&\ldots &\sigma_{1m}\\ \\[-9pt]\sigma_{21}&t&\ldots &\sigma_{2m}\\ \\[-9pt]\vdots & \vdots &\ddots &\vdots\\ \\[-9pt]\sigma_{m1}&\sigma_{m2}&\ldots &t\end{array}} \right),\end{equation}

where the $\sigma_{ij}$ , $i,j=1,2,\ldots ,m$ , are real numbers. De Bruijn’s identity, generalized by Costa [Reference Costa7] to multivariate random variables, is given by

(3) \begin{equation} \frac{\partial}{\partial t}h(\mathbf{Y})=\frac{1}{2}J(\mathbf{Y}),\end{equation}

in which $\mathbf{X}$ and $\mathbf{W}_{t}$ are independent random vectors and $J(\mathbf{Y})$ stands for the Fisher information of $f_{\mathbf{Y}}(\mathbf{y};\ t)$ , defined by

(4) \begin{align} J(\mathbf{Y}) & = \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\| \nabla \log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}, \nonumber \\[3pt] & = \int_{\Bbb{R}^{m}}{\frac{\| \nabla f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\,\mathrm{d}\mathbf{y}}.\end{align}
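For a quick sanity check of (3) and (4) in the simplest setting, take $m=1$, $X\sim N(0,1)$ independent of $W_{t}\sim N(0,t)$, so that $Y\sim N(0,1+t)$; the following symbolic sketch (an illustration under these simplifying assumptions, not part of the original analysis) confirms that $({\partial}/{\partial t})h(Y)=\frac{1}{2}J(Y)$.

```python
import sympy as sp

t, y = sp.symbols('t y', positive=True)
v = 1 + t                                        # Var(Y) for X ~ N(0,1) independent of W_t ~ N(0,t)
f = sp.exp(-y**2/(2*v))/sp.sqrt(2*sp.pi*v)       # PDF of Y ~ N(0, 1+t)

logf = sp.expand_log(sp.log(f), force=True)
h = -sp.integrate(f*logf, (y, -sp.oo, sp.oo))    # differential entropy h(Y)
J = sp.integrate(sp.diff(f, y)**2/f, (y, -sp.oo, sp.oo))   # Fisher information (4)

print(sp.simplify(sp.diff(h, t) - J/2))          # 0: de Bruijn's identity (3)
```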

There are several applications of the EPI, such as in bounding the capacity of certain kinds of channels and proving converses of channel or source coding theorems; see, e.g., [Reference Bergmans6, Reference Weingarten, Steinberg and Shamai18]. Considering the channel model (1), [Reference Costa7] presented an extension of the EPI for the case in which $\mathbf{W}_{t}$ is independent of $\mathbf{X}$ with $\Sigma_{\mathbf{W}_{t}}=t\mathbf{I}_{m}$ , where $\mathbf{I}_{m}$ is the $m\times m$ identity matrix. That is,

\begin{equation*} N(\mathbf{X}+\mathbf{W}_{t})\geq (1-t)N(\mathbf{X})+tN(\mathbf{X}+\mathbf{W}_{1}),\end{equation*}

or, equivalently, $N(\mathbf{X}+\mathbf{W}_{t})$ is concave in t, i.e.

(5) \begin{equation} \frac{\partial^{2}}{\partial t^{2}}N(\mathbf{X}+\mathbf{W}_{t})\leq 0.\end{equation}

Later, [Reference Dembo8] provided another simple proof of Costa’s concavity inequality (5) via Stam’s Fisher information inequality [Reference Stam15], given by

\begin{equation*} \frac{1}{J(X+W)}\geq \frac{1}{J(X)}+\frac{1}{J(W)},\end{equation*}

where X and W are independent random variables. Also, [Reference Villani17] used some advanced methods to simplify Costa’s proof of the inequality (5).
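As a numerical illustration of (5) in one dimension (a sketch with an arbitrarily chosen input law, not drawn from the cited works): let X be the Gaussian mixture $\frac{1}{2}N({-}3,1)+\frac{1}{2}N(3,1)$, independent of $W_{t}\sim N(0,t)$, so that $f_{Y}$ is available in closed form; a finite-difference check then shows that $N(X+W_{t})$ is concave in t.

```python
import numpy as np

# X ~ 0.5 N(-3,1) + 0.5 N(3,1) independent of W_t ~ N(0,t),
# hence Y = X + W_t ~ 0.5 N(-3,1+t) + 0.5 N(3,1+t).
def h(t, lo=-40.0, hi=40.0, n=400001):
    y = np.linspace(lo, hi, n)
    s2 = 1.0 + t
    f = 0.5*(np.exp(-(y-3)**2/(2*s2)) + np.exp(-(y+3)**2/(2*s2)))/np.sqrt(2*np.pi*s2)
    return -np.sum(f*np.log(f + 1e-300))*(y[1] - y[0])   # differential entropy of Y

def N(t):                                                # entropy power, m = 1
    return np.exp(2.0*h(t))/(2.0*np.pi*np.e)

d = 0.25
for t in np.arange(0.5, 5.0, 0.5):
    print(t, (N(t + d) - 2*N(t) + N(t - d))/d**2)        # <= 0, in line with (5)
```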

As mentioned before, in all of the above results the assumption of independence between the input signal $\mathbf{X}$ and the additive noise $\mathbf{W}_{t}$ has been required. However, there are several real situations, such as in radar and sonar systems, in which the noise is highly dependent on the transmitted signal [Reference Kay11]. It was illustrated in [Reference Takano, Watanabe, Fukushima, Prohorov and Shiryaev16] that, under some assumptions, Shannon’s EPI can hold for weakly dependent random variables; [Reference Asgari and Alamatsaz3] extended the EPI to dependent random variables with arbitrary distributions; and [Reference Johnson10] provided certain conditions under which the conditional EPI can hold for dependent summands as well.

One of the best methods for describing the dependency structure among random variables is by copula functions. Copula theory was first introduced in [Reference Sklar14] in order to establish the connection between a joint PDF and its marginals. In [Reference Asgari, Alamatsaz and Khoolenjani4], the authors extended two inequalities based on the Fisher information when the input signal and noise components are dependent and their dependence structure is modeled by several well-known copulas. There are several families of copulas with different dependence structures. The Gaussian copula is one of the most widely used, as it can describe different levels of dependence between the marginal components. In the present paper, by considering the additive Gaussian noise channel model (1) where the input signal $\mathbf{X}$ and noise $\mathbf{W}_{t}$ are dependent random vectors obeying the multivariate Gaussian copula, first, an extension of de Bruijn’s identity (3) is derived, and then Costa’s concavity inequality (5) is proved, under some mild conditions.

The rest of the paper is organized as follows. In Section 2 we recall the concept of copula theory and the basic definition of the multivariate Gaussian copula function, along with one of its particular cases. In Section 3 we provide a generalization of the first-order derivatives of the differential entropy and the Fisher information, provided that the input signal and noise components are dependent variables. Based on these derivatives, Costa’s concavity inequality is then extended for the case in which the random vector $\mathbf{X}$ is composed of independent coordinates. Finally, we illustrate the one-dimensional versions of our results in Section 4.

Let us first establish the fundamental definitions and notation used in this paper. Let $\phi(\mathbf{y})$ and $\psi(\mathbf{y})$ be twice continuously differentiable functions on $\Bbb{R}^{m}$ , and V be any closed and simply connected m-dimensional region in $\Bbb{R}^{m}$ bounded by a piecewise smooth, closed, and oriented surface S. We recall Green’s identity [Reference Amazigo and Rubenfeld1], which is stated as

(6) \begin{align} \int_{V}{\phi\Delta\psi \,\mathrm{d} V} = \int_{S}{\phi\nabla\psi .\mathbf{n}_{S}\,\mathrm{d} S} - \int_{V}{\nabla\phi .\nabla\psi \,\mathrm{d} V},\end{align}

in which $\nabla\phi$ and $\nabla\psi$ are the gradients of $\phi$ and $\psi$ , respectively, $\mathbf{n}_{S}$ denotes the unit vector normal to the surface S, and $\nabla\psi .\mathbf{n}_{S}$ is the inner product of the two vectors. Now, the m-dimensional Stokes’ theorem is recalled: it states that if $\mathbf{F}\colon\Bbb{R}^{m}\rightarrow\Bbb{R}^{m}$ is a vector field over $\Bbb{R}^{m}$ , then

(7) \begin{align} \int_{V}{\nabla .\mathbf{F}\,\mathrm{d} V} = \int_{\partial V}{\mathbf{F}.\mathbf{n}_{S}\,\mathrm{d} S},\end{align}

where $\partial V=S$ is the boundary of V.

We denote the PDF and cumulative distribution function (CDF) of a random variable X by $f_{X}(x)$ and $F_{X}(x)$ , respectively.

2. Copula background

Copula theory is popular in multivariate distribution analysis as copulas allow easy modeling of the distribution of a random vector by its marginals. A copula is a multivariate CDF with standard uniform marginal distributions which couples univariate distribution functions to generate a multivariate CDF and indicates the dependency structure of the random variables. Copulas are important tools in the study of dependency between variables since they allow us to separate the effect of dependency from the effects of the marginal distributions [Reference Joe9]. In recent years, there has been a revival of copulas in applications where the matter of dependency between random variables is of great importance [Reference Arias-Nicolás, Fernández-Ponce, Luque-Calvo and Suárez-Llorens2].

The fundamental theorem for copulas was introduced by Sklar [Reference Sklar14] and illustrates the role that copulas play in the relationship between multivariate CDFs and their univariate marginals. In an n-dimensional multivariate case, Sklar’s theorem states that if $F_{T_1,T_2,\ldots,T_n}$ is an n-dimensional CDF with marginals $F_{T_1},F_{T_2},\ldots,F_{T_n}$ , then there exists an n-copula $C\colon I^{n}\longrightarrow I$ such that

(8) \begin{equation}F_{T_1,T_2,\ldots,T_n}(t_{1},t_{2},\ldots ,t_{n})=C(F_{T_1}(t_{1}),F_{T_2}(t_{2}),\ldots ,F_{T_n}(t_{n})),\end{equation}

where $I=[0,1]$. If $F_{T_1},F_{T_2},\ldots,F_{T_n}$ are continuous, the n-copula C is unique; otherwise, C is uniquely determined on $\operatorname{Ran}F_{T_1}\times\operatorname{Ran}F_{T_2}\times\cdots\times\operatorname{Ran}F_{T_n}$, where $\operatorname{Ran}F$ denotes the range of F. Conversely, if C is an n-copula and $F_{T_1},F_{T_2},\ldots,F_{T_n}$ are univariate distribution functions, then $F_{T_1,T_2,\ldots,T_n}$ defined by (8) is a joint CDF with marginals $F_{T_1},F_{T_2},\ldots,F_{T_n}$.

For any n-copula function C, there exists a corresponding copula density function c:

(9) \begin{equation} c(u_{1},u_{2},\ldots ,u_{n})=\frac{\partial^{n}}{\partial u_{1}\partial u_{2}\cdots\partial u_{n}}C(u_{1},u_{2},\ldots ,u_{n}).\end{equation}

Therefore, if $f_{T_1,T_2,\ldots,T_n}$, $f_{T_1},f_{T_2},\ldots,f_{T_n}$, and c are the density functions of $F_{T_1,T_2,\ldots,T_n}$, $F_{T_1},F_{T_2},\ldots,F_{T_n}$, and C, respectively, the relation in (8) yields

(10) \begin{equation} f_{T_1,T_2,\ldots,T_n}(t_{1},t_{2},\ldots ,t_{n})=c(u_{1},u_{2},\ldots ,u_{n})f_{T_1}(t_{1})f_{T_2}(t_{2})\cdots f_{T_n}(t_{n}),\end{equation}

where $u_{1},u_{2},\ldots ,u_{n}$ are related to $t_{1},t_{2},\ldots ,t_{n}$ through the marginal distribution functions $u_{1}=F_{T_1}(t_{1})$ , $u_{2}=F_{T_2}(t_{2})$ , …, $u_{n}=F_{T_n}(t_{n})$ .
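To make the relation (10) concrete, the following sketch (an illustration; the Exp(1) marginals and $\rho=0.6$ are arbitrary choices of ours) assembles a bivariate density of the form (10) from the Gaussian copula recalled in Definition 1 below, and checks numerically that it integrates to one.

```python
import numpy as np
from scipy import stats

rho = 0.6
biv = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def copula_density(u1, u2):
    # c(u1,u2) = phi_Sigma(z1,z2)/(phi(z1)*phi(z2)) with z_i = Phi^{-1}(u_i)
    z1, z2 = stats.norm.ppf(u1), stats.norm.ppf(u2)
    return biv.pdf(np.stack([z1, z2], axis=-1))/(stats.norm.pdf(z1)*stats.norm.pdf(z2))

def joint_pdf(t1, t2):
    # relation (10) with Exp(1) marginals: F(t) = 1 - exp(-t), f(t) = exp(-t)
    return copula_density(1 - np.exp(-t1), 1 - np.exp(-t2))*np.exp(-t1 - t2)

g = np.linspace(1e-3, 25.0, 800)
T1, T2 = np.meshgrid(g, g)
dx = g[1] - g[0]
print(joint_pdf(T1, T2).sum()*dx*dx)   # ~ 1, up to discretization error
```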

Let us recall the definition of one of the most popular copulas, the multivariate Gaussian copula, which we consider here.

Definition 1. The n-dimensional Gaussian copula with covariance matrix $\boldsymbol{\Sigma}$ is defined by

(11) \begin{equation} C_{\boldsymbol{\Sigma}}(u_{1},u_{2},\ldots ,u_{n}) = \Phi_{\boldsymbol{\Sigma}}(\Phi^{-1}(u_{1}),\Phi^{-1}(u_{2}),\ldots ,\Phi^{-1}(u_{n})), \end{equation}

where $\Phi_{\boldsymbol{\Sigma}}$ denotes the CDF of the n-variate normal random vector with mean vector $\mathbf{0}$ and covariance matrix $\boldsymbol{\Sigma}$ , $\Phi^{-1}$ is the inverse of the univariate standard Gaussian CDF, and $0\leq u_{1},u_{2},\ldots ,u_{n}\leq 1$ .

In this paper we consider the special version of the n-dimensional Gaussian copula with

\begin{equation*} \boldsymbol{\Sigma} =\left( {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 1& \rho&\ldots &\rho\\ \\[-9pt] \rho &1&\ldots &\rho\\ \\[-9pt] \vdots & \vdots &\ddots &\vdots\\ \\[-9pt] \rho &\rho&\ldots &1 \end{array}} \right) =(1-\rho)\mathbf{I}_{n}+\rho \mathbf{1}_{n}\mathbf{1}^\top_{n},\end{equation*}

where $-1/(n-1)<\rho<1$ and $\mathbf{1}_{n}=(1,1,\ldots,1)^\top$ is the n-dimensional vector of ones. Thus, from (9), the n-dimensional Gaussian copula density is given by

(12) \begin{align} c_{\boldsymbol{\Sigma}}(u_{1},u_{2},\ldots ,u_{n}) & = \prod_{i = 1}^n\frac{\partial}{\partial u_{i}}\Phi^{-1}(u_{i}) \phi_{\boldsymbol{\Sigma}}(\Phi^{-1}(u_{1}),\Phi^{-1}(u_{2}),\ldots ,\Phi^{-1}(u_{n})) \nonumber \\ & = (2\pi)^{{n}/{2}}\exp\Bigg[{\frac{1}{2}\sum_{i=1}^{n}z_{i}^{2}}\Bigg] \phi_{\boldsymbol{\Sigma}}(z_{1},z_{2},\ldots ,z_{n}),\end{align}

where $\phi_{\boldsymbol{\Sigma}}$ is the PDF of the n-variate Gaussian distribution, and $z_{i}=\Phi^{-1}(u_{i})$ , $i=1,2,\ldots,n$ . Since

\begin{equation*} \vert \boldsymbol{\Sigma}\vert = (1+(n-1)\rho)(1-\rho)^{n-1}, \qquad \boldsymbol{\Sigma}^{-1} = \frac{1}{1-\rho} \bigg(\mathbf{I}_{n}-\frac{\rho}{1+(n-1)\rho}\mathbf{1}_{n}\mathbf{1}^\top_{n}\bigg),\end{equation*}

we have

(13) \begin{align} \phi_{\boldsymbol{\Sigma}}(z_{1},z_{2},\ldots ,z_{n}) & = \frac{(2\pi)^{-{n}/{2}}}{\sqrt{(1+(n-1)\rho)(1-\rho)^{n-1}}\,} \nonumber \\ & \quad \times \exp\Bigg\{\frac{-1}{2(1-\rho)}\sum_{i=1}^{n}z_{i}^{2} + \frac{\rho}{2(1+(n-1)\rho)(1-\rho)}\Bigg(\sum_{i=1}^{n}z_{i}\Bigg)^{2}\Bigg\}.\end{align}

Now, due to the fact that $\big(\sum_{i=1}^{n}z_{i}\big)^{2}=\sum_{i=1}^{n}z_{i}^{2}+\sum_{i\neq j}z_{i}z_{j}$ , substituting (13) into (12) yields

(14) \begin{align} c_{\boldsymbol{\Sigma}}(u_{1},u_{2},\ldots ,u_{n}) & = \alpha(\rho,n)\nonumber\\[3pt] &\quad \times\exp\Bigg\{\beta(\rho,n)\Bigg(\sum_{i=1}^{n}[\Phi^{-1}(u_{i})]^{2} - \frac{1}{(n-1)\rho}\sum_{i\neq j}\Phi^{-1}(u_{i})\Phi^{-1}(u_{j})\Bigg)\Bigg\},\end{align}

where

\begin{equation*} \alpha(\rho,n) = \frac{1}{\sqrt{(1+(n-1)\rho)(1-\rho)^{n-1}}\,}, \qquad \beta(\rho,n) = \frac{-(n-1)\rho^{2}}{2(1-\rho)(1+(n-1)\rho)}.\end{equation*}
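As a sanity check on the algebra leading to (14), one can compare (14) with the direct form obtained from (12) and (13) at an arbitrary point; with $n=2$, (14) reduces to the bivariate density (15) below. A minimal numerical sketch (ours; the values of n, $\rho$, and the point are arbitrary):

```python
import numpy as np
from scipy import stats

n, rho = 4, 0.3
rng = np.random.default_rng(1)
u = rng.uniform(0.05, 0.95, size=n)
z = stats.norm.ppf(u)                                  # z_i = Phi^{-1}(u_i)

# direct form (12)-(13)
Sigma = (1 - rho)*np.eye(n) + rho*np.ones((n, n))
direct = (2*np.pi)**(n/2)*np.exp(0.5*np.sum(z**2)) \
         *stats.multivariate_normal(np.zeros(n), Sigma).pdf(z)

# simplified form (14)
alpha = 1/np.sqrt((1 + (n-1)*rho)*(1 - rho)**(n-1))
beta = -(n-1)*rho**2/(2*(1 - rho)*(1 + (n-1)*rho))
cross = np.sum(z)**2 - np.sum(z**2)                    # sum over i != j of z_i z_j
simplified = alpha*np.exp(beta*(np.sum(z**2) - cross/((n-1)*rho)))

print(direct, simplified)                              # the two values agree
```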

Remark 1. Note that setting $\boldsymbol{\Sigma}=\mathbf{I}_{n}$ , i.e. $\rho=0$ , in (11) leads to the independent copula $C_{\mathbf{I}_{n}}(u_{1},u_{2},\ldots ,u_{n}) = u_{1}u_{2}\cdots u_{n}$ , which is equivalent to the random variables $T_{1},T_{2},\ldots,T_{n}$ being independent.

A particular case of the n-dimensional Gaussian copula is the bivariate Gaussian copula. If we put $n=2$ and

\begin{equation*} \boldsymbol{\Sigma} =\left( {\begin{array}{c@{\quad}c} 1& \rho\\ \\[-9pt] \rho &1\\ \end{array}} \right),\end{equation*}

with $-1<\rho<1$ , then the bivariate Gaussian copula is defined by

\begin{equation*} C_{\rho}(u_{1},u_{2}) = \Phi_{2}(\Phi^{-1}(u_{1}),\Phi^{-1}(u_{2});\rho),\end{equation*}

where $\Phi_{2}$ is the bivariate standard Gaussian CDF with copula parameter $\rho\in({-}1,1)$. From (9), the corresponding Gaussian copula density is obtained as

(15) \begin{equation} c_{\rho}(u_{1},u_{2}) = \frac{1}{\sqrt{1-\rho ^{2}}\,}\exp\bigg\{ {-}\frac{\rho^{2}}{2(1-\rho^{2})}\bigg([\Phi^{-1}(u_{1})]^{2} - \frac{2}{\rho}\Phi^{-1}(u_{1})\Phi^{-1}(u_{2})+[\Phi^{-1}(u_{2})]^{2}\bigg)\bigg\}.\end{equation}

3. The general case

Consider the additive Gaussian noise channel model (1). Let $\mathbf{X}$ and $\mathbf{W}_{t}$ be two dependent random vectors with a differentiable joint PDF $f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{w}_{t})$ . Then, for the PDF of $\mathbf{Y}$ , we obtain

(16) \begin{equation} f_{\mathbf{Y}}(\mathbf{y};\ t) = \int_{\Bbb{R}^{m}}{f_{\mathbf{X}}(\mathbf{x}) f_{\mathbf{Y}\mid \mathbf{X}}(\mathbf{y}\mid \mathbf{x};\ t)\,\mathrm{d}\mathbf{x}} = \int_{\Bbb{R}^{m}}{f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}},\end{equation}

where

\begin{equation*} f_{\mathbf{Y}\mid \mathbf{X}}(\mathbf{y} \mid \mathbf{x};\ t) = \frac{f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})}{f_{\mathbf{X}}(\mathbf{x})}.\end{equation*}

First, recall that assuming $\mathbf{X}$ and $\mathbf{W}_{t}$ are independent random vectors and $\Sigma_{\mathbf{W}_{t}}=t\mathbf{I}_{m}$ , [Reference Costa7, Reference Villani17] used the heat equation given by

\begin{equation*} \frac{\partial}{\partial t}f_{\mathbf{Y}}(\mathbf{y};\ t) = \frac{1}{2}\sum_{j=1}^{m}\frac{\partial^{2}}{\partial y_{j}^{2}}f_{\mathbf{Y}}(\mathbf{y};\ t) \end{equation*}

in their proofs. We now need to generalize this heat equation to the case of dependent multivariate random vectors, as below.

Lemma 1. Suppose that $\mathbf{W}_{t}$ in channel model (1) has the covariance matrix (2), and let $\mathbf{X}$ and $\mathbf{W}_{t}$ be two dependent random vectors whose dependence structure is modeled by the multivariate Gaussian copula (14). Then, we have

(17) \begin{align} & f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x}) \nonumber \\ & = \frac{\alpha(\rho, 2m)}{(2\pi t)^{{m}/{2}}} \exp\Bigg\{\gamma(\rho, m)\frac{\|\mathbf{y}-\mathbf{x}\|^{2}}{t} + \beta(\rho, 2m)\Bigg[\sum_{i=1}^{m}[\Phi^{-1}(F_{X_{i}}(x_{i}))]^{2} \nonumber \\ & \qquad \qquad \qquad \qquad {-} \frac{2}{(2m-1)\rho}\Bigg(\sum_{i<j}\Phi^{-1}(F_{X_{i}}(x_{i})) \Phi^{-1}(F_{X_{j}}(x_{j})) + \sum_{k<l}\frac{(y_{k}-x_k)(y_{l}-x_l)}{t} \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \sum_{i,k}\Phi^{-1}(F_{X_{i}}(x_{i})) \frac{(y_{k}-x_k)}{\sqrt{t}}\Bigg)\Bigg]\Bigg\} \prod_{i = 1}^m f_{X_{i}}(x_{i}), \end{align}

where

\begin{equation*} \gamma(\rho, m)=\frac{2(1-m)\rho-1}{2(1-\rho)(1+(2m-1)\rho)}. \end{equation*}

Proof. Using (10) and (14), by setting $\mathbf{T}=(\mathbf{X},\mathbf{W}_{t})$ and $n=2m$ , we have

\begin{equation*} f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{w}) = c_{\boldsymbol{\Sigma}}(F_{X_{1}}(x_{1}),\ldots ,F_{X_{m}}(x_{m}), F_{W_{t,1}}(w_{1}),\ldots ,F_{W_{t,m}}(w_{m})) \prod_{i = 1}^m f_{X_{i}}(x_{i}) \prod_{k = 1}^m f_{W_{t,k}}(w_{k}), \end{equation*}

where

\begin{align*} & c_{\boldsymbol{\Sigma}}(F_{X_{1}}(x_{1}),\ldots ,F_{X_{m}}(x_{m}),F_{W_{t,1}}(w_{1}),\ldots, F_{W_{t,m}}(w_{m})) \\ &\quad = \alpha(\rho, 2m)\exp\Bigg\{\beta(\rho, 2m)\Bigg[ \sum_{i=1}^{m}z_{x_{i}}^{2} + \sum_{k=1}^{m}z_{w_{k}}^{2} - \frac{2}{(2m-1)\rho} \\ &\qquad\qquad\qquad\qquad \Bigg(\sum_{i<j}z_{x_{i}}z_{x_{j}} + \sum_{k<l}z_{w_{k}}z_{w_{l}} + \sum_{i,k}z_{x_{i}}z_{w_{k}}\Bigg)\Bigg]\Bigg\}, \end{align*}

in which

\begin{equation*} z_{x_{i}} = \Phi^{-1}(F_{X_{i}}(x_{i})), \qquad z_{w_{k}} = \Phi^{-1}(F_{W_{t,k}}(w_{k})) = \Phi^{-1}\bigg(\Phi\bigg(\frac{w_{k}}{\sqrt{t}\,}\bigg)\bigg) = \frac{w_{k}}{\sqrt{t}\,}, \end{equation*}

because $W_{t,k}$ , $k=1,2,\ldots,m$ , are normally distributed with zero mean and variance t. Thus,

\begin{align*} & f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{w}) \\ & = \frac{\alpha(\rho, 2m)}{(2\pi t)^{\frac{m}{2}}}\exp\Bigg\{\beta(\rho, 2m) \Bigg[\sum_{i=1}^{m}[\Phi^{-1}(F_{X_{i}}(x_{i}))]^{2} + \frac{\| \mathbf{w}\|^{2}}{t} \\ & \qquad \qquad \qquad \qquad -\frac{2}{(2m-1)\rho} \Bigg(\sum_{i<j}\Phi^{-1}(F_{X_{i}}(x_{i}))\Phi^{-1}(F_{X_{j}}(x_{j})) + \sum_{k<l}\frac{w_{k}w_{l}}{t} \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \sum_{i,k}\Phi^{-1}(F_{X_{i}}(x_{i})) \frac{w_{k}}{\sqrt{t}\,}\Bigg)\Bigg] - \frac{\|\mathbf{w}\|^{2}}{2t}\Bigg\} \prod_{i = 1}^m f_{X_{i}}(x_{i}). \end{align*}

Collecting the coefficient of $\|\mathbf{w}\|^{2}/t$ and noting that $\beta(\rho,2m)-\frac{1}{2}=\gamma(\rho,m)$, this expression can be rewritten as (17).

Lemma 2. Based on the same assumptions as in Lemma 1, we have

(18) \begin{equation} \frac{\partial}{\partial t}f_{\mathbf{Y}}(\mathbf{y};\ t) = \delta(\rho ,m)\Delta f_{\mathbf{Y}}(\mathbf{y};\ t)-\lambda(\rho ,m)\nabla. q(\mathbf{y};\ t), \end{equation}

in which $q(\mathbf{y};\ t) = (q_{1}(\mathbf{y};\ t),q_{2}(\mathbf{y};\ t),\ldots,q_{m}(\mathbf{y};\ t))$ and

\begin{align*} \delta(\rho ,m) & = \frac{[(1-\rho)(1+(2m-1)\rho)]}{2(1-2(1-m)\rho)}, \qquad \lambda(\rho ,m) = \frac{\rho}{2\sqrt{t}(1-2(1-m)\rho)}, \\ \Delta f_{\mathbf{Y}}(\mathbf{y};\ t) & = \sum_{j=1}^{m}\frac{\partial^{2}}{\partial y_{j}^{2}}f_{\mathbf{Y}}(\mathbf{y};\ t), \\ q_{j}(\mathbf{y};\ t) & = \int_{\Bbb{R}^{m}}{\Bigg[\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i})) + \frac{1}{\sqrt{t}\,}\sum_{k\neq j}(y_{k}-x_{k})\Bigg] f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}} \\ & = p_{j}(\mathbf{y};\ t)f_{\mathbf{Y}}(\mathbf{y};\ t),\qquad j=1,2,\ldots ,m, \end{align*}

where $p_{j}(\mathbf{y};\ t) = {{{{\bf E}}}}_{\mathbf{X}\mid \mathbf{Y}}\big[ \sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(X_{i})) + ({1}/{\sqrt{t}})\sum_{k\neq j}(Y_{k}-X_{k}) \mid \mathbf{Y} = \mathbf{y}\big]$ .

Proof. According to Lemma 1, differentiating (17) with respect to t and $y_{j}$ yields

(19) \begin{align} \frac{\partial}{\partial t}f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x}) & = \Bigg[\frac{-\gamma(\rho, m)}{t^{2}}\|\mathbf{y}-\mathbf{x}\|^{2} + \frac{\beta(\rho, 2m)}{(2m-1)\rho t^{2}}\Bigg(\sum_{k\neq l}(y_{k}-x_{k})(y_{l}-x_{l}) \nonumber \\ & \qquad + \sqrt{t}\sum_{i, k}\Phi^{-1}(F_{X_{i}}(x_{i}))(y_{k}-x_{k})\Bigg) - \frac{m}{2t}\Bigg]f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x}) \end{align}
(20) \begin{align} \frac{\partial}{\partial y_{j}}f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x}) & = \Bigg[\frac{2\gamma(\rho, m)}{t}(y_{j}-x_{j}) - \frac{2\beta(\rho, 2m)}{(2m-1)\rho t}\Bigg(\sum_{k\neq j}(y_{k}-x_{k}) \nonumber \\ & \qquad + \sqrt{t}\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i}))\Bigg)\Bigg] f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x}),\quad j=1,2,\ldots ,m, \end{align}

respectively. Thus, for the second-order derivative of (17) with respect to $y_{j}$ , we obtain

(21) \begin{align} \frac{\partial^{2}}{\partial y_{j}^{2}}f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x}) & = \Bigg[\frac{2\gamma(\rho, m)}{t} + \Bigg[\frac{2\gamma(\rho, m)}{t}(y_{j}-x_{j}) - \frac{2\beta(\rho, 2m)}{(2m-1)\rho t}\Bigg(\sum_{k\neq j}(y_{k}-x_{k}) \nonumber \\ & \qquad + \sqrt{t}\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i}))\Bigg)\Bigg]^{2}\Bigg] f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x}). \end{align}

Now, according to (16) and (19), we have

(22) \begin{align} & \frac{\partial}{\partial t}f_{\mathbf{Y}}(\mathbf{y};\ t) = \int_{\Bbb{R}^{m}}{\frac{\partial}{\partial t}f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y} - \mathbf{x})\,\mathrm{d}\mathbf{x}} \nonumber \\[2pt] & = \frac{-m}{2t}f_{\mathbf{Y}}(\mathbf{y};\ t) - \frac{\gamma(\rho ,m)}{t^{2}}\int_{\Bbb{R}^{m}}{\|\mathbf{y} - \mathbf{x}\|^{2}f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y} - \mathbf{x}) \, \mathrm{d}\mathbf{x}} \nonumber \\[2pt] & \quad + \frac{\beta(\rho ,2m)}{(2m-1)\rho t^{2}}\int_{\Bbb{R}^{m}}\Bigg( \sum_{k\neq l}(y_{k}-x_{k})(y_{l}-x_{l}) \nonumber \\[2pt] & \qquad \qquad \qquad \qquad \qquad + \sqrt{t}\sum_{i, k}\Phi^{-1}(F_{X_{i}}(x_{i}))(y_{k}-x_{k})\Bigg) f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}, \end{align}
(23) \begin{align} & \frac{\partial^{2}}{\partial y_{j}^{2}}f_{\mathbf{Y}}(\mathbf{y};\ t) = \int_{\Bbb{R}^{m}}{\frac{\partial^{2}}{\partial y_{j}^{2}} f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}} \nonumber \\[2pt] & = \frac{2\gamma(\rho, m)}{t}f_{\mathbf{Y}}(\mathbf{y};\ t) + \frac{4\gamma^{2}(\rho, m)}{t^2}\int_{\Bbb{R}^{m}}{(y_{j}-x_{j})^{2} f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}} \nonumber \\[2pt] & \quad + \frac{4\beta^{2}(\rho, 2m)}{(2m-1)^{2}\rho^2 t^2} \int_{\Bbb{R}^{m}}{\Bigg(\sum_{k\neq j}(y_{k}-x_{k}) + \sqrt{t}\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i}))\Bigg)^2 f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}} \nonumber \\[2pt] & \quad - \frac{8\gamma(\rho, m)\beta(\rho, 2m)}{(2m-1)\rho t^{2}} \int_{\Bbb{R}^{m}}(y_{j}-x_{j})\Bigg(\sum_{k\neq j}(y_{k}-x_{k}) \nonumber \\[2pt] & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \sqrt{t}\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i}))\Bigg) f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}. \end{align}

Thus, due to (19), by combining (22) with (23), we obtain

\begin{align*} \frac{\partial}{\partial t}f_{\mathbf{Y}}(\mathbf{y};\ t) & = \frac{-1}{4\gamma(\rho ,m)}\sum_{j=1}^{m}\frac{\partial^{2}}{\partial y_{j}^{2}} f_{\mathbf{Y}}(\mathbf{y};\ t) \\ & \quad - \frac{\beta(\rho ,2m)}{2\gamma(\rho ,m)(2m-1)\rho \sqrt{t}\,} \sum_{j=1}^{m}\int_{\Bbb{R}^{m}}\Bigg[\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i})) \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \frac{1}{\sqrt{t}\,}\sum_{k\neq j}(y_{k}-x_{k})\Bigg] \frac{\partial}{\partial y_{j}}f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y} - \mathbf{x})\,\mathrm{d}\mathbf{x} , \end{align*}

where

\begin{equation*} \frac{-1}{4\gamma(\rho ,m)}=\delta(\rho, m), \qquad \frac{\beta(\rho ,2m)}{2\gamma(\rho ,m)(2m-1)\rho \sqrt{t}\,} = \lambda(\rho ,m). \end{equation*}

Therefore, using (20), the proof is complete.

Now, we need to derive the first- and second-order derivatives of the differential entropy $h(\mathbf{Y})$ that are key instruments in establishing our main result.

Theorem 1. Under the assumptions of Lemma 2, the first-order derivative of the entropy $h(\mathbf{Y})$ is given by

(24) \begin{equation} \frac{\partial}{\partial t}h(\mathbf{Y}) = \delta(\rho ,m)J(\mathbf{Y}) + A_{t}, \end{equation}

where

\begin{equation*} A_{t} = -\lambda(\rho ,m)\sum_{j=1}^{m}{{{{\bf E}}}}_{\mathbf{Y}} \bigg[p_{j}(\mathbf{Y};\ t)\frac{\partial}{\partial Y_{j}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\bigg]. \end{equation*}

Proof. Using (18), we obtain

(25) \begin{align} \frac{\partial}{\partial t}h(\mathbf{Y}) & = -\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\frac{\partial}{\partial t} \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} - \int_{\Bbb{R}^{m}}{\log f_{\mathbf{Y}}(\mathbf{y};\ t)\frac{\partial}{\partial t} f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}}, \nonumber \\ & = 0 - \delta(\rho ,m)\int_{\Bbb{R}^{m}}{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t) \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} + \lambda(\rho ,m)\int_{\Bbb{R}^{m}}{\nabla .q(\mathbf{y};\ t) \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}}. \end{align}

To apply Green’s identity (6) to the second term in (25), let $V_{r}$ be the closed m-dimensional ball of radius r centered at the origin, with boundary $S_{r}=\partial V_{r}$. Applying Green’s identity with $\phi(\mathbf{y})=\log f_{\mathbf{Y}}(\mathbf{y};\ t)$ and $\psi(\mathbf{y})=f_{\mathbf{Y}}(\mathbf{y};\ t)$, and then taking the limit on both sides as $r\rightarrow +\infty$, we obtain

(26) \begin{align} \int_{\Bbb{R}^{m}}{\nabla .(\nabla f_{\mathbf{Y}}(\mathbf{y};\ t)) \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\log f_{\mathbf{Y}}(\mathbf{y};\ t) \nabla f_{\mathbf{Y}}(\mathbf{y};\ t). \mathbf{n}_{S_{r}}(\mathbf{y}) \,\mathrm{d} S_{r}} \nonumber \\ & \quad - \int_{\Bbb{R}^{m}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}}, \nonumber \\ & = 0 - \int_{\Bbb{R}^{m}}{\frac{\| \nabla f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\,\mathrm{d}\mathbf{y}}, \end{align}

where $\mathbf{n}_{S_{r}}$ is the unit vector normal to the surface $S_{r}$. Consider the identity

(27) \begin{equation} \nabla .(\phi \mathbf{F})=\nabla \phi .\mathbf{F}+\phi\nabla .\mathbf{F}, \end{equation}

where $\mathbf{F}\colon\Bbb{R}^{m}\rightarrow \Bbb{R}^{m}$. We set $\mathbf{F}(\mathbf{y})=q(\mathbf{y};\ t)$ and $\phi(\mathbf{y})=\log f_{\mathbf{Y}}(\mathbf{y};\ t)$, and then, using Stokes’ theorem (7) and taking limits on both sides as $r\rightarrow +\infty$, we get

(28) \begin{align} \int_{\Bbb{R}^{m}}{\nabla .q(\mathbf{y};\ t)\log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\log f_{\mathbf{Y}}(\mathbf{y};\ t)\, q(\mathbf{y};\ t). \mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}} \nonumber \\ & \quad -\int_{\Bbb{R}^{m}}{q(\mathbf{y};\ t). \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}}, \nonumber \\ & = 0 - \sum_{j=1}^{m}{{{{\bf E}}}}_{\mathbf{Y}}\bigg[p_{j}(\mathbf{Y};\ t) \frac{\partial}{\partial Y_{j}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\bigg]. \end{align}

In Appendix A, the surface integrals in (26) and (28) over the surface $S_{r}$ are shown to vanish as r approaches $+\infty$ . Therefore, by substituting (26) and (28) into (25), the theorem is proved.

Remark 2. Note that, in Theorem 1, from (24) with $\rho=0$ , we obtain

\begin{equation*} \frac{\partial}{\partial t}h(\mathbf{Y}) = \frac{1}{2}\int_{\Bbb{R}^{m}}{\frac{\| \nabla f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\,\mathrm{d}\mathbf{y}}. \end{equation*}

That is, the first-order derivative of the entropy $h(\mathbf{Y})$ reduces to the case when $\mathbf{X}$ and $\mathbf{W}_{t}$ are independent random vectors with $\boldsymbol{\Sigma}_{\mathbf{W}_{t}}=t\mathbf{I}_{m}$ as in [Reference Costa7].

According to Theorem 1, to provide the second-order derivative of $h(\mathbf{Y})$ , it is sufficient to derive the first-order derivative of the Fisher information $J(\mathbf{Y})$ . First, we need the following lemma.

Lemma 3. Under the assumptions of Lemma 2, the following two equations hold:

(29) \begin{align} \frac{\partial}{\partial t}\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} & = 2\delta(\rho ,m)\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla\bigg(\frac{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg) \nonumber \\ & \quad - 2\lambda(\rho ,m)\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla\bigg(\frac{\nabla. q(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg), \end{align}

where

(30) \begin{align} \frac{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)} & = \Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t)+\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}, \end{align}
(31) \begin{align} \nabla\bigg(\frac{\nabla. q(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg) & = \frac{\nabla(\nabla .q(\mathbf{y};\ t))}{f_{\mathbf{Y}}(\mathbf{y};\ t)} - \frac{\nabla .q(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)} \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t), \end{align}

and

(32) \begin{equation} 2\nabla \log f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla(\Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t)) - \Delta\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} = -2\sum_{i,j}\bigg(\frac{\partial^{2}}{\partial y_{i}\partial y_{j}} \log f_{\mathbf{Y}}(\mathbf{y};\ t) \bigg)^{2}. \end{equation}

Proof. Simply, we know that

\begin{equation*} \frac{\partial}{\partial t}\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} = 2\sum_{j=1}^{m}\frac{\partial}{\partial y_{j}}\log f_{\mathbf{Y}}(\mathbf{y},t) \frac{\partial^2}{\partial y_{j}\partial t}\log f_{\mathbf{Y}}(\mathbf{y};\ t). \end{equation*}

Also, from (18), we can write

\begin{equation*} \frac{\partial^2}{\partial y_{j}\partial t}\log f_{\mathbf{Y}}(\mathbf{y};\ t) = \frac{\partial}{\partial y_{j}}\bigg(\frac{\delta(\rho ,m)\Delta f_{\mathbf{Y}}(\mathbf{y};\ t) - \lambda(\rho ,m)\sum_{j=1}^{m}\frac{\partial}{\partial y_{j}}q_{j}(\mathbf{y};\ t)} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg), \end{equation*}

which implies (29). To prove (30), we have

\begin{align*} \Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t) & = \sum_{j=1}^{m}\frac{\partial^{2}}{\partial y_{j}^2}\log f_{\mathbf{Y}}(\mathbf{y};\ t) \\ & = \sum_{j=1}^{m}\bigg[\frac{\frac{\partial^{2}}{\partial y_{j}^2}f_{\mathbf{Y}}(\mathbf{y};\ t)} {f_{\mathbf{Y}}(\mathbf{y};\ t)} - \bigg(\frac{\frac{\partial}{\partial y_{j}}f_{\mathbf{Y}}(\mathbf{y};\ t)} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg)^{2}\bigg] = \frac{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)} - \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}. \end{align*}

Also, since $\nabla. q(\mathbf{y};\ t)=\sum_{j=1}^{m}(\partial/\partial y_{j})q_{j}(\mathbf{y};\ t)$ , (31) is obtained. Now, to prove (32), we obtain

(33) \begin{align} \Delta\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} & = \sum_{i=1}^{m}\frac{\partial^{2}}{\partial y_{i}^{2}}\sum_{j=1}^{m} \bigg(\frac{\partial}{\partial y_{i}}\log f_{\mathbf{Y}}(\mathbf{y};\ t)\bigg)^{2} \nonumber \\ & = 2\sum_{i,j}\bigg(\frac{\partial^{2}}{\partial y_{i}\partial y_{j}} \log f_{\mathbf{Y}}(\mathbf{y};\ t) \bigg)^{2} + 2\sum_{i,j}\frac{\partial}{\partial y_{i}}\log f_{\mathbf{Y}}(\mathbf{y};\ t) \frac{\partial^{3}}{\partial y_{i}\partial y_{j}^2}\log f_{\mathbf{Y}}(\mathbf{y};\ t), \end{align}

where

\begin{align*} 2\sum_{i,j}\frac{\partial}{\partial y_{i}}\log f_{\mathbf{Y}}(\mathbf{y};\ t) \frac{\partial^{3}}{\partial y_{i}\partial y_{j}^2}\log f_{\mathbf{Y}}(\mathbf{y};\ t) & = 2\sum_{i=1}^{m}\frac{\partial}{\partial y_{i}}\log f_{\mathbf{Y}}(\mathbf{y};\ t) \frac{\partial}{\partial y_{i}}\sum_{j=1}^{m}\frac{\partial^{2}}{\partial y_{j}^{2}} \log f_{\mathbf{Y}}(\mathbf{y};\ t) \\ & = 2\nabla \log f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla(\Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t)); \end{align*}

together with (33), this completes the proof.

Theorem 2. Under the conditions of Lemma 2, the first-order derivative of the Fisher information $J(\mathbf{Y})$ is as follows:

(34) \begin{equation} \frac{\partial}{\partial t}J(\mathbf{Y}) = -2\delta(\rho ,m)\sum_{i,j}{{{{\bf E}}}}_{\mathbf{Y}} \bigg[\frac{\partial^{2}}{\partial Y_{i}\partial Y_{j}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t) \bigg]^{2} + 2\lambda(\rho ,m)D_{t}, \end{equation}

where

\begin{align*} D_{t} & = \sum_{j=1}^{m}{{{{\bf E}}}}_{\mathbf{Y}} \bigg[\frac{\partial}{\partial Y_{j}}p_{j}(\mathbf{Y};\ t)\Delta\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\bigg] \\ & \quad + \sum_{i\neq j}{{{{\bf E}}}}_{\mathbf{Y}} \bigg[p_{j}(\mathbf{Y};\ t)\frac{\partial}{\partial Y_{j}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t) \bigg(\frac{\partial^2}{\partial Y_{i}^{2}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t) - \frac{\partial^2}{\partial Y_{i}\partial Y_{j}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\bigg)\bigg]. \end{align*}

Proof. According to the definition (4) of the Fisher information, we know that

(35) \begin{equation} \frac{\partial}{\partial t}J(\mathbf{Y}) = \int_{\Bbb{R}^{m}}{\frac{\partial}{\partial t}f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} + \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\frac{\partial}{\partial t} \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{equation}

Based on Lemma 2, the first term in (35) is expressed as

(36) \begin{align} \int_{\Bbb{R}^{m}}{\frac{\partial}{\partial t}f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} & = \delta(\rho ,m)\int_{\Bbb{R}^{m}}{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} \nonumber \\ & \quad - \lambda(\rho ,m)\sum_{j=1}^{m}\int_{\Bbb{R}^{m}}{\frac{\partial}{\partial y_{j}} q_{j}(\mathbf{y};\ t)\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align}

By applying Green’s identity (6) to the first term in (36) and taking the limit as r tends to $+\infty$ , we obtain

(37) \begin{align} \int_{\Bbb{R}^{m}}{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} \nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}} \nonumber \\ & \quad - \int_{\Bbb{R}^{m}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align}

Similarly, using Green’s identity for the second term in (37) and taking the limit, we have

(38) \begin{align} -\int_{\Bbb{R}^{m}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} & = -\lim_{r\rightarrow+\infty}\int_{S_{r}}{ f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}.\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}} \nonumber \\ & \quad + \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\Delta \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align}

The first terms in (37) and (38) can be shown to vanish (see Appendix B), and therefore, by comparing (37) with (38), we can write

\begin{equation*} \int_{\Bbb{R}^{m}}{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} = \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\Delta \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{equation*}

Substituting this into (36) yields

(39) \begin{align} \int_{\Bbb{R}^{m}}{\frac{\partial}{\partial t}f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} & = \delta(\rho ,m)\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\Delta \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} \nonumber \\ & \quad - \lambda(\rho ,m)\int_{\Bbb{R}^{m}}{\nabla. q(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align}

Also, by using (29) in Lemma 3, the second term in (35) can be rewritten as

(40) \begin{align} \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\frac{\partial}{\partial t} \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} & = 2\delta(\rho ,m)\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla\bigg( \frac{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg)\,\mathrm{d}\mathbf{y}} \nonumber \\ & \quad - 2\lambda(\rho ,m)\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla \bigg(\frac{\nabla. q(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg)\,\mathrm{d}\mathbf{y}}. \end{align}

Now, according to (30), we obtain

\begin{align*} \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla \bigg(\frac{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg)\,\mathrm{d}\mathbf{y}} & = \int_{\Bbb{R}^{m}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla(\Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t))\,\mathrm{d}\mathbf{y}} \\ & \quad + \int_{\Bbb{R}^{m}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align*}

Therefore, from this and (38), we have

(41) \begin{align} \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla\bigg(\frac{\Delta f_{\mathbf{Y}}(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg)\,\mathrm{d}\mathbf{y}} & = \int_{\Bbb{R}^{m}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla(\Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t))\,\mathrm{d}\mathbf{y}} \nonumber \\ & \quad - \int_{\Bbb{R}^{m}}{ f_{\mathbf{Y}}(\mathbf{y};\ t)\Delta \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align}

Thanks to the identity (31), for the second term in (40) we obtain

(42) \begin{align} \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla\bigg(\frac{\nabla. q(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg)\,\mathrm{d}\mathbf{y}} & = \int_{\Bbb{R}^{m}}{\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla(\nabla. q(\mathbf{y};\ t))\,\mathrm{d}\mathbf{y}} \nonumber \\ & \quad - \int_{\Bbb{R}^{m}}{\nabla. q(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align}

Using Green’s identity, we arrive at

(43) \begin{align} \int_{\Bbb{R}^{m}}{\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla(\nabla. q(\mathbf{y};\ t))\,\mathrm{d}\mathbf{y}} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\nabla. q(\mathbf{y};\ t) \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}} \nonumber \\ & \quad - \int_{\Bbb{R}^{m}}{\Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t) \nabla.q(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}}, \end{align}

whose first term becomes zero (see Appendix B). Using the identity

\begin{align*} \nabla. q(\mathbf{y};\ t) = \sum_{j=1}^{m} p_{j}(\mathbf{y};\ t)\frac{\partial}{\partial y_{j}}f_{\mathbf{Y}}(\mathbf{y};\ t) + f_{\mathbf{Y}}(\mathbf{y};\ t)\sum_{j=1}^{m}\frac{\partial}{\partial y_{j}}p_{j}(\mathbf{y};\ t), \end{align*}

the second term in (43) is rewritten as

\begin{align*} -\int_{\Bbb{R}^{m}}{\Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla. q(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} & = -\sum_{j=1}^{m}\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)p_{j}(\mathbf{y};\ t) \frac{\partial}{\partial y_{j}}\log f_{\mathbf{Y}}(\mathbf{y};\ t)\Delta \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} \\ & \quad - \sum_{j=1}^{m}\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \frac{\partial}{\partial y_{j}}p_{j}(\mathbf{y};\ t)\Delta \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}}. \end{align*}

By combining this with (42) and (43), we get

(44) \begin{align} & \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t). \nabla\bigg(\frac{\nabla. q(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg)\,\mathrm{d}\mathbf{y}} \nonumber \\ & = -\sum_{j=1}^{m}\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)p_{j}(\mathbf{y};\ t) \frac{\partial}{\partial y_{j}}\log f_{\mathbf{Y}}(\mathbf{y};\ t)\Delta \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} \nonumber \\ & \quad - \sum_{j=1}^{m}\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \frac{\partial}{\partial y_{j}}p_{j}(\mathbf{y};\ t)\Delta \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d}\mathbf{y}} - \int_{\Bbb{R}^{m}}{\nabla. q(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}. \end{align}

Also, we have

(45) \begin{align} \int_{\Bbb{R}^{m}}{\nabla . q(\mathbf{y};\ t)\| \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} q(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}} \nonumber \\ & \quad - \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)p(\mathbf{y};\ t).\nabla \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d}\mathbf{y}}, \end{align}

whose first term vanishes (see Appendix B), and $p(\mathbf{y};\ t)=(p_{1}(\mathbf{y};\ t),p_{2}(\mathbf{y};\ t),\ldots,p_{m}(\mathbf{y};\ t))$. From (45), combining (35), (39), (40), (41), and (44), we obtain

\begin{align*} & \frac{\partial}{\partial t}J(\mathbf{Y}) \\ & = \delta(\rho ,m)\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)}[2\nabla \log f_{\mathbf{Y}}(\mathbf{y};\ t).\nabla(\Delta\log f_{\mathbf{Y}}(\mathbf{y};\ t))-\Delta\| \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}]\,\mathrm{d}\mathbf{y} \\ & \quad + 2\lambda(\rho ,m)\sum_{j=1}^{m}{{{{\bf E}}}}_{\mathbf{Y}} \bigg[\frac{\partial}{\partial Y_{j}}p_{j}(\mathbf{Y};\ t)\Delta\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\bigg] \\ & \quad + 2\lambda(\rho ,m)\sum_{i\neq j}{{{{\bf E}}}}_{\mathbf{Y}} \bigg[p_{j}(\mathbf{Y};\ t)\frac{\partial}{\partial Y_{j}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t) \bigg(\frac{\partial^2}{\partial Y_{i}^{2}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t) - \frac{\partial^2}{\partial Y_{i}\partial Y_{j}}\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\bigg)\bigg]. \end{align*}

Hence, based on the relation (32) in Lemma 3, the proof is complete.

Remark 3. It is interesting to see that, if we put $\rho=0$ in (34), it reduces to

\begin{equation*} \frac{\partial}{\partial t}J(\mathbf{Y}) = -\sum_{i,j}{{{{\bf E}}}}_{\mathbf{Y}}\bigg[\frac{\partial^{2}}{\partial Y_{i}\partial Y_{j}} \log f_{\mathbf{Y}}(\mathbf{Y};\ t) \bigg]^{2}. \end{equation*}

That is, Theorem 2 recovers the case in which $\mathbf{X}$ and $\mathbf{W}_{t}$ are independent random vectors as a special case. Hence, Theorem 2 encompasses the result of [Reference Villani17] as a corollary.

We can now establish the main result of this paper.

Theorem 3. Let $\mathbf{X}$ and $\mathbf{W}_{t}$ in channel model (1) be two dependent random vectors whose dependence structure is modeled by the multivariate Gaussian copula. For any $\rho>-1/(2m-1)$, under the conditions

(46a) \begin{align} m\frac{\partial A_{t}}{\partial t}+2A_{t}^{2}+4\delta(\rho ,m)A_{t}J(\mathbf{Y}) & \leq 0, \end{align}
(46b) \begin{align} \rho D_{t} & \leq 0, \end{align}

the entropy power $N(\mathbf{X}+\mathbf{W}_{t})$ is concave in t, i.e.

\begin{equation*} \frac{\partial^{2}}{\partial t^{2}}N(\mathbf{X}+\mathbf{W}_{t})\leq 0. \end{equation*}

Proof. Since $N(\mathbf{Y})=\mathrm{e}^{({2}/{m})h(\mathbf{Y})}/(2\pi \mathrm{e})$, we have $({\partial}/{\partial t})N(\mathbf{Y})=({2}/{m})N(\mathbf{Y})({\partial}/{\partial t})h(\mathbf{Y})$, and differentiating once more gives

\begin{equation*} \frac{\partial^{2}}{\partial t^{2}}N(\mathbf{Y})=\frac{2}{m}N(\mathbf{Y}) \bigg[\frac{\partial^{2}}{\partial t^{2}}h(\mathbf{Y}) + \frac{2}{m}\bigg(\frac{\partial}{\partial t}h(\mathbf{Y})\bigg)^{2}\bigg]. \end{equation*}

Since the entropy power is nonnegative, to show that $(\partial^{2}/\partial t^{2})N(\mathbf{Y})\leq 0$ , it is sufficient to prove that

\begin{equation*} -\frac{\partial^{2}}{\partial t^{2}}h(\mathbf{Y}) \geq \frac{2}{m}\bigg(\frac{\partial}{\partial t}h(\mathbf{Y})\bigg)^{2}. \end{equation*}

Based on Theorem 1, this is equivalent to

\begin{equation*} -\delta(\rho ,m)\frac{\partial}{\partial t}J(\mathbf{Y}) - \frac{\partial}{\partial t}A_{t} \geq \frac{2\delta^{2}(\rho ,m)}{m}J^{2}(\mathbf{Y}) + \frac{4\delta(\rho ,m)}{m}A_{t}J(\mathbf{Y}) + \frac{2}{m}A_{t}^{2}. \end{equation*}

Thus, since $\rho>-1/(2m-1)$ and $\delta(\rho ,m)>0$ , due to the condition (46a), we must prove that

\begin{equation*} -\frac{\partial}{\partial t}J(\mathbf{Y}) \geq \frac{2\delta(\rho ,m)}{m}J^{2}(\mathbf{Y}). \end{equation*}

According to the proof of the proposition in [Reference Villani17, p. 3], we have

(47) \begin{equation} \sum_{i,j}{{{{\bf E}}}}_{\mathbf{Y}}\bigg[\frac{\partial^{2}}{\partial Y_{i}\partial Y_{j}} \log f_{\mathbf{Y}}(\mathbf{Y};\ t) \bigg]^{2}\geq \frac{J^{2}(\mathbf{Y})}{m}. \end{equation}

Hence, according to Theorem 2, (47), and assumption (46b), the proof is complete.

4. The one-dimensional case

In this section, by considering the channel model (1) with $m=1$ , we describe special versions of our main results.

Corollary 1. Let X and $W_{t}$ in the channel model $Y=X+W_{t}$ be dependent one-dimensional random variables, and let $W_{t}$ be normally distributed with mean zero and variance t. If their dependence structure is modeled by the bivariate Gaussian copula (15), then

\begin{equation*} \frac{\partial}{\partial t}h(Y) = \bigg(\frac{1-\rho^{2}}{2}\bigg)J(Y)+A'_{t}, \end{equation*}

where

(48) \begin{equation} A'_{t} = -\frac{\rho}{2\sqrt{t}\,}{{{{\bf E}}}}_{Y} \bigg[p'(Y;\ t)\frac{\partial}{\partial Y}\log f_{Y}(Y;\ t)\bigg], \end{equation}

in which $p'(y;\ t)={{{{\bf E}}}}_{X\mid Y}[\Phi^{-1}(F_{X}(X))\mid Y=y]$ .

Proof. Since $W_{t}$ is normally distributed with mean zero and variance t, from (15),

\begin{multline*} f_{X,W_{t}}(x,y-x) = \frac{1}{\sqrt{2\pi t(1-\rho ^{2})}\,} \exp\bigg\{{-}\frac{\rho^{2}}{2(1-\rho ^{2})}\bigg[(\Phi^{-1}(F_{X}(x)))^{2} \\ - \frac{2}{\rho\sqrt{t}\,} (y-x)\Phi ^{-1}(F_{X}(x)) +\dfrac{(y-x)^{2}}{\rho^{2}t}\bigg] \bigg\} f_{X}(x). \end{multline*}

Thus, by some simple calculations, we obtain

(49) \begin{align} \frac{\partial}{\partial t}f_{Y}(y;\ t) & = \int_{-\infty}^{+\infty}{\frac{\partial}{\partial t}f_{X,W_{t}}(x,y-x)\,\mathrm{d} x} \nonumber \\ & = \int_{-\infty}^{+\infty}{\frac{1}{2t} \bigg({-}\frac{\rho \Phi ^{-1}(F_{X}(x))(y-x)}{\sqrt{t}(1-\rho^{2})} + \frac{(y-x)^{2}}{t(1-\rho^{2})}-1\bigg)f_{X,W_{t}}(x,y-x)\,\mathrm{d} x}, \end{align}
(50) \begin{align} \frac{\partial^{2}}{\partial y^{2}}f_{Y}(y;\ t) & = \int_{-\infty}^{+\infty}{\frac{\partial^{2}}{\partial y^{2}}f_{X,W_{t}}(x,y-x)\,\mathrm{d} x}, \nonumber \\ & = \int_{-\infty}^{+\infty}{\frac{1}{t(1-\rho^{2})} \bigg[\dfrac{\rho^{2}(\Phi^{-1}(F_{X}(x)))^{2}}{(1-\rho^{2})} + \dfrac{(y-x)^{2}}{t(1-\rho^{2})}} \nonumber \\ & \qquad \qquad \qquad \qquad -\frac{2\rho \Phi^{-1}(F_{X}(x))(y-x)}{\sqrt{t}(1-\rho^{2})}-1\bigg] f_{X,W_{t}}(x,y-x)\,\mathrm{d} x. \end{align}

Now, by comparing (49) with (50), we obtain

(51) \begin{equation} \frac{\partial}{\partial t}f_{Y}(y;\ t) = \bigg(\frac{1-\rho^{2}}{2}\bigg)\frac{\partial^{2}}{\partial y^{2}}f_{Y}(y;\ t) - \frac{\rho}{2\sqrt{t}\,}\frac{\partial}{\partial y}q'(y;\ t), \end{equation}

in which

(52) \begin{equation} q'(y;\ t) = \int_{-\infty}^{+\infty}{\Phi^{-1}(F_{X}(x))f_{X,W_{t}}(x,y-x)\,\mathrm{d} x} = p'(y;\ t)f_{Y}(y;\ t), \end{equation}

where $p'(y;\ t)={{{{\bf E}}}}_{X\mid Y}[\Phi^{-1}(F_{X}(X))\mid Y=y]$ . Hence, $q_{j}(\mathbf{y};\ t)$ and $p_{j}(\mathbf{y};\ t)$ in Lemma 2 reduce to $q'(y;\ t)$ and $p'(y;\ t)$ , respectively. Now, since X and $W_{t}$ are one-dimensional, it is sufficient to set $m=1$ and $p_{j}(\mathbf{y};\ t)=p'(y;\ t)$ in (24). Therefore, the proof is complete.
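Since Corollary 1 rests on the one-dimensional heat-type equation (51), it is worth verifying (51) symbolically in the jointly Gaussian setting of Example 1 below, where $Y\sim N(0,1+t+2\rho\sqrt{t})$ and $p'(y;\ t)$ is linear in y. The following sketch (ours, restricted for simplicity to $0<\rho<1$) confirms that the two sides of (51) agree:

```python
import sympy as sp

t, y, rho = sp.symbols('t y rho', positive=True)    # check for 0 < rho < 1

v = 1 + t + 2*rho*sp.sqrt(t)                        # Var(Y) in the setting of Example 1
f = sp.exp(-y**2/(2*v))/sp.sqrt(2*sp.pi*v)          # f_Y(y; t)
p = (1 + sp.sqrt(t)*rho)/v*y                        # p'(y; t) = E[X | Y = y]

lhs = sp.diff(f, t)
rhs = (1 - rho**2)/2*sp.diff(f, y, 2) - rho/(2*sp.sqrt(t))*sp.diff(p*f, y)
print(sp.simplify((lhs - rhs)/f))                   # 0: both sides of (51) coincide
```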

Remark 4. Corollary 1 is equivalent to a result in [Reference Khoolenjani and Alamatsaz12].

Now, under the same conditions as in Corollary 1, according to the relations (51) and (52), the first-order derivative of the Fisher information,

(53) \begin{equation} \frac{\partial}{\partial t}J(Y) = -(1-\rho^{2}){{{{\bf E}}}}_{Y} \bigg(\frac{\partial^{2}}{\partial Y^{2}}\log f_{Y}(Y;\ t)\bigg)^{2} + \frac{\rho}{\sqrt{t}\,}{{{{\bf E}}}}_{Y} \bigg[\frac{\partial}{\partial Y}p'(Y;\ t)\frac{\partial^{2}}{\partial Y^{2}}\log f_{Y}(Y;\ t)\bigg],\end{equation}

simply follows by setting $m=1$ and $p_{j}(\mathbf{y};\ t)=p'(y;\ t)$ in (34). This coincides with the result in [Reference Asgari, Alamatsaz and Khoolenjani4], where a direct proof of (53) is provided.

Using the first-order derivatives of the entropy and Fisher information of the output signal Y, in what follows the concavity of Shannon’s entropy power for the special one-dimensional case is obtained.

Corollary 2. Given the channel model (1), assume that X and $W_{t}$ are dependent random variables whose dependence structure is modeled by the bivariate Gaussian copula (15). Based on the assumptions

(54a) \begin{align} \frac{\partial A'_{t}}{\partial t}+2A_{t}^{'2}+2(1-\rho^{2})J(Y)A'_{t} & \leq 0, \end{align}
(54b) \begin{align} \rho{{{{\bf E}}}}_{Y}\bigg[\frac{\partial}{\partial Y}p'(Y;\ t)\frac{\partial^{2}}{\partial Y^{2}} \log f_{Y}(Y;\ t)\bigg] & \leq 0, \end{align}

the entropy power $N(X+W_{t})$ is concave in t.

Example 1. Consider the channel model $Y=X+W_{t}$ with $W_{t}=\sqrt{t}W$ . Let X be standard Gaussian and suppose that X and $W_{t}$ are jointly distributed according to the bivariate Gaussian copula, i.e. X and W are two dependent random variables distributed according to a bivariate standard Gaussian distribution with the PDF

\begin{equation*} f_{X,W}(x,w) = \frac{1}{2\pi \sqrt{(1-\rho ^{2})}\,} \exp \bigg\{{-}\frac{1}{2(1-\rho ^{2})}[ x^{2}-2\rho xw+w^{2}] \bigg\} . \end{equation*}

We know that Y is normally distributed with mean zero and variance $1+t+2\sqrt{t}\rho$ . Thus, since $(X,Y)\sim N_{2}(\mathbf{0},\Sigma_{X,Y})$ with

\begin{equation*} \Sigma_{X,Y}=\left( \begin{array}{c@{\quad}c} 1& 1+\sqrt{t}\rho\\ \\[-9pt] 1+\sqrt{t}\rho &1+t+2\sqrt{t}\rho\\ \end{array} \right), \end{equation*}

we have

\begin{equation*} p'(y;\ t) = {{{{\bf E}}}}_{X\mid Y}(X\mid Y=y)=\frac{1+\sqrt{t}\rho}{1+t+2\sqrt{t}\rho}y. \end{equation*}

Further, we observe that

\begin{equation*} \frac{\partial}{\partial y}\log f_{Y}(y;\ t) = -\frac{1}{1+t+2\sqrt{t}\rho}y. \end{equation*}

Thus, by (48), we can write

\begin{equation*} A'_{t} = \frac{\rho(1+\sqrt{t}\rho)}{2\sqrt{t}(1+t+2\rho\sqrt{t})}. \end{equation*}

As we can see, both conditions (54a) and (54b) are satisfied when $\rho>0$ . Thus, based on Corollary 2, $N(X+W_{t})$ is concave in t.
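In fact, since Y in Example 1 is itself Gaussian, $h(Y)=\frac{1}{2}\log(2\pi \mathrm{e}(1+t+2\rho\sqrt{t}))$, so $N(X+W_{t})=1+t+2\rho\sqrt{t}$ and the conclusion can also be read off directly: $({\partial^{2}}/{\partial t^{2}})N(X+W_{t})=-{\rho}/({2t^{3/2}})$, which is nonpositive exactly when $\rho\geq 0$. A two-line symbolic confirmation of this observation (ours):

```python
import sympy as sp

t, rho = sp.symbols('t rho', positive=True)
N = 1 + t + 2*rho*sp.sqrt(t)           # entropy power of the Gaussian Y in Example 1
print(sp.simplify(sp.diff(N, t, 2)))   # -rho/(2*t**(3/2)) <= 0 for rho > 0
```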

5. Conclusions

In this paper, based on the multivariate Gaussian copula dependence structure, we have derived the first- and second-order derivatives of the differential entropy of the output signal in the m-dimensional additive Gaussian noise channel model. Then, by using these derivatives, we have generalized Costa’s concavity inequality for the particular case where the coordinates of the input signal and noise are dependent according to a multivariate Gaussian copula model. In particular, we have studied our results in the one-dimensional case and have provided an illustrative example.

Appendix A. Vanishing surface integrals of Theorem 1

We need to prove that

(55) \begin{equation} \lim_{r\rightarrow+\infty}\int_{S_{r}}{\log f_{\mathbf{Y}}(\mathbf{y};\ t) \nabla f_{\mathbf{Y}}(\mathbf{y};\ t). \mathbf{n}_{S_{r}}(\mathbf{y}) \,\mathrm{d} S_{r}}=0.\end{equation}

We first assume that $h(\mathbf{Y})$ is finite. Next, we integrate the surface integral in (55) over $r\geq 0$ and then, by applying the identity (27) and Stokes’ theorem, we obtain

(56) \begin{align} \int_{0}^{+\infty}\int_{S_{r}}{\log f_{\mathbf{Y}}(\mathbf{y};\ t) \nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y}) \,\mathrm{d} S_{r}\,\mathrm{d} r} & = \int_{0}^{+\infty}\int_{S_{r}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t). (\!\log f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y})) \,\mathrm{d} S_{r}\,\mathrm{d} r} \nonumber \\ & = \int_{\Bbb{R}^{m}}{\nabla f_{\mathbf{Y}}(\mathbf{y};\ t). (\!\log f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\,\mathrm{d}\mathbf{y}} \nonumber \\ & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d} S_{r}} \nonumber \\ & \quad - \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla. (\!\log f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\,\mathrm{d}\mathbf{y}}. \end{align}

Since the limit in the first part of (56) exists, due to

\begin{equation*} \bigg\vert \int_{0}^{+\infty}\int_{S_{r}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \log f_{\mathbf{Y}}(\mathbf{y};\ t)\,\mathrm{d} S_{r}\,\mathrm{d} r}\bigg\vert = \vert h(\mathbf{Y})\vert <+\infty ,\end{equation*}

the first term in (56) vanishes. Now, since

\begin{equation*} \vert\nabla .(\!\log f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\vert = \frac{\vert \nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\vert} {f_{\mathbf{Y}}(\mathbf{y};\ t)} \leq \frac{\| \nabla f_{\mathbf{Y}}(\mathbf{y};\ t)\|}{f_{\mathbf{Y}}(\mathbf{y};\ t)},\end{equation*}

for the second term in (56) we can write

(57) \begin{align} \bigg\vert \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla. (\!\log f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\,\mathrm{d}\mathbf{y}}\bigg\vert & \leq \int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\vert\nabla. (\!\log f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\vert \,\mathrm{d}\mathbf{y}} \nonumber \\ & \leq {{{{\bf E}}}}_{\mathbf{Y}} \bigg(\frac{\| \nabla f_{\mathbf{Y}}(\mathbf{Y};\ t)\|}{f_{\mathbf{Y}}(\mathbf{Y};\ t)}\bigg). \end{align}

Further, we know that

(58) \begin{equation} {{{{\bf E}}}}_{\mathbf{Y}}\bigg(\frac{\| \nabla f_{\mathbf{Y}}(\mathbf{Y};\ t)\|} {f_{\mathbf{Y}}(\mathbf{Y};\ t)}\bigg) = {{{{\bf E}}}}_{\mathbf{Y}}\bigg\{\bigg[\sum_{j=1}^{m}\bigg(\frac{\frac{\partial}{\partial y_{j}} f_{\mathbf{Y}}(\mathbf{Y};\ t)}{f_{\mathbf{Y}}(\mathbf{Y};\ t)}\bigg)^{2}\bigg]^{\frac{1}{2}}\bigg\} \leq \bigg\{\sum_{j=1}^{m}{{{{\bf E}}}}_{\mathbf{Y}}\bigg(\frac{\frac{\partial}{\partial y_{j}} f_{\mathbf{Y}}(\mathbf{Y};\ t)}{f_{\mathbf{Y}}(\mathbf{Y};\ t)}\bigg)^{2}\bigg\}^{\frac{1}{2}}.\end{equation}

On the other hand, from (20), we have

(59) \begin{align} {{{{\bf E}}}}(W_{t,j}\mid \mathbf{Y}=\mathbf{y}) & = {{{{\bf E}}}}_{\mathbf{X}\mid \mathbf{Y}}[(Y_{j}-X_{j})\mid \mathbf{Y}=\mathbf{y}] \nonumber \\ & = \int_{\Bbb{R}^{m}}{(y_{j}-x_{j})\frac{f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\,\mathrm{d}\mathbf{x}} \nonumber \\ & = -2\delta(\rho , m)t\bigg(\frac{\frac{\partial}{\partial y_{j}}f_{\mathbf{Y}}(\mathbf{y};\ t)} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg) + 2t\lambda(\rho ,m)p_{j}(\mathbf{y};\ t). \end{align}

Now, since $\vert {\bf E}(W_{t,j}\mid \mathbf{Y}=\mathbf{y})\vert<+\infty$ for all $j=1,2,\ldots,m$, the two terms on the right-hand side of (59) must each be finite. Therefore, we have

(60) \begin{equation} {{{{\bf E}}}}_{\mathbf{Y}}\bigg(\frac{\frac{\partial}{\partial Y_{j}} f_{\mathbf{Y}}(\mathbf{Y};\ t)}{f_{\mathbf{Y}}(\mathbf{Y};\ t)}\bigg)^{2}<+\infty , \qquad j=1,2,\ldots ,m,\end{equation}

and, due to (58), the right-hand side of inequality (57) is finite. Hence, the integral in (56) is finite and, since the limit in (55) exists, the desired result (55) is proved.
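The vanishing in (55) can be made concrete in the one-dimensional Gaussian illustration above: with $Y\sim N(0,v)$, the integrand of the surface term at the boundary point $r$ is

\begin{equation*} \log \phi_{v}(r)\,\phi'_{v}(r) = \bigg(\frac{r^{2}}{2v}+\frac{1}{2}\log (2\pi v)\bigg)\frac{r}{v}\, \frac{\mathrm{e}^{-r^{2}/(2v)}}{\sqrt{2\pi v}}\longrightarrow 0 \qquad \text{as } r\rightarrow+\infty ,\end{equation*}

since the Gaussian tail decays faster than any polynomial grows.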

Now, we need to prove that

(61) \begin{equation} \lim_{r\rightarrow+\infty}\int_{S_{r}}{\log f_{\mathbf{Y}}(\mathbf{y};\ t)\, q(\mathbf{y};\ t). \mathbf{n}_{S_{r}}(\mathbf{y}) \,\mathrm{d} S_{r}}=0.\end{equation}

To prove this, we integrate the surface integral in (61) over $r$ from $0$ to $+\infty$. Thus, we have

(62) \begin{align} &\bigg\vert\int_{0}^{+\infty}\int_{S_{r}}{\log f_{\mathbf{Y}}(\mathbf{y};\ t) q(\mathbf{y};\ t). \mathbf{n}_{S_{r}}(\mathbf{y}) \,\mathrm{d} S_{r}\,\mathrm{d} r}\bigg\vert \nonumber\\ &\quad \leq \int_{0}^{+\infty}\int_{S_{r}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\vert\log f_{\mathbf{Y}}(\mathbf{y};\ t)\vert \|p(\mathbf{y};\ t)\| \|\mathbf{n}_{S_{r}}(\mathbf{y})\| \,\mathrm{d} S_{r}\,\mathrm{d} r} \nonumber \\ &\quad = \sqrt{m}\int_{\Bbb{R}^{m}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\vert\log f_{\mathbf{Y}}(\mathbf{y};\ t)\vert \| p(\mathbf{y};\ t)\| \,\mathrm{d}\mathbf{y}} \nonumber \\ &\quad = \sqrt{m}{\bf E}_{\mathbf{Y}}\big\vert\log f_{\mathbf{Y}}(\mathbf{Y};\ t) \|p(\mathbf{Y};\ t)\|\big\vert . \end{align}

Since $f_{\mathbf{Y}}(\mathbf{y};\ t)$ converges to zero as $\|\mathbf{y}\|\rightarrow+\infty$, we have $f_{\mathbf{Y}}(\mathbf{y};\ t)\log f_{\mathbf{Y}}(\mathbf{y};\ t)\rightarrow 0$ as $\|\mathbf{y}\|\rightarrow+\infty$. Therefore, $f_{\mathbf{Y}}(\mathbf{y};\ t)\vert\log f_{\mathbf{Y}}(\mathbf{y};\ t)\vert$ is bounded and, due to (59), the right-hand side of (62) is finite. Hence, since the limit in (61) exists, (61) follows.
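In the one-dimensional Gaussian illustration, for example, the expectation on the right-hand side of (62) is finite whenever $\|p(\mathbf{y};\ t)\|$ is bounded (an assumption made only for this sketch), because

\begin{equation*} {\bf E}\vert \log \phi_{v}(Y)\vert \leq \frac{1}{2}\vert\log (2\pi v)\vert+\frac{{\bf E}(Y^{2})}{2v} = \frac{1}{2}\vert\log (2\pi v)\vert+\frac{1}{2}<+\infty .\end{equation*}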

Appendix B. Vanishing surface integrals of Theorem 2

We intend to prove that

(63) \begin{align} u_{1} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} \nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}}=0, \\ u_{2} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}. \mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}}=0, \nonumber \end{align}
(64) \begin{align} u_{3} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\nabla. q(\mathbf{y};\ t) \nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}}=0, \\ u_{4} & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} q(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}}=0. \nonumber \end{align}

First, we consider the integral of the surface integral in (63) over $r\geq 0$:

(65) \begin{multline} \bigg\vert\int_{0}^{+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} \nabla f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}\,\mathrm{d} r}\bigg\vert \\ \leq \int_{0}^{+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\|\nabla f_{\mathbf{Y}}(\mathbf{y};\ t)\|\|\mathbf{n}_{S_{r}}(\mathbf{y})\|\,\mathrm{d} S_{r}\,\mathrm{d} r} = {\bf E}_{\mathbf{Y}}\bigg(\frac{\| \nabla f_{\mathbf{Y}}(\mathbf{Y};\ t)\|} {f_{\mathbf{Y}}(\mathbf{Y};\ t)}\bigg)^{3}.\end{multline}

Arguing as for (58) and (60), the right-hand side of (65) is finite and, since the limit $u_{1}$ exists, it follows that $u_{1}=0$.
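In the Gaussian illustration, the third moment on the right-hand side of (65) is explicit: for $Y\sim N(0,v)$,

\begin{equation*} {\bf E}\bigg(\frac{\vert \phi'_{v}(Y)\vert}{\phi_{v}(Y)}\bigg)^{3} = \frac{{\bf E}\vert Y\vert^{3}}{v^{3}} = \frac{2}{v^{3/2}}\sqrt{\frac{2}{\pi}}<+\infty ,\end{equation*}

consistent with the finiteness used here.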

To show that $u_{2}=0$ , we write

(66) \begin{align} & \int_{0}^{+\infty}\int_{S_{r}}{f_{\mathbf{Y}}(\mathbf{y};\ t)\nabla \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}.\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}\,\mathrm{d} r} \nonumber \\[3pt] & = \int_{0}^{+\infty}\int_{S_{r}}{\nabla\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}. (f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\,\mathrm{d} S_{r}\,\mathrm{d} r} \nonumber \\[3pt] & = \int_{\Bbb{R}^{m}}{\nabla\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}. (f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\,\mathrm{d}\mathbf{y}} \nonumber \\[3pt] & = \lim_{r\rightarrow+\infty}\int_{S_{r}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d} S_{r}} - \int_{\Bbb{R}^{m}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\nabla. (f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\,\mathrm{d}\mathbf{y}}. \end{align}

Because $\big\vert\int_{0}^{+\infty}\int_{S_{r}}{f_{\mathbf{Y}}(\mathbf{y};\ t) \|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\,\mathrm{d} S_{r}}\,\mathrm{d} r\big\vert =\vert J(\mathbf{Y})\vert<+\infty$ and

\begin{align*} \bigg\vert\int_{\Bbb{R}^{m}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2}\nabla. (f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\,\mathrm{d}\mathbf{y}}\bigg\vert & \leq \int_{\Bbb{R}^{m}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} \vert\nabla.(f_{\mathbf{Y}}(\mathbf{y};\ t)\mathbf{n}_{S_{r}}(\mathbf{y}))\vert \,\mathrm{d}\mathbf{y}} \\ & \leq {{{{\bf E}}}}_{\mathbf{Y}}\bigg(\frac{\| \nabla f_{\mathbf{Y}}(\mathbf{Y};\ t)\|} {f_{\mathbf{Y}}(\mathbf{Y};\ t)}\bigg)^{3},\end{align*}

the first term in (66) becomes zero and the absolute value of the second term is finite. Thus, since the limit $u_{2}$ exists, we have $u_{2}=0$ .

In a similar way, we consider the integral from $r=0$ to $r=+\infty$ of the surface integral in (64):

(67) \begin{align} & \bigg\vert \int_{0}^{+\infty}\int_{S_{r}}{\nabla. q(\mathbf{y};\ t)\nabla \log f_{\mathbf{Y}}(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}}\,\mathrm{d} r\bigg\vert \nonumber \\[3pt] &\quad \leq \int_{0}^{+\infty}\int_{S_{r}}{\vert\nabla. q(\mathbf{y};\ t)\vert\|\nabla \log f_{\mathbf{Y}}(\mathbf{y};\ t)\|\|\mathbf{n}_{S_{r}}(\mathbf{y})\| \,\mathrm{d} S_{r}}\,\mathrm{d} r \nonumber \\[3pt] &\quad \leq \sum_{j=1}^{m}{{{{\bf E}}}}_{\mathbf{Y}}\bigg[ \bigg\vert\frac{\partial}{\partial Y_{j}}q_{j}(\mathbf{Y};\ t)\bigg\vert \|\nabla\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\| \bigg]. \end{align}

Using (21), we have

(68) \begin{align} & {\bf E}(W_{t,j}^{2}\mid \mathbf{Y}=\mathbf{y}) = {{{\bf E}}}_{\mathbf{X}\mid \mathbf{Y}}[(Y_{j}-X_{j})^{2}\mid \mathbf{Y}=\mathbf{y}] \nonumber \\ & = \int_{\Bbb{R}^{m}}{(y_{j}-x_{j})^{2}\frac{f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})} {f_{\mathbf{Y}}(\mathbf{y};\ t)}\,\mathrm{d}\mathbf{x}} \nonumber \\ & = 4\delta^{2}(\rho ,m)t^2\bigg(\frac{\frac{\partial^{2}}{\partial y_{j}^{2}} f_{\mathbf{Y}}(\mathbf{y};\ t)}{f_{\mathbf{Y}}(\mathbf{y};\ t)}\bigg) + 2\delta(\rho ,m)t \nonumber \\ & \quad - \frac{4t^{2}\lambda^{2}(\rho ,m)}{f_{\mathbf{Y}}(\mathbf{y};\ t)} \int_{\Bbb{R}^{m}}{\Bigg(\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i})) + \frac{1}{\sqrt{t}\,}\sum_{k\neq j}(y_{k}-x_{k})\Bigg)^{2} f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}} \nonumber \\ & \quad - \frac{2t\lambda(\rho ,m)}{f_{\mathbf{Y}}(\mathbf{y};\ t)} \int_{\Bbb{R}^{m}}{(y_{j}-x_{j})\Bigg(\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i})) + \frac{1}{\sqrt{t}\,}\sum_{k\neq j}(y_{k}-x_{k})\Bigg)} f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}. \end{align}

Also, from (20), we obtain

(69) \begin{align} & \frac{\partial}{\partial y_{j}}q_{j}(\mathbf{y};\ t) \nonumber \\ & = \frac{-1}{2\delta(\rho ,m)t}\Bigg\{ \int_{\Bbb{R}^{m}}(y_{j}-x_{j})\Bigg(\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i})) + \frac{1}{\sqrt{t}\,}\sum_{k\neq j}(y_{k}-x_{k})\Bigg) f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x} \nonumber \\ & \quad - 2t\lambda(\rho ,m)\int_{\Bbb{R}^{m}}\Bigg(\sum_{i=1}^{m}\Phi^{-1}(F_{X_{i}}(x_{i})) + \frac{1}{\sqrt{t}}\sum_{k\neq j}(y_{k}-x_{k})\Bigg)^{2} f_{\mathbf{X},\mathbf{W}_{t}}(\mathbf{x},\mathbf{y}-\mathbf{x})\,\mathrm{d}\mathbf{x}\Bigg\}. \end{align}

Since ${\bf E}(W_{t,j}^{2}\mid \mathbf{Y}=\mathbf{y})<+\infty$ for all $j=1,2,\ldots,m$, the first, third, and fourth terms in (68) must be finite too and, due to (69), $(\partial/\partial y_{j})q_{j}(\mathbf{y};\ t)$ is finite as well. Therefore, using (58) and (60), the right-hand side of (67) is finite and, together with the fact that the limit $u_{3}$ exists, it follows that $u_{3}=0$.

Similarly, to show that $u_{4}=0$, we write

\begin{align*} & \bigg\vert\int_{0}^{+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} q(\mathbf{y};\ t).\mathbf{n}_{S_{r}}(\mathbf{y})\,\mathrm{d} S_{r}\,\mathrm{d} r}\bigg\vert \\ & \leq \int_{0}^{+\infty}\int_{S_{r}}{\|\nabla\log f_{\mathbf{Y}}(\mathbf{y};\ t)\|^{2} \| q(\mathbf{y};\ t)\|\|\mathbf{n}_{S_{r}}(\mathbf{y})\|\,\mathrm{d} S_{r}\,\mathrm{d} r} = \sqrt{m}{{{\bf E}}}_{\mathbf{Y}} \big[\| p(\mathbf{Y};\ t)\| \|\nabla\log f_{\mathbf{Y}}(\mathbf{Y};\ t)\|^{2}\big].\end{align*}

Using similar steps, we can see that $u_{4}=0$ .
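As with $u_{1}$, the final expectation can be checked in the one-dimensional Gaussian illustration: if $\|p(\mathbf{y};\ t)\|$ is bounded by a constant $c$ (again, an assumption for this sketch only), then

\begin{equation*} {\bf E}\big[\| p(Y;\ t)\| \,\vert (\!\log \phi_{v}(Y))'\vert^{2}\big] \leq c\,\frac{{\bf E}(Y^{2})}{v^{2}} = \frac{c}{v}<+\infty .\end{equation*}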

Acknowledgements

We express our gratitude to the associate editor and the anonymous reviewers, whose comments noticeably improved the manuscript.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Amazigo, J. and Rubenfeld, L. (1980). Advanced Calculus and its Applications to the Engineering and Physical Sciences. Wiley, New York.
Arias-Nicolás, J. P., Fernández-Ponce, J. M., Luque-Calvo, P. and Suárez-Llorens, A. (2005). Multivariate dispersion order and the notion of copula applied to the multivariate t-distribution. Prob. Eng. Inf. Sci. 19, 363–375.
Asgari, F. and Alamatsaz, M. H. (2022). An extension of entropy power inequality for dependent random variables. Commun. Statist. Theory Meth. 51, 4358–4369.
Asgari, F., Alamatsaz, M. H. and Khoolenjani, N. B. (2019). Inequalities for the dependent Gaussian noise channels based on Fisher information and copulas. Prob. Eng. Inf. Sci. 33, 618–657.
Blachman, N. (1965). The convolution inequality for entropy powers. IEEE Trans. Inf. Theory 11, 267–271.
Bergmans, P. (1974). A simple converse for broadcast channels with additive white Gaussian noise. IEEE Trans. Inf. Theory 20, 279–280.
Costa, M. (1985). A new entropy power inequality. IEEE Trans. Inf. Theory 31, 751–760.
Dembo, A. (1989). Simple proof of the concavity of the entropy power with respect to added Gaussian noise. IEEE Trans. Inf. Theory 35, 887–888.
Joe, H. (1997). Multivariate Models and Dependence Concepts (Monographs Statist. Appl. Prob. 73). Chapman & Hall, London.
Johnson, O. (2004). A conditional entropy power inequality for dependent variables. IEEE Trans. Inf. Theory 50, 1581–1583.
Kay, S. (2009). Waveform design for multistatic radar detection. IEEE Trans. Aerosp. Electron. Systems 45, 1153–1166.
Khoolenjani, N. B. and Alamatsaz, M. H. (2016). Extension of de Bruijn's identity to dependent non-Gaussian noise channels. J. Appl. Prob. 53, 360–368.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Tech. J. 27, 623–656.
Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris 8, 229–231.
Stam, A. J. (1959). Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control 2, 101–112.
Takano, S. (1995). The inequalities of Fisher information and entropy power for dependent variables. In Proc. 7th Japan–Russia Symp. Prob. Theory Math. Statist., eds S. Watanabe, M. Fukushima, Yu. V. Prohorov and A. N. Shiryaev, pp. 460–470.
Villani, C. (2000). A short proof of the concavity of entropy power. IEEE Trans. Inf. Theory 46, 1695–1696.
Weingarten, H., Steinberg, Y. and Shamai, S. (2006). The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory 52, 3936–3964.