
Systems involving mean value formulas on trees

Published online by Cambridge University Press:  03 January 2024

Alfredo Miranda
Affiliation:
Departamento de Matemática, FCEyN, Universidad de Buenos Aires, Buenos Aires, Argentina e-mail: amiranda@dm.uba.ar mosquera@dm.uba.ar
Carolina A. Mosquera
Affiliation:
Departamento de Matemática, FCEyN, Universidad de Buenos Aires, Buenos Aires, Argentina e-mail: amiranda@dm.uba.ar mosquera@dm.uba.ar
Julio D. Rossi*
Affiliation:
Departamento de Matemática, FCEyN, Universidad de Buenos Aires, Buenos Aires, Argentina e-mail: amiranda@dm.uba.ar mosquera@dm.uba.ar

Abstract

In this paper, we study the Dirichlet problem for systems of mean value equations on a regular tree. We deal both with the directed case (the equations verified by the components of the system at a node in the tree only involve values of the unknowns at the successors of the node in the tree) and the undirected case (now the equations also involve the predecessor in the tree). We find necessary and sufficient conditions on the coefficients in order to have existence and uniqueness of solutions for continuous boundary data. In a particular case, we also include an interpretation of such solutions as a limit of value functions of suitable two-player zero-sum games.

Type
Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Canadian Mathematical Society

1 Introduction

A tree is, informally, an infinite graph in which each node but one (the root of the tree) has exactly $m+1$ connected nodes, m successors and one predecessor (see below for a precise description of a regular tree). Regular trees and mean value averaging operators on them play the role of being a discrete model analogous to the unit ball and continuous partial differential equations (PDEs) in it. In this sense, linear and nonlinear mean value properties on trees are models that are close to (and related to) linear and nonlinear PDEs. The literature dealing with models and equations given by mean value formulas on trees is quite large but mainly focused on single equations. We quote [1–3, 5–7, 10, 11, 14, 20, 21] and the references therein for references that are closely related to our results, but the list is far from being complete.

Our main goal here is to look for existence and uniqueness of solutions to systems of mean value formulas on regular trees. When dealing with systems, two main difficulties arise: the first one comes from the operators used to obtain the equations that govern the components of the system, and the second one comes from the coupling between the components. Here, we deal with linear couplings with coefficients in each equation that may change from one point to another and with linear or nonlinear mean value properties given in terms of averaging operators involving the successors together with a possible linear dependence on the predecessor.

Our main result can be summarized as follows: for a general system of averaging operators with linear coupling on a regular tree, we find the sharp conditions (necessary and sufficient conditions) on the coefficients of the coupling and the contribution of the predecessor/successors in such a way that the Dirichlet problem for the system with continuous boundary data has existence and uniqueness of solutions.

Now, let us introduce briefly some definitions and notations needed to make precise the statements of our main results.

The ambient space, a regular tree. Given $m\in \mathbb {N}_{\ge 2},$ a tree $\mathbb {T}$ with regular m-branching is an infinite graph that consists of a root, denoted as the empty set $\emptyset $ , and an infinite number of nodes, labeled as all finite sequences $(a_1,a_2,\dots ,a_k)$ with $k\in \mathbb {N},$ whose coordinates $a_i$ are chosen from $\{0,1,\dots ,m-1\}.$

The elements in $\mathbb {T}$ are called vertices. Each vertex x has m successors, obtained by adding another coordinate. We will denote by

$$ \begin{align*}S(x)=\big\{(a_1,\dots,a_k,i) : i\in\{0,1,\dots,m-1\}\big\} \end{align*} $$

the set of successors of the vertex $x=(a_1,\dots,a_k).$ If x is not the root, then x has a unique immediate predecessor, which we will denote $\hat {x}.$ The segment connecting a vertex x with $\hat {x}$ is called an edge and denoted by $(\hat {x},x).$

A vertex $x\in \mathbb {T}$ has level $k\in \mathbb {N}$ if $x=(a_1,a_2,\dots ,a_k)$ . The level of x is denoted by $|x|$ and the set of all k-level vertices is denoted by $\mathbb {T}^k.$ We say that the edge $e=(\hat {x},x)$ has level k if $x\in \mathbb {T}^k.$

A branch of $\mathbb {T}$ is an infinite sequence of vertices starting at the root, where each of the vertices in the sequence is followed by one of its immediate successors. The collection of all branches forms the boundary of $\mathbb {T}$ , denoted by $\partial \mathbb {T}$ . Observe that the mapping $\psi :\partial \mathbb {T}\to [0,1]$ defined as

$$ \begin{align*}\psi(\pi):=\sum_{k=1}^{\infty}\frac{a_k}{m^k} \end{align*} $$

is surjective, where $\pi =(a_1,\dots , a_k,\dots )\in \partial \mathbb {T}$ and $a_k\in \{0,1,\dots ,m-1\}$ for all $k\in \mathbb {N}.$ Whenever $x=(a_1,\dots ,a_k)$ is a vertex, we set

$$ \begin{align*}\psi(x):=\sum_{j=1}^{k}\frac{a_j}{m^j}. \end{align*} $$
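A small numerical sketch of this boundary encoding, assuming the standard base-m expansion $\psi(\pi)=\sum_k a_k m^{-k}$ (the helper name `psi` is ours):

```python
def psi(coords, m):
    """Map a (finite prefix of a) branch (a_1, a_2, ...) to a point of [0, 1]
    via the base-m expansion psi = sum_k a_k * m**(-k)."""
    return sum(a * m ** -(k + 1) for k, a in enumerate(coords))

m = 3
# The branch (2, 2, 2, ...) is sent to 1 in the limit:
print(psi([2] * 40, m))
# A vertex x = (1, 0, 2) is sent to the left endpoint of the subinterval
# of [0, 1] associated with its subtree, here 1/3 + 0/9 + 2/27 = 11/27:
print(psi([1, 0, 2], m))
```

Note how the subtree hanging from a vertex x corresponds to a subinterval of $[0,1]$ of length $m^{-|x|}$ with left endpoint $\psi(x)$.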

Averaging operators. Let $F\colon \mathbb {R}^m\to \mathbb {R}$ be a continuous function. We call F an averaging operator if it satisfies the following:

$$ \begin{align*}\begin{array}{l} \displaystyle F(0,\dots,0)=0 \mbox{ and } F(1,\dots,1)=1; \\[5pt] \displaystyle F(tx_1,\dots,tx_m)=t F(x_1,\dots,x_m); \\[5pt] \displaystyle F(t+x_1,\dots,t+x_m)=t+ F(x_1,\dots,x_m),\qquad \mbox{for all } t\in\mathbb{R}; \\[5pt] \displaystyle F(x_1,\dots,x_{m})<\max\{x_1,\dots,x_{m}\},\qquad \mbox{if not all } x_j\text{'s are equal;} \\[5pt] \displaystyle F \mbox{ is nondecreasing with respect to each variable.} \end{array} \end{align*} $$

In addition, we will assume that F is permutation invariant, that is,

$$ \begin{align*}F(x_1,\dots,x_m)= F(x_{\tau(1)},\dots,x_{\tau(m)}) \end{align*} $$

for each permutation $\tau $ of $\{1,\dots ,m\}$ and that there exists $0<\kappa <1$ such that

$$ \begin{align*} F(x_1+c,\dots,x_m)\le F(x_1,\dots,x_m)+c\kappa \end{align*} $$

for all $(x_1,\dots ,x_m)\in \mathbb {R}^m$ and for all $c>0.$

As examples of averaging operators, we mention the following ones. The first example is taken from [10]. For $1<p<+\infty ,$ the operator $F^p\colon \mathbb {R}^m\to \mathbb {R}$ , $F^p(x_1,\dots ,x_m)=t$ , defined implicitly by

$$\begin{align*}\sum_{j=1}^m (x_j-t)|x_j-t|^{p-2}=0 \end{align*}$$

is a permutation invariant averaging operator. Next, we consider, for $0\le \alpha \le 1$ and $0<\beta \leq 1$ with $\alpha +\beta =1$

$$ \begin{align*}\begin{array}{ll} \displaystyle F_0(x_1,\dots,x_m)=\frac{\alpha}2 \left(\max_{1\le j\le m} \{x_j\}+\min_{1\le j\le m}\{x_j\} \right) + \frac{\beta}m\sum_{j=1}^m x_j,\\ \displaystyle F_1(x_1,\dots,x_m)=\alpha \underset{{1\le j\le m}}{\operatorname{\mbox{ median }}}\{x_j\}+ \frac{\beta}m\sum_{j=1}^m x_j,\\ \displaystyle F_2(x_1,\dots,x_m)=\alpha \underset{{1\le j\le m}}{\operatorname{\mbox{ median }}}\{x_j\}+ \frac{\beta}2 \left(\max_{1\le j\le m} \{x_j\}+\min_{1\le j\le m}\{x_j\} \right), \end{array} \end{align*} $$

where

$$\begin{align*}\underset{{1\le j\le m}}{\operatorname{\mbox{ median }}}\{x_j\}\, := \begin{cases} y_{\frac{m+1}2}, & \text{ if }m \text{ is odd}, \\ \displaystyle\frac{y_{\frac{m}2}+ y_{(\frac{m}2 +1)}}2, & \text{ if }m \text{ is even}, \end{cases} \end{align*}$$

where $\{y_1,\dots , y_m\}$ is a nondecreasing rearrangement of $\{x_1,\dots , x_m\}.$ $F_0, F_1$ , and $F_2$ are permutation invariant averaging operators. For mean value formulas on trees and for PDEs in the Euclidean space, we refer to [8, 9, 12, 15–17].
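To make these examples concrete, here is a numerical sketch of the operators above (the helper names are ours). Since the sum defining $F^p$ is strictly decreasing in t, bisection on $[\min_j x_j,\max_j x_j]$ applies; we restrict to $p\ge 2$ to avoid the singularity of $|x_j-t|^{p-2}$ at $x_j=t$:

```python
def F_p(xs, p, tol=1e-12):
    # F^p(x_1,...,x_m) = t solving sum_j (x_j - t)|x_j - t|^(p-2) = 0,
    # found by bisection (the left-hand side is decreasing in t)
    lo, hi = min(xs), max(xs)
    phi = lambda t: sum((x - t) * abs(x - t) ** (p - 2) for x in xs)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if phi(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def med(xs):
    # median: middle entry when m is odd, average of the two middle
    # entries when m is even (ys is the nondecreasing rearrangement)
    ys, m = sorted(xs), len(xs)
    return ys[m // 2] if m % 2 else (ys[m // 2 - 1] + ys[m // 2]) / 2

def F0(xs, alpha, beta):   # alpha + beta = 1
    return alpha / 2 * (max(xs) + min(xs)) + beta / len(xs) * sum(xs)

def F1(xs, alpha, beta):
    return alpha * med(xs) + beta / len(xs) * sum(xs)

# F^2 reduces to the arithmetic mean, here (0 + 1 + 3)/3 = 4/3:
print(F_p([0.0, 1.0, 3.0], 2))
```

One can check numerically that these operators satisfy the listed properties, for instance the translation invariance $F(t+x_1,\dots,t+x_m)=t+F(x_1,\dots,x_m)$.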

Given two averaging operators F and G, we deal with the system

(1.1) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle u(x)=(1-p_k)\Big\{ \displaystyle (1-\beta_k^u ) F(u(y_0),\dots,u(y_{m-1})) + \beta_k^u u (\hat{x}) \Big\}+p_k v(x) & \ x\in\mathbb{T}_m^k , \\[10pt] \displaystyle v(x)=(1-q_k)\Big\{ (1-\beta_k^v ) G(v(y_0),\dots,v(y_{m-1})) + \beta_k^v v (\hat{x}) \Big\}+q_k u(x) & \ x \in\mathbb{T}_m^k , \end{array} \right. \end{align} $$

for $k\ge 1$ , where $(y_i)_{i=0,\dots ,m-1} $ are the successors of x. We assume that $\beta _0^u=\beta _0^v=0$ ; then at the root of the tree the equations are

$$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle u(\emptyset)=(1-p_0) F(u(y_0),\dots,u(y_{m-1})) +p_0 v(\emptyset) , \\[10pt] \displaystyle v(\emptyset)=(1-q_0)G(v(y_0),\dots,v(y_{m-1}))+q_0 u(\emptyset). \end{array} \right. \end{align*} $$

In order to have a probabilistic interpretation of the equations in this system (see below), we will assume that $\beta _k^u $ , $\beta _k^v$ , $p_k$ , $q_k$ all belong to $[0,1]$ ; moreover, we will also assume that $\beta _k^u$ and $\beta _k^v$ are bounded away from 1, that is, $\beta _k^u, \beta _k^v \leq c < 1$ , and that there is no k such that $p_k=q_k=1$ .

We supplement (1.1) with boundary data. We take two continuous functions $f,g:[0,1] \mapsto \mathbb {R}$ and impose that along any branch of the tree we have that

(1.2) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle \lim_{{x}\rightarrow z = \psi (\pi)}u(x) = f(z) , \\[10pt] \displaystyle \lim_{{x}\rightarrow z = \psi(\pi)}v(x)=g(z). \end{array} \right. \end{align} $$

Here, the limits are understood along the nodes in the branch as the level goes to infinity. That is, if the branch is given by the sequence $\pi =\{x_n\} \subset \mathbb {T}_m$ , $x_{n+1} \in S(x_{n})$ , then we ask for $u(x_n) \to f(\psi (\pi ))$ as $n\to \infty $ .

Our main result is to obtain necessary and sufficient conditions on the coefficients $\beta _k^u$ , $\beta _k^v$ , $p_k$ and $q_k$ in order to have solvability of the Dirichlet problem, (1.1)–(1.2).

Theorem 1.1 For every pair of continuous functions $f,g:[0,1] \mapsto \mathbb {R}$ , the system (1.1) has a unique solution satisfying (1.2) if and only if the coefficients $\beta _k^u$ , $\beta _k^v$ , $p_k$ , and $q_k$ satisfy the following conditions:

(1.3) $$ \begin{align} \begin{array}{ll} \displaystyle \sum_{k=1}^\infty \prod_{j=1}^k \frac{\beta_j^u}{1- \beta_j^u} < + \infty, & \displaystyle \sum_{k=1}^\infty \prod_{j=1}^k \frac{\beta_j^v}{1- \beta_j^v} < + \infty, \\[10pt] \displaystyle \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^u}{(1-\beta_l^u)}\big)\frac{p_j}{(1-p_j)}<+\infty, & \displaystyle \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^v}{(1-\beta_l^v)}\big)\frac{q_j}{(1-q_j)}<+\infty,\\[10pt] \displaystyle \sum_{k=1}^\infty p_k < + \infty,& \displaystyle \sum_{k=1}^\infty q_k < + \infty. \end{array}\nonumber\\ \end{align} $$

Remark 1.2 Notice that when $\beta _k^u = \beta _k^v \equiv 0$ , the conditions (1.3) reduce to

$$ \begin{align*} \displaystyle \sum_{k=1}^\infty p_k < + \infty \quad \mbox{ and } \quad \displaystyle \sum_{k=1}^\infty q_k < + \infty. \end{align*} $$

When $\beta _k^u$ is a constant

$$ \begin{align*}\beta_k^u \equiv \beta\end{align*} $$

the first condition in (1.3) reads as

$$ \begin{align*}\sum_{k=1}^{\infty} \prod_{j=1}^k \frac{\beta_j^u}{1-\beta_j^u} = \sum_{k=1}^{\infty} \Big(\frac{\beta}{1-\beta} \Big)^k <+\infty, \end{align*} $$

and hence, we get

$$ \begin{align*}\beta < \frac12 \end{align*} $$

as the right condition for existence of solutions when $\beta _k^u$ is constant, $\beta _k^u \equiv \beta $ . Analogously, $\beta < 1/2$ is the right condition when $\beta ^v_k \equiv \beta$ is constant.

Also in this case ( $\beta _k^u$ constant), the second condition in (1.3) can be obtained from the first and the third ones: in this case, we have $p_j \to 0$ (this follows from the third condition), and then the second condition can be bounded by

$$ \begin{align*}\displaystyle \sum_{k=2}^{\infty} \Big(\frac{\beta}{1-\beta} \Big)^k \Big(\sum_{j=1}^{k-1}\frac{p_j}{(1-p_j)} \Big) \leq \Big( \sum_{k=2}^{\infty} \Big(\frac{\beta}{1-\beta} \Big)^k \Big) \Big(\sum_{j=1}^{\infty}\frac{p_j}{(1-p_j)} \Big) <+\infty. \end{align*} $$

The first series converges since $\beta < 1/2 $ (this follows from the first condition) and the second series converges since $p_j \to 0$ and the third condition holds.

In general, the second condition does not follow from the first and the third. As an example of a set of coefficients that satisfies the first and third conditions but not the second one in (1.3), let us mention

$$ \begin{align*}p_k = q_k = \frac{1}{k^2} \quad \mbox{and } \quad \beta_k^u = \beta_k^v = \frac{1}{2}(1- \frac1{k}). \end{align*} $$
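A quick numerical check of these coefficients (a sketch, not a proof; we start the indices at 2 to sidestep the degenerate value $p_1=1$): the partial sums of the first and third conditions stabilize, while those of the second condition keep growing like the harmonic series.

```python
import math

p = lambda k: 1.0 / k ** 2
beta = lambda k: 0.5 * (1.0 - 1.0 / k)
r = lambda k: beta(k) / (1.0 - beta(k))        # = (k - 1)/(k + 1)

def partial_sums(K):
    # partial sums (up to level K) of the three conditions in (1.3)
    first = sum(math.prod(r(j) for j in range(2, k + 1)) for k in range(2, K))
    third = sum(p(k) for k in range(2, K))
    second = 0.0
    for k in range(3, K):
        prod, inner = 1.0, 0.0
        # inner sum over j = k-1, ..., 2, reusing the running product
        for j in range(k - 1, 1, -1):
            prod *= r(j + 1)
            inner += prod * p(j) / (1.0 - p(j))
        second += inner
    return first, second, third

f1, s1, t1 = partial_sums(1000)
f2, s2, t2 = partial_sums(2000)
print(f2 - f1, t2 - t1)   # tiny increments: first and third conditions converge
print(s2 - s1)            # increment of order log 2: the second sum diverges
```

Indeed, here $\prod_{l=j+1}^k \beta_l/(1-\beta_l)$ telescopes to $j(j+1)/(k(k+1))$, so the inner sum behaves like $1/k$ and the second series diverges logarithmically.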

Let us briefly comment on the meaning of the conditions in (1.3). The first condition implies that when x is a node at a large level k, the influence of the predecessor on the values of the components at x is small (hence, the successors have more influence). The third condition says that at a point x with k large, the influence of the other component is small. The second condition couples the two sets of coefficients in each equation of the system. With these conditions, one guarantees that for x with large k the values $u(x)$ and $v(x)$ of the components of (1.1) depend mostly on the values of u and v at the successors of x, respectively, and this is exactly what is needed to make it possible to fulfill the boundary conditions (1.2).

Remark 1.3 Our results can be used to obtain necessary and sufficient conditions for existence and uniqueness of a solution to a single equation,

$$ \begin{align*}\displaystyle u(x)=\Big\{ \displaystyle (1-\beta_k ) F(u(y_0),\dots,u(y_{m-1})) + \beta_k u (\hat{x}) \Big\} \quad \ x\in\mathbb{T}_m^k \end{align*} $$

with

$$ \begin{align*}\lim_{{x}\rightarrow z = \psi (\pi)}u(x) = f(z). \end{align*} $$

In fact, take as coefficients $p_k=q_k=0$ and $\beta _k^u=\beta _k^v$ for every k and as boundary data $f=g$ in (1.1) to obtain that this problem has a unique solution for every continuous f if and only if

$$ \begin{align*}\sum_{k=1}^\infty \prod_{j=1}^k \frac{\beta_j}{1- \beta_j} < + \infty. \end{align*} $$

Remark 1.4 Our results can be extended to $N\times N$ systems with unknowns $(u_1,\dots ,u_N)$ , $\! u_i :\mathbb {T} \mapsto \mathbb {R}$ of the form

(1.4) $$ \begin{align} \left\lbrace \begin{array}{@{}l} \displaystyle u_i (x)\! = \! \Big(1 \! - \! \sum_{j=1}^N p_{i,j,k} \Big)\Big\{ \displaystyle (1 \! - \! \beta_k^i ) F_i(u_i(y_0),\dots,u_i(y_{m-1})) \! + \! \beta_k^i u_i (\hat{x}) \Big\} \! +\! \sum_{j=1}^N p_{i,j,k} u_j (x) , \\[10pt] \displaystyle \lim_{{x}\rightarrow z=\psi(\pi)}u_i (x) = f_i (z). \end{array} \right.\nonumber\\ \end{align} $$

Here, each $F_i$ is an averaging operator. The coefficients $0\leq p_{i,j,k}\leq 1$ depend on the component indices $i,j$ and on the level k of the node in the tree, and are assumed to satisfy $p_{i,i,k} =0$ and $0\leq \sum _{j=1}^N p_{i,j,k} < 1$ . The coefficients $0 \leq \beta _k^i <1$ depend on the level k and on the component i. For such general systems, our result says that the system (1.4) has a unique solution if and only if

$$ \begin{align*} \begin{array}{l} \displaystyle \sum_{k=1}^\infty \prod_{l=1}^k \frac{\beta_l^i}{1- \beta_l^i} < + \infty, \qquad \\[10pt] \displaystyle \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^i}{(1-\beta_l^i)}\big)\frac{\sum_{n=1}^N p_{i,n,j} }{(1- \sum_{n=1}^N p_{i,n,j} )}<+\infty, \\[10pt] \displaystyle \sum_{k=1}^\infty \sum_{j=1}^N p_{i,j,k} < + \infty, \end{array} \end{align*} $$

hold for every $i=1,\dots ,N$ .

To simplify the presentation, we first prove the main result, Theorem 1.1, in the special case in which $\beta _k^u = \beta _k^v \equiv 0$ , $p_k=q_k$ , for all k, and the averaging operators are given by

(1.5) $$ \begin{align} \begin{array}{l} \displaystyle F (u(y_0),\dots,u(y_{m-1})) = \displaystyle\frac12\max_{y\in S(x)}u(y)+\frac12\min_{y\in S(x)}u(y), \\[10pt] \displaystyle G (v(y_0),\dots,v(y_{m-1})) = \frac{1}{m}\sum_{y\in S(x)}v(y). \end{array} \end{align} $$

The fact that $\beta _k^u = \beta _k^v \equiv 0$ simplifies the computations and allows us to find an explicit solution for the special case in which the boundary data f and g are two constants, $f\equiv C_1$ and $g \equiv C_2$ . The choice of the averaging operators in (1.5) has no special relevance but allows us to give a game theoretical interpretation of our equations (see below).

After dealing with this simpler case, we deal with the general case and prove Theorem 1.1 in full generality. Here, we have general averaging operators, F and G, $\beta _k^u$ and $\beta _k^v$ can be different from zero (and are allowed to vary depending on the level of the point x), and $p_k$ and $q_k$ need not be equal. This case is more involved, and now the solution with constant boundary data $f\equiv C_1$ and $g \equiv C_2$ is not explicit (but in this case, under our conditions for existence, we will construct explicit sub- and supersolutions that take the boundary values).

Our system (1.1) with F and G given by (1.5) reads as

(1.6) $$ \begin{align} \left\lbrace \begin{array}{@{}l} \displaystyle u(x)=(1-p_k)\Big\{ \displaystyle (1-\beta_k^u ) \Big(\frac12\max_{y\in S(x)}u(y)+\frac12\min_{y\in S(x)}u(y)\Big) + \beta_k^u u (\hat{x}) \Big\}+p_k v(x), \\[10pt] \displaystyle v(x)=(1-q_k)\Big\{ (1-\beta_k^v )\Big( \frac{1}{m}\sum_{y\in S(x)}v(y) \Big) + \beta_k^v v (\hat{x}) \Big\}+q_k u(x), \ \end{array} \right.\nonumber\\ \end{align} $$

for $x \in \mathbb {T}_m$ . This system has a probabilistic interpretation that we briefly describe (see Section 4 for more details). First, assume that $\beta _k^u=\beta _k^v\equiv 0$ . The game is a two-player zero-sum game played on two boards (each board is a copy of the m-regular tree) with the following rules: the game starts at some node in one of the two trees, $(x_0,i)$ with $x_0\in \mathbb {T}$ and $i=1,2$ (the index records the board on which the position of the game lies). If the position is on the first board, then with probability $p_k$ it jumps to the other board, and with probability $(1-p_k),$ the two players play a round of a Tug-of-War game (a fair coin is tossed, and the winner chooses the next position of the game among the successors of $x_0$ ; we refer to [4, 13, 18, 19] for more details concerning Tug-of-War games); on the second board, with probability $q_k,$ the position changes to the first board, and with probability $(1-q_k),$ the position moves to one of the successors of $x_0$ with uniform probability. We fix a finite level L and end the game when the position arrives at a node at that level, which we call $x_\tau $ . We also fix two final payoffs f and g. This means that on the first board, Player I pays Player II the amount $f(\psi (x_\tau ))$ , while on the second board, the final payoff is given by $g(\psi (x_\tau ))$ . Then the value function for this game is defined as

$$ \begin{align*}w_L (x,i) = \inf_{S_I} \sup_{S_{II}} \mathbb{E}^{(x,i)} [\mbox{final payoff}] = \sup_{S_{II}} \inf_{S_I} \mathbb{E}^{(x,i)} [\mbox{final payoff}]. \end{align*} $$

Here, the $\inf $ and $\sup $ are taken over all possible strategies of the players (a strategy prescribes, at every node, the next position of the game in case the players get to play (probability $(1-p_k)$ ) and win the coin toss (probability $1/2$ )). The final payoff is given by f or g according to whether $i_\tau =1$ or $i_\tau =2$ (that is, whether the final position of the game is on the first or on the second board).

When $\beta _k^u$ and/or $\beta _k^v$ are not zero, we add at each turn of the game, a probability of passing to the predecessor of the node.

We have that the pair of functions $(u_L,v_L)$ given by $u_L(x) = w_L (x,1)$ and $v_L (x) = w_L (x,2)$ is a solution to the system (1.6) in the finite subgraph of the tree composed of nodes of level less than L. Now, we can take the limit as $L \to \infty $ of the value functions for this game and obtain that the limit is the unique solution to our system (1.6) that verifies the boundary conditions (1.2) on the infinite tree (see Section 4).
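The truncated values $(u_L,v_L)$ can be computed by backward induction. A minimal sketch for the directed case $\beta_k^u=\beta_k^v\equiv 0$ with $m=2$ and $p_k=q_k$ (the choices of $p_k$, f, and g below are ours, for illustration): at each interior node the two equations of (1.6) couple $u(x)$ and $v(x)$ linearly given the successor values, namely $u=A+p_k v$ and $v=B+p_k u$, which we solve in closed form.

```python
def solve_truncated(L, f, g, p, m=2):
    # Backward induction for (1.6) with beta = 0: boundary payoffs are
    # assigned at level L (node i at level L corresponds to psi ~ i/m**L),
    # then each interior node solves  u = A + p v,  v = B + p u.
    u = [f(i / m ** L) for i in range(m ** L)]
    v = [g(i / m ** L) for i in range(m ** L)]
    for k in range(L - 1, -1, -1):
        pk = p(k)
        nu, nv = [], []
        for i in range(m ** k):
            su = u[m * i:m * i + m]                  # successor values of u
            sv = v[m * i:m * i + m]                  # successor values of v
            A = (1 - pk) * (max(su) + min(su)) / 2   # Tug-of-War average
            B = (1 - pk) * sum(sv) / m               # uniform average
            nu.append((A + pk * B) / (1 - pk ** 2))
            nv.append((B + pk * A) / (1 - pk ** 2))
        u, v = nu, nv
    return u[0], v[0]                                # values at the root

p = lambda k: 1.0 / (k + 2) ** 2                     # summable, p_k < 1
f = lambda z: z
g = lambda z: z * z
print(solve_truncated(8, f, g, p))
print(solve_truncated(12, f, g, p))                  # close to the previous pair
```

As L grows, the root values stabilize, which is consistent with the convergence of $w_L$ described above.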

Organization of the paper: In the next section, Section 2, we deal with our system in the special case of the directed tree, $\beta _k^u = \beta _k^v \equiv 0$ with $p_k=q_k$ , and F and G given by (1.5); in Section 3, we deal with the general case of two averaging operators F and G and with general $\beta _k^u$ , $\beta _k^v$ ; finally, in Section 4, we include the game theoretical interpretation of our results.

2 A particular system on the directed tree

Our main goal in this section is to find necessary and sufficient conditions on the sequence of coefficients $\{p_k\}$ in order to have a solution to the system

(2.1) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle u(x)=(1-p_k)\Big\{ \displaystyle\frac12\max_{y\in S(x)}u(y)+\frac12\min_{y\in S(x)}u(y) \Big\}+p_k v(x) \qquad & \ x\in\mathbb{T}_m , \\[10pt] \displaystyle v(x)=(1-p_k)\Big(\frac{1}{m}\sum_{y\in S(x)}v(y) \Big)+p_k u(x) \qquad & \ x \in\mathbb{T}_m , \end{array} \right. \end{align} $$

with

(2.2) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle \lim_{{x}\rightarrow z=\psi(\pi)}u(x) = f(z) , \\[10pt] \displaystyle \lim_{{x}\rightarrow z=\psi(\pi)}v(x)=g(z). \end{array} \right. \end{align} $$

Here, $f,g:[0,1]\rightarrow \mathbb {R}$ are continuous functions.

First, let us prove a lemma where we obtain a solution to our system when the functions f and g are just two constants.

Lemma 2.1 Given $C_1, C_2 \in \mathbb {R}$ , suppose that

$$ \begin{align*}\sum_{k=0}^{\infty}p_k <+\infty,\end{align*} $$

then there exists a solution $(u,v)$ of (2.1) and (2.2) with $f\equiv C_1$ and $g\equiv C_2.$

Proof The solution that we are going to obtain will be constant at each level. That is,

$$ \begin{align*}u(x_k)=a_k \qquad \mbox{ and } \qquad v(x_k)=b_k\end{align*} $$

(here $x_k$ is any vertex at level k) for all $k\geq 0$ . With this simplification, the system (2.1) can be expressed as

$$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle a_k=(1-p_k)a_{k+1}+p_k b_k, \\[5pt] \displaystyle b_k=(1-p_k)b_{k+1}+p_k a_k, \end{array} \right. \end{align*} $$

for each $k\geq 0$ . Then, we obtain the following system of linear equations:

$$ \begin{align*}\frac{1}{1-p_k}\begin{bmatrix} 1 & -p_k \\ -p_k & 1 \end{bmatrix}\begin{bmatrix} a_k \\ b_k \end{bmatrix}=\begin{bmatrix} a_{k+1} \\ b_{k+1} \end{bmatrix}. \end{align*} $$

Iterating, we obtain

(2.3) $$ \begin{align} \left( \prod_{j=0}^{k}\frac{1}{1-p_j} \right) \prod_{j=0}^{k}\begin{bmatrix} 1 & -p_j \\ -p_j & 1 \end{bmatrix}\begin{bmatrix} a_0 \\ b_0 \end{bmatrix}=\begin{bmatrix} a_{k+1} \\ b_{k+1} \end{bmatrix}. \end{align} $$

Hence, our next goal is to analyze the convergence of the involved products as $k\to \infty $ . First, we deal with

$$ \begin{align*}\prod_{j=0}^{k}\frac{1}{1-p_j}. \end{align*} $$

Taking the logarithm, we get

$$ \begin{align*}\ln\Big(\prod_{j=0}^{k}\frac{1}{1-p_j}\Big)=-\sum_{j=0}^{k}\ln(1-p_j). \end{align*} $$

Now, using that $\lim _{x\rightarrow 0}\frac {-\ln (1-x)}{x}=1$ and that $\sum _{j=0}^{\infty }p_j<\infty $ by hypothesis, we deduce by limit comparison that the series converges,

$$ \begin{align*}-\sum_{j=0}^{\infty}\ln(1-p_j)=U<\infty.\end{align*} $$

Therefore, we have that the product also converges

$$ \begin{align*}\prod_{j=0}^{\infty}\frac{1}{1-p_j}=e^{U}=\theta<\infty. \end{align*} $$

Remark that $U>0$ and hence $1<\theta <\infty $ .

Next, let us deal with the matrices and study the convergence of

$$ \begin{align*}\prod_{j=0}^{k}\begin{bmatrix} 1 & -p_j \\ -p_j & 1 \end{bmatrix}. \end{align*} $$

Given $j\geq 0$ , let us find the eigenvalues of

$$ \begin{align*}M_j=\begin{bmatrix} 1 & -p_j \\ -p_j & 1 \end{bmatrix}. \end{align*} $$

It is easy to verify that the eigenvalues are $\{1-p_j,1+p_j\}$ , and the associated eigenvectors are $[1 \ 1]$ and $[-1 \ 1]$ , respectively. It is important to remark that these eigenvectors are independent of $p_j$ . Then, we introduce the orthogonal matrix ( $Q^{-1}=Q^T$ )

$$ \begin{align*}Q=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}, \end{align*} $$

and we have diagonalized $M_j$ ,

$$ \begin{align*}M_j=Q\begin{bmatrix} 1-p_j & 0 \\ 0 & 1+p_j \end{bmatrix}Q^T, \end{align*} $$

for all $j\geq 0$ . Then

$$ \begin{align*}\prod_{j=0}^{k}M_j=Q\Big(\prod_{j=0}^{k}\begin{bmatrix} 1-p_j & 0 \\ 0 & 1+p_j \end{bmatrix}\Big)Q^T=Q\begin{bmatrix} \prod_{j=0}^{k}(1-p_j) & 0 \\ 0 & \prod_{j=0}^{k}(1+p_j) \end{bmatrix}Q^T. \end{align*} $$

Using similar arguments as before, we obtain that

$$ \begin{align*}\prod_{j=0}^{\infty}(1+p_j)=\alpha<\infty \qquad \mbox{ and } \qquad \prod_{j=0}^{\infty}(1-p_j)=\frac{1}{\theta}=\beta.\end{align*} $$

Notice that $0<\beta <1$ and $1<\alpha <\infty $ . Therefore, taking the limit as $k\rightarrow \infty $ in (2.3), we obtain

$$ \begin{align*}\frac{1}{\beta}Q\begin{bmatrix} \beta & 0 \\ 0 & \alpha \end{bmatrix}Q^T\begin{bmatrix} a_0 \\ b_0 \end{bmatrix}=\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}. \end{align*} $$

Then, given two constants $C_1,C_2$ , this linear system has a unique solution $[a_0 \ b_0]$ (the matrix is nonsingular, since its eigenvalues are $1$ and $\alpha /\beta $ ). Once we have the value of $[a_0 \ b_0]$ , we can obtain the values $[a_k \ b_k]$ at all levels using (2.3). The limits (2.2) are satisfied by this construction.
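The construction in this proof can be carried out numerically. The sketch below (with the illustrative summable choice $p_k = 1/(k+3)^2$, which is ours) approximates the limit product appearing in (2.3), solves the resulting $2\times 2$ linear system for $(a_0,b_0)$, and then marches the recursion forward, checking that $(a_k,b_k)$ approaches $(C_1,C_2)$.

```python
def mat_product(p, K):
    # running product (prod_j 1/(1-p_j)) * (prod_j [[1, -p_j], [-p_j, 1]])
    M = [[1.0, 0.0], [0.0, 1.0]]
    for j in range(K):
        pj = p(j)
        s = 1.0 / (1.0 - pj)
        T = [[s, -s * pj], [-s * pj, s]]
        M = [[T[0][0] * M[0][0] + T[0][1] * M[1][0],
              T[0][0] * M[0][1] + T[0][1] * M[1][1]],
             [T[1][0] * M[0][0] + T[1][1] * M[1][0],
              T[1][0] * M[0][1] + T[1][1] * M[1][1]]]
    return M

p = lambda k: 1.0 / (k + 3) ** 2
C1, C2, K = 1.0, -2.0, 4000

# solve M [a0, b0]^T = [C1, C2]^T by inverting the 2x2 matrix
M = mat_product(p, K)
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
a0 = (M[1][1] * C1 - M[0][1] * C2) / det
b0 = (M[0][0] * C2 - M[1][0] * C1) / det

# march the level-by-level recursion a_{k+1} = (a_k - p_k b_k)/(1 - p_k)
a, b = a0, b0
for k in range(K):
    pk = p(k)
    a, b = (a - pk * b) / (1 - pk), (b - pk * a) / (1 - pk)

print(a, b)   # approaches (C1, C2) = (1, -2)
```

Note that all the matrices $M_j$ share the same eigenvectors and hence commute, so the order of the product is immaterial.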

Now, we need to introduce the following definition.

Definition 2.1 Given $f,g: [0,1] \to \mathbb {R}$ and a sequence $(p_k)_{k\geq 0}$ ,

  • The pair $(z,w)$ is a subsolution of (2.1) and (2.2) if

    $$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle z(x)\leq (1-p_k)\Big\{ \displaystyle \frac{1}{2}\max_{y\in S(x)}z(y)+\frac{1}{2}\min_{y\in S(x)}z(y) \Big\}+p_k w(x), & x\in\mathbb{T}_m , \\[10pt] \displaystyle w(x)\leq (1-p_k)\Big(\frac{1}{m}\sum_{y\in S(x)}w(y) \Big)+p_k z(x), & x \in\mathbb{T}_m , \end{array} \right. \end{align*} $$
    $$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \limsup_{{x}\rightarrow \psi(\pi)}z(x) \leq f(\psi(\pi)) , \\[10pt] \displaystyle \limsup_{{x}\rightarrow \psi(\pi)}w(x)\leq g(\psi(\pi)). \end{array} \right. \end{align*} $$
  • The pair $(z,w)$ is a supersolution of (2.1) and (2.2) if

    $$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle z(x)\geq (1-p_k)\Big\{ \displaystyle\frac{1}{2}\max_{y\in S(x)}z(y)+\frac{1}{2}\min_{y\in S(x)}z(y) \Big\}+p_k w(x), & x\in\mathbb{T}_m , \\[10pt] \displaystyle w(x)\geq (1-p_k)\Big(\frac{1}{m}\sum_{y\in S(x)}w(y) \Big)+p_k z(x), & x \in\mathbb{T}_m , \end{array} \right. \end{align*} $$
    $$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \liminf_{{x}\rightarrow \psi(\pi)}z(x) \geq f(\psi(\pi)) , \\[10pt] \displaystyle \liminf_{{x}\rightarrow \psi(\pi)}w(x)\geq g(\psi(\pi)). \end{array} \right. \end{align*} $$

With these definitions, we are ready to prove a comparison principle.

Lemma 2.2 (Comparison Principle)

Let $(u_{\ast },v_{\ast })$ be a subsolution of (2.1) and (2.2), and let $(u^{\ast },v^{\ast })$ be a supersolution of (2.1) and (2.2). Then it holds that

$$ \begin{align*} u_{\ast} (x)\leq u^{\ast}(x) \quad \mbox{and} \quad v_{\ast}(x)\leq v^{\ast}(x) \qquad \forall x \in \mathbb{T}_m. \end{align*} $$

Proof Suppose, arguing by contradiction, that

$$ \begin{align*} \max \Big\{\sup_{x\in\mathbb{T}_m}\{u_{\ast}-u^{\ast}\},\sup_{x\in\mathbb{T}_m}\{v_{\ast}-v^{\ast}\}\Big\}\geq \eta>0. \end{align*} $$

Let

$$ \begin{align*} Q=\Big\{ x\in\mathbb{T}_m : \max\{(u_{\ast}-u^{\ast})(x),(v_{\ast}-v^{\ast})(x)\}\geq\eta\Big\} \neq\emptyset. \end{align*} $$

Claim # 1 If $x\in Q$ , there exists $y\in S(x)$ such that $y\in Q$ .

Proof of the claim

Suppose that

(2.4) $$ \begin{align} u_{\ast}(x)-u^{\ast}(x)\geq\eta \quad \mbox{and} \quad u_{\ast}(x)-u^{\ast}(x)\geq v_{\ast}(x)-v^{\ast}(x), \end{align} $$

then

$$ \begin{align*} & u_{\ast}(x)-u^{\ast}(x) \\& \quad \leq \displaystyle (1\! - \! p_k)\Big\{ \frac{1}{2}\max_{y\in S(x)}u_{\ast}(y)+\frac{1}{2}\min_{y\in S(x)}u_{\ast}(y)- \frac{1}{2}\max_{y\in S(x)}u^{\ast}(y)-\frac{1}{2}\min_{y\in S(x)}u^{\ast}(y) \Big\}\\& \qquad + p_k (v_{\ast}(x)-v^{\ast}(x)). \end{align*} $$

Using (2.4) in the last term, we obtain

$$ \begin{align*} \displaystyle \displaystyle &(1-p_k)\big(u_{\ast}(x)-u^{\ast}(x)\big) \\\displaystyle & \quad \leq \displaystyle (1-p_k)\Big\{ \frac{1}{2}\max_{y\in S(x)}u_{\ast}(y)+\frac{1}{2}\min_{y\in S(x)}u_{\ast}(y)- \frac{1}{2}\max_{y\in S(x)}u^{\ast}(y)-\frac{1}{2}\min_{y\in S(x)}u^{\ast}(y) \Big\}. \end{align*} $$

Since $(1-p_k)$ is different from zero, dividing by it and using again (2.4), we arrive at

$$ \begin{align*} &\displaystyle \eta\leq u_{\ast}(x)-u^{\ast}(x)\leq \displaystyle \Big(\frac{1}{2}\max_{y\in S(x)}u_{\ast}(y)- \frac{1}{2}\max_{y\in S(x)}u^{\ast}(y)\Big) \\&\quad \displaystyle +\Big(\frac{1}{2}\min_{y\in S(x)}u_{\ast}(y)-\frac{1}{2}\min_{y\in S(x)}u^{\ast}(y)\Big). \end{align*} $$

Let $u_{\ast }(y_1)=\max _{y\in S(x)}u_{\ast }(y)$ ; it is clear that $u^{\ast }(y_1)\leq \max _{y\in S(x)}u^{\ast }(y)$ . On the other hand, let $u^{\ast }(y_2)=\min _{y\in S(x)}u^{\ast }(y)$ ; now, we have $u_{\ast }(y_2)\geq \min _{y\in S(x)}u_{\ast }(y)$ . Hence

$$ \begin{align*} \eta\leq \Big(\frac{1}{2} u_{\ast}(y_1)-\frac{1}{2} u^{\ast}(y_1)\Big)+ \Big(\frac{1}{2} u_{\ast}(y_2)-\frac{1}{2} u^{\ast}(y_2)\Big). \end{align*} $$

This implies that there exists $y\in S(x)$ such that $u_{\ast }(y)-u^{\ast }(y)\geq \eta $ . Thus $y\in Q$ .

Now suppose the other case, that is,

(2.5) $$ \begin{align} v_{\ast}(x)-v^{\ast}(x)\geq\eta \quad \mbox{and} \quad v_{\ast}(x)-v^{\ast}(x)\geq u_{\ast}(x)-u^{\ast}(x). \end{align} $$

Now, we use the second equation. We have

$$ \begin{align*} v_{\ast}(x)-v^{\ast}(x)\leq (1-p_k) \Big(\frac{1}{m}\sum_{y\in S(x)}(v_{\ast}(y)-v^{\ast}(y)) \Big) +p_k (u_{\ast}(x)-u^{\ast}(x)). \end{align*} $$

Using (2.5) again, we obtain

$$ \begin{align*} \eta\leq v_{\ast}(x)-v^{\ast}(x)\leq \frac{1}{m}\sum_{y\in S(x)}(v_{\ast}(y)-v^{\ast}(y)). \end{align*} $$

Hence, there exists $y\in S(x)$ such that $v_{\ast }(y)-v^{\ast }(y)\geq \eta $ . Thus $y\in Q$ . This ends the proof of the claim.

Now, given $y_0\in Q,$ we obtain a sequence $(y_k)_{k\geq 1}$ contained in a branch, $y_{k+1} \in S(y_k)$ , with $(y_k)_{k\geq 1}\subset Q$ . Hence, we have

$$ \begin{align*} u_{\ast}(y_k)-u^{\ast}(y_k)\geq\eta \quad \mbox{or} \quad v_{\ast}(y_k)-v^{\ast}(y_k)\geq\eta. \end{align*} $$

Then, there exists a subsequence $(y_{k_j})_{j\geq 1}$ along which one of the two alternatives holds for every j:

(2.6) $$ \begin{align} u_{\ast}(y_{k_j})-u^{\ast}(y_{k_j})\geq\eta \quad \mbox{or} \quad v_{\ast}(y_{k_j})-v^{\ast}(y_{k_j})\geq\eta. \end{align} $$

Let $\pi $ denote the branch determined by the sequence $(y_k)$ . Suppose that the first case in (2.6) holds; then

$$ \begin{align*} \limsup_{j\rightarrow\infty}u_{\ast}(y_{k_j})\leq f(\psi(\pi)) \quad \mbox{and} \quad \liminf_{j\rightarrow\infty}u^{\ast}(y_{k_j})\geq f(\psi(\pi)). \end{align*} $$

Thus, we have

$$ \begin{align*} &\displaystyle 0<\eta\leq \liminf_{j\rightarrow\infty}(u_{\ast}(y_{k_j})-u^{\ast}(y_{k_j}))\\& \quad \displaystyle \leq\limsup_{j\rightarrow\infty}u_{\ast}(y_{k_j})-\liminf_{j\rightarrow\infty}u^{\ast}(y_{k_j})\leq f(\psi(\pi))-f(\psi(\pi))=0. \end{align*} $$

Here, we arrive at a contradiction. The other case is analogous. This ends the proof.

To obtain the existence of a solution to our system in the general case (f and g continuous functions), we will use Perron’s method. Hence, let us introduce the following set:

$$ \begin{align*} \mathscr{A}=\big\{(z,w) \colon (z,w) \mbox{ is a subsolution to (2.1) and (2.2)} \big\}. \end{align*} $$

First, we observe that $\mathscr {A}$ is not empty when f and g are bounded below.

Lemma 2.3 Given $f,g\in C([0,1])$ , the set $\mathscr {A}$ verifies $\mathscr {A}\neq \emptyset $ .

Proof Taking $z(x_k)=w(x_k)=\min \{\min f,\min g \}$ for all $k\geq 0$ , we obtain a pair such that $(z,w)\in \mathscr {A}$ .

Now, we prove that the functions in $\mathscr {A}$ are uniformly bounded above.

Lemma 2.4 Let $M=\max \{\max f, \max g \}$ . If $(z,w)\in \mathscr {A}$ , then

$$ \begin{align*}z(x)\leq M \quad \mbox{and} \quad w(x)\leq M, \qquad \forall x \in \mathbb{T}_m.\end{align*} $$

Proof Suppose that the statement is false. Then there exists a vertex $x_0\in \mathbb {T}_m$ (in some level k) such that $z(x_0)>M$ or $w(x_0)>M$ . Suppose first that $z(x_0)\geq w(x_0)$ . Then $z(x_0)>M$ .

Claim # 1 There exists $y_0\in S(x_0)$ such that $z(y_0)>M$ . Otherwise,

$$ \begin{align*} \displaystyle z(x_0)&\leq (1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x_0)}z(y)+\frac{1}{2}\min_{y\in S(x_0)}z(y)\Big\}+p_k w(x_0) \\[10pt] \displaystyle &\leq(1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x_0)}z(y)+\frac{1}{2}\min_{y\in S(x_0)}z(y)\Big\}+p_k z(x_0). \end{align*} $$

Then

$$\begin{align*} (1-p_k)z(x_0)\leq (1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x_0)}z(y)+\frac{1}{2}\min_{y\in S(x_0)}z(y)\Big\}\leq (1-p_k)\Big\{\frac{1}{2} M+\frac{1}{2} M\Big\}. \end{align*}$$

Here, we obtain the contradiction

$$ \begin{align*} M<z(x_0)\leq M. \end{align*} $$

Now suppose the other case, that $w(x_0)\ge z(x_0)$ ; then $w(x_0)>M$ .

Claim # 2 There exists $y_0\in S(x_0)$ such that $w(y_0)>M$ . Otherwise,

$$ \begin{align*} \displaystyle w(x_0)&\leq (1-p_k)\Big(\frac{1}{m}\sum_{y\in S(x_0)}w(y)\Big)+p_k z(x_0)\le (1-p_k)M+p_kw(x_0),\\ \displaystyle &\Rightarrow w(x_0)\le M \end{align*} $$

which is again a contradiction.

With these two claims, we can ensure that there exists an (infinite) sequence $x=(x_0,x_0^1, x_0^2, \dots )$ that belongs to a branch such that $z(x_0^j)>M$ (or $w(x_0^j)>M$ ). Then, taking the limit along this branch, we obtain

$$ \begin{align*}\limsup_{{x}\rightarrow \pi}z(x)>M\end{align*} $$

and we arrive at a contradiction, since any subsolution must satisfy $\limsup _{{x}\rightarrow \pi }z(x)\leq f(\psi (\pi ))\leq M$ .

Now, let us define

(2.7) $$ \begin{align} u(x):=\sup_{(z,w)\in\mathscr{A}}z(x) \qquad \mbox{and} \qquad v(x):=\sup_{(z,w)\in\mathscr{A}}w(x). \end{align} $$

The next result shows that this pair of functions is in fact the desired solution to the system (2.1).

Theorem 2.5 Suppose that

$$ \begin{align*} \sum_{k=0}^{\infty}p_k <+\infty. \end{align*} $$

The pair $(u,v)$ given by (2.7) is the unique solution to (2.1) and (2.2).

Proof Given $\varepsilon>0$ , there exists $\delta>0$ such that $|f(\psi (\pi _1))-f(\psi (\pi _2))|<\varepsilon $ and $|g(\psi (\pi _1))-g(\psi (\pi _2))|<\varepsilon $ if $|\psi (\pi _1) - \psi (\pi _2)|<\delta $ . Let $k\in \mathbb {N}$ be such that $\frac {1}{m^k}<\delta $ . We divide the interval $[0,1]$ into the $m^k$ subintervals $I_j=[\frac {j-1}{m^k},\frac {j}{m^k}]$ for $1\leq j\leq m^k$ . Let us consider the constants

$$ \begin{align*} C_f^j=\min_{x\in I_j}f \quad \mbox{and} \quad C_g^j=\min_{x\in I_j}g \end{align*} $$

for $1\leq j\leq m^k$ .

Now, we observe that, if we consider $\mathbb {T}_m^k=\{x\in \mathbb {T}_m : |x|=k \},$ we have $\# \mathbb {T}_m^k=m^k$ and, given $x\in \mathbb {T}_m^k$ , any branch that passes through this vertex at the kth level has a limit that belongs to only one segment $I_j$ . Then, we have a one-to-one correspondence between the set $\mathbb {T}_m^k$ and the segments $(I_j)_{j=1}^{m^k}$ . Let us call $x^j$ the vertex associated with $I_j$ .
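
This correspondence can be made concrete. The following Python sketch is an illustration only: it assumes the standard base-$m$ encoding of branches, in which a vertex at level k is identified with the string of its path digits in $\{0,\dots ,m-1\}$ from the root.

```python
def interval_of_vertex(path, m):
    """Map a level-k vertex, given by its path of digits in {0, ..., m-1},
    to the subinterval of [0, 1] where the branches through it end."""
    k = len(path)
    # Read the path as a base-m integer: the (0-indexed) interval index.
    j = sum(d * m ** (k - 1 - i) for i, d in enumerate(path))
    return (j / m ** k, (j + 1) / m ** k)

# In a binary tree (m = 2), the level-2 vertex reached by "right, left"
# corresponds to the subinterval [1/2, 3/4].
assert interval_of_vertex((1, 0), 2) == (0.5, 0.75)
```

Each of the $m^k$ digit strings yields a distinct subinterval, which is the one-to-one correspondence used above.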

Fix now $1\leq j\leq m^k$ . If we consider $x^j$ as a first vertex, we obtain a tree whose boundary (via $\psi $ ) is $I_j$ . Using Lemma 2.1 in this tree, we can obtain a solution of (2.1) and (2.2) with the constants $C_f^j$ and $C_g^j$ .

Thus, doing this for all the vertices of $\mathbb {T}_m^k$ , we obtain the values of a pair of functions, which we call $(\underline {u},\underline {v})$ , at every vertex $x\in \mathbb {T}_m$ with $|x|\geq k$ . Then, using the equation (2.1), we can obtain the values at the $(k-1)$ -level. In fact, we have

$$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \underline{u}(x_{k-1})=(1-p_k)\Big\{ \frac{1}{2}\max_{y\in S(x_{k-1})}\underline{u}(y)+\frac{1}{2}\min_{y\in S(x_{k-1})}\underline{u}(y) \Big\}+p_k \underline{v}(x_{k-1}), \\ \displaystyle \underline{v}(x_{k-1})=(1-p_k)\Big(\frac{1}{m}\sum_{y\in S(x_{k-1})}\underline{v}(y) \Big)+p_k \underline{u}(x_{k-1}). \end{array} \right. \end{align*} $$

Then, if we call

$$ \begin{align*}A_k=\frac{1}{2}\max_{y\in S(x_{k-1})}\underline{u}(y)+\frac{1}{2}\min_{y\in S(x_{k-1})}\underline{u}(y) \quad \mbox{ and } \quad B_k=\frac{1}{m}\sum_{y\in S(x_{k-1})}\underline{v}(y),\end{align*} $$

we obtain the system

$$ \begin{align*} \frac{1}{1-p_k}\begin{bmatrix} 1 & -p_k \\ -p_k & 1 \end{bmatrix}\begin{bmatrix} \underline{u}(x_{k-1}) \\ \underline{v}(x_{k-1}) \end{bmatrix}=\begin{bmatrix} A_k \\ B_k \end{bmatrix}. \end{align*} $$

Solving this system, we obtain all the values at the $(k-1)$ -level, and we continue in this way until we obtain values on the whole tree $\mathbb {T}_m$ . Let us observe that the pair of functions $(\underline {u},\underline {v})$ verifies

$$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \underline{u}(x)=(1-p_k)\Big\{ \frac{1}{2}\max_{y\in S(x)}\underline{u}(y)+\frac{1}{2}\min_{y\in S(x)}\underline{u}(y) \Big\}+p_k \underline{v}(x) \qquad & \ x\in\mathbb{T}_m , \\[10pt] \displaystyle \underline{v}(x)=(1-p_k)\Big(\frac{1}{m}\sum_{y\in S(x)}\underline{v}(y) \Big)+p_k \underline{u}(x) \qquad & \ x \in\mathbb{T}_m , \\[10pt] \end{array} \right. \end{align*} $$
$$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \lim_{{x}\rightarrow \pi}\underline{u}(x) = C_f^j \quad \mbox{if} \ \pi\in I_j, \\[10pt] \displaystyle \lim_{{x}\rightarrow \pi}\underline{v}(x)=C_g^j \quad \mbox{if} \ \pi\in I_j. \end{array} \right. \end{align*} $$
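
Each step of this downward construction inverts the same $2\times 2$ matrix: since $\begin{bmatrix} 1 & -p_k \\ -p_k & 1 \end{bmatrix}$ has inverse $\frac{1}{1-p_k^2}\begin{bmatrix} 1 & p_k \\ p_k & 1 \end{bmatrix}$, the solution has the closed form $\underline{u}(x_{k-1})=(A_k+p_kB_k)/(1+p_k)$ and $\underline{v}(x_{k-1})=(B_k+p_kA_k)/(1+p_k)$. A minimal numerical check of this closed form (Python; the values of $A_k$, $B_k$, and $p_k$ are illustrative):

```python
def solve_level(A_k, B_k, p_k):
    """Solve (1/(1-p)) [[1, -p], [-p, 1]] (u, v)^T = (A, B)^T.

    Multiplying by the inverse matrix gives the closed form
    u = (A + p*B) / (1 + p),  v = (B + p*A) / (1 + p).
    """
    u = (A_k + p_k * B_k) / (1.0 + p_k)
    v = (B_k + p_k * A_k) / (1.0 + p_k)
    return u, v

# The pair must satisfy the original equations
# u = (1-p)*A + p*v  and  v = (1-p)*B + p*u.
A, B, p = 3.0, 1.0, 0.25
u, v = solve_level(A, B, p)
assert abs(u - ((1 - p) * A + p * v)) < 1e-12
assert abs(v - ((1 - p) * B + p * u)) < 1e-12
```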

We can make the same construction but using the constants

$$ \begin{align*} D_f^j=\max_{x\in I_j}f \quad \mbox{and} \quad D_g^j=\max_{x\in I_j}g \end{align*} $$

to obtain the pair of functions $(\overline {u},\overline {v})$ that verifies

$$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \overline{u}(x)=(1-p_k)\Big\{ \frac{1}{2}\max_{y\in S(x)}\overline{u}(y)+\frac{1}{2}\min_{y\in S(x)}\overline{u}(y) \Big\}+p_k \overline{v}(x) \qquad & \ x\in\mathbb{T}_m , \\[10pt] \displaystyle \overline{v}(x)=(1-p_k)\Big(\frac{1}{m}\sum_{y\in S(x)}\overline{v}(y) \Big)+p_k \overline{u}(x) \qquad & \ x \in\mathbb{T}_m , \\[10pt] \end{array} \right. \end{align*} $$

and

$$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \lim_{{x}\rightarrow \pi}\overline{u}(x) = D_f^j \quad \mbox{if} \ \pi\in I_j, \\[10pt] \displaystyle \lim_{{x}\rightarrow \pi}\overline{v}(x)=D_g^j \quad \mbox{if} \ \pi\in I_j. \\ \end{array} \right. \end{align*} $$

Now, we observe that the pair $(\underline {u},\underline {v})$ is a subsolution and $(\overline {u},\overline {v})$ is a supersolution. We only need to observe that

$$ \begin{align*} \begin{array}{ll} \displaystyle \lim_{{x}\rightarrow \pi}\underline{u}(x) =\limsup_{{x}\rightarrow \pi}\underline{u}(x)= C_f^j\leq f(\psi(\pi)) \quad \mbox{if} \ \psi(\pi)\in I_j, \\[10pt] \displaystyle \lim_{{x}\rightarrow \pi}\underline{v}(x)=\limsup_{{x}\rightarrow \pi}\underline{v}(x)=C_g^j\leq g(\psi(\pi)) \quad \mbox{if} \ \psi(\pi)\in I_j, \\ \end{array} \end{align*} $$

and

$$ \begin{align*} \begin{array}{ll} \displaystyle \lim_{{x}\rightarrow \pi}\overline{u}(x) =\liminf_{{x}\rightarrow \pi}\overline{u}(x)= D_f^j\geq f(\psi(\pi)) \quad \mbox{if} \ \psi(\pi)\in I_j, \\[10pt] \displaystyle \lim_{{x}\rightarrow \pi}\overline{v}(x)=\liminf_{{x}\rightarrow \pi}\overline{v}(x)=D_g^j\geq g(\psi(\pi)) \quad \mbox{if} \ \psi(\pi)\in I_j. \end{array} \end{align*} $$

Now, let us see that $(u,v)\in \mathscr {A}$ . Take $(z,w)\in \mathscr {A}$ and fix $x\in \mathbb {T}_m$ , then

$$ \begin{align*} z(x)\leq(1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x)}z(y)+\frac{1}{2}\min_{y\in S(x)}z(y)\Big\}+p_k w(x). \end{align*} $$

As $z\leq u$ and $w\leq v,$ we obtain

$$ \begin{align*} z(x)\leq(1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x)}u(y)+\frac{1}{2}\min_{y\in S(x)}u(y)\Big\}+p_k v(x). \end{align*} $$

Taking supremum in the left-hand side, we obtain

$$ \begin{align*} u(x)\leq(1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x)}u(y)+\frac{1}{2}\min_{y\in S(x)}u(y)\Big\}+p_k v(x). \end{align*} $$

Analogously, we obtain the corresponding inequality for v.

On the other hand, using the comparison principle, we have $z\le \overline {u}$ , and then we conclude that $u\le \overline {u}$ . Thus,

$$ \begin{align*} \limsup_{{x}\rightarrow \pi}u(x)\le \limsup_{{x}\rightarrow \pi}\overline{u}(x)=\lim_{{x}\rightarrow \pi}\overline{u}(x)=D_f^j\le f(\psi(\pi))+\varepsilon. \end{align*} $$

Using that $\varepsilon>0$ is arbitrary, we get

$$ \begin{align*} \limsup_{{x}\rightarrow \pi}u(x)\leq f(\psi(\pi)), \end{align*} $$

and the same with v. Hence, we conclude that $(u,v)\in \mathscr {A}$ .

Now, we want to see that $(u,v)$ verifies the equalities in the equations. We argue by contradiction. First, assume that there is a point $x_0\in \mathbb {T}_m$ , where an inequality is strict, that is,

$$ \begin{align*} u(x_0)<(1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x_0)}u(y)+\frac{1}{2}\min_{y\in S(x_0)}u(y)\Big\}+p_k v(x_0). \end{align*} $$

Let

$$ \begin{align*} \delta=(1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x_0)}u(y)+\frac{1}{2}\min_{y\in S(x_0)}u(y)\Big\}+p_k v(x_0)-u(x_0)>0 \end{align*} $$

and consider the function

$$\begin{align*}u_{0} (x) =\left\lbrace \begin{array}{@{}ll} u(x), & \ \ x \neq x_{0}, \\ \displaystyle u(x)+\frac{\delta}{2}, & \ \ x =x_{0}. \\ \end{array} \right. \end{align*}$$

Observe that

$$\begin{align*}u_{0}(x_{0})=u(x_{0})+\frac{\delta}{2}< (1-p_k)\Big\{\frac{1}{2} \max_{y \in S(x_{0})}u(y) + \frac{1}{2} \min_{y \in S(x_0)}u(y)\Big\}+p_kv(x_{0}), \end{align*}$$

and hence

$$\begin{align*}u_{0}(x_{0})<(1-p_k)\Big\{\frac{1}{2} \max_{y \in S(x_{0})}u_0(y) + \frac{1}{2} \min_{y \in S(x_0)}u_0(y)\Big\}+p_kv(x_{0}). \end{align*}$$

Then we have that $(u_{0},v)\in \mathscr {A}$ but $u_{0}(x_{0})>u(x_{0})$ , reaching a contradiction. A similar argument shows that $(u,v)$ also solves the second equation in the system.

By construction, $\underline {u}\leq u$ and $\underline {v}\leq v$ . On the other hand, given $(z,w)\in \mathscr {A}$ , the comparison principle gives $z\leq \overline {u}$ and $w\leq \overline {v}$ ; taking the supremum, we obtain $u\leq \overline {u}$ and $v\leq \overline {v}$ .

Now observe that for $\psi (\pi )\in I_j$ , it holds that $C_f^j\geq f(\psi (\pi ))-\varepsilon $ and $D_f^j\leq f(\psi (\pi ))+\varepsilon $ , and $C_g^j\geq g(\psi (\pi ))-\varepsilon $ and $D_g^j\leq g(\psi (\pi ))+\varepsilon $ . Thus, we have

$$ \begin{align*} \displaystyle f(\psi(\pi))-\varepsilon&\leq C_f^j=\liminf_{{x}\rightarrow \pi}\underline{u}(x)\leq\liminf_{{x}\rightarrow \pi}u(x) \\ \displaystyle &\leq\limsup_{{x}\rightarrow \pi}u(x)\leq\limsup_{{x}\rightarrow \pi}\overline{u}(x)=D_f^j\leq f(\psi(\pi))+\varepsilon \end{align*} $$

and

$$ \begin{align*} \displaystyle g(\psi(\pi))-\varepsilon&\leq C_g^j=\liminf_{{x}\rightarrow \pi}\underline{v}(x)\leq\liminf_{{x}\rightarrow \pi}v(x)\\ \displaystyle &\leq\limsup_{{x}\rightarrow \pi}v(x)\leq\limsup_{{x}\rightarrow \pi}\overline{v}(x)=D_g^j\leq g(\psi(\pi))+\varepsilon. \end{align*} $$

Hence, since $\varepsilon $ is arbitrary, we obtain

$$ \begin{align*} \lim_{{x}\rightarrow \pi}u(x)=f(\psi(\pi)) \quad \mbox{and} \quad \lim_{{x}\rightarrow \pi}v(x)=g(\psi(\pi)) \end{align*} $$

and then conclude that $(u,v)$ is a solution to (2.1) and (2.2).

The uniqueness of solutions is a direct consequence of the comparison principle. Suppose that $(u_1,v_1)$ and $(u_2,v_2)$ are two solutions of (2.1) and (2.2). Then, $(u_1,v_1)$ is a subsolution and $(u_2,v_2)$ is a supersolution. From the comparison principle, we get $u_1\leq u_2$ and $v_1\leq v_2$ in $\mathbb {T}_m$ . The reverse inequalities are obtained reversing the roles of $(u_1,v_1)$ and $(u_2,v_2)$ .

Next, we show that the condition $\sum _{k=0}^{\infty }p_k<\infty $ is also necessary for the existence of solutions for every continuous boundary data.

Theorem 2.6 Let $(p_k)_{k\geq 0}$ be a sequence of positive numbers such that

$$ \begin{align*}\sum_{k=1}^{\infty}p_k=+\infty.\end{align*} $$

Then, for any two constants $C_1$ and $C_2$ with $C_1 \neq C_2$ , the system (2.1) and (2.2) with $f\equiv C_1$ and $g\equiv C_2$ does not have a solution.

Proof Arguing by contradiction, suppose that the system has a solution $(u,v)$ . First, suppose that $C_1>C_2$ . If we take $\overline {u}\equiv \overline {v}\equiv C_1$ , this pair is a supersolution to our problem. Then, by the comparison principle, we get

$$ \begin{align*} u(x)\leq C_1 \qquad \mbox{and} \qquad v(x)\leq C_1 \qquad \mbox{for all} \ x\in\mathbb{T}_m. \end{align*} $$

Given a level $k\geq 0,$ we have

$$ \begin{align*} \displaystyle u(x_k)&=(1-p_k)\Big\{\frac{1}{2}\max_{y\in S(x_k)}u(y)+\frac{1}{2}\min_{y\in S(x_k)}u(y) \Big\}+p_kv(x_k)\\ &\displaystyle \leq (1-p_k)u(x_{k+1})+p_kv(x_k), \end{align*} $$

where we have chosen $x_{k+1}$ such that

$$ \begin{align*}u(x_{k+1})=\max_{y\in S(x_k)}u(y).\end{align*} $$

Now using the same argument, we obtain

$$ \begin{align*} u(x_{k+1})\leq(1-p_{k+1})u(x_{k+2})+p_{k+1}v(x_{k+1}), \end{align*} $$

and hence we arrive at

$$ \begin{align*} u(x_k) &\leq(1-p_k)\Big\{(1-p_{k+1})u(x_{k+2})+p_{k+1}v(x_{k+1})\Big\}+p_kv(x_k)\\ &=(1-p_k)(1-p_{k+1})u(x_{k+2})+(1-p_k)p_{k+1}v(x_{k+1})+p_kv(x_k). \end{align*} $$

Note that the coefficients sum up to 1, that is, we have

$$ \begin{align*}(1-p_k)(1-p_{k+1})+(1-p_k)p_{k+1}+p_k=1. \end{align*} $$

Inductively, we get

$$ \begin{align*} u(x_k)\leq \prod_{j=0}^{l}(1-p_{k+j})u(x_{k+l+1})+\sum_{j=0}^{l}a_j v(x_{k+j}), \end{align*} $$

where the coefficients verify

(2.8) $$ \begin{align} \prod_{j=0}^{l}(1-p_{k+j})+\sum_{j=0}^{l}a_j=1. \end{align} $$
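
The identity (2.8) is a telescoping identity: the coefficient of $v(x_{k+j})$ is $a_j=\big(\prod_{i=0}^{j-1}(1-p_{k+i})\big)p_{k+j}$, and these weights together with the leading product always add up to 1. A quick numerical sketch (Python, with arbitrary sample values of the $p_j$):

```python
def expansion_coeffs(ps):
    """For probabilities ps = [p_0, ..., p_l], return the leading product
    prod_j (1 - p_j) and the weights a_j = (1-p_0)...(1-p_{j-1}) * p_j
    appearing in the iterated bound for u(x_k)."""
    prod, a = 1.0, []
    for p in ps:
        a.append(prod * p)
        prod *= 1.0 - p
    return prod, a

prod, a = expansion_coeffs([0.3, 0.1, 0.25, 0.05])
# The telescoping identity (2.8): all the weights sum to 1.
assert abs(prod + sum(a) - 1.0) < 1e-12
```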

Now, the condition

$$ \begin{align*}\sum_{j=0}^{\infty}p_j=+\infty \end{align*} $$

is equivalent to

$$ \begin{align*}\prod_{j=0}^{\infty}(1-p_j)=0\end{align*} $$

(just take logarithms and use that $-\ln (1-x) \sim x$ as $x\to 0$ ). Then, taking $l \to \infty $ in the equation (2.8), we get

$$ \begin{align*} \sum_{j=0}^{\infty} a_j=1. \end{align*} $$

But $x_k,x_{k+1},x_{k+2}, \dots $ is contained in a branch of $\mathbb {T}_m$ , and hence we must have

(2.9) $$ \begin{align} \lim_{j\rightarrow\infty}v(x_{k+j})=C_2 \quad \mbox{and} \quad \lim_{j\rightarrow\infty}u(x_{k+j})=C_1. \end{align} $$

In particular, the sequences $(u(x_{k+j}))_{j\ge 0}$ and $(v(x_{k+j}))_{j\ge 0}$ are bounded. Thus, if we come back to

$$ \begin{align*} \displaystyle u(x_k)&\leq \prod_{j=0}^{l}(1-p_{k+j})u(x_{k+l+1})+\sum_{j=0}^{l}a_j v(x_{k+j})\\ &\displaystyle \leq \big(\prod_{j=0}^{l}(1-p_{k+j})\big)\sup_{j\ge 0}u(x_{k+j})+\big(\sum_{j=0}^{l}a_j\big)\sup_{j\ge 0}v(x_{k+j}) , \end{align*} $$

and taking the limit as $l\rightarrow \infty $ , we obtain

$$ \begin{align*} u(x_k)\leq \big(\sum_{j=0}^{\infty}a_j\big)\sup_{j\ge 0}v(x_{k+j})=\sup_{j\ge 0}v(x_{k+j}). \end{align*} $$

Now, if we take $k\rightarrow \infty $ and use (2.9), we obtain

$$ \begin{align*} C_1=\lim_{k\rightarrow\infty}u(x_k)\leq \lim_{k\rightarrow\infty}\sup_{j\ge 0}v(x_{k+j})= C_2, \end{align*} $$

which is a contradiction since we assumed that $C_1> C_2$ .

The case $C_2> C_1$ is analogous (we have to reverse the roles of u and v in the previous argument).

Remark 2.7 When $C_1=C_2$ , if we take $u\equiv v \equiv C_1$ , we have a solution to our system. Therefore, we have proved that when $\sum _{k=1}^{\infty }p_k=+\infty ,$ the only solutions to the systems (2.1) and (2.2) with constant boundary data are the trivial ones, $u\equiv v \equiv C_1$ .

3 A general system on the undirected tree

In this section, we deal with the general system

(3.1) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle u(x)=(1-p_k)\Big\{ \displaystyle (1-\beta_k^u ) F(u(y_0),\dots,u(y_{m-1})) + \beta_k^u u (\hat{x}) \Big\}+p_k v(x) & \ x\in\mathbb{T}_m , \\[10pt] \displaystyle v(x)=(1-q_k)\Big\{ (1-\beta_k^v ) G(v(y_0),\dots,v(y_{m-1})) + \beta_k^v v (\hat{x}) \Big\}+q_k u(x) & \ x \in\mathbb{T}_m , \end{array} \right.\nonumber\\ \end{align} $$

with boundary conditions

(3.2) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle \lim_{{x}\rightarrow \pi}u(x) = f(\psi(\pi)) , \\[10pt] \displaystyle \lim_{{x}\rightarrow \pi}v(x)=g(\psi(\pi)). \end{array} \right. \end{align} $$

First, we want to prove existence and uniqueness of a solution. From the computations that we made in the previous section, we see that the key ingredients to obtain the result are: the validity of a comparison principle and the possibility of constructing sub and supersolutions that take constants as boundary values.

Now, we need to introduce the concept of sub and supersolutions for this system.

Definition 3.1 Given $f,g: [0,1] \to \mathbb {R},$

  • The pair $(\underline {u},\underline {v})$ is a subsolution of (3.1) and (3.2) if

    $$ \begin{align*} \!\left\lbrace \begin{array}{@{}ll} \displaystyle \underline{u}(x)\le(1-p_k)\Big\{(1-\beta^u_k) F(\underline{u}(y_0),\dots,\underline{u}(y_{m-1})) +\beta^u_k \underline{u}(\hat{x}) \Big\}+p_k \underline{v}(x), & x\in\mathbb{T}_m , \\[10pt] \displaystyle \underline{v}(x)\le(1-q_k)\Big\{ (1-\beta^v_k) G(\underline{v}(y_0),\dots,\underline{v}(y_{m-1})) +\beta^v_k \underline{v}(\hat{x}) \Big\}+q_k \underline{u}(x), & x \in\mathbb{T}_m , \end{array} \right. \end{align*} $$
    $$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \limsup_{{x}\rightarrow \pi}\underline{u}(x) \leq f(\psi(\pi)) , \\[10pt] \displaystyle \limsup_{{x}\rightarrow \pi}\underline{v}(x)\leq g(\psi(\pi)). \end{array} \right. \end{align*} $$
  • The pair $(\overline {u},\overline {v})$ is a supersolution of (3.1) and (3.2) if

    $$ \begin{align*} \!\left\lbrace \begin{array}{@{}ll} \displaystyle \overline{u}(x)\ge(1-p_k)\Big\{(1-\beta^u_k) F(\overline{u}(y_0),\dots,\overline{u}(y_{m-1})) +\beta^u_k \overline{u}(\hat{x}) \Big\}+p_k \overline{v}(x), & x\in\mathbb{T}_m , \\[10pt] \displaystyle \overline{v}(x)\ge(1-q_k)\Big\{ (1-\beta^v_k) G(\overline{v}(y_0),\dots,\overline{v}(y_{m-1})) +\beta^v_k \overline{v}(\hat{x}) \Big\}+q_k \overline{u}(x), & x \in\mathbb{T}_m , \end{array} \right. \end{align*} $$
    $$ \begin{align*} \left\lbrace \begin{array}{@{}ll} \displaystyle \liminf_{{x}\rightarrow \pi}\overline{u}(x) \ge f(\psi(\pi)) , \\[10pt] \displaystyle \liminf_{{x}\rightarrow \pi}\overline{v}(x)\ge g(\psi(\pi)). \end{array} \right. \end{align*} $$

As before, we have a comparison principle.

Lemma 3.1 (Comparison Principle)

Assume that $(\underline {u},\underline {v})$ is a subsolution and $(\overline {u},\overline {v})$ is a supersolution, then it holds that

$$ \begin{align*} \displaystyle \underline{u}(x)\leq \overline{u}(x) \qquad \mbox{and} \qquad \displaystyle \underline{v}(x)\leq \overline{v}(x) \qquad \mbox{for all} \ x\in\mathbb{T}_m. \end{align*} $$

Proof The proof starts as before. Suppose, arguing by contradiction, that

$$ \begin{align*} \max\Big\{\sup_{x\in\mathbb{T}_m}(\underline{u}-\overline{u})(x), \sup_{x\in\mathbb{T}_m}(\underline{v}-\overline{v})(x)\Big\}\geq \eta>0. \end{align*} $$

Let

$$ \begin{align*} Q=\Big\{ x\in\mathbb{T}_m : \max\{(\underline{u}-\overline{u})(x),(\underline{v}-\overline{v})(x)\}\geq\eta\Big\} \neq\emptyset. \end{align*} $$

Now, let us call $k_0=\min \{k : \mathbb {T}_m^k\cap Q\neq \emptyset \}$ , and let $x_0\in \mathbb {T}_m^{k_0}$ be such that $x_0\in Q$ . As in the previous section, our first step is to prove the following claim:

Claim # 1 There exists a sequence $(x_0,x_1,\dots )$ inside a branch such that $x_j\in Q$ for all $j\geq 0$ and $x_{j+1}\in S(x_j)$ . To prove this claim, let us begin by proving that there exists $y\in S(x_0)$ such that $y\in Q$ . Using that $x_0\in Q$ , we have to consider two cases:

First case:

$$ \begin{align*} (\underline{u}-\overline{u})(x_0)\geq \eta \qquad \mbox{and} \qquad (\underline{u}-\overline{u})(x_0)\geq (\underline{v}-\overline{v})(x_0). \end{align*} $$

The choice of $x_0\in \mathbb {T}_m^{k_0}$ as a node in Q that has the smallest possible level implies $(\underline {u}-\overline {u})(x_0)\geq (\underline {u}-\overline {u})(\hat {x_0})$ . Then, using that $\underline {u}$ is subsolution and $\overline {u}$ is supersolution, we have

$$ \begin{align*} \displaystyle (\underline{u}-\overline{u})(x_0)&\leq (1-p_k)(1-\beta_k^u)\Big[ F(\underline{u}(y_0),\dots,\underline{u}(y_{m-1})) - F(\overline{u}(y_0),\dots,\overline{u}(y_{m-1}))\Big] \\ \displaystyle & \quad +(1-p_k) \beta_k^u(\underline{u}-\overline{u})(\hat{x_0}) +p_k (\underline{v}-\overline{v})(x_0). \end{align*} $$

Using that $(\underline {u}-\overline {u})(x_0)\geq (\underline {v}-\overline {v})(x_0),$ we obtain

$$ \begin{align*} \displaystyle (\underline{u}-\overline{u})(x_0)&\leq(1-\beta_k^u) \Big[ F(\underline{u}(y_0),\dots,\underline{u}(y_{m-1})) - F(\overline{u}(y_0),\dots,\overline{u}(y_{m-1}))\Big] \\\displaystyle & \quad +\beta_k^u(\underline{u}-\overline{u})(\hat{x_0}). \end{align*} $$

Now using that $(\underline {u}-\overline {u})(x_0)\geq (\underline {u}-\overline {u})(\hat {x_0}),$ we get

(3.3) $$ \begin{align} (\underline{u}-\overline{u})(x_0)\leq \Big[F(\underline{u}(y_0),\dots,\underline{u}(y_{m-1}))- F(\overline{u}(y_0),\dots,\overline{u}(y_{m-1}))\Big]. \end{align} $$

Using that F is an averaging operator, this implies that there exists $y\in S(x_0)$ such that

$$ \begin{align*} (\underline{u}-\overline{u})(y)\geq (\underline{u}-\overline{u})(x_0)\geq \eta. \end{align*} $$

In fact, if $t:=\max _{y\in S(x_0)} (\underline {u}-\overline {u})(y) < (\underline {u}-\overline {u})(x_0)$ , using that F verifies

$$ \begin{align*}F(t+x_1,\dots,t+x_m)=t+ F(x_1,\dots,x_m),\end{align*} $$

and that F is nondecreasing with respect to each variable, we get

$$ \begin{align*}F(\underline{u}(y_0),\dots,\underline{u}(y_{m-1})) = t + F(\underline{u}(y_0)-t,\dots,\underline{u}(y_{m-1})-t) \leq t + F(\overline{u}(y_0),\dots,\overline{u}(y_{m-1})), \end{align*} $$

which contradicts (3.3). We deduce that $y\in Q$ ; moreover, we also obtain $(\underline {u}-\overline {u})(y)\geq (\underline {u}-\overline {u})(x_0)$ , a property that we are going to use later.
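
The two properties of F used in this argument (invariance under adding a constant and monotonicity in each variable) can be checked on a concrete averaging operator. A small Python sketch, taking as an example the min-max average $F(x_1,\dots ,x_m)=\frac{1}{2}\max_i x_i+\frac{1}{2}\min_i x_i$ that appears in the first equation of the directed system:

```python
def F(*xs):
    """The min-max averaging operator F(x_1, ..., x_m) = (max + min) / 2."""
    return 0.5 * (max(xs) + min(xs))

t, xs = 0.7, (1.0, -2.0, 3.5)
# Invariance under adding a constant: F(t + x_1, ..., t + x_m) = t + F(x).
assert abs(F(*(t + x for x in xs)) - (t + F(*xs))) < 1e-12
# Monotonicity in each variable: raising one entry never lowers F.
assert F(1.0, -2.0, 4.0) >= F(*xs)
```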

Second case:

$$ \begin{align*} (\underline{v}-\overline{v})(x_0)\geq \eta \qquad \mbox{and} \qquad (\underline{v}-\overline{v})(x_0)\geq (\underline{u}-\overline{u})(x_0). \end{align*} $$

Using again that $\underline {u}$ is subsolution and $\overline {u}$ is supersolution, we have

$$ \begin{align*} \displaystyle (\underline{v}-\overline{v})(x_0)&\leq (1-q_k)(1-\beta_k^v) \Big[ G(\underline{v}(y_0),\dots,\underline{v}(y_{m-1})) - G(\overline{v}(y_0),\dots,\overline{v}(y_{m-1}))\Big] \\\displaystyle & \quad + (1-q_k)\beta_k^v(\underline{v}-\overline{v})(\hat{x_0}) +q_k (\underline{u}-\overline{u})(x_0). \end{align*} $$

Using first $(\underline {v}-\overline {v})(x_0)\geq (\underline {u}-\overline {u})(x_0)$ and then that $(\underline {v}-\overline {v})(x_0)\geq (\underline {v}-\overline {v})(\hat {x_0})$ , we obtain

$$ \begin{align*} (\underline{v}-\overline{v})(x_0)\leq \Big[ G(\underline{v}(y_0),\dots,\underline{v}(y_{m-1})) - G(\overline{v}(y_0),\dots,\overline{v}(y_{m-1}))\Big]. \end{align*} $$

Arguing as before, using that G is an averaging operator, this implies that there exists $y\in S(x_0)$ such that $y\in Q$ and

$$ \begin{align*} (\underline{v}-\overline{v})(y)\geq (\underline{v}-\overline{v})(x_0)\geq \eta. \end{align*} $$

Now, calling $x_1\in S(x_0)$ the node that verifies

$$ \begin{align*} (\underline{u}-\overline{u})(x_1)\geq (\underline{u}-\overline{u})(x_0)\geq \eta \qquad \mbox{or} \qquad (\underline{v}-\overline{v})(x_1)\geq (\underline{v}-\overline{v})(x_0)\geq \eta , \end{align*} $$

we can obtain, with the same techniques used before, a node $x_2\in S(x_1)$ such that

$$ \begin{align*} (\underline{u}-\overline{u})(x_2)\geq (\underline{u}-\overline{u})(x_1)\geq \eta \qquad \mbox{or} \qquad (\underline{v}-\overline{v})(x_2)\geq (\underline{v}-\overline{v})(x_1)\geq \eta. \end{align*} $$

By an inductive argument, we can obtain a sequence $(x_0,x_1,x_2,\dots )\subseteq Q $ such that $x_{j+1}\in S(x_j)$ . This ends the proof of the claim.

Therefore, we can take a subsequence $(x_{j_l})_{l\geq 1}$ with the following properties:

(3.4) $$ \begin{align} (\underline{u}-\overline{u})(x_{j_l})\geq \eta \qquad \mbox{for all} \quad l\geq 1, \end{align} $$

or

(3.5) $$ \begin{align} (\underline{v}-\overline{v})(x_{j_l})\geq \eta \qquad \mbox{for all} \quad l\geq 1. \end{align} $$

Suppose that (3.4) holds. Let $\pi = \lim _{l\rightarrow \infty }x_{j_l}$ . Then, we finally arrive at

$$ \begin{align*} 0<\eta \leq \limsup_{l\rightarrow\infty}(\underline{u}-\overline{u})(x_{j_l})\leq\limsup_{l\rightarrow\infty}\underline{u}(x_{j_l})-\liminf_{l\rightarrow\infty}\overline{u}(x_{j_l})\leq f(\psi(\pi))-f(\psi(\pi))=0, \end{align*} $$

which is a contradiction. The argument with (3.5) is similar. This ends the proof.

Now, we deal with constant data on the boundary, $f\equiv C_1$ and $g\equiv C_2$ . Notice that now we only have a supersolution to our system that takes the boundary data (and not an explicit solution as in the previous section).

Lemma 3.2 Let $C_1$ and $C_2$ be two constants, and suppose that the conditions (1.3) hold, that is,

(3.6) $$ \begin{align} \begin{array}{ll} \displaystyle \sum_{k=1}^{\infty}\prod_{j=1}^{k}\frac{\beta_{j}^u}{(1-\beta_{j}^u)}<\infty , \quad \sum_{k=1}^{\infty}\prod_{j=1}^{k}\frac{\beta_{j}^v}{(1-\beta_{j}^v)}<\infty, \\[10pt] \displaystyle \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^u}{(1-\beta_l^u)}\big)\frac{p_j}{(1-p_j)}<\infty , \quad \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^v}{(1-\beta_l^v)}\big)\frac{q_j}{(1-q_j)}<\infty, \\[10pt] \displaystyle \sum_{k=1}^{\infty}p_k<\infty , \quad \sum_{k=1}^{\infty}q_k<\infty. \end{array}\nonumber\\ \end{align} $$

Then, there exists a supersolution of (3.1) such that

$$ \begin{align*}\lim_{x\rightarrow \pi} u(x) = C_1 \qquad \mbox{and} \qquad \lim_{x\rightarrow \pi} v(x) = C_2. \end{align*} $$

Proof We look for the desired supersolution taking

$$ \begin{align*} u(x)=\sum_{j=k}^{\infty}r_j +C_1 \qquad \mbox{ and } \qquad v(x)=\sum_{j=k}^{\infty}r_j +C_2 \end{align*} $$

for every $x\in \mathbb {T}_m^k$ . To attain the boundary conditions, we need that

$$ \begin{align*}\sum_{k=1}^{\infty}r_k<\infty.\end{align*} $$

Indeed, if this series converges, then we have

$$ \begin{align*}\lim_{x\rightarrow \pi} u(x)= \lim_{k \to \infty} \sum_{j=k}^{\infty}r_j +C_1 = C_1 \qquad \mbox{and} \qquad \lim_{x\rightarrow \pi} v(x)= \lim_{k \to \infty} \sum_{j=k}^{\infty}r_j +C_2 = C_2. \end{align*} $$

Now, notice that $u(x) = u(\tilde {x})$ as long as x and $\tilde {x}$ are at the same level. Therefore, using that $F(c,\dots ,c) =c$ for every constant c, and since we aim for a supersolution, from the first equation, we arrive at

$$ \begin{align*} &\displaystyle \sum_{j=k}^{\infty}r_j +C_1 \\& \quad \displaystyle \geq (1-p_k)(1-\beta_k^u) \Big(\sum_{j=k+1}^{\infty}r_j+C_1\Big) \\& \qquad \displaystyle \qquad +(1-p_k)\beta_k^u\Big(\sum_{j=k-1}^{\infty}r_j+C_1\Big)+p_k\Big(\sum_{j=k}^{\infty}r_j+ C_2\Big). \end{align*} $$

We can rewrite this as

$$ \begin{align*} (1-p_k)\sum_{j=k}^{\infty}r_j\geq (1-p_k)(1-\beta_k^u) \sum_{j=k+1}^{\infty}r_j +(1-p_k)\beta_k^u \sum_{j=k-1}^{\infty}r_j + p_k(C_2-C_1). \end{align*} $$

If we call $L=C_2-C_1$ , dividing by $(1-p_k),$ we obtain

$$ \begin{align*} \sum_{j=k}^{\infty}r_j\geq (1-\beta_k^u) \sum_{j=k+1}^{\infty}r_j +\beta_k^u \sum_{j=k-1}^{\infty}r_j + \frac{p_k}{(1-p_k)} L. \end{align*} $$

We can write

$$ \begin{align*}\sum_{j=k}^{\infty}r_j=(1-\beta_k^u)\sum_{j=k}^{\infty}r_j+\beta_k^u\sum_{j=k}^{\infty}r_j\end{align*} $$

to obtain

$$ \begin{align*} (1-\beta_k^u)r_k\geq \beta_k^u r_{k-1} + \frac{p_k}{(1-p_k)} L. \end{align*} $$

Then, we have

$$ \begin{align*} r_k\geq \frac{\beta_k^u}{(1-\beta_k^u)} r_{k-1} + \frac{p_k}{(1-p_k)} \frac{L}{(1-\beta_k^u)}. \end{align*} $$

If we iterate this inequality one more time, we arrive at

$$ \begin{align*} r_k\geq \frac{\beta_k^u}{(1-\beta_k^u)}\Big\{ \frac{\beta_{k-1}^u}{(1-\beta_{k-1}^u)}r_{k-2}+\frac{p_{k-1}}{(1-p_{k-1})}\frac{L}{(1-\beta_{k-1}^u)} \Big\} + \frac{p_k}{(1-p_k)} \frac{L}{(1-\beta_k^u)}. \end{align*} $$

Then, by an inductive argument, we obtain for $k\ge 2$

$$ \begin{align*} \displaystyle r_k &\geq \prod_{j=1}^{k}\frac{\beta_{j}^u}{(1-\beta_{j}^u)}r_0 + \sum_{j=1}^{k-1} \Big(\prod_{l=j+1}^k \frac{\beta_{l}^u}{(1-\beta_{l}^u)}\Big)\frac{p_{j}}{(1-p_{j})}\frac{L}{(1-\beta_{j}^u)} \\\displaystyle & \quad +\frac{p_k}{(1-p_k)} \frac{L}{(1-\beta_k^u)} : =\Lambda_k(\beta_{j}^u,p_j). \end{align*} $$

Now, from analogous computations for the other equation in (3.1), we obtain

$$ \begin{align*} \displaystyle r_k &\geq \prod_{j=1}^{k}\frac{\beta_{j}^v}{(1-\beta_{j}^v)}r_0 + \sum_{j=1}^{k-1}\Big(\prod_{l=j+1}^k \frac{\beta_{l}^v}{(1-\beta_{l}^v)}\Big)\frac{q_{j}}{(1-q_{j})}\frac{L}{(1-\beta_{j}^v)}\\\displaystyle &\quad +\frac{q_k}{(1-q_k)} \frac{L}{(1-\beta_k^v)}:=\Lambda_k(\beta_{j}^v,q_j). \end{align*} $$

In order to fulfill the two inequalities, we can take as $r_k$ the maximum of the two right-hand sides, that is,

$$ \begin{align*} r_k = \max\{ \Lambda_k(\beta_j^u,p_j) , \Lambda_k(\beta_j^v,q_j) \}. \end{align*} $$

Now, we recall that we need that

$$ \begin{align*} \sum_{k=2}^{\infty}r_k<\infty, \end{align*} $$

and this follows from the hypotheses (3.6).

This ends the proof.
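
As a sanity check, $\Lambda_k$ can be evaluated for sample coefficients satisfying (3.6) to confirm that the series $\sum_k r_k$ indeed converges. The Python sketch below implements the displayed formula for $\Lambda _k(\beta _{j},p_j)$; the choices $\beta_k=p_k=4^{-k}$, $r_0=1$, and $L=1$ are illustrative assumptions, not values taken from the paper.

```python
def Lambda(k, beta, p, r0, L):
    """Lambda_k(beta_j, p_j): the right-hand side of the inductive bound on
    r_k, with b_j = beta_j / (1 - beta_j):
      prod_{j=1}^k b_j * r0
      + sum_{j=1}^{k-1} (prod_{l=j+1}^k b_l) * p_j/(1-p_j) * L/(1-beta_j)
      + p_k/(1-p_k) * L/(1-beta_k)."""
    b = lambda j: beta(j) / (1.0 - beta(j))
    prod = 1.0
    for j in range(1, k + 1):
        prod *= b(j)
    total = prod * r0
    for j in range(1, k):
        tail = 1.0
        for l in range(j + 1, k + 1):
            tail *= b(l)
        total += tail * p(j) / (1.0 - p(j)) * L / (1.0 - beta(j))
    total += p(k) / (1.0 - p(k)) * L / (1.0 - beta(k))
    return total

coeff = lambda j: 4.0 ** (-j)   # beta_k = p_k = 4^{-k} satisfies (3.6)
r = [Lambda(k, coeff, coeff, 1.0, 1.0) for k in range(2, 40)]
assert all(rk > 0 for rk in r)
assert sum(r) < 0.2             # the partial sums of r_k stay bounded
```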

Remark 3.3 Notice that taking $r_0$ large enough we can make this supersolution as large as we want at the root of the tree, that is, $u(\emptyset )$ and $v(\emptyset )$ can be chosen as large as we need.

Notice that we also have a subsolution.

Lemma 3.4 Given two constants $C_1$ and $C_2$ , there exists a subsolution of (3.1) with

$$ \begin{align*}\lim_{x\rightarrow \pi} u(x) = C_1 \qquad \mbox{and} \qquad \lim_{x\rightarrow \pi} v(x) = C_2. \end{align*} $$

Proof Using the above lemma, we know that there exists a supersolution $(\overline {u},\overline {v})$ of (3.1) and (3.2) with $f\equiv -C_1$ and $g\equiv -C_2$ .

Consider $\underline {u}=-\overline {u}$ and $\underline {v}=-\overline {v}$ . Then $(\underline {u},\underline {v})$ is a subsolution of (3.1) and (3.2) with $f\equiv C_1$ and $g\equiv C_2$ .

Now, we are ready to prove existence and uniqueness of a solution when the conditions on the coefficients, (1.3), hold.

Theorem 3.5 Assume that the coefficients verify (1.3), that is,

$$ \begin{align*} \begin{array}{ll} \displaystyle \sum_{k=1}^{\infty}\prod_{j=1}^{k}\frac{\beta_{j}^u}{(1-\beta_{j}^u)}<\infty , \quad \sum_{k=1}^{\infty}\prod_{j=1}^{k}\frac{\beta_{j}^v}{(1-\beta_{j}^v)}<\infty, \\[10pt] \displaystyle \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^u}{(1-\beta_l^u)}\big)\frac{p_j}{(1-p_j)}<\infty , \quad \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^v}{(1-\beta_l^v)}\big)\frac{q_j}{(1-q_j)}<\infty, \\[10pt] \displaystyle \sum_{k=1}^{\infty}p_k<\infty , \quad \sum_{k=1}^{\infty}q_k<\infty. \end{array} \end{align*} $$

Then, for every $f,g\in C([0,1])$ , there exists a unique solution to (3.1) and (3.2).

Proof We want to prove that there exists a unique solution to (3.1) and (3.2). As in the previous section, let

$$ \begin{align*} \mathscr{A}=\Big\{(z,w)\colon (z,w)\mbox{ is subsolution of (3.1) and (3.2)} \Big\}. \end{align*} $$

We observe that $\mathscr {A}\neq \emptyset .$ In fact, if we let $M=\max \{\|f\|_{L^{\infty }}, \|g\|_{L^{\infty }}\}$ and take $z(x)=w(x)=-M$ , we obtain that $(z,w)\in \mathscr {A}.$ Moreover, functions in $\mathscr {A}$ are bounded above. In fact, using the Comparison Principle, we obtain that $z\le M$ and $w\le M$ for all $(z,w)\in \mathscr {A}.$

As before, we let

$$\begin{align*}(u(x), v(x))= \sup_{(z,w)\in\mathscr{A}} (z,w), \end{align*}$$

and we want to prove that this pair of functions is a solution of (3.1) and (3.2).

If $(z,w)\in \mathscr {A},$

$$ \begin{align*} z(x)&\le(1-p_k)\Big\{ (1-\beta_k^u)\displaystyle F(z(y_0),\dots,z(y_{m-1})) +\beta_k^u z(\hat{x}) \Big\}+p_k w(x)\\ &\le(1-p_k)\Big\{ (1-\beta_k^u)\displaystyle F(u(y_0),\dots,u(y_{m-1})) +\beta_k^u u(\hat{x}) \Big\}+p_k v(x) \end{align*} $$

and

$$ \begin{align*} w(x)&\le(1-q_k)\Big\{(1-\beta_k^v)G(w(y_0),\dots,w(y_{m-1}))+\beta_k^v w(\hat{x}) \Big\}+q_k z(x)\\ &\le(1-q_k)\Big\{ (1-\beta_k^v)G(v(y_0),\dots,v(y_{m-1}))+\beta_k^v v(\hat{x}) \Big\}+q_k u(x). \end{align*} $$

Then, taking the supremum over $(z,w)\in \mathscr {A}$ on the left-hand sides, we obtain that $(u,v)$ is a subsolution of the equations in (3.1).

Since the two functions $f,g\in {C}([0,1])$ are uniformly continuous, given $\varepsilon>0$ , there exists $\delta>0$ such that

$$ \begin{align*}|f(\psi(\pi_1))-f(\psi(\pi_2))|<\varepsilon \quad \mbox{ and } \quad |g(\psi(\pi_1))-g(\psi(\pi_2))|<\varepsilon\end{align*} $$

if $|\psi (\pi _1) - \psi (\pi _2)|<\delta $ . Let us take $k\in \mathbb {N}$ such that $\frac {1}{m^k}<\delta $ . We divide the segment $[0,1]$ into $m^k$ subsegments $I_j=[\frac {j-1}{m^k},\frac {j}{m^k}]$ for $1\leq j\leq m^k$ . Let us consider the constants

$$ \begin{align*} D_1^j=\max_{x\in I_j}f \quad \mbox{ and } \quad D_2^j=\max_{x\in I_j}g \end{align*} $$

for $1\leq j\leq m^k$ .

If we consider $\mathbb {T}_m^k=\{x\in \mathbb {T}_m : |x|=k \},$ we have $\# \mathbb {T}_m^k=m^k$ , and, given $x\in \mathbb {T}_m^k$ , any branch that passes through this vertex at level k ends in exactly one segment $I_j$ . Then we can associate, one to one, the vertices of $\mathbb {T}_m^k$ with the segments $(I_j)_{j=1}^{m^k}$ . Let us call $x^j$ the vertex associated with $I_j$ .
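This one-to-one correspondence between level-k vertices and the segments $I_j$ is concrete enough to compute. In the sketch below (the digit-word encoding of vertices is an assumption made for illustration), a level-k vertex is a word of k digits in $\{0,\dots,m-1\}$, and reading the word as a base-m integer gives $j-1$:

```python
def interval(x, m):
    """Segment I_j = [(j-1)/m^k, j/m^k] associated with the level-k vertex
    encoded by the digit word x (digits in {0, ..., m-1}); returns
    (j, left endpoint, right endpoint)."""
    k = len(x)
    j_minus_1 = 0
    for digit in x:  # read the word as an integer in base m
        j_minus_1 = m * j_minus_1 + digit
    return j_minus_1 + 1, j_minus_1 / m**k, (j_minus_1 + 1) / m**k

# In the binary tree (m = 2), the level-2 vertex (1, 0) corresponds to
# I_3 = [1/2, 3/4].
print(interval((1, 0), 2))  # → (3, 0.5, 0.75)
```

Distinct level-k words produce distinct indices $j$, so the map is indeed a bijection onto $\{1,\dots,m^k\}$.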

Given $1\leq j\leq m^k$ , if we consider $x^j$ as a first vertex, we obtain a tree whose boundary (via $\psi $ ) is $I_j$ . Using Lemma 3.2 in this tree, we can obtain the value of a supersolution $(\overline {u},\overline {v})$ of (3.1) and (3.2) with the constants $D_1^j$ and $D_2^j$ at all vertices $y\in \mathbb {T}_m^l$ with $l>k$ , such that $\overline {u}(x^j)=C$ and $\overline {v}(x^j)=C,$ where $C>0$ is some large constant that we will determine later (see Remark 3.3); the constant will be the same for all $x\in \mathbb {T}_m^k$ . For all $x\in \mathbb {T}_m^l$ with $l<k,$ we define $\overline {u}(x)=\overline {v}(x)=C$ . Then Lemma 3.2 says that, if $x\in \mathbb {T}_m^k$ , we have

$$ \begin{align*} \overline{u}(x)\ge (1-p_k)F(\overline{u}(y_0),\dots , \overline{u}(y_{m-1}))+p_k \overline{v}(x) \end{align*} $$

since $(\overline {u},\overline {v})$ is a supersolution. Then, for $x\in \mathbb {T}_m^k$ , we have

(3.7) $$ \begin{align} \begin{array}{l} C\geq (1-p_k)F(\overline{u}(y_0),\dots , \overline{u}(y_{m-1}))+p_k C \\[10pt] \Rightarrow C\ge F(\overline{u}(y_0),\dots , \overline{u}(y_{m-1})), \quad y_i \in S(x), \end{array} \end{align} $$

for all $x\in \mathbb {T}_m^k$ .

On the other hand, since we want the pair $(\overline {u},\overline {v})$ to be a supersolution of (3.1), we need to extend $(\overline {u},\overline {v})$ to the nodes $x\in \mathbb {T}_m^i$ with $i<k$ in such a way that

$$ \begin{align*} \overline{u}(x)\ge (1-p_k)\big\{(1-\beta_k^u)F(\overline{u}(y_0),\dots , \overline{u}(y_{m-1}))+\beta_k^u \overline{u}(\hat{x})\big\}+p_k \overline{v}(x). \end{align*} $$

Therefore, if we set $\overline {u}(x) = C$ for these nodes, we need

$$ \begin{align*} C\ge (1-p_k)\big\{(1-\beta_k^u)F(\overline{u}(y_0),\dots , \overline{u}(y_{m-1}))+\beta_k^u C\big\}+p_k C, \end{align*} $$

and we get the same condition as above for C. Thus, if we consider $C>0$ such that it verifies (3.7), we obtain that $(\overline {u},\overline {v})$ is a supersolution of (3.1) and (3.2) in the whole tree $\mathbb {T}_m.$ Using the comparison principle, we get

$$\begin{align*}z(x)\le \overline{u}(x)\, \mbox{ and } \,w(x)\le \overline{v}(x), \qquad \mbox{for every } (z,w)\in\mathscr{A}. \end{align*}$$

Then, taking the supremum over $(z,w)\in \mathscr {A}$ on the left-hand sides, we conclude that

$$\begin{align*}u(x)\le \overline{u}(x)\, \mbox{ and } \,v(x)\le \overline{v}(x). \end{align*}$$

Hence, we obtain

$$\begin{align*}\limsup_{x\rightarrow \pi} u(x) \leq \limsup_{{x}\rightarrow \pi} \overline{u}(x)=D_1^j\le f(\psi(\pi))+\varepsilon. \end{align*}$$

Similarly,

$$\begin{align*}\limsup_{x\rightarrow \pi} v(x) \leq g(\psi(\pi))+\varepsilon. \end{align*}$$

Using that $\varepsilon>0$ is arbitrary, we get that $(u,v)\in \mathscr {A}.$

We want to prove that $(u,v)$ satisfies (3.1). We know that it is a subsolution. Suppose that there exists $x_0$ such that

(3.8) $$ \begin{align} u(x_0)<(1-p_k)\Big\{ (1-\beta_k^u) F(u(y_0),\dots,u(y_{m-1})) +\beta_k^u u(\hat{x_0}) \Big\}+p_k v(x_0). \end{align} $$

Let

$$ \begin{align*} u^*(x)=\left\lbrace \begin{array}{@{}ll} \displaystyle u(x)+\eta, \quad &x=x_0, \\[5pt] \displaystyle u(x), \quad &x\neq x_0. \end{array} \right. \end{align*} $$

Since we have a strict inequality in (3.8) and F is monotone and continuous, it is easy to check that for $\eta $ small $(u^*,v)\in \mathscr {A}.$ This is a contradiction because we have

$$ \begin{align*}u^* (x_0)>u (x_0)=\sup_{(z,w)\in\mathscr{A}}z(x_0).\end{align*} $$

A similar argument shows that $(u,v)$ also solves the second equation in (3.1).

Up to this point, we have that $(u,v)$ satisfies (3.1) together with

$$ \begin{align*}\limsup_{x\rightarrow \pi} u(x) \leq f(\psi(\pi)) \qquad \mbox{ and } \qquad \limsup_{x\rightarrow \pi} v(x) \leq g(\psi(\pi)).\end{align*} $$

Hence, our next task is to prove that $(u,v)$ satisfies the limits in (3.2).

As before, we use that f and g are continuous. Given $\varepsilon>0$ , there exists $\delta>0$ such that

$$ \begin{align*}|f(\psi(\pi_1))-f(\psi(\pi_2))|<\varepsilon \quad \mbox{ and } \quad |g(\psi(\pi_1))-g(\psi(\pi_2))|<\varepsilon\end{align*} $$

if $|\psi (\pi _1) - \psi (\pi _2)|<\delta $ . Let us take $k\in \mathbb {N}$ such that $\frac {1}{m^k}<\delta $ . We divide the segment $[0,1]$ into $m^k$ subsegments $I_j=[\frac {j-1}{m^k},\frac {j}{m^k}]$ for $1\leq j\leq m^k$ . Let us consider the constants

$$ \begin{align*} C_1^j=\min_{x\in I_j}f \quad \mbox{ and } \quad C_2^j=\min_{x\in I_j}g. \end{align*} $$

By Lemma 3.4, using the same construction as before, there exists a subsolution $(\underline {u},\underline {v})$ of (3.1) and (3.2) such that

$$ \begin{align*} \lim_{{x}\rightarrow \pi}\underline{u}(x)=C_1^j \quad \mbox{ and } \quad \lim_{{x}\rightarrow \pi}\underline{v}(x)=C_2^j \end{align*} $$

and, for $x\in \mathbb {T}_m^l$ with $l\le k$ , we set $\underline {u}(x)=-C$ and $\underline {v}(x)=-C$ with C a large constant. Then $(\underline {u},\underline {v})$ is a subsolution of (3.1) and (3.2) in $\mathbb {T}_m.$ From the definition of $(u,v)$ as the supremum of subsolutions, we get

$$\begin{align*}u(x)\ge \underline{u}(x)\, \mbox{ and } \,v(x)\ge \underline{v}(x). \end{align*}$$

Then we have that

$$ \begin{align*}\liminf_{x\rightarrow \pi} u(x) \ge \liminf_{{x}\rightarrow \pi}\underline{u}(x)=C_1^j\ge f(\psi(\pi))-\varepsilon\end{align*} $$

and

$$ \begin{align*}\liminf_{x\rightarrow \pi} v(x) \ge g(\psi(\pi))-\varepsilon.\end{align*} $$

Using that $\varepsilon>0$ is arbitrary, we obtain

$$ \begin{align*} \liminf_{x\rightarrow \pi} u(x) \ge f(\psi(\pi))\quad \mbox{ and } \quad \liminf_{x\rightarrow \pi} v(x) \ge g(\psi(\pi)). \end{align*} $$

Hence, we conclude (3.2),

$$ \begin{align*}\lim_{x\rightarrow \pi} u(x)= f(\psi(\pi)) \qquad \mbox{and} \qquad \lim_{x\rightarrow \pi} v(x)= g(\psi(\pi)).\end{align*} $$

To end the proof, we just observe that the comparison principle gives us uniqueness of solutions.

Finally, the nonexistence of solutions when one of the conditions fails completes the if and only if in the result.

Theorem 3.6 Suppose that one of the following conditions:

$$ \begin{align*} \begin{array}{ll} \displaystyle \sum_{k=1}^{\infty}\prod_{j=1}^{k}\frac{\beta_{j}^u}{(1-\beta_{j}^u)}<\infty , \quad \sum_{k=1}^{\infty}\prod_{j=1}^{k}\frac{\beta_{j}^v}{(1-\beta_{j}^v)}<\infty, \\[10pt] \displaystyle \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^u}{(1-\beta_l^u)}\big)\frac{p_j}{(1-p_j)}<\infty, \quad \sum_{k=2}^{\infty}\sum_{j=1}^{k-1}\big(\!\!\prod_{l=j+1}^k \frac{\beta_l^v}{(1-\beta_l^v)}\big)\frac{q_j}{(1-q_j)}<\infty, \\[10pt] \displaystyle \sum_{k=1}^{\infty}p_k<\infty , \quad \sum_{k=1}^{\infty}q_k<\infty, \end{array} \end{align*} $$

is not satisfied. Then, there exist two constants $C_1$ and $C_2$ such that the system (3.1) with condition (3.2) and $f\equiv C_1$ and $g\equiv C_2$ does not have a solution.

Proof Suppose that the system has a solution $(u,v)$ with boundary condition (3.2) with $f\equiv C_1$ and $g\equiv C_2$ for two constants such that $C_1>C_2.$ We have

(3.9) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle u(x)=(1-p_k)\Big\{ (1-\beta_k^u) F(u(y_0),\dots,u(y_{m-1})) +\beta_k^u u(\hat{x}) \Big\}+p_k v(x) & \ x \in\mathbb{T}_m^k , \\[10pt] \displaystyle v(x)=(1-q_k)\Big\{(1-\beta_k^v) G(v(y_0),\dots,v(y_{m-1}))+\beta_k^v v(\hat{x}) \Big\} +q_k u(x) & \ x\in\mathbb{T}_m^k. \end{array} \right.\nonumber \\ \end{align} $$

Let us follow the path given by the maxima among successors, that is, we let

$$ \begin{align*}\overline{x}_0=\emptyset , \quad u(\overline{x}_1)=\max_{y\in S(\emptyset)}u(y) \quad \mbox{and} \quad u(\overline{x}_k)=\max_{y\in S(\overline{x}_{k-1})}u(y), \quad k\geq2.\end{align*} $$

Then, using that $F(z_1,\dots ,z_m) \leq \max _i z_i$ , we have

$$\begin{align*}u(\overline{x}_k)\le (1-p_k)(1-\beta_k^u) u(\overline{x}_{k+1})+ (1-p_k)\beta_k^u u(\overline{x}_{k-1})+p_k v(\overline{x}_k), \end{align*}$$

that is,

$$ \begin{align*} \displaystyle 0 &\le (1-p_k)(1-\beta_k^u) (u(\overline{x}_{k+1})-u(\overline{x}_k)) \\\displaystyle &\quad + (1-p_k)\beta_k^u (u(\overline{x}_{k-1})-u(\overline{x}_k))+p_k (v(\overline{x}_k)-u(\overline{x}_k)). \end{align*} $$

If we call $a_k=u(\overline {x}_{k+1})-u(\overline {x}_k),$ we get

$$\begin{align*}0 \le (1-p_k)(1-\beta_k^u) a_k - (1-p_k)\beta_k^u a_{k-1}+p_k (v(\overline{x}_k)-u(\overline{x}_k)). \end{align*}$$

Then

$$\begin{align*}0 \le (1-\beta_k^u) a_k - \beta_k^u a_{k-1}+\frac{p_k}{(1-p_k)} (v(\overline{x}_k)-u(\overline{x}_k)). \end{align*}$$

Now, calling $b_k=v(\overline{x}_k)-u(\overline{x}_k)$ , we obtain

$$\begin{align*}a_k\ge \frac{\beta_k^u}{(1-\beta_k^u)} a_{k-1} +\frac{p_k}{(1-p_k)(1-\beta_k^u)}(-b_k).\end{align*}$$

Now, using the same argument one more time, we get

$$ \begin{align*} \displaystyle a_k&\ge \frac{\beta_k^u}{(1-\beta_k^u)}\frac{\beta_{k-1}^u}{(1-\beta_{k-1}^u)} a_{k-2} \\\displaystyle &\quad +\frac{\beta_k^u}{(1-\beta_k^u)}\frac{p_{k-1}}{(1-p_{k-1})(1-\beta_{k-1}^u)}(-b_{k-1})+ \left(\frac{p_{k}}{1-p_{k}}\right)\left(\frac{1}{1-\beta_{k}^u}\right)(-b_{k}). \end{align*} $$

Inductively, for $k_0\ge 2$ and $k>k_0$ , we arrive at

$$ \begin{align*} \displaystyle a_k&\ge \left[\prod_{j=k_0+1}^k\left(\frac{\beta_j^u}{1-\beta_j^u}\right)\right]a_{k_0}+ \sum_{j=k_0+1}^{k-1}\left(\prod_{l=j+1}^k \frac{\beta_l^u}{1-\beta_l^u}\right)\left(\frac{p_j}{1-p_j}\right)\left(\frac{1}{1-\beta_j^u}\right)(-b_j)\\\displaystyle &\quad +\left(\frac{p_k}{1-p_k}\right)\left(\frac{1}{1-\beta_k^u}\right)(-b_k). \end{align*} $$
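The inductive unrolling can be double-checked numerically: iterating the one-step inequality, treated as an equality, must reproduce the closed form. A minimal sketch (all coefficient values below are synthetic, chosen only to test the algebra):

```python
from math import prod
import random

# One-step lower bound:  a_k >= r(k) a_{k-1} + c(k),  where
#   r(k) = beta_k / (1 - beta_k),
#   c(k) = p_k / ((1 - p_k)(1 - beta_k)) * (-b_k).
# Iterating it from k_0 up to K should match the unrolled closed form.
random.seed(0)
k0, K = 2, 12
beta = {k: random.uniform(0.1, 0.6) for k in range(K + 1)}
p = {k: random.uniform(0.05, 0.4) for k in range(K + 1)}
minus_b = {k: random.uniform(0.2, 1.0) for k in range(K + 1)}  # -b_k > 0
a_k0 = 0.3

r = lambda k: beta[k] / (1 - beta[k])
c = lambda k: p[k] / ((1 - p[k]) * (1 - beta[k])) * minus_b[k]

# Iterate the one-step bound.
lower = a_k0
for k in range(k0 + 1, K + 1):
    lower = r(k) * lower + c(k)

# Closed form: leading product times a_{k_0}, plus tail products times c(j)
# (the j = K term has an empty product, i.e., factor 1).
closed = prod(r(j) for j in range(k0 + 1, K + 1)) * a_k0 \
    + sum(prod(r(l) for l in range(j + 1, K + 1)) * c(j)
          for j in range(k0 + 1, K + 1))

print(abs(lower - closed))  # agreement up to rounding
```

Since every $c(k)$ is positive when $-b_k>0$, the iteration also shows that the lower bound stays positive, which is the mechanism exploited below.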

On the other hand, for $M>k_0$ , we have that

$$\begin{align*}\sum_{k=k_0}^M a_k= u(\overline{x}_{M+1})-u(\overline{x}_{k_0}). \end{align*}$$

Then, we obtain

(3.10) $$ \begin{align} u(\overline{x}_{M+1})&\geq u(\overline{x}_{k_0}) +\sum_{k=k_0+1}^M \left[\prod_{j=k_0+1}^k\left(\frac{\beta_j^u}{1-\beta_j^u}\right)\right]a_{k_0}\nonumber\\&\quad + \sum_{k=k_0+1}^M\sum_{j=k_0+1}^{k-1}\left(\prod_{l=j+1}^k \frac{\beta_l^u}{1-\beta_l^u}\right)\left(\frac{p_j}{1-p_j}\right)\left(\frac{1}{1-\beta_j^u}\right)(-b_j)\notag\\&\quad +\sum_{k=k_0+1}^M \left(\frac{p_k}{1-p_k}\right)\left(\frac{1}{1-\beta_k^u}\right)(-b_k). \end{align} $$

We observe that the boundary conditions

$$ \begin{align*}\lim_{j\to+\infty}u(\overline{x}_j)=C_1\qquad \mbox{ and }\qquad \lim_{j\to+\infty}v(\overline{x}_j)=C_2\end{align*} $$

imply that

$$ \begin{align*}\lim_{j\to+\infty}b_j=C_2-C_1.\end{align*} $$

Therefore, since we have taken $C_1>C_2$ , there exists a constant c such that

$$ \begin{align*}(-b_j)\geq c>0\end{align*} $$

for j large enough. Hence, using (3.9), we get $a_j\neq 0$ for j large enough.

If

$$\begin{align*}\sum_{k=1}^{+\infty} \left[\prod_{j=1}^k\left(\frac{\beta_j^u}{1-\beta_j^u}\right)\right]=+\infty, \end{align*}$$

we obtain a contradiction from (3.10) taking the limit as $M\to \infty $ .

Now, if

$$ \begin{align*} \sum_{k=1}^{+\infty} \left[\prod_{j=1}^k\left(\frac{\beta_j^u}{1-\beta_j^u}\right)\right]<+\infty, \end{align*} $$

but

$$ \begin{align*} \sum_{k=2}^{+\infty}\sum_{j=1}^{k-1}\left(\prod_{l=j+1}^k \frac{\beta_l^u}{1-\beta_l^u}\right)\left(\frac{p_j}{1-p_j}\right)\left(\frac{1}{1-\beta_j^u}\right)=+\infty, \end{align*} $$

or

$$ \begin{align*} \sum_{k=1}^{+\infty}p_k=+\infty, \end{align*} $$

we obtain again a contradiction from (3.10) using that $(-b_j)\geq c>0$ for j large enough and taking the limit as $M\to \infty $ .

Similarly, we can arrive at a contradiction from

$$\begin{align*}\sum_{k=1}^{+\infty} \left[\prod_{j=1}^k\left(\frac{\beta_j^v}{1-\beta_j^v}\right)\right]=+\infty, \end{align*}$$
$$\begin{align*}\sum_{k=2}^{+\infty}\sum_{j=1}^{k-1}\left(\prod_{l=j+1}^k \frac{\beta_l^v}{1-\beta_l^v}\right)\left(\frac{q_j}{1-q_j}\right)\left(\frac{1}{1-\beta_j^v}\right)=+\infty \end{align*}$$

or

$$\begin{align*}\sum_{k=1}^{+\infty}q_k=+\infty \end{align*}$$

using the second equation in (3.1) (in this case, we follow the path given by the maxima of the second component of the system, v, among the successors, and start with two constants such that $C_1<C_2$ ).
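The role of the divergent series in the contradiction above can be seen numerically. A small sketch (the constant choice $\beta_k^u=1/2$ is an assumption made for illustration; it makes each factor $\beta_j^u/(1-\beta_j^u)$ equal to 1):

```python
from math import prod

# When beta_k^u = 1/2, each ratio beta/(1 - beta) equals 1, so every product
# prod_{j<=k} beta_j/(1-beta_j) equals 1 and the first series diverges: its
# partial sums grow without bound, which is what forces the right-hand side
# of (3.10) to blow up.
beta = lambda k: 0.5

partial = [sum(prod(beta(j) / (1 - beta(j)) for j in range(1, k + 1))
               for k in range(1, K + 1))
           for K in (10, 100, 1000)]
print(partial)  # → [10.0, 100.0, 1000.0]
```

Each inner product is exactly 1, so the partial sum up to level K equals K, incompatible with the finite limit $u(\overline{x}_{M+1})\to C_1$.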

Remark 3.7 Notice that we proved that the system (1.1) has a unique solution for every continuous data f and g if and only if it has a unique solution when f and g are constant functions.

4 Game theoretical interpretation

Recall that in the introduction, we mentioned that the system (1.1) with F and G given by (1.5),

(4.1) $$ \begin{align} \left\lbrace \begin{array}{@{}l} \displaystyle u(x)=(1-p_k)\Big\{ \displaystyle (1-\beta_k^u ) \Big(\frac12\max_{y\in S(x)}u(y)+\frac12\min_{y\in S(x)}u(y)\Big) + \beta_k^u u (\hat{x}) \Big\}+p_k v(x), \\[10pt] \displaystyle v(x)=(1-q_k)\Big\{ (1-\beta_k^v )\Big( \frac{1}{m}\sum_{y\in S(x)}v(y) \Big) + \beta_k^v v (\hat{x}) \Big\}+q_k u(x), \end{array} \right.\nonumber\\ \end{align} $$

for $x \in \mathbb {T}_m$ , has a probabilistic interpretation. In this final section, we present the details.

The game is a two-player zero-sum game played on two boards (each board is a copy of the m-regular tree) with the following rules: the game starts at some node $(x_0,i)$ with $x_0\in \mathbb {T}$ and $i=1,2$ (we add an index to denote in which board the position of the game is). If $x_0$ is in the first board, then with probability $p_k$ the position jumps to the other board; with probability $(1-p_k)(1-\beta _k^u)$ the two players play a round of a Tug-of-War game (a fair coin is tossed and the winner chooses the next position of the game among the successors of $x_0$ ; we refer to [Reference Blanc and Rossi4, Reference Lewicka13, Reference Peres, Schramm, Sheffield and Wilson18, Reference Peres and Sheffield19] for more details concerning Tug-of-War games); and with probability $(1-p_k)\beta _k^u$ the position of the game goes to the predecessor (in the first board). In the second board, with probability $q_k$ the position changes to the first board; with probability $(1-q_k) (1-\beta _k^v)$ the position goes to one of the successors of $x_0$ with uniform probability; and with probability $(1-q_k)\beta _k^v$ the position goes to the predecessor. We fix a finite level L (large) and add the rule that the game ends when the position arrives at a node $x_\tau $ at level L. We also have two final payoffs f and g: in the first board, Player I pays to Player II the amount encoded by $f(\psi (x_\tau ))$ , while in the second board, the final payoff is given by $g(\psi (x_\tau ))$ . Then the value function for this game is defined as

$$ \begin{align*}w_L (x,i) = \inf_{S_I} \sup_{S_{II}} \mathbb{E}^{(x,i)} (\mbox{final payoff}) = \sup_{S_{II}} \inf_{S_I} \mathbb{E}^{(x,i)} (\mbox{final payoff}). \end{align*} $$

Here, the $\inf $ and $\sup $ are taken over all possible strategies of the players (the choice that each player makes at every node of what will be the next position in case they play (probability $(1-p_k)(1-\beta _k^u)$ ) and win the coin toss (probability $1/2$ )). The final payoff is given by f or g according to whether $i_\tau =1$ or $i_\tau =2$ (the final position of the game is in the first or in the second board). The value of the game $w_L (x,i)$ encodes the amount that the players expect to get/pay playing their best with final payoffs f and g at level L.

We have that the pair of functions $(u_L,v_L)$ given by $u_L(x) = w_L (x,1)$ and $v_L (x) = w_L (x,2)$ is a solution to the system (4.1) in the finite subgraph of the tree composed of nodes of level less than L.

Notice that the first equation encodes all the possibilities for the next position of the game in the first board. We have

$$ \begin{align*} \displaystyle u(x)&=(1-p_k)\Big\{ \displaystyle (1-\beta_k^u ) \Big(\frac12\max_{y\in S(x)}u(y)+\frac12\min_{y\in S(x)}u(y)\Big) + \beta_k^u u (\hat{x}) \Big\}+p_k v(x)\\\displaystyle &= (1-p_k) (1-\beta_k^u ) \Big(\frac12\max_{y\in S(x)}u(y)+\frac12\min_{y\in S(x)}u(y)\Big) + (1-p_k) \beta_k^u u (\hat{x}) +p_k v(x). \end{align*} $$

Now, we observe that the value of the game at a node x in the first board is the sum of the conditional expectations: the probability of playing, $(1-p_k) (1-\beta _k^u )$ , times the value of one round of Tug-of-War (with probability $1/2$ one player chooses the successor where the maximum of u is achieved, and with probability $1/2$ the other player chooses the successor where the minimum of u is achieved); plus the probability of going to the predecessor, $(1-p_k)\beta _k^u$ , times the value of u at the predecessor; plus, finally, the probability of jumping to the other board, $p_k$ , times the value of the game in that case, $v(x)$ .

Similarly, the second equation

$$ \begin{align*}v(x)=(1-q_k)\Big\{ (1-\beta_k^v )\Big( \frac{1}{m}\sum_{y\in S(x)}v(y) \Big) + \beta_k^v v (\hat{x}) \Big\}+q_k u(x) \end{align*} $$

takes into account all the possibilities for the game in the second board.

Remark that, when $\beta _k^u=1$ , at a node of level k there is no possibility of going to a successor (when the players play, the only possibility is to go to the predecessor). Therefore, when $\beta _k^u=\beta _k^v=1$ , this game is not well defined (since, for L larger than k, the game never ends). Hence, our assumption that $\beta _k^u$ and $\beta _k^v$ are uniformly bounded away from 1 seems reasonable. Notice that the game is also not well defined when $p_k=q_k=1$ .
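The finite-level system just described can be solved numerically by fixed-point iteration, which is one way to compute the values $(u_L,v_L)$. The sketch below (the constant coefficients, the boundary data, the midpoint convention for $\psi$, and the convention $\beta=0$ at the root are all assumptions made for illustration) iterates the two equations of (4.1) on the binary tree until the residual is negligible:

```python
from itertools import product

m, L = 2, 6
p_, q_, bu, bv = 0.2, 0.3, 0.25, 0.25  # constant p_k, q_k, beta_k^u, beta_k^v

f = lambda t: t * t    # final payoff on the first board
g = lambda t: 1.0 - t  # final payoff on the second board

# Nodes are digit words; the level-L words carry the boundary data.
nodes = [w for k in range(L + 1) for w in product(range(m), repeat=k)]
interior = [w for w in nodes if len(w) < L]

def psi(w):  # midpoint of the segment of [0, 1] associated with the word w
    j = sum(d * m ** (len(w) - 1 - i) for i, d in enumerate(w))
    return (j + 0.5) / m ** len(w)

u = {w: f(psi(w)) for w in nodes}  # initial guess; exact on the boundary
v = {w: g(psi(w)) for w in nodes}

for _ in range(2000):  # fixed-point (value) iteration
    for w in interior:
        kids = [w + (d,) for d in range(m)]
        b_u, b_v = (0.0, 0.0) if w == () else (bu, bv)  # beta = 0 at the root
        up = u[w[:-1]] if w else 0.0
        vp = v[w[:-1]] if w else 0.0
        tug = 0.5 * max(u[c] for c in kids) + 0.5 * min(u[c] for c in kids)
        avg = sum(v[c] for c in kids) / m
        u[w] = (1 - p_) * ((1 - b_u) * tug + b_u * up) + p_ * v[w]
        v[w] = (1 - q_) * ((1 - b_v) * avg + b_v * vp) + q_ * u[w]

# Residual of the first equation after convergence.
res = 0.0
for w in interior:
    kids = [w + (d,) for d in range(m)]
    b_u = 0.0 if w == () else bu
    up = u[w[:-1]] if w else 0.0
    tug = 0.5 * max(u[c] for c in kids) + 0.5 * min(u[c] for c in kids)
    res = max(res, abs(u[w] - ((1 - p_) * ((1 - b_u) * tug + b_u * up) + p_ * v[w])))
print(res)  # essentially zero after convergence
```

Since every update is a convex combination of neighboring values, the iterates stay between the minimum and the maximum of the boundary data, which mirrors the comparison principle used throughout.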

Now, our goal is to take the limit as $L \to \infty $ in these value functions for this game and obtain that the limit is the unique solution to our system (4.1) that verifies the boundary conditions

(4.2) $$ \begin{align} \left\lbrace \begin{array}{@{}ll} \displaystyle \lim_{{x}\rightarrow z = \psi (\pi)}u(x) = f(z) , \\[10pt] \displaystyle \lim_{{x}\rightarrow z = \psi(\pi)}v(x)=g(z). \end{array} \right. \end{align} $$

Theorem 4.1 Fix two continuous functions $f,g:[0,1] \to \mathbb {R}$ . The values of the game $(u_L,v_L)$ , that is, the solutions to (4.1) in the finite subgraph of the tree with nodes of level less than L and conditions $u_L(x) = f (\psi (x))$ , $v_L(x) = g (\psi (x))$ at nodes of level L, converge as $L\to \infty $ to $(u,v)$ , the unique solution to (4.1) with (4.2) in the whole tree.

Proof From the estimates proved in the previous section for the unique solution $(u,v)$ to (4.1) with (4.2) in the whole tree, we know that, given $\eta>0$ , there exists L large enough such that we have

$$ \begin{align*}u(x) \leq \max_{I_x} f + \eta, \qquad \mbox{and} \qquad v(x) \leq \max_{I_x} g + \eta \end{align*} $$

for every x at level L.

On the other hand, since f and g are continuous, it holds that

$$ \begin{align*}|u_L(x) - \max_{I_x} f | = |f(\psi(x)) - \max_{I_x} f | < \eta,\end{align*} $$

and

$$ \begin{align*}|v_L(x) - \max_{I_x} g | = |g(\psi(x)) - \max_{I_x} g |< \eta, \end{align*} $$

for every x at level L with L large enough.

Therefore, $(u,v)$ and $(u_L,v_L)$ are two solutions to the system (4.1) in the finite subgraph of the tree with nodes of level less than L that verify

$$ \begin{align*}u(x) < u_L(x) + 2 \eta, \qquad \mbox{and} \qquad v(x) < v_L(x) + 2\eta \end{align*} $$

for every x at level L. Now, since $(u_L(x) + 2 \eta , v_L(x) + 2 \eta )$ and $(u,v)$ are two solutions to (4.1) in the subgraph of the tree with nodes of level less than L that are ordered on the boundary (the set of nodes of level L), and the comparison principle can be used in this context, we conclude that

$$ \begin{align*}u(x) \leq \liminf_{L\to \infty} u_L(x) , \qquad \mbox{and} \qquad v(x) \leq \liminf_{L\to \infty} v_L(x). \end{align*} $$

A similar argument starting with

$$ \begin{align*}u(x) \geq \min_{I_x} f - \eta, \qquad \mbox{and} \qquad v(x) \geq \min_{I_x} g - \eta \end{align*} $$

for every x at level L with L large gives

$$ \begin{align*}u(x) \geq \limsup_{L\to \infty} u_L(x) , \qquad \mbox{and} \qquad v(x) \geq \limsup_{L\to \infty} v_L(x) \end{align*} $$

and completes the proof.

Footnotes

C. Mosquera is partially supported by grants UBACyT 20020170100430BA (Argentina), PICT 2018–03399 (Argentina), and PICT 2018–04027 (Argentina). A. Miranda and J. Rossi are partially supported by grants CONICET grant PIP GI No. 11220150100036CO (Argentina), PICT-2018-03183 (Argentina), and UBACyT grant 20020160100155BA (Argentina).

References

Alvarez, V., Rodríguez, J. M., and Yakubovich, D. V., Estimates for nonlinear harmonic "measures" on trees. Michigan Math. J. 49(2001), no. 1, 47–64.
Anandam, V., Harmonic functions and potentials on finite or infinite networks, Lecture Notes of the Unione Matematica Italiana, 12, Springer, Heidelberg; UMI, Bologna, 2011.
Bjorn, A., Bjorn, J., Gill, J. T., and Shanmugalingam, N., Geometric analysis on Cantor sets and trees. J. Reine Angew. Math. 725(2017), 63–114.
Blanc, P. and Rossi, J. D., Game theory and partial differential equations, De Gruyter Series in Nonlinear Analysis and Applications, 31, Walter de Gruyter GmbH & Co KG, Berlin, 2019.
Del Pezzo, L. M., Mosquera, C. A., and Rossi, J. D., The unique continuation property for a nonlinear equation on trees. J. Lond. Math. Soc. (2) 89(2014), no. 2, 364–382.
Del Pezzo, L. M., Mosquera, C. A., and Rossi, J. D., Estimates for nonlinear harmonic measures on trees. Bull. Braz. Math. Soc. (N.S.) 45(2014), no. 3, 405–432.
Del Pezzo, L. M., Mosquera, C. A., and Rossi, J. D., Existence, uniqueness and decay rates for evolution equations on trees. Port. Math. 71(2014), no. 1, 63–77.
Hartenstine, D. and Rudd, M., Asymptotic statistical characterizations of $p$-harmonic functions of two variables. Rocky Mountain J. Math. 41(2011), no. 2, 493–504.
Hartenstine, D. and Rudd, M., Statistical functional equations and $p$-harmonious functions. Adv. Nonlinear Stud. 13(2013), no. 1, 191–207.
Kaufman, R., Llorente, J. G., and Wu, J.-M., Nonlinear harmonic measures on trees. Ann. Acad. Sci. Fenn. Math. 28(2003), no. 2, 279–302.
Kaufman, R. and Wu, J.-M., Fatou theorem of $p$-harmonic functions on trees. Ann. Probab. 28(2000), no. 3, 1138–1148.
Kesten, H., Relations between solutions to a discrete and continuous Dirichlet problem. In: Durrett, R. and Kesten, H. (eds.), Random walks, Brownian motion, and interacting particle systems, Progress in Probability, 28, Birkhäuser, Boston, MA, 1991, pp. 309–321.
Lewicka, M., A course on tug-of-war games with random noise: Introduction and basic constructions, Universitext Book Series, Springer, Cham, 2020.
Manfredi, J. J., Oberman, A., and Sviridov, A., Nonlinear elliptic PDEs on graphs. Differential Integral Equations 28(2015), no. 1-2, 79–102.
Manfredi, J. J., Parviainen, M., and Rossi, J. D., An asymptotic mean value characterization for $p$-harmonic functions. Proc. Amer. Math. Soc. 138(2010), 881–889.
Oberman, A., A convergent difference scheme for the infinity Laplacian: Construction of absolutely minimizing Lipschitz extensions. Math. Comp. 74(2005), no. 251, 1217–1230.
Oberman, A., Finite difference methods for the infinity Laplace and $p$-Laplace equations. J. Comput. Appl. Math. 254(2013), 65–80.
Peres, Y., Schramm, O., Sheffield, S., and Wilson, D., Tug-of-war and the infinity Laplacian. J. Amer. Math. Soc. 22(2009), 167–210.
Peres, Y. and Sheffield, S., Tug-of-war with noise: A game theoretic view of the $p$-Laplacian. Duke Math. J. 145(2008), no. 1, 91–120.
Sviridov, A. P., Elliptic equations in graphs via stochastic games. Ph.D. thesis, University of Pittsburgh, ProQuest LLC, Ann Arbor, MI, 2011, 53 pp.
Sviridov, A. P., $p$-harmonious functions with drift on graphs via games. Electron. J. Differential Equations 2011(2011), no. 114, 11.