On Jensen’s inequality, Hölder’s inequality, and Minkowski’s inequality for dynamically consistent nonlinear evaluations
Journal of Inequalities and Applications volume 2015, Article number: 152 (2015)
Abstract
In this paper, the dynamically consistent nonlinear evaluations introduced by Peng are considered in the probability space \(L^{2} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{t\geq0},P )\). We investigate the n-dimensional (\(n\geq1\)) Jensen inequality, Hölder inequality, and Minkowski inequality for dynamically consistent nonlinear evaluations in \(L^{1} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{t\geq0},P )\). Furthermore, we give four equivalent conditions on the n-dimensional Jensen inequality for g-evaluations induced by backward stochastic differential equations with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)). Finally, we give a sufficient condition on a function g satisfying the non-uniform Lipschitz condition under which Hölder’s inequality and Minkowski’s inequality for the corresponding g-evaluation hold true. These results include and extend some existing results.
1 Introduction
It is well known (see Peng [1, 2]) that a dynamically consistent nonlinear evaluation in the probability space \(L^{2} (\Omega,{\mathcal {F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\), where \(\{{\mathcal {F}}_{t}\}_{t\geq0}\) is a given filtration, is a system of operators
$${\mathcal{E}}_{s,t}[\cdot]: L^{2}({\mathcal{F}}_{t})\mapsto L^{2}({\mathcal{F}}_{s}),\quad 0\leq s\leq t<\infty, $$
which satisfies the following properties:
- (i) \({\mathcal{E}}_{s,t}[X_{1}]\geq{\mathcal {E}}_{s,t}[X_{2}]\), if \(X_{1}\geq X_{2}\);
- (ii) \({\mathcal{E}}_{t,t}[X]=X\);
- (iii) \({\mathcal{E}}_{r,s}[{\mathcal {E}}_{s,t}[X]]={\mathcal{E}}_{r,t}[X]\), if \(0\leq r\leq s\leq t<\infty\);
- (iv) \(1_{A}{\mathcal{E}}_{s,t}[X]=1_{A}{\mathcal {E}}_{s,t}[1_{A}X]\), \(\forall A\in{\mathcal{F}}_{s}\).
Of course, we can define this notion in \(L^{1} (\Omega,{\mathcal {F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\).
In a financial market, the evaluation of the discounted value of a derivative is often treated as a dynamically consistent nonlinear evaluation (expectation). The well-known g-evaluation (g-expectation) induced by backward stochastic differential equations (BSDEs for short), which was put forward by Peng, is a special case of a dynamically consistent nonlinear evaluation (expectation). Nonlinear BSDEs were first introduced by Pardoux and Peng [3], who proved the existence and uniqueness of adapted solutions when the coefficient g is Lipschitz in \((y,z)\) uniformly in \((t,\omega)\), under square-integrability assumptions on the coefficient \(g(t,\omega,y,z)\) and the terminal condition ξ. Later many researchers developed the theory of BSDEs and their applications in a series of papers (see, for example, Hu and Peng [4], Lepeltier and San Martin [5], El Karoui et al. [6], Pardoux [7, 8], Briand et al. [9] and the references therein) under other assumptions on the coefficients, but for a fixed terminal time \(T>0\). In 2000, Chen and Wang [10] obtained the existence and uniqueness theorem for \(L^{2}\) solutions of infinite time interval BSDEs (\(T=\infty\)) by means of the martingale representation theorem and a fixed point argument. Recently, Zong [11] has obtained a result on \(L^{p}\) (\(1< p<2\)) solutions of infinite time interval BSDEs; one of its special cases is the existence and uniqueness theorem for BSDEs with non-uniformly Lipschitz coefficients.
The original motivation for studying nonlinear evaluation (expectation) and g-evaluation (g-expectation) comes from the theory of expected utility, which is the foundation of modern mathematical economics. Chen and Epstein [12] gave an application of dynamically consistent nonlinear evaluation (expectation) to recursive utility, Peng [1, 2, 13–15] and Rosazza Gianin [16] investigated some applications of dynamically consistent nonlinear evaluations (expectations) and g-evaluations (g-expectations) to static and dynamic pricing mechanisms and risk measures.
Since the notions of nonlinear evaluation (expectation) and g-evaluation (g-expectation) were introduced, many properties of the nonlinear evaluation (expectation) and g-evaluation (g-expectation) have been studied in [1, 2, 6, 10–31]. In [1, 2], Peng obtained an important result: he proved that if a dynamically consistent nonlinear evaluation \({\mathcal {E}}_{s,t}[\cdot]\) can be dominated by a kind of g-evaluation, then \({\mathcal{E}}_{s,t}[\cdot]\) must be a g-evaluation. Thus, in this case, many problems on dynamically consistent nonlinear evaluations \({\mathcal{E}}_{s,t}[\cdot]\) can be solved through the theory of BSDEs.
It is well known that Jensen’s inequality for classical mathematical expectations holds in general; this is a very important property with many important applications. For nonlinear expectations, however, even in the special case of g-expectations, Briand et al. [17] showed that Jensen’s inequality usually does not hold in general. Under the assumption that g is continuous with respect to t, several papers, such as [18, 19, 25, 27, 28], have therefore been devoted to Jensen’s inequality for g-expectations; with the help of the theory of BSDEs, they obtained necessary and sufficient conditions under which Jensen’s inequality for g-expectations holds in general. Under the assumptions that g does not depend on y and is convex, Chen et al. [18, 19] studied Jensen’s inequality for g-expectations and gave a necessary and sufficient condition on g under which Jensen’s inequality holds for convex functions. Assuming only that g does not depend on y, Jiang and Chen [28] gave another necessary and sufficient condition on g under which Jensen’s inequality holds for convex functions, which improved the result of Chen et al. Later, this result was improved by Hu [25] and Jiang [27]; in fact, Jiang [27] showed that g must be independent of y. In addition, Fan [22] studied Jensen’s inequality for filtration-consistent nonlinear expectations without a domination condition. Jia [26] studied the n-dimensional (\(n>1\)) Jensen inequality for g-expectations and showed that it holds if and only if g is independent of y and linear with respect to z; in other words, the corresponding g-expectation must be linear. This raises a natural question:
For a more general dynamically consistent nonlinear evaluation \({\mathcal{E}}_{s,t}[\cdot]\), what are the sufficient and necessary conditions under which Jensen’s inequality for \({\mathcal{E}}_{s,t}[\cdot]\) holds in general? Roughly speaking, what conditions on \({\mathcal{E}}_{s,t}[\cdot]\) are equivalent to the inequality
$${\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi\bigl({\mathcal {E}}_{s,t}[\xi]\bigr) \quad \textit{a.s.} $$
holding for any convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\)?
One of the objectives of this paper is to investigate this problem. At the same time, this paper will also investigate the sufficient and necessary conditions on \({\mathcal {E}}_{s,t}[\cdot]\) under which the n-dimensional (\(n>1\)) Jensen inequality holds. As applications of these two results, we give four equivalent conditions on the 1-dimensional Jensen inequality and the n-dimensional (\(n>1\)) Jensen inequality for g-evaluations induced by BSDEs with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)), respectively.
The remainder of this paper is organized as follows: In Section 2, we study the n-dimensional (\(n\geq1\)) Jensen inequality, Hölder inequality, and Minkowski inequality for dynamically consistent nonlinear evaluations in \(L^{1} (\Omega,{\mathcal{F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\). In Section 3, we give four equivalent conditions on the 1-dimensional Jensen inequality and the n-dimensional (\(n>1\)) Jensen inequality for g-evaluations induced by BSDEs with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)), respectively. These results generalize the known results on Jensen’s inequality for g-expectation in [18, 19, 22, 25–28, 31]. In Section 4, we give a sufficient condition on g that satisfies the non-uniform Lipschitz condition under which Hölder’s inequality and Minkowski’s inequality for the corresponding g-evaluation hold true.
2 Jensen’s inequality, Hölder’s inequality, and Minkowski’s inequality for dynamically consistent nonlinear evaluations
Let \((\Omega,{\mathcal{F}},P)\) be a probability space carrying a standard d-dimensional Brownian motion \((B_{t})_{t\geq0}\), and let \(({\mathcal{F}}_{t} )_{t\geq0}\) be the natural filtration generated by \((B_{t} )_{t\geq0}\). We always assume that \(({\mathcal{F}}_{t} )_{t\geq0}\) is complete. Let \(T > 0\) be a given real number. In this paper, we always work in the probability space \((\Omega,{\mathcal{F}}_{T},P)\), and only consider processes indexed by \(t\in[0, T ]\). We denote by \(L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\) (\(p\geq1\)) the space of \({\mathcal {F}}_{t}\)-measurable random variables satisfying \(E_{P}[|X|^{p}]<\infty\), and by \(L^{p}_{+}(\Omega,{\mathcal{F}}_{t} ,P)\) the space of non-negative random variables in \(L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\). Let \(1_{A}\) denote the indicator of the event A. For notational simplicity, we write \(L^{p}({\mathcal{F}}_{t}):= L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\) and \(L^{p}_{+}({\mathcal{F}}_{t}):=L^{p}_{+}(\Omega,{\mathcal{F}}_{t} ,P)\). For the convenience of the reader, we recall the notion of a dynamically consistent nonlinear evaluation, defined in \(L^{2}({\mathcal{F}}_{T})\) in Peng [1, 2], but defined in \(L^{1}({\mathcal{F}}_{T})\) in this section.
Definition 2.1
An \({\mathcal{F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\) is a system of operators
$${\mathcal{E}}_{s,t}[\cdot]: L^{1}({\mathcal{F}}_{t})\mapsto L^{1}({\mathcal{F}}_{s}),\quad 0\leq s\leq t\leq T, $$
which satisfies the following properties:
- (A.1) monotonicity: \({\mathcal {E}}_{s,t}[X_{1}]\geq{\mathcal{E}}_{s,t}[X_{2}]\), if \(X_{1}\geq X_{2}\);
- (A.2) \({\mathcal{E}}_{t,t}[X]=X\);
- (A.3) dynamical consistency: \({\mathcal {E}}_{r,s}[{\mathcal{E}}_{s,t}[X]]={\mathcal{E}}_{r,t}[X]\), if \(0\leq r\leq s\leq t\leq T\);
- (A.4) zero-one law: \(1_{A}{\mathcal {E}}_{s,t}[X]=1_{A}{\mathcal{E}}_{s,t}[1_{A}X]\), \(\forall A\in{\mathcal{F}}_{s}\).
First, we consider Jensen’s inequality for \({\mathcal{F}}_{t}\)-consistent nonlinear evaluations. We have the following results.
Theorem 2.1
Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\), then the following two statements are equivalent:
- (i) Jensen’s inequality for the \({\mathcal{F}}_{t}\)-consistent evaluation \({\mathcal{E}}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\) and \(\xi\in L^{1}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in L^{1}({\mathcal{F}}_{t})\), then we have
$${\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi\bigl({\mathcal {E}}_{s,t}[\xi]\bigr) \quad \textit{a.s.}; $$
- (ii) \(\forall(\xi,a,b)\in L^{1}({\mathcal{F}}_{t})\times \mathcal{R}\times\mathcal{R}\), \({\mathcal{E}}_{s,t}[a\xi+b]\geq a{\mathcal{E}}_{s,t}[\xi]+b\) a.s.
Proof
First, we prove that (i) implies (ii). Suppose (i) holds. For each \((\xi,a, b)\in L^{1}({\mathcal {F}}_{t} )\times\mathcal{R} \times\mathcal{R}\), let \(\varphi(x):=ax + b\). Obviously, \(\varphi(x)\) is a convex function and \(\varphi(\xi)\in L^{1}({\mathcal {F}}_{t} )\), so we have
$${\mathcal{E}}_{s,t}[a\xi+b]={\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi\bigl({\mathcal{E}}_{s,t}[\xi]\bigr)=a{\mathcal{E}}_{s,t}[\xi]+b \quad \textit{a.s.} $$
In the following, we prove that (ii) implies (i). Suppose (ii) holds. For each \((\xi,a, b)\in L^{1}({\mathcal{F}}_{t} )\times \mathcal{R} \times\mathcal{R}\), we have
$${\mathcal{E}}_{s,t}[a\xi+b]\geq a{\mathcal{E}}_{s,t}[\xi]+b \quad \textit{a.s.} \tag{2.1} $$
But for any convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\), there exists a countable set \(\mathcal{D}\subseteq\mathcal{R}^{2}\) such that
$$\varphi(x)=\sup_{(a,b)\in\mathcal{D}}(ax+b),\quad \forall x\in\mathcal{R}. \tag{2.2} $$
In view of (2.1), for any \((a,b)\in\mathcal{D}\), we have
$${\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq{\mathcal{E}}_{s,t}[a\xi+b]\geq a{\mathcal{E}}_{s,t}[\xi]+b \quad \textit{a.s.}, $$
which implies (i) by taking the supremum over \((a,b)\in\mathcal{D}\) and using (2.2). □
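The countable affine representation (2.2) used above can be checked numerically: for a smooth convex φ, the supremum of its tangent lines taken over a fine (hence countable) grid of tangent points already recovers φ. A minimal sketch; the choice \(\varphi(x)=x^{2}\) and the grid are illustrative assumptions, not from the paper:

```python
import numpy as np

# For phi(x) = x^2 the tangent line at u is x -> 2*u*x - u^2, so
# phi(x) = sup_u (2*u*x - u^2); a countable grid of tangent points u
# already recovers phi up to the grid resolution.
phi = lambda x: x ** 2

u_grid = np.linspace(-10, 10, 2001)   # countable family of affine minorants
xs = np.linspace(-3, 3, 100)
sup_affine = np.max(2.0 * u_grid[None, :] * xs[:, None] - u_grid[None, :] ** 2,
                    axis=1)

# the supremum of affine minorants equals the convex function (up to grid error)
assert np.allclose(sup_affine, phi(xs), atol=1e-3)
```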
Theorem 2.2
Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\) and \(n>1\), then the following two statements are equivalent:
- (i) the n-dimensional Jensen inequality for an \({\mathcal{F}}_{t}\)-consistent evaluation \({\mathcal{E}}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in L^{1}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in L^{1}({\mathcal{F}}_{t})\), then we have
$${\mathcal{E}}_{s,t} \bigl[\varphi(\xi_{1},\xi_{2}, \ldots,\xi_{n}) \bigr]\geq \varphi \bigl({\mathcal {E}}_{s,t}[ \xi_{1}],{\mathcal{E}}_{s,t}[\xi_{2}],\ldots,{ \mathcal {E}}_{s,t}[\xi_{n}] \bigr) \quad \textit{a.s.}; $$
- (ii) \({\mathcal{E}}_{s,t}\) is linear, i.e.,
  - (a) \({\mathcal{E}}_{s,t}[\lambda X]=\lambda{\mathcal {E}}_{s,t}[X]\) a.s., \(\forall(X,\lambda)\in L^{1}({\mathcal{F}}_{t})\times \mathcal{R}\);
  - (b) \({\mathcal{E}}_{s,t}[X+Y]={\mathcal {E}}_{s,t}[X]+{\mathcal{E}}_{s,t}[Y]\) a.s., \(\forall(X,Y)\in L^{1}({\mathcal{F}}_{t})\times L^{1}({\mathcal{F}}_{t})\);
  - (c) \({\mathcal{E}}_{s,t}[\mu]=\mu\) a.s., \(\forall\mu\in \mathcal{R}\).
Proof
We prove (i) implies (ii).
First, we prove that (i) implies (ii)(a). For each \((X,\lambda)\in L^{1}({\mathcal{F}}_{t})\times\mathcal{R}\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=\lambda x_{1}\) and \(\xi_{1}:=X\). Obviously, \(\varphi(x_{1},x_{2},\ldots,x_{n})\) is a convex function and \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in L^{1}({\mathcal{F}}_{t} )\), so we have
$${\mathcal{E}}_{s,t}[\lambda X]\geq\lambda{\mathcal{E}}_{s,t}[X] \quad \textit{a.s.} \tag{2.3} $$
On the other hand, let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}-(\lambda-1)x_{2}\), \(\xi_{1}:=\lambda X\), and \(\xi_{2}:=X\). By (i), we can deduce that
$${\mathcal{E}}_{s,t}[X]={\mathcal{E}}_{s,t}\bigl[\lambda X-(\lambda-1)X\bigr]\geq{\mathcal{E}}_{s,t}[\lambda X]-(\lambda-1){\mathcal{E}}_{s,t}[X] \quad \textit{a.s.}, $$
i.e.,
$${\mathcal{E}}_{s,t}[\lambda X]\leq\lambda{\mathcal{E}}_{s,t}[X] \quad \textit{a.s.} \tag{2.4} $$
It follows from (2.3) and (2.4) that (ii)(a) holds true.
Next we prove that (ii)(b) holds. For each \((X,Y)\in L^{1}({\mathcal {F}}_{t})\times L^{1}({\mathcal{F}}_{t})\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}+x_{2}\), \(\xi_{1}:=X\), and \(\xi_{2}:=Y\), so we have
$${\mathcal{E}}_{s,t}[X+Y]\geq{\mathcal{E}}_{s,t}[X]+{\mathcal{E}}_{s,t}[Y] \quad \textit{a.s.} \tag{2.5} $$
On the other hand, let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}-x_{2}\), \(\xi_{1}:=X+Y\), and \(\xi_{2}:=Y\). By (i), we have
$${\mathcal{E}}_{s,t}[X]={\mathcal{E}}_{s,t}\bigl[(X+Y)-Y\bigr]\geq{\mathcal{E}}_{s,t}[X+Y]-{\mathcal{E}}_{s,t}[Y] \quad \textit{a.s.}, $$
i.e.,
$${\mathcal{E}}_{s,t}[X+Y]\leq{\mathcal{E}}_{s,t}[X]+{\mathcal{E}}_{s,t}[Y] \quad \textit{a.s.} \tag{2.6} $$
Thus, from (2.5) and (2.6), we can see that (ii)(b) holds.
Finally, we prove that (ii)(c) holds. For each \(\mu\in\mathcal{R}\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=\mu\), so we have
$${\mathcal{E}}_{s,t}[\mu]\geq\mu \quad \textit{a.s.} \tag{2.7} $$
On the other hand, let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=2x_{1}-\mu\) and \(\xi_{1}:=\mu\). By (i), we can obtain
$${\mathcal{E}}_{s,t}[\mu]={\mathcal{E}}_{s,t}[2\mu-\mu]\geq2{\mathcal{E}}_{s,t}[\mu]-\mu \quad \textit{a.s.}, $$
i.e.,
$${\mathcal{E}}_{s,t}[\mu]\leq\mu \quad \textit{a.s.} \tag{2.8} $$
It follows from (2.7) and (2.8) that (ii)(c) holds true.
In the following, we prove (ii) implies (i). Suppose (ii) holds, then for any \((a_{1},a_{2},\ldots,a_{n},b)\in\mathcal{R}^{n+1}\) and \(\xi_{i}\in L^{1}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), we have
$${\mathcal{E}}_{s,t}\Biggl[\sum_{i=1}^{n}a_{i}\xi_{i}+b\Biggr]=\sum_{i=1}^{n}a_{i}{\mathcal{E}}_{s,t}[\xi_{i}]+b \quad \textit{a.s.} \tag{2.9} $$
But for any convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\), there exists a countable set \(\mathcal{D}\subseteq\mathcal{R}^{n+1}\) such that
$$\varphi(x_{1},x_{2},\ldots,x_{n})=\sup_{(a_{1},a_{2},\ldots,a_{n},b)\in\mathcal{D}}\Biggl(\sum_{i=1}^{n}a_{i}x_{i}+b\Biggr). \tag{2.10} $$
In view of (2.9), for any \((a_{1},a_{2},\ldots,a_{n},b)\in\mathcal{D}\), we have
$${\mathcal{E}}_{s,t}\bigl[\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\bigr]\geq{\mathcal{E}}_{s,t}\Biggl[\sum_{i=1}^{n}a_{i}\xi_{i}+b\Biggr]=\sum_{i=1}^{n}a_{i}{\mathcal{E}}_{s,t}[\xi_{i}]+b \quad \textit{a.s.}, $$
which implies (i) by taking the supremum over \(\mathcal{D}\) and using (2.10). □
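Direction (ii) ⇒ (i) can be sanity-checked against the classical linear expectation, which satisfies (ii)(a)-(c) exactly. A minimal numeric sketch; the finite sample space and the convex function \(\varphi(x_{1},x_{2})=\max(x_{1},x_{2})\) are illustrative choices, not from the paper:

```python
import numpy as np

# A weighted average E[X] = sum_i p_i X_i on a finite sample space is linear,
# so by Theorem 2.2 the 2-dimensional Jensen inequality must hold for it.
rng = np.random.default_rng(0)
p = rng.random(50)
p /= p.sum()                               # probability weights
xi1, xi2 = rng.normal(size=50), rng.normal(size=50)

E = lambda X: float(np.dot(p, X))          # the linear evaluation
phi = np.maximum                           # a convex function of (x1, x2)

lhs = E(phi(xi1, xi2))                     # E[phi(xi1, xi2)]
rhs = float(phi(E(xi1), E(xi2)))           # phi(E[xi1], E[xi2])
assert lhs >= rhs - 1e-12
```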
The basic version of Hölder’s inequality for the classical mathematical expectation \(E_{P}\) defined in \((\Omega,{\mathcal{F}}_{T},P)\) reads
$$E_{P}[XY]\leq\bigl(E_{P}\bigl[X^{p}\bigr]\bigr)^{1/p}\bigl(E_{P}\bigl[Y^{q}\bigr]\bigr)^{1/q}, \tag{2.11} $$
where X, Y are non-negative random variables in \((\Omega,{\mathcal {F}}_{T},P)\) and \(1< p, q<\infty\) is a pair of conjugate exponents, i.e., \(\frac{1}{p}+\frac{1}{q}=1\). One may proceed in the following way (cf., e.g., Krein et al. [32], p.43). By elementary calculus, one verifies Young’s inequality
$$ab\leq\frac{a^{p}}{p}+\frac{b^{q}}{q} $$
for any constants \(a, b\geq0\). Applying it with \(a:=rX\) and \(b:=r^{-1}Y\) yields \(XY\leq\frac{r^{p}}{p}X^{p}+\frac{r^{-q}}{q}Y^{q}\) a.s. for any \(r>0\). Taking the expectation yields \(E_{P}[XY]\leq\frac{r^{p}}{p}E_{P}[X^{p}]+\frac{r^{-q}}{q}E_{P}[Y^{q}]\) for any \(r>0\), and taking the infimum with respect to r we arrive at (2.11).
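This derivation can be checked numerically. The sketch below (uniform samples and the exponent p=3 are illustrative assumptions, not from the paper) verifies the parametrized Young inequality for several values of r, and that minimizing the right-hand side over r recovers the Hölder bound (2.11):

```python
import numpy as np

# Check XY <= (r^p/p) X^p + (r^{-q}/q) Y^q for every r > 0, and that
# minimizing the right-hand side over r recovers Hölder's inequality.
rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1.0)                          # conjugate exponent, 1/p + 1/q = 1
X, Y = rng.random(1000), rng.random(1000)  # non-negative random variables

for r in (0.5, 1.0, 2.0):
    assert np.all(X * Y <= (r ** p / p) * X ** p + (r ** -q / q) * Y ** q + 1e-12)

# E[XY] <= inf_r { (r^p/p) E[X^p] + (r^{-q}/q) E[Y^q] } = E[X^p]^{1/p} E[Y^q]^{1/q}
rs = np.linspace(0.05, 5.0, 2000)
bound = np.min((rs ** p / p) * np.mean(X ** p) + (rs ** -q / q) * np.mean(Y ** q))
holder = np.mean(X ** p) ** (1 / p) * np.mean(Y ** q) ** (1 / q)
assert np.mean(X * Y) <= holder
assert holder <= bound + 1e-9              # the grid infimum sits above the bound
```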
By the above argument, we have the following Hölder inequality for \({\mathcal{F}}_{t}\)-consistent nonlinear evaluations.
Theorem 2.3
Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\). If \({\mathcal{E}}_{s,t}[\cdot]\) satisfies the following conditions:
- (d) \({\mathcal{E}}_{s,t}[\xi+\eta]\leq{\mathcal {E}}_{s,t}[\xi]+{\mathcal{E}}_{s,t}[\eta]\) a.s., \(\forall(\xi,\eta)\in L^{1}_{+}({\mathcal{F}}_{t})\times L^{1}_{+}({\mathcal{F}}_{t})\);
- (e) \({\mathcal{E}}_{s,t}[\lambda \xi]\leq\lambda{\mathcal{E}}_{s,t}[\xi]\) a.s., \(\forall\xi\in L^{1}_{+}({\mathcal{F}}_{t})\), \(\lambda\geq0\),
then, for any \(X,Y\in L^{1}({\mathcal{F}}_{t})\) and \(|X|^{p}, |Y|^{q}\in L^{1}({\mathcal{F}}_{t})\) (\(p, q>1\) and \(1/p+1/q=1\)), we have
$${\mathcal{E}}_{s,t}\bigl[|XY|\bigr]\leq\bigl({\mathcal{E}}_{s,t}\bigl[|X|^{p}\bigr]\bigr)^{1/p}\bigl({\mathcal{E}}_{s,t}\bigl[|Y|^{q}\bigr]\bigr)^{1/q} \quad \textit{a.s.} $$
Similarly, we have the following Minkowski inequality for \({\mathcal {F}}_{t}\)-consistent nonlinear evaluations.
Theorem 2.4
Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\). If \({\mathcal{E}}_{s,t}[\cdot]\) satisfies the following conditions:
- (d) \({\mathcal{E}}_{s,t}[\xi+\eta]\leq{\mathcal {E}}_{s,t}[\xi]+{\mathcal{E}}_{s,t}[\eta]\) a.s., \(\forall(\xi,\eta)\in L^{1}_{+}({\mathcal{F}}_{t})\times L^{1}_{+}({\mathcal{F}}_{t})\);
- (e) \({\mathcal{E}}_{s,t}[\lambda \xi]\leq\lambda{\mathcal{E}}_{s,t}[\xi]\) a.s., \(\forall\xi\in L^{1}_{+}({\mathcal{F}}_{t})\), \(\lambda\geq0\),
then, for any \(X,Y\in L^{1}({\mathcal{F}}_{t})\) and \(|X|^{p},|Y|^{p}\in L^{1}({\mathcal{F}}_{t})\) (\(p>1\)), we have
$$\bigl({\mathcal{E}}_{s,t}\bigl[|X+Y|^{p}\bigr]\bigr)^{1/p}\leq\bigl({\mathcal{E}}_{s,t}\bigl[|X|^{p}\bigr]\bigr)^{1/p}+\bigl({\mathcal{E}}_{s,t}\bigl[|Y|^{p}\bigr]\bigr)^{1/p} \quad \textit{a.s.} \tag{2.12} $$
Proof
Here \(h:[0,\infty)\times[0,\infty)\mapsto[0,\infty)\) is of the form
$$h(x_{1},x_{2})=\bigl(x_{1}^{1/p}+x_{2}^{1/p}\bigr)^{p}=\inf_{r\in\mathcal{Q}\cap(0,1)}\bigl\{ r^{1-p}x_{1}+(1-r)^{1-p}x_{2}\bigr\}, \tag{2.13} $$
where \(\mathcal{Q}\) is the set of all rational numbers in \(\mathcal {R}\). Let \(x_{1}:=|X|^{p}\) and \(x_{2}:=|Y|^{p}\). From (2.13), we have
$$|X+Y|^{p}\leq\bigl(|X|+|Y|\bigr)^{p}=h\bigl(|X|^{p},|Y|^{p}\bigr)\leq r^{1-p}|X|^{p}+(1-r)^{1-p}|Y|^{p} $$
for all \(r\in\mathcal{Q}\cap(0,1)\). It follows from (d) and (e) that
$${\mathcal{E}}_{s,t}\bigl[|X+Y|^{p}\bigr]\leq r^{1-p}{\mathcal{E}}_{s,t}\bigl[|X|^{p}\bigr]+(1-r)^{1-p}{\mathcal{E}}_{s,t}\bigl[|Y|^{p}\bigr] \quad \textit{a.s.} $$
for all \(r\in\mathcal{Q}\cap(0,1)\). Taking the infimum with respect to r in \(\mathcal{Q}\cap(0,1)\), we have
$${\mathcal{E}}_{s,t}\bigl[|X+Y|^{p}\bigr]\leq\Bigl(\bigl({\mathcal{E}}_{s,t}\bigl[|X|^{p}\bigr]\bigr)^{1/p}+\bigl({\mathcal{E}}_{s,t}\bigl[|Y|^{p}\bigr]\bigr)^{1/p}\Bigr)^{p} \quad \textit{a.s.} $$
Thus, (2.12) holds true. □
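The infimum representation (2.13) driving this proof can be checked numerically; a minimal sketch with illustrative values \(p=2.5\), \(x_{1}=3\), \(x_{2}=5\) (our choices, not from the paper):

```python
import numpy as np

# Verify (x1^{1/p} + x2^{1/p})^p = inf_{0<r<1} { r^{1-p} x1 + (1-r)^{1-p} x2 };
# the infimum is attained at r = x1^{1/p} / (x1^{1/p} + x2^{1/p}).
p = 2.5
x1, x2 = 3.0, 5.0
rs = np.linspace(1e-3, 1.0 - 1e-3, 200001)
inf_val = float(np.min(rs ** (1.0 - p) * x1 + (1.0 - rs) ** (1.0 - p) * x2))
closed_form = (x1 ** (1.0 / p) + x2 ** (1.0 / p)) ** p
assert abs(inf_val - closed_form) < 1e-6
```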
3 Jensen’s inequality for g-evaluations
In this section, first, we present some notations, notions, and propositions which are useful in this paper.
Let
and
For each \(t\in[0,T ]\), we consider the following BSDE with terminal time t:
$$Y_{s}=X+\int_{s}^{t}g(r,Y_{r},Z_{r})\,\mathrm{d}r-\int_{s}^{t}Z_{r}\cdot\mathrm{d}B_{r},\quad 0\leq s\leq t. \tag{3.1} $$
Here the function
$$g: \Omega\times[0,T]\times\mathcal{R}\times\mathcal{R}^{d}\mapsto\mathcal{R} $$
satisfies the following assumptions:
- (B.1) there exist two non-negative deterministic functions \(\alpha(t)\) and \(\beta(t)\) such that for all \(y_{1},y_{2}\in\mathcal{R}\), \(z_{1},z_{2}\in\mathcal{R}^{d}\),
$$\bigl\vert g(t,y_{1},z_{1})-g(t,y_{2},z_{2}) \bigr\vert \leq\alpha(t)|y_{1}-y_{2}|+\beta(t)|z_{1}-z_{2}|, \quad \forall t\in[0,T], $$
where \(\alpha(t)\) and \(\beta(t)\) satisfy \(\int_{0}^{T}\alpha^{2}(t)\, \mathrm{d}t<\infty\), \(\int_{0}^{T}\beta^{2}(t)\, \mathrm{d}t<\infty\);
- (B.2) \(g(t,0,0)\in{\mathcal{M}}(0,t;P;\mathcal{R})\);
- (B.3) \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall y\in\mathcal{R}\).
It is well known (see Zong [11]) that if the function g satisfies (B.1) and (B.2), then for each given \(X\in{\mathcal {L}}({\mathcal{F}}_{t})\), there exists a unique solution \((Y^{X},Z^{X})\in {\mathcal{S}}(0,t;P;\mathcal{R})\times{\mathcal{L}}(0,t;P;\mathcal {R}^{d})\) of BSDE (3.1).
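The general existence proof relies on a fixed point argument, but the dynamics of BSDE (3.1) can be sketched in the simplest degenerate case. In the sketch below (our illustration, not the paper's construction) the generator is \(g(t,y,z)=ay\) and the terminal value \(X=c\) is deterministic, so \(Z\equiv0\) and (3.1) reduces to the ODE \(\mathrm{d}Y_{s}=-aY_{s}\,\mathrm{d}s\), \(Y_{t}=c\), with explicit solution \(Y_{s}=c\mathrm{e}^{a(t-s)}\); a backward Euler recursion recovers \(Y_{0}\):

```python
import math

# Backward Euler for the degenerate BSDE: Y_k = Y_{k+1} + g(Y_{k+1}) * dt,
# stepping from the terminal time t back to 0, with g(y) = a * y.
a, c, t, n = 0.3, 2.0, 1.0, 100_000
dt = t / n
y = c                        # terminal condition Y_t = c
for _ in range(n):
    y = y + a * y * dt       # one backward step
exact = c * math.exp(a * t)  # explicit solution Y_0 = c * e^{a t}
assert abs(y - exact) < 1e-3
```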
Example 3.1
For each given \(\xi\in{\mathcal {L}}({\mathcal{F}}_{T})\), the BSDE
$$Y_{s}=\xi+\int_{s}^{T}g(r,Y_{r},Z_{r})\,\mathrm{d}r-\int_{s}^{T}Z_{r}\cdot\mathrm{d}B_{r},\quad 0\leq s\leq T, $$
has a unique solution in \({\mathcal{S}}(0,T;P;\mathcal{R})\times{\mathcal {L}}(0,T;P;\mathcal{R}^{d})\).
We denote \({\mathcal{E}}^{g} _{s,t}[X] :=Y_{s}^{X}\). We thus define a system of operators
$${\mathcal{E}}^{g}_{s,t}[\cdot]: {\mathcal{L}}({\mathcal{F}}_{t})\mapsto{\mathcal{L}}({\mathcal{F}}_{s}),\quad 0\leq s\leq t\leq T. $$
This system is completely determined by the above given function g. We have the following.
Proposition 3.1
We assume that the function g satisfies (B.1) and (B.2). Then the system of operators \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation defined in \({\mathcal {L}}({\mathcal{F}}_{T})\).
The proof of Proposition 3.1 is very similar to that of Corollary 2.9 in [13], so we omit it.
Remark 3.1
From Proposition 3.1, we know that the dynamically consistent nonlinear evaluation \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is completely determined by the given function g. Thus, we call \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) a g-evaluation.
Definition 3.1
(g-Expectation) (see Zong [11])
Suppose that the function g satisfies (B.1) and (B.3). The g-expectation \({\mathcal{E}}_{g}[\cdot]:{\mathcal{L}}({\mathcal {F}}_{T})\mapsto\mathcal{R}\) is defined by \({\mathcal {E}}_{g}[\xi]=Y_{0}^{\xi}\).
Definition 3.2
(Conditional g-expectation) (see Zong [11])
Suppose that the function g satisfies (B.1) and (B.3). The conditional g-expectation of ξ with respect to \({\mathcal{F}}_{t}\) is defined by \({\mathcal{E}}_{g}[\xi|{\mathcal{F}}_{t}]=Y_{t}^{\xi}\).
Proposition 3.2
(see Zong [11])
\({\mathcal {E}}_{g}[\xi|{\mathcal{F}}_{t}]\) is the unique random variable η in \({\mathcal{L}}({\mathcal{F}}_{t})\) such that
$${\mathcal{E}}_{g}[1_{A}\xi]={\mathcal{E}}_{g}[1_{A}\eta],\quad \forall A\in{\mathcal{F}}_{t}. $$
Proposition 3.3
For any \(\xi_{n}\in{\mathcal {L}}({\mathcal{F}}_{t})\), if \(\lim_{n\rightarrow\infty}\xi_{n}=\xi\) a.s. and \(|\xi_{n}|\leq\eta\) a.s. with \(\eta\in{\mathcal{L}}({\mathcal{F}}_{t})\), then for \(0\leq s\leq t\leq T\),
$$\lim_{n\rightarrow\infty}{\mathcal{E}}^{g}_{s,t}[\xi_{n}]={\mathcal{E}}^{g}_{s,t}[\xi] \quad \textit{a.s.} $$
The proof of Proposition 3.3 is very similar to that of Theorem 3.1 in Hu and Chen [24], so we omit it.
In the following, we study Jensen’s inequality for g-evaluations. First, we introduce some notions on g.
Definition 3.3
Let \(g: \Omega\times[0,T]\times \mathcal{R}\times\mathcal{R}^{d}\mapsto\mathcal{R}\). The function g is said to be super-homogeneous if, for each \((y,z)\in\mathcal{R}\times\mathcal {R}^{d}\) and \(\lambda\in\mathcal{R}\), \(g(t,\lambda y,\lambda z)\geq\lambda g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be positively homogeneous if, for each \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\) and \(\lambda\geq0\), \(g(t,\lambda y,\lambda z)=\lambda g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be sub-additive if, for any \((y,z), (\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y+\overline{y},z+\overline{z})\leq g(t,y,z) +g(t,\overline{y},\overline{z})\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be super-additive if, for any \((y,z), (\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y+\overline{y},z+\overline{z})\geq g(t,y,z) +g(t,\overline{y},\overline{z})\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.
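These properties can be seen on concrete generators. A small numeric sketch; the generators \(g_{1}(z)=|z|\) and \(g_{2}(z)=a\cdot z\) are our illustrative choices, not from the paper:

```python
import numpy as np

# g1(z) = |z| is positively homogeneous, sub-additive, and super-homogeneous
# (|lam * z| = |lam| * |z| >= lam * |z| for every real lam);
# g2(z) = a . z is linear, so additivity holds with equality.
rng = np.random.default_rng(2)
z, w = rng.normal(size=(2, 3))
a = rng.normal(size=3)
g1 = lambda v: float(np.linalg.norm(v))
g2 = lambda v: float(np.dot(a, v))

for lam in (-2.0, 0.5, 3.0):
    assert g1(lam * z) >= lam * g1(z) - 1e-12           # super-homogeneous
    if lam >= 0:
        assert abs(g1(lam * z) - lam * g1(z)) < 1e-12   # positively homogeneous
assert g1(z + w) <= g1(z) + g1(w) + 1e-12               # sub-additive
assert abs(g2(z + w) - (g2(z) + g2(w))) < 1e-12         # additive (linear case)
```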
Theorem 3.1
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is a g-evaluation, then the following three statements are equivalent:
- (i) Jensen’s inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi(x): \mathcal{R}\mapsto\mathcal{R}\) and each \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in{\mathcal {L}}({\mathcal{F}}_{t})\), then we have
$${\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi]\bigr) \quad \textit{a.s.}; $$
- (ii) \(\forall(\xi,a,b)\in{\mathcal{L}}({\mathcal {F}}_{t})\times\mathcal{R}\times\mathcal{R}\), \({\mathcal {E}}^{g}_{s,t}[a\xi+b]\geq a{\mathcal{E}}^{g}_{s,t}[\xi]+b\) a.s.;
- (iii) g is independent of y and super-homogeneous with respect to z.
Theorem 3.2
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is a g-evaluation, then the following three statements are equivalent:
- (i) the n-dimensional (\(n>1\)) Jensen inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in{\mathcal{L}}({\mathcal{F}}_{t})\), then we have
$${\mathcal{E}}^{g}_{s,t} \bigl[\varphi(\xi_{1}, \xi_{2},\ldots,\xi_{n}) \bigr]\geq \varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi_{1}],{\mathcal{E}}^{g}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}^{g}_{s,t}[ \xi_{n}] \bigr) \quad \textit{a.s.}; $$
- (ii) \({\mathcal{E}}^{g}_{s,t}\) is linear in \({\mathcal {L}}({\mathcal{F}}_{t})\);
- (iii) g is independent of y and linear with respect to z, i.e., g is of the form \(g(t,y,z)=g(t,z)=\alpha_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), where α is an \(\mathcal{R}^{d}\)-valued progressively measurable process.
In order to prove Theorems 3.1 and 3.2, we need the following lemmas. These lemmas can be found in Zong and Hu [33].
Lemma 3.1
Suppose that the function g satisfies (B.1) and (B.2). Then the following three conditions are equivalent:
- (i) The function g is independent of y.
- (ii) The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\), \({\mathcal{F}}_{t}\)-measurable simple function X and \(y\in\mathcal{R}\),
$${\mathcal{E}}^{g}_{s,t}[X+y]={\mathcal{E}}^{g}_{s,t}[X]+y\quad \textit{a.s.} $$
- (iii) The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\), \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\), and \(\eta \in {\mathcal{L}}({\mathcal{F}}_{s})\),
$${\mathcal{E}}^{g}_{s,t}[X+\eta]={\mathcal{E}}^{g}_{s,t}[X]+ \eta \quad \textit{a.s.} $$
Lemma 3.2
Suppose that the function g satisfies (B.1) and (B.2). Then the following three conditions are equivalent:
- (i) The function g is positively homogeneous.
- (ii) The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\), \(\lambda\geq0\), and \({\mathcal{F}}_{t}\)-measurable simple function X,
$${\mathcal{E}}^{g}_{s,t}[\lambda X]=\lambda{ \mathcal{E}}^{g}_{s,t}[X] \quad \textit{a.s.} $$
- (iii) The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) is positively homogeneous: for each \(0\leq s\leq t\leq T\), \(\lambda\geq0\), and \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\),
$${\mathcal{E}}^{g}_{s,t}[\lambda X]=\lambda{ \mathcal{E}}^{g}_{s,t}[X]\quad \textit{a.s.} $$
Lemma 3.3
Suppose that the function g satisfies (B.1) and (B.2). Then the following three conditions are equivalent:
- (i) The function g is sub-additive (super-additive).
- (ii) The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\) and \({\mathcal{F}}_{t}\)-measurable simple functions X and \(\overline{X}\),
$${\mathcal{E}}^{g}_{s,t}[X+\overline{X}]\leq(\geq)\, { \mathcal{E}}^{g}_{s,t}[X] +{\mathcal{E}}^{g}_{s,t}[ \overline{X}] \quad \textit{a.s.} $$
- (iii) The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) is sub-additive (super-additive): for each \(0\leq s\leq t\leq T\) and X, \(\overline{X}\in{\mathcal{L}}({\mathcal{F}}_{t})\),
$${\mathcal{E}}^{g}_{s,t}[X+\overline{X}]\leq(\geq)\, { \mathcal{E}}^{g}_{s,t}[X] +{\mathcal{E}}^{g}_{s,t}[ \overline{X}] \quad \textit{a.s.} $$
Lemma 3.4
Suppose that the functions g and \(\overline{g}\) satisfy (B.1) and (B.2). Then the following three conditions are equivalent:
- (i) \(g(t,y,z)\geq\overline{g}(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal{R}^{d}\).
- (ii) The corresponding dynamically consistent nonlinear evaluations \({\mathcal{E}}^{g}[\cdot]\) and \({\mathcal {E}}^{\overline{g}}[\cdot]\) satisfy, for each \(0\leq s\leq t\leq T\) and \({\mathcal{F}}_{t}\)-measurable simple function X,
$${\mathcal{E}}^{g}_{s,t}[X]\geq{\mathcal{E}}^{\overline{g}}_{s,t}[X] \quad \textit{a.s.} $$
- (iii) The corresponding dynamically consistent nonlinear evaluations \({\mathcal{E}}^{g}[\cdot]\) and \({\mathcal {E}}^{\overline{g}}[\cdot]\) satisfy, for each \(0\leq s\leq t\leq T\) and \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\),
$${\mathcal{E}}^{g}_{s,t}[X]\geq{\mathcal{E}}^{\overline{g}}_{s,t}[X] \quad \textit{a.s.} $$
In particular, \({\mathcal {E}}^{g}[\cdot]\equiv{\mathcal{E}}^{\overline{g}}[\cdot]\) if and only if \(g\equiv\overline{g}\).
Proof of Theorem 3.1
From Theorem 2.1, we only need to prove (ii) ⇔ (iii). (iii) ⇒ (ii) is obvious.
In the following, we prove (ii) ⇒ (iii). First, we prove that g is independent of y. Suppose (ii) holds. Applying (ii) with \(a=1\), \(b=y\), and then with \(a=1\), \(b=-y\) to the variable \(\xi+y\), we have, for any \((\xi,y)\in{\mathcal{L}}({\mathcal {F}}_{t})\times \mathcal{R}\),
$${\mathcal{E}}^{g}_{s,t}[\xi+y]={\mathcal{E}}^{g}_{s,t}[\xi]+y \quad \textit{a.s.} \tag{3.2} $$
By Lemma 3.1, we can deduce that g is independent of y.
Next we prove that g is super-homogeneous with respect to z. By (ii), we have, for any \((\xi,\lambda)\in{\mathcal {L}}({\mathcal{F}}_{t})\times\mathcal{R}\),
$${\mathcal{E}}^{g}_{s,t}[\lambda\xi]\geq\lambda{\mathcal{E}}^{g}_{s,t}[\xi] \quad \textit{a.s.} \tag{3.3} $$
For each \((s,z)\in[0,t]\times\mathcal{R}^{d}\), let \(Y_{\cdot}^{s,z}\) be the solution of the following stochastic differential equation (SDE for short) defined on \([s,t]\):
$$Y_{r}^{s,z}=-\int_{s}^{r}g(u,z)\,\mathrm{d}u+\int_{s}^{r}z\cdot\mathrm{d}B_{u},\quad r\in[s,t]. $$
From (3.3), we have
$${\mathcal{E}}_{g}\bigl[\lambda Y_{t}^{s,z}\big|{\mathcal{F}}_{r}\bigr]\geq\lambda{\mathcal{E}}_{g}\bigl[Y_{t}^{s,z}\big|{\mathcal{F}}_{r}\bigr]=\lambda Y_{r}^{s,z} \quad \textit{a.s.},\quad r\in[s,t]. \tag{3.4} $$
Thus, \((\lambda Y_{r}^{s,z})_{r\in[s,t]}\) is an \({\mathcal{E}}_{g}\)-submartingale. From the decomposition theorem of an \({\mathcal {E}}_{g}\)-supermartingale (see Zong and Hu [33]), it follows that there exists an increasing process \((A_{r})_{r\in[s,t]}\) such that
$$\lambda Y_{r}^{s,z}=-\int_{s}^{r}g(u,Z_{u})\,\mathrm{d}u+\int_{s}^{r}Z_{u}\cdot\mathrm{d}B_{u}+A_{r}-A_{s},\quad r\in[s,t]. $$
This with \(\lambda Y_{t}^{s,z}=-\int_{s}^{t}\lambda g(r,z)\, \mathrm{d}r+\int_{s}^{t}\lambda z\cdot\mathrm{d}B_{r}\) yields \(Z_{r}\equiv\lambda z\) and
$$\int_{s}^{t}\bigl(g(r,\lambda z)-\lambda g(r,z)\bigr)\,\mathrm{d}r=A_{t}-A_{s}\geq0, $$
so that
$$g(t,\lambda z)\geq\lambda g(t,z),\quad \mathrm{d}P\times\mathrm{d}t\text{-a.s.} \tag{3.5} $$
The proof of Theorem 3.1 is complete. □
Remark 3.2
The condition that g is super-homogeneous with respect to z implies that g is positively homogeneous with respect to z. Indeed, for each fixed \(\lambda>0\), applying (3.5) at the point \(\lambda z\) with scale \(\frac{1}{\lambda}\), we have \(\frac{1}{\lambda}g(t,\lambda z)\leq g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., i.e.,
$$g(t,\lambda z)\leq\lambda g(t,z),\quad \mathrm{d}P\times\mathrm{d}t\text{-a.s.} \tag{3.6} $$
Thus by (3.5) and (3.6), for any \(\lambda>0\),
$$g(t,\lambda z)=\lambda g(t,z),\quad \mathrm{d}P\times\mathrm{d}t\text{-a.s.} \tag{3.7} $$
In particular, choosing \(\lambda=2\) and \(z=0\), we have \(2 g(t,0)=g(t,0)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Hence \(g(t,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Thus (3.7) still holds for \(\lambda=0\).
Proof of Theorem 3.2
From Theorem 2.2, we only need to prove (ii) ⇔ (iii). (iii) ⇒ (ii) is obvious.
In the following, we prove (ii) ⇒ (iii). From the proof of Theorem 3.1, we can obtain, for any \(\lambda\in\mathcal{R}\) and \((y,z)\in\mathcal{R}\times\mathcal {R}^{d}\), \(g(t,y,\lambda z)=g(t,\lambda z)\geq\lambda g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Using the same method, we have \(g(t,y,\lambda z)=g(t,\lambda z)\leq\lambda g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall\lambda\in\mathcal{R}\), \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\). The above arguments imply that, for any \(\lambda\in\mathcal{R}\) and \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\),
$$g(t,y,\lambda z)=g(t,\lambda z)=\lambda g(t,z),\quad \mathrm{d}P\times\mathrm{d}t\text{-a.s.} \tag{3.8} $$
On the other hand, by Lemma 3.3, we have, for any \((y,z),(\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\),
$$g(t,y+\overline{y},z+\overline{z})=g(t,z+\overline{z})=g(t,z)+g(t,\overline{z}),\quad \mathrm{d}P\times\mathrm{d}t\text{-a.s.} \tag{3.9} $$
It follows from (3.8) and (3.9) that (iii) holds true. The proof of Theorem 3.2 is complete. □
From Theorem 3.1(iii), we know that, for any \(y\in\mathcal{R}\), \(g(t,y,0)=g(t,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Hence, \({\mathcal{E}}^{g}_{s,t}[\cdot]={\mathcal{E}}_{g}[\cdot|{\mathcal {F}}_{s}]\). Thus, Theorem 3.1 can be rewritten as follows.
Corollary 3.1
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is a g-evaluation, then the following four statements are equivalent:
- (i) Jensen’s inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi(x): \mathcal{R}\mapsto\mathcal{R}\) and each \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in{\mathcal {L}}({\mathcal{F}}_{t})\), then we have
$${\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi]\bigr) \quad \textit{a.s.}; $$
- (ii) \(\forall(\xi,a,b)\in L^{2}({\mathcal{F}}_{T})\times \mathcal{R}\times\mathcal{R}\), \({\mathcal{E}}^{g}_{0,T}[a\xi+b]\geq a{\mathcal {E}}^{g}_{0,T}[\xi]+b\), and, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.;
- (iii) \(\forall(\xi,a,b)\in L^{2}({\mathcal{F}}_{t})\times \mathcal{R}\times\mathcal{R}\), \({\mathcal{E}}^{g}_{s,t}[a\xi+b]\geq a{\mathcal {E}}^{g}_{s,t}[\xi]+b\) a.s.;
- (iv) g is independent of y and super-homogeneous with respect to z.
Similarly, Theorem 3.2 can be rewritten as follows.
Corollary 3.2
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is a g-evaluation, then the following four statements are equivalent:
-
(i)
the n-dimensional (\(n>1\)) Jensen inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal {R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in{\mathcal{L}}({\mathcal{F}}_{t})\), then we have
$${\mathcal{E}}^{g}_{s,t} \bigl[\varphi(\xi_{1}, \xi_{2},\ldots,\xi_{n}) \bigr]\geq \varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi_{1}],{\mathcal{E}}^{g}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}^{g}_{s,t}[ \xi_{n}] \bigr) \quad \textit{a.s.}; $$
-
(ii)
\({\mathcal{E}}^{g}_{0,T}\) is linear in \(L^{2}({\mathcal{F}}_{T})\) and, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.;
-
(iii)
\({\mathcal{E}}^{g}_{s,t}\) is linear in \(L^{2}({\mathcal{F}}_{t})\);
-
(iv)
for each \((y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y,z)=g(t,z)=\alpha_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., where α is an \(\mathcal{R}^{d}\)-valued progressively measurable process.
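In the classical special case \(g\equiv0\), the g-evaluation reduces to the conditional expectation, which is linear, so by Corollary 3.2 the n-dimensional Jensen inequality of statement (i) must hold. The following Monte Carlo sketch (an illustration we add here; the distributions are arbitrary choices) checks this with the convex function \(\varphi(x_{1},x_{2})=\max(x_{1},x_{2})\):

```python
import numpy as np

# Monte Carlo sanity check of the 2-dimensional Jensen inequality in the
# classical special case g = 0, where E^g reduces to (conditional)
# expectation.  phi(x1, x2) = max(x1, x2) is convex on R^2.
rng = np.random.default_rng(1)
xi1 = rng.normal(0.0, 1.0, size=200_000)
xi2 = rng.exponential(1.0, size=200_000)

lhs = np.maximum(xi1, xi2).mean()     # E[phi(xi1, xi2)]
rhs = max(xi1.mean(), xi2.mean())     # phi(E[xi1], E[xi2])
assert lhs >= rhs                     # Jensen: E[phi] >= phi(E)
```

Since \(\max(\xi_{1},\xi_{2})\geq\xi_{2}\) pointwise, the check holds on every sample, not only in the limit.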
Proof of Corollary 3.1
From Proposition 3.3 and Theorem 3.1, we only need to prove (ii) ⇔ (iii). It is obvious that (iii) implies (ii).
In the following, we prove that (ii) implies (iii). Suppose (ii) holds. For each \((X,t,k)\in L^{2}({\mathcal{F}}_{T} )\times[0, T]\times\mathcal{R}\), by (ii), we know that for each \(A\in{\mathcal{F}}_{t}\),
Thus
For each \(\lambda\neq0\), define \({\mathcal {E}}^{\lambda}_{t,T}[\cdot]:=\frac{{\mathcal {E}}^{g}_{t,T}[\lambda\cdot]}{\lambda}\), \(\forall t\in[0,T]\). It is easy to check that \({\mathcal{E}}^{g}_{t,T}[\cdot]\) and \({\mathcal {E}}^{\lambda}_{t,T}[\cdot]\) are two \({\mathcal{F}}\)-expectations in \(L^{2}({\mathcal{F}}_{T})\) (the notion of \({\mathcal{F}}\)-expectation can be found in Coquet et al. [20]). If \(\lambda>0\), for each \(\xi \in L^{2}({\mathcal{F}}_{T})\), \({\mathcal{E}}^{\lambda}_{0,T}[\xi]\geq{\mathcal {E}}^{g}_{0,T}[\xi]\). In a similar manner to Lemma 4.5 in Coquet et al. [20], we can obtain
If \(\lambda<0\), for each \(\xi\in L^{2}({\mathcal{F}}_{T})\), \({\mathcal {E}}^{\lambda}_{0,T}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi]\). In a similar manner to Lemma 4.5 in Coquet et al. [20] again, we have
From (3.11) and (3.12), we have, for any \((\xi,\lambda)\in L^{2}({\mathcal{F}}_{T})\times \mathcal{R}\),
From (3.10) and (3.13), we have, for any \((\xi,a,b)\in L^{2}({\mathcal {F}}_{T})\times \mathcal{R}\times\mathcal{R}\),
Since, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., we have
Therefore, (iii) holds true. The proof of Corollary 3.1 is complete. □
Proof of Corollary 3.2
From Proposition 3.3 and Theorem 3.2, we only need to prove (ii) ⇔ (iii). It is obvious that (iii) implies (ii).
In the following, we prove that (ii) implies (iii). Suppose (ii) holds. By Proposition 3.3, we know that for each sequence \(\{X_{n}\}_{n=1}^{\infty}\subset L^{2}({\mathcal{F}}_{T})\) such that \(X_{n}(\omega)\downarrow0\) for all ω, \({\mathcal {E}}^{g}_{0,T}[X_{n}]\downarrow0\). By the well-known Daniell-Stone theorem (cf., e.g., Yan [34], Theorem 3.6.8, p.83), there exists a unique probability measure \(P_{\alpha}\) defined on \((\Omega,{\mathcal {F}}_{T})\) such that
holds. Indeed, from (iv), we know that \(\frac{\mathrm{d}P_{\alpha}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\alpha_{t}\cdot\mathrm{d}B_{t}-\frac{1}{2}\int_{0}^{T}|\alpha_{t}|^{2}\, \mathrm{d}t )\).
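The density \(\frac{\mathrm{d}P_{\alpha}}{\mathrm{d}P}\) above is the Doléans-Dade stochastic exponential, whose P-expectation equals 1. As a sanity check (an illustration we add here; the constant choice \(\alpha_{t}=0.5\) on \([0,1]\) is purely for demonstration), the following Python sketch verifies this numerically on discretized Brownian paths:

```python
import numpy as np

# Monte Carlo check that the Girsanov density
#   dP_alpha/dP = exp( int_0^T alpha_t dB_t - (1/2) int_0^T |alpha_t|^2 dt )
# is a probability density, i.e. has P-expectation 1.  We take the
# deterministic choice alpha_t = 0.5 on [0, 1] purely for illustration.
rng = np.random.default_rng(2)
n_paths, n_steps, T = 200_000, 100, 1.0
dt = T / n_steps
alpha = 0.5

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
stoch_int = alpha * dB.sum(axis=1)   # int_0^T alpha dB (alpha constant)
density = np.exp(stoch_int - 0.5 * alpha**2 * T)
assert abs(density.mean() - 1.0) < 0.01
```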
On the other hand, since, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., we can obtain
It follows from (3.14) and (3.15) that
Therefore, \({\mathcal {E}}^{g}_{s,t}\) is linear in \(L^{2}({\mathcal{F}}_{t})\). The proof of Corollary 3.2 is complete. □
From Corollary 3.2, we can immediately obtain the following.
Theorem 3.3
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following two statements are equivalent:
-
(i)
\({\mathcal{E}}^{g}_{s,t}\) is linear in \({\mathcal {L}}({\mathcal{F}}_{t})\);
-
(ii)
there exists a unique probability measure \(P_{\alpha}\) defined on \((\Omega,{\mathcal{F}}_{T})\) such that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
$${\mathcal{E}}^{g}_{s,t}[\xi]=E_{P_{\alpha}}[\xi|{ \mathcal{F}}_{s}] \quad \textit{a.s.} $$
The following result can be seen as an extension of Theorem 3.3.
Theorem 3.4
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following two statements are equivalent:
-
(i)
\({\mathcal{E}}^{g}_{s,t}\) is sublinear in \({\mathcal{L}}({\mathcal{F}}_{t})\), i.e.,
-
(f)
\({\mathcal{E}}^{g}_{s,t}[\lambda X]=\lambda{\mathcal {E}}^{g}_{s,t}[X]\) a.s., for any \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\) and \(\lambda\geq0\);
-
(g)
\({\mathcal{E}}^{g}_{s,t}[X+Y]\leq{\mathcal {E}}^{g}_{s,t}[X]+{\mathcal{E}}^{g}_{s,t}[Y]\) a.s., for any \((X,Y)\in {\mathcal{L}}({\mathcal{F}}_{t})\times{\mathcal{L}}({\mathcal{F}}_{t})\);
-
(h)
\({\mathcal{E}}^{g}_{s,t}[\mu]=\mu\) a.s., for any \(\mu\in\mathcal{R}\);
-
(ii)
for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
$${\mathcal{E}}^{g}_{s,t}[\xi]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[ \xi|{\mathcal{F}}_{s}] \quad \textit{a.s.}, $$
where Λ is the set of probability measures on \((\Omega,{\mathcal{F}}_{T})\) defined by
$$\Lambda:=\bigl\{ Q_{\theta}:E_{Q_{\theta}}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi],\forall\xi\in{\mathcal{L}}({ \mathcal{F}}_{T})\bigr\} . $$
Proof
It is obvious that (ii) implies (i).
In the following, we prove that (i) implies (ii). Suppose (i) holds. Since \({\mathcal{E}}^{g}_{0,T}[\cdot]\) is a sublinear expectation in \({\mathcal{L}}({\mathcal{F}}_{T})\), by Lemma 2.4 in Peng [35], we know that there exists a family of linear expectations \(\{E_{\theta}:\theta\in\Theta\}\) on \((\Omega,{\mathcal {F}}_{T})\) such that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\),
On the other hand, by Proposition 3.3, we know that for each sequence \(\{X_{n}\}_{n=1}^{\infty}\subset{\mathcal{L}}({\mathcal{F}}_{T})\) such that \(X_{n}(\omega)\downarrow0\) for all ω, \({\mathcal {E}}^{g}_{0,T}[X_{n}]\downarrow0\). By the well-known Daniell-Stone theorem, we can deduce that for each \(\theta\in\Theta\) and \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\), there exists a unique probability measure \(Q_{\theta}\) defined on \((\Omega,{\mathcal{F}}_{T})\) such that
It follows from (3.16) and (3.17) that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\),
Let Π be a set of probability measures on \((\Omega,{\mathcal {F}}_{T})\) defined by
where \(\Theta^{g}\) := {\((\alpha_{t})_{t\in[0,T]}:\alpha\) is \(\mathcal{R}^{d}\)-valued, progressively measurable and, for any \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\), \(\alpha_{t}\cdot z\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.}. To prove (ii), it suffices to show that \(\Pi=\Lambda\).
For any \(\alpha\in\Theta^{g}\), we define \(g^{\alpha}(t,y,z):=\alpha_{t}\cdot z\), \(\forall t\in[0,T]\), \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\). Then, for any \(\xi\in{\mathcal {L}}({\mathcal{F}}_{T})\), by the well-known Girsanov theorem, we can deduce that
Since, for any \((y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), \(\alpha_{t}\cdot z=g^{\alpha}(t,y,z)\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., it follows from the well-known comparison theorem for BSDEs that \(E_{P_{\alpha}}[\xi]={\mathcal {E}}^{g^{\alpha}}_{0,T}[\xi]\leq{\mathcal{E}}^{g}_{0,T}[\xi]\). Hence \(\Pi\subseteq\Lambda\).
Next let us prove that \(\Lambda\subseteq\Pi\). For each \(Q_{\theta}\in\Lambda\), since \(E_{Q_{\theta}}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi]\) for all \(\xi\in L^{2}({\mathcal{F}}_{T})\), we have, for any \(\xi, \eta\in L^{2}({\mathcal{F}}_{T})\),
Denote \(g^{\beta}(t,y,z):=\beta(t)|z|\), \(\forall t\in[0,T]\), \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\). From Lemmas 3.1 and 3.2 and applying the well-known comparison theorem for BSDEs again, we have
From (3.19) and (3.20), we can deduce that \(E_{Q_{\theta}}[\xi+\eta]-E_{Q_{\theta}}[\eta]\leq{\mathcal {E}}_{g^{\beta}}[\xi]\). Then, in a similar manner to Theorem 7.1 in Coquet et al. [20], we know that there exists a unique function \(g^{\theta}\) defined on \(\Omega\times[0,T]\times\mathcal{R}\times \mathcal{R}^{d}\) satisfying the following three conditions:
-
(H.1)
\(g^{\theta}(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall y\in\mathcal{R}\);
-
(H.2)
\(|g^{\theta}(t,y_{1},z_{1})-g^{\theta}(t,y_{2},z_{2})|\leq\beta(t)|z_{1}-z_{2}|\), \(\forall(y_{1},z_{1}), (y_{2},z_{2})\in\mathcal{R}\times\mathcal{R}^{d}\), where \(\beta(t)\) is a non-negative deterministic function satisfying that \(\int_{0}^{T}\beta^{2}(t)\, \mathrm{d}t<\infty\);
-
(H.3)
\({\mathcal{E}}_{g^{\theta}}[\xi|{\mathcal {F}}_{t}]=E_{Q_{\theta}}[\xi|{\mathcal{F}}_{t}]\) a.s., \(\forall\xi\in L^{2}({\mathcal{F}}_{T})\).
It follows from the linearity of \(({\mathcal {E}}_{g^{\theta}}[\cdot|{\mathcal{F}}_{t}] )_{t\in[0,T]}\) and Theorem 3.2 that \(g^{\theta}\) is linear with respect to z. Therefore, there exists an \(\mathcal{R}^{d}\)-valued progressively measurable process \((\theta_{t})_{t\in[0,T]}\) such that \(g^{\theta}(t,y,z)=\theta_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times \mathcal{R}^{d}\). In view of \(Q_{\theta}\in\Lambda\) and (H.3), we have, for each \(\xi\in L^{2}({\mathcal{F}}_{T})\), \({\mathcal {E}}_{g^{\theta}}[\xi]=E_{Q_{\theta}}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi]\). Then, in a similar manner to Lemma 4.5 in Coquet et al. [20] and by Lemma 3.4, we can obtain \(g^{\theta}(t,y,z)=\theta_{t}\cdot z\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal {R}^{d}\). For this θ, we define the probability measure \(P_{\theta}\) by \(\frac{\mathrm{d}P_{\theta}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\theta_{t}\cdot\mathrm{d}B_{t}-\frac{1}{2}\int_{0}^{T}|\theta_{t}|^{2}\, \mathrm{d}t )\); then \(P_{\theta}\in\Pi\) and \(E_{P_{\theta}}[\xi]={\mathcal {E}}_{g^{\theta}}[\xi]=E_{Q_{\theta}}[\xi]\), \(\forall\xi\in L^{2}({\mathcal {F}}_{T})\). Hence \(Q_{\theta}=P_{\theta}\in\Pi\), and thus \(\Lambda\subseteq\Pi\). Therefore, \(\Pi=\Lambda\).
Finally, we prove that, for any \(s, t\in[0,T]\) satisfying \(s\leq t\) and \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), \({\mathcal {E}}^{g}_{s,t}[\xi]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[\xi |{\mathcal{F}}_{s}]\) a.s. It follows from (H.3), the well-known comparison theorem for BSDEs, and Proposition 3.3 that
Hence, for any \(s, t\in[0,T]\) satisfying \(s\leq t\) and \(\xi\in{\mathcal{L}}({\mathcal {F}}_{t})\),
On the other hand, by Lemmas 3.1, 3.2, and 3.3, we can deduce that g is independent of y and is positively homogeneous and sub-additive with respect to z. For any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\), let \((Y^{\xi}_{t},Z^{\xi}_{t} )_{t\in[0,T]}\) denote the solution of the following BSDE:
By a measurable selection theorem (cf., e.g., El Karoui and Quenez [21], p.215), we can deduce that there exists a progressively measurable process \(\alpha^{\xi}\in\Theta^{g}\) such that
From (3.22) and applying the well-known Girsanov theorem, we have \({\mathcal{E}}^{g}_{s,t}[\xi]={\mathcal {E}}^{g}_{s,T}[\xi]=E_{P_{\alpha^{\xi}}}[\xi|{\mathcal{F}}_{s}]\) a.s. Hence, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
It follows from (3.21) and (3.23) that
The proof of Theorem 3.4 is complete. □
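The representation in Theorem 3.4 says that a sublinear g-evaluation is a supremum of linear expectations over a set of probability measures. The following finite-state Python sketch (an illustration we add here, with a hand-picked set of measures standing in for Λ) checks that such a supremum indeed satisfies properties (f), (g), and (h):

```python
import numpy as np

# Finite-state illustration of the representation in Theorem 3.4: the
# supremum of linear expectations over a set of probability measures is a
# sublinear expectation.  Lambda below is a hand-picked finite set of
# probability vectors on 3 states (a toy stand-in for the set in the theorem).
Lambda = np.array([[0.2, 0.3, 0.5],
                   [0.4, 0.4, 0.2],
                   [1/3, 1/3, 1/3]])

def E_sub(xi):
    """sup over Q in Lambda of E_Q[xi] for a payoff vector xi."""
    return float(np.max(Lambda @ xi))

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 1.5, -1.0])
# (f) positive homogeneity, (g) sub-additivity, (h) constant preserving:
assert np.isclose(E_sub(2.0 * X), 2.0 * E_sub(X))
assert E_sub(X + Y) <= E_sub(X) + E_sub(Y) + 1e-12
assert np.isclose(E_sub(np.full(3, 7.0)), 7.0)
```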
4 Hölder’s inequality and Minkowski’s inequality for g-evaluations
In this section, we give a sufficient condition on g under which Hölder’s inequality and Minkowski’s inequality for g-evaluations hold true.
First, we give the following lemma.
Lemma 4.1
Suppose that the function g satisfies (B.1) and (B.2) and, in addition, the following conditions:
-
(i)
for any \(y_{1}\geq0\), \(y_{2}\geq0\), and \((z_{1},z_{2})\in\mathcal{R}^{d}\times\mathcal{R}^{d}\),
$$g(t,y_{1}+y_{2},z_{1}+z_{2})\leq g(t,y_{1},z_{1})+g(t,y_{2},z_{2}), \quad \mathrm{d}P\times \mathrm{d}t\textit{-a.s.}; $$
-
(ii)
for any \(\lambda\geq0\), \(y\geq0\), and \(z\in \mathcal{R}^{d}\),
$$g(t,\lambda y,\lambda z)\leq\lambda g(t,y,z),\quad \mathrm{d}P\times\mathrm{d}t \textit{-a.s.}, $$
then \({\mathcal{E}}^{g}_{s,t}[\cdot]\) satisfies the following conditions:
-
(j)
\({\mathcal{E}}^{g}_{s,t}[\xi+\eta]\leq{\mathcal {E}}^{g}_{s,t}[\xi]+{\mathcal{E}}^{g}_{s,t}[\eta]\) a.s., for any \((\xi,\eta)\in{\mathcal{L}}_{+}({\mathcal{F}}_{t})\times{\mathcal {L}}_{+}({\mathcal{F}}_{t})\);
-
(k)
\({\mathcal{E}}^{g}_{s,t}[\lambda\xi]=\lambda{\mathcal {E}}^{g}_{s,t}[\xi]\) a.s., for any \(\xi\in{\mathcal{L}}_{+}({\mathcal{F}}_{t})\) and \(\lambda\geq0\).
The key idea of the proof of Lemma 4.1 is the well-known comparison theorem for BSDEs; since the argument is very similar to that of Proposition 4.2 in Jia [26], we omit it.
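The hypotheses (i) and (ii) of Lemma 4.1 are easy to verify for concrete generators. For instance (a hypothetical example of ours, not taken from the paper), \(g(t,y,z)=\kappa|z|\) with \(\kappa>0\) is independent of y, sub-additive by the triangle inequality, and positively homogeneous. The following Python sketch checks both conditions on random samples:

```python
import numpy as np

# Numerical check that the concrete generator g(t, y, z) = kappa*|z|
# (a hypothetical example, independent of y) satisfies conditions (i) and
# (ii) of Lemma 4.1: sub-additivity and positive homogeneity in (y, z).
rng = np.random.default_rng(3)
kappa = 2.0

def g(y, z):
    return kappa * np.abs(z)

y1, y2 = rng.uniform(0, 5, 1000), rng.uniform(0, 5, 1000)
z1, z2 = rng.normal(size=1000), rng.normal(size=1000)
lam = rng.uniform(0, 5, 1000)

# (i) g(y1+y2, z1+z2) <= g(y1, z1) + g(y2, z2)   (triangle inequality)
assert np.all(g(y1 + y2, z1 + z2) <= g(y1, z1) + g(y2, z2) + 1e-12)
# (ii) g(lam*y, lam*z) <= lam*g(y, z)            (here with equality)
assert np.allclose(g(lam * y1, lam * z1), lam * g(y1, z1))
```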
Applying Lemma 4.1 and Theorems 2.3 and 2.4, we immediately have the following Hölder inequality and Minkowski inequality for g-evaluations.
Theorem 4.1
Let g satisfy the conditions of Lemma 4.1. Then, for any \(X,Y\in{\mathcal{L}}({\mathcal{F}}_{t})\) with \(|X|^{p}, |Y|^{q}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(p, q>1\) and \(1/p+1/q=1\)), we have
$${\mathcal{E}}^{g}_{s,t}\bigl[|XY|\bigr]\leq \bigl({\mathcal{E}}^{g}_{s,t}\bigl[|X|^{p}\bigr] \bigr)^{1/p} \bigl({\mathcal{E}}^{g}_{s,t}\bigl[|Y|^{q}\bigr] \bigr)^{1/q} \quad \textit{a.s.} $$
Theorem 4.2
Let g satisfy the conditions of Lemma 4.1. Then, for any \(X, Y\in{\mathcal{L}}({\mathcal{F}}_{t})\) with \(|X|^{p},|Y|^{p}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(p>1\)), we have
$$\bigl({\mathcal{E}}^{g}_{s,t}\bigl[|X+Y|^{p}\bigr] \bigr)^{1/p}\leq \bigl({\mathcal{E}}^{g}_{s,t}\bigl[|X|^{p}\bigr] \bigr)^{1/p}+ \bigl({\mathcal{E}}^{g}_{s,t}\bigl[|Y|^{p}\bigr] \bigr)^{1/p} \quad \textit{a.s.} $$
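In the classical special case \(g\equiv0\), the g-evaluation is the ordinary expectation and Theorems 4.1 and 4.2 reduce to the classical Hölder and Minkowski inequalities. The following Monte Carlo sketch (an illustration we add here; the distributions and exponents are arbitrary choices) verifies both on random samples, where they hold exactly for the empirical measure:

```python
import numpy as np

# Monte Carlo check of Theorems 4.1 and 4.2 in the classical special case
# g = 0, where the g-evaluation is the ordinary expectation.
rng = np.random.default_rng(4)
X = rng.normal(size=100_000)
Y = rng.exponential(size=100_000)
p, q = 3.0, 1.5                      # conjugate exponents: 1/p + 1/q = 1

# Hoelder:   E[|XY|] <= E[|X|^p]^(1/p) * E[|Y|^q]^(1/q)
assert np.mean(np.abs(X * Y)) <= \
    np.mean(np.abs(X)**p)**(1/p) * np.mean(np.abs(Y)**q)**(1/q)
# Minkowski: E[|X+Y|^p]^(1/p) <= E[|X|^p]^(1/p) + E[|Y|^p]^(1/p)
assert np.mean(np.abs(X + Y)**p)**(1/p) <= \
    np.mean(np.abs(X)**p)**(1/p) + np.mean(np.abs(Y)**p)**(1/p)
```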
References
Peng, SG: Dynamical evaluations. C. R. Acad. Sci. Paris, Ser. I 339, 585-589 (2004)
Peng, SG: Dynamically consistent nonlinear evaluations and expectations (2005). arXiv:math.PR/0501415v1
Pardoux, E, Peng, SG: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55-61 (1990)
Hu, Y, Peng, SG: Solution of forward-backward stochastic differential equations. Probab. Theory Relat. Fields 103, 273-283 (1995)
Lepeltier, JP, San Martin, J: Backward stochastic differential equations with continuous coefficient. Stat. Probab. Lett. 32, 425-430 (1997)
El Karoui, N, Peng, SG, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7, 1-71 (1997)
Pardoux, E: Generalized discontinuous BSDEs. In: El Karoui, N, Mazliak, L (eds.) Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series, vol. 364, pp. 207-219. Longman, Harlow (1997)
Pardoux, E: BSDEs, weak convergence and homogenization of semilinear PDEs. In: Nonlinear Analysis, Differential Equations and Control (Montreal, QC, 1998), pp. 503-549. Kluwer Academic, Dordrecht (1998)
Briand, P, Delyon, B, Hu, Y, Pardoux, E, Stoica, L: \(L^{p}\) Solutions of backward stochastic differential equations. Stoch. Process. Appl. 108, 109-129 (2003)
Chen, ZJ, Wang, B: Infinite time interval BSDEs and the convergence of g-martingales. J. Aust. Math. Soc. A 69, 187-211 (2000)
Zong, ZJ: \(L^{p}\) Solutions of infinite time interval BSDEs and the corresponding g-expectations and g-martingales. Turk. J. Math. 37, 704-718 (2013)
Chen, ZJ, Epstein, L: Ambiguity, risk and asset returns in continuous time. Econometrica 70, 1403-1443 (2002)
Peng, SG: Dynamically nonlinear consistent evaluations and expectations. Lecture notes presented in Weihai Summer School, Weihai (2004)
Peng, SG: Filtration consistent nonlinear expectations and evaluations of contingent claims. Acta Math. Appl. Sinica (Engl. Ser.) 20, 191-214 (2004)
Peng, SG: Modelling derivatives pricing mechanism with their generating functions (2006). arXiv:math.PR/0605599v1
Rosazza Gianin, E: Risk measures via g-expectations. Insur. Math. Econ. 39, 19-34 (2006)
Briand, P, Coquet, F, Hu, Y, Mémin, J, Peng, SG: A converse comparison theorem for BSDEs and related properties of g-expectation. Electron. Commun. Probab. 5, 101-117 (2000)
Chen, ZJ, Kulperger, R, Jiang, L: Jensen’s inequality for g-expectation: part 1. C. R. Acad. Sci. Paris, Ser. I 337, 725-730 (2003)
Chen, ZJ, Kulperger, R, Jiang, L: Jensen’s inequality for g-expectation: part 2. C. R. Acad. Sci. Paris, Ser. I 337, 797-800 (2003)
Coquet, F, Hu, Y, Mémin, J, Peng, SG: Filtration-consistent nonlinear expectations and related g-expectations. Probab. Theory Relat. Fields 123, 1-27 (2002)
El Karoui, N, Quenez, MC: Non-linear pricing theory and backward stochastic differential equations. In: Runggaldier, WJ (ed.) Financial Mathematics. Lecture Notes in Mathematics, vol. 1656, pp. 191-246. Springer, Heidelberg (1996)
Fan, SJ: Jensen’s inequality for filtration consistent nonlinear expectation without domination condition. J. Math. Anal. Appl. 345, 678-688 (2008)
Hu, F: Dynamically consistent nonlinear evaluations with their generating functions in \(L^{p}\). Acta Math. Sin. Engl. Ser. 29, 815-832 (2013)
Hu, F, Chen, ZJ: Generalized g-expectations and related properties. Stat. Probab. Lett. 80, 191-195 (2010)
Hu, Y: On Jensen’s inequality for g-expectation and for nonlinear expectation. Arch. Math. 85, 572-680 (2005)
Jia, GY: On Jensen’s inequality and Hölder’s inequality for g-expectation. Arch. Math. 94, 489-499 (2010)
Jiang, L: Jensen’s inequality for backward stochastic differential equation. Chin. Ann. Math., Ser. B 27, 553-564 (2006)
Jiang, L, Chen, ZJ: On Jensen’s inequality for g-expectation. Chin. Ann. Math., Ser. B 25, 401-412 (2004)
Peng, SG: Backward SDE and related g-expectation. In: El Karoui, N, Mazliak, L (eds.) Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series, vol. 364, pp. 141-159. Longman, Harlow (1997)
Peng, SG: Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer’s type. Probab. Theory Relat. Fields 113, 473-499 (1999)
Zong, ZJ: Jensen’s inequality for generalized Peng’s g-expectations and its applications. Abstr. Appl. Anal. 2013, Article ID 683047 (2013)
Krein, SG, Petunin, YI, Semenov, EM: Interpolation of Linear Operators. Translations of Mathematical Monographs, vol. 54. Am. Math. Soc., Providence (1982) (Translated from the Russian by J Szucs)
Zong, ZJ, Hu, F: \(L^{p}\) Weak convergence method on BSDEs with non-uniformly Lipschitz coefficients and its applications (submitted)
Yan, JA: Lecture Note on Measure Theory, 2nd edn. Science Press, Beijing (2005) (Chinese version)
Peng, SG: Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations. Sci. China Ser. A 52, 1391-1411 (2009)
Acknowledgements
The authors would like to thank the anonymous referees for their careful reading of this paper, correction of errors, and valuable suggestions. The work of Zhaojun Zong, Feng Hu and Chuancun Yin is supported by the National Natural Science Foundation of China (Nos. 11301295 and 11171179), the Doctoral Program Foundation of Ministry of Education of China (Nos. 20123705120005 and 20133705110002), the Program for Scientific Research Innovation Team in Colleges and Universities of Shandong Province of China and the Program for Scientific Research Innovation Team in Applied Probability and Statistics of Qufu Normal University (No. 0230518). The work of Helin Wu is supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission (No. KJ1400922).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Zong, Z., Hu, F., Yin, C. et al. On Jensen’s inequality, Hölder’s inequality, and Minkowski’s inequality for dynamically consistent nonlinear evaluations. J Inequal Appl 2015, 152 (2015). https://doi.org/10.1186/s13660-015-0677-5