A monotonic refinement of Levinson’s inequality

Abstract

In this paper we give a monotonic refinement of the probabilistic version of Levinson’s inequality based on the monotonic refinement of Jensen’s inequality obtained by Cho et al. (Panam. Math. J. 12:43-50, 2002).

1 Introduction

Levinson’s inequality and its converse are summarized in the following result taken from Bullen [1].

Theorem 1.1

(a) If \(f:[a,b]\rightarrow\mathbb{R}\) is 3-convex and \(p_{i}\), \(x_{i}\), \(y_{i}\), \(i=1,2,\ldots, n\), are such that \(p_{i}>0\), \(\sum_{i=1}^{n}p_{i}=1\), \(a\leq x_{i},y_{i}\leq b\),

$$ \max(x_{1},\ldots,x_{n})\leq \min(y_{1},\ldots,y_{n}) $$
(1)

and

$$ x_{1} + y_{1} = x_{2} + y_{2} = \cdots= x_{n} + y_{n} = 2c $$
(2)

for some \(c\in[a,b]\), then

$$ \sum_{i=1}^{n}p_{i}f(x_{i})-f(\overline{x})\leq\sum_{i=1}^{n}p_{i}f(y_{i})-f(\overline{y}), $$
(3)

where \(\overline{x}=\sum_{i=1}^{n}p_{i}x_{i}\) and \(\overline{y}=\sum_{i=1}^{n}p_{i}y_{i}\) denote the weighted arithmetic means.

(b) If for a continuous function f inequality (3) holds for all n, all \(c\in[a,b]\), all 2n distinct points \(x_{i}, y_{i} \in [a,b]\) satisfying (1) and (2) and all weights \(p_{i}>0\) such that \(\sum_{i=1}^{n}p_{i}=1\), then f is 3-convex.

Levinson [2] originally proved the inequality for functions \(f: (0,2c) \to\mathbb{R}\) such that \(f''' \geq0\). Popoviciu [3] showed that the assumption of nonnegativity of the third derivative can be weakened to 3-convexity of f. Bullen [1] gave another proof of the inequality (rescaled to a general interval \([a,b]\)) as well as its converse given in part (b) of Theorem 1.1. Pečarić and Raşa [4] extended the inequality by using the method of index set functions; in the process they weakened assumption (1) and obtained a monotonic refinement of the inequality.

The above version of the inequality assumes that the sequences \(x_{i}\) and \(y_{i}\) are symmetrically distributed around the point c. Mercer [5] made a significant improvement by replacing this symmetry condition with the weaker assumption that the weighted variances of the two sequences are equal.

Theorem 1.2

If \(f:[a,b]\rightarrow\mathbb{R}\) satisfies \(f''' \geq0\) and \(p_{i}\), \(x_{i}\), \(y_{i}\), \(i = 1,2,\ldots,n\), are such that \(p_{i}>0\), \(\sum_{i=1}^{n}p_{i}=1\), \(a\leq x_{i},y_{i}\leq b\), (1) holds and

$$ \sum_{i=1}^{n}p_{i}(x_{i}- \overline{x})^{2}=\sum_{i=1}^{n}p_{i}(y_{i}- \overline{y})^{2}, $$

then (3) holds.
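As a quick illustration of Theorem 1.2, the following minimal Python sketch (ours, not from the paper; the helper names are our own) checks inequality (3) for \(f(t)=t^{3}\), where \(f''' = 6 \geq0\), on \([a,b]=[0,4]\) with \(c=2\). The shrinking step is just one convenient way to enforce equal weighted variances.

```python
import numpy as np

# Sanity check of Theorem 1.2 (illustrative sketch, not the authors' code).
# Take f(t) = t**3, so f''' = 6 >= 0, on [a, b] = [0, 4] with c = 2.
rng = np.random.default_rng(0)
f = lambda t: t**3

n = 5
p = rng.random(n)
p /= p.sum()                           # weights p_i > 0 with sum 1
x = rng.uniform(0.0, 2.0, n)           # x_i in [0, 2]
y = rng.uniform(2.0, 4.0, n)           # y_i in [2, 4], so condition (1) holds

def wmean(z):
    return p @ z

def wvar(z):
    return p @ (z - wmean(z)) ** 2

# Equalize the weighted variances by shrinking the more spread-out sample
# toward its mean; a convex combination stays inside the same interval.
if wvar(x) > wvar(y):
    x = wmean(x) + np.sqrt(wvar(y) / wvar(x)) * (x - wmean(x))
else:
    y = wmean(y) + np.sqrt(wvar(x) / wvar(y)) * (y - wmean(y))

lhs = p @ f(x) - f(wmean(x))           # sum_i p_i f(x_i) - f(xbar)
rhs = p @ f(y) - f(wmean(y))           # sum_i p_i f(y_i) - f(ybar)
assert lhs <= rhs + 1e-12              # inequality (3)
```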

Witkowski [6] extended Mercer’s result to 3-convex functions and a more general probabilistic setting. Baloch et al. [7] showed that inequality (3) holds for a larger class of functions they introduced and called 3-convex functions at a point.

Definition 1.3

Let I be an interval in \(\mathbb{R}\) and \(c\in I\). A function \(f: I\to\mathbb{R} \) is said to be 3-convex at point c if there exists a constant A such that the function \(F(s) = f(s) - \frac{A}{2} s^{2} \) is concave on \(I\cap(-\infty,c]\) and convex on \(I \cap[c ,\infty)\).
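To illustrate the definition (the examples here are ours): \(f(s)=s^{3}\) is 3-convex at the point \(c=0\) with \(A=0\), being concave on \((-\infty,0]\) and convex on \([0,\infty)\). A function 3-convex at a point need not be 3-convex: \(f(s)=\frac{s^{3}}{6}-\frac{s^{4}}{12}\) on \([-1,1]\) satisfies \(f''(s)=s-s^{2}\leq0\) on \([-1,0]\) and \(f''(s)\geq0\) on \([0,1]\), so it is 3-convex at \(c=0\) with \(A=0\), yet \(f'''(s)=1-2s\) changes sign on \([-1,1]\), so f is not 3-convex there.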

Baloch et al. [7] also proved the converse of the inequality, i.e., 3-convex functions at a point form the largest class of functions for which Levinson’s inequality holds under the equal variances assumption. The probabilistic version of Levinson’s inequality and its converse are summarized in the following result taken from Pečarić et al. [8].

Theorem 1.4

(a) Let \(f: [a,b] \to\mathbb{R}\) be 3-convex at point c and \(X:\Omega\to[a,c]\) and \(Y:\Omega\to[c,b]\) be two random variables such that \(\operatorname{Var}(X) = \operatorname{Var}(Y)\). Then

$$ \mathbb{E}\bigl(f(X)\bigr) - f\bigl(\mathbb{E}(X)\bigr) \leq \mathbb{E}\bigl(f(Y)\bigr) - f \bigl(\mathbb{E}(Y)\bigr). $$
(4)

(b) Let \(f:[a,b] \to\mathbb{R}\) be continuous and \(c\in(a,b)\) fixed. Suppose that inequality (4) holds for all discrete random variables X and Y taking two values \(x_{1}, x_{2} \in[a,c]\) and \(y_{1}, y_{2} \in[c,b]\), respectively, each with probability \(\frac{1}{2}\) and such that \(\operatorname{Var}(X) = \operatorname{Var}(Y)\) (i.e. \(|x_{2}- x_{1}| = |y_{2} - y_{1}|\)). Then f is 3-convex at c.

Remark 1.5

Results in [8] were stated for f defined on an arbitrary interval I. In that case, the finiteness of \(\operatorname{Var}(X)\), \(\operatorname{Var}(Y)\), \(\mathbb{E}[f(X)]\) and \(\mathbb{E}[f(Y)]\) needs to be assumed. For simplicity, in this paper we work with the closed interval \([a,b]\): in this case the function f and all random variables are bounded, so the aforementioned finiteness assumptions are automatically satisfied.

If X and Y are discrete random variables taking values \(x_{i}\) and \(y_{i}\), respectively, with probabilities \(p_{i}\), then Theorem 1.4(a) gives Theorem 1.2. In [8] it was proven that a function defined on an interval is 3-convex if and only if it is 3-convex at every point of the interval. Therefore, the converse stated in Theorem 1.4(b) strengthens the converse stated in Theorem 1.1(b).

Theorem 1.4 shows that 3-convex functions at a point are characterized by Levinson’s inequality in much the same way as convex functions are characterized by Jensen’s inequality. Cho et al. [9] constructed two mappings connected to Jensen’s inequality and proved their monotonicity and convexity properties.

Throughout the rest of the paper, Ω denotes a measurable space with a finite measure μ, and we assume all mappings to be measurable. Further, \(\mathbb{E}[\cdot]\) and \(\operatorname{Var}(\cdot)\) denote the expectation and variance operators with respect to the probability measure \(\frac{1}{\mu(\Omega)} \mu\), i.e., for \(z:\Omega\to \mathbb{R}\),

$$\begin{aligned} &\mathbb{E}[z] = \frac{1}{\mu(\Omega)} \int_{\Omega} z(s)\,d\mu(s), \\ &\operatorname{Var}[z] = \frac{1}{\mu(\Omega)} \int_{\Omega} \bigl(z(s) - \mathbb{E}[z] \bigr)^{2}\,d\mu(s) = \mathbb{E}\bigl[z^{2}\bigr] - \mathbb{E}^{2}[z]. \end{aligned}$$

The following is a result from [9].

Theorem 1.6

Let \(f:[a,b]\to\mathbb{R}\) be convex, let \(x: \Omega\to[a,b]\), and let \(H,V: [0,1]\to\mathbb{R}\) be the mappings

$$H(t) = \frac{1}{\mu(\Omega)}\int_{\Omega} f\bigl(t x(s) + (1-t) \mathbb{E}[x]\bigr)\,d\mu(s) $$

and

$$V(t) = \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int_{\Omega} f\bigl(t x(s) + (1-t) x(u)\bigr)\,d\mu(s)\,d\mu(u). $$

Then:

(a) the mappings H and V are convex on \([0,1]\),

(b) the mapping H is nondecreasing on \([0,1]\), while the mapping V is nonincreasing on \([0,\frac{1}{2} ]\) and nondecreasing on \([\frac{1}{2} ,1]\),

(c) the following equalities hold:

$$\begin{aligned} & \inf_{t\in[0,1]} H(t) = H(0) = f\bigl(\mathbb{E}[x]\bigr), \\ & \sup_{t\in[0,1]} H(t) = H(1) = \mathbb{E}\bigl[f(x)\bigr], \\ & \inf_{t\in[0,1]} V(t) = V\biggl(\frac{1}{2} \biggr) = \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int_{\Omega} f \biggl(\frac{ x(s) + x(u)}{2} \biggr)\,d\mu(s)\,d\mu(u), \\ & \sup_{t\in[0,1]} V(t) = V(0)=V(1) = \mathbb{E}\bigl[f(x)\bigr], \end{aligned}$$

(d) the following inequality holds for all \(t\in[0,1]\):

$$V(t) \geq\max\bigl\{ H(t), H(1-t)\bigr\} . $$

Remark 1.7

Theorem 1.6 was proven in [9] for the case when Ω is an interval in \(\mathbb{R}\) and μ is a measure with density, i.e., \(d\mu(s) = p(s)\,ds\). However, the proofs given there carry over directly to the more general setting considered here.

If we denote \(x_{(t)} (s) = tx(s) + (1-t) \mathbb{E}[x]\), then \(H(t) = \mathbb{E}[f(x_{(t)})]\). As t ranges from 0 to 1, the function (i.e., random variable) \(x_{(t)}\) ranges from the constant \(\mathbb{E}[x]\) to the function x itself. In the process the expectation \(\mathbb{E}[f(x_{(t)})]\) increases by the monotonicity property from Theorem 1.6(b). Therefore, for \(0\leq s\leq t \leq1\), the following monotonic refinement of Jensen’s inequality holds:

$$f\bigl(\mathbb{E}[x]\bigr) = H(0) \leq \mathbb{E}\bigl[f(x_{(s)})\bigr] \leq \mathbb{E}\bigl[f(x_{(t)})\bigr] \leq H(1) = \mathbb{E}\bigl[f(x)\bigr]. $$

Furthermore, if \(x'\) and \(x''\) are two independent identically distributed ‘copies’ of x on the product space \(\Omega\times\Omega \), then \(V(t) = \mathbb{E}[f(\tilde{x}_{(t)})]\), where \(\tilde{x}_{(t)} = tx' + (1-t) x''\), and Theorem 1.6(d) can be interpreted as \(\mathbb{E}[f(x_{(t)})] \leq \mathbb{E}[f(\tilde{x}_{(t)})]\).
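In the discrete setting these properties are easy to verify numerically. The following sketch (ours, assuming \(\Omega=\{1,\ldots,n\}\) with weights \(\mu_{i}\), so that integrals become weighted sums) tabulates H and V on a grid in t and checks the monotonicity pattern of Theorem 1.6(b) and the bound of Theorem 1.6(d):

```python
import numpy as np

# Numerical check of Theorem 1.6 (our sketch): Omega = {1,...,n} with a
# finite measure mu, f convex; H and V are tabulated on a grid in t.
rng = np.random.default_rng(1)
f = np.exp                              # a convex function on the real line

n = 6
mu = rng.random(n)                      # a finite measure on Omega
w = mu / mu.sum()                       # weights of the measure (1/mu(Omega)) mu
x = rng.uniform(-1.0, 1.0, n)
Ex = w @ x

ts = np.linspace(0.0, 1.0, 101)
H = np.array([w @ f(t * x + (1 - t) * Ex) for t in ts])
# V(t) is a double integral over Omega x Omega, i.e. a weighted double sum.
V = np.array([np.einsum('s,u,su->', w, w,
                        f(t * x[:, None] + (1 - t) * x[None, :]))
              for t in ts])

assert np.all(np.diff(H) >= -1e-12)                  # H nondecreasing
assert np.all(np.diff(V[ts <= 0.5]) <= 1e-12)        # V nonincreasing on [0, 1/2]
assert np.all(np.diff(V[ts >= 0.5]) >= -1e-12)       # V nondecreasing on [1/2, 1]
assert np.all(V >= np.maximum(H, H[::-1]) - 1e-12)   # V >= max{H(t), H(1-t)}
```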

In this paper we will construct the corresponding two mappings in connection with Levinson’s inequality and show their monotonicity and convexity properties.

2 Main results

The following is our main result.

Theorem 2.1

Let \(f: [a,b] \to\mathbb{R}\) be 3-convex at point c, let \(x:\Omega\to [a,c]\) and \(y:\Omega\to[c,b]\) be such that \(\operatorname{Var}(x)= \operatorname{Var}(y)\), and let \(H,V: [0,1] \to\mathbb{R}\) be the mappings

$$ H(t) = \frac{1}{\mu(\Omega)}\int_{\Omega} \bigl[ f\bigl(t y(s) + (1-t) \mathbb{E}[y]\bigr) - f\bigl(t x(s) + (1-t) \mathbb{E}[x]\bigr) \bigr]\,d\mu(s) $$

and

$$\begin{aligned} V(t) = \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int _{\Omega} \bigl[ f\bigl(t y(s) + (1-t) y(u)\bigr) - f\bigl(t x(s) + (1-t)x(u)\bigr) \bigr]\,d\mu(s)\,d\mu(u). \end{aligned}$$

Then:

(a) the mappings H and V are convex on \([0,1]\),

(b) the mapping H is nondecreasing on \([0,1]\), while the mapping V is nonincreasing on \([0,\frac{1}{2} ]\) and nondecreasing on \([\frac{1}{2} ,1]\),

(c) the following equalities hold:

$$\begin{aligned}& \inf_{t\in[0,1]} H(t) = H(0) = f\bigl(\mathbb{E}[y]\bigr) - f\bigl(\mathbb{E}[x] \bigr), \\& \sup_{t\in[0,1]} H(t) = H(1) = \mathbb{E}\bigl[f(y)\bigr] - \mathbb{E}\bigl[f(x) \bigr], \\& \begin{aligned}[b] \inf_{t\in[0,1]} V(t) ={}& V\biggl( \frac{1}{2} \biggr) = \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int _{\Omega} \biggl[ f \biggl(\frac{ y(s) + y(u)}{2} \biggr) \\ & {} - f \biggl(\frac{ x(s) + x(u)}{2} \biggr) \biggr]\,d\mu(s)\,d\mu(u), \end{aligned} \\& \sup_{t\in[0,1]} V(t) = V(0) = V(1) = \mathbb{E}\bigl[f(y)\bigr] - \mathbb{E}\bigl[f(x)\bigr], \end{aligned}$$

(d) the following inequality holds for all \(t\in[0,1]\):

$$V(t) \geq\max\bigl\{ H(t), H(1-t)\bigr\} . $$

Proof

Let the constant A be as in Definition 1.3, i.e., such that the function \(F(s) = f(s) - \frac{A}{2} s^{2}\) is concave on \([a,c]\) and convex on \([c,b]\).

Since the function y takes values in \([c,b]\), so does the function \(y_{(t)} = ty + (1-t) \mathbb{E}[y]\) for every \(t\in[0,1]\). Furthermore, since the function F is convex on \([c,b]\), by Theorem 1.6 the mapping

$$H_{1}(t) = \frac{1}{\mu(\Omega)}\int_{\Omega} F\bigl(t y(s) + (1-t) \mathbb{E}[y]\bigr)\,d\mu(s) $$

is convex and nondecreasing on \([0,1]\). We have

$$\begin{aligned} H_{1} (t) ={}& \frac{1}{\mu(\Omega)}\int_{\Omega} f\bigl(t y(s) + (1-t) \mathbb{E}[y]\bigr)\,d\mu(s) \\ &{} - \frac{A}{2\mu(\Omega)}\int_{\Omega} \bigl(t y(s) + (1-t) \mathbb{E}[y] \bigr)^{2}\,d\mu(s) \\ ={}& \mathbb{E}\bigl[f(y_{(t)})\bigr] - \frac{A}{2}t^{2} \mathbb{E}\bigl[y^{2}\bigr] - At(1-t) \mathbb{E}^{2}[y] - \frac {A}{2} (1-t)^{2} \mathbb{E}^{2}[y] \\ ={}& \mathbb{E}\bigl[f(y_{(t)})\bigr]- \frac{A}{2}t^{2} \bigl( \mathbb{E}\bigl[y^{2}\bigr] - \mathbb{E}^{2}[y] \bigr) - \frac {A}{2} \mathbb{E}^{2}[y] \\ ={}& \mathbb{E}\bigl[f(y_{(t)})\bigr] - \frac{A}{2}t^{2} \operatorname{Var}(y) - \frac{A}{2} \mathbb{E}^{2}[y]. \end{aligned}$$

Similarly, the function \(x_{(t)} = tx + (1-t)\mathbb{E}[x]\) takes values in \([a,c]\) for every \(t\in[0,1]\) and −F is convex on \([a,c]\), so by Theorem 1.6 the mapping

$$H_{2}(t) = -\frac{1}{\mu(\Omega)}\int_{\Omega} F\bigl(t x(s) + (1-t) \mathbb{E}[x]\bigr)\,d\mu(s) $$

is convex and nondecreasing on \([0,1]\), and we have

$$\begin{aligned} H_{2} (t) ={}& {-}\frac{1}{\mu(\Omega)}\int_{\Omega} f \bigl(t x(s) + (1-t) \mathbb{E}[x]\bigr)\,d\mu(s) \\ &{} + \frac{A}{2\mu(\Omega)}\int_{\Omega} \bigl(t x(s) + (1-t) \mathbb{E}[x] \bigr)^{2}\,d\mu(s) \\ ={}& {-}\mathbb{E}\bigl[f(x_{(t)})\bigr] + \frac{A}{2}t^{2} \operatorname{Var}(x) + \frac{A}{2} \mathbb{E}^{2}[x]. \end{aligned}$$

Let us also denote the (constant) mapping \(H_{3} (t) = \frac{A}{2} ( \mathbb{E}^{2}[y] - \mathbb{E}^{2}[x] )\). All three of the mappings \(H_{i}\), \(i=1,2,3\), are convex and nondecreasing and, therefore, so is their sum. Since \(\operatorname{Var}(x)=\operatorname{Var}(y)\), we have \(H= H_{1} + H_{2} + H_{3}\), and this proves the convexity and monotonicity properties of H from parts (a) and (b), while the first two equalities in (c) follow by simple calculation.

As for the mapping V, first of all, it is easy to see that \(V(t) = V(1-t)\) for all \(t\in[0,1]\), that is, V is symmetric with respect to \(t=\frac{1}{2}\). Next, since y takes values in \([c,b]\) and F is convex on that interval, by Theorem 1.6 the mapping

$$V_{1}(t) = \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int _{\Omega} F\bigl(t y(s) + (1-t) y(u)\bigr)\,d\mu(s)\,d\mu(u) $$

is convex on \([0,1]\) and nondecreasing on \([\frac{1}{2}, 1]\). We have

$$\begin{aligned} V_{1} (t) ={}& \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int _{\Omega} f\bigl(t y(s) + (1-t) y(u)\bigr)\,d\mu(s)\,d\mu(u) \\ &{} - \frac{A}{2\mu(\Omega)^{2}}\int_{\Omega} \int_{\Omega} \bigl(t y(s) + (1-t) y(u)\bigr)^{2}\,d\mu(s)\,d\mu(u) \\ ={}& \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int_{\Omega} f\bigl(t y(s) + (1-t) y(u)\bigr)\,d\mu(s)\,d\mu(u) \\ &{} - \frac{A}{2}t^{2} \mathbb{E}\bigl[y^{2}\bigr] - At(1-t) \mathbb{E}^{2}[y] - \frac{A}{2} (1-t)^{2} \mathbb{E}\bigl[y^{2}\bigr] \\ ={}& \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int_{\Omega} f\bigl(t y(s) + (1-t) y(u)\bigr)\,d\mu(s)\,d\mu(u) \\ &{} + At(1-t) \bigl(\mathbb{E}\bigl[y^{2}\bigr] - \mathbb{E}^{2}[y] \bigr) - \frac{A}{2} \mathbb{E}\bigl[y^{2}\bigr] \\ ={}& \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int_{\Omega} f\bigl(t y(s) + (1-t) y(u)\bigr)\,d\mu(s)\,d\mu(u) \\ &{} + At(1-t)\operatorname{Var}(y) - \frac{A}{2} \mathbb{E}\bigl[y^{2}\bigr]. \end{aligned}$$

Similarly, since x takes values in \([a,c]\) and −F is convex on that interval, by Theorem 1.6 the mapping

$$V_{2}(t) = -\frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int _{\Omega} F\bigl(t x(s) + (1-t) x(u)\bigr)\,d\mu(s)\,d\mu(u) $$

is convex on \([0,1]\) and nondecreasing on \([\frac{1}{2}, 1]\) and we have

$$\begin{aligned} V_{2} (t) ={}& {-} \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int_{\Omega} f\bigl(t x(s) + (1-t) x(u)\bigr)\,d\mu(s)\,d\mu(u) \\ &{} -At(1-t)\operatorname{Var}(x) + \frac{A}{2} \mathbb{E}\bigl[x^{2}\bigr]. \end{aligned}$$

Let us also denote the (constant) mapping \(V_{3}(t) = \frac{A}{2} (\mathbb{E}[y^{2}] - \mathbb{E}[x^{2}] )\). All three of the mappings \(V_{i}\), \(i=1,2,3\), are convex and nondecreasing on \([\frac{1}{2},1]\) and, therefore, so is their sum. Since \(\operatorname{Var}(x)=\operatorname{Var}(y)\), we have \(V= V_{1} + V_{2} + V_{3}\). Furthermore, since V is symmetric around \(t=\frac{1}{2}\), it follows that it is nonincreasing on \([0,\frac{1}{2}]\), its minimum is attained at \(t=\frac{1}{2}\) and its maximum is attained at \(t=0\) and \(t=1\). This proves the convexity and monotonicity properties of V.

Finally, as for part (d), since V is symmetric around \(t=\frac{1}{2}\) and H is nondecreasing, it is enough to prove that \(V(t) \geq H(t)\) for \(t\in[\frac{1}{2},1]\). This inequality holds since \(V_{1} (t) \geq H_{1} (t)\) and \(V_{2} (t) \geq H_{2} (t)\) by Theorem 1.6(d), while \(V_{3}(t) = H_{3}(t)\) because \(\operatorname{Var}(x) =\operatorname{Var}(y)\). This finishes the proof. □

A monotonic refinement of Levinson’s inequality (4) based on Theorem 2.1 reads as follows: if \(x_{(t)}\) and \(y_{(t)}\), \(t\in[0,1]\), are as in the proof of Theorem 2.1, then \(H(t) =\mathbb{E}[f(y_{(t)})] - \mathbb{E}[f(x_{(t)})]\) and, for \(0\leq s \leq t \leq1\),

$$\begin{aligned} f\bigl(\mathbb{E}[y]\bigr) - f\bigl(\mathbb{E}[x]\bigr) &= H(0) \leq \mathbb{E}\bigl[f(y_{(s)}) \bigr] - \mathbb{E}\bigl[f(x_{(s)})\bigr] \\ &\leq \mathbb{E}\bigl[f(y_{(t)})\bigr] - \mathbb{E}\bigl[f(x_{(t)})\bigr] \leq H(1) = \mathbb{E}\bigl[f(y)\bigr] - \mathbb{E}\bigl[f(x)\bigr]. \end{aligned}$$
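This chain is again easy to observe numerically. A hedged Python illustration (ours, not the authors’ code; the helper names are our own) takes \(f(t)=t^{3}\), which is 3-convex at \(c=0\) in the sense of Definition 1.3 (with \(A=0\)), together with discrete x taking values in \([-1,0]\) and y taking values in \([0,1]\) of equal weighted variance:

```python
import numpy as np

# Illustration of the refinement above (our sketch): f(t) = t**3 is 3-convex
# at c = 0 (Definition 1.3 with A = 0); x has values in [-1, 0], y in [0, 1],
# and equal weighted variances are enforced by shrinking toward the mean.
rng = np.random.default_rng(2)
f = lambda t: t**3

n = 6
w = rng.random(n)
w /= w.sum()
x = rng.uniform(-1.0, 0.0, n)
y = rng.uniform(0.0, 1.0, n)

def m(z):
    return w @ z

def var(z):
    return w @ (z - m(z)) ** 2

if var(x) > var(y):
    x = m(x) + np.sqrt(var(y) / var(x)) * (x - m(x))
else:
    y = m(y) + np.sqrt(var(x) / var(y)) * (y - m(y))

ts = np.linspace(0.0, 1.0, 101)
H = np.array([w @ f(t * y + (1 - t) * m(y))
              - w @ f(t * x + (1 - t) * m(x)) for t in ts])
V = np.array([np.einsum('s,u,su->', w, w,
                        f(t * y[:, None] + (1 - t) * y[None, :])
                        - f(t * x[:, None] + (1 - t) * x[None, :]))
              for t in ts])

assert np.all(np.diff(H) >= -1e-12)                  # the monotonic chain above
assert np.all(V >= np.maximum(H, H[::-1]) - 1e-12)   # Theorem 2.1(d)
```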

Remark 2.2

The convexity and monotonicity properties of the mapping H in the case when x and y are two discrete random variables taking values \(x_{i}\) and \(y_{i}\), respectively, with probabilities \(p_{i}\), \(i=1,\ldots,n\), were proven in [7].

Remark 2.3

The assumption of equal variances in Theorem 2.1 can be weakened. If we denote \(B = A ( \operatorname{Var}(y) - \operatorname{Var}(x) )\), then the assumption \(\operatorname{Var}(x)=\operatorname{Var}(y)\) can be relaxed to \(B\geq0\). Indeed, what we have shown in the proof of Theorem 2.1 is that

$$H= \sum_{i=1}^{4} H_{i} \quad \mbox{and}\quad V=\sum_{i=1}^{4} V_{i}, $$

where \(H_{4} (t) = \frac{1}{2} B t^{2}\) and \(V_{4}(t) = Bt(t-1)\). For \(B\geq0\) the mapping \(H_{4}\) is convex and nondecreasing, while the mapping \(V_{4}\) is convex, symmetric around \(t=\frac{1}{2}\) and nondecreasing on \([\frac{1}{2},1]\). Therefore, the convexity and monotonicity properties of H and V are preserved.

Furthermore, \(V_{3} (t) - H_{3} (t) = \frac{1}{2} B\), so from \(V_{1}(t) \geq H_{1}(t)\), \(V_{2} (t) \geq H_{2} (t)\) and \(V_{3}(t)+V_{4}(t) - H_{3}(t) - H_{4}(t) = \frac{1}{2} B (1-t)^{2} \geq0 \) it follows that \(V(t) \geq H(t)\), i.e., part (d) also holds.
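A small modification of the previous sketch illustrates the relaxed hypothesis (again, our own example): \(f(s)=s^{3}+s^{2}\) is 3-convex at \(c=0\) with \(A=2\), since \(F(s)=f(s)-s^{2}=s^{3}\), and choosing \(y=-2x\) gives \(\operatorname{Var}(y)=4\operatorname{Var}(x)\), so \(B = 2(\operatorname{Var}(y)-\operatorname{Var}(x)) \geq0\) even though the variances differ:

```python
import numpy as np

# Remark 2.3 in action (our sketch): f(s) = s**3 + s**2 is 3-convex at c = 0
# with A = 2, since F(s) = f(s) - s**2 = s**3. With y = -2x we get
# Var(y) = 4 Var(x) >= Var(x), hence B = 2 (Var(y) - Var(x)) >= 0.
rng = np.random.default_rng(3)
f = lambda t: t**3 + t**2

n = 6
w = rng.random(n)
w /= w.sum()
x = rng.uniform(-0.5, 0.0, n)          # values in [-0.5, 0]
y = -2.0 * x                           # values in [0, 1], Var(y) = 4 Var(x)

def m(z):
    return w @ z

ts = np.linspace(0.0, 1.0, 101)
H = np.array([w @ f(t * y + (1 - t) * m(y))
              - w @ f(t * x + (1 - t) * m(x)) for t in ts])
assert np.all(np.diff(H) >= -1e-12)    # H is still nondecreasing when B >= 0
```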

References

  1. Bullen, P: An inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 421-460, 109-112 (1973)

  2. Levinson, N: Generalization of an inequality of Ky Fan. J. Math. Anal. Appl. 8, 133-134 (1964)

  3. Popoviciu, T: Sur une inégalité de N. Levinson. Mathematica 6, 301-306 (1964)

  4. Pečarić, J, Raşa, I: On an index set function. Southeast Asian Bull. Math. 24, 431-434 (2000)

  5. Mercer, A: Short proof of Jensen’s and Levinson’s inequalities. Math. Gaz. 94, 492-495 (2010)

  6. Witkowski, A: On Levinson’s inequality. Ann. Univ. Paedagog. Crac. Stud. Math. 12, 59-67 (2013)

  7. Baloch, I, Pečarić, J, Praljak, M: Generalization of Levinson’s inequality. J. Math. Inequal. 9(2), 571-586 (2015)

  8. Pečarić, J, Praljak, M, Witkowski, A: Generalized Levinson’s inequality and exponential convexity. Opusc. Math. 35(3), 397-410 (2015)

  9. Cho, Y, Matić, M, Pečarić, J: Two mappings in connection to Jensen’s inequality. Panam. Math. J. 12, 43-50 (2002)


Acknowledgements

This work has been fully supported by the Croatian Science Foundation under project 5435.

Author information

Correspondence to Marjan Praljak.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors jointly worked on the results, and they read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Jakšetić, J., Pečarić, J. & Praljak, M. A monotonic refinement of Levinson’s inequality. J Inequal Appl 2015, 162 (2015). https://doi.org/10.1186/s13660-015-0682-8
