  • Research Article
  • Open access

Iterative Refinements of the Hermite-Hadamard Inequality, Applications to the Standard Means

Abstract

Two adjacent recursive processes converging to the mean value of a real-valued convex function are given. Refinements of the Hermite-Hadamard inequality are obtained. Some applications to the special means are discussed. A brief extension for convex mappings with variables in a linear space is also provided.

1. Introduction

Let $I$ be a nonempty convex subset of $\mathbb{R}$ and let $f : I \to \mathbb{R}$ be a convex function. For $a, b \in I$ with $a < b$, the following double inequality

\[
f\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{2} \tag{1.1}
\]

is known in the literature as the Hermite-Hadamard inequality for convex functions. This inequality is useful in many mathematical contexts and serves as a tool for establishing interesting estimates.

In recent years, many authors have been interested in giving refinements and extensions of the Hermite-Hadamard inequality (1.1); see [1–4]. Dragomir [1] gave a refinement of the left side of (1.1), summarized in the next result.

Theorem 1.1.

Let $f : [a,b] \to \mathbb{R}$ be a convex function and let $H : [0,1] \to \mathbb{R}$ be defined by

\[
H(t) = \frac{1}{b-a}\int_a^b f\left(t x + (1-t)\,\frac{a+b}{2}\right)dx. \tag{1.2}
\]

Then $H$ is convex and increasing on $[0,1]$, and for all $t \in [0,1]$, one has

\[
f\left(\frac{a+b}{2}\right) = H(0) \le H(t) \le H(1) = \frac{1}{b-a}\int_a^b f(x)\,dx. \tag{1.3}
\]

Yang and Hong [3] gave a refinement of the right side of (1.1), itemized below.

Theorem 1.2.

Let $f : [a,b] \to \mathbb{R}$ be a convex function and let $F : [0,1] \to \mathbb{R}$ be defined by

\[
F(t) = \frac{1}{2(b-a)}\int_a^b \left[f\left(\frac{1+t}{2}\,a + \frac{1-t}{2}\,x\right) + f\left(\frac{1+t}{2}\,b + \frac{1-t}{2}\,x\right)\right]dx. \tag{1.4}
\]

Then $F$ is convex and increasing on $[0,1]$, and for all $t \in [0,1]$, one has

\[
\frac{1}{b-a}\int_a^b f(x)\,dx = F(0) \le F(t) \le F(1) = \frac{f(a)+f(b)}{2}. \tag{1.5}
\]

From the above theorems we immediately deduce the following.

Corollary 1.3.

With the above, there holds

\[
f\left(\frac{a+b}{2}\right) \le H(t) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le F(s) \le \frac{f(a)+f(b)}{2} \tag{1.6}
\]

for all $t, s \in [0,1]$, with

(1.7)

The following refinement of (1.1) is also well-known.

Theorem 1.4.

With the above, the following double inequality holds

\[
f\left(\frac{a+b}{2}\right) \le \frac{1}{2}\left[f\left(\frac{3a+b}{4}\right) + f\left(\frac{a+3b}{4}\right)\right] \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{1}{2}\left[f\left(\frac{a+b}{2}\right) + \frac{f(a)+f(b)}{2}\right] \le \frac{f(a)+f(b)}{2}. \tag{1.8}
\]

For the sake of completeness, and in order to explain the key idea of our approach, we reproduce here the proof of this known theorem.

Proof.

Applying (1.1) successively on the subintervals $\left[a, \frac{a+b}{2}\right]$ and $\left[\frac{a+b}{2}, b\right]$, we obtain

\[
f\left(\frac{3a+b}{4}\right) \le \frac{2}{b-a}\int_a^{(a+b)/2} f(x)\,dx \le \frac{1}{2}\left[f(a) + f\left(\frac{a+b}{2}\right)\right],\qquad
f\left(\frac{a+3b}{4}\right) \le \frac{2}{b-a}\int_{(a+b)/2}^b f(x)\,dx \le \frac{1}{2}\left[f\left(\frac{a+b}{2}\right) + f(b)\right]. \tag{1.9}
\]

The middle part of the desired result (1.8) follows by adding the two inequalities in (1.9) and dividing by $2$; the outer bounds follow from the convexity of $f$.

In [4], Zabandan introduced an improvement of Theorem 1.4, recalled in the following. Let $(x_n)_n$ and $(y_n)_n$ be the sequences defined by

\[
x_n = \frac{1}{2^n}\sum_{k=0}^{2^n-1} f\left(a + (2k+1)\,\frac{b-a}{2^{n+1}}\right),\qquad
y_n = \frac{1}{2^n}\left[\frac{f(a)+f(b)}{2} + \sum_{k=1}^{2^n-1} f\left(a + k\,\frac{b-a}{2^n}\right)\right]. \tag{1.10}
\]

Theorem 1.5.

With the above, one has the following inequalities:

\[
f\left(\frac{a+b}{2}\right) = x_0 \le x_n \le x_{n+1} \le \frac{1}{b-a}\int_a^b f(x)\,dx \le y_{n+1} \le y_n \le y_0 = \frac{f(a)+f(b)}{2}, \tag{1.11}
\]

with the relationship

\[
\lim_{n\to\infty} x_n = \lim_{n\to\infty} y_n = \frac{1}{b-a}\int_a^b f(x)\,dx. \tag{1.12}
\]

Notation 1.

Throughout this paper, and for the sake of presentation, we keep the notation introduced above. Further, the middle member of inequality (1.1) is usually known as the mean value of $f$ on $[a,b]$, that is,

\[
\frac{1}{b-a}\int_a^b f(x)\,dx. \tag{1.13}
\]
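
Read as the midpoint and trapezoidal Riemann sums over $2^n$ equal subintervals (an interpretation consistent with (1.11) and (1.12); the function names below are illustrative, not the paper's notation), the sequences of Theorem 1.5 can be sketched numerically as follows:

```python
def midpoint_sum(f, a, b, n):
    """x_n: average of f at the midpoints of 2**n equal subintervals of [a, b]."""
    N = 2 ** n
    h = (b - a) / N
    return sum(f(a + (k + 0.5) * h) for k in range(N)) / N

def trapezoid_sum(f, a, b, n):
    """y_n: trapezoidal rule for the mean value of f on 2**n equal subintervals."""
    N = 2 ** n
    h = (b - a) / N
    return ((f(a) + f(b)) / 2 + sum(f(a + k * h) for k in range(1, N))) / N

# Convex example f(x) = x**2 on [0, 1]; the mean value is 1/3.
for n in range(4):
    print(n, midpoint_sum(lambda x: x * x, 0.0, 1.0, n),
             trapezoid_sum(lambda x: x * x, 0.0, 1.0, n))
```

For a convex $f$, the first column increases and the second decreases, squeezing the mean value from both sides.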

2. Iterative Refinements of the Hermite-Hadamard Inequality

Let $I$ be a nonempty convex subset of $\mathbb{R}$ and let $f : I \to \mathbb{R}$ be a convex function. As already pointed out, our fundamental goal in this section is to give iterative refinements of (1.1) that contain those recalled above. We start with our general viewpoint.

2.1. General Approach

Examining the proof of Theorem 1.4, we observe that the same procedure can be applied again recursively. More precisely, let us start with the double inequality

\[
f_0(a,b) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le F_0(a,b), \tag{2.1}
\]

where $f_0$ and $F_0$ are two given functions of the endpoints. Assume that, by the same procedure as in the proof of Theorem 1.4, we have

\[
f_0(a,b) \le f_1(a,b) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le F_1(a,b) \le F_0(a,b), \tag{2.2}
\]

with the following relationships:

\[
f_1(a,b) = \frac{1}{2}\left[f_0\left(a, \frac{a+b}{2}\right) + f_0\left(\frac{a+b}{2}, b\right)\right],\qquad
F_1(a,b) = \frac{1}{2}\left[F_0\left(a, \frac{a+b}{2}\right) + F_0\left(\frac{a+b}{2}, b\right)\right]. \tag{2.3}
\]

Reiterating the same procedure successively, we construct two sequences, denoted by $(f_n)_n$ and $(F_n)_n$, satisfying the following inequalities:

\[
f_n(a,b) \le f_{n+1}(a,b) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le F_{n+1}(a,b) \le F_n(a,b), \tag{2.4}
\]

where $(f_n)$ and $(F_n)$ are defined by the recursive relationships

\[
f_{n+1}(a,b) = \frac{1}{2}\left[f_n\left(a, \frac{a+b}{2}\right) + f_n\left(\frac{a+b}{2}, b\right)\right],\qquad
F_{n+1}(a,b) = \frac{1}{2}\left[F_n\left(a, \frac{a+b}{2}\right) + F_n\left(\frac{a+b}{2}, b\right)\right]. \tag{2.5}
\]

The initial data $f_0$ and $F_0$, which of course generally depend on the convex function $f$, are for the moment lower and upper bounds in inequality (1.1), respectively, satisfying

\[
f_0(a,b) \le \frac{1}{2}\left[f_0\left(a, \frac{a+b}{2}\right) + f_0\left(\frac{a+b}{2}, b\right)\right],\qquad
\frac{1}{2}\left[F_0\left(a, \frac{a+b}{2}\right) + F_0\left(\frac{a+b}{2}, b\right)\right] \le F_0(a,b). \tag{2.6}
\]

Summarizing the previous approach, we may state the following results.

Theorem 2.1.

With the above, the sequence $(f_n)$ is increasing and $(F_n)$ is decreasing. Moreover, the inequalities

\[
f_0(a,b) \le f_n(a,b) \le f_{n+1}(a,b) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le F_{n+1}(a,b) \le F_n(a,b) \le F_0(a,b) \tag{2.7}
\]

hold true for all $n \ge 0$.

Proof.

This follows from the construction of $(f_n)$ and $(F_n)$. It is also possible to prove the same by using the recursive relationships (2.5). The proof is complete.

Corollary 2.2.

The sequences $(f_n)$ and $(F_n)$ both converge, and their limits are, respectively, lower and upper bounds of the mean value of $f$, that is,

\[
\lim_{n\to\infty} f_n(a,b) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \lim_{n\to\infty} F_n(a,b). \tag{2.8}
\]

Proof.

According to inequalities (2.7), the sequence $(f_n)$ is increasing and bounded above by $F_0(a,b)$, while $(F_n)$ is decreasing and bounded below by $f_0(a,b)$. It follows that $(f_n)$ and $(F_n)$ both converge. Passing to the limit in inequalities (2.7), we obtain (2.8), which completes the proof.
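
A minimal numerical sketch of the recursive scheme, assuming (as in the proof of Theorem 1.4) that each iterate averages the previous one over the two halves of the interval; `refine`, `lower`, and `upper` are illustrative names, not the paper's notation:

```python
from math import exp

def refine(bound, a, b, n):
    """n-th iterate of the scheme: average the previous iterate over both halves."""
    if n == 0:
        return bound(a, b)
    m = (a + b) / 2
    return 0.5 * (refine(bound, a, m, n - 1) + refine(bound, m, b, n - 1))

# Initial data of Example 2.4 for f(x) = exp(x) on [0, 1]; the mean value is e - 1.
lower = lambda u, v: exp((u + v) / 2)          # f((u+v)/2)
upper = lambda u, v: (exp(u) + exp(v)) / 2     # (f(u)+f(v))/2
for n in range(4):
    print(n, refine(lower, 0.0, 1.0, n), refine(upper, 0.0, 1.0, n))
```

The lower iterates increase and the upper ones decrease toward $e - 1 \approx 1.71828$, sandwiching the mean value from both sides.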

We now address a question arising naturally from the above study: what is the explicit form of $f_n$ (and $F_n$) in terms of the initial data? The answer is given in the following result.

Theorem 2.3.

With the above, for all $n \ge 0$, there hold

\[
f_n(a,b) = \frac{1}{2^n}\sum_{k=0}^{2^n-1} f_0\left(a + k\,\frac{b-a}{2^n},\; a + (k+1)\,\frac{b-a}{2^n}\right),\qquad
F_n(a,b) = \frac{1}{2^n}\sum_{k=0}^{2^n-1} F_0\left(a + k\,\frac{b-a}{2^n},\; a + (k+1)\,\frac{b-a}{2^n}\right). \tag{2.9}
\]

Proof.

Of course, it is sufficient to show the first formula, which follows from a simple induction together with a manipulation of the summation indices. We omit the routine details.
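
As a sanity check, the explicit form asserted by Theorem 2.3 (read here as the average of the initial datum over the $2^n$ dyadic subintervals, an assumption consistent with the halving recursion) can be compared with the recursion directly; all names below are illustrative:

```python
def refine(bound, a, b, n):
    # Recursive form: average the (n-1)-th iterate over the two half-intervals.
    if n == 0:
        return bound(a, b)
    m = (a + b) / 2
    return 0.5 * (refine(bound, a, m, n - 1) + refine(bound, m, b, n - 1))

def closed_form(bound, a, b, n):
    # Explicit form: average of `bound` over the 2**n dyadic subintervals of [a, b].
    N = 2 ** n
    h = (b - a) / N
    return sum(bound(a + k * h, a + (k + 1) * h) for k in range(N)) / N

g = lambda u, v: (u * v + 1.0) / (u + v + 1.0)   # arbitrary function of the endpoints
for n in range(5):
    assert abs(refine(g, 0.0, 1.0, n) - closed_form(g, 0.0, 1.0, n)) < 1e-12
print("recursive and explicit forms agree")
```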

We can now pose the following question: what are the explicit limits of the sequences $(f_n)$ and $(F_n)$? Before answering this question in a special case, we state the following examples.

Example 2.4.

Of course, the first choice of $f_0$ and $F_0$ is to take the lower and upper bounds of (1.1), respectively, that is,

\[
f_0(a,b) = f\left(\frac{a+b}{2}\right),\qquad F_0(a,b) = \frac{f(a)+f(b)}{2}. \tag{2.10}
\]

With this choice, we have

\[
f_1(a,b) = \frac{1}{2}\left[f\left(\frac{3a+b}{4}\right) + f\left(\frac{a+3b}{4}\right)\right],\qquad
F_1(a,b) = \frac{1}{2}\left[f\left(\frac{a+b}{2}\right) + \frac{f(a)+f(b)}{2}\right], \tag{2.11}
\]

which correspond, respectively, to the lower and upper bounds in (1.8). By the convexity of $f$, it is easy to see that the inequalities (2.6) are satisfied. In this case, we will prove in the next subsection that $(f_n)$ and $(F_n)$ coincide with $(x_n)$ and $(y_n)$, respectively, and so both converge to the mean value of $f$ on $[a,b]$.

Example 2.5.

Following Corollary 1.3 we can take

\[
f_0(a,b) = H(t),\qquad F_0(a,b) = F(s) \tag{2.12}
\]

for fixed $t, s \in [0,1]$, where $H$ and $F$ are the maps (1.2) and (1.4) attached to the interval $[a,b]$. It is not hard to verify that the inequalities (2.6) are satisfied here. In this case, our approach defines two sequences depending on the variables $t$ and $s$; for this reason, these sequences of functions will be denoted by $\big(f_n(t)\big)_n$ and $\big(F_n(s)\big)_n$. This example, which contains the preceding one, will be detailed in what follows.

2.2. Case of Example 2.4

Choosing $f_0$ and $F_0$ as in Example 2.4, we first state the following result.

Proposition 2.6.

With (2.10), one has

\[
f_n(a,b) = x_n, \qquad F_n(a,b) = y_n \qquad \text{for all } n \ge 0, \tag{2.13}
\]

where $(x_n)$ and $(y_n)$ are given by (1.10).

Proof.

It is a simple verification from formulas (2.9) together with (1.10).

Now we proceed to prove that the sequences $(f_n)$ and $(F_n)$ both converge to the mean value of $f$ by adopting our technical approach. In fact, with (2.10) the sequences $(f_n)$ and $(F_n)$ can be related by a single interesting relationship which, as we will see later, simplifies the corresponding proofs. Precisely, we may state the following result.

Proposition 2.7.

Assume that, for $f_0$ and $F_0$, one has (2.10). Then the following relation holds:

\[
F_{n+1}(a,b) = \frac{f_n(a,b) + F_n(a,b)}{2}. \tag{2.14}
\]

Proof.

It is a simple induction on $n$; we leave the details to the reader.

We are now in a position to state the following result, which answers the above question when $f_0$ and $F_0$ are chosen as in Example 2.4.

Theorem 2.8.

With (2.10), the sequences $(f_n)$ and $(F_n)$ are adjacent, with limit

\[
\lim_{n\to\infty} f_n(a,b) = \lim_{n\to\infty} F_n(a,b) = \frac{1}{b-a}\int_a^b f(x)\,dx, \tag{2.15}
\]

and the following error estimates hold:

\[
0 \le F_n(a,b) - f_n(a,b) \le \frac{1}{2^n}\left(\frac{f(a)+f(b)}{2} - f\left(\frac{a+b}{2}\right)\right). \tag{2.16}
\]

Proof.

According to Corollary 2.2, the sequences $(f_n)$ and $(F_n)$ both converge, and by relation (2.14) their limits are equal. Now, by virtue of (2.14) again, we can write

\[
F_{n+1}(a,b) - f_n(a,b) = \frac{1}{2}\big(F_n(a,b) - f_n(a,b)\big). \tag{2.17}
\]

This, with the inequalities (2.7), yields

\[
0 \le F_{n+1}(a,b) - f_{n+1}(a,b) \le F_{n+1}(a,b) - f_n(a,b) = \frac{1}{2}\big(F_n(a,b) - f_n(a,b)\big). \tag{2.18}
\]

By a simple mathematical induction, we simultaneously obtain (2.15) and (2.16). This completes the proof.

Remark 2.9.

Starting from a general point of view, we have recovered Theorem 1.5 from a new angle and via a technical approach. Furthermore, the importance of this approach appears in what follows.

(i) As the reader will note, the proofs here are simpler than those of [4] for establishing the monotonicity and computing the limits of the considered sequences. See [4, pages 3–5] for such a comparison.

(ii) The sequences having the mean value of $f$ as their limit are defined here by simple recursive relationships, which play an interesting role both in the theoretical study and in the computational context.

(iii) Some estimates improving those already stated in the literature are obtained here. In particular, the inequalities (2.16) appear to be new and tell us that, in the numerical context, the convergence of $(f_n)$ and $(F_n)$ to the mean value of $f$ occurs with geometric speed.

2.3. Case of Example 2.5

As pointed out before, we can take

\[
f_0(a,b) = H(t),\qquad F_0(a,b) = F(s) \tag{2.19}
\]

for fixed $t, s \in [0,1]$. The function sequences $\big(f_n(t)\big)_n$ and $\big(F_n(s)\big)_n$ are then defined, for all $n \ge 0$, by the recursive relationships

(2.20)

By induction, it is not hard to see that the maps $t \mapsto f_n(t)$ and $s \mapsto F_n(s)$ are, for fixed $n$, convex and increasing.

Similarly to the above, we obtain the next result.

Theorem 2.10.

With (2.19), the following assertions are met.

(1) The function sequences $\big(f_n(t)\big)_n$ and $\big(F_n(s)\big)_n$ are, for fixed $t, s \in [0,1]$, monotonically increasing and decreasing, respectively.

(2) For fixed $n$, the functions $t \mapsto f_n(t)$ and $s \mapsto F_n(s)$ are (convex and) monotonically increasing.

(3) For all $n \ge 0$ and all $t, s \in [0,1]$, one has

\[
H(t) \le f_n(t) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le F_n(s) \le F(s). \tag{2.21}
\]

Proof.

  1. (1)

    By construction, as in the proof of Theorem 2.1.

  2. (2)

    This follows from the recursive relationships defining $f_n(t)$ and $F_n(s)$.

  3. (3)

    By construction as in the above.

By virtue of the monotonicity of the sequences on the one hand, and that of the maps $t \mapsto f_n(t)$ and $s \mapsto F_n(s)$ on the other, the double iterative-functional inequality (2.21) yields improvements of the refinements recalled in the previous section. In particular, we immediately recover the inequalities (1.3) and (1.6), respectively, by writing

(2.22)

for all $t \in [0,1]$, and

(2.23)

for all $t, s \in [0,1]$.

Open Question 2.3.

As we have seen, for every $t, s \in [0,1]$, the sequences $\big(f_n(t)\big)_n$ and $\big(F_n(s)\big)_n$ both converge. What are their limits? Whether such convergence is uniform on $[0,1]$ is not obvious, and this also appears to be an interesting question.

3. Applications to Scalar Means

As already pointed out, this section is devoted to displaying some applications of the above theoretical results. For this, we need some additional basic notions about special means.

For two positive real numbers $a$ and $b$, the arithmetic, geometric, harmonic, logarithmic, and exponential (or identric) means of $a$ and $b$ are, respectively, defined by

\[
A = \frac{a+b}{2},\qquad G = \sqrt{ab},\qquad H = \frac{2ab}{a+b},\qquad
L = \frac{b-a}{\ln b - \ln a},\qquad I = \frac{1}{e}\left(\frac{b^b}{a^a}\right)^{1/(b-a)}, \tag{3.1}
\]

with the conventions $L(a,a) = I(a,a) = a$. The following inequalities are well known in the literature:

\[
H \le G \le L \le I \le A. \tag{3.2}
\]

When $a$ and $b$ are given, the computation of $A$, $G$, and $H$ is simple, while that of $L$, and especially that of $I$, is not. So approximating $L$ and $I$ by simple and practical algorithms appears to be interesting. That is the fundamental aim of what follows. In the applications below, we consider the choice (of Example 2.4)

\[
f_0(a,b) = f\left(\frac{a+b}{2}\right),\qquad F_0(a,b) = \frac{f(a)+f(b)}{2}. \tag{3.3}
\]

3.1. Application 1: Approximation of the Logarithmic Mean

Consider the convex function $f$ defined by $f(x) = 1/x$ for $x > 0$. Keeping the same notation as in the previous section, the associated sequences $(f_n)$ and $(F_n)$ correspond to the initial data

\[
f_0(a,b) = \frac{2}{a+b} = \frac{1}{A(a,b)},\qquad
F_0(a,b) = \frac{1}{2}\left(\frac{1}{a} + \frac{1}{b}\right) = \frac{1}{H(a,b)}. \tag{3.4}
\]

Applying the above theoretical results to this particular case, we immediately obtain the following.

Theorem 3.1.

The sequences $(f_n)$ and $(F_n)$ corresponding to $f(x) = 1/x$ both converge to $1/L(a,b)$, with the estimate

\[
0 \le F_n(a,b) - f_n(a,b) \le \frac{1}{2^n}\left(\frac{1}{H(a,b)} - \frac{1}{A(a,b)}\right) = \frac{1}{2^n}\,\frac{(b-a)^2}{2ab(a+b)} \tag{3.5}
\]

for all $n \ge 0$, and the following inequalities hold:

\[
f_n(a,b) \le \frac{1}{L(a,b)} \le F_n(a,b). \tag{3.6}
\]

The above theorem tells us that $L(a,b)$, whose expression contains the logarithm, can be approximated by an iterative algorithm involving only the elementary operations of addition, multiplication, and inversion. Moreover, this algorithm is simple, recursive, and practical in the numerical context, with geometric speed of convergence.
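
A hedged sketch of the algorithm behind Theorem 3.1: starting from the reciprocals of the arithmetic and harmonic means and halving the interval recursively, the reciprocals of the resulting bounds enclose $L(a,b)$. The helper names are illustrative, not the paper's notation:

```python
def refine(bound, a, b, n):
    # Average the previous iterate over the two half-intervals (scheme of Section 2).
    if n == 0:
        return bound(a, b)
    m = (a + b) / 2
    return 0.5 * (refine(bound, a, m, n - 1) + refine(bound, m, b, n - 1))

def log_mean_bounds(a, b, n):
    """Enclosing bounds for L(a, b), from f(x) = 1/x: f_n <= 1/L <= F_n."""
    f_n = refine(lambda u, v: 2.0 / (u + v), a, b, n)          # f((u+v)/2) = 1/A(u,v)
    F_n = refine(lambda u, v: (1/u + 1/v) / 2.0, a, b, n)      # (f(u)+f(v))/2 = 1/H(u,v)
    return 1.0 / F_n, 1.0 / f_n

lo, hi = log_mean_bounds(1.0, 2.0, 6)
print(lo, hi)   # both close to L(1, 2) = 1/ln 2 ≈ 1.4427
```

Note that only additions, multiplications, and inversions are used, as the theorem asserts.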

3.2. Application 2: Approximation of the Identric Mean

Let $f$ be the convex map $f(x) = -\ln x$ for $x > 0$. Writing the corresponding iterative process explicitly, we see that, for reasons of simplicity, we may set

(3.7)

The auxiliary sequence so obtained is recursively defined by

(3.8)

It is then easy to establish by a simple induction that

(3.9)

where the dual sequence is defined by a relationship similar to (3.8), with the corresponding initial data. Our approach allows us to announce the following interesting result.

Theorem 3.2.

The above sequence converges to $I(a,b)$ with the estimate

(3.10)

and the following iterative inequalities hold:

(3.11)

Furthermore, one has

(3.12)

Proof.

It is immediate from the above general study. The details are left to the reader.

Combining the inequalities of Theorems 3.1 and 3.2 with the standard comparisons between the above means, we simultaneously obtain the known inequalities (3.2). Further, the convergence result

(3.13)

as $n$ goes to $\infty$, is not obvious to establish directly. This again demonstrates the interest of this work and the generality of our approach.
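
The paper's auxiliary sequence (3.8) is not reproduced above; as an illustrative reconstruction, one can apply the general scheme of Section 2 to the convex map $f(x) = -\ln x$ and exponentiate, which turns the averaging recursion into geometric means and yields algebraic bounds enclosing the identric mean. All names below are assumptions:

```python
from math import exp, log

def refine(bound, a, b, n):
    # Average the previous iterate over the two half-intervals (scheme of Section 2).
    if n == 0:
        return bound(a, b)
    m = (a + b) / 2
    return 0.5 * (refine(bound, a, m, n - 1) + refine(bound, m, b, n - 1))

def identric_bounds(a, b, n):
    """Enclosing bounds for I(a, b) via f(x) = -ln x; exp(-x) reverses the roles."""
    f_n = refine(lambda u, v: -log((u + v) / 2.0), a, b, n)        # f((u+v)/2)
    F_n = refine(lambda u, v: -(log(u) + log(v)) / 2.0, a, b, n)   # (f(u)+f(v))/2
    return exp(-F_n), exp(-f_n)

a, b = 1.0, 2.0
identric = exp(-1) * (b**b / a**a) ** (1 / (b - a))   # I(1, 2) = 4/e ≈ 1.4715
lo, hi = identric_bounds(a, b, 5)
print(lo, identric, hi)
```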

Remark 3.3.

The identric mean, having a transcendental expression, is here approximated by an algorithm of algebraic type, useful for theoretical study and simple for numerical computation. Further, as is well known, it is not possible to define a nonmonotone operator mean from the scalar case via the Kubo-Ando theory [5]. Thus, our approach here could be the key idea for defining the identric mean involving operator and functional variables.

4. Extension for Real-Valued Function with Vector Variable

As is well known, the Hermite-Hadamard inequality has an extension to real-valued convex functions with variables in a linear vector space, in the following sense: let $C$ be a nonempty convex subset of a real linear space $E$ and let $f : C \to \mathbb{R}$ be a convex function; then for all $a, b \in C$ there holds

\[
f\left(\frac{a+b}{2}\right) \le \int_0^1 f\big(t a + (1-t) b\big)\,dt \le \frac{f(a)+f(b)}{2}. \tag{4.1}
\]

In particular, in every normed linear space $(E, \|\cdot\|)$, we have

\[
\left\|\frac{a+b}{2}\right\| \le \int_0^1 \big\|t a + (1-t) b\big\|\,dt \le \frac{\|a\|+\|b\|}{2}. \tag{4.2}
\]

In general, the computation of the middle integrals in the above inequalities is not always possible. So approximating such integrals by recursive and practical algorithms appears to be very interesting. Our aim in this section is to state briefly an analogue of our above approach, with its related fundamental results, for convex functions defined on a convex subset of a linear space. We start with the analogue of Theorem 1.4.

Theorem 4.1.

Let $f : C \to \mathbb{R}$ be a convex function. Then, for all $a, b \in C$, there holds

\[
f\left(\frac{a+b}{2}\right) \le f_1(a,b) \le \int_0^1 f\big(t a + (1-t) b\big)\,dt \le F_1(a,b) \le \frac{f(a)+f(b)}{2}, \tag{4.3}
\]

where $f_1(a,b)$ and $F_1(a,b)$ are given by

\[
f_1(a,b) = \frac{1}{2}\left[f\left(\frac{3a+b}{4}\right) + f\left(\frac{a+3b}{4}\right)\right],\qquad
F_1(a,b) = \frac{1}{2}\left[f\left(\frac{a+b}{2}\right) + \frac{f(a)+f(b)}{2}\right]. \tag{4.4}
\]

Proof.

Making the change of variable $t = s/2$, we have

\[
\int_0^{1/2} f\big(t a + (1-t) b\big)\,dt = \frac{1}{2}\int_0^1 f\left(s\,\frac{a+b}{2} + (1-s)\,b\right)ds, \tag{4.5}
\]

while for the change of variable $t = (1+s)/2$ we have

\[
\int_{1/2}^1 f\big(t a + (1-t) b\big)\,dt = \frac{1}{2}\int_0^1 f\left(s\,a + (1-s)\,\frac{a+b}{2}\right)ds. \tag{4.6}
\]

Now, applying inequality (4.1) to each of the right-hand integrals, we have

\[
f\left(\frac{3a+b}{4}\right) \le \int_0^1 f\left(s\,a + (1-s)\,\frac{a+b}{2}\right)ds \le \frac{1}{2}\left[f(a) + f\left(\frac{a+b}{2}\right)\right],\qquad
f\left(\frac{a+3b}{4}\right) \le \int_0^1 f\left(s\,\frac{a+b}{2} + (1-s)\,b\right)ds \le \frac{1}{2}\left[f\left(\frac{a+b}{2}\right) + f(b)\right]. \tag{4.7}
\]

Multiplying both inequalities by $1/2$ and adding the results, we deduce the middle part of the desired double inequality (4.3); the outer bounds follow from the convexity of $f$.

Similarly, we set

\[
f_0(a,b) = f\left(\frac{a+b}{2}\right),\qquad F_0(a,b) = \frac{f(a)+f(b)}{2}. \tag{4.8}
\]

Now, the extension of our above study is itemized in the following statement.

Theorem 4.2.

Let $C$ be a nonempty convex subset of a linear space $E$ and $f : C \to \mathbb{R}$ a convex function. For all $a, b \in C$, the sequences $(f_n)$ and $(F_n)$ defined by

\[
f_{n+1}(a,b) = \frac{1}{2}\left[f_n\left(a, \frac{a+b}{2}\right) + f_n\left(\frac{a+b}{2}, b\right)\right],\qquad
F_{n+1}(a,b) = \frac{1}{2}\left[F_n\left(a, \frac{a+b}{2}\right) + F_n\left(\frac{a+b}{2}, b\right)\right] \tag{4.9}
\]

are, respectively, monotonically increasing and decreasing, and both converge to $\int_0^1 f\big(t a + (1-t) b\big)\,dt$, with the estimate

\[
0 \le F_n(a,b) - f_n(a,b) \le \frac{1}{2^n}\left(\frac{f(a)+f(b)}{2} - f\left(\frac{a+b}{2}\right)\right). \tag{4.10}
\]

Proof.

The proof is similar to that of the real-variable case; we omit the details here.
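
A sketch of Theorem 4.2 in $\mathbb{R}^2$ with $f = \|\cdot\|$ (the setting of (4.2)); midpoints are taken in the vector space and the target is $\int_0^1 \|ta + (1-t)b\|\,dt$. Names are illustrative:

```python
from math import hypot

def midpoint(u, v):
    return tuple((x + y) / 2.0 for x, y in zip(u, v))

def refine(bound, a, b, n):
    # Same halving scheme as in Section 2, with segment midpoints in the vector space.
    if n == 0:
        return bound(a, b)
    m = midpoint(a, b)
    return 0.5 * (refine(bound, a, m, n - 1) + refine(bound, m, b, n - 1))

f = lambda x: hypot(x[0], x[1])          # Euclidean norm on R^2, a convex function
a, b = (1.0, 0.0), (0.0, 1.0)
lower = refine(lambda u, v: f(midpoint(u, v)), a, b, 6)       # from f((u+v)/2)
upper = refine(lambda u, v: (f(u) + f(v)) / 2.0, a, b, 6)     # from (f(u)+f(v))/2
print(lower, upper)   # both approach the integral of ||(t, 1-t)|| over [0, 1]
```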

Of course, the sequences $(f_n)$ and $(F_n)$ are related by a relation similar to (2.14) and are explicitly given by expressions analogous to (2.9). In particular, we may state the following.

Example 4.3.

Let $p \ge 1$ be a real number and let $f$ be the convex function defined on a normed linear space $(E, \|\cdot\|)$ by $f(x) = \|x\|^p$. In this case, $f_0$ and $F_0$ are given by

\[
f_0(a,b) = \left\|\frac{a+b}{2}\right\|^p,\qquad F_0(a,b) = \frac{\|a\|^p + \|b\|^p}{2}, \tag{4.11}
\]

with the following inequalities:

\[
\left\|\frac{a+b}{2}\right\|^p \le f_n(a,b) \le \int_0^1 \big\|t a + (1-t) b\big\|^p\,dt \le F_n(a,b) \le \frac{\|a\|^p + \|b\|^p}{2}. \tag{4.12}
\]

Remark 4.4.

The Hermite-Hadamard inequality, together with some associated refinements, can be extended to non-real-valued maps that are convex with respect to a given (partial) ordering. In this direction, we refer the reader to the recent paper [6].

References

  1. Dragomir SS: Two mappings in connection to Hadamard's inequalities. Journal of Mathematical Analysis and Applications 1992, 167(1):49–56. 10.1016/0022-247X(92)90233-4


  2. Dragomir SS, McAndrew A: Refinements of the Hermite-Hadamard inequality for convex functions. Journal of Inequalities in Pure and Applied Mathematics 2005, 6(5), article 140.

  3. Yang G-S, Hong M-C: A note on Hadamard's inequality. Tamkang Journal of Mathematics 1997, 28(1):33–37.


  4. Zabandan G: A new refinement of the Hermite-Hadamard inequality for convex functions. Journal of Inequalities in Pure and Applied Mathematics 2009, 10(2), article 45.

  5. Kubo F, Ando T: Means of positive linear operators. Mathematische Annalen 1980, 246(3):205–224. 10.1007/BF01371042


  6. Dragomir SS, Raïssouli M: Jensen and Hermite-Hadamard inequalities for the Legendre-Fenchel duality, application to convex operator maps. Mathematica Slovaca, 2010, submitted.

Author information

Correspondence to Mustapha Raïssouli.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Dragomir, S., Raïssouli, M. Iterative Refinements of the Hermite-Hadamard Inequality, Applications to the Standard Means. J Inequal Appl 2010, 107950 (2010). https://doi.org/10.1155/2010/107950
