
Approximations of Jensen divergence for twice differentiable functions

Abstract

The Jensen divergence is used to measure the difference between two probability distributions. This divergence has been generalised to allow the comparison of more than two distributions. In this paper, we consider some bounds for the generalised Jensen divergence of twice differentiable functions with bounded second derivatives. These bounds provide approximations for the Jensen divergence of twice differentiable functions by the Jensen divergence of simpler functions, such as the power functions and the paired entropies associated with the Havrda-Charvát functions.

MSC:26D15, 94A17.

1 Introduction

One of the more important applications of probability theory is finding an appropriate measure of distance (or difference) between two probability distributions [1]. Many such divergence measures have been widely studied and applied by mathematicians such as Burbea and Rao [2], Havrda and Charvát [3], Lin [4] and others.

In Burbea and Rao [2], a generalisation of the Jensen divergence is considered to allow the comparison of more than two distributions. If Φ is a function defined on an interval I of the real line $\mathbb{R}$, the (generalised) Jensen divergence between two elements $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$ in $I^n$ (where $n\ge 1$) is given by the following equation (cf. Burbea and Rao [2]):

$$J_{n,\Phi}(x,y) := \sum_{i=1}^n \left[ \frac{1}{2}\bigl[\Phi(x_i)+\Phi(y_i)\bigr] - \Phi\!\left(\frac{x_i+y_i}{2}\right) \right]$$
(1)

for all $(x,y)\in I^n\times I^n$. Several measures have been proposed to quantify the difference (also known as the divergence) between two (or more) probability distributions. We refer to Grosse et al. [5], Kullback and Leibler [6] and Csiszar [7] for further references.
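As an aside for readers who want to experiment with (1), the following minimal Python sketch (not part of the original paper; the function name `jensen_divergence` and the sample data are our own) computes the generalised Jensen divergence for an arbitrary Φ.

```python
import numpy as np

def jensen_divergence(phi, x, y):
    """Generalised Jensen divergence J_{n,Phi}(x, y) as in (1).

    phi  : callable applied elementwise to numpy arrays
    x, y : sequences of equal length with entries in the domain of phi
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum(0.5 * (phi(x) + phi(y)) - phi(0.5 * (x + y)))

# Example with the convex function Phi(t) = t*log(t) on two points of (0, 1)
x = [0.2, 0.8]
y = [0.5, 0.5]
print(jensen_divergence(lambda t: t * np.log(t), x, y))  # non-negative, since Phi is convex
```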

We denote by $S_n$ the set

$$S_n=\left\{(x_1,\dots,x_n)\in I^n : \sum_{i=1}^n x_i = 1\right\},\qquad I=[0,1].$$

Utilising the family of functions, for $\alpha\in\mathbb{R}_+$,

$$\Phi_\alpha(t):=\begin{cases}(\alpha-1)^{-1}(t^\alpha - t), & \alpha\neq 1,\\ t\log t, & \alpha=1,\end{cases}$$

introduced by Havrda and Charvát in [3] to define their entropies of degree α, Burbea and Rao [2] considered the following family of Jensen divergences:

$$J_{n,\alpha}:=\begin{cases}(\alpha-1)^{-1}\sum_{i=1}^n\left[\frac{1}{2}\left(x_i^\alpha+y_i^\alpha\right)-\left(\frac{x_i+y_i}{2}\right)^\alpha\right], & \alpha\neq 1,\\[1ex] \frac{1}{2}\sum_{i=1}^n\left[x_i\log x_i + y_i\log y_i - (x_i+y_i)\log\left(\frac{x_i+y_i}{2}\right)\right], & \alpha=1,\end{cases}$$

that can be defined on $S_n\times S_n$ with the convention that $0\log 0=0$, for $\alpha\in\mathbb{R}_+$. We note that the divergence $J_{n,1}$ is also known as the Jensen-Shannon divergence [8].
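As an illustration only (not from the paper), here is a small Python sketch of the family $J_{n,\alpha}$, with the α = 1 branch implementing the Jensen-Shannon divergence under the convention 0 log 0 = 0; the helper name `j_alpha` and the sample vectors are our own.

```python
import numpy as np

def j_alpha(x, y, alpha):
    """Burbea-Rao family J_{n,alpha}(x, y); alpha = 1 is the Jensen-Shannon divergence."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    m = 0.5 * (x + y)
    if alpha == 1:
        # use the convention 0*log(0) = 0 by masking zero entries
        xlogx = np.where(x > 0, x * np.log(np.where(x > 0, x, 1.0)), 0.0)
        ylogy = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0)), 0.0)
        mterm = np.where(m > 0, (x + y) * np.log(np.where(m > 0, m, 1.0)), 0.0)
        return 0.5 * np.sum(xlogx + ylogy - mterm)
    return np.sum(0.5 * (x**alpha + y**alpha) - m**alpha) / (alpha - 1)

p = [0.1, 0.4, 0.5]   # two points of the simplex S_3
q = [0.3, 0.3, 0.4]
print(j_alpha(p, q, 1), j_alpha(p, q, 2))
```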

These measures have been applied in a variety of fields, for example, in information theory [9]. The Jensen divergence introduced in Burbea and Rao [2] has applications in bioinformatics [10, 11], where it is usually utilised to compare samples from a healthy population (control) and a diseased population (case) when detecting gene expression associated with a certain disease. We refer the readers to Dragomir [1] for applications in other areas.

In a recent paper by Dragomir et al. [12], the authors found sharp upper and lower bounds for the Jensen divergence for various classes of functions Φ, including functions of bounded variation, absolutely continuous functions, Lipschitzian continuous functions, convex functions and differentiable functions. We recall some of these results in Section 2, which motivates the new results we obtain in this paper.

In this paper, we provide bounds for the Jensen divergence of a twice differentiable function Φ whose second derivative $\Phi''$ satisfies certain boundedness conditions. These bounds provide approximations of the Jensen divergence $J_{n,\Phi}$ (cf. (1)) by the divergence of simpler functions such as the power functions (cf. Section 3) and the above-mentioned family of Jensen divergences $J_{n,\alpha}$ (cf. Section 4). Finally, we apply these bounds to some elementary functions in Section 5.

2 Definitions, notation and previous results

In this section, we provide definitions and notation that will be used in the paper. We also provide some results regarding sharp bounds for the generalised Jensen divergence as stated in Dragomir et al. [12].

2.1 Definitions and notation

Throughout the paper, for any real number $r>1$, we define $r'$ to be its Hölder conjugate, that is, $1/r+1/r'=1$.

Definition 1 (Bullen [13])

If s is an extended real number, the generalised logarithmic mean of order s of two positive numbers x and y is defined by

$$L^{[s]}(x,y)=\begin{cases}\left[\dfrac{1}{s+1}\left(\dfrac{y^{s+1}-x^{s+1}}{y-x}\right)\right]^{\frac{1}{s}} & \text{if } s\neq -1,0,\pm\infty,\\[1ex] \dfrac{y-x}{\log y-\log x} & \text{if } s=-1,\\[1ex] \dfrac{1}{e}\left(\dfrac{y^y}{x^x}\right)^{\frac{1}{y-x}} & \text{if } s=0,\\[1ex] \max\{x,y\} & \text{if } s=+\infty,\\[1ex] \min\{x,y\} & \text{if } s=-\infty,\end{cases}$$
(2)

and $L^{[s]}(x,x)=x$.

This mean is homogeneous and symmetric [13, p. 385]. In particular, there is no loss of generality in assuming $0<x<y$. Note also that

$$L^{[s]}(x,y)=\left(\int_0^1\bigl[(1-t)x+ty\bigr]^s\,dt\right)^{1/s}$$

for $0<x<y$ and $s\in[1,\infty)$. This mean generalises not only the logarithmic mean (when $s=-1$), which is particularly useful in the distribution of electrical charge on a conductor, but also the arithmetic mean (when $s=1$) and the geometric mean (when $s=-2$).
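For completeness, here is a short Python sketch of the finite-order cases of (2); it is our own illustration (the name `gen_log_mean` is not from the paper) and omits the $s=\pm\infty$ branches.

```python
import numpy as np

def gen_log_mean(x, y, s):
    """Generalised logarithmic mean L^[s](x, y) of order s for 0 < x, y (finite s, cf. (2))."""
    if x == y:
        return float(x)
    if s == -1:
        return (y - x) / (np.log(y) - np.log(x))             # logarithmic mean
    if s == 0:
        return (1 / np.e) * (y**y / x**x) ** (1 / (y - x))   # identric mean
    return ((y**(s + 1) - x**(s + 1)) / ((s + 1) * (y - x))) ** (1 / s)

# s = 1 gives the arithmetic mean and s = -2 the geometric mean
print(gen_log_mean(1.0, 4.0, 1), gen_log_mean(1.0, 4.0, -2))  # 2.5, 2.0
```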

We use the following notation for Lebesgue integrable functions: for any Lebesgue integrable function $g$ on $[a,b]$, we define, for $a\le x\le y\le b$,

$$\|g\|_{[x,y],p}:=\left|\int_x^y |g(s)|^p\,ds\right|^{1/p}\quad\text{if } p\ge 1 \text{ and } g\in L_p[a,b];$$

and for $g\in L_\infty[a,b]$, we denote $\|g\|_{[x,y],\infty}:=\operatorname{ess\,sup}_{s\in[x,y]}|g(s)|$.

We recall that a function $f:[a,b]\to\mathbb{R}$ is absolutely continuous on $[a,b]$ if and only if it is differentiable almost everywhere in $[a,b]$, the derivative $f'$ is Lebesgue integrable on this interval and $f(y)-f(x)=\int_x^y f'(t)\,dt$ for any $x,y\in[a,b]$.

2.2 Previous results

In a recent paper by Dragomir et al. [12], the authors provide sharp upper and lower bounds for the Jensen divergence for various classes of functions Φ. Some results are stated in the following.

Theorem 2 (Dragomir et al. [12])

Assume that $\Phi:[a,b]\to\mathbb{R}$ is absolutely continuous on $[a,b]$. Then we have the bounds

$$|J_{n,\Phi}(x,y)|\le\frac{1}{2}\times\begin{cases}\sum_{i=1}^n |y_i-x_i|\,\|\Phi'\|_{[x_i,y_i],\infty} & \text{if } \Phi'\in L_\infty[a,b],\\[1ex] \sum_{i=1}^n |y_i-x_i|^{\frac{p-1}{p}}\,\|\Phi'\|_{[x_i,y_i],p} & \text{if } \Phi'\in L_p[a,b],\ p>1,\\[1ex] \sum_{i=1}^n \|\Phi'\|_{[x_i,y_i],1}\end{cases}$$
$$\le\frac{1}{2}\times\begin{cases}\|\Phi'\|_{[a,b],\infty}\sum_{i=1}^n |y_i-x_i| & \text{if } \Phi'\in L_\infty[a,b],\\[1ex] \|\Phi'\|_{[a,b],p}\sum_{i=1}^n |y_i-x_i|^{\frac{p-1}{p}} & \text{if } \Phi'\in L_p[a,b],\ p>1,\\[1ex] n\,\|\Phi'\|_{[a,b],1}\end{cases}$$
(3)

for any $x=(x_1,\dots,x_n),\ y=(y_1,\dots,y_n)\in[a,b]^n$.

Moreover, if the modulus of the derivative is convex, then we have the inequality

$$|J_{n,\Phi}(x,y)|\le\frac{1}{4}\sum_{i=1}^n |y_i-x_i|\left[\left|\Phi'\!\left(\frac{x_i+y_i}{2}\right)\right|+\frac{|\Phi'(x_i)|+|\Phi'(y_i)|}{2}\right]\le\frac{1}{4}\sum_{i=1}^n |y_i-x_i|\bigl[|\Phi'(x_i)|+|\Phi'(y_i)|\bigr]\ \Bigl(\le\|\Phi'\|_{[a,b],\infty}\,\delta(x,y)\Bigr)$$
(4)

for any $x=(x_1,\dots,x_n),\ y=(y_1,\dots,y_n)\in[a,b]^n$, where $\delta(x,y)=\frac{1}{2}\sum_{i=1}^n |y_i-x_i|$.

The constant 1/4 is best possible in both inequalities.

Additional assumptions on Φ lead to the following results.

Theorem 3 (Dragomir et al. [12])

Let $\Phi:[a,b]\to\mathbb{R}$ be a differentiable function on the interval $[a,b]$ of real numbers.

(i) If the derivative $\Phi'$ is of bounded variation on $[a,b]$, then

$$|J_{n,\Phi}(x,y)|\le\frac{1}{4}\sum_{i=1}^n |y_i-x_i|\left|\bigvee_{x_i}^{y_i}(\Phi')\right|\le\frac{1}{4}\bigvee_a^b(\Phi')\sum_{i=1}^n |y_i-x_i|=\frac{1}{2}\bigvee_a^b(\Phi')\,\delta(x,y)$$
(5)

for any $x=(x_1,\dots,x_n),\ y=(y_1,\dots,y_n)\in[a,b]^n$.

The constant 1/4 is best possible in both inequalities (5).

(ii) If the derivative $\Phi'$ is $K$-Lipschitzian on $[a,b]$ with the constant $K>0$, then

$$|J_{n,\Phi}(x,y)|\le\frac{1}{8}K\sum_{i=1}^n (y_i-x_i)^2=\frac{1}{2}K\,J_{n,2}(x,y)$$
(6)

for any $x=(x_1,\dots,x_n),\ y=(y_1,\dots,y_n)\in[a,b]^n$, where

$$J_{n,2}(x,y)=\frac{1}{4}\sum_{i=1}^n (y_i-x_i)^2.$$

The constant 1/8 is best possible in (6).

Motivated by these results, in the next sections we state bounds for $J_{n,\Phi}$ for twice differentiable functions Φ whose second derivative satisfies certain boundedness conditions.

3 Approximating with Jensen divergence for power functions

In this section we provide some bounds for the generalised Jensen divergence for a twice differentiable function $\Phi:I\subseteq\mathbb{R}\to\mathbb{R}$ whose second derivative $\Phi''$ is bounded above and below in the following sense:

$$\gamma\le\frac{t^{2-p}}{p(p-1)}\Phi''(t)\le\Gamma$$
(7)

for some $\gamma<\Gamma$, some $p\in(-\infty,0)\cup(1,\infty)$ and all $t\in I$; and

$$\delta\le\frac{t^{2-q}}{q(q-1)}\Phi''(t)\le\Delta$$
(8)

for some $\delta<\Delta$, some $q\in(0,1)$ and all $t\in I$. These conditions enable us to provide approximations of the Jensen divergence for Φ via the functions $f(t)=t^p$ for $p\neq 0,1$ and $t\in\mathbb{R}_+$, i.e.

$$J_{n,(\cdot)^p}(x,y)=\sum_{i=1}^n\left[\frac{1}{2}\left(x_i^p+y_i^p\right)-\left(\frac{x_i+y_i}{2}\right)^p\right].$$

Lemma 4 (Dragomir et al. [12])

Let $\Phi:[a,b]\to\mathbb{R}$ be a differentiable function and let the derivative $\Phi'$ be absolutely continuous. Then

$$|J_{n,\Phi}(x,y)|\le\begin{cases}\frac{1}{8}\|\Phi''\|_{[a,b],\infty}\sum_{i=1}^n (y_i-x_i)^2 & \text{if } \Phi''\in L_\infty[a,b],\\[1ex] \frac{\|\Phi''\|_{[a,b],r}}{(r+1)^{1/r}\,2^{1+1/r}}\sum_{i=1}^n |y_i-x_i|^{1+1/r} & \text{if } \Phi''\in L_r[a,b],\ r>1.\end{cases}$$
(9)

We refer to [12] for the proof of the above lemma.

Lemma 5 Let $\Phi:[a,b]\to\mathbb{R}$ be a twice differentiable function and $0<a<b<\infty$. If $\Phi''$ satisfies (7), then

$$\left\|\left(\Phi-\frac{\gamma+\Gamma}{2}(\cdot)^p\right)''\right\|_{[a,b],\infty}\le p(p-1)\,\frac{\Gamma-\gamma}{2}\max\bigl\{a^{p-2},b^{p-2}\bigr\};$$
(10)

and

$$\left\|\left(\Phi-\frac{\gamma+\Gamma}{2}(\cdot)^p\right)''\right\|_{[a,b],r}\le p(p-1)\,\frac{\Gamma-\gamma}{2}\left(L^{[(p-2)r]}(a,b)\right)^{p-2},\quad r>1,$$
(11)

where $L^{[s]}$ is the generalised logarithmic mean of order $s$.

Proof Note that condition (7) is equivalent to

$$\gamma\,p(p-1)\,t^{p-2}\le\Phi''(t)\le\Gamma\,p(p-1)\,t^{p-2}$$

since $p(p-1)>0$. This is also equivalent to

$$\left|\Phi''(t)-p(p-1)\frac{\gamma+\Gamma}{2}\,t^{p-2}\right|\le p(p-1)\frac{\Gamma-\gamma}{2}\,t^{p-2}.$$
(12)

We take the supremum of both sides to obtain (10). For $r>1$, we note that (12) implies

$$\left\|\Phi''-p(p-1)\frac{\gamma+\Gamma}{2}(\cdot)^{p-2}\right\|_{[a,b],r}=\left(\int_a^b\left|\Phi''(t)-p(p-1)\frac{\gamma+\Gamma}{2}t^{p-2}\right|^r dt\right)^{1/r}\le p(p-1)\frac{\Gamma-\gamma}{2}\left(\int_a^b t^{r(p-2)}\,dt\right)^{1/r}=p(p-1)\frac{\Gamma-\gamma}{2}\left(L^{[r(p-2)]}(a,b)\right)^{p-2},$$

which proves (11). □

Theorem 6 Let $\Phi:[a,b]\to\mathbb{R}$ be a twice differentiable function and $0<a<b<\infty$. If $\Phi''$ satisfies (7), then

$$\left|J_{n,\Phi}(x,y)-\frac{\gamma+\Gamma}{2}J_{n,(\cdot)^p}(x,y)\right|\le\begin{cases}\frac{1}{16}\,p(p-1)(\Gamma-\gamma)\max\bigl\{a^{p-2},b^{p-2}\bigr\}\sum_{i=1}^n (y_i-x_i)^2 & \text{if } \Phi''\in L_\infty[a,b],\\[1ex] \frac{p(p-1)(\Gamma-\gamma)}{(r+1)^{1/r}\,2^{2+1/r}}\left(L^{[(p-2)r]}(a,b)\right)^{p-2}\sum_{i=1}^n |y_i-x_i|^{1+1/r} & \text{if } \Phi''\in L_r[a,b],\ r>1.\end{cases}$$

Proof Since any differentiable function is absolutely continuous, we may apply Lemma 4 to the auxiliary function $\Phi-\frac{\gamma+\Gamma}{2}(\cdot)^p$. Combining this with Lemma 5, we have

$$\left|J_{n,\Phi}(x,y)-\frac{\gamma+\Gamma}{2}J_{n,(\cdot)^p}(x,y)\right|\le\begin{cases}\frac{1}{8}\left\|\left(\Phi-\frac{\gamma+\Gamma}{2}(\cdot)^p\right)''\right\|_{[a,b],\infty}\sum_{i=1}^n (y_i-x_i)^2,\\[1ex] \frac{1}{(r+1)^{1/r}\,2^{1+1/r}}\left\|\left(\Phi-\frac{\gamma+\Gamma}{2}(\cdot)^p\right)''\right\|_{[a,b],r}\sum_{i=1}^n |y_i-x_i|^{1+1/r},\end{cases}$$
$$\le\begin{cases}\frac{1}{8}\,p(p-1)\frac{\Gamma-\gamma}{2}\max\bigl\{a^{p-2},b^{p-2}\bigr\}\sum_{i=1}^n (y_i-x_i)^2,\\[1ex] \frac{1}{(r+1)^{1/r}\,2^{1+1/r}}\,p(p-1)\frac{\Gamma-\gamma}{2}\left(L^{[(p-2)r]}(a,b)\right)^{p-2}\sum_{i=1}^n |y_i-x_i|^{1+1/r},\end{cases}$$

as desired. □
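The bound in Theorem 6 can be checked numerically. Below is a rough sanity check of the first ($L_\infty$) branch for the particular choices $\Phi(t)=e^t$, $p=2$ and $[a,b]=[0.1,1]$; these choices, the random sample and the helper `J` are our own, not the authors'.

```python
import numpy as np

# Check the L_infty branch of Theorem 6 for Phi(t) = exp(t), p = 2 on [a, b] = [0.1, 1].
a, b, p = 0.1, 1.0, 2
gamma = np.exp(a) / (p * (p - 1))   # min over [a,b] of t^(2-p) Phi''(t) / (p(p-1)) = exp(t)/2
Gamma = np.exp(b) / (p * (p - 1))   # max of the same quantity

def J(phi, u, v):
    """Generalised Jensen divergence (1)."""
    return np.sum(0.5 * (phi(u) + phi(v)) - phi(0.5 * (u + v)))

rng = np.random.default_rng(0)
x = rng.uniform(a, b, size=50)
y = rng.uniform(a, b, size=50)

lhs = abs(J(np.exp, x, y) - 0.5 * (gamma + Gamma) * J(lambda t: t**p, x, y))
rhs = p * (p - 1) * (Gamma - gamma) * max(a**(p - 2), b**(p - 2)) * np.sum((y - x)**2) / 16
print(lhs <= rhs, lhs, rhs)   # the inequality should hold
```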

We omit the proofs for the next results as they follow similarly to those of Lemma 5 and Theorem 6.

Lemma 7 Let $\Phi:[a,b]\to\mathbb{R}$ be a twice differentiable function and $0<a<b<\infty$. If $\Phi''$ satisfies (8), then

$$\left\|\left(\Phi-\frac{\delta+\Delta}{2}(\cdot)^q\right)''\right\|_{[a,b],\infty}\le q(1-q)\,\frac{\Delta-\delta}{2}\max\bigl\{a^{q-2},b^{q-2}\bigr\};$$
(13)

and

$$\left\|\left(\Phi-\frac{\delta+\Delta}{2}(\cdot)^q\right)''\right\|_{[a,b],r}\le q(1-q)\,\frac{\Delta-\delta}{2}\left(L^{[r(q-2)]}(a,b)\right)^{q-2},\quad r>1,$$
(14)

where $L^{[s]}$ is the generalised logarithmic mean of order $s$.

Theorem 8 Let $\Phi:[a,b]\to\mathbb{R}$ be a twice differentiable function and $0<a<b<\infty$. If $\Phi''$ satisfies (8), then

$$\left|J_{n,\Phi}(x,y)-\frac{\delta+\Delta}{2}J_{n,(\cdot)^q}(x,y)\right|\le\begin{cases}\frac{1}{16}\,q(1-q)(\Delta-\delta)\max\bigl\{a^{q-2},b^{q-2}\bigr\}\sum_{i=1}^n (y_i-x_i)^2 & \text{if } \Phi''\in L_\infty[a,b],\\[1ex] \frac{q(1-q)(\Delta-\delta)}{(r+1)^{1/r}\,2^{2+1/r}}\left(L^{[(q-2)r]}(a,b)\right)^{q-2}\sum_{i=1}^n |y_i-x_i|^{1+1/r} & \text{if } \Phi''\in L_r[a,b],\ r>1.\end{cases}$$

4 Further approximations

In this section, we present approximations for $J_{n,\Phi}$ by utilising the family of Jensen divergences

$$J_{n,\alpha}(x,y):=(\alpha-1)^{-1}\sum_{i=1}^n\left[\frac{1}{2}\left(x_i^\alpha+y_i^\alpha\right)-\left(\frac{x_i+y_i}{2}\right)^\alpha\right],\quad\alpha\neq 1;$$
(15)

and

$$J_{n,1}(x,y):=\frac{1}{2}\sum_{i=1}^n\left[x_i\log x_i + y_i\log y_i - (x_i+y_i)\log\left(\frac{x_i+y_i}{2}\right)\right].$$
(16)

Although $J_{n,\alpha}$ is defined for $\alpha\in\mathbb{R}_+$ in [2], we may allow α to be negative in (15), and for $\alpha=0$ we define

$$J_{n,0}(x,y):=\sum_{i=1}^n\left[\log\left(\frac{x_i+y_i}{2}\right)-\frac{1}{2}\bigl(\log x_i+\log y_i\bigr)\right].$$
(17)

Theorem 9 Let $\Phi:I\subseteq(0,\infty)\to\mathbb{R}$ be a twice differentiable function on I. If $\Phi''$ satisfies (7), then

$$\gamma(p-1)J_{n,p}(x,y)\le J_{n,\Phi}(x,y)\le\Gamma(p-1)J_{n,p}(x,y)\quad\text{for any } x,y\in I^n.$$
(18)

Furthermore, if $\Phi''$ satisfies (8), then

$$\delta(q-1)J_{n,q}(x,y)\ge J_{n,\Phi}(x,y)\ge\Delta(q-1)J_{n,q}(x,y)\quad\text{for any } x,y\in I^n.$$
(19)

Proof We consider the auxiliary function $g_{\gamma,p}:I\to\mathbb{R}$ defined by $g_{\gamma,p}(t)=\Phi(t)-\gamma t^p$, where $p\in(-\infty,0)\cup(1,\infty)$. We observe that $g_{\gamma,p}$ is twice differentiable on I and the second derivative is given by

$$g_{\gamma,p}''(t)=p(p-1)\,t^{p-2}\left[\frac{t^{2-p}}{p(p-1)}\Phi''(t)-\gamma\right]\quad\text{for any } t\in I.$$

Utilising condition (7), and since $p(p-1)t^{p-2}>0$ for $t\in I$, we deduce that $g_{\gamma,p}''(t)\ge 0$ for any $t\in I$, which means that $g_{\gamma,p}$ is convex on I. Since for a convex function $g:I\to\mathbb{R}$ we have $J_{n,g}(x,y)\ge 0$, we can write

$$0\le J_{n,g_{\gamma,p}}(x,y)=\sum_{i=1}^n\left[\frac{g_{\gamma,p}(x_i)+g_{\gamma,p}(y_i)}{2}-g_{\gamma,p}\!\left(\frac{x_i+y_i}{2}\right)\right]=\sum_{i=1}^n\left[\frac{\Phi(x_i)+\Phi(y_i)}{2}-\Phi\!\left(\frac{x_i+y_i}{2}\right)\right]-\gamma\sum_{i=1}^n\left[\frac{x_i^p+y_i^p}{2}-\left(\frac{x_i+y_i}{2}\right)^p\right]=J_{n,\Phi}(x,y)-\gamma(p-1)J_{n,p}(x,y),$$

and the first inequality in (18) is proved. To prove the second inequality in (18), we consider the auxiliary function $g_{\Gamma,p}:I\to\mathbb{R}$ with $g_{\Gamma,p}(t)=\Gamma t^p-\Phi(t)$, for which we perform a similar argument; we omit the details.

Now, if $q\in(0,1)$, we consider the auxiliary function $\psi_{\delta,q}:I\to\mathbb{R}$ with $\psi_{\delta,q}(t)=\Phi(t)-\delta t^q$; then $\psi_{\delta,q}$ is twice differentiable and

$$\psi_{\delta,q}''(t)=q(q-1)\,t^{q-2}\left[\frac{t^{2-q}\Phi''(t)}{q(q-1)}-\delta\right]\le 0\quad\text{for any } t\in I$$

since $q\in(0,1)$. Therefore $\psi_{\delta,q}$ is concave on I, which implies that $J_{n,\psi_{\delta,q}}(x,y)\le 0$ for any $x,y\in I^n$ and, as above, we obtain

$$J_{n,\Phi}(x,y)\le\delta\sum_{i=1}^n\left[\frac{x_i^q+y_i^q}{2}-\left(\frac{x_i+y_i}{2}\right)^q\right]=\delta(q-1)J_{n,q}(x,y).$$

The second inequality in (19) follows by considering the auxiliary function $\psi_{\Delta,q}:I\to\mathbb{R}$ with $\psi_{\Delta,q}(t)=\Delta t^q-\Phi(t)$, and we omit the details. This completes the proof. □
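A quick numerical illustration of (18) may be helpful. The sketch below checks the two-sided bound for the particular choices $\Phi(t)=e^t$, $p=2$ and $I=[0.1,1]$; this setup is ours and is only meant to exercise the theorem, not to reproduce anything from the paper.

```python
import numpy as np

# Illustration of (18): Phi(t) = exp(t), p = 2 on I = [0.1, 1], where condition (7)
# holds with gamma = exp(a)/2 and Gamma = exp(b)/2.
a, b, p = 0.1, 1.0, 2
gamma, Gamma = np.exp(a) / 2, np.exp(b) / 2

def J(phi, u, v):
    return np.sum(0.5 * (phi(u) + phi(v)) - phi(0.5 * (u + v)))

rng = np.random.default_rng(1)
x = rng.uniform(a, b, size=20)
y = rng.uniform(a, b, size=20)

J_phi = J(np.exp, x, y)
J_p = J(lambda t: t**p, x, y) / (p - 1)   # J_{n,p} as in (15)
print(gamma * (p - 1) * J_p <= J_phi <= Gamma * (p - 1) * J_p)  # expect True
```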

Theorem 10 Let $\Phi:I\subseteq(0,\infty)\to\mathbb{R}$ be a twice differentiable function on I. If there exist constants $\omega<\Omega$ such that

$$\omega\le t^2\Phi''(t)\le\Omega\quad\text{for any } t\in I,$$
(20)

then we have the bounds

$$\omega\,J_{n,0}(x,y)\le J_{n,\Phi}(x,y)\le\Omega\,J_{n,0}(x,y)\quad\text{for any } x,y\in I^n.$$
(21)

If there exist constants $\lambda<\Lambda$ such that

$$\lambda\le t\,\Phi''(t)\le\Lambda\quad\text{for any } t\in I,$$
(22)

then we have the bounds

$$\lambda\,J_{n,1}(x,y)\le J_{n,\Phi}(x,y)\le\Lambda\,J_{n,1}(x,y)\quad\text{for any } x,y\in I^n.$$
(23)

Proof Consider the auxiliary function $g_{\omega,0}:I\to\mathbb{R}$ with $g_{\omega,0}(t)=\Phi(t)+\omega\log t$. We observe that $g_{\omega,0}$ is twice differentiable and, by (20), $g_{\omega,0}''(t)=t^{-2}\bigl(t^2\Phi''(t)-\omega\bigr)\ge 0$ for any $t\in I$, so $g_{\omega,0}$ is a convex function on I. Therefore we have $J_{n,g_{\omega,0}}(x,y)\ge 0$ for any $x,y\in I^n$, which implies that

$$0\le J_{n,\Phi}(x,y)+\omega\sum_{i=1}^n\left[\frac{\log x_i+\log y_i}{2}-\log\left(\frac{x_i+y_i}{2}\right)\right]=J_{n,\Phi}(x,y)-\omega\,J_{n,0}(x,y)$$

and the first inequality in (21) is proved. Now consider the auxiliary function $g_{\Omega,0}:I\to\mathbb{R}$ with $g_{\Omega,0}(t)=-\Omega\log t-\Phi(t)$. Then $g_{\Omega,0}''(t)=t^{-2}\bigl(\Omega-t^2\Phi''(t)\bigr)$ for any $t\in I$, and by (20) it is a convex function on I. By similar arguments, we deduce the second inequality in (21).

To prove the second part of the theorem, consider the auxiliary function $g_{\lambda,1}:I\to\mathbb{R}$, $g_{\lambda,1}(t)=\Phi(t)-\lambda t\log t$. We observe that $g_{\lambda,1}$ is twice differentiable and $g_{\lambda,1}''(t)=\Phi''(t)-\frac{1}{t}\lambda$ for $t\in I$. Since by (22) we have $g_{\lambda,1}''(t)=t^{-1}\bigl(t\Phi''(t)-\lambda\bigr)\ge 0$ for all $t\in I$, we conclude that $g_{\lambda,1}$ is a convex function on I. The proof now follows along the lines outlined above, and the first part of (23) is proved. The second part of (23) also follows by employing the auxiliary function $g_{\Lambda,1}:I\to\mathbb{R}$, $g_{\Lambda,1}(t)=\Lambda t\log t-\Phi(t)$; this completes the proof. □

5 Applications to some elementary functions

We consider the approximations mentioned in Section 4 for some elementary functions.

We consider the function $\Phi(t)=e^{-t}$ for $t\in[a,b]\subseteq[0,1]$ and have the following bounds for all $x,y\in[a,b]^n$:

$$a^2 e^{-a}\,J_{n,0}(x,y)\le J_{n,\Phi}(x,y)\le b^2 e^{-b}\,J_{n,0}(x,y),$$
$$a\,e^{-a}\,J_{n,1}(x,y)\le J_{n,\Phi}(x,y)\le b\,e^{-b}\,J_{n,1}(x,y),$$
$$\frac{1}{2}e^{-b}\,J_{n,2}(x,y)\le J_{n,\Phi}(x,y)\le\frac{1}{2}e^{-a}\,J_{n,2}(x,y).$$

In what follows, we apply these bounds to the above function on the interval $[0.1,1]$, where $x=(0.2,0.25,0.3,\dots,1)$ and $y=(1,\dots,1)$ (cf. Figure 1).

Figure 1. Bounds for generalised Jensen divergence, $\Phi(t)=\exp(-t)$.
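The following Python sketch mirrors the Figure 1 setup with $\Phi(t)=e^{-t}$; the grid construction and variable names are our own, and the printed values only indicate how the three lower and upper approximations compare.

```python
import numpy as np

# Setup of Figure 1: Phi(t) = exp(-t) on [a, b] = [0.1, 1], x = (0.2, 0.25, ..., 1), y = (1, ..., 1).
a, b = 0.1, 1.0
x = np.arange(0.2, 1.0 + 1e-9, 0.05)
y = np.ones_like(x)

def J(phi, u, v):
    return np.sum(0.5 * (phi(u) + phi(v)) - phi(0.5 * (u + v)))

J_phi = J(lambda t: np.exp(-t), x, y)
J0 = J(lambda t: -np.log(t), x, y)                                        # J_{n,0}, cf. (17)
J1 = 0.5 * np.sum(x*np.log(x) + y*np.log(y) - (x+y)*np.log(0.5*(x+y)))    # J_{n,1}, cf. (16)
J2 = 0.25 * np.sum((y - x)**2)                                            # J_{n,2}

print("lower bounds:", a**2*np.exp(-a)*J0, a*np.exp(-a)*J1, 0.5*np.exp(-b)*J2)
print("J_{n,Phi}   :", J_phi)
print("upper bounds:", b**2*np.exp(-b)*J0, b*np.exp(-b)*J1, 0.5*np.exp(-a)*J2)
```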

Discussion In this example, the best lower approximation (amongst the three) is given by $\frac{1}{2}e^{-1}J_{n,2}(x,y)$, and the best upper approximation is given by $e^{-1}J_{n,1}(x,y)$, where $x=(0.2,0.25,0.3,\dots,1)$ and $y=(1,\dots,1)$. However, it remains an open question whether this is true in general.

We consider the Havrda-Charvát function

$$\Phi_\alpha(t)=\begin{cases}(\alpha-1)^{-1}(t^\alpha-t) & \text{if } \alpha\neq 1,\\ t\log(t) & \text{if } \alpha=1.\end{cases}$$

For $\alpha=1$, we have the following bounds for all $x,y\in[a,b]^n$:

$$a\,J_{n,0}(x,y)\le J_{n,\Phi}(x,y)\le b\,J_{n,0}(x,y)\quad\text{for } [a,b]\subset[0,\infty);$$
$$a\,J_{n,1}(x,y)\le J_{n,\Phi}(x,y)\le b\,J_{n,1}(x,y)\quad\text{for } 0\le a\le 1\le b<\infty.$$

We have the following bounds for all $x,y\in[a,b]^n$:

$$\alpha\,a^{\alpha}\,J_{n,0}(x,y)\le J_{n,\Phi}(x,y)\le\alpha\,b^{\alpha}\,J_{n,0}(x,y)\quad\text{for } \alpha\neq 0,\ \alpha\neq 1\text{ and } [a,b]\subset[0,\infty),$$
$$\alpha\,a^{\alpha-1}\,J_{n,1}(x,y)\le J_{n,\Phi}(x,y)\le\alpha\,b^{\alpha-1}\,J_{n,1}(x,y)\quad\text{for } \alpha>0\text{ and } [a,b]\subset[0,\infty).$$

In Figure 2, we apply these bounds to the above function on the interval $[0.1,1]$, where $x=(0.2,0.201,0.202,\dots,1)$, $y=(1,\dots,1)$ and $\alpha=3/2$.

Figure 2. Bounds for generalised Jensen divergence, Havrda-Charvát function.
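The two bounds above can likewise be checked on the Figure 2 configuration. The sketch below does this for α = 3/2 on [0.1, 1]; the grid and helper names are our own choices, intended only as a rough verification.

```python
import numpy as np

# Check of the alpha != 1 bounds for the Havrda-Charvat function, alpha = 3/2 on [a, b] = [0.1, 1],
# on the Figure 2 grid x = (0.2, 0.201, ..., 1), y = (1, ..., 1).
a, b, alpha = 0.1, 1.0, 1.5
x = np.arange(0.2, 1.0 + 1e-9, 0.001)
y = np.ones_like(x)

def J(phi, u, v):
    return np.sum(0.5 * (phi(u) + phi(v)) - phi(0.5 * (u + v)))

phi_alpha = lambda t: (t**alpha - t) / (alpha - 1)
J_phi = J(phi_alpha, x, y)
J0 = J(lambda t: -np.log(t), x, y)        # J_{n,0}
J1 = J(lambda t: t * np.log(t), x, y)     # J_{n,1}

print(alpha * a**alpha * J0 <= J_phi <= alpha * b**alpha * J0)              # expect True
print(alpha * a**(alpha-1) * J1 <= J_phi <= alpha * b**(alpha-1) * J1)      # expect True
```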

We also have, for all $x,y\in[a,b]^n$,

$$\gamma(p-1)\,J_{n,p}(x,y)\le J_{n,\Phi}(x,y)\le\Gamma(p-1)\,J_{n,p}(x,y)$$

for $p\in(-\infty,0)\cup(1,\infty)$, where

$$\Gamma=\frac{\alpha\,b^{\alpha-p}}{p(p-1)}\quad\text{and}\quad\gamma=\frac{\alpha\,a^{\alpha-p}}{p(p-1)}$$

for $\alpha\ge p$ and $[a,b]\subset[0,\infty)$.

In Figure 3, we apply these bounds to the above function on the interval $[0.1,1]$, where $x=(0.2,0.201,0.202,\dots,1)$, $y=(1,\dots,1)$, $\alpha=3$ and $p=3/2,2$.

Figure 3. Bounds for generalised Jensen divergence, Havrda-Charvát function.

Similarly, we have, for all $x,y\in[a,b]^n$,

$$\delta(q-1)\,J_{n,q}(x,y)\ge J_{n,\Phi}(x,y)\ge\Delta(q-1)\,J_{n,q}(x,y)$$

for $q\in(0,1)$ and $\alpha>1$, where

$$\Delta=\frac{\alpha\,a^{\alpha-q}}{q(q-1)}\quad\text{and}\quad\delta=\frac{\alpha\,b^{\alpha-q}}{q(q-1)}$$

for $[a,b]\subset[0,\infty)$. In Figure 4, we apply these bounds to the above function on the interval $[0.1,1]$, where $x=(0.2,0.201,0.202,\dots,1)$, $y=(1,\dots,1)$, $q=1/2$ and $\alpha=3$.

Figure 4. Bounds for generalised Jensen divergence, Havrda-Charvát function.

Discussion In this example, the best lower approximation (amongst the five) is given by $2(0.1)^{3/2}J_{n,3/2}(x,y)$, and the best upper approximation is given by $(3/2)J_{n,1}(x,y)$, where $x=(0.2,0.201,0.202,\dots,1)$ and $y=(1,\dots,1)$. However, it remains an open question whether this is true in general.

References

1. Dragomir SS: Some reverses of the Jensen inequality with applications. RGMIA Research Report Collection (Online) 2011, 14: Article ID v14a72.

2. Burbea J, Rao CR: On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inf. Theory 1982, 28(3):489–495. doi:10.1109/TIT.1982.1056497

3. Havrda ME, Charvát F: Quantification method of classification processes: concept of structural α-entropy. Kybernetika 1967, 3: 30–35.

4. Lin J: Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37(1):145–151. doi:10.1109/18.61115

5. Grosse I, Bernaola-Galvan P, Carpena P, Roman-Roldan R, Oliver J, Stanley HE: Analysis of symbolic sequences using the Jensen-Shannon divergence. Phys. Rev. E, Stat. Nonlinear Soft Matter Phys. 2002, 65(4): Article ID 041905. doi:10.1103/PhysRevE.65.041905

6. Kullback S, Leibler RA: On information and sufficiency. Ann. Math. Stat. 1951, 22: 79–86. doi:10.1214/aoms/1177729694

7. Csiszar I: Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hung. 1967, 2: 299–318.

8. Shannon CE: A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27: 379–423, 623–656.

9. Menendez ML, Pardo JA, Pardo L: Some statistical applications of generalized Jensen difference divergence measures for fuzzy information systems. Fuzzy Sets Syst. 1992, 52: 169–180. doi:10.1016/0165-0114(92)90047-8

10. Arvey AJ, Azad RK, Raval A, Lawrence JG: Detection of genomic islands via segmental genome heterogeneity. Nucleic Acids Res. 2009, 37(16):5255–5266. doi:10.1093/nar/gkp576

11. Gómez RM, Rosso OA, Berretta R, Moscato P: Uncovering molecular biomarkers that correlate cognitive decline with the changes of Hippocampus’ gene expression profiles in Alzheimer’s disease. PLoS ONE 2010, 5(4): Article ID e10153. doi:10.1371/journal.pone.0010153

12. Dragomir SS, Dragomir NM, Sherwell D: Sharp bounds for the Jensen divergence with applications. RGMIA Research Report Collection (Online) 2011, 14: Article ID v14a47.

13. Bullen PS: Handbook of Means and Their Inequalities. Mathematics and Its Applications 560. Kluwer Academic, Dordrecht; 2003.


Author information


Corresponding author

Correspondence to Eder Kikianty.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

EK, SSD, ITD and DS contributed equally in all stages of writing the paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Kikianty, E., Dragomir, S.S., Dintoe, I.T. et al. Approximations of Jensen divergence for twice differentiable functions. J Inequal Appl 2013, 267 (2013). https://doi.org/10.1186/1029-242X-2013-267


Keywords