
Inequalities involving Dresher variance mean

Abstract

Let $p$ be a real density function defined on a compact subset $\Omega$ of $\mathbb{R}^{m}$, and let $E(f,p)=\int_{\Omega}pf\,d\omega$ be the expectation of $f$ with respect to the density function $p$. In this paper, we define a one-parameter extension $\operatorname{Var}_{\gamma}(f,p)$ of the usual variance $\operatorname{Var}(f,p)=E(f^{2},p)-E^{2}(f,p)$ of a positive continuous function $f$ defined on $\Omega$. By means of this extension, a two-parameter mean $V_{r,s}(f,p)$, called the Dresher variance mean, is then defined, and its properties are discussed. In particular, we establish a Dresher variance mean inequality $\min_{t\in\Omega}\{f(t)\}\le V_{r,s}(f,p)\le\max_{t\in\Omega}\{f(t)\}$, that is to say, the Dresher variance mean $V_{r,s}(f,p)$ is a true mean of $f$. We also establish a Dresher-type inequality $V_{r,s}(f,p)\ge V_{r^{*},s^{*}}(f,p)$ under appropriate conditions on $r$, $s$, $r^{*}$, $s^{*}$; and finally, a V-E inequality $V_{r,s}(f,p)\ge\left(\frac{s}{r}\right)^{1/(r-s)}E(f,p)$ that shows that $V_{r,s}(f,p)$ can be compared with $E(f,p)$. We are also able to illustrate the uses of these results in space science.

MSC:26D15, 26E60, 62J10.

1 Introduction and main results

As indicated in the monograph [1], the concept of mean is basic in the theory of inequalities and its applications. Indeed, there are many inequalities involving different types of mean in [1-18], and a great number of them have been used in mathematics and other natural sciences.

Dresher in [14], by means of moment space techniques, proved the following inequality: If $\rho\ge 1\ge\sigma\ge 0$, $f,g\ge 0$, and $\phi$ is a distribution function, then

$$\left[\frac{\int(f+g)^{\rho}\,d\phi}{\int(f+g)^{\sigma}\,d\phi}\right]^{1/(\rho-\sigma)}\le\left(\frac{\int f^{\rho}\,d\phi}{\int f^{\sigma}\,d\phi}\right)^{1/(\rho-\sigma)}+\left(\frac{\int g^{\rho}\,d\phi}{\int g^{\sigma}\,d\phi}\right)^{1/(\rho-\sigma)}.$$

This result is referred to as Dresher's inequality by Danskin [15], Beckenbach and Bellman [16] (§24 in Ch. 1) and Hu [18] (p.21). Note that if we define

$$\bar{E}(f,\phi)=\int f\,d\phi,$$

and

$$D_{r,s}(f,\phi)=\left(\frac{\bar{E}(f^{r},\phi)}{\bar{E}(f^{s},\phi)}\right)^{1/(r-s)},\quad r,s\in\mathbb{R},$$
(1)

then the above inequality can be rewritten as

$$D_{\rho,\sigma}(f+g,\phi)\le D_{\rho,\sigma}(f,\phi)+D_{\rho,\sigma}(g,\phi).$$

$D_{r,s}(f,\phi)$ is the well-known Dresher mean of the function $f$ (see [10, 11, 14-18]), which involves two parameters $r$ and $s$ and has applications in the theory of probability.

However, variance is also a crucial quantity in probability and statistics. It is, therefore, of interest to establish inequalities for various variances as well. In this paper, we introduce generalized 'variances' and establish several inequalities involving them. Although we could start out in a more general setting, for the sake of simplicity, we choose to consider the variance of a continuous function $f$ with respect to a weight function $p$ (including probability densities) defined on a closed and bounded domain $\Omega$ in $\mathbb{R}^{m}$, instead of a distribution function.

More precisely, unless stated otherwise, in all later discussions, let $\Omega$ be a fixed, nonempty, closed and bounded domain in $\mathbb{R}^{m}$ and let $p:\Omega\to(0,\infty)$ be a fixed function which satisfies $\int_{\Omega}p\,d\omega=1$. For any continuous function $f:\Omega\to\mathbb{R}$, we write

$$E(f,p)=\int_{\Omega}pf\,d\omega,$$

which may be regarded as the weighted mean of the function f with respect to the weight function p.

Recall that the standard variance (see [12] and [19]) of a random variable f with respect to a density function p is

$$\operatorname{Var}(f,p)=E(f^{2},p)-E^{2}(f,p).$$

We may, however, generalize this to the $\gamma$-variance of the function $f:\Omega\to(0,\infty)$ defined by

$$\operatorname{Var}_{\gamma}(f,p)=\begin{cases}\dfrac{2}{\gamma(\gamma-1)}\left[E(f^{\gamma},p)-E^{\gamma}(f,p)\right], & \gamma\neq 0,1,\\[2mm] 2\left[\ln E(f,p)-E(\ln f,p)\right], & \gamma=0,\\[2mm] 2\left[E(f\ln f,p)-E(f,p)\ln E(f,p)\right], & \gamma=1.\end{cases}$$

According to this definition, the $\gamma$-variance $\operatorname{Var}_{\gamma}(f,p)$ is a functional of the functions $f:\Omega\to(0,\infty)$ and $p:\Omega\to(0,\infty)$, and this definition is compatible with the generalized integral means studied elsewhere (see, e.g., [16]). Indeed, according to the power mean inequality (see, e.g., [19]), we may see that

$$\operatorname{Var}_{\gamma}(f,p)\ge 0,\quad\gamma\in\mathbb{R}.$$
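The nonnegativity above can be checked numerically. The sketch below is our own illustration, not part of the original argument: it approximates $E(\cdot,p)$ on $\Omega=[0,1]$ by the trapezoidal rule with the uniform density and the test function $f(t)=1+t$ (both our choices), and confirms that $\operatorname{Var}_{\gamma}(f,p)\ge 0$ as well as the continuity of $\gamma\mapsto\operatorname{Var}_{\gamma}(f,p)$ at the exceptional parameters $\gamma=0,1$.

```python
import numpy as np

def trap(y, t):
    # trapezoidal rule for the integral of y over the grid t
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

t = np.linspace(0.0, 1.0, 20001)
p = np.ones_like(t)        # uniform density on [0, 1], so that the integral of p is 1
f = 1.0 + t                # positive, non-constant test function

E = lambda g: trap(p * g, t)   # E(g, p)

def var_gamma(g):
    # the gamma-variance Var_gamma(f, p), including the two limit branches
    Ef = E(f)
    if g == 0:
        return 2.0 * (np.log(Ef) - E(np.log(f)))
    if g == 1:
        return 2.0 * (E(f * np.log(f)) - Ef * np.log(Ef))
    return 2.0 / (g * (g - 1)) * (E(f**g) - Ef**g)

for g in [-2.0, -0.5, 0.0, 0.5, 1.0, 1.5, 3.0]:
    assert var_gamma(g) >= 0.0          # Var_gamma(f, p) >= 0 for every gamma tested
# continuity at the exceptional parameters gamma = 0 and gamma = 1
assert abs(var_gamma(0.0) - var_gamma(1e-6)) < 1e-4
assert abs(var_gamma(1.0) - var_gamma(1.0 + 1e-6)) < 1e-4
```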

Let $f:\Omega\to(0,\infty)$ and $g:\Omega\to(0,\infty)$ be two continuous functions. We define

$$\operatorname{Cov}_{\gamma}(f,g)=E\{[f^{\gamma}-E^{\gamma}(f,p)]\ast[g^{\gamma}-E^{\gamma}(g,p)],p\}$$

to be the $\gamma$-covariance of the functions $f:\Omega\to(0,\infty)$ and $g:\Omega\to(0,\infty)$, where $\gamma\in\mathbb{R}$ and the operation $\ast:\mathbb{R}^{2}\to\mathbb{R}$ is defined as follows:

$$a\ast b=\begin{cases}\sqrt{ab}, & a\neq b,\ ab\ge 0,\\ -\sqrt{-ab}, & a\neq b,\ ab<0,\\ a, & a=b.\end{cases}$$

According to this definition, we get

$$\operatorname{Cov}_{0}(f,g)=0,\qquad\operatorname{Cov}_{1}(f,f)=0$$

and

$$\operatorname{Var}_{\gamma}(f,p)=\begin{cases}\dfrac{2}{\gamma(\gamma-1)}\operatorname{Cov}_{\gamma}(f,f), & \gamma\neq 0,1,\\[2mm] \lim_{\gamma\to 0}\dfrac{2}{\gamma(\gamma-1)}\operatorname{Cov}_{\gamma}(f,f), & \gamma=0,\\[2mm] \lim_{\gamma\to 1}\dfrac{2}{\gamma(\gamma-1)}\operatorname{Cov}_{\gamma}(f,f), & \gamma=1.\end{cases}$$

If we define

$$\operatorname{Abscov}_{\gamma}(f,g)=E\{|[f^{\gamma}-E^{\gamma}(f,p)]\ast[g^{\gamma}-E^{\gamma}(g,p)]|,p\}$$

to be the $\gamma$-absolute covariance of the functions $f:\Omega\to(0,\infty)$ and $g:\Omega\to(0,\infty)$, then we have

$$\left|\frac{\operatorname{Cov}_{\gamma}(f,g)}{\sqrt{\operatorname{Abscov}_{\gamma}(f,f)\operatorname{Abscov}_{\gamma}(g,g)}}\right|\le 1$$

for $\gamma\neq 0$, by the Cauchy inequality

$$\left|\int_{\Omega}pfg\,d\omega\right|\le\sqrt{\int_{\Omega}pf^{2}\,d\omega}\,\sqrt{\int_{\Omega}pg^{2}\,d\omega}.$$

Therefore, we can define the $\gamma$-correlation coefficient of the functions $f:\Omega\to(0,\infty)$ and $g:\Omega\to(0,\infty)$ as follows:

$$\rho_{\gamma}(f,g)=\begin{cases}\dfrac{\operatorname{Cov}_{\gamma}(f,g)}{\sqrt{\operatorname{Abscov}_{\gamma}(f,f)\operatorname{Abscov}_{\gamma}(g,g)}}, & \gamma\neq 0,\\[3mm] \lim_{\gamma\to 0}\dfrac{\operatorname{Cov}_{\gamma}(f,g)}{\sqrt{\operatorname{Abscov}_{\gamma}(f,f)\operatorname{Abscov}_{\gamma}(g,g)}}, & \gamma=0,\end{cases}$$

where $\rho_{\gamma}(f,g)\in[-1,1]$.

By means of $\operatorname{Var}_{\gamma}(f,p)$, we may then define another two-parameter mean. This new two-parameter mean $V_{r,s}(f,p)$ will be called the Dresher variance mean of the function $f$. It is motivated by (1) and [10, 11, 14-18] and is defined as follows. Given $(r,s)\in\mathbb{R}^{2}$ and a continuous function $f:\Omega\to(0,\infty)$, if $f$ is a constant function, say $f(x)=c$ for all $x\in\Omega$, we define the functional

$$V_{r,s}(f,p)=c,$$

and if f is not a constant function, we define the functional

$$V_{r,s}(f,p)=\begin{cases}\left[\dfrac{\operatorname{Var}_{r}(f,p)}{\operatorname{Var}_{s}(f,p)}\right]^{1/(r-s)}, & r\neq s,\\[3mm] \exp\left[\dfrac{E(f^{r}\ln f,p)-E^{r}(f,p)\ln E(f,p)}{E(f^{r},p)-E^{r}(f,p)}-\left(\dfrac{1}{r}+\dfrac{1}{r-1}\right)\right], & r=s\neq 0,1,\\[3mm] \exp\left\{\dfrac{E(\ln^{2}f,p)-\ln^{2}E(f,p)}{2\left[E(\ln f,p)-\ln E(f,p)\right]}+1\right\}, & r=s=0,\\[3mm] \exp\left\{\dfrac{E(f\ln^{2}f,p)-E(f,p)\ln^{2}E(f,p)}{2\left[E(f\ln f,p)-E(f,p)\ln E(f,p)\right]}-1\right\}, & r=s=1.\end{cases}$$

Since the function $f:\Omega\to(0,\infty)$ is continuous, the functions $pf^{\gamma}$ and $pf^{\gamma}\ln f$ are integrable for any $\gamma\in\mathbb{R}$. Thus $\operatorname{Var}_{\gamma}(f,p)$ and $V_{r,s}(f,p)$ are well defined. Since

$$\lim_{\gamma'\to\gamma}\operatorname{Var}_{\gamma'}(f,p)=\operatorname{Var}_{\gamma}(f,p),\qquad\lim_{(r',s')\to(r,s)}V_{r',s'}(f,p)=V_{r,s}(f,p),$$

$\operatorname{Var}_{\gamma}(f,p)$ is continuous with respect to $\gamma\in\mathbb{R}$, and $V_{r,s}(f,p)$ is continuous with respect to $(r,s)\in\mathbb{R}^{2}$.
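The continuity just noted can be observed numerically. In the hedged sketch below (our own illustration; discrete weights stand in for the density $p$, and all names are ours), the closed form of $V_{r,s}$ on the diagonal $r=s\neq 0,1$ agrees with the ratio form $[\operatorname{Var}_{r}/\operatorname{Var}_{s}]^{1/(r-s)}$ evaluated at a nearby parameter pair.

```python
import numpy as np

x = np.array([0.8, 1.1, 1.9, 2.7])   # sampled values of f (positive, non-constant)
p = np.array([0.1, 0.4, 0.3, 0.2])   # weights playing the role of the density p

E = lambda y: float(np.dot(p, y))    # discrete stand-in for E(., p)

def var_gamma(g):
    if g == 0:
        return 2 * (np.log(E(x)) - E(np.log(x)))
    if g == 1:
        return 2 * (E(x * np.log(x)) - E(x) * np.log(E(x)))
    return 2 / (g * (g - 1)) * (E(x**g) - E(x)**g)

def V(r, s):
    # Dresher variance mean, ratio branch (r != s)
    return (var_gamma(r) / var_gamma(s)) ** (1 / (r - s))

def V_diag(r):
    # closed form for the diagonal case r = s (r != 0, 1)
    num = E(x**r * np.log(x)) - E(x)**r * np.log(E(x))
    den = E(x**r) - E(x)**r
    return float(np.exp(num / den - (1 / r + 1 / (r - 1))))

r = 2.5
assert abs(V(r, r + 1e-6) - V_diag(r)) < 1e-4   # the two branches fit together
```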

We will explain why we are concerned with our one-parameter variance Var γ (f,p) and the two-parameter mean V r , s (f,p) by illustrating their uses in statistics and space science.

Before doing so, we first state three main theorems of our investigations.

Theorem 1 (Dresher variance mean inequality)

For any continuous function $f:\Omega\to(0,\infty)$, we have

$$\min_{t\in\Omega}\{f(t)\}\le V_{r,s}(f,p)\le\max_{t\in\Omega}\{f(t)\}.$$
(2)

Theorem 2 (Dresher-type inequality)

Let the function $f:\Omega\to(0,\infty)$ be continuous. If $(r,s)\in\mathbb{R}^{2}$, $(r^{*},s^{*})\in\mathbb{R}^{2}$, $\max\{r,s\}\ge\max\{r^{*},s^{*}\}$ and $\min\{r,s\}\ge\min\{r^{*},s^{*}\}$, then

$$V_{r,s}(f,p)\ge V_{r^{*},s^{*}}(f,p).$$
(3)

Theorem 3 (V-E inequality)

For any continuous function $f:\Omega\to(0,\infty)$ and any real numbers $r>s\ge 1$, we have

$$V_{r,s}(f,p)\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}E(f,p),$$
(4)

moreover, the coefficient $\left(\frac{s}{r}\right)^{\frac{1}{r-s}}$ in (4) is the best constant.

From Theorem 1, we know that $V_{r,s}(f,p)$ is a certain mean value of $f$. Theorem 2 is similar to the well-known Dresher inequality stated in Lemma 3 below (see, e.g., [2], p.74, and [10, 11]). By Theorem 2, we see that $V_{r,s}(f,p)$ and $V_{r^{*},s^{*}}(f,p)$ can be compared under appropriate conditions on $r$, $s$, $r^{*}$, $s^{*}$. Theorem 3 states a connection of $V_{r,s}(f,p)$ with the weighted mean $E(f,p)$.
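Theorems 1-3 can be illustrated on a small discrete example, where the integrals reduce to weighted sums. The following sketch is our own numerical check, not part of the paper; the sampled values and weights are arbitrary.

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0, 3.5])       # sampled values of f
p = np.array([0.25, 0.25, 0.25, 0.25])   # uniform weights

E = lambda y: float(np.dot(p, y))

def var_gamma(g):
    if g == 0:
        return 2 * (np.log(E(x)) - E(np.log(x)))
    if g == 1:
        return 2 * (E(x * np.log(x)) - E(x) * np.log(E(x)))
    return 2 / (g * (g - 1)) * (E(x**g) - E(x)**g)

def V(r, s):
    return (var_gamma(r) / var_gamma(s)) ** (1 / (r - s))

# Theorem 1: V_{r,s} lies between min f and max f
for (r, s) in [(3.0, 2.0), (2.0, 0.5), (-1.0, 4.0)]:
    assert x.min() <= V(r, s) <= x.max()

# Theorem 2: raising both parameters raises the mean
assert V(4.0, 3.0) >= V(3.0, 2.0) >= V(2.5, 1.5)

# Theorem 3 (V-E inequality) with r > s >= 1
r, s = 3.0, 2.0
assert V(r, s) >= (s / r) ** (1 / (r - s)) * E(x)
```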

Let $\Omega$ be a fixed, nonempty, closed and bounded domain in $\mathbb{R}^{m}$, and let $p=p(X)$ be the density function with support in $\Omega$ for the random vector $X=(x_1,x_2,\dots,x_m)$. For any function $f:\Omega\to(0,\infty)$, the mean of the random variable $f(X)$ is

$$E[f(X)]=\int_{\Omega}pf\,d\omega=E(f,p);$$

moreover, its variance is

$$\operatorname{Var}[f(X)]=E[f^{2}(X)]-E^{2}[f(X)]=\int_{\Omega}p\{f-E[f(X)]\}^{2}\,d\omega=\operatorname{Var}_{2}(f,p).$$

Therefore,

$$\operatorname{Var}_{\gamma}[f(X)]=\operatorname{Var}_{\gamma}(f,p),\quad\gamma\in\mathbb{R}$$

may be regarded as a γ-variance and

$$V_{r,s}[f(X)]=V_{r,s}(f,p),\quad(r,s)\in\mathbb{R}^{2}$$

may be regarded as a Dresher variance mean of the random variable f(X).

Note that by Theorem 1, we have

$$\min_{X\in\Omega}\{f(X)\}\le V_{r,s}[f(X)]\le\max_{X\in\Omega}\{f(X)\},\quad(r,s)\in\mathbb{R}^{2};$$
(5)

and by Theorem 2, we see that for any $r,s,r^{*},s^{*}\in\mathbb{R}$ such that

$$\max\{r,s\}\ge\max\{r^{*},s^{*}\}\quad\text{and}\quad\min\{r,s\}\ge\min\{r^{*},s^{*}\},$$

then

$$V_{r,s}[f(X)]\ge V_{r^{*},s^{*}}[f(X)];$$
(6)

and by Theorem 3, if $r>s\ge 1$, then

$$\frac{\operatorname{Var}_{r}[f(X)]}{\operatorname{Var}_{s}[f(X)]}\ge\frac{s}{r}\,E^{\,r-s}[f(X)],$$
(7)

where the coefficient s/r is the best constant.

In the above results, $\Omega$ is a closed and bounded domain of $\mathbb{R}^{m}$. However, we remark that our results still hold if $\Omega$ is an unbounded domain of $\mathbb{R}^{m}$, or if some values of $f$ are 0, as long as the integrals in Theorems 1-3 are convergent. Such extended results can be obtained by standard techniques in real analysis, applying continuity arguments and Lebesgue's dominated convergence theorem, and hence we need not spell out all the details in this paper.

2 Proof of Theorem 1

For the sake of simplicity, we employ the following notation. Let $n$ be an integer greater than or equal to 2, and let $\mathbb{N}_n=\{1,2,\dots,n\}$. For real $n$-vectors $x=(x_1,\dots,x_n)$ and $p=(p_1,\dots,p_n)$, the dot product of $p$ and $x$ is denoted by $A(x,p)=p\cdot x=\sum_{i=1}^{n}p_ix_i$, where $p\in S_n$ and $S_n=\{p\in[0,\infty)^{n}\mid\sum_{i=1}^{n}p_i=1\}$ is an $(n-1)$-dimensional simplex. If $\phi$ is a real function of a real variable, for the sake of convenience, we set the vector function

$$\phi(x)=(\phi(x_1),\phi(x_2),\dots,\phi(x_n)).$$

Suppose that $p\in S_n$ and $\gamma,r,s\in\mathbb{R}$. If $x\in(0,\infty)^{n}$, then

$$\operatorname{Var}_{\gamma}(x,p)=\begin{cases}\dfrac{2}{\gamma(\gamma-1)}\left[A(x^{\gamma},p)-A^{\gamma}(x,p)\right], & \gamma\neq 0,1,\\[2mm] 2\left[\ln A(x,p)-A(\ln x,p)\right], & \gamma=0,\\[2mm] 2\left[A(x\ln x,p)-A(x,p)\ln A(x,p)\right], & \gamma=1\end{cases}$$

is called the $\gamma$-variance of the vector $x$ with respect to $p$. If $x\in(0,\infty)^{n}$ is a constant $n$-vector, then we define

$$V_{r,s}(x,p)=x_1,$$

while if $x$ is not a constant vector (i.e., there exist $i,j\in\mathbb{N}_n$ such that $x_i\neq x_j$), then we define

$$V_{r,s}(x,p)=\begin{cases}\left[\dfrac{\operatorname{Var}_{r}(x,p)}{\operatorname{Var}_{s}(x,p)}\right]^{\frac{1}{r-s}}, & r\neq s,\\[3mm] \exp\left[\dfrac{A(x^{r}\ln x,p)-A^{r}(x,p)\ln A(x,p)}{A(x^{r},p)-A^{r}(x,p)}-\left(\dfrac{1}{r}+\dfrac{1}{r-1}\right)\right], & r=s\neq 0,1,\\[3mm] \exp\left\{\dfrac{A(\ln^{2}x,p)-\ln^{2}A(x,p)}{2\left[A(\ln x,p)-\ln A(x,p)\right]}+1\right\}, & r=s=0,\\[3mm] \exp\left\{\dfrac{A(x\ln^{2}x,p)-A(x,p)\ln^{2}A(x,p)}{2\left[A(x\ln x,p)-A(x,p)\ln A(x,p)\right]}-1\right\}, & r=s=1.\end{cases}$$

V r , s (x,p) is called the Dresher variance mean of the vector x.

Clearly, $\operatorname{Var}_{\gamma}(x,p)$ is nonnegative and continuous with respect to $\gamma\in\mathbb{R}$; moreover, $V_{r,s}(x,p)=V_{s,r}(x,p)$, and $V_{r,s}(x,p)$ is continuous with respect to $(r,s)$ in $\mathbb{R}^{2}$.

Lemma 1 Let $I$ be a real interval. Suppose the function $\phi:I\to\mathbb{R}$ is $C^{(2)}$, i.e., twice continuously differentiable. If $x\in I^{n}$ and $p\in S_n$, then

$$A(\phi(x),p)-\phi(A(x,p))=\sum_{1\le i<j\le n}p_ip_j\left\{\iint_{\Phi}\phi''[w_{i,j}(x,p,t_1,t_2)]\,dt_1\,dt_2\right\}(x_i-x_j)^{2},$$
(8)

where $\Phi$ is the triangle $\{(t_1,t_2)\in[0,\infty)^{2}\mid t_1+t_2\le 1\}$ and

$$w_{i,j}(x,p,t_1,t_2)=t_1x_i+t_2x_j+(1-t_1-t_2)A(x,p).$$

Proof Note that

$$\begin{aligned}
\iint_{\Phi}\phi''[w_{i,j}(x,p,t_1,t_2)]\,dt_1\,dt_2
&=\int_{0}^{1}dt_1\int_{0}^{1-t_1}\phi''[w_{i,j}(x,p,t_1,t_2)]\,dt_2\\
&=\frac{1}{x_j-A(x,p)}\int_{0}^{1}dt_1\int_{0}^{1-t_1}\phi''[w_{i,j}(x,p,t_1,t_2)]\,d[w_{i,j}(x,p,t_1,t_2)]\\
&=\frac{1}{x_j-A(x,p)}\int_{0}^{1}\phi'[t_1x_i+t_2x_j+(1-t_1-t_2)A(x,p)]\Big|_{t_2=0}^{t_2=1-t_1}\,dt_1\\
&=\frac{1}{x_j-A(x,p)}\int_{0}^{1}\left\{\phi'[t_1x_i+(1-t_1)x_j]-\phi'[t_1x_i+(1-t_1)A(x,p)]\right\}dt_1\\
&=\frac{1}{x_j-A(x,p)}\left\{\frac{\phi[t_1x_i+(1-t_1)x_j]}{x_i-x_j}-\frac{\phi[t_1x_i+(1-t_1)A(x,p)]}{x_i-A(x,p)}\right\}\Big|_{t_1=0}^{t_1=1}\\
&=\frac{1}{x_j-A(x,p)}\left[\frac{\phi(x_i)-\phi(x_j)}{x_i-x_j}-\frac{\phi(x_i)-\phi(A(x,p))}{x_i-A(x,p)}\right]\\
&=\frac{1}{(x_i-x_j)[x_j-A(x,p)][x_i-A(x,p)]}\begin{vmatrix}\phi(A(x,p))&A(x,p)&1\\ \phi(x_i)&x_i&1\\ \phi(x_j)&x_j&1\end{vmatrix}.
\end{aligned}$$

Hence, writing $A=A(x,p)$ and

$$D(a,b)=\begin{vmatrix}\phi(A)&A&1\\ \phi(a)&a&1\\ \phi(b)&b&1\end{vmatrix}$$

for brevity, we have

$$\begin{aligned}
&\sum_{1\le i<j\le n}p_ip_j\left\{\iint_{\Phi}\phi''[t_1x_i+t_2x_j+(1-t_1-t_2)A]\,dt_1\,dt_2\right\}(x_i-x_j)^{2}\\
&\quad=\sum_{1\le i<j\le n}p_ip_j\,\frac{x_i-x_j}{(x_j-A)(x_i-A)}\,D(x_i,x_j)\\
&\quad=\frac{1}{2}\sum_{1\le i,j\le n}p_ip_j\left[\frac{1}{x_j-A}-\frac{1}{x_i-A}\right]D(x_i,x_j)\\
&\quad=\frac{1}{2}\left[\sum_{j=1}^{n}\frac{p_j}{x_j-A}\begin{vmatrix}\phi(A)&A&1\\ A(\phi(x),p)&A&1\\ \phi(x_j)&x_j&1\end{vmatrix}-\sum_{i=1}^{n}\frac{p_i}{x_i-A}\begin{vmatrix}\phi(A)&A&1\\ \phi(x_i)&x_i&1\\ A(\phi(x),p)&A&1\end{vmatrix}\right]\\
&\quad=\frac{1}{2}\left\{\sum_{j=1}^{n}\frac{p_j}{x_j-A}\left[\phi(A)-A(\phi(x),p)\right](A-x_j)-\sum_{i=1}^{n}\frac{p_i}{x_i-A}\left[\phi(A)-A(\phi(x),p)\right](x_i-A)\right\}\\
&\quad=\frac{1}{2}\left\{\sum_{j=1}^{n}p_j\left[A(\phi(x),p)-\phi(A)\right]+\sum_{i=1}^{n}p_i\left[A(\phi(x),p)-\phi(A)\right]\right\}\\
&\quad=A(\phi(x),p)-\phi(A(x,p)).
\end{aligned}$$

Here the second equality uses $\frac{x_i-x_j}{(x_j-A)(x_i-A)}=\frac{1}{x_j-A}-\frac{1}{x_i-A}$ together with the antisymmetry $D(x_j,x_i)=-D(x_i,x_j)$; the third uses $\sum_{i=1}^{n}p_i=1$, $\sum_{i=1}^{n}p_i\phi(x_i)=A(\phi(x),p)$ and $\sum_{i=1}^{n}p_ix_i=A$ to absorb one summation into a row of the determinant; and the fourth expands each determinant after subtracting the row $(A(\phi(x),p),A,1)$ from the first row.

Therefore, (8) holds. The proof is complete. □

Remark 1 The well-known Jensen inequality can be described as follows [20-22]: If the function $\phi:I\to\mathbb{R}$ satisfies $\phi''(t)\ge 0$ for all $t$ in the interval $I$, then for any $x\in I^{n}$ and $p\in S_n$, we have

$$A(\phi(x),p)\ge\phi(A(x,p)).$$
(9)

The above proof may be regarded as a constructive proof of (9).
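Identity (8) can also be verified numerically. The sketch below is our own check, not part of the paper: it takes $\phi(t)=t^{4}$ and $n=3$ (our choices), evaluates the double integral over the triangle $\Phi$ by a midpoint rule after mapping $\Phi$ onto the unit square via $t_2=(1-t_1)u$, and compares the two sides of (8).

```python
import numpy as np

phi = lambda t: t**4          # test function with phi'' >= 0
phi2 = lambda t: 12.0 * t**2  # its second derivative

x = np.array([0.5, 1.5, 2.5])
p = np.array([0.2, 0.5, 0.3])
A = float(np.dot(p, x))       # A(x, p)

def tri_integral(f, m=500):
    # midpoint quadrature over the triangle {t1, t2 >= 0, t1 + t2 <= 1},
    # mapped onto the unit square by the substitution t2 = (1 - t1) * u
    h = 1.0 / m
    g = (np.arange(m) + 0.5) * h
    t1, u = np.meshgrid(g, g, indexing="ij")
    t2 = (1.0 - t1) * u
    return float(np.sum(f(t1, t2) * (1.0 - t1)) * h * h)

lhs = float(np.dot(p, phi(x))) - phi(A)
rhs = 0.0
for i in range(3):
    for j in range(i + 1, 3):
        w = lambda t1, t2: t1 * x[i] + t2 * x[j] + (1.0 - t1 - t2) * A
        rhs += p[i] * p[j] * tri_integral(lambda t1, t2: phi2(w(t1, t2))) * (x[i] - x[j]) ** 2

assert abs(lhs - rhs) / abs(lhs) < 1e-3   # both sides of (8) agree
```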

Remark 2 We remark that the Dresher variance mean $V_{r,s}(x,p)$ extends the variance mean $V_{r,2}(x,p)$ (see [2], p.664, and [13, 21]), and Lemma 1 is a generalization of (2.23) of [19].

Lemma 2 If $x\in(0,\infty)^{n}$, $p\in S_n$ and $(r,s)\in\mathbb{R}^{2}$, then

$$\min\{x\}=\min\{x_1,\dots,x_n\}\le V_{r,s}(x,p)\le\max\{x_1,\dots,x_n\}=\max\{x\}.$$
(10)

Proof If $x$ is a constant vector, our assertion is clearly true. Let $x$ be a non-constant vector, that is, there exist $i,j\in\mathbb{N}_n$ such that $x_i\neq x_j$. Since $V_{r,s}(x,p)=V_{s,r}(x,p)$ and $V_{r,s}(x,p)$ is continuous with respect to $(r,s)$ in $\mathbb{R}^{2}$, we may assume that

$$r(r-1)s(s-1)\neq 0\quad\text{and}\quad r-s>0.$$

In (8), let $\phi:(0,\infty)\to\mathbb{R}$ be defined by $\phi(t)=t^{\gamma}$, where $\gamma(\gamma-1)\neq 0$. Then we obtain

$$\operatorname{Var}_{\gamma}(x,p)=2\sum_{1\le i<j\le n}p_ip_j\left\{\iint_{\Phi}[w_{i,j}(x,p,t_1,t_2)]^{\gamma-2}\,dt_1\,dt_2\right\}(x_i-x_j)^{2}.$$
(11)

Since

$$\min\{x\}=\min\{x_1,\dots,x_n\}\le w_{i,j}(x,p,t_1,t_2)\le\max\{x_1,\dots,x_n\}=\max\{x\},$$
(12)

by (11), (12), $r-s>0$ and the fact that

$$\begin{aligned}
V_{r,s}(x,p)&=\left[\frac{\operatorname{Var}_{r}(x,p)}{\operatorname{Var}_{s}(x,p)}\right]^{\frac{1}{r-s}}=\left[\frac{\sum_{1\le i<j\le n}p_ip_j(x_i-x_j)^{2}\iint_{\Phi}w_{i,j}^{r-2}(x,p,t_1,t_2)\,dt_1\,dt_2}{\sum_{1\le i<j\le n}p_ip_j(x_i-x_j)^{2}\iint_{\Phi}w_{i,j}^{s-2}(x,p,t_1,t_2)\,dt_1\,dt_2}\right]^{\frac{1}{r-s}}\\
&=\left[\frac{\sum_{1\le i<j\le n}p_ip_j(x_i-x_j)^{2}\iint_{\Phi}w_{i,j}^{s-2}(x,p,t_1,t_2)\,w_{i,j}^{r-s}(x,p,t_1,t_2)\,dt_1\,dt_2}{\sum_{1\le i<j\le n}p_ip_j(x_i-x_j)^{2}\iint_{\Phi}w_{i,j}^{s-2}(x,p,t_1,t_2)\,dt_1\,dt_2}\right]^{\frac{1}{r-s}},
\end{aligned}$$
(13)

we obtain (10). This concludes the proof. □

We may now turn to the proof of Theorem 1.

Proof First, we may assume that $f$ is a nonconstant function and that

$$r(r-1)s(s-1)\neq 0,\quad r-s>0.$$

Let

$$T=\{\Delta\Omega_1,\Delta\Omega_2,\dots,\Delta\Omega_n\}$$

be a partition of Ω, and let

$$\|T\|=\max_{1\le i\le n}\max_{X,Y\in\Delta\Omega_i}\{\|X-Y\|\}$$

be the ‘norm’ of the partition T, where

$$\|X-Y\|=\sqrt{(X-Y)\cdot(X-Y)}$$

is the length of the vector $X-Y$. Pick any $\xi_i\in\Delta\Omega_i$ for each $i=1,2,\dots,n$, and set

$$\xi=(\xi_1,\xi_2,\dots,\xi_n),\qquad f(\xi)=(f(\xi_1),f(\xi_2),\dots,f(\xi_n)),$$

and

$$\bar{p}(\xi)=(\bar{p}_1(\xi),\bar{p}_2(\xi),\dots,\bar{p}_n(\xi))=\frac{(p(\xi_1)|\Delta\Omega_1|,\,p(\xi_2)|\Delta\Omega_2|,\,\dots,\,p(\xi_n)|\Delta\Omega_n|)}{\sum_{i=1}^{n}p(\xi_i)|\Delta\Omega_i|},$$

then

$$\lim_{\|T\|\to 0}\sum_{i=1}^{n}p(\xi_i)|\Delta\Omega_i|=\int_{\Omega}p\,d\omega=1,$$

where $|\Delta\Omega_i|$ is the $m$-dimensional volume of $\Delta\Omega_i$ for $i=1,2,\dots,n$.

Furthermore, when $\gamma(\gamma-1)\neq 0$, we have

$$\begin{aligned}
\operatorname{Var}_{\gamma}(f,p)&=\frac{2}{\gamma(\gamma-1)}\left[\lim_{\|T\|\to 0}\sum_{i=1}^{n}p(\xi_i)f^{\gamma}(\xi_i)|\Delta\Omega_i|-\left(\lim_{\|T\|\to 0}\sum_{i=1}^{n}p(\xi_i)f(\xi_i)|\Delta\Omega_i|\right)^{\gamma}\right]\\
&=\frac{2}{\gamma(\gamma-1)}\left[\left(\lim_{\|T\|\to 0}\sum_{i=1}^{n}p(\xi_i)|\Delta\Omega_i|\right)\left(\lim_{\|T\|\to 0}\sum_{i=1}^{n}\bar{p}_i(\xi)f^{\gamma}(\xi_i)\right)-\left(\lim_{\|T\|\to 0}\sum_{i=1}^{n}p(\xi_i)|\Delta\Omega_i|\right)^{\gamma}\left(\lim_{\|T\|\to 0}\sum_{i=1}^{n}\bar{p}_i(\xi)f(\xi_i)\right)^{\gamma}\right]\\
&=\frac{2}{\gamma(\gamma-1)}\left[\lim_{\|T\|\to 0}\sum_{i=1}^{n}\bar{p}_i(\xi)f^{\gamma}(\xi_i)-\left(\lim_{\|T\|\to 0}\sum_{i=1}^{n}\bar{p}_i(\xi)f(\xi_i)\right)^{\gamma}\right]\\
&=\lim_{\|T\|\to 0}\frac{2}{\gamma(\gamma-1)}\left[\sum_{i=1}^{n}\bar{p}_i(\xi)f^{\gamma}(\xi_i)-\left(\sum_{i=1}^{n}\bar{p}_i(\xi)f(\xi_i)\right)^{\gamma}\right]\\
&=\lim_{\|T\|\to 0}\operatorname{Var}_{\gamma}(f(\xi),\bar{p}(\xi)).
\end{aligned}$$
(14)

By (14), we obtain

$$V_{r,s}(f,p)=\left[\frac{\operatorname{Var}_{r}(f,p)}{\operatorname{Var}_{s}(f,p)}\right]^{\frac{1}{r-s}}=\lim_{\|T\|\to 0}\left[\frac{\operatorname{Var}_{r}(f(\xi),\bar{p}(\xi))}{\operatorname{Var}_{s}(f(\xi),\bar{p}(\xi))}\right]^{\frac{1}{r-s}}=\lim_{\|T\|\to 0}V_{r,s}(f(\xi),\bar{p}(\xi)).$$
(15)

By Lemma 2, we have

$$\min\{f(\xi)\}\le V_{r,s}(f(\xi),\bar{p}(\xi))\le\max\{f(\xi)\}.$$
(16)

From (15) and (16), we obtain

$$\min_{t\in\Omega}\{f(t)\}=\lim_{\|T\|\to 0}\min\{f(\xi)\}\le\lim_{\|T\|\to 0}V_{r,s}(f(\xi),\bar{p}(\xi))=V_{r,s}(f,p)\le\lim_{\|T\|\to 0}\max\{f(\xi)\}=\max_{t\in\Omega}\{f(t)\}.$$

This completes the proof of Theorem 1. □

Remark 3 By [21], if the function $\phi:I\to\mathbb{R}$ has the property that $\phi'':I\to\mathbb{R}$ is a continuous and convex function, then for any $x\in I^{n}$ and $p\in S_n$, we obtain

$$\phi''(V_{3,2}(x,p))\le\frac{2\left[A(\phi(x),p)-\phi(A(x,p))\right]}{\operatorname{Var}_{2}(x,p)}\le\frac{1}{3}\left\{\max_{1\le i\le n}\{\phi''(x_i)\}+A(\phi''(x),p)+\phi''(A(x,p))\right\}.$$
(17)

Thus, according to the proof of Theorem 1, we may see the following: if the function $f:\Omega\to(0,\infty)$ is continuous and the function $\phi:f(\Omega)\to\mathbb{R}$ has the property that $\phi'':f(\Omega)\to\mathbb{R}$ is a continuous convex function, then

$$\phi''(V_{3,2}(f,p))\le\frac{2\left[E(\phi\circ f,p)-\phi(E(f,p))\right]}{\operatorname{Var}_{2}(f,p)}\le\frac{1}{3}\left\{\max_{t\in\Omega}\{\phi''(f(t))\}+E(\phi''\circ f,p)+\phi''(E(f,p))\right\},$$
(18)

where $\phi\circ f$ is the composite of $\phi$ and $f$. Therefore, the Dresher variance mean $V_{r,s}(f,p)$ has a wide mathematical background.

3 Proof of Theorem 2

In this section, we use the same notation as in the previous section. In addition, for fixed $\gamma,r,s\in\mathbb{R}$, if $x\in(0,\infty)^{n}$ and $p\in S_n$, then the $\gamma$-order power mean of $x$ with respect to $p$ (see, e.g., [19]) is defined by

$$M^{[\gamma]}(x,p)=\begin{cases}\left[A(x^{\gamma},p)\right]^{\frac{1}{\gamma}}, & \gamma\neq 0,\\ \exp A(\ln x,p), & \gamma=0,\end{cases}$$

and the two-parameter Dresher mean of $x$ (see [10, 11]) with respect to $p$ is defined by

$$D_{r,s}(x,p)=\begin{cases}\left[\dfrac{A(x^{r},p)}{A(x^{s},p)}\right]^{\frac{1}{r-s}}, & r\neq s,\\[3mm] \exp\left[\dfrac{A(x^{s}\ln x,p)}{A(x^{s},p)}\right], & r=s.\end{cases}$$

We have the following well-known power mean inequality [19]: If $\alpha<\beta$, then

$$M^{[\alpha]}(x,p)\le M^{[\beta]}(x,p).$$

We also have the following result (see [2], p.74, and [10, 11]).

Lemma 3 (Dresher inequality)

If $x\in(0,\infty)^{n}$, $p\in S_n$ and $(r,s),(r^{*},s^{*})\in\mathbb{R}^{2}$, then the inequality

$$D_{r,s}(x,p)\ge D_{r^{*},s^{*}}(x,p)$$
(19)

holds if and only if

$$\max\{r,s\}\ge\max\{r^{*},s^{*}\}\quad\text{and}\quad\min\{r,s\}\ge\min\{r^{*},s^{*}\}.$$
(20)

Proof Indeed, if (20) holds, since $D_{r,s}(x,p)=D_{s,r}(x,p)$, we may assume that $r\ge r^{*}$ and $s\ge s^{*}$. By the power mean inequality, we have

$$D_{r,s}(x,p)=M^{[r-s]}\left(x,\frac{x^{s}p}{A(x^{s},p)}\right)\ge M^{[r^{*}-s]}\left(x,\frac{x^{s}p}{A(x^{s},p)}\right)=M^{[s-r^{*}]}\left(x,\frac{x^{r^{*}}p}{A(x^{r^{*}},p)}\right)\ge M^{[s^{*}-r^{*}]}\left(x,\frac{x^{r^{*}}p}{A(x^{r^{*}},p)}\right)=D_{r^{*},s^{*}}(x,p).$$

Conversely, if (19) holds, then (20) holds by [2], p.74 and [10, 11]. □
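The power mean inequality and Lemma 3 are easy to probe numerically. The sketch below is our own illustration with arbitrary data (all names are ours): it evaluates $M^{[\gamma]}$ and $D_{r,s}$ for a sample vector and weight, and verifies both monotonicity statements.

```python
import numpy as np

x = np.array([0.6, 1.3, 2.4, 4.0])
p = np.array([0.2, 0.3, 0.3, 0.2])

A = lambda y: float(np.dot(p, y))     # A(y, p) = p . y

def M(gamma):
    # gamma-order power mean M^[gamma](x, p)
    if gamma == 0:
        return float(np.exp(A(np.log(x))))
    return A(x**gamma) ** (1 / gamma)

def D(r, s):
    # two-parameter Dresher mean D_{r,s}(x, p)
    if r == s:
        return float(np.exp(A(x**s * np.log(x)) / A(x**s)))
    return (A(x**r) / A(x**s)) ** (1 / (r - s))

# power mean inequality: M^[alpha] <= M^[beta] for alpha < beta
assert M(-1.0) <= M(0) <= M(0.5) <= M(1.0) <= M(2.0)

# Dresher inequality (Lemma 3): dominating parameters give a larger mean
assert D(3.0, 1.0) >= D(2.0, 0.5) >= D(1.0, 0.0)
```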

Lemma 4 Let $x\in(0,\infty)^{n}$, $p\in S_n$ and $(r,s),(r^{*},s^{*})\in\mathbb{R}^{2}$. If (20) holds, then

$$V_{r,s}(x,p)\ge V_{r^{*},s^{*}}(x,p).$$

Proof If $x$ is a constant $n$-vector, our assertion clearly holds. We may, therefore, assume that there exist $i,j\in\mathbb{N}_n$ such that $x_i\neq x_j$. We may further assume that

$$r(r-1)s(s-1)(r-s)\neq 0.$$

Let $G=\{\Delta\Phi_1,\Delta\Phi_2,\dots,\Delta\Phi_l\}$ be a partition of $\Phi=\{(t_1,t_2)\in[0,\infty)^{2}\mid t_1+t_2\le 1\}$. Let the area of each $\Delta\Phi_k$ be denoted by $|\Delta\Phi_k|$, and let

$$\|G\|=\max_{1\le k\le l}\max_{x,y\in\Delta\Phi_k}\{\|x-y\|\}$$

be the 'norm' of the partition; then for any $(\xi_{k,1},\xi_{k,2})\in\Delta\Phi_k$, we have

$$\iint_{\Phi}[w_{i,j}(x,p,t_1,t_2)]^{r-2}\,dt_1\,dt_2=\lim_{\|G\|\to 0}\sum_{k=1}^{l}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{r-2}\,|\Delta\Phi_k|.$$

By (11), when $\gamma(\gamma-1)\neq 0$, we have

$$\begin{aligned}
\operatorname{Var}_{\gamma}(x,p)&=2\sum_{1\le i<j\le n}p_ip_j(x_i-x_j)^{2}\iint_{\Phi}[w_{i,j}(x,p,t_1,t_2)]^{\gamma-2}\,dt_1\,dt_2\\
&=2\sum_{1\le i<j\le n}p_ip_j(x_i-x_j)^{2}\lim_{\|G\|\to 0}\sum_{k=1}^{l}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{\gamma-2}|\Delta\Phi_k|\\
&=\lim_{\|G\|\to 0}\left\{2\sum_{1\le i<j\le n}p_ip_j(x_i-x_j)^{2}\sum_{k=1}^{l}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{\gamma-2}|\Delta\Phi_k|\right\}\\
&=\lim_{\|G\|\to 0}\left\{\sum_{1\le i<j\le n,\,1\le k\le l}2p_ip_j|\Delta\Phi_k|(x_i-x_j)^{2}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{\gamma-2}\right\}.
\end{aligned}$$
(21)

By (21) and Lemma 3, we then see that

$$\begin{aligned}
V_{r,s}(x,p)&=\left[\frac{\operatorname{Var}_{r}(x,p)}{\operatorname{Var}_{s}(x,p)}\right]^{\frac{1}{r-s}}\\
&=\lim_{\|G\|\to 0}\left\{\frac{\sum_{1\le i<j\le n,\,1\le k\le l}p_ip_j|\Delta\Phi_k|(x_i-x_j)^{2}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{r-2}}{\sum_{1\le i<j\le n,\,1\le k\le l}p_ip_j|\Delta\Phi_k|(x_i-x_j)^{2}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{s-2}}\right\}^{\frac{1}{(r-2)-(s-2)}}\\
&\ge\lim_{\|G\|\to 0}\left\{\frac{\sum_{1\le i<j\le n,\,1\le k\le l}p_ip_j|\Delta\Phi_k|(x_i-x_j)^{2}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{r^{*}-2}}{\sum_{1\le i<j\le n,\,1\le k\le l}p_ip_j|\Delta\Phi_k|(x_i-x_j)^{2}[w_{i,j}(x,p,\xi_{k,1},\xi_{k,2})]^{s^{*}-2}}\right\}^{\frac{1}{(r^{*}-2)-(s^{*}-2)}}\\
&=V_{r^{*},s^{*}}(x,p).
\end{aligned}$$

This ends the proof. □

We may now easily obtain the proof of Theorem 2.

Proof Indeed, by (20), (15) and Lemma 4, we get that

$$V_{r,s}(f,p)=\lim_{\|T\|\to 0}V_{r,s}(f(\xi),\bar{p}(\xi))\ge\lim_{\|T\|\to 0}V_{r^{*},s^{*}}(f(\xi),\bar{p}(\xi))=V_{r^{*},s^{*}}(f,p).$$

This completes the proof of Theorem 2. □

4 Proof of Theorem 3

In this section, we use the same notation as in the previous two sections. In addition, let $I_n=(1,1,\dots,1)$ be an $n$-vector and

$$\mathbb{Q}_{+}=\left\{\frac{s}{r}\ \middle|\ r\in\{1,2,3,\dots\},\ s\in\{0,1,2,\dots\}\right\},$$

let $S_n$ be the $(n-1)$-dimensional simplex

$$S_n=\left\{x\in(0,\infty)^{n}\ \middle|\ \sum_{i=1}^{n}x_i=n\right\},$$

and let

$$F_n(x)=\sum_{i=1}^{n}x_i^{\gamma}\ln x_i-\frac{1}{\gamma-1}\left(\sum_{i=1}^{n}x_i^{\gamma}-n\right)$$

be defined on $S_n$.

Lemma 5 Let $\gamma\in(1,\infty)$. If $x$ is a relative extremum point of the function $F_n:S_n\to\mathbb{R}$, then there exist $k\in\mathbb{N}_n$ and $u,v\in(0,n)$ such that

$$ku+(n-k)v=n,$$
(22)

and

$$F_n(x)=ku^{\gamma}\ln u+(n-k)v^{\gamma}\ln v-\frac{1}{\gamma-1}\left[ku^{\gamma}+(n-k)v^{\gamma}-n\right].$$
(23)

Proof Consider the Lagrange function

$$L(x)=F_n(x)+\mu\left(\sum_{i=1}^{n}x_i-n\right),$$

with

$$\frac{\partial L}{\partial x_j}=x_j^{\gamma-1}(\gamma\ln x_j+1)-\frac{\gamma}{\gamma-1}x_j^{\gamma-1}+\mu=x_j^{\gamma-1}L(x_j)=0,\quad j=1,2,\dots,n,$$

where the function $L:(0,n)\to\mathbb{R}$ is defined by

$$L(t)=\gamma\ln t+\mu t^{1-\gamma}-\frac{1}{\gamma-1}.$$

Then

$$x_j\in(0,n),\qquad L(x_j)=0,\qquad j=1,2,\dots,n.$$
(24)

Note that

$$L'(t)=\frac{\gamma}{t}-(\gamma-1)\mu t^{-\gamma}=\frac{\gamma}{t^{\gamma}}\left(t^{\gamma-1}-\frac{\gamma-1}{\gamma}\mu\right).$$

Hence, the function $L:(0,n)\to\mathbb{R}$ has at most one extreme point, and $L(t)=0$ has at most two roots in $(0,n)$. By (24), we have

$$|\{x_1,x_2,\dots,x_n\}|\le 2,$$

where $|\{x_1,x_2,\dots,x_n\}|$ denotes the number of elements of the set $\{x_1,x_2,\dots,x_n\}$. Since $F_n:S_n\to\mathbb{R}$ is a symmetric function, we may assume that there exists $k\in\mathbb{N}_n$ such that

$$x_1=x_2=\cdots=x_k=u,\qquad x_{k+1}=x_{k+2}=\cdots=x_n=v.$$

That is, (23) and (22) hold. The proof is complete. □

Lemma 6 Let $\gamma\in(1,\infty)$. If $x$ is a relative extremum point of the function $F_n:S_n\to\mathbb{R}$, then

$$F_n(x)\ge 0.$$
(25)

Proof By Lemma 5, there exist $k\in\mathbb{N}_n$ and $u,v\in(0,n)$ such that (23) and (22) hold. If $u=v=1$, then (25) holds. We may, therefore, assume without loss of generality that $0<u<1<v$. From (22), we see that

$$\frac{k}{n}=\frac{1-v}{u-v}.$$
(26)

Putting (26) into (23), we obtain that

$$\begin{aligned}
F_n(x)&=n\left\{\frac{k}{n}u^{\gamma}\ln u+\left(1-\frac{k}{n}\right)v^{\gamma}\ln v-\frac{1}{\gamma-1}\left[\frac{k}{n}u^{\gamma}+\left(1-\frac{k}{n}\right)v^{\gamma}-1\right]\right\}\\
&=n\left\{\frac{1-v}{u-v}u^{\gamma}\ln u+\frac{u-1}{u-v}v^{\gamma}\ln v-\frac{1}{\gamma-1}\left[\frac{1-v}{u-v}u^{\gamma}+\frac{u-1}{u-v}v^{\gamma}-1\right]\right\}\\
&=n\left\{\frac{1-v}{u-v}u^{\gamma}\ln u+\frac{u-1}{u-v}v^{\gamma}\ln v-\frac{1}{\gamma-1}\left[\frac{1-v}{u-v}(u^{\gamma}-1)+\frac{u-1}{u-v}(v^{\gamma}-1)\right]\right\}\\
&=\frac{n(1-v)(u-1)}{u-v}\left\{\frac{u^{\gamma}\ln u}{u-1}+\frac{v^{\gamma}\ln v}{1-v}-\frac{1}{\gamma-1}\left[\frac{u^{\gamma}-1}{u-1}+\frac{v^{\gamma}-1}{1-v}\right]\right\}\\
&=\frac{n(1-u)(v-1)}{v-u}\left\{\frac{v^{\gamma}\ln v}{v-1}-\frac{u^{\gamma}\ln u}{u-1}-\frac{1}{\gamma-1}\left[\frac{v^{\gamma}-1}{v-1}-\frac{u^{\gamma}-1}{u-1}\right]\right\}\\
&=\frac{n(1-u)(v-1)}{(\gamma-1)(v-u)}\left[\psi(v,\gamma)-\psi(u,\gamma)\right],
\end{aligned}$$
(27)

where the auxiliary function $\psi:(0,\infty)\times(1,\infty)\to\mathbb{R}$ is defined by

$$\psi(t,\gamma)=\frac{(\gamma-1)t^{\gamma}\ln t-(t^{\gamma}-1)}{t-1}.$$

Since $\frac{n(1-u)(v-1)}{(\gamma-1)(v-u)}>0$, by (27), inequality (25) is equivalent to the following inequality:

$$\psi(v,\gamma)\ge\psi(u,\gamma),\quad u\in(0,1),\ v\in(1,\infty),\ \gamma\in(1,\infty).$$
(28)

Using the software Mathematica, we depict the graph of the function $\psi:(0,2)\times\{\frac{3}{2}\}\to\mathbb{R}$ in Figure 1, and the graph of the function $\psi:(0,2)\times(1,2]\to\mathbb{R}$ in Figure 2.

Figure 1. The graph of the function $\psi:(0,2)\times\{\frac{3}{2}\}\to\mathbb{R}$.

Figure 2. The graph of the function $\psi:(0,2)\times(1,2]\to\mathbb{R}$.

Now, let us prove the following inequalities:

$$\psi(u,\gamma)<-1,\quad u\in(0,1),\ \gamma\in(1,\infty);$$
(29)
$$\psi(v,\gamma)>-1,\quad v\in(1,\infty),\ \gamma\in(1,\infty).$$
(30)

By Cauchy's mean value theorem, there exists $\xi\in(u,1)$ such that

$$\begin{aligned}
\psi(u,\gamma)&=\frac{(\gamma-1)u^{\gamma}\ln u-(u^{\gamma}-1)}{u-1}=\frac{(\gamma-1)\ln u-1+u^{-\gamma}}{u^{1-\gamma}-u^{-\gamma}}\\
&=\frac{\frac{d}{du}\left[(\gamma-1)\ln u-1+u^{-\gamma}\right]}{\frac{d}{du}\left[u^{1-\gamma}-u^{-\gamma}\right]}\bigg|_{u=\xi}=\frac{(\gamma-1)\xi^{-1}-\gamma\xi^{-\gamma-1}}{(1-\gamma)\xi^{-\gamma}+\gamma\xi^{-\gamma-1}}\\
&=\frac{(\gamma-1)\xi^{\gamma}-\gamma}{-(\gamma-1)\xi+\gamma}<\frac{(\gamma-1)\xi-\gamma}{-(\gamma-1)\xi+\gamma}=-1.
\end{aligned}$$

Therefore, inequality (29) holds.

Next, note that

$$\psi(v,\gamma)>-1\ \Longleftrightarrow\ \frac{(\gamma-1)v^{\gamma}\ln v-(v^{\gamma}-1)}{v-1}>-1\ \Longleftrightarrow\ (\gamma-1)v^{\gamma}\ln v-v^{\gamma}+v>0\ \Longleftrightarrow\ \psi_{1}(v,\gamma):=(\gamma-1)\ln v-1+v^{1-\gamma}>0.$$
(31)

By Lagrange's mean value theorem, there exists $\xi^{*}\in(1,v)$ such that

$$\psi_{1}(v,\gamma)=\psi_{1}(v,\gamma)-\psi_{1}(1,\gamma)=(v-1)\frac{\partial\psi_{1}(v,\gamma)}{\partial v}\bigg|_{v=\xi^{*}}=(v-1)\left[(\gamma-1)(\xi^{*})^{-1}+(1-\gamma)(\xi^{*})^{-\gamma}\right]=(\gamma-1)(v-1)(\xi^{*})^{-\gamma}\left[(\xi^{*})^{\gamma-1}-1\right]>0.$$

Hence, (31) holds. It then follows that inequality (30) holds.

By inequalities (29) and (30), we may easily obtain inequality (28). This ends the proof of Lemma 6. □
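The sign pattern of $\psi$ established in (29) and (30) can be probed numerically over a grid of arguments. The sketch below is our own check, not part of the original proof; the grids and $\gamma$ values are arbitrary.

```python
import numpy as np

def psi(t, gamma):
    # psi(t, gamma) = [(gamma - 1) t^gamma ln t - (t^gamma - 1)] / (t - 1)
    return ((gamma - 1) * t**gamma * np.log(t) - (t**gamma - 1)) / (t - 1)

u = np.linspace(0.01, 0.99, 99)   # arguments below 1
v = np.linspace(1.01, 5.0, 100)   # arguments above 1

for gamma in [1.1, 1.5, 2.0, 3.0, 6.0]:
    assert np.all(psi(u, gamma) < -1)   # inequality (29)
    assert np.all(psi(v, gamma) > -1)   # inequality (30)
```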

Lemma 7 If $\gamma\in(1,\infty)$, then inequality (25) holds for any $x\in S_n$.

Proof We proceed by induction.

(A) Suppose $n=2$. By the well-known non-linear programming (maximum) principle and Lemma 6, we only need to prove that

$$\lim_{x_1\to 0,\,(x_1,x_2)\in S_2}F_2(x_1,x_2)\ge 0\quad\text{and}\quad\lim_{x_2\to 0,\,(x_1,x_2)\in S_2}F_2(x_1,x_2)\ge 0.$$

We show the first inequality, the second being similar.

Indeed, it follows from Lagrange's mean value theorem that there exists $\gamma^{*}\in(1,\gamma)$ such that

$$\begin{aligned}
\lim_{x_1\to 0,\,(x_1,x_2)\in S_2}F_2(x_1,x_2)&=2^{\gamma}\ln 2-\frac{2^{\gamma}-2}{\gamma-1}=\frac{2^{\gamma}}{\gamma-1}\left[(\gamma-1)\ln 2-1+2^{1-\gamma}\right]\\
&=2^{\gamma}\,\frac{d}{d\gamma}\left[(\gamma-1)\ln 2-1+2^{1-\gamma}\right]\bigg|_{\gamma=\gamma^{*}}=2^{\gamma}\ln 2\left(1-2^{1-\gamma^{*}}\right)>0.
\end{aligned}$$
(B) Assume by induction that the function $F_{n-1}:S_{n-1}\to\mathbb{R}$ satisfies $F_{n-1}(y)\ge 0$ for all $y\in S_{n-1}$. We prove inequality (25) as follows. By Lemma 6, we only need to prove that

$$\lim_{x_1\to 0,\,x\in S_n}F_n(x)\ge 0,\quad\dots,\quad\lim_{x_n\to 0,\,x\in S_n}F_n(x)\ge 0.$$

We will only show the last inequality. If we set

$$y=(y_1,y_2,\dots,y_{n-1})=\frac{n-1}{n}(x_1,x_2,\dots,x_{n-1}),$$

then $y\in S_{n-1}$. By Lagrange's mean value theorem, there exists $\gamma^{*}\in(1,\gamma)$ such that

$$\frac{1}{\gamma-1}\left[(\gamma-1)\ln\frac{n}{n-1}-1+\left(\frac{n}{n-1}\right)^{1-\gamma}\right]=\frac{\partial}{\partial\gamma}\left[(\gamma-1)\ln\frac{n}{n-1}-1+\left(\frac{n}{n-1}\right)^{1-\gamma}\right]\bigg|_{\gamma=\gamma^{*}}=\left(\ln\frac{n}{n-1}\right)\left[1-\left(\frac{n}{n-1}\right)^{1-\gamma^{*}}\right].$$

Thus, by the power mean inequality

$$\sum_{i=1}^{n-1}y_i^{\gamma}\ge(n-1)\left(\frac{1}{n-1}\sum_{i=1}^{n-1}y_i\right)^{\gamma}=n-1$$

and the induction hypothesis, we see that

$$\begin{aligned}
\lim_{x_n\to 0,\,x\in S_n}F_n(x)&=\sum_{i=1}^{n-1}x_i^{\gamma}\ln x_i-\frac{1}{\gamma-1}\left(\sum_{i=1}^{n-1}x_i^{\gamma}-n\right)\\
&=\sum_{i=1}^{n-1}\left(\frac{n}{n-1}y_i\right)^{\gamma}\ln\left(\frac{n}{n-1}y_i\right)-\frac{1}{\gamma-1}\left[\sum_{i=1}^{n-1}\left(\frac{n}{n-1}y_i\right)^{\gamma}-n\right]\\
&=\left(\frac{n}{n-1}\right)^{\gamma}\left\{\left(\ln\frac{n}{n-1}\right)\sum_{i=1}^{n-1}y_i^{\gamma}+\sum_{i=1}^{n-1}y_i^{\gamma}\ln y_i-\frac{1}{\gamma-1}\left[\sum_{i=1}^{n-1}y_i^{\gamma}-n\left(\frac{n-1}{n}\right)^{\gamma}\right]\right\}\\
&=\left(\frac{n}{n-1}\right)^{\gamma}\left\{\left(\ln\frac{n}{n-1}\right)\sum_{i=1}^{n-1}y_i^{\gamma}+F_{n-1}(y)-\frac{1}{\gamma-1}\left[(n-1)-n\left(\frac{n-1}{n}\right)^{\gamma}\right]\right\}\\
&\ge\left(\frac{n}{n-1}\right)^{\gamma}\left\{\left(\ln\frac{n}{n-1}\right)(n-1)-\frac{1}{\gamma-1}\left[(n-1)-n\left(\frac{n-1}{n}\right)^{\gamma}\right]\right\}\\
&=(n-1)\left(\frac{n}{n-1}\right)^{\gamma}\frac{1}{\gamma-1}\left[(\gamma-1)\ln\frac{n}{n-1}-1+\left(\frac{n}{n-1}\right)^{1-\gamma}\right]\\
&=(n-1)\left(\frac{n}{n-1}\right)^{\gamma}\left(\ln\frac{n}{n-1}\right)\left[1-\left(\frac{n}{n-1}\right)^{1-\gamma^{*}}\right]>0.
\end{aligned}$$

This ends the proof of Lemma 7. □

Lemma 8 Let $x\in(0,\infty)^{n}$ and $p\in S_n$. If $r>s\ge 1$, then

$$V_{r,s}(x,p)\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}A(x,p),$$
(32)

and the coefficient $\left(\frac{s}{r}\right)^{\frac{1}{r-s}}$ in (32) is the best constant.

Proof We may assume that there exist $i,j\in\mathbb{N}_n$ such that $x_i\neq x_j$. By continuity considerations, we may also assume that $r>s>1$.

(A) Suppose $p=n^{-1}I_n$. Then (32) can be rewritten as

$$V_{r,s}(x,n^{-1}I_n)\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}A(x,n^{-1}I_n),$$

or

$$\left[\frac{\operatorname{Var}_{r}(x,n^{-1}I_n)}{\operatorname{Var}_{s}(x,n^{-1}I_n)}\right]^{\frac{1}{r-s}}\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}A(x,n^{-1}I_n),$$

or

$$\left[\frac{s(s-1)}{r(r-1)}\cdot\frac{A(x^{r},n^{-1}I_n)-A^{r}(x,n^{-1}I_n)}{A(x^{s},n^{-1}I_n)-A^{s}(x,n^{-1}I_n)}\right]^{\frac{1}{r-s}}\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}A(x,n^{-1}I_n).$$

That is,

$$F_{r}(x,n^{-1}I_n)\ge F_{s}(x,n^{-1}I_n),$$
(33)

where we have introduced the auxiliary function

$$F_{\gamma}(x,p)=\ln\frac{A(x^{\gamma},p)-A^{\gamma}(x,p)}{(\gamma-1)A^{\gamma}(x,p)},\quad\gamma>1.$$

Since, for any $t\in(0,\infty)$, we have $F_{\gamma}(tx,n^{-1}I_n)=F_{\gamma}(x,n^{-1}I_n)$, we may assume that $x\in S_n$. By Lemma 7, we have

$$\frac{\partial F_{\gamma}(x,n^{-1}I_n)}{\partial\gamma}=\frac{\partial}{\partial\gamma}\left[\ln\left(\frac{1}{n}\sum_{i=1}^{n}x_i^{\gamma}-1\right)-\ln(\gamma-1)\right]=\frac{n^{-1}\sum_{i=1}^{n}x_i^{\gamma}\ln x_i}{n^{-1}\sum_{i=1}^{n}x_i^{\gamma}-1}-\frac{1}{\gamma-1}=\frac{\sum_{i=1}^{n}x_i^{\gamma}\ln x_i-\frac{1}{\gamma-1}\left(\sum_{i=1}^{n}x_i^{\gamma}-n\right)}{\sum_{i=1}^{n}x_i^{\gamma}-n}=\frac{F_n(x)}{\sum_{i=1}^{n}x_i^{\gamma}-n}\ge 0.$$

Hence, for fixed $x\in(0,\infty)^{n}$, $F_{\gamma}(x,n^{-1}I_n)$ is increasing with respect to $\gamma$ in $(1,\infty)$. Thus, since $r>s>1$, we obtain (33), and hence (32).

(B) Suppose $p\neq n^{-1}I_n$, but $p\in\mathbb{Q}_{+}^{n}$. Then there exists $N\in\{1,2,3,\dots\}$ such that $Np_i\in\{0,1,2,\dots\}$ for $i=1,\dots,n$. Setting

$$x^{*}=(\underbrace{x_1,\dots,x_1}_{Np_1},\underbrace{x_2,\dots,x_2}_{Np_2},\dots,\underbrace{x_n,\dots,x_n}_{Np_n}),\qquad m=\sum_{i=1}^{n}Np_i=N,$$

and

$$p^{*}=m^{-1}I_m,$$

then $x^{*}\in(0,\infty)^{m}$ and $p^{*}\in S_m$. Inequality (32) can then be rewritten as

$$V_{r,s}(x^{*},p^{*})\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}A(x^{*},p^{*}).$$
(34)

According to the result in (A), inequality (34) holds.

(C) Suppose $p\neq n^{-1}I_n$ and $p\in S_n\setminus\mathbb{Q}_{+}^{n}$. Then it is easy to see that there exists a sequence $\{p^{(k)}\}_{k=1}^{\infty}\subset\mathbb{Q}_{+}^{n}\cap S_n$ such that $\lim_{k\to\infty}p^{(k)}=p$. According to the result in (B), we get

$$V_{r,s}(x,p^{(k)})\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}A(x,p^{(k)}),\quad k\in\{1,2,3,\dots\}.$$

Therefore

$$V_{r,s}(x,p)=\lim_{k\to\infty}V_{r,s}(x,p^{(k)})\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}\lim_{k\to\infty}A(x,p^{(k)})=\left(\frac{s}{r}\right)^{\frac{1}{r-s}}A(x,p).$$

Next, we show that the coefficient $\left(\frac{s}{r}\right)^{\frac{1}{r-s}}$ is the best constant in (32). Assume that the inequality

$$V_{r,s}(x,p)\ge C_{r,s}A(x,p)$$
(35)

holds. Setting

and p= n 1 I n in (35), we obtain

$$\left[\frac{s(s-1)}{r(r-1)}\cdot\frac{\frac{n-1}{n}-\left(\frac{n-1}{n}\right)^{r}}{\frac{n-1}{n}-\left(\frac{n-1}{n}\right)^{s}}\right]^{\frac{1}{r-s}}\ge C_{r,s}\,\frac{n-1}{n}\ \Longleftrightarrow\ \left[\frac{s(s-1)}{r(r-1)}\cdot\frac{1-\left(\frac{n-1}{n}\right)^{r-1}}{1-\left(\frac{n-1}{n}\right)^{s-1}}\right]^{\frac{1}{r-s}}\ge C_{r,s}\,\frac{n-1}{n}.$$
(36)

In (36), by letting $n\to\infty$, we obtain

$$\left(\frac{s}{r}\right)^{\frac{1}{r-s}}\ge C_{r,s}.$$

Hence, the coefficient $\left(\frac{s}{r}\right)^{\frac{1}{r-s}}$ is the best constant in (32). The proof is complete. □
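The best-constant claim can be observed numerically: for near-extremal vectors mirroring the construction in (36), the ratio $V_{r,s}(x,p)/A(x,p)$ stays above $(s/r)^{1/(r-s)}$ and approaches it as $n\to\infty$. The sketch below is our own illustration; the choice $x=(\varepsilon,1,\dots,1)$ with small $\varepsilon>0$ approximates the limiting vector $(0,1,\dots,1)$.

```python
import numpy as np

def V(x, p, r, s):
    E = lambda y: float(np.dot(p, y))
    var = lambda g: 2 / (g * (g - 1)) * (E(x**g) - E(x)**g)
    return (var(r) / var(s)) ** (1 / (r - s))

r, s = 3.0, 2.0
bound = (s / r) ** (1 / (r - s))          # the claimed best constant

for n in [10, 100, 1000, 10000]:
    # near-extremal configuration: one coordinate close to 0, the rest equal to 1
    x = np.ones(n); x[0] = 1e-9
    p = np.full(n, 1.0 / n)
    ratio = V(x, p, r, s) / float(np.dot(p, x))
    assert ratio >= bound                  # inequality (32) holds
    print(n, ratio)                        # the ratio decreases toward the bound
```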

Remark 4 If $x\in(0,\infty)^{n}$, $p\in S_n$ and $r>s>1$, then there cannot exist $\theta\in(0,\infty)$ and $C_{r,s}\in(0,\infty)$ such that

$$V_{r,s}(x,p)\le C_{r,s}M^{[\theta]}(x,p).$$
(37)

Indeed, if there exist $\theta\in(0,\infty)$ and $C_{r,s}\in(0,\infty)$ such that (37) holds, then by setting

$$x=(n,0,\dots,0)$$

(admissible as a limiting case, by continuity) and $p=n^{-1}I_n$ in (37), we see that

$$\left[\frac{s(s-1)}{r(r-1)}\cdot\frac{1-n^{1-r}}{1-n^{1-s}}\right]^{\frac{1}{r-s}}\le C_{r,s}\,n^{-\frac{1}{\theta}},$$

which implies

$$\left[\frac{s(s-1)}{r(r-1)}\right]^{\frac{1}{r-s}}=\lim_{n\to\infty}\left[\frac{s(s-1)}{r(r-1)}\cdot\frac{1-n^{1-r}}{1-n^{1-s}}\right]^{\frac{1}{r-s}}\le\lim_{n\to\infty}C_{r,s}\,n^{-\frac{1}{\theta}}=0,$$
(38)

which is a contradiction.

Remark 5 The method of the proof of Lemma 8 is referred to as the descending method in [6, 7, 13, 23-27], but the details in this paper are different.

We now return to the proof of Theorem 3.

Proof By (15) and Lemma 8, we obtain

$$V_{r,s}(f,p)=\lim_{\|T\|\to 0}V_{r,s}(f(\xi),\bar{p}(\xi))\ge\left(\frac{s}{r}\right)^{\frac{1}{r-s}}\lim_{\|T\|\to 0}A(f(\xi),\bar{p}(\xi))=\left(\frac{s}{r}\right)^{\frac{1}{r-s}}E(f,p).$$

Thus, inequality (4) holds. Furthermore, by Lemma 8, the coefficient $\left(\frac{s}{r}\right)^{\frac{1}{r-s}}$ is the best constant. This completes the proof of Theorem 3. □

5 Applications in space science

It is well known that there are nine planets in the solar system, i.e., Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto; in this paper, we still regard Pluto as a planet of the solar system. In space science, one often considers the gravity exerted on the Earth by the other planets of the solar system (see Figure 3).

Figure 3. The graph of the planet system $PS\{P,m,B(g,r)\}_{\mathbb{R}^{3}}^{4}$.

We can build a mathematical model of the problem. Let the masses of these planets be $m_0,m_1,\dots,m_n$, where $m_0$ denotes the mass of the Earth and $1\le n\le 8$. At a moment $T_0$, in $\mathbb{R}^{3}$, let the coordinate of the center of the Earth be $o=(0,0,0)$, let the center of the $i$th planet be $p_i=(p_{i1},p_{i2},p_{i3})$, and let the distance between $p_i$ and $o$ be $\|p_i\|=\sqrt{p_{i1}^{2}+p_{i2}^{2}+p_{i3}^{2}}$, where $i=1,2,\dots,n$. By the famous law of gravitation, the gravity on the Earth $o$ from the planets $p_1,p_2,\dots,p_n$ is

$$F:=G_0m_0\sum_{i=1}^{n}\frac{m_ip_i}{\|p_i\|^{3}},$$

where $G_0$ is the gravitational constant in the solar system. Assume that the coordinate of the center of the sun is $g=(g_1,g_2,g_3)$; then there exists a ball $B(g,r)$ such that the planets $p_1,p_2,\dots,p_n$ move in this ball. In other words, at any moment, we have

$$p_i\in B(g,r),\quad i=1,2,\dots,n,$$

where $r$ is the radius of the ball $B(g,r)$.

We denote by $\theta_{i,j}:=\angle(p_i,p_j)$ the angle between the vectors $\overrightarrow{op_i}$ and $\overrightarrow{op_j}$, where $1\le i<j\le n$. This angle is also considered as the observation angle between the two planets $p_i$, $p_j$ as seen from the Earth $o$, which can be measured from the Earth by telescope.

Without loss of generality, we suppose throughout this paper that $n\ge 2$, $G_0=1$, $m_0=1$ and $\sum_{i=1}^{n}m_i=1$.

We can generalize the above problem to a Euclidean space. Let $E$ be a Euclidean space. For two vectors $\alpha\in E$ and $\beta\in E$, the inner product of $\alpha$, $\beta$ and the norm of $\alpha$ are denoted by $\langle\alpha,\beta\rangle$ and $\|\alpha\|=\sqrt{\langle\alpha,\alpha\rangle}$, respectively. The angle between $\alpha$ and $\beta$ is denoted by

$$\angle(\alpha,\beta):=\arccos\frac{\langle\alpha,\beta\rangle}{\|\alpha\|\,\|\beta\|}\in[0,\pi],$$

where α and β are nonzero vectors.

For $g\in E$, we say that the set

$$B(g,r):=\{x\in E\mid\|x-g\|\le r\}$$

is a closed ball and the set

$$S(g,r):=\{x\in E\mid\|x-g\|=r\}$$

is a sphere, where $r\in\mathbb{R}_{++}:=(0,+\infty)$.

Now let us define the planet system and the λ-gravity function.

Let $E$ be a Euclidean space with $\dim E\ge 3$, let $P=(p_1,p_2,\ldots,p_n)$ and $m=(m_1,m_2,\ldots,m_n)$ be sequences in $E$ and $\mathbb{R}_{++}$, respectively, and let $B(g,r)$ be a closed ball in $E$. The set

$$PS\{P,m,B(g,r)\}_{E^{n}}:=\{P,m,B(g,r)\}$$

is called the planet system if the following three conditions hold:

(H1) $\|p_i\|>0$, $i=1,2,\ldots,n$;

(H2) $p_i\in B(g,r)$, $i=1,2,\ldots,n$;

(H3) $\sum_{i=1}^{n}m_i=1$.

Let $PS\{P,m,B(g,r)\}_{E^{n}}$ be a planet system. The function

$$F_{\lambda}:E^{n}\to E,\qquad F_{\lambda}(P)=\sum_{i=1}^{n}\frac{m_{i}p_{i}}{\|p_{i}\|^{\lambda+1}}$$

is called the $\lambda$-gravity function of the planet system $PS\{P,m,B(g,r)\}_{E^{n}}$, and $F_0$ is called the gravity kernel of $F_{\lambda}$, where $\lambda\in[0,+\infty)$.
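The $\lambda$-gravity function is straightforward to evaluate numerically. The sketch below works in $\mathbb{R}^3$; the configuration (positions and masses) is hypothetical, with the masses normalised so that $\sum m_i=1$.

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def F_lambda(P, m, lam):
    """lambda-gravity function: F_lam(P) = sum_i m_i * p_i / ||p_i||**(lam + 1)."""
    out = [0.0, 0.0, 0.0]
    for p, mi in zip(P, m):
        w = mi / norm(p) ** (lam + 1.0)
        out = [o + w * c for o, c in zip(out, p)]
    return out

# hypothetical configuration: three planets in R^3
P = [(2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (1.0, 1.0, 1.0)]
m = [0.5, 0.3, 0.2]
F2 = F_lambda(P, m, 2.0)        # the Newtonian case, lambda = 2
F0 = F_lambda(P, m, 0.0)        # the gravity kernel F_0 = sum_i m_i e_i
print(norm(F0) <= 1.0)          # the kernel's norm never exceeds sum m_i = 1
```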

Write

$$p_{i}=\|p_{i}\|e_{i};\qquad\theta_{i,j}:=\angle(p_{i},p_{j})\in[0,\pi];\qquad\theta_{i}:=\angle(p_{i},g)\in[0,\pi].$$

The matrix $A=[\langle e_i,e_j\rangle]_{n\times n}=[\cos\theta_{i,j}]_{n\times n}$ is called the observation matrix of the planet system $PS\{P,m,B(g,r)\}_{E^{n}}$, and $\theta_{i,j}$ ($j\ne i$, $1\le i,j\le n$) and $\theta_i$ ($1\le i\le n$) are called the observation angles and the center observation angles of the planet system, respectively. Throughout this paper we assume that the observation matrix $A$ is a constant matrix.

It is worth noting that the norm $\|F_0\|$ of the gravity kernel $F_0=F_0(P)$ is independent of $\|p_1\|,\|p_2\|,\ldots,\|p_n\|$; furthermore,

$$0\le\|F_{0}\|=\sqrt{mAm^{T}}\le 1,$$
(39)

where $m^{T}$ is the transpose of the row vector $m$, and

$$mAm^{T}=\sum_{1\le i,j\le n}m_{i}m_{j}\cos\theta_{i,j}$$

is a quadratic form.

In fact, from $F_0=\sum_{i=1}^{n}m_ie_i$ and $\langle e_i,e_j\rangle=\cos\theta_{i,j}$, we have that

$$0\le\|F_{0}\|=\sqrt{\langle F_{0},F_{0}\rangle}=\sqrt{\sum_{1\le i,j\le n}m_{i}m_{j}\langle e_{i},e_{j}\rangle}=\sqrt{mAm^{T}};\qquad\|F_{0}\|=\Bigl\|\sum_{i=1}^{n}m_{i}e_{i}\Bigr\|\le\sum_{i=1}^{n}m_{i}\|e_{i}\|=1.$$
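The identity $\|F_0\|=\sqrt{mAm^{T}}$ and the bound (39) can be checked numerically; the configuration below is hypothetical.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

# hypothetical configuration of three planets in R^3
P = [(2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (1.0, 1.0, 1.0)]
m = [0.5, 0.3, 0.2]
e = [unit(p) for p in P]                         # directions e_i = p_i / ||p_i||
F0 = [sum(mi * ei[k] for mi, ei in zip(m, e)) for k in range(3)]  # gravity kernel

# quadratic form m A m^T, with A = [<e_i, e_j>] = [cos theta_{i,j}]
quad = sum(m[i] * m[j] * dot(e[i], e[j]) for i in range(3) for j in range(3))
print(abs(dot(F0, F0) - quad) < 1e-12, 0.0 <= quad <= 1.0)
```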

Let $PS\{P,m,B(g,1)\}_{E^{n}}$ be a planet system. By the above definitions, the gravitational force exerted on the Earth $o$ by the $n$ planets $p_1,p_2,\ldots,p_n$ in the solar system is $F_2(P)$, and $\|F_2(P)\|$ is its magnitude. If we take a point $q_i$ on the ray $op_i$ such that $\|q_i\|=1$, and place a planet of mass $m_i$ at $q_i$ for $i=1,2,\ldots,n$, then the gravitational force exerted on the Earth $o$ by these $n$ planets $q_1,q_2,\ldots,q_n$ is $F_0(P)$.

Let $PS\{P,m,B(g,1)\}_{E^{n}}$ be a planet system. If we regard $g$ as a molecule and $p_1,p_2,\ldots,p_n$ as atoms of $g$, then the force exerted on another atom $o$ by the $n$ atoms $p_1,p_2,\ldots,p_n$ of $g$ is $F_2(P)$.

In the solar system, the gravitational force of the $n$ planets $p_1,p_2,\ldots,p_n$ on the planet $o$ is $F_2(P)$, while for other galaxies in the universe the force may be $F_{\lambda}(P)$, where $\lambda\in(0,2)\cup(2,+\infty)$.

Let $PS\{P,m,B(g,r)\}_{E^{n}}$ be a planet system. Then the function

$$f_{\lambda}:E^{n}\to\mathbb{R}_{++},\qquad f_{\lambda}(P)=\sum_{i=1}^{n}\frac{m_{i}}{\|p_{i}\|^{\lambda}}$$

is called the absolute $\lambda$-gravity function of the planet system $PS\{P,m,B(g,r)\}_{E^{n}}$, where $\lambda\in[0,+\infty)$.

Let $P$ be a planetary sequence in the solar system. Then $\frac{1}{n}f_2(P)$ is the average of the magnitudes of the gravitational forces of the planets $p_1,p_2,\ldots,p_n$ on the Earth $o$.

Let $P$ be a planetary sequence in the solar system. If we regard $m_i$ as the radiation energy of the planet $p_i$, then, according to optical laws, the radiant energy received by the Earth $o$ from $p_i$ is $cm_i/\|p_i\|^{2}$, $i=1,2,\ldots,n$, and the total radiant energy received by the Earth $o$ is $cf_2(P)$, where $c>0$ is a constant.

By Minkowski’s inequality (see [28])

$$\|x+y\|\le\|x\|+\|y\|,\quad x,y\in E,$$

we know that if $F_{\lambda}(P)$ and $f_{\lambda}(P)$ are the $\lambda$-gravity function and the absolute $\lambda$-gravity function of the planet system $PS\{P,m,B(g,r)\}_{E^{n}}$, respectively, then we have

$$\|F_{\lambda}(P)\|\le f_{\lambda}(P),\quad\lambda\in[0,+\infty).$$
(40)
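Inequality (40) follows from the triangle inequality, and can be spot-checked numerically for several values of $\lambda$; the data below are illustrative.

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def F_lam(P, m, lam):
    """lambda-gravity function (vector-valued)."""
    out = [0.0, 0.0, 0.0]
    for p, mi in zip(P, m):
        w = mi / norm(p) ** (lam + 1.0)
        out = [o + w * c for o, c in zip(out, p)]
    return out

def f_lam(P, m, lam):
    """absolute lambda-gravity function: sum_i m_i / ||p_i||**lam."""
    return sum(mi / norm(p) ** lam for p, mi in zip(P, m))

# illustrative data
P = [(2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (1.0, 1.0, 1.0)]
m = [0.5, 0.3, 0.2]
ok = all(norm(F_lam(P, m, lam)) <= f_lam(P, m, lam) + 1e-12
         for lam in (0.0, 1.0, 2.0, 3.5))
print(ok)
```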

Now, we will define absolute λ-gravity variance and λ-gravity variance. To this end, we need the following preliminaries.

Two vectors $x$ and $y$ in $E$ are said to be in the same (opposite) direction if (i) $x=0$ or $y=0$, or (ii) $x\ne 0$, $y\ne 0$ and $x$ is a positive (respectively negative) constant multiple of $y$. That $x$ and $y$ are in the same (opposite) direction is indicated by $x\uparrow\uparrow y$ (respectively $x\uparrow\downarrow y$).

We say that $S:=S(0,1)$ is the unit sphere in $E$.

For each $\alpha\in S$, we say that the set

$$\Pi_{\alpha}:=\{\gamma\in E\mid\langle\gamma,\alpha\rangle=1\}$$

is the tangent plane to the unit sphere $S$ at the vector $\alpha$. It is obvious that

$$\gamma\in\Pi_{\alpha}\iff\langle\gamma-\alpha,\alpha\rangle=0\iff\alpha\perp(\gamma-\alpha).$$

Assume that $\alpha,\beta\in S$, and that $\alpha\pm\beta\ne 0$. We then say that the set

$$\ell_{\alpha\beta}:=\bigl\{\digamma_{\alpha,\beta}(t)\mid t\in(-\infty,\infty)\bigr\},$$

where

$$\digamma_{\alpha,\beta}(t)=\frac{(1-t)\alpha+t\beta}{\|(1-t)\alpha+t\beta\|},$$

is a straight line on the unit sphere $S$, and that the sets

$$[\alpha\beta]:=\bigl\{\digamma_{\alpha,\beta}(t)\mid t\in[0,1]\bigr\},\qquad(\alpha\beta]:=\bigl\{\digamma_{\alpha,\beta}(t)\mid t\in(0,1]\bigr\},\qquad[\alpha\beta):=\bigl\{\digamma_{\alpha,\beta}(t)\mid t\in[0,1)\bigr\},\qquad(\alpha\beta):=\bigl\{\digamma_{\alpha,\beta}(t)\mid t\in(0,1)\bigr\},$$

are the straight line segments on the sphere $S$, and that $\widehat{\alpha\beta}:=\angle(\alpha,\beta)=\arccos\langle\alpha,\beta\rangle$ is the length of these line segments.

It is easy to see that $\alpha+\beta\ne 0$ implies $(1-t)\alpha+t\beta\ne 0$ for $t\in[0,1]$. Thus, we may easily get the existence and uniqueness of these line segments. Similarly, $\alpha\pm\beta\ne 0$ implies that $\widehat{\alpha\beta}\in(0,\pi)$.

Assuming that $\gamma\in\Pi_{\alpha}\cap\Pi_{\beta}$, and that $\alpha$, $\beta$, $\gamma$ are linearly dependent vectors, we say that $\gamma-\alpha$ is the tangent vector to the line segment $[\alpha\beta)$ at $\alpha$. By definition, we see that there exist $u,v\in\mathbb{R}$ such that

$$\gamma=u\alpha+v\beta.$$

Therefore

$$1=\langle\gamma,\alpha\rangle=u\langle\alpha,\alpha\rangle+v\langle\alpha,\beta\rangle=u+v\langle\alpha,\beta\rangle,\qquad 1=\langle\gamma,\beta\rangle=u\langle\alpha,\beta\rangle+v\langle\beta,\beta\rangle=u\langle\alpha,\beta\rangle+v.$$

We infer from $\langle\alpha,\beta\rangle\in(-1,1)$ that

$$u=v=\frac{1}{1+\langle\alpha,\beta\rangle},\qquad\gamma=\frac{\alpha+\beta}{1+\langle\alpha,\beta\rangle},\qquad\frac{\gamma-\alpha}{\|\gamma-\alpha\|}=\frac{\beta-\langle\alpha,\beta\rangle\alpha}{\|\beta-\langle\alpha,\beta\rangle\alpha\|}.$$

We therefore also call $\beta-\langle\alpha,\beta\rangle\alpha$ the tangent vector of $[\alpha\beta)$ at $\alpha$. The tangent vector $\beta-\langle\alpha,\beta\rangle\alpha$ enjoys the following property: if $\gamma\in(\alpha\beta)$, then

$$\bigl(\gamma-\langle\alpha,\gamma\rangle\alpha\bigr)\uparrow\uparrow\bigl(\beta-\langle\alpha,\beta\rangle\alpha\bigr).$$
(41)

In fact, there exists t(0,1) such that

$$\gamma=\digamma_{\alpha,\beta}(t)=\frac{(1-t)\alpha+t\beta}{\|(1-t)\alpha+t\beta\|}.$$

Since $\langle\alpha,\alpha\rangle=1$, we see that

$$\gamma-\langle\gamma,\alpha\rangle\alpha=\frac{(1-t)\alpha+t\beta-\bigl\langle(1-t)\alpha+t\beta,\alpha\bigr\rangle\alpha}{\|(1-t)\alpha+t\beta\|}=\frac{t\bigl(\beta-\langle\alpha,\beta\rangle\alpha\bigr)}{\|(1-t)\alpha+t\beta\|}\uparrow\uparrow\bigl(\beta-\langle\alpha,\beta\rangle\alpha\bigr).$$
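The same-direction property (41) of the tangent vector can be verified numerically; the unit vectors below are arbitrary choices with $\alpha\pm\beta\ne 0$.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

alpha = unit([1.0, 0.0, 0.0])
beta = unit([0.6, 0.8, 0.0])            # alpha +/- beta != 0
t = 0.37                                 # any t in (0, 1)
gamma = unit([(1 - t) * a + t * b for a, b in zip(alpha, beta)])

tang_gamma = [x - dot(alpha, gamma) * y for x, y in zip(gamma, alpha)]
tang_beta = [x - dot(alpha, beta) * y for x, y in zip(beta, alpha)]
# same direction <=> the corresponding unit vectors coincide
same_direction = all(abs(x - y) < 1e-12
                     for x, y in zip(unit(tang_gamma), unit(tang_beta)))
print(same_direction)
```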

The angle between two line segments $[\alpha\beta)$, $[\alpha\gamma)$ on the unit sphere $S$ is defined as

$$\angle\bigl([\alpha\beta),[\alpha\gamma)\bigr):=\angle\bigl(\beta-\langle\alpha,\beta\rangle\alpha,\ \gamma-\langle\alpha,\gamma\rangle\alpha\bigr).$$

If $[\alpha\beta)$, $[\beta\gamma)$, $[\gamma\alpha)$ are three straight line segments on the unit sphere $S$, and $\alpha\notin\ell_{\beta\gamma}$, $\beta\notin\ell_{\gamma\alpha}$, $\gamma\notin\ell_{\alpha\beta}$, then we say that the set $\triangle\alpha\beta\gamma:=[\alpha\beta)\cup[\beta\gamma)\cup[\gamma\alpha)$ is a spherical triangle. Write $A=\angle([\alpha\beta),[\alpha\gamma))$, $B=\angle([\beta\gamma),[\beta\alpha))$, $C=\angle([\gamma\alpha),[\gamma\beta))$, $a=\widehat{\beta\gamma}$, $b=\widehat{\gamma\alpha}$, $c=\widehat{\alpha\beta}$. Then we obtain that

$$\cos A=\cos\angle\bigl(\beta-\langle\alpha,\beta\rangle\alpha,\ \gamma-\langle\alpha,\gamma\rangle\alpha\bigr)=\frac{\bigl\langle\beta-\langle\alpha,\beta\rangle\alpha,\ \gamma-\langle\alpha,\gamma\rangle\alpha\bigr\rangle}{\|\beta-\langle\alpha,\beta\rangle\alpha\|\,\|\gamma-\langle\alpha,\gamma\rangle\alpha\|}=\frac{\langle\beta,\gamma\rangle-\langle\alpha,\beta\rangle\langle\alpha,\gamma\rangle}{\sqrt{1-\langle\alpha,\beta\rangle^{2}}\sqrt{1-\langle\alpha,\gamma\rangle^{2}}}=\frac{\cos a-\cos b\cos c}{\sin b\sin c}.$$

Thus, we may get the law of cosines for a spherical triangle:

$$\cos a=\cos b\cos c+\sin b\sin c\cos A.$$
(42)

By (42) we may get the dual law of cosines for a spherical triangle:

$$\cos A=-\cos B\cos C+\sin B\sin C\cos a.$$
(43)

By (42) we may get the law of sines for a spherical triangle:

$$\frac{\sin a}{\sin A}=\frac{\sin b}{\sin B}=\frac{\sin c}{\sin C}.$$
(44)
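The laws (42) and (44), together with the angle-sum property (46), can be verified on a concrete spherical triangle; the three unit vectors below are arbitrary.

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def unit(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def angle_at(p, q, r):
    """Spherical angle at vertex p of triangle pqr, via tangent vectors."""
    tq = unit([x - dot(p, q) * y for x, y in zip(q, p)])
    tr = unit([x - dot(p, r) * y for x, y in zip(r, p)])
    return math.acos(dot(tq, tr))

al = unit([1.0, 0.2, 0.1])          # arbitrary unit vectors, no two parallel
be = unit([0.1, 1.0, 0.3])
ga = unit([0.2, 0.1, 1.0])
a, b, c = math.acos(dot(be, ga)), math.acos(dot(ga, al)), math.acos(dot(al, be))
A, B, C = angle_at(al, be, ga), angle_at(be, ga, al), angle_at(ga, al, be)

err42 = abs(math.cos(a) - (math.cos(b) * math.cos(c)
                           + math.sin(b) * math.sin(c) * math.cos(A)))
err44 = abs(math.sin(a) / math.sin(A) - math.sin(b) / math.sin(B))
print(err42 < 1e-12, err44 < 1e-12, A + B + C > math.pi)
```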

By $\cos A>-1$, $\cos a<1$ and (42)-(43), we get

$$a,b,c\in(0,\pi),\qquad b+c>a,$$

or

$$\angle(\beta,\gamma)<\angle(\gamma,\alpha)+\angle(\alpha,\beta),$$
(45)

and

$$A,B,C\in(0,\pi),\qquad A+B+C>\pi.$$
(46)

Lemma 9 Let $E$ be a Euclidean space with $\dim E\ge 3$, and let $B(g,1)$ be a closed ball in $E$. If $\|g\|>1$, then

$$\max_{\alpha,\beta\in B(g,1)}\bigl\{\angle(\alpha,\beta)\bigr\}=2\arcsin\frac{1}{\|g\|}.$$
(47)

Proof From $\alpha\in B(g,1)$ we get

$$\langle\alpha-g,\alpha-g\rangle\le 1\ \Rightarrow\ \|\alpha\|^{2}-2\langle g,\alpha\rangle+\|g\|^{2}\le 1\ \Rightarrow\ \cos\angle(\alpha,g)=\frac{\langle g,\alpha\rangle}{\|\alpha\|\,\|g\|}\ge\frac{\|\alpha\|^{2}+\|g\|^{2}-1}{2\|\alpha\|\,\|g\|}\ge\frac{\sqrt{\|g\|^{2}-1}}{\|g\|}.$$

Thus,

$$\angle(\alpha,g)\le\arcsin\frac{1}{\|g\|}.$$
(48)

Similarly, from $\beta\in B(g,1)$ we have

$$\angle(\beta,g)\le\arcsin\frac{1}{\|g\|}.$$
(49)

If we set

$$\alpha'=\frac{\alpha}{\|\alpha\|},\qquad\beta'=\frac{\beta}{\|\beta\|},\qquad g'=\frac{g}{\|g\|},$$

then $\alpha',\beta',g'\in S$. According to inequalities (48), (49) and (45), we get

$$\angle(\alpha,\beta)=\angle(\alpha',\beta')\le\angle(\alpha',g')+\angle(g',\beta')=\angle(\alpha,g)+\angle(g,\beta)\le 2\arcsin\frac{1}{\|g\|},$$

hence

$$\angle(\alpha,\beta)\le 2\arcsin\frac{1}{\|g\|}.$$
(50)

Now we discuss the conditions under which equality holds in inequality (50). From the above analysis, these conditions are:

(a) $\alpha\in B(g,1)$ and $\|\alpha\|=\sqrt{\|g\|^{2}-1}$;

(b) $\beta\in B(g,1)$ and $\|\beta\|=\sqrt{\|g\|^{2}-1}$;

(c) $g'\in[\alpha'\beta']$ and $\angle(\alpha,g)=\angle(g,\beta)=\arcsin\frac{1}{\|g\|}$.

From (a) and (b) we know that the condition (c) can be rewritten as

(c′) $\frac{g}{\|g\|}=\frac{\alpha+\beta}{\|\alpha+\beta\|}$, that is, $g\uparrow\uparrow(\alpha+\beta)$.

Based on the above analysis, we know that equality can hold in inequality (50). Therefore, equality (47) holds. The lemma is proved. □
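Lemma 9 can be illustrated numerically: random pairs in the closed ball $B(g,1)$ never exceed the bound $2\arcsin(1/\|g\|)$, and the extremal configuration of the proof attains it. The center $g$ below is illustrative.

```python
import math, random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

g = [3.0, 0.0, 0.0]                      # centre with ||g|| > 1 (illustrative)
bound = 2.0 * math.asin(1.0 / norm(g))

random.seed(0)
worst = 0.0
for _ in range(2000):
    pts = []
    while len(pts) < 2:                  # rejection-sample points of B(g, 1)
        x = [random.uniform(-1.0, 1.0) for _ in range(3)]
        if norm(x) <= 1.0:
            pts.append([gi + xi for gi, xi in zip(g, x)])
    a, b = pts
    c = dot(a, b) / (norm(a) * norm(b))
    worst = max(worst, math.acos(max(-1.0, min(1.0, c))))

# extremal configuration from the proof: ||alpha|| = sqrt(||g||^2 - 1),
# placed symmetrically about g at angle arcsin(1/||g||)
t = math.asin(1.0 / norm(g))
h = math.sqrt(norm(g) ** 2 - 1.0)
alpha = [h * math.cos(t), h * math.sin(t), 0.0]
beta = [alpha[0], -alpha[1], 0.0]
ang = math.acos(dot(alpha, beta) / (norm(alpha) * norm(beta)))
print(worst <= bound + 1e-9, abs(ang - bound) < 1e-9)
```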

It is worth noting that if $PS\{P,m,B(g,1)\}_{E^{n}}$ is a planet system and $\|g\|\ge\sqrt{2}$, then, according to Lemma 9 and $p_i,p_j\in B(g,1)$, we have that

$$0\le\theta_{i,j}=\angle(p_{i},p_{j})\le\max_{\alpha,\beta\in B(g,1)}\bigl\{\angle(\alpha,\beta)\bigr\}=2\arcsin\frac{1}{\|g\|}\le\frac{\pi}{2}$$
(51)

for any $i,j=1,2,\ldots,n$, and

$$\|F_{\lambda}(P)\|^{2}=\sum_{1\le i,j\le n}m_{i}m_{j}\cos\theta_{i,j}\biggl(\frac{1}{\|p_{i}\|^{2}\|p_{j}\|^{2}}\biggr)^{\frac{\lambda}{2}},$$
(52)
$$\|F_{2}(P)\|^{\lambda}=\biggl[\sum_{1\le i,j\le n}m_{i}m_{j}\cos\theta_{i,j}\biggl(\frac{1}{\|p_{i}\|^{2}\|p_{j}\|^{2}}\biggr)\biggr]^{\frac{\lambda}{2}},$$
(53)

where

$$m_{i}m_{j}\cos\theta_{i,j}\ge 0,\quad i,j=1,2,\ldots,n.$$
(54)

We can now define the absolute $\lambda$-gravity variance and the $\lambda$-gravity variance.

Let $PS\{P,m,B(g,r)\}_{E^{n}}$ be a planet system, and let $f_{\lambda}(P)$, $F_{\lambda}(P)$ be the absolute $\lambda$-gravity function and the $\lambda$-gravity function of the planet system, respectively. We say that the functions

$$\mathrm{Var}_{\lambda}(P)=\mathrm{Var}_{\lambda/2}\bigl(p^{-2},m\bigr)=\frac{8}{\lambda(\lambda-2)}\Bigl[f_{\lambda}(P)-f_{2}^{\frac{\lambda}{2}}(P)\Bigr],\quad 0<\lambda\ne 2,$$

and

$$\overline{\mathrm{Var}}_{\lambda}(P)=\frac{8}{\lambda(\lambda-2)}\Biggl[\biggl(\frac{\|F_{\lambda}(P)\|}{\|F_{0}(P)\|}\biggr)^{2}-\biggl(\frac{\|F_{2}(P)\|}{\|F_{0}(P)\|}\biggr)^{\lambda}\Biggr],\quad 0<\lambda\ne 2,$$

are the absolute $\lambda$-gravity variance and the $\lambda$-gravity variance of the planet system, respectively, where

$$p=\bigl(\|p_{1}\|,\|p_{2}\|,\ldots,\|p_{n}\|\bigr),\qquad p^{-2}=\bigl(\|p_{1}\|^{-2},\|p_{2}\|^{-2},\ldots,\|p_{n}\|^{-2}\bigr).$$

Let $PS\{P,m,B(g,1)\}_{E^{n}}$ be a planet system, and let $0<\lambda,\mu\ne 2$, $\lambda\ne\mu$. By Lemma 2, we have

$$\frac{1}{\max_{1\le i\le n}\{\|p_{i}\|^{2}\}}\le\biggl[\frac{\mathrm{Var}_{\lambda}(P)}{\mathrm{Var}_{\mu}(P)}\biggr]^{\frac{2}{\lambda-\mu}}\le\frac{1}{\min_{1\le i\le n}\{\|p_{i}\|^{2}\}}.$$
(55)

If $\|g\|\ge\sqrt{2}$, then, according to (51)-(54) and Lemma 2, we have

$$\frac{1}{\max_{1\le i\le n}\{\|p_{i}\|^{4}\}}\le\biggl[\frac{\overline{\mathrm{Var}}_{\lambda}(P)}{\overline{\mathrm{Var}}_{\mu}(P)}\biggr]^{\frac{2}{\lambda-\mu}}\le\frac{1}{\min_{1\le i\le n}\{\|p_{i}\|^{4}\}}.$$
(56)
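Inequality (55) can be spot-checked numerically with the assumed discrete form of the absolute $\lambda$-gravity variance given above; the norms $\|p_i\|$ and weights below are illustrative.

```python
def E(vals, m):
    return sum(mi * v for mi, v in zip(m, vals))

def var_lam(p_norms, m, lam):
    """Assumed discrete form of the absolute lambda-gravity variance."""
    g = [1.0 / x ** 2 for x in p_norms]                   # g_i = ||p_i||^{-2}
    return 8.0 / (lam * (lam - 2.0)) * (
        E([x ** (lam / 2.0) for x in g], m) - E(g, m) ** (lam / 2.0))

p_norms = [1.2, 2.0, 3.1]           # illustrative values of ||p_i||
m = [0.3, 0.3, 0.4]                 # masses summing to 1
lam, mu = 3.0, 1.0                  # 0 < lam, mu != 2 and lam != mu

ratio = (var_lam(p_norms, m, lam) / var_lam(p_norms, m, mu)) ** (2.0 / (lam - mu))
low = 1.0 / max(p_norms) ** 2
high = 1.0 / min(p_norms) ** 2
print(low <= ratio <= high)         # inequality (55)
```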

Let $P$ be a planetary sequence in the solar system, and let $F(p_i)$ be the magnitude of the gravitational force of the planet $p_i$ on the Earth $o$, $i=1,2,\ldots,n$. Then inequalities (55) and (56) can be rewritten as

$$\min_{1\le i\le n}\biggl\{\frac{F(p_{i})}{m_{i}}\biggr\}\le\biggl[\frac{\mathrm{Var}_{\lambda}(P)}{\mathrm{Var}_{\mu}(P)}\biggr]^{\frac{2}{\lambda-\mu}}\le\max_{1\le i\le n}\biggl\{\frac{F(p_{i})}{m_{i}}\biggr\}$$
(57)

and

$$\min_{1\le i\le n}\biggl\{\frac{F(p_{i})}{m_{i}}\biggr\}\le\biggl[\frac{\overline{\mathrm{Var}}_{\lambda}(P)}{\overline{\mathrm{Var}}_{\mu}(P)}\biggr]^{\frac{1}{\lambda-\mu}}\le\max_{1\le i\le n}\biggl\{\frac{F(p_{i})}{m_{i}}\biggr\},$$
(58)

respectively.

Let $PS\{P,m,B(g,1)\}_{E^{n}}$ be a planet system. By Lemma 8, if $\lambda>\mu>2$, then

$$\frac{\mathrm{Var}_{\lambda}(P)}{\mathrm{Var}_{\mu}(P)}\ge\frac{\mu}{\lambda}\bigl[f_{2}(P)\bigr]^{\frac{\lambda-\mu}{2}}.$$
(59)

If $\lambda>\mu>2$ and $\|g\|\ge\sqrt{2}$, then, according to (51)-(54) and Lemma 8, we have

$$\frac{\overline{\mathrm{Var}}_{\lambda}(P)}{\overline{\mathrm{Var}}_{\mu}(P)}\ge\frac{\mu}{\lambda}\biggl[\frac{\|F_{2}(P)\|}{\|F_{0}(P)\|}\biggr]^{\lambda-\mu},$$
(60)

where the coefficient μ/λ is the best constant in (59) and (60).
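Similarly, inequality (59) can be spot-checked for $\lambda>\mu>2$ under the same assumed discrete form of $\mathrm{Var}_{\lambda}(P)$; the data are illustrative.

```python
def E(vals, m):
    return sum(mi * v for mi, v in zip(m, vals))

def var_lam(p_norms, m, lam):
    """Assumed discrete form of the absolute lambda-gravity variance."""
    g = [1.0 / x ** 2 for x in p_norms]
    return 8.0 / (lam * (lam - 2.0)) * (
        E([x ** (lam / 2.0) for x in g], m) - E(g, m) ** (lam / 2.0))

p_norms = [1.2, 2.0, 3.1]           # illustrative values of ||p_i||
m = [0.3, 0.3, 0.4]
lam, mu = 5.0, 3.0                  # lam > mu > 2, as (59) requires

f2 = E([1.0 / x ** 2 for x in p_norms], m)      # absolute 2-gravity function
lhs = var_lam(p_norms, m, lam) / var_lam(p_norms, m, mu)
rhs = (mu / lam) * f2 ** ((lam - mu) / 2.0)
print(lhs >= rhs)                   # inequality (59)
```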

Remark 6 For some new literature related to space science, please see [4] and [25].

References

  1. Bullen PS, Mitrinović DS, Vasić PM: Means and Their Inequalities. Reidel, Dordrecht; 1988.


  2. Kuang JC: Applied Inequalities. Shandong Sci. & Tech. Press, Jinan; 2004. (in Chinese)


  3. Mitrinović DS, Pečarić JE, Fink AM: Classical and New Inequalities in Analysis. Kluwer Academic, Dordrecht; 1993.


  4. Wen JJ, Gao CB: Geometric inequalities involving the central distance of the centered 2-surround system. Acta Math. Sin. 2008, 51(4):815–832. (in Chinese)


  5. Wang WL, Wen JJ, Shi HN: On the optimal values for inequalities involving power means. Acta Math. Sin. 2004, 47(6):1053–1062. (in Chinese)


  6. Wen JJ, Wang WL: The optimization for the inequalities of power means. J. Inequal. Appl. 2006., 2006: Article ID 46782 10.1155/JIA/2006/46782


  7. Pečarić JE, Wen JJ, Wang WL, Lu T: A generalization of Maclaurin’s inequalities and its applications. Math. Inequal. Appl. 2005, 8(4):583–598.


  8. Wen JJ, Cheng SS, Gao CB: Optimal sublinear inequalities involving geometric and power means. Math. Bohem. 2009, 134(2):133–149.


  9. Ku HT, Ku MC, Zhang XM: Generalized power means and interpolating inequalities. Proc. Am. Math. Soc. 1999, 127(1):145–154. 10.1090/S0002-9939-99-04845-5


  10. Páles Z: Inequalities for sums of powers. J. Math. Anal. Appl. 1988, 131(1):265–270. 10.1016/0022-247X(88)90204-1


  11. Bjelica M: Asymptotic planarity of Dresher mean values. Mat. Vesn. 2005, 57: 61–63.


  12. Wen JJ, Han TY, Gao CB: Convergence tests on constant Dirichlet series. Comput. Math. Appl. 2011, 62(9):3472–3489. 10.1016/j.camwa.2011.08.064


  13. Yang H, Wen JJ, Wang WL: The method of descending dimension for establishing inequalities (II). Sichuan Daxue Xuebao 2007, 44(4):753–758.


  14. Dresher M: Moment spaces and inequalities. Duke Math. J. 1953, 20: 261–271. 10.1215/S0012-7094-53-02026-2


  15. Daskin JM: Dresher’s inequality. Am. Math. Mon. 1952, 49: 687–688.


  16. Beckenbach EF, Bellman R: Inequalities. 3rd edition. Springer, Berlin; 1971.


  17. Brenner JL, Carlson BC: Homogenous mean values: weights and asymptotes. J. Math. Anal. Appl. 1987, 123: 265–280. 10.1016/0022-247X(87)90308-8


  18. Hu K: Problems in Analytic Inequality. Wuhan University Press, Wuhan; 2003. (in Chinese)


  19. Wen JJ, Zhang ZH: Jensen type inequalities involving homogeneous polynomials. J. Inequal. Appl. 2010., 2010: Article ID 850215


  20. Pečarić JE, Svrtan D: New refinements of the Jensen inequalities based on samples with repetitions. J. Math. Anal. Appl. 1998, 222: 365–373. 10.1006/jmaa.1997.5839


  21. Wen JJ: The inequalities involving Jensen functions. J. Syst. Sci. Math. Sci. 2007, 27(2):208–218. (in Chinese)


  22. Gao CB, Wen JJ: Inequalities of Jensen-Pečarić-Svrtan-Fan type. J. Inequal. Pure Appl. Math. 2008., 9: Article ID 74


  23. Timofte V: On the positivity of symmetric polynomial functions. J. Math. Anal. Appl. 2003, 284: 174–190. 10.1016/S0022-247X(03)00301-9


  24. Wen JJ, Yuan J, Yuan SF: An optimal version of an inequality involving the third symmetric means. Proc. Indian Acad. Sci. Math. Sci. 2008, 118(4):505–516. 10.1007/s12044-008-0038-0


  25. Gao CB, Wen JJ: Theory of surround system and associated inequalities. Comput. Math. Appl. 2012, 63: 1621–1640. 10.1016/j.camwa.2012.03.037


  26. Wen JJ, Wang WL: Chebyshev type inequalities involving permanents and their application. Linear Algebra Appl. 2007, 422(1):295–303. 10.1016/j.laa.2006.10.014


  27. Wen JJ, Wang WL: The inequalities involving generalized interpolation polynomial. Comput. Math. Appl. 2008, 56(4):1045–1058. 10.1016/j.camwa.2008.01.032


  28. Gardner RJ: The Brunn-Minkowski inequality. Bull., New Ser., Am. Math. Soc. 2002, 39: 355–405. 10.1090/S0273-0979-02-00941-2



Acknowledgements

This work is supported by the Natural Science Foundation of China (No. 10671136) and the Natural Science Foundation of Sichuan Province Education Department (No. 07ZA207).

Author information


Corresponding author

Correspondence to Tianyong Han.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Wen, J., Han, T. & Cheng, S.S. Inequalities involving Dresher variance mean. J Inequal Appl 2013, 366 (2013). https://doi.org/10.1186/1029-242X-2013-366


  • DOI: https://doi.org/10.1186/1029-242X-2013-366
