
Admissibility in general linear model with respect to an inequality constraint under balanced loss

Abstract

Since Zellner (Bayesian and Non-Bayesian Estimation Using Balanced Loss Functions, pp. 377-390, 1994) proposed the balanced loss function, it has attracted many researchers to the field. In this paper, under a generalized balanced loss function, we investigate the admissibility of linear estimators of the regression coefficient in the general Gauss-Markov model (GGM) with respect to an inequality constraint. Necessary and sufficient conditions for linear estimators of the regression coefficient to be admissible are established in the classes of homogeneous and inhomogeneous linear estimators, respectively.

MSC: 62C05, 62F10.

1 Introduction

Throughout this paper, the symbols A′, μ(A), A⁺, A⁻, rk(A), and tr(A) stand for the transpose, range, Moore-Penrose inverse, generalized inverse, rank, and trace of a matrix A, respectively.

Consider the following Gauss-Markov model:

y = Xβ + ε,  E(ε) = 0,  Cov(ε) = σ²I_n,
(1.1)

where y is an n×1 observable random vector, X is an n×p known design matrix with rk(X) = p, ε is an n×1 random error vector, and β and σ² are unknown parameters.

Since rk(X) = p, β in model (1.1) is estimable, i.e., there exists a matrix A such that E(AY) = β. The classical estimator of the regression coefficient is the least squares estimator β̂ = (X′X)⁻¹X′y, which is the value of d that minimizes the following expression:

(y − Xd)′(y − Xd).
(1.2)

It is also the best linear unbiased estimator (BLUE) of β, and it reflects the goodness of fit of the model. For any estimator of β, its precision is widely used to determine whether it is good or not. That is, under the quadratic loss function

(d − β)′(d − β),
(1.3)

we select the estimator that achieves the minimum risk. Zellner [1] combined the two criteria above and proposed the concept of balanced loss. The balanced loss function is defined as

L_0(d(y), β, σ²) = w(y − Xd)′(y − Xd) + (1 − w)(d − β)′S(d − β),
(1.4)

where 0 ≤ w ≤ 1 and S is a known positive definite matrix. The balanced loss function takes both the precision of the estimator and the goodness of fit of the model into account. Compared with the criteria (1.2) and (1.3), it is a more comprehensive measure of an estimator.
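For concreteness, the following Python sketch (hypothetical data; our illustration, not part of the paper) evaluates the balanced loss (1.4): setting w = 1 recovers the least squares criterion (1.2), while w = 0 recovers the quadratic loss (1.3).

    import numpy as np

    def balanced_loss(y, X, d, beta, S, w):
        # Zellner's balanced loss (1.4): weighted goodness of fit plus precision
        fit = y - X @ d
        err = d - beta
        return w * fit @ fit + (1 - w) * err @ S @ err

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 3))
    beta = np.array([1.0, -2.0, 0.5])
    y = X @ beta + rng.standard_normal(20)
    d = np.linalg.solve(X.T @ X, X.T @ y)      # least squares estimate
    print(balanced_loss(y, X, d, beta, np.eye(3), w=0.5))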

Much work has been done on parameter estimation under the balanced loss function: [2–5] studied the risk functions of some specific estimators, [6–8] worked on applications of the balanced loss function, and [9–13] investigated the goodness of estimators under the balanced loss function.

In model (1.1), the errors are homoscedastic and uncorrelated. In most real problems, however, this condition is not satisfied, and model (1.1) is generalized to the following one:

y = Xβ + ε,  E(ε) = 0,  Cov(ε) = σ²V.
(1.5)

For model (1.5), the BLUE of β is β̂_R = (X′D⁺X)⁻¹X′D⁺y. According to Rao's unified theory of least squares, it is the value of d that minimizes the following expression:

(y − Xd)′D⁺(y − Xd),

where D = V + XX′. One can show that when V is nonsingular, β̂_R = (X′V⁻¹X)⁻¹X′V⁻¹y is the generalized least squares estimator. Therefore, the balanced loss function (1.4), which ignores V in its goodness-of-fit term, is not suitable for this model. Based on the idea of balanced loss in [1], we propose a generalized balanced loss

L(d(y), β, σ²) = w(y − Xd)′D⁺(y − Xd) + (1 − w)(d − β)′S(d − β),
(1.6)

where 0 ≤ w ≤ 1 and S is a known matrix.
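A minimal numerical sketch (our illustration; all data randomly generated) of Rao's unified estimator β̂_R, confirming that it coincides with the generalized least squares estimator when V is nonsingular:

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 10, 2
    X = rng.standard_normal((n, p))
    M = rng.standard_normal((n, n))
    V = M @ M.T + np.eye(n)                    # a nonsingular covariance
    y = X @ np.array([2.0, -1.0]) + rng.multivariate_normal(np.zeros(n), V)

    D = V + X @ X.T
    Dp = np.linalg.pinv(D)                     # Moore-Penrose inverse D^+

    # Rao's unified least squares: beta_R = (X'D^+X)^{-1} X'D^+ y
    beta_R = np.linalg.solve(X.T @ Dp @ X, X.T @ Dp @ y)

    # generalized least squares, valid since V is nonsingular
    Vi = np.linalg.inv(V)
    beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

    print(np.allclose(beta_R, beta_gls))       # True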

In most cases, we have some prior information in model (1.5); for example, the parameters may be constrained to some subset, such as an inequality or an ellipsoidal constraint. In this paper, considering model (1.5) with the balanced loss (1.6), we investigate the admissibility of linear estimators of the regression coefficient in the linear model with an inequality constraint. The inequality constraint we discuss is

T = {(β, σ²) : β ∈ C = {β : r′β ≥ 0}, σ² > 0},
(1.7)

where r is a known vector. If r = 0, the constraint condition always holds, so this model embraces the unconstrained case.

Definition 1.1 Suppose d_1(Y) and d_2(Y) are two estimators of β. If, for any (β, σ²) ∈ T, we have

R(d_1, β, σ²) ≤ R(d_2, β, σ²)

and there exists (β_0, σ_0²) ∈ T such that R(d_2, β_0, σ_0²) > R(d_1, β_0, σ_0²), where the risk function is R(d, β, σ²) = E L(d, β, σ²), then d_1(Y) is said to be better than d_2(Y). If there does not exist any estimator in the class Ξ that is better than d(Y), where the parameters (β, σ²) take values in T, then d(Y) is called an admissible estimator of Kβ in the class Ξ. We denote it by d(Y) ∼_Ξ Kβ [T].

We use the following notations in this paper.

H_L = {AY : A is a p×n matrix},  L = {AY + a : A is a p×n matrix, a ∈ R^p},

where H_L is the class of homogeneous linear estimators and L is the class of inhomogeneous linear estimators.

Admissibility is among the most basic rationality requirements of classical statistical decision theory. When the parameters are unconstrained, comprehensive results have been obtained; for instance, [14–17], among others, studied admissibility in the univariate linear model. As [18, 19] pointed out, when the parameters are constrained, the least squares estimator may not be admissible, so it is significant to discuss the admissibility of linear estimators in linear models with constraints. For the Gauss-Markov model with constraints, [20] developed admissible estimators, and a number of other researchers have devoted themselves to this study: [21–24] studied admissibility in the linear model with an ellipsoidal constraint, while [25–28] studied the admissibility of linear estimators of the parameters in the univariate and multivariate linear models with an inequality constraint under the quadratic and matrix losses, respectively. However, under the balanced loss, the model with an inequality constraint has not been considered.

2 Admissibility in the class of homogeneous linear estimators

In this section, we study the admissibility in the class of homogeneous linear estimators. Let the quadratic loss in model (1.5) be

(d(Y) − g(β))′(d(Y) − g(β)),
(2.1)

where d(Y) is an estimator of g(β).

Lemma 2.1 Consider the model (1.5) with the loss function (2.1), where g(β) = Kβ. Then AY ∼_{H_L} Kβ [T] if and only if

(1) AV = AX(X′D⁺X)⁻¹X′D⁺V;

(2) AXWX′A′ ≤ AXWK′;

(3) rk[(AX − K)W] = rk(AX − K),

where D = V + XX′ and W = (X′D⁺X)⁻¹ − I_p.

Proof The proof can be obtained from Theorem 2.1 in [26] and Theorem 2.1 in [16]. □
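The matrices D and W, together with P_X = X(X′D⁺X)⁻¹X′D⁺ introduced in Lemma 2.2 below, satisfy several identities used repeatedly in this section. The following sketch (random singular V; our illustration, not part of the paper) verifies them numerically.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 8, 3
    X = rng.standard_normal((n, p))
    L = rng.standard_normal((n, n - 2))
    V = L @ L.T                                # a singular covariance matrix

    D = V + X @ X.T
    Dp = np.linalg.pinv(D)                     # Moore-Penrose inverse D^+
    G = X.T @ Dp @ X                           # X'D^+X, nonsingular since rk(X) = p
    W = np.linalg.inv(G) - np.eye(p)           # W = (X'D^+X)^{-1} - I_p
    PX = X @ np.linalg.inv(G) @ X.T @ Dp       # P_X = X(X'D^+X)^{-1}X'D^+

    print(np.allclose(V @ Dp @ X, X @ (np.eye(p) - G)))   # VD^+X = X(I_p - X'D^+X)
    print(np.allclose(PX @ V, X @ W @ X.T))               # P_X V = XWX'
    print(np.allclose(PX @ V, PX @ V @ PX.T))             # P_X V = P_X V P_X'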

Lemma 2.2 Under model (1.5) with the loss function (1.6), suppose AY ∈ H_L is an estimator of β. Then we have

R(AY, β, σ²) ≥ R(AP_XY, β, σ²),
(2.2)

and the equality holds if and only if

AV = AP_XV,
(2.3)

where P_X = X(X′D⁺X)⁻¹X′D⁺.

Proof Since

R(AY, β, σ²) = E{w(Y − XAY)′D⁺(Y − XAY) + (1 − w)(AY − β)′S(AY − β)} = σ²[w tr(VD⁺) − 2w tr(AVD⁺X) + tr(AVA′B)] + β′(AX − I_p)′B(AX − I_p)β,
(2.4)

where B = wX′D⁺X + (1 − w)S > 0. Noticing that VD⁺X = (D − XX′)D⁺X = X(I_p − X′D⁺X), we have P_XVD⁺X = VD⁺X.

Therefore,

R(AY, β, σ²) − R(AP_XY, β, σ²) = σ² tr[(AVA′ − AP_XVP_X′A′)B] = σ² tr[A(I_n − P_X)V(I_n − P_X)′A′B] ≥ 0,
(2.5)

and the equality holds if and only if AV = AP_XV. □
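A Monte Carlo sketch (our illustration; all dimensions and inputs are hypothetical) that checks the closed-form risk (2.4) against simulation and confirms the inequality (2.2):

    import numpy as np

    rng = np.random.default_rng(3)
    n, p, w, sigma2 = 6, 2, 0.4, 1.5
    X = rng.standard_normal((n, p))
    L = rng.standard_normal((n, n - 1)); V = L @ L.T
    S = np.eye(p)
    beta = np.array([1.0, 2.0])
    A = rng.standard_normal((p, n))

    D = V + X @ X.T; Dp = np.linalg.pinv(D)
    B = w * X.T @ Dp @ X + (1 - w) * S
    PX = X @ np.linalg.inv(X.T @ Dp @ X) @ X.T @ Dp

    def risk_closed_form(A):
        # risk formula (2.4) for the homogeneous estimator AY
        trace = (w * np.trace(V @ Dp) - 2 * w * np.trace(A @ V @ Dp @ X)
                 + np.trace(A @ V @ A.T @ B))
        M = A @ X - np.eye(p)
        return sigma2 * trace + beta @ M.T @ B @ M @ beta

    def risk_mc(A, reps=200_000):
        # simulate the loss (1.6) and average it
        eps = rng.multivariate_normal(np.zeros(n), sigma2 * V, size=reps)
        Y = X @ beta + eps
        d = Y @ A.T
        fit = Y - d @ X.T
        return (w * np.einsum('ij,jk,ik->i', fit, Dp, fit)
                + (1 - w) * np.einsum('ij,jk,ik->i', d - beta, S, d - beta)).mean()

    print(risk_closed_form(A), risk_mc(A))                  # approximately equal
    print(risk_closed_form(A) >= risk_closed_form(A @ PX))  # Lemma 2.2: True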

Remark 2.1 This lemma indicates that the class of estimators {AP_XY : A is a p×n matrix} is a complete class within H_L. That is, for any estimator δ not in {AP_XY : A is a p×n matrix}, there exists an estimator δ* in {AP_XY : A is a p×n matrix} such that δ* is better than δ.

Consider the following linear model:

Z = (X′D⁺X)β + ε,  E(ε) = 0,  Cov(ε) = σ²(X′D⁺VD⁺X).
(2.6)

Let C = (1 − w)B⁻¹S; clearly, Cβ is estimable in model (2.6). We take the loss function

L_B(d(Z), Cβ, σ²) = (d(Z) − Cβ)′B(d(Z) − Cβ).
(2.7)

Lemma 2.3 Under model (1.5) with the loss function (1.6), suppose A_1Y, AY ∈ H_L. Then R(A_1P_XY, β, σ²) ≤ R(AP_XY, β, σ²) for any (β, σ²) ∈ T if and only if

(1) tr(A_1XWX′A_1′B) − 2w tr(A_1VD⁺X) ≤ tr(AXWX′A′B) − 2w tr(AVD⁺X);
(2.8)

(2) β′(A_1X − I_p)′B(A_1X − I_p)β ≤ β′(AX − I_p)′B(AX − I_p)β for any β ∈ C.
(2.9)

Proof The lemma can easily be verified from (2.4). □

Lemma 2.4 Consider the model (1.5) with the loss function (1.6) and suppose AY ∈ H_L. Then AP_XY ∼_{H_L} β [T] if and only if ÃZ ∼_{H_L} Cβ [T] in model (2.6) with the loss function (2.7), where Ã = AX(X′D⁺X)⁻¹ − wB⁻¹.

Proof Since P_XV = P_XVP_X′ = XWX′, we have

R(AP_XY, β, σ²) = E L(AP_XY, β, σ²) = E{w(Y − XAP_XY)′D⁺(Y − XAP_XY) + (1 − w)(AP_XY − β)′S(AP_XY − β)} = σ²[w tr(VD⁺) − 2w tr(AVD⁺X) + tr(AXWX′A′B)] + β′(AX − I_p)′B(AX − I_p)β.
(2.10)

Notice that

tr[Ã(X′D⁺VD⁺X)Ã′B] = tr(AXWX′A′B) − 2w tr(AVD⁺X) + w² tr(X′D⁺VD⁺XB⁻¹),  Ã(X′D⁺X) − C = AX − I_p.

Therefore,

E L_B(ÃZ, Cβ, σ²) = E(ÃZ − Cβ)′B(ÃZ − Cβ) = σ² tr[Ã(X′D⁺VD⁺X)Ã′B] + β′[Ã(X′D⁺X) − C]′B[Ã(X′D⁺X) − C]β = σ²[w² tr(X′D⁺VD⁺XB⁻¹) − 2w tr(AVD⁺X) + tr(AXWX′A′B)] + β′(AX − I_p)′B(AX − I_p)β.
(2.11)

Equations (2.10) and (2.11) together with Lemma 2.3 indicate that if there exists an estimator A_1P_XY of β better than AP_XY, then Ã_1Z, the corresponding estimator of Cβ with Ã_1 = A_1X(X′D⁺X)⁻¹ − wB⁻¹, is better than ÃZ, and conversely. □
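The two identities displayed above can also be checked numerically; the sketch below (our illustration, with random inputs) confirms them, which is what makes the risks (2.10) and (2.11) differ only by a term free of A.

    import numpy as np

    rng = np.random.default_rng(5)
    n, p, w = 7, 2, 0.3
    X = rng.standard_normal((n, p))
    L = rng.standard_normal((n, n - 1)); V = L @ L.T
    S = np.eye(p)
    D = V + X @ X.T; Dp = np.linalg.pinv(D)
    G = X.T @ Dp @ X                          # X'D^+X
    B = w * G + (1 - w) * S; Binv = np.linalg.inv(B)
    W = np.linalg.inv(G) - np.eye(p)
    C = (1 - w) * Binv @ S                    # C = (1-w)B^{-1}S

    A = rng.standard_normal((p, n))
    At = A @ X @ np.linalg.inv(G) - w * Binv  # A~ = AX(X'D^+X)^{-1} - wB^{-1}

    lhs = np.trace(At @ (X.T @ Dp @ V @ Dp @ X) @ At.T @ B)
    rhs = (np.trace(A @ X @ W @ X.T @ A.T @ B)
           - 2 * w * np.trace(A @ V @ Dp @ X)
           + w ** 2 * np.trace(X.T @ Dp @ V @ Dp @ X @ Binv))
    print(np.isclose(lhs, rhs))                        # True
    print(np.allclose(At @ G - C, A @ X - np.eye(p)))  # A~(X'D^+X) - C = AX - I_p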

Lemma 2.5 Consider the model (2.6). Then AZ ∼_{H_L} Cβ [T] under the loss function (2.7) if and only if AZ ∼_{H_L} Cβ [T] under the quadratic loss (2.1).

Proof The proof is straightforward. We omit the details. □

Theorem 2.1 Consider the model (2.6) with the loss function (2.7). Then AZ ∼_{H_L} Cβ [T] if and only if AZ ∼_{H_L} Cβ.

Proof The necessity is trivial, so we only need to prove the sufficiency. Suppose there exists A_1Z that is better than AZ for (β, σ²) ∈ T. By Lemma 2.3, (2.8) and (2.9) hold for any β ∈ C. Notice that (2.9) still holds if β is replaced by −β; in other words, (2.9) holds for any β ∈ C̃ = {β : −β ∈ C}. Since C ∪ C̃ = R^p, (2.8) and (2.9) hold for any (β, σ²), which contradicts AZ ∼_{H_L} Cβ. □

Theorem 2.2 Consider the model (1.5) with the loss function (1.6). Then AY ∼_{H_L} β [T] if and only if

(1) AV = AP_XV;

(2) ĀXWX′Ā′ ≤ (1 − w)ĀXWSB⁻¹;

(3) rk[(AX − I_p)W] = rk(AX − I_p),

where Ā = A − wB⁻¹X′D⁺.

Proof By Lemma 2.2, condition (1) must hold. Further, under (1), AY ∼_{H_L} β [T] is equivalent to AP_XY ∼_{H_L} β [T]. By Lemma 2.4, this holds if and only if ÃZ ∼_{H_L} Cβ [T] in model (2.6) with the loss (2.7), where Ã = AX(X′D⁺X)⁻¹ − wB⁻¹; by Lemma 2.5 and Theorem 2.1, this is in turn equivalent to ÃZ ∼_{H_L} Cβ in model (2.6) under the quadratic loss (2.1). Therefore, when condition (1) is satisfied, Lemma 2.1 (applied with K = C) and simple computations show that AY ∼_{H_L} β [T] holds if and only if (2) and (3) are satisfied. □

Remark 2.2 The following example indicates that the conditions in the above theorem can be satisfied.

Consider the following example: take X = S = I_2 and V = diag(1, 0); then D = diag(2, 1) and W = diag(1, 0). Also let w = 0.5, so that the loss function (1.6) becomes

L(d(y), β, σ²) = ½[(y − d)′D⁺(y − d) + (d − β)′(d − β)].

For the diagonal matrix A = diag(a, b), we consider the admissibility of Ay. Condition (1) of Theorem 2.2 is satisfied; condition (3) implies b = 1, and condition (2) implies 1/3 ≤ a ≤ 1. Thus Ay is an admissible estimator of β if and only if b = 1 and 1/3 ≤ a ≤ 1.

3 Admissibility in the class of inhomogeneous linear estimators

In this section, we study the admissibility in the class of inhomogeneous linear estimators.

Lemma 3.1 Let C be a cone in R^p. For any vector b and real number d,

β′b + d ≤ 0,  ∀β ∈ C
(3.1)

if and only if b ∈ C* and d ≤ 0, where C* = {α : α′β ≤ 0, ∀β ∈ C} is the dual cone of C.

Proof This lemma can be found in [26]. □
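For the half-space cone C = {β : r′β ≥ 0} of (1.7), the dual cone of Lemma 3.1 takes the explicit form C* = {−λr : λ ≥ 0} (our remark; it follows directly from the definition). The sketch below (our illustration) checks both directions of the lemma numerically:

    import numpy as np

    rng = np.random.default_rng(4)
    p = 3
    r = rng.standard_normal(p)

    # sample points of C = {β : r'β >= 0}
    betas = rng.standard_normal((10_000, p))
    betas = betas[betas @ r >= 0]

    def holds_on_C(b, d):
        # does β'b + d <= 0 hold on the sampled points of C?
        return bool(np.all(betas @ b + d <= 1e-12))

    print(holds_on_C(-2.0 * r, -0.5))   # b in C*, d <= 0  -> True
    print(holds_on_C(-2.0 * r, 0.5))    # d > 0            -> False
    print(holds_on_C(rng.standard_normal(p), -0.5))  # generic b not in C*: False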

Theorem 3.1 Consider the model (1.5) with the loss function (1.6). If AY + a ∼_L β [T], then

(1) a ∈ μ(AX − I_p);

(2) α′(AX − I_p)⁺a ≥ 0, ∀α ∈ μ((AX − I_p)′) ∩ C*;

(3) AY ∼_{H_L} β [T].

Proof (1) Let P be the orthogonal projection matrix onto μ(B^{1/2}(AX − I_p)) and take b = B^{−1/2}PB^{1/2}a; then b ∈ μ(AX − I_p). Since

R(AY + a, β, σ²) = σ²[w tr(VD⁺) − 2w tr(AVD⁺X) + tr(AVA′B)] + [(AX − I_p)β + a]′B[(AX − I_p)β + a].
(3.2)

Therefore,

R(AY + a, β, σ²) − R(AY + b, β, σ²) = [(AX − I_p)β + a]′B[(AX − I_p)β + a] − [(AX − I_p)β + b]′B[(AX − I_p)β + b] = a′Ba − b′Bb = a′B^{1/2}(I_p − P)B^{1/2}a ≥ 0,
(3.3)

and the equality holds if and only if B^{1/2}a = PB^{1/2}a, i.e., a = B^{−1/2}PB^{1/2}a = b. This means that if a ∉ μ(AX − I_p), then AY + b is better than AY + a, a contradiction.

(2) Assume there exists α ∈ μ((AX − I_p)′) ∩ C* such that α′(AX − I_p)⁺a < 0. Then there exists α_0 such that α = (AX − I_p)′α_0. Take b = a + λB⁻¹α_0, where λ > 0. For any (β, σ²) ∈ T, we have

R(AY + b, β, σ²) − R(AY + a, β, σ²) = 2λα′β + 2λα′(AX − I_p)⁺a + λ²α_0′B⁻¹α_0.

Since α ∈ C* and α′(AX − I_p)⁺a < 0, according to Lemma 3.1, for any λ small enough and any (β, σ²) ∈ T,

R(AY + b, β, σ²) − R(AY + a, β, σ²) ≤ 0.

Hence AY + b is better than AY + a, which contradicts AY + a ∼_L β [T].

(3) By (1), there exists a_0 such that a = (AX − I_p)a_0. Suppose A_1Y is as good as AY; thus, for any (β, σ²) ∈ T,

R(A_1Y, β, σ²) ≤ R(AY, β, σ²).

By Lemma 2.3, (2.8) and (2.9) hold. Notice that (2.9) still holds for any β ∈ C̃ = {β : −β ∈ C}, and C ∪ C̃ = R^p; therefore (2.9) is equivalent to

(A_1X − I_p)′B(A_1X − I_p) ≤ (AX − I_p)′B(AX − I_p).
(3.4)

We obtain, from (2.8) and (3.4), for any (β, σ²) ∈ T,

σ²[w tr(VD⁺) − 2w tr(A_1VD⁺X) + tr(A_1VA_1′B)] + (β + a_0)′(A_1X − I_p)′B(A_1X − I_p)(β + a_0) ≤ σ²[w tr(VD⁺) − 2w tr(AVD⁺X) + tr(AVA′B)] + (β + a_0)′(AX − I_p)′B(AX − I_p)(β + a_0).
(3.5)

That is,

R(A_1Y + (A_1X − I_p)a_0, β, σ²) ≤ R(AY + a, β, σ²).
(3.6)

Since AY + a ∼_L β [T], the equality in (3.6) must hold, and the equality in (3.6) holds if and only if the equality in (3.5) holds. Notice that for (β, σ²) ∈ T and any λ > 0, we have (λβ, σ²) ∈ T. Therefore,

R(A_1Y, β, σ²) = σ²[w tr(VD⁺) − 2w tr(A_1VD⁺X) + tr(A_1VA_1′B)] + β′(A_1X − I_p)′B(A_1X − I_p)β = σ²[w tr(VD⁺) − 2w tr(AVD⁺X) + tr(AVA′B)] + β′(AX − I_p)′B(AX − I_p)β = R(AY, β, σ²).

This implies that no estimator is better than AY. Thus, AY ∼_{H_L} β [T]. □

In fact, the converse part of Theorem 3.1 is also true. We present this in the following theorem.

Theorem 3.2 Consider the model (1.5) with the loss function (1.6). Then AY + a ∼_L β [T] holds if and only if

(1) a ∈ μ(AX − I_p);

(2) α′(AX − I_p)⁺a ≥ 0, ∀α ∈ μ((AX − I_p)′) ∩ C*;

(3) AY ∼_{H_L} β [T].

Proof By the proof of (1) in Theorem 3.1, we only need to prove that there do not exist a p×n matrix A_1 and b ∈ R^p such that A_1Y + (A_1X − I_p)b is better than AY + (AX − I_p)a_0, where (AX − I_p)a_0 = a.

Suppose A_1Y + (A_1X − I_p)b is as good as AY + (AX − I_p)a_0; then for any (β, σ²) ∈ T,

R(A_1Y + (A_1X − I_p)b, β, σ²) ≤ R(AY + (AX − I_p)a_0, β, σ²).

Hence,

σ²[w tr(VD⁺) − 2w tr(A_1VD⁺X) + tr(A_1VA_1′B)] + (β + b)′(A_1X − I_p)′B(A_1X − I_p)(β + b) ≤ σ²[w tr(VD⁺) − 2w tr(AVD⁺X) + tr(AVA′B)] + (β + a_0)′(AX − I_p)′B(AX − I_p)(β + a_0).
(3.7)

Notice that (β, kσ²) ∈ T for any k > 0; plugging this into (3.7) and letting k go to ∞ and to 0, respectively, we have

tr(A_1XWX′A_1′B) − 2w tr(A_1VD⁺X) ≤ tr(AXWX′A′B) − 2w tr(AVD⁺X)

and

(β + b)′(A_1X − I_p)′B(A_1X − I_p)(β + b) ≤ (β + a_0)′(AX − I_p)′B(AX − I_p)(β + a_0).
(3.8)

Similarly, replacing β with λβ in (3.8) and letting λ go to ∞, we have

β′(A_1X − I_p)′B(A_1X − I_p)β ≤ β′(AX − I_p)′B(AX − I_p)β.

Therefore, R(A_1Y, β, σ²) ≤ R(AY, β, σ²). Since AY ∼_{H_L} β [T], we get

σ²[w tr(VD⁺) − 2w tr(A_1VD⁺X) + tr(A_1VA_1′B)] + β′(A_1X − I_p)′B(A_1X − I_p)β = σ²[w tr(VD⁺) − 2w tr(AVD⁺X) + tr(AVA′B)] + β′(AX − I_p)′B(AX − I_p)β.

Using the same technique, for any (β, σ²) ∈ T, we have

tr(A_1XWX′A_1′B) − 2w tr(A_1VD⁺X) = tr(AXWX′A′B) − 2w tr(AVD⁺X),
(3.9)
(A_1X − I_p)′B(A_1X − I_p) = (AX − I_p)′B(AX − I_p).
(3.10)

From (3.7), (3.9), and (3.10), we get, for any (β, σ²) ∈ T,

2β′(AX − I_p)′B(AX − I_p)b + b′(AX − I_p)′B(AX − I_p)b ≤ 2β′(AX − I_p)′B(AX − I_p)a_0 + a_0′(AX − I_p)′B(AX − I_p)a_0.

Hence,

2β′(AX − I_p)′B(AX − I_p)[b − (AX − I_p)⁺a] + b′(AX − I_p)′B(AX − I_p)b − a′Ba ≤ 0.

From Lemma 3.1,

b′(AX − I_p)′B(AX − I_p)b − a′Ba ≤ 0,
(3.11)
(AX − I_p)′B(AX − I_p)[b − (AX − I_p)⁺a] ∈ C*.
(3.12)

This together with the condition (2) implies that

[b − (AX − I_p)⁺a]′(AX − I_p)′B(AX − I_p)(AX − I_p)⁺a = b′(AX − I_p)′Ba − a′Ba ≥ 0.

Hence,

b′(AX − I_p)′Ba ≥ a′Ba.
(3.13)

From (3.11) and (3.13), we have [(AX − I_p)b − a]′B[(AX − I_p)b − a] ≤ 0. Thus [(AX − I_p)b − a]′B[(AX − I_p)b − a] = 0, i.e.,

B(AX − I_p)b = Ba = B(AX − I_p)a_0.
(3.14)

Plugging (3.9), (3.10), and (3.14) into (3.7), we find that the equality in (3.7) holds. This means that there does not exist an estimator better than AY + a. Therefore, AY + a ∼_L β [T] holds. □

We summarize Theorem 2.2 and Theorem 3.2 in the following theorem.

Theorem 3.3 Consider the model (1.5) with the loss function (1.6). Then AY + a ∼_L β [T] holds if and only if

(1) a ∈ μ(AX − I_p);

(2) α′(AX − I_p)⁺a ≥ 0, ∀α ∈ μ((AX − I_p)′) ∩ C*;

(3) AV = AP_XV;

(4) ĀXWX′Ā′ ≤ (1 − w)ĀXWSB⁻¹;

(5) rk[(AX − I_p)W] = rk(AX − I_p),

where Ā = A − wB⁻¹X′D⁺.
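As a final illustration (our construction, not from the paper), the conditions of Theorem 3.3 can be checked for the example of Remark 2.2 extended with a shift: X = S = I_2, V = diag(1, 0), w = 0.5, A = diag(a, 1) with 1/3 ≤ a < 1, estimator Ay + a_0, and constraint cone C = {β : β_1 ≥ 0}, i.e., r = (1, 0)′. Conditions (3)-(5) hold by Remark 2.2; since AX − I_2 = diag(a − 1, 0), condition (1) forces a_0 = (t, 0)′, and condition (2) reduces to t ≥ 0, so only shifts pointing into the constrained half-space survive.

    import numpy as np

    a, t = 0.5, 2.0
    M = np.diag([a - 1.0, 0.0])          # M = AX - I_2
    a0 = np.array([t, 0.0])              # candidate shift
    Mp = np.linalg.pinv(M)               # (AX - I_2)^+

    # condition (1): a0 lies in the range of AX - I_2
    print(np.allclose(M @ Mp @ a0, a0))  # True

    # condition (2): α'(AX - I_2)^+ a0 >= 0 for α in μ((AX - I_2)') ∩ C*,
    # with C* = {-λ(1, 0)' : λ >= 0}; one representative suffices here
    alpha = -np.array([1.0, 0.0])
    print(alpha @ Mp @ a0 >= 0)          # True exactly when t >= 0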

Conclusion

In this paper, under a generalized balanced loss function, we studied the admissibility of linear estimators of the regression coefficient in the general Gauss-Markov model with respect to an inequality constraint. Necessary and sufficient conditions for linear estimators of the regression coefficient to be admissible were obtained in the classes of homogeneous and inhomogeneous linear estimators, respectively.

References

  1. Zellner A: Bayesian and Non-Bayesian Estimation Using Balanced Loss Functions. Springer, Berlin; 1994:377–390.


  2. Rodrigues J, Zellner A: Weighted balanced loss function and estimation of the mean time to failure. Commun. Stat., Theory Methods 1994,23(12):3609–3616. 10.1080/03610929408831468


  3. Wan AT: Risk comparison of the inequality constrained least squares and other related estimators under balanced loss. Econ. Lett. 1994,46(3):203–210. 10.1016/0165-1765(94)00485-4


  4. Giles JA, Giles DE, Ohtani K: The exact risks of some pre-test and Stein-type regression estimators under balanced loss. Commun. Stat., Theory Methods 1996,25(12):2901–2924. 10.1080/03610929608831878


  5. Ohtani K: The exact risk of a weighted average estimator of the OLS and Stein-rule estimators in regression under balanced loss. Stat. Risk Model. 1998,16(1):35–46.


  6. Shalabh : Least squares estimators in measurement error models under the balanced loss function. Test 2001,10(2):301–308. 10.1007/BF02595699


  7. Gruber DMH: The efficiency of shrinkage estimators with respect to Zellner’s balanced loss function. Commun. Stat., Theory Methods 2004,33(2):235–249. 10.1081/STA-120028372


  8. Akdeniz F, Wan AT, Akdeniz E: Generalized Liu type estimators under Zellner’s balanced loss function. Commun. Stat., Theory Methods 2005,34(8):1725–1736. 10.1081/STA-200066357


  9. Dey DK, Ghosh M, Strawderman WE: On estimation with balanced loss functions. Stat. Probab. Lett. 1999,45(2):97–101. 10.1016/S0167-7152(99)00047-4


  10. Ohtani K: Inadmissibility of the Stein-rule estimator under the balanced loss function. J. Econom. 1999,88(1):193–201. 10.1016/S0304-4076(98)00030-X


  11. Xu X, Wu Q: Linear admissible estimators of regression coefficient under balanced loss. Acta Math. Sci. 2000, 4: 468–473.


  12. Jozani JM, Marchand É, Parsian A: On estimation with weighted balanced-type loss function. Stat. Probab. Lett. 2006,76(8):773–780. 10.1016/j.spl.2005.10.026


  13. Cao M: ϕ admissibility for linear estimators on regression coefficients in a general multivariate linear model under balanced loss function. J. Stat. Plan. Inference 2009,139(9):3354–3360. 10.1016/j.jspi.2009.03.013


  14. Rao CR: Estimation of parameters in a linear model. Ann. Stat. 1976,4(6):1023–1037. 10.1214/aos/1176343639


  15. LaMotte LR: Admissibility in linear estimation. Ann. Stat. 1982,10(1):245–255. 10.1214/aos/1176345707


  16. Wu Q: Admissibility of linear estimators of regression coefficient in a general Gauss-Markoff model. Acta Math. Appl. Sin. 1986, 2: 251–256.


  17. Dong L, Wu Q: The sufficient and necessary conditions of admissible linear estimates for random regression coefficients and parameters under the quadratic loss function. Acta Math. Sin. 1988,31(2):145–157.


  18. Marquaridt DW: Generalized inverses, ridge regression, biased linear estimation, and nonlinear estimation. Technometrics 1970,12(3):591–612.


  19. Perlman MD: Reduced mean square error estimation for several parameters. Sankhyā, Ser. B 1972,34(1):89–92.


  20. Hoffmann K: Admissibility of linear estimators with respect to restricted parameter sets. Stat.: J. Theor. Appl. Stat. 1977,8(4):425–438.


  21. Mathew T: Admissible linear estimation in singular linear models with respect to a restricted parameter set. Commun. Stat., Theory Methods 1985,14(2):491–498. 10.1080/03610928508828927


  22. Lu C: Admissibility of inhomogeneous linear estimators in linear models with respect to incomplete ellipsoidal restrictions. Commun. Stat., Theory Methods 1995,24(7):1737–1742. 10.1080/03610929508831582


  23. Zhang S, Gui W: Admissibility of linear estimators in a growth curve model subject to an incomplete ellipsoidal restriction. Acta Math. Sci. 2008,28(1):194–200. 10.1016/S0252-9602(08)60020-X


  24. Zhang S, Gui W, Liu G: Characterization of admissible linear estimators in the general growth curve model with respect to an incomplete ellipsoidal restriction. Linear Algebra Appl. 2009,431(1):120–131.


  25. Zhang S, Liu G, Gui W: Admissible estimators in the general multivariate linear model with respect to inequality restricted parameter set. J. Inequal. Appl. 2009., 2009: Article ID 718927


  26. Lu C, Shi N: Admissible linear estimators in linear models with respect to inequality constraints. Linear Algebra Appl. 2002,354(1):187–194.


  27. Zhang S, Fang Z, Qin H, Han L: Characterization of admissible linear estimators in the growth curve model with respect to inequality constraints. J. Korean Stat. Soc. 2011,40(2):173–179. 10.1016/j.jkss.2010.09.002


  28. Zhang S, Fang Z, Liu G: Characterization of admissible linear estimators in multivariate linear model with respect to inequality constraints under matrix loss function. Commun. Stat., Theory Methods 2013,42(15):2837–2850. 10.1080/03610926.2011.615441



Acknowledgements

This work was partially supported by National Natural Science Foundation of China (61070236, U1334211, 11371051) and the Project of State Key Laboratory of Rail Traffic Control and Safety (RCS2012ZT004), Beijing Jiaotong University.

Author information


Corresponding author

Correspondence to Wenhao Gui.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhang, S., Gui, W. Admissibility in general linear model with respect to an inequality constraint under balanced loss. J Inequal Appl 2014, 70 (2014). https://doi.org/10.1186/1029-242X-2014-70
