
Matrix spectral norm Wielandt inequalities with statistical applications

Abstract

In this article, we establish a new matrix spectral norm Wielandt inequality. We then apply it to obtain the upper bound of a new measure of association. Finally, a new spectral norm alternative to the relative gain of the covariance adjusted estimator of a parameter vector is given.

1 Introduction

Suppose that $A$ is an $n\times n$ positive definite symmetric matrix and that $x$ and $y$ are two nonnull real vectors satisfying $x'y=0$. Then

$$\frac{(x'Ay)^2}{(x'Ax)(y'Ay)}\le\left(\frac{\lambda_1-\lambda_n}{\lambda_1+\lambda_n}\right)^2,$$

(1)

where $\lambda_1\ge\cdots\ge\lambda_n>0$ are the ordered eigenvalues of $A$. Inequality (1) is usually called the Wielandt inequality in the literature; see Drury et al. [1]. Gustafson [2] discussed the geometrical meaning of this inequality.
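Inequality (1) is easy to check numerically. The following minimal sketch (ours, not part of the original paper) draws a random positive definite $A$ and a random orthogonal pair $x,y$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # a random symmetric positive definite A

lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
bound = ((lam[-1] - lam[0]) / (lam[-1] + lam[0])) ** 2

x = rng.standard_normal(n)
y = rng.standard_normal(n)
y = y - (x @ y) / (x @ x) * x        # project out x, so that x'y = 0

lhs = (x @ A @ y) ** 2 / ((x @ A @ x) * (y @ A @ y))
assert lhs <= bound + 1e-12          # inequality (1)
```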

Let the random vector $h$ have covariance matrix $A$. Then the maximum of the squared correlation is given as follows:

$$\max_{x,y:\,x'y=0}\operatorname{corr}^2(x'h,y'h)=\max_{x,y:\,x'y=0}\frac{(x'Ay)^2}{(x'Ax)(y'Ay)}=\left(\frac{\lambda_1-\lambda_n}{\lambda_1+\lambda_n}\right)^2.$$

(2)

If we set

$$y=A^{-1}x-\frac{x'A^{-1}x}{x'x}\,x,$$

(3)

then the Wielandt inequality (1) becomes the Kantorovich inequality:

$$\frac{(x'Ax)(x'A^{-1}x)}{(x'x)^2}\le\frac{(\lambda_1+\lambda_n)^2}{4\lambda_1\lambda_n}.$$

(4)

Many authors have studied the Kantorovich inequality; for more details, see Liu [3, 4], Rao and Rao [5], and Liu and Heyde [6].
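A quick numerical sanity check (ours) confirms that the substitution (3) makes $x$ and $y$ orthogonal and that (4) holds:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                        # positive definite
lam = np.linalg.eigvalsh(A)                    # ascending order

x = rng.standard_normal(n)
Ainv = np.linalg.inv(A)
y = Ainv @ x - (x @ Ainv @ x) / (x @ x) * x    # the substitution (3)
assert abs(x @ y) < 1e-8                       # (3) indeed gives x'y = 0

kant = (x @ A @ x) * (x @ Ainv @ x) / (x @ x) ** 2
assert kant <= (lam[-1] + lam[0]) ** 2 / (4 * lam[-1] * lam[0]) + 1e-12
```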

Wang and Ip [7] extended the Wielandt inequality to a matrix version, which can be expressed as follows. Let $X$ and $Y$ be $n\times p$ and $n\times q$ matrices satisfying $X'Y=0$. Then

$$X'AY(Y'AY)^{-}Y'AX\le\left(\frac{\lambda_1-\lambda_n}{\lambda_1+\lambda_n}\right)^2X'AX,$$

(5)

where the inequality in (5) refers to the Löwner partial ordering.
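The Löwner ordering in (5) can be checked numerically by verifying that the difference of the two sides is nonnegative definite. A minimal sketch (ours), with $Y$ of full column rank so that the generalized inverse is the ordinary inverse:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 7, 2, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
lam = np.linalg.eigvalsh(A)
c = ((lam[-1] - lam[0]) / (lam[-1] + lam[0])) ** 2

X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))
Y = Y - X @ np.linalg.solve(X.T @ X, X.T @ Y)    # enforce X'Y = 0

W = X.T @ A @ Y @ np.linalg.solve(Y.T @ A @ Y, Y.T @ A @ X)
D = c * (X.T @ A @ X) - W                        # (5) says D >= 0 (Loewner)
assert np.linalg.eigvalsh(D).min() >= -1e-10
```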

In inequality (5), $A$ is a positive definite matrix; Lu [8] extended $A$ to be a nonnegative definite matrix. Drury et al. [1] introduced matrix, determinant, and trace versions of the Wielandt inequality. Liu et al. [9] improved two matrix trace Wielandt inequalities and proposed statistical applications for them. Wang and Yang [10] presented a Euclidean norm matrix Wielandt inequality and showed its statistical applications. In this article, we provide a matrix spectral norm Wielandt inequality and give its applications to statistics.

The rest of the article is organized as follows. In Section 2, we present a matrix spectral norm version of the Wielandt inequality. In Section 3, a new measure of association based on the spectral norm is proposed and its upper bound is obtained by using the results of the previous section; we then propose a spectral norm alternative to the relative gain of the covariance adjusted estimator of the parameters and give its upper bound. Finally, some concluding remarks are offered in Section 4.

2 Matrix spectral norm Wielandt inequality

We start this section with some notation. Let $A\ge 0$ be an $n\times n$ nonnegative definite matrix of rank $a$ with $a\le n$; $A^{1/2}$ is the nonnegative definite square root of $A$; $X$ is an $n\times p$ matrix of rank $k$ with $k\le p\le a$; $(\cdot)^{-}$ stands for a generalized inverse of a matrix; $(\cdot)^{+}$ represents the Moore-Penrose inverse of a matrix; $\operatorname{rank}(\cdot)$ denotes the rank of a matrix; $(\cdot)'$ stands for the transpose of a matrix; and $\mathcal{C}(\cdot)$ stands for the column space of a matrix. Let $P_A=AA^{+}$ stand for the orthogonal projector onto the column space of the matrix $A$, and use the notation

$$H=P_X=XX^{+}$$

(6)

for the orthogonal projector onto $\mathcal{C}(X)$.
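In code, $H$ is obtained from the Moore-Penrose inverse; the following sketch (ours) checks the defining properties of the orthogonal projector (6):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 3))
H = X @ np.linalg.pinv(X)        # H = X X^+, as in (6)

assert np.allclose(H, H.T)       # an orthogonal projector is symmetric,
assert np.allclose(H @ H, H)     # idempotent,
assert np.allclose(H @ X, X)     # and acts as the identity on C(X)
```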

In order to prove the main results it is necessary to introduce some lemmas.

Lemma 2.1 [4]

Let $A\ge 0$ be an $n\times n$ matrix with rank $a$, and let $X$ be an $n\times p$ matrix of rank $k$ satisfying $\mathcal{C}(X)\subseteq\mathcal{C}(A)$, with $k\le p\le a\le n$. Then

$$\frac{\operatorname{tr}\big((HA^{+}H)^{+}\big)}{\operatorname{tr}(HAH)}\ge\left\{\frac{\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}}{\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})}\right\}^{2},$$

(7)

where $\lambda_1\ge\cdots\ge\lambda_a>0$ are the nonzero eigenvalues of $A$.

Lemma 2.2 [9]

If $A\ge 0$, $\mathcal{C}(X)\subseteq\mathcal{C}(A)$, and $X'Y=0$, then

$$HAY(Y'AY)^{+}Y'AH\le HAH-(HA^{+}H)^{+}.$$

(8)

Lemma 2.3 Let $A\ge 0$ be an $n\times n$ matrix with rank $a$, and let $X$ be an $n\times p$ matrix of rank $k$ satisfying $\mathcal{C}(X)\subseteq\mathcal{C}(A)$, with $k\le p\le a\le n$. Then $\operatorname{rank}(HAH)=k$.

Proof Since $A$ is nonnegative definite, we easily get $\operatorname{rank}(HAH)=\operatorname{rank}(HA)$. By Marsaglia and Styan [11], we have

$$\operatorname{rank}(HA)=\operatorname{rank}(H)-\dim\big[\mathcal{C}(H)\cap\mathcal{C}(A)^{\perp}\big].$$

(9)

Since $\mathcal{C}(X)\subseteq\mathcal{C}(A)$, we have $\mathcal{C}(H)\subseteq\mathcal{C}(A)$, so $\dim[\mathcal{C}(H)\cap\mathcal{C}(A)^{\perp}]=0$; that is, $\operatorname{rank}(HA)=\operatorname{rank}(H)$. Thus

$$\operatorname{rank}(H)=\operatorname{rank}(HAH).$$

(10)

On the other hand,

$$\operatorname{rank}(H)=\operatorname{rank}(XX^{+})=\operatorname{rank}(X)=k.$$

(11)

So we get $\operatorname{rank}(HAH)=k$. □
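Lemma 2.3 is straightforward to verify numerically, even for a singular $A$. A minimal sketch (ours); taking $X=AW$ for a random $W$ guarantees $\mathcal{C}(X)\subseteq\mathcal{C}(A)$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, a, p = 8, 5, 3
B = rng.standard_normal((n, a))
A = B @ B.T                            # A >= 0 with rank a < n
X = A @ rng.standard_normal((n, p))    # columns of X lie in C(A); rank k = p a.s.

H = X @ np.linalg.pinv(X)
k = np.linalg.matrix_rank(X)
assert np.linalg.matrix_rank(H @ A @ H) == k
```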

Lemma 2.4 Let $A\ge 0$ be an $n\times n$ matrix with rank $a$, and let $X$ be an $n\times p$ matrix of rank $k$ satisfying $\mathcal{C}(X)\subseteq\mathcal{C}(A)$, with $k\le p\le a\le n$. Then

$$\frac{\|(HA^{+}H)^{+}\|_2}{\|HAH\|_2}\ge\frac{\tau+k-1}{k\tau}\left\{\frac{\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}}{\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})}\right\}^{2},$$

(12)

where $\|G\|_2=\lambda_1(G)$ denotes the spectral norm of the nonnegative definite matrix $G$, $\lambda_1(G)$ stands for the largest eigenvalue of $G$, $\lambda_1\ge\cdots\ge\lambda_a>0$ are the nonzero eigenvalues of $A$, and $\tau=\lambda_1(HAH)/\lambda_k(HAH)$ is the condition number of the matrix $HAH$.

Proof By the definition of the spectral norm, we obtain

$$\frac{\|(HA^{+}H)^{+}\|_2}{\|HAH\|_2}=\frac{\lambda_1\big((HA^{+}H)^{+}\big)}{\lambda_1(HAH)}.$$

(13)

Since $\mathcal{C}(A^{+})=\mathcal{C}(A)$, Lemma 2.3 applies with $A^{+}$ in place of $A$, so $\operatorname{rank}\big((HA^{+}H)^{+}\big)=\operatorname{rank}(HA^{+}H)=k$; thus we get

$$\operatorname{tr}\big((HA^{+}H)^{+}\big)=\lambda_1\big((HA^{+}H)^{+}\big)+\lambda_2\big((HA^{+}H)^{+}\big)+\cdots+\lambda_k\big((HA^{+}H)^{+}\big)\le k\lambda_1\big((HA^{+}H)^{+}\big).$$

(14)

So we have

$$\lambda_1\big((HA^{+}H)^{+}\big)\ge\frac{1}{k}\operatorname{tr}\big((HA^{+}H)^{+}\big).$$

(15)

On the other hand, with the condition number of the matrix $HAH$ defined as $\tau=\lambda_1(HAH)/\lambda_k(HAH)$, we have

$$\operatorname{tr}(HAH)=\lambda_1(HAH)+\lambda_2(HAH)+\cdots+\lambda_k(HAH)\ge\lambda_1(HAH)+\frac{\lambda_1(HAH)}{\tau}+\cdots+\frac{\lambda_1(HAH)}{\tau}=\frac{\tau+k-1}{\tau}\lambda_1(HAH).$$

(16)

Thus,

$$\lambda_1(HAH)\le\frac{\tau}{\tau+k-1}\operatorname{tr}(HAH).$$

(17)

Then, using Lemma 2.1, we get

$$\frac{\|(HA^{+}H)^{+}\|_2}{\|HAH\|_2}=\frac{\lambda_1\big((HA^{+}H)^{+}\big)}{\lambda_1(HAH)}\ge\frac{\tau+k-1}{k\tau}\cdot\frac{\operatorname{tr}\big((HA^{+}H)^{+}\big)}{\operatorname{tr}(HAH)}\ge\frac{\tau+k-1}{k\tau}\left\{\frac{\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}}{\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})}\right\}^{2}.$$

(18)

 □
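As a numerical illustration (ours, not from the original paper), the following sketch evaluates both sides of (12), as reconstructed above, on a random singular $A$; NumPy's `pinv` supplies the Moore-Penrose inverses:

```python
import numpy as np

rng = np.random.default_rng(5)
n, a, k = 8, 6, 3
B = rng.standard_normal((n, a))
A = B @ B.T                                      # A >= 0 with rank a
X = A @ rng.standard_normal((n, k))              # C(X) in C(A), rank k
H = X @ np.linalg.pinv(X)

HAH = H @ A @ H
ev = np.sort(np.linalg.eigvalsh(HAH))[::-1][:k]  # the k nonzero eigenvalues
tau = ev[0] / ev[-1]                             # condition number of HAH

lam = np.sort(np.linalg.eigvalsh(A))[::-1][:a]   # nonzero eigenvalues of A
T = sum(2 * np.sqrt(lam[i] * lam[a - i - 1]) for i in range(k))
S = sum(lam[i] + lam[a - i - 1] for i in range(k))

lhs = np.linalg.norm(np.linalg.pinv(H @ np.linalg.pinv(A) @ H), 2) / np.linalg.norm(HAH, 2)
rhs = (tau + k - 1) / (k * tau) * (T / S) ** 2
print(f"lhs = {lhs:.4f}, lower bound = {rhs:.4f}")
```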

Now we present the first theorem of this article.

Theorem 2.1 Suppose that $A\ge 0$ is an $n\times n$ matrix of rank $a$, that $X$ is an $n\times p$ matrix of rank $k$, and that $Y$ is an $n\times q$ matrix such that $\mathcal{C}(X)\subseteq\mathcal{C}(A)$ and $X'P_AY=X'Y=0$, with $k\le p\le q\le a\le n$. Then

$$\frac{\|HAY(Y'AY)^{+}Y'AH\|_2}{\|HAH\|_2}\le\frac{\tau+k-1}{k\tau}\cdot\frac{\big[\sum_{i=1}^{k}(\lambda_i^{1/2}-\lambda_{a-i+1}^{1/2})^2\big]\big[\sum_{i=1}^{k}(\lambda_i^{1/2}+\lambda_{a-i+1}^{1/2})^2\big]}{\big[\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})\big]^2},$$

(19)

$$\frac{\|HAH-HAY(Y'AY)^{+}Y'AH\|_2}{\|HAH\|_2}\ge\frac{\tau+k-1}{k\tau}\left\{\frac{\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}}{\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})}\right\}^{2},$$

(20)

where $\|G\|_2=\lambda_1(G)$ denotes the spectral norm of the nonnegative definite matrix $G$, $\lambda_1(G)$ stands for the largest eigenvalue of $G$, $\lambda_1\ge\cdots\ge\lambda_a>0$ are the nonzero eigenvalues of $A$, and $\tau=\lambda_1(HAH)/\lambda_k(HAH)$ is the condition number of the matrix $HAH$.

Proof (1) For (19), using Lemma 2.2 and Lemma 2.4, we obtain

$$\begin{aligned}\|HAY(Y'AY)^{+}Y'AH\|_2&=\lambda_1\big(HAY(Y'AY)^{+}Y'AH\big)\le\lambda_1\big(HAH-(HA^{+}H)^{+}\big)\\&\le\lambda_1(HAH)-\lambda_1\big((HA^{+}H)^{+}\big)=\|HAH\|_2-\|(HA^{+}H)^{+}\|_2\\&\le\|HAH\|_2-\frac{\tau+k-1}{k\tau}\left\{\frac{\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}}{\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})}\right\}^{2}\|HAH\|_2\\&=\frac{k\tau\big[\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})\big]^2-(\tau+k-1)\big[\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}\big]^2}{k\tau\big[\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})\big]^2}\|HAH\|_2.\end{aligned}$$

(21)

Since $k\ge 1$ and $\tau\ge 1$, we have $k\tau-(\tau+k-1)=(k-1)(\tau-1)\ge 0$. Thus we obtain

$$\begin{aligned}\|HAY(Y'AY)^{+}Y'AH\|_2&\le\frac{\tau+k-1}{k\tau}\cdot\frac{\big[\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})\big]^2-\big[\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}\big]^2}{\big[\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})\big]^2}\|HAH\|_2\\&=\frac{\tau+k-1}{k\tau}\cdot\frac{\big[\sum_{i=1}^{k}(\lambda_i^{1/2}-\lambda_{a-i+1}^{1/2})^2\big]\big[\sum_{i=1}^{k}(\lambda_i^{1/2}+\lambda_{a-i+1}^{1/2})^2\big]}{\big[\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})\big]^2}\|HAH\|_2.\end{aligned}$$

(22)

Inequality (19) is proved.

(2) For (20), from Lemma 2.2 and Lemma 2.4, we can obtain

$$\begin{aligned}\|HAH-HAY(Y'AY)^{+}Y'AH\|_2&=\lambda_1\big(HAH-HAY(Y'AY)^{+}Y'AH\big)\\&\ge\lambda_1\big((HA^{+}H)^{+}\big)=\|(HA^{+}H)^{+}\|_2\\&\ge\frac{\tau+k-1}{k\tau}\left\{\frac{\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}}{\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})}\right\}^{2}\|HAH\|_2.\end{aligned}$$

(23)

The proof of inequality (20) is completed. □
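A companion sketch (ours) compares the two sides of (19) for a random positive definite $A$, in which case $a=n$ and $P_A=I$, so the orthogonality condition reduces to $X'Y=0$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, q = 8, 2, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                          # A > 0, so a = n and P_A = I
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
k = p                                            # X of full column rank

X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))
Y = Y - X @ np.linalg.solve(X.T @ X, X.T @ Y)    # X'Y = 0, hence X'P_A Y = 0
H = X @ np.linalg.pinv(X)

HAH = H @ A @ H
ev = np.sort(np.linalg.eigvalsh(HAH))[::-1][:k]
tau = ev[0] / ev[-1]

num1 = sum((np.sqrt(lam[i]) - np.sqrt(lam[n - i - 1])) ** 2 for i in range(k))
num2 = sum((np.sqrt(lam[i]) + np.sqrt(lam[n - i - 1])) ** 2 for i in range(k))
den = sum(lam[i] + lam[n - i - 1] for i in range(k)) ** 2

G = H @ A @ Y @ np.linalg.solve(Y.T @ A @ Y, Y.T @ A @ H)
lhs = np.linalg.norm(G, 2) / np.linalg.norm(HAH, 2)
print(f"lhs = {lhs:.4f}, bound (19) = {(tau + k - 1) / (k * tau) * num1 * num2 / den:.4f}")
```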

Partition the matrix $A\ge 0$ as follows:

$$A=\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix},\qquad A_{11.2}=A_{11}-A_{12}A_{22}^{-}A_{21},$$

(24)

where $A\ge 0$ is of rank $a$, $A_{11}\ge 0$ is of rank $k$, $A_{11}$ is $p\times p$, $A_{22}$ is $q\times q$, and $p+q=n$.

Now we give another theorem.

Theorem 2.2 Suppose that $A$ is an $n\times n$ nonnegative definite matrix of rank $a$ partitioned as in (24) and that

$$\operatorname{rank}(A)=\operatorname{rank}(A_{11})+\operatorname{rank}(A_{22});$$

then

$$\frac{\|A_{12}A_{22}^{-}A_{21}\|_2}{\|A_{11}\|_2}\le\frac{\tau+k-1}{k\tau}\cdot\frac{\big[\sum_{i=1}^{k}(\lambda_i^{1/2}-\lambda_{a-i+1}^{1/2})^2\big]\big[\sum_{i=1}^{k}(\lambda_i^{1/2}+\lambda_{a-i+1}^{1/2})^2\big]}{\big[\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})\big]^2},$$

(25)

$$\frac{\|A_{11.2}\|_2}{\|A_{11}\|_2}\ge\frac{\tau+k-1}{k\tau}\left\{\frac{\sum_{i=1}^{k}2\lambda_i^{1/2}\lambda_{a-i+1}^{1/2}}{\sum_{i=1}^{k}(\lambda_i+\lambda_{a-i+1})}\right\}^{2},$$

(26)

where $\lambda_1\ge\cdots\ge\lambda_a>0$ are the nonzero eigenvalues of $A$ and $\tau=\lambda_1(A_{11})/\lambda_k(A_{11})$ is the condition number of the matrix $A_{11}$.

Proof (1) When $A>0$, let the $n\times p$ matrix $X$ be $\binom{I_p}{0}$ and the $n\times q$ matrix $Y$ be $\binom{0}{I_q}$. Then we obtain $X'AY=A_{12}$, $Y'AY=A_{22}$, $Y'AX=A_{21}$, $X'AX=A_{11}$, $X'X=I_p$, $H=XX'$, $\tau=\lambda_1(A_{11})/\lambda_k(A_{11})$, $\mathcal{C}(X)\subseteq\mathcal{C}(A)$, and $X'P_AY=0$. Substituting these into Theorem 2.1, we get the two inequalities, with $A_{22}^{-}$ reducing to $A_{22}^{-1}$.

(2) When $A\ge 0$ is partitioned as in (24) with $A_{11}\ge 0$ and $A_{22}\ge 0$, we have $\mathcal{C}(A_{12})\subseteq\mathcal{C}(A_{11})$ and $\mathcal{C}(A_{21})\subseteq\mathcal{C}(A_{22})$. On the other hand, using $\operatorname{rank}(A)=\operatorname{rank}(A_{11})+\operatorname{rank}(A_{22})$, we get $\mathcal{C}(X)\subseteq\mathcal{C}(A)$, which is needed in Theorem 2.1. □
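The partitioned form is easy to experiment with. The sketch below (ours) evaluates both sides of (25) and (26), as reconstructed above, for a positive definite $A$, where the rank additivity condition holds automatically:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 7, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                        # A > 0: rank additivity holds
A11, A12 = A[:p, :p], A[:p, p:]
A21, A22 = A[p:, :p], A[p:, p:]
A112 = A11 - A12 @ np.linalg.solve(A22, A21)   # Schur complement A_{11.2}

lam = np.sort(np.linalg.eigvalsh(A))[::-1]
mu = np.sort(np.linalg.eigvalsh(A11))[::-1]
k, tau = p, mu[0] / mu[-1]                     # tau = condition number of A11

S = sum(lam[i] + lam[n - i - 1] for i in range(k))
T = sum(2 * np.sqrt(lam[i] * lam[n - i - 1]) for i in range(k))
coef = (tau + k - 1) / (k * tau)

r25 = np.linalg.norm(A12 @ np.linalg.solve(A22, A21), 2) / np.linalg.norm(A11, 2)
r26 = np.linalg.norm(A112, 2) / np.linalg.norm(A11, 2)
print(f"(25): {r25:.4f} <= {coef * (S ** 2 - T ** 2) / S ** 2:.4f}")
print(f"(26): {r26:.4f} >= {coef * (T / S) ** 2:.4f}")
```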

3 Applications to statistics

In this section, using the inequalities of Section 2, we give several inequalities involving covariance matrices, as well as a new alternative, based on the spectral norm, to the relative gain of the covariance adjusted estimator, together with its upper bound.

3.1 New measure of association

Suppose that $\mu$ and $\nu$ are $p\times 1$ and $q\times 1$ random vectors and that we have the covariance matrix

$$\operatorname{Cov}\begin{pmatrix}\mu\\\nu\end{pmatrix}=\Sigma_{n\times n}=\begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\\Sigma_{21}&\Sigma_{22}\end{pmatrix},$$

(27)

where $n=p+q$.

Wang and Ip [7] discussed the following measure of association for $\Sigma>0$:

$$\rho_1=\big|\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\Sigma_{11}^{-1}\big|=\frac{|\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}|}{|\Sigma_{11}|},$$

(28)

where $|\cdot|$ refers to the determinant of the concerned matrix and $p\le q\le n$. Clearly, $\rho_1$ cannot be used when $|\Sigma_{11}|=0$, and, as pointed out by Groß [12], one may well encounter a singular covariance matrix. To solve this problem, Liu et al. [9] introduced a new measure of association:

$$\rho_2=\frac{\operatorname{tr}(\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})}{\operatorname{tr}(\Sigma_{11})}.$$

(29)

They also gave an upper bound of $\rho_2$ and pointed out that $\rho_2$ is useful in canonical correlation and regression analysis, as discussed by Lu [8], Wang and Ip [7], and Anderson [13].

Wang and Yang [10] presented an alternative measure of association, defined as follows:

$$\rho_3=\frac{\|\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\|_E}{\|\Sigma_{11}\|_E},$$

(30)

where $\|\cdot\|_E$ stands for the Euclidean norm of the concerned matrix; they also gave an upper bound of $\rho_3$.

To the best of our knowledge, however, there is no measure of association involving the spectral norm, so we present a new measure of association based on the spectral norm:

$$\rho_4=\frac{\|\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\|_2}{\|\Sigma_{11}\|_2}.$$

(31)
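For comparison, the following sketch (ours) computes all four measures of association for one random covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(8)
p, q = 2, 3
n = p + q
M = rng.standard_normal((n, n))
Sigma = M @ M.T + np.eye(n)                      # a positive definite covariance
Sig11, Sig12 = Sigma[:p, :p], Sigma[:p, p:]
Sig21, Sig22 = Sigma[p:, :p], Sigma[p:, p:]
C = Sig12 @ np.linalg.solve(Sig22, Sig21)        # Sigma_12 Sigma_22^{-1} Sigma_21

rho1 = np.linalg.det(C) / np.linalg.det(Sig11)                  # (28)
rho2 = np.trace(C) / np.trace(Sig11)                            # (29)
rho3 = np.linalg.norm(C, 'fro') / np.linalg.norm(Sig11, 'fro')  # (30)
rho4 = np.linalg.norm(C, 2) / np.linalg.norm(Sig11, 2)          # (31)
print(rho1, rho2, rho3, rho4)
```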

Theorem 3.1 The upper bound of $\rho_4$ is given as follows:

$$\rho_4\le\frac{\tau+p-1}{p\tau}\cdot\frac{\big[\sum_{i=1}^{p}(\lambda_i^{1/2}-\lambda_{n-i+1}^{1/2})^2\big]\big[\sum_{i=1}^{p}(\lambda_i^{1/2}+\lambda_{n-i+1}^{1/2})^2\big]}{\big[\sum_{i=1}^{p}(\lambda_i+\lambda_{n-i+1})\big]^2}\le 1,$$

(32)

where $\lambda_1\ge\cdots\ge\lambda_n>0$ are the ordered eigenvalues of $\Sigma$ and $\tau=\lambda_1(\Sigma_{11})/\lambda_p(\Sigma_{11})$ is the condition number of the matrix $\Sigma_{11}$.

Proof Inequality (32) follows directly from Theorem 2.2 and (31). □

3.2 Wishart matrices

Let $S$ be an estimator of $\Sigma$, partitioned as follows:

$$S=\begin{pmatrix}S_{11}&S_{12}\\S_{21}&S_{22}\end{pmatrix},\qquad S_{11.2}=S_{11}-S_{12}S_{22}^{-1}S_{21},$$

(33)

where $S_{11}$ is a $p\times p$ matrix.

Wang and Ip [7] presented the following interesting relation among these submatrices, which occurs in much of the statistical literature, for example in linear models:

$$S_{12}S_{22}^{-1}S_{21}\le\frac{(\lambda_1-\lambda_n)^2}{(\lambda_1+\lambda_n)^2}S_{11},$$

(34)

where $\lambda_1$ and $\lambda_n$ refer to the largest and smallest eigenvalues of $S$, respectively. They also considered the concept of the relative gain of the covariance adjusted estimator of a parameter vector discussed by Rao [14] and Wang and Yang [15]: $|\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}|/|\Sigma_{11}|$ can be regarded as the relative gain, and it can be estimated by $|S_{12}S_{22}^{-1}S_{21}|/|S_{11}|$. Liu et al. [9] used $\operatorname{tr}(S_{12}S_{22}^{-1}S_{21})/\operatorname{tr}(S_{11})$ to estimate $\operatorname{tr}(\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})/\operatorname{tr}(\Sigma_{11})$, and they also showed that

$$\frac{\operatorname{tr}(S_{12}S_{22}^{-1}S_{21})}{\operatorname{tr}(S_{11})}\le\frac{\big[\sum_{i=1}^{p}(\lambda_i^{1/2}-\lambda_{n-i+1}^{1/2})^2\big]\big[\sum_{i=1}^{p}(\lambda_i^{1/2}+\lambda_{n-i+1}^{1/2})^2\big]}{\big[\sum_{i=1}^{p}(\lambda_i+\lambda_{n-i+1})\big]^2}\le 1,$$

(35)

where $\lambda_1\ge\cdots\ge\lambda_n>0$ are the ordered eigenvalues of $S$.

Wang and Yang [10] also studied this problem and used $\|S_{12}S_{22}^{-1}S_{21}\|_E/\|S_{11}\|_E$ to estimate $\|\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\|_E/\|\Sigma_{11}\|_E$; they also gave an upper bound of $\|S_{12}S_{22}^{-1}S_{21}\|_E/\|S_{11}\|_E$, which is given as follows:

$$\frac{\|S_{12}S_{22}^{-1}S_{21}\|_E}{\|S_{11}\|_E}\le\frac{l(h,p)}{2p}\left(\frac{\lambda_1\lambda_{n-p+1}}{\lambda_p\lambda_n}+\frac{\lambda_p\lambda_n}{\lambda_1\lambda_{n-p+1}}\right)\sum_{i=1}^{p}\lambda_i\lambda_{n-p+i},$$

(36)

where $\operatorname{rank}(S_{12}S_{22}^{-1}S_{21})=h$ and $l(h,p)$ is a constant depending on $h$, $p$, and the eigenvalues of $S$ (see [10] for its precise definition).

In this article, we use the spectral norm instead of the determinant, the trace, and the Euclidean norm. The new relative gain of the covariance adjusted estimator is denoted by

$$\omega=\frac{\|\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\|_2}{\|\Sigma_{11}\|_2},$$

and $\omega$ is estimated by

$$\hat\omega=\frac{\|S_{12}S_{22}^{-1}S_{21}\|_2}{\|S_{11}\|_2}.$$

Now we give the upper bound of $\hat\omega$.

Theorem 3.2 The relative gain $\hat\omega$ is bounded as follows:

$$\hat\omega\le\frac{\tau+p-1}{p\tau}\cdot\frac{\big[\sum_{i=1}^{p}(\lambda_i^{1/2}-\lambda_{n-i+1}^{1/2})^2\big]\big[\sum_{i=1}^{p}(\lambda_i^{1/2}+\lambda_{n-i+1}^{1/2})^2\big]}{\big[\sum_{i=1}^{p}(\lambda_i+\lambda_{n-i+1})\big]^2}\le 1,$$

(37)

where $\lambda_1\ge\cdots\ge\lambda_n>0$ are the ordered eigenvalues of $S$ and $\tau=\lambda_1(S_{11})/\lambda_p(S_{11})$ is the condition number of the matrix $S_{11}$.

Proof The result follows directly from Theorem 2.2. □

Remark 3.1 The result in Theorem 3.2 can be extended to a nonnegative definite matrix $S\ge 0$, provided that $S_{11}>0$.
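A minimal simulation sketch (ours, under the assumption of normal data): estimate $\hat\omega$ from a sample covariance matrix and compare it with the bound (37) as stated above:

```python
import numpy as np

rng = np.random.default_rng(9)
p, q, m = 2, 3, 500
n = p + q
M = rng.standard_normal((n, n))
Sigma = M @ M.T + np.eye(n)

# m observations from N(0, Sigma), then the sample covariance S
data = rng.multivariate_normal(np.zeros(n), Sigma, size=m)
S = np.cov(data, rowvar=False)

S11, S12 = S[:p, :p], S[:p, p:]
S21, S22 = S[p:, :p], S[p:, p:]
omega_hat = np.linalg.norm(S12 @ np.linalg.solve(S22, S21), 2) / np.linalg.norm(S11, 2)

lam = np.sort(np.linalg.eigvalsh(S))[::-1]     # ordered eigenvalues of S
mu = np.sort(np.linalg.eigvalsh(S11))[::-1]
tau = mu[0] / mu[-1]                           # condition number of S11
den = sum(lam[i] + lam[n - i - 1] for i in range(p)) ** 2
num1 = sum((np.sqrt(lam[i]) - np.sqrt(lam[n - i - 1])) ** 2 for i in range(p))
num2 = sum((np.sqrt(lam[i]) + np.sqrt(lam[n - i - 1])) ** 2 for i in range(p))
bound = (tau + p - 1) / (p * tau) * num1 * num2 / den
print(f"omega_hat = {omega_hat:.4f}, bound (37) = {bound:.4f}")
```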

4 Concluding remarks

In this article, we have presented two matrix spectral norm Wielandt inequalities and given some of their applications; these applications show that the inequalities are meaningful, useful, and practical in statistics.

References

  1. Drury SW, Liu S, Lu CY, Puntanen S, Styan GPH: Some comments on several matrix inequalities with applications to canonical correlations: historical background and recent developments. Sankhya, Ser. A 2002, 64: 453-507.


  2. Gustafson K: The geometrical meaning of the Kantorovich-Wielandt inequalities. Linear Algebra Appl. 1999, 296: 143-151. 10.1016/S0024-3795(99)00106-8


  3. Liu SZ Tinbergen Institute Research Series 106. In Contributions to Matrix Calculus and Applications in Econometrics. Thesis Publishers, Amsterdam; 1995.


  4. Liu SZ: Efficiency comparisons between the OLSE and the BLUE in a singular linear model. J. Stat. Plan. Inference 2000, 84: 191-200. 10.1016/S0378-3758(99)00149-4


  5. Rao CR, Rao MB: Matrix Algebra and Its Applications to Statistics and Econometrics. World Scientific, Singapore; 1998.


  6. Liu SZ, Heyde CC: Some efficiency comparisons for estimators from quasi-likelihood and generalized estimating equations. Lecture Notes-Monograph Series 42. In Mathematical Statistics and Applications: Festschrift for Constance van Eeden. Edited by: Moore M, Froda S, Léger C. Inst. Math. Statist., Beachwood; 2003:357-371.


  7. Wang SG, Ip WC: A matrix version of the Wielandt inequality and its applications to statistics. Linear Algebra Appl. 1999, 296: 171-181. 10.1016/S0024-3795(99)00117-2


  8. Lu, CY: A generalized matrix version of the Wielandt inequality with some applications. Research Report, Department of Mathematics, Northeast Normal University, Changchun, China, 8 pp. (1999)

  9. Liu SZ, Lu CY, Puntanen S: Matrix trace Wielandt inequalities with statistical applications. J. Stat. Plan. Inference 2009, 139: 2254-2260. 10.1016/j.jspi.2008.10.026


  10. Wang LT, Yang H: Matrix Euclidean norm Wielandt inequalities and their applications to statistics. Stat. Pap. 2012, 53: 521-530. 10.1007/s00362-010-0357-y


  11. Marsaglia G, Styan GPH: Equalities and inequalities of ranks of matrices. Linear Multilinear Algebra 1974, 2: 269-292. 10.1080/03081087408817070


  12. Groß J: The general Gauss-Markov model with possibly singular dispersion matrix. Stat. Pap. 2004, 45: 311-336. 10.1007/BF02777575


  13. Anderson TW: An Introduction to Multivariate Statistical Analysis. 3rd edition. Wiley, New York; 2003.


  14. Rao CR: Least squares theory using an estimated dispersion matrix and its application to measurement of signals. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (Berkeley, CA, 1965-66), vol. I: Statistics. Edited by: Le Cam LM, Neyman J. University of California Press, Berkeley; 1967:355-372.


  15. Wang SG, Yang ZH: Pitman optimality of covariance-improved estimators. Chin. Sci. Bull. 1995, 40: 1150-1154.



Acknowledgements

The authors are grateful to the editor and the two anonymous referees for their valuable comments which improved the quality of the paper. This work was supported by the Scientific Research Foundation of Chongqing University of Arts and Sciences (Grant No: R2013SC12), the National Natural Science Foundation of China (Grant Nos: 71271227, 11201505), and Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant No: KJTD201321).

Author information


Correspondence to Jibo Wu.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Wu, J., Yi, W. Matrix spectral norm Wielandt inequalities with statistical applications. J Inequal Appl 2014, 110 (2014). https://doi.org/10.1186/1029-242X-2014-110
