
The maximal and minimal ranks of matrix expression with applications

Abstract

We give in this article the maximal and minimal ranks of the matrix expression A − B1V1C1 − B2V2C2 − B3V3C3 − B4V4C4 with respect to V1, V2, V3, and V4. As applications, we derive the extremal ranks of the generalized Schur complement A − BM(1)C − DN(1)G and of the partial matrix (A  BM(1)C  DN(1)G) with respect to the generalized inverses M(1) ∈ M{1} and N(1) ∈ N{1}.

AMS classifications: 15A03; 15A09; 15A24.

1 Introduction

Let C^{m×n} be the set of all m × n matrices with complex entries. I_n denotes the identity matrix of order n and O_{m×n} denotes the m × n matrix of all zero entries (if no confusion occurs, we omit the subscripts). For a given matrix A ∈ C^{m×n}, the symbols A* and r(A) stand for the conjugate transpose and the rank of A, respectively. Recall that a generalized inverse X ∈ C^{n×m} of A ∈ C^{m×n} is a matrix which satisfies some of the following four Penrose equations [1]:

(1)\;\, AXA = A, \qquad (2)\;\, XAX = X, \qquad (3)\;\, (AX)^{*} = AX, \qquad (4)\;\, (XA)^{*} = XA.

For a subset {i, j, ..., k} of the set {1, 2, 3, 4}, the set of n × m matrices satisfying equations (i), (j), ..., (k) from among the four Penrose equations (1)-(4) is denoted by A{i, j, ..., k}. A matrix X in A{i, j, ..., k} is called an {i, j, ..., k}-inverse of A and is denoted by A^(i,j,...,k). In particular, an n × m matrix X in the set A{1} is called a g-inverse of A and is denoted by A^(1). The unique {1, 2, 3, 4}-inverse of A is denoted by A^†, which is called the Moore-Penrose inverse of A. Throughout this article, the abbreviated symbols E_A and F_A stand for the two projectors E_A = I_m − AA^† and F_A = I_n − A^†A induced by A. We refer the reader to [2, 3] for basic results on generalized inverses.
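As an aside for readers who wish to experiment numerically, the following sketch (not part of the development above; it assumes Python with NumPy and uses numpy.linalg.pinv for the Moore-Penrose inverse) checks the four Penrose equations and forms the projectors E_A and F_A:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # a 5 x 4 matrix of rank at most 3

    A_dag = np.linalg.pinv(A)                                       # Moore-Penrose inverse of A

    # the four Penrose equations (1)-(4)
    print(np.allclose(A @ A_dag @ A, A),                            # (1)
          np.allclose(A_dag @ A @ A_dag, A_dag),                    # (2)
          np.allclose((A @ A_dag).conj().T, A @ A_dag),             # (3)
          np.allclose((A_dag @ A).conj().T, A_dag @ A))             # (4)

    # the projectors E_A = I_m - A A^dagger and F_A = I_n - A^dagger A
    m, n = A.shape
    E_A = np.eye(m) - A @ A_dag
    F_A = np.eye(n) - A_dag @ A
    print(np.allclose(E_A @ A, 0), np.allclose(A @ F_A, 0))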

Given a matrix with some variant entries (often called a partial matrix), or a matrix expression with some variant matrices in it, the rank of the partial matrix or matrix expression varies with the variant entries or variant matrices. Because the rank of a matrix is an integer between 0 and the minimum of its numbers of rows and columns, maximal and minimal ranks of a partial matrix or matrix expression always exist with respect to the variant entries or variant matrices. Many problems in matrix theory and its applications are closely related to the maximal and minimal possible ranks of matrix expressions with variant entries. For example, a matrix equation AXB = C is consistent if and only if the minimal rank of C − AXB with respect to X is zero, see [4–6]; there is a matrix X such that the square matrix expression AXB of order n is nonsingular if and only if the maximal rank of AXB with respect to X is n, see [7–11].
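A small numerical illustration of the first example (again a sketch only, assuming NumPy; the solvability test A A^† C B^† B = C for AXB = C is classical):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 3))
    B = rng.standard_normal((5, 6))
    C = A @ rng.standard_normal((3, 5)) @ B                  # consistent by construction

    A_dag, B_dag = np.linalg.pinv(A), np.linalg.pinv(B)
    consistent = np.allclose(A @ A_dag @ C @ B_dag @ B, C)   # classical solvability criterion
    X = A_dag @ C @ B_dag                                    # one solution when consistent
    print(consistent, np.allclose(A @ X @ B, C))
    # equivalently, the minimal rank of C - AXB over all X is zero exactly when AXB = C is solvable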

The maximal and minimal ranks of a matrix expression or partial matrix are two basic quantities in matrix theory for describing the dimension of the row or column space of the expression; both are well understood and are easy to compute by the well-known elementary (or congruence) matrix operations, see [5, 7, 8, 10–16]. These two quantities play an essential role in characterizing algebraic properties of matrix expressions or partial matrices, and they have been among the main objects of study in matrix theory and its applications. Some previous systematic studies of maximal and minimal ranks of matrix expressions or partial matrices and their applications can be found in [17–20]. In recent years, maximal and minimal ranks of matrix expressions or partial matrices have been reconsidered by means of some tricky operations on block matrices and generalized inverses of matrices, and many new formulas for these extremal ranks, together with their applications, have been obtained, see [4, 6, 9, 21–28].

In this article, given matrices A ∈ C^{m×n}, B_i ∈ C^{m×p_i}, and C_i ∈ C^{q_i×n}, i = 1, 2, 3, 4, we present the maximal and minimal ranks of the matrix expression A − B1V1C1 − B2V2C2 − B3V3C3 − B4V4C4 with respect to V1, V2, V3, and V4. As applications, the maximal and minimal ranks of the generalized Schur complement A − BM(1)C − DN(1)G and of the partial matrix (A  BM(1)C  DN(1)G) with respect to the generalized inverses M(1) ∈ M{1} and N(1) ∈ N{1} are also considered. The results in this article extend earlier work by various authors, see, e.g., [4–6, 11, 16, 18, 21, 25, 26].

We first introduce some well-known results which will be used in this article.

Lemma 1.1 [5, 8, 25]. Let

M = \begin{pmatrix} A_{11} & A_{12} & X \\ A_{21} & A_{22} & A_{23} \\ Y & A_{32} & A_{33} \end{pmatrix},

where A_{ij} ∈ C^{m_i × n_j} (1 ≤ i, j ≤ 3) are given, and X ∈ C^{m_1 × n_3} and Y ∈ C^{m_3 × n_1} are two variant matrices. Then

\max_{X,\,Y} r(M) = \min\left\{ m_3+n_3+r\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},\; m_1+n_1+r\begin{pmatrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{pmatrix},\; m_1+m_3+r(A_{21}\;\, A_{22}\;\, A_{23}),\; n_1+n_3+r\begin{pmatrix} A_{12} \\ A_{22} \\ A_{32} \end{pmatrix} \right\},
(1)
\min_{X,\,Y} r(M) = r(A_{21}\;\, A_{22}\;\, A_{23}) + r\begin{pmatrix} A_{12} \\ A_{22} \\ A_{32} \end{pmatrix} + \max\left\{ r\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} - r\begin{pmatrix} A_{12} \\ A_{22} \end{pmatrix} - r(A_{21}\;\, A_{22}),\; r\begin{pmatrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{pmatrix} - r\begin{pmatrix} A_{22} \\ A_{32} \end{pmatrix} - r(A_{22}\;\, A_{23}) \right\}.
(2)
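Lemma 1.1 can be illustrated numerically (a sketch only, with NumPy; the block sizes and the 200 random samples are our choices): random X and Y almost surely attain the maximal rank, so the sampled maximum can be compared with the right-hand side of (1). The minimal rank (2) is in general not attained by random X and Y.

    import numpy as np

    def r(M):
        return np.linalg.matrix_rank(M)

    rng = np.random.default_rng(4)
    m1, m2, m3 = 2, 3, 2
    n1, n2, n3 = 3, 2, 2
    A = {(i, j): rng.standard_normal(((m1, m2, m3)[i], (n1, n2, n3)[j]))
         for i in range(3) for j in range(3) if (i, j) not in [(0, 2), (2, 0)]}

    def M_of(X, Y):
        return np.block([[A[0, 0], A[0, 1], X],
                         [A[1, 0], A[1, 1], A[1, 2]],
                         [Y,       A[2, 1], A[2, 2]]])

    sampled = max(r(M_of(rng.standard_normal((m1, n3)), rng.standard_normal((m3, n1))))
                  for _ in range(200))

    predicted = min(m3 + n3 + r(np.block([[A[0, 0], A[0, 1]], [A[1, 0], A[1, 1]]])),
                    m1 + n1 + r(np.block([[A[1, 1], A[1, 2]], [A[2, 1], A[2, 2]]])),
                    m1 + m3 + r(np.hstack([A[1, 0], A[1, 1], A[1, 2]])),
                    n1 + n3 + r(np.vstack([A[0, 1], A[1, 1], A[2, 1]])))
    print(sampled, predicted)       # generically equal, by formula (1)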

Lemma 1.2 [2]. Let A ∈ C^{m×n}. Then the general expression of the {1}-inverses of A can be written as

A^{(1)} = A^{\dagger} + (I_n - A^{\dagger}A)W + Z(I_m - AA^{\dagger}),
(3)

where W ∈ C^{n×m} and Z ∈ C^{n×m} are arbitrary.
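A hedged numerical sketch of (3) (assuming NumPy, with A^† taken as the particular {1}-inverse): sampling W and Z produces further {1}-inverses, each of which satisfies AXA = A.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # a rank-deficient 5 x 4 matrix
    m, n = A.shape
    A_dag = np.linalg.pinv(A)

    for _ in range(3):
        W = rng.standard_normal((n, m))
        Z = rng.standard_normal((n, m))
        X = A_dag + (np.eye(n) - A_dag @ A) @ W + Z @ (np.eye(m) - A @ A_dag)   # formula (3)
        print(np.allclose(A @ X @ A, A))     # X is a {1}-inverse of A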

Lemma 1.3 [9]. Let A ∈ C^{m×n}, B ∈ C^{m×k}, and C ∈ C^{l×n}. Then

(1)\;\; r(A\;\; B) = r(A) + r(E_AB) = r(E_BA) + r(B), \qquad (2)\;\; r\begin{pmatrix} A \\ C \end{pmatrix} = r(A) + r(CF_A) = r(AF_C) + r(C),

where E_A = I_m − AA^† and F_A = I_n − A^†A (and E_B, F_C are defined analogously).
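Both formulas of Lemma 1.3 are easy to check numerically (a sketch, with NumPy's matrix_rank at its default tolerance):

    import numpy as np

    def r(M):
        return np.linalg.matrix_rank(M)

    rng = np.random.default_rng(3)
    A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # 6 x 5, rank 3
    B = rng.standard_normal((6, 4))
    C = rng.standard_normal((2, 5))

    A_dag, B_dag, C_dag = (np.linalg.pinv(M) for M in (A, B, C))
    E_A = np.eye(6) - A @ A_dag
    E_B = np.eye(6) - B @ B_dag
    F_A = np.eye(5) - A_dag @ A
    F_C = np.eye(5) - C_dag @ C

    print(r(np.hstack([A, B])) == r(A) + r(E_A @ B),
          r(np.hstack([A, B])) == r(E_B @ A) + r(B))        # formula (1)
    print(r(np.vstack([A, C])) == r(A) + r(C @ F_A),
          r(np.vstack([A, C])) == r(A @ F_C) + r(C))        # formula (2)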

2 The maximal and minimal ranks of A − B1V1C1 − B2V2C2 − B3V3C3 − B4V4C4

In this section, we will present the maximal and minimal ranks of the linear matrix expression

P(V_1, V_2, V_3, V_4) = A - B_1V_1C_1 - B_2V_2C_2 - B_3V_3C_3 - B_4V_4C_4,
(4)

where A ∈ C^{m×n}, B_i ∈ C^{m×p_i}, and C_i ∈ C^{q_i×n}, i = 1, 2, 3, 4, are given matrices, with respect to the four variant matrices V_i ∈ C^{p_i×q_i}, i = 1, 2, 3, 4. Applying formula (1) of Lemma 1.1 to the linear matrix expression (4) and simplifying, we obtain the following result.

Theorem 2.1 Let P(V1, V2, V3, V4) be given as (4). Then

\max_{V_1,V_2,V_3,V_4} r(P(V_1, V_2, V_3, V_4)) = \min\{T_1,\, T_2,\, T_3,\, T_4\},
(5)

where

T_1 = \min\left\{ r\begin{pmatrix} O & O & C_4 \\ O & O & C_2 \\ B_3 & B_1 & A \end{pmatrix},\; r\begin{pmatrix} O & C_4 & O \\ B_3 & A & B_2 \\ O & C_1 & O \end{pmatrix},\; r\begin{pmatrix} O & O & C_4 & O \\ B_3 & B_1 & A & B_2 \end{pmatrix},\; r\begin{pmatrix} O & C_4 \\ O & C_2 \\ B_3 & A \\ O & C_1 \end{pmatrix} \right\},
T_2 = \min\left\{ r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \\ O & C_3 & O \end{pmatrix},\; r\begin{pmatrix} A & B_2 & B_4 \\ C_1 & O & O \\ C_3 & O & O \end{pmatrix},\; r\begin{pmatrix} B_1 & A & B_2 & B_4 \\ O & C_3 & O & O \end{pmatrix},\; r\begin{pmatrix} C_2 & O \\ A & B_4 \\ C_1 & O \\ C_3 & O \end{pmatrix} \right\},
T_3 = \min\left\{ r\begin{pmatrix} O & O & C_2 & O \\ B_3 & B_1 & A & B_4 \end{pmatrix},\; r\begin{pmatrix} B_3 & A & B_2 & B_4 \\ O & C_1 & O & O \end{pmatrix},\; r(B_3\;\, B_1\;\, A\;\, B_2\;\, B_4),\; r\begin{pmatrix} O & C_2 & O \\ B_3 & A & B_4 \\ O & C_1 & O \end{pmatrix} \right\},
T_4 = \min\left\{ r\begin{pmatrix} O & C_4 \\ O & C_2 \\ B_1 & A \\ O & C_3 \end{pmatrix},\; r\begin{pmatrix} C_4 & O \\ A & B_2 \\ C_1 & O \\ C_3 & O \end{pmatrix},\; r\begin{pmatrix} O & C_4 & O \\ B_1 & A & B_2 \\ O & C_3 & O \end{pmatrix},\; r\begin{pmatrix} C_4 \\ C_2 \\ A \\ C_1 \\ C_3 \end{pmatrix} \right\};
these are exactly the quantities appearing in (8)-(11) of the proof below, with the constant terms cancelled.

Proof. It is easy to verify by block Gaussian elimination that the rank of P(V1,V2,V3, V4) in (4) can be expressed as

r(P(V_1, V_2, V_3, V_4)) = r\begin{pmatrix}
O & O & O & O & O & O & O & I_{p_4} & -V_4 \\
O & O & O & O & C_4 & O & O & O & I_{q_4} \\
O & O & O & O & O & I_{p_2} & -V_2 & O & O \\
O & O & O & O & C_2 & O & I_{q_2} & O & O \\
O & B_3 & O & B_1 & A & B_2 & O & B_4 & O \\
O & O & I_{q_1} & O & C_1 & O & O & O & O \\
O & O & -V_1 & I_{p_1} & O & O & O & O & O \\
I_{q_3} & O & O & O & C_3 & O & O & O & O \\
-V_3 & I_{p_3} & O & O & O & O & O & O & O
\end{pmatrix} - \sum_{i=1}^{4} p_i - \sum_{i=1}^{4} q_i = r(T) - \sum_{i=1}^{4} p_i - \sum_{i=1}^{4} q_i,

where I_{p_i} and I_{q_i}, i = 1, 2, 3, 4, denote the identity matrices of orders p_i and q_i, respectively, and where

T = \begin{pmatrix} O & E_2 & -V_4 \\ E_1 & S & E_3 \\ -V_3 & E_4 & O \end{pmatrix}, \qquad
S = \begin{pmatrix}
O & O & O & C_4 & O & O & O \\
O & O & O & O & I_{p_2} & -V_2 & O \\
O & O & O & C_2 & O & I_{q_2} & O \\
B_3 & O & B_1 & A & B_2 & O & B_4 \\
O & I_{q_1} & O & C_1 & O & O & O \\
O & -V_1 & I_{p_1} & O & O & O & O \\
O & O & O & C_3 & O & O & O
\end{pmatrix}

and

E_1 = (O\;\, O\;\, O\;\, O\;\, O\;\, O\;\, I_{q_3})^{*}, \quad E_2 = (O\;\, O\;\, O\;\, O\;\, O\;\, O\;\, I_{p_4}), \quad E_3 = (I_{q_4}\;\, O\;\, O\;\, O\;\, O\;\, O\;\, O)^{*}, \quad E_4 = (I_{p_3}\;\, O\;\, O\;\, O\;\, O\;\, O\;\, O).

According to this result, we have

\max_{V_1,V_2,V_3,V_4} r(P(V_1, V_2, V_3, V_4)) = \max_{V_1,V_2,V_3,V_4} r(T) - \sum_{i=1}^{4} p_i - \sum_{i=1}^{4} q_i.
(6)

Then applying the formula (1) in Lemma 1.1 to matrix T, we have

\max_{V_3,V_4} r(T) = \min\left\{ p_3+q_4+r\begin{pmatrix} O & E_2 \\ E_1 & S \end{pmatrix},\; p_4+q_3+r\begin{pmatrix} S & E_3 \\ E_4 & O \end{pmatrix},\; p_4+p_3+r(E_1\;\, S\;\, E_3),\; q_3+q_4+r\begin{pmatrix} E_2 \\ S \\ E_4 \end{pmatrix} \right\}
= \min\{ p_3+q_4+p_4+q_3+r(S_1),\; p_4+q_3+q_4+p_3+r(S_2),\; p_4+p_3+q_4+q_3+r(S_3),\; q_3+q_4+p_4+p_3+r(S_4) \},

where

S_1 = \begin{pmatrix} O & O & O & O & I_{p_2} & -V_2 \\ O & O & O & C_4 & O & O \\ O & O & O & C_2 & O & I_{q_2} \\ O & B_3 & B_1 & A & B_2 & O \\ I_{q_1} & O & O & C_1 & O & O \\ -V_1 & O & I_{p_1} & O & O & O \end{pmatrix}, \quad
S_2 = \begin{pmatrix} O & O & O & I_{p_2} & O & -V_2 \\ O & O & C_2 & O & O & I_{q_2} \\ O & B_1 & A & B_2 & B_4 & O \\ I_{q_1} & O & C_1 & O & O & O \\ O & O & C_3 & O & O & O \\ -V_1 & I_{p_1} & O & O & O & O \end{pmatrix},
S_3 = \begin{pmatrix} O & O & O & O & I_{p_2} & O & -V_2 \\ O & O & O & C_2 & O & O & I_{q_2} \\ O & B_3 & B_1 & A & B_2 & B_4 & O \\ I_{q_1} & O & O & C_1 & O & O & O \\ -V_1 & O & I_{p_1} & O & O & O & O \end{pmatrix}, \quad
S_4 = \begin{pmatrix} O & O & O & I_{p_2} & -V_2 \\ O & O & C_4 & O & O \\ O & O & C_2 & O & I_{q_2} \\ O & B_1 & A & B_2 & O \\ I_{q_1} & O & C_1 & O & O \\ O & O & C_3 & O & O \\ -V_1 & I_{p_1} & O & O & O \end{pmatrix}.

Again applying the formula (1) in Lemma 1.1, we get

\max_{V_1,V_2,V_3,V_4} r(T) = \min\left\{ p_3+q_4+p_4+q_3+\max_{V_1,V_2} r(S_1),\; p_4+q_3+q_4+p_3+\max_{V_1,V_2} r(S_2),\; p_4+p_3+q_4+q_3+\max_{V_1,V_2} r(S_3),\; q_3+q_4+p_4+p_3+\max_{V_1,V_2} r(S_4) \right\}
(7)

and

\max_{V_1,V_2} r(S_1) = \min\left\{ p_1+q_2+q_1+p_2+r\begin{pmatrix} O & O & C_4 \\ O & O & C_2 \\ B_3 & B_1 & A \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} O & C_4 & O \\ B_3 & A & B_2 \\ O & C_1 & O \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} O & O & C_4 & O \\ B_3 & B_1 & A & B_2 \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} O & C_4 \\ O & C_2 \\ B_3 & A \\ O & C_1 \end{pmatrix} \right\},
(8)
\max_{V_1,V_2} r(S_2) = \min\left\{ p_1+q_2+q_1+p_2+r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \\ O & C_3 & O \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} A & B_2 & B_4 \\ C_1 & O & O \\ C_3 & O & O \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} B_1 & A & B_2 & B_4 \\ O & C_3 & O & O \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} C_2 & O \\ A & B_4 \\ C_1 & O \\ C_3 & O \end{pmatrix} \right\},
(9)
\max_{V_1,V_2} r(S_3) = \min\left\{ p_1+q_2+q_1+p_2+r\begin{pmatrix} O & O & C_2 & O \\ B_3 & B_1 & A & B_4 \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} B_3 & A & B_2 & B_4 \\ O & C_1 & O & O \end{pmatrix},\; p_1+q_2+q_1+p_2+r(B_3\;\, B_1\;\, A\;\, B_2\;\, B_4),\; p_1+q_2+q_1+p_2+r\begin{pmatrix} O & C_2 & O \\ B_3 & A & B_4 \\ O & C_1 & O \end{pmatrix} \right\},
(10)
\max_{V_1,V_2} r(S_4) = \min\left\{ p_1+q_2+q_1+p_2+r\begin{pmatrix} O & C_4 \\ O & C_2 \\ B_1 & A \\ O & C_3 \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} C_4 & O \\ A & B_2 \\ C_1 & O \\ C_3 & O \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} O & C_4 & O \\ B_1 & A & B_2 \\ O & C_3 & O \end{pmatrix},\; p_1+q_2+q_1+p_2+r\begin{pmatrix} C_4 \\ C_2 \\ A \\ C_1 \\ C_3 \end{pmatrix} \right\}.
(11)

Substituting (8)-(11) into (7) and then into (6) yields (5).

Recall the simple fact that the equality AXB = C holds for every matrix X if and only if the maximal rank of C − AXB with respect to X is zero. Thus, by Theorem 2.1 we immediately obtain the following result.

Corollary 2.2 Let P(V1, V2, V3, V4) be given as (4). Then the matrix equation A = B1V1C1 + B2V2C2 + B3V3C3 + B4V4C4 holds for any V1, V2, V3, and V4 if and only if T_1 = 0, T_2 = 0, T_3 = 0, or T_4 = 0.

Because the right-hand side of (5) is composed of ranks of block matrices, it can easily be simplified by block Gaussian elimination when the given matrices in (4) satisfy certain restrictions.

Theorem 2.3 Let P(V1, V2, V3, V4) be given as (4) and let R(B_1) ⊆ R(B_2), R(B_3) ⊆ R(B_4), R(C_2^*) ⊆ R(C_1^*), and R(C_4^*) ⊆ R(C_3^*). Then

\max_{V_1,V_2,V_3,V_4} r(P(V_1, V_2, V_3, V_4)) = \min\{\tau_1,\, \tau_2,\, \tau_3\},
(12)

where

\tau_1 = \min\left\{ r\begin{pmatrix} O & O & C_4 \\ O & O & C_2 \\ B_3 & B_1 & A \end{pmatrix},\; r\begin{pmatrix} O & C_4 & O \\ B_3 & A & B_2 \end{pmatrix},\; r\begin{pmatrix} O & C_4 \\ B_3 & A \\ O & C_1 \end{pmatrix} \right\}, \quad
\tau_2 = \min\left\{ r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \end{pmatrix},\; r(A\;\, B_2\;\, B_4),\; r\begin{pmatrix} A & B_4 \\ C_1 & O \end{pmatrix} \right\}, \quad
\tau_3 = \min\left\{ r\begin{pmatrix} O & C_2 \\ B_1 & A \\ O & C_3 \end{pmatrix},\; r\begin{pmatrix} A & B_2 \\ C_3 & O \end{pmatrix},\; r\begin{pmatrix} A \\ C_1 \\ C_3 \end{pmatrix} \right\}.

Proof. In fact, under the hypotheses of Theorem 2.3 we can write B1 = B2X, B3 = B4Y, C2 = ZC1, and C4 = WC3 for some matrices X, Y, Z, and W. In this case, we have

r\begin{pmatrix} O & O & C_4 & O \\ B_3 & B_1 & A & B_2 \end{pmatrix} = r\begin{pmatrix} O & C_4 & O \\ B_3 & A & B_2 \end{pmatrix}, \quad r\begin{pmatrix} O & C_4 \\ O & C_2 \\ B_3 & A \\ O & C_1 \end{pmatrix} = r\begin{pmatrix} O & C_4 \\ B_3 & A \\ O & C_1 \end{pmatrix}, \quad r\begin{pmatrix} O & O & C_4 \\ O & O & C_2 \\ B_3 & B_1 & A \end{pmatrix} = r\begin{pmatrix} O & O & C_4 \\ O & O & ZC_1 \\ B_3 & B_2X & A \end{pmatrix} \le r\begin{pmatrix} O & C_4 & O \\ B_3 & A & B_2 \\ O & C_1 & O \end{pmatrix}
(13)

and

r\begin{pmatrix} B_1 & A & B_2 & B_4 \\ O & C_3 & O & O \end{pmatrix} = r\begin{pmatrix} A & B_2 & B_4 \\ C_3 & O & O \end{pmatrix}, \quad r\begin{pmatrix} C_2 & O \\ A & B_4 \\ C_1 & O \\ C_3 & O \end{pmatrix} = r\begin{pmatrix} A & B_4 \\ C_1 & O \\ C_3 & O \end{pmatrix}, \quad r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \\ O & C_3 & O \end{pmatrix} = r\begin{pmatrix} O & ZC_1 & O \\ B_2X & A & B_4 \\ O & C_3 & O \end{pmatrix} \le r\begin{pmatrix} A & B_2 & B_4 \\ C_1 & O & O \\ C_3 & O & O \end{pmatrix}
(14)

and

r(B_3\;\, B_1\;\, A\;\, B_2\;\, B_4) = r(A\;\, B_2\;\, B_4), \quad r\begin{pmatrix} O & C_2 & O \\ B_3 & A & B_4 \\ O & C_1 & O \end{pmatrix} = r\begin{pmatrix} A & B_4 \\ C_1 & O \end{pmatrix}, \quad r\begin{pmatrix} O & O & C_2 & O \\ B_3 & B_1 & A & B_4 \end{pmatrix} = r\begin{pmatrix} O & O & ZC_1 & O \\ B_3 & B_2X & A & B_4 \end{pmatrix} \le r\begin{pmatrix} B_3 & A & B_2 & B_4 \\ O & C_1 & O & O \end{pmatrix}, \quad r\begin{pmatrix} O & O & C_2 & O \\ B_3 & B_1 & A & B_4 \end{pmatrix} = r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \end{pmatrix}
(15)

and

r\begin{pmatrix} O & C_4 & O \\ B_1 & A & B_2 \\ O & C_3 & O \end{pmatrix} = r\begin{pmatrix} A & B_2 \\ C_3 & O \end{pmatrix}, \quad r\begin{pmatrix} C_4 \\ C_2 \\ A \\ C_1 \\ C_3 \end{pmatrix} = r\begin{pmatrix} A \\ C_1 \\ C_3 \end{pmatrix}, \quad r\begin{pmatrix} O & C_4 \\ O & C_2 \\ B_1 & A \\ O & C_3 \end{pmatrix} = r\begin{pmatrix} O & C_2 \\ B_1 & A \\ O & C_3 \end{pmatrix}, \quad r\begin{pmatrix} O & C_4 \\ O & C_2 \\ B_1 & A \\ O & C_3 \end{pmatrix} = r\begin{pmatrix} O & C_4 \\ O & ZC_1 \\ B_2X & A \\ O & C_3 \end{pmatrix} \le r\begin{pmatrix} C_4 & O \\ A & B_2 \\ C_1 & O \\ C_3 & O \end{pmatrix}.
(16)

Combining (5) with (13)-(16) yields (12).

Corollary 2.4 Let P(V1, V2, V3, V4) be given as (4) and let R(B_1) ⊆ R(B_2), R(B_3) ⊆ R(B_4), R(C_2^*) ⊆ R(C_1^*), and R(C_4^*) ⊆ R(C_3^*). Then the matrix equation A = B1V1C1 + B2V2C2 + B3V3C3 + B4V4C4 holds for any V1, V2, V3, and V4 if and only if τ_1 = 0, τ_2 = 0, or τ_3 = 0.
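Theorem 2.3 can be illustrated numerically (a sketch only, with NumPy; the sizes, the construction enforcing the four range inclusions, and the 200 random samples are our choices, and random V_i attain the maximum only generically):

    import numpy as np

    def r(M):
        return np.linalg.matrix_rank(M)

    rng = np.random.default_rng(5)
    m, n = 6, 7
    p = {1: 2, 2: 3, 3: 2, 4: 3}
    q = {1: 3, 2: 2, 3: 3, 4: 2}
    O = np.zeros

    A  = rng.standard_normal((m, n))
    B2 = rng.standard_normal((m, p[2])); B4 = rng.standard_normal((m, p[4]))
    C1 = rng.standard_normal((q[1], n)); C3 = rng.standard_normal((q[3], n))
    B1 = B2 @ rng.standard_normal((p[2], p[1]))     # R(B1) contained in R(B2)
    B3 = B4 @ rng.standard_normal((p[4], p[3]))     # R(B3) contained in R(B4)
    C2 = rng.standard_normal((q[2], q[1])) @ C1     # R(C2*) contained in R(C1*)
    C4 = rng.standard_normal((q[4], q[3])) @ C3     # R(C4*) contained in R(C3*)

    def P(V1, V2, V3, V4):
        return A - B1 @ V1 @ C1 - B2 @ V2 @ C2 - B3 @ V3 @ C3 - B4 @ V4 @ C4

    sampled = max(r(P(*(rng.standard_normal((p[i], q[i])) for i in (1, 2, 3, 4))))
                  for _ in range(200))

    tau1 = min(r(np.block([[O((q[4], p[3])), O((q[4], p[1])), C4],
                           [O((q[2], p[3])), O((q[2], p[1])), C2],
                           [B3,              B1,              A]])),
               r(np.block([[O((q[4], p[3])), C4, O((q[4], p[2]))],
                           [B3,              A,  B2]])),
               r(np.block([[O((q[4], p[3])), C4],
                           [B3,              A],
                           [O((q[1], p[3])), C1]])))
    tau2 = min(r(np.block([[O((q[2], p[1])), C2, O((q[2], p[4]))],
                           [B1,              A,  B4]])),
               r(np.hstack([A, B2, B4])),
               r(np.block([[A,  B4],
                           [C1, O((q[1], p[4]))]])))
    tau3 = min(r(np.block([[O((q[2], p[1])), C2],
                           [B1,              A],
                           [O((q[3], p[1])), C3]])),
               r(np.block([[A,  B2],
                           [C3, O((q[3], p[2]))]])),
               r(np.vstack([A, C1, C3])))
    print(sampled, min(tau1, tau2, tau3))    # generically these two numbers agree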

In the rest of this section, we determine the minimal rank of the linear matrix expression P(V1, V2, V3, V4) in (4) with respect to the four variant matrices V_i ∈ C^{p_i×q_i}, i = 1, 2, 3, 4, when the given matrices in (4) satisfy certain restrictions.

Theorem 2.5 Let P(V1, V2, V3, V4) be given as (4) and let R(B_1) ⊆ R(B_2), R(B_3) ⊆ R(B_4), R(C_2^*) ⊆ R(C_1^*), and R(C_4^*) ⊆ R(C_3^*). Then

\min_{V_1,V_2,V_3,V_4} r(P(V_1,V_2,V_3,V_4)) = r(A\;\, B_2\;\, B_4) + r\begin{pmatrix} A & B_4 \\ C_1 & O \end{pmatrix} + r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \end{pmatrix} + r\begin{pmatrix} A & B_2 \\ C_3 & O \end{pmatrix} + r\begin{pmatrix} O & C_2 \\ B_1 & A \\ O & C_3 \end{pmatrix} + r\begin{pmatrix} A \\ C_1 \\ C_3 \end{pmatrix} - r\begin{pmatrix} B_1 & A & B_4 \\ O & C_1 & O \end{pmatrix} - r\begin{pmatrix} O & C_2 & O \\ B_2 & A & B_4 \end{pmatrix} - r\begin{pmatrix} B_1 & A \\ O & C_1 \\ O & C_3 \end{pmatrix} - r\begin{pmatrix} C_2 & O \\ A & B_2 \\ C_3 & O \end{pmatrix}
+ \max\left\{ r\begin{pmatrix} O & C_4 & O \\ B_3 & A & B_2 \end{pmatrix} + r\begin{pmatrix} O & C_4 \\ B_3 & A \\ O & C_1 \end{pmatrix} + r\begin{pmatrix} O & O & C_4 \\ O & O & C_2 \\ B_3 & B_1 & A \end{pmatrix} - r\begin{pmatrix} O & O & C_4 \\ O & O & C_1 \\ B_3 & B_1 & A \end{pmatrix} - r\begin{pmatrix} O & O & C_4 \\ O & O & C_2 \\ B_3 & B_2 & A \end{pmatrix} - \beta_1 - \beta_2,\;\;
r\begin{pmatrix} A & B_2 & B_4 \\ C_3 & O & O \end{pmatrix} + r\begin{pmatrix} A & B_4 \\ C_1 & O \\ C_3 & O \end{pmatrix} + r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \\ O & C_3 & O \end{pmatrix} - r\begin{pmatrix} B_1 & A & B_4 \\ O & C_1 & O \\ O & C_3 & O \end{pmatrix} - r\begin{pmatrix} C_2 & O & O \\ A & B_2 & B_4 \\ C_3 & O & O \end{pmatrix} - 2\beta_3 \right\},
(17)

where

\beta_1 = \min\left\{ r\begin{pmatrix} O & O & C_2 \\ B_3 & B_1 & A \\ O & O & C_3 \end{pmatrix},\; r\begin{pmatrix} B_3 & A & B_2 \\ O & C_3 & O \end{pmatrix},\; r\begin{pmatrix} B_3 & A \\ O & C_1 \\ O & C_3 \end{pmatrix} \right\}, \quad
\beta_2 = \min\left\{ r\begin{pmatrix} O & C_4 & O \\ O & C_2 & O \\ B_1 & A & B_4 \end{pmatrix},\; r\begin{pmatrix} C_4 & O & O \\ A & B_2 & B_4 \end{pmatrix},\; r\begin{pmatrix} C_4 & O \\ A & B_4 \\ C_1 & O \end{pmatrix} \right\}, \quad
\beta_3 = \min\left\{ r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \\ O & C_3 & O \end{pmatrix},\; r\begin{pmatrix} A & B_2 & B_4 \\ C_3 & O & O \end{pmatrix},\; r\begin{pmatrix} A & B_4 \\ C_1 & O \\ C_3 & O \end{pmatrix} \right\}.

Proof. From the proof of Theorem 2.1, it is easy to verify that the minimal rank of P(V1, V2, V3, V4) in (4) can be expressed as

\min_{V_1,V_2,V_3,V_4} r(P(V_1,V_2,V_3,V_4)) = \min_{V_1,V_2,V_3,V_4} r(T) - \sum_{i=1}^{4} p_i - \sum_{i=1}^{4} q_i,
(18)

where T, S, E_i, p_i, and q_i, i = 1, 2, 3, 4, are as given in the proof of Theorem 2.1. Applying formula (2) of Lemma 1.1 to the matrix T, we have

\min_{V_1,V_2,V_3,V_4} r(T) = \min_{V_3,V_4} r\begin{pmatrix} O & E_2 & -V_4 \\ E_1 & S & E_3 \\ -V_3 & E_4 & O \end{pmatrix} = r(E_1\;\, S\;\, E_3) + r\begin{pmatrix} E_2 \\ S \\ E_4 \end{pmatrix} + \max\left\{ r\begin{pmatrix} O & E_2 \\ E_1 & S \end{pmatrix} - r\begin{pmatrix} E_2 \\ S \end{pmatrix} - r(E_1\;\, S),\; r\begin{pmatrix} S & E_3 \\ E_4 & O \end{pmatrix} - r\begin{pmatrix} S \\ E_4 \end{pmatrix} - r(S\;\, E_3) \right\}.
(19)

In this case, we derive from (19) that

\min_{V_1,V_2,V_3,V_4} r(T) = \min_{V_1,V_2} r(E_1\;\, S\;\, E_3) + \min_{V_1,V_2} r\begin{pmatrix} E_2 \\ S \\ E_4 \end{pmatrix} + \max\left\{ \min_{V_1,V_2} r\begin{pmatrix} O & E_2 \\ E_1 & S \end{pmatrix} - \min_{V_1,V_2} r\begin{pmatrix} E_2 \\ S \end{pmatrix} - \min_{V_1,V_2} r(E_1\;\, S),\; \min_{V_1,V_2} r\begin{pmatrix} S & E_3 \\ E_4 & O \end{pmatrix} - \min_{V_1,V_2} r\begin{pmatrix} S \\ E_4 \end{pmatrix} - \min_{V_1,V_2} r(S\;\, E_3) \right\}.
(20)

Again applying the formula (2) in Lemma 1.1, we have

min V 1 , V 2 r ( E 1 S E 3 ) = q 4 + q 3 + min V 1 , V 2 r ( S 3 ) = i = 1 4 q i + p 1 + p 2 + r ( B 3 B 1 A B 2 B 4 ) + r O C 2 O B 3 A B 4 O C 1 O + max r O O C 2 O B 3 B 1 A B 4 - r O O C 2 O B 3 B 1 A B 4 O O C 1 O - r O O C 2 O O B 3 B 1 A B 2 B 4 , r B 3 A B 2 B 4 O C 1 O O - r O C 2 O O B 3 A B 2 B 4 O C 1 O O - r B 3 B 1 A B 2 B 4 O O C 1 O O ,
(21)

where S_3 is as given in the proof of Theorem 2.1. Since B1 = B2X, B3 = B4Y, C2 = ZC1, and C4 = WC3, (21) is reduced to

min V 1 , V 2 r ( E 1 S E 3 ) = i = 1 4 q i + p 1 + p 2 + r ( A B 2 B 4 ) + r A B 4 C 1 O + max r O C 2 O B 1 A B 4 - r B 1 A B 4 O C 1 O - r C 2 O O A B 2 B 4 , - r A B 2 B 4 C 1 O O = i = 1 4 q i + p 1 + p 2 + r ( A B 2 B 4 ) + r A B 4 C 1 O + r O C 2 O B 1 A B 4 - r B 1 A B 4 O C 1 O - r C 2 O O A B 2 B 4 .
(22)

The last equality holds because, by the well-known Frobenius rank inequality r(ABC) ≥ r(AB) + r(BC) − r(B),

r\begin{pmatrix} O & C_2 & O \\ B_1 & A & B_4 \end{pmatrix} = r\begin{pmatrix} O & ZC_1 & O \\ B_2X & A & B_4 \end{pmatrix} = r\left( \begin{pmatrix} Z & O \\ O & I \end{pmatrix} \begin{pmatrix} O & C_1 & O \\ B_2 & A & B_4 \end{pmatrix} \begin{pmatrix} X & O & O \\ O & I & O \\ O & O & I \end{pmatrix} \right) \ge r\left( \begin{pmatrix} Z & O \\ O & I \end{pmatrix} \begin{pmatrix} O & C_1 & O \\ B_2 & A & B_4 \end{pmatrix} \right) + r\left( \begin{pmatrix} O & C_1 & O \\ B_2 & A & B_4 \end{pmatrix} \begin{pmatrix} X & O & O \\ O & I & O \\ O & O & I \end{pmatrix} \right) - r\begin{pmatrix} O & C_1 & O \\ B_2 & A & B_4 \end{pmatrix} = r\begin{pmatrix} O & C_2 & O \\ B_2 & A & B_4 \end{pmatrix} + r\begin{pmatrix} O & C_1 & O \\ B_1 & A & B_4 \end{pmatrix} - r\begin{pmatrix} O & C_1 & O \\ B_2 & A & B_4 \end{pmatrix}.
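As a quick numerical sanity check of the Frobenius rank inequality invoked here (a sketch only, assuming NumPy):

    import numpy as np

    def r(M):
        return np.linalg.matrix_rank(M)

    rng = np.random.default_rng(6)
    A = rng.standard_normal((5, 4)) @ rng.standard_normal((4, 6))
    B = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 7))
    C = rng.standard_normal((7, 2)) @ rng.standard_normal((2, 5))
    print(r(A @ B @ C) >= r(A @ B) + r(B @ C) - r(B))    # Frobenius rank inequality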

By a similar method, we also have

min V 1 , V 2 r E 2 S E 4 = i = 1 4 p i + q 1 + q 2 + r A C 1 C 3 + r A B 2 C 3 O + r O C 2 B 1 A O C 3 - r B 1 A O C 1 O C 3 - r C 2 O A B 2 C 3 O ,
(23)
min V 1 , V 2 r O E 2 E 1 S = p 1 + p 2 + p 4 + q 1 + q 2 + q 3 + r O C 4 O B 3 A B 2 + r O C 4 B 3 A O C 1 + r O O C 4 O O C 2 B 3 B 1 A - r O O C 4 O O C 1 B 3 B 1 A - r O O C 4 O O C 2 B 3 B 2 A ,
(24)
min V 1 , V 2 r S E 3 E 4 O = q 1 + q 2 + q 4 + p 1 + p 2 + p 3 + r A B 2 B 4 C 3 O O + r A B 4 C 1 O C 3 O + r O C 2 O B 1 A B 4 O C 3 O - r B 1 A B 4 O C 1 O O C 3 O - r C 2 O O A B 2 B 4 C 3 O O .
(25)

On the other hand, by the formula (1) in Lemma 1.1, we have

min V 1 , V 2 r E 2 S = p 4 + p 1 + p 2 + q 1 + q 2 + min r O O C 2 B 3 B 1 A O O C 3 , r O C 4 O B 3 A B 2 O C 3 O , r B 3 A O C 1 O C 3 .
(26)
max V 1 , V 2 r S E 4 = p 3 + p 1 + p 2 + q 1 + q 2 + min r O C 2 O B 1 A B 4 O C 3 O , r C 4 O O A B 2 B 4 C 3 O O , r A B 4 C 1 O C 3 O ,
(27)
max V 1 , V 2 r ( E 1 S ) = q 3 + p 1 + p 2 + q 1 + q 2 + min r O C 4 O O C 2 O B 1 A B 4 , r C 4 O O A B 2 B 4 , r C 4 O A B 4 C 1 O ,
(28)
max V 1 , V 2 r ( S E 3 ) = q 3 + p 1 + p 2 + q 1 + q 2 + min r O C 2 O B 1 A B 4 O C 3 O , r A B 2 B 4 C 3 O O , r A B 4 C 1 O C 3 O .
(29)

Combining (18), (20), and (22)-(29) yields (17).

Corollary 2.6 Let P(V1, V2, V3, V4) be given as (4) and let R(B_1) ⊆ R(B_2), R(B_3) ⊆ R(B_4), R(C_2^*) ⊆ R(C_1^*), and R(C_4^*) ⊆ R(C_3^*). Then the matrix equation A = B1V1C1 + B2V2C2 + B3V3C3 + B4V4C4 is consistent if and only if the right-hand side of (17) is zero.

3 Some applications to generalized Schur complement and partial matrix

As direct applications of the results in Section 2, we determine in this section the maximal and minimal ranks of the generalized Schur complement A − BM(1)C − DN(1)G and of the partial matrix (A  BM(1)C  DN(1)G) with respect to the two variant generalized inverses M(1) ∈ M{1} and N(1) ∈ N{1}.

Theorem 3.1 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}.

Then

\max_{M^{(1)} \in M\{1\},\, N^{(1)} \in N\{1\}} r(A - BM^{(1)}C - DN^{(1)}G) = \min\{\hat{T}_1,\, \hat{T}_2,\, \hat{T}_3\},
(30)

where

\hat{T}_1 = \min\left\{ r\begin{pmatrix} A & D & B \\ G & N & O \\ C & O & M \end{pmatrix} - r(M) - r(N),\; r\begin{pmatrix} A & D & B \\ G & N & O \end{pmatrix} - r(N),\; r\begin{pmatrix} A & D \\ G & N \\ C & O \end{pmatrix} - r(N) \right\}, \quad
\hat{T}_2 = \min\left\{ r\begin{pmatrix} A & B & D \\ C & M & O \end{pmatrix} - r(M),\; r(A\;\, B\;\, D),\; r\begin{pmatrix} A & D \\ C & O \end{pmatrix} \right\}, \quad
\hat{T}_3 = \min\left\{ r\begin{pmatrix} A & B \\ C & M \\ G & O \end{pmatrix} - r(M),\; r\begin{pmatrix} A \\ C \\ G \end{pmatrix},\; r\begin{pmatrix} A & B \\ G & O \end{pmatrix} \right\}.

Proof. Applying Lemma 1.2, we have

M^{(1)} = M^{\dagger} + F_MW_1 + W_2E_M
(31)

and

N^{(1)} = N^{\dagger} + F_NW_3 + W_4E_N,
(32)

where W_i, i = 1, 2, 3, 4, are arbitrary, E_M = I_q − MM^†, F_M = I_p − M^†M, E_N = I_t − NN^†, and F_N = I_s − N^†N. Substituting (31) and (32) into the generalized Schur complement A − BM(1)C − DN(1)G yields

A - BM^{(1)}C - DN^{(1)}G = A_1 - BF_MW_1C - BW_2E_MC - DF_NW_3G - DW_4E_NG,
(33)

where A_1 = A − BM^†C − DN^†G.

In fact, A_1 − BF_MW_1C − BW_2E_MC − DF_NW_3G − DW_4E_NG is a special case of the matrix expression P(V_1, V_2, V_3, V_4), with R(BF_M) ⊆ R(B), R(DF_N) ⊆ R(D), R((E_MC)^*) ⊆ R(C^*), and R((E_NG)^*) ⊆ R(G^*). In this case, from formula (12) in Theorem 2.3 we have

\max_{M^{(1)} \in M\{1\},\, N^{(1)} \in N\{1\}} r(A - BM^{(1)}C - DN^{(1)}G) = \max_{W_1,W_2,W_3,W_4} r(A_1 - BF_MW_1C - BW_2E_MC - DF_NW_3G - DW_4E_NG) = \min\{T_1',\, T_2',\, T_3'\},
(34)

where

T_1' = \min\left\{ r\begin{pmatrix} O & O & E_NG \\ O & O & E_MC \\ DF_N & BF_M & A_1 \end{pmatrix},\; r\begin{pmatrix} O & E_NG & O \\ DF_N & A_1 & B \end{pmatrix},\; r\begin{pmatrix} O & E_NG \\ DF_N & A_1 \\ O & C \end{pmatrix} \right\}, \quad
T_2' = \min\left\{ r\begin{pmatrix} O & E_MC & O \\ BF_M & A_1 & D \end{pmatrix},\; r(A_1\;\, B\;\, D),\; r\begin{pmatrix} A_1 & D \\ C & O \end{pmatrix} \right\}, \quad
T_3' = \min\left\{ r\begin{pmatrix} O & E_MC \\ BF_M & A_1 \\ O & G \end{pmatrix},\; r\begin{pmatrix} A_1 & B \\ G & O \end{pmatrix},\; r\begin{pmatrix} A_1 \\ C \\ G \end{pmatrix} \right\}.

For T_1', simplifying the ranks of the matrices by Lemma 1.3 and block Gaussian elimination, we find that

r\begin{pmatrix} O & O & E_NG \\ O & O & E_MC \\ DF_N & BF_M & A_1 \end{pmatrix} = r\begin{pmatrix} A_1 & DF_N & BF_M \\ E_NG & O & O \\ E_MC & O & O \end{pmatrix} = r\begin{pmatrix} A_1 & D & B & O & O \\ G & O & O & N & O \\ C & O & O & O & M \\ O & N & O & O & O \\ O & O & M & O & O \end{pmatrix} - 2r(N) - 2r(M) = r\begin{pmatrix} A & D & B \\ G & N & O \\ C & O & M \end{pmatrix} - r(N) - r(M),
(35)
r\begin{pmatrix} O & E_NG & O \\ DF_N & A_1 & B \end{pmatrix} = r\begin{pmatrix} A_1 & DF_N & B \\ E_NG & O & O \end{pmatrix} = r\begin{pmatrix} A_1 & D & B & O \\ G & O & O & N \\ O & N & O & O \end{pmatrix} - 2r(N) = r\begin{pmatrix} A & D & B \\ G & N & O \end{pmatrix} - r(N),
(36)
r\begin{pmatrix} O & E_NG \\ DF_N & A_1 \\ O & C \end{pmatrix} = r\begin{pmatrix} A_1 & DF_N \\ E_NG & O \\ C & O \end{pmatrix} = r\begin{pmatrix} A_1 & D & O \\ G & O & N \\ C & O & O \\ O & N & O \end{pmatrix} - 2r(N) = r\begin{pmatrix} A & D \\ G & N \\ C & O \end{pmatrix} - r(N).
(37)

Combining the rank equalities (35), (36), and (37), we have T_1' = \hat{T}_1.

By a similar approach, we also have T_2' = \hat{T}_2 and T_3' = \hat{T}_3. This completes the proof of the theorem.

Corollary 3.2 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}. Then the identity A = BM(1)C + DN(1)G holds for every M(1) ∈ M{1} and N(1) ∈ N{1} if and only if \hat{T}_1 = 0, \hat{T}_2 = 0, or \hat{T}_3 = 0.
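Theorem 3.1 and Corollary 3.2 can also be explored numerically. The sketch below (not part of the paper; it assumes NumPy, and the helper random_g_inverse simply implements formula (3) of Lemma 1.2) samples random {1}-inverses of M and N and compares the largest sampled rank of A − BM(1)C − DN(1)G with min{T̂1, T̂2, T̂3}:

    import numpy as np

    def r(M):
        return np.linalg.matrix_rank(M)

    def random_g_inverse(M, rng):
        # a random {1}-inverse of M via formula (3): M^dagger + (I - M^dagger M) W + Z (I - M M^dagger)
        rows, cols = M.shape
        M_dag = np.linalg.pinv(M)
        W = rng.standard_normal((cols, rows))
        Z = rng.standard_normal((cols, rows))
        return M_dag + (np.eye(cols) - M_dag @ M) @ W + Z @ (np.eye(rows) - M @ M_dag)

    rng = np.random.default_rng(7)
    m, n, p, q, s, t = 5, 6, 3, 4, 3, 4
    A = rng.standard_normal((m, n)); B = rng.standard_normal((m, p))
    C = rng.standard_normal((q, n)); D = rng.standard_normal((m, s))
    G = rng.standard_normal((t, n))
    M = rng.standard_normal((q, 2)) @ rng.standard_normal((2, p))   # a singular q x p matrix
    N = rng.standard_normal((t, 2)) @ rng.standard_normal((2, s))   # a singular t x s matrix

    sampled = max(r(A - B @ random_g_inverse(M, rng) @ C - D @ random_g_inverse(N, rng) @ G)
                  for _ in range(200))

    O = np.zeros
    T1 = min(r(np.block([[A, D, B], [G, N, O((t, p))], [C, O((q, s)), M]])) - r(M) - r(N),
             r(np.block([[A, D, B], [G, N, O((t, p))]])) - r(N),
             r(np.block([[A, D], [G, N], [C, O((q, s))]])) - r(N))
    T2 = min(r(np.block([[A, B, D], [C, M, O((q, s))]])) - r(M),
             r(np.hstack([A, B, D])),
             r(np.block([[A, D], [C, O((q, s))]])))
    T3 = min(r(np.block([[A, B], [C, M], [G, O((t, p))]])) - r(M),
             r(np.vstack([A, C, G])),
             r(np.block([[A, B], [G, O((t, p))]])))
    print(sampled, min(T1, T2, T3))   # generically these agree, by Theorem 3.1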

From the proof of Theorem 3.1, we know that A − BM(1)C − DN(1)G = A_1 − BF_MW_1C − BW_2E_MC − DF_NW_3G − DW_4E_NG, where A_1 = A − BM^†C − DN^†G. In this case, A − BM(1)C − DN(1)G is a special case of the matrix expression P(V_1, V_2, V_3, V_4), with R(BF_M) ⊆ R(B), R(DF_N) ⊆ R(D), R((E_MC)^*) ⊆ R(C^*), and R((E_NG)^*) ⊆ R(G^*). Then from Theorem 2.5 we have

Theorem 3.3 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}.

Then

min M ( 1 ) M { 1 } , N ( 1 ) N { 1 } r ( A - B M ( 1 ) C - D N ( 1 ) G ) = min W 1 , W 2 , W 3 , W 4 r ( A 1 - B F M W 1 C - B W 2 E M C - D F N W 3 G - D W 4 E N G ) = r ( A B D ) + r A D C O + r M C O B A D + r A C G + r A B G O + r M C B A O G - r B A D M O O O C O - r O C O M B A D O - r B A M O O C O G - r C O M A B O G O O + 3 r ( M ) + max r N G O D A B + r O G N B A O O C O + r N O G O M C D B A - r N O G O O C D B A O M O - r N O G O O O C M D B A O + r ( N ) - δ 1 - δ 2 , r A B D G O O + r A B C O G O + r M C O B A D O G O - r B A D M O O O C O O G O - r C O O M A B D O G O O O - 2 δ 3 ,
(38)

where

δ 1 = min r O M C D B A N O O O O G - r ( M ) , r D A B N O O O G O , r D A N O O C O G , δ 2 = min r O G O N M C O O B A D O - r ( M ) , r G O O N A B D O , r G O N A D O C O O , δ 3 = min r M C O B A D O G O - r ( M ) , r A B D G O O , r A D C O G O .

Corollary 3.4 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}. Then there exist M(1) ∈ M{1} and N(1) ∈ N{1} such that A = BM(1)C + DN(1)G if and only if the right-hand side of (38) is zero.

Next, we will determine the maximal and minimal ranks of the partial matrix

(A\;\; BM^{(1)}C\;\; DN^{(1)}G)

with respect to M(1) ∈ M{1} and N(1) ∈ N{1}, by applying the results in Section 2.

It is quite obvious that the partial matrix (A BM(1)C DN(1)G) may be written as

(A\;\; BM^{(1)}C\;\; DN^{(1)}G) = (A\;\; O\;\; O) + BM^{(1)}(O\;\; C\;\; O) + DN^{(1)}(O\;\; O\;\; G).
(39)
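A two-line numerical check of the decomposition (39) (a sketch with NumPy; M1 and N1 below are arbitrary stand-ins for M(1) and N(1)):

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((4, 3)); B = rng.standard_normal((4, 2)); C = rng.standard_normal((2, 3))
    D = rng.standard_normal((4, 2)); G = rng.standard_normal((2, 3))
    M1 = rng.standard_normal((2, 2)); N1 = rng.standard_normal((2, 2))
    Z = np.zeros((2, 3))

    lhs = np.hstack([A, B @ M1 @ C, D @ N1 @ G])
    rhs = (np.hstack([A, np.zeros((4, 3)), np.zeros((4, 3))])
           + B @ M1 @ np.hstack([Z, C, Z])
           + D @ N1 @ np.hstack([Z, Z, G]))
    print(np.allclose(lhs, rhs))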

Then from (39) and Theorems 2.3 and 3.1, we have

Theorem 3.5 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}.

Then

\max_{M^{(1)} \in M\{1\},\, N^{(1)} \in N\{1\}} r(A\;\; BM^{(1)}C\;\; DN^{(1)}G) = \min\{\tilde{T}_1,\, \tilde{T}_2,\, \tilde{T}_3\},
(40)

where

\tilde{T}_1 = \min\left\{ r\begin{pmatrix} A & O & O & D & B \\ O & O & G & N & O \\ O & C & O & O & M \end{pmatrix} - r(M) - r(N),\; r\begin{pmatrix} A & O & D & B \\ O & G & N & O \end{pmatrix} - r(N),\; r\begin{pmatrix} A & O & D \\ O & G & N \end{pmatrix} + r(C) - r(N) \right\}, \quad
\tilde{T}_2 = \min\left\{ r\begin{pmatrix} A & O & B & D \\ O & C & M & O \end{pmatrix} - r(M),\; r(A\;\, B\;\, D),\; r(A\;\, D) + r(C) \right\}, \quad
\tilde{T}_3 = \min\left\{ r\begin{pmatrix} A & O & B \\ O & C & M \end{pmatrix} + r(G) - r(M),\; r(A\;\, B) + r(G),\; r(A) + r(C) + r(G) \right\}.

Corollary 3.6 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}. Then the inclusion R(BM(1)C + DN(1)G) ⊆ R(A) holds for every M(1) ∈ M{1} and N(1) ∈ N{1} if and only if \tilde{T}_1 = 0, \tilde{T}_2 = 0, or \tilde{T}_3 = 0.

On the other hand, from (39) and Theorems 2.5 and 3.3, we can easily obtain the minimal rank of the partial matrix (A  BM(1)C  DN(1)G) with respect to M(1) ∈ M{1} and N(1) ∈ N{1}.

Theorem 3.7 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}.

Then

min M ( 1 ) M { 1 } , N ( 1 ) N { 1 } r ( A B M ( 1 ) C D N ( 1 ) G ) = r ( A ) + r ( A D ) + r ( A B ) + r ( A B D ) + r M O C O B A O D - r B A M O - r B A D M O O - r O O C O M B A O D O + 3 r ( M ) + max r N O G O D A O B + r O O G N B A O O + r N O O O G O M O C O D B A O O - r N O O G D B A O O M O O - r N O O O G O a O O O C O M D B A O O O + r ( N ) - ξ 1 - ξ 2 , r ( G ) + r ( A D ) + r ( A B D ) + r M O C O B A O D O G O - r B A D M O O - r O C O O M A O B D O - 2 ξ 3 ,
(41)

where

ξ 1 = min r O M O C D B A O N O O O + r ( G ) - r ( M ) , r D A B N O O + r ( G ) , r D A N O + r ( C ) + r ( G ) ξ 2 = min r O O O G O N M O C O O O B A O O D O - r ( M ) , r O G O O N A O B D O , r O O G O N A O O D O O C O O O , ξ 3 = min { r M O C O B A O D + r ( G ) - r ( M ) , r ( A B D ) + r ( G ) , r ( A D ) + r ( C ) + r ( G ) } .

Corollary 3.8 Let A ∈ C^{m×n}, B ∈ C^{m×p}, C ∈ C^{q×n}, D ∈ C^{m×s}, G ∈ C^{t×n}, M ∈ C^{q×p}, and N ∈ C^{t×s}. Then there exist M(1) ∈ M{1} and N(1) ∈ N{1} such that the inclusion R(BM(1)C + DN(1)G) ⊆ R(A) holds if and only if the right-hand side of (41) is zero.

References

  1. Penrose R: A generalized inverse for matrices. Proc Cambridge Philos Soc 1955, 51: 406–413. 10.1017/S0305004100030401


  2. Ben-Israel A, Greville TNE: Generalized Inverses: Theory and Applications. 2nd edition. Springer-Verlag, New York; 2002 (1st edition: Wiley-Interscience, 1974).


  3. Wang G, Wei Y, Qiao S: Generalized Inverses: Theory and Computations. Science Press, Beijing; 2004.


  4. Braden HW: The matrix equation A^TX ± X^TA = B. SIAM J Matrix Anal Appl 1998, 20: 295–302. 10.1137/S0895479897323270


  5. Johnson CR, Whitney GT: Minimum rank completions. Linear and multilinear Algebra 1991, 28: 271–273. 10.1080/03081089108818051


  6. Tian Y: Ranks of solutions of the matrix equation AXB = C . Linear and multilinear Algebra 2003, 51: 111–125. 10.1080/0308108031000114631


  7. Bostain AA, Woerdeman HJ: Unicity of minimal rank completions for tri-diagonal partial block matrices. Linear Algebra Appl 2001, 325: 23–25. 10.1016/S0024-3795(00)00253-6


  8. Cohen N, Johnson CR, Rodman L, Woerdeman HJ: Ranks of completions of partial matrices. Oper Theory Adv Appl 1989, 40: 165–185.


  9. Marsaglia G, Styan GPH: Equalities and inequalities for ranks of matrices. Linear and multilinear Algebra 1974, 2: 269–292. 10.1080/03081087408817070


  10. Woerdeman HJ: Minimal rank completions for block matrices. Linear Algebra Appl 1989, 121: 105–122.


  11. Woerdeman HJ: Minimal rank completions of partial banded matrices. Linear and multilinear Algebra 1993, 36: 59–69. 10.1080/03081089308818275


  12. Davis C: Completing a matrix so as to minimize its rank. Oper Theory Adv Appl 1988, 29: 87–95.


  13. Johnson CR: Matrix completion problems: a survey in matrix theory and applications. Proc Sympos Appl Math AMS 1990, 40: 171–179.


  14. Rao CR, Mitra SK: Generalized Inverse of Matrices and its Applications. Wiley, New York; 1971.


  15. Tian Y: Completing block matrices with maximal and minimal ranks. Linear Algebra Appl 2000, 321: 327–345. 10.1016/S0024-3795(00)00224-X


  16. Tian Y: The minimal rank of a 3 × 3 partial block matrix. Linear and multilinear Algebra 2002, 50: 125–131. 10.1080/03081080290019531


  17. Cohen N, Dancis J: Maximal rank Hermitian completions of partially specified Hermitian matrices. Linear Algebra Appl 1996, 244: 265–276.


  18. Liu Y, Tian Y: More on extremal ranks of the matrix expressions A - BX ± X * B * with statistical applications. Numer Linear Algebra Appl 2008, 15: 307–325. 10.1002/nla.553


  19. Liu Y, Tian Y: Extremal ranks of submatrices in an Hermitian solution to the matrix equation AX A * = B with applications. J Appl Math Comput 2010, 32: 289–301. 10.1007/s12190-009-0251-8


  20. Tian Y, Liu Y: Extremal ranks of some symmetric matrix expressions with applications. SIAM J Matrix Anal Appl 2006, 28: 890–905. 10.1137/S0895479802415545


  21. Mitra SK: A pair of simultaneous linear matrix equations A1X1B1 = C1, A2X2B2 = C2 and a matrix programming problem. Linear Algebra Appl 1990, 131: 107–123.


  22. Puntanen S, Styan GHP, Tian Y: Three rank formulas associated with the covariance matrices of the BLUE and the OLSE in the general linear model. Econometric Theory 2005, 21: 659–664.


  23. Rao CR: Unified theory of linear estimation. Sankhyā Ser A 1971, 33: 371–394.


  24. Rao CR: Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix. J Multivariate Anal 1973, 3: 276–292. 10.1016/0047-259X(73)90042-0


  25. Tian Y: Upper and lower bounds for ranks of matrix expressions using generalized inverses. Linear Algebra Appl 2002, 355: 187–214. 10.1016/S0024-3795(02)00345-2


  26. Tian Y: Using rank formulas to characterize equalities for Moore-Penrose inverse of matrix products. Appl Math Comput 2004, 147: 581–600. 10.1016/S0096-3003(02)00796-8


  27. Tian Y: More on maximal and minimal ranks of Schur complements with applications. Appl Math Comput 2004, 152: 675–692. 10.1016/S0096-3003(03)00585-X


  28. Tian Y, Cheng S: The maximal and minimal ranks of A - BXC with applications. New York J Math 2003, 9: 345–362.



Acknowledgements

The authors would like to thank the Editor-in-Chief and the anonymous referees for their very detailed comments, which greatly improved the presentation of this article. The research of Z. Xiong was supported by the start-up fund of Wuyi University, Jiangmen 529020, Guangdong Province, P.R. China, and by the Foundation for High-Level Talents in Guangdong Province, P.R. China. The research of S. Yuan was supported by the Guangdong Natural Science Fund of China (No. 10452902001005845).

Author information

Correspondence to Yingying Qin.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

The authors jointly worked on deriving the results. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Xiong, Z., Qin, Y. & Yuan, S. The maximal and minimal ranks of matrix expression with applications. J Inequal Appl 2012, 54 (2012). https://doi.org/10.1186/1029-242X-2012-54
