
Estimations for spectral radius of nonnegative matrices and the smallest eigenvalue of M-matrices

Abstract

In this paper, some estimates for the spectral radius of nonnegative matrices and for the smallest eigenvalue of M-matrices are given by means of matrix directed graphs and their k-path coverings. Existing upper and lower bounds for the spectral radius of nonnegative matrices are improved.

MSC:15A18, 65F15.

1 Introduction

The following notation is used throughout this paper. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be an $n\times n$ matrix with real entries. Denote $N=\{1,2,\ldots,n\}$, $r_i(A)=\sum_{j\in N}|a_{ij}|$, and $R_i(A)=\sum_{j\in N\setminus\{i\}}|a_{ij}|$ for $i\in N$. $A[P]$ denotes the principal submatrix of $A$ whose row and column subscripts all lie in $P\subseteq N$, $\rho(A)$ is the spectral radius of $A$, and $\omega_0(A)$ is the eigenvalue of $A$ of smallest modulus.

For $A=(a_{ij})\in\mathbb{R}^{n\times n}$, if $a_{ij}\ge 0$ for all $i,j\in N$, then $A$ is called a nonnegative matrix, written $A\in N_n$. If $A=sI-B$ with $B\in N_n$ and $s>\rho(B)$, then $A$ is called a nonsingular M-matrix, written $A\in K$. Nonnegative matrices and M-matrices are two important matrix classes with applications in many fields, such as computational mathematics, probability theory, and mathematical economics (see [1]). Some spectral properties of these two classes of matrices are discussed in this paper.

With regard to estimations for the nonnegative matrix spectral radius, the earliest result is given by Perron-Frobenius (see [2]), that is,

$$\min_{i\in N} r_i(A)\le\rho(A)\le\max_{i\in N} r_i(A).$$

Though this result predates the Geršgorin theorem (see [3]), it can be seen as an estimate of $\rho(A)$ by the right end-point of the Geršgorin disc. It is therefore still called a Geršgorin-type estimate.
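As a quick numerical illustration (a sketch using NumPy; the function name `gershgorin_bounds` is ours, not from the paper), the row-sum bounds can be checked directly against the eigenvalues:

```python
import numpy as np

def gershgorin_bounds(A):
    """Row-sum (Perron-Frobenius / Gersgorin-type) bounds for rho(A), A >= 0."""
    r = np.abs(A).sum(axis=1)          # r_i(A) = sum_j |a_ij|
    return r.min(), r.max()

# Nonnegative example: the bounds bracket the spectral radius rho(A) = 4.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lo, hi = gershgorin_bounds(A)
rho = max(abs(np.linalg.eigvals(A)))
assert lo <= rho <= hi                 # 3 <= 4 <= 5
```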

Denote

$$r_A(i,j)=\frac{1}{2}\Bigl\{a_{ii}+a_{jj}+\bigl[(a_{ii}-a_{jj})^2+4R_i(A)R_j(A)\bigr]^{1/2}\Bigr\},\quad i\ne j,\ i,j\in N.$$

A. Brauer (see [4]) gave the Brauer estimate for the spectral radius of nonnegative matrices via the Cassini oval region, improving the Perron-Frobenius result.

Suppose $A\in N_n$ is irreducible; then

$$\min_{i\ne j}\{r_A(i,j)\}\le\rho(A)\le\max_{i\ne j}\{r_A(i,j)\}.$$
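The Brauer bounds can likewise be computed over all pairs $i\ne j$ (a NumPy sketch; `brauer_bounds` is an illustrative name). On the same test matrix as above they are tighter than the row-sum bounds:

```python
import numpy as np
from itertools import combinations

def brauer_bounds(A):
    """Brauer (Cassini oval) bounds for rho(A), A >= 0 irreducible."""
    n = A.shape[0]
    R = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # deleted row sums R_i(A)
    vals = []
    for i, j in combinations(range(n), 2):
        d = A[i, i] - A[j, j]
        vals.append(0.5 * (A[i, i] + A[j, j] + np.sqrt(d * d + 4 * R[i] * R[j])))
    return min(vals), max(vals)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lo, hi = brauer_bounds(A)                 # (3.0, 4.0), vs. row sums (3, 5)
rho = max(abs(np.linalg.eigvals(A)))      # rho(A) = 4
assert lo - 1e-9 <= rho <= hi + 1e-9
```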

A basic equivalent characterization of M-matrices states that the real parts of the eigenvalues of an M-matrix are all positive (see [1]). Tong (see [5]) improved this result and showed that the eigenvalue of smallest modulus of an M-matrix is a positive number. Zhang (see [6]) proved that the eigenvalue of an M-matrix with smallest real part is also its eigenvalue of smallest modulus and gave the estimation formula

$$\min_{i\in N} r_i(A)\le\omega_0(A)\le\max_{i\in N} r_i(A).$$

For the same reason, we also call it a Geršgorin-type estimate. Let $A$ be a nonsingular M-matrix and denote $s=\max_{i\in N}\{a_{ii}\}$. Then $A=sI-B$ with $B\in N_n$, so $\omega_0(A)=s-\rho(B)$, and the above estimates for M-matrices follow naturally. Therefore, following [4], we can give the Brauer estimate for the smallest eigenvalue of an M-matrix.

Denote

$$l_A(i,j)=\frac{1}{2}\Bigl\{a_{ii}+a_{jj}-\bigl[(a_{ii}-a_{jj})^2+4R_i(A)R_j(A)\bigr]^{1/2}\Bigr\},\quad i\ne j,\ i,j\in N.$$

Let A be a nonsingular and irreducible M-matrix. Then

$$\min_{i\ne j}\{l_A(i,j)\}\le\omega_0(A)\le\max_{i\ne j}\{l_A(i,j)\}.$$

In this paper, we present Brualdi-type estimates for the spectral radius of nonnegative matrices and the smallest eigenvalue of M-matrices, using a deduction method based on directed graphs (see [7]) and the concept of the k-path covering of a directed graph. Moreover, we give improved Brauer-type estimates, which improve the relevant results of [4-6, 8-10].

2 Directed graph and its k-path covering

Let $\Gamma(A)$ be the directed graph of $A=(a_{ij})\in\mathbb{C}^{n\times n}$: $N$ denotes its nodal set, and $E(A)=\{e_{i,j}\mid a_{ij}\ne 0,\ i,j\in N\}$ denotes its directed edge set. The directed edge sequence $\gamma: e_{i_1,i_2}, e_{i_2,i_3},\ldots, e_{i_{s-1},i_s}, e_{i_s,i_1}$ is called a simple loop of $\Gamma(A)$, where $s\ge 2$ and $i_1,i_2,\ldots,i_s$ are pairwise distinct. In short, we denote $\gamma: i_1,i_2,\ldots,i_s,i_{s+1}=i_1$. Let $|\gamma|$ be the length of the simple loop $\gamma\in C(A)$, where $C(A)$ is the set of all simple loops of the matrix $A$.

In this paper, directed paths and simple loops of $\Gamma(A)$ are all regarded as its subgraphs, and $\Gamma^+(i)=\{j\in N\mid j\ne i,\ e_{i,j}\in E(A)\}$ denotes the successor set of $i$ in $\Gamma(A)$.

We now introduce the concept of the k-path covering of a matrix directed graph (see [11]).

Definition 2.1 [11]

Let $\gamma: i_1,i_2,\ldots,i_s,i_{s+1}=i_1\in C(A)$, let $k$ be a positive integer, let $t=\min\{k+1,s\}$, and let $[t,s]=pt=qs$ be the least common multiple of $t$ and $s$. The set $\{\rho_1,\ldots,\rho_p\}$ of directed paths in $\gamma$ starting, respectively, at $i_j, i_{j+t},\ldots,i_{j+(p-1)t}$, each containing $t-1$ directed edges, is called the k-path covering of the simple loop $\gamma$ (with base point $i_j$), denoted by $P_k(\gamma)$.

Definition 2.2 [11]

For every $\gamma\in C(A)$ in $\Gamma(A)$, $P_k(A)=\bigcup_{\gamma\in C(A)}P_k(\gamma)$ is called a k-path covering of $\Gamma(A)$.

Obviously, if the chosen base points are different, the resulting k-path coverings $P_k(\gamma)$ are in general different. When $k=l|\gamma|$ with $l\in\mathbb{Z}^+$, the k-path covering of $\gamma\in C(A)$ is $\gamma$ itself. If $|\gamma|=p(k+1)$ for $\gamma\in C(A)$, then $P_k(\gamma)$ covers every node of $\gamma$ exactly once, and there are $k+1$ different k-path coverings $P_k(\gamma)$ of $\gamma$. If $k\ge\max_{\gamma\in C(A)}|\gamma|$, then $P_k(A)=C(A)$.

Besides, when $k=1$, let $\gamma: i_1,i_2,\ldots,i_s,i_{s+1}=i_1\in C(A)$, let $\eta$ be the greatest common divisor of 2 and $s$, and let $\tau=s/\eta$. The set $\{e_{i_1,i_2}, e_{i_{1+\eta},i_{2+\eta}},\ldots,e_{i_{1+(\tau-1)\eta},i_{2+(\tau-1)\eta}}\}$ is called the odd 1-path covering of $\gamma$, and the set $\{e_{i_2,i_3}, e_{i_{2+\eta},i_{3+\eta}},\ldots,e_{i_{2+(\tau-1)\eta},i_{3+(\tau-1)\eta}}\}$ is called the even 1-path covering of $\gamma$ (subscripts taken modulo $s$). Either choice gives a definite 1-path covering of $\gamma$, denoted $P_1(\gamma)$. When $s$ is a positive odd number, the even and odd 1-path coverings of $\gamma$ coincide; namely, there is only one 1-path covering, containing all $s$ directed edges of $\gamma$. When $s$ is a positive even number, there are two 1-path coverings, odd and even, each containing $s/2$ of the directed edges of $\gamma$.
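The odd/even construction above can be sketched in code (pure Python; the function name `one_path_coverings` is ours, and subscripts wrap around the loop as in the definition):

```python
from math import gcd

def one_path_coverings(loop):
    """Odd and even 1-path coverings of a simple loop given as its node list
    [i_1, ..., i_s] (with i_{s+1} = i_1 implied)."""
    s = len(loop)
    eta = gcd(2, s)          # 2 if s is even, 1 if s is odd
    tau = s // eta
    def edge(j):             # directed edge (i_j, i_{j+1}), 0-based, wrapping
        return (loop[j % s], loop[(j + 1) % s])
    odd = [edge(0 + m * eta) for m in range(tau)]
    even = [edge(1 + m * eta) for m in range(tau)]
    return odd, even

# Length-4 loop 1,2,3,4,1: each covering takes every other edge (s/2 edges).
odd, even = one_path_coverings([1, 2, 3, 4])
assert odd == [(1, 2), (3, 4)]
assert even == [(2, 3), (4, 1)]

# Odd length: both coverings coincide and contain all s edges.
odd3, even3 = one_path_coverings([1, 2, 3])
assert sorted(odd3) == sorted(even3) == [(1, 2), (2, 3), (3, 1)]
```

For the 4-loop of Example 5.2 below, the even covering is exactly the set $\{e_{2,3}, e_{4,1}\}$ used there.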

A relation '$\succeq$' on a nonempty set $\nu$ is called a pre-order if '$\succeq$' is reflexive and transitive, namely $a\succeq a$ for all $a\in\nu$, and $a\succeq b$, $b\succeq c$ imply $a\succeq c$ for all $a,b,c\in\nu$.

Lemma 2.1 [7, 11]

Let '$\succeq$' be a pre-order on the nodal set $N=\{1,2,\ldots,n\}$ of $\Gamma(A)$, $n\ge 2$. If $\Gamma^+(i)\ne\emptyset$ for all $i\in N$, then:

  1. (1)

    There exists a simple loop $\gamma: i_1,i_2,\ldots,i_s,i_{s+1}=i_1$ such that

    $$l\preceq i_{j+1},\quad\forall l\in\Gamma^+(i_j),\ j=1,2,\ldots,s.$$
    (2.1)
  2. (2)

    If $P_k(\gamma)$ is a k-path covering of the above simple loop, then there exists a directed path $\rho: m_1,m_2,\ldots,m_t\in P_k(\gamma)$ in $\gamma$ such that

    $$l\preceq m_{j+1},\quad\forall l\in\Gamma^+(m_j),\ j=1,2,\ldots,t,$$
    (2.2)
    $$m_{t+1}\preceq m_1.$$
    (2.3)

Proof (1) See [7].

(2) Without loss of generality, suppose that $i_1$ is the base point of $P_k(\gamma)=\{\rho_1,\ldots,\rho_p\}$, $t=\min\{k+1,s\}$, $[t,s]=pt=qs$, and $\rho_j: i_{1+(j-1)t}, i_{2+(j-1)t},\ldots,i_{jt}$ for $1\le j\le p$. Let $m_1$ be a largest element of the nodal set $\{i_1,i_{1+t},\ldots,i_{1+(p-1)t}\}$ in the sense of the pre-order '$\succeq$'. In $\gamma$, take the directed path $\rho: m_1,m_2,\ldots,m_t$ starting at $m_1$ and containing $t-1$ directed edges; then $\rho\in P_k(\gamma)$. By (2.1), (2.2) holds. Because $m_1$ is a largest element of $\{i_1,i_{1+t},\ldots,i_{1+(p-1)t}\}$ and $m_{t+1}\in\{i_1,i_{1+t},\ldots,i_{1+(p-1)t}\}$, (2.3) holds. □

The discussion in this paper needs some basic results as regards a reducible matrix determined by a polynomial.

If A is a reducible matrix with n2, then there exists a permutation matrix P such that

$$PAP^T=\begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1K}\\ 0 & A_{22} & \cdots & A_{2K}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & A_{KK}\end{bmatrix},$$
(2.4)

where $A_{tt}$ is an $n_t\times n_t$ principal submatrix of $A$ and $2\le K\le n$. Each $A_{tt}$ is either irreducible or a zero matrix of order 1, and $\sum_{t=1}^K n_t=n$. The right-hand side of (2.4) is called the reduced normal form of $A$. If the ordering of the diagonal blocks is not considered, (2.4) does not depend on the choice of $P$. So (2.4) determines a unique partition $N_1,N_2,\ldots,N_K$ of the set $N=\{1,2,\ldots,n\}$ corresponding to the subscript sets of $A_{11},A_{22},\ldots,A_{KK}$. When $A$ is an irreducible matrix or a zero matrix of order 1, for uniformity we write $A=[A_{11}]$, where $n_1=n$ and $N_1=N$. Denote

$$\alpha=\bigcup_{n_t\ge 2,\ 1\le t\le K} N_t,\qquad \Theta_A=\{a_{ii}\mid i\in N\setminus\alpha\}.$$

Obviously $\alpha=\{i\in N\mid i\in\gamma\text{ for some }\gamma\in C(A)\}$.

Definition 2.3 Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$. If $N=\alpha$, then $A$ is called a weakly irreducible matrix, denoted $A\in WI$. For a general matrix, if $\alpha\ne\emptyset$, we call $A[\alpha]$ the weakly irreducible nucleus of $A$, denoted $\hat A$. Denote $\hat A=\emptyset$ if $\alpha=\emptyset$.
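The set $\alpha$ and the weakly irreducible nucleus can be computed from the graph of $A$, since $\alpha$ consists exactly of the nodes lying on some simple loop of length at least 2 (a NumPy sketch via transitive closure; the function name is ours):

```python
import numpy as np

def weakly_irreducible_nucleus(A):
    """Return (alpha, A_hat): alpha = 0-based indices lying on some simple
    loop of Gamma(A) (length >= 2), and A_hat = A[alpha], the weakly
    irreducible nucleus."""
    n = A.shape[0]
    reach = (A != 0)
    np.fill_diagonal(reach, False)        # ignore self-loops (|gamma| >= 2)
    for k in range(n):                    # Warshall boolean transitive closure
        reach |= np.outer(reach[:, k], reach[k, :])
    alpha = [i for i in range(n)
             if any(reach[i, j] and reach[j, i] for j in range(n) if j != i)]
    return alpha, A[np.ix_(alpha, alpha)]

# Matrix of Example 5.3: only nodes 2 and 3 (0-based: 1 and 2) lie on a loop.
A = np.array([[1, 1, 1, 0],
              [0, 2, 1, 1],
              [0, 1, 2, 1],
              [0, 0, 0, 1]])
alpha, A_hat = weakly_irreducible_nucleus(A)
assert alpha == [1, 2]
assert (A_hat == np.array([[2, 1], [1, 2]])).all()
```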

3 The spectral radius of a nonnegative matrix

In this section, we adopt the convention $\max\emptyset=\min\emptyset=0$.

Lemma 3.1 Let $a_1,a_2,\ldots,a_n\in\mathbb{R}$, $N=\{1,2,\ldots,n\}$, $J\subseteq N$. Define the function $f(x)=\prod_{i\in J}(x-a_i)$. Then $f(x)$ is strictly monotone increasing for $x\ge\max_{i\in J}\{a_i\}$.

Theorem 3.1 Let $A=(a_{ij})\in N_n$. For $\gamma\in C(A)$, let $r_A(\gamma)$ denote the real root of $\prod_{i\in\gamma}(x-a_{ii})=\prod_{i\in\gamma}R_i(\hat A)$ that is larger than $\max_{i\in\gamma}\{a_{ii}\}$. Denote

$$m_c^r(A)=\max\Bigl\{\min_{\gamma\in C(A)} r_A(\gamma),\ \max\Theta_A\Bigr\},\qquad M_c^r(A)=\max\Bigl\{\max_{\gamma\in C(A)} r_A(\gamma),\ \max\Theta_A\Bigr\};$$

then

$$m_c^r(A)\le\rho(A)\le M_c^r(A).$$

Proof (1) $A$ is irreducible. By the Perron-Frobenius theorem, there exists $x=(x_1,x_2,\ldots,x_n)^T>0$ such that $Ax=\rho(A)x$. Define the pre-order '$\succeq$' on the nodal set $N$ of $\Gamma(A)$ by $i\succeq j$ if and only if $x_i\le x_j$. From Lemma 2.1(1), there exists $\gamma': i_1,i_2,\ldots,i_s,i_{s+1}=i_1\in C(A)$ such that $x_l\ge x_{i_{j+1}}$ for all $l\in\Gamma^+(i_j)$, $j=1,2,\ldots,s$. Hence, from

$$\bigl(\rho(A)-a_{i_j i_j}\bigr)x_{i_j}=\sum_{p\ne i_j} a_{i_j p}x_p=\sum_{p\in\Gamma^+(i_j)} a_{i_j p}x_p\ge\Bigl(\sum_{p\in\Gamma^+(i_j)} a_{i_j p}\Bigr)x_{i_{j+1}}=R_{i_j}(A)\,x_{i_{j+1}},\quad j=1,2,\ldots,s,$$

we obtain

$$\prod_{j=1}^s\bigl(\rho(A)-a_{i_j i_j}\bigr)\prod_{j=1}^s x_{i_j}\ge\prod_{j=1}^s R_{i_j}(A)\prod_{j=1}^s x_{i_{j+1}}.$$

Then

$$\prod_{j=1}^s\bigl(\rho(A)-a_{i_j i_j}\bigr)\ge\prod_{j=1}^s R_{i_j}(A),$$

i.e.

$$\prod_{i\in\gamma'}\bigl(\rho(A)-a_{ii}\bigr)\ge\prod_{i\in\gamma'}R_i(A).$$
(3.1)

Similarly, if we define a pre-order '$\succeq$' on the nodal set $N$ of $\Gamma(A)$ by $i\succeq j$ if and only if $x_i\ge x_j$, then from Lemma 2.1 there exists $\gamma'': i_1,i_2,\ldots,i_s,i_{s+1}=i_1\in C(A)$ such that $x_l\le x_{i_{j+1}}$ for all $l\in\Gamma^+(i_j)$, $j=1,2,\ldots,s$. As above, we obtain

$$\prod_{i\in\gamma''}\bigl(\rho(A)-a_{ii}\bigr)\le\prod_{i\in\gamma''}R_i(A).$$
(3.2)

Besides, notice that $\rho(A)\ge\max_{i\in\gamma'}\{a_{ii}\}$ and $\rho(A)\ge\max_{i\in\gamma''}\{a_{ii}\}$. From (3.1), (3.2), and Lemma 3.1 it is deduced that

$$\min_{\gamma\in C(A)} r_A(\gamma)\le r_A(\gamma')\le\rho(A)\le r_A(\gamma'')\le\max_{\gamma\in C(A)} r_A(\gamma),$$

i.e. $m_c^r(A)\le\rho(A)\le M_c^r(A)$.

(2) $A$ is weakly irreducible. We may suppose that $A$ is in its reduced normal form (2.4), where each $A_{tt}$ is irreducible of order not less than 2. We proceed in two steps.

  1. (i)

    Since $R_i(\hat A)=R_i(A_{KK})$ for $i\in N_K$, from (1) we easily see

    $$\rho(A)\ge\rho(A_{KK})\ge m_c^r(A_{KK})=\min_{\gamma\in C(A_{KK})} r_{A_{KK}}(\gamma)=\min_{\gamma\in C(A_{KK})} r_A(\gamma)\ge\min_{\gamma\in C(A)} r_A(\gamma).$$
  2. (ii)

    Let $t$ be such that $\rho(A)=\rho(A_{tt})$. From (1), we easily see that

    $$\rho(A)=\rho(A_{tt})\le M_c^r(A_{tt})=\max_{\gamma\in C(A_{tt})} r_{A_{tt}}(\gamma)\le\max_{\gamma\in C(A_{tt})} r_A(\gamma)\le\max_{\gamma\in C(A)} r_A(\gamma).$$

Combining (i) with (ii), we have m c r (A)ρ(A) M c r (A).

(3) $A$ is not necessarily weakly irreducible. Noticing that $r_A(\gamma)=r_{\hat A}(\gamma)$, from (2) we see

$$\min_{\gamma\in C(A)} r_A(\gamma)=\min_{\gamma\in C(\hat A)} r_{\hat A}(\gamma)\le\rho(\hat A)\le\max_{\gamma\in C(\hat A)} r_{\hat A}(\gamma)=\max_{\gamma\in C(A)} r_A(\gamma).$$

Since $\rho(A)=\max\{\max\Theta_A,\rho(\hat A)\}$, it follows that

$$\max\Bigl\{\min_{\gamma\in C(A)} r_A(\gamma),\ \max\Theta_A\Bigr\}\le\rho(A)\le\max\Bigl\{\max_{\gamma\in C(A)} r_A(\gamma),\ \max\Theta_A\Bigr\},$$

i.e. $m_c^r(A)\le\rho(A)\le M_c^r(A)$. □

Remark 3.1 Because $m_c^r(A)$ and $M_c^r(A)$ depend on the directed graph, the traditional continuity argument does not work when $A$ is reducible, so we use (2.4) to prove the theorem. Besides, in Theorem 3.1, $r_A(\gamma)$ must be defined via $R_i(\hat A)$; it cannot be defined directly via $R_i(A)$. See Example 5.3.

Theorem 3.2 Let $A=(a_{ij})\in N_n$ and let $P_k(A)$ be a k-path covering of $\Gamma(A)$. For $\rho\in P_k(A)$, always denote its nodal sequence by $\rho: i_1,i_2,\ldots,i_k,i_{k+1}$, and let $r_A(\rho)$ denote the real root of the equation $\prod_{i\in\rho}(x-a_{ii})=\prod_{i\in\rho}R_i(\hat A)$ with $r_A(\rho)>\max_{i\in\rho}\{a_{ii}\}$. Denote

$$m_\rho^r(A)=\max\Bigl\{\min_{\rho\in P_k(A)} r_A(\rho),\ \max\Theta_A\Bigr\},\qquad M_\rho^r(A)=\max\Bigl\{\max_{\rho\in P_k(A)} r_A(\rho),\ \max\Theta_A\Bigr\};$$

then

$$m_\rho^r(A)\le\rho(A)\le M_\rho^r(A).$$

Proof Let $A$ be an irreducible nonnegative matrix. By the Perron-Frobenius theorem, there exists $x=(x_1,\ldots,x_n)^T>0$ such that $Ax=\rho(A)x$. Define the pre-order '$\succeq$' on the nodal set $N$ of $\Gamma(A)$ by $i\succeq j$ if and only if $x_i\le x_j$. It follows from Lemma 2.1 that there exists a directed path $\rho': m_1,m_2,\ldots,m_t\in P_k(A)$ in $\Gamma(A)$ such that $x_l\ge x_{m_{j+1}}$ for all $l\in\Gamma^+(m_j)$, $j=1,\ldots,t$; $x_{m_{t+1}}\ge x_{m_1}$; and $x_{m_j}>0$, $j=1,2,\ldots,t$. Because $Ax=\rho(A)x$, we get

$$0<\bigl(\rho(A)-a_{m_j m_j}\bigr)x_{m_j}=\sum_{p\ne m_j} a_{m_j p}x_p=\sum_{p\in\Gamma^+(m_j)} a_{m_j p}x_p\ge\Bigl(\sum_{p\in\Gamma^+(m_j)} a_{m_j p}\Bigr)x_{m_{j+1}}=R_{m_j}(A)\,x_{m_{j+1}},\quad j=1,2,\ldots,t.$$
(3.3)

Multiply all the inequalities in (3.3):

$$\prod_{j=1}^t\bigl(\rho(A)-a_{m_j m_j}\bigr)\prod_{j=1}^t x_{m_j}\ge\prod_{j=1}^t R_{m_j}(A)\prod_{j=1}^t x_{m_{j+1}}.$$

Furthermore,

$$\prod_{j=1}^t\bigl(\rho(A)-a_{m_j m_j}\bigr)\ge\Bigl(\prod_{j=1}^t R_{m_j}(A)\Bigr)\frac{x_{m_{t+1}}}{x_{m_1}}\ge\prod_{j=1}^t R_{m_j}(A).$$

Then

$$\prod_{i\in\rho'}\bigl(\rho(A)-a_{ii}\bigr)\ge\prod_{i\in\rho'}R_i(A).$$
(3.4)

Likewise, define the pre-order '$\succeq$' on the nodal set $N$ of $\Gamma(A)$ by $i\succeq j$ if and only if $x_i\ge x_j$. It follows from Lemma 2.1 that there exists a directed path $\rho'': m_1,m_2,\ldots,m_t\in P_k(A)$ in $\Gamma(A)$ such that $x_l\le x_{m_{j+1}}$ for all $l\in\Gamma^+(m_j)$, $j=1,2,\ldots,t$; $x_{m_{t+1}}\le x_{m_1}$; and $x_{m_j}>0$, $j=1,2,\ldots,t$. Therefore, from $Ax=\rho(A)x$, we obtain

$$\prod_{i\in\rho''}\bigl(\rho(A)-a_{ii}\bigr)\le\prod_{i\in\rho''}R_i(A).$$
(3.5)

Thus, if $A\in N_n$ is irreducible, the theorem follows from (3.4), (3.5), and Lemma 3.1. If $A\in N_n$ is weakly irreducible or not weakly irreducible, the theorem follows as in the proofs of parts (2) and (3) of Theorem 3.1. □

In Theorem 3.2, if $k\ge\max_{\gamma\in C(A)}|\gamma|$, then $P_k(A)=C(A)$ and Theorem 3.2 reduces to Theorem 3.1. Therefore Theorem 3.1 can be viewed as a special case of Theorem 3.2.

Theorem 3.3 Let $A=(a_{ij})\in N_n$, and let $P_k(A)$ be a k-path covering of $\Gamma(A)$. For $\rho\in P_k(A)$, its nodal sequence is always denoted by $\rho: i_1,i_2,\ldots,i_k,i_{k+1}$. The other notations are the same as in Theorem 3.1 and Theorem 3.2. Then

$$m_\rho^r(A)\le m_c^r(A)\le\rho(A)\le M_c^r(A)\le M_\rho^r(A).$$

Proof First we prove $m_\rho^r(A)\le m_c^r(A)$. It suffices to prove that for every $\gamma\in C(A)$ there exists $\rho\in P_k(\gamma)\subseteq P_k(A)$ such that $r_A(\rho)\le r_A(\gamma)$. Otherwise there exists $\gamma: i_1,i_2,\ldots,i_s,i_{s+1}=i_1\in C(A)$, $|\gamma|=s$, such that $r_A(\gamma)<r_A(\rho)$ for all $\rho\in P_k(\gamma)$. Note that $r_A(\rho)$ is the real root of the equation $\prod_{i\in\rho}(x-a_{ii})=\prod_{i\in\rho}R_i(\hat A)$ larger than $\max_{i\in\rho}\{a_{ii}\}$. From Lemma 3.1, we see that

$$\prod_{i\in\rho}\bigl(r_A(\gamma)-a_{ii}\bigr)<\prod_{i\in\rho}R_i(\hat A),\quad\forall\rho\in P_k(\gamma).$$
(3.6)

Multiply all the inequalities in (3.6),

$$\prod_{\rho\in P_k(\gamma)}\Bigl(\prod_{i\in\rho}\bigl(r_A(\gamma)-a_{ii}\bigr)\Bigr)<\prod_{\rho\in P_k(\gamma)}\Bigl(\prod_{i\in\rho}R_i(\hat A)\Bigr),$$

i.e.

$$\Bigl(\prod_{i\in\gamma}\bigl(r_A(\gamma)-a_{ii}\bigr)\Bigr)^{p(k+1)/|\gamma|}<\Bigl(\prod_{i\in\gamma}R_i(\hat A)\Bigr)^{p(k+1)/|\gamma|}.$$

Furthermore

$$\prod_{i\in\gamma}\bigl(r_A(\gamma)-a_{ii}\bigr)<\prod_{i\in\gamma}R_i(\hat A).$$
(3.7)

Then from Lemma 3.1, we see that (3.7) implies $r_A(\gamma)<r_A(\gamma)$, which is a contradiction.

Similarly, M c r (A) M ρ r (A) can be proved. With the above and Theorem 3.1, the theorem is proved. □

If k=1, the following result, which is more convenient to use, can be obtained from Theorem 3.2.

Corollary 3.1 Let $A=(a_{ij})\in N_n$, and let $P_1(A)$ be a 1-path covering of $\Gamma(A)$. Denote

$$r_A(i,j)=\frac{1}{2}\Bigl\{a_{ii}+a_{jj}+\bigl[(a_{ii}-a_{jj})^2+4R_i(\hat A)R_j(\hat A)\bigr]^{1/2}\Bigr\},$$
$$m_e^r(A)=\max\Bigl\{\min_{e_{i,j}\in P_1(A)} r_A(i,j),\ \max\Theta_A\Bigr\},\qquad M_e^r(A)=\max\Bigl\{\max_{e_{i,j}\in P_1(A)} r_A(i,j),\ \max\Theta_A\Bigr\}.$$

Then

$$m_e^r(A)\le\rho(A)\le M_e^r(A).$$
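A sketch of how the bounds of Corollary 3.1 may be evaluated (NumPy; the function name `corollary31_bounds` is ours). For simplicity we assume $A$ irreducible, so that $\hat A=A$ and $\Theta_A=\emptyset$; the covering is passed as a list of directed edges with 0-based indices. The data are those of Example 5.2:

```python
import numpy as np

def corollary31_bounds(A, P1):
    """Bounds of Corollary 3.1 for an irreducible nonnegative A (so that
    A_hat = A and Theta_A is empty); P1 is a 1-path covering given as a
    list of directed edges (i, j), 0-based."""
    R = A.sum(axis=1) - np.diag(A)       # R_i(A) = R_i(A_hat), A irreducible
    def r(i, j):
        d = A[i, i] - A[j, j]
        return 0.5 * (A[i, i] + A[j, j] + np.sqrt(d * d + 4 * R[i] * R[j]))
    vals = [r(i, j) for i, j in P1]
    return min(vals), max(vals)

# Example 5.2: the even 1-path covering {e_23, e_41} (1-based) of the 4-loop.
A = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.0, 2.0, 0.6, 0.0],
              [0.0, 0.0, 3.0, 0.6],
              [0.6, 0.0, 0.0, 4.0]])
lo, hi = corollary31_bounds(A, [(1, 2), (3, 0)])
rho = max(abs(np.linalg.eigvals(A)))
assert lo <= rho <= hi          # 3.28102 <= 4.02080 <= 4.11555
```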

Obviously, generally speaking, different k-path coverings can be taken for the directed graph $\Gamma(A)$ of $A\in\mathbb{C}^{n\times n}$. We denote the collection of these different k-path coverings $P_k(A)$ of $\Gamma(A)$ by $\mathscr{P}_k(A)$; then we have the following theorem.

Theorem 3.4 Let $A=(a_{ij})\in N_n$. For a given k-path covering $P_k(A)\in\mathscr{P}_k(A)$ of $\Gamma(A)$, let $m_\rho^r(A)$, $M_\rho^r(A)$ be as in Theorem 3.2. Then

$$\max_{P_k(A)\in\mathscr{P}_k(A)}\bigl\{m_\rho^r(A)\bigr\}\le\rho(A)\le\min_{P_k(A)\in\mathscr{P}_k(A)}\bigl\{M_\rho^r(A)\bigr\}.$$

Remark 3.2 Theorem 3.1 can be viewed as an estimate of $\rho(A)$ by means of the right end-point of the Brualdi region of the eigenvalue distribution, so it is called a Brualdi estimate. Corollary 3.1 is the improved Brauer estimate: we only need to calculate $r_A(i,j)$ for the edges $e_{i,j}$ in the simple loops, and when the length of a loop is even, only half of its edges need to be considered. Thus the amount of calculation decreases greatly while the accuracy improves. If Theorem 3.4 is used, the resulting estimate is more accurate still. Theorem 3.1 and Corollary 3.1 are both superior to the results of Perron-Frobenius and Brauer. See Example 5.1 and Example 5.2.

4 The smallest eigenvalue of M-matrix

In this section, we adopt the convention $\max\emptyset=\min\emptyset=+\infty$.

Theorem 4.1 Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a nonsingular M-matrix. For $\gamma\in C(A)$, let $l_A(\gamma)$ be the real root of the equation $\prod_{i\in\gamma}(a_{ii}-x)=\prod_{i\in\gamma}R_i(\hat A)$ with $l_A(\gamma)<\min_{i\in\gamma}\{a_{ii}\}$. Denote

$$m_c^l(A)=\min\Bigl\{\min_{\gamma\in C(A)} l_A(\gamma),\ \min\Theta_A\Bigr\},\qquad M_c^l(A)=\min\Bigl\{\max_{\gamma\in C(A)} l_A(\gamma),\ \min\Theta_A\Bigr\}.$$

Then

$$m_c^l(A)\le\omega_0(A)\le M_c^l(A).$$

Proof Let $A=sI-B$ with $B=(b_{ij})\in N_n$, where $\rho(B)$ is the spectral radius of $B$ and $s>\rho(B)$. Obviously $\omega_0(A)=s-\rho(B)>0$. It follows from Theorem 3.1 that $m_c^r(B)\le\rho(B)\le M_c^r(B)$, and then $s-M_c^r(B)\le\omega_0(A)\le s-m_c^r(B)$.

Since $a_{ii}=s-b_{ii}$, $i\in N$, it follows from the definitions of $l_A(\gamma)$ and $r_B(\gamma)$ that $l_A(\gamma)=s-r_B(\gamma)$. We obtain

$$m_c^l(A)=\min\Bigl\{\min_{\gamma\in C(A)} l_A(\gamma),\ \min\Theta_A\Bigr\}=\min\Bigl\{\min_{\gamma\in C(B)}\{s-r_B(\gamma)\},\ s-\max\Theta_B\Bigr\}=s-\max\Bigl\{\max_{\gamma\in C(B)} r_B(\gamma),\ \max\Theta_B\Bigr\}=s-M_c^r(B).$$

Similarly, it follows that $M_c^l(A)=s-m_c^r(B)$. So $m_c^l(A)\le\omega_0(A)\le M_c^l(A)$. □

Analogously, we have the following results.

Theorem 4.2 Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a nonsingular M-matrix, and let $P_k(A)$ be a k-path covering of $\Gamma(A)$. For $\rho\in P_k(A)$, always denote its nodal sequence by $\rho: i_1,i_2,\ldots,i_k,i_{k+1}$. Let $l_A(\rho)$ denote the real root of the equation $\prod_{i\in\rho}(a_{ii}-x)=\prod_{i\in\rho}R_i(\hat A)$ that is less than $\min_{i\in\rho}\{a_{ii}\}$, and denote

$$m_\rho^l(A)=\min\Bigl\{\min_{\rho\in P_k(A)} l_A(\rho),\ \min\Theta_A\Bigr\},\qquad M_\rho^l(A)=\min\Bigl\{\max_{\rho\in P_k(A)} l_A(\rho),\ \min\Theta_A\Bigr\}.$$

Then

$$m_\rho^l(A)\le\omega_0(A)\le M_\rho^l(A)$$

and

$$m_\rho^l(A)\le m_c^l(A)\le\omega_0(A)\le M_c^l(A)\le M_\rho^l(A).$$

If we take k=1, we have the following corollary.

Corollary 4.1 Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a nonsingular M-matrix, and let $P_1(A)$ be a 1-path covering of $\Gamma(A)$. Denote

$$l_A(i,j)=\frac{1}{2}\Bigl\{a_{ii}+a_{jj}-\bigl[(a_{ii}-a_{jj})^2+4R_i(\hat A)R_j(\hat A)\bigr]^{1/2}\Bigr\},$$
$$m_e^l(A)=\min\Bigl\{\min_{e_{i,j}\in P_1(A)} l_A(i,j),\ \min\Theta_A\Bigr\},\qquad M_e^l(A)=\min\Bigl\{\max_{e_{i,j}\in P_1(A)} l_A(i,j),\ \min\Theta_A\Bigr\}.$$

Then

$$m_e^l(A)\le\omega_0(A)\le M_e^l(A).$$
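Analogously, the bounds of Corollary 4.1 can be sketched for an irreducible nonsingular M-matrix (again assuming $\hat A=A$ and empty $\Theta_A$; the function name is ours, edges use 0-based indices). Here the M-matrix is $B=5I-A$ built from the matrix of Example 5.2:

```python
import numpy as np

def corollary41_bounds(A, P1):
    """Bounds of Corollary 4.1 for an irreducible nonsingular M-matrix A
    (so A_hat = A, Theta_A empty); P1 is a 1-path covering, 0-based edges."""
    R = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # off-diagonal row sums R_i(A)
    def l(i, j):
        d = A[i, i] - A[j, j]
        return 0.5 * (A[i, i] + A[j, j] - np.sqrt(d * d + 4 * R[i] * R[j]))
    vals = [l(i, j) for i, j in P1]
    return min(vals), max(vals)

# M-matrix B = 5I - A from Example 5.2, same covering {e_23, e_41}.
A = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.0, 2.0, 0.6, 0.0],
              [0.0, 0.0, 3.0, 0.6],
              [0.6, 0.0, 0.0, 4.0]])
B = 5 * np.eye(4) - A
lo, hi = corollary41_bounds(B, [(1, 2), (3, 0)])
omega0 = min(np.linalg.eigvals(B).real)
assert lo <= omega0 <= hi       # 0.88445 <= 0.97920 <= 1.71898
```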

Remark 4.1 Theorem 4.1 and Corollary 4.1 are, respectively, the Brualdi type estimation and the Brauer type estimation for the least eigenvalue of the M-matrix. These results are more accurate. See the matrix B in Example 5.1 and Example 5.2.

5 Examples

Example 5.1 Consider the nonnegative matrix

$$A=\begin{bmatrix} 8&1&0&0&0\\ 1&2&1&0&0\\ 0&1&5&1&0\\ 0&0&1&2&1\\ 0&0&0&1&8 \end{bmatrix}.$$

By calculation, $\rho(A)=8.18014$. The Geršgorin-type estimate gives $4\le\rho(A)\le 9$. Because $r_A(1,2)=r_A(1,4)=r_A(2,5)=r_A(4,5)=8.31662$, $r_A(1,5)=9$, $r_A(2,4)=4$, $r_A(2,3)=r_A(3,4)=6$, and $r_A(1,3)=r_A(3,5)=8.56155$, the Brauer-type estimate also gives only $4\le\rho(A)\le 9$. Taking $P_1(A)=\{e_{1,2},e_{2,3},e_{3,4},e_{4,5}\}$, Corollary 3.1 gives $6\le\rho(A)\le 8.31662$.

Consider the nonsingular M-matrix $B=9I-A$. The Geršgorin-type and Brauer-type estimates give only $0\le\omega_0(B)\le 5$. Taking $P_1(B)=\{e_{1,2},e_{2,3},e_{3,4},e_{4,5}\}$, Corollary 4.1 gives $0.68338\le\omega_0(B)\le 3$. In fact $\omega_0(B)=0.81986$.
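The figures quoted in this example can be reproduced numerically (a NumPy sketch; tolerances reflect the five-decimal rounding in the text, and indices are 0-based):

```python
import numpy as np

# Numerical check of Example 5.1.
A = np.array([[8.0, 1.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 5.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 2.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 8.0]])
rho = max(abs(np.linalg.eigvals(A)))
assert abs(rho - 8.18014) < 1e-4

R = A.sum(axis=1) - np.diag(A)
def r(i, j):                        # Brauer quantity r_A(i, j), 0-based
    d = A[i, i] - A[j, j]
    return 0.5 * (A[i, i] + A[j, j] + np.sqrt(d * d + 4 * R[i] * R[j]))

# Covering P_1(A) = {e_12, e_23, e_34, e_45} (1-based) from the text.
vals = [r(0, 1), r(1, 2), r(2, 3), r(3, 4)]
assert abs(min(vals) - 6.0) < 1e-9 and abs(max(vals) - 8.31662) < 1e-4
assert min(vals) <= rho <= max(vals)

# Smallest eigenvalue of the M-matrix B = 9I - A.
B = 9 * np.eye(5) - A
omega0 = min(np.linalg.eigvals(B).real)
assert abs(omega0 - 0.81986) < 1e-4
```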

Example 5.2 Consider the nonnegative matrix

$$A=\begin{bmatrix} 1&0.6&0&0\\ 0&2&0.6&0\\ 0&0&3&0.6\\ 0.6&0&0&4 \end{bmatrix}.$$

By calculation, $\rho(A)=4.02080$. The Geršgorin-type estimate gives $1.6\le\rho(A)\le 4.6$. Because $r_A(1,2)=2.28102$, $r_A(1,3)=3.16619$, $r_A(1,4)=4.11555$, $r_A(2,3)=3.28102$, $r_A(2,4)=4.16619$, and $r_A(3,4)=4.28102$, the Brauer-type estimate gives $2.28102\le\rho(A)\le 4.28102$. Taking $P_1(A)=\{e_{2,3},e_{4,1}\}$, Corollary 3.1 gives $3.28102\le\rho(A)\le 4.11555$. Because $C(A)=\{\gamma:1,2,3,4,1\}$, we have $m_c^r(A)=M_c^r(A)=4.02080$, so Theorem 3.1 already gives the exact value $\rho(A)=4.02080$.

Consider the nonsingular M-matrix $B=5I-A$. The Geršgorin-type estimate gives $0.4\le\omega_0(B)\le 3.4$, and the Brauer-type estimate gives $0.71898\le\omega_0(B)\le 2.71898$. Taking $P_1(B)=\{e_{2,3},e_{4,1}\}$, Corollary 4.1 gives $0.88445\le\omega_0(B)\le 1.71898$. Theorem 4.1 gives the exact value $\omega_0(B)=0.97920$.
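The exactness claim, that Theorem 3.1 pins down $\rho(A)$ here because $C(A)$ contains a single simple loop, can be checked by solving the defining equation of $r_A(\gamma)$ directly (a NumPy sketch using bisection):

```python
import numpy as np

# Example 5.2: C(A) has the single loop 1,2,3,4,1, so Theorem 3.1 is exact:
# r_A(gamma) is the root of (x-1)(x-2)(x-3)(x-4) = 0.6^4 beyond x = 4.
A = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.0, 2.0, 0.6, 0.0],
              [0.0, 0.0, 3.0, 0.6],
              [0.6, 0.0, 0.0, 4.0]])
rho = max(abs(np.linalg.eigvals(A)))

# Solve prod_i (x - a_ii) = prod_i R_i(A) by bisection on [4, 5].
d = np.diag(A)
rhs = np.prod(A.sum(axis=1) - d)          # prod_i R_i(A) = 0.6**4
lo_x, hi_x = 4.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo_x + hi_x)
    if np.prod(mid - d) < rhs:
        lo_x = mid
    else:
        hi_x = mid
assert abs(lo_x - rho) < 1e-8             # m_c^r(A) = M_c^r(A) = rho(A)
```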

Example 5.3 When $A$ is weakly irreducible, $R_i(A)=R_i(\hat A)$, $i\in N$. Consider the following nonnegative matrix that is not weakly irreducible:

$$A=\begin{bmatrix} 1&1&1&0\\ 0&2&1&1\\ 0&1&2&1\\ 0&0&0&1 \end{bmatrix}.$$

Obviously, $\rho(A)=3$, $C(A)=\{\gamma:2,3,2\}$, and $\Theta_A=\{1\}$. In Theorem 3.1, if $r_A(\gamma)$ were defined as the real root of $\prod_{i\in\gamma}(x-a_{ii})=\prod_{i\in\gamma}R_i(A)$ larger than $\max_{i\in\gamma}\{a_{ii}\}$, then $r_A(\gamma)=4$, and moreover $m_c^r(A)=\max\{\min_{\gamma\in C(A)} r_A(\gamma),\max\Theta_A\}=4=\max\{\max_{\gamma\in C(A)} r_A(\gamma),\max\Theta_A\}=M_c^r(A)$, which would force $\rho(A)=4$; this is obviously false. Likewise, if $r_A(i,j)$, $l_A(\gamma)$, $l_A(i,j)$ in Corollary 3.1, Theorem 4.1, and Corollary 4.1 were defined directly via $R_i(A)$, similar mistakes would occur; we do not discuss this in detail.

References

  1. Berman A, Plemmons RJ: Nonnegative Matrices in the Mathematical Sciences. SIAM, New York; 1994.

  2. Frobenius GF: Über Matrizen aus nicht negativen Elementen. Königliche Akademie der Wissenschaften, Berlin; 1912.

  3. Varga RS: Geršgorin and His Circles. Springer, New York; 2000.

  4. Brauer A, Gentry IC: Bounds for the greatest characteristic root of an irreducible nonnegative matrix. Linear Algebra Appl. 1974, 8: 105-107. 10.1016/0024-3795(74)90048-2

  5. Tong WT: On the distributions of eigenvalues for some classes of matrices. Acta Math. Sin. 1977, 20(4): 272-275.

  6. Zhang JJ: On the distributions of eigenvalues for conjugate diagonal dominant matrices. Acta Math. Sin. 1980, 23(4): 544-546.

  7. Brualdi RA: Matrices, eigenvalues, and directed graphs. Linear Multilinear Algebra 1982, 11: 143-165. 10.1080/03081088208817439

  8. Zhang X, Gu DH: A note on A. Brauer's theorem. Linear Algebra Appl. 1994, 196: 163-174.

  9. Li LL: A simplified Brauer's theorem on matrix eigenvalues. Appl. Math. J. Chin. Univ. Ser. B 1999, 14(3): 259-264. 10.1007/s11766-999-0034-x

  10. Kolotilina YL: Generalizations of the Ostrowski-Brauer theorem. Linear Algebra Appl. 2003, 364: 65-80.

  11. Du YH: An improvement of Brauer's theorem and Shemesh's theorem. Acta Math. Appl. Sin. 2004, 27(1): 1-11.


Acknowledgements

We wish to thank the anonymous referees for their thorough reading and constructive comments.

Author information

Correspondence to Hongbin Lv.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wang, T., Lv, H. & Sang, H. Estimations for spectral radius of nonnegative matrices and the smallest eigenvalue of M-matrices. J Inequal Appl 2014, 477 (2014). https://doi.org/10.1186/1029-242X-2014-477
