# Some new two-sided bounds for determinants of diagonally dominant matrices

Wen Li1* and Yanmei Chen2

Author Affiliations

1 School of Mathematical Sciences, South China Normal University, Guangzhou, 510631, P. R. China

2 School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, 510665, P.R. China

Journal of Inequalities and Applications 2012, 2012:61 doi:10.1186/1029-242X-2012-61

 Received: 24 August 2011 Accepted: 9 March 2012 Published: 9 March 2012

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

In this article, we present some new two-sided bounds for the determinant of some diagonally dominant matrices. In particular, the idea of the preconditioning technique is applied to obtain the new bounds.

Mathematics Subject Classification: 65F10; 15A15.

##### Keywords:
diagonally dominant matrix; determinant; M-matrix; bound

### 1 Introduction

By Cn×n (Rn×n) we denote the set of all n × n complex (real) matrices. A matrix A = (aij) ∈ Cn×n is called a Z-matrix if aij ≤ 0 for any i ≠ j, and a nonsingular M-matrix if A is a Z-matrix whose inverse is nonnegative, i.e., A-1 ≥ 0. The comparison matrix 〈A〉 = (ãij) of A is defined by

ãii = |aii| and ãij = -|aij| for i ≠ j, where i, j ∈ 〈n〉 ≡ {1, 2,..., n}.

Throughout this article, we always assume that A = D - L - U, where D, -L, and -U are the (nonsingular) diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively. It is noted that 〈A〉 = |D| - |L| - |U|, where |C| = (|cij|) for C = (cij).

Let B = (bij) ∈ Cn×n and Λi(B) = Σ_{k∈〈n〉, k≠i} |bik|. Then it is easy to see that 〈A〉e = (|a11| - Λ1(A),..., |ann| - Λn(A))T, where e = (1,..., 1)T with appropriate dimension. Let li = Σ_{k<i} |aik| and ui = Σ_{k>i} |aik|. Then |L|e = (l1,..., ln)T, |U|e = (u1,..., un)T, and Λi(A) = li + ui.
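The row-sum quantities above are straightforward to compute; the following is a minimal Python sketch (the helper names `row_sums` and `comparison_matrix` are ours, not from the article):

```python
def row_sums(A):
    """Return (Lambda, l, u), where Lambda[i] = l[i] + u[i]:
    l[i] sums |a_ik| for k < i, u[i] sums |a_ik| for k > i."""
    n = len(A)
    l = [sum(abs(A[i][k]) for k in range(i)) for i in range(n)]
    u = [sum(abs(A[i][k]) for k in range(i + 1, n)) for i in range(n)]
    return [l[i] + u[i] for i in range(n)], l, u

def comparison_matrix(A):
    """<A>: keep |a_ii| on the diagonal, negate |a_ij| off the diagonal."""
    n = len(A)
    return [[abs(A[i][j]) if i == j else -abs(A[i][j]) for j in range(n)]
            for i in range(n)]
```

For A = [[4, -1, 0], [2, 5, -1], [1, 1, 3]] this gives Λ = (1, 3, 2), l = (0, 2, 2), and u = (1, 1, 0), consistent with Λi(A) = li + ui.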

Definition 1.1 Let . Then A is said to be

(1) a diagonally dominant matrix (d.d.), if |aii| ≥ Λi(A) for each i ∈ 〈n〉;

(2) a strictly diagonally dominant matrix (s.d.d.), if |aii| > Λi(A) for each i ∈ 〈n〉;

(3) a weakly chained diagonally dominant matrix (c.d.d.) (e.g., see [1,2]), if A is a d.d. matrix with β(A) ≠ ∅, where β(A) = {j ∈ 〈n〉 : |ajj| > Λj(A)}, and for each i ∈ 〈n〉 with i ∉ β(A) there exist indices i1,..., ik in 〈n〉 such that a_{i_r i_{r+1}} ≠ 0 for 0 ≤ r ≤ k - 1, where i0 = i and ik ∈ β(A);

(4) a generalized diagonally dominant matrix (g.d.d.), if there is a positive diagonal matrix D such that AD is an s.d.d. matrix.

It is noted that the comparison matrix of a g.d.d. matrix is a nonsingular M-matrix (e.g., see [[1], Lemma 3.2]).
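Conditions (2) and (3) of Definition 1.1 can be verified mechanically: strict dominance is a rowwise check, and the chained condition is reachability of β(A) along nonzero entries. A minimal Python sketch (function names are ours):

```python
from collections import deque

def is_sdd(A):
    """Definition 1.1(2): |a_ii| > Lambda_i(A) for every row i."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][k]) for k in range(n) if k != i)
               for i in range(n))

def is_wcdd(A):
    """Definition 1.1(3): A is d.d., beta(A) is nonempty, and every row
    reaches beta(A) along a chain of nonzero off-diagonal entries."""
    n = len(A)
    Lam = [sum(abs(A[i][k]) for k in range(n) if k != i) for i in range(n)]
    if any(abs(A[i][i]) < Lam[i] for i in range(n)):
        return False                      # not even diagonally dominant
    beta = {i for i in range(n) if abs(A[i][i]) > Lam[i]}
    if not beta:
        return False
    # Breadth-first search: rows i with a_ij != 0 for some already-reached j.
    reached, queue = set(beta), deque(beta)
    while queue:
        j = queue.popleft()
        for i in range(n):
            if i not in reached and i != j and A[i][j] != 0:
                reached.add(i)
                queue.append(i)
    return len(reached) == n
```

For instance, [[1, -1], [-1, 2]] is c.d.d. but not s.d.d.: row 1 is strictly dominant and row 0 chains to it through a nonzero entry.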

The classical bound for the determinant of an s.d.d. matrix A is Ostrowski's inequality [3], i.e., |det A| ≥ ∏_{i∈〈n〉} (|aii| - Λi(A)),

which was improved by Price as follows [4]

(1.1)

The bound (1.1) was further improved by Ostrowski [5] and Yong [6]. In [6] the author obtained the following two-sided bounds for s.d.d. matrices (see [[6], Theorem 2.2])

(1.2)

Inequalities for the determinant can be applied to estimate the spectrum of a matrix, to determine its nonsingularity, etc., and are therefore useful in numerical analysis. Numerical examples show that the bound in (1.2) is not optimal. Motivated by this, in this article we give some sharper bounds than the ones in (1.1) and (1.2). The rest of this article is organized as follows. In Section 2, we use the classical technique to obtain new two-sided bounds; see Theorems 2.5 and 2.5'. In Section 3, we apply the idea of the preconditioning technique to give a new bound for the M-matrix case; see Theorem 3.2. A conclusion is given in the final section.
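Ostrowski's classical lower bound, |det A| ≥ ∏i (|aii| - Λi(A)) for an s.d.d. matrix, is easy to check numerically. A minimal numpy sketch with an illustrative 3 × 3 matrix (the helper name is ours):

```python
import numpy as np

def ostrowski_lower_bound(A):
    """prod_i (|a_ii| - Lambda_i(A)): the classical lower bound on |det A|
    for a strictly diagonally dominant matrix."""
    A = np.asarray(A, dtype=float)
    d = np.abs(np.diag(A))
    Lam = np.abs(A).sum(axis=1) - d        # off-diagonal row sums
    return float(np.prod(d - Lam))

A = [[4.0, -1.0, 0.0],
     [-2.0, 5.0, -1.0],
     [0.0, -1.0, 3.0]]
lb = ostrowski_lower_bound(A)              # (4-1)(5-3)(3-1) = 12
```

Here det A = 50, so the bound 12 holds but is far from tight, which is the gap the sharper bounds of Sections 2 and 3 aim to close.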

### 2 The classical technique

Let α1 and α2 be two subsets of 〈n〉 such that 〈n〉 = α1 ∪ α2 and α1 ∩ α2 = ∅. Let A ∈ Cn×n. By Aij = A[αi|αj] we denote the submatrix of A whose rows are indexed by αi and columns by αj. For simplicity, we write A[αi] instead of A[αi|αi]. If A[α1] is nonsingular, the Schur complement of A[α1] in A is denoted by A/A[α1], i.e., A/A[α1] = A[α2] - A[α2|α1]A[α1]-1A[α1|α2]. By A(k) we denote A(k) = A[α(k)], where α(k) = {k + 1,..., n}.
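The Schur complement underlies the determinant identity det A = det A[α1] · det(A/A[α1]), which drives the recursive bounds of this section. A small numpy sketch (the partition α1 = {1} and the test matrix are illustrative choices of ours):

```python
import numpy as np

def schur_complement(A, idx1):
    """A/A[alpha1] = A22 - A21 A11^{-1} A12, rows/columns split by idx1."""
    A = np.asarray(A, dtype=float)
    idx1 = list(idx1)
    idx2 = [k for k in range(A.shape[0]) if k not in idx1]
    A11 = A[np.ix_(idx1, idx1)]
    A12 = A[np.ix_(idx1, idx2)]
    A21 = A[np.ix_(idx2, idx1)]
    A22 = A[np.ix_(idx2, idx2)]
    return A22 - A21 @ np.linalg.solve(A11, A12)

A = np.array([[4.0, -1.0, 0.0],
              [-2.0, 5.0, -1.0],
              [0.0, -1.0, 3.0]])
S = schur_complement(A, [0])               # Schur complement of A[{1}] in A
lhs = np.linalg.det(A)                     # det A
rhs = A[0, 0] * np.linalg.det(S)           # det A[alpha1] * det(A/A[alpha1])
```

Both sides evaluate to 50, confirming the factorization on this example.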

We define sk(A) as follows:

(2.1)

Alternatively, the recursive Equation (2.1) can be computed by the following lemma, which can be deduced by an argument similar to that in [7].

Lemma 2.1 Let . Then

(2.2)

The following lemma is well-known, e.g., see [1].

Lemma 2.2 Let A be a c.d.d. matrix. Then A is g.d.d., and hence is nonsingular.

Now we partition A into the following block form:

(2.3)

Then it is easy to check that

(2.4)

where .

The following lemma can be found in [8].

Lemma 2.3 Let A = (aij) be a nonsingular d.d. M-matrix, and let . Then

(2.5)

Lemma 2.4 Let A be a c.d.d. matrix. Then

(2.6)

where we define A(0) = A.

Proof. It follows from Lemma 2.2 that A is nonsingular. Let A be as in (2.3) and . Then we have

(2.7)

By (2.4) we have

which together with (2.7) gives that

(2.8)

It follows from (2.5) and (2.8) that

(2.9)

Because A is c.d.d., 〈A〉 is a nonsingular M-matrix, and so is 〈A(1)〉, which implies that A(1) is also a c.d.d. matrix (see [[1], Theorem 3.3]). Applying induction on k to (2.9), one may deduce the desired inequality (2.6).

Remark 2.1 It is expensive to compute the bound (2.6) because one needs to compute all si(A(k-1)), i = k,..., n, for k = 1,..., n. However, we may replace si(A(k-1)) by si(A).

Theorem 2.5 Let A be a c.d.d. matrix. Then

(2.10)

Proof. By (2.1) we have

and hence

Therefore, we have

which together with (2.6) gives the bound (2.10).

Let . Then Rk(A) is given by (e.g., see [9] or [1])

(2.11)

A matrix A is called a Nekrasov matrix ([9] or [1]) if |akk| > Rk(A) for k ∈ 〈n〉. A Nekrasov matrix is a g.d.d. matrix (e.g., see [9]). The bound for the determinant of a Nekrasov matrix is given below (see [10,11]):

(2.12)

However, this bound contains a typo; a counterexample was given in [12]. In the following theorem, we obtain an estimate of the determinant of A by using Ri(A); the proof is analogous to that of Theorem 2.5.
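The recursion (2.11) is not displayed above; the standard Nekrasov row sums (see [9] or [1]) are R1(A) = Λ1(A) and Rk(A) = Σ_{j<k} |akj| Rj(A)/|ajj| + Σ_{j>k} |akj|. Assuming that form, a minimal Python sketch (function names are ours):

```python
def nekrasov_row_sums(A):
    """R_1 = Lambda_1(A); R_k = sum_{j<k} |a_kj| R_j/|a_jj| + sum_{j>k} |a_kj|."""
    n = len(A)
    R = [0.0] * n
    R[0] = sum(abs(A[0][j]) for j in range(1, n))
    for k in range(1, n):
        R[k] = (sum(abs(A[k][j]) * R[j] / abs(A[j][j]) for j in range(k))
                + sum(abs(A[k][j]) for j in range(k + 1, n)))
    return R

def is_nekrasov(A):
    """|a_kk| > R_k(A) for every k."""
    return all(abs(A[k][k]) > r for k, r in enumerate(nekrasov_row_sums(A)))
```

For the tridiagonal matrix [[2, 1, 0], [1, 2, 1], [0, 1, 2]] this yields R = (1, 1.5, 0.75), so the matrix is Nekrasov even though its middle row is only weakly dominant in the sense of Λ2.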

Theorem 2.5' Let A be a c.d.d. matrix. Then

(2.13)

Remark 2.2 Let A = D - L - U. Then the recursive Equations (2.1) and (2.11) for sk(A) and Rk(A) can be computed by (2.2) and the following formula (see [7])

(2.14)

respectively. Hence the two bounds (2.10) and (2.13) are based on the different splittings A = (D - U) - L = (D - L) - U. The following two examples illustrate that neither of these two bounds is always better than the other.

Example 2.1 Let

Then A is an s.d.d. matrix. Applying the bounds (2.10) and (2.13) to this matrix yields

and

respectively, which shows that the bound in (2.13) is better.

Example 2.2 Let

Then A is s.d.d. By (2.10) and (2.13), we have

and

respectively. Hence the bound (2.10) is better.

Remark 2.3 It is noted that the bound in (2.10) (or (2.13)) only provides an alternative estimate for the determinant; it does not improve (1.2) in general. However, Example 2.1 illustrates that the bound in (2.10) can be better. In fact, by (1.2) we obtain

The following example shows that the upper bound in (1.2) can be better than the one in (2.10).

Example 2.3 Let

Then by (2.10) and (1.2) we have

and

respectively.

### 3 The preconditioning technique

It is well known that the preconditioning technique plays an increasingly important role in solving linear systems (e.g., see [13]). In this section, we improve the bound (1.2) based on the idea of preconditioning.

Without loss of generality, we may assume in this section that all diagonal entries of A are equal to 1. Otherwise, we consider the matrix D-1A, where D = diag(a11,..., ann); then det(D-1A) = det D-1 det A. Hence, we assume that

where L and U are strictly lower triangular and strictly upper triangular matrices, respectively.

Let

(3.1)

which was first introduced in [14] for solving linear systems and was further studied by many authors (e.g., see [15-18]). Usually, P is called a preconditioner for solving the linear system Ax = b.

Let B = PA. Then det B = det A and

where and are lower triangular and strictly upper triangular matrices, respectively. The ith diagonal entry of B is given by

(3.2)

If A is an s.d.d. M-matrix, so is B (see [16]). Let A have the block form (2.3). We partition I + S into the following block form
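The preconditioner (3.1) from [14] is commonly taken as P = I + S, where S carries the negated first superdiagonal of A. Assuming that form, a numpy sketch verifying that det P = 1 (so det B = det A) and that the diagonal of B = PA satisfies bii = 1 - a_{i,i+1} a_{i+1,i} (the matrix below is an illustrative unit-diagonal s.d.d. M-matrix of ours):

```python
import numpy as np

def gjs_preconditioner(A):
    """P = I + S with s_{i,i+1} = -a_{i,i+1}: the preconditioner of
    Gunawardena, Jain, and Snyder [14]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    P = np.eye(n)
    for i in range(n - 1):
        P[i, i + 1] = -A[i, i + 1]
    return P

A = np.array([[1.0, -0.3, -0.2],
              [-0.4, 1.0, -0.3],
              [-0.2, -0.1, 1.0]])
P = gjs_preconditioner(A)
B = P @ A          # det B = det A, since P is unit upper triangular
```

Here B[0, 0] = 1 - (-0.3)(-0.4) = 0.88 and B[1, 1] = 1 - (-0.3)(-0.1) = 0.97, matching the diagonal formula (3.2).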

where and . A simple calculation yields that

where

(3.3)

Then

(3.4)

It is easy to see that

(3.5)

By (3.3) we have

and hence from [19] (also see [6]) it follows that

(3.6)

Notice that , which together with (3.4), (3.5), (3.6), and (3.2) gives

(3.7)

By (3.3), B(1) is also the preconditioned matrix of A(1) with the preconditioner I + S(1). In this case, B(1) is also an s.d.d. matrix. Proceeding by induction with (3.7), one may easily deduce the following lemma.

Lemma 3.1 Let A be an s.d.d. M-matrix with unit diagonal entries. Then

(3.8)

By the above argument, we may deduce the following result without the assumption that A has unit diagonal entries as in Lemma 3.1.

Theorem 3.2 Let A be an s.d.d. M-matrix. Then

(3.9)

Remark 3.1 It is noted that the bound (3.9) is always sharper than the one in (1.2). In fact, since ui < |aii| for any i, the upper bound in (3.9) is better than the one in (1.2). For the lower bound, since

the lower bound in (3.9) is better than the one in (1.2), which proves our assertion.

Remark 3.2 Neither of the two bounds in (3.9) and (2.10) is uniformly better than the other. However, the following example illustrates that the upper bound in (3.9) can be better.

Example 3.1 Let

Applying (3.9), (1.2), and (2.10) to estimate the determinant of A, respectively we have

and

### 4 Conclusion

In Sections 2 and 3, we have provided some two-sided bounds for the determinant of a d.d. matrix via both the classical and the preconditioning techniques. Although neither of the two bounds in (1.2) and (2.10) is uniformly better than the other in general, the condition in (2.10) is weaker than the one in (1.2).

When the preconditioning technique is applied to estimate the determinant of an s.d.d. M-matrix, we may obtain a tighter bound. Here, we only present a bound (3.9) for the special preconditioner (3.1) and prove that this bound is sharper than the bound (1.2), which shows that a good preconditioning technique is a powerful tool not only for solving linear systems but also for estimations such as determinant bounds.

### Competing interests

The authors declare that they have no competing interests.

### Acknowledgements

The work was supported in part by National Natural Science Foundation of China (No. 10971075), Research Fund for the Doctoral Program of Higher Education of China (No. 20104407110002) and Guangdong Provincial Natural Science Foundation (No. 9151063101000021), P.R. China.

### References

1. Li, W: On the Nekrasov matrix. Linear Algebra Appl. 281, 87–96 (1998)

2. Shivakumar, PN, Chew, KH: A sufficient condition for nonvanishing of determinants. Proc Am Math Soc. 43, 63–66 (1974)

3. Ostrowski, AM: Sur la détermination des bornes inférieures pour une classe des déterminants. Bull Sci Math. 61(2), 19–32 (1937)

4. Price, GB: Bounds for determinants with dominant principal diagonal. Proc Am Math Soc. 2, 497–502 (1951)

5. Ostrowski, AM: Note on bounds for determinants with dominant principal diagonal. Proc Am Math Soc. 3, 26–30 (1952)

6. Yong, XR: Two properties of diagonally dominant matrices. Numer Linear Algebra Appl. 3(2), 173–177 (1996)

7. Robert, F: Blocs-H-matrices et convergence des méthodes itératives classiques par blocs. Linear Algebra Appl. 2, 223–265 (1969)

8. Li, W: The infinity norm bound for the inverse of nonsingular diagonal dominant matrices. Appl Math Lett. 21, 258–263 (2008)

9. Szulc, T: Some remarks on a theorem of Gudkov. Linear Algebra Appl. 225, 221–235 (1995)

10. Bayley, DW, Crabtree, DE: Bounds for determinants. Linear Algebra Appl. 2, 303–309 (1969)

11. Szulc, T: On bounds for certain determinants. Z Angew Math Mech. 72, 637–640 (1992)

12. Huang, TZ, Xu, CX: A note on the bound for the Bayley-Crabtree determinant of Nekrasov matrices. J Xi'an Jiaotong Univ. 36, 1320 (2002) (In Chinese)

13. Axelsson, O: Iterative Solution Methods. Cambridge University Press, Cambridge (1994)

14. Gunawardena, AD, Jain, SK, Snyder, L: Modified iterative methods for consistent linear systems. Linear Algebra Appl. 154/156, 123–143 (1991)

15. Hadjidimos, A, Noutsos, D, Tzoumas, M: More on modifications and improvements of classical iterative schemes for M-matrices. Linear Algebra Appl. 364, 253–279 (2003)

16. Li, W, Sun, W: Modified Gauss-Seidel type methods and Jacobi type methods. Linear Algebra Appl. 317, 227–247 (2000)

17. Li, W: The convergence of the modified Gauss-Seidel method for consistent linear systems. J Comput Appl Math. 154, 97–105 (2003)

18. Sun, LY: Some extensions of the improved modified Gauss-Seidel iterative method for H-matrices. Numer Linear Algebra Appl. 13, 869–876 (2006)

19. Hu, JG: Estimates for ‖B-1A‖. J Comput Math. 2, 122–149 (1984)