Finite algorithms for the numerical solutions of a class of nonlinear complementarity problems

Abstract

In this paper, we reformulate a nonlinear complementarity problem or a mixed complementarity problem as a system of piecewise almost linear equations. Such problems arise, for example, from obstacle problems with a nonlinear source term or from certain contact problems. Based on the reformulated systems of piecewise almost linear equations, we propose a class of semi-iterative algorithms that find the exact solution of the problems. We prove that the semi-iterative algorithms enjoy a nice monotone convergence property, in the sense that the set of indices whose corresponding components of the iterates violate the constraints shrinks as the iteration proceeds. The algorithms thus converge monotonically to the exact solutions of the problems in a finite number of steps. Numerical experiments are presented to show the effectiveness of the proposed algorithms.

1 Introduction

Let \(F:R^{n}\to R^{n}\) be a given function. The nonlinear complementarity problem, denoted by \(\operatorname{NCP}(F,\phi)\), is to find an \(x\in R^{n}\) such that

$$ x\geq\phi,\qquad F(x)\geq0,\qquad (x-\phi)^{T}F(x)=0. $$
(1.1)

In this paper, we focus on problem (1.1), in which the function F has the form of \(F(x)=Ax+\Psi(x)\), that is, we have the problem of finding an \(x\in R^{n}\) such that

$$ x\geq\phi,\qquad Ax+\Psi(x)\geq0,\qquad (x-\phi)^{T}\bigl[Ax+ \Psi(x)\bigr]=0, $$
(1.2)

where \(A=(a_{ij})\in R^{n\times n}\) is a given matrix, \(\phi=(\phi _{i})\in R^{n}\) is a given vector, and \(\Psi:R^{n}\to R^{n}\) is a given diagonal differentiable mapping, that is, the ith component \(\Psi_{i}\) of \(\Psi(x)\) is a function of the ith variable \(x_{i}\) only:

$$\Psi_{i}=\Psi_{i}(x_{i}), \quad i=1,2,\ldots,n. $$

We denote the above problem by \(\operatorname{ALCP}(A,\Psi,\phi)\) and call it an almost linear complementarity problem (see, e.g., [1]). Obviously, if Ψ is a linear function, \(\operatorname{ALCP}(A,\Psi,\phi)\) reduces to a linear complementarity problem.

\(\operatorname{ALCP}(A,\Psi,\phi)\) has many applications, especially in engineering. For instance, it can be derived from discretizations of the Bratu obstacle problem [2], which models nonlinear diffusion phenomena arising in combustion and in semiconductors, and of some free boundary problems with nonlinear source terms, which model diffusion problems involving Michaelis-Menten or second-order irreversible reactions [3, 4].

In the literature, one classical approach to solving (1.1) consists of the linearized projected relaxation methods [5, 6]. They are known to be convergent and easy to implement. However, their convergence rates depend crucially on the choice of the relaxation parameter and deteriorate heavily under mesh refinement. To improve efficiency, multigrid methods (see, e.g., [7–9]) and domain decomposition methods (see, e.g., [10–16]) were proposed later; in these methods the subproblems are generally linear complementarity problems, so preconditioners widely used for linear systems cannot be applied directly. Another efficient way to solve problem (1.1) is given by active set strategies (see, e.g., [17–19]). To solve the linear complementarity problem, the basic iteration of an active set strategy consists of two steps. First, based on a certain active set method, the (mesh) domain is decomposed into active and inactive parts. Then a reduced linear system associated with the inactive set can be solved by a fast linear solver such as a multigrid method or the preconditioned conjugate gradient method. For instance, two approaches were introduced in [17] and [18] for the parabolic and the elliptic case, respectively, where a Lagrange multiplier strategy is used to express the problem as a higher-dimensional standard equality problem. In particular, in [18] such a strategy is combined with a semi-iterative procedure based on a suitable successive update of the coincidence set (that is, the region where the solution touches the obstacle), while in [17] the solution of the parabolic variational inequality is obtained as the limit of the solutions of a family of appropriately regularized nonlinear parabolic equations. Alternatively, inexact semismooth Newton methods have been developed to solve problem (1.1) based on its semismooth reformulation [20–23].
These algorithms are attractive because they converge rapidly from any sufficiently good initial iterate and the subproblems are also systems of equations.

In this paper, we propose semi-iterative algorithms for the numerical treatment of an almost linear unilateral obstacle problem or mixed complementarity problem. The algorithms are obtained by applying piecewise almost linear systems to the problem, which can be considered an extension of the piecewise linear systems used for the affine lower obstacle problem [24–26]. The algorithms do not require Lagrange multipliers, and the subproblems involved are systems of almost linear equations whose dimensions are at most n, the dimension of the original problem. We prove that the proposed algorithms enjoy a nice global monotone convergence property, in the sense that the set of indices whose corresponding components of the iterates violate the constraints shrinks as the iteration proceeds. The iterates then converge to the exact solution in a finite number of steps. The numerical examples presented indicate the effectiveness of the proposed algorithms.

The rest of this paper is organized as follows. In Section 2 we investigate the classical obstacle problem and reformulate the problem via a piecewise almost linear system (PALS). Based on the PALS given in Section 2, we propose a semi-iterative algorithm and discuss its monotone and finite convergence in Section 3. In Section 4, semi-iterative algorithms are proposed for solving upper obstacle problems and mixed complementarity problems, respectively, via their PALS reformulations, and the monotone and finite convergence of the algorithms are also obtained. In Section 5 we present some numerical examples to investigate the efficiency of the algorithms. Finally, in Section 6, we give a few conclusions.

2 Some preliminaries

For any given vector \(x\in R^{n}\), let the diagonal function \(P_{\phi}(x)\) be defined as

$$ P_{\phi}(x)=\operatorname{diag}\bigl(p_{\phi_{1}}(x_{1}),p_{\phi_{2}}(x_{2}), \ldots, p_{\phi_{n}}(x_{n})\bigr) $$
(2.1)

with

$$p_{\phi_{i}}(x_{i})=\textstyle\begin{cases} 1,& \mbox{if } x_{i}\geq\phi_{i},\\ 0, & \mbox{if } x_{i}< \phi_{i}, \end{cases}\displaystyle \quad i=1,2,\ldots, n. $$

It is easy to see that the following result holds [24].

Lemma 2.1

$$P_{\phi}(x) (x-\phi)=\max\{x,\phi\}-\phi \quad\textit{and}\quad \bigl[I-P_{\phi}(x)\bigr](x-\phi)=\min\{x,\phi\}-\phi. $$
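Lemma 2.1 is a componentwise identity and is easy to check numerically. The following is a minimal sketch (NumPy assumed; the test vectors are our own illustrative choices, not from the paper):

```python
import numpy as np

def P_phi(x, phi):
    """Diagonal matrix P_phi(x) from (2.1): 1 where x_i >= phi_i, else 0."""
    return np.diag((x >= phi).astype(float))

rng = np.random.default_rng(0)
x, phi = rng.standard_normal(6), rng.standard_normal(6)
P = P_phi(x, phi)
I = np.eye(6)

# P_phi(x)(x - phi) = max{x, phi} - phi
assert np.allclose(P @ (x - phi), np.maximum(x, phi) - phi)
# (I - P_phi(x))(x - phi) = min{x, phi} - phi
assert np.allclose((I - P) @ (x - phi), np.minimum(x, phi) - phi)
```

Both identities hold componentwise: the ith row of \(P_{\phi}(x)\) keeps \(x_i-\phi_i\) exactly when \(x_i\geq\phi_i\), and zeroes it otherwise.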

According to Lemma 2.1, we have the following conclusion.

Lemma 2.2

Let x be the solution of the following nonsmooth equations:

$$ \bigl(I-P_{\phi}(x)+AP_{\phi}(x)\bigr) (x-\phi)+ \Psi\bigl(P_{\phi}(x) (x-\phi)+\phi \bigr)+A\phi=0. $$
(2.2)

Let \(y=P_{\phi}(x)(x-\phi)+\phi=\max\{x,\phi\}\). Then y is the solution of the almost linear complementarity problem (1.2).

Proof

It is easy to check by Lemma 2.1 that

$$\begin{aligned}& \bigl(I-P_{\phi}(x)+AP_{\phi}(x)\bigr) (x-\phi)+ \Psi\bigl(P_{\phi}(x) (x-\phi)+\phi \bigr)+A\phi \\& \quad= \min\{x,\phi\}-\phi+A\bigl(\max\{x,\phi\}-\phi\bigr)+\Psi\bigl(\max\{x, \phi\}\bigr)+A\phi \\& \quad= \min\{x,\phi\}-\phi+Ay+\Psi(y). \end{aligned}$$
(2.3)

By (2.3), x solves problem (2.2) if and only if \(y=\max\{x,\phi\}\) satisfies

$$Ay+\Psi(y)=\phi-\min\{x,\phi\}=-\min\{x-\phi,0\}, $$

which implies

$$y\geq\phi,\qquad Ay+\Psi(y)\geq0, $$

and

$$(y-\phi)^{T}\bigl(Ay+\Psi(y)\bigr)=-\bigl[\max\{x-\phi,0\} \bigr]^{T}\min\{x-\phi,0\}=0. $$

That is, y is the solution of problem (1.2). □

Remark 2.1

For the linear complementarity problem

$$ x\geq\phi,\qquad Ax-b\geq0,\qquad (x-\phi)^{T}(Ax-b)=0, $$
(2.4)

problem (2.2) reduces to the following piecewise linear system (PLS):

$$\bigl[I-P_{\phi}(x)+AP_{\phi}(x)\bigr](x-\phi)=b-A\phi. $$

3 A semi-iterative algorithm for obstacle problem (1.2)

It is to be noted that the left-hand side of system (2.2) is not everywhere differentiable even for smooth Ψ. Nevertheless, a semi-iterative algorithm for solving system (2.2) can be constructed as follows:

Algorithm 3.1

Let \(P^{0}=O\). Set \(k:=0\).

Step 1: Find \(x^{k+1}\) such that

$$ \bigl(I-P^{k}+AP^{k}\bigr) \bigl(x^{k+1}-\phi\bigr)+\Psi\bigl(P^{k}\bigl(x^{k+1}- \phi\bigr)+\phi\bigr)+A\phi=0 $$
(3.1)

with

$$ P^{k}=P_{\phi}\bigl(x^{k}\bigr),\quad k=1,2,\ldots, $$
(3.2)

where \(P_{\phi}\) is defined by (2.1).

Step 2: If \((P^{k+1}-P^{k})(x^{k+1}-\phi)=0\), let \(y=P^{k}(x^{k+1}-\phi)+\phi=\max\{x^{k+1},\phi\}\) and stop. Otherwise, go to Step 3.

Step 3: Set \(k:=k+1\) and go to Step 1.

Remark 3.1

In Algorithm 3.1, one needs to solve a system of nonlinear equations (3.1) at each iteration. Since the nonlinearity of \(F(x)=Ax+\Psi(x)\) occurs only in the diagonal function Ψ, (3.1) is a system of almost linear equations (see, e.g., [1]). In particular, when \(F(x)=Ax-b\), Algorithm 3.1 reduces to the semi-iterative Newton-type algorithm for the solution of the linear complementarity problem (2.4), in which only a system of linear equations needs to be solved at each iteration. We refer to [24] for more details.

Remark 3.2

Let

$$I^{k}=\bigl\{ i:p_{i}^{k}=1\bigr\} \quad\mbox{and}\quad J^{k}=\bigl\{ i:p_{i}^{k}=0\bigr\} . $$

Equation (3.1) can be rewritten as

$$\textstyle\begin{cases} A_{I^{k}I^{k}}x_{I^{k}}^{k+1}+\Psi_{I^{k}}(x_{I^{k}}^{k+1})=-A_{I^{k}J^{k}}\phi _{J^{k}},\\ x_{J^{k}}^{k+1}= \phi_{J^{k}}-A_{J^{k}I^{k}}x_{I^{k}}^{k+1}-A_{J^{k}J^{k}}\phi _{J^{k}}-\Psi_{J^{k}}(\phi_{J^{k}}), \end{cases} $$

where \(A_{IJ}\) denotes the submatrix of A consisting of \(a_{ij}\) with \(i\in I\), \(j\in J\), and \(x_{I}\) denotes the subvector of x consisting of \(x_{i}\) with \(i\in I\). Consequently, the main work in solving (3.1) is solving a system of almost linear equations of dimension \(n_{k}=\operatorname{dim}(I^{k})\leq n\).
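This block form leads directly to an implementation. Below is a minimal NumPy sketch of Algorithm 3.1 organized around the reduction above, with a plain Newton iteration as the inner almost-linear solver; the function names, tolerances, and the choice of inner solver are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def semi_iterative(A, phi, psi, dpsi, max_outer=50, tol=1e-10):
    """Sketch of Algorithm 3.1 via the block form of Remark 3.2.

    psi and dpsi evaluate the diagonal mapping Psi and its derivative
    componentwise (vectorized over a 1-D array).
    """
    n = len(phi)
    p = np.zeros(n, dtype=bool)           # diagonal of P^0 = O
    for _ in range(max_outer):
        Ik = np.where(p)[0]
        Jk = np.where(~p)[0]
        # Inner solve: A_II x_I + Psi_I(x_I) = -A_IJ phi_J  (plain Newton)
        xI = np.array(phi[Ik], dtype=float)
        rhs = -A[np.ix_(Ik, Jk)] @ phi[Jk]
        for _ in range(100):
            r = A[np.ix_(Ik, Ik)] @ xI + psi(xI) - rhs
            if np.linalg.norm(r) <= tol:
                break
            Jac = A[np.ix_(Ik, Ik)] + np.diag(dpsi(xI))
            xI = xI - np.linalg.solve(Jac, r)
        x = np.empty(n)
        x[Ik] = xI
        # Explicit update of the J^k components
        x[Jk] = (phi[Jk] - A[np.ix_(Jk, Ik)] @ xI
                 - A[np.ix_(Jk, Jk)] @ phi[Jk] - psi(phi[Jk]))
        p_new = x >= phi                  # diagonal of P^{k+1} = P_phi(x^{k+1})
        if np.array_equal(p_new, p):      # then (P^{k+1}-P^k)(x^{k+1}-phi) = 0
            break
        p = p_new
    return np.maximum(x, phi)             # y = P^k(x^{k+1}-phi) + phi
```

Here the stopping test \((P^{k+1}-P^{k})(x^{k+1}-\phi)=0\) is implemented as the slightly stronger check \(P^{k+1}=P^{k}\), which is sufficient by Lemma 3.3 and the monotonicity of \(\{P^{k}\}\).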

In order to prove the convergence of Algorithm 3.1, some preliminary results are needed first (see, e.g., [24, 27]).

Lemma 3.1

Let A be an M-matrix. Then, for any diagonal matrix P, whose diagonals are zeros or ones, the two matrices \(I- P + AP\) and \(I-P + PA\) are M-matrices, and therefore, \((I- P + AP)^{-1}\geq0\) and \((I- P + PA)^{-1}\geq0\). Furthermore, for any nonnegative diagonal matrix Γ, the matrices \(I- P + AP+\Gamma\) and \(I-P + PA+\Gamma\) are M-matrices, and then \((I- P + AP+\Gamma)^{-1}\geq0\) and \((I- P + PA+\Gamma)^{-1}\geq0\).
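The nonnegativity of these inverses is easy to illustrate numerically. A small sketch (the tridiagonal test matrix is our own choice, not from the paper) checks every 0/1 diagonal pattern:

```python
import numpy as np
from itertools import product

# A small tridiagonal M-matrix (an illustrative choice, not from the paper)
A = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
I = np.eye(3)

# Check (I - P + AP)^{-1} >= 0 and (I - P + PA)^{-1} >= 0 for every
# diagonal matrix P with zero/one diagonal entries, as Lemma 3.1 asserts.
for diag in product([0., 1.], repeat=3):
    P = np.diag(diag)
    for M in (I - P + A @ P, I - P + P @ A):
        assert np.all(np.linalg.inv(M) >= -1e-12)
```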

Define

$$\mathcal{P}=\bigl\{ P\in R^{n\times n}: P\mbox{ is a diagonal matrix whose diagonal entries are zeros or ones}\bigr\} $$

and

$$\bar{\mathcal{S}}_{P}=\bigl\{ x\in R^{n}: T_{P}(x) \geq0\bigr\} , $$

where

$$T_{P}(x)=(I-P+AP) (x-\phi)+\Psi\bigl(P(x-\phi)+\phi\bigr)+A\phi. $$

Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Then, for any \(P\in\mathcal{P}\), \(T_{P}\) is an M-function. In fact, for any \(x\in R^{n}\) and \(y\in R^{n}\), if \(T_{P}(x)\geq T_{P}(y)\), we have

$$\begin{aligned} T_{P}(x)-T_{P}(y) =&(I-P+AP) (x-y)+\Psi\bigl(P(x-\phi)+\phi\bigr)-\Psi\bigl(P(y-\phi)+\phi \bigr) \\ =& \bigl(I-P+AP+\Gamma_{P}(x,y)\bigr) (x-y) \\ \geq&0, \end{aligned}$$
(3.3)

where

$$ \Gamma_{P}(x,y)=\int_{0}^{1} \nabla\Psi\bigl(P\bigl(x+t(y-x)\bigr)+(I-P)\phi \bigr)\,dtP $$
(3.4)

is a diagonal matrix with nonnegative diagonals. By Lemma 3.1, \((I-P+AP+\Gamma_{P}(x,y))^{-1}\geq0\), and hence by (3.3), we have \(x\geq y\), which implies that \(T_{P}\) is inverse isotone. On the other hand, for every pair of indices \(i\neq j\) and for every \(x\in R^{n}\), noting that \(A^{P}:=I-P+AP=(a^{P}_{ij})\) is an M-matrix, we have \(a^{P}_{ij}\leq0\). Therefore, the one-dimensional function \(g_{ij}\) in the variable t has the form

$$g_{ij}(t)=\bigl(T_{P}\bigl(x^{t,j}\bigr) \bigr)_{i}=\sum_{l\neq j}a^{P}_{il}(x_{l}- \phi _{l})+a^{P}_{ij}(t-\phi_{j})+ \Psi_{i}\bigl(p_{i}(x_{i}-\phi_{i})+ \phi_{i}\bigr)+\sum_{l=1}^{n}a_{il} \phi_{l} , $$

where \(x^{t,j}=(x_{1},\ldots, x_{j-1},t,x_{j+1},\ldots, x_{n})\) and \(P=\operatorname {diag}(p_{1},p_{2},\ldots,p_{n})\). It is clear that \(g_{ij}\) is nonincreasing, which implies that \(T_{P}\) is off-diagonally antitone. Therefore, inverse isotonicity together with off-diagonal antitonicity implies that \(T_{P}\) is an M-function [5].

For any \(P\in\mathcal{P}\) and \(x\in \bar{\mathcal{S}} _{P}\), we have

$$\begin{aligned} 0 \leq& T_{P}(x)=(I-P+AP) (x-\phi)+\Psi\bigl(P(x-\phi)+\phi\bigr)+A \phi \\ =&\bigl(I-P+AP+\Gamma_{P}(x,\phi)\bigr) (x-\phi)+\Psi(\phi)+A\phi, \end{aligned}$$

where \(\Gamma_{P}(x,\phi)\) is defined as (3.4). Then,

$$\bigl(I-P+AP+\Gamma_{P}(x,\phi)\bigr) (x-\phi)\geq-\Psi(\phi)-A\phi \geq\min\bigl\{ -\Psi (\phi)-A\phi,0\bigr\} . $$

Noting that \((I-P+AP)^{-1}\geq(I-P+AP+\Gamma_{P}(x,\phi))^{-1}\geq0\) and \(\min\{-\Psi(\phi)-A\phi,0\}\leq0\), we have

$$\begin{aligned} x \geq& \phi+ \bigl(I-P+AP+\Gamma_{P}(x,\phi)\bigr)^{-1}\min \bigl\{ -\Psi(\phi)-A\phi ,0\bigr\} \\ \geq& \phi+ (I-P+AP)^{-1}\min\bigl\{ -\Psi(\phi)-A\phi,0\bigr\} . \end{aligned}$$

That is, \(\bar{\mathcal{ S}} _{P}\) is bounded below. On the other hand, let \(x= \phi+ (I-P+AP)^{-1}\max\{ -\Psi(\phi)-A\phi,0\}\). It is easy to check \(x\in \bar{\mathcal{S}} _{P}\). In fact,

$$\begin{aligned} T_{P}(x) =& \max\bigl\{ -\Psi(\phi)-A\phi,0\bigr\} +\Psi \bigl(P(I-P+AP)^{-1}\max\bigl\{ -\Psi (\phi)-A\phi,0\bigr\} +\phi\bigr)+A \phi \\ \geq& \max\bigl\{ -\Psi(\phi)-A\phi,0\bigr\} +\Psi(\phi)+A\phi \geq0, \end{aligned}$$

by the monotonicity of Ψ and the relation \(P(I-P+AP)^{-1}\max\{ -\Psi(\phi)-A\phi,0\}+\phi\geq\phi\). Therefore, \(\bar{\mathcal{S}} _{P}\) is nonempty and bounded below, and hence it has a minimal element. In fact, \(\bar{\mathcal{S}}_{P}\) has a unique minimal element \(x^{*}_{P}\), which solves \(T_{P}(x)=0\), i.e., \(T_{P}(x^{*}_{P})=0\) (see, e.g., [12] and the references therein). According to the above discussion, we immediately have the following result.

Lemma 3.2

Let A be an M-matrix, and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Then Algorithm 3.1 is well defined.

The following two lemmas are crucial for the convergence of Algorithm 3.1.

Lemma 3.3

Let \(\{P^{k}\}\) be defined by (3.2). Then

$$ \bigl(P^{k+1}-P^{k}\bigr) \bigl(x^{k+1}- \phi\bigr)\geq0,\quad k=0,1,2,\ldots. $$
(3.5)

Moreover, let \(\{x^{k}\}\) be generated by Algorithm 3.1. Then, if the equality in (3.5) holds, \(x^{k+1}\) is the solution of problem (2.2).

Proof

Inequality (3.5) follows directly from the definitions (2.1) and (3.2). If for some k,

$$\bigl(P^{k+1}-P^{k}\bigr) \bigl(x^{k+1}-\phi\bigr)= 0, $$

then, by (3.1),

$$\begin{aligned}& \bigl(I-P^{k+1}+AP^{k+1}\bigr) \bigl(x^{k+1}-\phi\bigr)+ \Psi\bigl(P^{k+1}\bigl(x^{k+1}-\phi\bigr)+\phi \bigr)+A\phi \\& \quad= \bigl(I-P^{k}+AP^{k}\bigr) \bigl(x^{k+1}-\phi \bigr)+\Psi\bigl(P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi\bigr)+A\phi, \\& \quad= 0, \end{aligned}$$

which implies that \(x^{k+1}\) is the solution of problem (2.2). □

Lemma 3.4

Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 3.1. Then

$$ P^{k}\bigl(x^{k+1}-\phi\bigr)\geq P^{k-1}\bigl(x^{k}-\phi\bigr)\geq\cdots\geq P^{0} \bigl(x^{1}-\phi\bigr)=0 $$
(3.6)

and

$$ P^{k+1}\geq P^{k}\geq O,\quad k=0,1,2,\ldots. $$
(3.7)

Proof

By (3.1), we have

$$\begin{aligned} 0 =&\bigl(I-P^{k}+AP^{k}\bigr) \bigl(x^{k+1}-\phi \bigr)+\Psi\bigl(P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi\bigr)+A\phi \\ =&\bigl(I-P^{k-1}+AP^{k-1}\bigr) \bigl(x^{k}-\phi \bigr)+\Psi\bigl(P^{k-1}\bigl(x^{k}-\phi\bigr)+\phi\bigr)+A \phi. \end{aligned}$$

Therefore,

$$\begin{aligned}& \bigl(I-P^{k}+P^{k}A\bigr)P^{k} \bigl(x^{k+1}-\phi\bigr)+\Psi\bigl(P^{k}\bigl(x^{k+1}- \phi\bigr)+\phi\bigr) \\& \quad= P^{k}\bigl[\bigl(I-P^{k}+AP^{k}\bigr) \bigl(x^{k+1}-\phi\bigr)+\Psi\bigl(P^{k}\bigl(x^{k+1}- \phi\bigr)+\phi\bigr)\bigr] \\& \quad\quad{}+\bigl(I-P^{k}\bigr)\Psi\bigl(P^{k} \bigl(x^{k+1}-\phi\bigr)+\phi\bigr) \\& \quad= P^{k}\bigl[\bigl(I-P^{k-1}+AP^{k-1}\bigr) \bigl(x^{k}-\phi\bigr)+\Psi\bigl(P^{k-1}\bigl(x^{k}- \phi\bigr)+\phi \bigr)\bigr] \\& \quad\quad{}+\bigl(I-P^{k}\bigr)\Psi\bigl(P^{k} \bigl(x^{k+1}-\phi\bigr)+\phi\bigr) \\& \quad= \bigl(I-P^{k}+P^{k}A\bigr)P^{k-1} \bigl(x^{k}-\phi\bigr)+\Psi\bigl(P^{k-1}\bigl(x^{k}- \phi\bigr)+\phi\bigr) \\& \quad\quad{} + \bigl(P^{k}-P^{k-1}\bigr) \bigl(x^{k}-\phi\bigr) \\& \quad\quad{}+\bigl(I-P^{k}\bigr)\bigl[\Psi\bigl(P^{k} \bigl(x^{k+1}-\phi\bigr)+\phi\bigr)-\Psi\bigl(P^{k-1} \bigl(x^{k}-\phi\bigr)+\phi \bigr)\bigr]. \end{aligned}$$
(3.8)

We note that if \(p_{i}^{k}\neq1\), then \(x_{i}^{k}< \phi_{i}\) and \(p_{i}^{k}=0\). It follows for \(p_{i}^{k}\neq1\) that

$$p_{i}^{k}\bigl(x_{i}^{k+1}- \phi_{i}\bigr)+\phi_{i}=\phi_{i} \quad\mbox{and}\quad p_{i}^{k-1}\bigl(x_{i}^{k}- \phi_{i}\bigr)+\phi_{i}\leq\phi_{i}. $$

By the use of the monotonicity of \(\Psi_{i}\), we have immediately

$$\Psi_{i}\bigl(p_{i}^{k}\bigl(x_{i}^{k+1}- \phi_{i}\bigr)+\phi_{i}\bigr)-\Psi_{i} \bigl(p_{i}^{k-1}\bigl(x_{i}^{k}-\phi _{i}\bigr)+\phi_{i}\bigr)\geq0, \quad\mbox{if } p_{i}^{k}\neq1, $$

which implies

$$ \bigl(I-P^{k}\bigr)\bigl[\Psi\bigl(P^{k} \bigl(x^{k+1}-\phi\bigr)+\phi\bigr)-\Psi\bigl(P^{k-1} \bigl(x^{k}-\phi\bigr)+\phi \bigr)\bigr]\geq0. $$
(3.9)

Equation (3.8) together with (3.5) and (3.9) leads to

$$\begin{aligned}& \bigl(I-P^{k}+P^{k}A\bigr) \bigl[P^{k}\bigl(x^{k+1}-\phi\bigr)-P^{k-1} \bigl(x^{k}-\phi\bigr)\bigr] \\& \quad\quad{}+\Psi\bigl(P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi \bigr)-\Psi\bigl(P^{k-1}\bigl(x^{k}-\phi\bigr)+\phi\bigr) \\& \quad= \bigl(I-P^{k}+P^{k}A+\Gamma^{k}\bigr) \bigl[P^{k}\bigl(x^{k+1}-\phi\bigr)-P^{k-1} \bigl(x^{k}-\phi\bigr)\bigr] \\& \quad\geq 0, \end{aligned}$$
(3.10)

where

$$\Gamma^{k}=\int_{0}^{1}\nabla\Psi \bigl(P^{k-1}\bigl(x^{k}-\phi\bigr)+\phi+t \bigl[P^{k}\bigl(x^{k+1}-\phi\bigr)-P^{k-1} \bigl(x^{k}-\phi\bigr)\bigr] \bigr)\,dt $$

is a diagonal matrix with nonnegative diagonals. By Lemma 3.1, \((I-P^{k}+P^{k}A+\Gamma^{k})^{-1}\geq0\), and hence by (3.10), we get \(P^{k}(x^{k+1}-\phi)-P^{k-1}(x^{k}-\phi)\geq0\), that is, (3.6) holds.

If \(p_{i}^{k}=1\), then \(x_{i}^{k}\ge\phi_{i}\), and by (3.6), we have

$$x_{i}^{k+1}-\phi_{i}=p_{i}^{k} \bigl(x_{i}^{k+1}-\phi_{i}\bigr)\geq p_{i}^{k-1}\bigl(x_{i}^{k}-\phi _{i}\bigr)\geq0. $$

This implies \(x_{i}^{k+1}\geq\phi_{i}\) and hence \(p_{i}^{k+1}=1=p_{i}^{k}\). If \(p_{i}^{k}=0\), we immediately have \(p_{i}^{k+1}\geq0=p_{i}^{k}\). Therefore, we conclude (3.7). □

Theorem 3.1

Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Then Algorithm 3.1 is well defined and stops at the solution of problem (1.2) in a finite number of iterations.

Proof

Lemma 3.2 implies that the algorithm is well defined. By Lemma 3.4 and Algorithm 3.1, we have \(I\geq P^{k+1}\geq P^{k}\geq\cdots\geq P^{0}=O\). Since each strict increase of \(P^{k}\) adds at least one diagonal entry equal to one, we have \(P^{k+1}=P^{k}\) for some \(k\leq n\). In this case, \((P^{k+1}-P^{k})(x^{k+1}-\phi)=0\). By Lemma 3.3, Algorithm 3.1 obtains the solution of the problem within at most n steps. □

Theorem 3.1 indicates that Algorithm 3.1 obtains the solution of problem (1.2) within at most n steps. Nevertheless, this upper bound n may be large when the dimension of the system is large. However, the numerical tests presented in Section 5 show that convergence is typically reached in just a few iterations.

Define

$$\underline{\mathcal{S}} =\bigl\{ x:\min\bigl\{ x-\phi,Ax+\Psi(x)\bigr\} \leq0\bigr\} . $$

Then, if \(T(x):=Ax+\Psi(x)\) is an M-function, the unique solution of problem (1.2) is the maximal element of \(\underline{\mathcal{S}}\) (see, e.g., [12]). The following conclusion indicates that Algorithm 3.1 has a monotone convergence property.

Theorem 3.2

Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots,n\)) be monotonically nondecreasing functions. Let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 3.1. Assume

$$y^{k+1}=P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi,\quad k=0,1,2, \ldots. $$

Then \(\{y^{k+1}\}\subset \underline{\mathcal{S}}\) and

$$y^{k+1}\geq y^{k}\geq\cdots\geq y^{1}. $$

That is to say, \(\{y^{k}\}\) is in \(\underline{\mathcal{S}}\) and converges to the solution of problem (1.2) monotonically.

Proof

The monotone convergence follows directly from (3.6). If \(y^{k+1}_{i}>\phi_{i}\), then \(p^{k}_{i}=1\), and by (3.1), we have

$$\begin{aligned}& \bigl[Ay^{k+1}+\Psi\bigl(y^{k+1}\bigr)\bigr]_{i} \\& \quad= \bigl[\bigl(I-P^{k}+AP^{k}\bigr) \bigl(x^{k+1}-\phi \bigr)+\Psi\bigl(P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi\bigr)+A\phi +\bigl(P^{k}-I\bigr) \bigl(x^{k+1}-\phi\bigr)\bigr]_{i} = 0, \end{aligned}$$

which implies \(\min\{y^{k+1}-\phi,Ay^{k+1}+\Psi(y^{k+1})\}\leq0 \) and \(y^{k+1}\in \underline{\mathcal{S}}\). □

In Theorem 3.1, we assume that A is an M-matrix. This assumption can be replaced by the following conditions:

$$ \operatorname{null}\bigl(A^{T}\bigr)\equiv\operatorname{span}(v),\qquad \operatorname{null}(A)\equiv\operatorname{span}(w), \quad\mbox{and}\quad A+\Gamma\mbox{ is an }M\mbox{-matrix} $$
(3.11)

for some \(v>0\) and \(w>0\) (componentwise), and for any diagonal matrix Γ satisfying \(\Gamma\geq0\) and \(\Gamma\neq O\). This case may occur for elliptic problems with Neumann boundary conditions. Assume further that \(v^{T}\Psi(\phi)\geq0\). Similar to Lemma 3.1, for any nonnegative diagonal matrix Γ and any diagonal matrix \(P\neq I\) whose diagonal entries are zeros or ones, the matrices \(I- P + AP+\Gamma\) and \(I-P + PA+\Gamma\) are M-matrices, which implies \((I- P + AP+\Gamma)^{-1}\geq0\) and \((I- P + PA+\Gamma)^{-1}\geq0\) (see, e.g., [24]). According to the proof of Lemma 3.4, for \(P^{k}\neq I\), problem (3.1) is well defined and (3.6) and (3.7) hold. If \(P^{k}=I\neq P^{k-1}\), we have \(x^{k}\geq\phi\) and then \(P^{k-1}(x^{k}-\phi)+\phi\geq\phi\). By (3.1), we have

$$\bigl(I-P^{k-1}+AP^{k-1}\bigr) \bigl(x^{k}-\phi \bigr)+\Psi\bigl(P^{k-1}\bigl(x^{k}-\phi\bigr)+\phi\bigr)+A \phi=0. $$

Noting that

$$\bigl(P^{k}-P^{k-1}\bigr) \bigl(x^{k}-\phi\bigr)= \bigl(I-P^{k-1}\bigr) \bigl(x^{k}-\phi\bigr)\geq0, $$

for positive vector v given in (3.11), we have

$$\begin{aligned} 0 \leq&v^{T}\bigl(P^{k}-P^{k-1}\bigr) \bigl(x^{k}-\phi\bigr) \\ =&v^{T}\bigl(I-P^{k-1}\bigr) \bigl(x^{k}-\phi \bigr) \\ =&v^{T}\bigl(I-P^{k-1}+AP^{k-1}\bigr) \bigl(x^{k}-\phi\bigr) \\ =&v^{T}\bigl( -A\phi-\Psi\bigl(P^{k-1}\bigl(x^{k}- \phi\bigr)+\phi\bigr) \bigr) \\ \leq& -v^{T}\Psi(\phi) \\ \leq& 0, \end{aligned}$$

which implies \((P^{k}-P^{k-1})(x^{k}-\phi)=0\), where the second inequality follows from \(v>0\), \(\operatorname{null}(A^{T})\equiv\operatorname{span}(v)\), and the monotonicity of Ψ. Therefore, Algorithm 3.1 stops at the kth iteration if \(P^{k}=I\). Consequently, we have the following convergence result.

Theorem 3.3

Let A satisfy (3.11) and \(\Psi_{i}\) (\(i=1,2,\ldots,n\)) be monotonically nondecreasing functions with \(v^{T}\Psi(\phi)\geq0\). Then Algorithm 3.1 is well defined. Let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 3.1, and let

$$y^{k+1}=P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi,\quad k=0,1,2, \ldots. $$

Then \(\{y^{k}\} \) is in \(\underline{\mathcal{S}}\) and converges monotonically to the solution of problem (1.2) in at most n iterations.

The above theorem also implies the existence of a solution of problem (1.2) under the conditions of Theorem 3.3.

4 Some extensions

In this section, we make some extensions. First, let us consider the upper obstacle problem of finding an \(x\in R^{n}\) such that

$$ x\leq\phi,\qquad Ax+\Psi(x)\leq0,\qquad (x-\phi)^{T}\bigl(Ax+\Psi(x) \bigr)=0, $$
(4.1)

where \(A=(a_{ij})\) is a given matrix, \(\phi=(\phi_{i})\in R^{n}\) is a given vector, and \(\Psi:R^{n}\to R^{n}\) is a given diagonal mapping defined as before.

For any given vector \(x\in R^{n}\), let the diagonal function \(P_{\phi}(x)\) be defined as

$$ P_{\phi}(x)=\operatorname{diag}\bigl(p_{\phi_{1}}(x_{1}),p_{\phi_{2}}(x_{2}), \ldots, p_{\phi_{n}}(x_{n})\bigr) $$
(4.2)

with

$$p_{\phi_{i}}(x_{i})=\textstyle\begin{cases} 1,& \mbox{if } x_{i}\leq\phi_{i},\\ 0, & \mbox{if } x_{i}>\phi_{i}. \end{cases} $$

Similar to Lemma 2.2, we have the following lemma.

Lemma 4.1

Let x be the solution of the following nonsmooth equations:

$$\bigl[I-P_{\phi}(x)+AP_{\phi}(x)\bigr](x-\phi)+\Psi \bigl(P_{\phi}(x) (x-\phi)+\phi \bigr)+A\phi=0, $$

where \(P_{\phi}(x)\) is defined by (4.2). Then \(y=P_{\phi}(x)(x-\phi)+\phi=\min\{x,\phi\}\) is the solution of the almost linear upper obstacle problem (4.1).

According to Lemma 4.1, a semi-iterative algorithm for solving upper obstacle problem (4.1) can be constructed as follows.

Algorithm 4.1

Let \(P^{0}=O\). Set \(k:=0\).

Step 1: Find \(x^{k+1}\) such that

$$\bigl(I-P^{k}+AP^{k}\bigr) \bigl(x^{k+1}-\phi\bigr)+ \Psi\bigl(P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi\bigr)+A\phi=0, $$

where \(P^{k}=P_{\phi}(x^{k}) \) is defined by (4.2).

Step 2: If \((P^{k+1}-P^{k})(x^{k+1}-\phi)=0\), let \(y=P^{k}(x^{k+1}-\phi)+\phi=\min\{x^{k+1},\phi\}\) and stop. Otherwise, go to Step 3.

Step 3: Set \(k:=k+1\) and go to Step 1.

Similar to the proofs of Lemmas 3.1-3.3 as well as Theorems 3.1 and 3.2, we have the following finite convergence result.

Theorem 4.1

Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots,n\)) be monotonically nondecreasing functions. Then Algorithm 4.1 is well defined. Moreover, let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 4.1. Then

$$P^{k}\bigl(x^{k+1}-\phi\bigr)\leq P^{k-1} \bigl(x^{k}-\phi\bigr)\leq\cdots\leq P^{0} \bigl(x^{1}-\phi\bigr)=0, $$
$$P^{k+1}\geq P^{k}\geq O,\quad k=0,1,2,\ldots, $$

and

$$y^{k+1}\leq y^{k}\leq\cdots\leq y^{1}, $$

where

$$y^{k+1}=P^{k}\bigl(x^{k+1}-\phi\bigr)+\phi,\quad k=0,1,2, \ldots. $$

Therefore, Algorithm 4.1 stops at the solution of problem (4.1) in at most n iterations.

In the following, we consider the numerical solution of the almost linear mixed complementarity problem of finding an \(x\in R^{n}\) such that (see, e.g., [28])

$$ \textstyle\begin{cases} x_{i}\geq\phi_{i},\qquad (Ax+\Psi(x))_{i}\geq0,\qquad (x_{i}-\phi_{i}) (Ax+\Psi(x))_{i}=0,&\mbox{if } i\in N_{1},\\ (Ax+\Psi(x))_{i}=0,&\mbox{if } i\in N_{2}, \end{cases} $$
(4.3)

where \(N_{1}\cap N_{2}=\emptyset\) and \(N_{1}\cup N_{2}=\{1,2,\ldots, n\}\).

Let \(\varphi\in R^{n}\) be any vector satisfying

$$ \varphi_{i}=\phi_{i}, \quad\forall i\in N_{1}. $$
(4.4)

For any given vector \(x\in R^{n}\), let the diagonal function \(P_{\varphi}(x)\) be defined as

$$ P_{\varphi}(x)=\operatorname{diag} \bigl(p_{\varphi_{1}}(x_{1}),p_{\varphi_{2}}(x_{2}), \ldots , p_{\varphi_{n}}(x_{n})\bigr) $$
(4.5)

with

$$p_{\varphi_{i}}(x_{i})=\textstyle\begin{cases} 1,& \mbox{if } i\in N_{1} \mbox{ and } x_{i}\geq\varphi_{i}=\phi_{i},\\ 0, & \mbox{if } i\in N_{1} \mbox{ and } x_{i}< \varphi_{i}=\phi_{i},\\ 1,& \mbox{if } i\in N_{2}. \end{cases} $$

Noting that

$$\begin{aligned}& \bigl(AP_{\varphi}(x) (x-\varphi)+A\varphi\bigr)_{i} \\& \quad= \sum_{j\in N_{1},x_{j}\geq\phi_{j}}a_{ij}(x_{j}- \varphi_{j})+\sum_{j\in N_{2}}a_{ij}(x_{j}- \varphi_{j})+\sum_{j\in N_{1}\cup N_{2}}a_{ij} \varphi_{j} \\& \quad= \sum_{j\in N_{1},x_{j}\geq\phi_{j}}a_{ij}x_{j} + \sum_{j\in N_{1},x_{j}< \phi_{j}}a_{ij}\phi_{j} +\sum _{j\in N_{2}}a_{ij}x_{j} \end{aligned}$$

and

$$\bigl(P_{\varphi}(x) (x-\varphi)+\varphi\bigr)_{i}= \textstyle\begin{cases} x_{i},& i\in N_{1}\mbox{ and }x_{i}\geq\phi_{i},\\ \phi_{i},& i\in N_{1}\mbox{ and }x_{i}< \phi_{i},\\ x_{i},& i\in N_{2}, \end{cases} $$

the following result becomes obvious.

Lemma 4.2

Let x be the solution of the following nonsmooth equations:

$$ \bigl(I-P_{\varphi}(x)+AP_{\varphi}(x)\bigr) (x- \varphi)+\Psi\bigl(P_{\varphi }(x) (x-\varphi)+\varphi\bigr)+A\varphi=0, $$
(4.6)

where \(P_{\varphi}(x)\) is defined by (4.5) and φ satisfies (4.4). Then \(y=P_{\varphi}(x)(x-\varphi)+\varphi\) is the solution of the almost linear mixed complementarity problem (4.3).

According to Lemma 4.2, we construct the following algorithm for the solution of mixed complementarity problem (4.3).

Algorithm 4.2

Let \(P^{0}=(p^{0}_{i})\) with

$$p^{0}_{i}= \textstyle\begin{cases} 0,&\mbox{if }i\in N_{1},\\ 1,&\mbox{if }i\in N_{2}. \end{cases} $$

Set \(k:=0\).

Step 1: Find \(x^{k+1}\) such that

$$ \bigl(I-P^{k}+AP^{k}\bigr) \bigl(x^{k+1}-\varphi\bigr)+\Psi\bigl(P^{k}\bigl(x^{k+1}- \varphi\bigr)+\varphi \bigr)+A\varphi=0, $$
(4.7)

where \(P^{k}=P_{\varphi}(x^{k}) \) is defined by (4.5) and φ satisfies (4.4).

Step 2: If \((P^{k+1}-P^{k})(x^{k+1}-\varphi)=0\), let \(y=P^{k}(x^{k+1}-\varphi)+\varphi\) and stop. Otherwise, go to Step 3.

Step 3: Set \(k:=k+1\) and go to Step 1.
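Relative to Algorithm 3.1, the only changes are the initial matrix \(P^{0}\) and the diagonal function (4.5), which always keeps the \(N_{2}\) indices active. A minimal sketch of this diagonal function (the boolean mask for \(N_{2}\) and the test data are our own illustrative choices):

```python
import numpy as np

def p_mixed(x, phi, in_N2):
    """Diagonal of P_varphi(x) from (4.5): always 1 on N_2; on N_1,
    1 where x_i >= phi_i and 0 otherwise. in_N2 is a boolean mask."""
    return np.where(in_N2, 1.0, (x >= phi).astype(float))

phi = np.zeros(4)
in_N2 = np.array([False, False, True, True])   # N_1 = {0, 1}, N_2 = {2, 3}

p0 = in_N2.astype(float)                       # P^0 of Algorithm 4.2
x = np.array([1.0, -2.0, -3.0, 0.5])

# Indices in N_2 stay active even where x_i < phi_i
assert np.array_equal(p_mixed(x, phi, in_N2), np.array([1., 0., 1., 1.]))
assert np.array_equal(p0, np.array([0., 0., 1., 1.]))
```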

Similarly, we have the following conclusion.

Theorem 4.2

Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreased functions. Then Algorithm 4.2 is well defined. Moreover, let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 4.2. Then

$$\begin{aligned}& P^{k}\bigl(x^{k+1}-\varphi\bigr)- P^{k-1} \bigl(x^{k}-\varphi\bigr)\geq 0,\quad k=1,2,\ldots, \\& P^{k+1}\geq P^{k}\geq O,\quad k=0,1,2,\ldots, \end{aligned}$$

and

$$y^{k+1}\geq y^{k}\geq\cdots\geq y^{1}, $$

where

$$y^{k+1}=P^{k}\bigl(x^{k+1}-\varphi\bigr)+\varphi,\quad k=0,1,2, \ldots. $$

Therefore, Algorithm 4.2 stops at the solution of problem (4.3) in at most n iterations.

For a matrix A satisfying (3.11), the monotone and finite steps convergence can be deduced in a similar way. We omit the details.

5 Numerical experiments

In this section, we present some numerical experiments to investigate the efficiency of the proposed algorithms. The programs are coded in Visual C++ 6.0 and run on a computer with a 2.0 GHz CPU. We consider the following four problems.

Problem 1

(Michaelis-Menten reaction [11])

We consider the following free boundary problem on the unit square \(\Omega=(0,1)\times(0,1)\):

$$\Delta u-cu_{t}=f(x,y,u)+\varepsilon(x,y) \quad\mbox{in } D, $$

where \(f(x,y,u)=u/(1+u)\), \(\varepsilon(x,y)\) is a local threshold consumption rate, and D is a domain in Ω. At the free boundary \(x = s(y)\), the concentration and its gradient vanish. We consider only the steady-state case, i.e., \(c=0\), with the following additional data on Ω:

$$\textstyle\begin{cases} \varepsilon(x,y)=8(y-0.5)^{2},\\ \frac{\partial u}{\partial n}=0 \quad\mbox{when }y=0 \mbox{ and } y=1 \mbox{ and } s(y)=1,\\ u(0,y)=y(1-y), \end{cases} $$

with \(x=s(y)\) being the free boundary. We shall be interested in a nonnegative solution of the above problem. We discretize the problem by using a five-point difference scheme with a constant mesh step size: \(h=1/(m+1)\), where m denotes the number of mesh nodes in x- or y-direction. Then Problem 1 can be transformed to problem (1.1).

Problem 2

([5])

We consider the following nonlinear complementarity problem:

$$x\ge0,\qquad F(x)\ge0,\qquad x^{T}F(x)=0. $$

Here, \(F(x)=Ax+D(x)+f\), where

$$A= \frac{1}{h^{2}} \begin{pmatrix} H&-I&&\\ -I&H&\ddots&\\ &\ddots&\ddots&-I\\ &&-I&H \end{pmatrix} $$

with

$$H= \begin{pmatrix} 4&-1&&\\ -1&4&\ddots&\\ &\ddots&\ddots&-1\\ &&-1&4 \end{pmatrix} $$

and \(h=\frac{ 1}{ \sqrt{n}+1}\). Here \(D(x)=(D_{i}(x_{i})):R^{n}\rightarrow R^{n}\) is a given diagonal mapping with \(D_{i}:R\rightarrow R\) for \(i=1,2,\ldots,n\), that is, the component \(D_{i}\) of D is a function of the ith variable \(x_{i}\) only. We set \(D_{i}(x_{i})=\lambda e^{x_{i}}\). In our test, we fix \(\lambda=0.8\) and let \(f_{i}=\max\{ 0,v_{i}-0.5\}\times10^{w_{i}-0.5}\), where \(w_{i}\) and \(v_{i}\) are random numbers in \([0,1]\), \(i=1,2,\ldots, n\).
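For reference, the Problem 2 data can be assembled via Kronecker products of the one-dimensional blocks. The following NumPy sketch makes our own illustrative choices for the function name, random seed, and dense storage:

```python
import numpy as np

def problem2(m, lam=0.8, seed=0):
    """Assemble A, D, f of Problem 2 on an m-by-m grid (n = m^2)."""
    n = m * m
    h = 1.0 / (m + 1)                    # h = 1/(sqrt(n) + 1)
    H = 4*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    E = np.eye(m, k=1) + np.eye(m, k=-1)
    # Block tridiagonal A = (1/h^2) * tridiag(-I, H, -I)
    A = (np.kron(np.eye(m), H) - np.kron(E, np.eye(m))) / h**2
    rng = np.random.default_rng(seed)
    w, v = rng.random(n), rng.random(n)
    f = np.maximum(0.0, v - 0.5) * 10.0**(w - 0.5)
    D = lambda x: lam * np.exp(x)        # diagonal mapping D_i(x_i) = lam*e^{x_i}
    return A, D, f
```

The assembled A is symmetric with positive diagonal and nonpositive off-diagonal entries, consistent with the M-matrix setting of the convergence theory.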

Problem 3

([29])

We discuss the following nonlinear complementarity problem:

$$x\ge0,\qquad F(x)\ge0,\qquad x^{T}F(x)=0, $$

where

$$F(x)=Mx+D(x)+f. $$

The matrix \(M=A^{T}A+B\) is produced as follows. Let A be an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,5)\), and let the skew-symmetric matrix B be generated in the same way. Let the vector f be generated from a uniform distribution in the interval \((-200,300)\). Let \(D(x)=(D_{i}(x_{i})):R^{n}\rightarrow R^{n}\) be a given diagonal mapping with \(D_{i}(x_{i})=d_{i}\arctan(x_{i})\), \(i=1,2,\ldots,n\), where the \(d_{i}\) (\(i=1,2,\ldots,n\)) are chosen randomly in \((0,1)\).
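A possible way to generate the data of Problem 3 is sketched below (illustrative Python, not the authors' code). The construction of the skew-symmetric B as \(C-C^{T}\) from a random C is our assumption; the paper only states that B is skew-symmetric and generated "in the same way".

```python
import numpy as np

def problem3_data(n, seed=0):
    """Build M, f, F for Problem 3: F(x) = M x + D(x) + f."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-5.0, 5.0, (n, n))
    C = rng.uniform(-5.0, 5.0, (n, n))
    B = C - C.T                                # skew-symmetric: B^T = -B
    M = A.T @ A + B                            # positive definite part plus skew part
    f = rng.uniform(-200.0, 300.0, n)
    d = rng.uniform(0.0, 1.0, n)
    F = lambda x: M @ x + d * np.arctan(x) + f # D_i(x_i) = d_i * arctan(x_i)
    return M, f, F
```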

Problem 4

(Signorini problem [30])

We consider the contact problem of finding a u such that

$$\textstyle\begin{cases} -\triangle u=f \quad\mbox{in } \Omega=(0,1)\times(0,1),\\ u=0 \quad\mbox{on } \Gamma_{p}=\{(x,y):x=0,0\le y\le1\}\cup\{ (x,y):0\le x\le1, y=0\}, \\ u\ge0, \quad\quad\frac{\partial u}{\partial n}\ge0, \quad\quad\frac{\partial u}{\partial n}u=0 \quad\mbox{on } \Gamma_{s_{1}}\cup\Gamma_{s_{2}}, \end{cases} $$

where \(\Gamma_{s_{1}}=\{(x,y):0\le x\le1,y=1\}\), \(\Gamma_{s_{2}}=\{(x,y):x=1, 0\le y\le1\}\), and

$$f(x,y)=\textstyle\begin{cases}-10,& \mbox{if }(x,y)\in(0,1/2]\times(0,1),\\ 10, & \mbox{if }(x,y)\in(1/2,1]\times(0,1). \end{cases} $$

We discretize the problem by a finite difference scheme with a constant mesh step size \(h=1/m\), where m denotes the number of mesh nodes in the x- or y-direction. Then Problem 4 can be transformed into a linear mixed complementarity problem.

We compare the algorithms in terms of iteration counts. Here, we consider three algorithms: the semi-iterative algorithms proposed in this paper (denoted by SIA), the semismooth equation approach proposed in [31] (denoted by SSEA), and the two-level additive Schwarz method proposed in [32] (denoted by TLASM).

For SIA, we choose the initial matrix \(P^{0}=O\) for all problems and adopt a Newton iteration with line search to solve the system of nonlinear equations at each iteration. The termination criterion of the inner iteration is \(\|\alpha^{k} d^{k}\|_{2}\le10^{-6}\), where \(d^{k}\) is the Newton direction and \(\alpha^{k}\) is the step length. The number of inner iterations is denoted by \(\mathrm{iter}_{\mathit{inn}}\), and the iteration number of the algorithm is denoted by iter.
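The inner solver can be sketched as follows: a damped Newton iteration with a simple backtracking line search and the stopping test \(\|\alpha^{k} d^{k}\|_{2}\le10^{-6}\). This is a minimal sketch under our own assumptions (residual-decrease backtracking with halving); the paper does not specify the line-search rule.

```python
import numpy as np

def newton_ls(F, J, x0, tol=1e-6, max_iter=50):
    """Damped Newton for F(x) = 0; stop when ||alpha * d||_2 <= tol."""
    x = x0.astype(float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(x), -F(x))       # Newton direction d^k
        alpha, r0 = 1.0, np.linalg.norm(F(x))
        # Backtrack until the residual norm decreases (simple damping rule).
        while np.linalg.norm(F(x + alpha * d)) >= r0 and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        if np.linalg.norm(alpha * d) <= tol:   # inner termination criterion
            break
    return x
```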

For SSEA proposed in [31], we choose the initial point \(x^{0}=0\) and the parameters \(\epsilon=10^{-6}\), \(p=3\), \(\rho=0.5\), \(\beta=0.3\). The matrix \(H_{k}\in\partial_{B} \Phi\) is defined by the procedure proposed in Section 7 of [31].

For TLASM, we use PSOR to solve all subproblems related to obstacle problems with the tolerance \(10^{-4}\) in the \(\|\cdot\|_{2}\) norm. The systems of nonlinear equations in TLASM are solved by Newton iteration, with the same termination criterion as in SIA. The relaxation parameter in the relaxation iterative method is chosen as \(\omega=1.8\). In order to obtain an initial super-solution, we let \(x^{0}=A^{-1}e\), with \(e=(1,1,\ldots,1)^{T}\), and \(x^{0}=-A^{-1}f\), respectively, for Problems 1 and 2. In TLASM, \(\mathrm{iter}_{\mathit{inn}}\) and iter are defined as in SIA.
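The PSOR subproblem solver can be sketched as below (our own minimal Python, assuming the standard projected SOR sweep for the LCP \(x\ge\phi\), \(Ax-b\ge0\), \((x-\phi)^{T}(Ax-b)=0\); the starting point \(\max(\phi,0)\) is an assumption).

```python
import numpy as np

def psor(A, b, phi, omega=1.8, tol=1e-4, max_iter=10000):
    """Projected SOR for the LCP: x >= phi, Ax - b >= 0, (x-phi)^T (Ax-b) = 0."""
    n = len(b)
    x = np.maximum(phi, 0.0)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel residual excluding the diagonal contribution of x_i.
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            # Over-relaxed update projected onto the obstacle constraint.
            x[i] = max(phi[i], (1.0 - omega) * x[i] + omega * r / A[i, i])
        if np.linalg.norm(x - x_old) <= tol:
            break
    return x
```

For a symmetric positive definite A, the sweep converges for any \(0<\omega<2\), which covers the choice \(\omega=1.8\) used above.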

It is easy to see from Tables 1-3 that the algorithms presented in this paper are competitive with the other algorithms in most cases. TLASM requires an initial super-solution of the problem, which may bring about numerical difficulties; compared with TLASM, it is much easier for SIA to choose a suitable initial point. Another advantage of SIA is that its subproblems are systems of almost linear equations of dimension less than n, while the subproblems in the two-level additive Schwarz method still include complementarity problems. In Table 3, we do not list the results for the two-level additive Schwarz method, because for Problem 3 it is difficult to find an initial super-solution \(x^{0}\) and the algorithm may not converge. Theoretically, we cannot guarantee the convergence of SIA for Problem 3 since A is not an M-matrix. However, it is interesting to see from Table 3 that SIA is still stable and effective, since the iteration numbers are small and show almost no change as the dimension increases.

Table 1 Comparisons of iteration numbers for Problem  1
Table 2 Comparisons of iteration numbers for Problem  2
Table 3 Comparisons of iteration numbers for Problem  3

SSEA and TLASM require some modifications in order to solve mixed complementarity problems. So we run only Algorithm 4.1 on the mixed complementarity problems of different dimensions obtained by discretizing Problem 4; the iteration numbers are listed in Table 4. It is easy to see that Algorithm 4.1 is still effective for this problem.

Table 4 Algorithm 4.1 for Problem  4

6 Final remark

In this paper, we present some finite algorithms for the numerical solution of almost linear (mixed) complementarity problems. The algorithms are based on systems of piecewise almost linear equations, which are reformulations of the problems. It is proved that the algorithms converge monotonically to the solution of the problem in a finite number of steps. Indeed, the iterates produced by the algorithms form monotone sequences of lower or upper solutions converging to the solutions of the problems. Numerical results show that the proposed methods are effective.

References

  1. Wang, ZY, Fukushima, M: A finite algorithm for almost linear complementarity problems. Numer. Funct. Anal. Optim. 28, 1387-1403 (2007)

  2. Rodrigues, JF: Obstacle Problems in Mathematical Physics. Elsevier, Amsterdam (1987)

  3. Elliott, CM, Ockendon, JR: Weak and Variational Methods for Moving Boundary Problems. Research Notes in Mathematics, vol. 59. Pitman, London (1982)

  4. Meyer, GH: Free boundary problems with nonlinear source terms. Numer. Math. 43, 463-482 (1984)

  5. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)

  6. Glowinski, R: Lectures on Numerical Methods for Nonlinear Variational Problems. Springer, New York (1984)

  7. Hackbusch, W, Mittelmann, H: On multigrid methods for variational inequalities. Numer. Math. 42, 65-76 (1984)

  8. Kornhuber, R: Monotone multigrid methods for elliptic variational inequalities I. Numer. Math. 69, 167-184 (1994)

  9. Zhang, Y: Multilevel projection algorithm for solving obstacle problems. Comput. Math. Appl. 41, 1505-1513 (2001)

  10. Badea, L, Tai, X, Wang, J: Convergence rate analysis of a multiplicative Schwarz method for variational inequalities. SIAM J. Numer. Anal. 41, 1052-1073 (2003)

  11. Hoffmann, KH, Zou, J: Parallel solution of variational inequality problems with nonlinear source terms. IMA J. Numer. Anal. 16, 31-45 (1996)

  12. Jiang, YJ, Zeng, JP: A multiplicative Schwarz algorithm for the nonlinear complementarity problem with an M-function. Bull. Aust. Math. Soc. 82, 353-366 (2010)

  13. Tai, XC: Rate of convergence for some constraint decomposition methods for nonlinear variational inequalities. Numer. Math. 93, 755-786 (2003)

  14. Tarvainen, P: Two-level Schwarz method for unilateral variational inequalities. IMA J. Numer. Anal. 19, 273-290 (1999)

  15. Zeng, JP, Zhou, SZ: Schwarz algorithm for the solution of variational inequalities with nonlinear source terms. Appl. Math. Comput. 97, 23-35 (1998)

  16. Zeng, JP, Zhou, SZ: Domain decomposition methods and their convergence for solving variational inequalities with a nonlinear source term. Acta Math. Appl. Sin. 23, 250-260 (2000)

  17. Ito, K, Kunisch, K: Parabolic variational inequalities: the Lagrange multiplier approach. J. Math. Pures Appl. 85, 415-449 (2006)

  18. Kärkkänen, T, Kulisch, K, Tarvainen, P: Augmented Lagrangian active set methods for obstacle problems. J. Optim. Theory Appl. 119, 499-533 (2003)

  19. Xue, L, Cheng, XL: An algorithm for solving the obstacle problems. Comput. Math. Appl. 48, 1651-1657 (2004)

  20. Kanzow, C: Inexact semismooth Newton methods for large-scale complementarity problems. Optim. Methods Softw. 19, 309-325 (2004)

  21. Sun, Z, Zeng, JP: A monotone semismooth Newton type method for a class of complementarity problems. J. Comput. Appl. Math. 235, 1261-1274 (2011)

  22. Sun, Z, Zeng, JP: A damped semismooth Newton method for mixed linear complementarity problems. Optim. Methods Softw. 26, 187-205 (2011)

  23. Sun, Z, Wu, L, Liu, Z: A damped semismooth Newton method for the Brugnano-Casulli piecewise linear system. BIT Numer. Math. 55, 569-589 (2015)

  24. Brugnano, L, Sestini, A: Numerical solution of obstacle and parabolic obstacle problems based on piecewise linear systems. AIP Conf. Proc. 1168, 746-749 (2009)

  25. Brugnano, L, Casulli, V: Iterative solution of piecewise linear systems and applications to flows in porous media. SIAM J. Sci. Comput. 31, 1858-1873 (2009)

  26. Brugnano, L, Casulli, V: Iterative solution of piecewise linear systems. SIAM J. Sci. Comput. 30, 463-472 (2008)

  27. Ortega, JM, Rheinboldt, WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, San Diego (1970). Reprinted by SIAM, Philadelphia (2000)

  28. Kikuchi, N, Oden, JT: Contact Problems in Elasticity: A Study of Variational Inequalities and Finite Element Methods. SIAM, Philadelphia (1988)

  29. Noor, MA, Bnouhachem, A: Modified proximal-point method for nonlinear complementarity problems. J. Comput. Appl. Math. 197, 395-405 (2006)

  30. Walloth, M: Adaptive numerical simulation of contact problems: resolving local effects at the contact boundary in space and time. Diss. Universitäts und Landesbibliothek Bonn (2012)

  31. Luca, TD, Facchinei, F, Kanzow, C: A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 75, 407-439 (1996)

  32. Xu, HR, Zeng, JP, Sun, Z: Two-level additive Schwarz algorithms for nonlinear complementarity problem with an M-function. Numer. Linear Algebra Appl. 17, 599-613 (2010)

Acknowledgements

The work was supported by the NSF of China grant 11271069, by Training program for outstanding young teachers in Guangdong Province (Grant No. 20140202), and by Educational Commission of Guangdong Province, China (Grant No. 2014KQNCX210).

Author information

Corresponding author

Correspondence to Jinping Zeng.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors jointly worked on the results and they read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Xu, H., Zeng, J. Finite algorithms for the numerical solutions of a class of nonlinear complementarity problems. J Inequal Appl 2015, 315 (2015). https://doi.org/10.1186/s13660-015-0781-6
