We propose a new algorithm for solving systems of nonlinear equations with convex constraints which combines elements of the Newton method, the proximal point method, and the projection method. Convergence of the whole sequence of iterates is established under conditions weaker than those used in existing projection-type methods. We also study the superlinear convergence rate of the new method under an additional error bound condition. Preliminary numerical experiments show that our method is efficient.
MSC: 90C25, 90C30.
Keywords: nonlinear equations; projection method; global convergence; superlinear convergence
Let S denote the solution set of (1.1). Throughout this paper, we assume that S is nonempty and F has the property that
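\[
\langle F(y),\, y - x^{*}\rangle \;\ge\; 0 \qquad \text{for all } y \in C \text{ and all } x^{*} \in S, \qquad (1.2)
\]
which is the form this property typically takes in projection-type methods (the notation here is generic and may differ in detail from the original display). Here problem (1.1) is the constrained system of finding $x \in C$ such that $F(x) = 0$, where $C \subseteq \mathbb{R}^{n}$ is a nonempty closed convex set and $F : \mathbb{R}^{n} \to \mathbb{R}^{n}$ is continuous.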
The property (1.2) holds if F is monotone or, more generally, pseudomonotone on C in the sense of Karamardian.
Nonlinear equations have wide applications in practice. For example, many problems arising in chemical technology, economics, and communications can be transformed into nonlinear equations; see [2-5]. In recent years, many numerical methods for problem (1.1) with a smooth mapping F have been proposed. These methods include the Newton method, the quasi-Newton method, the Levenberg-Marquardt method, the trust region method, and their variants; see [6-14].
Recently, a hybrid method for solving problem (1.1), which combines the Newton, proximal point, and projection methodologies, was proposed in [15]. The method is globally convergent when F is monotone and continuous. Under the additional assumptions of differentiability and nonsingularity, locally superlinear convergence of the method was also proved. However, the nonsingularity condition is rather strong. Relaxing the nonsingularity assumption, [16] proposed a modified version of the method with a different projection step, and showed that under a local error bound condition, which is weaker than nonsingularity, the modified method converges superlinearly to a solution of problem (1.1). The numerical results reported in [16] show that the method is efficient. However, both [15] and [16] require the mapping F to be monotone, which is a rather stringent requirement for ensuring the global convergence and the locally superlinear convergence of the hybrid method.
To further relax the monotonicity assumption on F, in this paper we propose a new hybrid algorithm for problem (1.1) which contains the method of [16] as a special case. Global convergence of our method requires only that F satisfy property (1.2), which is much weaker than monotonicity or even pseudomonotonicity. We also discuss the superlinear convergence of our method under mild conditions. Preliminary numerical experiments show that our method is efficient.
2 Preliminaries and algorithms
We have the following well-known property of the projection operator; see .
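For a nonempty closed convex set $C$, the projection $P_{C}(x) = \arg\min\{\|y - x\| : y \in C\}$ is well defined, and the property used below is typically the following characterization together with non-expansiveness (generic notation):
\[
\langle x - P_{C}(x),\, y - P_{C}(x)\rangle \le 0 \quad \text{for all } y \in C,
\qquad
\|P_{C}(x) - P_{C}(y)\| \le \|x - y\| \quad \text{for all } x, y \in \mathbb{R}^{n}.
\]
In particular, $\|P_{C}(x) - x^{*}\| \le \|x - x^{*}\|$ for every $x^{*} \in C$, which is what makes the projection step compatible with Fejér-type distance arguments.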
Step 3. Compute
Remark 2.1 For particular choices of the parameters and of the search direction, our algorithm reduces to the one in . Moreover, in the step that generates the next iterate, our projection way and projection region also differ from those in .
Now we analyze the feasibility of Algorithm 2.1. It is obvious that a point satisfying conditions (2.1) and (2.2) exists; in fact, a particular choice satisfies both (2.1) and (2.2). Thus we need only show the feasibility of (2.3).
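To make the overall structure concrete, the following is a minimal Python sketch of the generic hyperplane-projection framework that hybrid methods of this kind follow: a backtracking search produces a trial point z at which F separates the current iterate from the solution set, and the next iterate is obtained by projecting onto that hyperplane and then back onto C. The direction d = -F(x), the box form of C, and the particular line-search test used here are illustrative assumptions only; they are not the Newton-type direction and conditions (2.1)-(2.3) prescribed by Algorithm 2.1.

import numpy as np

def project_box(x, lo, hi):
    # Projection onto the box [lo, hi]^n, used as a simple stand-in for a
    # general nonempty closed convex set C.
    return np.clip(x, lo, hi)

def hybrid_projection(F, x0, lo, hi, sigma=1e-4, beta=0.5, tol=1e-6, max_iter=500):
    # Generic hyperplane-projection scheme for F(x) = 0, x in C (illustrative sketch).
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        d = -Fx                      # illustrative direction; Algorithm 2.1 solves a Newton-type system instead
        t = 1.0
        while True:                  # backtracking search for a separating trial point z
            z = x + t * d
            Fz = F(z)
            if Fz @ (x - z) >= sigma * t * np.dot(d, d) or t < 1e-12:
                break                # the safeguard t < 1e-12 only guards against numerical stalling
            t *= beta
        if np.linalg.norm(Fz) <= tol:
            return z, k
        # Project x onto the hyperplane {u : <F(z), u - z> = 0}, then back onto C.
        x = project_box(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz, lo, hi)
    return x, max_iter

# Example use on a monotone map with solution x = 0 in C = [-10, 10]^3:
# sol, iters = hybrid_projection(lambda x: x + np.sin(x), np.array([3.0, -2.0, 5.0]), -10.0, 10.0)

Under property (1.2), every solution lies on the non-positive side of the separating hyperplane while the current iterate lies on the strictly positive side, so each projection step does not increase the distance to the solution set; this kind of Fejér monotonicity is the mechanism typically behind the global convergence analysis of such methods.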
3 Convergence analysis
In this section, we first prove two lemmas, and then analyze the global convergence of Algorithm 2.1.
Proof Combining inequality (2.5) with the Cauchy-Schwarz inequality, we obtain
where the inequality follows from (2.3).
Remark 3.1 Lemma 3.2 means that the hyperplane
strictly separates the current iterate from the solution set of problem (1.1).
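In methods of this type, the hyperplane is typically of the form
\[
H_{k} = \{\, x \in \mathbb{R}^{n} : \langle F(z_{k}),\, x - z_{k}\rangle = 0 \,\},
\]
where $z_{k}$ denotes the trial point generated by the line search (notation generic). The separation then follows because the line-search condition forces $\langle F(z_{k}),\, x_{k} - z_{k}\rangle > 0$, while property (1.2) gives $\langle F(z_{k}),\, x^{*} - z_{k}\rangle \le 0$ for every $x^{*} \in S$.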
We next prove our main result. Clearly, if Algorithm 2.1 terminates at Step k, then the current iterate is a solution of problem (1.1). Therefore, in the following analysis, we assume that Algorithm 2.1 generates an infinite sequence.
This, together with inequality (3.4), implies that
Now, we consider the following two possible cases:
This shows that the accumulation point is a solution of problem (1.1). Replacing the solution used in the preceding argument by this accumulation point, we obtain that the sequence of distances from the iterates to it is nonincreasing, and hence convergent. Since it is an accumulation point of the iterate sequence, some subsequence of these distances converges to zero, which implies that the whole distance sequence converges to zero, and hence the iterate sequence converges to this accumulation point.
4 Convergence rate
In this section, we provide a result on the convergence rate of the iterative sequence generated by Algorithm 2.1. To establish this result, we need the following conditions (4.1) and (4.2).
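Condition (4.1) is a local error bound on F over C; in this setting it is usually stated as follows: there exist constants $c > 0$ and $\delta > 0$ such that
\[
c\,\operatorname{dist}(x, S) \;\le\; \|F(x)\| \qquad \text{whenever } x \in C \text{ and } \operatorname{dist}(x, S) \le \delta ,
\]
where $\operatorname{dist}(x, S)$ denotes the distance from $x$ to the solution set. Condition (4.2) is commonly a smoothness requirement, such as local Lipschitz continuity of $\nabla F$ near the solution set; the exact statements may differ in detail.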
In fact, by the mean value theorem for vector-valued functions, we have
In 1998, it was shown in [15] that the proposed method converges superlinearly when the underlying function F is monotone and differentiable with a nonsingular Jacobian at the solution, and ∇F is locally Lipschitz continuous. It is known that the local error bound condition given in (4.1) is weaker than this nonsingularity assumption. Recently, under conditions (4.1) and (4.2), together with the monotonicity and continuity of the underlying function F, a locally superlinear rate of convergence of the proposed method was obtained in [16].
Proof See . □
For the right-hand side, it follows from (2.1) and (2.2) that
For (2), using (2.1) and (2.2), we have
Lemma 4.3 Suppose that the assumptions in Lemma 4.2 hold. Then, for all k sufficiently large, it holds that
By Lemma 4.2(1), we obtain that
which, together with (4.8) and Lemma 4.2(1), yields
Now we turn our attention to the analysis of the local rate of convergence.
On the other hand, from Lemma 3.2, we know that
which implies that
Therefore, combining this with inequalities (4.1), (4.5), and (4.9), we have
which implies that the order of superlinear convergence is at least 1.5. This completes the proof. □
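(In terms of the error measure used in the proof, typically the distance to the solution set under the error bound (4.1), an order of at least 1.5 means $\operatorname{dist}(x_{k+1}, S) = O\big(\operatorname{dist}(x_{k}, S)^{3/2}\big)$ for all $k$ sufficiently large.)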
5 Numerical experiments
In this section, we present some numerical results to show the efficiency of our method. The MATLAB codes are run on a notebook computer with a 2.10 GHz CPU under MATLAB Version 7.0. Just as done in , we take  and use the left division operation in MATLAB to solve the system of linear equations (2.1) at each iteration. We choose , , , , and . ‘Iter.’ denotes the number of iterations and ‘CPU’ denotes the CPU time in seconds. We choose  as the stopping criterion. The example is also tested in .
and the constraint set C be taken as
From Tables 1-2, we can see that our algorithm is efficient when the parameters are chosen properly. We can also observe that the algorithm's performance changes with the value of a. When we take , the results are not the best; that is to say, the corresponding direction is not an optimal one.
The author declares that they have no competing interests.
The author wishes to thank the anonymous referees for their suggestions and comments. This work is also supported by the Educational Science Foundation of Chongqing, China (Grant No. KJ111309).
Karamardian, S: Complementarity problems over cones with monotone and pseudomonotone maps. J. Optim. Theory Appl. 18, 445–454 (1976)
Dirkse, SP, Ferris, MC: MCPLIB: a collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 5, 319–345 (1995)
Meintjes, K, Morgan, AP: A methodology for solving chemical equilibrium systems. Appl. Math. Comput. 22, 333–361 (1987)
Zhang, JL, Wang, Y: A new trust region method for nonlinear equations. Math. Methods Oper. Res. 58, 283–298 (2003)
Fan, JY, Yuan, YX: On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption. Computing 74, 23–39 (2005)
Fan, JY: Convergence rate of the trust region method for nonlinear equations under local error bound condition. Comput. Optim. Appl. 34, 215–227 (2006)
Fan, JY, Pan, JY: An improved trust region algorithm for nonlinear equations. Comput. Optim. Appl. 48, 59–70 (2011)
Solodov, MV, Svaiter, BF: A globally convergent inexact Newton method for systems of monotone equations. In: Fukushima, M, Qi, L (eds.) Reformulation: Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369. Kluwer Academic, Dordrecht (1998)
Wang, CW, Wang, YJ: A superlinearly convergent projection method for constrained systems of nonlinear equations. J. Glob. Optim. 44, 283–296 (2009)
Zhou, GL, Toh, KC: Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 125, 205–221 (2005)