Iterative algorithms for finding the zeroes of sums of operators
Journal of Inequalities and Applications volume 2014, Article number: 349 (2014)
Abstract
Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces, let $C \subseteq H_1$, $Q \subseteq H_2$ be nonempty closed convex sets, and let $A: H_1 \to H_3$, $B: H_2 \to H_3$ be two bounded linear operators. We consider the problem of finding $x \in C$, $y \in Q$ such that $Ax = By$ (SEP). Recently, Eckstein and Svaiter presented some splitting methods for finding a zero of the sum of monotone operators A and B. However, their algorithms depend heavily on the maximal monotonicity of A and B. In this paper, we describe some algorithms for finding a zero of the sum of A and B which dispense with the maximal monotonicity assumptions on A and B.
1 Introduction and preliminaries
Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces, let $C \subseteq H_1$ be a nonempty closed convex set, and let $Q \subseteq H_2$ be a nonempty closed convex set. Let $A: H_1 \to H_3$, $B: H_2 \to H_3$ be two bounded linear operators. We consider the interesting problem of finding $x \in C$, $y \in Q$ such that

$$Ax = By. \tag{1.1}$$

For convenience, we denote the problem (1.1) by SEP.
For the sum $A + B$ it is generally difficult to compute zeroes directly, even when zeroes of A and B can be found separately. To overcome this difficulty, Eckstein and Svaiter [1] presented splitting methods for finding a zero of the sum of monotone operators A and B, in which each iteration treats A and B separately. Three basic families of splitting methods for this problem were identified in [1]:
(i) The Douglas/Peaceman-Rachford family, whose iteration is given by

$$x_{k+1} = \big[(1-\rho_k)I + \rho_k(2J_{\lambda A} - I)(2J_{\lambda B} - I)\big]x_k,$$

where $J_{\lambda A} := (I + \lambda A)^{-1}$ denotes the resolvent of A, $\lambda > 0$ is a fixed scalar, and $\{\rho_k\}$ is a sequence of relaxation parameters.
(ii) The double backward splitting method, with iteration given by

$$x_{k+1} = J_{\lambda_k A}J_{\lambda_k B}x_k,$$

where $\{\lambda_k\}$ is a sequence of regularization parameters.
(iii) The forward-backward splitting method, with iteration given by

$$x_{k+1} = J_{\lambda_k A}(I - \lambda_k B)x_k,$$

where $\{\lambda_k\}$ is a sequence of regularization parameters.
Convergence results for scheme (i), in the case in which $\{\rho_k\}$ is contained in a compact subset of $(0,2)$, can be found in [2]; the convergence analysis of the double backward scheme (ii) can be found in [3] and [4]; for the standard convergence analysis of (iii) one can see [5]. However, these convergence results depend heavily on the maximal monotonicity of A and B. It is therefore the aim of this paper to construct new algorithms for the problem SEP which dispense with the maximal monotonicity assumptions on A and B.
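To make the schemes concrete, the following is a minimal numerical sketch of the forward-backward iteration (iii) in finite dimensions. It assumes a specific toy instance: A is the normal-cone operator of the box $C = [0,1]^d$ (whose resolvent $J_{\lambda A}$ is the projection onto C) and $B = \nabla f$ for the smooth convex quadratic $f(x) = \frac{1}{2}\|Mx - b\|^2$; the data M, b and the step-size rule are illustrative choices, not taken from [1]-[5].

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
b = rng.standard_normal(d)

def proj_box(x):
    # Resolvent J_{lam A} of the normal-cone operator A of C = [0,1]^d:
    # it is the projection onto C, independently of lam.
    return np.clip(x, 0.0, 1.0)

def grad_f(x):
    # The single-valued operator B = grad f, with f(x) = 0.5*||Mx - b||^2.
    return M.T @ (M @ x - b)

L = np.linalg.norm(M.T @ M, 2)        # Lipschitz constant of grad f
x = np.zeros(d)
for k in range(500):
    lam = 1.0 / L                     # constant regularization parameter lam_k
    x = proj_box(x - lam * grad_f(x)) # x_{k+1} = J_{lam_k A}(I - lam_k B) x_k
print("approximate zero of A + B:", x)
```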
The paper is organized as follows. In Section 2, we define the concept of the minimal norm solution of the problem (1.1). Using Tychonov regularization, we obtain a net of solutions of regularized minimization problems which approximates this minimal norm solution (see Theorem 2.4). In Section 3, we introduce an algorithm and prove its strong convergence; more importantly, its limit is the minimum-norm solution of the problem (1.1) (see Theorem 3.2). In Section 4, we introduce a KM-CQ-like iterative algorithm which converges strongly to a solution of the problem (1.1) (see Theorem 4.3).
Throughout the rest of this paper, I denotes the identity operator on a Hilbert space H, $\mathrm{Fix}(T)$ the set of fixed points of an operator T, and $\nabla f$ the gradient of a functional $f: H \to \mathbb{R}$. An operator T on a Hilbert space H is nonexpansive if, for each x and y in H, $\|Tx - Ty\| \le \|x - y\|$. T is said to be averaged if there exist $\alpha \in (0,1)$ and a nonexpansive operator N such that $T = (1-\alpha)I + \alpha N$.
We know that the projection $P_C$ from H onto a nonempty closed convex subset C of H is a typical example of a nonexpansive and averaged mapping; it is defined by

$$P_C x := \operatorname*{arg\,min}_{y \in C}\|x - y\|, \quad x \in H.$$

It is well known that $P_C x$ is characterized by the inequality

$$\langle x - P_C x, y - P_C x\rangle \le 0, \quad \forall y \in C.$$
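As a quick illustration (an assumed toy example, not part of the paper), the following snippet projects a point onto the closed unit ball, for which $P_C x = x/\max(1, \|x\|)$, and checks the characterizing inequality on randomly sampled points of C.

```python
import numpy as np

# P_C for the closed unit ball C = {y : ||y|| <= 1}.
def proj_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal(4)      # a point outside C
px = proj_ball(x)

# <x - P_C x, y - P_C x> should be <= 0 for every y in C.
worst = max(np.dot(x - px, proj_ball(rng.standard_normal(4)) - px)
            for _ in range(1000))
print("max over sampled y of <x - Px, y - Px>:", worst)  # expected <= 0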
We now collect some elementary facts which will be used in the proofs of our main results.
Lemma 1.1 [6, 7]

Let X be a Banach space, C a closed convex subset of X, and $T: C \to C$ a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in C weakly converging to x and if $\{(I - T)x_n\}$ converges strongly to y, then $(I - T)x = y$.
Lemma 1.2 [8]
Let $\{s_n\}$ be a sequence of nonnegative real numbers, $\{\alpha_n\}$ a sequence of real numbers in $[0,1]$ with $\sum_{n=1}^\infty \alpha_n = \infty$, $\{u_n\}$ a sequence of nonnegative real numbers with $\sum_{n=1}^\infty u_n < \infty$, and $\{t_n\}$ a sequence of real numbers with $\limsup_{n\to\infty} t_n \le 0$. Suppose that

$$s_{n+1} \le (1-\alpha_n)s_n + \alpha_n t_n + u_n \quad \text{for all } n.$$

Then $\lim_{n\to\infty}s_n = 0$.
Lemma 1.3 [9]
Let $\{x_n\}$, $\{y_n\}$ be bounded sequences in a Banach space and let $\{\beta_n\}$ be a sequence in $[0,1]$ which satisfies the condition $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = (1-\beta_n)y_n + \beta_n x_n$ and $\limsup_{n\to\infty}(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|) \le 0$; then $\lim_{n\to\infty}\|y_n - x_n\| = 0$.
Lemma 1.4 [10]
Let f be a convex and differentiable functional and let C be a closed convex subset of H. Then $x^* \in C$ is a solution of the problem

$$\min_{x \in C} f(x)$$

if and only if $x^*$ satisfies the following optimality condition:

$$\langle \nabla f(x^*), x - x^*\rangle \ge 0, \quad \forall x \in C.$$
Moreover, if f is, in addition, strictly convex and coercive, then the minimization problem has a unique solution.
Lemma 1.5 [11]
Let A and B be averaged operators and suppose that $\mathrm{Fix}(A) \cap \mathrm{Fix}(B)$ is nonempty. Then $\mathrm{Fix}(AB) = \mathrm{Fix}(BA) = \mathrm{Fix}(A) \cap \mathrm{Fix}(B)$.
2 The minimum-norm solution of the problem SEP
In this section, we propose the concept of the minimal norm solution of (1.1). Then, using Tychonov regularization, we obtain the minimal norm solution as the limit of a net of solutions of regularized minimization problems.
We use Γ to denote the solution set of SEP, i.e.,

$$\Gamma := \{(x, y) \in C \times Q : Ax = By\},$$

and assume consistency of SEP, i.e., $\Gamma \neq \emptyset$. Since A and B are bounded linear operators and C, Q are closed convex sets, Γ is closed, convex, and nonempty.
Let $H := H_1 \times H_2$ and $S := C \times Q \subseteq H$, and let $P_S$ be the projection from H onto S; $P_S$ has the matrix form

$$P_S = \begin{pmatrix} P_C & 0 \\ 0 & P_Q \end{pmatrix},$$

that is to say, $P_S w = (P_C x, P_Q y)$, $\forall w = (x, y) \in H$.

Define $G: H \to H_3$ by $Gw := Ax - By$, $w = (x, y) \in H$. Then G has the matrix form $G = [A \ \ {-B}]$, and

$$G^* = \begin{pmatrix} A^* \\ -B^* \end{pmatrix}, \qquad G^*G = \begin{pmatrix} A^*A & -A^*B \\ -B^*A & B^*B \end{pmatrix}.$$
The problem SEP can now be reformulated as finding $w \in S$ with $Gw = 0$, or solving the following minimization problem:

$$\min_{w \in S} f(w) := \frac{1}{2}\|Gw\|^2, \tag{2.1}$$

which is ill-posed. A classical way out is the well-known Tychonov regularization, which approximates a solution of problem (2.1) by the unique minimizer of the regularized problem

$$\min_{w \in S} f_\varepsilon(w) := \frac{1}{2}\|Gw\|^2 + \frac{\varepsilon}{2}\|w\|^2, \tag{2.2}$$

where $\varepsilon > 0$ is the regularization parameter. Denote by $w_\varepsilon$ the unique solution of (2.2).
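For intuition, the following sketch solves the regularized problem (2.2) by projected gradient descent on a small instance. The instance is an assumption chosen so that the behavior is easy to see: $C = Q = [0.5, 1.5]^n$ and $B = A$ with A square and injective, so $\Gamma = \{(x,x) : x \in C\}$ and the minimum-norm solution is $x = y = 0.5\cdot(1,\dots,1)$; the solver and step size are illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))       # B = A: Gamma = {(x, x) : x in C}
B = A.copy()

def proj_S(x, y):                     # P_S = diag(P_C, P_Q), C = Q = [0.5, 1.5]^n
    return np.clip(x, 0.5, 1.5), np.clip(y, 0.5, 1.5)

def w_eps(eps, iters=50000):
    # Projected gradient descent on f_eps(w) = 0.5*||Gw||^2 + 0.5*eps*||w||^2.
    x, y = np.ones(n), np.ones(n)
    L = np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2 + eps
    for _ in range(iters):
        r = A @ x - B @ y             # r = Gw
        x, y = proj_S(x - (A.T @ r + eps * x) / L,
                      y - (-(B.T @ r) + eps * y) / L)
    return x, y

for eps in (1.0, 0.1, 0.01):
    x, y = w_eps(eps)
    dist = np.hypot(np.linalg.norm(x - 0.5), np.linalg.norm(y - 0.5))
    print(f"eps={eps:4.2f}  ||Gw||={np.linalg.norm(A @ x - B @ y):.2e}  "
          f"dist to min-norm solution={dist:.3f}")
```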
Proposition 2.1 For any $\varepsilon > 0$, the solution $w_\varepsilon$ of (2.2) is uniquely defined, and $w_\varepsilon$ is characterized by the inequality

$$\langle G^*Gw_\varepsilon + \varepsilon w_\varepsilon, w - w_\varepsilon\rangle \ge 0, \quad \forall w \in S. \tag{2.3}$$
Proof Obviously, $f(w) = \frac{1}{2}\|Gw\|^2$ is convex and differentiable with gradient $\nabla f(w) = G^*Gw$. Recalling that $f_\varepsilon(w) = f(w) + \frac{\varepsilon}{2}\|w\|^2$, we see that $f_\varepsilon$ is strictly convex, coercive, and differentiable with gradient

$$\nabla f_\varepsilon(w) = G^*Gw + \varepsilon w.$$

According to Lemma 1.4, $w_\varepsilon$ is uniquely defined and is characterized by the inequality

$$\langle G^*Gw_\varepsilon + \varepsilon w_\varepsilon, w - w_\varepsilon\rangle \ge 0, \quad \forall w \in S.$$
□
Definition 2.2 An element $w^\dagger \in \Gamma$ is said to be the minimal norm solution of SEP (1.1) if $\|w^\dagger\| = \min\{\|w\| : w \in \Gamma\}$.
The following proposition collects some useful properties of the unique solution of (2.2).
Proposition 2.3 Let $w_\varepsilon$ be given as the unique solution of (2.2). Then we have:

(i) $\|w_\varepsilon\|$ is decreasing for $\varepsilon \in (0,\infty)$;

(ii) $\varepsilon \mapsto w_\varepsilon$ defines a continuous curve from $(0,\infty)$ into H.
Proof (i) Let $\varepsilon > \delta > 0$. Since $w_\varepsilon$ and $w_\delta$ are the unique minimizers of $f_\varepsilon$ and $f_\delta$, respectively, we get

$$f(w_\varepsilon) + \frac{\varepsilon}{2}\|w_\varepsilon\|^2 \le f(w_\delta) + \frac{\varepsilon}{2}\|w_\delta\|^2, \qquad f(w_\delta) + \frac{\delta}{2}\|w_\delta\|^2 \le f(w_\varepsilon) + \frac{\delta}{2}\|w_\varepsilon\|^2.$$

Adding these inequalities yields $\frac{\varepsilon-\delta}{2}\|w_\varepsilon\|^2 \le \frac{\varepsilon-\delta}{2}\|w_\delta\|^2$. It follows that $\|w_\varepsilon\| \le \|w_\delta\|$. Thus $\|w_\varepsilon\|$ is decreasing for $\varepsilon \in (0,\infty)$.

(ii) According to Proposition 2.1, we get

$$\langle G^*Gw_\varepsilon + \varepsilon w_\varepsilon, w_\delta - w_\varepsilon\rangle \ge 0$$

and

$$\langle G^*Gw_\delta + \delta w_\delta, w_\varepsilon - w_\delta\rangle \ge 0.$$

Adding these inequalities, it follows that

$$\langle \varepsilon w_\varepsilon - \delta w_\delta, w_\delta - w_\varepsilon\rangle \ge \langle G^*G(w_\varepsilon - w_\delta), w_\varepsilon - w_\delta\rangle \ge 0.$$

Thus, writing $\varepsilon w_\varepsilon - \delta w_\delta = \varepsilon(w_\varepsilon - w_\delta) + (\varepsilon - \delta)w_\delta$,

$$\varepsilon\|w_\varepsilon - w_\delta\|^2 \le (\varepsilon - \delta)\langle w_\delta, w_\delta - w_\varepsilon\rangle.$$

It turns out that

$$\|w_\varepsilon - w_\delta\| \le \frac{|\varepsilon - \delta|}{\varepsilon}\|w_\delta\|.$$

Hence $w_\varepsilon$ is a continuous curve from $(0,\infty)$ into H. □
Theorem 2.4 Let $w_\varepsilon$ be the unique solution of (2.2). Then, as $\varepsilon \to 0$, $w_\varepsilon$ converges strongly to the minimum-norm solution $w^\dagger$ of (1.1), i.e., $\|w^\dagger\| = \min\{\|w\| : w \in \Gamma\}$.
Proof For any $\hat w \in \Gamma$, with $w_\varepsilon$ given by (2.2), we get

$$\frac{1}{2}\|Gw_\varepsilon\|^2 + \frac{\varepsilon}{2}\|w_\varepsilon\|^2 \le \frac{1}{2}\|G\hat w\|^2 + \frac{\varepsilon}{2}\|\hat w\|^2.$$

Since $\hat w$ is a solution for SEP, $G\hat w = 0$. It follows that $\|w_\varepsilon\| \le \|\hat w\|$ for all $\varepsilon > 0$. Thus $\{w_\varepsilon\}$ is a bounded net in H.

All we need to prove is that, for any sequence $\{\varepsilon_n\}$ such that $\varepsilon_n \to 0$, $\{w_{\varepsilon_n}\}$ contains a subsequence converging strongly to $w^\dagger$. For convenience, we set $w_n := w_{\varepsilon_n}$.

In fact, $\{w_n\}$ is bounded; by passing to a subsequence if necessary, we may assume that $\{w_n\}$ converges weakly to a point $\tilde w \in S$. Due to Proposition 2.1, we get

$$\langle G^*Gw_n + \varepsilon_n w_n, \hat w - w_n\rangle \ge 0, \quad \forall \hat w \in \Gamma.$$

It turns out that

$$\|Gw_n\|^2 = \langle Gw_n, Gw_n - G\hat w\rangle = \langle G^*Gw_n, w_n - \hat w\rangle \le \varepsilon_n\langle w_n, \hat w - w_n\rangle.$$

Since $\{w_n\}$ is bounded and $\varepsilon_n \to 0$, it follows that

$$\|Gw_n\|^2 \le \varepsilon_n\|w_n\|\,\|\hat w - w_n\| \to 0.$$

Noting that $w_n \rightharpoonup \tilde w$, we have $Gw_n \rightharpoonup G\tilde w$, and the weak lower semicontinuity of the norm gives

$$\|G\tilde w\| \le \liminf_{n\to\infty}\|Gw_n\| = 0.$$

It follows that $G\tilde w = 0$, i.e., $\tilde w \in \Gamma$.

Finally, we prove that $w_n \to \tilde w$ in norm and $\tilde w = w^\dagger$, and this finishes the proof.

Recall that $w_n$ converges weakly to $\tilde w$ and $\|w_n\| \le \|\hat w\|$ for all $\hat w \in \Gamma$; one can deduce that

$$\|\tilde w\| \le \liminf_{n\to\infty}\|w_n\| \le \limsup_{n\to\infty}\|w_n\| \le \|\hat w\|, \quad \forall \hat w \in \Gamma.$$

This shows that $\tilde w$ is a point in Γ with minimum norm. By the uniqueness of the minimum-norm element, we get $\tilde w = w^\dagger$. Moreover, taking $\hat w = \tilde w$ above gives $\limsup_n\|w_n\| \le \|\tilde w\| \le \liminf_n\|w_n\|$, so $\|w_n\| \to \|\tilde w\|$; together with $w_n \rightharpoonup \tilde w$, this yields $w_n \to \tilde w$ strongly. □
Finally, we introduce another method to get the minimum-norm solution of the problem SEP.
Lemma 2.5 Let $T := P_S(I - \gamma G^*G)$, where $0 < \gamma < 2/\rho(G^*G)$ with $\rho(G^*G)$ being the spectral radius of the self-adjoint operator $G^*G$ on H. Then we have the following:

(1) $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$ (i.e., T is nonexpansive), and T is averaged;

(2) $\mathrm{Fix}(T) = \Gamma$ and $\mathrm{Fix}(T) = \mathrm{Fix}(P_S) \cap \mathrm{Fix}(I - \gamma G^*G)$;

(3) $w \in \mathrm{Fix}(T)$ if and only if w is a solution of the variational inequality $\langle G^*Gw, v - w\rangle \ge 0$, $\forall v \in S$.
Proof (1) Since $0 < \gamma < 2/\rho(G^*G)$, it is easily proved that $I - \gamma G^*G$, and hence T, is nonexpansive; we only need to prove that T is averaged. Indeed, choose β such that $\gamma\rho(G^*G)/2 < \beta < 1$; then $I - \gamma G^*G = (1-\beta)I + \beta V$, where $V := I - (\gamma/\beta)G^*G$ is a nonexpansive mapping. Thus $I - \gamma G^*G$ is averaged, and since $P_S$ is averaged and the composition of averaged mappings is averaged, T is averaged.
(2) If $w \in \Gamma$, then $Gw = 0$ and $w \in S$, so it is obvious that $Tw = P_S(w - \gamma G^*Gw) = P_S w = w$, i.e., $w \in \mathrm{Fix}(T)$. Conversely, assume that $w \in \mathrm{Fix}(T)$ and fix $\hat w \in \Gamma$. We have $w = P_S(I - \gamma G^*G)w$, hence $w \in S$ and, by the characterization of the projection, $\langle G^*Gw, \hat w - w\rangle \ge 0$; then $-\|Gw\|^2 = \langle Gw, G\hat w - Gw\rangle = \langle G^*Gw, \hat w - w\rangle \ge 0$, and we get $Gw = 0$, i.e., $w \in \Gamma$. We have $\mathrm{Fix}(T) = \Gamma$.

Now we prove $\mathrm{Fix}(T) = \mathrm{Fix}(P_S) \cap \mathrm{Fix}(I - \gamma G^*G)$. The inclusion $\mathrm{Fix}(P_S) \cap \mathrm{Fix}(I - \gamma G^*G) \subseteq \mathrm{Fix}(T)$ is obvious. On the other hand, since $\mathrm{Fix}(P_S) \cap \mathrm{Fix}(I - \gamma G^*G) = S \cap \{w : G^*Gw = 0\} = \Gamma \neq \emptyset$, and both $P_S$ and $I - \gamma G^*G$ are averaged, from Lemma 1.5 we have $\mathrm{Fix}(T) = \mathrm{Fix}(P_S(I - \gamma G^*G)) = \mathrm{Fix}(P_S) \cap \mathrm{Fix}(I - \gamma G^*G)$.
(3) $w \in \mathrm{Fix}(T)$ means $w = P_S(w - \gamma G^*Gw)$. By the characterization of the projection, this holds if and only if

$$\langle w - \gamma G^*Gw - w, v - w\rangle \le 0, \quad \forall v \in S,$$

i.e., if and only if $\langle G^*Gw, v - w\rangle \ge 0$, $\forall v \in S$. □
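The following snippet numerically sanity-checks part (1) of Lemma 2.5 on a random finite-dimensional instance (the matrices and the box S are illustrative assumptions): with $0 < \gamma < 2/\rho(G^*G)$, the sampled Lipschitz ratios of T stay at most 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, n3 = 4, 3, 3
A, B = rng.standard_normal((n3, n1)), rng.standard_normal((n3, n2))
G = np.hstack([A, -B])                     # matrix form G = [A  -B]
gamma = 1.0 / np.linalg.norm(G.T @ G, 2)   # 0 < gamma < 2/rho(G*G)

def T(w):                                  # T = P_S(I - gamma G*G), S a box
    return np.clip(w - gamma * (G.T @ (G @ w)), -1.0, 1.0)

ratios = []
for _ in range(1000):
    u, v = rng.standard_normal(n1 + n2), rng.standard_normal(n1 + n2)
    ratios.append(np.linalg.norm(T(u) - T(v)) / np.linalg.norm(u - v))
print("max ||Tu - Tv|| / ||u - v|| over samples:", max(ratios))  # expected <= 1
```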
Remark 2.6 Choose a constant γ satisfying $0 < \gamma < 2/\rho(G^*G)$. For $t \in (0,1)$, we define a mapping

$$W_t w := P_S\big[(1-t)(I - \gamma G^*G)w\big], \quad w \in H.$$

It is clear that $W_t$ is a contraction with contractive constant $1-t$. Hence, $W_t$ has a unique fixed point $w_t$; we have

$$w_t = P_S\big[(1-t)(I - \gamma G^*G)w_t\big]. \tag{2.4}$$
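Since $W_t$ is a $(1-t)$-contraction, its fixed point $w_t$ can be computed by simple Banach (Picard) iteration. The sketch below does this on the illustrative instance used earlier ($B = A$ square and injective, $S = [0.5, 1.5]^{2n}$, so the minimum-norm solution is $0.5\cdot(1,\dots,1)$); it previews Theorem 2.7 numerically and is an assumption-laden toy, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
G = np.hstack([A, -A])                     # B = A, solution set {(x, x)}
gamma = 1.0 / np.linalg.norm(G.T @ G, 2)

def W(w, t):                               # W_t w = P_S[(1-t)(I - gamma G*G) w]
    return np.clip((1.0 - t) * (w - gamma * (G.T @ (G @ w))), 0.5, 1.5)

for t in (0.5, 0.1, 0.01, 0.001):
    w = np.ones(2 * n)
    for _ in range(int(200 / t)):          # more iterations as contraction weakens
        w = W(w, t)
    print(f"t={t:6.3f}  ||w_t - 0.5*ones|| = {np.linalg.norm(w - 0.5):.3e}")
```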
Theorem 2.7 Let $w_t$ be given by (2.4). Then $w_t$ converges strongly to the minimum-norm solution $w^\dagger$ of the problem (1.1) as $t \to 0^+$.
Proof Choose $\hat w \in \Gamma$. Noting that $\hat w = P_S(I - \gamma G^*G)\hat w$ and $P_S(I - \gamma G^*G)$ is nonexpansive, it turns out that

$$\|w_t - \hat w\| = \big\|P_S[(1-t)(I - \gamma G^*G)w_t] - P_S(I - \gamma G^*G)\hat w\big\| \le (1-t)\|w_t - \hat w\| + t\|\hat w\|.$$

That is,

$$\|w_t - \hat w\| \le \|\hat w\|.$$

Hence $\{w_t\}$ is bounded.

Taking into account (2.4), we have

$$\big\|w_t - P_S(I - \gamma G^*G)w_t\big\| = \big\|P_S[(1-t)(I - \gamma G^*G)w_t] - P_S(I - \gamma G^*G)w_t\big\| \le t\big\|(I - \gamma G^*G)w_t\big\| \to 0 \quad (t \to 0^+).$$

We assert that $\{w_t\}$ is relatively norm compact as $t \to 0^+$. In fact, assume that $\{t_n\} \subset (0,1)$ and $t_n \to 0$ as $n \to \infty$. For convenience, we put $w_n := w_{t_n}$; we get

$$\|w_n - P_S(I - \gamma G^*G)w_n\| \le t_n\|(I - \gamma G^*G)w_n\| \to 0.$$

Since $P_S$ is firmly nonexpansive, writing $U := I - \gamma G^*G$ (so that $U\hat w = \hat w$ for $\hat w \in \Gamma$), one concludes that

$$\|w_t - \hat w\|^2 = \big\|P_S[(1-t)Uw_t] - P_S[U\hat w]\big\|^2 \le \big\langle (1-t)Uw_t - U\hat w, w_t - \hat w\big\rangle = (1-t)\langle Uw_t - U\hat w, w_t - \hat w\rangle + t\langle \hat w, \hat w - w_t\rangle.$$

That is,

$$\|w_t - \hat w\|^2 \le (1-t)\|w_t - \hat w\|^2 + t\langle \hat w, \hat w - w_t\rangle.$$

Thus,

$$\|w_t - \hat w\|^2 \le \langle \hat w, \hat w - w_t\rangle, \quad \forall \hat w \in \Gamma. \tag{2.5}$$

Since $\{w_n\}$ is bounded, there exists a subsequence of $\{w_n\}$ which converges weakly to a point $\tilde w$. Without loss of generality, we may assume that $\{w_n\}$ itself converges weakly to $\tilde w$. Noting that

$$\|w_n - P_S(I - \gamma G^*G)w_n\| \to 0$$

and applying Lemma 1.1, we obtain $\tilde w \in \mathrm{Fix}(P_S(I - \gamma G^*G)) = \Gamma$.

Since, by (2.5),

$$\|w_n - \tilde w\|^2 \le \langle \tilde w, \tilde w - w_n\rangle \to 0,$$

it concludes that $w_n \to \tilde w$ strongly. Hence, if $\{w_n\}$ converges weakly to $\tilde w$, then $\{w_n\}$ converges strongly to $\tilde w$. That is to say, $\{w_t\}$ is relatively norm compact as $t \to 0^+$.

Moreover, again using (2.5),

$$\|w_n - \hat w\|^2 \le \langle \hat w, \hat w - w_n\rangle, \quad \forall \hat w \in \Gamma;$$

letting $n \to \infty$, we have

$$\|\tilde w - \hat w\|^2 \le \langle \hat w, \hat w - \tilde w\rangle, \quad \forall \hat w \in \Gamma.$$

This implies that

$$2\|\tilde w - \hat w\|^2 \le 2\langle \hat w, \hat w - \tilde w\rangle = \|\hat w\|^2 - \|\tilde w\|^2 + \|\hat w - \tilde w\|^2.$$

This is equivalent to

$$\|\tilde w - \hat w\|^2 \le \|\hat w\|^2 - \|\tilde w\|^2.$$

It turns out that $\|\tilde w\| \le \|\hat w\|$ for all $\hat w \in \Gamma$, i.e., $\tilde w = w^\dagger$. Consequently, each cluster point of $\{w_t\}$ as $t \to 0^+$ equals $w^\dagger$; thus $w_t \to w^\dagger$, the minimum-norm solution of the problem SEP. □
3 Iterative algorithm for the minimum-norm solution of the problem SEP
In this section, we introduce the following algorithm and prove its strong convergence; more importantly, its limit is the minimum-norm solution of the problem SEP.
Algorithm 3.1 For an arbitrary point $w_0 \in S$, the sequence $\{w_n\}$ is generated by the iterative algorithm

$$w_{n+1} = P_S\big[(1-\alpha_n)(I - \gamma G^*G)w_n\big], \tag{3.1}$$

where $0 < \gamma < 2/\rho(G^*G)$ and $\{\alpha_n\}$ is a sequence in $(0,1)$ satisfying the following conditions (a numerical sketch of the iteration follows the list):

(i) $\lim_{n\to\infty}\alpha_n = 0$;

(ii) $\sum_{n=1}^\infty \alpha_n = \infty$;

(iii) $\sum_{n=1}^\infty |\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}\alpha_{n+1}/\alpha_n = 1$.
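The conditions (i)-(iii) are satisfied, for instance, by $\alpha_n = 1/(n+1)$. The following toy run of iteration (3.1) uses that choice on the same illustrative instance as before ($B = A$ injective, $S = [0.5, 1.5]^{2n}$, minimum-norm solution $0.5\cdot(1,\dots,1)$); it is a hedged sketch under assumed data, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
G = np.hstack([A, -A])
gamma = 1.0 / np.linalg.norm(G.T @ G, 2)

w = np.ones(2 * n)                         # arbitrary starting point w_0 in S
for k in range(200000):
    a = 1.0 / (k + 1)                      # alpha_n satisfying (i)-(iii)
    # w_{n+1} = P_S[(1 - alpha_n)(I - gamma G*G) w_n]
    w = np.clip((1.0 - a) * (w - gamma * (G.T @ (G @ w))), 0.5, 1.5)
print("||w - 0.5*ones|| =", np.linalg.norm(w - 0.5))
```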
Now, we prove the strong convergence of the iterative algorithm.
Theorem 3.2 The sequence $\{w_n\}$ generated by algorithm (3.1) converges strongly to the minimum-norm solution $w^\dagger$ of the problem (1.1).
Proof Let $U := I - \gamma G^*G$ and let T and $R_n$ be defined by

$$Tw := P_S(Uw), \qquad R_n w := P_S\big[(1-\alpha_n)Uw\big],$$

where $0 < \gamma < 2/\rho(G^*G)$. By Lemma 2.5, T is nonexpansive, and it is easy to see that $R_n$ is a contraction with contractive constant $1-\alpha_n$. Algorithm (3.1) can be written as $w_{n+1} = R_n w_n$.

For any $\hat w \in \Gamma$, we have

$$\|w_{n+1} - \hat w\| = \|R_n w_n - T\hat w\| \le \big\|(1-\alpha_n)Uw_n - U\hat w\big\| \le (1-\alpha_n)\|w_n - \hat w\| + \alpha_n\|\hat w\|.$$

Hence,

$$\|w_{n+1} - \hat w\| \le \max\{\|w_n - \hat w\|, \|\hat w\|\}.$$

It follows by induction that $\|w_n - \hat w\| \le \max\{\|w_0 - \hat w\|, \|\hat w\|\}$. So $\{w_n\}$ is bounded.
Next we prove that $\|w_{n+1} - w_n\| \to 0$. Indeed,

$$\|w_{n+1} - w_n\| = \|R_n w_n - R_{n-1}w_{n-1}\| \le \|R_n w_n - R_n w_{n-1}\| + \|R_n w_{n-1} - R_{n-1}w_{n-1}\|.$$

Notice that

$$\|R_n w_{n-1} - R_{n-1}w_{n-1}\| \le |\alpha_n - \alpha_{n-1}|\,\|Uw_{n-1}\| \le M|\alpha_n - \alpha_{n-1}|,$$

where $M := \sup_n \|Uw_n\| < \infty$. Hence

$$\|w_{n+1} - w_n\| \le (1-\alpha_n)\|w_n - w_{n-1}\| + M|\alpha_n - \alpha_{n-1}|.$$

By virtue of the assumptions (i)-(iii) and Lemma 1.2, we have $\|w_{n+1} - w_n\| \to 0$. Therefore,

$$\|w_n - Tw_n\| \le \|w_n - w_{n+1}\| + \|R_n w_n - Tw_n\| \le \|w_n - w_{n+1}\| + \alpha_n\|Uw_n\| \to 0.$$
The demiclosedness principle (Lemma 1.1) then ensures that each weak limit point of $\{w_n\}$ is a fixed point of the nonexpansive mapping $T = P_S(I - \gamma G^*G)$, that is, a point of the solution set Γ of SEP (1.1).
Finally, we will prove that $w_n \to w^\dagger$, the minimum-norm solution of SEP.

Choose β such that $\gamma\rho(G^*G)/2 < \beta < 1$; then $U = (1-\beta)I + \beta V$, where $V := I - (\gamma/\beta)G^*G$ is a nonexpansive mapping, so U is nonexpansive and $Uw^\dagger = w^\dagger$. Taking $\hat w = w^\dagger$ and using the firm nonexpansiveness of $P_S$, we deduce that

$$\|w_{n+1} - w^\dagger\|^2 \le \big\langle (1-\alpha_n)Uw_n - Uw^\dagger, w_{n+1} - w^\dagger\big\rangle = (1-\alpha_n)\big\langle U(w_n - w^\dagger), w_{n+1} - w^\dagger\big\rangle + \alpha_n\big\langle w^\dagger, w^\dagger - w_{n+1}\big\rangle.$$

Then

$$\|w_{n+1} - w^\dagger\|^2 \le \frac{1-\alpha_n}{2}\big(\|w_n - w^\dagger\|^2 + \|w_{n+1} - w^\dagger\|^2\big) + \alpha_n\langle w^\dagger, w^\dagger - w_{n+1}\rangle.$$

Note that this rearranges to

$$\|w_{n+1} - w^\dagger\|^2 \le (1-\alpha_n)\|w_n - w^\dagger\|^2 + \alpha_n t_n, \qquad t_n := \frac{2}{1+\alpha_n}\langle w^\dagger, w^\dagger - w_{n+1}\rangle;$$

it follows that it suffices to show $\limsup_{n\to\infty} t_n \le 0$.
Take a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ such that

$$\lim_{k\to\infty}\langle w^\dagger, w^\dagger - w_{n_k}\rangle = \limsup_{n\to\infty}\langle w^\dagger, w^\dagger - w_n\rangle.$$

By virtue of the boundedness of $\{w_n\}$, we may further assume with no loss of generality that $\{w_{n_k}\}$ converges weakly to a point $\tilde w$. Since $\|w_n - Tw_n\| \to 0$, using the demiclosedness principle, $\tilde w \in \Gamma$. Noticing that $w^\dagger$ is the projection of the origin onto Γ, we get

$$\limsup_{n\to\infty}\langle w^\dagger, w^\dagger - w_n\rangle = \langle w^\dagger, w^\dagger - \tilde w\rangle = \langle 0 - w^\dagger, \tilde w - w^\dagger\rangle \le 0.$$

Finally, we compute: $\limsup_n t_n \le 0$, and

$$\|w_{n+1} - w^\dagger\|^2 \le (1-\alpha_n)\|w_n - w^\dagger\|^2 + \alpha_n t_n.$$

Since $\alpha_n \to 0$ and $\sum_n \alpha_n = \infty$, we know that the hypotheses of Lemma 1.2 are satisfied; by Lemma 1.2, we conclude that $w_n \to w^\dagger$. This completes the proof. □
4 KM-CQ-like iterative algorithm for the problem SEP
In this section, we establish a KM-CQ-like algorithm which converges strongly to a solution of the problem SEP.
Algorithm 4.1 For an arbitrary initial point $w_0$, the sequence $\{w_n\}$ is generated by the iteration (a numerical sketch of the iteration follows the list of conditions):

$$w_{n+1} = (1-\beta_n)w_n + \beta_n P_S\big[(1-\alpha_n)(I - \gamma G^*G)w_n\big], \tag{4.1}$$

where $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $(0,1)$ such that

(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^\infty \alpha_n = \infty$;

(ii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(iii) $0 < \gamma < 2/\rho(G^*G)$.
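Here is a toy run of the KM-CQ-like iteration (4.1), with $\alpha_n = 1/(n+1)$ and $\beta_n \equiv 1/2$ (which satisfy (i)-(iii)), on the same illustrative instance used earlier; again, a hedged sketch under assumed data, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n))
G = np.hstack([A, -A])                     # B = A, solution set {(x, x)}
gamma = 1.0 / np.linalg.norm(G.T @ G, 2)

w = np.ones(2 * n)
for k in range(200000):
    a, b = 1.0 / (k + 1), 0.5              # alpha_n, beta_n
    y = np.clip((1.0 - a) * (w - gamma * (G.T @ (G @ w))), 0.5, 1.5)
    w = (1.0 - b) * w + b * y              # KM averaging step
print("residual ||Gw|| =", np.linalg.norm(G @ w))
print("||w - 0.5*ones|| =", np.linalg.norm(w - 0.5))
```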
Lemma 4.2 If $0 < \gamma < 2/\rho(G^*G)$, then for any $x \in H$ we have

$$\big\|(I - \gamma G^*G)x\big\|^2 \le \|x\|^2 - \frac{(1-\beta)\gamma^2}{\beta}\|G^*Gx\|^2,$$

where β and V are the same as in Lemma 2.5(1).
Proof By Lemma 2.5(1), we know that $I - \gamma G^*G = (1-\beta)I + \beta V$, where $\beta \in (\gamma\rho(G^*G)/2, 1)$ and V is nonexpansive. It is clear that $V0 = 0$, so $\|Vx\| \le \|x\|$, and $x - Vx = (\gamma/\beta)G^*Gx$. Hence

$$\big\|(I - \gamma G^*G)x\big\|^2 = (1-\beta)\|x\|^2 + \beta\|Vx\|^2 - \beta(1-\beta)\|x - Vx\|^2 \le \|x\|^2 - \frac{(1-\beta)\gamma^2}{\beta}\|G^*Gx\|^2.$$

□
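A numerical check of the inequality in Lemma 4.2 as reconstructed above, on a random instance with $\gamma = 1/\rho(G^*G)$ and $\beta = 3/4 \in (\gamma\rho(G^*G)/2, 1)$; the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n1, n2, n3 = 4, 3, 3
G = np.hstack([rng.standard_normal((n3, n1)), -rng.standard_normal((n3, n2))])
GG = G.T @ G
rho = np.linalg.norm(GG, 2)
gamma = 1.0 / rho                          # so gamma * rho = 1
beta = 0.75                                # in (gamma*rho/2, 1) = (0.5, 1)

worst = -np.inf
for _ in range(1000):
    x = rng.standard_normal(n1 + n2)
    lhs = np.linalg.norm(x - gamma * (GG @ x)) ** 2
    rhs = (np.linalg.norm(x) ** 2
           - ((1 - beta) * gamma ** 2 / beta) * np.linalg.norm(GG @ x) ** 2)
    worst = max(worst, lhs - rhs)
print("max(lhs - rhs) over samples:", worst)  # expected <= 0 (up to rounding)
```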
Theorem 4.3 The sequence $\{w_n\}$ generated by algorithm (4.1) converges strongly to a solution of the problem SEP.
Proof For any solution $\hat w$ of the problem SEP, according to Lemma 2.5 we have $\hat w = T\hat w$, where $T := P_S(I - \gamma G^*G)$, and

$$\|w_{n+1} - \hat w\| \le (1-\beta_n)\|w_n - \hat w\| + \beta_n\big\|P_S[(1-\alpha_n)(I - \gamma G^*G)w_n] - T\hat w\big\| \le (1 - \alpha_n\beta_n)\|w_n - \hat w\| + \alpha_n\beta_n\|\hat w\|.$$

One can deduce that

$$\|w_{n+1} - \hat w\| \le \max\{\|w_n - \hat w\|, \|\hat w\|\} \le \cdots \le \max\{\|w_0 - \hat w\|, \|\hat w\|\}.$$

Hence $\{w_n\}$ is bounded, and so is $\{Gw_n\}$. Moreover, since $\{w_n\}$ is bounded, we see that $\{(I - \gamma G^*G)w_n\}$, $\{G^*Gw_n\}$, and $\{P_S[(1-\alpha_n)(I - \gamma G^*G)w_n]\}$ are also bounded.
Let $y_n := P_S[(1-\alpha_n)(I - \gamma G^*G)w_n]$, so that $w_{n+1} = (1-\beta_n)w_n + \beta_n y_n$, and let $M_1 > 0$ be such that $\|(I - \gamma G^*G)w_n\| \le M_1$ for all n. Noting that

$$\|y_{n+1} - y_n\| \le \big\|(1-\alpha_{n+1})(I - \gamma G^*G)w_{n+1} - (1-\alpha_n)(I - \gamma G^*G)w_n\big\| \le \|w_{n+1} - w_n\| + (\alpha_{n+1} + \alpha_n)M_1,$$

one concludes that

$$\|y_{n+1} - y_n\| - \|w_{n+1} - w_n\| \le (\alpha_{n+1} + \alpha_n)M_1.$$

Since $\alpha_n \to 0$ and $0 < \liminf_n \beta_n \le \limsup_n \beta_n < 1$, we have

$$\limsup_{n\to\infty}\big(\|y_{n+1} - y_n\| - \|w_{n+1} - w_n\|\big) \le 0$$

and the hypotheses of Lemma 1.3 hold. Applying Lemma 1.3, we get $\lim_n \|y_n - w_n\| = 0$. Hence,

$$\|w_{n+1} - w_n\| = \beta_n\|y_n - w_n\| \to 0.$$
Let T and $R_n$ be defined by

$$Tw := P_S(I - \gamma G^*G)w, \qquad R_n w := P_S\big[(1-\alpha_n)(I - \gamma G^*G)w\big].$$

Noting that $y_n = R_n w_n$, we have

$$\|w_n - Tw_n\| \le \|w_n - y_n\| + \|R_n w_n - Tw_n\| \le \|w_n - y_n\| + \alpha_n\|(I - \gamma G^*G)w_n\| \le \|w_n - y_n\| + \alpha_n M_1.$$

By assumption, $\alpha_n \to 0$; hence $\|w_n - Tw_n\| \to 0$. Furthermore, $\{w_n\}$ is bounded, so there exists a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ which converges weakly to a point $\tilde w$. Since $\|w_n - Tw_n\| \to 0$, using the demiclosedness principle we know that $\tilde w \in \mathrm{Fix}(T) = \Gamma$.
Finally, we will prove that $\{w_n\}$ converges strongly to $w^\dagger$, the minimum-norm solution of SEP; in particular, $\{w_n\}$ then converges strongly to a solution of the problem SEP. In fact, arguing as in the proof of Theorem 3.2 (using the firm nonexpansiveness of $P_S$),

$$\|w_{n+1} - w^\dagger\|^2 \le (1-\beta_n)\|w_n - w^\dagger\|^2 + \beta_n\|y_n - w^\dagger\|^2 \le (1 - \alpha_n\beta_n)\|w_n - w^\dagger\|^2 + \alpha_n\beta_n t_n, \qquad t_n := \frac{2}{1+\alpha_n}\langle w^\dagger, w^\dagger - y_n\rangle.$$

Since $\sum_n \alpha_n\beta_n = \infty$ by (i)-(ii), using Lemma 1.2 we only need to prove that

$$\limsup_{n\to\infty}\langle w^\dagger, w^\dagger - y_n\rangle \le 0.$$

Applying Lemma 2.5, T is averaged; that is, $I - \gamma G^*G = (1-\beta)I + \beta V$, where $\beta \in (\gamma\rho(G^*G)/2, 1)$ and V is nonexpansive. Hence, for $U := I - \gamma G^*G$ (so that $Uw^\dagger = w^\dagger$ and $G^*Gw^\dagger = 0$), we have

$$\|y_n - w^\dagger\|^2 \le \big\|(1-\alpha_n)U(w_n - w^\dagger) - \alpha_n w^\dagger\big\|^2 \le (1-\alpha_n)\big\|U(w_n - w^\dagger)\big\|^2 + \alpha_n\|w^\dagger\|^2.$$

By Lemma 4.2, we have

$$\big\|U(w_n - w^\dagger)\big\|^2 \le \|w_n - w^\dagger\|^2 - \frac{(1-\beta)\gamma^2}{\beta}\|G^*Gw_n\|^2.$$

Let $M_2 > 0$ be such that $\|w_n - w^\dagger\| + \|w_{n+1} - w^\dagger\| \le M_2$ for all n; then it concludes that

$$\beta_n(1-\alpha_n)\frac{(1-\beta)\gamma^2}{\beta}\|G^*Gw_n\|^2 \le \|w_n - w^\dagger\|^2 - \|w_{n+1} - w^\dagger\|^2 + \alpha_n\|w^\dagger\|^2 \le M_2\|w_{n+1} - w_n\| + \alpha_n\|w^\dagger\|^2.$$

Hence, since $\alpha_n \to 0$, $\|w_{n+1} - w_n\| \to 0$, and $\liminf_n \beta_n > 0$, we get

$$\|G^*Gw_n\| \to 0, \qquad \|Gw_n\|^2 = \langle G^*Gw_n, w_n\rangle \to 0.$$

Therefore, every weak cluster point of $\{y_n\}$ coincides with a weak cluster point of $\{w_n\}$ (because $\|y_n - w_n\| \to 0$) and belongs to Γ. It follows that, choosing a subsequence $\{y_{n_k}\}$ along which the limsup above is attained and which converges weakly to some $\bar w \in \Gamma$,

$$\limsup_{n\to\infty}\langle w^\dagger, w^\dagger - y_n\rangle = \langle w^\dagger, w^\dagger - \bar w\rangle \le 0,$$

since $w^\dagger$ is the projection of the origin onto Γ. By Lemma 1.2, we conclude that $w_n \to w^\dagger$. □
The following iterative algorithm also converges strongly to a solution of the problem SEP; since the proof is similar to that of Theorem 4.3, we omit it.
Algorithm 4.4 For an arbitrary initial point $w_0$, the sequence $\{w_n\}$ is generated by the iteration

$$w_{n+1} = P_S\big[(1-\beta_n)w_n + \beta_n(1-\alpha_n)(I - \gamma G^*G)w_n\big], \tag{4.2}$$

where $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $(0,1)$ such that

(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^\infty \alpha_n = \infty$;

(ii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(iii) $0 < \gamma < 2/\rho(G^*G)$.
References
1. Eckstein J, Svaiter BF: A family of projective splitting methods for the sum of two maximal monotone operators. Math. Program. 2008, 111: 173-199.
2. Eckstein J, Bertsekas DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55: 293-318. 10.1007/BF01581204
3. Lions PL: Une méthode itérative de résolution d'une inéquation variationnelle. Isr. J. Math. 1978, 31: 204-208. 10.1007/BF02760552
4. Passty GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72: 383-390. 10.1016/0022-247X(79)90234-8
5. Tseng P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38: 431-446. 10.1137/S0363012998338806
6. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
7. Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.
8. Aoyama K, Kimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal., Theory Methods Appl. 2007, 67(8): 2350-2360. 10.1016/j.na.2006.08.032
9. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 1: 103-123.
10. Engl HW, Hanke M, Neubauer A: Regularization of Inverse Problems. Mathematics and Its Applications 375. Kluwer Academic, Dordrecht; 1996.
11. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20(1): 103-120. 10.1088/0266-5611/20/1/006
Acknowledgements
This research was supported by NSFC grants No. 11071279, No. 11226125, and No. 11301379.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Shi, L.Y., Chen, R. & Wu, Y. Iterative algorithms for finding the zeroes of sums of operators. J Inequal Appl 2014, 349 (2014). https://doi.org/10.1186/1029-242X-2014-349