
Iterative methods for finding the minimum-norm solution of the standard monotone variational inequality problems with applications in Hilbert spaces

Abstract

In this paper, we introduce two kinds of iterative methods for finding the minimum-norm solution to the standard monotone variational inequality problems in a real Hilbert space. We then prove that the proposed iterative methods converge strongly to the minimum-norm solution of the variational inequality. Finally, we apply our results to the constrained minimization problem and the split feasibility problem as well as the minimum-norm fixed point problem for pseudocontractive mappings.

1 Introduction

Let C be a nonempty, closed, and convex subset of a real Hilbert space H with inner product \(\langle\cdot,\cdot\rangle\) and induced norm \(\|\cdot\|\). A mapping \(F:C\to H\) is said to be monotone if

$$ \langle{Fx - Fy,x - y} \rangle \geq0 $$
(1.1)

for all \(x,y\in C\).

The variational inequality problem (VIP) with respect to F and C is to find a point \(x^{*}\in C\) such that

$$ \bigl\langle {F{x^{*}},x - {x^{*}}} \bigr\rangle \geq0 \quad \mbox{for all } x\in C. $$
(1.2)

Variational inequalities were initially investigated by Kinderlehrer and Stampacchia [1] and have since been studied widely by many authors, owing to the fact that they arise in disciplines as diverse as partial differential equations, optimization, optimal control, mathematical programming, mechanics, and finance (see [2–4]).

It is well known that if F is a k-Lipschitz continuous and η-strongly monotone mapping, i.e., the following inequalities hold:

$$\Vert {Fx - Fy} \Vert \le k\Vert {x - y} \Vert \quad \mbox{and}\quad \langle{Fx - Fy,x - y} \rangle \ge\eta{ \Vert {x - y} \Vert ^{2}} $$

for all \(x,y\in C\), where k and η are fixed positive numbers, then (1.2) has a unique solution.

A mapping F is said to be hemicontinuous if it is continuous along line segments with respect to the weak topology, i.e., for all \(x_{0}, x\in H\), \(F({x_{0}} + {t_{n}}x)\rightharpoonup F{x_{0}}\) as \(t_{n}\rightarrow0\).

Theorem 1.1

Let C be a nonempty, bounded, closed, and convex subset of a real Hilbert space H. Let F be a monotone and hemicontinuous mapping of C into H. Then there exists \(x_{0}\in C\) such that

$$\langle x-x_{0},Fx_{0}\rangle\geq0 \quad \textit{for all } x \in C. $$

It is also well known that (1.2) is equivalent to the fixed point equation

$$ {x^{*}} = {P_{C}}\bigl[{x^{*}} - \mu F{x^{*}}\bigr], $$
(1.3)

where \(P_{C}\) stands for the metric projection from H onto C and μ is an arbitrary positive number. Consequently, the well-known projected gradient method (PGM) can be used to solve (1.2); it generates an iterative sequence by the recursion

$$ x_{1}\in C \quad \mbox{and}\quad {x_{n + 1}} = {P_{C}}\bigl[(I - \mu F){x_{n}}\bigr]. $$
(1.4)

When F is a k-Lipschitz continuous and η-strongly monotone mapping and \(\mu \in(0,\frac{{2\eta}}{{k^{2}}})\), the sequence \(\{ x_{n}\}\) generated by (1.4) converges strongly to the unique solution of (1.2).
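
As a concrete illustration of PGM, the following is a minimal numerical sketch of (1.4) on a toy instance; the operator \(F(x)=Mx+q\), the box constraint, and all names below are illustrative choices, not taken from the paper. With M positive definite, F is k-Lipschitz (\(k=\|M\|\)) and η-strongly monotone (η the smallest eigenvalue of M), so the convergence statement above applies.

```python
import numpy as np

# Toy instance (illustrative): F(x) = M x + q with M positive definite,
# so F is k-Lipschitz (k = ||M||) and eta-strongly monotone.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 5))
M = G @ G.T + np.eye(5)                  # symmetric positive definite
q = rng.standard_normal(5)
F = lambda x: M @ x + q

proj_C = lambda x: np.clip(x, 0.0, 1.0)  # metric projection onto C = [0,1]^5

eta = np.linalg.eigvalsh(M).min()
k = np.linalg.norm(M, 2)
mu = eta / k**2                          # any mu in (0, 2*eta/k^2) works

x = proj_C(rng.standard_normal(5))       # x_1 in C
for _ in range(5000):
    x = proj_C(x - mu * F(x))            # recursion (1.4)

print(x)  # approximates the unique solution of VIP (1.2) on this instance
```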

However, if F fails to be Lipschitz continuous or strongly monotone, this result is false in general. Throughout this paper we only assume that F is hemicontinuous and monotone. In this setting VIP (1.2) may be ill-posed, so regularization is needed, and a solution is usually sought through iterative methods.

In 1976, Korpelevich [5] introduced the following so-called extragradient method:

$$ \left \{ { \begin{array}{l} {{x_{1}} \in C}, \\ {{y_{n}} = {P_{C}}[{x_{n}} - {\lambda}F{x_{n}}]}, \\ {{x_{n + 1}} = {P_{C}}[{x_{n}} - {\lambda}F{y_{n}}]} \end{array} } \right . $$
(EM)

for all \(n\geq0\), where \(\lambda\in(0,\frac{1}{k})\), C is a nonempty, closed, and convex subset of \(R^{n}\), and F is a monotone k-Lipschitz mapping of C into \(R^{n}\). Korpelevich proved that if \(\operatorname{VI}(C, F)\) is nonempty, then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by (EM) converge weakly to the same point \(p\in \operatorname{VI}(C, F)\), which is a solution of (1.2).
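
A minimal sketch of (EM) on an illustrative instance (again, the concrete choices are not from the paper): \(F(x)=Sx\) with S skew-symmetric is monotone and 1-Lipschitz but not strongly monotone, which is precisely the regime where the extragradient correction step matters; plain PGM with a fixed step would circle around the solution here.

```python
import numpy as np

# Monotone, 1-Lipschitz, but NOT strongly monotone: a 90-degree rotation.
S = np.array([[0.0, -1.0], [1.0, 0.0]])
F = lambda x: S @ x

proj_C = lambda x: np.clip(x, -1.0, 1.0)  # C = [-1,1]^2
lam = 0.5                                  # lambda in (0, 1/k) with k = 1

x = np.array([1.0, 1.0])
for _ in range(500):
    y = proj_C(x - lam * F(x))             # prediction step of (EM)
    x = proj_C(x - lam * F(y))             # correction step of (EM)

print(x)  # tends to 0, the solution of VI(C, F) for this instance
```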

Recently Chen et al. [6] introduced the following iterative method:

$$x_{n+1}=P_{C}\bigl((1-\gamma)x_{n}+\gamma \bigl((1-t_{n})fx_{n}+t_{n}Tx_{n}\bigr) \bigr), $$

where \(\gamma\in(0,\frac{{2\eta}}{{{k^{2}}}})\) is fixed, T is a nonexpansive mapping, and \(I-f\) is a Lipschitz continuous and \((1-\rho)\)-strongly monotone mapping. The iterative sequence \(\{x_{n}\}\) then converges strongly to the unique solution \(x^{*}\) of the following variational inequality (VI):

$$x^{*}\in S, \quad \bigl\langle (I-f)x^{*},x-x^{*}\bigr\rangle \geq0,\quad x\in S. $$

Very recently Yao et al. [7] constructed the minimum-norm fixed points of pseudocontractions in Hilbert spaces by the following iterative algorithm:

$${x_{n + 1}} = {P_{C}}\bigl[(1 - {\alpha_{n}}- { \beta_{n}}){x_{n}}+\beta_{n}T{x_{n}}\bigr],\quad n \ge1, $$

where T is an L-Lipschitzian and pseudocontractive mapping with \(\operatorname{Fix}(T)\neq\emptyset\).

Questions

  1. Can one modify the extragradient method for variational inequalities with a general monotone operator so that the modified algorithm converges strongly?

  2. If F is a hemicontinuous and strongly monotone mapping, is the solution of VIP (1.2) unique?

The purpose of this paper is to answer the questions above. We introduce implicit and explicit iterative methods for constructing the solution of the monotone variational inequality problem and prove that our algorithms converge strongly to the minimum-norm solution of variational inequality problem (1.2). Finally, we apply our results to the constrained minimization problem and the split feasibility problem, as well as the minimum-norm fixed point problem for pseudocontractive mappings.

2 Preliminaries

For our main results, we shall make use of the following lemmas.

Lemma 2.1

(see [8])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(A:C\rightarrow H\) be a hemicontinuous monotone operator. Then, for a fixed element \(x^{*}\in C\), the following variational inequalities are equivalent:

  (i) \(\langle Ax,x - {x^{*}}\rangle \ge0\), \(\forall x \in C\);

  (ii) \(\langle A{x^{*}},x - {x^{*}}\rangle \ge0\), \(\forall x \in C\).

Lemma 2.2

(see [9])

Let X be a reflexive Banach space and let K be an unbounded, closed, and convex subset of X with \(\theta\in K\). Let \(A:K\rightarrow X^{*}\) be a hemicontinuous, monotone, and coercive operator, i.e., \(\forall u\in K\),

$$\frac{{\langle Au,u\rangle}}{\|u\|} \to + \infty\quad \textit{as } \|u\|\to +\infty. $$

Then \(\forall w^{*}\in X^{*}\), there exists a \(u_{0}\in K\) such that

$$ \bigl\langle A{u_{0}} - {w^{*}},v - {u_{0}}\bigr\rangle \geq0, \quad \forall v\in K. $$
(2.1)

In Lemma 2.2, the condition \(\theta\in K\) is needed. However, if \(A:K \to{X^{*}}\) is a hemicontinuous η-strongly monotone operator, then the restriction \(\theta\in K\) can be dropped. To prove this, we give the following lemma.

Lemma 2.3

Let K be an unbounded, closed, and convex subset of a reflexive Banach space X. Let \(A:K\to X^{*}\) be a hemicontinuous η-strongly monotone operator. Then, for every \(w^{*}\in X^{*}\), there exists \(u_{0}^{*}\in K\) such that VI (2.1) holds.

Proof

Let \(\tilde{K} = K - {x_{0}}\), where \(x_{0}\) is a fixed element of K, and define \(\tilde{A}x = A(x + {x_{0}})\) for \(x\in\tilde{K}\). Then \(\tilde{K}\) is a closed convex set with \(\theta\in\tilde{K}\), and \(\tilde{A}\) is hemicontinuous and η-strongly monotone: for any \(x, y\in\tilde{K}\) we have

$$\begin{aligned} \langle\tilde{A}x - \tilde{A}y,x - y\rangle =& \bigl\langle A(x + {x_{0}}) - A(y + {x_{0}}),x - y\bigr\rangle \\ \geq&\eta\| x - y\|^{2}. \end{aligned}$$

Since \(\tilde{A}\theta=A{x_{0}}\), taking \(y=\theta\) above gives \(\langle\tilde{A}x - A{x_{0}},x\rangle \ge\eta\| x\|^{2}\), so that

$$\langle\tilde{A}x,x\rangle \geq\eta\|x\|^{2} -\| A{x_{0}}\| \|x\|. $$

Then we get

$$\frac{{\langle\tilde{A}x,x\rangle}}{\|{x}\|} \geq\eta\|x\| - \| A{x_{0}} \|\to + \infty \quad \mbox{as }\|x\|\to\infty. $$

By Lemma 2.2, \(\forall{w^{*}} \in{X^{*}}\), there exists a \({u_{0}} \in \tilde{K}\) such that

$$\bigl\langle \tilde{A} {u_{0}} - {w^{*}},v - {u_{0}}\bigr\rangle \ge0,\quad \forall v\in \tilde{K}. $$

Putting \({u_{0}}^{*} = {u_{0}} + {x_{0}}\), we have

$$\bigl\langle A{u_{0}}^{*} - {w^{*}},v - {u_{0}}^{*}\bigr\rangle \ge0,\quad \forall v\in K. $$

Therefore, \({u_{0}}^{*}\) is a solution of VIP (2.1). □

Lemma 2.4

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(A:C\to H\) be a hemicontinuous η-strongly monotone operator. Then the variational inequality

$$ \bigl\langle A{x^{*}},x - {x^{*}}\bigr\rangle \ge0,\quad \forall x \in C, $$
(2.2)

has a unique solution.

Proof

Let \(\operatorname{VI}(C,A)\) denote the solution set of VI (2.2). By Theorem 1.1 (when C is bounded) or Lemma 2.3 (when C is unbounded; note that H is reflexive), \(\operatorname{VI}(C,A)\) is nonempty. Next, we show that \(\operatorname{VI}(C,A)\) has exactly one element. Assume that \(x^{*}, y^{*}\in \operatorname{VI}(C,A)\). Then we have

$$ \bigl\langle A{x^{*}},x - {x^{*}}\bigr\rangle \ge0,\quad \forall x \in C $$
(2.3)

and

$$ \bigl\langle A{y^{*}},x - {y^{*}}\bigr\rangle \ge0,\quad \forall x \in C. $$
(2.4)

Combining (2.3) and (2.4), we get

$$ \bigl\langle A{x^{*}}-Ay^{*},y^{*} - {x^{*}}\bigr\rangle \ge0. $$
(2.5)

Since A is η-strongly monotone, from (2.5) it follows that

$$\eta\bigl\Vert x^{*}-y^{*}\bigr\Vert ^{2}\leq\bigl\langle A{x^{*}}-Ay^{*},{x^{*}} -y^{*}\bigr\rangle \leq0. $$

Therefore, \(x^{*}=y^{*}\). This completes the proof. □

Lemma 2.5

Let C be a nonempty, closed, and convex subset of a real Hilbert space H, let \(A:C\to H\) be a hemicontinuous monotone operator, and let \(\{\gamma_{n}\}\) be a sequence of positive real numbers. Then \({\gamma_{n}}I + A\) is \({\gamma_{n}}\)-strongly monotone for every n.

Proof

For any \(x, y\in C\), we have

$$\begin{aligned}& \bigl\langle ({\gamma_{n}}I + A)x - ({\gamma_{n}}I + A)y,x - y\bigr\rangle \\& \quad ={\gamma_{n}}\|x - y\|^{2} + \langle Ax - Ay,x - y\rangle \\& \quad \ge{\gamma_{n}}\|x - y\|^{2}. \end{aligned}$$

So, \({\gamma_{n}}I + A\) is \({\gamma_{n}}\)-strongly monotone for every n. □

Lemma 2.6

(see [10])

Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers satisfying

$${a_{n + 1}} \le(1 - {\gamma_{n}}){a_{n}} + { \gamma_{n}} {\sigma_{n}},\quad n\geq0, $$

where \(\{ {\gamma_{n}}\} \subset(0,1)\) and \(\{ {\sigma_{n}}\} \) satisfy

  (i) \(\sum_{n = 0}^{\infty}{\gamma_{n}} = \infty\);

  (ii) either \(\limsup_{n \to\infty}{\sigma_{n}} \le0\) or \(\sum_{n = 0}^{\infty}|{\gamma_{n}}{\sigma_{n}}| < \infty\).

Then \(\lim_{n\to\infty}a_{n}=0\).
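
As a quick numerical sanity check of Lemma 2.6 (a sketch, not part of the paper; the sequences are illustrative): with \(\gamma_{n}=1/n\), whose sum diverges, and \(\sigma_{n}=n^{-1/2}\to0\), the recursion drives \(a_{n}\) to 0 even though each individual step barely shrinks it.

```python
# Illustration of Lemma 2.6: a_{n+1} <= (1 - g_n) a_n + g_n * s_n with
# sum of g_n infinite (harmonic series) and s_n -> 0 forces a_n -> 0.
a = 1.0
for n in range(1, 500001):
    g = 1.0 / n          # gamma_n, non-summable
    s = n ** -0.5        # sigma_n -> 0
    a = (1 - g) * a + g * s
print(a)                 # close to 0 (roughly of the order of s_n)
```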

Lemma 2.7

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\to H\) be a mapping and write \(A:=I-T\). Then \(\operatorname{VI}(C,A)=\operatorname{Fix}(P_{C} T)\). In particular, if \(T:C\to C\) is a self-mapping, then \(\operatorname{VI}(C,A)=\operatorname{Fix}(T)\).

Proof

Indeed,

$$x^{*}\in \operatorname{VI}(C,A)\quad \Leftrightarrow\quad x^{*}=P_{C} (I-A)x^{*}\quad \Leftrightarrow \quad x^{*}=P_{C} Tx^{*}\quad \Leftrightarrow \quad x^{*}\in\operatorname{Fix}(P_{C} T). $$

If \(T:C\to C\) is a self-mapping, then we have

$$x^{*}\in\operatorname{Fix}(P_{C} T)\quad \Leftrightarrow\quad x^{*}=Tx^{*}. $$

This completes the proof. □

Now we are in a position to state and prove the main results of this paper.

3 Main results

In this section we will introduce two iterative methods (one implicit and the other explicit). First, we introduce the implicit one. In what follows, we assume that \(A:C\to H\) is hemicontinuous and monotone.

For given \(\gamma_{n}>0\), we consider the sequences of operators \(\{A_{n}\} \) which are defined by

$$ A_{n}x=\gamma_{n}x+Ax,\quad \forall x\in C $$
(3.1)

for all \(n\geq1\).

From Lemma 2.5, we know that \(A_{n}:C\to H\) are hemicontinuous and \(\gamma _{n}\)-strongly monotone for all \(n\geq1\). It follows from Lemma 2.4 that the variational inequality

$$ \langle A_{n}y_{n}, x-y_{n}\rangle\geq0,\quad \forall x\in C, $$
(3.2)

has a unique solution \(y_{n}\in C\) for every fixed \(n\geq1\).

Substitute (3.1) into (3.2) to obtain

$$ \langle\gamma_{n}y_{n}+Ay_{n}, x-y_{n} \rangle\geq0,\quad \forall x\in C. $$
(3.3)

Take \(\gamma_{n}=\frac{\alpha_{n}}{\beta_{n}}\). Then (3.3) yields

$$ \langle\alpha_{n}y_{n}+\beta_{n} Ay_{n}, x-y_{n}\rangle\geq0,\quad \forall x\in C, $$
(3.4)

and hence

$$ \langle y_{n}-y_{n}-\alpha_{n}y_{n}- \beta_{n} Ay_{n}, x-y_{n}\rangle\leq0, \quad \forall x\in C. $$
(3.5)

It turns out that

$$ \bigl\langle (1-\alpha_{n})y_{n}-\beta_{n} Ay_{n}-y_{n}, x-y_{n}\bigr\rangle \leq0,\quad \forall x\in C. $$
(3.6)

By the characterization of the metric projection \(P_{C}\) (namely, \(z=P_{C}u\) if and only if \(\langle u-z,x-z\rangle\leq0\) for all \(x\in C\)), (3.6) is equivalent to

$$ y_{n}=P_{C}\bigl[(1-\alpha_{n})y_{n}- \beta_{n} Ay_{n}\bigr],\quad n\geq1. $$
(3.7)
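
To make the implicit scheme concrete: for each fixed n, \(y_{n}\) is the unique solution of the \(\gamma_{n}\)-strongly monotone VI (3.2), so whenever A also happens to be Lipschitz, \(y_{n}\) can be computed by an inner projected-gradient loop as in Section 1. The following is a minimal sketch under that extra Lipschitz assumption; the operator, the set, and all names are illustrative, not from the paper.

```python
import numpy as np

# Illustrative A: monotone and 1-Lipschitz, but not strongly monotone.
S = np.array([[0.0, -1.0], [1.0, 0.0]])
A = lambda x: S @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)    # C = [-1,1]^2

def y_reg(gamma, iters=20000):
    """Solve VI (3.2) for A_n = gamma*I + A of (3.1) by an inner PGM;
    here A_n is gamma-strongly monotone (Lemma 2.5) and (gamma+1)-Lipschitz."""
    mu = gamma / (gamma + 1.0) ** 2         # inner step in (0, 2*eta/k^2)
    y = np.ones(2)
    for _ in range(iters):
        y = proj_C(y - mu * (gamma * y + A(y)))
    return y

for gamma in [1.0, 0.3, 0.1]:                # gamma_n = alpha_n/beta_n -> 0
    print(gamma, y_reg(gamma))               # y_n -> P_{VI(C,A)} theta = 0 here
```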

Theorem 3.1

Let C be a nonempty, closed, and convex subset of a real Hilbert space H, let \(A:C\to H\) be a hemicontinuous monotone operator, and let \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) be two sequences of positive numbers in \([0,1]\) that satisfy the following condition:

$$\frac{\alpha_{n}}{\beta_{n}} \to0 \quad \textit{as }n\to\infty. $$

Assume that \(\operatorname{VI}(C,A)\neq\emptyset\). Then the sequence \(\{y_{n}\}\) generated by (3.7) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\) which is the minimum-norm solution of VIP (2.2).

Proof

Put \({z_{n}} = (1 - {\alpha_{n}}){y_{n}} - {\beta_{n}}A{y_{n}}\). For any \(p \in \operatorname{VI}(C,A)\), we have

$$ \|{y_{n}} - p\|^{2} = \langle{y_{n}} - p,{y_{n}} - p\rangle = \langle{y_{n}} - {z_{n}},{y_{n}} - p\rangle + \langle{z_{n}} - p,{y_{n}} - p \rangle. $$
(3.8)

By using (3.7) and (3.8), we get

$$\langle{y_{n}} - {z_{n}},{y_{n}} - p\rangle = \langle{P_{C}} {z_{n}} - {z_{n}},{P_{C}} {z_{n}} - p\rangle. $$

It follows from the property of \(P_{C}\) that

$$ \langle{P_{C}} {z_{n}} - {z_{n}},{P_{C}} {z_{n}} - p\rangle \le0. $$
(3.9)

By (3.8) and (3.9), we have

$$\begin{aligned} \|{y_{n}} - p\|^{2} \le&\langle{z_{n}} - p,{y_{n}} - p\rangle \\ =& \bigl\langle (1 - {\alpha_{n}}){y_{n}} - {\beta _{n}}A{y_{n}} - p,{y_{n}} - p\bigr\rangle \\ =& \langle{y_{n}} - p,{y_{n}} - p\rangle + \langle - { \alpha_{n}} {y_{n}} - {\beta_{n}}A{y_{n}},{y_{n}} - p\rangle \\ =& \|{y_{n}} - p\|^{2} - \langle{\alpha_{n}} {y_{n}} + {\beta_{n}}A{y_{n}},{y_{n}} - p \rangle, \end{aligned}$$

which simplifies to

$$ \langle{\alpha_{n}} {y_{n}} + {\beta_{n}}A{y_{n}},{y_{n}} - p\rangle \le 0, $$
(3.10)

and then

$$ \biggl\langle \frac{\alpha_{n}}{\beta_{n}}{y_{n}} + A{y_{n}},{y_{n}} - p\biggr\rangle \le0. $$
(3.11)

Setting \({\gamma_{n}} = \frac{\alpha_{n}}{\beta_{n}}\), we have

$$\begin{aligned} 0 \ge&\langle{\gamma_{n}} {y_{n}} + A{y_{n}},{y_{n}} - p\rangle \\ =& \langle{\gamma_{n}} {y_{n}} + A{y_{n}} + Ap - Ap,{y_{n}} - p\rangle \\ =& {\gamma_{n}}\langle{y_{n}},{y_{n}} - p\rangle + \langle A{y_{n}} - Ap,{y_{n}} - p\rangle + \langle Ap,{y_{n}} - p\rangle. \end{aligned}$$
(3.12)

Since A is a monotone operator and \(p \in \operatorname{VI}(C,A)\), we know

$$ \langle A{y_{n}} - Ap,{y_{n}} - p\rangle \ge0 $$
(3.13)

and

$$ \langle Ap, {y_{n}} - p\rangle \ge0. $$
(3.14)

Substituting (3.13) and (3.14) into (3.12) and using \({\gamma_{n}}>0\), we obtain

$$ \langle{y_{n}},{y_{n}} - p\rangle = \langle{y_{n}} - p + p,{y_{n}} - p\rangle \le0. $$
(3.15)

Then we have

$$ \|{y_{n}} - p\|^{2} \le\langle-p, y_{n}-p\rangle \le\|p\|\|y_{n}-p\|, $$
(3.16)

and hence

$$\|{y_{n}} - p\|\le\|p\|. $$

Therefore, \(\{y_{n}\}\) is bounded, and hence it has a weakly convergent subsequence \(\{y_{n_{j}}\}\); since C is closed and convex, it is weakly closed, so the weak limit lies in C. Without loss of generality, we may assume that the whole sequence \(\{y_{n}\}\) converges weakly to a point \({x^{*}} \in C\).

We show that \({x^{*}}\) is a solution to VIP (2.2). For any \(x\in C\), by Lemma 2.5 we have

$$\begin{aligned}& \langle{\gamma_{n}}x + Ax,x - {y_{n}}\rangle - \langle{ \gamma_{n}} {y_{n}} + A{y_{n}},x - {y_{n}} \rangle \\& \quad = \bigl\langle ({\gamma_{n}}I+ A)x-({\gamma_{n}}I + A){y_{n}},x - {y_{n}}\bigr\rangle \\& \quad \ge\gamma_{n}\|x - {y_{n}}\|^{2}. \end{aligned}$$
(3.17)

Combining (3.17) and (3.3), we get

$$ \langle{\gamma_{n}}x + Ax,x - {y_{n}}\rangle \ge\langle{ \gamma _{n}} {y_{n}} + A{y_{n}},x - {y_{n}}\rangle \ge0,\quad \forall x\in C. $$
(3.18)

Since \({\gamma_{n}}=\frac{\alpha_{n}}{\beta_{n}}\to0\) and \(y_{n}\rightharpoonup x^{*}\), taking the limit as \(n\to\infty\) in (3.18) yields

$$\bigl\langle Ax,x - {x^{*}}\bigr\rangle \ge0,\quad \forall x\in C. $$

By Lemma 2.1, we get

$$\bigl\langle A{x^{*}},x - {x^{*}}\bigr\rangle \ge0, \quad \forall x\in C, $$

that is, \({x^{*}} \in \operatorname{VI}(C,A)\).

Therefore, we may take \(p=x^{*}\) in (3.16) to obtain

$$ \bigl\Vert {y_{n}} - {x^{*}}\bigr\Vert ^{2} \le\bigl\langle x^{*}, x^{*}-y_{n}\bigr\rangle . $$
(3.19)

Since \({y_{n}} \rightharpoonup{x^{*}}\) as \(n\to\infty\), by (3.19) we get \({y_{n}} \to{x^{*}}\) as \(n\to\infty\).

Moreover, from (3.15) we get

$$ \bigl\langle {x^{*}},{x^{*}} - p\bigr\rangle \le0,\quad \forall p\in \operatorname{VI}(C,A). $$
(3.20)

By the characterization of the metric projection, (3.20) means precisely that

$$ {x^{*}}=P_{\operatorname{VI}(C,A)}\theta. $$
(3.21)

So, the sequence \(\{y_{n}\}\) generated by (3.7) converges in norm to \(x^{*}=P_{\operatorname{VI}(C,A)}\theta\) as \(n\to\infty\).

Furthermore, it follows from (3.20) that

$$ \bigl\Vert {x^{*}}\bigr\Vert ^{2} \le\bigl\langle {x^{*}},p\bigr\rangle \le\bigl\Vert x^{*}\bigr\Vert \|p\|,\quad \forall p\in \operatorname{VI}(C,A), $$
(3.22)

from which we know that \(x^{*}\) is the minimum-norm solution of VIP (2.2). This completes the proof. □

Now, we introduce an explicit method and establish its strong convergence.

From the implicit method, it is natural to consider the following iteration method that generates a sequence \(\{x_{n}\}\) according to the recursion

$$ {x_{n + 1}} = {P_{C}}\bigl[(1 - {\alpha_{n}}){x_{n}} - {\beta_{n}}A{x_{n}}\bigr],\quad n \ge1, $$
(3.23)

where the initial guess \(x_{1}\in C\) is selected arbitrarily and \(\{\alpha _{n}\}\) and \(\{\beta_{n}\}\) are two sequences of positive numbers in \((0, 1)\).

Theorem 3.2

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(A:C\to H\) be a hemicontinuous monotone operator. Let \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) be two sequences of positive numbers in \([0,1]\) that satisfy the following conditions:

  (i) \(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta_{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);

  (ii) \({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);

  (iii) \(\frac{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta_{n - 1}}|}{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).

Assume that both \(\{Ax_{n}\}\) and \(\{Ay_{n}\}\) are bounded and that \(\operatorname{VI}(C,A)\neq\emptyset\). Then the iterative sequence \(\{x_{n}\}\) generated by (3.23) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\), which is the minimum-norm solution to VIP (2.2).

Proof

By using Theorem 3.1, we know that \(\{y_{n}\}\) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\).

For any \(p\in \operatorname{VI}(C,A)\), from the nonexpansiveness of \(P_{C}\) we know

$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} =& \bigl\Vert P_{C} \bigl[(1-\alpha_{n})x_{n}-\beta_{n}Ax_{n} \bigr]-p\bigr\Vert ^{2} \\ \leq& \bigl\Vert (1-\alpha_{n})x_{n}- \beta_{n}Ax_{n}-p\bigr\Vert ^{2} \\ =& \bigl\Vert (1-\alpha_{n}) (x_{n}-p)- \beta_{n}Ax_{n}-\alpha_{n}p\bigr\Vert ^{2} \\ =& \bigl\Vert (1-\alpha_{n}) (x_{n}-p)+ \alpha_{n}(-p)\bigr\Vert ^{2}+\beta_{n}^{2} \Vert Ax_{n}\Vert ^{2} \\ &{} -2\beta_{n}(1-\alpha_{n})\langle x_{n}-p, Ax_{n}\rangle+2\alpha_{n}\beta _{n}\langle p, Ax_{n}\rangle. \end{aligned}$$
(3.24)

By Lemma 2.1 we know

$$ \langle x_{n}-p, Ax_{n}\rangle\geq0,\quad n\geq1. $$
(3.25)

Substitute (3.25) into (3.24) to get

$$\begin{aligned} \|x_{n+1}-p\|^{2} \leq&(1-\alpha_{n}) \|x_{n}-p\|^{2}+\alpha_{n}\|p\|^{2}+2\alpha _{n}\beta_{n}\|p\|\|Ax_{n}\|+\beta_{n}^{2} \|Ax_{n}\|^{2} \\ \leq&(1-\alpha_{n})\|x_{n}-p\|^{2}+ \alpha_{n}\biggl(\|p\|^{2}+2\beta_{n}\|p\| \|Ax_{n}\|+\frac {\beta_{n}^{2}}{\alpha_{n}}\|Ax_{n}\|^{2}\biggr). \end{aligned}$$
(3.26)

Since \(\{Ax_{n}\}\) is bounded, condition (i) ensures that \(M_{0}=\max \{\|x_{1}-p\|^{2}, \sup_{n\geq1} (\|p\|^{2}+2\beta_{n}\|p\|\| Ax_{n}\|+\frac{\beta_{n}^{2}}{\alpha_{n}}\|Ax_{n}\|^{2} ) \}\) is finite, and an easy induction on (3.26) gives

$$\|x_{n}-p\|^{2}\leq M_{0} $$

for all \(n\ge1\), which implies that \(\{x_{n}\}\) is bounded.

By using (3.7) and (3.23), we get

$$\begin{aligned} {\Vert {{x_{n + 1}} - {y_{n}}} \Vert ^{2}} \le& {\bigl\Vert {(1 - {\alpha _{n}}) ({x_{n}} - {y_{n}}) - {\beta_{n}}(A{x_{n}} - A{y_{n}})} \bigr\Vert ^{2}} \\ \le& {(1 - {\alpha_{n}})^{2}} {\Vert {{x_{n}} - {y_{n}}} \Vert ^{2}} + \beta _{n}^{2}{ \Vert {A{x_{n}} - A{y_{n}}} \Vert ^{2}} \\ &{} - 2(1 - {\alpha_{n}}){\beta_{n}} \langle{A{x_{n}} - A{y_{n}},{x_{n}} - {y_{n}}} \rangle \\ \le& {(1 - {\alpha_{n}})^{2}} {\Vert {{x_{n}} - {y_{n}}} \Vert ^{2}} + \beta _{n}^{2}{ \Vert {A{x_{n}} - A{y_{n}}} \Vert ^{2}} \\ \le & {(1 - {\alpha_{n}})^{2}}\bigl({\Vert {{x_{n}} - {y_{n-1}}} \Vert ^{2}}+2\| x_{n}-y_{n-1}\|\|y_{n}-y_{n-1}\| \\ &{} +\|y_{n}-y_{n-1}\|^{2}\bigr)+ \beta_{n}^{2}{\Vert {A{x_{n}} - A{y_{n}}} \Vert ^{2}}. \end{aligned}$$
(3.27)

By using (3.7) and the firm nonexpansiveness of \(P_{C}\) (i.e., \(\| P_{C}u-P_{C}v\|^{2}\leq\langle P_{C}u-P_{C}v,u-v\rangle\)), we get

$$\begin{aligned} {\Vert {{y_{n}} - {y_{n - 1}}} \Vert ^{2}} \leq& \bigl\langle {{y_{n}} - {y_{n - 1}},\bigl[(1 - { \alpha_{n}}){y_{n}} - {\beta_{n}}A{y_{n}} \bigr]} - \bigl[(1 - {\alpha_{n - 1}}){y_{n - 1}} - { \beta_{n - 1}}A{y_{n - 1}}\bigr]\bigr\rangle \\ =& (1 - {\alpha _{n}})\Vert {{y_{n}} - {y_{n - 1}}} \Vert ^{2} + ({\alpha_{n-1}} - {\alpha_{n}}) \langle{{y_{n}} - {y_{n - 1}},{y_{n - 1}}} \rangle \\ &{}+ ({\beta_{n-1}} - {\beta_{n}}) \langle{{y_{n}} - {y_{n - 1}},A{y_{n - 1}}} \rangle - { \beta_{n}} \langle{{y_{n}} - {y_{n - 1}},A{y_{n}} - A{y_{n - 1}}} \rangle \\ \le&(1 - {\alpha _{n}})\Vert {{y_{n}} - {y_{n - 1}}} \Vert ^{2} + \vert {{\alpha_{n}} - { \alpha_{n - 1}}} \vert \|{{y_{n}} - {y_{n - 1}}\| \|{y_{n - 1}}}\| \\ &{}+ \vert {{\beta_{n}} - {\beta_{n - 1}}} \vert \| {y_{n}} - {y_{n - 1}}\|\|A{y_{n - 1}} \|, \end{aligned}$$
(3.28)

where the last inequality uses the Cauchy-Schwarz inequality and the monotonicity of A.

Since \(\{y_{n}\}\) and \(\{Ay_{n}\}\) are bounded, \(M_{1}= \sup_{n\geq1}\max \{ \|y_{n - 1}\|, \|A{y_{n - 1}}\| \}\) is finite. Rearranging (3.28), we then have

$$ \Vert {{y_{n}} - {y_{n - 1}}} \Vert \le \frac{{\vert {{\alpha_{n}} - {\alpha_{n - 1}}} \vert + \vert {{\beta_{n}} - {\beta_{n - 1}}} \vert }}{{{\alpha_{n}}}}M_{1}. $$
(3.29)

From conditions (i) and (iii) we know that \(\frac{{\vert {{\alpha_{n}} - {\alpha_{n - 1}}} \vert + \vert {{\beta_{n}} - {\beta_{n - 1}}} \vert }}{{{\alpha_{n}}}} = o({\alpha_{n}})\) and \(\beta_{n}^{2}=o(\alpha_{n})\).

Since \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{Ax_{n}\}\), and \(\{Ay_{n}\}\) are all bounded, there exists a constant \(M_{2}>0\) such that (3.27), combined with (3.29), turns into

$$\begin{aligned} \|x_{n+1}-y_{n}\|^{2} \leq&(1-\alpha_{n}) \|x_{n}-y_{n-1}\|^{2}+\alpha_{n}\biggl(2 \frac {{\vert {{\alpha_{n}} - {\alpha_{n - 1}}} \vert + \vert {{\beta_{n}} - {\beta_{n - 1}}} \vert }}{{\alpha_{n}^{2}}}+\frac{\beta_{n}^{2}}{\alpha _{n}}\biggr)M_{2} \\ =&(1-\alpha_{n})\|x_{n}-y_{n-1}\|^{2}+o( \alpha_{n}). \end{aligned}$$

By Lemma 2.6 and condition (ii), we have \(\Vert {{x_{n + 1}} - {y_{n}}} \Vert \to0\) as \(n \to\infty\). Since \(y_{n}\to x^{*}\) by Theorem 3.1, it follows that \(\{ {x_{n}}\} \) converges strongly to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\). This completes the proof. □
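
As a numerical illustration of (3.23), consider the following sketch on a toy instance (the operator and the set are illustrative, not from the paper): \(A=I-P_{L}\), where \(P_{L}\) is the projection onto the diagonal line \(\{x_{1}=x_{2}\}\), is monotone and 1-Lipschitz; by Lemma 2.7, \(\operatorname{VI}(C,A)=\operatorname{Fix}(P_{C}P_{L})\) is a whole segment, and the iteration should single out its minimum-norm element.

```python
import numpy as np

# A = I - P_L, with P_L the projection onto the diagonal {x1 = x2}:
# monotone, 1-Lipschitz, not strongly monotone, many VI solutions.
P_L = lambda x: np.full(2, x.mean())
A = lambda x: x - P_L(x)

lo, hi = np.array([1.0, -2.0]), np.array([2.0, 2.0])
proj_C = lambda x: np.minimum(np.maximum(x, lo), hi)  # box C = [1,2] x [-2,2]
# VI(C, A) = Fix(P_C P_L) = {(t, t): 1 <= t <= 2}; minimum-norm point (1, 1).

a, b = 0.6, 0.5                          # admissible exponents (Remark 3.2 below)
x = np.array([2.0, -2.0])                # x_1 in C
for n in range(1, 200001):
    alpha, beta = n ** -a, n ** -b
    x = proj_C((1 - alpha) * x - beta * A(x))   # recursion (3.23)

print(x)  # should approach (1, 1) = P_{VI(C,A)} theta
```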

If \(A:C\to H\) is k-Lipschitz continuous and monotone, we have the following convergence result.

Theorem 3.3

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(A:C\to H\) be a k-Lipschitz continuous and monotone operator. Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\) be two sequences of positive numbers in \([0,1]\) that satisfy the following conditions:

  (i) \(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta_{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);

  (ii) \({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);

  (iii) \(\frac{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta_{n - 1}}|}{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).

Assume that \(\operatorname{VI}(C,A)\neq\emptyset\). Then the iterative sequence \(\{ x_{n}\}\) generated by (3.23) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\), which is the minimum-norm solution of VIP (2.2).

Proof

From Theorem 3.1, we know that \(\{y_{n}\}\) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\). Therefore, it is sufficient to show that \(x_{n+1}-y_{n}\to0\) as \(n\to\infty\).

In view of conditions (i) and (ii), without loss of generality, we may assume that

$$ \alpha_{n}^{2}+k^{2}\beta_{n}^{2} \le\alpha_{n} $$
(3.30)

for all \(n\ge1\). By using (3.7), (3.23), and (3.30), we get

$$\begin{aligned} {\Vert {{x_{n + 1}} - {y_{n}}} \Vert ^{2}} \le& {\bigl\Vert {(1 - {\alpha _{n}}) ({x_{n}} - {y_{n}}) - {\beta_{n}}(A{x_{n}} - A{y_{n}})} \bigr\Vert ^{2}} \\ \le& {(1 - {\alpha_{n}})^{2}} {\Vert {{x_{n}} - {y_{n}}} \Vert ^{2}} + \beta _{n}^{2}{ \Vert {A{x_{n}} - A{y_{n}}} \Vert ^{2}} \\ &{}- 2(1 - {\alpha_{n}}){\beta_{n}} \langle{A{x_{n}} - A{y_{n}},{x_{n}} - {y_{n}}} \rangle \\ \le& {(1 - {\alpha_{n}})^{2}} {\Vert {{x_{n}} - {y_{n}}} \Vert ^{2}} + \beta _{n}^{2}{ \Vert {A{x_{n}} - A{y_{n}}} \Vert ^{2}} \\ \le& \bigl[{(1 - {\alpha_{n}})^{2}}+k^{2} \beta_{n}^{2}\bigr]{\Vert {{x_{n}} - {y_{n}}} \Vert ^{2}} \\ \le& (1-\alpha_{n})\|x_{n}-y_{n}\|^{2}. \end{aligned}$$
(3.31)

From (3.29), (3.31), and condition (iii), we obtain

$$\begin{aligned} {\Vert {{x_{n + 1}} - {y_{n}}} \Vert } \le& {\biggl(1 - \frac{1}{2}{\alpha_{n}}\biggr)}\|x_{n}-y_{n} \| \\ \le &\biggl(1 - \frac{1}{2}{\alpha_{n}}\biggr) \bigl( \Vert x_{n} - y_{n-1}\Vert +\Vert y_{n}-y_{n-1} \Vert \bigr) \\ \le& \biggl(1 - \frac{1}{2}{\alpha_{n}}\biggr) \|x_{n}-y_{n-1}\|+o(\alpha_{n}). \end{aligned}$$
(3.32)

By condition (ii) and Lemma 2.6, we deduce that \(x_{n+1}-y_{n}\to0\) as \(n\to\infty\). This completes the proof. □

Remark 3.1

Comparing our algorithm (3.23) with (EM), we find that algorithm (3.23) enjoys the following merits:

  (1) The recursion (3.23) is simpler than (EM).

  (2) The recursion (3.23) has the strong convergence property, while (EM) has only the weak convergence property in general.

  (3) The choice of the parameter sequences \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) in (3.23) does not depend on the Lipschitz constant of A; thus, (3.23) is applicable even when the Lipschitz constant of A is unknown.

Remark 3.2

Choose the sequences \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) such that

$$\alpha_{n}=\frac{1}{n^{a}}\quad \text{and} \quad \beta_{n}=\frac{1}{n^{b}},\quad n\ge1, $$

where \(0< b< a<\min \{2b,\frac{b+1}{2} \}\); for instance, \(a=\frac{3}{5}\) and \(b=\frac{1}{2}\). Then conditions (i)-(iii) of Theorems 3.2 and 3.3 are satisfied, as the following computation shows.
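
Indeed, with \(\alpha_{n}=n^{-a}\) and \(\beta_{n}=n^{-b}\) we have

$$\frac{\alpha_{n}}{\beta_{n}} = n^{b - a}\to0 \quad \mbox{and}\quad \frac{\beta_{n}^{2}}{\alpha_{n}} = n^{a - 2b}\to0, $$

since \(b< a<2b\); moreover, \(\alpha_{n}\to0\) and \(\sum_{n}\alpha_{n}=\infty\) since \(0< a<\frac{b+1}{2}<1\). Finally, by the mean value theorem, \(|\alpha_{n}-\alpha_{n-1}|=O(n^{-a-1})\) and \(|\beta_{n}-\beta_{n-1}|=O(n^{-b-1})\), so

$$\frac{|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|}{\alpha_{n}^{2}} = O\bigl(n^{a-1}\bigr)+O\bigl(n^{2a-b-1}\bigr)\to0, $$

since \(a<1\) and \(a<\frac{b+1}{2}\).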

4 Applications

In this section, we give some applications of our results.

Problem 4.1

Let B be a bounded linear operator on a real Hilbert space H, let C be a nonempty, closed, and convex subset of H, and let \(b\in H\) be a fixed vector. Find the minimum-norm least-squares solution over C of the operator equation

$$ Bx = b. $$
(4.1)

It is well known that the above problem is equivalent to the following minimization problem:

$$ \min_{x \in C} \frac{1}{2}{\Vert {Bx - b} \Vert ^{2}}. $$
(4.2)

We denote by \({S_{B}}\) the solution set of Problem 4.1 and consider the functional \(f(x) = \frac{1}{2}{\Vert {Bx - b} \Vert ^{2}}\), whose gradient is \(\nabla f(x) = {B^{*}}(Bx - b)\). It is easy to verify that \({S_{B}} = \operatorname{VI}(C, \nabla f)\) and that \(x^{*}\) is the minimum-norm solution of Problem 4.1 if and only if \({x^{*}} = {P_{{S_{B}}}}\theta\). Let \(\{ {x_{n}}\} \) be generated by the following recursion:

$$ {x_{1}} \in C,\quad {x_{n + 1}} = {P_{C}} \bigl[(1 - {\alpha_{n}}){x_{n}} - {\beta_{n}}\nabla f({x_{n}})\bigr],\quad n\ge1, $$
(4.3)

where \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are two sequences in \([0,1]\) that satisfy conditions (i)-(iii) in Theorem 3.3.

By virtue of Theorem 3.3, we can deduce the following convergence result.

Theorem 4.1

Assume that \(S_{B}\ne\emptyset\) and let \(\{x_{n}\}\) be generated by (4.3). Then \(\{x_{n}\}\) converges in norm to \(x^{*}={P_{{S_{B}}}}\theta\).

Proof

Since

$$\bigl\Vert {\nabla f(x) - \nabla f(y)} \bigr\Vert \le{ \Vert B \Vert ^{2}}\Vert {x - y} \Vert $$

and

$$\begin{aligned} \bigl\langle {\nabla f(x) - \nabla f(y),x - y} \bigr\rangle =& \bigl\langle {{B^{*}}(Bx - b) - {B^{*}}(By - b),x - y} \bigr\rangle \\ =& \langle{Bx - By,Bx - By} \rangle \\ =& {\Vert {Bx - By} \Vert ^{2}} \ge0, \end{aligned}$$

we see that ∇f is \({\Vert B \Vert ^{2}}\)-Lipschitz continuous and monotone. By Theorem 3.3 we conclude that \(\{ {x_{n}}\} \) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,\nabla f)}}\theta={P_{{S_{B}}}}\theta\). This completes the proof. □
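
A minimal numerical sketch of (4.3) on an illustrative instance (all concrete choices are hypothetical, not from the paper): for simplicity take \(C=H=R^{3}\), so that \(P_{C}\) is the identity, and a rank-deficient B, which makes \(S_{B}\) a whole affine set; the iteration should then select its minimum-norm element. The constant in \(\beta_{n}\) only keeps the early steps stable and does not affect conditions (i)-(iii).

```python
import numpy as np

# Rank-deficient least-squares instance: S_B = {x : x1 + x2 = 1} is a
# whole affine set; its minimum-norm element is the pseudoinverse solution.
B = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.0, 0.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: B.T @ (B @ x - b)    # gradient of f(x) = ||Bx - b||^2 / 2

x = np.zeros(3)
for n in range(1, 100001):
    alpha = n ** -0.6
    beta = 0.1 * n ** -0.5              # scaled so that beta_n * ||B||^2 <= 1
    x = (1 - alpha) * x - beta * grad_f(x)   # recursion (4.3) with P_C = I

print(x)                       # should be close to the minimum-norm solution
print(np.linalg.pinv(B) @ b)   # reference answer: [0.5, 0.5, 0.0]
```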

Next, we turn to consider the split feasibility problem (SFP).

Problem 4.2

Let C and Q be nonempty, closed, and convex subsets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. The SFP is formulated as finding a point \(x^{*}\) with the property:

$$ x^{*}\in C, \quad Bx^{*}\in Q, $$
(4.4)

where \(B: C\subset H_{1}\to H_{2} \) is a bounded linear operator.

We denote by Γ the solution set of Problem 4.2. Consider the functional

$$g(x)=\frac{1}{2}\bigl\Vert (I-P_{Q})Bx\bigr\Vert ^{2}. $$

It is well known that if Problem 4.2 is consistent, i.e., \(\Gamma\ne\emptyset\), then Problem 4.2 is equivalent to the following minimization problem:

$$ \min_{x \in C} g(x). $$
(4.5)

We know that \(x^{*}\) is a solution of the minimization problem (4.5) if and only if \(x^{*}\) is a solution of the following variational inequality:

$$ \bigl\langle \nabla g\bigl(x^{*}\bigr),x-x^{*}\bigr\rangle \ge0,\quad \forall x\in C. $$
(4.6)

Therefore, we have \(\Gamma=\operatorname{VI}(C,\nabla g)\) provided that \(\Gamma\ne \emptyset\). Let \(\{ {x_{n}}\} \) be generated by the following recursion:

$$ {x_{1}} \in C,\quad {x_{n + 1}} = {P_{C}} \bigl[(1 - {\alpha_{n}}){x_{n}} - {\beta_{n}}\nabla g({x_{n}})\bigr],\quad n\ge1, $$
(4.7)

where \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are two sequences in \([0,1]\) that satisfy conditions (i)-(iii) in Theorem 3.3. By using Theorem 3.3, we have the following convergence result.

Theorem 4.2

Assume that \(\Gamma\ne\emptyset\) and let \(\{x_{n}\}\) be generated by (4.7). Then \(\{ {x_{n}}\} \) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,\nabla g)}}\theta=P_{\Gamma}\theta\).

Proof

Note that \(\nabla g(x)=B^{*}(I-P_{Q})Bx\). Since \(I-P_{Q}\) is firmly nonexpansive, ∇g is \({\Vert B \Vert ^{2}}\)-Lipschitz continuous and monotone. By Theorem 3.3 we conclude that \(\{ {x_{n}}\} \) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,\nabla g)}}\theta=P_{\Gamma}\theta\). This completes the proof. □
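
A minimal numerical sketch of (4.7) on an illustrative SFP instance (all concrete choices are hypothetical): \(C=[0,1]^{2}\), \(Q=[2,3]\subset R\), and \(Bx=2(x_{1}+x_{2})\), so that \(\Gamma=\{x\in C: x_{1}+x_{2}\in[1,1.5]\}\) and the minimum-norm solution is \((\frac{1}{2},\frac{1}{2})\).

```python
import numpy as np

Bmat = np.array([[2.0, 2.0]])                # B: R^2 -> R, ||B||^2 = 8
proj_C = lambda x: np.clip(x, 0.0, 1.0)      # C = [0,1]^2
proj_Q = lambda y: np.clip(y, 2.0, 3.0)      # Q = [2,3]
grad_g = lambda x: Bmat.T @ (Bmat @ x - proj_Q(Bmat @ x))  # B*(I - P_Q)Bx

x = np.zeros(2)
for n in range(1, 100001):
    alpha = n ** -0.6
    beta = 0.1 * n ** -0.5                   # keeps beta_n * ||B||^2 <= 1
    x = proj_C((1 - alpha) * x - beta * grad_g(x))   # recursion (4.7)

print(x, Bmat @ x)  # x should approach (0.5, 0.5), with Bx = 2 in Q
```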

Finally, we apply our results to the minimum-norm fixed point problem for pseudocontractive mappings. Recall that a mapping \(T:C\to C\) is said to be pseudocontractive if \(\langle Tx-Ty,x-y\rangle\leq\|x-y\|^{2}\) for all \(x,y\in C\); equivalently, \(A:=I-T\) is monotone.

Theorem 4.3

Let C be a nonempty, bounded, closed, and convex subset of a real Hilbert space H. Let \(T:C\to C\) be a hemicontinuous pseudocontractive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). Assume that \(\{ \alpha_{n}\}\) and \(\{\beta_{n}\}\) are two sequences in \([0,1]\) that satisfy the following conditions:

  (i) \(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta_{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);

  (ii) \({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);

  (iii) \(\frac{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta_{n - 1}}|}{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).

Then the sequence \(\{x_{n}\}\) generated by

$$ x_{1}\in C,\quad {x_{n + 1}} = {P_{C}}\bigl[\bigl(1 - ({\alpha_{n}}+\beta _{n})\bigr){x_{n}}+{ \beta_{n}}T{x_{n}}\bigr],\quad n \ge1, $$
(4.8)

converges in norm to \({x^{*}} = {P_{\operatorname{Fix}(T)}}\theta\).

Proof

Put \(A=I-T\). Since \(T:C\to C\) is a hemicontinuous pseudocontractive mapping, A is a hemicontinuous monotone operator. It follows from Theorem 1.1 that \(\operatorname{VI}(C,A)\ne\emptyset\). From the boundedness of C, we know that \(\{Ax_{n}\}\) and \(\{Ay_{n}\}\) are bounded. By Theorem 3.2, the iterative sequence \(\{x_{n}\}\) converges strongly to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\). Since T is a self-mapping, Lemma 2.7 gives \(\operatorname{VI}(C,A)=\operatorname{Fix}(T)\), and hence \(x^{*}={P_{\operatorname{Fix}(T)}}\theta\). This completes the proof. □

Theorem 4.4

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\to C\) be a k-Lipschitz continuous pseudocontractive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are two sequences in \([0,1]\) that satisfy the following conditions:

  (i) \(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta_{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);

  (ii) \({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);

  (iii) \(\frac{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta_{n - 1}}|}{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).

Then the sequence \(\{x_{n}\}\) generated by

$$ x_{1}\in C, \quad {x_{n + 1}} = {P_{C}}\bigl[\bigl(1 - ({\alpha_{n}}+\beta _{n})\bigr){x_{n}}+{ \beta_{n}}T{x_{n}}\bigr],\quad n \ge1, $$
(4.9)

converges in norm to \({x^{*}} = {P_{\operatorname{Fix}(T)}}\theta\).

Proof

Put \(A=I-T\). Since \(T:C\to C\) is a k-Lipschitz continuous pseudocontractive mapping, A is a \((k+1)\)-Lipschitz continuous monotone operator. By Lemma 2.7 and our assumption, we see that \(\operatorname{VI}(C,A)=\operatorname{Fix}(T)\ne\emptyset\). By Theorem 3.3, the iterative sequence \(\{x_{n}\}\) generated by (4.9) converges strongly to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta=P_{\operatorname{Fix}(T)}\theta\). This completes the proof. □
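
A minimal numerical sketch of (4.9) on an illustrative instance: take \(C=[1,3]^{2}\) and \(T=P_{L}\), the projection onto the diagonal \(\{x_{1}=x_{2}\}\), which maps C into C and is nonexpansive (hence Lipschitz continuous and pseudocontractive), with \(\operatorname{Fix}(T)=\{(t,t):1\le t\le3\}\) and minimum-norm fixed point \((1,1)\). All names and choices are illustrative.

```python
import numpy as np

T = lambda x: np.full(2, x.mean())        # projection onto {x1 = x2}
proj_C = lambda x: np.clip(x, 1.0, 3.0)   # C = [1,3]^2; note T maps C into C

x = np.array([3.0, 1.0])                  # x_1 in C
for n in range(1, 200001):
    alpha, beta = n ** -0.6, n ** -0.5
    x = proj_C((1 - (alpha + beta)) * x + beta * T(x))  # recursion (4.9)

print(x)  # should approach (1, 1) = P_{Fix(T)} theta
```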

Remark 4.1

Theorem 4.2 improves some related results of [10] and [11] in the sense that the iterative parameter sequences do not depend on the norm of the operator B. Theorem 4.3 appears to be new. Theorem 4.4 is similar to Theorem 3.2 of [7], but with a different condition (iii) and different arguments.

References

  1. Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)


  2. Duvaut, G, Lions, JL: Inequalities in Mechanics and Physics. Springer, Berlin (1976)


  3. Zhou, HY, Pei, YW: A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces. J. Optim. Theory Appl. (2013). doi:10.1007/s10957-013-0470-x


  4. Zhou, HY, Pei, YW: A new iteration method for variational inequalities on set of common fixed points for a finite family of quasi-pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, 218 (2014)


  5. Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)


  6. Chen, RD, Su, YF, Xu, HK: Regularization and iteration methods for a class of monotone variational inequalities. Taiwan. J. Math. 13, 739-752 (2009)


  7. Yao, YH, Marino, G, Xu, H-K, Liou, Y-C: Construction of minimum-norm fixed points of pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, 206 (2014)


  8. Minty, GJ: On the maximal domain of a ‘monotone’ function. Mich. Math. J. 8, 135-157 (1961)


  9. Browder, FE: Nonlinear monotone operators and convex sets in Banach spaces. Bull. Am. Math. Soc. 71, 780-785 (1965)


  10. Zhou, HY, Pei, YW, Zhou, Y: Minimum-norm fixed point of nonexpansive mappings with applications. Optimization (2013). doi:10.1080/02331934.2013.811667


  11. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)



Acknowledgements

This research was supported by the National Natural Science Foundation of China (11071053).


Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors contributed equally. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Zhou, Y., Zhou, H. & Wang, P. Iterative methods for finding the minimum-norm solution of the standard monotone variational inequality problems with applications in Hilbert spaces. J Inequal Appl 2015, 135 (2015). https://doi.org/10.1186/s13660-015-0659-7
