
Weak convergence theorem for variational inequality problems with monotone mapping in Hilbert space

Abstract

The variational inequality problem plays an important role in nonlinear analysis. The main purpose of this paper is to propose an iterative method for finding an element of the set of solutions of a variational inequality problem with a monotone and Lipschitz continuous mapping in a Hilbert space. This iterative method is based on the extragradient method, and we prove a weak convergence theorem for it. Using this result, we obtain three weak convergence theorems: for the equilibrium problem, the constrained convex minimization problem, and the split feasibility problem.

1 Introduction

The variational inequality problem is a generalization of the nonlinear complementarity problem. It is widely used in economics, engineering, mechanics, signal processing, image processing, and so on. Variational inequalities first arose from problems in mechanics in the early 1960s. In 1964, the existence and uniqueness of solutions of variational inequalities were established for the first time. Subsequently, a series of articles on the subject was published. By the 1970s, the variational inequality problem was being applied in many fields, and in the 1990s it became increasingly important in nonlinear analysis.

Let \(\mathbb{R}\) be the set of real numbers. Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle \) and norm \(\|\cdot\|\) and let C be a nonempty closed convex subset of H. A mapping \(A:C\rightarrow H\) is called monotone if

$$ \langle Ax-Ay, x-y\rangle\geq0, \quad\forall x,y\in C. $$

A mapping \(A:C\rightarrow H\) is called Lipschitz continuous if there exists \(k\in\mathbb{R}\) with \(k>0\) such that

$$ \|Ax-Ay\|\leq k\|x-y\|,\quad \forall x,y\in C. $$

Such a mapping A is called k-Lipschitz continuous; if \(k=1\), then A is called nonexpansive. The variational inequality problem is to find \(x^{*}\in C\) such that

$$ \bigl\langle Ax^{*}, x-x^{*}\bigr\rangle \geq0, \quad \forall x\in C. $$
(1.1)

We denote the set of solutions of this variational inequality problem by \(\operatorname{VI}(C,A)\).

In 1976, Korpelevich [1] proposed the following so-called extragradient method for solving the variational inequality problem in the finite-dimensional Euclidean space \(\mathbb{R}^{n}\).

Theorem 1.1

[1]

Let C be a nonempty closed convex subset of an n-dimensional Euclidean space \(\mathbb{R}^{n}\). Let A be a monotone and k-Lipschitz continuous mapping of C into \(\mathbb{R}^{n}\). Assume that \(\operatorname{VI}(C,A)\) is nonempty. Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be sequences generated by \(x_{0}=x\in C\) and

$$ \left \{ \textstyle\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda Ay_{n}), \end{array}\displaystyle \right . $$
(1.2)

for every \(n=0,1,2,\ldots\) , where \(\lambda\in(0,\frac{1}{k})\). Then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to the same point \(z\in \operatorname{VI}(C,A)\).
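To see why the extrapolation step in (1.2) matters, consider a standard illustration (ours, not from the original paper): take \(C=\mathbb{R}^{2}\) and the rotation \(A(x_{1},x_{2})=(x_{2},-x_{1})\), which is monotone (indeed \(\langle Ax-Ay, x-y\rangle=0\)) and 1-Lipschitz continuous, with \(\operatorname{VI}(C,A)=\{0\}\). Since \(P_{C}=I\) and \(A^{2}=-I\), scheme (1.2) gives

$$ x_{n+1}=x_{n}-\lambda A(x_{n}-\lambda Ax_{n})=\bigl(1-\lambda^{2}\bigr)x_{n}-\lambda Ax_{n}, \qquad \|x_{n+1}\|^{2}=\bigl(1-\lambda^{2}+\lambda^{4}\bigr)\|x_{n}\|^{2}, $$

so for \(\lambda\in(0,\frac{1}{k})=(0,1)\) the iterates converge linearly to 0, whereas the plain projected step \(x_{n+1}=x_{n}-\lambda Ax_{n}\) satisfies \(\|x_{n+1}\|^{2}=(1+\lambda^{2})\|x_{n}\|^{2}\) and diverges for every \(\lambda>0\).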

In this paper, based on the extragradient method, we introduce an iterative method for finding an element of the set of solutions of a variational inequality problem for a monotone and Lipschitz continuous mapping in Hilbert space. We obtain a weak convergence theorem. As applications, we can use this result to solve equilibrium problems, constrained convex minimization problems, and split feasibility problems.

2 Preliminaries

Let \(\mathbb{R}\) be the set of real numbers. Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle \) and norm \(\|\cdot\|\). For a sequence \(\{x_{n}\}\) in H, we denote weak convergence of \(\{x_{n}\}\) to x by \(x_{n}\rightharpoonup x\) and strong convergence by \(x_{n}\rightarrow x\). Let C be a nonempty closed convex subset of H. For each \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that

$$ \|x-P_{C}x\|\leq\|x-y\|,\quad \forall y\in C. $$
(2.1)

\(P_{C}\) is called the metric projection of H onto C. We know that \(P_{C}\) is nonexpansive. A set-valued mapping \(T:H\rightarrow2^{H}\) is called monotone if

$$ \langle x-y, u-v\rangle\geq0, \quad \forall(x,u),(y,v)\in G(T). $$

where \(G(T)=\{(x,u)\in H\times H : u\in Tx\}\) denotes the graph of T. A monotone mapping \(T:H\rightarrow2^{H}\) is called maximal if its graph is not properly contained in the graph of any other monotone mapping on H. It is known that a monotone mapping T is maximal if and only if, for \((x,u)\in H\times H\), \(\langle x-y, u-v\rangle\geq0\) for each \((y,v)\in G(T)\) implies \(u\in Tx\).

Lemma 2.1

[2]

Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if we have the inequality

$$ \langle x-z, z-y\rangle\geq0,\quad \forall y\in C. $$
(2.2)

Lemma 2.2

[2]

Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if we have the inequality

$$ \|x-y\|^{2}\geq\|x-z\|^{2}+\|y-z \|^{2}, \quad \forall y\in C. $$
(2.3)
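As a concrete illustration of the metric projection (ours, not from the paper), the following Python sketch computes \(P_{C}\) for a box \(C=[-1,1]^{5}\subset\mathbb{R}^{5}\), where the projection is a componentwise clip, and numerically checks the characterization (2.2) of Lemma 2.1 on sampled points of C:

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection P_C onto the box C = [lo, hi]^n: componentwise clipping."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)       # an arbitrary point of H = R^5
z = project_box(x, -1.0, 1.0)      # z = P_C x

# Lemma 2.1: z = P_C x iff <x - z, z - y> >= 0 for every y in C.
for _ in range(1000):
    y = rng.uniform(-1.0, 1.0, size=5)      # random y in C
    assert np.dot(x - z, z - y) >= -1e-12
print("inequality (2.2) verified on sampled points of C")
```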

Lemma 2.3

[3]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be a monotone and k-Lipschitz continuous mapping of C into H and let \(N_{C}v\) be the normal cone to C at \(v\in C\); i.e.,

$$ N_{C}v=\bigl\{w\in H : \langle v-u, w\rangle\geq0, \forall u\in C\bigr\}. $$

Define

$$ Tv= \left \{ \textstyle\begin{array}{l@{\quad}l} Av+N_{C}v, & v\in C, \\ \emptyset, & v\notin C. \end{array}\displaystyle \right . $$

Then T is maximal monotone and \(0\in Tv\) if and only if \(v\in \operatorname{VI}(C,A)\).

Lemma 2.4

[4]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\{x_{n}\}\) be a sequence in H satisfying the properties:

  1. (i)

    \(\lim_{n\rightarrow\infty}\|x_{n}-u\|\) exists for each \(u\in C\);

  2. (ii)

\(\omega_{w}(x_{n})\subset C\), where \(\omega_{w}(x_{n})\) denotes the set of weak cluster points of \(\{x_{n}\}\).

Then \(\{x_{n}\}\) converges weakly to a point in C.

Lemma 2.5

[5]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\{x_{n}\}\) be a sequence in H. Suppose that

$$ \|x_{n+1}-u\|\leq\|x_{n}-u\|,\quad \forall u\in C, $$

for every \(n=0,1,2,\ldots\) . Then the sequence \(\{P_{C}x_{n}\}\) converges strongly to a point in C.

3 Main results

The main task of this article is to find an element of the set of solutions of a variational inequality problem with a monotone and Lipschitz continuous mapping in a Hilbert space. We obtain a weak convergence theorem.

Theorem 3.1

Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let A be a monotone and k-Lipschitz continuous mapping of C into H. Assume that \(\operatorname{VI}(C,A)\neq\emptyset\). Let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated by \(x_{0}=x\in C\) and

$$ \left \{ \textstyle\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda_{n}Ax_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda_{n}Ay_{n}), \end{array}\displaystyle \right . $$
(3.1)

for every \(n=0,1,2,\ldots\) , where \(\{\lambda_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{k})\). Then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge weakly to the same point \(z\in \operatorname{VI}(C,A)\), where \(z=\lim_{n\rightarrow\infty} P_{\operatorname{VI}(C,A)}x_{n}\).
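A minimal Python/NumPy sketch of scheme (3.1) is given below. The function names `A`, `proj_C`, and the stopping rule are our own assumptions, not part of the theorem (which asserts weak convergence, not a computable stopping test); the step sizes follow the condition \(\{\lambda_{n}\}\subset[a,b]\subset(0,\frac{1}{k})\).

```python
import numpy as np

def extragradient(A, proj_C, x0, lam, max_iter=10_000, tol=1e-10):
    """Scheme (3.1): y_n = P_C(x_n - lam_n A(x_n)),
    x_{n+1} = P_C(x_n - lam_n A(y_n)), where lam(n) = lam_n in [a, b] ⊂ (0, 1/k)."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        ln = lam(n)
        y = proj_C(x - ln * A(x))
        x_new = proj_C(x - ln * A(y))
        if np.linalg.norm(x_new - x) < tol:   # heuristic stopping rule, ours
            return x_new
        x = x_new
    return x

# Toy instance: the rotation A(x1, x2) = (x2, -x1) is monotone and 1-Lipschitz,
# C is the closed unit ball, and VI(C, A) = {0}.
A = lambda x: np.array([x[1], -x[0]])
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))   # projection onto the unit ball
print(extragradient(A, proj_C, x0=[1.0, 0.5], lam=lambda n: 0.5))   # ≈ [0, 0]
```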

Proof

Let \(u\in \operatorname{VI}(C,A)\). From Lemma 2.2, the monotonicity of A, and the fact that \(\langle Au, u-y_{n}\rangle\leq0\) (since \(u\in \operatorname{VI}(C,A)\) and \(y_{n}\in C\)), we have

$$\begin{aligned} \|x_{n+1}-u\|^{2} \leq&\|x_{n}- \lambda_{n}Ay_{n}-u\|^{2}-\|x_{n}-\lambda _{n}Ay_{n}-x_{n+1}\|^{2} \\ =&\|x_{n}-u\|^{2}-\|x_{n}-x_{n+1} \|^{2}+2\lambda_{n}\langle Ay_{n}, u-x_{n+1}\rangle \\ =&\|x_{n}-u\|^{2}-\|x_{n}-x_{n+1} \|^{2}+2\lambda_{n}\bigl(\langle Ay_{n}, u-y_{n}\rangle \\ &{}+\langle Ay_{n}, y_{n}-x_{n+1}\rangle\bigr) \\ =&\|x_{n}-u\|^{2}-\|x_{n}-x_{n+1} \|^{2}+2\lambda_{n}\bigl(\langle Ay_{n}-Au, u-y_{n}\rangle \\ &{}+\langle Au, u-y_{n}\rangle+\langle Ay_{n}, y_{n}-x_{n+1}\rangle\bigr) \\ \leq&\|x_{n}-u\|^{2}-\|x_{n}-x_{n+1} \|^{2}+2\lambda_{n}\langle Ay_{n}, y_{n}-x_{n+1}\rangle \\ =&\|x_{n}-u\|^{2}-\|x_{n}-y_{n} \|^{2}-2\langle x_{n}-y_{n}, y_{n}-x_{n+1} \rangle \\ &{}-\|y_{n}-x_{n+1}\|^{2}+2\lambda_{n} \langle Ay_{n}, y_{n}-x_{n+1}\rangle \\ =&\|x_{n}-u\|^{2}-\|x_{n}-y_{n} \|^{2}-\|y_{n}-x_{n+1}\|^{2} \\ &{}+2\langle x_{n}-\lambda_{n}Ay_{n}-y_{n}, x_{n+1}-y_{n}\rangle. \end{aligned}$$

Then, from Lemma 2.1, we obtain

$$\begin{aligned}& \langle x_{n}-\lambda_{n}Ay_{n}-y_{n}, x_{n+1}-y_{n}\rangle \\& \quad = \langle x_{n}-\lambda_{n}Ax_{n}-y_{n}, x_{n+1}-y_{n}\rangle +\langle\lambda_{n}Ax_{n}- \lambda_{n}Ay_{n}, x_{n+1}-y_{n}\rangle \\& \quad \leq \langle\lambda_{n}Ax_{n}-\lambda_{n}Ay_{n}, x_{n+1}-y_{n}\rangle \\& \quad = \lambda_{n}\langle Ax_{n}-Ay_{n}, x_{n+1}-y_{n}\rangle \\& \quad \leq \lambda_{n}\|Ax_{n}-Ay_{n}\| \|x_{n+1}-y_{n}\| \\& \quad \leq \lambda_{n}k\|x_{n}-y_{n}\| \|x_{n+1}-y_{n}\|. \end{aligned}$$

So, we have

$$\begin{aligned} \|x_{n+1}-u\|^{2} \leq&\|x_{n}-u \|^{2}-\|x_{n}-y_{n}\|^{2}- \|y_{n}-x_{n+1}\|^{2} \\ &{}+2\lambda_{n}k\|x_{n}-y_{n}\| \|x_{n+1}-y_{n}\| \\ \leq&\|x_{n}-u\|^{2}-\|x_{n}-y_{n} \|^{2}-\|y_{n}-x_{n+1}\|^{2} \\ &{}+\lambda_{n}^{2}k^{2}\|x_{n}-y_{n} \|^{2}+\|x_{n+1}-y_{n}\| ^{2} \\ \leq&\|x_{n}-u\|^{2}+\bigl(\lambda_{n}^{2}k^{2}-1 \bigr)\|x_{n}-y_{n}\| ^{2} \\ \leq&\|x_{n}-u\|^{2}. \end{aligned}$$
(3.2)

Therefore \(\{\|x_{n}-u\|\}\) is nonincreasing and bounded below, so the limit

$$ c=\lim_{n\rightarrow\infty}\|x_{n}-u\| $$
(3.3)

exists and the sequence \(\{x_{n}\}\) is bounded. From (3.2), we also get

$$ \bigl(1-\lambda_{n}^{2}k^{2}\bigr) \|x_{n}-y_{n}\|^{2}\leq\|x_{n}-u \|^{2}-\| x_{n+1}-u\|^{2}. $$

So, we obtain

$$ \|x_{n}-y_{n}\|^{2}\leq \frac{1}{1-\lambda_{n}^{2}k^{2}}\bigl(\|x_{n}-u\| ^{2}-\|x_{n+1}-u \|^{2}\bigr). $$
(3.4)

Since \(\lambda_{n}\leq b<\frac{1}{k}\), we have \(1-\lambda_{n}^{2}k^{2}\geq1-b^{2}k^{2}>0\); hence, by (3.3),

$$ x_{n}-y_{n}\rightarrow0,\quad n\rightarrow \infty. $$
(3.5)

On the other hand, we have

$$\begin{aligned} \Vert x_{n+1}-y_{n}\Vert =&\bigl\Vert P_{C}(x_{n}-\lambda_{n}Ay_{n})-P_{C}(x_{n}- \lambda_{n}Ax_{n})\bigr\Vert \\ \leq&\bigl\Vert (x_{n}-\lambda_{n}Ay_{n})-(x_{n}- \lambda_{n}Ax_{n})\bigr\Vert \\ =&\Vert \lambda_{n}Ax_{n}-\lambda_{n}Ay_{n} \Vert \\ =&\lambda_{n}\Vert Ax_{n}-Ay_{n}\Vert \\ \leq&\lambda_{n}k\Vert x_{n}-y_{n}\Vert . \end{aligned}$$
(3.6)

Hence

$$ x_{n+1}-y_{n}\rightarrow0,\quad n\rightarrow \infty. $$
(3.7)

Since A is Lipschitz continuous, we get

$$ Ax_{n+1}-Ay_{n}\rightarrow0,\quad n\rightarrow \infty. $$
(3.8)

From

$$ \|x_{n+1}-x_{n}\|\leq\|x_{n+1}-y_{n}\|+ \|y_{n}-x_{n}\|, $$

we have

$$ x_{n+1}-x_{n}\rightarrow0, \quad n\rightarrow \infty. $$
(3.9)

Since \(\{x_{n}\}\) is bounded, there is a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) that converges weakly to a point z. We prove that \(z\in \operatorname{VI}(C,A)\). From (3.5) and (3.9), we have \(y_{n_{i}}\rightharpoonup z\) and \(x_{n_{i}+1}\rightharpoonup z\).

Let

$$ Tv= \left \{ \textstyle\begin{array}{l@{\quad}l} Av+N_{C}v,& v\in C, \\ \emptyset,& v\notin C. \end{array}\displaystyle \right . $$

From Lemma 2.3, we know that T is maximal monotone and \(0\in Tv\) if and only if \(v\in \operatorname{VI}(C,A)\).

For each \((v,w)\in G(T)\), we have

$$ w\in Tv=Av+N_{C}v. $$

Hence

$$ w-Av\in N_{C}v. $$

So, we obtain

$$ \langle v-p, w-Av\rangle\geq0,\quad \forall p\in C. $$
(3.10)

On the other hand, from \(v\in C\) and

$$ x_{n+1}=P_{C}(x_{n}-\lambda_{n}Ay_{n}), $$

we get

$$ \langle x_{n}-\lambda_{n}Ay_{n}-x_{n+1}, x_{n+1}-v\rangle\geq0 $$

and hence

$$ \biggl\langle v-x_{n+1}, \frac{x_{n+1}-x_{n}}{\lambda_{n}}+Ay_{n} \biggr\rangle \geq0. $$
(3.11)

Therefore from (3.10) and (3.11), we obtain

$$\begin{aligned}& \langle v-x_{n_{i}+1}, w\rangle \\& \quad \geq \langle v-x_{n_{i}+1}, Av\rangle \\& \quad \geq \langle v-x_{n_{i}+1}, Av\rangle -\biggl\langle v-x_{n_{i}+1}, \frac{x_{n_{i}+1}-x_{n_{i}}}{\lambda _{n_{i}}}+Ay_{n_{i}}\biggr\rangle \\& \quad = \langle v-x_{n_{i}+1}, Av-Ay_{n_{i}}\rangle -\biggl\langle v-x_{n_{i}+1}, \frac{x_{n_{i}+1}-x_{n_{i}}}{\lambda _{n_{i}}}\biggr\rangle \\& \quad = \langle v-x_{n_{i}+1}, Av-Ax_{n_{i}+1}\rangle+\langle v-x_{n_{i}+1}, Ax_{n_{i}+1}-Ay_{n_{i}}\rangle \\& \qquad {}-\biggl\langle v-x_{n_{i}+1}, \frac{x_{n_{i}+1}-x_{n_{i}}}{\lambda _{n_{i}}}\biggr\rangle \\& \quad \geq \langle v-x_{n_{i}+1}, Ax_{n_{i}+1}-Ay_{n_{i}} \rangle-\biggl\langle v-x_{n_{i}+1}, \frac{x_{n_{i}+1}-x_{n_{i}}}{\lambda_{n_{i}}}\biggr\rangle . \end{aligned}$$
(3.12)

Letting \(i\rightarrow\infty\) in (3.12) and using (3.8), (3.9), \(\lambda_{n_{i}}\geq a>0\), and \(x_{n_{i}+1}\rightharpoonup z\), we have

$$ \langle v-z, w\rangle\geq0. $$
(3.13)

Since T is maximal monotone, we have \(0\in Tz\) and hence \(z\in \operatorname{VI}(C,A)\).

Since the above argument shows that every weak cluster point of \(\{x_{n}\}\) belongs to \(\operatorname{VI}(C,A)\), we have \(\omega_{w}(x_{n})\subset \operatorname{VI}(C,A)\); together with (3.3), Lemma 2.4 gives

$$ x_{n}\rightharpoonup z\in \operatorname{VI}(C,A). $$
(3.14)

Since \(x_{n}-y_{n}\rightarrow0\), we also have

$$ y_{n}\rightharpoonup z\in \operatorname{VI}(C,A). $$
(3.15)

Finally, since (3.2) holds for every \(u\in \operatorname{VI}(C,A)\), Lemma 2.5 applied with C replaced by \(\operatorname{VI}(C,A)\) yields

$$ z=\lim_{n\rightarrow\infty}P_{\operatorname{VI}(C,A)}x_{n}. $$
(3.16)

 □

4 Application

This method is useful for nonlinear analysis and optimization problems in Hilbert spaces. In this section, we apply Theorem 3.1 to obtain three weak convergence theorems: for the equilibrium problem, the constrained convex minimization problem, and the split feasibility problem.

Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let F be a bifunction of \(C\times C\) into \(\mathbb{R}\). The equilibrium problem [6] is to find \(x^{*}\in C\) such that

$$ F\bigl(x^{*},y\bigr)\geq0,\quad \forall y\in C. $$
(4.1)

The set of solutions of problem (4.1) is denoted by \(\operatorname{EP}(F)\).

Lemma 4.1

Let C be a nonempty closed convex subset of a real Hilbert space H. Let F be a bifunction of \(C\times C\) into \(\mathbb{R}\) satisfying the properties:

  1. (A1)

\(F(x,x)=0\) for all \(x\in C\);

  2. (A2)

    for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and differentiable.

Then \(z\in \operatorname{EP}(F)\) if and only if \(z\in \operatorname{VI}(C,S)\), where \(Sx= \nabla F_{y}(x,y)|_{y=x}\).

Proof

Let \(z\in \operatorname{EP}(F)\). For each \(y\in C\), \(z+\lambda(y-z)=\lambda y+(1-\lambda)z\in C\) for all \(\lambda\in(0,1)\), so \(F(z,z+\lambda(y-z))\geq0\). Since \(y\mapsto F(z,y)\) is differentiable and \(F(z,z)=0\) by (A1), we obtain

$$ \langle Sz, y-z\rangle=\lim_{\lambda\rightarrow0^{+}} \frac{F(z,z+\lambda(y-z))-F(z,z)}{\lambda}\geq0. $$

Conversely, suppose \(z\in \operatorname{VI}(C,S)\), i.e., \(\langle\nabla F_{y}(z,y)|_{y=z}, y-z\rangle\geq0\) for all \(y\in C\). Since \(y\mapsto F(z,y)\) is convex and differentiable, \(F(z,y)\geq F(z,z)+\langle Sz, y-z\rangle\geq F(z,z)=0\). □

Applying Theorem 3.1 and Lemma 4.1, we obtain the following result.

Theorem 4.2

Let C be a nonempty closed convex subset of a real Hilbert space H. Let F be a bifunction of \(C\times C\) into \(\mathbb{R}\) satisfying (A1) and (A2). Assume that S is monotone and k-Lipschitz continuous and \(\operatorname{EP}(F)\neq\emptyset\). Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be sequences generated by \(x_{0}=x\in C\) and

$$ \left \{ \textstyle\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda_{n}Sx_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda_{n}Sy_{n}), \end{array}\displaystyle \right . $$
(4.2)

for every \(n=0,1,2,\ldots\) , where \(S(x)=\nabla F_{y}(x,y)|_{y=x}\) and \(\{\lambda_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{k})\). Then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge weakly to the same point \(z\in \operatorname{EP}(F)\), where \(z=\lim_{n\rightarrow\infty} P_{\operatorname{EP}(F)}x_{n}\).

Proof

Putting \(A=S\) in Theorem 3.1, we get the desired result by Lemma 4.1. □
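To make Theorem 4.2 concrete, here is a toy instance of ours (not from the paper): take \(C=[0,2]\) and \(F(x,y)=(y-x)(x-1)\), which satisfies (A1) and (A2); then \(Sx=\nabla F_{y}(x,y)|_{y=x}=x-1\) is monotone and 1-Lipschitz continuous, and \(\operatorname{EP}(F)=\{1\}\). A minimal Python sketch of scheme (4.2):

```python
# Toy equilibrium problem: F(x, y) = (y - x)(x - 1) on C = [0, 2],
# so S(x) = d/dy F(x, y)|_{y=x} = x - 1 (monotone, k = 1) and EP(F) = {1}.
S = lambda x: x - 1.0
proj_C = lambda x: min(max(x, 0.0), 2.0)   # P_C for the interval [0, 2]

x = 0.0                                    # x_0 in C
for n in range(100):
    lam = 0.5                              # constant lam_n in [a, b] ⊂ (0, 1/k) = (0, 1)
    y = proj_C(x - lam * S(x))             # scheme (4.2)
    x = proj_C(x - lam * S(y))
print(x)                                   # ≈ 1.0
```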

Consider the following constrained convex minimization problem [7]: Find \(x^{*}\in C\) such that

$$ f\bigl(x^{*}\bigr)=\min_{x\in C}f(x), $$
(4.3)

where C is a nonempty closed convex subset of a real Hilbert space H and f is a real-valued convex function.

Lemma 4.3

Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let f be a convex function of H into \(\mathbb{R}\). If f is differentiable, then z is a solution of (4.3) if and only if \(z\in \operatorname{VI}(C,\nabla f)\).

Proof

Let z be a solution of (4.3). For each \(x\in C\), \(z+\lambda(x-z)\in C\) for all \(\lambda\in(0,1)\), so \(f(z+\lambda(x-z))\geq f(z)\). Since f is differentiable, we have

$$ \bigl\langle \nabla f(z), x-z\bigr\rangle =\lim_{\lambda\rightarrow0^{+}} \frac{f(z+\lambda(x-z))-f(z)}{\lambda}\geq0. $$

Conversely, if \(z\in \operatorname{VI}(C,\nabla f)\), then \(\langle\nabla f(z), x-z\rangle\geq0\) for all \(x\in C\). Since f is convex, we have

$$ f(x)\geq f(z)+\bigl\langle \nabla f(z),x-z\bigr\rangle \geq f(z). $$

Hence z is a solution of (4.3). □

Applying Theorem 3.1 and Lemma 4.3, we obtain the following result.

Theorem 4.4

Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let f be a convex function of H into \(\mathbb{R}\). Assume that f is differentiable, that ∇f is k-Lipschitz continuous, and that the set of solutions of (4.3) is nonempty. Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be sequences generated by \(x_{0}=x\in C\) and

$$ \left \{ \textstyle\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda_{n}\nabla f(x_{n})), \\ x_{n+1}=P_{C}(x_{n}-\lambda_{n}\nabla f(y_{n})), \end{array}\displaystyle \right . $$
(4.4)

for every \(n=0,1,2,\ldots\) , where \(\{\lambda_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{k})\). Then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge weakly to the same point z, where z is a solution of (4.3).

Proof

Since f is convex, we see that ∇f is monotone. Putting \(A=\nabla f\) in Theorem 3.1, we obtain the desired result by Lemma 4.3. □

The split feasibility problem (SFP) [8–11], introduced by Censor and Elfving [11], is important in nonlinear analysis and optimization. The SFP is to find a point \(x^{*}\) such that

$$ x^{*}\in C\quad \mbox{and}\quad Bx^{*}\in Q, $$
(4.5)

where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and B is a bounded linear operator of \(H_{1}\) into \(H_{2}\).

Lemma 4.5

[8]

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C and Q be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let B be a bounded linear operator of \(H_{1}\) into \(H_{2}\). Assume that \(C\cap B^{-1}Q\) is nonempty. Let \(\lambda>0\). Then the following propositions are equivalent:

  1. (i)

    \(z\in \operatorname{VI}(C,B^{*}(I-P_{Q})B)\);

  2. (ii)

    \(z=P_{C}(I-\lambda B^{*}(I-P_{Q})B)z\);

  3. (iii)

    \(z\in C\cap B^{-1}Q\),

where \(B^{*}\) is the adjoint operator of B.

Lemma 4.6

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let B be a bounded linear operator of \(H_{1}\) into \(H_{2}\) such that \(B\neq0\). Let Q be a nonempty closed convex subset of \(H_{2}\). Then \(B^{*}(I-P_{Q})B\) is monotone and \(\|B\|^{2}\)-Lipschitz continuous.

Proof

Let \(x,y\in H_{1}\). Using the firm nonexpansiveness of \(I-P_{Q}\), that is, \(\langle(I-P_{Q})a-(I-P_{Q})b, a-b\rangle\geq\|(I-P_{Q})a-(I-P_{Q})b\|^{2}\), together with \(\|B^{*}u\|\leq\|B\|\|u\|\), we have

$$\begin{aligned}& \bigl\langle B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By,x-y \bigr\rangle \\& \qquad {}-\frac{1}{\Vert B\Vert ^{2}}\bigl\Vert B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By \bigr\Vert ^{2} \\& \quad = \bigl\langle B^{*}\bigl[(I-P_{Q})Bx-(I-P_{Q})By \bigr],x-y\bigr\rangle \\& \qquad {}-\frac{1}{\Vert B\Vert ^{2}}\bigl\Vert B^{*}\bigl[(I-P_{Q})Bx-(I-P_{Q})By \bigr]\bigr\Vert ^{2} \\& \quad = \bigl\langle (I-P_{Q})Bx-(I-P_{Q})By,Bx-By\bigr\rangle \\& \qquad {}-\frac{1}{\Vert B\Vert ^{2}}\bigl\Vert B^{*}\bigl[(I-P_{Q})Bx-(I-P_{Q})By \bigr]\bigr\Vert ^{2} \\& \quad \geq \bigl\Vert (I-P_{Q})Bx-(I-P_{Q})By\bigr\Vert ^{2} \\& \qquad {}-\bigl\Vert (I-P_{Q})Bx-(I-P_{Q})By\bigr\Vert ^{2} \\& \quad = 0. \end{aligned}$$

Hence

$$\begin{aligned}& \bigl\Vert B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By \bigr\Vert ^{2} \\& \quad \leq \Vert B\Vert ^{2}\bigl\langle B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By,x-y \bigr\rangle \\& \quad \leq \Vert B\Vert ^{2}\bigl\Vert B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By \bigr\Vert \Vert x-y\Vert . \end{aligned}$$

So, we obtain

$$ \bigl\Vert B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By \bigr\Vert \leq \Vert B\Vert ^{2}\Vert x-y\Vert . $$

On the other hand, we have

$$\begin{aligned}& \bigl\langle B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By,x-y \bigr\rangle \\& \quad \geq \frac{1}{\Vert B\Vert ^{2}}\bigl\Vert B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By \bigr\Vert ^{2} \\& \quad \geq 0. \end{aligned}$$

Then \(B^{*}(I-P_{Q})B\) is monotone and \(\|B\|^{2}\)-Lipschitz continuous. □

Applying Theorem 3.1 and Lemma 4.5, we obtain the following result.

Theorem 4.7

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C and Q be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\). Let \(B:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(B\neq0\). Assume that \(C\cap B^{-1}Q\) is nonempty. Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be sequences generated by \(x_{0}=x\in C\) and

$$ \left \{ \textstyle\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda_{n}B^{*}(I-P_{Q})Bx_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda_{n}B^{*}(I-P_{Q})By_{n}), \end{array}\displaystyle \right . $$
(4.6)

for every \(n=0,1,2,\ldots\) , where \(\{\lambda_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{\|B\|^{2}})\). Then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge weakly to the same point \(z\in C\cap B^{-1}Q\), where \(z=\lim_{n\rightarrow\infty}P_{C\cap B^{-1}Q}x_{n}\).

Proof

By Lemma 4.6, \(B^{*}(I-P_{Q})B\) is monotone and \(\|B\|^{2}\)-Lipschitz continuous. Putting \(A=B^{*}(I-P_{Q})B\) and \(k=\|B\|^{2}\) in Theorem 3.1, we get the desired result by Lemma 4.5. □

5 Numerical result

In this section, we apply our iterative method to specific numerical problems. Using the algorithms of Theorem 4.4 and Theorem 4.7, we illustrate their convergence on a constrained convex minimization problem and on a linear system of equations.

The first example is a constrained convex minimization problem for a function of one variable, using the algorithm of Theorem 4.4.

Example 1

In Theorem 4.4, suppose that \(H=\mathbb{R}\) and \(C=[0,2]\). Consider the constrained convex minimization problem (4.3) with the function

$$ f(x)=x^{3}-3x, \quad \forall x\in[0,2]. $$
(5.1)

Then the problem (4.3) can be written as

$$ \min_{x\in[0,2]}\bigl(x^{3}-3x\bigr). $$
(5.2)

It is easy to see that \(x^{*}=1\) solves problem (5.2). Moreover, ∇f is monotone and 12-Lipschitz continuous on C, since \(|\nabla f(x)-\nabla f(y)|=3|x+y||x-y|\leq12|x-y|\) for \(x,y\in[0,2]\). Take \(k=12\) and \(\lambda_{n}=\frac{1}{36(n+1)}+\frac{1}{36}\), so that \(\{\lambda_{n}\}\subset(\frac{1}{36},\frac{1}{18}]\subset(0,\frac{1}{12})\).

Then by Theorem 4.4, the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) are generated by

$$ \left \{ \textstyle\begin{array}{l} y_{n}=P_{C}[x_{n}-(\frac{1}{36(n+1)}+\frac {1}{36})(3x_{n}^{2}-3)], \\ x_{n+1}=P_{C}[x_{n}-(\frac{1}{36(n+1)}+\frac{1}{36})(3y_{n}^{2}-3)]. \end{array}\displaystyle \right . $$
(5.3)

As \(n\rightarrow\infty\), we have \(x_{n}\rightarrow x^{*}=1\).
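For reference, iteration (5.3) can be reproduced with the following short Python sketch, where \(P_{C}\) is realized by clipping to \([0,2]\); the starting point \(x_{0}=0\) and the iteration count are our own choices and need not match those behind Table 1.

```python
import numpy as np

proj_C = lambda x: np.clip(x, 0.0, 2.0)        # P_C for C = [0, 2]
grad_f = lambda x: 3.0 * x**2 - 3.0            # ∇f for f(x) = x^3 - 3x

x = 0.0                                        # hypothetical start x_0 = 0 in C
for n in range(60):
    lam = 1.0 / (36.0 * (n + 1)) + 1.0 / 36.0  # lam_n in (1/36, 1/18] ⊂ (0, 1/12)
    y = proj_C(x - lam * grad_f(x))            # iteration (5.3)
    x = proj_C(x - lam * grad_f(y))
print(x, abs(x - 1.0))                         # iterate and error |x_n - x*|, x ≈ 1.0
```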

From Table 1, we can see that, as the number of iterations increases, \(\{x_{n}\}\) approaches the solution \(x^{*}\) and the errors approach zero.

Table 1 Numerical results as regards Example 1

The second example is a \(3\times3\) linear system of equations, which uses the algorithm of Theorem 4.7.

Example 2

In Theorem 4.7, we suppose that \(H_{1}=H_{2}=\mathbb{R}^{3}\). Take

$$\begin{aligned}& A=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 4&3&0\\ 3&4&-1\\ 0&-1&4 \end{array}\displaystyle \right ), \end{aligned}$$
(5.4)
$$\begin{aligned}& b=\left ( \textstyle\begin{array}{@{}c@{}} 24\\ 30\\ -24 \end{array}\displaystyle \right ). \end{aligned}$$
(5.5)

Let \(B=A\), \(C=\mathbb{R}^{3}\), and \(Q=\{b\}\). Then the SFP (4.5) reduces to solving the linear system \(Ax=b\); its solution is

$$ x^{*}=\left ( \textstyle\begin{array}{@{}c@{}} 3\\ 4\\ -5 \end{array}\displaystyle \right ).$$
(5.6)

Then by Theorem 4.7, the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) are generated by

$$ \left \{ \textstyle\begin{array}{l} y_{n}=x_{n}-(\frac{1}{300(n+1)}+\frac {1}{300})(A^{*}Ax_{n}-A^{*}b), \\ x_{n+1}=x_{n}-(\frac{1}{300(n+1)}+\frac{1}{300})(A^{*}Ay_{n}-A^{*}b). \end{array}\displaystyle \right . $$
(5.7)

As \(n\rightarrow\infty\), we have \(x_{n}\rightarrow x^{*}\).
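Iteration (5.7) can likewise be sketched in Python; since \(Q=\{b\}\), we have \(B^{*}(I-P_{Q})Bx=A^{*}(Ax-b)\), and \(P_{C}\) is the identity because \(C=\mathbb{R}^{3}\). The starting point and iteration count below are our own choices.

```python
import numpy as np

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])
AtA, Atb = A.T @ A, A.T @ b           # A*(I - P_Q)A x = A*(Ax - b) since Q = {b}

x = np.zeros(3)                       # hypothetical start x_0 = 0
for n in range(20000):
    lam = 1.0 / (300.0 * (n + 1)) + 1.0 / 300.0  # lam_n in (1/300, 1/150] ⊂ (0, 1/||A||^2)
    y = x - lam * (AtA @ x - Atb)     # iteration (5.7); P_C is the identity on R^3
    x = x - lam * (AtA @ y - Atb)
print(x)                              # ≈ [3, 4, -5]
```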

From Table 2, we can also see that, as the number of iterations increases, \(x_{n}\) approaches the exact solution \(x^{*}\) and the errors approach zero.

Table 2 Numerical results as regards Example 2

6 Conclusion

The variational inequality problem is a very important field of study in mathematics. It not only plays an important role in optimization and nonlinear analysis, but is also widely used in many fields, such as economics, mechanics, and signal processing. Consequently, more and more researchers devote their efforts to the study of variational inequalities, chiefly to algorithms and their convergence, and to the existence and uniqueness of solutions. In a real Hilbert space, the gradient-projection method for solving the variational inequality problem with an inverse-strongly monotone mapping has been studied, but this method cannot be applied when the inverse-strong monotonicity assumption is weakened to mere monotonicity. For this reason we proposed a new iterative method. In this paper, we introduced an iterative method for finding an element of the set of solutions of a variational inequality problem with a monotone and Lipschitz continuous mapping in a Hilbert space. In particular, under certain conditions, the equilibrium problem, the constrained convex minimization problem, and the split feasibility problem are each equivalent to a variational inequality problem, and new weak convergence theorems were obtained for them. The algorithm in Theorem 3.1 improves and extends Korpelevich’s method [1] in the following ways:

  1. (i)

    The finite-dimensional Euclidean space \(\mathbb{R}^{n}\) is extended to the case of an infinite-dimensional Hilbert space H.

  2. (ii)

    The fixed coefficient λ is extended to the case of a sequence \(\{\lambda_{n}\}\).

The variational inequality problem continues to develop and to attract new researchers, and we expect further progress on it in the future.

References

  1. Korpelevich, GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747-756 (1976)

  2. Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)

  3. Takahashi, W, Nadezhkina, N: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)

  4. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)

  5. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417-428 (2003)

  6. Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)

  7. Ceng, LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 1(3), 341-359 (2011)

  8. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)

  9. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18(2), 441-453 (2002)

  10. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)

  11. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)


Acknowledgements

The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the Fundamental Research Funds for the Central Universities (Grant: 3122016L006). Ming Tian was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing.

Author information

Correspondence to Ming Tian.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Tian, M., Jiang, BN. Weak convergence theorem for variational inequality problems with monotone mapping in Hilbert space. J Inequal Appl 2016, 286 (2016). https://doi.org/10.1186/s13660-016-1237-3
