General viscosity iterative approximation for solving unconstrained convex optimization problems

Abstract

In this paper, we combine a sequence of contractive mappings \(\{h_{n}\}\) with the proximal operator and propose a generalized viscosity approximation method for solving unconstrained convex optimization problems in a real Hilbert space H. We show that, under reasonable parameter conditions, our algorithm converges strongly to the unique solution of a variational inequality problem. The results presented in this paper improve and extend corresponding results recently reported by many authors.

1 Introduction

Since its inception in 1978, the unconstrained minimization problem (1.1) has received much attention due to its applications in signal processing, image reconstruction and, in particular, compressed sensing. Throughout this paper, let H be a real Hilbert space with inner product \(\langle\cdot , \cdot \rangle\) and induced norm \(\|\cdot\|\). Let \(\Gamma _{0}(H)\) be the space of functions on H that are proper, lower semicontinuous and convex. We deal with the unconstrained convex optimization problem of the following type:

$$ \min_{x\in H}f(x)+g(x), $$
(1.1)

where \(f,g\in\Gamma_{0}(H)\). In general, f is differentiable and g is only subdifferentiable.

As we know, problem (1.1) was first studied in [1] and provides a natural vehicle for studying various generic optimization models under a common framework. Many methods have already been proposed to solve problem (1.1) (see [2–4]). Many important classes of optimization problems can be cast in this form, and problem (1.1) is a common problem in control theory. See, for instance, [3] for a special case of (1.1) in which the \(l_{1}\) norm is involved; since this norm promotes sparsity, good results can be obtained for the corresponding problem. We mention in particular the classical works [2] and [4], where many weak convergence results have been discussed.

Definition 1.1

(see [5, 6])

The proximal operator of \(\varphi\in\Gamma_{0}(H)\) is defined by

$$\operatorname{prox}_{\varphi}(x)=\operatorname{arg}\min _{\nu\in H}\biggl\{ \varphi(\nu)+\frac{1}{2}\|\nu-x\| ^{2}\biggr\} ,\quad x\in H. $$

The proximal operator of φ of order \(\lambda>0\) is defined as the proximal operator of λφ, that is,

$$\operatorname{prox}_{\lambda\varphi}(x)=\operatorname{arg}\min _{\nu\in H}\biggl\{ \varphi(\nu)+\frac {1}{2\lambda}\|\nu-x \|^{2}\biggr\} , \quad x\in H . $$
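For a concrete feel of this definition, consider \(\varphi=|\cdot|\) on \(\mathbb{R}\): its proximal operator of order λ is the well-known soft-thresholding map. The following Python snippet (a minimal illustrative sketch, not part of the original text; NumPy is assumed) compares the closed form with a brute-force minimization of the defining objective.

```python
import numpy as np

def prox_abs(x, lam):
    """Closed form of prox_{lam*|.|}: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_abs_brute(x, lam):
    """Approximate argmin_v { |v| + (1/(2*lam)) * (v - x)^2 } on a fine grid."""
    grid = np.linspace(-10.0, 10.0, 200001)
    return grid[np.argmin(np.abs(grid) + (grid - x) ** 2 / (2.0 * lam))]

for x, lam in [(3.0, 2.0), (1.0, 2.0), (-0.7, 0.5)]:
    print(x, lam, prox_abs(x, lam), prox_abs_brute(x, lam))
```

For instance, with \(x=3\) and \(\lambda=2\) both computations return 1, in agreement with the definition.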

Lemma 1.2

(see [4])

Let \(f,g\in\Gamma_{0}(H)\). Let \(x^{*}\in H\) and \(\lambda>0\). Assume that f is finite-valued and differentiable on H. Then \(x^{*}\) is a solution to (1.1) if and only if \(x^{*}\) solves the fixed point equation

$$ x^{*}=\bigl(\operatorname{prox}_{\lambda g}(I-\lambda\nabla f) \bigr)x^{*}. $$
(1.2)
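This characterization can be seen from the first-order optimality condition (a standard argument, using the fact that \(\operatorname{prox}_{\lambda g}=(I+\lambda\partial g)^{-1}\)): \(x^{*}\) minimizes \(f+g\) if and only if

$$0\in\nabla f\bigl(x^{*}\bigr)+\partial g\bigl(x^{*}\bigr)\quad \Longleftrightarrow\quad x^{*}-\lambda\nabla f\bigl(x^{*}\bigr)\in(I+\lambda\partial g)x^{*}\quad \Longleftrightarrow\quad x^{*}=\operatorname{prox}_{\lambda g}\bigl(x^{*}-\lambda\nabla f\bigl(x^{*}\bigr)\bigr). $$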

The fixed point equation (1.2) immediately yields the following fixed point algorithm, also known as the proximal algorithm [7], for solving (1.1).

Initialize \(x_{0}\in H\) and iterate

$$ x_{n+1}=\bigl(\operatorname{prox}_{\lambda_{n}g}(I- \lambda_{n}\nabla f)\bigr)x_{n}, $$
(1.3)

where \(\{\lambda_{n}\}\) is a sequence of positive real numbers. Combettes and Wajs [8] proved that this algorithm converges weakly. Recently, Xu [4] introduced the relaxed proximal algorithm.

Initialize \(x_{0}\in H\) and iterate

$$ x_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}\bigl(\operatorname{prox}_{\lambda_{n}g}(I- \lambda_{n}\nabla f)\bigr)x_{n}, $$
(1.4)

where \(\{\alpha_{n}\}\) is a sequence of relaxation parameters and \(\{\lambda_{n}\}\) is a sequence of positive real numbers, and obtained weak convergence.
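For a computational view of iterations (1.3) and (1.4), the following Python sketch (an illustration only, not code from [4] or [8]; the data A, b and all parameter choices are placeholders) applies them to the model case \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\), \(g(x)=\tau\|x\|_{1}\), whose proximal operator is componentwise soft-thresholding.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (componentwise)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_algorithm(A, b, tau, n_iter=500, relaxation=None):
    """Iteration (1.3); if a relaxation sequence is supplied, the relaxed scheme (1.4)."""
    L = np.linalg.norm(A.T @ A, 2)        # Lipschitz constant of grad f
    lam = 1.0 / L                          # a fixed lambda_n in (0, 2/L)
    x = np.zeros(A.shape[1])
    for n in range(n_iter):
        grad = A.T @ (A @ x - b)                           # grad f(x)
        v = soft_threshold(x - lam * grad, lam * tau)      # prox_{lambda_n g}(x - lambda_n grad f(x))
        alpha = 1.0 if relaxation is None else relaxation(n)
        x = (1 - alpha) * x + alpha * v                    # alpha = 1 gives (1.3), otherwise (1.4)
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
x_plain = proximal_algorithm(A, b, tau=0.1)
x_relaxed = proximal_algorithm(A, b, tau=0.1, relaxation=lambda n: 0.5)
```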

However, it is well known that strongly convergent algorithms are very important for solving infinite-dimensional problems, and viscosity techniques can effectively turn weakly convergent iterative algorithms into strongly convergent ones under appropriate conditions. Recently, based on an idea introduced in the work of Moudafi and Thakur [9], Yao et al. [10], Shehu [11] and Shehu et al. [12, 13] proposed some iterative algorithms for solving proximal split feasibility problems, which are related to problem (1.1), and obtained strong convergence.

In this paper, motivated by the works [4, 9–14], we combine a sequence of contractive mappings \(\{h_{n}\}\) with the proximal operator and propose a generalized viscosity approximation method for solving problem (1.1). We present our main iterative scheme and obtain a strong convergence theorem for solving unconstrained convex minimization problems by this general iterative method. Moreover, the limit of the iterative sequence is also the unique solution of the variational inequality problem (3.1). Further, an example is given to demonstrate the effectiveness of our iterative scheme.

2 Preliminaries

We adopt the following notations.

Let \(T: H\rightarrow H\) be a self-mapping of H.

  1. (1)

    \(x_{n}\rightarrow x\) stands for the strong convergence of \(\{x_{n}\}\) to x; \(x_{n}\rightharpoonup x\) stands for the weak convergence of \(\{x_{n}\} \) to x.

  2. (2)

    Use \(\operatorname{Fix}(T)\) to denote the set of fixed points of T; that is, \(\operatorname{Fix}(T)=\{x\in H: Tx=x\}\).

  3. (3)

    \(\omega_{w}(x_{n}):=\{x : \exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).

In this paper, in order to prove our result, we collect some facts and tools in a Hilbert space H. We shall make full use of the following lemmas, definitions and propositions in a real Hilbert space H.

Lemma 2.1

Let H be a real Hilbert space. There holds the following inequality:

$$\|x+y\|^{2}\leq\|x\|^{2}+2\langle y,x+y\rangle,\quad \forall x,y\in H. $$

Recall that given a closed subset C of a real Hilbert space H, for any \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that

$$\|x-P_{C}x\|\leq\|x-y\| \quad \mbox{for all }y\in C. $$

Such \(P_{C}x\) is called the metric (or the nearest point) projection of H onto C.

Lemma 2.2

(see [15])

Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(y\in C\), then \(y=P_{C}x\) if and only if we have the relation

$$\langle x-y,y-z\rangle\geq0 \quad \textit{for all }z\in C. $$

Definition 2.3

A mapping \(F:H\rightarrow H\) is said to be:

  1. (i)

    Lipschitzian if there exists a positive constant L such that

    $$\|F x-Fy\|\leq L\|x-y\|,\quad \forall x,y\in H. $$

    In particular, if \(L=1\), we say F is nonexpansive, namely

    $$\|Fx-Fy\|\leq\|x-y\|,\quad \forall x,y\in H; $$

    if \(L\in(0,1)\), we say F is contractive, namely

    $$\|Fx-Fy\|\leq L\|x-y\|, \quad \forall x,y\in H. $$
  2. (ii)

    α-averaged mapping (α-av for short) if

    $$F=(1-\alpha)I+\alpha T, $$

    where \(\alpha\in(0,1)\) and \(T:H\rightarrow H\) is nonexpansive.

Lemma 2.4

(see [16])

Let \(h:H\rightarrow H\) be a ρ-contraction with \(\rho\in(0,1)\) and \(T:H\rightarrow H\) be a nonexpansive mapping. Then:

  1. (i)

    \(I-h\) is \((1-\rho)\)-strongly monotone:

    $$\bigl\langle (I-h)x-(I-h)y,x-y\bigr\rangle \geq(1-\rho)\|x-y\|^{2},\quad \forall x,y\in H. $$
  2. (ii)

    \(I-T\) is monotone:

    $$\bigl\langle (I-T)x-(I-T)y,x-y\bigr\rangle \geq0,\quad \forall x,y\in H. $$

Lemma 2.5

(see [17], Demiclosedness principle)

Let H be a real Hilbert space, and let \(T:H\rightarrow H \) be a nonexpansive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). If \(\{x_{n}\}\) is a sequence in H weakly converging to x and if \(\{(I-T)x_{n}\}\) converges strongly to y, then \((I-T)x=y\); in particular, if \(y=0\), then \(x\in \operatorname{Fix}(T)\).

Lemma 2.6

(see [7] or [18])

Assume that \(\{s_{n}\}\) is a sequence of nonnegative real numbers such that

$$\begin{aligned}& s_{n+1}\leq(1-\gamma_{n})s_{n}+ \gamma_{n}\delta_{n},\quad n\geq0, \\& s_{n+1}\leq s_{n}-\eta_{n}+\varphi_{n}, \quad n\geq0, \end{aligned}$$

where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\), \((\eta_{n})\) is a sequence of nonnegative real numbers and \((\delta_{n})\) and \((\varphi_{n})\) are two sequences in \(\mathbb{R}\) such that

  1. (i)

    \(\sum_{n=0}^{\infty}\gamma_{n}=\infty\);

  2. (ii)

    \(\lim_{n\rightarrow\infty}\varphi_{n}=0\);

  3. (iii)

    \(\lim_{k\rightarrow\infty}\eta_{n_{k}}=0\) implies \(\limsup_{k\rightarrow\infty}\delta_{n_{k}}\leq0\) for any subsequence \((n_{k})\subset(n)\).

Then \(\lim_{n\rightarrow\infty}s_{n}=0\).

Proposition 2.7

(see [19])

If \(T_{1}, T_{2},\ldots, T_{n} \) are averaged mappings, we can get that \(T_{n}T_{n-1}\cdots T_{1}\) is averaged. In particular, if \(T_{i}\) is \(\alpha _{i}\)-av, \(i=1,2\), where \(\alpha_{i} \in(0,1)\), then \(T_{2}T_{1}\) is \((\alpha _{2}+\alpha_{1}-\alpha_{2}\alpha_{1})\)-av.

Proposition 2.8

(see[20])

If \(f:H\rightarrow\mathbb{R}\) is a differentiable functional, we denote by \(\nabla f\) the gradient of f. Assume that \(\nabla f\) is Lipschitz continuous on H with constant L. Then the operator \(V_{\lambda}=\operatorname{prox}_{\lambda g} (I-\lambda\nabla f)\) is \(\frac{2+\lambda L}{4}\)-av for each \(0<\lambda <\frac{2}{L}\).
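For intuition, the constant \(\frac{2+\lambda L}{4}\) can be recovered from Proposition 2.7 together with two standard facts (a sketch, not part of the original text): \(\operatorname{prox}_{\lambda g}\) is firmly nonexpansive and hence \(\frac{1}{2}\)-av, and, since \(\nabla f\) is L-Lipschitz, \(I-\lambda\nabla f\) is \(\frac{\lambda L}{2}\)-av for \(0<\lambda<\frac{2}{L}\); the composition is therefore averaged with constant \(\frac{1}{2}+\frac{\lambda L}{2}-\frac{1}{2}\cdot\frac{\lambda L}{2}=\frac{2+\lambda L}{4}\).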

Lemma 2.9

The proximal identity

$$ \operatorname{prox}_{\lambda\varphi}x=\operatorname{prox}_{\mu\varphi} \biggl(\frac{\mu}{\lambda}x+\biggl(1-\frac {\mu}{\lambda}\biggr)\operatorname{prox}_{\lambda\varphi}x \biggr) $$
(2.1)

holds for \(\varphi\in\Gamma_{0}(H)\), \(\lambda>0\) and \(\mu>0\).
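As a quick numerical sanity check of identity (2.1), the following Python snippet (an illustration under the assumption \(\varphi=\|\cdot\|_{1}\), whose proximal operator is componentwise soft-thresholding) verifies the identity on random vectors.

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(10)
    lam, mu = rng.uniform(0.1, 2.0, size=2)
    lhs = prox_l1(x, lam)
    rhs = prox_l1(mu / lam * x + (1 - mu / lam) * prox_l1(x, lam), mu)
    assert np.allclose(lhs, rhs)   # identity (2.1), checked componentwise
```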

3 Main results

In this section, we combine a sequence of contractive mappings and apply a more generalized viscosity iterative method for approximating the unique fixed point of the following variational inequality problem:

$$ \bigl\langle (I-h)x^{*},\tilde{x}-x^{*}\bigr\rangle \geq0,\quad \forall \tilde{x}\in \operatorname{Fix}(V_{\lambda}), $$
(3.1)

where \(h: H\rightarrow H\) is ρ-contractive.

Suppose that the sequence of contractions \(\{h_{n}\}\) converges uniformly on any bounded subset D of H. This uniform convergence on D is denoted by \(h_{n}(x) \rightrightarrows h(x)\) (\(n \rightarrow\infty\)), \(x\in D\).

Theorem 3.1

Let \(f,g\in\Gamma_{0}(H)\) and assume that (1.1) is consistent. Let \(\{h_{n}\}\) be a sequence of \(\rho_{n}\)-contractive self-maps of H with \(0\leq\rho_{l}=\liminf_{n\rightarrow\infty}\rho_{n}\leq\limsup_{n\rightarrow\infty}\rho_{n}=\rho_{u}<1\), and let \(V_{\lambda_{n}}=\operatorname{prox}_{\lambda _{n}g}(I-\lambda_{n}\nabla f)\), where \(\nabla f\) is L-Lipschitzian. Assume that \(\{h_{n}\}\) converges uniformly on any bounded subset D of H. Given \(x_{0}\in H\), define the sequence \(\{x_{n}\}\) by the following iterative algorithm:

$$ x_{n+1}=\alpha_{n}h_{n}(x_{n})+(1- \alpha_{n})V_{\lambda_{n}}x_{n}, $$
(3.2)

where \(\lambda_{n}\in(0,\frac{2}{L})\), \(\alpha_{n}\in(0,\frac{2+\lambda_{n} L}{4})\). Suppose that

  1. (i)

    \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

  2. (ii)

    \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  3. (iii)

    \(0<\liminf_{n\rightarrow\infty}\lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n}<\frac{2}{L}\).

Then \(\{x_{n}\}\) converges strongly to \(x^{*}\), where \(x^{*}\) is a solution of (1.1), which is also the unique solution of the variational inequality problem (3.1).
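Before giving the proof, we record a minimal computational sketch of scheme (3.2) (an illustration only; the callables grad_f, prox_g and h, as well as the parameter sequences below, are placeholders chosen to satisfy conditions (i)-(iii)).

```python
import numpy as np

def viscosity_proximal(x0, grad_f, prox_g, h, L, n_iter=1000):
    """Iteration (3.2): x_{n+1} = alpha_n h_n(x_n) + (1 - alpha_n) prox_{lambda_n g}(x_n - lambda_n grad f(x_n))."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        alpha_n = 1.0 / (n + 1)   # alpha_n -> 0, sum alpha_n = infinity, and alpha_n < (2 + lambda_n L)/4
        lam_n = 1.0 / L           # kept inside (0, 2/L), condition (iii)
        v = prox_g(x - lam_n * grad_f(x), lam_n)   # V_{lambda_n} x
        x = alpha_n * h(x, n) + (1 - alpha_n) * v
    return x
```

For example, with grad_f(x) = A.T @ (A @ x - b), prox_g given by soft-thresholding and h(x, n) = x/100, this reduces to the setting of Example 1 in Section 4.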

Proof

Let S be a nonempty solution set of (1.1).

Step 1. Show that \(\{x_{n}\}\) is bounded.

For any \(x^{*}\in S\),

$$\begin{aligned}& \bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\& \quad =\bigl\Vert \alpha_{n}h_{n}(x_{n})+(1- \alpha_{n})V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert \\& \quad =\bigl\Vert \alpha_{n}\bigl(h_{n}(x_{n})-x^{*} \bigr)+(1-\alpha_{n}) \bigl(V_{\lambda_{n}}x_{n}-x^{*}\bigr) \bigr\Vert \\& \quad \leq\alpha_{n}\bigl\Vert h_{n}(x_{n})-h_{n} \bigl(x^{*}\bigr)\bigr\Vert +\alpha_{n}\bigl\Vert h_{n} \bigl(x^{*}\bigr)-x^{*}\bigr\Vert \\& \qquad {}+(1-\alpha _{n})\bigl\Vert V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert \\& \quad \leq\alpha_{n}\rho_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert +\alpha_{n}\bigl\Vert h_{n}\bigl(x^{*}\bigr)-x^{*}\bigr\Vert +(1-\alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert \\& \quad \leq\alpha_{n}\rho_{u}\bigl\Vert x_{n}-x^{*}\bigr\Vert +\alpha_{n}\bigl\Vert h_{n}\bigl(x^{*}\bigr)-x^{*}\bigr\Vert +(1-\alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert \\& \quad =\bigl(1-\alpha_{n}(1-\rho_{u})\bigr)\bigl\Vert x_{n}-x^{*}\bigr\Vert +\alpha_{n}(1-\rho_{u}) \frac{\Vert h_{n}(x^{*})-x^{*}\Vert }{1-\rho_{u}}. \end{aligned}$$
(3.3)

From the uniform convergence of \(\{h_{n}\}\) on D, it is easy to get the boundedness of \(\{h_{n}(x^{*})\}\). Thus there exists a positive constant \(M_{1}\) such that \(\|h_{n}(x^{*})-x^{*}\|\leq M_{1}\). By induction, we obtain

$$\bigl\Vert x_{n}-x^{*}\bigr\Vert \leq\max\biggl\{ \bigl\Vert x_{0}-x^{*}\bigr\Vert , \frac{M_{1}}{1-\rho_{u}}\biggr\} , $$

which implies that the sequence \(\{x_{n}\}\) is bounded.

Step 2. Show that, for any subsequence \((n_{k})\subset(n)\) along which \(\eta_{n_{k}}\rightarrow0\) (\(\eta_{n}\) is defined below),

$$\lim_{k\rightarrow\infty}\|x_{n_{k}}-V_{\lambda_{n_{k}}}x_{n_{k}} \|=0. $$

Firstly, note that from (3.2) we have

$$\begin{aligned}& \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad =\bigl\Vert \alpha_{n}h_{n}(x_{n})+(1-\alpha_{n})V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert ^{2} \\& \quad =\bigl\Vert \alpha_{n}\bigl(h_{n}(x_{n})-x^{*}\bigr)+(1-\alpha_{n}) \bigl(V_{\lambda_{n}}x_{n}-x^{*}\bigr)\bigr\Vert ^{2} \\& \quad =\alpha_{n}^{2}\bigl\Vert h_{n}(x_{n})-x^{*}\bigr\Vert ^{2}+(1-\alpha_{n})^{2}\bigl\Vert V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert ^{2}+2\alpha_{n}(1-\alpha_{n})\bigl\langle h_{n}(x_{n})-x^{*},V_{\lambda_{n}}x_{n}-x^{*}\bigr\rangle \\& \quad \leq2\alpha_{n}^{2}\bigl(\bigl\Vert h_{n}(x_{n})-h_{n}\bigl(x^{*}\bigr)\bigr\Vert ^{2}+\bigl\Vert h_{n}\bigl(x^{*}\bigr)-x^{*}\bigr\Vert ^{2}\bigr) \\& \qquad {} +(1-\alpha_{n})^{2}\bigl\Vert V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert ^{2}+ 2\alpha_{n}(1-\alpha_{n})\bigl\langle h_{n}(x_{n})-x^{*},V_{\lambda_{n}}x_{n}-x^{*}\bigr\rangle \\& \quad \leq2\alpha_{n}^{2}\bigl(\rho_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert h_{n}\bigl(x^{*}\bigr)-x^{*}\bigr\Vert ^{2}\bigr) \\& \qquad {} +(1-\alpha_{n})^{2}\bigl\Vert V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert ^{2}+ 2\alpha_{n}(1-\alpha_{n})\bigl\langle h_{n}(x_{n})-x^{*},V_{\lambda_{n}}x_{n}-x^{*}\bigr\rangle \\& \quad \leq2\alpha_{n}^{2}\bigl(\rho_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert h_{n}\bigl(x^{*}\bigr)-x^{*}\bigr\Vert ^{2}\bigr)+ 2\alpha_{n}(1-\alpha_{n})\rho_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2} \\& \qquad {} +(1-\alpha_{n})^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+ 2\alpha_{n}(1-\alpha_{n})\bigl\langle h_{n}\bigl(x^{*}\bigr)-x^{*},V_{\lambda_{n}}x_{n}-x^{*}\bigr\rangle \\& \quad =\bigl[1-\alpha_{n}\bigl(2-\alpha_{n}\bigl(1+2{\rho_{n}}^{2}\bigr)-2(1-\alpha_{n})\rho_{n}\bigr)\bigr]\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2} \\& \qquad {} +2{\alpha_{n}}^{2}\bigl\Vert h_{n}\bigl(x^{*}\bigr)-x^{*}\bigr\Vert ^{2}+2\alpha_{n}(1-\alpha_{n})\bigl\langle h_{n}\bigl(x^{*}\bigr)-x^{*},V_{\lambda_{n}}x_{n}-x^{*}\bigr\rangle . \end{aligned}$$
(3.4)

Secondly, since \(V_{\lambda_{n}}\) is \(\frac{2+\lambda_{n}L}{4}\)-av, we can rewrite

$$ V_{\lambda_{n}}=\operatorname{prox}_{\lambda_{n}g}(I- \lambda_{n}\nabla f)=(1-w_{n})I+w_{n}T_{n}, $$
(3.5)

where \(w_{n}=\frac{2+\lambda_{n}L}{4}\), \(T_{n}\) is nonexpansive and, by condition (iii), we get \(\frac{1}{2}<\liminf_{n\rightarrow\infty}w_{n}\leq\limsup_{n\rightarrow \infty}w_{n}<1\). Thus, we have from (3.2) and (3.5)

$$\begin{aligned}& \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad =\bigl\Vert \alpha_{n}h_{n}(x_{n})+(1-\alpha_{n})V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert ^{2} \\& \quad =\bigl\Vert V_{\lambda_{n}}x_{n}-x^{*}+\alpha_{n}\bigl(h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr)\bigr\Vert ^{2} \\& \quad =\bigl\Vert V_{\lambda_{n}}x_{n}-x^{*}\bigr\Vert ^{2}+{\alpha_{n}}^{2}\bigl\Vert h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\Vert ^{2}+2\alpha_{n}\bigl\langle V_{\lambda_{n}}x_{n}-x^{*},h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\rangle \\& \quad =\bigl\Vert (1-w_{n})x_{n}+ w_{n}T_{n}x_{n}-x^{*}\bigr\Vert ^{2}+{\alpha_{n}}^{2}\bigl\Vert h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\Vert ^{2} \\& \qquad {}+2\alpha_{n}\bigl\langle V_{\lambda_{n}}x_{n}-x^{*},h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\rangle \\& \quad =(1-w_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+w_{n}\bigl\Vert T_{n}x_{n}-T_{n}x^{*}\bigr\Vert ^{2}-w_{n}(1-w_{n})\Vert T_{n}x_{n}-x_{n}\Vert ^{2} \\& \qquad {} +{\alpha_{n}}^{2}\bigl\Vert h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\Vert ^{2}+2\alpha_{n}\bigl\langle V_{\lambda_{n}}x_{n}-x^{*},h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\rangle \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- w_{n}(1-w_{n})\Vert T_{n}x_{n}-x_{n}\Vert ^{2}+{\alpha_{n}}^{2}\bigl\Vert h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\Vert ^{2} \\& \qquad {} +2\alpha_{n}\bigl\langle V_{\lambda_{n}}x_{n}-x^{*},h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\rangle . \end{aligned}$$
(3.6)

Set

$$\begin{aligned}& s_{n}=\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2} , \qquad \gamma_{n}=\alpha_{n}\bigl(2-\alpha_{n} \bigl(1+2{\rho _{n}}^{2}\bigr)-2(1-\alpha_{n}) \rho_{n}\bigr) , \\& \delta_{n}=\frac{1}{2-\alpha_{n}(1+2{\rho_{n}}^{2})-2(1-\alpha_{n})\rho _{n}}\bigl[2{\alpha_{n}}\bigl\Vert h_{n}\bigl(x^{*}\bigr)-x^{*}\bigr\Vert ^{2} \\& \hphantom{ \delta_{n}={}}{}+2(1-\alpha_{n})\bigl\langle h_{n} \bigl(x^{*}\bigr)-x^{*},V_{\lambda_{n}}x_{n}-x^{*}\bigr\rangle \bigr] , \\& \eta_{n}=w_{n}(1-w_{n})\Vert T_{n}x_{n}-x_{n}\Vert ^{2} , \\& \varphi_{n}={\alpha_{n}}^{2}\bigl\Vert h_{n}(x_{n})-V_{\lambda_{n}}x_{n}\bigr\Vert ^{2}+2\alpha _{n}\bigl\langle V_{\lambda_{n}}x_{n}-x^{*},h_{n}(x_{n})-V_{\lambda_{n}}x_{n} \bigr\rangle , \end{aligned}$$

since \(\gamma_{n}\rightarrow0\), \(\sum_{n=0}^{\infty}\gamma_{n}=\infty\) (\(\lim_{n\rightarrow\infty}(2-\alpha_{n}(1+2{\rho_{n}}^{2})-2(1-\alpha_{n})\rho _{n})=2(1-\rho_{u})>0\)) and \(\varphi_{n}\rightarrow0\) (\(\alpha_{n}\rightarrow0\)) hold obviously, so in order to complete the proof by using Lemma 2.6, it suffices to verify that \(\eta_{n_{k}}\rightarrow0\) (\(k\rightarrow\infty\)) implies that

$$\limsup_{k\rightarrow\infty}\delta_{n_{k}}\leq0 $$

for any subsequence \((n_{k})\subset(n)\).

Indeed, \(\eta_{n_{k}}\rightarrow0\) (\(k\rightarrow\infty\)) implies that \(\| T_{n_{k}}x_{n_{k}}-x_{n_{k}}\|\rightarrow0\) (\(k\rightarrow\infty\)) due to condition (iii). So, from (3.5), we have

$$ \|x_{n_{k}}-V_{\lambda_{n_{k}}}x_{n_{k}} \|=w_{n_{k}}\|x_{n_{k}}-T_{n_{k}}x_{n_{k}}\| \rightarrow0. $$
(3.7)

Step 3. Show that

$$ \omega_{w}(x_{n_{k}})\subset S. $$
(3.8)

Here, \(\omega_{w}(x_{n_{k}})\) is the set of all weak cluster points of \(\{x_{n_{k}}\}\). To see (3.8), we proceed as follows. Take \(\tilde{x} \in\omega_{w}(x_{n_{k}})\) and assume that \(\{x_{n_{k_{j}}}\}\) is a subsequence of \(\{x_{n_{k}}\}\) weakly converging to \(\tilde{x}\). Without loss of generality, we rewrite \(\{x_{n_{k_{j}}}\}\) as \(\{x_{n_{k}}\}\) and may assume that \(\lambda_{n_{k}}\rightarrow\lambda\); then \(0<\lambda<\frac{2}{L}\). Set \(V_{\lambda}=\operatorname{prox}_{\lambda g}(I-\lambda\nabla f)\); then \(V_{\lambda}\) is nonexpansive. Set

$$y_{k}=x_{n_{k}}-\lambda_{n_{k}}\nabla f(x_{n_{k}}), \qquad z_{k}=x_{n_{k}}-\lambda\nabla f(x_{n_{k}}). $$

Using the proximal identity of Lemma 2.9, we deduce that

$$\begin{aligned}& \Vert V_{\lambda_{n_{k}}}x_{n_{k}}-V_{\lambda}x_{n_{k}}\Vert \\& \quad =\Vert \operatorname{prox}_{\lambda_{n_{k}}g}y_{k}- \operatorname{prox}_{\lambda g}z_{k}\Vert \\& \quad =\biggl\Vert \operatorname{prox}_{\lambda g}\biggl( \frac{\lambda}{\lambda_{n_{k}}}y_{k}+ \biggl(1-\frac{\lambda}{\lambda_{n_{k}}}\biggr) \operatorname{prox}_{{\lambda _{n_{k}}}g}y_{k}\biggr)-\operatorname{prox}_{\lambda g}z_{k} \biggr\Vert \\& \quad \leq\biggl\Vert \frac{\lambda}{\lambda_{n_{k}}}y_{k}+\biggl(1- \frac{\lambda}{\lambda _{n_{k}}}\biggr)\operatorname{prox}_{\lambda_{n_{k}}g}y_{k}-z_{k} \biggr\Vert \\& \quad \leq\frac{\lambda}{\lambda_{n_{k}}}\Vert y_{k}-z_{k}\Vert + \biggl(1-\frac{\lambda}{\lambda _{n_{k}}}\biggr)\Vert \operatorname{prox}_{\lambda_{n_{k}}g}y_{k}-z_{k} \Vert \\& \quad =\frac{\lambda}{\lambda_{n_{k}}}|\lambda_{n_{k}}-\lambda|\bigl\Vert \nabla f(x_{n_{k}})\bigr\Vert +\biggl(1-\frac{\lambda}{\lambda_{n_{k}}}\biggr)\Vert \operatorname{prox}_{\lambda _{n_{k}}g}y_{k}-z_{k}\Vert . \end{aligned}$$
(3.9)

Since \(\{x_{n}\}\) is bounded, \(\nabla f\) is Lipschitz continuous and \(\lambda_{n_{k}}\rightarrow\lambda\), we immediately derive from the last relation that \(\|V_{\lambda_{n_{k}}}x_{n_{k}}-V_{\lambda}x_{n_{k}}\|\rightarrow 0\). As a result, we find

$$ \|x_{n_{k}}-V_{\lambda}x_{n_{k}}\|\leq \|x_{n_{k}}-V_{\lambda_{n_{k}}}x_{n_{k}}\|+\| V_{\lambda_{n_{k}}}x_{n_{k}}-V_{\lambda}x_{n_{k}}\|\rightarrow0. $$
(3.10)

Using Lemma 2.5, we get \(\omega_{w}(x_{n_{k}})\subset S\). Meanwhile, since \(\{h_{n}\}\) converges uniformly on D, we have

$$ \limsup_{k\rightarrow\infty}\bigl\langle h_{n_{k}}\bigl(x^{*}\bigr)-x^{*},x_{n_{k}}-x^{*}\bigr\rangle =\bigl\langle h\bigl(x^{*}\bigr)-x^{*},\tilde{x}-x^{*}\bigr\rangle ,\quad \mbox{where } \tilde{x} \in S. $$
(3.11)

Also, since \(x^{*}\) is the unique solution of the variational inequality problem (3.1), we get

$$\limsup_{k\rightarrow\infty}\bigl\langle h_{n_{k}}\bigl(x^{*}\bigr)-x^{*},x_{n_{k}}-x^{*}\bigr\rangle \leq0. $$

Since \(\|x_{n_{k}}-V_{\lambda_{n_{k}}}x_{n_{k}}\|\rightarrow0\) by (3.7), it follows that \(\limsup_{k\rightarrow\infty}\langle h_{n_{k}}(x^{*})-x^{*},V_{\lambda_{n_{k}}}x_{n_{k}}-x^{*}\rangle \leq0\), and hence \(\limsup_{k\rightarrow\infty}\delta_{n_{k}}\leq0\). Therefore, by Lemma 2.6, \(s_{n}=\|x_{n}-x^{*}\|^{2}\rightarrow0\), that is, \(x_{n}\rightarrow x^{*}\) strongly. □

4 Numerical result

In this section, we consider the following simple numerical example to demonstrate the effectiveness, realization and convergence of Theorem 3.1. Through this numerical example, we can see that the limit point generated by (3.2) is not only a solution of (1.1) but is also very close to a solution of the problem \(Ax=b\).

Example 1

Let \(H=\mathbb{R}^{m}\). Define \(h_{n}(x)=\frac{1}{100}x\). Take \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\); thus \(\nabla f(x)=A^{T}(Ax-b)\) with Lipschitz constant \(L=\|A^{T}A\|\), where \(A^{T}\) denotes the transpose of A. Take \(g(x)=\|x\|_{1}\); then \(\operatorname{prox}_{\lambda_{n}g}x=\arg\min_{v\in H}\{\lambda_{n}\|v\|_{1}+\frac{1}{2}\|v-x\|^{2}\}\). From [20], we also know that \(\operatorname{prox}_{\lambda_{n}\|\cdot\|_{1}}x=[\operatorname{prox}_{\lambda_{n}|\cdot |}x_{1}, \operatorname{prox}_{\lambda_{n}|\cdot|}x_{2},\ldots, \operatorname{prox}_{\lambda_{n}|\cdot |}x_{m}]^{T}\), where \(\operatorname{prox}_{\lambda_{n}|\cdot|}x_{i}=\max\{|x_{i}|-\lambda_{n},0\} \operatorname{sign}(x_{i})\) (\(i=1,2,\ldots,m\)). Take \(\alpha_{n}=\frac{1}{1\text{,}000n}\) for every \(n\geq1\), fix \(\lambda_{n}=\frac{1}{150\sqrt{L}}\), and generate a random matrix

$$ A= \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -4.3814 & -4.0867 & 9.8355 & -2.9770 & 11.4368 \\ -5.3162 & 9.7257 & -5.2225 & 1.7658 & 9.7074 \\ -4.1397 & -4.3827 & 20.0339 & 9.5099 & -4.3200 \\ 6.4894 & -3.6008 & 7.0589 & 14.1585 & -16.0452 \\ 10.2885 & 14.5797 & 0.4747 & 17.4626 & 1.5539 \\ -12.3712 & -21.9349 & -3.3341 & 7.1354 & 3.1741 \\ 4.1361 & -5.7709 & 1.4400 & -16.3867 & -7.6009 \\ -8.1879 & 5.1973 & -0.1416 & -11.5553 & -0.0952 \\ -6.8981 & -6.6670 & 8.6415 & 1.1342 & 3.9836 \\ 8.8397 & 1.8026 & 5.5085 & 6.8296 & 11.7061 \\ 4.7586 & 14.1223 & 0.2261 & -0.4787 & 17.0133 \\ -5.0971 & -0.0285 & 9.1987 & 1.4981 & 14.0493 \\ 10.3412 & 2.9157 & -7.7770 & 5.6670 & -13.8262 \\ 2.4447 & 8.0844 & 2.1304 & 8.7968 & 20.3888 \\ 9.2393 & 2.6692 & 6.4166 & 4.2549 & -13.1472 \\ -4.1641 & 12.2469 & -0.4358 & 5.8242 & -10.0650 \\ 0.6452 & 6.0029 & -13.6151 & 3.4759 & -1.8184 \end{array}\displaystyle \right )^{T} $$
(4.1)

and a random vector

$$ b= ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 189.0722 & 42.6524 & 635.1979 & 281.8669 & 538.5967 \end{array}\displaystyle )^{T}. $$
(4.2)

Then, by Theorem 3.1, the sequence \(\{x_{n}\}\) is generated by

$$ x_{n+1}=\frac{1}{1\text{,}000n}\cdot\frac{1}{100}x_{n}+ \biggl(1-\frac {1}{1\text{,}000n}\biggr)\operatorname{prox}_{\lambda_{n}\|\cdot\|_{1}} \bigl(x_{n}-\lambda_{n}A^{T}(Ax_{n}-b)\bigr). $$
(4.3)
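For reproducibility, here is a minimal Python sketch of iteration (4.3) (an illustration only; A and b should be taken from (4.1)-(4.2), and NumPy is assumed).

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def example_iteration(A, b, n_iter=200000, x0=None):
    """Scheme (4.3) with h_n(x) = x/100, alpha_n = 1/(1000 n), lambda_n = 1/(150 sqrt(L))."""
    L = np.linalg.norm(A.T @ A, 2)
    lam = 1.0 / (150.0 * np.sqrt(L))
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        alpha = 1.0 / (1000.0 * n)
        v = soft_threshold(x - lam * (A.T @ (A @ x - b)), lam)
        x = alpha * (x / 100.0) + (1 - alpha) * v
    return x
```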

As \(n\rightarrow\infty\), we have \(x_{n}\rightarrow x^{*}\). Then, taking two distinct initial guesses \(x_{0}\) and using Matlab, we obtain the numerical experiment results in Tables 1 and 2, where

$$\begin{aligned}& x_{56}= \left ( \textstyle\begin{array}{@{}c@{}} 7.1006\\ -2.9422\\ 8.8367\\ 2.8348\\ 4.5842\\ -3.9470\\ 0.0197\\ -4.4153\\ 3.6363\\ 10.2320\\ 5.8671\\ 7.0921\\ -3.9847\\ 7.8388\\ 3.5086\\ -5.5774\\ -8.0655 \end{array}\displaystyle \right ),\qquad x_{157}= \left ( \textstyle\begin{array}{@{}c@{}} 7.1148\\ -3.0244\\ 8.7745\\ 2.8338\\ 4.5735\\ -3.9616\\ 0.1036\\ -4.5064\\ 3.5960\\ 10.3482\\ 5.8919\\ 7.0715\\ -3.9318\\ 7.8632\\ 3.5382\\ -5.7408\\ -8.1055 \end{array}\displaystyle \right ),\qquad x_{223\text{,}462}= \left ( \textstyle\begin{array}{@{}c@{}} 4.2108\\ 0\\ 11.8683\\ 0\\ 0\\ -2.0724\\ 0\\ 0\\ 0\\ 24.9734\\ 2.5163\\ 2.4147\\ 0\\ 7.6589\\ 0\\ 0\\ -12.6599 \end{array}\displaystyle \right ), \end{aligned}$$
(4.4)
$$\begin{aligned}& x_{57}= \left ( \textstyle\begin{array}{@{}c@{}} 7.8596\\ -2.2018\\ 8.8707\\ 3.3315\\ 4.7086\\ -2.7415\\ 1.7903\\ -3.2518\\ 4.3445\\ 10.8322\\ 6.5156\\ 7.5856\\ -2.7904\\ 8.1903\\ 4.2573\\ -5.1121\\ -6.8677 \end{array}\displaystyle \right ),\qquad x_{154}= \left ( \textstyle\begin{array}{@{}c@{}} 7.8725\\ -2.2857\\ 8.8060\\ 3.3304\\ 4.6981\\ -2.7551\\ 1.8729\\ -3.3467\\ 4.3031\\ 10.9491\\ 6.5397\\ 7.5637\\ -2.7378\\ 8.2151\\ 4.2857\\ -5.2790\\ -6.9080 \end{array}\displaystyle \right ),\qquad x_{218\text{,}128}= \left ( \textstyle\begin{array}{@{}c@{}} 4.7788\\ 0\\ 12.0778\\ 0\\ 0\\ -2.0434\\ 0\\ 0\\ 0\\ 25.3370\\ 2.7797\\ 2.3643\\ 0\\ 7.0517\\ 0\\ 0\\ -11.9260 \end{array}\displaystyle \right ), \end{aligned}$$
(4.5)

where \(x_{n}\) is the iterate generated by the algorithm of Theorem 3.1. Thus the limit point of \(\{x_{n}\}\) is a solution of problem (1.1). In general, it is not easy to obtain an exact solution of the problem \(Ax=b\), and therefore many algorithms have been proposed to compute approximate solutions. A further analysis shows that \(x_{n}\) comes very close to satisfying \(Ax=b\). To some extent, we can say that Theorem 3.1 solves both (1.1) and \(Ax=b\). Further, as we know, many practical problems in applied sciences, such as signal processing and image reconstruction, are formulated as the problem \(Ax=b\), so our theorem is useful for solving those problems.

Table 1 \(\pmb{x_{0}=(0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0)^{T}}\)
Table 2 \(\pmb{x_{0}=(1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ )^{T}}\)

References

  1. Auslender, A: Minimisation de fonctions localement Lipschitziennes: applications a la programmation mi-convexe, mi-differentiable. In: Mangasarian, OL, Meyer, RR, Robinson, SM (eds.) Nonlinear Programming, vol. 3, pp. 429-460. Academic Press, New York (1978)

  2. Mahammand, AA, Naseer, S, Xu, HK: Properties and iterative methods for the Q-lasso. Abstr. Appl. Anal. 2013, Article ID 250943 (2013)

  3. Taheri, S, Mammadov, M, Seifollahi, S: Globally convergent algorithms for solving unconstrained optimization problems. Optimization (2015). doi:10.1080/02331934.2012.745529

  4. Xu, HK: Properties and iterative methods for the lasso and its variants. Chin. Ann. Math., Ser. B 35(3), 501-518 (2014)

  5. Moreau, JJ: Proprietes des applications ‘prox’. C. R. Acad. Sci. Paris, Sér. A Math. 256, 1069-1071 (1963)

  6. Moreau, JJ: Proximite et dualite dans un espace hilbertien. Bull. Soc. Math. Fr. 93, 279-299 (1965)

  7. He, SN, Yang, CP: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315 (2013). doi:10.1155/2013/942315

  8. Combettes, PL, Wajs, R: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168-1200 (2005)

  9. Moudafi, A, Thakur, BS: Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 8, 2099-2110 (2014)

  10. Yao, Z, Cho, SY, Kang, SM, Zhu, LJ: A regularized algorithm for the proximal split feasibility problem. Abstr. Appl. Anal. 2014, Article ID 894272 (2014). doi:10.1155/2014/894272

  11. Shehu, Y: Iterative methods for convex proximal split feasibility problems and fixed point problems. Afr. Math. (2015). doi:10.1007/s13370-015-0344-5

  12. Shehu, Y, Cai, G, Iyiola, OS: Iterative approximation of solutions for proximal split feasibility problems. Fixed Point Theory Appl. 2015, 123 (2015). doi:10.1186/s13663-015-0375-5

  13. Shehu, Y, Ogbuisi, FU: Convergence analysis for proximal split feasibility problems and fixed point problems. J. Appl. Math. Comput. 48, 221-239 (2015)

  14. Duan, PC, He, SN: Generalized viscosity approximation methods for nonexpansive mappings. Fixed Point Theory Appl. 2014, 68 (2014). doi:10.1186/1687-1812-2014-68

  15. Marino, G, Xu, HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336-346 (2007)

  16. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)

  17. Geobel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)

  18. Yang, CP, He, SN: General alternative regularization methods for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014, 203 (2014)

  19. Combettes, PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53, 475-504 (2004)

  20. Micchelli, CA, Shen, LX, Xu, YS: Proximity algorithms for image models: denoising. Inverse Probl. 27, 045009 (2011)

Acknowledgements

This work was supported by the Fundamental Research Funds for the Central Universities (3122015L007) and the National Nature Science Foundation of China (11501566).

Author information

Corresponding author

Correspondence to Peichao Duan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Duan, P., Song, M. General viscosity iterative approximation for solving unconstrained convex optimization problems. J Inequal Appl 2015, 334 (2015). https://doi.org/10.1186/s13660-015-0857-3
