
Multi-step iterative algorithms with regularization for triple hierarchical variational inequalities with constraints of mixed equilibria, variational inclusions, and convex minimization

Abstract

In this paper, we introduce and analyze a relaxed iterative algorithm by virtue of Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, the regularization method, and the averaged mapping approach to the gradient-projection algorithm. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inclusions, and the set of minimizers of a convex minimization problem (CMP), which is precisely the unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solving a hierarchical fixed point problem with constraints of finitely many GMEPs, finitely many variational inclusions, and the CMP. The results obtained in this paper improve and extend the corresponding results announced by many others.

MSC: 49J30, 47H09, 47J20, 49M05.

1 Introduction

Let C be a nonempty closed convex subset of a real Hilbert space H and let P_C be the metric projection of H onto C. Let S:C→H be a nonlinear mapping on C. We denote by Fix(S) the set of fixed points of S and by R the set of all real numbers. A mapping S:C→H is called L-Lipschitz continuous if there exists a constant L ≥ 0 such that

‖Sx − Sy‖ ≤ L‖x − y‖, ∀x,y∈C.

In particular, if L = 1, then S is called a nonexpansive mapping; if L∈[0,1), then S is called a contraction.

Let A:C→H be a nonlinear mapping on C. We consider the following variational inequality problem (VIP): find a point x∗∈C such that

⟨Ax∗, y − x∗⟩ ≥ 0, ∀y∈C.
(1.1)

The solution set of VIP (1.1) is denoted by VI(C,A).

The VIP (1.1) was first discussed by Lions [1]. There are many applications of VIP (1.1) in various fields; see, e.g., [2–5]. It is well known that, if A is a strongly monotone and Lipschitz continuous mapping on C, then VIP (1.1) has a unique solution. In 1976, Korpelevich [6] proposed an iterative algorithm for solving the VIP (1.1) in the Euclidean space R^n:

y_n = P_C(x_n − τAx_n),  x_{n+1} = P_C(x_n − τAy_n),  ∀n ≥ 0,

with τ > 0 a given number; this scheme is known as the extragradient method. The literature on the VIP is vast, and Korpelevich’s extragradient method has received great attention from many authors, who have improved it in various ways; see, e.g., [7–20] and the references therein.
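For the reader’s convenience, the following is a minimal numerical sketch of the extragradient method (our own illustration; the monotone affine operator A, the box C = [0,1]², the step size τ, and the iteration count are hypothetical choices, not data from the works cited above).

```python
import numpy as np

# Hypothetical monotone operator A (its skew-symmetric linear part makes it
# merely monotone, not strongly monotone) and the box constraint C = [0,1]^2.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x + np.array([-0.5, 0.5])
P_C = lambda x: np.clip(x, 0.0, 1.0)       # metric projection onto the box

tau = 0.3                                  # step size, small relative to Lip(A) = 1
x = np.array([1.0, 0.0])                   # arbitrary starting point in C

for n in range(200):
    y = P_C(x - tau * A(x))                # predictor: y_n = P_C(x_n - tau*A(x_n))
    x = P_C(x - tau * A(y))                # corrector: x_{n+1} = P_C(x_n - tau*A(y_n))

print(x)  # approx. [0.5, 0.5], where A vanishes, hence a solution of VIP (1.1)
```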

Let φ:C→R be a real-valued function, A:H→H be a nonlinear mapping, and Θ:C×C→R be a bifunction. In 2008, Peng and Yao [8] introduced the generalized mixed equilibrium problem (GMEP) of finding x∈C such that

Θ(x,y) + φ(y) − φ(x) + ⟨Ax, y − x⟩ ≥ 0, ∀y∈C.
(1.2)

We denote the set of solutions of GMEP (1.2) by GMEP(Θ,φ,A). The GMEP (1.2) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. The GMEP has been further considered and studied; see, e.g., [10, 16, 18, 19, 21–23]. In particular, if φ = 0, then GMEP (1.2) reduces to the generalized equilibrium problem (GEP), which is to find x∈C such that

Θ(x,y) + ⟨Ax, y − x⟩ ≥ 0, ∀y∈C.

It was introduced and studied by Takahashi and Takahashi [24]. The set of solutions of GEP is denoted by GEP(Θ,A).

If A = 0, then GMEP (1.2) reduces to the mixed equilibrium problem (MEP), which is to find x∈C such that

Θ(x,y) + φ(y) − φ(x) ≥ 0, ∀y∈C.

It was considered and studied in [25]. The set of solutions of MEP is denoted by MEP(Θ,φ).

If φ = 0 and A = 0, then GMEP (1.2) reduces to the equilibrium problem (EP), which is to find x∈C such that

Θ(x,y) ≥ 0, ∀y∈C.

It was considered and studied in [26, 27]. The set of solutions of EP is denoted by EP(Θ). It is worth mentioning that the EP is a unified model for several problems, namely variational inequality problems, optimization problems, saddle point problems, complementarity problems, fixed point problems, Nash equilibrium problems, etc.

It was assumed in [8] that Θ:C×C→R is a bifunction satisfying conditions (A1)-(A4) and φ:C→R is a lower semicontinuous and convex function with restriction (B1) or (B2), where

  • (A1) Θ(x,x) = 0 for all x∈C;

  • (A2) Θ is monotone, i.e., Θ(x,y) + Θ(y,x) ≤ 0 for any x,y∈C;

  • (A3) Θ is upper-hemicontinuous, i.e., for each x,y,z∈C,

    lim sup_{t→0⁺} Θ(tz + (1 − t)x, y) ≤ Θ(x,y);
  • (A4) Θ(x,·) is convex and lower semicontinuous for each x∈C;

  • (B1) for each x∈H and r > 0, there exist a bounded subset D_x⊆C and y_x∈C such that, for any z∈C∖D_x,

    Θ(z, y_x) + φ(y_x) − φ(z) + (1/r)⟨y_x − z, z − x⟩ < 0;
  • (B2) C is a bounded set.

Given a positive number r > 0, let T_r^{(Θ,φ)}:H→C denote the solution mapping of the auxiliary mixed equilibrium problem; that is, for each x∈H,

T_r^{(Θ,φ)}(x) := { y∈C : Θ(y,z) + φ(z) − φ(y) + (1/r)⟨y − x, z − y⟩ ≥ 0, ∀z∈C }.

On the other hand, let B be a single-valued mapping of C into H and R be a multivalued mapping with D(R) = C. Consider the following variational inclusion: find a point x∈C such that

0∈Bx + Rx.
(1.3)

We denote by I(B,R) the solution set of the variational inclusion (1.3). In particular, if B = R = 0, then I(B,R) = C. If B = 0, then problem (1.3) becomes the inclusion problem introduced by Rockafellar [28]. It is well known that problem (1.3) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, equilibria, game theory, etc. Let the set-valued mapping R:D(R)⊆H→2^H be maximal monotone. We define the resolvent operator J_{R,λ}:H→cl D(R) (the closure of D(R)) associated with R and λ as follows:

J_{R,λ}x = (I + λR)^{−1}x, ∀x∈H,

where λ is a positive number.
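As a concrete one-dimensional illustration (our own example): for H = R and R = ∂g, the subdifferential of the convex function g(t) = |t|, the resolvent J_{R,λ} = (I + λR)^{−1} is the familiar soft-thresholding operator; a quick numerical check also exhibits the firm nonexpansiveness recorded later in Lemma 2.11.

```python
import numpy as np

def J(x, lam):
    # Resolvent J_{R,lam} = (I + lam*R)^{-1} for R = subdifferential of g(t) = |t|:
    # solving y + lam*sign(y) ∋ x componentwise gives soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(J(x, lam=0.5))                       # -> [-1.5  0.   0.   0.   1. ]

# Firm nonexpansiveness: <Jx - Jy, x - y> >= ||Jx - Jy||^2 (cf. Lemma 2.11 below).
rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
d = J(u, 0.5) - J(v, 0.5)
assert np.dot(d, u - v) >= np.dot(d, d) - 1e-12
```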

In 1998, Huang [9] studied problem (1.3) in the case where R is maximal monotone and B is strongly monotone and Lipschitz continuous with D(R) = C = H. Subsequently, Zeng et al. [29] further studied problem (1.3) in a setting more general than that of Huang [9] and obtained the same strong convergence conclusion; they also gave a geometric convergence rate estimate for the approximate solutions. Various types of iterative algorithms for solving variational inclusions have since been further studied and developed; for more details, refer to [11, 12, 30, 31] and the references therein.

Let f:C→R be a convex and continuously Fréchet differentiable functional. Consider the convex minimization problem (CMP) of minimizing f over the constraint set C:

minimize { f(x) : x∈C }.
(1.4)

This problem and its special cases were considered and studied in [13, 14, 32–34]. We denote by Γ the set of minimizers of CMP (1.4). The gradient-projection algorithm (GPA) generates a sequence {x_n} determined by the gradient ∇f and the metric projection P_C:

x_{n+1} := P_C(x_n − λ∇f(x_n)), ∀n ≥ 0,
(1.5)

or, more generally,

x_{n+1} := P_C(x_n − λ_n∇f(x_n)), ∀n ≥ 0,
(1.6)

where, in both (1.5) and (1.6), the initial guess x_0 is taken from C arbitrarily and the parameters λ and λ_n are positive real numbers. The convergence of algorithms (1.5) and (1.6) depends on the behavior of the gradient ∇f. As a matter of fact, it is well known that, if ∇f is α-strongly monotone and L-Lipschitz continuous, then, for 0 < λ < 2α/L², the operator P_C(I − λ∇f) is a contraction; hence, the sequence {x_n} defined by the GPA (1.5) converges in norm to the unique solution of CMP (1.4). More generally, if {λ_n} is chosen to satisfy the property

0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2α/L²,

then the sequence {x_n} defined by the GPA (1.6) converges in norm to the unique minimizer of CMP (1.4). If the gradient ∇f is only assumed to be Lipschitz continuous, then {x_n} is in general only weakly convergent when H is infinite-dimensional (a counterexample is given in Section 5 of Xu [33]). Recently, Xu [33] used averaged mappings to study the convergence analysis of the GPA; this is an operator-oriented approach.
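To fix ideas, here is a minimal sketch of the GPA (1.5) for a strongly convex quadratic over a box (the data Q, c, the set C, and the step size λ are hypothetical choices satisfying 0 < λ < 2α/L², so that P_C(I − λ∇f) is a contraction as noted above).

```python
import numpy as np

# Hypothetical CMP: minimize f(x) = 0.5*x^T Q x - c^T x over C = [0,1]^2.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])     # grad f(x) = Qx - c is alpha-strongly
c = np.array([1.0, 2.0])                   # monotone and L-Lipschitz
grad_f = lambda x: Q @ x - c
P_C = lambda x: np.clip(x, 0.0, 1.0)

alpha, L = 1.0, 2.0                        # extreme eigenvalues of Q
lam = 0.9 * 2 * alpha / L**2               # any lam in (0, 2*alpha/L^2) works

x = np.zeros(2)
for n in range(500):
    x = P_C(x - lam * grad_f(x))           # x_{n+1} = P_C(x_n - lam*grad f(x_n))

print(x)  # -> [0.5, 1.0], the unique minimizer of f over C
```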

Very recently, Ceng and Al-Homidan [23] introduced and analyzed the following iterative algorithm by the hybrid steepest-descent viscosity method and derived its strong convergence under appropriate conditions.

Theorem CA (see [[23], Theorem 21])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let f:C→R be a convex functional with L-Lipschitz continuous gradient ∇f. Let M, N be two integers. Let Θ_k be a bifunction from C×C to R satisfying (A1)-(A4) and φ_k:C→R∪{+∞} be a proper lower semicontinuous and convex function, where k∈{1,2,…,M}. Let B_k:H→H and A_i:C→H be μ_k-inverse strongly monotone and η_i-inverse strongly monotone, respectively, where k∈{1,2,…,M}, i∈{1,2,…,N}. Let F:H→H be a κ-Lipschitzian and η-strongly monotone operator with positive constants κ,η > 0. Let V:H→H be an l-Lipschitzian mapping with constant l ≥ 0. Let 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Assume that Ω := ⋂_{k=1}^M GMEP(Θ_k, φ_k, B_k) ∩ ⋂_{i=1}^N VI(C, A_i) ∩ Γ ≠ ∅ and that either (B1) or (B2) holds. For arbitrarily given x_1∈H, let {x_n} be a sequence generated by

u_n = T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}B_M) T_{r_{M−1,n}}^{(Θ_{M−1},φ_{M−1})}(I − r_{M−1,n}B_{M−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}B_1)x_n,
v_n = P_C(I − λ_{N,n}A_N) P_C(I − λ_{N−1,n}A_{N−1}) ⋯ P_C(I − λ_{2,n}A_2) P_C(I − λ_{1,n}A_1)u_n,
x_{n+1} = s_nγVx_n + β_nx_n + ((1 − β_n)I − s_nμF)T_nv_n, ∀n ≥ 1,

where P_C(I − λ_n∇f) = s_nI + (1 − s_n)T_n (here T_n is nonexpansive and s_n = (2 − λ_nL)/4 ∈ (0, 1/2) for each λ_n∈(0, 2/L)). Assume that the following conditions hold:

  1. (i)

s_n∈(0, 1/2) for each λ_n∈(0, 2/L), and lim_{n→∞} s_n = 0 (⇔ lim_{n→∞} λ_n = 2/L);

  2. (ii)

{β_n} ⊂ (0,1) and 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1;

  3. (iii)

{λ_{i,n}} ⊂ [a_i, b_i] ⊂ (0, 2η_i) and lim_{n→∞} |λ_{i,n+1} − λ_{i,n}| = 0 for all i∈{1,2,…,N};

  4. (iv)

{r_{k,n}} ⊂ [e_k, f_k] ⊂ (0, 2μ_k) and lim_{n→∞} |r_{k,n+1} − r_{k,n}| = 0 for all k∈{1,2,…,M}.

Then {x_n} converges strongly as λ_n→2/L (⇔ s_n→0) to a point x∗∈Ω, which is the unique solution in Ω to the VIP:

⟨(μF − γV)x∗, p − x∗⟩ ≥ 0, ∀p∈Ω.

Equivalently, x∗ = P_Ω(I − (μF − γV))x∗.

In 2009, Yao et al. [35] considered the following hierarchical fixed point problem (HFPP): find hierarchically a fixed point of a nonexpansive mapping T with respect to another nonexpansive mapping S, namely, find x̃∈Fix(T) such that

⟨x̃ − Sx̃, x̃ − x⟩ ≤ 0, ∀x∈Fix(T).
(1.7)

The solution set of HFPP (1.7) is denoted by Λ. It is not hard to check that solving HFPP (1.7) is equivalent to the fixed point problem for the composite mapping P_{Fix(T)}S, i.e., finding x̃∈C such that x̃ = P_{Fix(T)}Sx̃. The authors of [35] introduced and analyzed the following iterative algorithm for solving HFPP (1.7):

y_n = β_nSx_n + (1 − β_n)x_n,  x_{n+1} = α_nVx_n + (1 − α_n)Ty_n,  ∀n ≥ 0.
(1.8)

Theorem YLM (see [[35], Theorem 3.2])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S and T be two nonexpansive mappings of C into itself. Let V:C→C be a fixed contraction with α∈(0,1). Let {α_n} and {β_n} be two sequences in (0,1). For any given x_0∈C, let {x_n} be the sequence generated by (1.8). Assume that the sequence {x_n} is bounded and that

  1. (i)

∑_{n=0}^∞ α_n = ∞;

  2. (ii)

lim_{n→∞} (1/α_n)|1/β_n − 1/β_{n−1}| = 0, lim_{n→∞} (1/β_n)|1 − α_{n−1}/α_n| = 0;

  3. (iii)

lim_{n→∞} β_n = 0, lim_{n→∞} α_n/β_n = 0 and lim_{n→∞} β_n²/α_n = 0;

  4. (iv)

Fix(T) ⊂ int C;

  5. (v)

there exists a constant k > 0 such that ‖x − Tx‖ ≥ k Dist(x, Fix(T)) for each x∈C, where Dist(x, Fix(T)) = inf_{y∈Fix(T)} ‖x − y‖. Then {x_n} converges strongly to x̃ = P_ΛVx̃, which solves the VIP: ⟨x̃ − Sx̃, x̃ − x⟩ ≤ 0, ∀x∈Fix(T).

Very recently, Iiduka [36, 37] considered a variational inequality with a variational inequality constraint over the set of fixed points of a nonexpansive mapping. Since this problem has a triple structure, in contrast with bilevel programming problems, hierarchical constrained optimization problems, and hierarchical fixed point problems, it is referred to as a triple hierarchical constrained optimization problem (THCOP). He presented some examples of the THCOP and developed iterative algorithms to find its solution; the convergence analysis of the proposed algorithms is also studied in [36, 37]. Since the original problem is a variational inequality, in this paper we call it a triple hierarchical variational inequality (THVI). Subsequently, Ceng et al. [38] introduced and considered the following THVI:

Problem I Let S,T:C→C be two nonexpansive mappings with Fix(T) ≠ ∅, V:C→H be a ρ-contractive mapping with constant ρ∈[0,1), and F:C→H be a κ-Lipschitzian and η-strongly monotone mapping with constants κ,η > 0. Let 0 < μ < 2η/κ² and 0 < γ ≤ τ, where τ = 1 − √(1 − μ(2η − μκ²)). Consider the following THVI: find x∗∈Ξ such that

⟨(μF − γV)x∗, x − x∗⟩ ≥ 0, ∀x∈Ξ,

in which Ξ denotes the solution set of the following hierarchical variational inequality (HVI): find z∗∈Fix(T) such that

⟨(μF − γS)z∗, z − z∗⟩ ≥ 0, ∀z∈Fix(T),

where the solution set Ξ is assumed to be nonempty.

The authors of [38] proposed both implicit and explicit iterative methods and studied the convergence of the sequences generated by these methods. In this paper, we introduce and study the following triple hierarchical variational inequality (THVI) with constraints of mixed equilibria, variational inclusions, and a convex minimization problem.

Problem II Let M, N be two integers. Let f:C→R be a convex functional with L-Lipschitz continuous gradient ∇f. Let Θ_k be a bifunction from C×C to R satisfying (A1)-(A4) and φ_k:C→R∪{+∞} be a proper lower semicontinuous and convex function, where k∈{1,2,…,M}. Let R_i:C→2^H be a maximal monotone mapping and let A_k:H→H and B_i:C→H be μ_k-inverse strongly monotone and η_i-inverse strongly monotone, respectively, where k∈{1,2,…,M}, i∈{1,2,…,N}. Let S:H→H be a nonexpansive mapping and {T_n}_{n=1}^∞ be a sequence of nonexpansive mappings on H. Let F:H→H be a κ-Lipschitzian and η-strongly monotone operator with positive constants κ,η > 0. Let V:H→H be an l-Lipschitzian mapping with constant l ≥ 0. Let 0 < μ < 2η/κ², 0 < γ ≤ τ, and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Consider the following triple hierarchical variational inequality (THVI): find x∗∈Ξ such that

⟨(μF − γV)x∗, x − x∗⟩ ≥ 0, ∀x∈Ξ,
(1.9)

where Ξ denotes the solution set of the following hierarchical variational inequality (HVI): find z∗∈Ω := ⋂_{n=1}^∞ Fix(T_n) ∩ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{i=1}^N I(B_i, R_i) ∩ Γ such that

⟨(μF − γS)z∗, z − z∗⟩ ≥ 0, ∀z∈Ω,
(1.10)

where the solution set Ξ is assumed to be nonempty.

Motivated and inspired by the above facts, we introduce and analyze a relaxed iterative algorithm by virtue of Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, the regularization method, and the averaged mapping approach to the GPA. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a common element x∗∈Ω := ⋂_{n=1}^∞ Fix(T_n) ∩ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{i=1}^N I(B_i, R_i) ∩ Γ of the fixed point set of infinitely many nonexpansive mappings {T_n}_{n=1}^∞, the solution set of finitely many GMEPs, the solution set of finitely many variational inclusions, and the set of minimizers of CMP (1.4), which is precisely the unique solution of the THVI (1.9). In addition, we also consider the application of the proposed algorithm to solving a hierarchical fixed point problem with constraints of finitely many GMEPs, finitely many variational inclusions, and CMP (1.4). That is, under very mild conditions, it is proven that the proposed algorithm converges strongly to the unique solution x∗∈Ω of the VIP: ⟨(γV − μF)x∗, x − x∗⟩ ≤ 0, ∀x∈Ω; equivalently, P_Ω(I − (μF − γV))x∗ = x∗. The results obtained in this paper improve and extend the corresponding results announced by many others.

2 Preliminaries

Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let C be a nonempty closed convex subset of H. We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x and x_n → x to indicate that the sequence {x_n} converges strongly to x. Moreover, we use ω_w(x_n) to denote the weak ω-limit set of the sequence {x_n}, i.e.,

ω_w(x_n) := { x∈H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n} }.

Recall that a mapping A:CH is called

  1. (i)

    monotone if

⟨Ax − Ay, x − y⟩ ≥ 0, ∀x,y∈C;
  2. (ii)

    η-strongly monotone if there exists a constant η>0 such that

⟨Ax − Ay, x − y⟩ ≥ η‖x − y‖², ∀x,y∈C;
  3. (iii)

    α-inverse strongly monotone if there exists a constant α>0 such that

⟨Ax − Ay, x − y⟩ ≥ α‖Ax − Ay‖², ∀x,y∈C.

It is obvious that if A is α-inverse strongly monotone, then A is monotone and (1/α)-Lipschitz continuous. Moreover, we also have, for all u,v∈C and λ > 0,

‖(I − λA)u − (I − λA)v‖² = ‖(u − v) − λ(Au − Av)‖² = ‖u − v‖² − 2λ⟨Au − Av, u − v⟩ + λ²‖Au − Av‖² ≤ ‖u − v‖² + λ(λ − 2α)‖Au − Av‖².
(2.1)

So, if λ ≤ 2α, then I − λA is a nonexpansive mapping from C to H.
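A quick numerical sanity check of (2.1), with an illustrative operator of our own choosing: a symmetric positive definite matrix Q with largest eigenvalue L defines a (1/L)-inverse strongly monotone mapping, so I − λQ should be nonexpansive whenever λ ≤ 2/L.

```python
import numpy as np

Q = np.diag([3.0, 1.0])          # A = Q is alpha-ism with alpha = 1/3 (largest eigenvalue 3)
alpha = 1.0 / 3.0
lam = 2 * alpha                  # boundary case lam = 2*alpha in (2.1)

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.linalg.norm((u - lam * Q @ u) - (v - lam * Q @ v))
    assert lhs <= np.linalg.norm(u - v) + 1e-12   # I - lam*A is nonexpansive
print("no violations of nonexpansiveness found")
```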

The metric (or nearest point) projection from H onto C is the mapping P_C:H→C which assigns to each point x∈H the unique point P_C x∈C satisfying the property

‖x − P_C x‖ = inf_{y∈C} ‖x − y‖ =: d(x,C).

Some important properties of projections are gathered in the following proposition.

Proposition 2.1 For given xH and zC:

  1. (i)

z = P_C x ⟺ ⟨x − z, y − z⟩ ≤ 0, ∀y∈C;

  2. (ii)

z = P_C x ⟺ ‖x − z‖² ≤ ‖x − y‖² − ‖y − z‖², ∀y∈C;

  3. (iii)

⟨P_C x − P_C y, x − y⟩ ≥ ‖P_C x − P_C y‖², ∀y∈H.

Consequently, P C is nonexpansive and monotone.

Definition 2.1 A mapping T:HH is said to be:

  1. (a)

    nonexpansive if

‖Tx − Ty‖ ≤ ‖x − y‖, ∀x,y∈H;
  2. (b)

firmly nonexpansive if 2T − I is nonexpansive, or equivalently, if T is 1-inverse strongly monotone (1-ism),

⟨x − y, Tx − Ty⟩ ≥ ‖Tx − Ty‖², ∀x,y∈H;

alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = (1/2)(I + S),

where S:HH is nonexpansive; projections are firmly nonexpansive.

It can easily be seen that if T is nonexpansive, then IT is monotone. It is also easy to see that a projection P C is 1-ism. Inverse strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields.

Definition 2.2 A mapping T:H→H is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,

T = (1 − α)I + αS,

where α∈(0,1) and S:H→H is nonexpansive. More precisely, when this equality holds, we say that T is α-averaged. Thus firmly nonexpansive mappings (in particular, projections) are (1/2)-averaged mappings.

Proposition 2.2 (see [39])

Let T:HH be a given mapping.

  1. (i)

T is nonexpansive if and only if the complement I − T is (1/2)-ism.

  2. (ii)

If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism.

  3. (iii)

T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α∈(0,1), T is α-averaged if and only if I − T is (1/(2α))-ism.

Proposition 2.3 (see [39, 40])

Let S,T,V:HH be given operators.

  1. (i)

If T = (1 − α)S + αV for some α∈(0,1) and if S is averaged and V is nonexpansive, then T is averaged.

  2. (ii)

    T is firmly nonexpansive if and only if the complement IT is firmly nonexpansive.

  3. (iii)

If T = (1 − α)S + αV for some α∈(0,1) and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  4. (iv)

The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 ⋯ T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1,α_2∈(0,1), then the composite T_1T_2 is α-averaged, where α = α_1 + α_2 − α_1α_2.

  5. (v)

If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then

⋂_{i=1}^N Fix(T_i) = Fix(T_1 ⋯ T_N).

The notation Fix(T) denotes the set of all fixed points of the mapping T, that is, Fix(T)={xH:Tx=x}.

Next we list some elementary conclusions for the MEP.

Proposition 2.4 (see [25])

Assume that Θ:C×C→R satisfies (A1)-(A4) and let φ:C→R be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For r > 0 and x∈H, define a mapping T_r^{(Θ,φ)}:H→C as follows:

T_r^{(Θ,φ)}(x) = { z∈C : Θ(z,y) + φ(y) − φ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y∈C }

for all xH. Then the following hold:

  1. (i)

    for each xH, T r ( Θ , φ ) (x) is nonempty and single-valued;

  2. (ii)

T_r^{(Θ,φ)} is firmly nonexpansive, that is, for any x,y∈H,

‖T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y‖² ≤ ⟨T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y, x − y⟩;
  3. (iii)

    Fix( T r ( Θ , φ ) )=MEP(Θ,φ);

  4. (iv)

    MEP(Θ,φ) is closed and convex;

  5. (v)

‖T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x‖² ≤ ((s − t)/s)⟨T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x, T_s^{(Θ,φ)}x − x⟩ for all s,t > 0 and x∈H.

We need some facts and tools in a real Hilbert space H, which are listed as lemmas below.

Lemma 2.1 Let X be a real inner product space. Then we have the following inequality:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x,y∈X.

Lemma 2.2 Let A:CH be a monotone mapping. In the context of the variational inequality problem the characterization of the projection (see Proposition  2.1(i)) implies

u∈VI(C,A) ⟺ u = P_C(u − λAu), ∀λ > 0.

Lemma 2.3 (see [[41], Demiclosedness principle])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T be a nonexpansive self-mapping on C with Fix(T). Then IT is demiclosed. That is, whenever { x n } is a sequence in C weakly converging to some xC and the sequence {(IT) x n } strongly converges to some y, it follows that (IT)x=y. Here I is the identity operator of H.

Let {T_n}_{n=1}^∞ be an infinite family of nonexpansive self-mappings on C and {λ_n}_{n=1}^∞ be a sequence of nonnegative numbers in [0,1]. For any n ≥ 1, define a mapping W_n on C as follows:

U_{n,n+1} = I,
U_{n,n} = λ_n T_n U_{n,n+1} + (1 − λ_n)I,
U_{n,n−1} = λ_{n−1} T_{n−1} U_{n,n} + (1 − λ_{n−1})I,
⋮
U_{n,k} = λ_k T_k U_{n,k+1} + (1 − λ_k)I,
U_{n,k−1} = λ_{k−1} T_{k−1} U_{n,k} + (1 − λ_{k−1})I,
⋮
U_{n,2} = λ_2 T_2 U_{n,3} + (1 − λ_2)I,
W_n = U_{n,1} = λ_1 T_1 U_{n,2} + (1 − λ_1)I.
(2.2)

Such a mapping W_n is called the W-mapping generated by T_n, T_{n−1}, …, T_1 and λ_n, λ_{n−1}, …, λ_1.
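To make the recursion (2.2) concrete, the following small sketch (our own illustration, with hypothetical nonexpansive maps on R sharing the fixed point 0) evaluates W_n x by unwinding U_{n,n+1} = I down to W_n = U_{n,1}; note that in U_{n,k} = λ_kT_kU_{n,k+1} + (1 − λ_k)I the identity acts on the original argument x.

```python
import math

def W(x, T, lam):
    # Evaluate W_n x for T = [T_1,...,T_n], lam = [lam_1,...,lam_n] via (2.2):
    # U_{n,n+1} x = x;  U_{n,k} x = lam_k * T_k(U_{n,k+1} x) + (1 - lam_k) * x.
    Ux = x                                   # U_{n,n+1} x
    for k in range(len(T) - 1, -1, -1):      # k = n, n-1, ..., 1
        Ux = lam[k] * T[k](Ux) + (1 - lam[k]) * x
    return Ux                                # = U_{n,1} x = W_n x

# Hypothetical nonexpansive self-maps of R with common fixed point 0.
T = [lambda t: t / 2, lambda t: -t, lambda t: math.sin(t)]
lam = [0.5, 0.5, 0.5]                        # a sequence in (0, b], b = 0.5 < 1

x = 2.0
for _ in range(50):
    x = W(x, T, lam)                         # Picard iteration of W_n
print(x)  # tends to 0, the common fixed point, in line with Lemma 2.5
```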

Lemma 2.4 (see [[42], Lemma 3.2])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}_{n=1}^∞ be a sequence of nonexpansive self-mappings on C such that ⋂_{n=1}^∞ Fix(T_n) ≠ ∅ and let {λ_n}_{n=1}^∞ be a sequence in (0,b] for some b∈(0,1). Then, for every x∈C and k ≥ 1, the limit lim_{n→∞} U_{n,k}x exists, where U_{n,k} is defined as in (2.2).

Remark 2.1 (see [[43], Remark 3.1])

It can be found from Lemma 2.4 that if D is a nonempty bounded subset of C, then for ϵ > 0 there exists n_0 ≥ k such that, for all n > n_0,

sup_{x∈D} ‖U_{n,k}x − U_k x‖ ≤ ϵ.

Remark 2.2 (see [[43], Remark 3.2])

Utilizing Lemma 2.4, we define a mapping W:CC as follows:

Wx = lim_{n→∞} W_n x = lim_{n→∞} U_{n,1}x, ∀x∈C.

Such a W is called the W-mapping generated by T_1, T_2, … and λ_1, λ_2, … . Since W_n is nonexpansive, W:C→C is also nonexpansive. If {x_n} is a bounded sequence in C, then we put D = {x_n : n ≥ 1}. Hence, it is clear from Remark 2.1 that, for an arbitrary ϵ > 0, there exists N_0 ≥ 1 such that, for all n > N_0,

‖W_n x_n − W x_n‖ = ‖U_{n,1}x_n − U_1 x_n‖ ≤ sup_{x∈D} ‖U_{n,1}x − U_1 x‖ ≤ ϵ.

This implies that

lim_{n→∞} ‖W_n x_n − W x_n‖ = 0.

Lemma 2.5 (see [[42], Lemma 3.3])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}_{n=1}^∞ be a sequence of nonexpansive self-mappings on C such that ⋂_{n=1}^∞ Fix(T_n) ≠ ∅, and let {λ_n}_{n=1}^∞ be a sequence in (0,b] for some b∈(0,1). Then Fix(W) = ⋂_{n=1}^∞ Fix(T_n).

The following lemma can easily be proven, and therefore we omit the proof.

Lemma 2.6 Let V:H→H be an l-Lipschitzian mapping with constant l ≥ 0, and let F:H→H be a κ-Lipschitzian and η-strongly monotone operator with positive constants κ,η > 0. Then for 0 ≤ γl < μη,

⟨(μF − γV)x − (μF − γV)y, x − y⟩ ≥ (μη − γl)‖x − y‖², ∀x,y∈H.

That is, μF − γV is strongly monotone with constant μη − γl > 0.

Let C be a nonempty closed convex subset of a real Hilbert space H. We introduce some notation. Let λ be a number in (0,1] and let μ > 0. Associated with a nonexpansive mapping T:C→H, we define the mapping T^λ:C→H by

T^λ x := Tx − λμF(Tx), ∀x∈C,

where F:H→H is an operator such that, for some positive constants κ,η > 0, F is κ-Lipschitzian and η-strongly monotone on H; that is, F satisfies the conditions

‖Fx − Fy‖ ≤ κ‖x − y‖ and ⟨Fx − Fy, x − y⟩ ≥ η‖x − y‖²

for all x,y∈H.

Lemma 2.7 (see [[44], Lemma 3.1])

T^λ is a contraction provided 0 < μ < 2η/κ²; that is,

‖T^λ x − T^λ y‖ ≤ (1 − λτ)‖x − y‖, ∀x,y∈C,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0,1].

Lemma 2.8 (see [45])

Let { a n } be a sequence of nonnegative real numbers satisfying the property

a_{n+1} ≤ (1 − s_n)a_n + s_n t_n + ϵ_n, ∀n ≥ 0,

where { s n }[0,1] and { t n } are such that

  1. (i)

∑_{n=1}^∞ s_n = ∞;

  2. (ii)

either limsup_{n→∞} t_n ≤ 0 or ∑_{n=1}^∞ s_n|t_n| < ∞;

  3. (iii)

∑_{n=1}^∞ ϵ_n < ∞, where ϵ_n ≥ 0, ∀n ≥ 0.

Then lim_{n→∞} a_n = 0.
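A numerical illustration of Lemma 2.8 with hypothetical parameter sequences of our own choosing: s_n = 1/(n+1) (so ∑ s_n = ∞), t_n = 1/(n+1) (so limsup_{n→∞} t_n = 0 ≤ 0), and ϵ_n = 2^{−n} (summable).

```python
# Hypothetical sequences satisfying (i)-(iii) of Lemma 2.8; the recursion
# a_{n+1} = (1 - s_n)*a_n + s_n*t_n + eps_n should then drive a_n to 0.
a = 1.0
for n in range(1, 200001):
    s, t, eps = 1.0 / (n + 1), 1.0 / (n + 1), 0.5 ** n
    a = (1 - s) * a + s * t + eps
print(a)  # small; a_n -> 0 (slowly, roughly like log(n)/n for these choices)
```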

Lemma 2.9 (see [41])

Let H be a real Hilbert space. Then the following hold:

  1. (a)

‖x − y‖² = ‖x‖² − ‖y‖² − 2⟨x − y, y⟩ for all x,y∈H;

  2. (b)

‖λx + μy‖² = λ‖x‖² + μ‖y‖² − λμ‖x − y‖² for all x,y∈H and λ,μ∈[0,1] with λ + μ = 1;

  3. (c)

if {x_n} is a sequence in H such that x_n ⇀ x, it follows that

limsup_{n→∞} ‖x_n − y‖² = limsup_{n→∞} ‖x_n − x‖² + ‖x − y‖², ∀y∈H.

Finally, recall that a set-valued mapping T:D(T)⊆H→2^H is called monotone if for all x,y∈D(T), f∈Tx and g∈Ty imply

⟨f − g, x − y⟩ ≥ 0.

A set-valued mapping T is called maximal monotone if T is monotone and (I + λT)D(T) = H for each λ > 0, where I is the identity mapping of H. We denote by G(T) the graph of T. It is well known that a monotone mapping T is maximal if and only if, for (x,f)∈H×H, ⟨f − g, x − y⟩ ≥ 0 for every (y,g)∈G(T) implies f∈Tx. Next we provide an example to illustrate the concept of a maximal monotone mapping.

Let A:C→H be a monotone, k-Lipschitz continuous mapping and let N_C v be the normal cone to C at v∈C, i.e.,

N_C v = { u∈H : ⟨v − p, u⟩ ≥ 0, ∀p∈C }.

Define

T̃v = Av + N_C v if v∈C, and T̃v = ∅ if v∉C.

Then T ˜ is maximal monotone (see [28]) such that

0∈T̃v ⟺ v∈VI(C,A).
(2.3)

Let R:D(R)⊆H→2^H be a maximal monotone mapping. Let λ,μ > 0 be two positive numbers.

Lemma 2.10 (see [46])

We have the resolvent identity

J_{R,λ}x = J_{R,μ}( (μ/λ)x + (1 − μ/λ)J_{R,λ}x ), ∀x∈H.

Remark 2.3 For λ,μ>0, we have the following relation:

‖J_{R,λ}x − J_{R,μ}y‖ ≤ ‖x − y‖ + |λ − μ|( (1/λ)‖J_{R,λ}x − y‖ + (1/μ)‖x − J_{R,μ}y‖ ), ∀x,y∈H.
(2.4)

Indeed, whenever λ ≥ μ, utilizing Lemma 2.10 we deduce that

‖J_{R,λ}x − J_{R,μ}y‖ = ‖J_{R,μ}( (μ/λ)x + (1 − μ/λ)J_{R,λ}x ) − J_{R,μ}y‖ ≤ ‖(μ/λ)x + (1 − μ/λ)J_{R,λ}x − y‖ ≤ (μ/λ)‖x − y‖ + (1 − μ/λ)‖J_{R,λ}x − y‖ ≤ ‖x − y‖ + (|λ − μ|/λ)‖J_{R,λ}x − y‖.

Similarly, whenever λ<μ, we get

‖J_{R,λ}x − J_{R,μ}y‖ ≤ ‖x − y‖ + (|λ − μ|/μ)‖x − J_{R,μ}y‖.

Combining the above two cases we conclude that (2.4) holds.

Following Huang [9] (see also [29]), we have the following property for the resolvent operator J_{R,λ}:H→cl D(R).

Lemma 2.11 J R , λ is single-valued and firmly nonexpansive, i.e.,

⟨J_{R,λ}x − J_{R,λ}y, x − y⟩ ≥ ‖J_{R,λ}x − J_{R,λ}y‖², ∀x,y∈H.

Consequently, J R , λ is nonexpansive and monotone.

Lemma 2.12 (see [12])

Let R be a maximal monotone mapping with D(R) = C. Then for any given λ > 0, u∈C is a solution of the variational inclusion (1.3) if and only if u∈C satisfies

u = J_{R,λ}(u − λBu).

Lemma 2.13 (see [29])

Let R be a maximal monotone mapping with D(R) = C and let B:C→H be a strongly monotone, continuous, and single-valued mapping. Then for each z∈H, the equation z∈(B + λR)x has a unique solution x_λ for λ > 0.

Lemma 2.14 (see [12])

Let R be a maximal monotone mapping with D(R) = C and B:C→H be a monotone, continuous, and single-valued mapping. Then (I + λ(R + B))C = H for each λ > 0. In this case, R + B is maximal monotone.

3 Strong convergence theorems for the THVI and HFPP

In this section, we will introduce and analyze a relaxed iterative algorithm for finding a solution of the THVI (1.9) with constraints of several problems: finitely many GMEPs, finitely many variational inclusions, and CMP (1.4) in a real Hilbert space. This algorithm is based on Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, the regularization method, and the averaged mapping approach to the GPA. We prove the strong convergence of the proposed algorithm to a unique solution of THVI (1.9) under suitable conditions. In addition, we also consider the application of the proposed algorithm to solving a hierarchical fixed point problem with the same constraints.

Let f:C→R be a convex functional with L-Lipschitz continuous gradient ∇f. It is worth emphasizing that regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems. Consider the regularized minimization problem

min_{x∈C} f_α(x) := f(x) + (α/2)‖x‖²,

where α>0 is the regularization parameter.

The advantage of a regularization method is its possible strong convergence to the minimum-norm solution of the optimization problem under investigation. The disadvantage, however, is its implicit nature; hence explicit iterative methods seem more attractive, a point with which Xu was also concerned in [33, 47]. Very recently, approximation methods were proposed in [13, 14, 32, 48] to solve vector optimization problems and the split feasibility problem by virtue of the regularization method.
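The effect of regularization can be seen in a simple sketch (an illustrative ill-posed problem of our own choosing, with C = H = R² so that P_C = I): for f(x) = (1/2)(⟨a,x⟩ − b)², which has a whole line of minimizers, gradient steps on f_{α_n} with α_n → 0 steer the iterates toward the minimum-norm minimizer.

```python
import numpy as np

# f(x) = 0.5*(a.x - b)^2 has minimizers {x : a.x = b}; its minimum-norm
# minimizer is b*a/|a|^2. We run gradient steps on f_alpha = f + alpha/2*|x|^2.
a, b = np.array([1.0, 2.0]), 1.0
grad_f = lambda x: (a @ x - b) * a
L = a @ a                                   # Lipschitz constant of grad f

x = np.array([5.0, -3.0])
for n in range(1, 20001):
    alpha = n ** -0.5                       # regularization parameter alpha_n -> 0
    x = x - (1.0 / (L + alpha)) * (grad_f(x) + alpha * x)   # grad f_alpha step

print(x, b * a / (a @ a))  # iterate approaches the minimum-norm solution [0.2, 0.4]
```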

We are now in a position to state and prove the first main result in this paper.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let M, N be two integers. Let f:C→R be a convex functional with L-Lipschitz continuous gradient ∇f. Let Θ_k be a bifunction from C×C to R satisfying (A1)-(A4) and φ_k:C→R∪{+∞} be a proper lower semicontinuous and convex function, where k∈{1,2,…,M}. Let R_i:C→2^H be a maximal monotone mapping and let A_k:H→H and B_i:C→H be μ_k-inverse strongly monotone and η_i-inverse strongly monotone, respectively, where k∈{1,2,…,M}, i∈{1,2,…,N}. Let S:H→H be a nonexpansive mapping, {T_n}_{n=1}^∞ be a sequence of nonexpansive mappings on H, and {λ_n}_{n=1}^∞ be a sequence in (0,b] for some b∈(0,1). Let F:H→H be a κ-Lipschitzian and η-strongly monotone operator with positive constants κ,η > 0. Let V:H→H be an l-Lipschitzian mapping with constant l ≥ 0. Let 0 < λ < 2/L, 0 < μ < 2η/κ², 0 < γ ≤ τ, and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Assume that either (B1) or (B2) holds. Let {β_n} and {θ_n} be sequences in (0,1) and {α_n} be a sequence in (0,∞) with ∑_{n=1}^∞ α_n < ∞. For arbitrarily given x_1∈H, let {x_n} be a sequence generated by

u_n = T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M) T_{r_{M−1,n}}^{(Θ_{M−1},φ_{M−1})}(I − r_{M−1,n}A_{M−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}A_1)x_n,
v_n = J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N) J_{R_{N−1},λ_{N−1,n}}(I − λ_{N−1,n}B_{N−1}) ⋯ J_{R_1,λ_{1,n}}(I − λ_{1,n}B_1)u_n,
y_n = θ_nγSx_n + (I − θ_nμF)P_C(v_n − λ∇f_{α_n}(v_n)),
x_{n+1} = β_nγVx_n + (I − β_nμF)W_nP_C(y_n − λ∇f_{α_n}(y_n)), ∀n ≥ 1,
(3.1)

where W n is the W-mapping defined by (2.2). Suppose that the following conditions are satisfied:

  • (H1) ∑_{n=1}^∞ β_n = ∞ and lim_{n→∞} (1/β_n)|1 − θ_{n−1}/θ_n| = 0;

  • (H2) lim_{n→∞} (1/β_n)|1/θ_n − 1/θ_{n−1}| = 0 and lim_{n→∞} (1/θ_n)|1 − β_{n−1}/β_n| = 0;

  • (H3) lim_{n→∞} θ_n = 0, lim_{n→∞} α_n/θ_n = 0 and lim_{n→∞} θ_n/β_n = 0;

  • (H4) lim_{n→∞} |α_n − α_{n−1}|/(β_nθ_n) = 0 and lim_{n→∞} b^n/(β_nθ_n) = 0;

  • (H5) {λ_{i,n}} ⊂ [a_i, b_i] ⊂ (0, 2η_i) and lim_{n→∞} |λ_{i,n} − λ_{i,n−1}|/(β_nθ_n) = 0 for all i∈{1,2,…,N};

  • (H6) {r_{k,n}} ⊂ [e_k, f_k] ⊂ (0, 2μ_k) and lim_{n→∞} |r_{k,n} − r_{k,n−1}|/(β_nθ_n) = 0 for all k∈{1,2,…,M}.

Then we have the following:

  1. (i)

lim_{n→∞} ‖x_{n+1} − x_n‖/θ_n = 0;

  2. (ii)

ω_w(x_n) ⊆ Ω;

  3. (iii)

ω_w(x_n) ⊆ Ξ provided, in addition, ‖x_n − y_n‖ = o(θ_n).

Proof First of all, let us show that P_C(I − λ∇f_α) is ξ-averaged for each λ∈(0, 2/(α + L)), where

ξ = (2 + λ(α + L))/4 ∈ (0,1).

Indeed, note that the Lipschitz continuity of ∇f implies that ∇f is (1/L)-ism [49] (see also [33]), that is,

⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/L)‖∇f(x) − ∇f(y)‖².

Observe that

(α + L)⟨∇f_α(x) − ∇f_α(y), x − y⟩ = (α + L)[α‖x − y‖² + ⟨∇f(x) − ∇f(y), x − y⟩] = α²‖x − y‖² + α⟨∇f(x) − ∇f(y), x − y⟩ + αL‖x − y‖² + L⟨∇f(x) − ∇f(y), x − y⟩ ≥ α²‖x − y‖² + 2α⟨∇f(x) − ∇f(y), x − y⟩ + ‖∇f(x) − ∇f(y)‖² = ‖α(x − y) + (∇f(x) − ∇f(y))‖² = ‖∇f_α(x) − ∇f_α(y)‖².

Hence, it follows that ∇f_α = αI + ∇f is (1/(α + L))-ism. Thus, λ∇f_α is (1/(λ(α + L)))-ism according to Proposition 2.2(ii). By Proposition 2.2(iii), the complement I − λ∇f_α is (λ(α + L)/2)-averaged. Therefore, noting that P_C is (1/2)-averaged and utilizing Proposition 2.3(iv), we know that, for each λ∈(0, 2/(α + L)), P_C(I − λ∇f_α) is ξ-averaged with

ξ = 1/2 + λ(α + L)/2 − (1/2)·(λ(α + L)/2) = (2 + λ(α + L))/4 ∈ (0,1).

This shows that P_C(I − λ∇f_α) is nonexpansive. Furthermore, for λ∈(0, 2/L), utilizing the fact that lim_{n→∞} 2/(α_n + L) = 2/L, we may assume that

0 < λ < 2/(α_n + L), ∀n ≥ 1.

Consequently, it follows that, for each integer n ≥ 1, P_C(I − λ∇f_{α_n}) is ξ_n-averaged with

ξ_n = 1/2 + λ(α_n + L)/2 − (1/2)·(λ(α_n + L)/2) = (2 + λ(α_n + L))/4 ∈ (0,1).

This immediately implies that P_C(I − λ∇f_{α_n}) is nonexpansive for all n ≥ 1. Put

Δ_n^k = T_{r_{k,n}}^{(Θ_k,φ_k)}(I − r_{k,n}A_k) T_{r_{k−1,n}}^{(Θ_{k−1},φ_{k−1})}(I − r_{k−1,n}A_{k−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}A_1)x_n

for all k∈{1,2,…,M} and n ≥ 1,

Λ_n^i = J_{R_i,λ_{i,n}}(I − λ_{i,n}B_i) J_{R_{i−1},λ_{i−1,n}}(I − λ_{i−1,n}B_{i−1}) ⋯ J_{R_1,λ_{1,n}}(I − λ_{1,n}B_1)

for all i∈{1,2,…,N}, Δ_n^0 = I, and Λ_n^0 = I, where I is the identity mapping on H. Then we have u_n = Δ_n^M x_n and v_n = Λ_n^N u_n.
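Schematically, u_n = Δ_n^Mx_n and v_n = Λ_n^Nu_n are left-to-right folds of the corresponding operator lists. The following sketch (hypothetical one-dimensional stand-ins for the T_r operators and resolvents, and a single step size in place of the stage-dependent r_{k,n} and λ_{i,n}) only illustrates this composition pattern.

```python
from functools import reduce

# Each stage applies z -> J(z - r*A(z)) for a firmly nonexpansive J and an
# inverse strongly monotone A; the stand-ins below are purely illustrative.
stages_gmep = [(lambda z: z / 1.5, lambda z: z),          # (J_k, A_k), k = 1..M
               (lambda z: z / 2.0, lambda z: 2 * z)]
stages_incl = [(lambda z: max(z, 0.0), lambda z: z - 1)]  # (J_{R_i}, B_i), i = 1..N

def chain(stages, r, x):
    # Delta^k x = J_k((I - r*A_k) Delta^{k-1} x), with Delta^0 = I.
    return reduce(lambda z, JA: JA[0](z - r * JA[1](z)), stages, x)

x_n = 1.7
u_n = chain(stages_gmep, r=0.3, x=x_n)   # u_n = Delta_n^M x_n
v_n = chain(stages_incl, r=0.3, x=u_n)   # v_n = Lambda_n^N u_n
print(u_n, v_n)
```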

We divide the rest of the proof into several steps.

Step 1. We prove that { x n } is bounded.

Indeed, taking into account the assumption Ξ ≠ ∅ in Problem II, we know that Ω ≠ ∅. Take p∈Ω arbitrarily. Then from (2.1) and Proposition 2.4(ii) we have

‖u_n − p‖ = ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}x_n − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}p‖ ≤ ‖(I − r_{M,n}A_M)Δ_n^{M−1}x_n − (I − r_{M,n}A_M)Δ_n^{M−1}p‖ ≤ ‖Δ_n^{M−1}x_n − Δ_n^{M−1}p‖ ≤ ⋯ ≤ ‖Δ_n^0 x_n − Δ_n^0 p‖ = ‖x_n − p‖.
(3.2)

Similarly, we have

‖v_n − p‖ = ‖J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}p‖ ≤ ‖(I − λ_{N,n}B_N)Λ_n^{N−1}u_n − (I − λ_{N,n}B_N)Λ_n^{N−1}p‖ ≤ ‖Λ_n^{N−1}u_n − Λ_n^{N−1}p‖ ≤ ⋯ ≤ ‖Λ_n^0 u_n − Λ_n^0 p‖ = ‖u_n − p‖.
(3.3)

Combining (3.2) and (3.3), we have

‖v_n − p‖ ≤ ‖x_n − p‖.
(3.4)

Utilizing Lemma 2.7, from (3.1), and (3.4) we obtain

y n p = θ n γ ( S x n S p ) + ( I θ n μ F ) P C ( I λ f α n ) v n ( I θ n μ F ) P C ( I λ f α n ) p + ( I θ n μ F ) P C ( I λ f α n ) p ( I θ n μ F ) P C ( I λ f ) p + θ n ( γ S μ F ) p θ n γ S x n S p + ( I θ n μ F ) P C ( I λ f α n ) v n ( I θ n μ F ) P C ( I λ f α n ) p + ( I θ n μ F ) P C ( I λ f α n ) p ( I θ n μ F ) P C ( I λ f ) p + θ n ( γ S μ F ) p θ n γ x n p + ( 1 θ n τ ) P C ( I λ f α n ) v n P C ( I λ f α n ) p + ( 1 θ n τ ) P C ( I λ f α n ) p P C ( I λ f ) p + θ n ( γ S μ F ) p θ n γ x n p + ( 1 θ n τ ) v n p + ( 1 θ n τ ) ( I λ f α n ) p ( I λ f ) p + θ n ( γ S μ F ) p = θ n γ x n p + ( 1 θ n τ ) v n p + ( 1 θ n τ ) λ α n p + θ n ( γ S μ F ) p θ n γ x n p + ( 1 θ n τ ) x n p + λ α n p + θ n ( γ S μ F ) p = ( 1 θ n ( τ γ ) ) x n p + θ n ( γ S μ F ) p + λ α n p = ( 1 θ n ( τ γ ) θ n ) x n p + θ n ( τ γ ) ( γ S μ F ) p τ γ + λ α n p max { x n p , ( γ S μ F ) p τ γ } + λ α n p ,

and hence

x n + 1 p = β n γ ( V x n V p ) + ( I β n μ F ) W n P C ( I λ f α n ) y n ( I β n μ F ) W n P C ( I λ f α n ) p + ( I β n μ F ) W n P C ( I λ f α n ) p ( I β n μ F ) W n P C ( I λ f ) p + β n ( γ V μ F ) p β n γ V x n V p + ( I β n μ F ) W n P C ( I λ f α n ) y n ( I β n μ F ) W n P C ( I λ f α n ) p + ( I β n μ F ) W n P C ( I λ f α n ) p ( I β n μ F ) W n P C ( I λ f ) p + β n ( γ V μ F ) p β n γ l x n p + ( 1 β n τ ) P C ( I λ f α n ) y n P C ( I λ f α n ) p + ( 1 β n τ ) P C ( I λ f α n ) p P C ( I λ f ) p + β n ( γ V μ F ) p β n γ l x n p + ( 1 β n τ ) y n p + ( 1 β n τ ) ( I λ f α n ) p ( I λ f ) p + β n ( γ V μ F ) p = β n γ l x n p + ( 1 β n τ ) y n p + ( 1 β n τ ) λ α n p + β n ( γ V μ F ) p β n γ l x n p + ( 1 β n τ ) [ max { x n p , ( γ S μ F ) p τ γ } + λ α n p ] + λ α n p + β n ( γ V μ F ) p ( 1 β n ( τ γ l ) ) max { x n p , ( γ S μ F ) p τ γ } + 2 λ α n p + β n ( γ V μ F ) p = ( 1 β n ( τ γ l ) ) max { x n p , ( γ S μ F ) p τ γ } + β n ( τ γ l ) ( γ V μ F ) p τ γ l + 2 λ α n p max { x n p , ( γ S μ F ) p τ γ , ( γ V μ F ) p τ γ l } + 2 λ α n p .

Let us show that, for all n1,

‖x_{n+1} − p‖ ≤ max{ ‖x_1 − p‖, ‖(γS − μF)p‖/(τ − γ), ‖(γV − μF)p‖/(τ − γl) } + 2∑_{i=1}^n λα_i‖p‖.
(3.5)

Indeed, for n=1, it is clear that (3.5) holds. Assume that (3.5) holds for some n1. Observe that

x n + 2 p max { x n + 1 p , ( γ S μ F ) p τ γ , ( γ V μ F ) p τ γ l } + 2 λ α n + 1 p max { max { x 1 p , ( γ S μ F ) p τ γ , ( γ V μ F ) p τ γ l } + 2 i = 1 n λ α i p , ( γ S μ F ) p τ γ , ( γ V μ F ) p τ γ l } + 2 λ α n + 1 p max { x 1 p , ( γ S μ F ) p τ γ , ( γ V μ F ) p τ γ l } + 2 i = 1 n λ α i p + 2 λ α n + 1 p = max { x 1 p , ( γ S μ F ) p τ γ , ( γ V μ F ) p τ γ l } + 2 i = 1 n + 1 λ α i p .

By induction, (3.5) holds for all n ≥ 1. Taking into account ∑_{n=1}^∞ α_n < ∞, we know that {x_n} is bounded, and so are the sequences {u_n}, {v_n}, {y_n}.

Step 2. We prove that lim_{n→∞} ‖x_{n+1} − x_n‖/θ_n = 0.

Indeed, for simplicity, put ṽ_n = P_C(v_n − λ∇f_{α_n}(v_n)) and ỹ_n = P_C(y_n − λ∇f_{α_n}(y_n)). Then y_n = θ_nγSx_n + (I − θ_nμF)ṽ_n and x_{n+1} = β_nγVx_n + (I − β_nμF)W_nỹ_n for every n ≥ 1. We observe that

v ˜ n v ˜ n 1 P C ( I λ f α n ) v n P C ( I λ f α n ) v n 1 + P C ( I λ f α n ) v n 1 P C ( I λ f α n 1 ) v n 1 v n v n 1 + P C ( I λ f α n ) v n 1 P C ( I λ f α n 1 ) v n 1 v n v n 1 + ( I λ f α n ) v n 1 ( I λ f α n 1 ) v n 1 = v n v n 1 + λ f α n ( v n 1 ) λ f α n 1 ( v n 1 ) = v n v n 1 + λ | α n α n 1 | v n 1 .
(3.6)

Similarly, we get

‖ỹ_n − ỹ_{n−1}‖ ≤ ‖y_n − y_{n−1}‖ + λ|α_n − α_{n−1}|‖y_{n−1}‖.
(3.7)

Also, it is easy to see from (3.1) that

{ y n = θ n γ S x n + ( I θ n μ F ) v ˜ n , y n 1 = θ n 1 γ S x n 1 + ( I θ n 1 μ F ) v ˜ n 1

and

{ x n + 1 = β n γ V x n + ( I β n μ F ) W n y ˜ n , x n = β n 1 γ V x n 1 + ( I β n 1 μ F ) W n 1 y ˜ n 1 .

Hence we obtain

y n y n 1 = θ n ( γ S x n γ S x n 1 ) + ( θ n θ n 1 ) ( γ S x n 1 μ F v ˜ n 1 ) + ( I θ n μ F ) v ˜ n ( I θ n μ F ) v ˜ n 1

and

x n + 1 x n = β n ( γ V x n γ V x n 1 ) + ( β n β n 1 ) ( γ V x n 1 μ F W n 1 y ˜ n 1 ) + ( I β n μ F ) W n y ˜ n ( I β n μ F ) W n 1 y ˜ n 1 .

Utilizing Lemma 2.7, we deduce from (3.6) and (3.7) that

y n y n 1 θ n γ S x n γ S x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + ( I θ n μ F ) v ˜ n ( I θ n μ F ) v ˜ n 1 θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + ( 1 θ n τ ) v ˜ n v ˜ n 1 θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + ( 1 θ n τ ) ( v n v n 1 + λ | α n α n 1 | v n 1 )
(3.8)

and

x n + 1 x n β n γ V x n γ V x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + ( I β n μ F ) W n y ˜ n ( I β n μ F ) W n 1 y ˜ n 1 β n γ l x n x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + ( 1 β n τ ) W n y ˜ n W n 1 y ˜ n 1 β n γ l x n x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + ( 1 β n τ ) ( W n y ˜ n W n y ˜ n 1 + W n y ˜ n 1 W n 1 y ˜ n 1 ) β n γ l x n x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + ( 1 β n τ ) ( y ˜ n y ˜ n 1 + W n y ˜ n 1 W n 1 y ˜ n 1 ) β n γ l x n x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + ( 1 β n τ ) ( y n y n 1 + λ | α n α n 1 | y n 1 + W n y ˜ n 1 W n 1 y ˜ n 1 ) ,
(3.9)

where τ=1 1 μ ( 2 η μ κ 2 ) .

Utilizing (2.1) and (2.4), we obtain

v n + 1 v n = Λ n + 1 N u n + 1 Λ n N u n = J R N , λ N , n + 1 ( I λ N , n + 1 B N ) Λ n + 1 N 1 u n + 1 J R N , λ N , n ( I λ N , n B N ) Λ n N 1 u n J R N , λ N , n + 1 ( I λ N , n + 1 B N ) Λ n + 1 N 1 u n + 1 J R N , λ N , n + 1 ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 + J R N , λ N , n + 1 ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 J R N , λ N , n ( I λ N , n B N ) Λ n N 1 u n ( I λ N , n + 1 B N ) Λ n + 1 N 1 u n + 1 ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 + ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 ( I λ N , n B N ) Λ n N 1 u n + | λ N , n + 1 λ N , n | × ( 1 λ N , n + 1 J R N , λ N , n + 1 ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 ( I λ N , n B N ) Λ n N 1 u n + 1 λ N , n ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 J R N , λ N , n ( I λ N , n B N ) Λ n N 1 u n ) | λ N , n + 1 λ N , n | ( B N Λ n + 1 N 1 u n + 1 + M ˜ ) + Λ n + 1 N 1 u n + 1 Λ n N 1 u n | λ N , n + 1 λ N , n | ( B N Λ n + 1 N 1 u n + 1 + M ˜ ) + | λ N 1 , n + 1 λ N 1 , n | ( B N 1 Λ n + 1 N 2 u n + 1 + M ˜ ) + Λ n + 1 N 2 u n + 1 Λ n N 2 u n | λ N , n + 1 λ N , n | ( B N Λ n + 1 N 1 u n + 1 + M ˜ ) + | λ N 1 , n + 1 λ N 1 , n | ( B N 1 Λ n + 1 N 2 u n + 1 + M ˜ ) + + | λ 1 , n + 1 λ 1 , n | ( B 1 Λ n + 1 0 u n + 1 + M ˜ ) + Λ n + 1 0 u n + 1 Λ n 0 u n M ˜ 0 i = 1 N | λ i , n + 1 λ i , n | + u n + 1 u n ,
(3.10)

where

sup_{n≥1} { (1/λ_{N,n+1})‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + (1/λ_{N,n})‖(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ } ≤ M̃,

for some M̃ > 0, and sup_{n≥1} { ∑_{i=1}^N ‖B_iΛ_{n+1}^{i−1}u_{n+1}‖ + M̃ } ≤ M̃_0 for some M̃_0 > 0.

Utilizing Proposition 2.4(ii), (v) we deduce that

u n + 1 u n = Δ n + 1 M x n + 1 Δ n M x n = T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n M 1 x n T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n + 1 M 1 x n + 1 + T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n M 1 x n T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 + T r M , n ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n + 1 M 1 x n + 1 + ( I r M , n A M ) Δ n + 1 M 1 x n + 1 ( I r M , n A M ) Δ n M 1 x n | r M , n + 1 r M , n | r M , n + 1 T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 + | r M , n + 1 r M , n | A M Δ n + 1 M 1 x n + 1 + Δ n + 1 M 1 x n + 1 Δ n M 1 x n = | r M , n + 1 r M , n | [ A M Δ n + 1 M 1 x n + 1 + 1 r M , n + 1 T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ] + Δ n + 1 M 1 x n + 1 Δ n M 1 x n | r M , n + 1 r M , n | [ A M Δ n + 1 M 1 x n + 1 + 1 r M , n + 1 T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ] + + | r 1 , n + 1 r 1 , n | [ A 1 Δ n + 1 0 x n + 1 + 1 r 1 , n + 1 T r 1 , n + 1 ( Θ 1 , φ 1 ) ( I r 1 , n + 1 A 1 ) Δ n + 1 0 x n + 1 ( I r 1 , n + 1 A 1 ) Δ n + 1 0 x n + 1 ] + Δ n + 1 0 x n + 1 Δ n 0 x n M ˜ 1 k = 1 M | r k , n + 1 r k , n | + x n + 1 x n ,
(3.11)

where M̃_1 > 0 is a constant such that, for each n ≥ 1,

∑_{k=1}^M [ ‖A_kΔ_{n+1}^{k−1}x_{n+1}‖ + (1/r_{k,n+1})‖T_{r_{k,n+1}}^{(Θ_k,φ_k)}(I − r_{k,n+1}A_k)Δ_{n+1}^{k−1}x_{n+1} − (I − r_{k,n+1}A_k)Δ_{n+1}^{k−1}x_{n+1}‖ ] ≤ M̃_1.

In the meantime, from (2.2), since W n , T n , and U n , i are all nonexpansive, we have

W n + 1 y ˜ n W n y ˜ n = λ 1 T 1 U n + 1 , 2 y ˜ n λ 1 T 1 U n , 2 y ˜ n λ 1 U n + 1 , 2 y ˜ n U n , 2 y ˜ n = λ 1 λ 2 T 2 U n + 1 , 3 y ˜ n λ 2 T 2 U n , 3 y ˜ n λ 1 λ 2 U n + 1 , 3 y ˜ n U n , 3 y ˜ n λ 1 λ 2 λ n U n + 1 , n + 1 y ˜ n U n , n + 1 y ˜ n M ˜ 2 i = 1 n λ i ,
(3.12)

where M̃_2 is a constant such that ‖U_{n+1,n+1}ỹ_n‖ + ‖U_{n,n+1}ỹ_n‖ ≤ M̃_2 for each n ≥ 1. So, from (3.8)-(3.12) and {λ_n} ⊂ (0,b] ⊂ (0,1) it follows that

y n y n 1 θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + ( 1 θ n τ ) ( v n v n 1 + λ | α n α n 1 | v n 1 ) θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + ( 1 θ n τ ) ( M ˜ 0 i = 1 N | λ i , n λ i , n 1 | + u n u n 1 + λ | α n α n 1 | v n 1 ) θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + ( 1 θ n τ ) ( M ˜ 0 i = 1 N | λ i , n λ i , n 1 | + M ˜ 1 k = 1 M | r k , n r k , n 1 | + x n x n 1 + λ | α n α n 1 | v n 1 ) ( 1 θ n ( τ γ ) ) x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + M ˜ 0 i = 1 N | λ i , n λ i , n 1 | + M ˜ 1 k = 1 M | r k , n r k , n 1 | + λ | α n α n 1 | v n 1 x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + M ˜ 0 i = 1 N | λ i , n λ i , n 1 | + M ˜ 1 k = 1 M | r k , n r k , n 1 | + λ | α n α n 1 | v n 1 ,

and hence

x n + 1 x n β n γ l x n x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + ( 1 β n τ ) ( y n y n 1 + λ | α n α n 1 | y n 1 + W n y ˜ n 1 W n 1 y ˜ n 1 ) β n γ l x n x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + ( 1 β n τ ) { x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + M ˜ 0 i = 1 N | λ i , n λ i , n 1 | + M ˜ 1 k = 1 M | r k , n r k , n 1 | + λ | α n α n 1 | v n 1 + λ | α n α n 1 | y n 1 + M ˜ 2 i = 1 n 1 λ i } ( 1 β n ( τ γ l ) ) x n x n 1 + | β n β n 1 | γ V x n 1 μ F W n 1 y ˜ n 1 + | θ n θ n 1 | γ S x n 1 μ F v ˜ n 1 + M ˜ 0 i = 1 N | λ i , n λ i , n 1 | + M ˜ 1 k = 1 M | r k , n r k , n 1 | + | α n α n 1 | λ ( v n 1 + y n 1 ) + M ˜ 2 b n 1 ( 1 β n ( τ γ l ) ) x n x n 1 + M ˜ 3 ( | α n α n 1 | + | β n β n 1 | + | θ n θ n 1 | + i = 1 N | λ i , n λ i , n 1 | + k = 1 M | r k , n r k , n 1 | + b n 1 ) ,

where sup_{n≥1}{ ‖γVx_n − μFW_nỹ_n‖ + ‖γSx_n − μFṽ_n‖ + λ(‖v_n‖ + ‖y_n‖) + M̃_0 + M̃_1 + M̃_2 } ≤ M̃_3 for some M̃_3 > 0. Therefore

x n + 1 x n θ n ( 1 β n ( τ γ l ) ) x n x n 1 θ n + M ˜ 3 ( | α n α n 1 | θ n + | β n β n 1 | θ n + | θ n θ n 1 | θ n + i = 1 N | λ i , n λ i , n 1 | θ n + k = 1 M | r k , n r k , n 1 | θ n + b n 1 θ n ) = ( 1 ( τ γ l ) β n ) x n x n 1 θ n 1 + ( 1 ( τ γ l ) β n ) x n x n 1 ( 1 θ n 1 θ n 1 ) + M ˜ 3 ( | α n α n 1 | θ n + | β n β n 1 | θ n + | θ n θ n 1 | θ n + i = 1 N | λ i , n λ i , n 1 | θ n + k = 1 M | r k , n r k , n 1 | θ n + b n 1 θ n ) ( 1 ( τ γ l ) β n ) x n x n 1 θ n 1 + ( τ γ l ) β n M ˜ 4 τ γ l { 1 β n | 1 θ n 1 θ n 1 | + | α n α n 1 | β n θ n + 1 θ n | 1 β n 1 β n | + 1 β n | 1 θ n 1 θ n | + i = 1 N | λ i , n λ i , n 1 | β n θ n + k = 1 M | r k , n r k , n 1 | β n θ n + b n 1 β n θ n } ,
(3.13)

where sup_{n≥1}{ ‖x_{n+1} − x_n‖ + M̃_3 } ≤ M̃_4 for some M̃_4 > 0. From (H1), (H2), and (H4)-(H6) it follows that ∑_{n=1}^∞ (τ − γl)β_n = ∞ and

lim n M ˜ 4 τ γ l { 1 β n | 1 θ n 1 θ n 1 | + | α n α n 1 | β n θ n + 1 θ n | 1 β n 1 β n | + 1 β n | 1 θ n 1 θ n | + i = 1 N | λ i , n λ i , n 1 | β n θ n + k = 1 M | r k , n r k , n 1 | β n θ n + b n 1 β n θ n } = 0 .

Thus, applying Lemma 2.8 to (3.13), we immediately conclude that

lim_{n→∞} ‖x_{n+1} − x_n‖/θ_n = 0.

So, from (H3) it follows that

lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.

Step 3. We prove that lim_{n→∞} ‖x_n − u_n‖ = 0, lim_{n→∞} ‖x_n − v_n‖ = 0, and lim_{n→∞} ‖y_n − ṽ_n‖/θ_n = 0.

Indeed, for pΩ we have

v ˜ n p = P C ( I λ f α n ) v n P C ( I λ f ) p P C ( I λ f α n ) v n P C ( I λ f α n ) p + P C ( I λ f α n ) p P C ( I λ f ) p v n p + P C ( I λ f α n ) p P C ( I λ f ) p v n p + λ α n p .
(3.14)

Similarly, we get

‖ỹ_n − p‖ ≤ ‖y_n − p‖ + λα_n‖p‖.
(3.15)

Note that

y n p = θ n γ S x n θ n μ F p + ( I θ n μ F ) v ˜ n ( I θ n μ F ) p = θ n ( γ S x n μ F p ) + ( 1 θ n ) ( v ˜ n p ) + θ n [ ( I μ F ) v ˜ n ( I μ F ) p ] = θ n ( γ S x n + ( I μ F ) v ˜ n p ) + ( 1 θ n ) ( v ˜ n p ) .

Hence we have

y_n − ṽ_n = θ_n( γSx_n + (I − μF)ṽ_n − ṽ_n ).

Utilizing Lemma 2.9(b), from (3.14) we have

y n p 2 = θ n ( γ S x n + ( I μ F ) v ˜ n p ) + ( 1 θ n ) ( v ˜ n p ) 2 = θ n γ S x n + ( I μ F ) v ˜ n p 2 + ( 1 θ n ) v ˜ n p 2 θ n ( 1 θ n ) γ S x n + ( I μ F ) v ˜ n v ˜ n 2 = θ n γ S x n + ( I μ F ) v ˜ n p 2 + ( 1 θ n ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 θ n γ S x n + ( I μ F ) v ˜ n p 2 + v ˜ n p 2 1 θ n θ n y n v ˜ n 2 θ n γ S x n + ( I μ F ) v ˜ n p 2 + ( v n p + λ α n p ) 2 1 θ n θ n y n v ˜ n 2 = v n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 .
(3.16)

Furthermore, utilizing Lemmas 2.1 and 2.7 we obtain from (3.4), (3.15), and (3.16)

x n + 1 p 2 = β n γ V x n β n μ F p + ( I β n μ F ) W n y ˜ n ( I β n μ F ) W n p 2 ( β n γ V x n β n μ F p + ( I β n μ F ) W n y ˜ n ( I β n μ F ) W n p ) 2 ( β n γ V x n μ F p + ( 1 β n τ ) y ˜ n p ) 2 β n 1 τ γ V x n μ F p 2 + ( 1 β n τ ) y ˜ n p 2 β n 1 τ [ γ V x n γ V p 2 + 2 γ V p μ F p , γ V x n μ F p ] + ( 1 β n τ ) y ˜ n p 2 β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) ( y n p + λ α n p ) 2 = β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ y n p 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ v n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] .
(3.17)

On the other hand, observe that

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 Δ n k 1 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2
(3.18)

and

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 Λ n i 1 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 .
(3.19)

Combining (3.17)-(3.19), we get

x n + 1 p 2 β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ v n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ Λ n i u n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ Δ n k x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] ( 1 β n ( τ γ 2 l 2 τ ) ) x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 1 θ n θ n y n v ˜ n 2 ] + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 v n p + λ α n p ) + λ α n p ( 2 y n p + λ α n p ) x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 1 θ n θ n y n v ˜ n 2 ] + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 v n p + λ α n p ) + λ α n p ( 2 y n p + λ α n p ) ,
(3.20)

which hence implies that

( 1 β n τ ) [ r k , n ( 2 μ k r k , n ) A k Δ n k 1 x n A k p 2 + λ i , n ( 2 η i λ i , n ) B i Λ n i 1 u n B i p 2 + 1 θ n θ n y n v ˜ n 2 ] x n p 2 x n + 1 p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 v n p + λ α n p ) + λ α n p ( 2 y n p + λ α n p ) x n x n + 1 ( x n p + x n + 1 p ) + β n 2 τ γ V p μ F p γ V x n μ F p + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 v n p + λ α n p ) + λ α n p ( 2 y n p + λ α n p ) .

Since α_n → 0, β_n → 0, θ_n → 0, ‖x_{n+1} − x_n‖ → 0, and {x_n}, {y_n}, {v_n}, {ṽ_n} are bounded sequences, it follows from λ∈(0, 2/L), {λ_{i,n}} ⊂ [a_i,b_i] ⊂ (0,2η_i), and {r_{k,n}} ⊂ [e_k,f_k] ⊂ (0,2μ_k) that

lim_{n→∞} ((1 − θ_n)/θ_n)‖y_n − ṽ_n‖ = 0 and lim_{n→∞} ‖A_kΔ_n^{k−1}x_n − A_kp‖ = lim_{n→∞} ‖B_iΛ_n^{i−1}u_n − B_ip‖ = 0,
(3.21)

for all k∈{1,2,…,M} and i∈{1,2,…,N}. It is clear that

lim_{n→∞} ‖y_n − ṽ_n‖/θ_n = lim_{n→∞} (1/(1 − θ_n))·((1 − θ_n)/θ_n)‖y_n − ṽ_n‖ = 0.
(3.22)

By Proposition 2.4(ii) and Lemma 2.9(a) we have

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p , Δ n k x n p = 1 2 ( ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 + Δ n k x n p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p ( Δ n k x n p ) 2 ) 1 2 ( Δ n k 1 x n p 2 + Δ n k x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 ) ,

which implies that

Δ n k x n p 2 Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 = Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 r k , n 2 A k Δ n k 1 x n A k p 2 + 2 r k , n Δ n k 1 x n Δ n k x n , A k Δ n k 1 x n A k p Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p .
(3.23)

By Lemma 2.9(a) and Lemma 2.11, we obtain

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p , Λ n i u n p = 1 2 ( ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 + Λ n i u n p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p ( Λ n i u n p ) 2 ) 1 2 ( Λ n i 1 u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( x n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) ,

which implies

Λ n i u n p 2 x n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 = x n p 2 Λ n i 1 u n Λ n i u n 2 λ i , n 2 B i Λ n i 1 u n B i p 2 + 2 λ i , n Λ n i 1 u n Λ n i u n , B i Λ n i 1 u n B i p x n p 2 Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p .
(3.24)

So, combining (3.17) and (3.24) we conclude that

x n + 1 p 2 β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ v n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ Λ n i u n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ x n p 2 Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) ] ( 1 β n ( τ γ 2 l 2 τ ) ) x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p ( 1 β n τ ) Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p ( 1 β n τ ) Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) ,

which yields

( 1 β n τ ) Λ n i 1 u n Λ n i u n 2 x n p 2 x n + 1 p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) x n x n + 1 ( x n p + x n + 1 p ) + β n 2 τ γ V p μ F p γ V x n μ F p + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) .

Since α_n → 0, β_n → 0, θ_n → 0, ‖x_{n+1} − x_n‖ → 0, and {x_n}, {y_n}, {v_n}, {ṽ_n} are bounded sequences, it follows from (3.21), λ∈(0, 2/L), and {λ_{i,n}} ⊂ [a_i,b_i] ⊂ (0,2η_i) that

lim_{n→∞} ‖Λ_n^{i−1}u_n − Λ_n^i u_n‖ = 0, ∀i∈{1,2,…,N}.
(3.25)

In the meantime, combining (3.17) and (3.23) we conclude that

x n + 1 p 2 β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ v n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 1 θ n θ n y n v ˜ n 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ u n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ Δ n k x n p 2 + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) ] β n γ 2 l 2 τ x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p + ( 1 β n τ ) [ x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) ] ( 1 β n ( τ γ 2 l 2 τ ) ) x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p ( 1 β n τ ) Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) x n p 2 + β n 2 τ γ V p μ F p γ V x n μ F p ( 1 β n τ ) Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p + λ α n p ( 2 v n p + λ α n p ) + θ n γ S x n + ( I μ F ) v ˜ n p 2 + λ α n p ( 2 y n p + λ α n p ) ,

which leads to

\[
\begin{aligned}
(1-\beta_n\tau)\|\Delta_n^{k-1}x_n-\Delta_n^k x_n\|^2 \le{}& \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ 2r_{k,n}\|\Delta_n^{k-1}x_n-\Delta_n^k x_n\|\,\|A_k\Delta_n^{k-1}x_n-A_kp\| + \lambda\alpha_n\|p\|\bigl(2\|v_n-p\|+\lambda\alpha_n\|p\|\bigr) \\
&+ \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 + \lambda\alpha_n\|p\|\bigl(2\|y_n-p\|+\lambda\alpha_n\|p\|\bigr) \\
\le{}& \|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr) + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ 2r_{k,n}\|\Delta_n^{k-1}x_n-\Delta_n^k x_n\|\,\|A_k\Delta_n^{k-1}x_n-A_kp\| + \lambda\alpha_n\|p\|\bigl(2\|v_n-p\|+\lambda\alpha_n\|p\|\bigr) \\
&+ \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 + \lambda\alpha_n\|p\|\bigl(2\|y_n-p\|+\lambda\alpha_n\|p\|\bigr).
\end{aligned}
\]

Since $\alpha_n\to0$, $\beta_n\to0$, $\theta_n\to0$, $\|x_{n+1}-x_n\|\to0$, and $\{x_n\}$, $\{y_n\}$, $\{v_n\}$, $\{\tilde v_n\}$ are bounded sequences, it follows from (3.21), $\lambda\in(0,\frac{2}{L})$, and $\{r_{k,n}\}\subset[e_k,f_k]\subset(0,2\mu_k)$ that

\[
\lim_{n\to\infty}\|\Delta_n^{k-1}x_n-\Delta_n^k x_n\|=0,\quad \forall k\in\{1,2,\ldots,M\}.
\tag{3.26}
\]

Hence from (3.25) and (3.26) we get

\[
\|x_n-u_n\| = \|\Delta_n^0 x_n-\Delta_n^M x_n\| \le \|\Delta_n^0 x_n-\Delta_n^1 x_n\| + \|\Delta_n^1 x_n-\Delta_n^2 x_n\| + \cdots + \|\Delta_n^{M-1}x_n-\Delta_n^M x_n\| \to 0 \quad\text{as } n\to\infty
\tag{3.27}
\]

and

\[
\|u_n-v_n\| = \|\Lambda_n^0 u_n-\Lambda_n^N u_n\| \le \|\Lambda_n^0 u_n-\Lambda_n^1 u_n\| + \|\Lambda_n^1 u_n-\Lambda_n^2 u_n\| + \cdots + \|\Lambda_n^{N-1}u_n-\Lambda_n^N u_n\| \to 0 \quad\text{as } n\to\infty,
\tag{3.28}
\]

respectively. Thus, from (3.27) and (3.28) we obtain

\[
\|x_n-v_n\| \le \|x_n-u_n\| + \|u_n-v_n\| \to 0 \quad\text{as } n\to\infty.
\tag{3.29}
\]

Step 4. We prove that $\lim_{n\to\infty}\|v_n-\tilde v_n\|=0$, $\lim_{n\to\infty}\|y_n-\tilde y_n\|=0$, $\lim_{n\to\infty}\|x_n-y_n\|=0$, and $\lim_{n\to\infty}\|y_n-Wy_n\|=0$.

Indeed, utilizing Lemma 2.1 and Proposition 2.4, for $p\in\Omega$ we have from (3.4), (3.16), and (3.17)

\[
\begin{aligned}
\|x_{n+1}-p\|^2 \le{}& \frac{\beta_n}{\tau}\bigl[\|\gamma Vx_n-\gamma Vp\|^2 + 2\langle\gamma Vp-\mu Fp,\ \gamma Vx_n-\mu Fp\rangle\bigr] + (1-\beta_n\tau)\|\tilde y_n-p\|^2 \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| + (1-\beta_n\tau)\|P_C(I-\lambda\nabla f_{\alpha_n})y_n - P_C(I-\lambda\nabla f)p\|^2 \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| + (1-\beta_n\tau)\bigl\|(I-\lambda\nabla f)y_n-(I-\lambda\nabla f)p-\lambda\alpha_ny_n\bigr\|^2 \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\bigl[\|(I-\lambda\nabla f)y_n-(I-\lambda\nabla f)p\|^2 - 2\lambda\alpha_n\langle y_n,\ (I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\rangle\bigr] \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\|y_n-p\|^2 + \lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\|\nabla f(y_n)-\nabla f(p)\|^2 + 2\lambda\alpha_n\|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\Bigr] \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 + \|\tilde v_n-p\|^2 - \frac{1-\theta_n}{\theta_n}\|y_n-\tilde v_n\|^2 \\
&\qquad + \lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\|\nabla f(y_n)-\nabla f(p)\|^2 + 2\lambda\alpha_n\|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\Bigr] \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 + \|v_n-p\|^2 + \lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\|\nabla f(v_n)-\nabla f(p)\|^2 \\
&\qquad + 2\lambda\alpha_n\|v_n\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| + \lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\|\nabla f(y_n)-\nabla f(p)\|^2 \\
&\qquad + 2\lambda\alpha_n\|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\Bigr] \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 + \|x_n-p\|^2 + \lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\|\nabla f(v_n)-\nabla f(p)\|^2 \\
&\qquad + 2\lambda\alpha_n\|v_n\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| + \lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\|\nabla f(y_n)-\nabla f(p)\|^2 \\
&\qquad + 2\lambda\alpha_n\|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\Bigr] \\
\le{}& \Bigl(1-\beta_n\Bigl(\tau-\frac{\gamma^2l^2}{\tau}\Bigr)\Bigr)\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\bigl[\|\nabla f(v_n)-\nabla f(p)\|^2+\|\nabla f(y_n)-\nabla f(p)\|^2\bigr] \\
&+ 2\lambda\alpha_n\bigl(\|v_n\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| + \|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\bigr) + \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 \\
\le{}& \|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| + (1-\beta_n\tau)\lambda\Bigl(\lambda-\frac{2}{L}\Bigr)\bigl[\|\nabla f(v_n)-\nabla f(p)\|^2+\|\nabla f(y_n)-\nabla f(p)\|^2\bigr] \\
&+ 2\lambda\alpha_n\bigl(\|v_n\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| + \|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\bigr) + \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2,
\end{aligned}
\]

which immediately implies that

\[
\begin{aligned}
(1-\beta_n\tau)\lambda\Bigl(\frac{2}{L}-\lambda\Bigr)\bigl[\|\nabla f(v_n)-\nabla f(p)\|^2+\|\nabla f(y_n)-\nabla f(p)\|^2\bigr] \le{}& \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ 2\lambda\alpha_n\bigl(\|v_n\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| + \|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\bigr) \\
&+ \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 \\
\le{}& \|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr) + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ 2\lambda\alpha_n\bigl(\|v_n\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| + \|y_n\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|\bigr) \\
&+ \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2.
\end{aligned}
\]

Since $\alpha_n\to0$, $\beta_n\to0$, $\theta_n\to0$, $\|x_{n+1}-x_n\|\to0$, and $\{x_n\}$, $\{y_n\}$, $\{v_n\}$, $\{\tilde v_n\}$ are bounded sequences, it follows from $\lambda\in(0,\frac{2}{L})$ that

\[
\lim_{n\to\infty}\|\nabla f(v_n)-\nabla f(p)\|=0 \quad\text{and}\quad \lim_{n\to\infty}\|\nabla f(y_n)-\nabla f(p)\|=0.
\tag{3.30}
\]

Furthermore, from the firm nonexpansiveness of P C we obtain

\[
\begin{aligned}
\|\tilde v_n-p\|^2 ={}& \|P_C(I-\lambda\nabla f_{\alpha_n})v_n - P_C(I-\lambda\nabla f)p\|^2 \\
\le{}& \bigl\langle(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p,\ \tilde v_n-p\bigr\rangle \\
={}& \frac{1}{2}\Bigl\{\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\|^2 + \|\tilde v_n-p\|^2 - \bigl\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p-(\tilde v_n-p)\bigr\|^2\Bigr\} \\
\le{}& \frac{1}{2}\Bigl\{\|v_n-p\|^2 + 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| + \|\tilde v_n-p\|^2 \\
&\quad - \|v_n-\tilde v_n\|^2 + 2\lambda\langle v_n-\tilde v_n,\ \nabla f_{\alpha_n}(v_n)-\nabla f(p)\rangle - \lambda^2\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|^2\Bigr\},
\end{aligned}
\]

and so

\[
\begin{aligned}
\|\tilde v_n-p\|^2 \le{}& \|v_n-p\|^2 - \|v_n-\tilde v_n\|^2 + 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| \\
&+ 2\lambda\langle v_n-\tilde v_n,\ \nabla f_{\alpha_n}(v_n)-\nabla f(p)\rangle - \lambda^2\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|^2 \\
\le{}& \|v_n-p\|^2 - \|v_n-\tilde v_n\|^2 + 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| \\
&+ 2\lambda\|v_n-\tilde v_n\|\,\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|.
\end{aligned}
\tag{3.31}
\]

Similarly, we get

\[
\begin{aligned}
\|\tilde y_n-p\|^2 \le{}& \|y_n-p\|^2 - \|y_n-\tilde y_n\|^2 + 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\| \\
&+ 2\lambda\langle y_n-\tilde y_n,\ \nabla f_{\alpha_n}(y_n)-\nabla f(p)\rangle - \lambda^2\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|^2 \\
\le{}& \|y_n-p\|^2 - \|y_n-\tilde y_n\|^2 + 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\| \\
&+ 2\lambda\|y_n-\tilde y_n\|\,\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|.
\end{aligned}
\tag{3.32}
\]
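For clarity, the two standard facts used in deriving (3.31) and (3.32) are the firm nonexpansiveness of the metric projection and the elementary polarization-type identity:

\[
\|P_Cx-P_Cy\|^2 \le \langle x-y,\ P_Cx-P_Cy\rangle,\qquad
\langle a,\ b\rangle = \tfrac{1}{2}\bigl(\|a\|^2+\|b\|^2-\|a-b\|^2\bigr)\quad (a,b\in H).
\]

Applying the identity with $a=(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p$ and $b=\tilde v_n-p$ gives the middle equality in the display preceding (3.31).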

Thus, we have from (3.4), (3.16), (3.17), (3.31), and (3.32)

\[
\begin{aligned}
\|x_{n+1}-p\|^2 \le{}& \frac{\beta_n}{\tau}\bigl[\|\gamma Vx_n-\gamma Vp\|^2 + 2\langle\gamma Vp-\mu Fp,\ \gamma Vx_n-\mu Fp\rangle\bigr] + (1-\beta_n\tau)\|\tilde y_n-p\|^2 \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\|y_n-p\|^2 - \|y_n-\tilde y_n\|^2 + 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\| \\
&\qquad + 2\lambda\|y_n-\tilde y_n\|\,\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\Bigr] \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\|\tilde v_n-p\|^2 + \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 - \frac{1-\theta_n}{\theta_n}\|y_n-\tilde v_n\|^2 - \|y_n-\tilde y_n\|^2 \\
&\qquad + 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\| + 2\lambda\|y_n-\tilde y_n\|\,\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\Bigr] \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\|v_n-p\|^2 - \|v_n-\tilde v_n\|^2 + 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| \\
&\qquad + 2\lambda\|v_n-\tilde v_n\|\,\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\| + \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 - \frac{1-\theta_n}{\theta_n}\|y_n-\tilde v_n\|^2 - \|y_n-\tilde y_n\|^2 \\
&\qquad + 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\| + 2\lambda\|y_n-\tilde y_n\|\,\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\Bigr] \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ (1-\beta_n\tau)\Bigl[\|x_n-p\|^2 - \|v_n-\tilde v_n\|^2 + 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\| \\
&\qquad + 2\lambda\|v_n-\tilde v_n\|\,\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\| + \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 - \|y_n-\tilde y_n\|^2 \\
&\qquad + 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\,\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\| + 2\lambda\|y_n-\tilde y_n\|\,\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\Bigr] \\
\le{}& \Bigl(1-\beta_n\Bigl(\tau-\frac{\gamma^2l^2}{\tau}\Bigr)\Bigr)\|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| - (1-\beta_n\tau)\bigl(\|v_n-\tilde v_n\|^2+\|y_n-\tilde y_n\|^2\bigr) \\
&+ 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\|+\|v_n-\tilde v_n\|\bigr) \\
&+ 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|+\|y_n-\tilde y_n\|\bigr) + \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 \\
\le{}& \|x_n-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| - (1-\beta_n\tau)\bigl(\|v_n-\tilde v_n\|^2+\|y_n-\tilde y_n\|^2\bigr) \\
&+ 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\|+\|v_n-\tilde v_n\|\bigr) \\
&+ 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|+\|y_n-\tilde y_n\|\bigr) + \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2,
\end{aligned}
\]

which hence leads to

\[
\begin{aligned}
(1-\beta_n\tau)\bigl(\|v_n-\tilde v_n\|^2+\|y_n-\tilde y_n\|^2\bigr) \le{}& \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\|+\|v_n-\tilde v_n\|\bigr) \\
&+ 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|+\|y_n-\tilde y_n\|\bigr) \\
&+ \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2 \\
\le{}& \|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr) + \frac{2\beta_n}{\tau}\|\gamma Vp-\mu Fp\|\,\|\gamma Vx_n-\mu Fp\| \\
&+ 2\lambda\|\nabla f_{\alpha_n}(v_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})v_n-(I-\lambda\nabla f)p\|+\|v_n-\tilde v_n\|\bigr) \\
&+ 2\lambda\|\nabla f_{\alpha_n}(y_n)-\nabla f(p)\|\bigl(\|(I-\lambda\nabla f_{\alpha_n})y_n-(I-\lambda\nabla f)p\|+\|y_n-\tilde y_n\|\bigr) \\
&+ \theta_n\|\gamma Sx_n+(I-\mu F)\tilde v_n-p\|^2.
\end{aligned}
\]

Since $\alpha_n\to0$, $\beta_n\to0$, $\theta_n\to0$, $\|x_{n+1}-x_n\|\to0$, and $\{x_n\}$, $\{y_n\}$, $\{\tilde y_n\}$, $\{v_n\}$, $\{\tilde v_n\}$ are bounded sequences, it follows from (3.30) that

\[
\lim_{n\to\infty}\|v_n-\tilde v_n\|=0 \quad\text{and}\quad \lim_{n\to\infty}\|y_n-\tilde y_n\|=0.
\tag{3.33}
\]

Note that

\[
\|x_n-y_n\| \le \|x_n-v_n\| + \|v_n-\tilde v_n\| + \|\tilde v_n-y_n\|.
\]

Hence from (3.22), (3.29), and (3.33) it follows that

\[
\lim_{n\to\infty}\|x_n-y_n\|=0.
\tag{3.34}
\]

Furthermore, it is easy to see from (3.1) that

\[
x_{n+1}-W_n\tilde y_n = \beta_n\bigl(\gamma Vx_n-\mu FW_n\tilde y_n\bigr).
\]

Hence we get

\[
\|x_n-W_n\tilde y_n\| \le \|x_n-x_{n+1}\| + \|x_{n+1}-W_n\tilde y_n\| \le \|x_n-x_{n+1}\| + \beta_n\|\gamma Vx_n-\mu FW_n\tilde y_n\|.
\]

So, it follows from $\|x_n-x_{n+1}\|\to0$ and $\beta_n\to0$ that

\[
\lim_{n\to\infty}\|x_n-W_n\tilde y_n\|=0.
\tag{3.35}
\]

Also, note that

\[
\|W_ny_n-y_n\| \le \|W_ny_n-W_n\tilde y_n\| + \|W_n\tilde y_n-x_n\| + \|x_n-y_n\| \le \|y_n-\tilde y_n\| + \|W_n\tilde y_n-x_n\| + \|x_n-y_n\|.
\]

Thus, combining (3.33)-(3.35), we find that

\[
\lim_{n\to\infty}\|W_ny_n-y_n\|=0.
\]

Taking into account that $\|y_n-Wy_n\| \le \|y_n-W_ny_n\| + \|W_ny_n-Wy_n\|$, from Remark 2.2 and the boundedness of $\{y_n\}$ we immediately get

\[
\lim_{n\to\infty}\|y_n-Wy_n\|=0.
\tag{3.36}
\]

Step 5. We prove that $\omega_w(x_n)\subset\Omega$.

Indeed, since $H$ is reflexive and $\{x_n\}$ is bounded, $\{x_n\}$ has at least one weakly convergent subsequence; hence, as is well known, $\omega_w(x_n)\neq\emptyset$. Now, take an arbitrary $w\in\omega_w(x_n)$. Then there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i}\rightharpoonup w$. From (3.25)-(3.27), (3.29), and (3.34) we have $u_{n_i}\rightharpoonup w$, $v_{n_i}\rightharpoonup w$, $y_{n_i}\rightharpoonup w$, $\Lambda_{n_i}^m u_{n_i}\rightharpoonup w$, and $\Delta_{n_i}^k x_{n_i}\rightharpoonup w$, where $m\in\{1,2,\ldots,N\}$ and $k\in\{1,2,\ldots,M\}$. Utilizing Lemma 2.3, we deduce from $y_{n_i}\rightharpoonup w$ and (3.36) that $w\in\operatorname{Fix}(W)=\bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n)$ (due to Lemma 2.5). Next, we prove that $w\in\bigcap_{m=1}^{N}\operatorname{I}(B_m,R_m)$. As a matter of fact, since $B_m$ is $\eta_m$-inverse strongly monotone, $B_m$ is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.14 that $R_m+B_m$ is maximal monotone. Let $(v,g)\in G(R_m+B_m)$, i.e., $g-B_mv\in R_mv$. Again, since $\Lambda_n^m u_n=J_{R_m,\lambda_{m,n}}(I-\lambda_{m,n}B_m)\Lambda_n^{m-1}u_n$, $n\ge1$, $m\in\{1,2,\ldots,N\}$, we have

\[
\Lambda_n^{m-1}u_n - \lambda_{m,n}B_m\Lambda_n^{m-1}u_n \in (I+\lambda_{m,n}R_m)\Lambda_n^m u_n,
\]

that is,

\[
\frac{1}{\lambda_{m,n}}\bigl(\Lambda_n^{m-1}u_n - \Lambda_n^m u_n - \lambda_{m,n}B_m\Lambda_n^{m-1}u_n\bigr) \in R_m\Lambda_n^m u_n.
\]
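The two displays above are just the defining property of the resolvent $J_{R,\lambda}=(I+\lambda R)^{-1}$ of a maximal monotone operator $R$, which may be recorded as

\[
y=J_{R,\lambda}x \quad\Longleftrightarrow\quad x\in(I+\lambda R)y \quad\Longleftrightarrow\quad \frac{1}{\lambda}(x-y)\in Ry \qquad (\lambda>0),
\]

applied here with $R=R_m$, $\lambda=\lambda_{m,n}$, $x=(I-\lambda_{m,n}B_m)\Lambda_n^{m-1}u_n$, and $y=\Lambda_n^m u_n$.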

In terms of the monotonicity of R m , we get

\[
\Bigl\langle v-\Lambda_n^m u_n,\ g-B_mv-\frac{1}{\lambda_{m,n}}\bigl(\Lambda_n^{m-1}u_n-\Lambda_n^m u_n-\lambda_{m,n}B_m\Lambda_n^{m-1}u_n\bigr)\Bigr\rangle \ge 0
\]

and hence

\[
\begin{aligned}
\langle v-\Lambda_n^m u_n,\ g\rangle &\ge \Bigl\langle v-\Lambda_n^m u_n,\ B_mv+\frac{1}{\lambda_{m,n}}\bigl(\Lambda_n^{m-1}u_n-\Lambda_n^m u_n-\lambda_{m,n}B_m\Lambda_n^{m-1}u_n\bigr)\Bigr\rangle \\
&= \Bigl\langle v-\Lambda_n^m u_n,\ B_mv-B_m\Lambda_n^m u_n + B_m\Lambda_n^m u_n - B_m\Lambda_n^{m-1}u_n + \frac{1}{\lambda_{m,n}}\bigl(\Lambda_n^{m-1}u_n-\Lambda_n^m u_n\bigr)\Bigr\rangle \\
&\ge \bigl\langle v-\Lambda_n^m u_n,\ B_m\Lambda_n^m u_n-B_m\Lambda_n^{m-1}u_n\bigr\rangle + \Bigl\langle v-\Lambda_n^m u_n,\ \frac{1}{\lambda_{m,n}}\bigl(\Lambda_n^{m-1}u_n-\Lambda_n^m u_n\bigr)\Bigr\rangle.
\end{aligned}
\]

In particular,

\[
\langle v-\Lambda_{n_i}^m u_{n_i},\ g\rangle \ge \bigl\langle v-\Lambda_{n_i}^m u_{n_i},\ B_m\Lambda_{n_i}^m u_{n_i}-B_m\Lambda_{n_i}^{m-1}u_{n_i}\bigr\rangle + \Bigl\langle v-\Lambda_{n_i}^m u_{n_i},\ \frac{1}{\lambda_{m,n_i}}\bigl(\Lambda_{n_i}^{m-1}u_{n_i}-\Lambda_{n_i}^m u_{n_i}\bigr)\Bigr\rangle.
\]

Since $\|\Lambda_n^m u_n-\Lambda_n^{m-1}u_n\|\to0$ (due to (3.25)) and $\|B_m\Lambda_n^m u_n-B_m\Lambda_n^{m-1}u_n\|\to0$ (due to the Lipschitz continuity of $B_m$), we conclude from $\Lambda_{n_i}^m u_{n_i}\rightharpoonup w$ and $\{\lambda_{m,n}\}\subset[a_m,b_m]\subset(0,2\eta_m)$ that

\[
\lim_{i\to\infty}\langle v-\Lambda_{n_i}^m u_{n_i},\ g\rangle = \langle v-w,\ g\rangle \ge 0.
\]

It follows from the maximal monotonicity of $B_m+R_m$ that $0\in(R_m+B_m)w$, i.e., $w\in\operatorname{I}(B_m,R_m)$. Therefore $w\in\bigcap_{m=1}^{N}\operatorname{I}(B_m,R_m)$. Next we prove that $w\in\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)$. Since $\Delta_n^k x_n=T^{(\Theta_k,\varphi_k)}_{r_{k,n}}(I-r_{k,n}A_k)\Delta_n^{k-1}x_n$, $n\ge1$, $k\in\{1,2,\ldots,M\}$, we have

\[
\Theta_k\bigl(\Delta_n^k x_n,\ y\bigr) + \varphi_k(y) - \varphi_k\bigl(\Delta_n^k x_n\bigr) + \bigl\langle A_k\Delta_n^{k-1}x_n,\ y-\Delta_n^k x_n\bigr\rangle + \frac{1}{r_{k,n}}\bigl\langle y-\Delta_n^k x_n,\ \Delta_n^k x_n-\Delta_n^{k-1}x_n\bigr\rangle \ge 0.
\]

By (A2), we have

\[
\varphi_k(y) - \varphi_k\bigl(\Delta_n^k x_n\bigr) + \bigl\langle A_k\Delta_n^{k-1}x_n,\ y-\Delta_n^k x_n\bigr\rangle + \frac{1}{r_{k,n}}\bigl\langle y-\Delta_n^k x_n,\ \Delta_n^k x_n-\Delta_n^{k-1}x_n\bigr\rangle \ge \Theta_k\bigl(y,\ \Delta_n^k x_n\bigr).
\]

Let $z_t=ty+(1-t)w$ for all $t\in(0,1]$ and $y\in C$. This implies that $z_t\in C$. Then we have

\[
\begin{aligned}
\bigl\langle z_t-\Delta_n^k x_n,\ A_kz_t\bigr\rangle \ge{}& \varphi_k\bigl(\Delta_n^k x_n\bigr) - \varphi_k(z_t) + \bigl\langle z_t-\Delta_n^k x_n,\ A_kz_t\bigr\rangle - \bigl\langle z_t-\Delta_n^k x_n,\ A_k\Delta_n^{k-1}x_n\bigr\rangle \\
&- \Bigl\langle z_t-\Delta_n^k x_n,\ \frac{\Delta_n^k x_n-\Delta_n^{k-1}x_n}{r_{k,n}}\Bigr\rangle + \Theta_k\bigl(z_t,\ \Delta_n^k x_n\bigr) \\
={}& \varphi_k\bigl(\Delta_n^k x_n\bigr) - \varphi_k(z_t) + \bigl\langle z_t-\Delta_n^k x_n,\ A_kz_t-A_k\Delta_n^k x_n\bigr\rangle + \bigl\langle z_t-\Delta_n^k x_n,\ A_k\Delta_n^k x_n-A_k\Delta_n^{k-1}x_n\bigr\rangle \\
&- \Bigl\langle z_t-\Delta_n^k x_n,\ \frac{\Delta_n^k x_n-\Delta_n^{k-1}x_n}{r_{k,n}}\Bigr\rangle + \Theta_k\bigl(z_t,\ \Delta_n^k x_n\bigr).
\end{aligned}
\tag{3.37}
\]

By (3.26), we have $\|A_k\Delta_n^k x_n-A_k\Delta_n^{k-1}x_n\|\to0$ as $n\to\infty$. Furthermore, by the monotonicity of $A_k$, we obtain $\langle z_t-\Delta_n^k x_n,\ A_kz_t-A_k\Delta_n^k x_n\rangle\ge0$. Then by (A4) we obtain

\[
\langle z_t-w,\ A_kz_t\rangle \ge \varphi_k(w)-\varphi_k(z_t)+\Theta_k(z_t,w).
\tag{3.38}
\]

Utilizing (A1), (A4), and (3.38), we obtain

\[
\begin{aligned}
0 &= \Theta_k(z_t,z_t)+\varphi_k(z_t)-\varphi_k(z_t) \\
&\le t\Theta_k(z_t,y)+(1-t)\Theta_k(z_t,w)+t\varphi_k(y)+(1-t)\varphi_k(w)-\varphi_k(z_t) \\
&\le t\bigl[\Theta_k(z_t,y)+\varphi_k(y)-\varphi_k(z_t)\bigr]+(1-t)\langle z_t-w,\ A_kz_t\rangle \\
&= t\bigl[\Theta_k(z_t,y)+\varphi_k(y)-\varphi_k(z_t)\bigr]+(1-t)t\langle y-w,\ A_kz_t\rangle,
\end{aligned}
\]

and hence

\[
0 \le \Theta_k(z_t,y)+\varphi_k(y)-\varphi_k(z_t)+(1-t)\langle y-w,\ A_kz_t\rangle.
\]

Letting $t\to0$, we have, for each $y\in C$,

\[
0 \le \Theta_k(w,y)+\varphi_k(y)-\varphi_k(w)+\langle y-w,\ A_kw\rangle.
\]

This implies that $w\in\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)$ and hence $w\in\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)$. Thus, $w\in\bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n)\cap\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)\cap\bigcap_{m=1}^{N}\operatorname{I}(B_m,R_m)$.

Further, let us show that $w\in\Gamma$. As a matter of fact, from $\|x_n-y_n\|\to0$ and $\|y_n-\tilde y_n\|\to0$ we know that $y_{n_i}\rightharpoonup w$ and $\tilde y_{n_i}\rightharpoonup w$. Define

\[
\tilde Tv=
\begin{cases}
\nabla f(v)+N_Cv, & \text{if } v\in C,\\
\emptyset, & \text{if } v\notin C,
\end{cases}
\]

where $N_Cv=\{u\in H:\langle v-p,\ u\rangle\ge0,\ \forall p\in C\}$ is the normal cone to $C$ at $v$. Then $\tilde T$ is maximal monotone, and $0\in\tilde Tv$ if and only if $v\in\operatorname{VI}(C,\nabla f)$; see [28] for more details. Let $(v,u)\in G(\tilde T)$. Then we have

\[
u \in \tilde Tv = \nabla f(v)+N_Cv
\]

and hence

\[
u-\nabla f(v) \in N_Cv.
\]

So, we have

\[
\langle v-p,\ u-\nabla f(v)\rangle \ge 0,\quad \forall p\in C.
\]

On the other hand, from

\[
\tilde y_n = P_C\bigl(y_n-\lambda\nabla f_{\alpha_n}(y_n)\bigr) \quad\text{and}\quad v\in C,
\]

we have

\[
\bigl\langle y_n-\lambda\nabla f_{\alpha_n}(y_n)-\tilde y_n,\ \tilde y_n-v\bigr\rangle \ge 0,
\]

and hence

\[
\Bigl\langle v-\tilde y_n,\ \frac{\tilde y_n-y_n}{\lambda}+\nabla f_{\alpha_n}(y_n)\Bigr\rangle \ge 0.
\]

Therefore, from

\[
u-\nabla f(v)\in N_Cv \quad\text{and}\quad \tilde y_{n_i}\in C,
\]

we have

\[
\begin{aligned}
\langle v-\tilde y_{n_i},\ u\rangle &\ge \langle v-\tilde y_{n_i},\ \nabla f(v)\rangle \\
&\ge \langle v-\tilde y_{n_i},\ \nabla f(v)\rangle - \Bigl\langle v-\tilde y_{n_i},\ \frac{\tilde y_{n_i}-y_{n_i}}{\lambda}+\nabla f_{\alpha_{n_i}}(y_{n_i})\Bigr\rangle \\
&= \langle v-\tilde y_{n_i},\ \nabla f(v)\rangle - \Bigl\langle v-\tilde y_{n_i},\ \frac{\tilde y_{n_i}-y_{n_i}}{\lambda}+\nabla f(y_{n_i})\Bigr\rangle - \alpha_{n_i}\langle v-\tilde y_{n_i},\ y_{n_i}\rangle \\
&= \langle v-\tilde y_{n_i},\ \nabla f(v)-\nabla f(\tilde y_{n_i})\rangle + \langle v-\tilde y_{n_i},\ \nabla f(\tilde y_{n_i})-\nabla f(y_{n_i})\rangle \\
&\quad - \Bigl\langle v-\tilde y_{n_i},\ \frac{\tilde y_{n_i}-y_{n_i}}{\lambda}\Bigr\rangle - \alpha_{n_i}\langle v-\tilde y_{n_i},\ y_{n_i}\rangle \\
&\ge \langle v-\tilde y_{n_i},\ \nabla f(\tilde y_{n_i})-\nabla f(y_{n_i})\rangle - \Bigl\langle v-\tilde y_{n_i},\ \frac{\tilde y_{n_i}-y_{n_i}}{\lambda}\Bigr\rangle - \alpha_{n_i}\langle v-\tilde y_{n_i},\ y_{n_i}\rangle.
\end{aligned}
\]

Hence, letting $i\to\infty$, we obtain

\[
\langle v-w,\ u\rangle \ge 0.
\]

Since $\tilde T$ is maximal monotone, we have $w\in\tilde T^{-1}0$, and hence $w\in\operatorname{VI}(C,\nabla f)$, which leads to $w\in\Gamma$. Consequently, $w\in\bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n)\cap\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)\cap\bigcap_{m=1}^{N}\operatorname{I}(B_m,R_m)\cap\Gamma=:\Omega$. This shows that $\omega_w(x_n)\subset\Omega$.
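As an aside, the characterization $v\in\operatorname{VI}(C,\nabla f)\Leftrightarrow v=P_C(v-\lambda\nabla f(v))$ ($\lambda>0$), which underlies the use of [28] above, is easy to verify numerically. The following sketch is our own illustration on a hypothetical toy instance (quadratic $f$, box constraint $C$); it is not part of algorithm (3.1):

```python
import numpy as np

# Toy instance: f(x) = 0.5*||x - b||^2, so grad f(x) = x - b (Lipschitz with L = 1),
# and C = [0, 1]^2 is a box, so P_C is a componentwise clip.
b = np.array([1.5, -0.3])
grad_f = lambda x: x - b
P_C = lambda x: np.clip(x, 0.0, 1.0)

lam = 1.0        # any lam in (0, 2/L) works for this instance
x = np.zeros(2)
for _ in range(100):                 # iterate x <- P_C(x - lam * grad f(x))
    x = P_C(x - lam * grad_f(x))

# x is (numerically) a fixed point of P_C(I - lam*grad f), hence a solution of
# VI(C, grad f): <grad f(x), y - x> >= 0 for all y in C.
print(x)                             # expect approximately [1.0, 0.0]
for y in np.random.rand(5, 2):       # spot-check the VI inequality at random y in C
    assert grad_f(x) @ (y - x) >= -1e-9
```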

Step 6. We prove that $\omega_w(x_n)\subset\Xi$ provided additionally $\|x_n-y_n\|=o(\theta_n)$.

Indeed, let $w\in\omega_w(x_n)$ be the same as mentioned in Step 5. Then we get $x_{n_i}\rightharpoonup w$. Utilizing Lemma 2.7, from (3.1) and (3.4) we obtain, for $p\in\Omega$,

\[
\begin{aligned}
\|y_n-p\|^2 ={}& \bigl\|\theta_n\gamma(Sx_n-Sp) + (I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})v_n - (I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})p \\
&\ + (I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})p - (I-\theta_n\mu F)P_C(I-\lambda\nabla f)p + \theta_n(\gamma S-\mu F)p\bigr\|^2 \\
\le{}& \bigl\|\theta_n\gamma(Sx_n-Sp) + (I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})v_n - (I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})p \\
&\ + (I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})p - (I-\theta_n\mu F)P_C(I-\lambda\nabla f)p\bigr\|^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle \\
\le{}& \bigl[\theta_n\gamma\|Sx_n-Sp\| + \|(I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})v_n - (I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})p\| \\
&\ + \|(I-\theta_n\mu F)P_C(I-\lambda\nabla f_{\alpha_n})p - (I-\theta_n\mu F)P_C(I-\lambda\nabla f)p\|\bigr]^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle \\
\le{}& \bigl[\theta_n\gamma\|x_n-p\| + (1-\theta_n\tau)\|P_C(I-\lambda\nabla f_{\alpha_n})v_n - P_C(I-\lambda\nabla f_{\alpha_n})p\| \\
&\ + (1-\theta_n\tau)\|P_C(I-\lambda\nabla f_{\alpha_n})p - P_C(I-\lambda\nabla f)p\|\bigr]^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle \\
\le{}& \bigl[\theta_n\gamma\|x_n-p\| + (1-\theta_n\tau)\|v_n-p\| + (1-\theta_n\tau)\|(I-\lambda\nabla f_{\alpha_n})p-(I-\lambda\nabla f)p\|\bigr]^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle \\
={}& \bigl[\theta_n\gamma\|x_n-p\| + (1-\theta_n\tau)\|v_n-p\| + (1-\theta_n\tau)\lambda\alpha_n\|p\|\bigr]^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle \\
\le{}& \bigl[\theta_n\gamma\|x_n-p\| + (1-\theta_n\tau)\|x_n-p\| + \lambda\alpha_n\|p\|\bigr]^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle \\
={}& \bigl[\bigl(1-\theta_n(\tau-\gamma)\bigr)\|x_n-p\| + \lambda\alpha_n\|p\|\bigr]^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle \\
\le{}& \bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)^2 + 2\theta_n\langle(\gamma S-\mu F)p,\ y_n-p\rangle,
\end{aligned}
\tag{3.39}
\]

which immediately implies that

\[
\begin{aligned}
2\langle(\gamma S-\mu F)p,\ p-y_n\rangle &\le \frac{1}{\theta_n}\Bigl(\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)^2 - \|y_n-p\|^2\Bigr) \\
&\le \frac{\|x_n-y_n\|+\lambda\alpha_n\|p\|}{\theta_n}\bigl(\|x_n-p\|+\|y_n-p\|+\lambda\alpha_n\|p\|\bigr).
\end{aligned}
\]

This, together with $\|x_n-y_n\|=o(\theta_n)$ and $\alpha_n=o(\theta_n)$, leads to

\[
\limsup_{n\to\infty}\langle(\gamma S-\mu F)p,\ p-y_n\rangle \le 0.
\]

Observe that

\[
\begin{aligned}
\limsup_{n\to\infty}\langle(\gamma S-\mu F)p,\ p-x_n\rangle &= \limsup_{n\to\infty}\bigl(\langle(\gamma S-\mu F)p,\ y_n-x_n\rangle + \langle(\gamma S-\mu F)p,\ p-y_n\rangle\bigr) \\
&= \limsup_{n\to\infty}\langle(\gamma S-\mu F)p,\ p-y_n\rangle \le 0.
\end{aligned}
\]

So, it follows from $x_{n_i}\rightharpoonup w$ that

\[
\langle(\gamma S-\mu F)p,\ p-w\rangle \le 0,\quad \forall p\in\Omega.
\]

Also, note that $0<\gamma\le\tau$ and

\[
\begin{aligned}
\mu\eta\ge\tau \quad&\Longleftrightarrow\quad \mu\eta\ge 1-\sqrt{1-\mu(2\eta-\mu\kappa^2)} \\
&\Longleftrightarrow\quad \sqrt{1-\mu(2\eta-\mu\kappa^2)}\ge 1-\mu\eta \\
&\Longleftrightarrow\quad 1-2\mu\eta+\mu^2\kappa^2\ge 1-2\mu\eta+\mu^2\eta^2 \\
&\Longleftrightarrow\quad \kappa^2\ge\eta^2 \\
&\Longleftrightarrow\quad \kappa\ge\eta.
\end{aligned}
\]
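As a quick numerical sanity check of this chain of equivalences (our own illustration; recall $\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}$), the sample constants below are hypothetical:

```python
import math

# Sanity check (illustrative only): with tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa**2)),
# verify the equivalence  mu*eta >= tau  <=>  kappa >= eta  on a few admissible triples.
def tau(mu, eta, kappa):
    return 1.0 - math.sqrt(1.0 - mu * (2.0 * eta - mu * kappa ** 2))

for mu, eta, kappa in [(0.5, 0.4, 0.9), (0.2, 0.3, 0.25), (0.1, 0.5, 0.8)]:
    assert 0 < mu < 2 * eta / kappa ** 2          # standard restriction on mu
    assert (mu * eta >= tau(mu, eta, kappa)) == (kappa >= eta)
```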

It is clear that

\[
\bigl\langle(\mu F-\gamma S)x-(\mu F-\gamma S)y,\ x-y\bigr\rangle \ge (\mu\eta-\gamma)\|x-y\|^2,\quad \forall x,y\in H.
\]

Hence, it follows from $0<\gamma\le\tau\le\mu\eta$ that $\mu F-\gamma S$ is monotone. Since $w\in\omega_w(x_n)\subset\Omega$, by Minty's lemma [41] we have

\[
\langle(\gamma S-\mu F)w,\ p-w\rangle \le 0,\quad \forall p\in\Omega;
\]

that is, $w\in\Xi$. Therefore $\omega_w(x_n)\subset\Xi$. This completes the proof. □
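For the reader's convenience, the version of Minty's lemma used in the last step may be stated as follows (a standard formulation; cf. [41]): if $K$ is nonempty, closed, and convex and $A:K\to H$ is monotone and continuous, then for $u\in K$,

\[
\langle Au,\ v-u\rangle\ge0\quad\forall v\in K \qquad\Longleftrightarrow\qquad \langle Av,\ v-u\rangle\ge0\quad\forall v\in K.
\]

Above it is applied with $K=\Omega$ and $A=\mu F-\gamma S$: the inequality $\langle(\gamma S-\mu F)p,\ p-w\rangle\le0$ for all $p\in\Omega$ is exactly the right-hand formulation, and the lemma upgrades it to $\langle(\mu F-\gamma S)w,\ p-w\rangle\ge0$ for all $p\in\Omega$, i.e., $w\in\Xi$.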

Theorem 3.2 Assume that all the conditions in Theorem  3.1 hold. Then we have:

  1. (i)

    $\{x_n\}$ converges strongly to a point $x^*\in\Omega$, which is a unique solution of the VIP: $\langle(\gamma V-\mu F)x^*,\ p-x^*\rangle\le0$, $\forall p\in\Omega$; equivalently,

    \[
    P_\Omega\bigl(I-(\mu F-\gamma V)\bigr)x^* = x^*;
    \]
  2. (ii)

    $\{x_n\}$ converges strongly to a unique solution of THVI (1.9) provided additionally $\|x_n-y_n\|=o(\theta_n)$.

Proof Observe that

\[
\bigl\langle(\mu F-\gamma V)x-(\mu F-\gamma V)y,\ x-y\bigr\rangle \ge (\mu\eta-\gamma l)\|x-y\|^2,\quad \forall x,y\in H.
\]

Hence $\mu F-\gamma V$ is strongly monotone with constant $\mu\eta-\gamma l>0$. In the meantime, it is easy to see that $\mu F-\gamma V$ is Lipschitz continuous with constant $\mu\kappa+\gamma l>0$. Thus, there exists a unique solution $x^*\in\Omega$ of the VIP

\[
\langle(\gamma V-\mu F)x^*,\ p-x^*\rangle \le 0,\quad \forall p\in\Omega.
\tag{3.40}
\]
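A standard way to see the existence and uniqueness asserted here, sketched for completeness: since $\Omega$ is nonempty, closed, and convex, for any $0<\rho<2(\mu\eta-\gamma l)/(\mu\kappa+\gamma l)^2$ the mapping $G:=P_\Omega(I-\rho(\mu F-\gamma V))$ satisfies, for all $x,y\in H$,

\[
\|Gx-Gy\|^2 \le \|x-y\|^2 - 2\rho\bigl\langle(\mu F-\gamma V)x-(\mu F-\gamma V)y,\ x-y\bigr\rangle + \rho^2\bigl\|(\mu F-\gamma V)x-(\mu F-\gamma V)y\bigr\|^2 \le \bigl(1-2\rho(\mu\eta-\gamma l)+\rho^2(\mu\kappa+\gamma l)^2\bigr)\|x-y\|^2,
\]

so $G$ is a contraction and has a unique fixed point by the Banach contraction principle. Moreover, for any $\rho>0$, $x^*$ solves VIP (3.40) if and only if $x^*=P_\Omega(x^*-\rho(\mu F-\gamma V)x^*)$; taking $\rho=1$ gives the fixed point equation displayed next.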

Equivalently, $x^*=P_\Omega(I-(\mu F-\gamma V))x^*$. Now, let us show that

\[
\limsup_{n\to\infty}\langle(\gamma V-\mu F)x^*,\ x_n-x^*\rangle \le 0.
\]

Since $\{x_n\}$ is bounded, we may assume, without loss of generality, that there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i}\rightharpoonup w$ and

\[
\limsup_{n\to\infty}\langle(\gamma V-\mu F)x^*,\ x_n-x^*\rangle = \lim_{i\to\infty}\langle(\gamma V-\mu F)x^*,\ x_{n_i}-x^*\rangle = \langle(\gamma V-\mu F)x^*,\ w-x^*\rangle.
\]

In terms of Theorem 3.1(ii), we know that $w\in\omega_w(x_n)\subset\Omega$. So, from (3.40) it follows that

\[
\limsup_{n\to\infty}\langle(\gamma V-\mu F)x^*,\ x_n-x^*\rangle = \langle(\gamma V-\mu F)x^*,\ w-x^*\rangle \le 0.
\tag{3.41}
\]

Next, let us show that $\lim_{n\to\infty}\|x_n-x^*\|=0$. In fact, from (3.1) and (3.39) with $p=x^*$ we get

\[
\begin{aligned}
\|x_{n+1}-x^*\|^2 ={}& \bigl\|\beta_n\gamma(Vx_n-Vx^*) + (I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f_{\alpha_n})y_n - (I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f_{\alpha_n})x^* \\
&\ + (I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f_{\alpha_n})x^* - (I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f)x^* + \beta_n(\gamma V-\mu F)x^*\bigr\|^2 \\
\le{}& \bigl[\beta_n\gamma\|Vx_n-Vx^*\| + \|(I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f_{\alpha_n})y_n - (I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f_{\alpha_n})x^*\| \\
&\ + \|(I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f_{\alpha_n})x^* - (I-\beta_n\mu F)W_nP_C(I-\lambda\nabla f)x^*\|\bigr]^2 + 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
\le{}& \bigl[\beta_n\gamma l\|x_n-x^*\| + (1-\beta_n\tau)\|P_C(I-\lambda\nabla f_{\alpha_n})y_n - P_C(I-\lambda\nabla f_{\alpha_n})x^*\| \\
&\ + (1-\beta_n\tau)\|P_C(I-\lambda\nabla f_{\alpha_n})x^* - P_C(I-\lambda\nabla f)x^*\|\bigr]^2 + 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
\le{}& \bigl[\beta_n\gamma l\|x_n-x^*\| + (1-\beta_n\tau)\|y_n-x^*\| + (1-\beta_n\tau)\|(I-\lambda\nabla f_{\alpha_n})x^*-(I-\lambda\nabla f)x^*\|\bigr]^2 \\
&+ 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
={}& \bigl[\beta_n\gamma l\|x_n-x^*\| + (1-\beta_n\tau)\bigl(\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr)\bigr]^2 + 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
={}& \Bigl[\beta_n\tau\frac{\gamma l}{\tau}\|x_n-x^*\| + (1-\beta_n\tau)\bigl(\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr)\Bigr]^2 + 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-x^*\|^2 + (1-\beta_n\tau)\bigl(\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr)^2 + 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-x^*\|^2 + (1-\beta_n\tau)\|y_n-x^*\|^2 + \lambda\alpha_n\|x^*\|\bigl(2\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr) \\
&+ 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-x^*\|^2 + (1-\beta_n\tau)\bigl[\bigl(\|x_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr)^2 + 2\theta_n\langle(\gamma S-\mu F)x^*,\ y_n-x^*\rangle\bigr] \\
&+ \lambda\alpha_n\|x^*\|\bigl(2\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr) + 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
\le{}& \beta_n\frac{\gamma^2l^2}{\tau}\|x_n-x^*\|^2 + (1-\beta_n\tau)\|x_n-x^*\|^2 + \lambda\alpha_n\|x^*\|\bigl(2\|x_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr) \\
&+ 2(1-\beta_n\tau)\theta_n\langle(\gamma S-\mu F)x^*,\ y_n-x^*\rangle + \lambda\alpha_n\|x^*\|\bigl(2\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr) \\
&+ 2\beta_n\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle \\
={}& \Bigl(1-\beta_n\frac{\tau^2-\gamma^2l^2}{\tau}\Bigr)\|x_n-x^*\|^2 \\
&+ \beta_n\frac{\tau^2-\gamma^2l^2}{\tau}\cdot\frac{\tau}{\tau^2-\gamma^2l^2}\Bigl[2(1-\beta_n\tau)\frac{\theta_n}{\beta_n}\langle(\gamma S-\mu F)x^*,\ y_n-x^*\rangle + 2\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle\Bigr] \\
&+ 2\lambda\alpha_n\|x^*\|\bigl(\|x_n-x^*\|+\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr).
\end{aligned}
\tag{3.42}
\]

Since $\sum_{n=1}^{\infty}\alpha_n<\infty$, $\sum_{n=1}^{\infty}\beta_n=\infty$, $\lim_{n\to\infty}\frac{\theta_n}{\beta_n}=0$, and $\limsup_{n\to\infty}\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle\le0$ (due to (3.41)), we deduce that $\sum_{n=1}^{\infty}2\lambda\alpha_n\|x^*\|(\|x_n-x^*\|+\|y_n-x^*\|+\lambda\alpha_n\|x^*\|)<\infty$, $\sum_{n=1}^{\infty}\beta_n\frac{\tau^2-(\gamma l)^2}{\tau}=\infty$, and

\[
\limsup_{n\to\infty}\frac{\tau}{\tau^2-(\gamma l)^2}\Bigl[2(1-\beta_n\tau)\frac{\theta_n}{\beta_n}\langle(\gamma S-\mu F)x^*,\ y_n-x^*\rangle + 2\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle\Bigr] \le 0.
\]

Therefore, applying Lemma 2.8 to (3.42), we infer that $\lim_{n\to\infty}\|x_n-x^*\|=0$.
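The mechanism behind this application of Lemma 2.8 is the standard recursion $s_{n+1}\le(1-a_n)s_n+a_nb_n+c_n$ with $\sum_{n}a_n=\infty$, $\limsup_{n}b_n\le0$, and $\sum_{n}c_n<\infty$, which forces $s_n\to0$. The following sketch illustrates this numerically with hypothetical stand-in sequences (not the ones from the proof):

```python
import math

# Illustration of the Lemma 2.8-type recursion: s_{n+1} <= (1 - a_n)*s_n + a_n*b_n + c_n
# with sum(a_n) = inf, limsup(b_n) <= 0, sum(c_n) < inf  ==>  s_n -> 0.
# The concrete sequences are hypothetical stand-ins chosen to satisfy the hypotheses.
s = 1.0
for n in range(1, 200001):
    a = 1.0 / n                 # sum a_n diverges
    b = 1.0 / math.sqrt(n)      # b_n -> 0, so limsup b_n <= 0
    c = 1.0 / n ** 2            # sum c_n converges
    s = (1.0 - a) * s + a * b + c

print(s)                        # tends to 0 as the horizon grows (about 5e-3 here)
```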

On the other hand, let us suppose that $\|x_n-y_n\|=o(\theta_n)$. Then by Theorem 3.1(iii) we know that $\omega_w(x_n)\subset\Xi$. Since $\mu F-\gamma V:H\to H$ is $(\mu\kappa+\gamma l)$-Lipschitzian and $(\mu\eta-\gamma l)$-strongly monotone, there exists a unique solution $x^*\in\Xi$ of the VIP

\[
\langle\gamma Vx^*-\mu Fx^*,\ x-x^*\rangle \le 0,\quad \forall x\in\Xi.
\tag{3.43}
\]

Since the sequence $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

\[
\limsup_{n\to\infty}\langle(\gamma V-\mu F)x^*,\ x_n-x^*\rangle = \lim_{i\to\infty}\langle(\gamma V-\mu F)x^*,\ x_{n_i}-x^*\rangle.
\tag{3.44}
\]

Also, since $H$ is reflexive and $\{x_n\}$ is bounded, without loss of generality we may assume that $x_{n_i}\rightharpoonup\bar x\in\Xi$ (due to Theorem 3.1(iii)). Taking into account that $x^*$ is the unique solution of the VIP (3.43), we deduce from (3.44) that

\[
\limsup_{n\to\infty}\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle = \langle(\gamma V-\mu F)x^*,\ \bar x-x^*\rangle \le 0.
\tag{3.45}
\]

Repeating the same argument as in (3.42) we immediately conclude that

\[
\begin{aligned}
\|x_{n+1}-x^*\|^2 \le{}& \Bigl(1-\beta_n\frac{\tau^2-\gamma^2l^2}{\tau}\Bigr)\|x_n-x^*\|^2 \\
&+ \beta_n\frac{\tau^2-\gamma^2l^2}{\tau}\cdot\frac{\tau}{\tau^2-\gamma^2l^2}\Bigl[2(1-\beta_n\tau)\frac{\theta_n}{\beta_n}\langle(\gamma S-\mu F)x^*,\ y_n-x^*\rangle + 2\langle(\gamma V-\mu F)x^*,\ x_{n+1}-x^*\rangle\Bigr] \\
&+ 2\lambda\alpha_n\|x^*\|\bigl(\|x_n-x^*\|+\|y_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr).
\end{aligned}
\tag{3.46}
\]

Repeating the same arguments as above, we can readily see that $\lim_{n\to\infty}\|x_n-x^*\|=0$. This completes the proof. □

Remark 3.1 It is obvious that our iterative algorithm (3.1) is very different from Ceng and Al-Homidan's iterative scheme in [[23], Theorem 21] and from Yao et al.'s iterative scheme (1.8). Here, the two-step iterative scheme in [[35], Theorem 3.2] and the three-step iterative scheme in [[23], Theorem 21] are combined to develop our four-step iterative scheme (3.1) for the THVI (1.9). It is worth pointing out that, without assumptions like those in [[35], Theorem 3.2] (e.g., $\{x_n\}$ is bounded, $\operatorname{Fix}(T)\subset\operatorname{int}C$, and $\|x-Tx\|\ge k\operatorname{Dist}(x,\operatorname{Fix}(T))$, $\forall x\in C$, for some $k>0$), the sequence $\{x_n\}$ generated by (3.1) converges strongly to a point $x^*\in\bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n)\cap\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)\cap\bigcap_{i=1}^{N}\operatorname{I}(B_i,R_i)\cap\Gamma=:\Omega$, which is a unique solution of the VIP: $\langle\gamma Vx^*-\mu Fx^*,\ x-x^*\rangle\le0$, $\forall x\in\Omega$; equivalently, $P_\Omega(I-(\mu F-\gamma V))x^*=x^*$ (see Theorem 3.2(i)).

Remark 3.2 Theorems 3.1 and 3.2 improve, extend, supplement, and develop work by Yao et al. [[35], Theorems 3.1 and 3.2] and Ceng and Al-Homidan [[23], Theorem 21] in the following aspects:

  1. (a)

    Our THVI (1.9), with the unique solution $x^*\in\Xi$ satisfying

    \[
    x^* = P_{\bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n)\cap\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)\cap\bigcap_{i=1}^{N}\operatorname{I}(B_i,R_i)\cap\Gamma}\bigl(I-(\mu F-\gamma S)\bigr)x^*,
    \]

    is more general than the problem of finding a point $\tilde x\in C$ satisfying $\tilde x=P_{\operatorname{Fix}(T)}S\tilde x$ in [35], and than the problem of finding a point $x^*\in\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,B_k)\cap\bigcap_{i=1}^{N}\operatorname{VI}(C,A_i)\cap\Gamma$ in [[23], Theorem 21].

  2. (b)

    Our four-step iterative scheme (3.1) for THVI (1.9) is more flexible and more general than Ceng and Al-Homidan's three-step scheme in [[23], Theorem 21] and than Yao et al.'s two-step scheme (1.8), because it can be used to solve several kinds of problems, e.g., the THVI, the HFPP, and the problem of finding a common point of four sets: $\bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n)$, $\bigcap_{k=1}^{M}\operatorname{GMEP}(\Theta_k,\varphi_k,A_k)$, $\bigcap_{i=1}^{N}\operatorname{I}(B_i,R_i)$, and $\Gamma$. In addition, it also drops the crucial requirements in [[35], Theorem 3.2(v)] that $\operatorname{Fix}(T)\subset\operatorname{int}C$ and $\|x-Tx\|\ge k\operatorname{Dist}(x,\operatorname{Fix}(T))$, $\forall x\in C$, for some $k>0$.

  3. (c)

    The argument techniques in our Theorems 3.1 and 3.2 are very different from those in [[35], Theorems 3.1 and 3.2] and in [[23], Theorem 21], because we use the W-mapping approach to fixed points of the infinitely many nonexpansive mappings $\{T_n\}_{n=1}^{\infty}$ (see Lemmas 2.4 and 2.5), the properties of resolvent operators and maximal monotone mappings (see Proposition 2.4, Remark 2.3, and Lemmas 2.10-2.14), the fixed point equation $x^*=P_C(I-\lambda\nabla f)x^*$ equivalent to the CMP (1.4), and the contractive coefficient estimates for the contractions associated with nonexpansive mappings (see Lemma 2.7).

  4. (d)

    Compared with the proof in [[23], Theorem 21], our proof (see the arguments in Theorem 3.1) makes use of Minty's lemma [41] to derive $\omega_w(x_n)\subset\Xi$, because our Theorem 3.1 involves the more complex THVI (1.9). The THVI (1.9) involves the HFPP for the nonexpansive mapping $S$ and the infinitely many nonexpansive mappings $\{T_n\}_{n=1}^{\infty}$, whereas the problem in [[23], Theorem 21] involves no HFPP for nonexpansive mappings.

References

  1. Lions JL: Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod, Paris; 1969.

  2. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.

  3. Takahashi W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama; 2000.

  4. Oden JT: Quantitative Methods on Nonlinear Mechanics. Prentice-Hall, Englewood Cliffs; 1986.

  5. Zeidler E: Nonlinear Functional Analysis and Its Applications. Springer, New York; 1985.

  6. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.

  7. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10(5): 1293–1303.

  8. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.

  9. Huang NJ: A new completely general class of variational inclusions with noncompact valued mappings. Comput. Math. Appl. 1998, 35(10): 9–14. 10.1016/S0898-1221(98)00067-4

  10. Ceng LC, Ansari QH, Schaible S: Hybrid extragradient-like methods for generalized mixed equilibrium problems, system of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012, 53: 69–96. 10.1007/s10898-011-9703-4

  11. Fang YP, Huang NJ: H-accretive operators and resolvent operator technique for solving variational inclusions in Banach spaces. Appl. Math. Lett. 2004, 17: 647–653. 10.1016/S0893-9659(04)90099-7

  12. Ceng LC, Ansari QH, Wong MM, Yao JC: Mann type hybrid extragradient method for variational inequalities, variational inclusions and fixed point problems. Fixed Point Theory 2012, 13(2): 403–422.

  13. Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64(4): 633–642. 10.1016/j.camwa.2011.12.074

  14. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75(4): 2116–2125. 10.1016/j.na.2011.10.012

  15. Ceng LC, Hadjisavvas N, Wong NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46: 635–646. 10.1007/s10898-009-9454-7

  16. Ceng LC, Yao JC: A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010, 72: 1922–1937. 10.1016/j.na.2009.09.033

  17. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218: 1112–1123. 10.1016/j.amc.2011.01.061

  18. Cai G, Bu SQ: Strong and weak convergence theorems for general mixed equilibrium problems and variational inequality problems and fixed point problems in Hilbert spaces. J. Comput. Appl. Math. 2013, 247: 34–52.

  19. Ceng LC, Petrusel A: Relaxed extragradient-like method for general system of generalized mixed equilibria and fixed point problem. Taiwan. J. Math. 2012, 16(2): 445–478.

  20. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z

  21. Ceng LC, Guu SM, Yao JC: Hybrid iterative method for finding common solutions of generalized mixed equilibrium and fixed point problems. Fixed Point Theory Appl. 2012, 2012: Article ID 92.

  22. Ceng LC, Hu HY, Wong MM: Strong and weak convergence theorems for generalized mixed equilibrium problem with perturbation and fixed point problem of infinitely many nonexpansive mappings. Taiwan. J. Math. 2011, 15(3): 1341–1367.

  23. Ceng LC, Al-Homidan S: Algorithms of common solutions for generalized mixed equilibria, variational inclusions, and constrained convex minimization. Abstr. Appl. Anal. 2014, 2014: Article ID 132053.

  24. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042

  25. Ceng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022

  26. Colao V, Marino G, Xu HK: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 2008, 344: 340–352. 10.1016/j.jmaa.2008.02.041

  27. Ceng LC, Petrusel A, Yao JC: Iterative approaches to solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. J. Optim. Theory Appl. 2009, 143: 37–58. 10.1007/s10957-009-9549-9

  28. Rockafellar RT: Monotone operators and the proximal point algorithms. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

  29. Zeng LC, Guu SM, Yao JC: Characterization of H-monotone operators with applications to variational inclusions. Comput. Math. Appl. 2005, 50(3–4): 329–337. 10.1016/j.camwa.2005.06.001

  30. Zhang SS, Lee Joseph HW, Chan CK: Algorithms of common solutions for quasi variational inclusions and fixed point problems. Appl. Math. Mech. 2008, 29: 571–581. 10.1007/s10483-008-0502-y

  31. Peng JW, Wang Y, Shyu DS, Yao JC: Common solutions of an iterative scheme for variational inclusions, equilibrium problems and fixed point problems. J. Inequal. Appl. 2008, 2008: Article ID 720371.

  32. Ceng LC, Al-Homidan S, Ansari QH: Iterative algorithms with regularization for hierarchical variational inequality problems and convex minimization problems. Fixed Point Theory Appl. 2013, 2013: Article ID 284.

  33. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z

  34. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005

  35. Yao Y, Liou YC, Marino G: Two-step iterative algorithms for hierarchical fixed point problems and variational inequality problems. J. Appl. Math. Comput. 2009, 31(1–2): 433–445. 10.1007/s12190-008-0222-5

  36. Iiduka H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 2009, 71: 1292–1297. 10.1016/j.na.2009.01.133

  37. Iiduka H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 2011, 148: 580–592. 10.1007/s10957-010-9769-z

  38. Ceng LC, Ansari QH, Yao JC: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151: 489–512. 10.1007/s10957-011-9882-7

  39. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

  40. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6): 475–504. 10.1080/02331930412331327157

  41. Goebel K, Kirk WA: Topics on Metric Fixed-Point Theory. Cambridge University Press, Cambridge; 1990.

  42. O'Hara JG, Pillay P, Xu HK: Iterative approaches to convex feasibility problems in Banach spaces. Nonlinear Anal. 2006, 64(9): 2022–2042. 10.1016/j.na.2005.07.036

  43. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, 2007: Article ID 064363.

  44. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119(1): 185–201.

  45. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66(2): 240–256.

  46. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Groningen; 1976.

  47. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.

  48. Ceng LC, Yao JC: Approximate proximal methods in vector optimization. Eur. J. Oper. Res. 2007, 183: 1–19. 10.1016/j.ejor.2006.09.070

  49. Baillon JB, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664


Acknowledgements

Lu-Chuan Ceng was partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133), and Ph.D. Program Foundation of Ministry of Education of China (20123127110002). Yeong-Cheng Liou was supported in part by NSC 101-2628-E-230-001-MY3 and NSC 103-2923-E-037-001-MY3.

Author information


Correspondence to Yeong-Cheng Liou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ceng, LC., Wen, CF. & Liou, C. Multi-step iterative algorithms with regularization for triple hierarchical variational inequalities with constraints of mixed equilibria, variational inclusions, and convex minimization. J Inequal Appl 2014, 414 (2014). https://doi.org/10.1186/1029-242X-2014-414


  • DOI: https://doi.org/10.1186/1029-242X-2014-414
