
Multi-step extragradient method with regularization for triple hierarchical variational inequalities with variational inclusion and split feasibility constraints

Abstract

By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann’s iteration method, and the gradient-projection method with regularization, a hybrid multi-step extragradient algorithm with regularization for finding a solution of a triple hierarchical variational inequality problem is introduced and analyzed. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a unique solution of a triple hierarchical variational inequality problem which is defined over the set of solutions of a hierarchical variational inequality problem defined over the set of common solutions of finitely many generalized mixed equilibrium problems (GMEP), finitely many variational inclusions, fixed point problems, and the split feasibility problem (SFP). We also prove the strong convergence of the proposed algorithm to a common solution of the SFP, finitely many GMEPs, finitely many variational inclusions, and the fixed point problem of a strict pseudocontraction. The results presented in this paper improve and extend the corresponding results announced by several others.

MSC:49J30, 47H09, 47J20, 49M05.

1 Introduction

The following problems have their own importance because of their applications in diverse areas of science, engineering, social sciences, and management:

  • Equilibrium problems including variational inequalities.

  • Variational inclusion problems.

  • Split feasibility problems.

  • Fixed point problems.

One way or the other, these problems are related to each other. They are described as follows.

Equilibrium problem

Let C be a nonempty closed convex subset of a real Hilbert space H and Θ:C×C→ℝ be a real-valued bifunction. The equilibrium problem (EP) is to find an element x∈C such that

Θ(x,y) ≥ 0, ∀y∈C.

The set of solutions of EP is denoted by EP(Θ). It includes several problems, namely, variational inequality problems, optimization problems, saddle point problems, fixed point problems, etc., as special cases. For further details on EP, we refer to [1–6] and the references therein.

Let A:C→H be a nonlinear operator. If Θ(x,y) = ⟨A(x), y−x⟩, then EP reduces to the variational inequality problem of finding x∈C such that

⟨A(x), y−x⟩ ≥ 0, ∀y∈C.

For further details on variational inequalities and their generalizations, we refer to [7–13] and the references therein.

During the last two decades, EP has been extended and generalized in several directions. The generalized mixed equilibrium problem (GMEP), one of the generalizations of EP, is to find x∈C such that

Θ(x,y) + φ(y) − φ(x) + ⟨Ax, y−x⟩ ≥ 0, ∀y∈C,
(1.1)

where φ:C→ℝ is a real-valued function. The set of solutions of GMEP is denoted by GMEP(Θ,φ,A). For different choices of the operators/functions Θ, φ, and A, we get different forms of equilibrium problems. For applications of GMEP, we refer to [14, 15] and the references therein.

Variational inclusion problem

Let B:C→H be a single-valued mapping and R:C→2^H be a set-valued mapping with D(R)=C, where D(R) denotes the domain of R. The variational inclusion problem is to find x∈C such that

0 ∈ Bx + Rx.
(1.2)

We denote by I(B,R) the solution set of the variational inclusion problem (1.2). In particular, if B=R=0, then I(B,R)=C. If B=0, then problem (1.2) becomes the inclusion problem introduced by Rockafellar [16]. It is well known that problem (1.2) provides a convenient framework for the unified study of optimal solutions in many optimization related areas including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, equilibria and game theory, etc. Let a set-valued mapping R:D(R)⊆H→2^H be maximal monotone. We define the resolvent operator J_{R,λ}:H→D(R)¯ associated with R and λ>0 as follows:

J_{R,λ}(x) = (I+λR)⁻¹(x), ∀x∈H.

Huang [17] studied problem (1.2) in the case where R is maximal monotone and B is strongly monotone and Lipschitz continuous with D(R)=C=H. Zeng et al. [18] further studied problem (1.2) in a more general setting than in [17]. They gave the geometric convergence rate estimate for approximate solutions. Various types of iterative algorithms for solving variational inclusions have been further studied and developed in the literature; see, for example, [19–22] and the references therein.
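For intuition (this example is ours, not taken from the references): on ℝ, the resolvent of the maximal monotone operator R = ∂|·| is the classical soft-thresholding map and, like every resolvent, it is firmly nonexpansive. A minimal numerical sketch:

```python
import numpy as np

def resolvent_abs(x, lam):
    # J_{R,lam} = (I + lam*R)^(-1) for R = subdifferential of |.|:
    # this resolvent is the soft-thresholding operator.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
lam = 0.7
for _ in range(100):
    x, y = rng.standard_normal(), rng.standard_normal()
    Jx, Jy = resolvent_abs(x, lam), resolvent_abs(y, lam)
    # firm nonexpansiveness: <Jx - Jy, x - y> >= |Jx - Jy|^2
    assert (Jx - Jy) * (x - y) >= (Jx - Jy) ** 2 - 1e-12
```

The same one-dimensional check can be repeated for any resolvent one can evaluate in closed form.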

Split feasibility problem

Let C and Q be nonempty closed convex subsets of real Hilbert spaces H 1 and H 2 , respectively. The split feasibility problem (SFP) is to find a point x such that

x∈C and Ax∈Q,
(1.3)

where A:H₁→H₂ is a bounded linear operator from H₁ to H₂. We denote by Γ the solution set of the SFP. It is a model of an inverse problem which arises in phase retrieval and in medical image reconstruction. A number of image reconstruction problems can be formulated as an SFP; see, for example, [23] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy (IMRT); see, for example, [24, 25] and the references therein. In the recent past, a wide variety of iterative methods have been proposed to solve the SFP; see, for example, [24–28] and the references therein.
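Before turning to the fixed point formulation, it may help to see the basic gradient-projection (CQ) iteration for the SFP, x_{n+1} = P_C(x_n − λ∇f(x_n)) with ∇f = A∗(I−P_Q)A, on a toy instance; the sets, matrix, and step size below are illustrative choices, not data from the paper:

```python
import numpy as np

# Toy SFP: C = [0,1]^2, Q = [2,3]^2, A a fixed diagonal matrix (illustrative).
A = np.array([[2.0, 0.0], [0.0, 3.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)
P_Q = lambda y: np.clip(y, 2.0, 3.0)

lam = 0.9 * 2 / np.linalg.norm(A, 2) ** 2   # step size in (0, 2/||A||^2)
x = np.zeros(2)
for _ in range(500):
    Ax = A @ x
    # gradient-projection step with grad f(x) = A^T (I - P_Q) A x
    x = P_C(x - lam * A.T @ (Ax - P_Q(Ax)))

# the limit is feasible: x in C and A x in Q (here x = (1, 1), A x = (2, 3))
```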

Fixed point problem

Let C be a nonempty subset of a Hilbert space H and T:C→C be a mapping. The fixed point problem is to find an element x∈C such that T(x)=x.

It is a well-known problem and has tremendous applications in different branches of science, engineering, social sciences, and management.

The following proposition provides some relations among the above mentioned problems.

Proposition 1.1 Given x∗∈H, the following statements are equivalent:

  1. (a)

    x∗ solves the SFP;

  2. (b)

    x∗ solves the fixed point equation

    P_C(I−λ∇f)x∗ = x∗,

    where λ>0, ∇f = A∗(I−P_Q)A, P_Q is the projection operator onto Q and A∗ is the adjoint of A;

  3. (c)

    x∗ solves the variational inequality problem (VIP) of finding x∗∈C such that

    ⟨∇f(x∗), x−x∗⟩ ≥ 0, ∀x∈C.

A variational inequality problem which is defined over the set of fixed points of a mapping is called a hierarchical variational inequality problem; that is, the set C in the variational inequality formulation is equal to the set of fixed points of a mapping. A variational inequality problem which is defined over the set of solutions of a hierarchical variational inequality problem is called a triple hierarchical variational inequality problem. For further details on hierarchical and triple hierarchical variational inequality problems, we refer to [29], a recent survey on these problems.

Very recently, Kong et al. [30] considered the following triple hierarchical variational inequality problem (THVIP).

Problem 1.1 Let C be a nonempty closed convex subset of a real Hilbert space H and F:C→H be a κ-Lipschitzian and η-strongly monotone operator, where κ and η are positive constants. Let A:C→H be a monotone and L-Lipschitzian mapping, V:C→H be a ρ-contraction with coefficient ρ∈[0,1), S:C→C be a nonexpansive mapping, and T:C→C be a ξ-strictly pseudocontractive mapping with Fix(T)∩VI(C,A) ≠ ∅, where Fix(T) denotes the set of all fixed points of T. Let 0<μ<2η/κ² and 0<γ≤τ, where τ = 1−√(1−μ(2η−μκ²)). Then the objective is to find x∗∈Ξ such that

⟨(μF−γV)x∗, x−x∗⟩ ≥ 0, ∀x∈Ξ,
(1.4)

where Ξ denotes the solution set of the hierarchical variational inequality problem (HVIP) of finding z∗∈Fix(T)∩VI(C,A) such that

⟨(μF−γS)z∗, z−z∗⟩ ≥ 0, ∀z∈Fix(T)∩VI(C,A).
(1.5)

Kong et al. [30] presented an algorithm for finding a solution of Problem 1.1. Under some conditions, they proved that the sequence {x_n} generated by the proposed algorithm converges strongly to a point x∗∈Fix(T)∩VI(C,A) which is the unique solution of Problem 1.1, provided that {Sx_n} is bounded and ‖x_{n+1}−x_n‖ + ‖x_n−z_n‖ = o(ϵ_n²). They also showed, under certain conditions, that the sequence {x_n} generated by the proposed algorithm converges strongly to the unique solution x∗ of the following VIP provided that ‖x_{n+1}−x_n‖ + ‖x_n−z_n‖ = o(ϵ_n²) and the sequence {Sx_n} is bounded:

find x∗∈Ξ such that ⟨Fx∗, x−x∗⟩ ≥ 0, ∀x∈Ξ.

In this paper, we consider the following triple hierarchical variational inequality problem (THVIP).

Problem 1.2 Let M, N be two positive integers. Assume that

  1. (i)

    F:H→H is κ-Lipschitzian and η-strongly monotone with positive constants κ,η>0 such that 0<γ≤τ and 0<μ<2η/κ², where τ = 1−√(1−μ(2η−μκ²));

  2. (ii)

    for each k∈{1,2,…,M}, Θ_k:C×C→ℝ satisfies conditions (A1)-(A4) and φ_k:C→ℝ∪{+∞} is a proper lower semicontinuous and convex function with restriction (B1) or (B2) (conditions (A1)-(A4) and (B1)-(B2) are given in the next section);

  3. (iii)

    for each k∈{1,2,…,M} and i∈{1,2,…,N}, R_i:C→2^H is a maximal monotone mapping, and A_k:H→H and B_i:C→H are μ_k-inverse strongly monotone and η_i-inverse strongly monotone, respectively;

  4. (iv)

    T:H→H is a ξ-strict pseudocontraction, S:H→H is a nonexpansive mapping and V:H→H is a ρ-contraction with coefficient ρ∈[0,1);

  5. (v)

    Ω := (⋂_{k=1}^M GMEP(Θ_k,φ_k,A_k)) ∩ (⋂_{i=1}^N I(B_i,R_i)) ∩ Fix(T) ∩ Γ ≠ ∅.

Then the objective is to find x∗∈Ξ such that

⟨(μF−γV)x∗, x−x∗⟩ ≥ 0, ∀x∈Ξ,
(1.6)

where Ξ denotes the solution set of the hierarchical variational inequality problem (HVIP) of finding z∗∈Ω such that

⟨(μF−γS)z∗, z−z∗⟩ ≥ 0, ∀z∈Ω.
(1.7)

By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann’s iteration method, and the gradient-projection method (GPM) with regularization, we introduce and analyze a hybrid multi-step extragradient algorithm with regularization in the setting of Hilbert spaces. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a unique solution of THVIP (1.6). The algorithm and convergence result of this paper extend and generalize several existing algorithms and results, respectively, in the literature.

2 Preliminaries

Throughout this paper, unless otherwise specified, we assume that H is a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. We write x_n → x (respectively, x_n ⇀ x) to indicate that the sequence {x_n} converges strongly (respectively, weakly) to x. Moreover, we use ω_w(x_n) to denote the weak ω-limit set of the sequence {x_n}, that is,

ω_w(x_n) := {x∈H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n}}.

Definition 2.1 A mapping T:H→H is said to be

  1. (a)

    nonexpansive if

    ‖Tx−Ty‖ ≤ ‖x−y‖, ∀x,y∈H;
  2. (b)

    firmly nonexpansive if 2T−I is nonexpansive, or equivalently, if T is 1-inverse strongly monotone (1-ism),

    ⟨x−y, Tx−Ty⟩ ≥ ‖Tx−Ty‖², ∀x,y∈H;

alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = (1/2)(I+S),

where S:H→H is nonexpansive; projections are firmly nonexpansive.

It can easily be seen that if T is nonexpansive, then I−T is monotone.

Definition 2.2 A mapping T:H→H is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,

T = (1−α)I + αS,

where α∈(0,1) and S:H→H is nonexpansive. More precisely, when the last equality holds, we say that T is α-averaged. Thus firmly nonexpansive mappings (in particular, projections) are (1/2)-averaged mappings.

Proposition 2.1 [31]

Let T:H→H be a given mapping.

  1. (a)

    T is nonexpansive if and only if the complement I−T is (1/2)-ism.

  2. (b)

    If T is ν-ism, then for γ>0, γT is (ν/γ)-ism.

  3. (c)

    T is averaged if and only if the complement I−T is ν-ism for some ν>1/2. Indeed, for α∈(0,1), T is α-averaged if and only if I−T is (1/(2α))-ism.

Proposition 2.2 [31, 32]

Let S,T,V:H→H be given operators.

  1. (a)

    If T = (1−α)S + αV for some α∈(0,1) and if S is averaged and V is nonexpansive, then T is averaged.

  2. (b)

    T is firmly nonexpansive if and only if the complement I−T is firmly nonexpansive.

  3. (c)

    If T = (1−α)S + αV for some α∈(0,1) and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  4. (d)

    The composite of finitely many averaged mappings is averaged, that is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1∘⋯∘T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1,α_2∈(0,1), then the composite T_1∘T_2 is α-averaged, where α = α_1+α_2−α_1α_2.

  5. (e)

    If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then

    ⋂_{i=1}^N Fix(T_i) = Fix(T_1∘⋯∘T_N).

The notation Fix(T) denotes the set of all fixed points of the mapping T, that is, Fix(T)={xH:Tx=x}.

A mapping T:C→C is said to be ξ-strictly pseudocontractive if there exists ξ∈[0,1) such that

‖Tx−Ty‖² ≤ ‖x−y‖² + ξ‖(I−T)x−(I−T)y‖², ∀x,y∈C.

In this case, we also say that T is a ξ-strict pseudocontraction. In particular, if ξ=0, then T is a nonexpansive mapping.

It is clear that, in a real Hilbert space H, T:C→C is ξ-strictly pseudocontractive if and only if the following inequality holds:

⟨Tx−Ty, x−y⟩ ≤ ‖x−y‖² − ((1−ξ)/2)‖(I−T)x−(I−T)y‖², ∀x,y∈C.

This immediately implies that if T is a ξ-strictly pseudocontractive mapping, then I−T is ((1−ξ)/2)-inverse strongly monotone; for further details, we refer to [33] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings and that the class of pseudocontractions strictly includes the class of strict pseudocontractions.

Lemma 2.1 [[33], Proposition 2.1]

Let C be a nonempty closed convex subset of a real Hilbert space H and T:C→C be a mapping.

  1. (a)

    If T is a ξ-strictly pseudocontractive mapping, then T satisfies the Lipschitzian condition

    ‖Tx−Ty‖ ≤ ((1+ξ)/(1−ξ))‖x−y‖, ∀x,y∈C.
  2. (b)

    If T is a ξ-strictly pseudocontractive mapping, then the mapping I−T is semiclosed at 0, that is, if {x_n} is a sequence in C such that x_n ⇀ x̃ and (I−T)x_n → 0, then (I−T)x̃ = 0.

  3. (c)

    If T is a ξ-(quasi-)strict pseudocontraction, then the fixed point set Fix(T) of T is closed and convex, so that the projection P_{Fix(T)} is well defined.

Lemma 2.2 [34]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T:C→C be a ξ-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers such that (γ+δ)ξ ≤ γ. Then

‖γ(x−y) + δ(Tx−Ty)‖ ≤ (γ+δ)‖x−y‖, ∀x,y∈C.

Lemma 2.3 (Demiclosedness principle)

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C with Fix(S)≠∅. Then I−S is demiclosed. That is, whenever {x_n} is a sequence in C weakly converging to some x∈C and the sequence {(I−S)x_n} strongly converges to some y, it follows that (I−S)x = y, where I is the identity operator of H.

Definition 2.3 A nonlinear operator T with domain D(T)⊆H and range R(T)⊆H is said to be

  1. (a)

    monotone if

    ⟨Tx−Ty, x−y⟩ ≥ 0, ∀x,y∈D(T);
  2. (b)

    β-strongly monotone if there exists a constant β>0 such that

    ⟨Tx−Ty, x−y⟩ ≥ β‖x−y‖², ∀x,y∈D(T);
  3. (c)

    ν-inverse strongly monotone if there exists a constant ν>0 such that

    ⟨Tx−Ty, x−y⟩ ≥ ν‖Tx−Ty‖², ∀x,y∈D(T).

It is easy to see that the projection P_C is 1-inverse strongly monotone. Inverse strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see, for example, [35]. It is obvious that if T is ν-inverse strongly monotone, then T is monotone and (1/ν)-Lipschitz continuous. Moreover, we also have, for all u,v∈D(T) and λ>0,

‖(I−λT)u − (I−λT)v‖² = ‖(u−v) − λ(Tu−Tv)‖² = ‖u−v‖² − 2λ⟨Tu−Tv, u−v⟩ + λ²‖Tu−Tv‖² ≤ ‖u−v‖² + λ(λ−2ν)‖Tu−Tv‖².
(2.1)

So, if λ ≤ 2ν, then I−λT is a nonexpansive mapping.
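As a quick numerical sanity check of (2.1) and the remark above, take the illustrative operator Tx = Mx with M symmetric positive definite, which is ν-inverse strongly monotone with ν = 1/λ_max(M):

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
nu = 1.0 / np.linalg.eigvalsh(M).max()   # Tx = Mx is nu-inverse strongly monotone
lam = 2 * nu                              # boundary case lam = 2*nu

for _ in range(200):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.linalg.norm((u - lam * M @ u) - (v - lam * M @ v))
    # I - lam*T is nonexpansive when lam <= 2*nu
    assert lhs <= np.linalg.norm(u - v) + 1e-12
```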

The metric (or nearest point) projection from H onto C is the mapping P_C:H→C which assigns to each point x∈H the unique point P_C x∈C satisfying the property

‖x − P_C x‖ = inf_{y∈C} ‖x−y‖ =: d(x,C).

Some important properties of projections are gathered in the following proposition.

Proposition 2.3 For given x∈H and z∈C:

  1. (a)

    z = P_C x ⇔ ⟨x−z, y−z⟩ ≤ 0, ∀y∈C;

  2. (b)

    z = P_C x ⇔ ‖x−z‖² ≤ ‖x−y‖² − ‖y−z‖², ∀y∈C;

  3. (c)

    ⟨P_C x − P_C y, x−y⟩ ≥ ‖P_C x − P_C y‖², ∀y∈H.

Consequently, P_C is nonexpansive and monotone.
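Proposition 2.3(c) and the resulting nonexpansiveness are easy to verify numerically for a box, whose projection is coordinatewise clipping (an illustrative choice of C):

```python
import numpy as np

# Projection onto the box C = [0,1]^3; np.clip implements P_C exactly.
P_C = lambda x: np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(1)
for _ in range(200):
    x, y = 3 * rng.standard_normal(3), 3 * rng.standard_normal(3)
    Px, Py = P_C(x), P_C(y)
    # Proposition 2.3(c): <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2
    assert np.dot(Px - Py, x - y) >= np.dot(Px - Py, Px - Py) - 1e-12
    # consequently P_C is nonexpansive
    assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y) + 1e-12
```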

Let λ be a number in (0,1] and let μ>0. Associated with a nonexpansive mapping T:C→H, we define the mapping T^λ:C→H by

T^λ x := Tx − λμF(Tx), ∀x∈C,

where F:H→H is an operator such that, for some positive constants κ,η>0, F is κ-Lipschitzian and η-strongly monotone on H, that is, F satisfies the conditions

‖Fx−Fy‖ ≤ κ‖x−y‖ and ⟨Fx−Fy, x−y⟩ ≥ η‖x−y‖²

for all x,y∈H.

Lemma 2.4 [[36], Lemma 3.1]

T^λ is a contraction provided 0<μ<2η/κ², that is,

‖T^λ x − T^λ y‖ ≤ (1−λτ)‖x−y‖, ∀x,y∈C,

where τ = 1−√(1−μ(2η−μκ²)) ∈ (0,1].
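Lemma 2.4 can be checked numerically in the simplest case T = I with the linear illustrative choice Fx = Mx, for which η = λ_min(M) and κ = λ_max(M):

```python
import numpy as np

M = np.array([[2.0, 0.0], [0.0, 1.0]])  # F x = M x: eta = 1, kappa = 2
eta, kappa = 1.0, 2.0
mu = 0.4                                 # 0 < mu < 2*eta/kappa**2 = 0.5
tau = 1 - np.sqrt(1 - mu * (2 * eta - mu * kappa ** 2))
lam = 0.5                                # lam in (0, 1]

rng = np.random.default_rng(2)
for _ in range(200):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    # with T = I, T^lam x = x - lam*mu*F(x)
    d = np.linalg.norm((x - lam * mu * M @ x) - (y - lam * mu * M @ y))
    assert d <= (1 - lam * tau) * np.linalg.norm(x - y) + 1e-12
```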

Lemma 2.5 Let A:C→H be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 2.3(a)) implies

u ∈ VI(C,A) ⇔ u = P_C(u−λAu), ∀λ>0.

Let C be a nonempty closed convex subset of H and Θ:C×C→ℝ satisfy the following conditions.

  • (A1) Θ(x,x)=0, ∀x∈C;

  • (A2) Θ is monotone, that is, Θ(x,y)+Θ(y,x) ≤ 0, ∀x,y∈C;

  • (A3) Θ is upper-hemicontinuous, that is, ∀x,y,z∈C,

    lim sup_{t→0⁺} Θ(tz+(1−t)x, y) ≤ Θ(x,y);
  • (A4) Θ(x,·) is convex and lower semicontinuous, for each x∈C.

Let φ:C→ℝ be a lower semicontinuous and convex function satisfying either (B1) or (B2), where

  • (B1) for each x∈H and r>0, there exist a bounded subset D_x⊆C and y_x∈C such that, for any z∈C∖D_x,

    Θ(z, y_x) + φ(y_x) − φ(z) + (1/r)⟨y_x−z, z−x⟩ < 0;
  • (B2) C is a bounded set.

Given a positive number r>0, let T_r^{(Θ,φ)}:H→C be the solution set of the auxiliary mixed equilibrium problem, that is, for each x∈H,

T_r^{(Θ,φ)}(x) := {y∈C : Θ(y,z) + φ(z) − φ(y) + (1/r)⟨y−x, z−y⟩ ≥ 0, ∀z∈C}.

Next we list some elementary conclusions for the MEP.

Proposition 2.4 [37]

Assume that Θ:C×C→ℝ satisfies (A1)-(A4) and let φ:C→ℝ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For r>0 and x∈H, define a mapping T_r^{(Θ,φ)}:H→C as follows:

T_r^{(Θ,φ)}(x) := {z∈C : Θ(z,y) + φ(y) − φ(z) + (1/r)⟨y−z, z−x⟩ ≥ 0, ∀y∈C}

for all x∈H. Then

  1. (i)

    for each x∈H, T_r^{(Θ,φ)}(x) is nonempty and single-valued;

  2. (ii)

    T_r^{(Θ,φ)} is firmly nonexpansive, that is, for any x,y∈H,

    ‖T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y‖² ≤ ⟨T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y, x−y⟩;
  3. (iii)

    Fix(T_r^{(Θ,φ)}) = MEP(Θ,φ);

  4. (iv)

    MEP(Θ,φ) is closed and convex;

  5. (v)

    ‖T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x‖² ≤ ((s−t)/s)⟨T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x, T_s^{(Θ,φ)}x − x⟩, for all s,t>0 and x∈H.

We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Lemma 2.6 Let X be a real inner product space. Then we have the following inequality:

‖x+y‖² ≤ ‖x‖² + 2⟨y, x+y⟩, ∀x,y∈X.

Lemma 2.7 Let H be a real Hilbert space. Then the following hold:

  1. (a)

    ‖x−y‖² = ‖x‖² − ‖y‖² − 2⟨x−y, y⟩, for all x,y∈H;

  2. (b)

    ‖λx+μy‖² = λ‖x‖² + μ‖y‖² − λμ‖x−y‖², for all x,y∈H and λ,μ∈[0,1] with λ+μ=1;

  3. (c)

    if {x_n} is a sequence in H such that x_n ⇀ x, it follows that

    lim sup_{n→∞} ‖x_n−y‖² = lim sup_{n→∞} ‖x_n−x‖² + ‖x−y‖², ∀y∈H.

Lemma 2.8 [38]

Let {a_n} be a sequence of nonnegative real numbers satisfying the property

a_{n+1} ≤ (1−s_n)a_n + s_n b_n + t_n, ∀n≥1,

where {s_n} ⊂ (0,1] and {b_n} are such that:

  1. (i)

    ∑_{n=1}^∞ s_n = ∞;

  2. (ii)

    either lim sup_{n→∞} b_n ≤ 0 or ∑_{n=0}^∞ |s_n b_n| < ∞;

  3. (iii)

    ∑_{n=1}^∞ t_n < ∞, where t_n ≥ 0, for all n≥1.

Then lim_{n→∞} a_n = 0.

Recall that a set-valued mapping T:D(T)⊆H→2^H is called monotone if, for all x,y∈D(T), f∈Tx and g∈Ty imply ⟨f−g, x−y⟩ ≥ 0. A set-valued mapping T is called maximal monotone if T is monotone and (I+λT)(D(T)) = H for each λ>0, where I is the identity mapping of H. We denote by G(T) the graph of T. It is well known that a monotone mapping T is maximal if and only if, for (x,f)∈H×H, ⟨f−g, x−y⟩ ≥ 0 for every (y,g)∈G(T) implies f∈Tx.

Next we provide an example to illustrate the concept of maximal monotone mapping.

Let A:C→H be a monotone, k-Lipschitz continuous mapping and let N_C v be the normal cone to C at v∈C, that is,

N_C v = {u∈H : ⟨v−p, u⟩ ≥ 0, ∀p∈C}.

Define

T̃v = Av + N_C v if v∈C, and T̃v = ∅ if v∉C.

Then T̃ is maximal monotone (see [16]) such that

0 ∈ T̃v ⇔ v ∈ VI(C,A).
(2.2)

Let R:D(R)⊆H→2^H be a maximal monotone mapping. Let λ,μ>0 be two positive numbers.

Lemma 2.9 [39]

We have the resolvent identity

J_{R,λ}x = J_{R,μ}((μ/λ)x + (1−(μ/λ))J_{R,λ}x), ∀x∈H.

Remark 2.1 For λ,μ>0, we have the following relation:

‖J_{R,λ}x − J_{R,μ}y‖ ≤ ‖x−y‖ + |λ−μ|((1/λ)‖J_{R,λ}x − y‖ + (1/μ)‖x − J_{R,μ}y‖), ∀x,y∈H.
(2.3)

The following property of the resolvent operator J_{R,λ}:H→D(R)¯ was considered in [17, 18].

Lemma 2.10 J_{R,λ} is single-valued and firmly nonexpansive, that is,

⟨J_{R,λ}x − J_{R,λ}y, x−y⟩ ≥ ‖J_{R,λ}x − J_{R,λ}y‖², ∀x,y∈H.

Consequently, J_{R,λ} is nonexpansive and monotone.

Lemma 2.11 [20]

Let R be a maximal monotone mapping with D(R)=C. Then, for any given λ>0, u∈C is a solution of the variational inclusion problem (1.2) if and only if u∈C satisfies

u = J_{R,λ}(u − λBu).

Lemma 2.12 [18]

Let R be a maximal monotone mapping with D(R)=C and let B:C→H be a strongly monotone, continuous, and single-valued mapping. Then, for each z∈H, the inclusion z ∈ (B+λR)x has a unique solution x_λ for λ>0.

Lemma 2.13 [20]

Let R be a maximal monotone mapping with D(R)=C and B:C→H be a monotone, continuous, and single-valued mapping. Then (I+λ(B+R))(C) = H for each λ>0. In this case, B+R is maximal monotone.

3 Algorithms and convergence results

Let H be a real Hilbert space and f:H→ℝ be a function. Then the minimization problem

min_{x∈C} f(x) := (1/2)‖Ax − P_Q Ax‖²

is ill-posed. Xu [40] considered the following Tikhonov regularization problem:

min_{x∈C} f_α(x) := (1/2)‖Ax − P_Q Ax‖² + (1/2)α‖x‖²,

where α>0 is the regularization parameter. It is clear that the gradient

∇f_α = ∇f + αI = A∗(I−P_Q)A + αI

is (α+‖A‖²)-Lipschitz continuous.
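The claimed Lipschitz constant α+‖A‖² is easy to test numerically; the matrix A, the box Q, and α below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 2))
P_Q = lambda y: np.clip(y, 0.0, 1.0)       # toy box Q in the image space
alpha = 0.1
L = alpha + np.linalg.norm(A, 2) ** 2      # claimed Lipschitz constant

# grad f_alpha = A^T (I - P_Q) A + alpha*I
grad_f_alpha = lambda x: A.T @ (A @ x - P_Q(A @ x)) + alpha * x

for _ in range(300):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert (np.linalg.norm(grad_f_alpha(x) - grad_f_alpha(y))
            <= L * np.linalg.norm(x - y) + 1e-9)
```

The bound holds because I−P_Q is nonexpansive, so A∗(I−P_Q)A is ‖A‖²-Lipschitz, and the αI term adds α.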

Throughout the paper, unless otherwise specified, M, N are positive integers and C is a nonempty closed convex subset of a real Hilbert space H.

Algorithm 3.1 The notation and symbols are the same as in Problem 1.2. Start with a given arbitrary x_0∈H, and compute a sequence {x_n} by

u_n = T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M) T_{r_{M−1,n}}^{(Θ_{M−1},φ_{M−1})}(I − r_{M−1,n}A_{M−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}A_1)x_n,
v_n = J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N) J_{R_{N−1},λ_{N−1,n}}(I − λ_{N−1,n}B_{N−1}) ⋯ J_{R_1,λ_{1,n}}(I − λ_{1,n}B_1)u_n,
y_n = β_n x_n + γ_n P_C(I − λ_n∇f_{α_n})v_n + σ_n T P_C(I − λ_n∇f_{α_n})v_n,
x_{n+1} = ϵ_n γ(δ_n V x_n + (1−δ_n)S x_n) + (I − ϵ_n μF)y_n, ∀n≥0,
(3.1)

where ∇f_{α_n} = α_n I + ∇f.
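To make the structure of (3.1) concrete, the following is a deliberately degenerate numerical instance (ours, not the general method): M = N = 1 with Θ_1 = 0, φ_1 = 0, A_1 = 0, B_1 = 0 and R_1 = N_C, so the GMEP and inclusion steps both collapse to P_C; T = S = I (so ξ = 0), V ≡ 0, F = I with μ = γ = 1; all sets and parameter sequences are illustrative. The scheme then reduces to a regularized gradient-projection iteration for the SFP:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)            # C = [0,1]^2
P_Q = lambda y: np.clip(y, 2.0, 3.0)            # Q = [2,3]^2
grad_f_alpha = lambda x, a: A.T @ (A @ x - P_Q(A @ x)) + a * x

x = np.zeros(2)
lam = 1.0 / np.linalg.norm(A, 2) ** 2           # lam_n in (0, 2/||A||^2)
for n in range(1, 3001):
    eps, delta, alpha = 1.0 / n, 1.0 / n, 1.0 / n ** 2
    beta, gamma_n, sigma = 0.4, 0.3, 0.3        # beta_n + gamma_n + sigma_n = 1
    u = P_C(x)                                  # GMEP step collapses to P_C
    v = P_C(u)                                  # inclusion step collapses to P_C
    t = P_C(v - lam * grad_f_alpha(v, alpha))   # regularized gradient projection
    y = beta * x + gamma_n * t + sigma * t      # T = I, so T t = t
    # x_{n+1} = eps*(delta*V x + (1-delta)*S x) + (1-eps)*y with V = 0, S = I
    x = eps * (1 - delta) * x + (1 - eps) * y

# x approximately solves the SFP: x in C and A x in Q
```

The iterate settles near a feasible point of the toy SFP; the viscosity term (here toward 0) vanishes as ϵ_n → 0.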

The following result provides the strong convergence of the sequence generated by Algorithm 3.1.

Theorem 3.1 For each k∈{1,2,…,M}, let Θ_k:C×C→ℝ be a bifunction satisfying conditions (A1)-(A4) and φ_k:C→ℝ∪{+∞} be a proper lower semicontinuous and convex function with restriction (B1) or (B2). For each k∈{1,2,…,M} and i∈{1,2,…,N}, let R_i:C→2^H be a maximal monotone mapping and let A_k:H→H and B_i:C→H be μ_k-inverse strongly monotone and η_i-inverse strongly monotone, respectively. Let T:H→H be a ξ-strictly pseudocontractive mapping, S:H→H be a nonexpansive mapping and V:H→H be a ρ-contraction with coefficient ρ∈[0,1). Let F:H→H be κ-Lipschitzian and η-strongly monotone with positive constants κ,η>0 such that 0<γ≤τ and 0<μ<2η/κ², where τ = 1−√(1−μ(2η−μκ²)). Assume that the solution set Ξ of HVIP (1.7) is nonempty, where Ω := (⋂_{k=1}^M GMEP(Θ_k,φ_k,A_k)) ∩ (⋂_{i=1}^N I(B_i,R_i)) ∩ Fix(T) ∩ Γ. Let {λ_n} ⊂ [a,b] ⊂ (0, 2/‖A‖²), {α_n} ⊂ (0,∞) with ∑_{n=0}^∞ α_n < ∞, {ϵ_n},{δ_n},{β_n},{γ_n},{σ_n} ⊂ (0,1) with β_n+γ_n+σ_n = 1, and {λ_{i,n}} ⊂ [a_i,b_i] ⊂ (0,2η_i), {r_{k,n}} ⊂ [c_k,d_k] ⊂ (0,2μ_k), where i∈{1,2,…,N} and k∈{1,2,…,M}. Suppose that

  • (C1) lim_{n→∞} δ_n = 0, lim_{n→∞} ϵ_n = 0, lim_{n→∞} |ϵ_n−ϵ_{n−1}|/(δ_n ϵ_n² ϵ_{n−1}) = 0 and ∑_{n=0}^∞ ϵ_n δ_n = ∞;

  • (C2) ∑_{n=1}^∞ |δ_n−δ_{n−1}| < ∞ or lim_{n→∞} |δ_n−δ_{n−1}|/(δ_n ϵ_n) = 0;

  • (C3) ∑_{n=1}^∞ |β_n−β_{n−1}|/ϵ_n < ∞ or lim_{n→∞} |β_n−β_{n−1}|/(δ_n ϵ_n²) = 0;

  • (C4) ∑_{n=1}^∞ |γ_n−γ_{n−1}|/ϵ_n < ∞ or lim_{n→∞} |γ_n−γ_{n−1}|/(δ_n ϵ_n²) = 0;

  • (C5) ∑_{n=1}^∞ |σ_n−σ_{n−1}|/ϵ_n < ∞ or lim_{n→∞} |σ_n−σ_{n−1}|/(δ_n ϵ_n²) = 0;

  • (C6) ∑_{n=1}^∞ |λ_nα_n−λ_{n−1}α_{n−1}|/ϵ_n < ∞ or lim_{n→∞} |λ_nα_n−λ_{n−1}α_{n−1}|/(δ_n ϵ_n²) = 0;

  • (C7) {β_n} ⊂ [c,d] ⊂ (0,1), (γ_n+σ_n)ξ ≤ γ_n and lim inf_{n→∞} σ_n > 0;

  • (C8) for each i = 1,2,…,N, ∑_{n=1}^∞ |λ_{i,n}−λ_{i,n−1}|/ϵ_n < ∞ or lim_{n→∞} |λ_{i,n}−λ_{i,n−1}|/(δ_n ϵ_n²) = 0;

  • (C9) for each k = 1,2,…,M, ∑_{n=1}^∞ |r_{k,n}−r_{k,n−1}|/ϵ_n < ∞ or lim_{n→∞} |r_{k,n}−r_{k,n−1}|/(δ_n ϵ_n²) = 0;

  • (C10) there exist positive constants θ, k̄ > 0 such that lim_{n→∞} ϵ_n^{1/θ}/δ_n = 0 and ‖x_n − T x_n‖ ≥ k̄[d(x_n, Ω)]^θ for all sufficiently large n ≥ 0.

If { x n } is a sequence generated by Algorithm 3.1 and {S x n } is bounded, then

  1. (a)

    ‖x_{n+1}−x_n‖ = o(ϵ_n);

  2. (b)

    ω_w(x_n) ⊆ Ω;

  3. (c)

    {x_n} converges strongly to a point x∗∈Ω provided ‖x_n−y_n‖ + α_n = o(ϵ_n²), which is the unique solution of Problem 1.2.

Proof First of all, taking into account Ξ≠∅, we know that Ω≠∅. Observe that

μη ≥ τ ⇔ μη ≥ 1 − √(1−μ(2η−μκ²)) ⇔ √(1−μ(2η−μκ²)) ≥ 1−μη ⇔ 1−2μη+μ²κ² ≥ 1−2μη+μ²η² ⇔ κ² ≥ η² ⇔ κ ≥ η

and

⟨(μF−γV)x − (μF−γV)y, x−y⟩ = μ⟨Fx−Fy, x−y⟩ − γ⟨Vx−Vy, x−y⟩ ≥ μη‖x−y‖² − γρ‖x−y‖² = (μη−γρ)‖x−y‖², ∀x,y∈H.

Since τ ≥ γ > 0 and κ ≥ η, we deduce that μη ≥ τ ≥ γ > γρ, and hence the mapping μF−γV is (μη−γρ)-strongly monotone. Moreover, it is clear that the mapping μF−γV is (μκ+γρ)-Lipschitzian. Thus, there exists a unique solution x∗ in Ξ to the VIP

⟨(μF−γV)x∗, p−x∗⟩ ≥ 0, ∀p∈Ξ,

that is, {x∗} = VI(Ξ, μF−γV). Now, we put

Δ_n^k = T_{r_{k,n}}^{(Θ_k,φ_k)}(I−r_{k,n}A_k) T_{r_{k−1,n}}^{(Θ_{k−1},φ_{k−1})}(I−r_{k−1,n}A_{k−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I−r_{1,n}A_1)x_n

for all k∈{1,2,…,M} and n≥0,

Λ_n^i = J_{R_i,λ_{i,n}}(I−λ_{i,n}B_i) J_{R_{i−1},λ_{i−1,n}}(I−λ_{i−1,n}B_{i−1}) ⋯ J_{R_1,λ_{1,n}}(I−λ_{1,n}B_1)

for all i∈{1,2,…,N}, Δ_n^0 = I, and Λ_n^0 = I, where I is the identity mapping on H. Then we have u_n = Δ_n^M x_n and v_n = Λ_n^N u_n.

Now, we show that P_C(I−λ∇f_α) is ζ-averaged, for each λ∈(0, 2/(α+‖A‖²)), where

ζ = (2+λ(α+‖A‖²))/4 ∈ (0,1).

Indeed, it is easy to see that ∇f = A∗(I−P_Q)A is (1/‖A‖²)-ism, that is,

⟨∇f(x)−∇f(y), x−y⟩ ≥ (1/‖A‖²)‖∇f(x)−∇f(y)‖².

Observe that

(α+‖A‖²)⟨∇f_α(x)−∇f_α(y), x−y⟩
= (α+‖A‖²)[α‖x−y‖² + ⟨∇f(x)−∇f(y), x−y⟩]
= α²‖x−y‖² + α⟨∇f(x)−∇f(y), x−y⟩ + α‖A‖²‖x−y‖² + ‖A‖²⟨∇f(x)−∇f(y), x−y⟩
≥ α²‖x−y‖² + 2α⟨∇f(x)−∇f(y), x−y⟩ + ‖∇f(x)−∇f(y)‖²
= ‖α(x−y) + ∇f(x)−∇f(y)‖²
= ‖∇f_α(x)−∇f_α(y)‖².

Hence, it follows that ∇f_α = αI + A∗(I−P_Q)A is (1/(α+‖A‖²))-ism. Thus, by Proposition 2.1(b), λ∇f_α is (1/(λ(α+‖A‖²)))-ism. From Proposition 2.1(c), the complement I−λ∇f_α is (λ(α+‖A‖²)/2)-averaged. Therefore, noting that P_C is (1/2)-averaged and utilizing Proposition 2.2(d), we see that, for each λ∈(0, 2/(α+‖A‖²)), P_C(I−λ∇f_α) is ζ-averaged with

ζ = 1/2 + λ(α+‖A‖²)/2 − (1/2)·(λ(α+‖A‖²)/2) = (2+λ(α+‖A‖²))/4 ∈ (0,1).

This shows that P_C(I−λ∇f_α) is nonexpansive. Taking into account that {λ_n} ⊂ [a,b] ⊂ (0, 2/‖A‖²) and α_n → 0, we get

lim sup_{n→∞} (2+λ_n(α_n+‖A‖²))/4 ≤ (2+b‖A‖²)/4 < 1.

Without loss of generality, we may assume that ζ_n := (2+λ_n(α_n+‖A‖²))/4 < 1, for each n≥0. So, P_C(I−λ_n∇f_{α_n}) is nonexpansive, for each n≥0. Similarly, since

lim sup_{n→∞} λ_n(α_n+‖A‖²)/2 ≤ b‖A‖²/2 < 1,

it may be confirmed that I−λ_n∇f_{α_n} is nonexpansive, for each n≥0.

We divide the rest of the proof into several steps.

Step 1. We prove that { x n } is bounded.

Indeed, take a fixed p∈Ω arbitrarily. Utilizing (2.1) and Proposition 2.4(ii), we have

‖u_n−p‖ = ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I−r_{M,n}A_M)Δ_n^{M−1}x_n − T_{r_{M,n}}^{(Θ_M,φ_M)}(I−r_{M,n}A_M)Δ_n^{M−1}p‖
≤ ‖(I−r_{M,n}A_M)Δ_n^{M−1}x_n − (I−r_{M,n}A_M)Δ_n^{M−1}p‖
≤ ‖Δ_n^{M−1}x_n − Δ_n^{M−1}p‖
≤ ⋯ ≤ ‖Δ_n^0 x_n − Δ_n^0 p‖ = ‖x_n−p‖.
(3.2)

Utilizing (2.1) and Lemma 2.10, we have

‖v_n−p‖ = ‖J_{R_N,λ_{N,n}}(I−λ_{N,n}B_N)Λ_n^{N−1}u_n − J_{R_N,λ_{N,n}}(I−λ_{N,n}B_N)Λ_n^{N−1}p‖
≤ ‖(I−λ_{N,n}B_N)Λ_n^{N−1}u_n − (I−λ_{N,n}B_N)Λ_n^{N−1}p‖
≤ ‖Λ_n^{N−1}u_n − Λ_n^{N−1}p‖
≤ ⋯ ≤ ‖Λ_n^0 u_n − Λ_n^0 p‖ = ‖u_n−p‖.
(3.3)

Combining (3.2) and (3.3), we have

‖v_n−p‖ ≤ ‖x_n−p‖.
(3.4)

For simplicity, put t_n = P_C(I−λ_n∇f_{α_n})v_n, for each n≥0. Note that P_C(I−λ∇f)p = p for λ∈(0, 2/‖A‖²). Hence, from (3.4), it follows that

‖t_n−p‖ = ‖P_C(I−λ_n∇f_{α_n})v_n − P_C(I−λ_n∇f)p‖
≤ ‖P_C(I−λ_n∇f_{α_n})v_n − P_C(I−λ_n∇f_{α_n})p‖ + ‖P_C(I−λ_n∇f_{α_n})p − P_C(I−λ_n∇f)p‖
≤ ‖v_n−p‖ + ‖(I−λ_n∇f_{α_n})p − (I−λ_n∇f)p‖
= ‖v_n−p‖ + λ_nα_n‖p‖ ≤ ‖x_n−p‖ + λ_nα_n‖p‖.
(3.5)

Since T is a ξ-strictly pseudocontractive mapping and (γ_n+σ_n)ξ ≤ γ_n, for all n≥0, by Lemma 2.2, we obtain from (3.1) and (3.5) that

‖y_n−p‖ = ‖β_n x_n + γ_n t_n + σ_n T t_n − p‖
= ‖β_n(x_n−p) + γ_n(t_n−p) + σ_n(T t_n−p)‖
≤ β_n‖x_n−p‖ + ‖γ_n(t_n−p) + σ_n(T t_n−p)‖
≤ β_n‖x_n−p‖ + (γ_n+σ_n)‖t_n−p‖
≤ β_n‖x_n−p‖ + (γ_n+σ_n)[‖x_n−p‖ + λ_nα_n‖p‖]
≤ ‖x_n−p‖ + λ_nα_n‖p‖.
(3.6)

Noticing the boundedness of {Sx_n}, we get sup_{n≥0} ‖γSx_n − μFp‖ ≤ M̂ for some M̂>0. Moreover, utilizing Lemma 2.4 and (3.1), (3.6), we deduce from {λ_n} ⊂ [a,b] ⊂ (0, 2/‖A‖²) and 0<γ≤τ that, for all n≥0,

‖x_{n+1}−p‖ = ‖ϵ_n γ(δ_n V x_n + (1−δ_n)S x_n) + (I−ϵ_n μF)y_n − p‖
= ‖ϵ_n γ(δ_n V x_n + (1−δ_n)S x_n) − ϵ_n μFp + (I−ϵ_n μF)y_n − (I−ϵ_n μF)p‖
≤ ‖ϵ_n γ(δ_n V x_n + (1−δ_n)S x_n) − ϵ_n μFp‖ + ‖(I−ϵ_n μF)y_n − (I−ϵ_n μF)p‖
= ϵ_n‖δ_n(γV x_n − μFp) + (1−δ_n)(γS x_n − μFp)‖ + ‖(I−ϵ_n μF)y_n − (I−ϵ_n μF)p‖
≤ ϵ_n[δ_n‖γV x_n − μFp‖ + (1−δ_n)‖γS x_n − μFp‖] + (1−ϵ_nτ)‖y_n−p‖
≤ ϵ_n[δ_n(γ‖V x_n − Vp‖ + ‖γVp − μFp‖) + (1−δ_n)M̂] + (1−ϵ_nτ)‖y_n−p‖
≤ ϵ_n[δ_nγρ‖x_n−p‖ + δ_n‖γVp−μFp‖ + (1−δ_n)M̂] + (1−ϵ_nτ)[‖x_n−p‖ + λ_nα_n‖p‖]
≤ ϵ_n[δ_nγρ‖x_n−p‖ + max{M̂, ‖γVp−μFp‖}] + (1−ϵ_nτ)[‖x_n−p‖ + λ_nα_n‖p‖]
≤ ϵ_nγρ‖x_n−p‖ + ϵ_n max{M̂, ‖γVp−μFp‖} + (1−ϵ_nτ)‖x_n−p‖ + λ_nα_n‖p‖
= [1−(τ−γρ)ϵ_n]‖x_n−p‖ + ϵ_n max{M̂, ‖γVp−μFp‖} + λ_nα_n‖p‖
= [1−(τ−γρ)ϵ_n]‖x_n−p‖ + (τ−γρ)ϵ_n max{M̂/(τ−γρ), ‖γVp−μFp‖/(τ−γρ)} + λ_nα_n‖p‖
≤ max{‖x_n−p‖, M̂/(τ−γρ), ‖γVp−μFp‖/(τ−γρ)} + α_n b‖p‖.

By induction, we get

‖x_{n+1}−p‖ ≤ max{‖x_0−p‖, M̂/(τ−γρ), ‖γVp−μFp‖/(τ−γρ)} + ∑_{j=0}^n α_j b‖p‖, ∀n≥0.

Thus, {x_n} is bounded since ∑_{n=0}^∞ α_n < ∞, and so are the sequences {t_n}, {u_n}, {v_n}, and {y_n}.

Step 2. We prove that lim_{n→∞} ‖x_{n+1}−x_n‖/ϵ_n = 0.

Indeed, utilizing (2.1) and (2.3), we obtain

‖v_{n+1}−v_n‖ = ‖Λ_{n+1}^N u_{n+1} − Λ_n^N u_n‖
= ‖J_{R_N,λ_{N,n+1}}(I−λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I−λ_{N,n}B_N)Λ_n^{N−1}u_n‖
≤ ‖J_{R_N,λ_{N,n+1}}(I−λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n+1}}(I−λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1}‖ + ‖J_{R_N,λ_{N,n+1}}(I−λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I−λ_{N,n}B_N)Λ_n^{N−1}u_n‖
≤ ‖(I−λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I−λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1}‖ + ‖(I−λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I−λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + |λ_{N,n+1}−λ_{N,n}| × ((1/λ_{N,n+1})‖J_{R_N,λ_{N,n+1}}(I−λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I−λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + (1/λ_{N,n})‖(I−λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I−λ_{N,n}B_N)Λ_n^{N−1}u_n‖)
≤ |λ_{N,n+1}−λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + ‖Λ_{n+1}^{N−1}u_{n+1} − Λ_n^{N−1}u_n‖
≤ |λ_{N,n+1}−λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + |λ_{N−1,n+1}−λ_{N−1,n}|(‖B_{N−1}Λ_{n+1}^{N−2}u_{n+1}‖ + M̃) + ‖Λ_{n+1}^{N−2}u_{n+1} − Λ_n^{N−2}u_n‖
≤ ⋯
≤ |λ_{N,n+1}−λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + |λ_{N−1,n+1}−λ_{N−1,n}|(‖B_{N−1}Λ_{n+1}^{N−2}u_{n+1}‖ + M̃) + ⋯ + |λ_{1,n+1}−λ_{1,n}|(‖B_1Λ_{n+1}^0u_{n+1}‖ + M̃) + ‖Λ_{n+1}^0u_{n+1} − Λ_n^0u_n‖
≤ M̃_0 ∑_{i=1}^N |λ_{i,n+1}−λ_{i,n}| + ‖u_{n+1}−u_n‖,
(3.7)

where

sup n 0 { 1 λ N , n + 1 J R N , λ N , n + 1 ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 ( I λ N , n B N ) Λ n N 1 u n + 1 λ N , n ( I λ N , n B N ) Λ n + 1 N 1 u n + 1 J R N , λ N , n ( I λ N , n B N ) Λ n N 1 u n } M ˜

for some M̃ > 0, and sup_{n≥0} {∑_{i=1}^N ‖B_i Λ_{n+1}^{i−1} u_{n+1}‖ + M̃} ≤ M̃_0 for some M̃_0 > 0.

Utilizing Proposition 2.4(b), (e), we deduce that

u n + 1 u n = Δ n + 1 M x n + 1 Δ n M x n = T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n M 1 x n T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n + 1 M 1 x n + 1 + T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n M 1 x n T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 + T r M , n ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 T r M , n ( Θ M , φ M ) ( I r M , n A M ) Δ n + 1 M 1 x n + 1 + ( I r M , n A M ) Δ n + 1 M 1 x n + 1 ( I r M , n A M ) Δ n M 1 x n | r M , n + 1 r M , n | r M , n + 1 T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 + | r M , n + 1 r M , n | A M Δ n + 1 M 1 x n + 1 + Δ n + 1 M 1 x n + 1 Δ n M 1 x n = | r M , n + 1 r M , n | [ A M Δ n + 1 M 1 x n + 1 + 1 r M , n + 1 T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ] + Δ n + 1 M 1 x n + 1 Δ n M 1 x n | r M , n + 1 r M , n | [ A M Δ n + 1 M 1 x n + 1 + 1 r M , n + 1 T r M , n + 1 ( Θ M , φ M ) ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ( I r M , n + 1 A M ) Δ n + 1 M 1 x n + 1 ] + + | r 1 , n + 1 r 1 , n | [ A 1 Δ n + 1 0 x n + 1 + 1 r 1 , n + 1 T r 1 , n + 1 ( Θ 1 , φ 1 ) ( I r 1 , n + 1 A 1 ) Δ n + 1 0 x n + 1 ( I r 1 , n + 1 A 1 ) Δ n + 1 0 x n + 1 ] + Δ n + 1 0 x n + 1 Δ n 0 x n M ˜ 1 k = 1 M | r k , n + 1 r k , n | + x n + 1 x n ,
(3.8)

where M̃_1 > 0 is a constant such that, for each n ≥ 0,

k = 1 M [ A k Δ n + 1 k 1 x n + 1 + 1 r k , n + 1 T r k , n + 1 ( Θ k , φ k ) ( I r k , n + 1 A k ) Δ n + 1 k 1 x n + 1 ( I r k , n + 1 A k ) Δ n + 1 k 1 x n + 1 ] M ˜ 1 .

Furthermore, we define w_n := (y_n − β_n x_n)/(1 − β_n), so that y_n = β_n x_n + (1 − β_n) w_n for all n ≥ 0. It follows that

w n + 1 w n = y n + 1 β n + 1 x n + 1 1 β n + 1 y n β n x n 1 β n = γ n + 1 t n + 1 + σ n + 1 T t n + 1 1 β n + 1 γ n t n + σ n T t n 1 β n = γ n + 1 ( t n + 1 t n ) + σ n + 1 ( T t n + 1 T t n ) 1 β n + 1 + ( γ n + 1 1 β n + 1 γ n 1 β n ) t n + ( σ n + 1 1 β n + 1 σ n 1 β n ) T t n .
(3.9)

Since T is a ξ-strictly pseudocontractive mapping and (γ_n + σ_n)ξ ≤ γ_n for all n ≥ 0, by Lemma 2.2, we obtain

‖γ_{n+1}(t_{n+1} − t_n) + σ_{n+1}(T t_{n+1} − T t_n)‖ ≤ (γ_{n+1} + σ_{n+1})‖t_{n+1} − t_n‖.
(3.10)
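Inequality (3.10) from Lemma 2.2 can be sanity-checked numerically. Below, T = −2I on R³ serves as a concrete ξ-strict pseudocontraction with ξ = 1/3 (an illustrative choice, not the paper's T), and the weights are drawn so that (γ + σ)ξ ≤ γ holds:

```python
import math
import random

# T = -2I is a (1/3)-strict pseudocontraction: ||Tx - Ty||^2 = 4||x - y||^2
# and ||(I-T)x - (I-T)y||^2 = 9||x - y||^2, so 4 <= 1 + (1/3)*9 holds.
xi = 1.0 / 3.0
T = lambda v: [-2.0 * c for c in v]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(0)
for _ in range(1000):
    t1 = [random.uniform(-1, 1) for _ in range(3)]
    t2 = [random.uniform(-1, 1) for _ in range(3)]
    gamma = random.uniform(0.1, 1.0)
    sigma = random.uniform(0.0, 2.0 * gamma)   # enforces (gamma + sigma)*xi <= gamma
    lhs = norm([gamma * (a - b) + sigma * (Ta - Tb)
                for a, b, Ta, Tb in zip(t1, t2, T(t1), T(t2))])
    rhs = (gamma + sigma) * norm([a - b for a, b in zip(t1, t2)])
    assert lhs <= rhs + 1e-9                   # Lemma 2.2 / inequality (3.10)
```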

Also, utilizing the nonexpansivity of P_C(I − λ_n f_{α_n}), we have

t n + 1 t n = P C ( I λ n + 1 f α n + 1 ) v n + 1 P C ( I λ n f α n ) v n P C ( I λ n + 1 f α n + 1 ) v n + 1 P C ( I λ n + 1 f α n + 1 ) v n + P C ( I λ n + 1 f α n + 1 ) v n P C ( I λ n f α n ) v n v n + 1 v n + ( I λ n + 1 f α n + 1 ) v n ( I λ n f α n ) v n v n + 1 v n + | λ n + 1 α n + 1 λ n α n | v n + | λ n + 1 λ n | f ( v n ) .
(3.11)

Hence, from (3.7)-(3.11), it follows that

w n + 1 w n γ n + 1 ( t n + 1 t n ) + σ n + 1 ( T t n + 1 T t n ) 1 β n + 1 + | γ n + 1 1 β n + 1 γ n 1 β n | t n + | σ n + 1 1 β n + 1 σ n 1 β n | T t n ( γ n + 1 + σ n + 1 ) t n + 1 t n 1 β n + 1 + | γ n + 1 1 β n + 1 γ n 1 β n | t n + | σ n + 1 1 β n + 1 σ n 1 β n | T t n = t n + 1 t n + | γ n + 1 1 β n + 1 γ n 1 β n | ( t n + T t n ) v n + 1 v n + | λ n + 1 α n + 1 λ n α n | v n + | λ n + 1 λ n | f ( v n ) + | γ n + 1 1 β n + 1 γ n 1 β n | ( t n + T t n ) M ˜ 0 i = 1 N | λ i , n + 1 λ i , n | + u n + 1 u n + | λ n + 1 α n + 1 λ n α n | v n + | λ n + 1 λ n | f ( v n ) + | γ n + 1 1 β n + 1 γ n 1 β n | ( t n + T t n ) M ˜ 0 i = 1 N | λ i , n + 1 λ i , n | + M ˜ 1 k = 1 M | r k , n + 1 r k , n | + x n + 1 x n + | λ n + 1 α n + 1 λ n α n | v n + | λ n + 1 λ n | f ( v n ) + | γ n + 1 1 β n + 1 γ n 1 β n | ( t n + T t n ) .
(3.12)

In the meantime, simple calculation shows that

y_{n+1} − y_n = β_n(x_{n+1} − x_n) + (1 − β_n)(w_{n+1} − w_n) + (β_{n+1} − β_n)(x_{n+1} − w_{n+1}).

So, it follows from (3.12) that

y n + 1 y n β n x n + 1 x n + ( 1 β n ) w n + 1 w n + | β n + 1 β n | x n + 1 w n + 1 β n x n + 1 x n + ( 1 β n ) [ M ˜ 0 i = 1 N | λ i , n + 1 λ i , n | + M ˜ 1 k = 1 M | r k , n + 1 r k , n | + x n + 1 x n + | γ n + 1 1 β n + 1 γ n 1 β n | ( t n + T t n ) + | λ n + 1 α n + 1 λ n α n | v n + | λ n + 1 λ n | f ( v n ) ] + | β n + 1 β n | x n + 1 w n + 1 x n + 1 x n + M ˜ 0 i = 1 N | λ i , n + 1 λ i , n | + M ˜ 1 k = 1 M | r k , n + 1 r k , n | + | γ n + 1 γ n | ( 1 β n ) + γ n | β n + 1 β n | 1 β n + 1 ( t n + T t n ) + | λ n + 1 α n + 1 λ n α n | v n + | λ n + 1 λ n | f ( v n ) + | β n + 1 β n | x n + 1 w n + 1 x n + 1 x n + M ˜ 0 i = 1 N | λ i , n + 1 λ i , n | + M ˜ 1 k = 1 M | r k , n + 1 r k , n | + | γ n + 1 γ n | t n + T t n 1 d + | β n + 1 β n | ( x n + 1 w n + 1 + t n + T t n 1 d ) + | λ n + 1 α n + 1 λ n α n | v n + | λ n + 1 λ n | f ( v n ) x n + 1 x n + M ˜ 2 ( i = 1 N | λ i , n + 1 λ i , n | + k = 1 M | r k , n + 1 r k , n | + | γ n + 1 γ n | + | β n + 1 β n | + | λ n + 1 α n + 1 λ n α n | + | λ n + 1 λ n | ) ,
(3.13)

where sup_{n≥0} {‖x_{n+1} − w_{n+1}‖ + (‖t_n‖ + ‖T t_n‖)/(1 − d) + ‖v_n‖ + ‖f(v_n)‖ + M̃_0 + M̃_1} ≤ M̃_2 for some M̃_2 > 0.

On the other hand, we define z_n := δ_n V x_n + (1 − δ_n) S x_n for all n ≥ 0. Then x_{n+1} = ϵ_n γ z_n + (I − ϵ_n μF) y_n for all n ≥ 0, and simple calculations show that

{ z n + 1 z n = ( δ n + 1 δ n ) ( V x n S x n ) + δ n + 1 ( V x n + 1 V x n ) z n + 1 z n = + ( 1 δ n + 1 ) ( S x n + 1 S x n ) , x n + 2 x n + 1 = ( ϵ n + 1 ϵ n ) ( γ z n μ F y n ) + ϵ n + 1 γ ( z n + 1 z n ) x n + 2 x n + 1 = + ( I λ n + 1 μ F ) y n + 1 ( I λ n + 1 μ F ) y n .

Since V is a ρ-contraction with coefficient ρ ∈ [0,1) and S is a nonexpansive mapping, we conclude that

z n + 1 z n | δ n + 1 δ n | V x n S x n + δ n + 1 V x n + 1 V x n + ( 1 δ n + 1 ) S x n + 1 S x n | δ n + 1 δ n | V x n S x n + δ n + 1 ρ x n + 1 x n + ( 1 δ n + 1 ) x n + 1 x n = ( 1 δ n + 1 ( 1 ρ ) ) x n + 1 x n + | δ n + 1 δ n | V x n S x n ,

which together with (3.13) and 0 < γ ≤ τ implies that

x n + 2 x n + 1 | ϵ n + 1 ϵ n | γ z n μ F y n + ϵ n + 1 γ z n + 1 z n + ( I ϵ n + 1 μ F ) y n + 1 ( I ϵ n + 1 μ F ) y n | ϵ n + 1 ϵ n | γ z n μ F y n + ϵ n + 1 γ z n + 1 z n + ( 1 ϵ n + 1 τ ) y n + 1 y n | ϵ n + 1 ϵ n | γ z n μ F y n + ϵ n + 1 γ [ ( 1 δ n + 1 ( 1 ρ ) ) x n + 1 x n + | δ n + 1 δ n | V x n S x n ] + ( 1 ϵ n + 1 τ ) [ x n + 1 x n + M ˜ 2 ( i = 1 N | λ i , n + 1 λ i , n | + k = 1 M | r k , n + 1 r k , n | + | γ n + 1 γ n | + | β n + 1 β n | + | λ n + 1 α n + 1 λ n α n | + | λ n + 1 λ n | ) ] ( 1 ϵ n + 1 ( τ γ ) ϵ n + 1 δ n + 1 ( 1 ρ ) γ ) x n + 1 x n + | ϵ n + 1 ϵ n | γ z n μ F y n + ϵ n + 1 | δ n + 1 δ n | V x n S x n + M ˜ 2 ( i = 1 N | λ i , n + 1 λ i , n | + k = 1 M | r k , n + 1 r k , n | + | γ n + 1 γ n | + | β n + 1 β n | + | λ n + 1 α n + 1 λ n α n | + | λ n + 1 λ n | ) ( 1 ϵ n + 1 δ n + 1 ( 1 ρ ) γ ) x n + 1 x n + M ˜ 3 { i = 1 N | λ i , n + 1 λ i , n | + k = 1 M | r k , n + 1 r k , n | + | ϵ n + 1 ϵ n | + ϵ n + 1 | δ n + 1 δ n | + | β n + 1 β n | + | γ n + 1 γ n | + | λ n + 1 α n + 1 λ n α n | + | λ n + 1 λ n | } ,

where sup_{n≥0} {‖γ z_n − μF y_n‖ + ‖V x_n − S x_n‖ + M̃_2} ≤ M̃_3 for some M̃_3 > 0. Consequently,

x n + 1 x n ϵ n ( 1 ϵ n δ n ( 1 ρ ) γ ) x n x n 1 ϵ n + M ˜ 3 { i = 1 N | λ i , n λ i , n 1 | ϵ n + k = 1 M | r k , n r k , n 1 | ϵ n + | ϵ n ϵ n 1 | ϵ n + | δ n δ n 1 | + | β n β n 1 | ϵ n + | γ n γ n 1 | ϵ n + | λ n α n λ n 1 α n 1 | ϵ n + | λ n λ n 1 | ϵ n } = ( 1 ϵ n δ n ( 1 ρ ) γ ) x n x n 1 ϵ n 1 + ( 1 ϵ n δ n ( 1 ρ ) γ ) x n x n 1 ( 1 ϵ n 1 ϵ n 1 ) + M ˜ 3 { i = 1 N | λ i , n λ i , n 1 | ϵ n + k = 1 M | r k , n r k , n 1 | ϵ n + | ϵ n ϵ n 1 | ϵ n + | δ n δ n 1 | + | β n β n 1 | ϵ n + | γ n γ n 1 | ϵ n + | λ n α n λ n 1 α n 1 | ϵ n + | λ n λ n 1 | ϵ n } ( 1 ϵ n δ n ( 1 ρ ) γ ) x n x n 1 ϵ n 1 + ϵ n δ n ( 1 ρ ) γ M ˜ 4 ( 1 ρ ) γ { | ϵ n ϵ n 1 | δ n ϵ n 2 ϵ n 1 + i = 1 N | λ i , n λ i , n 1 | δ n ϵ n 2 + k = 1 M | r k , n r k , n 1 | δ n ϵ n 2 + | ϵ n ϵ n 1 | δ n ϵ n 2 + | δ n δ n 1 | δ n ϵ n + | β n β n 1 | δ n ϵ n 2 + | γ n γ n 1 | δ n ϵ n 2 + | λ n α n λ n 1 α n 1 | δ n ϵ n 2 + | λ n λ n 1 | δ n ϵ n 2 } ,
(3.14)

where sup_{n≥1} {‖x_n − x_{n−1}‖ + M̃_3} ≤ M̃_4 for some M̃_4 > 0. Utilizing Lemma 2.8, we conclude from conditions (C1)-(C6) and (C8)-(C9) that ∑_{n=0}^∞ ϵ_n δ_n (1 − ρ)γ = ∞ and

lim_{n→∞} ‖x_{n+1} − x_n‖/ϵ_n = 0.

So, as ϵ_n → 0, it follows that

lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.
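Step 2 closes with a Lemma 2.8-type argument: a recursion s_{n+1} ≤ (1 − a_n)s_n + a_n b_n with ∑ a_n = ∞ and b_n → 0 forces s_n → 0. A small simulation with illustrative sequences a_n = 1/n and b_n = n^{−1/2} (not the paper's parameter sequences):

```python
# Lemma 2.8-type estimate behind (3.14):
# s_{n+1} <= (1 - a_n) s_n + a_n b_n, sum(a_n) = infinity, b_n -> 0  =>  s_n -> 0.
N = 200_000
s = 10.0                       # initial error, illustrative
for n in range(1, N + 1):
    a_n = 1.0 / n              # divergent series: sum 1/n = infinity
    b_n = 1.0 / n ** 0.5       # perturbation tending to zero
    s = (1.0 - a_n) * s + a_n * b_n   # worst case of the recursion
# s is now close to 0 (roughly of order 2/sqrt(N) for these choices)
```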

Step 3. We prove that lim_{n→∞} ‖x_n − u_n‖/ϵ_n = 0, lim_{n→∞} ‖x_n − v_n‖/ϵ_n = 0, lim_{n→∞} ‖v_n − t_n‖/ϵ_n = 0 and lim_{n→∞} ‖t_n − T t_n‖/ϵ_n = 0.

Indeed, utilizing Lemmas 2.2 and 2.7(b), from (3.1), (3.4)-(3.5) and (γ_n + σ_n)ξ ≤ γ_n, we deduce that

y n p 2 = β n x n + γ n t n + σ n T t n p 2 = β n ( x n p ) + ( 1 β n ) ( γ n t n + σ n T t n 1 β n p ) 2 = β n x n p 2 + ( 1 β n ) γ n t n + σ n T t n 1 β n p 2 β n ( 1 β n ) γ n t n + σ n T t n 1 β n x n 2 = β n x n p 2 + ( 1 β n ) γ n ( t n p ) + σ n ( T t n p ) 1 β n 2 β n ( 1 β n ) y n x n 1 β n 2 β n x n p 2 + ( 1 β n ) ( γ n + σ n ) 2 t n p 2 ( 1 β n ) 2 β n 1 β n y n x n 2 = β n x n p 2 + ( 1 β n ) t n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) ( x n p + λ n α n p ) 2 β n 1 β n y n x n 2 β n ( x n p + λ n α n p ) 2 + ( 1 β n ) ( x n p + λ n α n p ) 2 β n 1 β n y n x n 2 = ( x n p + λ n α n p ) 2 β n 1 β n y n x n 2 ( x n p + α n b p ) 2 β n 1 β n y n x n 2 .
(3.15)

Observe that

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 Δ n k 1 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2
(3.16)

and

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 Λ n i 1 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2
(3.17)

for i ∈ {1,2,…,N} and k ∈ {1,2,…,M}. Combining (3.5), (3.15)-(3.17), we get

y n p 2 β n x n p 2 + ( 1 β n ) t n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) t n p 2 β n x n p 2 + ( 1 β n ) ( v n p + λ n α n p ) 2 β n x n p 2 + ( 1 β n ) ( v n p + α n b p ) 2 = β n x n p 2 + ( 1 β n ) [ v n p 2 + α n b p ( 2 v n p + α n b p ) ] β n x n p 2 + ( 1 β n ) v n p 2 + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) Λ n i u n p 2 + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) [ u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) [ Δ n k x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) [ x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + α n b p ( 2 v n p + α n b p ) = x n p 2 + ( 1 β n ) [ r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + α n b p ( 2 v n p + α n b p ) ,
(3.18)

which immediately leads to

( 1 d ) [ r k , n ( 2 μ k r k , n ) A k Δ n k 1 x n A k p 2 ϵ n 2 + λ i , n ( 2 η i λ i , n ) B i Λ n i 1 u n B i p 2 ϵ n 2 ] ( 1 β n ) [ r k , n ( 2 μ k r k , n ) A k Δ n k 1 x n A k p 2 ϵ n 2 + λ i , n ( 2 η i λ i , n ) B i Λ n i 1 u n B i p 2 ϵ n 2 ] x n p 2 y n p 2 ϵ n 2 + α n ϵ n 2 b p ( 2 v n p + α n b p ) x n y n ϵ n 2 ( x n p + y n p ) + α n ϵ n 2 b p ( 2 v n p + α n b p ) .

Since {λ_{i,n}} ⊂ [a_i, b_i] ⊂ (0, 2η_i), {r_{k,n}} ⊂ [c_k, d_k] ⊂ (0, 2μ_k), i ∈ {1,2,…,N}, k ∈ {1,2,…,M}, and {v_n}, {x_n}, {y_n} are bounded sequences, we obtain from ‖x_n − y_n‖ + α_n = o(ϵ_n²),

lim_{n→∞} ‖A_k Δ_n^{k−1} x_n − A_k p‖/ϵ_n = 0 and lim_{n→∞} ‖B_i Λ_n^{i−1} u_n − B_i p‖/ϵ_n = 0
(3.19)

for all k ∈ {1,2,…,M} and i ∈ {1,2,…,N}.

Furthermore, by Proposition 2.4(b) and Lemma 2.7(a), we have

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p , Δ n k x n p = 1 2 ( ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 + Δ n k x n p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p ( Δ n k x n p ) 2 ) 1 2 ( Δ n k 1 x n p 2 + Δ n k x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 ) ,

which implies that

Δ n k x n p 2 Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 = Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 r k , n 2 A k Δ n k 1 x n A k p 2 + 2 r k , n Δ n k 1 x n Δ n k x n , A k Δ n k 1 x n A k p Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p .
(3.20)
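The inner-product manipulation behind (3.20) (and again in (3.33)) rests on the Hilbert-space polarization identity 2⟨a, b⟩ = ‖a‖² + ‖b‖² − ‖a − b‖², applied with the firmly nonexpansive resolvent. A quick check of the identity on random vectors (purely illustrative):

```python
import random

random.seed(1)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Identity used for (3.20)/(3.33): 2<a,b> = ||a||^2 + ||b||^2 - ||a - b||^2.
for _ in range(1000):
    a = [random.gauss(0, 1) for _ in range(4)]
    b = [random.gauss(0, 1) for _ in range(4)]
    diff = [x - y for x, y in zip(a, b)]
    lhs = 2.0 * dot(a, b)
    rhs = dot(a, a) + dot(b, b) - dot(diff, diff)
    assert abs(lhs - rhs) < 1e-9
```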

By Lemma 2.7(a) and Lemma 2.10, we obtain

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p , Λ n i u n p = 1 2 ( ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 + Λ n i u n p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p ( Λ n i u n p ) 2 ) 1 2 ( Λ n i 1 u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( x n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) ,

which immediately leads to

Λ n i u n p 2 x n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 = x n p 2 Λ n i 1 u n Λ n k u n 2 λ i , n 2 B i Λ n i 1 u n B i p 2 + 2 λ i , n Λ n i 1 u n Λ n i u n , B i Λ n i 1 u n B i p x n p 2 Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p .
(3.21)

Combining (3.15) and (3.21), we conclude that

y n p 2 β n x n p 2 + ( 1 β n ) v n p 2 + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) Λ n i u n p 2 + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) [ x n p 2 Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p ] + α n b p ( 2 v n p + α n b p ) x n p 2 ( 1 β n ) Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + α n b p ( 2 v n p + α n b p ) ,

which yields

( 1 d ) Λ n i 1 u n Λ n i u n 2 ϵ n 2 ( 1 β n ) Λ n i 1 u n Λ n i u n 2 x n p 2 y n p 2 ϵ n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p ϵ n 2 + α n b p ( 2 v n p + α n b p ) x n y n ϵ n 2 ( x n p + y n p ) + 2 λ i , n Λ n i 1 u n Λ n i u n ϵ n B i Λ n i 1 u n B i p ϵ n + α n ϵ n 2 b p ( 2 v n p + α n b p ) .

So, it follows from {λ_{i,n}} ⊂ [a_i, b_i] ⊂ (0, 2η_i), i = 1,2,…,N, that

(1 − d)‖Λ_n^{i−1} u_n − Λ_n^i u_n‖²/ϵ_n² ≤ (‖x_n − y_n‖/ϵ_n²)(‖x_n − p‖ + ‖y_n − p‖) + 2b_i (‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n)(‖B_i Λ_n^{i−1} u_n − B_i p‖/ϵ_n) + (α_n/ϵ_n²) b‖p‖(2‖v_n − p‖ + α_n b‖p‖).
(3.22)

Now we claim that

lim_{n→∞} ‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n = 0, ∀i ∈ {1,2,…,N}.
(3.23)

As a matter of fact, it is easy to see that, for each i ∈ {1,2,…,N},

lim sup_{n→∞} ‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n ≤ ∞.

If lim sup_{n→∞} ‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n < ∞, then from (3.22) and lim_{n→∞} ‖B_i Λ_n^{i−1} u_n − B_i p‖/ϵ_n = 0 (due to (3.19)), we have

( 1 d ) lim sup n Λ n i 1 u n Λ n i u n 2 ϵ n 2 lim sup n x n y n ϵ n 2 ( x n p + y n p ) + 2 b i lim sup n Λ n i 1 u n Λ n i u n ϵ n B i Λ n i 1 u n B i p ϵ n + lim sup n α n ϵ n 2 b p ( 2 v n p + α n b p ) lim sup n x n y n ϵ n 2 ( x n p + y n p ) + 2 b i lim sup n Λ n i 1 u n Λ n i u n ϵ n lim sup n B i Λ n i 1 u n B i p ϵ n + lim sup n α n ϵ n 2 b p ( 2 v n p + α n b p ) = 0 .

That is, lim_{n→∞} ‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n = 0. If lim sup_{n→∞} ‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n = ∞, then from (3.22), we have

(1 − d)(‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n)[‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n − (2b_i/(1 − d))‖B_i Λ_n^{i−1} u_n − B_i p‖/ϵ_n] ≤ (‖x_n − y_n‖/ϵ_n²)(‖x_n − p‖ + ‖y_n − p‖) + (α_n/ϵ_n²) b‖p‖(2‖v_n − p‖ + α_n b‖p‖).
(3.24)

Since lim_{n→∞} ‖B_i Λ_n^{i−1} u_n − B_i p‖/ϵ_n = 0 (due to (3.19)), it is easy to see that

lim sup_{n→∞} (‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n)[‖Λ_n^{i−1} u_n − Λ_n^i u_n‖/ϵ_n − (2b_i/(1 − d))‖B_i Λ_n^{i−1} u_n − B_i p‖/ϵ_n] = ∞.

Thus, from (3.24), it follows that

= lim sup n ( 1 d ) Λ n i 1 u n Λ n i u n ϵ n [ Λ n i 1 u n Λ n i u n ϵ n 2 b i 1 d B i Λ n i 1 u n B i p ϵ n ] lim sup n x n y n ϵ n 2 ( x n p + y n p ) + lim sup n α n ϵ n 2 b p ( 2 v n p + α n b p ) = 0 ,

which leads to a contradiction. This shows that (3.23) holds.

Also, combining (3.3), (3.15), and (3.20), we deduce that

y n p 2 β n x n p 2 + ( 1 β n ) v n p 2 + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) u n p 2 + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) Δ n k x n p 2 + α n b p ( 2 v n p + α n b p ) β n x n p 2 + ( 1 β n ) [ x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p ] + α n b p ( 2 v n p + α n b p ) x n p 2 ( 1 β n ) Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p + α n b p ( 2 v n p + α n b p ) ,

which yields

( 1 d ) Δ n k 1 x n Δ n k x n 2 ϵ n 2 ( 1 β n ) Δ n k 1 x n Δ n k x n 2 ϵ n 2 x n p 2 y n p 2 ϵ n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p ϵ n 2 + α n ϵ n 2 b p ( 2 v n p + α n b p ) x n y n ϵ n 2 ( x n p + y n p ) + 2 r k , n Δ n k 1 x n Δ n k x n ϵ n A k Δ n k 1 x n A k p ϵ n + α n ϵ n 2 b p ( 2 v n p + α n b p ) .

So, it follows from {r_{k,n}} ⊂ [c_k, d_k] ⊂ (0, 2μ_k), k = 1,2,…,M, that

(1 − d)‖Δ_n^{k−1} x_n − Δ_n^k x_n‖²/ϵ_n² ≤ (‖x_n − y_n‖/ϵ_n²)(‖x_n − p‖ + ‖y_n − p‖) + 2d_k (‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n)(‖A_k Δ_n^{k−1} x_n − A_k p‖/ϵ_n) + (α_n/ϵ_n²) b‖p‖(2‖v_n − p‖ + α_n b‖p‖).
(3.25)

Next, we claim that

lim_{n→∞} ‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n = 0, ∀k ∈ {1,2,…,M}.
(3.26)

As a matter of fact, it is easy to see that, for each k ∈ {1,2,…,M},

lim sup_{n→∞} ‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n ≤ ∞.

If lim sup_{n→∞} ‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n < ∞, then from (3.25) and lim_{n→∞} ‖A_k Δ_n^{k−1} x_n − A_k p‖/ϵ_n = 0 (due to (3.19)), we have

lim sup n ( 1 d ) Δ n k 1 x n Δ n k x n 2 ϵ n 2 lim sup n x n y n ϵ n 2 ( x n p + y n p ) + 2 d k lim sup n Δ n k 1 x n Δ n k x n ϵ n A k Δ n k 1 x n A k p ϵ n + lim sup n α n ϵ n 2 b p ( 2 v n p + α n b p ) lim sup n x n y n ϵ n 2 ( x n p + y n p ) + 2 d k lim sup n Δ n k 1 x n Δ n k x n ϵ n lim sup n A k Δ n k 1 x n A k p ϵ n + lim sup n α n ϵ n 2 b p ( 2 v n p + α n b p ) = 0 .

That is, lim_{n→∞} ‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n = 0. If lim sup_{n→∞} ‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n = ∞, then from (3.25), we have

(1 − d)(‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n)[‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n − (2d_k/(1 − d))‖A_k Δ_n^{k−1} x_n − A_k p‖/ϵ_n] ≤ (‖x_n − y_n‖/ϵ_n²)(‖x_n − p‖ + ‖y_n − p‖) + (α_n/ϵ_n²) b‖p‖(2‖v_n − p‖ + α_n b‖p‖).
(3.27)

Since lim_{n→∞} ‖A_k Δ_n^{k−1} x_n − A_k p‖/ϵ_n = 0 (due to (3.19)), it is easy to see that

lim sup_{n→∞} (‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n)[‖Δ_n^{k−1} x_n − Δ_n^k x_n‖/ϵ_n − (2d_k/(1 − d))‖A_k Δ_n^{k−1} x_n − A_k p‖/ϵ_n] = ∞.

Consequently, from (3.27), it follows that

= lim sup n ( 1 d ) Δ n k 1 x n Δ n k x n ϵ n [ Δ n k 1 x n Δ n k x n ϵ n A k Δ n k 1 x n A k p ϵ n ] lim sup n x n y n ϵ n 2 ( x n p + y n p ) + lim sup n α n ϵ n 2 b p ( 2 v n p + α n b p ) = 0 ,

which leads to a contradiction. This shows that (3.26) is valid. Therefore, from (3.23) and (3.26), we get

‖x_n − u_n‖/ϵ_n = ‖Δ_n^0 x_n − Δ_n^M x_n‖/ϵ_n ≤ ‖Δ_n^0 x_n − Δ_n^1 x_n‖/ϵ_n + ‖Δ_n^1 x_n − Δ_n^2 x_n‖/ϵ_n + ⋯ + ‖Δ_n^{M−1} x_n − Δ_n^M x_n‖/ϵ_n → 0 as n → ∞
(3.28)

and

‖u_n − v_n‖/ϵ_n = ‖Λ_n^0 u_n − Λ_n^N u_n‖/ϵ_n ≤ ‖Λ_n^0 u_n − Λ_n^1 u_n‖/ϵ_n + ‖Λ_n^1 u_n − Λ_n^2 u_n‖/ϵ_n + ⋯ + ‖Λ_n^{N−1} u_n − Λ_n^N u_n‖/ϵ_n → 0 as n → ∞,
(3.29)

respectively. Thus, from (3.28) and (3.29), we obtain

‖x_n − v_n‖/ϵ_n ≤ ‖x_n − u_n‖/ϵ_n + ‖u_n − v_n‖/ϵ_n → 0 as n → ∞.
(3.30)

On the other hand, note that Γ = VI(C, f). Then, utilizing Lemma 2.6 and the 1/‖A‖²-inverse strong monotonicity of f, we deduce from (2.1) that

t n p 2 ( I λ n f α n ) v n ( I λ n f ) p 2 = v n p λ n ( f ( v n ) f ( p ) ) λ n α n v n 2 v n p λ n ( f ( v n ) f ( p ) ) 2 2 λ n α n v n , ( I λ n f α n ) v n ( I λ n f ) p v n p 2 + λ n ( λ n 2 A 2 ) f ( v n ) f ( p ) 2 + 2 α n b v n v n p λ n ( f α n ( v n ) f ( p ) ) .
(3.31)
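The 1/‖A‖²-inverse strong monotonicity invoked for (3.31) is the standard property of the gradient of a least-squares term. As an illustration (not the paper's exact f, which involves the projection P_Q), take f(x) = Aᵀ(Ax − b) for a fixed 2×2 matrix A and verify ⟨f(x) − f(y), x − y⟩ ≥ (1/‖A‖²)‖f(x) − f(y)‖²:

```python
import math
import random

random.seed(2)
A = [[2.0, 1.0], [0.0, 1.0]]    # illustrative fixed matrix

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def f(x):
    # f(x) = A^T A x; the constant b cancels in f(x) - f(y), so it is dropped
    return matvec(transpose(A), matvec(A, x))

# ||A||^2 = largest eigenvalue of A^T A = [[4, 2], [2, 2]]
tr, det = 4.0 + 2.0, 4.0 * 2.0 - 2.0 * 2.0
L = (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

for _ in range(1000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    y = [random.gauss(0, 1), random.gauss(0, 1)]
    d = [a - c for a, c in zip(f(x), f(y))]
    inner = sum(di * (xi - yi) for di, xi, yi in zip(d, x, y))
    # 1/||A||^2-inverse strong monotonicity of f
    assert inner >= (1.0 / L) * sum(di * di for di in d) - 1e-9
```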

Combining (3.4), (3.15), and (3.31), we obtain

y n p 2 β n x n p 2 + ( 1 β n ) t n p 2 β n x n p 2 + ( 1 β n ) [ v n p 2 + λ n ( λ n 2 A 2 ) f ( v n ) f ( p ) 2 + 2 α n b v n v n p λ n ( f α n ( v n ) f ( p ) ) ] β n x n p 2 + ( 1 β n ) [ x n p 2 + λ n ( λ n 2 A 2 ) f ( v n ) f ( p ) 2 + 2 α n b v n v n p λ n ( f α n ( v n ) f ( p ) ) ] x n p 2 + ( 1 β n ) λ n ( λ n 2 A 2 ) f ( v n ) f ( p ) 2 + 2 α n b v n v n p λ n ( f α n ( v n ) f ( p ) ) ,

which together with {λ_n} ⊂ [a, b] ⊂ (0, 2/‖A‖²) and {β_n} ⊂ [c, d] ⊂ (0, 1) leads to

( 1 d ) a ( 2 A 2 b ) f ( v n ) f ( p ) 2 ϵ n 2 ( 1 β n ) λ n ( 2 A 2 λ n ) f ( v n ) f ( p ) 2 ϵ n 2 x n p 2 y n p 2 ϵ n 2 + 2 α n ϵ n 2 b v n v n p λ n ( f α n ( v n ) f ( p ) ) x n y n ϵ n 2 ( x n p + y n p ) + 2 α n ϵ n 2 b v n v n p λ n ( f α n ( v n ) f ( p ) ) .

Since {v_n}, {x_n}, and {y_n} are bounded sequences, we deduce from ‖x_n − y_n‖ + α_n = o(ϵ_n²) that

lim_{n→∞} ‖f(v_n) − f(p)‖/ϵ_n = 0.

So, it is clear that

lim_{n→∞} ‖f_{α_n}(v_n) − f(p)‖/ϵ_n = 0.
(3.32)

Again, utilizing Proposition 2.3(c), from t_n = P_C(I − λ_n f_{α_n})v_n and p = P_C(I − λ_n f)p, we get

t n p 2 = P C ( I λ n f α n ) v n P C ( I λ n f ) p 2 ( I λ n f α n ) v n ( I λ n f ) p , t n p = 1 2 ( ( I λ n f α n ) v n ( I λ n f ) p 2 + t n p 2 ( I λ n f α n ) v n ( I λ n f ) p ( t n p ) 2 ) = 1 2 ( ( I λ n f α n ) v n ( I λ n f α n ) p λ n α n p 2 + t n p 2 ( I λ n f α n ) v n ( I λ n f ) p ( t n p ) 2 ) = 1 2 ( ( I λ n f α n ) v n ( I λ n f α n ) p 2 2 λ n α n p , ( I λ n f α n ) v n ( I λ n f ) p + t n p 2 ( I λ n f α n ) v n ( I λ n f ) p ( t n p ) 2 ) 1 2 ( v n p 2 2 λ n α n p , ( I λ n f α n ) v n ( I λ n f ) p + t n p 2 ( I λ n f α n ) v n ( I λ n f ) p ( t n p ) 2 ) 1 2 ( v n p 2 + 2 λ n α n p ( I λ n f α n ) v n ( I λ n f ) p + t n p 2 v n t n λ n ( f α n ( v n ) f ( p ) ) 2 ) ,

which immediately leads to

t n p 2 v n p 2 + 2 λ n α n p ( I λ n f α n ) v n ( I λ n f ) p v n t n λ n ( f α n ( v n ) f ( p ) ) 2 .
(3.33)

Combining (3.4), (3.15), and (3.33), we obtain

y n p 2 β n x n p 2 + ( 1 β n ) t n p 2 β n x n p 2 + ( 1 β n ) [ v n p 2 + 2 λ n α n p ( I λ n f α n ) v n ( I λ n f ) p v n t n λ n ( f α n ( v n ) f ( p ) ) 2 ] β n x n p 2 + ( 1 β n ) [ x n p 2 + 2 λ n α n p ( I λ n f α n ) v n ( I λ n f ) p v n t n λ n ( f α n ( v n ) f ( p ) ) 2 ] x n p 2 + 2 λ n α n p ( I λ n f α n ) v n ( I λ n f ) p ( 1 β n ) v n t n λ n ( f α n ( v n ) f ( p ) ) 2 ,

which immediately yields

( 1 d ) v n t n λ n ( f α n ( v n ) f ( p ) ) 2 ϵ n 2 ( 1 β n ) v n t n λ n ( f α n ( v n ) f ( p ) ) 2 ϵ n 2 x n p 2 y n p 2 ϵ n 2 + 2 λ n α n ϵ n 2 p ( I λ n f α n ) v n ( I λ n f ) p x n y n ϵ n 2 ( x n p + y n p ) + 2 α n ϵ n 2 b p ( I λ n f α n ) v n ( I λ n f ) p .

Since {v_n}, {x_n}, and {y_n} are bounded sequences, we deduce from ‖x_n − y_n‖ + α_n = o(ϵ_n²) that

lim_{n→∞} ‖v_n − t_n − λ_n(f_{α_n}(v_n) − f(p))‖/ϵ_n = 0.
(3.34)

Observe that

‖v_n − t_n‖/ϵ_n ≤ ‖v_n − t_n − λ_n(f_{α_n}(v_n) − f(p))‖/ϵ_n + λ_n‖f_{α_n}(v_n) − f(p)‖/ϵ_n.

Thus, from (3.32) and (3.34), we have

lim_{n→∞} ‖v_n − t_n‖/ϵ_n = 0.
(3.35)

Taking into account that ‖x_n − t_n‖/ϵ_n ≤ ‖x_n − v_n‖/ϵ_n + ‖v_n − t_n‖/ϵ_n, from (3.30) and (3.35), we get

lim_{n→∞} ‖x_n − t_n‖/ϵ_n = 0.
(3.36)

Utilizing the relation y_n − x_n = γ_n(t_n − x_n) + σ_n(T t_n − x_n), we have

‖σ_n(T t_n − t_n)‖/ϵ_n = ‖σ_n(T t_n − x_n) − σ_n(t_n − x_n)‖/ϵ_n = ‖y_n − x_n − γ_n(t_n − x_n) − σ_n(t_n − x_n)‖/ϵ_n = ‖y_n − x_n − (1 − β_n)(t_n − x_n)‖/ϵ_n ≤ ‖y_n − x_n‖/ϵ_n + (1 − β_n)‖t_n − x_n‖/ϵ_n ≤ ‖y_n − x_n‖/ϵ_n + ‖t_n − x_n‖/ϵ_n,

which together with (3.36) and ‖x_n − y_n‖ = o(ϵ_n²) implies that

lim_{n→∞} ‖σ_n(T t_n − t_n)‖/ϵ_n = 0.

Since lim inf_{n→∞} σ_n > 0, we obtain

lim_{n→∞} ‖t_n − T t_n‖/ϵ_n = 0.
(3.37)

Step 4. We prove that ω_w(x_n) ⊂ Ω.

Indeed, since H is reflexive and {x_n} is bounded, there exists at least one weakly convergent subsequence of {x_n}. Hence, ω_w(x_n) ≠ ∅. Now, take an arbitrary w ∈ ω_w(x_n). Then there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ w. From (3.23), (3.26), (3.28), (3.30) and (3.36), we have u_{n_i} ⇀ w, v_{n_i} ⇀ w, t_{n_i} ⇀ w, Λ_{n_i}^m u_{n_i} ⇀ w and Δ_{n_i}^k x_{n_i} ⇀ w, where m ∈ {1,2,…,N} and k ∈ {1,2,…,M}. Utilizing Lemma 2.1(b), we deduce from t_{n_i} ⇀ w and (3.37) that w ∈ Fix(T).

Next, we prove that w ∈ ⋂_{m=1}^N I(B_m, R_m). As a matter of fact, since B_m is η_m-inverse strongly monotone, B_m is a monotone and Lipschitz-continuous mapping. It follows from Lemma 2.13 that R_m + B_m is maximal monotone. Let (v, g) ∈ G(R_m + B_m), that is, g − B_m v ∈ R_m v. Again, since Λ_n^m u_n = J_{R_m, λ_{m,n}}(I − λ_{m,n} B_m) Λ_n^{m−1} u_n, n ≥ 0, m ∈ {1,2,…,N}, we have

Λ_n^{m−1} u_n − λ_{m,n} B_m Λ_n^{m−1} u_n ∈ (I + λ_{m,n} R_m) Λ_n^m u_n,

that is,

(1/λ_{m,n})(Λ_n^{m−1} u_n − Λ_n^m u_n − λ_{m,n} B_m Λ_n^{m−1} u_n) ∈ R_m Λ_n^m u_n.

In terms of the monotonicity of R m , we get

⟨v − Λ_n^m u_n, g − B_m v − (1/λ_{m,n})(Λ_n^{m−1} u_n − Λ_n^m u_n − λ_{m,n} B_m Λ_n^{m−1} u_n)⟩ ≥ 0,

and hence

v Λ n m u n , g v Λ n m u n , B m v + 1 λ m , n ( Λ n m 1 u n Λ n m u n λ m , n B m Λ n m 1 u n ) = v Λ n m u n , B m v B m Λ n m u n + B m Λ n m u n B m Λ n m 1 u n + 1 λ m , n ( Λ n m 1 u n Λ n m u n ) v Λ n m u n , B m Λ n m u n B m Λ n m 1 u n + v Λ n m u n , 1 λ m , n ( Λ n m 1 u n Λ n m u n ) .

In particular,

v Λ n i m u n i , g v Λ n i m u n i , B m Λ n i m u n i B m Λ n i m 1 u n i + v Λ n i m u n i , 1 λ m , n i ( Λ n i m 1 u n i Λ n i m u n i ) .

Since ‖Λ_n^m u_n − Λ_n^{m−1} u_n‖ → 0 (due to (3.23)) and ‖B_m Λ_n^m u_n − B_m Λ_n^{m−1} u_n‖ → 0 (due to the Lipschitz continuity of B_m), we conclude from Λ_{n_i}^m u_{n_i} ⇀ w and {λ_{i,n}} ⊂ [a_i, b_i] ⊂ (0, 2η_i) that

lim_{i→∞} ⟨v − Λ_{n_i}^m u_{n_i}, g⟩ = ⟨v − w, g⟩ ≥ 0.

It follows from the maximal monotonicity of R_m + B_m that 0 ∈ (R_m + B_m)w, that is, w ∈ I(B_m, R_m). Therefore, w ∈ ⋂_{m=1}^N I(B_m, R_m).

Next we prove that w ∈ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k). Since Δ_n^k x_n = T_{r_{k,n}}^{(Θ_k, φ_k)}(I − r_{k,n} A_k) Δ_n^{k−1} x_n, n ≥ 0, k ∈ {1,2,…,M}, we have

Θ k ( Δ n k x n , y ) + φ k ( y ) φ k ( Δ n k x n ) + A k Δ n k 1 x n , y Δ n k x n + 1 r k , n y Δ n k x n , Δ n k x n Δ n k 1 x n 0 .

By (A2), we have

φ k ( y ) φ k ( Δ n k x n ) + A k Δ n k 1 x n , y Δ n k x n + 1 r k , n y Δ n k x n , Δ n k x n Δ n k 1 x n Θ k ( y , Δ n k x n ) .

Let z_t = ty + (1 − t)w for all t ∈ (0, 1] and y ∈ C. This implies that z_t ∈ C. Then we have

z t Δ n k x n , A k z t φ k ( Δ n k x n ) φ k ( z t ) + z t Δ n k x n , A k z t z t Δ n k x n , A k Δ n k 1 x n z t Δ n k x n , Δ n k x n Δ n k 1 x n r k , n + Θ k ( z t , Δ n k x n ) = φ k ( Δ n k x n ) φ k ( z t ) + z t Δ n k x n , A k z t A k Δ n k x n + z t Δ n k x n , A k Δ n k x n A k Δ n k 1 x n z t Δ n k x n , Δ n k x n Δ n k 1 x n r k , n + Θ k ( z t , Δ n k x n ) .
(3.38)

By (3.26), we have ‖A_k Δ_n^k x_n − A_k Δ_n^{k−1} x_n‖ → 0 as n → ∞. Furthermore, by the monotonicity of A_k, we obtain ⟨z_t − Δ_n^k x_n, A_k z_t − A_k Δ_n^k x_n⟩ ≥ 0. Then, by (A4), we obtain

⟨z_t − w, A_k z_t⟩ ≥ φ_k(w) − φ_k(z_t) + Θ_k(z_t, w).
(3.39)

Utilizing (A1), (A4), and (3.39), we obtain

0 = Θ k ( z t , z t ) + φ k ( z t ) φ k ( z t ) t Θ k ( z t , y ) + ( 1 t ) Θ k ( z t , w ) + t φ k ( y ) + ( 1 t ) φ k ( w ) φ k ( z t ) t [ Θ k ( z t , y ) + φ k ( y ) φ k ( z t ) ] + ( 1 t ) z t w , A k z t = t [ Θ k ( z t , y ) + φ k ( y ) φ k ( z t ) ] + ( 1 t ) t y w , A k z t ,

and hence

0 ≤ Θ_k(z_t, y) + φ_k(y) − φ_k(z_t) + (1 − t)⟨y − w, A_k z_t⟩.

Letting t → 0, we have, for each y ∈ C,

0 ≤ Θ_k(w, y) + φ_k(y) − φ_k(w) + ⟨y − w, A_k w⟩.

This implies that w ∈ GMEP(Θ_k, φ_k, A_k), and hence, w ∈ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k). Thus, w ∈ Fix(T) ∩ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{m=1}^N I(B_m, R_m).

Furthermore, let us show that wΓ. In fact, define

T̃v = f(v) + N_C v if v ∈ C, and T̃v = ∅ if v ∉ C,

where N_C v = {u ∈ H : ⟨v − x, u⟩ ≥ 0, ∀x ∈ C}. Then T̃ is maximal monotone, and 0 ∈ T̃v if and only if v ∈ VI(C, f); see [16]. Let (v, ṽ) ∈ G(T̃). Then we have ṽ ∈ T̃v = f(v) + N_C v, and hence, ṽ − f(v) ∈ N_C v. So, we have ⟨v − x, ṽ − f(v)⟩ ≥ 0 for all x ∈ C.
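The characterization 0 ∈ T̃v ⇔ v ∈ VI(C, f) can be seen concretely in one dimension, where the VI solution is also the fixed point of v ↦ P_C(v − λ f(v)). The choices C = [0, 1], f(v) = v − 0.3, λ = 0.5 below are purely illustrative:

```python
# 1-D illustration of the normal-cone characterization:
# with C = [0, 1] and f(v) = v - 0.3, the VI solution is v* = 0.3,
# reachable as the fixed point of v -> P_C(v - lam * f(v)).
def P_C(v):                          # projection onto C = [0, 1]
    return min(1.0, max(0.0, v))

f = lambda v: v - 0.3
lam = 0.5

v = 0.9
for _ in range(200):
    v = P_C(v - lam * f(v))          # contraction; converges to the VI solution

assert abs(v - 0.3) < 1e-8
# VI condition <f(v*), x - v*> >= 0 for all x in C:
for k in range(101):
    x = k / 100.0
    assert f(v) * (x - v) >= -1e-8
```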

On the other hand, from t_n = P_C(v_n − λ_n f_{α_n}(v_n)) and v ∈ C, we get ⟨v_n − λ_n f_{α_n}(v_n) − t_n, t_n − v⟩ ≥ 0, and hence,

⟨v − t_n, (t_n − v_n)/λ_n + f_{α_n}(v_n)⟩ ≥ 0.

Therefore, from ṽ − f(v) ∈ N_C v and t_{n_i} ∈ C, we have

v t n i , v ˜ v t n i , f ( v ) v t n i , f ( v ) v t n i , t n i v n i λ n i + f α n i ( v n i ) = v t n i , f ( v ) v t n i , t n i v n i λ n i + f ( v n i ) α n i v t n i , v n i = v t n i , f ( v ) f ( t n i ) + v t n i , f ( t n i ) f ( v n i ) v t n i , t n i v n i λ n i α n i v t n i , v n i v t n i , f ( t n i ) f ( v n i ) v t n i , t n i v n i λ n i α n i v t n i , v n i .

Hence, letting i → ∞, it is easy to see that ⟨v − w, ṽ⟩ ≥ 0. Since T̃ is maximal monotone, we have w ∈ T̃^{−1}0, and hence, w ∈ VI(C, f) = Γ. Consequently, w ∈ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{i=1}^N I(B_i, R_i) ∩ Fix(T) ∩ Γ =: Ω. This shows that ω_w(x_n) ⊂ Ω.

Step 5. We prove that ω_w(x_n) ⊂ Ξ.

Indeed, utilizing Lemmas 2.6 and 2.4, from (3.1) and (3.6), we find that, for all p ∈ Ω,

x n + 1 p 2 = ϵ n γ ( δ n V x n + ( 1 δ n ) S x n ) + ( I ϵ n μ F ) y n p 2 = ϵ n γ ( δ n V x n + ( 1 δ n ) S x n ) ϵ n μ F p + ( I ϵ n μ F ) y n ( I ϵ n μ F ) p 2 = ϵ n [ δ n ( γ V x n μ F p ) + ( 1 δ n ) ( γ S x n μ F p ) ] + ( I ϵ n μ F ) y n ( I ϵ n μ F ) p 2 = ϵ n [ δ n ( γ V x n γ V p ) + ( 1 δ n ) ( γ S x n γ S p ) ] + ( I ϵ n μ F ) y n ( I ϵ n μ F ) p + ϵ n [ δ n ( γ V p μ F p ) + ( 1 δ n ) ( γ S p μ F p ) ] 2 ϵ n [ δ n ( γ V x n γ V p ) + ( 1 δ n ) ( γ S x n γ S p ) ] + ( I ϵ n μ F ) y n ( I ϵ n μ F ) p 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p [ ϵ n δ n ( γ V x n γ V p ) + ( 1 δ n ) ( γ S x n γ S p ) + ( I ϵ n μ F ) y n ( I ϵ n μ F ) p ] 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p [ ϵ n ( δ n γ ρ x n p + ( 1 δ n ) γ x n p ) + ( 1 ϵ n τ ) y n p ] 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ( 1 δ n ) ϵ n ( γ S p μ F p ) , x n + 1 p = [ ϵ n ( 1 δ n ( 1 ρ ) ) γ x n p + ( 1 ϵ n τ ) y n p ] 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p [ ϵ n ( 1 δ n ( 1 ρ ) ) γ x n p + ( 1 ϵ n τ ) ( x n p + α n b p ) ] 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p [ ϵ n ( 1 δ n ( 1 ρ ) ) γ ( x n p + α n b p ) + ( 1 ϵ n τ ) ( x n p + α n b p ) ] 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p = ( 1 ϵ n ( τ γ ) ϵ n δ n ( 1 ρ ) γ ) 2 ( x n p + α n b p ) 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p ( 1 ϵ n ( τ γ ) ϵ n δ n ( 1 ρ ) γ ) ( x n p + α n b p ) 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p ( 1 ϵ n δ n ( 1 ρ ) γ ) [ x n p 2 + α n b p ( 2 x n p + α n b p ) ] + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p ( 1 ϵ n δ n ( 1 ρ ) γ ) x n p 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p + α n b p ( 2 x n p + 
α n b p ) .
(3.40)

Take an arbitrary w ∈ ω_w(x_n). Then there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ w. Utilizing (3.40), we obtain, for all p ∈ Ω,

x n + 1 p 2 ( 1 ϵ n ( τ γ ) ϵ n δ n ( 1 ρ ) γ ) ( x n p + α n b p ) 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p ( x n p + α n b p ) 2 + 2 ϵ n δ n ( γ V p μ F p ) , x n + 1 p + 2 ϵ n ( 1 δ n ) ( γ S p μ F p ) , x n + 1 p ,

which implies that

( μ F γ S ) p , x n p ( μ F γ S ) p , x n x n + 1 + ( μ F γ S ) p , x n + 1 p ( μ F γ S ) p x n x n + 1 + ( x n p + α n b p ) 2 x n + 1 p 2 2 ϵ n ( 1 δ n ) + δ n 1 δ n ( γ V μ F ) p , x n + 1 p ( μ F γ S ) p x n x n + 1 + ( x n x n + 1 + α n b p ) ( x n p + x n + 1 p + α n b p ) 2 ϵ n ( 1 δ n ) + δ n 1 δ n ( γ V μ F ) p x n + 1 p .
(3.41)

Since the boundedness of {x_n}, δ_n → 0, α_n = o(ϵ_n²), and ‖x_n − x_{n+1}‖ = o(ϵ_n) (due to Step 2) together imply that

lim_{n→∞} (‖x_n − x_{n+1}‖ + α_n b‖p‖)(‖x_n − p‖ + ‖x_{n+1} − p‖ + α_n b‖p‖) / (2ϵ_n(1 − δ_n)) = 0,

from (3.41), we conclude that

⟨(μF − γS)p, w − p⟩ = lim_{i→∞} ⟨(μF − γS)p, x_{n_i} − p⟩ ≤ lim sup_{n→∞} ⟨(μF − γS)p, x_n − p⟩ ≤ 0, ∀p ∈ Ω,

that is,

⟨(μF − γS)p, w − p⟩ ≤ 0, ∀p ∈ Ω.
(3.42)

Since μF − γS is (μη − γ)-strongly monotone and (μκ + γ)-Lipschitz continuous, by Minty's lemma [41] we know that (3.42) is equivalent to the VIP

⟨(μF − γS)w, p − w⟩ ≥ 0, ∀p ∈ Ω.
(3.43)

So, it follows that w ∈ VI(Ω, μF − γS) =: Ξ. This shows that ω_w(x_n) ⊂ Ξ.
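Minty's lemma, used here to pass from (3.42) to (3.43), says that for a continuous monotone operator the "dual" inequality ⟨Gp, p − w⟩ ≥ 0 (∀p) and the VI ⟨Gw, p − w⟩ ≥ 0 (∀p) pick out the same points. A one-dimensional illustration with G(v) = 2v − 1 on Ω = [0, 1] (an illustrative stand-in for μF − γS):

```python
# Minty equivalence on Omega = [0, 1] for a continuous monotone G:
# {w : G(p)*(p - w) >= 0 for all p} == {w : G(w)*(p - w) >= 0 for all p}.
G = lambda v: 2.0 * v - 1.0                  # strongly monotone; zero at 0.5
W = [k / 1000.0 for k in range(1001)]        # candidate solutions w
P = [k / 2000.0 for k in range(2001)]        # finer grid of test points p

minty = [w for w in W if all(G(p) * (p - w) >= -1e-12 for p in P)]
vi    = [w for w in W if all(G(w) * (p - w) >= -1e-12 for p in P)]
assert minty == vi == [0.5]                  # both formulations give the same point
```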

Step 6. We prove that x_n → x*, where {x*} = VI(Ξ, μF − γV).

Indeed, note that {x*} = VI(Ξ, μF − γV). Since {x_n} is bounded and H is reflexive, there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ w and

lim sup_{n→∞} ⟨(γV − μF)x*, x_n − x*⟩ = lim sup_{i→∞} ⟨(γV − μF)x*, x_{n_i} − x*⟩ = ⟨(γV − μF)x*, w − x*⟩.

According to Step 5, we get w ∈ Ξ. So, it follows from {x*} = VI(Ξ, μF − γV) that

\[
\limsup_{n\to\infty}\bigl\langle(\gamma V-\mu F)x^{*},\,x_{n}-x^{*}\bigr\rangle
=\bigl\langle(\gamma V-\mu F)x^{*},\,w-x^{*}\bigr\rangle\le 0.
\]

However, from x* ∈ Ξ ⊂ Ω and condition (C10), we deduce that, for all n sufficiently large,

\[
\begin{aligned}
\bigl\langle(\gamma S-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle
&=\bigl\langle(\gamma S-\mu F)x^{*},\,x_{n+1}-P_{\Omega}x_{n+1}\bigr\rangle
+\bigl\langle(\gamma S-\mu F)x^{*},\,P_{\Omega}x_{n+1}-x^{*}\bigr\rangle\\
&\le\bigl\langle(\gamma S-\mu F)x^{*},\,x_{n+1}-P_{\Omega}x_{n+1}\bigr\rangle\\
&\le\bigl\|(\gamma S-\mu F)x^{*}\bigr\|\,d(x_{n+1},\Omega)\\
&\le\bigl\|(\gamma S-\mu F)x^{*}\bigr\|
\Bigl(\tfrac{1}{\bar k}\,\|x_{n+1}-Tx_{n+1}\|\Bigr)^{1/\theta}.
\end{aligned}
\]
(3.44)
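The second inner product on the first line of (3.44) is dropped because x* ∈ Ξ = VI(Ω, μF − γS) while P_Ω x_{n+1} ∈ Ω, so the variational characterization of x* applies at q = P_Ω x_{n+1}:

```latex
\[
\bigl\langle(\mu F-\gamma S)x^{*},\,q-x^{*}\bigr\rangle\ge 0
\quad\forall q\in\Omega
\qquad\Longrightarrow\qquad
\bigl\langle(\gamma S-\mu F)x^{*},\,P_{\Omega}x_{n+1}-x^{*}\bigr\rangle\le 0.
\]
```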

Utilizing Lemma 2.1(a), we have, for all n sufficiently large,

\[
\begin{aligned}
\|x_{n+1}-Tx_{n+1}\|
&\le\|x_{n+1}-Tx_{n}\|+\|Tx_{n}-Tx_{n+1}\|\\
&\le\frac{1+\xi}{1-\xi}\,\|x_{n}-x_{n+1}\|
+\bigl\|\epsilon_{n}\gamma\bigl(\delta_{n}Vx_{n}+(1-\delta_{n})Sx_{n}\bigr)+(I-\epsilon_{n}\mu F)y_{n}-Tx_{n}\bigr\|\\
&\le\frac{1+\xi}{1-\xi}\,\|x_{n}-x_{n+1}\|+\|y_{n}-Tx_{n}\|
+\epsilon_{n}\bigl\|\gamma\bigl(\delta_{n}Vx_{n}+(1-\delta_{n})Sx_{n}\bigr)-\mu Fy_{n}\bigr\|\\
&\le\frac{1+\xi}{1-\xi}\,\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|+\|x_{n}-Tx_{n}\|
+\epsilon_{n}\bigl\|\gamma\bigl(\delta_{n}Vx_{n}+(1-\delta_{n})Sx_{n}\bigr)-\mu Fy_{n}\bigr\|\\
&\le\frac{1+\xi}{1-\xi}\,\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|+\|x_{n}-Tt_{n}\|+\|Tt_{n}-Tx_{n}\|
+\epsilon_{n}\bigl\|\gamma\delta_{n}(Vx_{n}-Sx_{n})+\gamma Sx_{n}-\mu Fy_{n}\bigr\|\\
&\le\frac{1+\xi}{1-\xi}\,\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|+\|x_{n}-t_{n}\|+\|t_{n}-Tt_{n}\|+\|Tt_{n}-Tx_{n}\|
+\epsilon_{n}\bigl\|\gamma\delta_{n}(Vx_{n}-Sx_{n})+\gamma Sx_{n}-\mu Fy_{n}\bigr\|\\
&\le\frac{1+\xi}{1-\xi}\,\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|
+\Bigl(1+\frac{1+\xi}{1-\xi}\Bigr)\|t_{n}-x_{n}\|+\|t_{n}-Tt_{n}\|+\epsilon_{n}\widetilde M_{4},
\end{aligned}
\]
(3.45)

where \(\widetilde M_{4}=\sup_{n\ge 0}\|\gamma\delta_{n}(Vx_{n}-Sx_{n})+\gamma Sx_{n}-\mu Fy_{n}\|<\infty\). Hence, for a large enough constant \(\bar k_{1}>0\), from (3.44) and (3.45) we have, for all n sufficiently large,

\[
\begin{aligned}
\bigl\langle(\gamma S-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle
&\le\bigl\|(\gamma S-\mu F)x^{*}\bigr\|\Bigl(\tfrac{1}{\bar k}\,\|x_{n+1}-Tx_{n+1}\|\Bigr)^{1/\theta}\\
&\le\bigl\|(\gamma S-\mu F)x^{*}\bigr\|
\Bigl\{\tfrac{1}{\bar k}\Bigl[\tfrac{1+\xi}{1-\xi}\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|
+\Bigl(1+\tfrac{1+\xi}{1-\xi}\Bigr)\|t_{n}-x_{n}\|+\|t_{n}-Tt_{n}\|+\epsilon_{n}\widetilde M_{4}\Bigr]\Bigr\}^{1/\theta}\\
&\le\bar k_{1}\bigl(\epsilon_{n}+\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|+\|x_{n}-t_{n}\|+\|t_{n}-Tt_{n}\|\bigr)^{1/\theta}\\
&=\bar k_{1}\,\epsilon_{n}^{1/\theta}
\Bigl(1+\frac{\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|+\|x_{n}-t_{n}\|+\|t_{n}-Tt_{n}\|}{\epsilon_{n}}\Bigr)^{1/\theta}.
\end{aligned}
\]
(3.46)

Next we prove that lim_{n→∞} ∥x_n − x*∥ = 0. As a matter of fact, putting p = x* in (3.40), we obtain from (3.46) that

\[
\begin{aligned}
\|x_{n+1}-x^{*}\|^{2}
&\le\bigl(1-\epsilon_{n}\delta_{n}(1-\rho)\gamma\bigr)\|x_{n}-x^{*}\|^{2}
+2\epsilon_{n}\delta_{n}\bigl\langle(\gamma V-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle\\
&\qquad+2\epsilon_{n}(1-\delta_{n})\bigl\langle(\gamma S-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle
+\alpha_{n}\|b-x^{*}\|\bigl(2\|x_{n}-x^{*}\|+\alpha_{n}\|b-x^{*}\|\bigr)\\
&=\bigl(1-\epsilon_{n}\delta_{n}(1-\rho)\gamma\bigr)\|x_{n}-x^{*}\|^{2}\\
&\qquad+\epsilon_{n}\delta_{n}(1-\rho)\gamma\cdot\frac{2}{(1-\rho)\gamma}
\Bigl[\bigl\langle(\gamma V-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle
+\frac{1-\delta_{n}}{\delta_{n}}\bigl\langle(\gamma S-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle\Bigr]\\
&\qquad+\alpha_{n}\|b-x^{*}\|\bigl(2\|x_{n}-x^{*}\|+\alpha_{n}\|b-x^{*}\|\bigr)\\
&\le\bigl(1-\epsilon_{n}\delta_{n}(1-\rho)\gamma\bigr)\|x_{n}-x^{*}\|^{2}\\
&\qquad+\epsilon_{n}\delta_{n}(1-\rho)\gamma\cdot\frac{2}{(1-\rho)\gamma}
\Bigl[\bigl\langle(\gamma V-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle
+\frac{\bar k_{1}\,\epsilon_{n}^{1/\theta}}{\delta_{n}}
\Bigl(1+\frac{\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|+\|x_{n}-t_{n}\|+\|t_{n}-Tt_{n}\|}{\epsilon_{n}}\Bigr)^{1/\theta}\Bigr]\\
&\qquad+\alpha_{n}\|b-x^{*}\|\bigl(2\|x_{n}-x^{*}\|+\alpha_{n}\|b-x^{*}\|\bigr).
\end{aligned}
\]
(3.47)

Since \(\sum_{n=1}^{\infty}\epsilon_{n}\delta_{n}=\infty\), \(\sum_{n=1}^{\infty}\alpha_{n}<\infty\), \(\epsilon_{n}^{1/\theta}/\delta_{n}\to 0\), \(\|x_{n}-y_{n}\|=o(\epsilon_{n}^{2})\), and \(\|x_{n}-x_{n+1}\|=o(\epsilon_{n})\), we conclude from (3.36), (3.37), and \(\limsup_{n\to\infty}\langle(\gamma V-\mu F)x^{*},x_{n}-x^{*}\rangle\le 0\) that \(\sum_{n=1}^{\infty}\epsilon_{n}\delta_{n}(1-\rho)\gamma=\infty\), \(\sum_{n=1}^{\infty}\alpha_{n}\|b-x^{*}\|(2\|x_{n}-x^{*}\|+\alpha_{n}\|b-x^{*}\|)<\infty\), and

\[
\limsup_{n\to\infty}\frac{2}{(1-\rho)\gamma}
\Bigl[\bigl\langle(\gamma V-\mu F)x^{*},\,x_{n+1}-x^{*}\bigr\rangle
+\frac{\bar k_{1}\,\epsilon_{n}^{1/\theta}}{\delta_{n}}
\Bigl(1+\frac{\|x_{n}-x_{n+1}\|+\|y_{n}-x_{n}\|+\|x_{n}-t_{n}\|+\|t_{n}-Tt_{n}\|}{\epsilon_{n}}\Bigr)^{1/\theta}\Bigr]\le 0.
\]

Therefore, applying Lemma 2.8 to (3.47), we infer that lim_{n→∞} ∥x_n − x*∥ = 0. This completes the proof. □
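Lemma 2.8 is not restated in this excerpt; in this literature it is typically a Xu-type lemma (cf. [38]): if s_{n+1} ≤ (1 − λ_n)s_n + λ_nβ_n + r_n with λ_n ∈ [0, 1], ∑λ_n = ∞, lim sup_n β_n ≤ 0, and ∑r_n < ∞, then s_n → 0. A minimal numerical sketch of that mechanism follows; the sequences below are hypothetical stand-ins, not the paper's iterates:

```python
# Numerical illustration of the Xu-type recursion that a lemma like
# Lemma 2.8 applies to in (3.47):
#     s_{n+1} <= (1 - lam_n) s_n + lam_n beta_n + r_n,
# with sum(lam_n) = inf, limsup beta_n <= 0, sum(r_n) < inf  =>  s_n -> 0.
# All sequences here are hypothetical stand-ins chosen to satisfy these
# hypotheses; they are not the iterates of Algorithm 3.1.

def xu_recursion(s0, lam, beta, r, n_steps):
    """Iterate the worst case s_{n+1} = (1 - lam_n)*s_n + lam_n*beta_n + r_n."""
    s = s0
    for n in range(1, n_steps + 1):
        s = (1.0 - lam(n)) * s + lam(n) * beta(n) + r(n)
    return s

# lam_n = 1/n (non-summable), beta_n = n^{-1/2} -> 0, r_n = n^{-2} (summable)
s_final = xu_recursion(
    s0=10.0,
    lam=lambda n: 1.0 / n,
    beta=lambda n: n ** -0.5,
    r=lambda n: n ** -2.0,
    n_steps=200_000,
)
print(s_final)  # decays toward 0, at a rate governed by beta_n
```

In (3.47) the roles are played by s_n = ∥x_n − x*∥², λ_n = ϵ_nδ_n(1 − ρ)γ, and r_n = α_n∥b − x*∥(2∥x_n − x*∥ + α_n∥b − x*∥), which is exactly why the three conditions collected before (3.47) are needed.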

Remark 3.1 Algorithm 3.1 and Theorem 3.1 extend and generalize the corresponding algorithms and convergence results in [28, 30].

References

  1. Ansari QH: Metric Spaces - Including Fixed Point Theory and Set-Valued Maps. Narosa Publishing House, New Delhi; 2010.

  2. Bianchi M, Schaible S: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90: 31–43. 10.1007/BF02192244

  3. Bianchi M, Schaible S: Equilibrium problems under generalized convexity and generalized monotonicity. J. Glob. Optim. 2004, 30: 121–134. 10.1007/s10898-004-8269-9

  4. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994,63(1–4):123–145.

  5. Flores-Bazán F: Existence theorems for generalized noncoercive equilibrium problems: the quasi-convex case. SIAM J. Optim. 2000, 11: 675–690.

  6. Flores-Bazán F: Existence theory for finite-dimensional pseudomonotone equilibrium problems. Acta Appl. Math. 2003, 77: 249–297. 10.1023/A:1024971128483

  7. Ansari QH, Lalitha CS, Mehta M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton; 2014.

  8. Facchinei F, Pang J-S: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer, New York; 2003.

  9. Facchinei F, Pang J-S: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, New York; 2003.

  10. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.

  11. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.

  12. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z

  13. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042

  14. Ansari QH, Wong N-C, Yao J-C: The existence of nonlinear inequalities. Appl. Math. Lett. 1999,12(5):89–92. 10.1016/S0893-9659(99)00062-2

  15. Gwinner J: Stability of monotone variational inequalities with various applications. In Variational Inequalities and Network Equilibrium Problems. Edited by: Giannessi F, Maugeri A. Plenum, New York; 1995:123–142.

  16. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

  17. Huang N-J: A new completely general class of variational inclusions with noncompact valued mappings. Comput. Math. Appl. 1998,35(10):9–14. 10.1016/S0898-1221(98)00067-4

  18. Zeng L-C, Guu S-M, Yao J-C: Characterization of H -monotone operators with applications to variational inclusions. Comput. Math. Appl. 2005,50(3–4):329–337. 10.1016/j.camwa.2005.06.001

  19. Ceng L-C, Al-Homidan S: Algorithms of common solutions for generalized mixed equilibria, variational inclusions, and constrained convex minimization. Abstr. Appl. Anal. 2014., 2014: Article ID 132053

  20. Ceng L-C, Ansari QH, Wong MM, Yao J-C: Mann type hybrid extragradient method for variational inequalities, variational inclusions and fixed point problems. Fixed Point Theory 2012,13(2):403–422.

  21. Ceng L-C, Ansari QH, Schaible S: Hybrid extragradient-like methods for generalized mixed equilibrium problems, system of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012, 53: 69–96. 10.1007/s10898-011-9703-4

  22. Ceng L-C, Ansari QH, Yao J-C: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218: 1112–1123. 10.1016/j.amc.2011.01.061

  23. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Problems 2002,18(2):441–453. 10.1088/0266-5611/18/2/310

  24. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001

  25. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007,327(2):1244–1256. 10.1016/j.jmaa.2006.05.010

  26. Ansari QH, Rehan A: Split feasibility and fixed point problems. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by: Ansari QH. Birkhäuser, Basel; 2014:281–322.

  27. Ceng L-C, Ansari QH, Yao J-C: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64: 633–642. 10.1016/j.camwa.2011.12.074

  28. Ceng L-C, Ansari QH, Yao J-C: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75: 2116–2125. 10.1016/j.na.2011.10.012

  29. Ansari QH, Ceng L-C, Gupta H: Triple hierarchical variational inequalities. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by: Ansari QH. Birkhäuser, Basel; 2014:231–280.

  30. Kong Z-R, Ceng L-C, Ansari QH, Pang C-T: Multistep hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013., 2013: Article ID 718624

  31. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

  32. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004,53(5–6):475–504. 10.1080/02331930412331327157

  33. Marino G, Xu H-K: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346. 10.1016/j.jmaa.2006.06.055

  34. Yao Y, Liou Y-C, Kang SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59: 3472–3480. 10.1016/j.camwa.2010.03.036

  35. Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5

  36. Xu H-K, Kim T-H: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003,119(1):185–201.

  37. Ceng L-C, Yao J-C: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022

  38. Xu H-K: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002,66(1):240–256. 10.1112/S0024610702003332

  39. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Groningen; 1976.

  40. Xu H-K: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Problems 2010., 26: Article ID 105018

  41. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

Acknowledgements

In this research, the first author was partially supported by the National Natural Science Foundation of China (11071169), the Innovation Program of Shanghai Municipal Education Commission (09ZZ133), and the PhD Program Foundation of the Ministry of Education of China (20123127110002). The second and third authors were partly supported by the National Science Council of the Republic of China.

Author information

Corresponding author

Correspondence to Chin-Tzong Pang.

Additional information

Competing interests

The authors declare that there is no conflict of interests regarding the publication of this article.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ceng, LC., Pang, CT. & Wen, CF. Multi-step extragradient method with regularization for triple hierarchical variational inequalities with variational inclusion and split feasibility constraints. J Inequal Appl 2014, 492 (2014). https://doi.org/10.1186/1029-242X-2014-492
