
Error bounds of regularized gap functions for weak vector variational inequality problems

Abstract

In this paper, by the nonlinear scalarization method, a global error bound of a weak vector variational inequality is established via a regularized gap function. The result extends some existing results in the literature.

MSC: 49K40, 90C31.

1 Introduction

Throughout this paper, let K be a closed convex subset of the Euclidean space R^n and let F: R^n → B(R^n, R^m) be a continuously differentiable mapping, where B(R^n, R^m) denotes the space of bounded linear mappings from R^n to R^m. We consider the weak vector variational inequality (WVVI) of finding x* ∈ K such that

$$\langle F(x^*),\, x-x^*\rangle \notin -\operatorname{int} C,\quad \forall x\in K,$$

where C ⊆ R^m is a closed, convex, and pointed cone with nonempty interior int C. (WVVI) was first introduced by Giannessi [1]. It has been shown to have many applications in vector optimization problems and traffic equilibrium problems (e.g., [2, 3]).

Error bounds quantify the distance from a feasible point to the solution set, and they play an important role not only in sensitivity analysis but also in the convergence analysis of iterative algorithms. Recently, various error bounds have been presented for weak vector variational inequalities in [4–7]. Using a scalarization approach of Konnov [8], Li and Mastroeni [5] established error bounds for two kinds of (WVVIs) with set-valued mappings. Via a regularized gap function and a D-gap function, Charitha and Dutta [4] obtained error bounds for (WVVI). More recently, by virtue of regularized gap functions, Sun and Chai [6] studied error bounds for generalized (WVVIs). Using image space analysis, Xu and Li [7] derived a gap function for (WVVI) and then established an error bound for (WVVI) without convexity of the constraint set. These papers share a common feature: either the solution set of (WVVI) is a singleton [6, 7], or, even though the solution set of (WVVI) is not a singleton [4, 5], the solution set of the corresponding variational inequality (VI) is a singleton when the results are specialized to (VI).

In this paper, by the nonlinear scalarization method, we study a global error bound of (WVVI). This paper is organized as follows. In Section 2, we establish a global error bound of (VI) via the generalized regularized gap function. In Section 3, we discuss a global error bound of (WVVI) by the nonlinear scalarization method.

2 A global error bound of (VI)

Let h: R^n → R ∪ {+∞} be a proper lower semicontinuous function, and let S = {x ∈ R^n | h(x) ≤ 0}. We say that h has a global error bound if there exists τ > 0 such that

$$d(x,S)\le \tau\, h(x)_+,\quad \forall x\in \mathbb{R}^n,$$

where h(x)_+ := max{h(x), 0}, d(x,S) := inf{‖x−s‖ | s ∈ S} if S is nonempty, and d(x,S) := +∞ if S is empty. A mapping f: R^n → R^n is said to be coercive on K if

$$\lim_{x\in K,\ \|x\|\to+\infty}\frac{\langle f(x),\,x-y\rangle}{\|x\|}=+\infty,\quad \forall y\in K.$$

f: R^n → R^n is said to be strongly monotone on R^n with modulus λ > 0 if

$$\langle f(x)-f(x'),\,x-x'\rangle\ge \lambda\|x-x'\|^2,\quad \forall x,x'\in\mathbb{R}^n.$$

In this section, we establish a global error bound for the variational inequality (VI) of finding x ∈ K such that

$$\langle f(x),\,y-x\rangle\ge 0,\quad \forall y\in K,$$

where f: R^n → R^n is a continuously differentiable mapping.

To study the error bound of (VI), we need a class of merit functions that reformulate (VI) as an optimization problem; see [9–16]. One such function is the generalized regularized gap function [17] defined by

$$f_\gamma(x):=-\inf_{y\in K}\bigl\{\langle f(x),\,y-x\rangle+\gamma\,\varphi(x,y)\bigr\},\quad \forall x\in\mathbb{R}^n,\ \gamma>0,$$
(1)

where φ: R n × R n →R is a real-valued function with the following properties:

(P1) φ is continuously differentiable on R^n × R^n.

(P2) φ(x,y) ≥ 0 for all x, y ∈ R^n, and equality holds if and only if x = y.

(P3) φ(x,⋅) is uniformly strongly convex on R^n with modulus β > 0, in the sense that

$$\varphi(x,y_1)-\varphi(x,y_2)\ge \langle \nabla_2\varphi(x,y_2),\,y_1-y_2\rangle+\beta\|y_1-y_2\|^2,\quad \forall x,y_1,y_2\in\mathbb{R}^n,$$

where ∇₂φ denotes the partial derivative of φ with respect to the second variable.

(P4) ∇₂φ(x,⋅) is uniformly Lipschitz continuous on R^n with modulus α, i.e., for all x ∈ R^n,

$$\|\nabla_2\varphi(x,y_1)-\nabla_2\varphi(x,y_2)\|\le \alpha\|y_1-y_2\|,\quad \forall y_1,y_2\in\mathbb{R}^n.$$
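For concreteness, the prototypical choice satisfying (P1)–(P4) is φ(x,y) = ½‖x−y‖² (with α = 1, β = ½), for which the infimum in (1) has a closed form: the objective is a shifted quadratic in y, so its minimizer over K is the Euclidean projection Proj_K(x − f(x)/γ). The following minimal numerical sketch evaluates f_γ under this choice; the function names and the toy set K = [0,∞)² are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gap(x, f, proj, gamma):
    """Regularized gap function f_gamma in (1) for phi(x, y) = 0.5*||x - y||^2.

    For this phi the objective in (1) is a shifted quadratic in y, so its
    minimizer over K is the Euclidean projection Proj_K(x - f(x)/gamma).
    `proj` must implement the projection onto K (illustrative name).
    """
    y = proj(x - f(x) / gamma)                      # y_gamma^phi(x)
    val = -(f(x) @ (y - x) + 0.5 * gamma * np.dot(y - x, y - x))
    return val, y

# Toy instance: K = [0, inf)^2, f(x) = x (strongly monotone with modulus 1).
f = lambda x: x
proj = lambda z: np.maximum(z, 0.0)                 # projection onto [0, inf)^2

val, y = gap(np.array([2.0, 3.0]), f, proj, gamma=1.0)
# Here x - f(x)/gamma = 0, so y_gamma^phi(x) = 0 and f_gamma(x) = 0.5*||x||^2 = 6.5.
```

At a solution of (VI) (here the origin) the gap value vanishes, consistent with the gap-function property recalled below.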

Now we recall some properties of φ in (1).

Proposition 2.1 The following statements hold for each x, y ∈ K:

(i) ⟨∇₂φ(x,y), u⟩ ≤ α‖x−y‖‖u‖ for all u ∈ span(K−x).

(ii) β‖x−y‖² ≤ φ(x,y) ≤ (α−β)‖x−y‖².

(iii) φ(x,y) − ⟨∇₂φ(x,y), y−x⟩ ≥ −(α−β)‖x−y‖².

(iv) ∇₂φ(x,y) = 0 if and only if x = y.

Proof Parts (i)-(iii) are taken from [[14], Lemma 2.1] and part (iv) is from [[17], Lemma 2.1]. □

Remark 2.1 In light of (ii) in Proposition 2.1, it holds true that α≥2β.

We next list some basic properties of the generalized regularized gap function f_γ.

Proposition 2.2 The following conclusions are valid for (VI).

(i) For every x ∈ R^n, there exists a unique vector y_γ^φ(x) ∈ K at which the infimum in (1) is attained, i.e.,

$$f_\gamma(x)=-\langle f(x),\,y_\gamma^\varphi(x)-x\rangle-\gamma\,\varphi\bigl(x,y_\gamma^\varphi(x)\bigr).$$

(ii) f_γ is a gap function of (VI).

(iii) x = y_γ^φ(x) if and only if x is a solution of (VI).

(iv) f_γ is continuously differentiable on R^n with

$$\nabla f_\gamma(x)=-\nabla f(x)\bigl(y_\gamma^\varphi(x)-x\bigr)+f(x)-\gamma\,\nabla_1\varphi\bigl(x,y_\gamma^\varphi(x)\bigr).$$

(v) y_γ^φ and f_γ are both locally Lipschitz on R^n.

(vi) If f is coercive on K, then (VI) has a nonempty compact solution set.

(vii) f_γ(x) ≥ βγ‖y_γ^φ(x)−x‖² for all x ∈ K.

Proof Parts (i)-(iv) are from [16], part (v) is from [[18], Lemma 3.1], and part (vi) is from [[11], Proposition 2.2.7].

It follows from (ii) and (iii) that we only need to prove (vii) for x ∈ K∖S, where S denotes the solution set of (VI). Since y_γ^φ(x) is the minimizer over K of the function

$$G(y):=\langle f(x),\,y-x\rangle+\gamma\,\varphi(x,y),$$

the first-order optimality condition implies that

$$\langle \nabla G(y_\gamma^\varphi(x)),\,y-y_\gamma^\varphi(x)\rangle\ge 0,\quad \forall y\in K.$$

Letting y = x, we get

$$\langle \nabla G(y_\gamma^\varphi(x)),\,x-y_\gamma^\varphi(x)\rangle\ge 0,$$

i.e.,

$$\langle f(x)+\gamma\,\nabla_2\varphi(x,y_\gamma^\varphi(x)),\,y_\gamma^\varphi(x)-x\rangle\le 0.$$

It follows from (P3) that G is strongly convex on R^n with modulus βγ > 0, i.e.,

$$G(y_1)-G(y_2)\ge \langle f(x)+\gamma\,\nabla_2\varphi(x,y_2),\,y_1-y_2\rangle+\beta\gamma\|y_1-y_2\|^2,\quad \forall y_1,y_2\in\mathbb{R}^n.$$

Letting y₁ = x and y₂ = y_γ^φ(x), and using f_γ(x) = −G(y_γ^φ(x)) together with G(x) = γφ(x,x) = 0, we obtain

$$f_\gamma(x)\ge \langle f(x)+\gamma\,\nabla_2\varphi(x,y_\gamma^\varphi(x)),\,x-y_\gamma^\varphi(x)\rangle+\beta\gamma\|y_\gamma^\varphi(x)-x\|^2.$$

Combining this with the first-order condition above, one has f_γ(x) ≥ βγ‖y_γ^φ(x)−x‖². □
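Property (vii) can also be confirmed numerically. A minimal sketch, again assuming the standard choice φ(x,y) = ½‖x−y‖² (so β = ½) with K = [0,∞)² and an illustrative map f(x) = x; for this φ the minimizer y_γ^φ(x) is the projection Proj_K(x − f(x)/γ).

```python
import numpy as np

# Check (vii): f_gamma(x) >= beta*gamma*||y_gamma^phi(x) - x||^2 at sample
# points of K, for phi(x, y) = 0.5*||x - y||^2 (beta = 1/2, alpha = 1).
beta, gamma = 0.5, 2.0
f = lambda x: x                                    # illustrative map
proj = lambda z: np.maximum(z, 0.0)                # projection onto K = [0, inf)^2

rng = np.random.default_rng(1)
holds = []
for _ in range(100):
    x = rng.uniform(0.0, 5.0, size=2)              # x in K
    y = proj(x - f(x) / gamma)                     # y_gamma^phi(x)
    f_gap = -(f(x) @ (y - x) + 0.5 * gamma * np.dot(y - x, y - x))
    holds.append(f_gap >= beta * gamma * np.dot(y - x, y - x) - 1e-12)
```

For this quadratic φ the inequality in (vii) actually holds with equality whenever the projection is inactive, which makes it a tight numerical check.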

Theorem 2.1 Let f be coercive on K, and assume that φ satisfies

(P5) ⟨∇₁φ(x, y_γ^φ(x)) + ∇₂φ(x, y_γ^φ(x)), y_γ^φ(x) − x⟩ ≥ 0 for all x ∈ K.

Suppose further that the following condition holds:

$$\mu:=\inf\Bigl\{\langle d,\,\nabla f(x)d\rangle \Bigm| x\in K\setminus S,\ d=\frac{y_\gamma^\varphi(x)-x}{\|y_\gamma^\varphi(x)-x\|}\Bigr\}>0,$$
(2)

where S is the solution set of (VI), and that γ(α−2β) < μ. Then f_γ provides a global error bound for (VI): for all x ∈ K,

$$d(x,S)\le \max\Bigl\{\frac{2\sqrt{\beta\gamma}}{\mu+2\gamma\beta-\gamma\alpha},\ \frac{2}{\sqrt{\beta\gamma}}\Bigr\}\sqrt{f_\gamma(x)}.$$

Proof It follows from (vi) of Proposition 2.2 that S is a nonempty compact subset of K. If x ∈ S, the assertion obviously holds, so let x ∈ K∖S. Then f_γ(x) > 0. For brevity, we write w := y_γ^φ(x) − x and d := w/‖w‖. Applying [[19], Theorem 2.5] to the function √(f_γ), we only need to prove that

$$\nabla\sqrt{f_\gamma}(x)\,d\le-\min\Bigl\{\frac{\mu+2\gamma\beta-\gamma\alpha}{2\sqrt{\beta\gamma}},\ \frac{\sqrt{\beta\gamma}}{2}\Bigr\}.$$
(3)

It follows from (iv) of Proposition 2.2 that

$$\begin{aligned}
\nabla f_\gamma(x)w&=\langle-\nabla f(x)w+f(x)-\gamma\nabla_1\varphi(x,y_\gamma^\varphi(x)),\,w\rangle\\
&=-\langle\nabla f(x)w,\,w\rangle+\langle f(x),\,w\rangle+\gamma\varphi(x,y_\gamma^\varphi(x))-\gamma\bigl[\langle\nabla_1\varphi(x,y_\gamma^\varphi(x)),\,w\rangle+\varphi(x,y_\gamma^\varphi(x))\bigr]\\
&=-\langle\nabla f(x)w,\,w\rangle-f_\gamma(x)-\gamma\bigl[\langle\nabla_1\varphi(x,y_\gamma^\varphi(x)),\,w\rangle+\varphi(x,y_\gamma^\varphi(x))\bigr].
\end{aligned}$$

By (P5) and (2), we have

$$\nabla f_\gamma(x)w\le-\mu\|w\|^2-f_\gamma(x)-\gamma\bigl[-\langle\nabla_2\varphi(x,y_\gamma^\varphi(x)),\,w\rangle+\varphi(x,y_\gamma^\varphi(x))\bigr].$$

It follows from (iii) of Proposition 2.1 that

$$\nabla f_\gamma(x)w\le-\mu\|w\|^2-f_\gamma(x)+\gamma(\alpha-\beta)\|w\|^2.$$

Thus,

$$\nabla\sqrt{f_\gamma}(x)\,d=\frac{\nabla f_\gamma(x)w}{2\|w\|\sqrt{f_\gamma(x)}}\le-\frac{[\mu-\gamma(\alpha-\beta)]\|w\|}{2\sqrt{f_\gamma(x)}}-\frac{\sqrt{f_\gamma(x)}}{2\|w\|}.$$
(4)

In light of (4) and (vii) of Proposition 2.2, we have

$$\nabla\sqrt{f_\gamma}(x)\,d\le-\frac{[\mu-\gamma(\alpha-\beta)]\|w\|}{2\sqrt{f_\gamma(x)}}-\frac{\sqrt{\beta\gamma}}{2}.$$

If μ < γ(α−β), then it follows from (vii) of Proposition 2.2 and γ(α−2β) < μ that

$$\nabla\sqrt{f_\gamma}(x)\,d\le\frac{\gamma(\alpha-\beta)-\mu}{2\sqrt{\beta\gamma}}-\frac{\sqrt{\beta\gamma}}{2}=\frac{\gamma\alpha-\mu-2\gamma\beta}{2\sqrt{\beta\gamma}}<0.$$
(5)

If μ ≥ γ(α−β), then

$$\nabla\sqrt{f_\gamma}(x)\,d\le-\frac{\sqrt{\beta\gamma}}{2}.$$
(6)

Hence, (3) follows from (5) and (6). The proof is complete. □

Now we use two examples to show, respectively, that condition (2) cannot be dropped and that Theorem 2.1 is applicable.

Example 2.1 Consider K = R, φ(x,y) = ½|x−y|², γ = ½, and f(x) = x³. Then one easily computes α = 2β = 1, y_γ^φ(x) = x − 2f(x), f_γ(x) = x⁶, and S = {0}. Clearly f is coercive on K, but μ = 0, so (2) does not hold. Moreover, f_γ fails to provide a global error bound: d(x,S) = |x| while √(f_γ(x)) = |x|³, so no τ > 0 satisfies d(x,S) ≤ τ√(f_γ(x)) for all x near 0.
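A quick numerical check of Example 2.1, reading the error bound as d(x,S) ≤ τ√(f_γ(x)) and using the values f_γ(x) = x⁶ and S = {0} from the example: the ratio d(x,S)/√(f_γ(x)) is unbounded near the origin, so no single modulus τ can work.

```python
import numpy as np

# Example 2.1: f_gamma(x) = x**6 and S = {0}, so d(x, S) = |x|.
f_gamma = lambda x: x**6
dist_S = lambda x: abs(x)

# A global error bound would need d(x,S) <= tau*sqrt(f_gamma(x)) for one
# fixed tau > 0; the ratio below grows like 1/x**2 as x -> 0.
ratios = [dist_S(x) / np.sqrt(f_gamma(x)) for x in (0.1, 0.01, 0.001)]
# ratios is approximately [1e2, 1e4, 1e6]: unbounded as x -> 0
```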

Example 2.2 Consider K = [0,+∞), φ(x,y) = ½|x−y|², γ = 1, and f(x) = x. Then one easily computes α = 2β = 1, ∇f(x) = 1, y_γ^φ(x) = 0, f_γ(x) = ½x², and S = {0}. Clearly f is coercive on K and (2) holds with μ = 1. Thus, it follows from Theorem 2.1 that f_γ provides a global error bound.
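Example 2.2 can likewise be verified against the modulus from Theorem 2.1, again reading the bound as d(x,S) ≤ τ√(f_γ(x)); with these data (μ = 1, α = 1, β = ½, γ = 1) the modulus evaluates to 2√2. The grid check below is an illustrative sketch.

```python
import numpy as np

# Example 2.2: f_gamma(x) = 0.5*x**2 on K = [0, inf), S = {0}, d(x, S) = x.
mu, alpha, beta, gamma = 1.0, 1.0, 0.5, 1.0
f_gamma = lambda x: 0.5 * x**2

sg = np.sqrt(beta * gamma)
tau = max(2.0 * sg / (mu + 2.0 * gamma * beta - gamma * alpha), 2.0 / sg)
# tau = max(sqrt(2), 2*sqrt(2)) = 2*sqrt(2)

# Verify d(x, S) <= tau*sqrt(f_gamma(x)) on a grid of K.
ok = all(x <= tau * np.sqrt(f_gamma(x)) + 1e-12
         for x in np.linspace(0.0, 10.0, 101))
```

Indeed tau·√(f_γ(x)) = 2x here, so the bound holds with room to spare.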

Using [[21], Proposition 2.3(ii)], Huang and Ng [[14], Theorem 2.1] obtained the following conclusion. We now give a slightly different proof based on Theorem 2.1.

Corollary 2.1 Let f be strongly monotone on R^n with modulus λ > 0 and let γ(α−2β) < λ. Assume that φ satisfies (P5). Then f_γ provides a global error bound for (VI) with modulus

$$\max\Bigl\{\frac{2\sqrt{\beta\gamma}}{\lambda+2\gamma\beta-\gamma\alpha},\ \frac{2}{\sqrt{\beta\gamma}}\Bigr\}.$$

Proof Let x ∈ K∖S and w := y_γ^φ(x) − x. Since f is continuously differentiable,

$$f(x+tw)=f(x)+t\,\nabla f(x)w+o(t),$$

where o(t)/t → 0 as t → 0. Since f is strongly monotone with modulus λ, one has

$$\langle f(x+tw)-f(x),\,tw\rangle\ge\lambda\|tw\|^2,$$

and dividing by t² and letting t → 0 yields

$$\langle w,\,\nabla f(x)w\rangle\ge\lambda\|w\|^2.$$

Thus, (2) holds with μ ≥ λ. Moreover, the strong monotonicity of f implies the coerciveness of f (cf. [[22], Remark 2.1]). Thus, by Theorem 2.1, f_γ provides a global error bound. □
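The key step of the proof — strong monotonicity forcing ⟨w, ∇f(x)w⟩ ≥ λ‖w‖², hence condition (2) — can be illustrated with an affine map. The matrix below is an arbitrary symmetric positive definite example, not taken from the paper.

```python
import numpy as np

# For f(x) = A x + b with A symmetric positive definite, f is strongly
# monotone with modulus lambda_min(A), and grad f(x) = A.  Hence
# <d, grad f(x) d> >= lambda_min(A) for every unit vector d: condition (2)
# of Theorem 2.1 holds with mu >= lambda_min(A) > 0.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
lam = np.linalg.eigvalsh(A).min()                  # strong monotonicity modulus

rng = np.random.default_rng(0)
quad = []
for _ in range(50):
    d = rng.standard_normal(2)
    d /= np.linalg.norm(d)                         # random unit direction
    quad.append(d @ A @ d)                         # <d, grad f(x) d>
```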

3 A global error bound of (WVVI)

In this section, by the nonlinear scalarization method and by Theorem 2.1, we discuss a global error bound of (WVVI). The dual cone of C is defined by C ∗ :={ξ∈ R m :〈ξ,z〉≥0,∀z∈C}. For each ξ∈ R m , ∥ξ∥:=sup{|〈ξ,z〉|:∥z∥≤1}, where 〈ξ,z〉 denotes the value of ξ at z. Let e∈intC and B e ∗ :={ξ∈ C ∗ :〈ξ,e〉=1}. It is well known that B e ∗ is a compact convex base of C ∗ .

Lemma 3.1 [20]

$$S\supseteq\bigcup_{\xi\in C^*\setminus\{0\}}S_\xi=\bigcup_{\xi\in B_e^*}S_\xi,$$

where S_ξ := {x ∈ K : ⟨∑_{i=1}^m ξ_i F_i(x), y − x⟩ ≥ 0, ∀y ∈ K} and S is the solution set of (WVVI).
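Lemma 3.1 says that any solution of a scalarized VI, for ξ ∈ B_e^*, also solves (WVVI). A small one-dimensional check with illustrative data (not from the paper): K = [0,1], C = R²_+, e = (1,1), F(x) = (x−1, 2x−1), and ξ = (½, ½), for which the scalarized map is 1.5x − 1 and the scalarized VI has the solution x* = 2/3.

```python
import numpy as np

F = lambda x: np.array([x - 1.0, 2.0 * x - 1.0])   # F_1, F_2 on K = [0, 1]
xi = np.array([0.5, 0.5])                          # xi in B_e*, e = (1, 1)
x_star = 2.0 / 3.0                                 # scalarized VI solution
grid = np.linspace(0.0, 1.0, 101)

# x* solves the scalarized VI: <sum_i xi_i F_i(x*), y - x*> >= 0 for y in K.
scalar_ok = all((xi @ F(x_star)) * (y - x_star) >= -1e-12 for y in grid)

# Lemma 3.1 then puts x* in S: <F(x*), y - x*> is never in -int(R^2_+),
# i.e. its two components are never both strictly negative.
wvvi_ok = all(not bool(((F(x_star) * (y - x_star)) < 0).all()) for y in grid)
```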

Recall the generalized regularized gap function for (WVVI), defined by

$$\phi_\gamma(x):=\min_{\xi\in B_e^*}f_\gamma(x,\xi),$$

where f_γ(x,ξ) := max_{y∈K} {⟨∑_{i=1}^m ξ_i F_i(x), x−y⟩ − γφ(x,y)}. When φ(x,y) = ½‖x−y‖², the generalized regularized gap function reduces to the regularized gap function defined in [4].
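For m = 2, C = R²_+ and e = (1,1), the base B_e^* is the unit simplex {(t, 1−t) : t ∈ [0,1]}, so ϕ_γ can be approximated by minimizing f_γ(x,ξ) over a grid in t. A sketch under the same illustrative assumptions as before (φ(x,y) = ½‖x−y‖², K = [0,∞)^n, toy data F₁(x) = x, F₂(x) = 2x); the helper names are hypothetical.

```python
import numpy as np

def f_gamma(x, xi, F, proj, gamma):
    """f_gamma(x, xi) for phi(x, y) = 0.5*||x - y||^2: the inner maximum is
    attained at y = Proj_K(x - g/gamma), where g is the scalarized map."""
    g = sum(xi_i * Fi(x) for xi_i, Fi in zip(xi, F))
    y = proj(x - g / gamma)
    return -(g @ (y - x) + 0.5 * gamma * np.dot(y - x, y - x))

def phi_gamma(x, F, proj, gamma, grid=101):
    """Approximate the minimum over B_e* = {(t, 1-t) : t in [0, 1]} by a
    uniform grid in t."""
    return min(f_gamma(x, (t, 1.0 - t), F, proj, gamma)
               for t in np.linspace(0.0, 1.0, grid))

F = (lambda x: x, lambda x: 2.0 * x)               # toy F_1, F_2
proj = lambda z: np.maximum(z, 0.0)                # K = [0, inf)^2
val = phi_gamma(np.array([1.0, 1.0]), F, proj, gamma=1.0)
# For this data f_gamma(x, (t, 1-t)) = 3 - 2t, so phi_gamma(x) = 1.0 at t = 1.
```

For a general compact base B_e^* the grid would be replaced by an optimization over ξ, but the discretized simplex suffices to illustrate the definition.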

Theorem 3.1 Assume that φ satisfies (P5). For each ξ ∈ B_e^*, suppose that ξ∘F is coercive on K and that the following condition holds:

$$\mu_\xi:=\inf\Bigl\{\langle d,\,(\xi\circ\nabla F)(x)d\rangle\Bigm| x\in K\setminus S,\ d=\frac{y_\gamma^\varphi(x)-x}{\|y_\gamma^\varphi(x)-x\|}\Bigr\}>0,$$
(7)

where y_γ^φ(x) denotes the minimizer in (1) associated with the scalarized mapping ξ∘F. Suppose further that γ(α−2β) < min_{ξ∈B_e^*} μ_ξ. Then ϕ_γ provides a global error bound for (WVVI) with modulus

$$\max\Bigl\{\max_{\xi\in B_e^*}\frac{2\sqrt{\beta\gamma}}{\mu_\xi+2\gamma\beta-\gamma\alpha},\ \frac{2}{\sqrt{\beta\gamma}}\Bigr\}.$$

Proof It follows from (vi) of Proposition 2.2 that S_ξ is a nonempty compact subset of K for each ξ ∈ B_e^*. If x ∈ S, the assertion obviously holds, so let x ∈ K∖S. Then ϕ_γ(x) > 0, and since B_e^* is compact there exists ξ₀ ∈ B_e^* such that f_γ(x, ξ₀) = ϕ_γ(x). It follows from Theorem 2.1 that

$$d(x,S_{\xi_0})\le\tau_{\xi_0}\sqrt{f_\gamma(x,\xi_0)},$$

where τ_ξ := max{2√(βγ)/(μ_ξ+2γβ−γα), 2/√(βγ)}. Thus, by Lemma 3.1, one has

$$d(x,S)\le d(x,S_{\xi_0})\le\tau_{\xi_0}\sqrt{f_\gamma(x,\xi_0)}\le\max_{\xi\in B_e^*}\tau_\xi\,\sqrt{\phi_\gamma(x)}.$$

Hence, ϕ_γ provides a global error bound with modulus max_{ξ∈B_e^*} τ_ξ. □

Remark 3.1 If F_i is strongly monotone with modulus λ_i for i = 1,2,…,m and C = R^m_+, it follows from [[21], Proposition 2.3] that, with λ := min_{1≤i≤m} λ_i,

$$\langle d,\,(\xi\circ\nabla F)(x)d\rangle\ge\lambda\|d\|^2,\quad\forall d\in\mathbb{R}^n,\ \xi\in B_e^*.$$

Moreover, the strong monotonicity of ξ∘F implies its coerciveness (cf. [[22], Remark 2.1]) and that the corresponding scalarized (VI) has a unique solution (cf. [[11], Theorem 2.3.3]). Thus, by Theorem 3.1, ϕ_γ provides a global error bound. Hence, our results extend [[4], Theorem 2.9].

References

  1. Giannessi F: Theorems of the alternative, quadratic programs and complementarity problems. In Variational Inequalities and Complementarity Problems. Edited by: Cottle RW, Giannessi F, Lions JL. Wiley, New York; 1980:151–186.
  2. Chen GY, Goh CJ, Yang XQ: Vector network equilibrium problems and nonlinear scalarization methods. Math. Methods Oper. Res. 1999, 49: 239–253.
  3. Li SJ, Teo KL, Yang XQ: A remark on a standard and linear vector network equilibrium problem with capacity constraints. Eur. J. Oper. Res. 2008, 184: 13–23. doi:10.1016/j.ejor.2005.11.059
  4. Charitha C, Dutta J: Regularized gap functions and error bounds for vector variational inequalities. Pac. J. Optim. 2010, 6: 497–510.
  5. Li J, Mastroeni G: Vector variational inequalities involving set-valued mappings via scalarization with applications to error bounds for gap functions. J. Optim. Theory Appl. 2010, 145: 355–372. doi:10.1007/s10957-009-9625-1
  6. Sun XK, Chai Y: Gap functions and error bounds for generalized vector variational inequalities. Optim. Lett. 2014, 8: 1663–1673. doi:10.1007/s11590-013-0685-7
  7. Xu YD, Li SJ: Gap functions and error bounds for weak vector variational inequalities. Optimization 2014, 63: 1339–1352. doi:10.1080/02331934.2012.721115
  8. Konnov IV: A scalarization approach for vector variational inequalities with applications. J. Glob. Optim. 2005, 32: 517–527. doi:10.1007/s10898-003-2688-x
  9. Auchmuty G: Variational principles for variational inequalities. Numer. Funct. Anal. Optim. 1989, 10: 863–874. doi:10.1080/01630568908816335
  10. Auslender A: Optimization, Méthodes Numériques. Masson, Paris; 1976.
  11. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin; 2003.
  12. Fukushima M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 1992, 53: 99–110. doi:10.1007/BF01585696
  13. Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms, and applications. Math. Program. 1990, 48: 161–220. doi:10.1007/BF01582255
  14. Huang LR, Ng KF: Equivalent optimization formulations and error bounds for variational inequality problems. J. Optim. Theory Appl. 2005, 125: 299–314. doi:10.1007/s10957-004-1839-7
  15. Ng KF, Tan LL: Error bounds of regularized gap functions for nonsmooth variational inequality problems. Math. Program. 2007, 110: 405–429. doi:10.1007/s10107-006-0007-2
  16. Wu JH, Florian M, Marcotte P: A general descent framework for the monotone variational inequality problem. Math. Program. 1993, 61: 281–300. doi:10.1007/BF01582152
  17. Yamashita N, Taji K, Fukushima M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 1997, 92: 439–456. doi:10.1023/A:1022660704427
  18. Li G, Tang C, Wei Z: Error bound results for generalized D-gap functions of nonsmooth variational inequality problems. J. Comput. Appl. Math. 2010, 233: 2795–2806. doi:10.1016/j.cam.2009.11.025
  19. Ng KF, Zheng XY: Error bounds for lower semicontinuous functions in normed spaces. SIAM J. Optim. 2001, 12: 1–17. doi:10.1137/S1052623499358884
  20. Lee GM, Kim DS, Lee BS, Yen ND: Vector variational inequality as a tool for studying vector optimization problems. Nonlinear Anal. 1998, 34: 745–765. doi:10.1016/S0362-546X(97)00578-6
  21. Jiang HY, Qi LQ: Local uniqueness and convergence of iterative methods for nonsmooth variational inequalities. J. Math. Anal. Appl. 1995, 196: 314–331. doi:10.1006/jmaa.1995.1412
  22. Li G, Ng KF: Error bounds of generalized D-gap functions for nonsmooth and nonmonotone variational inequality problems. SIAM J. Optim. 2009, 20: 667–690. doi:10.1137/070696283


Acknowledgements

This research was supported by the Natural Science Foundation of Shaanxi Province, China (Grant number: 2014JQ1023).

Author information


Corresponding author

Correspondence to Minghua Li.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Li, M. Error bounds of regularized gap functions for weak vector variational inequality problems. J Inequal Appl 2014, 331 (2014). https://doi.org/10.1186/1029-242X-2014-331
