We propose new dual algorithms and iterative methods for solving monotone generalized variational inequalities. Instead of working on the primal space, these methods perform a dual step on the dual space by using the dual gap function. Under suitable conditions, we prove the convergence of the proposed algorithms and estimate their complexity to reach an $\varepsilon$-solution. Some preliminary computational results are reported.
Let $C$ be a convex subset of the real Euclidean space $\mathbb{R}^n$, $F$ be a continuous mapping from $C$ into $\mathbb{R}^n$, and $\varphi$ be a lower semicontinuous convex function from $C$ into $\mathbb{R} \cup \{+\infty\}$. We say that a point $x^* \in C$ is a solution of the following generalized variational inequality if it satisfies
$$\langle F(x^*), x - x^* \rangle + \varphi(x) - \varphi(x^*) \ge 0 \quad \text{for all } x \in C, \tag{GVI}$$
where $\langle \cdot, \cdot \rangle$ denotes the standard dot product in $\mathbb{R}^n$.
Associated with problem (GVI), its dual form is to find $x^* \in C$ such that
$$\langle F(x), x - x^* \rangle + \varphi(x) - \varphi(x^*) \ge 0 \quad \text{for all } x \in C. \tag{DGVI}$$
In recent years, generalized variational inequalities have become an attractive field for many researchers and have found important applications in electricity markets, transportation, economics, and nonlinear analysis (see [1–9]).
It is well known that interior quadratic and dual techniques are powerful tools for analyzing and solving optimization problems (see [10–16]). Recently these techniques have been used to develop proximal iterative algorithms for variational inequalities (see [17–22]).
In addition, Nesterov introduced a dual extrapolation method for solving variational inequalities. Instead of working on the primal space, this method performs a dual step on the dual space.
In this paper we extend these results to the generalized variational inequality problem (GVI) in the dual space. In the first approach, a gap function $g$ is constructed such that $g(x) \ge 0$ for all $x \in C$, and $g(x^*) = 0$ if and only if $x^*$ solves (GVI). Namely, we first develop a convergent algorithm for (GVI) with $F$ being a monotone function satisfying a certain Lipschitz-type condition on $C$. Next, in order to avoid the Lipschitz condition, we show how to find a regularization parameter at every iteration such that the iterative sequence converges to a solution of (GVI).
The remainder of the paper is organized as follows. Section 2 presents some preliminary results. In Section 3, we propose two convergent algorithms for monotone generalized variational inequality problems, with and without a Lipschitzian condition. Section 4 reports an illustrative example and numerical results.
First, let us recall the well-known concepts of monotonicity that will be used in the sequel.
Let $C$ be a convex set in $\mathbb{R}^n$ and let $F: C \to \mathbb{R}^n$. The mapping $F$ is said to be
(i) pseudomonotone on $C$ if, for each $x, y \in C$, $\langle F(y), x - y \rangle \ge 0$ implies $\langle F(x), x - y \rangle \ge 0$;
(ii) monotone on $C$ if, for each $x, y \in C$, $\langle F(x) - F(y), x - y \rangle \ge 0$;
(iii) strongly monotone on $C$ with constant $\beta > 0$ if, for each $x, y \in C$, $\langle F(x) - F(y), x - y \rangle \ge \beta \|x - y\|^2$;
(iv) Lipschitz with constant $L > 0$ on $C$ (shortly $L$-Lipschitz) if, for each $x, y \in C$, $\|F(x) - F(y)\| \le L \|x - y\|$.
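As a quick illustration of definition (ii), an affine map $F(x) = Mx + q$ with $M$ symmetric positive semidefinite is monotone, since $\langle F(x) - F(y), x - y \rangle = (x-y)^{\top} M (x-y) \ge 0$. A small numerical check (the matrix $M$ and the vector $q$ below are illustrative choices, not data from the paper):

```python
import random

def F(x, M, q):
    # affine map F(x) = M x + q
    n = len(x)
    return [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]

def inner(u, v):
    # standard dot product
    return sum(a * b for a, b in zip(u, v))

# illustrative data: M is symmetric positive semidefinite, q arbitrary
M = [[2.0, 1.0], [1.0, 1.0]]
q = [0.5, -1.0]

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    d = [a - b for a, b in zip(x, y)]
    # monotonicity: <F(x) - F(y), x - y> = d^T M d >= 0
    Fd = [a - b for a, b in zip(F(x, M, q), F(y, M, q))]
    assert inner(Fd, d) >= -1e-9
```

Note that the constant term $q$ cancels, so monotonicity of an affine map depends only on the positive semidefiniteness of $M$.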
Note that when $\varphi$ is differentiable on some open set containing $C$, then, since $\varphi$ is lower semicontinuous proper convex, the generalized variational inequality (GVI) is equivalent to the following variational inequality (see [25, 26]):
Find $x^* \in C$ such that
$$\langle F(x^*) + \nabla \varphi(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in C.$$
Throughout this paper, we assume that:
(A1) the interior set of $C$, $\operatorname{int} C$, is nonempty;
(A2) the set is bounded;
(A3) $F$ is upper semicontinuous on $C$, and $\varphi$ is proper, closed convex and subdifferentiable on $C$;
(A4) $F$ is monotone on $C$.
In the special case $\varphi \equiv 0$, problem (GVI) reduces to the following.
Find $x^* \in C$ such that
$$\langle F(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in C. \tag{VI}$$
It is well known that problem (VI) can be formulated as finding the zero points of the operator $T$, where
$$T(x) = F(x) + N_C(x),$$
and $N_C(x)$ denotes the normal cone of $C$ at $x \in C$.
The dual gap function of problem (GVI) is defined as follows:
$$g(x) := \sup\{\langle F(y), x - y \rangle + \varphi(x) - \varphi(y) : y \in C\}. \tag{2.7}$$
The following lemma gives two basic properties of the dual gap function (2.7); its proof can be found, for instance, in the literature.
The function $g$ is a gap function of (GVI), that is,
(i) $g(x) \ge 0$ for all $x \in C$;
(ii) $x^* \in C$ and $g(x^*) = 0$ if and only if $x^*$ is a solution to (DGVI). Moreover, if $F$ is pseudomonotone, then $x^*$ is a solution to (DGVI) if and only if it is a solution to (GVI).
The problem defining $g$ may not be solvable and the dual gap function may not be well-defined. Instead of the gap function $g$, we consider a truncated dual gap function. Suppose that $\bar{x} \in C$ is fixed and $R > 0$. The truncated dual gap function is defined as follows:
$$g_R(x) := \sup\{\langle F(y), x - y \rangle + \varphi(x) - \varphi(y) : y \in C \cap \bar{B}(\bar{x}, R)\}.$$
For the following consideration, we define $\bar{B}(\bar{x}, R)$ as the closed ball in $\mathbb{R}^n$ centered at $\bar{x}$ with radius $R$, and set $C_R := C \cap \bar{B}(\bar{x}, R)$. The following lemma gives some properties of $g_R$.
Under assumptions (A1)–(A4), the following properties hold.
(i) The function $g_R$ is well-defined and convex on $C$.
(ii) If a point $x^* \in C_R$ is a solution to (DGVI), then $g_R(x^*) = 0$.
(iii) If there exists $x^* \in C$ such that $g_R(x^*) = 0$ and $\|x^* - \bar{x}\| < R$, and $F$ is pseudomonotone, then $x^*$ is a solution to (DGVI) (and also to (GVI)).
(i) Note that the function $y \mapsto \langle F(y), x - y \rangle + \varphi(x) - \varphi(y)$ is upper semicontinuous on $C_R$ and $C_R$ is bounded. Therefore, the supremum exists, which means that $g_R$ is well-defined. Moreover, since $\varphi$ is convex on $C$ and $g_R$ is the supremum of a parametric family of convex functions of $x$ (which depends on the parameter $y$), $g_R$ is convex on $C$.
(ii) By definition, it is easy to see that $g_R(x) \ge 0$ for all $x \in C_R$. Let $x^*$ be a solution of (DGVI) with $x^* \in C_R$. Then we have
$$\langle F(y), y - x^* \rangle + \varphi(y) - \varphi(x^*) \ge 0 \quad \text{for all } y \in C.$$
In particular, we have
$$\langle F(y), x^* - y \rangle + \varphi(x^*) - \varphi(y) \le 0$$
for all $y \in C_R$. Thus
$$g_R(x^*) \le 0;$$
this implies $g_R(x^*) = 0$.
(iii) For $x^* \in C_R$, $g_R(x^*) = 0$ means that $x^*$ is a solution to (DGVI) restricted to $C_R$. Since $F$ is pseudomonotone, $x^*$ is also a solution to (GVI) restricted to $C_R$. Since $\|x^* - \bar{x}\| < R$, for any $x \in C$, we can choose $\lambda > 0$ sufficiently small such that $x^* + \lambda(x - x^*) \in C_R$ and
$$0 \le \langle F(x^*), \lambda(x - x^*) \rangle + \varphi(x^* + \lambda(x - x^*)) - \varphi(x^*) \le \lambda \big( \langle F(x^*), x - x^* \rangle + \varphi(x) - \varphi(x^*) \big), \tag{2.13}$$
where (2.13) follows from the convexity of $\varphi$. Since $\lambda > 0$, dividing this inequality by $\lambda$, we obtain that $x^*$ is a solution to (GVI) on $C$. Since $F$ is pseudomonotone, $x^*$ is also a solution to (DGVI).
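To make the truncated dual gap function concrete, the sketch below evaluates it by brute force over a grid on a one-dimensional example. The data ($C = [0, 2]$, $F(y) = y - 1$, $\varphi \equiv 0$, and a ball large enough to cover $C$) are illustrative assumptions, and the definition used is the standard one, the maximum of $\langle F(y), x - y \rangle + \varphi(x) - \varphi(y)$ over the truncated feasible set:

```python
def truncated_dual_gap(x, F, phi, y_grid):
    # g_R(x) = max over grid points y of C ∩ B(x_bar, R) of
    #          <F(y), x - y> + phi(x) - phi(y)
    return max(F(y) * (x - y) + phi(x) - phi(y) for y in y_grid)

# illustrative 1-D data: C = [0, 2], F(y) = y - 1 (monotone), phi = 0,
# and a ball large enough that C ∩ B = C; the unique solution is x* = 1
F = lambda y: y - 1.0
phi = lambda y: 0.0
grid = [i / 1000.0 for i in range(2001)]

gap_at_solution = truncated_dual_gap(1.0, F, phi, grid)   # ~0
gap_elsewhere = truncated_dual_gap(0.0, F, phi, grid)     # ~0.25
```

The gap vanishes at the solution and is positive elsewhere, consistent with the properties stated in the lemma above.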
Let $C$ be a nonempty, closed convex set and $u \in \mathbb{R}^n$. Let us denote by $d_C(u)$ the Euclidean distance from $u$ to $C$ and by $P_C(u)$ the point attaining this distance, that is,
$$d_C(u) = \min_{x \in C} \|x - u\|, \qquad P_C(u) = \arg\min_{x \in C} \|x - u\|. \tag{2.14}$$
The following lemma gives a tool for the next discussion.
For any $x \in C$ and for any $u, v \in \mathbb{R}^n$, the function $d_C$ and the mapping $P_C$ defined by (2.14) satisfy
Inequality (2.15) is immediate from the characterization of the Euclidean projection. Now, we prove inequality (2.16). For any $x \in C$, applying (2.15) we have
Using the definition of $d_C$ and taking the minimum with respect to $x$ in (2.18), we have
which proves (2.16).
From the definition of , we have
Since , applying (2.15) with instead of and for (2.20), we obtain the last inequality in Lemma 2.4.
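The variational characterization of the Euclidean projection, $\langle u - P_C(u), x - P_C(u) \rangle \le 0$ for all $x \in C$, which underlies the argument above, can be checked numerically. For a box the projection is a componentwise clamp; the box and the sample points below are illustrative choices:

```python
import random

def project_box(u, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n is a componentwise clamp
    return [min(max(ui, lo), hi) for ui in u]

random.seed(1)
lo, hi = -1.0, 1.0
for _ in range(1000):
    u = [random.uniform(-3, 3) for _ in range(3)]
    x = [random.uniform(lo, hi) for _ in range(3)]  # an arbitrary point of C
    p = project_box(u, lo, hi)
    # characterization of the projection: <u - p, x - p> <= 0 for all x in C
    assert sum((ui - pi) * (xi - pi) for ui, pi, xi in zip(u, p, x)) <= 1e-12
```

The same inequality holds for any nonempty closed convex set; the box is used here only because its projection has a closed form.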
For a given integer $N$, we consider a finite sequence of arbitrary points , a finite sequence of arbitrary points , and a finite positive sequence . Let us define
Then an upper bound of the dual gap function is estimated in the following lemma.
Suppose that Assumptions (A1)–(A4) are satisfied and
Then, for any ,
(i), for all , .
(i) We define as the Lagrange function of the maximization problem . Using duality theory in convex optimization, we have
(ii) From the monotonicity of $F$ and (2.22), we have
Combining (2.24) with Lemma 2.5(i), we obtain the conclusion.
3. Dual Algorithms
Now, we are going to build the dual interior proximal step for solving (GVI). The main idea is to construct a sequence such that the corresponding sequence of dual gap values tends to $0$ as the iteration counter tends to infinity. By virtue of Lemma 2.5, we can check whether the current iterate is an $\varepsilon$-solution to (GVI) or not.
The dual interior proximal step at each iteration is generated by using the following scheme:
where and are given parameters, is the solution to (2.22).
The following lemma shows an important property of the sequence .
The sequence generated by scheme (3.1) satisfies
where , and . As a consequence, we have
We replace by and by into (2.16) to obtain
Using the inequality (3.4) with , , and noting that , we get
This implies that
From the subdifferentiability of the convex function in scheme (3.1) and the first-order necessary optimality condition, we have
for all . This inequality implies that
Applying inequality (3.4) with , and , and using (3.8), we obtain
Combining this inequality with (3.6), we get
On the other hand, if we denote , then it follows that
Combining (3.10) and (3.11), we get
which proves (3.2).
On the other hand, from (3.9) we have
Then the inequality (3.3) is deduced from this inequality and (3.6).
The dual algorithm is an iterative method which generates a sequence based on scheme (3.1). The algorithm is presented in detail as follows:
Algorithm 3.2.
Initialization. Given a tolerance $\varepsilon > 0$, fix an arbitrary point and choose , . Take and .
For each iteration, execute the four steps below.
Step 1. Compute a projection point by taking
Step 2. Solve the strongly convex programming problem
to get the unique solution .
Step 3. Find such that
Step 4. If , where $\varepsilon$ is the given tolerance, then stop.
Otherwise, increase by 1 and go back to Step 1.
Compute the final output as:
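As a rough illustration of the dual approach, the following sketch implements a simplified Euclidean variant of Nesterov-style dual extrapolation for the special case $\varphi \equiv 0$ (problem (VI)). The prox-center, the parameter $\beta$ (which should dominate the Lipschitz constant of $F$), the unit dual stepsize, and the test problem are all illustrative assumptions, not the exact scheme (3.1):

```python
def dual_extrapolation(F, project, x_bar, beta, iters):
    """Euclidean dual-extrapolation sketch for VI(F, C) with phi = 0.

    project: Euclidean projector onto C; beta: should dominate the
    Lipschitz constant of F. Returns the averaged (ergodic) iterate.
    """
    n = len(x_bar)
    s = [0.0] * n                      # accumulated dual information
    avg = [0.0] * n
    for _ in range(iters):
        # primal reading of the dual state: z = P_C(x_bar + s / beta)
        z = project([x_bar[i] + s[i] / beta for i in range(n)])
        Fz = F(z)
        # extragradient-type correction step
        x = project([z[i] - Fz[i] / beta for i in range(n)])
        Fx = F(x)
        # dual update and ergodic averaging
        s = [s[i] - Fx[i] for i in range(n)]
        avg = [avg[i] + x[i] for i in range(n)]
    return [a / iters for a in avg]

# rotation field F(x1, x2) = (x2, -x1): monotone, solution (0, 0)
# on the box C = [-1, 1]^2
F = lambda v: [v[1], -v[0]]
proj = lambda v: [min(max(c, -1.0), 1.0) for c in v]
sol = dual_extrapolation(F, proj, [0.5, 0.5], beta=1.0, iters=400)
```

On this merely monotone (skew) field the pointwise iterates circle the solution, while their ergodic average converges to it; this averaging behavior is typical of dual extrapolation schemes.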
Now, we prove the convergence of Algorithm 3.2 and estimate its complexity.
Suppose that assumptions (A1)–(A3) are satisfied and $F$ is $L$-Lipschitz continuous on $C$. Then, one has
where is the final output defined by the sequence in Algorithm 3.2. As a consequence, the sequence converges to $0$, and the number of iterations needed to reach an $\varepsilon$-solution is , where denotes the largest integer such that .
From , where and , we get
Substituting (3.20) into (3.2), we obtain
Using this inequality with for all and , we obtain
If we choose for all in (2.21), then we have
Hence, from Lemma 2.5(ii), we have
From inequality (3.22) and , it follows that
Note that . It follows from the inequalities (3.24) and (3.25) that
which implies that . From the termination criterion at Step 4 and inequality (2.26), we obtain that the number of iterations needed to reach an $\varepsilon$-solution is .
If there is no guarantee for the Lipschitz condition but the sequences and are uniformly bounded, we suppose that
Then the algorithm can be modified to ensure that it still converges. This variant of Algorithm 3.2 is presented as Algorithm 3.4 below.
Algorithm 3.4.
Initialization. Fix an arbitrary point and set . Take and . Choose for all .
For each iteration, execute the following steps.
Step 1. Compute the projection point by taking
Step 2. Solve the strongly convex programming problem
to get the unique solution .
Step 3. Find such that
Step 4. If , where $\varepsilon$ is the given tolerance, then stop.
Otherwise, increase by 1, update and go back to Step 1.
Compute the final output as
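Algorithm 3.4 dispenses with a known Lipschitz constant by updating its parameter at every iteration. A related, widely used device is stepsize backtracking; the sketch below applies it to the classical primal extragradient method, purely to illustrate how an unknown Lipschitz constant can be handled. The test function, the backtracking rule, and all constants are illustrative assumptions, not the authors' scheme:

```python
def extragradient_backtracking(F, project, x0, tau0=1.0, theta=0.9,
                               iters=300):
    """Extragradient method with stepsize backtracking (1-D sketch).

    No Lipschitz constant is needed: the stepsize tau is halved until
    tau * |F(x) - F(y)| <= theta * |x - y| holds, which guarantees the
    usual Fejer-type decrease for monotone continuous F.
    """
    x = x0
    for _ in range(iters):
        tau = tau0
        y = project(x - tau * F(x))
        while tau * abs(F(x) - F(y)) > theta * abs(x - y):
            tau *= 0.5
            y = project(x - tau * F(x))
        x = project(x - tau * F(y))
    return x

# F(x) = x^3 - 1 is monotone on C = [0, 2], but a global Lipschitz
# constant need not be known in advance; the solution is x* = 1
F = lambda x: x ** 3 - 1.0
proj = lambda x: min(max(x, 0.0), 2.0)
x_star = extragradient_backtracking(F, proj, x0=2.0)
```

The inner loop always terminates: once $\tau$ falls below $\theta / L$ for the local Lipschitz constant $L$, the test is satisfied (and it holds trivially when $x = y$).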
The next theorem shows the convergence of Algorithm 3.4.
Let assumptions (A1)–(A3) be satisfied and let the sequence be generated by Algorithm 3.4. Suppose that the sequences and are uniformly bounded as in (3.27). Then, we have
As a consequence, the sequence converges to 0, and the number of iterations needed to reach an $\varepsilon$-solution is .
If we choose for all in (2.21), then we have . Since , it follows from Step 3 of Algorithm 3.4 that
From (3.34) and Lemma 2.5(ii), for all we have
We define . Then, we have
We consider, for all
Then the derivative of is given by
Thus is nonincreasing. Combining this with (3.36) and , we have
From Lemma 3.1, and , we have
Combining (3.39) and this inequality, we have
By induction on , it follows from (3.41) and that
From (3.35) and (3.42), we obtain
which implies that . The remainder of the theorem follows directly from (3.33).
4. Illustrative Example and Numerical Results
In this section, we illustrate the proposed algorithms on a class of generalized variational inequalities (GVI), where $C$ is a polyhedral convex set given by
where , . The cost function is defined by
where , is a symmetric positive semidefinite matrix and . The function is defined by
Then $\varphi$ is subdifferentiable, but it is not differentiable on $C$.
For this class of problems (GVI), we have the following results.
Let . Then
(i) if is -strongly monotone on , then is monotone on whenever ;
(ii) if is -strongly monotone on , then is -strongly monotone on whenever ;
(iii) if is -Lipschitz on , then is -Lipschitz on .
Since is -strongly monotone on , that is
Then (i) and (ii) easily follow.
Using the Lipschitz condition, it is not difficult to obtain (iii).
To illustrate our algorithms, we consider the following data.
with , , . From Lemma 4.1, is monotone on . The subproblems in Algorithm 3.2 can be solved efficiently, for example, by using the MATLAB Optimization Toolbox R2008a. We obtain the approximate solution
Now we use Algorithm 3.4 on the same class of variational inequalities, except that
where the components of are defined by , with randomly chosen in , and the components of are randomly chosen in . The function is taken from Bnouhachem . Under these assumptions, it can be proved that is continuous and monotone on .
With and the tolerance , we obtained the computational results reported in Table 1.
Table 1. Numerical results: Algorithm 3.4 with .
The authors would like to thank the referees for their useful comments, remarks, and suggestions. This work was completed while the first author was staying at Kyungnam University under the NRF Postdoctoral Fellowship for Foreign Researchers. The second author was supported by the Kyungnam University Research Fund, 2010.
Anh, PN, Muu, LD, Nguyen, VH, Strodiot, JJ: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. Journal of Optimization Theory and Applications. 124(2), 285–306 (2005)
Bello Cruz, JY, Iusem, AN: Convergence of direct methods for paramonotone variational inequalities. Computational Optimization and Applications. 46(2), 247–263 (2010)
Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Mathematical Programming. 53(1), 99–110 (1992)
Mashreghi, J, Nasri, M: Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory. Nonlinear Analysis: Theory, Methods & Applications. 72(3-4), 2086–2099 (2010)
Noor, MA: Iterative schemes for quasimonotone mixed variational inequalities. Optimization. 50(1-2), 29–44 (2001)
Zhu, DL, Marcotte, P: Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities. SIAM Journal on Optimization. 6(3), 714–726 (1996)
Fang, SC, Peterson, EL: Generalized variational inequalities. Journal of Optimization Theory and Applications. 38(3), 363–383 (1982)
Iusem, AN, Nasri, M: Inexact proximal point methods for equilibrium problems in Banach spaces. Numerical Functional Analysis and Optimization. 28(11-12), 1279–1308 (2007)
Kim, JK, Kim, KS: A new system of generalized nonlinear mixed quasivariational inequalities and iterative algorithms in Hilbert spaces. Journal of the Korean Mathematical Society. 44(4), 823–834 (2007)
Waltz, RA, Morales, JL, Nocedal, J, Orban, D: An interior algorithm for nonlinear optimization that combines line search and trust region steps. Mathematical Programming. 107(3), 391–408 (2006)
Auslender, A, Teboulle, M: Interior projection-like methods for monotone variational inequalities. Mathematical Programming. 104(1), 39–68 (2005)
Bnouhachem, A: An LQP method for pseudomonotone variational inequalities. Journal of Global Optimization. 36(3), 351–363 (2006)
Iusem, AN, Nasri, M: Augmented Lagrangian methods for variational inequality problems. RAIRO Operations Research. 44(1), 5–25 (2010)
Kim, JK, Buong, N: Regularization inertial proximal point algorithm for monotone hemicontinuous mapping and inverse strongly monotone mappings in Hilbert spaces. Journal of Inequalities and Applications. 2010, (2010)
Nesterov, Y: Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming. 109(2-3), 319–344 (2007)
Mangasarian, OL, Solodov, MV: A linearly convergent derivative-free descent method for strongly monotone complementarity problems. Computational Optimization and Applications. 14(1), 5–16 (1999)
Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization. 14(5), 877–898 (1976)