A shrinkage-thresholding projection method for sparsest solutions of LCPs
Journal of Inequalities and Applications volume 2014, Article number: 51 (2014)
Abstract
In this paper, we study the sparsest solutions of linear complementarity problems (LCPs), a problem with many applications, such as bimatrix games and portfolio selection. Mathematically, the underlying model is NP-hard in general. By transforming the complementarity constraints into a fixed point equation of projection type, we propose an $\ell_1$ regularization projection minimization model for relaxation. By developing a thresholding representation of solutions for a key subproblem of this regularization model, we design a shrinkage-thresholding projection (STP) algorithm to solve this model and analyze the convergence of the STP algorithm. Numerical results demonstrate that the STP method can efficiently solve this regularized model and obtain a sparsest solution of the LCP with high quality.
MSC:90C33, 90C26, 90C90.
1 Introduction
Given a matrix $M \in \mathbb{R}^{n \times n}$ and a vector $q \in \mathbb{R}^{n}$, the linear complementarity problem, denoted by $\mathrm{LCP}(q, M)$, is to find a vector $x \in \mathbb{R}^{n}$ such that
$$x \ge 0, \qquad Mx + q \ge 0, \qquad x^{T}(Mx + q) = 0.$$
The set of solutions to this problem is denoted by $\mathrm{SOL}(q, M)$. Throughout the paper, we always suppose $\mathrm{SOL}(q, M) \neq \emptyset$.
Many real-world phenomena in engineering, physics, mechanics, and economics are governed by linear complementarity problems. LCPs have been studied extensively; see the books [1–3] and the references therein. Numerical methods for solving LCPs, such as the Newton method, the interior point method, and the nonsmooth equation method, have been widely investigated in the literature. However, there seems to be no study of the sparsest solutions of LCPs. In fact, in real applications it is often necessary to find the sparsest solution of an LCP, for example in bimatrix games [1] and portfolio selection [4]. For more details, see [5].
In this paper, we try to find the sparsest solution of the LCP, i.e., the solution with the smallest number of nonzero entries. To be specific, we seek a vector $x$ by solving the $\ell_{0}$-norm minimization problem
$$\min_{x}\ \|x\|_{0} \quad \text{s.t.}\quad x \in \mathrm{SOL}(q, M), \tag{1}$$
where $\|x\|_{0}$ stands for the number of nonzero components of $x$. A solution of (1) is called a sparsest solution of the LCP.
The above minimization problem (1) is in fact a sparse optimization problem with equilibrium constraints. It is not easy to obtain solutions, owing to the equilibrium constraints as well as the discontinuous objective function.
To overcome this difficulty, we make use of C-functions to construct a penalty for violating the equilibrium constraints. Recall that a function $\phi : \mathbb{R}^{2} \to \mathbb{R}$ is called a C-function, where C stands for complementarity, if for any pair $(a, b) \in \mathbb{R}^{2}$,
$$\phi(a, b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0.$$
Define $F : \mathbb{R}^{n} \to \mathbb{R}^{n}$ by $F(x) = Mx + q$. The C-function $\phi_{\min}(a, b) = \min\{a, b\}$, the extension of the ‘min’ function, is applied componentwise:
$$\Phi(x) = \min\{x, F(x)\} = x - P_{+}\bigl(x - F(x)\bigr), \tag{2}$$
where $F(x) = Mx + q$, with the vector $q$ and the matrix $M$ as above, and $P_{+}$ is the ‘Euclidean projector’ onto the nonnegative orthant $\mathbb{R}^{n}_{+}$. It is clear that
$$\Phi(x) = 0 \iff x \ge 0,\ F(x) \ge 0,\ x^{T}F(x) = 0. \tag{3}$$
Combining (2) and (3), we obtain the equivalence between $x \in \mathrm{SOL}(q, M)$ and the fixed point equation $x = P_{+}(x - F(x))$, that is,
$$x \in \mathrm{SOL}(q, M) \iff x = P_{+}\bigl(x - (Mx + q)\bigr). \tag{4}$$
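The fixed point characterization gives a computable optimality check: a vector solves the LCP exactly when its projection residual vanishes. A minimal NumPy sketch (the function name `lcp_residual` is ours):

```python
import numpy as np

def lcp_residual(M, q, x):
    """Residual of the projection fixed-point equation x = P_+(x - (Mx + q)).

    The residual is zero exactly when x solves LCP(q, M), i.e. when
    x >= 0, Mx + q >= 0, and x^T (Mx + q) = 0.
    """
    Px = np.maximum(x - (M @ x + q), 0.0)  # Euclidean projection onto R^n_+
    return np.linalg.norm(x - Px)

# Example: M = I, q = -e, so x = e solves LCP(q, M).
n = 4
M = np.eye(n)
q = -np.ones(n)
print(lcp_residual(M, q, np.ones(n)))  # -> 0.0
```

The residual is also the natural merit function used to report accuracy in the numerical experiments of Section 4.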
In view of the objective function, problem (1) is an $\ell_{0}$-norm minimization problem, which is combinatorial and NP-hard in general. The complexity of this model generally grows rapidly with the number of variables. To overcome this difficulty, many researchers have suggested relaxing the $\ell_{0}$ norm to the $\ell_{1}$ norm; see [6–11]. Hence, in this paper we apply the $\ell_{1}$ norm to find the sparsest solution of LCPs, and we obtain the following minimization problem to approximate (1):
$$\min_{x}\ \|x\|_{1} \quad \text{s.t.}\quad x = P_{+}\bigl(x - (Mx + q)\bigr), \tag{5}$$
where $\|x\|_{1} = \sum_{i=1}^{n} |x_{i}|$.
By further employing the regularization term for seeking sparsity, we introduce a new variable, $y$, in order to simplify the objective function and carry out an alternating iteration. We apply the following regularization projection minimization problem to approximate (1):
where $\lambda > 0$ is a given regularization parameter and $\|\cdot\|$ refers to the Euclidean norm. We call (6) the regularization projection minimization problem.
This paper is organized as follows. In Section 2, we approximate (1) by the regularization projection minimization problem (6) and show theoretically that (6) is a good approximation. In Section 3, we develop a shrinkage-thresholding representation theory for the subproblem of (6) and propose a shrinkage-thresholding projection (STP) algorithm for (6). The convergence of the STP algorithm is proved. Numerical results are presented in Section 4 to show that (6) is promising for providing a sparsest solution of the LCP.
2 The regularized approximation
In this section, we study the relation between the regularization projection model (6) and the original model (5), which indicates that the regularized model is a good approximation.
Theorem 2.1 For any fixed , the solution set of (6) is nonempty and bounded. Let be a solution sequence of (6), and be any positive sequence converging to 0. If , then has at least one accumulation point, and any accumulation point of is a solution of (5).
Proof For any fixed , it is easy to show the coercivity of the objective function in (6), which refers to the property that
Also notice for all and , . This, together with (7), implies that the level set
is nonempty and compact, where and are given points. The solution set of (6) is nonempty and bounded since the objective function is continuous on ℒ.
Now we show the second part of this theorem. Let and . From (4), we have . Note that is a solution of (6) with , and . We have
Employing (8), we easily find that for any ,
Hence the whole sequence is bounded, and it has at least one accumulation point. One easily notices that the sequence is the same case as , which refers to the inequality . Let , be arbitrary accumulation points of and , respectively, and . Then there exists a subsequence of , say , such that
We obtain by letting tend to ∞ in . Passing to the limit on both sides of the inequality
we get , which implies , that is, . From with tending to ∞, we get . Then by the arbitrariness of , we know is a solution of problem (5). This completes the proof. □
3 Solution representation, algorithm, and convergence
Fixing $y$, we consider the unconstrained minimization subproblem
$$\min_{x \in \mathbb{R}^{n}}\ \lambda \|x\|_{1} + \frac{1}{2}\|x - y\|^{2}. \tag{10}$$
For fixed λ, the minimizer of the convex function in (10) must satisfy the corresponding optimality condition
$$x = \operatorname{shrink}(y, \lambda), \tag{11}$$
where the shrinkage-thresholding operator $\operatorname{shrink}(\cdot, \lambda)$ is defined componentwise by
$$\operatorname{shrink}(y, \lambda)_{i} = \operatorname{sign}(y_{i}) \max\{|y_{i}| - \lambda,\ 0\}, \quad i = 1, \dots, n.$$
This demonstrates that a solution of the subproblem (10) can be expressed analytically by (11).
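The shrinkage-thresholding operator is the standard componentwise soft-thresholding map; a minimal NumPy sketch (the function name `shrink` is ours, and the subproblem form $\lambda\|x\|_1 + \tfrac{1}{2}\|x - y\|^2$ is our reading of the text):

```python
import numpy as np

def shrink(y, lam):
    """Soft-thresholding: componentwise sign(y_i) * max(|y_i| - lam, 0).

    This is the closed-form minimizer of the convex subproblem
        min_x  lam * ||x||_1 + 0.5 * ||x - y||^2.
    """
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

y = np.array([1.5, -0.3, 0.0, 2.0])
print(shrink(y, 0.5))  # components below the threshold 0.5 are set to zero
```

Note how the operator both sparsifies (small entries are zeroed) and biases the surviving entries toward zero by λ, which is why a continuation strategy on λ matters in practice (see the Remark in Section 4).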
By the solution representation, we construct the following shrinkage-thresholding projection (STP) algorithm to solve the regularization projection minimization problem (6).
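As an illustration only, the alternating structure of such a method can be sketched as follows; this is a minimal skeleton assuming the iteration alternates a projection step toward the fixed point equation (4) with a soft-thresholding step, and it does not reproduce the step-size rule (13) of the actual STP algorithm:

```python
import numpy as np

def stp_sketch(M, q, lam=1e-2, max_iter=500, tol=1e-6):
    """Illustrative skeleton of a shrinkage-thresholding projection iteration.

    Assumptions (ours, not the paper's displayed formulas): the iterate is
    pulled toward the fixed-point equation x = P_+(x - (Mx + q)) by a unit
    projection step, and sparsity is promoted by soft-thresholding.
    """
    x = np.zeros(len(q))
    for _ in range(max_iter):
        # projection step toward the fixed-point equation (step size 1 assumed)
        y = np.maximum(x - (M @ x + q), 0.0)
        # shrinkage step: minimizer of lam * ||x||_1 + 0.5 * ||x - y||^2
        x_new = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

For $M = I$ and $q = -e$ this skeleton converges to $(1 - \lambda)e$, i.e., to the true solution $e$ up to the λ-induced bias, which again motivates driving λ toward zero.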
Now we will show that the STP algorithm is well defined, that is, (13) is implementable. Before doing this, we need the following lemmas.
Lemma 3.1 [12]
Let $P_{\Omega}$ be the metric projection operator onto a nonempty closed convex set $\Omega \subseteq \mathbb{R}^{n}$. Given $x \in \Omega$ and $d \in \mathbb{R}^{n}$, define
$$\theta(\alpha) = \bigl\|P_{\Omega}(x + \alpha d) - x\bigr\|;$$
then $\theta(\alpha)$ is nondecreasing with respect to α.
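The monotonicity in Lemma 3.1 is easy to check numerically; the sketch below assumes $\Omega = \mathbb{R}^{n}_{+}$ and the quantity $\|P_{\Omega}(x + \alpha d) - x\|$ (both are our reading of the lemma):

```python
import numpy as np

def step_norm(x, d, alpha):
    """||P_+(x + alpha * d) - x|| with P_+ the projection onto the
    nonnegative orthant; by Lemma 3.1 this is nondecreasing in alpha."""
    return np.linalg.norm(np.maximum(x + alpha * d, 0.0) - x)

rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal(5))   # a point in the feasible set
d = rng.standard_normal(5)
vals = [step_norm(x, d, a) for a in np.linspace(0.1, 5.0, 50)]
# monotone nondecreasing along increasing alpha
assert all(v2 >= v1 - 1e-12 for v1, v2 in zip(vals, vals[1:]))
```

The companion classical fact, that $\theta(\alpha)/\alpha$ is nonincreasing, is what makes backtracking on the step size α well behaved in Lemma 3.2.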
Lemma 3.2 The step size in (13) must exist.
Proof From Lemma 3.1, we see that is nondecreasing with respect to α. It is obvious that is strictly increasing with respect to α since before the iterations stop. It follows that the term
is strictly increasing with respect to α before the iterations stop. Thus is strictly decreasing with respect to the nonnegative integer m before the iterations stop. Note that and , then we have
One notes that for and any . It follows that before the iterations stop. If , then
is just what we seek; if , there is a positive integer m such that
then
is just what we seek. All in all, the step size in (13) must exist. □
We now begin to analyze the convergence of the proposed STP algorithm.
Theorem 3.1 Let be a sequence generated by the STP algorithm. Then:
(i) is monotonically decreasing and converges to a constant ;
(ii) and are bounded and, supposing , then and are both asymptotically regular, i.e.,
Moreover, any accumulation point of is a solution of .
Proof (i) From (10) and (11), we have
It follows that
Since is monotonically decreasing, together with the inequality (13), we get
Hence,
Combining (16) with (17), we get
which shows that is monotonically decreasing. Since is bounded from below, converges to a constant . This verifies (i) of Theorem 3.1.
(ii) From the fact that , which is bounded, we see that and are both bounded.
From (18) and , we have
This then implies
Thus is convergent, which yields
The above two limits and the inequality
yield
Since is bounded, has at least one accumulation point. Let be an accumulation point of and a subsequence converging to . Since , without loss of generality, we suppose as . It follows that
Combining (20) with (19), we get , which gives and this means . The proof is thus complete. □
4 Numerical experiments
In this section, we present some numerical experiments to demonstrate the effectiveness of our STP algorithm. All the numerical experiments were performed on a DELL computer (2.99 GHz, 2 GB of RAM), using MATLAB 7.10.
In the STP algorithm, the maximum number of iterations is set to 500. We end the STP algorithm if or if it reaches the maximum number of iterations. We set , , , , and take zero vectors as the initial points.
Test 1: Z-matrix LCPs [5]
Let us consider where
Here is the identity matrix of order n and .
Such a matrix M is widely used in statistics. It is clear that M is a Z-matrix as well as a positive semi-definite matrix. For any scalars , we know that the vectors are solutions to , because
Among all the solutions, the vector is the unique sparsest solution.
We test the STP algorithm for different dimensions with , respectively. In this set of experiments, we set and . The results are displayed in Table 1.
In Table 1, ‘’ denotes the Euclidean distance between and , which is in fact the value of the merit function at , ‘gap’ denotes the Euclidean distance between and the true sparsest solution , ‘Spar’ denotes the number of the entries of such that , and ‘time’ denotes the computational time in seconds.
From Table 1, we can see that the STP algorithm is effective in finding the sparse solutions of LCPs. The sparsity of our solution is the same as the sparsity of the true sparse solution . Moreover, both the merit function value and the ‘gap’ decrease as the dimension of the matrix M increases, which indicates that the larger the problem size, the more effective the algorithm is.
Test 2: Randomly created LCPs with positive semidefinite matrices
In this subsection, we test STP for randomly created LCPs with positive semidefinite matrices.
First, we state the way of constructing the LCPs and their solutions. Let a matrix () be generated with the standard normal distribution and let . Let the sparse vector be generated with the standard normal distribution, with its sparsity set as follows: the sparsity is set to be if ; the sparsity is set to be if ; the sparsity is set to be if . After the matrix M and the sparse vector have been generated, a vector can be constructed such that is a solution of the . Then can be regarded as a sparse solution of the . Let M and q be input to our STP algorithm; then STP will output a solution . We must emphasize that the sparsity of may be smaller than that of , since may not be the sparsest solution of the . In this case, is sparser than .
In this set of experiments, ‘iter’ denotes the number of iterations needed to output , ‘Spar-i’ denotes the number of entries of satisfying , and ‘Spar-o’ denotes the number of entries of satisfying . We set and .
We test the STP algorithm for different dimensions with , respectively. For each case, we randomly run 10 times and compute the average values of ‘iter’, ‘Spar-i’, ‘Spar-o’ and , respectively. The results are displayed in Table 2.
From Table 2, we can see that the STP algorithm works very fast. Even for , it takes only 29.7 seconds to yield a very sparse solution to the LCP. The values of are all less than , which indicates that the output points are solutions of . Moreover, the output solution is sparser than . When the dimension of M increases, the accuracy does not decrease but improves, and the time cost of STP increases only slowly. These phenomena show that STP is very robust. We can draw the conclusion that STP is very efficient for finding the sparsest solutions of LCPs.
Remark The continuation strategy for the regularization parameter λ plays an important role in STP for finding sparsest solutions of high quality. Moreover, a large number of numerical experiments indicate that STP is very robust whenever .
5 Conclusions
In this paper, we concentrate on finding the sparsest solutions of LCPs. We propose an $\ell_{1}$ regularized projection minimization model. We then develop a thresholding representation theory for the subproblem of the regularized projection minimization problem, and design a shrinkage-thresholding projection (STP) algorithm to solve the regularized model. The convergence of the STP algorithm is proved. Preliminary numerical results indicate that the regularized model as well as the STP method are promising for finding sparsest solutions of LCPs.
References
Cottle RW, Pang JS, Stone RE: The Linear Complementarity Problem. Academic Press, Boston; 1992.
Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, vols. I, II. Springer Series in Operations Research. Springer, New York; 2003.
Ferris MC, Mangasarian OL, Pang JS: Complementarity: Applications, Algorithms and Extensions. Kluwer Academic, Dordrecht; 2001.
Xie J, He S, Zhang S: Randomized portfolio selection with constraints. Pac. J. Optim. 2008, 4: 87-112.
Shang, M, Zhang, C, Xiu, N: Minimal Zero Norm Solutions of Linear Complementarity Problems. J. Optim. Theory Appl. (submitted)
Figueiredo MAT, Nowak RD: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12: 906-916. 10.1109/TIP.2003.814255
Starck JL, Donoho DL, Candès EJ: Astronomical image representation by the curvelet transform. Astron. Astrophys. 2003, 398: 785-800. 10.1051/0004-6361:20021571
Daubechies I, Defrise M, De Mol C: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57: 1413-1457. 10.1002/cpa.20042
Figueiredo MAT, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1: 586-597.
Candès EJ, Romberg J, Tao T: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006, 59: 1207-1223. 10.1002/cpa.20124
Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52: 1289-1306.
Toint PHL: Global convergence of a class of trust region methods for nonconvex minimization in Hilbert space. IMA J. Numer. Anal. 1988, 8: 231-252. 10.1093/imanum/8.2.231
Acknowledgements
We would like to thank the two referees for their valuable comments. This research was supported by the National Basic Research Program of China (2010CB732501), the National Natural Science Foundation of China (71271021), the Fundamental Research Funds for the Central Universities of China (2011YJS075), STRD plan of Shijiazhuang (135790075A) and the Scientific Research Fund of Hebei Provincial Education Department (QN20132030).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Both authors completed the paper together. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Shang, M., Nie, C. A shrinkage-thresholding projection method for sparsest solutions of LCPs. J Inequal Appl 2014, 51 (2014). https://doi.org/10.1186/1029-242X-2014-51