Large-update interior point algorithm for P_*(κ)-linear complementarity problem
Journal of Inequalities and Applications volume 2014, Article number: 363 (2014)
Abstract
It is well known that each barrier function defines an interior point algorithm and that each barrier function is determined by its univariate kernel function. In this paper we present a new large-update primal-dual interior point algorithm for solving the P_*(κ)-linear complementarity problem (LCP) based on a parametric version of the kernel function in (Bai et al. in SIAM J. Optim. 13:766-782, 2003). We show that the algorithm has O((1+2κ)√n log n log(n/ϵ)) iteration complexity for every value of the barrier function parameter p, where κ is the handicap of the matrix. This is the best known complexity result for such a method.
MSC: 90C33, 90C51.
1 Introduction
In this paper, we consider the standard form of LCP as follows: find a pair (x, s) such that

s = Mx + q, xs = 0, x ≥ 0, s ≥ 0, (1.1)

where q ∈ R^n, M ∈ R^{n×n} is a P_*(κ)-matrix, and xs denotes the componentwise (Hadamard) product of the vectors x and s.
The matrix M is a P_*-matrix if it is a P_*(κ)-matrix for some κ ≥ 0, where M is a P_*(κ)-matrix if

(1 + 4κ) Σ_{i∈I_+(ξ)} ξ_i[Mξ]_i + Σ_{i∈I_-(ξ)} ξ_i[Mξ]_i ≥ 0, for all ξ ∈ R^n,

where [Mξ]_i denotes the i th component of the vector Mξ, I_+(ξ) = {i : ξ_i[Mξ]_i > 0}, and I_-(ξ) = {i : ξ_i[Mξ]_i < 0}. Note that M is a P_*(0)-matrix if and only if M is positive semidefinite.
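The defining inequality above can be checked numerically for a given matrix and test vector. The sketch below is ours for illustration only; the function name and the rounding tolerance are assumptions, not from the paper.

```python
import numpy as np

def p_star_condition_holds(M, kappa, xi):
    """Check the P_*(kappa) inequality for a single test vector xi:
    (1 + 4*kappa) * sum over I_+ of xi_i (M xi)_i + sum over I_- of xi_i (M xi)_i >= 0,
    where I_+/I_- collect the indices with positive/negative products xi_i (M xi)_i."""
    prod = xi * (M @ xi)                # componentwise products xi_i (M xi)_i
    pos = prod[prod > 0].sum()          # contribution over I_+(xi)
    neg = prod[prod < 0].sum()          # contribution over I_-(xi)
    return (1 + 4 * kappa) * pos + neg >= -1e-12   # small tolerance for rounding

# A positive semidefinite M should satisfy the condition with kappa = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
rng = np.random.default_rng(0)
print(all(p_star_condition_holds(M, 0.0, rng.standard_normal(2)) for _ in range(1000)))
```

For a positive semidefinite M the two index sums add up to ξᵀMξ ≥ 0, so the test passes with κ = 0, in line with the remark above.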
In the following, we give some examples of P_*(κ)-matrices.
Example 1.1 The matrix
is , for all . Indeed, since , for , and . Hence for all . For , and . Then , for all . Thus, is , for all .
Example 1.2 For , the matrix
is , for all . . If , and . Hence , for all . If , and . , for all . Thus, is , for all .
Linear complementarity problems (LCPs) have many applications in science, economics, and engineering. LCPs subsume linear and quadratic programming and arise in fixed point problems, sets of piecewise-linear equations, bimatrix equilibrium points, and variational inequalities [1]. Large-update interior point methods (IPMs) are among the most efficient numerical methods for various optimization problems.
Peng-Roos-Terlaky [2–4] proposed new variants of interior point methods (IPMs) based on self-regular barrier functions and achieved the best complexity known so far for large-update methods with a specific self-regular barrier function. Bai-Ghami-Roos [5] proposed a new primal-dual IPM for the linear optimization (LO) problem based on eligible barrier functions, together with a unified scheme for analyzing the algorithm based on four conditions on the kernel function, and Bai-Lesaja-Roos [6] generalized it to P_*(κ)-LCP. Cho [7] and Cho-Kim [8] extended the complexity analysis for the LO problem to P_*(κ)-LCP. Amini-Haseli [9] and Amini-Peyghami [10] introduced generalized versions of the kernel functions in [5] and improved the complexity results for large-update methods for LO and P_*(κ)-LCP, respectively. Recently, Lesaja-Roos [11] proposed a unified analysis of IPMs for P_*(κ)-LCP based on the class of eligible barrier functions first introduced by Bai-Ghami-Roos [5] for LO. Wang-Bai [12] generalized an interior point algorithm for LO to the P-matrix LCP over symmetric cones based on the same kernel function. Wang-Lesaja [13] extended the full NT-step feasible IPM for symmetric cone LO to the Cartesian P_*(κ)-symmetric cone LCP; their algorithm is a small-update method.
The most challenging question in this research area is whether or not there exists a kernel function for which the iteration bound for large-update methods is the same as, or even better than, the currently best known bound for such methods [5]. Bai-Ghami-Roos [14] proposed a new efficient large-update IPM for LO based on a barrier-type function which is not a barrier function in the usual sense, since it has a finite value at the boundary of the feasible region. Despite this, they obtained the best known iteration bound. Ghami [15] proposed various versions of interior point algorithms based on kernel functions and showed through numerical tests that the kernel function in [14] is promising. Wang-Bai [16] proposed a generalized version of the kernel function in [14], which has a parameter in the growth term, for P_*(κ)-horizontal LCPs and obtained the best known complexity bound when the parameter value equals 1, i.e. for the same kernel function as in [14]. This implies that the parameter in the growth term does not improve the complexity of the algorithm for any value other than 1.
Motivated by this, we introduce a parameter in the barrier term of the kernel function in [14] and obtain the best known complexity result for large-update methods for all parameter values. Note that as the parameter in the barrier term grows, the barrier function grows faster when t approaches zero.
The paper is organized as follows: In Section 2, we introduce the generic IPM and give some examples of P_*(κ)-matrices. In Section 3, we introduce a class of barrier functions and propose a new large-update interior point algorithm for P_*(κ)-LCP. In Section 4, we derive the complexity results for the algorithm. Finally, concluding remarks are given in Section 5.
Throughout the paper, R^n_+ and R^n_{++} denote the sets of n-dimensional nonnegative vectors and positive vectors, respectively. For x ∈ R^n, x_i and x_min denote the i th component and the smallest component of the vector x, respectively. We denote by D the diagonal matrix obtained from a vector d, and by e the n-dimensional vector of ones. The index set is I = {1, 2, …, n}. For f, g : R_{++} → R_{++}, f(t) = O(g(t)) if there exists a positive constant C such that f(t) ≤ C g(t), for all t > 0, and f(t) = Θ(g(t)) if there exist positive constants c_1 and c_2 such that c_1 g(t) ≤ f(t) ≤ c_2 g(t), for all t > 0. For x ∈ R, ⌈x⌉ = min{z ∈ Z : z ≥ x}, where Z is the set of integers. log denotes the natural logarithm.
2 Preliminaries
In this section we recall some basic concepts and introduce the generic interior point algorithm. The basic idea of IPMs for LCP is to replace the second equation in (1.1) by the parameterized equation xs = μe, μ > 0. Now we consider the following system:

s = Mx + q, xs = μe, x > 0, s > 0. (2.1)
Without loss of generality, we assume that (1.1) satisfies the interior point condition (IPC), i.e., there exists a point (x^0, s^0) > 0 such that s^0 = Mx^0 + q [17]. Since M is a P_*(κ)-matrix for some κ ≥ 0 and (1.1) satisfies the IPC, the system (2.1) has a unique solution for each μ > 0. We denote the solution of (2.1) by (x(μ), s(μ)), which is called the μ-center for μ > 0. The set of μ-centers is called the central path of (1.1). Since the limit of the μ-centers as μ → 0 satisfies (1.1), it yields a solution of (1.1) [17]. IPMs follow the central path approximately and approach a solution of (1.1) as μ → 0.
For a given strictly feasible point (x, s), by applying Newton's method to the system (2.1), we have the following Newton system:

MΔx − Δs = 0, SΔx + XΔs = μe − xs, (2.2)

where X = diag(x) and S = diag(s). By Lemma 4.1 of [17], the system (2.2) has a unique solution (Δx, Δs). By taking a step along the search direction (Δx, Δs), one constructs a new iterate (x_+, s_+), where

x_+ = x + αΔx, s_+ = s + αΔs,

for some step size α > 0. For notational convenience, we define the following (all operations componentwise):

v = √(xs/μ), d = √(x/s), d_x = vΔx/x, d_s = vΔs/s. (2.3)
Using (2.3), we can rewrite the system (2.2) as follows:

M̄d_x − d_s = 0, d_x + d_s = v^{−1} − v, (2.4)

where M̄ = DMD and D = diag(d). Note that the right-hand side of the second equation in (2.4) equals the negative gradient of the classical logarithmic barrier function Ψ_c(v), i.e.,

d_x + d_s = −∇Ψ_c(v), (2.5)

where Ψ_c(v) = Σ_{i=1}^{n} ψ_c(v_i) and ψ_c(t) = (t² − 1)/2 − log t, for t > 0. We call ψ_c the kernel function of the classical logarithmic barrier function Ψ_c(v).
Assuming that we are given a strictly feasible point (x^0, s^0) which is in a τ-neighborhood of the given μ-center, the generic interior point algorithm works as in Algorithm 1.
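The generic scheme can be sketched in code. This is our illustrative reconstruction, not the paper's Algorithm 1: it recenters with the classical logarithmic barrier direction from (2.2) and a crude step-halving rule, whereas the algorithm of this paper uses the new kernel-based direction of Section 3; the parameter names theta, tau, eps follow the text.

```python
import numpy as np

def psi_c_proximity(x, s, mu):
    """Classical log-barrier proximity Psi_c(v) with v = sqrt(xs/mu) from (2.3)."""
    v2 = x * s / mu
    return np.sum((v2 - 1) / 2 - np.log(v2) / 2)

def generic_ipm_lcp(M, q, x, s, eps=1e-8, theta=0.5, tau=1.0):
    """Generic large-update primal-dual IPM sketch for the LCP s = Mx + q, xs = 0."""
    n = len(q)
    mu = x @ s / n
    while n * mu > eps:
        mu *= (1 - theta)                       # mu-update (outer iteration)
        while psi_c_proximity(x, s, mu) > tau:  # inner iterations: recenter
            X, S = np.diag(x), np.diag(s)
            # Newton system (2.2): M dx - ds = 0,  S dx + X ds = mu e - x s
            dx = np.linalg.solve(S + X @ M, mu * np.ones(n) - x * s)
            ds = M @ dx
            alpha = 1.0                         # damp to keep strict feasibility
            while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
                alpha *= 0.5
            x, s = x + alpha * dx, s + alpha * ds
    return x, s

# Example: M positive definite (hence P_*(0)); start from a strictly feasible point.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
x0 = np.ones(2)
q = np.ones(2) - M @ x0                          # so that s0 = M x0 + q = e > 0
x, s = generic_ipm_lcp(M, q, x0.copy(), M @ x0 + q)
print(x @ s < 1e-7)
```

The inner loop drives the iterate back into a τ-neighborhood of the μ-center after each μ-update, mirroring the outer/inner structure analyzed in Section 4.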
3 New algorithm
Consider a class of kernel functions as follows:
Then we have the first three derivatives of ψ(t) as follows:
From (3.1) and (3.2), we have, for t > 0 and p ≥ 1,
From (3.3), ψ(t) is strictly convex and has a minimum value 0 at t = 1.
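The formula (3.1) is not reproduced in this version of the text. As a hedged numeric check, assume the parametric finite-barrier kernel ψ_p(t) = (t² − 1)/2 + (e^{p(1−t)} − 1)/p, p ≥ 1, i.e. our reading of the kernel of [14] with a parameter p in the barrier term; the properties just stated (minimum value 0 at t = 1, strict convexity, finite value at the boundary) can then be verified numerically.

```python
import math

def psi(t, p):
    """Assumed parametric finite-barrier kernel (our reading of (3.1)):
    growth term (t^2 - 1)/2 plus barrier term (e^{p(1-t)} - 1)/p."""
    return (t * t - 1) / 2 + (math.exp(p * (1 - t)) - 1) / p

def dpsi(t, p):
    """First derivative: psi'(t) = t - e^{p(1-t)}."""
    return t - math.exp(p * (1 - t))

def d2psi(t, p):
    """Second derivative: psi''(t) = 1 + p e^{p(1-t)} > 1."""
    return 1 + p * math.exp(p * (1 - t))

for p in (1.0, 2.0, 5.0):
    # minimum value 0 at t = 1
    assert abs(psi(1.0, p)) < 1e-15 and abs(dpsi(1.0, p)) < 1e-15
    # strict convexity: psi'' > 1 on a grid of t-values
    assert all(d2psi(t, p) > 1 for t in [0.01 * k for k in range(1, 301)])
    # finite value as t -> 0+, as for the finite barrier of [14]
    assert psi(1e-9, p) < (math.exp(p) - 1) / p
print("kernel checks passed")
```

Under this assumption the limit as t → 0+ is −1/2 + (e^p − 1)/p, which is finite but grows with p, matching the remark in Section 1 that a larger barrier parameter makes the barrier function grow faster near zero.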
Lemma 3.1 Let ψ(t) be defined as in (3.1). Then for t > 0 and p ≥ 1,
(i)
(ii)
(iii)
Proof For (i), using (3.2) with , we have
For (ii), , .
For (iii), it is clear from (3.2). □
Corollary 3.2 Let t_1, t_2 > 0. By Lemma 3.1(i) and Lemma 2.1.2 in [4], we have ψ(√(t_1 t_2)) ≤ ½(ψ(t_1) + ψ(t_2)), i.e., ψ is exponentially convex, for all t_1, t_2 > 0.
Remark 3.3 By Lemma 3.1(ii), (iii), and Lemma 2.4 in [5], , , .
Lemma 3.4 For as in (3.1), we have
Proof Using (3.3), we have
and
□
Lemma 3.5 Let ϱ : [0, ∞) → [1, ∞) be the inverse function of ψ(t) for t ≥ 1. Then we have
Proof Let for . Then . Using the first inequality in Lemma 3.4, we have . Then we have . □
Lemma 3.6 Let ρ : [0, ∞) → (0, 1] be the inverse function of −½ψ′(t) for 0 < t ≤ 1. Then , , , .
Proof Let , for . By the definition of ρ, , for and . By (3.2) and , . Hence . □
Define, for ψ as in (3.1) and v ∈ R^n_{++},

Ψ(v) = Σ_{i=1}^{n} ψ(v_i). (3.4)

Since ψ is strictly convex and minimal at t = 1, we have

Ψ(v) ≥ 0, and Ψ(v) = 0 if and only if v = e.

We use Ψ(v) as the proximity function to measure the distance between the current iterate and the corresponding μ-center. Also, we define the norm-based proximity measure δ(v) as follows:

δ(v) = ½‖∇Ψ(v)‖. (3.5)

Note that δ(v) = 0 if and only if v = e. In this paper, we replace the right-hand side of (2.5), −∇Ψ_c(v), by −∇Ψ(v) with Ψ(v) as in (3.4). This defines a new search direction and a new proximity function.
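Assuming, for illustration, the parametric finite-barrier kernel ψ_p(t) = (t² − 1)/2 + (e^{p(1−t)} − 1)/p (our reading of (3.1), which is elided in this version of the text), the proximity function (3.4) and the norm-based measure (3.5) can be sketched as follows.

```python
import numpy as np

def psi(t, p=1.0):
    # assumed kernel (illustrative; (3.1) is elided in this text)
    return (t**2 - 1) / 2 + (np.exp(p * (1 - t)) - 1) / p

def dpsi(t, p=1.0):
    # derivative of the assumed kernel: t - e^{p(1-t)}
    return t - np.exp(p * (1 - t))

def proximity(x, s, mu, p=1.0):
    """Psi(v) of (3.4) and delta(v) = ||grad Psi(v)||/2 of (3.5),
    with the scaled vector v = sqrt(xs/mu) from (2.3)."""
    v = np.sqrt(x * s / mu)
    return np.sum(psi(v, p)), 0.5 * np.linalg.norm(dpsi(v, p))

# On the mu-center, v = e, so both measures vanish.
x = np.array([2.0, 0.5, 1.0]); s = np.array([0.5, 2.0, 1.0]); mu = 1.0
Psi, delta = proximity(x, s, mu)
print(Psi == 0.0 and delta == 0.0)
```

The example point satisfies x_i s_i = μ for every i, so v = e and both Ψ(v) and δ(v) are exactly zero, consistent with the characterization above.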
In the following we compute an upper bound of the proximity function during an outer iteration.
Lemma 3.7 Let δ(v) and Ψ(v) be defined as in (3.5) and (3.4), respectively. Then , , .
Proof Using (3.5) and the second inequality of Lemma 3.4,
Hence we have . □
Lemma 3.8 Let and . If and , then .
Proof If , then . Suppose that . Let . Since , , i.e., . This implies that , . Let . Then is monotone decreasing in σ. Since and , .
Let . Then and hence . Let . is monotone increasing in p and L, respectively. Since and , . Hence . □
4 Complexity analysis
In this section we give the iteration complexity of the algorithm for large-update methods. For the complexity analysis of the algorithm we follow a framework similar to the one defined in [5] for LO problems. In the following we compute a bound on the growth of the barrier function during an outer iteration of the algorithm.
Using Lemma 3.1(ii), (iii), and Theorem 3.2 in [5], we obtain the following lemma.
Lemma 4.1 Let ϱ be defined as in Lemma 3.5. If , then
In the following we compute an upper bound of the proximity function when we update the barrier parameter μ.
Theorem 4.2 Let and . If , then we have
Proof Define . Then and and . Hence, we have
Using Lemma 4.1, (4.2), and Lemma 3.5, we have
□
Define
We will use this quantity as an upper bound of the proximity function for large-update methods.
Remark 4.3 Let . Without loss of generality, we can assume that . Indeed, when , and , if . In the algorithm we take .
Remark 4.4 For large-update methods with τ = O(n) and θ = Θ(1), we have and .
In the following we compute a default step size which keeps the iterates strictly feasible and decreases the value of the barrier function during inner iterations. For fixed μ, if we take a step size α, then we have the new iterates x_+ = x + αΔx and s_+ = s + αΔs. Using (2.3), we have
Thus we have
Define, for α > 0,

f(α) = Ψ(v_+) − Ψ(v),

where v_+ denotes the scaled vector of the new iterate. Then f(α) is the difference of proximities between the new iterate and the current iterate for fixed μ. Assume that for some , and , for all . By Corollary 3.2,
Define
Then we have and . By taking the derivative of with respect to α, we have
Using (2.5) and (3.5), we have
Differentiating with respect to α, we have
Since , is strictly convex in α unless . Since M is a P_*(κ)-matrix and MΔx = Δs from (2.2), for ,
where and . Since and , we have
For notational convenience, we denote , , and . To estimate the bound for and , we need the following technical lemma.
Lemma 4.5 (Modification of Lemma 4.1 in [8])
and .
Lemma 4.6 (Modification of Lemma 4.2 in [8])
, and .
Using (4.5) and Lemma 4.6, we have the following lemma.
Lemma 4.7 (Modification of Lemma 4.3 in [8])
Let δ be defined as in (3.5). Then we have
Using (4.5) and Lemma 4.7, we have the following lemma.
Lemma 4.8 (Modification of Lemma 4.4 in [8])
If the step size α satisfies the inequality
then .
Lemma 4.9 (Modification of Lemma 4.5 in [8])
Let ρ be defined as in Lemma 3.6 and . Then, in the worst case, the largest step size α satisfying (4.6) is given by
Lemma 4.10 (Modification of Lemma 4.6 in [8])
Let ρ and be defined as in Lemma 4.9. Then
Let
Letting , we have and . By (3.2) and , we have
By (4.8) with (3.2) and , (4.9), Lemma 3.7 with and ,
Define the default step size as follows:
Using Lemma 4.6, Lemma 3.8, and (4.10), we have x_+ > 0 for all α up to the default step size. In the same way, s_+ > 0 for all such α. Hence we can use the exponential convexity of ψ.
Lemma 4.11 (Lemma 1.3.3 in [4])
Let h be a twice differentiable convex function with h(0) = 0, h′(0) < 0, and let h attain its (global) minimum at t* > 0. If h″ is monotonically increasing on [0, t*], then h(t) ≤ t h′(0)/2, for 0 ≤ t ≤ t*.
Using , (4.5), , and Lemma 4.11, we have the following lemma.
Lemma 4.12 Let be defined as in (4.7). If the step size α is such that , then
Using Lemma 4.12, (4.10), and Lemma 3.7, we have the following theorem.
Theorem 4.13 For as in (4.10), .
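The decrease of the proximity during an inner iteration can be made concrete with a small numeric sketch. The kernel below is again our assumed reading of (3.1), the matrix and point are illustrative, and the direction is computed from (2.4) with the new right-hand side −∇Ψ(v); one then observes f(0) = 0 and f(α) < 0 for a small positive step.

```python
import numpy as np

p = 2.0
psi  = lambda t: (t**2 - 1) / 2 + (np.exp(p * (1 - t)) - 1) / p   # assumed kernel
dpsi = lambda t: t - np.exp(p * (1 - t))                          # psi'

# Small monotone example: M positive definite, a strictly feasible (x, s), fixed mu.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
x = np.array([1.0, 0.5]); s = np.array([0.8, 1.5]); mu = 0.5
v = np.sqrt(x * s / mu); d = np.sqrt(x / s); D = np.diag(d)
Mbar = D @ M @ D
# New search direction from (2.4): Mbar dx - ds = 0, dx + ds = -grad Psi(v)
dx = np.linalg.solve(np.eye(2) + Mbar, -dpsi(v))
ds = Mbar @ dx
Dx = np.sqrt(mu) * d * dx           # unscale: Delta x = sqrt(mu) d dx
Ds = np.sqrt(mu) * ds / d           # unscale: Delta s = sqrt(mu) ds / d

def f(alpha):
    """f(alpha) = Psi(v_+) - Psi(v), the proximity change for step size alpha."""
    vp = np.sqrt((x + alpha * Dx) * (s + alpha * Ds) / mu)
    return np.sum(psi(vp)) - np.sum(psi(v))

print(f(0.0) == 0.0, f(0.1) < 0)
```

This mirrors the role of the default step size: for step sizes below it, the analysis guarantees a decrease of the barrier function of the kind Theorem 4.13 quantifies.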
Proposition 4.14 (Proposition 1.3.2 in [4])
Let t_0, t_1, …, t_K be a sequence of positive numbers such that

t_{k+1} ≤ t_k − β t_k^{1−γ}, k = 0, 1, …, K − 1,

where β > 0 and 0 < γ ≤ 1. Then K ≤ ⌈t_0^γ/(βγ)⌉.
We define the value of Ψ(v) after the μ-update as Ψ_0, and the subsequent values in the same outer iteration are denoted as Ψ_k, k = 1, 2, … . If we let K be the number of inner iterations per outer iteration, then we have Ψ_K ≤ τ. In the following theorem we give the bound for the total number of iterations.
Theorem 4.15 Let a P_*(κ)-LCP be given. If , then the total number of iterations of the algorithm to get an ϵ-approximate solution is bounded by
Proof Using Proposition 4.14 with and , we obtain the number of inner iterations K per outer iteration. If the central path parameter μ has the initial value μ^0 and is updated by multiplying it by 1 − θ, 0 < θ < 1, then after at most ⌈(1/θ) log(nμ^0/ϵ)⌉ iterations we have nμ ≤ ϵ [18]. For the total number of iterations, we multiply the number of inner iterations by that of the outer iterations. Hence the total number of iterations is bounded by K⌈(1/θ) log(nμ^0/ϵ)⌉. □
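The outer-iteration count in the proof can be checked with a short computation; the values of n, μ^0, ϵ, and θ below are illustrative choices, not from the paper.

```python
import math

def outer_iterations(n, mu0, eps, theta):
    """Upper bound on the number of mu-updates needed so that n*mu <= eps when
    mu is multiplied by (1 - theta) each update: ceil(log(n*mu0/eps)/theta),
    using -log(1 - theta) >= theta."""
    return math.ceil(math.log(n * mu0 / eps) / theta)

# Large-update example: n = 100, mu0 = 1, eps = 1e-6, theta = 1/2.
print(outer_iterations(100, 1.0, 1e-6, 0.5))  # ceil(log(1e8)/0.5) = 37
```

Multiplying this count by the inner-iteration bound K from Proposition 4.14 gives the total bound of Theorem 4.15.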
5 Concluding remarks
Wang-Bai [16] defined a parametric version of the kernel function in [14], with the parameter in the growth term of the kernel function, and generalized the algorithm for LO to P_*(κ)-horizontal LCPs based on this kernel function. Ghami-Roos-Steihaug [19] extended the algorithm for LO to semidefinite optimization based on the kernel function in [16]. However, the best known complexity bound for large-update methods is obtained only when the kernel function takes the form in [14].
Motivated by this, we considered a parametric version of the kernel function in [14] with a parameter p in the barrier term of the kernel function. For large-update methods, by taking τ = O(n) and θ = Θ(1), we obtained the iteration bound O((1+2κ)√n log n log(n/ϵ)), for all p ≥ 1, which is the best known complexity bound for such a method.
Further research will be concerned with a numerical test and extension to general problems.
References
Cottle RW, Pang JS, Stone RE: The Linear Complementarity Problem. Academic Press, San Diego; 1992.
Peng J, Roos C, Terlaky T: Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 2002, 93: 129-171. 10.1007/s101070200296
Peng J, Roos C, Terlaky T: Primal-dual interior-point methods for second-order conic optimization based on self-regular proximities. SIAM J. Optim. 2002, 13: 179-203. 10.1137/S1052623401383236
Peng J, Roos C, Terlaky T: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton; 2002.
Bai YQ, Ghami ME, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15: 101-128. 10.1137/S1052623403423114
Bai YQ, Lesaja G, Roos C: A new class of polynomial interior-point algorithms for linear complementarity problems. Pac. J. Optim. 2008, 4: 19-41.
Cho GM: A new large-update interior point algorithm for linear complementarity problems. J. Comput. Appl. Math. 2008, 216: 256-278.
Cho GM, Kim MK: A new large-update interior point algorithm for LCPs based on kernel functions. Appl. Math. Comput. 2006, 182: 1169-1183. 10.1016/j.amc.2006.04.060
Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM J. 2007, 49: 259-270. 10.1017/S1446181100012827
Amini K, Peyghami MR: Exploring complexity of large update interior-point methods for linear complementarity problem based on kernel function. Appl. Math. Comput. 2009, 207: 501-513. 10.1016/j.amc.2008.11.002
Lesaja G, Roos C: Unified analysis of kernel-based interior-point methods for P_*(κ)-linear complementarity problems. SIAM J. Optim. 2010, 20: 3014-3039. 10.1137/090766735
Wang GQ, Bai YQ: A class of polynomial interior-point algorithms for the Cartesian P-matrix linear complementarity problem over symmetric cones. J. Optim. Theory Appl. 2012, 152: 739-772. 10.1007/s10957-011-9938-8
Wang GQ, Lesaja G: Full Nesterov-Todd step feasible interior-point method for the Cartesian P_*(κ)-SCLCP. Optim. Methods Softw. 2013, 28: 600-618. 10.1080/10556788.2013.781600
Bai YQ, Ghami ME, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 2003, 13: 766-782.
Ghami, ME: New primal-dual interior-point methods based on kernel functions. Dissertation, Delft University of Technology (2005)
Wang GQ, Bai YQ: Polynomial interior-point algorithms for horizontal linear complementarity problem. J. Comput. Appl. Math. 2009, 233: 248-263. 10.1016/j.cam.2009.07.014
Kojima M, Megiddo N, Noma T, Yoshise A: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538. Springer, Berlin; 1991.
Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization: An Interior Approach. Wiley, Chichester; 1997.
Ghami ME, Roos C, Steihaug T: A generic primal-dual interior point method for semidefinite optimization based on a new class of kernel functions. Optim. Methods Softw. 2010, 25: 387-403. 10.1080/10556780903239048
Acknowledgements
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2010094).
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Cho, GM. Large-update interior point algorithm for P_*(κ)-linear complementarity problem. J Inequal Appl 2014, 363 (2014). https://doi.org/10.1186/1029-242X-2014-363
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/1029-242X-2014-363