
Large-update interior point algorithm for $P_*$-linear complementarity problem

Abstract

It is well known that each barrier function defines an interior point algorithm, and each barrier function is determined by its univariate kernel function. In this paper we present a new large-update primal-dual interior point algorithm for solving the $P_*$-linear complementarity problem (LCP) based on a parametric version of the kernel function in (Bai et al. in SIAM J. Optim. 13:766-782, 2003). We show that the algorithm has $O\big((1+2\kappa)(\log p)^3\sqrt{n}\,(\log n)\log\frac{n\mu_0}{\epsilon}\big)$ iteration complexity, where $p$ is a barrier function parameter and $\kappa$ is the handicap of the matrix. This is the best known complexity result for such methods.

MSC:90C33, 90C51.

1 Introduction

In this paper, we consider the LCP in the standard form:

$$s = Mx + q, \qquad xs = 0, \qquad x \ge 0, \qquad s \ge 0,$$
(1.1)

where $x, s, q \in \mathbb{R}^n$, $M \in \mathbb{R}^{n \times n}$ is a $P_*$-matrix, and $xs$ denotes the componentwise (Hadamard) product of the vectors $x$ and $s$.

The matrix $M$ is a $P_*$-matrix if it is a $P_*(\kappa)$-matrix for some $\kappa \ge 0$, where
$$P_*(\kappa) := \Big\{ M \in \mathbb{R}^{n \times n} : (1+4\kappa)\sum_{i \in I_+(\xi)} [\xi]_i [M\xi]_i + \sum_{i \in I_-(\xi)} [\xi]_i [M\xi]_i \ge 0,\ \forall \xi \in \mathbb{R}^n \Big\},$$
where $[M\xi]_i$ denotes the $i$th component of the vector $M\xi$, $I_+(\xi) = \{1 \le i \le n : [\xi]_i [M\xi]_i \ge 0\}$, and $I_-(\xi) = \{1 \le i \le n : [\xi]_i [M\xi]_i < 0\}$. Note that $M$ is a $P_*(0)$-matrix if and only if $M$ is positive semidefinite.

In the following, we give some examples of $P_*(\kappa)$-matrices.

Example 1.1 The matrix

$$M_1 = \begin{pmatrix} 0 & 2+4\kappa \\ -2 & 0 \end{pmatrix}$$

is $P_*(\kappa)$ for all $\kappa \ge 0$. Indeed, $x(M_1 x) = \big((2+4\kappa)x_1 x_2,\ -2x_1 x_2\big)^T$. For $x_1 x_2 > 0$, $I_+ = \{1\}$ and $I_- = \{2\}$; hence $(1+4\kappa)(2+4\kappa)x_1 x_2 - 2x_1 x_2 = 4\kappa x_1 x_2(3+4\kappa) \ge 0$ for all $\kappa \ge 0$. For $x_1 x_2 < 0$, $I_+ = \{2\}$ and $I_- = \{1\}$; then $(1+4\kappa)(-2x_1 x_2) + (2+4\kappa)x_1 x_2 = -4\kappa x_1 x_2 \ge 0$ for all $\kappa \ge 0$. Thus $M_1$ is $P_*(\kappa)$ for all $\kappa \ge 0$.

Example 1.2 For $c \ge 0$, the matrix

$$M_2 = \begin{pmatrix} 0 & 1+4\kappa & 0 \\ -1 & 0 & 0 \\ 0 & 0 & c \end{pmatrix}$$

is $P_*(\kappa)$ for all $\kappa \ge 0$. Indeed, $x(M_2 x) = \big((1+4\kappa)x_1 x_2,\ -x_1 x_2,\ c x_3^2\big)^T$. If $x_1 x_2 > 0$, then $I_+ = \{1,3\}$ and $I_- = \{2\}$; hence $(1+4\kappa)^2 x_1 x_2 + c(1+4\kappa)x_3^2 - x_1 x_2 = 8\kappa(1+2\kappa)x_1 x_2 + c(1+4\kappa)x_3^2 \ge 0$ for all $\kappa \ge 0$. If $x_1 x_2 < 0$, then $I_+ = \{2,3\}$ and $I_- = \{1\}$, and $(1+4\kappa)(-x_1 x_2 + c x_3^2) + (1+4\kappa)x_1 x_2 = c(1+4\kappa)x_3^2 \ge 0$ for all $\kappa \ge 0$. Thus $M_2$ is $P_*(\kappa)$ for all $\kappa \ge 0$.
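The $P_*(\kappa)$ condition can also be probed numerically. The sketch below (the helper names are ours, and random sampling can only certify failure, never membership) evaluates the defining inequality at test vectors and exercises it on the two examples above:

```python
import random

def pstar_lhs(M, xi, kappa):
    """Left-hand side of the P_*(kappa) inequality for one test vector xi:
    (1 + 4*kappa) * sum_{I+} xi_i (M xi)_i + sum_{I-} xi_i (M xi)_i."""
    n = len(xi)
    Mxi = [sum(M[i][j] * xi[j] for j in range(n)) for i in range(n)]
    prods = [xi[i] * Mxi[i] for i in range(n)]
    plus = sum(p for p in prods if p >= 0)
    minus = sum(p for p in prods if p < 0)
    return (1 + 4 * kappa) * plus + minus

def violates_pstar(M, kappa, trials=2000, seed=0):
    """Search for a certificate that M is NOT P_*(kappa)."""
    rng = random.Random(seed)
    n = len(M)
    for _ in range(trials):
        xi = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        if pstar_lhs(M, xi, kappa) < -1e-12:
            return True
    return False

k = 0.25
M1 = [[0.0, 2 + 4 * k], [-2.0, 0.0]]                              # Example 1.1
M2 = [[0.0, 1 + 4 * k, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]   # Example 1.2, c = 1
assert not violates_pstar(M1, k) and not violates_pstar(M2, k)
assert violates_pstar([[-1.0]], 0.0)    # a negative definite matrix is not P_*(0)
```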

Linear complementarity problems (LCPs) have many applications in science, economics, and engineering. LCPs include linear and quadratic programming, fixed-point problems, systems of piecewise-linear equations, bimatrix equilibrium points, and variational inequalities [1]. Large-update interior point methods (IPMs) are among the most efficient numerical methods for various optimization problems.

Peng-Roos-Terlaky [2-4] proposed new variants of interior point methods (IPMs) based on self-regular barrier functions and achieved the best known complexity for large-update methods with a specific self-regular barrier function. Bai-Ghami-Roos [5] proposed a new primal-dual IPM for the linear optimization (LO) problem based on eligible barrier functions, together with a unified scheme for analyzing the algorithm based on four conditions on the kernel function, and Bai-Lesaja-Roos [6] generalized it to $P_*(\kappa)$-LCPs. Cho [7] and Cho-Kim [8] extended the complexity analysis from the LO problem to $P_*(\kappa)$-LCPs. Amini-Haseli [9] and Amini-Peyghami [10] introduced generalized versions of the kernel functions in [5] and improved the complexity results of large-update methods for LO and $P_*(\kappa)$-LCPs, respectively. Recently, Lesaja-Roos [11] proposed a unified analysis of IPMs for $P_*(\kappa)$-LCPs based on the class of eligible barrier functions first introduced by Bai-Ghami-Roos [5] for LO. Wang-Bai [12] generalized an interior point algorithm for LO to the Cartesian $P_*$-matrix LCP over symmetric cones based on the same kernel function. Wang-Lesaja [13] extended the full NT-step infeasible IPM for symmetric cone LO to the Cartesian $P_*(\kappa)$-symmetric cone LCP; their algorithm is a small-update method.

The most challenging question in this research area is whether or not there exists a kernel function for which the iteration bound for large-update methods is the same as, or even better than, the currently best known bound for such methods [5]. Bai-Ghami-Roos [14] proposed a new efficient large-update IPM for LO based on a barrier-type function which is not a barrier function in the usual sense, since it has a finite value at the boundary of the feasible region. Despite this, they obtained the best known iteration bound. Ghami [15] proposed various versions of interior point algorithms based on kernel functions and showed through numerical tests that the kernel function in [14] is promising. Wang-Bai [16] proposed a generalized version of the kernel function in [14], with a parameter in the growth term, for $P_*(\kappa)$-horizontal LCPs and obtained the best known complexity bound only when the parameter value equals 1, i.e., for the same kernel function as in [14]. This implies that a parameter in the growth term does not improve the complexity of the algorithm except when it equals 1.

Motivated by this, we introduce a parameter in the barrier term of the kernel function in [14] and obtain the best known complexity result for large-update methods for all parameter values. Note that as the parameter in the barrier term grows, the barrier function grows faster as $t$ approaches zero.

The paper is organized as follows. In Section 2, we recall some basic concepts and introduce the generic IPM. In Section 3, we introduce a class of barrier functions and propose a new large-update interior point algorithm for the $P_*$-LCP. In Section 4, we derive the complexity results for the algorithm. Finally, concluding remarks are given in Section 5.

Throughout the paper, $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the sets of $n$-dimensional nonnegative vectors and positive vectors, respectively. For $x \in \mathbb{R}^n$, $[x]_i$ and $x_1$ denote the $i$th component and the smallest component of the vector $x$, respectively, and $\|\cdot\|$ denotes the Euclidean norm. We denote by $D$ the diagonal matrix obtained from a vector $d$, and by $e$ the $n$-dimensional vector of ones. The index set is $I := \{1,2,\ldots,n\}$. For $g_1(t), g_2(t) : \mathbb{R}_{++} \to \mathbb{R}_{++}$, $g_1(t) = O(g_2(t))$ if there exists a positive constant $c_1$ such that $g_1(t) \le c_1 g_2(t)$ for all $t > 0$, and $g_1(t) = \Theta(g_2(t))$ if there exist positive constants $c_2$ and $c_3$ such that $c_2 g_2(t) \le g_1(t) \le c_3 g_2(t)$ for all $t > 0$. For $a \in \mathbb{R}$, $\lfloor a \rfloor := \max\{m \in \mathbb{Z} : m \le a\}$ and $\lceil a \rceil := \min\{m \in \mathbb{Z} : m \ge a\}$, where $\mathbb{Z}$ is the set of integers. log denotes the natural logarithm.

2 Preliminaries

In this section we recall some basic concepts and introduce the generic interior point algorithm. The basic idea of IPMs for the LCP is to replace the second equation in (1.1) by the parameterized equation $xs = \mu e$, $\mu > 0$. Thus we consider the following system:

$$s = Mx + q, \qquad xs = \mu e, \qquad x > 0, \qquad s > 0.$$
(2.1)

Without loss of generality, we assume that (1.1) satisfies the interior point condition (IPC), i.e., there exists $(x^0, s^0) > 0$ such that $s^0 = Mx^0 + q$ [17]. Since $M$ is a $P_*(\kappa)$-matrix for some $\kappa \ge 0$ and (1.1) satisfies the IPC, the system (2.1) has a unique solution for each $\mu > 0$. We denote the solution of (2.1) by $(x(\mu), s(\mu))$, which is called the $\mu$-center for $\mu > 0$. The set of $\mu$-centers is called the central path of (1.1). As $\mu \to 0$, the limit of the $\mu$-centers satisfies (1.1) and hence yields a solution of (1.1) [17]. IPMs follow the central path approximately and approach a solution of (1.1) as $\mu \to 0$.

For a given $(x, s) := (x^0, s^0)$, applying Newton's method to the system (2.1) yields the following Newton system:

$$-M\Delta x + \Delta s = 0, \qquad S\Delta x + X\Delta s = \mu e - xs,$$
(2.2)

where $X := \operatorname{diag}(x)$ and $S := \operatorname{diag}(s)$. By Lemma 4.1 of [17], the system (2.2) has a unique solution $(\Delta x, \Delta s)$. Taking a step along the search direction $(\Delta x, \Delta s)$, one constructs the new iterate $(x_+, s_+)$, where

$$x_+ := x + \alpha\Delta x, \qquad s_+ := s + \alpha\Delta s,$$

for some step size $\alpha > 0$. For notational convenience, we define

$$v := \sqrt{\frac{xs}{\mu}}, \qquad d := \sqrt{\frac{x}{s}}, \qquad d_x := \frac{v\Delta x}{x}, \qquad d_s := \frac{v\Delta s}{s}.$$
(2.3)

Using (2.3), we can rewrite the system (2.2) as follows:

$$-\bar{M} d_x + d_s = 0, \qquad d_x + d_s = v^{-1} - v,$$
(2.4)

where $\bar{M} := DMD$ and $D := \operatorname{diag}(d)$. Note that the right-hand side of the second equation in (2.4) equals the negative gradient of the classical logarithmic barrier function $\Psi_l(v)$, i.e.,

$$d_x + d_s = -\nabla\Psi_l(v),$$
(2.5)

where $\Psi_l(v) := \sum_{i=1}^n \psi_l(v_i)$ and $\psi_l(t) := \frac{t^2 - 1}{2} - \log t$ for $t > 0$. We call $\psi_l$ the kernel function of the classical logarithmic barrier function $\Psi_l(v)$.
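As a quick sanity check (ours, not from the paper), the identity $v^{-1} - v = -\nabla\Psi_l(v)$ can be verified componentwise, since $\psi_l'(t) = t - 1/t$:

```python
import math

def psi_l(t):
    # classical logarithmic kernel: (t^2 - 1)/2 - log t
    return (t * t - 1) / 2 - math.log(t)

def dpsi_l(t):
    return t - 1 / t

v = [0.5, 1.0, 2.0]
rhs = [1 / vi - vi for vi in v]       # v^{-1} - v, componentwise
grad = [-dpsi_l(vi) for vi in v]      # -(psi_l)'(v_i), componentwise
assert all(abs(a - b) < 1e-12 for a, b in zip(rhs, grad))
assert psi_l(1.0) == 0.0              # the kernel vanishes at t = 1
```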

Assuming that we are given a strictly feasible point (x,s) which is in a τ-neighborhood of the given μ-center, the generic interior point algorithm works as in Algorithm 1.

Algorithm 1 The generic interior point algorithm (figure not reproduced)
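Since the algorithm figure is not reproduced above, the following self-contained sketch illustrates the outer/inner structure of the generic IPM on the toy instance $M = I$ (a $P_*(0)$-matrix), for which the scaled system (2.4) decouples componentwise ($\bar{M} = D^2$). The classical logarithmic kernel of this section drives the search direction; the Armijo backtracking step-size rule and the helper names are our own assumptions, not the paper's step-size rule.

```python
import math

def psi_l(t):                      # classical logarithmic kernel
    return (t * t - 1) / 2 - math.log(t)

def dpsi_l(t):
    return t - 1 / t

def Psi_l(v):
    return sum(psi_l(t) for t in v)

def solve_lcp_identity(q, theta=0.5, tau=1.0, eps=1e-6):
    """Generic IPM sketch for the LCP s = x + q with M = I.

    With M = I the scaled system (2.4) decouples: d_s = d_i^2 * d_x
    componentwise, hence d_x = -psi_l'(v_i) / (1 + d_i^2)."""
    x = [1.0 - min(0.0, qi) for qi in q]     # interior start: x > 0, s = x + q > 0
    s = [xi + qi for xi, qi in zip(x, q)]
    n = len(q)
    mu = sum(xi * si for xi, si in zip(x, s)) / n
    while n * mu >= eps:
        mu *= (1 - theta)                    # outer iteration: mu-update
        while True:                          # inner iterations
            v = [math.sqrt(xi * si / mu) for xi, si in zip(x, s)]
            cur = Psi_l(v)
            if cur <= tau:                   # tau-neighborhood reached
                break
            d = [math.sqrt(xi / si) for xi, si in zip(x, s)]
            dx = [-dpsi_l(vi) / (1 + di * di) for vi, di in zip(v, d)]
            ds = [di * di * dxi for di, dxi in zip(d, dx)]
            Dx = [xi * dxi / vi for xi, dxi, vi in zip(x, dx, v)]
            Ds = [si * dsi / vi for si, dsi, vi in zip(s, ds, v)]
            g2 = sum(dpsi_l(vi) ** 2 for vi in v)   # ||grad Psi_l||^2
            a = 1.0
            while True:                      # stay interior, Armijo decrease
                xn = [xi + a * t for xi, t in zip(x, Dx)]
                sn = [si + a * t for si, t in zip(s, Ds)]
                if all(t > 0 for t in xn + sn):
                    vn = [math.sqrt(xi * si / mu) for xi, si in zip(xn, sn)]
                    if Psi_l(vn) <= cur - 0.01 * a * g2 / 2:
                        break
                a /= 2
            x, s = xn, sn
    return x, s

x, s = solve_lcp_identity([-1.0, -2.0, 3.0])   # exact solution: x = (1, 2, 0), s = (0, 0, 3)
assert max(abs(x[0] - 1.0), abs(x[1] - 2.0), abs(s[2] - 3.0)) < 1e-3
```

With the kernel of Section 3 in place of $\psi_l$, only the three kernel routines change; the loop structure stays the same.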

3 New algorithm

Consider a class of kernel functions ψ(t) as follows:

$$\psi(t) := \frac{(\log p)(t^2 - 1)}{2} + \frac{1}{\sigma}\big(p^{\sigma(1-t)} - 1\big), \qquad p \ge e,\ \sigma \ge 1,\ t > 0.$$
(3.1)

Then we have the first three derivatives of ψ(t) as follows:

$$\psi'(t) = (\log p)\big(t - p^{\sigma(1-t)}\big), \qquad \psi''(t) = (\log p)\big(1 + \sigma(\log p)\,p^{\sigma(1-t)}\big), \qquad \psi'''(t) = -\sigma^2(\log p)^3\,p^{\sigma(1-t)}.$$
(3.2)

From (3.1) and (3.2), we have, for $p \ge e$, $\sigma \ge 1$ and $t > 0$,

$$\psi'(1) = \psi(1) = 0, \qquad \psi''(t) > \log p, \qquad \lim_{t \to 0^+}\psi(t) < \infty, \qquad \lim_{t \to \infty}\psi(t) = \infty.$$
(3.3)

By (3.3), $\psi(t)$ is strictly convex and attains its minimum value $0$ at $t = 1$.
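The properties in (3.3) are easy to confirm numerically; the sketch below (with illustrative parameter values $p = e$, $\sigma = 3$, chosen by us) checks them together with the derivative formula in (3.2) by finite differences:

```python
import math

p, sg = math.e, 3.0
lp = math.log(p)

def psi(t):   return lp * (t * t - 1) / 2 + (p ** (sg * (1 - t)) - 1) / sg
def dpsi(t):  return lp * (t - p ** (sg * (1 - t)))
def d2psi(t): return lp * (1 + sg * lp * p ** (sg * (1 - t)))

# psi(1) = psi'(1) = 0 and psi''(t) > log p (strict convexity)
assert abs(psi(1.0)) < 1e-12 and abs(dpsi(1.0)) < 1e-12
assert all(d2psi(t) > lp for t in (0.05, 0.5, 1.0, 5.0))

# psi has a finite value at the boundary t -> 0+ (a "finite barrier")
assert psi(1e-12) < (p ** sg - 1) / sg

# psi' agrees with a central finite difference of psi
h = 1e-6
for t in (0.5, 1.0, 2.0):
    assert abs((psi(t + h) - psi(t - h)) / (2 * h) - dpsi(t)) < 1e-6
```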

Lemma 3.1 Let $\psi(t)$ be defined as in (3.1). Then, for $p \ge e$ and $\sigma \ge 1$,

  1. (i)

    $t\psi''(t) + \psi'(t) > 0$, $\forall t > \frac{1}{\sigma\log p}$,

  2. (ii)

    $t\psi''(t) - \psi'(t) > 0$, $\forall t > 0$,

  3. (iii)

    $\psi'''(t) < 0$, $\forall t > 0$.

Proof For (i), using (3.2) with $t > \frac{1}{\sigma\log p}$, we have

$$t\psi''(t) + \psi'(t) = 2t(\log p) + \big(\sigma(\log p)t - 1\big)(\log p)\,p^{\sigma(1-t)} > 0.$$

For (ii), $t\psi''(t) - \psi'(t) = \big(\sigma(\log p)t + 1\big)(\log p)\,p^{\sigma(1-t)} > 0$, $\forall t > 0$.

For (iii), it is clear from (3.2). □

Corollary 3.2 Let $t_1, t_2 \ge \frac{1}{\sigma\log p}$. By Lemma 3.1(i) and Lemma 2.1.2 in [4], we have $\psi(\sqrt{t_1 t_2}) \le \frac{1}{2}\big(\psi(t_1) + \psi(t_2)\big)$, i.e., $\psi(t)$ is exponentially convex for all $t > \frac{1}{\sigma\log p}$.

Remark 3.3 By Lemma 3.1(ii), (iii), and Lemma 2.4 in [5], $\psi''(t)\psi'(\beta t) - \beta\psi'(t)\psi''(\beta t) > 0$, $\forall t > 0$, $\beta > 1$.

Lemma 3.4 For $\psi(t)$ as in (3.1), we have

$$\frac{\log p}{2}(t-1)^2 \le \psi(t) \le \frac{1}{2\log p}\big(\psi'(t)\big)^2, \qquad p \ge e,\ \sigma \ge 1,\ t > 0.$$

Proof Using (3.3), we have

$$\psi(t) = \int_1^t\!\int_1^\xi \psi''(\zeta)\,d\zeta\,d\xi \ge \int_1^t\!\int_1^\xi (\log p)\,d\zeta\,d\xi = \frac{\log p}{2}(t-1)^2$$

and

$$\psi(t) = \int_1^t\!\int_1^\xi \psi''(\zeta)\,d\zeta\,d\xi \le \frac{1}{\log p}\int_1^t\!\int_1^\xi \psi''(\xi)\psi''(\zeta)\,d\zeta\,d\xi = \frac{1}{\log p}\int_1^t \psi''(\xi)\psi'(\xi)\,d\xi = \frac{1}{\log p}\int_1^t \psi'(\xi)\,d\psi'(\xi) = \frac{1}{2\log p}\big(\psi'(t)\big)^2.$$

 □

Lemma 3.5 Let $\varrho : [0,\infty) \to [1,\infty)$ be the inverse function of $\psi(t)$ for $t \ge 1$. Then we have

$$\varrho(u) \le 1 + \sqrt{\frac{2u}{\log p}}, \qquad p \ge e,\ u \ge 0.$$

Proof Let $u := \psi(t)$ for $t \ge 1$. Then $\varrho(u) = t$. Using the first inequality in Lemma 3.4, we have $u = \psi(t) \ge \frac{\log p}{2}(t-1)^2$. Hence $t = \varrho(u) \le 1 + \sqrt{\frac{2u}{\log p}}$. □

Lemma 3.6 Let $\rho : [0,\infty) \to (0,1]$ be the inverse function of $-\frac{1}{2}\psi'(t)$ for $0 < t \le 1$. Then

$$\rho(z) \ge \frac{\sigma(\log p) - \log\big(\frac{2z}{\log p} + 1\big)}{\sigma\log p}, \qquad p \ge e,\ \sigma \ge 1,\ z \ge 0.$$

Proof Let $z := -\frac{1}{2}\psi'(t)$ for $0 < t \le 1$. By the definition of $\rho$, $\rho(z) = t$ for $z \ge 0$, and $2z = -\psi'(t)$. By (3.2) and $0 < t \le 1$, $p^{\sigma(1-t)} = \frac{2z}{\log p} + t \le \frac{2z}{\log p} + 1$. Hence $t = \rho(z) \ge \frac{\sigma(\log p) - \log\big(\frac{2z}{\log p} + 1\big)}{\sigma\log p}$. □
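Both inverse-function bounds can be checked numerically by inverting $\psi$ and $-\frac{1}{2}\psi'$ with bisection (illustrative parameter values; the helper names are ours):

```python
import math

p, sg = math.e, 3.0
lp = math.log(p)
psi  = lambda t: lp * (t * t - 1) / 2 + (p ** (sg * (1 - t)) - 1) / sg
dpsi = lambda t: lp * (t - p ** (sg * (1 - t)))

def bisect(f, lo, hi, it=200):
    # assumes f(lo) <= 0 <= f(hi); returns the sign-change point
    for _ in range(it):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def varrho(u):   # inverse of psi on [1, oo); upper endpoint valid by Lemma 3.4
    return bisect(lambda t: psi(t) - u, 1.0, 1.0 + math.sqrt(2 * u / lp) + 1e-9)

def rho(z):      # inverse of -psi'(t)/2 on (0, 1]
    return bisect(lambda t: z + dpsi(t) / 2, 1e-9, 1.0)

for u in (0.5, 2.0, 10.0):
    assert varrho(u) <= 1 + math.sqrt(2 * u / lp) + 1e-9      # Lemma 3.5
for z in (0.3, 1.0, 5.0):
    lower = (sg * lp - math.log(2 * z / lp + 1)) / (sg * lp)
    assert rho(z) >= lower - 1e-9                             # Lemma 3.6
```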

For $\psi(t)$ as in (3.1) and $v \in \mathbb{R}^n_{++}$, define

$$\Psi(v) := \Psi(x, s, \mu) := \sum_{i=1}^n \psi([v]_i).$$
(3.4)

Since $\Psi(v)$ is strictly convex and minimal at $v = e$, we have

$$\Psi(v) = 0 \iff v = e \iff x = x(\mu),\ s = s(\mu).$$

We use $\Psi(v)$ as the proximity function to measure the distance between the current iterate and the corresponding $\mu$-center. We also define the norm-based proximity measure $\delta(v)$ as follows:

$$\delta(v) := \frac{1}{2}\|\nabla\Psi(v)\| = \frac{1}{2}\|d_x + d_s\|.$$
(3.5)

Note that $\delta(v) = 0 \iff v = e \iff \Psi(v) = 0$. In this paper, we replace the right-hand side of (2.5), $-\nabla\Psi_l(v)$, by $-\nabla\Psi(v)$ with $\Psi(v)$ as in (3.4). This defines a new search direction and a new proximity function.

The following lemma relates the proximity measure $\delta(v)$ to the proximity function $\Psi(v)$.

Lemma 3.7 Let $\delta(v)$ and $\Psi(v)$ be defined as in (3.5) and (3.4), respectively. Then $\delta(v) \ge \sqrt{\frac{\log p}{2}\Psi(v)}$, $\forall v \in \mathbb{R}^n_{++}$, $p \ge e$.

Proof Using (3.5) and the second inequality of Lemma 3.4,

$$\delta^2(v) = \frac{1}{4}\sum_{i=1}^n \big(\psi'([v]_i)\big)^2 \ge \frac{\log p}{2}\sum_{i=1}^n \psi([v]_i) = \frac{\log p}{2}\Psi(v).$$

Hence we have $\delta(v) \ge \sqrt{\frac{\log p}{2}\Psi(v)}$. □
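A quick numerical illustration of Lemma 3.7 (the sample vectors and parameter values are chosen arbitrarily by us):

```python
import math

p, sg = math.e, 3.0
lp = math.log(p)
psi  = lambda t: lp * (t * t - 1) / 2 + (p ** (sg * (1 - t)) - 1) / sg
dpsi = lambda t: lp * (t - p ** (sg * (1 - t)))

for v in ([0.7, 1.1, 1.9], [0.4, 0.5, 3.0], [1.0, 1.0, 1.0]):
    Psi = sum(psi(t) for t in v)
    delta = 0.5 * math.sqrt(sum(dpsi(t) ** 2 for t in v))
    # delta(v) >= sqrt((log p)/2 * Psi(v))
    assert delta >= math.sqrt(lp / 2 * Psi) - 1e-12
```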

Lemma 3.8 Let $L \ge 8$ and $\Psi(v) \le L$. If $\sigma \ge 1 + 2\log(1+L)$ and $p \ge e$, then $v_1 \ge \frac{3}{2\sigma\log p}$.

Proof If $v_1 \ge 1$, then $v_1 \ge 1 > \frac{3}{2\sigma\log p}$. Suppose that $v_1 < 1$ and let $t := v_1$. Since $\Psi(v) \le L$, we have $\psi(t) \le L$, i.e., $\frac{(\log p)(t^2-1)}{2} + \frac{1}{\sigma}\big(p^{\sigma(1-t)} - 1\big) \le L$. This implies that $\frac{1}{\sigma}\big(p^{\sigma(1-t)} - 1\big) \le L + \frac{(\log p)(1-t^2)}{2} \le L + \frac{\log p}{2}$, and hence
$$p^{1-\sigma t} \le \frac{1 + \sigma\big(L + \frac{\log p}{2}\big)}{p^{\sigma - 1}}.$$
Let $g_1(\sigma) := \frac{1 + \sigma(L + \frac{\log p}{2})}{p^{\sigma-1}}$. Then $g_1(\sigma)$ is monotonically decreasing in $\sigma$. Since $\sigma \ge 1 + 2\log(1+L)$ and $p \ge e$,
$$p^{1-\sigma t} \le \frac{1 + \big(1 + 2\log(1+L)\big)\big(L + \frac{\log p}{2}\big)}{p^{2\log(1+L)}} \le \frac{1 + \big(1 + 2\log(1+L)\big)\big(L + \frac{\log p}{2}\big)}{e^{2\log(1+L)}} = \frac{1 + \big(1 + 2\log(1+L)\big)\big(L + \frac{\log p}{2}\big)}{(1+L)^2}.$$

Let $g_2(L) := \frac{1 + (1 + 2\log(1+L))(L + \frac{\log p}{2})}{(1+L)^2}$. Then $p^{1-\sigma t} \le g_2(L)$, and hence $t \ge \frac{1}{\sigma\log p}\log\frac{p}{g_2(L)}$. Let $g_3(p, L) := \frac{p}{g_2(L)} = \frac{p(1+L)^2}{1 + (1 + 2\log(1+L))(L + \frac{\log p}{2})}$. Then $g_3(p, L)$ is monotonically increasing in $p$ and in $L$. Since $p \ge e$ and $L \ge 8$, $g_3(p, L) \ge \frac{81e}{1 + 8.5(1 + 4\log 3)} \ge 4.6$. Hence $t \ge \frac{\log(4.6)}{\sigma\log p} \ge \frac{3}{2\sigma\log p}$. □

4 Complexity analysis

In this section we give the iteration complexity of the algorithm for large-update methods. For the complexity analysis we follow a framework similar to the one developed in [5] for LO problems. In the following we bound the growth of the barrier function during an outer iteration of the algorithm.

Using Lemma 3.1(ii), (iii), and Theorem 3.2 in [5], we obtain the following lemma.

Lemma 4.1 Let $\varrho$ be defined as in Lemma 3.5. If $\Psi(v) \le \tau$, then

$$\Psi(\beta v) \le n\psi\Big(\beta\,\varrho\Big(\frac{\tau}{n}\Big)\Big), \qquad v \in \mathbb{R}^n_{++},\ \beta \ge 1.$$
(4.1)

In the following theorem we compute an upper bound for $\Psi(v)$ after an update of the barrier parameter $\mu$.

Theorem 4.2 Let $0 < \theta < 1$ and $v_+ := \frac{v}{\sqrt{1-\theta}}$. If $\Psi(v) \le \tau$, then we have

$$\Psi(v_+) \le \frac{\theta(\log p)n + 2\sqrt{2\tau(\log p)n} + 2\tau}{2(1-\theta)}, \qquad p \ge e.$$

Proof Define $\psi_b(t) := \frac{1}{\sigma}\big(p^{\sigma(1-t)} - 1\big)$. Then $\psi(t) = \frac{(\log p)(t^2-1)}{2} + \psi_b(t)$, with $\psi_b'(t) = -(\log p)\,p^{\sigma(1-t)} < 0$ and $\psi_b(1) = 0$. Hence we have

$$\psi(t) \le \frac{(\log p)(t^2 - 1)}{2}, \qquad t \ge 1,\ p \ge e.$$
(4.2)

Using Lemma 4.1, (4.2), and Lemma 3.5, we have

$$\Psi(v_+) \le n\psi\Big(\frac{1}{\sqrt{1-\theta}}\varrho\Big(\frac{\tau}{n}\Big)\Big) \le \frac{n\log p}{2}\Big(\frac{\varrho^2(\frac{\tau}{n})}{1-\theta} - 1\Big) \le \frac{n\log p}{2}\Bigg(\frac{\big(1 + \sqrt{\frac{2\tau}{n\log p}}\big)^2}{1-\theta} - 1\Bigg) = \frac{\theta(\log p)n + 2\sqrt{2\tau(\log p)n} + 2\tau}{2(1-\theta)}.$$

 □

Define

$$\bar{\Psi}_0 := \frac{\theta(\log p)n + 2\sqrt{2\tau(\log p)n} + 2\tau}{2(1-\theta)}.$$
(4.3)

We will use $\bar{\Psi}_0$ as an upper bound for $\Psi(v)$ in large-update methods.
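The growth bound of Theorem 4.2 can be exercised numerically: sample $v$ with $\Psi(v) \le \tau$, apply the $\mu$-update scaling $v_+ = v/\sqrt{1-\theta}$, and compare against $\bar{\Psi}_0$ (all parameter values below are illustrative choices of ours):

```python
import math, random

p, sg, theta, tau, n = math.e, 3.0, 0.5, 2.0, 8
lp = math.log(p)
psi = lambda t: lp * (t * t - 1) / 2 + (p ** (sg * (1 - t)) - 1) / sg

# Psi_bar_0 as defined in (4.3)
bound = (theta * lp * n + 2 * math.sqrt(2 * tau * lp * n) + 2 * tau) / (2 * (1 - theta))

rng = random.Random(1)
for _ in range(500):
    while True:                    # rejection-sample v with Psi(v) <= tau
        v = [1 + rng.uniform(-0.3, 0.3) for _ in range(n)]
        if sum(psi(t) for t in v) <= tau:
            break
    vplus = [t / math.sqrt(1 - theta) for t in v]
    assert sum(psi(t) for t in vplus) <= bound + 1e-9
```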

Remark 4.3 Let $L := \bar{\Psi}_0$. Without loss of generality, we may assume that $L \ge 8$. Indeed, when $p \ge e$, $\tau \ge 1$ and $n \ge 2$,
$$L \ge \frac{\theta n + 2\sqrt{2\tau n} + 2\tau}{2(1-\theta)} \ge \frac{\theta n + 2\sqrt{2n} + 2}{2(1-\theta)} \ge \frac{\theta + 3}{1-\theta} > 8 \quad \text{if } \theta > \frac{5}{9}.$$
In the algorithm we take $\sigma := 1 + 2\log(1+L)$.

Remark 4.4 For large-update methods with $\tau = O(n)$ and $\theta = \Theta(1)$, we have $\bar{\Psi}_0 = O((\log p)n)$ and $\sigma = O(\log((\log p)n))$.

In the following we compute a default step size which keeps the iterates strictly feasible and decreases the value of the barrier function during inner iterations. For fixed $\mu$, if we take a step of size $\alpha$, the new iterate is $x_+ := x + \alpha\Delta x$, $s_+ := s + \alpha\Delta s$. Using (2.3), we have

$$x_+ = x\Big(e + \alpha\frac{\Delta x}{x}\Big) = x\Big(e + \alpha\frac{d_x}{v}\Big) = \frac{x}{v}(v + \alpha d_x), \qquad s_+ = s\Big(e + \alpha\frac{\Delta s}{s}\Big) = s\Big(e + \alpha\frac{d_s}{v}\Big) = \frac{s}{v}(v + \alpha d_s).$$

Thus we have

$$v_+ := \sqrt{\frac{x_+ s_+}{\mu}} = \sqrt{(v + \alpha d_x)(v + \alpha d_s)}.$$
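The identity $v_+ = \sqrt{(v+\alpha d_x)(v+\alpha d_s)}$ follows directly from the scaling (2.3); a componentwise numerical check on random data (our own sketch):

```python
import math, random

rng = random.Random(7)
n, mu, alpha = 5, 0.3, 0.4
x  = [rng.uniform(0.5, 2.0) for _ in range(n)]
s  = [rng.uniform(0.5, 2.0) for _ in range(n)]
Dx = [rng.uniform(-0.1, 0.1) for _ in range(n)]   # plays the role of Delta x
Ds = [rng.uniform(-0.1, 0.1) for _ in range(n)]   # plays the role of Delta s
v  = [math.sqrt(xi * si / mu) for xi, si in zip(x, s)]
dx = [vi * Dxi / xi for vi, Dxi, xi in zip(v, Dx, x)]
ds = [vi * Dsi / si for vi, Dsi, si in zip(v, Ds, s)]

lhs = [math.sqrt((xi + alpha * Dxi) * (si + alpha * Dsi) / mu)
       for xi, Dxi, si, Dsi in zip(x, Dx, s, Ds)]
rhs = [math.sqrt((vi + alpha * dxi) * (vi + alpha * dsi))
       for vi, dxi, dsi in zip(v, dx, ds)]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```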

Define for α>0,

$$f(\alpha) := \Psi(v_+) - \Psi(v).$$
(4.4)

Then $f(\alpha)$ is the difference of proximities between the new iterate and the current iterate for fixed $\mu$. Assume that, for some $\alpha \ge 0$, $[v]_i + \alpha[d_x]_i > \frac{1}{\sigma\log p}$ and $[v]_i + \alpha[d_s]_i > \frac{1}{\sigma\log p}$ for all $i \in I$. By Corollary 3.2,

$$\Psi(v_+) = \Psi\big(\sqrt{(v + \alpha d_x)(v + \alpha d_s)}\big) \le \frac{1}{2}\big(\Psi(v + \alpha d_x) + \Psi(v + \alpha d_s)\big).$$

Define

$$f_1(\alpha) := \frac{1}{2}\big(\Psi(v + \alpha d_x) + \Psi(v + \alpha d_s)\big) - \Psi(v).$$

Then $f(\alpha) \le f_1(\alpha)$ and $f(0) = f_1(0) = 0$. Taking the derivative of $f_1(\alpha)$ with respect to $\alpha$, we have

$$f_1'(\alpha) = \frac{1}{2}\sum_{i=1}^n \Big(\psi'\big([v]_i + \alpha[d_x]_i\big)[d_x]_i + \psi'\big([v]_i + \alpha[d_s]_i\big)[d_s]_i\Big).$$

Using (2.5) and (3.5), we have

$$f_1'(0) = \frac{1}{2}\nabla\Psi(v)^T(d_x + d_s) = -\frac{1}{2}\nabla\Psi(v)^T\nabla\Psi(v) = -2\delta^2(v).$$
(4.5)

Differentiating $f_1'(\alpha)$ with respect to $\alpha$, we have

$$f_1''(\alpha) = \frac{1}{2}\sum_{i=1}^n \Big(\psi''\big([v]_i + \alpha[d_x]_i\big)[d_x]_i^2 + \psi''\big([v]_i + \alpha[d_s]_i\big)[d_s]_i^2\Big).$$

Since $f_1''(\alpha) > 0$, $f_1(\alpha)$ is strictly convex in $\alpha$ unless $d_x = d_s = 0$. Since $M$ is a $P_*(\kappa)$-matrix and $M\Delta x = \Delta s$ from (2.2), we have, for $\Delta x \in \mathbb{R}^n$,

$$(1+4\kappa)\sum_{i \in I_+}[\Delta x]_i[\Delta s]_i + \sum_{i \in I_-}[\Delta x]_i[\Delta s]_i \ge 0,$$

where $I_+ = \{i \in I : [\Delta x]_i[\Delta s]_i \ge 0\}$ and $I_- = I \setminus I_+$. Since $d_x d_s = \frac{v^2\Delta x\Delta s}{xs} = \frac{\Delta x\Delta s}{\mu}$ and $\mu > 0$, we have

$$(1+4\kappa)\sum_{i \in I_+}[d_x]_i[d_s]_i + \sum_{i \in I_-}[d_x]_i[d_s]_i \ge 0.$$

For notational convenience, we write $\delta := \delta(v)$, $\Psi := \Psi(v)$, $\sigma_+ := \sum_{i \in I_+}[d_x]_i[d_s]_i$ and $\sigma_- := -\sum_{i \in I_-}[d_x]_i[d_s]_i$. To bound $\|d_x\|$ and $\|d_s\|$, we need the following technical lemma.

Lemma 4.5 (Modification of Lemma 4.1 in [8])

$$\sigma_+ \le \delta^2 \quad\text{and}\quad \sigma_- \le (1+4\kappa)\delta^2.$$

Lemma 4.6 (Modification of Lemma 4.2 in [8])

$$\sum_{i=1}^n\big([d_x]_i^2 + [d_s]_i^2\big) \le 4(1+2\kappa)\delta^2, \qquad \|d_x\| \le 2\delta\sqrt{1+2\kappa}, \qquad \|d_s\| \le 2\delta\sqrt{1+2\kappa}.$$

Using (4.5) and Lemma 4.6, we have the following lemma.

Lemma 4.7 (Modification of Lemma 4.3 in [8])

Let δ be defined as in (3.5). Then we have

$$f_1''(\alpha) \le 2(1+2\kappa)\delta^2\,\psi''\big(v_1 - 2\alpha\delta\sqrt{1+2\kappa}\big).$$

Using (4.5) and Lemma 4.7, we have the following lemma.

Lemma 4.8 (Modification of Lemma 4.4 in [8])

If the step size α satisfies the inequality

$$-\psi'\big(v_1 - 2\alpha\delta\sqrt{1+2\kappa}\big) + \psi'(v_1) \le \frac{2\delta}{\sqrt{1+2\kappa}},$$
(4.6)

then $f_1'(\alpha) \le 0$.

Lemma 4.9 (Modification of Lemma 4.5 in [8])

Let $\rho$ be defined as in Lemma 3.6 and $a := 1 + \frac{1}{\sqrt{1+2\kappa}}$. Then, in the worst case, the largest step size $\bar{\alpha}$ satisfying (4.6) is given by

$$\bar{\alpha} := \frac{1}{2\delta\sqrt{1+2\kappa}}\big(\rho(\delta) - \rho(a\delta)\big).$$
(4.7)

Lemma 4.10 (Modification of Lemma 4.6 in [8])

Let $\rho$ and $\bar{\alpha}$ be defined as in Lemma 4.9. Then

$$\bar{\alpha} \ge \frac{1}{(1+2\kappa)\,\psi''(\rho(a\delta))}.$$

Let

$$\tilde{\alpha} := \frac{1}{(1+2\kappa)\,\psi''(\rho(a\delta))}.$$
(4.8)

Letting $t := \rho(a\delta)$, we have $0 < t \le 1$ and $-\psi'(t) = 2a\delta$. By (3.2) and $1 < a \le 2$, we have

$$(\log p)\,p^{\sigma(1-t)} \le (\log p) + 4\delta.$$
(4.9)

By (4.8) with (3.2) and $\sigma \ge 1$, (4.9), and Lemma 3.7 with $\Psi \ge \tau \ge 1$ and $p \ge e$,

$$\tilde{\alpha} \ge \frac{1}{\sigma(1+2\kappa)(\log p)\big(1 + (\log p)p^{\sigma(1-t)}\big)} \ge \frac{1}{\sigma(1+2\kappa)(\log p)(1 + 4\delta + \log p)} \ge \frac{1}{\sigma\delta(1+2\kappa)(\log p)\big(\sqrt{2} + 4 + \sqrt{2}\log p\big)} \ge \frac{1}{2\sigma\delta(1+2\kappa)(\log p)^2(2 + \sqrt{2})}.$$

Define the default step size α ˆ as follows:

$$\hat{\alpha} := \frac{1}{2\sigma\delta(1+2\kappa)(\log p)^2(2 + \sqrt{2})}.$$
(4.10)

Using Lemma 4.6, Lemma 3.8, and (4.10), we have, for all $i \in I$,
$$[v]_i + \hat{\alpha}[d_x]_i \ge v_1 - 2\hat{\alpha}\delta\sqrt{1+2\kappa} \ge \frac{3}{2\sigma\log p} - \frac{1}{\sigma\sqrt{1+2\kappa}\,(\log p)^2(2+\sqrt{2})} \ge \frac{1}{\sigma\log p}\Big(\frac{3}{2} - \frac{1}{(2+\sqrt{2})\log p}\Big) \ge \frac{1}{\sigma\log p}\Big(\frac{3}{2} - \frac{1}{2+\sqrt{2}}\Big) > \frac{1}{\sigma\log p}.$$
In the same way, $[v]_i + \hat{\alpha}[d_s]_i > \frac{1}{\sigma\log p}$ for all $i \in I$. Hence we can use the exponential convexity of $\psi(t)$.

Lemma 4.11 (Lemma 1.3.3 in [4])

Let a function $h$ be twice differentiable and convex with $h(0) = 0$, $h'(0) < 0$, and let $h$ attain its (global) minimum at $t^* > 0$. If $h''(t)$ is monotonically increasing for $t \in [0, t^*]$, then

$$h(t) \le \frac{t\,h'(0)}{2}, \qquad 0 \le t \le t^*.$$

Using $f_1(0) = 0$, (4.5), $\psi''' < 0$, and Lemma 4.11, we have the following lemma.

Lemma 4.12 Let $\bar{\alpha}$ be defined as in (4.7). If the step size $\alpha$ satisfies $\alpha \le \bar{\alpha}$, then

$$f(\alpha) \le -\alpha\delta^2.$$

Using Lemma 4.12, (4.10), and Lemma 3.7, we have the following theorem.

Theorem 4.13 For $\hat{\alpha}$ as in (4.10), $f(\hat{\alpha}) \le -\dfrac{\Psi^{1/2}}{4(1+\sqrt{2})(1+2\kappa)\sigma(\log p)^{3/2}}$.

Proposition 4.14 (Proposition 1.3.2 in [4])

Let $t_0, t_1, \ldots, t_{\bar{K}}$ be a sequence of positive numbers such that

$$t_{k+1} \le t_k - \lambda t_k^{1-\gamma}, \qquad k = 0, 1, \ldots, \bar{K}-1,$$

where $\lambda > 0$ and $0 < \gamma \le 1$. Then $\bar{K} \le \dfrac{t_0^\gamma}{\gamma\lambda}$.

We define the value of $\Psi(v)$ just after a $\mu$-update as $\Psi_0$; the subsequent values within the same outer iteration are denoted $\Psi_k$, $k = 1, 2, \ldots$ . Then we have $\Psi_0 \le \bar{\Psi}_0$. If $K$ denotes the number of inner iterations per outer iteration, then $\Psi_{K-1} > \tau$ and $0 \le \Psi_K \le \tau$. In the following theorem we give a bound on the total number of iterations.

Theorem 4.15 Let a $P_*(\kappa)$-LCP be given. If $\tau \ge 1$, then the total number of iterations of the algorithm needed to obtain an $\epsilon$-approximate solution is bounded by

$$\frac{8(1+\sqrt{2})(1+2\kappa)\sigma(\log p)^{3/2}\,\bar{\Psi}_0^{1/2}}{\theta}\log\frac{n\mu_0}{\epsilon}.$$
(4.11)

Proof Using Proposition 4.14 with $\gamma := \frac{1}{2}$ and $\lambda := \frac{1}{4(1+\sqrt{2})(1+2\kappa)\sigma(\log p)^{3/2}}$, the number of inner iterations per outer iteration is bounded by $8(1+\sqrt{2})(1+2\kappa)\sigma(\log p)^{3/2}\bar{\Psi}_0^{1/2}$. If the central path parameter $\mu$ has initial value $\mu_0 > 0$ and is updated by multiplying by $1-\theta$, $0 < \theta < 1$, then after at most $\frac{1}{\theta}\log\frac{n\mu_0}{\epsilon}$ outer iterations we have $n\mu < \epsilon$ [18]. Multiplying the number of inner iterations by the number of outer iterations, the total number of iterations is bounded by $\frac{8(1+\sqrt{2})(1+2\kappa)\sigma(\log p)^{3/2}\bar{\Psi}_0^{1/2}}{\theta}\log\frac{n\mu_0}{\epsilon}$. □
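To get a feel for the bound (4.11), it can be evaluated numerically with $\sigma$ and $\bar{\Psi}_0$ chosen as in Remark 4.3 and (4.3) (the helper function and sample parameter values below are our own):

```python
import math

def iteration_bound(n, kappa, p, mu0, eps, theta, tau):
    """Evaluate the worst-case bound (4.11)."""
    lp = math.log(p)
    psi0 = (theta * lp * n + 2 * math.sqrt(2 * tau * lp * n) + 2 * tau) / (2 * (1 - theta))
    sigma = 1 + 2 * math.log(1 + psi0)       # the choice of Remark 4.3
    inner = 8 * (1 + math.sqrt(2)) * (1 + 2 * kappa) * sigma * lp ** 1.5 * math.sqrt(psi0)
    outer = math.log(n * mu0 / eps) / theta
    return inner * outer

b0 = iteration_bound(n=100, kappa=0.0, p=math.e, mu0=1.0, eps=1e-6, theta=0.9, tau=100)
b1 = iteration_bound(n=100, kappa=1.0, p=math.e, mu0=1.0, eps=1e-6, theta=0.9, tau=100)
assert b0 > 0
assert abs(b1 / b0 - 3.0) < 1e-9   # the bound scales linearly in 1 + 2*kappa
```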

5 Concluding remarks

Wang-Bai [16] defined a parametric version of the kernel function in [14], with the parameter in the growth term of the kernel function, and generalized the algorithm for LO to $P_*(\kappa)$-LCPs based on this kernel function. Ghami-Roos-Steihaug [19] extended the algorithm for LO to semidefinite optimization based on the kernel function in [16]. However, they obtained the best known complexity bound for large-update methods only when the kernel function reduces to the one in [14].

Motivated by this, we considered a parametric version of the kernel function in [14] with the parameter in the barrier term of the kernel function. For large-update methods, taking $\tau = O(n)$ and $\theta = \Theta(1)$, we obtained the iteration bound $O\big((1+2\kappa)(\log p)^3\sqrt{n}\,(\log n)\log\frac{n\mu_0}{\epsilon}\big)$ for $p \ge e$, which is the best known complexity bound for such methods.

Further research will concern numerical tests and extensions to more general problems.

References

  1. Cottle RW, Pang JS, Stone RE: The Linear Complementarity Problem. Academic Press, San Diego; 1992.

  2. Peng J, Roos C, Terlaky T: Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 2002, 93: 129-171. 10.1007/s101070200296

  3. Peng J, Roos C, Terlaky T: Primal-dual interior-point methods for second-order conic optimization based on self-regular proximities. SIAM J. Optim. 2002, 13: 179-203. 10.1137/S1052623401383236

  4. Peng J, Roos C, Terlaky T: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton; 2002.

  5. Bai YQ, Ghami ME, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15: 101-128. 10.1137/S1052623403423114

  6. Bai YQ, Lesaja G, Roos C: A new class of polynomial interior-point algorithms for $P_*(\kappa)$ linear complementarity problems. Pac. J. Optim. 2008, 4: 19-41.

  7. Cho GM: A new large-update interior point algorithm for $P_*(\kappa)$ linear complementarity problems. J. Comput. Appl. Math. 2008, 216: 256-278.

  8. Cho GM, Kim MK: A new large-update interior point algorithm for $P_*(\kappa)$ LCPs based on kernel functions. Appl. Math. Comput. 2006, 182: 1169-1183. 10.1016/j.amc.2006.04.060

  9. Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM J. 2007, 49: 259-270. 10.1017/S1446181100012827

  10. Amini K, Peyghami MR: Exploring complexity of large update interior-point methods for $P_*(\kappa)$ linear complementarity problem based on kernel function. Appl. Math. Comput. 2009, 207: 501-513. 10.1016/j.amc.2008.11.002

  11. Lesaja G, Roos C: Unified analysis of kernel-based interior-point methods for $P_*(\kappa)$-linear complementarity problems. SIAM J. Optim. 2010, 20: 3014-3039. 10.1137/090766735

  12. Wang GQ, Bai YQ: A class of polynomial interior-point algorithms for the Cartesian $P_*$-matrix linear complementarity problem over symmetric cones. J. Optim. Theory Appl. 2012, 152: 739-772. 10.1007/s10957-011-9938-8

  13. Wang GQ, Lesaja G: Full Nesterov-Todd step feasible interior-point method for the Cartesian $P_*(\kappa)$-SCLCP. Optim. Methods Softw. 2013, 28: 600-618. 10.1080/10556788.2013.781600

  14. Bai YQ, Ghami ME, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 2003, 13: 766-782.

  15. Ghami ME: New primal-dual interior-point methods based on kernel functions. Dissertation, Delft University of Technology (2005)

  16. Wang GQ, Bai YQ: Polynomial interior-point algorithms for $P_*(\kappa)$ horizontal linear complementarity problem. J. Comput. Appl. Math. 2009, 233: 248-263. 10.1016/j.cam.2009.07.014

  17. Kojima M, Megiddo N, Noma T, Yoshise A: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538. Springer, Berlin; 1991.

  18. Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization: An Interior Approach. Wiley, Chichester; 1997.

  19. Ghami ME, Roos C, Steihaug T: A generic primal-dual interior point method for semidefinite optimization based on a new class of kernel functions. Optim. Methods Softw. 2010, 25: 387-403. 10.1080/10556780903239048


Acknowledgements

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2010094).

Author information


Correspondence to Gyeong-Mi Cho.

Additional information

Competing interests

The author declares that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Cho, GM. Large-update interior point algorithm for $P_*$-linear complementarity problem. J Inequal Appl 2014, 363 (2014). https://doi.org/10.1186/1029-242X-2014-363
