
A note on the optimality condition for a bilevel programming

Abstract

The equality type Mordukhovich coderivative rule for the solution mapping of a second-order cone constrained parametric variational inequality is derived under the constraint nondegenerate condition; this improves a recently published result. The rule established is then applied to derive a necessary and sufficient local optimality condition for a bilevel programming problem whose lower level problem is second-order cone constrained.

1 Introduction

In this paper, we focus on the following bilevel programming (BP):

$$ \begin{aligned} &\min f(x,y) \\ &\quad \mbox{s.t. } y \in S(x), \end{aligned} $$
(1)

where \(f(\cdot,\cdot):\Re^{n}\times\Re^{m}\rightarrow\Re\) is continuously differentiable and \(S(x)\) is the optimal solution set of the following problem:

$$ \begin{aligned} &\min \psi(x,y) \\ &\quad \mbox{s.t. } A(x)y+b\in\mathcal{K}^{p} \end{aligned} $$
(2)

with \(\psi(\cdot,\cdot):\Re^{n}\times\Re^{m}\rightarrow\Re\) being a continuously differentiable convex function, \(b\in\Re^{p}\), \(A(\cdot):\Re^{n}\rightarrow\Re^{p\times m}\), and \(\mathcal {K}^{p}\subseteq \Re^{p}\) being the second-order cone (SOC), also called the Lorentz cone, defined by

$$\mathcal{K}^{p}:=\bigl\{ z=(z_{0}, z_{1})\in\Re \times\Re^{p-1}: z_{0}\geq\| z_{1}\| \bigr\} , $$

where \(\|\cdot\|\) stands for the Euclidean norm. If \(p=1\), then \(\mathcal{K}^{p}\) reduces to the set of nonnegative reals \(\Re_{+}\); in this case, problem (1) is the bilevel programming problem studied in [1] and [2]. If the lower level problem of BP is replaced by its KKT conditions, problem (1) becomes a mathematical program with a second-order cone complementarity problem among the constraints [3].
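Numerically, membership in \(\mathcal{K}^{p}\) is a one-line test. The following small Python helper is an illustration added here (it is not part of the original paper) and simply checks \(z_{0}\geq\|z_{1}\|\); for \(p=1\) it reduces to \(z\geq0\).

```python
import numpy as np

def in_soc(z):
    """Return True if z = (z_0, z_1) lies in the second-order cone K^p,
    i.e. z_0 >= ||z_1||; for p = 1 this is simply z >= 0."""
    z = np.asarray(z, dtype=float)
    return z[0] >= np.linalg.norm(z[1:])

# (1, 0.5) is in K^2, (1, 1) lies on its boundary, (-1, 3) is outside.
print(in_soc([1.0, 0.5]), in_soc([1.0, 1.0]), in_soc([-1.0, 3.0]))
```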

Since for fixed \(x\in\Re^{n}\), problem (2) is a convex problem, the solution mapping \(S(\cdot)\) in (1) can be rewritten as

$$ S(x):= \bigl\{ y\in\Re^{m}: \bigl\langle F(x,y), y'-y\bigr\rangle \geq0, \forall y'\in\Omega (x) \bigr\} , $$
(3)

where \(F(x,y):=\nabla_{y}\psi(x,y)\) and \(\Omega: \Re^{n} \rightrightarrows\Re^{m}\) is a convex-valued multifunction defined by

$$ \Omega(x)=\bigl\{ y\in\Re^{m} :A(x)y+b\in \mathcal{K}^{p}\bigr\} . $$
(4)

For fixed x, \(S(x)\) is the solution set of a variational inequality problem; such problems have been studied intensively in [4–8].
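The reformulation (3) is the standard first-order characterization of minimizers of a smooth convex function over a convex set. For the reader's convenience we record the (standard, not paper-specific) chain of equivalences, valid for each fixed x:

$$ y\in S(x) \quad \Longleftrightarrow\quad \psi\bigl(x,y'\bigr)\geq\psi(x,y),\quad \forall y'\in\Omega(x) \quad \Longleftrightarrow\quad \bigl\langle \nabla_{y}\psi(x,y), y'-y\bigr\rangle \geq0,\quad \forall y'\in\Omega(x), $$

which is exactly (3) with \(F(x,y)=\nabla_{y}\psi(x,y)\).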

To establish a necessary and sufficient local optimality condition for the bilevel programming (1), a crucial step is to compute a generalized derivative of the solution mapping \(S(\cdot)\) defined by (3). The generalized derivative used in our study is the Mordukhovich coderivative [4], which plays an important role in characterizing the metric regularity and openness properties of set-valued mappings; see [9] and the references therein.

Mordukhovich and Outrata [5] established upper estimates of the coderivative of the solution mapping (3) with \(\mathcal{K}^{p}\) being a general closed convex set, under appropriate calmness assumptions and constraint qualifications. However, equality type calculus rules for the coderivative of the solution mapping S in (3) were not given there. Recently, Zhang et al. [10] established equality type calculus rules for the coderivative of the solution mapping S in (3) under the constraint nondegenerate condition and applied them to derive a necessary and sufficient condition for the Lipschitz-like property [4] of the solution mapping S in (3).

In this paper, an equality type representation of the coderivative of the solution mapping S in (3) is established under conditions weaker than those of [10], Theorem 3.2, and it is then used to obtain a necessary and sufficient local optimality condition for the bilevel programming (1). This is done on the basis of an exact description of the coderivative of the normal cone mapping of the second-order cone.

This paper is organized as follows. Section 2 gives the preliminaries needed throughout the paper. In Section 3, the main results are established: the equality type calculus rule for the coderivative of the solution mapping S in (3) is derived and then used to obtain the optimality condition for the bilevel programming (1). Some examples are provided.

2 Preliminaries

Throughout this paper we use the following notation. For an extended real-valued function \(\varphi: \Re^{n}\rightarrow \Re\cup\{\pm\infty\}\), \(\nabla\varphi(x)\) denotes the gradient of φ at x. For a continuously differentiable mapping \(\phi:\Re^{n}\rightarrow\Re^{m} \), \(\mathcal{J}\phi(x)\) denotes the Jacobian of ϕ at x. We use \(\mathbb{B}_{n}\), \(\|\cdot\|\) and \(\Re_{+}\) to stand for the closed unit ball in \(\Re^{n}\), the Euclidean norm and the nonnegative reals, respectively. Moreover, \([| x| ]=\{tx: t\in\Re\}\), \(S^{\bot}=\{\eta\in\Re^{n}: \langle\eta, x\rangle=0, \forall x\in S\}\), \(\operatorname{Sp}(S)=\Re_{+}(S-S)\) and \(\operatorname{lin}(C)\) denote the linear space generated by the vector x, the orthogonal complement of the set \(S\subseteq\Re^{n}\), the linear space generated by S and the lineality space of the convex cone C, respectively.

Given a closed set \(\Xi\subset\Re^{n}\) and a point \(\bar{x}\in \Xi\), the Mordukhovich limiting normal cone to Ξ at x̄ is defined by

$$N_{\Xi}(\bar{x}):=\limsup_{x\stackrel{\Xi}{\rightarrow} \bar{x}}\widehat {N}_{\Xi}(x), $$

see for instance [11] and [4], where the cone

$$\widehat{N}_{\Xi}(\bar{x}):= \biggl\{ x^{*}\in\Re^{n} \Bigm| \limsup_{x\stackrel{\Xi}{\rightarrow}\bar{x}}\frac{\langle x^{*},x-\bar{x}\rangle}{\|x-\bar{x}\|}\leq0 \biggr\} $$

is called the regular normal cone to Ξ at x̄, where ‘lim sup’ stands for the outer limit of a set-valued mapping or the upper limit of a real-valued function; see [11]. It follows from the definitions that \(\widehat{N}_{\Xi}(\bar{x})\subseteq N_{\Xi}(\bar{x})\). If this inclusion holds as an equality, we say that Ξ is normally regular at x̄ (Clarke regular in the terminology of [11]). According to [11], Theorem 6.9, every convex set is normally regular at all of its points.
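As a simple illustration of the difference between the two cones (a textbook-type example, not taken from this paper), consider the nonconvex set \(\Xi=\{(x_{1},x_{2})\in\Re^{2}: x_{1}x_{2}=0\}\), the union of the two coordinate axes. At the origin,

$$\widehat{N}_{\Xi}(0)=\{0\},\qquad N_{\Xi}(0)=\bigl(\Re\times\{0\}\bigr)\cup\bigl(\{0\}\times\Re\bigr), $$

so the inclusion is strict and Ξ is not normally regular at 0, whereas it is normally regular at every other point of Ξ.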

For set-valued maps, the definition of the coderivative was introduced by Mordukhovich in [12] based on the Mordukhovich limiting normal cone.

Definition 2.1

Consider a mapping \(S:\Re^{n}\rightrightarrows\Re^{m}\) and a point \(\bar{x}\in \operatorname{dom}S\). The coderivative of S at x̄ for any \(\bar{u} \in S(\bar{x})\) is the mapping \(D^{*}S(\bar{x},\bar{u}):\Re^{m}\rightrightarrows \Re^{n}\) defined by

$$D^{*}S(\bar{x},\bar{u}) (y) = \bigl\{ v: (v,-y)\in N_{\operatorname{gph}S}(\bar{x}, \bar{u}) \bigr\} . $$

The notation \(D^{*}S(\bar{x},\bar{u})\) is simplified to \(D^{*}S(\bar{x})\) when S is single-valued at x̄, \(S(\bar{x})=\{\bar{u}\}\). Similarly, and with the same provision for simplified notation, the regular coderivative \(\widehat{D}^{*}S(\bar{x},\bar{u}):\Re ^{m}\rightrightarrows \Re^{n}\) is defined by

$$\widehat{D}^{*}S(\bar{x},\bar{u}) (y) = \bigl\{ v: (v,-y)\in \widehat{N}_{\operatorname{gph}S}( \bar{x},\bar{u}) \bigr\} . $$
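For orientation, we recall a standard fact (not specific to this paper): if S is single-valued and continuously differentiable at x̄, then both coderivatives reduce to the adjoint Jacobian,

$$D^{*}S(\bar{x}) \bigl(y^{*}\bigr)=\widehat{D}^{*}S(\bar{x}) \bigl(y^{*}\bigr)=\bigl\{ \mathcal{J}S(\bar{x})^{T}y^{*}\bigr\} \quad \mbox{for all } y^{*}\in\Re^{m}, $$

so the coderivative is a genuine set-valued extension of the usual adjoint derivative.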

The following proposition describes the coderivative of the normal cone mapping \(N_{\mathcal{K}^{p}}\), which is needed below.

Proposition 2.1

([10], Proposition 2.1)

For any \((\bar{x},\bar{y})\in\operatorname{gph}N_{\mathcal{K}^{p}}\), let \(\bar{z}=\bar{x}+\bar{y}\).

  1. (1)

    In the case when \(\bar{x} \ne0\), \(\bar{y} \ne0\), we have

    $$ {\widehat{D}}^{*}N_{\mathcal{K}^{p}}(\bar{x},\bar {y}) \bigl(y^{*}\bigr)={D}^{*}N_{\mathcal{K}^{p}}( \bar{x},\bar{y}) \bigl(y^{*}\bigr)=\left \{ \textstyle\begin{array}{l@{\quad}l} [| \eta| ]+ \frac{\bar{z}_{0}-\|\bar{z}_{1}\| }{\bar{z}_{0}+\|\bar{z}_{1}\|} \bigl( {\scriptsize\begin{matrix}{} \frac{\bar{z}_{1}^{T}y^{*}_{1}}{\|\bar{z}_{1}\|}\cr -y^{*}_{1} \end{matrix}} \bigr), & \eta^{T}y^{*}=0, \\ \emptyset, & \textit{otherwise}, \end{array}\displaystyle \right . $$

    where \(\eta= (1, -\frac{\bar{z}_{1}^{T}}{\|\bar{z}_{1}\|} )^{T}\).

  2. (2)

    In the case when \(\bar{z} \in\operatorname{int}(\mathcal {K}^{p})^{-}\), we have

    $$ {\widehat{D}}^{*}N_{\mathcal{K}^{p}}(\bar{x},\bar {y}) \bigl(y^{*}\bigr)={D}^{*}N_{\mathcal{K}^{p}}( \bar{x},\bar{y}) \bigl(y^{*}\bigr)=\left \{ \textstyle\begin{array}{l@{\quad}l} \Re^{p},& y^{*}=0, \\ \emptyset, & y^{*}\neq0. \end{array}\displaystyle \right . $$
  3. (3)

    In the case when \(\bar{z} \in\operatorname{int}\mathcal {K}^{p}\), we have

    $$ {\widehat{D}}^{*}N_{\mathcal{K}^{p}}(\bar{x},\bar {y}) \bigl(y^{*}\bigr)={D}^{*}N_{\mathcal{K}^{p}}( \bar{x},\bar{y}) \bigl(y^{*}\bigr)= \{0\}_{p} $$

    for any \(y^{*}\in\Re^{p}\).
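The formula in case (1) is easy to evaluate numerically. The following Python sketch (our illustration, not code from the paper) tests membership of a vector w in \({D}^{*}N_{\mathcal{K}^{p}}(\bar{x},\bar{y})(y^{*})\) exactly as case (1) prescribes, working with \(\bar{z}=\bar{x}+\bar{y}\).

```python
import numpy as np

def in_coderivative_case1(z_bar, y_star, w, tol=1e-9):
    """Membership test for case (1) of Proposition 2.1, with z_bar = x_bar + y_bar
    (z_bar_1 != 0 and z_bar_0 + ||z_bar_1|| != 0 are assumed).  The value is the
    line [|eta|] shifted by a fixed vector when eta^T y* = 0, and empty otherwise."""
    z_bar, y_star, w = (np.asarray(v, dtype=float) for v in (z_bar, y_star, w))
    z0, z1 = z_bar[0], z_bar[1:]
    nz1 = np.linalg.norm(z1)
    eta = np.concatenate(([1.0], -z1 / nz1))
    if abs(eta @ y_star) > tol:            # eta^T y* != 0  ->  the value is empty
        return False
    coef = (z0 - nz1) / (z0 + nz1)
    shift = coef * np.concatenate(([z1 @ y_star[1:] / nz1], -y_star[1:]))
    d = w - shift                          # the remainder must lie on the line [|eta|]
    return np.linalg.norm(d - (d @ eta) / (eta @ eta) * eta) <= tol
```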

We need the following stability notions; see [4].

Definition 2.2

Consider the multifunction \(F : \Re ^{m} \rightrightarrows \Re^{n}\).

  1. (a)

(Lipschitz-like property) We say that F has the Lipschitz-like property at \((\bar{y},\bar{x})\in\operatorname{gph} F\) if there exist some \(\kappa > 0\) and some neighborhoods U of x̄ and V of ȳ such that

    $$F\bigl(y'\bigr) \cap U \subset F(y)+ \kappa\bigl\Vert y'-y\bigr\Vert \mathbb{B}_{n}\quad \mbox{for all } y, y' \in V. $$
  2. (b)

(Calmness) We say that F is calm at \((\bar{y},\bar{x})\in\operatorname{gph} F\) if there exist some \(k > 0\) and some neighborhoods U of x̄ and V of ȳ such that

    $$d\bigl(x, F(\bar{y})\bigr)\leq k \| y-\bar{y}\|\quad \mbox{for all } y\in V , x \in F(y)\cap U. $$

We know from the definitions that the calmness property is weaker than the Lipschitz-like property. As shown in [11], Theorem 9.43, F has the Lipschitz-like property at \((\bar{y},\bar{x})\in\operatorname{gph} F\) if and only if the coderivative condition

$$ D^{*}F(\bar{y},\bar{x}) (0)=\{0\}, $$
(5)

holds; see also [13], Proposition 2.8. This condition is the well-known Mordukhovich criterion; see [11], Theorem 9.40.
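To see that calmness is indeed strictly weaker than the Lipschitz-like property, consider the simple illustration (not taken from this paper) \(F(y)=\{x\in\Re: xy=0\}\) at \((\bar{y},\bar{x})=(0,0)\). Since \(F(0)=\Re\), the distance \(d(x,F(0))\) vanishes and F is trivially calm at (0,0). However, \(F(y)=\{0\}\) for every \(y\neq0\), so no neighborhood U of 0 satisfies \(F(0)\cap U\subset F(y)+\kappa|y|\mathbb{B}_{1}\) for all small \(y\neq0\), and F is not Lipschitz-like at (0,0); accordingly, \(D^{*}F(0,0)(0)=\Re\neq\{0\}\), so the criterion (5) fails.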

When the constraint set is structured, its normal cone can be estimated or computed under a calmness condition, as the following proposition shows.

Proposition 2.2

([14], Theorem 4.1)

Assume the multifunction \(M: \Re^{n_{2}}\rightrightarrows\Re^{n_{1}}\), defined by

$$M(q):=\bigl\{ z\in Z: G(z)+q\in K\bigr\} $$

for closed sets \(Z\subseteq \Re^{n_{1}}\) and \(K\subseteq\Re^{n_{2}}\) and a \(\mathcal{C}^{1}\) mapping \(G:\Re^{n_{1}}\rightarrow\Re^{n_{2}}\), is calm at \((0, \bar{z})\in \operatorname{gph} M\). Then one has

$$ N_{M(0)}(\bar{z})\subseteq \mathcal{J}G( \bar{z})^{T}N_{K}\bigl(G(\bar{z})\bigr)+N_{Z}( \bar{z}). $$
(6)

We know from [11], Theorem 6.14 that

$$\widehat{N}_{M(0)}(\bar{z})\supseteq \mathcal{J}G(\bar{z})^{T} \widehat{N}_{K}\bigl(G(\bar{z})\bigr)+\widehat{N}_{Z}( \bar{z}). $$

Thus if, in addition, Z is normally regular at z̄ and K is normally regular at \(G(\bar{z})\), then \(M(0)\) is normally regular at z̄ and the inclusion in (6) becomes an equality.

We know from [4], Theorem 4.32 that M defined in Proposition 2.2 is Lipschitz-like around \((0,\bar{z})\in\operatorname{gph} M\) if the following constraint qualification holds:

$$ \left . \textstyle\begin{array}{l} 0\in\mathcal{J}G(\bar{z})^{T}\eta+ N_{Z}(\bar{z}), \\ \eta\in N_{K}(G(\bar{z})) \end{array}\displaystyle \right \} \quad \Rightarrow\quad \eta=0. $$
(7)
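For orientation, we note a standard specialization (not stated in the paper): when \(Z=\Re^{n_{1}}\) and \(K=\{0\}\subseteq\Re^{n_{2}}\), so that M(q) describes the equality constraint \(G(z)+q=0\), we have \(N_{Z}(\bar{z})=\{0\}\) and \(N_{K}(G(\bar{z}))=\Re^{n_{2}}\), and condition (7) reads

$$\mathcal{J}G(\bar{z})^{T}\eta=0\quad \Rightarrow\quad \eta=0, $$

i.e., \(\mathcal{J}G(\bar{z})\) has full row rank, the classical regularity condition for equality constraints.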

3 Main results

In this section, we provide conditions ensuring the equality type calculus rule for the coderivative of the solution mapping S in (3), which improves [10], Theorem 3.2. The result obtained is then used to derive a necessary and sufficient local optimality condition for the bilevel programming (1).

We know from the definition of the normal cone in convex analysis that the solution mapping S in (3) can be rewritten as

$$S(x)=\bigl\{ y\in\Re^{m}: 0\in F(x,y)+N_{\Omega(x)}(y)\bigr\} , $$

where \(N_{\Omega(x)}(y)\) denotes the normal cone of \(\Omega(x)\) at y. For a parameter \(\bar{x}\in\Re^{n}\), if the following Slater constraint qualification (SCQ) is satisfied at x̄:

$$ \exists \bar{y}\in\Re^{m} \quad \mbox{such that}\quad A( \bar{x})\bar{y}+b\in\operatorname{int}\mathcal{K}^{p}, $$
(8)

then, by [11], Theorem 10.49, we can compute the normal cone \(N_{\Omega(\bar{x})}(y)\) at \(y\in\Omega(\bar{x})\) and obtain

$$ N_{\Omega(\bar{x})}(y)=A(\bar{x})^{T}N_{\mathcal{K}^{p}} \bigl(A(\bar{x})y+b\bigr). $$
(9)
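Formula (9) reduces the computation of \(N_{\Omega(\bar{x})}(y)\) to that of \(N_{\mathcal{K}^{p}}(z)\) at \(z=A(\bar{x})y+b\), which is available in closed form: it equals \(\{0\}\) if \(z\in\operatorname{int}\mathcal{K}^{p}\), the polar cone \(-\mathcal{K}^{p}\) if \(z=0\), and the ray \(\Re_{+}(-z_{0},z_{1})\) if z lies on the boundary with \(z\neq0\). The Python sketch below (our illustration, based on these standard facts rather than on code from the paper) returns a generator of the ray in the boundary case.

```python
import numpy as np

def soc_normal_cone(z, tol=1e-9):
    """Describe N_{K^p}(z) for z in K^p using the standard closed-form cases:
    interior point -> {0};  z = 0 -> the polar cone -K^p;
    boundary point z != 0 -> the ray R_+ * (-z_0, z_1), returned via one generator."""
    z = np.asarray(z, dtype=float)
    z0, z1 = z[0], z[1:]
    nz1 = np.linalg.norm(z1)
    if z0 < nz1 - tol:
        raise ValueError("z is not in K^p")
    if z0 > nz1 + tol:
        return ("{0}", np.zeros_like(z))
    if nz1 <= tol:                              # z = 0
        return ("polar cone -K^p", None)
    return ("ray", np.concatenate(([-z0], z1)))

# With the data of Example 3.1 below, z = A(x_bar) y_bar + b = (1, 1) is a boundary
# point, and the ray generated by (-1, 1) indeed contains lambda_bar = (-2, 2).
print(soc_normal_cone([1.0, 1.0]))
```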

We need the following conditions, which are commonly used in second-order cone programming (SOCP).

Definition 3.1

Let \(\bar{x}\in\Re^{n}\), \(\bar{y}\in \Omega(\bar{x})\) and \(\bar{v}\in N_{\Omega(\bar{x})}(\bar{y})\).

  1. (a)

    We say that the constraint nondegenerate condition (\(\mathcal{CNC}\)) holds true at ȳ for x̄, if

    $$ A(\bar{x})\Re^{m}+\operatorname{lin} \bigl(T_{\mathcal{K}^{p}}\bigl(A(\bar {x})\bar{y}+b\bigr) \bigr)=\Re^{p}. $$
    (10)
  2. (b)

    We say the strict complementarity (\(\mathcal{SC}\)) condition holds at \((\bar{x},\bar{y},\bar{v})\), if

    $$\lambda\in\operatorname{ri} N_{\mathcal{K}^{p}}\bigl(A(\bar{x})\bar{y}+b\bigr) $$

    for all λ satisfying \(\lambda\in N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\) and \(A(\bar{x})^{T}\lambda =\bar{v}\).

We introduce the Lagrangian mapping \(\mathcal{L}:\Re^{n}\times\Re^{m}\times\Re^{p}\rightarrow\Re^{m}\) defined by

$$ \mathcal{L}(x,y,\lambda ):=F(x,y)+A(x)^{T}\lambda $$
(11)

and the Lagrangian multiplier mapping \(\Lambda:\Re^{n}\times\Re^{m}\rightrightarrows\Re^{p}\) defined by

$$\Lambda(x,y):= \bigl\{ \lambda\in\Re^{p} \mid \mathcal {L}(x,y, \lambda)=0 \bigr\} . $$
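Numerically, \(\Lambda(x,y)\) is just the solution set of the linear system \(A(x)^{T}\lambda=-F(x,y)\). The following minimal Python sketch (ours, not from the paper) computes the minimum-norm solution and checks via the residual whether \(\Lambda(x,y)\) is nonempty.

```python
import numpy as np

def multiplier(F_xy, A_x, tol=1e-9):
    """Return (lam, nonempty): lam is the minimum-norm least-squares solution of
    A(x)^T lam = -F(x,y); nonempty indicates whether Lambda(x,y) is nonempty."""
    F_xy, A_x = np.asarray(F_xy, dtype=float), np.asarray(A_x, dtype=float)
    lam = np.linalg.lstsq(A_x.T, -F_xy, rcond=None)[0]
    return lam, bool(np.linalg.norm(A_x.T @ lam + F_xy) <= tol)

# With the data of Example 3.1 below: F(x_bar, y_bar) = (2, -2), A(x_bar) = I_2,
# which gives the unique multiplier lambda_bar = (-2, 2).
print(multiplier([2.0, -2.0], np.eye(2)))
```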

In [10], Theorem 3.2, an equality type representation of the coderivative of the solution mapping S in (3) was established under some constraint qualifications; we cite it here as a lemma.

Lemma 3.1

Assume the SCQ (8) holds for x̄ and the multifunction \(P: \Re^{m}\times\Re^{p}\rightrightarrows\Re^{n}\times\Re^{m}\times\Re^{p}\) defined by

$$ P(\gamma,q):= \bigl\{ (x,y,\lambda)\in\Re^{n}\times \Re^{m}\times \Re^{p} \mid \mathcal{L}(x,y,\lambda)+\gamma=0 \bigr\} \cap M(q) $$
(12)

is calm at the points \((0, 0, \bar{x}, \bar{y}, \bar{\lambda})\) with \(\bar{\lambda}\in\Lambda(\bar{x}, \bar{y})\cap N_{\mathcal {K}^{p}}(A(\bar{x})\bar{y}+b)\), where the multifunction \(M: \Re^{2p}\rightrightarrows\Re^{n+m+p}\) is defined by

$$ M(q):=\left \{(x,y,\lambda)\in\Re ^{n}\times \Re^{m}\times \Re^{p} \biggm| q+ \left( \textstyle\begin{array}{@{}c@{}} A(x)y+b \\ \lambda \end{array}\displaystyle \right)\in\operatorname{gph} N_{\mathcal{K}^{p}} \right \}. $$
(13)

Then:

  1. (a)

    In the case when \(\bar{z}_{0}> \|\bar{z}_{1}\|\), where \(\bar {z}:=A(\bar{x})\bar{y}+b\), we have for any \(\bar{\lambda}\in \Lambda(\bar{x}, \bar{y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\),

    $$\begin{aligned} D^{*}S(\bar{x},\bar{y}) \bigl(y^{*}\bigr) =& \bigl\{ \bigl( \mathcal{J}_{x}\mathcal{L}(\bar{x},\bar{y},\bar {\lambda}) \bigr)^{T}u+\bigl(\mathcal{J}_{x}\bigl(A(\bar{x})\bar{y} \bigr)\bigr)^{T}w \mid \\ &0\in y^{*}+\bigl(\mathcal{J}_{y}\mathcal{L}(\bar{x},\bar{y},\bar{ \lambda })\bigr)^{T}u+A(\bar{x})^{T}w, \\ &w\in D^{*}N_{\mathcal{K}^{p}}\bigl(A(\bar{x})\bar{y}+b,\bar{\lambda }\bigr) \bigl(A(\bar{x})u\bigr) \bigr\} \end{aligned}$$
    (14)

    holds for all \(y^{*}\in\Re^{m}\).

  2. (b)

In the case when \(\bar{z}_{0}= \|\bar{z}_{1}\|\), if the mapping \(M(\cdot)\) in (13) is calm at \((0,\bar{x}, \bar{y}, \bar{\lambda})\) for any \(\bar{\lambda}\in\Lambda(\bar{x}, \bar{y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\), the \(\mathcal {CNC}\) (10) holds at ȳ for x̄, and the \(\mathcal{SC}\) condition holds at \((\bar{x},\bar{y},-F(\bar{x},\bar{y}))\), then equality (14) holds for any \(\bar{\lambda}\in\Lambda(\bar{x}, \bar{y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\).

Under conditions weaker than the ones in (b) of Theorem 3.2 in [10], we obtain the same equality type coderivative rule as follows.

Theorem 3.1

Assume:

  1. (a)

    SCQ (8) holds for x̄ and \(P(\gamma ,q)\) (12) is calm at the points \((0,0,\bar{x},\bar {y},\bar{\lambda})\) with \(\bar{\lambda}\in\Lambda(\bar{x},\bar {y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\).

  2. (b)

    \(\mathcal{CNC}\) (10) holds at ȳ for x̄ and \(\mathcal{SC}\) condition holds at \((\bar{x},\bar{y},-F(\bar {x},\bar{y}))\).

Then in the case when \(\bar{z}_{0}=\|\bar{z}_{1}\|\), the equality (14) holds for any \(\bar{\lambda}\in\Lambda(\bar{x},\bar {y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\).

Proof

According to Lemma 3.1(b), we need to show under conditions (a) and (b) that the mapping \(M(\cdot)\) (13) is calm at \((0,\bar{x},\bar{y},\bar{\lambda})\) for any \(\bar{\lambda}\in \Lambda(\bar{x},\bar{y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar {y}+b)\). We know from Definition 2.2 that the calmness of \(M(\cdot)\) at \((0,\bar{x},\bar{y},\bar{\lambda})\) is ensured by the Lipschitz-like property of \(M(\cdot)\) at \((0,\bar{x},\bar {y},\bar{\lambda})\), which holds under the condition

$$ \left . \textstyle\begin{array}{l} 0=\mathcal{J}(A(x)y)^{T}|_{(x,y)=(\bar{x},\bar{y})}\eta, \\ \eta\in D^{*}N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b,\bar{\lambda})(0) \end{array}\displaystyle \right \} \quad \Rightarrow\quad \eta=0. $$
(15)

Indeed, notice that, by the Mordukhovich criterion (5), we only need to verify

$$ D^{*}M(0,\bar{x},\bar{y},\bar{\lambda}) (0)=\{0\} $$
(16)

under condition (15). Let \(y^{*}\in D^{*}M(0,\bar{x},\bar{y},\bar{\lambda})(0)\). By Definition 2.1, we have

$$ \left( \textstyle\begin{array}{@{}c@{}} y^{*} \\ 0 \end{array}\displaystyle \right)\in N_{\operatorname{gph}M} \left( \textstyle\begin{array}{@{}c@{}} 0 \\ \bar{x} \\ \bar{y} \\ \bar{\lambda} \end{array}\displaystyle \right). $$
(17)

Since

$$\operatorname{gph} M=\left \{(q,x,y,\lambda)\in\Re^{2p}\times\Re ^{n}\times\Re^{m}\times \Re^{p} \biggm| q+ \left( \textstyle\begin{array}{@{}c@{}} A(x)y+b \\ \lambda \end{array}\displaystyle \right)\in\operatorname{gph} N_{\mathcal{K}^{p}} \right \}, $$

we know from Proposition 2.2 that if the condition

$$ \left . \textstyle\begin{array}{l} 0=\mathcal{J}_{(q,x,y,\lambda)} \bigl[q+ \bigl( {\scriptsize\begin{matrix}{} A(x)y+b \cr \lambda \end{matrix}} \bigr) \bigr]^{T} |_{(q,x,y,\lambda)=(0,\bar{x},\bar{y},\bar {\lambda})}\xi, \\ \xi\in N_{\operatorname{gph} N_{\mathcal{K}^{p}}} \bigl( {\scriptsize\begin{matrix}{} A(\bar{x})\bar{y}+b \cr \bar{\lambda} \end{matrix}} \bigr) \end{array}\displaystyle \right \}\quad \Rightarrow\quad \xi=0 $$
(18)

holds, then

$$ N_{\operatorname{gph}M} \left( \textstyle\begin{array}{@{}c@{}} 0 \\ \bar{x} \\ \bar{y} \\ \bar{\lambda} \end{array}\displaystyle \right)\subseteq {\Biggl.\mathcal{J}_{(q,x,y,\lambda)} \left[q+ \left( \textstyle\begin{array}{@{}c@{}} A(x)y+b \\ \lambda \end{array}\displaystyle \right) \right]^{T} \Biggr|_{(q,x,y,\lambda)=(0,\bar{x},\bar{y},\bar {\lambda})}}N_{\operatorname{gph} N_{\mathcal{K}^{p}}} \left( \textstyle\begin{array}{@{}c@{}} A(\bar{x})\bar{y}+b \\ \bar{\lambda} \end{array}\displaystyle \right). $$
(19)

Notice that

$$\mathcal{J}_{(q,x,y,\lambda)}\left[q+ \left( \textstyle\begin{array}{@{}c@{}} A(x)y+b \\ \lambda \end{array}\displaystyle \right) \right]^{T}_{(q,x,y,\lambda)=(0,\bar{x},\bar{y},\bar{\lambda })}= \left( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} I_{p}&0& \mathcal{J}(A(x)y)|_{(x,y)=(\bar{x},\bar{y})}&0 \\ 0&I_{p}&0&I_{p} \end{array}\displaystyle \right), $$

we see that (18) holds (indeed, the q-component of the transposed Jacobian multiplied by ξ is ξ itself, so it can vanish only if ξ=0); hence (19) holds and, by (17), we have

$$\left( \textstyle\begin{array}{@{}c@{}} y^{*}\\ 0 \end{array}\displaystyle \right)\in \left( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} I_{p}&0& \mathcal{J}(A(x)y)|_{(x,y)=(\bar{x},\bar{y})}&0 \\ 0&I_{p}&0&I_{p} \end{array}\displaystyle \right)^{T}N_{\operatorname{gph} N_{\mathcal{K}^{p}}} \left( \textstyle\begin{array}{@{}c@{}} A(\bar{x})\bar{y}+b \\ \bar{\lambda} \end{array}\displaystyle \right), $$

which, by (15) and Definition 2.1, means that \(y^{*}=0\). Therefore (16) holds.

Next we show that the \(\mathcal{CNC}\) condition implies (15). In the case when \(\bar{z}_{0}=\|\bar{z}_{1}\|=0\), the \(\mathcal{CNC}\) condition means that \(A(\bar{x})\Re^{m}=\Re^{p}\), which is equivalent to

$$0=A(\bar{x})^{T}\eta\quad \Rightarrow\quad \eta=0 $$

and hence condition (15) holds. In the case when \(\bar {z}_{0}=\|\bar{z}_{1}\|\neq0\), we proceed in two main steps.

Step 1. Taking the orthogonal complements on both sides of (10), the \(\mathcal{CNC}\) condition can be rewritten as

$$ \bigl(A(\bar{x})\Re^{m}\bigr)^{\bot}\cap \operatorname {lin} \bigl(T_{\mathcal{K}^{p}}\bigl(A(\bar{x})\bar{y}+b\bigr) \bigr)^{\bot}=\{ 0\}. $$
(20)

We know from [15], Proposition 4.73 that

$$\operatorname{lin} \bigl(T_{\mathcal{K}^{p}}\bigl(A(\bar{x})\bar{y}+b\bigr) \bigr)^{\bot}=\operatorname{Sp}\bigl\{ N_{\mathcal{K}^{p}}\bigl(A(\bar{x}) \bar{y}+b\bigr)\bigr\} , $$

which, together with (20) and the identity \((A(\bar{x})\Re^{m})^{\bot}=\operatorname{Ker} A(\bar{x})^{T}\), means that the \(\mathcal{CNC}\) condition is equivalent to

$$ \operatorname{Sp}\bigl\{ N_{\mathcal{K}^{p}}\bigl(A(\bar {x})\bar{y}+b \bigr)\bigr\} \cap\operatorname{Ker} A(\bar{x})^{T}=\{0\}. $$
(21)

Step 2. We next show

$$ \operatorname{Sp}\bigl\{ N_{\mathcal{K}^{p}}\bigl(A(\bar {x})\bar{y}+b \bigr)\bigr\} =D^{*}N_{\mathcal{K}^{p}}\bigl(A(\bar{x})\bar{y}+b,\bar {\lambda}\bigr) (0). $$
(22)

Since \(\bar{\lambda}\in N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\), we have \(\bar{\lambda}=k(-\bar{z}_{0}, \bar{z}_{1})\) with \(k\in\Re _{+}\), where \(\bar{z}=A(\bar{x})\bar{y}+b\). Then we know from Proposition 2.1 that

$$\begin{aligned} D^{*}N_{\mathcal{K}^{p}}\bigl(A(\bar{x})\bar{y}+b,\bar{\lambda}\bigr) (0) =& \biggl[ \biggl\vert \biggl(-1,\frac{(\bar{z}+\bar{\lambda})_{1}^{T}}{\|(\bar{z}+\bar {\lambda})_{1}\|} \biggr)^{T}\biggr\vert \biggr] \\ =& \biggl[\biggl\vert \biggl(-1,\frac{(k+1)\bar{z}_{1}^{T}}{\|(k+1)\bar{z}_{1}\|} \biggr)^{T} \biggr\vert \biggr] = \biggl[\biggl\vert \biggl(-1,\frac{\bar{z}_{1}^{T}}{\|\bar{z}_{1}\|} \biggr)^{T}\biggr\vert \biggr], \end{aligned}$$

which, by \(\|\bar{z}_{1}\|=\bar{z}_{0}\), means that (22) holds. Combining (21) and (22), we conclude that the \(\mathcal{CNC}\) condition is equivalent to

$$ \left . \textstyle\begin{array}{l} 0=A(\bar{x})^{T}\eta, \\ \eta\in D^{*}N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b,\bar{\lambda})(0) \end{array}\displaystyle \right \}\quad \Rightarrow\quad \eta=0, $$

which implies (15). This completes the proof. □

Remark 3.1

We see from the proof of Theorem 3.1 that the calmness of \(M(\cdot)\) at \((0,\bar{x},\bar{y},\bar{\lambda})\) is already ensured by the \(\mathcal{CNC}\) condition, which means that the conditions of Theorem 3.1 are weaker than those of [10], Theorem 3.2.

In the following, we apply the results obtained to derive a necessary and sufficient local optimality condition for the bilevel programming (1).

Theorem 3.2

Suppose the function f in BP (1) is convex, \(\nabla_{y}\psi(x,y)\) is a linear function and the conditions in Theorem  3.1 hold at \((\bar{x}, \bar{y})\) with the involved function \(F(x,y):=\nabla_{y}\psi(x,y)\). Then \((\bar{x},\bar{y})\) is a locally optimal solution of BP (1) if and only if there exists \((w, u)\in\Re^{p}\times\Re^{m}\) satisfying \(w\in D^{*}N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b, \bar{\lambda})(A(\bar{x})u)\) for some \(\bar{\lambda}\in \Lambda(\bar{x},\bar{y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar {y}+b)\) such that

$$ 0=\nabla f(\bar{x},\bar{y})+\bigl(\mathcal{J}_{x,y} \mathcal{L}(\bar{x},\bar {y},\bar{\lambda})\bigr)^{T}u+\bigl( \mathcal{J}G(\bar{x},\bar{y})\bigr)^{T}w, $$
(23)

where \(G(x,y):=A(x)y+b\), \(\mathcal{L}(x,y,\lambda)=\nabla_{y}\psi(x,y)+A(x)^{T}\lambda\) and \(\Lambda(x,y)=\{\lambda:\mathcal{L}(x,y,\lambda)=0\}\).

Proof

Since \(S(x)\) is the optimal solution set of the parametric problem (2) and for any \(x\in\Re^{n}\), (2) is a convex optimization problem, \(S(x)\) can be written as

$$ S(x)=\bigl\{ y:0\in\nabla_{y}\psi(x,y)+Q(x,y)\bigr\} , $$
(24)

where \(\Omega(x)=\{y:A(x)y+b\in\mathcal{K}^{p}\}\) and \(Q(x,y)=N_{\Omega(x)}(y)\). As a result, the bilevel problem can be reformulated as

$$ \begin{aligned} &\min f(x,y) \\ &\quad \mbox{s.t. } (x, y) \in\operatorname{gph} S, \end{aligned} $$

where

$$\operatorname{gph} S=\left \{(x,y)\in\Re^{n}\times \Re^{m}\Biggm| \left[ \textstyle\begin{array}{@{}c@{}} x \\ y \\ -\nabla_{y}\psi(x,y) \end{array}\displaystyle \right] \in \operatorname{gph}Q \right \}. $$

We next show that gphQ is normally regular at \((\bar {x},\bar{y},-\nabla_{y}\psi(\bar{x},\bar{y}))\). We know from the proof of [10], Theorem 3.1 that, under conditions (a) and (b) in Theorem 3.1,

$$D^{*}Q(\bar{x},\bar{y},\bar{v}) (u)= \bigl(\mathcal{J}_{x,y}\bigl(A( \bar{x})^{T}\lambda\bigr)\bigr)^{T}u+D^{*}(N_{\mathcal {K}^{p}}\circ G) (\bar{x},\bar{y},\lambda) \bigl(A(\bar{x})u\bigr) $$

holds for any \(\lambda\in\Lambda(\bar{x},\bar{y})\), which, by Definition 2.1, means that

$$\begin{aligned}& \left[ \textstyle\begin{array}{@{}c@{}} w \\ -u \end{array}\displaystyle \right]\in N_{\operatorname{gph}Q}( \bar{x},\bar{y},\bar{v}) \\& \quad \Longleftrightarrow\quad w\in D^{*}Q(\bar{x}, \bar{y},\bar{v}) (u) \\& \quad \Longleftrightarrow\quad w\in\bigl(\mathcal{J}_{x,y}\bigl(A( \bar{x})^{T}\lambda \bigr)\bigr)^{T}u+D^{*}(N_{\mathcal{K}^{p}} \circ G) (\bar{x},\bar{y},\lambda) \bigl(A(\bar{x})u\bigr) \\& \quad \Longleftrightarrow\quad \left[ \textstyle\begin{array}{@{}c@{}} w-(\mathcal{J}_{x,y}(A(\bar{x})^{T}\lambda))^{T}u \\ -A(\bar{x})u \end{array}\displaystyle \right]\in N_{\operatorname{gph}N_{\mathcal{K}^{p}}\circ G}(\bar{x},\bar {y},\lambda) \\& \quad \Longleftrightarrow\quad \left[ \textstyle\begin{array}{@{}c@{\quad}c@{}} I_{n+m}& (\mathcal{J}_{x,y}(A(\bar{x})^{T}\lambda))^{T} \\ 0 & A(\bar{x}) \end{array}\displaystyle \right] \left[ \textstyle\begin{array}{@{}c@{}} w \\ -u \end{array}\displaystyle \right]\in N_{\operatorname{gph}N_{\mathcal{K}^{p}}\circ G}(\bar{x},\bar {y}, \lambda) \end{aligned}$$
(25)

holds for any \(\lambda\in\Lambda(\bar{x},\bar{y})\). Under conditions (a) and (b) in Theorem 3.1, we know from the proof of [16], Lemma 3.3 that

$$ \widehat{D}^{*}Q(\bar{x},\bar{y},\bar{v}) (u)= \bigl( \mathcal{J}_{x,y}\bigl(A(\bar{x})^{T}\lambda\bigr) \bigr)^{T}u+\widehat {D}^{*}(N_{\mathcal{K}^{p}}\circ G) (\bar{x},\bar{y}, \lambda) \bigl(A(\bar{x})u\bigr). $$
(26)

Under the \(\mathcal{SC}\) condition, by Proposition 2.1, we have

$$ N_{\operatorname{gph}N_{\mathcal{K}^{p}}\circ G}(\bar{x},\bar{y},\lambda )=\widehat{N}_{\operatorname{gph}N_{\mathcal{K}^{p}}\circ G}( \bar{x},\bar {y},\lambda). $$
(27)

Consequently, combining (25), (26), and (27), we see that gphQ is normally regular at \((\bar{x},\bar{y},-\nabla_{y}\psi(\bar{x},\bar{y}))\). We know from the proof of [5], Theorem 4.3 that if the set-valued mapping P in (12) is calm at the points \((0, 0, \bar{x}, \bar{y}, \bar{\lambda})\) with \(\bar{\lambda}\in\Lambda(\bar{x}, \bar{y})\cap N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b)\), then the set-valued mapping \(\Psi:\Re^{n}\times\Re^{m}\times\Re^{m}\rightrightarrows\Re^{n}\times \Re^{m}\) defined by

$$\Psi(\zeta):=\left \{(x,y)\in\Re^{n}\times\Re^{m} \Biggm| \left[ \textstyle\begin{array}{@{}c@{}} x \\ y \\ -\nabla_{y}\psi(x,y) \end{array}\displaystyle \right]+\zeta\in \operatorname{gph}Q \right \} $$

is calm at \((0,\bar{x},\bar{y})\), which, by Proposition 2.2, implies that

$$ N_{\operatorname{gph}S}(\bar{x},\bar{y})\subseteq \left[ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} I_{n} & 0 & -\mathcal{J}_{x}(\nabla_{y}\psi(\bar{x},\bar{y}))^{T}\\ 0 & I_{m} & -\mathcal{J}_{y}(\nabla_{y}\psi(\bar{x},\bar{y}))^{T} \end{array}\displaystyle \right]\circ N_{\operatorname{gph}Q}\bigl(\bar{x},\bar{y},-\nabla_{y} \psi(\bar{x},\bar{y})\bigr). $$
(28)

On the other hand, we know from [11], Theorem 6.14 that

$$ \widehat{N}_{\operatorname{gph}S}(\bar{x},\bar{y})\supseteq \left[ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} I_{n} & 0 & -\mathcal{J}_{x}(\nabla_{y}\psi(\bar{x},\bar{y}))^{T}\\ 0 & I_{m} & -\mathcal{J}_{y}(\nabla_{y}\psi(\bar{x},\bar{y}))^{T} \end{array}\displaystyle \right]\circ \widehat{N}_{\operatorname{gph}Q}\bigl( \bar{x},\bar{y},-\nabla_{y}\psi(\bar {x},\bar{y})\bigr). $$
(29)

Notice that

$$\widehat{N}_{\operatorname{gph}S}(\bar{x},\bar{y})\subseteq N_{\operatorname {gph}S}(\bar{x}, \bar{y}). $$

Then, combining (28) and (29), the normal regularity of gphS at \((\bar{x},\bar{y})\) follows directly from the normal regularity of gphQ at \((\bar{x},\bar{y},-\nabla_{y}\psi(\bar{x},\bar{y}))\). Therefore, \((\bar{x}, \bar{y})\) is a locally optimal solution if and only if \((\bar{x}, \bar{y})\) satisfies \(0\in\nabla f(\bar{x},\bar{y})+N_{\operatorname{gph} S}(\bar{x}, \bar{y})\), i.e.,

$$ 0\in \nabla_{x}f(\bar{x},\bar{y})+D^{*}S(\bar{x},\bar{y}) \bigl(\nabla_{y}f(\bar {x},\bar{y})\bigr). $$
(30)

Under the conditions in Theorem 3.1, we have

$$\begin{aligned} D^{*}S(\bar{x},\bar{y}) \bigl(y^{*}\bigr) =& \bigl\{ \bigl( \mathcal{J}_{x}\mathcal{L}(\bar{x},\bar{y},\bar {\lambda}) \bigr)^{T}u+\bigl(\mathcal{J}_{x}\bigl(A(\bar{x})\bar{y} \bigr)\bigr)^{T}w \mid \\ & 0\in y^{*}+\bigl(\mathcal{J}_{y}\mathcal{L}(\bar{x},\bar{y},\bar{ \lambda })\bigr)^{T}u+A(\bar{x})^{T}w, \\ &w\in D^{*}N_{\mathcal{K}^{p}}\bigl(A(\bar{x})\bar{y}+b,\bar{\lambda}\bigr) \bigl(A( \bar {x})u\bigr) \bigr\} . \end{aligned}$$
(31)

Consequently, the conclusion follows directly from (30) and (31). □

In [10], Theorem 5.1, a necessary and sufficient global optimality condition for the bilevel programming (1) was derived under some strong conditions, such as \(G(x,y)+\lambda \in(\operatorname{int} \mathcal{K}^{p})\cup(\operatorname{int} (\mathcal {K}^{p})^{-})\). When one of the conditions of [10], Theorem 5.1 fails at a point, we cannot tell whether that point is a globally optimal solution. However, by Theorem 3.2, we may still verify that the point is a locally optimal solution of the bilevel programming (1). The next example illustrates this.

Example 3.1

Consider

$$ \begin{aligned} &\min f(x_{1},x_{2},y_{1},y_{2}):=e^{x_{1}}+2x_{2}+y_{1}^{2}-3y_{1}+y_{2}^{4}-2y_{2} \\ &\quad \mbox{s.t. } y \in S(x), \end{aligned} $$
(32)

where \(S(x)\) is the optimal solution set of the following problem:

$$\begin{aligned}& \min \psi(x_{1},x_{2},y_{1},y_{2}):=y_{1}^{2}-2y_{2}+e^{x_{1}}+x_{2} \\& \quad \mbox{s.t. } G(x,y):= \left( \textstyle\begin{array}{@{}c@{\quad}c@{}} x_{1}+1 & 0 \\ 0 & x_{2}+1 \end{array}\displaystyle \right) \left( \textstyle\begin{array}{@{}c@{}} y_{1} \\ y_{2} \end{array}\displaystyle \right) \in\mathcal{K}^{2}, \end{aligned}$$

where \(x_{1}, x_{2}, y_{1}, y_{2}\in\Re\). Consider the point \((\bar{x},\bar {y})=(0,0,1,1)^{T}\in\Re^{4}\). A simple computation gives the multiplier set \(\Lambda(\bar{x},\bar{y})=\{\bar{\lambda}\}=\{(-2,2)^{T}\}\). Then \(G(\bar{x},\bar{y})+\bar{\lambda}=(-1,3)^{T}\notin(\operatorname {int} \mathcal{K}^{2})\cup(\operatorname{int} (\mathcal{K}^{2})^{-})\), so one of the conditions of [10], Theorem 5.1 fails at \((\bar{x},\bar{y})\), and we cannot conclude from it whether this point is a global solution of problem (32). Next, by Theorem 3.2, we verify that \((\bar{x},\bar{y})=(0,0,1,1)^{T}\) is a locally optimal solution of problem (32). (i) Since there exists \(\hat{y}=(1, 0)^{T}\) such that \(A(\bar{x})\hat{y}=\hat{y}\in\operatorname{int} \mathcal{K}^{2}\), the SCQ (8) holds for x̄. (ii) Since \(A(\bar{x})=I_{2}\), the \(\mathcal{CNC}\) (10) holds at \((\bar{x},\bar{y})\). (iii) The multifunction \(P(\cdot)\) defined by (12) is calm at the point \((0, 0, \bar{x}, \bar{y}, \bar{\lambda})\). In fact, by a simple computation, we obtain

$$\begin{aligned}& \bigl(\mathcal{J}_{x,y}\mathcal{L}(\bar{x},\bar{y},\bar{\lambda }) \bigr)^{T}u+\bigl(\mathcal{J}_{x,y}\bigl(A(\bar{x})\bar{y} \bigr)\bigr)^{T}w \\& \quad = \left( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & 0 & 2 & 0 \\ 0 & 2 & 0 &0 \end{array}\displaystyle \right)^{T}u + \left( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{array}\displaystyle \right)^{T}w, \end{aligned}$$

which means that

$$ \left . \textstyle\begin{array}{l} 0\in (\mathcal{J}_{x,y}\mathcal{L}(\bar{x},\bar{y},\lambda ) )^{T}u+(\mathcal{J}_{x,y}(A(\bar{x})\bar{y}))^{T}w, \\ w\in D^{*}N_{\mathcal{K}^{p}}(G(\bar{x},\bar{y}),\lambda)(A(\bar{x})u) \end{array}\displaystyle \right \}\quad \Rightarrow\quad w=0,\qquad u=0. $$

This, by the Mordukhovich criterion (5), ensures the Lipschitz-like property of P at \((0,0,\bar{x},\bar{y},\bar{\lambda})\), and hence the calmness of P at \((0,0,\bar{x},\bar{y},\bar{\lambda})\). (iv) A simple computation shows that the \(\mathcal{SC}\) condition holds. (v) Finally, we show that there exists \((w, u)\in\Re^{2}\times\Re^{2}\) satisfying \(w\in D^{*}N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b, \bar{\lambda})(A(\bar{x})u)\) such that (23) holds. If equality (23) holds for \((\bar{x},\bar{y},\bar{\lambda})\) and \((w,u)\), then we have

$$ 0= \left( \textstyle\begin{array}{@{}c@{}} 1 \\ 2 \\ -1 \\ 2 \end{array}\displaystyle \right)+ \left( \textstyle\begin{array}{@{}c@{\quad}c@{}} -2 & 0 \\ 0 & 2 \\ 2 & 0 \\ 0 & 0 \end{array}\displaystyle \right) \left( \textstyle\begin{array}{@{}c@{}} u_{1} \\ u_{2} \end{array}\displaystyle \right)+ \left( \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right) \left( \textstyle\begin{array}{@{}c@{}} w_{1} \\ w_{2} \end{array}\displaystyle \right). $$
(33)

We know from Proposition 2.1 that \(w\in D^{*}N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b, \bar{\lambda})(A(\bar{x})u)\) means that

$$ \left( \textstyle\begin{array}{@{}c@{}} w_{1} \\ w_{2} \end{array}\displaystyle \right) \in \bigl[\vert u\vert \bigr]^{\bot}+ \left( \textstyle\begin{array}{@{}c@{}} -6u_{2} \\ 2u_{2} \end{array}\displaystyle \right). $$
(34)

Then, combining (33) and (34), we find that \(w=(0,-2)^{T}\) and \(u=(1/2,0)^{T}\) satisfy \(w\in D^{*}N_{\mathcal{K}^{p}}(A(\bar{x})\bar{y}+b, \bar{\lambda})(A(\bar{x})u)\) and (23). Therefore, by Theorem 3.2, \((\bar{x},\bar{y})\) is a locally optimal solution of problem (32). A numerical check of these computations is sketched below.
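The quantities used in Example 3.1 can be recomputed numerically. The following Python sketch is our verification aid (not code from the paper): it evaluates \(\nabla f(\bar{x},\bar{y})\), the multiplier λ̄ and \(G(\bar{x},\bar{y})+\bar{\lambda}\), solves the linear system (33), and then checks the membership condition (34), exactly as stated in the text, for the resulting \((w,u)\).

```python
import numpy as np

x_bar, y_bar = np.zeros(2), np.ones(2)
A = np.diag(x_bar + 1.0)                       # A(x_bar) = I_2 (and b = 0 here)

# Gradient of the upper-level objective at (x_bar, y_bar) = (0, 0, 1, 1).
grad_f = np.array([np.exp(x_bar[0]), 2.0,                    # (e^{x_1}, 2)
                   2*y_bar[0] - 3.0, 4*y_bar[1]**3 - 2.0])   # (2y_1 - 3, 4y_2^3 - 2)

# Multiplier: A(x_bar)^T lam = -F(x_bar, y_bar), with F = grad_y psi = (2y_1, -2).
F = np.array([2*y_bar[0], -2.0])
lam = np.linalg.solve(A.T, -F)
print("lambda_bar      =", lam)                # (-2, 2)
print("G + lambda_bar  =", A @ y_bar + lam)    # (-1, 3), outside both open cones

# Linear system (33): 0 = grad_f + (J_{x,y}L)^T u + (J_{x,y}G)^T w.
JL = np.array([[-2.0, 0.0, 2.0, 0.0],          # J_{x,y} L(x_bar, y_bar, lambda_bar)
               [ 0.0, 2.0, 0.0, 0.0]])
JG = np.array([[ 1.0, 0.0, 1.0, 0.0],          # J_{x,y} (A(x)y) at (x_bar, y_bar)
               [ 0.0, 1.0, 0.0, 1.0]])
sol = np.linalg.solve(np.hstack([JL.T, JG.T]), -grad_f)
u, w = sol[:2], sol[2:]
print("u =", u, "w =", w)                      # u = (0.5, 0), w = (0, -2)

# Membership condition (34) as stated in the text: w - (-6*u_2, 2*u_2) _|_ u.
shift = np.array([-6*u[1], 2*u[1]])
print("condition (34) satisfied:", abs((w - shift) @ u) <= 1e-9)   # True
```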

Remark 3.2

By similar computations, one can check that the point \((\bar{x},\bar{y})=(0,0,1,1)^{T}\) is a locally optimal solution of problem (32) with \(f(x,y):=x_{1}^{2}+x_{2}^{2}+(y_{1}-1)^{2}+(y_{2}-1)^{2}\), but that it is not a locally optimal solution of problem (32) with \(f(x,y):=e^{x_{1}}+x_{2}^{2}-x_{2}+2y_{1}+1/2y_{2}^{2}-2y_{2}\).
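The claims of Remark 3.2 can be checked with the same recipe (again a numerical sketch of ours, not code from the paper): only \(\nabla f(\bar{x},\bar{y})\) changes, since the lower level problem, and hence λ̄, \(\mathcal{J}_{x,y}\mathcal{L}\) and \(\mathcal{J}_{x,y}(A(\bar{x})\bar{y})\), stay the same.

```python
import numpy as np

JL = np.array([[-2.0, 0.0, 2.0, 0.0], [0.0, 2.0, 0.0, 0.0]])
JG = np.array([[ 1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]])
M = np.hstack([JL.T, JG.T])

def condition_23_and_34(grad_f, tol=1e-9):
    """Solve the linear system (33) built from grad_f and test condition (34)."""
    sol = np.linalg.solve(M, -np.asarray(grad_f, dtype=float))
    u, w = sol[:2], sol[2:]
    shift = np.array([-6*u[1], 2*u[1]])
    return bool(abs((w - shift) @ u) <= tol)

# f = x_1^2 + x_2^2 + (y_1-1)^2 + (y_2-1)^2:  grad_f = (0, 0, 0, 0)        -> True
print(condition_23_and_34([0.0, 0.0, 0.0, 0.0]))
# f = e^{x_1} + x_2^2 - x_2 + 2y_1 + y_2^2/2 - 2y_2:  grad_f = (1, -1, 2, -1) -> False
print(condition_23_and_34([1.0, -1.0, 2.0, -1.0]))
```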

4 Conclusion

In this paper, an equality type representation of the coderivative of the solution mapping S in (3) is obtained, which improves [10], Theorem 3.2. The result is then used to develop a necessary and sufficient local optimality condition for a bilevel programming problem whose lower level problem is second-order cone constrained.

References

  1. Dempe, S: Foundations of Bilevel Programming. Kluwer Academic, Dordrecht (2002)
  2. Ye, J: Constraint qualifications and KKT conditions for bilevel programming problems. Math. Oper. Res. 31, 811-824 (2006)
  3. Outrata, JV, Sun, DF: On the coderivative of the projection operator onto the second-order cone. Set-Valued Anal. 16, 999-1014 (2008)
  4. Mordukhovich, BS: Variational Analysis and Generalized Differentiation. I: Basic Theory, II: Applications. Springer, Berlin (2006)
  5. Mordukhovich, BS, Outrata, JV: Coderivative analysis of quasi-variational inequalities with applications to stability and optimization. SIAM J. Optim. 18, 389-412 (2007)
  6. Shen, J, Pang, LP: A bundle-type auxiliary problem method for generalized variational-like inequality. Comput. Math. Appl. 55, 2993-2998 (2008)
  7. Shen, J, Pang, LP: An approximate bundle-type auxiliary problem method for generalized variational inequality. Math. Comput. Model. 48, 769-775 (2008)
  8. Shen, J, Pang, LP: A proximal analytic center cutting plane algorithm for solving variational inequality problems. J. Appl. Math. 2012, Article ID 503242 (2012)
  9. Mordukhovich, BS: Coderivatives of set-valued mappings: calculus and applications. Nonlinear Anal. 30, 3059-3070 (1997)
  10. Zhang, J, Li, Y, Zhang, L: On the coderivative of the solution mapping to a second-order cone constrained parametric variational inequality. J. Glob. Optim. 61, 379-396 (2015)
  11. Rockafellar, RT, Wets, RJB: Variational Analysis. Springer, Berlin (1998)
  12. Mordukhovich, BS: Metric approximations and necessary optimality conditions for general classes of extremal problems. Sov. Math. Dokl. 22, 526-530 (1980)
  13. Mordukhovich, BS: Generalized differential calculus for nonsmooth and set-valued mappings. J. Math. Anal. Appl. 183, 250-288 (1994)
  14. Henrion, R, Jourani, A, Outrata, JV: On the calmness of a class of multifunctions. SIAM J. Optim. 13, 603-618 (2002)
  15. Bonnans, JF, Shapiro, A: Perturbation Analysis of Optimization Problems. Springer, New York (2000)
  16. Zhang, J, Zhang, L, Pang, L: On the convergence of coderivative of SAA solution mapping for a parametric stochastic variational inequality. Set-Valued Var. Anal. 20, 75-109 (2012)


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Project No. 11201210, CPSF grant 2014M560200 and Program for Liaoning Excellent Talents in University No. LJQ2015059.

Author information

Correspondence to Jie Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhang, J., Wang, H. & Sun, Y. A note on the optimality condition for a bilevel programming. J Inequal Appl 2015, 361 (2015). https://doi.org/10.1186/s13660-015-0882-2
