Global optimality conditions for nonconvex minimization problems with quadratic constraints

Abstract

In this paper, some global optimality conditions for nonconvex minimization problems subject to quadratic inequality constraints are presented. Then some sufficient and necessary global optimality conditions for nonlinear programming problems with box constraints are derived. We also establish a sufficient global optimality condition for a nonconvex quadratic minimization problem with box constraints, which is expressed in a simple way in terms of the problem’s data. In addition, a sufficient and necessary global optimality condition for a class of nonconvex quadratic programming problems with box constraints is discussed. We also present some numerical examples to illustrate the significance of our optimality conditions.

1 Introduction

Consider the following nonconvex minimization problem (QCNP):

$$\begin{aligned} (\mbox{QCNP}) \quad&\min\quad f(x) \\ \quad&\mbox{s.t.}\quad f_{i}(x)=\frac{1}{2}x^{T} A_{i} x+x^{T} a_{i}+c_{i}\leq 0,\quad i=1,\ldots,m, \end{aligned}$$

where \(f:R^{n}\rightarrow R\) is a twice continuously differentiable function, \(c_{i}\in R\), \(a_{i}\in R^{n}\), \(A_{i} \in S^{n}\), \(i=1,\ldots,m\), and \(S^{n}\) is the set of all symmetric \(n\times n\) matrices. Problems of the form (QCNP) arise in many important applications [1, 2], such as trust-region subproblems [3–6] and quadratic minimization problems [7, 8].

Over the years, significant advances have been made in characterizing solutions of constrained convex optimization problems, where a local optimal solution is a global one. However, characterizations of global solutions of optimization problems that exhibit multiple local optima have so far been limited to a few classes of nonconvex problems (see [8–13] and the references therein). Recently, many researchers have focused on characterizing globally optimal solutions of various special cases of (QCNP). In [14, 15], sufficient global optimality conditions were given for quadratic optimization problems with binary constraints. Some Lagrange multiplier necessary conditions for global optimality of problem (QCNP) with \(m=1\) were established in [16] by employing the S-lemma and over-estimators of the objective function. Global optimality conditions for (QCNP) in the case where f is a quadratic function were recently developed in [6, 17–20]. In [6], Moré studied the problem of minimizing a quadratic function subject to one general quadratic constraint and obtained some sufficient and necessary global optimality conditions. Peng and Yuan [17] derived some global optimality conditions for nonconvex quadratic programming problems with two quadratic constraints. In [18], Jeyakumar et al. established, via Lagrange multipliers, sufficient as well as necessary conditions for global optimality of general quadratic minimization problems with quadratic constraints; the necessary and sufficient conditions in [18] rely on a regularity condition called the S-property. Global optimality conditions for maximizing a convex quadratic function under convex quadratic constraints were presented in [19]. Hiriart-Urruty [20] studied global optimality conditions for quadratic minimization problems with quadratic constraints. Some sufficient global optimality conditions for nonconvex quadratic minimization problems with box constraints were obtained by using abstract subdifferentials in [21]. Jeyakumar et al. [22] gave global optimality conditions for the minimization of a difference of quadratic and convex functions over box constraints by constructing quadratic underestimators and characterizing the global minimizers of these underestimators.

In this paper, we study global optimality conditions for general nonconvex minimization problems with quadratic inequality constraints; these conditions can be viewed as a generalization of the results of Jeyakumar et al. [16] for a single quadratic constraint. Moreover, some global optimality conditions for nonlinear problems with box constraints are presented by transforming the box constraints into n quadratic inequality constraints.

The outline of this paper is as follows. In Section 2, we present some necessary and sufficient global optimality conditions for nonlinear programming problems with quadratic inequality constraints (QCNP). In Section 3, we provide some global optimality conditions for nonlinear minimization problems with box constraints, and deduce necessary and sufficient optimality conditions for a class of nonconvex quadratic minimization problems with box constraints.

2 Global optimality conditions for problem (QCNP)

We begin this section by presenting basic definitions and preliminary results that will be used throughout the paper. The real line is denoted by R and the n-dimensional Euclidean space is denoted by \(R^{n}\). The set of all nonnegative vectors of \(R^{n}\) is denoted by \(R^{n}_{+}\). For vectors \(x=(x_{1},x_{2},\ldots,x_{n})^{T}\), \(y=(y_{1},y_{2},\ldots,y_{n})^{T}\in R^{n}\), \(x\geq y\) means that \(x_{i}\geq y_{i}\), for \(i=1,\ldots,n\). The notation \(A\succeq B\) means that \(A-B\) is positive semidefinite, and \(A\preceq0\) means \(-A\succeq0\). A diagonal matrix with diagonal elements \(\alpha_{1},\ldots,\alpha_{n}\) is denoted by \(\operatorname{diag}(\alpha_{1},\ldots,\alpha _{n})=\operatorname{diag}(\alpha)\), where \(\alpha=(\alpha_{1},\ldots ,\alpha_{n})^{T}\). A matrix \(H=(h_{ij})_{1\leq i,j\leq n}\in S^{n}\) is called a Z-matrix if \(h_{ij}\leq0\) for all \(i\neq j\).

For problem (QCNP), let

$$D:=\bigl\{ x\in R^{n}| f_{i}(x)\leq0, i=1,\ldots,m\bigr\} . $$

For \(f_{i}(x)=\frac{1}{2}x^{T} A_{i} x+x^{T} a_{i}+c_{i}\), \(i=1,\ldots,m\), we define

$$ H_{i}= \begin{pmatrix} A_{i} & a_{i}\\ a_{i}^{T} & 2c_{i} \end{pmatrix}, \quad i=1,\ldots,m. $$
(1)

The following theorem of the alternative ([23], Theorem 5.2) for systems of an arbitrary finite number of quadratic inequalities plays an important role in deriving our main result, Theorem 1.

Lemma 1

[23]

Suppose that \(H_{i}\), \(i=1,\ldots,m\), are all Z-matrices. Then exactly one of the following statements holds:

  (i) (\(\exists x\in R^{n}\)) \(f_{i}(x)< 0\), \(i=1,\ldots,m\);

  (ii) \(\exists\lambda\in R^{m}_{+}\backslash\{0\}\), (\(\forall x\in R^{n}\)) \(\sum_{i=1}^{m}\lambda_{i}f_{i}(x)\geq0\).

Theorem 1

(Necessary condition)

For the problem (QCNP), suppose that each \(H_{i}\) is a Z-matrix, \(i=1,\ldots,m\). Let \(\bar{x}\in D\) and let C be a convex set containing D. Assume that there exists a Z-matrix \(A\in S^{n}\) such that \(\nabla f(\bar{x})-A\bar{x}\leq 0\) and \(\nabla^{2}f(x)-A\preceq0\) for each \(x\in C\), and that there exists \(x_{0}\in R^{n}\) such that \(f_{i}(x_{0})<0\), \(i=1,\ldots,m\). If \(\bar{x}\) is a global minimizer of (QCNP), then there exists \(\lambda=(\lambda_{1},\ldots,\lambda_{m})^{T}\in R^{m}_{+}\) such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,m\),

$$ \nabla f(\bar{x})+\sum_{i=1}^{m} \lambda_{i}(A_{i}\bar{x}+a_{i})=0\quad \textit{and}\quad A+\sum _{i=1}^{m}\lambda_{i}A_{i} \succeq0. $$
(2)

Proof

Let \(g(x)=\frac{1}{2}x^{T}Ax+(\nabla f(\bar{x})-A\bar{x})^{T}x\), \(\forall x\in R^{n}\), and let \(\varphi(x)=f(x)-g(x)\), \(x\in C\), then

$$\nabla^{2}\varphi(x)=\nabla^{2}f(x)-\nabla^{2}g(x)= \nabla^{2}f(x)-A\preceq 0,\quad \forall x\in C. $$

So φ is a concave function over C. Moreover, \(\nabla\varphi (\bar{x})=\nabla f(\bar{x})-\nabla g(\bar{x})=0\), so \(\bar{x}\) is a global maximizer of φ over the convex set C. Hence,

$$\varphi(x)\leq\varphi(\bar{x}),\quad \forall x\in C, $$

i.e.

$$g(x)-g(\bar{x})\geq f(x)-f(\bar{x}),\quad \forall x\in C. $$

If \(\bar{x}\) is a global minimizer of (QCNP), then \(f(x)\geq f(\bar{x})\) for all \(x\in D\subseteq C\), and hence

$$ g(x)-g(\bar{x})\geq0, \quad\forall x\in D. $$
(3)

Then the system \(g(x)-g(\bar{x})<0\) and \(f_{i}(x)<0\), \(i=1,\ldots,m\), has no solution.

Let \(\widetilde{g}(x)=g(x)-g(\bar{x})\); then

$$ H_{\widetilde{g}}= \begin{pmatrix} A & \nabla f(\bar{x})-A\bar{x}\\ (\nabla f(\bar{x})-A\bar{x})^{T} & -2g(\bar{x}) \end{pmatrix}. $$

Since A is a Z-matrix and \(\nabla f(\bar{x})-A\bar{x}\leq0\), \(H_{\widetilde{g}}\) is also a Z-matrix.

So, by Lemma 1 (applied to the \(m+1\) quadratic functions \(\widetilde{g}, f_{1},\ldots,f_{m}\)), there exists \(\mu=(\mu_{0},\mu_{1},\ldots,\mu_{m})^{T}\in R^{m+1}_{+}\backslash\{0\}\) such that

$$\mu_{0}\bigl(g(x)-g(\bar{x})\bigr)+\sum_{i=1}^{m} \mu_{i}f_{i}(x)\geq0, \quad\forall x\in R^{n}. $$

In particular, taking \(x=\bar{x}\) gives \(\sum_{i=1}^{m}\mu_{i}f_{i}(\bar{x})\geq0\); since \(\mu_{i}\geq0\) and \(f_{i}(\bar{x})\leq0\), it follows that

$$ \mu_{i}f_{i}(\bar{x})=0,\quad i=1,\ldots,m. $$
(4)

Together with (4), this shows that \(\mu_{0}g+\sum_{i=1}^{m}\mu_{i}f_{i}\) attains its minimum over \(R^{n}\) at \(\bar{x}\). We now show that \(\mu_{0}>0\). Otherwise, \(\sum_{i=1}^{m}\mu_{i}f_{i}(x)\geq0\), \(\forall x\in R^{n}\); since \(f_{i}(x_{0})<0\), \(i=1,\ldots,m\), it follows that \(\mu_{i}=0\), \(i=1,\ldots,m\), which contradicts the fact that \(\mu\neq0\). Hence, dividing by \(\mu_{0}\),

$$ g(x)+\sum_{i=1}^{m}\lambda_{i}f_{i}(x) \geq g(\bar{x}),\quad \forall x\in R^{n}, $$
(5)

where \(\lambda_{i}=\frac{\mu_{i}}{\mu_{0}}\), \(i=1,\ldots,m\). This implies that \(\lambda_{i}f_{i}(\bar{x})=0\), \(i=1,\ldots,m\). Therefore, \(\bar{x}\) is a global minimizer of the quadratic function \(g+\sum_{i=1}^{m}\lambda_{i}f_{i}\) over \(R^{n}\), and hence \(\nabla(g+\sum_{i=1}^{m}\lambda_{i}f_{i})(\bar{x})=0\) and \(\nabla^{2}(g+\sum_{i=1}^{m}\lambda_{i}f_{i})(\bar{x})\succeq0\). That is, \(\nabla f(\bar{x})+\sum_{i=1}^{m}\lambda_{i}(A_{i}\bar{x}+a_{i})=0\) and \(A+\sum_{i=1}^{m}\lambda_{i}A_{i}\succeq0\). □

Theorem 2

(Sufficient condition)

For the problem (QCNP), let \(\bar{x}\in D\) and let C be a convex set containing D. Suppose that there exists \(A\in S^{n}\) such that \(\nabla^{2}f(x)-A\succeq0\) for each \(x\in C\), and that there exists \(\lambda\in R^{m}_{+}\) such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,m\),

$$ \nabla f(\bar{x})+\sum_{i=1}^{m} \lambda_{i}(A_{i}\bar{x}+a_{i})=0 \quad\textit{and}\quad A+ \sum_{i=1}^{m}\lambda_{i}A_{i} \succeq0, $$
(6)

then \(\bar{x}\) is a global minimizer of (QCNP).

Proof

Let \(g(x)=\frac{1}{2}x^{T}Ax+(\nabla f(\bar{x})-A\bar{x})^{T}x\), \(\forall x\in R^{n}\), and let \(\varphi(x)=f(x)-g(x)\), \(x\in C\), then

$$\nabla^{2}\varphi(x)=\nabla^{2}f(x)-\nabla^{2}g(x)= \nabla^{2}f(x)-A\succeq 0,\quad \forall x\in C. $$

So φ is a convex function over C. Moreover, \(\nabla\varphi (\bar{x})=\nabla f(\bar{x})-\nabla g(\bar{x})=0\), so \(\bar{x}\) is a global minimizer of φ over the convex set C. Hence,

$$\varphi(x)\geq\varphi(\bar{x}),\quad \forall x\in C, $$

i.e.

$$f(x)-f(\bar{x})\geq g(x)-g(\bar{x}),\quad \forall x\in C. $$

Since \(A+\sum_{i=1}^{m}\lambda_{i}A_{i}\succeq0\), the function \(g(x)+\sum_{i=1}^{m}\lambda_{i} f_{i}(x)\) is convex on \(R^{n}\). Moreover, since \(\nabla g(\bar{x})=\nabla f(\bar{x})\), the condition \(\nabla f(\bar{x})+\sum_{i=1}^{m}\lambda_{i}(A_{i}\bar{x}+a_{i})=0\) is equivalent to \(\nabla g(\bar{x})+\sum_{i=1}^{m}\lambda_{i}(A_{i}\bar{x}+a_{i})=0\), so \(\bar{x}\) is a global minimizer of \(g(x)+\sum_{i=1}^{m}\lambda_{i} f_{i}(x)\) on \(R^{n}\). Hence

$$g(x)+\sum_{i=1}^{m}\lambda_{i} f_{i}(x)\geq g(\bar{x})+\sum_{i=1}^{m} \lambda_{i} f_{i}(\bar{x})=g(\bar{x}),\quad \forall x\in R^{n}. $$

Therefore \(g(x)-g(\bar{x})\geq-\sum_{i=1}^{m}\lambda_{i} f_{i}(x)\geq0\), \(\forall x\in D\). Combining this with \(f(x)-f(\bar{x})\geq g(x)-g(\bar{x})\) on \(C\supseteq D\), we obtain \(f(x)\geq f(\bar{x})\), \(\forall x\in D\), which means that \(\bar{x}\) is a global minimizer of problem (QCNP). □
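
The multiplier system shared by the necessary condition (2) and the sufficient condition (6) is straightforward to verify numerically for given problem data. The following minimal sketch (assuming NumPy is available; the helper name check_condition_6 is ours, not part of the paper) checks nonnegativity of λ, complementarity, the stationarity equation, and positive semidefiniteness of \(A+\sum_{i=1}^{m}\lambda_{i}A_{i}\); the estimator hypotheses relating A to the Hessian of f on C are left to the user and are not verified by the code.

```python
import numpy as np

def check_condition_6(grad_f_xbar, A, quad_constraints, x_bar, lam, tol=1e-10):
    """Check the multiplier system shared by conditions (2) and (6) at x_bar.

    quad_constraints is a list of triples (A_i, a_i, c_i) with
    f_i(x) = 0.5 x^T A_i x + a_i^T x + c_i.  The hypotheses relating A to the
    Hessian of f on the set C are NOT verified here.
    """
    if np.any(lam < -tol):                        # lambda must be nonnegative
        return False
    station = np.array(grad_f_xbar, dtype=float)  # builds grad f + sum lam_i (A_i x + a_i)
    H = np.array(A, dtype=float)                  # builds A + sum lam_i A_i
    for lam_i, (Ai, ai, ci) in zip(lam, quad_constraints):
        fi = 0.5 * x_bar @ Ai @ x_bar + ai @ x_bar + ci
        if abs(lam_i * fi) > tol:                 # complementarity lam_i f_i(x_bar) = 0
            return False
        station = station + lam_i * (Ai @ x_bar + ai)
        H = H + lam_i * Ai
    return bool(np.allclose(station, 0.0, atol=tol)
                and np.all(np.linalg.eigvalsh(H) >= -tol))
```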

Remark 1

Note that the matrix A in Theorem 1 (the necessary condition) is required to be a Z-matrix satisfying \(\nabla f(\bar{x})-A\bar{x}\leq0\) and \(\nabla^{2}f(x)-A\preceq0\) for each \(x\in C\), whereas the matrix A in Theorem 2 (the sufficient condition) only has to satisfy \(\nabla^{2}f(x)-A\succeq0\) for each \(x\in C\).

Necessary global optimality conditions for nonlinear programming problems with quadratic constraints were given in [13] by using polynomial over-estimators and a polynomial version of a theorem of the alternative; they are stated as the following lemma.

Lemma 2

[13]

For the problem (QCNP), suppose that there exist a convex set \(C\supseteq D\) and \(p\in R[x]\) such that, for each \(x\in C\), \(\nabla^{2}f(x)-\nabla^{2}p(x)\preceq0\). If \(x_{0}\) is a global minimizer of (QCNP), then there exist \(\gamma\in N\), sos (sum-of-squares) polynomials \(y_{i}\in R[x]\), and \(u_{i}\in S\langle-f_{1},\ldots,-f_{m},\widetilde{g}_{x_{0}}\rangle\), \(i=1,\ldots,2^{m+1}\), such that \(\min\{y_{i}(x_{0}),u_{i}(x_{0})\}=0\) for \(i\in\{1,\ldots,2^{m+1}\}\) and, for each \(x\in R^{n}\),

$$ \sum_{i=1}^{2^{m+1}}y_{i}(x)u_{i}(x)+ \bigl[\widetilde{g}_{x_{0}}(x)\bigr]^{\gamma}=0, $$
(7)

where \(R[x]\) denotes the set of real polynomials on \(R^{n}\), \(S\langle f_{1},\ldots,f_{k}\rangle:= \{\prod_{j=1}^{k}f_{j}^{e_{j}} \mid e_{j}\in\{0,1\}, j=1,\ldots,k\}\), \(g(x):=p(x)-(\nabla f(x_{0})-\nabla p(x_{0}))^{T}x\) and \(\widetilde{g}_{x_{0}}(x):=g(x_{0})-g(x)\).

The following example shows how to use the global optimality conditions to check whether a given point is a global minimizer, and illustrates a case where the necessary global optimality condition (2) is not satisfied at a feasible point that is not a global minimizer, whereas the necessary condition (7) holds at that point.

Example 1

Consider the problem

$$\begin{aligned} (\mbox{P0}) \quad&\min\quad f(x):=\sin^{2}x_{1}-4x_{2}^{2}-x_{2}^{4} \\ \quad&\mbox{s.t.}\quad f_{1}(x):=2x_{1}^{2}+x_{2}^{2}-9 \leq0. \end{aligned}$$

Hence \(\nabla^{2}f(x)= \bigl({\scriptsize\begin{matrix}{} 2\cos(2x_{1}) & 0 \cr 0 & -8-12x_{2}^{2} \end{matrix}}\bigr)\), \(A_{1}= \bigl({\scriptsize\begin{matrix}{} 4 & 0 \cr 0 & 2 \end{matrix}}\bigr)\), and \(a_{1}=(0,0)^{T}\). Let \(\bar{x}=(0,3)^{T}\), we take \(A= \bigl({\scriptsize\begin{matrix}{} 2 & 0 \cr 0 & -8 \end{matrix}}\bigr)\), then A is a Z-matrix, \(\nabla f(\bar{x})-A\bar{x}\leq 0\), and \(\nabla^{2}f(x)-A=\bigl({\scriptsize\begin{matrix}{} 2\cos(2x_{1})-2 & 0 \cr 0 & -12x_{2}^{2} \end{matrix}}\bigr)\preceq0\).

A direct calculation shows that \(\lambda=22\) solves

$$\nabla f(\bar{x})+\lambda(A_{1}\bar{x}+a_{1})=0\quad \mbox{and}\quad A+\lambda A_{1}\succeq 0. $$

Hence the necessary global optimality condition (2) holds at \(\bar {x}\).

Let us consider another point \(x_{0}=(0,0)^{T}\). If \(\lambda\geq0\) satisfies \(\nabla f(x_{0})+\lambda(A_{1}x_{0}+a_{1})=0 \) and \(\lambda f_{1}(x_{0})=0\), then \(\lambda=0\) (since \(f_{1}(x_{0})=-9<0\)) and \(A+\lambda A_{1}=A\), which is not a positive semidefinite matrix. Hence the necessary global optimality condition (2) is not satisfied at \(x_{0}\), while the necessary condition (7) does hold at \(x_{0}\). Take \(p(x)=x_{1}^{2}\); then \(\nabla^{2}f(x)-\nabla^{2} p(x)\preceq0\), \(\forall x\in R^{n}\), and \(\widetilde{g}_{x_{0}}(x)=-x_{1}^{2}\). Thus the necessary condition (7) holds with \(\gamma=1\) since \(x_{1}^{2}\cdot1+(-x_{1}^{2})=0\).
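
For readers who wish to reproduce these computations, the following minimal sketch (assuming NumPy and the check_condition_6 helper sketched after Theorem 2 are in scope; the data below is that of (P0)) confirms that the multiplier condition holds at \(\bar{x}=(0,3)^{T}\) with \(\lambda=22\) and fails at \(x_{0}=(0,0)^{T}\):

```python
import numpy as np
# assumes check_condition_6 from the sketch after Theorem 2 is in scope

grad_f = lambda x: np.array([np.sin(2 * x[0]), -8 * x[1] - 4 * x[1] ** 3])
A1, a1, c1 = np.diag([4.0, 2.0]), np.zeros(2), -9.0   # f1(x) = 2 x1^2 + x2^2 - 9
A = np.diag([2.0, -8.0])                              # the matrix A chosen above

x_bar = np.array([0.0, 3.0])
print(check_condition_6(grad_f(x_bar), A, [(A1, a1, c1)], x_bar,
                        np.array([22.0])))            # True: condition (2) holds
x0 = np.array([0.0, 0.0])
print(check_condition_6(grad_f(x0), A, [(A1, a1, c1)], x0,
                        np.array([0.0])))             # False: A alone is indefinite
```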

In the following corollary, we see that our optimality conditions yield Corollary 5.3 of [23].

Corollary 1

For the problem (QCNP), let \(f(x)=\frac{1}{2}x^{T}A_{0}x+a_{0}^{T}x+c_{0}\) and let

$$ H_{f}= \begin{pmatrix} A_{0} & a_{0} \\ a_{0}^{T} & 2c_{0} \end{pmatrix} $$
(8)

be a Z-matrix. Suppose that each \(H_{i}\) is a Z-matrix, \(i=1,\ldots,m \), and that there exists \(x_{0}\in R^{n}\), such that \(f_{i}(x_{0})<0\), \(i=1, \ldots,m\). Then \(\bar{x}\) is a global minimizer of (QCNP) if and only if there exists \(\lambda\in R^{m}_{+}\), such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,m\),

$$ A_{0}\bar{x}+a_{0}+\sum_{i=1}^{m} \lambda_{i}(A_{i}\bar{x}+a_{i})=0 \quad\textit{and}\quad A_{0}+\sum_{i=1}^{m} \lambda_{i}A_{i}\succeq0. $$
(9)

Proof

Take \(A=A_{0}\) in Theorems 1 and 2. Since \(H_{f}\) is a Z-matrix, \(A_{0}\) is a Z-matrix and \(a_{0}\leq0\), so \(\nabla f(\bar{x})-A\bar{x}=A_{0}\bar{x}+a_{0}-A_{0}\bar{x}=a_{0} \leq0\); moreover, \(\nabla^{2}f(x)-A=A_{0}-A_{0}=0\), which is both \(\preceq0\) and \(\succeq0\). So \(A=A_{0}\) satisfies all the conditions on A given in Theorems 1 and 2. Since, in addition, there exists \(x_{0}\in R^{n}\) such that \(f_{i}(x_{0})<0\), \(i=1,\ldots,m\), Theorems 1 and 2 show that \(\bar{x}\) is a global minimizer of (QCNP) if and only if there exists \(\lambda\in R^{m}_{+}\) such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,m\),

$$ A_{0}\bar{x}+a_{0}+\sum_{i=1}^{m} \lambda_{i}(A_{i}\bar{x}+a_{i})=0 \quad\mbox{and}\quad A_{0}+\sum_{i=1}^{m} \lambda_{i}A_{i}\succeq0. $$
(10)

 □

3 Global optimality conditions for nonlinear programming problems with box constraints

In this section, we will derive some global optimality conditions for nonlinear programming problems with box constraints by using the results obtained in Section 2.

3.1 Global optimality conditions for general nonconvex minimization problems with box constraints

We consider the following nonlinear programming problem:

$$\begin{aligned} (\mbox{BP}) \quad&\min\quad f(x) \\ \quad&\mbox{s.t.}\quad x\in B:=\prod_{i=1}^{n}[u_{i},v_{i}], \end{aligned}$$

where \(f:R^{n}\rightarrow R\) is a twice continuously differentiable function, \(u_{i},v_{i}\in R\) and \(u_{i}\leq v_{i}\), \(i=1,\ldots,n\).

Without loss of generality, we suppose that \(0\leq u_{i}< v_{i}\), \(i=1,\ldots,n\). If \(u_{i}=v_{i}\), we can replace B by \(\bar{B}=\prod_{j=1}^{j=i-1}B_{j}\times\prod_{j=i+1}^{n}B_{j}\) and replace f by \(\bar{f}\), where \(\bar{f}(x)=f(\bar{x})\), \(\bar{x}=(x_{1},\ldots,x_{i-1},u_{i}, x_{i+1},\ldots,x_{n})^{T}\) and \(B_{j}=[u_{j},v_{j}]\). If \(u_{i}<0\), we let \(y_{i}=x_{i}-u_{i}\), \(y_{j}=x_{j}\), \(j\neq i\), then \(y_{i}\in[0,v_{i}-u_{i}]\). Hence, we can obtain the solution of problem (BP) by solving the following problem:

$$\begin{aligned} (\mbox{P}) \quad&\min\quad f(y) \\ \quad&\mbox{s.t.}\quad y\in B^{\prime}, \end{aligned}$$

where \(B^{\prime}:=\prod_{j=1}^{j=i-1}B_{j}\times[0,v_{i}-u_{i}]\times \prod_{j=i+1}^{n}B_{j}\).

The feasible set B can be written as

$$B=\bigl\{ x\in R^{n}|x_{i}^{2}-(u_{i}+v_{i})x_{i}+u_{i}v_{i} \leq0, i=1,\ldots,n\bigr\} . $$

Thus we can formulate (BP) as an equivalent nonlinear problem with quadratic constraints:

$$\begin{aligned} (\mbox{BP}_{0}) \quad&\min\quad f(x) \\ \quad&\mbox{s.t.}\quad f_{i}(x)=x_{i}^{2}-(u_{i}+v_{i})x_{i}+u_{i}v_{i} \leq 0,\quad i=1,\ldots,n. \end{aligned}$$
(11)

For problem (BP), we let

$$\begin{aligned}& u = (u_{1},\ldots,u_{n})^{T}, \\& v = (v_{1},\ldots,v_{n})^{T}. \end{aligned}$$

Theorem 3

(Necessary condition)

For the problem (BP), let \(\bar{x}\in B\). Assume that there exists a Z-matrix \(A\in S^{n}\) such that \(\nabla f(\bar{x})-A\bar{x}\leq0\) and \(\nabla^{2}f(x)-A\preceq0\), for each \(x\in B\). If \(\bar{x}\) is a global minimizer of (BP), then there exists \(\lambda\in R^{n}_{+}\), such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,n\),

$$ \nabla f(\bar{x})+\operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\textit{and}\quad A+2 \operatorname{diag}(\lambda)\succeq0. $$
(12)

Proof

Let \(A_{i}=\operatorname{diag}(2e_{i})\), \(a_{i}=-(u_{i}+v_{i})e_{i}\), \(i=1,\ldots,n\), where \(e_{i}=(0,\ldots,0,1,0,\ldots,0)^{T}\) is the vector whose ith component is 1 and whose other components are 0. Then \(H_{i}= \bigl({\scriptsize\begin{matrix}{} A_{i} & a_{i}\cr a_{i}^{T} & 2u_{i}v_{i} \end{matrix}}\bigr)\), \(i=1,\ldots,n\), and each \(H_{i}\) is a Z-matrix, as \(0\leq u_{i}< v_{i}\). Moreover, there exists \(x_{0}\in B\) such that \(f_{i}(x_{0})<0\), \(i=1,\ldots,n\): for instance, take \(x^{0}_{i}=(u_{i}+v_{i})/2\), \(i=1,\ldots,n\); then \(x_{0}=(x^{0}_{1},\ldots,x^{0}_{n})^{T}\in B\) and \(f_{i}(x_{0})=-((v_{i}-u_{i})/2)^{2}<0\). Since \(\bar{x}\) is a global minimizer of (BP) if and only if \(\bar{x}\) is a global minimizer of (BP0), Theorem 1 yields the existence of \(\lambda\in R^{n}_{+}\) such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,n\),

$$ \nabla f(\bar{x})+\sum_{i=1}^{n} \lambda_{i}(A_{i}\bar{x}+a_{i})=0 \quad\mbox{and}\quad A+ \sum_{i=1}^{n}\lambda_{i}A_{i} \succeq0, $$

i.e.

$$ \nabla f(\bar{x})+\operatorname{diag }(\lambda) (2\bar{x}-u-v)=0 \quad\mbox{and}\quad A+2 \operatorname{diag}(\lambda)\succeq0. $$

 □

Theorem 4

(Sufficient condition)

For the problem (BP), let \(\bar{x}\in B\). Assume that there exists \(A\in S^{n}\) such that \(\nabla^{2}f(x)-A\succeq0\), for each \(x\in B\), and there exists \(\lambda\in R^{n}_{+}\), such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,n\),

$$ \nabla f(\bar{x})+\operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\textit{and}\quad A+2 \operatorname{diag}(\lambda)\succeq0, $$
(13)

then \(\bar{x}\) is a global minimizer of (BP).

Proof

It can be obtained directly from Theorem 2. □
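
Conditions (12) and (13) are directly checkable once the matrix A and a candidate multiplier λ are at hand. A minimal sketch of the test behind Theorem 4 (assuming NumPy; the helper name is ours, not from the paper):

```python
import numpy as np

def check_sufficient_condition_13(grad_f_xbar, A, x_bar, u, v, lam, tol=1e-10):
    """Check the multiplier system (13) of Theorem 4 at x_bar for the box [u, v].

    A is assumed (NOT verified here) to satisfy Hessian(f)(x) - A >= 0 (PSD)
    for every x in the box; lam is a candidate multiplier vector.
    """
    f_box = (x_bar - u) * (x_bar - v)        # f_i(x_bar) = (x_i - u_i)(x_i - v_i)
    complementarity = np.all(np.abs(lam * f_box) <= tol)
    stationarity = np.allclose(grad_f_xbar + lam * (2 * x_bar - u - v), 0.0, atol=tol)
    psd = np.all(np.linalg.eigvalsh(A + 2 * np.diag(lam)) >= -tol)
    return bool(np.all(lam >= -tol) and complementarity and stationarity and psd)
```

Since (12) and (13) share the same multiplier system, the same routine can also be used for the multiplier part of the necessary condition (12).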

We now present two examples to illustrate that a global minimizer satisfies our necessary condition, and sometimes also our sufficient condition, while a local minimizer that is not global fails to satisfy the necessary condition.

Example 2

(Three-hump camelback function [24])

We consider the following nonlinear programming problem with box constraints

$$\begin{aligned} (\mbox{P1}) \quad&\min\quad -6.3x_{1}^{4}+x_{1}^{6}+12x_{1}^{2}-6x_{1}x_{2}+6x_{2}^{2} \\ \quad&\mbox{s.t.}\quad 0\leq x_{1},x_{2}\leq2. \end{aligned}$$

We know \(\bar{x}=(0,0)\) is a global minimizer of (P1), and \(\bar{y}=(1.75,0.8)^{T}\) is a local minimizer of (P1); see [24]. Let \(f(x)=-6.3x_{1}^{4}+x_{1}^{6}+12x_{1}^{2}-6x_{1}x_{2}+6x_{2}^{2}\), \(u=(0,0)^{T},v=(2,2)^{T}\), then \(\nabla f(\bar{x})=(0,0)^{T}\). Take \(A_{z}= \bigl({\scriptsize\begin{matrix}{} 210 & -6\cr -6 & 12 \end{matrix}}\bigr)\), then \(A_{z}\) is a Z-matrix, \(\nabla f(\bar{x})-A_{z}\bar{x}=0\leq0\) and \(\nabla^{2}f(x)-A_{z}\preceq 0\), for each \(x\in[u,v]\). Then a direct calculation shows that \(\lambda=(\lambda_{1},\lambda_{2})=(0,0)\) solve

$$\lambda_{i}f_{i}(\bar{x})=0,\quad i=1,2,\quad\quad \nabla f(\bar{x})+ \operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\mbox{and}\quad A_{z}+2 \operatorname{diag}(\lambda)\succeq0, $$

i.e. the necessary condition (12) holds at the global minimizer \(\bar{x}\). But the sufficient condition (13) does not hold at \(\bar{x}\). Indeed, suppose that there exist \(A\in S^{2}\) and \(\lambda\in R^{2}_{+}\) such that the sufficient condition (13) holds. Since \(\nabla f(\bar{x})=0\), the stationarity equation forces \(\lambda_{1}=\lambda_{2}=0\), and then \(A\succeq0\). Hence \(\nabla^{2}f(x)\succeq A \succeq0\) for each \(x\in[u,v]\), which contradicts the fact that \(\nabla^{2}f(x)\) is not positive semidefinite on the box.

Note that there does not exist a Z-matrix \(A_{z}\) which satisfies \(\nabla f(\bar{y})-A_{z}\bar{y}\leq0\) and \(\nabla^{2}f(x)-A_{z}\preceq 0\), for each \(x\in[u,v]\). Hence, the necessary condition is not satisfied at the local minimizer \(\bar{y}\) which is not global. See Figure 1.

Figure 1. The behavior of (P1).

Example 3

We consider the following nonlinear programming problem with box constraints

$$\begin{aligned} (\mbox{P2}) \quad &\min \quad\sin x_{1}+x_{2}^{2}-x_{1} \\ \quad&\mbox{s.t.}\quad 0\leq x_{1}\leq\pi, 0 \leq x_{2}\leq1. \end{aligned}$$

Let \(\bar{x}=(\pi,0)\) and \(f(x)=\sin x_{1}+x_{2}^{2}-x_{1}\), \(u=(0,0)^{T}\), \(v=(\pi,1)^{T}\), then \(\nabla f(x)=(-1+\cos x_{1},2x_{2})^{T}\), \(\nabla^{2} f(x)= \bigl( {\scriptsize\begin{matrix}{} -\sin x_{1} & 0\cr 0 & 2 \end{matrix}}\bigr)\), and \(\nabla f(\bar{x})=(-2,0)^{T}\). Take \(A= \bigl({\scriptsize\begin{matrix}{} -1 & 0\cr 0 & 2 \end{matrix}}\bigr)\), then \(\nabla^{2}f(x)-A\succeq 0\), for each \(x\in[u,v]\). Then a direct calculation shows that \(\lambda=(\lambda_{1},\lambda_{2})=(\frac{2}{\pi},0)\) solve

$$\lambda_{i}f_{i}(\bar{x})=0,\quad i=1,2,\quad\quad \nabla f(\bar{x})+ \operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\mbox{and}\quad A+2 \operatorname{diag}(\lambda)\succeq0, $$

i.e. the sufficient condition (13) holds at \(\bar{x}\), hence \(\bar{x}\) is a global minimizer of (P2). Similarly, we can take a Z-matrix \(A^{\prime}= \bigl({\scriptsize\begin{matrix}{} 0 & 0\cr 0 & 2 \end{matrix}}\bigr)\), then \(\nabla f(\bar{x})-A^{\prime}\bar{x}=(-2,0)^{T}\leq0\), \(\nabla^{2}f(x)-A^{\prime}\preceq 0\), for each \(x\in[u,v]\), and \(\lambda=(\lambda_{1},\lambda_{2})=(\frac {2}{\pi},0)\) solve

$$\lambda_{i}f_{i}(\bar{x})=0,\quad i=1,2, \quad\quad\nabla f(\bar{x})+ \operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\mbox{and}\quad A^{\prime}+2 \operatorname{diag}(\lambda)\succeq0, $$

i.e. the necessary global optimality condition (12) is satisfied at the global minimizer \(\bar{x}\).

Let us consider another point \(\bar{y}=(\frac{\pi}{2},0)^{T}\); then \(\nabla f(\bar{y})=(-1,0)^{T}\). Neither the sufficient condition (13) nor the necessary condition (12) holds at \(\bar{y}\), since there does not exist \(\lambda\in R^{2}_{+}\) such that \(\nabla f(\bar{y})+\operatorname{diag}(\lambda) (2\bar{y}-u-v)=0\): the first component of this equation reads \(-1+\lambda_{1}\cdot0=0\).
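
The claims of Example 3 can be reproduced with the helper sketched after Theorem 4 (again an illustration of ours, assuming NumPy and that helper are available):

```python
import numpy as np
# assumes check_sufficient_condition_13 from the sketch after Theorem 4 is in scope

u, v = np.array([0.0, 0.0]), np.array([np.pi, 1.0])
x_bar = np.array([np.pi, 0.0])
grad_f_xbar = np.array([-1.0 + np.cos(x_bar[0]), 2 * x_bar[1]])   # = (-2, 0)
A = np.array([[-1.0, 0.0], [0.0, 2.0]])     # Hessian(f)(x) - A >= 0 on the box
lam = np.array([2.0 / np.pi, 0.0])

print(check_sufficient_condition_13(grad_f_xbar, A, x_bar, u, v, lam))   # True

# At y_bar = (pi/2, 0)^T the stationarity equation has no solution, since its
# first component reads -1 + lam_1 * 0 = 0.
```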

3.2 Global optimality conditions for nonconvex quadratic programming problems with box constraints

We consider the following nonconvex quadratic program with box constraint:

$$\begin{aligned} (\mbox{BQP}) \quad&\min\quad f_{0}(x)=\frac{1}{2}x^{T}A_{0}x+a_{0}^{T}x+c_{0} \\ \quad&\mbox{s.t.}\quad x\in B, \end{aligned}$$

where \(A_{0}\in S^{n}\) and \(a_{0}\in R^{n}\).

For (BQP), we define

$$H_{0}= \begin{pmatrix} A_{0} & a_{0}\\ a_{0}^{T} & 2c_{0} \end{pmatrix}. $$

Corollary 2

(Sufficient condition)

For problem (BQP), let \(\bar{x} \in B\). If there exists \(\lambda\in R^{n}_{+}\), such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,n\),

$$ a_{0}+A_{0}\bar{x}+\operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\textit{and}\quad A_{0}+2 \operatorname{diag}(\lambda) \succeq0, $$
(14)

then \(\bar{x}\) is a global minimizer of (BQP).

Proof

Take \(A=A_{0}\), then the conclusion follows from Theorem 4. □

Another sufficient global optimality condition for nonconvex quadratic minimization problem with box constraints was given in [21] as the following corollary.

Corollary 3

[21]

For problem (BQP), let \(\bar{x} \in B\). Suppose that there exists a diagonal matrix \(Q:=\operatorname{diag}(q_{1},\ldots,q_{n})\), \(q_{i}\in R\), \(i=1,\ldots,n\), such that \(A_{0}-Q\succeq0\) and

$$ \widetilde{X}(a_{0}+A_{0}\bar{x})-\frac{1}{2} \widehat{Q }(v-u)\leq 0, $$
(15)

then \(\bar{x}\) is a global minimizer of (BQP), where

$$ \begin{aligned}&\widetilde{x}_{i}: = \textstyle\begin{cases}-1, & \textit{if } \bar{x}_{i}=u_{i},\\ 1,& \textit{if }\bar{x}_{i}=v_{i}, \\ (a_{0}+A_{0}\bar{x})_{i},& \textit{if }\bar{x}_{i}\in(u_{i},v_{i}), \end{cases}\displaystyle \\ &\widetilde{X}:=\operatorname{diag}( \widetilde{x}_{1},\ldots, \widetilde{x}_{n}), \end{aligned} $$
(16)

and for \(q=(q_{1},\ldots,q_{n})^{T} \in R^{n}\), \(\widehat{q}_{i}=\min \{0,q_{i}\}\), \(i=1,\ldots,n\), \(\widehat{Q}=\operatorname{diag}(\widehat{q}_{1},\ldots,\widehat{q}_{n})\).
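
Condition (15) is likewise easy to test numerically once a diagonal matrix Q with \(A_{0}-Q\succeq0\) has been chosen. A minimal sketch (assuming NumPy; the helper name is ours):

```python
import numpy as np

def check_condition_15(A0, a0, x_bar, u, v, q, tol=1e-10):
    """Check the sufficient condition (15) of Corollary 3 for Q = diag(q)."""
    if np.any(np.linalg.eigvalsh(A0 - np.diag(q)) < -tol):   # requires A0 - Q >= 0
        return False
    r = a0 + A0 @ x_bar
    # x_tilde as defined in (16)
    x_tilde = np.where(np.abs(x_bar - u) <= tol, -1.0,
                       np.where(np.abs(x_bar - v) <= tol, 1.0, r))
    q_hat = np.minimum(0.0, q)
    return bool(np.all(x_tilde * r - 0.5 * q_hat * (v - u) <= tol))
```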

Remark 2

The two sufficient global optimality conditions given in Corollaries 2 and 3 are equivalent. If the sufficient condition (14) is satisfied, then we can prove that condition (15) holds by taking \(q_{i}=-2\lambda_{i}\) and discussing \(\bar{x}_{i}\) in the following three cases: \(\bar{x}_{i}=u_{i}\), \(\bar{x}_{i}=v_{i}\) and \(\bar{x}_{i}\in(u_{i},v_{i})\). Conversely, the sufficient condition (14) holds if condition (15) is satisfied. We let \(\lambda_{i}=-\frac{\widetilde{x}_{i}(a_{0}+A_{0}\bar{x})_{i}}{v_{i}-u_{i}}\), then \(\lambda_{i}\geq0\) and \(A_{0}+2\operatorname{diag}(\lambda)\succeq A_{0}-\widehat{Q}\succeq A_{0}-Q\succeq0\), since \(-\frac{\widetilde{x}_{i}(a_{0}+A_{0}\bar{x})_{i}}{v_{i}-u_{i}}\geq -\frac{1}{2}\widehat{q}_{i}\geq0\). If \(\bar{x}_{i}\in(u_{i},v_{i})\), then \((a_{0}+A_{0}\bar{x})_{i}=0\) and \(\lambda_{i}=0\). Hence the sufficient condition (14) holds.

Corollary 4

(Sufficient and necessary condition)

For problem (BQP), if \(H_{0}\) is a Z-matrix, then \(\bar{x}\in B\) is a global minimizer of (BQP) if and only if there exists \(\lambda\in R^{n}_{+}\), such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,n\),

$$\begin{aligned} A_{0}\bar{x}+a_{0}+\operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\textit{and}\quad A_{0}+2 \operatorname{diag}(\lambda) \succeq0. \end{aligned}$$
(17)

Proof

The necessary condition for global optimality directly follows from Theorem 3, by taking \(A=A_{0}\).

Conversely, suppose that there exists \(\lambda\in R^{n}_{+}\), such that \(\lambda_{i} f_{i}(\bar{x})=0\), \(i=1,\ldots,n\),

$$A_{0}\bar{x}+a_{0}+\operatorname{diag} (\lambda) (2\bar{x}-u-v)=0 \quad\mbox{and}\quad A_{0}+2 \operatorname{diag}(\lambda)\succeq0, $$

i.e.

$$\nabla\Biggl(f_{0}+\sum_{i=1}^{n} \lambda_{i}f_{i}\Biggr) (\bar{x})=0 \quad\mbox{and}\quad \nabla^{2}\Biggl(f_{0}+\sum_{i=1}^{n} \lambda_{i}f_{i}\Biggr) (\bar{x})\succeq0. $$

Hence, since \(f_{0}+\sum_{i=1}^{n}\lambda_{i}f_{i}\) is a quadratic function with a positive semidefinite Hessian and a stationary point at \(\bar{x}\), we have

$$f_{0}(x)+\sum_{i=1}^{n} \lambda_{i}f_{i}(x)\geq f_{0}(\bar{x})+\sum _{i=1}^{n}\lambda_{i}f_{i}( \bar{x})=f_{0}(\bar{x}),\quad \forall x\in B. $$

Thus, \(\bar{x}\) is a global minimizer of (BQP), since \(\sum_{i=1}^{n}\lambda_{i}f_{i}(x)\leq0\). □

Let us now give a numerical example to apply our sufficient and necessary global optimality condition to a nonconvex quadratic program with box constraints.

Example 4

Consider the following quadratic program:

$$\begin{aligned} (\mbox{P3}) \quad&\min\quad f_{0}(x)=2x_{1}^{2}-4x_{2}^{2}-4x_{1}x_{2} \\ \quad&\mbox{s.t.}\quad 0\leq x_{1},x_{2}\leq2. \end{aligned}$$

Let \(A_{0}= \bigl({\scriptsize\begin{matrix}{} 2 & -2\cr -2 & -4 \end{matrix}}\bigr)\), \(a_{0}=0\), \(c_{0}=0\), \(u=(0,0)^{T}\), \(v=(2,2)^{T}\). Obviously, \(H_{0}\) is a Z-matrix. Let \(\bar{x}=(2,2)^{T}\). By \(A_{0}\bar{x}+a_{0}+\operatorname{diag} (\lambda) (2\bar{x}-u-v)=0\), we have \(\lambda=(0,6)^{T}\in R_{+}^{2}\). It is easy to verify that \(\lambda_{i}f_{i}(\bar{x})=0\), \(i=1,2\), where \(f_{i}(x)=x_{i}^{2}-2x_{i}\), \(i=1,2\), and

$$A_{0}+2\operatorname{diag}(\lambda)= \begin{pmatrix} 2 & -2\\ -2 & 8 \end{pmatrix}\succeq0. $$

Hence, the sufficient and necessary global optimality condition (17) holds at \(\bar{x}\), and so \(\bar{x}\) is a global minimizer of (P3). See Figure 2.

Figure 2. The behavior of (P3).
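
The numbers in Example 4 are easy to reproduce. The following minimal sketch (assuming NumPy; our notation) recovers \(\lambda=(0,6)^{T}\) from the stationarity equation and confirms that \(A_{0}+2\operatorname{diag}(\lambda)\) is positive semidefinite:

```python
import numpy as np

A0 = np.array([[2.0, -2.0], [-2.0, -4.0]])
a0 = np.zeros(2)
u, v = np.zeros(2), np.array([2.0, 2.0])
x_bar = np.array([2.0, 2.0])

# Stationarity: A0 x_bar + a0 + diag(lam)(2 x_bar - u - v) = 0.
# Every component of 2*x_bar - u - v equals 2 here, so lam can be solved for directly.
lam = -(A0 @ x_bar + a0) / (2 * x_bar - u - v)
print(lam)                                          # [0. 6.]
print(lam * (x_bar - u) * (x_bar - v))              # complementarity: [0. 0.]
print(np.linalg.eigvalsh(A0 + 2 * np.diag(lam)))    # both eigenvalues nonnegative
```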

Another sufficient and necessary global optimality condition for (BQP), when \(A_{0}\) is a diagonal matrix, is given in [22] as the following corollary.

Corollary 5

For problem (BQP), if \(A_{0}\) is a diagonal matrix, then \(\bar{x}\) is a global minimizer of (BQP) if and only if

$$ \frac{1}{2}\widetilde{a}_{ii}(v_{i}-u_{i})+ \widetilde{x}_{i}(a_{0}+A_{0}\bar{x})_{i}\leq0,\quad i=1,\ldots,n, $$
(18)

where \(\widetilde{a}_{ii}=\max\{0,-a_{ii}\}\), \(a_{ii}\), \(i=1,\ldots,n\), are the diagonal elements of \(A_{0}\), and \(\widetilde{x}_{i}\) is defined by (16).

Note that the sufficient and necessary global optimality condition (17) extends the global optimality condition (18). On the one hand, in Corollary 4, \(A_{0}\) is a Z-matrix and not necessarily a diagonal matrix. On the other hand, we can easily verify that condition (17) implies condition (18) by considering the cases \(\bar{x}_{i}=u_{i}\), \(\bar{x}_{i}=v_{i}\), and \(\bar{x}_{i}\in(u_{i},v_{i})\), together with \(a_{ii}\geq0\) and \(a_{ii}<0\).

Remark 3

In this paper, some global optimality conditions for nonlinear programs with box constraints are presented. In the conditions given by Theorem 3, Theorem 4, and Corollaries 2 and 4, if the multiplier λ exists, then it is unique and

$$ \lambda_{i} = \textstyle\begin{cases}\frac{(\nabla f(\bar{x}))_{i}}{v_{i}-u_{i}}, & \mbox{if } \bar{x}_{i} =u_{i},\\ \frac{(\nabla f(\bar{x}))_{i}}{u_{i}-v_{i}}, & \mbox{if } \bar{x}_{i} =v_{i}, \\ 0,& \mbox{if } \bar{x}_{i}\in(u_{i},v_{i}). \end{cases} $$

Hence, the global optimality conditions obtained in this paper are tractable.

For problem (BQP) and \(\bar{x}=(\bar{x}_{1},\ldots,\bar{x}_{n})^{T}\in B\), we define

$$\begin{aligned}& I_{\bar{x}} := \bigl\{ i\in\{1,\ldots,n\}| \bar{x}_{i} \in(u_{i},v_{i})\bigr\} , \\& \lambda_{\bar{x}_{i}} := \textstyle\begin{cases}\frac{(a_{0}+A_{0}\bar{x})_{i}}{v_{i}-u_{i}}, & \mbox{if } \bar{x}_{i} =u_{i},\\ \frac{(a_{0}+A_{0}\bar{x})_{i}}{u_{i}-v_{i}}, & \mbox{if } \bar{x}_{i} =v_{i}, \\ 0,& \mbox{if } \bar{x}_{i}\in(u_{i},v_{i}), \end{cases}\displaystyle \quad i=1,\ldots,n, \\& \lambda_{\bar{x}} := (\lambda_{\bar{x}_{1}},\ldots,\lambda_{\bar{x}_{n}})^{T}. \end{aligned}$$

Therefore, Corollaries 2 and 4 can be written in a simple way.

Corollary 6

(Sufficient condition)

For problem (BQP), let \(\bar{x} \in B\). If \(\lambda_{\bar{x}}\geq0\) and

$$ \textstyle\begin{cases} (a_{0}+A_{0}\bar{x})_{i}=0,\quad i\in I_{\bar{x}} ,\\ A_{0}+2 \operatorname{diag}(\lambda_{\bar{x}})\succeq0, \end{cases} $$

then \(\bar{x}\) is a global minimizer of (BQP).

Corollary 7

(Sufficient and necessary condition)

For problem (BQP), if \(H_{0}\) is a Z-matrix, then \(\bar{x}\in B\) is a global minimizer of (BQP) if and only if \(\lambda_{\bar{x}}\in R^{n}_{+}\), and

$$ \textstyle\begin{cases} (a_{0}+A_{0}\bar{x})_{i}=0,\quad i\in I_{\bar{x}} ,\\ A_{0}+2 \operatorname{diag}(\lambda_{\bar{x}})\succeq0. \end{cases} $$
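
When \(H_{0}\) is a Z-matrix, Corollary 7, combined with the closed-form multiplier \(\lambda_{\bar{x}}\) defined above, gives a finite test that is easy to implement. A minimal sketch (assuming NumPy; the function name is ours, not from the paper):

```python
import numpy as np

def is_global_minimizer_bqp(A0, a0, x_bar, u, v, tol=1e-10):
    """Test of Corollary 7 for (BQP), assuming H0 is a Z-matrix.

    Builds lambda_xbar from its closed-form definition and checks that it is
    nonnegative, that (a0 + A0 x_bar)_i = 0 for interior components, and that
    A0 + 2 diag(lambda_xbar) is positive semidefinite.
    """
    r = a0 + A0 @ x_bar
    at_lower = np.abs(x_bar - u) <= tol
    at_upper = np.abs(x_bar - v) <= tol
    interior = ~at_lower & ~at_upper
    lam = np.zeros_like(x_bar, dtype=float)
    lam[at_lower] = r[at_lower] / (v[at_lower] - u[at_lower])
    lam[at_upper] = r[at_upper] / (u[at_upper] - v[at_upper])
    if np.any(np.abs(r[interior]) > tol) or np.any(lam < -tol):
        return False
    return bool(np.all(np.linalg.eigvalsh(A0 + 2 * np.diag(lam)) >= -tol))
```

Applied to the data of Example 4, this test returns True at \(\bar{x}=(2,2)^{T}\).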

References

  1. Floudas, CA, Visweswaran, V: Quadratic optimization. In: Horst, R, Pardalos, PM (eds.) Handbook of Global Optimization, pp. 217-269. Kluwer Academic, Dordrecht (1995)

  2. Pardalos, P, Romeijn, H: Handbook in Global Optimization, Vol. II. Kluwer Academic, Dordrecht (2002)

  3. Davidon, WC: Conic approximations and collinear scalings for optimizers. SIAM J. Numer. Anal. 17, 268-281 (1980)

  4. Di, S, Sun, WY: A trust region method for conic model to solve unconstrained optimization. Optim. Methods Softw. 6, 237-263 (1996)

  5. Ni, Q: Optimality conditions for trust-region subproblems involving a conic model. SIAM J. Optim. 3, 189-209 (2005)

  6. Moré, JJ: Generalization of the trust region problem. Optim. Methods Softw. 2, 189-209 (1993)

  7. Pardalos, PM, Rosen, JB: Constrained Global Optimization: Algorithms and Applications. Lecture Notes in Computer Sciences, vol. 268. Springer, Berlin (1987)

  8. Horst, R, Pardalos, P: Handbook of Global Optimization, Nonconvex Optimization and Its Applications. Kluwer Academic, Dordrecht (1995)

  9. Dúr, M, Horst, R, Locatelli, M: Necessary and sufficient global optimality conditions for convex maximization revisited. J. Math. Anal. Appl. 217, 637-649 (1998)

  10. Glover, BM, Ishizuka, Y, Jeyakumar, V, Tuan, HD: Complete characterizations of global optimality for problems involving the pointwise minimum of sublinear functions. SIAM J. Optim. 6, 362-372 (1996)

  11. Stern, R, Wolkowicz, H: Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations. SIAM J. Optim. 5, 286-313 (1995)

  12. Strekalovsky, A: Global optimality conditions for nonconvex optimization. J. Glob. Optim. 12, 415-434 (1998)

  13. Jeyakumar, V, Li, G: Necessary global optimality conditions for nonlinear programming problems with polynomial constraints. Math. Program., Ser. A 126, 393-399 (2011)

  14. Beck, A, Teboulle, M: Global optimality conditions for quadratic optimization problems with binary constraints. SIAM J. Optim. 11, 179-188 (2000)

  15. Pinar, MC: Sufficient global optimality conditions for bivalent quadratic optimization. J. Optim. Theory Appl. 122, 433-440 (2004)

  16. Jeyakumar, V, Srisatkunarajah, S: Lagrange multiplier necessary conditions for global optimality for non-convex minimization over a quadratic constraint via S-lemma. Optim. Lett. 3, 23-33 (2009)

  17. Peng, JM, Yuan, YX: Optimality conditions for the minimization of a quadratic with two quadratic constraints. SIAM J. Optim. 7, 579-594 (1997)

  18. Jeyakumar, V, Rubinov, AM, Wu, ZY: Nonconvex quadratic minimization problems with quadratic constraints: global optimality conditions. Math. Program., Ser. A 110, 521-541 (2007)

  19. Hiriart-Urruty, JB: Global optimality conditions in maximizing a convex quadratic function under convex quadratic constraints. J. Glob. Optim. 21, 445-455 (2001)

  20. Hiriart-Urruty, JB: Conditions for global optimality 2. J. Glob. Optim. 13, 349-367 (1998)

  21. Jeyakumar, V, Rubinov, AM, Wu, ZY: Sufficient global optimality conditions for non-convex quadratic minimization problems with box constraints. J. Glob. Optim. 36, 471-481 (2006)

  22. Jeyakumar, V, Huy, NQ: Global minimization of difference of quadratic and convex functions over box or binary constraints. Optim. Lett. 2, 223-238 (2008)

  23. Jeyakumar, V, Lee, GM, Li, GY: Alternative theorems for quadratic inequality systems and global quadratic optimization. SIAM J. Optim. 20, 983-1001 (2009)

  24. Floudas, CA, Pardalos, PM: A Collection of Test Problems for Constrained Global Optimization Algorithms. Springer, Berlin (1990)

Acknowledgements

This research is partially supported by Natural Science Foundation of China (Nos. 11471062, 11401064) and Natural Science Foundation of Chongqing (Nos. cstc2013jjB00001, cstc2013jcyjA00021).

Author information

Correspondence to Guoquan Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors participated in this article’s design and coordination, they also read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Li, G., Wu, Z. & Quan, J. Global optimality conditions for nonconvex minimization problems with quadratic constraints. J Inequal Appl 2015, 257 (2015). https://doi.org/10.1186/s13660-015-0776-3
