
Optimality for \(E\mbox{-}[0,1]\) convex multi-objective programming problems

Abstract

In this paper, we derive sufficient and necessary conditions for an optimal solution of special classes of programming problems involving generalized \(E\mbox{-}[0,1]\) convex functions. We obtain a characterization of efficient solutions for \(E\mbox{-}[0,1]\) convex multi-objective programming problems. Finally, sufficient and necessary conditions for a feasible solution to be an efficient or properly efficient solution are derived.

1 Introduction

The study of multi-objective programming problems has been very active in recent years. The weak minimum (weakly efficient, weak Pareto) solution is an important concept in mathematical models, economics, decision theory, optimal control, and game theory (see, for example, [1–3]). In most works, the objective functions are assumed to be convex. Extending convexity is an area of active current research in optimization theory: various relaxations of convexity are possible, and the resulting functions are called generalized convex functions. The definition of generalized convex functions has occupied the attention of a number of mathematicians; for an overview of generalized convex functions we refer to [4–6]. A significant generalization of convexity is the concept of \(E\mbox{-}[0,1]\) convexity [7], which depends on the effect of an operator E on the range of the function and the closed unit interval \([0,1]\). Inspired and motivated by the above, the purpose of this paper is to formulate problems which involve generalized \(E\mbox{-}[0,1]\) convex functions. The paper is organized as follows. In Section 2, we define generalized \(E\mbox{-}[0,1]\) convex functions, which are called pseudo \(E\mbox{-}[0,1]\) convex functions, and obtain sufficient and necessary conditions for an optimal solution of \(E\mbox{-}[0,1]\) convex programming problems. In Section 3, we consider the Wolfe type dual and generalize its results under the \(E\mbox{-}[0,1]\) convexity assumptions. In Section 4, we formulate the multi-objective programming problem which involves \(E\mbox{-}[0,1]\) convex functions. An efficient solution for the problem considered is characterized by the weighting and ε-constraint approaches. At the end of this paper, we obtain sufficient and necessary conditions for a feasible solution to be an efficient or properly efficient solution for problems involving generalized \(E\mbox{-}[0,1]\) convex functions. We close this section with a brief survey of definitions and results concerning \(E\mbox{-}[0,1]\) convexity.

Definition 1

[7]

A real valued function \(f:M\subseteq R^{n} \to R\) is said to be an \(E\mbox{-}[0,1]\) convex function on M with respect to \(E:R\times[0,1]\to R\), if M is a convex set and, for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),

$$f(\lambda_{1} x+\lambda_{2} y) \le E\bigl(f(x), \lambda_{1} \bigr)+E\bigl(f(y) ,\lambda_{2} \bigr). $$

If \(f(\lambda_{1} x+\lambda_{2} y) \ge E(f(x),\lambda_{1} )+E(f(y) ,\lambda_{2} )\), then f is called an \(E\mbox{-}[0,1]\) concave function on M. If the inequality signs in the previous two inequalities are strict, then f is called strictly \(E\mbox{-}[0,1]\) convex and strictly \(E\mbox{-}[0,1]\) concave, respectively.

Every \(E\mbox{-}[0,1]\) convex function with respect to \(E:R\times [0,1]\to R\) is a convex function if \(E(f(x),\lambda)= \lambda f(x)\). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=(1+\lambda) t\), \(t \in R\), \(\lambda\in[0,1]\), then the function \(h(x)=\sum_{i=1}^{k}a_{i} f_{i} (x)\) is \(E\mbox{-}[0,1]\) convex on M for \(a_{i} \ge0\), \(i=1,2,\ldots,k\), if the functions \(f_{i} :R^{n} \to R\) are all \(E\mbox{-}[0,1]\) convex on a convex set \(M\subseteq R^{n} \). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min\{ \lambda t,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then a numerical function \(f:M\subseteq R^{n} \to R^{+} \) defined on a convex set \(M\subseteq R^{n} \) is \(E\mbox{-}[0,1]\) convex if and only if its epigraph \(\operatorname{epi}(f)\) is convex. Let B be an open convex subset of \(R^{n} \) and let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min\{ \lambda,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then f is continuous on B if f is \(E\mbox{-}[0,1]\) convex on B. If \(f:R^{n} \to R\) is a differentiable \(E\mbox{-}[0,1]\) convex function at \(y\in M\) with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda)=\min\{ \lambda t,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then, for each \(x\in M\), we have \((x-y)\nabla f(y)\le f(x)-f(y)\). For more details as regards \(E\mbox{-}[0,1]\) convex functions, see [7].
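As a quick illustration (ours, not from [7]), the defining inequality of Definition 1 can be probed numerically. The sketch below assumes the example \(f(x)=x^{2}\) with \(E(t,\lambda)=(1+\lambda)t\); the inequality holds because f is convex and nonnegative, so \(f(\lambda_{1} x+\lambda_{2} y)\le\lambda_{1} f(x)+\lambda_{2} f(y)\le(1+\lambda_{1})f(x)+(1+\lambda_{2})f(y)\).

```python
import random

# Assumed example: f(x) = x^2 is E-[0,1] convex w.r.t. E(t, lam) = (1 + lam)*t,
# since f is convex and nonnegative.
f = lambda x: x * x
E = lambda t, lam: (1.0 + lam) * t

for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    l1 = random.uniform(0, 1)
    l2 = 1.0 - l1
    # Definition 1: f(l1*x + l2*y) <= E(f(x), l1) + E(f(y), l2)
    assert f(l1 * x + l2 * y) <= E(f(x), l1) + E(f(y), l2) + 1e-9
print("Definition 1 verified on all sampled points.")
```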

Definition 2

[8]

A real valued function \(f:M\subseteq R^{n} \to R\) is said to be a quasi \(E\mbox{-}[0,1]\) convex function on M with respect to \(E:R\times[0,1]\to R\), if M is a convex set and, for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),

$$f(\lambda_{1} x+\lambda_{2} y) \le\max\bigl\{ E\bigl(f(x), \lambda_{1} \bigr),E\bigl(f(y) ,\lambda_{2} \bigr)\bigr\} . $$

If \(f(\lambda_{1} x+\lambda_{2} y) \ge \min\{ E(f(x),\lambda _{1} ),E(f(y) ,\lambda_{2} )\} \), then f is called a quasi \(E\mbox{-}[0,1]\) concave function on M. If the inequality signs in the previous two inequalities are strict, then f is called strictly quasi \(E\mbox{-}[0,1]\) convex and strictly quasi \(E\mbox{-}[0,1]\) concave, respectively.

Every quasi \(E\mbox{-}[0,1]\) convex function with respect to \(E:R\times[0,1]\to R\) is a convex function if \(E(f(x),\lambda)= \lambda f(x)\). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(f(x),\lambda)=f(\lambda x) \) for each \(x\in M\), \(\lambda\in [0,1]\), then \(f(\sum_{i=1}^{n}\lambda_{i} x_{i} )\le {\max } _{1\le i\le n} E(f(x_{i} ), \lambda_{i} )\) for each \(x_{i} \in M\), \(\lambda_{i} \ge0\), \(\sum_{i=1}^{n}\lambda_{i}=1\), if \(f:R^{n} \to R\) is \(E\mbox{-}[0,1]\) convex on a convex set \(M\subseteq R^{n} \). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min\{ \lambda ,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then the level set \(L_{\alpha}^{E\mbox{-}[0,1]} \) is a convex set for each \(\alpha\in R\) if \(f:R^{n} \to R\) is quasi \(E\mbox{-}[0,1]\) convex on a convex set \(M\subseteq R^{n} \). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\max \{ \lambda ,t \} \), \(t \in R\), \(\lambda\in[0,1]\), and let \(\alpha= \min_{x} \min_{\lambda} E(f(x),\lambda)\), then the level set \(L_{\alpha}^{E\mbox{-}[0,1]} \) is a convex set if and only if f is quasi \(E\mbox{-}[0,1]\) convex. If \(f:R^{n} \to R\) is a differentiable quasi \(E\mbox{-}[0,1]\) convex function at \(y\in M\) with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda)=\min\{ \lambda ,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then, for each \(x\in M\), we have \((x-y)\nabla f(y)\le0\). For more details as regards quasi \(E\mbox{-}[0,1]\) convex functions, see [8].

2 Generalized \(E\mbox{-}[0,1]\) convex programming problems

In this section, we define generalized \(E\mbox{-}[0,1]\) convex functions, which are called pseudo \(E\mbox{-}[0,1]\) convex functions, and obtain sufficient and necessary conditions for an optimal solution of problems involving generalized \(E\mbox{-}[0,1]\) convex functions.

Definition 3

A real valued function \(f:M\subseteq R^{n} \to R\) is said to be a pseudo \(E\mbox{-}[0,1]\) convex function on a convex set \(M\subseteq R^{n} \) if there exists a strictly positive function \(b:R^{n} \times R^{n} \to R\) such that

$$E\bigl(f(x),\lambda_{1} \bigr)< E\bigl(f(y) ,\lambda_{2} \bigr)\quad \Rightarrow\quad f(\lambda_{1} x+\lambda_{2} y) \le E\bigl(f(y) ,\lambda_{2} \bigr)-\lambda_{1} \lambda_{2} b(x,y) $$

for all \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in [0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\).

Remark 1

Every pseudo \(E\mbox{-}[0,1]\) convex function with respect to \(E:R\times[0,1]\to R\) is a convex function if \(E(t,\lambda)=\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\).

Proposition 1

Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)= \max\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). A convex function \(f:R^{n} \to R\) on a convex set \(M\subseteq R^{n} \) is a pseudo \(E\mbox{-}[0,1]\) convex function on M.

Proof

Let \(E(f(x),\lambda_{1} )< E(f(y) ,\lambda_{2} )\). Since f is a convex function on a convex set \(M\subseteq R^{n} \), for all \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\), we have

$$f(\lambda_{1} x+\lambda_{2} y) \le\lambda_{1} f(x)+\lambda _{2} f(y) \le\lambda_{1} E\bigl(f(x), \lambda_{1} \bigr)+\lambda_{2} E\bigl(f(y) , \lambda_{2} \bigr). $$

That is,

$$\begin{aligned} f(\lambda_{1} x+\lambda_{2} y) \le& E\bigl(f(y) , \lambda_{2} \bigr)+\lambda_{1} \bigl[E\bigl(f(x) , \lambda_{1} \bigr)-E\bigl(f(y) ,\lambda_{2} \bigr)\bigr] \\ \le& E\bigl(f(y) ,\lambda_{2} \bigr)+\lambda_{1} \lambda_{2} \bigl[E\bigl(f(x) ,\lambda_{1} \bigr)-E\bigl(f(y) ,\lambda_{2} \bigr)\bigr] \\ =& E\bigl(f(y) ,\lambda_{2} \bigr)-\lambda_{1} \lambda_{2} \bigl[E\bigl(f(y) ,\lambda_{2} \bigr)-E\bigl(f(x) ,\lambda_{1} \bigr)\bigr] \\ =& E\bigl(f(y) ,\lambda_{2} \bigr)-\lambda_{1} \lambda_{2} b(x,y), \end{aligned}$$

since \(b(x,y)=E(f(y) ,\lambda_{2} )-E(f(x) ,\lambda_{1} )>0\). This is the required result. □
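The construction in this proof is easy to spot-check numerically. A minimal sketch (ours; the convex test function \(f(x)=x^{2}\) is an assumption) samples the implication of Definition 3 with the b built in the proof:

```python
import random

f = lambda x: x * x              # a convex test function (assumed example)
E = lambda t, lam: max(t, lam)   # the map of Proposition 1

for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    l1 = random.uniform(0, 1)
    l2 = 1.0 - l1
    lhs, rhs = E(f(x), l1), E(f(y), l2)
    if lhs < rhs:                # premise of pseudo E-[0,1] convexity
        b = rhs - lhs            # the b(x, y) constructed in the proof
        assert f(l1 * x + l2 * y) <= rhs - l1 * l2 * b + 1e-9
print("The implication of Definition 3 holds on all sampled points.")
```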

Theorem 1

Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)=\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\) and \(M\subseteq R^{n} \) be a convex set. If \(f:R^{n} \to R\) is a differentiable pseudo \(E\mbox{-}[0,1]\) convex function at \(y\in M\), then \((x-y)\nabla f(y) <0\), for each \(x\in M\).

Proof

Since \(f:R^{n} \to R\) is a differentiable pseudo \(E\mbox{-}[0,1]\) convex function at \(y\in M\),

$$\begin{aligned}& E\bigl(f(x),\lambda_{1} \bigr)< E\bigl(f(y) ,\lambda_{2} \bigr) \\& \quad \Rightarrow \quad f(\lambda_{1} x+\lambda_{2} y) \le E\bigl(f(y) ,\lambda_{2} \bigr)-\lambda_{1} \lambda_{2} b(x,y) \le f(y)-\lambda_{1} \lambda_{2} b(x,y) \end{aligned}$$

for each \(x\in M \) and \(\lambda_{1} ,\lambda_{2} \in [0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\). That is,

$$\begin{aligned}& E\bigl(f(x),\lambda_{1} \bigr)< E\bigl(f(y) ,\lambda_{2} \bigr) \\& \quad \Rightarrow \quad f\bigl(y+\lambda_{1} (x-y)\bigr) \le f(y)- \lambda_{1} \lambda_{2} b(x,y) \\& \quad \Rightarrow\quad f(y)+\lambda_{1} (x-y)\nabla f(y)+\mathrm{O} \bigl(\lambda_{1} ^{2} \bigr) \le f(y)-\lambda_{1} \lambda_{2} b(x,y). \end{aligned}$$

Dividing the above inequality by \(\lambda_{1} >0\) and letting \(\lambda _{1} \to0\), we get

$$(x-y)\nabla f(y) \le-b(x,y)< 0 $$

for each \(x\in M\). □

Remark 2

Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)=\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\), and \(M\subseteq R^{n} \) be a convex set. If \(f:R^{n} \to R\) is a differentiable pseudo \(E\mbox{-}[0,1]\) convex function at \(y\in M\), then \((x-y)\nabla f(y) \ge0 \Rightarrow E(f(x),\lambda_{1} )\ge E(f(y) ,\lambda_{2} )\), for each \(x\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\).

Lemma 1

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). If \(g_{i} :R^{n} \to R\), \(i=1,2,\ldots,m\), are \(E\mbox{-}[0,1]\) convex functions on \(R^{n} \), then the set \(M=\{ x\in R^{n} : g_{i} (x)\le 0,i=1,2,\ldots,m\}\) is a convex set.

Proof

Since \(g_{i} (x)\), \(i=1,2,\ldots,m\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E(t,\lambda)=\lambda\min \{ t,\lambda\} \), for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),

$$\begin{aligned} g_{i} (\lambda_{1} x+\lambda_{2} y) \le& E \bigl(g_{i} (x),\lambda_{1} \bigr)+E\bigl(g_{i} (y) ,\lambda_{2} \bigr) \\ =& \lambda_{1} \min\bigl\{ g_{i} (x), \lambda_{1} \bigr\} +\lambda_{2} \min \bigl\{ g_{i} (y),\lambda_{2} \bigr\} \\ \le& \lambda_{1} g_{i} (x)+\lambda_{2} g_{i} (y)\le0, \quad i=1,2,\ldots,m, \end{aligned}$$

hence \(\lambda_{1} x+\lambda_{2} y\in M\) for all \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\). This means that M is a convex set. □

Lemma 2

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min \{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). If \(g_{i} :R^{n} \to R\), \(i=1,2,\ldots,m\), are quasi \(E\mbox{-}[0,1]\) convex functions on \(R^{n} \), then the set \(M=\{ x\in R^{n} : g_{i} (x)\le0,i=1,2,\ldots,m\}\) is a convex set.

Proof

Since \(g_{i} (x)\), \(i=1,2,\ldots,m\), are quasi \(E\mbox{-}[0,1]\) convex functions with respect to \(E(t,\lambda)=\min \{ t,\lambda\} \), for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),

$$\begin{aligned} g_{i} (\lambda_{1} x+\lambda_{2} y) \le& \max \bigl[ E\bigl(g_{i} (x),\lambda_{1} \bigr), E \bigl(g_{i} (y) ,\lambda_{2} \bigr)\bigr] \\ \le& \max\bigl[ g_{i} (x), g_{i} (y)\bigr] \\ \le& 0, \quad i=1,2,\ldots,m, \end{aligned}$$

hence \(\lambda_{1} x+\lambda_{2} y\in M\) for all \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\). This means that M is a convex set. □

Now, we discuss the necessary and sufficient conditions for a feasible solution to be an optimal solution for \(E\mbox{-}[0,1]\) convex programming problems. Consider the following \(E\mbox{-}[0,1]\) convex programming problem:

$$\begin{aligned} \begin{aligned} \hphantom{(\bar{\mathrm{P}})}&\quad \mathrm{min}\quad f(x) \\ (\bar{\mathrm{P}})&\qquad \mbox{subject to} \\ \hphantom{(\bar{\mathrm{P}})}&\qquad x\in M=\bigl\{ x\in R^{n} : g_{i} (x) \le0,i=1,2,\ldots,m\bigr\} . \end{aligned} \end{aligned}$$

Here \(f:R^{n} \to R\) and \(g_{i} :R^{n} \to R\), \(i=1,2,\ldots,m\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\).

Theorem 2

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exists a feasible solution \(x^{*} \) for (\(\bar{\mathrm{P}}\)), and f, g are differentiable \(E\mbox{-}[0,1]\) convex functions with respect to the same E at \(x^{*} \). If there exists \(u\in R^{m}\) with \(u\ge0\) such that \((x^{*} ,u)\) satisfies the following conditions:

$$ \begin{aligned} &\nabla f\bigl(x^{*} \bigr)+ u^{T} \nabla g\bigl(x^{*} \bigr)=0, \\ &u^{T}g\bigl(x^{*} \bigr)=0, \quad g\bigl(x^{*} \bigr)\le0, \end{aligned} $$
(1)

then \(x^{*} \) is an optimal solution for problem (\(\bar{\mathrm{P}}\)).

Proof

For each \(x\in M\), we have

$$\begin{aligned} f(x)-f\bigl(x^{*} \bigr) \ge& \bigl(x-x^{*} \bigr)\nabla f \bigl(x^{*} \bigr)=-\bigl(x-x^{*} \bigr) u^{T} \nabla g\bigl(x^{*} \bigr) \\ \ge& -u^{T} \bigl(g(x)-g\bigl(x^{*} \bigr) \bigr)=-u^{T} g(x)\ge0, \end{aligned}$$

where the above inequalities hold because f, g are \(E\mbox{-}[0,1]\) convex at \(x^{*}\) with respect to the same E (see Theorem 4.1 in [7]). Thus, \(x^{*} \) is the minimizer of \(f(x)\) under the constraint \(g(x)\le0\), which implies that \(x^{*} \) is an optimal solution for problem (\(\bar{\mathrm{P}}\)). □
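Conditions (1) lend themselves to a direct numerical check once a candidate pair \((x^{*},u)\) is at hand. The following sketch (function name and tolerance are ours, not part of the paper) tests stationarity, complementary slackness, and feasibility with NumPy; Example 1 in Section 3 is verified in exactly this way.

```python
import numpy as np

def satisfies_conditions_1(grad_f, grad_g, g, x, u, tol=1e-8):
    """Check conditions (1) at x: stationarity grad_f + u^T grad_g = 0,
    complementary slackness u^T g(x) = 0, feasibility g(x) <= 0, u >= 0.

    grad_f(x) -> (n,) array; grad_g(x) -> (m, n) array whose rows are the
    constraint gradients; g(x) -> (m,) array; u -> (m,) array.
    """
    u = np.asarray(u, dtype=float)
    gx = np.asarray(g(x), dtype=float)
    stationary = np.linalg.norm(grad_f(x) + u @ grad_g(x)) <= tol
    comp_slack = abs(u @ gx) <= tol
    feasible = np.all(gx <= tol) and np.all(u >= -tol)
    return stationary and comp_slack and feasible
```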

Remark 3

[9]

In Theorem 2 above, since \(u\ge0\), \(g(x^{*} )\le0\), and \(u^{T} g(x^{*} )=0\), we have

$$ u_{i} g_{i} \bigl(x^{*} \bigr)=0, \quad i=1,2,\ldots,m. $$
(2)

If \(I(x^{*})=\{ i: g_{i} (x^{*} )=0\} \) and \(J=\{ i: g_{i} (x^{*} )<0\} \), then \(I\cup J=\{ 1,2,\ldots,m\}\), and (2) gives \(u_{i} =0\) for \(i\in J\). It is obvious then, from the proof of Theorem 2, that \(E\mbox{-}[0,1]\) convexity of \({g}_{I}\) at \(x^{*} \) is all that is needed instead of the \(E\mbox{-}[0,1]\) convexity of g at \(x^{*} \) as was assumed in the theorem above.

Theorem 3

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (\(\bar{\mathrm{P}}\)) and scalars \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (1) of Theorem  2 holds. If f is pseudo \(E\mbox{-}[0,1]\) convex, and \(g_{I} \) are quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ) ,\lambda_{2} )\), \(\lambda_{2} \in[0,1]\), is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)).

Proof

Since \(E(g_{I} (x),\lambda_{1} )\le E(g_{I} (x^{*} ) ,\lambda_{2} )=0\), \(u_{i} \ge0\), \(\lambda_{1}, \lambda_{2} \in [0,1]\), \(\lambda_{1} +\lambda_{2} =1\), and \(g_{I} \) are quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \), we have

$$ \bigl(x-x^{*} \bigr)\sum_{i\in I(x^{*} )}u_{i} \bigl[\nabla g_{i} \bigl(x^{*} \bigr) \bigr]^{T} \le 0, \quad \forall x\in M, $$
(3)

By using the above inequality in (1) together with the pseudo \(E\mbox{-}[0,1]\) convexity of f at \(x^{*} \), we obtain

$$\bigl(x-x^{*} \bigr)\bigl[\nabla f \bigl(x^{*} \bigr) \bigr]^{T} \ge 0 \quad \Rightarrow\quad E\bigl(f(x), \lambda_{1} \bigr)\ge E\bigl(f\bigl(x^{*} \bigr) , \lambda_{2} \bigr) \quad \Rightarrow\quad f(x) \ge E\bigl(f \bigl(x^{*} \bigr) ,\lambda_{2} \bigr) . $$

Hence, \(E(f(x^{*} ) ,\lambda_{2} )\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)). □

The next two theorems use the idea proposed by Mahajan and Vartak [10].

Theorem 4

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (\(\bar{\mathrm{P}}\)) and scalars \(u_{i} \ge 0\), \(i\in I(x^{*} )\), such that (1) of Theorem  2 holds. If f is pseudo \(E\mbox{-}[0,1]\) convex, and \(u_{I}^{T} g_{I} \) is quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ) ,\lambda_{2} )\), \(\lambda_{2} \in [0,1]\), is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)).

Proof

The proof of this theorem is similar to the proof of Theorem 3 except that the argument to get the inequality (3) is as follows: Since \(E(g_{I} (x),\lambda_{1} )\le E(g_{I} (x^{*} ) ,\lambda_{2} )\), \(u_{I} \ge0\), \(\lambda_{1}, \lambda_{2} \in [0,1]\), \(\lambda_{1} +\lambda_{2} =1\), we obtain

$$u_{I}^{T} E\bigl(g_{I} (x),\lambda_{1} \bigr) \le0=u_{I}^{T}E\bigl(g_{I} \bigl(x^{*} \bigr) ,\lambda_{2} \bigr) $$

for all \(x\in M\). Quasi \(E\mbox{-}[0,1]\) convexity of \(u_{I}^{T} g_{I}\) at \(x^{*} \) yields

$$\bigl(x-x^{*} \bigr)\nabla\bigl(u_{I} ^{T} g_{I} \bigl(x^{*} \bigr)\bigr) \le 0,\quad \forall x\in M. $$

We can proceed as in the above theorem to prove that \(E(f(x^{*} ) ,\lambda_{2} )\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)). □

Theorem 5

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exists a feasible point \(x^{*} \) for (\(\bar{\mathrm{P}}\)) and the numerical function \(f+u_{I}^{T} g_{I} \) is pseudo \(E\mbox{-}[0,1]\) convex at \(x^{*} \). If there is a vector \(u\in R^{m} \) such that \((x^{*}, u)\) satisfies the conditions (1) of Theorem  2, then \(E(f(x^{*} ) ,\lambda_{2} )\), \(\lambda_{2} \in[0,1]\), is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)).

Proof

The proof of this theorem is similar to the proof of Theorem 4 except that the arguments are as follows: (1) can be written as

$$\nabla f\bigl(x^{*} \bigr)+\nabla\bigl(u^{T} _{I} g_{I} \bigl(x^{*} \bigr)\bigr)=0. $$

This can be rewritten in the form

$$\bigl(x-x^{*} \bigr)\nabla\bigl(\bigl(f+u_{I} ^{T} g_{I} \bigr) \bigl(x^{*} \bigr)\bigr) \le 0, \quad \forall x\in M, $$

which gives

$$E\bigl(\bigl(f+u_{I} ^{T} g_{I} \bigr) \bigl(x^{*} \bigr),\lambda_{2} \bigr) \le E\bigl( \bigl(f+u_{I} ^{T} g_{I} \bigr) (x), \lambda_{1} \bigr),\quad \forall x\in M, $$

because \(f+ u_{I}^{T} g_{I} \) is pseudo \(E\mbox{-}[0,1]\) convex at \(x^{*} \), i.e.,

$$E\bigl(\bigl(f+u_{I} ^{T} g_{I} \bigr) \bigl(x^{*} \bigr),\lambda_{2} \bigr) \le f(x)+ \bigl(u_{I} ^{T} g_{I} \bigr) (x), \quad \forall x \in M. $$

It follows, by using the definition of I, that

$$E\bigl(f\bigl(x^{*} \bigr),\lambda_{2} \bigr) \le f(x),\quad \forall x\in M. $$

Hence, \(E(f(x^{*} ) ,\lambda_{2} )\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)). □

Theorem 6

(Necessary optimality criteria)

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\lambda\min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in [0,1]\). Assume that \(x^{*}\) is an optimal solution for problem (\(\bar {\mathrm{P}}\)) and there exists a feasible point \(\tilde{x}\) for (\(\bar{\mathrm{P}}\)) such that \(g_{i} (\tilde{x})<0\), \(i=1,2,\ldots,m\). If \(g_{i} \), \(i\in I(x^{*} )\), is \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then there exist scalars \(u_{i} \ge 0\), \(i \in I(x^{*} )\), such that \((x^{*} , u_{i} )\) satisfies

$$ \nabla f\bigl(x^{*} \bigr)+\sum _{i\in I(x^{*} )}u_{i} \nabla g_{i} \bigl(x^{*} \bigr) =0. $$
(4)

Proof

We desire to show that

$$ \bigl(x-x^{*} \bigr)\nabla g_{I} \bigl(x^{*} \bigr) \le 0\quad \Rightarrow\quad \bigl(x-x^{*} \bigr)\nabla f\bigl(x^{*}\bigr) \ge0. $$
(5)

The result will follow as in [11] by applying Farkas’ lemma. Assume (5) does not hold, i.e., there exists \(x\in R^{n} \) such that

$$ \bigl(x-x^{*} \bigr)\nabla g_{I} \bigl(x^{*} \bigr) \le 0\quad \mbox{and}\quad \bigl(x-x^{*} \bigr)\nabla f\bigl(x^{*}\bigr)< 0. $$
(6)

Since by the assumed Slater-type condition,

$$g_{i} (\tilde{x})-g_{i} \bigl(x^{*} \bigr)< 0, \quad i\in I\bigl(x^{*} \bigr), $$

and from \(E\mbox{-}[0,1]\) convexity of \(g_{i}\) at \(x^{*} \), we get

$$ \bigl(\tilde{x}-x^{*} \bigr)^{T} \nabla g_{i} \bigl(x^{*} \bigr)< 0,\quad i\in I\bigl(x^{*} \bigr). $$
(7)

Therefore from (6) and (7)

$$\bigl[\bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr) \bigr]^{T} \nabla g_{i} \bigl(x^{*} \bigr)< 0,\quad i\in I\bigl(x^{*} \bigr), \forall\rho>0. $$

Hence for some positive β small enough

$$g_{i} \bigl(x^{*} +\beta\bigl[\bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr)\bigr]\bigr)< g_{i} \bigl(x^{*} \bigr)=0, \quad i\in I\bigl(x^{*} \bigr). $$

Similarly, for \(i\notin I(x^{*} )\), \(g_{i} (x^{*} )<0\), and for \(\beta >0\) small enough,

$$g_{i} \bigl(x^{*} +\beta\bigl[\bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr)\bigr]\bigr) \le 0,\quad i\notin I\bigl(x^{*} \bigr). $$

Thus, for β sufficiently small and all \(\rho>0\), \(x^{*} +\beta [(x-x^{*} )+\rho(\tilde{x}-x^{*} )]\) is feasible for problem (\(\bar {\mathrm{P}}\)). For sufficiently small \(\rho>0\), (6) gives

$$ f\bigl(x^{*} +\beta\bigl[\bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr)\bigr]\bigr)< f \bigl(x^{*} \bigr), $$
(8)

which contradicts the optimality of \(x^{* }\) for (\(\bar{\mathrm{P}}\)). Hence, the system (6) has no solution. The result then follows from an application of Farkas’ lemma, namely

$$\nabla f\bigl(x^{*} \bigr)+\sum_{i\in I(x^{*} )}u_{i} \nabla g_{i} \bigl(x^{*} \bigr) =0,\quad u \ge0. $$

 □

3 Duality in \(E\mbox{-}[0,1]\) convexity

We consider the Wolfe type dual and generalize its results under the \(E\mbox{-}[0,1]\) convexity assumptions. Consider the following Wolfe type dual of problem (\(\bar{\mathrm{P}}\)):

$$\begin{aligned} \hphantom{(\bar{\mathrm{D}})}&\quad \mathrm{max}\quad \psi(y,u) =f(y)+u^{T} g(y) \\ (\bar{\mathrm{D}})&\qquad \mbox{subject to} \\ \hphantom{(\bar{\mathrm{D}})}&\qquad\nabla f(y)+u^{T} \nabla g(y)=0, \quad u \ge0, \end{aligned}$$

where f, g are differentiable functions defined on \(R^{n} \). We now prove the following duality theorems, relating problem (\(\bar {\mathrm{P}}\)) and (\(\bar{\mathrm{D}}\)).

Theorem 7

(Weak duality)

Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)=\lambda \min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\), and let there exist a feasible solution x for (\(\bar{\mathrm{P}}\)) and \((y,u)\), a feasible solution for (\(\bar{\mathrm{D}}\)). If f, g are \(E\mbox{-}[0,1]\) convex functions at y, then \(f(x)\nless f(y)+u^{T} g(y)\).

Proof

Let x be a feasible solution for (\(\bar {\mathrm{P}}\)) and \((y,u)\) be a feasible solution for (\(\bar{\mathrm{D}}\)). Suppose, contrary to the result, that \(f(x)< f(y)+u^{T} g(y)\). Since \(u\ge0\) and \(g(x)\le0\), we have \(u^{T} g(x)\le0\), so

$$\begin{aligned}& f(x)+u^{T} g(x)< f(y)+u^{T} g(y) \quad \mbox{or} \\& f(x)+u^{T} g(x)-f(y)-u^{T} g(y)<0. \end{aligned}$$
(9)

The \(E\mbox{-}[0,1]\) convexity of f and g at y implies that

$$\begin{aligned}& f(x)-f(y) \ge (x-y)^{T} \nabla f(y) \quad \mbox{and} \\& u^{T}\bigl[g(x)-g(y)\bigr] \ge u^{T}(x-y)^{T} \nabla g(y), \end{aligned}$$

and combining the above two inequalities gives

$$f(x)-f(y)+u^{T}g(x)-u^{T}g(y) \ge (x-y)^{T} \bigl[ \nabla f(y)+u^{T} \nabla g(y)\bigr], $$

and by using inequality (9), we get

$$(x-y)^{T} \bigl[\nabla f(y)+u^{T} \nabla g(y)\bigr]< 0, $$

which contradicts the constraint \(\nabla f(y)+u^{T} \nabla g(y)=0\) of (\(\bar{\mathrm{D}}\)). □

Theorem 8

(Strong duality)

Let \(x^{*}\) be an optimal solution for (\(\bar{\mathrm{P}}\)) and g satisfy the Kuhn-Tucker constraint qualification at \(x^{*}\). Then, there exists \(u^{*} \in R^{m} \), such that \((x^{*} ,u^{*} )\) is a feasible solution for (\(\bar{\mathrm{D}}\)) and the (\(\bar{\mathrm{P}}\))-objective at \(x^{*} \) equals the (\(\bar{\mathrm{D}}\))-objective at \((x^{*} ,u^{*} )\). If f, g are \(E\mbox{-}[0,1]\) convex functions at \(x^{*}\) with respect to \(E:R\times [0,1]\to R\) such that \(E(t,\lambda)=\lambda\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\), then \((x^{*} ,u^{*})\) is an optimal solution for problem (\(\bar{\mathrm{D}}\)).

Proof

Since g satisfies the Kuhn-Tucker constraint qualification at \(x^{*}\), there exists \(u^{*} \in R^{m} \), such that the following Kuhn-Tucker conditions are satisfied:

$$\begin{aligned}& \nabla f\bigl(x^{*} \bigr)+u^{*T} \nabla g \bigl(x^{*} \bigr)=0, \end{aligned}$$
(10)
$$\begin{aligned}& u^{*T} g\bigl(x^{*} \bigr) =0, \end{aligned}$$
(11)
$$\begin{aligned}& g\bigl(x^{*} \bigr) \le 0, \end{aligned}$$
(12)
$$\begin{aligned}& u^{*} \ge0. \end{aligned}$$
(13)

(10) and (13) show that \((x^{*} ,u^{*} )\) is a feasible solution for (\(\bar{\mathrm{D}}\)). Also, (11) shows that the (\(\bar{\mathrm{P}}\))-objective at \(x^{*} \) is equal to the (\(\bar{\mathrm{D}}\))-objective at \((x^{*} ,u^{*} )\). Now, from the \(E\mbox{-}[0,1]\) convexity of f and g, we have

$$\begin{aligned} \psi\bigl(x^{*} ,u^{*}\bigr) -\psi(x,u) =& f \bigl(x^{*}\bigr)+u^{*T} g\bigl(x^{*} \bigr)-f(x)-u^{T} g(x) \\ =& f\bigl(x^{*}\bigr)-f(x)-u^{T} g(x), \quad \mbox{by (11)} \\ \ge&\bigl(x^{*}-x \bigr)^{T}\nabla f(x)-u^{T} g(x) \\ =&-\bigl(x^{*} -x\bigr)^{T}u^{T}\nabla g(x)-u^{T} g(x), \quad \mbox{by the constraint of } (\bar{\mathrm{D}}) \\ \ge& -u^{T} \bigl(g\bigl(x^{*}\bigr)-g(x) \bigr)-u^{T} g(x) \\ =&-u^{T} g\bigl(x^{*}\bigr)\ge0 \end{aligned}$$

for each feasible point \((x,u)\) of (\(\bar{\mathrm{D}}\)). Hence, \((x^{*} ,u^{*} )\) is an optimal solution for problem (\(\bar{\mathrm{D}}\)). □

Example 1

Let \(E:R\times[0,1] \to R \) be defined as \(E(t,\lambda)=\lambda\sqrt[3]{t}\), where \(t \in R\) and \(\lambda\in[0,1]\). Consider the problem (\(\bar{\mathrm{P}}\))

$$\begin{aligned}& \mathrm{min}\quad f(x,y)=(y-x)^{3} \\& \quad \mbox{s.t. } (x,y)\in M=\bigl\{ (x,y)\in R^{2} : x+y\le3, 1 \le y \le3, x\ge0\bigr\} , \end{aligned}$$

where f is an \(E\mbox{-}[0,1]\) convex function on the convex set M. The dual problem (\(\bar{\mathrm{D}}\)) is formulated as follows:

$$\begin{aligned}& \mathrm{max}\quad \bigl[f(y)+u^{T} g(y)\bigr] \\& \quad \mbox{s.t. } \nabla f(y)+u^{T} \nabla g(y)=0,\quad u \ge0. \end{aligned}$$

From the system (10)-(13), we have

$$\begin{aligned}& 3\bigl(y^{*}-x^{*}\bigr)^{2}+u_{1}^{*} +u_{2}^{*} -u_{3}^{*}=0, \\& -3\bigl(y^{*}-x^{*}\bigr)^{2}+u_{1}^{*} -u_{4}^{*}=0, \\& u_{1}^{*} \bigl(x^{*} +y^{*} -3\bigr)= 0, \\& u_{2}^{*} \bigl(y^{*}-3 \bigr)= 0, \\& u_{3}^{*} \bigl(1-y^{*}\bigr)=0, \\& -u_{4}^{*} x^{*}=0, \\& x^{*} +y^{*} -3\leq0, \\& y^{*}-3 \leq0, \\& 1-y^{*} \leq0, \\& - x^{*} \leq0, \end{aligned}$$

where \(u_{i}^{*} \ge0\), \(i=1,2,3,4\). By solving this system, we conclude that \((x^{*},u^{*})\) with \(x^{*}=(2,1)\) and \(u^{*}=(3,0,6,0)\) is the optimal solution of the dual problem (\(\bar{\mathrm{D}}\)). Hence, \(x^{*}=(2,1)\) is the optimal solution of (\(\bar{\mathrm{P}}\)).
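As a quick numerical sanity check (ours, with the gradients written out analytically), the reported pair can be substituted back into the system (10)-(13):

```python
import numpy as np

x, y = 2.0, 1.0                      # candidate x* = (2, 1)
u = np.array([3.0, 0.0, 6.0, 0.0])   # candidate u* = (3, 0, 6, 0)

grad_f = np.array([-3 * (y - x) ** 2, 3 * (y - x) ** 2])  # f = (y - x)^3
g = np.array([x + y - 3, y - 3, 1 - y, -x])               # constraints g_i <= 0
grad_g = np.array([[1.0, 1.0],     # gradient of x + y - 3
                   [0.0, 1.0],     # gradient of y - 3
                   [0.0, -1.0],    # gradient of 1 - y
                   [-1.0, 0.0]])   # gradient of -x

assert np.allclose(grad_f + u @ grad_g, 0)  # (10) stationarity
assert np.isclose(u @ g, 0)                 # (11) complementary slackness
assert np.all(g <= 0) and np.all(u >= 0)    # (12)-(13) feasibility
print("(x*, u*) satisfies the Kuhn-Tucker system (10)-(13).")
```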

4 Generalized \(E\mbox{-}[0,1]\) convex multi-objective programming problems

In this section, we formulate a multi-objective programming problem which involves \(E\mbox{-}[0,1]\) convex functions. An efficient solution for the considered problem is characterized by the weighting and ε-constraint approaches. At the end of this section, we obtain sufficient and necessary conditions for a feasible solution to be an efficient or properly efficient solution for such problems. An \(E\mbox{-}[0,1]\) convex multi-objective programming problem is formulated as follows:

$$\begin{aligned} \hphantom{(\mathrm{P})}&\quad \mathrm{min}\quad \bigl(f_{1} (x),f_{2} (x),\ldots,f_{k} (x)\bigr) \\ (\mathrm{P})&\qquad \mbox{subject to} \\ \hphantom{(\mathrm{P})}&\qquad x\in M=\bigl\{ x\in R^{n} : g_{i} (x) \le0,i=1,2,\ldots,m\bigr\} , \end{aligned}$$

where \(f_{j} :R^{n} \to R \), \(j=1,2,\ldots,k\), and \(g_{i} :R^{n} \to R\), \(i=1,2,\ldots,m\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\).

Definition 4

[12]

A feasible solution \(x^{*} \) for (P) is said to be an efficient solution for (P) if and only if there is no other feasible x for (P) such that, for some \(i\in\{ 1,2,\ldots,k\} \),

$$f_{i} (x)< f_{i} \bigl(x^{*} \bigr),\qquad f_{j} (x)\le f_{j} \bigl(x^{*} \bigr)\quad \mbox{for all } j\ne i. $$

Definition 5

[12]

An efficient solution \(x^{*} \in M\) for (P) is a properly efficient solution for (P) if there exists a scalar \(\mu>0\) such that for each i, \(i=1,2,\ldots,k\), and each \(x\in M\) satisfying \(f_{i} (x)< f_{i} (x^{*} )\), there exists at least one \(j\ne i\) with \(f_{j} (x)>f_{j} (x^{*} )\), and \([f_{i} (x)-f_{i} (x^{*} )]/[f_{j} (x^{*} )-f_{j}(x)]\le\mu\).

Lemma 3

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). If \(f:R^{n} \to R^{k} \) is an \(E\mbox{-}[0,1]\) convex function on a convex set \(M\subseteq R^{n} \), then the set \(A= \bigcup_{x\in M} A(x)\) is a convex set, where

$$A(x)=\bigl\{ z:z\in R^{k} ,z>f(x)-f\bigl(x^{*} \bigr)\bigr\} , \quad x\in M. $$

Proof

Let \(z^{1} ,z^{2} \in A\); then there exist \(x^{1} ,x^{2} \in M\) such that \(z^{i} >f(x^{i} )-f(x^{*} )\), \(i=1,2\). For all \(\lambda_{1}, \lambda_{2} \in[0, 1]\), \(\lambda _{1} +\lambda_{2} = 1\), we have

$$\begin{aligned} \begin{aligned} \lambda_{1} z^{1} +\lambda_{2} z^{2} &> \lambda_{1} \bigl[f\bigl(x^{1} \bigr)-f \bigl(x^{*} \bigr)\bigr]+\lambda_{2} \bigl[f \bigl(x^{2} \bigr)-f\bigl(x^{*} \bigr)\bigr] \\ &= \lambda_{1} f\bigl(x^{1} \bigr)+\lambda_{2} f\bigl(x^{2} \bigr)-f\bigl(x^{*}\bigr) \\ &\ge \lambda_{1} \min\bigl(f\bigl(x^{1} \bigr), \lambda_{1} \bigr)+\lambda_{2} \min \bigl(f \bigl(x^{2} \bigr) ,\lambda_{2} \bigr)-f\bigl(x^{*} \bigr) \\ &= E\bigl(f\bigl(x^{1} \bigr),\lambda_{1} \bigr)+E\bigl(f \bigl(x^{2} \bigr) ,\lambda_{2} \bigr)-f\bigl(x^{*} \bigr) \\ &\ge f\bigl(\lambda_{1} x^{1} +\lambda_{2} x^{2} \bigr)-f\bigl(x^{*} \bigr), \end{aligned} \end{aligned}$$

since f is an \(E\mbox{-}[0,1]\) convex function on the convex set M. Then \(\lambda_{1} z^{1} +\lambda_{2} z^{2} \in A \), and hence A is a convex set. □

4.1 Characterizing efficient solutions by weighting approach

To characterize an efficient solution for problem (P) by the weighting approach [12], let us scalarize problem (P) to get the form

$$(\mathrm{P}_{w})\quad \mathrm{min}\quad \sum _{j=1}^{k}w_{j} f_{j} (x)\quad \mbox{subject to } x\in M, $$

where \(w_{j} \ge0\), \(j=1,2,\ldots,k\), \(\sum_{j=1}^{k}w_{j} =1\), and \(f_{j} \), \(j=1,2,\ldots,k\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in [0,1]\), on the convex set M.

Theorem 9

If \(\bar{x}\in M\) is an efficient solution for problem (P), then there exist \(w_{j} \ge0\), \(j=1,2,\ldots,k\), \(\sum_{j=1}^{k}w_{j} =1\), such that \(\bar{x}\) is an optimal solution for problem (\(\mathrm{P}_{w}\)).

Proof

Let \(\bar{x}\in M\) be an efficient solution for problem (P); then the system \(f_{j} (x)-f_{j} (\bar {x})<0\), \(j=1,2,\ldots,k\), has no solution \(x\in M\). Upon Lemma 3 and applying the generalized Gordan theorem [13], there exist \(p_{j} \ge0\), \(j=1,2,\ldots,k\), not all zero, such that \(\sum_{j=1}^{k}p_{j} [f_{j} (x)-f_{j} (\bar{x})]\ge0\) for all \(x\in M\), and hence \(\sum_{j=1}^{k}\frac{p_{j} }{\sum_{j=1}^{k}p_{j} } f_{j} (x)\ge\sum_{j=1}^{k}\frac{p_{j} }{\sum_{j=1}^{k}p_{j} } f_{j} (\bar{x})\).

Denote \(w_{j} =\frac{p_{j} }{\sum_{j=1}^{k}p_{j} } \), then \(w_{j} \ge 0\), \(j=1,2,\ldots,k\), \(\sum_{j=1}^{k}w_{j} =1\), and \(\sum_{j=1}^{k}w_{j} f_{j} (\bar{x}) \le\sum_{j=1}^{k}w_{j} f_{j} (x) \). Hence \(\bar{x}\) is an optimal solution for problem (\(\mathrm{P}_{w}\)). □

Theorem 10

If \(\bar{x}\in M\) is an optimal solution for (\(\mathrm{P}_{\bar{w}}\)) corresponding to \(\bar{w}_{j} \), then \(\bar{x}\) is an efficient solution for problem (P) if one of the following two conditions holds:

(i) \(\bar{w}_{j} >0\), \(\forall j=1,2,\ldots,k\);

(ii) \(\bar{x}\) is the unique solution of (\(\mathrm{P}_{\bar{w}}\)).

Proof

For the proof see Chankong and Haimes [12]. □

4.2 Characterizing efficient solutions by ε-constraint approach

The ε-constraint approach is one of the common approaches for characterizing efficient solutions of multi-objective programming problems [12]. In the following we shall characterize an efficient solution for the multi-objective \(E\mbox{-}[0,1]\) convex programming problem (P) in terms of an optimal solution of the following scalar problem:

$$\begin{aligned} \hphantom{\mathrm{P}_{q}(\varepsilon,E)}&\quad \mathrm{min}\quad f_{q} (x) \\ \mathrm{P}_{q}(\varepsilon,E)&\qquad \mbox{subject to } x\in M, \\ \hphantom{\mathrm{P}_{q}(\varepsilon,E)}&\qquad f_{j} (x)\le E(\varepsilon_{j} , \lambda _{j}), \quad j=1,2,\ldots,k, j\ne q. \end{aligned}$$

Here \(f_{j} \), \(j=1,2,\ldots,k\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda)=\min \{ t ,\lambda \} \), \(t \in R\), \(\lambda\in[0,1]\), on the convex set M.

Theorem 11

If \(\bar{x}\in M\) is an efficient solution for problem (P), then, for each \(q\in\{1,2,\ldots,k\}\), \(\bar{x}\) is an optimal solution for problem \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E} )\) with \(\bar{\varepsilon}_{j} =f_{j} (\bar{x})\).

Proof

Suppose that \(\bar{x}\) is not an optimal solution for \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E})\), where \(\bar {\varepsilon}_{j} =f_{j} (\bar{x})\), \(j=1,2,\ldots,k\). Then there exists \(x\in M\) such that

$$\begin{aligned}& f_{q} (x)< f_{q} (\bar{x}), \\& f_{j} (x)\le\bar{E}(\bar{\varepsilon}_{j},\bar{ \lambda}_{j} )\le\bar {\varepsilon}_{j} =f_{j} ( \bar{x}), \quad j=1,2,\ldots,k, j\ne q, \end{aligned}$$

since \(\bar{E}(\bar{\varepsilon}_{j},\bar{\lambda}_{j} )=\min(\bar {\varepsilon}_{j},\bar{\lambda}_{j} )\le\bar{\varepsilon}_{j}\). This implies that the system \(f_{j} (x)-f_{j} (\bar{x})<0 \), \(j=1,2,\ldots,k\), has a solution \(x\in M\). Thus, \(\bar{x}\) is an inefficient solution for problem (P), which is a contradiction. Hence \(\bar{x}\) is an optimal solution for problem \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E} )\). □

Theorem 12

Let \(\bar{x}\in M\) be an optimal solution, for all q, of \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E} )\), where \(\bar{\varepsilon}_{j} =f_{j} ( \bar{x})\), \(j=1,2,\ldots,k\). Then \(\bar{x}\) is an efficient solution for problem (P).

Proof

Since \(\bar{x}\in M\) is an optimal solution, for all q, of \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E})\), where \(\bar {\varepsilon}_{j} =f_{j} (\bar{x})\), \(j=1,2,\ldots,k\), for each x feasible for \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E})\) we get

$$\begin{aligned}& f_{q} ( \bar{x})\le f_{q} ( x), \\& f_{j} (x)\le\bar{E}(\bar{\varepsilon}_{j},\bar{ \lambda}_{j} )\le\bar {\varepsilon}_{j} =f_{j} ( \bar{x}), \quad j=1,2,\ldots,k, j\ne q, \end{aligned}$$

where \(\bar{E}(\bar{\varepsilon}_{j},\bar{\lambda}_{j} )=\min(\bar {\varepsilon}_{j},\bar{\lambda}_{j} )\). This implies that the system \(f_{j} (x)-f_{j} (\bar{x})<0\), \(j=1,2,\ldots,k\), has no solution \(x\in M\), i.e., \(\bar{x}\) is an efficient solution for problem (P). □

4.3 Sufficient and necessary conditions for efficiency

In this subsection, we discuss the sufficient and necessary conditions for a feasible solution \(x^{*}\) to be efficient or properly efficient for problem (P) in the form of the following theorems.

Theorem 13

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in [0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma_{i} >0\), \(i=1,2,\ldots,k\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that

$$ \sum_{i=1}^{k} \gamma_{i} \nabla f_{i} \bigl(x^{*} \bigr) +\sum _{i\in I(x^{*} )}u_{i} \nabla g_{i} \bigl(x^{*} \bigr) =0. $$
(14)

If \(f_{i} \), \(i=1,2,\ldots,k\), and \(g_{i} \), \(i\in I(x^{*} )\), are differentiable \(E\mbox{-}[0,1]\) convex functions at \(x^{*} \in M\), then \(x^{*} \) is a properly efficient solution for problem (P).

Proof

Since \(f_{i} \), \(i=1,2,\ldots,k\), and \(g_{i}\), \(i\in I(x^{*} )\), are differentiable \(E\mbox{-}[0,1]\) convex functions at \(x^{*} \in M\), for any \(x\in M\), we have

$$\begin{aligned} \begin{aligned} \sum_{i=1}^{k}\gamma_{i} f_{i} (x) -\sum_{i=1}^{k} \gamma_{i} f_{i} \bigl(x^{*} \bigr) &\ge \bigl(x-x^{*} \bigr)\sum_{i=1}^{k} \gamma_{i} \bigl[\nabla f_{i} \bigl(x^{*} \bigr) \bigr]^{T} \\ &= -\bigl(x-x^{*} \bigr)\sum_{i\in I(x^{*} )}u_{i} \bigl[\nabla g_{i} \bigl(x^{*} \bigr) \bigr]^{T} \\ &\ge \sum_{i\in I(x^{*} )}u_{i} g_{i} \bigl(x^{*} \bigr) -\sum_{i\in I(x^{*} )}u_{i} g_{i} (x) \\ &= -\sum_{i\in I(x^{*} )}u_{i} g_{i} (x) \ge0. \end{aligned} \end{aligned}$$

Thus, \(\sum_{i=1}^{k}\gamma_{i} f_{i} (x) \ge\sum_{i=1}^{k}\gamma _{i} f_{i} (x^{*} ) \), for all \(x\in M\), which implies that \(x^{*} \) is the minimizer of \(\sum_{i=1}^{k}\gamma_{i} f_{i} (x) \) under the constraint \(g(x)\le0\). Hence, from Theorem 4.11 of [12], \(x^{*} \) is a properly efficient solution for problem (P). □

Theorem 14

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma_{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1 \), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that the triplet \((x^{*} ,\gamma_{i} ,u_{i} )\) satisfies (14) of Theorem  13. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is strictly \(E\mbox{-}[0,1]\) convex, and \(g_{I} \) is \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(x^{*} \) is an efficient solution for problem (P).

Proof

Suppose that \(x^{*} \) is not an efficient solution for (P), then there exist a feasible \(x\in M\) and an index r such that

$$\begin{aligned}& f_{r} (x)< f_{r} \bigl(x^{*} \bigr), \\& f_{i} (x)\le f_{i} \bigl(x^{*} \bigr) \quad \mbox{for all } i\ne r. \end{aligned}$$

Since \(\sum_{i=1}^{k}\gamma_{i} f_{i}\) is strictly \(E\mbox{-}[0,1]\) convex at \(x^{*} \), the previous two inequalities lead to

$$ 0\ge\sum_{i=1}^{k} \gamma_{i} f_{i} (x) -\sum_{i=1}^{k} \gamma_{i} f_{i} \bigl(x^{*} \bigr) \quad \Rightarrow \quad 0>\bigl(x-x^{*} \bigr)\sum _{i=1}^{k}\gamma_{i} \bigl[\nabla f_{i} \bigl(x^{*} \bigr) \bigr]^{T} . $$
(15)

Also, \(E\mbox{-}[0,1]\) convexity of \(g_{i} \), \(i\in I(x^{*} )\), at \(x^{*} \) implies

$$\bigl(x-x^{*} \bigr)\nabla g_{i} \bigl(x^{*} \bigr)\le g_{i} (x)-g_{i} \bigl(x^{*} \bigr) \quad \Rightarrow \quad \bigl(x-x^{*} \bigr)\nabla g_{i} \bigl(x^{*} \bigr)\le0, \quad i\in I\bigl(x^{*} \bigr), $$

and, for \(u_{i} \ge0\), \(i\in I(x^{*} )\), we get

$$ \bigl(x-x^{*} \bigr)\sum_{i\in I(x^{*} )}u_{i} \bigl[\nabla g_{i} \bigl(x^{*} \bigr) \bigr]^{T} \le0. $$
(16)

Adding (15) and (16) contradicts (14). Hence, \(x^{*} \) is an efficient solution for problem (P). □

Remark 4

Similarly to Theorem 13, it can easily be seen that \(x^{*} \) becomes a properly efficient solution for (P), in the above theorem, if \(\gamma _{i} >0\), for all \(i=1,2,\ldots,k\).

Theorem 15

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma _{i} >0\), \(i=1,2,\ldots,k\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (14) of Theorem  13 holds. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is pseudo \(E\mbox{-}[0,1]\) convex, and \(g_{I}\) are quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2} )\), \(\lambda_{2} \in[0,1]\) is a properly nondominated solution in the objective space of problem (P).

Proof

Since \(E(g_{I} (x),\lambda_{1} )\le E(g_{I} (x^{*} ),\lambda_{2} )=0\), \(\lambda_{1} ,\lambda_{2} \in [0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\), and from quasi \(E\mbox{-}[0,1]\) convexity of \(g_{I} \) at \(x^{*} \), \(u_{I} \ge0\), we get

$$\bigl(x-x^{*} \bigr)\sum_{i\in I(x^{*} )}u_{i} \bigl[\nabla g_{i} \bigl(x^{*} \bigr) \bigr]^{T} \le 0, \quad \forall x\in M, $$

By using the above inequality in (14) together with the pseudo \(E\mbox{-}[0,1]\) convexity of \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) at \(x^{*} \), we get

$$\begin{aligned} \bigl(x-x^{*} \bigr)\sum_{i=1}^{k} \gamma_{i} \bigl[\nabla f_{i} \bigl(x^{*} \bigr) \bigr]^{T} \ge 0 & \quad \Rightarrow \quad \sum _{i=1}^{k}\gamma_{i} E\bigl(f_{i} (x) ,\lambda_{1} \bigr)\ge\sum_{i=1}^{k} \gamma_{i} E\bigl(f_{i} \bigl(x^{*} \bigr) , \lambda_{2} \bigr) \\ &\quad \Rightarrow \quad \sum_{i=1}^{k} \gamma_{i} f_{i} (x) \ge\sum_{i=1}^{k} \gamma_{i} E\bigl(f_{i} \bigl(x^{*} \bigr) , \lambda_{2} \bigr). \end{aligned}$$

Hence, \(E(f(x^{*} ),\lambda_{2} )\) is a properly nondominated solution in the objective space of problem (P). □

Theorem 16

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min \{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma _{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1 \), \(u_{i} \ge 0\), \(i\in I(x^{*} )\), such that (14) of Theorem  13 holds. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is strictly pseudo \(E\mbox{-}[0,1]\) convex and \(g_{I} \) is quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\) is a nondominated solution in the objective space of problem (P).

Proof

Suppose that \(E(f(x^{*} ),\lambda_{2} )\) is a dominated solution for (P), then there exist a feasible x for (P) and an index r such that

$$f_{r} (x)< E\bigl(f_{r} \bigl(x^{*} \bigr), \lambda_{2} \bigr), \qquad f_{i} (x)\le E \bigl(f_{i} \bigl(x^{*} \bigr),\lambda_{2} \bigr) \quad \mbox{for all } i\ne r. $$

Since \(E(t,\lambda_{1})=\min\{ t,\lambda_{1} \} \), \(t \in R\), \(\lambda_{1} \in[0,1]\), we have

$$E\bigl(f_{r} (x),\lambda_{1} \bigr)< E\bigl(f_{r} \bigl(x^{*} \bigr),\lambda_{2} \bigr),\qquad E \bigl(f_{i} (x),\lambda_{1} \bigr)\le E\bigl(f_{i} \bigl(x^{*} \bigr),\lambda_{2} \bigr),\quad \forall i\ne r. $$

The strict pseudo \(E\mbox{-}[0,1]\) convexity of \(\sum_{i=1}^{k}\gamma_{i} f_{i}\) at \(x^{*} \) implies that

$$\sum_{i=1}^{k}\gamma_{i} E \bigl(f_{i} (x),\lambda_{1} \bigr) \le\sum _{i=1}^{k}\gamma_{i} E\bigl(f_{i} \bigl(x^{*} \bigr),\lambda_{2} \bigr)\quad \Rightarrow \quad \bigl(x-x^{*} \bigr)\sum_{i=1}^{k} \gamma_{i} \bigl[\nabla f_{i} \bigl(x^{*} \bigr) \bigr]^{T} < 0. $$

Also, quasi \(E\mbox{-}[0,1]\) convexity of \(g_{I} \) at \(x^{*} \) implies that

$$E\bigl(g_{I} (x),\lambda_{1} \bigr)\le E \bigl(g_{I} \bigl(x^{*} \bigr),\lambda_{2} \bigr)=0 \quad \Rightarrow \quad \bigl(x-x^{*} \bigr)\nabla g_{I} \bigl(x^{*} \bigr)\le0. $$

The proof now follows along lines similar to Theorem 14. □

Remark 5

Similarly to Theorem 15, it can easily be seen that \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\), becomes a properly nondominated solution for (P), in the above theorem, if \(\gamma_{i} >0\), for all \(i=1,2,\ldots,k\).

Theorem 17

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma_{i} >0\), \(i=1,2,\ldots,k\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (14) of Theorem  13 holds. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is pseudo \(E\mbox{-}[0,1]\) convex and \(u_{I} g_{I} \) is quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\) is a properly nondominated solution in the objective space of problem (P).

Proof

The proof is similar to the proof of Theorem 15. □

Theorem 18

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min \{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma _{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (14) of Theorem  13 holds. If \(I(x^{*} )\ne\emptyset\), \(\sum_{i=1}^{k}\gamma_{i} f_{i}\) is quasi \(E\mbox{-}[0,1]\) convex and \(u_{I} g_{I} \) is strictly pseudo \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\), is a nondominated solution in the objective space of problem (P).

Proof

The proof is similar to the proof of Theorem 16. □

Remark 6

Similarly to Theorem 15, it can easily be seen that \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\), becomes a properly nondominated solution for (P), in the above theorem, if \(\gamma_{i} >0\), for all \(i=1,2,\ldots,k\).

Theorem 19

(Necessary efficiency criteria)

Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\lambda\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\), and \(x^{*}\) be a properly efficient solution for problem (P). Assume that there exists a feasible point \(\tilde{x}\) for (P) such that \(g_{i} (\tilde{x})<0\), \(i=1,2,\ldots,m\), and each \(g_{i} \), \(i\in I(x^{*} )\), is \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\). Then there exist scalars \(\gamma_{i} >0\), \(i=1,2,\ldots,k\), and \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that the triplet \((x^{*} ,\gamma_{i} ,u_{i} )\) satisfies

$$ \sum_{i=1}^{k} \gamma_{i} \nabla f_{i} \bigl(x^{*} \bigr) +\sum _{i\in I(x^{*} )}u_{i} \nabla g_{i} \bigl(x^{*} \bigr) =0. $$
(17)

Proof

Suppose that, for some \(q\in\{1,2,\ldots,k\}\), the system

$$\begin{aligned}& \bigl(x-x^{*} \bigr)^{T} \nabla f_{q} \bigl(x^{*} \bigr)< 0, \\& \bigl(x-x^{*}\bigr)^{T} \nabla f_{i} \bigl(x^{*} \bigr)\le0 \quad \mbox{for all } i\ne q, \\& \bigl(x-x^{*} \bigr)^{T} \nabla g_{i} \bigl(x^{*} \bigr)\le0,\quad i\in I\bigl(x^{*} \bigr), \end{aligned}$$
(18)

has a solution \(x\in R^{n} \). Since, by the assumed Slater-type condition,

$$g_{i} (\tilde{x})-g_{i} \bigl(x^{*} \bigr)< 0, \quad i\in I\bigl(x^{*} \bigr), $$

and from \(E\mbox{-}[0,1]\) convexity of \(g_{i} \) at \(x^{*} \), we get

$$ \bigl(\tilde{x}-x^{*} \bigr)^{T} \nabla g_{i} \bigl(x^{*} \bigr)< 0,\quad i\in I\bigl(x^{*} \bigr). $$
(19)

Therefore from (18) and (19)

$$\bigl[\bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr) \bigr]^{T} \nabla g_{i} \bigl(x^{*} \bigr)< 0,\quad \forall i\in I\bigl(x^{*} \bigr), \rho>0. $$

Hence for some positive β small enough

$$g_{i} \bigl(x^{*} +\beta\bigl[\bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr)\bigr]\bigr)< g_{i} \bigl(x^{*} \bigr)=0,\quad i\in I\bigl(x^{*} \bigr). $$

Similarly, for \(i\notin I(x^{*} )\), \(g_{i} (x^{*} )<0\) and for \(\beta>0\) small enough

$$g_{i} \bigl(x^{*} +\beta\bigl[\bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr)\bigr]\bigr)\le0, \quad i\notin I \bigl(x^{*} \bigr). $$

Thus, for β sufficiently small and all \(\rho>0\), \(x^{*} +\beta [(x-x^{*} )+\rho(\tilde{x}-x^{*} )]\) is feasible for problem (P). For sufficiently small \(\rho>0\), (18) gives

$$ f_{q} \bigl(x^{*} +\beta\bigl[ \bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr)\bigr] \bigr)< f_{q} \bigl(x^{*} \bigr). $$
(20)

Now, for all \(j\ne q\) such that

$$ f_{j} \bigl(x^{*} +\beta\bigl[ \bigl(x-x^{*} \bigr)+\rho\bigl(\tilde{x}-x^{*} \bigr)\bigr] \bigr)>f_{j} \bigl(x^{*} \bigr), $$
(21)

consider the ratio

$$ \frac{N(\beta,\rho)}{D(\beta,\rho)} =\frac{[f_{q} (x^{*} )-f_{q} (x^{*} +\beta[(x-x^{*} )+\rho(\tilde{x}-x^{*} )])]/\beta}{[f_{j} (x^{*} +\beta[(x-x^{*} )+\rho(\tilde{x}-x^{*} )])-f_{j} (x^{*} )]/\beta} . $$
(22)

From (18), \(N(\beta,\rho)\to-(x-x^{*} )^{T} \nabla f_{q} (x^{*} )>0\). Similarly, \(D(\beta,\rho)\to(x-x^{*} )^{T} \nabla f_{j} (x^{*} )\le0\); but, by (21), \(D(\beta,\rho)>0\), so \(D(\beta,\rho)\to0\). Thus, the ratio in (22) becomes unbounded, contradicting the proper efficiency of \(x^{*}\) for (P). Hence, for each \(q=1,2,\ldots,k\), the system (18) has no solution. The result then follows from an application of Farkas’ lemma, namely

$$\sum_{i=1}^{k}\gamma_{i} \nabla f_{i} \bigl(x^{*} \bigr) +\sum_{i\in I(x^{*} )}u_{i} \nabla g_{i} \bigl(x^{*} \bigr) =0, \quad u \ge0. $$

 □

Theorem 20

Assume that \(x^{*} \) is an efficient solution for problem (P) at which the Kuhn-Tucker constraint qualification is satisfied. Then, there exist scalars \(\gamma_{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1\), \(u_{j} \ge0\), \(j=1,2,\ldots,m\), such that

$$\sum_{i=1}^{k}\gamma_{i} \nabla f_{i} \bigl(x^{*} \bigr) +\sum_{j=1}^{m}u_{j} \nabla g_{j} \bigl(x^{*} \bigr) =0, \qquad \sum _{j=1}^{m}u_{j} g_{j} \bigl(x^{*} \bigr) =0. $$

Proof

Since every efficient solution is a weak minimum, by applying Theorem 2.2 of Weir and Mond [14] for \(x^{*} \), we see that there exist \(\gamma\in R^{k} \), \(u\in R^{m} \) such that

$$\begin{aligned}& \gamma^{T} \nabla f\bigl(x^{*} \bigr)+u^{T} \nabla g\bigl(x^{*} \bigr)=0,\qquad u^{T} g\bigl(x^{*} \bigr)=0, \\& u\ge0, \qquad \gamma\ge0, \qquad \gamma ^{T} e=1, \end{aligned}$$

where \(e=(1,1,\ldots,1)\in R^{k} \). □

Example 2

Let \(E:R\times[0,1] \to R \) be defined as \(E(t,\lambda)=\lambda\sqrt[3]{t}\), where \(t \in R\) and \(\lambda\in[0,1]\). Consider the problem:

$$\begin{aligned}& \mathrm{min}\quad f_{1} (x,y)=x^{3}, \\& \mathrm{min}\quad f_{2} (x,y)=(y-x)^{3} \\& \quad \mbox{s.t. } (x,y)\in M=\bigl\{ (x,y)\in R^{2} : x+y\le3, 1 \le y \le3, x\ge0\bigr\} , \end{aligned}$$

where \(f_{1} \) and \(f_{2} \) are \(E\mbox{-}[0,1]\) convex functions on the convex set M. It is clear that \(f(M)\) is an \(R_{\geqq}^{2}\)-nonconvex set (see Figure 1(a)), but the image of the objective space \(f(M)\) under the map \(E\mbox{-}[0,1]\) is an \(R_{\geqq}^{2}\)-convex set (see Figure 1(b)).

Figure 1: Example 2 of a bicriteria \(E\mbox{-}[0,1]\) convex problem. (a) \(f(M)\) is \(R_{\geqq}^{2}\)-nonconvex. (b) The image of \(f(M)\) under the \(E\mbox{-}[0,1]\) map is \(R_{\geqq }^{2}\)-convex.

(i) Formulate the weighting problem (\(\mathrm{P}_{w}\)) as

$$\begin{aligned}& \mathrm{min}\quad \bigl\{ w_{1} x^{3} +w_{2} (y-x)^{3} \bigr\} \\& \quad \mbox{subject to } x\in M, \end{aligned}$$

where \(w_{1} ,w_{2} \ge0\), \(w_{1} +w_{2} =1\).

It is clear that a point \((0,y)\in M\), \(1 \le y \le3\), is an optimal solution for (\(\mathrm{P}_{w} \)) corresponding to \(w=(w_{1} ,0)\), \(0 < w_{1} \le1\), and a point \((x,1)\in M\), \(0 \le x \le2\), is an optimal solution for (\(\mathrm{P}_{w} \)) corresponding to \(w=(0,w_{2} )\), \(0 < w_{2} \le1\). Hence the set of efficient solutions of problem (P) can be described as

$$X=\bigl\{ (x,1)\in M: 0 \le x \le2 \mbox{ and } (0,y)\in M: 1 \le y \le 3\bigr\} . $$
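This weighting characterization can also be explored numerically by sweeping w and solving each scalarized problem with a local NLP solver. A rough sketch (the solver choice, weight grid, and starting points are ours, and SLSQP only guarantees local optima for these nonconvex cubics):

```python
import numpy as np
from scipy.optimize import minimize

# Feasible set M: x + y <= 3, 1 <= y <= 3, x >= 0, encoded as fun(v) >= 0.
cons = [{"type": "ineq", "fun": lambda v: 3 - v[0] - v[1]},
        {"type": "ineq", "fun": lambda v: v[1] - 1},
        {"type": "ineq", "fun": lambda v: 3 - v[1]},
        {"type": "ineq", "fun": lambda v: v[0]}]

for w1 in np.linspace(0.0, 1.0, 11):
    w2 = 1.0 - w1
    obj = lambda v, w1=w1, w2=w2: w1 * v[0] ** 3 + w2 * (v[1] - v[0]) ** 3
    # Multistart reduces the chance of stopping at a poor local minimum.
    starts = [(0.0, 1.0), (2.0, 1.0), (0.0, 3.0), (1.0, 2.0)]
    best = min((minimize(obj, v0, constraints=cons) for v0 in starts),
               key=lambda r: r.fun)
    print(f"w = ({w1:.1f}, {w2:.1f})  ->  minimizer ~ {np.round(best.x, 3)}")
```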

(ii) Formulate the problem \(\mathrm{P}_{q} (\varepsilon )\) as

$$\begin{aligned}& \mathrm{min}\quad x^{3} \\& \quad \mbox{subject to} \\& \quad (x,y)\in M, \\& \quad (y-x)^{3} \le E(\varepsilon_{1},1) \end{aligned}$$

and

$$\begin{aligned}& \mathrm{min}\quad (y-x)^{3} \\& \quad \mbox{subject to} \\& \quad (x,y)\in M, \\& \quad x^{3} \le E(\varepsilon_{2},1). \end{aligned}$$

It is easy to see that the points \(\{ (x,1)\in M: 0 \le x \le2 \mbox{ and } (0,y)\in M: 1 \le y \le3\} \) are optimal solutions corresponding to

$$\bigl(E(\varepsilon_{1},1) ,E(\varepsilon_{2},1) \bigr)= \bigl(y^{*}-x^{*},x^{*}\bigr) . $$
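The ε-constraint problems can be handled the same way. A sketch for \(\mathrm{P}_{1}(\varepsilon)\) (the bound and starting point are our own choices: taking \(E(\varepsilon_{1},1)=0\) targets the efficient point \((1,1)\), and the solver again only finds local optima):

```python
import numpy as np
from scipy.optimize import minimize

def solve_P1(eps_bound, v0=(2.0, 1.0)):
    """min x^3 over M subject to the extra constraint (y - x)^3 <= eps_bound."""
    cons = [{"type": "ineq", "fun": lambda v: 3 - v[0] - v[1]},   # x + y <= 3
            {"type": "ineq", "fun": lambda v: v[1] - 1},          # y >= 1
            {"type": "ineq", "fun": lambda v: 3 - v[1]},          # y <= 3
            {"type": "ineq", "fun": lambda v: v[0]},              # x >= 0
            {"type": "ineq",
             "fun": lambda v: eps_bound - (v[1] - v[0]) ** 3}]    # f2 <= bound
    return minimize(lambda v: v[0] ** 3, v0, constraints=cons).x

# E(eps_1, 1) = y* - x* = 0 targets the efficient point (1, 1).
print(np.round(solve_P1(0.0), 3))   # expected approximately [1. 1.]
```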

(iii) Applying the Kuhn-Tucker conditions yields

$$\begin{aligned}& 3\gamma_{1} \bigl(x^{*}\bigr)^{2} -3 \gamma_{2} \bigl(y^{*}-x^{*}\bigr)^{2}+u_{1} -u_{4} =0, \\& 3\gamma_{2} \bigl(y^{*}-x^{*}\bigr)^{2}+u_{1} +u_{2} -u_{3}=0, \\& u_{1} \bigl(x^{*} +y^{*} -3\bigr)=0, \\& u_{2} \bigl(y^{*}-3 \bigr)=0, \\& u_{3} \bigl(1-y^{*}\bigr) =0, \\& -u_{4} x^{*} =0 \end{aligned}$$

and

$$x^{*} + y^{*} \le3, \qquad y^{*} \ge1, \qquad y^{*} \le 3, \qquad x^{*} \ge0, $$

where \(\gamma_{i} \ge0\), \(i=1,2\), \(\gamma_{1} +\gamma _{2}=1\), and \(u_{i} \ge0\), \(i=1,2,3,4\). From this system we conclude that the set of efficient solutions can be described as

$$X=\bigl\{ (x,1)\in M: 0 \le x \le2 \mbox{ and } (0,y)\in M: 1 \le y \le 3\bigr\} . $$
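For instance (these particular numbers are ours, one of many solutions of the system), the efficient point \((x^{*},y^{*})=(2,1)\) satisfies the system with \(\gamma=(0.1,0.9)\) and \(u=(1.5,0,4.2,0)\), which can be checked directly:

```python
import numpy as np

x, y = 2.0, 1.0                       # candidate efficient point
gam = np.array([0.1, 0.9])            # gamma_1 + gamma_2 = 1, both > 0
u = np.array([1.5, 0.0, 4.2, 0.0])    # multipliers for the four constraints

grad_f = np.array([[3 * x ** 2, 0.0],                       # grad f1 = (3x^2, 0)
                   [-3 * (y - x) ** 2, 3 * (y - x) ** 2]])  # grad f2
g = np.array([x + y - 3, y - 3, 1 - y, -x])
grad_g = np.array([[1.0, 1.0], [0.0, 1.0], [0.0, -1.0], [-1.0, 0.0]])

assert np.allclose(gam @ grad_f + u @ grad_g, 0)  # weighted stationarity (14)
assert np.allclose(u * g, 0)                      # complementary slackness
assert np.all(g <= 0) and np.all(u >= 0)          # feasibility
print("(2, 1) satisfies the weighted Kuhn-Tucker system with gamma > 0.")
```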

References

  1. Chen, J-W, Li, J, Wang, J-N: Optimality conditions and duality for nonsmooth multiobjective optimization problems with cone constraints. Acta Math. Sci. 32(1), 1-12 (2012)


  2. Youness, EA, Emam, T: Characterization of efficient solutions of multi-objective optimization problems involving semi-strongly and generalized semi-strongly E-convexity. Acta Math. Sci. Ser. B 28(1), 7-16 (2008)


  3. Mishra, SK, Wang, SY, Lai, KK: Generalized Convexity and Vector Optimization. Nonconvex Optimization and Its Applications, vol. 90. Springer, Berlin (2009)


  4. Mishra, SK, Giorgi, G: Invexity and Optimization. Nonconvex Optimization and Its Applications, vol. 88. Springer, Berlin (2008)


  5. Emam, T: Roughly B-invex programming problems. Calcolo 48(2), 173-188 (2011)


  6. Komlosi, S, Rapesak, T, Schaible, S: Generalized Convexity. Springer, Berlin (1994)


  7. Youness, EA, El-Banna, AZ, Zorba, S: \(E\mbox{-}[0, 1]\) Convex functions. Nuovo Cimento 120(4), 397-406 (2005)


  8. Youness, EA, El-Banna, AZ, Zorba, S: Quasi \(E\mbox{-}[0, 1]\) convex functions. In: 8th International Conference on Parametric Optimization, 27 November - 1 December (2005)


  9. Kaul, RN, Kaur, S: Optimality criteria in nonlinear programming involving nonconvex functions. J. Math. Anal. Appl. 105, 104-112 (1985)


  10. Mahajan, DG, Vartak, MN: Generalizations of some duality theorems in nonlinear programming. Math. Program. 12, 293-317 (1977)


  11. Bazaraa, MS, Shetty, CM: Nonlinear Programming: Theory and Algorithms. Wiley, New York (1979)


  12. Chankong, V, Haimes, YY: Multiobjective Decision Making: Theory and Methodology. North-Holland, Amsterdam (1983)


  13. Mangasarian, OL: Nonlinear Programming. McGraw-Hill, New York (1969)


  14. Weir, T, Mond, B: Generalized convexity and duality in multiple objective programming. Bull. Aust. Math. Soc. 39, 287-299 (1989)



Acknowledgements

The author is grateful to everyone who contributed, in one way or another, to the success of this paper. The author also gratefully acknowledges a research grant from UOH that supported this work.

Author information


Correspondence to Tarek Emam.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Emam, T. Optimality for \(E\mbox{-}[0,1]\) convex multi-objective programming problems. J Inequal Appl 2015, 160 (2015). https://doi.org/10.1186/s13660-015-0675-7
