
Solving linear bilevel multiobjective programming problem via exact penalty function approach

Abstract

In this paper, the linear bilevel multiobjective programming problem is addressed. The duality gap of the lower level problem is appended to the objectives of the upper level problem with a penalty, yielding a penalized problem for the linear bilevel multiobjective programming problem. We prove that Pareto optimality is attained by an exact penalty function, and an algorithm (the original algorithm) is proposed. In addition, for the linear bilevel multiobjective programming problem with given weights for the upper level objective functions, we analyze the optimality conditions and propose an algorithm (the weights algorithm). Numerical results showing the viability of the penalty function approach are presented.

1 Introduction

Bilevel programming (BP), which is characterized by two nested optimization problems in which the constraint region of the first-level problem is implicitly determined by another optimization problem, has increasingly been addressed in the literature, both from the theoretical and the computational points of view (see the monographs of Dempe [1] and Bard [2] and the reviews by Vicente and Calamai [3], Dempe [4] and Colson et al. [5]). Although many papers on bilevel optimization have been published in the last two decades, only very few of them deal with the bilevel multiobjective programming (BMP) problem, in which the upper level, the lower level, or both levels of a bilevel decision have multiple conflicting objectives [6–8].

Although solving the bilevel multiobjective programming problem is not an easy task, some researchers have presented feasible approaches for it. Shi and Xia [9, 10] propose an interactive algorithm based on the concepts of satisfactoriness and direction vector for the nonlinear bilevel multiobjective problem. Abo-Sinna [11] and Osman [12] present approaches via fuzzy set theory for solving bilevel and multilevel multiobjective problems, while Teng and Li [13] and Deb and Sinha [14] give evolutionary algorithms for some bilevel multiobjective programming problems. For the so-called semivectorial bilevel optimization problem, Bonnel and Morgan [15] construct the penalized problem from the marginal function of the lower level problem and propose a penalty function algorithm; however, no numerical results are reported. Based on the objective penalty function approach for single level programs, Zheng and Wan [16] propose a corresponding penalty function algorithm, which contains two penalty parameters. A recent study by Eichfelder [8] suggests a refinement-based strategy in which the algorithm starts with a uniformly distributed set of points in the upper level variable space. Note that if the dimension of the upper level variable is high, generating uniformly spread upper level variables and refining them will be computationally expensive.

The linear bilevel multiobjective programming (LBMP) problem, in which both the objective functions and the constraint functions are linear, has attracted more and more attention. Nishizaki and Sakawa [17] give three definitions of Stackelberg solutions and propose corresponding algorithms based on the idea of the Kth best method; however, they do not report numerical results. Ankhili and Mansouri [18] propose an exact penalty function algorithm, based on the marginal function of the lower level problem, for the LBMP problem in which the upper level is a linear scalar optimization problem and the lower level is a linear multiobjective optimization problem. Calvete and Gale [19] analyze the properties of the feasible region and give an algorithmic framework for the LBMP problem in which the upper level is a linear multiobjective optimization problem and the lower level is a linear scalar optimization problem.

In this paper, inspired by the method presented in [20, 21] for the linear bilevel single objective programming problem, we propose an exact penalty algorithm for the LBMP problem in which both the upper level and the lower level are linear multiobjective optimization problems. There are few reports on using the exact penalty approach for this LBMP problem. Our strategy can be outlined as follows. Using the weighted sum scalarization approach, we reformulate the LBMP problem as a special bilevel programming problem in which the upper level is a linear multiobjective programming problem and the lower level is a linear scalar optimization problem. Then the duality gap of the lower level problem is appended to the objectives of the upper level problem with a penalty, and a penalized problem for the LBMP problem is obtained. We prove that the penalty function is exact, and we propose an exact penalty function algorithm for the LBMP problem. In addition, for the LBMP problem with given weights for the upper level objective functions, we analyze the optimality conditions and propose a special algorithm. Finally, we give some numerical examples to illustrate the algorithms proposed in this paper.

The remainder of the paper is organized as follows. In the next section we give the mathematical model of the LBMP problem and construct the penalized problem. In Section 3, we analyze the properties of the penalized problem and, via the exact penalty method, give an existence theorem for Pareto optimal solutions. In Section 4, we propose the algorithms and give the numerical results. Finally, we conclude the paper with some remarks.

2 Linear bilevel multiobjective programming problem and penalized problem

In this paper, we consider the following linear bilevel multiobjective programming problem (LBMP), which can be written as

$$\begin{aligned}& \max_{(x,y)\geq0}Cx+C'y \\ & \mbox{s.t.}\quad\max_{y\geq0}Dy \\ & \hphantom{\mbox{s.t.}\quad}\mbox{s.t.}\quad A_{1}x+A_{2}y\leq b, \end{aligned}$$
(1)

where \(x\in R^{n}\), \(y\in R^{m}\), \(b\in R^{p}\), \(A_{1}\in R^{p\times n}\), \(A_{2}\in R^{p\times m}\), \(C\in R^{q\times n}\), \(C'\in R^{q\times m}\), \(D\in R^{l\times m}\).

Let \(S=\{(x,y)\mid A_{1}x+A_{2}y\leq b,x\geq0, y\geq0\}\) denote the constraint region of problem (1), and let \(\Pi_{y}=\{x\in R^{n}_{+}\mid \exists y\in R^{m}_{+},A_{1}x+A_{2}y\leq b\}\) be the projection of S onto the upper level’s decision space. To ensure that problem (1) is well defined, we make the following assumption.

  1. (A)

    The constraint region S is nonempty and compact.

For fixed \(x\in R_{+}^{n}\), let \(S(x)\) denote the efficiency set of solutions to the lower level problem

$$\begin{aligned} (P_{x}): \quad& \max_{y\geq0}Dy \\ \quad& \mbox{s.t.}\quad A_{1}x+A_{2}y\leq b. \end{aligned}$$

Definition 2.1

A point \((x,y)\) is feasible for problem (1) if \((x,y)\in S\) and \(y\in S(x)\); a feasible point \((x^{*},y^{*})\) is a Pareto optimal solution to problem (1) provided that there exists no other feasible point \((x,y)\) such that \(Cx^{*}+C'y^{*}\leq Cx+C'y\) with strict inequality in at least one component.

Remark 2.1

Note that in problem (1) the objective function of the upper level is maximized with regard to x and y, that is, in this work we adopt the optimistic approach to consider the bilevel multiobjective programming problem [8].

One way to transform the lower level problem \((P_{x})\) into a scalar optimization problem is the so-called scalarization technique, which consists of solving the following further parameterized problem:

$$\begin{aligned}& \max_{y\geq0}\lambda^{T}Dy \\& \mbox{s.t.}\quad A_{1}x+A_{2}y\leq b, \end{aligned}$$
(2)

where the new parameter vector λ is a nonnegative point of the unit simplex, i.e., λ belongs to \(\Omega=\{\lambda\mid \lambda\in R_{+}^{l},\sum _{i=1}^{l}\lambda_{i}=1\}\). Since it is a difficult task to choose the best response \(y(x)\) on the Pareto front for a given upper level variable x, our approach in this paper consists in treating the set Ω as an additional constraint set for the upper level problem. This approach is also used in [22] for the semivectorial bilevel optimization problem.
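For a fixed x and a chosen λ ∈ Ω, the scalarized lower level problem (2) is an ordinary linear program. A minimal sketch using `scipy.optimize.linprog` (our choice of solver; the tiny data below are hypothetical, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy data (not from the paper): l = 2 lower level objectives,
# one joint constraint y1 + y2 <= 2, and the upper level choice fixed at x = 0.
D  = np.array([[1.0, 0.0],
               [0.0, 1.0]])
A1 = np.array([[0.0]])
A2 = np.array([[1.0, 1.0]])
b  = np.array([2.0])
x  = np.array([0.0])

def solve_scalarized(lam):
    """Problem (2): max lam^T D y  s.t.  A2 y <= b - A1 x, y >= 0."""
    res = linprog(-(D.T @ lam),                    # linprog minimizes
                  A_ub=A2, b_ub=b - A1 @ x,
                  bounds=[(0, None)] * A2.shape[1],
                  method="highs")
    return res.x, -res.fun

# For lam = (0.7, 0.3) the weighted objective puts more weight on y1,
# so the optimum sits at the vertex y = (2, 0).
y, val = solve_scalarized(np.array([0.7, 0.3]))
```

Sweeping λ over Ω traces out the efficiency set \(S(x)\), which is exactly the content of Proposition 2.1 below.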

For any given parameter couple \((x,\lambda)\in\Pi_{y}\times\Omega\), let \(\gamma(x,\lambda)\) denote the solution set of problem (2). The following relationship (see, e.g., Theorem 3.1 in [22]) relates the solution set of \((P_{x})\) and that of problem (2).

Proposition 2.1

Let assumption (A) be satisfied. Then we have

$$ S(x)=\gamma(x,\Omega):=\bigcup\bigl\{ \gamma(x,\lambda):\lambda\in \Omega\bigr\} . $$
(3)

Based on Proposition 2.1, problem (1) can be replaced by the following bilevel multiobjective programming problem:

$$\begin{aligned}& \max_{(x,y,\lambda)\geq0}Cx+C'y \\& \mbox{s.t.}\quad \sum_{i=1}^{l} \lambda_{i}=1, \\& \hphantom{\mbox{s.t.}\quad} \max_{y\geq0}\lambda^{T}Dy \\& \hphantom{\mbox{s.t.}\quad} \mbox{s.t.}\quad A_{1}x+A_{2}y\leq b. \end{aligned}$$
(4)

The link between problems (1) and (4) is formalized in the following proposition.

Proposition 2.2

Let assumption (A) be satisfied. Then the following assertions hold.

  1. (i)

    Let \((\bar{x},\bar{y})\) be a Pareto optimal solution of problem (1). Then for all \(\bar{\lambda}\in\Omega\) with \(\bar{y}\in\gamma(\bar {x},\bar{\lambda})\), the point \((\bar{x},\bar{y},\bar{\lambda})\) is a Pareto optimal solution of problem (4).

  2. (ii)

    Let \((\bar{x},\bar{y},\bar{\lambda})\) be a Pareto optimal solution of problem (4). Then \((\bar{x},\bar{y})\) is a Pareto optimal solution of problem (1).

Proof

(i) Let \((\bar{x},\bar{y})\) be a Pareto optimal solution of problem (1), and suppose to the contrary that there exists \(\lambda^{o}\in\Omega\) with \(\bar{y}\in\gamma(\bar{x},\lambda^{o})\) such that \((\bar{x},\bar {y},\lambda^{o})\) is not a Pareto optimal solution of problem (4). Then there exists a feasible point \((x',y',\lambda')\) of problem (4) such that

$$ Cx'+C'y'\geq C\bar{x}+C' \bar{y}.$$
(5)

Following Proposition 2.1, we have \(y'\in S(x')\), i.e., \((x',y')\) is also a feasible point of problem (1). Thus there exists a feasible point \((x',y')\) of problem (1) such that (5) is satisfied, which contradicts the Pareto optimality of \((\bar{x},\bar{y})\) for problem (1).

(ii) Let \((\bar{x},\bar{y},\bar{\lambda})\) be a Pareto optimal solution of problem (4). Then there does not exist a feasible point \((x',y',\lambda')\) of problem (4) such that

$$ C\bar{x}+C'\bar{y}\leq Cx'+C'y'.$$
(6)

Suppose that \((\bar{x},\bar{y})\) is not a Pareto optimal solution of problem (1). Then there exists a feasible point \((x'',y'')\) such that

$$ C\bar{x}+C'\bar{y}\leq Cx''+C'y''.$$
(7)

Following Proposition 2.1, for \((x'',y'')\) there exists \(\lambda''\in\Omega\) such that \((x'',y'',\lambda'')\) is feasible for problem (4). Thus there exists a feasible point \((x'',y'',\lambda'')\) of problem (4) such that (7) is satisfied, which contradicts (6). □

The dual of problem (2) is the following:

$$\begin{aligned}& \min_{w}w^{T}(b-A_{1}x) \\& \mbox{s.t.}\quad w^{T}A_{2}\geq\lambda^{T}D, \\& \hphantom{\mbox{s.t.}\quad} w\geq0. \end{aligned}$$
(8)

Here \(w\in R^{p}\) is the dual variable.
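A standard weak duality observation, recorded here because the exactness proof (Theorem 3.3 below) relies on the nonnegativity of the gap between (2) and (8): for any y feasible for problem (2) and any w feasible for problem (8),

$$ \lambda^{T}Dy\leq\bigl(w^{T}A_{2}\bigr)y\leq w^{T}(b-A_{1}x), $$

where the first inequality uses \(w^{T}A_{2}\geq\lambda^{T}D\) together with \(y\geq0\), and the second uses \(A_{2}y\leq b-A_{1}x\) together with \(w\geq0\).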

Let \(\pi(x,y,\lambda,w)=w^{T}(b-A_{1}x)-\lambda^{T}Dy\) denote the duality gap of the lower level problem. For a primal feasible y and a dual feasible w, \(\pi(x,y,\lambda,w)=0\) if and only if y is an optimal solution of the lower level problem (2).
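The gap function π is a direct translation of the formula above. A minimal numerical check (the data are hypothetical, not from the paper): at a primal optimum paired with a dual feasible w of equal objective value the gap closes, while a suboptimal y leaves it strictly positive.

```python
import numpy as np

def duality_gap(x, y, lam, w, D, A1, b):
    """pi(x, y, lambda, w) = w^T (b - A1 x) - lam^T (D y)."""
    return w @ (b - A1 @ x) - lam @ (D @ y)

# Hypothetical toy data: lower level max lam^T D y s.t. y1 + y2 <= 2, y >= 0,
# with the upper level choice fixed at x = 0 and lam = (0.7, 0.3).
D   = np.array([[1.0, 0.0], [0.0, 1.0]])
A1  = np.array([[0.0]])
b   = np.array([2.0])
x   = np.array([0.0])
lam = np.array([0.7, 0.3])
w   = np.array([0.7])        # dual feasible: w^T A2 = (0.7, 0.7) >= lam^T D

# Gap at the primal optimum y = (2, 0) versus the suboptimal y = (0, 0).
gap_opt = duality_gap(x, np.array([2.0, 0.0]), lam, w, D, A1, b)
gap_sub = duality_gap(x, np.array([0.0, 0.0]), lam, w, D, A1, b)
```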

We can employ a penalty function approach to the LBMP problem (1), and formulate the penalty problem \(P(K)\) as follows:

$$\begin{aligned}& \max_{(x,y,\lambda,w)} F(x,y,\lambda,w,K)=Cx+C'y-K \bigl(w^{T}(b-A_{1}x)-\lambda^{T}Dy\bigr)e \\& \mbox{s.t.}\quad \sum_{i=1}^{l} \lambda_{i}=1, \\& \hphantom{\mbox{s.t.}\quad } A_{1}x+A_{2}y\leq b, \\& \hphantom{\mbox{s.t.}\quad } w^{T}A_{2}\geq\lambda^{T}D, \\& \hphantom{\mbox{s.t.}\quad } x,y,\lambda,w\geq0, \end{aligned}$$
(9)

where \(e\in R^{q}\) has all its components equal to one, and K is the penalty parameter. Clearly, optimality for problem (9) drives the duality gap \(w^{T}(b-A_{1}x)-\lambda^{T}Dy\) toward 0.

3 Main results

We now establish the main theoretical result, namely the exactness of the penalty function approach: the Pareto optimal solutions of problem (1) can be obtained by solving the penalized problem (9) for some finite positive constant K.

Before presenting the theoretical results, we introduce some notation. Let \(Z=\{(x,y)\mid A_{1}x+A_{2}y\leq b,x\geq0,y\geq 0\}\) and \(W=\{(\lambda,w)\mid \sum_{i=1}^{l}\lambda_{i}=1, w^{T}A_{2}\geq \lambda^{T}D, \lambda\geq0, w\geq0\}\), and denote the sets of extreme points of W and Z by \(W_{v}\) and \(Z_{v}\), respectively.

Theorem 3.1

For a given value of \((\lambda,w)\in W\) and fixed K, a Pareto optimal solution to the following programming problem:

$$\begin{aligned}& \max_{(x,y)} F(x,y,\lambda,w,K) \\& \textit{s.t.}\quad (x,y)\in Z , \end{aligned}$$
(10)

is achievable at some \((x^{*},y^{*})\in Z_{v}\).

Proof

Note that for fixed \((\lambda,w)\) and K, problem (10) is a linear multiobjective programming problem over the polytope Z; it is well known that such a problem attains a Pareto optimal solution at an extreme point of its feasible region, which proves Theorem 3.1. □

Theorem 3.1 yields the following theorem.

Theorem 3.2

For fixed K, a Pareto optimal solution to problem (9) is achievable in \(Z_{v}\times W_{v}\) and \(Z_{v}\times W_{v}=(Z\times W)_{v}\).

Proof

Let \((x^{*},y^{*})\in Z_{v}\) be a Pareto optimal solution to problem (10). As \(F(x^{*},y^{*},\lambda,w,K)\) is an affine function of \((\lambda,w)\), and W is a polytope, the problem

$$\begin{aligned}& \max_{(\lambda,w)} F\bigl(x^{*},y^{*}, \lambda,w,K\bigr) \\& \mbox{s.t.}\quad (\lambda,w)\in W \end{aligned}$$

will have some Pareto optimal solution \((\lambda^{*},w^{*})\in W_{v}\). This proves the first part and the second part is obvious following Theorem 2 in [20]. □

The above theorem holds for a fixed value of K. We now show that a finite value of K yields an exact Pareto optimal solution to the overall problem (9), at which the penalty term \(w^{T}(b-A_{1}x)-\lambda^{T}Dy\) vanishes.

Theorem 3.3

There exists a finite value of K, \(K^{*}\) say, for which the Pareto optimal solution \((x,y,\lambda,w)\) to the penalty function problem (9) satisfies \(w^{T}(b-A_{1}x)-\lambda^{T}Dy=0\).

Proof

Suppose \((x^{*},y^{*},\lambda^{*})\) is a Pareto optimal solution to problem (4), i.e., the linear bilevel multiobjective problem. Then the optimality conditions of the lower level problem are satisfied, and by the strong duality of linear programming there exists a dual feasible point \(w^{*}\) such that \((w^{*})^{T}(b-A_{1}x^{*})-(\lambda^{*})^{T}Dy^{*}=0\).

Let \((x,y,\lambda,w)\) be a Pareto optimal solution to problem (9), then there exists an index i, such that

$$\begin{aligned} C_{i}x+C_{i}'y-K\bigl(w^{T}(b-A_{1}x)- \lambda^{T}Dy\bigr) \geq& C_{i}x^{*}+C_{i}'y^{*}-K \bigl(\bigl(w^{*}\bigr)^{T}\bigl(b-A_{1}x^{*} \bigr)-\bigl(\lambda^{*}\bigr)^{T}Dy^{*}\bigr) \\ =&C_{i}x^{*}+C_{i}'y^{*}, \end{aligned}$$

where \(C_{i}\) and \(C_{i}'\) are the ith rows of C and \(C'\), respectively.

Thus,

$$ 0\leq w^{T}(b-A_{1}x)-\lambda^{T}Dy\leq \frac{\max_{i}[C_{i}x+C_{i}'y-C_{i}x^{*}-C_{i}'y^{*}]}{K}\leq\frac{k}{K}, $$

where k is some constant. Thus, \(w^{T}(b-A_{1}x)-\lambda^{T}Dy\rightarrow0\) as \(K\rightarrow\infty\). Moreover, since \(Z_{v}\times W_{v}\) is finite, \(w^{T}(b-A_{1}x)-\lambda^{T}Dy=0\) for some large finite value of K, say \(K^{*}\). □

We now show that, by increasing K monotonically, we obtain Pareto optimal solutions of the linear bilevel multiobjective programming problem (4).

Theorem 3.4

There exists some \(K^{*}\geq0\), such that for all \(K\geq K^{*}\), the Pareto optimal solution \((x,y,\lambda,w)\) to problem (9) is also a Pareto optimal solution to problem (4).

Proof

Following Theorem 3.3 and the essentials of the penalty function method for multiobjective programs (Theorem 2.1 in [23]), we know that there exists some \(K^{*}\geq0\) such that, for all \(K\geq K^{*}\), the Pareto optimal solution \((x,y,\lambda,w)\) to problem (9) satisfies \(w^{T}(b-A_{1}x)-\lambda^{T}Dy=0\), and there does not exist a point \((\hat{x},\hat{y},\hat{\lambda},\hat{w})\) which is feasible to problem (9) satisfying

$$ Cx+C'y\leq C\hat{x}+C'\hat{y}.$$
(11)

Suppose that \((x,y,\lambda,w)\) (feasible to problem (4)) is not a Pareto optimal solution to problem (4), then there exists a feasible point \((\check{x},\check{y},\check{\lambda },\check{w})\) to problem (4) (also feasible to problem (9)), satisfying

$$ Cx+C'y\leq C\check{x}+C'\check{y},$$
(12)

which contradicts (11). Hence \((x,y,\lambda)\) is also a Pareto optimal solution to problem (4). □

4 Algorithm and numerical results

Following the above results, we know that we can obtain the Pareto optimal solution of problem (1) by solving the penalized problem (9). Then we can propose the following exact penalty function algorithm for the linear bilevel multiobjective programming problem (1).

Original algorithm

Step 0. Choose an initial point \((x^{0},y^{0},\lambda^{0},w^{0})\in Z_{v}\times W_{v}\), a positive constant \(K>1\) and stopping tolerance \(\epsilon>0\), and set \(k:=0\).

Step 1. Find a Pareto optimal solution \((x^{k},y^{k},\lambda ^{k},w^{k})\) of problem (9).

Step 2. If \((w^{k})^{T}(b-A_{1}x^{k})-(\lambda^{k})^{T}Dy^{k}\leq \epsilon\), stop, a Pareto optimal solution is obtained; else go to Step 3.

Step 3. Set \((x^{k+1},y^{k+1},\lambda^{k+1},w^{k+1}):=(x^{k},y^{k},\lambda^{k},w^{k})\), \(k:=k+1\), \(K:=2K\), and go to Step 1.
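The Step 1-Step 3 loop can be sketched generically. The subproblem solver below is an abstract callback (a placeholder: solving the multiobjective problem (9) itself requires a method of its own), so the driver only encodes the doubling schedule and the stopping test; the toy callback used in the demonstration mimics the \(k/K\) gap bound from the proof of Theorem 3.3.

```python
def penalty_loop(solve_penalized, gap, K=2.0, eps=1e-6, max_iter=60):
    """Original algorithm, Steps 1-3: solve the penalized problem (9) for the
    current K (Step 1), stop once the duality gap is within eps (Step 2),
    otherwise double K and repeat (Step 3)."""
    for _ in range(max_iter):
        point = solve_penalized(K)      # Step 1 (abstract subproblem solver)
        if gap(point) <= eps:           # Step 2: exactness reached
            return point, K
        K *= 2.0                        # Step 3: double the penalty
    raise RuntimeError("tolerance not reached within max_iter doublings")

# Fictitious illustration: a solver whose duality gap behaves like 1/K,
# as suggested by the k/K bound in the proof of Theorem 3.3.
point, K_final = penalty_loop(lambda K: K, lambda p: 1.0 / p)
```

Because the gap shrinks like \(k/K\) and K doubles each round, the loop terminates after \(O(\log(k/\epsilon))\) iterations.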

The following theorem states the convergence of the above algorithm.

Theorem 4.1

Let assumption (A) be satisfied. Then the last point of the sequence \((x^{k},y^{k},\lambda^{k})\) generated by the original algorithm is a Pareto optimal solution to problem (4).

Proof

Following Theorem 3.3, the penalty function is exact, and hence the sequence \((x^{k},y^{k},\lambda^{k})\) generated by the above algorithm is finite. Let \((x^{*},y^{*},\lambda^{*})\) be the last point in this sequence. Following Theorem 3.4, \((x^{*},y^{*},\lambda^{*})\) is a Pareto optimal solution to problem (4). □

It is clear that the Pareto optimal front of problem (1) can be obtained by solving problem (9) with appropriate approaches. In the following, we consider the situation in which the decision maker gives positive weights for the upper level objective functions, and we propose a special algorithm for this problem.

Let the decision maker give positive weights \(\mu=(\mu_{1},\ldots ,\mu_{q})^{T}\in R_{+}^{q}\) with \(\sum_{i=1}^{q}\mu_{i}=1\) for the upper level objective functions. Then the Pareto optimal solution corresponding to the given weights μ can be obtained by solving the following program:

$$\begin{aligned}& \max\mu^{T}\bigl(Cx+C'y\bigr)-K\bigl(w^{T}(b-A_{1}x)- \lambda^{T}Dy\bigr) \\& \mbox{s.t.} \quad(x,y)\in Z, (\lambda,w)\in W. \end{aligned}$$
(13)

Let \(Q(\lambda,w,K)=\max_{(x,y)\in Z}[\mu ^{T}(Cx+C'y)-K(w^{T}(b-A_{1}x)-\lambda^{T}Dy)]\). We have the following result.

Theorem 4.2

For \((\bar{\lambda},\bar{w}),(\tilde{\lambda},\tilde{w})\in W\), with \((x,y)\) attaining the maximum defining \(Q(\tilde{\lambda},\tilde{w},K)\), we have

$$ Q(\bar{\lambda},\bar{w},K)\geq Q(\tilde{\lambda},\tilde{w},K)-K\bigl[( \bar{w}-\tilde{w})^{T}(b-A_{1}x)-(\bar{\lambda}-\tilde{ \lambda})^{T}Dy\bigr] . $$
(14)

Proof

Let \((x,y)\in Z\) attain the maximum defining \(Q(\tilde{\lambda},\tilde{w},K)\). Then

$$ Q(\tilde{\lambda},\tilde{w},K)=\mu^{T}\bigl(Cx+C'y \bigr)-K\bigl[\bigl(\tilde{w}^{T}(b-A_{1}x)-\tilde{ \lambda}^{T}Dy\bigr)\bigr] $$
(15)

and

$$ Q(\bar{\lambda},\bar{w},K)\geq\mu^{T}\bigl(Cx+C'y \bigr)-K\bigl[\bigl(\bar{w}^{T}(b-A_{1}x)-\bar{ \lambda}^{T}Dy\bigr)\bigr]. $$
(16)

Subtract (15) from (16) and the result follows. □

Following Theorem 4.2, if \(\alpha(\tilde{\lambda},\tilde{w})=\min _{(\lambda,w)\in W}[(w-\tilde{w})^{T}(b-A_{1}x)-(\lambda-\tilde{\lambda })^{T}Dy]<0\), then \((\tilde{\lambda},\tilde{w})\notin\arg\max\{Q(\lambda,w,K):(\lambda,w)\in W\}\).

Based on the above analysis, we can propose an algorithm for solving the special problem (13).

Weights algorithm

Step 0. Give the weights \(\mu=(\mu_{1},\ldots,\mu_{q})^{T}\in R_{+}^{q}\) with \(\sum_{i=1}^{q}\mu_{i}=1\) for the upper level objective functions, and choose an initial point \((\lambda^{0},w^{0})\in W_{v}\), a positive constant \(K>1\), and stopping tolerance \(\epsilon>0\), and set \(k:=0\).

Step 1. Solve \(\max_{(x,y)\in Z}\mu ^{T}(Cx+C'y)-K((w^{k})^{T}(b-A_{1}x)-(\lambda^{k})^{T}Dy)\) and we obtain the optimal solution \((x^{k},y^{k})\).

Step 2. Solve \(\alpha(\lambda^{k},w^{k})=\min_{(\lambda,w)\in W}[(w-w^{k})^{T}(b-A_{1}x^{k})-(\lambda-\lambda^{k})^{T}Dy^{k}]\), and obtain the optimal solution \((\lambda^{*},w^{*})\).

Step 3.

  1. (1)

    If \(\alpha(\lambda^{k},w^{k})<0\), then \((\lambda ^{k+1},w^{k+1})=(\lambda^{*},w^{*})\), \(k:=k+1\), go to Step 1.

  2. (2)

    If \(\alpha(\lambda^{k},w^{k})\geq0\) and \((w^{k})^{T}(b-A_{1}x^{k})-(\lambda^{k})^{T}Dy^{k}>\epsilon\), then set \(K:=2K\) and go to Step 1.

  3. (3)

    If \(\alpha(\lambda^{k},w^{k})\geq0\) and \((w^{k})^{T}(b-A_{1}x^{k})-(\lambda^{k})^{T}Dy^{k}\leq\epsilon\), then the optimal solution to problem (13) is \((x^{k},y^{k},\lambda^{k},w^{k})\).
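Since Steps 1 and 2 are each linear programs, the whole weights algorithm can be sketched as an alternation of two `scipy.optimize.linprog` calls. This is a sketch under our own solver and tolerance choices, and the tiny bilevel instance at the bottom is hypothetical, not one of the paper's examples.

```python
import numpy as np
from scipy.optimize import linprog

def weights_algorithm(C, Cp, D, A1, A2, b, mu, K=100.0, eps=1e-6, max_iter=100):
    """Alternating-LP sketch of Steps 0-3 of the weights algorithm."""
    p, n = A1.shape
    m, l = A2.shape[1], D.shape[0]
    lam = np.ones(l) / l                              # lambda^0 on the simplex
    # w^0: any dual feasible point (A2^T w >= D^T lam, w >= 0)
    w = linprog(np.zeros(p), A_ub=-A2.T, b_ub=-(D.T @ lam),
                bounds=[(0, None)] * p, method="highs").x
    for _ in range(max_iter):
        # Step 1: max mu^T(Cx + C'y) - K (w^T(b - A1 x) - lam^T D y) over Z
        c = np.concatenate([-(C.T @ mu) - K * (A1.T @ w),
                            -(Cp.T @ mu) - K * (D.T @ lam)])
        res = linprog(c, A_ub=np.hstack([A1, A2]), b_ub=b,
                      bounds=[(0, None)] * (n + m), method="highs")
        x, y = res.x[:n], res.x[n:]
        gap = w @ (b - A1 @ x) - lam @ (D @ y)
        # Step 2: alpha = min over W of (w'-w)^T (b - A1 x) - (lam'-lam)^T D y
        r = b - A1 @ x
        res2 = linprog(np.concatenate([-(D @ y), r]),
                       A_ub=np.hstack([D.T, -A2.T]), b_ub=np.zeros(m),
                       A_eq=np.hstack([np.ones((1, l)), np.zeros((1, p))]),
                       b_eq=[1.0], bounds=[(0, None)] * (l + p),
                       method="highs")
        alpha = res2.fun + lam @ (D @ y) - w @ r      # add back the constant
        # Step 3
        if alpha < -1e-10:                            # (1): better multipliers
            lam, w = res2.x[:l], res2.x[l:]
        elif gap > eps:                               # (2): increase the penalty
            K *= 2.0
        else:                                         # (3): done
            return x, y, lam, w, K
    raise RuntimeError("did not converge")

# Hypothetical instance: upper level max (x, x), lower level max y, joint
# constraint x + y <= 2.  The lower level forces y = 2 - x, and the weighted
# upper objective then drives x to the vertex x = 2, y = 0.
C, Cp = np.array([[1.0], [1.0]]), np.array([[0.0], [0.0]])
D, A1, A2, b = np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([2.0])
x, y, lam, w, K = weights_algorithm(C, Cp, D, A1, A2, b, mu=np.array([0.5, 0.5]))
```

Note that in Step 2 the LP minimizes only the linear part of α; the constant \((\lambda^{k})^{T}Dy^{k}-(w^{k})^{T}(b-A_{1}x^{k})\) is added back afterwards.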

To illustrate the weights algorithm, we solve the following linear bilevel multiobjective programming problem using the above algorithm.

Example 1

[6]

$$\begin{aligned}& \min_{(x,y)\geq0}F(x,y)=(-x+2y,2x-4y)^{T} \\& \mbox{s.t.}\quad {-}x+3y\leq4, \\& \hphantom{\mbox{s.t.}\quad} \min_{y\geq0}f(x,y)=(-x+2y,2x-y)^{T} \\& \hphantom{\mbox{s.t.}\quad}\mbox{s.t.}\quad x-y\leq0, \\& \hphantom{\mbox{s.t.}\quad\mbox{s.t.}\quad}{-}x-y\leq0. \end{aligned}$$

For Example 1, following problems (4) and (9), we get the following penalized problem:

$$\begin{aligned}& \max(x-2y,-2x+4y)^{T}-K(-w_{1}x-w_{2}x+2 \lambda_{1}y-\lambda_{2}y)e \\& \mbox{s.t.}\quad \lambda_{1}+\lambda_{2}=1, \\& \hphantom{\mbox{s.t.}\quad} {-}x+3y\leq4, \\& \hphantom{\mbox{s.t.}\quad} x-y\leq0, \\& \hphantom{\mbox{s.t.}\quad} {-}x-y\leq0, \\& \hphantom{\mbox{s.t.}\quad} w_{1}+w_{2}-2\lambda_{1}+ \lambda_{2}\leq0, \\& \hphantom{\mbox{s.t.}\quad} x,y,\lambda,w\geq0. \end{aligned}$$
(17)

The solution proceeds as follows.

Loop 1

Step 0. Choose the weights \(\mu=(0.5,0.5)^{T}\) for the upper level objective functions, an initial point \((\lambda ^{0},w^{0})=(1/3,2/3,0,0)^{T}\), a positive constant \(K=100\), and stopping tolerance \(\epsilon=0.01\), and set \(k:=0\).

Step 1. Solve

$$\begin{aligned}& \max-0.5x+y \\& \mbox{s.t.} \quad{-}x+3y\leq4, \\& \hphantom{\mbox{s.t.}\quad} x-y\leq0, \\& \hphantom{\mbox{s.t.}\quad} {-}x-y\leq0, \\& \hphantom{\mbox{s.t.}\quad}x,y\geq0 \end{aligned}$$

and obtain the optimal solution \((x^{0},y^{0})=(0,\frac{4}{3})^{T}\).

Step 2. Solve

$$\begin{aligned}& \alpha\bigl(\lambda^{0},w^{0}\bigr)=\min 4w_{1}-\frac{8}{3}\lambda_{1}+\frac{4}{3} \lambda_{2} \\& \mbox{s.t.}\quad \lambda_{1}+\lambda_{2}=1, \\& \hphantom{\mbox{s.t.}\quad} w_{1}+w_{2}-2\lambda_{1}+ \lambda_{2}\leq0, \\& \hphantom{\mbox{s.t.}\quad} \lambda,w\geq0. \end{aligned}$$

Obtain the optimal solution \((\lambda^{*},w^{*})=(1,0,0,0)^{T}\), and the optimal value \(\alpha(\lambda^{0},w^{0})=-\frac{8}{3}\).

Step 3. As \(\alpha(\lambda^{0},w^{0})=-\frac{8}{3}<0\), let \((\lambda ^{1},w^{1})=(1,0,0,0)^{T}\), \(k:=1\), go to Step 1.

Loop 2

Step 1. Solve

$$\begin{aligned}& \max-0.5x-199y \\& \mbox{s.t.}\quad {-}x+3y\leq4, \\& \hphantom{\mbox{s.t.}\quad} x-y\leq0, \\& \hphantom{\mbox{s.t.}\quad} {-}x-y\leq0, \\& \hphantom{\mbox{s.t.}\quad} x,y\geq0, \end{aligned}$$

and obtain the optimal solution \((x^{1},y^{1})=(0,0)^{T}\).

Step 2. Solve

$$\begin{aligned}& \alpha\bigl(\lambda^{1},w^{1}\bigr)=\min 4w_{1}+w_{2} \\& \mbox{s.t.}\quad \lambda_{1}+\lambda_{2}=1, \\& \hphantom{\mbox{s.t.}\quad} w_{1}+w_{2}-2\lambda_{1}+ \lambda_{2}\leq0, \\& \hphantom{\mbox{s.t.}\quad} \lambda,w\geq0. \end{aligned}$$

Obtain the optimal solution \((\lambda^{*},w^{*})=(1,0,0,0)^{T}\), and the optimal value \(\alpha(\lambda^{1},w^{1})=0\).

Step 3. \(\alpha(\lambda^{1},w^{1})=0\), and \((w^{1})^{T}(b-A_{1}x^{1})-(\lambda^{1})^{T}Dy^{1}=0<0.01\). Then the Pareto optimal solution of Example 1 for the weights \(\mu=(0.5,0.5)^{T}\) is \((x,y,\lambda,w)=(0,0,1,0,0,0)^{T}\).

It is noted that in [6] a Pareto optimal solution to Example 1 is \((x,y)=(0,0.5)^{T}\), with upper level objective value \(F(x,y)=(1,-2)^{T}\). In this paper the weights algorithm yields the Pareto optimal solution \((x,y)=(0,0)^{T}\), with the corresponding upper level objective value \(F(x,y)=(0,0)^{T}\). Under the vector partial order these two objective vectors are incomparable, and the solution \((x,y)=(0,0)\) obtained here is a Pareto optimal solution to Example 1. This shows that the weights algorithm is applicable to the linear bilevel multiobjective programming problem.

To further show the viability of the weights algorithm proposed, we consider the following two examples.

Example 2

[6]

$$\begin{aligned}& \max_{(x,y)\geq0}F(x,y)=(x_{1}+9x_{2}+10y_{1}+y_{2}+3y_{3}, 9x_{1}+2x_{2}+2y_{1}+7y_{2}+4y_{3})^{T} \\& \mbox{s.t.}\quad 3x_{1}+9x_{2}+9y_{1}+5y_{2}+3y_{3} \leq1\mbox{,}039, \\& \hphantom{\mbox{s.t.}\quad} {-}4x_{1}-x_{2}+3y_{1}-3y_{2}+2y_{3} \leq94, \\& \hphantom{\mbox{s.t.}\quad} \max_{y\geq0}f(x,y)=(4x_{1}+6x_{2}+7y_{1}+4y_{2}+8y_{3}, 6x_{1}+4x_{2}+8y_{1}+7y_{2}+4y_{3})^{T} \\& \hphantom{\mbox{s.t.}\quad} \mbox{s.t.}\quad 3x_{1}-9x_{2}-9y_{1}-4y_{2} \leq61, \\& \hphantom{\mbox{s.t.}\quad\mbox{s.t.}\quad} 5x_{1}+9x_{2}+10y_{1}-y_{2}-2y_{3} \leq924, \\& \hphantom{\mbox{s.t.}\quad\mbox{s.t.}\quad} 3x_{1}-3x_{2}+y_{2}+5y_{3}\leq 420. \end{aligned}$$

Example 3

[24]

$$\begin{aligned}& \max_{(x,y)\geq0}F(x,y)=(x_{1}+2x_{2}, 3x_{1}+x_{2})^{T} \\& \mbox{s.t.}\quad x_{1}+x_{2}\leq3, \\& \hphantom{\mbox{s.t.}\quad} \max_{y\geq0}f(x,y)=(y_{1}+3y_{2}, 2y_{1}+y_{2})^{T} \\& \hphantom{\mbox{s.t.}\quad } \mbox{s.t.}\quad {-}x_{1}+y_{1}+y_{2}\leq6, \\& \hphantom{\mbox{s.t.}\quad\mbox{s.t.}\quad} {-}x_{2}+y_{1}\leq3, \\& \hphantom{\mbox{s.t.}\quad\mbox{s.t.}\quad} x_{1}+x_{2}+y_{2}\leq8. \end{aligned}$$

In the above examples, we choose the fixed weights of the upper level objectives as \(\mu=(\mu_{1},\mu_{2})^{T}=(0.5,0.5)^{T}\), and we obtain the Pareto optimal solutions presented in Table 1.

Table 1 Pareto optimal solutions to the different penalty factors K

In Table 2, we compare the upper level objective values obtained in this paper with those in the corresponding references. The comparison shows that the solutions obtained in this paper are Pareto optimal solutions to the above two examples, which demonstrates the usefulness and viability of the exact penalty function approach presented here for the linear bilevel multiobjective programming problem.

Table 2 Results in this paper comparing with the references

5 Conclusion

In this paper, we introduce a new exact penalty function method for solving the linear bilevel multiobjective programming problem. The method is based on appending the duality gap of the lower level problem to the upper level objectives with a penalty. The numerical results reported illustrate that the exact penalty function method introduced in this paper can be numerically efficient.

It is noted that, besides its theoretical properties, the original algorithm proposed in this paper has one distinct advantage: it only requires practicable algorithms for solving smooth multiobjective optimization problems, and no other complex operations are necessary. In addition, for the LBMP problem with given weights for the upper level objective functions, the proposed weights algorithm only requires solving a series of linear programming problems to obtain the Pareto optimal solutions.

References

  1. Dempe, S: Foundations of Bilevel Programming. Kluwer, Dordrecht (2002)

  2. Bard, J: Practical Bilevel Optimization: Algorithm and Applications. Kluwer, Dordrecht (1998)

  3. Vicente, L, Calamai, P: Bilevel and multilevel programming: a bibliography review. J. Glob. Optim. 5(3), 291-306 (1994)

  4. Dempe, S: Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints. Optimization 52(3), 333-359 (2003)

  5. Colson, B, Marcotte, P, Savard, G: An overview of bilevel optimization. Ann. Oper. Res. 153, 235-256 (2007)

  6. Zhang, G, Lu, J, Dillon, T: Decentralized multi-objective bilevel decision making with fuzzy demands. Knowl.-Based Syst. 20(5), 495-507 (2007)

  7. Ye, J: Necessary optimality conditions for multiobjective bilevel programs. Math. Oper. Res. 36(1), 165-184 (2011)

  8. Eichfelder, G: Multiobjective bilevel optimization. Math. Program. 123, 419-449 (2010)

  9. Shi, X, Xia, H: Interactive bilevel multiobjective decision making. J. Oper. Res. Soc. 48(9), 943-949 (1997)

  10. Shi, X, Xia, H: Model and interactive algorithm of bilevel multiobjective decision making with multiple interconnected decision makers. J. Multi-Criteria Decis. Anal. 10, 27-34 (2004)

  11. Abo-Sinna, M: A bilevel nonlinear multiobjective decision making under fuzziness. J. Oper. Res. Soc. Ind. 38(5), 484-495 (2001)

  12. Osman, M, Abo-Sinna, M, et al.: A multilevel nonlinear multiobjective decision making under fuzziness. Appl. Math. Comput. 153(1), 239-253 (2004)

  13. Teng, C, Li, L: A class of genetic algorithms on bilevel multiobjective decision making problem. J. Syst. Sci. Syst. Eng. 9(3), 290-296 (2000)

  14. Deb, K, Sinha, A: Solving bilevel multi-objective optimization problems using evolutionary algorithms. Lect. Notes Comput. Sci. 5467, 110-124 (2009)

  15. Bonnel, H, Morgan, J: Semivectorial bilevel optimization problem: penalty approach. J. Optim. Theory Appl. 131(3), 365-382 (2006)

  16. Zheng, Y, Wan, Z: A solution method for semivectorial bilevel programming problem via penalty method. J. Appl. Math. Comput. 37, 207-219 (2011)

  17. Nishizaki, I, Sakawa, M: Stackelberg solutions to multiobjective two-level linear programming problem. J. Optim. Theory Appl. 103(1), 161-182 (1999)

  18. Ankhili, Z, Mansouri, A: An exact penalty on bilevel programs with linear vector optimization lower level. Eur. J. Oper. Res. 197, 36-41 (2009)

  19. Calvete, HI, Gale, C: Linear bilevel programs with multiple objectives at the upper level. J. Comput. Appl. Math. 234, 950-959 (2010)

  20. Anandalingam, G, White, DJ: A solution for the linear static Stackelberg problem using penalty function. IEEE Trans. Autom. Control 38(5), 484-495 (2001)

  21. Lv, YB, Hu, T, Wang, G, Wan, Z: A penalty function method based on Kuhn-Tucker condition for solving linear bilevel programming. Appl. Math. Comput. 188, 808-813 (2007)

  22. Dempe, S, Gadhi, N, Zemkoho, AB: New optimality conditions for the semivectorial bilevel optimization problem. J. Optim. Theory Appl. 157, 54-74 (2013)

  23. White, DJ: Multiobjective programming and penalty function. J. Optim. Theory Appl. 43(4), 583-600 (1984)

  24. Pieume, CO, Marcotte, P, Fotso, LP: Solving bilevel linear multiobjective programming problems. Amer. J. Oper. Res. 1, 214-219 (2011)


Acknowledgements

The authors thank the editor Rachel for the suggestions on maintaining the integrity of this paper. The work is supported by the Natural Science Foundation of China (Nos. 11201039, 71471140, 61273179).

Author information

Correspondence to Yibing Lv.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

YL and ZW conceived and designed the study. YL wrote and edited the paper. ZW reviewed the manuscript. The two authors read and approved the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Lv, Y., Wan, Z. Solving linear bilevel multiobjective programming problem via exact penalty function approach. J Inequal Appl 2015, 258 (2015). https://doi.org/10.1186/s13660-015-0780-7