Optimization of PID parameters with an improved simplex PSO

Abstract

In this paper, we adopt the ‘0.618’ method to select the compression factor and the expansion factor of the simplex particle swarm optimization algorithm. We use the new method to optimize the parameters of a Proportion Integral Differential (PID) controller. The experimental results show that our method effectively alleviates the slow-convergence problem and gives a better performance than the conventional PID tuning method.

1 Introduction

The Particle Swarm Optimization (PSO) algorithm is an evolutionary computation method proposed by Kennedy and Eberhart in 1995 [1, 2]. PSO has many advantages, such as simplicity and high practical implementability, and it has been extended to multi-objective optimization [3–6]. Ono and Nakayama proposed an algorithm using multi-objective Particle Swarm Optimization (MOPSO) for finding solutions that are robust against small perturbations of the design variables [7]. Gong et al. presented a global path planning approach based on multi-objective particle swarm optimization [8]. Shang et al. proposed an improved multi-objective particle swarm optimization algorithm [9]. Mostaghim and Teich proposed a Sigma method that determines the best guide for each particle and introduced a disturbance factor [3]. Yu introduced two kinds of improved multi-objective particle swarm optimization algorithms [10, 11]. However, for some complex optimization problems PSO is likely to become trapped in a local extremum. To overcome this disadvantage, improve the performance of PSO, and expand its range of application, Chen proposed the simplex particle swarm optimization algorithm (SPSO), which combines PSO with a method of strong local search capability, the simplex method (SM) [12]. In order to further enhance the performance of SPSO, a new strategy for choosing the compression factor and the expansion factor of SPSO is proposed in this paper. We apply the new method to the optimization of the parameters of a Proportion Integral Differential (PID) controller, and the results indicate the superiority of our method.

2 Improved simplex particle swarm optimization algorithm

2.1 The basic particle swarm optimization algorithm

The PSO algorithm adopts a velocity- and position-based search model in which each particle represents a candidate solution; a swarm of particles flies through a D-dimensional search space with a certain velocity, and the velocities are dynamically adjusted according to the flight experience. Assuming the swarm contains m particles, the position and velocity of the ith particle in the D-dimensional search space are represented as \(x_{i}=(x_{i1}, x_{i2}, \ldots, x_{iD})\) and \(v_{i}=(v_{i1}, v_{i2}, \ldots, v_{iD})\). Substituting \(x_{i}\) into the objective function gives its fitness value; \(p_{i}=(p_{i1}, p_{i2}, \ldots, p_{iD})\) is the best position that the ith particle has experienced, and \(p_{g}\) is the globally best position chosen among all the \(p_{i}\). In each generation, the velocity \(v_{i}\) and position \(x_{i}\) of a particle are updated by the following formulas:

$$\begin{aligned}& v_{id}(t+1)=\omega\cdot v_{id}(t)+c_{1}\cdot \operatorname{rand}()\cdot \bigl(p_{id}-x_{id}(t)\bigr)+c_{2}\cdot \operatorname{rand}()\cdot \bigl(p_{gd}-x_{id}(t)\bigr)\quad (d=1, 2, \ldots, D), \end{aligned}$$
(1)
$$\begin{aligned}& x_{id}(t+1)=x_{id}(t)+v_{id}(t+1)\quad (d=1, 2, \ldots, D), \end{aligned}$$
(2)

where ω is the inertia weight, usually chosen within \([0.4, 1.2]\); \(c_{1}\) and \(c_{2}\) are acceleration constants, the so-called learning factors; and \(\operatorname{rand}()\) denotes a random number uniformly distributed in \([0, 1]\).
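
As an illustration only, the following minimal Python sketch applies update rules (1) and (2) to a swarm stored as an (m, D) array; the function name, the velocity-clamping option, and the per-dimension random numbers are our assumptions rather than part of the original formulation.

```python
import numpy as np

def pso_update(x, v, p_best, g_best, w=0.8, c1=2.0, c2=2.0, v_max=None):
    """One application of update rules (1) and (2) to a swarm stored as an
    (m, D) position array x with velocities v (illustrative sketch only)."""
    m, D = x.shape
    r1 = np.random.rand(m, D)      # rand() drawn independently per particle and dimension
    r2 = np.random.rand(m, D)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # rule (1)
    if v_max is not None:          # optional velocity clamping (an assumption)
        v = np.clip(v, -v_max, v_max)
    x = x + v                      # rule (2)
    return x, v
```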

2.2 Improved simplex particle swarm optimization algorithm

PSO is a global optimization method; it does not need any prior information when it seeks the optimal solution in the solution space. In some complex cases, however, PSO may easily fall into a local extremum. A natural remedy is to combine PSO with a local search technique: the local search takes the current PSO result as its initial point, while PSO uses the simplex method to guide its search direction. Because the particles move randomly, a potential global optimum near the current best may still be skipped, so SPSO repeatedly searches around the current optimal solution to increase the probability of finding the global optimum [12, 13]. In our method, we introduce a compression step when the reflection of the worst solution is inferior to the second worst solution, and an expansion step when the reflection of the worst solution is superior to the second worst solution; the resulting point then replaces the worst solution in the next simplex search. The compression factor and the expansion factor are chosen to be less than and greater than 1, respectively, and we exploit the ‘0.618’ method to select them.
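
The paper does not spell out the simplex update in formulas; the sketch below is one plausible reading of the reflection/expansion/compression step described above, with a hypothetical objective f and the fixed factors 0.618 (compression) and 1.618 (expansion) used later in the simulation.

```python
import numpy as np

def simplex_step(simplex, f, alpha=1.0, gamma=1.618, beta=0.618):
    """One simplex iteration: replace the worst vertex by reflection,
    expansion, or compression.  simplex is a (k, D) array of vertices and
    f the objective to minimize; gamma (expansion) and beta (compression)
    follow the '0.618' choice described in the text, but the exact rule
    used by the authors may differ."""
    vals = np.array([f(p) for p in simplex])
    order = np.argsort(vals)                       # sort vertices from best to worst
    simplex, vals = simplex[order], vals[order]
    worst = simplex[-1]
    centroid = simplex[:-1].mean(axis=0)           # centroid of all vertices but the worst
    reflected = centroid + alpha * (centroid - worst)
    if f(reflected) < vals[-2]:                    # better than the second worst: expand
        expanded = centroid + gamma * (reflected - centroid)
        simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
    else:                                          # otherwise compress toward the centroid
        simplex[-1] = centroid + beta * (worst - centroid)
    return simplex
```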

2.3 New algorithm

  (1) Initialize all parameter values and evaluate the initial fitness value of each particle according to the objective function;

  (2) calculate each particle’s velocity and position by (1) and (2);

  (3) update and store each particle’s historical optimal position and fitness value, and update and store the historical global optimal position and optimum;

  (4) if the current global optimum is not better than the historical one, go to (7); otherwise, go to (5);

  (5) call the improved simplex method (SM); when the SM termination conditions are satisfied, substitute the results of SM back into PSO;

  (6) if necessary, update and store the historical optimal position and optimal fitness of each back-substituted particle, and update and store the historical global optimal position and optimum;

  (7) if the termination conditions (deviation or number of iterations) have been reached, output the global optimal position and optimum and end the iteration; otherwise, go to (2).
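
To make the flow of steps (1)-(7) concrete, the following Python sketch strings them together; it assumes the pso_update and simplex_step sketches given earlier, uses box bounds to initialize the swarm, and applies only the iteration-count termination criterion (the deviation test is omitted). It is an illustration of the procedure, not the authors’ implementation.

```python
import numpy as np

def improved_spso(f, bounds, m=30, max_iter=50, sm_iters=10):
    """Sketch of steps (1)-(7): PSO iterations with a simplex refinement of
    the best particles whenever the global best improves (steps (4)-(6))."""
    low, high = bounds
    D = low.size
    x = low + np.random.rand(m, D) * (high - low)      # step (1): random initialization
    v = np.zeros((m, D))
    p_best, p_val = x.copy(), np.array([f(p) for p in x])
    g_idx = p_val.argmin()
    g_best, g_val = p_best[g_idx].copy(), p_val[g_idx]
    for _ in range(max_iter):                          # step (7): iteration limit
        x, v = pso_update(x, v, p_best, g_best)        # step (2): move the swarm
        vals = np.array([f(p) for p in x])             # step (3): update memories
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        if p_val.min() < g_val:                        # step (4): global best improved?
            idx = np.argsort(p_val)[:D + 1]            # best D+1 particles form a simplex
            simplex = p_best[idx].copy()
            for _ in range(sm_iters):                  # step (5): simplex refinement
                simplex = simplex_step(simplex, f)
            sm_vals = np.array([f(p) for p in simplex])
            better = sm_vals < p_val[idx]              # step (6): back-substitute improvements
            p_best[idx[better]], p_val[idx[better]] = simplex[better], sm_vals[better]
            g_idx = p_val.argmin()
            g_best, g_val = p_best[g_idx].copy(), p_val[g_idx]
    return g_best, g_val
```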

3 Optimize PID parameters with the improved SPSO

3.1 The optimization problem of PID controller parameters

In an industrial control system, the output of some controlled plants under the action of a step signal is an S-shaped rising curve; in this case, a second-order model with inertia and delay can be used to describe the plant. Its transfer function is

$$W(s)=\frac{K\mathrm{e}^{-T_{3}s}}{(T_{1}s+1)(T_{2}s+1)}. $$

In order to keep the controlled output y at a constant value under the effect of disturbances, a PID controller is usually used to form a constant-value control system.

When the production process is stable, that is, the plant properties are stable, K, \(T_{1}\), \(T_{2}\), \(T_{3}\) are constants and the tuned PID parameters can stay the same. When changes appear frequently in the production process, such as a chemical reaction in chemical engineering or a load change in a power plant, constant PID parameters usually cannot achieve an optimal control result.

Since a computer has good computational capability and control flexibility and can realize direct digital control (DDC), it is possible to adjust the PID controller parameters with a computer system [14, 15].

Assume that the control u and deviation e satisfy the following difference equation:

$$u(n)=K_{P} \Biggl[e(n)+\frac{1}{T_{I}}\sum _{k=0}^{n}e(k)T+T_{d}\frac {e(n)-e(n-1)}{T} \Biggr], $$

or

$$u(n)=u(n-1)+K_{P} \biggl\{ \bigl[e(n)-e(n-1) \bigr]+ \frac{T}{T_{I}}e(n)+\frac {T_{d}}{T} \bigl[e(n)-2e(n-1)+e(n-2) \bigr] \biggr\} , $$

where \(u(n)\) is the nth control action, \(e(n)\) is the nth deviation, T is the sampling period, \(K_{P}\) is the proportional gain, \(T_{I}\) is the integral time constant, and \(T_{d}\) is the derivative time constant.
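
For illustration, the incremental (second) form of the difference equation translates almost line by line into code; the class and attribute names below are ours, not the authors’.

```python
class IncrementalPID:
    """Incremental (velocity-form) PID following the second difference equation:
    u(n) = u(n-1) + Kp*{[e(n)-e(n-1)] + (T/Ti)*e(n) + (Td/T)*[e(n)-2e(n-1)+e(n-2)]}.
    Variable names are illustrative assumptions."""

    def __init__(self, kp, ti, td, T):
        self.kp, self.ti, self.td, self.T = kp, ti, td, T
        self.u = 0.0    # u(n-1), the previous control action
        self.e1 = 0.0   # e(n-1)
        self.e2 = 0.0   # e(n-2)

    def step(self, e):
        """Compute u(n) from the current deviation e(n)."""
        du = self.kp * ((e - self.e1)
                        + (self.T / self.ti) * e
                        + (self.td / self.T) * (e - 2 * self.e1 + self.e2))
        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u
```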

Our optimization task is to search for proper values of \(K_{P}\), \(T_{I}\), \(T_{d}\) of the PID controller that minimize the objective function \(Q=\int_{0}^{+\infty}|e(t)|\,t \,dt\). This is a multivariate nonlinear programming problem, and no explicit mathematical expression is available for the relationship between the objective function and \(K_{P}\), \(T_{I}\), \(T_{d}\). In this paper, we exploit the improved simplex particle swarm optimization algorithm to deal with this case.
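
As a hedged illustration of how the objective Q could be evaluated numerically: simulate the unity-feedback loop with the IncrementalPID sketch above, discretize the plant W(s) by simple Euler steps with the transport delay modelled as a sample buffer, and accumulate |e(t)|·t·dt. The plant gain K, the step size, and the simulation horizon are assumptions; T1, T2, T3, and R are the values quoted in Section 3.2.

```python
import numpy as np
from collections import deque

def itae_objective(kp, ti, td, K=1.0, T1=0.44, T2=0.44, T3=0.12,
                   r=10.0, dt=0.01, t_end=5.0):
    """Approximate Q (the time-weighted integral of |e|) for a unity-feedback
    loop with the IncrementalPID above and the plant W(s); the Euler
    discretization is an illustrative assumption, not the authors' code."""
    pid = IncrementalPID(kp, ti, td, dt)
    delay = deque([0.0] * max(1, int(round(T3 / dt))))  # transport delay buffer
    x1 = x2 = 0.0                                       # states of the two first-order lags
    q = 0.0
    for n in range(int(t_end / dt)):
        t = n * dt
        e = r - x2                                      # deviation from the set-point
        q += abs(e) * t * dt                            # accumulate |e(t)| * t * dt
        delay.append(pid.step(e))
        ud = delay.popleft()                            # delayed control input
        x1 += dt * (K * ud - x1) / T1                   # first lag:  T1*x1' + x1 = K*ud
        x2 += dt * (x1 - x2) / T2                       # second lag: T2*x2' + x2 = x1
    return q
```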

3.2 The simulation results

We input the following data:

  • variable number \(N=3\); calculation accuracy \(E=0.01\);

  • compression factor of 0.618; expansion factor of 1.618;

  • the parameters of the controlled object \(T_{1}=0.44~\mbox{s}\), \(T_{2}=0.44~\mbox{s}\), \(T_{3}=0.12~\mbox{s}\);

  • total number of printed output points \(L_{3}=30\); the control system input (set-point) value \(R=10\);

  • the initial values of the PID parameters \(K_{P}\), \(T_{I}\), \(T_{d}\) are \(X(1, 0)=1.5\), \(X(2, 0)=0.88\), \(X(3, 0)=0.11\);

  • the swarm size of SPSO is 30, the maximum number of iterations is 50, the maximum velocity \(v_{d}^{\max }\) is limited to ±15%, \(c_{1}=c_{2}=2\), and the inertia weight is \(\omega=0.8\).
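
Under those settings, a hypothetical end-to-end run of the sketches above might look as follows; the search bounds are our assumptions, and the figures reported next come from the authors’ own implementation, not from this sketch.

```python
import numpy as np

# Hypothetical run tying together improved_spso and itae_objective from the sketches above.
low = np.array([0.1, 0.1, 0.01])    # assumed lower bounds on Kp, Ti, Td
high = np.array([5.0, 2.0, 1.0])    # assumed upper bounds
best, q = improved_spso(lambda p: itae_objective(*p), (low, high), m=30, max_iter=50)
print("Kp, Ti, Td =", best, " ITAE =", q)
```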

We calculated the optimal value of the PID controller parameters:

$$K_{P}=1.69762, \qquad T_{I}=0.772664~\mbox{s}, \qquad T_{d}=0.229206~\mbox{s}. $$

Correspondingly, we also recorded the relationship between the output \(X_{1}\) and the deviation value \(X_{5}\) (shown in Table 1).

Table 1 The output results of system simulation

The system output overshoot is \(E=1.26\%\) and the transient time is \(T_{P}=1.0~\mbox{s}\).

For comparison, the conventional simplex method gives the following results:

$$K_{P}=1.69768, \qquad T_{I}=0.772671~\mbox{s}, \qquad T_{d}=0.229212~\mbox{s}. $$

The system output overshoot is \(E=1.85\%\) and the transient time is \(T_{P}=1.2~\mbox{s}\).

According to the engineering design method of the standard type-I system, the system output overshoot is \(E=4.3\%\) and the transient time is \(T_{P}=1.76~\mbox{s}\).

The comparison shows that our optimization method attains better results than the conventional PID tuning methods and also effectively alleviates the slow-convergence problem.

4 Conclusion

In this paper, we developed an improved SPSO method for the optimization of PID parameters. Simulation results show that the proposed method has good convergence efficiency and accuracy and can effectively improve the dynamic quality of the control system.

References

  1. Kennedy, J, Eberhart, RC: Particle swarm optimization. In: Proceedings IEEE International Conference on Neural Networks, IEEE Service Center, Perth, Australia, pp. 1942-1948 (1995)

  2. Eberhart, RC, Kennedy, J: A new optimizer using particle swarm theory. In: Proceedings the Sixth International Symposium on MicroMachine and Human Science, IEEE Service Center, Nagoy, Japan, pp. 39-43 (1995)

  3. Hu, X, Eberhart, R, Shi, Y: Particle swarm with extended memory for multiobjective optimization. In: Proceedings of 2003 IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, April, pp. 193-197 (2003)

  4. Mostaghim, S, Teich, J: Strategies for finding good local guides in multiobjective particle swarm optimization. In: Proceedings of 2003 IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, April, pp. 26-33 (2003)

  5. Mostaghim, S, Teich, J: The role of e-dominance in multiobjective particle swarm optimization methods. In: Proceedings of IEEE Congress on Evolutionary Computation CEC’2003, Canberra, Australia, pp. 1764-1771 (2003)

  6. Huang, VL, Suganthan, PN, Liang, JJ: Comprehensive learning particle swarm optimizer for solving multiobjective optimization problems. Int. J. Intell. Syst. 21(2), 209-226 (2006)

  7. Ono, S, Nakayama, S: Multi-objective particle swarm optimization for robust optimization and its hybridization with gradient search. In: IEEE International Conference on Evolutionary Computations, pp. 1629-1636 (2009)

  8. Gong, DW, Zhang, JH, Zhang, Y: Multi-objective particle swarm optimization for robot path planning in environment with danger sources. J. Comput. 6(8), 1554-1561 (2011)

  9. Shang, JT, Sun, TY, Liu, CC: An improved multi-objective particle swarm optimizer for multi-objective problems. Expert Syst. Appl. 37(8), 5872-5886 (2010)

  10. Yu, G: Multi-objective estimation of distribution algorithm based on the simulated binary crossover. J. Converg. Inf. Technol. 7(3), 110-116 (2012)

  11. Yu, G: An improved differential evolution algorithm for multi-objective optimization problems. Int. J. Adv. Comput. Technol. 3(9), 1106-1113 (2011)

  12. Chen, G-c, Yu, J-S: Simplex particle swarm optimization algorithm and its application. J. Syst. Simul. 18(4), 862-865 (2006)

  13. Xiong, W-l, Xu, B-g, Zhou, Q-m: Study on optimization of PID parameter based on improved PSO. Comput. Eng. 24, 41-43 (2005)

  14. Xiong, G-l: Digital Simulation Control System, pp. 159-162. Tsinghua University Press, Beijing (1984)

  15. Yuan, Y-X, Sun, W-Y: Optimization Theory and Method, pp. 69-75. Science Press, Beijing (1999)


Acknowledgements

This work is supported by the Natural Science Foundation of Ningxia (No. NZ14101) and the National Natural Science Foundation of China (No. 61362033).

Author information

Corresponding author

Correspondence to Li-jun Zhu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
