Guaranteed cost control for switched recurrent neural networks with interval time-varying delay

Abstract

This paper studies the problem of guaranteed cost control for a class of switched recurrent neural networks with interval time-varying delay. The time delay is a continuous function belonging to a given interval, but it is not necessarily differentiable. A cost function is considered as a nonlinear performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some exponential stability constraints on the closed-loop poles. By constructing a set of augmented Lyapunov-Krasovskii functionals, a guaranteed cost controller is designed via memoryless state feedback control, a switching rule for the exponential stabilization of the system is designed via linear matrix inequalities, and new sufficient conditions for the existence of the guaranteed cost state feedback for the system are given in terms of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the obtained result.

1 Introduction

Stability and control of recurrent neural networks with time delay have attracted considerable attention in recent years [1–8]. In many practical systems, it is desirable to design neural networks which are not only asymptotically or exponentially stable but can also guarantee an adequate level of system performance. In the areas of control, signal processing, pattern recognition and image processing, delayed neural networks have many useful applications. Some of these applications require that the equilibrium points of the designed network be stable. In both biological and artificial neural systems, time delays due to integration and communication are ubiquitous and often become a source of instability. The time delays in electronic neural networks are usually time-varying, and sometimes vary violently with respect to time due to the finite switching speed of amplifiers and faults in the electrical circuitry. The guaranteed cost control problem [9–12] has the advantage of providing an upper bound on a given system performance index, so that the system performance degradation incurred by uncertainties or time delays is guaranteed to be less than this bound. The Lyapunov-Krasovskii functional technique has been among the most popular and effective tools in the design of guaranteed cost controls for neural networks with time delay. Nevertheless, despite such a diversity of available results, most existing works assume that the time delays are constant or differentiable [13–16]. Although delay-dependent guaranteed cost control for systems with time-varying delays was considered in [12, 13, 15], the approach used there cannot be applied to systems with interval, non-differentiable time-varying delays. To the best of our knowledge, guaranteed cost control and state feedback stabilization for switched recurrent neural networks with interval, non-differentiable time-varying delay have not been fully studied yet (see, e.g., [9–12, 15–25] and the references therein), although these problems are important in both theory and applications. This motivates our research.

In this paper, we investigate the guaranteed cost control problem for switched recurrent neural networks. The novel features here are that the delayed neural network under consideration has various globally Lipschitz continuous activation functions, and that the time-varying delay function belongs to a given interval and is not required to be differentiable. Specifically, our goal is to develop a constructive way to design a switching rule that exponentially stabilizes the system. A nonlinear cost function is considered as a performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some exponential stability constraints on the closed-loop poles. Based on constructing a set of augmented Lyapunov-Krasovskii functionals combined with the Newton-Leibniz formula, new delay-dependent criteria for guaranteed cost control via memoryless feedback control are established in terms of LMIs, which allow simultaneous computation of two bounds that characterize the exponential stability rate of the solution and can be easily determined by using MATLAB's LMI Control Toolbox.

The outline of the paper is as follows. Section 2 presents definitions and some well-known technical propositions needed for the proof of the main result. LMI delay-dependent criteria for guaranteed cost control and a numerical example showing the effectiveness of the result are presented in Section 3. The paper ends with conclusions and cited references.

2 Preliminaries

The following notation will be used in this paper. R_+ denotes the set of all real non-negative numbers; R^n denotes the n-dimensional Euclidean space with the scalar product ⟨x, y⟩ or x^T y of two vectors x, y, and the vector norm ‖·‖; M^{n×r} denotes the space of all matrices of dimension n×r. A^T denotes the transpose of a matrix A; A is symmetric if A = A^T; I denotes the identity matrix; λ(A) denotes the set of all eigenvalues of A; λ_max(A) = max{Re λ : λ ∈ λ(A)}. x_t := {x(t+s) : s ∈ [−h, 0]} and ‖x_t‖ = sup_{s ∈ [−h, 0]} ‖x(t+s)‖; C^1([0,t], R^n) denotes the set of all R^n-valued continuously differentiable functions on [0,t]; L_2([0,t], R^m) denotes the set of all R^m-valued square-integrable functions on [0,t].

A matrix A is called semi-positive definite (A ≥ 0) if ⟨Ax, x⟩ ≥ 0 for all x ∈ R^n; A is positive definite (A > 0) if ⟨Ax, x⟩ > 0 for all x ≠ 0; A > B means A − B > 0. The notation diag{…} stands for a block-diagonal matrix. The symmetric term in a matrix is denoted by ∗.

Consider the following switched recurrent neural networks with interval time-varying delay:

\[
\begin{aligned}
\dot{x}(t) &= -A_{\gamma(x(t))}x(t) + W_{0\gamma(x(t))}f\bigl(x(t)\bigr) + W_{1\gamma(x(t))}g\bigl(x(t-h(t))\bigr) + B_{\gamma(x(t))}u(t),\quad t \ge 0,\\
x(t) &= \phi(t),\quad t \in [-h_1, 0],
\end{aligned}
\]
(2.1)

where x(t) = [x_1(t), x_2(t), …, x_n(t)]^T ∈ R^n is the state of the neural network, u(·) ∈ L_2([0,t], R^m) is the control; n is the number of neurons, and

f ( x ( t ) ) = [ f 1 ( x 1 ( t ) ) , f 2 ( x 2 ( t ) ) , , f n ( x n ( t ) ) ] T , g ( x ( t ) ) = [ g 1 ( x 1 ( t ) ) , g 2 ( x 2 ( t ) ) , , g n ( x n ( t ) ) ] T ,

are the activation functions; γ(·): R^n → N := {1, 2, …, N} is the switching rule, which is a function depending on the state at each time and will be designed. A switching function is a rule which determines a switching sequence for a given switching system. Moreover, γ(x(t)) = j implies that the system realization is chosen as the jth subsystem, j = 1, 2, …, N. It is seen that system (2.1) can be viewed as an autonomous switched system in which the effective subsystem changes when the state x(t) hits predefined boundaries.
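To make the role of the switching rule concrete, the following is a minimal sketch, not taken from the paper, of how a state-dependent rule γ(·) that activates subsystem j when x(t) enters a predefined region could be coded; the region matrices M_j used here are hypothetical placeholders, not quantities prescribed by the theory below.

```python
# Illustrative sketch only: a state-dependent switching rule gamma(x) that
# partitions the state space into predefined regions and returns the index of
# the active subsystem. The quadratic-form test (largest x^T M_j x wins) is
# one common way to build such partitions; the matrices M_j are hypothetical.
import numpy as np

def make_switching_rule(region_matrices):
    """region_matrices: list of symmetric matrices M_1, ..., M_N defining regions."""
    def gamma(x):
        x = np.asarray(x, dtype=float)
        scores = [float(x @ M @ x) for M in region_matrices]
        return int(np.argmax(scores)) + 1      # subsystems are numbered 1..N
    return gamma

# Example with two hypothetical 2x2 region matrices:
gamma = make_switching_rule([np.diag([1.0, 0.2]), np.diag([0.2, 1.0])])
print(gamma([1.0, 0.1]))   # -> 1
print(gamma([0.1, 1.0]))   # -> 2
```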

A_j = diag(ā_{1j}, ā_{2j}, …, ā_{nj}), ā_{ij} > 0, represents the self-feedback term; B_j ∈ R^{n×m} are the control input matrices; W_{0j}, W_{1j} denote the connection weights and the delayed connection weights, respectively. The time-varying delay function h(t) satisfies the condition

\[
0 \le h_0 \le h(t) \le h_1.
\]

The initial function ϕ(t) ∈ C^1([−h_1, 0], R^n) is equipped with the norm

\[
\|\phi\| = \sup_{t \in [-h_1, 0]}\sqrt{\|\phi(t)\|^2 + \|\dot{\phi}(t)\|^2}.
\]

In this paper we consider various activation functions and assume that the activation functions f(), g() are Lipschitzian with the Lipschitz constants f i , e i >0:

\[
\begin{aligned}
|f_i(\xi_1) - f_i(\xi_2)| &\le f_i\,|\xi_1 - \xi_2|,\quad i = 1, 2, \dots, n,\ \forall \xi_1, \xi_2 \in \mathbb{R},\\
|g_i(\xi_1) - g_i(\xi_2)| &\le e_i\,|\xi_1 - \xi_2|,\quad i = 1, 2, \dots, n,\ \forall \xi_1, \xi_2 \in \mathbb{R}.
\end{aligned}
\]
(2.2)
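For readers who want to experiment numerically, the following small check illustrates condition (2.2) under the assumed (not prescribed by the paper) choice f_i(s) = tanh(f_i s), whose Lipschitz constant is exactly f_i; the constants used are hypothetical.

```python
# Minimal numerical check of the Lipschitz condition (2.2) for the assumed
# activation f_i(s) = tanh(f_i * s); the constants f_i below are hypothetical.
import numpy as np

f_const = np.array([0.3, 0.5])             # hypothetical Lipschitz constants f_i

def f(x):
    return np.tanh(f_const * x)            # componentwise activation

rng = np.random.default_rng(0)
xi1, xi2 = rng.normal(size=(2, 100, 2))    # random pairs of test points
lhs = np.abs(f(xi1) - f(xi2))
rhs = f_const * np.abs(xi1 - xi2)
assert np.all(lhs <= rhs + 1e-12), "Lipschitz bound (2.2) violated"
print("condition (2.2) holds on all sampled pairs")
```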

The performance index associated with system (2.1) is the following function:

\[
J = \int_0^{\infty} f_0\bigl(t, x(t), x(t-h(t)), u(t)\bigr)\,dt,
\]
(2.3)

where f_0(t, x(t), x(t−h(t)), u(t)): R_+ × R^n × R^n × R^m → R_+ is a nonlinear cost function satisfying

\[
\exists Q_1, Q_2, R:\quad f_0(t, x, y, u) \le \langle Q_1x, x\rangle + \langle Q_2y, y\rangle + \langle Ru, u\rangle
\]
(2.4)

for all (t, x, y, u) ∈ R_+ × R^n × R^n × R^m, where Q_1, Q_2 ∈ R^{n×n} and R ∈ R^{m×m} are given symmetric positive definite matrices. The objective of this paper is to design a memoryless state feedback controller u(t) = Kx(t) for system (2.1) and the cost function (2.3) such that the resulting closed-loop system

\[
\dot{x}(t) = -(A_j - B_jK)x(t) + W_{0j}f\bigl(x(t)\bigr) + W_{1j}g\bigl(x(t-h(t))\bigr)
\]
(2.5)

is exponentially stable and the closed-loop value of the cost function (2.3) is minimized.
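As a simple illustration of condition (2.4), the familiar quadratic running cost attains the bound with equality and is therefore one admissible choice of f_0; the sketch below uses hypothetical matrices, not data fixed by the paper.

```python
# Sketch: a quadratic running cost <Q1 x, x> + <Q2 y, y> + <R u, u> attains
# the bound (2.4) with equality, so it is one admissible nonlinear cost f0.
# The matrices below are hypothetical 2x2 / 1x1 examples.
import numpy as np

Q1 = np.array([[0.3, 0.2], [0.2, 0.7]])
Q2 = np.array([[0.4, 0.1], [0.1, 0.6]])
R  = np.array([[0.3]])

def f0(t, x, y, u):
    """Quadratic running cost; t is unused but kept to match f0(t, x, y, u)."""
    return x @ Q1 @ x + y @ Q2 @ y + u @ R @ u

x = np.array([1.0, -0.5]); y = np.array([0.2, 0.3]); u = np.array([0.1])
bound = x @ Q1 @ x + y @ Q2 @ y + u @ R @ u
assert abs(f0(0.0, x, y, u) - bound) < 1e-12   # equality, hence (2.4) holds
```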

Remark 2.1 It is worth noting that the time delay is a time-varying function belonging to a given interval, in which the lower bound of the delay is not restricted to zero; therefore, the stability criteria proposed in [4–7, 9–13, 15–18, 21–24] are not applicable to this system.

Remark 2.2 It is worth noting that the time delay is a time-varying function belonging to a given interval and the delay function h(t) is non-differentiable; therefore, the stability criteria proposed in [5, 6, 8, 10–12, 14–19, 22–25] are not applicable to this system.

Definition 2.1 Given α > 0. The zero solution of closed-loop system (2.5) is said to be α-exponentially stable if there exists a positive number N > 0 such that every solution x(t, ϕ) satisfies the following condition:

\[
\|x(t, \phi)\| \le N e^{-\alpha t}\|\phi\|,\quad \forall t \ge 0.
\]

Definition 2.2 Consider control system (2.1). If there exist a memoryless state feedback control law u^∗(t) = Kx(t) and a positive number J^∗ such that the zero solution of closed-loop system (2.5) is exponentially stable and the cost function (2.3) satisfies J ≤ J^∗, then the value J^∗ is a guaranteed cost and u^∗(t) is a guaranteed cost control law of the system and its corresponding cost function.

We introduce the following technical well-known propositions, which will be used in the proof of our results.

Proposition 2.1 (Schur complement lemma [26])

Given constant matrices X, Y, Z with appropriate dimensions satisfying X= X T , Y= Y T >0. Then X+ Z T Y 1 Z<0 if and only if

\[
\begin{pmatrix} X & Z^T \\ Z & -Y \end{pmatrix} < 0.
\]
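A quick numerical sanity check of Proposition 2.1, on randomly generated test matrices rather than quantities from the paper, can be written as follows.

```python
# Numerical check of the Schur complement lemma: X + Z^T Y^{-1} Z < 0 holds
# exactly when the block matrix [[X, Z^T], [Z, -Y]] is negative definite.
# All matrices here are random test data.
import numpy as np

rng = np.random.default_rng(1)
n = 3
Y = np.eye(n) + 0.1 * rng.standard_normal((n, n)); Y = Y @ Y.T               # Y = Y^T > 0
Z = 0.1 * rng.standard_normal((n, n))
X = -np.eye(n) + 0.05 * rng.standard_normal((n, n)); X = 0.5 * (X + X.T)     # X = X^T

def is_neg_def(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

lhs = is_neg_def(X + Z.T @ np.linalg.solve(Y, Z))
rhs = is_neg_def(np.block([[X, Z.T], [Z, -Y]]))
print(lhs, rhs)        # the two truth values agree, as the lemma asserts
assert lhs == rhs
```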

Proposition 2.2 (Integral matrix inequality [27])

For any symmetric positive definite matrix M > 0, scalar σ > 0 and vector function ω: [0, σ] → R^n such that the integrations concerned are well defined, the following inequality holds:

\[
\left(\int_0^{\sigma}\omega(s)\,ds\right)^T M\left(\int_0^{\sigma}\omega(s)\,ds\right) \le \sigma\int_0^{\sigma}\omega^T(s)M\omega(s)\,ds.
\]
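Proposition 2.2 can likewise be illustrated numerically on a hypothetical vector function ω and matrix M using a simple quadrature; this is only a demonstration on one example, not part of the proofs below.

```python
# Numerical illustration of the integral matrix inequality (Proposition 2.2)
# with a hypothetical omega(s) and M, using a Riemann-sum quadrature.
import numpy as np

M = np.array([[2.0, 0.3], [0.3, 1.0]])                 # symmetric positive definite
sigma = 1.5
s = np.linspace(0.0, sigma, 2001)
ds = s[1] - s[0]
omega = np.vstack([np.sin(3 * s), np.cos(2 * s)]).T    # omega(s) in R^2

integral_omega = omega.sum(axis=0) * ds                # approx. of int_0^sigma omega(s) ds
lhs = integral_omega @ M @ integral_omega
rhs = sigma * np.einsum('ij,jk,ik->', omega, M, omega) * ds
print(lhs, rhs)
assert lhs <= rhs + 1e-9
```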

3 Design of guaranteed cost controller

In this section, we present a design of memoryless guaranteed cost feedback control for neural network (2.1). Let us set

w 11 = [ P + α I ] A j A j T [ P + α I ] 2 B j B j T + 0.25 B j R B j T + i = 0 1 G i , w 12 = P + A j P + 0.5 B j B j T , w 13 = e 2 α h 0 H 0 + 0.5 B j B j T + A j P , w 14 = 2 e 2 α h 1 H 1 + 0.5 B j B j T + A j P , w 15 = P 0.5 B j B j T + A j P , w 22 = i = 0 1 W i j D i W i j T + i = 0 1 h i 2 H i + ( h 1 h 0 ) U 2 P B j B j T , w 23 = P , w 24 = P , w 25 = P , w 33 = e 2 α h 0 G 0 e 2 α h 0 H 0 e 2 α h 1 U + i = 0 1 W i j D i W i j T , w 34 = 0 , w 35 = 2 α h 1 U , w 44 = i = 0 1 W i j D i W i j T e 2 α h 1 U e 2 α h 1 G 1 e 2 α h 1 H 1 , w 45 = e 2 α h 1 U , w 55 = e 2 α h 1 U + W 0 j D 0 W 0 j T , E = diag { e i , i = 1 , , n } , F = diag { f i , i = 1 , , n } , λ 1 = λ min ( P 1 ) , λ 2 = λ max ( P 1 ) + h 0 λ max [ P 1 ( i = 0 1 G i ) P 1 ] λ 2 = + h 1 2 λ max [ P 1 ( i = 0 1 H i ) P 1 ] + ( h 1 h 0 ) λ max ( P 1 U P 1 ) .

Theorem 3.1 Consider control system (2.1) and the cost function (2.3). If there exist symmetric positive definite matrices P, U, G 0 , G 1 , H 0 , H 1 and diagonal positive definite matrices D i , i=0,1, satisfying the following LMIs:

\[
E_j = \begin{bmatrix}
w_{11} & w_{12} & w_{13} & w_{14} & w_{15}\\
\ast & w_{22} & w_{23} & w_{24} & w_{25}\\
\ast & \ast & w_{33} & w_{34} & w_{35}\\
\ast & \ast & \ast & w_{44} & w_{45}\\
\ast & \ast & \ast & \ast & w_{55}
\end{bmatrix} < 0,\quad j = 1, 2, \dots, N,
\]
(3.1)
S 1 j =[ P A j A j T P i = 0 1 e 2 α h i H i 2 P F P Q 1 D 0 0 Q 1 1 ]<0,j=1,2,,N,
(3.2)
S 2 j =[ W 1 j D 1 W 1 j T e 2 α h 1 U 2 P E P Q 2 D 1 0 Q 2 1 ]<0,j=1,2,,N,
(3.3)

then

\[
u_j(t) = -\tfrac{1}{2}B_j^T P^{-1}x(t),\quad t \ge 0,\ j = 1, 2, \dots, N,
\]
(3.4)

is a guaranteed cost control and the guaranteed cost value is given by

\[
J^* = \lambda_2\|\phi\|^2.
\]

The switching rule is chosen as γ(x(t))=j. Moreover, the solution x(t,ϕ) of the system satisfies

\[
\|x(t, \phi)\| \le \sqrt{\frac{\lambda_2}{\lambda_1}}\,e^{-\alpha t}\|\phi\|,\quad t \ge 0.
\]
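Before turning to the proof, the following sketch indicates how, once matrices satisfying (3.1)-(3.3) are available, the gains in (3.4) and the decay estimate √(λ_2/λ_1) could be evaluated numerically; the helper names and the matrices fed in below are illustrative placeholders, since in practice these quantities come from an LMI solver (see Example 3.1).

```python
# Sketch: evaluating the controller gains (3.4) and lambda_1, lambda_2 (as
# defined before Theorem 3.1) from candidate matrices P, G_i, H_i, U.
# The inputs used at the bottom are placeholder data, not a verified solution.
import numpy as np

def feedback_gains(P, B_list):
    """Gains K_j of u_j(t) = -0.5 * B_j^T * P^{-1} * x(t)."""
    P_inv = np.linalg.inv(P)
    return [-0.5 * B.T @ P_inv for B in B_list]

def decay_constants(P, G0, G1, H0, H1, U, h0, h1):
    """lambda_1 and lambda_2 as defined before Theorem 3.1."""
    P_inv = np.linalg.inv(P)
    lam_max = lambda M: np.linalg.eigvalsh(0.5 * (M + M.T)).max()
    lam1 = np.linalg.eigvalsh(P_inv).min()
    lam2 = (np.linalg.eigvalsh(P_inv).max()
            + h0 * lam_max(P_inv @ (G0 + G1) @ P_inv)
            + h1 ** 2 * lam_max(P_inv @ (H0 + H1) @ P_inv)
            + (h1 - h0) * lam_max(P_inv @ U @ P_inv))
    return lam1, lam2

# Example usage with placeholder data:
P = np.diag([1.5, 2.2]); U = G0 = G1 = H0 = H1 = np.eye(2)
B_list = [np.array([[0.1], [0.2]]), np.array([[0.2], [0.3]])]
K1, K2 = feedback_gains(P, B_list)
lam1, lam2 = decay_constants(P, G0, G1, H0, H1, U, h0=0.1, h1=1.3652)
print(K1, K2, np.sqrt(lam2 / lam1))
```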

Proof Let Y = P^{-1} and y(t) = Yx(t). Using the feedback control (3.4), the closed-loop system takes the form (2.5). We consider the following Lyapunov-Krasovskii functional:

\[
\begin{aligned}
V(t, x_t) &= \sum_{i=1}^{6} V_i(t, x_t),\\
V_1 &= x^T(t)Yx(t),\\
V_2 &= \int_{t-h_0}^{t} e^{2\alpha(s-t)} x^T(s) Y G_0 Y x(s)\,ds,\\
V_3 &= \int_{t-h_1}^{t} e^{2\alpha(s-t)} x^T(s) Y G_1 Y x(s)\,ds,\\
V_4 &= h_0 \int_{-h_0}^{0}\int_{t+s}^{t} e^{2\alpha(\tau-t)} \dot{x}^T(\tau) Y H_0 Y \dot{x}(\tau)\,d\tau\,ds,\\
V_5 &= h_1 \int_{-h_1}^{0}\int_{t+s}^{t} e^{2\alpha(\tau-t)} \dot{x}^T(\tau) Y H_1 Y \dot{x}(\tau)\,d\tau\,ds,\\
V_6 &= (h_1-h_0) \int_{-h_1}^{-h_0}\int_{t+s}^{t} e^{2\alpha(\tau-t)} \dot{x}^T(\tau) Y U Y \dot{x}(\tau)\,d\tau\,ds.
\end{aligned}
\]

It is easy to check that

\[
\lambda_1\|x(t)\|^2 \le V(t, x_t) \le \lambda_2\|x_t\|^2,\quad \forall t \ge 0.
\]
(3.5)

Taking the derivatives of V_i along the solution of the closed-loop system, we have

\[
\begin{aligned}
\dot{V}_1 &= 2x^T(t)Y\dot{x}(t) = -y^T(t)\bigl[PA_j^T + A_jP\bigr]y(t) - y^T(t)B_jB_j^Ty(t) + 2y^T(t)W_{0j}f(\cdot) + 2y^T(t)W_{1j}g(\cdot),\\
\dot{V}_2 &= y^T(t)G_0y(t) - e^{-2\alpha h_0}y^T(t-h_0)G_0y(t-h_0) - 2\alpha V_2,\\
\dot{V}_3 &= y^T(t)G_1y(t) - e^{-2\alpha h_1}y^T(t-h_1)G_1y(t-h_1) - 2\alpha V_3,\\
\dot{V}_4 &\le h_0^2\dot{y}^T(t)H_0\dot{y}(t) - h_0e^{-2\alpha h_0}\int_{t-h_0}^{t}\dot{y}^T(s)H_0\dot{y}(s)\,ds - 2\alpha V_4,\\
\dot{V}_5 &\le h_1^2\dot{y}^T(t)H_1\dot{y}(t) - h_1e^{-2\alpha h_1}\int_{t-h_1}^{t}\dot{y}^T(s)H_1\dot{y}(s)\,ds - 2\alpha V_5,\\
\dot{V}_6 &\le (h_1-h_0)^2\dot{y}^T(t)U\dot{y}(t) - (h_1-h_0)e^{-2\alpha h_1}\int_{t-h_1}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds - 2\alpha V_6.
\end{aligned}
\]

Applying Proposition 2.2 and the Leibniz-Newton formula

\[
\int_s^t \dot{y}(\tau)\,d\tau = y(t) - y(s),
\]

we have, for i = 0, 1,

\[
\begin{aligned}
h_i\int_{t-h_i}^{t}\dot{y}^T(s)H_i\dot{y}(s)\,ds &\ge \left[\int_{t-h_i}^{t}\dot{y}(s)\,ds\right]^T H_i\left[\int_{t-h_i}^{t}\dot{y}(s)\,ds\right]\\
&= \bigl[y(t) - y(t-h_i)\bigr]^T H_i\bigl[y(t) - y(t-h_i)\bigr].
\end{aligned}
\]
(3.6)

Note that

\[
\int_{t-h_1}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds = \int_{t-h_1}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds + \int_{t-h(t)}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds.
\]

Applying Proposition 2.2 gives

\[
\begin{aligned}
\bigl[h_1 - h(t)\bigr]\int_{t-h_1}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds &\ge \left[\int_{t-h_1}^{t-h(t)}\dot{y}(s)\,ds\right]^T U\left[\int_{t-h_1}^{t-h(t)}\dot{y}(s)\,ds\right]\\
&= \bigl[y(t-h(t)) - y(t-h_1)\bigr]^T U\bigl[y(t-h(t)) - y(t-h_1)\bigr].
\end{aligned}
\]

Since h_1 − h(t) ≤ h_1 − h_0, we then have
\[
[h_1 - h_0]\int_{t-h_1}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds \ge \bigl[y(t-h(t)) - y(t-h_1)\bigr]^T U\bigl[y(t-h(t)) - y(t-h_1)\bigr].
\]

Similarly, we have

\[
(h_1 - h_0)\int_{t-h(t)}^{t-h_0}\dot{y}^T(s)U\dot{y}(s)\,ds \ge \bigl[y(t-h_0) - y(t-h(t))\bigr]^T U\bigl[y(t-h_0) - y(t-h(t))\bigr].
\]

Then we have

\[
\begin{aligned}
\dot{V}(\cdot) + 2\alpha V(\cdot) \le{}& -y^T(t)\bigl[PA_j^T + A_jP\bigr]y(t) - y^T(t)B_jB_j^Ty(t) + 2y^T(t)W_{0j}f(\cdot) + 2y^T(t)W_{1j}g(\cdot)\\
&+ y^T(t)\Bigl(\sum_{i=0}^{1}G_i\Bigr)y(t) + 2\alpha\bigl\langle Py(t), y(t)\bigr\rangle + \dot{y}^T(t)\Bigl(\sum_{i=0}^{1}h_i^2H_i\Bigr)\dot{y}(t) + (h_1-h_0)\dot{y}^T(t)U\dot{y}(t)\\
&- \sum_{i=0}^{1}e^{-2\alpha h_i}y^T(t-h_i)G_iy(t-h_i) - e^{-2\alpha h_0}\bigl[y(t)-y(t-h_0)\bigr]^TH_0\bigl[y(t)-y(t-h_0)\bigr]\\
&- e^{-2\alpha h_1}\bigl[y(t)-y(t-h_1)\bigr]^TH_1\bigl[y(t)-y(t-h_1)\bigr]\\
&- e^{-2\alpha h_1}\bigl[y(t-h(t))-y(t-h_1)\bigr]^TU\bigl[y(t-h(t))-y(t-h_1)\bigr]\\
&- e^{-2\alpha h_1}\bigl[y(t-h_0)-y(t-h(t))\bigr]^TU\bigl[y(t-h_0)-y(t-h(t))\bigr].
\end{aligned}
\]
(3.7)

Using equation (2.5)

\[
P\dot{y}(t) + A_jPy(t) - W_{0j}f(\cdot) - W_{1j}g(\cdot) + 0.5B_jB_j^Ty(t) = 0,
\]

and multiplying both sides by [ 2 y ( t ) , 2 y ˙ ( t ) , 2 y ( t h 0 ) , 2 y ( t h 1 ) , 2 y ( t h ( t ) ) ] T , we have

2 y T ( t ) P y ˙ ( t ) + 2 y T ( t ) A j P y ( t ) 2 y T ( t ) W 0 j f ( ) 2 y T ( t ) W 1 j g ( ) + y T ( t ) B j B j T y ( t ) = 0 , 2 y ˙ T ( t ) P y ˙ ( t ) 2 y ˙ T ( t ) A j P y ( t ) + 2 y ˙ T ( t ) W 0 j f ( ) + 2 y ˙ T ( t ) W 1 j g ( ) y ˙ T ( t ) B j B j T y ( t ) = 0 , 2 y T ( t h 0 ) P y ˙ ( t ) + 2 y T ( t h 0 ) A j P y ( t ) 2 y T ( t h 0 ) W 0 j f ( ) 2 y T ( t h 0 ) W 1 j g ( ) + y T ( t h 0 ) B j B j T y ( t ) = 0 , 2 y T ( t h 1 ) P y ˙ ( t ) + 2 y T ( t h 1 ) A j P y ( t ) 2 y T ( t h 1 ) W 0 j f ( ) 2 y T ( t h 1 ) W 1 j g ( ) + y T ( t h 1 ) B j B j T y ( t ) = 0 , 2 y T ( t h ( t ) ) P y ˙ ( t ) + 2 y T ( t h ( t ) ) A j P y ( t ) 2 y T ( t h ( t ) ) W 0 j f ( ) 2 y T ( t h ( t ) ) W 1 j g ( ) + 2 y T ( t h ( t ) ) B j B j T y ( t ) = 0 .
(3.8)

Adding all the zero terms of (3.8) and f_0(t, x(t), x(t−h(t)), u(t)) − f_0(t, x(t), x(t−h(t)), u(t)) = 0 into (3.7), and using condition (2.4) for the following estimates:

f 0 ( t , x ( t ) , x ( t h ( t ) ) , u ( t ) ) Q 1 x ( t ) , x ( t ) + Q 2 x ( t h ( t ) ) , x ( t h ( t ) ) f 0 ( t , x ( t ) , x ( t h ( t ) ) , u ( t ) ) + R u ( t ) , u ( t ) f 0 ( t , x ( t ) , x ( t h ( t ) ) , u ( t ) ) = P Q 1 P y ( t ) , y ( t ) + P Q 2 P y ( t h ( t ) ) , y ( t h ( t ) ) f 0 ( t , x ( t ) , x ( t h ( t ) ) , u ( t ) ) + 0.25 B j R B j T y ( t ) , y ( t ) , 2 W 0 j f ( x ) , y W 0 j D 0 W 0 j T y , y + D 0 1 f ( x ) , f ( x ) , 2 W 1 j g ( z ) , y W 1 j D 1 W 1 j T y , y + D 1 1 g ( z ) , g ( z ) , 2 D 0 1 f ( x ) , f ( x ) F D 0 1 F x , x , 2 D 1 1 g ( z ) , g ( z ) E D 1 1 E z , z ,

we obtain

\[
\dot{V}(\cdot) + 2\alpha V(\cdot) \le \zeta^T(t)E_j\zeta(t) + y^T(t)S_{1j}y(t) + y^T(t-h(t))S_{2j}y(t-h(t)) - f_0\bigl(t, x(t), x(t-h(t)), u(t)\bigr),
\]
(3.9)

where ζ(t)=[y(t), y ˙ (t),y(t h 0 ),y(t h 1 ),y(th(t))], and

E j = [ w 11 w 12 w 13 w 14 w 15 w 22 w 23 w 24 w 25 w 33 w 34 w 35 w 44 w 45 w 55 ] , S 1 j = P A j A j T P i = 0 1 e 2 α h i H i + 4 P F D 0 1 F P + P Q 1 P , S 2 j = W 1 j D 1 W 1 j T e 2 α h 2 U + 4 P E D 1 1 E P + P Q 2 P .

Note that by the Schur complement lemma, Proposition 2.1, the conditions S 1 j <0 and S 2 j <0 are equivalent to the conditions (3.2) and (3.3), respectively. Therefore, by conditions (3.1), (3.2), (3.3), we obtain from (3.9) that

\[
\dot{V}(t, x_t) \le -2\alpha V(t, x_t),\quad \forall t \ge 0.
\]
(3.10)

Integrating both sides of (3.10) from 0 to t, we obtain

\[
V(t, x_t) \le V(\phi)e^{-2\alpha t},\quad \forall t \ge 0.
\]

Furthermore, taking condition (3.5) into account, we have

\[
\lambda_1\|x(t, \phi)\|^2 \le V(x_t) \le V(\phi)e^{-2\alpha t} \le \lambda_2 e^{-2\alpha t}\|\phi\|^2,
\]

then

\[
\|x(t, \phi)\| \le \sqrt{\frac{\lambda_2}{\lambda_1}}\,e^{-\alpha t}\|\phi\|,\quad t \ge 0,
\]

which establishes the exponential stability of closed-loop system (2.5). To prove the guaranteed cost level of the cost function (2.3), we derive from (3.9) and (3.1)-(3.3) that

\[
\dot{V}(t, x_t) \le -f_0\bigl(t, x(t), x(t-h(t)), u(t)\bigr),\quad t \ge 0.
\]
(3.11)

Integrating both sides of (3.11) from 0 to t leads to

\[
\int_0^t f_0\bigl(s, x(s), x(s-h(s)), u(s)\bigr)\,ds \le V(0, x_0) - V(t, x_t) \le V(0, x_0),
\]

due to V(t, x_t) ≥ 0. Hence, letting t → +∞, we have

\[
J = \int_0^{\infty} f_0\bigl(t, x(t), x(t-h(t)), u(t)\bigr)\,dt \le V(0, x_0) \le \lambda_2\|\phi\|^2 = J^*.
\]

This completes the proof of the theorem. □

Example 3.1 Consider the switched recurrent neural networks with interval time-varying delays (2.1), where

\[
\begin{aligned}
&A_1 = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.3 \end{bmatrix},\quad A_2 = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.4 \end{bmatrix},\quad W_{01} = \begin{bmatrix} 0.1 & 0.3 \\ 0.2 & 0.8 \end{bmatrix},\quad W_{02} = \begin{bmatrix} 0.7 & 0.3 \\ 0.4 & 0.9 \end{bmatrix},\\
&W_{11} = \begin{bmatrix} 0.4 & 0.2 \\ 0.3 & 0.3 \end{bmatrix},\quad W_{12} = \begin{bmatrix} 0.2 & 0.3 \\ 0.1 & 0.4 \end{bmatrix},\quad B_1 = \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix},\quad B_2 = \begin{bmatrix} 0.2 \\ 0.3 \end{bmatrix},\\
&E = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.4 \end{bmatrix},\quad F = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.5 \end{bmatrix},\quad Q_1 = \begin{bmatrix} 0.3 & 0.2 \\ 0.2 & 0.7 \end{bmatrix},\quad Q_2 = \begin{bmatrix} 0.4 & 0.1 \\ 0.1 & 0.6 \end{bmatrix},\quad R = \begin{bmatrix} 0.3 & 0.3 \\ 0.3 & 0.9 \end{bmatrix},\\
&h(t) = \begin{cases} 0.1 + 1.2652\sin^2 t, & t \in I = \bigcup_{k \ge 0}\,[2k\pi, (2k+1)\pi],\\ 0, & t \in \mathbb{R}_+ \setminus I. \end{cases}
\end{aligned}
\]

Note that h(t) is non-differentiable; therefore, the stability criteria proposed in [4–7, 9–13, 15–18, 21–24] are not applicable to this system. Given α = 0.3, h_0 = 0.1, h_1 = 1.3652, by using the MATLAB LMI Control Toolbox, we can solve for P, U, G_0, G_1, H_0, H_1, D_0, and D_1 satisfying the conditions (3.1)-(3.3) in Theorem 3.1.
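For readers without access to MATLAB's LMI Control Toolbox, the same feasibility workflow can be reproduced with an open-source SDP solver; the sketch below only poses a simple Lyapunov-type LMI to illustrate the mechanics of declaring matrix variables and definiteness constraints, not the full conditions (3.1)-(3.3), and the test matrix used is hypothetical.

```python
# Illustrative LMI feasibility sketch with cvxpy: find X = X^T > 0 such that
# A^T X + X A + Q < 0 for a stable test matrix A. This is not the paper's
# conditions (3.1)-(3.3); it only shows the general SDP workflow.
import cvxpy as cp
import numpy as np

A = np.array([[-0.1, 0.0], [0.0, -0.3]])     # hypothetical stable test matrix
Q = np.eye(2)
eps = 1e-6

X = cp.Variable((2, 2), symmetric=True)
S = cp.Variable((2, 2), symmetric=True)
constraints = [
    X >> eps * np.eye(2),                    # X positive definite
    S == A.T @ X + X @ A + Q,                # S holds the Lyapunov expression
    S << -eps * np.eye(2),                   # ... and must be negative definite
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                           # 'optimal' means the LMI is feasible
print(X.value)
```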

A set of solutions is as follows:

\[
\begin{aligned}
&P = \begin{bmatrix} 1.5219 & 0.3659 \\ 0.3659 & 2.2398 \end{bmatrix},\quad U = \begin{bmatrix} 3.1239 & 0.2365 \\ 0.2365 & 3.0123 \end{bmatrix},\quad G_0 = \begin{bmatrix} 1.3225 & 0.0258 \\ 0.0258 & 1.2698 \end{bmatrix},\quad G_1 = \begin{bmatrix} 2.2368 & 0.0148 \\ 0.0148 & 3.1121 \end{bmatrix},\\
&H_0 = \begin{bmatrix} 2.2189 & 0.1238 \\ 0.1238 & 1.2368 \end{bmatrix},\quad H_1 = \begin{bmatrix} 2.3225 & 0.0369 \\ 0.0369 & 2.1897 \end{bmatrix},\quad D_0 = \begin{bmatrix} 2.9870 & 0 \\ 0 & 3.2589 \end{bmatrix},\quad D_1 = \begin{bmatrix} 3.2698 & 0 \\ 0 & 4.3258 \end{bmatrix}.
\end{aligned}
\]

Then

u 1 ( t ) = 0.2579 x 1 ( t ) + 0.2589 x 2 ( t ) , t 0 , u 2 ( t ) = 0.1397 x 1 ( t ) + 0.2176 x 2 ( t ) , t 0 ,

are a guaranteed cost control law, and the guaranteed cost value is given by

\[
J^* = 1.1268\|\phi\|^2.
\]

Moreover, the solution x(t,ϕ) of the system satisfies

\[
\|x(t, \phi)\| \le 2.3257 e^{-0.3t}\|\phi\|,\quad \forall t \ge 0.
\]

The trajectories of the solution of the switched recurrent neural network are shown in Figure 1.

Figure 1. The simulation of the solutions x_1(t) and x_2(t) with the initial condition ϕ(t) = [10  5]^T, t ∈ [0, 10].
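Trajectories of the kind shown in Figure 1 can be reproduced with a straightforward forward-Euler simulation of the closed-loop system using the data of Example 3.1; since the example does not fix the activation functions or the switching signal explicitly, the sketch below assumes tanh-type activations and a simple alternating switching signal, both of which are assumptions for illustration only.

```python
# Sketch: forward-Euler simulation of the closed-loop switched system (2.1)
# with the data of Example 3.1. Assumptions (not fixed by the example):
# f(x) = tanh(F x), g(x) = tanh(E x), and an alternating switching signal.
import numpy as np

A = [np.diag([0.1, 0.3]), np.diag([0.2, 0.4])]
W0 = [np.array([[0.1, 0.3], [0.2, 0.8]]), np.array([[0.7, 0.3], [0.4, 0.9]])]
W1 = [np.array([[0.4, 0.2], [0.3, 0.3]]), np.array([[0.2, 0.3], [0.1, 0.4]])]
B = [np.array([0.1, 0.2]), np.array([0.2, 0.3])]
K = [np.array([0.2579, 0.2589]), np.array([0.1397, 0.2176])]   # gains from Example 3.1
E = np.diag([0.2, 0.4]); F = np.diag([0.3, 0.5])

def h(t):                                     # interval, non-differentiable delay
    return 0.1 + 1.2652 * np.sin(t) ** 2 if (t // np.pi) % 2 == 0 else 0.0

dt, T, h1 = 1e-3, 10.0, 1.3652
n_hist = int(h1 / dt) + 1
ts = np.arange(-n_hist, int(T / dt) + 1) * dt
xs = np.zeros((len(ts), 2))
xs[ts <= 0] = np.array([10.0, 5.0])           # constant initial function phi

for k in range(n_hist, len(ts) - 1):
    t, x = ts[k], xs[k]
    j = int(t) % 2                            # assumed alternating switching signal
    x_del = xs[k - int(round(h(t) / dt))]     # delayed state x(t - h(t))
    u = K[j] @ x                              # memoryless state feedback
    dx = -A[j] @ x + W0[j] @ np.tanh(F @ x) + W1[j] @ np.tanh(E @ x_del) + B[j] * u
    xs[k + 1] = x + dt * dx

print(xs[-1])                                 # final state at t = 10
```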

4 Conclusions

In this paper, the problem of guaranteed cost control for switched recurrent neural networks with interval, non-differentiable time-varying delay has been studied. A nonlinear cost function is considered as a performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some exponential stability constraints on the closed-loop poles. By constructing a set of augmented Lyapunov-Krasovskii functionals, a switching rule for the exponential stabilization of the system is designed via linear matrix inequalities. A memoryless state feedback guaranteed cost controller design has been presented, and sufficient conditions for the existence of the guaranteed cost state feedback for the system have been derived in terms of LMIs.

References

1. Hopfield JJ: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79: 2554–2558. 10.1073/pnas.79.8.2554
2. Kevin G: An Introduction to Neural Networks. CRC Press, Boca Raton; 1997.
3. Wu M, He Y, She JH: Stability Analysis and Robust Control of Time-Delay Systems. Springer, Berlin; 2010.
4. Arik S: An improved global stability result for delayed cellular neural networks. IEEE Trans. Circuits Syst. 2002, 499: 1211–1218.
5. He Y, Wang QG, Wu M: LMI-based stability criteria for neural networks with multiple time-varying delays. Physica D 2005, 112: 126–131.
6. Kwon OM, Park JH: Exponential stability analysis for uncertain neural networks with interval time-varying delays. Appl. Math. Comput. 2009, 212: 530–541. 10.1016/j.amc.2009.02.043
7. Phat VN, Trinh H: Exponential stabilization of neural networks with various activation functions and mixed time-varying delays. IEEE Trans. Neural Netw. 2010, 21: 1180–1185.
8. Rajchakit M, Rajchakit G: LMI approach to robust stability and stabilization of nonlinear uncertain discrete-time systems with convex polytopic uncertainties. Adv. Differ. Equ. 2012, 2012: Article ID 106.
9. Botmart T, Niamsup P: Robust exponential stability and stabilizability of linear parameter dependent systems with delays. Appl. Math. Comput. 2010, 217: 2551–2566. 10.1016/j.amc.2010.07.068
10. Chen WH, Guan ZH, Lu X: Delay-dependent output feedback guaranteed cost control for uncertain time-delay systems. Automatica 2004, 40: 1263–1268. 10.1016/j.automatica.2004.02.003
11. Rajchakit M, Niamsup P, Rajchakit G: A switching rule for exponential stability of switched recurrent neural networks with interval time-varying delay. Adv. Differ. Equ. 2013, 2013: Article ID 44. doi:10.1186/1687-1847-2013-44
12. Ratchagit K: A switching rule for the asymptotic stability of discrete-time systems with convex polytopic uncertainties. Asian-Eur. J. Math. 2012, 5: Article ID 1250025.
13. Rajchakit M, Rajchakit G: Mean square exponential stability of stochastic switched system with interval time-varying delays. Abstr. Appl. Anal. 2012, 2012: Article ID 623014. doi:10.1155/2012/623014
14. Palarkci MN: Robust delay-dependent guaranteed cost controller design for uncertain neutral systems. Appl. Math. Comput. 2009, 215: 2939–2946.
15. Park JH, Kwon OM: On guaranteed cost control of neutral systems by retarded integral state feedback. Appl. Math. Comput. 2005, 165: 393–404. 10.1016/j.amc.2004.06.019
16. Rajchakit G, Rojsiraphisal T, Rajchakit M: Robust stability and stabilization of uncertain switched discrete-time systems. Adv. Differ. Equ. 2012, 2012: Article ID 134. doi:10.1186/1687-1847-2012-134
17. Park JH, Choi K: Guaranteed cost control of nonlinear neutral systems via memory state feedback. Chaos Solitons Fractals 2005, 24: 183–190.
18. Rajchakit M, Rajchakit G: Mean square robust stability of stochastic switched discrete-time systems with convex polytopic uncertainties. J. Inequal. Appl. 2012, 2012: Article ID 135. doi:10.1186/1029-242X-2012-135
19. Fridman E, Orlov Y: Exponential stability of linear distributed parameter systems with time-varying delays. Automatica 2009, 45: 194–201. 10.1016/j.automatica.2008.06.006
20. Xu S, Lam J: A survey of linear matrix inequality techniques in stability analysis of delay systems. Int. J. Syst. Sci. 2008, 39(12): 1095–1113. 10.1080/00207720802300370
21. Xie JS, Fan BQ, Young SL, Yang J: Guaranteed cost controller design of networked control systems with state delay. Acta Autom. Sin. 2007, 33: 170–174.
22. Yu L, Gao F: Optimal guaranteed cost control of discrete-time uncertain systems with both state and input delays. J. Franklin Inst. 2001, 338: 101–110. 10.1016/S0016-0032(00)00073-9
23. Phat VN, Ratchagit K: Stability and stabilization of switched linear discrete-time systems with interval time-varying delay. Nonlinear Anal. Hybrid Syst. 2011, 5: 605–612. 10.1016/j.nahs.2011.05.006
24. Ratchagit K, Phat VN: Stability criterion for discrete-time systems. J. Inequal. Appl. 2010, 2010: Article ID 201459.
25. Phat VN, Kongtham Y, Ratchagit K: LMI approach to exponential stability of linear systems with interval time-varying delays. Linear Algebra Appl. 2012, 436: 243–251. 10.1016/j.laa.2011.07.016
26. Boyd S, El Ghaoui L, Feron E, Balakrishnan V: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia; 1994.
27. Gu K, Kharitonov V, Chen J: Stability of Time-Delay Systems. Birkhäuser, Berlin; 2003.

Acknowledgements

This work was supported by the Office of Agricultural Research and Extension Maejo University, the Thailand Research Fund Grant, the Higher Education Commission and Faculty of Science, Maejo University, Thailand. The first author is supported by the Center of Excellence in Mathematics, Thailand, and Commission for Higher Education, Thailand. The authors thank anonymous reviewers for valuable comments and suggestions, which allowed us to improve the paper.

Author information

Correspondence to Grienggrai Rajchakit.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors contributed equally and significantly in writing this paper. The authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Niamsup, P., Rajchakit, M. & Rajchakit, G. Guaranteed cost control for switched recurrent neural networks with interval time-varying delay. J Inequal Appl 2013, 292 (2013). https://doi.org/10.1186/1029-242X-2013-292