Asymptotic normality of Huber-Dutter estimators in a linear EV model with AR(1) processes

Abstract

This paper studies a linear errors-in-variables model whose errors follow a first order autoregressive (AR(1)) process. Huber-Dutter (HD) estimators of the unknown parameters are constructed, and the asymptotic normality of the HD estimators is established. Finally, a simple numerical example illustrates the estimation method.

MSC: 60F05, 60G10, 62F35, 62M10, 60G42.

1 Introduction

Consider the following linear errors-in-variables (EV) model:

$$ \begin{cases} y_t = x_t^T\beta + \varepsilon_t, \\ X_t = x_t + \zeta_t, \end{cases} \qquad t = 1, 2, \dots, n, $$
(1.1)

where the superscript $T$ denotes the transpose throughout the paper, $\{y_t, t=1,2,\dots,n\}$ are scalar response variables, $\{X_t=(X_{t1},X_{t2},\dots,X_{td})^T, t=1,2,\dots,n\}$ and $\{x_t=(x_{t1},x_{t2},\dots,x_{td})^T, t=1,2,\dots,n\}$ are observable and unobservable random variables, respectively, $\beta=(\beta_1,\dots,\beta_d)^T$ is a vector of $d$ unknown parameters, $\{\zeta_t\}$ are independent and identically distributed (i.i.d.) measurement errors with $E\zeta_t=0$ and $\operatorname{Var}(\zeta_t)=\sigma_\zeta^2 I_d$, $\{\zeta_t\}$ and $\{\varepsilon_t\}$ are independent, $\{(\varepsilon_t,\zeta_t^T)^T\}$ and $\{x_t\}$ are independent, and $\{\varepsilon_t, t=1,2,\dots,n\}$ is a first order autoregressive (AR(1)) process,

$$ \varepsilon_1 = \eta_1, \qquad \varepsilon_t = a\,\varepsilon_{t-1} + \eta_t, \qquad t = 2, 3, \dots, n, $$
(1.2)

where $\{\eta_t, t=1,2,\dots,n\}$ are i.i.d. random errors with zero mean and finite variance $\sigma^2>0$, and $-\infty<a<\infty$ is a one-dimensional unknown parameter. A common assumption is that the ratio of the error variances $\lambda=\sigma^2/\sigma_\zeta^2$ is known; this is assumed throughout the paper, and all variables are assumed to be scaled so that $\lambda=1$.
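For concreteness, the following minimal sketch simulates data from (1.1)-(1.2). The Gaussian choices for $\eta_t$ and $\zeta_t$, the dimension $d=1$, and the cosine design for $x_t$ are illustrative assumptions only, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ev_ar1(n=200, beta=3.0, a=0.5, sigma=1.0, sigma_zeta=1.0):
    t = np.arange(1, n + 1)
    x = 10 * np.cos(2 * np.pi * t / 50)        # unobservable true covariate (d = 1, assumed design)
    eta = rng.normal(0.0, sigma, n)            # i.i.d. innovations eta_t
    eps = np.zeros(n)                          # AR(1) errors from (1.2)
    eps[0] = eta[0]
    for i in range(1, n):
        eps[i] = a * eps[i - 1] + eta[i]
    zeta = rng.normal(0.0, sigma_zeta, n)      # measurement errors zeta_t
    y = x * beta + eps                         # responses from (1.1)
    X = x + zeta                               # observed covariate
    return y, X

y, X = simulate_ev_ar1()
print(y[:3], X[:3])
```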

The linear errors-in-variables model (1.1) with AR(1) errors (1.2) includes three important special models: (1) an ordinary linear regression model with AR(1) errors (when $\zeta_t=0$; see, e.g., Hu [1], Maller [2], Pere [3], and Fuller [4]); (2) an ordinary linear errors-in-variables model (when $a=0$; see, e.g., Miao and Liu [5], Miao et al. [6, 7], Liu and Chen [8], Cui [9], Cui and Chen [10], Cheng and Van Ness [11]); (3) an autoregressive process (when $\beta=0$; see, e.g., Hamilton [12], Brockwell and Davis [13], and Fuller [4]). The independence assumption on the errors is not always valid in applications, especially for sequentially collected economic data, which often exhibit evident dependence in the errors. Recently, linear errors-in-variables models with serially correlated errors have attracted increasing attention from statisticians; see, for example, Baran [14], Fan et al. [15], and Miao et al. [16], among others.

It is well known that in the EV model the ordinary least-squares (OLS) estimators are biased and inconsistent, and that orthogonal regression is better in that case (Fuller [17]). However, both methods are very sensitive to outliers in the data, and several robust alternatives have been proposed. Brown [18] and Ketellapper and Ronner [19] applied robust ordinary regression techniques to the EV model. Zamar [20] proposed robust orthogonal regression M-estimators and showed that they outperform robust ordinary regression. Cheng and Van Ness [21] generalized the proposal of Zamar by defining robust orthogonal generalized M-estimators, which have a bounded influence function in the simple case. He and Liang [22] proposed a regression quantile approach for the EV model that allows for heavier-tailed error distributions than the Gaussian. Fekri and Ruiz-Gazen [23] proposed robust weighted orthogonal regression.

Over the last 40 years, several robust estimators in linear regression models have been proposed, for example by Wu [24], Silvapullé [25], Hampel et al. [26], Huber and Ronchetti [27], Li [28], Babu [29], Cheng and Van Ness [21], Salibian-Barrera and Zamar [30], Wu and Wang [31], and Zhou and Wu [32]. The HD approach is one of the important robust techniques. Recently, several authors have applied the HD approach to regression models. For example, Silvapullé [25] established the asymptotic normality of HD estimators for the linear regression model with i.i.d. errors. Hu [1] investigated the asymptotic normality of HD estimators for the linear regression model with AR(1) errors. Tong et al. [33] considered consistency and normality of HD estimators for the partial linear regression model. However, to the best of our knowledge, the HD method has not been used to investigate the models (1.1)-(1.2).

This paper studies the models (1.1)-(1.2) with the robust approach suggested by Huber and Dutter, extending some results of Hu [1], Silvapullé [25], and others to the EV regression model with AR(1) errors. The paper is organized as follows. In Section 2, the HD estimators of $\beta$, $a$, and $\sigma^2$ are defined. Under general conditions, the asymptotic normality of the HD estimators is established in Section 3. The proofs of the main results are presented in Section 4, and a simple numerical example is given in Section 5.

2 Estimation method

By (1.2), we have

$$ \varepsilon_t = \sum_{j=1}^{t} a^{t-j}\eta_j, \qquad t=1,2,\dots, $$
(2.1)

thus $\varepsilon_t$ is measurable with respect to the $\sigma$-field $\mathcal{H}_t$ generated by $\eta_1,\eta_2,\dots,\eta_t$, $E\varepsilon_t=0$, and

$$ \operatorname{Var}(\varepsilon_t) = \begin{cases} \sigma^2\,\dfrac{1-a^{2t}}{1-a^2}, & \text{if } |a|\neq 1, \\[2mm] \sigma^2 t, & \text{if } |a|=1. \end{cases} $$
(2.2)

Furthermore,

$$ \Delta_n(a,\sigma) = \sum_{t=2}^{n}E\varepsilon_{t-1}^2 = \begin{cases} \sigma^2\,\dfrac{a^{2n}-a^2+(n-1)(1-a^2)}{(1-a^2)^2}, & \text{if } |a|\neq 1, \\[2mm] \tfrac{1}{2}\sigma^2 n(n-1), & \text{if } |a|=1, \end{cases} \;=\; \begin{cases} O(n), & \text{if } |a|<1, \\ O(a^{2n}), & \text{if } |a|>1, \\ O(n^2), & \text{if } |a|=1. \end{cases} $$
(2.3)
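As a quick sanity check on the variance formula in (2.2), the sketch below compares the sample variance of $\varepsilon_t$ with the closed form $\sigma^2(1-a^{2t})/(1-a^2)$ for a stationary-range value of $a$; the particular parameter values and the Gaussian innovations are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a, sigma, t_index, reps = 0.7, 1.0, 10, 200_000

eta = rng.normal(0.0, sigma, (reps, t_index))
eps = np.zeros(reps)
for j in range(t_index):                 # builds eps_t = sum_{j=1}^t a^{t-j} eta_j, as in (2.1)
    eps = a * eps + eta[:, j]

empirical = eps.var()
theoretical = sigma**2 * (1 - a**(2 * t_index)) / (1 - a**2)   # formula (2.2), |a| != 1
print(empirical, theoretical)            # the two values should be close
```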

Let $y_0=0$, $x_0=0$. From (1.1),

$$ \varepsilon_t = y_t - \bigl(X_t^T - \zeta_t^T\bigr)\beta. $$

By the above equation and (1.2), we obtain

$$ \eta_t = \varepsilon_t - a\varepsilon_{t-1} = y_t - X_t^T\beta + \zeta_t^T\beta - a\bigl(y_{t-1} - X_{t-1}^T\beta + \zeta_{t-1}^T\beta\bigr). $$
(2.4)

Thus we could consider HD estimators by minimizing

$$ Q\bigl(\beta^T,\sigma,a\bigr) = \sum_{t=1}^{n}\rho\Bigl(\frac{y_t - X_t^T\beta + \zeta_t^T\beta - a(y_{t-1} - X_{t-1}^T\beta + \zeta_{t-1}^T\beta)}{\sigma}\Bigr)\sigma + A_n\sigma, $$
(2.5)

where $\rho \geq 0$ is convex, $\rho(0)=0$, $\rho(t)/|t| \to k$ as $|t|\to\infty$ for some $k>0$, and $\{A_n\}$ is a suitably chosen sequence of constants.
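To make the criterion concrete, here is a minimal numerical sketch of minimizing (2.5). It assumes the Huber $\rho$ with $k=1$, an illustrative choice $A_n=0.3n$ (so that $n^{-1}A_n$ stays below $\chi(\infty)=1/2$ for this $\rho$), Gaussian errors, a one-dimensional cosine design, and, for simplicity, that $\zeta_t$ is known, which it is not in practice (see the discussion after (2.11)); the optimizer and starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def rho_huber(u, k=1.0):
    # Huber's rho with k = 1: convex, rho(0) = 0, psi = rho' increases from -1 to 1
    return np.where(np.abs(u) <= k, 0.5 * u**2, k * np.abs(u) - 0.5 * k**2)

def hd_objective(params, y, X, zeta, A_n):
    beta, sigma, a = params[:-2], params[-2], params[-1]
    if sigma <= 0:
        return np.inf
    e = y - X @ beta + zeta @ beta                       # eps_t = y_t - (X_t - zeta_t)^T beta
    eta = np.concatenate(([e[0]], e[1:] - a * e[:-1]))   # eta_1 = eps_1, eta_t = eps_t - a eps_{t-1}
    return sigma * np.sum(rho_huber(eta / sigma)) + A_n * sigma   # criterion (2.5)

# toy data generated from (1.1)-(1.2) with d = 1
n, beta_true, a_true = 200, np.array([3.0]), 0.5
x = (10 * np.cos(2 * np.pi * np.arange(1, n + 1) / 50))[:, None]
eta = rng.normal(size=n)
eps = np.zeros(n)
eps[0] = eta[0]
for i in range(1, n):
    eps[i] = a_true * eps[i - 1] + eta[i]
zeta = rng.normal(size=(n, 1))
y, X = x @ beta_true + eps, x + zeta

res = minimize(hd_objective, x0=np.array([1.0, 1.0, 0.0]),
               args=(y, X, zeta, 0.3 * n), method="Nelder-Mead")
print(res.x)   # approximate (beta_hat, sigma_hat, a_hat)
```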

Remark 1 Since

$$ \operatorname{Var}\Bigl(\frac{y_t - X_t^T\beta - a(y_{t-1} - X_{t-1}^T\beta)}{\sqrt{1+(1+a^2)\beta^T\beta}}\Bigr) = \sigma^2, $$

using the HD method, we could also obtain HD estimators by minimizing

$$ \tilde{Q}\bigl(\beta^T,\sigma,a\bigr) = \sum_{t=1}^{n}\rho\Bigl(\frac{y_t - X_t^T\beta - a(y_{t-1} - X_{t-1}^T\beta)}{\sqrt{1+(1+a^2)\beta^T\beta}\,\sigma}\Bigr)\sigma + A_n\sigma. $$

We leave these estimators to future work because of some difficulties: the calculations are rather involved, and it is difficult to investigate the asymptotic properties of the resulting estimators because the quantities

$$ y_t - X_t^T\beta - a\bigl(y_{t-1} - X_{t-1}^T\beta\bigr) = \eta_t - \bigl(\zeta_t^T - a\zeta_{t-1}^T\bigr)\beta $$

are dependent.

Let us introduce some notation: the $1\times(d+2)$ vector $\theta=(\beta^T,\sigma,a)$ and its estimator $\hat{\theta}_n=(\hat{\beta}_n^T,\hat{\sigma}_n,\hat{a}_n)$. For an arbitrary function $f$, $f'$ and $f''$ denote its first and second derivatives, respectively. $\|x\|$ is the Euclidean norm of $x$, and $\varepsilon_0=0$. $(M)_{ij}$ and $(U)_i$ denote the $(i,j)$th component of a matrix $M$ and the $i$th component of a vector $U$, respectively.

The HD estimators of $\theta$ are obtained from the estimating equations formed by setting the following derivatives equal to zero:

$$ \frac{\partial Q}{\partial\beta^T} = -\sum_{t=1}^{n}\psi\Bigl(\frac{y_t-X_t^T\beta+\zeta_t^T\beta-a(y_{t-1}-X_{t-1}^T\beta+\zeta_{t-1}^T\beta)}{\sigma}\Bigr)\bigl(X_t-aX_{t-1}-(\zeta_t-a\zeta_{t-1})\bigr) = -\sum_{t=1}^{n}\psi\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)\bigl(X_t-aX_{t-1}-(\zeta_t-a\zeta_{t-1})\bigr), $$
(2.6)
$$ \frac{\partial Q}{\partial\sigma} = -\sum_{t=1}^{n}\Bigl\{\psi\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma} - \rho\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)\Bigr\} + A_n = -\Bigl\{\sum_{t=1}^{n}\chi\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr) - A_n\Bigr\}, $$
(2.7)

and

$$ \frac{\partial Q}{\partial a} = -\sum_{t=1}^{n}\psi\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)\varepsilon_{t-1}, $$
(2.8)

where $\psi=\rho'$ and $\chi(u)=u\psi(u)-\rho(u)=\int_0^u x\,d\psi(x)$.

The corresponding estimators, if they exist (see Proposition 2.1), will satisfy

$$ \sum_{t=1}^{n}\psi\Bigl(\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr)\bigl(X_t - \hat{a}_nX_{t-1} - (\zeta_t - \hat{a}_n\zeta_{t-1})\bigr) = 0, $$
(2.9)
$$ \sum_{t=1}^{n}\chi\Bigl(\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr) = A_n, $$
(2.10)

and

$$ \sum_{t=1}^{n}\psi\Bigl(\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr)\hat{\varepsilon}_{t-1} = 0 $$
(2.11)

with $\hat{\varepsilon}_t = y_t - x_t^T\hat{\beta}_n$.

Although $\{\zeta_t\}$ are unknown in (2.9)-(2.11), in practice they can easily be estimated by the method of Fuller [17].

In what follows, it will be assumed that $n\geq d+2$ and $A_n>0$. Without loss of generality, we may assume $k=1$ (its definition was given between (2.5) and Remark 1). Therefore $\psi$ is bounded and increases from $-1$ to $+1$. It will also be assumed that $\chi$ is bounded.

Remark 2 From the above equations, it is easily seen that our estimators include some existing estimators; see, for example, the simultaneous M-estimators of location and scale ($a=0$, $A_n=0$), the least-squares estimators ($\rho(u)=u^2$, $A_n=0$, and $\sigma=1$), and the least absolute deviation estimators ($\rho(u)=|u|$, $A_n=0$). In particular, we discuss three important cases as follows.

Case 1. Let $a=0$, $\zeta_t=0$. The estimating equations (2.9)-(2.11) may be written as

$$ \sum_{t=1}^{n}\psi\Bigl(\frac{y_t - X_t^T\hat{\beta}_n}{\hat{\sigma}_n}\Bigr)X_t = 0, \qquad \sum_{t=1}^{n}\chi\Bigl(\frac{y_t - X_t^T\hat{\beta}_n}{\hat{\sigma}_n}\Bigr) = A_n, $$
(2.12)

which are the same as Silvapullé’s [25].

Case 2. If $\rho(u)=|u|^q$ ($0<q\leq 1$), $A_n=0$, and $\zeta_t=0$, then the above equations (2.9)-(2.11) may be rewritten as

$$ \sum_{t=1}^{n}\Bigl|\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr|^{q-2}\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\bigl(X_t - \hat{a}_nX_{t-1}\bigr) = 0, $$
(2.13)
$$ \sum_{t=1}^{n}\Bigl|\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr|^{q} = 0, $$
(2.14)

and

$$ \sum_{t=1}^{n}\Bigl|\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr|^{q-2}\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\hat{\varepsilon}_{t-1} = 0 $$
(2.15)

with $\hat{\varepsilon}_t = y_t - X_t^T\hat{\beta}_n$.

Let $a=0$, $\rho(u)=|u|^q$ ($0<q\leq 1$), and $A_n=0$. Then (2.13)-(2.15) reduce to

$$ \sum_{t=1}^{n}\Bigl|\frac{y_t - X_t^T\hat{\beta}_n}{\hat{\sigma}_n}\Bigr|^{q-2}\frac{y_t - X_t^T\hat{\beta}_n}{\hat{\sigma}_n}X_t = 0, \qquad \sum_{t=1}^{n}\Bigl|\frac{y_t - X_t^T\hat{\beta}_n}{\hat{\sigma}_n}\Bigr|^{q} = 0. $$
(2.16)

Furthermore, if σ is a constant, then the estimator of β satisfies the following equation:

$$ \sum_{t=1}^{n}\bigl|y_t - X_t^T\hat{\beta}_n\bigr|^{q-2}\bigl(y_t - X_t^T\hat{\beta}_n\bigr)X_t = 0, $$
(2.17)

which is the $L_q$ (or maximum likelihood) estimating equation for the parameter $\beta$ in a linear regression model with $q$-norm distributed errors; a sketch of this special case is given below. Many authors have investigated (2.17), such as Arcones [34], Zeckhauser and Thompson [35], and Ronner [36, 37].
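For illustration, here is a minimal sketch of the $L_q$ problem behind (2.17), i.e. minimizing $\sum_t|y_t-X_t^T\beta|^q$ over $\beta$. The choice $q=1$ (least absolute deviations), the heavy-tailed $t_3$ errors, and the simplex search are assumptions of the sketch; for $q<1$ the criterion is non-convex, so such a search would only be a heuristic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, beta_true, q = 200, np.array([3.0]), 1.0
X = rng.normal(size=(n, 1))
y = X @ beta_true + rng.standard_t(df=3, size=n)      # heavy-tailed errors (illustrative)

objective = lambda b: np.sum(np.abs(y - X @ b) ** q)  # L_q criterion behind (2.17)
beta_hat = minimize(objective, x0=np.zeros(1), method="Nelder-Mead").x
print(beta_hat)
```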

Case 3. Let $\zeta_t=0$. The estimating equations (2.9)-(2.11) may be written as

$$ \sum_{t=1}^{n}\psi\Bigl(\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr)\bigl(X_t - \hat{a}_nX_{t-1}\bigr) = 0, $$
(2.18)
$$ \sum_{t=1}^{n}\chi\Bigl(\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr) = A_n, $$
(2.19)

and

$$ \sum_{t=1}^{n}\psi\Bigl(\frac{\hat{\varepsilon}_t - \hat{a}_n\hat{\varepsilon}_{t-1}}{\hat{\sigma}_n}\Bigr)\hat{\varepsilon}_{t-1} = 0, $$
(2.20)

with $\hat{\varepsilon}_t = y_t - x_t^T\hat{\beta}_n$, which are the same as Hu's [1].

From Proposition 1 in Silvapullé [25] and p.136 of Huber and Ronchetti [27], the existence of the HD estimators is given by the following result.

Proposition 2.1 Suppose that $\rho$ is continuous and, for some $A>0$, $\nu < 1 - AV^{-1}$, where $\nu$ is the largest jump in the error distribution and $V=\chi(\infty)=\chi(-\infty)$. Then the equation $E\{\psi(\frac{\varepsilon-\mu}{\sigma}), \chi(\frac{\varepsilon-\mu}{\sigma})-A\}=0$ has a solution $(\mu(A),\sigma(A))$ with $\sigma(A)>0$. In particular, when $A=\lim_{n\to\infty} n^{-1}A_n$, where $A_n$ is defined in (2.5), we denote the solution by $(\mu,\sigma)$ with $\sigma>0$.

3 Main results

To obtain our results, we start with some assumptions.

  • (A1) $\max_{1\leq t\leq n}\|X_t\| < \infty$.

  • (A2) $\lim_{n\to\infty} n^{\frac{1}{2}}\bigl(n^{-1}A_n - A\bigr) = 0$, where $0 < n^{-1}A_n,\ A < \min\{\chi(\infty), \chi(-\infty)\}$.

  • (A3) The function $\psi''$ is continuous.

  • (A4) $E\psi(\frac{\eta_t}{\sigma})=0$ for any $\sigma>0$; $b=E\psi'(\frac{\eta_t}{\sigma})>0$, $c=E\{\psi'(\frac{\eta_t}{\sigma})\frac{\eta_t}{\sigma}\}$, $r=E\{\psi'(\frac{\eta_t}{\sigma})\eta_t^2\}$, $br>c^2$, $\operatorname{Var}(\psi'(\frac{\eta_t}{\sigma}))<\infty$, $\operatorname{Var}(\psi'(\frac{\eta_t}{\sigma})\eta_t)<\infty$, $\operatorname{Var}(\psi'(\frac{\eta_t}{\sigma})\eta_t^2)<\infty$, $E\eta_t^6<\infty$, and $E(\psi''(\frac{\eta_t}{\sigma}))^2<\infty$.

  • (A5) For any $a\in(-\infty,\infty)$, $X_n(a)=\sum_{t=1}^{n}(X_t-aX_{t-1})(X_t-aX_{t-1})^T$ is positive definite for sufficiently large $n$.

Remark 3 The condition (A1) is often imposed in the estimation theory of regression models. The condition (A2) is used by Tong et al. [33]. In addition, (A1) and (A2) imply the conditions $n^{-1}\max_{1\leq t\leq n}\|X_t\|^2\to 0$ and $\lim_{n\to\infty}(n^{-1}A_n - A)=0$, which are used by Silvapullé [25]. The conditions (A3) and (A4), except $E\eta_t^6<\infty$ and $E(\psi''(\frac{\eta_t}{\sigma}))^2<\infty$, are used by Silvapullé [25]. The condition (A5) is used by Maller [2], Hu [1], and others. Therefore, our conditions are quite mild and can easily be satisfied.

For ease of exposition, we introduce the following notation, which will be used later in the paper. Define

$$ \begin{aligned} \Theta_1 &= \bigl\{\tilde{\theta} : |\tilde{\theta}-\theta| \leq Cn^{-\frac{1}{2}}\bigr\}, \\ \Theta_2 &= \bigl\{\tilde{\theta} : |\tilde{\beta}-\beta| \leq Cn^{-\frac{1}{2}},\ |\tilde{\sigma}-\sigma| \leq Cn^{-\frac{1}{2}},\ |\tilde{a}-a| \leq Cn^{-1}\bigr\}, \\ \Theta_3 &= \bigl\{\tilde{\theta} : |\tilde{\beta}-\beta| \leq Cn^{-\frac{1}{2}},\ |\tilde{\sigma}-\sigma| \leq Cn^{-\frac{1}{2}},\ |\tilde{a}-a| \leq C|a|^{-\frac{3n}{2}}\bigr\}, \\ S_n(\theta) &= -\frac{\partial Q}{\partial\theta} = \Bigl(-\frac{\partial Q}{\partial\beta^T},\ -\frac{\partial Q}{\partial\sigma},\ -\frac{\partial Q}{\partial a}\Bigr) = \Bigl([S_n(\theta)]_{\beta,\sigma},\ -\frac{\partial Q}{\partial a}\Bigr), \end{aligned} $$
(3.1)

and

$$ F_n(\theta) = \frac{\partial^2 Q}{\partial\theta\,\partial\theta^T} = \begin{pmatrix} \dfrac{\partial^2 Q}{\partial\beta^T\,\partial\beta} & \dfrac{\partial^2 Q}{\partial\beta^T\,\partial\sigma} & \dfrac{\partial^2 Q}{\partial\beta^T\,\partial a} \\ * & \dfrac{\partial^2 Q}{\partial\sigma^2} & \dfrac{\partial^2 Q}{\partial\sigma\,\partial a} \\ * & * & \dfrac{\partial^2 Q}{\partial a^2} \end{pmatrix}, $$
(3.2)

where the $*$ indicates that the elements are filled in by symmetry, and

$$ \frac{\partial^2 Q}{\partial\beta^T\,\partial\beta} = \frac{1}{\sigma}\sum_{t=1}^{n}\psi'\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)\bigl(X_t-aX_{t-1}-(\zeta_t-a\zeta_{t-1})\bigr)\bigl(X_t-aX_{t-1}-(\zeta_t-a\zeta_{t-1})\bigr)^T =: X_n(a,\omega), $$
(3.3)
$$ \frac{\partial^2 Q}{\partial\beta^T\,\partial\sigma} = \frac{1}{\sigma^2}\sum_{t=1}^{n}\psi'\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)(\varepsilon_t-a\varepsilon_{t-1})\bigl(X_t-aX_{t-1}-(\zeta_t-a\zeta_{t-1})\bigr)^T, $$
(3.4)
$$ \frac{\partial^2 Q}{\partial\beta^T\,\partial a} = \sum_{t=1}^{n}\psi'\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)\frac{\varepsilon_{t-1}}{\sigma}\bigl(X_t-aX_{t-1}-(\zeta_t-a\zeta_{t-1})\bigr)^T + \sum_{t=1}^{n}\psi\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)(X_{t-1}-\zeta_{t-1})^T, $$
(3.5)
$$ \frac{\partial^2 Q}{\partial\sigma^2} = \frac{1}{\sigma^2}\sum_{t=1}^{n}\psi'\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)(\varepsilon_t-a\varepsilon_{t-1})^2, $$
(3.6)
$$ \frac{\partial^2 Q}{\partial\sigma\,\partial a} = \frac{1}{\sigma^2}\sum_{t=1}^{n}\psi'\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)(\varepsilon_t-a\varepsilon_{t-1})\varepsilon_{t-1}, $$
(3.7)

and

$$ \frac{\partial^2 Q}{\partial a^2} = \frac{1}{\sigma}\sum_{t=1}^{n}\psi'\Bigl(\frac{\varepsilon_t-a\varepsilon_{t-1}}{\sigma}\Bigr)\varepsilon_{t-1}^2. $$
(3.8)

Theorem 3.1 Suppose that conditions (A1)-(A5) hold. Then, as $n\to\infty$:

(1) For $|a|<1$ and $\theta\in\Theta_1$, we have

$$ (\hat{\theta}_n-\theta)\,D_n(\theta)\,\operatorname{Var}^{-\frac{1}{2}}\bigl(S_n(\theta)\bigr) \xrightarrow{D} N(0, I_{d+2}), $$
(3.9)

where $D_n(\theta)=E(F_n(\theta))$ and $\operatorname{Var}(S_n(\theta))$ is defined in Lemma 4.1.

(2) For $|a|=1$ and $\theta\in\Theta_2$, (3.9) holds.

(3) For $|a|>1$ and $\theta\in\Theta_3$, (3.9) holds.

From the above theorem, we may obtain the following corollaries. Here we omit their proofs.

Corollary 3.1 If $\beta=0$ and conditions (A2)-(A5) hold, then

$$ n^{\frac{1}{2}}(\hat{\sigma}_n-\sigma) \xrightarrow{D} N\Bigl(0, \frac{\sigma^2}{r^2}\operatorname{Var}\Bigl(\chi\Bigl(\frac{\eta_1}{\sigma}\Bigr)\Bigr)\Bigr) $$
(3.10)

and

$$ \Delta_n^{\frac{1}{2}}(a,\sigma)(\hat{a}_n-a) \xrightarrow{D} N\Bigl(0, \frac{\sigma^2}{b^2}E\psi^2\Bigl(\frac{\eta_1}{\sigma}\Bigr)\Bigr), \qquad n\to\infty. $$
(3.11)

Corollary 3.2 If $a=0$ and conditions (A1)-(A5) hold, then

$$ [S_n(\theta)]_{\beta,\sigma}\bigl[\operatorname{Var}^{-\frac{1}{2}}S_n(\theta)\bigr]_{\beta,\sigma} \xrightarrow{D} N(0, I_{d+1}), \qquad n\to\infty. $$
(3.12)

Remark 4 Corollary 3.2 is similar to Theorem 2 of Silvapullé [25].

Corollary 3.3 Let σ be a constant. If conditions (A1)-(A5) hold, then

$$ (\hat{\beta}_n-\beta)\bigl(X_n(a) + n(1+a^2)\sigma_\zeta^2 I_d\bigr)^{\frac{1}{2}} \xrightarrow{D} N\Bigl(0, \frac{\sigma^2}{b^2}E\psi^2\Bigl(\frac{\eta_1}{\sigma}\Bigr)I_d\Bigr) $$
(3.13)

and

$$ \Delta_n^{\frac{1}{2}}(a,\sigma)(\hat{a}_n-a) \xrightarrow{D} N\Bigl(0, \frac{\sigma^2}{b^2}E\psi^2\Bigl(\frac{\eta_1}{\sigma}\Bigr)\Bigr), \qquad n\to\infty. $$
(3.14)

Corollary 3.4 Let $\zeta_t=0$. If conditions (A1)-(A5) hold, then Theorem 3.1 holds.

Remark 5 For $|a|<1$, Corollary 3.4 is the same as Theorem 3.1 of Hu [1]. Therefore, we extend the corresponding results of Hu [1] to linear EV models.

If we do not consider the dependence on the parameters $\hat{\beta}_n$ and $\hat{\sigma}_n$, then we obtain Theorem 3.2.

Theorem 3.2 Let

$$ [\theta]_{\beta,\sigma} = (\beta^T, \sigma), \qquad D_n(\theta) = \operatorname{diag}\Bigl([D_n(\theta)]_{\beta,\sigma},\ \tfrac{b}{\sigma}\Delta_n(a,\sigma)\Bigr). $$

Suppose that conditions (A1)-(A5) hold. Then:

(1) for any $a\in(-\infty,\infty)$,

$$ n^{-\frac{1}{2}}[\hat{\theta}_n-\theta]_{\beta,\sigma}\,[D_n(\theta)]_{\beta,\sigma} \xrightarrow{D} N(0,\Sigma), \qquad n\to\infty, $$
(3.15)

where

$$ \Sigma = \operatorname{diag}\Bigl\{\Delta_n^{-1}(a,\sigma)\bigl(X_n(a)+n(1+a^2)\sigma_\zeta^2 I_d\bigr)E\bigl(\psi^2(\tfrac{\eta_1}{\sigma})\bigr),\ \operatorname{Var}\bigl(\chi(\tfrac{\eta_1}{\sigma})\bigr)\Bigr\}; $$

(2) for any $a\in(-\infty,\infty)$,

$$ \Delta_n^{\frac{1}{2}}(a,\sigma)(\hat{a}_n-a) \xrightarrow{D} N\Bigl(0, \frac{\sigma^2}{b^2}E\psi^2\Bigl(\frac{\eta_1}{\sigma}\Bigr)\Bigr). $$
(3.16)
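As an informal check of the $a$-part of these results (Corollary 3.1 and Theorem 3.2(2)), the sketch below simulates the special case $\beta=0$, $\sigma=1$ known, $|a|<1$, with the Huber $\psi$ ($k=1$) and standard normal $\eta_t$, and compares the Monte Carlo standard deviation of $\Delta_n^{1/2}(\hat{a}_n-a)$ (with $\Delta_n$ replaced by its empirical counterpart) to the asymptotic value $\sqrt{E\psi^2(\eta_1)}/b$. All of these concrete choices are assumptions of the sketch, not part of the theorem.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
a_true, n, reps = 0.5, 400, 500

def rho(u, k=1.0):
    # Huber rho with k = 1, so psi(u) = max(-1, min(u, 1))
    return np.where(np.abs(u) <= k, 0.5 * u**2, k * np.abs(u) - 0.5 * k**2)

stats = []
for _ in range(reps):
    eta = rng.normal(size=n)
    eps = np.zeros(n)
    eps[0] = eta[0]
    for i in range(1, n):
        eps[i] = a_true * eps[i - 1] + eta[i]          # with beta = 0 the eps_t are observed
    obj = lambda a: np.sum(rho(eps[1:] - a * eps[:-1]))
    a_hat = minimize_scalar(obj, bounds=(-0.99, 0.99), method="bounded").x
    delta_n = np.sum(eps[:-1] ** 2)                    # empirical stand-in for Delta_n(a, sigma)
    stats.append(np.sqrt(delta_n) * (a_hat - a_true))

# asymptotic standard deviation sqrt(E psi^2) / b for Huber k = 1 and eta ~ N(0,1)
z = rng.normal(size=1_000_000)
psi = np.clip(z, -1.0, 1.0)
print(np.std(stats), np.sqrt(np.mean(psi**2)) / np.mean(np.abs(z) <= 1.0))  # should be close
```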

4 Proofs of main results

Throughout this paper, let $C$ denote a generic positive constant which may take different values at each occurrence. To prove Theorem 3.1 and Theorem 3.2, we first introduce the following lemmas.

Lemma 4.1 If (A4) and (A5) hold, then for sufficiently large $n$ the matrix $D_n(\theta)$ is positive definite, $E(S_n(\theta))\approx 0$, and

$$ \operatorname{Var}\bigl(S_n(\theta)\bigr) = \begin{pmatrix} E\psi^2(\frac{\eta_1}{\sigma})\bigl(X_n(a)+n(1+a^2)\sigma_\zeta^2 I_d\bigr) & \operatorname{Cov}\bigl(\frac{\partial Q}{\partial\beta^T},\frac{\partial Q}{\partial\sigma}\bigr) & 0 \\ * & n\operatorname{Var}\bigl(\chi(\frac{\eta_1}{\sigma})\bigr) & 0 \\ * & * & E\psi^2(\frac{\eta_1}{\sigma})\Delta_n(a,\sigma) \end{pmatrix} = \operatorname{diag}\Bigl\{\bigl[\operatorname{Var}\bigl(S_n(\theta)\bigr)\bigr]_{\beta,\sigma},\ E\psi^2\bigl(\tfrac{\eta_1}{\sigma}\bigr)\Delta_n(a,\sigma)\Bigr\}, $$
(4.1)

where

$$ \operatorname{Cov}\Bigl(\frac{\partial Q}{\partial\beta^T}, \frac{\partial Q}{\partial\sigma}\Bigr) = E\Bigl(\psi\Bigl(\frac{\eta_1}{\sigma}\Bigr)\chi\Bigl(\frac{\eta_1}{\sigma}\Bigr)\Bigr)\sum_{t=1}^{n}(X_t-aX_{t-1})^T. $$

Furthermore, Var( S n (θ)) is a positive definite matrix.

Proof Note that $E\psi(\frac{\eta_t}{\sigma})=0$, $E(\varepsilon_t)=0$, and $\varepsilon_t$, $\zeta_t$, and $\eta_{t+1}$ are independent. By (3.3)-(3.8), we easily obtain

$$ D_n(\theta) = \begin{pmatrix} \frac{b}{\sigma}X_n(a) + \frac{nb}{\sigma}(1+a^2)\sigma_\zeta^2 I_d & \frac{c}{\sigma}\sum_{t=1}^{n}(X_t-aX_{t-1}) & 0 \\ * & \frac{nr}{\sigma} & 0 \\ * & * & \frac{b}{\sigma}\Delta_n(a,\sigma) \end{pmatrix}. $$
(4.2)

It is easy to show that

$$ D_{n1}(\theta) = \Bigl|\frac{b}{\sigma}X_n(a) + \frac{nb}{\sigma}(1+a^2)\sigma_\zeta^2 I_d\Bigr| > 0, $$
$$ D_{n2}(\theta) = \begin{vmatrix} \frac{b}{\sigma}X_n(a)+\frac{nb}{\sigma}(1+a^2)\sigma_\zeta^2 I_d & \frac{c}{\sigma}\sum_{t=1}^{n}(X_t-aX_{t-1}) \\ * & \frac{nr}{\sigma} \end{vmatrix} = \Bigl|\frac{b}{\sigma}X_n(a)+\frac{nb}{\sigma}(1+a^2)\sigma_\zeta^2 I_d\Bigr|\Bigl|\frac{nr}{\sigma} - \frac{c^2}{b\sigma}\sum_{t=1}^{n}(X_t-aX_{t-1})^T\bigl(X_n(a)+n(1+a^2)\sigma_\zeta^2 I_d\bigr)^{-1}\sum_{t=1}^{n}(X_t-aX_{t-1})\Bigr| > 0, $$
(4.3)

and

$$ \bigl|D_n(\theta)\bigr| > 0. $$

Thus the matrix D n (θ) is positive definite.

By (2.6), we have

E ( Q β T ) = t = 1 n Eψ ( η t σ ) ( X t a X t 1 ) T =0.
(4.4)

By (2.7) and Proposition 2.1, we have

E ( Q σ ) = t = 1 n Eχ ( η t σ ) + A n 0.
(4.5)

Note that ε t 1 and η t are independent; by (2.8) and E( ε t )=0, we have

E ( Q a ) = t = 1 n E ( ψ ( η t σ ) ε t 1 ) = t = 1 n E ( ψ ( η t σ ) ) E( ε t 1 )=0.
(4.6)

Hence, from (4.4)-(4.6),

E ( S n ( θ ) ) = ( 0 , t = 1 n E χ ( η t σ ) + A n , 0 ) 0.

By (2.6), we have

Var ( Q β T ) = Var ( ψ ( η t σ ) ) t = 1 n ( ( X t a X t 1 ) ( X t a X t 1 ) T + ( 1 + a 2 ) σ ζ 2 I d ) = E ( ψ ( η 1 σ ) ) 2 ( X n ( a ) + n ( 1 + a 2 ) σ ζ 2 I d ) .
(4.7)

By (2.7), we have

Var ( Q σ ) = t = 1 n Var ( χ ( η t σ ) ) =nVar ( χ ( η 1 σ ) ) .
(4.8)

Note that {ψ( η t σ ) ε t 1 , H t } is a martingale difference sequence with

Var ( ψ ( η t σ ) ε t 1 ) =E ( ψ ( η t σ ) ) 2 E ε t 1 2 ,

so we have

Var ( Q a ) = t = 1 n Var ( ψ ( η t σ ) ε t 1 ) = t = 1 n E ( ψ ( η t σ ) ) 2 E ε t 1 2 = E ( ψ ( η 1 σ ) ) 2 Δ n ( a , σ ) .
(4.9)

By (2.6) and (2.7), we have

Cov ( Q β T , Q σ ) = E ( t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T t = 1 n ( χ ( η t σ ) E χ ( η t σ ) ) ) = t = 1 n E ( ψ ( η t σ ) ( χ ( η t σ ) E χ ( η t σ ) ) ) ( X t a X t 1 ) T = E ( ψ ( η 1 σ ) χ ( η 1 σ ) ) t = 1 n ( X t a X t 1 ) T .
(4.10)

By (2.6), (2.8), and noting that ζ t , ε t 1 , and η t are independent, we have

Cov ( Q β T , Q a ) = E ( Q β T , Q a ) = E ( t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T t = 1 n ψ ( η t σ ) ε t 1 ) = t = 1 n E ( ψ 2 ( η t σ ) ε t 1 ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T + 2 t > k n E ( ψ ( η t σ ) ψ ( η k σ ) ε k 1 ) = t = 1 n E ψ 2 ( η t σ ) E ε t 1 ( X t a X t 1 ) T + 2 t > k n E ( ψ ( η t σ ) ψ ( η k σ ) ) E ε k 1 = 0 .
(4.11)

By (2.7) and (2.8), we have

Cov ( Q a , Q σ ) = E ( t = 1 n ψ ( η t σ ) ε t 1 t = 1 n ( χ ( η t σ ) E χ ( η t σ ) ) ) = t = 1 n E ( ψ ( η t σ ) ( χ ( η t σ ) E χ ( η t σ ) ) ) E ε t 1 + 2 t > k n E ( ( χ ( η t σ ) E χ ( η t σ ) ) ψ ( η k σ ) ) E ε k 1 = 0 .
(4.12)

Hence, (4.1) follows immediately from (4.7)-(4.12).

Similarly to the proof for $D_n(\theta)$, we can easily prove that the matrix $\operatorname{Var}(S_n(\theta))$ is positive definite. This completes the proof of Lemma 4.1. □

Lemma 4.2 Assume that (A1) and (A4) hold. Then:

(1) for $|a|<1$, we have

$$ F_n(\theta) - D_n(\theta) = O_p\bigl(n^{\frac{1}{2}}\bigr), \qquad n\to\infty; $$
(4.13)

(2) for $|a|=1$, we have

$$ F_n(\theta) - D_n(\theta) = \begin{pmatrix} O_p(n^{\frac{1}{2}}) & O_p(n^{\frac{1}{2}}) & O_p(n) \\ * & O_p(n^{\frac{1}{2}}) & O_p(n^{\frac{1}{2}}) \\ * & * & O_p(n^{\frac{3}{2}}) \end{pmatrix}, $$
(4.14)

where the $*$ indicates that the elements are filled in by symmetry;

(3) for $|a|>1$, we have

$$ F_n(\theta) - D_n(\theta) = \begin{pmatrix} O_p(n^{\frac{1}{2}}) & O_p(n^{\frac{1}{2}}) & O_p(a^{n}) \\ * & O_p(n^{\frac{1}{2}}) & O_p(n^{\frac{1}{2}}) \\ * & * & O_p(a^{2n}) \end{pmatrix}. $$
(4.15)

Proof By (3.3) and (4.2), we obtain

n 1 { X n ( a , ω ) b X n ( a ) n b ( 1 + a 2 ) σ ζ 2 I d } = 1 n σ t = 1 n { ψ ( ε t a ε t 1 σ ) b } ( X t a X t 1 ) ( X t a X t 1 ) T + 1 n σ t = 1 n { ψ ( ε t a ε t 1 σ ) ζ t ζ t T b σ ζ 2 I d } + a 2 n σ t = 1 n { ψ ( ε t a ε t 1 σ ) ζ t 1 ζ t 1 T b σ ζ 2 I d } 2 a n σ t = 1 n ψ ( ε t a ε t 1 σ ) ζ t ζ t 1 T 2 n σ t = 1 n ψ ( ε t a ε t 1 σ ) ( X t a X t 1 ) ζ t T + 2 a n σ t = 1 n ψ ( ε t a ε t 1 σ ) ( X t a X t 1 ) ζ t 1 T = U 1 + U 2 + U 3 + U 4 + U 5 + U 6 .
(4.16)

Note that { ψ ( η t σ ),t=1,2,,n} are i.i.d. random variables with finite variance Var( ψ ( η t σ )), we have

Var { n 1 t = 1 n ( ψ ( η t σ ) E ψ ( η t σ ) ) } = n 2 t = 1 n Var ( ψ ( η t σ ) ) = n 1 Var ( ψ ( η 1 σ ) ) = O ( n 1 ) .
(4.17)

By the Chebyshev inequality and (4.17), we have

( U 1 ) i j = 1 n σ t = 1 n { ψ ( η t σ ) E ψ ( η t σ ) } ( X t a X t 1 ) i ( X t a X t 1 ) j T 1 σ max X t a X t 1 2 n 1 t = 1 n { ψ ( η t σ ) E ψ ( η t σ ) } = O p ( n 1 2 ) .
(4.18)

Similarly, we obtain

( U i ) i j = O p ( n 1 2 ) .
(4.19)

By (4.16), (4.18), and (4.19), we have

X n (θ,ω)b X n (a)nb ( 1 + a 2 ) σ ζ 2 I d = O p ( n 1 2 ) .
(4.20)

By (2.12) and (4.2), we easily obtain

2 Q β T σ c σ t = 1 n ( X t a X t 1 ) T = 1 σ t = 1 n { ψ ( η t σ ) η t σ E ( ψ ( η t σ ) η t σ ) } ( X t a X t 1 ) T 1 σ t = 1 n ψ ( η t σ ) η t σ ( ζ t a ζ t 1 ) T = O p ( n 1 2 ) .
(4.21)

By (3.5) and (4.2), we obtain

n 1 2 { 2 Q β T a 0 } = n 1 2 t = 1 n ψ ( η t σ ) ε t 1 σ ( X t a X t 1 ) T n 1 2 t = 1 n ψ ( η t σ ) ε t 1 σ ( ζ t a ζ t 1 ) T + n 1 2 t = 1 n ψ ( η t σ ) X t 1 T n 1 2 t = 1 n ψ ( η t σ ) ζ t 1 T = n 1 2 ( U 1 + U 2 + U 3 + U 4 ) .
(4.22)

Note that { ψ ( η t σ ) ε t 1 σ , H t } is a martingale difference sequence with

Var ( ψ ( η t σ ) ε t 1 σ ) =E ( ψ ( η t σ ) ) 2 E ( ε t 1 2 σ 2 ) ,

so we have

( Var { n 1 t = 1 n ψ ( η t σ ) ε t 1 σ ( X t a X t 1 ) T } ) i j = n 2 t = 1 n Var { ψ ( η t σ ) ε t 1 σ } ( X t a X t 1 ) i ( X t a X t 1 ) j T 1 σ 2 max X t a X t 1 2 E ( ψ ( η 1 σ ) ) 2 n 2 t = 1 n E ( ε t 1 2 ) = O ( Δ n ( a , σ ) n 2 ) .
(4.23)

By the Chebyshev inequality and (4.23), we have

( U 1 ) i = O p ( Δ n 1 2 ( a , σ ) ) .
(4.24)

Similarly, we have

( U 2 ) i = O p ( Δ n 1 2 ( a , σ ) ) .
(4.25)

It is easy to show that

( U 3 ) i = O p ( n 1 2 ) , U 4 = O p ( n 1 2 ) .
(4.26)

By (4.22) and (4.24)-(4.26), we have

2 Q β T a 0= O p ( Δ n 1 2 ( a , σ ) ) .
(4.27)

By (3.6) and (4.2), we obtain

2 Q σ 2 n d σ = 1 σ { t = 1 n ψ ( η t σ ) η t 2 n d } = 1 σ t = 1 n { ψ ( η t σ ) η t 2 E ( ψ ( η t σ ) η t 2 ) } = O p ( n 1 2 ) .
(4.28)

Note that { ψ ( η t σ ) η t ε t 1 , H t } is a martingale difference sequence with Var( ψ ( η t σ ) η t ε t 1 )=Var( ψ ( η t σ ) η t )Var( ε t 1 ), and by (3.7) and (4.2), we obtain

2 Q σ a 0= 1 σ 2 t = 1 n ψ ( η t σ ) η t ε t 1 = O p ( n 1 2 ) .
(4.29)

By (3.8) and (4.2), we obtain

n 1 { 2 Q a 2 b σ 0 Δ n ( a , σ ) } = 1 n σ { t = 1 n ψ ( η t σ ) ε t 1 2 b Δ n ( a , σ ) } = 1 n σ t = 1 n { ψ ( η t σ ) E ( ψ ( η t σ ) ) } ε t 1 2 + b Δ n ( a , σ ) σ t = 1 n ( ε t 1 2 E ( ε t 1 2 ) ) = T 1 + T 2 .
(4.30)

Since we easily prove that { ψ ( η t σ )E( ψ ( η t σ )) ε t 1 2 , H t } is a martingale difference sequence,

Var ( T 1 ) = 1 ( n σ ) 2 t = 1 n E { ψ ( η t σ ) E ( ψ ( η t σ ) ) } 2 E ( ε t 1 4 ) = 1 ( n σ ) 2 Var { ψ ( η 1 σ ) } t = 1 n E ( ε t 1 4 ) .
(4.31)

By (2.1), we obtain

E ( ε t 1 4 ) j = 1 t 1 a 4 ( t 1 j ) E η j 4 + σ 4 j = 1 t 1 a 2 ( t 1 j ) k = 1 t 1 a 2 ( t 1 k ) = { E η 1 4 a 4 ( t 2 ) ( 1 a 4 ( t 1 ) ) 1 a 4 + 3 σ 4 ( a 2 ( t 2 ) ( 1 a 2 ( t 1 ) ) 1 a 2 ) 2 , if  | a | 1 , E η 1 4 ( t 1 ) + σ 4 ( t 1 ) 2 , if  | a | = 1 .
(4.32)

Thus

t = 1 n E ( ε t 1 4 ) { E η 1 4 ( a 4 n 1 ( 1 a 4 ) 2 n a 4 1 ) + 3 σ 4 ( 1 a 2 ) 2 ( a 4 n 1 1 a 4 2 ( a 2 n 1 ) a 2 ( 1 a 2 ) + n a 4 ) , | a | 1 , 1 2 E η 1 4 n ( n 1 ) + σ 4 ( n 1 ) n ( 2 n 1 ) 6 , | a | = 1 .
(4.33)

That is,

t = 1 n E ( ε t 1 4 ) = { O ( n ) , if  | a | < 1 , O ( a 4 n ) , if  | a | > 1 , O ( n 3 ) , if  | a | = 1 .
(4.34)

By Chebyshev inequality and (4.31)-(4.34), we have

T 1 = { O p ( n 1 2 ) , if  | a | < 1 , O p ( n 1 a 2 n ) , if  | a | > 1 , O p ( n 1 2 ) , if  | a | = 1 .
(4.35)

Similarly to the proof of (4.35), we easily obtain

T 2 = { O p ( n 1 2 ) , if  | a 0 | < 1 , O p ( 1 ) , if  | a 0 | > 1 , O p ( n 1 2 ) , if  | a 0 | = 1 .
(4.36)

Hence, by (4.30), (4.35), and (4.36), we have

2 Q a 2 b σ Δ n (θ)= { O p ( n 1 2 ) , if  | a | < 1 , O p ( a 2 n ) , if  | a | > 1 , O p ( n 3 2 ) , if  | a | = 1 .
(4.37)

Thus Lemma 4.2 follows from (4.20), (4.21), (4.27)-(4.29), and (4.37). □

Lemma 4.3 Assume that (A1), (A3), and (A4) hold, and $E\eta_t^6<\infty$, $E(\psi''(\frac{\eta_t}{\sigma}))^2<\infty$, and $\theta\in\Theta$. Then, as $n\to\infty$:

(1) for $|a|<1$, we have

$$ R_{nl}(\theta) = \frac{\partial}{\partial\theta_l}\frac{\partial^2 Q(\theta)}{\partial\theta^T\,\partial\theta} = \frac{\partial}{\partial\theta_l}F_n(\theta) = O_p\bigl(n^{\frac{1}{2}}\bigr), \qquad l=1,2,\dots,d+2; $$
(2) for $|a|=1$, we have

    R n l (θ)= ( ( O p ( n 1 2 ) O p ( n 1 2 ) O p ( n ) O p ( n 1 2 ) O p ( n ) O p ( n ) ) ( d + 2 ) × ( d + 2 ) ( O p ( n 1 2 ) O p ( n 1 2 ) O p ( n ) O p ( n 1 2 ) O p ( n ) O p ( n ) ) ( d + 2 ) × ( d + 2 ) ( O p ( n 1 2 ) O p ( n 1 2 ) O p ( n ) O p ( n 1 2 ) O p ( n ) O p ( n 3 2 ) ) ( d + 2 ) × ( d + 2 ) ( O p ( n 3 2 ) O p ( n 3 2 ) O p ( n 3 2 ) O p ( n 3 2 ) O p ( n 3 2 ) O p ( n 2 ) ) ( d + 2 ) × ( d + 2 ) ) ( d + 2 ) × 1 ;
(3) for $|a|>1$, we have

    R n l (θ)= ( ( O p ( n 1 2 ) O p ( n 1 2 ) O p ( a n ) O p ( n 1 2 ) O p ( a n ) O p ( a n ) ) ( d + 2 ) × ( d + 2 ) ( O p ( n 1 2 ) O p ( n 1 2 ) O p ( a n ) O p ( n 1 2 ) O p ( a n ) O p ( a n ) ) ( d + 2 ) × ( d + 2 ) ( O p ( n 1 2 ) O p ( n 1 2 ) O p ( a n ) O p ( n 1 2 ) O p ( a n ) O p ( a 2 n ) ) ( d + 2 ) × ( d + 2 ) ( O p ( a n ) O p ( a n ) O p ( a 2 n ) O p ( a n ) O p ( a 2 n ) O p ( a 3 n ) ) ( d + 2 ) × ( d + 2 ) ) ( d + 2 ) × 1 .

Proof Let

θ l F n (θ)=( 3 Q θ l β T β 3 Q θ l β T σ 3 Q θ l β T a 3 Q θ l σ 2 3 Q θ l σ a 3 Q θ l a 2 ).
(4.38)

Case 1. l=1,2,,d.

3 Q θ l β T β = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T ,
(4.39)
3 Q θ l β T σ = 1 σ 2 t = 1 n { ψ ( ε t a ε t 1 σ ) + ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ } ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T ,
(4.40)
3 Q θ l β T a = 1 σ t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 σ ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T 1 σ t = 1 n ψ ( ε t a ε t 1 σ ) { ( X t 1 , l ζ t 1 , l ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T + ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) ( X t 1 ζ t 1 ) T } ,
(4.41)
3 Q θ l σ 2 = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) ( ε t a ε t 1 ) 2 2 σ t = 1 n ψ ( ε t a ε t 1 σ ) ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) ( ε t a ε t 1 ) ,
(4.42)
3 Q θ l σ a = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ ε t 1 ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ( ε t a ε t 1 ) ( X t 1 , l ζ t 1 , l ) ,
(4.43)

and

3 Q θ l a 2 = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 2 ( X t l a X t 1 , l ( ζ t l a ζ t 1 , l ) ) 2 σ t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 ( X t 1 , l ζ t 1 , l ) .
(4.44)

Case 2. l=d+1.

3 Q σ β T β = 1 σ 2 t = 1 n ( ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ + ψ ( ε t a ε t 1 σ ) ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T ,
(4.45)
3 Q σ β T σ = 1 σ 3 t = 1 n { 2 ψ ( ε t a ε t 1 σ ) + ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ } ( ε t a ε t 1 ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T ,
(4.46)
3 Q σ β T a = 1 σ t = 1 n ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ ε t 1 σ ( X t a X t 1 ( ζ t a ζ t 1 ) ) T 1 σ t = 1 n ψ ( ε t a ε t 1 σ ) ( ε t 1 σ ( X t a X t 1 ( ζ t a ζ t 1 ) ) T + ε t a ε t 1 σ ) ,
(4.47)
3 Q σ 3 = 1 σ 3 t = 1 n ( 2 ψ ( ε t a ε t 1 σ ) + ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ ) ( ε t a ε t 1 ) 2 ,
(4.48)
3 Q σ 2 a = 1 σ 4 t = 1 n ψ ( ε t a ε t 1 σ ) ( ε t a ε t 1 ) 2 ε t 1 2 σ 3 t = 1 n ψ ( ε t a ε t 1 σ ) ( ε t a ε t 1 ) ε t 1 ,
(4.49)

and

3 Q σ a 2 = 1 σ 2 t = 1 n ( ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ + ψ ( ε t a ε t 1 σ ) ) ε t 1 2 .
(4.50)

Case 3. l=d+2.

3 Q a β T β = 1 σ t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 σ ( X t a X t 1 ( ζ t a ζ t 1 ) ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T 1 σ t = 1 n ψ ( ε t a ε t 1 σ ) ( X t 1 ζ t 1 ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T 1 σ t = 1 n ψ ( ε t a ε t 1 σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) ( X t 1 ζ t 1 ) T ,
(4.51)
3 Q a β T σ = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ( ( ε t a ε t 1 ) ( X t 1 ζ t 1 ) T + ε t 1 ( X t a X t 1 ( ζ t a ζ t 1 ) ) T ) 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 σ ( ε t a ε t 1 ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T ,
(4.52)
3 Q a β T a = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 2 ( X t a X t 1 ( ζ t a ζ t 1 ) ) T ,
(4.53)
3 Q a σ 2 = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 ( ε t a ε t 1 ) 2 2 σ t = 1 n ψ ( ε t a ε t 1 σ ) ( ε t a ε t 1 ) ε t 1 ,
(4.54)
3 Q a σ a = 1 σ 2 t = 1 n ( ψ ( ε t a ε t 1 σ ) ε t a ε t 1 σ + ψ ( ε t a ε t 1 σ ) ) ε t 1 2 ,
(4.55)

and

3 Q a 3 = 1 σ 2 t = 1 n ψ ( ε t a ε t 1 σ ) ε t 1 3 .
(4.56)

Similarly to the proof of Lemma 4.2, we can easily obtain Lemma 4.3; we omit the details. □

Lemma 4.4 (Prakasa Rao [38])

If $\{X_n, n\geq 1\}$ are independent random variables with $EX_n=0$ and $s_n^{-(2+\delta)}\sum_{j=1}^{n}E|X_j|^{2+\delta}\to 0$ for some $\delta>0$, then

$$ s_n^{-1}\sum_{j=1}^{n}X_j \xrightarrow{D} N(0,1), $$

where $s_n^2=\sum_{j=1}^{n}EX_j^2$.

Lemma 4.5 (Hall and Heyde [39])

Let $\{S_{ni}, \mathcal{F}_{ni}, 1\leq i\leq k_n, n\geq 1\}$ be a zero-mean, square-integrable martingale array with differences $X_{ni}$, and let $\eta^2$ be an a.s. finite random variable. Suppose that $\sum_i E\{X_{ni}^2 I(|X_{ni}|>\varepsilon)\mid \mathcal{F}_{n,i-1}\}\xrightarrow{p}0$ for all $\varepsilon>0$, and $\sum_i E\{X_{ni}^2\mid \mathcal{F}_{n,i-1}\}\xrightarrow{p}\eta^2$. Then

$$ S_{nk_n} = \sum_i X_{ni} \xrightarrow{D} Z, $$

where the r.v. $Z$ has characteristic function $E\{\exp(-\tfrac{1}{2}\eta^2 t^2)\}$.

Proof of Theorem 3.1 Expanding $\frac{\partial}{\partial\theta}Q(\hat{\theta}_n)$ about $\theta$, we have

$$ \frac{\partial}{\partial\theta}Q(\hat{\theta}_n) = \frac{\partial}{\partial\theta}Q(\theta) + (\hat{\theta}_n-\theta)\frac{\partial^2}{\partial\theta^T\,\partial\theta}Q(\theta) + \frac{1}{2}\bigl[\tilde{R}_{nl}(\bar{\theta},\hat{\theta}_n,\theta)\bigr]_{1\leq l\leq d+2}, $$
(4.57)

where $\bar{\theta}=s\hat{\theta}_n+(1-s)\theta$ for some $0\leq s\leq 1$ and

$$ \bigl[\tilde{R}_{nl}(\bar{\theta},\hat{\theta}_n,\theta)\bigr]_{1\leq l\leq d+2} = \bigl\{(\hat{\theta}_n-\theta)R_{n1}(\bar{\theta})(\hat{\theta}_n-\theta)^T, \dots, (\hat{\theta}_n-\theta)R_{n,d+2}(\bar{\theta})(\hat{\theta}_n-\theta)^T\bigr\}. $$

By (2.9)-(2.11) and (3.1), (3.2), we have

$$ 0 = -S_n(\theta) + (\hat{\theta}_n-\theta)F_n(\theta) + \frac{1}{2}\bigl[\tilde{R}_{nl}(\bar{\theta},\hat{\theta}_n,\theta)\bigr]_{1\leq l\leq d+2}. $$
(4.58)

By (4.58), we have

$$ (\hat{\theta}_n-\theta)D_n(\theta) = S_n(\theta) + (\hat{\theta}_n-\theta)\bigl(D_n(\theta)-F_n(\theta)\bigr) - \frac{1}{2}\bigl[\tilde{R}_{nl}(\bar{\theta},\hat{\theta}_n,\theta)\bigr]_{1\leq l\leq d+2}. $$
(4.59)

(1) |a|<1. By Lemma 4.3(1), Lemma 4.2(1), (2.9)-(2.11), and θ Θ 1 , we have

n 1 2 ( θ ˆ n θ ) D n ( θ ) = n 1 2 S n ( θ ) + o p ( 1 ) = n 1 2 { t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T , t = 1 n χ ( η t σ ) A n , t = 1 n ψ ( η t σ ) ε t 1 } + o p ( 1 ) = n 1 2 { t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T , t = 1 n ( χ ( η t σ ) A ) , t = 1 n ψ ( η t σ ) ε t 1 } + o p ( 1 ) .
(4.60)

Note that Var( S n (θ))=O(n), so we have

( θ ˆ n θ ) D n ( θ ) Var 1 2 ( S n ( θ ) ) = S n ( θ ) Var 1 2 ( S n ( θ ) ) + o p ( 1 ) = { [ S n ( θ ) ] β , σ [ Var 1 2 ( S n ( θ ) ) ] β , σ , t = 1 n ψ ( η t σ ) ε t 1 ( E ψ 2 ( η 1 σ ) Δ n ( a , σ ) ) 1 2 } + o p ( 1 ) .
(4.61)

By Lemma 2 of Silvapullé [25], we easily obtain

[ S n ( θ ) ] β , σ [ Var 1 2 ( S n ( θ ) ) ] β , σ D N(0, I d + 1 ),n.
(4.62)

In the following, we will prove that

$$ \sum_{t=1}^{n}\psi\Bigl(\frac{\eta_t}{\sigma}\Bigr)\varepsilon_{t-1}\Bigl(E\psi^2\Bigl(\frac{\eta_1}{\sigma}\Bigr)\Delta_n(a,\sigma)\Bigr)^{-\frac{1}{2}} \xrightarrow{D} N(0,1), \qquad n\to\infty. $$
(4.63)

Note that $\{\psi(\frac{\eta_t}{\sigma})\varepsilon_{t-1}(E\psi^2(\frac{\eta_1}{\sigma})\Delta_n(a,\sigma))^{-\frac{1}{2}}, \mathcal{H}_t\}$ is a martingale difference sequence, so we verify the Lindeberg conditions for convergence to normality.

From (1.2), we have

( 1 a 2 ) t = 2 n ε t 1 2 + ε n 2 ε 1 2 = t = 2 n ( ε t 2 a 2 ε t 1 2 ) = t = 2 n ( ε t a ε t 1 ) ( ε t + a ε t 1 ) = t = 2 n η t ( η t + 2 a ε t 1 ) = t = 2 n η t 2 + 2 a t = 2 n η t ε t 1 .
(4.64)

By Var( t = 2 n η t ε t 1 )= σ 2 Δ n (a,σ)= σ 2 n and Chebyshev inequality, we have

t = 2 n η t ε t 1 = O p ( n 1 2 ) .
(4.65)

Obviously, $n^{-1}\sum_{t=2}^{n}\eta_t^2 \xrightarrow{p} E\eta_1^2$. By (4.64) and $\max_{1\leq t\leq n}\varepsilon_t^2/n = o_p(1)$, we have

t = 2 n ε t 1 2 = O p (n).
(4.66)

By (4.65), we have

t = 1 n E ( ( E ψ 2 ( η 1 σ ) Δ n ( a , σ ) ) 1 ψ 2 ( η t σ ) ε t 1 2 | H t 1 ) = ( E ψ 2 ( η 1 σ ) Δ n ( a , σ ) ) 1 t = 1 n ε t 1 2 E ( ψ 2 ( η t σ ) ) = ( Δ n ( a , σ ) ) 1 t = 1 n ε t 1 2 = 1 + o p ( 1 ) .
(4.67)

For given $\delta>0$, there is a set whose probability approaches 1 as $n\to\infty$ on which $\max_{1\leq t\leq n}|\varepsilon_t/\sqrt{n}|\leq\delta$. On this event, for any $c>0$,

t = 1 n E { ψ 2 ( η t σ ) ε t 1 2 n I ( | ψ ( η t σ ) ε t 1 n | > c ) | H t 1 } = t = 1 n c y 2 d P { | ψ ( η t σ ) ε t 1 n | y | H t 1 } = t = 1 n ε t 1 2 n c ε t 1 n y 2 d P { | ψ ( η t σ ) | y | H t 1 } t = 1 n ε t 1 2 n c δ y 2 d P { | ψ ( η t σ ) | y | H t 1 } = o δ t = 1 n ε t 1 2 n = o δ O p ( 1 ) 0 , n .
(4.68)

Here $o_\delta \to 0$ as $\delta\to 0$. This verifies the Lindeberg condition, hence (4.63) follows from Lemma 4.5.

Note that $[S_n(\theta)]_{\beta,\sigma}[\operatorname{Var}^{-\frac{1}{2}}(S_n(\theta))]_{\beta,\sigma}$ is asymptotically independent of

t = 1 n ψ ( η t σ ) ε t 1 ( E ψ 2 ( η 1 σ ) Δ n ( a , σ ) ) 1 2 .

Therefore, we obtain Theorem 3.1(1) from (4.61)-(4.63).

(2) For $|a|=1$. By Lemma 4.3(2), Lemma 4.2(2), (2.9)-(2.11), (4.59), and $\theta\in\Theta_2$, we have

( θ ˆ n θ ) D n ( θ ) = S n ( θ ) + ( O p ( 1 ) , O p ( n 1 2 ) ) + ( O p ( n 1 2 ) , O p ( n 1 2 ) ) = ( t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T , t = 1 n χ ( η t σ ) A n , t = 1 n ψ ( η t σ ) ε t 1 ) + ( O p ( 1 ) , O p ( n 1 2 ) ) = ( t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T , t = 1 n ( χ ( η t σ ) A ) , t = 1 n ψ ( η t σ ) ε t 1 ) + ( O p ( 1 ) , O p ( n 1 2 ) ) .
(4.69)

Note that Var( S n (θ))=diag(O(n),O( n 2 )), so we have

( θ ˆ n θ ) D n ( θ ) Var 1 2 ( S n ( θ ) ) = { [ S n ( θ ) ] β , σ [ Var 1 2 ( S n ( θ ) ) ] β , σ , t = 1 n ψ ( η t σ ) ε t 1 ( E ψ 2 ( η 1 σ ) Δ n ( a , σ ) ) 1 2 } + o p ( 1 ) .
(4.70)

Similarly to the proof of (4.67) and (4.68), we have

t = 1 n E ( ( E ψ 2 ( η 1 σ ) Δ n ( a , σ ) ) 1 ψ 2 ( η t σ ) ε t 1 2 | H t 1 ) =1+ o p (1)
(4.71)

and

t = 1 n E { ψ 2 ( η t σ ) ε t 1 2 n 2 I ( | ψ ( η t σ ) ε t 1 n | > c ) | H t 1 } 0,n.
(4.72)

This verifies the Lindeberg conditions, hence (4.63) follows from Lemma 4.5. Similarly to the proof of Theorem 3.1(1), we easily prove Theorem 3.1(2).

(3) For $|a|>1$. By Lemma 4.3(3), Lemma 4.2(3), (2.9)-(2.11), (4.59), and $\theta\in\Theta_3$, we have

    ( θ ˆ n θ ) D n ( θ ) = S n ( θ ) + ( O p ( 1 ) , O p ( a n n 1 2 ) ) + ( O p ( n 1 2 ) , O p ( a n n 1 ) ) = ( t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T , t = 1 n χ ( η t σ ) A n , t = 1 n ψ ( η t σ ) ε t 1 ) + ( O p ( 1 ) , O p ( a n n 1 2 ) ) = ( t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T , t = 1 n ( χ ( η t σ ) A ) , t = 1 n ψ ( η t σ ) ε t 1 ) + ( O p ( 1 ) , O p ( a n n 1 2 ) ) .
    (4.73)

Note that Var( S n (θ))=diag(O(n),O( a 2 n )), so we have

( θ ˆ n θ ) D n ( θ ) Var 1 2 ( S n ( θ ) ) = { [ S n ( θ ) ] β , σ [ Var 1 2 ( S n ( θ ) ) ] β , σ , t = 1 n ψ ( η t σ ) ε t 1 ( E ψ 2 ( η 1 σ ) Δ n ( a , σ ) ) 1 2 } + o p ( 1 ) .
(4.74)

Similarly to the proof of Theorem 3.1(1), we easily prove Theorem 3.1(3). The proof of Theorem 3.1 is now complete. □

Proof of Theorem 3.2 Case 1. For $|a|<1$. A first step towards (3.15) is to show that

n 1 2 t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T N ( 0 , ( n 1 X n ( a ) + ( 1 + a 2 ) σ ζ 2 I d ) E ( ψ 2 ( η 1 σ ) ) ) , n .
(4.75)

Let u R d with |u|=1. Then

s n 2 = Var { n 1 2 t = 1 n ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T u } = ( n 1 u T X n ( a ) u + ( 1 + a 2 ) σ ζ 2 ) E ( ψ 2 ( η 1 σ ) ) = O ( 1 ) .
(4.76)

Thus,

s n ( 2 + δ ) t = 1 n E | n 1 2 ψ ( η t σ ) ( X t a X t 1 ( ζ t a ζ t 1 ) ) T u | 2 + δ = s n ( 2 + δ ) n 2 + δ 2 E | ψ ( η t σ ) | 2 + δ t = 1 n ( | ( X t a X t 1 ) T u | 2 + δ + E | ( ζ t a ζ t 1 ) T u | 2 + δ ) = O ( 1 ) n δ 2 E | ψ ( η t σ ) | 2 + δ ( max 1 t n | ( X t a X t 1 ) T u | 2 + δ + O ( 1 ) ) 0 , n .
(4.77)

By Lemma 4.4 and the Cramér-Wold device, (4.75) follows from (4.77).

Next we need to show that

n 1 2 t = 1 n ( χ ( η t σ ) A ) N ( 0 , Var ( χ ( η 1 σ ) ) ) ,n.
(4.78)

In fact,

( Var ( χ ( η 1 σ ) ) ) 2 + δ 2 t = 1 n E | n 1 2 ( χ ( η t σ ) A ) | 2 + δ = ( Var ( χ ( η 1 σ ) ) ) 2 + δ 2 n δ 2 E | χ ( η t σ ) A | 2 + δ 0 , n .
(4.79)

By Lemma 4.4, (4.78) follows from (4.79).

Finally, by (4.63), we easily prove that

n 1 2 t = 1 n ψ ( η t σ ) ε t 1 N ( 0 , σ 2 1 a 2 E ( ψ 2 ( η 1 σ ) ) ) ,n.
(4.80)

Case 2. For $|a|\geq 1$. Similarly to the proof of Case 1, we can easily prove this case.

This completes the proof of Theorem 3.2. □

5 Numerical example

In this section, we simulate the simple linear EV model (1.1) with (1.2), where $x_t=10\cos(\frac{2\pi t}{50})$, $\beta=3$, $n=50$, and $\zeta_t, \eta_t \sim N(0,1)$.

We take $\rho(u)=u(2\Phi(u)-1)+2\Phi'(u)-\frac{2}{\sqrt{2\pi}}$, $\psi(u)=2\Phi(u)-1$, $\chi(u)=\frac{2}{\sqrt{2\pi}}-2\Phi'(u)$, and $A_n=\frac{(\sqrt{2}-1)n}{\sqrt{\pi}}$, where $\Phi(\cdot)$ and $\Phi'(\cdot)$ are the distribution and density functions of the standard normal $N(0,1)$, respectively (Hu [1]). In the following, we compute the estimates using our method together with a quasi-Newton line search; a sketch of such a computation is given below.
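The sketch below follows the setup of this example with the $\rho$, $\psi$, $\chi$, and $A_n$ just given. It treats $\zeta_t$ as known inside the criterion and uses a Nelder-Mead search instead of the quasi-Newton line search, both assumptions of the sketch, so the resulting numbers will differ from those reported in Cases 1-3 below.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n, beta_true, a_true = 50, 3.0, 0.5

t = np.arange(1, n + 1)
x = 10 * np.cos(2 * np.pi * t / 50)           # true covariate x_t
eta, zeta = rng.normal(size=n), rng.normal(size=n)
eps = np.zeros(n)
eps[0] = eta[0]
for i in range(1, n):
    eps[i] = a_true * eps[i - 1] + eta[i]     # AR(1) errors (1.2)
y, X = x * beta_true + eps, x + zeta          # model (1.1)

def rho(u):
    # rho(u) = u(2 Phi(u) - 1) + 2 phi(u) - 2/sqrt(2 pi), so psi(u) = 2 Phi(u) - 1
    return u * (2 * norm.cdf(u) - 1) + 2 * norm.pdf(u) - 2 / np.sqrt(2 * np.pi)

A_n = (np.sqrt(2) - 1) * n / np.sqrt(np.pi)

def Q(params):
    beta, sigma, a = params
    if sigma <= 0:
        return np.inf
    e = y - (X - zeta) * beta                               # eps_t, with zeta assumed known
    r = np.concatenate(([e[0]], e[1:] - a * e[:-1]))        # eta_t = eps_t - a eps_{t-1}
    return sigma * np.sum(rho(r / sigma)) + A_n * sigma     # criterion (2.5)

res = minimize(Q, x0=np.array([1.0, 1.0, 0.0]), method="Nelder-Mead")
print(res.x)   # approximate (beta_hat, sigma_hat, a_hat)
```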

Case 1. For $a=0.5$, we have $\hat{\beta}_n=3.1125$, $\hat{a}_n=0.4074$, and $\hat{\sigma}_n=0.9940$; $\hat{\beta}_n$ and $\hat{\sigma}_n$ are approximately equal to $\beta$ and $\sigma$, respectively.

Case 2. For $a=1$, we have $\hat{\beta}_n=3.0766$, $\hat{a}_n=1.0000$, and $\hat{\sigma}_n=0.5311$. For $a=-1$, we have $\hat{\beta}_n=3.0572$, $\hat{a}_n=-1.0000$, and $\hat{\sigma}_n=0.5298$. $\hat{\beta}_n$ and $\hat{a}_n$ are approximately equal to $\beta$ and $a$, respectively.

Case 3. For $a=1.1$, we have $\hat{\beta}_n=3.0710$, $\hat{a}_n=1.1080$, and $\hat{\sigma}_n=0.5307$. $\hat{\beta}_n$ and $\hat{a}_n$ are approximately equal to $\beta$ and $a$, respectively.

The above results show that our estimation method is valid in some cases.

References

  1. Hu HH: Asymptotic normality of Huber-Dutter estimators in a linear model with AR(1) processes. J. Stat. Plan. Inference 2013,143(3):548–562. 10.1016/j.jspi.2012.08.012

  2. Maller RA: Asymptotics of regressions with stationary and nonstationary residuals. Stochastic Processes and Their Applications 2003, 105: 33–67.

  3. Pere P: Adjusted estimates and Wald statistics for the AR(1) model with constant. J. Econom. 2000, 98: 335–363. 10.1016/S0304-4076(00)00023-3

  4. Fuller WA: Introduction to Statistical Time Series. 2nd edition. Wiley, New York; 1996.

  5. Miao Y, Liu W: Moderate deviations for LS estimator in simple linear EV regression model. J. Stat. Plan. Inference 2009,139(9):3122–3131. 10.1016/j.jspi.2009.02.021

  6. Miao Y, Wang K, Zhao F: Some limit behaviors for the LS estimator in simple linear EV regression models. Stat. Probab. Lett. 2011, 81: 92–102. 10.1016/j.spl.2010.09.023

  7. Miao Y, Yang G, Shen L: The central limit theorem for LS estimator in simple linear EV regression models. Commun. Stat., Theory Methods 2007, 36: 2263–2272. 10.1080/03610920701215266

  8. Liu JX, Chen XR: Consistency of LS estimator in simple linear EV regression models. Acta Math. Sci. 2005, 25: 50–58.

  9. Cui HJ: Asymptotic normality of M -estimates in the EV model. J. Syst. Sci. Math. Sci. 1997,10(3):225–236.

  10. Cui HJ, Chen SX: Empirical likelihood confidence region for parameter in the errors-in-variables models. J. Multivar. Anal. 2003, 84: 101–115. 10.1016/S0047-259X(02)00017-9

  11. Cheng CL, Van Ness JW: Statistical Regression with Measurement Error. Arnold, London; 1999.

  12. Hamilton JD: Time Series Analysis. Princeton University Press, Princeton; 1994.

  13. Brockwell PJ, Davis RA: Time Series: Theory and Methods. Springer, New York; 1987.

  14. Baran S: A consistent estimator for linear models with dependent observations. Commun. Stat., Theory Methods 2004,33(10):2469–2486.

  15. Fan GL, Liang HY, Wang JF, Xu HX: Asymptotic properties for LS estimators in EV regression model with dependent errors. AStA Adv. Stat. Anal. 2010, 94: 89–103. 10.1007/s10182-010-0124-3

  16. Miao Y, Zhao F, Wang K: Central limit theorems for LS estimators in the EV regression model with dependent measurements. J. Korean Stat. Soc. 2011, 40: 303–312. 10.1016/j.jkss.2010.12.002

  17. Fuller WA: Measurement Error Models. Wiley, New York; 1987.

  18. Brown M: Robust line estimation with errors in both variables. J. Am. Stat. Assoc. 1982, 77: 71–79. 10.1080/01621459.1982.10477768

  19. Ketellapper RH, Ronner AE: Are robust estimation methods useful in the structural errors-in-variables model. Metrika 1984, 31: 33–41. 10.1007/BF01915180

  20. Zamar RH: Robust estimation in the errors-in-variables model. Biometrika 1989,76(1):149–160. 10.1093/biomet/76.1.149

  21. Cheng CL, Van Ness JW: Generalized M -estimators for errors-in-variables regression. Ann. Stat. 1992,20(1):385–397. 10.1214/aos/1176348528

  22. He X, Liang H: Quantile regression estimates for a class of linear and partially linear errors-in-variables models. Stat. Sin. 2000, 10: 129–140.

  23. Fekri M, Ruiz-Gazen A: Robust weighted orthogonal regression in the errors-in-variables model. J. Multivar. Anal. 2004, 88: 89–108. 10.1016/S0047-259X(03)00057-5

  24. Wu WB: M -Estimation of linear models with dependent errors. Ann. Stat. 2007,35(2):495–521. 10.1214/009053606000001406

  25. Silvapullé MJ: Asymptotic behavior of robust estimators of regression and scale parameters with fixed carriers. Ann. Stat. 1985,13(4):1490–1497. 10.1214/aos/1176349750

  26. Hampel FR, Ronchetti EM, Rousseeuw PJ, et al.: Robust Statistics. Wiley, New York; 1986.

  27. Huber PJ, Ronchetti EM: Robust Statistics. 2nd edition. Wiley, New York; 2009.

  28. Li L: On Koul’s minimum distance estimators in the regression models with long memory moving averages. Stochastic Processes and Their Applications 2003, 105: 257–269.

  29. Babu GJ: Strong representations for LAD estimators in linear models. Probab. Theory Relat. Fields 1989, 83: 547–558. 10.1007/BF01845702

  30. Salibian-Barrera M, Zamar RH: Bootstrapping robust estimates of regression. Ann. Stat. 2002,30(2):556–582. 10.1214/aos/1021379865

  31. Wu R, Wang Q: Shrinkage estimation for linear regression with ARMA errors. J. Stat. Plan. Inference 2012, 142: 2136–2148. 10.1016/j.jspi.2012.02.047

  32. Zhou Z, Wu W: On linear models with long memory and heavy-tailed errors. J. Multivar. Anal. 2011, 102: 349–362. 10.1016/j.jmva.2010.09.009

  33. Tong XW, Cui HJ, Yu P: Consistency and normality of Huber-Dutter estimators for partial linear model. Sci. China Ser. A 2008,51(10):1831–1842. 10.1007/s11425-008-0028-9

  34. Arcones MA:The Bahadur-Kiefer representation of L p regression estimators. Econom. Theory 1996, 12: 257–283. 10.1017/S0266466600006587

  35. Zeckhauser R, Thompson M: Linear regression with non-normal error terms. Rev. Econ. Stat. 1970, 52: 280–286. 10.2307/1926296

  36. Ronner, AE: P-norm estimators in a linear regression model. Ph.D. thesis, Drukkerijenbv-Groningen, The Netherlands (1977)

  37. Ronner AE: Asymptotic normality of p -norm estimators in multiple regression. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1984, 66: 613–620. 10.1007/BF00531893

  38. Prakasa Rao BLS: Asymptotic Theory of Statistical Inference. Wiley, New York; 1987.

  39. Hall P, Heyde CC: Martingale Limit Theory and Its Application. Academic Press, New York; 1980.

Acknowledgements

The first author’s work was supported by the Natural Science Foundation of China (No. 11471105). The second author’s work was supported by the Natural Science Foundation of China (No. 41374017).

Author information

Correspondence to Hongchang Hu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Hu, H., Pan, X. Asymptotic normality of Huber-Dutter estimators in a linear EV model with AR(1) processes. J Inequal Appl 2014, 474 (2014). https://doi.org/10.1186/1029-242X-2014-474
