
Asymptotic properties of wavelet-based estimator in nonparametric regression model with weakly dependent processes

Abstract

In this paper, we consider a nonparametric regression model with replicated observations based on φ-mixing and ρ-mixing error structures, respectively, to capture dependence among the units. Wavelet procedures are developed to estimate the regression function. Under suitable conditions, we obtain expansions for the bias and the variance of the wavelet estimator, prove its moment consistency, strong consistency and strong convergence rate, and establish its asymptotic normality.

1 Introduction

Consider the following nonparametric regression model:

$$Y(x)=g(x)+e(x),$$
(1.1)

from a discrete set of observations of the process $Y(\cdot)$ at the points $\{x_i,1\le i\le n\}$, where $\{e(\cdot)\}$ is a zero-mean stochastic process defined on a probability space $(\Omega,\mathcal{A},P)$, and $g(x)$ is an unknown function defined on the closed interval $I=[0,1]$.

It is well known that the regression model has a wide range of applications in filtering and prediction in communications and control systems, pattern recognition, classification and econometrics, and is an important tool of data analysis. Weighted-function estimates of g have been of much interest for this problem; see, for example, Priestley and Chao [1], Gasser and Müller [2, 3], Prakasa Rao [4], Clark [5] and the references therein for the independent case, and Roussas [6], Fan [7], Roussas and Tran [8], Liang and Jing [9], Yang and Li [10], Yang [11] for various dependent cases.

In this article, we discuss a nonparametric estimation problem in the model (1.1) with repeated measurements. We assume that a random sample of $m$ experimental units is available and the observed data for the $j$th unit are the values, $Y^{(j)}(x_i)$ ($i=1,\dots,n$), of a response variable corresponding to the values, $x_i$ ($i=1,\dots,n$), of a controlled variable. Let $\{(Y^{(j)}(x_i),x_i),1\le j\le m,1\le i\le n\}$ obey the model (1.1), i.e.,

$$Y^{(j)}(x_i)=g(x_i)+e^{(j)}(x_i),$$
(1.2)

where the $x_i$'s are fixed with $0\le x_1\le\cdots\le x_n\le1$, and the $e^{(j)}(x_i)$'s are zero-mean random errors. The model can be applied in many fields. For instance, in hydrology many phenomena may be represented by a sequence of continuous responses $y^{(j)}(x_i)$, where $x_i$ represents the time elapsed from the beginning of a certain year, $j$ indicates the corresponding year, and $e^{(j)}(x_i)$ is the deviation from the annual mean $g(x_i)$; in some biological phenomena, such as the growth of individuals (or populations), $y^{(j)}(x_i)$ is the growth measurement of the $j$th individual, $e^{(j)}(x_i)$ the deviation from the mean growth $g(x_i)$ of the response of the $j$th individual, and $x_i$ the point where the measurement is taken [12]. It is clear that the observations $Y^{(j)}(x_i)$ ($i=1,\dots,n$) made on the same experimental unit will in general be correlated. Hart and Wehrly [13] studied the asymptotic mean squared error of a kernel estimator in the model with correlated within-unit zero-mean errors, that is, $\operatorname{cov}(e^{(j)}(x_i),e^{(k)}(x_l))=\sigma^2\rho(x_i-x_l)$ for $j=k$ and zero otherwise, where $\rho(\cdot)$ is a correlation function. The assumption of uncorrelatedness between units is, however, unrealistic. In practice, the observed responses in different units are sometimes also correlated; more precisely, the sequence of response curves $\{Y^{(j)}(\cdot),j\ge1\}$ has an intrinsic dependence structure, such as a mixing condition. Under a weak error structure among units, Fraiman and Iribarren [12] proposed nonparametric estimates of $g(\cdot)$ in model (1.2) based on locally weighted averages, and gave their local and global asymptotic behaviors.
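To fix ideas, the following minimal sketch simulates data from model (1.2). Everything concrete in it is an illustrative assumption rather than the paper's specification: the regression function $g$, the sample sizes, and in particular the MA(1) coupling of errors across units, chosen because a 1-dependent sequence is trivially φ-mixing.

```python
import numpy as np

# Minimal simulation sketch of model (1.2): Y^(j)(x_i) = g(x_i) + e^(j)(x_i).
# The regression function g, the sizes m and n, and the MA(1) coupling of
# errors across units j (a 1-dependent, hence phi-mixing, sequence) are all
# illustrative assumptions, not the paper's specification.
rng = np.random.default_rng(0)

def g(x):
    return np.sin(2 * np.pi * x)              # hypothetical regression function

m, n, theta = 50, 128, 0.5                    # units, design points, MA(1) weight
x = (np.arange(1, n + 1) - 0.5) / n           # fixed design 0 <= x_1 <= ... <= 1

z = rng.standard_normal((m + 1, n))           # innovations, one row per unit
e = z[1:] + theta * z[:-1]                    # e^(j) depends on unit j-1 only
Y = g(x)[None, :] + e                         # Y[j-1, i-1] stores Y^(j)(x_i)
```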

In this paper, we develop wavelet methods to estimate the regression function in model (1.2) with φ-mixing and ρ-mixing error structures, respectively; that is, $\{e^{(j)}(\cdot),j\ge1\}$ is a φ-mixing or ρ-mixing process. One motivation for using wavelets in model (1.2) is that most nonparametric analyses of regression models rest on the assumption that the regression function is smooth, an assumption that may fail in practice; a major advantage of wavelets is that the required degree of smoothness of the underlying function is less restrictive. Up to now, there have been no results on wavelet estimation for model (1.2). Another motivation for considering model (1.2) with weakly dependent processes is that we wish to avoid, as far as possible, any assumptions on the error structure within $\{e^{(j)}(x_1),\dots,e^{(j)}(x_n)\}$ for each $j$ ($j=1,\dots,m$), while at the same time allowing weak dependence among the units.

For a systematic discussion of wavelets and their applications in statistics, see the recent monographs by Härdle et al. [14] and Vidakovic [15]. Due to their ability to adapt to local features of unknown curves, many authors have applied wavelet procedures to estimate the general nonparametric model. See recent works, for example, Antoniadis et al. [16] and Xue [17] on independent errors; Johnstone and Silverman [18] for correlated noise; Liang et al. [19] on martingale difference errors; Li and Xiao [20] for long memory data; Li et al. [21] on associated samples; Xue [22], Sun and Chai [23], Li and Guo [24] and Liang [25] on mixing error assumptions.

For dealing with weakly dependent data, bootstrap and blockwise methods are well known. They are useful resampling techniques, which can preserve the dependence properties of the data by appropriately choosing blocks of data; they have been investigated extensively, for example, by Bühlmann and Künsch [26], Yuichi [27], Lahiri [28], Lin and Zhang [29, 30] and Lin et al. [31]. For the nonparametric regression model without repeated observations under weakly dependent processes, Lin and Zhang [30] adopted the bootstrap wavelet and the blockwise bootstrap wavelet, respectively, to generate an independent blockwise sample from the original dependent data, defined the corresponding wavelet estimators of $g(\cdot)$, and then took advantage of the independence of the blockwise sample to prove some asymptotic properties of these estimators. In addition, the weak dependence conditions imposed in Lin and Zhang [30] are weaker than the conditions required for the consistency and asymptotic normality of ρ-mixing and φ-mixing sequences. In the present paper, we consider the nonparametric regression model with repeated observations under the specific ρ-mixing and φ-mixing dependent processes. We do not use bootstrap or blockwise resampling to define the wavelet estimator of $g(\cdot)$; instead, we construct it by the simple formula (2.2) below, and show some asymptotic behaviors of the wavelet estimator directly, by applying fundamental properties of ρ-mixing and φ-mixing sequences and some proof techniques.

Recall the definitions of φ-mixing and ρ-mixing sequences of random variables. Let $\{X_m,m\ge1\}$ be a sequence of random variables defined on a probability space $(\Omega,\mathcal{F},P)$, let $\mathcal{F}_k^l=\sigma(X_i,k\le i\le l)$ be the σ-algebra generated by $X_k,\dots,X_l$, and let $L_2(\mathcal{F}_k^l)$ denote the set of all $\mathcal{F}_k^l$-measurable random variables with finite second moments.

A sequence of random variables $\{X_m,m\ge1\}$ is said to be φ-mixing if

$$\varphi(m)=\sup_{k\ge1,\,A\in\mathcal{F}_1^k,\,P(A)\ne0,\,B\in\mathcal{F}_{k+m}^\infty}\big|P(B\mid A)-P(B)\big|\to0\quad\text{as }m\to\infty.$$

A sequence of random variables $\{X_m,m\ge1\}$ is said to be ρ-mixing if the maximal correlation coefficient

$$\rho(m)=\sup_{k\ge1,\,X\in L_2(\mathcal{F}_1^k),\,Y\in L_2(\mathcal{F}_{k+m}^\infty)}\frac{|\operatorname{cov}(X,Y)|}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}\to0\quad\text{as }m\to\infty.$$
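For a concrete example of the definitions above (an illustration, not part of the paper): any $q$-dependent sequence, such as a moving average of order $q$, is both φ-mixing and ρ-mixing with $\varphi(m)=\rho(m)=0$ for $m>q$, since the σ-algebras $\mathcal{F}_1^k$ and $\mathcal{F}_{k+m}^\infty$ are then independent. The sketch below checks the weaker, observable consequence that lag-$m$ autocorrelations of an MA(1) vanish for $m>1$.

```python
import numpy as np

# Illustration (not from the paper): X_t = z_t + 0.5 * z_{t-1} with i.i.d.
# z_t is 1-dependent, so F_1^k and F_{k+m}^inf are independent for m > 1
# and phi(m) = rho(m) = 0 there. We check the weaker, observable consequence
# that empirical autocorrelations vanish beyond lag 1.
rng = np.random.default_rng(1)
z = rng.standard_normal(200_001)
X = z[1:] + 0.5 * z[:-1]

for lag in (1, 2, 5):
    r = np.corrcoef(X[:-lag], X[lag:])[0, 1]
    print(f"lag {lag}: autocorrelation = {r:+.4f}")
# Theory: 0.5 / (1 + 0.25) = 0.4 at lag 1, and 0 at every larger lag.
```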

The concept of a mixing sequence is central in many areas of economics, finance and other sciences. A mixing time series can be viewed as a sequence of random variables for which the past and distant future are asymptotically independent. A number of limit theorems for φ-mixing and ρ-mixing random variables have been studied by many authors. For example, see Shao [32], Peligrad [33], Utev [34], Kiesel [35], Chen et al. [36] and Zhou [37] for φ-mixing; Peligrad [38], Peligrad and Shao [39, 40], Shao [41] and Bradley [42] for ρ-mixing. Some limit theories can be found in the monograph of Lin and Lu [43].

The article is structured as follows. In Section 2, we introduce the wavelet estimation procedures and establish main results. The proofs of the main results are provided in Section 3.

2 Estimators and main results

Defining $\bar Y(x_i)=\sum_{j=1}^mY^{(j)}(x_i)/m$, from (1.2) we have

$$\bar Y(x_i)=g(x_i)+\bar e(x_i),$$
(2.1)

where $\bar e(x_i)=\sum_{j=1}^me^{(j)}(x_i)/m$. Expressing the model in this way is useful, since the problem of estimating $g(\cdot)$ may now be regarded as that of fitting a curve through the sample means $\bar Y(x_i)$. The wavelet technique is applied to estimate the regression function in model (1.2); the detailed procedure is summarized below.

For convenience, we introduce some notation and definitions along the lines of Antoniadis et al. [16]. Suppose that $\phi(\cdot)$ is a given scaling function in the Schwartz space of order $l$. A multiresolution analysis of $L_2(\mathbb{R})$ consists of an increasing sequence of closed subspaces $V_k$, $k=\dots,-2,-1,0,1,2,\dots$, where $L_2(\mathbb{R})$ is the set of square integrable functions over the real line. The integral kernel associated with $V_k$ is given by

$$E_k(x,s)=2^kE_0\big(2^kx,2^ks\big)=2^k\sum_{l\in\mathbb{Z}}\phi\big(2^kx-l\big)\phi\big(2^ks-l\big),$$

where $\mathbb{Z}$ denotes the set of integers and $k=k(n)>0$ is an integer depending only on $n$. Let $A_i=[s_{i-1},s_i]$ be a partition of the interval $[0,1]$ with $x_i\in A_i$ for $1\le i\le n$. From (2.1), we now construct the wavelet estimator of $g$:

$$\hat g_{m,n}(x)=\sum_{i=1}^n\bar Y(x_i)\int_{A_i}E_k(x,s)\,ds=\frac1m\sum_{j=1}^m\sum_{i=1}^nY^{(j)}(x_i)\int_{A_i}E_k(x,s)\,ds.$$
(2.2)
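As an implementation sketch of (2.2) (not the paper's choice of scaling function), one may take $\phi$ to be the Haar scaling function $\mathbf{1}_{[0,1)}$, for which $E_k(x,s)=2^k\mathbf{1}\{\lfloor2^kx\rfloor=\lfloor2^ks\rfloor\}$ and the weights $\int_{A_i}E_k(x,s)\,ds$ are explicit overlap lengths. Note that Haar is not Lipschitz and so does not satisfy the smoothness condition (A6) imposed below; it is used here only because its kernel is available in closed form.

```python
import numpy as np

# Sketch of estimator (2.2) for the Haar scaling function phi = 1_[0,1)
# (illustration only: Haar is not Lipschitz, so it does not satisfy (A6)).
# For Haar, E_k(x, s) = 2^k * 1{floor(2^k x) == floor(2^k s)}, hence
# int_{A_i} E_k(x, s) ds = 2^k * |A_i n I_k(x)|, where I_k(x) is the dyadic
# cell of length 2^-k containing x.

def haar_estimate(x0, Ybar, s, k):
    """ghat_{m,n}(x0); Ybar[i] is the mean response at x_i and s[0..n] are
    the partition knots s_0 <= ... <= s_n of [0, 1]."""
    cell = np.floor(2**k * x0)
    lo, hi = cell / 2**k, (cell + 1) / 2**k            # dyadic cell I_k(x0)
    overlap = np.clip(np.minimum(s[1:], hi) - np.maximum(s[:-1], lo), 0, None)
    return float(np.sum(Ybar * 2**k * overlap))        # weights sum to 1

# Quick demo on noiseless data: the estimator returns a local average of g.
n, k = 256, 4
s = np.linspace(0, 1, n + 1)                           # A_i = [s_{i-1}, s_i]
x = (s[:-1] + s[1:]) / 2                               # x_i in A_i
Ybar = np.sin(2 * np.pi * x)                           # hypothetical g, no noise
print(haar_estimate(0.3, Ybar, s, k), np.sin(2 * np.pi * 0.3))
```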

In the sequel, let $C,C_1,\dots$ denote generic finite positive constants, whose values are unimportant and may change from line to line. Set $\|X\|_r=(E|X|^r)^{1/r}$ and suppose that $Ee^{(j)}(x)=0$ for $j\ge1$ and $x\in I$.

Before formulating the main results, we first give some assumptions, which are quite mild and can be easily satisfied.

A1.

(i) $\{e^{(j)}(x),j\ge1\}$ is a sequence of φ-mixing (ρ-mixing) random variables;

(ii) $\{e^{(j)}(x),j\ge1\}$ is a sequence of identically distributed φ-mixing (ρ-mixing) random variables;

(iii) $\{e^{(j)}(x),j\ge1\}$ is a sequence of strictly stationary φ-mixing (ρ-mixing) random variables.

A2.

(i) $\sup_{j\ge1,x\in I}E|e^{(j)}(x)|^2\le K<\infty$;

(ii) $\sum_{n=1}^\infty\varphi^{1/2}(n)<\infty$ ($\sum_{n=1}^\infty\rho(n)<\infty$);

(iii) $\sup_{j\ge1,x\in I}E|e^{(j)}(x)|^{2+\delta}\le K<\infty$ for some $\delta>0$;

(iv) $\sum_{n=1}^\infty\varphi^{(1-a)/2}(n)<\infty$ ($\sum_{n=1}^\infty\rho^{1-a}(n)<\infty$) for some $0<a<1$.

A3. $\sigma^2=E(e^{(1)}(x))^2+2\sum_{j=1}^\infty E(e^{(1)}(x)e^{(j+1)}(x))>0$.

A4. $g(\cdot)\in H^\upsilon$ for some $\upsilon>1/2$, where $H^\upsilon=\{h:\int|\hat h(w)|^2(1+w^2)^\upsilon\,dw<\infty\}$ is the Sobolev space of order $\upsilon$ and $\hat h$ denotes the Fourier transform of $h$.

A5. $g(\cdot)$ satisfies a Lipschitz condition of order $\gamma>0$.

A6. $\phi(\cdot)$ is $q$-regular with $q>\upsilon$, satisfies a Lipschitz condition of order 1, has compact support, and satisfies $|\hat\phi(\xi)-1|=O(\xi)$ as $\xi\to0$, where $\hat\phi(\cdot)$ is the Fourier transform of $\phi(\cdot)$.

A7. $\max_{1\le i\le n}|s_i-s_{i-1}|=O(1/n)$.

A8. $k=k(n)\to\infty$ as $n\to\infty$, and $n=n(m)\to\infty$ as $m\to\infty$, such that $m/n^{2\gamma}\to0$ and $n^{2\gamma}=O(2^{(2\upsilon-1)k})$, where $\upsilon$ and $\gamma$ are defined in (A4) and (A5), respectively.

Remark 2.1 We refer to the monograph of Doukhan [44] for properties of φ-mixing and ρ-mixing, and more mixing conditions.

Remark 2.2 It is well known that (A4)-(A7) are mild regularity conditions for wavelet smoothing; see Antoniadis et al. [16], Chai and Xu [45], Xue [22], Sun and Chai [23], Zhou and You [46] and Li and Guo [24].

Remark 2.3 (A8) is easily satisfied. For example, take $n=m^{(1+b)/(2\gamma)}$ for any $b>0$, and $k=d\log_2n-(2\upsilon-1)^{-1}\log_2c$ with $d=2\gamma/(2\upsilon-1)$ and $c>0$; then $m/n^{2\gamma}=m^{-b}\to0$ and $n^{2\gamma}=c\,2^{(2\upsilon-1)k}$.
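A quick numeric check of this remark, for illustrative (not prescribed) parameter values, confirms that both requirements of (A8) hold along such a sequence:

```python
import numpy as np

# Numeric check of Remark 2.3 for illustrative parameter values (these
# particular numbers are assumptions, not prescribed by the paper).
gamma, upsilon, b, c = 1.0, 2.0, 0.5, 1.0
d = 2 * gamma / (2 * upsilon - 1)

for m in (10**2, 10**4, 10**6):
    n = m ** ((1 + b) / (2 * gamma))            # n = m^{(1+b)/(2 gamma)}
    k = d * np.log2(n) - np.log2(c) / (2 * upsilon - 1)
    # (A8) requires m / n^{2 gamma} -> 0 and n^{2 gamma} = O(2^{(2u-1)k}).
    print(f"m={m:>7}: m/n^(2gamma) = {m / n**(2*gamma):.1e}, "
          f"n^(2gamma)/2^((2u-1)k) = {n**(2*gamma) / 2**((2*upsilon-1)*k):.2f}")
```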

Our results are listed as follows.

Theorem 2.1 Assume that (A1)(i), (A2)(i), and (A4)-(A7) are satisfied. Then

(a) $\operatorname{Bias}(\hat g_{m,n}(x))=O(n^{-\gamma})+O(\tau_k)$, where $\tau_k$ is defined in Lemma 3.2;

(b) φ-mixing: $\operatorname{Var}(\hat g_{m,n}(x))=O\big(\big(1+2\sum_{i=1}^m\varphi^{1/2}(i)\big)/m\big)$;

ρ-mixing: $\operatorname{Var}(\hat g_{m,n}(x))=O\big(\big(1+2\sum_{i=1}^m\rho(i)\big)/m\big)$.
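The $O(1/m)$ variance bound in (b) can be illustrated by a small Monte Carlo experiment. The sketch below uses the Haar special case of (2.2) on an evenly spaced design, where the estimator at $x$ reduces to the mean of $\bar Y$ over the dyadic cell containing $x$, together with MA(1)-dependent unit errors; all of these modeling choices are assumptions for illustration only.

```python
import numpy as np

# Monte Carlo sketch of Theorem 2.1(b): with 1-dependent (hence phi-mixing)
# unit errors, Var(ghat_{m,n}(x)) should scale like C/m, so m * Var stays
# roughly constant as m grows. Haar cell-mean form of (2.2); all modeling
# choices (g, theta, sizes) are illustrative assumptions.
rng = np.random.default_rng(2)
n, k, x0, theta, reps = 256, 4, 0.3, 0.5, 500
x = (np.arange(n) + 0.5) / n
cell = np.floor(2**k * x) == np.floor(2**k * x0)   # design points in I_k(x0)
gx = np.sin(2 * np.pi * x)                         # hypothetical g

for m in (25, 100, 400):
    est = np.empty(reps)
    for r in range(reps):
        z = rng.standard_normal((m + 1, n))
        e = z[1:] + theta * z[:-1]                 # MA(1) across units j
        Ybar = (gx + e).mean(axis=0)               # unit-averaged responses
        est[r] = Ybar[cell].mean()                 # Haar estimator at x0
    print(f"m={m:>3}: m * Var(ghat) = {m * est.var():.3f}")
```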

Theorem 2.2 Under (A1)(i), (A2)(i)(ii), and (A4)-(A7), we have

$$\lim_{\min(m,n)\to\infty}\sup_{x\in I}E\big|\hat g_{m,n}(x)-g(x)\big|^2=0.$$

Theorem 2.3 Assume that [(A1)(ii) and (A2)(i)(ii)] or [(A1)(i) and (A2)(iii)], and (A4)-(A7) are satisfied. Then

$$\lim_{\min(m,n)\to\infty}\big|\hat g_{m,n}(x)-g(x)\big|=0\quad\text{a.s.}$$

Theorem 2.4 Assume that (A1)(ii), (A2)(i)(ii), and (A4)-(A7) are satisfied. If $\upsilon>3/2$, $k=c\log_2n$, and $n=O(m^{d/c})$, where $0<c\le1/3$ and $0<d<1/2$, then

$$\sup_{x\in I}\big|\hat g_{m,n}(x)-g(x)\big|=O\big(2^{-k}\big)\quad\text{a.s.}$$

Theorem 2.5 Assume that (A1)(iii), (A2)(i)-(iv), (A3), and (A4)-(A8) are satisfied. For a fixed $x$ and each $\varsigma>0$, suppose there exists $\eta=\eta(\varsigma)$ satisfying $\sup_{|h|\le\varsigma}E|e^{(j)}(x+h)-e^{(j)}(x)|^2\le\eta(\varsigma)$, where $\eta(\varsigma)\to0$ as $\varsigma\to0$. Then

$$m^{1/2}\big(\hat g_{m,n}(x)-g(x)\big)\xrightarrow{d}N\big(0,\sigma^2\big),$$

where $\xrightarrow{d}$ denotes convergence in distribution.
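Under strongly simplified, illustrative assumptions one can watch this normalization stabilize numerically: take the unit error curves constant in $x$ (so the continuity condition holds with $\eta\equiv0$) and MA(1)-dependent across units, for which the series in (A3) gives $\sigma^2=(1+\theta)^2$. The sketch centers at $E\hat g_{m,n}(x)$ to isolate the stochastic term, since the deterministic bias is handled separately under (A8).

```python
import numpy as np

# Monte Carlo sketch of Theorem 2.5 under illustrative assumptions: unit
# error curves constant in x (so eta = 0 in the continuity condition) and
# MA(1)-dependent across units, e^(j) = z_j + theta * z_{j-1}, for which
# (A3) gives sigma^2 = 1 + theta^2 + 2*theta = (1 + theta)^2. Since the
# errors are constant in x, ghat - E ghat = mean_j e^(j) exactly, and we
# check sqrt(m) * (ghat - E ghat) / sigma against N(0, 1).
rng = np.random.default_rng(3)
m, theta, reps = 400, 0.5, 5000
sigma = 1 + theta

stats = np.empty(reps)
for r in range(reps):
    z = rng.standard_normal(m + 1)
    e = z[1:] + theta * z[:-1]                 # unit errors e^(1..m)
    stats[r] = np.sqrt(m) * e.mean() / sigma

print(f"mean = {stats.mean():+.3f}, sd = {stats.std():.3f}")  # expect ~0, ~1
```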

3 Proofs of the main results

In order to prove the main results, we first present several lemmas.

Lemma 3.1 Suppose that (A6) holds. We have

(a) $\sup_{0\le x,s\le1}|E_k(x,s)|=O(2^k)$;

(b) $\sup_{0\le x\le1}\int_0^1|E_k(x,s)|\,ds\le C$;

(c) $\int_0^1E_k(x,s)\,ds\to1$ uniformly in $x\in I$, as $k\to\infty$.

The proofs of (a)-(b) and of (c) can be found in Antoniadis et al. [16] and Walter [47], respectively.
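In the Haar special case used for illustration earlier, where $E_k(x,s)=2^k\mathbf{1}\{\lfloor2^kx\rfloor=\lfloor2^ks\rfloor\}$, the three statements of Lemma 3.1 can be checked numerically, and (b) and (c) hold with equality since the kernel is nonnegative:

```python
import numpy as np

# Numeric check of Lemma 3.1 for the Haar kernel (illustration only):
# (a) sup |E_k| = 2^k; (b) int_0^1 |E_k(x,s)| ds <= C (here = 1);
# (c) int_0^1 E_k(x,s) ds -> 1 (here exactly 1, as E_k >= 0).
def E_k(x, s, k):
    return 2.0**k * (np.floor(2**k * x) == np.floor(2**k * s))

k, N = 5, 20_000
dx = 1.0 / N
s = (np.arange(N) + 0.5) * dx              # midpoint grid on [0, 1]
for x in (0.1, 0.5, 0.9):
    vals = E_k(x, s, k)
    print(f"x={x}: sup={vals.max():.0f} (2^k={2**k}), "
          f"integral={vals.sum() * dx:.4f}")
```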

Lemma 3.2 Suppose that (A6)-(A7) hold, and $h(\cdot)$ satisfies (A4)-(A5). Then

$$\sup_{0\le x\le1}\Big|h(x)-\sum_{i=1}^nh(x_i)\int_{A_i}E_k(x,s)\,ds\Big|=O\big(n^{-\gamma}\big)+O(\tau_k),$$

where

$$\tau_k=\begin{cases}\big(1/2^k\big)^{\upsilon-1/2}&\text{if }1/2<\upsilon<3/2,\\\sqrt{k}/2^k&\text{if }\upsilon=3/2,\\1/2^k&\text{if }\upsilon>3/2.\end{cases}$$

It follows easily from Theorem 3.2 of Antoniadis et al. [16].

Lemma 3.3 (a) Let $\{X_n,n\ge1\}$ be a φ-mixing sequence, $X\in L_p(\mathcal{F}_1^k)$, $Y\in L_q(\mathcal{F}_{k+n}^\infty)$ with $p,q\ge1$ and $1/p+1/q=1$. Then

$$|EXY-EXEY|\le2\varphi^{1/p}(n)\|X\|_p\|Y\|_q.$$

(b) Let $\{X_n,n\ge1\}$ be a ρ-mixing sequence, $X\in L_p(\mathcal{F}_1^k)$, $Y\in L_q(\mathcal{F}_{k+n}^\infty)$ with $p,q\ge1$ and $1/p+1/q=1$. Then

$$|EXY-EXEY|\le4\rho(n)^{(2/p)(2/q)}\|X\|_p\|Y\|_q.$$

Lemmas 3.3(a) and (b) come from Lemmas 10.1.d and 10.1.c of Lin and Lu [43], respectively.

Let $S_j=\sum_{i=1}^jX_i$ for $j\ge1$, and $S_l(i)=\sum_{j=l+1}^{l+i}X_j$ for $i\ge1$ and $l\ge0$. The following Lemma 3.4(a) and (b) can be found in Shao [32] and Shao [41], respectively.

Lemma 3.4 (a) Let $\{X_n,n\ge1\}$ be a φ-mixing sequence.

(i) If $EX_i=0$, then

$$ES_l^2(i)\le8000\,i\exp\Big\{6\sum_{j=1}^{[\log i]}\varphi^{1/2}(2^j)\Big\}\max_{l+1\le j\le l+i}EX_j^2.$$

(ii) Suppose that there exists an array $\{c_{ln}\}$ of positive numbers such that $\max_{1\le i\le n}ES_l^2(i)\le c_{ln}$ for every $l\ge0$, $n\ge1$. Then, for any $q\ge2$, there exists a positive constant $C=C(q,\varphi(\cdot))$ such that

$$E\max_{1\le i\le n}|S_l(i)|^q\le C\Big(c_{ln}^{q/2}+E\max_{l<i\le l+n}|X_i|^q\Big).$$

(b) Let $\{X_n,n\ge1\}$ be a ρ-mixing sequence with $EX_i=0$. Then, for any $q\ge2$, there exists a positive constant $C=C(q,\rho(\cdot))$ such that

$$E\max_{1\le j\le n}|S_j|^q\le C\Big(n^{q/2}\exp\Big\{C\sum_{j=1}^{[\log n]}\rho(2^j)\Big\}\max_{1\le j\le n}\big(E|X_j|^2\big)^{q/2}+n\exp\Big\{C\sum_{j=1}^{[\log n]}\rho^{2/q}(2^j)\Big\}\max_{1\le j\le n}E|X_j|^q\Big).$$

Lemma 3.5 Let $\{X_n,n\ge1\}$ be a φ-mixing (ρ-mixing) sequence of identically distributed random variables with

$$\sum_{n=1}^\infty\varphi^{1/2}(2^n)<\infty\quad\Big(\sum_{n=1}^\infty\rho(2^n)<\infty\Big),\qquad E|X_1|^r<\infty$$

for some $1\le r<2$. Then

$$\frac1n\sum_{i=1}^n(X_i-EX_i)=o\big(n^{-(1-1/r)}\big)\quad\text{a.s.}$$

Lemma 3.5 can be found in Theorem 8.2.2 of Lin and Lu [43]. Ibragimov [48, 49] gave the following Lemma 3.6, which can also be found in Lin and Lu [43].

Lemma 3.6 Let $\{X_n,n\ge1\}$ be a strictly stationary φ-mixing (ρ-mixing) sequence of random variables with $EX_1=0$, $E|X_1|^2<\infty$ and $\sigma_n^2=ES_n^2\to\infty$. If $\sum_{n=1}^\infty\varphi^{1/2}(n)<\infty$ ($\sum_{n=1}^\infty\rho(2^n)<\infty$), then

$$S_n/\sigma_n\xrightarrow{d}N(0,1).$$

We are now in a position to give the proofs of the main results.

Proof of Theorem 2.1 From (1.2) and (2.2), we have

$$\operatorname{Bias}\big(\hat g_{m,n}(x)\big)=E\big(\hat g_{m,n}(x)\big)-g(x)=\sum_{i=1}^ng(x_i)\int_{A_i}E_k(x,s)\,ds-g(x).$$
(3.1)

By Lemma 3.2, (a) holds.

Denote $V^{(j)}(x)=\sum_{i=1}^ne^{(j)}(x_i)\int_{A_i}E_k(x,s)\,ds$. By $\sup_{j\ge1,x\in I}E(e^{(j)}(x))^2\le K<\infty$, Jensen's inequality and Lemma 3.1(b),

$$\begin{aligned}\big\|V^{(j)}(x)\big\|_2^2&=E\Big|\sum_{i=1}^ne^{(j)}(x_i)\int_{A_i}E_k(x,s)\,ds\Big|^2\le E\Big(\sum_{i=1}^n\big|e^{(j)}(x_i)\big|\int_{A_i}|E_k(x,s)|\,ds\Big)^2\\&=E\Big(\sum_{i=1}^n\frac{\int_{A_i}|E_k(x,s)|\,ds}{\int_0^1|E_k(x,s)|\,ds}\,\big|e^{(j)}(x_i)\big|\int_0^1|E_k(x,s)|\,ds\Big)^2\\&\le\sum_{i=1}^n\frac{\int_{A_i}|E_k(x,s)|\,ds}{\int_0^1|E_k(x,s)|\,ds}\,E\Big(\big|e^{(j)}(x_i)\big|\int_0^1|E_k(x,s)|\,ds\Big)^2\\&\le K\Big(\int_0^1|E_k(x,s)|\,ds\Big)^2\le C_1.\end{aligned}$$
(3.2)

For φ-mixing, by Lemma 3.3(a) and (3.2), we have

$$\begin{aligned}\operatorname{Var}\big(\hat g_{m,n}(x)\big)&=E\big(\hat g_{m,n}(x)-E\hat g_{m,n}(x)\big)^2=E\Big(\frac1m\sum_{j=1}^mV^{(j)}(x)\Big)^2\\&=\frac1{m^2}\Big(\sum_{j=1}^m\big\|V^{(j)}(x)\big\|_2^2+2\sum_{1\le i<j\le m}E\big(V^{(i)}(x)V^{(j)}(x)\big)\Big)\\&\le\frac1{m^2}\Big(\sum_{j=1}^m\big\|V^{(j)}(x)\big\|_2^2+2\sum_{1\le i<j\le m}\varphi^{1/2}(j-i)\big\|V^{(i)}(x)\big\|_2\big\|V^{(j)}(x)\big\|_2\Big)\\&\le\frac1{m^2}\Big(C_1m+2C_1\sum_{1\le i<j\le m}\varphi^{1/2}(j-i)\Big)\le\frac{C_1}m\Big(1+2\sum_{i=1}^m\varphi^{1/2}(i)\Big).\end{aligned}$$

Therefore, (b) holds for φ-mixing. By similar arguments, using Lemma 3.3(b), we obtain (b) for ρ-mixing. □

Proof of Theorem 2.2 We know that $E|\hat g_{m,n}(x)-g(x)|^2=\operatorname{Var}(\hat g_{m,n}(x))+\big(E\hat g_{m,n}(x)-g(x)\big)^2$, so the result follows easily from Theorem 2.1. □

Proof of Theorem 2.3 From (1.2) and (2.2), we have

$$\hat g_{m,n}(x)-g(x)=\frac1m\sum_{j=1}^mV^{(j)}(x)+\Big(\sum_{i=1}^ng(x_i)\int_{A_i}E_k(x,s)\,ds-g(x)\Big),$$
(3.3)

where $V^{(j)}(x)$ is defined in the proof of Theorem 2.1. Note that

$$\Big|\sum_{i=1}^ng(x_i)\int_{A_i}E_k(x,s)\,ds-g(x)\Big|=O\big(n^{-\gamma}\big)+O(\tau_k)=o(1)$$
(3.4)

as $n\to\infty$ by Lemma 3.2. It remains to show that

$$I_m=\frac1m\sum_{j=1}^mV^{(j)}(x)\to0\quad\text{a.s. }(m\to\infty).$$
(3.5)
(1) φ-mixing. Here, we consider $I_m$ under the two sets of assumptions [(A1)(ii) and (A2)(i)(ii)] and [(A1)(i) and (A2)(iii)], respectively.

If the assumptions are [(A1)(ii) and (A2)(i)(ii)], denote $e_1^{(j)}(x_i)=e^{(j)}(x_i)I(|e^{(j)}(x_i)|\le m^d)$ and $e_2^{(j)}(x_i)=e^{(j)}(x_i)I(|e^{(j)}(x_i)|>m^d)$ for $0<d<1$, $\tilde e_1^{(j)}(x_i)=e_1^{(j)}(x_i)-Ee_1^{(j)}(x_i)$, and $V_1^{(j)}(x)=\sum_{i=1}^n\tilde e_1^{(j)}(x_i)\int_{A_i}E_k(x,s)\,ds$. Note that $e^{(j)}(x_i)=\tilde e_1^{(j)}(x_i)+e_2^{(j)}(x_i)-E(e_2^{(j)}(x_i))$. We have

$$|I_m|\le\Big|\frac1m\sum_{j=1}^mV_1^{(j)}(x)\Big|+\Big|\frac1m\sum_{j=1}^m\sum_{i=1}^ne_2^{(j)}(x_i)\int_{A_i}E_k(x,s)\,ds\Big|+\Big|\frac1m\sum_{j=1}^m\sum_{i=1}^nEe_2^{(j)}(x_i)\int_{A_i}E_k(x,s)\,ds\Big|=:I_{m1}+I_{m2}+I_{m3}.$$
(3.6)

For $q>2$, by the Markov inequality and Lemma 3.4(a), we have

$$\begin{aligned}\sum_{m=1}^\infty P(I_{m1}>\varepsilon)&\le\sum_{m=1}^\infty(m\varepsilon)^{-q}E\Big|\sum_{j=1}^mV_1^{(j)}(x)\Big|^q\\&\le C\sum_{m=1}^\infty m^{-q}\Big[\Big(m\exp\Big\{6\sum_{j=1}^{[\log m]}\varphi^{1/2}(2^j)\Big\}\max_{1\le l\le m}E\big|V_1^{(l)}(x)\big|^2\Big)^{q/2}+\sum_{j=1}^mE\big|V_1^{(j)}(x)\big|^q\Big]\\&\le C\sum_{m=1}^\infty m^{-q}\big(m^{q/2}+m\cdot m^{(q-2)d}\big)\le C\sum_{m=1}^\infty\big(m^{-q/2}+m^{-(1-d)q-2d+1}\big)<\infty.\end{aligned}$$

Therefore, it follows from the Borel-Cantelli lemma that

$$I_{m1}=o(1)\quad\text{a.s.}$$
(3.7)

Note that $\sup_{x\in I}E(|e^{(1)}(x)|^2)^r<\infty$ for $r=1$. By Lemma 3.5, we have $\frac1m\sum_{j=1}^m|e^{(j)}(x_i)|^2-E|e^{(1)}(x_i)|^2=o(1)$ a.s. Therefore,

$$\frac1m\sum_{j=1}^m\big|e^{(j)}(x_i)\big|^2<\infty\quad\text{a.s.}$$
(3.8)

By (3.8) and Lemma 3.1(b), one gets

$$\begin{aligned}I_{m2}&\le\sum_{i=1}^n\Big(\frac1m\sum_{j=1}^m\big|e^{(j)}(x_i)\big|I\big(|e^{(j)}(x_i)|>m^d\big)\Big)\int_{A_i}|E_k(x,s)|\,ds\\&\le\frac1{m^d}\sum_{i=1}^n\Big(\frac1m\sum_{j=1}^m\big|e^{(j)}(x_i)\big|^2\Big)\int_{A_i}|E_k(x,s)|\,ds\\&\le C_1\frac1{m^d}\int_0^1|E_k(x,s)|\,ds\le C/m^d=o(1)\quad\text{a.s.}\end{aligned}$$
(3.9)

Further, we have

$$\begin{aligned}I_{m3}&\le\frac1m\sum_{j=1}^m\sum_{i=1}^nE\big|e^{(j)}(x_i)\big|I\big(|e^{(j)}(x_i)|>m^d\big)\int_{A_i}|E_k(x,s)|\,ds\\&\le\frac1{m^{d+1}}\sum_{j=1}^m\sum_{i=1}^nE\big|e^{(j)}(x_i)\big|^2\int_{A_i}|E_k(x,s)|\,ds\le C_1\frac1{m^d}\int_0^1|E_k(x,s)|\,ds\le C/m^d=o(1).\end{aligned}$$
(3.10)

From (3.6), (3.7), (3.9) and (3.10), we obtain (3.5).

If the assumptions are [(A1)(i) and (A2)(iii)], note that $\varphi(m)\to0$ as $m\to\infty$, hence $\sum_{j=1}^{[\log m]}\varphi^{1/2}(2^j)=o(\log m)$ and, further, $\exp\{\lambda\sum_{j=1}^{[\log m]}\varphi^{1/2}(2^j)\}=o(m^t)$ for any $\lambda>0$ and $t>0$. Take $q=2+\delta$, and then take $t>0$ small enough that $-(1+\delta/2)+t(1+\delta/2)<-1$. By Lemma 3.4(a), we have

$$\begin{aligned}\sum_{m=1}^\infty P(I_m>\varepsilon)&\le\sum_{m=1}^\infty(m\varepsilon)^{-(2+\delta)}E\Big|\sum_{j=1}^mV^{(j)}(x)\Big|^{2+\delta}\\&\le C\sum_{m=1}^\infty m^{-(2+\delta)}\Big[\Big(m\exp\Big\{6\sum_{j=1}^{[\log m]}\varphi^{1/2}(2^j)\Big\}\max_{1\le l\le m}E\big|V^{(l)}(x)\big|^2\Big)^{(2+\delta)/2}+\sum_{j=1}^mE\big|V^{(j)}(x)\big|^{2+\delta}\Big]\\&\le C\sum_{m=1}^\infty m^{-(2+\delta)}\big(m^{(1+t)(2+\delta)/2}+m\big)\le C\sum_{m=1}^\infty\big(m^{-(1+\delta/2)+t(1+\delta/2)}+m^{-(1+\delta)}\big)<\infty.\end{aligned}$$

Therefore, from the Borel-Cantelli lemma, we obtain (3.5).

(2) ρ-mixing. We also consider $I_m$ under the two sets of assumptions [(A1)(ii) and (A2)(i)(ii)] and [(A1)(i) and (A2)(iii)], respectively.

Note that $\rho(m)\to0$ as $m\to\infty$, hence $\sum_{j=1}^{[\log m]}\rho^{2/q}(2^j)=o(\log m)$ for $q\ge2$ and, further, $\exp\{\lambda\sum_{j=1}^{[\log m]}\rho^{2/q}(2^j)\}=o(m^t)$ for any $\lambda>0$ and $t>0$.

If the assumptions are [(A1)(ii) and (A2)(i)(ii)], then from (3.6)-(3.10) it is known that we only need to prove (3.7) in order to obtain (3.5). Taking $q>2$, we have $q/2>1$ and $(q-2)(1-d)>0$. Next, take $t>0$ small enough that $q/2-t>1$ and $(q-2)(1-d)-t>0$. By Lemma 3.4(b), we have

$$\begin{aligned}\sum_{m=1}^\infty P(I_{m1}>\varepsilon)&\le\sum_{m=1}^\infty(m\varepsilon)^{-q}E\Big|\sum_{j=1}^mV_1^{(j)}(x)\Big|^q\\&\le C\sum_{m=1}^\infty m^{-q}\Big(m^{q/2}\exp\Big\{C_1\sum_{j=1}^{[\log m]}\rho(2^j)\Big\}\max_{1\le j\le m}\big(E|V_1^{(j)}(x)|^2\big)^{q/2}\\&\qquad+m\exp\Big\{C_1\sum_{j=1}^{[\log m]}\rho^{2/q}(2^j)\Big\}\max_{1\le j\le m}E\big|V_1^{(j)}(x)\big|^q\Big)\\&\le C\sum_{m=1}^\infty m^{-q}\big(m^{q/2+t}+m^{1+t}m^{(q-2)d}\big)=C\sum_{m=1}^\infty\big(m^{-(q/2-t)}+m^{-[(q-2)(1-d)-t]-1}\big)<\infty.\end{aligned}$$

Therefore, (3.7) holds.

If the assumptions are [(A1)(i) and (A2)(iii)], take $q=2+\delta$, and then take $t>0$ small enough that $\delta/2-t>0$. By Lemma 3.4(b), we have

$$\begin{aligned}\sum_{m=1}^\infty P(I_m>\varepsilon)&\le\sum_{m=1}^\infty(m\varepsilon)^{-(2+\delta)}E\Big|\sum_{j=1}^mV^{(j)}(x)\Big|^{2+\delta}\\&\le C\sum_{m=1}^\infty m^{-(2+\delta)}\Big(m^{(2+\delta)/2}\exp\Big\{C_1\sum_{j=1}^{[\log m]}\rho(2^j)\Big\}\max_{1\le j\le m}\big(E|V^{(j)}(x)|^2\big)^{(2+\delta)/2}\\&\qquad+m\exp\Big\{C_1\sum_{j=1}^{[\log m]}\rho^{2/(2+\delta)}(2^j)\Big\}\max_{1\le j\le m}E\big|V^{(j)}(x)\big|^{2+\delta}\Big)\\&\le C\sum_{m=1}^\infty m^{-(2+\delta)}\big(m^{(2+\delta)/2+t}+m^{1+t}\big)=C\sum_{m=1}^\infty\big(m^{-(1+\delta/2-t)}+m^{-(1+\delta-t)}\big)<\infty.\end{aligned}$$

Thus, we obtain (3.5), and the proof of Theorem 2.3 is complete. □

Proof of Theorem 2.4 Here, we use some notation from the proof of Theorem 2.3. From (3.3), we have

$$\sup_{x\in I}\big|\hat g_{m,n}(x)-g(x)\big|\le\sup_{x\in I}\Big|\frac1m\sum_{j=1}^mV^{(j)}(x)\Big|+\sup_{x\in I}\Big|\sum_{i=1}^ng(x_i)\int_{A_i}E_k(x,s)\,ds-g(x)\Big|.$$
(3.11)

By Lemma 3.2, for $\upsilon>3/2$, one gets

$$\sup_{x\in I}\Big|\sum_{i=1}^ng(x_i)\int_{A_i}E_k(x,s)\,ds-g(x)\Big|=O\big(2^{-k}\big).$$
(3.12)

Note that $E_0(x,s)$ satisfies the Lipschitz condition of order 1 in $x$. We have

$$\begin{aligned}\sup_{x\in I}\Big|\frac1m\sum_{j=1}^mV^{(j)}(x)\Big|&=\sup_{x\in I}\Big|\frac1m\sum_{j=1}^m\sum_{i=1}^ne^{(j)}(x_i)\int_{A_i}E_k(x,s)\sum_{l=1}^nI(s_{l-1}<x\le s_l)\,ds\Big|\\&\le\sup_{x\in I}\Big|\frac1m\sum_{j=1}^m\sum_{i=1}^ne^{(j)}(x_i)\int_{A_i}\sum_{l=1}^n\big[E_k(x,s)-E_k(x_l,s)\big]I(s_{l-1}<x\le s_l)\,ds\Big|\\&\quad+\max_{1\le l\le n}\Big|\frac1m\sum_{j=1}^m\sum_{i=1}^ne^{(j)}(x_i)\int_{A_i}E_k(x_l,s)\,ds\Big|\\&\le C\sup_{x\in I}\frac1m\sum_{j=1}^m\sum_{i=1}^n\big|e^{(j)}(x_i)\big|\int_{A_i}\sum_{l=1}^n2^{2k}|x-x_l|I(s_{l-1}<x\le s_l)\,ds\\&\quad+\max_{1\le l\le n}\Big|\frac1m\sum_{j=1}^m\sum_{i=1}^ne_2^{(j)}(x_i)\int_{A_i}E_k(x_l,s)\,ds\Big|\\&\quad+\max_{1\le l\le n}\Big|\frac1m\sum_{j=1}^m\sum_{i=1}^nEe_2^{(j)}(x_i)\int_{A_i}E_k(x_l,s)\,ds\Big|+\max_{1\le l\le n}\Big|\frac1m\sum_{j=1}^mV_1^{(j)}(x_l)\Big|\\&=:J_1+J_2+J_3+J_4.\end{aligned}$$
(3.13)

Note that $\sup_{j\ge1,x\in I}E|e^{(j)}(x)|^2<\infty$. For $p=1,2$, by Lemma 3.5 we have

$$\frac1m\sum_{j=1}^m\big(\big|e^{(j)}(x_i)\big|^p-E\big|e^{(j)}(x_i)\big|^p\big)\to0\quad\text{a.s.}$$

Therefore,

$$\frac1m\sum_{j=1}^m\big|e^{(j)}(x_i)\big|^p<\infty\quad\text{a.s.}$$
(3.14)

By (3.14), one gets

$$2^kJ_1=C\sup_{x\in I}\frac{2^k}m\sum_{j=1}^m\sum_{i=1}^n\big|e^{(j)}(x_i)\big|\int_{A_i}\sum_{l=1}^n2^{2k}|x-x_l|I(s_{l-1}<x\le s_l)\,ds\le C\frac{2^{3k}}{n^2}\sum_{i=1}^n\Big(\frac1m\sum_{j=1}^m\big|e^{(j)}(x_i)\big|\Big)\le C2^{3k}/n=\begin{cases}o(1)&\text{if }0<c<1/3,\\O(1)&\text{if }c=1/3,\end{cases}\quad\text{a.s.}$$

Thus, we obtain

$$J_1=o\big(2^{-k}\big)\text{ or }O\big(2^{-k}\big)\quad\text{a.s.}$$
(3.15)

By Lemma 3.1(b), (3.14), $\sup_{x\in I}E|e^{(1)}(x)|^2<\infty$ and $n^c=O(m^d)$, we have

$$\begin{aligned}2^kJ_2&\le2^k\max_{1\le l\le n}\frac1m\sum_{j=1}^m\sum_{i=1}^n\big|e^{(j)}(x_i)\big|I\big(|e^{(j)}(x_i)|>m^d\big)\int_{A_i}|E_k(x_l,s)|\,ds\\&\le\frac{2^k}{m^d}\max_{1\le l\le n}\sum_{i=1}^n\Big(\frac1m\sum_{j=1}^m\big|e^{(j)}(x_i)\big|^2\Big)\int_{A_i}|E_k(x_l,s)|\,ds\\&\le C\frac{2^k}{m^d}\max_{1\le l\le n}\int_0^1|E_k(x_l,s)|\,ds\le C\frac{n^c}{m^d}=O(1)\quad\text{a.s.}\end{aligned}$$

and

$$\begin{aligned}2^kJ_3&\le2^k\max_{1\le l\le n}\frac1m\sum_{j=1}^m\sum_{i=1}^nE\big|e^{(j)}(x_i)\big|I\big(|e^{(j)}(x_i)|>m^d\big)\int_{A_i}|E_k(x_l,s)|\,ds\\&\le\frac{2^k}{m^d}\max_{1\le l\le n}\frac1m\sum_{j=1}^m\sum_{i=1}^nE\big|e^{(1)}(x_i)\big|^2\int_{A_i}|E_k(x_l,s)|\,ds\le C\frac{2^k}{m^d}\le C\frac{n^c}{m^d}=O(1).\end{aligned}$$

Therefore,

$$J_2=O\big(2^{-k}\big)\quad\text{a.s.}\qquad\text{and}\qquad J_3=O\big(2^{-k}\big).$$
(3.16)

By (3.11)-(3.13) and (3.15)-(3.16), to complete the proof of the theorem it suffices to show that

$$J_4=O\big(2^{-k}\big)\quad\text{a.s. }(m\to\infty).$$
(3.17)

We now show (3.17) under φ-mixing and ρ-mixing, respectively.

(1) φ-mixing. Taking $q>\frac{2(d+c)}{(1-2d)c}$, we have $(d-1/2)q+d/c<-1$, $(2d-1)q-2d+d/c<-2$, and $\frac{2(d+c)}{(1-2d)c}>2$. By the Markov inequality and Lemma 3.4(a), we have

$$\begin{aligned}\sum_{m=1}^\infty P\big(J_4>2^{-k}C\big)&\le\sum_{m=1}^\infty\sum_{l=1}^nP\Big(\Big|\frac1m\sum_{j=1}^mV_1^{(j)}(x_l)\Big|>2^{-k}C\Big)\le C\sum_{m=1}^\infty\sum_{l=1}^n\frac{2^{kq}}{m^q}E\Big|\sum_{j=1}^mV_1^{(j)}(x_l)\Big|^q\\&\le C\sum_{m=1}^\infty\sum_{l=1}^n\frac{2^{kq}}{m^q}\Big[\Big(m\exp\Big\{6\sum_{j=1}^{[\log m]}\varphi^{1/2}(2^j)\Big\}\max_{1\le j\le m}E\big|V_1^{(j)}(x_l)\big|^2\Big)^{q/2}+\sum_{j=1}^mE\big|V_1^{(j)}(x_l)\big|^q\Big]\\&\le C\sum_{m=1}^\infty\sum_{l=1}^n\frac{2^{kq}}{m^q}\big(m^{q/2}+m\cdot m^{(q-2)d}\big)\le C\sum_{m=1}^\infty\big(m^{(d-1/2)q+d/c}+m^{(2d-1)q-2d+d/c+1}\big)<\infty.\end{aligned}$$

Thus, (3.17) holds by the Borel-Cantelli lemma.

(2) ρ-mixing. As in the proof of Theorem 2.3, $\exp\{\lambda\sum_{j=1}^{[\log m]}\rho^{2/q}(2^j)\}=o(m^t)$ for any $\lambda>0$ and $t>0$. Taking $q>\frac{2(d+c)}{(1-2d)c}$, we have $(d-1/2)q+d/c<-1$ and $(2d-1)q-2d+d/c<-2$. Next, take $t>0$ small enough that $(2d-1)q-2d+d/c+t<-2$. By Lemma 3.4(b), we have

$$\begin{aligned}\sum_{m=1}^\infty P\big(J_4>2^{-k}C\big)&\le\sum_{m=1}^\infty\sum_{l=1}^nP\Big(\Big|\frac1m\sum_{j=1}^mV_1^{(j)}(x_l)\Big|>2^{-k}C\Big)\le C\sum_{m=1}^\infty\sum_{l=1}^n\frac{2^{kq}}{m^q}E\Big|\sum_{j=1}^mV_1^{(j)}(x_l)\Big|^q\\&\le C\sum_{m=1}^\infty\sum_{l=1}^n\frac{2^{kq}}{m^q}\Big(m^{q/2}\exp\Big\{C_1\sum_{j=1}^{[\log m]}\rho(2^j)\Big\}\max_{1\le j\le m}\big(E|V_1^{(j)}(x_l)|^2\big)^{q/2}\\&\qquad+m\exp\Big\{C_1\sum_{j=1}^{[\log m]}\rho^{2/q}(2^j)\Big\}\max_{1\le j\le m}E\big|V_1^{(j)}(x_l)\big|^q\Big)\\&\le C\sum_{m=1}^\infty\sum_{l=1}^n\frac{2^{kq}}{m^q}\big(m^{q/2}+m^{1+t}m^{(q-2)d}\big)\le C\sum_{m=1}^\infty\big(m^{(d-1/2)q+d/c}+m^{(2d-1)q-2d+d/c+t+1}\big)<\infty.\end{aligned}$$

Therefore, we also obtain (3.17), and the proof of Theorem 2.4 is complete. □

Proof of Theorem 2.5 Denote $U^{(j)}(x)=\sum_{i=1}^n\big(e^{(j)}(x_i)-e^{(j)}(x)\big)\int_{A_i}E_k(x,s)\,ds$. For each $x\in I$, we have

$$\begin{aligned}m^{1/2}\big(\hat g_{m,n}(x)-g(x)\big)&=m^{1/2}\Big(\sum_{i=1}^ng(x_i)\int_{A_i}E_k(x,s)\,ds-g(x)\Big)+m^{-1/2}\sum_{j=1}^m\sum_{i=1}^ne^{(j)}(x_i)\int_{A_i}E_k(x,s)\,ds\\&=m^{1/2}\Big(\sum_{i=1}^ng(x_i)\int_{A_i}E_k(x,s)\,ds-g(x)\Big)+m^{-1/2}\sum_{j=1}^mU^{(j)}(x)+m^{-1/2}\sum_{j=1}^me^{(j)}(x)\int_0^1E_k(x,s)\,ds\\&=:H_1+H_2+H_3.\end{aligned}$$

Since, by Lemma 3.2 and (A8),

$$H_1=O\big(m^{1/2}\tau_k\big)+O\big(m^{1/2}n^{-\gamma}\big)=o(1),$$

it suffices to show that

$$H_2=o_p(1)$$
(3.18)

and

$$H_3\xrightarrow{d}N\big(0,\sigma^2\big).$$
(3.19)

Let $G=\{(u,v):|x_u-x|\le\varsigma,\,|x_v-x|\le\varsigma,\,1\le u,v\le n\}$ and $\zeta(\varsigma)=\sum_{u=1}^n\int_{A_u}|E_k(x,s)|\,ds\,I(|x_u-x|>\varsigma)$. Under φ-mixing, set $\psi(\cdot)=\varphi^{1/2}(\cdot)$; under ρ-mixing, set $\psi(\cdot)=\rho(\cdot)$. By Lemma 3.3, the Cauchy-Schwarz inequality and the continuity condition on $e^{(j)}(\cdot)$, and noting that each pair $(u,v)\in G^c$ has $|x_u-x|>\varsigma$ or $|x_v-x|>\varsigma$, we have

$$\begin{aligned}\big|E\big(U^{(j)}(x)U^{(l)}(x)\big)\big|&=\Big|\sum_{u,v}\int_{A_u}E_k(x,s)\,ds\int_{A_v}E_k(x,s)\,ds\,E\big[\big(e^{(j)}(x_u)-e^{(j)}(x)\big)\big(e^{(l)}(x_v)-e^{(l)}(x)\big)\big]\Big|\\&\le\sum_{(u,v)\in G}\int_{A_u}|E_k(x,s)|\,ds\int_{A_v}|E_k(x,s)|\,ds\,\min\big(\eta(\varsigma),C\psi(|j-l|)\big)\\&\quad+\sum_{(u,v)\in G^c}\int_{A_u}|E_k(x,s)|\,ds\int_{A_v}|E_k(x,s)|\,ds\,C\psi(|j-l|)\\&\le C_1\min\big(\eta(\varsigma),C\psi(|j-l|)\big)+2C_2\zeta(\varsigma)\psi(|j-l|)\\&\le C_1\eta(\varsigma)^aC^{1-a}\psi(|j-l|)^{1-a}+2C_2\zeta(\varsigma)\psi(|j-l|).\end{aligned}$$

Therefore,

$$EH_2^2\le m^{-1}\sum_{j=1}^m\sum_{l=1}^m\big|E\big(U^{(j)}(x)U^{(l)}(x)\big)\big|\le C_3m^{-1}\sum_{j=1}^m\sum_{l=1}^m\big(\eta(\varsigma)^a\psi^{1-a}(|j-l|)+\zeta(\varsigma)\psi(|j-l|)\big)\le C_4\Big(\eta(\varsigma)^a\sum_{j=0}^\infty\psi^{1-a}(j)+\zeta(\varsigma)\sum_{l=0}^\infty\psi(l)\Big),$$

which can be made arbitrarily small: first choose $\varsigma$ so that $\eta(\varsigma)$ is small, and then, for this $\varsigma$, choose $m$ so large that $n=n(m)$ makes $\zeta(\varsigma)$ arbitrarily small. Thus, we obtain (3.18).

It remains to establish (3.19). Denote $S_m=S_m(x)=\sum_{j=1}^me^{(j)}(x)$ and $\sigma_m^2=ES_m^2$. Since (A2)(iv) implies (A2)(ii), Lemma 3.4 yields, for both φ-mixing and ρ-mixing,

$$\frac1m\sigma_m^2\le C\frac1m\Big(m\max_{1\le j\le m}E\big|e^{(j)}(x)\big|^2\Big)\le C.$$
(3.20)

For any $m\ge2$, from (3.20) and the dominated convergence theorem,

$$\frac1m\sigma_m^2=\frac1m\sum_{j=1}^mE\big(e^{(j)}(x)\big)^2+\frac2m\sum_{1\le i<j\le m}E\big(e^{(i)}(x)e^{(j)}(x)\big)=E\big(e^{(1)}(x)\big)^2+2\sum_{l=1}^{m-1}\Big(1-\frac lm\Big)E\big(e^{(1)}(x)e^{(1+l)}(x)\big)\to E\big(e^{(1)}(x)\big)^2+2\sum_{l=1}^\infty E\big(e^{(1)}(x)e^{(1+l)}(x)\big)<\infty.$$

Hence, the series $\sigma^2=E(e^{(1)}(x))^2+2\sum_{l=1}^\infty E(e^{(1)}(x)e^{(1+l)}(x))$ of (A3) converges absolutely for φ-mixing and ρ-mixing, that is,

$$\frac1m\sigma_m^2\to\sigma^2.$$
(3.21)

We easily obtain

$$H_3=\frac{\sum_{j=1}^me^{(j)}(x)}{\sigma_m}\cdot\frac{\sigma_m}{\sqrt m}\int_0^1E_k(x,s)\,ds\xrightarrow{d}N\big(0,\sigma^2\big),$$

since $\sigma_m^{-1}\sum_{j=1}^me^{(j)}(x)\xrightarrow{d}N(0,1)$ by Lemma 3.6, $m^{-1/2}\sigma_m\to\sigma>0$ by (3.21), and $\int_0^1E_k(x,s)\,ds\to1$ by Lemma 3.1(c).

Thus, the proof of Theorem 2.5 is complete. □

4 Conclusion and discussion

This paper studies a nonparametric regression model with replicated observations under weakly dependent processes by wavelet procedures. To capture dependence among the units, we assume that $\{e^{(j)}(\cdot),j\ge1\}$ is a φ-mixing or ρ-mixing process, while avoiding, as far as possible, any assumptions on the error structure within $\{e^{(j)}(x_1),\dots,e^{(j)}(x_n)\}$ for each $j$ ($j=1,\dots,m$). Under suitable conditions, we obtain expansions for the bias and the variance of the wavelet estimator, prove its moment consistency, strong consistency and strong convergence rate, and establish its asymptotic normality. For the general nonparametric model, consistency results for linear wavelet estimators can be derived from general results on regression estimators with dependent errors; our results, however, cannot be derived directly from such general results, because the nonparametric regression model with repeated measurements considered here has a more complex dependent error structure.

Bootstrap and blockwise methods are useful resampling techniques, which can preserve the dependence properties of the data by appropriately choosing blocks of data; they have been investigated extensively for weakly dependent data, for example, by Bühlmann and Künsch [26], Yuichi [27], Lahiri [28], Lin and Zhang [29, 30] and Lin et al. [31]. In the future, we may try to adapt bootstrap and blockwise methods to our model.

Since the linear wavelet estimator is not adaptive, nonlinear wavelet and design-adapted wavelet methods have also received considerable attention recently; see, for example, Li [50], Liang and Uña-Álvarez [51] and Chesneau [52] for (conditional) density estimation; Li and Xiao [20] and Uña-Álvarez et al. [53] for nonparametric models; and Delouille and Sachs [54] for nonlinear autoregressive models. At present, this paper concentrates on the linear wavelet estimator in the nonparametric regression model with repeated measurements under weakly dependent processes. Although it is easy to construct a nonlinear wavelet estimator in our model, establishing its asymptotic theory is very difficult because of the complex structure of the model; this will be challenging work. Interesting topics for further research include nonlinear wavelet and design-adapted wavelet estimation for our model.

References

  1. Priestley MB, Chao MT: Nonparametric function fitting. J. R. Stat. Soc. B 1972, 34: 385–392.
  2. Gasser T, Müller HG: Kernel estimation of regression functions. In Smoothing Techniques for Curve Estimation. Lecture Notes in Mathematics 757. Edited by: Gasser T, Rosenblatt M. Springer, Berlin; 1979:23–68.
  3. Gasser T, Müller HG: Estimating regression functions and their derivatives by the kernel method. Scand. J. Stat. 1984, 11: 171–185.
  4. Prakasa Rao BLS: Nonparametric Functional Estimation. Academic Press, Orlando; 1983.
  5. Clark RM: Non-parametric estimation of a smooth regression function. J. R. Stat. Soc. B 1977, 39: 107–113.
  6. Roussas GG: Consistent regression with fixed design points under dependence condition. Stat. Probab. Lett. 1989, 8: 41–50. 10.1016/0167-7152(89)90081-3
  7. Fan Y: Consistent nonparametric multiple regression for dependent heterogeneous processes: the fixed design case. J. Multivar. Anal. 1990, 33: 72–88. 10.1016/0047-259X(90)90006-4
  8. Roussas GG, Tran LT: Fixed design regression for time series: asymptotic normality. J. Multivar. Anal. 1992, 40: 262–291. 10.1016/0047-259X(92)90026-C
  9. Liang HY, Jing BY: Asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences. J. Multivar. Anal. 2005, 95: 227–245. 10.1016/j.jmva.2004.06.004
  10. Yang SC, Li YM: Uniform asymptotic normality of the regression weighted estimator for strong mixing samples. Acta Math. Sin. 2006, 49A: 1163–1170.
  11. Yang SC: Maximal moment inequality for partial sums of strong mixing sequences and application. Acta Math. Sin. Engl. Ser. 2007, 23: 1013–1024. 10.1007/s10114-005-0841-9
  12. Fraiman R, Iribarren GP: Nonparametric regression estimation in models with weak error's structure. J. Multivar. Anal. 1991, 37: 180–196. 10.1016/0047-259X(91)90079-H
  13. Hart JD, Wehrly TE: Kernel regression estimation using repeated measurements data. J. Am. Stat. Assoc. 1986, 81: 1080–1088. 10.1080/01621459.1986.10478377
  14. Härdle W, Kerkyacharian G, Picard D, Tsybakov A: Wavelets: Approximation and Statistical Applications. Springer, New York; 1998.
  15. Vidakovic B: Statistical Modeling by Wavelets. Wiley, New York; 1999.
  16. Antoniadis A, Grégoire G, McKeague IW: Wavelet methods for curve estimation. J. Am. Stat. Assoc. 1994, 89: 1340–1352. 10.1080/01621459.1994.10476873
  17. Xue LG: Strong uniform convergence rates of the wavelet estimator of regression function under completed and censored data. Acta Math. Appl. Sin. 2002, 25: 430–438.
  18. Johnstone IM, Silverman BW: Wavelet threshold estimators for data with correlated noise. J. R. Stat. Soc. B 1997, 59: 319–351. 10.1111/1467-9868.00071
  19. Liang HY, Zhang DX, Lu BX: Wavelet estimation in nonparametric model under martingale difference errors. Appl. Math. J. Chin. Univ. Ser. B 2004, 19: 302–310. 10.1007/s11766-004-0039-4
  20. Li LY, Xiao YM: Wavelet-based estimators of mean regression function with long memory data. Appl. Math. Mech. 2006, 27: 901–910. 10.1007/s10483-006-0705-1
  21. Li YM, Yang SC, Zhou Y: Consistency and uniformly asymptotic normality of wavelet estimator in regression model with associated samples. Stat. Probab. Lett. 2008, 78: 2947–2956. 10.1016/j.spl.2008.05.004
  22. Xue LG: Uniform convergence rates of the wavelet estimator of regression function under mixing error. Acta Math. Sci. 2002, 22A: 528–535.
  23. Sun Y, Chai GX: Nonparametric wavelet estimation of a fixed designed regression function. Acta Math. Sci. 2004, 24A: 597–606.
  24. Li YM, Guo JH: Asymptotic normality of wavelet estimator for strong mixing errors. J. Korean Stat. Soc. 2009, 38: 383–390. 10.1016/j.jkss.2009.03.002
  25. Liang HY: Asymptotic normality of wavelet estimator in heteroscedastic model with α-mixing errors. J. Syst. Sci. Complex. 2011, 24: 725–737. 10.1007/s11424-010-8354-8
  26. Bühlmann P, Künsch HR: The blockwise bootstrap for general parameters of a stationary time series. Research Report 70, ETH Zürich; 1993.
  27. Yuichi K: Empirical likelihood methods with weakly dependent processes. Ann. Stat. 1997, 25: 2084–2102. 10.1214/aos/1069362388
  28. Lahiri SN: Theoretical comparisons of block bootstrap methods. Ann. Stat. 1999, 27: 386–404. 10.1214/aos/1018031117
  29. Lin L, Zhang RC: Blockwise empirical Euclidean likelihood for weakly dependent processes. Stat. Probab. Lett. 2001, 53: 143–152. 10.1016/S0167-7152(01)00066-9
  30. Lin L, Zhang RC: Bootstrap wavelet in the nonparametric regression model with weakly dependent processes. Acta Math. Sci. 2004, 24B: 61–70.
  31. Lin L, Fan Y, Tan L: Blockwise bootstrap wavelet in nonparametric regression model with weakly dependent processes. Metrika 2008, 67: 31–48.
  32. Shao QM: A moment inequality and its application. Acta Math. Sin. 1988, 31: 736–747.
  33. Peligrad M: The r-quick version of the strong law for stationary φ-mixing sequences. In Proceedings of the International Conference on Almost Everywhere Convergence in Probability and Statistics. Academic Press, New York; 1989:335–348.
  34. Utev SA: Sums of random variables with φ-mixing. Sib. Adv. Math. 1991, 1: 124–155.
  35. Kiesel R: Summability and strong laws for φ-mixing random variables. J. Theor. Probab. 1998, 11: 209–224. 10.1023/A:1021655227120
  36. Chen PY, Hu TC, Volodin A: Limiting behaviour of moving average processes under φ-mixing assumption. Stat. Probab. Lett. 2009, 79: 105–111. 10.1016/j.spl.2008.07.026
  37. Zhou XC: Complete moment convergence of moving average processes under φ-mixing assumptions. Stat. Probab. Lett. 2010, 80: 285–292. 10.1016/j.spl.2009.10.018
  38. Peligrad M: On the central limit theorem for ρ-mixing sequences of random variables. Ann. Probab. 1987, 15: 1387–1394. 10.1214/aop/1176991983
  39. Peligrad M, Shao QM: Estimation of variance for ρ-mixing sequences. J. Multivar. Anal. 1995, 52: 140–157. 10.1006/jmva.1995.1008
  40. Peligrad M, Shao QM: A note on estimation of the variance of partial sums for ρ-mixing random variables. Stat. Probab. Lett. 1996, 28: 141–145.
  41. Shao QM: Maximal inequalities for partial sums of ρ-mixing sequences. Ann. Probab. 1995, 23: 948–965. 10.1214/aop/1176988297
  42. Bradley R: A stationary rho-mixing Markov chain which is not "interlaced" rho-mixing. J. Theor. Probab. 2001, 14: 717–727. 10.1023/A:1017545123473
  43. Lin ZY, Lu CR: Limit Theory for Mixing Dependent Random Variables. Science Press and Kluwer Academic Publishers, Beijing; 1996.
  44. Doukhan P: Mixing: Properties and Examples. Lecture Notes in Statistics 85. Springer, Berlin; 1994.
  45. Chai GX, Xu KJ: Wavelet smoothing in semiparametric regression model. Chinese J. Appl. Probab. Statist. 1999, 15: 97–105.
  46. Zhou X, You JH: Wavelet estimation in varying-coefficient partially linear regression models. Stat. Probab. Lett. 2004, 68: 91–104. 10.1016/j.spl.2004.01.018
  47. Walter GG: Wavelets and Other Orthogonal Systems with Applications. CRC Press, Boca Raton; 1994.
  48. Ibragimov IA: Some limit theorems for stationary processes. Theory Probab. Appl. 1962, 7: 349–382. 10.1137/1107036
  49. Ibragimov IA: A note on the central limit theorem for dependent random variables. Theory Probab. Appl. 1975, 20: 134–139.
  50. Li LY: Non-linear wavelet-based density estimators under random censorship. J. Stat. Plan. Inference 2003, 117: 35–58. 10.1016/S0378-3758(02)00366-X
  51. Liang HY, Uña-Álvarez J: Wavelet estimation of conditional density with truncated, censored and dependent data. J. Multivar. Anal. 2011, 102: 448–467. 10.1016/j.jmva.2010.10.004
  52. Chesneau C: On the adaptive wavelet deconvolution of a density for strong mixing sequences. J. Korean Stat. Soc. 2012, 41: 423–436. 10.1016/j.jkss.2012.01.005
  53. Uña-Álvarez J, Liang HY, Rodríguez-Casal A: Nonlinear wavelet estimator of the regression function under left-truncated dependent data. J. Nonparametr. Stat. 2010, 22: 319–344. 10.1080/10485250903469736
  54. Delouille V, Sachs RV: Estimation of nonlinear autoregressive models using design-adapted wavelets. Ann. Inst. Stat. Math. 2005, 57: 235–253. 10.1007/BF02507024


Acknowledgements

The authors thank the associate editor and two anonymous referees, whose valuable comments greatly improved the paper. This work is partially supported by Key Natural Science Foundation of Higher Education Institutions of Anhui Province (KJ2012A270), NSFC (11171065, 11061002), Youth Foundation for Humanities and Social Sciences Project from Ministry of Education of China (11YJC790311), NSFJS (BK2011058), National Natural Science Foundation of Guangxi (2011GXNSFA018126) and Postdoctoral Research Program of Jiangsu Province of China (1202013C).

Author information

Correspondence to Xing-cai Zhou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The three authors contributed equally to this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhou, X.-c., Lin, J.-g. & Yin, C.-M. Asymptotic properties of wavelet-based estimator in nonparametric regression model with weakly dependent processes. J. Inequal. Appl. 2013, 261 (2013). https://doi.org/10.1186/1029-242X-2013-261