Open Access Research

A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law

Qunying Wu

Author Affiliations

College of Science, Guilin University of Technology, Guilin 541004, P. R. China

Guangxi Key Laboratory of Spatial Information and Geomatics, Guilin 541004, P.R. China

Journal of Inequalities and Applications 2012, 2012:17  doi:10.1186/1029-242X-2012-17

The electronic version of this article is the complete one and can be found online at: http://www.journalofinequalitiesandapplications.com/content/2012/1/17


Received: 4 August 2011
Accepted: 20 January 2012
Published: 20 January 2012

© 2012 Wu; licensee Springer.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Let $X, X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables in the domain of attraction of the normal law. A universal result in the almost sure limit theorem for the self-normalized partial sums $S_n/V_n$ is established, where $S_n=\sum_{i=1}^n X_i$ and $V_n^2=\sum_{i=1}^n X_i^2$.

Mathematics Subject Classification: 60F15.

Keywords:
domain of attraction of the normal law; self-normalized partial sums; almost sure central limit theorem

1. Introduction

Throughout this article, we assume that $\{X, X_n\}_{n\in\mathbb{N}}$ is a sequence of independent and identically distributed (i.i.d.) random variables with a non-degenerate distribution function $F$. For each $n\ge 1$, $S_n/V_n$ denotes the self-normalized partial sum, where $S_n=\sum_{i=1}^n X_i$ and $V_n^2=\sum_{i=1}^n X_i^2$. We say that the random variable $X$ belongs to the domain of attraction of the normal law if there exist constants $a_n>0$, $b_n\in\mathbb{R}$ such that

$$\frac{S_n-b_n}{a_n}\xrightarrow{d} N, \qquad (1)$$

where $N$ is a standard normal random variable. In this case, we say that $\{X_n\}_{n\in\mathbb{N}}$ satisfies the central limit theorem (CLT).

It is known that (1) holds if and only if

$$\lim_{x\to\infty}\frac{x^2\,\mathbb{P}(|X|>x)}{\mathbb{E}X^2 I(|X|\le x)}=0. \qquad (2)$$

In contrast to the well-known classical central limit theorem, Giné et al. [1] obtained the following self-normalized version of the central limit theorem: $(S_n-\mathbb{E}S_n)/V_n\xrightarrow{d} N$ as $n\to\infty$ if and only if (2) holds.
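This equivalence can be illustrated numerically (an illustration, not part of the original argument): the sketch below simulates $S_n/V_n$ for i.i.d. Student-$t$ samples with 3 degrees of freedom, a symmetric heavy-tailed law with finite variance and hence in the domain of attraction of the normal law, and compares the empirical distribution of $S_n/V_n$ with $\Phi$. The distribution, sample sizes, and seed are arbitrary choices.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def self_normalized_sum(n, reps):
    # Student-t(3): symmetric and heavy-tailed with E X^2 < infinity,
    # hence in the domain of attraction of the normal law.
    x = rng.standard_t(df=3, size=(reps, n))
    s = x.sum(axis=1)                    # S_n
    v = np.sqrt((x ** 2).sum(axis=1))    # V_n
    return s / v

t = self_normalized_sum(n=2000, reps=5000)
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
for z in (-1.0, 0.0, 1.0):
    print(f"x={z:+.1f}  empirical={np.mean(t <= z):.3f}  Phi={Phi(z):.3f}")
```

The empirical values should be close to $\Phi(-1)\approx 0.159$, $\Phi(0)=0.5$, and $\Phi(1)\approx 0.841$.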

Brosamler [2] and Schatte [3] obtained the following almost sure central limit theorem (ASCLT): Let {Xn}n∈ℕ be i.i.d. random variables with mean 0, variance σ2 > 0 and partial sums Sn. Then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k I\Big(\frac{S_k}{\sigma\sqrt{k}}<x\Big)=\Phi(x)\quad\text{a.s. for all } x, \qquad (3)$$

with $d_k=1/k$ and $D_n=\sum_{k=1}^n d_k$, where $I$ denotes the indicator function and $\Phi(x)$ is the standard normal distribution function. Some ASCLT results for partial sums were obtained by Lacey and Philipp [4], Ibragimov and Lifshits [5], Miao [6], Berkes and Csáki [7], Hörmann [8], Wu [9,10], and Ye and Wu [11]. Huang and Pang [12] and Zhang and Yang [13] obtained ASCLT results for the self-normalized version.

Under mild moment conditions, the ASCLT follows from the ordinary CLT, but in general the validity of the ASCLT is a delicate question of a totally different character than the CLT. The difference between the CLT and the ASCLT lies in the weight sequence used in the ASCLT.

The terminology of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [[14], p. 35]) shows that the larger the weight sequence $\{d_k;\,k\ge 1\}$ in (3) is, the stronger the relation becomes. By this argument, one should expect stronger results with larger weights, and it would be of considerable interest to determine the optimal weights.

On the other hand, by Theorem 1 of Schatte [3], Equation (3) fails for the weight $d_k = 1$. The optimal weight sequence remains unknown.

The purpose of this article is to establish the ASCLT for self-normalized partial sums of random variables in the domain of attraction of the normal law. We show that the ASCLT holds for the fairly general weight sequence $d_k = k^{-1}\exp(\ln^\alpha k)$, $0 \le \alpha < 1/2$.

Our theorem is formulated in a more general setting.

Theorem 1.1. Let {X, Xn}n∈ℕ be a sequence of i.i.d. random variables in the domain of attraction of the normal law with mean zero. Suppose 0 ≤ α < 1/2 and set

$$d_k=\frac{\exp(\ln^\alpha k)}{k}, \qquad D_n=\sum_{k=1}^n d_k. \qquad (4)$$

Then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k I\Big(\frac{S_k}{V_k}\le x\Big)=\Phi(x)\quad\text{a.s. for any } x. \qquad (5)$$
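The almost sure statement (5) lends itself to a single-trajectory simulation. The sketch below (with arbitrary choices of $\alpha$, $n$, seed, and underlying law; convergence of such logarithmic-type averages is slow) computes the weighted average along one path and compares it with $\Phi(x)$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

n = 100_000
alpha = 0.25                        # any 0 <= alpha < 1/2 is covered by Theorem 1.1
x = rng.standard_normal(n)          # E X^2 < infinity, hence X is in the DAN

s = np.cumsum(x)                    # S_k, k = 1..n
v = np.sqrt(np.cumsum(x ** 2))      # V_k
k = np.arange(1, n + 1)
d = np.exp(np.log(k) ** alpha) / k  # d_k = exp(ln^alpha k) / k
D = d.sum()                         # D_n

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
for z in (-1.0, 0.0, 1.0):
    avg = np.sum(d * (s / v <= z)) / D
    print(f"x={z:+.1f}  weighted average={avg:.3f}  Phi={Phi(z):.3f}")
```

On a single trajectory the weighted averages track $\Phi(x)$ only roughly at these values of $n$, which is consistent with the slow (logarithmic) averaging in (5).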

By the terminology of summation procedures, we have the following corollary.

Corollary 1.2. Theorem 1.1 remains valid if we replace the weight sequence $\{d_k\}_{k\in\mathbb{N}}$ by any $\{d_k^*\}_{k\in\mathbb{N}}$ such that $0\le d_k^*\le d_k$ and $\sum_{k=1}^\infty d_k^*=\infty$.

Remark 1.3. Our result not only substantially improves the weight sequence in Theorem 1.1 of Huang and Pang [12], but also removes the condition $n\mathbb{P}(|X_1|>\eta_n)\le c(\log n)^{-\varepsilon_0}$, $0<\varepsilon_0<1$, of Theorem 1.1 in [12].

Remark 1.4. If $\mathbb{E}X^2<\infty$, then $X$ is in the domain of attraction of the normal law. Therefore, the class of random variables covered by Theorem 1.1 is very large.

Remark 1.5. Whether Theorem 1.1 holds for $1/2\le\alpha<1$ remains an open problem.

2. Proofs

In the following, an ~ bn denotes limn→∞ an/bn = 1. The symbol c stands for a generic positive constant which may differ from one place to another.

Furthermore, the following three lemmas will be useful in the proof, and the first is due to [15].

Lemma 2.1. Let $X$ be a random variable with $\mathbb{E}X=0$, and denote $l(x)=\mathbb{E}X^2I(|X|\le x)$. The following statements are equivalent:

(i) X is in the domain of attraction of the normal law.

(ii) $x^2\,\mathbb{P}(|X|>x)=o(l(x))$.

(iii) $x\,\mathbb{E}\big(|X|I(|X|>x)\big)=o(l(x))$.

(iv) $\mathbb{E}\big(|X|^\alpha I(|X|\le x)\big)=o\big(x^{\alpha-2}l(x)\big)$ for $\alpha>2$.

Lemma 2.2. Let $\{\xi,\xi_n\}_{n\in\mathbb{N}}$ be a sequence of uniformly bounded random variables. If there exist constants $c>0$ and $\delta>0$ such that

$$|\mathbb{E}\xi_k\xi_j|\le c\Big(\frac{k}{j}\Big)^{\delta} \quad\text{for } 1\le k<j, \qquad (6)$$

then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\xi_k=0 \quad\text{a.s.}, \qquad (7)$$

where dk and Dn are defined by (4).

Proof. Since

$$\mathbb{E}\Big(\sum_{k=1}^n d_k\xi_k\Big)^2 \le \sum_{k=1}^n d_k^2\,\mathbb{E}\xi_k^2 + 2\sum_{1\le k<j\le n} d_kd_j|\mathbb{E}\xi_k\xi_j| = \sum_{k=1}^n d_k^2\,\mathbb{E}\xi_k^2 + 2\sum_{\substack{1\le k<j\le n\\ j/k\ge(\ln D_n)^{2/\delta}}} d_kd_j|\mathbb{E}\xi_k\xi_j| + 2\sum_{\substack{1\le k<j\le n\\ j/k<(\ln D_n)^{2/\delta}}} d_kd_j|\mathbb{E}\xi_k\xi_j| =: T_{n1}+2(T_{n2}+T_{n3}). \qquad (8)$$

By the assumption of Lemma 2.2, there exists a constant $c>0$ such that $|\xi_k|\le c$ for all $k$. Noting that $\exp(\ln^\alpha x)=\exp\big(\int_1^x \frac{\alpha(\ln u)^{\alpha-1}}{u}\,du\big)$ and that $\alpha(\ln u)^{\alpha-1}\to 0$ as $u\to\infty$ for $\alpha<1$, the function $\exp(\ln^\alpha x)$, $\alpha<1$, is slowly varying at infinity. Hence,

$$T_{n1}\le c\sum_{k=1}^n\frac{\exp(2\ln^\alpha k)}{k^2}\le c\sum_{k=1}^\infty\frac{\exp(2\ln^\alpha k)}{k^2}<\infty.$$

By (6),

$$T_{n2}\le c\sum_{\substack{1\le k<j\le n\\ j/k\ge(\ln D_n)^{2/\delta}}} d_kd_j\Big(\frac{k}{j}\Big)^\delta \le \frac{c}{\ln^2 D_n}\sum_{1\le k<j\le n} d_kd_j \le \frac{cD_n^2}{\ln^2 D_n}. \qquad (9)$$

On the other hand, if α = 0, we have dk = e/k, Dn ~ e ln n, hence, for sufficiently large n,

$$T_{n3}\le c\sum_{k=1}^n\frac{1}{k}\sum_{j=k}^{k(\ln D_n)^{2/\delta}}\frac{1}{j} \le cD_n\ln\ln D_n \le \frac{cD_n^2}{\ln^2 D_n}. \qquad (10)$$

If α > 0, note that

$$D_n \sim \int_1^n\frac{\exp(\ln^\alpha x)}{x}\,dx = \int_0^{\ln n}\exp(y^\alpha)\,dy \sim \int_0^{\ln n}\Big(\exp(y^\alpha)+\frac{1-\alpha}{\alpha}\,y^{-\alpha}\exp(y^\alpha)\Big)\,dy = \frac{1}{\alpha}\Big[y^{1-\alpha}\exp(y^\alpha)\Big]_0^{\ln n} = \frac{1}{\alpha}\,\ln^{1-\alpha}n\,\exp(\ln^\alpha n), \quad n\to\infty. \qquad (11)$$

This implies

$$\ln D_n \sim \ln^\alpha n, \qquad \exp(\ln^\alpha n) \sim \frac{\alpha D_n}{(\ln D_n)^{(1-\alpha)/\alpha}}, \qquad \ln\ln D_n \sim \alpha\ln\ln n.$$

Thus, combining this with $|\xi_k|\le c$ for all $k$,

$$T_{n3} \le c\sum_{k=1}^n d_k\sum_{\substack{k<j\le n\\ j/k<(\ln D_n)^{2/\delta}}} d_j \le c\sum_{k=1}^n d_k\sum_{k<j\le k(\ln D_n)^{2/\delta}}\frac{\exp(\ln^\alpha n)}{j} \le c\exp(\ln^\alpha n)\,\ln\ln D_n\sum_{k=1}^n d_k \le \frac{cD_n^2\,\ln\ln D_n}{(\ln D_n)^{(1-\alpha)/\alpha}}.$$

Since $\alpha<1/2$, we have $(1-2\alpha)/(2\alpha)>0$ and $\varepsilon_1:=1/(2\alpha)-1>0$. Thus, for sufficiently large $n$, we get

$$T_{n3} \le \frac{cD_n^2}{(\ln D_n)^{1/(2\alpha)}}\cdot\frac{\ln\ln D_n}{(\ln D_n)^{(1-2\alpha)/(2\alpha)}} \le \frac{cD_n^2}{(\ln D_n)^{1/(2\alpha)}} = \frac{cD_n^2}{(\ln D_n)^{1+\varepsilon_1}}. \qquad (12)$$

Let $T_n:=\frac{1}{D_n}\sum_{k=1}^n d_k\xi_k$ and $\varepsilon_2:=\min(1,\varepsilon_1)$. Combining (8)-(12), for sufficiently large $n$, we get

$$\mathbb{E}T_n^2 \le \frac{c}{(\ln D_n)^{1+\varepsilon_2}}.$$

By (11), we have $D_{n+1}\sim D_n$. Let $0<\eta<\varepsilon_2/(1+\varepsilon_2)$ and $n_k=\inf\{n;\,D_n\ge\exp(k^{1-\eta})\}$; then $D_{n_k}\ge\exp(k^{1-\eta})$ and $D_{n_k-1}<\exp(k^{1-\eta})$. Therefore

$$1\le\frac{D_{n_k}}{\exp(k^{1-\eta})}\sim\frac{D_{n_k-1}}{\exp(k^{1-\eta})}<1,$$

that is,

$$D_{n_k}\sim\exp(k^{1-\eta}).$$

Since $(1-\eta)(1+\varepsilon_2)>1$ by the definition of $\eta$, for any $\varepsilon>0$ we have

$$\sum_{k=1}^\infty\mathbb{P}\big(|T_{n_k}|>\varepsilon\big) \le c\sum_{k=1}^\infty\mathbb{E}T_{n_k}^2 \le c\sum_{k=1}^\infty\frac{1}{k^{(1-\eta)(1+\varepsilon_2)}} < \infty.$$

By the Borel-Cantelli lemma,

$$T_{n_k}\to 0 \quad\text{a.s.}$$

Now, for $n_k<n\le n_{k+1}$, using $|\xi_i|\le c$ for all $i$,

$$|T_n| \le |T_{n_k}| + \frac{c}{D_{n_k}}\sum_{i=n_k+1}^{n_{k+1}} d_i \le |T_{n_k}| + c\Big(\frac{D_{n_{k+1}}}{D_{n_k}}-1\Big) \to 0 \quad\text{a.s.}$$

from $\frac{D_{n_{k+1}}}{D_{n_k}} \sim \frac{\exp((k+1)^{1-\eta})}{\exp(k^{1-\eta})} = \exp\big(k^{1-\eta}\big((1+1/k)^{1-\eta}-1\big)\big) \sim \exp\big((1-\eta)k^{-\eta}\big) \to 1$. That is, (7) holds. This completes the proof of Lemma 2.2.
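As a side check (illustrative, with arbitrary choices of $\alpha$ and $n$), the first equivalence in (11), $D_n\sim\int_0^{\ln n}\exp(y^\alpha)\,dy$, can be verified numerically; the closed form $\frac{1}{\alpha}\ln^{1-\alpha}n\,\exp(\ln^\alpha n)$ is approached much more slowly and is not tested here:

```python
import numpy as np

alpha, n = 0.25, 10**6

k = np.arange(1, n + 1)
D_n = np.sum(np.exp(np.log(k) ** alpha) / k)   # D_n = sum of d_k up to n

# Trapezoidal approximation of  integral_0^{ln n} exp(y^alpha) dy  from (11).
y = np.linspace(0.0, np.log(n), 200_001)
f = np.exp(y ** alpha)
integral = np.sum((f[:-1] + f[1:]) / 2 * np.diff(y))

print(D_n, integral, D_n / integral)
```

The ratio $D_n/\int_0^{\ln n}\exp(y^\alpha)\,dy$ is already close to 1 at moderate $n$, in line with (11).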

Let $l(x)=\mathbb{E}X^2I(|X|\le x)$, $b=\inf\{x\ge 1;\,l(x)>0\}$, and

$$\eta_j=\inf\Big\{s;\,s\ge b+1,\ \frac{l(s)}{s^2}\le\frac{1}{j}\Big\} \quad\text{for } j\ge 1.$$

By the definition of $\eta_j$, we have $jl(\eta_j)\le\eta_j^2$ and $jl(\eta_j-\varepsilon)>(\eta_j-\varepsilon)^2$ for any $\varepsilon>0$. It implies that

$$nl(\eta_n)\sim\eta_n^2, \quad\text{as } n\to\infty. \qquad (13)$$

For every 1 ≤ i n, let

$$\bar{X}_{ni}=X_iI(|X_i|\le\eta_n), \qquad \bar{S}_n=\sum_{i=1}^n\bar{X}_{ni}, \qquad \bar{V}_n^2=\sum_{i=1}^n\bar{X}_{ni}^2.$$

Lemma 2.3. Suppose that the assumptions of Theorem 1.1 hold. Then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k I\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\le x\Big)=\Phi(x)\quad\text{a.s. for any } x, \qquad (14)$$

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\Big(I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big)-\mathbb{E}I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big)\Big)=0\quad\text{a.s.}, \qquad (15)$$

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\Big(f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big)-\mathbb{E}f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big)\Big)=0\quad\text{a.s.}, \qquad (16)$$

where dk and Dn are defined by (4) and f is a non-negative, bounded Lipschitz function.

Proof. By the central limit theorem for i.i.d. random variables and $\mathrm{Var}\,\bar{S}_n\sim nl(\eta_n)$ as $n\to\infty$ (which follows from $\mathbb{E}X=0$, Lemma 2.1(iii), and (13)), it follows that

$$\frac{\bar{S}_n-\mathbb{E}\bar{S}_n}{\sqrt{nl(\eta_n)}}\xrightarrow{d} N, \quad\text{as } n\to\infty,$$

where N denotes the standard normal random variable. This implies that for any g(x) which is a non-negative, bounded Lipschitz function

$$\mathbb{E}g\Big(\frac{\bar{S}_n-\mathbb{E}\bar{S}_n}{\sqrt{nl(\eta_n)}}\Big)\to\mathbb{E}g(N), \quad\text{as } n\to\infty.$$

Hence, we obtain

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\,\mathbb{E}g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big)=\mathbb{E}g(N)$$

from the Toeplitz lemma.

On the other hand, note that (14) is equivalent to

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\,g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big)=\mathbb{E}g(N)\quad\text{a.s.}$$

from Theorem 7.1 of [16] and Section 2 of [17]. Hence, to prove (14), it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\Big(g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big)-\mathbb{E}g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big)\Big)=0\quad\text{a.s.}, \qquad (17)$$

for any g(x) which is a non-negative, bounded Lipschitz function.

For any k ≥ 1, let

$$\xi_k=g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big)-\mathbb{E}g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big).$$

For any $1\le k<j$, note that $g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big)$ and $g\Big(\frac{\bar{S}_j-\mathbb{E}\bar{S}_j-\sum_{i=1}^k(X_i-\mathbb{E}X_i)I(|X_i|\le\eta_j)}{\sqrt{jl(\eta_j)}}\Big)$ are independent and $g(x)$ is a non-negative, bounded Lipschitz function. By the definition of $\eta_j$, we get

$$|\mathbb{E}\xi_k\xi_j| = \Big|\mathrm{Cov}\Big(g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big),\,g\Big(\frac{\bar{S}_j-\mathbb{E}\bar{S}_j}{\sqrt{jl(\eta_j)}}\Big)\Big)\Big| = \Big|\mathrm{Cov}\Big(g\Big(\frac{\bar{S}_k-\mathbb{E}\bar{S}_k}{\sqrt{kl(\eta_k)}}\Big),\,g\Big(\frac{\bar{S}_j-\mathbb{E}\bar{S}_j}{\sqrt{jl(\eta_j)}}\Big)-g\Big(\frac{\bar{S}_j-\mathbb{E}\bar{S}_j-\sum_{i=1}^k(X_i-\mathbb{E}X_i)I(|X_i|\le\eta_j)}{\sqrt{jl(\eta_j)}}\Big)\Big)\Big| \le c\,\frac{\mathbb{E}\big|\sum_{i=1}^k(X_i-\mathbb{E}X_i)I(|X_i|\le\eta_j)\big|}{\sqrt{jl(\eta_j)}} \le c\,\frac{\sqrt{k\,\mathbb{E}X^2I(|X|\le\eta_j)}}{\sqrt{jl(\eta_j)}} = c\Big(\frac{k}{j}\Big)^{1/2}.$$

By Lemma 2.2, (17) holds.

Now we prove (15). Let

$$Z_k=I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big)-\mathbb{E}I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big) \quad\text{for any } k\ge 1.$$

It is known that $I(A\cup B)-I(B)\le I(A)$ for any sets $A$ and $B$. Then, for $1\le k<j$, by Lemma 2.1(ii) and (13), we get

$$\mathbb{P}(|X|>\eta_j)=o(1)\,\frac{l(\eta_j)}{\eta_j^2}=\frac{o(1)}{j}. \qquad (18)$$

Hence

$$|\mathbb{E}Z_kZ_j| = \Big|\mathrm{Cov}\Big(I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big),\,I\Big(\bigcup_{i=1}^j(|X_i|>\eta_j)\Big)\Big)\Big| = \Big|\mathrm{Cov}\Big(I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big),\,I\Big(\bigcup_{i=1}^j(|X_i|>\eta_j)\Big)-I\Big(\bigcup_{i=k+1}^j(|X_i|>\eta_j)\Big)\Big)\Big| \le \mathbb{E}\Big|I\Big(\bigcup_{i=1}^j(|X_i|>\eta_j)\Big)-I\Big(\bigcup_{i=k+1}^j(|X_i|>\eta_j)\Big)\Big| \le \mathbb{E}I\Big(\bigcup_{i=1}^k(|X_i|>\eta_j)\Big) \le k\,\mathbb{P}(|X|>\eta_j) \le c\,\frac{k}{j}.$$

By Lemma 2.2, (15) holds.

Finally, we prove (16). Let

$$\zeta_k=f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big)-\mathbb{E}f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big) \quad\text{for any } k\ge 1.$$

For 1 ≤ k < j,

$$|\mathbb{E}\zeta_k\zeta_j| = \Big|\mathrm{Cov}\Big(f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big),\,f\Big(\frac{\bar{V}_j^2}{jl(\eta_j)}\Big)\Big)\Big| = \Big|\mathrm{Cov}\Big(f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big),\,f\Big(\frac{\bar{V}_j^2}{jl(\eta_j)}\Big)-f\Big(\frac{\bar{V}_j^2-\sum_{i=1}^kX_i^2I(|X_i|\le\eta_j)}{jl(\eta_j)}\Big)\Big)\Big| \le c\,\frac{\mathbb{E}\sum_{i=1}^kX_i^2I(|X_i|\le\eta_j)}{jl(\eta_j)} = c\,\frac{k\,\mathbb{E}X^2I(|X|\le\eta_j)}{jl(\eta_j)} = c\,\frac{kl(\eta_j)}{jl(\eta_j)} = c\,\frac{k}{j}.$$

By Lemma 2.2, (16) holds. This completes the proof of Lemma 2.3.

Proof of Theorem 1.1. For any given 0 < ε < 1, note that

$$I\Big(\frac{S_k}{V_k}\le x\Big) \le I\Big(\frac{\bar{S}_k}{\sqrt{(1+\varepsilon)kl(\eta_k)}}\le x\Big)+I\big(\bar{V}_k^2>(1+\varepsilon)kl(\eta_k)\big)+I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big) \quad\text{for } x\ge 0,$$
$$I\Big(\frac{S_k}{V_k}\le x\Big) \le I\Big(\frac{\bar{S}_k}{\sqrt{(1-\varepsilon)kl(\eta_k)}}\le x\Big)+I\big(\bar{V}_k^2<(1-\varepsilon)kl(\eta_k)\big)+I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big) \quad\text{for } x<0,$$

and

$$I\Big(\frac{S_k}{V_k}\le x\Big) \ge I\Big(\frac{\bar{S}_k}{\sqrt{(1-\varepsilon)kl(\eta_k)}}\le x\Big)-I\big(\bar{V}_k^2<(1-\varepsilon)kl(\eta_k)\big)-I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big) \quad\text{for } x\ge 0,$$
$$I\Big(\frac{S_k}{V_k}\le x\Big) \ge I\Big(\frac{\bar{S}_k}{\sqrt{(1+\varepsilon)kl(\eta_k)}}\le x\Big)-I\big(\bar{V}_k^2>(1+\varepsilon)kl(\eta_k)\big)-I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big) \quad\text{for } x<0.$$

Hence, to prove (5), it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k I\Big(\frac{\bar{S}_k}{\sqrt{kl(\eta_k)}}\le\sqrt{1\pm\varepsilon}\,x\Big)=\Phi(\sqrt{1\pm\varepsilon}\,x)\quad\text{a.s.}, \qquad (19)$$

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big)=0\quad\text{a.s.}, \qquad (20)$$

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k I\big(\bar{V}_k^2>(1+\varepsilon)kl(\eta_k)\big)=0\quad\text{a.s.}, \qquad (21)$$

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k I\big(\bar{V}_k^2<(1-\varepsilon)kl(\eta_k)\big)=0\quad\text{a.s.}, \qquad (22)$$

by the arbitrariness of ε > 0.

First, we prove (19). Let $0<\beta<1/2$ and let $h(\cdot)$ be a real function such that, for any given $x\in\mathbb{R}$,

$$I\big(y\le\sqrt{1\pm\varepsilon}\,x-\beta\big)\le h(y)\le I\big(y\le\sqrt{1\pm\varepsilon}\,x+\beta\big). \qquad (23)$$

By $\mathbb{E}X=0$, Lemma 2.1(iii), and (13), we have

$$|\mathbb{E}\bar{S}_k| = k\,|\mathbb{E}XI(|X|\le\eta_k)| = k\,|\mathbb{E}XI(|X|>\eta_k)| \le k\,\mathbb{E}|X|I(|X|>\eta_k) = o\big(\sqrt{kl(\eta_k)}\big).$$

Combining this with (14), (23), and the arbitrariness of $\beta$ in (23), (19) holds.

By (15), (18) and the Toeplitz lemma,

$$0 \le \frac{1}{D_n}\sum_{k=1}^n d_k I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big) \sim \frac{1}{D_n}\sum_{k=1}^n d_k\,\mathbb{E}I\Big(\bigcup_{i=1}^k(|X_i|>\eta_k)\Big) \le \frac{1}{D_n}\sum_{k=1}^n d_k\,k\,\mathbb{P}(|X|>\eta_k) \to 0 \quad\text{a.s.}$$

That is, (20) holds.

Now we prove (21). For any μ > 0, let f be a non-negative, bounded Lipschitz function such that

$$I(x>1+\mu)\le f(x)\le I(x>1+\mu/2).$$

From $\mathbb{E}\bar{V}_k^2=kl(\eta_k)$, the fact that $\bar{X}_{k1},\ldots,\bar{X}_{kk}$ are i.i.d., Chebyshev's inequality, Lemma 2.1(iv), and (13),

$$\mathbb{P}\Big(\bar{V}_k^2>\Big(1+\frac{\mu}{2}\Big)kl(\eta_k)\Big) = \mathbb{P}\Big(\bar{V}_k^2-\mathbb{E}\bar{V}_k^2>\frac{\mu}{2}\,kl(\eta_k)\Big) \le c\,\frac{\mathbb{E}(\bar{V}_k^2-\mathbb{E}\bar{V}_k^2)^2}{k^2l^2(\eta_k)} \le c\,\frac{\mathbb{E}X^4I(|X|\le\eta_k)}{kl^2(\eta_k)} = o(1)\,\frac{\eta_k^2}{kl(\eta_k)} = o(1) \to 0.$$

Therefore, from (16) and the Toeplitz lemma,

$$0 \le \frac{1}{D_n}\sum_{k=1}^n d_k I\big(\bar{V}_k^2>(1+\mu)kl(\eta_k)\big) \le \frac{1}{D_n}\sum_{k=1}^n d_k\,f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big) \sim \frac{1}{D_n}\sum_{k=1}^n d_k\,\mathbb{E}f\Big(\frac{\bar{V}_k^2}{kl(\eta_k)}\Big) \le \frac{1}{D_n}\sum_{k=1}^n d_k\,\mathbb{P}\Big(\bar{V}_k^2>\Big(1+\frac{\mu}{2}\Big)kl(\eta_k)\Big) \to 0 \quad\text{a.s.}$$

Hence, (21) holds. By similar methods used to prove (21), we can prove (22). This completes the proof of Theorem 1.1.

Competing interests

The author declares that they have no competing interests.

Acknowledgements

The author is very grateful to the referees and the Editors for their valuable comments and helpful suggestions that improved the clarity and readability of the article. This work was supported by the National Natural Science Foundation of China (11061012), the program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011]47), and the support program of the Key Laboratory of Spatial Information and Geomatics (1103108-08).

References

  1. Giné, E, Götze, F, Mason, DM: When is the Student t-statistic asymptotically standard normal? Ann Probab. 25, 1514–1531 (1997)

  2. Brosamler, GA: An almost everywhere central limit theorem. Math Proc Camb Philos Soc. 104, 561–574 (1988)

  3. Schatte, P: On strong versions of the central limit theorem. Math Nachr. 137, 249–256 (1988)

  4. Lacey, MT, Philipp, W: A note on the almost sure central limit theorem. Statist Probab Lett. 9, 201–205 (1990)

  5. Ibragimov, IA, Lifshits, M: On the convergence of generalized moments in almost sure central limit theorem. Stat Probab Lett. 40, 343–351 (1998)

  6. Miao, Y: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc Indian Acad Sci Math Sci. 118(2), 289–294 (2008)

  7. Berkes, I, Csáki, E: A universal result in almost sure central limit theory. Stoch Proc Appl. 94, 105–134 (2001)

  8. Hörmann, S: Critical behavior in almost sure central limit theory. J Theoret Probab. 20, 613–636 (2007)

  9. Wu, QY: Almost sure limit theorems for stable distribution. Stat Probab Lett. 81(6), 662–672 (2011)

  10. Wu, QY: An almost sure central limit theorem for the weight function sequences of NA random variables. Proc Math Sci. 121(3), 369–377 (2011)

  11. Ye, DX, Wu, QY: Almost sure central limit theorem of product of partial sums for strongly mixing. J Inequal Appl. 2011, 9 (2011). Article ID 576301

  12. Huang, SH, Pang, TX: An almost sure central limit theorem for self-normalized partial sums. Comput Math Appl. 60, 2639–2644 (2010)

  13. Zhang, Y, Yang, XY: An almost sure central limit theorem for self-normalized products of sums of i.i.d. random variables. J Math Anal Appl. 376, 29–41 (2011)

  14. Chandrasekharan, K, Minakshisundaram, S: Typical Means. Oxford University Press, Oxford (1952)

  15. Csörgő, M, Szyszkowicz, B, Wang, QY: Donsker's theorem for self-normalized partial sums processes. Ann Probab. 31(3), 1228–1240 (2003)

  16. Billingsley, P: Convergence of Probability Measures. Wiley, New York (1968)

  17. Peligrad, M, Shao, QM: A note on the almost sure central limit theorem for weakly dependent random variables. Stat Probab Lett. 22, 131–136 (1995)