Improved results in almost sure central limit theorems for the maxima and partial sums for Gaussian sequences
Journal of Inequalities and Applications volume 2015, Article number: 109 (2015)
Abstract
Let \(X, X_{1}, X_{2},\ldots\) be a standardized Gaussian sequence. Universal results in almost sure central limit theorems are established for the maxima \(M_{n}\) and for the partial sums and maxima \((S_{n}/\sigma_{n}, M_{n})\), where \(S_{n}=\sum_{i=1}^{n}X_{i}\), \(\sigma^{2}_{n}=\operatorname{Var}S_{n}\), and \(M_{n}=\max_{1\leq i\leq n}X_{i}\).
1 Introduction
Starting with Brosamler [1] and Schatte [2], over the last two decades several authors have investigated the almost sure central limit theorem (ASCLT), dealing mostly with partial sums of random variables. ASCLT results for partial sums were obtained by Ibragimov and Lifshits [3], Miao [4], Berkes and Csáki [5], Hörmann [6], Wu [7–9], and Wu and Chen [10], and the concept has found applications in many areas. Fahrner and Stadtmüller [11] and Nadarajah and Mitov [12] investigated the ASCLT for the maxima of i.i.d. random variables. The ASCLT for Gaussian sequences has seen new developments in recent years: significant contributions extending the principle to the maxima of a Gaussian sequence can be found in Csáki and Gonchigdanzan [13], Chen and Lin [14], Tan et al. [15], and Tan and Peng [16]. Further, Peng et al. [17–19], Zhao et al. [20], and Tan and Wang [21] studied the maxima and partial sums of standardized nonstationary Gaussian sequences.
A standardized Gaussian sequence \(\{X_{n}; n\geq1\}\) is a sequence of standard normal random variables such that, for any choice of n and \(i_{1},\ldots,i_{n}\), the joint distribution of \(X_{i_{1}},\ldots,X_{i_{n}}\) is an n-dimensional normal distribution. Throughout this paper we assume \(\{X_{n}; n\geq1\}\) is a standardized Gaussian sequence with covariance \(r_{i,j}:=\operatorname{Cov}(X_{i}, X_{j})\). For each \(n\geq1\), let \(S_{n}=\sum_{i=1}^{n}X_{i}\), \(\sigma^{2}_{n}=\operatorname{Var}S_{n}\), \(M_{n}=\max_{1\leq i\leq n}X_{i}\). The symbols \(S_{n}/\sigma_{n}\) and \(M_{n}\) denote partial sums and maxima, respectively. Let \(\Phi(\cdot)\) and \(\phi(\cdot)\) denote the standard normal distribution function and its density function, respectively, and let I denote the indicator function. \(A_{n}\sim B_{n}\) denotes \(\lim_{n\rightarrow\infty}A_{n}/B_{n}=1\), and \(A_{n}\ll B_{n}\) means that there exists a constant \(c>0\) such that \(A_{n}\leq cB_{n}\) for sufficiently large n. The symbol c stands for a generic positive constant which may differ from one place to another. The normalizing constants \(a_{n}\) and \(b_{n}\) are defined by
\[a_{n}=(2\ln n)^{1/2},\qquad b_{n}=(2\ln n)^{1/2}-\frac{\ln\ln n+\ln(4\pi)}{2(2\ln n)^{1/2}}.\tag{1}\]
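The norming constants above are the classical ones from Gaussian extreme value theory. As a quick numerical sanity check (a sketch only, not part of the paper's argument; the helper names are ours), one can verify the relation \(n(1-\Phi(x/a_{n}+b_{n}))\rightarrow \mathrm{e}^{-x}\) invoked in the proofs via Theorem 1.5.3 in Leadbetter et al. [26]:

```python
import math

def gauss_tail(x):
    # 1 - Phi(x), computed via erfc for accuracy in the far tail
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def norming_constants(n):
    # Classical norming constants for maxima of standard normals:
    # a_n = (2 ln n)^{1/2},
    # b_n = (2 ln n)^{1/2} - (ln ln n + ln 4*pi) / (2 (2 ln n)^{1/2})
    a = math.sqrt(2.0 * math.log(n))
    b = a - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a)
    return a, b

# The relation n * (1 - Phi(x/a_n + b_n)) -> exp(-x) as n -> infinity:
x = 0.5
for n in (10**4, 10**6, 10**8):
    a_n, b_n = norming_constants(n)
    print(n, n * gauss_tail(x / a_n + b_n))  # slowly approaches exp(-0.5)
```

The convergence is logarithmically slow, which is typical of Gaussian extremes.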
Chen and Lin [14] obtained the following almost sure limit theorem for the maximum of a standardized nonstationary Gaussian sequence.
Theorem A
Let \(\{X_{n}; n\geq1\}\) be a standardized nonstationary Gaussian sequence such that \(|r_{ij}|\leq\rho_{|i-j|}\) for \(i\neq j\), where \(\rho_{n}<1\) for all \(n\geq1\) and \(\rho_{n}\ll\frac{1}{\ln n(\ln\ln n)^{1+\varepsilon}}\). Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(n(1-\Phi(\lambda_{n}))\) is bounded and \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\geq c\ln^{1/2}n\) for some \(c>0\). If \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) as \(n\rightarrow\infty\) for some \(\tau\geq0\), then
\[\lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1}^{n}\frac{1}{k}I\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki})\biggr)=\mathrm{e}^{-\tau}\quad\mbox{a.s.}\]
Zhao et al. [20] obtained the following almost sure limit theorem for the maximum and partial sums of a standardized nonstationary Gaussian sequence.
Theorem B
Let \(\{X_{n}; n\geq1\}\) be a standardized nonstationary Gaussian sequence. Suppose that there exists a numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0<\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). If \(\sup_{i\neq j}|r_{ij}|=\delta<1\) and the covariance conditions (2) and (3) hold, then
\[\lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1}^{n}\frac{1}{k}I\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y\biggr)=\mathrm{e}^{-\tau}\Phi(y)\quad\mbox{a.s. for all } y\in\mathbb{R}\tag{4}\]
and
\[\lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1}^{n}\frac{1}{k}I\biggl(a_{k}(M_{k}-b_{k})\leq x, \frac{S_{k}}{\sigma_{k}}\leq y\biggr)=\exp(-\mathrm{e}^{-x})\Phi(y)\quad\mbox{a.s. for all } x, y\in\mathbb{R}.\tag{5}\]
By the terminology of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [22], p.35), the larger the weight sequence in the ASCLT is, the stronger the resulting relation becomes. Based on this view, one should expect stronger results from larger weights. Moreover, it would be of considerable interest to determine the optimal weights.
The purpose of this paper is to substantially improve the weight sequences and to greatly weaken conditions (2) and (3) of Theorem B obtained by Zhao et al. [20]. We establish the ASCLT for the maxima \(M_{n}\) and for the partial sums and maxima of standardized Gaussian sequences, and we show that the ASCLT holds under a fairly general growth condition on the weights \(d_{k}=k^{-1}\exp(\ln^{\alpha}k)\), \(0\leq\alpha<1/2\).
2 Main results
Set
\[d_{k}=\frac{\exp(\ln^{\alpha}k)}{k},\qquad D_{n}=\sum_{k=1}^{n}d_{k},\qquad 0\leq\alpha<1/2.\tag{6}\]
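For concreteness, the weight sequence \(d_{k}=\exp(\ln^{\alpha}k)/k\) and its partial sums \(D_{n}\) can be tabulated numerically. The short sketch below (the helper name `weights` is ours) illustrates how a larger α produces a heavier weight sequence, in line with the summation-procedure heuristic of Section 1:

```python
import math

def weights(n, alpha):
    # d_k = exp((ln k)^alpha) / k  and  D_n = sum_{k<=n} d_k,  0 <= alpha < 1/2
    d = [math.exp(math.log(k) ** alpha) / k for k in range(1, n + 1)]
    return d, sum(d)

# Larger alpha gives (eventually) heavier weights, hence a stronger ASCLT:
for alpha in (0.0, 0.25, 0.49):
    _, D = weights(10**5, alpha)
    print(alpha, D)
```

Note that for \(\alpha=0\) the weight is \(d_{k}=\mathrm{e}/k\), so \(D_{n}\sim\mathrm{e}\ln n\) (cf. Remark 2.6).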
Our theorems are formulated in a more general setting.
Theorem 2.1
Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0\leq\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). Suppose that \(|r_{ij}|\leq\rho_{|i-j|}\) for \(i\neq j\), where \(\rho_{n}<1\) for all \(n\geq1\) and \(\rho_{n}\) satisfies the decay condition (7).
Then
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}I\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki})\biggr)=\mathrm{e}^{-\tau}\quad\mbox{a.s.}\tag{8}\]
and
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}I\bigl(a_{k}(M_{k}-b_{k})\leq x\bigr)=\exp(-\mathrm{e}^{-x})\quad\mbox{a.s. for all } x\in\mathbb{R},\tag{9}\]
where \(a_{n}\) and \(b_{n}\) are defined by (1).
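In the simplest covered case of i.i.d. standard normals (all \(r_{ij}=0\), so the covariance condition is trivial), conclusion (9) can be illustrated by a seeded simulation of a single sample path. This is only a sketch with \(\alpha=0\) weights: the logarithmic averaging converges very slowly, so a single run merely hovers near the limit \(\exp(-\mathrm{e}^{-x})\):

```python
import math, random

random.seed(12345)
n = 20000
x = 0.0                       # evaluate at x = 0; the a.s. limit is exp(-e^0) = 1/e

num, den = 0.0, 0.0
M = random.gauss(0.0, 1.0)    # running maximum M_k of i.i.d. standard normals
for k in range(2, n + 1):
    M = max(M, random.gauss(0.0, 1.0))
    a = math.sqrt(2.0 * math.log(k))
    b = a - (math.log(math.log(k)) + math.log(4.0 * math.pi)) / (2.0 * a)
    d = 1.0 / k               # alpha = 0 weights (up to the constant factor e)
    den += d
    num += d * (1.0 if a * (M - b) <= x else 0.0)

avg = num / den
print(avg)                    # hovers around exp(-1) ~ 0.37; convergence is slow
```

The running maximum is updated in O(1) per step, so one path of length n costs O(n).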
Theorem 2.2
Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0\leq\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). Suppose that \(\sup_{i\neq j}|r_{ij}|=\delta<1\) and that there exists a constant \(0< c<1/2\) such that conditions (10) and (11) are satisfied.
Then
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}I\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y\biggr)=\mathrm{e}^{-\tau}\Phi(y)\quad\mbox{a.s. for all } y\in\mathbb{R}\tag{12}\]
and
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}I\biggl(a_{k}(M_{k}-b_{k})\leq x, \frac{S_{k}}{\sigma_{k}}\leq y\biggr)=\exp(-\mathrm{e}^{-x})\Phi(y)\quad\mbox{a.s. for all } x, y\in\mathbb{R},\tag{13}\]
where \(a_{n}\) and \(b_{n}\) are defined by (1).
Taking \(u_{ki}=u_{k}\) for \(1\leq i\leq k\) in Theorems 2.1 and 2.2, we can immediately obtain the following corollaries.
Corollary 2.3
Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{n}; n\geq1\}\) be such that \(n(1-\Phi(u_{n}))\rightarrow\tau\) for some \(0\leq\tau<\infty\). Suppose that condition (7) is satisfied. Then (9) and
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}I(M_{k}\leq u_{k})=\mathrm{e}^{-\tau}\quad\mbox{a.s.}\]
hold.
Corollary 2.4
Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{n}; n\geq1\}\) be such that \(n(1-\Phi(u_{n}))\rightarrow\tau\) for some \(0\leq\tau<\infty\). Suppose that \(\sup_{i\neq j}|r_{ij}|=\delta<1\) and that there exists a constant \(0< c<1/2\) such that conditions (10) and (11) are satisfied. Then (13) and
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}I\biggl(M_{k}\leq u_{k}, \frac{S_{k}}{\sigma_{k}}\leq y\biggr)=\mathrm{e}^{-\tau}\Phi(y)\quad\mbox{a.s.}\]
hold.
By the terminology of summation procedures (see e.g. Chandrasekharan and Minakshisundaram [22], p.35), we have the following corollary.
Corollary 2.5
Theorems 2.1 and 2.2 and Corollaries 2.3 and 2.4 remain valid if we replace the weight sequence \(\{d_{k}; k\geq1\}\) by any \(\{d_{k}^{\ast}; k\geq1\}\) such that \(0\leq d_{k}^{\ast}\leq d_{k}\) and \(\sum_{k=1}^{\infty}d_{k}^{\ast}=\infty\).
Remark 2.6
Obviously, condition (10) is significantly weaker than condition (2). In particular, taking \(\alpha=0\), i.e., the weight \(d_{k}=\mathrm{e}/k\), we have \(D_{n}\sim\mathrm{e}\ln n\) and \(\ln D_{n}\sim\ln\ln n\); in this case condition (11) is significantly weaker than condition (3), and conclusions (12) and (13) become (4) and (5), respectively. Therefore, Theorem 2.2 not only substantially improves the weights but also greatly weakens the restrictions on the covariance \(r_{ij}\) in Theorem B obtained by Zhao et al. [20].
Remark 2.7
Theorem A obtained by Chen and Lin [14] is the special case of Theorem 2.1 with \(\alpha=0\). When \(\{X_{n}; n\geq1\}\) is stationary, \(u_{ni}=u_{n}\), \(1\leq i\leq n\), and \(\alpha=0\), Theorem 2.1 reduces to Corollary 2.2 of Csáki and Gonchigdanzan [13].
Remark 2.8
Whether (8), (9), (12), and (13) remain valid for some \(1/2\leq\alpha<1\) is an open question.
3 Proofs
The proofs of our results follow a well-known scheme for proving a.s. limit theorems; see, e.g., Berkes and Csáki [5], Chuprunov and Fazekas [23, 24], and Fazekas and Rychlik [25]. Extending the weights from \(d_{k}=1/k\) to \(d_{k}=\exp(\ln^{\alpha}k)/k\), \(0\leq\alpha<1/2\), and relaxing the restrictions on the covariance \(r_{ij}\) create considerable difficulties; the following five lemmas play a key role in overcoming them. The proofs of Lemmas 3.2 to 3.4 are given in the Appendix.
Lemma 3.1
(Normal comparison lemma, Theorem 4.2.1 in Leadbetter et al. [26])
Suppose \(\xi_{1},\ldots, \xi_{n}\) are standard normal variables with covariance matrix \(\Lambda^{1}=(\Lambda^{1}_{ij})\), and \(\eta_{1},\ldots, \eta_{n}\) similarly with covariance matrix \(\Lambda^{0}=(\Lambda^{0}_{ij})\); let \(\rho_{ij}=\max(|\Lambda^{1}_{ij}|, |\Lambda^{0}_{ij}|)\) and \(\max_{i\neq j}\rho_{ij}=\delta<1\). Further, let \(u_{1},\ldots, u_{n}\) be real numbers. Then
\[\biggl|\mathbb{P}\biggl(\bigcap_{j=1}^{n}(\xi_{j}\leq u_{j})\biggr)-\mathbb{P}\biggl(\bigcap_{j=1}^{n}(\eta_{j}\leq u_{j})\biggr)\biggr|\leq K\sum_{1\leq i<j\leq n}\bigl|\Lambda^{1}_{ij}-\Lambda^{0}_{ij}\bigr|\exp\biggl(-\frac{u_{i}^{2}+u_{j}^{2}}{2(1+\rho_{ij})}\biggr)\]
for some constant K, depending only on δ.
Lemma 3.2
Suppose that the conditions of Theorem 2.1 hold. Then there exists a constant \(\gamma>0\) such that (14), (15), and (16) hold, where ε is defined by (7).
Lemma 3.3
Suppose that the conditions of Theorem 2.2 hold. Then there exists a constant \(\gamma>0\) such that (17) and (18) hold.
The following weak convergence results extend Theorem 4.5.2 of Leadbetter et al. [26] to nonstationary normal random variables.
Lemma 3.4
Suppose that the conditions of Theorem 2.1 hold. Then
\[\mathbb{P}\biggl(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni})\biggr)\rightarrow\mathrm{e}^{-\tau}\quad\mbox{as } n\rightarrow\infty.\tag{19}\]
Suppose that the conditions of Theorem 2.2 hold. Then
\[\mathbb{P}\biggl(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni}), \frac{S_{n}}{\sigma_{n}}\leq y\biggr)\rightarrow\mathrm{e}^{-\tau}\Phi(y)\quad\mbox{as } n\rightarrow\infty.\tag{20}\]
Lemma 3.5
Let \(\{\xi_{n}; n\geq1\}\) be a sequence of uniformly bounded random variables. If
\[\mathbb{E}\biggl(\sum_{k=1}^{n}d_{k}\xi_{k}\biggr)^{2}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}\]
for some \(\varepsilon>0\), then
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\xi_{k}=0\quad\mbox{a.s.},\]
where \(d_{n}\) and \(D_{n}\) are defined by (6).
Proof
Similarly to the proof of Lemma 2.2 in Wu [9], we can prove Lemma 3.5. □
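The content of Lemma 3.5, a weighted strong law for uniformly bounded variables whose weighted sums have small second moments, can be illustrated with i.i.d. bounded variables. This is a minimal seeded sketch (the variable names are ours); for i.i.d. centered variables the cross terms vanish, so the second-moment condition of the lemma holds trivially:

```python
import math, random

random.seed(0)
n = 20000
# i.i.d. Uniform(-1,1): uniformly bounded and centered, so
# E(sum d_k xi_k)^2 = (1/3) * sum d_k^2 = O(1), far below D_n^2 / (ln D_n)^{1+eps}.
d = [1.0 / k for k in range(1, n + 1)]      # alpha = 0 weights (up to a factor e)
D = sum(d)
avg = sum(dk * random.uniform(-1.0, 1.0) for dk in d) / D
print(avg)                                  # near 0, matching the a.s. limit
```

Here \(D_{n}\sim\ln n\), so the weighted average is driven to 0 at a logarithmic rate.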
Proof of Theorem 2.1
Using Lemma 3.4, \(\mathbb{P}(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni}))\rightarrow\exp(-\tau)\), and hence by the Toeplitz lemma,
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\mathbb{P}\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki})\biggr)=\mathrm{e}^{-\tau}.\]
Therefore, in order to prove (8), it suffices to prove that
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\biggl(I\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki})\biggr)-\mathbb{P}\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki})\biggr)\biggr)=0\quad\mbox{a.s.}\]
Let \(\xi_{k}:=I (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}) )-\mathbb{P} (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}) )\); then \(\mathbb{E}\xi_{k}=0\) and \(|\xi_{k}|\leq1\) for all \(k\geq1\). By Lemma 3.5, it suffices to show that
\[\mathbb{E}\biggl(\sum_{k=1}^{n}d_{k}\xi_{k}\biggr)^{2}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}\tag{21}\]
for some \(\varepsilon>0\). Hence
Since \(|\xi_{k}|\leq1\) and \(\exp(2\ln^{\beta}x)=\exp(2\int_{1}^{x}\frac{(\ln u)^{\beta-1}}{u} \,\mathrm{d}u)\), \(\beta<1\), is a slowly varying function at infinity, from Seneta [27], it follows that
By Lemma 3.2, for \(1\leq k< l\),
for \(\gamma_{1}=\min(1, \gamma)>0\). Hence,
By (11) in Wu [9],
From this, combined with the fact that \(\int_{1}^{x}\frac{l(t)}{t^{\beta}}\,\mathrm{d}t\sim\frac{l(x)x^{1-\beta}}{1-\beta}\) as \(x\rightarrow\infty\) for \(\beta<1\) and \(l(x)\) is a slowly varying function at infinity (see Proposition 1.5.8 in Bingham et al. [28]), we get
for \(0<\varepsilon_{1}<(1-2\alpha)/\alpha\).
Now, we estimate \(T_{22}\). For \(\alpha>0\), by (25)
For \(\alpha=0\), noting the fact that \(D_{n}\sim\ln n\), similarly we get
Equations (22)-(24), (26)-(28) together establish (21), which concludes the proof of (8). Next, take \(u_{ni}=u_{n}=x/a_{n} +b_{n}\). Then we see that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))=n(1-\Phi(u_{n})) \rightarrow\exp(-x)\) as \(n\rightarrow\infty\) (see Theorem 1.5.3 in Leadbetter et al. [26]) and hence (9) immediately follows from (8) with \(u_{ni}=x/a_{n} +b_{n}\).
This completes the proof of Theorem 2.1. □
Proof of Theorem 2.2
Using Lemma 3.4, \(\mathbb{P}(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni}), S_{n}/\sigma_{n}\leq y )\rightarrow\mathrm{e}^{-\tau}\Phi(y)\), and hence by the Toeplitz lemma,
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\mathbb{P}\biggl(\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y\biggr)=\mathrm{e}^{-\tau}\Phi(y).\]
Let \(\eta_{k}:=I (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y )-\mathbb{P} (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y )\). Therefore, in order to prove (12), it suffices to prove that
\[\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\eta_{k}=0\quad\mbox{a.s.},\]
which will be done, via Lemma 3.5, by showing that
\[\mathbb{E}\biggl(\sum_{k=1}^{n}d_{k}\eta_{k}\biggr)^{2}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}\tag{29}\]
for some \(\varepsilon>0\). By Lemma 3.3, for \(1\leq k< l/\ln l\),
Hence,
By the proof of (26),
Now, we estimate \(T_{4}\). For \(\alpha>0\), by (25)
for \(0<\varepsilon_{2}<1/(2\alpha)-1\).
For \(\alpha=0\),
Equations (30)-(34) together establish (29) for \(\varepsilon=\min(\varepsilon_{1}, \varepsilon_{2})>0\), which concludes the proof of (12). Next, take \(u_{ni}=u_{n}=x/a_{n} +b_{n}\). Then we see that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))=n(1-\Phi(u_{n})) \rightarrow\exp(-x)\) as \(n\rightarrow\infty\) (see Theorem 1.5.3 in Leadbetter et al. [26]) and hence (13) immediately follows from (12) with \(u_{ni}=x/a_{n} +b_{n}\).
This completes the proof of Theorem 2.2. □
References
Brosamler, GA: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 104, 561-574 (1988)
Schatte, P: On strong versions of the central limit theorem. Math. Nachr. 137, 249-256 (1988)
Ibragimov, IA, Lifshits, M: On the convergence of generalized moments in almost sure central limit theorem. Stat. Probab. Lett. 40, 343-351 (1998)
Miao, Y: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc. Indian Acad. Sci. Math. Sci. 118(2), 289-294 (2008)
Berkes, I, Csáki, E: A universal result in almost sure central limit theory. Stoch. Process. Appl. 94, 105-134 (2001)
Hörmann, S: Critical behavior in almost sure central limit theory. J. Theor. Probab. 20, 613-636 (2007)
Wu, QY: Almost sure limit theorems for stable distribution. Stat. Probab. Lett. 81(6), 662-672 (2011)
Wu, QY: An almost sure central limit theorem for the weight function sequences of NA random variables. Proc. Indian Acad. Sci. Math. Sci. 121(3), 369-377 (2011)
Wu, QY: A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law. J. Inequal. Appl. 2012, 17 (2012). doi:10.1186/1029-242X-2012-17
Wu, QY, Chen, PY: An improved result in almost sure central limit theorem for self-normalized products of partial sums. J. Inequal. Appl. 2013, 129 (2013). doi:10.1186/1029-242X-2013-129
Fahrner, I, Stadtmüller, U: On almost sure max-limit theorems. Stat. Probab. Lett. 37, 229-236 (1998)
Nadarajah, S, Mitov, K: Asymptotics of maxima of discrete random variables. Extremes 5, 287-294 (2002)
Csáki, E, Gonchigdanzan, K: Almost sure limit theorems for the maximum of stationary Gaussian sequences. Stat. Probab. Lett. 58, 195-203 (2002)
Chen, SQ, Lin, ZY: Almost sure max-limits for nonstationary Gaussian sequence. Stat. Probab. Lett. 76, 1175-1184 (2006)
Tan, ZQ, Peng, ZX, Nadarajah, S: Almost sure convergence of sample range. Extremes 10, 225-233 (2007)
Tan, ZQ, Peng, ZX: Almost sure convergence for non-stationary random sequences. Stat. Probab. Lett. 79, 857-863 (2009)
Peng, ZX, Liu, MM, Nadarajah, S: Conditions based on conditional moments for max-stable limit laws. Extremes 11, 329-337 (2008)
Peng, ZX, Wang, LL, Nadarajah, S: Almost sure central limit theorem for partial sums and maxima. Math. Nachr. 282(4), 632-636 (2009)
Peng, ZX, Weng, ZC, Nadarajah, S: Almost sure limit theorems of extremes of complete and incomplete samples of stationary sequences. Extremes 13, 463-480 (2010)
Zhao, SL, Peng, ZX, Wu, SL: Almost sure convergence for the maximum and the sum of nonstationary Gaussian sequences. J. Inequal. Appl. 2010, Article ID 856495 (2010). doi:10.1155/2010/856495
Tan, ZQ, Wang, YB: Almost sure central limit theorem for the maxima and sums of stationary Gaussian sequences. J. Korean Stat. Soc. 40, 347-355 (2011)
Chandrasekharan, K, Minakshisundaram, S: Typical Means. Oxford University Press, Oxford (1952)
Chuprunov, A, Fazekas, I: Almost sure versions of some analogues of the invariance principle. Publ. Math. (Debr.) 54(3-4), 457-471 (1999)
Chuprunov, A, Fazekas, I: Almost sure versions of some functional limit theorems. J. Math. Sci. (N.Y.) 111(3), 3528-3536 (2002)
Fazekas, I, Rychlik, Z: Almost sure functional limit theorems. Ann. Univ. Mariae Curie-Skłodowska, Sect. A 56(1), 1-18 (2002)
Leadbetter, MR, Lindgren, G, Rootzén, H: Extremes and Related Properties of Random Sequences and Processes. Springer, New York (1983)
Seneta, E: Regularly Varying Functions. Lecture Notes in Mathematics, vol. 508. Springer, Berlin (1976)
Bingham, NH, Goldie, CM, Teugels, JL: Regular Variation. Cambridge University Press, Cambridge (1987)
Acknowledgements
The author is very grateful to the referees and the Editors for their valuable comments and some helpful suggestions that improved the clarity and readability of the paper. The author was supported by the National Natural Science Foundation of China (11361019), project supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011] 47), and the Support Program of the Guangxi China Science Foundation (2013GXNSFDA019001).
Competing interests
The author declares no competing interests.
Author’s information
Qunying Wu is a professor, doctor, working in the field of probability and statistics.
Appendix
Proof of Lemma 3.2
By assumption (7), we have \(\delta:=\sup_{i\neq j}|r_{ij}|<1\). Define λ such that \(0<\lambda<2/(1+\delta)-1\), for \(1\leq k\leq l\),
Since \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}\) is as defined in Theorem 2.1, there exists a constant \(c>0\) such that \(n(1-\Phi(\lambda_{n}))\leq c\). Define \(v_{n}\) by \(n(1-\Phi(v_{n}))=c\); then clearly \(v_{n}\leq\lambda_{n}\).
Since \(1-\Phi(x)\sim\phi(x)/x\) as \(x\rightarrow\infty\), we have
for \(0<\gamma<2/(1+\delta)-1-\lambda\).
Setting \(\sigma_{j}=\sup_{i\geq j}\rho_{i}\), by (7) and (25),
and
This, combined with (36), shows
This, together with (35) and (37) implies that (14) holds.
It is well known that \(\mathbb{P}(B)-\mathbb{P}(AB)\leq \mathbb{P}(\bar{A})\) for any events A and B. Then, using the condition that \(n(1-\Phi(\lambda_{n}))\) is bounded, for \(1\leq k< l\) we get
Hence, (15) holds.
Now we prove (16). By (14), applying the normal comparison lemma, Lemma 3.1, for \(1\leq k< l\),
Hence, (16) holds. □
Proof of Lemma 3.3
Notice, for \(1\leq k< l\),
Using \(l-2|\sum_{1\leq i< j\leq l}r_{ij}|\leq\sigma^{2}_{l}\leq l+2|\sum_{1\leq i<j\leq l}r_{ij}|\) and (10), there exist constants \(c_{i}>0\), \(i=1,2\), such that
Hence, using (11), for \(1\leq i\leq l\leq n\),
and
Noting the fact that \(\ln l\) and \(\ln D_{l}\) are slowly varying functions at infinity, (40) and (41) imply that there exists \(0<\mu<1\) such that, for sufficiently large l,
and
Combining (36), (40), and the normal comparison lemma, Lemma 3.1, for \(i=3, 4\),
for \(0<\gamma_{1}<1/(1+\mu)-1/2\).
By the proof of (15), we have \(H_{5}\ll k/l\). This, combined with (38) and (42), implies that (17) holds for \(\gamma=\min(1, \gamma_{1})>0\).
Now we prove (18). Again applying the normal comparison lemma, Lemma 3.1, for \(1\leq k< l/\ln l\),
for \(0<\gamma_{2}<2/(1+\delta)-1\),
and
Hence (18) follows for \(\gamma=\min(\gamma_{1}, \gamma_{2})>0\) and thus (17) and (18) hold for \(\gamma=\min(\gamma_{1}, \gamma_{2}, 1)>0\). □
Proof of Lemma 3.4
On applying (14), it follows from the normal comparison lemma, Lemma 3.1, that
Hence, by \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\), we get
That is, (19) holds. Using the proof of \(H_{3}\) and (19), (20) follows from
 □
Cite this article
Wu, Q. Improved results in almost sure central limit theorems for the maxima and partial sums for Gaussian sequences. J Inequal Appl 2015, 109 (2015). https://doi.org/10.1186/s13660-015-0634-3