
Complete moment convergence of moving average process generated by a class of random variables

Abstract

In this paper, we establish the complete moment convergence of a moving average process generated by the class of random variables satisfying a Rosenthal-type maximal inequality and a weak mean dominating condition with a mean dominating variable.

1 Introduction

Let \(\{Y_{i}, -\infty< i<\infty\}\) be a doubly infinite sequence of random variables with zero means and finite variances and \(\{a_{i}, -\infty < i<\infty\}\) an absolutely summable sequence of real numbers. Define a moving average process \(\{X_{n}, n\geq1\}\) by

$$ X_{n}=\sum_{i=-\infty}^{\infty}a_{i} Y_{i+n},\quad n\geq1. $$
(1.1)
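For orientation (an illustrative special case we add), if \(a_{0}=a_{1}=\frac{1}{2}\) and \(a_{i}=0\) otherwise, then (1.1) reduces to the two-term average

$$X_{n}=\tfrac{1}{2}(Y_{n}+Y_{n+1}),\quad n\geq1, $$

and, more generally, every classical finite-order moving average arises by letting all but finitely many of the \(a_{i}\) vanish.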

The concept of complete moment convergence is as follows. Let \(\{Y_{n}, n\geq1\}\) be a sequence of random variables and \(a_{n}>0\), \(b_{n}>0\). If \(\sum_{n=1}^{\infty}a_{n} E\{b_{n}^{-1}|Y_{n}|-\epsilon\}^{+}<\infty\) for all \(\epsilon>0\), then we say that \(\{Y_{n}, n\geq1\}\) satisfies complete moment convergence. It is well known that complete moment convergence implies complete convergence.
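Indeed (a one-line verification we add for completeness), since \(E\{b_{n}^{-1}|Y_{n}|-\epsilon\}^{+}=\int_{0}^{\infty}P(b_{n}^{-1}|Y_{n}|>\epsilon+t)\,dt\), we have

$$E\bigl\{ b_{n}^{-1}|Y_{n}|-\epsilon\bigr\} ^{+}\geq\int_{0}^{\epsilon}P\bigl(b_{n}^{-1}|Y_{n}|>\epsilon+t\bigr)\,dt\geq\epsilon P\bigl(|Y_{n}|>2\epsilon b_{n}\bigr), $$

so finiteness of \(\sum_{n=1}^{\infty}a_{n} E\{b_{n}^{-1}|Y_{n}|-\epsilon\}^{+}\) for all \(\epsilon>0\) forces \(\sum_{n=1}^{\infty}a_{n} P(|Y_{n}|>\epsilon b_{n})<\infty\) for all \(\epsilon>0\), which is complete convergence.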

Chow [1] first established the following complete moment convergence result for a sequence of i.i.d. random variables, generalizing the result of Baum and Katz [2].

Theorem 1.1

Suppose that \(\{Y_{n}, n\geq1\}\) is a sequence of i.i.d. random variables with \(EY_{1}=0\). For \(1\leq p<2\) and \(r>p\), if \(E\{|Y_{1}|^{r}+|Y_{1}|\log(1+|Y_{1}|)\}<\infty\), then \(\sum_{n=1}^{\infty}n^{\frac{r}{p}-2-\frac{1}{p}}E(|\sum_{i=1}^{n} Y_{i}|-\epsilon n^{\frac{1}{p}})^{+}<\infty\) for any \(\epsilon>0\).

Recently, many authors have studied the complete moment convergence of moving average processes under dependence assumptions; see, for example, Li and Zhang [3] for NA random variables, Zhou [4] for φ-mixing random variables, and Zhou and Lin [5] for ρ-mixing random variables.

We recall that a sequence \(\{Y_{n}, n\geq1\}\) of random variables satisfies a weak mean dominating condition with a mean dominating random variable Y if there is some positive constant C such that

$$ \frac{1}{n} \sum_{k=1}^{n} P\bigl(|Y_{k}|>x\bigr)\leq CP\bigl(|Y|>x\bigr) $$
(1.2)

for all \(x>0\) and all \(n\geq1\) (see Kuczmaszewska [6]).
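For example (an elementary observation we add), an identically distributed sequence satisfies (1.2) with \(Y=Y_{1}\) and \(C=1\), since in that case

$$\frac{1}{n} \sum_{k=1}^{n} P\bigl(|Y_{k}|>x\bigr)=P\bigl(|Y_{1}|>x\bigr) \quad\mbox{for all } x>0; $$

more generally, any sequence with \(\sup_{k}P(|Y_{k}|>x)\leq CP(|Y|>x)\) for all \(x>0\) satisfies (1.2).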

One of the most interesting inequalities in probability theory and mathematical statistics is the Rosenthal-type maximal inequality. For a sequence \(\{Y_{i}, 1\leq i \leq n\}\) of i.i.d. random variables with \(E|Y_{1}|^{q}<\infty\) for some \(q\geq2\), there exists a positive constant \(C_{q}\), depending only on q, such that

$$ E\Biggl(\max_{1\leq j \leq n}\Biggl|\sum_{i=1}^{j} (Y_{i}-EY_{i})\Biggr|\Biggr)^{q}\leq C_{q} \Biggl\{ \sum_{i=1}^{n} E|Y_{i}|^{q}+ \Biggl(\sum_{i=1}^{n} EY_{i}^{2} \Biggr)^{q/2}\Biggr\} . $$
(1.3)
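As a sanity check (our remark), for \(q=2\) the right-hand side of (1.3) is \(2C_{2}\sum_{i=1}^{n} EY_{i}^{2}\), and the inequality then follows from Doob's \(L^{2}\) maximal inequality applied to the martingale \(S_{j}=\sum_{i=1}^{j}(Y_{i}-EY_{i})\):

$$E\Bigl(\max_{1\leq j \leq n}|S_{j}|\Bigr)^{2}\leq4ES_{n}^{2}=4\sum_{i=1}^{n}\operatorname{Var}(Y_{i})\leq4\sum_{i=1}^{n} EY_{i}^{2}. $$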

Inequality (1.3) has been extended to dependent random variables by many authors; see, for example, Peligrad [7] for a strictly stationary ρ-mixing sequence, Peligrad and Gut [8] for a \(\rho^{*}\)-mixing sequence, Stoica [9] for a martingale difference sequence, and so forth.

In this paper we will establish the complete moment convergence for a moving average process generated by the class of random variables satisfying a Rosenthal-type maximal inequality and a weak mean dominating condition.

2 Some lemmas

The following lemmas will be useful to prove the main results.

Recall that a real valued function h, positive and measurable on \([0, \infty)\), is said to be slowly varying at infinity if for each \(\lambda>0\)

$$\lim_{x\rightarrow\infty}\frac{h(\lambda x)}{h(x)}=1. $$
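Typical examples (added here for concreteness) are positive constant functions and \(h(x)=\log(1+x)\), since for each \(\lambda>0\)

$$\lim_{x\rightarrow\infty}\frac{\log(1+\lambda x)}{\log(1+x)}=1, $$

whereas \(h(x)=x^{\delta}\) with \(\delta\neq0\) is not slowly varying.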

Lemma 2.1

(Zhou [4])

If h is a slowly varying function at infinity and m a positive integer, then

(1) \(\sum_{n=1}^{m} n^{t} h(n)\leq C m^{t+1} h(m)\) for \(t>-1\);

(2) \(\sum_{n=m}^{\infty}n^{t} h(n)\leq C m^{t+1} h(m)\) for \(t<-1\).
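As a quick check (ours), taking \(h\equiv1\) reduces (1) and (2) to the elementary estimates

$$\sum_{n=1}^{m} n^{t}\leq C m^{t+1}\ (t>-1),\qquad \sum_{n=m}^{\infty}n^{t}\leq C m^{t+1}\ (t<-1), $$

which follow by comparing each sum with the corresponding integral of \(x^{t}\).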

Lemma 2.2

(Gut [10])

Let \(\{X_{n}, n\geq1\}\) be a sequence of random variables satisfying a weak mean dominating condition with a mean dominating random variable X, i.e., there exists some positive constant C such that

$$\frac{1}{n}\sum_{i=1}^{n} P\bigl(|X_{i}|>x\bigr)\leq C P\bigl(|X|>x\bigr) \quad\textit{for all } x>0 \textit{ and all }n \geq1. $$

Let \(r>0\) and for some \(A>0\)

$$\begin{aligned}& X_{i}^{\prime}=X_{i} I\bigl(|X_{i}|\leq A\bigr),\qquad X_{i}^{\prime\prime}=X_{i} I\bigl(|X_{i}|>A\bigr),\\& X_{i}^{*}=X_{i} I\bigl(|X_{i}|\leq A\bigr)-AI(X_{i}< -A)+AI(X_{i}>A), \end{aligned}$$

and

$$\begin{aligned}& X^{\prime}=X I\bigl(|X|\leq A\bigr),\qquad X^{\prime\prime}=X I\bigl(|X|>A\bigr),\\& X^{*}=X I\bigl(|X|\leq A\bigr)-AI(X< -A)+AI(X>A) . \end{aligned}$$

Then for some \(C>0\)

(1) if \(E|X|^{r}<\infty\), then \(n^{-1}\sum_{i=1}^{n} E|X_{i}|^{r}\leq CE|X|^{r}\);

(2) \(n^{-1}\sum_{i=1}^{n} E|X_{i}^{\prime}|^{r}\leq C(E|X^{\prime}|^{r}+A^{r} P(|X|>A))\) for any \(A>0\);

(3) \(n^{-1}\sum_{i=1}^{n} E|X_{i}^{\prime\prime}|^{r}\leq CE|X^{\prime\prime}|^{r}\) for any \(A>0\);

(4) \(n^{-1}\sum_{i=1}^{n} E|X_{i}^{*}|^{r}\leq CE|X^{*}|^{r}\) for any \(A>0\).
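For orientation (a one-line verification we add, not part of the statement of the lemma), item (1) follows from the layer-cake formula \(E|Z|^{r}=\int_{0}^{\infty}rx^{r-1}P(|Z|>x)\,dx\) and the dominating condition displayed above:

$$\frac{1}{n}\sum_{i=1}^{n} E|X_{i}|^{r}=\int_{0}^{\infty}rx^{r-1}\Biggl(\frac{1}{n}\sum_{i=1}^{n} P\bigl(|X_{i}|>x\bigr)\Biggr)\,dx\leq C\int_{0}^{\infty}rx^{r-1}P\bigl(|X|>x\bigr)\,dx=CE|X|^{r}. $$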

3 Main result

Theorem 3.1

Let h be a function slowly varying at infinity, \(p\geq1\), \(\alpha>\frac{1}{2}\), and \(\alpha p>1\). Assume that \(\{a_{i}, -\infty< i<\infty\}\) is an absolutely summable sequence of real numbers and that \(\{Y_{i}, -\infty< i<\infty\}\) is a sequence of mean zero random variables satisfying a weak mean dominating condition with a mean dominating random variable Y, i.e., there exists some positive constant C such that

$$\frac{1}{n}\sum_{i=j+1}^{j+n} P\bigl(|Y_{i}|>x\bigr)\leq C P\bigl(|Y|>x\bigr) \quad\textit{for all } x>0, -\infty< j< \infty $$

and all \(n\geq1\). Assume further that \(E|Y|^{p} h(|Y|^{\frac{1}{\alpha}})<\infty\).

Suppose that \(\{X_{n}, n\geq1\}\) is the moving average process defined by (1.1).

Assume that for any \(q\geq2\) there exists a positive constant \(C_{q}\), depending only on q, such that

$$ E\Biggl(\max_{1\leq i \leq n}\Biggl|\sum_{j=1}^{i} (Y_{xj}-EY_{xj})\Biggr|^{q}\Biggr)\leq C_{q} \Biggl\{ \sum_{j=1}^{n} E|Y_{xj}|^{q}+ \Biggl(\sum_{j=1}^{n} EY_{xj}^{2} \Biggr)^{q/2}\Biggr\} , $$
(3.1)

where \(Y_{xj}=-xI(Y_{j}<-x)+Y_{j}I(|Y_{j}|\leq x)+xI(Y_{j}>x)\) for all \(x>0\).

Then for all \(\epsilon>0\)

$$ \sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}h(n)E\Biggl\{ \max_{1\leq i\leq n}\Biggl|\sum_{j=1}^{i} X_{j}\Biggr|-\epsilon n^{\alpha}\Biggr\} ^{+}< \infty $$
(3.2)

and

$$ \sum_{n=1}^{\infty}n^{\alpha p-2}h(n)E\Biggl\{ \sup_{i\geq n}\Biggl|i^{-\alpha}\sum_{j=1}^{i} X_{j}\Biggr|-\epsilon\Biggr\} ^{+}< \infty. $$
(3.3)

Proof of (3.2)

Let \(\tilde{Y_{xj}}=Y_{j}-Y_{xj}\) and \(l(n)=n^{\alpha p-2-\alpha}h(n)\).
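Note (a short verification we add) that \(Y_{xj}\) is simply \(Y_{j}\) truncated at level x, so that

$$\tilde{Y_{xj}}=(Y_{j}+x)I(Y_{j}< -x)+(Y_{j}-x)I(Y_{j}>x) \quad\mbox{and}\quad |\tilde{Y_{xj}}|=\bigl(|Y_{j}|-x\bigr)^{+}\leq|Y_{j}|I\bigl[|Y_{j}|>x\bigr], $$

a bound that is used repeatedly below, together with \(|Y_{xj}|\leq\min\{|Y_{j}|,x\}\).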

Recall that \(\sum_{k=1}^{n} X_{k}=\sum_{k=1}^{n} \sum_{i=-\infty}^{\infty}a_{i} Y_{i+k}=\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n}Y_{j}\) by (1.1).

If \(\alpha>1\), by the assumption that \(\sum_{i=-\infty}^{\infty}|a_{i}|<\infty\) and Lemma 2.2 we have, for \(x>n^{\alpha}\),

$$\begin{aligned} x^{-1}\Biggl|E\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n} Y_{xj}\Biggr| &\leq Cx^{-1} n\bigl\{ E|Y|I\bigl[|Y|\leq x\bigr]+xP\bigl(|Y|>x\bigr)\bigr\} \\ &\leq C n^{1-\alpha}\rightarrow0 \quad\mbox{as }n\rightarrow\infty. \end{aligned}$$
(3.4i)

If \(\frac{1}{2}<\alpha\leq1\), \(\alpha p>1\) implies \(p>1\). By the assumption \(EY_{i}=0\) for all \(-\infty< i<\infty\) and Lemma 2.2 we obtain

$$\begin{aligned} x^{-1}\Biggl|E\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n} Y_{xj}\Biggr|&=x^{-1}\Biggl|E\sum_{i=-\infty}^{\infty}a_{i}\sum_{j=i+1}^{i+n}\tilde{ Y_{xj}}\Biggr| \\ &\leq Cx^{-1}\sum_{i=-\infty}^{\infty}|a_{i}| \sum_{j=i+1}^{i+n}E|Y_{j}|I\bigl[|Y_{j}|>x\bigr] \\ &\leq Cx^{-1} n E|Y|I\bigl[|Y|>x\bigr]\leq Cx^{\frac{1}{\alpha}-1}E|Y|I\bigl[|Y|>x\bigr] \\ &\leq CE|Y|^{p}I\bigl[|Y|>x\bigr]\rightarrow0\quad \mbox{as }x\rightarrow\infty. \end{aligned}$$
(3.4ii)

It follows from (3.4i) and (3.4ii) that for \(x>n^{\alpha}\) large enough,

$$ x^{-1}\Biggl|E\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n} Y_{xj}\Biggr|< \frac {\epsilon}{4}, $$
(3.5)
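Indeed (spelling out a step that is implicit here), writing \(\sum_{j=1}^{k} X_{j}=\sum_{i=-\infty}^{\infty}a_{i}\sum_{j=i+1}^{i+k}\tilde{Y_{xj}}+\sum_{i=-\infty}^{\infty}a_{i}\sum_{j=i+1}^{i+k}(Y_{xj}-EY_{xj})+E\sum_{i=-\infty}^{\infty}a_{i}\sum_{j=i+1}^{i+k}Y_{xj}\) and noting that, by the same estimates as in (3.4i) and (3.4ii), the last term is at most \(\frac{\epsilon x}{4}\) in absolute value for every \(1\leq k\leq n\), we obtain the inclusion

$$\Biggl\{ \max_{1\leq k \leq n}\Biggl|\sum_{j=1}^{k} X_{j}\Biggr|\geq\epsilon x\Biggr\} \subset\Biggl\{ \max_{1\leq k \leq n}\Biggl|\sum_{i=-\infty}^{\infty}a_{i}\sum_{j=i+1}^{i+k}\tilde{Y_{xj}}\Biggr|\geq\frac{\epsilon x}{2}\Biggr\} \cup\Biggl\{ \max_{1\leq k \leq n}\Biggl|\sum_{i=-\infty}^{\infty}a_{i}\sum_{j=i+1}^{i+k}(Y_{xj}-EY_{xj})\Biggr|\geq\frac{\epsilon x}{4}\Biggr\} , $$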

which yields

$$\begin{aligned} &\sum_{n=1}^{\infty}l(n) E\Biggl\{ \max_{1\leq k \leq n}\Biggl|\sum_{j=1}^{k} X_{j}\Biggr|-\epsilon n^{\alpha}\Biggr\} ^{+} \\ &\quad\leq\sum_{n=1}^{\infty}l(n)\int _{\epsilon n^{\alpha}}^{\infty}P\Biggl(\max_{1\leq k \leq n}\Biggl|\sum _{j=1}^{k} X_{j}\Biggr|\geq x\Biggr)\,dx \quad\bigl(\mbox{letting } x=\epsilon x^{\prime}\bigr) \\ &\quad\leq\epsilon\sum_{n=1}^{\infty}l(n)\int _{n^{\alpha}}^{\infty}P\Biggl(\max_{1\leq k \leq n}\Biggl|\sum _{j=1}^{k} X_{j}\Biggr|\geq \epsilon x^{\prime}\Biggr)\,dx^{\prime} \\ &\quad\leq C \sum_{n=1}^{\infty}l(n)\int _{n^{\alpha}}^{\infty}P\Biggl(\max_{1\leq k \leq n}\Biggl|\sum _{i=-\infty}^{\infty}a_{i}\sum _{j=i+1}^{i+k} \tilde {Y_{xj}}\Biggr|\geq \frac{\epsilon x}{2}\Biggr)\,dx \\ &\qquad{}+C\sum_{n=1}^{\infty}l(n)\int _{n^{\alpha}}^{\infty}P\Biggl(\max_{1\leq k \leq n}\Biggl|\sum _{i=-\infty}^{\infty}a_{i}\sum _{j=i+1}^{i+k}( Y_{xj}-E Y_{xj})\Biggr|\geq \frac{\epsilon x}{4}\Biggr)\,dx \\ &\quad=I_{1}+I_{2}. \end{aligned}$$
(3.6)

We first show that \(I_{1}<\infty\). As noted above, \(|\tilde{Y_{xj}}|\leq|Y_{j}|I[|Y_{j}|>x]\). Hence, for \(I_{1}\), by Markov’s inequality and Lemma 2.2, we have

$$\begin{aligned} I_{1} \leq&C\sum_{n=1}^{\infty}l(n) \int_{n^{\alpha}}^{\infty}x^{-1}E\max _{1\leq k\leq n}\Biggl|\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}\tilde {Y_{xj}}\Biggr|\,dx \\ \leq&C\sum_{n=1}^{\infty}l(n) \int _{n^{\alpha}}^{\infty}x^{-1} \sum _{i=-\infty}^{\infty}|a_{i}| \sum _{j=i+1}^{i+n}E|\tilde{Y_{xj}}|\,dx \\ \leq&C\sum_{n=1}^{\infty}n l(n) \int _{n^{\alpha}}^{\infty}x^{-1} E|Y|I\bigl[|Y|>x\bigr]\,dx \\ =&C\sum_{n=1}^{\infty}n l(n)\sum _{m=n}^{\infty}\int_{m^{\alpha}}^{(m+1)^{\alpha}} x^{-1}E|Y|I\bigl[|Y|>x\bigr]\,dx \\ \leq&C\sum_{n=1}^{\infty}n l(n)\sum _{m=n}^{\infty}m^{-1}E|Y|I\bigl[|Y|>m^{\alpha}\bigr] \\ =&C\sum_{m=1}^{\infty}m^{-1}E|Y|I \bigl[|Y|>m^{\alpha}\bigr]\sum_{n=1}^{m} n^{\alpha p-1-\alpha}h(n). \end{aligned}$$
(3.7)
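In (3.7), the passage from the integral over \([m^{\alpha},(m+1)^{\alpha}]\) to the bound \(m^{-1}E|Y|I[|Y|>m^{\alpha}]\) uses (a brief check we add) the fact that \(E|Y|I[|Y|>x]\) is nonincreasing in x together with

$$\int_{m^{\alpha}}^{(m+1)^{\alpha}}x^{-1}\,dx=\alpha\log\biggl(1+\frac{1}{m}\biggr)\leq\frac{\alpha}{m}, $$

and the final equality is obtained by interchanging the order of summation over m and n.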

If \(p>1\), note that \(\alpha p-1-\alpha>-1\). By Lemma 2.1 and (3.7) we obtain

$$\begin{aligned} I_{1} \leq&C\sum_{m=1}^{\infty}m^{\alpha p-1-\alpha}h(m)E|Y|I\bigl[|Y|>m^{\alpha}\bigr] \\ =&C\sum_{m=1}^{\infty}m^{\alpha p-1-\alpha}h(m) \sum_{k=m}^{\infty}E|Y|I\bigl[k^{\alpha}< |Y| \leq(k+1)^{\alpha}\bigr] \\ =&C\sum_{k=1}^{\infty}E|Y|I \bigl[k^{\alpha}< |Y|\leq(k+1)^{\alpha}\bigr]\sum _{m=1}^{k} m^{\alpha p-1-\alpha}h(m) \\ \leq&C\sum_{k=1}^{\infty}k^{\alpha p-\alpha}h(k) E|Y|I\bigl[k^{\alpha}< |Y|\leq (k+1)^{\alpha}\bigr] \\ \leq&C E|Y|^{p} h\bigl(|Y|^{\frac{1}{\alpha}}\bigr)< \infty. \end{aligned}$$
(3.8)

If \(p=1\), by (3.7), we also obtain

$$\begin{aligned} I_{1} \leq&C\sum_{m=1}^{\infty}m^{-1}E|Y|I\bigl[|Y|>m^{\alpha}\bigr]\sum _{n=1}^{m} n^{-1}h(n) \\ \leq&C\sum_{m=1}^{\infty}m^{-1}E|Y|I \bigl[|Y|>m^{\alpha}\bigr]\sum_{n=1}^{m} n^{-1+\alpha\delta}h(n) \quad\mbox{for any }\delta>0 \\ \leq&C\sum_{m=1}^{\infty}m^{\alpha\delta -1}h(m)E|Y|I \bigl[|Y|>m^{\alpha}\bigr] \\ \leq&C E|Y|^{1+\delta} h\bigl(|Y|^{\frac{1}{\alpha}}\bigr)< \infty. \end{aligned}$$
(3.9)

So, by (3.8) and (3.9) we get

$$ I_{1}< \infty \quad\mbox{for } p\geq1. $$
(3.10)

For \(I_{2}\), by Markov’s inequality, Hölder’s inequality, and (3.1) we get for any \(q\geq2\)

$$\begin{aligned} I_{2} \leq&C\sum_{n=1}^{\infty}l(n) \int_{n^{\alpha}}^{\infty}x^{-q}E\max _{1\leq k\leq n}\Biggl|\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}(Y_{xj}-E Y_{xj})\Biggr|^{q}\,dx \\ \leq&C\sum_{n=1}^{\infty}l(n) \int _{n^{\alpha}}^{\infty}x^{-q} \\ &{} \times E\Biggl[\sum_{i=-\infty}^{\infty}\bigl(|a_{i}|^{1-\frac{1}{q}}\bigr) \Biggl(|a_{i}|^{\frac {1}{q}} \max_{1\leq k\leq n}\Biggl|\sum_{j=i+1}^{i+k}(Y_{xj}-E Y_{xj})\Biggr|\Biggr)\Biggr]^{q}\,dx \\ \leq&C\sum_{n=1}^{\infty}l(n) \int _{n^{\alpha}}^{\infty}x^{-q} \\ &{} \times\Biggl(\sum_{i=-\infty}^{\infty}|a_{i}| \Biggr)^{q-1}\Biggl(\sum_{i=-\infty}^{\infty}|a_{i}|E\max_{1\leq k\leq n}\biggl|\sum_{j=i+1}^{i+k}(Y_{xj}-E Y_{xj})\biggr|^{q}\Biggr)\,dx \\ \leq&C\sum_{n=1}^{\infty}l(n) \int _{n^{\alpha}}^{\infty}x^{-q}\sum _{i=-\infty}^{\infty}|a_{i}|\sum _{j=i+1}^{i+n}E|Y_{xj}-E Y_{xj}|^{q}\,dx \\ &{} +C\sum_{n=1}^{\infty}l(n) \int _{n^{\alpha}}^{\infty}x^{-q}\sum _{i=-\infty }^{\infty}|a_{i}|\Biggl(\sum _{j=i+1}^{i+n}E|Y_{xj}-E Y_{xj}|^{2} \Biggr)^{\frac{q}{2}}\,dx \\ =:&I_{21}+I_{22}. \end{aligned}$$
(3.11)

For \(I_{21}\), we consider the following two cases.

If \(p>1\), take \(q>\max\{2,p\}\). Then by the assumption that \(\sum_{i=-\infty}^{\infty}|a_{i}|<\infty\), the \(C_{r}\) inequality, and Lemmas 2.1 and 2.2 we get

$$\begin{aligned} I_{21} \leq&C\sum_{n=1}^{\infty}n l(n) \int_{n^{\alpha}}^{\infty}x^{-q}\bigl\{ E|Y|^{q}I\bigl[|Y|\leq x\bigr]+ x^{q}P\bigl(|Y|>x\bigr)\bigr\} \,dx \\ \leq&C\sum_{n=1}^{\infty}n l(n) \sum _{m=n}^{\infty}\int_{m^{\alpha}}^{(m+1)^{\alpha}} \bigl\{ x^{-q}E|Y|^{q}I\bigl[|Y|\leq x\bigr]+ P\bigl(|Y|>x\bigr)\bigr\} \,dx \\ \leq&C\sum_{n=1}^{\infty}n l(n) \sum _{m=n}^{\infty}\bigl\{ m^{\alpha(1-q)-1} E|Y|^{q}I \bigl[|Y|\leq(m+1)^{\alpha}\bigr]+ m^{\alpha-1}P\bigl(|Y|>m^{\alpha}\bigr)\bigr\} \\ =&C\sum_{m=1}^{\infty}\bigl\{ m^{\alpha(1-q)-1} E|Y|^{q}I\bigl[|Y|\leq(m+1)^{\alpha}\bigr]+ m^{\alpha-1}P \bigl(|Y|>m^{\alpha}\bigr)\bigr\} \sum_{n=1}^{m} n l(n) \\ \leq&C\sum_{m=1}^{\infty}m^{\alpha(p-q)-1}h(m) \sum_{k=1}^{m} E|Y|^{q}I \bigl[k^{\alpha}< |Y|\leq(k+1)^{\alpha}\bigr] \\ &{} +C\sum_{m=1}^{\infty}m^{\alpha p-1}h(m) \sum_{k=m}^{\infty}EI\bigl[k^{\alpha}< |Y| \leq(k+1)^{\alpha}\bigr] \\ =&C\sum_{k=1}^{\infty}E|Y|^{q}I \bigl[k^{\alpha}< |Y|\leq(k+1)^{\alpha}\bigr]\sum _{m=k}^{\infty}m^{\alpha(p-q)-1}h(m) \\ &{} +C\sum_{k=1}^{\infty}EI \bigl[k^{\alpha}< |Y|\leq(k+1)^{\alpha}\bigr]\sum _{m=1}^{k} m^{\alpha p-1}h(m) \\ \leq&C\sum_{k=1}^{\infty}k^{\alpha(p-q)}h(k) E|Y|^{q}I\bigl[k^{\alpha}< |Y|\leq (k+1)^{\alpha}\bigr] \\ &{} +C\sum_{k=1}^{\infty}k^{\alpha p}h(k) EI\bigl[k^{\alpha}< |Y|\leq (k+1)^{\alpha}\bigr] \\ \leq&CE|Y|^{p} h\bigl(|Y|^{\frac{1}{\alpha}}\bigr)< \infty. \end{aligned}$$
(3.12)

For \(I_{21}\), if \(p=1\), take \(q>\max\{1+\delta, 2\}\) for arbitrary \(\delta>0\). Then, by the same argument as above, we get

$$\begin{aligned} I_{21} \leq&C\sum_{m=1}^{\infty}\bigl\{ m^{\alpha(1-q)-1}E|Y|^{q}I\bigl[|Y|\leq (m+1)^{\alpha}\bigr]+m^{\alpha-1}P\bigl(|Y|>m^{\alpha}\bigr)\bigr\} \sum _{n=1}^{m} n l(n) \\ =&C\sum_{m=1}^{\infty}\bigl\{ m^{\alpha(1-q)-1}E|Y|^{q}I\bigl[|Y|\leq(m+1)^{\alpha}\bigr]+m^{\alpha-1}P\bigl(|Y|>m^{\alpha}\bigr)\bigr\} \sum _{n=1}^{m} n^{-1} h(n) \\ \leq&C\sum_{m=1}^{\infty}\bigl\{ m^{\alpha(1-q)-1}E|Y|^{q}I\bigl[|Y|\leq(m+1)^{\alpha}\bigr] +m^{\alpha-1}P\bigl(|Y|>m^{\alpha}\bigr)\bigr\} \sum _{n=1}^{m} n^{-1+\alpha\delta }h(n) \\ \leq&C\sum_{m=1}^{\infty}\bigl\{ m^{\alpha(1-q+\delta)-1}h(m)E|Y|^{q}I\bigl[|Y|\leq (m+1)^{\alpha}\bigr] +m^{\alpha(1+\delta)-1}h(m)EI\bigl[|Y|>m^{\alpha}\bigr]\bigr\} \\ \leq&CE|Y|^{1+\delta}h\bigl(|Y|^{\frac{1}{\alpha}}\bigr)< \infty. \end{aligned}$$
(3.13)

It follows from (3.12) and (3.13) that, for \(p\geq1\),

$$ I_{21}< \infty. $$
(3.14)

It remains to show that \(I_{22}<\infty\).

For \(I_{22}\), we consider the following two cases. If \(1\leq p<2\), take \(q>2\) and note that \(\alpha p+\frac{q}{2}-\frac{\alpha p q}{2}-1=(\alpha p-1)(1-\frac{q}{2})<0\). Then by the \(C_{r}\) inequality and Lemma 2.2, we obtain

$$\begin{aligned} I_{22} \leq&C\sum_{n=1}^{\infty}n^{\frac{q}{2}} l(n) \int_{n^{\alpha}}^{\infty}x^{-q}\bigl\{ \bigl(E|Y|^{2}I[|Y|\leq x]\bigr)^{\frac {q}{2}}+x^{q} \bigl(P(|Y|>x)\bigr)^{\frac{q}{2}}\bigr\} \,dx \\ \leq&C\sum_{n=1}^{\infty}n^{\frac{q}{2}} l(n)\sum_{m=n}^{\infty}\int_{m^{\alpha}}^{(m+1)^{\alpha}} \bigl\{ x^{-q}\bigl(E|Y|^{2}I[|Y|\leq x]\bigr)^{\frac {q}{2}}+ \bigl(P(|Y|>x)\bigr)^{\frac{q}{2}}\bigr\} \,dx \\ \leq&C\sum_{n=1}^{\infty}n^{\frac{q}{2}} l(n)\sum_{m=n}^{\infty}\bigl\{ m^{\alpha(1-q)-1} \bigl(E|Y|^{2}I\bigl[|Y|\leq(m+1)^{\alpha}\bigr] \bigr)^{\frac{q}{2}} +m^{\alpha-1}\bigl(P\bigl(|Y|>m^{\alpha}\bigr) \bigr)^{\frac{q}{2}}\bigr\} \\ =&C\sum_{m=1}^{\infty}\bigl\{ m^{\alpha(1-q)-1} \bigl(E|Y|^{2}I\bigl[|Y|\leq(m+1)^{\alpha}\bigr] \bigr)^{\frac{q}{2}} +m^{\alpha-1}\bigl(P\bigl(|Y|>m^{\alpha}\bigr) \bigr)^{\frac{q}{2}}\bigr\} \sum_{n=1}^{m} n^{\frac {q}{2}}l(n) \\ \leq&C\sum_{m=1}^{\infty}m^{\alpha(p-q)+\frac {q}{2}-2}h(m) \bigl(E|Y|^{2}I\bigl[|Y|\leq(m+1)^{\alpha}\bigr] \bigr)^{\frac{q}{2}} \\ &{} +C\sum_{m=1}^{\infty}m^{\alpha p+\frac{q}{2}-2}h(m) \bigl(EI\bigl[|Y|>m^{\alpha}\bigr]\bigr)^{\frac{q}{2}} \\ \leq&C\sum_{m=1}^{\infty}m^{\alpha p+\frac{q}{2}-\frac{\alpha p q}{2}-2}h(m) \bigl(E|Y|^{p}\bigr)^{\frac{q}{2}}< \infty. \end{aligned}$$
(3.15)

If \(p\geq2\), take \(q>\max\{2, \frac{p \alpha-1}{\alpha-\frac{1}{2}}\}\), which yields \(\alpha(p-q)+\frac{q}{2}-2<-1\). Then we get

$$\begin{aligned} I_{22} \leq&C\sum_{m=1}^{\infty}\bigl\{ m^{\alpha(1-q)-1}\bigl(E|Y|^{2}I\bigl[|Y|\leq (m+1)^{\alpha}\bigr]\bigr)^{\frac{q}{2}} \\ &{} +m^{\alpha-1}\bigl(P\bigl(|Y|>m^{\alpha}\bigr) \bigr)^{\frac{q}{2}}\bigr\} \sum_{n=1}^{m} n^{\frac {q}{2}} l(n) \\ \leq&C\sum_{m=1}^{\infty}m^{\alpha(p-q)+\frac {q}{2}-2}h(m) \bigl(E|Y|^{2}I\bigl[|Y|\leq(m+1)^{\alpha}\bigr] \bigr)^{\frac{q}{2}} \\ &{} +C\sum_{m=1}^{\infty}m^{\alpha p+\frac{q}{2}-2}h(m) \bigl(EI\bigl[|Y|>m^{\alpha}\bigr]\bigr)^{\frac{q}{2}} \\ \leq&C\sum_{m=1}^{\infty}m^{\alpha(p-q)+\frac {q}{2}-2}h(m) \bigl(E|Y|^{2}\bigr)^{\frac{q}{2}}< \infty. \end{aligned}$$
(3.16)

Hence, by (3.15) and (3.16) we get

$$ I_{22}< \infty \quad \mbox{for } p\geq1. $$
(3.17)

Moreover, by (3.14) and (3.17), we also get

$$ I_{2}< \infty \quad\mbox{for } p\geq1. $$
(3.18)

The proof of (3.2) is completed by (3.6), (3.10), and (3.18). □

Proof of (3.3)

By Lemma 2.1 and (3.2), we have

$$\begin{aligned} &\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)E \Biggl\{ \sup_{i\geq n}\Biggl|i^{-\alpha}\sum _{j=1}^{i} X_{j}\Biggr|-\epsilon\Biggr\} ^{+} \\ &\quad=\sum_{n=1}^{\infty}n^{\alpha p-2}h(n) \int_{0}^{\infty}P\Biggl(\sup_{i\geq n}\Biggl|i^{-\alpha} \sum_{j=1}^{i} X_{j}\Biggr|>\epsilon+x \Biggr)\,dx \\ &\quad=\sum_{k=1}^{\infty}\sum _{n=2^{k-1}}^{2^{k}-1} n^{\alpha p-2}h(n)\int _{0}^{\infty}P\Biggl(\sup_{i\geq n}\Biggl|i^{-\alpha} \sum_{j=1}^{i} X_{j}\Biggr|>\epsilon+x \Biggr)\,dx \\ &\quad\leq C\sum_{k=1}^{\infty}\int _{0}^{\infty}P\Biggl(\sup_{i\geq2^{k-1}}\Biggl|i^{-\alpha } \sum_{j=1}^{i} X_{j}\Biggr|>\epsilon+x \Biggr)\,dx\sum_{n=2^{k-1}}^{2^{k}-1} n^{\alpha p-2}h(n) \\ &\quad\leq C\sum_{k=1}^{\infty}2^{k(\alpha p-1)}h \bigl(2^{k}\bigr)\int_{0}^{\infty}P\Biggl( \sup_{i\geq2^{k-1}}\Biggl|i^{-\alpha}\sum_{j=1}^{i} X_{j}\Biggr|>\epsilon+x\Biggr)\,dx \\ &\quad\leq C\sum_{k=1}^{\infty}2^{k(\alpha p-1)}h \bigl(2^{k}\bigr)\sum_{m=k}^{\infty}\int_{0}^{\infty}P\Biggl(\max_{2^{m-1}\leq i< 2^{m}}\Biggl|i^{-\alpha} \sum_{j=1}^{i} X_{j}\Biggr|>\epsilon+x \Biggr)\,dx \\ &\quad\leq C\sum_{m=1}^{\infty}\int _{0}^{\infty}P\Biggl(\max_{2^{m-1}\leq i< 2^{m}}\Biggl|i^{-\alpha} \sum_{j=1}^{i} X_{j}\Biggr|>\epsilon+x \Biggr)\,dx \sum_{k=1}^{m} 2^{k(\alpha p-1)}h \bigl(2^{k}\bigr) \\ &\quad\leq C\sum_{m=1}^{\infty}2^{m(\alpha p-1)}h \bigl(2^{m}\bigr) \int_{0}^{\infty}P\Biggl( \max_{2^{m-1}\leq i< 2^{m}}\Biggl|\sum_{j=1}^{i} X_{j}\Biggr|>(\epsilon+x)2^{(m-1)\alpha}\Biggr)\,dx \\ &\qquad{} \bigl(\mbox{letting }y=2^{(m-1)\alpha}x\bigr) \\ &\quad\leq C\sum_{m=1}^{\infty}2^{m(\alpha p-1-\alpha)}h \bigl(2^{m}\bigr) \int_{0}^{\infty}P\Biggl( \max_{1\leq i< 2^{m}}\Biggl|\sum_{j=1}^{i} X_{j}\Biggr|>\epsilon2^{(m-1)\alpha}+y\Biggr)\,dy \\ &\quad\leq C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}h(n) \int_{0}^{\infty}P\Biggl(\max _{1\leq i< n}\Biggl|\sum_{j=1}^{i} X_{j}\Biggr|>\epsilon n^{\alpha}2^{-\alpha}+y\Biggr)\,dy \\ &\quad=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}h(n) E\Biggl(\max_{1\leq i< n}\Biggl|\sum _{j=1}^{i} X_{j}\Biggr|-\epsilon^{\prime}n^{\alpha}\Biggr)^{+}< \infty, \end{aligned}$$

where \(\epsilon^{\prime}=\epsilon2^{-\alpha}\). Hence the proof of (3.3) is completed. □

Remark

There are many sequences of dependent random variables satisfying (3.1) for all \(q\geq2\).

Examples include sequences of NA random variables (see Shao [11]), \(\rho^{*}\)-mixing random variables (see Utev and Peligrad [12]), φ-mixing random variables (see Zhou [4]), and ρ-mixing random variables (see Zhou and Lin [5]).

Corollary 3.2

Under the assumptions of Theorem 3.1, for any \(\epsilon>0\)

$$ \sum_{n=1}^{\infty}n^{\alpha p-2}h(n) P \Biggl(\max_{1\leq i \leq n}\Biggl|\sum_{j=1}^{i} X_{j}\Biggr|>\epsilon n^{\alpha}\Biggr)< \infty. $$
(3.19)

Proof

As in Remark 1.2 of Li and Zhang [3] we can obtain (3.19). □
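For completeness, here is a minimal sketch of that reduction (our paraphrase of the standard argument). Write \(Z_{n}=\max_{1\leq i \leq n}|\sum_{j=1}^{i} X_{j}|\). Since \(E(Z_{n}-\epsilon n^{\alpha})^{+}\geq\epsilon n^{\alpha}P(Z_{n}>2\epsilon n^{\alpha})\), we have

$$\sum_{n=1}^{\infty}n^{\alpha p-2}h(n) P\bigl(Z_{n}>2\epsilon n^{\alpha}\bigr)\leq\frac{1}{\epsilon}\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}h(n)E\bigl(Z_{n}-\epsilon n^{\alpha}\bigr)^{+}< \infty $$

by (3.2), and since \(\epsilon>0\) is arbitrary this is exactly (3.19).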

References

  1. Chow, YS: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)


  2. Baum, LE, Katz, M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120(1), 108-123 (1965)


  3. Li, YX, Zhang, LX: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 70, 191-197 (2004)


  4. Zhou, XC: Complete moment convergence of moving average processes under φ-mixing assumption. Stat. Probab. Lett. 80, 285-292 (2010)


  5. Zhou, XC, Lin, JG: Complete moment convergence of moving average processes under ρ-mixing assumption. Math. Slovaca 61(6), 979-992 (2011)


  6. Kuczmaszewska, A: On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables. Acta Math. Hung. 28, 116-130 (2010)


  7. Peligrad, M: Convergence rates of the strong law for stationary mixing sequences. Z. Wahrscheinlichkeitstheor. Verw. Geb. 70, 307-314 (1985)


  8. Peligrad, M, Gut, A: Almost sure results for a class of dependent random variables. J. Theor. Probab. 12, 87-104 (1999)


  9. Stoica, G: A note on the rate of convergence in the strong law of large numbers for martingales. J. Math. Anal. Appl. 381, 910-913 (2011)


  10. Gut, A: Complete convergence for arrays. Period. Math. Hung. 25, 51-75 (1992)


  11. Shao, QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theor. Probab. 13, 343-356 (2000)


  12. Utev, S, Peligrad, M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16, 101-115 (2003)



Acknowledgements

This paper was supported by Wonkwang University in 2015.

Author information


Correspondence to Mi-Hwa Ko.


Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ko, MH. Complete moment convergence of moving average process generated by a class of random variables. J Inequal Appl 2015, 225 (2015). https://doi.org/10.1186/s13660-015-0745-x

