Complete moment convergence for moving average process generated by \(\rho^{-}\)-mixing random variables

Abstract

Let \(\{Y_{i},-\infty< i<\infty\}\) be a sequence of \(\rho^{-}\)-mixing random variables without the assumption of identical distributions, and let \(\{a_{i},-\infty< i<\infty\}\) be an absolutely summable sequence of real numbers. In this paper, under some suitable conditions, we establish the complete moment convergence for the partial sums of the moving average process \(\{X_{n}=\sum_{i=-\infty}^{\infty}a_{i}Y_{i+n},n\geq 1\}\). These results extend and improve the corresponding results of Li and Zhang (Stat. Probab. Lett. 70:191-197, 2004) from the NA setting to the \(\rho^{-}\)-mixing setting.

1 Introduction

Let \(\{Y_{i},-\infty< i<\infty\}\) be a sequence of random variables and \(\{a_{i},-\infty< i<\infty\}\) be an absolutely summable sequence of real numbers, and for \(n\geq1\) set \(X_{n}=\sum_{i=-\infty}^{\infty}a_{i}Y_{i+n}\). The limit behavior of the moving average process \(\{X_{n},n\geq1\}\) has been extensively investigated by many authors. For example, Baek et al. [1] have obtained the convergence of moving average processes, Burton and Dehling [2] have obtained a large deviation principle, Ibragimov [3] has established the central limit theorem, Račkauskas and Suquet [4] have proved functional central limit theorems for self-normalized partial sums of linear processes, and Chen et al. [5], Guo [6], Kim et al. [7, 8], Ko et al. [9], Li et al. [10], Li and Zhang [11], Qiu et al. [12], Wang and Hu [13], Yang and Hu [14], Zhang [15], Zhen et al. [16], Zhou [17], Zhou and Lin [18], and Shen et al. [19] have obtained the complete (moment) convergence of moving average processes based on sequences of dependent (or mixing) random variables. However, very few results are known for moving average processes based on \(\rho^{-}\)-mixing random variables. First, we recall some definitions.
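
To make the object of study concrete, the following minimal simulation sketch generates a (truncated) moving average process. It assumes i.i.d. standard normal innovations, which form a trivially \(\rho^{-}\)-mixing sequence, and cuts the two-sided sum off at \(\vert i\vert \leq M\), which is harmless because \(\sum_{i}\vert a_{i}\vert <\infty\); all names and parameter values are illustrative only and are not taken from the paper.

```python
import numpy as np

# Minimal sketch (illustrative, not from the paper): simulate
# X_n = sum_i a_i Y_{i+n} with i.i.d. standard normal innovations Y_i
# (a trivially rho^- -mixing sequence), truncating the two-sided sum
# to |i| <= M, which is justified by sum_i |a_i| < infinity.
rng = np.random.default_rng(0)

M = 50                                   # truncation window for the coefficient sum
idx = np.arange(-M, M + 1)
a = 0.5 ** np.abs(idx)                   # absolutely summable coefficients

N = 1000                                 # number of observations X_1, ..., X_N
Y = rng.standard_normal(N + 2 * M)       # innovations Y_{1-M}, ..., Y_{N+M}

# X[n-1] approximates X_n = sum_{i=-M}^{M} a_i Y_{i+n}
X = np.array([a @ Y[n - 1: n + 2 * M] for n in range(1, N + 1)])
print(X[:5])
```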

For two nonempty disjoint sets S, T of integers, we define \(\operatorname{dist}(S, T)=\min\{\vert j-k\vert ; j\in S, k\in T\}\). Let \(\sigma(S)\) be the σ-field generated by \(\{Y_{k}, k\in S\}\), and define \(\sigma(T)\) similarly.

Definition 1.1

A sequence \(\{Y_{i},-\infty< i<\infty\}\) is called \(\rho^{-}\)-mixing if

$$\rho^{-}(s)=\sup\bigl\{ \rho^{-}(S,T); S,T\subset{Z}, \operatorname{dist}(S,T)\geq{s}\bigr\} \rightarrow{0}\quad \mbox{as } s\rightarrow\infty, $$

where

$$\rho^{-}(S,T)=0\vee\sup\bigl\{ \operatorname{corr}\bigl(f(Y_{i},i\in S),g(Y_{j},j\in T)\bigr)\bigr\} , $$

where the supremum is taken over all coordinatewise increasing real functions f on \(R^{S}\) and g on \(R^{T}\).

Definition 1.2

A sequence \(\{Y_{i},-\infty< i<\infty\}\) is called \(\rho^{*}\)-mixing if

$$\rho^{*}(s)=\sup\bigl\{ \rho(S,T);S,T\subset Z, \operatorname{dist}(S,T)\geq s \bigr\} \rightarrow0\quad \mbox{as } s\rightarrow\infty, $$

where

$$\rho(S,T)=\sup\bigl\{ \bigl\vert \operatorname{corr}(f,g)\bigr\vert ; f\in L_{2}\bigl(\sigma(S)\bigr),g\in L_{2}\bigl(\sigma (T)\bigr) \bigr\} . $$

Definition 1.3

A sequence \(\{Y_{i},i\in Z\}\) is called negatively associated (NA) if for every pair of disjoint subsets S, T of Z and any real coordinatewise increasing functions f on \(R^{S}\) and g on \(R^{T}\)

$$\operatorname{Cov}\bigl\{ f(Y_{i},i\in S),g(Y_{j},j\in T)\bigr\} \leq0. $$
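
As a concrete illustration of Definition 1.3 (not needed for the proofs below), the coordinates of a multinomial vector form a classical NA family. The following hedged Monte Carlo sketch estimates the covariance of two increasing functions of two such coordinates, which should be nonpositive; all parameter values are illustrative.

```python
import numpy as np

# Illustration of Definition 1.3 (illustrative example, not from the paper):
# the coordinates of a multinomial vector are a classical NA family, so
# Cov(f(Y_1), g(Y_2)) <= 0 for coordinatewise increasing f and g.
rng = np.random.default_rng(2)
samples = rng.multinomial(20, [1 / 3, 1 / 3, 1 / 3], size=200_000)
Y1, Y2 = samples[:, 0], samples[:, 1]

f = lambda y: np.minimum(y, 8)   # increasing function of Y_1
g = lambda y: y ** 2             # increasing function of Y_2 (Y_2 >= 0 here)
cov = np.mean(f(Y1) * g(Y2)) - np.mean(f(Y1)) * np.mean(g(Y2))
print(cov)                       # negative, up to Monte Carlo error
```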

Definition 1.4

A sequence \(\{Y_{i},-\infty< i<\infty\}\) of random variables is said to be stochastically dominated by a random variable Y if there exists a constant C such that

$$P\bigl\{ \vert Y_{i}\vert >x\bigr\} \leq CP\bigl\{ \vert Y\vert >x\bigr\} ,\quad x\geq0, -\infty< i< \infty. $$

Definition 1.5

A real valued function \(l(x)\), positive and measurable on \([0,\infty)\), is said to be slowly varying at infinity if for each \(\lambda>0\), \(\lim_{x\to\infty}\frac{l(\lambda x)}{l(x)}=1\).
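
For instance, \(l(x)=\log(1+x)\) is slowly varying at infinity, while \(l(x)=x^{\delta}\) with \(\delta\neq0\) is not. A quick numerical check of the defining limit for this illustrative choice is sketched below.

```python
import math

# Numerical check of Definition 1.5 for the illustrative choice l(x) = log(1 + x):
# for each fixed lambda > 0 the ratio l(lambda * x) / l(x) should approach 1.
def l(x):
    return math.log(1.0 + x)

for lam in (0.5, 2.0, 10.0):
    ratios = [l(lam * x) / l(x) for x in (1e2, 1e4, 1e8)]
    print(lam, [round(r, 4) for r in ratios])   # each row tends to 1.0
```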

Li and Zhang [11] obtained the following complete moment convergence of moving average processes under NA assumptions.

Theorem A

Suppose that \(\{X_{n}=\sum_{i=-\infty}^{\infty} a_{i}\varepsilon_{i+n}, n\geq1\}\) is a moving average process, where \(\{a_{i},-\infty< i<\infty\}\) is a sequence of real numbers with \(\sum_{i=-\infty}^{\infty} \vert a_{i}\vert <\infty\) and \(\{\varepsilon_{i},-\infty< i<\infty\}\) is a sequence of identically distributed NA random variables with \(E\varepsilon_{1}=0\), \(E\varepsilon_{1}^{2}<\infty\). Let h be a function slowly varying at infinity, \(1\leq q<2 \), \(r>1+q/2\). Then \(E\vert \varepsilon_{1}\vert ^{r}h(\vert \varepsilon_{1}\vert ^{q})<\infty\) implies

$$\begin{aligned} \sum_{n=1}^{\infty}n^{r/q-2-1/q}h(n) E \Biggl\{ \Biggl\vert \sum_{j=1}^{n}X_{j} \Biggr\vert -\varepsilon n^{1/q}\Biggr\} ^{+} < \infty \end{aligned}$$

for all \(\varepsilon>0\).

Chen et al. [20] also established the following results for moving average processes under NA assumptions.

Theorem B

Let \(q>0\), \(1\leq p<2\), \(r\geq1\), \(rp\neq1\). Suppose that \(\{X_{n}=\sum_{i=-\infty}^{\infty} a_{i}\varepsilon_{i+n}, n\geq1\}\) is a moving average process, where \(\{a_{i},-\infty< i<\infty\}\) is a sequence of real numbers with \(\sum_{i=-\infty}^{\infty} \vert a_{i}\vert <\infty\) and \(\{\varepsilon_{i},-\infty< i<\infty\}\) is a sequence of identically distributed NA random variables. If \(E\varepsilon_{1}=0\) and \(E\vert \varepsilon_{1}\vert ^{rp}<\infty\), then

$$\begin{aligned} \sum_{n=1}^{\infty}n^{r-2} P\Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum_{j=1}^{k}X_{j} \Biggr\vert \geq\varepsilon n^{1/p}\Biggr\} < \infty \end{aligned}$$

for all \(\varepsilon>0\). Furthermore if \(E\varepsilon_{1}=0\) and \(E\vert \varepsilon_{1}\vert ^{rp}<\infty\) for \(q< rp\), \(E\vert \varepsilon_{1}\vert ^{rp}\log(1+\vert \varepsilon_{1}\vert )<\infty\) for \(q=rp\), \(E\vert \varepsilon_{1}\vert ^{q}<\infty\) for \(q>rp\), then

$$\begin{aligned} \sum_{n=1}^{\infty}n^{r-2-q/p} E\Biggl( \Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon n^{1/p}\Biggr\} ^{+}\Biggr)^{q} < \infty \end{aligned}$$

for all \(\varepsilon>0\).

Recently, Zhou and Lin [18] obtained the following complete moment convergence of moving average processes under ρ-mixing assumptions.

Theorem C

Let h be a function slowly varying at infinity, \(p\geq1\), \(p\alpha>1\) and \(\alpha>1/2\). Suppose that \(\{X_{n},n\geq1\}\) is a moving average process based on a sequence \(\{ Y_{i},-\infty< i<\infty\}\) of identically distributed ρ-mixing random variables. If \(EY_{1}=0\) and \(E\vert Y_{1}\vert ^{p+\delta }h(\vert Y_{1}\vert ^{1/{\alpha}})<\infty\) for some \(\delta>0\), then for all \(\varepsilon>0\),

$$\begin{aligned} \sum_{n=1}^{\infty}n^{p\alpha-2-\alpha}h(n) E \Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon n^{\alpha}\Biggr\} ^{+} < \infty \end{aligned}$$

and

$$\begin{aligned} \sum_{n=1}^{\infty}n^{p\alpha-2}h(n) E \Biggl\{ \sup_{k\geq n}\Biggl\vert k^{-\alpha}\sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon \Biggr\} ^{+} < \infty. \end{aligned}$$

Obviously, the class of \(\rho^{-}\)-mixing random variables includes NA and \(\rho^{*}\)-mixing random variables. Such sequences have many applications, their limit properties have attracted wide interest recently, and many results have been obtained; we refer to Wang and Lu [21] for a Rosenthal-type moment inequality and weak convergence, to Budsaba et al. [22, 23] for complete convergence of moving average processes based on a \(\rho^{-}\)-mixing sequence, and to Tan et al. [24] for the almost sure central limit theorem. However, there are few results on the complete moment convergence of moving average processes based on a \(\rho^{-}\)-mixing sequence. Therefore, in this paper, we establish some results on the complete moment convergence for maximum partial sums under less restrictive conditions. Throughout the sequel, C denotes a positive constant whose value may change from one appearance to the next, and \(I\{A\}\) denotes the indicator function of the set A.

2 Preliminary lemmas

In this section, we list some lemmas which will be useful to prove our main results.

Lemma 2.1

(Zhou [17])

If l is slowly varying at infinity, then

  1. (1)

    \(\sum_{n=1}^{m}n^{s}l(n)\leq C m^{s+1}l(m)\) for \(s>-1\) and positive integer m,

  2. (2)

    \(\sum_{n=m}^{\infty}n^{s}l(n)\leq C m^{s+1}l(m)\) for \(s<-1\) and positive integer m.
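
A quick numerical sanity check of Lemma 2.1(1), with the illustrative choices \(s=-1/2\) and \(l(n)=\log(1+n)\), is sketched below; the ratio of the partial sum to \(m^{s+1}l(m)\) stays bounded in m (roughly \(1/(s+1)\) for large m), and part (2) can be checked analogously.

```python
import math

# Sanity check of Lemma 2.1(1) with s = -0.5 and l(n) = log(1 + n)
# (both choices are illustrative): the ratio below stays bounded in m.
s = -0.5
l = lambda x: math.log(1.0 + x)

for m in (10, 10**3, 10**5):
    partial = sum(n**s * l(n) for n in range(1, m + 1))
    print(m, round(partial / (m ** (s + 1) * l(m)), 4))
```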

Lemma 2.2

(Wang and Lu [21])

Let \(q\geq2\). If \(\{X_{n},n\geq1\}\) is a sequence of \(\rho^{-}\)-mixing random variables with \({E}X_{i}=0\) and \({E}\vert X_{i}\vert ^{q}<\infty\) for every \(i \geq1\), then for all \(n\geq1\) there is a positive constant \(C=C(q,\rho^{-}(\cdot))\) such that

$${E}\Biggl(\max_{1\leq{j}\leq{n}}\Biggl\vert \sum _{i=1}^{j}X_{i}\Biggr\vert ^{q}\Biggr)\leq {C}\Biggl\{ \sum_{i=1}^{n}{E} \vert X_{i}\vert ^{q}+ \Biggl(\sum _{i=1}^{n}{E}X_{i}^{2} \Biggr)^{\frac{q}{2}}\Biggr\} . $$

Lemma 2.3

(Wang et al. [25])

Let \(\{X_{n}, n\geq1\}\) be a sequence of random variables which is stochastically dominated by a random variable X. Then for any \(a>0\) and \(b>0\),

$$\begin{aligned}& E\vert X_{n}\vert ^{a}I\bigl\{ \vert X_{n} \vert \leq b\bigr\} \leq C\bigl[E\vert X\vert ^{a}I\bigl\{ \vert X\vert \leq b\bigr\} +b^{a}P\bigl(\vert X\vert >b\bigr)\bigr], \\& E\vert X_{n}\vert ^{a}I\bigl\{ \vert X_{n} \vert > b\bigr\} \leq CE\vert X\vert ^{a}I\bigl\{ \vert X\vert > b\bigr\} . \end{aligned}$$

3 Main results and proofs

Theorem 3.1

Let l be a function slowly varying at infinity, \(p\geq1\), \(\alpha>1/2\), \(\alpha p> 1\). Assume that \(\{a_{i},-\infty< i<\infty\}\) is an absolutely summable sequence of real numbers. Suppose that \(\{X_{n}=\sum_{i=-\infty}^{\infty} a_{i}Y_{i+n}, n\geq1\}\) is a moving average process generated by a sequence \(\{Y_{i},-\infty< i<\infty\}\) of \(\rho^{-}\)-mixing random variables which is stochastically dominated by a random variable Y. Assume that \(EY_{i}=0\) when \(1/2<\alpha\leq1\), that \(E\vert Y\vert ^{p}l(\vert Y\vert ^{1/{\alpha}})<\infty\) when \(p>1\), and that \(E\vert Y\vert ^{1+\delta}<\infty\) for some \(\delta>0\) when \(p=1\). Then for any \(\varepsilon>0\)

$$\begin{aligned} \sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}l(n) E\Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon n^{\alpha}\Biggr\} ^{+} < \infty \end{aligned}$$
(3.1)

and

$$\begin{aligned} \sum_{n=1}^{\infty}n^{\alpha p-2}l(n) E\Biggl\{ \sup_{ k\geq n}\Biggl\vert k^{-\alpha}\sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon \Biggr\} ^{+} < \infty. \end{aligned}$$
(3.2)

Proof

First we prove (3.1). Let \(f(n)=n^{\alpha p-2-\alpha}l(n)\), and for \(x>0\) let \(Y^{(1)}_{xj}=-xI\{Y_{j}< -x\}+Y_{j}I\{\vert Y_{j}\vert \leq x\}+xI\{Y_{j}> x\}\) and \(Y^{(2)}_{xj}=Y_{j}-Y^{(1)}_{xj}\) be the monotone truncations of \(\{Y_{j},-\infty< j<\infty\}\). Then, by the property of \(\rho^{-}\)-mixing random variables (cf. Property P2 in Wang and Lu [21]), \(\{Y^{(1)}_{xj}-EY^{(1)}_{xj},-\infty< j<\infty\}\) and \(\{Y^{(2)}_{xj},-\infty< j<\infty\}\) are two sequences of \(\rho^{-}\)-mixing random variables. Note that \(\sum_{k=1}^{n}X_{k}=\sum_{i=-\infty}^{\infty}a_{i}\sum_{j=i+1}^{i+n}Y_{j}\). Since \(\sum_{i=-\infty}^{\infty} \vert a_{i}\vert <\infty\), by Lemma 2.3 we have, for \(x>n^{\alpha}\) and \(\alpha>1\),

$$\begin{aligned} &x^{-1}\Biggl\vert E\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n}Y^{(1)} _{xj}\Biggr\vert \\ &\quad \leq x^{-1}\sum_{i=-\infty}^{\infty} \vert a_{i}\vert \sum_{j=i+1}^{i+n} \bigl[E\vert Y_{j}\vert I\bigl\{ \vert Y_{j}\vert \leq x\bigr\} +xP\bigl(\vert Y\vert >x\bigr)\bigr] \\ &\quad \leq Cx^{-1}n\bigl[E\vert Y\vert I\bigl\{ \vert Y\vert \leq x \bigr\} +x P\bigl(\vert Y\vert >x\bigr)\bigr] \leq C n^{1-\alpha}\to0, \quad \mbox{as } n\to\infty. \end{aligned}$$

If \(1/2<\alpha\leq1\), note that \(\alpha p> 1\) implies \(p>1\). Since \(E\vert Y\vert ^{p}l(\vert Y\vert ^{1/{\alpha}})<\infty\) and l is slowly varying at infinity, we have \(E\vert Y\vert ^{p-\epsilon}<\infty\) for any \(0<\epsilon<p-1/{\alpha}\). Then, noting that \(EY_{i}=0\), by Lemma 2.3 we have

$$\begin{aligned} x^{-1}\Biggl\vert E\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n}Y^{(1)}_{xj} \Biggr\vert =& x^{-1}\Biggl\vert E\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n}Y^{(2)}_{xj} \Biggr\vert \\ \leq& C x^{-1}\sum_{i=-\infty}^{\infty} \vert a_{i}\vert \sum_{j=i+1}^{i+n}E \vert Y_{j}\vert I\bigl\{ \vert Y_{j}\vert > x\bigr\} \leq Cx^{-1} nE\vert Y\vert I\bigl\{ \vert Y\vert > x\bigr\} \\ \leq& Cx^{1/{\alpha}-1}E\vert Y\vert I\bigl\{ \vert Y\vert > x\bigr\} \leq C E\vert Y\vert ^{1/{\alpha}}I\bigl\{ \vert Y\vert > x\bigr\} \\ \leq& E\vert Y\vert ^{p-\epsilon}I\bigl\{ \vert Y\vert > x\bigr\} \to 0, \quad \mbox{as } x\to\infty. \end{aligned}$$

Hence, for all sufficiently large n and \(x>n^{\alpha}\), we get

$$\begin{aligned} x^{-1}\Biggl\vert E\sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+n}Y^{(1)}_{xj} \Biggr\vert < \varepsilon/4. \end{aligned}$$

Therefore

$$\begin{aligned} &\sum_{n=1}^{\infty}f(n) E\Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon n^{\alpha}\Biggr\} ^{+} \\ &\quad \leq \sum_{n=1}^{\infty}f(n)\int _{\varepsilon n^{\alpha }}^{\infty} P\Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum_{j=1}^{k}X_{j} \Biggr\vert \geq x\Biggr\} \,dx \\ &\quad \leq C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} P\Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum_{j=1}^{k}X_{j} \Biggr\vert \geq\varepsilon x\Biggr\} \,dx \\ &\quad \leq C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} P\Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}Y^{(2)}_{xj} \Biggr\vert \geq \varepsilon x/2\Biggr\} \,dx \\ &\qquad {} +C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} P\Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}\bigl(Y^{(1)}_{xj}-EY^{(1)}_{xj} \bigr)\Biggr\vert \geq \varepsilon x/4\Biggr\} \,dx \\ &\quad = :I_{1}+I_{2}. \end{aligned}$$
(3.3)
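
The first two lines of (3.3) rest on the standard tail-integral representation of the positive-part moment together with the substitution \(x=\varepsilon u\); for completeness, for a random variable Z and a constant \(a>0\),

$$\begin{aligned} E\{Z-a\}^{+}=\int_{0}^{\infty}P\bigl(\{Z-a\}^{+}>t\bigr)\,dt =\int_{0}^{\infty}P(Z>a+t)\,dt =\int_{a}^{\infty}P(Z>x)\,dx \leq\int_{a}^{\infty}P(Z\geq x)\,dx, \end{aligned}$$

applied with \(Z=\max_{1\leq k\leq n}\vert \sum_{j=1}^{k}X_{j}\vert \) and \(a=\varepsilon n^{\alpha}\); the factor ε produced by the substitution is absorbed into C. The same representation is used again in the proof of (3.2) and in the proof of Theorem 3.2 below.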

First we show \(I_{1}<\infty\). Noting that \(\vert Y^{(2)}_{xj}\vert \leq\vert Y_{j}\vert I\{\vert Y_{j}\vert > x\}\), by Markov’s inequality and Lemma 2.3, we have

$$\begin{aligned} I_{1} \leq& C \sum_{n=1}^{\infty}f(n) \int_{n^{\alpha}}^{\infty} x^{-1}E\max _{1\leq k\leq n}\Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}Y^{(2)}_{xj} \Biggr\vert \,dx \\ \leq& C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} x^{-1}\sum _{i=-\infty}^{\infty} \vert a_{i}\vert \sum _{j=i+1}^{i+n}E\bigl\vert Y^{(2)}_{xj}\bigr\vert \,dx \\ \leq& C \sum_{n=1}^{\infty}nf(n)\int _{n^{\alpha}}^{\infty} x^{-1}E\vert Y\vert I\bigl\{ \vert Y\vert > x\bigr\} \,dx \\ =&C \sum_{n=1}^{\infty}nf(n)\sum _{m=n}^{\infty}\int_{m^{\alpha }}^{(m+1)^{\alpha}} x^{-1}E\vert Y\vert I\bigl\{ \vert Y\vert > x\bigr\} \,dx \\ \leq& C \sum_{n=1}^{\infty}nf(n)\sum _{m=n}^{\infty} m^{-1}E\vert Y\vert I\bigl\{ \vert Y\vert > m^{\alpha}\bigr\} \\ =&C\sum_{m=1}^{\infty} m^{-1}E \vert Y\vert I\bigl\{ \vert Y\vert > m^{\alpha}\bigr\} \sum _{n=1}^{m} n^{\alpha p-1-\alpha}l(n). \end{aligned}$$

If \(p>1\), then \(\alpha p-1-\alpha>-1\), and, by Lemma 2.1, we obtain

$$\begin{aligned} I_{1} \leq& C \sum_{m=1}^{\infty} m^{\alpha p-1-\alpha}l(m)E\vert Y\vert I\bigl\{ \vert Y\vert > m^{\alpha} \bigr\} \\ =&C \sum_{m=1}^{\infty} m^{\alpha p-1-\alpha}l(m) \sum_{k=m}^{\infty}E \vert Y\vert I\bigl\{ k^{\alpha}< \vert Y\vert \leq{(k+1)}^{\alpha}\bigr\} \\ =&C \sum_{k=1}^{\infty}E \vert Y\vert I \bigl\{ k^{\alpha}< \vert Y\vert \leq{(k+1)}^{\alpha}\bigr\} \sum _{m=1}^{k} m^{\alpha p-1-\alpha}l(m) \\ \leq&C\sum_{k=1}^{\infty}k^{\alpha p-\alpha}l(k) E\vert Y\vert I\bigl\{ k^{\alpha}< \vert Y\vert \leq{(k+1)}^{\alpha} \bigr\} \\ \leq&C E\vert Y\vert ^{p}l\bigl(\vert Y\vert ^{1/{\alpha}} \bigr)< \infty. \end{aligned}$$

If \(p=1\), notice that \(E\vert Y\vert ^{1+\delta}<\infty\) implies \(E\vert Y\vert ^{1+\delta '}l(\vert Y\vert ^{1/{\alpha}})<\infty\) for any \(0<\delta'<\delta\); then, by Lemma 2.1, we obtain

$$\begin{aligned} I_{1} \leq& C \sum_{m=1}^{\infty} m^{-1}E\vert Y\vert I\bigl\{ \vert Y\vert > m^{\alpha} \bigr\} \sum_{n=1}^{m} n^{-1}l(n) \\ \leq& C \sum_{m=1}^{\infty} m^{-1}E \vert Y\vert I\bigl\{ \vert Y\vert > m^{\alpha}\bigr\} \sum _{n=1}^{m} n^{-1+\alpha\delta'}l(n) \\ \leq&C \sum_{m=1}^{\infty} m^{\alpha\delta'-1}l(m)E\vert Y\vert I\bigl\{ \vert Y\vert > m^{\alpha} \bigr\} \\ \leq&C E\vert Y\vert ^{1+\delta'}l\bigl(\vert Y\vert ^{1/{\alpha}} \bigr)\leq CE\vert Y\vert ^{1+\delta} < \infty. \end{aligned}$$

So, we get

$$\begin{aligned} I_{1}< \infty. \end{aligned}$$
(3.4)

Next we show \(I_{2}<\infty\). By Markov’s inequality, the Hölder inequality, and Lemma 2.2, we conclude

$$\begin{aligned} I_{2} \leq& C\sum_{n=1}^{\infty}f(n) \int_{n^{\alpha}}^{\infty} x^{-r}E\max _{1\leq k\leq n} \Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}\bigl(Y^{(1)}_{xj}-EY^{(1)}_{xj} \bigr)\Biggr\vert ^{r} \,dx \\ \leq& C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} x^{-r} E \Biggl[\sum _{i=-\infty}^{\infty}\bigl(\vert a_{i}\vert ^{\frac{r-1}{r}}\bigr) \Biggl(\vert a_{i}\vert ^{1/r} \max _{1\leq k\leq n}\Biggl\vert \sum_{j=i+1}^{i+k} \bigl(Y^{(1)}_{xj}-EY^{(1)}_{xj}\bigr) \Biggr\vert \Biggr) \Biggr]^{r}\,dx \\ \leq& C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} x^{-r} \Biggl(\sum _{i=-\infty}^{\infty} \vert a_{i}\vert \Biggr)^{r-1} \Biggl(\sum_{i=-\infty}^{\infty} \vert a_{i}\vert E\max_{1\leq k\leq n}\Biggl\vert \sum _{j=i+1}^{i+k}\bigl(Y^{(1)}_{xj}-EY^{(1)}_{xj} \bigr)\Biggr\vert ^{r} \Biggr)\,dx \\ \leq& C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} x^{-r} \sum _{i=-\infty}^{\infty} \vert a_{i}\vert \sum _{j=i+1}^{i+n}E\bigl\vert Y^{(1)}_{xj}-EY^{(1)}_{xj}\bigr\vert ^{r}\,dx \\ &{}+C\sum_{n=1}^{\infty}f(n)\int _{n^{\alpha}}^{\infty} x^{-r} \sum _{i=-\infty}^{\infty} \vert a_{i}\vert \Biggl( \sum_{j=i+1}^{i+n}E\bigl\vert Y^{(1)}_{xj}-EY^{(1)}_{xj}\bigr\vert ^{2} \Biggr)^{r/2}\,dx \\ =:&I_{21}+I_{22}, \end{aligned}$$
(3.5)

where \(r\geq2\) will be specified later.

For \(I_{21}\), if \(p>1\), take \(r>\max\{2,p\}\); then, by the \(C_{r}\) inequality, Lemma 2.3, and Lemma 2.1, we get

$$\begin{aligned} I_{21} \leq& C\sum _{n=1}^{\infty}f(n)\int_{n^{\alpha}}^{\infty} x^{-r} \sum_{i=-\infty}^{\infty} \vert a_{i}\vert \sum_{j=i+1}^{i+n} \bigl[E\vert Y_{j}\vert ^{r}I\bigl\{ \vert Y_{j}\vert \leq x\bigr\} +x^{r}P\bigl(\vert Y_{j}\vert >x\bigr)\bigr]\,dx \\ \leq& C\sum_{n=1}^{\infty}nf(n)\int _{n^{\alpha}}^{\infty} x^{-r} \bigl[E\vert Y\vert ^{r}I\bigl\{ \vert Y\vert \leq x\bigr\} +x^{r}P\bigl( \vert Y\vert >x\bigr)\bigr]\,dx \\ \leq& C\sum_{n=1}^{\infty}nf(n) \sum _{m=n}^{\infty} \int_{m^{\alpha}}^{(m+1)^{\alpha}} \bigl[x^{-r}E\vert Y\vert ^{r}I\bigl\{ \vert Y\vert \leq x\bigr\} +P\bigl(\vert Y\vert >x\bigr)\bigr]\,dx \\ \leq& C\sum_{n=1}^{\infty}nf(n) \sum _{m=n}^{\infty} \bigl[m^{\alpha(1-r)-1}E\vert Y \vert ^{r}I\bigl\{ \vert Y\vert \leq(m+1)^{\alpha}\bigr\} + m^{\alpha-1}P\bigl(\vert Y\vert >m^{\alpha}\bigr)\bigr] \\ =&C\sum_{m=1}^{\infty} \bigl[m^{\alpha(1-r)-1}E\vert Y\vert ^{r}I\bigl\{ \vert Y\vert \leq (m+1)^{\alpha}\bigr\} + m^{\alpha-1}P\bigl(\vert Y\vert >m^{\alpha}\bigr)\bigr] \sum_{n=1}^{m}nf(n) \\ \leq& C\sum_{m=1}^{\infty}m^{\alpha(p-r)-1}l(m) \sum_{k=1}^{m} E\vert Y\vert ^{r}I\bigl\{ k^{\alpha}< \vert Y\vert \leq(k+1)^{\alpha} \bigr\} \\ &{}+C\sum_{m=1}^{\infty}m^{\alpha p-1}l(m) \sum_{k=m}^{\infty}E I\bigl\{ k^{\alpha}< \vert Y\vert \leq(k+1)^{\alpha}\bigr\} \\ =& C\sum_{k=1}^{\infty}E \vert Y \vert ^{r}I\bigl\{ k^{\alpha}< \vert Y\vert \leq (k+1)^{\alpha}\bigr\} \sum_{m=k}^{\infty}m^{\alpha(p-r)-1}l(m) \\ &{}+C\sum_{k=1}^{\infty} E I\bigl\{ k^{\alpha}< \vert Y\vert \leq (k+1)^{\alpha}\bigr\} \sum _{m=1}^{k} m^{\alpha p-1}l(m) \\ \leq&C\sum_{k=1}^{\infty}k^{\alpha(p-r)}l(k) E\vert Y\vert ^{p}\vert Y\vert ^{r-p}I\bigl\{ k^{\alpha}< \vert Y\vert \leq(k+1)^{\alpha}\bigr\} \\ &{}+C\sum_{k=1}^{\infty}k^{\alpha p}l(k) E\vert Y\vert ^{p}\vert Y\vert ^{-p}I\bigl\{ k^{\alpha}< \vert Y\vert \leq(k+1)^{\alpha}\bigr\} \\ \leq& CE\vert Y\vert ^{p}l\bigl(\vert Y\vert ^{1/{\alpha}} \bigr) < \infty. \end{aligned}$$
(3.6)

For \(I_{21}\), if \(p=1\), take \(r>\max\{1+\delta',2\}\), where \(0<\delta '<\delta\); then, by the same argument as above, we have

$$\begin{aligned} I_{21} \leq&C\sum _{m=1}^{\infty}\bigl[m^{\alpha(1-r)-1}E\vert Y\vert ^{r}I\bigl\{ \vert Y\vert \leq (m+1)^{\alpha}\bigr\} + m^{\alpha-1}P\bigl(\vert Y\vert >m^{\alpha}\bigr)\bigr] \sum _{n=1}^{m}nf(n) \\ \leq& C\sum_{m=1}^{\infty} \bigl[m^{\alpha(1-r)-1}E\vert Y\vert ^{r}I\bigl\{ \vert Y\vert \leq (m+1)^{\alpha}\bigr\} + m^{\alpha-1}P\bigl(\vert Y\vert >m^{\alpha}\bigr)\bigr] \sum_{n=1}^{m}n^{-1+\alpha\delta'}l(n) \\ \leq& C\sum_{m=1}^{\infty}\bigl[m^{\alpha(1-r+\delta')-1}l(m) E\vert Y\vert ^{r}I\bigl\{ \vert Y\vert \leq(m+1)^{\alpha} \bigr\} \\ &{}+m^{\alpha(1+\delta')-1}l(m)E I\bigl\{ \vert Y\vert >m^{\alpha} \bigr\} \bigr] \\ \leq& CE\vert Y\vert ^{1+\delta'}l\bigl(\vert Y\vert ^{1/{\alpha}} \bigr)\leq CE\vert Y\vert ^{1+\delta} < \infty. \end{aligned}$$
(3.7)

For \(I_{22}\), if \(1\leq p<2\), take \(r>2\) and note that \(\alpha p+r/2-\alpha pr/2-1=(\alpha p-1)(1-r/2)<0\); then, by the \(C_{r}\) inequality, Lemma 2.3, and Lemma 2.1, we obtain

$$\begin{aligned} I_{22} \leq& C\sum _{n=1}^{\infty}n^{r/2}f(n)\int _{n^{\alpha}}^{\infty} x^{-r} \bigl[\bigl(E\vert Y \vert ^{2}I\bigl\{ \vert Y\vert \leq x\bigr\} \bigr)^{r/2}+x^{r}P^{r/2} \bigl(\vert Y\vert >x\bigr)\bigr]\,dx \\ \leq& C\sum_{n=1}^{\infty}n^{r/2}f(n) \sum_{m=n}^{\infty} \int_{m^{\alpha}}^{(m+1)^{\alpha}} \bigl[x^{-r}\bigl(E\vert Y\vert ^{2}I\bigl\{ \vert Y \vert \leq x\bigr\} \bigr)^{r/2}+P^{r/2}\bigl(\vert Y\vert >x\bigr)\bigr]\,dx \\ \leq& C\sum_{n=1}^{\infty}n^{r/2}f(n) \sum_{m=n}^{\infty} \bigl[m^{\alpha(1-r)-1} \bigl(E\vert Y\vert ^{2}I\bigl\{ \vert Y\vert \leq(m+1)^{\alpha}\bigr\} \bigr)^{r/2} + m^{\alpha-1}P^{r/2} \bigl(\vert Y\vert >m^{\alpha}\bigr)\bigr] \\ =&C\sum_{m=1}^{\infty} \bigl[m^{\alpha(1-r)-1}\bigl(E\vert Y\vert ^{2}I\bigl\{ \vert Y \vert \leq (m+1)^{\alpha}\bigr\} \bigr)^{r/2} + m^{\alpha-1}P^{r/2}\bigl(\vert Y\vert >m^{\alpha}\bigr) \bigr] \sum_{n=1}^{m}n^{r/2}f(n) \\ \leq& C\sum_{m=1}^{\infty}m^{\alpha(p-r)+r/2-2}l(m) \bigl(E\vert Y\vert ^{p}\vert Y\vert ^{2-p}I\bigl\{ \vert Y\vert \leq(m+1)^{\alpha}\bigr\} \bigr)^{r/2} \\ &{}+C\sum_{m=1}^{\infty}m^{\alpha p+r/2-2}l(m) \bigl(E\vert Y\vert ^{p}\vert Y\vert ^{-p}I\bigl\{ \vert Y\vert >m^{\alpha}\bigr\} \bigr)^{r/2} \\ \leq& C\sum_{m=1}^{\infty}m^{\alpha p+r/2-\alpha pr/2-2}l(m) \bigl(E\vert Y\vert ^{p}\bigr)^{r/2} < \infty. \end{aligned}$$
(3.8)

For \(I_{22}\), if \(p\geq2\), take \(r>\max\{2,(\alpha p-1)/({\alpha-1/2})\}\); then \(\alpha(p-r)+r/2-2<-1\), and therefore one gets

$$\begin{aligned} I_{22} \leq&C\sum _{m=1}^{\infty}\bigl[m^{\alpha(1-r)-1}\bigl(E\vert Y \vert ^{2}I\bigl\{ \vert Y\vert \leq (m+1)^{\alpha}\bigr\} \bigr)^{r/2} + m^{\alpha-1}P^{r/2}\bigl(\vert Y\vert >m^{\alpha}\bigr)\bigr] \sum_{n=1}^{m}n^{r/2}f(n) \\ \leq& C\sum_{m=1}^{\infty}m^{\alpha(p-r)+r/2-2}l(m) \bigl(E\vert Y\vert ^{2}I\bigl\{ \vert Y\vert \leq(m+1)^{\alpha}\bigr\} \bigr)^{r/2} \\ &{}+C\sum_{m=1}^{\infty}m^{\alpha p+r/2-2}l(m) \bigl(E\vert Y\vert ^{2}\vert Y\vert ^{-2}I\bigl\{ \vert Y\vert >m^{\alpha}\bigr\} \bigr)^{r/2} \\ \leq& C\sum_{m=1}^{\infty}m^{\alpha(p-r)+r/2-2}l(m) \bigl(E\vert Y\vert ^{2}\bigr)^{r/2} < \infty. \end{aligned}$$
(3.9)

Thus, (3.1) can be deduced by combining (3.3)-(3.9).

Now, we show (3.2). By Lemma 2.1 and (3.1) we have

$$\begin{aligned} &\sum_{n=1}^{\infty}n^{\alpha p-2}l(n) E \Biggl\{ \sup_{ k\geq n}\Biggl\vert k^{-\alpha}\sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon \Biggr\} ^{+} \\ &\quad = \sum_{n=1}^{\infty}n^{\alpha p-2}l(n) \int_{0}^{\infty} P\Biggl\{ \sup_{ k\geq n} \Biggl\vert k^{-\alpha}\sum_{j=1}^{k}X_{j} \Biggr\vert >\varepsilon+t \Biggr\} \,dt \\ &\quad = \sum_{i=1}^{\infty}\sum _{n=2^{i-1}}^{2^{i}-1}n^{\alpha p-2}l(n)\int _{0}^{\infty} P\Biggl\{ \sup_{ k\geq n} \Biggl\vert k^{-\alpha}\sum_{j=1}^{k}X_{j} \Biggr\vert >\varepsilon+t \Biggr\} \,dt \\ &\quad \leq C\sum_{i=1}^{\infty}\int _{0}^{\infty} P\Biggl\{ \sup_{ k\geq 2^{i-1}} \Biggl\vert k^{-\alpha}\sum_{j=1}^{k}X_{j} \Biggr\vert >\varepsilon+t \Biggr\} \,dt \sum_{n=2^{i-1}}^{2^{i}-1}n^{\alpha p-2}l(n) \\ &\quad \leq C\sum_{i=1}^{\infty}2^{i(\alpha p-1)}l \bigl(2^{i}\bigr)\int_{0}^{\infty} P\Biggl\{ \sup_{ k\geq2^{i-1}}\Biggl\vert k^{-\alpha}\sum _{j=1}^{k}X_{j}\Biggr\vert > \varepsilon+t \Biggr\} \,dt \\ &\quad \leq C\sum_{i=1}^{\infty}2^{i(\alpha p-1)}l \bigl(2^{i}\bigr)\sum_{l=i}^{\infty} \int_{0}^{\infty} P\Biggl\{ \max_{ 2^{l-1}\leq k< 2^{l}} \Biggl\vert k^{-\alpha}\sum_{j=1}^{k}X_{j} \Biggr\vert >\varepsilon +t \Biggr\} \,dt \\ &\quad \leq C\sum_{l=1}^{\infty}\int _{0}^{\infty} P\Biggl\{ \max_{ 2^{l-1}\leq k< 2^{l}} \Biggl\vert k^{-\alpha}\sum_{j=1}^{k}X_{j} \Biggr\vert >\varepsilon+t \Biggr\} \,dt \sum_{i=1}^{l}2^{i(\alpha p-1)}l \bigl(2^{i}\bigr) \\ &\quad \leq C\sum_{l=1}^{\infty}2^{l(\alpha p-1)}l \bigl(2^{l}\bigr)\int_{0}^{\infty} P\Biggl\{ \max_{ 2^{l-1}\leq k< 2^{l}}\Biggl\vert \sum_{j=1}^{k}X_{j} \Biggr\vert >(\varepsilon +t)2^{(l-1)\alpha} \Biggr\} \,dt \\ &\quad \leq C\sum_{l=1}^{\infty}2^{l(\alpha p-1-\alpha)}l \bigl(2^{l}\bigr)\int_{0}^{\infty} P\Biggl\{ \max_{ 1\leq k< 2^{l}}\Biggl\vert \sum_{j=1}^{k}X_{j} \Biggr\vert >\varepsilon2^{(l-1)\alpha}+y \Biggr\} \,dy \\ &\quad \leq C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}l(n) \int_{0}^{\infty} P\Biggl\{ \max_{ 1\leq k< n} \Biggl\vert \sum_{j=1}^{k}X_{j} \Biggr\vert >\varepsilon n^{\alpha}2^{-\alpha }+y \Biggr\} \,dy \\ &\quad = C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}l(n) E \Biggl\{ \max_{ 1\leq k< n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert - \varepsilon_{0} n^{\alpha}\Biggr\} ^{+} < \infty. \end{aligned}$$

Here \(\varepsilon_{0}=\varepsilon2^{-\alpha}>0\), so the last series is finite by (3.1). Hence the proof of Theorem 3.1 is completed. □
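
In the displayed chain above, the passage from \(\sum_{i=1}^{l}2^{i(\alpha p-1)}l(2^{i})\) to \(C2^{l(\alpha p-1)}l(2^{l})\) is left implicit; a sketch of the estimate is as follows. Since l is slowly varying, \(l(n)\geq Cl(2^{i})\) uniformly for \(2^{i-1}\leq n\leq2^{i}\) and all sufficiently large i (the finitely many remaining i are absorbed into C), so that

$$\begin{aligned} \sum_{i=1}^{l}2^{i(\alpha p-1)}l\bigl(2^{i}\bigr) \leq C\sum_{i=1}^{l}\sum_{n=2^{i-1}}^{2^{i}-1}n^{\alpha p-2}l(n) \leq C\sum_{n=1}^{2^{l}}n^{\alpha p-2}l(n) \leq C2^{l(\alpha p-1)}l\bigl(2^{l}\bigr), \end{aligned}$$

where the last inequality is Lemma 2.1(1) with \(s=\alpha p-2>-1\).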

The next theorem treats the case \(\alpha p=1\).

Theorem 3.2

Let l be a function slowly varying at infinity, \(1\leq p<2\). Assume that \(\sum_{i=-\infty}^{\infty} \vert a_{i}\vert ^{\theta}<\infty\), where θ belongs to \((0,1)\) if \(p=1\) and \(\theta=1\) if \(1< p<2\). Suppose that \(\{X_{n}=\sum_{i=-\infty}^{\infty} a_{i}Y_{i+n}, n\geq1\}\) is a moving average process generated by a sequence \(\{Y_{i},-\infty< i<\infty\}\) of \(\rho^{-}\)-mixing random variables which is stochastically dominated by a random variable Y. If \(EY_{i}=0\) and \(E\vert Y\vert ^{p}l(\vert Y\vert ^{p})<\infty\), then for any \(\varepsilon>0\)

$$\begin{aligned} \sum_{n=1}^{\infty}n^{-1-1/p}l(n) E\Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon n^{1/p}\Biggr\} ^{+} < \infty. \end{aligned}$$
(3.10)

Proof

Let \(g(n)=n^{-1-1/p}l(n)\). As in the derivation of (3.3), we have

$$\begin{aligned} & \sum_{n=1}^{\infty}g(n) E\Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert -\varepsilon n^{1/p}\Biggr\} ^{+} \\ &\quad \leq C\sum_{n=1}^{\infty}g(n)\int _{n^{1/p}}^{\infty} P\Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}Y_{xj}^{(2)} \Biggr\vert \geq\varepsilon x/2\Biggr\} \,dx \\ &\qquad {}+C\sum_{n=1}^{\infty}g(n)\int _{n^{1/p}}^{\infty} P\Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}\bigl(Y_{xj}^{(1)}-EY_{xj}^{(1)} \bigr)\Biggr\vert \geq\varepsilon x/4\Biggr\} \,dx \\ &\quad = :J_{1}+J_{2}. \end{aligned}$$
(3.11)

For \(J_{1}\), by Markov’s inequality, the \(C_{r}\) inequality, Lemma 2.3, and Lemma 2.1, one gets

$$\begin{aligned} J_{1} \leq& C\sum _{n=1}^{\infty}g(n)\int_{n^{1/p}}^{\infty}x^{-\theta} E\max_{1\leq k\leq n}\Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}Y_{xj}^{(2)} \Biggr\vert ^{\theta}\,dx \\ \leq& C\sum_{n=1}^{\infty}n g(n)\int _{n^{1/p}}^{\infty}x^{-\theta} E\vert Y\vert ^{\theta}I\bigl\{ \vert Y\vert >x\bigr\} \,dx \\ =&C\sum_{n=1}^{\infty}n g(n)\sum _{m=n}^{\infty}\int_{m^{1/p}}^{(m+1)^{1/p}}x^{-\theta} E\vert Y\vert ^{\theta}I\bigl\{ \vert Y\vert >x\bigr\} \,dx \\ \leq& C\sum_{n=1}^{\infty}n g(n)\sum _{m=n}^{\infty} m^{(1-\theta)/p-1}E\vert Y \vert ^{\theta}I\bigl\{ \vert Y\vert >m^{1/p}\bigr\} \\ =&C\sum_{m=1}^{\infty}m^{(1-\theta)/p-1}E \vert Y\vert ^{\theta}I\bigl\{ \vert Y\vert >m^{1/p}\bigr\} \sum_{n=1}^{m}n g(n) \\ \leq&C\sum_{m=1}^{\infty}m^{-\theta/p}l(m)E \vert Y\vert ^{\theta}I\bigl\{ \vert Y\vert >m^{1/p}\bigr\} \\ =&C\sum_{m=1}^{\infty}m^{-\theta/p}l(m) \sum_{k=m}^{\infty} E\vert Y\vert ^{\theta}I\bigl\{ k^{1/p}< \vert Y\vert < (k+1)^{1/p} \bigr\} \\ =&C\sum_{k=1}^{\infty}E \vert Y \vert ^{\theta}I\bigl\{ k^{1/p}< \vert Y\vert < (k+1)^{1/p}\bigr\} \sum_{m=1}^{k}m^{-\theta/p}l(m) \\ \leq&C\sum_{k=1}^{\infty}k^{1-\theta/p}l(k)E \vert Y\vert ^{\theta}I\bigl\{ k^{1/p}< \vert Y\vert < (k+1)^{1/p}\bigr\} \\ \leq&CE\vert Y\vert ^{p}l\bigl(\vert Y\vert ^{p} \bigr)< \infty. \end{aligned}$$
(3.12)

For \(J_{2}\), similarly to the treatment of \(I_{2}\), take \(r=2\); then, by Lemma 2.2, Lemma 2.3, and Lemma 2.1, we conclude

$$\begin{aligned} J_{2} \leq& C\sum_{n=1}^{\infty}g(n) \int_{n^{1/p}}^{\infty}x^{-2} E\max_{1\leq k\leq n}\Biggl\vert \sum_{i=-\infty}^{\infty}a_{i} \sum_{j=i+1}^{i+k}\bigl(Y_{xj}^{(1)}-EY_{xj}^{(1)} \bigr)\Biggr\vert ^{2}\,dx \\ \leq& C\sum_{n=1}^{\infty}n g(n)\int_{n^{1/p}}^{\infty}x^{-2} \bigl[E\vert Y\vert ^{2}I\bigl\{ \vert Y\vert \leq x\bigr\} +x^{2}P\bigl(\vert Y\vert >x\bigr)\bigr]\,dx \\ =& C\sum_{n=1}^{\infty}n g(n)\sum_{m=n}^{\infty}\int_{m^{1/p}}^{(m+1)^{1/p}}x^{-2} \bigl[E\vert Y\vert ^{2}I\bigl\{ \vert Y\vert \leq x\bigr\} +x^{2}P\bigl(\vert Y\vert >x\bigr)\bigr]\,dx \\ \leq& C\sum_{n=1}^{\infty}n g(n)\sum_{m=n}^{\infty}\bigl[m^{-1-1/p} E\vert Y\vert ^{2}I\bigl\{ \vert Y\vert \leq(m+1)^{1/p}\bigr\} +m^{1/p-1}P\bigl(\vert Y\vert >m^{1/p}\bigr)\bigr] \\ =& C\sum_{m=1}^{\infty}\bigl[m^{-1-1/p} E\vert Y\vert ^{2}I\bigl\{ \vert Y\vert \leq (m+1)^{1/p}\bigr\} +m^{1/p-1}P\bigl(\vert Y\vert >m^{1/p}\bigr)\bigr] \sum_{n=1}^{m}n g(n) \\ \leq&C\sum_{m=1}^{\infty} \bigl[m^{-2/p}l(m)E\vert Y\vert ^{2}I\bigl\{ \vert Y\vert \leq (m+1)^{1/p}\bigr\} +l(m)P\bigl(\vert Y\vert >m^{1/p} \bigr)\bigr] \\ \leq&CE\vert Y\vert ^{p}l\bigl(\vert Y\vert ^{p} \bigr)< \infty. \end{aligned}$$
(3.13)

Hence from (3.11)-(3.13), (3.10) holds. □

The following corollary on complete convergence and the strong law of large numbers follows immediately from the above theorems.

Corollary 3.3

Under the assumptions of Theorem  3.1, for any \(\varepsilon>0\) we have

$$\begin{aligned} \sum_{n=1}^{\infty}n^{\alpha p-2}l(n) P\Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert >\varepsilon n^{\alpha}\Biggr\} < \infty. \end{aligned}$$
(3.14)

Under the assumptions of Theorem  3.2, for any \(\varepsilon>0\) we have

$$\begin{aligned} \sum_{n=1}^{\infty}n^{-1}l(n) P\Biggl\{ \max_{1\leq k\leq n}\Biggl\vert \sum _{j=1}^{k}X_{j}\Biggr\vert >\varepsilon n^{1/p}\Biggr\} < \infty; \end{aligned}$$
(3.15)

in particular, the assumptions \(EY_{i}=0\) and \(E\vert Y\vert ^{p}<\infty\) imply the following Marcinkiewicz-Zygmund strong law of large numbers:

$$\begin{aligned} \lim_{n\to\infty}\frac{1}{n^{1/p}}\sum _{j=1}^{n}X_{j}=0\quad \textit{a.s.} \end{aligned}$$
(3.16)
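
The following hedged Monte Carlo sketch illustrates (3.16) in a simple special case (the assumptions and values below are illustrative, not from the paper): i.i.d. Student-t innovations with 3 degrees of freedom (mean zero, \(E\vert Y\vert ^{p}<\infty\) for \(p=1.5\)), which form a trivially \(\rho^{-}\)-mixing sequence, and coefficients \(a_{i}=0.5^{\vert i\vert }\) truncated to \(\vert i\vert \leq20\).

```python
import numpy as np

# Illustrative check of (3.16) in a special case (assumptions not from the
# paper): p = 1.5, i.i.d. Student-t(3) innovations (mean 0, E|Y|^p < infinity),
# coefficients a_i = 0.5^|i| truncated to |i| <= 20.  We watch n^{-1/p} S_n.
rng = np.random.default_rng(1)
p, M = 1.5, 20
a = 0.5 ** np.abs(np.arange(-M, M + 1))

N = 200_000
Y = rng.standard_t(df=3, size=N + 2 * M)
X = np.convolve(Y, a, mode="valid")          # X_n ~ sum_i a_i Y_{i+n} (a is symmetric)

S = np.cumsum(X)
for n in (10**3, 10**4, 10**5, N):
    print(n, S[n - 1] / n ** (1 / p))        # should drift toward 0 as n grows
```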

Remark 3.4

Corollary 3.3 provides complete convergence for the maximum of partial sums, which extends the corresponding results of Budsaba et al. [22, 23] and Theorem 1 of Baek et al. [1] under weaker restrictions. Since \(\rho^{-}\)-mixing random variables include NA and \(\rho^{*}\)-mixing random variables, our results also hold in the NA and \(\rho^{*}\)-mixing settings; therefore Theorem 3.1 improves Theorem A of Li and Zhang [11] under weaker restrictions, and our results also partly extend and generalize Theorem B of Chen et al. [20] for the case \(q=1\).

Remark 3.5

Obviously, the assumption that \(\{Y_{i},-\infty< i<\infty\}\) is stochastically dominated by a random variable Y is weaker than the assumption that the random variables \(\{Y_{i},-\infty< i<\infty\}\) are identically distributed; therefore the above results also hold for identically distributed random variables.

Remark 3.6

Let \(a_{0}=1\) and \(a_{i}=0\) for \(i\neq0\); then \(S_{n}=\sum_{k=1}^{n}X_{k}=\sum_{k=1}^{n}Y_{k}\). Hence the above results also hold when \(\{X_{k},k\geq1\}\) is itself a sequence of \(\rho^{-}\)-mixing random variables stochastically dominated by a random variable Y.

References

  1. Baek, JI, Kim, TS, Liang, HY: On the convergence of moving average processes under dependent conditions. Aust. N. Z. J. Stat. 45, 331-342 (2003)

  2. Burton, RM, Dehling, H: Large deviations for some weakly dependent random processes. Stat. Probab. Lett. 9, 397-401 (1990)

  3. Ibragimov, IA: Some limit theorems for stationary processes. Theory Probab. Appl. 7, 349-382 (1962)

  4. Račkauskas, A, Suquet, C: Functional central limit theorems for self-normalized partial sums of linear processes. Lith. Math. J. 51(2), 251-259 (2011)

  5. Chen, PY, Hu, TC, Volodin, A: Limiting behaviour of moving average processes under φ-mixing assumption. Stat. Probab. Lett. 79, 105-111 (2009)

  6. Guo, ML: On complete moment convergence of weighted sums for arrays of row-wise negatively associated random variables. Stochastics 86(3), 415-428 (2014)

  7. Kim, TS, Ko, MH: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 78(7), 839-846 (2008)

  8. Kim, TS, Ko, MH, Choi, YK: Complete moment convergence of moving average processes with dependent innovations. J. Korean Math. Soc. 45(2), 355-365 (2008)

  9. Ko, MH, Kim, TS, Ryu, DH: On the complete moment convergence of moving average processes generated by \(\rho^{\ast}\)-mixing sequences. Commun. Korean Math. Soc. 23(4), 597-606 (2008)

  10. Li, DL, Rao, MB, Wang, XC: Complete convergence of moving average processes. Stat. Probab. Lett. 14, 111-114 (1992)

  11. Li, YX, Zhang, LX: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 70, 191-197 (2004)

  12. Qiu, DH, Liu, XD, Chen, PY: Complete moment convergence for maximal partial sums under NOD setup. J. Inequal. Appl. 2015, 58 (2015) 12 pp

  13. Wang, XJ, Hu, SH: Complete convergence and complete moment convergence for martingale difference sequence. Acta Math. Sin. Engl. Ser. 30(1), 119-132 (2014)

  14. Yang, WZ, Hu, SH: Complete moment convergence of pairwise NQD random variables. Stochastics 87(2), 199-208 (2015)

  15. Zhang, LX: Complete convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 30, 165-170 (1996)

  16. Zhen, X, Zhang, LL, Lei, YJ, Chen, ZG: Complete moment convergence for weighted sums of negatively superadditive dependent random variables. J. Inequal. Appl. 2015, 117 (2015)

  17. Zhou, XC: Complete moment convergence of moving average processes under φ-mixing assumptions. Stat. Probab. Lett. 80, 285-292 (2010)

  18. Zhou, XC, Lin, JG: Complete moment convergence of moving average processes under ρ-mixing assumption. Math. Slovaca 61(6), 979-992 (2011)

  19. Shen, AT, Wang, XH, Li, XQ, Wang, XJ: On the rate of complete convergence for weighted sums of arrays of rowwise ϕ-mixing random variables. Commun. Stat., Theory Methods 43, 2714-2725 (2014)

  20. Chen, PY, Hu, TC, Volodin, A: Limiting behaviour of moving average processes under negative association assumption. Theory Probab. Math. Stat. 77, 154-166 (2007)

  21. Wang, JF, Lu, FB: Inequalities of maximum of partial sums and weak convergence for a class of weak dependent random variables. Acta Math. Sin. 22, 693-700 (2006)

  22. Budsaba, K, Chen, PY, Volodin, A: Limiting behavior of moving average processes based on a sequence of \(\rho^{-}\) mixing random variables. Thail. Stat. 5, 69-80 (2007)

  23. Budsaba, K, Chen, PY, Volodin, A: Limiting behavior of moving average processes based on a sequence of \(\rho^{-}\) mixing and NA random variables. Lobachevskii J. Math. 26, 17-25 (2007)

  24. Tan, XL, Zhang, Y, Zhang, Y: An almost sure central limit theorem of products of partial sums for \(\rho^{-}\) mixing sequences. J. Inequal. Appl. 2012, 51 (2012). doi:10.1186/1029-242X-2012-51

  25. Wang, XJ, Li, XQ, Yang, WZ, Hu, SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 25, 1916-1920 (2012)

Acknowledgements

This work was supported by National Natural Science Foundation of China (Grant No. 11101180) and the Science and Technology Development Program of Jilin Province (Grant No. 20130522096JH).

Competing interests

The author declares that there are no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Zhang, Y. Complete moment convergence for moving average process generated by \(\rho^{-}\)-mixing random variables. J Inequal Appl 2015, 245 (2015). https://doi.org/10.1186/s13660-015-0766-5
