
Exponential inequalities for self-normalized martingales

Abstract

In this paper, we establish several exponential inequalities for martingales and self-normalized martingales, which improve some known results.

1 Introduction

A prototypical example of a self-normalized random variable is Student's t-statistic, which replaces the population standard deviation $\sigma$ in the standardized sample mean $\sqrt{n}(\bar X_n-\mu)/\sigma$ by the sample standard deviation. More generally, a self-normalized process has the form $A_t/B_t$, where $B_t$ is a random variable that estimates some measure of the dispersion of $A_t$. If $A_t$ is normalized by a nonrandom sequence $b_t$ instead, moment conditions are typically required; for self-normalized processes, however, such moment conditions can often be dropped. For example, Shao [1] obtained a large deviation result for self-normalized sums of i.i.d. random variables without any moment conditions. In addition, there has been increasing interest in limit theorems and moment bounds for self-normalized sums of i.i.d. zero-mean random variables $X_i$. Bentkus and Götze [2] gave a Berry-Esseen bound for Student's t-statistic. Giné et al. [3] proved that the t-statistic has a limiting standard normal distribution if and only if $X_1$ is in the domain of attraction of a normal law. We refer to De la Peña et al. [4] for a comprehensive review of the state of the art of the theory and its applications in statistical inference.
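
To make the connection concrete, recall the standard identity (stated here only for illustration, with $\mu=0$, $S_n=\sum_{i=1}^n X_i$ and $V_n^2=\sum_{i=1}^n X_i^2$) expressing the t-statistic as a monotone function of the self-normalized sum $S_n/V_n$:

\[
T_n=\frac{\sqrt{n}\,\bar X_n}{s_n}
=\frac{S_n/V_n}{\sqrt{\dfrac{n-(S_n/V_n)^2}{n-1}}},
\qquad
s_n^2=\frac{1}{n-1}\sum_{i=1}^n\bigl(X_i-\bar X_n\bigr)^2,
\]

so tail bounds for the self-normalized sum $S_n/V_n$ translate directly into tail bounds for $T_n$.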

In this paper, we concentrate on exponential inequalities for self-normalized martingales. Let $(M_n)$ be a locally square integrable real martingale adapted to a filtration $\mathbb{F}=(\mathcal{F}_n)$ with $M_0=0$. The predictable quadratic variation and the total quadratic variation of $(M_n)$ are, respectively, given by

\[
\langle M\rangle_n=\sum_{k=1}^n \mathbb{E}\bigl[(\Delta M_k)^2\mid\mathcal{F}_{k-1}\bigr]
\]

and

\[
[M]_n=\sum_{k=1}^n (\Delta M_k)^2,
\]

where $\Delta M_n=M_n-M_{n-1}$. In [5], Bercu and Touati established the following results without any further assumptions on $(M_n)$.

Theorem 1.1 [[5], Theorem 2.1]

Let $(M_n)$ be a locally square integrable martingale. Then, for all $x,y>0$,

\[
\mathbb{P}\bigl(|M_n|\ge x,\ [M]_n+\langle M\rangle_n\le y\bigr)\le 2\exp\Bigl(-\frac{x^2}{2y}\Bigr).
\]
(1)

Theorem 1.2 [[5], Theorem 2.2]

Let $(M_n)$ be a locally square integrable martingale. Then, for all $x,y>0$, $a>0$ and $b>0$,

\[
\mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ \langle M\rangle_n\ge [M]_n+y\Bigr)\le 2\exp\Bigl(-x^2\Bigl(ab+\frac{b^2y}{2}\Bigr)\Bigr).
\]
(2)

Moreover, we also have

\[
\mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ [M]_n\le y\langle M\rangle_n\Bigr)\le 2\inf_{p>1}\biggl(\mathbb{E}\Bigl[\exp\Bigl(-\frac{(p-1)x^2}{1+y}\Bigl(ab+\frac{b^2}{2}\langle M\rangle_n\Bigr)\Bigr)\Bigr]\biggr)^{\frac1p}.
\]

It is necessary to point out that the following canonical assumption plays an important role in deriving the above exponential bounds: a pair of random variables $(A,B)$ with $B>0$ is said to satisfy the canonical assumption if

\[
\mathbb{E}\Bigl[\exp\Bigl(\lambda A-\frac{\lambda^2B^2}{2}\Bigr)\Bigr]\le 1\qquad\text{for all }\lambda\in\mathbb{R}.
\]
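
As a simple illustration (not taken from [5] or [6]): if, conditionally on $B$, the variable $A$ is centered Gaussian with variance $B^2$, then the canonical assumption holds with equality, since

\[
\mathbb{E}\Bigl[\exp\Bigl(\lambda A-\frac{\lambda^2B^2}{2}\Bigr)\Bigr]
=\mathbb{E}\Bigl[e^{-\lambda^2B^2/2}\,\mathbb{E}\bigl[e^{\lambda A}\mid B\bigr]\Bigr]
=\mathbb{E}\bigl[e^{-\lambda^2B^2/2}\,e^{\lambda^2B^2/2}\bigr]=1\qquad\text{for all }\lambda\in\mathbb{R}.
\]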

For such a pair $(A,B)$, De la Peña and Pang [6] proved the following exponential bounds (their statement contains some misprints, which we correct here).

Theorem 1.3 [[6], Theorem 2.1]

Let $(A,B)$ be a pair of random variables with $B>0$ on the probability space $(\Omega,\mathcal{F},\mathbb{P})$ satisfying the canonical assumption. Suppose $\mathbb{E}|A|^p<\infty$ for some $p>1$. Then, for any $x>0$ and for $q\ge 1$ such that $\frac1p+\frac1q=1$,

\[
\mathbb{P}\Biggl(\frac{|A|}{\sqrt{\frac{2q-1}{q}\bigl(B^2+(\mathbb{E}|A|^p)^{2/p}\bigr)}}\ge x\Biggr)\le \Bigl(\frac{q}{2q-1}\Bigr)^{\frac{q}{2(2q-1)}} x^{-\frac{q}{2q-1}}e^{-\frac{x^2}{2}}.
\]

In particular, if $\mathbb{E}A^2<\infty$,

\[
\mathbb{P}\Biggl(\frac{|A|}{\sqrt{\frac32\bigl(B^2+\mathbb{E}[|A|^2]\bigr)}}\ge x\Biggr)\le \Bigl(\frac23\Bigr)^{\frac13}x^{-\frac23}e^{-\frac{x^2}{2}},\qquad x>0,
\]

and if $\mathbb{E}|A|<\infty$,

\[
\mathbb{P}\Biggl(\frac{|A|}{\sqrt{2\bigl(B^2+(\mathbb{E}|A|)^2\bigr)}}\ge x\Biggr)\le 2^{-\frac14}x^{-\frac12}e^{-\frac{x^2}{2}},\qquad x>0.
\]

Moreover, if $B$ satisfies $\mathbb{E}[B^2]=\mathbb{E}[A^2]<\infty$, the upper bound becomes

\[
\min\Bigl\{2^{\frac13}e^{-\frac{x^2}{2}},\ \Bigl(\frac23\Bigr)^{\frac13}x^{-\frac23}e^{-\frac{x^2}{2}}\Bigr\},\qquad x>0.
\]
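
The following short Python sketch is our own illustration (not part of [6]); it checks the case $\mathbb{E}A^2<\infty$ numerically for a pair $A=B\varepsilon$ with $\varepsilon\sim N(0,1)$ independent of $B$, which satisfies the canonical assumption with equality. The distribution chosen for $B$ is arbitrary.

import numpy as np

rng = np.random.default_rng(0)
m = 10**6
B = 0.5 + rng.random(m)                 # an arbitrary positive "dispersion" variable
A = B * rng.standard_normal(m)          # conditionally N(0, B^2), so the canonical assumption holds
EA2 = np.mean(A**2)                     # empirical proxy for E[A^2]
for x in (1.0, 1.5, 2.0):
    emp = np.mean(np.abs(A) / np.sqrt(1.5 * (B**2 + EA2)) >= x)
    bnd = (2 / 3) ** (1 / 3) * x ** (-2 / 3) * np.exp(-x**2 / 2)
    print(f"x={x}: empirical {emp:.4g} <= bound {bnd:.4g}")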

We also mention the work of Bercu [7], which provides inequalities of a different form.

The purpose of this paper is to establish several exponential inequalities motivated by the works above. In Section 2, we propose exponential inequalities for martingales and self-normalized martingales, and in the last section we establish a deviation bound for the least-squares estimator of the unknown parameter in a linear regression model.

2 Exponential inequalities

2.1 Exponential inequalities for martingales

In this subsection we will give some exponential inequalities involving $\langle M\rangle_n$ and $[M]_n$.

Theorem 2.1 Let $(M_n)$ be a locally square integrable martingale. Then, for all $x,y>0$,

\[
\mathbb{P}\Bigl(|M_n|\ge x,\ \tfrac12[M]_n+\langle M\rangle_n\le y\Bigr)\le 2\exp\Bigl(-\frac{3x^2}{4y}\Bigr).
\]
(3)
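
A minimal Monte Carlo sanity check of (3) (our own illustration, using a toy martingale whose increments are Rademacher signs multiplied by predictable scales; all numerical choices below are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
reps, n = 200_000, 50
scales = 0.5 + rng.random((reps, n))               # F_{k-1}-measurable scales
signs = rng.choice([-1.0, 1.0], size=(reps, n))    # Rademacher innovations
M_n = (scales * signs).sum(axis=1)
quad = (scales**2).sum(axis=1)                     # here [M]_n = <M>_n = sum of scales^2
Z_n = 0.5 * quad + quad                            # (1/2)[M]_n + <M>_n
x, y = 12.0, float(np.quantile(Z_n, 0.9))
emp = np.mean((np.abs(M_n) >= x) & (Z_n <= y))
print("empirical probability:", emp)
print("bound (3):", 2 * np.exp(-3 * x**2 / (4 * y)))
print("bound (1):", 2 * np.exp(-x**2 / (2 * y)))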

Remark 2.1 No assumption beyond local square integrability is imposed on $(M_n)$ in the above result. Since, for any $x,y>0$,

\[
\mathbb{P}\bigl(|M_n|\ge x,\ [M]_n+\langle M\rangle_n\le y\bigr)
\le \mathbb{P}\bigl(|M_n|\ge x,\ \tfrac12[M]_n+\langle M\rangle_n\le y\bigr)
\le 2\exp\Bigl(-\frac{3x^2}{4y}\Bigr)\le 2\exp\Bigl(-\frac{x^2}{2y}\Bigr),
\]

the inequality (3) is more precise than the inequality (1). Therefore, Theorem 2.1 improves the result of Bercu and Touati [[5], Theorem 2.1].

For self-normalized martingales, we can obtain the following inequality, which improves Theorem 1.2.

Theorem 2.2 Let $(M_n)$ be a locally square integrable martingale. Then, for all $x,y>0$, $a\ge 0$, $b\ge 0$,

\[
\mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ \langle M\rangle_n\ge \tfrac14[M]_n+y\Bigr)\le 2\exp\Bigl(-x^2\Bigl(ab+\frac{2b^2y}{3}\Bigr)\Bigr).
\]
(4)

Moreover, we also have the result

\[
\mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ \tfrac12[M]_n\le y\langle M\rangle_n\Bigr)\le 2\inf_{p>1}\biggl(\mathbb{E}\Bigl[\exp\Bigl(-\frac{(p-1)x^2}{1+y}\Bigl(ab+\frac{2b^2}{3}\langle M\rangle_n\Bigr)\Bigr)\Bigr]\biggr)^{\frac1p}.
\]
(5)

Remark 2.2 Since, for any $x,y>0$, $a\ge 0$, and $b>0$,

\[
\begin{aligned}
\mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ \langle M\rangle_n\ge [M]_n+y\Bigr)
&\le \mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ \langle M\rangle_n\ge \tfrac14[M]_n+y\Bigr)\\
&\le 2\exp\Bigl(-x^2\Bigl(ab+\frac{2b^2y}{3}\Bigr)\Bigr)
\le 2\exp\Bigl(-x^2\Bigl(ab+\frac{b^2y}{2}\Bigr)\Bigr),
\end{aligned}
\]

the inequality (4) is better than the inequality (2). Similarly, we have

\[
\begin{aligned}
\mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ [M]_n\le y\langle M\rangle_n\Bigr)
&\le \mathbb{P}\Bigl(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ \tfrac12[M]_n\le y\langle M\rangle_n\Bigr)\\
&\le 2\inf_{p>1}\biggl(\mathbb{E}\Bigl[\exp\Bigl(-\frac{(p-1)x^2}{1+y}\Bigl(ab+\frac{2b^2}{3}\langle M\rangle_n\Bigr)\Bigr)\Bigr]\biggr)^{\frac1p}\\
&\le 2\inf_{p>1}\biggl(\mathbb{E}\Bigl[\exp\Bigl(-\frac{(p-1)x^2}{1+y}\Bigl(ab+\frac{b^2}{2}\langle M\rangle_n\Bigr)\Bigr)\Bigr]\biggr)^{\frac1p}.
\end{aligned}
\]

Hence, Theorem 2.2 improves Theorem 1.2.

From Lemma 2.2 in Section 2.2 below, we know that the pair $\bigl(M_n,(\frac13[M]_n+\frac23\langle M\rangle_n)^{1/2}\bigr)$ satisfies the canonical assumption, i.e.

\[
\mathbb{E}\Bigl[\exp\Bigl(tM_n-\frac{t^2}{2}\Bigl(\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3}\Bigr)\Bigr)\Bigr]\le 1.
\]

Therefore, we can obtain the following theorem.

Theorem 2.3 Let $(X_k,k\ge 1)$ be a martingale difference sequence with respect to the filtration $\mathbb{F}=\{\mathcal{F}_k:k\ge 1\}$ and suppose that $\mathbb{E}X_k^2<\infty$ for all $k\ge 1$. Then, for all $t\in\mathbb{R}$,

\[
\mathbb{E}\Bigl[\exp\Bigl(tM_n-\frac{t^2}{2}\Bigl(\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3}\Bigr)\Bigr)\Bigr]\le 1,
\]

and, for all $x>0$,

\[
\mathbb{P}\Biggl(\frac{|M_n|}{\sqrt{\frac32\bigl(\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3}+\mathbb{E}[M_n^2]\bigr)}}\ge x\Biggr)\le \Bigl(\frac23\Bigr)^{\frac13}x^{-\frac23}e^{-\frac{x^2}{2}},
\]
(6)

where $M_n=\sum_{k=1}^n X_k$, $[M]_n=\sum_{k=1}^n X_k^2$ and $\langle M\rangle_n=\sum_{k=1}^n\mathbb{E}(X_k^2\mid\mathcal{F}_{k-1})$.

Remark 2.3 By Fatou's lemma, it is easy to see that the above results still hold for any stopping time $T$ with respect to the filtration such that $T<\infty$ a.s.

In [[6], Theorem 3.1], De la Peña and Pang obtained the following inequality: let $T$ be any stopping time with respect to the filtration and assume $T<\infty$ almost surely. Then, for all $x>0$,

\[
\mathbb{P}\Biggl(\frac{|M_T|}{\sqrt{\frac32\bigl([M]_T+\langle M\rangle_T+\mathbb{E}[M_T^2]\bigr)}}\ge x\Biggr)\le \Bigl(\frac23\Bigr)^{\frac13}x^{-\frac23}e^{-\frac{x^2}{2}}.
\]
(7)

By comparing the inequalities (6) and (7), we know that the inequality (6) is better than the inequality (7). More precisely, we have

\[
\mathbb{P}\Biggl(\frac{|M_T|}{\sqrt{\frac32\bigl([M]_T+\langle M\rangle_T+\mathbb{E}[M_T^2]\bigr)}}\ge x\Biggr)
\le \mathbb{P}\Biggl(\frac{|M_T|}{\sqrt{\frac32\bigl(\frac{[M]_T}{3}+\frac{2\langle M\rangle_T}{3}+\mathbb{E}[M_T^2]\bigr)}}\ge x\Biggr)
\le \Bigl(\frac23\Bigr)^{\frac13}x^{-\frac23}e^{-\frac{x^2}{2}}.
\]

The following result may be of independent interest.

Theorem 2.4 Let $(A,B)$ be a pair of random variables with $B>0$ on the probability space $(\Omega,\mathcal{F},\mathbb{P})$ satisfying the canonical assumption. Then, for every $b>0$, $S\ge 1$ and $\lambda\ge 1$,

\[
\mathbb{P}\bigl(|A|>\lambda B,\ b\le B\le bS\bigr)\le 2\sqrt{e}\,(1+2\lambda\log S)\,e^{-\lambda^2/2}.
\]

From Theorem 2.4, we have the following.

Corollary 2.1 Let $(M_n)$ be a locally square integrable martingale. Then, for every $b>0$, $S\ge 1$ and $\lambda\ge 1$, we have

\[
\mathbb{P}\Biggl(|M_n|>\lambda\sqrt{\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3}},\ b\le\sqrt{\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3}}\le bS\Biggr)\le 2\sqrt{e}\,(1+2\lambda\log S)\,e^{-\lambda^2/2}.
\]
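
As a rough numerical illustration of Theorem 2.4 (our own sketch, again with a conditionally Gaussian pair satisfying the canonical assumption; the choices of $b$, $S$ and $\lambda$ are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
m = 10**6
B = np.exp(rng.standard_normal(m))        # a positive random scale
A = B * rng.standard_normal(m)            # conditionally N(0, B^2)
b, S, lam = 0.5, 4.0, 3.0
emp = np.mean((np.abs(A) > lam * B) & (B >= b) & (B <= b * S))
bnd = 2 * np.sqrt(np.e) * (1 + 2 * lam * np.log(S)) * np.exp(-lam**2 / 2)
print(emp, "<=", bnd)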

2.2 Proofs of main results

We start with the following basic lemma.

Lemma 2.1 Let $X$ be a square integrable random variable with $\mathbb{E}X=0$ and $0<\sigma^2:=\mathbb{E}X^2<\infty$. Then, for all $t\in\mathbb{R}$, we have

\[
\mathbb{E}\Bigl[\exp\Bigl(tX-\frac{t^2}{6}X^2\Bigr)\Bigr]\le 1+\frac{t^2}{3}\sigma^2.
\]
(8)

Proof First, we shall prove that, for all $x\in\mathbb{R}$,

\[
\exp\Bigl(x-\frac{x^2}{6}\Bigr)\le 1+x+\frac{x^2}{3}.
\]
(9)

Let

\[
G(x)=\exp\Bigl(x-\frac{x^2}{6}\Bigr)-1-x-\frac{x^2}{3},
\]

then, by a straightforward calculation, we have

\[
G'(x)=\Bigl(1-\frac{x}{3}\Bigr)\exp\Bigl(x-\frac{x^2}{6}\Bigr)-1-\frac{2}{3}x
\]

and

\[
G''(x)=-\frac13\exp\Bigl(x-\frac{x^2}{6}\Bigr)+\Bigl(1-\frac{x}{3}\Bigr)^2\exp\Bigl(x-\frac{x^2}{6}\Bigr)-\frac23
=-\frac23\Bigl[1-\Bigl(1-x+\frac{x^2}{6}\Bigr)\exp\Bigl(x-\frac{x^2}{6}\Bigr)\Bigr].
\]

Since

\[
\exp\Bigl(\frac{x^2}{6}-x\Bigr)\ge 1+\frac{x^2}{6}-x,
\]

it follows that

\[
\Bigl(1-x+\frac{x^2}{6}\Bigr)\exp\Bigl(x-\frac{x^2}{6}\Bigr)\le 1.
\]

So we get $G''(x)\le 0$, which means that $G$ is a concave function. Since $G'(0)=0$, concavity implies that $G$ attains its maximum at $0$, and together with $G(0)=0$ this gives $G(x)\le 0$ for all $x$, i.e. (9) holds. Finally, applying (9) with $x$ replaced by $tX$ and taking expectations, we obtain
\[
\mathbb{E}\Bigl[\exp\Bigl(tX-\frac{t^2}{6}X^2\Bigr)\Bigr]\le 1+t\,\mathbb{E}X+\frac{t^2}{3}\mathbb{E}X^2=1+\frac{t^2}{3}\sigma^2,
\]
which proves (8). □
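
A quick numerical check of (9) on a grid (purely illustrative; the small tolerance only guards against floating-point rounding near $x=0$, where the two sides differ only at order $x^4$):

import numpy as np

x = np.linspace(-50.0, 50.0, 1_000_001)
lhs = np.exp(x - x**2 / 6)
rhs = 1 + x + x**2 / 3
print(np.all(lhs <= rhs + 1e-12))   # prints True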

In order to prove our main results, we next introduce Lemma 2.2.

Lemma 2.2 Let $(M_n)$ be a locally square integrable martingale. For all $t\in\mathbb{R}$ and $n\ge 0$, denote

\[
V_n(t)=\exp\Bigl(tM_n-\frac{t^2}{6}[M]_n-\frac{t^2}{3}\langle M\rangle_n\Bigr).
\]
(10)

Then, for all $t\in\mathbb{R}$, $(V_n(t))$ is a positive supermartingale with $\mathbb{E}[V_n(t)]\le 1$.

Proof For all $t\in\mathbb{R}$ and $n\ge 1$, we have

\[
V_n(t)=V_{n-1}(t)\exp\Bigl(t\,\Delta M_n-\frac{t^2}{6}\Delta[M]_n-\frac{t^2}{3}\Delta\langle M\rangle_n\Bigr),
\]

where $\Delta M_n=M_n-M_{n-1}$, $\Delta[M]_n=(\Delta M_n)^2$ and $\Delta\langle M\rangle_n=\mathbb{E}[(\Delta M_n)^2\mid\mathcal{F}_{n-1}]$. Hence, applying Lemma 2.1 conditionally on $\mathcal{F}_{n-1}$ to $X=\Delta M_n$, we deduce that, for all $t\in\mathbb{R}$,

\[
\mathbb{E}\bigl[V_n(t)\mid\mathcal{F}_{n-1}\bigr]\le V_{n-1}(t)\exp\Bigl(-\frac{t^2}{3}\Delta\langle M\rangle_n\Bigr)\Bigl(1+\frac{t^2}{3}\Delta\langle M\rangle_n\Bigr)\le V_{n-1}(t),
\]
where the last step uses $(1+u)e^{-u}\le 1$ for $u\ge 0$.

As a result, for all $t\in\mathbb{R}$, $(V_n(t))$ is a positive supermartingale, i.e. for all $n\ge 1$, $\mathbb{E}[V_n(t)]\le\mathbb{E}[V_{n-1}(t)]$, which implies $\mathbb{E}[V_n(t)]\le\mathbb{E}[V_0(t)]=1$. □
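
A small simulation consistent with Lemma 2.2 (our own illustration, using a toy martingale whose increments are Rademacher signs multiplied by predictable scales; the value of $t$ is arbitrary):

import numpy as np

rng = np.random.default_rng(3)
reps, n, t = 200_000, 30, 0.5
scales = 0.5 + rng.random((reps, n))               # predictable scales
signs = rng.choice([-1.0, 1.0], size=(reps, n))    # Rademacher innovations
dM = scales * signs
M_n = dM.sum(axis=1)
bracket = (dM**2).sum(axis=1)                      # [M]_n
angle = (scales**2).sum(axis=1)                    # <M>_n (equals [M]_n here since signs^2 = 1)
V = np.exp(t * M_n - t**2 / 6 * bracket - t**2 / 3 * angle)
print(V.mean())   # stays below 1, up to Monte Carlo error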

Next, we prove Theorems 2.1 and 2.2, following the approach of the original article of Bercu and Touati [5].

Proof of Theorem 2.1 First of all, with the notation of Lemma 2.2, denote

\[
Z_n=\frac12[M]_n+\langle M\rangle_n.
\]
(11)

Note that, with this notation, $V_n(t)=\exp(tM_n-\frac{t^2}{3}Z_n)$, so that $V_n(t)^{1/2}=\exp(\frac{t}{2}M_n-\frac{t^2}{6}Z_n)$. For all $x,y>0$, let $A_n=\{|M_n|\ge x,\ Z_n\le y\}$ and write $A_n=A_n^+\cup A_n^-$, where $A_n^+=\{M_n\ge x,\ Z_n\le y\}$ and $A_n^-=\{M_n\le -x,\ Z_n\le y\}$. By Markov's inequality and the Cauchy-Schwarz inequality, we have, for all $t>0$,

\[
\begin{aligned}
\mathbb{P}\bigl(A_n^+\bigr)&\le \mathbb{E}\Bigl[\exp\Bigl(\frac{t}{2}M_n-\frac{t}{2}x\Bigr)I_{A_n^+}\Bigr]
=\mathbb{E}\Bigl[\exp\Bigl(\frac{t}{2}M_n-\frac{t^2}{6}Z_n\Bigr)\exp\Bigl(\frac{t^2}{6}Z_n-\frac{t}{2}x\Bigr)I_{A_n^+}\Bigr]\\
&\le \exp\Bigl(\frac{t^2}{6}y-\frac{t}{2}x\Bigr)\,\mathbb{E}\bigl[V_n(t)^{1/2}I_{A_n^+}\bigr]
\le \exp\Bigl(\frac{t^2}{6}y-\frac{t}{2}x\Bigr)\bigl(\mathbb{E}[V_n(t)]\bigr)^{1/2}\,\mathbb{P}\bigl(A_n^+\bigr)^{1/2}.
\end{aligned}
\]

Hence, we deduce from Lemma 2.2 that, for all $t>0$,

\[
\mathbb{P}\bigl(A_n^+\bigr)\le \exp\Bigl(\frac{t^2}{6}y-\frac{t}{2}x\Bigr)\,\mathbb{P}\bigl(A_n^+\bigr)^{1/2}.
\]
(12)

Dividing both sides of (12) by $\mathbb{P}(A_n^+)^{1/2}$, choosing the value $t=\frac{3x}{2y}$ and squaring, we find that

\[
\mathbb{P}\bigl(A_n^+\bigr)\le\exp\Bigl(-\frac{3x^2}{4y}\Bigr).
\]

We also have the same upper bound for $\mathbb{P}(A_n^-)$, which immediately leads to (3). □

Proof of Theorem 2.2 We give the proof of Theorem 2.2 in the special case $a=0$ and $b=1$; the general case is handled in the same way. For all $x,y>0$, let

\[
B_n=\bigl\{|M_n|\ge x\langle M\rangle_n,\ \langle M\rangle_n-\tfrac14[M]_n\ge y\bigr\}=B_n^+\cup B_n^-,
\]

where

\[
B_n^+=\bigl\{M_n\ge x\langle M\rangle_n,\ \langle M\rangle_n-\tfrac14[M]_n\ge y\bigr\},\qquad
B_n^-=\bigl\{M_n\le -x\langle M\rangle_n,\ \langle M\rangle_n-\tfrac14[M]_n\ge y\bigr\}.
\]

By Markov's inequality, the Cauchy-Schwarz inequality and $\mathbb{E}[V_n(t)]\le 1$, we have, for all $t>0$,

\[
\begin{aligned}
\mathbb{P}\bigl(B_n^+\bigr)&\le \mathbb{E}\Bigl[\exp\Bigl(\frac{t}{2}M_n-\frac{tx}{2}\langle M\rangle_n\Bigr)I_{B_n^+}\Bigr]
=\mathbb{E}\Bigl[\exp\Bigl(\frac{t}{2}M_n-\frac{t^2}{6}Z_n\Bigr)\exp\Bigl(\frac{t}{6}(t-3x)\langle M\rangle_n+\frac{t^2}{12}[M]_n\Bigr)I_{B_n^+}\Bigr]\\
&\le \Bigl(\mathbb{E}\Bigl[\exp\Bigl(\frac{t}{3}(t-3x)\langle M\rangle_n+\frac{t^2}{6}[M]_n\Bigr)I_{B_n^+}\Bigr]\Bigr)^{1/2},
\end{aligned}
\]
(13)

where $Z_n$ is defined in (11). On $B_n^+$ we have $[M]_n\le 4(\langle M\rangle_n-y)$; consequently, taking the particular choice $t=x$ in (13), we obtain

\[
\mathbb{P}\bigl(B_n^+\bigr)\le\exp\Bigl(-\frac{x^2y}{3}\Bigr)\,\mathbb{P}\bigl(B_n^+\bigr)^{1/2}.
\]
(14)

Therefore, dividing both sides of (14) by $\mathbb{P}(B_n^+)^{1/2}$ and squaring, we find that

\[
\mathbb{P}\bigl(B_n^+\bigr)\le\exp\Bigl(-\frac{2x^2y}{3}\Bigr).
\]

The same upper bound holds for $\mathbb{P}(B_n^-)$, which clearly implies (4). Furthermore, for all $x,y>0$, let

\[
C_n=\bigl\{|M_n|\ge x\langle M\rangle_n,\ \tfrac12[M]_n\le y\langle M\rangle_n\bigr\}=C_n^+\cup C_n^-,
\]

where

\[
C_n^+=\bigl\{M_n\ge x\langle M\rangle_n,\ \tfrac12[M]_n\le y\langle M\rangle_n\bigr\},\qquad
C_n^-=\bigl\{M_n\le -x\langle M\rangle_n,\ \tfrac12[M]_n\le y\langle M\rangle_n\bigr\}.
\]

By Hölder's inequality, we have, for all $t>0$ and $q>1$,

\[
\begin{aligned}
\mathbb{P}\bigl(C_n^+\bigr)&\le \mathbb{E}\Bigl[\exp\Bigl(\frac{t}{q}M_n-\frac{tx}{q}\langle M\rangle_n\Bigr)I_{C_n^+}\Bigr]
\le \mathbb{E}\Bigl[\exp\Bigl(\frac{t}{q}M_n-\frac{t^2}{3q}Z_n\Bigr)\exp\Bigl(\frac{t}{3q}(t-3x+ty)\langle M\rangle_n\Bigr)I_{C_n^+}\Bigr]\\
&\le \Bigl(\mathbb{E}\Bigl[\exp\Bigl(\frac{tp}{3q}(t-3x+ty)\langle M\rangle_n\Bigr)\Bigr]\Bigr)^{\frac1p}.
\end{aligned}
\]
(15)

Consequently, as $\frac{p}{q}=p-1$, we can deduce from (15) with the particular choice $t=\frac{x}{1+y}$ that

\[
\begin{aligned}
\mathbb{P}\bigl(C_n^+\bigr)&\le \inf_{p>1}\Bigl(\mathbb{E}\Bigl[\exp\Bigl(\frac{p-1}{3}\Bigl(\frac{x^2}{(1+y)^2}-\frac{3x^2}{1+y}+\frac{x^2y}{(1+y)^2}\Bigr)\langle M\rangle_n\Bigr)\Bigr]\Bigr)^{\frac1p}\\
&\le \inf_{p>1}\Bigl(\mathbb{E}\Bigl[\exp\Bigl(-\frac{2(p-1)x^2}{3(1+y)}\langle M\rangle_n\Bigr)\Bigr]\Bigr)^{\frac1p}.
\end{aligned}
\]

We also find the same upper bound for $\mathbb{P}(C_n^-)$, which completes the proof of Theorem 2.2. □

Proof of Theorem 2.3 Since $(X_k,k\ge 1)$ is a martingale difference sequence with $\mathbb{E}X_k^2<\infty$, Lemma 2.2 shows that the canonical assumption holds, i.e. $\mathbb{E}\{\exp(tM_n-\frac{t^2}{2}[\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3}])\}\le 1$ for all $t\in\mathbb{R}$. Putting

\[
A=M_n,\qquad B^2=\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3},
\]

then, according to the canonical assumption and Fubini's theorem, for any $C>0$, we have

\[
\begin{aligned}
1&\ge \int_{\mathbb{R}}\frac{1}{\sqrt{2\pi C^{-1}}}\,e^{-\frac{\lambda^2}{2C^{-1}}}\,\mathbb{E}\Bigl[\exp\Bigl(\lambda A-\frac{\lambda^2B^2}{2}\Bigr)\Bigr]\,d\lambda\\
&=\mathbb{E}\biggl[C^{\frac12}\exp\Bigl(\frac{A^2}{2(B^2+C)}\Bigr)\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\exp\Bigl(-\frac{B^2+C}{2}\Bigl(\lambda-\frac{A}{B^2+C}\Bigr)^2\Bigr)d\lambda\biggr]
=\mathbb{E}\biggl[\Bigl(\frac{C}{B^2+C}\Bigr)^{\frac12}\exp\Bigl(\frac{A^2}{2(B^2+C)}\Bigr)\biggr].
\end{aligned}
\]

For any measurable set $G\in\mathcal{F}$, by Markov's inequality, we get

\[
\begin{aligned}
\mathbb{P}\Bigl(\frac{|A|}{\sqrt{B^2+C}}\ge x,\ G\Bigr)&=\mathbb{P}\Bigl(\frac{A^2}{4(B^2+C)}\ge \frac{x^2}{4},\ G\Bigr)
\le \mathbb{P}\Bigl(\frac{|A|^{\frac12}}{(B^2+C)^{\frac14}}\,e^{\frac{A^2}{4(B^2+C)}}\ge x^{\frac12}e^{\frac{x^2}{4}},\ G\Bigr)\\
&\le x^{-\frac12}e^{-\frac{x^2}{4}}\,\mathbb{E}\Bigl[\Bigl(\frac{A^2}{B^2+C}\Bigr)^{\frac14}e^{\frac{A^2}{4(B^2+C)}}I_G\Bigr].
\end{aligned}
\]
(16)

By using Hölder’s inequality, we get

\[
\begin{aligned}
\mathbb{E}\Bigl[\Bigl(\frac{A^2}{B^2+C}\Bigr)^{\frac14}e^{\frac{A^2}{4(B^2+C)}}I_G\Bigr]
&=\mathbb{E}\Bigl[\frac{C^{\frac14}}{|A|^{\frac12}}\Bigl(\frac{A^2}{B^2+C}\Bigr)^{\frac14}e^{\frac{A^2}{4(B^2+C)}}\cdot\frac{|A|^{\frac12}}{C^{\frac14}}I_G\Bigr]\\
&\le \Bigl(\mathbb{E}\Bigl[\Bigl(\frac{C}{B^2+C}\Bigr)^{\frac12}\exp\Bigl(\frac{A^2}{2(B^2+C)}\Bigr)\Bigr]\Bigr)^{\frac12}\Bigl(\mathbb{E}\Bigl[\frac{|A|}{C^{\frac12}}I_G\Bigr]\Bigr)^{\frac12}
\le \Bigl(\mathbb{E}\Bigl[\frac{|A|}{C^{\frac12}}I_G\Bigr]\Bigr)^{\frac12}.
\end{aligned}
\]

Let $C=\mathbb{E}[|A|^2]$; by Hölder's inequality again, we have

\[
\mathbb{E}\Bigl[\frac{|A|}{C^{\frac12}}I_G\Bigr]\le \Bigl(\mathbb{E}\Bigl[\frac{|A|^2}{C}\Bigr]\Bigr)^{\frac12}\mathbb{P}(G)^{\frac12}=\mathbb{P}(G)^{\frac12}.
\]

Hence, from (16), it follows that

\[
\mathbb{P}\Bigl(\frac{|A|}{\sqrt{B^2+\mathbb{E}|A|^2}}\ge x,\ G\Bigr)\le x^{-\frac12}e^{-\frac{x^2}{4}}\,\mathbb{P}(G)^{\frac14}.
\]

Now, letting $G=\bigl\{\frac{|A|}{\sqrt{B^2+\mathbb{E}|A|^2}}\ge x\bigr\}$, we obtain

\[
\mathbb{P}\Bigl(\frac{|A|}{\sqrt{B^2+\mathbb{E}|A|^2}}\ge x\Bigr)\le x^{-\frac23}e^{-\frac{x^2}{3}}.
\]

Replacing $x$ by $\sqrt{3/2}\,x$ in the last inequality (and noting that $\mathbb{E}|A|^2=\mathbb{E}[M_n^2]$), we get the inequality

\[
\mathbb{P}\Biggl(\frac{|M_n|}{\sqrt{\frac32\bigl(\frac{[M]_n}{3}+\frac{2\langle M\rangle_n}{3}+\mathbb{E}[M_n^2]\bigr)}}\ge x\Biggr)\le \Bigl(\frac23\Bigr)^{\frac13}x^{-\frac23}e^{-\frac{x^2}{2}}.
\]

 □

Proof of Theorem 2.4 Given $a>1$, let $b_k=ba^k$ and define the events $C_k=\{b_k\le B\le b_{k+1}\}$, $k=0,1,2,\ldots,K$, where $K$ denotes the integer part of $\log_a S$. Since the pair $(A,B)$ satisfies the canonical assumption (applied with $\lambda/b_k$ in place of $\lambda$), we have

\[
\begin{aligned}
1&\ge \mathbb{E}\Bigl[\exp\Bigl(\frac{\lambda}{b_k}A-\frac{\lambda^2}{2b_k^2}B^2\Bigr)I_{\{A>\lambda B,\,C_k\}}\Bigr]
\ge \mathbb{E}\Bigl[\exp\Bigl(\frac{\lambda^2}{b_k}B-\frac{\lambda^2}{2b_k^2}B^2\Bigr)I_{\{A>\lambda B,\,C_k\}}\Bigr]\\
&\ge \mathbb{E}\Bigl[\inf_{b_k\le v\le b_{k+1}}\exp\Bigl(\frac{\lambda^2}{b_k}v-\frac{\lambda^2}{2b_k^2}v^2\Bigr)I_{\{A>\lambda B,\,C_k\}}\Bigr]
=\exp\Bigl(\lambda^2\Bigl(a-\frac{a^2}{2}\Bigr)\Bigr)\mathbb{P}\bigl(A>\lambda B,\ C_k\bigr),
\end{aligned}
\]

which implies

\[
\mathbb{P}\bigl(A>\lambda B,\ C_k\bigr)\le\exp\Bigl(-\lambda^2\Bigl(a-\frac{a^2}{2}\Bigr)\Bigr).
\]

Applying the same argument to the pair $(-A,B)$, which also satisfies the canonical assumption, and summing over $k$, we conclude that

\[
\mathbb{P}\bigl(|A|>\lambda B,\ b\le B\le bS\bigr)\le \sum_{k=0}^{K}\mathbb{P}\bigl(|A|>\lambda B,\ C_k\bigr)\le 2(1+\log_a S)\exp\Bigl(-\lambda^2\Bigl(a-\frac{a^2}{2}\Bigr)\Bigr).
\]
(17)

We now choose $a>1$ so as to make the bound in (17) as small as possible. Taking $a=1+1/\lambda$, we have

\[
\lambda^2\Bigl(a-\frac{a^2}{2}\Bigr)=\frac12\bigl(\lambda^2-1\bigr).
\]
(18)

Since $\log(1+1/\lambda)\ge 1/(2\lambda)$ for $\lambda\ge 1$, we have $1+\log_a S\le 1+2\lambda\log S$, which, together with (18), yields the desired result. □

3 Linear regressions

In this section, let us consider a deviation inequality for the least-squares estimator of the unknown parameter in a linear regression model. For all $n\ge 0$, let

\[
X_{n+1}=\theta\phi_n+\varepsilon_{n+1},
\]
(19)

where $X_n$, $\phi_n$, and $\varepsilon_n$ are the observation, the regression variable, and the driving noise, respectively. Suppose that $(\phi_n)$ is a sequence of independent and identically distributed random variables, and $(\varepsilon_n)$ is a sequence of identically distributed random variables with mean zero and variance $\sigma^2>0$. Furthermore, assume that $\varepsilon_{n+1}$ is independent of $\mathcal{F}_n$, where $\mathcal{F}_n=\sigma(\phi_i,\varepsilon_j;0\le i\le n,1\le j\le n)$. The least-squares estimator of the unknown parameter $\theta$ is given by

\[
\widehat{\theta}_n=\frac{\sum_{k=1}^n\phi_{k-1}X_k}{\sum_{k=1}^n\phi_{k-1}^2},
\]

which, by (19), yields

\[
\widehat{\theta}_n-\theta=\sigma^2\,\frac{M_n}{\langle M\rangle_n},
\]
(20)

where

\[
M_n=\sum_{k=1}^n\phi_{k-1}\varepsilon_k\quad\text{and}\quad \langle M\rangle_n=\sigma^2\sum_{k=1}^n\phi_{k-1}^2.
\]
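
The following toy simulation (our own illustration, with Gaussian $\phi_n$ and $\varepsilon_n$ chosen only for concreteness) computes the least-squares estimator and confirms the identity (20):

import numpy as np

rng = np.random.default_rng(4)
theta, sigma, n = 2.0, 1.0, 1000
phi = rng.standard_normal(n)                 # phi_0, ..., phi_{n-1}
eps = sigma * rng.standard_normal(n)         # eps_1, ..., eps_n
X = theta * phi + eps                        # X_k = theta * phi_{k-1} + eps_k
theta_hat = (phi * X).sum() / (phi**2).sum()
M_n = (phi * eps).sum()
angle_n = sigma**2 * (phi**2).sum()
print(theta_hat - theta, sigma**2 * M_n / angle_n)   # the two values coincide, as in (20)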

Let $H$ and $L$ be the cumulant generating functions of $\phi_n^2$ and $\varepsilon_n^2$, respectively, given, for all $t\in\mathbb{R}$, by

\[
H(t)=\log\mathbb{E}\bigl[\exp\bigl(t\phi_n^2\bigr)\bigr]\quad\text{and}\quad L(t)=\log\mathbb{E}\bigl[\exp\bigl(t\varepsilon_n^2\bigr)\bigr].
\]

Bercu and Touati [5] obtained the following result.

Corollary 3.1 [[5], Corollary 5.1]

Assume that $L$ is finite on some interval $[0,c]$ with $c>0$ and denote by $I$ its Fenchel-Legendre transform on $[0,c]$,

\[
I(x)=\sup_{0\le t\le c}\bigl\{xt-L(t)\bigr\}.
\]

Then, for all $n\ge 1$, $x>0$ and $y>0$, we have

\[
\mathbb{P}\bigl(|\widehat{\theta}_n-\theta|\ge x\bigr)\le 2\inf_{p>1}\exp\Bigl(\frac{n}{p}H\Bigl(-\frac{(p-1)x^2}{2\sigma^2(1+y)}\Bigr)\Bigr)+\exp\Bigl(-nI\Bigl(\frac{\sigma^2y}{n}\Bigr)\Bigr).
\]
(21)

Now, we give the following theorem.

Theorem 3.1 Under the conditions of Corollary 3.1, for all $n\ge 1$, $x>0$, and $y>0$, we have

\[
\mathbb{P}\bigl(|\widehat{\theta}_n-\theta|\ge x\bigr)\le 2\inf_{p>1}\exp\Bigl(\frac{n}{p}H\Bigl(-\frac{2(p-1)x^2}{3\sigma^2(1+y)}\Bigr)\Bigr)+\exp\Bigl(-nI\Bigl(\frac{2\sigma^2y}{n}\Bigr)\Bigr).
\]
(22)
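
As a purely illustrative sketch (not part of the paper), the bound (22) can be evaluated numerically. Assume $\phi_n\sim N(0,1)$ and $\varepsilon_n\sim N(0,\sigma^2)$, so that $H(t)=-\frac12\log(1-2t)$ for $t<\frac12$ and $L(t)=-\frac12\log(1-2\sigma^2t)$ for $t<\frac{1}{2\sigma^2}$; all numerical parameter choices below are arbitrary.

import numpy as np

sigma, n = 1.0, 1000
x, y = 2.0, 600.0
c = 0.45 / sigma**2                       # L is finite on [0, c]

def H(t):
    return -0.5 * np.log(1 - 2 * t)

def L(t):
    return -0.5 * np.log(1 - 2 * sigma**2 * t)

ts = np.linspace(0.0, c, 10_001)

def I(z):
    return np.max(z * ts - L(ts))         # Fenchel-Legendre transform of L on [0, c]

ps = np.linspace(1.001, 100.0, 20_000)
first = 2 * np.exp(n / ps * H(-2 * (ps - 1) * x**2 / (3 * sigma**2 * (1 + y))))
bound = first.min() + np.exp(-n * I(2 * sigma**2 * y / n))
print(f"bound (22) on P(|theta_hat - theta| >= {x}): {bound:.3g}")

Here the first term is minimized over a grid of values of $p$; a finer grid or a one-dimensional optimizer would serve equally well.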

Remark 3.1 Since $H$ and $I$ are nondecreasing, the upper bound in (22) is sharper than the bound in (21).

Proof From (20), for all $n\ge 1$, $x>0$ and $y>0$, we get

\[
\mathbb{P}\bigl(|\widehat{\theta}_n-\theta|\ge x\bigr)=\mathbb{P}\Bigl(|M_n|\ge \frac{x}{\sigma^2}\langle M\rangle_n\Bigr)
\le \mathbb{P}\Bigl(|M_n|\ge \frac{x}{\sigma^2}\langle M\rangle_n,\ \tfrac12[M]_n\le y\langle M\rangle_n\Bigr)+\mathbb{P}\Bigl(\tfrac12[M]_n\ge y\langle M\rangle_n\Bigr).
\]

By using the inequality (5) with $a=0$, $b=1$ and $x/\sigma^2$ in place of $x$, it follows that

\[
\begin{aligned}
\mathbb{P}\Bigl(|M_n|\ge \frac{x}{\sigma^2}\langle M\rangle_n,\ \tfrac12[M]_n\le y\langle M\rangle_n\Bigr)
&\le 2\inf_{p>1}\Bigl(\mathbb{E}\Bigl[\exp\Bigl(-\frac{2(p-1)x^2}{3\sigma^4(1+y)}\langle M\rangle_n\Bigr)\Bigr]\Bigr)^{\frac1p}\\
&=2\inf_{p>1}\exp\Bigl(\frac{n}{p}H\Bigl(-\frac{2(p-1)x^2}{3\sigma^2(1+y)}\Bigr)\Bigr).
\end{aligned}
\]

Furthermore, since, for any $t\in[0,c]$,

\[
\begin{aligned}
\mathbb{P}\Bigl(\tfrac12[M]_n\ge y\langle M\rangle_n\Bigr)&=\mathbb{P}\Bigl(\sum_{k=1}^n\phi_{k-1}^2\varepsilon_k^2\ge 2y\sigma^2\sum_{k=1}^n\phi_{k-1}^2\Bigr)
\le \mathbb{P}\Bigl(\sum_{k=1}^n\varepsilon_k^2\ge 2y\sigma^2\Bigr)\\
&\le \exp\bigl(-2y\sigma^2t\bigr)\,\mathbb{E}\exp\Bigl(t\sum_{k=1}^n\varepsilon_k^2\Bigr)
\le \exp\bigl(-2y\sigma^2t+nL(t)\bigr),
\end{aligned}
\]

taking the infimum over $t\in[0,c]$ gives $\mathbb{P}(\tfrac12[M]_n\ge y\langle M\rangle_n)\le\exp(-nI(\frac{2\sigma^2y}{n}))$. Combining the two estimates above completes the proof. □

References

  1. Shao QM: Self-normalized large deviations. Ann. Probab. 1997, 25: 285–328.


  2. Bentkus V, Götze F: The Berry-Esseen bound for Student’s statistic. Ann. Probab. 1996, 24: 491–503.


  3. Giné E, Götze F, Mason DM: When is the Student t-statistic asymptotically standard normal? Ann. Probab. 1997, 25: 1514–1531.


  4. De la Peña VH, Lai TL, Shao QM: Self-Normalized Processes: Limit Theory and Statistical Applications. Springer, Berlin; 2009.


  5. Bercu B, Touati A: Exponential inequalities for self-normalized martingales with applications. Ann. Appl. Probab. 2008, 18: 1848–1869. 10.1214/07-AAP506


  6. De la Peña VH, Pang GD: Exponential inequalities for self-normalized processes with applications. Electron. Commun. Probab. 2009, 14: 372–381.


  7. Bercu B: An exponential inequality for autoregressive processes in adaptive tracking. J. Syst. Sci. Complex. 2007, 20: 243–250. 10.1007/s11424-007-9021-6



Acknowledgements

This work is supported by IRTSTHN (14IRTSTHN023), NSFC (No. 11001077), NCET (NCET-11-0945), and Plan For Scientific Innovation Talent of Henan Province (124100510014).

Author information


Correspondence to Yu Miao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the manuscript and read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chen, S., Wang, Z., Xu, W. et al. Exponential inequalities for self-normalized martingales. J Inequal Appl 2014, 289 (2014). https://doi.org/10.1186/1029-242X-2014-289



  • DOI: https://doi.org/10.1186/1029-242X-2014-289

Keywords