Maximum likelihood estimators in linear regression models with Ornstein-Uhlenbeck process
Journal of Inequalities and Applications volume 2014, Article number: 301 (2014)
Abstract
The paper studies the linear regression model
where
with parameters λ > 0 and σ > 0, where B denotes a standard Brownian motion. First, the maximum likelihood (ML) estimators of β, λ, and σ² are given. Second, under general conditions, the asymptotic properties of the ML estimators are investigated. Then, limiting distributions for likelihood ratio test statistics of the hypothesis are given. Lastly, the validity of the method is illustrated by two real examples.
MSC: 62J05, 62M10, 60J60.
1 Introduction
Consider the following linear regression model
where ’s are scalar response variables, ’s are explanatory variables, β is an m-dimensional unknown parameter, and is an Ornstein-Uhlenbeck process, which satisfies the linear stochastic differential equation (SDE)
with parameters λ > 0 and σ > 0, where B denotes a standard Brownian motion.
It is well known that the linear regression model is one of the most important and popular models in the statistical literature, and it has attracted many investigators. For the ordinary linear regression model (where the errors are independent and identically distributed (i.i.d.) random variables), Wang and Zhou [1], Anatolyev [2], Bai and Guo [3], Chen [4], Gil et al. [5], Hampel et al. [6], Cui [7], Durbin [8] and Li and Yang [9] used various estimation methods to obtain estimators of the unknown parameters in (1.1) and discussed large- or small-sample properties of these estimators. Recently, linear regression with serially correlated errors has attracted increasing attention from statisticians and economists. One case of considerable interest is that in which the errors are autoregressive processes: Hu [10], Wu [11], and Fox and Taqqu [12] established asymptotic normality, with the usual normalization, in the case of long memory stationary Gaussian observation errors. Giraitis and Surgailis [13] extended this result to non-Gaussian linear sequences. Koul and Surgailis [14] established the asymptotic normality of the Whittle estimator in linear regression models with non-Gaussian long memory moving average errors. Shiohama and Taniguchi [15] estimated the regression parameters in a linear regression model with an autoregressive error process. Fan [16] investigated moderate deviations for M-estimators in linear models with ϕ-mixing errors.
The Ornstein-Uhlenbeck process was originally introduced by Ornstein and Uhlenbeck [17] as a model for particle motion in a fluid. In physical sciences, the Ornstein-Uhlenbeck process is a prototype of a noisy relaxation process, whose probability density function can be described by the Fokker-Planck equation (see Janczura et al. [18], Debbasch et al. [19], Gillespie [20], Ditlevsen and Lansky [21], Garbaczewski and Olkiewicz [22], Plastino and Plastino [23]):
This process is now widely used in many areas of application. The main characteristic of the Ornstein-Uhlenbeck process is the tendency to return towards the long-term equilibrium μ. This property, known as mean-reversion, is found in many real life processes, e.g., in commodity and energy price processes (see Fasen [24], Yu [25], Geman [26]). There are a number of papers concerned with the Ornstein-Uhlenbeck process, for example, Janczura et al. [18], Zhang et al. [27], Rieder [28], Iacus [29], Bishwal [30], Shimizu [31], Zhang and Zhang [32], Chronopoulou and Viens [33], Lin and Wang [34] and Xiao et al. [35]. It is well known that the solution of model (1.2) is an autoregressive process. For a constant or functional or random coefficient autoregressive model, many people (for example, Magdalinos [36], Andrews and Guggenberger [37], Fan and Yao [38], Berk [39], Goldenshluger and Zeevi [40], Liebscher [41], Baran et al. [42], Distaso [43] and Harvill and Ray [44]) used various estimation methods to obtain estimators and discussed some asymptotic properties of these estimators, or investigated hypotheses testing.
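To make the mean-reversion property concrete, the sketch below simulates an Ornstein-Uhlenbeck path using its exact Gaussian transition over an equidistant grid. All parameter values (λ = 2, μ = 1, σ = 0.5) and the starting point are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_ou(lam, mu, sigma, eps0, d, n, rng):
    """Simulate an Ornstein-Uhlenbeck path on a grid with step d, using the
    exact Gaussian transition: eps_{t+d} = mu + (eps_t - mu) e^{-lam d} + noise."""
    phi = np.exp(-lam * d)                          # one-step AR coefficient
    sd = sigma * np.sqrt((1 - phi**2) / (2 * lam))  # exact one-step noise std dev
    eps = np.empty(n + 1)
    eps[0] = eps0
    for i in range(1, n + 1):
        eps[i] = mu * (1 - phi) + phi * eps[i - 1] + sd * rng.standard_normal()
    return eps

rng = np.random.default_rng(0)
path = simulate_ou(lam=2.0, mu=1.0, sigma=0.5, eps0=5.0, d=0.01, n=5000, rng=rng)
# the path decays from its start toward the long-term level mu, then
# fluctuates around it with stationary std dev sigma / sqrt(2 * lam) = 0.25
```

The exact transition is preferable to an Euler scheme here because the OU process is one of the few diffusions whose discrete-time law is available in closed form.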
By (1.1) and (1.2), we can obtain that the more general process satisfies the SDE
where is a time-dependent mean reversion level with three parameters. Thus, model (1.3) is a general Ornstein-Uhlenbeck process. Its special cases have gained much attention and have been applied to many fields such as economics, physics, geography, geology, biology and agriculture. Dehling et al. [45] considered maximum likelihood estimation for the model and proved strong consistency and asymptotic normality. Lin and Wang [34] established the existence of a successful coupling for a class of stochastic differential equations given by (1.3). Bishwal [30] investigated the uniform rate of weak convergence of the minimum contrast estimator in the Ornstein-Uhlenbeck process (1.3).
The solution of model (1.2) is given by
where .
The process observed in discrete time is more relevant in statistics and economics. Therefore, by (1.4), the Ornstein-Uhlenbeck time series for is given by
where the errors are i.i.d. and the observation times are equidistant with time lag d, fixed in advance. Models (1.1) and (1.5) include many special cases, such as a linear regression model with constant coefficient autoregressive errors (when ; see Hu [10], Wu [11], Maller [46], Pere [47] and Fuller [48]), Ornstein-Uhlenbeck time series or processes (when ; see Rieder [28], Iacus [29], Bishwal [30], Shimizu [31] and Zhang and Zhang [32]), and constant coefficient autoregressive processes (when , ; see Chambers [49], Hamilton [50], Brockwell and Davis [51] and Abadir and Lucas [52]).
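As (1.5) indicates, the discretely observed errors form a Gaussian AR(1) sequence with coefficient e^{−λd}. The following sketch generates data from models (1.1) and (1.5); the values of β, λ, σ, and d are assumptions made for the demonstration only.

```python
import numpy as np

def generate_data(beta, lam, sigma, d, n, rng):
    """Draw (X, y) from y_i = x_i' beta + eps_i, where the errors follow the
    exact OU discretization: an AR(1) with coefficient phi = e^{-lam d}."""
    m = len(beta)
    X = rng.standard_normal((n, m))                  # illustrative design matrix
    phi = np.exp(-lam * d)
    tau = sigma * np.sqrt((1 - phi**2) / (2 * lam))  # innovation std dev
    eps = np.empty(n)
    eps[0] = sigma / np.sqrt(2 * lam) * rng.standard_normal()  # stationary start
    for i in range(1, n):
        eps[i] = phi * eps[i - 1] + tau * rng.standard_normal()
    return X, X @ beta + eps, eps

rng = np.random.default_rng(1)
X, y, eps = generate_data(beta=np.array([1.0, -0.5]), lam=1.0, sigma=0.3,
                          d=0.1, n=2000, rng=rng)
# the lag-1 sample autocorrelation of eps should be near e^{-lam d} ≈ 0.905
```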
The paper discusses models (1.1) and (1.5). The organization of the paper is as follows. In Section 2, estimators of β, θ, and σ² are given by the quasi-maximum likelihood method. Under general conditions, the existence and consistency of the quasi-maximum likelihood estimators, as well as their asymptotic normality, are investigated in Section 3. The hypothesis testing is given in Section 4. Some preliminary lemmas are presented in Section 5. The main proofs of the theorems are presented in Section 6, with two real examples in Section 7.
2 Estimation method
Without loss of generality, we assume that , in the sequel. Write the ‘true’ model as
and
where i.i.d.
By (2.2), we have
Thus is measurable with respect to the σ-field H generated by , and
Using similar arguments as those of Rieder [28] or Maller [46], we get the log-likelihood of conditional on ,
We maximize (2.5) to obtain QML estimators denoted by , , (when they exist). Then the first derivatives of may be written as
and
Thus , , satisfy the following estimation equations:
and
where
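The estimating equations typically have no closed-form solution and must be solved numerically. As one simple numerical route (a sketch of the idea, not the authors' exact algorithm), the code below alternates a quasi-differenced least squares step for β with an AR(1) fit to the residuals, Cochrane-Orcutt style, then maps the AR(1) coefficient back to (λ, σ²) via φ = e^{−λd}. All simulation settings in the check are illustrative assumptions.

```python
import numpy as np

def qml_fit(X, y, d, n_iter=50):
    """Approximate QML fit of y = X beta + eps with AR(1) errors
    eps_i = phi eps_{i-1} + eta_i, phi = e^{-lam d}, by alternating
    (1) least squares for beta on quasi-differenced data and
    (2) an AR(1) fit to the current residuals."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting value
    phi = 0.0
    for _ in range(n_iter):
        eps = y - X @ beta
        phi = (eps[1:] @ eps[:-1]) / (eps[:-1] @ eps[:-1])
        Xs = X[1:] - phi * X[:-1]                    # quasi-differencing
        ys = y[1:] - phi * y[:-1]
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    eps = y - X @ beta
    eta = eps[1:] - phi * eps[:-1]
    tau2 = (eta @ eta) / (n - 1)                     # innovation variance
    lam = -np.log(phi) / d                           # invert phi = e^{-lam d}
    sigma2 = 2 * lam * tau2 / (1 - phi**2)           # diffusion variance
    return beta, lam, sigma2

# check on simulated data (all values below are illustrative assumptions)
rng = np.random.default_rng(2)
n, d, lam0, sig0 = 3000, 0.1, 1.0, 0.3
phi0 = np.exp(-lam0 * d)
tau0 = sig0 * np.sqrt((1 - phi0**2) / (2 * lam0))
X = rng.standard_normal((n, 2))
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = phi0 * eps[i - 1] + tau0 * rng.standard_normal()
y = X @ np.array([1.0, -0.5]) + eps
beta_hat, lam_hat, sigma2_hat = qml_fit(X, y, d)
```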
To obtain our results, the following conditions are sufficient (see Maller [46]).
(A1) is positive definite for sufficiently large n and
(A2)
where , denotes the maximum in absolute value of the eigenvalues of a symmetric matrix.
For ease of exposition, we shall introduce the following notations which will be used later in the paper.
Let -vector . Define
By (2.7) and (2.8), we get the components of
and
Hence we have
where the ∗ indicates that the elements are filled in by symmetry. By (2.18), we have
Thus,
3 Large sample properties of the estimators
Theorem 3.1 Suppose that conditions (A1)-(A2) hold. Then there is a sequence such that, for each , as , the probability
Furthermore,
where, for each , and , define neighborhoods
and
Theorem 3.2 Suppose that conditions (A1)-(A2) hold. Then
In the following, we will investigate some special cases in models (1.1) and (1.5). From Theorem 3.1 and Theorem 3.2, we obtain the following results. Here we omit their proofs.
Corollary 3.1 If , then
Corollary 3.2 If , then
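The √n-asymptotic normality asserted by these results can be checked by simulation. The sketch below does this for the autoregression coefficient alone, whose least squares estimator in the stationary Gaussian AR(1) case has limiting variance 1 − φ²; the parameter values and replication counts are illustrative.

```python
import numpy as np

def ar1_normality_mc(phi, n, reps, rng):
    """Monte Carlo check: sqrt(n) (phi_hat - phi) for the least squares AR(1)
    estimator should be approximately N(0, 1 - phi^2) for large n."""
    z = np.empty(reps)
    for r in range(reps):
        noise = rng.standard_normal(n)
        eps = np.zeros(n)
        for i in range(1, n):
            eps[i] = phi * eps[i - 1] + noise[i]
        phi_hat = (eps[1:] @ eps[:-1]) / (eps[:-1] @ eps[:-1])
        z[r] = np.sqrt(n) * (phi_hat - phi)
    return z

rng = np.random.default_rng(3)
z = ar1_normality_mc(phi=0.8, n=1000, reps=500, rng=rng)
# the sample std of z should be near sqrt(1 - 0.8**2) = 0.6
```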
4 Hypothesis testing
In order to fit a data set , we may use model (1.3) or an Ornstein-Uhlenbeck process with a constant mean level model
If , then we use model (1.3), namely models (1.1) and (1.2). If , then we use model (1.4). How do we decide between the two? In this section, we consider this hypothesis testing question and obtain limiting distributions for likelihood ratio (LR) test statistics (see Fan and Jiang [53]).
Under the null hypothesis
let , , be the corresponding ML estimators of β, λ, . Also let
and
By (2.9) and (2.5), we have that
And similarly,
By (4.5) and (4.6), we have
Large values of suggest rejection of the null hypothesis.
Theorem 4.1 Suppose that conditions (A1)-(A2) hold. If holds, then
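In practice, the LR statistic is computed from the two maximized log-likelihoods and compared with the quantiles of its limiting distribution. A minimal sketch follows, assuming a chi-squared limit with degrees of freedom equal to the number of restrictions tested; the numeric log-likelihood values are made up purely for illustration, and `gaussian_loglik` mirrors the conditional Gaussian likelihood of Section 2.

```python
import numpy as np
from scipy.stats import chi2

def gaussian_loglik(eta, tau2):
    """Conditional Gaussian log-likelihood of innovations eta with variance tau2."""
    n = len(eta)
    return -0.5 * n * np.log(2 * np.pi * tau2) - (eta @ eta) / (2 * tau2)

def lr_test(loglik_full, loglik_null, df):
    """LR statistic and p-value under an assumed chi-squared(df) limit."""
    stat = 2.0 * (loglik_full - loglik_null)
    return stat, chi2.sf(stat, df)

# made-up maximized log-likelihood values, purely for illustration
stat, p = lr_test(loglik_full=-120.3, loglik_null=-125.0, df=1)
# stat = 9.4; a large value leads to rejection of the null hypothesis
```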
5 Some lemmas
Throughout this paper, let C denote a generic positive constant which may take different values at each occurrence. To prove our main results, we first introduce the following lemmas.
Lemma 5.1 If condition (A1) holds, then for any the matrix is positive definite for large enough n, and
Proof Let and be the smallest and largest roots of . Then from Ex. 22.1 of Rao [54],
for unit vectors u. Thus by (2.18) there are some and such that implies
By (2.16) and (5.1), we have
By Rao [[54], p.60] and (2.17), we have
From (5.3) and ,
□
Lemma 5.2 The matrix is positive definite for large enough n, and .
Proof Note that is positive definite and . It is easy to show that the matrix is positive definite for large enough n. By (2.8), we have
Note that and are independent, so we have . Thus, by (2.7) and , we have
Hence, from (5.5) and (5.6),
By (2.8) and (2.20), we have
Note that is a martingale difference sequence with
so
By (2.7), (2.8), and noting that and are independent, we have
From (5.8)-(5.10), it follows that . The proof is completed. □
Lemma 5.3 (Maller [55])
Let be a symmetric random matrix with eigenvalues , . Then
Lemma 5.4 For each ,
and also
where
Proof Let be a square root decomposition of . Then
Let . Then
From (2.20), (2.21) and (5.14),
where
and
Let
and
As the first step, we will show that, for each ,
In fact, note that
where
and
Let , , and let , . By the Cauchy-Schwarz inequality, Lemma 5.1 and noting , we have
Similar to the proof of , we easily obtain
By the Cauchy-Schwarz inequality, Lemma 5.1 and noting , we have
Hence, (5.23) follows from (5.24)-(5.27).
For the second step, we will show that
Note that
and
Write
where
For and each , we have
By (5.32) and Lemma 5.1, we have
Using the Cauchy-Schwarz inequality and (5.33), we obtain
Using a similar argument as , we obtain that
By the Cauchy-Schwarz inequality and (5.33), (5.25), we get
By (5.25), we have
Thus, by the Chebychev inequality and (5.37),
By Lemma 5.1 and (2.3), we have
Thus, by the Chebychev inequality and (5.39),
Using a similar argument as , we obtain
Thus (5.28) follows immediately from (5.31), (5.34)-(5.36), (5.38), (5.40) and (5.41).
For the third step, we will show that
Write that
By (3.3) and (3.4), we obtain that
and
By (5.29), we have
By (5.32), it is easy to show that
By Lemma 5.1, (2.3) and (5.32), we have
Thus by the Chebychev inequality and (5.48),
Write
where
Note that
so we have
Thus, by (5.46), (5.47), (5.49) and (5.52), we have
By (5.29), we have
It is easy to show that
Note that is a martingale difference sequence, so we have
Hence,
By (5.54)-(5.56), we have
It is easily proved that
Hence, (5.42) follows immediately from (5.43)-(5.45), (5.53), (5.57) and (5.58). This completes the proof of (5.11) from (5.17), (5.23), (5.28) and (5.42).
It is well known that as . To prove (5.12), we need to show that
This follows immediately from (2.20) and the Markov inequality.
Finally, we will prove (5.13). By (5.11) and (5.12), we have
uniformly in for each . Thus, by Lemma 5.3,
This implies (5.13). □
Lemma 5.5 (Hall and Heyde [56])
Let be a zero-mean, square-integrable martingale array with differences , and let be an a.s. finite random variable. Suppose that for all , and . Then
where the r.v. Z has the characteristic function .
6 Proof of theorems
Proof of Theorem 3.1 Take , let
be the boundary of , and let . Using (2.19) and the Taylor expansion, for each , we have
where for some .
Let and . Take and , and by (6.2), we obtain that
By Lemma 5.2 and the Chebychev inequality, we obtain
Let , then , and using (5.13), we have
By (6.3)-(6.5), we have
By Lemma 5.3, as . Hence . Moreover, from (5.13), we have
This implies that is concave on . Noting this fact and (6.6), we get
On the event in the brackets, the continuous function has a unique maximum in θ over the compact neighborhood . Hence
Moreover, there is a sequence such that satisfies
This is a QML estimator for . It is clearly consistent, and
Since are ML estimators for , is an ML estimator for from (2.9).
To complete the proof, we will show that as . If , then and .
By (2.12) and (2.1), we have
By (2.9), (2.11) and (6.8), we have
From (6.8), it follows that
From (2.2), we get
By (6.9)-(6.11), we have
By the law of large numbers and , we have
By the Markov inequality, and noting that , we obtain
Since is a martingale difference sequence with
we have
By the Chebychev inequality, we have
By (5.33), we have
From (6.12)-(6.14), (6.16) and (6.17), we have .
We therefore complete the proof of Theorem 3.1. □
Proof of Theorem 3.2 It is easy to see from Theorem 3.1 that and that is nonsingular. By the Taylor expansion, we have
Since , also . By (5.11), we have
where is a symmetric matrix with . By (6.18) and (6.19), we have
Similar to (6.20), we have
Here . By (6.20), (6.21), and noting that and , we obtain that
From (2.7) and (2.8), we have
From (5.14) and (5.15), we have
By (6.23) and (6.24), we have
Let with , and
Then , and we will consider the limiting distribution of the following 2-vector
Note that
Hence, by the Cramer-Wold device, it will suffice to find the asymptotic distribution of the following random variable
where with . Note that
so the sums in (6.27) are partial sums of a martingale triangular array with respect to , and we will verify the Lindeberg conditions for their convergence to normality.
By (6.27), and noting that , and , we have
Let and . Then .
For any ,
This verifies the Lindeberg conditions, and by Lemma 5.5, we have
Thus we complete the proof of Theorem 3.2. □
Proof of Theorem 4.1 Note that , . Similarly to the proof of Theorem 4.1(3) in Maller [55], by (6.12) and Theorem 3.2, we have
□
7 Empirical examples
In this section, we consider two empirical examples. The first one (β is a one-dimensional unknown parameter, namely ) is water flowing in the Kootenay River in January, which is taken from Hampel et al. [[6], p.310]. The second one (β is a 4-dimensional unknown parameter, namely ) is the consumption of spirits in the United Kingdom, which is taken from Fuller [48].
7.1 Water flowing in the Kootenay river
By the ordinary least squares method, we obtain that
and
where is a sequence of uncorrelated random variables.
By the Huber-Dutter (HD) method, we obtain the following model (see Hu [10]):
and
where is a sequence of uncorrelated random variables.
By the ML method (take and starting values for , , ; here we use pattern search algorithms), we obtain the following model:
and
where is a sequence of uncorrelated random variables.
By model (1.3), we obtain a general process satisfying the following SDE:
Since , our results outperform those of the HD and least squares methods in mean squared error (MSE).
By (4.7), we obtain . This shows that at the significance level . Thus we should apply the linear regression model (1.1) with Ornstein-Uhlenbeck errors rather than the Ornstein-Uhlenbeck process alone for these data.
This shows that our estimation method and testing approach are valid in the case of . The following example shows that the same holds for a multidimensional parameter β.
7.2 Consumption of spirits in the UK
We will use the data studied by Fuller [48]. The data pertain to the consumption of spirits in the United Kingdom from 1870 to 1983. The dependent variable is the annual per capita consumption of spirits in the United Kingdom. The explanatory variables and are per capita income and price of spirits, respectively, both deflated by a general price index. All data are in logarithms. The model suggested by Prest can be written as follows:
where 1869 is the origin for t, , , and is assumed to be a stationary time series.
Fuller [48] obtained the estimated generalized least squares equation
and
where is a sequence of uncorrelated random variables.
Take and starting values for
Using our method, we obtain the following models:
and
where is a sequence of uncorrelated random variables; or
Since , our results outperform those of Fuller [48] in MSE.
By (4.7), we obtain . This shows that at the significance level .
References
Wang XM, Zhou W: Bootstrap approximation to the distribution of M -estimates in a linear model. Acta Math. Sin. Engl. Ser. 2004,20(1):93-104. 10.1007/s10114-003-0246-6
Anatolyev S: Inference in regression models with many regressors. J. Econom. 2012, 170: 368-382. 10.1016/j.jeconom.2012.05.011
Bai ZD, Guo M: A paradox in least-squares estimation of linear regression models. Stat. Probab. Lett. 1999, 42: 167-174. 10.1016/S0167-7152(98)00205-3
Chen X: Consistency of LS estimates of multiple regression under a lower order moment condition. Sci. China Ser. A 1995,38(12):1420-1431.
Gil GR, Engela B, Norberto C, Ana C: Least squares estimation of linear regression models for convex compact random sets. Adv. Data Anal. Classif. 2007, 1: 67-81. 10.1007/s11634-006-0003-7
Hampel FR, Ronchetti EM, Rousseeuw PJ, Stahel WA: Robust Statistics. Wiley, New York; 1986.
Cui H: On asymptotics of t -type regression estimation in multiple linear model. Sci. China Ser. A 2004,47(4):628-639. 10.1360/03ys0020
Durbin L: A note on regression when there is extraneous information about one of the coefficients. J. Am. Stat. Assoc. 1953, 48: 799-808. 10.1080/01621459.1953.10501201
Li Y, Yang H: A new stochastic mixed ridge estimator in linear regression model. Stat. Pap. 2010,51(2):315-323. 10.1007/s00362-008-0169-5
Hu HC: Asymptotic normality of Huber-Dutter estimators in a linear model with processes. J. Stat. Plan. Inference 2013,143(3):548-562. 10.1016/j.jspi.2012.08.012
Wu WB: M -Estimation of linear models with dependent errors. Ann. Stat. 2007,35(2):495-521. 10.1214/009053606000001406
Fox R, Taqqu MS: Large sample properties of parameter estimates for strongly dependent stationary Gaussian time series. Ann. Stat. 1986, 14: 517-532. 10.1214/aos/1176349936
Giraitis L, Surgailis D: A central limit theorem for quadratic forms in strongly dependent linear variables and its application to asymptotic normality of Whittle’s estimate. Probab. Theory Relat. Fields 1990, 86: 87-104. 10.1007/BF01207515
Koul HL, Surgailis D: Asymptotic normality of the Whittle estimator in linear regression models with long memory errors. Stat. Inference Stoch. Process. 2000, 3: 129-147. 10.1023/A:1009999607588
Shiohama T, Taniguchi M: Sequential estimation for time series regression models. J. Stat. Plan. Inference 2004, 123: 295-312. 10.1016/S0378-3758(03)00153-8
Fan J: Moderate deviations for M -estimators in linear models with ϕ -mixing errors. Acta Math. Sin. Engl. Ser. 2012,28(6):1275-1294. 10.1007/s10114-011-9188-6
Ornstein LS, Uhlenbeck GE: On the theory of Brownian motion. Phys. Rev. 1930, 36: 823-841. 10.1103/PhysRev.36.823
Janczura J, Orzel S, Wylomanska A: Subordinated α -stable Ornstein-Uhlenbeck process as a tool for financial data description. Physica A 2011, 390: 4379-4387. 10.1016/j.physa.2011.07.007
Debbasch F, Mallick K, Rivet JP: Relativistic Ornstein-Uhlenbeck process. J. Stat. Phys. 1997, 88: 945-966.
Gillespie D: Exact numerical simulation of the Ornstein-Uhlenbeck process and its integral. Phys. Rev. E 1996,54(2):2084-2091. 10.1103/PhysRevE.54.2084
Ditlevsen S, Lansky P: Estimation of the input parameters in the Ornstein-Uhlenbeck neuronal model. Phys. Rev. E 2005,71(1): Article ID 011907.
Garbaczewski P, Olkiewicz R: Ornstein-Uhlenbeck-Cauchy process. J. Math. Phys. 2000,41(10):6843-6860. 10.1063/1.1290054
Plastino AR, Plastino A: Non-extensive statistical mechanics and generalized Fokker-Planck equation. Physica A 1995, 222: 347-354. 10.1016/0378-4371(95)00211-1
Fasen V: Statistical estimation of multivariate Ornstein-Uhlenbeck processes and applications to co-integration. J. Econom. 2012. 10.1016/j.jeconom.2012.08.019
Yu J: Bias in the estimation of the mean reversion parameter in continuous time models. J. Econom. 2012, 169: 114-122. 10.1016/j.jeconom.2012.01.004
Geman H: Commodities and Commodity Derivatives. Wiley, Chichester; 2005.
Zhang B, Grzelak LA, Oosterlee CM: Efficient pricing of commodity options with early-exercise under the Ornstein-Uhlenbeck process. Appl. Numer. Math. 2012, 62: 91-111. 10.1016/j.apnum.2011.10.005
Rieder S: Robust parameter estimation for the Ornstein-Uhlenbeck process. Stat. Methods Appl. 2012. 10.1007/s10260-012-0195-2
Iacus S: Simulation and Inference for Stochastic Differential Equations. Springer, New York; 2008.
Bishwal JPN: Uniform rate of weak convergence of the minimum contrast estimator in the Ornstein-Uhlenbeck process. Methodol. Comput. Appl. Probab. 2010, 12: 323-334. 10.1007/s11009-008-9099-x
Shimizu Y: Local asymptotic mixed normality for discretely observed non-recurrent Ornstein-Uhlenbeck processes. Ann. Inst. Stat. Math. 2012, 64: 193-211. 10.1007/s10463-010-0307-4
Zhang S, Zhang X: A least squares estimator for discretely observed Ornstein-Uhlenbeck processes driven by symmetric α -stable motions. Ann. Inst. Stat. Math. 2012. 10.1007/s10463-012-0362-0
Chronopoulou A, Viens FG: Estimation and pricing under long-memory stochastic volatility. Ann. Finance 2012, 8: 379-403. 10.1007/s10436-010-0156-4
Lin H, Wang J: Successful couplings for a class of stochastic differential equations driven by Levy processes. Sci. China Math. 2012,55(8):1735-1748. 10.1007/s11425-012-4387-x
Xiao W, Zhang W, Zhang X: Minimum contrast estimator for fractional Ornstein-Uhlenbeck processes. Sci. China Math. 2012,55(7):1497-1511. 10.1007/s11425-012-4386-y
Magdalinos T: Mildly explosive autoregression under weak and strong dependence. J. Econom. 2012, 169: 179-187. 10.1016/j.jeconom.2012.01.024
Andrews DWK, Guggenberger P: Asymptotics for LS, GLS, and feasible GLS statistics in an model with conditional heteroskedasticity. J. Econom. 2012, 169: 196-210. 10.1016/j.jeconom.2012.01.017
Fan J, Yao Q: Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York; 2005.
Berk KN: Consistent autoregressive spectral estimates. Ann. Stat. 1974, 2: 489-502. 10.1214/aos/1176342709
Goldenshluger A, Zeevi A: Non-asymptotic bounds for autoregressive time-series modeling. Ann. Stat. 2001, 29: 417-444. 10.1214/aos/1009210547
Liebscher E: Strong convergence of estimators in nonlinear autoregressive models. J. Multivar. Anal. 2003, 84: 247-261. 10.1016/S0047-259X(02)00022-2
Baran S, Pap G, Zuijlen MV: Asymptotic inference for unit roots in spatial triangular autoregression. Acta Appl. Math. 2007, 96: 17-42. 10.1007/s10440-007-9097-y
Distaso W: Testing for unit root processes in random coefficient autoregressive models. J. Econom. 2008, 142: 581-609. 10.1016/j.jeconom.2007.09.002
Harvill JL, Ray BK: Functional coefficient autoregressive models for vector time series. Comput. Stat. Data Anal. 2008, 50: 3547-3566.
Dehling H, Franke B, Kott T: Drift estimation for a periodic mean reversion process. Stat. Inference Stoch. Process. 2010, 13: 175-192. 10.1007/s11203-010-9045-8
Maller RA: Asymptotics of regressions with stationary and nonstationary residuals. Stoch. Process. Appl. 2003, 105: 33-67. 10.1016/S0304-4149(02)00263-6
Pere P: Adjusted estimates and Wald statistics for the model with constant. J. Econom. 2000, 98: 335-363. 10.1016/S0304-4076(00)00023-3
Fuller WA: Introduction to Statistical Time Series. 2nd edition. Wiley, New York; 1996.
Chambers MJ: Jackknife estimation of stationary autoregressive models. J. Econom. 2012. 10.1016/j.jeconom.2012.09.003
Hamilton JD: Time Series Analysis. Princeton University Press, Princeton; 1994.
Brockwell PJ, Davis RA: Time Series: Theory and Methods. Springer, New York; 1987.
Abadir KM, Lucas A: A comparison of minimum MSE and maximum power for the nearly integrated non-Gaussian model. J. Econom. 2004, 119: 45-71. 10.1016/S0304-4076(03)00155-6
Fan JQ, Jiang JC: Nonparametric inference with generalized likelihood ratio tests. Test 2007, 16: 409-444. 10.1007/s11749-007-0080-8
Rao CR: Linear Statistical Inference and Its Applications. Wiley, New York; 1973.
Maller RA: Quadratic negligibility and the asymptotic normality of operator normed sums. J. Multivar. Anal. 1993, 44: 191-219. 10.1006/jmva.1993.1011
Hall P, Heyde CC: Martingale Limit Theory and Its Application. Academic Press, New York; 1980.
Acknowledgements
This work was supported by the Natural Science Foundation of China (No. 41374017), and Science and Technology Research Projects of the Educational Department of Hubei Province (No. Q20142501).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Cite this article
Hu, H., Pan, X. & Xu, L. Maximum likelihood estimators in linear regression models with Ornstein-Uhlenbeck process. J Inequal Appl 2014, 301 (2014). https://doi.org/10.1186/1029-242X-2014-301