Similar Documents
20 similar documents found.
1.
Deconvolution is a useful statistical technique for recovering an unknown density in the presence of measurement error. Typically, the method hinges on stringent assumptions about the nature of the measurement error, more specifically, that the distribution is entirely known. We relax this assumption in the context of a regression error component model and develop an estimator for the unknown density. We show semi-uniform consistency of the estimator and provide an application to the stochastic frontier model.
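The paper's estimator for an unknown error law is not reproduced here, but the classical deconvolution setting it relaxes is easy to illustrate. Below is a minimal numpy sketch assuming the measurement error is a fully known Laplace distribution (the case the paper moves away from); the closed-form deconvoluting kernel for a Gaussian kernel and Laplace error is standard, and all names and tuning constants are illustrative.

```python
import numpy as np

def deconv_kde_laplace(w, grid, h, b):
    """Classical deconvoluting kernel density estimate of f_X from W = X + U,
    assuming the measurement error U ~ Laplace(0, b) is fully known.
    With a Gaussian kernel and Laplace characteristic function 1/(1 + b^2 t^2),
    the deconvoluting kernel has the closed form phi(z) * (1 + (b/h)^2 * (1 - z^2))."""
    z = (grid[:, None] - w[None, :]) / h               # (grid, n) standardized distances
    phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    K_deconv = phi * (1.0 + (b / h) ** 2 * (1.0 - z**2))
    return K_deconv.mean(axis=1) / h

rng = np.random.default_rng(0)
n = 2000
x = rng.gamma(shape=4.0, scale=1.0, size=n)            # latent variable of interest
u = rng.laplace(loc=0.0, scale=0.4, size=n)            # known measurement error
w = x + u                                              # observed, contaminated data

grid = np.linspace(0, 12, 200)
f_hat = deconv_kde_laplace(w, grid, h=0.6, b=0.4)
print(f_hat[:5])
```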

2.
Summary A modified form of the Kuiper statistic V_n is developed for testing the composite hypothesis that a sample of size n comes from a normal population with unspecified mean and variance. Its distribution is derived using Monte Carlo methods. Power comparison with the adjusted Kuiper test proposed by Louter and Koerts [6] indicates that our test is superior with respect to certain alternatives.
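The paper's modified statistic is not reproduced here; the sketch below only shows how the plain Kuiper statistic V_n is computed against a normal law whose mean and variance are estimated from the sample, which is the composite-hypothesis setting being tested. Function and variable names are ours.

```python
import numpy as np
from scipy import stats

def kuiper_normal(x):
    """Kuiper statistic V_n = D+ + D- against a normal law with mean and
    variance estimated from the sample (so standard Kuiper tables do not apply)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    u = stats.norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)
    d_minus = np.max(u - (i - 1) / n)
    return d_plus + d_minus

rng = np.random.default_rng(1)
print(kuiper_normal(rng.normal(size=100)))        # sample from the null
print(kuiper_normal(rng.exponential(size=100)))   # skewed alternative: larger V_n
```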

3.
Summary If one wants to test the hypothesis as to whether a set of observations comes from a completely specified continuous distribution or not, one can use the Kuiper test. But if one or more parameters have to be estimated, the standard tables for the Kuiper test are no longer valid. This paper presents a table to use with the Kuiper statistic for testing whether a sample comes from a normal distribution when the mean and variance are to be estimated from the sample. The critical points are obtained by means of Monte-Carlo calculation; the power of the test is estimated by simulation; and the results of the powers for several alternative distributions are compared with the estimated powers of the Kolmogorov-Smirnov test.
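A hedged sketch of how such a table of critical points can be produced by Monte Carlo: simulate samples under the null, estimate the mean and variance each time, compute the Kuiper statistic, and take upper-tail quantiles. The replication count and levels below are arbitrary choices, not those of the paper.

```python
import numpy as np
from scipy import stats

def kuiper_v(x):
    # Kuiper V_n against a normal law with both parameters estimated from x.
    x = np.sort(x); n = x.size
    u = stats.norm.cdf(x, x.mean(), x.std(ddof=1))
    i = np.arange(1, n + 1)
    return np.max(i / n - u) + np.max(u - (i - 1) / n)

def mc_critical_points(n, reps=20000, levels=(0.10, 0.05, 0.01), seed=0):
    """Simulate the null distribution of V_n (location and scale estimated) and
    return upper-tail critical points; the null law does not depend on the true
    mean and variance, so standard normal draws suffice."""
    rng = np.random.default_rng(seed)
    v = np.array([kuiper_v(rng.standard_normal(n)) for _ in range(reps)])
    return {a: np.quantile(v, 1 - a) for a in levels}

print(mc_critical_points(n=50))
```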

4.
We consider the problem of estimating a probability density function based on data that are corrupted by noise from a uniform distribution. The (nonparametric) maximum likelihood estimator for the corresponding distribution function is well defined. For the density function this is not the case. We study two nonparametric estimators for this density. The first is a type of kernel density estimate based on the empirical distribution function of the observable data. The second is a kernel density estimate based on the MLE of the distribution function of the unobservable (uncorrupted) data.
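Neither of the paper's two estimators is reproduced here. The sketch below is a simple inversion-based kernel estimate exploiting the Uniform(0,1) noise structure, assuming the unobserved variable is positive: from f_Y(y) = F_X(y) − F_X(y−1) one gets f_X(x) = Σ_j f_Y′(x − j), and f_Y′ is replaced by the derivative of a Gaussian kernel density estimate of the observed data. All names and tuning constants are illustrative.

```python
import numpy as np

def density_uniform_deconv(y, grid, h):
    """Sketch of a kernel-type estimate of f_X when Y = X + U, U ~ Uniform(0,1),
    X > 0, using the inversion f_X(x) = sum_j f_Y'(x - j), j = 0, 1, ..., floor(x),
    with f_Y' estimated by the derivative of a Gaussian kernel density estimate."""
    y = np.asarray(y, float)
    n = y.size
    est = np.zeros_like(grid)
    for k, x in enumerate(grid):
        total = 0.0
        j = 0
        while x - j > 0:          # terms vanish once x - j leaves the support of Y
            z = (x - j - y) / h
            # derivative of the Gaussian KDE: (1/(n h^2)) * sum_i K'(z_i), K'(u) = -u * phi(u)
            total += np.sum(-z * np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)) / (n * h**2)
            j += 1
        est[k] = total
    return est

rng = np.random.default_rng(2)
x = rng.gamma(3.0, 1.0, size=5000)            # unobserved X > 0
y = x + rng.uniform(0.0, 1.0, size=5000)      # observed, uniformly corrupted data
grid = np.linspace(0.1, 10, 100)
print(density_uniform_deconv(y, grid, h=0.4)[:5])
```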

5.
6.
In a linear regression model the ordinary least squares (OLS) variance estimator (S²) converges in probability to E(S²) even when the errors are autocorrelated. Of interest, however, is the rate of convergence. In this paper we shed some light on this question for the case of a linear trend model. In particular the relation between the rate of convergence and the correlation property of the errors is explored. It is shown that the retardation of the rate of convergence is not appreciable if the correlation is moderate, but it can be severe for extreme correlations.
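A small simulation sketch of this point, with illustrative parameter values: S² from a linear trend regression with AR(1) errors approaches the error variance in every case, but its sampling spread (and hence the effective rate of convergence) deteriorates as the autocorrelation approaches one.

```python
import numpy as np

def s2_linear_trend(n, rho, sigma=1.0, rng=None):
    """OLS variance estimator S^2 from a linear trend regression y_t = a + b*t + u_t
    with AR(1) errors u_t = rho*u_{t-1} + e_t."""
    rng = rng or np.random.default_rng()
    e = rng.normal(scale=sigma, size=n)
    u = np.empty(n)
    u[0] = e[0] / np.sqrt(1 - rho**2)          # start from the stationary distribution
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]
    t = np.arange(1, n + 1)
    y = 1.0 + 0.5 * t + u
    X = np.column_stack([np.ones(n), t])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return resid @ resid / (n - 2)

# Monte Carlo: S^2 should approach var(u_t) = sigma^2 / (1 - rho^2) in every case,
# but its spread grows sharply as |rho| approaches 1.
rng = np.random.default_rng(3)
for rho in (0.3, 0.9, 0.99):
    draws = [s2_linear_trend(400, rho, rng=rng) for _ in range(500)]
    print(rho, np.mean(draws), np.std(draws), 1.0 / (1 - rho**2))
```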

7.
Comparing actual results against budget or plan figures, analysing the variances, and tracing them to their real causes is an important control mechanism and a key function of budgeting. However, a static variance analysis that uses plan data as the benchmark has limitations, because it ignores changes in the operating environment. Adjusting the benchmark to actual conditions before performing the variance analysis makes the analysis more useful; the adjusted benchmark is often referred to as authorized cost/profit. Because the notion of authorization can be understood in different ways, this paper draws on the perspective of Resource Consumption Accounting (RCA) and numerical examples to introduce the concept of consumption rate variance, promoting a deeper understanding and analysis of the reasons behind variances.
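The paper's RCA-based definition of consumption rate variance is not reproduced here. The sketch below is only a generic flexible-budget style decomposition with made-up numbers, separating the part of the cost variance driven by the change in output volume from the part driven by the change in the resource consumption rate.

```python
# Hypothetical numbers for a single resource pool; not the paper's example.
planned_output = 10_000          # units budgeted
actual_output = 11_500           # units actually produced
planned_rate = 0.80              # resource units consumed per unit of output (plan)
actual_rate = 0.85               # resource units consumed per unit of output (actual)
price_per_resource_unit = 5.0    # held constant to isolate consumption effects

static_budget_cost = planned_output * planned_rate * price_per_resource_unit
flexible_budget_cost = actual_output * planned_rate * price_per_resource_unit
actual_cost = actual_output * actual_rate * price_per_resource_unit

volume_variance = flexible_budget_cost - static_budget_cost      # driven by output change
consumption_rate_variance = actual_cost - flexible_budget_cost   # driven by rate change

print("total variance (static vs actual):", actual_cost - static_budget_cost)
print("volume variance                  :", volume_variance)
print("consumption rate variance        :", consumption_rate_variance)
```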

8.
9.
In this paper we describe methods and evaluate programs for linear regression by maximum likelihood when the errors have a heavy-tailed stable distribution. The asymptotic Fisher information matrix for both the regression coefficients and the error distribution parameters is derived, giving large-sample confidence intervals for all parameters. Simulated examples are shown where the errors are stably distributed and also where the errors are heavy tailed but not stable, as well as a real example using financial data. The results are then extended to nonlinear models and to non-homogeneous error terms.
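A minimal sketch of regression by maximum likelihood with symmetric stable errors, using scipy's levy_stable distribution; the skewness is fixed at zero and the sample is kept small because numerical evaluation of the stable density is slow. This is not the authors' implementation, and the Fisher-information-based confidence intervals are not computed here.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
n = 150
x = rng.normal(size=n)
beta_true = np.array([1.0, 2.0])                  # intercept, slope
errors = stats.levy_stable.rvs(alpha=1.7, beta=0.0, scale=0.5, size=n, random_state=rng)
y = beta_true[0] + beta_true[1] * x + errors

def negloglik(theta):
    b0, b1, alpha, log_scale = theta
    if not (0.5 < alpha <= 2.0):                  # keep the tail index in a sensible range
        return 1e10
    resid = y - b0 - b1 * x
    # density evaluation for levy_stable is numerical and fairly slow
    return -np.sum(stats.levy_stable.logpdf(resid, alpha, 0.0, scale=np.exp(log_scale)))

start = np.array([0.0, 0.0, 1.8, 0.0])
fit = optimize.minimize(negloglik, start, method="Nelder-Mead",
                        options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 1500})
print("coefficients:", fit.x[:2], "alpha:", fit.x[2], "scale:", np.exp(fit.x[3]))
```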

10.
Reduced rank regression (RRR) models with time varying heterogeneity are considered. Standard information criteria for selecting cointegrating rank are shown to be weakly consistent in semiparametric RRR models in which the errors have general nonparametric short memory components and shifting volatility, provided the penalty coefficient satisfies Cn → ∞ and Cn/n → 0 as n → ∞. The AIC criterion is inconsistent and its limit distribution is given. The results extend those in Cheng and Phillips (2009a) and are useful in empirical work where structural breaks or time evolution in the error variances is present. An empirical application to exchange rate data is provided.
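A rough numpy sketch of information-criterion rank selection in a reduced rank regression, assuming the simplest VECM with no lagged differences or deterministic terms and a BIC-type penalty Cn = log n. The criterion form IC(r) = log|Σ̂(r)| + Cn·r(2m − r)/n with the Johansen eigenvalues is used as an approximation of the kind of procedure studied in the paper; the refinements under shifting volatility are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def select_coint_rank(y, Cn=None):
    """BIC-type cointegrating rank selection in a simple VECM dy_t = Pi*y_{t-1} + e_t,
    using the reduced rank regression (Johansen) eigenvalues and the identity
    log|Sigma_hat(r)| = log|S00| + sum_{i<=r} log(1 - lambda_i)."""
    dy = np.diff(y, axis=0)
    ylag = y[:-1]
    n, m = dy.shape
    Cn = np.log(n) if Cn is None else Cn
    S00 = dy.T @ dy / n
    S11 = ylag.T @ ylag / n
    S01 = dy.T @ ylag / n
    A = S01.T @ np.linalg.solve(S00, S01)                 # S10 S00^{-1} S01
    lam = np.sort(eigh(A, S11, eigvals_only=True))[::-1]  # RRR eigenvalues in [0, 1)
    _, logdet_S00 = np.linalg.slogdet(S00)
    ic = [logdet_S00 + np.sum(np.log(1 - lam[:r])) + Cn * r * (2 * m - r) / n
          for r in range(m + 1)]
    return int(np.argmin(ic)), ic

# Two I(1) series sharing one common stochastic trend => cointegrating rank 1.
rng = np.random.default_rng(5)
T = 400
trend = np.cumsum(rng.normal(size=T))
y = np.column_stack([trend + rng.normal(size=T), 0.5 * trend + rng.normal(size=T)])
rank, ic = select_coint_rank(y)
print("selected rank:", rank)
```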

11.
Forecasting economic time series with unconditional time-varying variance
The classical forecasting theory of stationary time series exploits the second-order structure (variance, autocovariance, and spectral density) of an observed process in order to construct some prediction intervals. However, some economic time series show a time-varying unconditional second-order structure. This article focuses on a simple and meaningful model allowing this nonstationary behaviour. We show that this model satisfactorily explains the nonstationary behaviour of several economic data sets, among which are the U.S. stock returns and exchange rates. The question of how to forecast these processes is addressed and evaluated on the data sets.
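The paper's specific model is not reproduced. The sketch below only illustrates the general idea of a time-varying unconditional second-order structure: a kernel smooth of squared demeaned returns in rescaled time gives a slowly varying volatility estimate, from which a crude prediction interval can be formed. Bandwidth and simulated data are illustrative.

```python
import numpy as np

def rolling_sigma(returns, t_grid, bandwidth):
    """Nonparametric estimate of a smoothly time-varying unconditional standard
    deviation: a Nadaraya-Watson smooth of squared demeaned returns over rescaled
    time t/n.  A sketch, not the specific model of the paper."""
    r = returns - returns.mean()
    n = r.size
    s = np.arange(1, n + 1) / n
    out = np.empty(len(t_grid))
    for k, t in enumerate(t_grid):
        w = np.exp(-0.5 * ((s - t) / bandwidth) ** 2)
        out[k] = np.sqrt(np.sum(w * r**2) / np.sum(w))
    return out

rng = np.random.default_rng(6)
n = 1000
true_sigma = 0.5 + 1.5 * (np.arange(1, n + 1) / n)     # volatility trends upward
returns = rng.normal(scale=true_sigma)
sig_hat = rolling_sigma(returns, t_grid=[0.25, 0.5, 0.75, 1.0], bandwidth=0.1)
print(sig_hat)                       # should roughly track 0.875, 1.25, 1.625, 2.0
# approximate 95% one-step-ahead interval half-width for a zero-mean return:
print(1.96 * sig_hat[-1])
```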

12.
Consider the model

A(L)x_t = B(L)y_t + C(L)z_t = u_t,   t = 1, …, T,   (1)

where A(L) = (B(L) : C(L)) is a matrix of polynomials in the lag operator, so that L^r x_t = x_{t−r}, y_t is a vector of n endogenous variables, B(L) = Σ_{s=0}^{k} B_s L^s with B_0 = I_n and the remaining B_s n × n square matrices, C(L) = Σ_{s=0}^{k} C_s L^s, and each C_s is n × m.

Suppose that u_t satisfies R(L)u_t = e_t, where R(L) = Σ_{s=0}^{r} R_s L^s, R_0 = I_n, and each R_s is an n × n square matrix; e_t may be white noise, or generated by a vector moving average stochastic process.

Now writing Ψ(L) = R(L)A(L), it is assumed that, ignoring the implicit restrictions which follow from eq. (1), Ψ(L) can be consistently estimated, so that if the equation Ψ(L)x_t = e_t has a moving average error stochastic process, suitable conditions [see E.J. Hannan] for the identification of the unconstrained model are satisfied, and that the appropriate conditions (lack of multicollinearity) on the data second-moment matrices discussed by Hannan are also satisfied. The essential conditions for identification of A(L) and R(L) can then be considered by requiring that, for the true Ψ(L), eq. (1) has a unique solution for A(L) and R(L).

There are three types of lack of identification to be distinguished. In the first there are a finite number of alternative factorisations. Apart from a factorisation condition which will be satisfied with probability one, a necessary and sufficient condition for lack of identification is that A(L) has a latent root λ in the sense that β′A(λ) = 0 for some non-zero vector β.

The second concept of lack of identification corresponds to the Fisher conditions for local identifiability on the derivatives of the constraints. It is shown that a necessary and sufficient condition for the model to be locally unidentified in this sense is that R(L) and A(L) have a common latent root, i.e., that R(λ)δ = 0 and β′A(λ) = 0 for some vectors δ and β.

Firstly it is shown that only if further conditions are satisfied will this lead to local unidentifiability in the sense that there are solutions of the equation Ψ(z) = R(z)A(z) in any neighbourhood of the true values.
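A toy numerical check of the latent-root conditions in the scalar (n = 1) case, where β′A(λ) = 0 and R(λ)δ = 0 reduce to A(λ) = 0 and R(λ) = 0: two hypothetical lag polynomials sharing the root λ = 2 signal the local non-identification of the factorisation Ψ(L) = R(L)A(L). The coefficients are made up for illustration.

```python
import numpy as np

def latent_roots(coeffs_low_to_high):
    """Roots of a scalar lag polynomial a_0 + a_1*z + ... + a_k*z^k
    (numpy.roots expects highest-degree coefficients first, hence the reversal)."""
    return np.roots(coeffs_low_to_high[::-1])

def share_root(p, q, tol=1e-8):
    rp, rq = latent_roots(p), latent_roots(q)
    return any(np.min(np.abs(r - rq)) < tol for r in rp)

# Hypothetical scalar example: A(z) = 1 - 1.2 z + 0.35 z^2 has roots {2, 10/7};
# R(z) = 1 - 0.5 z has root {2}.  The shared root z = 2 indicates that the
# factorisation Psi(L) = R(L) A(L) is locally unidentified in this toy case.
A = [1.0, -1.2, 0.35]
R = [1.0, -0.5]
print(latent_roots(A), latent_roots(R), share_root(A, R))
```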

13.
The paper develops a novel testing procedure for hypotheses on deterministic trends in a multivariate trend stationary model. The trends are estimated by the OLS estimator and the long run variance (LRV) matrix is estimated by a series type estimator with carefully selected basis functions. Regardless of whether the number of basis functions K is fixed or grows with the sample size, the Wald statistic converges to a standard distribution. It is shown that critical values from the fixed-K asymptotics are second-order correct under the large-K asymptotics. A new practical approach is proposed to select K that addresses the central concern of hypothesis testing: the selected smoothing parameter is testing-optimal in that it minimizes the type II error while controlling for the type I error. Simulations indicate that the new test is as accurate in size as the nonstandard test of Vogelsang and Franses (2005) and as powerful as the corresponding Wald test based on the large-K asymptotics. The new test therefore combines the advantages of the nonstandard test and the standard Wald test while avoiding their main disadvantages (power loss and size distortion, respectively).
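The testing-optimal choice of K is not reproduced here. The sketch below only implements a generic series-type LRV estimator with an orthonormal Fourier basis, Ω̂ = (2K)⁻¹ Σ_k Λ_k Λ_k′ with Λ_k = n^{−1/2} Σ_t Φ_k(t/n) û_t, and checks it on AR(1) residuals whose long run variance is known; the paper's carefully selected basis functions and the trend regression step are not included.

```python
import numpy as np

def series_lrv(u, K):
    """Series-type long run variance estimate for a residual series u (n x p or n,),
    using 2K orthonormal Fourier basis functions on [0, 1].  A generic sketch."""
    u = np.atleast_2d(u.T).T                     # ensure shape (n, p)
    n = u.shape[0]
    s = np.arange(1, n + 1) / n
    omega = np.zeros((u.shape[1], u.shape[1]))
    for k in range(1, K + 1):
        for phi in (np.sqrt(2) * np.cos(2 * np.pi * k * s),
                    np.sqrt(2) * np.sin(2 * np.pi * k * s)):
            lam = (phi @ u) / np.sqrt(n)         # projection coefficient Lambda_k
            omega += np.outer(lam, lam)
    return omega / (2 * K)                       # average over the 2K basis functions

# Example: AR(1) residuals whose true LRV is sigma^2 / (1 - rho)^2 = 4.
rng = np.random.default_rng(7)
n, rho = 2000, 0.5
e = rng.normal(size=n)
u = np.empty(n); u[0] = e[0]
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]
print(series_lrv(u, K=12), 1.0 / (1 - rho) ** 2)
```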

14.
The analysis of unbalanced linear models with variance components
Statistical inference for fixed effects, random effects and components of variance in an unbalanced linear model with variance components will be discussed. Variance components will be estimated by Restricted Maximum Likelihood. Iterative procedures for computing the estimates, such as Fisher scoring and the EM-algorithm, are described.
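A short illustration of REML estimation of variance components in an unbalanced random-intercept model using statsmodels' MixedLM, which maximizes the restricted likelihood iteratively (in the spirit of the Fisher scoring and EM procedures mentioned above). The simulated data and group structure are illustrative, not from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate an unbalanced random-intercept model: y_ij = b0 + b1*x_ij + a_i + e_ij.
rng = np.random.default_rng(8)
groups, x, y = [], [], []
for i in range(30):
    n_i = rng.integers(2, 9)                    # unbalanced group sizes
    a_i = rng.normal(scale=1.5)                 # random intercept, variance 2.25
    xi = rng.normal(size=n_i)
    yi = 1.0 + 0.5 * xi + a_i + rng.normal(scale=1.0, size=n_i)
    groups += [i] * n_i; x += list(xi); y += list(yi)

data = pd.DataFrame({"y": y, "x": x, "g": groups})

# REML fit of the random-intercept variance and the residual variance.
fit = smf.mixedlm("y ~ x", data, groups=data["g"]).fit(reml=True)
print(fit.summary())
print("random intercept variance:", float(fit.cov_re.iloc[0, 0]))
print("residual variance:", fit.scale)
```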

15.
Dr. M. Deistler, Metrika, 1975, 22(1): 13-25
The paper consists of two main parts. In the first part we derive the solution of systems of linear stochastic difference equations by means of the z-transform. In the second part this z-transform is used to treat the problem of identification of linear econometric systems (the term econometric is used to stress the special aspects of the identification problem dealt with in econometrics). It is shown that, under suitable restrictions, observationally equivalent structures are related by unimodular matrices. Using this result, we state (rank) conditions which ensure that the unimodular matrices are constant, so that the classical econometric identification theorems can be applied. These conditions are given for stationary errors in the general case as well as in the MA, AR and ARMA cases.

16.
In the presence of heteroskedastic disturbances, the MLE for SAR (spatial autoregressive) models that ignores the heteroskedasticity is generally inconsistent. The 2SLS estimates can have large variances and biases in cases where the regressors do not have strong effects. In contrast, GMM estimators obtained from certain moment conditions can be robust, and asymptotically valid inferences can be drawn with consistently estimated covariance matrices. Efficiency can be improved by constructing the optimally weighted estimator.

17.
New strategies for the implementation of maximum likelihood estimation of nonlinear time series models are suggested. They make use of recent work on the EM algorithm and iterative simulation techniques. The estimation procedures are applied to the problem of fitting stochastic variance models to exchange rate data.
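The EM and iterative-simulation machinery of the paper is not reproduced. As a quick sanity check on a simulated stochastic variance model, the linearization log y_t² = h_t + log ε_t² implies that the ratio of two adjacent autocorrelations of log y_t² estimates the volatility persistence φ; the sketch below uses only that moment identity, with made-up parameter values.

```python
import numpy as np

rng = np.random.default_rng(9)
n, mu, phi, sigma_eta = 20000, -1.0, 0.95, 0.3

# Simulate a basic stochastic variance model: y_t = exp(h_t/2)*eps_t,
# h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t, eps_t and eta_t standard normal.
h = np.empty(n)
h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.normal()
for t in range(1, n):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=n)

# Linearize: z_t = log y_t^2 = h_t + log eps_t^2, with var(log eps_t^2) = pi^2/2.
# Then corr(z_t, z_{t+k}) = phi^k * s_h^2 / (s_h^2 + pi^2/2), so the ratio of two
# adjacent autocorrelations gives a crude estimate of phi (no EM involved).
z = np.log(y**2)
zc = z - z.mean()
acf = lambda k: np.mean(zc[:-k] * zc[k:]) / np.mean(zc**2)
phi_hat = acf(2) / acf(1)
print("phi estimate from autocorrelation ratio:", phi_hat, "(true 0.95)")
```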

18.
Classical estimation techniques for linear models either are inconsistent, or perform rather poorly, under α-stable error densities; most of them are not even rate-optimal. In this paper, we propose an original one-step R-estimation method and investigate its asymptotic performance under stable densities. Contrary to traditional least squares, the proposed R-estimators remain root-n consistent (the optimal rate) under the whole family of stable distributions, irrespective of their asymmetry and tail index. While parametric stable-likelihood estimation, due to the absence of a closed form for stable densities, is quite cumbersome, our method allows us to construct estimators reaching the parametric efficiency bounds associated with any prescribed values (α0, b0) of the tail index α and skewness parameter b, while preserving root-n consistency under any (α, b) as well as under the usual light-tailed densities. The method furthermore avoids all forms of multidimensional argmin computation. Simulations confirm its excellent finite-sample performance.
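The one-step construction of the paper is not reproduced. The sketch below shows a classical R-estimator for a simple regression: minimize Jaeckel's rank dispersion with Wilcoxon scores over the slope, then take the median residual as the intercept. Under Cauchy (α = 1 stable) errors this remains reasonable, while least squares does not; the data and names are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def jaeckel_dispersion(beta, x, y):
    """Jaeckel's rank dispersion with Wilcoxon scores; minimizing it over the slope
    gives a classical R-estimator (not the one-step construction of the paper)."""
    e = y - beta * x
    ranks = stats.rankdata(e)
    n = e.size
    scores = np.sqrt(12) * (ranks / (n + 1) - 0.5)
    return np.sum(scores * e)

rng = np.random.default_rng(10)
n = 500
x = rng.normal(size=n)
# Heavy-tailed (Cauchy = 1-stable) errors: least squares behaves badly here.
y = 1.0 + 2.0 * x + stats.cauchy.rvs(size=n, random_state=rng)

res = optimize.minimize_scalar(jaeckel_dispersion, args=(x, y),
                               bounds=(-10, 10), method="bounded")
beta_r = res.x
alpha_r = np.median(y - beta_r * x)          # intercept from the residual median
beta_ols = np.polyfit(x, y, 1)[0]
print("R-estimate of slope:", beta_r, " OLS slope:", beta_ols)
```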

19.
Lei He, Rong-Xian Yue, Metrika, 2017, 80(6-8): 717-732
In this paper, we consider the R-optimal design problem for multi-factor regression models with heteroscedastic errors. It is shown that an R-optimal design for the heteroscedastic Kronecker product model is given by the product of the R-optimal designs for the marginal one-factor models. However, R-optimal designs for the additive models can be constructed from R-optimal designs for the one-factor models only if sufficient conditions are satisfied. Several examples are presented to illustrate and check optimal designs based on the R-optimality criterion.

20.
In this paper, we consider the problem of estimating the individual weights of three objects. For the estimation we use a chemical balance weighing design and the criterion of D-optimality. We assume that the error terms ε_i, i = 1, 2, …, n, follow a first-order autoregressive process. This assumption implies that the covariance matrix of the errors depends on the known parameter ρ. We present the chemical balance weighing design matrix X̃ and prove that this design is D-optimal in certain classes of designs for ρ ∈ [0, 1), and that it is also D-optimal in the class of designs with design matrix X ∈ M_{n×3}(±1) for some ρ ≥ 0. We also prove necessary and sufficient conditions under which the design is D-optimal in the class M_{n×3}(±1) if ρ ∈ [0, 1/(n−2)). We also present the matrix of the D-optimal factorial design with three two-level factors.
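A brute-force numerical check, not the paper's analytical construction: for a small n, enumerate all ±1 design matrices in M_{n×3}(±1) and pick the one maximizing det(X′C⁻¹X), where C is the AR(1) error covariance with known ρ. The values n = 4 and ρ = 0.3 are arbitrary choices for illustration.

```python
import numpy as np
from itertools import product

def ar1_covariance(n, rho):
    """Covariance matrix of a first-order autoregressive error process
    (innovation variance scaled out): C[i, j] = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def d_criterion(X, C_inv):
    return np.linalg.det(X.T @ C_inv @ X)

def best_design(n=4, rho=0.3):
    """Brute-force search for the D-optimal n x 3 chemical balance weighing design
    with entries +-1 under AR(1) correlated errors; feasible only for small n and
    meant as a numerical check rather than the paper's construction."""
    C_inv = np.linalg.inv(ar1_covariance(n, rho))
    best_val, best_X = -np.inf, None
    for entries in product((-1, 1), repeat=3 * n):
        X = np.array(entries, dtype=float).reshape(n, 3)
        val = d_criterion(X, C_inv)
        if val > best_val:
            best_val, best_X = val, X
    return best_X, best_val

X_opt, crit = best_design()
print(X_opt)
print("det(X' C^{-1} X) =", crit)
```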
