20 related references found
1.
Deconvolution is a useful statistical technique for recovering an unknown density in the presence of measurement error. Typically, the method hinges on stringent assumptions about the nature of the measurement error, more specifically, that the distribution is entirely known. We relax this assumption in the context of a regression error component model and develop an estimator for the unknown density. We show semi-uniform consistency of the estimator and provide an application to the stochastic frontier model.
2.
R. Srinivasan 《Statistica Neerlandica》1971,25(3):153-157
Summary A modified form of the Kuiper statistic V_n is developed for testing the composite hypothesis that a sample of size n comes from a normal population with unspecified mean and variance. Its distribution is derived using Monte Carlo methods. Power comparison with the adjusted Kuiper test proposed by Louter and Koerts [6] indicates that our test is superior with respect to certain alternatives.
3.
Summary If one wants to test the hypothesis as to whether a set of observations comes from a completely specified continuous distribution or not, one can use the Kuiper test. But if one or more parameters have to be estimated, the standard tables for the Kuiper test are no longer valid. This paper presents a table to use with the Kuiper statistic for testing whether a sample comes from a normal distribution when the mean and variance are to be estimated from the sample. The critical points are obtained by means of Monte-Carlo calculation; the power of the test is estimated by simulation; and the results of the powers for several alternative distributions are compared with the estimated powers of the Kolmogorov-Smirnov test.
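For readers who want to reproduce this table-based procedure, here is a minimal sketch (not the authors' code; the sample size, replication count, and the exponential alternative are illustrative choices). It computes the Kuiper statistic with the mean and variance estimated from the sample and approximates its critical value under the composite normal null by Monte Carlo.

```python
import numpy as np
from scipy.stats import norm

def kuiper_statistic(x):
    """Kuiper statistic V_n against a normal distribution whose mean and
    standard deviation are estimated from the sample itself."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    z = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))   # plug-in parameter estimates
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - z)
    d_minus = np.max(z - (i - 1) / n)
    return d_plus + d_minus

def mc_critical_value(n, level=0.05, reps=5_000, seed=None):
    """Monte Carlo critical value of V_n under the composite normal null."""
    rng = np.random.default_rng(seed)
    stats = [kuiper_statistic(rng.standard_normal(n)) for _ in range(reps)]
    return float(np.quantile(stats, 1 - level))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.standard_exponential(50)   # a skewed alternative to normality
    v_n = kuiper_statistic(sample)
    crit = mc_critical_value(n=50, level=0.05, reps=5_000, seed=1)
    print(f"V_n = {v_n:.3f}, 5% Monte Carlo critical value = {crit:.3f}")
```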
4.
We consider the problem of estimating a probability density function based on data that are corrupted by noise from a uniform distribution. The (nonparametric) maximum likelihood estimator for the corresponding distribution function is well defined. For the density function this is not the case. We study two nonparametric estimators for this density. The first is a type of kernel density estimate based on the empirical distribution function of the observable data. The second is a kernel density estimate based on the MLE of the distribution function of the unobservable (uncorrupted) data.
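As a purely illustrative sketch of the setting (assuming the corrupting noise is additive Uniform(0, 1), which is one common convention and not necessarily the paper's exact setup), the snippet below simulates corrupted observations and computes an ordinary kernel density estimate of the observable data; this is only the building block underlying the first estimator described above, not an implementation of either estimator.

```python
import numpy as np

def kde_on_grid(obs, grid, bandwidth):
    """Ordinary Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - obs[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)            # unobservable "true" data
y = x + rng.uniform(0.0, 1.0, 500)      # observable data corrupted by uniform noise

grid = np.linspace(-4.0, 5.0, 300)
bw = 1.06 * y.std(ddof=1) * len(y) ** (-1 / 5)   # rule-of-thumb bandwidth
dens_y = kde_on_grid(y, grid, bw)

# Sanity check: the estimated density of the corrupted data integrates to about one.
print(f"integral of the estimate: {(dens_y * (grid[1] - grid[0])).sum():.3f}")
```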
5.
6.
Subhash C. Sharma 《Journal of econometrics》1985,27(3):335-361
In a linear regression model the ordinary least squares (OLS) variance estimator S² converges in probability to E(S²) even when the errors are autocorrelated. Of interest, however, is the rate of convergence. In this paper we shed some light on this question for the case of a linear trend model. In particular the relation between the rate of convergence and the correlation property of the errors is explored. It is shown that the retardation of the rate of convergence is not appreciable if the correlation is moderate, but it can be severe for extreme correlations.
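A small simulation makes the rate-of-convergence question concrete. The sketch below is illustrative only: the trend coefficients, AR(1) values, sample sizes and replication count are arbitrary choices, not taken from the paper. It fits a linear trend by OLS under AR(1) errors and reports the average of S² against the stationary error variance for moderate and extreme correlation.

```python
import numpy as np

def mean_s_squared(n, phi, reps=500, seed=0):
    """Average OLS variance estimator S^2 for y_t = a + b*t + u_t with AR(1) errors."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1)
    X = np.column_stack([np.ones(n), t])
    vals = []
    for _ in range(reps):
        e = rng.standard_normal(n)
        u = np.empty(n)
        u[0] = e[0] / np.sqrt(1 - phi**2)          # start from the stationary distribution
        for i in range(1, n):
            u[i] = phi * u[i - 1] + e[i]
        y = 1.0 + 0.5 * t + u
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        vals.append(resid @ resid / (n - 2))       # the usual OLS variance estimator S^2
    return float(np.mean(vals))

for phi in (0.3, 0.9, 0.99):                       # moderate versus extreme correlation
    for n in (50, 200, 800):
        print(f"phi={phi:4.2f}, n={n:3d}: mean S^2 = {mean_s_squared(n, phi):7.3f}, "
              f"stationary error variance = {1 / (1 - phi**2):7.3f}")
```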
7.
ZHAO Xiao-yan 《现代会计与审计》2007,3(4):42-45,52
Comparing actual results with budget or plan information, analysing the variances, and identifying the real causes behind them is an important means of control and an important function of budgeting. However, static variance analysis that benchmarks against plan data has limitations, because it ignores changes in the operating environment. Adjusting the benchmark to actual conditions before carrying out the variance analysis makes the analysis more useful; the adjusted benchmark is often known as the authorized cost/profit. Because the notion of authorization can be understood in different ways, this paper takes the Resource Consumption Accounting (RCA) view and, with numerical examples, introduces the concept of the consumption rate variance in order to promote a deeper understanding and analysis of the causes behind variances.
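The point about adjusting the benchmark before computing variances can be illustrated with a tiny numerical example (all figures below are invented for illustration and are not taken from the paper): the static variance compares actual cost with the original plan, the flexible (authorized) benchmark first rescales the plan to the actual output level, and a resource-consumption-style rate variance isolates the extra cost of using more resource per unit.

```python
# Hypothetical figures, purely for illustration.
planned_units = 1_000          # budgeted output
actual_units = 1_200           # realized output
planned_rate = 2.0             # planned resource consumption per unit (e.g. kg per unit)
actual_rate = 2.1              # actual resource consumption per unit
resource_price = 5.0           # cost per resource unit, assumed unchanged

planned_cost = planned_units * planned_rate * resource_price
actual_cost = actual_units * actual_rate * resource_price

# Static variance: actual cost against the original (unadjusted) plan.
static_variance = actual_cost - planned_cost

# Authorized (flexible) benchmark: the plan rescaled to the actual output level.
authorized_cost = actual_units * planned_rate * resource_price
volume_variance = authorized_cost - planned_cost

# Consumption rate variance: cost caused purely by using more resource per unit.
consumption_rate_variance = (actual_rate - planned_rate) * actual_units * resource_price

print(f"static variance             = {static_variance:8.1f}")
print(f"  of which volume           = {volume_variance:8.1f}")
print(f"  of which consumption rate = {consumption_rate_variance:8.1f}")
```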
8.
Viktor Witkovský 《Metrika》1995,42(1):262-263
9.
In this paper we describe methods and evaluate programs for linear regression by maximum likelihood when the errors have a heavy-tailed stable distribution. The asymptotic Fisher information matrix for both the regression coefficients and the error distribution parameters is derived, giving large sample confidence intervals for all parameters. Simulated examples are shown where the errors are stably distributed and also where the errors are heavy tailed but not stable, as well as a real example using financial data. The results are then extended to nonlinear models and to non-homogeneous error terms.
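A rough sketch of this kind of computation (not the programs evaluated in the paper): simulate a regression with stable errors and maximize the likelihood numerically using SciPy's levy_stable density with the tail index held fixed. The parameter values, starting values and optimizer are illustrative choices, and because the stable pdf has no closed form the evaluation is slow, so this is a toy-scale demonstration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0.0, 10.0, n)
alpha, beta_skew = 1.5, 0.0                      # tail index and skewness, held fixed here
errors = levy_stable.rvs(alpha, beta_skew, size=n, random_state=1)
y = 2.0 + 0.7 * x + errors                       # true intercept 2.0, slope 0.7

def neg_loglik(params):
    """Negative log-likelihood for y = b0 + b1*x + e with e ~ stable(alpha, 0, 0, scale)."""
    b0, b1, log_scale = params
    resid = y - b0 - b1 * x
    return -np.sum(levy_stable.logpdf(resid, alpha, beta_skew,
                                      loc=0.0, scale=np.exp(log_scale)))

start = np.append(np.polyfit(x, y, 1)[::-1], 0.0)   # OLS start values plus log-scale 0
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 500, "xatol": 1e-3, "fatol": 1e-3})
b0_hat, b1_hat, log_scale_hat = fit.x
print(f"intercept {b0_hat:.3f}, slope {b1_hat:.3f}, scale {np.exp(log_scale_hat):.3f}")
```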
10.
Reduced rank regression (RRR) models with time varying heterogeneity are considered. Standard information criteria for selecting cointegrating rank are shown to be weakly consistent in semiparametric RRR models in which the errors have general nonparametric short memory components and shifting volatility, provided the penalty coefficient C_n → ∞ and C_n/n → 0 as n → ∞. The AIC criterion is inconsistent and its limit distribution is given. The results extend those in Cheng and Phillips (2009a) and are useful in empirical work where structural breaks or time evolution in the error variances is present. An empirical application to exchange rate data is provided.
11.
The classical forecasting theory of stationary time series exploits the second-order structure (variance, autocovariance, and spectral density) of an observed process in order to construct some prediction intervals. However, some economic time series show a time-varying unconditional second-order structure. This article focuses on a simple and meaningful model allowing this nonstationary behaviour. We show that this model satisfactorily explains the nonstationary behaviour of several economic data sets, among which are the U.S. stock returns and exchange rates. The question of how to forecast these processes is addressed and evaluated on the data sets.
12.
J.D. Sargan 《Journal of econometrics》1981,16(1):160-161
Consider the model A(L)x_t = u_t, where A(L) is a matrix of polynomials in the lag operator so that A(L) = A_0 + A_1 L + … + A_p L^p, and x_t is a vector of n endogenous variables, A_0 = I, and the remaining A_i are square matrices, i = 1, …, p, and u_t is n × 1.

Suppose that u_t satisfies R(L)u_t = ε_t, where R(L) = R_0 + R_1 L + … + R_q L^q, R_0 = I, and each R_j is a square matrix. ε_t may be white noise, or generated by a vector moving average stochastic process.

Now writing Π(L) = R(L)A(L), it is assumed that, ignoring the implicit restrictions which follow from eq. (1), Π(L) can be consistently estimated, so that if the equation has a moving average error stochastic process, suitable conditions [see E.J. Hannan] for the identification of the unconstrained model are satisfied, and that the appropriate conditions (lack of multicollinearity) on the data second moments matrices discussed by Hannan are also satisfied. Then the essential conditions for identification of A(L) and R(L) can be considered by requiring that for the true Π(L) eq. (1) has a unique solution for A(L) and R(L).

There are three types of lack of identification to be distinguished. In the first there are a finite number of alternative factorisations. Apart from a factorisation condition which will be satisfied with probability one, a necessary and sufficient condition for lack of identification is that A(L) has a latent root λ in the sense that for some non-zero vector β, A(λ)β = 0.

The second concept of lack of identification corresponds to the Fisher conditions for local identifiability on the derivatives of the constraints. It is shown that a necessary and sufficient condition that the model is locally unidentified in this sense is that R(L) and A(L) have a common latent root, i.e., that for some vectors δ and β, δ′R(λ) = 0 and A(λ)β = 0.

Firstly it is shown that only if further conditions are satisfied will this lead to local unidentifiability in the sense that there are solutions of the equation Π(L) = R(L)A(L) in any neighbourhood of the true values.
13.
Yixiao Sun 《Journal of econometrics》2011,164(2):345-366
The paper develops a novel testing procedure for hypotheses on deterministic trends in a multivariate trend stationary model. The trends are estimated by the OLS estimator and the long run variance (LRV) matrix is estimated by a series type estimator with carefully selected basis functions. Regardless of whether the number of basis functions K is fixed or grows with the sample size, the Wald statistic converges to a standard distribution. It is shown that critical values from the fixed-K asymptotics are second-order correct under the large-K asymptotics. A new practical approach is proposed to select K that addresses the central concern of hypothesis testing: the selected smoothing parameter is testing-optimal in that it minimizes the type II error while controlling for the type I error. Simulations indicate that the new test is as accurate in size as the nonstandard test of Vogelsang and Franses (2005) and as powerful as the corresponding Wald test based on the large-K asymptotics. The new test therefore combines the advantages of the nonstandard test and the standard Wald test while avoiding their main disadvantages (power loss and size distortion, respectively).
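To make the ingredients concrete, here is a stripped-down univariate sketch (my own illustration, not the paper's code or its multivariate setting): estimate a linear trend by OLS, form a series long run variance estimator from K cosine basis functions, and compute the Wald statistic for the trend slope. Both the basis and the fixed value of K below are illustrative choices rather than the paper's testing-optimal rule.

```python
import numpy as np

def series_lrv(m, K):
    """Orthonormal-series long run variance estimator for a scalar moment process m_t."""
    n = len(m)
    s = (np.arange(1, n + 1) - 0.5) / n
    lam = 0.0
    for j in range(1, K + 1):
        phi = np.sqrt(2.0) * np.cos(np.pi * j * s)   # basis functions on [0, 1], mean zero
        lam += ((phi * m).sum() / np.sqrt(n)) ** 2
    return lam / K

rng = np.random.default_rng(0)
n, K = 200, 12
t = np.arange(1, n + 1) / n
e = rng.standard_normal(n)
u = np.empty(n)
u[0] = e[0]
for i in range(1, n):                                # AR(1) errors with coefficient 0.5
    u[i] = 0.5 * u[i - 1] + e[i]
y = 1.0 + 2.0 * t + u                                # true trend slope (on the t/n scale) is 2

# OLS slope via the demeaned trend regressor (Frisch-Waugh).
x_tilde = t - t.mean()
b1 = (x_tilde @ y) / (x_tilde @ x_tilde)
resid = y - y.mean() - b1 * x_tilde

m = x_tilde * resid                                  # moment process driving the slope
var_b1 = n * series_lrv(m, K) / (x_tilde @ x_tilde) ** 2
wald = (b1 - 2.0) ** 2 / var_b1                      # Wald statistic for H0: slope = 2
print(f"slope estimate {b1:.3f}, Wald statistic {wald:.3f} "
      f"(to be compared with fixed-K critical values)")
```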
14.
B. Engel 《Statistica Neerlandica》1990,44(4):195-219
Statistical inference for fixed effects, random effects and components of variance in an unbalanced linear model with variance components will be discussed. Variance components will be estimated by Restricted Maximum Likelihood. Iterative procedures for computing the estimates, such as Fisher scoring and the EM-algorithm, are described.
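As a minimal illustration of REML estimation of variance components in an unbalanced random-intercept model (a sketch using statsmodels rather than the Fisher scoring or EM iterations described above; the simulated layout and all parameter values are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
group_sizes = rng.integers(3, 9, size=15)                 # deliberately unbalanced
groups = np.repeat(np.arange(15), group_sizes)
effects = rng.normal(0.0, 2.0, size=15)                   # between-group s.d. = 2
y = 5.0 + effects[groups] + rng.normal(0.0, 1.0, size=len(groups))   # residual s.d. = 1

data = pd.DataFrame({"y": y, "group": groups})
# Random-intercept model; statsmodels fits it by REML by default.
model = sm.MixedLM.from_formula("y ~ 1", groups="group", data=data)
result = model.fit(reml=True)
print("between-group variance estimate:", float(result.cov_re.iloc[0, 0]))
print("residual variance estimate:     ", float(result.scale))
```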
15.
Dr. M. Deistler 《Metrika》1975,22(1):13-25
The paper consists of two main parts. In the first part we derive the solution of systems of linear stochastic difference equations by means of the z-transform. In the second part this z-transform is used to treat the problem of identification of linear econometric systems (the term econometric is used to stress the special aspects of the identification problem dealt with in econometrics). It is shown that, under suitable restrictions, observationally equivalent structures are related by unimodular matrices. Using this result, we state (rank) conditions which ensure that the unimodular matrices are constant, so that the classical econometric identification theorems can be applied. These conditions are given for stationary errors in the general case as well as in the MA, AR and ARMA cases.
16.
In the presence of heteroskedastic disturbances, the MLE for SAR models that does not take the heteroskedasticity into account is generally inconsistent. The 2SLS estimates can have large variances and biases for cases where regressors do not have strong effects. In contrast, GMM estimators obtained from certain moment conditions can be robust. Asymptotically valid inferences can be drawn with consistently estimated covariance matrices. Efficiency can be improved by constructing an optimally weighted estimator.
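For intuition, here is a small self-contained sketch (mine, not the paper's estimator) of the 2SLS step for a spatial autoregressive model y = λWy + Xβ + ε using the familiar instrument set [X, Wx, W²x]; the weights matrix, parameter values and heteroskedasticity pattern are invented, and the robust GMM refinement discussed above is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# A simple row-normalized "nearest neighbours on a line" weights matrix (illustrative).
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, lam_true = np.array([1.0, 0.5]), 0.4
sigma = 0.5 + np.abs(X[:, 1])                       # heteroskedastic disturbances
eps = rng.normal(size=n) * sigma
# Generate y from the SAR reduced form: y = (I - lam*W)^{-1} (X beta + eps).
y = np.linalg.solve(np.eye(n) - lam_true * W, X @ beta_true + eps)

# 2SLS with endogenous regressor Wy and instruments [X, Wx, W^2 x].
Z = np.column_stack([W @ y, X])                     # regressors: Wy, constant, x
H = np.column_stack([X, W @ X[:, 1:], W @ W @ X[:, 1:]])
P = H @ np.linalg.solve(H.T @ H, H.T)               # projection onto the instrument space
Z_hat = P @ Z
theta = np.linalg.solve(Z_hat.T @ Z, Z_hat.T @ y)   # [lambda_hat, const, slope]
print("lambda_hat and beta_hat:", np.round(theta, 3))
```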
17.
N. Shephard 《Journal of Applied Econometrics》1993,8(Z1):S135-S152
New strategies for the implementation of maximum likelihood estimation of nonlinear time series models are suggested. They make use of recent work on the EM algorithm and iterative simulation techniques. The estimation procedures are applied to the problem of fitting stochastic variance models to exchange rate data.
18.
Classical estimation techniques for linear models either are inconsistent, or perform rather poorly, under α-stable error densities; most of them are not even rate-optimal. In this paper, we propose an original one-step R-estimation method and investigate its asymptotic performances under stable densities. Contrary to traditional least squares, the proposed R-estimators remain root-n consistent (the optimal rate) under the whole family of stable distributions, irrespective of their asymmetry and tail index. While parametric stable-likelihood estimation, due to the absence of a closed form for stable densities, is quite cumbersome, our method allows us to construct estimators reaching the parametric efficiency bounds associated with any prescribed values (α₀, b₀) of the tail index α and skewness parameter b, while preserving root-n consistency under any (α, b) as well as under usual light-tailed densities. The method furthermore avoids all forms of multidimensional argmin computation. Simulations confirm its excellent finite-sample performances.
19.
In this paper, we consider the R-optimal design problem for multi-factor regression models with heteroscedastic errors. It is shown that an R-optimal design for the heteroscedastic Kronecker product model is given by the product of the R-optimal designs for the marginal one-factor models. However, R-optimal designs for the additive models can be constructed from R-optimal designs for the one-factor models only if sufficient conditions are satisfied. Several examples are presented to illustrate and check optimal designs based on the R-optimality criterion.
20.
In this paper, we consider the estimation problem of individual weights of three objects. For the estimation we use the chemical balance weighing design and the criterion of D-optimality. We assume that the error terms ε_i, i = 1, 2, …, n, are a first-order autoregressive process. This assumption implies that the covariance matrix of errors depends on the known parameter ρ. We present the chemical balance weighing design matrix X̃ and we prove that this design is D-optimal in certain classes of designs for ρ ∈ [0, 1), and that it is also D-optimal in the class of designs with design matrix X ∈ M_{n×3}(±1) for some ρ ≥ 0. We also prove the necessary and sufficient conditions under which the design is D-optimal in the class of designs M_{n×3}(±1) if ρ ∈ [0, 1/(n−2)). We also present the matrix of the D-optimal factorial design with 3 two-level factors.
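The D-optimality comparison is easy to explore numerically. The brute-force sketch below is illustrative only: rather than using the construction in the paper, it enumerates all ±1 design matrices for a small n and evaluates det(X'Σ(ρ)⁻¹X) under an AR(1) error covariance, reporting the best design found.

```python
import itertools
import numpy as np

def ar1_correlation(n, rho):
    """Correlation matrix of a first-order autoregressive error process."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def d_criterion(X, rho):
    """D-criterion det(X' Sigma^{-1} X) for a weighing design under AR(1) errors."""
    sigma_inv = np.linalg.inv(ar1_correlation(X.shape[0], rho))
    return float(np.linalg.det(X.T @ sigma_inv @ X))

n, rho = 4, 0.3                                   # small case so full enumeration is feasible
rows = list(itertools.product((-1.0, 1.0), repeat=3))
best_val, best_X = -np.inf, None
for design in itertools.product(rows, repeat=n):  # all n x 3 matrices with +/-1 entries
    X = np.array(design)
    val = d_criterion(X, rho)
    if val > best_val:
        best_val, best_X = val, X
print(f"best det(X' Sigma^-1 X) = {best_val:.3f} at rho = {rho}")
print(best_X)
```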