Similar Articles
1.
We study piecewise linear density estimators from the L1 point of view: the frequency polygons investigated by Scott (1985) and Jones et al. (1997), and a new piecewise linear histogram. In contrast to the earlier proposals, a unique multivariate generalization of the new piecewise linear histogram is available. All these estimators are shown to be universally L1 strongly consistent. We derive large deviation inequalities. For twice differentiable densities with compact support, their expected L1 error is shown to have the same rate of convergence as kernel density estimators. Some simulated examples are presented.
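As a rough illustration of the frequency polygon idea (a sketch, not the paper's exact estimators), the density estimate can be built by linearly interpolating the histogram's bin-centre heights:

```python
import numpy as np

def frequency_polygon(data, bins=10):
    """Frequency polygon: linearly interpolate the histogram's bin-centre
    heights, anchoring the polygon at zero one half-bin beyond each end
    so that it integrates to (approximately) one."""
    heights, edges = np.histogram(data, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    xs = np.concatenate(([centres[0] - width], centres, [centres[-1] + width]))
    ys = np.concatenate(([0.0], heights, [0.0]))
    return lambda x: np.interp(x, xs, ys)

rng = np.random.default_rng(0)
fhat = frequency_polygon(rng.normal(size=10_000), bins=50)
# Near the mode of N(0,1) the estimate should be close to 1/sqrt(2*pi).
print(round(float(fhat(0.0)), 2))
```

The piecewise linear histogram of the paper differs in construction; the point here is only the piecewise linear interpolation step.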

2.
A rich theory of production and analysis of productive efficiency has developed since the pioneering work by Tjalling C. Koopmans and Gerard Debreu. Michael J. Farrell published the first empirical study, and it appeared in a statistical journal (Journal of the Royal Statistical Society), even though the article provided no statistical theory. The literature in econometrics, management sciences, operations research and mathematical statistics has since been enriched by hundreds of papers trying to develop or implement new tools for analysing productivity and efficiency of firms. Both parametric and non-parametric approaches have been proposed. The mathematical challenge is to derive estimators of production, cost, revenue or profit frontiers, which represent, in the case of production frontiers, the optimal loci of combinations of inputs (like labour, energy and capital) and outputs (the products or services produced by the firms). Optimality is defined in terms of various economic considerations. Then the efficiency of a particular unit is measured by its distance to the estimated frontier. The statistical problem can be viewed as the problem of estimating the support of a multivariate random variable, subject to some shape constraints, in multiple dimensions. These techniques are applied in thousands of papers in the economic and business literature. This 'guided tour' reviews the development of various non-parametric approaches since the early work of Farrell. Remaining challenges and open issues in this challenging arena are also described. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute

3.
Model selection criteria often arise by constructing unbiased or approximately unbiased estimators of measures known as expected overall discrepancies (Linhart & Zucchini, 1986, p. 19). Such measures quantify the disparity between the true model (i.e., the model which generated the observed data) and a fitted candidate model. For linear regression with normally distributed error terms, the "corrected" Akaike information criterion and the "modified" conceptual predictive statistic have been proposed as exactly unbiased estimators of their respective target discrepancies. We expand on previous work to additionally show that these criteria achieve minimum variance within the class of unbiased estimators.

4.
Using Monte Carlo simulations we study the small sample performance of the traditional TSLS, the LIML and four new jackknife IV estimators when the instruments are weak. We find that the new estimators and LIML have a smaller bias but a larger variance than the TSLS. In terms of root mean square error, neither LIML nor the new estimators perform uniformly better than the TSLS. The main conclusion from the simulations and an empirical application on labour supply functions is that in a situation with many weak instruments, there still does not exist an easy way to obtain reliable estimates in small samples. Better instruments and/or larger samples are the only way to increase precision in the estimates. Since the properties of the estimators are specific to each data-generating process and sample size, it would be wise in empirical work to complement the estimates with a Monte Carlo study of the estimators' properties for the relevant sample size and data-generating process believed to be applicable. Copyright © 1999 John Wiley & Sons, Ltd.
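To make the comparison concrete, here is a minimal sketch of the TSLS baseline together with a JIVE1-style jackknife IV estimator (leave-one-out first-stage fitted values). The data-generating process and parameter values below are illustrative, not those of the paper's simulations:

```python
import numpy as np

def tsls(y, x, Z):
    """Two-stage least squares with a single endogenous regressor x and
    instrument matrix Z (no exogenous covariates, for brevity)."""
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return float(xhat @ y / (xhat @ x))

def jive1(y, x, Z):
    """JIVE1-style jackknife IV: each first-stage fitted value is replaced
    by its leave-one-out counterpart, computed from the full-sample fit
    and the leverages rather than n separate regressions."""
    pi = np.linalg.lstsq(Z, x, rcond=None)[0]
    h = np.einsum('ij,ij->i', Z @ np.linalg.pinv(Z.T @ Z), Z)  # leverages
    xhat_loo = (Z @ pi - h * x) / (1.0 - h)
    return float(xhat_loo @ y / (xhat_loo @ x))

rng = np.random.default_rng(1)
n, k = 2000, 5
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)                                   # structural error
x = Z @ np.full(k, 0.3) + 0.8 * u + rng.normal(size=n)   # endogenous regressor
y = 1.0 * x + u                                          # true coefficient 1.0
print(round(tsls(y, x, Z), 2), round(jive1(y, x, Z), 2))
```

With instruments this strong both estimators land near the true value; the bias/variance trade-off the paper documents emerges as the first-stage coefficients shrink toward zero.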

5.
Spatial autoregressive models are powerful tools in the analysis of data sets from diverse scientific areas of research such as econometrics, plant species richness, cancer mortality rates, image processing, analysis of the functional Magnetic Resonance Imaging (fMRI) data, and many more. An important class in the host of spatial autoregressive models is the class of spatial error models in which spatially lagged error terms are assumed. In this paper, we propose efficient shrinkage and penalty estimators for the regression coefficients of the spatial error model. We carry out asymptotic as well as simulation analyses to illustrate the gain in efficiency achieved by these new estimators. Furthermore, we apply the new methodology to housing prices data and provide a bootstrap approach to compute prediction errors of the new estimators.  相似文献   

6.
Standard estimators for the binomial logit model and for the multinomial logit model allow for an error arising from the use of relative frequencies instead of the true probabilities as the dependent variable. Recently Amemiya and Nold (1975) have considered the effect of the presence of an additional specification error in the binomial logit model and have proposed a modified logit estimation scheme to take the additional error variance into account. This paper extends their idea to the multinomial logit model and proposes an estimator that is consistent and asymptotically more efficient than the standard multinomial logit estimator. The paper presents a comparison of the results of applying the new estimator and existing estimators to a logit model for the choice of automobile ownership in the United States.
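The standard grouped-logit estimator that such modifications build on can be sketched as weighted least squares on the empirical log-odds (the Amemiya–Nold correction would add an extra variance component to the weights; the setup below is purely illustrative):

```python
import numpy as np

def grouped_logit_wls(p_hat, n, x):
    """Berkson-style grouped logit: weighted least squares of the empirical
    log-odds on the covariate, with weights n*p*(1-p), the inverse of the
    sampling variance of the empirical logit."""
    z = np.log(p_hat / (1.0 - p_hat))
    w = n * p_hat * (1.0 - p_hat)
    Xd = np.column_stack([np.ones(len(p_hat)), x])
    A = Xd.T * w                          # apply the weights
    return np.linalg.solve(A @ Xd, A @ z)

rng = np.random.default_rng(4)
G = 60                                    # number of groups (cells)
x = rng.uniform(-1.5, 1.5, size=G)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))
n = np.full(G, 400)                       # observations per group
p_hat = rng.binomial(n, p_true) / n       # relative frequencies
print(np.round(grouped_logit_wls(p_hat, n, x), 1))
```

The estimator requires every cell frequency to be strictly between 0 and 1, which is why the covariate range is kept moderate here.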

7.
This paper introduces a new representation for seasonally cointegrated variables, namely the complex error correction model, which allows statistical inference to be performed by reduced rank regression. The suggested estimators and test statistics are asymptotically equivalent to their maximum likelihood counterparts. The small sample properties are evaluated by a Monte Carlo study, and an empirical example is presented to illustrate the concepts and methods.

8.
This paper discusses the analysis of categorical data which have been misclassified and where the misclassification probabilities are known. Fields where this kind of misclassification occurs are randomized response, statistical disclosure control, and classification with known sensitivity and specificity. Estimates of the true frequencies are given, and adjustments to the odds ratio are discussed. Moment estimates and maximum likelihood estimates are compared, and it is proved that they are the same in the interior of the parameter space. Since moment estimators regularly fall outside the parameter space, special attention is paid to the possibility of boundary solutions. An example is given.
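The moment estimator referred to here amounts to inverting the known misclassification matrix; a minimal two-category sketch (the matrix values are an assumed example, in the spirit of randomized response):

```python
import numpy as np

# Assumed known misclassification matrix: T[i, j] = P(observe i | true j).
T = np.array([[0.8, 0.2],
              [0.2, 0.8]])

def moment_estimate(counts, T):
    """Moment estimator of the true category probabilities: solve
    T @ p_true = p_observed. Nothing constrains the solution to [0, 1],
    which is why boundary solutions need special attention."""
    p_obs = counts / counts.sum()
    return np.linalg.solve(T, p_obs)

rng = np.random.default_rng(2)
true_cat = rng.choice(2, size=100_000, p=[0.3, 0.7])
# Misclassify each observation according to the columns of T.
observed = (rng.random(true_cat.size) < T[1, true_cat]).astype(int)
counts = np.bincount(observed, minlength=2).astype(float)
print(np.round(moment_estimate(counts, T), 2))
```

With true probabilities near the boundary (e.g. 0.02), the solved vector can contain negative entries, illustrating the boundary problem the abstract mentions.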

9.
Journal of Econometrics, 2005, 126(2), 305–334
The paper analyzes a number of competing approaches to modeling efficiency in panel studies. The specifications considered include the fixed effects stochastic frontier, the random effects stochastic frontier, the Hausman–Taylor random effects stochastic frontier, and the random and fixed effects stochastic frontier with an AR(1) error. I have summarized the foundations and properties of estimators that have appeared elsewhere and have described the model assumptions under which each of the estimators has been developed. I discuss parametric and nonparametric treatments of time varying efficiency including the Battese–Coelli estimator and linear programming approaches to efficiency measurement. Monte Carlo simulation is used to compare the various estimators and to assess their relative performances under a variety of misspecified settings. A brief illustration of the estimators is conducted using U.S. banking data.

10.
This paper reviews the literature on measurement error in the major US price indexes: the Consumer Price Index (CPI), the Producer Price Index (PPI), and the Gross Domestic Product (GDP) deflators. We take as our point of departure Triplett's (1975) survey and focus on the studies of measurement error that have appeared since then. We review the problems of substitution bias, quality bias, new goods bias, and outlet substitution bias that are generally considered to be the main sources of error in price indexes. The bulk of the paper is devoted to problems in the CPI and PPI, as the GDP deflators tend to be based mainly on the components of these series. We find that there has been surprisingly little work on the problem of overall measurement error in any of these price indexes, and we conclude that there is very little scientific basis for the commonly accepted notion that measured inflation at 2 to 3 percent a year is consistent with price stability.

11.
Journal of Econometrics, 2005, 127(1), 83–102
An important feature of panel data is that it allows the estimation of parameters characterizing dynamics from individual level data. Several authors argue that such parameters can also be identified from repeated cross-section data and present estimators to do so. This paper reviews the identification conditions underlying these estimators. As grouping data to obtain a pseudo-panel is an application of instrumental variables (IV), identification requires that standard IV conditions are met. This paper explicitly discusses the implications of these conditions for empirical analyses. We also propose a computationally attractive IV estimator that is consistent under essentially the same conditions as existing estimators. While a Monte Carlo study indicates that this estimator may work well under relatively weak conditions, these conditions are not trivially satisfied in applied work. Accordingly, a key conclusion of the paper is that these estimators cannot be implemented under general conditions.

12.
Journal of Econometrics, 2005, 124(1), 91–116
The maximal achievable level of output for a given level of inputs defines the production frontier, which can serve as a benchmark to evaluate individual firm efficiencies. Nonparametric envelopment estimators (free disposal hull, data envelopment analysis) have been mostly used because they rely on very few assumptions, whereas parametric forms for the frontier allow for richer economic interpretation. Most of the parametric approaches rely on standard regression, fitting the shape of the center of the cloud of points. In this paper, we investigate a new approach which captures the shape of the cloud of points near its boundary. It offers parametric approximations of nonparametric frontiers. We provide the asymptotic statistical theory. Some simulated examples show the advantages of our method compared with the usual regression-type estimators.
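For readers unfamiliar with the envelopment estimators mentioned above, a minimal single-output free disposal hull (FDH) score can be sketched as follows (an illustrative toy, not the paper's parametric approximation):

```python
import numpy as np

def fdh_output_efficiency(x0, y0, X, Y):
    """Free disposal hull (FDH) output score of the unit (x0, y0):
    max { Y[j]/y0 over observed units j using no more of every input
    than x0 }. A score of 1 means the unit sits on the FDH frontier;
    above 1, an observed unit dominates it and its output could be
    expanded to that level."""
    dominating = np.all(X <= x0, axis=1)
    if not dominating.any():
        return float('nan')               # nothing comparable observed
    return float(np.max(Y[dominating]) / y0)

# Toy data: one input, one output, four firms.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
Y = np.array([1.0, 3.0, 3.5, 5.0])
print(fdh_output_efficiency(X[1], Y[1], X, Y))            # frontier firm
print(fdh_output_efficiency(np.array([3.0]), 2.0, X, Y))  # dominated point
```

DEA would replace the discrete dominance comparison with a linear program over convex combinations of the observed units.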

13.
When building a linear regression model from economic data containing nonresponse, the PMM multiple-imputation method is used to impute the missing values. Simulation results show that, under any nonresponse mechanism, increasing the number of imputations does not significantly reduce the bias and mean squared error of the coefficient estimators; for any nonresponse rate, five imputations are recommended. Under the missing-completely-at-random mechanism, the bias or mean squared error of the coefficient estimators often does not increase significantly as the nonresponse rate rises. Under the missing-at-random or not-missing-at-random mechanisms, however, the bias and mean squared error of the coefficient estimators often increase significantly as the nonresponse rate rises.
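Predictive mean matching (PMM) as used here can be sketched roughly as follows. This is an illustrative implementation with assumed simulation parameters; production MI software additionally draws the regression parameters from their posterior before matching, a step omitted for brevity:

```python
import numpy as np

def pmm_impute(y, X, rng, k=5):
    """One predictive-mean-matching (PMM) draw: fit OLS on the complete
    cases, then replace each missing y with the observed value of a donor
    drawn at random from the k cases with the closest predicted mean."""
    obs = np.where(~np.isnan(y))[0]
    mis = np.where(np.isnan(y))[0]
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)[0]
    pred = Xd @ beta
    y_imp = y.copy()
    for i in mis:
        donors = obs[np.argsort(np.abs(pred[obs] - pred[i]))[:k]]
        y_imp[i] = y[rng.choice(donors)]
    return y_imp

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=500)
y[rng.random(500) < 0.2] = np.nan            # roughly 20% MCAR nonresponse
slopes = []                                  # pool over m = 5 imputations
for _ in range(5):
    y_i = pmm_impute(y, x, rng)
    A = np.column_stack([np.ones(500), x])
    slopes.append(np.linalg.lstsq(A, y_i, rcond=None)[0][1])
print(round(float(np.mean(slopes)), 2))
```

Pooling the slope across the five completed data sets mirrors the m = 5 recommendation in the abstract.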

14.
For a balanced two-way mixed model, the maximum likelihood (ML) and restricted ML (REML) estimators of the variance components were obtained and compared under the non-negativity requirements of the variance components by Lee and Kapadia (1984). In this note, for a mixed (random blocks) incomplete block model, explicit forms for the REML estimators of variance components are obtained. They are always non-negative and have smaller mean squared error (MSE) than the analysis of variance (AOV) estimators. The asymptotic sampling variances of the ML estimators and the REML estimators are compared, and the balanced incomplete block design (BIBD) is considered as a special case. The ML estimators are shown to have smaller asymptotic variances than the REML estimators, but a numerical result for the randomized complete block design (RCBD) demonstrated that the performances of the REML and ML estimators are not much different in the MSE sense.

15.
Instrumental variable (IV) methods for regression are well established. More recently, methods have been developed for statistical inference when the instruments are weakly correlated with the endogenous regressor, so that estimators are biased and no longer asymptotically normally distributed. This paper extends such inference to the case where two separate samples are used to implement instrumental variables estimation. We also relax the restrictive assumptions of homoskedastic error structure and equal moments of exogenous covariates across two samples commonly employed in the two-sample IV literature for strong IV inference. Monte Carlo experiments show good size properties of the proposed tests regardless of the strength of the instruments. We apply the proposed methods to two seminal empirical studies that adopt the two-sample IV framework.
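The two-sample setup being extended here can be sketched with a basic two-sample 2SLS point estimator: the first stage comes from one sample, the second stage from another. The data-generating process below is an assumed toy, not from the paper:

```python
import numpy as np

def ts2sls(y2, Z2, x1, Z1):
    """Two-sample 2SLS: estimate the first stage on sample 1 (where x and
    Z are observed), form fitted values with sample 2's instruments
    (where y and Z are observed), and regress y2 on those fitted values."""
    pi = np.linalg.lstsq(Z1, x1, rcond=None)[0]
    xhat2 = Z2 @ pi
    return float(xhat2 @ y2 / (xhat2 @ xhat2))

rng = np.random.default_rng(5)

def draw_sample(size):
    # Same data-generating process in both samples; true coefficient 2.0.
    Z = rng.normal(size=(size, 3))
    u = rng.normal(size=size)
    x = Z @ np.array([0.6, 0.4, 0.5]) + 0.7 * u + rng.normal(size=size)
    y = 2.0 * x + u
    return y, x, Z

y1, x1, Z1 = draw_sample(3000)   # only x1, Z1 used (first stage)
y2, x2, Z2 = draw_sample(3000)   # only y2, Z2 used (second stage)
print(round(ts2sls(y2, Z2, x1, Z1), 1))
```

The paper's contribution concerns inference (weak instruments, heteroskedasticity, unequal covariate moments), which this point-estimator sketch does not address.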

16.
This paper studies the coefficient estimators of a linear model based on the EMB multiple-imputation method, analyses their statistical properties, and compares them with the PMM multiple-imputation and DA imputation methods. Simulation results show that, as the nonresponse rate increases, the absolute bias and mean squared error of the coefficient estimators rise, and the estimated variance rises even more markedly. Under the missing-completely-at-random or missing-at-random mechanisms, fifteen imputations are recommended; under a not-missing-at-random mechanism that depends on the response variable, the number of imputations can be increased accordingly. Under a not-missing-at-random mechanism that depends on other variables, the mean squared error and the estimated variance of the estimators diverge widely, so the EMB multiple-imputation method should be used with caution.

17.
Hypothesis tests using data envelopment analysis
A substantial body of recent work has opened the way to exploring the statistical properties of DEA estimators of production frontiers and related efficiency measures. The purpose of this paper is to survey several possibilities that have been pursued, and to present them in a unified framework. These include the development of statistics to test hypotheses about the characteristics of the production frontier, such as returns to scale, input substitutability, and model specification, and also about variation in efficiencies relative to the production frontier.

18.
N. D. Shukla, Metrika, 1976, 23(1), 127–133
In sample survey methods the use of product estimators was suggested by Murthy [1964] and Srivastava [1966]; these were found to serve a good purpose provided the two variables, viz. the main variable under study and the auxiliary variable, have a very high negative correlation between them. The product estimators they suggested are biased. In the present paper the author obtains unbiased product estimators (to the first degree of approximation) with the help of the technique developed by Quenouille [1956] and establishes that the new estimator is better than the other product estimators in the mean square error sense.
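Quenouille's jackknife correction applied to the classical product estimator can be sketched as follows (an illustrative simulation with an assumed known auxiliary mean, not the paper's derivation, which works to the first degree of approximation analytically):

```python
import numpy as np

def product_estimator(y, x, X_mean):
    """Classical (biased) product estimator of the mean of y, intended
    for an auxiliary variable x that is negatively correlated with y and
    has known population mean X_mean."""
    return float(np.mean(y) * np.mean(x) / X_mean)

def jackknife_product_estimator(y, x, X_mean):
    """Quenouille's jackknife: n*t - (n-1)*mean(leave-one-out estimates),
    which removes the O(1/n) term of the bias of the product estimator."""
    n = len(y)
    t = product_estimator(y, x, X_mean)
    loo = [product_estimator(np.delete(y, i), np.delete(x, i), X_mean)
           for i in range(n)]
    return n * t - (n - 1) * float(np.mean(loo))

rng = np.random.default_rng(6)
x = rng.uniform(1.0, 3.0, size=200)                     # known mean 2.0
y = 10.0 - 2.0 * x + rng.normal(scale=0.3, size=200)    # strong negative corr.
print(round(jackknife_product_estimator(y, x, 2.0), 1))
```

The true mean of y in this setup is 6.0, so the corrected estimate should land very close to it.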

19.
Journal of Econometrics, 2002, 109(1), 67–105
Censored regression models have received a great deal of attention in both the theoretical and applied econometric literature. Most of the existing estimation procedures for either cross-sectional or panel data models are designed only for models with fixed censoring. In this paper, a new procedure for adapting these estimators designed for fixed censoring to models with random censoring is proposed. This procedure is then applied to the CLAD and quantile estimators of Powell (J. Econom. 25 (1984) 303, 32 (1986a) 143) to obtain an estimator of the coefficients under a mild conditional quantile restriction on the error term that is applicable to samples exhibiting fixed or random censoring. The resulting estimator is shown to have desirable asymptotic properties, and performs well in a small-scale simulation study.

20.
In this paper Monte Carlo techniques are used to examine the performance of several estimators for the linear statistical model under a squared error of prediction loss measure when the data are multicollinear. Under this measure of performance the Stein-like rules that shrink toward the principal components estimator perform very well relative to other minimax estimators for alternative specifications of the characteristic root spectrum. The sampling performance of a non-minimax pretest rule is also considered.
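The principal components estimator toward which the Stein-like rules shrink can be sketched as follows (a bare-bones illustration with assumed toy data, not the paper's shrinkage rules themselves):

```python
import numpy as np

def pcr_coefficients(X, y, n_components):
    """Principal components regression: regress y on the leading principal
    components of centred X, then map the coefficients back to the
    original regressors. Dropping the small-eigenvalue directions discards
    exactly the components that make OLS unstable under multicollinearity."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T               # p x n_components loadings
    gamma = np.linalg.lstsq(Xc @ V, y - y.mean(), rcond=None)[0]
    return V @ gamma

rng = np.random.default_rng(7)
n = 300
z = rng.normal(size=n)
# Two nearly collinear regressors plus one independent regressor.
X = np.column_stack([z, z + 0.01 * rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0, 2.0]) + 0.1 * rng.normal(size=n)
beta_full = pcr_coefficients(X, y, 3)   # all components: reproduces OLS
beta_2 = pcr_coefficients(X, y, 2)      # drop the ill-determined direction
print(np.round(beta_2, 1))
```

A Stein-like rule would shrink the OLS estimate smoothly toward `beta_2`-type estimates rather than truncating components outright.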
