Similar Articles
20 similar articles found (search time: 680 ms)
1.
Maximum likelihood (ML) estimation of the autoregressive parameter of a dynamic panel data model with fixed effects is inconsistent under asymptotics with a fixed time-series sample size and a large cross-section sample size. This paper proposes a general, computationally inexpensive method of bias reduction that is based on indirect inference, shows unbiasedness and analyzes efficiency. Monte Carlo studies show that our procedure achieves substantial bias reductions with only mild increases in variance, thereby substantially reducing root mean square errors. The method is compared with certain consistent estimators and is shown to have superior finite sample properties to the generalized method of moments (GMM) and the bias-corrected ML estimator.
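The inconsistency referred to here is the well-known Nickell bias: with T fixed, within-group demeaning makes the lagged dependent variable correlated with the demeaned error. A minimal Monte Carlo sketch of that bias (illustrative parameter values; this demonstrates the problem, not the paper's indirect-inference correction):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, N, T = 0.5, 2000, 5  # large cross-section N, small time series T

# Simulate a fixed-effects AR(1) panel: y_it = alpha_i + rho * y_i,t-1 + e_it
alpha = rng.normal(size=N)
y = np.zeros((N, T + 1))
y[:, 0] = alpha / (1 - rho)  # start each unit near its stationary mean
for t in range(1, T + 1):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(size=N)

# Within-group estimator: demean within each unit, regress y_t on y_{t-1}
y_lag, y_cur = y[:, :-1], y[:, 1:]
y_lag_d = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_d = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_hat = (y_lag_d * y_cur_d).sum() / (y_lag_d ** 2).sum()

print(f"true rho = {rho}, within-group estimate = {rho_hat:.3f}")
```

With T = 5 the estimate stays well below the true value 0.5 even as N grows, which is the bias the paper's indirect-inference procedure is designed to remove.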

2.
Optimal exact designs are notoriously hard to study and only a few of them are known for polynomial models. Using recently obtained optimal exact designs (Imhof, 1997), we show that the efficiency of the frequently used rounded optimal approximate designs can be sensitive to the sample size when the sample size is small. For some criteria, the efficiency of the rounded optimal approximate design can vary by as much as 25% when the sample size is changed by one unit. The paper also discusses lower efficiency bounds and shows that they are sometimes the best possible bounds for the rounded optimal approximate designs.

3.
A nonparametric linear programming approach is adopted to measure the productive efficiency of thrift financial institutions. The methodology is applied to a sample of California thrifts in 1989, yielding high mean efficiency scores. High efficiency scores correspond to high levels of operating efficiency. Estimation of a truncated regression model indicates that the major determinants of thrift efficiency are organization form, management style, risk, and firm size. Applying the methodology to a subset of thrifts from 1986 which had failed by 1989 shows technical inefficiency to be a significant indicator of a high probability of failure.
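The linear-programming efficiency measure described here can be sketched with the standard input-oriented CCR DEA model: for each unit, find the smallest radial contraction of its inputs that a convex-cone combination of all units can still dominate. A minimal sketch on hypothetical data (scipy's `linprog`; not the California thrift sample):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0; X is (m x n) inputs, Y is (s x n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta over (theta, lambda)
    # inputs:  X @ lam <= theta * x_{j0}  ->  -theta * x_{j0} + X @ lam <= 0
    A_in = np.c_[-X[:, [j0]], X]
    # outputs: Y @ lam >= y_{j0}          ->  -Y @ lam <= -y_{j0}
    A_out = np.c_[np.zeros((s, 1)), -Y]
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Hypothetical data: 4 units, 1 input, 1 output
X = np.array([[2.0, 4.0, 4.0, 8.0]])
Y = np.array([[2.0, 4.0, 2.0, 4.0]])
scores = [dea_ccr_input(X, Y, j) for j in range(4)]
print(np.round(scores, 3))  # units 1 and 2 lie on the frontier (score 1.0)
```

Units 3 and 4 produce half the output per unit of input of the frontier units, so their scores are 0.5: they could in principle produce the same output with half the input.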

4.
Sample autocorrelation coefficients are widely used to test the randomness of a time series. Despite its unsatisfactory performance, the asymptotic normal distribution is often used to approximate the distribution of the sample autocorrelation coefficients. This is mainly due to the lack of an efficient approach for obtaining the exact distribution of sample autocorrelation coefficients. In this paper, we provide an efficient algorithm for evaluating the exact distribution of the sample autocorrelation coefficients. Under the multivariate elliptical distribution assumption, the exact distribution as well as exact moments and joint moments of sample autocorrelation coefficients are presented. In addition, the exact mean and variance of various autocorrelation-based tests are provided. Actual size properties of the Box–Pierce and Ljung–Box tests are investigated, and they are shown to be poor when the number of lags is moderately large relative to the sample size. Using the exact mean and variance of the Box–Pierce test statistic, we propose an adjusted Box–Pierce test that has far superior size properties to the traditional Box–Pierce and Ljung–Box tests.
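For reference, the two portmanteau statistics compared here have simple closed forms built from the sample autocorrelations r_k: the Box–Pierce statistic is Q = n * sum(r_k^2) and the Ljung–Box statistic is Q* = n(n+2) * sum(r_k^2 / (n-k)), both over lags k = 1..m. A self-contained sketch (the white-noise input is purely illustrative):

```python
import numpy as np

def portmanteau(x, m):
    """Box-Pierce and Ljung-Box statistics for the first m lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    denom = np.sum(x ** 2)
    # sample autocorrelations r_1, ..., r_m
    r = np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, m + 1)])
    bp = n * np.sum(r ** 2)
    lb = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, m + 1)))
    return bp, lb

rng = np.random.default_rng(1)
bp, lb = portmanteau(rng.normal(size=200), m=10)
print(f"Box-Pierce Q = {bp:.2f}, Ljung-Box Q* = {lb:.2f}")
```

Since (n+2)/(n-k) > 1 for every lag, the Ljung–Box statistic always exceeds the Box–Pierce statistic; both are referred to the same chi-squared approximation, which is the approximation whose size distortion the paper quantifies exactly.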

5.
Estimating house price appreciation: A comparison of methods
Several parametric and nonparametric methods have been advanced over the years for estimating house price appreciation. This paper compares five of these methods in terms of predictive accuracy, using data from Montgomery County, Pennsylvania. The methods are evaluated on the basis of the mean squared prediction error and the mean absolute prediction error. A statistic developed by Diebold and Mariano is used to determine whether differences in prediction errors are statistically significant. We use the same statistic to determine the effect of sample size on the accuracy of the predictions. In general, parametric methods of estimation produce more accurate estimates of house price appreciation than nonparametric methods. And when the mean absolute prediction error is used as the criterion of accuracy, the repeat sales method produces the most accurate estimate among the parametric methods we tested. Finally, of the five methods we tested, the accuracy of the repeat sales method is least diminished by a reduction in sample size.
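Of the methods compared, the repeat sales approach admits a compact sketch: regress the log price ratio of each property's sale pair on period dummies (the Bailey–Muth–Nourse form), with the base period's coefficient fixed at zero. The sale pairs below are hypothetical, not the Montgomery County sample:

```python
import numpy as np

# Hypothetical repeat-sales pairs: (first sale period, second sale period, log price ratio)
# BMN regression: log(P2/P1) = b[t2] - b[t1] + e, with b[0] normalised to 0.
pairs = [(0, 2, 0.20), (1, 3, 0.25), (0, 1, 0.05), (2, 3, 0.12), (1, 2, 0.08)]
T = 4
X = np.zeros((len(pairs), T - 1))   # columns are periods 1..T-1 (period 0 is baseline)
y = np.empty(len(pairs))
for i, (t1, t2, dlogp) in enumerate(pairs):
    if t1 > 0:
        X[i, t1 - 1] = -1.0         # -1 in the first-sale period
    X[i, t2 - 1] = 1.0              # +1 in the second-sale period
    y[i] = dlogp

b, *_ = np.linalg.lstsq(X, y, rcond=None)
index = np.exp(np.r_[0.0, b])       # cumulative price index, period 0 = 1
print(np.round(index, 3))
```

Each index level exp(b_t) is the estimated cumulative appreciation from the base period, which is the quantity whose prediction accuracy the paper evaluates.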

6.
Zhang and Bartels (1998) show formally how DEA efficiency scores are affected by sample size. They demonstrate that comparing measures of structural inefficiency between samples of different sizes leads to biased results. This note argues that this type of sample size bias has much wider implications than suggested by their example. Models which implicitly restrict the comparison set, like some models for non-discretionary variables, lead to biased efficiency scores as well. A reanalysis of the Banker and Morey (1986b) data shows that the efficiency scores derived there are significantly influenced by the variation in sample size implicit in their model.

7.
V. K. Srivastava, Metrika, 1980, 27(1): 99–102
An estimator for the mean of a Normal population is presented and its properties are analyzed. The efficiency with respect to the conventional sample mean is also examined.

8.
This paper investigates the relationship between efficiency and market structure for a sample of industrial facilities dispersed among the U.S. states. In order to measure the relevant efficiency scores, we use a data envelopment analysis allowing for the inclusion of desirable and undesirable (toxic chemical releases) outputs in the production function. In the next stage, we utilize the bootstrapped quantile regression methodology to uncover possible nonlinear relationships between efficiency and competition at the mean and at various quantiles before and after the global financial crisis (2002 and 2012). In this way, we impose no functional form constraints on parameter values over the conditional distribution of the dependent variable (efficiency). At the same time, we estimate at which part of its cumulative distribution function the efficiency is located and draw substantial conclusions about the range of policy measures obtained. The empirical findings indicate that the relationship between efficiency and market concentration did change in the aftermath of the global financial crisis. The empirical results survived robustness checks under the inclusion of an alternative market concentration indicator (CR8).

9.
In the simple errors-in-variables model the least squares estimator of the slope coefficient is known to be biased towards zero for finite sample size as well as asymptotically. In this paper we suggest a new corrected least squares estimator, where the bias correction is based on approximating the finite sample bias by a lower bound. This estimator is computationally very simple. It is compared with previously proposed corrected least squares estimators, where the correction aims at removing the asymptotic bias or the exact finite sample bias. For each type of corrected least squares estimators we consider the theoretical form, which depends on an unknown parameter, as well as various feasible forms. An analytical comparison of the theoretical estimators is complemented by a Monte Carlo study evaluating the performance of the feasible estimators. The new estimator proposed in this paper proves to be superior with respect to the mean squared error.
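The attenuation bias and a simple correction are easy to illustrate. The sketch below uses the classical correction that rescales the least squares slope by the reliability ratio, with the measurement-error variance treated as known (a "theoretical form" in the paper's terminology, not the paper's new lower-bound estimator); parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, sig_u = 5000, 1.0, 0.8
x_true = rng.normal(size=n)
x_obs = x_true + sig_u * rng.normal(size=n)        # classical measurement error
y = beta * x_true + 0.3 * rng.normal(size=n)

# Naive LS slope is attenuated toward zero by the reliability ratio
var_obs = np.var(x_obs, ddof=1)
b_ls = np.cov(x_obs, y)[0, 1] / var_obs

# Corrected LS: divide out the reliability ratio (sig_u^2 assumed known)
b_cls = b_ls * var_obs / (var_obs - sig_u ** 2)

print(f"LS: {b_ls:.3f}, corrected: {b_cls:.3f}, true: {beta}")
```

Here Var(x_obs) is about 1.64 while the signal variance is 1, so the naive slope converges to roughly 0.61 times the true coefficient; the corrected estimator recovers it.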

10.
The paper investigates the efficiency of a sample of Islamic and conventional banks in 10 countries that operate Islamic banking for the period 1996–2002, using an output distance function approach. We obtain measures of efficiency after allowing for environmental influences such as country macroeconomic conditions, accessibility of banking services and bank type. While these factors are assumed to directly influence the shape of the technology, we assume that country dummies and bank size directly influence technical inefficiency. The parameter estimates highlight that during the sample period, Islamic banking appears to be associated with higher input usage. Furthermore, by allowing for bank size and international differences in the underlying inefficiency distributions, we are also able to demonstrate statistically significant differences in inefficiency related to these factors even after controlling for specific environmental characteristics and Islamic banking. Thus, for example, our results suggest that Sudan and Yemen have relatively higher inefficiency while Bahrain and Bangladesh have lower estimated inefficiency. Except for Sudan, where banks exhibit relatively strong returns to scale, most sample banks exhibit very slight returns to scale, although Islamic banks are found to have moderately higher returns to scale than conventional banks. While this suggests that Islamic banks may benefit from increased scale, we would emphasize that our results suggest that identifying and overcoming the factors that cause Islamic banks to have relatively low potential outputs for given input usage levels will be the key challenge for Islamic banking in the coming decades.

11.
The paper examines efficiency, productivity and scale economies in the U.S. property-liability insurance industry. Productivity change is analyzed using Malmquist indices, and efficiency is estimated using data envelopment analysis. The results indicate that the majority of firms below median size in the industry are operating with increasing returns to scale, and the majority of firms above median size are operating with decreasing returns to scale. However, a significant number of firms in each size decile have achieved constant returns to scale. Over the sample period, the industry experienced significant gains in total factor productivity, and there is an upward trend in scale and allocative efficiency. More diversified firms and insurance groups were more likely to achieve efficiency and productivity gains. Higher technology investment is positively related to efficiency and productivity improvements.

12.
In this paper, the power rates and distributional properties of the Outfit, Infit, Lz, ECI2z and ECI4z statistics were explored when they are used in tests containing biased items, i.e., items with differential item functioning (DIF). In this study, different conditions of sample size, sample size ratio of the focal and reference groups, impact between groups, DIF effect size, and percentage of DIF items were manipulated. In addition, examinee responses were generated to simulate uniform DIF. Results suggest that the item fit statistics generally detected moderate percentages of DIF items in large samples (1000/500 or 1000/1000) only when the DIF effect size was relatively high and when the means of the focal and reference groups differed. Moreover, when the groups had equal means, low correct identification rates were found for all five item-fit indices. In general, the results showed adequate control of false positive rates. These findings lead to the conclusion that all indices used in this study are only partially adequate fit measures for detecting biased items, mainly when impact between groups is present and the sample size is large.

13.
The main purpose of this paper is to identify the key factors that impact schools' academic performance and to explore their relationships through a two-stage analysis based on a sample of Tunisian secondary schools. In the first stage, we use the Directional Distance Function (DDF) approach to deal with undesirable outputs. The DDF is estimated using the Data Envelopment Analysis (DEA) method. In the second stage we apply machine-learning approaches (regression trees and random forests) to identify and visualize variables that are associated with high school performance. The data is extracted from the Program for International Student Assessment (PISA) 2012 survey. The first-stage analysis shows that almost 22% of Tunisian schools are efficient and that schools could improve their students' educational performance by 15.6% while using the same level of resources. Regression tree findings indicate that the most important factors associated with higher performance are school size, competition, class size, parental pressure and proportion of girls. Only school location appears to have no impact on school efficiency. The random forests algorithm shows that the proportion of girls at school and school size have the strongest impact on the predictive accuracy of our model and hence the greatest influence on school efficiency. The findings also disclose the high non-linearity of the relationships between these key factors and school performance and reveal the importance of modeling their interactions in influencing efficiency scores.

14.
We consider the problem of estimating the variance of the partial sums of a stationary time series that has either long memory, short memory, negative/intermediate memory, or is the first-difference of such a process. The rate of growth of this variance depends crucially on the type of memory, and we present results on the behavior of tapered sums of sample autocovariances in this context when the bandwidth vanishes asymptotically. We also present asymptotic results for the case that the bandwidth is a fixed proportion of sample size, extending known results to the case of flat-top tapers. We adopt the fixed proportion bandwidth perspective in our empirical section, presenting two methods for estimating the limiting critical values: the subsampling method and a plug-in approach. Simulation studies compare the size and power of both approaches as applied to hypothesis testing for the mean. Both methods perform well (although the subsampling method appears to be better sized) and provide a viable framework for conducting inference for the mean. In summary, we supply a unified asymptotic theory that covers all different types of memory under a single umbrella.
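A tapered sum of sample autocovariances of the kind discussed here can be sketched as follows, using a trapezoidal flat-top taper and a bandwidth that is a fixed proportion of the sample size (the taper shape and the constants b and c are illustrative assumptions, not the authors' exact choices):

```python
import numpy as np

def lrv_flattop(x, b=0.02, c=0.5):
    """Variance-of-partial-sums (long-run variance) estimate via a tapered sum of
    sample autocovariances. Flat-top taper: weight 1 on lags up to c*M, then linear
    decay to 0 at lag M, with bandwidth M = b*n a fixed proportion of sample size."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    M = max(int(b * n), 1)
    # sample autocovariances gamma_0, ..., gamma_M
    gamma = np.array([np.dot(x[k:], x[: n - k]) / n for k in range(M + 1)])
    t = np.arange(M + 1) / M
    w = np.clip((1 - t) / (1 - c), 0.0, 1.0)   # trapezoidal flat-top weights
    return gamma[0] + 2 * np.sum(w[1:] * gamma[1:])

rng = np.random.default_rng(3)
est = lrv_flattop(rng.normal(size=5000))
print(f"{est:.3f}")
```

For an iid N(0,1) series the target is 1; because the bandwidth is a fixed proportion of n, the estimator has non-vanishing variance, which is exactly why the paper resorts to subsampling or a plug-in approach for critical values rather than a normal limit.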

15.
Junming Liu, Kaoru Tone, Socio, 2008, 42(2): 75–91
When measuring technical efficiency with existing data envelopment analysis (DEA) techniques, mean efficiency scores generally exhibit volatile patterns over time. This appears to be at odds with the general perception of learning-by-doing management, due to Arrow [The economic implications of learning by doing. Review of Economic Studies 1964; 154–73]. Further, this phenomenon is largely attributable to the fundamental assumption of deterministic data maintained in DEA models, and to the difficulty such models have in incorporating environmental influences. This paper proposes a three-stage method to measure DEA efficiency while controlling for the impacts of both statistical noise and environmental factors. Using panel data on Japanese banking over the period 1997–2001, we demonstrate that the proposed approach greatly mitigates these weaknesses of DEA models. We find a stable upward trend in mean measured efficiency, indicating that, on average, the bankers were learning over the sample period. Therefore, we conclude that this new method is a significant improvement relative to those DEA models currently used by researchers, corporate management, and industrial regulatory bodies to evaluate performance of their respective interests.

16.
Using Monte Carlo simulations we study the small sample performance of the traditional TSLS, the LIML and four new jackknife IV estimators when the instruments are weak. We find that the new estimators and LIML have a smaller bias but a larger variance than the TSLS. In terms of root mean square error, neither LIML nor the new estimators perform uniformly better than the TSLS. The main conclusion from the simulations and an empirical application on labour supply functions is that in a situation with many weak instruments, there still does not exist an easy way to obtain reliable estimates in small samples. Better instruments and/or larger samples is the only way to increase precision in the estimates. Since the properties of the estimators are specific to each data-generating process and sample size it would be wise in empirical work to complement the estimates with a Monte Carlo study of the estimators' properties for the relevant sample size and data-generating process believed to be applicable. Copyright © 1999 John Wiley & Sons, Ltd.

17.

In stochastic frontier analysis, the conventional estimation of unit inefficiency is based on the mean/mode of the inefficiency, conditioned on the composite error. It is known that the conditional mean of inefficiency shrinks towards the mean rather than towards the unit inefficiency. In this paper, we analytically prove that the conditional mode cannot accurately estimate unit inefficiency, either. We propose regularized estimators of unit inefficiency that restrict the unit inefficiency estimators to satisfy some a priori assumptions, and derive the closed form regularized conditional mode estimators for the three most commonly used inefficiency densities. Extensive simulations show that, under common empirical situations, e.g., regarding sample size and signal-to-noise ratio, the regularized estimators outperform the conventional (unregularized) estimators when the inefficiency is greater than its mean/mode. Based on real data from the electricity distribution sector in Sweden, we demonstrate that the conventional conditional estimators and our regularized conditional estimators provide substantially different results for highly inefficient companies.
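The conventional conditional-mean estimator referred to here is the Jondrow et al. (JLMS) formula; for the normal/half-normal production frontier model it has a closed form, sketched below with the variance parameters treated as known for illustration:

```python
import numpy as np
from scipy.stats import norm

def jlms_mean(eps, sigma_u, sigma_v):
    """JLMS conditional mean E[u | eps] for the normal/half-normal frontier model,
    with composite error eps = v - u (v ~ N(0, sigma_v^2), u ~ half-normal)."""
    s2 = sigma_u ** 2 + sigma_v ** 2
    mu_star = -eps * sigma_u ** 2 / s2
    sig_star = sigma_u * sigma_v / np.sqrt(s2)
    z = mu_star / sig_star
    # E[u | eps] = sig_star * (phi(z)/Phi(z) + z)
    return sig_star * (norm.pdf(z) / norm.cdf(z) + z)

sigma_u = sigma_v = 1.0
vals = jlms_mean(np.array([-2.0, 0.0, 2.0]), sigma_u, sigma_v)
print(np.round(vals, 3))
```

The shrinkage the paper criticizes is visible directly: a very negative composite error (eps = -2, suggesting high inefficiency) yields an estimate of about 1.11, well below 2, while a very positive one still yields a strictly positive estimate, so all unit estimates are pulled toward the unconditional mean of u.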


18.
Pooling of data is often carried out to protect privacy or to save cost, with the claimed advantage that it does not lead to much loss of efficiency. We argue that this does not give the complete picture as the estimation of different parameters is affected to different degrees by pooling. We establish a ladder of efficiency loss for estimating the mean, variance, skewness and kurtosis, and more generally multivariate joint cumulants, in powers of the pool size. The asymptotic efficiency of the pooled data non-parametric/parametric maximum likelihood estimator relative to the corresponding unpooled data estimator is reduced by a factor equal to the pool size whenever the order of the cumulant to be estimated is increased by one. The implications of this result are demonstrated in case–control genetic association studies with interactions between genes. Our findings provide a guideline for the discriminate use of data pooling in practice and the assessment of its relative efficiency. As exact maximum likelihood estimates are difficult to obtain if the pool size is large, we address briefly how to obtain computationally efficient estimates from pooled data and suggest Gaussian estimation and non-parametric maximum likelihood as two feasible methods.

19.
The problem of estimating a normal mean with unknown variance is considered under an asymmetric loss function such that the associated risk is bounded from above by a known quantity. In the absence of a fixed sample size rule, a sequential stopping rule and two sequential estimators of the mean are proposed and second-order asymptotic expansions of their risk functions are derived. It is demonstrated that the sample mean becomes asymptotically inadmissible, being dominated by a shrinkage-type estimator. Also a shrinkage factor is incorporated in the stopping rule and similar inadmissibility results are established. Received September 1997

20.
To examine complex relationships among variables, researchers in human resource management, industrial-organizational psychology, organizational behavior, and related fields have increasingly used meta-analytic procedures to aggregate effect sizes across primary studies to form meta-analytic correlation matrices, which are then subjected to further analyses using linear models (e.g., multiple linear regression). Because missing effect sizes (i.e., correlation coefficients) and different sample sizes across primary studies can occur when constructing meta-analytic correlation matrices, the present study examined the effects of missingness under realistic conditions and various methods for estimating sample size (e.g., minimum sample size, arithmetic mean, harmonic mean, and geometric mean) on the estimated squared multiple correlation coefficient (R2) and the power of the significance test on the overall R2 in linear regression. Simulation results suggest that missing data had a more detrimental effect as the number of primary studies decreased and the number of predictor variables increased. It appears that using second-order sample sizes of at least 10 (i.e., independent effect sizes) can improve both statistical power and estimation of the overall R2 considerably. Results also suggest that although the minimum sample size should not be used to estimate sample size, the other sample size estimates appear to perform similarly.
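The competing sample-size estimates compared above are simple to compute and, for positive sizes, always ordered harmonic ≤ geometric ≤ arithmetic, which is why the choice matters for the power of the test on the overall R2. A minimal sketch on hypothetical per-study sample sizes:

```python
import numpy as np

def size_estimates(ns):
    """Candidate single sample sizes for a meta-analytic correlation matrix."""
    ns = np.asarray(ns, float)
    return {
        "minimum": ns.min(),
        "arithmetic": ns.mean(),
        "geometric": ns.prod() ** (1 / len(ns)),
        "harmonic": len(ns) / np.sum(1 / ns),
    }

# Hypothetical sample sizes of the primary studies behind one pooled correlation
est = size_estimates([50, 120, 200, 1000])
print({k: round(v, 1) for k, v in est.items()})
```

With skewed study sizes the four estimates diverge sharply (here roughly 50 vs. 116 vs. 186 vs. 342), illustrating the study's point that the minimum is an outlier choice while the three means behave more similarly.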
