Similar Articles
Found 20 similar articles (search time: 250 ms)
1.
Statistical Inference in Nonparametric Frontier Models: The State of the Art   Cited by: 14 (self-citations: 8; other citations: 6)
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
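To make the envelopment idea concrete, here is a minimal sketch (not any of the surveyed authors' implementations) of the input-oriented FDH efficiency score: the smallest radial contraction of a unit's inputs such that some observed unit still dominates it.

```python
import numpy as np

def fdh_input_efficiency(x0, y0, X, Y):
    """Input-oriented FDH efficiency of the unit (x0, y0).

    X: (n, p) observed inputs, Y: (n, q) observed outputs.
    A score below 1 means the unit could radially contract its inputs
    and still be dominated by an observed unit; 1 means FDH-efficient.
    Assumes (x0, y0) is dominated by at least one observation
    (e.g. the evaluated unit itself is in the sample).
    """
    x0, y0 = np.asarray(x0, float), np.asarray(y0, float)
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    dominating = np.all(Y >= y0, axis=1)          # units producing at least y0
    ratios = np.max(X[dominating] / x0, axis=1)   # radial contraction toward each
    return ratios.min()

# toy data: 4 firms, 1 input, 1 output
X = [[4.0], [2.0], [3.0], [5.0]]
Y = [[1.0], [1.0], [2.0], [2.0]]
print(fdh_input_efficiency([4.0], [1.0], X, Y))  # 0.5: firm 2 makes y >= 1 with half the input
```

DEA replaces the discrete dominance set above with its convex hull, which requires solving a small linear program per unit instead of the simple min-max.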

2.
In frontier analysis, most nonparametric approaches (DEA, FDH) are based on envelopment ideas which assume that, with probability one, all observed units belong to the attainable set. In these “deterministic” frontier models, statistical inference is now possible by using bootstrap procedures. In the presence of noise, envelopment estimators can deteriorate dramatically, since they are very sensitive to extreme observations that might result only from noise. DEA/FDH techniques would provide estimators with an error of the order of the standard deviation of the noise. This paper adapts some recent results on detecting change points [Hall P, Simar L (2002) J Am Stat Assoc 97:523–534] to improve the performance of the classical DEA/FDH estimators in the presence of noise. We show by simulated examples that the procedure works well, and better than the standard DEA/FDH estimators, when the noise is of moderate size in terms of the signal-to-noise ratio. It turns out that the procedure is also robust to outliers. The paper can be seen as a first attempt to formalize stochastic DEA/FDH estimators.

3.
It is well-known that the naive bootstrap yields inconsistent inference in the context of data envelopment analysis (DEA) or free disposal hull (FDH) estimators in nonparametric frontier models. For inference about efficiency of a single, fixed point, drawing bootstrap pseudo-samples of size m < n provides consistent inference, although coverages are quite sensitive to the choice of subsample size m. We provide a probabilistic framework in which these methods are shown to be valid for statistics composed of functions of DEA or FDH estimators. We examine a simple, data-based rule for selecting m suggested by Politis et al. (Stat Sin 11:1105–1124, 2001), and provide Monte Carlo evidence on the size and power of our tests. Our methods (i) allow for heterogeneity in the inefficiency process, and unlike previous methods, (ii) do not require multivariate kernel smoothing, and (iii) avoid the need for solutions of intermediate linear programs.
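The boundary-estimation problem behind these results can be illustrated with a hypothetical one-dimensional analogue (not the paper's DEA/FDH setting): estimating the upper endpoint of a uniform support. The naive bootstrap of the sample maximum is known to be inconsistent, while subsampling with m < n works because the subsample statistic mimics the limiting law at the correct rate.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                      # true frontier (upper support endpoint)
n = 500
x = rng.uniform(0, theta, n)
theta_hat = x.max()              # boundary estimator (analogue of FDH)

# Subsampling: the law of n * (theta - theta_hat) is mimicked by
# m * (theta_hat - theta_hat_m) computed on subsamples of size m << n.
m = int(n ** 0.5)                # crude choice of m; the paper studies a data-based rule
B = 2000
stats = np.empty(B)
for b in range(B):
    sub = rng.choice(x, size=m, replace=False)
    stats[b] = m * (theta_hat - sub.max())

# one-sided 95% bound: theta_hat <= theta <= upper with approx. 95% probability
upper = theta_hat + np.quantile(stats, 0.95) / n
print(theta_hat, upper)
```

The choice of m matters here exactly as the abstract notes: too small a subsample inflates the interval, too large reproduces the naive bootstrap's failure.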

4.
Understanding the effects of operational conditions and practices on productive efficiency can provide valuable economic and managerial insights. The conventional approach is to use a two-stage method where the efficiency estimates are regressed on contextual variables representing the operational conditions. The main problem of the two-stage approach is that it ignores the correlations between inputs and contextual variables. To address this shortcoming, we build on the recently developed regression interpretation of data envelopment analysis (DEA) to develop a new one-stage semi-nonparametric estimator that combines the nonparametric DEA-style frontier with a regression model of the contextual variables. The new method is referred to as stochastic semi-nonparametric envelopment of z variables data (StoNEZD). The StoNEZD estimator for the contextual variables is shown to be statistically consistent under less restrictive assumptions than those required by the two-stage DEA estimator. Further, the StoNEZD estimator is shown to be unbiased, asymptotically efficient, asymptotically normally distributed, and to converge at the standard parametric rate of order n^(-1/2). Therefore, the conventional methods of statistical testing and confidence intervals apply for asymptotic inference. Finite sample performance of the proposed estimators is examined through Monte Carlo simulations.

5.
In most empirical studies, once the best model has been selected according to a certain criterion, subsequent analysis is conducted conditionally on the chosen model. In other words, the uncertainty of model selection is ignored once the best model has been chosen. However, the true data-generating process is in general unknown and may not be consistent with the chosen model. In the analysis of productivity and technical efficiencies in stochastic frontier settings, if the estimated parameters or the predicted efficiencies differ across competing models, then it is risky to base the prediction on the selected model. Buckland et al. (Biometrics 53:603–618, 1997) have shown that if model selection uncertainty is ignored, the precision of the estimate is likely to be overestimated, the estimated confidence intervals of the parameters are often below the nominal level, and consequently, the prediction may be less accurate than expected. In this paper, we suggest using the model-averaged estimator based on multimodel inference to estimate stochastic frontier models. The potential advantages of the proposed approach are twofold: incorporating the model selection uncertainty into statistical inference, and reducing the model selection bias and variance of the frontier and technical efficiency estimators. The approach is demonstrated empirically using an Indian farm data set.
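The model-averaging idea in Buckland et al. can be sketched with information-criterion weights. The numbers below are hypothetical, not from the paper's farm application: each candidate model contributes to the averaged estimate in proportion to exp(-ΔAIC/2).

```python
import numpy as np

def aic_weights(aics):
    """Smoothed-AIC model weights: w_k proportional to exp(-0.5 * delta_AIC_k)."""
    a = np.asarray(aics, float)
    w = np.exp(-0.5 * (a - a.min()))   # subtract min for numerical stability
    return w / w.sum()

# average a quantity of interest (e.g. a predicted efficiency) over models
aics = [100.3, 101.1, 106.0]               # hypothetical AIC values
preds = np.array([0.82, 0.78, 0.90])       # hypothetical per-model predictions
w = aic_weights(aics)
print(w.round(3), float(w @ preds))        # the clearly worse model gets little weight
```

The averaged prediction thus hedges against having picked the wrong single model, at the cost of some extra variance bookkeeping (Buckland et al. also give a variance formula that adds a between-model term).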

6.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and model changes are easily implemented. Although WinBUGS may not be that efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible for applied researchers, and its generic structure allows for a lot of flexibility in model specification.

7.
This paper examines the widespread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some environmental variables in a second-stage analysis. In the literature, only two statistical models have been proposed in which second-stage regressions are well-defined and meaningful. In the model considered by Simar and Wilson (J Prod Anal 13:49–78, 2007), truncated regression provides consistent estimation in the second stage, whereas in the model proposed by Banker and Natarajan (Oper Res 56:48–58, 2008a), ordinary least squares (OLS) provides consistent estimation. This paper examines, compares, and contrasts the very different assumptions underlying these two models, and makes clear that second-stage OLS estimation is consistent only under very peculiar and unusual assumptions on the data-generating process that limit its applicability. In addition, we show that in either case, bootstrap methods provide the only feasible means for inference in the second stage. We also comment on ad hoc specifications of second-stage regression equations that ignore the part of the data-generating process that yields data used to obtain the initial DEA estimates.

8.
Stochastic FDH/DEA estimators for frontier analysis   Cited by: 2 (self-citations: 2; other citations: 0)
In this paper we extend the work of Simar (J Prod Anal 28:183–201, 2007), introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods for estimating production efficiency. Specifically, our approach first allows for statistical noise, as in stochastic frontier analysis (and in an even more flexible way), and second, it allows modelling multiple-input, multiple-output technologies without imposing parametric assumptions on the production relationship, as is done in nonparametric methods such as Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating the marginal effects on the inefficiency level jointly with the marginal effects of inputs. The approach is robust to heteroskedastic cases and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves the DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main problems of the original DEA/FDH estimators. The procedure performs well in various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (J Econom 137(1):1–27, 2007) method by making the resulting frontier smoother, monotonic and, if we wish, concave.

9.
This paper uses both Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH) models in order to determine different performance levels in a sample of 353 foreign equities operating in the Greek manufacturing sector. In particular, convex and non-convex models are used alongside bootstrap techniques in order to determine the effect of foreign ownership on SMEs’ performance. The study illustrates how the recent developments in efficiency analysis and statistical inference can be applied when evaluating performance issues. The analysis among the foreign equities indicates that the levels of foreign ownership have a positive effect on SMEs’ performance.

10.
DEA, DFA and SFA: A comparison   Cited by: 1 (self-citations: 5; other citations: 1)
The nonparametric data envelopment analysis (DEA) model has become increasingly popular in the analysis of productive efficiency, and the number of empirical applications is now very large. Recent theoretical and mathematical research has also contributed to a deeper understanding of the seemingly simple but inherently complex DEA model. Less effort has, however, been directed toward comparisons between DEA and other competing efficiency analysis models. This paper undertakes a comparison of the DEA, the deterministic parametric (DFA), and the stochastic frontier (SFA) models. Efficiency comparisons across models in the above categories are done based on 15 Colombian cement plants observed during 1968–1988.

11.
The Components of Output Growth: A Stochastic Frontier Analysis   Cited by: 1 (self-citations: 0; other citations: 1)
This paper uses Bayesian stochastic frontier methods to decompose output change into technical, efficiency and input changes. In the context of macroeconomic growth exercises, which typically involve small and noisy data sets, we argue that stochastic frontier methods are useful since they incorporate measurement error and assume a (flexible) parametric form for the production relationship. These properties enable us to calculate measures of uncertainty associated with the decomposition and minimize the risk of overfitting the noise in the data. Tools for Bayesian inference in such models are developed. An empirical investigation using data from 17 OECD countries for 10 years illustrates the practicality and usefulness of our approach.

12.
Data envelopment analysis (DEA) is a non-parametric approach for measuring the relative efficiencies of peer decision making units (DMUs). In recent years, it has been widely used to evaluate two-stage systems under different organization mechanisms. This study modifies the conventional leader–follower DEA models for two-stage systems by considering the uncertainty of data. The dual deterministic linear models are first constructed from the stochastic CCR models under the assumption that all components of inputs, outputs, and intermediate products are related only to some basic stochastic factors, which follow continuous and symmetric distributions with nonnegative compact supports. The stochastic leader–follower DEA models are then developed for measuring the efficiencies of the two stages. The stochastic efficiency of the whole system can be uniquely decomposed into the product of the efficiencies of the two stages. Relationships between stochastic efficiencies from the stochastic CCR and stochastic leader–follower DEA models are also discussed. An example involving commercial banks in China is analyzed using the proposed models under different risk levels.

13.
This paper applies the probabilistic approach developed by Daraio and Simar (J Prod Anal 24:93–121, 2005; Advanced Robust and Nonparametric Methods in Efficiency Analysis, Springer Science, New York, 2007a; J Prod Anal 28:13–32, 2007b) in order to develop conditional and unconditional data envelopment analysis (DEA) models for the measurement of countries’ environmental efficiency levels for a sample of 110 countries in 2007. In order to capture the effect of countries’ compliance with the Kyoto protocol agreement (KPA) policies, we condition first on the number of years since a country signed the KPA (up to 2007) and second on the percentage level of emission reductions the country is obliged to achieve. In particular, various DEA models are applied alongside bootstrap techniques in order to determine the effect of the KPA on countries’ environmental efficiencies. The study illustrates how the recent developments in efficiency analysis and statistical inference can be applied when evaluating environmental performance issues. The results indicate a nonlinear relationship between countries’ obliged percentage levels of emission reductions and their environmental efficiency levels. Finally, a similar nonlinear relationship is also recorded between the length of time a country has been a signatory to the KPA and its environmental efficiency levels.

14.
The mathematical programming-based technique data envelopment analysis (DEA) has often treated data as being deterministic. In response to the criticism that in most applications there is error and random noise in the data, a number of mathematically elegant solutions to incorporating stochastic variations in data have been proposed. In this paper, we propose a chance-constrained formulation of DEA that allows random variations in the data. We study properties of the ensuing efficiency measure using a small sample in which multiple inputs and a single output are correlated, and are the result of a stochastic process. We replicate the analysis using Monte Carlo simulations and conclude that using simulations provides a more flexible and computationally less cumbersome approach to studying the effects of noise in the data. We suggest that, in keeping with the tradition of DEA, the simulation approach allows users to explicitly consider different data generating processes and allows for greater flexibility in implementing DEA under stochastic variations in data.
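The simulation approach advocated here can be sketched in a few lines (a hypothetical toy setup, not the paper's design): redraw noisy inputs from an assumed data-generating process, re-evaluate a unit's envelopment-style efficiency each time, and summarize the resulting distribution of scores instead of reporting a single deterministic value.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([4.0, 2.0, 3.0, 5.0])       # observed inputs (one input per firm)
Y = np.array([1.0, 1.0, 2.0, 2.0])       # observed outputs (one output per firm)

def eff(x0, y0, X, Y):
    """Input-oriented FDH-type score with one input and one output."""
    dom = Y >= y0                         # firms producing at least y0
    return (X[dom] / x0).min()

# Monte Carlo over noise draws: re-evaluate firm 0 under perturbed inputs
R = 1000
scores = np.empty(R)
for r in range(R):
    Xn = X * np.exp(rng.normal(0.0, 0.05, X.size))   # assumed multiplicative noise
    scores[r] = eff(Xn[0], Y[0], Xn, Y)

print(scores.mean().round(3), scores.std().round(3))
```

The spread of `scores` gives a direct, assumption-explicit picture of how data noise propagates into the efficiency measure, which is the flexibility the abstract contrasts with closed-form chance-constrained programs.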

15.
In this paper we propose a new technique for incorporating environmental effects and statistical noise into a producer performance evaluation based on data envelopment analysis (DEA). The technique involves a three-stage analysis. In the first stage, DEA is applied to outputs and inputs only, to obtain initial measures of producer performance. In the second stage, stochastic frontier analysis (SFA) is used to regress first stage performance measures against a set of environmental variables. This provides, for each input or output (depending on the orientation of the first stage DEA model), a three-way decomposition of the variation in performance into a part attributable to environmental effects, a part attributable to managerial inefficiency, and a part attributable to statistical noise. In the third stage, either inputs or outputs (again depending on the orientation of the first stage DEA model) are adjusted to account for the impact of the environmental effects and the statistical noise uncovered in the second stage, and DEA is used to re-evaluate producer performance. Throughout the analysis emphasis is placed on slacks, rather than on radial efficiency scores, as appropriate measures of producer performance. An application to nursing homes is provided to illustrate the power of the three-stage methodology.

16.
Two popular approaches for efficiency measurement are a non-stochastic approach called data envelopment analysis (DEA) and a parametric approach called stochastic frontier analysis (SFA). Both approaches have modeling difficulty, particularly for ranking firm efficiencies. In this paper, a new parametric approach using quantile statistics is developed. The quantile statistic relies less on the stochastic model than SFA methods, and accounts for a firm's relationship to the other firms in the study by acknowledging the firm's influence on the empirical model, and its relationship, in terms of similarity of input levels, to the other firms. Copyright © 1999 John Wiley & Sons, Ltd.

17.
Most stochastic frontier models have focused on estimating average productive efficiency across all firms. The failure to estimate firm-specific efficiency has been regarded as a major limitation of previous stochastic frontier models. In this paper, we measure firm-level efficiency using panel data, and examine its finite sample distribution over a wide range of the parameter and model space. We also investigate the performance of the stochastic frontier approach using three estimators: maximum likelihood, generalized least squares and dummy variables (or the within estimator). Our results indicate that the performance of the stochastic frontier approach is sensitive to the form of the underlying technology and its complexity. The results appear to be quite stable across estimators. The within estimator is preferred, however, because of its weak assumptions and relative computational ease. The refereeing process of this paper was handled through J. van den Broeck.

18.
There are two main methods for measuring the efficiency of decision-making units (DMUs): data envelopment analysis (DEA) and stochastic frontier analysis (SFA). Each of these methods has advantages and disadvantages. DEA is more popular in the literature due to its simplicity, as it does not require any pre-assumption and can be used for measuring the efficiency of DMUs with multiple inputs and multiple outputs, whereas SFA is a parametric approach that is applicable to multiple inputs and only a single output. Since many applied studies feature multiple output variables, SFA cannot be used in such cases. In this research, a new method to transform multiple outputs into a virtual single output is proposed. Efficiency scores calculated from the virtual single output by the proposed method are close (or even identical, depending on the targeted parameters, at the expense of computation time and resources) to the efficiency scores obtained from DEA with multiple outputs. This enables us to use SFA with a virtual single output. The proposed method is validated using a simulation study, and its usefulness is demonstrated with a real application using a hospital dataset from Turkey.

19.
We employ bootstrap techniques in a production frontier framework to provide statistical inference for each component in the decomposition of labor productivity growth, which has essentially been ignored in this literature. We show that only two of the four components (efficiency changes and human capital accumulation) have significantly contributed to growth in Africa. Although physical capital accumulation is the largest force, it is not statistically significant on average. Thus, ignoring statistical significance would falsely identify physical capital accumulation as a major driver of growth in Africa when it is not.

20.
The field of productive efficiency analysis is currently divided between two main paradigms: the deterministic, nonparametric Data Envelopment Analysis (DEA) and the parametric Stochastic Frontier Analysis (SFA). This paper examines an encompassing semiparametric frontier model that combines the DEA-type nonparametric frontier, which satisfies monotonicity and concavity, with the SFA-style stochastic homoskedastic composite error term. To estimate this model, a new two-stage method is proposed, referred to as Stochastic Non-smooth Envelopment of Data (StoNED). The first stage of the StoNED method applies convex nonparametric least squares (CNLS) to estimate the shape of the frontier without any assumptions about its functional form or smoothness. In the second stage, the conditional expectations of inefficiency are estimated based on the CNLS residuals, using the method of moments or pseudolikelihood techniques. Although in a cross-sectional setting distinguishing inefficiency from noise in general requires distributional assumptions, we also show how these can be relaxed in our approach if panel data are available. Performance of the StoNED method is examined using Monte Carlo simulations.
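The first (CNLS) stage can be sketched in the univariate case as a quadratic program: fit firm-specific affine functions subject to Afriat-type concavity inequalities and nonnegative slopes. This is a minimal illustration using a general-purpose solver, not the StoNED authors' implementation; all variable names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def cnls_fit(x, y):
    """Univariate convex nonparametric least squares (CNLS) sketch:
    minimize sum_i (y_i - a_i - b_i * x_i)^2 subject to the Afriat
    inequalities a_i + b_i x_i <= a_j + b_j x_i (concavity) and
    b_i >= 0 (monotonicity), with no assumed functional form."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)

    def sse(z):                       # z = [a_1..a_n, b_1..b_n]
        a, b = z[:n], z[n:]
        return np.sum((y - a - b * x) ** 2)

    cons = []
    for i in range(n):
        for j in range(n):
            if i != j:                # a_i + b_i x_i <= a_j + b_j x_i
                cons.append({"type": "ineq",
                             "fun": lambda z, i=i, j=j:
                                 z[j] + z[n + j] * x[i] - z[i] - z[n + i] * x[i]})
    bounds = [(None, None)] * n + [(0, None)] * n      # slopes b_i >= 0
    z0 = np.concatenate([y, np.ones(n)])               # feasible-ish start
    res = minimize(sse, z0, constraints=cons, bounds=bounds, method="SLSQP")
    a, b = res.x[:n], res.x[n:]
    return a + b * x                  # fitted values on the concave curve

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 2.3, 2.7, 2.8])   # roughly concave, increasing data
fits = cnls_fit(x, y)
print(fits.round(2))
```

The second StoNED stage would then treat `y - fits` as composite residuals and recover the inefficiency component via the method of moments or pseudolikelihood, which this sketch does not attempt.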


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号