Similar Literature
20 similar documents found; search time: 9 ms
1.
In frontier analysis, most nonparametric approaches (DEA, FDH) are based on envelopment ideas, which assume that, with probability one, all observed units belong to the attainable set. In these “deterministic” frontier models, statistical inference is now possible by using bootstrap procedures. In the presence of noise, however, envelopment estimators can behave erratically, since they are very sensitive to extreme observations that may result from noise alone: DEA/FDH techniques then provide estimators with an error of the order of the standard deviation of the noise. This paper adapts recent results on detecting change points [Hall P, Simar L (2002) J Am Stat Assoc 97:523–534] to improve the performance of the classical DEA/FDH estimators in the presence of noise. We show by simulated examples that the procedure works well, and better than the standard DEA/FDH estimators, when the noise is of moderate size in terms of the signal-to-noise ratio. It turns out that the procedure is also robust to outliers. The paper can be seen as a first attempt to formalize stochastic DEA/FDH estimators.
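As background for the envelopment idea discussed above, the sketch below shows the basic FDH output-frontier estimate whose noise sensitivity the paper's change-point correction targets. It is illustrative only: the function name and the synthetic data-generating process are ours, not the paper's.

```python
import numpy as np

def fdh_output_frontier(x0, X, Y):
    """Free disposal hull (FDH) output frontier evaluated at input level x0.

    X : (n, p) observed inputs, Y : (n,) observed outputs.
    Returns the maximal observed output among units using no more input
    than x0 (componentwise), i.e. the FDH envelopment estimate. A single
    noisy observation with large Y can shift this value upward, which is
    exactly the sensitivity discussed in the abstract.
    """
    dominated = np.all(X <= x0, axis=1)   # units using no more input than x0
    if not dominated.any():
        return np.nan                     # frontier undefined at this x0
    return Y[dominated].max()

# Tiny illustration with synthetic data (true frontier y = sqrt(x))
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(100, 1))
Y = np.sqrt(X[:, 0]) * rng.uniform(0.5, 1.0, size=100)
print(fdh_output_frontier(np.array([5.0]), X, Y))
```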

2.
In production theory and efficiency analysis, we estimate the production frontier, the locus of the maximal attainable level of an output (the production), given a set of inputs (the production factors). In other setups, we instead estimate an input (or cost) frontier, the minimal level of the input (cost) attainable for a given set of outputs (goods or services produced). In both cases the problem can be viewed as estimating a surface under shape constraints (monotonicity, …). In this paper we derive the theory of a frontier estimator having an asymptotically normal distribution. It is based on the order-m partial frontier, where we let the order m = m_n converge to infinity as n → ∞, but at a slow rate. The final estimator is then corrected for its inherent bias, so we can view it as a regularized frontier. In addition, the estimator is more robust to extreme values and outliers than the usual nonparametric frontier estimators, like FDH, and than the unregularized order-m_n estimator of Cazals et al. (2002), which converges to the frontier with a Weibull distribution if m_n → ∞ fast enough as n → ∞. The performance of our estimators is evaluated in finite samples and compared to other estimators through Monte Carlo experiments, showing better behavior (in terms of robustness, bias, MSE and achieved coverage of the resulting confidence intervals). The practical implementation and the robustness properties are illustrated on simulated data sets and on a real data set.
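The order-m partial frontier that the abstract builds on can be approximated by a simple Monte Carlo average, following the construction of Cazals et al. (2002). The sketch below is our own minimal rendering with assumed variable names; it shows the output-oriented version only, while the paper's regularized estimator additionally lets m grow with n and applies a bias correction, both omitted here.

```python
import numpy as np

def order_m_frontier(x0, X, Y, m=25, B=2000, rng=None):
    """Monte Carlo approximation of the order-m output frontier:
    the expected maximum of m output draws among units with inputs <= x0.

    Unlike the full FDH envelope, this expected maximum of a subsample
    of size m does not envelop all data points, which is the source of
    its robustness to extreme values and outliers."""
    rng = rng or np.random.default_rng()
    pool = Y[np.all(X <= x0, axis=1)]
    if pool.size == 0:
        return np.nan
    draws = rng.choice(pool, size=(B, m), replace=True)  # B resamples of size m
    return draws.max(axis=1).mean()

# Quick demo on synthetic data (true frontier y = sqrt(x))
rng = np.random.default_rng(1)
X = rng.uniform(1, 10, size=(200, 1))
Y = np.sqrt(X[:, 0]) * rng.uniform(0.5, 1.0, size=200)
print(order_m_frontier(np.array([5.0]), X, Y, m=25))
```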

3.
Journal of Productivity Analysis - Model uncertainty is a prominent feature in many applied settings. This is certainly true in the efficiency analysis realm, where concerns over the proper...

4.
The paper proposes a stochastic frontier model with random coefficients to separate technical inefficiency from technological differences across firms, and free the frontier model from the restrictive assumption that all firms must share exactly the same technological possibilities. Inference procedures for the new model are developed based on Bayesian techniques, and computations are performed using Gibbs sampling with data augmentation to allow finite-sample inference for underlying parameters and latent efficiencies. An empirical example illustrates the procedure. Copyright © 2002 John Wiley & Sons, Ltd.
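To make the data-augmentation step concrete: the latent inefficiencies are treated as additional unknowns, and the Gibbs sampler alternates between their full conditionals and those of the parameters. The display below is a schematic for the simplest normal-exponential case in our own notation, not the paper's random-coefficient specification:

```latex
% y_i = x_i^\top\beta - u_i + v_i,\quad v_i \sim N(0,\sigma_v^2),\quad
% u_i \sim \operatorname{Exp}(1/\sigma_u).
% Full conditional of each latent inefficiency (a truncated normal):
u_i \mid \beta, \sigma_v, \sigma_u, y
  \;\sim\; N^{+}\!\Bigl(x_i^\top\beta - y_i - \tfrac{\sigma_v^2}{\sigma_u},\; \sigma_v^2\Bigr).
```

Given a draw of the latent u's, the conditional for the coefficients is a standard normal linear-model update on the adjusted responses y_i + u_i, so the sweep cycles through tractable steps.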

5.
This paper proposes a tail-truncated stochastic frontier model that allows for the truncation of technical efficiency from below. The truncation bound implies the inefficiency threshold for survival. Specifically, this paper assumes a uniform distribution of technical inefficiency and derives the likelihood function. Even though this distributional assumption imposes the strong restriction that technical inefficiency has a uniform probability density over [0, θ], where θ is the threshold parameter, the model has two advantages: (1) the reduction in the number of parameters compared with more complicated tail-truncated models allows better performance in numerical optimization; and (2) it is useful for empirical studies of the distribution of efficiency or productivity, particularly the truncation of the distribution. The Monte Carlo simulation results support the argument that this model approximates the distribution of inefficiency precisely, not only when the data-generating process follows the uniform distribution but also when it follows the truncated half-normal distribution, provided the inefficiency threshold is small.
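For concreteness, under the abstract's assumptions (normal noise v with variance σ_v², uniform inefficiency u on [0, θ], composed error ε = v − u), the density entering the likelihood has a simple closed form. The notation below is ours, a sketch rather than the paper's exact parameterization:

```latex
f_\varepsilon(\varepsilon)
  = \int_0^{\theta} \frac{1}{\theta}\cdot\frac{1}{\sigma_v}\,
      \phi\!\left(\frac{\varepsilon+u}{\sigma_v}\right) du
  = \frac{1}{\theta}\left[\Phi\!\left(\frac{\varepsilon+\theta}{\sigma_v}\right)
      - \Phi\!\left(\frac{\varepsilon}{\sigma_v}\right)\right],
\qquad
\log L(\beta,\sigma_v,\theta)
  = \sum_{i=1}^{n}\log f_\varepsilon\!\left(y_i - x_i^\top\beta\right).
```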

6.
Worker peer-effects and managerial selection have received limited attention in the stochastic frontier analysis literature. We develop a parametric production function model that allows for worker peer-effects in output and worker-level inefficiency that is correlated with a manager’s selection of worker teams. The model is the usual “composed error” specification of the stochastic frontier model, but we allow for managerial selectivity (endogeneity) that works through the worker-level inefficiency term. The new specification captures both worker-level inefficiency and the manager’s ability to efficiently select teams to produce output. As the correlation between the manager’s selection equation and worker inefficiency goes to zero, our parametric model reduces to the normal-exponential stochastic frontier model of Aigner et al. (1977) with peer-effects. A comprehensive application to the NBA is provided.

7.
Journal of Productivity Analysis - In this paper we provide several new specifications within the true random effects model, as well as stochastic frontier models estimated with GLS and MLE, that...

8.
This paper considers the disturbance specification ε = v − u of the stochastic frontier model. For v distributed zero-mean normal and u half-normal or exponential, we evaluate the population correlation coefficients between u and three estimators of u (the conditional mean E(u|ε) and two linear estimators) for various values of the signal-to-noise ratio.
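In the normal-half-normal model, E(u|ε) has the well-known Jondrow et al. closed form, and the population correlations the abstract evaluates can be approximated by simulation. The sketch below is our own code with arbitrary parameter values, intended only to illustrate the quantity being studied:

```python
import numpy as np
from scipy.stats import norm

def corr_u_jlms(sigma_u=1.0, sigma_v=1.0, n=200_000, seed=0):
    """Monte Carlo approximation of the population correlation between
    half-normal inefficiency u and the conditional-mean estimator
    E(u | eps) in the normal-half-normal model eps = v - u."""
    rng = rng_local = np.random.default_rng(seed)
    u = np.abs(rng.normal(0.0, sigma_u, n))        # half-normal inefficiency
    v = rng.normal(0.0, sigma_v, n)                # symmetric noise
    eps = v - u
    s2 = sigma_u**2 + sigma_v**2
    sig_star = sigma_u * sigma_v / np.sqrt(s2)     # conditional std dev
    mu_star = -eps * sigma_u**2 / s2               # conditional location
    z = mu_star / sig_star
    e_u_given_eps = mu_star + sig_star * norm.pdf(z) / norm.cdf(z)
    return np.corrcoef(u, e_u_given_eps)[0, 1]

# The correlation rises with the signal-to-noise ratio sigma_u / sigma_v
for su in (0.5, 1.0, 2.0):
    print(su, round(corr_u_jlms(sigma_u=su, sigma_v=1.0), 3))
```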

9.
Stochastic frontier models with multiple time-varying individual effects   (total citations: 3; self-citations: 1; citations by others: 2)
This paper proposes a flexible time-varying stochastic frontier model. Similarly to Lee and Schmidt [1993, In: Fried H, Lovell CAK, Schmidt S (eds) The measurement of productive efficiency: techniques and applications. Oxford University Press, Oxford], we assume that individual firms’ technical inefficiencies vary over time. However, the model, which we call the “multiple time-varying individual effects” model, is more general in that it allows multiple factors determining firm-specific time-varying technical inefficiencies. This allows the temporal pattern of inefficiency to vary over firms. The number of such factors can be consistently estimated. The model is applied to data on Indonesian rice farms, and the changes in the efficiency rankings of farms over time demonstrate the model’s flexibility.
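Schematically, where Lee and Schmidt (1993) allow one common temporal pattern scaled by a firm-specific effect, the multiple-effects model lets several such factors enter. The display below is our own shorthand for this structure, not the paper's exact notation:

```latex
% Lee and Schmidt (1993): u_{it} = \xi_t\,\delta_i \quad (\text{one temporal pattern}).
% Multiple time-varying individual effects (p factors):
y_{it} = x_{it}^\top\beta - u_{it} + v_{it},
\qquad
u_{it} = \sum_{k=1}^{p}\xi_{kt}\,\delta_{ik}.
```

Each firm's inefficiency path is then its own combination of the p temporal patterns, and p itself is estimable from the data, per the abstract.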

10.
Journal of Productivity Analysis - This paper develops a theoretical framework for modeling farm households’ joint production and consumption decisions in the presence of technical...

11.

In stochastic frontier analysis, the conventional estimation of unit inefficiency is based on the mean or mode of the inefficiency, conditional on the composite error. It is known that the conditional mean of inefficiency shrinks towards the overall mean rather than towards the unit inefficiency. In this paper, we analytically prove that the conditional mode cannot accurately estimate unit inefficiency either. We propose regularized estimators of unit inefficiency that restrict the estimators to satisfy some a priori assumptions, and derive closed-form regularized conditional mode estimators for the three most commonly used inefficiency densities. Extensive simulations show that, under common empirical situations, e.g., regarding sample size and signal-to-noise ratio, the regularized estimators outperform the conventional (unregularized) estimators when the inefficiency is greater than its mean/mode. Using real data from the electricity distribution sector in Sweden, we demonstrate that the conventional conditional estimators and our regularized conditional estimators provide substantially different results for highly inefficient companies.
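To see why the conditional mode is problematic as a unit-level estimator, recall the standard normal-half-normal result (our notation; a sketch of the textbook formulas, not the paper's derivation): u given ε is a normal distribution truncated at zero, so its mode collapses to exactly zero whenever the conditional location μ* is negative, assigning full efficiency to every unit with a sufficiently favorable composite error.

```latex
u \mid \varepsilon \;\sim\; N^{+}\!\left(\mu_*,\,\sigma_*^2\right),
\qquad
\mu_* = -\,\varepsilon\,\frac{\sigma_u^2}{\sigma_u^2+\sigma_v^2},
\qquad
\sigma_*^2 = \frac{\sigma_u^2\,\sigma_v^2}{\sigma_u^2+\sigma_v^2},
\qquad
\operatorname{Mode}(u \mid \varepsilon) = \max\{0,\ \mu_*\}.
```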


12.
13.
In this paper, we propose a general approach to finding the closest targets for a given unit according to a previously specified criterion of similarity. The idea behind this approach is that closer targets entail less demanding changes in the inputs and outputs that an inefficient unit must make in order to perform efficiently. Similarity can be interpreted as closeness between the inputs and outputs of the assessed unit and the proposed targets, and this closeness can be measured by using either different distance functions or different efficiency measures. Depending on how closeness is measured, we develop several mathematical programming problems that can be solved easily and are guaranteed to reach the closest projection point on the Pareto-efficient frontier. Our approach thus leads to the closest targets by means of a single-stage procedure, which is easier to handle than procedures based on algorithms aimed at identifying all the facets of the efficient frontier.
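For contrast with the closest-target idea, the sketch below implements the standard additive DEA model, which maximizes total slacks and therefore projects onto the most distant dominating targets; the paper's programs instead minimize a measure of distance to the frontier. The variable names and the VRS convexity constraint are our assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

def additive_dea_targets(j0, X, Y):
    """Standard additive (slacks-based) DEA model under VRS for unit j0.

    Maximizes the sum of input and output slacks, so the projection it
    returns is the most demanding benchmark rather than the closest one.
    X : (n, p) inputs, Y : (n, q) outputs."""
    n, p = X.shape
    q = Y.shape[1]
    # Decision vector z = [lambda (n), s_minus (p), s_plus (q)], all >= 0.
    c = np.concatenate([np.zeros(n), -np.ones(p + q)])   # maximize total slacks
    A_eq = np.block([
        [X.T, np.eye(p), np.zeros((p, q))],              # X'lam + s- = x_{j0}
        [Y.T, np.zeros((q, p)), -np.eye(q)],             # Y'lam - s+ = y_{j0}
        [np.ones((1, n)), np.zeros((1, p + q))],         # sum(lam) = 1  (VRS)
    ])
    b_eq = np.concatenate([X[j0], Y[j0], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
    lam = res.x[:n]
    return X.T @ lam, Y.T @ lam                          # projected input/output targets
```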

14.
Quantile models and estimators for data analysis   (total citations: 1; self-citations: 0; citations by others: 1)
Quantile regression is used to estimate the cross-sectional relationship between high school characteristics and student achievement as measured by ACT scores. The importance of school characteristics for student achievement has traditionally been framed in terms of the effect on the expected value. With quantile regression, the impact of school characteristics is allowed to differ at the mean and at the quantiles of the conditional distribution. Like robust estimation, the quantile approach detects relationships missed by traditional data analysis. Robust estimates detect the influence of the bulk of the data, whereas quantile estimates detect the influence of covariates on alternate parts of the conditional distribution. Since our design consists of multiple responses (individual student ACT scores) at fixed explanatory variables (school characteristics), the quantile model can be estimated by the usual regression quantiles, but additionally by a regression on the empirical quantile at each school. This is similar to least squares, where the estimate based on the entire data set is identical to weighted least squares on the school averages. Unlike least squares, however, the regression through the quantiles produces a different estimate than the regression quantiles.
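A minimal quantile-regression sketch of the kind of fit described above, using statsmodels; the variable names and the synthetic stand-in for the ACT data are ours, for illustration only:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
school_x = rng.uniform(0, 1, size=(n, 2))            # school characteristics
act = 18 + school_x @ np.array([4.0, 2.0]) + rng.normal(0, 2, n)

X = sm.add_constant(school_x)
for q in (0.1, 0.5, 0.9):                            # lower tail, median, upper tail
    fit = sm.QuantReg(act, X).fit(q=q)
    print(q, np.round(np.asarray(fit.params), 3))    # effects may differ across quantiles
```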

15.
This paper presents stochastic models in data envelopment analysis (DEA) that allow for possible variations in inputs and outputs. The efficiency measure of a decision-making unit (DMU) is defined via joint probabilistic comparisons of inputs and outputs with other DMUs and can be characterized by solving a chance constrained programming problem. By utilizing the theory of chance constrained programming, deterministic equivalents are obtained both for multivariate symmetric random disturbances and for a single random factor in the production relationships. The linear deterministic equivalent and its dual form are obtained via goal programming theory under the assumption of a single random factor. An analysis of stochastic variable returns to scale is developed using the idea of stochastic supporting hyperplanes. The relationships of our stochastic DEA models to some conventional DEA models are also discussed.
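The mechanism behind such deterministic equivalents is standard: a linear chance constraint with normally distributed coefficients collapses to a deterministic constraint involving the normal quantile. Sketched in our notation, as a generic illustration rather than the paper's exact model:

```latex
\tilde{a} \sim N(\mu, \Sigma):\qquad
\Pr\{\tilde{a}^{\top}\lambda \le b\} \ge 1-\alpha
\;\Longleftrightarrow\;
\mu^{\top}\lambda + \Phi^{-1}(1-\alpha)\,\sqrt{\lambda^{\top}\Sigma\,\lambda} \;\le\; b .
```

For α ≤ 1/2 the quantile Φ^{-1}(1−α) is nonnegative and the constraint is convex, which is what makes deterministic equivalents and their duals tractable to solve.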

16.
17.
Pareto-Koopmans efficiency in Data Envelopment Analysis (DEA) is extended to stochastic inputs and outputs via probabilistic input-output vector comparisons in a given empirical production (possibility) set. In contrast to other approaches that have used chance constrained programming formulations in DEA, the emphasis here is on joint chance constraints. An assumption of arbitrary but known probability distributions leads to the P-Model of chance constrained programming. A necessary condition for a DMU to be stochastically efficient and a sufficient condition for a DMU to be non-stochastically efficient are provided. Deterministic equivalents using the zero-order decision rules of chance constrained programming and multivariate normal distributions take the form of an extended version of the additive model of DEA. Contact is also maintained with all of the other presently available deterministic DEA models, in the form of easily identified extensions that can be used to formalize the treatment of efficiency when stochastic elements are present.

18.
Summary. The concept of a regular estimator is due to Roy and Chakravarti. In applying it, they confined themselves to the most general class of linear estimators. The present paper considers some subclasses of linear estimators.

19.
The field of productive efficiency analysis is currently divided between two main paradigms: the deterministic, nonparametric Data Envelopment Analysis (DEA) and the parametric Stochastic Frontier Analysis (SFA). This paper examines an encompassing semiparametric frontier model that combines the DEA-type nonparametric frontier, which satisfies monotonicity and concavity, with the SFA-style stochastic homoskedastic composite error term. To estimate this model, a new two-stage method is proposed, referred to as Stochastic Non-smooth Envelopment of Data (StoNED). The first stage of the StoNED method applies convex nonparametric least squares (CNLS) to estimate the shape of the frontier without any assumptions about its functional form or smoothness. In the second stage, the conditional expectations of inefficiency are estimated based on the CNLS residuals, using the method of moments or pseudolikelihood techniques. Although in a cross-sectional setting distinguishing inefficiency from noise generally requires distributional assumptions, we also show how these can be relaxed in our approach if panel data are available. The performance of the StoNED method is examined using Monte Carlo simulations.
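The second-stage decomposition mentioned above has a particularly simple method-of-moments form under the usual normal/half-normal assumptions. The sketch below is our code, not the authors'; the CNLS first stage that produces the residuals is omitted, and only the moment decomposition is shown:

```python
import numpy as np

def stoned_moment_decomposition(residuals):
    """Method-of-moments second stage of StoNED (normal/half-normal case).

    Splits the variance of the CNLS residuals into a half-normal
    inefficiency scale sigma_u and a normal noise scale sigma_v, using
    the fact that the composed error eps = v - u has third central
    moment sqrt(2/pi) * (1 - 4/pi) * sigma_u**3 (negative when
    inefficiency is present)."""
    e = residuals - residuals.mean()
    m2 = np.mean(e ** 2)
    m3 = np.mean(e ** 3)
    c3 = np.sqrt(2.0 / np.pi) * (1.0 - 4.0 / np.pi)   # approx -0.218
    if m3 < 0:
        sigma_u = (m3 / c3) ** (1.0 / 3.0)            # both negative -> positive
    else:
        sigma_u = 0.0                                  # wrong skew: no inefficiency detected
    sigma_v = np.sqrt(max(m2 - (1.0 - 2.0 / np.pi) * sigma_u ** 2, 0.0))
    return sigma_u, sigma_v

# The CNLS fit is then shifted up by E(u) = sigma_u * sqrt(2 / pi)
# to turn the average-practice curve into a frontier estimate.
```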

20.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, a freely available software package. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and that model changes are easily implemented. Although WinBUGS may not be as efficient for more complicated models, it makes Bayesian inference with stochastic frontier models easily accessible to applied researchers, and its generic structure allows for a lot of flexibility in model specification.
