Similar literature (20 results)
1.
In some applications of data envelopment analysis (DEA) there may be doubt as to whether all the DMUs form a single group with a common efficiency distribution. The Mann–Whitney rank statistic has been used to evaluate whether two groups of DMUs come from a common efficiency distribution under the assumption that they share a common frontier, and to test whether the two groups have a common frontier. These procedures have subsequently been extended using the Kruskal–Wallis rank statistic to consider more than two groups. This technical note identifies problems with the second of these applications of both the Mann–Whitney and Kruskal–Wallis rank statistics. It also considers possible alternative methods of testing whether groups have a common frontier, and the difficulties of disaggregating managerial and programmatic efficiency within a non-parametric framework.
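As a minimal illustration of the rank tests discussed above (not the note's own analysis), the two-group and multi-group comparisons can be sketched with SciPy, using simulated efficiency scores in place of real DEA output:

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(0)
# Hypothetical efficiency scores in (0, 1] for three groups of DMUs.
group_a = rng.beta(5, 2, size=30)
group_b = rng.beta(5, 2, size=25)
group_c = rng.beta(3, 3, size=20)

# Two groups: do they share a common efficiency distribution?
u_stat, u_p = mannwhitneyu(group_a, group_b, alternative="two-sided")

# More than two groups: the Kruskal-Wallis extension.
h_stat, h_p = kruskal(group_a, group_b, group_c)

print(f"Mann-Whitney U={u_stat:.1f}, p={u_p:.3f}")
print(f"Kruskal-Wallis H={h_stat:.2f}, p={h_p:.3f}")
```

Note that these tests compare distributions of *estimated* efficiencies; the note's point is precisely that applying them to test for a common frontier is problematic.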

2.

The presence of outliers in the data has implications for stochastic frontier analysis, and indeed any performance analysis methodology, because they may lead to imprecise parameter estimates and, crucially, lead to an exaggerated spread of efficiency predictions. In this paper we replace the normal distribution for the noise term in the standard stochastic frontier model with a Student’s t distribution, which generalises the normal distribution by adding a shape parameter governing the degree of kurtosis. This has the advantages of introducing flexibility in the heaviness of the tails, which can be determined by the data, as well as containing the normal distribution as a limiting case, and we outline how to test against the standard model. Monte Carlo simulation results for the maximum simulated likelihood estimator confirm that the model recovers appropriate frontier and distributional parameter estimates under various values of the true shape parameter. The simulation results also indicate the influence of a phenomenon we term ‘wrong kurtosis’ in the case of small samples, which is analogous to the issue of ‘wrong skewness’ previously identified in the literature. We apply a Student’s t-half normal cost frontier to data for highways authorities in England, and this formulation is found to be preferred by statistical testing to the comparator normal-half normal cost frontier model. The model yields a significantly narrower range of efficiency predictions, which are non-monotonic at the tails of the residual distribution.
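The role of the shape parameter can be seen in a small simulation of the composed error (a sketch under assumed parameter values, not the paper's maximum simulated likelihood estimator):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
n, sigma_v, sigma_u, nu = 200_000, 0.3, 0.4, 10.0  # nu: assumed t shape parameter

v_t = sigma_v * rng.standard_t(nu, size=n)       # heavy-tailed Student's t noise
v_norm = sigma_v * rng.standard_normal(n)        # normal limiting case (nu -> inf)
u = np.abs(sigma_u * rng.standard_normal(n))     # half-normal inefficiency

eps_t = v_t + u                                  # composed error, cost frontier form
eps_norm = v_norm + u

# Excess kurtosis of the t noise is 6/(nu - 4) = 1 here, versus 0 for the
# normal, so the t-based composed error has visibly heavier tails.
print(kurtosis(v_t), kurtosis(v_norm))
```

As nu grows the two errors become indistinguishable, which is the sense in which the standard normal-half normal model is nested as a limiting case.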


3.
The paper proposes a stochastic frontier model with random coefficients to separate technical inefficiency from technological differences across firms, and free the frontier model from the restrictive assumption that all firms must share exactly the same technological possibilities. Inference procedures for the new model are developed based on Bayesian techniques, and computations are performed using Gibbs sampling with data augmentation to allow finite‐sample inference for underlying parameters and latent efficiencies. An empirical example illustrates the procedure. Copyright © 2002 John Wiley & Sons, Ltd.

4.
Statistical Inference in Nonparametric Frontier Models: The State of the Art
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
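The FDH estimator and the idea of bootstrapping it can be sketched for the single-input, single-output case on simulated data. This is only for concreteness: the naive resample shown here is known to be inconsistent for frontier estimators, which is exactly why the smoothed bootstrap procedures surveyed in the paper are needed.

```python
import numpy as np

def fdh_input_eff(x, y):
    """Input-oriented FDH efficiency for single-input, single-output data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    theta = np.empty(len(x))
    for i in range(len(x)):
        dominating = x[y >= y[i]]            # units producing at least y_i
        theta[i] = dominating.min() / x[i]   # best attainable input contraction
    return theta

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 50)
y = np.sqrt(x) * rng.uniform(0.6, 1.0, 50)   # frontier y = sqrt(x), one-sided noise

theta = fdh_input_eff(x, y)                  # each score lies in (0, 1]

# Naive bootstrap of unit 0's efficiency (illustration only; inconsistent).
boot = []
for _ in range(200):
    idx = rng.integers(0, 50, 50)
    xb, yb = np.append(x[idx], x[0]), np.append(y[idx], y[0])
    boot.append(fdh_input_eff(xb, yb)[-1])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Each unit dominates itself, so the FDH score is always in (0, 1]; the bootstrap appends the evaluated unit to every resampled reference set for the same reason.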

5.
This article introduces a new count data stochastic frontier model that researchers can use to study efficiency in production when the output variable is a count (so that its conditional distribution is discrete). We discuss parametric and nonparametric estimation of the model, and a Monte Carlo study is presented to evaluate the merits and applicability of the new model in small samples. Finally, we use the methods discussed in this article to estimate a production function for the number of patents awarded to a firm given expenditure on R&D.
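The basic structure of such a model, a count output whose conditional mean is the frontier scaled down by inefficiency, can be sketched with simulated data (hypothetical parameter values and a Poisson-mixture form chosen for illustration, not the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
x = rng.uniform(0, 1, n)

beta0, beta1, sigma_u = 1.0, 1.5, 0.5
frontier = np.exp(beta0 + beta1 * x)            # frontier conditional mean
u = np.abs(sigma_u * rng.standard_normal(n))    # half-normal inefficiency
y = rng.poisson(frontier * np.exp(-u))          # discrete (count) output

# Because u is independent of x, E[y] = E[frontier] * E[exp(-u)]: a plain
# Poisson regression would recover a frontier shifted down by mean efficiency.
eff_mean = np.exp(-u).mean()
ratio = y.mean() / frontier.mean()
print(ratio, eff_mean)
```

This is the misspecification the abstract points to: treating the count output as continuous (or ignoring the one-sided term) conflates the frontier with average practice.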

6.
Most econometric models of intrahousehold behavior assume that household decision making is efficient, i.e., utility realizations lie on the Pareto frontier. In this paper, we investigate this claim by adding a number of participation constraints to the household allocation problem. Short-run constraints ensure that each spouse obtains a utility level at least equal to what they would realize under (inefficient) Nash equilibrium. Long-run constraints ensure that each spouse obtains a utility level at least equal to what they would realize by cheating on the efficient allocation and receiving Nash equilibrium payoffs in all successive periods. Given household characteristics and the (common) discount factor of the spouses, not all households may be able to attain payoffs on the Pareto frontier. We estimate these models using a Method of Simulated Moments estimator and data from one wave of the Panel Study of Income Dynamics. We find that both short- and long-run constraints are binding for sizable proportions of households in the sample. We conclude that it is important to carefully model the constraint sets household members face when modeling household allocation decisions, and to allow for the possibility that efficient outcomes may not be implementable for some households.

7.
We introduce a new panel data estimation technique for production and cost functions, the recursive thick frontier approach (RTFA). RTFA has two advantages over existing econometric frontier methods. First, technical inefficiency is allowed to be dependent on the explanatory variables of the frontier model. Second, RTFA does not hinge on distributional assumptions on the inefficiency component of the error term. We show by means of simulation experiments that RTFA outperforms the popular stochastic frontier approach and the ‘within’ ordinary least squares estimator for realistic parameterizations of a productivity model. Although RTFA's formal statistical properties are unknown, we argue, based on these simulation experiments, that RTFA is a useful complement to existing methods.

8.
Polarization of the worldwide distribution of productivity
We employ data envelopment analysis (DEA) methods to construct the world production frontier, which is in turn used to decompose (labor) productivity growth into components attributable to technological change (shift of the production frontier), efficiency change (movements toward or away from the frontier), physical capital deepening, and human capital accumulation over the 1965–2007 period. Using this decomposition, we provide new findings on the causes of polarization (the emergence of bimodality) and divergence (increased variance) of the world productivity distribution. First, unlike earlier studies, we find that efficiency change is the unique driver of the emergence of a second (higher) mode. Second, while earlier studies attributed the overall change in the distribution exclusively to physical capital accumulation, we find that technological change and human capital accumulation are also significant factors explaining this change in the distribution (most notably the emergence of a long right-hand tail). Robustness exercises indicate that these revisions of earlier findings are attributable to the addition of (more recent) years and a much greater number of countries included in our sample. We also check to see whether our results are changed by a correction for the downward bias in the DEA construction of the frontier, concluding that these corrections affect none of our major findings (essentially because the level correction roughly washes out in changes).

9.
A general framework for frontier estimation with panel data
The main objective of the paper is to present a general framework for estimating production frontier models with panel data. A sample of firms i = 1, ..., N is observed over several time periods t = 1, ..., T. In this framework, nonparametric stochastic models for the frontier are analyzed. The usual parametric formulations of the literature are viewed as particular cases, and the convergence of the obtained estimators in this general framework is investigated. Special attention is devoted to the roles of N and T in the speeds of convergence of the estimators. First, a very general model is investigated, in which almost no restriction is imposed on the structure of the model or of the inefficiencies. This model is estimable from a nonparametric point of view but needs large values of both T and N to obtain reliable estimates of the individual production functions and of the frontier function. More specific nonparametric firm-effect models are then presented. In these cases, only NT must be large to estimate the common production function; but again both large N and large T are needed for estimating individual efficiencies and the frontier. The methods are illustrated through a numerical example with real data.

10.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and that model changes are easily implemented. Although WinBUGS may not be that efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible for applied researchers, and its generic structure allows for a lot of flexibility in model specification.

11.
This article studies the estimation of production frontiers and efficiency scores when the commodity of interest is an economic bad with a discrete distribution. Existing parametric econometric techniques (stochastic frontier methods) assume that output is a continuous random variable but, if output is discretely distributed, then one faces a scenario of model misspecification. Therefore a new class of econometric models has been developed to overcome this problem. The Delaporte subclass of models is studied in detail, and tests of hypotheses are proposed to discriminate among parametric models. In particular, Pearson’s chi-squared test is adapted to construct a new kernel-based consistent Pearson test. A Monte Carlo experiment evaluates the merits of the new model and methods, and these are used to estimate the frontier and efficiency scores of the production of infant deaths in England. Extensions to the model are discussed.

12.
Worker peer-effects and managerial selection have received limited attention in the stochastic frontier analysis literature. We develop a parametric production function model that allows for worker peer-effects in output and worker-level inefficiency that is correlated with a manager’s selection of worker teams. The model is the usual “composed error” specification of the stochastic frontier model, but we allow for managerial selectivity (endogeneity) that works through the worker-level inefficiency term. The new specification captures both worker-level inefficiency and the manager’s ability to efficiently select teams to produce output. As the correlation between the manager’s selection equation and worker inefficiency goes to zero, our parametric model reduces to the normal-exponential stochastic frontier model of Aigner et al. (1977) with peer-effects. A comprehensive application to the NBA is provided.

13.
Economic transition and growth
Some extensions of neoclassical growth models are discussed that allow for cross‐section heterogeneity among economies and evolution in rates of technological progress over time. The models offer a spectrum of transitional behavior among economies that includes convergence to a common steady‐state path as well as various forms of transitional divergence and convergence. Mechanisms for modeling such transitions, measuring them econometrically, assessing group behavior and selecting subgroups are developed in the paper. Some econometric issues with the commonly used augmented Solow regressions are pointed out, including problems of endogeneity and omitted variable bias which arise under conditions of transitional heterogeneity. Alternative regression methods for analyzing economic transition are given which lead to a new test of the convergence hypothesis and a new procedure for detecting club convergence clusters. Transition curves for individual economies and subgroups of economies are estimated in a series of empirical applications of the methods to regional US data, OECD data and Penn World Table data. Copyright © 2009 John Wiley & Sons, Ltd.

14.
This article develops a new method of estimating inefficiencies in joint production and shows that unlike the approaches utilized in the previous studies of inefficiency, this method maintains a consistent relationship between the error term of a profit function and the error terms of its price derivatives. A useful by-product of the method is a proof of a Hotelling-like lemma that relates stochastic input demand and output supply functions to stochastic profit functions. While the previous studies fit a single frontier to data on all firms, this paper estimates a frontier unique to every observed firm to allow each one to have a different potential of achieving maximal levels of profit. The new method is applied in the analysis of annual data, 1984–1989, for U.S. commercial banks. Both the analytical and numerical results of the paper show that the residual that the previous studies attribute to inefficiency includes the effects of excluded variables and of inaccuracies in the specified functional forms. Once accurate estimates of these effects are subtracted from the residual, the distortions in the measured inefficiencies should be considerably reduced. Consequently, this article considers how such estimates might be obtained.

15.
In this paper we examine the productive performance of a group of three East European carriers and compare it to thirteen of their West European competitors during the period 1977–1990. We first model the multiple output/multiple input technology with a stochastic distance frontier using recently developed semiparametric efficient methods. The endogeneity of multiple outputs is addressed in part by introducing multivariate kernel estimators for the joint distribution of the multiple outputs and potentially correlated firm random effects. We augment estimates from our semiparametric stochastic distance function with nonparametric distance function methods, using linear programming techniques, as well as with extended decomposition methods, based on the Malmquist index number. Both semi- and nonparametric methods indicate significant slack in resource utilization in the East European carriers relative to their Western counterparts, and limited convergence in efficiency or technical change between them. The implications are rather stark for the long run viability of the East European carriers in our sample.

16.
We propose estimation of a stochastic production frontier model within a Bayesian framework to obtain the posterior distribution of single-input-oriented technical efficiency at the firm level. All computations are carried out using Markov chain Monte Carlo methods. The approach is illustrated by applying it to production data obtained from a survey of Ukrainian collective farms. We show that looking at the changes in single-input-oriented technical efficiency in addition to the changes in output-oriented technical efficiency improves the understanding of the dynamics of technical efficiency over the first years of transition in the former Soviet Union.

17.
Stochastic frontier models are often employed to estimate fishing vessel technical efficiency. Under certain assumptions, these models yield efficiency measures that are means of truncated normal distributions. We argue that these measures are flawed, and use the results of Horrace (2005) to estimate efficiency for 39 vessels in the Northeast Atlantic herring fleet, based on each vessel's probability of being efficient. We develop a subset selection technique to identify groups of efficient vessels at pre‐specified probability levels. When homogeneous production is assumed, inferential inconsistencies exist between our methods and the methods of ranking the means of the technical inefficiency distributions for each vessel. When production is allowed to be heterogeneous, these inconsistencies are mitigated. Copyright © 2007 John Wiley & Sons, Ltd.

18.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case that the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
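A minimal sketch of the corrected-OLS idea in the exponential case (simulated data; a Richmond-style moment correction exploiting the fact that an exponential's standard deviation equals its mean, not the paper's exact-inference procedure):

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha, beta, mu = 500, 2.0, 0.7, 0.4
x = rng.uniform(0, 5, n)
u = rng.exponential(mu, n)                 # one-sided inefficiency, mean mu
y = alpha + beta * x - u                   # deterministic frontier, no noise term

# OLS: the slope is consistent, but the intercept estimates alpha - mu.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Corrected OLS: for an exponential, sd = mean, so the residual standard
# deviation estimates mu; shift the intercept up by it.
mu_hat = resid.std(ddof=1)
alpha_hat = coef[0] + mu_hat
u_hat = mu_hat - resid                     # implied inefficiencies (can go negative)
```

The negative `u_hat` values possible in finite samples are one symptom of the inference problems the paper sets out to address.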

19.
龚曲华, 《价值工程》 (Value Engineering), 2010, 29(33): 250.
This paper points out the shortcomings of mathematical-statistical methods and of the traditional data envelopment analysis model, proposes using a fuzzy production frontier to modify the traditional DEA model, and applies the resulting model to analyze the learning effectiveness of university students.

20.
In spite of the current availability of numerous methods of cluster analysis, evaluating a clustering configuration is questionable without the definition of a true population structure, representing the ideal partition that clustering methods should try to approximate. A precise statistical notion of cluster, unshared by most of the mainstream methods, is provided by the density‐based approach, which assumes that clusters are associated with some specific features of the probability distribution underlying the data. The non‐parametric formulation of this approach, known as modal clustering, draws a correspondence between the groups and the modes of the density function. An appealing implication is that the ill‐specified task of cluster detection can be regarded as a more circumscribed problem of estimation, and the number of clusters is also conceptually well defined. In this work, modal clustering is critically reviewed from both conceptual and operational standpoints. The main directions of current research are outlined, along with some open challenges and avenues for further work.
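One widely available operational version of the modal idea is mean shift, which moves each point uphill on a kernel density estimate until it reaches a mode; points ascending to the same mode form one cluster, so the number of clusters is not fixed in advance. A quick sketch on simulated data with scikit-learn's `MeanShift` (which uses a flat kernel):

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(5)
# Two well-separated density modes -> two clusters under modal clustering.
X = np.vstack([
    rng.normal([0, 0], 0.5, size=(150, 2)),
    rng.normal([5, 5], 0.5, size=(150, 2)),
])

# Each point climbs the density surface; converged positions within one
# bandwidth of each other are merged into a single mode/cluster.
ms = MeanShift(bandwidth=2.0).fit(X)
n_modes = len(ms.cluster_centers_)
print(n_modes)
```

The bandwidth plays the role of the density estimator's smoothing parameter: too small and spurious modes (clusters) appear, too large and genuine modes merge.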


Copyright©北京勤云科技发展有限公司  京ICP备09084417号