Similar Documents (20 found)
1.
Pareto-Koopmans efficiency in Data Envelopment Analysis (DEA) is extended to stochastic inputs and outputs via probabilistic input-output vector comparisons in a given empirical production (possibility) set. In contrast to other approaches which have used Chance Constrained Programming formulations in DEA, the emphasis here is on joint chance constraints. An assumption of arbitrary but known probability distributions leads to the P-Model of chance constrained programming. A necessary condition for a DMU to be stochastically efficient and a sufficient condition for a DMU to be non-stochastically efficient are provided. Deterministic equivalents using the zero order decision rules of chance constrained programming and multivariate normal distributions take the form of an extended version of the additive model of DEA. Contacts are also maintained with all of the other presently available deterministic DEA models in the form of easily identified extensions which can be used to formalize the treatment of efficiency when stochastic elements are present.
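The joint chance-constrained (P-model) formulation referred to above can be sketched in illustrative notation (not taken verbatim from the paper) as maximizing the joint probability that DMU o is dominated within the empirical production set:

```latex
% Illustrative P-model sketch: tildes mark stochastic inputs/outputs,
% lambda are intensity weights over the n observed DMUs.
\max_{\lambda \ge 0} \; P\!\left(
    \sum_{j=1}^{n} \lambda_j \tilde{x}_{ij} \le \tilde{x}_{io},\ i=1,\dots,m;\quad
    \sum_{j=1}^{n} \lambda_j \tilde{y}_{rj} \ge \tilde{y}_{ro},\ r=1,\dots,s
\right)
```

Deterministic equivalents then follow under zero-order decision rules and multivariate normality, as the abstract notes.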

2.
There are two main methods for measuring the efficiency of decision-making units (DMUs): data envelopment analysis (DEA) and stochastic frontier analysis (SFA). Each of these methods has advantages and disadvantages. DEA is more popular in the literature due to its simplicity, as it requires no distributional pre-assumptions and can measure the efficiency of DMUs with multiple inputs and multiple outputs, whereas SFA is a parametric approach applicable to multiple inputs but only a single output. Since many applied studies feature multiple output variables, SFA cannot be used in such cases. In this research, a method to transform multiple outputs into a virtual single output is proposed. The virtual single output computed by the proposed method yields efficiency scores that are close (or even identical, depending on the targeted parameters, at the expense of computation time and resources) to the efficiency scores obtained from DEA with multiple outputs. This enables the use of SFA with a virtual single output. The proposed method is validated with a simulation study, and its usefulness is demonstrated in a real application using a hospital dataset from Turkey.
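For readers unfamiliar with the DEA side of this comparison, a minimal input-oriented CCR envelopment model can be solved as a linear program. The sketch below is a generic illustration (not the authors' virtual-output transformation), with made-up data:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n, m) inputs, Y: (n, s) outputs. LP variables: [theta, lambda_1..n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                          # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                          # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
    return res.x[0]

# Two DMUs, one input, one output: DMU 1 uses twice the input for the same output
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print(ccr_efficiency(X, Y, 0))  # ≈ 1.0 (efficient)
print(ccr_efficiency(X, Y, 1))  # ≈ 0.5
```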

3.
The paper is concerned with the incorporation of polyhedral cone constraints on the virtual multipliers in DEA. The incorporation of probabilistic bounds on the virtual multipliers, based upon a stochastic benchmark vector, is demonstrated. The suggested approach involves a stochastic (chance-constrained) programming model with multipliers constrained to the cone spanned by confidence intervals for the components of the stochastic benchmark vector at varying probability levels. For a polyhedral assurance region based upon bounded pairwise ratios between multipliers, it is shown that in general it is never possible to identify a center-vector, defined as a vector in the interior of the cone with identical angles to all extreme rays spanning the cone. Smooth cones are suggested if an asymmetric variation in the set of feasible relative prices is to be avoided.

4.
In this paper we consider the Variable Returns to Scale (VRS) Data Envelopment Analysis (DEA) model. In a DEA model each Decision Making Unit (DMU) is classified as either efficient or inefficient. Changes in the inputs or outputs of any DMU can alter its classification: an efficient DMU can become inefficient and vice versa. The goal of this paper is to assess the changes in inputs and outputs of an extreme efficient DMU that will not alter its efficiency status, thus obtaining the region of efficiency for that DMU. That is, a DMU will remain efficient if and only if, after the changes are applied, it stays within that region. The region is represented using an iterative procedure. In the first step, an extended DEA model in which the DMU under evaluation is excluded from the reference set is used. In the iterative part of the procedure, we apply parametric programming to the optimal simplex tableau obtained, thus moving from one facet to the adjacent one. At the end of the procedure we obtain the complete region of efficiency for the DMU under consideration.

5.
Data envelopment analysis (DEA) is extended to the case of stochastic inputs and outputs through the use of chance-constrained programming. The chance-constrained frontier envelops a given set of observations ‘most of the time’. As an empirical illustration, we re-examine the pioneering 1981 study of Program Follow Through by Charnes, Cooper and Rhodes.

6.
Considering the uncertainty involved in the use of arrival-departure tracks, this paper studies the optimization of arrival-departure track utilization at large high-speed railway stations when passenger train dwell times and connection times vary stochastically. Taking improved station operating efficiency and passenger boarding/alighting convenience as the optimization objectives, a stochastic chance-constrained programming model for track utilization is established. By converting the stochastic chance constraints into their deterministic equivalents, the stochastic programming model is transformed into a deterministic one, and an ant colony algorithm is designed to solve it. A numerical example shows that the stochastic chance-constrained programming model yields a more reliable track utilization plan.

7.
The mathematical programming-based technique data envelopment analysis (DEA) has often treated data as deterministic. In response to the criticism that most applications involve error and random noise in the data, a number of mathematically elegant solutions for incorporating stochastic variations in data have been proposed. In this paper, we propose a chance-constrained formulation of DEA that allows random variations in the data. We study properties of the ensuing efficiency measure using a small sample in which multiple inputs and a single output are correlated and are the result of a stochastic process. We replicate the analysis using Monte Carlo simulations and conclude that simulations provide a more flexible and computationally less cumbersome approach to studying the effects of noise in the data. We suggest that, in keeping with the tradition of DEA, the simulation approach allows users to explicitly consider different data-generating processes and allows for greater flexibility in implementing DEA under stochastic variations in data.
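A toy version of the simulation idea, in the simplest setting where CRS efficiency reduces to a normalized output/input ratio (one input, one output); the lognormal noise model and all numbers are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def crs_efficiency(x, y):
    """Single-input, single-output CRS DEA: each DMU's output/input
    ratio divided by the best observed ratio."""
    ratio = y / x
    return ratio / ratio.max()

# "True" data for 5 DMUs
x_true = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
y_true = np.array([2.0, 2.5, 4.0, 3.5, 4.5])

# Monte Carlo: re-estimate efficiencies under multiplicative input noise
draws = np.empty((1000, len(x_true)))
for k in range(1000):
    noise = rng.lognormal(mean=0.0, sigma=0.05, size=x_true.shape)
    draws[k] = crs_efficiency(x_true * noise, y_true)

print(crs_efficiency(x_true, y_true).round(3))  # noise-free scores
print(draws.mean(axis=0).round(3))              # mean scores under noise
```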

8.
We propose an extension to the basic DEA models that guarantees that if an intensity is positive then it must be at least as large as a pre-defined lower bound. This requirement adds an integer programming constraint known within Operations Research as a Fixed-Charge (FC) type of constraint. Accordingly, we term the new model DEA_FC. The proposed model lies between the DEA models that allow units to be scaled arbitrarily low, and the Free Disposal Hull model that allows no scaling. We analyze 18 datasets from the literature to demonstrate that sufficiently low intensities—those for which the scaled Decision-Making Unit (DMU) has inputs and outputs that lie below the minimum values observed—are pervasive, and that the new model ensures fairer comparisons without sacrificing the required discriminating power. We explain why the low-intensity phenomenon exists. In sharp contrast to standard DEA models we demonstrate via examples that an inefficient DMU may play a pivotal role in determining the technology. We also propose a goal programming model that determines how deviations from the lower bounds affect efficiency, which we term the trade-off between the deviation gap and the efficiency gap.

9.
DEA (Data Envelopment Analysis) attempts to identify sources and estimate amounts of inefficiencies contained in the outputs and inputs generated by managed entities called DMUs (Decision Making Units). Explicit formulation of underlying functional relations with specified parametric forms relating inputs to outputs is not required. An overall (scalar) measure of efficiency is obtained for each DMU from the observed magnitudes of its multiple inputs and outputs without requiring use of a priori weights or relative value assumptions and, in addition, sources and amounts of inefficiency are estimated for each input and each output for every DMU. Earlier theory is extended so that DEA can deal with zero inputs and outputs and zero virtual multipliers (shadow prices). This is accomplished by partitioning DMUs into six classes via primal and dual representation theorems by means of which restrictions to positive observed values for all inputs and outputs are eliminated along with positivity conditions imposed on the variables which are usually accomplished by recourse to nonarchimedian concepts. Three of the six classes are scale inefficient and two of the three scale efficient classes are also technically (zero waste) efficient.

The refereeing process of this paper was handled through R. Banker. This paper was prepared as part of the research supported by National Science Foundation grant SES-8722504 and by the IC2 Institute of The University of Texas, and was initially submitted in May 1985.

10.
Data envelopment analysis (DEA) is generally used to evaluate past performance and multi objective linear programming (MOLP) is often used to plan for future performance goals. In this study, we establish an equivalence relationship between MOLP problems and combined-oriented DEA models using a direction distance function designed to account for desirable and undesirable inputs and outputs together with uncontrollable variables. This equivalence model can be effectively used to support interactive processes and performance measures designed to establish future performance goals while taking into account the preferences of decision makers (DMs). In particular, it allows DMs to consider different efficiency improvement strategies when subject to budgetary restrictions. The applicability of the proposed method and the efficacy of the procedures and algorithms are demonstrated using a case study where the performance of high schools in the City of Philadelphia is evaluated.

11.
In this paper we propose a new technique for incorporating environmental effects and statistical noise into a producer performance evaluation based on data envelopment analysis (DEA). The technique involves a three-stage analysis. In the first stage, DEA is applied to outputs and inputs only, to obtain initial measures of producer performance. In the second stage, stochastic frontier analysis (SFA) is used to regress first stage performance measures against a set of environmental variables. This provides, for each input or output (depending on the orientation of the first stage DEA model), a three-way decomposition of the variation in performance into a part attributable to environmental effects, a part attributable to managerial inefficiency, and a part attributable to statistical noise. In the third stage, either inputs or outputs (again depending on the orientation of the first stage DEA model) are adjusted to account for the impact of the environmental effects and the statistical noise uncovered in the second stage, and DEA is used to re-evaluate producer performance. Throughout the analysis emphasis is placed on slacks, rather than on radial efficiency scores, as appropriate measures of producer performance. An application to nursing homes is provided to illustrate the power of the three-stage methodology.
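The second- and third-stage logic can be sketched as below, with ordinary least squares standing in for the SFA regression (a deliberate simplification) and entirely synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
z = rng.normal(size=n)                          # environmental variable
x = 10.0 + rng.normal(size=n)                   # observed input
# Stage 1 would produce input slacks from DEA; here they are simulated
slack = np.maximum(0.5 * z + rng.normal(scale=0.1, size=n), 0.0)

# Stage 2 (OLS as a stand-in for SFA): explain slacks by the environment
Z = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(Z, slack, rcond=None)
predicted = Z @ beta

# Stage 3: adjust inputs upward so every DMU faces the least favorable
# environment, then re-run DEA on the adjusted data
x_adj = x + (predicted.max() - predicted)
```

The adjusted inputs `x_adj` would then replace `x` in a second DEA run.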

12.
Data Envelopment Analysis (DEA) applications frequently involve nonsubstitutable inputs and nonsubstitutable outputs (that is, fixed proportion technologies). However, DEA theory requires substitutability. In this paper, we illustrate the consequences of nonsubstitutability on DEA efficiency estimates, and we develop new efficiency indicators that are similar to those of conventional DEA models except that they require nonsubstitutability. Then, using simulated and real-world datasets that encompass fixed proportion technologies, we compare DEA efficiency estimates with those of the new indicators. The examples demonstrate that DEA efficiency estimates are biased when inputs and outputs are nonsubstitutable. The degree of bias varies considerably among Decision Making Units, resulting in substantial differences in efficiency rankings between DEA and the new measures. And, over 90% of the units that DEA identifies as efficient are, in truth, not efficient. We conclude that when inputs and outputs are not substituted for either technological or other reasons, conventional DEA models should be replaced with models that account for nonsubstitutability.

13.
Data envelopment analysis (DEA) assumes that inputs and outputs are measured on scales in which larger numerical values correspond to greater consumption of inputs and greater production of outputs. We present a class of DEA problems in which one or more of the inputs or outputs are naturally measured on scales in which higher numerical values represent lower input consumption or lower output production. We refer to such quantities as reverse inputs and reverse outputs. We propose to incorporate reverse inputs and outputs into a DEA model by returning to the basic principles that lead to the DEA model formulation. We compare our method to reverse scoring, the most commonly used approach, and demonstrate the relative advantages of our proposed technique. We use this concept to analyze all 30 Major League Baseball (MLB) organizations during the 1999 regular season to determine their on-field and front office relative efficiencies. Our on-field DEA model employs one output and two symmetrically defined inputs, one to measure offense and one to measure defense. The defensive measure is such that larger values correspond to worse defensive performance, rather than better, and hence is a reverse input. The front office model uses one input. Its outputs, one of which is a reverse output, are the inputs to the on-field model. We discuss the organizational implications of our results.
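For context, the "reverse scoring" baseline that the proposed technique is compared against is usually the simple rescaling below (the data are hypothetical):

```python
import numpy as np

def reverse_score(v):
    """Common 'reverse scoring' fix-up: map a reverse input/output onto a
    scale oriented the way DEA expects, via max + min - v."""
    return v.max() + v.min() - v

runs_allowed = np.array([650.0, 720.0, 800.0])  # hypothetical defensive measure
print(reverse_score(runs_allowed))  # [800. 730. 650.]
```

The transformation preserves spacing between DMUs but, as the abstract argues, can distort the DEA frontier relative to handling the reverse quantity from first principles.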

14.
An analysis of operations efficiency in large-scale distribution systems
This research applies Data Envelopment Analysis (DEA) methodology to evaluate the efficiency of units within a large-scale network of petroleum distribution facilities in the USA. Multiple inputs and outputs are incorporated into a broad set of DEA models, yielding a comprehensive approach to evaluating supply chain efficiency. This study empirically separates three recognized, important and yet different causes of performance shortfalls which have been generally difficult for managers to identify. They are: (1) managerial effectiveness; (2) scale of operations and potential for a given market area (and efficiency of resource allocation given the scale); and (3) understanding the resource heterogeneity via programmatic differences in efficiency. Overall, the efficiency differences identified raised insightful questions regarding top management’s selection of the appropriate form and type of inputs and outputs, as well as questions regarding the DEA model form selected.

15.
I use linear programming models to define standardised, aggregate environmental performance indicators for firms. The best-practice frontier obtained corresponds to the decision making units showing the best environmental behaviour. Results are obtained with data from U.S. fossil fuel-fired electric utilities, starting from four alternative models, among which are three linear programming models that differ in the way they account for undesirable outputs (pollutants) and resources used as inputs. The results indicate important discrepancies in the rankings obtained by the four models. Rather than being contradictory, these results are interpreted as giving different, complementary kinds of information that should all be taken into account by public decision-makers.

16.
In this study, a stochastic multi-objective mixed-integer mathematical programming model is proposed for logistics distribution and evacuation planning during an earthquake. Decisions about the pre- and post-disaster phases are treated in an integrated manner. The decisions of the pre-disaster phase relate to the location of permanent relief distribution centers and the quantities of commodities to be stored. The decisions of the second phase determine the optimal locations for establishing temporary care centers, to increase the speed of treating injured people, and the distribution of commodities to the affected areas. Humanitarian and cost issues are addressed through three objective functions. Several sets of constraints are also included to make the model flexible enough to handle real issues. Demands for food, blood, water, blankets, and tents are assumed to be probabilistic; they depend on several complicated factors and are modeled using a network in this study. A simulation is set up to generate the probabilistic distribution of demands through several scenarios. The stochastic demands serve as inputs to the proposed stochastic multi-objective mixed-integer model. The model is transformed into its deterministic equivalent using a chance-constrained programming approach. The equivalent deterministic model is solved using an efficient epsilon-constraint approach and an evolutionary algorithm, the non-dominated sorting genetic algorithm (NSGA-II). First, several illustrative numerical examples are solved using both solution procedures. Their performance is compared, and the more efficient procedure, NSGA-II, is used to handle a case study of a Tehran earthquake. The results are promising and show that the proposed model and solution approach can handle the real case study efficiently.
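The chance-constraint conversion step can be illustrated generically: if demand is normally distributed (an assumption made here for simplicity, not necessarily the paper's distributional choice), requiring P(stock ≥ demand) ≥ α reduces to a deterministic linear constraint:

```python
from scipy.stats import norm

mu, sigma, alpha = 1000.0, 120.0, 0.95   # hypothetical demand parameters
# P(stock >= demand) >= alpha  <=>  stock >= mu + z_alpha * sigma
stock = mu + norm.ppf(alpha) * sigma
print(round(stock, 1))  # ≈ 1197.4
```

The same substitution applied to every stochastic demand constraint yields the deterministic equivalent model mentioned in the abstract.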

17.
Data Envelopment Analysis (DEA) is a methodology that computes efficiency values for decision making units (DMU) in a given period by comparing the outputs with the inputs. In many applications, inputs and outputs of DMUs are monitored over time. There might be a time lag between the consumption of inputs and the production of outputs. We develop an approach that aims to capture the time lag between the outputs and the inputs in assigning the efficiency values to DMUs. We propose using weight restrictions in conjunction with the model. Our computational results on randomly generated problems demonstrate that the developed approach works well under a large variety of experimental conditions. We also apply our approach on a real data set to evaluate research institutions.

18.
Data envelopment analysis (DEA) is a non-parametric approach for measuring the relative efficiencies of peer decision making units (DMUs). In recent years, it has been widely used to evaluate two-stage systems under different organization mechanisms. This study modifies the conventional leader–follower DEA models for two-stage systems by considering the uncertainty of data. The dual deterministic linear models are first constructed from the stochastic CCR models under the assumption that all components of inputs, outputs, and intermediate products are related only with some basic stochastic factors, which follow continuous and symmetric distributions with nonnegative compact supports. The stochastic leader–follower DEA models are then developed for measuring the efficiencies of the two stages. The stochastic efficiency of the whole system can be uniquely decomposed into the product of the efficiencies of the two stages. Relationships between stochastic efficiencies from stochastic CCR and stochastic leader–follower DEA models are also discussed. An example of the commercial banks in China is considered using the proposed models under different risk levels.

19.
This paper studies the use of DEA (data envelopment analysis) as a tool for possible use in evaluating and planning the economic performance of China's cities (28 in all) which play a critical role in the government's program of economic development. DEA promises advantages which include the absence of any need for the assignment of weights on an a priori basis (to reflect the supposed relative importance of various outputs or inputs) when evaluating technical efficiency. It is also unnecessary to explicitly specify underlying functions that are intended to prescribe the analytical form of the relations between inputs and outputs. Finally, as is illustrated in the paper, DEA can be used to identify sources, and estimate amounts of inefficiencies in each city's performance as well as to identify returns-to-scale possibilities in ways that seem well-suited to the mixture of centralized and decentralized planning and performance that China is currently trying to use.

20.
The selection of inputs and outputs in DEA models is a vibrant methodological topic. At the same time, however, the impact of different measurement units for the selected inputs is understated in the empirical literature. Using the example of Czech farms, we show that the DEA method provides neither consistent score estimates nor a stable ranking for different popular measurements of the labour and capital factors of production. For this reason, DEA efficiency results based on differently measured inputs should be compared only with great caution.
