Similar Literature
12 similar documents found
1.
This paper presents a comparison of two different models (Land et al. (1993) and Olesen and Petersen (1995)), both designed to extend DEA to the case of stochastic inputs and outputs. The two models constitute two approaches within this area that share certain characteristics. However, the two models behave very differently, and the choice between them can be confusing. This paper presents a systematic attempt to point out differences as well as similarities. It is demonstrated that, under some assumptions, the two models have Lagrangian duals expressed in closed form. Similarities and differences are discussed based on a comparison of these dual structures. Weaknesses of each of the two models are discussed, and a merged model that combines attractive features of each is proposed.
O. B. Olesen
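As a rough reminder of what "stochastic inputs and outputs" means in this setting, a chance-constrained envelopment program in the spirit of Land, Lovell and Thore (1993) replaces deterministic output constraints with probabilistic ones. The formulation below is a generic, illustrative sketch, not the exact model of either paper compared above.

```latex
% Illustrative chance-constrained DEA program (generic notation, not the exact
% models of Land et al. (1993) or Olesen and Petersen (1995)).
\[
\min_{\theta,\lambda}\ \theta \quad \text{s.t.}\quad
\Pr\!\left( \sum_{j=1}^{n} \lambda_j \tilde{y}_{rj} \;\ge\; \tilde{y}_{r0} \right) \;\ge\; 1-\alpha_r ,
\qquad r = 1,\dots,s,
\]
\[
\sum_{j=1}^{n} \lambda_j x_{ij} \;\le\; \theta\, x_{i0}, \qquad i = 1,\dots,m,
\qquad \lambda_j \ge 0 .
\]
```

Under distributional assumptions such as normality, chance constraints of this kind admit deterministic equivalents involving means and variances, which is what makes the dual analysis discussed above tractable.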

2.
Using DEA and Worst Practice DEA in Credit Risk Evaluation
The purpose of this paper is to introduce the concept of worst practice DEA, which aims at identifying worst performers by placing them on the frontier. This is particularly relevant for our application to credit risk evaluation, but it also has general relevance, since the worst performers are where the largest improvement potential can be found. The paper also proposes to use a layering technique instead of the traditional cut-off point approach, since this enables the incorporation of risk attitudes and risk-based pricing. Finally, it is shown how the use of a combination of normal and worst practice DEA models enables detection of self-identifiers. The results of the empirical application to credit risk evaluation validate the method. The best combination of layered normal and worst practice DEA models yields an impressive 100% bankruptcy and 78% non-bankruptcy prediction accuracy in the calibration data set, and equally convincing 100% and 67% out-of-sample classification accuracies.
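To make the idea concrete, below is a minimal sketch of how a worst-practice score can be obtained by re-using a standard input-oriented CCR solver with the roles of inputs and outputs swapped, so that units on the worst-practice frontier receive a score of one. The toy data and this scipy-based inversion are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: a basic input-oriented CCR DEA solver, reused with
# inputs and outputs swapped to obtain a "worst practice" score.  The data are
# made up, and swapping roles is one simple way to invert the frontier, not
# necessarily the exact model used in the paper.
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0. X: (m, n) inputs, Y: (s, n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]                      # sum_j lambda_j x_ij <= theta * x_i0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                            # sum_j lambda_j y_rj >= y_r0
    b_ub[m:] = -Y[:, j0]
    bounds = [(None, None)] + [(0, None)] * n    # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: 2 inputs, 1 output, 5 units (columns are units).
X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
              [3.0, 3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])

for j in range(X.shape[1]):
    best = ccr_input_efficiency(X, Y, j)         # normal DEA: 1.0 = best practice
    worst = ccr_input_efficiency(Y, X, j)        # roles swapped: 1.0 = worst practice
    print(f"unit {j}: best-practice {best:.3f}, worst-practice {worst:.3f}")
```

A layering scheme as described above would then peel off each successive worst-practice frontier and re-run the model on the remaining units.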

3.
Martin G., Mikulas, Matthias. Socio, 2006, 40(4): 314-332
We measure productivity in leading edge economic research by using data envelopment analysis (DEA) for a sample of 21 countries belonging to the Organization for Economic Cooperation and Development (OECD). Publications in ten top journals of economics from 1980 to 1998 are taken as the research output. Inputs are measured by R&D expenditure, the number of universities with economics departments and (as an uncontrollable variable) population. Under constant returns-to-scale, the US emerges as the only efficient country. Under variable returns-to-scale, the efficiency frontier is defined by the US, Ireland and New Zealand. With the exception of the US, all countries in our sample display increasing returns-to-scale, and thus have the potential to raise their efficiency by scaling up their research activities.
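For reference, the CRS/VRS distinction exploited here differs only in one constraint of the envelopment program; in generic notation (not tied to the paper's exact variable set):

```latex
% Output-oriented envelopment programs in generic notation.
\[
\max_{\phi,\lambda}\ \phi \quad \text{s.t.}\quad
\sum_{j} \lambda_j x_{ij} \le x_{i0},\qquad
\sum_{j} \lambda_j y_{rj} \ge \phi\, y_{r0},\qquad
\lambda_j \ge 0 \quad \text{(CRS)},
\]
\[
\text{VRS adds } \sum_{j} \lambda_j = 1; \qquad
\text{scale efficiency } \mathrm{SE} = \theta^{\mathrm{CRS}} / \theta^{\mathrm{VRS}} \le 1,
\quad \theta = 1/\phi .
\]
```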

4.
Policy recommendations concerning the optimal scale of production units may have serious implications for the restructuring of a sector. The piecewise linear frontier production function framework (DEA) is becoming the most popular one for assessing not only the technical efficiency of operations, but also scale efficiency and the calculation of optimal scale sizes. The main purpose of the present study is to investigate whether neoclassical production theory gives any guidance as to the nature of scale properties in the DEA model, and to explore such properties empirically. Theoretical results indicate that the DEA model may have more irregular properties than usually assumed in neoclassical production theory, concerning the shape of optimal scale curves and the M-locus. The empirical results indicate that optimal scale may be found over almost the entire range of size variation in outputs and inputs, thus making policy recommendations about efficient scale difficult. It seems necessary to establish the nature of optimal scale before any practical conclusions can be drawn. Proposals for indexes characterizing the nature of optimal scale are provided.

5.
Improving productive efficiency is an increasingly important determinant of the future of the swine industry in Hawaii. This paper examines the productive efficiency of a sample of swine producers in Hawaii by estimating a stochastic frontier production function and the constant returns to scale (CRS) and variable returns to scale (VRS) output-oriented DEA models. The technical efficiency estimates obtained from the two frontier techniques are compared. The scale properties are also examined under the two approaches. The industry's potential for increasing production through improved efficiency is also discussed.

6.
Although railway services have been suffering financially due to modal shifts and aging populations, they have been, and will continue to be, an essential component of nations' basic social infrastructure. Since railway firms generate positive externalities and are required to operate in pre-determined licensed areas, governmental intervention or support may, in some cases, be justified. Indeed, many types of subsidies are created and offered for railway operations in Japan; while some are meant to cover large investments, others are used as compensation for regional disparities. However, thus far, no attempt has been made to analyze the reasons for the underperformance of Japanese railway services. In other words, it is unclear whether this underperformance can be attributed to exogenous and uncontrollable causes, or to endogenous phenomena that can, hence, be handled by managers. The optimal degree of intervention is thus not sufficiently known. In the current paper, we propose a method based on data envelopment analysis (DEA) to analyze the causes of inefficiency in Japanese railway operations and, further, to calculate optimal subsidy levels. The latter are designed to compensate for railways' lack of complete discretion in changing the location or scale of their operations, since they are a regulated service. Our proposed method was applied to 53 Japanese railway operators. In so doing, we identified several key characteristics related to their inefficiencies, and developed optimal subsidies designed to improve performance.

7.
Quality function deployment (QFD) is a proven tool for process and product development, which translates the voice of the customer (VoC) into engineering characteristics (ECs) and prioritizes the ECs in terms of customer requirements. Traditionally, QFD rates the design requirements (DRs) with respect to customer needs and aggregates the ratings to obtain relative importance scores of the DRs. An increasing number of studies stress the need to incorporate additional factors, such as cost and environmental impact, when calculating the relative importance of DRs. However, there is a paucity of methodologies for deriving the relative importance of DRs when several additional factors are considered. Ramanathan and Yunfeng [43] proved that the relative importance values computed by data envelopment analysis (DEA) coincide with traditional QFD calculations when only the ratings of DRs with respect to customer needs are considered and only one additional factor, namely cost, is included. Also, Kamvysi et al. [27] discussed the combination of QFD with analytic hierarchy process–analytic network process (AHP–ANP) and DEAHP–DEANP methodologies to prioritize selection criteria in a service context. The objective of this paper is to propose a QFD–imprecise enhanced Russell graph measure (QFD–IERGM) for incorporating criteria such as cost of services and ease of implementation into QFD. The proposed model is applied in an Iranian hospital.

8.
This study examines the effect of sample size on the mean productive efficiency of firms when the efficiency is evaluated using the non-parametric approach of Data Envelopment Analysis. By employing Monte Carlo simulation, we show how the mean efficiency is related to the sample size. The paper discusses the implications for international comparisons. As an application, we investigate the efficiency of the electricity distribution industries in Australia, Sweden and New Zealand.
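A minimal Monte Carlo sketch of this sample-size effect is given below. To keep it self-contained it uses a single input and a single output under CRS, where the DEA score reduces to the productivity ratio normalised by the best ratio in the sample; the data-generating process is an assumption for illustration, not the experimental design of the paper.

```python
# Minimal Monte Carlo sketch of the sample-size effect on mean DEA efficiency.
# With one input, one output and CRS, the DEA score is the productivity ratio
# divided by the best ratio in the sample, so no LP solver is needed.
# The data-generating process (true efficiency ~ uniform) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def mean_dea_efficiency(n, reps=500):
    means = []
    for _ in range(reps):
        x = rng.uniform(1.0, 10.0, size=n)        # input
        true_eff = rng.uniform(0.5, 1.0, size=n)  # true efficiency
        y = true_eff * x                          # frontier is y = x under CRS
        ratio = y / x
        score = ratio / ratio.max()               # single-input/output CRS DEA score
        means.append(score.mean())
    return np.mean(means)

for n in (10, 25, 50, 100, 200):
    print(f"n = {n:4d}: mean DEA efficiency = {mean_dea_efficiency(n):.3f}")
```

Because the estimated frontier is spanned by the best observed units, the mean score tends to fall as the sample grows, which is why raw mean efficiencies from samples of different sizes are not directly comparable across countries.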

9.
We consider situations where the a priori guidance provided by theoretical considerations indicates only that the function linking the endogenous and exogenous variables is monotone and concave (or convex). We present methods to evaluate the adequacy of a parametric functional form to represent the relationship given the minimal maintained assumption of monotonicity and concavity (or convexity). We evaluate the adequacy of an assumed parametric form by comparing the deviations of the fitted parametric form from the observed data with the corresponding deviations estimated under DEA. We illustrate the application of our proposed methods using data collected from school districts in Texas. Specifically, we examine whether the Cobb–Douglas and translog specifications commonly employed in studies of education production are appropriate characterizations. Our tests reject the hypotheses that either the Cobb–Douglas or the translog specification is an adequate approximation to the general monotone and concave production function for the Texas school districts.
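For reference, the two parametric specifications referred to here have the usual forms (generic notation; the paper's exact variable definitions may differ):

```latex
% Generic forms of the two specifications named in the abstract.
\[
\text{Cobb--Douglas:}\qquad \ln y = \beta_0 + \sum_{i} \beta_i \ln x_i + \varepsilon ,
\]
\[
\text{Translog:}\qquad \ln y = \beta_0 + \sum_{i} \beta_i \ln x_i
  + \tfrac{1}{2}\sum_{i}\sum_{k} \beta_{ik} \ln x_i \ln x_k + \varepsilon ,
\qquad \beta_{ik} = \beta_{ki}.
\]
```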

10.
The purpose of this paper is to discuss the use of Value Efficiency Analysis (VEA) in efficiency evaluation when preference information is taken into account. Value efficiency analysis is an approach that applies the ideas developed for Multiple Objective Linear Programming (MOLP) to Data Envelopment Analysis (DEA). Preference information is given through the desirable structure of input and output values. The same values can be used for all units under evaluation, or the values can be specific to each unit. A decision-maker can specify the input and output values subjectively without any support, or use a multiple criteria support system to help find those values on the efficient frontier. The underlying assumption is that the most preferred values maximize the decision-maker's implicitly known value function in a production possibility set or a subset thereof. The purpose of value efficiency analysis is to estimate the need to increase outputs and/or decrease inputs in order to reach the indifference contour of the value function at its optimum. In this paper, we briefly review the main ideas in value efficiency analysis and discuss practical aspects related to its use. We also consider some extensions.

11.
Sensitivity of the returns to scale (RTS) classifications in data envelopment analysis is studied by means of linear programming problems. The stability region for an observation preserving its current RTS classification (constant, increasing or decreasing returns to scale) can be easily investigated by the optimal values to a set of particular DEA-type formulations. Necessary and sufficient conditions are determined for preserving the RTS classifications when input or output data perturbations are non-proportional. It is shown that the sensitivity analysis method under proportional data perturbations can also be used to estimate the RTS classifications and discover the identical RTS regions yielded by the input-based and the output-based DEA methods. Thus, our approach provides information on both the RTS classifications and the stability of the classifications. This sensitivity analysis method can easily be applied via existing DEA codes.
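The sensitivity analysis builds on the standard way of reading an RTS classification off a CCR envelopment solution, stated here in its common generic form (see the paper for the precise conditions it actually uses):

```latex
% A common CCR-based RTS characterisation (Banker--Thrall type conditions, generic form).
\[
\sum_{j} \lambda_j^{*} = 1 \ \text{in some optimum} \;\Rightarrow\; \text{constant RTS},\qquad
\sum_{j} \lambda_j^{*} < 1 \ \text{in every optimum} \;\Rightarrow\; \text{increasing RTS},\qquad
\sum_{j} \lambda_j^{*} > 1 \ \text{in every optimum} \;\Rightarrow\; \text{decreasing RTS}.
\]
```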

12.
A previous paper by Arnold, Bardhan, Cooper and Kumbhakar (1996) introduced a very simple method to estimate a production frontier by proceeding in two stages as follows: Data Envelopment Analysis (DEA) is used in the first stage to identify efficient and inefficient decision-making units (DMUs). In the second stage the thus identified DMUs are incorporated as dummy variables in OLS (ordinary least squares) regressions. This gave very satisfactory results for both the efficient and inefficient DMUs. Here a simulation study provides additional evidence. Using this same two-stage approach with Cobb-Douglas and CES (constant elasticity-of-substitution) production functions, the estimated values for the coefficients associated with efficient DMUs are found to be not significantly different from the true parameter values for the (known) production functions, whereas the parameter estimates for the inefficient DMUs are significantly different. A separate section of the present paper is devoted to explanations of these results. Other sections describe methods for estimating input-specific inefficiencies from the first-stage use of DEA in the two-stage approaches. A concluding section provides further directions for research and use.
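A minimal sketch of the second stage is given below: a DEA classification (here simply an assumed 0/1 flag from stage one) enters a log-linear Cobb-Douglas regression as a dummy variable plus slope interactions, so that the coefficients for the efficient group estimate the frontier parameters. The data, the dummy structure and the use of plain least squares are illustrative assumptions, not the authors' exact specification.

```python
# Illustrative second-stage regression in the spirit of the two-stage approach:
# an assumed stage-one DEA flag enters a log-linear Cobb-Douglas regression as a
# dummy variable and as slope interactions.  Data are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 60
x1, x2 = rng.uniform(1, 10, n), rng.uniform(1, 10, n)
efficient = rng.random(n) < 0.3                      # stage-one DEA flag (assumed given)
# True technology: y = 2 * x1^0.4 * x2^0.5; inefficient units fall short of it.
shortfall = np.where(efficient, 1.0, rng.uniform(0.6, 0.9, n))
y = 2.0 * x1**0.4 * x2**0.5 * shortfall * np.exp(rng.normal(0, 0.05, n))

d = (~efficient).astype(float)                       # dummy = 1 for inefficient DMUs
Z = np.column_stack([np.ones(n), np.log(x1), np.log(x2),
                     d, d * np.log(x1), d * np.log(x2)])
beta, *_ = np.linalg.lstsq(Z, np.log(y), rcond=None)
labels = ["const", "ln x1", "ln x2", "dummy", "dummy*ln x1", "dummy*ln x2"]
for name, b in zip(labels, beta):
    print(f"{name:>12s}: {b: .3f}")
# The efficient-group coefficients (const, ln x1, ln x2) should land close to the
# true frontier parameters (ln 2, 0.4, 0.5); the dummy terms absorb the
# systematic shortfall of the inefficient group.
```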
