Similar Articles
A total of 19 similar records were found.
1.
This paper presents a comparison of two different models (Land et al. (1993) and Olesen and Petersen (1995)), both designed to extend DEA to the case of stochastic inputs and outputs. The two models constitute two approaches within this area that share certain characteristics. However, the two models behave very differently, and the choice between them can be confusing. This paper presents a systematic attempt to point out differences as well as similarities. It is demonstrated that, under some assumptions, the two models have Lagrangian duals expressed in closed form. Similarities and differences are discussed based on a comparison of these dual structures. Weaknesses of each of the two models are discussed, and a merged model that combines attractive features of each is proposed.

2.
陈俊霞 (Chen Junxia), 《价值工程》 (Value Engineering), 2006, 25(11): 94-95
This paper presents the classical input- and output-based DEA models and introduces a DEA-based relative efficiency model for evaluating the relative efficiency of securities: taking risk as the input and expected return as the output, the DEA model can be used to assess the relative efficiency of each security.
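To make the setup above concrete, here is a minimal sketch (not the paper's own code) of an input-oriented CCR DEA evaluation of securities, with risk as the single input and expected return as the single output. The security data, the function name `ccr_efficiency`, and the use of `scipy.optimize.linprog` are illustrative assumptions.

```python
# Minimal input-oriented CCR (constant returns-to-scale) DEA sketch for securities:
# risk is the single input, expected return the single output. Data are hypothetical.
import numpy as np
from scipy.optimize import linprog

risk = np.array([0.12, 0.08, 0.20, 0.15])   # input: risk of each security
ret = np.array([0.10, 0.07, 0.14, 0.09])    # output: expected return of each security
n = len(risk)

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of security o (envelopment form)."""
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]              # minimize theta
    a_input = np.r_[-risk[o], risk]          # sum_j lambda_j*risk_j <= theta*risk_o
    a_output = np.r_[0.0, -ret]              # sum_j lambda_j*ret_j >= ret_o
    res = linprog(c,
                  A_ub=np.vstack([a_input, a_output]),
                  b_ub=np.array([0.0, -ret[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]

for o in range(n):
    print(f"security {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```

A score of 1 marks a security on the efficient frontier; a lower score indicates the proportion to which risk could be scaled down while a best-practice composite still delivers the same expected return.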

3.
Using DEA and Worst Practice DEA in Credit Risk Evaluation   (total citations: 1, self-citations: 0, citations by others: 1)
The purpose of this paper is to introduce the concept of worst practice DEA, which aims at identifying the worst performers by placing them on the frontier. This is particularly relevant for our application to credit risk evaluation, but it also has general relevance, since the worst performers are where the largest improvement potential can be found. The paper also proposes to use a layering technique instead of the traditional cut-off point approach, since this enables the incorporation of risk attitudes and risk-based pricing. Finally, it is shown how the use of a combination of normal and worst practice DEA models enables the detection of self-identifiers. The results of the empirical application to credit risk evaluation validate the method. The best combination of layered normal and worst practice DEA models yields an impressive 100% bankruptcy and 78% non-bankruptcy prediction accuracy in the calibration data set, and equally convincing 100% and 67% out-of-sample classification accuracies.

4.
Many studies devoted to efficiency evaluation in the education sector are based on measures of central tendency at the school level, for example, the average values of the students belonging to the same school. Although this is a common and accepted way of summarizing data from the original observations (students), it is nonetheless true that this approach neglects the dispersion in the data, which may become a serious problem if variability across schools is high. Additionally, imprecision may arise when experts on each evaluated subject select the battery of questions, with different levels of difficulty, that forms the basis of the final questionnaires completed by students. This paper uses data from US students and schools participating in PISA (Programme for International Student Assessment) 2015 to illustrate that school efficiency measures based on aggregate data and imprecision may paint an inaccurate picture of performance when compared with measures estimated from the broader information provided by all students of the same school. To operationalize our approach, we resort to fuzzy Data Envelopment Analysis. This methodology allows us to deal with the notion of fuzziness in variables such as the socio-economic status of students or test scores. Our results indicate that the performance measures obtained with the fuzzy DEA approach are highly correlated with those calculated with traditional DEA models. However, we find some relevant divergences in the identification of efficient units when we account for data dispersion and vagueness.

5.
Martin G.  Mikulas  Matthias 《Socio》2006,40(4):314-332
We measure productivity in leading edge economic research by using data envelopment analysis (DEA) for a sample of 21 countries belonging to the Organization for Economic Cooperation and Development (OECD). Publications in ten top journals of economics from 1980 to 1998 are taken as the research output. Inputs are measured by R&D expenditure, the number of universities with economics departments and (as an uncontrollable variable) population. Under constant returns-to-scale, the US emerges as the only efficient country. Under variable returns-to-scale, the efficiency frontier is defined by the US, Ireland and New Zealand. With the exception of the US, all countries in our sample display increasing returns-to-scale, and thus have the potential to raise their efficiency by scaling up their research activities.

6.
Policy recommendations concerning the optimal scale of production units may have serious implications for the restructuring of a sector. The piecewise linear frontier production function framework (DEA) is becoming the most popular one not only for assessing the technical efficiency of operations, but also for assessing scale efficiency and calculating optimal scale sizes. The main purpose of the present study is to investigate whether neoclassical production theory gives any guidance as to the nature of scale properties in the DEA model, and to explore such properties empirically. Theoretical results indicate that the DEA model may have more irregular properties than usually assumed in neoclassical production theory concerning the shape of optimal scale curves and the M-locus. The empirical results indicate that optimal scale may be found over almost the entire range of size variation in outputs and inputs, thus making policy recommendations about efficient scale difficult. It seems necessary to establish the nature of optimal scale before any practical conclusions can be drawn. Proposals for indexes characterizing the nature of optimal scale are provided.

7.
Due to the existence of free software and pedagogical guides, the use of Data Envelopment Analysis (DEA) has become further democratized in recent years. Nowadays, it is quite usual for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analyses. Within DEA, several alternative models allow for an environmental adjustment. Four such models, each user-friendly and easily accessible to practitioners and decision makers, are applied to empirical data on 90 primary schools in the State of Geneva, Switzerland. Results show that the majority of the alternative models deliver divergent results. From a political and a managerial standpoint, these diverging results could lead to potentially ineffective decisions. As no consensus emerges on the best model to use, practitioners and decision makers may be tempted to select the model that is right for them, in other words, the model that best reflects their own preferences. Further studies should investigate how an appropriate multi-criteria decision analysis method could help decision makers select the right model.

8.
The ranking and measurement of the efficiency of decision-making units by two methods, data envelopment analysis and the frontier production function, may not always lead to identical results. In this framework we attempt a critical evaluation of frontier production function theory in terms of its theoretical and empirical implications. It is shown that under certain conditions the two approaches to efficiency measurement may lead to identical results.

9.
Improving productive efficiency is an increasingly important determinant of the future of the swine industry in Hawaii. This paper examines the productive efficiency of a sample of swine producers in Hawaii by estimating a stochastic frontier production function and the constant returns to scale (CRS) and variable returns to scale (VRS) output-oriented DEA models. The technical efficiency estimates obtained from the two frontier techniques are compared. The scale properties are also examined under the two approaches. The industry's potential for increasing production through improved efficiency is also discussed.

10.
In conventional data envelopment analysis (DEA) window analysis, a decision-making unit (DMU) in each window is treated as a different unit in each period, so that the evaluation of one unit is performed on different scales over time. This paper proposes a novel window analysis based on common weights across time (CWAT), which evaluates each unit in each window on a common scale independent of time. The model for obtaining the common weights is formulated as a linear program. The paper also proposes a Malmquist productivity index (MPI) based on CWAT, the CWAT MPI, to analyze productivity change by inheriting the results of the window analysis. Numerical experiments are presented to examine the validity of CWAT and the CWAT MPI, and the results show that the proposed method provides a new evaluation scale compared to previous studies. The proposed model is applied to evaluate the performance of 45 Chinese iron and steel enterprises during 2009–2017. Energy and environmental efficiencies are calculated using CWAT, and the CWAT MPI is used to analyze productivity change.
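For readers unfamiliar with the window construction the abstract refers to, the short sketch below shows how a DEA window analysis partitions a T-period panel into overlapping windows of width w, so that each DMU-period observation is re-evaluated in several windows. The period count and window width (echoing the 2009–2017 panel with three-year windows) are illustrative assumptions; the CWAT common-weight model itself is not reproduced here.

```python
# Sketch of the window structure underlying DEA window analysis: with T periods and
# window width w, window t covers periods t..t+w-1, so each DMU-period observation
# appears in up to w windows and is re-evaluated in each of them.
T, w = 9, 3                                   # illustrative: a 2009-2017 panel, 3-year windows
windows = [list(range(t, t + w)) for t in range(T - w + 1)]
for i, win in enumerate(windows, start=1):
    print(f"window {i}: years {[2009 + p for p in win]}")
```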

11.
Although railway services have been suffering financially due to modal shifts and aging populations, they have been, and will continue to be, an essential component of nations' basic social infrastructure. Since railway firms generate positive externalities and are required to operate in pre-determined licensed areas, governmental intervention or support may, in some cases, be justified. Indeed, many types of subsidies are created and offered for railway operations in Japan; while some are meant to cover large investments, others are used as compensation for regional disparities. However, thus far, no attempt has been made to analyze the reasons for the underperformance of Japanese railway services. In other words, it is unclear whether this underperformance can be attributed to exogenous and uncontrollable causes, or to endogenous phenomena that are therefore capable of being handled by managers. The optimal degree of intervention is thus not sufficiently known. In the current paper, we propose a method based on data envelopment analysis (DEA) to analyze the causes of inefficiency in Japanese railway operations and to calculate optimal subsidy levels. The latter are designed to compensate for railways' lack of complete discretion in relocating their operations and/or scaling them up or down, since they are a regulated service. Our proposed method was applied to 53 Japanese railway operators. In so doing, we identified several key characteristics related to their inefficiencies, and developed optimal subsidies designed to improve performance.

12.
Quality function deployment (QFD) is a proven tool for process and product development, which translates the voice of the customer (VoC) into engineering characteristics (ECs) and prioritizes the ECs in terms of customer requirements. Traditionally, QFD rates the design requirements (DRs) with respect to customer needs and aggregates the ratings to obtain relative importance scores of the DRs. An increasing number of studies stress the need to incorporate additional factors, such as cost and environmental impact, when calculating the relative importance of DRs. However, there is a paucity of methodologies for deriving the relative importance of DRs when several additional factors are considered. Ramanathan and Yunfeng [43] proved that the relative importance values computed by data envelopment analysis (DEA) coincide with traditional QFD calculations when only the ratings of DRs with respect to customer needs and a single additional factor, namely cost, are considered. Also, Kamvysi et al. [27] discussed the combination of QFD with analytic hierarchy process–analytic network process (AHP–ANP) and DEAHP–DEANP methodologies to prioritize selection criteria in a service context. The objective of this paper is to propose a QFD–imprecise enhanced Russell graph measure (QFD–IERGM) for incorporating criteria such as cost of services and ease of implementation into QFD. The proposed model is applied in an Iranian hospital.

13.
This study examines the effect of sample size on the mean productive efficiency of firms when the efficiency is evaluated using the non-parametric approach of Data Envelopment Analysis. By employing Monte Carlo simulation, we show how the mean efficiency is related to the sample size. The paper discusses the implications for international comparisons. As an application, we investigate the efficiency of the electricity distribution industries in Australia, Sweden and New Zealand.
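The mechanism behind this sample-size effect can be illustrated with a minimal Monte Carlo sketch (not the study's own experiment): with one input and one output under constant returns to scale, the CCR efficiency of a unit reduces to its output/input ratio divided by the best ratio in the sample, so the dependence of mean efficiency on sample size is easy to simulate. The data-generating process and sample sizes below are hypothetical assumptions.

```python
# Monte Carlo sketch: mean DEA (CCR) efficiency versus sample size in the
# one-input/one-output CRS case, where efficiency is a unit's output/input ratio
# relative to the best ratio in the sample.
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 50, 200, 1000):
    means = []
    for _ in range(200):                      # replications per sample size
        x = rng.uniform(1.0, 10.0, n)         # input
        y = x * rng.uniform(0.5, 1.0, n)      # output: true frontier y = x, with inefficiency
        ratio = y / x
        means.append(np.mean(ratio / ratio.max()))
    print(f"n = {n:4d}: mean efficiency ~ {np.mean(means):.3f}")
```

As the sample grows, the estimated frontier tightens toward the true one and the mean efficiency falls, which is why comparing mean efficiencies across samples of very different sizes (for example, across countries) can be misleading.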

14.
We consider situations where the a priori guidance provided by theoretical considerations indicates only that the function linking the endogenous and exogenous variables is monotone and concave (or convex). We present methods to evaluate the adequacy of a parametric functional form to represent the relationship given the minimal maintained assumption of monotonicity and concavity (or convexity). We evaluate the adequacy of an assumed parametric form by comparing the deviations of the fitted parametric form from the observed data with the corresponding deviations estimated under DEA. We illustrate the application of our proposed methods using data collected from school districts in Texas. Specifically, we examine whether the Cobb–Douglas and translog specifications commonly employed in studies of education production are appropriate characterizations. Our tests reject the hypotheses that either the Cobb–Douglas or the translog specification is an adequate approximation to the general monotone and concave production function for the Texas school districts.
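For reference, the two parametric specifications named above take the following standard forms (generic notation; the exact variable set used for the Texas school districts is not reproduced here):

```latex
% Cobb-Douglas specification (log-linear)
\ln y = \beta_0 + \sum_i \beta_i \ln x_i
% Translog specification (adds all second-order log terms)
\ln y = \beta_0 + \sum_i \beta_i \ln x_i
        + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln x_i \ln x_j
```

The Cobb–Douglas form is the special case of the translog in which all second-order coefficients are zero.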

15.
The purpose of this paper is to discuss the use of Value Efficiency Analysis (VEA) in efficiency evaluation when preference information is taken into account. Value efficiency analysis is an approach that applies the ideas developed for Multiple Objective Linear Programming (MOLP) to Data Envelopment Analysis (DEA). Preference information is given through the desirable structure of input and output values. The same values can be used for all units under evaluation, or the values can be specific to each unit. A decision-maker can specify the input and output values subjectively without any support, or can use a multiple-criteria support system to help find those values on the efficient frontier. The underlying assumption is that the most preferred values maximize the decision-maker's implicitly known value function in a production possibility set or a subset thereof. The purpose of value efficiency analysis is to estimate the need to increase outputs and/or decrease inputs in order to reach the indifference contour of the value function at its optimum. In this paper, we briefly review the main ideas of value efficiency analysis and discuss practical aspects related to its use. We also consider some extensions.

16.
The sensitivity of returns to scale (RTS) classifications in data envelopment analysis is studied by means of linear programming problems. The stability region for an observation preserving its current RTS classification (constant, increasing or decreasing returns to scale) can be easily investigated using the optimal values of a set of particular DEA-type formulations. Necessary and sufficient conditions are determined for preserving the RTS classifications when input or output data perturbations are non-proportional. It is shown that the sensitivity analysis method under proportional data perturbations can also be used to estimate the RTS classifications and to discover the identical RTS regions yielded by the input-based and output-based DEA methods. Thus, our approach provides information on both the RTS classifications and the stability of those classifications. This sensitivity analysis method can easily be applied via existing DEA codes.

17.
A previous paper by Arnold, Bardhan, Cooper and Kumbhakar (1996) introduced a very simple method to estimate a production frontier by proceeding in two stages, as follows: Data Envelopment Analysis (DEA) is used in the first stage to identify efficient and inefficient decision-making units (DMUs). In the second stage, the DMUs thus identified are incorporated as dummy variables in OLS (ordinary least squares) regressions. This gave very satisfactory results for both the efficient and inefficient DMUs. Here a simulation study provides additional evidence. Using this same two-stage approach with Cobb-Douglas and CES (constant elasticity-of-substitution) production functions, the estimated coefficients associated with efficient DMUs are found to be not significantly different from the true parameter values of the (known) production functions, whereas the parameter estimates for the inefficient DMUs are significantly different. A separate section of the present paper is devoted to explaining these results. Other sections describe methods for estimating input-specific inefficiencies from the first-stage use of DEA in the two-stage approach. A concluding section provides further directions for research and use.
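The two-stage logic described above can be sketched as follows; this is an illustration under assumed data and a simplified specification (a single intercept dummy), not the original study's code. A simple ratio score stands in for the first-stage DEA efficiencies, and the second stage regresses log output on log inputs plus a dummy flagging the units identified as efficient.

```python
# Two-stage sketch: flag (near-)efficient DMUs using a first-stage efficiency score,
# then include that flag as a dummy variable in a log-linear (Cobb-Douglas style)
# OLS regression. Data, the 0.98 cut-off and the stand-in score are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(1.0, 10.0, n)
x2 = rng.uniform(1.0, 10.0, n)
ineff = rng.uniform(0.6, 1.0, n)             # multiplicative inefficiency (unobserved in practice)
y = 2.0 * x1**0.4 * x2**0.5 * ineff          # "true" Cobb-Douglas technology

# Stage 1: stand-in efficiency score (a DEA model such as CCR would be used in practice)
score = ineff / ineff.max()
efficient = (score >= 0.98).astype(float)    # dummy = 1 for units on or near the frontier

# Stage 2: OLS of log y on log inputs plus the efficiency dummy
X = np.column_stack([np.ones(n), np.log(x1), np.log(x2), efficient])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print("intercept, log-x1, log-x2, efficiency dummy:", np.round(coef, 3))
```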

18.
Technological innovation and the low-carbon economy are significant for the high-quality development of China's industrial sectors. However, few scholars combine the two closely and discuss their coordinated development. This paper establishes an evaluation index system for technological innovation and the low-carbon economy in China's industrial sectors. Technological innovation efficiency, low-carbon economy efficiency, and the comprehensive efficiency of technological innovation and the low-carbon economy are dynamically investigated using a two-stage data envelopment analysis (DEA) model and DEA window analysis on panel data for 35 subsectors during 1996–2018. Inter-industry differences in technological innovation efficiency and low-carbon economy efficiency are considered, and the factors influencing the comprehensive efficiency are studied by bootstrap truncated regression. The results show that: (1) the development of technological innovation and the low-carbon economy is uncoordinated, and low-carbon economy efficiency needs improvement; (2) technological innovation efficiency and low-carbon economy efficiency are heterogeneous across the 35 subsectors; (3) the density of science and technology institutions and the average enterprise scale are positively related to the comprehensive efficiency of technological innovation and the low-carbon economy, while excessive reliance on technology introduction has a negative impact. Corresponding suggestions are provided for promoting the technological innovation efficiency and low-carbon economy efficiency of industrial sectors.

19.
Data envelopment analysis (DEA) has recently become relatively popular with road safety experts, and various decision-making units (DMUs), such as EU countries, have been assessed in terms of road safety performance (RSP). However, DEA has been criticized because it evaluates DMUs based only on self-assessment and therefore does not provide a unique ranking of DMUs. The cross efficiency method (CEM) was developed to overcome this shortcoming. Peer-evaluation in addition to self-evaluation has made the CEM recognized as an effective method for ranking DMUs. The traditional CEM is based only on the standard CCR (Charnes, Cooper and Rhodes) model, and it evaluates DMUs according to their position relative to the best practice frontier while neglecting the worst practice frontier. However, DMUs can also be assessed based on their position relative to the worst practice frontier. In this regard, the present study aims to provide a double-frontier CEM for assessing RSP by taking the best and worst frontiers into account simultaneously. For this purpose, the cross efficiency and cross anti-efficiency matrices are generated. Even though a weighted average method (WAM) is most frequently used for cross efficiency aggregation, the decision maker's (DM) preference structure may not be reflected. For this reason, the present study focuses mainly on the evidential reasoning approach (ERA), a nonlinear aggregation method, rather than the linear WAM. Equal weights are often used for cross efficiency aggregation; consequently, the effect of the DM's subjective judgments on the overall efficiency is ignored. In this respect, the minimax entropy approach (MEA) and the maximum disparity approach (MMDA) are applied to determine the ordered weighted averaging (OWA) operator weights for cross efficiency aggregation. The weighted cross efficiencies and cross anti-efficiencies are then aggregated using the ERA. Finally, the proposed method, called DF-CEM-ERA, is used to evaluate the RSP of EU countries as well as Serbian police departments (PDs).
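To fix ideas on what the cross efficiency and aggregation steps involve, the sketch below computes CCR multiplier weights for each DMU, evaluates every DMU under every other DMU's weights to build the cross efficiency matrix, and aggregates with a plain equal-weight average. That equal-weight average is the WAM baseline the study argues should be replaced by OWA-weighted aggregation via the ERA; the data are hypothetical and the anti-efficiency (worst-frontier) side is omitted for brevity.

```python
# Cross efficiency sketch: CCR multiplier weights for each evaluator k, then the
# efficiency of every DMU j under k's weights, aggregated by a simple average.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0]])  # inputs, one row per DMU
Y = np.array([[1.0], [1.2], [1.5], [1.1]])                      # outputs, one row per DMU
n, m = X.shape
s = Y.shape[1]

def ccr_weights(k):
    """Optimal multipliers (v, u) for DMU k: max u'y_k s.t. v'x_k = 1, u'y_j - v'x_j <= 0."""
    c = np.r_[np.zeros(m), -Y[k]]                    # maximize u'y_k
    A_ub = np.hstack([-X, Y])                        # u'y_j - v'x_j <= 0 for every DMU j
    A_eq = np.r_[X[k], np.zeros(s)].reshape(1, -1)   # normalization v'x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return res.x[:m], res.x[m:]

E = np.zeros((n, n))                                 # E[k, j]: efficiency of j under k's weights
for k in range(n):
    v, u = ccr_weights(k)
    E[k] = (Y @ u) / (X @ v)

print(np.round(E, 3))
print("equal-weight cross efficiencies:", np.round(E.mean(axis=0), 3))
```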
