1.
In this paper, we propose a general approach to finding the closest targets for a given unit according to a previously specified criterion of similarity. The idea behind this approach is that closer targets determine less demanding levels of operation that the inputs and outputs of an inefficient unit must reach in order to perform efficiently. Similarity can be interpreted as closeness between the inputs and outputs of the assessed unit and the proposed targets, and this closeness can be measured using either different distance functions or different efficiency measures. Depending on how closeness is measured, we develop several mathematical programming problems that can be solved easily and are guaranteed to reach the closest projection point on the Pareto-efficient frontier. Thus, our approach leads to the closest targets by means of a single-stage procedure, which is easier to handle than procedures based on algorithms aimed at identifying all the facets of the efficient frontier.
José L. Ruiz
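As a rough illustration of the kind of program involved (the notation here is ours, not the authors'): for an assessed unit (x0, y0), a closest-target problem under the L1 distance takes the form

    min Σi |x0i − xi| + Σr |y0r − yr|   subject to   (x, y) ∈ ∂S(T),

where ∂S(T) denotes the Pareto-efficient (strongly efficient) subset of the frontier of the production possibility set T; substituting other distance functions or efficiency measures for the L1 objective yields the family of single-stage programs the paper develops.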
2.
William W. Cooper Jesús T. Pastor Fernando Borras Juan Aparicio Diego Pastor 《Journal of Productivity Analysis》2011,35(2):85-94
A decade ago, the Range Adjusted Measure (RAM) was introduced for use with additive models. The empirical experience gained since then recommends developing a new measure with similar characteristics but more discriminatory power. This task is accomplished in this paper by introducing the Bounded Adjusted Measure (BAM) in connection with a new family of Data Envelopment Analysis (DEA) additive models that incorporate lower bounds for inputs and upper bounds for outputs, while accepting any returns to scale imposed on the production technology.
3.
Vladimir E. Krivonozhko Finn R. Førsund Andrey V. Lychev 《Journal of Productivity Analysis》2012,38(2):121-130
Attempts can be found in the data envelopment analysis (DEA) literature to identify returns to scale at efficient interior points of a face on the basis of returns to scale at points of the corresponding reference sets of the production possibility set. The purpose of this paper is to show that only an interior point of a face can identify the returns-to-scale properties of points lying on this face. We consider all possible cases of dispositions of faces from this point of view, and the returns-to-scale properties of the corresponding reference units are then established. We also show that finding returns to scale at an interior point of a face is a much easier problem than finding all vertices of this face.
4.
Data envelopment analysis (DEA) measures the efficiency of each decision making unit (DMU) by maximizing the ratio of virtual output to virtual input, subject to the constraint that the ratio does not exceed one for any DMU. When one output variable is linearly dependent (conically dependent, to be precise) on the other output variables, it can be hypothesized that adding or deleting such an output variable would not change the efficiency estimates; the same holds for input variables. However, when a certain set of input and output variables together is linearly dependent, the effect of such a dependency on DEA is not clear. In this paper, we call such a dependency a cross redundancy and examine its effect on DEA. We prove that the addition or deletion of a cross-redundant variable does not affect the efficiency estimates yielded by the CCR or BCC models. Furthermore, we present a sensitivity analysis to examine the effect of an imperfect cross redundancy on DEA, using accounting data obtained from United States exchange-listed companies.
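Since several entries in this list rest on the CCR model, a minimal sketch of its input-oriented envelopment LP may be useful (the toy data and the function name are illustrative, not from any of the papers):

    # Hedged sketch of the input-oriented CCR envelopment LP with scipy.
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Efficiency of DMU j0.  X is (m inputs x n DMUs), Y is (s outputs x n DMUs).
        Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],  Y @ lam >= Y[:, j0],  lam >= 0."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                  # minimise theta
        A_in = np.c_[-X[:, [j0]], X]                 # X lam - theta x0 <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y]          # -Y lam <= -y0
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, j0]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun                               # theta* in (0, 1]

    X = np.array([[2., 4., 6.], [3., 1., 2.]])       # toy data: 2 inputs, 3 DMUs
    Y = np.array([[1., 1., 2.]])                     # 1 output
    print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])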
5.
6.
Finn R. Førsund Lennart Hjalmarsson Vladimir E. Krivonozhko Oleg B. Utkin 《Journal of Productivity Analysis》2007,28(1-2):45-56
The qualitative characterisation of returns to scale in DEA has been a research issue over the last decade; however, quantitative information is ultimately what is required. This paper presents two ways of obtaining numerical values of scale elasticity: an indirect approach that uses efficiency scores and dual variables for radial projections of inefficient points onto the frontier, and a direct approach that is more general and powerful and evaluates scale elasticity numerically at any point on the DEA surface along intersections with planes. The direct and indirect approaches are compared using real data, and a very high correspondence is found.
7.
This paper covers some of the past accomplishments of DEA (Data Envelopment Analysis) and some of its future prospects. It starts with the “engineering-science” definition of efficiency and uses the duality theory of linear programming to show how, in DEA, it can be related to the Pareto–Koopmans definitions used in “welfare economics” as well as in the economic theory of production. Some of the models that have now been developed for implementing these concepts are then described, and properties of these models and the associated measures of efficiency are examined for weaknesses and strengths, along with measures of distance that may be used to determine their optimal values. Relations between the models are also demonstrated en route to delineating paths for future developments. These include extensions to different objectives, such as “satisfactory” versus “full” (or “strong”) efficiency, and extensions from “efficiency” to “effectiveness” evaluations of performance. They also include extensions to evaluating the socio-economic performance of countries and other entities, where “inputs” and “outputs” give way to other categories in which increases and decreases are located in the numerator or denominator of the ratio (engineering-science) definition of efficiency, in a manner analogous to the way output (in the numerator) and input (in the denominator) are usually positioned in the fractional programming form of DEA. Beginnings in each of these extensions are noted, and the role of applications in bringing further possibilities to the fore is highlighted.
J. Zhu
8.
In this paper we propose an approach to both estimate and select unknown smooth functions in an additive model with potentially many functions. Each function is written as a linear combination of basis terms, with coefficients regularized by a proper, linearly constrained Gaussian prior. Given any potentially rank-deficient prior precision matrix, we show how to derive linear constraints so that the corresponding effect is identified in the additive model. This allows a wide range of bases and precision matrices to be used in priors for regularization. By introducing indicator variables, each constrained Gaussian prior is augmented with a point mass at zero, thus allowing for function selection. Posterior inference is computed using Markov chain Monte Carlo, and the smoothness of the functions results both from shrinkage through the constrained Gaussian prior and from model averaging. We show how using non-degenerate priors on the shrinkage parameters enables substantially more computationally efficient sampling schemes than would otherwise be the case. We show the favourable performance of our approach compared to two contemporary alternative Bayesian methods. To highlight the potential of our approach in high-dimensional settings, we apply it to estimate two large seemingly unrelated regression models for intra-day electricity load. Both models feature a variety of univariate and bivariate functions that require different levels of smoothing and for which component selection is meaningful. Priors for the error disturbance covariances are selected carefully, and the empirical results provide a substantive contribution to the electricity load modelling literature in their own right.
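A hedged sketch of the identification step described above, using the standard null-space device (this may differ from the authors' exact construction): directions in the null space of a rank-deficient precision matrix S receive no shrinkage, so the effect can be identified by constraining the coefficients away from that null space.

    # Hedged sketch: deriving a linear constraint that identifies a smooth
    # term whose prior precision matrix S is rank deficient.
    import numpy as np

    def constrained_basis(B, S, tol=1e-10):
        """B: n x k basis matrix, S: k x k (possibly rank-deficient) precision.
        Reparametrises so the coefficients avoid the null space of S (e.g. the
        constant/linear directions a difference penalty leaves unpenalised)."""
        w, V = np.linalg.eigh(S)
        keep = w >= tol * w.max()
        Z = V[:, keep]               # orthonormal complement of the null space
        # Constraint: coefficients orthogonal to null-space directions,
        # implemented as beta = Z @ gamma.
        return B @ Z, Z.T @ S @ Z    # reduced basis and full-rank precision

The reduced precision Z'SZ is full rank, so the corresponding constrained Gaussian prior is proper.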
9.
Generalised additive models (GAM) are widely used in data analysis. In applications of the GAM, the link function is usually assumed to be a commonly used one, without justification. Motivated by a real data example with binary response where the commonly used link function does not work, we propose a generalised additive model with unknown link function (GAMUL) for various types of data, including binary, continuous and ordinal. The proposed estimators are proved to be consistent and asymptotically normal. Semiparametric efficiency of the estimators is demonstrated in terms of their linear functionals. In addition, an iterative algorithm, in which all estimators can be expressed explicitly as linear functions, is proposed to overcome the computational hurdle for GAM-type models. Extensive simulation studies conducted in this paper show that the proposed estimation procedure works very well. The proposed GAMUL is finally used to analyze a real dataset about loan repayment in China, which leads to some interesting findings.
10.
Umberto Amato Anestis Antoniadis Italia De Feis Yannig Goude Audrey Lagache 《International Journal of Forecasting》2021,37(1):171-185
Short-Term Load Forecasting (STLF) is a fundamental instrument in the efficient operational management and planning of electric utilities. Emerging smart grid technologies pose new challenges and opportunities. Although load forecasting at the aggregate level has been extensively studied, electrical load forecasting at the fine-grained geographical scale of households is more challenging. Among existing approaches, semi-parametric generalized additive models (GAM) have become increasingly popular due to their accuracy, flexibility, and interpretability. Their applicability is justified when forecasting is addressed at higher levels of aggregation, since the aggregated load pattern contains relatively smooth additive components. High-resolution data, however, are highly volatile, and forecasting the average load using GAM models with smooth components does not provide meaningful information about future demand; instead, irregular and volatile effects must be incorporated to enhance forecast accuracy. We focus on the analysis of such hybrid additive models applied to smart meter data and show that they improve the forecasting performance of classical additive models at low aggregation levels.
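As a minimal sketch of the smooth-GAM baseline that the paper argues is insufficient at low aggregation levels, assuming the pygam package is available (the features and data are toy illustrations):

    # Hedged sketch of a smooth GAM baseline for load forecasting with pygam.
    import numpy as np
    from pygam import LinearGAM, s, f

    rng = np.random.default_rng(0)
    n = 500
    hour = rng.integers(0, 24, n)            # hour of day
    temp = rng.normal(15, 8, n)              # temperature
    dow = rng.integers(0, 7, n)              # day of week (categorical)
    load = 50 + 10 * np.sin(2 * np.pi * hour / 24) - 0.5 * temp + rng.normal(0, 2, n)

    X = np.c_[hour, temp, dow]
    gam = LinearGAM(s(0, n_splines=20) + s(1) + f(2)).fit(X, load)  # smooth + factor terms
    print(gam.predict(X[:5]))

The hybrid models the paper studies augment such smooth components with irregular, volatile effects.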
11.
Alexander Cotte Poveda 《Socio》2011,45(4):154-164
In this paper, we analyse economic development and growth through traditional measures (gross domestic product and the human development index) and Data Envelopment Analysis (DEA) in Colombian departments over the period 1993–2007. We use a DEA model to measure and rank economic development and growth from different perspectives, such as poverty, equality and security. The results show considerable variation in efficiency scores across departments. A second-stage panel data analysis with fixed effects reveals that higher levels of economic activity, quality of life, employment and security are associated with higher efficiency scores based on standards of living, poverty, equality and security. Overall, the findings suggest that economic development and growth can be achieved most effectively through a decrease in poverty, an increase in equality, a reduction in violence, and improved security, which indicates the need for effective policies that guarantee these elements in the interest of all members of society.
12.
This work focuses on developing a forecasting model for the water inflow at a hydroelectric plant's reservoir for operations planning. The planning horizon is 5 years in monthly steps. Due to the complex behavior of the monthly inflow time series, we use a Bayesian dynamic linear model that incorporates seasonal and autoregressive components. We also use climate variables such as monthly precipitation, El Niño and other indices as predictors when relevant. The Brazilian power system has 140 hydroelectric plants. Based on geographical considerations, these plants are collated by basin and classified into 15 groups corresponding to the major river basins, in order to reduce the dimension of the problem. The model is then tested on these 15 groups; each group has its own forecasting model that best describes its particular seasonality and characteristics. The results show that the forecasting approach taken in this paper produces substantially better predictions than the model currently adopted in Brazil (see Maceira & Damazio, 2006), leading to superior operations planning.
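A maximum-likelihood structural time-series model with level, seasonal and autoregressive components is a rough frequentist analogue of the paper's Bayesian dynamic linear model; a minimal sketch with statsmodels follows (toy data, no climate covariates):

    # Hedged sketch: structural analogue of a DLM with seasonal + AR components.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    t = np.arange(240)
    inflow = 100 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 240)  # toy monthly inflows

    model = sm.tsa.UnobservedComponents(
        inflow,
        level='local level',        # slowly varying mean inflow
        seasonal=12,                # monthly seasonality
        autoregressive=1,           # AR(1) component
    )
    res = model.fit(disp=False)
    print(res.forecast(steps=60))   # 5-year horizon in monthly steps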
13.
In this paper the application of results from dynamical systems theory to urban retail models is discussed. First, attention is paid to the equilibria of these models: their existence and uniqueness as well as their stability. Next, the results are extended to the situation of a two-zone system. Finally, some economic consequences of parameter changes are described.
14.
In a sample-selection model with the ‘selection’ variable Q and the ‘outcome’ variable Y∗, Y∗ is observed only when Q=1. For a treatment D affecting both Q and Y∗, three effects are of interest: the ‘participation’ (i.e., selection) effect of D on Q, the ‘visible performance’ (i.e., observed outcome) effect of D on Y≡QY∗, and the ‘invisible performance’ (i.e., latent outcome) effect of D on Y∗. This paper shows the conditions under which the three effects are identified, respectively, by the three corresponding mean differences of Q, Y, and Y|Q=1 (i.e., Y∗|Q=1) across the control (D=0) and treatment (D=1) groups. Our nonparametric estimators for these effects adopt a two-sample framework and have several advantages over the usual matching methods. First, there is no need to select the number of matched observations. Second, the asymptotic distribution is easily obtained. Third, over-sampling the control/treatment group is allowed. Fourth, there is a built-in mechanism that takes into account the ‘non-overlapping support problem’, which the usual matching methods deal with by choosing a ‘caliper’. Fifth, a sensitivity analysis to gauge the presence of unobserved confounders is available. A simulation study is conducted to compare the proposed methods with matching methods, and a real data illustration is provided.
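A minimal sketch of the three mean-difference estimators described above (identification of each requires the paper's conditions; variable names follow the abstract's notation, with D the treatment, Q the selection indicator and Y = Q·Y∗ the observed outcome):

    # Hedged sketch of the three mean-difference estimators.
    import numpy as np

    def three_effects(D, Q, Y):
        d1, d0 = (D == 1), (D == 0)
        participation = Q[d1].mean() - Q[d0].mean()                    # effect on Q
        visible = Y[d1].mean() - Y[d0].mean()                          # effect on Y = Q*Y_star
        invisible = Y[d1 & (Q == 1)].mean() - Y[d0 & (Q == 1)].mean()  # effect on Y_star | Q=1
        return participation, visible, invisible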
15.
Summary: When discrete autoregressive moving-average time series are fitted by least squares, both the residuals and their autocorrelations are, for large n, representable as singular linear transformations of the true errors (or white noise) and their autocorrelations, respectively, and the matrices of both transformations are of the form I − X(X'X)^{-1}X', where the rank of X is the number of parameters estimated. However, the large-sample properties of these two sets of statistics are fundamentally different, a phenomenon of considerable importance for the use of the residual autocorrelations in performing tests of fit of these models.
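The practical upshot for tests of fit is that a portmanteau statistic computed from residual autocorrelations loses p + q degrees of freedom for the estimated parameters; a minimal sketch with statsmodels (toy AR(1) data):

    # Hedged sketch: Ljung-Box test on ARMA residuals with adjusted df.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(2)
    y = np.zeros(500)
    for t in range(1, 500):                     # simulate an AR(1) process
        y[t] = 0.6 * y[t - 1] + rng.normal()

    res = ARIMA(y, order=(1, 0, 1)).fit()       # fit ARMA(1, 1): p + q = 2
    lb = acorr_ljungbox(res.resid, lags=[20], model_df=2)
    print(lb)                                   # portmanteau test with df reduced by 2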
16.
Modeling and forecasting of stock index volatility with APARCH models under ordered restriction
Milton Abdul Thorlie Lixin Song Muhammad Amin Xiaoguang Wang 《Statistica Neerlandica》2015,69(3):329-356
This article examines volatility models for modeling and forecasting the Standard & Poor's 500 (S&P 500) daily stock index returns, including the autoregressive moving average, the Taylor and Schwert generalized autoregressive conditional heteroscedasticity (GARCH), the Glosten, Jagannathan and Runkle GARCH, and the asymmetric power ARCH (APARCH), with the following conditional distributions: normal, Student's t and skewed Student's t. In addition, we undertake unit root tests (augmented Dickey–Fuller and Phillips–Perron), a co-integration test and an error correction model. We study the stationary APARCH(p) model under an ordered restriction on its parameters, and uniform convergence, strong consistency and asymptotic normality are proved under this simple ordered restriction. In fitting these models to S&P 500 daily stock index return data over the period 1 January 2002 to 31 December 2012, we find that the APARCH model with a skewed Student's t-distribution is the most effective and successful for modeling and forecasting the daily stock index return series. The results of this study should be of great value to policy makers and investors in managing risk in stock market trading.
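A hedged sketch with the arch package of an asymmetric power volatility specification with a skewed Student's t distribution. Note that arch_model fixes the power exponent (here 1.0, the Taylor/Schwert choice) rather than estimating it, so this is an APARCH-style model, not the full APARCH of the article:

    # Hedged sketch: asymmetric power GARCH with skewed Student's t errors.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(3)
    returns = 100 * rng.normal(0, 0.01, 1000)        # toy daily % returns

    am = arch_model(returns, mean='AR', lags=1,
                    vol='GARCH', p=1, o=1, q=1, power=1.0,  # o=1 adds asymmetry
                    dist='skewt')
    res = am.fit(disp='off')
    print(res.summary())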
17.
Kazuo Yamaguchi 《Quality and Quantity》1989,23(1):21-38
This paper introduces a distinct log-linear approach to the analysis of correlates of Guttman-scale positions. I introduce new models, referred to as log-quadratic models, obtained by imposing certain structural constraints on specific quasi-independence models used by Goodman for Guttman-scale analysis. The derivation of the models owes in part to early studies by Guttman and Suchman on the intensity of attitudes as a structural component of the Guttman scale. The log-quadratic models employ three important structural parameters that characterize (1) the extent of scalability, (2) the mean Guttman-scale position, and (3) the extent of polarization of scale-type response patterns. The third parameter may also be interpreted in certain cases as a measure of attitudinal intensity. These parameters can be used efficiently as dependent variables in a multinomial logit analysis with status covariates. An application examines the effects of region, occupational status, marital status, age cohort and sex on patterns of responses to a set of questions about mistreatment by tax authorities. A comparison with the results of applying Schwartz's latent-class models to the same data is also made.
18.
19.
Determination of the efficiency of the world railway companies by method of DEA and comparison of their efficiency by Tobit analysis
This paper attempts to measure the performance of railway companies around the world that produce passenger and freight services. Data covering the 10 years from 2000 to 2009 are first analyzed via the data envelopment analysis method in order to obtain technical efficiency and allocative efficiency scores for 31 railway companies. In the analysis conducted with the CCR model, 17 firms were efficient in the first year, and this figure rises to 18 in the last year; only two companies appear efficient in the first year, and this falls to one in the last year. In the input-oriented, variable-returns analysis conducted with the BCC model, 20 firms were technically efficient at the beginning of the period, and the figure reaches 24 by the end. Next, the DEA scores are regressed on the outputs via Tobit regression to determine how decisive the outputs are for efficiency. The same output composition used in the Tobit analysis gives results more consistent with the allocative efficiency scores than with the technical efficiency scores.
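A hedged sketch of a second-stage Tobit regression of DEA efficiency scores, which are censored at 1, estimated by maximum likelihood with scipy (the paper's exact specification may differ):

    # Hedged sketch: Tobit regression of DEA scores right-censored at 1.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def tobit_right(theta, y, X, limit=1.0):
        k = X.shape[1]
        b, log_s = theta[:k], theta[k]
        s = np.exp(log_s)
        xb = X @ b
        cens = y >= limit
        # uncensored: density contribution; censored: P(latent score >= limit)
        ll = norm.logpdf((y[~cens] - xb[~cens]) / s).sum() - (~cens).sum() * np.log(s)
        ll += norm.logcdf((xb[cens] - limit) / s).sum()
        return -ll

    # toy data: scores in (0, 1], with a mass of fully efficient firms at 1
    rng = np.random.default_rng(4)
    X = np.c_[np.ones(200), rng.normal(size=200)]
    y = np.clip(0.7 + 0.1 * X[:, 1] + rng.normal(0, 0.15, 200), 0.05, 1.0)

    fit = minimize(tobit_right, x0=np.zeros(3), args=(y, X), method='BFGS')
    print(fit.x[:2], np.exp(fit.x[2]))      # coefficient estimates and sigma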
20.
《International Journal of Forecasting》2003,19(1):95-110
This paper develops a Bayesian vector autoregressive (BVAR) model to forecast the market share of the leader of the Portuguese car market. The model includes five marketing decision variables. The Bayesian prior is selected on the basis of the accuracy of the out-of-sample forecasts. We find that the BVAR models generally produce more accurate forecasts. The out-of-sample accuracy of the BVAR forecasts is also compared with that of forecasts from an unrestricted VAR model and of benchmark forecasts produced by three univariate models. Additionally, competitive dynamics are revealed through variance decompositions and impulse response analyses.
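A hedged sketch of the posterior-mean coefficients of a BVAR under a simple ridge-type shrinkage prior centred on zero, a crude stand-in for the Minnesota-style priors typically used (the paper selects its prior by out-of-sample accuracy, which this sketch does not attempt):

    # Hedged sketch: BVAR posterior mean under a zero-centred shrinkage prior.
    import numpy as np

    def bvar_posterior_mean(Y, p=2, lam=0.1):
        """Y: T x k matrix of series.  Returns posterior-mean coefficients of a
        VAR(p) with intercept, under beta ~ N(0, (1/lam) I) and a fixed error scale."""
        T, k = Y.shape
        rows = [np.r_[1.0, Y[t - p:t][::-1].ravel()] for t in range(p, T)]
        X = np.asarray(rows)                  # (T-p) x (1 + k*p) regressors
        Z = Y[p:]                             # left-hand side
        P = lam * np.eye(X.shape[1])
        P[0, 0] = 0.0                         # leave the intercept unshrunk
        return np.linalg.solve(X.T @ X + P, X.T @ Z)

    rng = np.random.default_rng(5)
    Y = rng.normal(size=(120, 3)).cumsum(axis=0)    # toy monthly data, 3 series
    B = bvar_posterior_mean(Y, p=2, lam=0.5)
    print(B.shape)                                  # (1 + 3*2) x 3 coefficient matrix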