Similar Articles
20 similar articles found (search time: 437 ms)
1.
2.
This paper examines the technical efficiency of US Federal Reserve check processing offices over 1980–2003. We extend results from Park et al. [Park, B., Simar, L., Weiner, C., 2000. FDH efficiency scores from a stochastic point of view. Econometric Theory 16, 855–877] and Daouia and Simar [Daouia, A., Simar, L., 2007. Nonparametric efficiency analysis: a multivariate conditional quantile approach. Journal of Econometrics 140, 375–400] to develop an unconditional, hyperbolic, α-quantile estimator of efficiency. Our new estimator is fully non-parametric and robust with respect to outliers; when used to estimate distance to quantiles lying close to the full frontier, it is strongly consistent and converges at rate root-n, thus avoiding the curse of dimensionality that plagues data envelopment analysis (DEA) estimators. Our methods could be used by policymakers to compare inefficiency levels across offices or by managers of individual offices to identify peer offices.
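The order-α idea behind this estimator can be seen in a stripped-down setting. The sketch below is a toy one-input/one-output version (not the paper's hyperbolic multivariate estimator): a unit is scored against the α-quantile of outputs among all units using no more input. Function names and the simulated frontier are illustrative assumptions.

```python
import numpy as np

def order_alpha_output_score(x0, y0, X, Y, alpha=0.95):
    """Toy order-alpha output efficiency in one input / one output.

    The alpha-quantile "partial frontier" at input level x0 is the
    alpha-quantile of outputs among units using no more input than x0;
    the score is that quantile divided by y0 (a score above 1 means the
    unit lies below the partial frontier and could expand output).
    """
    attainable = Y[X <= x0]            # outputs of units using input <= x0
    if attainable.size == 0:
        return float("nan")
    return np.quantile(attainable, alpha) / y0

# simulated data with true frontier y = sqrt(x)
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, 200)
Y = np.sqrt(X) * rng.uniform(0.5, 1.0, 200)

score = order_alpha_output_score(5.0, 1.0, X, Y, alpha=0.95)
```

Unlike the full-frontier FDH score (which takes the maximum instead of a quantile), the quantile version is robust to a few outlying observations, which is the robustness property the abstract emphasizes.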

3.
In this study, we look for empirical support for the hypothesis that there is a positive relationship between the quality of corporate governance across firms and the relative efficiency levels of these firms. This hypothesis is related to Leibenstein’s idea of X-efficiency. We use the data envelopment analysis (DEA) estimator to obtain proxies for the X-[in]efficiency of the firms in our sample and then analyze them with respect to different ownership structures by comparing distributions and aggregate efficiencies across different groups. We also use truncated regression with bootstrap, following Simar and Wilson (“Estimation and inference in two-stage, semi-parametric models of production processes”) and Simar and Zelenyuk (2003), to make inference on the relationship of inefficiency to various indicators of corporate governance quality and ownership, as well as industry- and year-specific dummies. The data come from seven industries in Ukraine. “The entrepreneurship structure itself may be critical, with the classic issue of the separation of ownership from control being regarded as one of the earliest and most important sources of X-efficiency” (Button and Weyman-Jones, 1992, American Economic Review). We would like to dedicate this paper to the memory of Christos Panzios, the co-editor who handled our paper almost to the very end and whose suggestions and encouragement helped us substantially improve it.

4.
This paper analyses the efficiency drivers of a representative sample of Spanish football clubs by means of the two-stage data envelopment analysis (DEA) procedure proposed by Simar and Wilson (J Econ 136:31–64, 2007). In the first stage, the technical efficiency of football clubs is estimated using a bootstrapped DEA model in order to establish which of them are the most efficient; the ranking is based on total productivity in the period 1996–2004. In the second stage, the Simar and Wilson procedure is used to regress the DEA scores on potential efficiency drivers via bootstrapped truncated regression. Policy implications of the main findings are also considered.

5.
Data envelopment analysis (DEA) measures the efficiency of each decision making unit (DMU) by maximizing the ratio of virtual output to virtual input with the constraint that the ratio does not exceed one for each DMU. In the case that one output variable has a linear dependence (conic dependence, to be precise) with the other output variables, it can be hypothesized that the addition or deletion of such an output variable would not change the efficiency estimates. This is also the case for input variables. However, in the case that a certain set of input and output variables is linearly dependent, the effect of such a dependency on DEA is not clear. In this paper, we call such a dependency a cross redundancy and examine the effect of a cross redundancy on DEA. We prove that the addition or deletion of a cross-redundant variable does not affect the efficiency estimates yielded by the CCR or BCC models. Furthermore, we present a sensitivity analysis to examine the effect of an imperfect cross redundancy on DEA by using accounting data obtained from United States exchange-listed companies.
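The invariance of efficiency scores under a conically dependent output can be checked numerically. The sketch below (a minimal illustration with made-up data, not the paper's sensitivity analysis) computes input-oriented CCR scores via the envelopment LP and verifies that appending an output that is a nonnegative combination of the existing outputs leaves every score unchanged.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_scores(X, Y):
    """Input-oriented CCR efficiency of every DMU via the envelopment LP:
    min theta  s.t.  X'lam <= theta*x0,  Y'lam >= y0,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for j in range(n):
        c = np.zeros(n + 1)
        c[0] = 1.0                                   # minimise theta
        A_ub = np.vstack([
            np.hstack([-X[j][:, None], X.T]),        # inputs:  X'lam - theta*x0 <= 0
            np.hstack([np.zeros((s, 1)), -Y.T]),     # outputs: -Y'lam <= -y0
        ])
        b_ub = np.concatenate([np.zeros(m), -Y[j]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

X = np.array([[2.0, 1.0], [3.0, 4.0], [6.0, 2.0], [5.0, 5.0]])
Y = np.array([[2.0, 1.0], [3.0, 2.0], [4.0, 3.0], [3.0, 1.0]])

base = ccr_scores(X, Y)
# append a conically dependent output (0.5*y1 + 2*y2): scores are unchanged
Y_red = np.hstack([Y, 0.5 * Y[:, :1] + 2.0 * Y[:, 1:2]])
redundant = ccr_scores(X, Y_red)
```

The extra output constraint is implied by the original two, so the feasible region of each LP, and hence every score, is exactly the same.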

6.
It is well-known that the naive bootstrap yields inconsistent inference in the context of data envelopment analysis (DEA) or free disposal hull (FDH) estimators in nonparametric frontier models. For inference about efficiency of a single, fixed point, drawing bootstrap pseudo-samples of size m < n provides consistent inference, although coverages are quite sensitive to the choice of subsample size m. We provide a probabilistic framework in which these methods are shown to be valid for statistics comprised of functions of DEA or FDH estimators. We examine a simple, data-based rule for selecting m suggested by Politis et al. (Stat Sin 11:1105–1124, 2001), and provide Monte Carlo evidence on the size and power of our tests. Our methods (i) allow for heterogeneity in the inefficiency process, and unlike previous methods, (ii) do not require multivariate kernel smoothing, and (iii) avoid the need for solutions of intermediate linear programs.
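A minimal sketch of m-out-of-n bootstrap inference for an FDH efficiency score at a fixed point, in a one-input/one-output model where the FDH convergence rate is n^(1/2). This is only an illustration of the general idea, not the paper's procedure: the choice m = n^0.7 is arbitrary here rather than selected by the data-based rule, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(0.0, 1.0, n)
Y = np.sqrt(X) * np.sqrt(rng.uniform(0.0, 1.0, n))   # true frontier y = sqrt(x)

def fdh_output(x0, y0, Xs, Ys):
    """FDH output efficiency at (x0, y0): best observed output among
    units using no more input than x0, divided by y0."""
    return Ys[Xs <= x0].max() / y0

x0, y0 = 0.5, 0.3
theta_hat = fdh_output(x0, y0, X, Y)

# bootstrap pseudo-samples of size m < n; the FDH rate is n^kappa with
# kappa = 1/(p+q) = 1/2 for one input and one output
m = int(n ** 0.7)
kappa = 0.5
stats = []
for _ in range(500):
    idx = rng.integers(0, n, m)                      # pseudo-sample of size m
    stats.append(m ** kappa * (fdh_output(x0, y0, X[idx], Y[idx]) - theta_hat))
lo, hi = np.quantile(stats, [0.025, 0.975])
# invert the subsample distribution of the scaled pivot to get a 95% CI
ci = (theta_hat - hi / n ** kappa, theta_hat - lo / n ** kappa)
```

Because the FDH estimator is biased toward the data, the pseudo-sample statistics are non-positive and the interval lies (correctly) above the point estimate.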

7.
This paper examines the wide-spread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some environmental variables in a second-stage analysis. In the literature, only two statistical models have been proposed in which second-stage regressions are well-defined and meaningful. In the model considered by Simar and Wilson (J Prod Anal 13:49–78, 2007), truncated regression provides consistent estimation in the second stage, whereas in the model proposed by Banker and Natarajan (Oper Res 56:48–58, 2008a), ordinary least squares (OLS) provides consistent estimation. This paper examines, compares, and contrasts the very different assumptions underlying these two models, and makes clear that second-stage OLS estimation is consistent only under very peculiar and unusual assumptions on the data-generating process that limit its applicability. In addition, we show that in either case, bootstrap methods provide the only feasible means for inference in the second stage. We also comment on ad hoc specifications of second-stage regression equations that ignore the part of the data-generating process that yields data used to obtain the initial DEA estimates.

8.
Previous bank efficiency studies assumed that the same efficient frontier exists across different types of banks; moreover, slacks are not counted in the efficiency scores of the conventional radial data envelopment analysis (DEA) model, while nonradial DEA models do not consider radial features. This paper develops an epsilon-based measure meta-DEA approach for measuring the efficiencies and technology gaps of different bank types. The results are as follows: the average efficiency and technology gaps of non-financial-holding banks are better than those of financial-holding banks, and non-financial-holding banks are more efficient than financial-holding banks in investment and other income.

9.
Data envelopment analysis (DEA) is a non-parametric approach for measuring the relative efficiencies of peer decision making units (DMUs). In recent years, it has been widely used to evaluate two-stage systems under different organization mechanisms. This study modifies the conventional leader–follower DEA models for two-stage systems by considering the uncertainty of data. The dual deterministic linear models are first constructed from the stochastic CCR models under the assumption that all components of inputs, outputs, and intermediate products are related only with some basic stochastic factors, which follow continuous and symmetric distributions with nonnegative compact supports. The stochastic leader–follower DEA models are then developed for measuring the efficiencies of the two stages. The stochastic efficiency of the whole system can be uniquely decomposed into the product of the efficiencies of the two stages. Relationships between stochastic efficiencies from stochastic CCR and stochastic leader–follower DEA models are also discussed. An example of the commercial banks in China is considered using the proposed models under different risk levels.

10.
Journal of Econometrics 2002, 106(2):203–216
The coefficient matrix of a cointegrated first-order autoregression is estimated by reduced rank regression (RRR), depending on the larger canonical correlations and vectors of the first difference of the observed series and the lagged variables. In a suitable coordinate system the components of the least-squares (LS) estimator associated with the lagged nonstationary variables are of order 1/T, where T is the sample size, and are asymptotically functionals of a Brownian motion process; the components associated with the lagged stationary variables are of order T^(-1/2) and are asymptotically normal. The components of the RRR estimator associated with the stationary part are asymptotically the same as for the LS estimator. Some components of the RRR estimator associated with nonstationary regressors have zero error to order 1/T and the other components have a more concentrated distribution than the corresponding components of the LS estimator.
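The RRR estimator described here can be sketched for a bivariate cointegrated VAR(1). The simulation below uses made-up parameters: it estimates the rank-one impact matrix Π from the canonical correlations between the first differences and the lagged levels, which is the construction the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000
alpha = np.array([[0.0], [0.5]])        # adjustment coefficients
beta = np.array([[1.0], [-1.0]])        # cointegrating vector
Pi = alpha @ beta.T                     # rank-1 impact matrix

# simulate y_t = y_{t-1} + Pi y_{t-1} + eps_t
y = np.zeros((T + 1, 2))
for t in range(T):
    y[t + 1] = y[t] + Pi @ y[t] + rng.normal(0.0, 0.1, 2)

dY, Y1 = np.diff(y, axis=0), y[:-1]     # first differences and lagged levels

# product-moment matrices and the canonical-correlation eigenproblem
S00, S01, S11 = dY.T @ dY / T, dY.T @ Y1 / T, Y1.T @ Y1 / T
M = np.linalg.solve(S11, S01.T @ np.linalg.solve(S00, S01))
eigval, eigvec = np.linalg.eig(M)
b = eigvec.real[:, np.argsort(eigval.real)[::-1][:1]]   # leading vector, rank r = 1
a = S01 @ b @ np.linalg.inv(b.T @ S11 @ b)              # regression of dY on b'Y1
Pi_rrr = a @ b.T                                        # reduced-rank estimate of Pi
```

The estimated direction b (the stationary combination) is super-consistent, while a converges at the usual root-T rate, matching the two convergence rates described in the abstract.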

11.
The explanation of productivity differentials is very important to identify the economic conditions that create inefficiency and to improve managerial performance. In the literature two main approaches have been developed: one-stage approaches and two-stage approaches. Daraio and Simar (2005, J Prod Anal 24(1):93–121) propose a fully nonparametric methodology based on conditional FDH and conditional order-m frontiers without any convexity assumption on the technology. However, convexity has always been assumed in mainstream production theory and general equilibrium, and in many empirical applications the convexity assumption can be reasonable and sometimes natural. Led by these considerations, in this paper we propose a unifying approach to introduce external-environmental variables in nonparametric frontier models for convex and nonconvex technologies. Extending earlier contributions by Daraio and Simar (2005, J Prod Anal 24(1):93–121) as well as Cazals et al. (2002, J Econometrics 106:1–25), we introduce a conditional DEA estimator, i.e., an estimator of a production frontier of DEA type conditioned on some external-environmental variables which are neither inputs nor outputs under the control of the producer. A robust version of this conditional estimator is proposed too. These various measures of efficiency also provide indicators of convexity, which we illustrate using simulated and real data. Cinzia Daraio acknowledges research support from the Italian Ministry of Education Research on Innovation Systems Project (iRis) “The reorganization of the public system of research for the technological transfer: governance, tools and interventions” and from the Italian Ministry of Educational Research Project (MIUR 40% 2004) “System spillovers on the competitiveness of Italian economy: quantitative analysis for sectoral policies”. Léopold Simar acknowledges research support from the “Interuniversity Attraction Pole”, Phase V (No. P5/24), from the Belgian Government (Belgian Science Policy).

12.
In dynamic panel regression, when the variance ratio of individual effects to disturbance is large, the system-GMM estimator will have large asymptotic variance and poor finite-sample performance. To deal with this variance-ratio problem, we propose a residual-based instrumental variables (RIV) estimator, which uses the residual from regressing Δy_{i,t−1} on … as the instrument for the level equation. The proposed RIV estimator is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to disturbance. Monte Carlo simulations show that the RIV estimator has better finite-sample performance than alternative estimators. The RIV estimator generates less finite-sample bias than the difference-GMM, system-GMM, collapsing-GMM and Level-IV estimators in most cases. Under RIV estimation, the variance-ratio problem is well controlled, and the empirical distribution of its t-statistic is similar to the standard normal distribution for moderate sample sizes.

13.
Measuring residential energy efficiency improvements with DEA
This paper measures energy efficiency improvements of US single-family homes between 1997 and 2001 using a two-stage procedure. In the first stage, an indicator of energy efficiency is derived by means of Data Envelopment Analysis (DEA), and the analogy between the DEA estimator and traditional measures of energy efficiency is demonstrated. The second stage employs a bootstrapped truncated regression technique to decompose the variation in the obtained efficiency estimates into a climatic component and factors attributed to efficiency improvements. Results indicate a small but significant improvement of energy efficiency over the studied time interval, mainly accounted for by fuel oil and natural gas users.

14.
The article describes a nonlinear three-stage least-squares estimator for the parameters of a system of simultaneous, nonlinear, implicit equations; the method allows the estimation of these parameters subject to nonlinear parametric restrictions across equations. The estimator is shown to be strongly consistent, asymptotically normally distributed, and more efficient than the nonlinear two-stage least-squares estimator. Some practical implications of the regularity conditions used to obtain these results are discussed from the point of view of one whose interest is in applications. Also, computing methods using readily available nonlinear regression programs are described.

15.
This paper considers estimation and inference in linear panel regression models with lagged dependent variables and/or other weakly exogenous regressors when N (the cross-section dimension) is large relative to T (the time series dimension). It allows for fixed and time effects (FE-TE) and derives a general formula for the bias of the FE-TE estimator which generalizes the well-known Nickell bias formula derived for pure autoregressive dynamic panel data models. It shows that in the presence of weakly exogenous regressors, inference based on the FE-TE estimator will result in size distortions unless N/T is sufficiently small. To deal with the bias and size distortion of the FE-TE estimator, the use of a half-panel jackknife FE-TE estimator is considered and its asymptotic distribution is derived. It is shown that the bias of the half-panel jackknife FE-TE estimator is of order T^(-2), and for valid inference it is only required that N/T^3 → 0 as N, T → ∞ jointly. An extension to unbalanced panel data models is also provided. The theoretical results are illustrated with Monte Carlo evidence, which shows that the FE-TE estimator can suffer from large size distortions when N > T, while the half-panel jackknife FE-TE estimator shows little size distortion. The use of the half-panel jackknife FE-TE estimator is illustrated with two empirical applications from the literature.
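The half-panel jackknife idea is easy to sketch in the simplest case the abstract mentions, a pure AR(1) panel with fixed effects only (no time effects): estimate by within-groups on the full panel and on each half of the time dimension, then combine as 2·(full estimate) − (average of half-panel estimates) to cancel the O(1/T) Nickell bias. Parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, rho = 5000, 20, 0.5

# AR(1) panel with fixed effects; a burn-in removes initial-condition effects
mu = rng.normal(0.0, 1.0, N)
y = np.zeros((N, T + 50))
for t in range(1, T + 50):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(0.0, 1.0, N)
y = y[:, 50:]

def fe_ar1(panel):
    """Within-groups (FE) estimate of rho in y_it = rho*y_{i,t-1} + mu_i + eps_it."""
    x = panel[:, :-1] - panel[:, :-1].mean(axis=1, keepdims=True)  # demeaned lag
    z = panel[:, 1:] - panel[:, 1:].mean(axis=1, keepdims=True)
    return (x * z).sum() / (x * x).sum()

rho_fe = fe_ar1(y)                                   # Nickell-biased downward
# half-panel jackknife: 2*full - average of the two half-panel estimates
rho_jack = 2 * rho_fe - 0.5 * (fe_ar1(y[:, :T // 2]) + fe_ar1(y[:, T // 2:]))
```

With T = 20 the within estimator is biased down by roughly (1+ρ)/(T−1) ≈ 0.08, while the jackknifed estimate lands much closer to the true ρ.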

16.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y,X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity based on estimating equations constructed using an instrument Z, which is part of X related to Y but unrelated to the dropout propensity conditioned on Y and other covariates. Population means and other parameters in the nonparametric distribution of Y can be estimated based on inverse propensity weighting with estimated propensity. To improve efficiency, we derive a model-assisted regression estimator making use of information provided by the covariates and previously observed Y-values in the longitudinal setting. The model-assisted regression estimator is protected from model misspecification and is asymptotically normal and more efficient when the working models are correct and some other conditions are satisfied. The finite-sample performance of the estimators is studied through simulation, and an application to the HIV-CD4 data set is also presented as illustration.
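The inverse-propensity-weighting step can be sketched in a deliberately simplified setting: a cross-sectional sample where dropout depends only on an observed covariate (an ignorable mechanism, unlike the paper's nonignorable case), with the propensity estimated by logistic maximum likelihood. All names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 20000
x = rng.normal(0.0, 1.0, n)
y = 2.0 + x + rng.normal(0.0, 1.0, n)          # true population mean of y is 2.0

# dropout depends on x: larger x -> more likely to remain in the sample
p_true = 1.0 / (1.0 + np.exp(-(0.5 + x)))
r = rng.uniform(size=n) < p_true               # response indicator

def neg_loglik(b):
    """Negative log-likelihood of the logistic propensity model p(x) = sigmoid(b0 + b1*x)."""
    eta = b[0] + b[1] * x
    return np.sum(np.logaddexp(0.0, eta) - r * eta)

b_hat = minimize(neg_loglik, np.zeros(2)).x
p_hat = 1.0 / (1.0 + np.exp(-(b_hat[0] + b_hat[1] * x)))

naive = y[r].mean()                               # biased: responders have larger x
ipw = np.sum(r * y / p_hat) / np.sum(r / p_hat)   # normalized IPW estimate of the mean
```

Weighting each respondent by the inverse of its estimated response probability removes the selection bias that the naive complete-case mean suffers from.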

17.
18.
In exploring the business operation of Internet companies, few researchers have used data envelopment analysis (DEA) to evaluate their performance. Since Internet companies have a two-stage production process, marketability and profitability, this study employs a relational two-stage DEA model to assess the efficiency of 40 dot-com firms. The results show that our model performs better in measuring efficiency and is able to discriminate the causes of inefficiency, thus helping business management to be more effective by providing more guidance for business performance improvement.

19.
Government-supported technological research and development can help the private sector to compete globally, and a more accurate evaluation system considering multi-factor performance is highly desired. This study offers an alternative perspective on and characterization of the performance of Technology Development Programs (TDPs) via a two-stage process that emphasizes research and development (R&D) and technology diffusion. It employs a sequential data envelopment analysis (DEA) with a non-parametric statistical analysis to analyze differences in intellectual capital variables among various TDPs. The results reveal that R&D performance is better than technology diffusion performance for the TDPs. In addition, the “Mechanical, Mechatronic, and Transportation field” is more efficient than the other fields in both the R&D and technology diffusion performance models. The findings point to the importance of intellectual capital in achieving high levels of TDP efficiency. The potential applications and strengths of DEA and intellectual capital in assessing the performance of TDPs are also highlighted.

20.
Consider the problem of estimating a mean vector in a p-variate normal distribution under two-stage sequential sampling schemes. The paper proposes a stopping rule motivated by the James–Stein shrinkage estimator, and shows that the stopping rule and the corresponding shrinkage estimator asymptotically dominate the usual two-stage procedure under a sequence of local alternatives for p ≥ 3. Results of a Monte Carlo simulation for average sample sizes and risks of the estimators are also stated.
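The James–Stein estimator motivating the stopping rule shrinks the sample mean toward the origin, and its risk advantage appears only for p ≥ 3. The sketch below is a fixed-sample Monte Carlo with known unit variance (not the paper's two-stage sequential procedure) illustrating the dominance over the usual estimator; the true mean and replication count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
p, reps = 8, 4000
theta = np.full(p, 0.5)                  # true mean vector

def james_stein(z):
    """Shrink z = X ~ N(theta, I_p) toward the origin; requires p >= 3 for gains."""
    return (1.0 - (len(z) - 2) / np.dot(z, z)) * z

risk_mle = risk_js = 0.0
for _ in range(reps):
    z = theta + rng.normal(0.0, 1.0, p)
    risk_mle += np.sum((z - theta) ** 2)           # squared-error loss of X itself
    risk_js += np.sum((james_stein(z) - theta) ** 2)
risk_mle /= reps                          # close to p, the risk of the usual estimator
risk_js /= reps                           # strictly smaller for every theta when p >= 3
```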


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号