Similar Literature
 20 similar documents found
1.
A large literature measures the allocative and technical efficiency of a set of firms using econometric techniques to estimate stochastic production frontiers or distance functions. Typically, researchers compute only the precision of individual efficiency rankings. Recently, Horrace and Schmidt (Journal of Applied Econometrics, 15, 1–26, 2000) have applied sampling-theoretic statistical techniques known as multiple comparisons with a control (MCC) and multiple comparisons with the best (MCB) to make statistical comparisons of efficiency rankings. As an alternative, this paper offers a Bayesian multiple comparison procedure that we argue is simpler to implement, gives the researcher greater flexibility over the type of comparison, and provides richer and more intuitive information. For these methods and a parametric bootstrap technique, we carry out multiple comparisons of technical efficiency rankings for a set of U.S. electric generating firms, estimated within a distance function framework. We find that the Bayesian method provides substantially more precise inferences than the MCB and MCC methods. JEL classification: C11, C32, D24.
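A minimal sketch of the kind of Bayesian comparison described above, using hypothetical posterior draws rather than the paper's electric-utility estimates: given MCMC draws of firm-specific inefficiency, the posterior probability that any firm is the most efficient, or that one firm beats another, can be read directly off the draws.

```python
import numpy as np

# Hypothetical posterior draws of inefficiency u_i (rows: MCMC draws, cols: firms).
# In the paper these would come from a Bayesian stochastic distance-function model.
rng = np.random.default_rng(0)
n_draws, n_firms = 5000, 6
true_u = np.array([0.05, 0.10, 0.12, 0.20, 0.25, 0.40])
u_draws = rng.gamma(shape=4.0, scale=true_u / 4.0, size=(n_draws, n_firms))

# Posterior probability that each firm has the smallest inefficiency ("is the best").
best = u_draws.argmin(axis=1)
p_best = np.bincount(best, minlength=n_firms) / n_draws

# Pairwise comparison: posterior probability that firm 0 is more efficient than firm j
# (the entry for firm 0 itself is 0 by construction).
p_beats = (u_draws[:, [0]] < u_draws).mean(axis=0)

print("P(firm i is most efficient):", np.round(p_best, 3))
print("P(firm 0 more efficient than firm j):", np.round(p_beats, 3))
```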

2.
Risk management is now present in many economic sectors. However, existing studies have not considered risk management as a potential determinant of firm performance. In this paper, we investigate the role of risk management and financial intermediation in creating value for financial institutions by analyzing U.S. property-liability insurers. Our main goal is to test how risk management and financial intermediation activities create value for insurers by enhancing economic efficiency through cost reductions. We treat these two activities as intermediate outputs and estimate their shadow prices. Insurer cost efficiency is measured using an econometric cost function. The econometric results show that both activities significantly increase the efficiency of the property-liability insurance industry.

3.
An econometric methodology is developed for nonparametric estimation of concave production technologies. The methodology, based on the principle of maximum likelihood, uses entropic distance and convex programming techniques to estimate production functions. Empirical applications are presented to demonstrate the feasibility of the methodology in small and large datasets. Copyright © 2007 John Wiley & Sons, Ltd.

4.
This article studies the estimation of production frontiers and efficiency scores when the commodity of interest is an economic bad with a discrete distribution. Existing parametric econometric techniques (stochastic frontier methods) assume that output is a continuous random variable but, if output is discretely distributed, then one faces a scenario of model misspecification. Therefore a new class of econometric models has been developed to overcome this problem. The Delaporte subclass of models is studied in detail, and tests of hypotheses are proposed to discriminate among parametric models. In particular, Pearson’s chi-squared test is adapted to construct a new kernel-based consistent Pearson test. A Monte Carlo experiment evaluates the merits of the new model and methods, and these are used to estimate the frontier and efficiency scores of the production of infant deaths in England. Extensions to the model are discussed.

5.
The paper examines gains in efficiency from joint estimation of systems of ARMA processes where cross-correlation is due to contemporaneous correlation among disturbances. The asymptotic variance of the joint estimates is derived; it involves only variances and covariances among purely AR processes corresponding to the AR and MA parts of the constituent processes. Small-sample gains are evaluated by Monte Carlo methods. Application of joint estimation to two short-term interest rates is shown to result in more accurate post-sample predictions relative to both univariate models and the FMP econometric model.

6.
The performance on small and medium-size samples of several techniques for solving the classification problem in discriminant analysis is investigated. The techniques considered are two widely used parametric statistical techniques (Fisher's linear discriminant function and Smith's quadratic function) and a class of recently proposed nonparametric estimation techniques based on mathematical programming (linear and mixed-integer programming). A simulation study analyzes the relative performance of these techniques in the two-group case, for various small sample sizes, moderate group overlap, and six different data conditions. Training samples as well as validation samples are used to assess classificatory performance. The degree of group overlap and the sample sizes selected for analysis are of practical interest because they closely reflect the conditions of many real data sets. The results of the experiment show that Smith's quadratic function tends to be superior on both training and validation samples when the variance-covariance matrices differ across groups, while the mixed-integer technique performs best on training samples when the variance-covariance matrices are equal, and on validation samples with equal variance-covariance matrices and discrete uniform independent variables. The mixed-integer technique and the quadratic discriminant function are also found to be more sensitive than the other techniques to sample size, giving disproportionately inaccurate results on small samples.
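For the two parametric classifiers compared above, a compact illustration (simulated two-group data with heterogeneous covariances, not the study's own experimental design or its mathematical-programming competitors) using scikit-learn:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(1)

# Two groups with heterogeneous variance-covariance matrices -- the condition under
# which the quadratic rule tends to dominate in the study.
n = 50  # small training sample per group
g1 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=n)
g2 = rng.multivariate_normal([1.5, 1.5], [[3.0, -0.5], [-0.5, 2.0]], size=n)
X = np.vstack([g1, g2])
y = np.repeat([0, 1], n)

# Independent validation sample drawn from the same populations.
v1 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=500)
v2 = rng.multivariate_normal([1.5, 1.5], [[3.0, -0.5], [-0.5, 2.0]], size=500)
Xv = np.vstack([v1, v2])
yv = np.repeat([0, 1], 500)

for name, clf in [("Fisher LDA", LinearDiscriminantAnalysis()),
                  ("Quadratic DA", QuadraticDiscriminantAnalysis())]:
    clf.fit(X, y)
    print(name, "validation hit rate:", clf.score(Xv, yv))
```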

7.
The World Competitiveness Report (WCR), produced annually by the Switzerland-based Institute for Management Development, rates and ranks the competitiveness of a group of nations (OECD countries plus some newly emerging economies) and is widely quoted in the international media, especially by government and public leaders. Although the WCRs give some idea of the methodology used to rate and rank countries, the details are not provided, so the methodology remains in large part unknown to the public. Intelligent use of the WCR requires a sound understanding of the methodology by its potential users: politicians, company executives, and public policy makers. The objective of this paper is to uncover and understand the methodology of the WCR through exact replication of its rankings at all levels of aggregation. An estimation model based on mathematical programming is used to replicate the WCR rankings.

8.
9.
This paper is concerned with the large-sample efficiency of the asymptotic least-squares (ALS) estimators introduced by Gouriéroux, Monfort, and Trognon (1982, 1985) and Chamberlain (1982, 1984). We show how the efficiency of these estimators is affected when additional information is incorporated into the estimation procedure. The relationship between ALS and maximum likelihood is discussed. It is shown that ALS can be used to obtain asymptotically efficient estimates for a large range of econometric models, and many results from the estimation literature emerge as special cases of the framework adopted in this paper. An application of ALS to a dynamic rational expectations factor demand model for the manufacturing sector in the Netherlands demonstrates the potential of the method for estimating the parameters of models subject to nonlinear cross-equation restrictions.

10.
In industry sectors where market prices for goods and services are unavailable, it is common to use estimated output and input distance functions to estimate rates of productivity change. It is also possible, but less common, to use estimated distance functions to estimate the normalised support (or efficient) prices of individual inputs and outputs. A problem that arises in the econometric estimation of these functions is that more than one variable in the estimating equation may be endogenous. In such cases, maximum likelihood estimation can lead to biased and inconsistent parameter estimates. To solve the problem, we use linear programming to construct a quantity index. The distance function is then written in the form of a conventional stochastic frontier model where the explanatory variables are unambiguously exogenous. We use this approach to estimate productivity indexes, measures of environmental change, levels of efficiency, and support prices for a sample of Australian public hospitals.

11.
Nonlinear taxes create econometric difficulties when estimating labor supply functions. One estimation method that tackles these problems accounts for the complete form of the budget constraint and uses maximum likelihood to estimate the parameters. Another method linearizes budget constraints and uses instrumental variables techniques. Using Monte Carlo simulations, I investigate the small-sample properties of these estimation methods and how they are affected by measurement errors in the independent variables. No single estimator is best in all cases; hence, in actual estimation the choice of estimator should depend on the sample size and on the type of measurement errors in the data. Complementing actual estimates with a Monte Carlo study of the chosen estimator, given the type of measurement errors that characterize the data, would often help in interpreting the estimates. This paper shows how such a study can be performed.
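A stripped-down sketch of the kind of Monte Carlo study the paper advocates, using a generic linear model with a mismeasured regressor rather than the labor-supply model itself: simulate many samples, apply the candidate estimators, and compare their bias and dispersion.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, n, reps = 1.0, 200, 2000
ols_est, iv_est = [], []

for _ in range(reps):
    x_true = rng.normal(size=n)
    z = x_true + rng.normal(scale=0.5, size=n)      # instrument correlated with the true regressor
    x_obs = x_true + rng.normal(scale=0.7, size=n)  # regressor observed with error
    y = beta * x_true + rng.normal(size=n)

    ols_est.append((x_obs @ y) / (x_obs @ x_obs))   # OLS: attenuated by measurement error
    iv_est.append((z @ y) / (z @ x_obs))            # simple IV estimator

for name, est in [("OLS", ols_est), ("IV", iv_est)]:
    est = np.asarray(est)
    print(f"{name}: mean bias {est.mean() - beta:+.3f}, std {est.std():.3f}")
```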

12.
The flowshop scheduling problem with no intermediate storage (NIS problem) was studied in this research. This problem, a modification of the classical flowshop scheduling problem, arises when a set of jobs, once started, must be processed with no wait between consecutive machines. Eliminating intermediate storage reduces the capital tied up in work-in-process inventory. The setting applies to steel mills, where the metal must be processed continuously to maintain its temperature, as well as to many similar processes.

To provide insight into selecting an appropriate scheduling technique for the NIS problem, six methods were compared in terms of the quality and efficiency of the schedules they produced. Quality was measured by makespan and efficiency by computational time requirements. The six methods were: the Gupta algorithm, the Szwarc algorithm, an integer linear programming method, the Campbell et al. algorithm, the Dannenbring rapid access with extensive search algorithm, and a mixed-integer linear programming procedure.

The problem factors considered were the number of jobs, the number of machines, and the range of processing times. Relatively small problems were tested, with up to ten jobs, five machines, and processing times of 1–100 time units. The six solution techniques were compared, with respect to makespan and computational time, across multiple combinations of the three problem factors.

The resulting test data were examined using graphical procedures and formal statistical analyses. Plots of mean values were first used to compare the six methods graphically on the two performance criteria. A multivariate analysis of variance was then conducted to investigate solution quality and efficiency with respect to the problem factors, and a multiple comparison procedure was employed to analyze differences in treatment means among the six techniques. Results from these analyses are summarized in this article.

It was concluded that the two mathematical programming methods, the integer linear programming procedure and the mixed-integer linear programming method, produced the best makespans. These two methods, however, required far more computational time than the other four techniques. The Gupta and Szwarc algorithms produced moderately good makespans and were comparable to the Campbell et al. and Dannenbring algorithms in computational efficiency, while the Campbell et al. and Dannenbring algorithms performed worst in terms of solution quality.

Certain limitations were imposed on the study: the problem sizes were relatively small, the sample size was limited to ten problems per cell, and a uniform distribution was used to generate processing times within given ranges. These limitations were necessary to allow the various scheduling problems to be solved within a reasonable amount of computer time.

Finally, some suggestions are offered for future research on the NIS problem. The integer linear programming method is recommended as a standard of evaluation, owing to its best overall performance. A possible direction is to improve the Gupta and Szwarc algorithms through backtracking within the branch-and-bound technique, so that they become competitive with the mathematical programming methods on solution quality. The influence of other processing-time distributions on the performance measures could also be investigated.
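As an illustration of the problem being scheduled (not of any of the six algorithms tested), the makespan of a given job permutation in a no-intermediate-storage (no-wait) flowshop can be computed with a simple recursion: each job's start on machine 1 is delayed just enough that it never waits between machines.

```python
import itertools
import numpy as np

def no_wait_makespan(p, order):
    """Makespan of a permutation schedule in a no-wait flowshop.

    p[j, k] is the processing time of job j on machine k; `order` is a job permutation.
    """
    p = np.asarray(p, dtype=float)
    cum = np.cumsum(p, axis=1)                                  # exit time from machine k, relative to job start
    head = np.hstack([np.zeros((p.shape[0], 1)), cum[:, :-1]])  # arrival time at machine k, relative to job start
    finish = np.zeros(p.shape[1])                               # previous job's finish time on each machine
    for j in order:
        start = max(0.0, float(np.max(finish - head[j])))       # latest machine conflict fixes the start time
        finish = start + cum[j]
    return finish[-1]

# Tiny hypothetical instance: 4 jobs, 3 machines; brute-force the best permutation.
p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [3, 3, 2]]
best = min(itertools.permutations(range(4)), key=lambda s: no_wait_makespan(p, s))
print("best order:", best, "makespan:", no_wait_makespan(p, best))
```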

13.
This study develops an analytical model capable of decomposing both intertemporal and multilateral cost variation. It begins by attributing cost variation to a price effect and a quantity effect. Then the quantity effect is decomposed into a productivity effect and an activity effect. The productivity effect in turn decomposes into a cost efficiency effect and, in the intertemporal context, a technical change effect. This paper also shows how the intertemporal and multilateral cost decompositions can be implemented, using linear programming techniques. These techniques offer certain advantages over conventional econometric techniques whenever a substantial portion of cost variation is due to variation in cost efficiency. The two cost decompositions are illustrated with a pair of benchmarking exercises based on a panel of 93 US electric power generating companies, in which variation in cost efficiency does play a key role. Copyright © 2000 John Wiley & Sons, Ltd.
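A minimal example of the kind of linear program underlying such cost-efficiency measurement (a generic constant-returns DEA cost-minimisation problem with made-up data, not the paper's exact decomposition): for each firm, find the cheapest input bundle that the observed technology allows for producing that firm's outputs.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 firms, 2 inputs, 1 output, common input prices.
X = np.array([[4.0, 2.0], [3.0, 3.0], [6.0, 1.5], [5.0, 5.0], [2.5, 4.0]])  # inputs (firms x inputs)
Y = np.array([[10.0], [9.0], [11.0], [12.0], [8.0]])                        # outputs (firms x outputs)
w = np.array([1.0, 1.5])                                                    # input prices

n, m = X.shape
s = Y.shape[1]

for i in range(n):
    # Decision variables: [x_1, ..., x_m, lambda_1, ..., lambda_n]; minimise w'x.
    c = np.concatenate([w, np.zeros(n)])
    # Output constraints: Y' lambda >= y_i   ->   -Y' lambda <= -y_i
    A_out = np.hstack([np.zeros((s, m)), -Y.T])
    b_out = -Y[i]
    # Input constraints: X' lambda - x <= 0
    A_in = np.hstack([-np.eye(m), X.T])
    b_in = np.zeros(m)
    res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.concatenate([b_out, b_in]),
                  bounds=[(0, None)] * (m + n), method="highs")
    cost_eff = res.fun / (w @ X[i])   # minimum feasible cost / observed cost
    print(f"firm {i}: cost efficiency {cost_eff:.3f}")
```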

14.
In traditional teaching, a number of classic problems in the logistics-analysis and facility-layout parts of the Logistics and Facilities Planning course are solved with heuristics or trial-and-error methods. To solve these problems more precisely, mathematical programming is adopted to provide the solutions. The course focuses on modelling techniques, instruction in application software, and professional case studies in logistics and facilities planning, so as to develop students' ability to build operations research optimization models from optimization principles and methods and to solve practical optimization problems with the ILOG OPL optimization software. The quadratic assignment problem and a multi-product process-chart optimization problem are used as cases to illustrate the process of building mathematical programming models and solving them programmatically.
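A minimal stand-in for the quadratic assignment case study (a tiny brute-force instance in Python, not the ILOG OPL model used in the course): assign facilities to locations so as to minimise total flow times distance.

```python
import itertools
import numpy as np

# Hypothetical 4-facility / 4-location instance.
flow = np.array([[0, 3, 0, 2],
                 [3, 0, 0, 1],
                 [0, 0, 0, 4],
                 [2, 1, 4, 0]])          # material flow between facilities
dist = np.array([[0, 1, 2, 3],
                 [1, 0, 1, 2],
                 [2, 1, 0, 1],
                 [3, 2, 1, 0]])          # distance between locations

def qap_cost(perm):
    # perm[i] is the location assigned to facility i.
    return sum(flow[i, j] * dist[perm[i], perm[j]]
               for i in range(len(perm)) for j in range(len(perm)))

best = min(itertools.permutations(range(4)), key=qap_cost)
print("best assignment:", best, "cost:", qap_cost(best))
```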

15.
Unequally spaced dynamic panel data models: estimation methods and an empirical application
Because the time elapsed between adjacent observations varies, the estimation methods developed for conventional dynamic panel data models break down for unequally spaced dynamic panels. This paper proposes estimating such models with nonlinear least squares, minimum distance, and their one-step variants, proves the consistency and asymptotic normality of these four estimators, and verifies their finite-sample accuracy with Monte Carlo simulations. The proposed estimators are then used to revisit questions that earlier studies could not examine, or could not examine fully, for lack of suitable estimation methods, and several new conclusions are obtained.

16.
Automobile insurance companies in the United States currently utilize simple exponential trend models to forecast paid claim costs, an important variable in ratemaking. This paper tests the performance of econometric and ARIMA models, as well as the current insurance industry method, in forecasting two paid claim cost series. The experiments encompass eight forecast periods ranging from 1974 through early 1983. The results indicate that automobile insurers could significantly improve their forecasts of property damage liability claim costs by adopting econometric models. For bodily injury liability claim costs, the accuracy of the econometric and insurance industry methods is approximately the same, and both outperform the ARIMA models. Overall, a net gain in accuracy could be achieved by adopting econometric models.
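The exponential trend approach referred to above amounts to fitting a log-linear time trend and extrapolating; a minimal sketch with made-up claim-cost data:

```python
import numpy as np

# Hypothetical quarterly average paid claim costs.
y = np.array([412.0, 425.0, 441.0, 450.0, 468.0, 473.0, 490.0, 501.0])
t = np.arange(len(y))

# Exponential trend: ln(y_t) = a + b*t, fitted by least squares.
b, a = np.polyfit(t, np.log(y), 1)
horizon = np.arange(len(y), len(y) + 4)
forecast = np.exp(a + b * horizon)

print("implied quarterly growth rate:", np.expm1(b))
print("4-quarter-ahead forecasts:", np.round(forecast, 1))
```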

17.
This paper compares three new methods of estimating the covariance matrix of asset returns and evaluates their performance against conventional covariance estimation methods. We find that taking a simple average of the historical sample covariance matrix and the covariance matrix estimated from the single-index model provides the best overall performance among all competing methods. In addition, we find that commonly used assessment criteria produce systematically different rankings, which explains the preference for different types of estimation methods in the existing literature. We believe the difference between our results and those of previous studies may be partly due to differences in the ratio of time-series observations to the number of stocks in the samples used in different studies.
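A sketch of the best-performing estimator described above, an equal-weight average of the historical sample covariance and the covariance implied by a single-index (market) model, applied to simulated return data:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 60, 10                                   # short history relative to the number of stocks
market = rng.normal(0.01, 0.04, size=T)
betas = rng.uniform(0.5, 1.5, size=N)
returns = np.outer(market, betas) + rng.normal(0.0, 0.03, size=(T, N))  # single-index DGP

# (1) Historical sample covariance matrix.
S = np.cov(returns, rowvar=False)

# (2) Single-index-model covariance: beta*beta' * var(market) + diag of residual variances.
coefs = np.array([np.polyfit(market, returns[:, i], 1) for i in range(N)])  # slope, intercept per stock
b_hat, a_hat = coefs[:, 0], coefs[:, 1]
resid = returns - (a_hat + np.outer(market, b_hat))
F = np.outer(b_hat, b_hat) * market.var(ddof=1) + np.diag(resid.var(axis=0, ddof=1))

# (3) Equal-weight average of the two estimators, the combination favoured in the paper.
Sigma_avg = 0.5 * (S + F)
print(np.round(Sigma_avg, 5))
```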

18.
This paper is a survey of recent contributions to, and developments of, the relationship between outsourcing, efficiency and productivity growth in manufacturing and services. The objective is to provide a thorough and up-to-date survey with a substantial discussion of data as well as of the core methods of measuring efficiency and productivity. First, readers are introduced to the measurement of partial and total factor productivity growth, and different parametric and non-parametric approaches to productivity measurement in static, dynamic and firm-specific modelling contexts are discussed. Second, we survey the econometric approach to efficiency analysis; issues of modelling, distributional assumptions and estimation methods are discussed for both cross-sectional and panel data. Third, the relationship between outsourcing and productivity growth in manufacturing and services is discussed, and the correspondence between a number of hypotheses and empirical findings is examined, with examples of relevant empirical applications, their findings and their implications. Fourth, the measurement of inputs and outputs in manufacturing and services is discussed. Finally, to promote useful research, a number of factors important to the analysis of outsourcing, efficiency and productivity growth in the service sector are summarised.

19.
The methodology of the free disposal hull (FDH) measure of productive efficiency is defined and put in perspective vis-à-vis other nonparametric techniques, in terms of the postulates on which they respectively rest. Computational issues are also considered, in relation to the linear programming techniques used in DEA. The first application bears on a comparison between a private and a public bank in terms of the relative efficiency of their branches; important characteristics of the data are revealed by FDH that are not revealed by DEA, owing to FDH's closer fit to the data. Next, efficiency estimates of judicial activities are used to evaluate what part of the existing backlog could be eliminated by efficiency increases. Finally, with monthly data on an urban transit firm over 12 years, the FDH methodology is extended to a sequential treatment of time series, which supplements efficiency estimation with a measure of technical progress.
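A minimal illustration of the input-oriented FDH score itself (toy data, not the bank-branch, judicial, or transit applications): each unit is compared only with observed units that produce at least as much of every output, with no convexification of the technology.

```python
import numpy as np

# Toy data: 6 units, 2 inputs, 1 output.
X = np.array([[4.0, 3.0], [2.0, 5.0], [3.0, 2.0], [5.0, 5.0], [6.0, 2.0], [2.5, 2.5]])
Y = np.array([[10.0], [10.0], [8.0], [12.0], [9.0], [8.0]])

def fdh_input_efficiency(X, Y):
    n = X.shape[0]
    scores = np.ones(n)
    for i in range(n):
        # Units producing at least as much of every output as unit i (unit i included).
        dominating = np.all(Y >= Y[i], axis=1)
        # Radial input contraction needed to reach each dominating unit's input bundle.
        ratios = np.max(X[dominating] / X[i], axis=1)
        scores[i] = ratios.min()       # <= 1, with 1 meaning FDH-efficient
    return scores

print(np.round(fdh_input_efficiency(X, Y), 3))
```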

20.
This paper proposes an econometric framework for joint estimation of the technology and the technology choice/adoption decision. The procedure takes into account the endogeneity of technology choice, which is likely to depend on inefficiency; similarly, output under each technology depends on inefficiency. The effect of this dual role of inefficiency is estimated using a single-step maximum likelihood method. The proposed model is applied to a sample of conventional and organic dairy farms in Finland. The main findings are that the conventional technology is more productive, ceteris paribus; that organic farms are, on average, technically less efficient than conventional farms; and that both efficiency and subsidies are driving forces behind the adoption of organic technology.
