Similar Articles
20 similar articles found.
1.
In most empirical studies, once the best model has been selected according to a certain criterion, subsequent analysis is conducted conditionally on the chosen model. In other words, the uncertainty of model selection is ignored once the best model has been chosen. However, the true data-generating process is in general unknown and may not coincide with the chosen model. In the analysis of productivity and technical efficiency in stochastic frontier settings, if the estimated parameters or the predicted efficiencies differ across competing models, then it is risky to base prediction on the selected model alone. Buckland et al. (Biometrics 53:603–618, 1997) have shown that if model selection uncertainty is ignored, the precision of the estimate is likely to be overstated, the estimated confidence intervals of the parameters often fall below the nominal coverage level, and consequently the prediction may be less accurate than expected. In this paper, we suggest using a model-averaged estimator based on multimodel inference to estimate stochastic frontier models. The potential advantages of the proposed approach are twofold: it incorporates model selection uncertainty into statistical inference, and it reduces the model selection bias and variance of the frontier and technical efficiency estimators. The approach is demonstrated empirically using an Indian farm data set.
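As a sketch of the multimodel-inference idea behind this proposal, model-averaging weights can be formed from AIC differences in the style of Buckland et al.; the function names and all numbers below are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2),
    where Delta_i = AIC_i - min(AIC)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def model_averaged(estimates, aic_values):
    """Model-averaged point estimate: weighted mean of per-model estimates."""
    return float(np.dot(akaike_weights(aic_values), estimates))

# Hypothetical AICs and mean-efficiency estimates from three
# competing frontier specifications (illustrative numbers only)
aics = [210.3, 212.1, 215.8]
theta = [0.62, 0.58, 0.51]
w = akaike_weights(aics)
theta_bar = model_averaged(theta, aics)
```

The averaged estimate stays inside the range of the per-model estimates, while the weights quantify how much each specification is trusted.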

2.
Journal of Productivity Analysis - This paper considers the maximum likelihood estimation of a stochastic frontier production function with an interval outcome. We derive an analytical formula for...

3.
This paper considers a panel stochastic production frontier model that allows dynamic adjustment of technical inefficiency. In particular, we assume that inefficiency follows an AR(1) process: the current year's inefficiency for a firm depends on its past inefficiency plus a transient inefficiency incurred in the current year. Interfirm variations in the transient inefficiency are explained by firm-specific covariates. We consider four likelihood-based approaches to estimate the model: full maximum likelihood, pairwise composite likelihood, marginal composite likelihood, and quasi-maximum likelihood. We also provide Monte Carlo simulation results to examine and compare the finite-sample performance of these four estimators. Finally, an empirical application to a panel of 73 Finnish electricity distribution companies observed during 2008–2014 illustrates the proposed models.
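A minimal sketch of the AR(1) inefficiency dynamics described above, with nonnegative (half-normal) transient shocks so that inefficiency stays nonnegative; the function, parameter values, and shock distribution are our illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1_inefficiency(n_firms, n_years, rho, sigma_eta):
    """u[i, t] = rho * u[i, t-1] + eta[i, t], with half-normal (nonnegative)
    transient shocks eta, so inefficiency stays nonnegative for 0 <= rho < 1."""
    u = np.zeros((n_firms, n_years))
    u[:, 0] = np.abs(rng.normal(0.0, sigma_eta, n_firms))
    for t in range(1, n_years):
        eta = np.abs(rng.normal(0.0, sigma_eta, n_firms))
        u[:, t] = rho * u[:, t - 1] + eta
    return u

# 73 firms over 7 years, echoing the dimensions of the empirical panel
u = simulate_ar1_inefficiency(73, 7, rho=0.6, sigma_eta=0.2)
```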

4.

We propose a kernel-based Bayesian framework for the analysis of stochastic frontiers and efficiency measurement. The primary feature of this framework is that the unknown distribution of inefficiency is approximated by a transformed Rosenblatt-Parzen kernel density estimator. To justify the kernel-based model, we conduct a Monte Carlo study and also apply the model to a panel of U.S. large banks. Simulation results show that the kernel-based model is capable of providing more precise estimation and prediction results than the commonly used exponential stochastic frontier model. The Bayes factor also favors the kernel-based model over the exponential model in the empirical application.
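For reference, the Rosenblatt-Parzen estimator mentioned above is the standard kernel density form. A minimal Gaussian-kernel sketch (the function name and the simulated inefficiency draws are our assumptions, and this is plain frequentist smoothing rather than the paper's transformed Bayesian treatment):

```python
import numpy as np

def rosenblatt_parzen(data, grid, bandwidth):
    """Gaussian Rosenblatt-Parzen kernel density estimate on a grid:
    f_hat(x) = (1 / (n * h)) * sum_i K((x - X_i) / h)."""
    z = (grid[:, None] - np.asarray(data)[None, :]) / bandwidth
    kernel = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return kernel.mean(axis=1) / bandwidth

rng = np.random.default_rng(1)
draws = rng.exponential(0.5, 500)        # stand-in inefficiency draws
grid = np.linspace(-1.0, 5.0, 601)
fhat = rosenblatt_parzen(draws, grid, bandwidth=0.15)
```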


5.
An important issue in models of technical efficiency measurement concerns the temporal behaviour of inefficiency. Consideration of dynamic models is necessary, but inference in such models is complicated. In this paper we propose a stochastic frontier model that allows for technical inefficiency effects and dynamic technical inefficiency, and we use Bayesian inference procedures organized around data augmentation techniques to provide inferences. Firm-specific efficiency measures are also provided. The new methods are applied to a panel of large US commercial banks over the period 1989–2000. Copyright © 2006 John Wiley & Sons, Ltd.

6.
Estimation of a non-neutral stochastic frontier production function
This article proposes a hybrid stochastic frontier regression. The proposed model and estimation method differ from the conventional model of Aigner, Lovell, and Schmidt. The model combines a stochastic frontier regression with a truncated regression to estimate the production frontier with non-neutral shifting of the average production function; the truncated regression identifies the sources of efficiency. The article presents empirical evidence of non-neutral effects of firms' characteristics (the age of the firm, the export ratio, and R&D expenditure) on the frontier production function and production efficiency in Taiwan's electronics industry. We would like to express appreciation to George E. Battese, the associate editor, and anonymous referees for various comments and suggestions. The research was partially supported by the University Research Council of Vanderbilt University and the Sun Yat-Sen Institute for Social Sciences and Philosophy of Academia Sinica, Taiwan. Residual errors are ours alone.

7.

The two-tier stochastic frontier model has seen widespread application across a range of social science domains. It is particularly useful for examining bilateral exchanges in which unobserved side-specific information exists on both sides of the transaction. These buyer- and seller-specific informational advantages, together with uneven relative bargaining power, offer each side opportunities to extract surplus from the other side of the market. Currently, this model is hindered by the fact that identification and estimation rely on the potentially restrictive assumption that these factors are statistically independent. We present three models for empirical application that allow varying degrees of dependence across these latent informational/bargaining factors.


8.
The classical stochastic frontier panel data models provide no mechanism to disentangle individual time-invariant unobserved heterogeneity from inefficiency. Greene (2005a, b) proposed the so-called "true" fixed-effects specification that distinguishes these two latent components. However, due to the incidental parameters problem, his maximum likelihood estimator may lead to biased variance estimates. We propose two alternative estimators that achieve consistency as n grows with T fixed. Furthermore, we extend the Chen et al. (2014) results, providing a feasible estimator when the inefficiency is heteroskedastic and follows a first-order autoregressive process. We investigate the behavior of the proposed estimators through Monte Carlo simulations, which show good finite-sample properties even in small samples. An application to hospitals' technical efficiency illustrates the usefulness of the new approach.

9.
Formulation and estimation of stochastic frontier production function models
Previous studies of the so-called frontier production function have not utilized an adequate characterization of the disturbance term for such a model. In this paper we provide an appropriate specification, by defining the disturbance term as the sum of symmetric normal and (negative) half-normal random variables. Various aspects of maximum-likelihood estimation for the coefficients of a production function with an additive disturbance term of this sort are then considered.
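A minimal sketch of maximum likelihood for this normal/half-normal composed error: with ε = v − u, the log-density is ln f(ε) = ln(2/σ) + ln φ(ε/σ) + ln Φ(−λε/σ), where σ² = σ_v² + σ_u² and λ = σ_u/σ_v, and it can be maximized numerically. The simulated data, true parameter values, and starting values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated data: y = 1.0 + 0.5*x + v - u, v ~ N(0, 0.3), u ~ |N(0, 0.5)|
n, sigma_v, sigma_u = 2000, 0.3, 0.5
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, sigma_v, n) - np.abs(rng.normal(0.0, sigma_u, n))

def neg_loglik(theta):
    """Composed-error log-density:
    ln f(e) = ln(2/sigma) + ln phi(e/sigma) + ln Phi(-lambda*e/sigma)."""
    b0, b1, log_sv, log_su = theta          # log scales keep sigmas positive
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)
    lam = su / sv
    e = y - b0 - b1 * x
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(e / sigma) + norm.logcdf(-lam * e / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.5, 0.5, np.log(0.3), np.log(0.5)], method="BFGS")
b0_hat, b1_hat = res.x[0], res.x[1]
```

Unlike plain OLS, whose intercept absorbs −E[u], the MLE recovers the frontier intercept itself.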

10.
In this paper we discuss goodness-of-fit tests for the distribution of technical inefficiency in stochastic frontier models. If we maintain the hypothesis that the assumed normal distribution for statistical noise is correct, the assumed distribution for technical inefficiency is testable. We show that a goodness-of-fit test can be based on the distribution of estimated technical efficiency, or equivalently on the distribution of the composed error term. We consider both the Pearson χ² test and the Kolmogorov–Smirnov test. We provide simulation results to show the extent to which the tests are reliable in finite samples.
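The Kolmogorov-Smirnov variant can be sketched by noting that the normal/half-normal composed error ε = v − u has a skew-normal distribution, so a closed-form null CDF is available. The simulated data and parameter values are illustrative assumptions; this is a sketch of the idea, not the paper's exact test procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Composed error eps = v - u with v ~ N(0, s_v) and u ~ |N(0, s_u)|.
# eps/sigma has density 2*phi(z)*Phi(-lambda*z): a skew normal with
# shape -lambda, where sigma^2 = s_v^2 + s_u^2 and lambda = s_u / s_v.
s_v, s_u, n = 0.3, 0.5, 1000
eps = rng.normal(0.0, s_v, n) - np.abs(rng.normal(0.0, s_u, n))

sigma = np.hypot(s_v, s_u)
lam = s_u / s_v
true_cdf = stats.skewnorm(a=-lam, scale=sigma).cdf

ks_true = stats.kstest(eps, true_cdf)                      # correct null
ks_norm = stats.kstest(eps, stats.norm(scale=sigma).cdf)   # misspecified null
```

Under the correct null the KS statistic is small; the misspecified symmetric normal, which ignores the one-sided component, produces a much larger statistic.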

11.
Estimation and prediction in high dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually done by Markov chain Monte Carlo methods (often by particle Markov chain Monte Carlo methods), which are usually slow for high dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, but are much faster.

12.
Most stochastic frontier models have focused on estimating average productive efficiency across all firms. The failure to estimate firm-specific efficiency has been regarded as a major limitation of previous stochastic frontier models. In this paper, we measure firm-level efficiency using panel data and examine its finite-sample distribution over a wide range of the parameter and model space. We also investigate the performance of the stochastic frontier approach using three estimators: maximum likelihood, generalized least squares, and dummy variables (the within estimator). Our results indicate that the performance of the stochastic frontier approach is sensitive to the form of the underlying technology and its complexity. The results appear to be quite stable across estimators. The within estimator is preferred, however, because of its weak assumptions and relative computational ease. The refereeing process of this paper was handled through J. van den Broeck.
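The within (dummy-variable) estimator of firm-level efficiency can be sketched in the Schmidt-Sickles style: demean within firms to estimate the slope, recover firm intercepts, and measure each firm's efficiency relative to the best firm. The panel dimensions and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical panel: N firms, T years, one input, true slope 0.7.
# Firm effects alpha_i <= 0, so the best firm defines the frontier.
N, T, beta = 50, 10, 0.7
alpha = -np.abs(rng.normal(0.0, 0.3, N))
x = rng.uniform(1.0, 3.0, (N, T))
y = alpha[:, None] + beta * x + rng.normal(0.0, 0.1, (N, T))

# Within estimator: demean by firm, then pooled OLS on the demeaned data
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = (xd * yd).sum() / (xd ** 2).sum()

# Recover firm intercepts; efficiency is measured against the best firm
alpha_hat = y.mean(axis=1) - beta_hat * x.mean(axis=1)
efficiency = np.exp(alpha_hat - alpha_hat.max())
```

No distributional assumption on inefficiency is needed here, which is the "weak assumptions" advantage the abstract refers to.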

13.
Stochastic frontier models with autocorrelated inefficiency have been proposed in the past as a way of addressing the issue of temporal variation in firm-level efficiency scores. They are justified using an underlying model of dynamic firm behavior. In this paper we argue that these models could have radically different implications for the expected long-run efficiency scores in the presence of unobserved heterogeneity. The possibility of accounting for unobserved heterogeneity is explored. Random- and correlated random-effects dynamic stochastic frontier models are proposed and applied to a panel of US electric utilities.

14.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and that model changes are easily implemented. Although WinBUGS may not be as efficient for more complicated models, it makes Bayesian inference with stochastic frontier models easily accessible to applied researchers, and its generic structure allows for a lot of flexibility in model specification.

15.
When analyzing the productivity and efficiency of firms, stochastic frontier models are attractive because, as in typical regression models, they allow for noise in the data-generating process. Most approaches so far have used very restrictive, fully parametric specifications, both for the frontier function and for the components of the stochastic terms. Recently, local MLE approaches were introduced to relax these parametric hypotheses. In this work we show that most of the benefits of the local MLE approach can be obtained with fewer assumptions and with much easier, faster, and numerically more robust computations, by using nonparametric least-squares methods. Our approach can also be viewed as a semi-parametric generalization of the so-called "modified OLS" that was introduced in the parametric setup. Although the final evaluation of individual efficiencies requires, as in the local MLE approach, a local specification of the distributions of noise and inefficiency, we show that a lot can be learned about the production process without such specifications. Even elasticities of the mean inefficiency can be analyzed with an unspecified noise distribution and a general class of local one-parameter scale families for inefficiency. This allows us to discuss the variation in inefficiency levels with respect to explanatory variables under minimal assumptions on the data-generating process.

16.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case that the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
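The corrected-OLS idea mentioned above can be sketched for the exponential case: run OLS, then shift the intercept up by a moment-based estimate of E[u] (for an exponential, the mean equals the standard deviation). The simulated data and the particular moment used are our illustrative assumptions, not either paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(4)

# Deterministic frontier y = 2.0 + 0.8*x - u with u ~ Exponential(mean 0.4):
# all deviation from the frontier is one-sided inefficiency.
n, sigma_u = 500, 0.4
x = rng.uniform(0.0, 5.0, n)
u = rng.exponential(sigma_u, n)
y = 2.0 + 0.8 * x - u

# Step 1: OLS. Its intercept is biased downward by E[u].
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Step 2: corrected OLS. For an exponential, mean = standard deviation,
# so shift the intercept up by the residual standard deviation.
sigma_u_hat = resid.std(ddof=1)
b0_cols = coef[0] + sigma_u_hat
u_hat = np.clip(sigma_u_hat - resid, 0.0, None)  # nonnegative inefficiency estimates
```

The slope is unbiased from OLS alone; only the intercept (and hence the individual inefficiency estimates) needs the correction, and, as the abstract notes, this construction by itself yields no inference for the inefficiencies.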

17.
The iterative algorithm suggested by Greene (1982) for the estimation of stochastic frontier production models does not necessarily solve the likelihood equations. Corrected iterative algorithms that generalize Fair's (1977) method and do solve the likelihood equations are derived. These algorithms are compared with the Newton method in an empirical case; the Newton method turns out to be faster.

18.
Previous work on stochastic production frontiers has generated a family of models of varying degrees of complexity. Since this family is nested (in the sense that the more general models contain the less general ones), we can test the restrictions that distinguish the models. In this paper we provide tests of these restrictions, based on the results of estimating the simpler (restricted) models. Some of our tests are LM tests. However, in other cases the LM test fails, so we provide alternative simple tests.

19.
Estimation of the one-sided error component in stochastic frontier models may erroneously attribute firm characteristics to inefficiency if heterogeneity is unaccounted for. However, unobserved inefficiency heterogeneity has been little explored. In this work, we propose to capture it through a random parameter that may affect the location, the scale, or both parameters of a truncated-normal inefficiency distribution, using a Bayesian approach. Our findings from two real data sets suggest that the inclusion of a random parameter in the inefficiency distribution is able to capture latent heterogeneity and can be used to validate the suitability of observed covariates to distinguish heterogeneity from inefficiency. Relevant effects are also found on the separation and shrinkage of individual posterior efficiency distributions when heterogeneity affects the location and scale parameters of the one-sided error distribution, which in turn affects the estimated mean efficiency scores and rankings. In particular, including heterogeneity simultaneously in both parameters of the inefficiency distribution in models that satisfy the scaling property decreases the uncertainty around the mean scores and reduces the overlap of the posterior efficiency distributions, providing more reliable efficiency scores and rankings.

20.
Journal of Productivity Analysis - This paper extends the fixed effect panel stochastic frontier models to allow group heterogeneity in the slope coefficients. We propose the first-difference...


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号