Found 20 similar documents; search took 15 ms
1.
This paper analyzes the productivity of farms across 370 municipalities in the Center-West region of Brazil. A stochastic
frontier model with a latent spatial structure is proposed to account for possible unknown geographical variation of the outputs.
The paper compares versions of the model that include the latent spatial effect in the mean of output or as a variable that conditions the distribution of inefficiency, that include or exclude observed municipal variables, and that specify independent normal or conditional autoregressive priors for the spatial effects. The Bayesian paradigm is used to estimate the proposed models.
As the resultant posterior distributions do not have a closed form, stochastic simulation techniques are used to obtain samples
from them. Two model comparison criteria provide support for including the latent spatial effects, even after considering
covariates at the municipal level. Models that ignore the latent spatial effects produce significantly different rankings
of inefficiencies across agents.
2.
Traditional stochastic frontier models impose inefficient behavior on all firms in the sample of interest. If the data under investigation represent a mixture of both fully efficient and inefficient firms, then off-the-shelf frontier models are statistically inadequate. We introduce the zero inefficiency stochastic frontier model, which can accommodate the presence of both efficient and inefficient firms in the sample. We derive the corresponding log-likelihood function and the conditional mean of inefficiency, used to estimate observation-specific inefficiency, and we discuss testing for the presence of fully efficient firms. We provide both simulated evidence and an empirical example demonstrating the applicability of the proposed method.
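The mixture density this abstract introduces can be sketched numerically. The code below is our own illustrative version, not the authors' implementation: the function name, the half-normal inefficiency assumption, and all parameter values are ours. It mixes a pure-noise density (fully efficient firms, probability p) with the standard normal-half-normal composed-error density (inefficient firms):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def zisf_density(eps, p, sigma_v, sigma_u):
    """Composed-error density for a zero-inefficiency stochastic frontier:
    with probability p the firm is fully efficient (eps = v only), and with
    probability 1 - p, eps = v - u with u half-normal."""
    sigma = np.hypot(sigma_v, sigma_u)
    lam = sigma_u / sigma_v
    # Normal-half-normal composed-error density for the inefficient regime.
    f_sf = (2.0 / sigma) * norm.pdf(eps / sigma) * norm.cdf(-lam * eps / sigma)
    # Pure-noise density for the fully efficient regime.
    f_eff = norm.pdf(eps, scale=sigma_v)
    return p * f_eff + (1.0 - p) * f_sf

# Sanity check: the mixture density integrates to one (illustrative values).
total, _ = quad(zisf_density, -10, 10, args=(0.3, 0.5, 1.0))
```

Summing the log of this density over observations yields a log-likelihood of the kind the abstract derives; p sitting on the boundary of the parameter space is what makes testing for fully efficient firms a separate issue.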
3.
In most empirical studies, once the best model has been selected according to a certain criterion, subsequent analysis is conducted conditionally on the chosen model. In other words, the uncertainty of model selection is ignored once the best model has been chosen. However, the true data-generating process is in general unknown and may not be consistent with the chosen model. In the analysis of productivity and technical efficiency in the stochastic frontier setting, if the estimated parameters or the predicted efficiencies differ across competing models, then it is risky to base the prediction on the selected model. Buckland et al. (Biometrics 53:603–618, 1997) have shown that if model selection uncertainty is ignored, the precision of the estimate is likely to be overestimated, the estimated confidence intervals of the parameters often fall below the nominal level, and consequently the prediction may be less accurate than expected. In this paper, we suggest using a model-averaged estimator based on multimodel inference to estimate stochastic frontier models. The potential advantages of the proposed approach are twofold: it incorporates model selection uncertainty into statistical inference, and it reduces the model selection bias and variance of the frontier and technical efficiency estimators. The approach is demonstrated empirically via an application to an Indian farm data set.
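The averaging idea can be sketched with smoothed AIC weights in the spirit of Buckland et al.; the function name and the AIC/efficiency numbers below are invented for the example, not taken from the paper:

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights for model averaging: w_m is proportional to
    exp(-delta_m / 2), where delta_m = AIC_m - min(AIC)."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AICs for three competing frontier specifications, and the
# efficiency each predicts for one firm; the averaged prediction weights
# every model instead of conditioning on the single "best" one.
aic = [102.3, 100.1, 105.7]
eff = np.array([0.82, 0.85, 0.79])
w = akaike_weights(aic)
averaged_eff = float(w @ eff)
```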
4.
Estimates of technical inefficiency based on fixed effects estimation of the stochastic frontier model with panel data are biased upward. Previous work has attempted to correct this bias using the bootstrap, but in simulations the bootstrap corrects only part of the bias. The usual panel jackknife is based on the assumption that the bias is of order T^{-1} and is similar to the bootstrap. We show that when there is a tie or a near tie for the best firm, the bias is of order T^{-1/2}, not T^{-1}, and this calls for a different form of the jackknife. The generalized panel jackknife is quite successful in removing the bias. However, the resulting estimates have a large variance.
5.
This paper makes two important contributions to the literature on prediction intervals for firm-specific inefficiency estimates in cross-sectional SFA models. Firstly, the existing intervals in the literature are not minimum-width intervals; we discuss how to compute such intervals and how they either include or exclude zero as a lower bound, depending on where the probability mass of the distribution of u_i | ε_i resides. This has useful implications for practitioners and policy makers, with the greatest reductions in interval width occurring for the most efficient firms. Secondly, we propose an 'asymptotic' approach to incorporating parameter uncertainty into prediction intervals for firm-specific inefficiency (given that in practice model parameters have to be estimated) as an alternative to the 'bagging' procedure suggested in Simar and Wilson (Econom Rev 29(1):62–98, 2010). The approach is computationally much simpler than the bagging approach.
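A minimal sketch of how such a minimum-width interval can be computed, assuming the usual normal-half-normal setting where u_i | ε_i is a normal distribution truncated below at zero (the function name and numerical values here are ours, not the paper's):

```python
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize_scalar

def shortest_interval(mu_star, sigma_star, level=0.95):
    """Minimum-width prediction interval for u | eps ~ N(mu_star, sigma_star^2)
    truncated below at zero. Scan the lower tail mass a in [0, 1 - level]:
    the interval [ppf(a), ppf(a + level)] with the smallest width is the
    shortest one, and it starts at zero when the optimum is a = 0 (i.e. when
    the truncated density is decreasing from the boundary)."""
    dist = truncnorm(-mu_star / sigma_star, np.inf, loc=mu_star, scale=sigma_star)
    res = minimize_scalar(lambda a: dist.ppf(a + level) - dist.ppf(a),
                          bounds=(0.0, 1.0 - level), method="bounded")
    return dist.ppf(res.x), dist.ppf(res.x + level)

# A relatively efficient firm (negative conditional mean): mass piles up at
# zero, so the shortest interval starts essentially at zero and is narrower
# than the usual equal-tail interval.
lo, hi = shortest_interval(mu_star=-0.2, sigma_star=0.3)
dist = truncnorm(0.2 / 0.3, np.inf, loc=-0.2, scale=0.3)
equal_tail_width = dist.ppf(0.975) - dist.ppf(0.025)
shortest_width = hi - lo
```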
6.
A Bayesian estimator is proposed for a stochastic frontier model with errors in variables. The model assumes a truncated-normal distribution for the inefficiency and accommodates exogenous determinants of inefficiency. An empirical example of Tobin's Q investment model is provided, in which the Q variable is known to suffer from measurement error. Results show that correcting for measurement error in the Q variable has an important effect on the estimation results.
7.
We show how a wide range of stochastic frontier models can be estimated relatively easily using variational Bayes. We derive approximate posterior distributions and point estimates for parameters and inefficiency effects for (a) time-invariant models with several alternative inefficiency distributions, (b) models with time-varying effects, (c) models incorporating environmental effects, and (d) models with more flexible forms for the regression function and error terms. Despite the abundance of stochastic frontier models, there have been few attempts to test the various models against each other, probably due to the difficulty of performing such tests. One advantage of the variational Bayes approximation is that it facilitates the computation of marginal likelihoods that can be used to compare models. We apply this idea to test stochastic frontier models with different inefficiency distributions. Estimation and testing are illustrated using three examples.
8.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods
for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for
cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples
illustrate that analyses with models of genuine practical interest can be performed straightforwardly and model changes are
easily implemented. Although WinBUGS may not be efficient for more complicated models, it makes Bayesian inference with stochastic frontier models easily accessible to applied researchers, and its generic structure allows considerable flexibility in model specification.
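For readers without WinBUGS, the same kind of posterior simulation can be sketched in plain Python. The random-walk Metropolis sampler below is our own illustrative stand-in, not the paper's WinBUGS code: it uses a normal-exponential cross-sectional frontier with flat priors, and all parameter values are invented:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated cross-section: y = beta0 + v - u, with v ~ N(0, s_v^2) and
# u ~ Exponential with mean s_u (illustrative values).
beta0_true, s_v_true, s_u_true, n = 1.0, 0.3, 0.5, 300
y = beta0_true + rng.normal(0, s_v_true, n) - rng.exponential(s_u_true, n)

def log_post(theta):
    """Log posterior under flat priors. The normal-exponential composed-error
    density is f(e) = (1/s_u) exp(e/s_u + s_v^2/(2 s_u^2)) Phi(-e/s_v - s_v/s_u)."""
    beta0, log_sv, log_su = theta
    s_v, s_u = np.exp(log_sv), np.exp(log_su)
    e = y - beta0
    ll = (-np.log(s_u) + e / s_u + s_v**2 / (2.0 * s_u**2)
          + norm.logcdf(-e / s_v - s_v / s_u))
    return ll.sum()

# Random-walk Metropolis on (beta0, log s_v, log s_u).
theta = np.array([y.mean(), np.log(0.3), np.log(0.5)])
lp = log_post(theta)
draws, accepted = [], 0
for _ in range(4000):
    prop = theta + rng.normal(0.0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
        accepted += 1
    draws.append(theta.copy())
beta0_draws = np.array(draws)[2000:, 0]  # discard burn-in
```

Posterior summaries of the parameters, and of firm-level efficiencies via the stored draws, would follow as the abstract describes.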
9.
In stochastic frontier analysis, firm-specific efficiencies and their distribution are often the main variables of interest. If firms fall into several groups, it is natural to allow each group to have its own distribution. This paper considers a method for nonparametrically modelling these distributions using Dirichlet processes. A common problem when applying nonparametric methods to grouped data is small sample sizes for some groups, which can lead to poor inference. Methods that allow dependence between the groups' distributions are one set of solutions. The proposed model clusters the groups and assumes that the unknown distribution is the same for every group in a cluster. These clusters are inferred from the data. Markov chain Monte Carlo methods are necessary for model fitting, and efficient methods are described. The model is illustrated on a cost frontier application to US hospitals.
10.
Journal of Productivity Analysis - This paper considers the maximum likelihood estimation of a stochastic frontier production function with an interval outcome. We derive an analytical formula for...
11.
This paper investigates the technical efficiency of labor market matching from a stochastic frontier approach. The true fixed-effects
model (Greene J Prod Anal 23:7–32, 2005a; J Econom 126:269–303, 2005b) is utilised in order to separate cross-sectional heterogeneity from inefficiency, and inefficiency terms are modelled following
Battese and Coelli (Empir Econ 20:325–332, 1995). The data set consists of almost 17,000 observations from Local Labor Offices (LLOs) in Finland. According to the results,
there are notable differences in matching efficiency between regions, and these differences contribute significantly to the
number of filled vacancies. If all regions were as efficient as the most efficient one, the number of total matches per month
would increase by over 23%. The heterogeneity of the job-seeker stock is an important determinant of matching efficiency:
the weight of the composition of the job-seeker stock in the inefficiency terms is on average 85%.
12.
The two-tier stochastic frontier model has seen widespread application across a range of social science domains. It is particularly useful for examining bilateral exchanges where unobserved side-specific information exists on both sides of the transaction. These buyer- and seller-specific informational advantages, combined with uneven relative bargaining power, offer each side opportunities to extract surplus from the other side of the market. Currently, this model is hindered by the fact that identification and estimation rely on the potentially restrictive assumption that these factors are statistically independent. We present three different models for empirical application that allow for varying degrees of dependence across these latent informational/bargaining factors.
13.
Considerable controversy surrounds the role of money in the production of goods and services. Previous empirical research has appeared to find that the real money stock affects aggregate output, holding other, more conventional inputs constant. However, the theoretical literature offers no convincing explanation for this empirical finding. One interpretation is that real money balances reduce the extent to which labor and capital are diverted into exchange-related activities instead of being used in production defined in a narrower sense. To investigate this hypothesis, we estimate a production function augmented with real money balances as an input, using time-series data for the aggregate U.S. economy. A stochastic production frontier is then estimated without real money balances. We use these estimates to establish the presence of technical inefficiency. Finally, we show that the extent of technical inefficiency is negatively correlated with the real money stock. Our results provide a reconciliation between the empirical literature, which finds that real money balances affect output in a production function framework, and the theoretical literature, which suggests that real money balances enhance the technical efficiency of the economy.
14.
In this paper we discuss goodness of fit tests for the distribution of technical inefficiency in stochastic frontier models.
If we maintain the hypothesis that the assumed normal distribution for statistical noise is correct, the assumed distribution
for technical inefficiency is testable. We show that a goodness of fit test can be based on the distribution of estimated
technical efficiency, or equivalently on the distribution of the composed error term. We consider both the Pearson χ² test and the Kolmogorov–Smirnov test. We provide simulation results to show the extent to which the tests are reliable in finite samples.
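The Kolmogorov–Smirnov variant of such a test can be sketched as follows, assuming known parameters (in the paper they are estimated, which changes the critical values; all numerical values here are ours). The composed-error CDF has no simple closed form, so it is obtained by integrating the normal-half-normal density:

```python
import numpy as np
from scipy.stats import norm, halfnorm, kstest
from scipy.integrate import quad

rng = np.random.default_rng(42)
sigma_v, sigma_u = 0.5, 1.0          # assumed-known parameters (illustrative)
sigma = np.hypot(sigma_v, sigma_u)
lam = sigma_u / sigma_v

def sf_pdf(e):
    # Normal-half-normal composed-error density.
    return (2.0 / sigma) * norm.pdf(e / sigma) * norm.cdf(-lam * e / sigma)

def sf_cdf(e):
    # CDF by numerical integration of the density.
    return np.array([quad(sf_pdf, -np.inf, x)[0] for x in np.atleast_1d(e)])

# Draw composed errors from the assumed model and apply the KS test; under a
# correctly specified inefficiency distribution the test should not reject.
eps = rng.normal(0.0, sigma_v, 400) - halfnorm.rvs(scale=sigma_u, size=400,
                                                   random_state=rng)
stat, pval = kstest(eps, sf_cdf)
```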
15.
Previous work on stochastic production frontiers has generated a family of models of varying degrees of complexity. Since this family is nested (in the sense that the more general models contain the less general), we can test the restrictions that distinguish the models. In this paper we provide tests of these restrictions, based on the results of estimating the simpler (restricted) models. Some of our tests are LM tests. However, in other cases the LM test fails, so we provide alternative simple tests.
16.
Although stochastic volatility (SV) models have an intuitive appeal, their empirical application has been limited mainly due to difficulties involved in their estimation. The main problem is that the likelihood function is hard to evaluate. However, recently, several new estimation methods have been introduced and the literature on SV models has grown substantially. In this article, we review this literature. We describe the main estimators of the parameters and the underlying volatilities, focusing on their advantages and limitations from both the theoretical and empirical points of view. We complete the survey with an application of the most important procedures to the S&P 500 stock price index.
17.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case that the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
18.
When analyzing the productivity and efficiency of firms, stochastic frontier models are very attractive because they allow, as in typical regression models, for some noise in the Data Generating Process. Most approaches so far have used very restrictive, fully parametric specifications, both for the frontier function and for the components of the stochastic terms. Recently, local MLE approaches were introduced to relax these parametric hypotheses. In this work we show that most of the benefits of the local MLE approach can be obtained with fewer assumptions and much easier, faster, and numerically more robust computations, by using nonparametric least-squares methods. Our approach can also be viewed as a semi-parametric generalization of the so-called "modified OLS" introduced in the parametric setup. Although the final evaluation of individual efficiencies requires, as in the local MLE approach, local specification of the distributions of noise and inefficiency, it is shown that a lot can be learned about the production process without such specifications. Even elasticities of the mean inefficiency can be analyzed with an unspecified noise distribution and a general class of local one-parameter scale families for the inefficiency. This allows one to discuss the variation in inefficiency levels with respect to explanatory variables with minimal assumptions on the Data Generating Process.
19.
The iterative algorithm suggested by Greene (1982) for the estimation of stochastic frontier production models does not necessarily solve the likelihood equations. Corrected iterative algorithms, which generalize Fair's method (1977) and do solve the likelihood equations, are derived. These algorithms are compared with the Newton method in an empirical case; the Newton method proves faster.
20.
The effect of aggregation on estimates of stochastic frontier functions is considered. Inefficiency is assumed to be associated with the individual units being aggregated. In this case, the aggregated data have a closed skew normal distribution. Estimating the parameters of a closed skew normal distribution is difficult, so we focus mostly on the biases created by ignoring the fact that the data are aggregated. The conclusions are based on both analytical and Monte Carlo results. When data for firms are aggregates over smaller units and the inefficiency is associated with the units rather than the firm, empirical work that does not consider the effect of aggregation will attribute the inefficiency of large firms to diseconomies of scale.
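A quick Monte Carlo sketch of the aggregation effect (parameter values are ours, and this only illustrates the mechanism, not the paper's analytics): summing composed errors across units keeps the error skewed, which is the statistical signature of inefficiency, but the skewness of a sum of k iid terms shrinks like 1/sqrt(k), so the aggregate looks far more symmetric than the unit-level errors:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(7)
sigma_v, sigma_u, k, n_firms = 0.3, 0.5, 10, 20000  # illustrative values

# Unit-level composed errors eps = v - u (u half-normal), and firm-level
# aggregates formed by summing over the k units belonging to each firm.
unit_eps = (rng.normal(0.0, sigma_v, (n_firms, k))
            - np.abs(rng.normal(0.0, sigma_u, (n_firms, k))))
firm_eps = unit_eps.sum(axis=1)

unit_skew = skew(unit_eps.ravel())
firm_skew = skew(firm_eps)
# Both skewness values are negative (the inefficiency signature), but the
# firm-level value is roughly unit_skew / sqrt(k): much weaker, which is
# the channel through which ignoring aggregation biases the estimates.
```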