20 similar documents found (search time: 15 ms)
1.
This paper proposes a tail-truncated stochastic frontier model that allows for the truncation of technical efficiency from below. The truncation bound implies the inefficiency threshold for survival. Specifically, this paper assumes a uniform distribution of technical inefficiency and derives the likelihood function. Even though this distributional assumption imposes a strong restriction that technical inefficiency has a uniform probability density over [0, θ], where θ is the threshold parameter, this model has two advantages: (1) the reduction in the number of parameters compared with more complicated tail-truncated models allows better performance in numerical optimization; and (2) it is useful for empirical studies of the distribution of efficiency or productivity, particularly the truncation of the distribution. The Monte Carlo simulation results support the argument that this model approximates the distribution of inefficiency precisely, not only when the data-generating process follows the uniform distribution but also when it follows the truncated half-normal distribution with a small inefficiency threshold.
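As a rough illustration of the data-generating process described in this abstract (a sketch, not code from the paper; all names and parameter values are hypothetical), the composed error with uniformly distributed inefficiency bounded by a survival threshold θ can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_frontier(n, beta=(1.0, 0.5), sigma_v=0.3, theta=0.8):
    """Draw from y = b0 + b1*x + v - u, where the inefficiency
    u ~ Uniform[0, theta] is bounded above by the survival
    threshold theta and v ~ N(0, sigma_v^2) is ordinary noise."""
    x = rng.uniform(0.0, 2.0, n)
    v = rng.normal(0.0, sigma_v, n)
    u = rng.uniform(0.0, theta, n)  # no surviving firm has u above theta
    y = beta[0] + beta[1] * x + v - u
    return x, y, u

x, y, u = simulate_frontier(10_000)
print(u.max() <= 0.8)  # inefficiency never exceeds the threshold
```

Under this assumed DGP the mean inefficiency is θ/2, which the simulated draws can be checked against.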
2.
Journal of Productivity Analysis - In this paper we provide several new specifications within the true random effects model as well as stochastic frontier models estimated with GLS and MLE that...
3.
Worker peer-effects and managerial selection have received limited attention in the stochastic frontier analysis literature. We develop a parametric production function model that allows for worker peer-effects in output and worker-level inefficiency that is correlated with a manager’s selection of worker teams. The model is the usual “composed error” specification of the stochastic frontier model, but we allow for managerial selectivity (endogeneity) that works through the worker-level inefficiency term. The new specification captures both worker-level inefficiency and the manager’s ability to efficiently select teams to produce output. As the correlation between the manager’s selection equation and worker inefficiency goes to zero, our parametric model reduces to the normal-exponential stochastic frontier model of Aigner et al. (1977) with peer-effects. A comprehensive application to the NBA is provided.
4.
This paper proposes a flexible time-varying stochastic frontier model. Similarly to Lee and Schmidt [1993, In: Fried H, Lovell CAK, Schmidt S (eds) The measurement of productive efficiency: techniques and applications. Oxford University Press, Oxford], we assume that individual firms’ technical inefficiencies vary over time. However, the model, which we call the “multiple time-varying individual effects” model, is more general in that it allows multiple factors determining firm-specific time-varying technical inefficiencies. This allows the temporal pattern of inefficiency to vary over firms. The number of such factors can be consistently estimated. The model is applied to data on Indonesian rice farms, and the changes in the efficiency rankings of farms over time demonstrate the model’s flexibility.
5.
In economic research, it is often important to express the marginal value of a variable in monetary terms. In random coefficient models, this marginal monetary value is the ratio of two random coefficients and is thus random itself. In this paper, we study the distribution of this ratio and particularly the consequences of different distributional assumptions about the coefficients. It is shown that important characteristics of the distribution of the marginal monetary value may be sensitive to the distributional assumptions about the random coefficients. The median, however, is much less sensitive than the mean. Copyright © 2006 John Wiley & Sons, Ltd.
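The sensitivity described in this abstract is easy to reproduce in a small simulation (a sketch under assumed normal coefficients; the variable names and parameter values below are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Marginal monetary value as a ratio of two random coefficients,
# e.g. willingness to pay = beta_attr / beta_cost.
n = 200_000
beta_attr = rng.normal(-1.5, 0.3, n)
beta_cost = rng.normal(-0.5, 0.2, n)  # mass near zero fattens the ratio's tails
wtp = beta_attr / beta_cost

print(np.median(wtp))  # close to the ratio of the means, and stable
print(wtp.mean())      # heavy tails make the mean erratic across seeds
```

Because the denominator coefficient puts some probability mass near zero, the ratio has heavy tails: the sample mean swings wildly across seeds while the median stays near the ratio of the two means, mirroring the paper's point.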
6.
Journal of Productivity Analysis - This paper develops a theoretical framework for modeling farm households’ joint production and consumption decisions in the presence of technical...
7.
The iterated bootstrap may be used to estimate errors which arise from a single pass of the bootstrap and thereby to correct for them. Here the iteration is employed to correct the coverage probability of confidence intervals obtained by a percentile method in the context of production frontier estimation with panel data. The parameter of interest is the maximum of the intercepts in a fixed firm-effect model. The bootstrap distribution estimators are consistent if and only if there are no ties for this maximum. In the regular case (no ties), poor distribution estimators can result when the second largest intercept is close to the maximum. The iterated bootstrap is thus suggested to improve the accuracy of the obtained distributions. The result is illustrated in an analysis of the labor efficiency of railway companies. This work was supported in part by grant No. 26 from the program Pôle d'attraction interuniversitaire-Deuxième phase to CORE and by the contract Projet d'Actions de Recherche Concertées of the Belgian government (PARC) to the Institute of Statistics, Université Catholique de Louvain. The first author was partly financed by the Institut de Mathématiques Appliquées, Université Catholique de Louvain.
8.
In this paper we extend the work of Simar (J Product Anal 28:183–201, 2007) introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods in the estimation of production efficiency. Specifically, our approach first allows for statistical noise, similar to stochastic frontier analysis (even in a more flexible way), and second, it allows modelling multiple-input-multiple-output technologies without imposing parametric assumptions on the production relationship, similar to what is done in nonparametric methods like Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating the marginal effects of covariates on the inefficiency level jointly with the marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves on DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main problems of the original DEA/FDH estimators. The procedure shows great performance for various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (J Econom 137(1):1–27, 2007) method by making the resulting frontier smoother, monotonic and, if we wish, concave.
9.
We shed new light on the performance of Berry, Levinsohn and Pakes’ (1995) GMM estimator of the aggregate random coefficient logit model. Based on an extensive Monte Carlo study, we show that the use of Chamberlain’s (1987) optimal instruments overcomes many problems that have recently been documented with standard, non-optimal instruments. Optimal instruments reduce small sample bias, but they prove even more powerful in increasing the estimator’s efficiency and stability. We consider a wide variety of data-generating processes and an empirical application to the automobile market. We also consider the gains of other recent methodological advances when combined with optimal instruments.
10.
An important issue in models of technical efficiency measurement concerns the temporal behaviour of inefficiency. Consideration of dynamic models is necessary but inference in such models is complicated. In this paper we propose a stochastic frontier model that allows for technical inefficiency effects and dynamic technical inefficiency, and use Bayesian inference procedures organized around data augmentation techniques to provide inferences. Also provided are firm‐specific efficiency measures. The new methods are applied to a panel of large US commercial banks over the period 1989–2000. Copyright © 2006 John Wiley & Sons, Ltd.
11.
The two-tier stochastic frontier model has seen widespread application across a range of social science domains. It is particularly useful in examining bilateral exchanges where unobserved side-specific information exists on both sides of the transaction. These buyer- and seller-specific informational aspects, combined with uneven relative bargaining power, offer opportunities to extract surplus from the other side of the market. Currently, this model is hindered by the fact that identification and estimation rely on the potentially restrictive assumption that these factors are statistically independent. We present three different models for empirical application that allow for varying degrees of dependence across these latent informational/bargaining factors.
12.
The classical stochastic frontier panel data models provide no mechanism to disentangle individual time-invariant unobserved heterogeneity from inefficiency. Greene (2005a, b) proposed the so-called “true” fixed-effects specification that distinguishes these two latent components. However, due to the incidental parameters problem, his maximum likelihood estimator may lead to biased variance estimates. We propose two alternative estimators that achieve consistency as N → ∞ with fixed T. Furthermore, we extend the Chen et al. (2014) results, providing a feasible estimator when the inefficiency is heteroskedastic and follows a first-order autoregressive process. We investigate the behavior of the proposed estimators through Monte Carlo simulations showing good finite sample properties, especially in small samples. An application to hospitals’ technical efficiency illustrates the usefulness of the new approach.
13.
A Bayesian estimator is proposed for a stochastic frontier model with errors in variables. The model assumes a truncated-normal distribution for the inefficiency and accommodates exogenous determinants of inefficiency. An empirical example of Tobin’s Q investment model is provided, in which the Q variable is known to suffer from measurement error. Results show that correcting for measurement error in the Q variable has an important effect on the estimation results.
15.
The random coefficients multinomial choice logit model, also known as the mixed logit, has been widely used in empirical choice analysis for the last thirty years. We prove that the distribution of random coefficients in the multinomial logit model is nonparametrically identified. Our approach requires variation in product characteristics only locally and does not rely on the special regressors with large supports used in related papers. One of our two identification arguments is constructive. Both approaches may be applied to other choice models with random coefficients.
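As a sketch of the simulation logic behind such models (not the paper's identification argument; the function name, attribute values, and mixing parameters below are hypothetical), mixed logit choice probabilities can be computed by Monte Carlo integration over the random coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_logit_probs(X, mu, sigma, draws=5_000):
    """Simulated choice probabilities for a mixed logit with one
    random coefficient beta ~ N(mu, sigma^2) over J alternatives.
    X: (J,) vector of a single product characteristic."""
    beta = rng.normal(mu, sigma, draws)             # (R,) coefficient draws
    util = np.outer(beta, X)                        # (R, J) systematic utilities
    expu = np.exp(util - util.max(axis=1, keepdims=True))
    probs = expu / expu.sum(axis=1, keepdims=True)  # plain logit probs per draw
    return probs.mean(axis=0)                       # average over the mixing dist.

X = np.array([0.0, 1.0, 2.0])
p = mixed_logit_probs(X, mu=0.5, sigma=1.0)
print(p)
```

With a positive mean coefficient, the alternative with the largest characteristic receives the highest average choice probability, even though individual draws with negative beta rank the alternatives the other way.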
16.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case that the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
17.
When analyzing the productivity and efficiency of firms, stochastic frontier models are very attractive because they allow one, as in typical regression models, to introduce some noise into the data generating process. Most of the approaches so far have used very restrictive, fully parametric specified models, both for the frontier function and for the components of the stochastic terms. Recently, local MLE approaches were introduced to relax these parametric hypotheses. In this work we show that most of the benefits of the local MLE approach can be obtained with fewer assumptions and with much easier, faster and numerically more robust computations, by using nonparametric least-squares methods. Our approach can also be viewed as a semi-parametric generalization of the so-called “modified OLS” that was introduced in the parametric setup. Although the final evaluation of individual efficiencies requires, as in the local MLE approach, a local specification of the distributions of noise and inefficiency, it is shown that a lot can be learned about the production process without such specifications. Even elasticities of the mean inefficiency can be analyzed with an unspecified noise distribution and a general class of local one-parameter scale families for inefficiency. This allows one to discuss the variation in inefficiency levels with respect to explanatory variables under minimal assumptions on the data generating process.
18.
In this paper we discuss goodness of fit tests for the distribution of technical inefficiency in stochastic frontier models. If we maintain the hypothesis that the assumed normal distribution for statistical noise is correct, the assumed distribution for technical inefficiency is testable. We show that a goodness of fit test can be based on the distribution of estimated technical efficiency, or equivalently on the distribution of the composed error term. We consider both the Pearson χ² test and the Kolmogorov–Smirnov test. We provide simulation results to show the extent to which the tests are reliable in finite samples.
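A minimal illustration of testing via the composed error (a sketch using a two-sample Kolmogorov–Smirnov test against a simulated reference, not the paper's exact procedure; the distributional parameters below are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def composed_error(n, sigma_v=0.3, sigma_u=0.5):
    """Draws of eps = v - u with v ~ N(0, sigma_v^2) noise and
    u ~ |N(0, sigma_u^2)| half-normal inefficiency."""
    v = rng.normal(0.0, sigma_v, n)
    u = np.abs(rng.normal(0.0, sigma_u, n))
    return v - u

resid = composed_error(500)              # stand-in for estimated composed errors
reference = composed_error(100_000)      # large sample under the assumed model
misspec = rng.normal(0.0, 0.3, 100_000)  # wrong candidate: pure noise, no u term

stat_ok, p_ok = stats.ks_2samp(resid, reference)
stat_bad, p_bad = stats.ks_2samp(resid, misspec)
print(p_ok, p_bad)  # the misspecified candidate is rejected far more strongly
```

The skewness that the one-sided inefficiency term induces in the composed error is exactly what separates the correct specification from the symmetric, noise-only candidate.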
19.
In Grosskopf (1995) and Banker (1995), different approaches to and problems of statistical inference in DEA frontier models are presented. This paper focuses on the basic characteristics of DEA models from a statistical point of view. It arose from comments and discussions on both papers above. The framework of DEA models is deterministic (all the observed points lie on the same side of the frontier); nevertheless, a stochastic model can be constructed once a data generating process is defined. So statistical analysis may be performed and sampling properties of DEA estimators can be established. However, practical statistical inference (such as tests of hypotheses and confidence intervals) still needs artifacts like the bootstrap to be performed. A consistent bootstrap also relies on a clear definition of the data generating process and on a consistent estimator of it; the approach of Simar and Wilson (1995) is described. Finally, some avenues are proposed for introducing stochastic noise in DEA models, in the spirit of the Kneip-Simar (1995) approach.
20.
Journal of Productivity Analysis - This paper considers the maximum likelihood estimation of a stochastic frontier production function with an interval outcome. We derive an analytical formula for...