Similar Documents
20 similar documents found.
1.
The paper is concerned with several kinds of stochastic frontier models whose likelihood function is not available in closed form. First, with output-oriented stochastic frontier models whose one-sided errors have a distribution other than the standard ones (exponential or half-normal). The gamma and beta distributions are leading examples. Second, with input-oriented stochastic frontier models which are common in theoretical discussions but not in econometric applications. Third, with two-tiered stochastic frontier models when the one-sided error components follow gamma distributions. Fourth, with latent class models with gamma distributed one-sided error terms. Fifth, with models whose two-sided error component is distributed as stable Paretian and the one-sided error is gamma. The principal aim is to propose approximations to the density of the composed error based on the inversion of the characteristic function (which turns out to be manageable) using the Fourier transform. Procedures that are based on the asymptotic normal form of the log-likelihood function and have arbitrary degrees of asymptotic efficiency are also proposed, implemented and evaluated in connection with output-oriented stochastic frontiers. The new methods are illustrated using data for US commercial banks, electric utilities, and a sample from the National Youth Longitudinal Survey.
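As an illustration of the characteristic-function route described in this abstract, the sketch below numerically inverts the characteristic function of a composed error ε = v − u with normal noise and gamma-distributed inefficiency. It is a minimal sketch, not the authors' implementation; the distributions, parameter values and function names are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): density of eps = v - u by inverting the
# characteristic function, with v ~ N(0, sigma_v^2) and u ~ Gamma(shape=k, scale=theta).
import numpy as np

def composed_error_density(x, sigma_v=0.3, k=2.0, theta=0.2,
                           t_max=300.0, n_grid=30001):
    """f(x) = (1/pi) * integral_0^inf Re[exp(-i*t*x) * phi_eps(t)] dt (trapezoid rule)."""
    t = np.linspace(1e-12, t_max, n_grid)
    # phi_eps(t) = phi_v(t) * phi_{-u}(t) = exp(-sigma_v^2 t^2 / 2) * (1 + i*theta*t)^(-k)
    phi = np.exp(-0.5 * (sigma_v * t) ** 2) * (1.0 + 1j * theta * t) ** (-k)
    integrand = np.real(np.exp(-1j * t * x) * phi)
    dt = t[1] - t[0]
    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dt / np.pi

# sanity check: the recovered density should integrate to roughly one
grid = np.linspace(-2.5, 1.5, 401)
dens = np.array([composed_error_density(g) for g in grid])
print((dens * (grid[1] - grid[0])).sum())  # close to 1
```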

2.
Formulation and estimation of stochastic frontier production function models
Previous studies of the so-called frontier production function have not utilized an adequate characterization of the disturbance term for such a model. In this paper we provide an appropriate specification, by defining the disturbance term as the sum of symmetric normal and (negative) half-normal random variables. Various aspects of maximum-likelihood estimation for the coefficients of a production function with an additive disturbance term of this sort are then considered.
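The normal/(negative) half-normal specification described here leads to a closed-form composed-error density, so the log-likelihood is straightforward to code. The sketch below is our own minimal implementation on simulated data; the parameterization, starting values and variable names are ours, not the paper's.

```python
# Sketch of the normal/half-normal composed-error log-likelihood and its maximization.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(params, y, X):
    """params = (beta..., log_sigma_v, log_sigma_u); eps = y - X @ beta."""
    k = X.shape[1]
    beta = params[:k]
    sigma_v, sigma_u = np.exp(params[k]), np.exp(params[k + 1])
    sigma = np.hypot(sigma_v, sigma_u)      # sigma^2 = sigma_v^2 + sigma_u^2
    lam = sigma_u / sigma_v                 # lambda = sigma_u / sigma_v
    eps = y - X @ beta
    ll = (np.log(2.0 / sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# simulated example: y = 1 + 0.5 x + v - u, with u half-normal
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = np.abs(rng.normal(scale=0.4, size=n))
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.2, size=n) - u
res = minimize(neg_loglik, x0=np.zeros(4), args=(y, X), method="BFGS")
print(res.x[:2], np.exp(res.x[2:]))         # frontier coefficients and error scales
```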

3.
This paper explores the changes in value added (VA) of a sample of schools for cohorts of students finishing secondary education between 2005 and 2008. VA estimates are based on distance measures obtained from DEA models. These measures are computed for each pupil in each school, and evaluate the distance between the school frontier in a given year and a pooled frontier comprising all schools analysed. The school VA is then computed by aggregating the VA scores for the cohort of pupils attending that school in a given year. The ratio between VA estimates for two consecutive cohorts that attended the school in different years is taken as the index of VA change. However, the evolution of school performance over time should consider not only the movements of the school frontier, but should also take into account other effects, such as the proximity of the students to the best practices, represented by the school frontier, observed over time. For that purpose we developed an enhanced Malmquist index to evaluate the evolution of school performance over time. One of the components of the proposed Malmquist index measures VA change, and the other measures the ability of all school students to move closer to their own school best practices over time. The approach developed is applied to a sample of Portuguese secondary schools.

4.
This paper analyzes spatial Probit models for cross-sectionally dependent data in a binary choice context. Observations are divided into pairwise groups and bivariate normal distributions are specified within each group. Partial maximum likelihood estimators are introduced and shown to be consistent and asymptotically normal under some regularity conditions. Consistent covariance matrix estimators are also provided. Estimates of average partial effects can also be obtained once we characterize the conditional distribution of the latent error. Finally, a simulation study shows the advantages of our new estimation procedure in this setting. Our proposed partial maximum likelihood estimators are shown to be more efficient than their generalized method of moments counterparts.
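A stripped-down version of the pairwise idea can be written directly: each pair of observations contributes a bivariate-probit probability to the partial likelihood. The sketch below is an illustration only; the fixed pairing and the single common within-pair correlation are simplifying assumptions, not the paper's full spatial setup.

```python
# Sketch of a pairwise (partial) likelihood for a spatial probit: observations are
# grouped in fixed pairs and each pair contributes a bivariate normal probability.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def pairwise_neg_loglik(params, y, X, pairs):
    """params = (beta..., atanh_rho); pairs is an (m, 2) array of index pairs."""
    k = X.shape[1]
    beta, rho = params[:k], np.tanh(params[k])   # keep rho in (-1, 1)
    xb = X @ beta
    q = 2.0 * y - 1.0                            # P(y_i, y_j) = Phi2(q_i*xb_i, q_j*xb_j; q_i*q_j*rho)
    ll = 0.0
    for i, j in pairs:
        r = q[i] * q[j] * rho
        p = multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, r], [r, 1.0]]).cdf([q[i] * xb[i], q[j] * xb[j]])
        ll += np.log(max(p, 1e-300))
    return -ll

# simulated example: 100 pairs, true beta = (0.5, 1.0), within-pair correlation 0.4
rng = np.random.default_rng(2)
m, rho0 = 100, 0.4
X = np.column_stack([np.ones(2 * m), rng.normal(size=2 * m)])
e = rng.multivariate_normal([0, 0], [[1, rho0], [rho0, 1]], size=m).ravel()
y = (X @ np.array([0.5, 1.0]) + e > 0).astype(float)
pairs = np.arange(2 * m).reshape(m, 2)
res = minimize(pairwise_neg_loglik, x0=np.zeros(3), args=(y, X, pairs), method="Nelder-Mead")
print(res.x[:2], np.tanh(res.x[2]))
```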

5.
This paper presents a consistent estimator of a censored linear regression model which does not require knowledge of the distribution of the error term. The estimator considered here applies Duncan's (1982) suggestion that the likelihood function for the censored regression model be treated as a functional of both the unknown regression vector and the unknown error distribution. Our estimator is the majorizing regression vector for this non-parametric likelihood functional. We find conditions which ensure the consistency of the non-parametric maximum likelihood estimator (NPMLE). The paper concludes with the results of Monte Carlo experiments which show the NPMLE to be more efficient than Powell's Least Absolute Deviations (LAD) estimator, particularly when the fraction of censored observations is large and the sample size is small.

6.
Estimation of the one-sided error component in stochastic frontier models may erroneously attribute firm characteristics to inefficiency if heterogeneity is not accounted for. However, unobserved inefficiency heterogeneity has been little explored. In this work, we propose to capture it through a random parameter which may affect the location, the scale, or both parameters of a truncated normal inefficiency distribution, using a Bayesian approach. Our findings, based on two real data sets, suggest that the inclusion of a random parameter in the inefficiency distribution is able to capture latent heterogeneity and can be used to validate the suitability of observed covariates for distinguishing heterogeneity from inefficiency. We also find relevant effects on the separation and shrinkage of individual posterior efficiency distributions when heterogeneity affects the location and scale parameters of the one-sided error distribution, which in turn affects the estimated mean efficiency scores and rankings. In particular, including heterogeneity simultaneously in both parameters of the inefficiency distribution in models that satisfy the scaling property leads to a decrease in the uncertainty around the mean scores and less overlap of the posterior efficiency distributions, which provides both more reliable efficiency scores and rankings.

7.
8.
This paper develops an exact maximum likelihood technique for estimating linear models with second-order autoregressive errors, which utilizes the full set of observations and explicitly constrains the estimates of the error process to satisfy a priori stationarity conditions. A non-linear solution technique which is new to econometrics and works very efficiently is put forward as part of the estimating procedure. Empirical results are presented which emphasize the importance of utilizing the full set of observations and the associated stationarity restrictions.
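For an AR(2) error process u_t = ρ1 u_{t−1} + ρ2 u_{t−2} + e_t, the stationarity restrictions are the usual triangle conditions ρ1 + ρ2 < 1, ρ2 − ρ1 < 1 and |ρ2| < 1. One standard way to impose them inside an optimizer is to search over partial autocorrelations instead; the sketch below shows that reparameterization as our own illustration, not necessarily the paper's solution technique.

```python
# Sketch: enforcing AR(2) stationarity by searching over unconstrained values mapped
# to partial autocorrelations in (-1, 1), then to AR coefficients (Durbin-Levinson step).
import numpy as np

def ar2_from_unconstrained(a):
    """Map a = (a1, a2) in R^2 to stationary AR(2) coefficients (rho1, rho2)."""
    pi1, pi2 = np.tanh(a)                  # partial autocorrelations in (-1, 1)
    rho1, rho2 = pi1 * (1.0 - pi2), pi2    # guarantees the triangle conditions
    return rho1, rho2

def is_stationary(rho1, rho2):
    """AR(2) stationarity triangle: rho1 + rho2 < 1, rho2 - rho1 < 1, |rho2| < 1."""
    return (rho1 + rho2 < 1.0) and (rho2 - rho1 < 1.0) and (abs(rho2) < 1.0)

# every unconstrained draw maps into the stationarity region
rng = np.random.default_rng(3)
draws = rng.normal(scale=3.0, size=(1000, 2))
print(all(is_stationary(*ar2_from_unconstrained(a)) for a in draws))  # True
```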

9.
Pseudo maximum likelihood estimates are developed for higher-order spatial autoregressive models with increasingly many parameters, including models with spatial lags in the dependent variables both with and without a linear or nonlinear regression component, and regression models with spatial autoregressive disturbances. Consistency and asymptotic normality of the estimates are established. Monte Carlo experiments examine finite-sample behaviour.

10.
11.
The likelihood function for the stochastic frontier model is shown to possess an unusual stationary point which may or may not be a maximum. A condition is given to determine if the point is a maximum, and the result is interpreted in the context of specification and estimation.
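In the normal/half-normal case this condition is commonly checked through the sign of the third moment of the OLS residuals: for a production frontier with error v − u, residuals skewed in the "wrong" (positive) direction signal that the zero-inefficiency stationary point is a local maximum. The sketch below is our own quick diagnostic in that spirit, not the paper's derivation.

```python
# Illustrative diagnostic (our own sketch): the sign of the third moment of the OLS
# residuals for a production frontier with eps = v - u; negative values are the
# skew direction implied by the frontier model.
import numpy as np

def ols_residual_third_moment(y, X):
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols
    return np.mean(e**3)

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ np.array([1.0, 0.5]) + rng.normal(scale=0.2, size=n)
     - np.abs(rng.normal(scale=0.5, size=n)))
print(ols_residual_third_moment(y, X))  # negative here, as the frontier model implies
```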

12.
13.
An important issue in models of technical efficiency measurement concerns the temporal behaviour of inefficiency. Consideration of dynamic models is necessary, but inference in such models is complicated. In this paper we propose a stochastic frontier model that allows for technical inefficiency effects and dynamic technical inefficiency, and use Bayesian procedures organized around data augmentation techniques to conduct inference. Firm-specific efficiency measures are also provided. The new methods are applied to a panel of large US commercial banks over the period 1989–2000.

14.
We consider a stochastic frontier model with error ε = v − u, where v is normal and u is half-normal. We derive the distribution of the usual estimate of u, E(u|ε). We show that as the variance of v approaches zero, E(u|ε) − u converges to zero, while as the variance of v approaches infinity, E(u|ε) converges to E(u). We graph the density of E(u|ε) for intermediate cases. To show that E(u|ε) is a shrinkage of u towards its mean, we derive and graph the distribution of E(u|ε) conditional on u. We also consider the distribution of estimated inefficiency in the fixed-effects panel data setting.
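The conditional mean E(u|ε) has a closed form in this normal/half-normal setting (the Jondrow et al.-type formula), which makes the shrinkage behaviour easy to see by simulation. The sketch below uses arbitrary parameter values and is our own illustration, not the paper's code.

```python
# Shrinkage of E(u|eps) towards E(u) in the normal/half-normal model, by simulation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma_u, sigma_v = 0.5, 0.3
n = 100_000
u = np.abs(rng.normal(scale=sigma_u, size=n))   # half-normal inefficiency
v = rng.normal(scale=sigma_v, size=n)           # symmetric noise
eps = v - u

# u | eps is truncated normal with these parameters (Jondrow et al.-type result)
sigma2 = sigma_u**2 + sigma_v**2
mu_star = -eps * sigma_u**2 / sigma2
sigma_star = sigma_u * sigma_v / np.sqrt(sigma2)
z = mu_star / sigma_star
E_u_given_eps = mu_star + sigma_star * norm.pdf(z) / norm.cdf(z)

# shrinkage: same mean as u, but smaller variance
print(u.mean(), E_u_given_eps.mean())
print(u.var(), E_u_given_eps.var())
```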

15.
16.
The classical stochastic frontier panel data models provide no mechanism to disentangle individual time-invariant unobserved heterogeneity from inefficiency. Greene (2005a, b) proposed the so-called "true" fixed-effects specification that distinguishes these two latent components. However, due to the incidental parameters problem, his maximum likelihood estimator may lead to biased variance estimates. We propose two alternative estimators that achieve consistency as n → ∞ with fixed T. Furthermore, we extend the Chen et al. (2014) results, providing a feasible estimator when the inefficiency is heteroskedastic and follows a first-order autoregressive process. We investigate the behavior of the proposed estimators through Monte Carlo simulations, which show good finite-sample properties, especially in small samples. An application to hospitals' technical efficiency illustrates the usefulness of the new approach.

17.

The two-tier stochastic frontier model has seen widespread application across a range of social science domains. It is particularly useful in examining bilateral exchanges where unobserved side-specific information exists on both sides of the transaction. These buyer- and seller-specific informational advantages, combined with uneven relative bargaining power, offer opportunities to extract surplus from the other side of the market. Currently, this model is hindered by the fact that identification and estimation rely on the potentially restrictive assumption that these factors are statistically independent. We present three different models for empirical application that allow for varying degrees of dependence across these latent informational/bargaining factors.
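To fix ideas, the composed error in a two-tier frontier takes the form ε = v + w − u, with w and u the two one-sided informational components. One simple way to make them dependent, purely for illustration, is to draw them through a Gaussian copula; the copula and the exponential marginals below are our assumptions, not necessarily one of the paper's three specifications.

```python
# Illustration only: a two-tier composed error eps = v + w - u in which the one-sided
# components w and u are made dependent through a Gaussian copula.
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(5)
n, rho = 50_000, 0.6
# correlated uniforms via a Gaussian copula
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
uniforms = norm.cdf(z)
w = expon.ppf(uniforms[:, 0], scale=0.3)   # buyer-side one-sided component
u = expon.ppf(uniforms[:, 1], scale=0.5)   # seller-side one-sided component
v = rng.normal(scale=0.2, size=n)          # symmetric noise
eps = v + w - u

print(np.corrcoef(w, u)[0, 1])             # induced dependence between the two tiers
print(eps.mean(), eps.std())
```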


18.
Efficient frontier estimation: a maximum entropy approach
An alternative efficiency estimation approach is developed utilizing generalized maximum entropy (GME). GME combines the strengths of both stochastic frontier analysis (SFA) and data envelopment analysis (DEA), allowing for the estimation of a frontier that is stochastic without making an ad hoc assumption about the distribution of the efficiency component. GME results approach SFA results as the one-sided inefficiency bounds used by GME shrink, and results similar to DEA are achieved as the bounds increase. The GME results are distributed like DEA but yield virtually the same rankings as SFA. The results suggest that GME may provide a link between various estimators of efficiency.

19.
This paper develops a pure simulation-based approach for computing maximum likelihood estimates in latent state variable models using Markov chain Monte Carlo (MCMC) methods. Our MCMC algorithm simultaneously evaluates and optimizes the likelihood function without resorting to gradient methods. The approach relies on data augmentation, with insights similar to simulated annealing and evolutionary Monte Carlo algorithms. We prove a limit theorem in the degree of data augmentation and use this to provide standard errors and convergence diagnostics. The resulting estimator inherits the sampling asymptotic properties of maximum likelihood. We demonstrate the approach on two latent state models central to financial econometrics: a stochastic volatility model and a multivariate jump-diffusion model. We find that convergence to the MLE is fast, requiring only a small degree of augmentation.

20.
This paper proposes a new approach to handling nonparametric stochastic frontier (SF) models, based on local maximum likelihood techniques. The model is presented as encompassing some anchorage parametric model in a nonparametric way. First, we derive asymptotic properties of the estimator for the general case (local linear approximations). The results are then tailored to an SF model where the convoluted error term (efficiency plus noise) is the sum of a half-normal and a normal random variable. The parametric anchorage model is a linear production function with a homoscedastic error term. The local approximation is linear for both the production function and the parameters of the error terms. The performance of our estimator is then established in finite samples using simulated data sets as well as cross-sectional data on US commercial banks. The methods appear to be robust, numerically stable and particularly useful for investigating a production process and the derived efficiency scores.
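The local-likelihood idea can be sketched compactly: at each evaluation point, the normal/half-normal log-likelihood is weighted by a kernel in the inputs and maximized over local parameters. The sketch below uses a local-constant simplification (the paper works with local-linear approximations) and an arbitrary bandwidth; all names and values are hypothetical.

```python
# Local maximum likelihood sketch (local-constant simplification): at a point x_eval,
# maximize a kernel-weighted normal/half-normal log-likelihood over local parameters.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def local_neg_loglik(params, x_eval, y, x, h):
    m, log_sv, log_su = params                     # local frontier level and error scales
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - m
    w = norm.pdf((x - x_eval) / h)                 # Gaussian kernel weights
    ll = w * (np.log(2.0 / sigma) + norm.logpdf(eps / sigma) + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# simulated one-input example: frontier f(x) = 1 + sin(x), half-normal inefficiency
rng = np.random.default_rng(6)
n = 1000
x = rng.uniform(0.0, 3.0, size=n)
y = 1.0 + np.sin(x) + rng.normal(scale=0.1, size=n) - np.abs(rng.normal(scale=0.3, size=n))
x_eval, h = 1.5, 0.3
res = minimize(local_neg_loglik, x0=np.array([y.mean(), np.log(0.1), np.log(0.3)]),
               args=(x_eval, y, x, h), method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1:]))                 # local frontier level, near 1 + sin(1.5)
```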
