Similar Documents
20 similar documents found.
1.
We consider Bayesian estimation of a stochastic production frontier with ordered categorical output, where the inefficiency error is assumed to follow an exponential distribution, and where output, conditional on the inefficiency error, is modelled as an ordered probit model. Gibbs sampling algorithms are provided for estimation with both cross-sectional and panel data, with panel data being our main focus. A Monte Carlo study and a comparison of results from an example where data are used in both continuous and categorical form support the usefulness of the approach. New efficiency measures are suggested to overcome a lack-of-invariance problem suffered by traditional efficiency measures. Potential applications include health and happiness production, university research output, financial credit ratings, and agricultural output recorded in broad bands. In our application to individual health production we use data from an Australian panel survey to compute posterior densities for marginal effects, outcome probabilities, and a number of within-sample and out-of-sample efficiency measures.
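In the continuous-output normal–exponential benchmark that underlies this setup, the Gibbs update for the inefficiency term has a closed-form full conditional: with noise v ~ N(0, σ²), inefficiency u ~ Exp(mean λ), and composed error ε = v − u, one obtains u | ε ~ N(−ε − σ²/λ, σ²) truncated to u > 0. A minimal Python sketch of that single update step (not the authors' code; the function name and parameter values are illustrative):

```python
import math
import random
import statistics

def draw_inefficiency(eps, sigma, lam, rng):
    """Draw u | eps from its full conditional in a normal-exponential
    stochastic frontier: eps = v - u, v ~ N(0, sigma^2), u ~ Exp(mean lam).
    The conditional is N(mu_star, sigma^2) truncated to u > 0."""
    mu_star = -eps - sigma**2 / lam
    nd = statistics.NormalDist(mu_star, sigma)
    lo = nd.cdf(0.0)                    # probability mass below the truncation point
    p = lo + (1.0 - lo) * rng.random()  # uniform draw over the kept upper tail
    return nd.inv_cdf(p)                # inverse-CDF sampling of the truncated normal

rng = random.Random(42)
draws = [draw_inefficiency(eps=-0.5, sigma=0.4, lam=0.8, rng=rng) for _ in range(5000)]
print(round(sum(draws) / len(draws), 3))
```

Within a full Gibbs sampler this step would alternate with draws of the frontier coefficients, σ², λ, and (in the ordered-probit case) the latent continuous output and cutpoints.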

2.
This article studies the estimation of production frontiers and efficiency scores when the commodity of interest is an economic bad with a discrete distribution. Existing parametric econometric techniques (stochastic frontier methods) assume that output is a continuous random variable but, if output is discretely distributed, then one faces a scenario of model misspecification. Therefore a new class of econometric models has been developed to overcome this problem. The Delaporte subclass of models is studied in detail, and tests of hypotheses are proposed to discriminate among parametric models. In particular, Pearson’s chi-squared test is adapted to construct a new kernel-based consistent Pearson test. A Monte Carlo experiment evaluates the merits of the new model and methods, and these are used to estimate the frontier and efficiency scores of the production of infant deaths in England. Extensions to the model are discussed.
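The grouped Pearson statistic at the heart of such specification tests compares observed count frequencies with the frequencies implied by a fitted parametric count model. A sketch under simplifying assumptions (a plain Poisson fit stands in for the Delaporte family, and the data and cell boundary are illustrative):

```python
import math
from collections import Counter

def pearson_chi2(counts, rate, max_cell):
    """Grouped Pearson statistic comparing observed count data with a
    Poisson(rate) fit; cells are 0, 1, ..., max_cell-1 and 'max_cell or more'."""
    n = len(counts)
    obs = Counter(min(c, max_cell) for c in counts)  # pool the upper tail
    chi2 = 0.0
    for k in range(max_cell + 1):
        if k < max_cell:
            p = math.exp(-rate) * rate**k / math.factorial(k)
        else:  # tail cell: P(X >= max_cell)
            p = 1.0 - sum(math.exp(-rate) * rate**j / math.factorial(j)
                          for j in range(max_cell))
        expected = n * p
        chi2 += (obs.get(k, 0) - expected) ** 2 / expected
    return chi2

data = [0, 1, 1, 2, 0, 3, 1, 0, 2, 1, 0, 1, 2, 0, 1]
rate = sum(data) / len(data)  # ML estimate of the Poisson mean
print(round(pearson_chi2(data, rate, max_cell=3), 3))
```

The article's kernel-based consistent version replaces this fixed partition with a smoothed comparison, but the observed-versus-expected logic is the same.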

3.
We develop a generalized method of moments (GMM) estimator for the distribution of a variable where summary statistics are available only for intervals of the random variable. Without individual data, one cannot calculate the weighting matrix for the GMM estimator. Instead, we propose a simulated weighting matrix based on a first-step consistent estimate. When the functional form of the underlying distribution is unknown, we estimate it using a simple yet flexible maximum entropy density. Our Monte Carlo simulations show that the proposed maximum entropy density is able to approximate various distributions extremely well. The two-step GMM estimator with a simulated weighting matrix improves the efficiency of the one-step GMM considerably. We use this method to estimate the U.S. income distribution and compare these results with those based on the underlying raw income data.

4.
5.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group to be large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions where the number of groups plays the role of sample size. Using a small number of groups is analogous to the ‘fixed-b’ asymptotics of Kiefer and Vogelsang (2002, 2005) (KV) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence that demonstrates that the procedure substantially outperforms conventional inference procedures.
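The simplest instance of this procedure, inference on a mean with clustered observations, shows the mechanics: sum the residuals within each cluster, build the CCE variance from those cluster scores, and refer the t-statistic to a t distribution with G − 1 degrees of freedom. A minimal sketch (the function name and data are illustrative, not from the paper):

```python
import math

def cluster_t_stat(y, groups):
    """t-statistic for H0: E[y] = 0 using a cluster covariance estimator.
    Under fixed-G asymptotics the statistic is compared with a
    t distribution with G - 1 degrees of freedom."""
    n = len(y)
    ybar = sum(y) / n
    # cluster scores: sum of residuals within each group
    scores = {}
    for yi, g in zip(y, groups):
        scores[g] = scores.get(g, 0.0) + (yi - ybar)
    G = len(scores)
    # CCE variance of ybar, with the small-sample factor G/(G-1)
    v = (G / (G - 1)) * sum(s * s for s in scores.values()) / n**2
    return ybar / math.sqrt(v), G - 1

y = [1.2, 0.8, 1.5, 1.1, 0.6, 0.9, 1.0, 1.3]
groups = [1, 1, 2, 2, 3, 3, 4, 4]
t, df = cluster_t_stat(y, groups)
print(round(t, 2), df)
```

With regressors, the cluster scores become sums of x·residual products within each cluster, but the fixed-G logic carries over unchanged.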

6.
A rapidly aging U.S. population is straining the resources available for long-term care and increasing the urgency of efficient operations in nursing homes. The scope for productivity improvements can be examined by estimating a stochastic frontier production function. We apply the methods of maximum likelihood and quantile regression to a panel of Texas nursing facilities and infer that the average productivity shortfall due to avoidable technical inefficiency is at least 8 percent and perhaps as large as 20 percent. Non-profit facilities are notably less productive than comparable facilities operated for profit, and the industry has constant returns to scale.

7.
This paper considers the estimation of Kumbhakar et al. (J Prod Anal. doi:10.1007/s11123-012-0303-1, 2012) (KLH) four random components stochastic frontier (SF) model using MLE techniques. We derive the log-likelihood function of the model using results from the closed-skew normal distribution. Our Monte Carlo analysis shows that MLE is more efficient and less biased than the multi-step KLH estimator. Moreover, we obtain closed-form expressions for the posterior expected values of the random effects, used to estimate short-run and long-run (in)efficiency as well as random-firm effects. The model is general enough to nest most of the currently used panel SF models; hence, its appropriateness can be tested. This is exemplified by analyzing empirical results from three different applications.

8.
Dr. Franz Konecny, Metrika, 1987, 34(1): 143–155
In this paper we are concerned with the large sample behavior of the MLE for a class of marked Poisson processes arising in hydrology. We establish strong consistency, asymptotic normality and asymptotic efficiency of the MLE. As an application we present the asymptotic distribution of the design discharge of a river flow.

9.
We show that the monotonicity condition is conceptually important in Stochastic Frontier Analysis (SFA). Despite its importance, most empirical studies do not impose monotonicity, probably because existing approaches are rather complex and laborious. Therefore, we propose a three-step procedure that is much simpler than existing approaches. We demonstrate how monotonicity of a translog function can be imposed regionally on a connected set (region) of input quantities. Our method can be applied not only to impose monotonicity on translog production frontiers but also to impose other restrictions on cost, distance, or profit frontiers.
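For a translog frontier ln y = α₀ + Σᵢ αᵢ ln xᵢ + ½ ΣᵢΣⱼ βᵢⱼ ln xᵢ ln xⱼ, monotonicity means every output elasticity eᵢ = αᵢ + Σⱼ βᵢⱼ ln xⱼ is nonnegative over the region of interest. A sketch of the verification step (illustrative coefficients, not the paper's estimates):

```python
import itertools

def translog_elasticities(alpha, beta, log_x):
    """Output elasticities of a translog function at log-input point log_x:
    e_i = alpha_i + sum_j beta_ij * log_x_j (beta symmetric)."""
    n = len(alpha)
    return [alpha[i] + sum(beta[i][j] * log_x[j] for j in range(n))
            for i in range(n)]

def monotone_on_region(alpha, beta, bounds, steps=5):
    """Check e_i >= 0 for every input over a box of log-input quantities.
    Each e_i is linear in log_x, so checking the 2^n corner points of the
    box would suffice; a grid is used here for illustration."""
    axes = [[lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
            for lo, hi in bounds]
    return all(min(translog_elasticities(alpha, beta, pt)) >= 0.0
               for pt in itertools.product(*axes))

alpha = [0.4, 0.5]
beta = [[-0.05, 0.02], [0.02, -0.04]]
bounds = [(-1.0, 1.0), (-1.0, 1.0)]  # connected region of log inputs
print(monotone_on_region(alpha, beta, bounds))  # True
```

Imposing (rather than merely checking) the restriction amounts to constraining the αᵢ and βᵢⱼ so these linear inequalities hold at the corners of the region.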

10.
This study combines the output distance function approach with a latent class model to estimate technical efficiency in English football in the presence of productive heterogeneity within a stochastic frontier analysis framework. The distance function approach allows the researcher to estimate technical efficiency including both on-field and off-field production, which is important in the case of English football where clubs are generally thought to maximize something other than profit. On-field production is measured using total league points, and off-field production is measured using total revenue. The data set consists of 2177 club-level observations on 88 clubs that competed in the four divisions of professional football in England over the 29-season period from 1981/82 to 2009/10. The results show evidence of three separate productivity classes in English football. As might be expected, technical efficiency estimated using the latent class model is, on average, higher than technical efficiency using an alternative method which confines heterogeneity to the intercept coefficient. Specifically, average efficiency for the sample is 87.3% and 93.2% for the random-intercept model and the latent class model, respectively.

11.
We replace the conventional market clearing process by maximizing the information entropy. At fixed return agents optimize their demands using a utility with a statistical tolerance against deviations from the deterministic maximum of the utility, which leads to information entropies for the agents. Interactions are described by coupling the agents to a large system, called ‘market’. The main problem in economic markets is the absence of the analogue of the first law of thermodynamics (energy conservation in physics). We solve this problem by restricting the utilities to be at most quadratic in the demand (ideal gas approximation). Maximizing the sum of agent and market entropy serves to eliminate the unknown properties of the latter. Assuming a stochastic volatility decomposition for the return we derive an expression for the pdf of the return, which is in excellent agreement with the daily DAX data. The pdf exhibits a power law behaviour with an integer tail index equal to the number of agent groups. With the assumption of an Ornstein–Uhlenbeck model for the risk aversion parameters of the agents the autocorrelation for the absolute return is well described up to time lags of 2.5 years.

12.
For estimating p (≥ 2) independent Poisson means, the paper considers a compromise between maximum likelihood and empirical Bayes estimators. Such compromise estimators enjoy both good componentwise and ensemble properties. Research supported by the NSF Grant Number MCS-8218091.

13.
The standard approach to measuring total factor productivity can produce biased results if the data are drawn from a market that is not in long-run competitive equilibrium. This article presents a methodology for adjusting data on output and variable inputs to the values they would have if the market were in long-run competitive equilibrium, given the fixed inputs and input prices. The method uses nonstochastic, parametric translog cost frontiers and calculates equilibrium values for output and variable inputs using an iterative linear programming procedure. Data from seven industries for 1970–1979 are used to illustrate the methodology. The editor for this paper was William H. Greene.

14.
The nonparametric frontier methodology is applied to a sample of banks, where output levels are measured either by the number of accounts and their average size, or by the total balances of the accounts. The efficiency rankings of individual banks are found to depend substantially on our choice of output metric, whereas the estimated size of potential productivity improvements in the banking sector is less affected. The results on economies of scale are also largely unchanged. The refereeing process of this paper was handled through S. Grosskopf.

15.
New matrix, determinant and trace versions of the Kantorovich inequality (KI) involving two positive definite matrices are presented. Some of these are used to study the efficiencies of minimum-distance (MD) estimators, generalized method-of-moments (GMM) estimators and several estimators specific to longitudinal or panel-data analysis. They are also used to give upper bounds for the determinant and trace of the asymptotic variance matrix of a weighted least-squares (WLS) estimator in the generalized linear model.
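The classical matrix form of the KI states that for a positive definite A with extreme eigenvalues m and M, and any unit vector x, (x′Ax)(x′A⁻¹x) ≤ (m + M)²/(4mM); it is this bound that controls how far a mis-weighted estimator can fall short of the efficient one. A quick numerical check on a 2×2 example (illustrative matrix, not from the article):

```python
import math
import random

def quad(M, x):
    """Quadratic form x' M x for a 2x2 matrix M."""
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

A     = [[2.0, 0.0], [0.0, 8.0]]   # eigenvalues m = 2, M = 8
A_inv = [[0.5, 0.0], [0.0, 0.125]]
bound = (2.0 + 8.0) ** 2 / (4 * 2.0 * 8.0)   # (m + M)^2 / (4 m M) = 1.5625

rng = random.Random(0)
worst = 0.0
for _ in range(10000):
    v = [rng.gauss(0, 1), rng.gauss(0, 1)]
    norm = math.hypot(*v)
    x = [v[0] / norm, v[1] / norm]           # random unit vector
    worst = max(worst, quad(A, x) * quad(A_inv, x))

print(worst <= bound)  # True: the Kantorovich bound holds
```

The product always lies in [1, bound]: the lower end is attained at eigenvectors of A, the upper end at equal-weight mixtures of the extreme eigenvectors.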

16.
This article examines the impact of fixed effects production functions vis-à-vis stochastic production frontiers on technical efficiency measures. An unbalanced panel consisting of 96 Vermont dairy farmers for the 1971–1984 period was used in the analysis. The models examined incorporated both time-variant and time-invariant technical efficiency. The major source of variation in efficiency levels across models stemmed from the assumption made concerning the distribution of the one-sided term in the stochastic frontiers. In general, the fixed effects technique was found superior to the stochastic production frontier methodology. Despite the fact that the results of various statistical tests revealed the superiority of some specifications over others, the overall conclusion of the study is that the efficiency analysis was fairly consistent throughout all the models considered.

17.
This paper deals with estimation of input-oriented (IO) technical inefficiency using a stochastic production frontier model. Econometrically the model is similar to a class of models that arise in specifying technical inefficiency in cost-minimizing and profit-maximizing frameworks. The standard maximum likelihood (ML) method that is used to estimate output-oriented (OO) technical efficiency cannot be applied to estimate these models. We use a simulated ML approach to estimate the IO production function and compare results from the IO and OO models, mainly to emphasize the point that estimated efficiency, returns to scale, technical change, etc., differ depending on whether one uses the model with IO or OO technical inefficiency.

18.
Technical efficiency in farming: a meta-regression analysis
A meta-regression analysis including 167 farm level technical efficiency (TE) studies of developing and developed countries was undertaken. The econometric results suggest that stochastic frontier models generate lower mean TE (MTE) estimates than non-parametric deterministic models, while parametric deterministic frontier models yield lower estimates than the stochastic approach. The primal approach is the most common technological representation. In addition, frontier models based on cross-sectional data produce lower estimates than those based on panel data whereas the relationship between functional form and MTE is inconclusive. On average, studies for animal production show a higher MTE than crop farming. The results also suggest that the studies for countries in Western Europe and Oceania present, on average, the highest levels of MTE among all regions after accounting for various methodological features. In contrast, studies for Eastern European countries exhibit the lowest estimate followed by those from Asian, African, Latin American, and North American countries. Additional analysis reveals that MTEs are positively and significantly related to the average income of the countries in the data set but this pattern is broken by the upper middle income group which displays the lowest MTE.
Teodoro Rivas

19.
This paper constructs hybrid forecasts that combine forecasts from vector autoregressive (VAR) model(s) with both short- and long-term expectations from surveys. Specifically, we use the relative entropy to tilt one-step-ahead and long-horizon VAR forecasts to match the nowcasts and long-horizon forecasts from the Survey of Professional Forecasters. We consider a variety of VAR models, ranging from simple fixed-parameter to time-varying parameters. The results across models indicate meaningful gains in multi-horizon forecast accuracy relative to model forecasts that do not incorporate long-term survey conditions. Accuracy improvements are achieved for a range of variables, including those that are not tilted directly but are affected through spillover effects from tilted variables. The accuracy gains for hybrid inflation forecasts from simple VARs are substantial, statistically significant, and competitive to time-varying VARs, univariate benchmarks, and survey forecasts. We view our proposal as an indirect approach to accommodating structural change and moving end points.
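Relative-entropy tilting has a simple form when the constraint is a single mean: the minimum-KL reweighting of equal-weighted forecast draws subject to a mean condition puts exponential weights wᵢ ∝ exp(λxᵢ) on the draws, with λ chosen so the weighted mean hits the survey value. A sketch under illustrative numbers (the draws and target are made up, not SPF data):

```python
import math

def tilt_to_mean(draws, target, tol=1e-10):
    """Exponentially tilt equal-weighted draws so their weighted mean equals
    `target`: the minimum-relative-entropy (Kullback-Leibler) change of
    measure subject to that moment condition, w_i proportional to exp(lam*x_i).
    lam is found by bisection (the tilted mean is increasing in lam)."""
    def tilted_mean(lam):
        m = max(lam * x for x in draws)          # stabilize the exponentials
        w = [math.exp(lam * x - m) for x in draws]
        s = sum(w)
        return sum(wi * x for wi, x in zip(w, draws)) / s
    lo, hi = -50.0, 50.0                         # bracket for bisection
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    m = max(lam * x for x in draws)
    w = [math.exp(lam * x - m) for x in draws]
    s = sum(w)
    return [wi / s for wi in w]

draws = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]     # e.g. simulated inflation paths from a VAR
weights = tilt_to_mean(draws, target=2.8)  # survey long-run forecast, say 2.8
tilted = sum(w * x for w, x in zip(weights, draws))
print(round(tilted, 3))  # 2.8
```

With several tilted variables or horizons, λ becomes a vector solved from the corresponding system of moment conditions, but the exponential-weight form is unchanged.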

20.
Journal of Productivity Analysis - National statistical organizations often rely on non-exhaustive surveys to estimate industry-level production functions in years in which a full census is not...  相似文献   
