20 similar documents found (search time: 15 ms)
1.
This paper considers the measurement of firm-specific (in)efficiency while allowing for the possibility that different firms adopt heterogeneous technologies. A flexible stochastic frontier model with random coefficients is proposed to distinguish technical inefficiency from technological differences across firms. Posterior inference is made possible via a simulation-based approach, namely the Markov chain Monte Carlo method. The model is applied to a real data set that has also been considered in Christensen and Greene (1976), Greene (1990), and Tsionas (2002), among others. Empirical results show that the regression coefficients can vary across firms, indicating the adoption of heterogeneous technologies by different firms. More importantly, we find that, without accounting for this possible heterogeneity, the inefficiency of firms can be over-estimated.
2.
In this paper, Bayesian estimation of log odds ratios over R × C and 2 × 2 × K contingency tables is considered, an approach that is particularly attractive in the presence of prior information. Likelihood functions for the log odds ratios are derived for each table structure. A prior specification strategy is proposed. Posterior inferences are drawn using Gibbs sampling and the Metropolis–Hastings algorithm. Two numerical examples are given to illustrate the methods.
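The posterior sampling step in entry 2 can be made concrete. The sketch below is not the paper's prior-specification strategy or its Gibbs/Metropolis–Hastings scheme; it is a minimal, self-contained illustration that draws posterior samples of a 2 × 2 log odds ratio under a conjugate Dirichlet prior on the cell probabilities (all function names and settings are hypothetical):

```python
import math
import random

def sample_log_odds_ratio(table, n_draws=4000, prior=1.0, seed=0):
    """Posterior draws of the log odds ratio for a 2x2 table under a
    Dirichlet(prior, ..., prior) prior on the cell probabilities.
    The Dirichlet posterior is sampled via independent Gamma draws."""
    rng = random.Random(seed)
    a, b = table[0]
    c, d = table[1]
    draws = []
    for _ in range(n_draws):
        # Dirichlet(a+prior, b+prior, c+prior, d+prior) via Gammas;
        # the normalizing constant cancels inside the odds ratio
        ga = rng.gammavariate(a + prior, 1.0)
        gb = rng.gammavariate(b + prior, 1.0)
        gc = rng.gammavariate(c + prior, 1.0)
        gd = rng.gammavariate(d + prior, 1.0)
        draws.append(math.log(ga * gd / (gb * gc)))
    return draws

# sample log OR of [[30, 10], [10, 30]] is log(9) ~ 2.20
draws = sample_log_odds_ratio([[30, 10], [10, 30]])
post_mean = sum(draws) / len(draws)
```

With informative priors, `prior` would be replaced by cell-specific pseudo-counts, which is where a prior specification strategy of the kind the paper proposes enters.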
3.
This paper proposes two types of stochastic correlation structures for Multivariate Stochastic Volatility (MSV) models, namely the constant correlation (CC) MSV and dynamic correlation (DC) MSV models, from which the stochastic covariance structures can easily be obtained. Both structures can be used for purposes of determining optimal portfolio and risk management strategies through the use of correlation matrices, and for calculating Value-at-Risk (VaR) forecasts and optimal capital charges under the Basel Accord through the use of covariance matrices. A technique is developed to estimate the DC MSV model using the Markov Chain Monte Carlo (MCMC) procedure, and simulated data show that the estimation method works well. Various multivariate conditional volatility and MSV models are compared via simulation, including an evaluation of alternative VaR estimators. The DC MSV model is also estimated using three sets of empirical data, namely Nikkei 225 Index, Hang Seng Index and Straits Times Index returns, and significant dynamic correlations are found. The Dynamic Conditional Correlation (DCC) model is also estimated, and is found to be far less sensitive to the covariation in the shocks to the indexes. The correlation process for the DCC model also appears to have a unit root, and hence constant conditional correlations in the long run. In contrast, the estimates arising from the DC MSV model indicate that the dynamic correlation process is stationary.
4.
Gary Koop, Journal of Economic Surveys, 1994, 8(1): 1–34
Abstract. Bayesian methods are widely used by theoretical econometricians and statisticians but have not won widespread acceptance from applied researchers. After briefly describing the basics of the Bayesian approach, we discuss several issues relating to the empirical application of Bayesian methods. The existing Bayesian empirical literature is also partially summarized. The remainder of the paper offers a non-technical survey of some recent computational advances in Bayesian econometrics. The overall goal is to persuade economists that Bayesian methods are both computationally feasible and easy to implement in empirical research.
5.
In many applications involving time-varying parameter VARs, it is desirable to restrict the VAR coefficients at each point in time to be non-explosive. This is an example of a problem where inequality restrictions are imposed on states in a state space model. In this paper, we describe how existing MCMC algorithms for imposing such inequality restrictions can work poorly (or not at all) and suggest alternative algorithms which exhibit better performance. Furthermore, we show that previous algorithms involve an approximation relating to a key prior integrating constant. Our algorithms are exact, not involving this approximation. In an application involving a commonly used U.S. data set, we present evidence that the algorithms proposed in this paper work well.
6.
We propose and examine a panel data model for isolating the effect of a treatment, taken once at baseline, from outcomes observed over subsequent time periods. In the model, the treatment intake and outcomes are assumed to be correlated due to unobserved or unmeasured confounders. Intake is partly determined by a set of instrumental variables, and the confounding on unobservables is modeled in a flexible way, varying both by time and treatment state. Covariate effects are assumed to be subject-specific and potentially correlated with other covariates. Estimation and inference are by Bayesian methods, implemented by tuned Markov chain Monte Carlo methods. Because our analysis is based on the framework developed by Chib [2004. Analysis of treatment response data without the joint distribution of counterfactuals. Journal of Econometrics, in press], the modeling and estimation do not involve either the unknowable joint distribution of the potential outcomes or the missing counterfactuals. The problem of model choice through marginal likelihoods and Bayes factors is also considered. The methods are illustrated in simulation experiments and in an application dealing with the effect of participation in high school athletics on future labor market earnings.
7.
A new approach to Markov chain Monte Carlo simulation was recently proposed by Propp and Wilson. This approach, unlike traditional ones, yields samples which have exactly the desired distribution. The Propp–Wilson algorithm requires this distribution to have a certain structure called monotonicity. In this paper an idea of Kendall is applied to show how the algorithm can be extended to the case where monotonicity is replaced by anti-monotonicity. As illustrating examples, simulations of the hard-core model and the random-cluster model are presented.
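The Propp–Wilson construction for a monotone chain can be sketched in a few lines. Below is a minimal illustration under my own assumptions (a bounded random walk on a small state space; function names are hypothetical): the top and bottom chains share randomness from time -T, and T is doubled until they coalesce, at which point the common value is an exact draw from the stationary distribution.

```python
import random

def cftp_sample(n_states=5, seed=None):
    """Exact stationary draw for a monotone random walk on
    {0, ..., n_states-1} via coupling from the past."""
    rng = random.Random(seed)
    top_state = n_states - 1
    randoms = []   # u_t for times -1, -2, ...; reused across restarts
    T = 1
    while True:
        while len(randoms) < T:
            randoms.append(rng.random())
        lo, hi = 0, top_state
        # apply the SAME updates u_{-T}, ..., u_{-1} to both chains;
        # the update is monotone, so lo <= hi is preserved throughout
        for t in range(T - 1, -1, -1):
            u = randoms[t]
            if u < 0.5:
                lo, hi = max(lo - 1, 0), max(hi - 1, 0)
            else:
                lo, hi = min(lo + 1, top_state), min(hi + 1, top_state)
        if lo == hi:       # coalesced: exact stationary sample
            return lo
        T *= 2             # otherwise restart from further in the past

samples = [cftp_sample(seed=s) for s in range(2000)]
```

Reusing the stored randomness when T is doubled is essential; redrawing it would bias the output.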
8.
This paper extends the existing fully parametric Bayesian literature on stochastic volatility to allow for more general return distributions. Instead of specifying a particular distribution for the return innovation, nonparametric Bayesian methods are used to flexibly model the skewness and kurtosis of the distribution while the dynamics of volatility continue to be modeled with a parametric structure. Our semiparametric Bayesian approach provides a full characterization of parametric and distributional uncertainty. A Markov chain Monte Carlo sampling approach to estimation is presented with theoretical and computational issues for simulation from the posterior predictive distributions. An empirical example compares the new model to standard parametric stochastic volatility models.
9.
Model specification for state space models is a difficult task, as one has to decide which components to include in the model and to specify whether these components are fixed or time-varying. To this aim, a new model-space MCMC method is developed in this paper, based on extending the Bayesian variable selection approach, commonly applied in regression models, to state space models. For non-Gaussian state space models, stochastic model search MCMC makes use of auxiliary mixture sampling. We focus on structural time series models including seasonal components, trend, or intervention effects. The method is applied to various well-known time series.
10.
In this paper we develop new Markov chain Monte Carlo schemes for the estimation of Bayesian models. One key feature of our method, which we call the tailored randomized block Metropolis–Hastings (TaRB-MH) method, is the random clustering of the parameters at every iteration into an arbitrary number of blocks. Then each block is sequentially updated through an M–H step. Another feature is that the proposal density for each block is tailored to the location and curvature of the target density based on the output of simulated annealing, following Chib and Ergashev (in press). We also provide an extended version of our method for sampling multi-modal distributions in which, at a pre-specified mode-jumping iteration, a single-block proposal is generated from one of the modal regions using a mixture proposal density, and this proposal is then accepted according to an M–H probability of move. At the non-mode-jumping iterations, the draws are obtained by applying the TaRB-MH algorithm. We also discuss how the approaches of Chib (1995) and Chib and Jeliazkov (2001) can be adapted to these sampling schemes for estimating the model marginal likelihood. The methods are illustrated in several problems. In the DSGE model of Smets and Wouters (2007), for example, which involves a 36-dimensional posterior distribution, we show that the autocorrelations of the sampled draws from the TaRB-MH algorithm decay to zero within 30–40 lags for most parameters. In contrast, the sampled draws from the random-walk M–H method, the algorithm that has been used to date in the context of DSGE models, exhibit significant autocorrelations even at lags 2500 and beyond. Additionally, the RW-MH does not explore the same high density regions of the posterior distribution as the TaRB-MH algorithm. Another example concerns the model of An and Schorfheide (2007), where the posterior distribution is multi-modal. While the RW-MH algorithm is unable to jump from the low modal region to the high modal region, and vice versa, we show that the extended TaRB-MH method explores the posterior distribution globally in an efficient manner.
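The random-clustering idea behind TaRB-MH can be sketched as follows. This is not the authors' algorithm: their proposals are tailored to the target's location and curvature via simulated annealing, whereas the hypothetical sketch below uses plain random-walk steps on a toy Gaussian target, keeping only the feature of re-randomizing the block structure at every iteration.

```python
import math
import random

def log_target(x):
    """Toy target: product of independent standard normals."""
    return -0.5 * sum(v * v for v in x)

def randomized_block_mh(dim=5, n_iter=6000, step=0.8, seed=1):
    """Random-blocking M-H: at every iteration the parameter indices
    are shuffled and cut into a random number of blocks, and each
    block is updated with its own (here, random-walk) M-H step."""
    rng = random.Random(seed)
    x = [0.0] * dim
    draws = []
    for _ in range(n_iter):
        idx = list(range(dim))
        rng.shuffle(idx)
        # choose random cut points to form the blocks
        n_blocks = rng.randint(1, dim)
        cuts = sorted(rng.sample(range(1, dim), n_blocks - 1))
        blocks = [idx[i:j] for i, j in zip([0] + cuts, cuts + [dim])]
        for block in blocks:
            prop = list(x)
            for k in block:
                prop[k] += rng.gauss(0.0, step)
            # standard M-H accept/reject for this block only
            if math.log(rng.random()) < log_target(prop) - log_target(x):
                x = prop
        draws.append(list(x))
    return draws

draws = randomized_block_mh()
```

Even this stripped-down version shows the mechanism: because block membership changes every iteration, strongly correlated parameters are periodically updated together.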
11.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
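The working-Laplace-likelihood idea behind BayesMAR can be sketched for an AR(1). The code below is an illustrative simplification (flat prior, fixed scale, fixed order, plain random-walk M-H; all names are hypothetical), not the paper's full procedure with model averaging over the autoregressive order. The key point is that the L1 residual sum replaces the Gaussian L2 sum, targeting the conditional median.

```python
import math
import random

def laplace_loglik(phi0, phi1, y, scale=1.0):
    """Working Laplace log-likelihood of an AR(1) at the median
    (additive constants in the scale are dropped)."""
    ll = 0.0
    for t in range(1, len(y)):
        ll -= abs(y[t] - phi0 - phi1 * y[t - 1]) / scale
    return ll

def bayes_mar_ar1(y, n_iter=5000, step=0.05, seed=2):
    """Random-walk M-H on (phi0, phi1) under the Laplace working
    likelihood and a flat prior."""
    rng = random.Random(seed)
    phi = [0.0, 0.0]
    cur = laplace_loglik(phi[0], phi[1], y)
    draws = []
    for _ in range(n_iter):
        prop = [phi[0] + rng.gauss(0, step), phi[1] + rng.gauss(0, step)]
        new = laplace_loglik(prop[0], prop[1], y)
        if math.log(rng.random()) < new - cur:
            phi, cur = prop, new
        draws.append(list(phi))
    return draws

# simulate an AR(1) with phi1 = 0.6 and recover it at the median
rng = random.Random(0)
y = [0.0]
for _ in range(400):
    y.append(0.6 * y[-1] + rng.gauss(0, 1))
draws = bayes_mar_ar1(y)
phi1_mean = sum(d[1] for d in draws[1000:]) / len(draws[1000:])
```

Because the simulated noise is symmetric, the conditional median coincides with the conditional mean, so the posterior for `phi1` should concentrate near 0.6.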
12.
Bayesian modification indices are presented that provide information for the process of model evaluation and model modification. These indices can be used to investigate the improvement in a model if fixed parameters are re-specified as free parameters. The indices can be seen as a Bayesian analogue to the modification indices commonly used in a frequentist framework. The aim is to provide diagnostic information for multi-parameter models where the number of possible model violations and the related number of alternative models is too large to render estimation of each alternative practical. As an example, the method is applied to an item response theory (IRT) model, that is, to the two-parameter model. The method is used to investigate differential item functioning and violations of the assumption of local independence.
13.
Spatial Economic Analysis, 2013, 8(3): 275–304
Abstract. We attempt to clarify a number of points regarding the use of spatial regression models for regional growth analysis. We show that, as in the case of non-spatial growth regressions, the effect of initial regional income levels wears off over time. Unlike the non-spatial case, long-run regional income levels depend on own-region as well as neighbouring-region characteristics, the spatial connectivity structure of the regions, and the strength of spatial dependence. Given this, the search for regional characteristics that exert important influences on income levels or growth rates should use spatial econometric methods that account for spatial dependence, own- and neighbouring-region characteristics, the choice of spatial regression specification, and the weight matrix. The framework adopted here illustrates a unified approach for dealing with these issues.
14.
In this paper we use a Monte Carlo study to investigate the finite sample properties of the Bayesian estimator obtained by the Gibbs sampler and its classical counterpart (i.e. the MLE) for a stochastic frontier model. Our Monte Carlo results show that the MSE performance of the Gibbs sampling estimates is substantially better than that of the MLE.
15.
Juan Carlos Martín, Concepción Román, Augusto Voltes-Dorta, Journal of Productivity Analysis, 2009, 31(3): 163–176
There exists a common belief among researchers and regional policy makers that the current centralized system of Aeropuertos Españoles y Navegación Aérea (AENA) should be changed to a more decentralized one in which airport managers have more autonomy. The main objective of this article is to evaluate the efficiency of Spanish airports by using Markov chain Monte Carlo (MCMC) simulation to estimate a stochastic frontier analysis (SFA) model. Our results show the existence of a significant level of inefficiency in airport operations. Additionally, we provide efficient marginal cost estimates for each airport, which also cast some doubts on the current pricing practices.
16.
In this paper we study the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present a simulation study.
17.
Tom Brijs, Filip Van den Bossche, Geert Wets, Dimitris Karlis, Statistica Neerlandica, 2006, 60(4): 457–476
These days, road safety has become a major concern in most modern societies. In this respect, determining which road locations are more dangerous than others (black spots, also called sites with promise) can help in better scheduling road safety policies. The present paper proposes a multivariate model to identify and rank sites according to their total expected cost to society. Bayesian estimation of the model via a Markov chain Monte Carlo approach is discussed. To illustrate the proposed model, accident data from 23,184 accident locations in Flanders (Belgium) are used, and a cost function proposed by the European Transport Safety Council is adopted. It is shown that the model produces insightful results that can help policy makers in prioritizing road infrastructure investments.
18.
Yasuhiro Omori, Siddhartha Chib, Neil Shephard, Jouchi Nakajima, Journal of Econometrics, 2007, 140(2): 425–449
This paper is concerned with the Bayesian analysis of stochastic volatility (SV) models with leverage. Specifically, the paper shows how the often-used Kim et al. [1998. Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies 65, 361–393] method, developed for SV models without leverage, can be extended to models with leverage. The approach relies on the novel idea of approximating the joint distribution of the outcome and volatility innovations by a suitably constructed ten-component mixture of bivariate normal distributions. The resulting posterior distribution is summarized by MCMC methods, and the small approximation error in working with the mixture approximation is corrected by a reweighting procedure. The overall procedure is fast and highly efficient. We illustrate the ideas on daily returns of the Tokyo Stock Price Index. Finally, extensions of the method are described for superposition models (where the log-volatility is made up of a linear combination of heterogeneous and independent autoregressions) and heavy-tailed error distributions (Student-t and log-normal).
19.
N. A. Sheehan, Revue Internationale de Statistique, 2000, 68(1): 83–110
Markov chain Monte Carlo methods are frequently used in the analyses of genetic data on pedigrees for the estimation of probabilities and likelihoods which cannot be calculated by existing exact methods. In the case of discrete data, the underlying Markov chain may be reducible and care must be taken to ensure that reliable estimates are obtained. Potential reducibility thus has implications for the analysis of the mixed inheritance model, for example, where genetic variation is assumed to be due to one single locus of large effect and many loci each with a small effect. Similarly, reducibility arises in the detection of quantitative trait loci from incomplete discrete marker data. This paper aims to describe the estimation problem in terms of simple discrete genetic models and the single-site Gibbs sampler. Reducibility of the Gibbs sampler is discussed and some current methods for circumventing the problem outlined.
20.
This paper explores the importance of incorporating the financial leverage effect in stochastic volatility models when pricing options. For illustrative purposes, we first conduct a simulation experiment using the Markov chain Monte Carlo (MCMC) sampling method. We then carry out an empirical analysis by applying the volatility models to real return data of the Hang Seng index over the period from January 1, 2013 to December 31, 2017. Our results highlight the accuracy of stochastic volatility models with leverage in option pricing when leverage is high. In addition, the leverage effect becomes more significant as the maturity of the options increases. Moreover, leverage affects the pricing of in-the-money options more than that of at-the-money and out-of-the-money options. Our study is therefore useful for both asset pricing and portfolio investment in the Hong Kong market, where volatility is an inherent feature of the economy.