Similar Literature (20 results)
1.
Journal of Econometrics, 2005, 124(2): 311–334
We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identifiable parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior specification. C-code along with an R interface for our algorithms is publicly available.

2.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
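As a concrete reference point, here is a minimal Python sketch (not the authors' code) of the asymmetric Laplace working log-likelihood for the tau-th quantile; `sigma` is an illustrative scale parameter:

```python
import numpy as np

def al_loglik(beta, y, X, tau, sigma=1.0):
    """Working log-likelihood: asymmetric Laplace errors at quantile tau.
    Maximizing over beta reproduces the standard quantile regression fit,
    which is why this density serves as a working likelihood."""
    u = y - X @ beta
    rho = u * (tau - (u < 0))                 # check (pinball) loss
    return y.size * np.log(tau * (1 - tau) / sigma) - rho.sum() / sigma
```

Posterior draws generated under this working likelihood are the ones whose covariance matrix, per the paper, must be adjusted before it yields valid inference.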

3.
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
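A canonical example of the data-augmentation idea is the Gibbs sampler for a binary probit model, sketched below in Python under a flat prior on the coefficients (an illustration of the general approach, not code from the paper):

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, rng=np.random.default_rng(0)):
    """Data-augmentation Gibbs sampler for a binary probit model
    (Albert-Chib style), assuming a flat prior on beta for simplicity."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        # draw latent utilities, truncated to agree with the observed outcomes
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # conditional on z, the coefficient update is an ordinary normal draw
        mean = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(mean, XtX_inv)
        draws[it] = beta
    return draws
```

Conditional on the latent utilities `z`, no integral in the likelihood ever needs to be evaluated, which is exactly the computational advantage described above.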

4.
This paper extends the existing fully parametric Bayesian literature on stochastic volatility to allow for more general return distributions. Instead of specifying a particular distribution for the return innovation, nonparametric Bayesian methods are used to flexibly model the skewness and kurtosis of the distribution, while the dynamics of volatility continue to be modeled with a parametric structure. Our semiparametric Bayesian approach provides a full characterization of parametric and distributional uncertainty. A Markov chain Monte Carlo sampling approach to estimation is presented, along with theoretical and computational considerations for simulating from the posterior predictive distributions. An empirical example compares the new model to standard parametric stochastic volatility models.

5.
In this study, we consider Bayesian methods for the estimation of a sample selection model with spatially correlated disturbance terms. We design a set of Markov chain Monte Carlo algorithms based on the method of data augmentation. The natural parameterization for the covariance structure of our model involves an unidentified parameter that complicates posterior analysis. The unidentified parameter, the variance of the disturbance term in the selection equation, is handled in different ways in these algorithms to achieve identification for other parameters. The Bayesian estimator based on these algorithms can account for the selection bias and the full covariance structure implied by the spatial correlation. We illustrate the implementation of these algorithms through a simulation study and an empirical application.

6.
Estimation and prediction in high dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually done by Markov chain Monte Carlo methods (often particle Markov chain Monte Carlo methods), which are usually slow for high dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, while being much faster.

7.
We propose and study the finite‐sample properties of a modified version of the self‐perturbed Kalman filter of Park and Jun (Electronics Letters 1992; 28: 558–559) for the online estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the estimate of the measurement error variance. This avoids the calibration of a design parameter, as the perturbation term is scaled by the amount of uncertainty in the data. Monte Carlo simulations show that this perturbation method tracks the dynamics of the parameters well compared to other online algorithms and to classical and Bayesian methods. The standardized self‐perturbed Kalman filter is used to forecast the equity premium on the S&P 500 index under several model specifications, and to determine the extent to which realized variance can be used to predict excess returns.
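A stylized Python sketch of the standardization idea follows: the state-covariance perturbation is scaled by a running estimate of the measurement error variance. The EWMA variance estimate and the tuning constant `gamma` are illustrative simplifications, not the published recursion:

```python
import numpy as np

def sp_kalman(y, X, gamma=0.01):
    """Stylized self-perturbed Kalman filter for y_t = x_t' b_t + e_t.
    After each update the state covariance is inflated by a perturbation
    term scaled by an estimate of the measurement error variance."""
    T, k = X.shape
    b = np.zeros(k)
    P = np.eye(k)
    s2 = 1.0                                  # running measurement-variance estimate
    betas = np.empty((T, k))
    for t in range(T):
        x = X[t]
        f = x @ P @ x + s2                    # prediction-error variance
        v = y[t] - x @ b                      # prediction error
        K = P @ x / f                         # Kalman gain
        b = b + K * v
        P = P - np.outer(K, x @ P)
        s2 = 0.99 * s2 + 0.01 * v**2          # crude EWMA variance estimate
        P = P + gamma * s2 * np.eye(k)        # standardized self-perturbation
        betas[t] = b
    return betas
```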

8.
A general parametric framework based on the generalized Student t‐distribution is developed for pricing S&P500 options. Higher order moments in stock returns as well as time‐varying volatility are priced. An important computational advantage of the proposed framework over Monte Carlo‐based pricing methods is that options can be priced using one‐dimensional quadrature integration. The empirical application is based on S&P500 options traded on select days in April 1995, a total sample of over 100,000 observations. A range of performance criteria are used to evaluate the proposed model, as well as a number of alternative models. The empirical results show that pricing higher order moments and time‐varying volatility yields improvements in the pricing of options, as well as correcting the volatility skew associated with the Black–Scholes model.
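To illustrate the computational point, the sketch below prices a European call by one-dimensional quadrature over a Student-t density for the terminal log return; this density is a stand-in, not the paper's risk-neutralized generalized Student-t:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import t as student_t

def call_price_quadrature(S0, K, r, T, mu, scale, df):
    """European call priced by 1-D quadrature over an illustrative
    Student-t density for the terminal log return x = log(S_T / S0)."""
    def integrand(x):
        payoff = max(S0 * np.exp(x) - K, 0.0)
        return payoff * student_t.pdf(x, df, loc=mu, scale=scale)
    lo = np.log(K / S0)                   # payoff is zero below this point
    # finite upper limit: under raw polynomial t tails, exp(x) has no
    # finite mean, so the tail must be truncated (or otherwise adjusted)
    price, _ = quad(integrand, lo, mu + 40 * scale)
    return np.exp(-r * T) * price
```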

9.
This article proposes three methods for merging homogeneous clusters of observations that are grouped according to a pre‐existing (known) classification. This clusterwise regression problem is particularly compelling in the analysis of international trade data, where transaction prices can be grouped according to the corresponding origin–destination combination. A proper merging of these prices could simplify the analysis of the market without affecting the representativeness of the data, and could highlight commercial anomalies that may hide fraud. The three algorithms proposed are based on an iterative application of the F‐test and have the advantage of being extremely flexible, as they do not require the number of final clusters to be fixed in advance, and their output depends only on a tuning parameter. Monte Carlo results show very good performance for all the procedures, while the application to two empirical data sets demonstrates the practical utility of the proposed methods for reducing the dimension of the market and isolating suspicious commercial behaviors.
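The building block of such algorithms is a test of whether two pre-defined groups share one regression. Below is a hedged sketch of one such step using a Chow-type F-test; the published procedures iterate tests of this kind and differ in detail:

```python
import numpy as np
from scipy.stats import f as f_dist

def rss(y, X):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def merge_pvalue(y1, X1, y2, X2):
    """Chow-type F test of whether two groups share one regression;
    a high p-value suggests the groups can be merged."""
    k = X1.shape[1]
    n = len(y1) + len(y2)
    rss_sep = rss(y1, X1) + rss(y2, X2)
    rss_pool = rss(np.concatenate([y1, y2]), np.vstack([X1, X2]))
    F = ((rss_pool - rss_sep) / k) / (rss_sep / (n - 2 * k))
    return 1.0 - f_dist.cdf(F, k, n - 2 * k)
```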

10.
Discrete choice experiments are widely used to learn about the distribution of individual preferences for product attributes. Such experiments are often designed and conducted deliberately for the purpose of designing new products. There is a long-standing literature on nonparametric and Bayesian modelling of preferences for the study of consumer choice when there is a market for each product, but this work does not apply when such markets fail to exist, as is the case with most product attributes. This paper takes up the common case in which attributes can be quantified and preferences over these attributes are monotone. It shows that monotonicity is the only shape constraint appropriate for a utility function in these circumstances. The paper models components of utility using a Dirichlet prior distribution and demonstrates that all monotone nondecreasing utility functions are supported by the prior. It develops a Markov chain Monte Carlo algorithm for posterior simulation that is reliable and practical given the number of attributes, choices and sample sizes characteristic of discrete choice experiments. The paper uses the algorithm to demonstrate the flexibility of the model in capturing heterogeneous preferences and applies it to a discrete choice experiment that elicits preferences for different auto insurance policies.
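One way to see how a Dirichlet prior can support monotone utilities: Dirichlet-distributed increments, accumulated over a grid of attribute levels, yield a nondecreasing step function. A small illustrative sketch, not the paper's exact construction:

```python
import numpy as np

def draw_monotone_utility(n_levels, alpha=1.0, rng=np.random.default_rng(1)):
    """Draw a random nondecreasing utility over a grid of attribute levels:
    Dirichlet increments, accumulated, form a monotone step function."""
    increments = rng.dirichlet(np.full(n_levels, alpha))
    return np.cumsum(increments)           # nondecreasing, ends at 1

u = draw_monotone_utility(50)
assert np.all(np.diff(u) >= 0)             # monotone by construction
```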

11.
In this article, we investigate the behaviour of a number of methods for estimating the co‐integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood‐based Inference in Cointegrated Vector Autoregressive Models (1996)] based around the use of either asymptotic or wild bootstrap‐based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014), forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite‐sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC‐based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co‐integration rank across different values of the co‐integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall, as it avoids the significant tendency of the BIC‐based method to over‐estimate the co‐integration rank in relatively small samples.
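For reference, the information-criterion approach reduces to a simple computation once the ordered eigenvalues from Johansen's reduced-rank regression are available. The sketch below is illustrative; in particular, the parameter count r(2n - r) covers only the reduced-rank matrices, ignoring lags and deterministic terms:

```python
import numpy as np

def select_rank(eigvals, T, penalty):
    """Choose the co-integration rank minimizing an information criterion,
    given ordered eigenvalues from Johansen's reduced-rank regression.
    penalty = np.log(T) gives BIC, penalty = 2 gives AIC."""
    eigvals = np.asarray(eigvals)
    n = len(eigvals)
    ic = [T * np.sum(np.log(1.0 - eigvals[:r])) + penalty * r * (2 * n - r)
          for r in range(n + 1)]
    return int(np.argmin(ic))
```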

12.
Graph‐theoretic methods of causal search based on the ideas of Pearl (2000), Spirtes et al. (2000), and others have been applied by a number of researchers to economic data, particularly by Swanson and Granger (1997) to the problem of finding a data‐based contemporaneous causal order for the structural vector autoregression, rather than, as is typically done, assuming a weakly justified Choleski order. Demiralp and Hoover (2003) provided Monte Carlo evidence that such methods were effective, provided that signal strengths were sufficiently high. Unfortunately, in applications to actual data, such Monte Carlo simulations are of limited value, as the causal structure of the true data‐generating process is necessarily unknown. In this paper, we present a bootstrap procedure that can be applied to actual data (i.e. without knowledge of the true causal structure). We show with an applied example and a simulation study that the procedure is an effective tool for assessing our confidence in causal orders identified by graph‐theoretic search algorithms.
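The skeleton of such a procedure is easy to state: rerun the search on bootstrap resamples and tally how often each causal feature is recovered. In the sketch below, `search_fn` is a placeholder for any graph-theoretic search routine returning a set of directed edges, and the row resampling is a simplification of the paper's residual-based bootstrap:

```python
import numpy as np

def bootstrap_edge_frequencies(data, search_fn, n_boot=500,
                               rng=np.random.default_rng(0)):
    """Tally how often a causal-search routine recovers each directed
    edge across bootstrap resamples; frequencies near 1 indicate edges
    we can be confident in."""
    counts = {}
    n = data.shape[0]
    for _ in range(n_boot):
        sample = data[rng.integers(0, n, size=n)]   # resample rows
        for edge in search_fn(sample):
            counts[edge] = counts.get(edge, 0) + 1
    return {e: c / n_boot for e, c in counts.items()}
```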

13.
This paper investigates whether there is time variation in the excess sensitivity of aggregate consumption growth to anticipated aggregate disposable income growth using quarterly US data over the period 1953–2014. Our empirical framework contains the possibility of stickiness in aggregate consumption growth and takes into account measurement error and time aggregation. Our empirical specification is cast into a Bayesian state‐space model and estimated using Markov chain Monte Carlo (MCMC) methods. We use a Bayesian model selection approach to deal with the non‐regular test for the null hypothesis of no time variation in the excess sensitivity parameter. Anticipated disposable income growth is calculated by incorporating an instrumental variables estimation approach into our MCMC algorithm. Our results suggest that the excess sensitivity parameter in the USA is stable at around 0.23 over the entire sample period.

14.
Inference in the inequality constrained normal linear regression model is approached as a problem in Bayesian inference, using a prior that is the product of a conventional uninformative distribution and an indicator function representing the inequality constraints. The posterior distribution is calculated using Monte Carlo numerical integration, which leads directly to the evaluation of expected values of functions of interest. This approach is compared with others that have been proposed. Three empirical examples illustrate the utility of the proposed methods using an inexpensive 32-bit microcomputer.
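The computational strategy is simple enough to sketch: draw from the unconstrained posterior and retain only the draws satisfying the inequalities, which is exactly what the indicator-function prior implies. The function arguments below are illustrative placeholders:

```python
import numpy as np

def constrained_posterior_mean(draw_unconstrained, satisfies, g, n_draws=100_000):
    """Posterior expectation of g(beta) under inequality constraints,
    computed by sampling the unconstrained posterior and discarding
    draws that violate the constraints."""
    kept = []
    for _ in range(n_draws):
        beta = draw_unconstrained()        # e.g. a multivariate-t posterior draw
        if satisfies(beta):                # indicator of the inequality region
            kept.append(g(beta))
    return np.mean(kept, axis=0), len(kept) / n_draws   # estimate, acceptance rate
```

The acceptance rate doubles as an estimate of the posterior probability of the constraint region, a useful diagnostic when the constraints are tight.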

15.
Today quasi-Monte Carlo methods are used successfully in computational finance and economics as an alternative to the Monte Carlo method. One drawback of these methods, however, is the lack of a practical way of estimating the error. To address this issue, several researchers have introduced so-called randomized quasi-Monte Carlo methods over the last decade. In this paper we present a survey of randomized quasi-Monte Carlo methods and compare their efficiency with that of the Monte Carlo method in pricing certain securities. We also investigate the effects of the Box–Muller and inverse transformation techniques when they are applied to low-discrepancy sequences.
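A minimal sketch of the randomized QMC recipe for a Black–Scholes-style European call: scrambled Sobol points, mapped to normals by the inverse transform (one of the two mappings the survey compares), with the spread across independent randomizations supplying the error estimate the plain method lacks:

```python
import numpy as np
from scipy.stats import qmc, norm

def call_price_rqmc(S0, K, r, sigma, T, m=12, n_rand=20, seed=0):
    """European call under geometric Brownian motion, priced with
    randomized (scrambled) Sobol points and the inverse transform."""
    prices = []
    for i in range(n_rand):
        sob = qmc.Sobol(d=1, scramble=True, seed=seed + i)
        u = sob.random_base2(m).ravel()           # 2**m scrambled points
        z = norm.ppf(u)                            # inverse transform to normals
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        prices.append(np.exp(-r * T) * np.maximum(ST - K, 0.0).mean())
    return np.mean(prices), np.std(prices, ddof=1) / np.sqrt(n_rand)
```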

16.
This paper shows that regular fractional polynomials can approximate regular cost, production and utility functions and their first two derivatives on closed compact subsets of the strictly positive orthant of Euclidean space arbitrarily well. These functions therefore can provide reliable approximations to demand functions and other economically relevant characteristics of tastes and technology. Using canonical cost function data, it shows that full Bayesian inference for these approximations can be implemented using standard Markov chain Monte Carlo methods.
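For concreteness, here is a fractional-polynomial basis over the conventional power set, with power 0 read as log(x); this shows only the regressor construction on a strictly positive grid, not the paper's regularity restrictions or Bayesian treatment:

```python
import numpy as np

FP_POWERS = (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0)   # conventional set

def fp_basis(x, powers=FP_POWERS):
    """Fractional-polynomial regressors for strictly positive x; the
    power 0 is taken to mean log(x), following the usual convention."""
    x = np.asarray(x, dtype=float)
    cols = [np.log(x) if p == 0.0 else x ** p for p in powers]
    return np.column_stack(cols)

# least-squares fit of a cost-type curve on a strictly positive grid
x = np.linspace(0.5, 5.0, 200)
y = x * np.log(x) + 0.1 * np.sqrt(x)                       # toy "cost" data
coef, *_ = np.linalg.lstsq(fp_basis(x), y, rcond=None)
```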

17.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset variable models must be computed. The proposed estimators use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the MCMC sampling algorithm used to generate the sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.
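One classical estimator with the same single-chain property is the Gelfand–Dey identity, sketched below; the article's estimators differ in detail but share the requirement of only one MCMC output:

```python
import numpy as np

def log_marglik_gelfand_dey(log_post_kernel, log_g, draws):
    """Marginal-likelihood estimate from a single MCMC output via the
    Gelfand-Dey identity  1/m(y) = E_post[ g(theta) / (p(y|theta)p(theta)) ],
    where g is a light-tailed density on the parameter space and
    log_post_kernel returns log p(y|theta) + log p(theta)."""
    log_ratios = np.array([log_g(th) - log_post_kernel(th) for th in draws])
    m = log_ratios.max()                   # log-sum-exp for numerical stability
    return -(m + np.log(np.mean(np.exp(log_ratios - m))))
```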

18.
Vector autoregressions with Markov‐switching parameters (MS‐VARs) offer substantial gains in data fit over VARs with constant parameters. However, Bayesian inference for MS‐VARs has remained challenging, impeding their uptake for empirical applications. We show that sequential Monte Carlo (SMC) estimators can accurately estimate MS‐VAR posteriors. Relative to multi‐step, model‐specific MCMC routines, SMC has the advantages of generality, parallelizability, and freedom from reliance on particular analytical relationships between prior and likelihood. We use SMC's flexibility to demonstrate that model selection among MS‐VARs can be highly sensitive to the choice of prior.
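A generic likelihood-tempering SMC sampler, the machinery applied here to MS-VAR posteriors, can be sketched compactly; the tempering ladder, resampling scheme, and single Metropolis rejuvenation step below are illustrative simplifications, not the authors' tuned sampler:

```python
import numpy as np

def tempered_smc(draw_prior, log_prior, log_lik, n_part=2000, n_temp=30,
                 step=0.2, rng=np.random.default_rng(0)):
    """Likelihood-tempering SMC: reweight particles along a ladder of
    powered likelihoods, resample, and rejuvenate with one random-walk
    Metropolis move per stage."""
    theta = np.array([draw_prior() for _ in range(n_part)])
    ll = np.array([log_lik(t) for t in theta])
    phis = np.linspace(0.0, 1.0, n_temp + 1)
    logw = np.zeros(n_part)
    for phi_old, phi_new in zip(phis[:-1], phis[1:]):
        logw += (phi_new - phi_old) * ll                 # incremental weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(n_part, size=n_part, p=w)       # multinomial resampling
        theta, ll = theta[idx], ll[idx]
        logw = np.zeros(n_part)
        # one Metropolis move targeting the tempered posterior prior * lik**phi
        prop = theta + step * rng.standard_normal(theta.shape)
        ll_prop = np.array([log_lik(t) for t in prop])
        log_acc = (phi_new * (ll_prop - ll)
                   + np.array([log_prior(p) - log_prior(t)
                               for p, t in zip(prop, theta)]))
        accept = np.log(rng.uniform(size=n_part)) < log_acc
        theta[accept], ll[accept] = prop[accept], ll_prop[accept]
    return theta
```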

19.
We consider Bayesian estimation of a stochastic production frontier with ordered categorical output, where the inefficiency error is assumed to follow an exponential distribution, and where output, conditional on the inefficiency error, is modelled as an ordered probit model. Gibbs sampling algorithms are provided for estimation with both cross-sectional and panel data, with panel data being our main focus. A Monte Carlo study and a comparison of results from an example where data are used in both continuous and categorical form supports the usefulness of the approach. New efficiency measures are suggested to overcome a lack-of-invariance problem suffered by traditional efficiency measures. Potential applications include health and happiness production, university research output, financial credit ratings, and agricultural output recorded in broad bands. In our application to individual health production we use data from an Australian panel survey to compute posterior densities for marginal effects, outcome probabilities, and a number of within-sample and out-of-sample efficiency measures.

20.
In Bayesian analysis of vector autoregressive models, and especially in forecasting applications, the Minnesota prior of Litterman is frequently used. In many cases other prior distributions provide better forecasts and are preferable from a theoretical standpoint. Several of these priors require numerical methods in order to evaluate the posterior distribution. Different ways of implementing Monte Carlo integration are considered. It is found that Gibbs sampling performs as well as, or better than, importance sampling, and that the Gibbs sampling algorithms are less adversely affected by model size. We also report on the forecasting performance of the different prior distributions.
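For reference, a hedged sketch of Minnesota-style prior standard deviations for VAR lag coefficients; hyperparameter names and defaults are illustrative rather than Litterman's exact calibration, and prior means are zero except for a value of one on each variable's own first lag:

```python
import numpy as np

def minnesota_prior_sd(sigma, p, lam1=0.2, lam2=0.5, lam3=1.0):
    """Prior standard deviations for VAR lag coefficients, Minnesota style:
    own lags get sd lam1 / l**lam3; cross lags are shrunk further by lam2
    and rescaled by the ratio of residual standard deviations."""
    n = len(sigma)
    sd = np.empty((p, n, n))        # sd[l-1, i, j]: lag-l coef of var j in eq i
    for l in range(1, p + 1):
        for i in range(n):
            for j in range(n):
                base = lam1 / l**lam3
                sd[l - 1, i, j] = base if i == j else base * lam2 * sigma[i] / sigma[j]
    return sd
```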
