Similar articles: 20 results found.
1.
Scanner data for fast-moving consumer goods typically amount to panels of time series where both N and T are large. To reduce the number of parameters and to shrink parameters towards plausible and interpretable values, Hierarchical Bayes models turn out to be useful. Such models contain, at the second level, a stochastic model that describes the parameters of the first level.
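The second-level shrinkage idea can be sketched in a few lines under a normal-normal hierarchy; the function name and parameter values below are illustrative assumptions, not the paper's model:

```python
import statistics

def shrink_to_grand_mean(unit_means, unit_var, prior_var):
    # Normal-normal hierarchy: the second-level model pulls each
    # unit-level estimate toward the grand mean. The weight w stays
    # on the data; 1 - w shrinks toward the second-level mean.
    grand = statistics.mean(unit_means)
    w = prior_var / (prior_var + unit_var)
    return [w * m + (1 - w) * grand for m in unit_means]

# Noisy unit-level estimates (unit_var large) shrink strongly.
shrunk = shrink_to_grand_mean([1.0, 2.0, 6.0], unit_var=4.0, prior_var=1.0)
```

With `unit_var=4` and `prior_var=1` the data weight is only 0.2, so each estimate moves most of the way toward the grand mean of 3.0.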

2.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by replacing the Gaussian error with a Laplace error, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
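A minimal sketch of the working Laplace likelihood for an AR(p) model at the median; the function name and toy series are assumptions for illustration, not the paper's code:

```python
import math

def laplace_ar_loglik(y, phi, intercept, scale):
    # Working Laplace log-likelihood of an AR(p) model: replacing the
    # Gaussian error with Laplace(0, scale) makes the fit a median
    # (absolute-error) regression on the autoregressive structure.
    p = len(phi)
    ll = 0.0
    for t in range(p, len(y)):
        mean = intercept + sum(phi[j] * y[t - 1 - j] for j in range(p))
        ll += -math.log(2 * scale) - abs(y[t] - mean) / scale
    return ll

y = [0.0, 0.5, 0.2, 0.7, 0.4]
ll = laplace_ar_loglik(y, phi=[0.3], intercept=0.1, scale=1.0)
```

An MCMC sampler would evaluate this log-likelihood (plus a prior) at each proposed parameter draw.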

3.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible because the Bayes factors are ill-defined. Using careful consideration of the parameter of interest in cointegration analysis and a re-specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283–306, 1991), this paper presents an approach that allows for Bayesian comparison of models of cointegration under 'ignorance' priors. Using the concept of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple term structure of interest rates model.

4.
One- and two-factor stochastic volatility models are assessed over three sets of stock returns data: S&P 500, DJIA, and Nasdaq. Estimation is done by simulated maximum likelihood using techniques that are computationally efficient, robust, straightforward to implement, and easy to adapt to different models. The models are evaluated using standard, easily interpretable time-series tools. The results are broadly similar across the three data sets. The tests provide no evidence that even the simple single-factor models are unable to capture the dynamics of volatility adequately; the problem is getting the shape of the conditional returns distribution right. None of the models comes close to matching the tails of this distribution. Including a second factor provides only a relatively small improvement over the single-factor models. Fitting this aspect of the data is important for option pricing and risk management.
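A one-factor stochastic volatility model of the kind assessed here can be simulated in a few lines; the parameter values and names below are illustrative assumptions:

```python
import math, random

def simulate_sv(T, mu, phi, sigma_v, seed=0):
    # One-factor stochastic volatility: the log-variance h_t follows
    # an AR(1) around mu, and returns are N(0, exp(h_t)).
    rng = random.Random(seed)
    h = mu
    returns = []
    for _ in range(T):
        h = mu + phi * (h - mu) + sigma_v * rng.gauss(0, 1)
        returns.append(math.exp(h / 2) * rng.gauss(0, 1))
    return returns

r = simulate_sv(500, mu=-1.0, phi=0.95, sigma_v=0.2)
```

Persistence (`phi` near 1) produces the volatility clustering that such models are designed to capture.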

5.
This paper demonstrates that the class of conditionally linear and Gaussian state-space models offers a general and convenient framework for simultaneously handling nonlinearity, structural change and outliers in time series. Many popular nonlinear time series models, including threshold, smooth transition and Markov-switching models, can be written in state-space form. It is then straightforward to add components that capture parameter instability and intervention effects. We advocate a Bayesian approach to estimation and inference, using an efficient implementation of Markov chain Monte Carlo sampling schemes for such linear dynamic mixture models. The general modelling framework and the Bayesian methodology are illustrated by means of several examples. An application to quarterly industrial production growth rates for the G7 countries demonstrates the empirical usefulness of the approach.
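As a toy instance of a Markov-switching model that fits this framework, one might simulate a two-regime switching AR(1); all names and parameter values are assumptions for illustration:

```python
import random

def simulate_ms_ar(T, mu, phi, p_stay, sigma, seed=1):
    # Two-regime Markov-switching AR(1): the intercept switches
    # between mu[0] and mu[1] according to a first-order Markov
    # chain with regime-specific staying probabilities p_stay.
    rng = random.Random(seed)
    s, y, path = 0, [0.0], [0]
    for _ in range(T - 1):
        if rng.random() > p_stay[s]:
            s = 1 - s
        path.append(s)
        y.append(mu[s] + phi * y[-1] + sigma * rng.gauss(0, 1))
    return y, path

y, path = simulate_ms_ar(200, mu=[0.0, 2.0], phi=0.5,
                         p_stay=[0.95, 0.9], sigma=0.3)
```

Conditional on the regime path, the model is linear and Gaussian, which is exactly the structure the conditionally linear state-space framework exploits.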

6.
Statistical analysis of autoregressive-moving average (ARMA) models is an important non-standard problem. No classical approach is widely accepted; legitimacy for most classical approaches rests solely on asymptotic grounds, while small sample sizes are common. The only obstacles to the Bayesian approach are designing a structure through which prior information can be incorporated and devising a practical computational method. The objective of this work is to overcome these two obstacles. In addition to the standard results, the Bayesian approach gives a different method of determining the order (p, q) of the ARMA model.

7.
We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement: for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model.

8.
In this study we focus attention on model selection in the presence of panel data. Our approach is eclectic in that it combines both classical and Bayesian techniques. It is also novel in that we address not only model selection, but also model occurrence, i.e., the process by which 'nature' chooses a statistical framework in which to generate the data of interest. For a given data subset, there exist competing models, each of which has an ex ante positive probability of being the correct model, but for any one generated sample, ex post exactly one such model is the basis for the observed data set. Attention focuses on how the underlying model occurrence probabilities of the competing models depend on characteristics of the environments in which the data subsets are generated. Classical, Bayesian, and mixed estimation approaches are developed. Bayesian approaches are shown to be especially attractive whenever the models are nested.

9.
The paper develops a general Bayesian framework for robust linear static panel data models using ε-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior means are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman–Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications, while conventional methods require separate estimators for each case.
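The ML-II weighting described above can be sketched as a marginal-likelihood-weighted average; the exact weight formula below is an assumed simple two-component form, not necessarily the paper's:

```python
def ml2_posterior_mean(theta_base, theta_eb, m_base, m_eb, eps):
    # Sketch of an ML-II posterior mean under an eps-contaminated
    # prior: a data-dependent weighted average of the Bayes estimator
    # under the base prior (theta_base) and the empirical Bayes
    # estimator (theta_eb), with the weight proportional to each
    # component's marginal likelihood (m_base, m_eb).
    lam = (1 - eps) * m_base / ((1 - eps) * m_base + eps * m_eb)
    return lam * theta_base + (1 - lam) * theta_eb

est = ml2_posterior_mean(1.0, 3.0, m_base=0.2, m_eb=0.8, eps=0.5)
```

When the data favor the empirical Bayes component (larger `m_eb`), the estimate moves toward it, which is the robustness mechanism of ε-contamination.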

10.
We consider classes of multivariate distributions which can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus our attention on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is then restricted to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for selection of these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.

11.
Multiple time series data may exhibit clustering over time and the clustering effect may change across different series. This paper is motivated by the Bayesian non-parametric modelling of the dependence between clustering effects in multiple time series analysis. We follow a Dirichlet process mixture approach and define a new class of multivariate dependent Pitman–Yor processes (DPY). The proposed DPY are represented in terms of vectors of stick-breaking processes which determine dependent clustering structures in the time series. We follow a hierarchical specification of the DPY base measure to account for various degrees of information pooling across the series. We discuss some theoretical properties of the DPY and use them to define Bayesian non-parametric repeated measurement and vector autoregressive models. We provide efficient Markov chain Monte Carlo algorithms for posterior computation of the proposed models and illustrate the effectiveness of the method with a simulation study and an application to the United States and the European Union business cycles.
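The stick-breaking representation underlying the DPY can be sketched for a single Pitman–Yor process; the parameterization below is the standard one and is an illustrative assumption about, not a quote of, the paper's construction:

```python
import random

def py_stick_breaking(n, discount, concentration, seed=0):
    # First n stick-breaking weights of a Pitman-Yor process:
    # V_k ~ Beta(1 - discount, concentration + k * discount),
    # w_k = V_k * prod_{j<k} (1 - V_j).
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for k in range(n):
        v = rng.betavariate(1 - discount, concentration + (k + 1) * discount)
        weights.append(v * remaining)
        remaining *= 1 - v
    return weights

w = py_stick_breaking(50, discount=0.25, concentration=1.0)
```

A positive discount slows the decay of the weights relative to a Dirichlet process, producing the heavier-tailed cluster-size behaviour the Pitman–Yor process is known for.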

12.
Bayesian analysis of a Tobit quantile regression model
This paper develops a Bayesian framework for Tobit quantile regression. Our approach is organized around a likelihood function that is based on the asymmetric Laplace distribution, a choice that turns out to be natural in this context. We discuss families of prior distributions on the quantile regression vector that lead to proper posterior distributions with finite moments. We show how the posterior distribution can be sampled and summarized by Markov chain Monte Carlo methods. A method for comparing alternative quantile regression models is also developed and illustrated. The techniques are illustrated with both simulated and real data. In particular, in an empirical comparison, our approach outperformed two other common classical estimators.
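The asymmetric Laplace working likelihood at the heart of this approach has a simple closed form; the sketch below (function name assumed) shows its link to the quantile check loss:

```python
import math

def asym_laplace_logpdf(u, tau, scale=1.0):
    # Log-density of the asymmetric Laplace distribution used as a
    # working likelihood in Bayesian quantile regression. Its kernel
    # is the quantile check loss rho_tau(u) = u * (tau - 1{u < 0}),
    # so maximizing it reproduces tau-quantile regression.
    check = u * (tau - (1.0 if u < 0 else 0.0))
    return math.log(tau * (1 - tau) / scale) - check / scale

lp = asym_laplace_logpdf(-0.5, tau=0.5)
```

At `tau=0.5` the density is symmetric and the kernel reduces to the absolute-error loss of median regression.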

13.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods such as importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results depend greatly on the choice of the importance or candidate density. Such a density has to be 'close' to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.

14.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is updated as new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
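The core step (importance sampling with a fat-tailed candidate, judged by how close it is to the target) can be sketched with a single Student-t candidate in place of a fitted mixture; everything below is a simplified illustration, not MitISEM itself:

```python
import math, random

def t_logpdf(x, df):
    # Log-density of a standard Student-t with df degrees of freedom.
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi)
            - (df + 1) / 2 * math.log(1 + x * x / df))

def t_sample(df, rng):
    # Normal divided by sqrt(chi-square/df); chi2_df = Gamma(df/2, 2).
    chi2 = rng.gammavariate(df / 2.0, 2.0)
    return rng.gauss(0, 1) / math.sqrt(chi2 / df)

def importance_sample(log_target, df, n, seed=0):
    # Self-normalized importance sampling with a fat-tailed Student-t
    # candidate; the effective sample size diagnoses how 'close' the
    # candidate is to the target (only a kernel of which is needed).
    rng = random.Random(seed)
    draws = [t_sample(df, rng) for _ in range(n)]
    logw = [log_target(x) - t_logpdf(x, df) for x in draws]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    s = sum(w)
    ess = s * s / sum(wi * wi for wi in w)
    mean = sum(wi * x for wi, x in zip(w, draws)) / s
    return mean, ess

# Target: an (unnormalized) standard normal kernel.
mean, ess = importance_sample(lambda x: -0.5 * x * x, df=5, n=4000)
```

MitISEM replaces the fixed t candidate with an EM-fitted mixture of Student-t densities, driving the effective sample size up for much harder, non-elliptical targets.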

15.
This paper considers the issue of selecting the number of regressors and the number of structural breaks in multivariate regression models in the possible presence of multiple structural changes. We develop a modified Akaike information criterion (AIC), a modified Mallows’ Cp criterion and a modified Bayesian information criterion (BIC). The penalty terms in these criteria are shown to be different from the usual terms. We prove that the modified BIC consistently selects the regressors and the number of breaks whereas the modified AIC and the modified Cp criterion tend to overfit with positive probability. The finite sample performance of these criteria is investigated through Monte Carlo simulations and it turns out that our modification is successful in comparison to the classical model selection criteria and the sequential testing procedure robust to heteroskedasticity and autocorrelation.
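The mechanics of selecting breaks by an information criterion can be sketched with a plain BIC on a mean-shift model; note the paper's modified penalty differs from the standard one used here:

```python
import math

def bic_for_breaks(y, breaks):
    # BIC of a mean-shift model with the given break dates: fit a
    # separate mean on each segment; the penalty counts the segment
    # means plus the break dates themselves. (A sketch: the paper's
    # modified penalty term is different.)
    T = len(y)
    bounds = [0] + sorted(breaks) + [T]
    rss = 0.0
    for a, b in zip(bounds, bounds[1:]):
        seg = y[a:b]
        m = sum(seg) / len(seg)
        rss += sum((v - m) ** 2 for v in seg)
    k = 2 * len(breaks) + 1  # segment means + break dates
    return T * math.log(rss / T) + k * math.log(T)

y = [0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9, 5.05]
best = min([[], [4]], key=lambda b: bic_for_breaks(y, b))
```

Here the large fit improvement from the break at t = 4 dominates the extra penalty, so the one-break model is selected.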

16.
This paper presents a Bayesian approach to regression models with time-varying parameters, or state vector models. Unlike most previous research in this field the model allows for multiple observations for each time period. Bayesian estimators and their properties are developed for the general case where the regression parameters follow an ARMA(s,q) process over time. This methodology is applied to the estimation of time-varying price elasticity for a consumer product, using biweekly sales data for eleven domestic markets. The parameter estimates and forecasting performance of the model are compared with various alternative approaches.
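For the special case of a random-walk coefficient (rather than the general ARMA(s,q) process), the time-varying parameter model reduces to a scalar Kalman filter; the sketch below is illustrative, with assumed names and noiseless toy data:

```python
def kalman_tvp(y, x, q, r):
    # Scalar Kalman filter for y_t = beta_t * x_t + e_t with a
    # random-walk coefficient beta_t = beta_{t-1} + v_t,
    # Var(v_t) = q and Var(e_t) = r.
    beta, p = 0.0, 10.0  # diffuse-ish initial state
    path = []
    for yt, xt in zip(y, x):
        p += q                           # predict the state variance
        s = xt * p * xt + r              # innovation variance
        k = p * xt / s                   # Kalman gain
        beta += k * (yt - beta * xt)     # update the coefficient
        p *= (1 - k * xt)
        path.append(beta)
    return path

x = [1.0] * 40
y = [0.5] * 20 + [2.0] * 20  # the true coefficient shifts halfway
betas = kalman_tvp(y, x, q=0.1, r=0.1)
```

The filtered coefficient tracks the level shift within a few observations, which is what makes such models useful for time-varying elasticities.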

17.
Bayesian stochastic search for VAR model restrictions
We propose a Bayesian stochastic search approach to selecting restrictions for vector autoregressive (VAR) models. For this purpose, we develop a Markov chain Monte Carlo (MCMC) algorithm that visits high posterior probability restrictions on the elements of both the VAR regression coefficients and the error variance matrix. Numerical simulations show that stochastic search based on this algorithm can be effective at both selecting a satisfactory model and improving forecasting performance. To illustrate the potential of our approach, we apply our stochastic search to VAR modeling of inflation transmission from producer price index (PPI) components to the consumer price index (CPI).
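One ingredient of such a stochastic search, the posterior inclusion probability of a single coefficient under a spike-and-slab prior, can be sketched as follows; this is a simplified stand-in for the paper's MCMC step, with assumed names and prior settings:

```python
import math

def inclusion_prob(beta_hat, se, slab_sd, spike_sd, prior_inc=0.5):
    # Posterior probability that a coefficient is unrestricted (slab)
    # rather than restricted to near zero (spike), given an estimate
    # beta_hat with standard error se. The marginal of beta_hat under
    # each component is normal with the variances added.
    def normal_logpdf(x, sd):
        return -0.5 * math.log(2 * math.pi * sd * sd) - x * x / (2 * sd * sd)
    l_in = normal_logpdf(beta_hat, math.hypot(se, slab_sd))
    l_out = normal_logpdf(beta_hat, math.hypot(se, spike_sd))
    odds = math.exp(l_in - l_out) * prior_inc / (1 - prior_inc)
    return odds / (1 + odds)

p_big = inclusion_prob(2.0, se=0.3, slab_sd=1.0, spike_sd=0.01)
p_small = inclusion_prob(0.01, se=0.3, slab_sd=1.0, spike_sd=0.01)
```

An MCMC search over restriction patterns draws each inclusion indicator from probabilities of this form, so it visits high-probability restrictions most often.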

18.
Bayesian averaging, prediction and nonnested model selection
This paper studies the asymptotic relationship between Bayesian model averaging and post-selection frequentist predictors in both nested and nonnested models. We derive conditions under which their difference is of a smaller order of magnitude than the inverse of the square root of the sample size in large samples. This result depends crucially on the relation between posterior odds and frequentist model selection criteria. Weak conditions are given under which consistent model selection is feasible, regardless of whether models are nested or nonnested and regardless of whether models are correctly specified or not, in the sense that they select the best model with the fewest parameters with probability converging to 1. Under these conditions, Bayesian posterior odds and BICs are consistent for selecting among nested models, but are not consistent for selecting among nonnested and possibly overlapping models. These findings have important implications for applied researchers who are frequent users of model selection tools in the empirical investigation of model predictions.
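The link between BIC and posterior odds invoked here is easy to make concrete: approximate posterior model probabilities from BIC values (a standard asymptotic approximation, not a result specific to this paper):

```python
import math

def bic_weights(bics):
    # Approximate posterior model probabilities via exp(-BIC/2),
    # normalized over the model set; subtracting the minimum BIC
    # first keeps the exponentials numerically stable.
    m = min(bics)
    w = [math.exp(-(b - m) / 2) for b in bics]
    s = sum(w)
    return [wi / s for wi in w]

w = bic_weights([100.0, 104.6, 110.0])
```

A BIC gap of 4.6 translates into posterior odds of roughly exp(2.3) ≈ 10 to 1 in favor of the better model.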

19.
Least squares model averaging by Mallows criterion
This paper is in response to a recent paper by Hansen (2007), who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen’s paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen’s approach. The first is the assumption that the approximating models are strictly nested in a way that depends on the ordering of regressors. Often there is no clear basis for the ordering, and the approach does not permit non-nested models, which are more realistic from a practical viewpoint. Second, for the optimality result to hold, the model weights are required to lie within a special discrete set. In fact, Hansen noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof which shows that the result on the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models that make up the model average estimator. These results provide a stronger theoretical basis for the use of the Mallows criterion in model averaging by strengthening existing findings.
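The Mallows criterion for weighted model averages can be sketched directly; the toy data, grid search, and function name below are illustrative assumptions (the paper's setting allows general linear combinations of regressors and continuous weights):

```python
def mallows_criterion(w, fits, y, k_list, sigma2):
    # Mallows criterion for a model-average fit: squared error of the
    # weighted fitted values plus a penalty 2 * sigma^2 * (weighted
    # model size), the model-averaging analogue of Mallows' Cp.
    n = len(y)
    avg = [sum(wj * fits[j][i] for j, wj in enumerate(w)) for i in range(n)]
    sse = sum((yi - ai) ** 2 for yi, ai in zip(y, avg))
    return sse + 2 * sigma2 * sum(wj * kj for wj, kj in zip(w, k_list))

y = [1.0, 2.0, 3.0, 4.0]
fits = [[2.5] * 4, [1.0, 2.0, 3.0, 4.0]]  # small model vs large model fit
ks = [1, 4]
grid = [[1 - g / 10, g / 10] for g in range(11)]
best = min(grid, key=lambda w: mallows_criterion(w, fits, y, ks, sigma2=0.2))
```

The minimizing weight is interior (neither 0 nor 1), illustrating why restricting weights to a discrete set, as in Hansen's original proof, is a genuine limitation.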

20.
We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced, with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of simulated data, US macroeconomic time series and surveys of stock market prices. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete, and it is able to detect parameter instability when the model set includes the true model that has generated subsamples of data. Also, substantial uncertainty appears in the weights when predictors are similar; residual uncertainty is reduced when the model set is complete; and learning reduces this uncertainty. For the macro series we find that incompleteness of the models is relatively large in the 1970s, at the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation; the predicted probabilities of recession align closely with the NBER business cycle dating; and the model weights have substantial uncertainty attached. With respect to returns of the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model in the beginning of the 1990s and switches to giving more weight to the professional forecasts over time. Information on the complete predictive distribution, and not just on some moments, turns out to be very important, above all during turbulent times such as the recent financial crisis. More generally, the proposed distributional state space representation offers great flexibility in combining densities.
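A minimal sketch of performance-driven combination weights: cumulative log scores mapped to weights through a softmax with a learning parameter. This is an illustration of the idea, not the paper's state-space scheme:

```python
import math

def performance_weights(log_scores, learning=1.0):
    # Combination weights driven by the past predictive performance
    # (cumulative log scores) of each model. The learning parameter
    # controls how aggressively the weights react to score gaps;
    # subtracting the maximum keeps the exponentials stable.
    m = max(log_scores)
    w = [math.exp(learning * (s - m)) for s in log_scores]
    t = sum(w)
    return [wi / t for wi in w]

w = performance_weights([-10.0, -12.0, -11.0])
```

Recomputing the weights each period as scores accumulate yields time-varying weights that drift toward the better-performing predictive densities, as in the S&P 500 example above.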
