Similar Articles
20 similar articles found (search time: 9 ms)
1.
This paper presents a Bayesian model averaging regression framework for forecasting US inflation, in which the set of predictors included in the model is automatically selected from a large pool of potential predictors and the set of regressors is allowed to change over time. Using real-time data for the 1960–2011 period, this model is applied to forecast personal consumption expenditures and gross domestic product deflator inflation. The results of this forecasting exercise show that, although the model is not able to beat a simple random-walk model in terms of point forecasts, it does produce superior density forecasts compared with a range of alternative forecasting models. Moreover, a sensitivity analysis shows that the forecasting results are relatively insensitive to prior choices and that the forecasting performance is not affected by the inclusion of a very large set of potential predictors.
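The model-averaged forecast this abstract describes can be illustrated with a minimal sketch: enumerate predictor subsets, fit each by OLS, and weight the resulting forecasts by approximate posterior model probabilities. The BIC-based weights, the exhaustive enumeration, and the function name `bma_forecast` are simplifications for illustration only; the paper's framework selects time-varying predictor sets on real-time data.

```python
import itertools
import numpy as np

def bma_forecast(y, X, x_new):
    """Average one-step forecasts over all predictor subsets, weighting
    each OLS model by exp(-BIC/2), a crude stand-in for posterior
    model probabilities in Bayesian model averaging."""
    n, k = X.shape
    forecasts, bics = [], []
    for r in range(k + 1):
        for subset in itertools.combinations(range(k), r):
            # design matrix: intercept plus the chosen predictors
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            sigma2 = max(resid @ resid / n, 1e-12)
            bics.append(n * np.log(sigma2) + Z.shape[1] * np.log(n))
            z_new = np.concatenate(([1.0], x_new[list(subset)]))
            forecasts.append(z_new @ beta)
    bics = np.asarray(bics)
    w = np.exp(-(bics - bics.min()) / 2)   # relative model weights
    w /= w.sum()
    return float(np.dot(w, forecasts))
```

When one predictor dominates, the weight mass concentrates on the models containing it, so the averaged forecast tracks the best small model rather than any single pre-selected specification.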

2.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
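The core idea — replacing the Gaussian error of an autoregression with a Laplace error so that estimation becomes median (least-absolute-deviations) autoregression — can be sketched as a MAP point estimate under a flat prior. The paper itself uses full MCMC plus model averaging over the AR order; `bayesmar_map_forecast` and its optimizer choice below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def bayesmar_map_forecast(y, p):
    """MAP estimate of an AR(p) model with i.i.d. Laplace errors:
    under a flat prior this reduces to least-absolute-deviations
    (median) autoregression. Returns the one-step-ahead forecast."""
    Y = y[p:]
    # lag matrix: intercept, y[t-1], ..., y[t-p]
    Z = np.column_stack([np.ones(len(Y))]
                        + [y[p - j:len(y) - j] for j in range(1, p + 1)])
    b0, *_ = np.linalg.lstsq(Z, Y, rcond=None)      # OLS warm start
    obj = lambda b: np.abs(Y - Z @ b).sum()         # negated Laplace log-likelihood, up to scale
    b = minimize(obj, b0, method="Powell").x
    z_new = np.concatenate(([1.0], y[::-1][:p]))    # last p values, most recent first
    return float(z_new @ b)
```

The absolute-deviation objective is exactly what makes the forecast a conditional median rather than a conditional mean, which is where the robustness to outliers comes from.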

3.
This article considers ultrahigh-dimensional forecasting problems with survival response variables. We propose a two-step model averaging procedure for improving the forecasting accuracy of the true conditional mean of a survival response variable. The first step is to construct a class of candidate models, each with low-dimensional covariates. For this, a feature screening procedure is developed to separate the active and inactive predictors through a marginal Buckley–James index, and to group covariates with a similar index size together to form regression models with survival response variables. The proposed screening method can select active predictors under covariate-dependent censoring, and enjoys sure screening consistency under mild regularity conditions. The second step is to find the optimal model weights for averaging by adapting a delete-one cross-validation criterion, without the standard constraint that the weights sum to one. The theoretical results show that the delete-one cross-validation criterion achieves the lowest possible forecasting loss asymptotically. Numerical studies demonstrate the superior performance of the proposed variable screening and model averaging procedures over existing methods.

4.
In this paper it is pointed out that a Bayesian forecasting procedure performed better according to an average mean square error (MSE) criterion than the many other forecasting procedures utilized in the forecasting experiments reported in an extensive study by Makridakis et al. (1982). This fact was not mentioned or discussed by the authors. Also, it is emphasized that if criteria other than MSE are employed, Bayesian forecasts that are optimal relative to them should be employed. Specific examples are provided and analyzed to illustrate this point.

5.
6.
A complete procedure for calculating the joint predictive distribution of future observations based on the cointegrated vector autoregression is presented. The large degree of uncertainty in the choice of cointegration vectors is incorporated into the analysis via the prior distribution. This prior has the effect of weighting the predictive distributions based on the models with different cointegration vectors into an overall predictive distribution. The ideas of Litterman [Mimeo, Massachusetts Institute of Technology, 1980] are adopted for the prior on the short-run dynamics of the process, resulting in a prior which only depends on a few hyperparameters. A straightforward numerical evaluation of the predictive distribution based on Gibbs sampling is proposed. The prediction procedure is applied to a seven-variable system with a focus on forecasting Swedish inflation.

7.
Parameter estimation under model uncertainty is a difficult and fundamental issue in econometrics. This paper compares the performance of various model averaging techniques. In particular, it contrasts Bayesian model averaging (BMA) — currently one of the standard methods used in growth empirics — with a new method called weighted-average least squares (WALS). The new method has two major advantages over BMA: its computational burden is trivial and it is based on a transparent definition of prior ignorance. The theory is applied to and sheds new light on growth empirics where a high degree of model uncertainty is typically present.

8.
We extend the recently introduced latent threshold dynamic models to include dependencies among the dynamic latent factors which underlie multivariate volatility. With an ability to induce time-varying sparsity in factor loadings, these models now also allow time-varying correlations among factors, which may be exploited in order to improve volatility forecasts. We couple multi-period, out-of-sample forecasting with portfolio analysis using standard and novel benchmark neutral portfolios. Detailed studies of stock index and FX time series include: multi-period, out-of-sample forecasting, statistical model comparisons, and portfolio performance testing using raw returns, risk-adjusted returns and portfolio volatility. We find uniform improvements on all measures relative to standard dynamic factor models. This is due to the parsimony of latent threshold models and their ability to exploit between-factor correlations so as to improve the characterization and prediction of volatility. These advances will be of interest to financial analysts, investors and practitioners, as well as to modeling researchers.

9.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
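For intuition, the kind of computation behind such g-prior model comparison can be sketched with the textbook fixed-g Zellner Bayes factor against the null model, expressed through each model's R². The default g = n ("unit information") used below is a common textbook choice, not the paper's specific g0j recommendation.

```python
import itertools
import numpy as np

def g_prior_model_probs(y, X, g=None):
    """Posterior model probabilities over all regressor subsets under a
    Zellner g-prior with fixed g and a uniform prior over models, via
    the textbook Bayes factor of each model against the null
    (intercept-only) model:
        BF = (1+g)^((n-1-k)/2) * (1 + g*(1-R^2))^(-(n-1)/2)."""
    n, k = X.shape
    g = n if g is None else g
    yc = y - y.mean()
    tss = yc @ yc
    models, log_bf = [], []
    for r in range(k + 1):
        for s in itertools.combinations(range(k), r):
            if r == 0:
                r2 = 0.0                      # null model: no fit beyond the mean
            else:
                Z = np.column_stack([np.ones(n)] + [X[:, j] for j in s])
                beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
                resid = y - Z @ beta
                r2 = 1.0 - (resid @ resid) / tss
            models.append(s)
            log_bf.append(0.5 * (n - 1 - r) * np.log1p(g)
                          - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2)))
    log_bf = np.asarray(log_bf)
    w = np.exp(log_bf - log_bf.max())
    return dict(zip(models, w / w.sum()))
```

The (1+g)^((n-1-k)/2) term is the built-in penalty for model size: adding a useless regressor must raise R² enough to offset it, which is the sense in which the prior choice drives model selection.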

10.
Accurate forecasts of mortality rates are essential to various types of demographic research, such as population projection, and to the pricing of insurance products such as pensions and annuities. Recent studies have considered a spatial–temporal vector autoregressive (STVAR) model for the mortality surface, where mortality rates of each age depend on the historical values for that age (temporality) and on those of the neighboring ages (spatiality). This model has sound statistical properties, including co-integrated dependent variables, the existence of closed-form solutions and a simple error structure. Despite its improved forecasting performance over the famous Lee–Carter (LC) model, the constraint that only the effects of the same and neighboring cohorts are significant can be too restrictive. In this study, we apply the concept of hyperbolic memory to the spatial dimension and propose a hyperbolic STVAR (HSTVAR) model. Retaining all desirable features of the STVAR, our model uniformly beats the LC, the weighted functional demographic model, the STVAR and sparse VAR counterparts in forecasting accuracy when French and Spanish mortality data over 1950–2016 are considered. Simulation results also lead to robust conclusions. Long-term forecasting analyses up to 2050 comparing the models are further performed. To illustrate the extensibility of HSTVAR to a multi-population setting, a two-population illustrative example using the same sample is further presented.

11.
We use a dynamic modeling and selection approach for studying the informational content of various macroeconomic, monetary, and demographic fundamentals for forecasting house-price growth in the six largest countries of the European Monetary Union. The approach accounts for model uncertainty and model instability. We find superior performance compared to various alternative forecasting models. Plots of cumulative forecast errors visualize the superior performance of our approach, particularly after the recent financial crisis.

12.
Humanitarian aid agencies usually resort to inventory prepositioning to mitigate the impact of disasters by sending emergency supplies to the affected area as quickly as possible. However, a lack of replenishment opportunity after a disaster can greatly hamper the effectiveness of the relief operation due to uncertainty in demand. In this paper, a prepositioning problem is formulated as a two-period newsvendor model in which the response phase is divided into two periods. The model acknowledges that demand remains uncertain even after the disaster and uses a Bayesian approach to revise the demand estimate for the second period. Based on the revised demand, an order is placed at the beginning of the second period and replenished instantaneously. A two-stage solution methodology is proposed to find the prepositioning quantity and the post-disaster replenishment quantity that minimise the total expected costs of relief operations. A numerical example is presented to demonstrate the solution methodology, and sensitivity analysis is performed to examine the effect of the model parameters. The results highlight that the replenishment quantity is largely insensitive to variation in the model parameters.
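The second-stage decision can be sketched under standard textbook assumptions: a conjugate normal update of mean demand from first-period observations, followed by an order-up-to level at the critical fractile of the posterior predictive distribution. All names, the normal-demand assumption, and the cost structure below are illustrative, not the paper's exact model.

```python
import numpy as np
from scipy.stats import norm

def second_period_order(prior_mu, prior_tau2, obs, sigma2, inv_left, cu, co):
    """Bayesian newsvendor sketch for the second response period:
    - conjugate update of a N(prior_mu, prior_tau2) prior on mean demand
      given first-period observations with known variance sigma2;
    - order up to the critical fractile cu/(cu+co) of the posterior
      predictive demand, net of leftover prepositioned stock inv_left."""
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    post_tau2 = 1.0 / (1.0 / prior_tau2 + n / sigma2)
    post_mu = post_tau2 * (prior_mu / prior_tau2 + obs.sum() / sigma2)
    pred_sd = np.sqrt(post_tau2 + sigma2)      # posterior predictive std. dev.
    q_star = norm.ppf(cu / (cu + co), loc=post_mu, scale=pred_sd)
    return max(q_star - inv_left, 0.0)         # never order a negative quantity
```

With symmetric underage and overage costs the critical fractile is 0.5, so the order-up-to level is simply the posterior mean of demand; asymmetric costs shift it toward the costlier error.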

13.
In this paper, we present a new methodology for forecasting the results of mixed martial arts contests. Our approach utilises data scraped from freely available websites to estimate fighters’ skills in various key aspects of the sport. With these skill estimates, we simulate the contest as an actual fight using Markov chains, rather than predicting a binary outcome. We compare the model’s accuracy to that of the bookmakers using their historical odds and show that the model can be used as the basis of a successful betting strategy.
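Simulating a contest as a Markov chain rather than predicting a binary outcome can be sketched with a deliberately tiny chain: at each exchange either fighter can reach a fight-ending state, and an unbroken sequence of exchanges goes to a decision. The two finish probabilities and the 50/50 decision rule are illustrative placeholders for the paper's scraped skill estimates.

```python
import numpy as np

def simulate_fight(p_a_finish, p_b_finish, n_exchanges=300, n_sims=2000, seed=0):
    """Monte Carlo win probability for fighter A in a minimal Markov-chain
    fight model: each exchange ends the fight in A's favor with
    probability p_a_finish, in B's favor with probability p_b_finish,
    or continues; surviving all exchanges means a (coin-flip) decision."""
    rng = np.random.default_rng(seed)
    wins_a = 0
    for _ in range(n_sims):
        for _ in range(n_exchanges):
            u = rng.random()
            if u < p_a_finish:                 # A finishes the fight
                wins_a += 1
                break
            if u < p_a_finish + p_b_finish:    # B finishes the fight
                break
        else:                                   # no finish: 50/50 decision here
            wins_a += rng.random() < 0.5
    return wins_a / n_sims
```

Because the simulation yields a full distribution over outcomes (finish vs. decision, and when), it supports richer markets than a single win-probability model would.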

14.
This article introduces the winning method at the M5 Accuracy competition. The method simply averages the results of multiple base forecasting models constructed via partial pooling of multi-level data. All base models adopt either direct or recursive multi-step forecasting and are trained with the machine learning technique LightGBM on three different levels of data pools. At the competition, this simple averaging of multiple direct and recursive forecasting models, called DRFAM, exploited the complementary effects between direct and recursive multi-step forecasting of the multi-level product sales, improving both accuracy and robustness.
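The direct/recursive distinction, and the averaging of the two, can be shown with simple AR(1) regressions standing in for the competition's LightGBM models: the recursive forecaster iterates a one-step model h times, while the direct forecaster regresses y[t+h] on y[t] directly.

```python
import numpy as np

def fit_ar1(y):
    """OLS AR(1) coefficients (intercept, slope)."""
    Z = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    b, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
    return b

def recursive_forecast(y, h):
    """One one-step-ahead model, iterated h times."""
    c, phi = fit_ar1(y)
    f = y[-1]
    for _ in range(h):
        f = c + phi * f
    return f

def direct_forecast(y, h):
    """A separate model regressing y[t+h] directly on y[t]."""
    Z = np.column_stack([np.ones(len(y) - h), y[:-h]])
    b, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
    return b[0] + b[1] * y[-1]

def drfam_style_forecast(y, h):
    """Simple average of the direct and recursive forecasts: the
    combination idea behind DRFAM, here with toy linear models
    instead of LightGBM trained on pooled multi-level data."""
    return 0.5 * (recursive_forecast(y, h) + direct_forecast(y, h))
```

Recursive forecasts compound one-step estimation error but use all transitions; direct forecasts avoid error compounding but fit a coarser target. Averaging hedges between the two failure modes, which is the complementarity the abstract refers to.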

15.
In this work, we propose a novel framework for density forecast combination that constructs time-varying weights based on time-varying features. Our framework estimates the combination weights via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features computed from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of the different features. As a result, our approach is more interpretable than black-box forecast combination schemes. We apply our framework to stock market data and to the M3 competition data. Within this framework, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy of both point and density forecasts.
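A much-simplified version of weighting forecasters by their historical log predictive scores is the discounted log-score pool below: each model's weight is proportional to the exponential of its discounted cumulative log score. This is a minimal stand-in for the paper's feature-based, variable-selection-driven weighting, and the discount factor is an assumption of the sketch.

```python
import numpy as np

def log_score_pool_weights(log_scores, discount=0.95):
    """Combination weights from a (T, K) array of historical log
    predictive scores for K models: weight_k is proportional to
    exp(discounted cumulative log score of model k), so recent
    predictive performance counts more than distant performance."""
    T, K = log_scores.shape
    disc = discount ** np.arange(T - 1, -1, -1)   # recent periods weighted more
    cum = disc @ log_scores                        # (K,) discounted score sums
    w = np.exp(cum - cum.max())                    # subtract max for stability
    return w / w.sum()
```

Feeding these weights into a linear pool of the K predictive densities yields a time-varying combined density forecast whose weights can be recomputed each period.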

16.
Bayesian model selection using encompassing priors
This paper deals with Bayesian selection of models that can be specified using inequality constraints among the model parameters. The concept of encompassing priors is introduced, that is, a prior distribution for an unconstrained model from which the prior distributions of the constrained models can be derived. It is shown that the Bayes factor for the encompassing and a constrained model has a very nice interpretation: it is the ratio of the proportion of the prior and posterior distribution of the encompassing model in agreement with the constrained model. It is also shown that, for a specific class of models, selection based on encompassing priors will render a virtually objective selection procedure. The paper concludes with three illustrative examples: an analysis of variance with ordered means; a contingency table analysis with ordered odds-ratios; and a multilevel model with ordered slopes.
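The Bayes factor interpretation stated in the abstract is directly computable by Monte Carlo: draw from the encompassing model's prior and posterior, and take the ratio of the proportions of draws satisfying the inequality constraint. The stylized two-mean posterior in the usage example below is an assumption for illustration, not an analysis from the paper.

```python
import numpy as np

def encompassing_bayes_factor(post_draws, prior_draws, constraint):
    """Bayes factor of a constrained model vs. the encompassing model:
    the proportion of posterior draws (from the encompassing model)
    satisfying the inequality constraint, divided by the corresponding
    prior proportion."""
    post_ok = np.mean([constraint(d) for d in post_draws])
    prior_ok = np.mean([constraint(d) for d in prior_draws])
    return post_ok / prior_ok

rng = np.random.default_rng(1)
# encompassing prior: two independent N(0, 10^2) group means
prior = rng.normal(0, 10, size=(20000, 2))
# stylized encompassing-model posterior: data strongly favor mu1 < mu2
post = np.column_stack([rng.normal(1, 0.1, 20000), rng.normal(2, 0.1, 20000)])
bf = encompassing_bayes_factor(post, prior, lambda m: m[0] < m[1])
```

Here the prior satisfies mu1 < mu2 with probability 1/2 and the posterior with probability near 1, so the Bayes factor approaches 2 — the maximum attainable for a single inequality constraint under this symmetric prior.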

17.
Multi-population mortality forecasting has become an increasingly important area in actuarial science and demography, as a means to avoid long-run divergence in mortality projections. This paper aims to establish a unified state-space Bayesian framework to model, estimate, and forecast mortality rates in a multi-population context. In this regard, we reformulate the augmented common factor model to account for structural/trend changes in the mortality indexes. We conduct a Bayesian analysis to make inferences and generate forecasts so that process and parameter uncertainties can be considered simultaneously and appropriately. We illustrate the efficiency of our methodology through two case studies. Both point and probabilistic forecast evaluations are considered in the empirical analysis. The derived results support the fact that the incorporation of stochastic drifts mitigates the impact of the structural changes in the time indexes on mortality projections.

18.
This article develops a new portfolio selection method using Bayesian theory. The proposed method accounts for the uncertainties in estimation parameters and the model specification itself, both of which are ignored by the standard mean-variance method. The critical issue in constructing an appropriate predictive distribution for asset returns is evaluating the goodness of individual factors and models. This problem is investigated from a statistical point of view; we propose using the Bayesian predictive information criterion. Two Bayesian methods and the standard mean-variance method are compared through Monte Carlo simulations and in a real financial data set. The Bayesian methods perform very well compared to the standard mean-variance method.

19.
This paper analyses the real-time forecasting performance of the New Keynesian DSGE model of Galí, Smets and Wouters (2012), estimated on euro area data. It investigates the extent to which the inclusion of forecasts of inflation, GDP growth and unemployment by professional forecasters improve the forecasting performance. We consider two approaches for conditioning on such information. Under the “noise” approach, the mean professional forecasts are assumed to be noisy indicators of the rational expectations forecasts implied by the DSGE model. Under the “news” approach, it is assumed that the forecasts reveal the presence of expected future structural shocks in line with those estimated in the past. The forecasts of the DSGE model are compared with those from a Bayesian VAR model, an AR(1) model, a sample mean and a random walk.

20.
Bayesian model averaging (BMA) provides a coherent and systematic mechanism for accounting for model uncertainty. It can be regarded as a direct application of Bayesian inference to the problem of model selection, combined estimation, and prediction. BMA produces a straightforward model choice criterion and less risky predictions. However, the application of BMA is not always straightforward, leading to diverse assumptions and situational choices regarding its different aspects. Despite the widespread application of BMA in the literature, there have been few accounts of these differences and trends beyond a few landmark reviews in the late 1990s and early 2000s, which therefore do not reflect advancements made in the last decades. In this work, we present an account of these developments through a careful content analysis of 820 articles on BMA published between 1996 and 2016. We also develop a conceptual classification scheme to better describe this vast literature, understand its trends and future directions, and provide guidance for researchers interested in both the application and the development of the methodology. The results of the classification scheme and content review are then used to discuss the present and future of the BMA literature.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号