Similar articles
20 similar articles found (search time: 0 ms)
1.
Sometimes forecasts of the original variable are of interest even though the variable appears in logarithms (logs) in a system of time series. In that case, converting the forecast for the log of the variable into a naïve forecast of the original variable by simply applying the exponential transformation is not theoretically optimal. A simple expression for the optimal forecast under normality assumptions is derived. However, despite its theoretical advantages, the optimal forecast is shown to be inferior to the naïve forecast once specification and estimation uncertainty are taken into account. Hence, in practice, using the exponential of the log forecast is preferable to using the optimal forecast.
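The correction at issue can be sketched in a few lines. Assuming the forecast of log(y) is Normal(mu, sigma2), exponentiating gives the naïve forecast exp(mu), while the mean of the implied log-normal distribution, exp(mu + sigma2/2), is the theoretically optimal point forecast under squared-error loss (the specific numbers below are illustrative, not from the paper):

```python
import math

def naive_forecast(mu):
    # Naive back-transformation: just exponentiate the log-forecast.
    return math.exp(mu)

def optimal_forecast(mu, sigma2):
    # Log-normal mean: the optimal point forecast under normality of logs.
    return math.exp(mu + sigma2 / 2.0)

# With mu = 1 and sigma2 = 0.5, the optimal forecast exceeds the naive
# one by the factor exp(sigma2 / 2) = exp(0.25).
mu, sigma2 = 1.0, 0.5
```

The gap grows with the forecast-error variance of the log series, which is why the correction matters most for volatile series; the paper's point is that estimating sigma2 poorly can wipe out this theoretical gain.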

2.
The use of stochastic models and performance measures for the analysis of real-life queueing scenarios rests on the fundamental premise that parameter values are known. This is rarely the case: more often than not, parameters are unknown and must be estimated. This paper presents techniques for doing so from a Bayesian perspective. The queue we deal with is the M/M/1 queueing model. Several closed-form expressions for posterior inference and prediction are presented, which can be readily implemented using standard spreadsheet tools. Previous work in this direction suffered from the non-existence of posterior moments; a way out is suggested. Interval estimates and tests of hypotheses on performance measures are also presented.
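The kind of closed-form update involved can be illustrated with the standard conjugate analysis (a sketch under textbook assumptions, not the paper's exact results): with a Gamma(a, b) prior on the arrival rate of an M/M/1 queue and n observed exponential interarrival times with total duration t, the posterior is Gamma(a + n, b + t), and the same update applies to the service rate given observed service times:

```python
def gamma_posterior(a, b, n, t):
    """Conjugate update for an exponential rate: returns posterior (shape, rate)."""
    return a + n, b + t

# Posterior mean of the arrival rate after 50 arrivals over 10 time units,
# starting from a vague Gamma(1, 1) prior: (1 + 50) / (1 + 10).
a_post, b_post = gamma_posterior(1.0, 1.0, n=50, t=10.0)
posterior_mean = a_post / b_post
```

Note that moments of derived performance measures, such as the mean sojourn time 1/(mu - lambda), need not exist under such posteriors because the event mu <= lambda has positive probability; that non-existence problem is what the paper proposes a way around.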

3.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.

4.
Parameter estimation under model uncertainty is a difficult and fundamental issue in econometrics. This paper compares the performance of various model averaging techniques. In particular, it contrasts Bayesian model averaging (BMA) — currently one of the standard methods used in growth empirics — with a new method called weighted-average least squares (WALS). The new method has two major advantages over BMA: its computational burden is trivial and it is based on a transparent definition of prior ignorance. The theory is applied to and sheds new light on growth empirics where a high degree of model uncertainty is typically present.

5.
This paper develops an efficient approach to modelling and forecasting time series data with an unknown number of change-points. Using a conjugate prior and conditioning on time-invariant parameters, the predictive density and the posterior distribution of the change-points have closed forms. Furthermore, the conjugate prior is modelled hierarchically in order to pool information across regimes. This framework allows breaks in the variance, the regression coefficients, or both. The regime duration can be modelled with a Poisson distribution. A new, efficient Markov chain Monte Carlo sampler draws the parameters from the posterior distribution as one block. An application to a Canadian inflation series shows the gains in forecasting precision that our model provides.

6.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method uses time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to widely used mean-based methods. Motivated by the working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model with the same structure as an autoregressive model but with the Gaussian error replaced by a Laplace error, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate the model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including uncertainty in the autoregressive order, alongside a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR delivers favorable, and often superior, predictive performance compared with selected mean-based alternatives under various loss functions encompassing both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods built on autoregressive models.
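The link between the Laplace error and the median can be seen without any MCMC machinery: maximizing a Laplace likelihood for an AR(1) is equivalent to minimizing the sum of absolute one-step residuals, i.e. a least-absolute-deviations (LAD) fit of the conditional median. The grid search and simulated series below are illustrative simplifications, not the paper's estimator:

```python
import numpy as np

# Simulate an AR(1) with Laplace errors, true coefficient 0.6.
rng = np.random.default_rng(1)
n = 400
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.laplace(scale=0.5)

def lad_loss(phi):
    # Negative Laplace log-likelihood (up to constants) = sum of |residuals|.
    return np.abs(y[1:] - phi * y[:-1]).sum()

# Crude grid search over the stationary region; the LAD minimizer
# estimates the median-regression AR coefficient.
grid = np.linspace(-0.99, 0.99, 199)
phi_hat = grid[np.argmin([lad_loss(p) for p in grid])]
```

With enough data, phi_hat lands near the true value 0.6; the robustness the abstract mentions comes from the absolute-value loss down-weighting outliers relative to squared error.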

7.
The relative performance of forecasting models changes over time. This empirical observation raises two questions. First, is the relative performance itself predictable? Second, if so, can it be exploited to improve forecast accuracy? We address these questions by evaluating the predictive abilities of a wide range of economic variables for two key US macroeconomic aggregates, namely industrial production and inflation, relative to simple benchmarks. We find that business cycle indicators, financial conditions, uncertainty, and measures of past relative performance are generally useful for explaining the models' relative forecasting performance. In addition, we conduct a pseudo-real-time forecasting exercise in which we use information about conditional performance for model selection and model averaging. The newly proposed strategies deliver sizable improvements over competitive benchmark models and commonly used combination schemes. The gains are larger when model selection and averaging are based on both financial conditions and past performance measured at the forecast origin date.

8.
We estimate a Bayesian VAR (BVAR) for the UK economy and assess its performance in forecasting GDP growth and CPI inflation in real time, relative to forecasts from COMPASS, the Bank of England's DSGE model, and other benchmarks. We find that the BVAR outperformed COMPASS when forecasting both GDP and its expenditure components. In contrast, their performance when forecasting CPI was similar. We also find that the BVAR density forecasts outperformed those of COMPASS, despite under-predicting inflation at most forecast horizons. Both models over-predicted GDP growth at all forecast horizons, but the issue was less pronounced in the BVAR. The BVAR's point and density forecast performance is also comparable to that of a Bank of England in-house statistical suite for both GDP and CPI inflation, as well as to the official Inflation Report projections. Our results are broadly consistent with the findings of similar studies for other advanced economies.

9.
Empirical prediction intervals are constructed from the distribution of previous out-of-sample forecast errors. Given historical data, a sample of such forecast errors is generated by successively applying a chosen point forecasting model to a sequence of fixed windows of past observations and recording the deviations of the model's predictions from the actual out-of-sample observations. Suitable quantiles of the distribution of these forecast errors are then combined with the point forecast of the selected model to construct an empirical prediction interval. This paper re-examines the properties of the empirical prediction interval. Specifically, we provide conditions for its asymptotic validity, evaluate its small-sample performance, and discuss its limitations.
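The construction described above can be sketched end to end. The toy series and the naive last-value point forecaster are illustrative assumptions; any point forecasting model could take its place:

```python
import numpy as np

def empirical_prediction_interval(series, start, alpha=0.2):
    # Collect out-of-sample one-step forecast errors: at each past date t,
    # the naive "last value" forecast of series[t] is series[t - 1].
    errors = [series[t] - series[t - 1] for t in range(start, len(series))]
    # Empirical quantiles of the error distribution...
    lo, hi = np.quantile(errors, [alpha / 2, 1 - alpha / 2])
    # ...are attached to the point forecast for the next, unobserved period.
    point = series[-1]
    return point + lo, point + hi

series = np.array([1.0, 2.0, 1.0, 3.0, 2.0, 4.0, 3.0, 5.0, 4.0, 6.0])
interval = empirical_prediction_interval(series, start=2)  # 80% interval
```

For this toy series the recorded errors alternate between -1 and +2, so the 80% interval around the naive forecast of 6.0 works out to (5.0, 8.0). The asymmetry is the method's selling point: the interval inherits whatever skewness the error distribution has, with no normality assumption.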

10.
We compare real-time density forecasts for the euro area using three DSGE models. The benchmark is the Smets and Wouters model, and its forecasts of real GDP growth and inflation are compared with those from two extensions. The first adds financial frictions and expands the observables to include a measure of the external finance premium. The second allows for the extensive labor-market margin and adds the unemployment rate to the observables. The main question we address is whether these extensions improve the density forecasts of real GDP and inflation, and their joint forecasts, up to an eight-quarter horizon. We find that adding financial frictions leads to a deterioration in the forecasts, with the exception of longer-term inflation forecasts and the period around the Great Recession. The labor-market extension weakly improves the medium- to longer-term real GDP growth forecasts and the shorter- to medium-term inflation forecasts relative to the benchmark model.

11.
We evaluate conditional predictive densities for US output growth and inflation using a number of commonly used forecasting models that rely on large numbers of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual out-of-sample realizations. Our focus on predictive densities acknowledges the possibility that, although some predictors may improve or worsen point forecasts, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be approximated correctly by a normal density: the simple equal-weight average when predicting output growth, and the Bayesian model average when predicting inflation.

12.
Predicting the evolution of mortality rates plays a central role for life insurance and pension funds. Various stochastic frameworks have been developed to model mortality patterns, taking into account the main stylized facts driving these patterns. However, relying on the predictions of one specific model can be too restrictive and can lead to well-documented drawbacks, including model misspecification, parameter uncertainty, and overfitting. To address these issues, we first consider mortality modeling in a Bayesian negative-binomial framework, which accounts for overdispersion and for uncertainty about the parameter estimates in a natural and coherent way. Model averaging techniques are then considered as a response to model misspecification. In this paper, we propose two methods based on leave-future-out validation and compare them to standard Bayesian model averaging (BMA) based on the marginal likelihood. An intensive numerical study is carried out over a large range of simulation setups to compare the performance of the proposed methodologies. An illustration is then provided on real-life mortality datasets, along with a sensitivity analysis to a Covid-type scenario. Overall, we find that both methods based on an out-of-sample criterion outperform the standard BMA approach in terms of prediction performance and robustness.
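Weighting models by an out-of-sample criterion, in the spirit of the leave-future-out methods above, can be sketched with what is sometimes called pseudo-BMA weighting: each model receives weight proportional to exp(elpd_k), its estimated log predictive density on held-out future data. The three scores below are made-up numbers for illustration:

```python
import numpy as np

def pseudo_bma_weights(elpd):
    """Softmax of estimated expected log predictive densities."""
    elpd = np.asarray(elpd, dtype=float)
    w = np.exp(elpd - elpd.max())  # subtract the max for numerical stability
    return w / w.sum()

# Hypothetical leave-future-out log scores for three mortality models.
weights = pseudo_bma_weights([-120.3, -118.9, -125.0])
```

The model with the best held-out score dominates the average, and a model that overfits the training window is penalized automatically, which is the robustness argument made in the abstract. Standard BMA replaces the held-out scores with log marginal likelihoods.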

13.
The traditional Bühlmann-Straub credibility model cannot effectively handle missing data. Using Bayesian statistical methods, this paper constructs a new class of Bayesian credibility models and introduces a Markov chain Monte Carlo (MCMC) method based on Gibbs sampling for numerical computation. A hierarchical normal posterior model for claims is established for the empirical analysis, demonstrating the validity of the model. The results show that the MCMC-based Bayesian credibility model can dynamically simulate the posterior distributions of the model parameters and improve the precision of the model estimates, which is of practical significance for improving the experience-rating methods of insurance companies.


15.
We discuss the problem of constructing a suitable regression model from a nonparametric Bayesian viewpoint. For this purpose, we consider the case in which the error terms have symmetric and unimodal densities. By the theorem of Khintchine and Shepp, the density of the response variable can then be written as a scale mixture of uniform densities. The mixing distribution is assumed to have a Dirichlet process prior. We further consider appropriate prior distributions for the other parameters entering the predictive distribution. Among the possible submodels, we select the one with the highest posterior probability. An example is given to illustrate the approach.
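The Khintchine representation invoked above can be checked by simulation: any density unimodal about zero is the density of U * Z with U ~ Uniform(0, 1) independent of some Z. For the standard normal, Z can be taken as a chi(3) variable with a random sign, so U * Z should reproduce N(0, 1) moments (the choice of the normal target here is an illustrative example, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

u = rng.uniform(0.0, 1.0, n)                       # uniform scale factor
z = np.sqrt(rng.chisquare(3, n)) * rng.choice([-1.0, 1.0], n)  # signed chi(3)
x = u * z                                          # should look N(0, 1)

# Theory check: Var(X) = E[U^2] * E[Z^2] = (1/3) * 3 = 1,
# and E[X^4] = E[U^4] * E[Z^4] = (1/5) * 15 = 3, matching the normal.
```

Conditioning on |Z| = s makes X uniform on (-s, s), which is exactly the "scale mixture of uniforms" form that the Dirichlet process prior is placed on.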

16.
A. S. Young, Metrika, 1987, 34(1): 325-339
Summary: We treat the model selection problem in regression as a decision problem in which the decisions are the alternative predictive distributions based on the different sub-models, and the parameter space is the set of possible future values of the regressand. The loss function balances the conflicting needs for a predictive distribution with mean close to the true value of y but without too great a variation. The treatment is Bayesian, and the criterion derived is a Bayesian generalization of Mallows' (1973) C_p, the Bivar criterion (Young 1982), and AIC (Akaike 1974). An application using a graphical sensitivity analysis is presented.
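The frequentist criterion being generalized, Mallows' C_p, is easy to compute directly: C_p = SSE_p / s^2 - n + 2p, where SSE_p is the residual sum of squares of a submodel with p coefficients and s^2 is the error-variance estimate from the full model. The data-generating process below is a toy assumption (x2 is an irrelevant regressor):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)  # x2 does not enter the truth

def sse(X):
    # Residual sum of squares of the OLS fit of y on X.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

X_full = np.column_stack([np.ones(n), x1, x2])
s2 = sse(X_full) / (n - 3)  # full-model error-variance estimate

def mallows_cp(sse_p, p):
    return sse_p / s2 - n + 2 * p

cp_full = mallows_cp(sse(X_full), p=3)  # equals 3 by construction
cp_sub = mallows_cp(sse(np.column_stack([np.ones(n), x1])), p=2)
```

A well-specified submodel has C_p close to p, so here the two-coefficient submodel scores near 2, and dropping the irrelevant x2 is (correctly) not penalized; the paper's Bayesian criterion plays the same role through a predictive loss function.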

17.
Abstract. In this paper, we review and unite the literatures on returns to schooling and Bayesian model averaging. We observe that most studies seeking to estimate the returns to education have done so using particular (and often different across researchers) model specifications. Given this, we review Bayesian methods which formally account for uncertainty in the specification of the model itself, and apply these techniques to estimate the economic return to a college education. The approach described in this paper enables us to determine which model specifications are most favored by the data, and to use the predictions obtained from all of the competing regression models to estimate the returns to schooling. The reported precision of such estimates also accounts for the uncertainty inherent in the model specification. Using U.S. data from the National Longitudinal Survey of Youth (NLSY), we also revisit several 'stylized facts' in the returns-to-education literature and examine whether they continue to hold after formally accounting for model uncertainty.


19.
Model averaging has become a popular method of estimation, following increasing evidence that model selection and estimation should be treated as one joint procedure. Weighted-average least squares (WALS) is a recent model-averaging approach which takes an intermediate position between frequentist and Bayesian methods, allows a credible treatment of ignorance, and is extremely fast to compute. We review the theory of WALS and discuss extensions and applications.

20.
Decision-makers often collect and aggregate experts' point predictions about continuous outcomes, such as stock returns or product sales. In this article, we model experts as Bayesian agents and show that means, including the (weighted) arithmetic mean, trimmed means, the median, the geometric mean, and essentially all other measures of central tendency, do not use all of the information in the predictions. Intuitively, they treat idiosyncratic differences as arising from error rather than private information, and hence do not update the prior with all available information. Updating means with the unused information improves their expected accuracy, but doing so depends on the experts' prior and information structure, which cannot be estimated from a single prediction per expert. In many applications, however, experts consider multiple stocks, products, or other related items at the same time. For such contexts, we introduce ANOVA updating, an unsupervised technique that updates means based on experts' predictions of multiple outcomes from a common population. The technique is illustrated on several real-world datasets.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号