Similar Articles
 20 similar articles found in 974 ms
1.
We introduce various methods that combine forecasts using constrained optimization with a penalty. A non-negativity constraint is imposed on the weights, and several penalties are considered, taking the form of a divergence from a reference combination scheme. In contrast with most existing approaches, our framework performs forecast selection and combination in one step, allowing for potentially sparse combining schemes. Moreover, by exploiting the analogy between forecast combination and portfolio optimization, we provide the analytical expression of the optimal penalty strength when penalizing with the L2-divergence from the equally weighted scheme. An extensive simulation study and two empirical applications allow us to investigate the impact of the divergence function, the reference scheme, and the non-negativity constraint on the predictive performance. Our results suggest that the proposed models outperform those considered in previous studies.
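The mechanics can be sketched for the two-model case. The error covariance matrix, penalty strength, and grid-search solver below are illustrative stand-ins, not the paper's method; in particular, the paper derives the optimal penalty strength analytically rather than by search.

```python
# Sketch of combining two forecasts under a non-negativity constraint with an
# L2 penalty toward the equally weighted scheme. All numbers are illustrative.

def combination_objective(w1, sigma, lam):
    """Penalized variance of the combined forecast error; w = (w1, 1 - w1)."""
    w = (w1, 1.0 - w1)
    variance = (w[0] ** 2 * sigma[0][0] + w[1] ** 2 * sigma[1][1]
                + 2 * w[0] * w[1] * sigma[0][1])
    penalty = lam * ((w[0] - 0.5) ** 2 + (w[1] - 0.5) ** 2)  # distance to (1/2, 1/2)
    return variance + penalty

def optimal_weight(sigma, lam, grid=1001):
    """Weight on model 1; searching w1 in [0, 1] enforces non-negativity."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return min(candidates, key=lambda w1: combination_objective(w1, sigma, lam))

sigma = [[1.0, 0.2], [0.2, 2.0]]            # illustrative forecast-error covariance
w_plain = optimal_weight(sigma, lam=0.0)    # unpenalized variance-minimizing weight
w_shrunk = optimal_weight(sigma, lam=10.0)  # heavy penalty pulls the weight toward 0.5
```

Raising `lam` moves the solution from the variance-minimizing weights toward the equal-weight benchmark, which is exactly the shrinkage behavior the penalty is designed to deliver.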

2.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. We therefore augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.
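The selection step reduces to an argmin over per-model error predictions. Everything below is invented for illustration only (the feature names, surrogate coefficients, and candidate models are hypothetical); the paper instead fits a Bayesian multivariate surface regression on a large GRATIS-augmented reference set.

```python
# Illustrative-only sketch of feature-based model selection.

# Hypothetical fitted surrogates: predicted forecast error as a linear
# function of two time series features (trend and seasonal strength).
surrogates = {
    "ets":   lambda f: 0.9 - 0.4 * f["trend"] - 0.3 * f["seasonal"],
    "arima": lambda f: 0.8 - 0.5 * f["trend"] + 0.1 * f["seasonal"],
    "naive": lambda f: 0.6 + 0.2 * f["trend"] + 0.4 * f["seasonal"],
}

def select_model(features):
    """Pick the candidate with the smallest predicted forecast error."""
    return min(surrogates, key=lambda m: surrogates[m](features))

strongly_patterned = select_model({"trend": 0.9, "seasonal": 0.8})
near_random = select_model({"trend": 0.1, "seasonal": 0.0})
```

A series with strong trend and seasonality is routed to a structured model, while a near-random series falls back to a naive benchmark, mirroring the kind of insight the paper extracts from its meta-learners.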

3.
We propose a Bayesian shrinkage approach for vector autoregressions (VARs) that uses short‐term survey forecasts as an additional source of information about model parameters. In particular, we augment the vector of dependent variables by their survey nowcasts, and claim that each variable modelled in the VAR and its nowcast are likely to depend in a similar way on the lagged dependent variables. In an application to macroeconomic data, we find that the forecasts obtained from a VAR fitted by our new shrinkage approach typically yield smaller mean squared forecast errors than the forecasts obtained from a range of benchmark methods. Copyright © 2015 John Wiley & Sons, Ltd.

4.
This paper proposes a framework for the analysis of the theoretical properties of forecast combination, with the forecast performance being measured in terms of mean squared forecast errors (MSFE). Such a framework is useful for deriving all existing results with ease. In addition, it also provides insights into two forecast combination puzzles. Specifically, it investigates why a simple average of forecasts often outperforms forecasts from single models in terms of MSFEs, and why a more complicated weighting scheme does not always perform better than a simple average. In addition, this paper presents two new findings that are particularly relevant in practice. First, the MSFE of a forecast combination decreases as the number of models increases. Second, the conventional approach to the selection of optimal models, based on a simple comparison of MSFEs without further statistical testing, leads to a biased selection.
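The first puzzle is easy to reproduce numerically: two unbiased forecasts with correlated errors, where the equal-weight average attains a lower MSFE than either forecast alone. The variance values below are illustrative choices, not quantities from the paper.

```python
import random

# Minimal simulation of the forecast combination puzzle.
random.seed(1)
n = 200_000
mse1 = mse2 = mse_avg = 0.0
for _ in range(n):
    common = random.gauss(0, 0.5)        # shared error component
    e1 = common + random.gauss(0, 1.0)   # model 1 error: variance 1.25
    e2 = common + random.gauss(0, 1.2)   # model 2 error: variance 1.69
    mse1 += e1 * e1 / n
    mse2 += e2 * e2 / n
    mse_avg += ((e1 + e2) / 2) ** 2 / n
# Theory: Var((e1 + e2) / 2) = (1.25 + 1.69 + 2 * 0.25) / 4 = 0.86 < min(1.25, 1.69)
```

The simulated MSFE of the simple average sits near the theoretical 0.86, below both individual MSFEs, even though neither forecast is individually dominated by design.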

5.
US yield curve dynamics are subject to time-variation, but there is ambiguity about its precise form. This paper develops a vector autoregressive (VAR) model with time-varying parameters and stochastic volatility, which treats the nature of parameter dynamics as unknown. Coefficients can evolve according to a random walk, a Markov switching process, observed predictors, or depend on a mixture of these. To decide which form is supported by the data and to carry out model selection, we adopt Bayesian shrinkage priors. Our framework is applied to model the US yield curve. We show that the model forecasts well, and focus on selected in-sample features to analyze determinants of structural breaks in US yield curve dynamics.

6.
How to measure and model volatility is an important issue in finance. Recent research uses high‐frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model‐averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log‐volatility. Copyright © 2009 John Wiley & Sons, Ltd.

7.
The safety stock calculation requires a measure of the forecast error uncertainty. Such errors are usually assumed to be Gaussian iid (independently and identically distributed). However, deviations from iid lead to a deterioration in the performance of the supply chain. Recent research has shown that, contrary to theoretical approaches, empirical techniques that do not rely on the aforementioned assumptions can enhance the calculation of safety stocks. In particular, GARCH models cope with time-varying heteroscedastic forecast errors, and kernel density estimation does not need to rely on a specific distribution. However, if the forecast errors are time-varying heteroscedastic and do not follow a specific distribution, the previous approaches are inadequate. We overcome this by proposing an optimal combination of the empirical methods that minimizes the asymmetric piecewise linear loss function, also known as the tick loss. The results show that combining quantile forecasts yields safety stocks with a lower cost. The methodology is illustrated with simulations and real data experiments for different lead times.
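Choosing the combination weight by minimizing the tick loss can be sketched as follows. The two constant "forecasts" and the demand distribution are invented stand-ins (the paper combines GARCH- and kernel-based quantile forecasts), so treat this as a schematic, not the paper's procedure.

```python
import random

# Sketch: convex combination of two quantile forecasts chosen by tick loss.

def tick_loss(y, q, tau):
    """Asymmetric piecewise-linear (pinball) loss at quantile level tau."""
    return tau * (y - q) if y >= q else (1 - tau) * (q - y)

def best_weight(y_hist, q_a, q_b, tau, grid=101):
    """Grid-search the weight on forecast A minimizing average tick loss."""
    def avg_loss(w):
        return sum(tick_loss(y, w * a + (1 - w) * b, tau)
                   for y, a, b in zip(y_hist, q_a, q_b)) / len(y_hist)
    return min((i / (grid - 1) for i in range(grid)), key=avg_loss)

random.seed(7)
demand = [random.gauss(100, 10) for _ in range(500)]   # historical demand
q_low = [105.0] * 500    # method A: underestimates the 95% demand quantile
q_high = [130.0] * 500   # method B: overestimates it
w = best_weight(demand, q_low, q_high, tau=0.95)
safety_quantile = w * 105.0 + (1 - w) * 130.0   # combined 95% quantile estimate
```

Because the tick loss is minimized at the target quantile, the combination lands near the empirical 95% demand quantile (about 116 here), between the two biased inputs: the behavior that translates into lower-cost safety stocks.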

8.
We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of simulated data, US macroeconomic time series and surveys of stock market prices. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete and it is able to detect parameter instability when the model set includes the true model that has generated subsamples of data. Also, substantial uncertainty appears in the weights when predictors are similar; residual uncertainty reduces when the model set is complete; and learning reduces this uncertainty. For the macro series we find that incompleteness of the models is relatively large in the 1970s, the early 1980s and during the recent financial crisis, and lower during the Great Moderation; the predicted probabilities of recession accurately compare with the NBER business cycle dating; model weights have substantial uncertainty attached.
With respect to returns of the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model in the early 1990s and switches to giving more weight to the professional forecasts over time. Information on the complete predictive distribution and not just on some moments turns out to be very important, above all during turbulent times such as the recent financial crisis. More generally, the proposed distributional state space representation offers great flexibility in combining densities.

9.
Decision makers often observe point forecasts of the same variable computed, for instance, by commercial banks, the IMF, and the World Bank, but the econometric models used by such institutions are frequently unknown. This paper shows how to use the information available on point forecasts to compute optimal density forecasts. Our idea builds upon the combination of point forecasts under general loss functions and unknown forecast error distributions. We use real‐time data to forecast the density of US inflation. The results indicate that the proposed method materially improves the real‐time accuracy of density forecasts vis‐à‐vis those from the (unknown) individual econometric models. Copyright © 2013 John Wiley & Sons, Ltd.

10.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.

11.
We introduce a new dataset of real gross domestic product (GDP) growth and core personal consumption expenditures (PCE) inflation forecasts produced by the staff of the Board of Governors of the Federal Reserve System. In contrast to the eight Greenbook forecasts a year the staff produce for Federal Open Market Committee (FOMC) meetings, our dataset has roughly weekly forecasts. We use these data to study whether the staff forecasts efficiently. Prespecified regressions of forecast errors on forecast revisions show the staff's GDP forecasts exhibit time-varying inefficiency between FOMC meetings, and also show some evidence for inefficient inflation forecasts.
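The core diagnostic is a regression of forecast errors on forecast revisions: under efficiency the slope is zero. The sketch below simulates data with a built-in inefficiency purely for illustration; the paper runs prespecified regressions on the actual staff forecasts.

```python
import random

# Stripped-down efficiency check: OLS slope of errors on revisions.

def ols_slope(x, y):
    """Slope of the least-squares regression of y on x."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(3)
revisions = [random.gauss(0, 1) for _ in range(300)]
errors = [0.5 * r + random.gauss(0, 1) for r in revisions]  # predictable errors

slope = ols_slope(revisions, errors)  # well away from zero: inefficiency detected
```

A slope materially different from zero means part of the forecast error was predictable from the revision itself, i.e., the forecaster did not fully incorporate the information behind the revision.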

12.
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases factor methods have been traditionally used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic dataset containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Typically, we find that the simple Minnesota prior forecasts well in medium and large VARs, which makes this prior attractive relative to computationally more demanding alternatives. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of relying solely on those based on point forecasts. Copyright © 2011 John Wiley & Sons, Ltd.
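The Minnesota prior's core idea, shrinking each equation's own first-lag coefficient toward 1 (a random walk) with a tightness hyperparameter, can be sketched in the one-equation case. The conjugate normal algebra below assumes a known error variance, and all hyperparameter values are illustrative, not the paper's settings.

```python
import random

# One-equation sketch of Minnesota-style shrinkage for an AR(1) coefficient.

def posterior_ar1(y, prior_mean=1.0, lam=0.2, sigma2=1.0):
    """Posterior mean of the AR(1) coefficient under a N(prior_mean, lam^2) prior."""
    x, z = y[:-1], y[1:]              # lagged and current values
    sxx = sum(v * v for v in x)
    sxz = sum(a * b for a, b in zip(x, z))
    prec_data = sxx / sigma2          # data precision
    prec_prior = 1.0 / lam ** 2       # prior precision (tight when lam is small)
    return ((prec_data * (sxz / sxx) + prec_prior * prior_mean)
            / (prec_data + prec_prior))

random.seed(5)
y = [random.gauss(0, 1) for _ in range(50)]   # white noise: true coefficient 0
tight = posterior_ar1(y, lam=0.05)   # strong shrinkage: dragged toward 1
loose = posterior_ar1(y, lam=10.0)   # weak shrinkage: close to the OLS estimate
```

In a full Minnesota prior the same precision-weighted averaging is applied coefficient by coefficient, with prior variances that shrink cross-variable and longer-lag coefficients more aggressively toward zero.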

13.
We provide probabilistic forecasts of photovoltaic (PV) production for several PV plants located in France, at lead times of up to 6 days with a 30-min timestep. First, we derive multiple forecasts from numerical weather predictions (ECMWF and Météo France), including ensemble forecasts. Second, our parameter-free online learning technique generates a weighted combination of the production forecasts for each PV plant. The weights are computed sequentially before each forecast using only past information. Our strategy is to minimize the Continuous Ranked Probability Score (CRPS). We show that our technique provides forecast improvements for both deterministic and probabilistic evaluation tools.
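Sequential CRPS-driven weighting can be illustrated with a plain exponentially weighted average over two ensemble forecasters. Note the hand-picked learning rate `eta` and the toy data are illustrative: the paper's algorithm is parameter-free, so this is a simplified stand-in, not their method.

```python
import math
import random

# Sequential weighting of two ensemble forecasters by CRPS (energy-score form).

def crps(ensemble, y):
    """CRPS of an empirical (ensemble) forecast: E|X - y| - 0.5 * E|X - X'|."""
    m = len(ensemble)
    spread = sum(abs(a - b) for a in ensemble for b in ensemble) / (2 * m * m)
    return sum(abs(x - y) for x in ensemble) / m - spread

random.seed(11)
weights, eta = [0.5, 0.5], 2.0
for _ in range(200):
    y = random.gauss(0, 1)                               # realized production
    experts = [[random.gauss(0, 1) for _ in range(20)],  # well-calibrated expert
               [random.gauss(3, 1) for _ in range(20)]]  # systematically biased expert
    losses = [crps(e, y) for e in experts]
    weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(weights)
    weights = [w / total for w in weights]
# weights now concentrate on the calibrated expert
```

Each weight update uses only losses already observed, mirroring the requirement that combination weights be computed before each forecast from past information alone.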

14.
Multi-population mortality forecasting has become an increasingly important area in actuarial science and demography, as a means to avoid long-run divergence in mortality projections. This paper aims to establish a unified state-space Bayesian framework to model, estimate, and forecast mortality rates in a multi-population context. In this regard, we reformulate the augmented common factor model to account for structural/trend changes in the mortality indexes. We conduct a Bayesian analysis to make inferences and generate forecasts so that process and parameter uncertainties can be considered simultaneously and appropriately. We illustrate the efficiency of our methodology through two case studies. Both point and probabilistic forecast evaluations are considered in the empirical analysis. The derived results support the fact that the incorporation of stochastic drifts mitigates the impact of the structural changes in the time indexes on mortality projections.

15.
The linear opinion pool (LOP) produces potentially non-Gaussian combination forecast densities. In this paper, we propose a computationally convenient transformation for the LOP to mirror the non-Gaussianity exhibited by the target variable. Our methodology involves a Smirnov transform to reshape the LOP combination forecasts using the empirical cumulative distribution function. We illustrate our empirically transformed opinion pool (EtLOP) approach with an application examining quarterly real-time forecasts for U.S. inflation evaluated on a sample from 1990:1 to 2020:2. EtLOP improves performance by approximately 10% to 30% in terms of the continuous ranked probability score across forecasting horizons.
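The reshaping idea can be sketched with empirical CDFs: push each pooled draw through the pool's ECDF, then through the inverse ECDF of past realizations, so the combination inherits the target variable's observed shape. Sample sizes, distributions, and the plain step-function ECDFs are illustrative; the paper's implementation details differ.

```python
import random
from bisect import bisect_right

# Sketch of a Smirnov-transform reshaping of a pooled forecast sample.
random.seed(2)
history = [random.expovariate(1.0) for _ in range(2000)]    # skewed target variable
pool_draws = [random.gauss(1.0, 1.0) for _ in range(2000)]  # Gaussian-looking pool

pool_sorted = sorted(pool_draws)
hist_sorted = sorted(history)

def smirnov(x):
    """Map a pool draw to the matching quantile of the historical distribution."""
    u = bisect_right(pool_sorted, x) / len(pool_sorted)  # pool ECDF value in (0, 1]
    return hist_sorted[min(int(u * len(hist_sorted)), len(hist_sorted) - 1)]

reshaped = [smirnov(x) for x in pool_draws]
# reshaped draws are non-negative and right-skewed, matching the history
```

The transform preserves the rank ordering of the pooled draws while replacing their marginal shape, which is why it can fix miscalibrated tails without discarding the pool's relative information.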

16.
A large body of empirical studies has shown that a forecast developed by combining individual base forecasts performs surprisingly well. Previous work on the combination of forecasts has been confined to the area of time series forecasting. This work extends the combination of forecasts technique into the domain of forecasting one-time competitive events, specifically the scaled, relative finishing position of horses in thoroughbred sprint races. The present research develops a framework for the selection of the base forecasts and selects 12 base forecasts for analysis. The performance of the combination of the base forecasts is assessed on a sample of sprint races. Results of the analysis strongly suggest that the combination approach is both appropriate and effective. Some differences in results between this work and previous work in the time series domain suggest promising avenues for future research.

17.
In this study, we investigate whether low-frequency data improve volatility forecasting when high-frequency data are available. To answer this question, we utilize four forecast combination strategies that combine low-frequency and high-frequency volatility models and employ a rolling window and a range of loss functions in the framework of the novel Model Confidence Set test. Out-of-sample results show that combination forecasts with GARCH-class models can achieve high forecast accuracy. However, the combination forecast methods appear not to significantly outperform individual high-frequency volatility models. Furthermore, we find that models that combine low-frequency and high-frequency volatility yield significantly better performance than other models and combination forecast strategies in both a statistical and economic sense.

18.
We propose a Bayesian estimation procedure for the generalized Bass model that is used in product diffusion models. Our method forecasts product sales early based on previous similar markets; that is, we obtain pre-launch forecasts by analogy. We compare our forecasting proposal to traditional estimation approaches, and alternative new product diffusion specifications. We perform several simulation exercises, and use our method to forecast the sales of room air conditioners, BlackBerry handheld devices, and compressed natural gas. The results show that our Bayesian proposal provides better predictive performances than competing alternatives when little or no historical data are available, which is when sales projections are the most useful.
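The generalized Bass model collapses to the classic Bass diffusion curve when the decision-variable term is constant, and that curve is what drives pre-launch sales paths. The parameter values below (innovation `p`, imitation `q`, market size `M`) are illustrative; in the paper their priors would be elicited from analogous earlier markets.

```python
import math

# Sketch: per-period sales implied by the Bass diffusion curve.

def bass_cumulative(t, p, q, M):
    """Cumulative adoptions at time t under the Bass diffusion model."""
    e = math.exp(-(p + q) * t)
    return M * (1 - e) / (1 + (q / p) * e)

M, p, q = 100_000, 0.03, 0.38
sales = [bass_cumulative(t, p, q, M) - bass_cumulative(t - 1, p, q, M)
         for t in range(1, 16)]                   # per-period sales, 15 periods
peak_period = max(range(len(sales)), key=lambda i: sales[i]) + 1
```

With these values sales rise, peak a handful of periods after launch, and then decay as the market saturates: the S-shaped adoption pattern that analogy-based priors let one project before any sales data arrive.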

19.
This paper develops a Bayesian variant of global vector autoregressive (B‐GVAR) models to forecast an international set of macroeconomic and financial variables. We propose a set of hierarchical priors and compare the predictive performance of B‐GVAR models in terms of point and density forecasts for one‐quarter‐ahead and four‐quarter‐ahead forecast horizons. We find that forecasts can be improved by employing a global framework and hierarchical priors which induce country‐specific degrees of shrinkage on the coefficients of the GVAR model. Forecasts from various B‐GVAR specifications tend to outperform forecasts from a naive univariate model, a global model without shrinkage on the parameters and country‐specific vector autoregressions. Copyright © 2016 John Wiley & Sons, Ltd.

20.
Least-squares forecast averaging
This paper proposes forecast combination based on the method of Mallows Model Averaging (MMA). The method selects forecast weights by minimizing a Mallows criterion. This criterion is an asymptotically unbiased estimate of both the in-sample mean-squared error (MSE) and the out-of-sample one-step-ahead mean-squared forecast error (MSFE). Furthermore, the MMA weights are asymptotically mean-square optimal in the absence of time-series dependence. We show how to compute MMA weights in forecasting settings, and investigate the performance of the method in simple but illustrative simulation environments. We find that the MMA forecasts have low MSFE and have much lower maximum regret than other feasible forecasting methods, including equal weighting, BIC selection, weighted BIC, AIC selection, weighted AIC, Bates–Granger combination, predictive least squares, and Granger–Ramanathan combination.
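A two-model sketch of the Mallows criterion: in-sample SSE of the weighted forecast plus a 2·σ² penalty on the weighted parameter count. Model 1 is intercept-only (k = 1), model 2 adds a slope (k = 2), σ² comes from the larger model as is standard for Mallows-type criteria; the data-generating values and grid search are illustrative.

```python
import random

# Sketch of selecting MMA weights by minimizing a Mallows criterion.
random.seed(9)
n = 120
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.3 * xi + random.gauss(0, 1) for xi in x]   # weak signal: both models plausible

mean_x, mean_y = sum(x) / n, sum(y) / n
fit1 = [mean_y] * n                               # model 1: intercept only (k = 1)
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
beta = sxy / sxx
alpha = mean_y - beta * mean_x
fit2 = [alpha + beta * xi for xi in x]            # model 2: intercept + slope (k = 2)

sigma2 = sum((yi - fi) ** 2 for yi, fi in zip(y, fit2)) / (n - 2)

def mallows(w):
    """Mallows criterion for weight w on model 1 (and 1 - w on model 2)."""
    sse = sum((yi - (w * f1 + (1 - w) * f2)) ** 2
              for yi, f1, f2 in zip(y, fit1, fit2))
    return sse + 2 * sigma2 * (w * 1 + (1 - w) * 2)  # weighted parameter count

w_star = min((i / 100 for i in range(101)), key=mallows)
```

The penalty term charges each model for its parameter count in proportion to its weight, so the criterion trades off in-sample fit against an unbiased estimate of out-of-sample risk, which is what makes the resulting weights asymptotically mean-square optimal.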


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号