Similar Documents
20 similar documents found.
1.
We propose using the statistical method of Bagging to forecast the equity premium out-of-sample for multivariate regression models. Bagging allows for the flexible and efficient extraction of valuable informational content from a large set of predictors, leading to statistically and economically significant gains relative not only to the historical mean, but also to other soft-threshold methods such as forecast combinations and shrinkage estimators in our empirical results. Furthermore, we find that the source of economic gains for Bagging primarily comes from the fact that it encourages the investor to actively manage the portfolio, flexibly using short selling or leverage to better time the market following correctly predicted trends. By contrast, other strategies such as forecast combinations keep the equity shares nearly fixed regardless of the predicted market prospects.
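To make the procedure concrete, below is a minimal sketch of bagging a pre-test predictive regression of the kind described above: bootstrap the sample, keep only predictors whose t-statistics clear a threshold in each replicate, and average the resulting forecasts. The function name, the 1.645 threshold and the number of replicates are illustrative assumptions, not the authors' exact design.

```python
# A minimal sketch of bagging a pre-test predictive regression.
# Threshold and replicate count are illustrative assumptions.
import numpy as np

def bagged_forecast(X, y, x_new, n_boot=100, t_thresh=1.645, seed=0):
    """X: (T, k) lagged predictors; y: (T,) equity premium;
    x_new: (k,) predictor values at the forecast origin."""
    rng = np.random.default_rng(seed)
    T, k = X.shape
    Xc = np.column_stack([np.ones(T), X])          # add intercept
    forecasts = []
    for _ in range(n_boot):
        idx = rng.integers(0, T, size=T)           # bootstrap resample
        Xb, yb = Xc[idx], y[idx]
        beta, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        resid = yb - Xb @ beta
        sigma2 = resid @ resid / (T - k - 1)
        se = np.sqrt(sigma2 * np.diag(np.linalg.pinv(Xb.T @ Xb)))
        tstat = beta / se
        # pre-test: zero out slopes with small t-statistics
        beta[1:] = np.where(np.abs(tstat[1:]) > t_thresh, beta[1:], 0.0)
        forecasts.append(beta[0] + beta[1:] @ x_new)
    return float(np.mean(forecasts))               # bagged point forecast
```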

2.
Using monthly data from 1973 through 2020, we explore whether it is possible to improve the accuracy of one-month-ahead log-aggregate equity return realized volatility point forecasts by conditioning on various nonlinear crude oil price measures widely relied on in the literature. When evaluating the evidence of unconditional relative equal predictive ability as specified in Diebold and Mariano (1995), we find that, much as with well-known economic variables such as the dividend yield, the default yield spread and the rate of inflation, there is rarely evidence of statistical gains in relative point forecast accuracy in favor of the crude oil price-based models. However, when evaluating the evidence of conditional relative equal predictive ability as specified in Giacomini and White (2006), we observe that, contrary to well-known economic predictors, certain nonlinear crude oil price variables, such as the one-year net crude oil price increase suggested in Hamilton (1996), offer sizable point forecast accuracy gains relative to the benchmark. These statistical gains can also be translated into economic gains.
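The unconditional test referenced above can be sketched as follows: a Diebold–Mariano statistic for the loss differential between two forecast error series, with a simple Newey–West long-run variance. The squared-error loss and the h-1 truncation lag are common conventions assumed here for illustration.

```python
# A hedged sketch of the Diebold-Mariano (1995) test for equal
# unconditional predictive ability under squared-error loss.
import numpy as np

def diebold_mariano(e1, e2, h=1):
    """DM statistic; e1, e2 are competing forecast-error series."""
    d = e1**2 - e2**2                       # loss differential
    T = len(d)
    dbar = d.mean()
    # Newey-West long-run variance with h-1 lags (common rule of thumb)
    lrv = ((d - dbar) @ (d - dbar)) / T
    for lag in range(1, h):
        cov = ((d[lag:] - dbar) @ (d[:-lag] - dbar)) / T
        lrv += 2.0 * cov
    return dbar / np.sqrt(lrv / T)          # compare with N(0,1) critical values
```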

3.
Forecast combination through dimension reduction techniques (cited 2 times: 0 self-citations, 2 citations by others)
This paper considers several methods of producing a single forecast from several individual ones. We compare “standard” but hard-to-beat combination schemes (such as the average of forecasts at each period, the consensus forecast, and OLS-based combination schemes) with more sophisticated alternatives that involve dimension reduction techniques. Specifically, we consider principal components, dynamic factor models, partial least squares and sliced inverse regression. Our source of forecasts is the Survey of Professional Forecasters, which provides forecasts for the main US macroeconomic aggregates. The forecasting results show that partial least squares, principal component regression and factor analysis have similar performances (better than the usual benchmark models), but sliced inverse regression shows extreme behavior (it performs either very well or very poorly).
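As one concrete variant, the following sketch combines a panel of individual forecasts via principal component regression: extract the leading component of the forecast panel and regress the target on it. All names are illustrative; the paper's own implementations may differ in detail.

```python
# An illustrative sketch of forecast combination through PCA:
# regress realizations on the leading PC of the forecast panel.
import numpy as np

def pc_combination(F_train, y_train, F_new, n_pc=1):
    """F_train: (T, m) individual forecasts; y_train: (T,) realizations;
    F_new: (m,) forecasts at the combination date."""
    mu = F_train.mean(axis=0)
    Z = F_train - mu
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    pcs = Z @ Vt[:n_pc].T                       # leading principal components
    X = np.column_stack([np.ones(len(y_train)), pcs])
    beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    pc_new = (F_new - mu) @ Vt[:n_pc].T
    return float(beta[0] + pc_new @ beta[1:])
```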

4.
The relative performances of forecasting models change over time. This empirical observation raises two questions. First, is the relative performance itself predictable? Second, if so, can it be exploited in order to improve the forecast accuracy? We address these questions by evaluating the predictive abilities of a wide range of economic variables for two key US macroeconomic aggregates, namely industrial production and inflation, relative to simple benchmarks. We find that business cycle indicators, financial conditions, uncertainty and measures of past relative performances are generally useful for explaining the models’ relative forecasting performances. In addition, we conduct a pseudo-real-time forecasting exercise, where we use the information about the conditional performance for model selection and model averaging. The newly proposed strategies deliver sizable improvements over competitive benchmark models and commonly used combination schemes. The gains are larger when model selection and averaging are based on both financial conditions and past performances measured at the forecast origin date.
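A stylized sketch of performance-based selection along these lines appears below: at each forecast origin, pick the model with the lower average loss over a rolling window, restricted to past periods in the same regime of a conditioning state variable (e.g., a financial-conditions index). The regime split by sign and the window length are purely illustrative assumptions.

```python
# A stylized sketch of conditional, performance-based model selection.
import numpy as np

def select_model(loss1, loss2, state, window=24):
    """Return 0/1 choices: at each origin t, pick the model with the lower
    average past loss among periods sharing the sign of `state[t]`."""
    T = len(loss1)
    choice = np.zeros(T, dtype=int)
    for t in range(window, T):
        past = slice(t - window, t)
        same = np.sign(state[past]) == np.sign(state[t])   # same regime
        if same.sum() < 5:                                  # fallback: unconditional
            same = np.ones(window, dtype=bool)
        choice[t] = int(loss2[past][same].mean() < loss1[past][same].mean())
    return choice
```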

5.
This paper presents empirical evidence on how judgmental adjustments affect the accuracy of macroeconomic density forecasts. Judgment is defined as the difference between professional forecasters’ densities and the forecast densities from statistical models. Using entropic tilting, we evaluate whether judgments about the mean, variance and skew improve the accuracy of density forecasts for UK output growth and inflation. We find that not all judgmental adjustments help. Judgments about point forecasts tend to improve density forecast accuracy at short horizons and at times of heightened macroeconomic uncertainty. Judgments about the variance hinder at short horizons, but can improve tail risk forecasts at longer horizons. Judgments about skew generally detract from accuracy, with gains seen only for longer-horizon output growth forecasts, when statistical models took longer to learn that downside risks had receded with the end of the Great Recession. Overall, density forecasts from statistical models prove hard to beat.
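Entropic tilting itself admits a compact implementation when the model density is represented by draws: reweight the draws with exponential weights w_i ∝ exp(λx_i) and solve for λ so that the tilted mean matches the judgmental point forecast. The sketch below assumes the target mean lies inside the bracketing interval; names are illustrative.

```python
# A minimal sketch of entropic tilting toward a judgmental mean.
import numpy as np
from scipy.optimize import brentq

def entropic_tilt(draws, target_mean):
    """Return tilted weights over `draws` whose weighted mean is target_mean."""
    x = np.asarray(draws, dtype=float)
    def gap(lam):
        w = np.exp(lam * (x - x.mean()))       # center for numerical stability
        w /= w.sum()
        return w @ x - target_mean
    lam = brentq(gap, -50.0, 50.0)             # solve the moment condition
    w = np.exp(lam * (x - x.mean()))
    return w / w.sum()
```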

6.
This paper aims to improve the predictability of aggregate oil market volatility with a substantially large macroeconomic database, including 127 macro variables. To this end, we use machine learning from both the variable selection (VS) and common factor (i.e., dimension reduction) perspectives. We first use the lasso, elastic net (ENet), and two conventional supervised learning approaches based on the significance level of predictors’ regression coefficients and the incremental R-square to select useful predictors relevant to forecasting oil market volatility. We then rely on principal component analysis (PCA) to extract a common factor from the selected predictors. Finally, we augment the autoregression (AR) benchmark model by including the supervised PCA common index. Our empirical results show that the supervised PCA regression model can successfully predict oil market volatility both in-sample and out-of-sample. Also, the recommended models can yield forecasting gains from both statistical and economic perspectives. We further shed light on the nature of VS over time. In particular, option-implied volatility is always the most powerful predictor.
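A hedged sketch of the pipeline described above: lasso-based variable selection, a single principal component extracted from the selected predictors, and an AR(1) benchmark augmented with that factor. The penalty level and function names are illustrative assumptions rather than the paper's tuned values.

```python
# A sketch of lasso selection followed by supervised PCA and an
# augmented AR(1) regression for (log) realized volatility.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.decomposition import PCA

def supervised_pca_forecast(rv, X, x_new, alpha=0.01):
    """rv: (T+1,) log realized volatility; X: (T, N) predictors dated t,
    used with rv[t] to predict rv[t+1]; x_new: (N,) latest predictors."""
    y, rv_lag = rv[1:], rv[:-1]
    sel = Lasso(alpha=alpha).fit(X, y)
    keep = np.flatnonzero(sel.coef_)            # lasso-selected predictors
    if keep.size == 0:
        keep = np.arange(X.shape[1])            # fall back to all predictors
    pca = PCA(n_components=1).fit(X[:, keep])
    factor = pca.transform(X[:, keep])[:, 0]
    Z = np.column_stack([np.ones(len(y)), rv_lag, factor])   # AR(1) + factor
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    f_new = pca.transform(x_new[keep].reshape(1, -1))[0, 0]
    return float(beta[0] + beta[1] * rv[-1] + beta[2] * f_new)
```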

7.
We develop a Bayesian random compressed multivariate heterogeneous autoregressive (BRC-MHAR) model to forecast the realized covariance matrices of stock returns. The proposed model randomly compresses the predictors and reduces the number of parameters. We also construct several competing multivariate volatility models with alternative shrinkage methods to compress the parameters’ dimensions. We compare the forecast performances of the proposed models with the competing models based on both statistical and economic evaluations. The results of the statistical evaluation suggest that the BRC-MHAR models have better forecast precision than the competing models for the short-term horizon. The results of the economic evaluation suggest that the BRC-MHAR models are superior to the competing models in terms of the average return, the Sharpe ratio and the economic value.
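The random-compression idea at the heart of the model can be illustrated without the full Bayesian machinery: project high-dimensional HAR-type predictors onto a low-dimensional subspace with a random matrix, then estimate a small regression. The Gaussian projection below is a common choice, not necessarily the authors' exact specification.

```python
# An illustrative sketch of random compression for a high-dimensional
# regression with HAR-type lagged covariance predictors.
import numpy as np

def random_compressed_regression(X, y, x_new, m=10, seed=0):
    """Compress (T, K) predictors X to m << K dimensions, then run OLS."""
    rng = np.random.default_rng(seed)
    K = X.shape[1]
    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(K, m))   # random projection
    Z = np.column_stack([np.ones(len(y)), X @ Phi])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return float(beta[0] + (x_new @ Phi) @ beta[1:])
```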

8.
We assess the marginal predictive content of a large international dataset for forecasting GDP in New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to efficiently handle hundreds of predictor series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. We find that exploiting a large international dataset can improve forecasts relative to data-rich approaches based on a large national dataset only, and also relative to more traditional approaches based on small datasets. This is in spite of New Zealand’s business and consumer confidence and expectations data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best in handling the international data.

9.
We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast realized covariance matrices almost as precisely as if we had known the true driving dynamics in advance. We next investigate the sources of these driving dynamics, as well as the performance of the proposed models for forecasting the realized covariance matrices of the 30 Dow Jones stocks. We find that the dynamics are not stable as the data are aggregated from the daily to lower frequencies. Furthermore, we are able to beat our benchmark by a wide margin. Finally, we investigate the economic value of our forecasts in a portfolio selection exercise and find that in certain cases an investor is willing to pay a considerable amount in order to get access to our forecasts.
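An equation-by-equation lasso VAR in the spirit of the paper can be sketched as follows, working with the half-vectorization of each realized covariance matrix. The penalty level is an illustrative assumption, and no positive-definiteness correction is applied here.

```python
# A rough sketch of a lasso-penalized VAR(1) for vectorized realized
# covariance matrices, estimated equation by equation.
import numpy as np
from sklearn.linear_model import Lasso

def vech(S):
    """Half-vectorize a symmetric matrix (lower triangle, incl. diagonal)."""
    i, j = np.tril_indices(S.shape[0])
    return S[i, j]

def lasso_var_forecast(cov_series, alpha=0.001):
    """cov_series: list of (n, n) realized covariance matrices."""
    Y = np.array([vech(S) for S in cov_series])
    X, y = Y[:-1], Y[1:]                        # VAR(1) in vech space
    preds = []
    for k in range(Y.shape[1]):                 # one lasso per equation
        model = Lasso(alpha=alpha).fit(X, y[:, k])
        preds.append(model.predict(Y[-1:])[0])
    return np.array(preds)                      # vech of forecast covariance
```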

10.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously, and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either dynamic factor model forecasts or the naïve random walk benchmark.
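One standard way to implement such a combination is inverse-MSFE weighting of the two forecasts, sketched below. This weighting rule is a common device assumed for illustration, not necessarily the authors' exact scheme.

```python
# A small sketch of combining lasso-based and dynamic-factor forecasts
# with inverse-MSFE weights computed from past forecast errors.
import numpy as np

def inverse_msfe_combination(f1, f2, e1_past, e2_past):
    """Weight two point forecasts by the inverse of their past MSFEs."""
    msfe1, msfe2 = np.mean(e1_past**2), np.mean(e2_past**2)
    w1 = (1.0 / msfe1) / (1.0 / msfe1 + 1.0 / msfe2)
    return w1 * f1 + (1.0 - w1) * f2
```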

11.
The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely, factor models, large-scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination, rather than separately, substantially improves the accuracy of forecasts, both when the whole set of variables is to be forecast and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression, valid when the dimension of the system tends to infinity, which opens the way to using large-scale reduced rank models for empirical analysis.
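Classical reduced rank regression with identity weighting has a compact closed form: estimate OLS, then project the coefficient matrix onto the leading right-singular directions of the fitted values, as in this illustrative sketch.

```python
# A compact sketch of classical reduced rank regression (identity weighting).
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """X: (T, k) regressors; Y: (T, n) targets; returns (k, n) coefficients."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Y_hat = X @ B_ols
    # leading principal directions of the fitted values
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                 # rank-r projection in Y space
    return B_ols @ P
```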

12.
We find that it does, but choosing the right specification is not trivial. Based on an extensive forecast evaluation, we document notable forecast instabilities for most simple Phillips curves. Euro area inflation was particularly hard to forecast in the run-up to the Economic and Monetary Union and after the sovereign debt crisis, when the trends (and, for the latter period, also the amount of slack) were harder to pin down. Yet, some specifications outperform a univariate benchmark and point to the following lessons: (i) the key type of time variation to consider is an inflation trend; (ii) a simple filter-based output gap works well, but after the Great Recession it is outperformed by endogenously estimated slack or by “institutional” estimates; (iii) external variables do not bring forecast gains; (iv) newer-generation Phillips curve models with several time-varying features are a promising avenue for forecasting; and (v) averaging over a wide range of modelling choices helps.
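Lessons (i) and (ii) can be distilled into a deliberately simple sketch, shown below: a slowly moving inflation trend (here a moving average of past inflation) plus a slack term from an HP-filtered output gap. The smoothing constants and the no-intercept gap regression are illustrative assumptions.

```python
# A stylized Phillips-curve sketch: pi(t+h) ~ trend(t) + beta * gap(t).
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def phillips_forecast(infl, log_output, h=1, trend_window=48):
    """infl, log_output: (T,) monthly series; returns an h-step forecast."""
    # trend: trailing moving average of inflation, padded at the start
    trend = np.convolve(infl, np.ones(trend_window) / trend_window, mode="valid")
    trend = np.concatenate([np.full(trend_window - 1, trend[0]), trend])
    gap = np.asarray(hpfilter(log_output, lamb=129600)[0])  # HP cycle, monthly
    y = infl[h:] - trend[:-h]                   # inflation gap to be explained
    x = gap[:-h]
    beta = (x @ y) / (x @ x)                    # no-intercept OLS on the gap
    return trend[-1] + beta * gap[-1]
```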

13.
A popular approach to forecasting macroeconomic variables is to utilize a large number of predictors. Several regularization and shrinkage methods can be used to exploit such high-dimensional datasets, and have been shown to improve forecast accuracy for the US economy. To assess whether similar results hold for economies with different characteristics, an Australian dataset containing observations on 151 aggregate and disaggregate economic series, as well as 185 international variables, is introduced. An extensive empirical study is carried out investigating forecasts at different horizons, using a variety of methods and with information sets containing an increasing number of predictors. In contrast to results for other countries, we find that it is difficult to forecast Australian key macroeconomic variables more accurately than some simple benchmarks. In line with other studies, we also find that there is little to no improvement in forecast accuracy when the number of predictors is expanded beyond 20–40 variables, and international factors do not seem to help.

14.
In this paper, we assess whether using non-linear dimension reduction techniques pays off for forecasting inflation in real time. Several recent methods from the machine learning literature are adopted to map a large-dimensional dataset into a lower-dimensional set of latent factors. We model the relationship between inflation and the latent factors using constant and time-varying parameter (TVP) regressions with shrinkage priors. Our models are then used to forecast monthly US inflation in real time. The results suggest that sophisticated dimension reduction methods yield inflation forecasts that are highly competitive with linear approaches based on principal components. Among the techniques considered, the Autoencoder and squared principal components yield factors that have high predictive power for one-month- and one-quarter-ahead inflation. Zooming into model performance over time reveals that controlling for non-linear relations in the data is of particular importance during recessionary episodes of the business cycle or the current COVID-19 pandemic.
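A minimal autoencoder of the kind referenced above can be sketched in a few lines of PyTorch: compress the standardized predictor panel through a narrow bottleneck and use the bottleneck activations as nonlinear factors. The architecture and training settings are illustrative assumptions.

```python
# A minimal autoencoder sketch for extracting nonlinear latent factors.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_inputs, n_factors=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 32), nn.Tanh(), nn.Linear(32, n_factors))
        self.decoder = nn.Sequential(
            nn.Linear(n_factors, 32), nn.Tanh(), nn.Linear(32, n_inputs))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def extract_factors(X, n_factors=4, epochs=200, lr=1e-3):
    """X: (T, N) standardized predictor panel; returns (T, n_factors)."""
    x = torch.tensor(X, dtype=torch.float32)
    model = Autoencoder(X.shape[1], n_factors)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)             # reconstruction error
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model.encoder(x).numpy()          # latent factors
```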

15.
Financial data often contain information that is helpful for macroeconomic forecasting, while multi-step forecast accuracy benefits from incorporating good nowcasts of macroeconomic variables. This paper considers the usefulness of financial nowcasts for making conditional forecasts of macroeconomic variables with quarterly Bayesian vector autoregressions (BVARs). When nowcasting quarterly financial variables’ values, we find that taking the average of the available daily data and a daily random walk forecast to complete the quarter typically outperforms other nowcasting approaches. Using real-time data, we find gains in out-of-sample forecast accuracy from the inclusion of financial nowcasts relative to unconditional forecasts, with further gains from the incorporation of nowcasts of macroeconomic variables. Conditional forecasts from quarterly BVARs augmented with financial nowcasts rival the forecast accuracy of mixed-frequency dynamic factor models and mixed-data sampling (MIDAS) models.
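The winning quarter-completion rule has a very simple form, sketched below: average the daily observations seen so far together with a random-walk extension (the latest value carried forward) for the remaining days of the quarter. Names are illustrative.

```python
# A small sketch of the quarter-completion nowcasting rule described above.
import numpy as np

def nowcast_quarterly(daily_so_far, days_in_quarter):
    """daily_so_far: observed daily values within the current quarter."""
    observed = np.asarray(daily_so_far, dtype=float)
    n_missing = days_in_quarter - len(observed)
    # random-walk completion: carry the latest observation forward
    filled = np.concatenate([observed, np.full(n_missing, observed[-1])])
    return filled.mean()                        # quarterly-average nowcast
```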

16.
This paper evaluates the forecasting performances of several small open-economy DSGE models relative to a closed-economy benchmark using a long span of data for Australia, Canada and the United Kingdom. We find that opening the model economy usually does not improve the quality of point and density forecasts for key domestic variables, and can even cause it to deteriorate. We show that this result can be attributed largely to an increase in the forecast error due to the more sophisticated structure of the extended setup, which is not compensated for by a better model specification. This claim is based on a Monte Carlo experiment in which an open-economy model fails to beat its closed-economy benchmark consistently even if the former is the true data generating process.

17.
We evaluate the performances of various methods for forecasting tourism data. The data used include 366 monthly series, 427 quarterly series and 518 annual series, all supplied to us by either tourism bodies or academics who had used them in previous tourism forecasting studies. The forecasting methods implemented in the competition are univariate and multivariate time series approaches, and econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate on tourism data only; (ii) we include approaches with explanatory variables; (iii) we evaluate the forecast interval coverage as well as the point forecast accuracy; (iv) we observe the effect of temporal aggregation on the forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure. We find that pure time series approaches provide more accurate forecasts for tourism data than models with explanatory variables. For seasonal data we implement three fully automated pure time series algorithms that generate accurate point forecasts, and two of these also produce forecast coverage probabilities which are satisfactorily close to the nominal rates. For annual data we find that Naïve forecasts are hard to beat.
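The mean absolute scaled error mentioned in (v) scales out-of-sample MAE by the in-sample MAE of a (seasonal) naive forecast, as in this short sketch following Hyndman and Koehler (2006).

```python
# A sketch of the mean absolute scaled error (MASE).
import numpy as np

def mase(y_train, y_test, y_pred, m=1):
    """MASE: out-of-sample MAE relative to in-sample seasonal-naive MAE.
    m is the seasonal period (e.g., 12 for monthly data, 1 for annual)."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_test - y_pred)) / scale
```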

18.
We propose a straightforward algorithm to estimate large Bayesian time-varying parameter vector autoregressions with mixture innovation components for each coefficient in the system. The computational burden becomes manageable by approximating the mixture indicators driving the time-variation in the coefficients with a latent threshold process that depends on the absolute size of the shocks. Two applications illustrate the merits of our approach. First, we forecast the US term structure of interest rates and demonstrate forecast gains relative to benchmark models. Second, we apply our approach to US macroeconomic data and find significant evidence for time-varying effects of a monetary policy tightening.
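The latent threshold mechanism can be illustrated with a toy random-walk coefficient that updates only when the absolute shock exceeds a threshold; everything here is a stylized illustration, not the paper's full MCMC algorithm.

```python
# A toy sketch of the latent-threshold approximation to mixture innovations:
# a coefficient moves only when the absolute shock exceeds a threshold.
import numpy as np

def latent_threshold_path(shocks, threshold, beta0=0.0):
    """Random-walk coefficient path that updates only for large shocks."""
    beta = np.empty(len(shocks))
    b = beta0
    for t, e in enumerate(shocks):
        if abs(e) > threshold:      # mixture indicator approximated by threshold
            b += e
        beta[t] = b
    return beta
```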

19.
This paper develops a Bayesian vector autoregressive model (BVAR) for the leader of the Portuguese car market to forecast the market share. The model includes five marketing decision variables. The Bayesian prior is selected on the basis of the accuracy of the out-of-sample forecasts. We find that BVAR models generally produce more accurate forecasts. The out-of-sample accuracy of the BVAR forecasts is also compared with that of forecasts from an unrestricted VAR model and of benchmark forecasts produced from three univariate models. Additionally, competitive dynamics are revealed through variance decompositions and impulse response analyses.
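A Minnesota-style BVAR posterior mean can be sketched as ridge-type shrinkage of a VAR(1) toward a random-walk prior, as below. The single shrinkage hyperparameter and the unit-variance scaling are illustrative simplifications of a full Minnesota prior.

```python
# A compact sketch of a VAR(1) posterior mean under Minnesota-style
# shrinkage toward a random walk (illustrative simplification).
import numpy as np

def minnesota_var1(Y, lam=0.2):
    """Y: (T, n) data. Returns (n+1, n) coefficients [intercept; A1]
    shrunk toward a random walk."""
    X = np.column_stack([np.ones(Y.shape[0] - 1), Y[:-1]])
    Yt = Y[1:]
    n = Y.shape[1]
    prior_mean = np.vstack([np.zeros((1, n)), np.eye(n)])   # random-walk prior
    # Ridge precision: no shrinkage on the intercept, 1/lam^2 on slopes
    P = np.diag([0.0] + [1.0 / lam**2] * n)
    return np.linalg.solve(X.T @ X + P, X.T @ Yt + P @ prior_mean)
```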

20.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using “big data” (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, & Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether “big data” are useful for modelling low frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garotte. Our approach is to carry out a forecasting “horse-race” using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that several of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called “combination” models). We find that forecast combination methods are mean square forecast error (MSFE) “best” for only three of the 11 variables at a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA-type factor estimation methods yield MSFE-best predictions for nine of the 11 variables at h=1, although PCA dominates at longer horizons. Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h>1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing “big data” for macroeconometric forecasting.
