Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
We construct a real-time dataset (FRED-SD) with vintage data for the U.S. states that can be used to forecast both state-level and national-level variables. Our dataset includes approximately 28 variables per state, including labor-market, production, and housing variables. We conduct two sets of real-time forecasting exercises. The first forecasts state-level labor-market variables using five different models and different levels of industrially disaggregated data. The second forecasts a national-level variable exploiting the cross-section of state data. The state-forecasting experiments suggest that large models with industrially disaggregated data tend to have higher predictive ability for industrially diversified states. For national-level data, we find that forecasting and aggregating state-level data can outperform a random walk but not an autoregression. We compare these real-time data experiments with forecasting experiments using final-vintage data and find very different results. Because these final-vintage results are obtained with revised data that would not have been available at the time the forecasts would have been made, we conclude that the use of real-time data is essential for drawing proper conclusions about state-level forecasting models.
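The national-level benchmark comparison (random walk versus autoregression) can be sketched as follows. This is a minimal illustration on simulated persistent data, not the paper's FRED-SD setup; the AR(1) coefficient and the recursive estimation scheme are assumptions.

```python
import numpy as np

# Simulate a persistent series and score a random-walk forecast (last value)
# against a recursively re-estimated AR(1) forecast by out-of-sample MSE.
rng = np.random.default_rng(14)
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

split = 300
errs_rw, errs_ar = [], []
for t in range(split, T - 1):
    past = y[: t + 1]
    phi = (past[:-1] @ past[1:]) / (past[:-1] @ past[:-1])  # recursive AR(1) fit
    errs_rw.append(y[t + 1] - y[t])          # random walk: forecast = last value
    errs_ar.append(y[t + 1] - phi * y[t])    # AR(1) one-step forecast

mse_rw = np.mean(np.square(errs_rw))
mse_ar = np.mean(np.square(errs_ar))
```

For a stationary but persistent process like this one, the AR(1) forecast typically edges out the random walk, mirroring the paper's national-level ranking.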

2.
This paper compares alternative models of time-varying volatility on the basis of the accuracy of real-time point and density forecasts of key macroeconomic time series for the USA. We consider Bayesian autoregressive and vector autoregressive models that incorporate some form of time-varying volatility, namely random walk stochastic volatility, stochastic volatility following a stationary AR process, stochastic volatility coupled with fat tails, GARCH, and mixture of innovation models. The results show that the AR and VAR specifications with conventional stochastic volatility dominate other volatility specifications, in terms of point forecasting to some degree and density forecasting to a greater degree. Copyright © 2014 John Wiley & Sons, Ltd.
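The winning specification can be sketched as an AR(1) whose innovation variance follows random-walk stochastic volatility. This is a simulation sketch with assumed parameter values, not the paper's Bayesian estimation machinery.

```python
import numpy as np

# AR(1) with random-walk stochastic volatility: the log innovation variance
# drifts as a random walk, so forecast densities widen and narrow over time.
rng = np.random.default_rng(0)
T = 500
phi = 0.5          # AR(1) coefficient (assumed)
sigma_eta = 0.1    # std. dev. of log-volatility increments (assumed)

log_vol = np.cumsum(sigma_eta * rng.standard_normal(T))   # random-walk log-volatility
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + np.exp(0.5 * log_vol[t]) * rng.standard_normal()

# Conditional on the volatility state, the one-step-ahead density forecast is
# Gaussian with mean phi * y[T-1] and standard deviation exp(log_vol/2).
point_forecast = phi * y[-1]
forecast_sd = np.exp(0.5 * log_vol[-1])
```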

3.
This article studies a simple, coherent approach for identifying and estimating error-correcting vector autoregressive moving average (EC-VARMA) models. Canonical correlation analysis is implemented for both determining the cointegrating rank, using a strongly consistent method, and identifying the short-run VARMA dynamics, using the scalar component methodology. Finite-sample performance is evaluated via Monte Carlo simulations and the approach is applied to modelling and forecasting US interest rates. The results reveal that EC-VARMA models generate significantly more accurate out-of-sample forecasts than vector error correction models (VECMs), especially for short horizons. Copyright © 2015 John Wiley & Sons, Ltd.
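The canonical-correlation step for choosing the cointegrating rank can be sketched as follows. The squared canonical correlations between the differences and the lagged levels are the eigenvalues that Johansen-type procedures test; this is a simplified numpy illustration on simulated bivariate data with one common trend, not the paper's strongly consistent procedure.

```python
import numpy as np

# Bivariate system sharing one stochastic trend => cointegrating rank 1.
rng = np.random.default_rng(11)
T = 500
common = np.cumsum(rng.standard_normal(T))      # shared random-walk trend
y = np.column_stack([common + 0.2 * rng.standard_normal(T),
                     common + 0.2 * rng.standard_normal(T)])
dy, ylag = np.diff(y, axis=0), y[:-1]

def sq_canon_corr(A, B):
    # eigenvalues of Saa^-1 Sab Sbb^-1 Sba = squared canonical correlations
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    Saa, Sbb, Sab = A.T @ A, B.T @ B, A.T @ B
    M = np.linalg.solve(Saa, Sab) @ np.linalg.solve(Sbb, Sab.T)
    return np.sort(np.linalg.eigvals(M).real)[::-1]

ev = sq_canon_corr(dy, ylag)
# One eigenvalue well away from zero and one near zero suggests rank 1.
```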

4.
This paper unifies two methodologies for multi-step forecasting from autoregressive time series models. The first is covered in most of the traditional time series literature: it uses short-horizon forecasts to compute longer-horizon forecasts, while the estimation method minimizes one-step-ahead forecast errors. The second methodology considers direct multi-step estimation and forecasting. In this paper, we show that both approaches are special (boundary) cases of a technique called partial least squares (PLS) when this technique is applied to an autoregression. We outline this methodology and show how it unifies the other two. We also illustrate the practical relevance of the resulting PLS autoregression for 17 quarterly, seasonally adjusted, industrial production series. Our main finding is that both boundary models can be improved by including factors indicated by the PLS technique.
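The two boundary cases that PLS unifies can be sketched directly. "Iterated" fits a one-step AR by OLS and powers the coefficient up to the horizon; "direct" regresses the h-step-ahead value on the current value. Data and parameter values are illustrative assumptions.

```python
import numpy as np

# Simulate an AR(1) and compute both multi-step forecasts at horizon h.
rng = np.random.default_rng(1)
T, h = 300, 4
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

# Iterated: one-step OLS slope, then raise it to the h-th power
x, z = y[:-1], y[1:]
phi_hat = (x @ z) / (x @ x)
iterated = phi_hat ** h * y[-1]

# Direct: regress y[t+h] on y[t] and forecast in one shot
xd, zd = y[:-h], y[h:]
beta_hat = (xd @ zd) / (xd @ xd)
direct = beta_hat * y[-1]
```

For a correctly specified linear AR the two forecasts agree asymptotically; the PLS autoregression in the paper spans the models between these two extremes.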

5.
We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast realized covariance matrices almost as precisely as if we had known the true driving dynamics in advance. We next investigate the sources of these driving dynamics as well as the performance of the proposed models for forecasting the realized covariance matrices of the 30 Dow Jones stocks. We find that the dynamics are not stable as the data are aggregated from the daily to lower frequencies. Furthermore, we are able to beat our benchmark by a wide margin. Finally, we investigate the economic value of our forecasts in a portfolio selection exercise and find that in certain cases an investor is willing to pay a considerable amount in order to get access to our forecasts. Copyright © 2016 John Wiley & Sons, Ltd.
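A penalized VAR of this kind can be sketched as one Lasso regression per equation on the vectorized covariance elements. The dimensions, the sparse transition matrix, and the penalty level are illustrative assumptions; a small proximal-gradient (ISTA) routine stands in for a production Lasso solver.

```python
import numpy as np

# Simulate a sparse VAR(1) for k "covariance element" series.
rng = np.random.default_rng(2)
T, k = 400, 6
A = 0.3 * np.eye(k)                  # sparse "true" transition matrix (assumed)
X = np.zeros((T, k))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.standard_normal(k)

def lasso(Xm, yv, alpha, iters=500):
    # proximal gradient (ISTA) for (1/2n)||y - Xb||^2 + alpha * ||b||_1
    n = Xm.shape[0]
    L = np.linalg.norm(Xm, 2) ** 2 / n       # Lipschitz constant of the gradient
    b = np.zeros(Xm.shape[1])
    for _ in range(iters):
        b -= Xm.T @ (Xm @ b - yv) / (n * L)  # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - alpha / L, 0.0)  # soft-threshold
    return b

# One penalized regression per VAR equation, then a one-step-ahead forecast.
coefs = np.vstack([lasso(X[:-1], X[1:, j], alpha=0.01) for j in range(k)])
forecast = coefs @ X[-1]
```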

6.
We compare a number of methods that have been proposed in the literature for obtaining h-step ahead minimum mean square error forecasts for self-exciting threshold autoregressive (SETAR) models. These forecasts are compared to those from an AR model. The comparison of forecasting methods is made using Monte Carlo simulation. The Monte Carlo method of calculating SETAR forecasts is generally at least as good as the other methods we consider. An exception is when the disturbances in the SETAR model come from a highly asymmetric distribution, when a bootstrap method is to be preferred. An empirical application calculates multi-period forecasts from a SETAR model of US gross national product using a number of the forecasting methods. We find that whether there are improvements in forecast performance relative to a linear AR model depends on the historical epoch we select, and whether forecasts are evaluated conditional on the regime the process was in at the time the forecast was made.
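The Monte Carlo method for SETAR forecasts can be sketched as simulating many future paths through the threshold nonlinearity and averaging them. The two-regime SETAR(1) and all numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def setar_next(y, eps, r=0.0):
    # two-regime SETAR(1): the AR coefficient depends on the current regime
    return np.where(y > r, 0.7 * y, -0.3 * y) + eps

h, n_paths, y0 = 4, 10_000, 1.5
paths = np.full(n_paths, y0)
for _ in range(h):
    paths = setar_next(paths, rng.standard_normal(n_paths))
mc_forecast = paths.mean()            # Monte Carlo estimate of the MMSE forecast

# The naive plug-in forecast iterates the skeleton with zero shocks,
# ignoring that future shocks can switch the regime.
naive = y0
for _ in range(h):
    naive = setar_next(naive, 0.0)
```

The gap between `mc_forecast` and `naive` is exactly the bias that makes multi-step forecasting from nonlinear models nontrivial.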

7.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro-area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro-area-wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss the use as indicators of the estimated factors from a dynamic factor model for all the indicators. Second, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors and methods of pooling the ex ante single-indicator or factor-based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.
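The pooling step can be sketched as averaging several single-indicator forecasts, either with equal weights or with weights based on past performance. The individual forecasts here are simulated stand-ins, and inverse-MSE weighting is one common scheme among the ones the paper considers.

```python
import numpy as np

# Three "single-indicator" forecasts of the same target with different skill.
rng = np.random.default_rng(15)
n = 200
target = rng.standard_normal(n)
forecasts = np.column_stack(
    [target + s * rng.standard_normal(n) for s in (0.4, 0.7, 1.0)]
)

# Weight each forecast inversely to its historical mean squared error.
past_mse = ((forecasts - target[:, None]) ** 2).mean(axis=0)
w = (1.0 / past_mse) / (1.0 / past_mse).sum()

equal_pool = forecasts.mean(axis=1)
weighted_pool = forecasts @ w
mse_eq = ((equal_pool - target) ** 2).mean()
mse_w = ((weighted_pool - target) ** 2).mean()
```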

8.
A probabilistic forecast is the estimated probability with which a future event will occur. One interesting feature of such forecasts is their calibration, or the match between the predicted probabilities and the actual outcome probabilities. Calibration has been evaluated in the past by grouping probability forecasts into discrete categories. We show here that we can do this without discrete groupings; the kernel estimators that we use produce efficiency gains and smooth estimated curves relating the predicted and actual probabilities. We use such estimates to evaluate the empirical evidence on the calibration error in a number of economic applications, including the prediction of recessions and inflation, using both forecasts made and stored in real time and pseudo-forecasts made using the data vintage available at the forecast date. The outcomes are evaluated using both first-release outcome measures and subsequent revised data. We find substantial evidence of incorrect calibration in professional forecasts of recessions and inflation from the SPF, as well as in real-time inflation forecasts from a variety of output gap models.
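The kernel idea can be sketched as a Nadaraya-Watson regression of outcome indicators on predicted probabilities, replacing discrete probability bins with a smooth calibration curve. The data-generating process and the bandwidth are illustrative assumptions.

```python
import numpy as np

# Simulate miscalibrated probability forecasts: true P(event) = p_hat ** 1.2.
rng = np.random.default_rng(4)
n = 2000
p_hat = rng.uniform(0, 1, n)
outcome = (rng.uniform(0, 1, n) < p_hat ** 1.2).astype(float)

def calibration_curve(p_eval, p_hat, outcome, bw=0.05):
    # Nadaraya-Watson: Gaussian-kernel-weighted average of outcomes
    w = np.exp(-0.5 * ((p_eval[:, None] - p_hat[None, :]) / bw) ** 2)
    return (w * outcome).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0.05, 0.95, 19)
curve = calibration_curve(grid, p_hat, outcome)
max_miscal = np.abs(curve - grid).max()   # deviation from the 45-degree line
```

Perfect calibration would put the whole curve on the 45-degree line; `max_miscal` summarizes how far these forecasts fall short.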

9.
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts. Copyright © 2009 John Wiley & Sons, Ltd.

10.
Accurate solar forecasts are necessary to improve the integration of solar renewables into the energy grid. In recent years, numerous methods have been developed for predicting the solar irradiance or the output of solar renewables. By definition, a forecast is uncertain; thus, the models developed predict the mean and the associated uncertainty. Comparisons are therefore necessary and useful for assessing the skill and accuracy of these new methods in the field of solar energy.

The aim of this paper is to present a comparison of various models that provide probabilistic forecasts of the solar irradiance within a very strict framework. We focus on intraday forecasts, with lead times ranging from 1 to 6 hours. The models selected use only endogenous inputs for generating the forecasts; in other words, the only inputs of the models are the past solar irradiance data. In this context, the most common way of generating the forecasts is to combine point forecasting methods with probabilistic approaches in order to provide prediction intervals for the solar irradiance forecasts. For this task, we selected from the literature three point forecasting models (recursive autoregressive and moving average (ARMA), coupled autoregressive and dynamical system (CARDS), and neural network (NN)) and seven methods for assessing the distribution of their errors (linear model in quantile regression (LMQR), weighted quantile regression (WQR), quantile regression neural network (QRNN), recursive generalized autoregressive conditional heteroskedasticity (GARCHrls), sieve bootstrap (SB), quantile regression forest (QRF), and gradient boosting decision trees (GBDT)), leading to a comparison of 20 combinations of models.

None of the model combinations clearly outperforms the others; nevertheless, some trends emerge from the comparison. First, the use of the clear sky index ensures the accuracy of the forecasts. This derived parameter permits time series to be deseasonalized even with missing data, and is also a good explanatory variable for the distribution of the forecasting errors. Second, regardless of the point forecasting method used, linear models in quantile regression, weighted quantile regression and gradient boosting decision trees are able to forecast the prediction intervals accurately.
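The general recipe the paper describes, a point forecast plus a model of the error distribution, can be sketched minimally with an AR(1) point model and interval bounds taken from empirical residual quantiles. This is a crude stand-in for the quantile-regression methods compared in the paper, and the data are simulated.

```python
import numpy as np

# Fit an AR(1) point model, then build a 90% prediction interval from the
# empirical 5th and 95th percentiles of its in-sample residuals.
rng = np.random.default_rng(5)
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()

x, z = y[:-1], y[1:]
phi = (x @ z) / (x @ x)                      # OLS AR(1) coefficient
resid = z - phi * x
lo, hi = np.quantile(resid, [0.05, 0.95])    # empirical error quantiles

point = phi * y[-1]
interval = (point + lo, point + hi)          # 90% prediction interval
```

The quantile-regression variants in the paper refine this by letting the interval width depend on covariates (e.g., the clear sky index) instead of being constant.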

11.
We construct factor models based on disaggregate survey data for forecasting national aggregate macroeconomic variables. Our methodology applies regional and sectoral factor models to Norges Bank's regional survey and to the Swedish Business Tendency Survey. The analysis identifies which of the pieces of information extracted from the individual regions in Norges Bank's survey and the sectors for the two surveys perform particularly well at forecasting different variables at various horizons. The results show that several factor models beat an autoregressive benchmark in forecasting inflation and the unemployment rate. However, the factor models are most successful at forecasting GDP growth. Forecast combinations using the past performances of regional and sectoral factor models yield the most accurate forecasts in the majority of the cases.
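The factor-extraction step can be sketched as principal components on a standardized panel of survey responses. The panel dimensions, the single-factor structure, and the loadings are illustrative assumptions, not features of the Norwegian or Swedish surveys.

```python
import numpy as np

# Simulate a survey panel driven by one common factor plus idiosyncratic noise,
# then recover the factor with the first principal component (via SVD).
rng = np.random.default_rng(12)
T, N = 120, 30                     # periods x survey series (assumed)
f = rng.standard_normal(T)         # unobserved common factor
panel = np.outer(f, rng.uniform(0.5, 1.5, N)) + 0.5 * rng.standard_normal((T, N))

Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)   # standardize
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
factor_hat = U[:, 0] * s[0] / np.sqrt(T)               # first PC factor estimate

# The estimated factor tracks the true one up to sign and scale.
corr = abs(np.corrcoef(factor_hat, f)[0, 1])
```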

12.
There is considerable interest today in the forecasting of conflict dynamics. Commonly, the root mean square error and other point metrics are used to evaluate the forecasts from such models. However, conflict processes are non-linear, so these point metrics often do not produce adequate evaluations of the calibration and sharpness of the forecast models. Forecast density evaluation improves the model evaluation. We review tools for density evaluation, including continuous rank probability scores, verification rank histograms, and sharpness plots. The usefulness of these tools for evaluating conflict forecasting models is explained. We illustrate this, first, in a comparison of several time series models' forecasts of simulated data from a Markov-switching process, and second, in a comparison of several models' abilities to forecast conflict dynamics in the Cross Straits. These applications show the pitfalls of relying on point metrics alone for evaluating the quality of conflict forecasting models. As in other fields, it is more useful to employ a suite of tools. A non-linear vector autoregressive model emerges as the model which is best able to forecast conflict dynamics between China and Taiwan.
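The continuous ranked probability score (CRPS) mentioned above can be computed for an ensemble forecast with the standard empirical formula CRPS = E|X - y| - ½ E|X - X'|. The ensembles and outcome below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)
ensemble = rng.normal(0.0, 1.0, 1000)   # draws from a wide forecast density
outcome = 0.2                           # realized value (assumed)

# Empirical CRPS: accuracy term minus a spread (sharpness) correction.
term1 = np.abs(ensemble - outcome).mean()
term2 = 0.5 * np.abs(ensemble[:, None] - ensemble[None, :]).mean()
crps = term1 - term2

# A sharper, well-centred ensemble scores lower (better).
sharp = rng.normal(0.2, 0.3, 1000)
crps_sharp = (np.abs(sharp - outcome).mean()
              - 0.5 * np.abs(sharp[:, None] - sharp[None, :]).mean())
```

Unlike RMSE on the ensemble mean, CRPS rewards both calibration and sharpness, which is the point the paper makes for conflict forecasts.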

13.
Probabilistic forecasting, i.e., estimating a time series' future probability distribution given its past, is a key enabler for optimizing business processes. In retail businesses, for example, probabilistic demand forecasts are crucial for having the right inventory available at the right time and in the right place. This paper proposes DeepAR, a methodology for producing accurate probabilistic forecasts, based on training an autoregressive recurrent neural network model on a large number of related time series. We demonstrate how the application of deep learning techniques to forecasting can overcome many of the challenges that are faced by widely-used classical approaches to the problem. By means of extensive empirical evaluations on several real-world forecasting datasets, we show that our methodology produces more accurate forecasts than other state-of-the-art methods, while requiring minimal manual work.

14.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small-scale semistructural models at the national and Euro area level, institutional forecasts (Organization for Economic Co-operation and Development), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the relevant fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time-series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic changes. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time-series models can produce more accurate forecasts because of their parsimonious specification.

15.
In forecasting and regression analysis, it is often necessary to select predictors from a large feasible set. When the predictors have no natural ordering, an exhaustive evaluation of all possible combinations of the predictors can be computationally costly. This paper considers 'boosting' as a methodology of selecting the predictors in factor-augmented autoregressions. As some of the predictors are being estimated, we propose a stopping rule for boosting to prevent the model from being overfitted with estimated predictors. We also consider two ways of handling lags of variables: a componentwise approach and a block-wise approach. The best forecasting method will necessarily depend on the data-generating process. Simulations show that for each data type there is one form of boosting that performs quite well. When applied to four key economic variables, some form of boosting is found to outperform the standard factor-augmented forecasts and is far superior to an autoregressive forecast. Copyright © 2009 John Wiley & Sons, Ltd.
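Componentwise L2 boosting can be sketched as follows: at each step, fit every candidate predictor to the current residual by simple least squares, keep only the best-fitting one, and add it with a small step size. All data and tuning values below are assumptions.

```python
import numpy as np

# Only predictors 2 and 7 matter; boosting should pick them out.
rng = np.random.default_rng(6)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[2, 7]] = [1.0, -0.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

nu, steps = 0.1, 200      # step size and number of boosting iterations
beta = np.zeros(p)
resid = y.copy()
for _ in range(steps):
    fits = X.T @ resid / (X ** 2).sum(axis=0)        # univariate LS slope per column
    sse = ((resid[:, None] - X * fits) ** 2).sum(axis=0)
    j = int(sse.argmin())                            # best-fitting component
    beta[j] += nu * fits[j]
    resid -= nu * fits[j] * X[:, j]

selected = set(np.flatnonzero(np.abs(beta) > 0.05))
```

Early stopping (the `steps` cap) is the regularizer; the paper's contribution is a stopping rule adapted to estimated (factor) predictors.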

16.
How to measure and model volatility is an important issue in finance. Recent research uses high-frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model-averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log-volatility. Copyright © 2009 John Wiley & Sons, Ltd.
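Model averaging of this kind can be sketched with BIC-based approximate posterior weights over a few AR specifications, a common shortcut; the paper's candidate set and priors are richer, and the data here are simulated.

```python
import numpy as np

# Simulate a persistent series standing in for log realized volatility.
rng = np.random.default_rng(7)
T = 300
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

def fit_ar(y, p):
    # OLS AR(p): return BIC and the one-step-ahead forecast
    T = len(y)
    Y = y[p:]
    X = np.column_stack([y[p - i - 1:T - i - 1] for i in range(p)])
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sig2 = ((Y - X @ b) ** 2).mean()
    bic = len(Y) * np.log(sig2) + p * np.log(len(Y))
    fc = y[-1:-p - 1:-1] @ b
    return bic, fc

bics, fcs = zip(*(fit_ar(y, p) for p in (1, 2, 3)))
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                          # approximate posterior model weights
bma_forecast = w @ np.array(fcs)      # weight-averaged point forecast
```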

17.
This paper investigates the predictive ability of money for future inflation in the Czech Republic, Hungary, Poland and Slovakia. We construct monetary indicators similar to those the European Central Bank regularly uses for monetary analysis. We find in-sample evidence that money matters for future inflation at the policy horizons that central banks typically focus on, but our pseudo out-of-sample forecasting exercise shows that money does not in general improve the inflation forecasts vis-à-vis some benchmark models such as the autoregressive process. Since at least some models containing money improve the inflation forecasts in certain periods, we argue that money still serves as a useful cross-check for monetary policy analysis.

18.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
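The core idea can be sketched without MCMC: with Laplace errors, maximum likelihood for an AR(1) reduces to minimizing absolute one-step residuals, i.e., conditional median regression. A grid search stands in for the paper's Bayesian machinery, and the data and grid are illustrative assumptions.

```python
import numpy as np

# AR(1) with heavy-tailed Laplace innovations.
rng = np.random.default_rng(8)
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.laplace()

# Least absolute deviations (= Laplace MLE) over a coefficient grid.
grid = np.linspace(-0.99, 0.99, 199)
lad = [np.abs(y[1:] - phi * y[:-1]).sum() for phi in grid]
phi_med = grid[int(np.argmin(lad))]

# Mean-based OLS estimate for comparison.
phi_ols = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
```

Both estimators are consistent here; the median fit's advantage shows up when outliers or heavier tails would distort the mean-based one.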

19.
This paper proposes an accurate, parsimonious and fast-to-estimate forecasting model for integer-valued time series with long memory and seasonality. The modelling is achieved through an autoregressive Poisson process with a predictable stochastic intensity that is determined by two factors: a seasonal intraday pattern and a heterogeneous autoregressive component. We call the model SHARP, which is an acronym for seasonal heterogeneous autoregressive Poisson. We also present a mixed-data sampling extension of the model, which adopts the historical information flow more efficiently and provides the best (among all the models considered) forecasting performances, empirically, for the bid–ask spreads of NYSE equity stocks. We conclude by showing how bid–ask spread forecasts based on the SHARP model can be exploited in order to reduce the total cost incurred by a trader who is willing to buy or sell a given amount of an equity stock.
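An autoregressive Poisson process with a predictable intensity can be sketched as a seasonal pattern multiplying an autoregressive term, loosely in the spirit of the SHARP decomposition. All coefficient values and the seasonal shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
T = 300
# Deterministic intraday seasonal pattern (assumed sinusoidal, period 50).
season = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T + 1) / 50)
omega, alpha = 0.5, 0.4                 # baseline and feedback (assumed)

lam = np.zeros(T)
counts = np.zeros(T, dtype=np.int64)
lam[0] = omega
counts[0] = rng.poisson(lam[0])
for t in range(1, T):
    # Intensity is predictable: it depends only on past counts and the season.
    lam[t] = season[t] * (omega + alpha * counts[t - 1])
    counts[t] = rng.poisson(lam[t])

# One-step-ahead forecast of the intensity = expected next count.
one_step = season[T] * (omega + alpha * counts[-1])
```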

20.
A desirable property of a forecast is that it encompasses competing predictions, in the sense that the accuracy of the preferred forecast cannot be improved through linear combination with a rival prediction. In this paper, we investigate the impact of the uncertainty associated with estimating model parameters in-sample on the encompassing properties of out-of-sample forecasts. Specifically, using examples of non-nested econometric models, we show that forecasts from the true (but estimated) data generating process (DGP) do not encompass forecasts from competing mis-specified models in general, particularly when the number of in-sample observations is small. Following this result, we also examine the scope for achieving gains in accuracy by combining the forecasts from the DGP and mis-specified models.
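The encompassing notion can be sketched as a combination regression: regress the outcome on two rival forecasts, and say forecast 1 encompasses forecast 2 when the weight on forecast 2 is (statistically) zero. The forecasts below are simulated illustrations, not the paper's non-nested models.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500
truth = rng.standard_normal(n)
f1 = truth + 0.3 * rng.standard_normal(n)        # accurate rival
f2 = 0.5 * truth + 0.8 * rng.standard_normal(n)  # noisier, attenuated rival

# OLS combination weights: truth ~ w1 * f1 + w2 * f2.
F = np.column_stack([f1, f2])
w, *_ = np.linalg.lstsq(F, truth, rcond=None)
```

A nonzero `w[1]` means `f1` does not fully encompass `f2`, so combining the two forecasts still buys accuracy, which is exactly the situation the paper shows arises even for the estimated true DGP in small samples.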


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)