Similar Articles
20 similar articles found
1.
This paper analyses the real-time forecasting performance of the New Keynesian DSGE model of Galí, Smets and Wouters (2012), estimated on euro area data. It investigates the extent to which the inclusion of forecasts of inflation, GDP growth and unemployment by professional forecasters improves the forecasting performance. We consider two approaches for conditioning on such information. Under the “noise” approach, the mean professional forecasts are assumed to be noisy indicators of the rational expectations forecasts implied by the DSGE model. Under the “news” approach, it is assumed that the forecasts reveal the presence of expected future structural shocks in line with those estimated in the past. The forecasts of the DSGE model are compared with those from a Bayesian VAR model, an AR(1) model, a sample mean and a random walk.
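As a reading aid, here is a minimal formal sketch of the two conditioning schemes; the notation is ours, not the paper's, and the news representation is the standard anticipated-shock decomposition rather than the authors' exact specification.

```latex
% Sketch of the two conditioning schemes (our notation, not the paper's).
% Noise: the mean survey forecast is the model's rational expectations
% forecast observed with error.
\[
  s_{t+h|t} = \mathbb{E}_t\, y_{t+h} + v_t, \qquad v_t \sim \mathcal{N}(0,\sigma_v^2)
\]
% News: structural shocks contain components anticipated j periods ahead,
% which the survey forecasts help reveal.
\[
  \varepsilon_t = \sum_{j=0}^{J} \eta_{j,t-j}, \qquad \eta_{j,t-j}\ \text{known at } t-j
\]
```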

2.
The Makridakis Competitions seek to identify the most accurate forecasting methods for different types of predictions. The M4 competition was the first in which a model of the type commonly described as “machine learning” outperformed the more traditional statistical approaches, winning the competition. However, many approaches that were self-labeled as “machine learning” failed to produce accurate results, which generated discussion about the respective benefits and drawbacks of “statistical” and “machine learning” approaches. Both terms have remained ill-defined in the context of forecasting. This paper introduces the terms “structured” and “unstructured” models to better define what is intended by the use of the terms “statistical” and “machine learning” in the context of forecasting based on the model’s data generating process. The mechanisms that underlie specific challenges to unstructured modeling are examined in the context of forecasting, along with common solutions. Finally, the innovations in the winning model that allowed it to overcome these challenges and produce highly accurate results are highlighted.

3.
Macroeconomic data are subject to data revisions. Yet, the usual way of generating real-time density forecasts from Bayesian Vector Autoregressive (BVAR) models makes no allowance for data uncertainty from future data revisions. We develop methods of allowing for data uncertainty when forecasting with BVAR models with stochastic volatility. First, the BVAR forecasting model is estimated on real-time vintages. Second, the BVAR model is jointly estimated with a model of data revisions such that forecasts are conditioned on estimates of the ‘true’ values. We find that this second method generally improves upon conventional practice for density forecasting, especially for the United States.
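A stylised version of the measurement block behind the second method may help; this is our shorthand, not the paper's exact model: each data vintage is the 'true' value contaminated by a revision error that shrinks as the vintage matures, and the BVAR forecast conditions on a filtered estimate of the true value.

```latex
% Stylised vintage equation (our shorthand, not the paper's exact model):
% vintage v of period-t data equals the 'true' value plus a revision error
% whose variance shrinks as the vintage matures.
\[
  y_t^{(v)} = y_t^{\ast} + \nu_t^{(v)}, \qquad
  \operatorname{Var}\bigl(\nu_t^{(v)}\bigr) \to 0 \ \text{as } v \to \infty
\]
% The BVAR forecast then conditions on filtered estimates of the true values.
\[
  \widehat{y}_t^{\ast} = \mathbb{E}\bigl[\, y_t^{\ast} \mid \text{vintages available at the forecast origin} \,\bigr]
\]
```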

4.
In the last decade, VAR models have become a widely used tool for forecasting macroeconomic time series. To improve the out-of-sample forecasting accuracy of these models, Bayesian random-walk prior restrictions are often imposed on VAR model parameters. This paper focuses on whether placing an alternative type of restriction on the parameters of unrestricted VAR models improves the out-of-sample forecasting performance of these models. The type of restriction analyzed here is based on the business cycle characteristics of U.S. macroeconomic data, and in particular, requires that the dynamic behavior of the restricted VAR model mimic the business cycle characteristics of historical data. The question posed in this paper is: would a VAR model, estimated subject to the restriction that the cyclical characteristics of simulated data from the model “match up” with the business cycle characteristics of U.S. data, generate more accurate out-of-sample forecasts than unrestricted or Bayesian VAR models?

5.
This study examines the role of financial ratios in predicting companies’ default risk using the quantile hazard model (QHM) approach and compares its results to those of the discrete hazard model (DHM). We adopt the LASSO method to select essential predictors among the variables mentioned in the literature. We demonstrate the advantage of the proposed QHM by showing that the effects of the financial ratios differ across quantile levels. While the DHM only confirms the negative effects of “stock return volatilities” and “total liabilities” and the positive effects of “stock price”, “stock excess return”, and “profitability”, at high quantile levels the QHM adds “cash and short-term investment to total assets”, “market capitalization”, and the “current liabilities ratio” to the list of factors that influence a default. More interestingly, “cash and short-term investment to total assets” and “market capitalization” switch signs at high quantile levels, showing that they influence companies with different risk levels differently. We also find evidence that default probabilities differ across industrial sectors. Lastly, our proposed QHM empirically demonstrates improved out-of-sample forecasting performance.
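A minimal sketch of the selection stage, assuming the discrete hazard model is estimated as a logit on firm-period observations (so an L1-penalised logistic regression doubles as LASSO selection); all column names and the simulated data below are illustrative, not the paper's.

```python
# Minimal sketch of LASSO selection for a discrete hazard (logit) default
# model. All column names and the simulated data are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "stock_return_volatility": rng.normal(size=n),
    "total_liabilities":       rng.normal(size=n),
    "profitability":           rng.normal(size=n),
    "market_capitalization":   rng.normal(size=n),
    "irrelevant_ratio":        rng.normal(size=n),  # LASSO should drop this
})
# Simulated default indicator: hazard driven by three of the five ratios
logit = (0.8 * X["stock_return_volatility"] + 0.6 * X["total_liabilities"]
         - 0.7 * X["profitability"] - 3.0)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# L1-penalised logit: coefficients shrunk exactly to zero are "not selected"
Xs = StandardScaler().fit_transform(X)
fit = LogisticRegressionCV(penalty="l1", solver="saga", Cs=10, cv=5,
                           max_iter=5000).fit(Xs, y)
print("retained predictors:", list(X.columns[np.abs(fit.coef_[0]) > 1e-8]))
```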

6.
US payroll employment data come from a survey and are subject to revisions. While revisions are generally small at the national level, they can be large enough at the state level to alter assessments of current economic conditions. Users must therefore exercise caution in interpreting state employment data until they are “benchmarked” against administrative data 5–16 months after the reference period. This article develops a state-space model that predicts benchmarked state employment data in real time. The model has two distinct features: (1) an explicit model of the data revision process and (2) a dynamic factor model that incorporates real-time information from other state-level labor market indicators. We find that the model reduces the average size of benchmark revisions by about 11 percent. When we optimally average the model’s predictions with those of existing models, the model reduces the average size of the revisions by about 14 percent.

7.
This paper examines the forecast performance of a cointegrated system relative to that of a comparable VAR which fails to recognize that the system is characterized by cointegration. The cointegrated system we examine is composed of three vectors: a money demand representation, a Fisher equation, and a risk premium captured by an interest rate differential. The forecasts produced by the vector error correction model (VECM) associated with this system are compared with those obtained from a corresponding differenced vector autoregression (DVAR), as well as a vector autoregression based upon the levels of the data (LVAR). Forecast evaluation is conducted both using the ‘full-system’ criterion proposed by Clements and Hendry (1993) and by comparing forecast performance for specific variables. Overall, our findings suggest that selective forecast performance improvement (especially at long forecast horizons) may be observed by incorporating knowledge of cointegration rank. Our general conclusion is that when the advantage of incorporating cointegration appears, it is generally at longer forecast horizons. This is consistent with the predictions of Engle and Yoo (1987). But we also find, consistent with Clements and Hendry (1995), that the relative gain in forecast performance clearly depends upon the chosen data transformation.
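A runnable sketch of this kind of horse race on simulated cointegrated data, using statsmodels; the bivariate system and lag choices are illustrative stand-ins for the paper's money demand system.

```python
# Sketch: VECM versus differenced VAR (DVAR) forecasts on simulated
# cointegrated data. The bivariate system here is a toy stand-in.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
T = 400
trend = np.cumsum(rng.normal(size=T))            # shared stochastic trend
y = np.column_stack([trend + rng.normal(size=T),
                     0.5 * trend + rng.normal(size=T)])
train, test = y[:350], y[350:]
h = len(test)

# VECM imposes the cointegration rank; the DVAR differences it away
vecm_fc = VECM(train, k_ar_diff=2, coint_rank=1).fit().predict(steps=h)
d = np.diff(train, axis=0)
dvar = VAR(d).fit(2)
dvar_fc = train[-1] + np.cumsum(dvar.forecast(d[-2:], h), axis=0)

for name, fc in (("VECM", vecm_fc), ("DVAR", dvar_fc)):
    print(name, "RMSE:", np.sqrt(((fc - test) ** 2).mean()))
```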

8.
In this paper, we dissect the Twitter debate about the future course of monetary policy and trace the effects of selected topics of this discourse on U.S. asset prices. We focus on the “taper tantrum” episode in 2013, a period with large revisions in expectations about future Fed policy. Based on a novel data set of 90,000 Twitter messages (“tweets”) covering the debate about Fed tapering on Twitter, we use Latent Dirichlet Allocation, a computational text analysis tool, to quantify the content of the discussion. Several estimated topic frequencies are then included in a VAR model to estimate the effects of topic shocks on asset prices. We find that the discussion about Fed policy on social media contains price-relevant information. Shocks to the discussion about the timing of the tapering, about the broader economic policy context, and about investor worries are shown to lead to significant asset price changes. We also show that the effects are mostly due to changes in the term premium of yields, consistent with the portfolio balance channel of unconventional monetary policy.
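A minimal sketch of the text step with scikit-learn's LDA; the four toy tweets and the two-topic setting are placeholders for the paper's 90,000-tweet corpus.

```python
# Sketch: quantifying tweet content with Latent Dirichlet Allocation.
# The toy corpus and topic count are placeholders, not the paper's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "fed will taper bond purchases in september",
    "tapering timing uncertain and markets nervous",
    "treasury yields jump on fed policy talk",
    "investors worry about a rising term premium",
]
X = CountVectorizer(stop_words="english").fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
# Per-tweet topic shares; aggregated over time these become VAR regressors
print(lda.transform(X).round(2))
```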

9.
We incorporate external information extracted from the European Central Bank’s Survey of Professional Forecasters into the predictions of a Bayesian VAR using entropic tilting and soft conditioning. The resulting conditional forecasts significantly improve the plain BVAR point and density forecasts. Importantly, we do not restrict the forecasts at a specific quarterly horizon but their possible paths over several horizons jointly since the survey information comes in the form of one- and two-year-ahead expectations. As well as improving the accuracy of the variable that we target, the spillover effects on “other-than-targeted” variables are relevant in size and are statistically significant. We document that the baseline BVAR exhibits an upward bias for GDP growth after the financial crisis, and our results provide evidence that survey forecasts can help mitigate the effects of structural breaks on the forecasting performance of a popular macroeconometric model.
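The core operation, entropic tilting, is compact enough to sketch: reweight the BVAR's predictive draws so that their weighted mean matches the survey expectation, while staying as close as possible (in Kullback-Leibler divergence) to the original equal weights. The two-horizon normal draws below are illustrative, not actual BVAR output.

```python
# Sketch: entropic tilting of predictive draws toward survey means.
# The draws below are illustrative, not actual BVAR output.
import numpy as np
from scipy.optimize import minimize

def entropic_tilt(draws, target_mean):
    """Weights w_i proportional to exp(gamma'(x_i - m)), with gamma chosen
    so the weighted mean equals m; this is the KL-minimising tilt."""
    g = draws - target_mean
    obj = lambda gamma: np.log(np.mean(np.exp(g @ gamma)))
    gamma = minimize(obj, np.zeros(g.shape[1]), method="BFGS").x
    w = np.exp(g @ gamma)
    return w / w.sum()

rng = np.random.default_rng(0)
draws = rng.normal(loc=[2.0, 1.5], size=(5000, 2))   # one- and two-year ahead
w = entropic_tilt(draws, np.array([1.0, 1.2]))       # survey expectations
print("tilted means:", (w @ draws).round(2))         # ~ [1.0, 1.2]
```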

10.
It is common practice to evaluate fixed-event forecast revisions in macroeconomics by regressing current forecast revisions on one-period lagged forecast revisions. Under weak-form (forecast) efficiency, the correlation between the current and one-period lagged revisions should be zero. The empirical findings in the literature suggest that this null hypothesis of zero correlation is rejected frequently, and the correlation can be either positive (which is widely interpreted in the literature as “smoothing”) or negative (which is widely interpreted as “over-reacting”). We propose a methodology for interpreting such non-zero correlations in a straightforward and clear manner. Our approach is based on the assumption that numerical forecasts can be decomposed into both an econometric model and random expert intuition. We show that the interpretation of the sign of the correlation between the current and one-period lagged revisions depends on the process governing intuition, and the current and lagged correlations between intuition and news (or shocks to the numerical forecasts). It follows that the estimated non-zero correlation cannot be given a direct interpretation in terms of either smoothing or over-reaction.
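The regression the abstract starts from is simple to write down; the simulated revision series below is a placeholder.

```python
# Sketch: weak-form efficiency test, regressing current fixed-event
# forecast revisions on one-period lagged revisions (toy data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
rev = rng.normal(size=200)            # stand-in for a revision series
res = sm.OLS(rev[1:], sm.add_constant(rev[:-1])).fit()
# beta > 0 is usually read as "smoothing", beta < 0 as "over-reaction";
# the paper argues this direct reading can be misleading.
print("beta:", res.params[1].round(3), " p-value:", res.pvalues[1].round(3))
```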

11.
The M4 competition is the continuation of three previous competitions, started more than 45 years ago, whose purpose was to learn how to improve forecasting accuracy and how such learning can be applied to advance the theory and practice of forecasting. The purpose of M4 was to replicate the results of the previous ones and extend them in three directions: first, to significantly increase the number of series; second, to include Machine Learning (ML) forecasting methods; and third, to evaluate both point forecasts and prediction intervals. The five major findings of the M4 Competition are: 1. Out of the 17 most accurate methods, 12 were “combinations” of mostly statistical approaches. 2. The biggest surprise was a “hybrid” approach that utilized both statistical and ML features. This method’s average sMAPE was close to 10% more accurate than the combination benchmark used to compare the submitted methods. 3. The second most accurate method was a combination of seven statistical methods and one ML one, with the weights for the averaging being calculated by a ML algorithm that was trained to minimize the forecasting error. 4. The two most accurate methods also achieved an amazing success in specifying the 95% prediction intervals correctly. 5. The six pure ML methods performed poorly, with none of them being more accurate than the combination benchmark and only one being more accurate than Naïve2. This paper presents some initial results of M4, its major findings, and a logical conclusion. Finally, it outlines what the authors consider to be the way forward for the field of forecasting.
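For reference, here is a sketch of two ingredients named above: the sMAPE accuracy measure and a simplified Naïve2 benchmark. The official M4 Naïve2 first tests for seasonality and uses a classical multiplicative decomposition; the version below always deseasonalises with simple seasonal indices.

```python
# Sketch: sMAPE and a simplified Naive2 (seasonally adjusted random walk).
# The official M4 Naive2 adds a seasonality pre-test; this version does not.
import numpy as np

def smape(actual, forecast):
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 200.0 * np.mean(np.abs(f - a) / (np.abs(a) + np.abs(f)))

def naive2(history, horizon, m=12):
    x = np.asarray(history, float)
    idx = np.array([x[i::m].mean() for i in range(m)])   # seasonal indices
    idx /= idx.mean()
    last = x[-1] / idx[(len(x) - 1) % m]                 # deseasonalised level
    return last * idx[np.arange(len(x), len(x) + horizon) % m]

series = 100 + 10 * np.sin(np.arange(48) * 2 * np.pi / 12)
print(smape(series[-12:], naive2(series[:-12], 12)))
```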

12.
Impact factors
In this paper we discuss the sensitivity of forecasts with respect to the information set considered in prediction; a sensitivity measure, called the impact factor (IF), is defined. This notion is specialized to the case of VAR processes integrated of order 0, 1 and 2. For stationary VARs this measure corresponds to the sum of the impulse response coefficients; for integrated VAR systems, the IF has a direct interpretation in terms of long-run forecasts. Various applications of this concept are reviewed; they include questions of policy effectiveness and of forecast uncertainty due to data revisions. A unified approach to inference on the IF is given, showing under what circumstances standard asymptotic inference can also be conducted in systems integrated of order 1 and 2. It is shown how the results reported here can be used to calculate similar sensitivity measures for models with a simultaneity structure.
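For the stationary case the abstract describes, the IF is easy to compute: the sum of all impulse response matrices of a VAR(p) equals (I - A_1 - ... - A_p)^{-1}. A toy check on simulated data:

```python
# Sketch: impact factor for a stationary VAR = sum of impulse responses
# = inv(I - A_1 - ... - A_p). Simulated bivariate VAR(1) as a toy check.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A1 @ y[t - 1] + rng.normal(size=2)

res = VAR(y).fit(1)
IF = np.linalg.inv(np.eye(2) - res.coefs.sum(axis=0))  # long-run multiplier
print(IF.round(2))   # close to inv(I - A1)
```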

13.
The excellent article by David Hendry describes how to nest “theory-driven” and “data-driven” approaches when deciding between alternative models in macroeconomics. The article’s final conclusion is that theory allows the econometrician to select a set of variables, while data allows him/her to select across a wide range of alternatives: lag selection, structural breaks, functional forms, etc. The aim of this discussion is to provide the reader with an illustration of this proposed mixing of theory and data in one of the fields mentioned in the paper, macroeconomic forecasting.

14.
This paper surveys the rise of the Vector AutoRegressive (VAR) approach from a historical perspective. It shows that the VAR approach arises from a fusion of the Cowles Commission tradition and time series statistical methods, catalysed by the rational expectations (RE) movement, and that the approach offers a systematic solution to the issue of ‘model choice’ bypassed by Cowles researchers, hence essentially inheriting and enhancing the Cowles legacy rather than abandoning or opposing it. By tackling model choice, however, the VAR approach helps reform econometrics by shifting the research focus from the measurement of given theories to the identification/verification of data-coherent theories.

15.
When questions in business surveys about the direction of change have three reply options, “up”, “down”, and “unchanged”, a common practice is to release the results as balance indices. These are linear combinations of the response shares, i.e., the percentage share of the respondents who answered “up” minus the percentage share of those who answered “down”. Forecasters traditionally use these indices for short-term business cycle forecasting. Survey response shares can also be combined non-linearly into alternative indices, using the Carlson–Parkin method. Using IFO and ISM data, this paper tests the relative performance of Carlson–Parkin type indices versus balance indices for the short-term forecasting of industrial production growth. The main finding is that the two types of indices show no difference in forecasting performance during the Great Moderation. However, the Carlson–Parkin type indices outperform the balance indices during periods with higher output volatilities, such as before and after the Great Moderation.
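A compact sketch of the two quantifications; the Carlson–Parkin step assumes normally distributed expected changes and a symmetric indifference threshold delta, whose value below is arbitrary.

```python
# Sketch: balance index versus Carlson-Parkin quantification of
# "up"/"down" response shares. delta (indifference threshold) is arbitrary.
from scipy.stats import norm

def balance(up, down):
    return up - down                    # shares in percentage points

def carlson_parkin(up, down, delta=0.5):
    # P(x < -delta) = down and P(x > delta) = up under x ~ N(mu, sigma^2)
    a = norm.ppf(down)                  # equals (-delta - mu) / sigma
    b = norm.ppf(1.0 - up)              # equals ( delta - mu) / sigma
    sigma = 2.0 * delta / (b - a)
    mu = delta * (a + b) / (a - b)
    return mu, sigma

print(balance(40.0, 20.0))              # shares in percent
print(carlson_parkin(0.40, 0.20))       # shares as fractions
```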

16.
Data revisions to national accounts pose a serious challenge to policy decision making. Well-behaved revisions should be unbiased, small, and unpredictable. This article shows that revisions to German national accounts are biased, large, and predictable. Moreover, using filtering techniques designed to process data subject to revisions, the real-time forecasting performance of initial releases can be increased by up to 23%. For total real GDP growth, however, the initial release is an optimal forecast. Yet, given the results for disaggregated variables, the averaging out of biases and inefficiencies at the aggregate GDP level appears to be good luck rather than good forecasting.

17.
This paper surveys the empirical research on fiscal policy analysis based on real-time data. This literature can be broadly divided into four groups that focus on: (1) the statistical properties of revisions in fiscal data; (2) the political and institutional determinants of projection errors by governments; (3) the reaction of fiscal policies to the business cycle; and (4) the use of real-time fiscal data in structural vector autoregression (VAR) models. It emerges, first, that fiscal revisions are large and initial releases are biased estimates of final values. Second, strong fiscal rules and institutions lead to more accurate releases of fiscal data and smaller deviations of fiscal outcomes from government plans. Third, the cyclical stance of fiscal policies is estimated to be more ‘counter-cyclical’ when real-time data are used instead of ex post data. Fourth, real-time data can be useful for the identification of fiscal shocks. Finally, it is shown that existing real-time fiscal data sets cover only a limited number of countries and variables. For example, real-time data for developing countries are generally unavailable. In addition, real-time data on European countries are often missing, especially with respect to government revenues and expenditures. Therefore, more work is needed in this field.

18.
This commentary introduces a correlation analysis of the top-10 ranked forecasting methods that participated in the M4 forecasting competition. The “M” competitions attempt to promote and advance research in the field of forecasting by inviting both industry and academia to submit forecasting algorithms for evaluation over a large corpus of real-world datasets. After performing the initial analysis to derive the errors of each method, we proceed to investigate the pairwise correlations among them in order to understand the extent to which they produce errors in similar ways. Based on our results, we conclude that there is indeed a certain degree of correlation among the top-10 ranked methods, largely due to the fact that many of them consist of a combination of well-known, statistical and machine learning techniques. This fact has a strong impact on the results of the correlation analysis, and therefore leads to similar forecasting error patterns.
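A minimal sketch of the pairwise error-correlation step, with simulated per-series errors standing in for the competition results.

```python
# Sketch: pairwise correlation of forecasting errors across methods.
# The error matrix is simulated; rows are series, columns are methods.
import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(size=(1000, 1))                   # shared error component
errors = common + 0.5 * rng.normal(size=(1000, 10))   # 10 "methods"
corr = np.corrcoef(errors, rowvar=False)              # 10 x 10 matrix
print(corr.round(2))
```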

19.
Interest in the use of “big data” when it comes to forecasting macroeconomic time series such as private consumption or unemployment has increased; however, applications to the forecasting of GDP remain rather rare. This paper incorporates Google search data into a bridge equation model, a version of which usually belongs to the suite of forecasting models at central banks. We show how such big data information can be integrated, with an emphasis on the appeal of the underlying model in this respect. As the decision as to which Google search terms should be added to which equation is crucial, both for the forecasting performance itself and for the economic consistency of the implied relationships, we compare different (ad-hoc, factor and shrinkage) approaches in terms of their pseudo real-time out-of-sample forecast performance for GDP, various GDP components and monthly activity indicators. We find that sizeable gains can indeed be obtained by using Google search data, with the best-performing Google variable selection approach varying according to the target variable. Thus, assigning the selection methods flexibly to the targets leads to the most robust outcomes overall in all layers of the system.
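A minimal sketch of one of the compared selection approaches: a bridge equation in which monthly indicators (such as Google search terms) are aggregated to quarterly frequency and LASSO picks the regressors. All series are simulated placeholders.

```python
# Sketch: bridge equation with LASSO selection among candidate monthly
# indicators (e.g. Google search terms). All data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
q = 80                                            # quarters
monthly = rng.normal(size=(q * 3, 6))             # 6 candidate monthly series
Xq = monthly.reshape(q, 3, 6).mean(axis=1)        # bridge: monthly -> quarterly
gdp = 0.7 * Xq[:, 0] - 0.4 * Xq[:, 1] + 0.3 * rng.normal(size=q)

lasso = LassoCV(cv=5).fit(Xq, gdp)
print("selected indicator columns:", np.flatnonzero(np.abs(lasso.coef_) > 1e-8))
```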

20.
This paper uses real-time data to mimic real-time GDP forecasting activity. Through automatic searches for the best indicators for predicting GDP one and four steps ahead, we compare the out-of-sample forecasting performance of adaptive models using different data vintages, and produce three main findings. First, despite data revisions, the forecasting performance of models with indicators is better, but this advantage tends to vanish over longer forecasting horizons. Second, the practice of using fully updated datasets at the time the forecast is made (i.e., taking the best available measures of today's economic situation) does not appear to bring any effective improvement in forecasting ability: the first GDP release is predicted equally well by models using real-time data as by models using the latest available data. Third, although the first release is a rational forecast of GDP data after all statistical revisions have taken place, the forecast based on the latest available GDP data (i.e. the “temporarily best” measures) may be improved by combining preliminary official releases with one-step-ahead forecasts.
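One way to read the third finding is as a forecast combination; a toy sketch, with all series simulated and the regression-based weighting our illustrative choice rather than the paper's method.

```python
# Sketch: combining the preliminary official release with a model's
# one-step-ahead forecast; weights estimated on past data (toy series).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
final = rng.normal(size=300)                     # post-revision GDP growth
release = final + 0.5 * rng.normal(size=300)     # noisy first release
model_fc = final + 0.7 * rng.normal(size=300)    # one-step-ahead forecast

X = sm.add_constant(np.column_stack([release, model_fc]))
comb = sm.OLS(final, X).fit()                    # combination weights
print(comb.params.round(2))
```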
