Similar Articles
Found 20 similar articles (search time: 218 ms)
1.
The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely, factor models, large‐scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two‐step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination rather than separately improves substantially the accuracy of forecasts, both when the whole set of variables is to be forecast and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression valid when the dimension of the system tends to infinity, which opens the way to using large‐scale reduced rank models for empirical analysis. Copyright © 2010 John Wiley & Sons, Ltd.
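The combination of shrinkage and rank reduction described in this abstract can be illustrated in a few lines. The sketch below is a minimal numpy illustration, not the authors' estimator: it ridge-regresses a simulated low-rank VAR(1) and then truncates the SVD of the ridge coefficient matrix. All dimensions and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small VAR(1) whose true coefficient matrix has rank 2.
n_vars, n_obs, rank = 6, 300, 2
A = rng.normal(size=(n_vars, rank)) @ rng.normal(size=(rank, n_vars))
A *= 0.5 / np.max(np.abs(np.linalg.eigvals(A)))   # enforce a stable VAR
Y = np.zeros((n_obs, n_vars))
for t in range(1, n_obs):
    Y[t] = Y[t - 1] @ A.T + rng.normal(scale=0.5, size=n_vars)

X, Z = Y[:-1], Y[1:]                      # lagged regressors and targets

# Step 1: shrinkage -- a ridge estimate of the full coefficient matrix.
lam = 1.0
B_ridge = np.linalg.solve(X.T @ X + lam * np.eye(n_vars), X.T @ Z).T

# Step 2: rank reduction -- truncate the SVD of the ridge estimate.
U, s, Vt = np.linalg.svd(B_ridge, full_matrices=False)
B_rr = U[:, :rank] * s[:rank] @ Vt[:rank]

forecast = Y[-1] @ B_rr.T                 # one-step-ahead forecast
```

The point of the two-step structure is that the ridge penalty stabilizes the high-dimensional estimate before the rank restriction is imposed, which is the combination the abstract finds most accurate.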

2.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro‐area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro‐area‐wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss the use as indicators of the estimated factors from a dynamic factor model for all the indicators. Secondly, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors and methods of pooling the ex ante single‐indicator or factor‐based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.

3.
This paper unifies two methodologies for multi‐step forecasting from autoregressive time series models. The first is covered in most of the traditional time series literature: it uses short‐horizon forecasts to compute longer‐horizon forecasts, while the estimation method minimizes one‐step‐ahead forecast errors. The second methodology considers direct multi‐step estimation and forecasting. In this paper, we show that both approaches are special (boundary) cases of a technique called partial least squares (PLS) when this technique is applied to an autoregression. We outline this methodology and show how it unifies the other two. We also illustrate the practical relevance of the resultant PLS autoregression for 17 quarterly, seasonally adjusted, industrial production series. Our main finding is that both boundary models can be improved by including factors indicated by the PLS technique.
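The basic ingredient of a PLS autoregression can be sketched with a one-component PLS fit of a direct h-step-ahead autoregression. This is a hand-rolled numpy illustration of the PLS idea, not the paper's procedure; the AR(2) data-generating process and lag choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) series; build p lagged regressors and an h-step target.
n, p, h = 400, 4, 2
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()

T = np.arange(p - 1, n - h)
X = np.column_stack([y[T - j] for j in range(p)])   # lags y[t], ..., y[t-p+1]
z = y[T + h]                                        # direct h-step target

# One-component PLS: weight vector proportional to cov(lags, target).
Xc, zc = X - X.mean(0), z - z.mean()
w = Xc.T @ zc
w /= np.linalg.norm(w)
t_score = Xc @ w                                    # first PLS factor
q = (t_score @ zc) / (t_score @ t_score)            # loading on the target
fit = z.mean() + q * t_score                        # h-step fitted values
```

With the maximal number of components this collapses to the direct OLS multi-step regression, which is one way to see the "boundary case" claim in the abstract.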

4.
In this paper, we apply the model selection approach based on likelihood ratio (LR) tests developed in Vuong (1986) to the problem of choosing between two normal linear regression models which are non-nested. We explicitly derive the procedure when the competing linear models are both misspecified. Some simplifications arise when the models are contained in a larger correctly specified linear regression model, or when one competing linear model is correctly specified.

5.
《Statistica Neerlandica》2018,72(2):126-156
In this paper, we study application of Le Cam's one‐step method to parameter estimation in ordinary differential equation models. This computationally simple technique can serve as an alternative to numerical evaluation of the popular non‐linear least squares estimator, which typically requires the use of a multistep iterative algorithm and repetitive numerical integration of the ordinary differential equation system. The one‐step method starts from a preliminary √n‐consistent estimator of the parameter of interest and next turns it into an asymptotic (as the sample size n → ∞) equivalent of the least squares estimator through a numerically straightforward procedure. We demonstrate performance of the one‐step estimator via extensive simulations and real data examples. The method enables the researcher to obtain both point and interval estimates. The preliminary √n‐consistent estimator that we use depends on non‐parametric smoothing, and we provide a data‐driven methodology for choosing its tuning parameter and support it by theory. An easy implementation scheme of the one‐step method for practical use is pointed out.
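The one-step idea — start from a preliminary estimate and take a single Gauss–Newton step toward the least squares solution — can be sketched on a toy nonlinear regression. The paper works with ODE models and a smoothing-based preliminary estimator; the exponential model and the fixed preliminary value below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear regression: y_i = exp(-theta * x_i) + noise.
theta_true = 1.5
x = np.linspace(0.0, 3.0, 200)
y = np.exp(-theta_true * x) + rng.normal(scale=0.05, size=x.size)

def f(th):   # regression function
    return np.exp(-th * x)

def df(th):  # derivative of f with respect to theta
    return -x * np.exp(-th * x)

theta0 = 1.0                         # crude stand-in for a consistent preliminary estimate
r = y - f(theta0)                    # residuals at the preliminary value
J = df(theta0)
theta1 = theta0 + (J @ r) / (J @ J)  # a single Gauss-Newton step
```

A single step like this inherits the asymptotic efficiency of full nonlinear least squares while avoiding the iterative solver, which is the computational appeal the abstract highlights.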

6.
We analyse the forecasting power of different monetary aggregates and credit variables for US GDP. Special attention is paid to the influence of the recent financial market crisis. For that purpose, in the first step we use a three-variable single-equation framework with real GDP, an interest rate spread and a monetary or credit variable, in forecasting horizons of one to eight quarters. This first stage thus serves to pre-select the variables with the highest forecasting content. In a second step, we use the selected monetary and credit variables within different VAR models, and compare their forecasting properties against a benchmark VAR model with GDP and the term spread (and univariate AR models). Our findings suggest that narrow monetary aggregates, as well as different credit variables, comprise useful predictive information for economic dynamics beyond that contained in the term spread. However, this finding only holds true in a sample that includes the most recent financial crisis. Looking forward, an open question is whether this change in the relationship between money, credit, the term spread and economic activity has been the result of a permanent structural break or whether we might return to the previous relationships.

7.
This paper develops estimators for dynamic microeconomic models with serially correlated unobserved state variables using sequential Monte Carlo methods to estimate the parameters and the distribution of the unobservables. If persistent unobservables are ignored, the estimates can be subject to a dynamic form of sample selection bias. We focus on single‐agent dynamic discrete‐choice models and dynamic games of incomplete information. We propose a full‐solution maximum likelihood procedure and a two‐step method and use them to estimate an extended version of the capital replacement model of Rust with the original data and in a Monte Carlo study. Copyright © 2015 John Wiley & Sons, Ltd.
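A bootstrap particle filter, the core sequential Monte Carlo ingredient for integrating out a serially correlated unobservable, can be sketched for a stylized linear-Gaussian state. This is not the authors' model — it is a minimal stand-in, with all parameter values illustrative — but it shows how the filter produces a likelihood for the persistence parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent AR(1) state observed with Gaussian measurement error.
rho, sig_x, sig_y, T, N = 0.8, 1.0, 0.5, 100, 1000
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal(scale=sig_x)
y = x + rng.normal(scale=sig_y, size=T)

def pf_loglik(rho_c):
    """Bootstrap particle filter estimate of the log-likelihood at rho_c."""
    # Initialize particles from the stationary distribution of the state.
    particles = rng.normal(scale=sig_x / np.sqrt(1 - rho_c**2), size=N)
    ll = 0.0
    for t in range(T):
        if t > 0:  # propagate particles through the state equation
            particles = rho_c * particles + rng.normal(scale=sig_x, size=N)
        # Weight by the measurement density, accumulate the likelihood.
        w = np.exp(-0.5 * ((y[t] - particles) / sig_y) ** 2) / (sig_y * np.sqrt(2 * np.pi))
        ll += np.log(w.mean())
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choice(particles, size=N, p=w / w.sum())
    return ll
```

Maximizing `pf_loglik` over the parameters is the full-solution maximum likelihood route; the filtered particles themselves give the distribution of the unobservable.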

8.
In this article, we merge two strands from the recent econometric literature. The first is factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting, although there is some disagreement in the literature as to the appropriate estimation method. The second is forecast methods based on mixed‐frequency data sampling (MIDAS). This regression technique can take into account unbalanced datasets that emerge from publication lags of high‐ and low‐frequency indicators, a problem practitioners have to cope with in real time. In this article, we introduce Factor MIDAS, an approach for nowcasting and forecasting low‐frequency variables like gross domestic product (GDP) exploiting information in a large set of higher‐frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods and allow for a comparison of the alternative factor estimation methods with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real‐time forecast applications owing to publication lags. In particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state‐space factor models. As an empirical illustration of the technique, we use a large monthly dataset of the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of the mixed‐frequency techniques that can exploit timely information from business cycle indicators.
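For a single monthly indicator, the unrestricted MIDAS variant mentioned in the abstract amounts to an OLS regression of the quarterly variable on each month of the quarter separately. The sketch below uses simulated data with illustrative coefficients; it is not the article's Factor MIDAS setup, just the unrestricted projection step.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated monthly indicator driving quarterly GDP growth.
n_q = 120
monthly = rng.normal(size=3 * n_q)
X = monthly.reshape(n_q, 3)                 # the 3 months of each quarter
gdp = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.3, size=n_q)

# Unrestricted MIDAS: each monthly lag gets its own OLS coefficient,
# rather than a tightly parameterized lag polynomial.
Z = np.column_stack([np.ones(n_q), X])
beta = np.linalg.lstsq(Z, gdp, rcond=None)[0]
nowcast = Z[-1] @ beta                      # projection for the latest quarter
```

In the article's setting the monthly columns would be estimated factors rather than a raw indicator, and missing end-of-sample months (the ragged edge) would be filled by the factor model.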

9.
We suggest using a factor-model-based backdating procedure to construct historical Euro‐area macroeconomic time series data for the pre‐Euro period. We argue that this is a useful alternative to standard contemporaneous aggregation methods. The article investigates, for a number of Euro‐area variables, whether forecasts based on the factor‐backdated data are more precise than those obtained with standard area‐wide data. A recursive pseudo‐out‐of‐sample forecasting experiment using quarterly data is conducted. Our results suggest that some key variables (e.g. real GDP, inflation and the long‐term interest rate) can indeed be forecast more precisely with the factor‐backdated data.

10.
We examine how the accuracy of real‐time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest‐available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple‐vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts. Copyright © 2012 John Wiley & Sons, Ltd.

11.
This paper examines the out-of-sample forecasting properties of six different economic uncertainty variables for the growth of the real M2 and real M4 Divisia money series for the U.S. using monthly data. The core contention is that information on economic uncertainty improves the forecasting accuracy. We estimate vector autoregressive models using the iterated rolling-window forecasting scheme, in combination with modern regularisation techniques from the field of machine learning. Applying the Hansen-Lunde-Nason model confidence set approach under two different loss functions reveals strong evidence that uncertainty variables that are related to financial markets, the state of the macroeconomy or economic policy provide additional informational content when forecasting monetary dynamics. The use of regularisation techniques improves the forecast accuracy substantially.

12.
The performance of six classes of models in forecasting different types of economic series is evaluated in an extensive pseudo out‐of‐sample exercise. One of these forecasting models, regularized data‐rich model averaging (RDRMA), is new in the literature. The findings can be summarized in four points. First, RDRMA is difficult to beat in general and generates the best forecasts for real variables. This performance is attributed to the combination of regularization and model averaging, and it confirms that a smart handling of large data sets can lead to substantial improvements over univariate approaches. Second, the ARMA(1,1) model emerges as the best to forecast inflation changes in the short run, while RDRMA dominates at longer horizons. Third, the returns on the S&P 500 index are predictable by RDRMA at short horizons. Finally, the forecast accuracy and the optimal structure of the forecasting equations are quite unstable over time.

13.
Factor models have been applied extensively for forecasting when high‐dimensional datasets are available. In this case, the number of variables can be very large. For instance, usual dynamic factor models in central banks handle over 100 variables. However, there is a growing body of literature indicating that more variables do not necessarily lead to estimated factors with lower uncertainty or better forecasting results. This paper investigates the usefulness of partial least squares techniques that take into account the variable to be forecast when reducing the dimension of the problem from a large number of variables to a smaller number of factors. We propose different approaches of dynamic sparse partial least squares as a means of improving forecast efficiency by simultaneously taking into account the variable to be forecast while forming an informative subset of predictors, instead of using all the available ones to extract the factors. We use the well‐known Stock and Watson database to check the forecasting performance of our approach. The proposed dynamic sparse models show good performance in improving efficiency compared to widely used factor methods in macroeconomic forecasting. Copyright © 2014 John Wiley & Sons, Ltd.

14.
We perform a fully real‐time nowcasting (forecasting) exercise of US GDP growth using Giannone et al.'s (2008) factor model framework. To this end, we have constructed a real‐time database of vintages from 1997 to 2010 for a panel of variables, enabling us to reproduce, for any given day in that range, the exact information that was available to a real‐time forecaster. We track the daily evolution of the model performance along the real‐time data flow and find that the precision of the nowcasts increases with information releases and the model fares well relative to the Survey of Professional Forecasters (SPF).

15.
This article considers ultrahigh-dimensional forecasting problems with survival response variables. We propose a two-step model averaging procedure for improving the forecasting accuracy of the true conditional mean of a survival response variable. The first step is to construct a class of candidate models, each with low-dimensional covariates. For this, a feature screening procedure is developed to separate the active and inactive predictors through a marginal Buckley–James index, and to group covariates with a similar index size together to form regression models with survival response variables. The proposed screening method can select active predictors under covariate-dependent censoring, and enjoys sure screening consistency under mild regularity conditions. The second step is to find the optimal model weights for averaging by adapting a delete-one cross-validation criterion, without the standard constraint that the weights sum to one. The theoretical results show that the delete-one cross-validation criterion achieves the lowest possible forecasting loss asymptotically. Numerical studies demonstrate the superior performance of the proposed variable screening and model averaging procedures over existing methods.
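The second step — delete-one cross-validation weights without a sum-to-one constraint — can be sketched for ordinary OLS candidate models, using the hat-matrix shortcut for leave-one-out predictions. This is a simplified stand-in for the article's survival-response setting; the nested candidate models and data-generating process are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear data; candidate models are nested OLS fits on growing covariate sets.
n, p = 200, 6
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([1.0, 0.5, 0.25]) + rng.normal(size=n)

def loo_pred(Xm):
    """Leave-one-out OLS predictions via the hat-matrix shortcut."""
    H = Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)
    fit = H @ y
    return y - (y - fit) / (1 - np.diag(H))   # yhat_(-i) = y_i - e_i/(1 - h_ii)

# Delete-one CV predictions from each candidate model.
P = np.column_stack([loo_pred(X[:, : m + 1]) for m in range(p)])

# Optimal weights: least squares of y on the CV predictions,
# with no constraint that the weights sum to one.
w = np.linalg.lstsq(P, y, rcond=None)[0]
avg_pred = P @ w
```

Because the weights minimize the delete-one criterion over all linear combinations, the averaged forecast can do no worse in CV loss than the best single candidate model.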

16.
Baumeister and Kilian (Journal of Business and Economic Statistics, 2015, 33(3), 338–351) combine forecasts from six empirical models to predict real oil prices. In this paper, we broadly reproduce their main economic findings, employing their preferred measures of the real oil price and other real‐time variables. Mindful of the importance of Brent crude oil as a global price benchmark, we extend consideration to the North Sea‐based measure and update the evaluation sample to 2017:12. We model the oil price futures curve using a factor‐based Nelson–Siegel specification estimated in real time to fill in missing values for oil price futures in the raw data. We find that the combined forecasts for Brent are as effective as for other oil price measures. The extended sample using the oil price measures adopted by Baumeister and Kilian yields similar results to those reported in their paper. Also, the futures‐based model improves forecast accuracy at longer horizons.
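Filling a missing point on a futures curve with a Nelson–Siegel factor fit can be sketched as follows. The maturities, decay parameter, and synthetic prices below are illustrative, not the paper's data or exact specification.

```python
import numpy as np

def ns_loadings(tau, lam=0.06):
    """Nelson-Siegel level/slope/curvature loadings at maturities tau (months)."""
    x = lam * tau
    level = np.ones_like(tau, dtype=float)
    slope = (1 - np.exp(-x)) / x
    curve = slope - np.exp(-x)
    return np.column_stack([level, slope, curve])

tau = np.array([1.0, 3, 6, 9, 12, 18, 24])
true_beta = np.array([70.0, -5.0, 2.0])       # illustrative factor values
prices = ns_loadings(tau) @ true_beta         # synthetic futures curve

# Pretend the 6-month contract is missing: fit the three factors by OLS
# on the observed maturities, then interpolate the gap from the fit.
obs = [0, 1, 3, 4, 5, 6]
L = ns_loadings(tau[obs])
beta = np.linalg.lstsq(L, prices[obs], rcond=None)[0]
filled = ns_loadings(np.array([6.0])) @ beta  # imputed 6-month price
```

Re-estimating the three factors each period, using only the contracts observed in real time, is what makes this a practical gap-filling device for ragged futures data.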

17.
This paper merges two specifications recently developed in the forecasting literature: the MS‐MIDAS model (Guérin and Marcellino, 2013) and the factor‐MIDAS model (Marcellino and Schumacher, 2010). The MS‐factor MIDAS model that we introduce incorporates the information provided by a large data set consisting of mixed frequency variables and captures regime‐switching behaviours. Monte Carlo simulations show that this specification tracks the dynamics of the process and predicts the regime switches successfully, both in‐sample and out‐of‐sample. We apply this model to US data from 1959 to 2010 and properly detect recessions by exploiting the link between GDP growth and higher frequency financial variables.

18.
This paper both narrowly and widely replicates the results of Anundsen et al. (Journal of Applied Econometrics, 2016, 31(7), 1291–1311). I am able to reproduce the same results as theirs. Furthermore, I find that allowing for time‐varying parameters of early warning system models can considerably improve the in‐sample model fit and out‐of‐sample forecasting performance based on an expanding window forecasting exercise.

19.
This study examines whether geographic information disclosed at an increasingly disaggregated level (specifically, consolidated vs. continent vs. country) results in increased predictive ability of company operations (specifically, sales, gross profit, and earnings). Multinational corporations (MNCs) are formed using a simulated merger approach by combining the annual operating results of six individual firms, one from each of six countries. This approach makes it possible to compare the forecasting accuracy of data disclosed at the country, continent, and consolidated levels, which is not possible using current geographic segment disclosures. Previous studies using year-ahead forecast models implicitly assume the predictive factors included in the models are significant in forecasting operating results. Using regression forecast models, this study tests whether the predictive factors included in the models are effective in forecasting operating results by examining the direction, size, and significance of the regression coefficient estimates. The coefficients provide evidence that exchange rate changes, inflation, and real GNP growth are useful in forecasting annual sales and gross profit, whereas, at least for this sample and time period, they are not significant variables in forecasting annual earnings. The results indicate that the accuracy of forecasts increases as sales and gross profit are disclosed at a more disaggregated geographic level. The hypothesized relationship between consolidated, continent, and country levels, while holding strongly under perfect foresight, holds to a lesser extent using forecasts of exchange rates, inflation, and real GNP.

20.
The objective of the paper is to compare the informational efficiency of five macroeconometric and one statistical quarterly forecasting models. The results suggest that the forecasters inefficiently utilize readily available economic information. The qualitative effect of a particular information variable is the same across all forecasters exhibiting inefficiency. Further, the magnitudes of the coefficients on significant information variables are quite close. In particular, real GNP forecasts appear not to fully incorporate information about lagged M1 growth and lagged changes in housing starts. Deflator forecasts can be improved by more fully specifying the degree of slackness in the economy as captured by capacity utilization and changes in the labor market.

