Similar Literature
20 similar documents found.
1.
Temporal aggregation in general introduces a moving‐average (MA) component in the aggregated model. A similar feature emerges when not all but only a few variables are aggregated, which generates a mixed‐frequency (MF) model. The MA component is generally neglected, likely to preserve the possibility of ordinary least squares estimation, but the consequences have never been properly studied in the MF context. In this paper we show, analytically, in Monte Carlo simulations and in a forecasting application on US macroeconomic variables, the relevance of considering the MA component in MF mixed‐data sampling (MIDAS) and unrestricted MIDAS models (MIDAS–autoregressive moving average (ARMA) and UMIDAS‐ARMA). Specifically, the simulation results indicate that the short‐term forecasting performance of MIDAS‐ARMA and UMIDAS‐ARMA is better than that of, respectively, MIDAS and UMIDAS. The empirical applications on nowcasting US gross domestic product (GDP) growth, investment growth, and GDP deflator inflation confirm this ranking. Moreover, in both the simulation and empirical results, MIDAS‐ARMA is better than UMIDAS‐ARMA.
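To make the point concrete, here is a small Python sketch (our illustration, not the authors' code; all series are simulated and the parameter values and variable names are made up) that builds a quarterly target whose disturbance carries the ARMA structure that temporal aggregation typically induces, and then estimates a UMIDAS-ARMA specification by maximum likelihood through statsmodels' SARIMAX:

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
T_m = 600                                  # simulated months (200 quarters)
x = rng.standard_normal(T_m)               # monthly indicator

# Unrestricted MIDAS regressors: the three monthly observations of each quarter.
X_q = pd.DataFrame({"x_m3": x[2::3], "x_m2": x[1::3], "x_m1": x[0::3]})
n_q = len(X_q)

# Quarterly disturbance with the ARMA(1,1) structure that aggregation leaves behind.
e = rng.standard_normal(n_q)
u = np.zeros(n_q)
for t in range(1, n_q):
    u[t] = 0.4 * u[t - 1] + e[t] + 0.5 * e[t - 1]
y_q = 0.6 * X_q["x_m3"] + 0.3 * X_q["x_m2"] + 0.1 * X_q["x_m1"] + u

# UMIDAS-ARMA: regress the quarterly variable on the monthly lags with ARMA(1,1) errors.
umidas_arma = SARIMAX(y_q, exog=X_q, order=(1, 0, 1), trend="c").fit(disp=False)
print(umidas_arma.params)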

2.
In this article, we merge two strands from the recent econometric literature. First, factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting. However, there is some disagreement in the literature as to the appropriate method. Second, forecast methods based on mixed‐data sampling (MIDAS). This regression technique can take into account unbalanced datasets that emerge from publication lags of high‐ and low‐frequency indicators, a problem practitioners have to cope with in real time. In this article, we introduce Factor MIDAS, an approach for nowcasting and forecasting low‐frequency variables like gross domestic product (GDP) exploiting information in a large set of higher‐frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods and allow for a comparison of the alternative factor estimation methods with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real‐time forecast applications owing to publication lags. In particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state‐space factor models. As an empirical illustration of the technique, we use a large monthly dataset of the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of the mixed‐frequency techniques that can exploit timely information from business cycle indicators.
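A minimal Python sketch of the basic Factor-MIDAS idea follows (simulated data; the ragged-edge handling and the smoothed variants described above are deliberately omitted): a static principal-component factor is extracted from a standardized monthly panel and quarterly GDP growth is projected on the factor's within-quarter observations by OLS.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T_m, N = 360, 50                          # months and number of monthly indicators
common = rng.standard_normal(T_m)         # unobserved common component
panel = np.outer(common, rng.uniform(0.5, 1.5, N)) + rng.standard_normal((T_m, N))

# Static principal-component factor: leading eigenvector of the standardized panel.
Z = (panel - panel.mean(0)) / panel.std(0)
eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
factor = Z @ eigvec[:, -1]                # first principal component, monthly frequency

# Unrestricted MIDAS projection: quarterly GDP growth on the factor in each month of the quarter.
F = np.column_stack([factor[2::3], factor[1::3], factor[0::3]])
gdp_growth = 0.5 * factor[2::3] + 0.2 * factor[1::3] + 0.5 * rng.standard_normal(T_m // 3)
factor_midas = sm.OLS(gdp_growth, sm.add_constant(F)).fit()
print(factor_midas.params)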

3.
We suggest using a factor‐model‐based backdating procedure to construct historical Euro‐area macroeconomic time series data for the pre‐Euro period. We argue that this is a useful alternative to standard contemporaneous aggregation methods. The article investigates, for a number of Euro‐area variables, whether forecasts based on the factor‐backdated data are more precise than those obtained with standard area‐wide data. A recursive pseudo‐out‐of‐sample forecasting experiment using quarterly data is conducted. Our results suggest that some key variables (e.g. real GDP, inflation and the long‐term interest rate) can indeed be forecast more precisely with the factor‐backdated data.

4.
We investigate the relationship between long‐term US stock market risks and the macroeconomic environment using a two‐component GARCH‐MIDAS model. Our results show that macroeconomic variables are important determinants of the secular component of stock market volatility. Among the various macro variables in our dataset, the term spread, housing starts, corporate profits and the unemployment rate have the highest predictive ability for long‐term stock market volatility. While the term spread and housing starts are leading variables with respect to stock market volatility, for industrial production and the unemployment rate it is the expectations data from the Survey of Professional Forecasters regarding future developments that are most informative. Copyright © 2014 John Wiley & Sons, Ltd.
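As a hedged illustration of the two-component structure (all parameter values below are assumptions chosen for the simulation, not estimates from the paper), the conditional variance is the product of a secular component tau, driven by a monthly macro variable through Beta-weighted MIDAS lags, and a unit-mean daily GARCH(1,1) component g:

import numpy as np

rng = np.random.default_rng(2)
K, days_per_month, n_months = 12, 22, 120  # MIDAS lags, trading days, simulated months
macro = np.zeros(n_months)
for t in range(1, n_months):               # persistent monthly macro series
    macro[t] = 0.8 * macro[t - 1] + 0.2 * rng.standard_normal()

def beta_weights(K, w2=5.0):
    """Normalized Beta(1, w2) MIDAS lag weights, decaying with the lag."""
    k = np.arange(1, K + 1) / K
    w = (1 - k) ** (w2 - 1)
    return w / w.sum()

w = beta_weights(K)
m, theta = 0.1, 0.3                        # level and MIDAS slope of log(tau)
alpha, beta = 0.06, 0.91                   # daily GARCH(1,1) parameters

returns, g = [], 1.0
for t in range(K, n_months):
    tau = np.exp(m + theta * np.dot(w, macro[t - K:t][::-1]))     # secular (MIDAS) component
    for _ in range(days_per_month):
        sigma2 = tau * g                                          # total conditional variance
        r = np.sqrt(sigma2) * rng.standard_normal()
        returns.append(r)
        g = (1 - alpha - beta) + alpha * r ** 2 / tau + beta * g  # short-run component
print("simulated days:", len(returns), "| unconditional vol:", round(float(np.std(returns)), 3))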

5.
This paper employs a Markov regime‐switching VAR model to describe and analyse the time‐varying credibility of Hong Kong's currency board system. The endogenously estimated discrete regime shifts are made dependent on macroeconomic fundamentals. This enables us to determine which changes in macroeconomic variables can trigger switches between the low and high credibility regimes. We carry out extensive testing to search for the most appropriate specification of the Markov regime‐switching model. We find strong evidence of regime switching behaviour that portrays the time‐varying nature of credibility in the historical data.
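As a rough, univariate stand-in for the regime-switching machinery (simulated data, not the paper's MS-VAR), statsmodels' MarkovRegression delivers smoothed regime probabilities that could subsequently be related to macroeconomic fundamentals:

import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(3)
n = 400
states = np.zeros(n, dtype=int)
for t in range(1, n):                      # persistent two-state Markov chain
    stay = 0.95 if states[t - 1] == 0 else 0.90
    states[t] = states[t - 1] if rng.random() < stay else 1 - states[t - 1]

# Regime 0: low mean / low variance ("high credibility"); regime 1: the opposite.
y = np.where(states == 0,
             0.0 + 0.2 * rng.standard_normal(n),
             0.5 + 1.0 * rng.standard_normal(n))

mod = MarkovRegression(y, k_regimes=2, trend="c", switching_variance=True)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])  # regime probabilities, observation by observation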

6.
We perform a fully real‐time nowcasting (forecasting) exercise of US GDP growth using Giannone et al.'s (2008) factor model framework. To this end, we have constructed a real‐time database of vintages from 1997 to 2010 for a panel of variables, enabling us to reproduce, for any given day in that range, the exact information that was available to a real‐time forecaster. We track the daily evolution of the model performance along the real‐time data flow and find that the precision of the nowcasts increases with information releases and that the model fares well relative to the Survey of Professional Forecasters (SPF).

7.
This paper discusses pooling versus model selection for nowcasting with large datasets in the presence of model uncertainty. In practice, nowcasting a low‐frequency variable with a large number of high‐frequency indicators should account for at least two data irregularities: (i) unbalanced data with missing observations at the end of the sample due to publication delays; and (ii) different sampling frequencies of the data. Two model classes suited to this context are factor models based on large datasets and mixed‐data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst other things, the factor estimation method and the number of factors, lag length and indicator selection. Thus there are many sources of misspecification when selecting a particular model, and an alternative would be pooling over a large set of different model specifications. We evaluate the relative performance of pooling and model selection for nowcasting quarterly GDP for six large industrialized countries. We find that the nowcast performance of single models varies considerably over time, in line with the forecasting literature. Model selection based on sequential application of information criteria can outperform benchmarks; however, the results depend heavily on the selection method chosen. In contrast, pooling of nowcast models provides an overall very stable nowcast performance over time. Copyright © 2012 John Wiley & Sons, Ltd.
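The pooling-versus-selection contrast can be sketched with a toy Python example (entirely illustrative data and specifications): the nowcasts of several candidate regressions are either averaged with equal weights or discarded in favour of the single specification with the lowest BIC.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 120
x1, x2, x3 = rng.standard_normal((3, n))             # candidate indicator-based regressors
gdp = 0.6 * x1 + 0.3 * x2 + 0.5 * rng.standard_normal(n)

candidates = {"m1": [x1], "m2": [x1, x2], "m3": [x1, x2, x3]}
fits, nowcasts = {}, {}
for name, regs in candidates.items():
    X = sm.add_constant(np.column_stack(regs))
    fit = sm.OLS(gdp, X).fit()
    fits[name] = fit
    nowcasts[name] = float(fit.fittedvalues[-1])      # "nowcast" for the latest quarter

pooled = np.mean(list(nowcasts.values()))             # equal-weight pooling
selected = min(fits, key=lambda k: fits[k].bic)       # information-criterion selection
print("pooled nowcast:", round(pooled, 3),
      "| selected model:", selected, "->", round(nowcasts[selected], 3))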

8.
Baumeister and Kilian (Journal of Business and Economic Statistics, 2015, 33(3), 338–351) combine forecasts from six empirical models to predict real oil prices. In this paper, we broadly reproduce their main economic findings, employing their preferred measures of the real oil price and other real‐time variables. Mindful of the importance of Brent crude oil as a global price benchmark, we extend consideration to the North Sea‐based measure and update the evaluation sample to 2017:12. We model the oil price futures curve using a factor‐based Nelson–Siegel specification estimated in real time to fill in missing values for oil price futures in the raw data. We find that the combined forecasts for Brent are as effective as for other oil price measures. The extended sample using the oil price measures adopted by Baumeister and Kilian yields similar results to those reported in their paper. Also, the futures‐based model improves forecast accuracy at longer horizons.
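The following Python sketch shows the kind of factor-based Nelson–Siegel fit that can be used to fill in missing futures maturities (the functional form, the fixed decay parameter and the prices below are assumptions for illustration; in the paper the factors are estimated recursively in real time):

import numpy as np

def nelson_siegel_loadings(maturities, lam=0.5):
    """Level, slope and curvature loadings at the given maturities (in months)."""
    m = np.asarray(maturities, dtype=float)
    x = lam * m
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(m), slope, slope - np.exp(-x)])

# Observed futures prices at a few maturities; fit the three NS factors by least squares.
obs_maturities = np.array([1, 3, 6, 12])
obs_prices = np.array([61.0, 61.8, 62.9, 64.1])
X = nelson_siegel_loadings(obs_maturities)
betas, *_ = np.linalg.lstsq(X, obs_prices, rcond=None)

# Evaluate the fitted curve at maturities missing from the raw data.
missing = np.array([2, 9, 18, 24])
filled = nelson_siegel_loadings(missing) @ betas
print(dict(zip(missing.tolist(), filled.round(2))))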

9.
In this paper we propose a simulation‐based technique to investigate the finite sample performance of likelihood ratio (LR) tests for the nonlinear restrictions that arise when a class of forward‐looking (FL) models typically used in monetary policy analysis is evaluated with vector autoregressive (VAR) models. We consider 'one‐shot' tests to evaluate the FL model under the rational expectations hypothesis and sequences of tests obtained under the adaptive learning hypothesis. The analysis is based on a comparison between the unrestricted and restricted VAR likelihoods, and the p‐values associated with the LR test statistics are computed by Monte Carlo simulation. We also address the case where the variables of the FL model can be approximated as non‐stationary cointegrated processes. Application to the 'hybrid' New Keynesian Phillips Curve (NKPC) in the euro area shows that (i) the forward‐looking component of inflation dynamics is much larger than the backward‐looking component and (ii) the sequence of restrictions implied by the cointegrated NKPC under learning dynamics is not rejected over the monitoring period 1984–2005. Copyright © 2010 John Wiley & Sons, Ltd.

10.
This paper examines the intertemporal relation between risk and return for the aggregate stock market using high‐frequency data. We use daily realized, GARCH, implied, and range‐based volatility estimators to determine the existence and significance of a risk–return trade‐off for several stock market indices. We find a positive and statistically significant relation between the conditional mean and conditional volatility of market returns at the daily level. This result is robust to alternative specifications of the volatility process, across different measures of market return and sample periods, and after controlling for macroeconomic variables associated with business cycle fluctuations. We also analyze the risk–return relationship over time using rolling regressions, and find that the strong positive relation persists throughout our sample period. The market risk measures adopted in the paper add power to the analysis by incorporating valuable information, either by taking advantage of high‐frequency intraday data (in the case of realized, GARCH, and range volatility) or by utilizing the market's expectation of future volatility (in the case of the implied volatility index). Copyright © 2006 John Wiley & Sons, Ltd.
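The realized-variance building block and the daily risk-return regression can be sketched as follows (simulated intraday returns and a deliberately simplified specification; the paper also uses GARCH, implied and range-based volatility measures and richer controls):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_days, n_intraday = 500, 78              # e.g. 5-minute returns over a 6.5-hour session
intraday = 0.001 * rng.standard_normal((n_days, n_intraday))

rv = (intraday ** 2).sum(axis=1)          # realized variance: sum of squared intraday returns
daily_ret = intraday.sum(axis=1)          # daily return as the sum of intraday returns

df = pd.DataFrame({"ret": daily_ret, "rvol": np.sqrt(rv)})
df["rvol_lag"] = df["rvol"].shift(1)      # volatility known at the start of day t
df = df.dropna()

# Risk-return regression: mean return on lagged (conditional) volatility.
risk_return = sm.OLS(df["ret"], sm.add_constant(df["rvol_lag"])).fit()
print(risk_return.params)
print(risk_return.tvalues)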

11.
This work investigates how the state of credit markets affects the impact of fiscal policies. We estimate a threshold vector autoregression (TVAR) model on US quarterly data for the period 1984–2010. We employ the spread between the BAA‐rated corporate bond yield and the 10‐year treasury constant maturity rate as a proxy for credit conditions. We find that the response of output to fiscal policy shocks is stronger and more persistent when the economy is in the 'tight' credit regime. Fiscal multipliers are significantly different in the two regimes: they are substantially and persistently higher than one when firms face increasing financing costs, whereas they are weaker and often lower than one in the 'normal' credit regime. The results appear to be robust to different model specifications, fiscal foresight, alternative threshold variables, different measures of the variables and sample periods. Copyright © 2014 John Wiley & Sons, Ltd.

12.
We test for the presence of time‐varying parameters (TVP) in the long‐run dynamics of energy prices for oil, natural gas and coal, within a standard class of mean‐reverting models. We also propose residual‐based diagnostic tests and examine out‐of‐sample forecasts. In‐sample LR tests support the TVP model for coal and gas but not for oil, though companion diagnostics suggest that the model is too restrictive to conclusively fit the data. Out‐of‐sample analysis suggests a random‐walk specification for oil price, and TVP models for both real‐time forecasting in the case of gas and long‐run forecasting in the case of coal. Copyright © 2010 John Wiley & Sons, Ltd.

13.
The paper estimates a large‐scale mixed‐frequency dynamic factor model for the euro area, using monthly series along with gross domestic product (GDP) and its main components, obtained from the quarterly national accounts (NA). The latter define broad measures of real economic activity (such as GDP and its decomposition by expenditure type and by branch of activity) that we are willing to include in the factor model, in order to improve its coverage of the economy and thus the representativeness of the factors. The main problem with their inclusion is not one of model consistency, but rather of data availability and timeliness, as the NA series are quarterly and are available with a large publication lag. Our model is a traditional dynamic factor model formulated at the monthly frequency in terms of the stationary representation of the variables, which, however, becomes nonlinear when the observational constraints are taken into account. These are of two kinds: nonlinear temporal aggregation constraints, due to the fact that the model is formulated in terms of the unobserved monthly logarithmic changes, but we observe only the sum of the monthly levels within a quarter, and nonlinear cross‐sectional constraints, since GDP and its main components are linked by the NA identities, but the series are expressed in chained volumes. The paper provides an exact treatment of the observational constraints and proposes iterative algorithms for estimating the parameters of the factor model and for signal extraction, thereby producing nowcasts of monthly GDP and its main components, as well as measures of their reliability.

14.
This paper studies in some detail a class of high‐frequency‐based volatility (HEAVY) models. These models are direct models of daily asset return volatility based on realised measures constructed from high‐frequency data. Our analysis identifies that the models have momentum and mean reversion effects, and that they adjust quickly to structural breaks in the level of the volatility process. We study how to estimate the models and how they perform through the credit crunch, comparing their fit to more traditional GARCH models. We analyse a model‐based bootstrap which allows us to estimate the entire predictive distribution of returns. We also provide an analysis of missing data in the context of these models. Copyright © 2010 John Wiley & Sons, Ltd.
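A minimal sketch of the return equation of a HEAVY model, h_t = omega + alpha * RM_{t-1} + beta * h_{t-1} with RM a realized measure, estimated by Gaussian quasi-maximum likelihood on simulated data (the notation, starting values and data-generating process are our assumptions, not the authors' code):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 1000
rm = 1.0 + 0.9 * np.abs(rng.standard_normal(n))     # stand-in realized measure
r = np.sqrt(rm) * rng.standard_normal(n)            # daily returns

def neg_loglik(params, r, rm):
    """Negative Gaussian quasi-likelihood of the HEAVY return equation."""
    omega, alpha, beta = params
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * rm[t - 1] + beta * h[t - 1]
    if np.any(h <= 0):
        return 1e10
    return 0.5 * np.sum(np.log(h) + r ** 2 / h)

res = minimize(neg_loglik, x0=[0.1, 0.3, 0.6], args=(r, rm),
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 0.999)], method="L-BFGS-B")
print("omega, alpha, beta:", res.x.round(3))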

15.
Vector autoregressions with Markov‐switching parameters (MS‐VARs) offer substantial gains in data fit over VARs with constant parameters. However, Bayesian inference for MS‐VARs has remained challenging, impeding their uptake for empirical applications. We show that sequential Monte Carlo (SMC) estimators can accurately estimate MS‐VAR posteriors. Relative to multi‐step, model‐specific MCMC routines, SMC has the advantages of generality, parallelizability, and freedom from reliance on particular analytical relationships between prior and likelihood. We use SMC's flexibility to demonstrate that model selection among MS‐VARs can be highly sensitive to the choice of prior.

16.
This paper evaluates the effects of high‐frequency uncertainty shocks on a set of low‐frequency macroeconomic variables representative of the US economy. Rather than estimating models at the same common low frequency, we use recently developed econometric models, which allow us to deal with data of different sampling frequencies. We find that credit and labor market variables react the most to uncertainty shocks in that they exhibit a prolonged negative response to such shocks. When looking at detailed investment subcategories, our estimates suggest that the most irreversible investment projects are the most affected by uncertainty shocks. We also find that the responses of macroeconomic variables to uncertainty shocks are relatively similar across single‐frequency and mixed‐frequency data models, suggesting that the temporal aggregation bias is not acute in this context.

17.
Starting from the dynamic factor model for nonstationary data we derive the factor‐augmented error correction model (FECM) and its moving‐average representation. The latter is used for the identification of structural shocks and their propagation mechanisms. We show how to implement classical identification schemes based on long‐run restrictions in the case of large panels. The importance of the error correction mechanism for impulse response analysis is analyzed by means of both empirical examples and simulation experiments. Our results show that the bias in estimated impulse responses in a factor‐augmented vector autoregressive (FAVAR) model is positively related to the strength of the error correction mechanism and the cross‐section dimension of the panel. We observe empirically in a large panel of US data that these features have a substantial effect on the responses of several variables to the identified permanent real (productivity) and monetary policy shocks.

18.
We analyze ways of incorporating low‐frequency information into models for the prediction of high‐frequency variables. In doing so, we consider the two existing versions of the mixed‐frequency VAR, with a focus on the forecasts for the high‐frequency variables. Furthermore, we introduce new models, namely the reverse unrestricted MIDAS (RU-MIDAS) and reverse MIDAS (R-MIDAS), which can be used for producing forecasts of high‐frequency variables that also incorporate low‐frequency information. We then conduct several empirical applications for assessing the relevance of quarterly survey data for forecasting a set of monthly macroeconomic indicators. Overall, it turns out that low‐frequency information is important, particularly when it has just been released.
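A hedged Python sketch of the RU-MIDAS idea (illustrative data and a deliberately short lag structure): the monthly variable is regressed on its own lag plus the most recent quarterly survey reading, carried forward within the quarter.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_q = 100
survey_q = 0.1 * rng.standard_normal(n_q).cumsum()    # quarterly survey series
survey_m = np.repeat(survey_q, 3)                     # held constant within each quarter
n_m = len(survey_m)
monthly = np.zeros(n_m)
for t in range(1, n_m):                               # monthly indicator driven by the survey
    monthly[t] = 0.5 * monthly[t - 1] + 0.3 * survey_m[t - 1] + 0.5 * rng.standard_normal()

df = pd.DataFrame({"y": monthly,
                   "y_lag": pd.Series(monthly).shift(1),
                   "survey_lag": pd.Series(survey_m).shift(1)}).dropna()
ru_midas = sm.OLS(df["y"], sm.add_constant(df[["y_lag", "survey_lag"]])).fit()
print(ru_midas.params)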

19.
Robust methods for instrumental variable inference have received considerable attention recently. Their analysis has raised a variety of problematic issues such as size/power trade‐offs resulting from weak or many instruments. We show that information reduction methods provide a useful and practical solution to this and related problems. Formally, we propose factor‐based modifications to three popular weak‐instrument‐robust statistics, and illustrate their validity asymptotically and in finite samples. Results are derived using asymptotic settings that are commonly used in both the factor and weak‐instrument literature. For the Anderson–Rubin statistic, we also provide analytical finite‐sample results that do not require any underlying factor structure. An illustrative Monte Carlo study reveals the following. Factor‐based tests control size regardless of instrument and factor quality. All factor‐based tests are systematically more powerful than their standard counterparts. With informative instruments and in contrast to standard tests: (i) the power of factor‐based tests is not affected by the number of instruments k, even when it is large; and (ii) a weak factor structure does not cost power. An empirical study on a New Keynesian macroeconomic model suggests that our factor‐based methods can bridge a number of gaps between structural and statistical modeling. Copyright © 2015 John Wiley & Sons, Ltd.

20.
We consider univariate low‐frequency filters applicable in real time as a macroeconomic forecasting method. This amounts to targeting only the low‐frequency fluctuations of the time series of interest. We show through simulations that such an approach is warranted and, using US data, we confirm empirically that consistent gains in forecast accuracy can be obtained in comparison with a variety of other methods. There is an inherent arbitrariness in the choice of the cut‐off defining low and high frequencies, which calls for a careful characterization of the implied optimal (for forecasting) degree of smoothing of the key macroeconomic indicators we analyse. We document interesting patterns that emerge: for most variables the optimal choice amounts to disregarding fluctuations with periodicities well below the standard business cycle cut‐off of 32 quarters, with the optimal cut‐off generally increasing with the forecast horizon; for inflation and variables related to housing this cut‐off lies around 32 quarters for all horizons, which is below the optimal level for federal government spending.
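One simple way to operationalize the idea in Python (our illustration; the paper's filter and the optimal cut-off choice may differ) is to low-pass filter the series, here with a one-sided moving average whose window length plays the role of the frequency cut-off, and then forecast the smoothed target directly:

import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 300
trend = np.sin(np.arange(n) * 2 * np.pi / 64)          # slow cyclical component
y = pd.Series(trend + 0.8 * rng.standard_normal(n))    # observed noisy series

window = 8                                             # cut-off choice (a tuning parameter)
y_low = y.rolling(window).mean()                       # one-sided low-pass proxy

# Direct h-step-ahead forecast of the smoothed target from its own recent values.
h = 4
X = pd.concat({f"lag{j}": y_low.shift(h + j) for j in range(3)}, axis=1).dropna()
target = y_low.loc[X.index]
X_mat = np.column_stack([np.ones(len(X)), X.to_numpy()])
coefs, *_ = np.linalg.lstsq(X_mat, target.to_numpy(), rcond=None)
latest = np.array([y_low.iloc[-1], y_low.iloc[-2], y_low.iloc[-3]])
forecast = coefs[0] + coefs[1:] @ latest
print("h-step-ahead forecast of the low-frequency component:", round(float(forecast), 3))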
