Similar articles
 20 similar articles found (search time: 15 ms)
1.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro-area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro-area-wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss using as indicators the factors estimated from a dynamic factor model for all the indicators. Second, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors, and methods of pooling the ex ante single-indicator or factor-based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.
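The pooling idea in the final step can be sketched in a few lines. The code below is only a minimal illustration, not the authors' procedure: the data, the two "indicator" forecast series, and the inverse-MSE weighting rule are all hypothetical.

```python
# Minimal sketch of forecast pooling: equal weights vs. weights based on each
# indicator's past forecasting record (inverse MSE). All data are made up.

def mse(forecasts, actuals):
    """Mean squared error of a forecast sequence."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

def pool(forecast_sets, weights):
    """Weighted combination of several forecast sequences, period by period."""
    return [sum(w * fs[t] for w, fs in zip(weights, forecast_sets))
            for t in range(len(forecast_sets[0]))]

actuals = [2.0, 1.8, 2.2, 2.1, 1.9]
indicator_a = [2.1, 1.9, 2.0, 2.3, 2.0]   # hypothetical single-indicator forecasts
indicator_b = [1.5, 2.4, 2.6, 1.6, 2.3]

sets = [indicator_a, indicator_b]
eq = pool(sets, [0.5, 0.5])                       # equal-weight pooling
inv = [1.0 / mse(fs, actuals) for fs in sets]     # inverse-MSE weighting
w = [v / sum(inv) for v in inv]
perf = pool(sets, w)

print(round(mse(eq, actuals), 4), round(mse(perf, actuals), 4))
```

With these toy numbers the performance-weighted combination leans heavily on the more accurate indicator, which is the basic appeal of weighting by past forecasting records.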

2.
In this article, we merge two strands from the recent econometric literature. The first is factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting; there is, however, some disagreement in the literature as to the appropriate estimation method. The second is forecast methods based on mixed-frequency data sampling (MIDAS). This regression technique can accommodate the unbalanced datasets that emerge from the publication lags of high- and low-frequency indicators, a problem practitioners have to cope with in real time. In this article, we introduce Factor MIDAS, an approach for nowcasting and forecasting low-frequency variables such as gross domestic product (GDP) by exploiting the information in a large set of higher-frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods and allow a comparison of the alternative factor estimation methods with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real-time forecast applications owing to publication lags. In particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state-space factor models. As an empirical illustration of the technique, we use a large monthly dataset of the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of mixed-frequency techniques that can exploit timely information from business cycle indicators.
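Of the three MIDAS variants, the unrestricted one (U-MIDAS) is the easiest to sketch: each within-quarter monthly lag of an indicator (or factor) gets its own OLS coefficient. The example below uses synthetic data and is only a schematic illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly indicator: 3 monthly observations per quarter, 80 quarters.
n_q = 80
x_monthly = rng.standard_normal(3 * n_q)

# Quarterly GDP growth built (for illustration) from the three monthly lags.
X = x_monthly.reshape(n_q, 3)              # row t = months 1..3 of quarter t
true_beta = np.array([0.5, 0.3, 0.2])
y = X @ true_beta + 0.1 * rng.standard_normal(n_q)

# Unrestricted MIDAS: one OLS coefficient per monthly lag, plus an intercept.
Z = np.column_stack([np.ones(n_q), X])
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Nowcast for a new quarter, given its three monthly readings (hypothetical).
new_months = np.array([0.2, -0.1, 0.4])
nowcast = beta_hat[0] + new_months @ beta_hat[1:]
print(beta_hat.round(2), round(float(nowcast), 3))
```

The "basic" and "smoothed" MIDAS variants instead restrict the lag coefficients to follow a low-dimensional weighting function, trading flexibility for parsimony.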

3.
The Stock–Watson coincident index and its subsequent extensions assume a static linear one-factor model for the component indicators. This restrictive assumption is unnecessary if one defines a coincident index as an estimate of monthly real gross domestic product (GDP). This paper estimates Gaussian vector autoregression (VAR) and factor models for latent monthly real GDP and other coincident indicators using the observable mixed-frequency series. For maximum likelihood estimation of a VAR model, the expectation-maximization (EM) algorithm helps in finding a good starting value for a quasi-Newton method. The smoothed estimate of latent monthly real GDP is a natural extension of the Stock–Watson coincident index.
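The role of Kalman smoothing in producing a latent-state estimate can be illustrated with a far simpler model than the paper's mixed-frequency VAR: a univariate local-level model, where the smoother recovers a latent level from noisy observations. Everything below (the model, parameter values and data) is a hypothetical toy example, not the paper's specification.

```python
import numpy as np

def local_level_smoother(y, q=0.1, r=1.0):
    """Kalman filter + RTS smoother for the local-level model
    x_t = x_{t-1} + w_t (var q),  y_t = x_t + v_t (var r)."""
    n = len(y)
    x_f = np.zeros(n); P_f = np.zeros(n)    # filtered state and variance
    x_p = np.zeros(n); P_p = np.zeros(n)    # one-step predictions
    x, P = 0.0, 10.0                        # crude diffuse prior
    for t in range(n):
        x_p[t], P_p[t] = x, P + q           # predict
        K = P_p[t] / (P_p[t] + r)           # Kalman gain
        x = x_p[t] + K * (y[t] - x_p[t])    # update with observation t
        P = (1 - K) * P_p[t]
        x_f[t], P_f[t] = x, P
    x_s = x_f.copy()                        # RTS backward smoothing pass
    for t in range(n - 2, -1, -1):
        C = P_f[t] / P_p[t + 1]
        x_s[t] = x_f[t] + C * (x_s[t + 1] - x_p[t + 1])
    return x_s

rng = np.random.default_rng(1)
level = np.cumsum(0.3 * rng.standard_normal(200))   # latent state (synthetic)
obs = level + rng.standard_normal(200)              # noisy observations
smoothed = local_level_smoother(obs, q=0.09, r=1.0)
print(round(float(np.mean((smoothed - level) ** 2)), 3))
```

The smoothed path uses the full sample, which is exactly why a smoothed estimate of latent monthly GDP is the natural coincident-index analogue.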

4.
Benchmark revisions in non-stationary real-time data may adversely affect the results of regular revision analysis and the estimates of long-run economic relationships. Cointegration analysis can reveal the nature of vintage heterogeneity and guide the adjustment of real-time data for benchmark revisions. Affine vintage transformation functions estimated by cointegration regressions are a flexible tool, whereas differencing and rebasing work well only under certain circumstances. Inappropriate vintage transformation may cause observed revision statistics to be affected by nuisance parameters. Using real-time data of German industrial production and orders, the econometric techniques are exemplified and the theoretical claims are examined empirically.

5.
We analyse a novel dataset of Business and Consumer Surveys, using dynamic factor techniques, to produce composite coincident indices (CCIs) at the sectoral level for the European countries and for Europe. Surveys are available in a timely manner, are not subject to revision, and are fully comparable across countries. Moreover, the substantial discrepancies in activity at the sectoral level justify the interest in sectoral disaggregation. Compared with the confidence indicators produced by the European Commission, we show that factor-based CCIs, using survey answers at a more disaggregated level, produce higher correlation with the reference series for the majority of sectors and countries.

6.
This article proposes a Bayesian approach to examining money-output causality within the context of a logistic smooth transition vector error correction model. Our empirical results provide substantial evidence that the postwar US money-output relationship is nonlinear, with regime changes mainly governed by output growth and the price level. Furthermore, we obtain strong support for nonlinear Granger causality from money to output, although there is also some evidence for models indicating that money is not Granger causal or long-run causal to output.

7.
General-to-Specific (GETS) modelling has witnessed major advances thanks to the automation of multi-path GETS specification search. However, the estimation complexity associated with financial models constitutes an obstacle to automated multi-path GETS modelling in finance. Making use of a recent result, we provide and study simple but general and flexible methods that automate financial multi-path GETS modelling. Starting from a general model in which the mean specification can contain autoregressive terms and explanatory variables, and the exponential volatility specification can include log-ARCH terms, asymmetry terms, volatility proxies and other explanatory variables, the algorithm we propose returns parsimonious mean and volatility specifications.

8.
Unit-root testing can be a preliminary step in model development, an intermediate step, or an end in itself. Some researchers have questioned the value of any unit-root and cointegration testing, arguing that restrictions based on theory are at least as effective. Such confusion is unsatisfactory: what is needed is a set of principles that limits and defines the role of the model builders' tacit knowledge. In a forecasting context, we enumerate the various possible model selection strategies and, based on simulation and empirical evidence, recommend using these tests to improve the specification of an initial general vector autoregression model.

9.
Structural vector autoregressive (SVAR) models have emerged as a dominant research strategy in empirical macroeconomics, but suffer from the large number of parameters employed and the resulting estimation uncertainty associated with their impulse responses. In this paper, we propose general-to-specific (Gets) model selection procedures to overcome these limitations. It is shown that single-equation procedures are generally efficient for the reduction of recursive SVAR models. The small-sample properties of the proposed reduction procedure (as implemented using PcGets) are evaluated in a realistic Monte Carlo experiment. The impulse responses generated by the selected SVAR are found to be more precise and accurate than those of the unrestricted VAR. The proposed reduction strategy is then applied to the US monetary system considered by Christiano, Eichenbaum and Evans (Review of Economics and Statistics, Vol. 78, pp. 16–34, 1996). The results are consistent with the Monte Carlo findings and question the validity of the impulse responses generated by the full system.

10.
When endogenous growth models are confronted with the facts, one frequently encounters the prediction that the levels of economic variables, such as R&D expenditures, have lasting effects on an economy's growth rate. As the stylized facts show, research intensity in most advanced countries has increased dramatically, in most cases faster than GDP; yet growth rates have remained roughly constant or have even declined. In this paper we modify the Romer endogenous growth model and test our variant of the model using time series data. We estimate the market version for both the US and Germany for the time period January 1962 to April 1996. Our results demonstrate that the model is compatible with the time series for aggregate data in these countries. All parameters fall into a reasonable range.

11.
Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revisions. Recent empirical work suggests that measurement errors typically have much more complex dynamics than existing models of data revisions allow. This paper describes a state-space model that allows for richer dynamics in these measurement errors, including the noise, news and spillover effects documented in this literature. We also show how to relax the common assumption that “true” values are observed after a few revisions. The result is a unified and flexible framework that allows for more realistic data revision properties, and allows the use of standard methods for optimal real-time estimation of trends and cycles. We illustrate the application of this framework with real-time data on US real output growth.
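A classic way to characterize measurement-error dynamics (much simpler than the state-space framework proposed here) is the news-versus-noise regression: project the revision on the preliminary release. Under pure noise the revision is predictable from the release; under pure news it is orthogonal to it. The sketch below simulates both worlds with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    x = x - x.mean(); y = y - y.mean()
    return float(x @ y / (x @ x))

# "Noise" world: the preliminary release adds measurement error to the final
# value, so the revision is predictable from the preliminary release itself.
final_a = rng.standard_normal(n)
prelim_a = final_a + 0.5 * rng.standard_normal(n)
rev_a = final_a - prelim_a

# "News" world: the preliminary release is an efficient forecast, and the
# revision (the news) is orthogonal to it.
prelim_b = rng.standard_normal(n)
final_b = prelim_b + 0.5 * rng.standard_normal(n)
rev_b = final_b - prelim_b

print(round(slope(rev_a, prelim_a), 2))   # noticeably negative under noise
print(round(slope(rev_b, prelim_b), 2))   # near zero under news
```

Real revision processes mix both effects (plus spillovers), which is precisely why the richer state-space dynamics described in the abstract are needed.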

12.
The finite mixture distribution is proposed as an appropriate statistical model for a combined density forecast. Its implications for measures of uncertainty and disagreement, and for combining interval forecasts, are described. Related proposals in the literature and applications to the U.S. Survey of Professional Forecasters are discussed.
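The key implication for uncertainty and disagreement is easy to verify directly: in a finite mixture, the combined mean is the weighted mean of the component means, and the combined variance decomposes into average individual uncertainty plus disagreement (the dispersion of the component means). A minimal sketch with two hypothetical normal density forecasts:

```python
from statistics import NormalDist

# Two hypothetical individual density forecasts (normals) and their weights.
means, sds, w = [1.0, 3.0], [0.8, 1.2], [0.6, 0.4]

def mixture_pdf(x):
    """Density of the combined (finite mixture) forecast."""
    return sum(wi * NormalDist(m, s).pdf(x) for wi, m, s in zip(w, means, sds))

# Moments of the mixture: combined variance = average individual uncertainty
# plus disagreement (spread of the individual means around the combined mean).
mix_mean = sum(wi * m for wi, m in zip(w, means))
avg_uncertainty = sum(wi * s ** 2 for wi, s in zip(w, sds))
disagreement = sum(wi * (m - mix_mean) ** 2 for wi, m in zip(w, means))
mix_var = avg_uncertainty + disagreement

print(round(mix_mean, 3), round(mix_var, 3))
```

This decomposition is why survey-based combined densities can be wide even when every individual forecaster is confident: disagreement alone inflates the combined variance.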

13.
This article combines a Structural Vector Autoregression with a no-arbitrage approach to build a multifactor Affine Term Structure Model (ATSM). The resulting No-Arbitrage Structural Vector Autoregressive (NASVAR) model implies that expected excess returns are driven by structural macroeconomic shocks. This is in contrast with a standard ATSM, in which agents are concerned with non-structural risks. As a simple application, we study the effects of supply, demand and monetary policy shocks on the UK yield curve. We show that all structural shocks affect the slope of the yield curve, with demand and supply shocks accounting for a large part of the time variation in bond yields.

14.
This paper proposes and analyses the Kullback–Leibler information criterion (KLIC) as a unified statistical tool to evaluate, compare and combine density forecasts. Use of the KLIC is particularly attractive, as well as operationally convenient, given its equivalence with the widely used Berkowitz likelihood ratio test for the evaluation of individual density forecasts, which exploits the probability integral transforms. Parallels with the comparison and combination of point forecasts are drawn; these parallels, together with related Monte Carlo experiments, help draw out the properties of combined density forecasts. We illustrate the uses of the KLIC in an application to two widely used published density forecasts for UK inflation, namely the Bank of England and NIESR ‘fan’ charts.
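The PIT/Berkowitz machinery behind the KLIC can be sketched quickly: apply each forecast CDF to the outcomes, map the resulting probability integral transforms through the inverse standard normal CDF, and check that the z-scores look standard normal; ranking by average log score then gives a KLIC-style comparison. The example below is synthetic and schematic, not the paper's procedure.

```python
import math
import random
from statistics import NormalDist, mean, pvariance

random.seed(3)
std = NormalDist()

# Outcomes actually drawn from N(0, 1) (synthetic).
outcomes = [random.gauss(0.0, 1.0) for _ in range(4000)]

# Candidate density forecasts: the correct one and a mis-scaled one.
good, bad = NormalDist(0.0, 1.0), NormalDist(0.0, 2.0)

# Probability integral transforms, then inverse-normal (Berkowitz z-scores):
# under a correctly specified density the z-scores are i.i.d. standard normal.
z_good = [std.inv_cdf(good.cdf(y)) for y in outcomes]
z_bad = [std.inv_cdf(bad.cdf(y)) for y in outcomes]
print(round(pvariance(z_good), 2), round(pvariance(z_bad), 2))

# KLIC-style ranking via the average log-score difference (higher is better).
def logscore(d, ys):
    return mean(math.log(d.pdf(y)) for y in ys)

print(round(logscore(good, outcomes) - logscore(bad, outcomes), 2))
```

The full Berkowitz test adds a formal likelihood ratio statistic on the mean, variance and autocorrelation of the z-scores; the variance check above is only the intuition.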

15.
In this article, a three-regime multivariate threshold vector error correction model with a ‘band of inaction’ is formulated to examine uncovered interest rate parity (UIRP) and the expectations hypothesis of the term structure (EHTS) of interest rates for Switzerland. Combining both UIRP and EHTS in a model that allows for nonlinearities, we investigate whether the Swiss advantage with respect to Europe is disappearing. Our results favour threshold cointegration and show that both hypotheses hold, at least in one of the three regimes of the process, for Switzerland/Germany. The same is not true between Switzerland and the United States.

16.
This article presents a new semi-nonparametric (SNP) density function, named the Positive Edgeworth-Sargan (PES). We show that this distribution belongs to the family of (positive) Gram-Charlier (GC) densities and thus preserves all the good properties of this type of SNP distribution, but with a much simpler structure. The in- and out-of-sample performance of the PES is compared with symmetric and skewed GC distributions and other densities widely used in economics and finance. The results confirm the PES as a good alternative for approximating the distribution of financial returns, especially when skewness is not severe.
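The positive-GC family the PES belongs to can be illustrated generically: multiply the standard normal density by a squared Hermite-polynomial expansion, which keeps the density nonnegative by construction and yields a closed-form normalizing constant via orthonormality. The coefficients below are arbitrary and the parameterization is a generic positive-GC construction, not the article's exact PES form.

```python
import numpy as np

# Illustrative expansion coefficients (arbitrary values, not from the article).
a3, a4 = 0.2, 0.1

def positive_gc_pdf(x):
    """Generic positive Gram-Charlier-type density: N(0,1) density times a
    squared polynomial in orthonormalized Hermite polynomials."""
    phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
    he3 = (x ** 3 - 3 * x) / np.sqrt(6.0)             # normalized He_3
    he4 = (x ** 4 - 6 * x ** 2 + 3) / np.sqrt(24.0)   # normalized He_4
    p = 1.0 + a3 * he3 + a4 * he4
    # Orthonormality of the Hermite terms gives the closed-form constant.
    return phi * p ** 2 / (1.0 + a3 ** 2 + a4 ** 2)

# Numerical check: the density is nonnegative and integrates to one.
x = np.linspace(-10.0, 10.0, 200001)
f = positive_gc_pdf(x)
dx = x[1] - x[0]
print(round(float(f.sum() * dx), 4), bool((f >= 0).all()))
```

Squaring the polynomial is what distinguishes positive GC densities from the classical Gram-Charlier expansion, which can turn negative in the tails.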

17.
This paper addresses the issue of testing the ‘hybrid’ New Keynesian Phillips curve (NKPC) through vector autoregressive (VAR) systems and likelihood methods, giving special emphasis to the case where the variables are non-stationary. The idea is to use a VAR for both the inflation rate and the explanatory variable(s) to approximate the dynamics of the system and derive testable restrictions. Attention is focused on the ‘inexact’ formulation of the NKPC. Empirical results over the period 1971–98 show that the NKPC is far from providing a ‘good first approximation’ of inflation dynamics in the Euro area.

18.
Business cycle analyses have proved helpful to practitioners in assessing current economic conditions and anticipating upcoming fluctuations. In this article, we focus on the acceleration cycle in the euro area, namely the peaks and troughs of the growth rate which delimit the slowdown and acceleration phases of the economy. Our aim is twofold: first, we put forward a reference turning point chronology of this cycle on a monthly basis, based on gross domestic product and industrial production indices. We consider both the euro area aggregate cycle and country-specific cycles for the six main countries of the zone. Second, we propose a new turning point indicator, based on business surveys closely watched by central banks and short-term analysts, to follow the fluctuations of the acceleration cycle in real time.
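A turning point chronology ultimately rests on a dating rule. The toy rule below (a naive local-extremum criterion, far cruder than standard dating algorithms such as Bry-Boschan) marks a peak or trough when an observation is the strict extremum within a symmetric window; the growth-rate series is hypothetical.

```python
def turning_points(series, k=2):
    """Naive turning-point rule: t is a peak (trough) if it is the strict
    maximum (minimum) within a window of k observations on each side."""
    peaks, troughs = [], []
    for t in range(k, len(series) - k):
        window = series[t - k : t + k + 1]
        if series[t] == max(window) and window.count(series[t]) == 1:
            peaks.append(t)
        if series[t] == min(window) and window.count(series[t]) == 1:
            troughs.append(t)
    return peaks, troughs

# Hypothetical growth-rate series with one slowdown and one acceleration phase.
growth = [0.1, 0.4, 0.8, 1.1, 0.9, 0.5, 0.0, -0.3, -0.1, 0.3, 0.6]
print(turning_points(growth, k=2))  # → ([3], [7])
```

The stretch between the peak at t = 3 and the trough at t = 7 is the slowdown phase of the acceleration cycle; production dating rules add censoring rules on phase length and amplitude.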

19.
The first two influential books on economic forecasting are by Henri Theil [1961, second edition 1965. Economic Forecasts and Policy. North-Holland, Amsterdam] and by George Box and Gwilym Jenkins [1970. Time Series Analysis, Forecasting and Control. Holden Day, San Francisco]. Theil introduced advanced mathematical statistical techniques and considered a variety of types of data. Box and Jenkins introduced ARIMA models and showed how they are used to forecast. With these foundations, the field of economic forecasting has considered a wide range of techniques and models, wider and deeper information sets, longer horizons, and deeper questions, including how better to evaluate all forecasts and how to disentangle a forecast, a policy, and the outcomes. Originally, forecasts were just for means (or expectations); they then moved to variances, and now consider predictive distributions. Eventually, multivariate distributions will have to be considered, but their evaluation will be difficult.

20.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small-scale semi-structural models at the national and Euro area level, institutional forecasts (Organization for Economic Co-operation and Development), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the fundamentals relevant for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time-series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic change. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time-series models can produce more accurate forecasts because of their parsimonious specification.

