Similar Articles
(20 results found)
1.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro‐area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro‐area‐wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss using as indicators the factors estimated from a dynamic factor model for the full set of indicators. Second, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors, and methods of pooling the ex ante single‐indicator or factor‐based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.
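
As an illustration of the final step, forecast pooling, the sketch below combines several single‐indicator forecasts with either equal or inverse‐MSE weights. It is a generic Python illustration, not the paper's implementation; the function name, data and weighting scheme are assumptions.

    # Hypothetical sketch: pooling single-indicator forecasts with equal or
    # inverse-MSE weights (names and data are illustrative, not from the paper).
    import numpy as np

    def pool_forecasts(forecasts, past_errors=None):
        """forecasts: (n_indicators,) array with one forecast per model.
        past_errors: optional (n_indicators, T) array of past forecast errors,
        used to build inverse-MSE weights; equal weights otherwise."""
        forecasts = np.asarray(forecasts, dtype=float)
        if past_errors is None:
            weights = np.full(len(forecasts), 1.0 / len(forecasts))
        else:
            mse = np.mean(np.asarray(past_errors) ** 2, axis=1)
            weights = (1.0 / mse) / np.sum(1.0 / mse)
        return float(weights @ forecasts)

    # Example: three single-indicator forecasts of quarterly GDP growth.
    print(pool_forecasts([0.4, 0.6, 0.3]))                      # equal weights
    print(pool_forecasts([0.4, 0.6, 0.3],
                         past_errors=np.random.randn(3, 20)))   # inverse-MSE weights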

2.
Combined density nowcasts for quarterly Euro‐area GDP growth are produced based on the real‐time performance of component models. Components are distinguished by their use of ‘hard’ and ‘soft’, aggregate and disaggregate, indicators. We consider the accuracy of the density nowcasts as within‐quarter indicator data accumulate. We find that the relative utility of ‘soft’ indicators surged during the recession. But because this instability was hard to detect in real time, it helps, when producing density nowcasts before any within‐quarter ‘hard’ data are available, to weight the different indicators equally. On receipt of ‘hard’ data for the second month in the quarter, better‐calibrated densities are obtained by giving a higher weight in the combination to ‘hard’ indicators.

3.
We analyse a novel dataset of Business and Consumer Surveys, using dynamic factor techniques, to produce composite coincident indices (CCIs) at the sectoral level for the European countries and for Europe. Surveys are available in a timely fashion, are not subject to revision, and are fully comparable across countries. Moreover, the substantial discrepancies in activity at the sectoral level justify the interest in sectoral disaggregation. Compared with the confidence indicators produced by the European Commission, we show that factor‐based CCIs, using survey answers at a more disaggregate level, are more highly correlated with the reference series for the majority of sectors and countries.
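
The index construction can be illustrated, very loosely, by a static principal‐components factor extracted from standardized survey balances. The Python sketch below is only a stand‐in for the dynamic factor techniques used in the paper; the data and scaling choices are invented.

    # Hedged illustration: a static principal-components factor as a stand-in
    # for the paper's dynamic factor techniques.
    import numpy as np

    def first_factor(survey_balances):
        """survey_balances: (T, N) array of sectoral survey balances.
        Returns the first principal component, scaled to unit variance."""
        X = np.asarray(survey_balances, dtype=float)
        X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each series
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        factor = X @ Vt[0]                              # first principal component
        return factor / factor.std()

    cci = first_factor(np.random.randn(120, 8))   # e.g. 120 months, 8 survey questions (illustrative)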

4.
This paper assesses different ways of converting qualitative data obtained in surveys into quantitative indices for a number of economic variables. The research reported here focuses on the main UK employers’ business survey for manufacturing – the CBI industrial trends survey. Six response variables are investigated – plant and machinery investment, output, employment, exports, price and cost. We find that the balance statistic is a satisfactory method of transforming three of the variables: investment, output and exports.
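
For reference, the balance statistic in its usual form is the percentage of respondents reporting an increase minus the percentage reporting a decrease. A minimal Python illustration, with made‐up response counts, follows.

    # The balance statistic: percentage of "up" responses minus percentage of
    # "down" responses. Counts below are illustrative.
    def balance(n_up, n_same, n_down):
        total = n_up + n_same + n_down
        return 100.0 * (n_up - n_down) / total

    print(balance(n_up=45, n_same=40, n_down=15))   # +30: on balance, respondents report a rise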

5.
The finite mixture distribution is proposed as an appropriate statistical model for a combined density forecast. Its implications for measures of uncertainty and disagreement, and for combining interval forecasts, are described. Related proposals in the literature and applications to the U.S. Survey of Professional Forecasters are discussed.
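
A minimal sketch of the idea, assuming normal component densities with invented weights, means and standard deviations: the combined forecast density is the weighted sum of the component densities, and interval forecasts can be read off the mixture CDF.

    # Combined density forecast as a finite mixture of normal components
    # (all weights, means and standard deviations below are made up).
    import numpy as np
    from scipy import stats

    weights = np.array([0.5, 0.3, 0.2])   # must sum to one
    means   = np.array([2.0, 2.5, 1.5])   # component point forecasts
    sds     = np.array([0.8, 1.0, 1.2])   # component uncertainty

    def mixture_pdf(x):
        return np.sum(weights * stats.norm.pdf(x, loc=means, scale=sds))

    def mixture_cdf(x):
        return np.sum(weights * stats.norm.cdf(x, loc=means, scale=sds))

    # A combined 90% interval via numerical inversion of the mixture CDF.
    grid = np.linspace(-5, 10, 20001)
    cdf_vals = np.array([mixture_cdf(x) for x in grid])
    lower = grid[np.searchsorted(cdf_vals, 0.05)]
    upper = grid[np.searchsorted(cdf_vals, 0.95)]
    print(lower, upper)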

6.
Models for the 12‐month‐ahead US rate of inflation, measured by the chain‐weighted consumer expenditure deflator, are estimated for 1974–98, and subsequent pseudo out‐of‐sample forecasting performance is examined. Alternative forecasting approaches for different information sets are compared with benchmark univariate autoregressive models, and substantial outperformance is demonstrated, including against Stock and Watson's unobserved components‐stochastic volatility model. Three key ingredients of the outperformance are: including equilibrium correction component terms in relative prices; introducing nonlinearities to proxy state‐dependence in the inflation process; and replacing the information criterion, commonly used in VARs to select lag length, with a ‘parsimonious longer lags’ parameterization. Forecast pooling or averaging also improves forecast performance.

7.
This paper proposes and analyses the Kullback–Leibler information criterion (KLIC) as a unified statistical tool to evaluate, compare and combine density forecasts. Use of the KLIC is particularly attractive, as well as operationally convenient, given its equivalence with the widely used Berkowitz likelihood ratio test for the evaluation of individual density forecasts, which exploits the probability integral transforms. Parallels with the comparison and combination of point forecasts are made. These parallels, together with related Monte Carlo experiments, help draw out the properties of combined density forecasts. We illustrate the uses of the KLIC in an application to two widely used published density forecasts for UK inflation, namely the Bank of England and NIESR ‘fan’ charts.
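
The sketch below illustrates, under assumed normal forecast densities and simulated outcomes, how a KLIC‐type comparison reduces to an average log‐score difference, and how the probability integral transforms feed a Berkowitz‐style calibration check. It is a generic illustration, not the paper's procedure.

    # Comparing two density forecasts by average log scores (a sample analogue
    # of a KLIC difference) and computing PITs for a Berkowitz-type check.
    # All densities and outcomes below are invented for illustration.
    import numpy as np
    from scipy import stats

    y = np.random.randn(100) * 1.2 + 2.0          # illustrative realized outcomes

    f1 = stats.norm(loc=2.0, scale=1.2)           # competing density forecasts
    f2 = stats.norm(loc=1.0, scale=1.0)

    logscore_diff = np.mean(f1.logpdf(y) - f2.logpdf(y))   # > 0 favours forecast 1
    pits = f1.cdf(y)                              # ~U(0,1) if f1 is well calibrated
    z = stats.norm.ppf(pits)                      # ~N(0,1) under correct calibration (Berkowitz)
    print(logscore_diff, z.mean(), z.std())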

8.
This article presents a formal explanation of the forecast combination puzzle: that simple combinations of point forecasts are repeatedly found to outperform sophisticated weighted combinations in empirical applications. The explanation lies in the effect of finite‐sample error in estimating the combining weights. A small Monte Carlo study and a reappraisal of an empirical study by Stock and Watson [Federal Reserve Bank of Richmond Economic Quarterly (2003) Vol. 89/3, pp. 71–90] support this explanation. The Monte Carlo evidence, together with a large‐sample approximation to the variance of the combining weight, also supports the popular recommendation to ignore forecast error covariances in estimating the weight.
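
A toy Monte Carlo in this spirit (every design choice below is invented): when the true optimal weight is one half, weights estimated from a short window typically do worse out of sample than the simple average.

    # Illustrative Monte Carlo for the combination puzzle: estimated inverse-MSE
    # weights vs. a 50/50 average of two unbiased forecasts with equal error variance.
    import numpy as np

    rng = np.random.default_rng(0)
    T_est, T_eval, n_rep = 30, 200, 2000
    mse_est, mse_eq = [], []

    for _ in range(n_rep):
        e = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=T_est + T_eval)
        est, ev = e[:T_est], e[T_est:]
        v = est.var(axis=0)                       # weights from the short estimation window,
        w1 = (1 / v[0]) / (1 / v[0] + 1 / v[1])   # ignoring covariances
        mse_est.append(np.mean((w1 * ev[:, 0] + (1 - w1) * ev[:, 1]) ** 2))
        mse_eq.append(np.mean((0.5 * ev[:, 0] + 0.5 * ev[:, 1]) ** 2))

    print(np.mean(mse_est), np.mean(mse_eq))      # estimation error typically hurts the estimated weights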

9.
We perform a fully real‐time nowcasting (forecasting) exercise for US GDP growth using Giannone et al.'s (2008) factor model framework. To this end, we have constructed a real‐time database of vintages from 1997 to 2010 for a panel of variables, enabling us to reproduce, for any given day in that range, the exact information that was available to a real‐time forecaster. We track the daily evolution of the model's performance along the real‐time data flow and find that the precision of the nowcasts increases with information releases and that the model fares well relative to the Survey of Professional Forecasters (SPF).

10.
This article presents a new semi‐nonparametric (SNP) density function, named the Positive Edgeworth‐Sargan (PES). We show that this distribution belongs to the family of (positive) Gram‐Charlier (GC) densities and thus preserves all the good properties of this type of SNP distribution, but with a much simpler structure. The in‐ and out‐of‐sample performance of the PES is compared with symmetric and skewed GC distributions and other densities widely used in economics and finance. The results confirm the PES as a good alternative for approximating the distribution of financial returns, especially when skewness is not severe.
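
The exact PES parameterization is given in the paper; as a rough stand‐in, the sketch below builds a generic positive Gram‐Charlier‐type density by multiplying the normal density by a squared Hermite polynomial and normalizing numerically. The expansion coefficients are purely illustrative.

    # Generic positive Gram-Charlier-type density (a stand-in, not the PES formula):
    # normal pdf times a squared HermiteE polynomial, normalized by quadrature.
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite polynomials
    from scipy import stats
    from scipy.integrate import quad

    delta = [1.0, 0.0, 0.0, 0.2]                      # illustrative coefficients on He_0..He_3

    def unnormalized(x):
        return stats.norm.pdf(x) * hermeval(x, delta) ** 2   # non-negative by construction

    const, _ = quad(unnormalized, -np.inf, np.inf)

    def positive_gc_pdf(x):
        return unnormalized(x) / const

    print(quad(positive_gc_pdf, -np.inf, np.inf)[0])  # ~1.0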

11.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small‐scale semistructural models at the national and Euro‐area level, institutional forecasts (Organization for Economic Co‐operation and Development), and pooling. Our small‐scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the fundamentals relevant for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance, using the mean square and mean absolute error criteria at different horizons. Overall, simple time‐series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward‐biased forecasts of the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic change. A bootstrap experiment highlights that, even when the data are generated using the estimated small‐scale multi‐country model, simple time‐series models can produce more accurate forecasts, because of their parsimonious specification.

12.
This study focuses on the estimation and predictive performance of several estimators for the dynamic and autoregressive spatial lag panel data model with spatially correlated disturbances. In the spirit of Arellano and Bond (1991) and Mutl (2006), a dynamic spatial generalized method of moments (GMM) estimator is proposed based on Kapoor, Kelejian and Prucha (2007) for the spatial autoregressive (SAR) error model. The main idea is to mix non‐spatial and spatial instruments to obtain consistent estimates of the parameters. Then, a linear predictor of this spatial dynamic model is derived. Using Monte Carlo simulations, we compare the performance of the GMM spatial estimator to that of spatial and non‐spatial estimators and illustrate our approach with an application to new economic geography.

13.
Structural vector autoregressive (SVAR) models have emerged as a dominant research strategy in empirical macroeconomics, but they suffer from the large number of parameters employed and the resulting estimation uncertainty associated with their impulse responses. In this paper, we propose general‐to‐specific (Gets) model selection procedures to overcome these limitations. It is shown that single‐equation procedures are generally efficient for the reduction of recursive SVAR models. The small‐sample properties of the proposed reduction procedure (as implemented using PcGets) are evaluated in a realistic Monte Carlo experiment. The impulse responses generated by the selected SVAR are found to be more precise and accurate than those of the unrestricted VAR. The proposed reduction strategy is then applied to the US monetary system considered by Christiano, Eichenbaum and Evans (Review of Economics and Statistics, Vol. 78, pp. 16–34, 1996). The results are consistent with the Monte Carlo evidence and question the validity of the impulse responses generated by the full system.

14.
Unit‐root testing can be a preliminary step in model development, an intermediate step, or an end in itself. Some researchers have questioned the value of any unit‐root and cointegration testing, arguing that restrictions based on theory are at least as effective. Such confusion is unsatisfactory. What is needed is a set of principles that limit and define the role of the tacit knowledge of the model builders. In a forecasting context, we enumerate the various possible model selection strategies and, based on simulation and empirical evidence, recommend using these tests to improve the specification of an initial general vector autoregression model.

15.
To quantify qualitative survey data, the Carlson–Parkin method assumes normality, a time‐invariant symmetric indifference interval, and long‐run unbiased expectations. These assumptions are unnecessary for interval‐coded data. In April 2004, the Monthly Consumer Confidence Survey in Japan started to ask households about their price expectations a year ahead in seven categories with partially known boundaries. Thus one can identify up to six parameters, including an indifference interval, each month. This paper compares normal, skew normal (SN), skew exponential power (SEP), and skew t (St) distributions, and finds that an St distribution fits the data well. The results help us to better understand the dynamics of heterogeneous expectations.
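
A hedged sketch of maximum likelihood on interval‐coded (grouped) responses follows. SciPy has no skew‐t distribution, so a skew‐normal stands in, and the category boundaries and counts are invented rather than taken from the Japanese survey.

    # Grouped-data maximum likelihood for interval-coded survey responses.
    # Distribution, bounds and counts below are illustrative assumptions.
    import numpy as np
    from scipy import stats, optimize

    bounds = np.array([-np.inf, -5, -2.5, 0, 2.5, 5, np.inf])   # % price-change categories
    counts = np.array([5, 10, 30, 80, 60, 15])                  # respondents per category

    def neg_loglik(params):
        a, loc, scale = params
        if scale <= 0:
            return np.inf
        cdf = stats.skewnorm.cdf(bounds, a, loc=loc, scale=scale)
        probs = np.clip(np.diff(cdf), 1e-12, None)               # category probabilities
        return -np.sum(counts * np.log(probs))

    res = optimize.minimize(neg_loglik, x0=[0.0, 1.0, 2.0], method="Nelder-Mead")
    print(res.x)    # estimated shape, location and scale of the expectations distribution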

16.
Benchmark revisions in non‐stationary real‐time data may adversely affect the results of regular revision analysis and the estimates of long‐run economic relationships. Cointegration analysis can reveal the nature of vintage heterogeneity and guide the adjustment of real‐time data for benchmark revisions. Affine vintage transformation functions estimated by cointegration regressions are a flexible tool, whereas differencing and rebasing work well only under certain circumstances. Inappropriate vintage transformation may cause observed revision statistics to be affected by nuisance parameters. The econometric techniques are illustrated, and the theoretical claims examined empirically, using real‐time data on German industrial production and orders.

17.
This article combines a Structural Vector Autoregression with a no‐arbitrage approach to build a multifactor Affine Term Structure Model (ATSM). The resulting No‐Arbitrage Structural Vector Autoregressive (NASVAR) model implies that expected excess returns are driven by structural macroeconomic shocks. This is in contrast with a standard ATSM, in which agents are concerned with non‐structural risks. As a simple application, we study the effects of supply, demand and monetary policy shocks on the UK yield curve. We show that all structural shocks affect the slope of the yield curve, with demand and supply shocks accounting for a large part of the time variation in bond yields.

18.
General‐to‐Specific (GETS) modelling has witnessed major advances thanks to the automation of multi‐path GETS specification search. However, the estimation complexity associated with financial models constitutes an obstacle to automated multi‐path GETS modelling in finance. Making use of a recent result, we provide and study simple but general and flexible methods that automate financial multi‐path GETS modelling. Starting from a general model in which the mean specification can contain autoregressive terms and explanatory variables, and the exponential volatility specification can include log‐ARCH terms, asymmetry terms, volatility proxies and other explanatory variables, the algorithm we propose returns parsimonious mean and volatility specifications.

19.
The Stock–Watson coincident index and its subsequent extensions assume a static linear one‐factor model for the component indicators. This restrictive assumption is unnecessary if one defines a coincident index as an estimate of monthly real gross domestic product (GDP). This paper estimates Gaussian vector autoregression (VAR) and factor models for latent monthly real GDP and other coincident indicators using the observable mixed‐frequency series. For maximum likelihood estimation of a VAR model, the expectation‐maximization (EM) algorithm helps in finding a good starting value for a quasi‐Newton method. The smoothed estimate of latent monthly real GDP is a natural extension of the Stock–Watson coincident index.

20.
A popular macroeconomic forecasting strategy utilizes many models to hedge against instabilities of unknown timing; see (among others) Stock and Watson (2004), Clark and McCracken (2010), and Jore et al. (2010). Existing studies of this forecasting strategy exclude dynamic stochastic general equilibrium (DSGE) models, despite the widespread use of these models by monetary policymakers. In this paper, we use the linear opinion pool to combine inflation forecast densities from many vector autoregressions (VARs) and a policymaking DSGE model. The DSGE receives a substantial weight in the pool (at short horizons) provided the VAR components exclude structural breaks. In this case, the inflation forecast densities exhibit calibration failure. Allowing for structural breaks in the VARs reduces the weight on the DSGE considerably, but produces well-calibrated forecast densities for inflation.
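
A minimal linear opinion pool, assuming normal component densities and log‐score‐based weights; all numbers below are illustrative, not taken from the paper.

    # Linear opinion pool: a weighted sum of component forecast densities, with
    # weights proportional to each component's exponentiated average log score.
    import numpy as np
    from scipy import stats

    components = [stats.norm(2.0, 1.0), stats.norm(2.5, 0.7), stats.norm(1.0, 1.5)]
    past_logscores = np.array([-1.40, -1.25, -1.60])   # average log scores over a training window

    w = np.exp(past_logscores - past_logscores.max())
    w /= w.sum()                                       # weights sum to one

    def pool_pdf(x):
        return sum(wi * c.pdf(x) for wi, c in zip(w, components))

    print(w, pool_pdf(2.0))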
