Similar documents
20 similar documents found (search time: 15 ms)
1.
We investigate whether combining forecasts from surveys of expectations is a helpful empirical strategy for forecasting inflation in Brazil. We employ the FGV–IBRE Economic Tendency Survey, which consists of monthly qualitative information from approximately 2000 consumers since 2006, and also the Focus Survey of the Central Bank of Brazil, with daily forecasts since 1999 from roughly 250 professional forecasters. Natural candidates to win a forecast competition in the literature of surveys of expectations are the (consensus) cross-sectional average forecasts (AF). We first show that these forecasts are a bias-ridden version of the conditional expectation of inflation using the no-bias tests proposed in Issler and Lima (J Econom 152(2):153–164, 2009) and Gaglianone and Issler (Microfounded forecasting, 2015). The results reveal interesting data features: Consumers systematically overestimate inflation (by 2.01 p.p., on average), whereas market agents underestimate it (by 0.68 p.p. over the same sample). Next, we employ a pseudo out-of-sample analysis to evaluate different forecasting methods: the AR(1) model, the Granger and Ramanathan (J Forecast 3:197–204, 1984) forecast combination (GR) technique, a Phillips-curve based method, the Capistrán and Timmermann (J Bus Econ Stat 27:428–440, 2009) combination method, the consensus forecast (AF), the bias-corrected average forecast (BCAF), and the extended BCAF. Results reveal that: (i) the MSE of the AR(1) model is higher compared to the GR (and usually lower compared to the AF); and (ii) the extended BCAF is more accurate than the BCAF, which, in turn, dominates the AF. This validates the view that the bias corrections are a useful device for forecasting using surveys. The Phillips-curve based method has a median performance in terms of MSE, and the Capistrán and Timmermann (2009) combination method fares slightly worse.
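The Granger and Ramanathan (GR) combination mentioned above is an OLS regression of the realized series on the individual forecasts plus an intercept, with the fitted coefficients serving as combination weights. A minimal sketch with invented data (the series and the two forecasters' behavior are illustrative, not from the paper):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gr_weights(y, forecasts):
    """OLS of y on [1, f1, f2, ...]: returns [intercept, w1, w2, ...]."""
    T = len(y)
    X = [[1.0] + [f[t] for f in forecasts] for t in range(T)]
    k = len(X[0])
    XtX = [[sum(X[t][i] * X[t][j] for t in range(T)) for j in range(k)]
           for i in range(k)]
    Xty = [sum(X[t][i] * y[t] for t in range(T)) for i in range(k)]
    return solve(XtX, Xty)

# Invented data: forecaster 1 has a constant +2 bias, forecaster 2 is noisy.
y  = [4.5, 5.0, 6.2, 5.8, 4.9, 5.5, 6.0, 5.2]
f1 = [6.5, 7.0, 8.2, 7.8, 6.9, 7.5, 8.0, 7.2]
f2 = [4.0, 5.5, 6.0, 6.1, 4.5, 5.9, 5.7, 5.0]
w = gr_weights(y, [f1, f2])
combined = [w[0] + w[1] * a + w[2] * b for a, b in zip(f1, f2)]
```

Because f1 here is y plus a constant bias, the regression recovers an intercept near -2 and a unit weight on f1, which is the same bias-correction logic the abstract finds valuable.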

2.
Recently De Luca and Carfora (Statistica e Applicazioni 8:123–134, 2010) proposed a novel model for binary time series, the Binomial Heterogeneous Autoregressive (BHAR) model, successfully applied to the quarterly binary time series of U.S. recessions. In this work we measure the out-of-sample forecast accuracy of the BHAR model against the probit models of Kauppi and Saikkonen (Rev Econ Stat 90:777–791, 2008). Given that the predictive accuracy of the BHAR and the probit models is essentially indistinguishable, we analyze a combination of forecasts using the method proposed by Bates and Granger (Oper Res Q 20:451–468, 1969) for probability forecasts. We show that the forecasts obtained by combining the BHAR model with each of the probit models are superior to the forecasts obtained by each single model.
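The Bates and Granger combination applied here weights each model by its past accuracy; a common implementation uses inverse mean-squared-error weights. A minimal sketch with invented forecast errors (not data from the paper):

```python
def bates_granger_weights(errors_by_model):
    """Inverse-MSE weights: w_i proportional to 1/MSE_i, summing to one."""
    inv_mse = [len(e) / sum(x * x for x in e) for e in errors_by_model]
    s = sum(inv_mse)
    return [w / s for w in inv_mse]

# Invented past errors of two recession-probability models.
e1 = [0.1, -0.2, 0.1, -0.1]   # MSE = 0.0175 (more accurate)
e2 = [0.3, -0.4, 0.2, -0.3]   # MSE = 0.0950
w1, w2 = bates_granger_weights([e1, e2])

# Combine the two models' next-quarter probability forecasts.
p_combined = w1 * 0.60 + w2 * 0.40
```

The more accurate model receives the larger weight, and the combined probability stays in [0, 1] because the weights are non-negative and sum to one.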

3.
In this paper we examine which macroeconomic and financial variables have the most predictive ability for the federal funds target rate decisions made by the Federal Open Market Committee (FOMC). We conduct the analysis for the 157 FOMC decisions during the period January 1990–June 2008, using dynamic ordered probit models with a Bayesian endogenous variable selection methodology and real-time data for a set of 33 candidate predictor variables. We find that indicators of economic activity and forward-looking term structure variables, as well as survey measures, are most informative from a forecasting perspective. For the full sample period, in-sample probability forecasts achieve a hit rate of 90%. Based on out-of-sample forecasts for the period January 2001–June 2008, 82% of the FOMC decisions are predicted correctly.

4.
This paper presents a methodology for producing a probability forecast of a turning point in the U.S. economy using Composite Leading Indicators. The methodology is based on classical statistical decision theory and uses an information-theoretic measure to produce a probability; it is flexible, using as many historical data points as desired. It is applied to producing probability forecasts of a downturn in the U.S. economy in the 1970–1990 period. Four probability forecasts are produced using different amounts of information, and their performance is evaluated against the actual downturn points using scores measuring accuracy, calibration, and resolution. An indirect comparison of these forecasts with Diebold and Rudebusch's sequential probability recursion is also presented. It is shown that the performances of our best two models are statistically different from that of the three-consecutive-month-decline model and the same as that of the best probit model. The probit model, however, is more conservative in its predictions than our two models.

5.
We explore origin–destination forecasting of commodity flows between 15 Spanish regions, using data covering the period from 1995 to 2004. The 1-year-ahead forecasts are based on a recently introduced spatial autoregressive variant of the traditional gravity model. Gravity (or spatial interaction) models attempt to explain variation in \(N = n^2\) flows between \(n\) origin and destination regions, where the flows form a vector arising from an \(n\) by \(n\) flow matrix. The spatial autoregressive variant of the gravity model used here takes into account spatial dependence between flows from regions neighboring both the origin and destination regions during estimation and forecasting. One-year-ahead forecast accuracy of non-spatial and spatial models is compared.

6.
The interest rate assumptions underlying macroeconomic forecasts differ among central banks. Common approaches assume that interest rates remain constant over the forecast horizon, follow a path expected by market participants, or follow a path expected by the central bank itself. Theoretical papers such as Svensson (The instrument-rate projection under inflation targeting: the Norwegian example. Centre for European Policy Studies Working Paper (127), 2006) and Galí (J Monet Econ 58:537–550, 2011) suggest an accuracy ranking for these forecasts, from employing central bank expectations yielding the highest forecast accuracy to conditioning on constant interest rates yielding the lowest. Yet, when investigating the predictive accuracy of the Bank of England's and the Banco Central do Brasil's forecasts for interest rates, inflation and output growth, we hardly find any significant differences between forecasts based on the different interest rate paths. Our results suggest that the choice of the interest rate assumption is of minor relevance empirically.

7.
The Brier score and a covariance partition due to Yates are used to study the probabilistic forecasts of a vector autoregression on stock market returns. Probabilistic forecasts from the model and data of Campbell (1991, A variance decomposition for stock returns, Economic Journal, 101: 157–179), estimated by ordinary least squares, are studied. Calibration measures and the Brier score and its partition are used for model assessment. The partitions indicate that the ordinary least squares version of Campbell's model does not forecast stock market returns particularly well. While the model offers honest probabilistic forecasts (they are well calibrated), it shows little ability to sort events that occur from events that do not. The Yates partition demonstrates this shortcoming; calibration metrics do not.
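The Yates partition builds on an exact covariance identity for the Brier score: BS = Var(f) + Var(d) + (mean(f) - mean(d))^2 - 2*Cov(f, d), where f are the probability forecasts and d the 0/1 outcomes. A small sketch with invented forecasts, verifying the identity:

```python
def mean(xs):
    return sum(xs) / len(xs)

def brier(f, d):
    """Mean squared difference between probability forecasts and 0/1 outcomes."""
    return mean([(p - o) ** 2 for p, o in zip(f, d)])

def covariance_terms(f, d):
    """Population variances, squared bias and covariance of forecasts/outcomes."""
    fb, db = mean(f), mean(d)
    var_f = mean([(p - fb) ** 2 for p in f])
    var_d = mean([(o - db) ** 2 for o in d])
    cov = mean([(p - fb) * (o - db) for p, o in zip(f, d)])
    return var_f, var_d, (fb - db) ** 2, cov

f = [0.8, 0.6, 0.3, 0.7, 0.2, 0.9]   # invented probability forecasts
d = [1, 1, 0, 0, 0, 1]               # realized 0/1 outcomes
bs = brier(f, d)
var_f, var_d, bias_sq, cov = covariance_terms(f, d)
# Identity: bs == var_f + var_d + bias_sq - 2*cov. A large covariance is
# what separates events that occur from those that do not, which is the
# ability the paper finds lacking.
```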

8.
Jing Zeng, Empirica, 2016, 43(2): 415–444
European Monetary Union member countries' forecasts are often combined to obtain forecasts of the Euro area macroeconomic aggregate variables. The aggregation weights used to produce the aggregates are often taken as the combination weights. This paper investigates whether using different combination weights instead of the usual aggregation weights can help to provide more accurate forecasts. In this context, we examine the performance of equal weights, the least squares estimators of the weights, the combination method recently proposed by Hyndman et al. (Comput Stat Data Anal 55(9):2579–2589, 2011) and the weights suggested by shrinkage methods. We find that some variables, like real GDP and the GDP deflator, can be forecast more precisely by using flexible combination weights. Furthermore, combining only the forecasts of the three largest European countries helps to improve the forecasting performance. The persistence of the individual series seems to play an important role in the relative performance of the combination.

9.
We defend the forecasting performance of the Federal Open Market Committee (FOMC) against the criticism of Christina and David Romer (2008, American Economic Review 98, 230–235) by assuming that the FOMC's forecasts depict a worst-case scenario that it uses to design decisions that are robust to misspecification of the staff's model. We use a simple macro model and a plausible loss function to illustrate how such an interpretation of the FOMC's forecasts can explain the findings of Romer and Romer, including the pattern of differences between FOMC forecasts and forecasts published by the staff of the Federal Reserve System in the Greenbook.

10.
The forecast performance of the empirical ESTAR model of Taylor et al. (2001) is examined for four bilateral real exchange rate series over an out-of-sample evaluation period of nearly 12 years. Point as well as density forecasts are constructed, considering forecast horizons of 1 to 22 steps ahead. The study finds that no forecast gains over a simple AR(1) specification exist at any of the forecast horizons considered, regardless of whether point or density forecasts are utilised in the evaluation. Non-parametric methods are used in conjunction with simulation techniques to learn about the models and their forecasts. It is shown graphically that the nonlinearity in the conditional means (or point forecasts) of the ESTAR model decreases as the forecast horizon increases. The non-parametric methods also show that the multiple-steps-ahead forecast densities are normal looking, with no signs of bi-modality, skewness or kurtosis.
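One textbook ESTAR parameterisation (parameter values invented, not those of Taylor et al.) has a conditional mean that behaves like a random walk near equilibrium and mean-reverts far from it. Iterating the conditional mean shows the point forecast path flattening toward equilibrium, consistent with the decreasing nonlinearity the abstract describes:

```python
import math

def estar_mean(y_lag, gamma=1.0, phi=1.0, phi_star=-0.9):
    """One-step conditional mean of a simple ESTAR model:
    E[y_t | y_{t-1}] = phi*y_lag + phi_star*y_lag*G(y_lag),
    with exponential transition G(x) = 1 - exp(-gamma*x**2).
    Near zero G ~ 0 (random-walk behaviour); far from zero G -> 1
    (mean reversion with overall coefficient phi + phi_star)."""
    G = 1.0 - math.exp(-gamma * y_lag ** 2)
    return phi * y_lag + phi_star * y_lag * G

def point_forecast_path(y0, horizon):
    """Naive multi-step point forecasts obtained by iterating the
    conditional mean (only an approximation for nonlinear models)."""
    path = [y0]
    for _ in range(horizon):
        path.append(estar_mean(path[-1]))
    return path

path = point_forecast_path(2.0, 10)   # deviation shrinks toward zero
```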

11.
A survey of contemporary literature suggests that empirical studies of volatility in developing economies are scarce. Engle and Patton (2001, What good is a volatility model? Quantitative Finance, 1, 237–245) as well as Poon (2005, A Practical Guide to Forecasting Financial Market Volatility. New Jersey: Wiley) suggest that a good volatility model is one that exploits the empirical regularities of financial market volatility (most of which were observed in industrialized markets). This paper uses exchange rate series from Ghana, Mozambique and Tanzania to show that:
  1. they are not different from other financial markets, exhibiting most of the empirical regularities, including volatility sign asymmetry, non-normal distribution and volatility clustering; the three exchange rate series are, however, very volatile, with induced volatility shocks that are highly persistent and asymmetric, and extreme prices commonplace;
  2. the ARCH technique (well documented to capture these regularities and produce good forecasts) generally produced a good fit to the three exchange rate series when compared with volatility forecasts generated by the EWMA technique; in a simple analysis of the day-ahead volatility forecasting ability of the estimated models, the best fit did not necessarily yield the best forecast.

12.
13.
This paper discusses mixture periodic GARCH (M-PGARCH) models, which constitute a very flexible class of nonlinear time series models of the conditional variance. It turns out that they are more parsimonious compared to MPARCH models. We first provide some probabilistic properties of this class of models. We then propose an estimation method based on the expectation-maximization algorithm. Finally, we apply this methodology to model the spot rates of the Algerian dinar against the euro and the US dollar. This empirical analysis shows that M-PGARCH models yield the best performance among the competing models.

14.
Based on the approach advanced by Elliott, Komunjer, and Timmermann (2005, Estimation and testing of forecast rationality under flexible loss, Review of Economic Studies, 72, 1107–1125), we analyzed whether the loss function of a sample of exchange-rate forecasters is asymmetric in the forecast error. Using forecasts of the dollar/euro exchange rate, we found that the shape of the loss function varies across forecasters. Some forecasters appear to make forecasts under an asymmetric loss function, while a symmetric loss function seems to describe well the loss function of other forecasters. Accounting for an asymmetric loss function does not necessarily make forecasts 'look' rational.
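The flexible loss family of Elliott, Komunjer and Timmermann nests, for the linear case, the asymmetric "lin-lin" loss sketched below with invented forecast errors; alpha = 0.5 recovers symmetric (absolute-error) loss:

```python
def linlin_loss(e, alpha=0.5):
    """Asymmetric 'lin-lin' loss for forecast error e = actual - forecast:
    under-predictions (e > 0) cost alpha*|e|, over-predictions cost
    (1 - alpha)*|e|. alpha = 0.5 gives (half of) absolute-error loss."""
    return (alpha if e > 0 else 1.0 - alpha) * abs(e)

def mean_loss(errors, alpha):
    return sum(linlin_loss(e, alpha) for e in errors) / len(errors)

errors = [0.4, -0.1, 0.3, -0.2, 0.5]   # invented; mostly under-predictions
loss_sym = mean_loss(errors, 0.5)      # symmetric benchmark
loss_asym = mean_loss(errors, 0.8)     # penalises under-prediction heavily
```

A forecaster who minimises the asymmetric loss would rationally shade forecasts upward, so errors that look biased under symmetric loss can still be rational, which is the point of the abstract's last sentence.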

15.
In this paper, we compare the performance of dynamic conditional score (DCS) and standard financial time-series models for Central American energy prices. We extend the Student’s t and the exponential generalised beta distribution of the second kind stochastic location and stochastic seasonal DCS models. We consider the generalised t distribution as an alternative for the error term and also consider dynamic specifications of volatility. We use a unique dataset of spot electricity prices for El Salvador, Guatemala and Panama. We consider two data windows for each country, which are defined with respect to the liberalisation and development process of the energy market in Central America. We study the identification of a wide range of DCS specifications, likelihood-based model performance, time-series components of energy prices, maximum likelihood parameter estimates, the discounting property of conditional score, and out-of-sample forecast performance. Our main results are the following. (i) We determine the most robust models of energy prices, with respect to parameter identification, from a wide range of DCS specifications. (ii) For most of the cases, the in-sample statistical performance of DCS is superior to that of the standard model. (iii) For El Salvador and Panama, the standard model provides better point forecasts than DCS, and for Guatemala the point forecast precision of standard and DCS models does not differ significantly. (iv) For El Salvador, the standard model provides better density forecasts than DCS, and for Guatemala and Panama, the density forecast precision of standard and DCS models does not differ significantly.

16.
In this paper, using daily data for six major international stock market indexes and a modified EGARCH specification, the links between stock market returns, volatility and trading volume are investigated in a new nonlinear conditional variance framework with multiple regimes and volume effects. Volatility forecast comparisons, using the Harvey–Newbold test for multiple forecast encompassing, seem to demonstrate that the MSV-EGARCH complex threshold structure is able to correctly fit GARCH-type dynamics of the series under study and dominates competing standard asymmetric models for several of the stock indexes considered.
José Dias Curto
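A minimal sketch of a plain EGARCH(1,1) log-variance recursion (parameter values invented; the paper's MSV-EGARCH adds multiple regimes and volume effects on top of this basic structure):

```python
import math

def egarch_logvar(returns, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95):
    """EGARCH(1,1) recursion on the log-variance:
    log h_t = omega + beta*log h_{t-1} + alpha*(|z| - E|z|) + gamma*z,
    where z is the standardized return and E|z| = sqrt(2/pi) under normality.
    gamma < 0 makes negative shocks raise volatility more (leverage effect),
    and modelling the log keeps h_t positive without parameter restrictions."""
    e_abs_z = math.sqrt(2.0 / math.pi)
    logh = [omega / (1.0 - beta)]          # start at the unconditional level
    for r in returns:
        z = r / math.sqrt(math.exp(logh[-1]))
        logh.append(omega + beta * logh[-1]
                    + alpha * (abs(z) - e_abs_z) + gamma * z)
    return [math.exp(x) for x in logh]

variances = egarch_logvar([0.01, -0.03, 0.02, -0.05])
```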

17.
The Federal Open Market Committee (FOMC) of the U.S. Federal Reserve publishes the range of members’ forecasts for key macroeconomic variables, but not the distribution of forecasts within this range. To evaluate these projections, previous papers compare the midpoint of the range with the realized outcome. This paper proposes an alternative approach to forecast evaluation that takes account of the interval nature of the projections. It is shown that using the conventional Mincer–Zarnowitz approach to evaluate FOMC forecasts misses important information contained in the width of the forecast interval. This additional information plays a minor role at short forecast horizons but turns out to be of sometimes crucial importance for longer-horizon forecasts. For 18-month-ahead forecasts, the variation of members’ projections contains information that is more relevant for explaining future inflation than the information embodied in the midpoint. Likewise, when longer-range forecasts for real GDP growth and the unemployment rate are considered, the width of the forecast interval comprises information over and above that given by the midpoint alone.
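The conventional Mincer–Zarnowitz evaluation mentioned above regresses realizations on forecasts and checks for a zero intercept and unit slope. A minimal sketch with invented data containing a constant bias:

```python
def mincer_zarnowitz(forecast, actual):
    """OLS of actual on forecast: actual = a + b*forecast + error.
    Unbiased, efficient forecasts imply a = 0 and b = 1."""
    n = len(forecast)
    fbar = sum(forecast) / n
    abar = sum(actual) / n
    b = (sum((f - fbar) * (y - abar) for f, y in zip(forecast, actual))
         / sum((f - fbar) ** 2 for f in forecast))
    a = abar - b * fbar
    return a, b

# Invented forecasts that are always 0.5 too low: slope 1, intercept 0.5.
a, b = mincer_zarnowitz([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.5, 4.5])
```

This regression uses only a point forecast per period, which is exactly why, as the abstract argues, it cannot exploit the width of an interval projection.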

18.
This paper introduces the Rossi and Sekhposyan (Am Econ Rev 105(5): 650–655, 2015) uncertainty index for the Euro Area and its member countries. The index captures how unexpected a forecast error associated with a realization of a macroeconomic variable is relative to the unconditional distribution of forecast errors. Furthermore, it can differentiate between upside and downside uncertainty, which could be relevant for addressing a variety of economic questions. The index is particularly useful since it can be constructed for any country/variable for which point forecasts and realizations are available. We show the usefulness of the index in studying the heterogeneity of uncertainty across Euro Area countries as well as the spillover effects via a network approach.

19.
We propose to produce accurate point and interval forecasts of exchange rates by combining a number of well-known fundamentals-based panel models. The combination uses a set of weights computed in a linear mixture-of-experts framework, where the weights are determined by log scores assigned to each model's predictive performance. As well as model uncertainty, we take potential structural breaks in the parameters of the models into consideration. In our application to quarterly data for ten currencies (including the Euro) for the period 1990Q1–2008Q4, we show that the ensemble models produce mean and interval forecasts that outperform equal-weight and, to a lesser extent, random-walk benchmark models. The gain from combining forecasts is particularly pronounced at longer horizons for central forecasts, but much less so for interval forecasts. Calculations of the probability of the exchange rate rising or falling using the combined or ensemble model show a good correspondence with known events and potentially provide a useful measure of uncertainty about whether the exchange rate is likely to rise or fall.

20.
We investigate the impact of an uncertain number of false individual null hypotheses on commonly used p value combination methods. Under such uncertainty, these methods perform quite differently and often yield conflicting results. Consequently, we develop a combination of “combinations of p values” (CCP) test aimed at maintaining good power properties across such uncertainty. The CCP test is based on a simple union–intersection principle that exploits the weak correspondence between two underlying p value combination methods. Monte Carlo simulations show that the CCP test controls size and closely tracks the power of the best individual methods. We empirically apply the CCP test to explore stationarity in real exchange rates and information rigidity in inflation and output growth forecasts.
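Two classic p value combination methods of the kind such a CCP test builds on can be sketched as follows (illustrative inputs; the CCP's union–intersection step itself is not reproduced here):

```python
import math
from statistics import NormalDist

def fisher_pvalue(pvals):
    """Fisher's method: -2*sum(log p_i) is chi-square with 2k df under H0.
    For even degrees of freedom the chi-square survival function has the
    closed form exp(-x) * sum_{i<k} x**i / i!, with x = stat/2."""
    x = -sum(math.log(p) for p in pvals)
    k = len(pvals)
    return math.exp(-x) * sum(x ** i / math.factorial(i) for i in range(k))

def stouffer_pvalue(pvals):
    """Inverse-normal (Stouffer) combination of one-sided p values."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1.0 - p) for p in pvals) / math.sqrt(len(pvals))
    return 1.0 - nd.cdf(z)

# Invented individual p values, e.g. unit-root tests across countries.
pvals = [0.04, 0.20, 0.03, 0.15]
pf = fisher_pvalue(pvals)
ps = stouffer_pvalue(pvals)
```

The two methods can disagree when only some individual nulls are false, which is precisely the uncertainty the CCP test is designed to be robust against.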


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), ICP licence 京ICP备09084417号