Similar literature
 20 similar records found (search time: 31 ms)
1.
We estimate the EU Commission's loss preferences for its major economic forecasts of 12 Member States. Based on a method recently proposed by Elliott, Komunjer and Timmermann (2005), the paper provides evidence of asymmetries in the Commission's underlying forecast loss preferences that vary across Member States. In some cases, our results show that EU forecasts tend to paint a rather optimistic picture of the main economic variables, e.g. the government balance, thus allowing a certain degree of leeway in the fiscal adjustment path towards the medium-term objective of ‘close to balance’ or ‘in surplus’ of the recently revised Stability and Growth Pact. Over our sample period, 1970–2004, this apparent asymmetry in the underlying loss preferences tends to undermine prudent economic policy advice. Lastly, we analyse the trade-off between loss and distribution asymmetries, for which simulation results show that the testing method is robust in the presence of skewness. Copyright © 2009 John Wiley & Sons, Ltd.
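The Elliott, Komunjer and Timmermann idea can be illustrated in its simplest lin-lin special case: with a constant instrument, the estimated asymmetry parameter is just the share of negative forecast errors. A minimal simulated sketch (the forecaster and all numbers are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
actual = rng.normal(0.0, 1.0, T)
# A hypothetical "optimistic" forecaster who over-predicts by 0.5 on average,
# so forecast errors e = actual - forecast are negative most of the time.
forecast = actual + rng.normal(0.5, 0.3, T)
e = actual - forecast

# Under lin-lin loss L(e) = [alpha + (1 - 2*alpha) * 1{e < 0}] * |e| with a
# constant instrument, the moment condition E[1{e < 0} - alpha] = 0 gives
# alpha_hat = share of negative errors. alpha > 0.5 means under-prediction is
# the costlier mistake, rationalizing systematic over-prediction (optimism).
alpha_hat = np.mean(e < 0)
print(round(alpha_hat, 2))
```

With these illustrative numbers the estimated asymmetry parameter lands well above 0.5, the value a symmetric loss would imply.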

2.
The difference and system generalized method of moments (GMM) estimators are growing in popularity. As implemented in popular software, the estimators easily generate instruments that are numerous and, in system GMM, potentially suspect. A large instrument collection overfits endogenous variables even as it weakens the Hansen test of the instruments’ joint validity. This paper reviews the evidence on the effects of instrument proliferation, and describes and simulates simple ways to control it. It illustrates the dangers by replicating Forbes [American Economic Review (2000) Vol. 90, pp. 869–887] on income inequality and Levine et al. [Journal of Monetary Economics (2000) Vol. 46, pp. 31–77] on financial sector development. Results in both papers appear driven by previously undetected endogeneity.
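The proliferation concern can be made concrete with a back-of-the-envelope count: in difference GMM, using every available lag as a separate instrument for each period makes the instrument count grow quadratically in the number of periods T, while "collapsing" the instrument matrix keeps it linear. A sketch under those standard conventions:

```python
# Back-of-the-envelope instrument counts in difference GMM for a panel
# observed over T periods, where the equation in first differences at
# time t is instrumented by the levels y_{t-2}, ..., y_1.
def n_instruments_expanded(T):
    # one instrument per (period, lag) pair: t - 2 instruments at each t
    return sum(t - 2 for t in range(3, T + 1))

def n_instruments_collapsed(T):
    # "collapsed" matrix: one column per lag distance
    return T - 2

print(n_instruments_expanded(10), n_instruments_collapsed(10))
```

Already at T = 10 the expanded count (36) dwarfs the collapsed one (8), and the gap widens quadratically with longer panels.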

3.
Lanne and Saikkonen [Oxford Bulletin of Economics and Statistics (2011a) Vol. 73, pp. 581–592] show that the generalized method of moments (GMM) estimator is inconsistent when the instruments are lags of variables that admit a non-causal autoregressive representation. This article argues that this inconsistency depends on distributional assumptions that do not always hold. In particular, under rational expectations the GMM estimator is found to be consistent. This result is derived in a linear context and illustrated by simulation of a nonlinear asset pricing model.

4.
Although the Lisbon Treaty recognises the necessity of limiting the power of the European Union, some of its limitations are poorly expressed. As a result, the European Commission can act arbitrarily by expanding Union power. The position of the Commission is pre-eminent, notably with respect to the drafting of EU measures. Not only can the Commission expand Union power, it may also favour certain actors at the expense of the principals (Member States and their citizens). Indeed, the Commission may apply definitions of the ‘common European interest’ that go beyond the preferences of the principals.

5.
The monetary policy reaction function of the Bank of England is estimated by the standard GMM approach and the ex ante forecast method developed by Goodhart (2005), with particular attention to the horizons for inflation and output at which each approach gives the best fit. The horizons for the ex ante approach are much closer to what is implied by the Bank's view of the transmission mechanism, while the GMM approach produces an implausibly slow adjustment of the interest rate and suffers from a weak instruments problem. These findings suggest a strong preference for the ex ante approach.
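The "implausibly slow adjustment" criticism is usually read off the interest-rate smoothing parameter rho in a partial-adjustment rule, whose implied half-life is ln(0.5)/ln(rho). A quick illustration (rho = 0.9 is an illustrative value, not an estimate from the paper):

```python
import math

# Partial-adjustment (interest-rate smoothing) rule:
#   i_t = rho * i_{t-1} + (1 - rho) * i_target_t
# The gap to the target rate shrinks by a factor rho each quarter, so the
# half-life of adjustment is ln(0.5) / ln(rho) quarters.
def half_life(rho):
    return math.log(0.5) / math.log(rho)

# rho = 0.9 already implies well over a year and a half of quarters
# to close half the gap to the target rate.
print(round(half_life(0.9), 1))
```

High estimated rho values are why GMM-based reaction functions can imply interest-rate paths that adjust far more slowly than central-bank practice suggests.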

6.
Decision makers often observe point forecasts of the same variable computed, for instance, by commercial banks, the IMF and the World Bank, but the econometric models used by such institutions are frequently unknown. This paper shows how to use the information available in point forecasts to compute optimal density forecasts. Our idea builds upon the combination of point forecasts under general loss functions and unknown forecast error distributions. We use real-time data to forecast the density of US inflation. The results indicate that the proposed method materially improves the real-time accuracy of density forecasts vis-à-vis those from the (unknown) individual econometric models. Copyright © 2013 John Wiley & Sons, Ltd.

7.
Kim, Belaire‐Franch and Amador [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66] present different percentiles for the same mean score test statistic. We find that the difference, by a factor of 0.6, is due to systematically different sample analogues. Furthermore, we clarify which sample versions of the mean-exponential test statistic should be used with which set of critical values. At the same time, we correct some of the limiting distributions found in the literature.

8.
Many forecasts are conditional in nature. For example, a number of central banks routinely report forecasts conditional on particular paths of policy instruments. Even though conditional forecasting is common, there has been little work on methods for evaluating conditional forecasts. This paper provides analytical, Monte Carlo and empirical evidence on tests of predictive ability for conditional forecasts from estimated models. In the empirical analysis, we examine conditional forecasts obtained with a VAR in the variables included in the DSGE model of Smets and Wouters (American Economic Review 2007; 97: 586–606). Throughout the analysis, we focus on tests of bias, efficiency and equal accuracy applied to conditional forecasts from VAR models. Copyright © 2016 John Wiley & Sons, Ltd.

9.
This note provides a warning against careless use of the generalized method of moments (GMM) with time series data. We show that if time series follow non-causal autoregressive processes, their lags are not valid instruments, and the GMM estimator is inconsistent. Moreover, endogeneity of the instruments may not be revealed by the J-test of overidentifying restrictions, which may be inconsistent and, in general, has low finite-sample power. Our explicit results pertain to a simple linear regression, but they can easily be generalized. Our empirical results indicate that non-causality is quite common among economic variables, making these problems highly relevant.

10.
11.
Although out-of-sample forecast performance is often deemed to be the ‘gold standard’ of evaluation, it is not in fact a good yardstick for evaluating models in general. The arguments are illustrated with reference to a recent paper by Carruth, Hooker and Oswald [Review of Economics and Statistics (1998), Vol. 80, pp. 621–628], who suggest that the good dynamic forecasts of their model support the efficiency-wage theory on which it is based.

12.
This article presents a formal explanation of the forecast combination puzzle: simple combinations of point forecasts are repeatedly found to outperform sophisticated weighted combinations in empirical applications. The explanation lies in the effect of finite-sample error in estimating the combining weights. A small Monte Carlo study and a reappraisal of an empirical study by Stock and Watson [Federal Reserve Bank of Richmond Economic Quarterly (2003) Vol. 89/3, pp. 71–90] support this explanation. The Monte Carlo evidence, together with a large-sample approximation to the variance of the combining weight, also supports the popular recommendation to ignore forecast error covariances in estimating the weight.
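The mechanism behind the puzzle, estimation error in the combining weights, is easy to reproduce in a toy Monte Carlo: two equally good, positively correlated forecasts, a short weight-estimation window, and an out-of-sample comparison against the simple average. A sketch (the design and all numbers are assumptions, not the article's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rep, n_est, n_oos = 500, 20, 200

def one_rep():
    # two unbiased forecasts with equally sized, positively correlated
    # errors, so the true optimal combining weight is exactly 0.5
    y = rng.normal(size=n_est + n_oos)
    e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n_est + n_oos)
    f1, f2 = y + e[:, 0], y + e[:, 1]
    # estimate the weight on the short window by constrained least squares
    d = f1[:n_est] - f2[:n_est]
    w = np.sum((y[:n_est] - f2[:n_est]) * d) / np.sum(d * d)
    err_est = y[n_est:] - (w * f1[n_est:] + (1 - w) * f2[n_est:])
    err_eq = y[n_est:] - 0.5 * (f1[n_est:] + f2[n_est:])
    return np.mean(err_est**2), np.mean(err_eq**2)

mse = np.array([one_rep() for _ in range(n_rep)])
# noise in the estimated weight makes it lose to the simple average on average
print(mse[:, 0].mean() > mse[:, 1].mean())
```

Because the true optimal weight here is exactly 0.5, any deviation of the estimated weight from 0.5 is pure estimation noise, and the simple average wins on average out of sample.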

13.
We assess the accuracy of real GDP growth forecasts released by governments and international organizations for European countries over the years 1999–2017. We implement three testing procedures characterized by different assumptions on the forecasters’ loss functions. First, we test forecast rationality within the traditional approach based on a quadratic loss function (Mincer and Zarnowitz, 1969). Second, following Elliott, Komunjer and Timmermann (2005), we test rationality by allowing for a flexible loss function in which the shape parameter driving the extent of asymmetry is unknown and estimated from the empirical distribution of forecast errors. Lastly, we implement the tests proposed by Patton and Timmermann (2007a) that hold regardless of the functional form of the loss function. We conclude that governmental forecasts are biased and not rational under a symmetric and quadratic loss function, but they are optimal under more general assumptions on the loss function. We also find that the preferences of forecasters change with the forecasting horizon: when moving from one- to two-year-ahead forecasts, the optimistic bias increases and the asymmetry parameter of the loss function increases significantly.
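The first of the three procedures, the Mincer-Zarnowitz rationality test, regresses outcomes on forecasts and tests the joint restriction (intercept, slope) = (0, 1). A minimal sketch on simulated, optimistically biased forecasts (all numbers are illustrative, not from the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
y = rng.normal(2.0, 1.0, n)              # realised growth (illustrative)
f = y + 0.5 + rng.normal(0.0, 0.5, n)    # optimistically biased forecasts

# Mincer-Zarnowitz regression y = a + b*f + u; rationality under quadratic
# loss requires (a, b) = (0, 1), tested jointly with a Wald/F statistic.
X = np.column_stack([np.ones(n), f])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
V = s2 * np.linalg.inv(X.T @ X)          # homoskedastic covariance estimate
d = beta - np.array([0.0, 1.0])
F = d @ np.linalg.inv(V) @ d / 2
print(F > 3.04)  # compares against the 5% critical value of F(2, 198)
```

The biased forecasts produce a slope well below one, so the joint restriction is clearly rejected, which is exactly the kind of evidence that motivates moving to asymmetric-loss tests.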

14.
This article provides a first analysis of the forecasts of inflation and GDP growth obtained from the Bank of England's Survey of External Forecasters, considering both the survey average forecasts published in the quarterly Inflation Report, and the individual survey responses, recently made available by the Bank. These comprise a conventional incomplete panel dataset, with an additional dimension arising from the collection of forecasts at several horizons; both point forecasts and density forecasts are collected. The inflation forecasts show good performance in tests of unbiasedness and efficiency, albeit over a relatively calm period for the UK economy, and there is considerable individual heterogeneity. For GDP growth, inaccurate real-time data and their subsequent revisions are seen to cause serious difficulties for forecast construction and evaluation, although the forecasts are again unbiased. There is evidence that some forecasters have asymmetric loss functions.

15.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
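The general idea of a persistence-change ratio statistic can be sketched without the paper's exact test statistics: compute a KPSS-type statistic on each sub-sample and take their ratio, which should be large when the series switches from I(0) to I(1). A simplified illustration (no long-run variance correction and a fixed mid-sample split, both simplifying assumptions):

```python
import numpy as np

def kpss_stat(x):
    # level-stationarity statistic, iid sketch (no long-run variance correction)
    s = np.cumsum(x - x.mean())
    return np.sum(s**2) / (len(x)**2 * np.var(x))

def ratio(x):
    # end-of-sample vs start-of-sample KPSS statistics, mid-point split
    h = len(x) // 2
    return kpss_stat(x[h:]) / kpss_stat(x[:h])

rng = np.random.default_rng(3)
T = 400
stationary = rng.normal(size=T)                            # I(0) throughout
switching = np.concatenate([rng.normal(size=T // 2),       # I(0) ...
                            np.cumsum(rng.normal(size=T // 2))])  # ... then I(1)
print(ratio(switching) > ratio(stationary))
```

For the switching series the second-half statistic explodes while the first-half statistic stays small, so the ratio is far larger than for the purely stationary series; the published tests formalize this with proper long-run variance estimation and searched break dates.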

16.
The expected-utility (EU) approach brings great richness to the study of decision making in a large variety of stochastic environments. Research in the EU paradigm often starts from plausible assumptions on risk preferences or optimal responses to changes in the risk structure, and then investigates how such assumptions are reflected in properties of the von Neumann-Morgenstern (vNM) utility functions underlying the EU concept. Building on Pratt’s (1964) analysis of risk aversion, several measures of risk attitude have been analyzed, including absolute prudence and temperance (Kimball (1990), Eeckhoudt, Gollier and Schlesinger (1996)), their relative and partial relative counterparts (Choi, Kim and Snow (2001), Honda (1985)) as well as extensions such as mixed risk aversion (Caballé and Pomansky (1996)). Mathematics Subject Classification (2000): 91B16, 91B06. Journal of Economic Literature Classification: D81, D82.

17.
In this paper, we introduce several test statistics testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit-root tests by Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346] and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

18.
A small-scale vector autoregression (VAR) is used to shed some light on the roles of extreme shocks and non-linearities during stress events observed in the economy. The model focuses on the link between credit/financial markets and the real economy and is estimated on US quarterly data for the period 1984–2013. Extreme shocks are accounted for by assuming t-distributed reduced-form shocks. Non-linearity is allowed by the possibility of regime switch in the shock propagation mechanism. Strong evidence for fat tails in error distributions is found. Moreover, the results suggest that accounting for extreme shocks rather than explicit modeling of non-linearity contributes to the explanatory power of the model. Finally, it is shown that the accuracy of density forecasts improves if non-linearities and shock distributions with fat tails are considered.
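Why t-distributed reduced-form shocks matter is visible already in unconditional moments: a Student-t with 5 degrees of freedom has kurtosis 9, versus 3 for the Gaussian, so Gaussian errors systematically understate the probability of extreme shocks. A quick check (degrees of freedom chosen for illustration, not the paper's estimate):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
t5 = rng.standard_t(5, n)       # fat-tailed shocks (kurtosis 9 in theory)
gauss = rng.normal(size=n)      # Gaussian benchmark (kurtosis 3)

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.var(x)**2

print(kurtosis(gauss) < 4.0 < kurtosis(t5))
```

The same variance can hide very different tail behaviour, which is why density forecasts, not just point forecasts, improve once fat-tailed errors are allowed.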

19.
In a competitive information market, a single information source can only dominate other sources individually, not collectively. We explore whether earnings announcements constitute such a dominant source using Ball and Shivakumar's (2008) [How much new information is there in earnings? Journal of Accounting Research, 2008, 46(5), pp. 975–1016] R² metric: the proportion of the variation in annual returns explained by the four quarterly earnings announcement returns. We find that the earnings announcement days' R² is 11%, higher than the corresponding R² of days with dividend announcements, management forecasts, preannouncements, and 10-K and 10-Q filings and their amendments, and comparable to that of the four days with the largest realised absolute returns in a year. Additional analysis reveals that earnings announcements convey extreme bad news as often as management forecasts and preannouncements; for any other type of news, earnings announcements are much more frequent. We conclude that earnings announcements are an important source of new information in the equity market.

20.
This paper proposes a novel procedure to estimate linear models when the number of instruments is large. At the heart of such models is the need to balance the trade-off between attaining asymptotic efficiency, which requires more instruments, and minimizing bias, which is adversely affected by the addition of instruments. Two questions are of central concern: (1) What is the optimal number of instruments to use? (2) Should the instruments receive different weights? This paper contains the following contributions toward resolving these issues. First, I propose a kernel-weighted generalized method of moments (GMM) estimator that uses a trapezoidal kernel. This kernel turns out to be attractive for selecting and weighting the moments. Second, I derive the higher-order mean squared error of the kernel-weighted GMM estimator and show that the trapezoidal kernel generates a lower asymptotic variance than regular kernels. Finally, Monte Carlo simulations show that in finite samples the kernel-weighted GMM estimator performs on par with other estimators that choose optimal instruments, and improves upon a GMM estimator that uses all instruments.
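The paper's exact kernel construction is not reproduced here, but the shape of a trapezoidal weighting of ordered moment conditions, full weight on the first instruments followed by linear decay to zero, can be sketched as follows (flat_frac is an assumed tuning choice, not a parameter from the paper):

```python
import numpy as np

def trapezoid_weights(n_moments, flat_frac=0.5):
    # full weight on the first flat_frac share of the (ordered) moment
    # conditions, then linear decay to zero for the remainder
    x = np.arange(1, n_moments + 1) / n_moments
    return np.clip((1 - x) / (1 - flat_frac), 0.0, 1.0)

print(trapezoid_weights(8))
```

Unlike a hard cutoff, the trapezoid down-weights marginal instruments gradually rather than dropping them, which is the intuition behind its lower asymptotic variance relative to regular kernels.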

