Similar literature (20 records found)
1.
The objective of this paper is to analyze the effects of uncertainty on density forecasts of stationary linear univariate ARMA models. We consider three specific sources of uncertainty: parameter estimation, error distribution, and lag order. Depending on the estimation sample size and the forecast horizon, each of these sources may have different effects. We consider asymptotic, Bayesian, and bootstrap procedures proposed to deal with uncertainty and compare their finite sample properties. The results are illustrated by constructing fan charts for UK inflation.
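The abstract does not spell out the bootstrap procedure; a minimal sketch of the idea, under assumed settings (an AR(1) fitted by OLS, with residual resampling carrying error-distribution uncertainty into the forecast density), might look like:

```python
import numpy as np

def bootstrap_density_forecast(y, horizon=1, n_boot=500, seed=0):
    """Bootstrap h-step density forecast for an AR(1) fitted by OLS.
    Residual resampling propagates error-distribution uncertainty."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta = np.linalg.lstsq(X, y[1:], rcond=None)[0]   # (intercept, slope)
    resid = y[1:] - X @ beta
    draws = np.empty(n_boot)
    for b in range(n_boot):
        yf = y[-1]
        for _ in range(horizon):
            yf = beta[0] + beta[1] * yf + rng.choice(resid)
        draws[b] = yf
    return draws  # draws from the simulated forecast density

rng = np.random.default_rng(1)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
draws = bootstrap_density_forecast(y, horizon=2)
lo, hi = np.percentile(draws, [5, 95])   # one horizontal slice of a fan chart
```

Repeating the percentile step over horizons traces out a full fan chart; re-estimating `beta` on each bootstrap sample would add parameter-estimation uncertainty as well.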

2.
The Federal Open Market Committee (FOMC) of the U.S. Federal Reserve regularly publishes participants’ qualitative assessments of forecast uncertainty, expressed relative to that seen on average in the past. The benchmarks used for these historical comparisons are the average root mean squared forecast errors (RMSEs) made by various private and government forecasters over the past twenty years. This paper documents how these benchmarks are constructed and discusses some of their properties. We draw several conclusions. First, if past performance is a reasonable guide to future accuracy, considerable uncertainty surrounds macroeconomic projections. Second, different forecasters have similar accuracy. Third, estimates of uncertainty about future real activity and interest rates are now considerably greater than prior to the financial crisis; in contrast, estimates of inflation accuracy have changed little. Finally, fan charts, constructed under certain assumptions and viewed in conjunction with the FOMC’s qualitative assessments, provide a reasonable approximation to future uncertainty.
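The benchmark construction itself is mechanically simple; a toy sketch with made-up error histories (not the actual FOMC benchmark data):

```python
import numpy as np

# Hypothetical historical forecast errors by horizon (quarters ahead) --
# stand-ins for the pooled private/government forecaster errors in the paper.
errors = {1: [0.3, -0.5, 0.2, 0.8, -0.4], 2: [0.9, -1.1, 0.6, 1.4, -0.7]}

def rmse_benchmark(errs):
    """Average root mean squared forecast error per horizon."""
    return {h: float(np.sqrt(np.mean(np.square(e)))) for h, e in errs.items()}

bench = rmse_benchmark(errors)
point = 2.0                                      # some projection, e.g. GDP growth
band_h1 = (point - bench[1], point + bench[1])   # ~68% band under normality
```

The RMSE per horizon is the half-width of the fan-chart band; longer horizons carry larger RMSEs and therefore wider bands.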

3.
When location shifts occur, cointegration-based equilibrium-correction models (EqCMs) face forecasting problems. We consider alleviating such forecast failure by updating, intercept corrections, differencing, and estimating the future progress of an ‘internal’ break. Updating leads to a loss of cointegration when an EqCM suffers an equilibrium-mean shift, but helps when collinearities are changed by an ‘external’ break with the EqCM staying constant. Both mechanistic corrections help compared to retaining a pre-break estimated model, but an estimated model of the break process could outperform. We apply the approaches to EqCMs for UK M1, compared with updating a learning function as the break evolves.

4.
A nonparametric method for comparing multiple forecast models is developed and implemented. The hypothesis of Optimal Predictive Ability generalizes the Superior Predictive Ability hypothesis from a single given loss function to an entire class of loss functions. Distinction is drawn between General Loss functions, Convex Loss functions, and Symmetric Convex Loss functions. The research hypothesis is formulated in terms of moment inequality conditions. The empirical moment conditions are reduced to an exact and finite system of linear inequalities based on piecewise-linear loss functions. The hypothesis can be tested in a statistically consistent way using a blockwise Empirical Likelihood Ratio test statistic. A computationally feasible test procedure computes the test statistic using Convex Optimization methods, and estimates conservative, data-dependent critical values using a majorizing chi-square limit distribution and a moment selection method. An empirical application to inflation forecasting reveals that a very large majority of thousands of forecast models are redundant, leaving predominantly Phillips Curve-type models, when convexity and symmetry are assumed.

5.
This paper proposes and analyses the Kullback–Leibler information criterion (KLIC) as a unified statistical tool to evaluate, compare and combine density forecasts. Use of the KLIC is particularly attractive, as well as operationally convenient, given its equivalence with the widely used Berkowitz likelihood ratio test for the evaluation of individual density forecasts, which exploits the probability integral transforms. Parallels with the comparison and combination of point forecasts are drawn, and related Monte Carlo experiments help draw out the properties of combined density forecasts. We illustrate the uses of the KLIC in an application to two widely used published density forecasts for UK inflation, namely the Bank of England and NIESR ‘fan’ charts.
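A minimal illustration, assuming Gaussian density forecasts, of the two ingredients the abstract links: the probability integral transform (PIT) underlying the Berkowitz test, and the KLIC estimated as a difference in average log scores between two rival densities:

```python
import math, random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_score_and_pit(y, mean, sd):
    """Log predictive density and PIT of outcome y under a N(mean, sd) forecast."""
    z = (y - mean) / sd
    return -0.5 * z * z - math.log(sd * math.sqrt(2.0 * math.pi)), norm_cdf(z)

random.seed(0)
outcomes = [random.gauss(0.0, 1.0) for _ in range(2000)]   # truth: N(0, 1)
s_good, pits = zip(*(log_score_and_pit(y, 0.0, 1.0) for y in outcomes))
s_bad = [log_score_and_pit(y, 0.0, 2.0)[0] for y in outcomes]  # overdispersed rival
mean_pit = sum(pits) / len(pits)                 # ~0.5 for a correct forecast
klic_hat = (sum(s_good) - sum(s_bad)) / len(outcomes)   # positive: rival is worse
```

For a correctly specified forecast the PITs are approximately Uniform(0, 1); the positive `klic_hat` estimates the KLIC distance by which the overdispersed density loses.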

6.
Approximate Bayesian Computation (ABC) has become increasingly prominent as a method for conducting parameter inference in a range of challenging statistical problems, most notably those characterized by an intractable likelihood function. In this paper, we focus on the use of ABC not as a tool for parametric inference, but as a means of generating probabilistic forecasts, or for conducting what we refer to as ‘approximate Bayesian forecasting’. The four key issues explored are: (i) the link between the theoretical behavior of the ABC posterior and that of the ABC-based predictive; (ii) the use of proper scoring rules to measure the (potential) loss of forecast accuracy when using an approximate rather than an exact predictive; (iii) the performance of approximate Bayesian forecasting in state space models; and (iv) the use of forecasting criteria to inform the selection of ABC summaries in empirical settings. The primary finding of the paper is that ABC can provide a computationally efficient means of generating probabilistic forecasts that are nearly identical to those produced by the exact predictive, in a fraction of the time required by an exact method.
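A toy rejection-ABC forecasting sketch (not the paper's state space applications; the model, prior, summary, and tolerance are all illustrative choices): accepted parameter draws are pushed through the model to form the approximate predictive.

```python
import random

# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 3),
# summary statistic = sample mean, tolerance = 0.1.
random.seed(0)
data = [random.gauss(1.5, 1.0) for _ in range(50)]
s_obs = sum(data) / len(data)

accepted = []
while len(accepted) < 200:
    theta = random.gauss(0.0, 3.0)                       # draw from the prior
    sim = [random.gauss(theta, 1.0) for _ in range(50)]  # simulate a data set
    if abs(sum(sim) / 50 - s_obs) < 0.1:                 # keep if summaries match
        accepted.append(theta)

# Approximate Bayesian forecasting: push each accepted draw through the
# model one step ahead to build the approximate predictive distribution.
predictive = [random.gauss(th, 1.0) for th in accepted]
pred_mean = sum(predictive) / len(predictive)
```

Because the summary is (nearly) sufficient here, the approximate predictive sits close to the exact one, which is the paper's central point.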

7.
It is commonly accepted that information is helpful if it can be exploited to improve a decision-making process. In economics, decisions are often based on forecasts of the upward or downward movements of the variable of interest. We point out that directional forecasts can provide a useful framework for assessing the economic value of forecasts when loss functions (or success measures) are properly formulated to account for the realized signs and realized magnitudes of directional movements. We discuss a general approach to (directional) forecast evaluation that is based on the loss function proposed by Granger, Pesaran and Skouras. It is simple to implement and provides an economically interpretable loss/success framework. In addition, we show that this loss function is more robust to outlying forecasts than traditional loss functions. As such, the measure of directional forecast value is a readily available complement to the commonly used squared error loss criterion.

8.
In this paper, we define forecast (in)stability in terms of the variability in forecasts for a specific time period caused by updating the forecast for that period as new observations become available, i.e., as time passes. We propose an extension to the state-of-the-art N-BEATS deep learning architecture for the univariate time series point forecasting problem. The extension allows us to optimize forecasts from both a traditional forecast accuracy perspective and a forecast stability perspective. We show that the proposed extension results in forecasts that are more stable without a deterioration in forecast accuracy on the M3 and M4 data sets. Moreover, our experimental study shows that it is possible to improve both forecast accuracy and stability compared to the original N-BEATS architecture, indicating that a forecast instability component in the loss function can act as a regularization mechanism.
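The abstract does not give the exact loss used in the extension; a generic composite loss of this flavor, with a hypothetical weight `lam` on the instability term, can be sketched as:

```python
import numpy as np

def stability_loss(forecast, actual, prev_forecast, lam=0.5):
    """Accuracy (squared error) plus lam times an instability penalty:
    the squared revision relative to the forecast made at the previous origin."""
    accuracy = float(np.mean((forecast - actual) ** 2))
    instability = float(np.mean((forecast - prev_forecast) ** 2))
    return accuracy + lam * instability

actual = np.array([10.0, 11.0, 12.0])
prev = np.array([9.5, 10.5, 11.5])       # forecast issued one origin earlier
stable = np.array([9.6, 10.6, 11.6])     # barely revised, slightly less accurate
jumpy = np.array([10.1, 11.1, 12.1])     # more accurate but heavily revised
```

With `lam=0.5` the stable forecast scores better despite its larger error, illustrating the accuracy/stability trade-off the loss controls.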

9.
Sample autocorrelation coefficients are widely used to test the randomness of a time series. Despite its unsatisfactory performance, the asymptotic normal distribution is often used to approximate the distribution of the sample autocorrelation coefficients, mainly because of the lack of an efficient approach for obtaining their exact distribution. In this paper, we provide an efficient algorithm for evaluating the exact distribution of the sample autocorrelation coefficients. Under the multivariate elliptical distribution assumption, the exact distribution as well as exact moments and joint moments of sample autocorrelation coefficients are presented. In addition, the exact mean and variance of various autocorrelation-based tests are provided. Actual size properties of the Box–Pierce and Ljung–Box tests are investigated and shown to be poor when the number of lags is moderately large relative to the sample size. Using the exact mean and variance of the Box–Pierce test statistic, we propose an adjusted Box–Pierce test with far better size properties than the traditional Box–Pierce and Ljung–Box tests.
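The two classical statistics being size-corrected here are standard; a compact implementation (the paper's exact-moment adjustment is not reproduced):

```python
import numpy as np

def acf(y, max_lag):
    """Sample autocorrelations r_1 .. r_max_lag."""
    y = np.asarray(y, float) - np.mean(y)
    denom = np.sum(y * y)
    return np.array([np.sum(y[k:] * y[:-k]) / denom for k in range(1, max_lag + 1)])

def box_pierce(y, m):
    """Q_BP = n * sum r_k^2, asymptotically chi-square(m) under randomness."""
    r = acf(y, m)
    return len(y) * float(np.sum(r ** 2))

def ljung_box(y, m):
    """Q_LB = n(n+2) * sum r_k^2 / (n-k), a finite-sample correction of Q_BP."""
    n = len(y)
    r = acf(y, m)
    k = np.arange(1, m + 1)
    return n * (n + 2) * float(np.sum(r ** 2 / (n - k)))

rng = np.random.default_rng(0)
white = rng.standard_normal(100)
q_bp, q_lb = box_pierce(white, 10), ljung_box(white, 10)  # compare to chi2(10)
alternating = np.tile([1.0, -1.0], 50)
q_sig = box_pierce(alternating, 1)    # huge: clear serial dependence
```

Note Q_LB always exceeds Q_BP for the same data, since (n+2)/(n-k) > 1; the paper's point is that the chi-square reference is poor for both when m is large relative to n.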

10.
Properties of optimal forecasts under asymmetric loss and nonlinearity
Evaluation of forecast optimality in economics and finance has almost exclusively been conducted under the assumption of mean squared error loss. Under this loss function optimal forecasts should be unbiased and forecast errors serially uncorrelated at the single period horizon with increasing variance as the forecast horizon grows. Using analytical results we show that standard properties of optimal forecasts can be invalid under asymmetric loss and nonlinear data generating processes and thus may be very misleading as a benchmark for an optimal forecast. We establish instead that a suitable transformation of the forecast error—known as the generalized forecast error—possesses an equivalent set of properties. The paper also provides empirical examples to illustrate the significance in practice of asymmetric loss and nonlinearities and discusses the effect of parameter estimation error on optimal forecasts.
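A sketch for one classic asymmetric loss (lin-lin/pinball): the optimal forecast is a quantile rather than the mean, so the raw error is biased at the optimum, while the generalized forecast error for this loss is mean zero, restoring the usual optimality checks:

```python
import numpy as np

def pinball(e, tau):
    """Lin-lin (pinball) loss on forecast errors e = y - f; tau sets asymmetry."""
    return float(np.mean(e * (tau - (e < 0))))

rng = np.random.default_rng(0)
y = rng.standard_normal(5000)
tau = 0.8                              # under-prediction penalized 4x more heavily
grid = np.linspace(-1.5, 1.5, 301)
f_opt = min(grid, key=lambda f: pinball(y - f, tau))  # ~0.8-quantile, not the mean

raw_bias = float(np.mean(y - f_opt))          # clearly negative: "biased" forecast
gfe_mean = float(np.mean(tau - (y < f_opt)))  # generalized error: ~zero at optimum
```

Testing unbiasedness of `y - f_opt` would wrongly reject optimality; testing the generalized forecast error does not.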

11.
The problems of constructing prediction intervals (PIs) for the binomial and Poisson distributions are considered. New highest posterior mass (HPM) PIs based on the fiducial approach are proposed. Other fiducial PIs, an exact PI and approximate PIs are reviewed and compared with the HPM-PIs. Exact coverage studies and expected widths of prediction intervals show that the new prediction intervals are less conservative than the other fiducial PIs and comparable with the approximate one based on the joint sampling approach in the binomial case. For the Poisson case, the HPM-PIs are better than the other PIs in terms of coverage probabilities and precision. The methods are illustrated using some practical examples.

12.
I propose a quasi-maximum likelihood framework for estimating nonlinear models with continuous or discrete endogenous explanatory variables. Joint and two-step estimation procedures are considered. The joint procedure is a quasi-limited information maximum likelihood procedure, as one or both of the log likelihoods may be misspecified. The two-step control function approach is computationally simple and leads to straightforward tests of endogeneity. In the case of discrete endogenous explanatory variables, I argue that the control function approach can be applied with generalized residuals to obtain average partial effects. I show how the results apply to nonlinear models for fractional and nonnegative responses.

13.
We assess the accuracy of real GDP growth forecasts released by governments and international organizations for European countries in the years 1999–2017. We implement three testing procedures characterized by different assumptions on the forecasters’ loss functions. First, we test forecast rationality within the traditional approach based on a quadratic loss function (Mincer and Zarnowitz, 1969). Second, following Elliott, Timmermann and Komunjer (2005), we test rationality by allowing for a flexible loss function where the shape parameter driving the extent of asymmetry is unknown and estimated from the empirical distribution of forecast errors. Lastly, we implement the tests proposed by Patton and Timmermann (2007a) that hold regardless of the functional form of the loss function. We conclude that governmental forecasts are biased and not rational under a symmetric and quadratic loss function, but they are optimal under more general assumptions on the loss function. We also find that the preferences of forecasters change with the forecasting horizon: when moving from one- to two-year-ahead forecasts, the optimistic bias increases and the parameter of asymmetry in the loss function significantly increases.
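The first of the three procedures, the Mincer–Zarnowitz rationality regression, is easy to sketch with simulated (hypothetical) data: regress outcomes on forecasts and check intercept 0, slope 1.

```python
import numpy as np

def mincer_zarnowitz(actual, forecast):
    """Regress outcomes on forecasts; rationality under quadratic loss
    implies an intercept of 0 and a slope of 1."""
    X = np.column_stack([np.ones(len(forecast)), forecast])
    intercept, slope = np.linalg.lstsq(X, actual, rcond=None)[0]
    return float(intercept), float(slope)

rng = np.random.default_rng(0)
forecast = rng.normal(2.0, 1.0, 400)             # forecast = conditional mean
actual = forecast + rng.normal(0.0, 0.3, 400)
biased = forecast + 0.5                          # systematically optimistic rival
a1, b1 = mincer_zarnowitz(actual, forecast)      # close to (0, 1)
a2, b2 = mincer_zarnowitz(actual, biased)        # intercept near -0.5: bias flagged
```

In the paper's setting the joint test of (0, 1) rejects for governmental forecasts under quadratic loss, which is the pattern `a2` illustrates.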

14.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small-scale semistructural models at the national and Euro area level, institutional forecasts (Organization for Economic Co-operation and Development), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the relevant fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time-series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic changes. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time-series models can produce more accurate forecasts, because of their parsimonious specification.

15.
Combining exponential smoothing forecasts using Akaike weights
Simple forecast combinations such as medians and trimmed or winsorized means are known to improve the accuracy of point forecasts, and Akaike’s Information Criterion (AIC) has given rise to so-called Akaike weights, which have been used successfully to combine statistical models for inference and prediction in specialist fields, e.g., ecology and medicine. We examine combining exponential smoothing point and interval forecasts using weights derived from the AIC, the small-sample-corrected AIC and the BIC on the M1 and M3 Competition datasets. Weighted forecast combinations perform better than forecasts selected using information criteria, in terms of both point forecast accuracy and prediction interval coverage. Simple combinations and weighted combinations do not consistently outperform one another, while simple combinations sometimes perform worse than single forecasts selected by information criteria. We find a tendency for a longer history to be associated with better prediction interval coverage.
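Akaike weights themselves are a one-liner; a sketch with hypothetical AIC values and point forecasts for three exponential smoothing variants:

```python
import numpy as np

def akaike_weights(ics):
    """exp(-delta/2) weights, delta measured from the best (lowest) criterion."""
    ics = np.asarray(ics, float)
    w = np.exp(-0.5 * (ics - ics.min()))
    return w / w.sum()

# Hypothetical AIC values for three exponential smoothing variants.
w = akaike_weights([210.3, 212.1, 215.8])
point_forecasts = np.array([101.0, 103.5, 99.2])
combined = float(w @ point_forecasts)            # weighted combination
```

Selection by information criterion would put all weight on the first model; the weighted combination instead shades the forecast toward the close runner-up, which is the behavior the paper finds beneficial.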

16.
We study the problem of building confidence sets for ratios of parameters from an identification-robust perspective. In particular, we address the simultaneous confidence set estimation of a finite number of ratios. Results apply to a wide class of models suitable for estimation by consistent asymptotically normal procedures. Conventional methods (e.g. the delta method), derived by excluding the parameter discontinuity regions entailed by the ratio functions and typically yielding bounded confidence limits, break down even if the sample size is large (Dufour, 1997). One solution to this problem, which we take in this paper, is to use variants of Fieller’s (1940, 1954) method. By inverting a joint test that does not require identifying the ratios, Fieller-based confidence regions are formed for the full set of ratios. Simultaneous confidence sets for individual ratios are then derived by applying projection techniques, which allow for possibly unbounded outcomes. We provide simple explicit closed-form analytical solutions for projection-based simultaneous confidence sets in the case of linear transformations of ratios. Our solution further provides a formal proof for the expressions in Zerbe et al. (1982) pertaining to individual ratios. We apply the geometry of quadrics, introduced in earlier work, in a different although related context. The confidence sets so obtained are exact if the inverted test statistic admits a tractable exact distribution, for instance in the normal linear regression context. The proposed procedures are applied and assessed via illustrative Monte Carlo and empirical examples, with a focus on discrete choice models estimated by exact or simulation-based maximum likelihood. Our results underscore the superiority of Fieller-based methods.
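For a single ratio, Fieller's construction reduces to inverting a quadratic in the ratio, and the set is unbounded exactly when the denominator is not significantly different from zero; a minimal sketch (normal approximation, single ratio, not the paper's simultaneous projection machinery):

```python
import math

def fieller_interval(b1, b2, v1, v2, cov=0.0, z=1.96):
    """Fieller confidence set for b1/b2, with (b1, b2) approximately normal.
    Inverts (b1 - t*b2)^2 <= z^2 * (v1 - 2*t*cov + t^2*v2), quadratic in t."""
    A = b2 * b2 - z * z * v2
    B = -2.0 * (b1 * b2 - z * z * cov)
    C = b1 * b1 - z * z * v1
    if A <= 0:
        return None    # denominator not significantly nonzero: unbounded set
    r = math.sqrt(B * B - 4.0 * A * C)   # discriminant is positive when A > 0
    return (-B - r) / (2.0 * A), (-B + r) / (2.0 * A)

lo, hi = fieller_interval(2.0, 4.0, 0.04, 0.04)   # point estimate of ratio: 0.5
unbounded = fieller_interval(1.0, 0.5, 0.04, 0.25)
```

Unlike the delta method, the set here is slightly asymmetric around 0.5, and `None` signals the unbounded outcome that bounded delta-method intervals cannot represent.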

17.
In this paper, we use Monte Carlo (MC) testing techniques for testing linearity against smooth transition models. The MC approach allows us to introduce a new test that differs in two respects from the tests existing in the literature. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Second, the test is not based on an auxiliary regression obtained by replacing the model under the alternative with approximations based on a Taylor expansion. We also apply MC testing methods for size-correcting the test proposed by Luukkonen, Saikkonen and Teräsvirta (Biometrika, Vol. 75, 1988, p. 491). The results show that the power loss implied by the auxiliary regression-based test is non-existent compared with a supremum-based test but is more substantial when compared with the three other tests under consideration.

18.
Combining provides a pragmatic way of synthesising the information provided by individual forecasting methods. In the context of forecasting the mean, numerous studies have shown that combining often leads to improvements in accuracy. Despite the importance of the value at risk (VaR), though, few papers have considered quantile forecast combinations. One risk measure that is receiving an increasing amount of attention is the expected shortfall (ES), which is the expectation of the exceedances beyond the VaR. There have been no previous studies on combining ES predictions, presumably due to there being no suitable loss function for ES. However, it has been shown recently that a set of scoring functions exist for the joint estimation or backtesting of VaR and ES forecasts. We use such scoring functions to estimate combining weights for VaR and ES prediction. The results from five stock indices show that combining outperforms the individual methods for the 1% and 5% probability levels.
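One member of the joint scoring-function class referred to here is the so-called FZ0 form from the Fissler–Ziegel family; a sketch showing that it ranks the correct (VaR, ES) pair of a known distribution above a misspecified pair (the exact scoring function used in the paper is not stated in the abstract):

```python
import math, random

def fz0(y, v, e, alpha):
    """FZ0 scoring function for a (VaR v, ES e) pair with v, e < 0 and return y.
    One member of the Fissler-Ziegel class of jointly consistent scores."""
    hit = 1.0 if y <= v else 0.0
    return -hit * (v - y) / (alpha * e) + v / e + math.log(-e) - 1.0

random.seed(0)
alpha = 0.05
returns = [random.gauss(0.0, 1.0) for _ in range(50000)]
true_var, true_es = -1.645, -2.063    # 5% VaR and ES of the N(0, 1) truth
s_true = sum(fz0(y, true_var, true_es, alpha) for y in returns) / len(returns)
s_bad = sum(fz0(y, -1.0, -1.5, alpha) for y in returns) / len(returns)
```

Combining weights can then be chosen to minimize the average score of the combined (VaR, ES) forecasts, in the spirit of the paper's procedure.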

19.
We consider conditional moment models under semi-strong identification. Identification strength is defined directly through the conditional moments, which flatten as the sample size increases. Our new minimum distance estimator is consistent, asymptotically normal, robust to semi-strong identification, and does not rely on a user-chosen parameter such as the number of instruments or a smoothing parameter. Heteroskedasticity-robust inference is possible through Wald testing without prior knowledge of the identification pattern. Simulations show that our estimator is competitive with alternative estimators based on many instruments, being well-centered with better coverage rates for confidence intervals.

20.
Volatility forecast comparison using imperfect volatility proxies
The use of a conditionally unbiased, but imperfect, volatility proxy can lead to undesirable outcomes in standard methods for comparing conditional variance forecasts. We motivate our study with analytical results on the distortions caused by some widely used loss functions, when used with standard volatility proxies such as squared returns, the intra-daily range or realised volatility. We then derive necessary and sufficient conditions on the functional form of the loss function for the ranking of competing volatility forecasts to be robust to the presence of noise in the volatility proxy, and derive some useful special cases of this class of “robust” loss functions. The methods are illustrated with an application to the volatility of returns on IBM over the period 1993 to 2003.
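QLIKE and MSE are known members of the robust class, while absolute error is not; a simulation in the spirit of this point (constant true variance for simplicity, squared returns as the proxy):

```python
import math, random

random.seed(0)
n = 20000
sigma2 = 1.0                               # true conditional variance (constant here)
proxy = [sigma2 * random.gauss(0.0, 1.0) ** 2 for _ in range(n)]   # squared returns

def qlike(p, h):
    """QLIKE loss: a robust loss, so rankings survive proxy noise."""
    return p / h - math.log(p / h) - 1.0

def mae(p, h):
    """Absolute error: NOT robust when evaluated against a noisy proxy."""
    return abs(p - h)

h_true, h_shrunk = sigma2, 0.5 * sigma2    # perfect forecast vs downward-biased one
avg = lambda loss, h: sum(loss(p, h) for p in proxy) / n
```

Averaged over the sample, QLIKE correctly prefers the true variance, while MAE rewards the biased forecast (the conditional median of a squared return is well below its mean), which is exactly the distortion the paper characterizes.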
