Similar Literature

20 similar documents retrieved.
1.
This paper proposes a framework for analysing the theoretical properties of forecast combination, with forecast performance measured in terms of mean squared forecast errors (MSFE). The framework makes it straightforward to derive all existing results, and it also provides insights into two forecast combination puzzles: why a simple average of forecasts often outperforms forecasts from single models in terms of MSFE, and why a more complicated weighting scheme does not always perform better than a simple average. The paper also presents two new findings that are particularly relevant in practice. First, the MSFE of a forecast combination decreases as the number of models increases. Second, the conventional approach to selecting optimal models, based on a simple comparison of MSFEs without further statistical testing, leads to a biased selection.
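A minimal simulation sketch of the first puzzle: two unbiased forecasts with imperfectly correlated errors are averaged, and the equal-weight combination attains a lower MSFE than either component. The error structure and all numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
y = rng.normal(size=T)                      # hypothetical target series

# Two unbiased forecasts with correlated errors (illustrative values)
e1 = rng.normal(scale=1.0, size=T)
e2 = 0.5 * e1 + rng.normal(scale=0.9, size=T)
f1, f2 = y + e1, y + e2

def msfe(f, y):
    """Mean squared forecast error."""
    return np.mean((f - y) ** 2)

f_avg = 0.5 * (f1 + f2)                     # equal-weight combination
print(msfe(f1, y), msfe(f2, y), msfe(f_avg, y))
# The average's MSFE is below both individual MSFEs whenever the
# error correlation is not too close to one.
```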

2.
This paper proposes a framework for implementing regression-based tests of predictive ability in unstable environments, including, in particular, forecast unbiasedness and efficiency tests, commonly referred to as tests of forecast rationality. Our framework is general: it can be applied to model-based forecasts obtained with either recursive or rolling window estimation schemes, as well as to forecasts that are model free. The proposed tests provide more evidence against forecast rationality than previously found in the Federal Reserve's Greenbook forecasts as well as in survey-based private forecasts. They confirm, however, that the Federal Reserve has additional information about current and future states of the economy relative to market participants.
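A sketch of the classic regression-based rationality test on simulated data (the data-generating process and HAC lag length are assumed for illustration): regress outcomes on forecasts and jointly test a zero intercept and unit slope.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 500
f = rng.normal(size=T)                               # hypothetical forecasts
y = 0.2 + 0.9 * f + rng.normal(scale=0.5, size=T)    # outcomes with a mild bias

# Mincer-Zarnowitz regression: y_t = a + b * f_t + u_t
X = sm.add_constant(f)
res = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

# Forecast rationality (unbiasedness + efficiency) implies a = 0 and b = 1 jointly
print(res.params)
print(res.f_test("const = 0, x1 = 1"))
```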

3.
We propose a framework for evaluating the conditionality of forecasts. The crux of our framework is the observation that a forecast is conditional if revisions to the conditioning factor are incorporated faithfully into the remainder of the forecast. We consider whether the Greenbook, Blue Chip survey and Survey of Professional Forecasters exhibit systematic biases in the manner in which they incorporate interest rate projections into the forecasts of other macroeconomic variables. We do not find strong evidence of systematic biases in the three economic forecasts that we consider, as the interest rate projections in these forecasts appear to be incorporated efficiently into the forecasts of other economic variables.

4.
A geometric interpretation is developed for so-called reconciliation methodologies used to forecast time series that adhere to known linear constraints. In particular, a general framework is established that nests many existing popular reconciliation methods within the class of projections. This interpretation facilitates the derivation of novel theoretical results. First, reconciliation via projection is guaranteed to improve forecast accuracy with respect to a class of loss functions based on a generalised distance metric. Second, the Minimum Trace (MinT) method minimises expected loss for this same class of loss functions. Third, the geometric interpretation provides a new proof that forecast reconciliation using projections results in unbiased forecasts, provided that the initial base forecasts are also unbiased. Approaches for dealing with biased base forecasts are proposed. An extensive empirical study of Australian tourism flows demonstrates the theoretical results of the paper and shows that bias correction prior to reconciliation outperforms alternatives that only bias-correct or only reconcile forecasts.
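A minimal sketch of reconciliation as a projection, for a toy two-level hierarchy (total = A + B); the summing matrix, base forecasts and error covariance W below are illustrative assumptions. With W set to the base-forecast error covariance, this oblique projection corresponds to the MinT solution.

```python
import numpy as np

# Hypothetical 3-series hierarchy: total = A + B, so the summing matrix is
S = np.array([[1.0, 1.0],   # total
              [1.0, 0.0],   # A
              [0.0, 1.0]])  # B

base = np.array([10.0, 6.5, 4.2])   # incoherent base forecasts (10 != 6.5 + 4.2)
W = np.diag([1.0, 0.5, 0.5])        # assumed base-forecast error covariance

# Reconciliation as a projection of the base forecasts onto the coherent
# subspace span(S), weighted by W^{-1}
Winv = np.linalg.inv(W)
P = S @ np.linalg.inv(S.T @ Winv @ S) @ S.T @ Winv
reconciled = P @ base
print(reconciled, reconciled[0] - reconciled[1:].sum())  # now coherent (≈ 0)
```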

5.
We assess the accuracy of real GDP growth forecasts released by governments and international organizations for European countries over the years 1999–2017. We implement three testing procedures characterized by different assumptions about forecasters' loss functions. First, we test forecast rationality within the traditional approach based on a quadratic loss function (Mincer and Zarnowitz, 1969). Second, following Elliott, Timmermann and Komunjer (2005), we test rationality allowing for a flexible loss function in which the shape parameter driving the extent of asymmetry is unknown and estimated from the empirical distribution of forecast errors. Lastly, we implement the tests proposed by Patton and Timmermann (2007a), which hold regardless of the functional form of the loss function. We conclude that governmental forecasts are biased and not rational under a symmetric quadratic loss function, but are optimal under more general assumptions about the loss function. We also find that forecasters' preferences change with the forecasting horizon: when moving from one- to two-year-ahead forecasts, the optimistic bias grows and the asymmetry parameter of the loss function increases significantly.
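A sketch of the flexible loss family of Elliott, Timmermann and Komunjer (2005) used in the second procedure; the parameter values below are arbitrary illustrations. The shape parameter alpha governs asymmetry, and alpha = 0.5 with p = 2 recovers the symmetric quadratic loss.

```python
import numpy as np

def etk_loss(e, alpha=0.5, p=2):
    """Flexible loss of Elliott, Timmermann and Komunjer (2005):
    L(e) = [alpha + (1 - 2*alpha) * 1(e < 0)] * |e|**p."""
    return (alpha + (1 - 2 * alpha) * (e < 0)) * np.abs(e) ** p

e = np.linspace(-2, 2, 5)
print(etk_loss(e, alpha=0.5))   # symmetric: under- and over-prediction equal
print(etk_loss(e, alpha=0.7))   # alpha > 0.5 penalizes positive errors more
```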

6.
Deterministic forecasts (as opposed to ensemble or probabilistic forecasts) issued by numerical weather prediction (NWP) models require post-processing. Such a corrective procedure can be viewed as a form of calibration. It is well known that forecasts calibrated under different objective functions, e.g., minimizing the mean square error or the mean absolute error, have different impacts on verification. In this regard, this paper investigates how a calibration directive affects various aspects of forecast quality outlined in the Murphy–Winkler distribution-oriented verification framework. It is argued that the correlation coefficient is the best measure of the potential performance of NWP forecasts when linear calibration is involved, because (1) it is not affected by the directive of linear calibration, (2) it can be used to compute the skill score of the linearly calibrated forecasts, and (3) it avoids the potential deficiency of using squared error to rank forecasts. Since no single error metric can fully represent all aspects of forecast quality, forecasters need to understand the trade-offs between different calibration strategies. To echo the increasing need to bridge atmospheric sciences, renewable energy engineering, and power system engineering in the move toward the grand goal of carbon neutrality, this paper first provides a brief introduction to solar forecasting and then centres its discussion on a solar forecasting case study, so that readers of this journal can gain further understanding of the subject and potentially contribute to it.
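A numerical sketch of the two properties claimed for the correlation coefficient, on simulated data with assumed coefficients: it is invariant to any linear calibration of the forecast, and its square equals the MSE skill score of the best linear calibration relative to climatology (the mean of the observations).

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=1000)                        # observations
f = 0.8 * y + rng.normal(scale=0.6, size=1000)   # hypothetical raw NWP forecast

# Any linear calibration f -> a + b*f leaves the correlation with y unchanged
a, b = 1.3, 2.0
f_cal = a + b * f
print(np.corrcoef(f, y)[0, 1], np.corrcoef(f_cal, y)[0, 1])   # identical

# corr**2 equals the skill score of the MSE-optimal linear calibration
beta = np.cov(f, y, ddof=0)[0, 1] / np.var(f)
alpha = y.mean() - beta * f.mean()
mse = np.mean((alpha + beta * f - y) ** 2)
print(1 - mse / np.var(y), np.corrcoef(f, y)[0, 1] ** 2)      # both ≈ r**2
```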

7.
This paper considers the problem of forecasting realized variance measures. These measures are highly persistent estimates of the underlying integrated variance, but they are also noisy. Bollerslev, Patton and Quaedvlieg (2016, Journal of Econometrics, 192(1), 1–18) exploited this to extend the commonly used heterogeneous autoregressive (HAR) model by letting the model parameters vary over time depending on the estimated measurement error variances. We propose an alternative specification that allows the autoregressive parameters of HAR models to be driven by a latent Gaussian autoregressive process that may also depend on the estimated measurement error variance. The model parameters are estimated by maximum likelihood using the Kalman filter. Our empirical analysis considers the realized variances of 40 stocks from the S&P 500. Our model based on log variances shows the best overall performance, generating superior forecasts both in terms of a range of different loss functions and for various subsamples of the forecasting period.
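For reference, a minimal sketch of the constant-parameter HAR benchmark that the paper generalises; the simulated series is purely illustrative, and the paper's latent Gaussian time-varying parameters and Kalman-filter estimation are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

def har_design(rv):
    """Standard HAR regressors: daily, weekly (5-day) and monthly (22-day)
    averages of realized variance, aligned to predict rv[t+1]."""
    rv = np.asarray(rv, dtype=float)
    d = rv[21:-1]
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - 1)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - 1)])
    return sm.add_constant(np.column_stack([d, w, m])), rv[22:]

rng = np.random.default_rng(3)
rv = np.exp(rng.normal(size=1000).cumsum() * 0.01)   # toy persistent series
X, y = har_design(rv)
print(sm.OLS(y, X).fit().params)                      # constant-parameter HAR
```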

8.
This study investigates how integrated reporting (IR) creates value for investors. It examines how providers of financial capital benefit from the improved firm information environment that IR promises. Specifically, it investigates the effect of voluntary IR disclosure on analyst earnings forecast accuracy as well as on firm value. To do so, we use an international sample of 167 listed companies that voluntarily publish an integrated report. Our analysis shows no significant effect of voluntary IR publication on analyst earnings forecast accuracy and no significant effect on firm value. We thus find no evidence that IR fulfils its promises of an improved information environment and value creation for voluntary adopters. We conclude that such companies might already have a relatively high level of transparency, leaving no room for an additional effect of IR disclosure. Positive effects of IR appear to be more relevant in environments where IR is mandatory.

9.
Quantiles as optimal point forecasts
Loss functions play a central role in the theory and practice of forecasting. If the loss function is quadratic, the mean of the predictive distribution is the unique optimal point predictor. If the loss is symmetric piecewise linear, any median is an optimal point forecast. Quantiles arise as optimal point forecasts under a general class of economically relevant loss functions, which nests the asymmetric piecewise linear loss, and which we refer to as generalized piecewise linear (GPL). The level of the quantile depends on a generic asymmetry parameter which reflects the possibly distinct costs of underprediction and overprediction. Conversely, a loss function for which quantiles are optimal point forecasts is necessarily GPL. We review characterizations of this type in the work of Thomson, Saerens and Komunjer, and relate to proper scoring rules, incentive-compatible compensation schemes and quantile regression. In the empirical part of the paper, the relevance of decision theoretic guidance in the transition from a predictive distribution to a point forecast is illustrated using the Bank of England's density forecasts of United Kingdom inflation rates, and probabilistic predictions of wind energy resources in the Pacific Northwest.
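A numerical sketch of the central claim for the canonical GPL member, the asymmetric piecewise linear (pinball) loss: with asymmetry parameter alpha = 0.9 (an arbitrary illustration), the point forecast minimizing expected loss over a grid coincides with the 0.9-quantile.

```python
import numpy as np

def pinball(x, y, alpha):
    """Asymmetric piecewise linear (pinball) loss:
    underprediction costs alpha, overprediction costs 1 - alpha."""
    return np.where(y >= x, alpha * (y - x), (1 - alpha) * (x - y))

rng = np.random.default_rng(4)
y = rng.normal(size=100_000)
alpha = 0.9

# The alpha-quantile minimizes expected pinball loss over point forecasts x
grid = np.linspace(-3, 3, 601)
risk = np.array([pinball(x, y, alpha).mean() for x in grid])
print(grid[risk.argmin()], np.quantile(y, alpha))   # both ≈ 1.28
```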

10.
Forecast evaluation aims to choose an accurate forecast for decision making by means of loss functions. However, different loss functions often generate different rankings of forecasts, which complicates comparisons. In this paper, we develop statistical tests for comparing the performance of forecasts of expectiles and quantiles of a random variable under consistent loss functions. The test statistics are constructed from the extremal consistent loss functions of Ehm et al. (2016). The null hypothesis of the tests is that a benchmark forecast performs at least as well as a competing one under all extremal consistent loss functions. If such a null holds, the benchmark also performs at least as well as the competitor under all consistent loss functions; thus, under the null, the finding that the competitor does not outperform the benchmark is unaltered by the choice of consistent loss function. We establish asymptotic properties of the proposed test statistics and propose the re-centered bootstrap to construct their empirical distributions. Through simulations, we show that the proposed test statistics perform reasonably well. We then apply the proposed method to evaluate several different forecast methods.
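A sketch of the idea behind the tests, using the extremal (elementary) consistent scoring functions for quantiles from Ehm et al. (2016); the data, the competing forecasts and the theta grid are assumed for illustration. If a benchmark weakly dominates a competitor at every theta, it weakly dominates under every consistent loss.

```python
import numpy as np

def extremal_quantile_score(x, y, theta, alpha):
    """Elementary (extremal) consistent scoring function for the
    alpha-quantile, indexed by threshold theta (Ehm et al., 2016):
    S_theta(x, y) = (1{y < x} - alpha) * (1{theta < x} - 1{theta < y})."""
    return ((y < x).astype(float) - alpha) * \
           ((theta < x).astype(float) - (theta < y).astype(float))

rng = np.random.default_rng(5)
y = rng.normal(size=5000)
f_good, f_bad = np.quantile(y, 0.9), 0.0   # competing 0.9-quantile forecasts

thetas = np.linspace(-3, 3, 61)
diff = [np.mean(extremal_quantile_score(f_bad, y, t, 0.9)
                - extremal_quantile_score(f_good, y, t, 0.9)) for t in thetas]
print(min(diff) >= -1e-9)   # f_good weakly dominates at every theta
```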

11.
Properties of optimal forecasts under asymmetric loss and nonlinearity
Evaluation of forecast optimality in economics and finance has almost exclusively been conducted under the assumption of mean squared error loss. Under this loss function, optimal forecasts should be unbiased, and forecast errors should be serially uncorrelated at the single-period horizon, with variance increasing in the forecast horizon. Using analytical results, we show that these standard properties of optimal forecasts can be invalid under asymmetric loss and nonlinear data generating processes, and may thus be very misleading as a benchmark for an optimal forecast. We establish instead that a suitable transformation of the forecast error, known as the generalized forecast error, possesses an equivalent set of properties. The paper also provides empirical examples illustrating the practical significance of asymmetric loss and nonlinearities, and discusses the effect of parameter estimation error on optimal forecasts.
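A minimal illustration under lin-lin loss with asymmetry alpha = 0.75 (the value and the sign convention below are assumptions): the optimal forecast is the alpha-quantile, so the raw errors have a nonzero mean, yet the generalized forecast error is mean zero.

```python
import numpy as np

rng = np.random.default_rng(11)
alpha = 0.75
y = rng.normal(size=50_000)
f_opt = np.quantile(y, alpha)       # optimal forecast under lin-lin loss
e = y - f_opt                       # forecast errors

print(e.mean())                     # nonzero: raw errors look "biased"

# Generalized forecast error for lin-lin loss (one common sign convention,
# assumed here): g = 1{e < 0} - alpha, which is mean zero at the optimum
g = (e < 0).astype(float) - alpha
print(g.mean())                     # approximately 0
```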

12.
We introduce a new way to measure the effort that analysts devote to their earnings forecasts. While the commonly applied effort measure is based on an analyst's behaviour for a single firm, our general measure considers the analyst's behaviour across all covered firms and thus captures additional information about analyst effort, which helps to identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort is not captured by a firm-specific measure. Empirical results reveal that analysts who devote greater general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with greater general forecast effort. Our measure improves on existing methods: it has higher explanatory power for differences in forecast accuracy than the commonly applied effort measure, and it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.

13.
We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We use Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecasting ability of our procedure. We show that we can forecast realized covariance matrices almost as precisely as if we had known their true driving dynamics in advance. We next investigate the sources of these driving dynamics as well as the performance of the proposed models for forecasting the realized covariance matrices of the 30 Dow Jones stocks. We find that the dynamics are not stable as the data are aggregated from daily to lower frequencies. Furthermore, we are able to beat our benchmark by a wide margin. Finally, we investigate the economic value of our forecasts in a portfolio selection exercise and find that in certain cases an investor is willing to pay a considerable amount to get access to our forecasts.
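A toy sketch of the approach (the data, penalty level and VAR order are assumptions, and positive semi-definiteness of the reconstructed forecast matrix is not enforced here): half-vectorize each realized covariance matrix and fit an L1-penalized VAR(1).

```python
import numpy as np
from sklearn.linear_model import Lasso

def vech(M):
    """Half-vectorization: stack the lower triangle of a symmetric matrix."""
    return M[np.tril_indices_from(M)]

rng = np.random.default_rng(6)
T, n = 300, 5
# Toy sequence of realized covariance matrices
covs = [np.cov(rng.normal(size=(n, 50))) for _ in range(T)]
Y = np.array([vech(C) for C in covs])        # T x n(n+1)/2 panel

# Penalized VAR(1): regress vech(C_t) on vech(C_{t-1}) with an L1 penalty
X, y_next = Y[:-1], Y[1:]
model = Lasso(alpha=0.01).fit(X, y_next)
forecast = model.predict(Y[-1:])
print(forecast.shape, np.mean(model.coef_ == 0))   # sparsity of the VAR matrix
```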

14.
We use high-frequency intra-day realized volatility data to evaluate the relative forecasting performances of various models that are commonly used for forecasting the volatility of crude oil daily spot returns at multiple horizons. These models include the RiskMetrics, GARCH, asymmetric GARCH, fractionally integrated GARCH and Markov switching GARCH models. We begin by implementing Carrasco, Hu, and Ploberger's (2014) test for regime switching in the mean and variance of the GARCH(1, 1), and find overwhelming support for regime switching. We then perform a comprehensive out-of-sample forecasting performance evaluation using a battery of tests. We find that, under the MSE and QLIKE loss functions: (i) models with a Student's t innovation are favored over those with a normal innovation; (ii) RiskMetrics and GARCH(1, 1) have good predictive accuracy at short forecast horizons, whereas EGARCH(1, 1) yields the most accurate forecasts at medium horizons; and (iii) the Markov switching GARCH shows superior predictive accuracy at long horizons. These results are established by computing the equal predictive ability test of Diebold and Mariano (1995) and West (1996) and the model confidence set of Hansen, Lunde, and Nason (2011) over the entire evaluation sample. In addition, a comparison of the MSPE ratios computed using a rolling window suggests that the Markov switching GARCH model is better at predicting volatility during periods of turmoil.
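A compact sketch of the Diebold-Mariano equal predictive ability test used in the evaluation, implemented as a HAC regression of the loss differential on a constant; the simulated errors and lag length are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def diebold_mariano(e1, e2, loss=np.square, maxlags=4):
    """Diebold-Mariano (1995) test on the loss differential
    d_t = L(e1_t) - L(e2_t), using a HAC standard error."""
    d = loss(e1) - loss(e2)
    res = sm.OLS(d, np.ones_like(d)).fit(cov_type="HAC",
                                         cov_kwds={"maxlags": maxlags})
    return res.params[0], res.tvalues[0], res.pvalues[0]

rng = np.random.default_rng(7)
e1 = rng.normal(scale=1.1, size=500)   # hypothetical model-1 forecast errors
e2 = rng.normal(scale=1.0, size=500)   # hypothetical model-2 forecast errors
print(diebold_mariano(e1, e2))          # mean loss diff, t-stat, p-value
```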

15.
In this paper, we define forecast (in)stability in terms of the variability in forecasts for a specific time period caused by updating the forecast for this time period when new observations become available, i.e., as time passes. We propose an extension to the state-of-the-art N-BEATS deep learning architecture for the univariate time series point forecasting problem. The extension allows us to optimize forecasts from both a traditional forecast accuracy perspective and a forecast stability perspective. We show that the proposed extension produces forecasts that are more stable without a deterioration in forecast accuracy for the M3 and M4 data sets. Moreover, our experimental study shows that it is possible to improve both forecast accuracy and stability relative to the original N-BEATS architecture, indicating that a forecast instability component in the loss function can serve as a regularization mechanism.
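A minimal sketch of the kind of composite objective such an extension optimizes; the functional form, the MAE choice and the weight lam are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def stability_augmented_loss(y, f_new, f_old, lam=0.5):
    """Composite training loss in the spirit of the paper: an accuracy term
    (MAE of the updated forecast) plus lam times an instability term (mean
    absolute revision against the forecast made at the previous origin).
    lam trades off accuracy against stability and is a free choice."""
    accuracy = np.mean(np.abs(y - f_new))
    instability = np.mean(np.abs(f_new - f_old))
    return accuracy + lam * instability

y = np.array([10.0, 11.0, 12.0])
f_old = np.array([10.5, 10.8, 11.6])   # forecasts from the previous origin
f_new = np.array([10.2, 11.1, 12.3])   # updated forecasts for the same periods
print(stability_augmented_loss(y, f_new, f_old))
```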

16.
There is general agreement in many forecasting contexts that combining individual predictions leads to better final forecasts. However, the relative error reduction in a combined forecast depends upon the extent to which the component forecasts contain unique/independent information. Unfortunately, obtaining independent predictions is difficult in many situations, as these forecasts may be based on similar statistical models and/or overlapping information. The current study addresses this problem by incorporating a measure of coherence into an analytic evaluation framework so that the degree of independence between sets of forecasts can be identified easily. The framework also decomposes the performance and coherence measures in order to illustrate the underlying aspects that are responsible for error reduction. The framework is demonstrated using UK retail prices index inflation forecasts for the period 1998–2014, and implications for forecast users are discussed.
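A worked numerical illustration of why independence matters: for two unbiased forecasts with error standard deviations s1, s2 and error correlation rho, the equal-weight combination has error variance (s1^2 + s2^2 + 2*rho*s1*s2)/4, so the gain over the better component vanishes as rho approaches one. The values below are illustrative.

```python
import numpy as np

s1 = s2 = 1.0
for rho in (0.0, 0.5, 0.9, 1.0):
    comb = (s1**2 + s2**2 + 2 * rho * s1 * s2) / 4
    print(f"rho = {rho:.1f}: combined error variance = {comb:.3f}")
# Output drops from 0.500 (rho = 0) to 1.000 (rho = 1): with perfectly
# correlated errors, averaging yields no error reduction at all.
```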

17.
Several empirical studies have documented that the signs of excess stock returns are, to some extent, predictable. In this paper, we consider the ability of the dynamic probit model for a binary dependent variable to predict the direction of monthly excess stock returns. The recession forecast obtained from the model for a binary recession indicator appears to be the most useful predictive variable, and once it is employed, the sign of the excess return is predictable in-sample. The new dynamic "error correction" probit model proposed in the paper yields better out-of-sample sign forecasts, with the resulting average trading returns being higher than those of either the buy-and-hold strategy or trading rules based on ARMAX models.
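A minimal sketch of a dynamic probit for the sign of returns; the toy data-generating process stands in for the recession forecast variable, and the paper's error correction dynamics are not reproduced. The dynamic element is the lagged binary outcome among the regressors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T = 400
x = rng.normal(size=T)                       # hypothetical recession forecast
z = (0.4 * x + rng.normal(size=T) > 0)       # sign of excess returns (toy DGP)

# Dynamic probit: include the lagged binary outcome as a regressor
y, y_lag, x_cur = z[1:].astype(float), z[:-1].astype(float), x[1:]
X = sm.add_constant(np.column_stack([y_lag, x_cur]))
res = sm.Probit(y, X).fit(disp=0)
p_up = res.predict(X)                        # in-sample P(positive return)
print(res.params, (p_up > 0.5).mean())
```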

18.
In this study, we investigate whether low-frequency data improve volatility forecasting when high-frequency data are available. To answer this question, we utilize four forecast combination strategies that combine low-frequency and high-frequency volatility models, and employ a rolling window and a range of loss functions in the framework of the novel Model Confidence Set test. Out-of-sample results show that combination forecasts with GARCH-class models can achieve high forecast accuracy. However, the combination forecast methods appear not to significantly outperform individual high-frequency volatility models. Furthermore, we find that models that combine low-frequency and high-frequency volatility yield significantly better performance than other models and combination forecast strategies in both a statistical and an economic sense.

19.
A probabilistic forecast is the estimated probability with which a future event will occur. One interesting feature of such forecasts is their calibration, or the match between the predicted probabilities and the actual outcome probabilities. Calibration has been evaluated in the past by grouping probability forecasts into discrete categories. We show here that we can do this without discrete groupings; the kernel estimators that we use produce efficiency gains and smooth estimated curves relating the predicted and actual probabilities. We use such estimates to evaluate the empirical evidence on the calibration error in a number of economic applications, including the prediction of recessions and inflation, using both forecasts made and stored in real time and pseudo-forecasts made using the data vintage available at the forecast date. The outcomes are evaluated using both first-release outcome measures and subsequent revised data. We find substantial evidence of incorrect calibration in professional forecasts of recessions and inflation from the SPF, as well as in real-time inflation forecasts from a variety of output gap models.
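A sketch of a kernel-based calibration curve without discrete groupings, using a Nadaraya-Watson estimator with a Gaussian kernel; the bandwidth and the simulated miscalibration are assumptions, and the paper's exact kernel estimator may differ.

```python
import numpy as np

def kernel_calibration_curve(p, y, grid, h=0.05):
    """Nadaraya-Watson estimate of the outcome frequency as a function of the
    forecast probability. A well-calibrated forecaster should produce a
    curve close to the 45-degree line."""
    K = np.exp(-0.5 * ((grid[:, None] - p[None, :]) / h) ** 2)
    return (K * y[None, :]).sum(axis=1) / K.sum(axis=1)

rng = np.random.default_rng(9)
p = rng.uniform(size=2000)                            # stated probabilities
y = (rng.uniform(size=2000) < p**1.3).astype(float)   # miscalibrated outcomes
grid = np.linspace(0.05, 0.95, 19)
print(np.c_[grid, kernel_calibration_curve(p, y, grid)])  # curve below 45 deg
```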

20.
In many economic applications, it is convenient to model and forecast a variable of interest in logs rather than in levels. However, the reverse transformation from log forecasts to levels introduces a bias. This paper compares different bias correction methods for such transformations of log series which follow a linear process with various types of error distributions. Based on Monte Carlo simulations and an empirical study of realized volatilities, we find no choice of correction method that is uniformly best. We recommend the use of the variance-based correction, either by itself or as part of a hybrid procedure where one first decides (using a pretest) whether the log series is highly persistent or not, and then proceeds either without bias correction (high persistence) or with bias correction (low persistence).
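A minimal sketch of the variance-based correction for a Gaussian log series (the distribution and parameter values are assumed): exponentiating the mean log forecast understates the level, and multiplying by exp(sigma^2 / 2) removes the bias.

```python
import numpy as np

rng = np.random.default_rng(10)
sigma = 0.5
logs = rng.normal(loc=1.0, scale=sigma, size=100_000)   # series modeled in logs

# Naive reverse transformation exp(E[log y]) underestimates E[y];
# the variance-based correction multiplies by exp(sigma**2 / 2)
naive = np.exp(logs.mean())
corrected = naive * np.exp(logs.var() / 2)
print(naive, corrected, np.exp(logs).mean())   # corrected ≈ sample mean of y
```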
