Similar Articles
20 similar articles found.
1.
Combining provides a pragmatic way of synthesising the information provided by individual forecasting methods. In the context of forecasting the mean, numerous studies have shown that combining often improves accuracy. Despite the importance of the value at risk (VaR), however, few papers have considered quantile forecast combinations. One risk measure that is receiving increasing attention is the expected shortfall (ES), the expectation of the exceedances beyond the VaR. There have been no previous studies on combining ES predictions, presumably because no suitable loss function exists for ES alone. However, it has recently been shown that a set of scoring functions exists for the joint estimation or backtesting of VaR and ES forecasts. We use such scoring functions to estimate combining weights for VaR and ES prediction. The results from five stock indices show that combining outperforms the individual methods at the 1% and 5% probability levels.
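As a concrete illustration, the sketch below estimates a combination weight for two VaR/ES forecast paths by minimising the average FZ0 joint score, one member of the Fissler–Ziegler class of scoring functions. This is not the paper's exact estimator; the FZ0 loss assumes negative VaR and ES, and the returns and forecast paths are simulated placeholders.

```python
# Minimal sketch: choose a combining weight for VaR/ES forecasts by
# minimising the average FZ0 joint score (assumes var < 0 and es < 0).
import numpy as np
from scipy.optimize import minimize_scalar

def fz0_loss(y, var, es, alpha):
    """Joint FZ0 score for VaR/ES forecasts; lower is better. Requires es < 0."""
    hit = (y <= var).astype(float)
    return -hit * (var - y) / (alpha * es) + var / es + np.log(-es) - 1.0

def combine_weight(y, var1, es1, var2, es2, alpha=0.01):
    """Weight w on method 1 that minimises the average joint score."""
    def avg_score(w):
        v = w * var1 + (1 - w) * var2
        e = w * es1 + (1 - w) * es2
        return fz0_loss(y, v, e, alpha).mean()
    return minimize_scalar(avg_score, bounds=(0.0, 1.0), method="bounded").x

# Toy example with simulated returns and two hypothetical forecast paths.
rng = np.random.default_rng(0)
y = rng.standard_t(df=5, size=1000) * 0.01
var1, es1 = np.full(1000, -0.025), np.full(1000, -0.035)   # method 1
var2, es2 = np.full(1000, -0.030), np.full(1000, -0.040)   # method 2
print(combine_weight(y, var1, es1, var2, es2, alpha=0.01))
```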

2.
There are two potential directions of forecast combination: combining for adaptation and combining for improvement. The former targets the performance of the best individual forecaster, while the latter attempts to combine forecasts so as to improve on the best forecaster. It is often useful to infer which goal is more appropriate so that a suitable combination method can be used. This paper proposes an AI-AFTER approach that can not only determine the appropriate goal of forecast combination but also combine the forecasts intelligently so that the proper goal is achieved automatically. As a result, the combined forecasts from AI-AFTER perform well in both adaptation and improvement scenarios. The proposed approach is implemented in our R package AIafter, which is available at https://github.com/weiqian1/AIafter.
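The AIafter package is the authors' reference implementation; as a rough intuition for AFTER-type combining, the sketch below updates forecaster weights multiplicatively from past squared errors. This is a hypothetical simplification, not the package's algorithm.

```python
# Minimal sketch of AFTER-style exponential reweighting of forecasters.
import numpy as np

def after_combine(forecasts, y, lam=0.5):
    """forecasts: (T, K) array from K forecasters; y: (T,) outcomes.
    Returns the (T,) combined forecast, each built from past errors only."""
    T, K = forecasts.shape
    log_w = np.zeros(K)                            # start from equal weights
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())            # stabilised exponential weights
        w /= w.sum()
        combined[t] = w @ forecasts[t]
        log_w -= lam * (forecasts[t] - y[t]) ** 2  # penalise recent squared error
    return combined
```

With a large `lam` the weights lock onto the best forecaster (adaptation); with a small one they stay spread out, which is when averaging can improve on the best individual method.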

3.
While combining forecasts is well known to reduce error, the question of how best to combine forecasts remains open. Prior research suggests that combining is most beneficial when it relies on diverse forecasts that incorporate different information. Here, I provide evidence in support of this hypothesis by analyzing data from the PollyVote project, which has published combined forecasts of the popular vote in U.S. presidential elections since 2004. Prior to the 2020 election, the PollyVote revised its original method of combining forecasts by, first, restructuring the individual forecasts based on their underlying information and, second, adding naïve forecasts as a new component method. On average across the last 100 days prior to the five elections from 2004 to 2020, the revised PollyVote reduced the error of the original specification by eight percent and, with a mean absolute error (MAE) of 0.8 percentage points, was more accurate than any of its component forecasts. The results suggest that, when deciding which forecasts to include in a combination, forecasters should be more concerned with the component forecasts' diversity than with their historical accuracy.
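A minimal sketch of the revised two-step combination, with purely hypothetical component names and numbers: forecasts are averaged within each information-based component first, and the component means are then averaged, so each component, including the naïve one, gets equal weight.

```python
# Minimal sketch: equal weight per component, not per individual forecast.
import numpy as np

components = {             # hypothetical vote-share forecasts, grouped by information type
    "polls":           [52.1, 51.4, 51.9],
    "models":          [51.0, 52.3],
    "expert_judgment": [51.5],
    "naive":           [52.0],        # the naive component added before 2020
}
component_means = [np.mean(v) for v in components.values()]
pollyvote = np.mean(component_means)
print(round(pollyvote, 2))
```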

4.
5.
Combining exponential smoothing forecasts using Akaike weights
Simple forecast combinations such as medians and trimmed or winsorized means are known to improve the accuracy of point forecasts, and Akaike's Information Criterion (AIC) has given rise to so-called Akaike weights, which have been used successfully to combine statistical models for inference and prediction in specialist fields such as ecology and medicine. We examine combining exponential smoothing point and interval forecasts using weights derived from the AIC, the small-sample-corrected AIC and the BIC on the M1 and M3 Competition datasets. Weighted forecast combinations perform better than forecasts selected using information criteria, in terms of both point forecast accuracy and prediction interval coverage. Simple combinations and weighted combinations do not consistently outperform one another, while simple combinations sometimes perform worse than single forecasts selected by information criteria. We find that a longer history tends to be associated with better prediction interval coverage.
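The Akaike weights themselves are straightforward to compute: with ΔAIC_i = AIC_i − min_j AIC_j, the weight of model i is proportional to exp(−ΔAIC_i/2). A minimal sketch, with placeholder AIC values standing in for fitted exponential smoothing models:

```python
# Minimal sketch of Akaike weights and a weighted point-forecast combination.
import numpy as np

def akaike_weights(aic):
    """w_i proportional to exp(-0.5 * (AIC_i - min AIC))."""
    delta = np.asarray(aic, dtype=float) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [210.3, 212.1, 215.8]               # hypothetical: SES, Holt, damped trend
point_forecasts = np.array([101.2, 103.5, 99.8])
w = akaike_weights(aic)
print(w, w @ point_forecasts)             # weights and the combined point forecast
```

The same weights can be applied to interval bounds, and replacing the AIC with the AICc or BIC changes only the input vector.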

6.
The generalized smooth transition autoregression (GSTAR) parametrizes the joint asymmetry in the duration and length of cycles in macroeconomic time series by using particular generalizations of the logistic function. The symmetric smooth transition and linear autoregressions are nested within the GSTAR. A test for the null hypothesis of dynamic symmetry is presented. Two case studies indicate that dynamic asymmetry is a key feature of the U.S. economy. The GSTAR model beats its competitors for point forecasting, but this superiority becomes less evident for density forecasting and in uncertain forecasting environments.
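As a rough sketch of the idea (using a Richards-type generalised logistic rather than the paper's exact parametrisation), the transition function below lets the regime weight move asymmetrically with the transition variable; ν = 1 recovers the symmetric logistic STAR, and γ → 0 collapses to a linear AR.

```python
# Minimal sketch of a smooth-transition AR(1) with a generalised (asymmetric)
# logistic transition; parameter values are illustrative only.
import numpy as np

def transition(s, gamma, c, nu):
    """Richards-type generalised logistic in [0, 1]; nu controls asymmetry."""
    return (1.0 + nu * np.exp(-gamma * (s - c))) ** (-1.0 / nu)

def gstar_step(y_lag, phi_low, phi_high, gamma, c, nu):
    """One-step prediction: regime weights move smoothly with the lag."""
    g = transition(y_lag, gamma, c, nu)
    return (1 - g) * (phi_low[0] + phi_low[1] * y_lag) \
         + g * (phi_high[0] + phi_high[1] * y_lag)

print(gstar_step(0.5, phi_low=(0.1, 0.8), phi_high=(-0.2, 0.4),
                 gamma=5.0, c=0.0, nu=2.0))
```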

7.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive ability for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously and extract the information useful for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can further reduce the mean square forecast error (MSFE). Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform the dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either the dynamic factor model forecasts or the naïve random walk benchmark.
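A minimal sketch of the comparison, on simulated data rather than the paper's macroeconomic panel: a LASSO forecast, a principal-components (diffusion index) forecast, and a grid search for the MSFE-minimising combination weight (tuned, for illustration only, on the evaluation window itself).

```python
# Minimal sketch: LASSO vs. factor forecasts, then a simple weight search.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 100))                 # 100 candidate predictors
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=200)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

f_lasso = Lasso(alpha=0.05).fit(X_tr, y_tr).predict(X_te)

pca = PCA(n_components=5).fit(X_tr)             # static factor proxy
f_factor = LinearRegression().fit(pca.transform(X_tr), y_tr) \
                             .predict(pca.transform(X_te))

w = np.linspace(0, 1, 101)
msfe = [np.mean((y_te - (wi * f_lasso + (1 - wi) * f_factor)) ** 2) for wi in w]
print("best weight on LASSO:", w[int(np.argmin(msfe))])
```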

8.
In this paper, we use survey data to analyze the accuracy, unbiasedness and efficiency of professional macroeconomic forecasts. We study a large panel of individual forecasts that has not previously been examined in the literature, providing evidence on the properties of forecasts for all G7 countries and for four different macroeconomic variables. Our results show a high degree of dispersion in forecast accuracy across forecasters. We also find large differences in forecaster performance, not only across countries but also across macroeconomic variables. In general, the forecasts tend to be biased in situations where the forecasters have to learn about large structural shocks or gradual changes in the trend of a variable. Furthermore, while a sizable fraction of forecasters appear to smooth their GDP forecasts significantly, this does not apply to forecasts made for other macroeconomic variables.
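A standard tool for the unbiasedness part of such an analysis is the Mincer–Zarnowitz regression y_t = a + b·f_t + u_t with the joint null a = 0, b = 1; the sketch below runs it on simulated placeholder data (the paper's own tests may differ in detail).

```python
# Minimal sketch of a Mincer-Zarnowitz unbiasedness test on toy data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
f = rng.normal(2.0, 1.0, 200)                  # forecasts
y = 0.3 + 0.9 * f + rng.normal(0, 0.5, 200)    # outcomes (biased on purpose)

res = sm.OLS(y, sm.add_constant(f)).fit()
print(res.params)                              # estimates of (a, b)
print(res.f_test("const = 0, x1 = 1"))         # joint test of unbiasedness
```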

9.
Utilizing a sample of 138 printing firm employees, this field experiment tested two hypotheses regarding the perceived fairness of using personality tests in employment contexts. The results largely supported Hypothesis 1, which proposed that a selection procedure combining an employment interview with a personality test would receive significantly lower fairness ratings than an interview-only procedure. In contrast, the results provided only partial support for Hypothesis 2, which proposed that negative fairness perceptions of personality tests can be reduced by explaining why such tests are used. Taken together, these findings suggest that it may be difficult to overcome negative perceptions of personality testing in employment contexts.

10.
Consider using a likelihood ratio to measure the strength of statistical evidence for one hypothesis over another. Recent work has shown that when the model is correctly specified, the likelihood ratio is seldom misleading; when it is not, misleading evidence may be observed quite frequently. Here we consider how to choose a working regression model so that the statistical evidence is correctly represented as often as it would be under the true model. We argue that the criterion for choosing a working model should be how often it correctly represents the statistical evidence about the object of interest (a regression coefficient in the true model). We see that misleading evidence about the object of interest is more likely to be observed when the working model is chosen according to other criteria (e.g., parsimony or predictive accuracy).

11.
Statistical post-processing techniques are now widely used to correct systematic biases and calibration errors in ensemble forecasts obtained from multiple runs of numerical weather prediction models. A standard approach is the ensemble model output statistics (EMOS) method, which results in a predictive distribution given by a single parametric law whose parameters depend on the ensemble members. This article assesses the merits of combining multiple EMOS models based on different parametric families. In four case studies with wind speed and precipitation forecasts from two ensemble prediction systems, we investigate the performance of state-of-the-art forecast combination methods and propose a computationally efficient approach for determining linear pool combination weights. We compare forecast combination with the theoretically superior but cumbersome estimation of a full mixture model, and assess which degree of flexibility in the combination approach yields the best practical results for post-processing applications.
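A minimal sketch of linear pool weight estimation, assuming two placeholder predictive families (a normal and a logistic standing in for two EMOS laws): the weight is chosen to maximise the average log score of the pooled density over past verifying observations.

```python
# Minimal sketch: log-score-optimal weight for a two-component linear pool.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

obs = np.array([3.1, 4.7, 2.2, 5.9, 3.8])      # verifying observations (toy)
mu1, s1 = 3.5, 1.0                              # component 1: normal law
mu2, s2 = 4.0, 1.5                              # component 2: logistic law

def neg_log_score(w):
    dens = (w * stats.norm.pdf(obs, mu1, s1)
            + (1 - w) * stats.logistic.pdf(obs, loc=mu2, scale=s2))
    return -np.mean(np.log(dens))

w = minimize_scalar(neg_log_score, bounds=(0.0, 1.0), method="bounded").x
print("weight on component 1:", round(w, 3))
```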

12.
This paper presents analytical, Monte Carlo and empirical evidence concerning out-of-sample tests of Granger causality. The environment is one in which the relative predictive ability of two nested parametric regression models is of interest. Results are provided for three statistics: a regression-based statistic suggested by Granger and Newbold [1977. Forecasting Economic Time Series. Academic Press, London], a t-type statistic comparable to those suggested by Diebold and Mariano [1995. Comparing predictive accuracy. Journal of Business and Economic Statistics, 13, 253–263] and West [1996. Asymptotic inference about predictive ability. Econometrica, 64, 1067–1084], and an F-type statistic akin to Theil's U. Since the asymptotic distributions under the null are nonstandard, tables of asymptotically valid critical values are provided. Monte Carlo evidence supports the theoretical results. An empirical example evaluates the predictive content of the Chicago Fed National Activity Index for growth in Industrial Production and core PCE-based inflation.
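For intuition, the sketch below computes a Diebold–Mariano/West-type MSE-t statistic from the two models' out-of-sample forecast errors; for nested models its null distribution is nonstandard, which is exactly why the paper tabulates critical values. The errors here are simulated placeholders, and the standard error is the naive one (no HAC correction).

```python
# Minimal sketch of an out-of-sample MSE-t statistic for two sets of errors.
import numpy as np

def mse_t(e_restricted, e_unrestricted):
    """Mean squared-loss differential over its naive standard error."""
    d = e_restricted ** 2 - e_unrestricted ** 2
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(3)
e1 = rng.normal(0, 1.0, 120)   # errors of the model without the causal variable
e2 = rng.normal(0, 0.9, 120)   # errors of the model with it
print(mse_t(e1, e2))
```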

13.
Several researchers (Armstrong, 2001; Clemen, 1989; Makridakis and Winkler, 1983) have shown empirically that combination-based forecasting methods are very effective in real-world settings. This paper discusses a combination-based forecasting approach that was used successfully in the M4 competition. The proposed approach was evaluated on a set of 100,000 time series across multiple domains and with varied frequencies. The point forecasts submitted finished fourth on the overall weighted average (OWA) error measure and second on the symmetric mean absolute percentage error (sMAPE).
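The two accuracy measures are easy to state: sMAPE = (200/h) Σ |y_t − f_t| / (|y_t| + |f_t|), and OWA averages the method's sMAPE and MASE after dividing each by the corresponding naïve-2 benchmark value. A minimal sketch with placeholder numbers:

```python
# Minimal sketch of the two M4 accuracy measures cited in the abstract.
import numpy as np

def smape(y, f):
    return 200.0 * np.mean(np.abs(y - f) / (np.abs(y) + np.abs(f)))

def owa(smape_m, mase_m, smape_naive2, mase_naive2):
    return 0.5 * (smape_m / smape_naive2 + mase_m / mase_naive2)

y = np.array([10.0, 12.0, 11.0])
f = np.array([9.5, 12.5, 10.0])
# The MASE and naive-2 benchmark values below are placeholders.
print(smape(y, f), owa(smape(y, f), 0.9, 13.0, 1.0))
```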

14.
15.
Using simulation, this paper compares the power and size of LM tests for various nonlinear time series models, and also includes the BDS test, a general test of linearity, in the comparison. The aim is to establish which of two types of critical values, asymptotic (Monte Carlo) or bootstrap, yields the more effective test. The results of the empirical comparison indicate that bootstrap critical values should be preferred to asymptotic ones when the sample size is below 200, when the autoregressive coefficient is close to a unit root, or when the linearity test is of the ARCH or BDS type. We also find that the BDS test outperforms the LM tests only in terms of generality.
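A minimal sketch of the bootstrap alternative: fit the linear AR(1) null, resample its residuals, rebuild series under the null, and take an empirical quantile of the test statistic as the critical value. The statistic below is a generic placeholder, not the paper's exact LM test.

```python
# Minimal sketch of residual-bootstrap critical values for a linearity test.
import numpy as np

def ar1_fit(y):
    """OLS fit of a linear AR(1) with intercept (the null model)."""
    phi, c = np.polyfit(y[:-1], y[1:], 1)
    resid = y[1:] - (phi * y[:-1] + c)
    return phi, c, resid

def lm_style_stat(y):
    """Placeholder statistic: scaled effect of a squared lag on the residuals."""
    phi, c, resid = ar1_fit(y)
    gamma = np.polyfit(y[:-1] ** 2, resid, 1)[0]
    return abs(gamma) * np.sqrt(len(resid))

def bootstrap_cv(y, level=0.05, B=999, seed=0):
    """(1 - level) bootstrap critical value under the linear AR(1) null."""
    rng = np.random.default_rng(seed)
    phi, c, resid = ar1_fit(y)
    resid = resid - resid.mean()            # centre residuals before resampling
    draws = []
    for _ in range(B):
        e = rng.choice(resid, size=len(y), replace=True)
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):          # simulate under the null
            yb[t] = c + phi * yb[t - 1] + e[t]
        draws.append(lm_style_stat(yb))
    return np.quantile(draws, 1 - level)

rng = np.random.default_rng(7)
y = np.empty(150)
y[0] = 0.0
for t in range(1, 150):                     # n = 150 < 200, phi near unit root
    y[t] = 0.9 * y[t - 1] + rng.normal()
print(lm_style_stat(y), bootstrap_cv(y))
```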

16.
Many potential factors influence the financial performance of listed companies, including firm-specific factors (mainly financial statement information) as well as macroeconomic and industry factors. The question studied here is how to use a principled variable selection method to screen out the main variables affecting financial performance and use them for prediction. A genetic algorithm can cast variable selection as an optimization problem, retaining or discarding each candidate variable according to its ability to predict the dependent variable (financial performance), which helps to improve the accuracy of financial performance forecasts.
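A minimal sketch of such a genetic algorithm, on simulated data: candidate solutions are binary masks over the predictors, fitness is validation mean squared error of an OLS fit on the selected columns, and masks evolve by one-point crossover and mutation. Population size and rates are illustrative.

```python
# Minimal sketch of genetic-algorithm variable selection for prediction.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20))
beta = np.zeros(20)
beta[[0, 3, 7]] = [1.0, -0.8, 0.5]              # only 3 variables matter
y = X @ beta + rng.normal(scale=0.5, size=300)
X_tr, X_va, y_tr, y_va = X[:200], X[200:], y[:200], y[200:]

def fitness(mask):
    """Negative validation MSE of an OLS fit on the selected columns."""
    if not mask.any():
        return -np.inf
    coef, *_ = np.linalg.lstsq(X_tr[:, mask], y_tr, rcond=None)
    return -np.mean((y_va - X_va[:, mask] @ coef) ** 2)

pop = rng.random((30, 20)) < 0.3                # initial random masks
for gen in range(40):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]        # keep the fittest
    children = []
    while len(children) < 20:
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 19)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(20) < 0.05               # mutation flips bits
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```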

17.
The paper examines the sensitivity of commonly used income inequality measures to changes in the ranking, size and number of regions into which a country is divided. In the analysis, several test distributions of populations and incomes are compared with a ‘reference’ distribution characterized by an even distribution of population across regional subdivisions. Random permutation tests are also run to determine whether inequality measures commonly used in regional analysis produce meaningful estimates when applied to regions of different population sizes. The results show that only the population-weighted coefficient of variation (Williamson’s index) and the population-weighted Gini coefficient can be considered sufficiently reliable inequality measures when applied to countries with a small number of regions of varying population size.
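The two measures the paper singles out are straightforward to compute; a minimal sketch with hypothetical regional data (`income` is per-capita income by region, `pop` the regional populations):

```python
# Minimal sketch of the population-weighted CV (Williamson's index) and the
# population-weighted Gini coefficient over regional per-capita incomes.
import numpy as np

def williamson(income, pop):
    ybar = np.average(income, weights=pop)
    return np.sqrt(np.average((income - ybar) ** 2, weights=pop)) / ybar

def weighted_gini(income, pop):
    p = pop / pop.sum()
    ybar = np.average(income, weights=pop)
    diff = np.abs(income[:, None] - income[None, :])
    return (p[:, None] * p[None, :] * diff).sum() / (2 * ybar)

income = np.array([12.0, 18.0, 25.0, 9.0])   # hypothetical regional incomes
pop = np.array([3.0, 1.5, 0.8, 2.2])         # regional populations (millions)
print(williamson(income, pop), weighted_gini(income, pop))
```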

18.
A data-driven test procedure for the detection of change is introduced and its properties are studied. The new solution is a max-type statistic related to data-driven rank tests for the two-sample subproblems. Simulations show that the new test possesses high and stable power, and the test is consistent against essentially any alternative. The asymptotic null distribution of the test is derived.
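A minimal sketch of a max-type statistic of this flavour: at every candidate change point, the two segments are compared with a standardised Wilcoxon rank-sum statistic and the maximum absolute value is taken. The paper's data-driven selection of rank scores is not reproduced here.

```python
# Minimal sketch of a max-type rank statistic for change detection.
import numpy as np
from scipy.stats import rankdata

def max_rank_stat(x, trim=10):
    n = len(x)
    r = rankdata(x)
    stats = []
    for k in range(trim, n - trim):      # candidate change points
        s = r[:k].sum()                  # rank sum of the first segment
        mean = k * (n + 1) / 2.0
        var = k * (n - k) * (n + 1) / 12.0
        stats.append(abs(s - mean) / np.sqrt(var))
    return max(stats)

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.0, 1, 60)])  # shift at t=60
print(max_rank_stat(x))
```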

19.
The asymptotic approach and Fisher's exact approach have often been used for testing the association between two dichotomous variables. The asymptotic approach may be appropriate in large samples, but is often criticized for unacceptably high actual type I error rates in small to medium sample sizes, while Fisher's exact approach suffers from conservative type I error rates and low power. For these reasons, a number of exact unconditional approaches have been proposed, and these are generally more powerful than their exact conditional counterparts. We consider the traditional unconditional approach based on maximization and compare it with our approach, which is based on estimation and maximization, extending the latter to designs in which the total sum is fixed. Procedures based on the Pearson chi-square, Yates's corrected, and likelihood ratio test statistics are evaluated with regard to actual type I error rates and power, and a real example is used to illustrate the various testing procedures. The unconditional approach based on estimation and maximization performs well, having an actual level much closer to the nominal level. The Pearson chi-square and likelihood ratio statistics work well with this efficient unconditional approach, which is generally more powerful than the other p-value calculation methods in the scenarios considered.
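A minimal sketch of the unconditional machinery for a 2×2 design with fixed row totals: the "M" approach maximises the p-value over the nuisance success probability, while the "E" step plugs in its estimate. The full E+M procedure, which uses the estimated p-value itself as the test statistic before maximising, is omitted here for brevity.

```python
# Minimal sketch of exact unconditional p-values for a 2x2 table.
import numpy as np
from scipy.stats import binom

def pearson_stat(x1, n1, x2, n2):
    """Pearson chi-square statistic for a 2x2 table with row totals n1, n2."""
    p = (x1 + x2) / (n1 + n2)
    if p in (0.0, 1.0):
        return 0.0
    e = np.array([n1 * p, n1 * (1 - p), n2 * p, n2 * (1 - p)])
    o = np.array([x1, n1 - x1, x2, n2 - x2])
    return ((o - e) ** 2 / e).sum()

def unconditional_pvalues(x1, n1, x2, n2, grid=200):
    """Return the ("E" plug-in, "M" maximised) exact unconditional p-values."""
    t_obs = pearson_stat(x1, n1, x2, n2)
    reject = np.array([[pearson_stat(i, n1, j, n2) >= t_obs
                        for j in range(n2 + 1)] for i in range(n1 + 1)])
    def pval(pi):   # P(T >= t_obs) when both groups share success prob pi
        pr = np.outer(binom.pmf(np.arange(n1 + 1), n1, pi),
                      binom.pmf(np.arange(n2 + 1), n2, pi))
        return pr[reject].sum()
    p_hat = (x1 + x2) / (n1 + n2)
    p_max = max(pval(pi) for pi in np.linspace(0.001, 0.999, grid))
    return pval(p_hat), p_max

print(unconditional_pvalues(3, 12, 9, 13))
```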

20.
To test the null hypothesis of a Poisson marginal distribution, test statistics based on the Stein–Chen identity are proposed. For a wide class of Poisson count time series, the asymptotic distribution of different types of Stein–Chen statistics is derived, including the case where multiple statistics are applied jointly. The performance of the tests is analyzed with simulations, which also address the question of which Stein–Chen functions should be used against which alternatives. Illustrative data examples are presented, and possible extensions of the novel Stein–Chen approach are discussed.
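For a flavour of the approach: X ~ Poisson(λ) satisfies the Stein–Chen identity E[λ f(X+1) − X f(X)] = 0 for suitable f; with f(x) = x this yields the moment condition E[λ(X+1) − X²] = 0. The sketch below tests it with a parametric bootstrap rather than the paper's asymptotics.

```python
# Minimal sketch of a Stein-Chen-type Poisson test with f(x) = x.
import numpy as np

def stein_chen_stat(x):
    """Sample version of E[lam * (X + 1) - X^2], zero under Poisson."""
    lam = x.mean()
    return np.mean(lam * (x + 1) - x ** 2)

def poisson_test(x, B=2000, seed=0):
    """Two-sided parametric-bootstrap p-value for the Poisson null."""
    rng = np.random.default_rng(seed)
    t_obs = stein_chen_stat(x)
    lam = x.mean()
    t_null = np.array([stein_chen_stat(rng.poisson(lam, len(x)))
                       for _ in range(B)])
    return np.mean(np.abs(t_null) >= abs(t_obs))

rng = np.random.default_rng(6)
print(poisson_test(rng.poisson(3.0, 200)))               # Poisson data: large p
print(poisson_test(rng.negative_binomial(3, 0.5, 200)))  # overdispersed: small p
```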
