Similar documents
Found 20 similar documents (search time: 15 ms)
1.
When a dependent variable y is related to present and past values of an exogenous variable x in a dynamic regression (distributed lag) model, and when x must itself be forecast in order to forecast y, necessary and sufficient conditions are derived under which optimal forecasts of y have lower mean square error when x is included in the model than when y is forecast solely from its own past. The contribution of non-invertibility in the lag distribution to this reduction in forecast MSE is assessed. Examples from econometrics and engineering illustrate the results.
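The abstract's setting can be illustrated with a toy simulation. All coefficients are invented, and the "y-only" forecast is deliberately crude (the in-sample mean rather than a fitted AR model), so the MSE gap below only illustrates the direction of the effect, not the paper's conditions:

```python
# Toy distributed-lag setup: y depends on current and lagged x, x is an AR(1),
# and forecasts that use x are compared against forecasting y from its mean.
import random

rng = random.Random(42)
n = 5000

x = [rng.gauss(0.0, 1.0)]
for _ in range(n):                       # exogenous AR(1) driver
    x.append(0.7 * x[-1] + rng.gauss(0.0, 1.0))

# y_t = x_t + 0.5 * x_{t-1} + noise, for t = 1..n
y = [x[t] + 0.5 * x[t - 1] + rng.gauss(0.0, 1.0) for t in range(1, n + 1)]

mean_y = sum(y) / len(y)
mse_own = sum((v - mean_y) ** 2 for v in y) / len(y)      # y-only (mean) forecast
mse_with_x = sum((y[t - 1] - (x[t] + 0.5 * x[t - 1])) ** 2
                 for t in range(1, n + 1)) / n            # forecast using x
```

With these made-up coefficients, `mse_with_x` is close to the noise variance (1.0), while `mse_own` is close to the unconditional variance of y, so including x reduces the forecast MSE substantially.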

2.
The purpose of this study is to investigate the efficacy of combining forecasting models in order to improve earnings per share forecasts. The utility industry is used because regulation causes the accounting procedures of its firms to be more homogeneous than those of other industries. Three types of forecasting models that use historical data are compared to the forecasts of the Value Line Investment Survey. The predictions of the Value Line analysts are found to be more accurate than those of the models that use only historical data. However, the study also shows that forecasts of earnings per share can be improved by combining the Value Line predictions with those of other models. Specifically, the forecast error is smallest when the Value Line forecast is combined with the forecast of the Brown-Rozeff ARIMA model.
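The combination idea can be sketched for two forecast series. This closed-form single-weight scheme is a generic textbook combination, not the specific procedure used in the study:

```python
# Combine two forecast series with a single weight w chosen to minimize the
# historical sum of squared errors of w*f1 + (1-w)*f2.

def optimal_weight(actuals, f1, f2):
    """With d = f1 - f2 and r = actual - f2, the minimizer is sum(r*d)/sum(d*d)."""
    num = sum((a - b) * (x - b) for a, x, b in zip(actuals, f1, f2))
    den = sum((x - b) ** 2 for x, b in zip(f1, f2))
    return num / den if den else 0.5     # equal weights if the forecasts coincide

def combine(f1, f2, w):
    return [w * x + (1 - w) * b for x, b in zip(f1, f2)]
```

On the toy data `actuals=[10, 12, 11]`, `f1=[9, 13, 10]`, `f2=[12, 10, 12]`, the optimal weight is 7/11 and the combined in-sample squared error (1/11) is smaller than that of either input forecast (3 and 9), mirroring the abstract's finding that the combination beats the individual forecasts.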

3.
The main objective of this paper is to model the dynamic relationship between globally averaged measures of Total Radiative Forcing (RTF) and surface temperature, measured by the Global Temperature Anomaly (GTA), and then to use this model to forecast the GTA. The analysis utilizes the Data-Based Mechanistic (DBM) approach to modelling and forecasting where, in this application, the unobserved component model includes a novel hybrid Box-Jenkins stochastic model in which the relationship between RTF and GTA is based on a continuous-time transfer function (differential equation) model. This model then provides the basis for short-term, inter-annual to decadal, forecasting of the GTA using a transfer function form of the Kalman Filter, which produces a good prediction of the ‘pause’ or ‘levelling’ in the temperature rise over the period 2000 to 2011. This derives in part from the effects of a quasi-periodic component that is modelled and forecast by a Dynamic Harmonic Regression (DHR) relationship and is shown to be correlated with the Atlantic Multidecadal Oscillation (AMO) index.

4.
Volatility forecasts aim to measure future risk, and they are key inputs for financial analysis. In this study, we forecast the realized variance, as an observable measure of volatility, for several major international stock market indices, accounting for the different predictive information present in jump, continuous, and option-implied variance components. We allow for volatility spillovers across stock markets by using a multivariate modeling approach, and we use heterogeneous autoregressive (HAR)-type models to obtain the forecasts. Based on an out-of-sample forecast study, we show that: (i) including option-implied variances in the HAR model substantially improves forecast accuracy; (ii) lasso-based lag selection methods do not outperform the parsimonious day-week-month lag structure of the HAR model; and (iii) cross-market spillover effects embedded in the multivariate HAR model have long-term forecasting power.
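The day-week-month lag structure mentioned in (ii) can be sketched as follows. The HAR design and the tiny pure-Python OLS solver below are generic illustrations of the standard univariate HAR-RV regression, not the authors' multivariate implementation:

```python
# HAR-RV sketch: tomorrow's realized variance regressed on the daily value,
# the 5-day (weekly) average, and the 22-day (monthly) average of RV.

def har_design(rv):
    """Rows [1, daily, weekly, monthly] for t = 21..len(rv)-2, target rv[t+1]."""
    X, y = [], []
    for t in range(21, len(rv) - 1):
        daily = rv[t]
        weekly = sum(rv[t - 4:t + 1]) / 5
        monthly = sum(rv[t - 21:t + 1]) / 22
        X.append([1.0, daily, weekly, monthly])
        y.append(rv[t + 1])
    return X, y

def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                   # forward elimination with pivoting
        p = max(range(i, k), key=lambda r_: abs(A[r_][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r_ in range(i + 1, k):
            f = A[r_][i] / A[i][i]
            A[r_] = [a - f * c for a, c in zip(A[r_], A[i])]
            b[r_] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):         # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta
```

Fitting `ols(*har_design(rv))` on a realized-variance series gives the HAR coefficients; the one-step forecast is the fitted row `[1, daily, weekly, monthly]` built from the latest observations.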

5.
Interest in the use of “big data” for forecasting macroeconomic time series such as private consumption or unemployment has increased; however, applications to the forecasting of GDP remain rather rare. This paper incorporates Google search data into a bridge equation model, a class of model that usually belongs to the suite of forecasting tools at central banks. We show how such big data information can be integrated, with an emphasis on the appeal of the underlying model in this respect. As the decision as to which Google search terms should be added to which equation is crucial, both for the forecasting performance itself and for the economic consistency of the implied relationships, we compare different (ad hoc, factor, and shrinkage) approaches in terms of their pseudo-real-time out-of-sample forecast performance for GDP, various GDP components, and monthly activity indicators. We find that sizeable gains can indeed be obtained by using Google search data, with the best-performing Google variable selection approach varying according to the target variable. Thus, assigning the selection methods flexibly to the targets leads to the most robust outcomes overall, in all layers of the system.

6.
This paper presents a Fuzzy Regression Forecasting Model (FRFM) for forecasting demand in the present international air cargo market. Accuracy is one of the most important concerns in forecasting, but one problem is often overlooked: an accurate forecast model for one decision maker does not necessarily suit another. This is mainly because individuals perceive their socioeconomic environment and their competitiveness differently when evaluating risk, and therefore make divergent judgments about the same scenarios; even when faced with the same challenge, distinctive responses arise from individuals' differing evaluations of their own strengths and weaknesses. Resolving this uncertainty and indefiniteness while accommodating individuality is the main purpose of the FRFM. When forecasting air cargo volumes, uncertainty factors often cause deviations in estimates derived from traditional linear regression analysis. Aiming to enhance forecast accuracy by minimizing these deviations, the model integrates fuzzy regression analysis with linear regression analysis to reduce the residuals resulting from uncertain factors. The authors apply α-cuts and an Index of Optimism λ to achieve a more flexible and persuasive forecast of future volumes.

7.
Empirical evidence has shown that seasonal patterns of tourism demand and the effects of various influencing factors on this demand tend to change over time. To forecast future tourism demand accurately requires appropriate modelling of these changes. Based on the structural time series model (STSM) and the time-varying parameter (TVP) regression approach, this study develops the causal STSM further by introducing TVP estimation of the explanatory variable coefficients, and therefore combines the merits of the STSM and TVP models. This new model, the TVP-STSM, is employed for modelling and forecasting quarterly tourist arrivals to Hong Kong from four key source markets: China, South Korea, the UK and the USA. The empirical results show that the TVP-STSM outperforms all seven competitors, including the basic and causal STSMs and the TVP model for one- to four-quarter-ahead ex post forecasts and one-quarter-ahead ex ante forecasts.

8.
We estimate a Bayesian VAR (BVAR) for the UK economy and assess its performance in forecasting GDP growth and CPI inflation in real time relative to forecasts from COMPASS, the Bank of England’s DSGE model, and other benchmarks. We find that the BVAR outperformed COMPASS when forecasting both GDP and its expenditure components. In contrast, their performances when forecasting CPI were similar. We also find that the BVAR density forecasts outperformed those of COMPASS, despite under-predicting inflation at most forecast horizons. Both models over-predicted GDP growth at all forecast horizons, but the issue was less pronounced in the BVAR. The BVAR’s point and density forecast performances are also comparable to those of a Bank of England in-house statistical suite for both GDP and CPI inflation, as well as to the official Inflation Report projections. Our results are broadly consistent with the findings of similar studies for other advanced economies.

9.
We compare a number of methods that have been proposed in the literature for obtaining h-step-ahead minimum mean square error forecasts from self-exciting threshold autoregressive (SETAR) models, and compare these forecasts to those from an AR model. The comparison of forecasting methods is made using Monte Carlo simulation. The Monte Carlo method of calculating SETAR forecasts is generally at least as good as the other methods we consider; an exception is when the disturbances in the SETAR model come from a highly asymmetric distribution, in which case a bootstrap method is to be preferred. An empirical application calculates multi-period forecasts from a SETAR model of US gross national product using a number of the forecasting methods. We find that whether there are improvements in forecast performance relative to a linear AR model depends on the historical epoch we select, and on whether forecasts are evaluated conditional on the regime the process was in at the time the forecast was made.
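The Monte Carlo method of computing multi-step SETAR forecasts can be sketched as follows; the two-regime coefficients and the threshold are invented for illustration:

```python
# Monte Carlo multi-step SETAR forecasting: draw future disturbances, simulate
# paths through the threshold recursion, and average the simulated h-step values.
import random

def setar_next(y, eps, threshold=0.0):
    """One step of an illustrative two-regime SETAR recursion."""
    if y <= threshold:
        return 0.5 + 0.8 * y + eps   # lower-regime AR(1)
    return -0.3 + 0.4 * y + eps      # upper-regime AR(1)

def mc_forecast(y_last, h, n_paths=10000, sigma=1.0, seed=1):
    """h-step-ahead forecast as the average of simulated future paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        y = y_last
        for _ in range(h):
            y = setar_next(y, rng.gauss(0.0, sigma))
        total += y
    return total / n_paths
```

Averaging over simulated paths matters because, for a nonlinear model like SETAR, simply iterating the skeleton (setting future disturbances to zero) does not give the conditional mean at horizons beyond one step.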

10.
Forecasting competitions have usually compared the accuracy of different forecasting methods across a number of different time series. This paper describes a study of the application of ten forecasting methods to a single time series: peak electricity demand in England and Wales. The performance measure used, however, is not one of the usual forecast accuracy measures such as MSE or MAPE, but a managerial one: the impact of different forecast methods on the profitability of the Central Electricity Generating Board for England and Wales is assessed using a financial simulation model. As well as examining the effects of forecast method on profitability, the effects of two other factors are considered: the use of a temperature-corrected data series and the impact of a log transformation of the data. All of these effects are both statistically and practically significant. The results are then examined from a different standpoint: specifically, the extent to which the financial impacts of alternative forecast methods can be explained by a number of conventional forecast accuracy measures. This question is of major importance in applications, since accuracy is one of the few easily measured characteristics of a potential forecasting method. It is concluded that much, though not all, of the results can be explained by accuracy considerations.

11.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using “big data” (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, & Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether “big data” are useful for modelling low frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garrote. Our approach is to carry out a forecasting “horse race” using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that several of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called “combination” models). We find that forecast combination methods are mean square forecast error (MSFE) “best” for only three of the 11 variables at a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA factor estimation methods yield MSFE-best predictions for nine of the 11 variables at h=1, although PCA dominates at longer horizons.
Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h>1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing “big data” for macroeconometric forecasting.
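The PCA-based factor ("diffusion index") approach underlying several of the specifications above can be sketched in a few lines: power iteration extracts one factor from the predictor panel, which is then used in a simple predictive regression. This is a generic illustration, not the authors' code:

```python
# Diffusion-index sketch: first principal component by power iteration on the
# covariance matrix, then an OLS regression of the target on the factor.

def first_pc(data, iters=200):
    """First principal-component scores of data (rows = time, cols = variables)."""
    n, k = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(k)]
    X = [[row[j] - means[j] for j in range(k)] for row in data]
    cov = [[sum(X[t][i] * X[t][j] for t in range(n)) / n for j in range(k)]
           for i in range(k)]
    v = [1.0] * k                    # power iteration for the top eigenvector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(val * val for val in w) ** 0.5
        v = [val / norm for val in w]
    return [sum(X[t][j] * v[j] for j in range(k)) for t in range(n)]

def regress_on_factor(y, f):
    """OLS of the target on the extracted factor: returns (intercept, slope)."""
    n = len(y)
    my, mf = sum(y) / n, sum(f) / n
    beta = (sum((a - my) * (b - mf) for a, b in zip(y, f))
            / sum((b - mf) ** 2 for b in f))
    return my - beta * mf, beta
```

A real application would extract several factors from a large panel and include lags of the target; this sketch only shows the dimension-reduction step itself.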

12.
Factor models have been applied extensively for forecasting when high-dimensional datasets are available, in which case the number of variables can be very large. For instance, the dynamic factor models commonly used in central banks handle over 100 variables. However, there is a growing body of literature indicating that more variables do not necessarily lead to estimated factors with lower uncertainty or to better forecasting results. This paper investigates the usefulness of partial least squares techniques, which take the variable to be forecast into account when reducing the dimension of the problem from a large number of variables to a smaller number of factors. We propose different approaches to dynamic sparse partial least squares as a means of improving forecast efficiency by simultaneously taking the variable to be forecast into account while forming an informative subset of predictors, instead of using all the available ones to extract the factors. We use the well-known Stock and Watson database to check the forecasting performance of our approach. The proposed dynamic sparse models perform well in improving efficiency compared to widely used factor methods in macroeconomic forecasting.
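The key contrast with PCA can be shown with a one-component PLS sketch: the weight vector is proportional to each centered predictor's covariance with the target, so the factor is formed with the forecast variable in mind. This is a minimal generic sketch, not the paper's dynamic sparse PLS:

```python
# First PLS component: weight each (centered) predictor by its covariance with
# the (centered) target, normalize, and form the factor as the weighted sum.

def pls1_factor(X, y):
    """First PLS factor scores for predictor rows X and target series y."""
    n, k = len(X), len(X[0])
    mx = [sum(r[j] for r in X) / n for j in range(k)]
    my = sum(y) / n
    Xc = [[r[j] - mx[j] for j in range(k)] for r in X]
    yc = [v - my for v in y]
    w = [sum(Xc[t][j] * yc[t] for t in range(n)) for j in range(k)]  # X'y weights
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    return [sum(Xc[t][j] * w[j] for j in range(k)) for t in range(n)]
```

Because the weights come from X'y rather than from the predictor covariance alone, a predictor that varies a lot but is unrelated to the target gets little weight, which is exactly the property the abstract appeals to.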

13.
It is widely believed that large econometric models cannot be used for forecasting without considerable intervention on the part of the forecaster. In this paper we challenge this view by reproducing a number of recent forecasts published by the National Institute, but without the ad hoc interventions used at the time. We show that in no case would the forecast produced by the model used mechanically have been radically different from that actually published. Further, in an ex-post comparison against actual out-turns, the mechanical model forecast is not obviously dominated by the published version.

14.
As the internet’s footprint continues to expand, cybersecurity is becoming a major concern for both governments and the private sector. One such cybersecurity issue relates to data integrity attacks. This paper focuses on the power industry, where forecasting processes rely heavily on the quality of the data. Data integrity attacks can be expected to harm the performance of forecasting systems, with a major impact on both the financial bottom line of power companies and the resilience of power grids. This paper reveals the effect of data integrity attacks on the accuracy of four representative load forecasting models (multiple linear regression, support vector regression, artificial neural networks, and fuzzy interaction regression). We begin by simulating data integrity attacks through the random injection of multipliers that follow a normal or uniform distribution into the load series. The four load forecasting models are then used to generate one-year-ahead ex post point forecasts so that their forecast errors can be compared. The results show that the support vector regression model is the most robust, followed closely by the multiple linear regression model, while the fuzzy interaction regression model is the least robust of the four. Nevertheless, all four models fail to provide satisfying forecasts when the scale of the data integrity attacks becomes large. This presents a serious challenge to both load forecasters and the broader forecasting community: generating accurate forecasts under data integrity attacks. We construct our case study using the publicly available data from the Global Energy Forecasting Competition 2012, and close with an overview of potential research topics for future studies.
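The attack simulation idea can be sketched as follows; a seasonal-naive forecaster stands in for the four regression models studied in the paper, and all parameters are illustrative:

```python
# Corrupt a load series with random multiplicative noise (the "attack"), then
# compare the MAPE of a simple seasonal-naive forecast trained on clean vs.
# attacked history.
import random

def attack(load, scale=0.1, seed=0):
    """Inject normally distributed random multipliers into the load series."""
    rng = random.Random(seed)
    return [x * (1.0 + rng.gauss(0.0, scale)) for x in load]

def seasonal_naive(history, horizon, period=24):
    """Forecast each future hour with the same hour from the last full period."""
    return [history[-period + (h % period)] for h in range(horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)
```

Fitting on `attack(history, scale)` and scoring against the clean future shows the forecast error growing with `scale`, which is the qualitative pattern the paper documents for all four models.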

15.
There are two potential directions of forecast combination: combining for adaptation and combining for improvement. The former direction targets the performance of the best forecaster, while the latter attempts to combine forecasts to improve on the best forecaster. It is often useful to infer which goal is more appropriate so that a suitable combination method may be used. This paper proposes an AI-AFTER approach that can not only determine the appropriate goal of forecast combination but also intelligently combine the forecasts to automatically achieve the proper goal. As a result of this approach, the combined forecasts from AI-AFTER perform well universally in both adaptation and improvement scenarios. The proposed forecasting approach is implemented in our R package AIafter, which is available at https://github.com/weiqian1/AIafter.
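The classic AFTER-style exponential reweighting that underlies this line of work can be sketched for a small pool of forecasters. This is not the AI-AFTER algorithm itself (whose implementation lives in the linked R package), just the basic adaptive weighting it builds on:

```python
# AFTER-style combining: model weights are updated multiplicatively from past
# squared forecast errors, so persistently accurate models gain weight.
import math

def after_weights(actuals, forecasts, var=1.0):
    """forecasts: one forecast series per model; returns the final weights."""
    m = len(forecasts)
    w = [1.0 / m] * m                            # start from equal weights
    for t, a in enumerate(actuals):
        raw = [w[i] * math.exp(-(a - forecasts[i][t]) ** 2 / (2.0 * var))
               for i in range(m)]
        s = sum(raw)
        w = [r / s for r in raw]                 # renormalize after each period
    return w
```

With one perfect forecaster and one that errs by 2 every period, the weight on the accurate model approaches 1 after only a few observations, which is the "combining for adaptation" behaviour described in the abstract.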

16.
Adaptive combining is generally a desirable approach to forecasting; however, it has rarely been explored for discrete response time series. In this paper, we propose an adaptively combined forecasting method for such discrete response data. We demonstrate in theory that, under mild conditions, the proposed forecast achieves the desired adaptation with respect to the widely used squared risk and other significant risk functions. Furthermore, we study the adaptation of the proposed forecasting method in the presence of model screening, which is often useful in applications. A simulation study and two real-world data examples show promise for the proposed approach.

17.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series, using an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model, or a combination of models, to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. We therefore augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches, but at a lower computational cost and with a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.

18.
We introduce a new forecasting methodology, referred to as adaptive learning forecasting, that allows for both forecast averaging and forecast error learning. We analyze its theoretical properties and demonstrate that it provides a priori MSE improvements under certain conditions. The learning rate based on past forecast errors is shown to be non-linear. The methodology is widely applicable and can provide MSE improvements even for the simplest benchmark models. We illustrate its application using data on prices for several agricultural products, as well as on real GDP growth for several of the corresponding countries. The agricultural price series are short and show an irregular cyclicality that can be linked to economic performance and productivity, and we consider a variety of forecasting models, both univariate and bivariate, that are linked to output and productivity. Our results support both the efficacy of the new method and the forecastability of agricultural prices.
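A deliberately simplified sketch of the error-learning component only: a constant gain replaces the paper's non-linear learning rate, and the averaging step is omitted, so this shows the mechanism rather than the method:

```python
# Forecast error learning with a fixed gain: each new base forecast is shifted
# by a fraction of the error made in the previous period.

def adaptive_forecast(base_forecasts, actuals, gain=0.5):
    """Adjust base forecasts sequentially using the previous period's error."""
    adjusted, err = [], 0.0
    for f, a in zip(base_forecasts, actuals):
        fa = f + gain * err
        adjusted.append(fa)
        err = a - fa          # learn from the error just made
    return adjusted
```

When the base model is persistently biased, this error feedback pulls the adjusted forecasts toward the realized values over time, which is the intuition behind letting the learning rate respond to past forecast errors.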

19.
We develop a forecasting methodology for providing credible forecasts for time series that have recently undergone a shock, by borrowing knowledge from other time series that have undergone similar shocks and for which post-shock outcomes are observed. Three shock effect estimators are motivated with the aim of minimizing average forecast risk, and risk-reduction propositions provide conditions under which our methodology works. Bootstrap and leave-one-out cross-validation procedures are provided to prospectively assess the performance of the methodology. Several simulated data examples, and two real data examples forecasting ConocoPhillips and Apple stock prices, are provided for verification and illustration.

20.
This paper aims to improve the predictability of aggregate oil market volatility using a substantially large macroeconomic database of 127 macro variables. To this end, we use machine learning from both the variable selection (VS) and common factor (i.e., dimension reduction) perspectives. We first use the lasso, the elastic net (ENet), and two conventional supervised learning approaches, based on the significance of predictors’ regression coefficients and on the incremental R-squared, to select predictors relevant to forecasting oil market volatility. We then rely on principal component analysis (PCA) to extract a common factor from the selected predictors. Finally, we augment the autoregressive (AR) benchmark model by including this supervised PCA common index. Our empirical results show that the supervised PCA regression model can successfully predict oil market volatility both in-sample and out-of-sample, and that the recommended models yield forecasting gains from both statistical and economic perspectives. We further shed light on how the selected variables change over time; in particular, option-implied volatility is consistently the most powerful predictor.
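The variable selection step can be sketched with a simple correlation screen standing in for the lasso/ENet and significance-based approaches named above (a generic illustration, not the paper's procedure):

```python
# Supervised screening sketch: rank candidate predictors by absolute Pearson
# correlation with the target and keep the top k before extracting a factor.

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def select_top_k(predictors, target, k):
    """predictors: list of (name, series); keep the k most correlated names."""
    ranked = sorted(predictors,
                    key=lambda p: abs(corr(p[1], target)), reverse=True)
    return [name for name, _ in ranked[:k]]
```

The selected subset would then be passed to PCA (as in the paper's pipeline) so that the extracted common factor is informed by the forecast target rather than by the full, mostly irrelevant panel.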
