Similar articles
20 similar articles found.
1.
In this paper we investigate the multi-period forecast performance of a number of empirical self-exciting threshold autoregressive (SETAR) models that have been proposed in the literature for modelling exchange rates and GNP, among other variables. We take each of the empirical SETAR models in turn as the DGP to ensure that the ‘non-linearity’ characterizes the future, and compare the forecast performance of SETAR and linear autoregressive models on a number of quantitative and qualitative criteria. Our results indicate that non-linear models have an edge in certain states of nature but not in others, and that this can be highlighted by evaluating forecasts conditional upon the regime. Copyright © 1999 John Wiley & Sons, Ltd.
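A minimal sketch of the kind of exercise the abstract describes, not the authors' code: simulate a two-regime SETAR(2; 1, 1) data-generating process with made-up coefficients, fit a linear AR(1) by OLS, and compare one-step forecasts overall and conditional on the regime.

```python
# Illustrative sketch (assumed coefficients, not the paper's models): a two-regime
# SETAR DGP vs. a linear AR(1), with forecasts also evaluated regime by regime.
import numpy as np

rng = np.random.default_rng(0)

# SETAR DGP: y_t = 0.5*y_{t-1} + e_t if y_{t-1} <= 0, else -0.4*y_{t-1} + e_t
def simulate_setar(n, phi_low=0.5, phi_high=-0.4, threshold=0.0, sigma=1.0):
    y = np.zeros(n)
    e = rng.normal(scale=sigma, size=n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + e[t]
    return y

y = simulate_setar(500)

# Linear AR(1) fitted by OLS (no intercept; the DGP has zero mean)
X, target = y[:-1], y[1:]
phi_ar = (X @ target) / (X @ X)

# One-step forecasts on the last 100 points, using the known SETAR equations
last = y[-101:-1]
setar_fc = np.where(last <= 0.0, 0.5 * last, -0.4 * last)
ar_fc = phi_ar * last
actual = y[-100:]

for name, fc in [("SETAR (true DGP)", setar_fc), ("AR(1)", ar_fc)]:
    rmse = np.sqrt(np.mean((actual - fc) ** 2))
    print(f"{name:>16s}  RMSE = {rmse:.3f}")

# Regime-conditional evaluation, echoing the paper's point that gains show up
# mainly in particular states of nature
low = last <= 0.0
print("AR RMSE | lower regime:", np.sqrt(np.mean((actual[low] - ar_fc[low]) ** 2)))
print("AR RMSE | upper regime:", np.sqrt(np.mean((actual[~low] - ar_fc[~low]) ** 2)))
```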

2.
This paper studies the relationship between corporate failure forecasting and earnings management variables. Using a new threshold model approach that separates samples into different regimes according to a threshold variable, the authors examine regimes to evaluate the prediction capacities of earnings management variables. By proposing this threshold model and applying it innovatively, this research reveals boundaries within which earnings management variables can yield superior corporate failure forecasting. The inclusion of earnings management variables in corporate failure models improves failure prediction capacities for firms that manipulate substantial earnings. Furthermore, an accruals-based variable improves predictions of failed firms, but the real activities-based variable improves predictions of non-failed firms. These findings highlight the importance of indicators of the magnitude of earnings management and the tools used to improve the performance of corporate failure models. The proposed model can determine the predictive power of particular explanatory variables to forecast corporate failure.

3.
Forecasting competitions have been a major driver not only of improvements in forecasting methods’ performances, but also of the development of new forecasting approaches. However, despite the tremendous value and impact of these competitions, they do suffer from the limitation that performances are measured only in terms of the forecast accuracy and bias, ignoring utility metrics. Using the monthly industry series of the M3 competition, we empirically explore the inventory performances of various widely used forecasting techniques, including exponential smoothing, ARIMA models, the Theta method, and approaches based on multiple temporal aggregation. We employ a rolling simulation approach and analyse the results for the order-up-to policy under various lead times. We find that the methods that are based on combinations result in superior inventory performances, while the Naïve, Holt, and Holt-Winters methods perform poorly.
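A rough sketch, under assumed settings, of an order-up-to (base-stock) simulation of the kind named in the abstract: the order-up-to level is the forecast of demand over the protection interval (lead time plus one review period) plus a safety stock based on past forecast errors. Demand data, the smoothing parameter, the lead time, and the safety factor are all placeholders, not the M3 setup.

```python
# Illustrative order-up-to simulation (all numbers assumed, not the paper's setup).
import numpy as np

rng = np.random.default_rng(1)
demand = rng.poisson(lam=20, size=200).astype(float)  # synthetic monthly demand
L, z = 2, 1.64          # lead time (periods) and safety factor (~95% service)

inventory, on_order = 60.0, []
errors, holding, backlog = [], 0.0, 0.0

for t in range(24, len(demand) - L):
    # Simple exponential smoothing forecast of per-period demand (alpha assumed)
    alpha, f = 0.2, demand[:t][0]
    for d in demand[1:t]:
        f = alpha * d + (1 - alpha) * f
    errors.append(demand[t] - f)
    sigma = np.std(errors) if len(errors) > 2 else demand[:t].std()

    # Order-up-to level over the protection interval of L + 1 periods
    S = (L + 1) * f + z * sigma * np.sqrt(L + 1)
    position = inventory + sum(q for (_, q) in on_order)
    order = max(0.0, S - position)
    on_order.append((t + L, order))

    # Receive orders due this period, then satisfy demand
    inventory += sum(q for (a, q) in on_order if a == t)
    on_order = [(a, q) for (a, q) in on_order if a != t]
    inventory -= demand[t]
    holding += max(inventory, 0.0)
    backlog += max(-inventory, 0.0)

print(f"avg holding: {holding / (len(demand) - L - 24):.1f}, total backlog: {backlog:.1f}")
```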

4.
With the concept of trend inflation now being widely understood to be important to the accuracy of longer-term inflation forecasts, this paper assesses alternative models of trend inflation. Reflecting the models which are common in reduced-form inflation modeling and forecasting, we specify a range of models of inflation that incorporate different trend specifications. We compare the models on the basis of their accuracies in out-of-sample forecasting, both point and density. Our results show that it is difficult to say that any one model of trend inflation is the best. Several different trend specifications seem to be about equally accurate, and the relative accuracy is somewhat prone to instabilities over time.

5.
Bootstrap prediction intervals for SETAR models
This paper considers four methods for obtaining bootstrap prediction intervals (BPIs) for the self-exciting threshold autoregressive (SETAR) model. Method 1 ignores the sampling variability of the threshold parameter estimator. Method 2 corrects the finite sample biases of the autoregressive coefficient estimators before constructing BPIs. Method 3 takes into account the sampling variability of both the autoregressive coefficient estimators and the threshold parameter estimator. Method 4 resamples the residuals in each regime separately. A Monte Carlo experiment shows that (1) accounting for the sampling variability of the threshold parameter estimator is necessary, despite its super-consistency; (2) correcting the small-sample biases of the autoregressive parameter estimators improves the small-sample properties of bootstrap prediction intervals under certain circumstances; and (3) the two-sample bootstrap can improve the long-term forecasts when the error terms are regime-dependent.
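A minimal sketch in the spirit of Method 4 (regime-wise residual resampling), not the paper's implementation: the threshold is treated as known, the regime coefficients come from a toy OLS fit, and the interval level and simulation settings are assumptions.

```python
# Sketch of bootstrap h-step prediction intervals for a two-regime SETAR(2; 1, 1)
# with a known threshold, resampling residuals separately within each regime.
import numpy as np

rng = np.random.default_rng(2)
threshold = 0.0

def simulate(n, phi=(0.6, -0.3), sigma=(1.0, 1.5)):
    y = np.zeros(n)
    e_low, e_high = rng.normal(0, sigma[0], n), rng.normal(0, sigma[1], n)
    for t in range(1, n):
        low = y[t - 1] <= threshold
        y[t] = (phi[0] if low else phi[1]) * y[t - 1] + (e_low[t] if low else e_high[t])
    return y

y = simulate(400)

# Estimate each regime by OLS through the origin and collect regime residuals
lag, cur = y[:-1], y[1:]
low = lag <= threshold
phi_hat = [(lag[m] @ cur[m]) / (lag[m] @ lag[m]) for m in (low, ~low)]
resid = [cur[m] - phi_hat[i] * lag[m] for i, m in enumerate((low, ~low))]

def bootstrap_paths(y_last, h=5, B=2000):
    paths = np.empty((B, h))
    for b in range(B):
        x = y_last
        for j in range(h):
            i = 0 if x <= threshold else 1          # regime of the current state
            x = phi_hat[i] * x + rng.choice(resid[i])  # resample within that regime
            paths[b, j] = x
    return paths

paths = bootstrap_paths(y[-1])
lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
for h in range(5):
    print(f"h={h+1}: 95% BPI = [{lo[h]:6.2f}, {hi[h]:6.2f}]")
```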

6.
7.
We study the suitability of applying lasso-type penalized regression techniques to macroeconomic forecasting with high-dimensional datasets. We consider the performances of lasso-type methods when the true DGP is a factor model, contradicting the sparsity assumption that underlies penalized regression methods. We also investigate how the methods handle unit roots and cointegration in the data. In an extensive simulation study we find that penalized regression methods are more robust to mis-specification than factor models, even if the underlying DGP possesses a factor structure. Furthermore, the penalized regression methods are shown to deliver forecast improvements over traditional approaches when applied to non-stationary data that contain cointegrated variables, despite a deterioration in their selective capabilities. Finally, we also consider an empirical application to a large macroeconomic U.S. dataset and demonstrate the competitive performance of penalized regression methods.
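A sketch on synthetic data, loosely mirroring the mis-specification experiment described above: a factor model generates the predictors while a lasso (scikit-learn's LassoCV) is used for direct h-step forecasting. All dimensions, horizons, and window choices are assumptions.

```python
# Sketch (assumed settings): direct h-step lasso forecasting when the true DGP
# is a factor model, i.e. the sparsity assumption is violated.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
T, N, h = 300, 100, 4

# Factor DGP: two AR(1) factors drive all N predictors and the target
f = np.zeros((T, 2))
for t in range(1, T):
    f[t] = 0.7 * f[t - 1] + rng.normal(size=2)
Lambda = rng.normal(size=(2, N))
X = f @ Lambda + rng.normal(scale=0.5, size=(T, N))
y = f @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=T)

# Direct h-step setup: regress y_{t+h} on X_t, expanding-window evaluation
errors = []
for t0 in range(200, T - h):
    model = LassoCV(cv=5, max_iter=5000).fit(X[:t0], y[h:t0 + h])
    errors.append(y[t0 + h] - model.predict(X[t0:t0 + 1])[0])
print("lasso direct h-step RMSE:", np.sqrt(np.mean(np.square(errors))))
```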

8.
We propose a novel approach to the modelling and forecasting of high-frequency trading volumes. The new model extends the component multiplicative error model of Brownlees et al. (2011) by introducing a more flexible specification of the long-run component. This uses an additive cascade of MIDAS polynomial filters, moving at different frequencies, to reproduce the changing long-run level and the persistent autocorrelation structure of high-frequency trading volumes. After investigating the statistical properties of the proposed approach, we illustrate its merits by means of an application to six stocks that are traded on the XETRA market in the German Stock Exchange.
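A heavily simplified sketch of the MIDAS-filter idea mentioned above, not the paper's specification: exponential Almon lag weights applied at two different window lengths and summed into a slow-moving component. The particular polynomial, frequencies, parameters, and the way the components enter the multiplicative error model are all assumptions for illustration.

```python
# Hedged sketch: exponential Almon MIDAS weights and an additive cascade of two
# smoothed components moving at different (assumed) frequencies.
import numpy as np

def exp_almon_weights(K, theta1=0.05, theta2=-0.01):
    """Normalised exponential Almon lag weights w_k, k = 1..K."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_filter(x, K, **theta):
    """Weighted moving average of the last K observations at each point."""
    w = exp_almon_weights(K, **theta)
    out = np.full_like(x, np.nan, dtype=float)
    for t in range(K - 1, len(x)):
        out[t] = w @ x[t - K + 1:t + 1][::-1]   # most recent observation gets w_1
    return out

rng = np.random.default_rng(4)
volume = np.abs(rng.normal(loc=5.0, scale=1.0, size=1000))  # toy volume series

# Additive cascade: a slow (monthly-scale) plus a faster (weekly-scale) component
long_run = midas_filter(volume, K=66) + midas_filter(volume, K=22)
print(long_run[-5:])
```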

9.
It is often suggested that non-linear models are needed to capture business cycle features. In this paper, we subject this view to some critical analysis. We examine two types of non-linear models designed to capture the bounce-back effect in US expansions. This means that these non-linear models produce an improved explanation of the shape of expansions over that provided by linear models. But this is at the expense of making expansions last much longer than they do in reality. Interestingly, the fitted models seem to be influenced by a single point in 1958 when a large negative growth rate in GDP was followed by good positive growth in the next quarter. This seems to have become embedded as a population characteristic and results in overly long and strong expansions. That feature is likely to be a problem for forecasting if another large negative growth rate were observed.

10.
This paper addresses the issue of estimating seasonal indices for multi-item, short-term forecasting, based upon both individual time series estimates and groups of similar time series. This development of the joint use of individual and group seasonal estimation is extended in two directions. One class of methods is derived from the procedures developed for combining forecasts. The second employs the general class of Stein Rules to obtain shrinkage estimates of seasonal components. A comparative evaluation has been undertaken of several versions of these methods, based upon a sample of retail sales data. The results favour these newly developed methods and provide some interesting insights for practical implementation.
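A minimal sketch of the combining idea, not the paper's estimators: each item's multiplicative seasonal indices are shrunk towards the group average with a fixed weight. The data and the weight are made up; a Stein-rule weight would instead be estimated from the data.

```python
# Illustrative shrinkage of individual seasonal indices towards the group average.
import numpy as np

rng = np.random.default_rng(5)
n_items, n_seasons = 8, 12

# Noisy individual seasonal indices around a common group pattern
group_pattern = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(n_seasons) / n_seasons)
individual = group_pattern + rng.normal(scale=0.15, size=(n_items, n_seasons))
individual /= individual.mean(axis=1, keepdims=True)   # renormalise to mean 1

group = individual.mean(axis=0)                         # group seasonal estimate

# Simple shrinkage combination; w = 1 recovers the individual estimate,
# w = 0 the group estimate. The weight here is an assumption.
w = 0.4
combined = w * individual + (1 - w) * group
print(np.round(combined[0], 3))
```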

11.
The M4 Competition: 100,000 time series and 61 forecasting methods
The M4 Competition follows on from the three previous M competitions, the purpose of which was to learn from empirical evidence both how to improve the forecasting accuracy and how such learning could be used to advance the theory and practice of forecasting. The aim of M4 was to replicate and extend the three previous competitions by: (a) significantly increasing the number of series, (b) expanding the number of forecasting methods, and (c) including prediction intervals in the evaluation process as well as point forecasts. This paper covers all aspects of M4 in detail, including its organization and running, the presentation of its results, the top-performing methods overall and by categories, its major findings and their implications, and the computational requirements of the various methods. Finally, it summarizes its main conclusions and states the expectation that its series will become a testing ground for the evaluation of new methods and the improvement of the practice of forecasting, while also suggesting some ways forward for the field.

12.
This commentary introduces a correlation analysis of the top-10 ranked forecasting methods that participated in the M4 forecasting competition. The “M” competitions attempt to promote and advance research in the field of forecasting by inviting both industry and academia to submit forecasting algorithms for evaluation over a large corpus of real-world datasets. After performing the initial analysis to derive the errors of each method, we proceed to investigate the pairwise correlations among them in order to understand the extent to which they produce errors in similar ways. Based on our results, we conclude that there is indeed a certain degree of correlation among the top-10 ranked methods, largely due to the fact that many of them consist of a combination of well-known statistical and machine learning techniques. This fact has a strong impact on the results of the correlation analysis, and therefore leads to similar forecasting error patterns.
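A sketch of the kind of analysis described: compute per-series errors for several methods and then their pairwise correlation matrix. The errors and method names below are synthetic placeholders, not the actual M4 submissions.

```python
# Pairwise correlation of method errors on synthetic data.
import numpy as np

rng = np.random.default_rng(6)
n_series, methods = 500, ["A", "B", "C", "D"]

# Synthetic errors: a shared component induces correlation across methods
common = rng.normal(size=n_series)
errors = np.column_stack([common + rng.normal(scale=s, size=n_series)
                          for s in (0.5, 0.6, 0.8, 1.2)])

corr = np.corrcoef(errors, rowvar=False)
print("pairwise error correlations:")
for i, name in enumerate(methods):
    print(name, np.round(corr[i], 2))
```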

13.
This study evaluates the forecasting accuracy of six alternative econometric models in the context of the demand for international tourism in Denmark. These econometric models are special cases of a general autoregressive distributed lag specification. In addition, the forecasting accuracy of two univariate time series models is evaluated for benchmark comparison purposes. The forecasting competition is based on annual data on inbound tourism to Denmark. Individual models are estimated for each of the six major origin countries over the period 1969–93 and forecasting performance is assessed using data for the period 1994–97. Rankings of these forecasting models over different time horizons are established based on mean absolute percentage error and root mean square percentage error.
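The two ranking criteria named in the abstract, written out explicitly; the actuals and forecasts below are placeholder numbers, not the Danish tourism data.

```python
# Mean absolute percentage error (MAPE) and root mean square percentage error (RMSPE).
import numpy as np

def mape(actual, forecast):
    return 100 * np.mean(np.abs((actual - forecast) / actual))

def rmspe(actual, forecast):
    return 100 * np.sqrt(np.mean(((actual - forecast) / actual) ** 2))

actual = np.array([120.0, 135.0, 150.0, 160.0])    # e.g. annual arrivals, thousands
forecast = np.array([115.0, 140.0, 148.0, 170.0])
print(f"MAPE = {mape(actual, forecast):.2f}%, RMSPE = {rmspe(actual, forecast):.2f}%")
```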

14.
Copulas provide an attractive approach to the construction of multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of forecasting the tail dependences explicitly. Most of the available approaches are only able to estimate tail dependences and correlations via nuisance parameters, and cannot be used for either interpretation or forecasting. We propose a general Bayesian approach for modeling and forecasting tail dependences and correlations as explicit functions of covariates, with the aim of improving the copula forecasting performance. The proposed covariate-dependent copula model also allows for Bayesian variable selection from among the covariates of the marginal models, as well as the copula density. The copulas that we study include the Joe-Clayton copula, the Clayton copula, the Gumbel copula and the Student’s t-copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the S&P 100 and S&P 600 stock indices. The forecasting performance of the proposed approach is compared with that of other modeling strategies based on log predictive scores. A value-at-risk evaluation is also performed for the model comparisons.
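A hedged sketch of what "tail dependence as an explicit function of covariates" can look like for one of the copulas named above: the Clayton copula has lower tail dependence 2^(-1/θ), so letting θ depend on covariates through a positive link (the log link and coefficients below are assumptions, not the paper's specification) makes the tail dependence itself covariate-dependent.

```python
# Clayton lower tail dependence as a function of covariates via theta(x) = exp(x @ beta).
import numpy as np

def clayton_lower_tail_dependence(x, beta):
    """x: covariate vector, beta: coefficients; theta(x) = exp(x @ beta) > 0."""
    theta = np.exp(np.asarray(x) @ np.asarray(beta))
    return 2.0 ** (-1.0 / theta)

# Toy example: tail dependence rising with a 'volatility' covariate (made-up coefficients)
beta = np.array([0.2, 0.8])
for vol in (0.5, 1.0, 2.0):
    lam = clayton_lower_tail_dependence([1.0, vol], beta)
    print(f"volatility={vol:.1f}  ->  lambda_L={lam:.3f}")
```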

15.
We summarize the methodology of the team tololo for the Global Energy Forecasting Competition 2012: Load Forecasting. Our strategy consisted of a temporal multi-scale model that combines three components. The first component was a long-term trend estimated by means of non-parametric smoothing. The second was a medium-term component describing the sensitivity of the electricity demand to the temperature at each time step. We used a generalized additive model to fit this component, using calendar information as well. Finally, a short-term component models local behaviours. As the factors that drive this component are unknown, we used a random forest model to estimate it.
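A rough sketch of the three-component idea on synthetic data, not the team's actual code: a slow trend from a moving average, a temperature effect fitted on the detrended load (a cubic fit stands in for the GAM), and a random forest on calendar features for what remains. All data, features, and window lengths are assumptions.

```python
# Additive decomposition: trend + temperature effect + random forest on the remainder.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 24 * 365
hour = np.arange(n) % 24
dow = (np.arange(n) // 24) % 7
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / (24 * 365)) + rng.normal(0, 2, n)
load = (50 + 0.01 * np.arange(n) / 24 + np.maximum(18 - temp, 0)
        + 5 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 1, n))

# 1) Long-term trend: centred moving average over ~8 weeks
w = 24 * 7 * 8
trend = np.convolve(load, np.ones(w) / w, mode="same")

# 2) Medium-term temperature effect on the detrended load
detrended = load - trend
coefs = np.polyfit(temp, detrended, deg=3)
temp_effect = np.polyval(coefs, temp)

# 3) Short-term component: random forest on calendar features of the remainder
remainder = detrended - temp_effect
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(np.column_stack([hour, dow]), remainder)

fitted = trend + temp_effect + rf.predict(np.column_stack([hour, dow]))
print("in-sample RMSE:", np.sqrt(np.mean((load - fitted) ** 2)).round(3))
```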

16.
In areas from medicine to climate change to economics, we are faced with huge challenges and a need for accurate forecasts, yet our ability to predict the future has been found wanting. The basic problem is that complex systems such as the atmosphere or the economy cannot be reduced to simple mathematical laws and modeled accordingly. The equations in numerical models are therefore only approximations to reality, and are often highly sensitive to external influences and small changes in parameterisation: they can be made to fit past data, but are less good at prediction. Since decisions are usually based on our best models of the future, how can we proceed? This paper draws a comparison between two apparently different fields: biology and economics. In biology, drug development is a highly inefficient and expensive process, which in the past has relied heavily on trial and error. Institutions such as pharmaceutical companies and universities are now radically changing their approach and adopting techniques from the new field of systems biology to integrate information from disparate sources and improve the development process. A similar revolution is required in economics if models are to reflect the nature of human economic activity and provide useful tools for policy makers. We outline the main foundations for a theory of systems economics.

17.
Forecasting the success of megaprojects, such as the Olympic Games or space exploration missions, is a very difficult but important task, due to their complexity and the large capital investment they require. Typically, megaproject stakeholders do not employ formal forecasting methods, but instead rely on impact assessments and/or cost–benefit analysis; however, as these tools do not necessarily include forecasts, there is no accountability. This study evaluates the effectiveness of judgmental methods for successfully forecasting the accomplishment of specific megaproject objectives, where the measure of success is the collective accomplishment of such objectives. We compare the performances of three judgmental methods used by a group of 69 semi-experts: unaided judgement (UJ), semi-structured analogies (s-SA), and interaction groups (IG). The empirical evidence reveals that the use of s-SA leads to accuracy improvements relative to UJ. These improvements are amplified further when we introduce the pooling of analogies through teamwork in IG.

18.
19.
The paper analyzes the use of information in companies planning strategically versus those that are not. This contrast is used to build the case for developing a strategic forecasting capability that focuses on a variety of environments, is proactive and interactive, and creates a need for different kinds of databases and forecasting techniques.

20.
We construct a real-time dataset (FRED-SD) with vintage data for the U.S. states that can be used to forecast both state-level and national-level variables. Our dataset includes approximately 28 variables per state, including labor-market, production, and housing variables. We conduct two sets of real-time forecasting exercises. The first forecasts state-level labor-market variables using five different models and different levels of industrially disaggregated data. The second forecasts a national-level variable exploiting the cross-section of state data. The state-forecasting experiments suggest that large models with industrially disaggregated data tend to have higher predictive ability for industrially diversified states. For national-level data, we find that forecasting and aggregating state-level data can outperform a random walk but not an autoregression. We compare these real-time data experiments with forecasting experiments using final-vintage data and find very different results. Because these final-vintage results are obtained with revised data that would not have been available at the time the forecasts would have been made, we conclude that the use of real-time data is essential for drawing proper conclusions about state-level forecasting models.
