Related Articles
Found 20 related articles.
1.
This paper evaluates the performance of prediction intervals generated from alternative time series models, in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state space models for exponential smoothing, and Harvey's structural time series models. We use thirteen monthly time series for the number of tourist arrivals to Hong Kong and Australia. The mean coverage rates and widths of the alternative prediction intervals are evaluated in an empirical setting. It is found that all models produce satisfactory prediction intervals, except for the autoregressive model. In particular, the intervals based on the bias-corrected bootstrap perform best in general, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
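As a rough illustration of the bias-corrected bootstrap approach evaluated here, the following sketch builds percentile prediction intervals for an OLS-fitted AR(1); the bootstrap-mean bias correction and all function names are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def ar1_bootstrap_pi(y, h=12, B=999, alpha=0.05, seed=0):
    """Percentile-bootstrap prediction intervals for an OLS-fitted AR(1),
    with a simple bootstrap-mean bias correction of the slope. Illustrative
    only; the paper's exact bias-correction scheme may differ."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    resid = y[1:] - (c + phi * y[:-1])
    resid = resid - resid.mean()

    # Step 1: estimate the small-sample bias of phi by re-fitting on
    # bootstrap series generated under the fitted model
    phis = []
    for _ in range(200):
        yb = np.empty_like(y)
        yb[0] = y[0]
        shocks = rng.choice(resid, size=len(y) - 1, replace=True)
        for t in range(1, len(y)):
            yb[t] = c + phi * yb[t - 1] + shocks[t - 1]
        Xb = np.column_stack([np.ones(len(y) - 1), yb[:-1]])
        phis.append(np.linalg.lstsq(Xb, yb[1:], rcond=None)[0][1])
    phi_bc = 2 * phi - np.mean(phis)   # bias-corrected slope

    # Step 2: simulate B future paths under the corrected coefficient and
    # take per-horizon percentiles as the prediction interval
    paths = np.empty((B, h))
    for b in range(B):
        prev = y[-1]
        for j in range(h):
            prev = c + phi_bc * prev + rng.choice(resid)
            paths[b, j] = prev
    return (np.quantile(paths, alpha / 2, axis=0),
            np.quantile(paths, 1 - alpha / 2, axis=0))
```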

2.
Bootstrap prediction intervals for SETAR models
This paper considers four methods for obtaining bootstrap prediction intervals (BPIs) for the self-exciting threshold autoregressive (SETAR) model. Method 1 ignores the sampling variability of the threshold parameter estimator. Method 2 corrects the finite sample biases of the autoregressive coefficient estimators before constructing BPIs. Method 3 takes into account the sampling variability of both the autoregressive coefficient estimators and the threshold parameter estimator. Method 4 resamples the residuals in each regime separately. A Monte Carlo experiment shows that (1) accounting for the sampling variability of the threshold parameter estimator is necessary, despite its super-consistency; (2) correcting the small-sample biases of the autoregressive parameter estimators improves the small-sample properties of bootstrap prediction intervals under certain circumstances; and (3) the two-sample bootstrap can improve the long-term forecasts when the error terms are regime-dependent.
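A minimal sketch of the regime-wise resampling idea behind Method 4 (the two-sample bootstrap), assuming a SETAR(2; 1, 1) with delay 1 has already been fitted; the interface and fitted-value computation are illustrative assumptions:

```python
import numpy as np

def setar_regime_bootstrap_paths(y, params_lo, params_hi, r, h=10, B=999, seed=0):
    """Sketch of Method 4: residuals are split by the regime active at each
    observation and resampled separately. params_lo/params_hi = (intercept,
    AR coefficient) of the fitted SETAR(2; 1, 1) with threshold r, delay 1."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    low = y[:-1] <= r
    fitted = np.where(low,
                      params_lo[0] + params_lo[1] * y[:-1],
                      params_hi[0] + params_hi[1] * y[:-1])
    resid = y[1:] - fitted
    res_lo, res_hi = resid[low], resid[~low]   # regime-specific residual pools

    paths = np.empty((B, h))
    for b in range(B):
        prev = y[-1]
        for j in range(h):
            if prev <= r:                      # low regime governs the next step
                (c, phi), pool = params_lo, res_lo
            else:
                (c, phi), pool = params_hi, res_hi
            prev = c + phi * prev + rng.choice(pool)
            paths[b, j] = prev
    return paths    # per-horizon percentiles of these paths give the BPIs
```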

3.
Increasingly, prediction markets are being embraced as a mechanism for eliciting and aggregating dispersed information and as a means of deriving probabilistic forecasts of future uncertain events. The efficient market hypothesis postulates that prediction market prices should incorporate all information that is relevant to the performance of the contracts traded. This paper shows that this may not be the case for information regarding environmental factors such as the weather and atmospheric conditions. In the context of horserace betting markets, we demonstrate that even after the effects of these factors on the contestants (horses and jockeys) have been discounted, the accuracy of the probabilities derived from market prices is systematically affected by the prevailing weather and atmospheric conditions. We show that significantly better forecasts can be derived from prediction markets if we correct for this phenomenon, and that these improvements have substantial economic value.

4.
A new method is proposed for obtaining interval forecasts for autoregressive models that takes into account the variability due to the estimation of both the order and the parameters. The procedure improves on that introduced by Masarotto (1990), allows a substantial reduction in the variance of the predictive distribution percentile estimators, and should thus be considered a useful alternative to the classic Box-Jenkins interval forecast. The method uses the bootstrap technique and is distribution-free. An empirical application is presented.
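To show how order-selection uncertainty can enter the bootstrap, here is a hedged sketch in the spirit of the procedure (not Masarotto's or the authors' exact algorithm): each bootstrap replicate re-selects the AR order by AIC before re-estimating and forecasting:

```python
import numpy as np

def fit_ar(y, p):
    # OLS fit of an AR(p): regressors are a constant and lags 1..p
    n = len(y)
    X = np.column_stack([np.ones(n - p)] + [y[p - k:n - k] for k in range(1, p + 1)])
    beta = np.linalg.lstsq(X, y[p:], rcond=None)[0]
    return beta, y[p:] - X @ beta

def aic_order(y, pmax=8):
    # Order selection by AIC -- the step whose variability is propagated
    best_aic, best_p = np.inf, 1
    for p in range(1, pmax + 1):
        _, r = fit_ar(y, p)
        aic = len(r) * np.log(np.mean(r ** 2)) + 2 * (p + 1)
        if aic < best_aic:
            best_aic, best_p = aic, p
    return best_p

def order_aware_bootstrap(y, h=6, B=499, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    p0 = aic_order(y)
    beta0, resid0 = fit_ar(y, p0)
    resid0 = resid0 - resid0.mean()
    fcasts = np.empty((B, h))
    for b in range(B):
        yb = y.copy()                      # regenerate a series under the fit
        for t in range(p0, len(y)):
            yb[t] = beta0[0] + beta0[1:] @ yb[t - p0:t][::-1] + rng.choice(resid0)
        pb = aic_order(yb)                 # order re-selected per replicate
        betab, residb = fit_ar(yb, pb)
        residb = residb - residb.mean()
        hist = list(y[-pb:])               # forecast from the observed data
        for j in range(h):
            nxt = betab[0] + betab[1:] @ np.array(hist[-pb:][::-1]) + rng.choice(residb)
            hist.append(nxt)
            fcasts[b, j] = nxt
    return fcasts   # interval forecasts: per-horizon percentiles of fcasts
```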

5.
Household projections are key components of analyses of several issues of social concern, including the welfare of the elderly, housing, and environmentally significant consumption patterns. Researchers or policy makers who use such projections need appropriate representations of uncertainty in order to inform their analyses. However, the weaknesses of the traditional approach of providing alternative variants around a single "best guess" projection are magnified in household projections, which have many output variables of interest and many input variables beyond fertility, mortality, and migration. We review current methods of household projection and the potential for using them to produce probabilistic projections, which would address many of these weaknesses. We then propose a new framework for a household projection method of intermediate complexity that we believe is a good candidate for providing a basis for further development of probabilistic household projections. An extension of the traditional headship rate approach, this method is based on modelling changes in headship rates, decomposed by household size, as a function of variables describing demographic events such as parity-specific fertility, union formation and dissolution, and leaving home. It has moderate data requirements and manageable complexity, allows for direct specification of demographic events, and produces output that includes the most important household characteristics for many applications. An illustration of how such a model might be constructed, using data on the U.S. and China over the past several decades, demonstrates the viability of the approach.
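For readers unfamiliar with the headship-rate arithmetic that the proposed framework extends, a toy example with entirely hypothetical rates and populations:

```python
import numpy as np

# Hypothetical projected populations by age group (thousands)
pop = np.array([50_000.0, 40_000.0, 30_000.0])      # e.g. ages 20-39, 40-59, 60+

# Hypothetical headship rates decomposed by household size
# (rows: age group; columns: household size 1, 2, 3+)
headship = np.array([[0.10, 0.15, 0.20],
                     [0.08, 0.18, 0.25],
                     [0.20, 0.25, 0.10]])

# Households of each size = sum over age groups of population x headship rate
households_by_size = pop @ headship
print(households_by_size)        # projected household counts by size (thousands)
```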

6.
This paper describes the methods used by Team Cassandra, a joint effort between IBM Research Australia and the University of Melbourne, in the GEFCom2017 load forecasting competition. An important first phase in the forecasting effort involved a deep exploration of the underlying dataset. Several data visualisation techniques were applied to help us better understand the nature and size of gaps, outliers, the relationships between different entities in the dataset, and the relevance of custom date ranges. Improved, cleaned data were then used to train multiple probabilistic forecasting models. These included a number of standard and well-known approaches, as well as a neural-network based quantile forecast model that was developed specifically for this dataset. Finally, model selection and forecast combination were used to choose a custom forecasting model for every entity in the dataset.

7.
The 2018 M4 Forecasting Competition was the first M Competition to elicit prediction intervals in addition to point estimates. We take a closer look at the twenty valid interval submissions by examining the calibration and accuracy of the prediction intervals and evaluating their performance over different time horizons. Overall, the submissions fail to estimate the uncertainty properly. Importantly, we investigate the benefits of interval combination using six recently proposed heuristics that can be applied prior to learning the realizations of the quantities. Our results suggest that interval aggregation offers improvements in terms of both calibration and accuracy. While averaging interval endpoints maintains its practical appeal, being simple to implement and performing quite well when data sets are large, the median and the interior trimmed average are found to be robust aggregators for the prediction interval submissions across all 100,000 time series.
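A compact sketch of three of the combination heuristics discussed (endpoint averaging, the median, and the interior trimmed average); the trimming convention coded here is one common definition and an assumption on our part:

```python
import numpy as np

def combine_intervals(lowers, uppers, method="median", trim=0.2):
    """Aggregate prediction-interval endpoints across submissions. Interior
    trimming here drops the k largest lower bounds and the k smallest upper
    bounds, which widens the combined interval (one common convention)."""
    lowers, uppers = np.sort(lowers), np.sort(uppers)
    if method == "mean":                       # simple endpoint averaging
        return lowers.mean(), uppers.mean()
    if method == "median":                     # median of the endpoints
        return np.median(lowers), np.median(uppers)
    if method == "interior_trim":
        k = int(trim * len(lowers))
        return lowers[:len(lowers) - k].mean(), uppers[k:].mean()
    raise ValueError(method)

# e.g. five submitted 95% intervals for one series and horizon
lo = [1.0, 1.2, 0.8, 1.5, 1.1]
hi = [3.0, 2.8, 3.5, 2.2, 3.1]
print(combine_intervals(lo, hi, "interior_trim"))
```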

8.
"With increasing interest in forecast uncertainty, there is an evolving concern with assessing the degree of certainty we can attach to uncertainty itself. This concern is the subject of recent work by Juha Alho.... I examine [his] approach systematically and draw general conclusions about its efficacy. Suggestions for improvement are made."  相似文献   

9.
I compare forecasts of returns from the mean predictor (optimal under MSE) with those from the pseudo-optimal and optimal predictors for an asymmetric loss function, under the assumption that agents have an asymmetric LINLIN loss function. The results strongly suggest not using the conditional mean predictor under asymmetry. In general, forecasts can be improved by using the optimal predictor rather than the pseudo-optimal predictor, suggesting that the loss reduction from the optimal predictor can be important for practitioners as well.
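The key fact behind these results is that under LINLIN loss with penalty a on under-prediction and b on over-prediction, the optimal predictor is the a/(a+b) quantile of the predictive distribution rather than its mean. A small simulation (hypothetical numbers) makes the loss gap visible:

```python
import numpy as np

def linlin_loss(actual, forecast, a=2.0, b=1.0):
    # a penalises under-prediction (actual > forecast), b over-prediction
    e = actual - forecast
    return np.where(e > 0, a * e, -b * e)

rng = np.random.default_rng(1)
pred_draws = rng.normal(0.05, 0.2, 100_000)     # hypothetical predictive density
mean_pred = pred_draws.mean()                   # MSE-optimal predictor
opt_pred = np.quantile(pred_draws, 2.0 / 3.0)   # a/(a+b) quantile with a=2, b=1

actuals = rng.normal(0.05, 0.2, 100_000)        # fresh draws from the same DGP
print(linlin_loss(actuals, mean_pred).mean())   # larger expected LINLIN loss
print(linlin_loss(actuals, opt_pred).mean())    # smaller expected LINLIN loss
```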

10.
Volatility forecasting is crucial for portfolio management, risk management, and the pricing of derivative securities. Still, little is known about the accuracy of volatility forecasts and the horizon of volatility predictability. This paper aims to fill these gaps in the literature. We begin by introducing the notions of spot and forward predicted volatilities and propose describing the term structure of volatility predictability by spot and forward forecast accuracy curves. We then perform a comprehensive study of the term structure of volatility predictability in stock and foreign exchange markets. Our results quantify volatility forecast accuracy across horizons in two major markets and suggest that the horizon of volatility predictability is significantly longer than that reported in earlier studies. Nevertheless, this horizon is observed to be much shorter than the longest maturity of traded derivative contracts.

11.
A complete procedure for calculating the joint predictive distribution of future observations based on the cointegrated vector autoregression is presented. The large degree of uncertainty in the choice of cointegration vectors is incorporated into the analysis via the prior distribution. This prior has the effect of weighting the predictive distributions based on the models with different cointegration vectors into an overall predictive distribution. The ideas of Litterman [Mimeo, Massachusetts Institute of Technology, 1980] are adopted for the prior on the short-run dynamics of the process, resulting in a prior which depends on only a few hyperparameters. A straightforward numerical evaluation of the predictive distribution based on Gibbs sampling is proposed. The prediction procedure is applied to a seven-variable system with a focus on forecasting Swedish inflation.

12.
This article examines the earnings forecasts and investment ratings of domestic securities analysts, presenting an empirical analysis of the accuracy of investment ratings, the profitability of investment recommendations, and earnings forecast errors and their sources. The results show that analysts' investment recommendations fail to generate significant excess returns over either the short term or the medium-to-long term, that earnings forecast error is one cause of erroneous investment ratings, and that these forecast errors stem mainly from analysts' misjudgment of firm-level information.

13.
We propose a new way of selecting among model forms in automated exponential smoothing routines, thereby enhancing their predictive power. The procedure, referred to here as treating, operates by selectively subsetting the ensemble of competing models based on information from their prediction intervals. By the same token, we set forth a pruning strategy to improve the accuracy of both point forecasts and prediction intervals in forecast combination methods. The proposed approaches are applied to automated exponential smoothing routines and Bagging algorithms, respectively, to demonstrate their potential. An empirical experiment is conducted on a wide range of series from the M-Competitions. The results show that the proposed approaches are simple, require little additional computational cost, and are capable of substantially improving forecasting accuracy for both point forecasts and prediction intervals, outperforming important benchmarks and recently developed forecast combination methods.

14.
The objective of this paper is to analyze the effects of uncertainty on density forecasts of stationary linear univariate ARMA models. We consider three specific sources of uncertainty: parameter estimation, error distribution, and lag order. Depending on the estimation sample size and the forecast horizon, each of these sources may have different effects. We consider asymptotic, Bayesian, and bootstrap procedures proposed to deal with uncertainty and compare their finite-sample properties. The results are illustrated by constructing fan charts for UK inflation.
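Whichever procedure generates the simulated future paths, turning them into fan-chart bands is the same quantile computation; a minimal sketch:

```python
import numpy as np

def fan_chart_bands(paths, coverages=(0.3, 0.6, 0.9)):
    # paths: (n_sims, horizon) simulated future trajectories from any of the
    # procedures (asymptotic, Bayesian, or bootstrap); returns nested central
    # bands suitable for plotting as a fan chart
    return {c: (np.quantile(paths, (1 - c) / 2, axis=0),
                np.quantile(paths, (1 + c) / 2, axis=0))
            for c in coverages}
```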

15.
Using a broad selection of 53 carbon (EUA) related, commodity, and financial predictors, we provide a comprehensive assessment of the out-of-sample (OOS) predictability of weekly European carbon futures returns. We assess forecast performance using both statistical and economic value metrics over an OOS period spanning from January 2013 to May 2018. Two main types of dimension reduction techniques are employed: (i) shrinkage of coefficient estimates and (ii) factor models. We find that: (1) these dimension reduction techniques can beat the benchmark significantly, with positive gains in forecast accuracy, even though very few individual predictors can; (2) forecast accuracy is sensitive to the sample period, and only the Group-average models and Commodity-predictors tend to beat the benchmark consistently; the Group-average models can improve both prediction accuracy and stability significantly by averaging the predictions of the All-predictors model and the benchmark. Further, to demonstrate the usefulness of the forecasts to the end-user, we estimate the certainty equivalent gains (economic value) generated. Almost all dimension reduction techniques do well, especially those that apply shrinkage alone. We find that the All-predictors and Group-average variable sets achieve the highest economic gains and portfolio performance. Our main results are robust to alternative specifications.
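A hedged sketch of the expanding-window shrinkage forecasting exercise, using a lasso as the shrinkage device and scikit-learn (assumed available); the historical-mean benchmark, the alignment of X, and the out-of-sample R-squared metric are illustrative choices, not the paper's exact setup:

```python
import numpy as np
from sklearn.linear_model import Lasso

def oos_shrinkage_forecasts(X, y, start, alpha=0.001):
    # Expanding window: refit the lasso at each OOS date t on data up to t-1;
    # row t of X holds the (lagged) predictors aligned to predict y[t]
    preds, bench = [], []
    for t in range(start, len(y)):
        model = Lasso(alpha=alpha).fit(X[:t], y[:t])
        preds.append(model.predict(X[t:t + 1])[0])
        bench.append(y[:t].mean())           # historical-mean benchmark
    preds, bench, actual = map(np.asarray, (preds, bench, y[start:]))
    # Out-of-sample R^2 relative to the historical-mean benchmark
    r2_oos = 1 - np.sum((actual - preds) ** 2) / np.sum((actual - bench) ** 2)
    return preds, r2_oos
```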

16.
Diagnostics cannot have much power against general alternatives
Model diagnostics are shown to have little power unless alternative hypotheses can be narrowly defined. For example, the independence of observations cannot be tested against general forms of dependence. Thus, the basic assumptions in regression models cannot be inferred from the data. Equally, the proportionality assumption in proportional-hazards models is not testable. Specification error is a primary source of uncertainty in forecasting, and this uncertainty will be difficult to resolve without external calibration. Model-based causal inference is even more problematic.

17.
A variety of methods and ideas have been tried for electricity price forecasting (EPF) over the last 15 years, with varying degrees of success. This review article aims to explain the complexity of available solutions, their strengths and weaknesses, and the opportunities and threats that the forecasting tools offer or that may be encountered. The paper also looks ahead and speculates on the directions EPF will or should take in the next decade or so. In particular, it postulates the need for objective comparative EPF studies involving (i) the same datasets, (ii) the same robust error evaluation procedures, and (iii) statistical testing of the significance of one model's outperformance of another.

18.
This paper presents our 13th-place solution to the M5 Forecasting - Uncertainty challenge and compares it against GoodsForecast's second-place solution. The challenge called for estimating the median and eight other quantiles of various product sales at Walmart. Both solutions handle the median and the other quantiles separately. Our solution hybridizes LightGBM and DeepAR in various ways for median and quantile estimation, depending on the aggregation level of the sales. GoodsForecast's solution likewise used a hybrid approach, i.e., LightGBM for point estimation and a histogram algorithm for quantile estimation. In this paper, the differences between the two solutions and their results are highlighted. Although our solution only took 13th place on the competition metric, it achieves the lowest average rank under the multiple comparisons with the best (MCB) test, implying the most accurate forecasts in the majority of the series. It also performs better than most teams at the product-store aggregation level, which comprises 30,490 series (71.2% of the total).
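Quantile forecasts in M5 were scored with a scaled pinball (quantile) loss; an unscaled sketch over the nine competition quantiles (the toy data and forecasts are hypothetical):

```python
import numpy as np

M5_QUANTILES = (0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995)

def pinball_loss(actual, forecast, q):
    # Quantile (pinball) loss; the competition used a scaled variant (SPL)
    e = actual - forecast
    return np.where(e >= 0, q * e, (q - 1) * e)

# Toy 28-day horizon: Poisson sales and quantile forecasts from simulation
rng = np.random.default_rng(0)
actual = rng.poisson(3.0, size=28).astype(float)
sims = rng.poisson(3.0, size=(1000, 28))
score = np.mean([pinball_loss(actual, np.quantile(sims, q, axis=0), q).mean()
                 for q in M5_QUANTILES])
print(score)
```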

19.
This article presents the empirical Bayes method for estimating the transition probabilities of a generalized finite stationary Markov chain whose ith state is a multi-way contingency table. We use a log-linear model to describe the relationship between factors in each state. Prior knowledge about the main effects and interactions is described by a conjugate prior. Following the Bayesian paradigm, Bayes and empirical Bayes estimators relative to various loss functions are obtained. These procedures are illustrated with a real example. Finally, asymptotic normality of the empirical Bayes estimators is established.

20.
In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. Neural network models are often difficult to estimate, and we follow the idea of White (2006) of transforming the specification and nonlinear estimation problem into a linear model selection and estimation problem. To this end, we employ three automatic modelling devices. One of them is White's QuickNet, but we also consider Autometrics, which is well known to time series econometricians, and the Marginal Bridge Estimator, which is better known to statisticians. The performance of these three model selectors is compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four Scandinavian ones, and focus on forecasting during the 2007-2009 economic crisis. Forecast accuracy is measured by the root mean square forecast error, and hypothesis testing is also used to compare the performance of the different techniques.
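A rough sketch of the QuickNet idea described above, turning the nonlinear estimation problem into linear selection: draw hidden units with random input weights, then pick among them with greedy BIC-based forward selection. Everything here (unit counts, selection criterion, toy data) is an illustrative assumption, not White's exact algorithm:

```python
import numpy as np

def random_hidden_units(X, n_units=50, seed=0):
    # Logistic hidden units with randomly drawn input weights; only the
    # output (linear) weights will be estimated, so the problem is linear
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1, (X.shape[1] + 1, n_units))
    return 1 / (1 + np.exp(-(np.column_stack([np.ones(len(X)), X]) @ W)))

def forward_select(y, candidates, max_terms=5):
    # Greedy forward selection of hidden units by BIC over a linear model
    n = len(y)
    chosen, X = [], np.ones((n, 1))
    best_bic = n * np.log(np.mean((y - y.mean()) ** 2)) + np.log(n)
    while len(chosen) < max_terms:
        improved = False
        for j in range(candidates.shape[1]):
            if j in chosen:
                continue
            Xj = np.column_stack([X, candidates[:, j]])
            beta = np.linalg.lstsq(Xj, y, rcond=None)[0]
            bic = n * np.log(np.mean((y - Xj @ beta) ** 2)) + np.log(n) * Xj.shape[1]
            if bic < best_bic:
                best_bic, best_j, improved = bic, j, True
        if not improved:
            break
        chosen.append(best_j)
        X = np.column_stack([X, candidates[:, best_j]])
    return chosen, np.linalg.lstsq(X, y, rcond=None)[0]

# Toy usage: lags 1-3 of a simulated series as network inputs
rng = np.random.default_rng(1)
y = rng.standard_normal(300).cumsum() * 0.1 + rng.standard_normal(300)
lags = np.column_stack([y[2:-1], y[1:-2], y[:-3]])
target = y[3:]
Z = random_hidden_units(lags)
units, beta = forward_select(target, Z)
X = np.column_stack([np.ones(len(target)), Z[:, units]])
rmsfe = np.sqrt(np.mean((target - X @ beta) ** 2))   # in-sample here; the
print(units, rmsfe)                                  # paper uses OOS RMSFE
```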
