Similar Literature
20 similar documents were retrieved (search time: 31 ms).
1.
We present a simple quantile regression-based forecasting method that was applied in the probabilistic load forecasting framework of the Global Energy Forecasting Competition 2017 (GEFCom2017). The hourly load data are log transformed and split into a long-term trend component and a remainder term. The key forecasting element is the quantile regression approach for the remainder term, which takes into account both weekly and annual seasonalities as well as their interactions. Temperature information is used only for stabilizing the forecast of the long-term trend component. Information on public holidays is ignored. Nevertheless, the forecasting method placed second in the open data track and fourth in the defined data track, which is remarkable given the simplicity of the model. The method also consistently outperforms the Vanilla benchmark.
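As a rough illustration of this kind of approach, the sketch below fits per-quantile regressions to a (hypothetical) remainder of the log-transformed hourly load, using weekly and annual harmonics plus one interaction term as regressors. The feature construction, quantile levels, and function names are illustrative assumptions, not the authors' actual specification.

```python
# Minimal sketch: quantile regression on the remainder of the log-transformed hourly load,
# with weekly and annual seasonal harmonics and one interaction term (all illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def seasonal_features(timestamps: pd.DatetimeIndex) -> pd.DataFrame:
    hour_of_week = timestamps.dayofweek * 24 + timestamps.hour   # weekly cycle (0..167)
    day_of_year = timestamps.dayofyear                           # annual cycle
    X = pd.DataFrame(index=timestamps)
    X["sin_week"] = np.sin(2 * np.pi * hour_of_week / 168)
    X["cos_week"] = np.cos(2 * np.pi * hour_of_week / 168)
    X["sin_year"] = np.sin(2 * np.pi * day_of_year / 365.25)
    X["cos_year"] = np.cos(2 * np.pi * day_of_year / 365.25)
    X["wk_x_yr"] = X["sin_week"] * X["sin_year"]                 # weekly-annual interaction
    return sm.add_constant(X)

def quantile_forecasts(log_load_remainder: pd.Series, quantiles=(0.1, 0.5, 0.9)):
    X = seasonal_features(log_load_remainder.index)
    fits = {q: sm.QuantReg(log_load_remainder, X).fit(q=q) for q in quantiles}
    return {q: fit.predict(X) for q, fit in fits.items()}
```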

2.
This work describes an award-winning approach for solving the NN3 Forecasting Competition problem, focusing on the sound experimental validation of its main innovative feature. The NN3 forecasting task consisted of predicting 18 future values of 111 short monthly time series. The main feature of the approach was the use of the median for combining the forecasts of an ensemble of 15 MLPs to predict each time series. An experimental comparison with a single MLP shows that the ensemble improves accuracy for multiple-step-ahead forecasting. The system performed well on the withheld data, finishing as the second-best solution of the competition with an SMAPE of 16.17%.
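A minimal sketch of the central idea is given below: combine the forecasts of an ensemble of MLPs by taking the median. The network size, number of members, and training setup are placeholders, not the competition configuration.

```python
# Minimal sketch: median combination of an MLP ensemble (hyperparameters are placeholders).
import numpy as np
from sklearn.neural_network import MLPRegressor

def median_ensemble_forecast(X_train, y_train, X_future, n_members=15):
    member_preds = []
    for seed in range(n_members):
        mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
        mlp.fit(X_train, y_train)
        member_preds.append(mlp.predict(X_future))
    # robust combination: take the median across ensemble members
    return np.median(np.vstack(member_preds), axis=0)
```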

3.
Forecasting cash demands at automatic teller machines (ATMs) is challenging, due to the heteroskedastic nature of such time series. Conventional global-learning computational intelligence (CI) models, with their generalized learning behaviors, may not capture the complex dynamics and time-varying characteristics of such real-life time series data efficiently. In this paper, we propose to use a novel local learning model, the pseudo self-evolving cerebellar model articulation controller (PSECMAC) associative memory network, to produce accurate forecasts of ATM cash demands. As a computational model of the human cerebellum, our model can incorporate local learning to effectively model the complex dynamics of heteroskedastic time series. We evaluated the forecasting performance of our PSECMAC model against established CI and regression models using the NN5 competition dataset of 111 empirical daily ATM cash withdrawal series. The evaluation results show that the forecasting capability of our PSECMAC model exceeds that of the benchmark local- and global-learning-based models.

4.
Rosel, Jesús; Arnau, Jaime; Jara, Pilar. Quality and Quantity, 1998, 32(2): 155-163.
In some publications the mean is identified with the constant of a Box-Jenkins time series model. In this paper the relation between the two terms is demonstrated. Furthermore, an example illustrates the errors that can arise when the two terms are not used correctly.
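For concreteness, the following minimal sketch (an assumed AR(1) example, not taken from the paper) shows why the two quantities differ: for a stationary AR(1) process y_t = c + phi*y_{t-1} + e_t the mean is mu = c/(1 - phi), so the constant equals the mean only when the autoregressive coefficients are zero.

```python
# Minimal sketch (illustrative values): constant c vs. mean mu in an AR(1) model.
import numpy as np

rng = np.random.default_rng(0)
c, phi, n = 2.0, 0.7, 100_000
y = np.zeros(n)
for t in range(1, n):
    y[t] = c + phi * y[t - 1] + rng.normal()

print("sample mean:        ", y.mean())        # approximately 6.67
print("mu = c / (1 - phi): ", c / (1 - phi))   # 6.666..., clearly not the constant c = 2.0
```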

5.
Forecasting wind power generation up to a few hours ahead is of the utmost importance for the efficient operation of power systems and for participation in electricity markets. Recent statistical learning approaches exploit spatiotemporal dependence patterns among neighbouring sites, but their requirement of sharing confidential data with third parties may limit their use in practice. This explains the recent interest in distributed, privacy-preserving algorithms for high-dimensional statistical learning, e.g. with auto-regressive models. The few approaches that have been proposed are based on batch learning. However, these approaches are potentially computationally expensive and cannot accommodate the nonstationary characteristics of stochastic processes like wind power generation. This paper closes the gap between online and distributed optimisation by presenting two novel approaches that recursively update model parameters while limiting information exchange between wind farm operators and other potential data providers. A simulation study compared the convergence and tracking ability of both approaches. In addition, a case study using a large dataset from 311 wind farms in Denmark confirmed that online distributed approaches generally outperform existing batch approaches while preserving privacy, such that agents do not have to actively share their private data.
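To make the recursive-update idea concrete, here is a generic recursive least-squares estimator with a forgetting factor for auto-regressive coefficients, a standard online scheme assumed for illustration; the paper's distributed, privacy-preserving coordination between operators is not reproduced.

```python
# Minimal sketch: recursive least squares with a forgetting factor for AR coefficients.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_features, forgetting=0.999):
        self.theta = np.zeros(n_features)          # AR coefficient estimates
        self.P = np.eye(n_features) * 1e3          # (scaled) inverse information matrix
        self.lam = forgetting                      # forgetting factor handles nonstationarity

    def update(self, x, y):
        """One online update from regressor vector x (lagged values) and new observation y."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.theta += gain * (y - x @ self.theta)  # correct by the one-step prediction error
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.theta
```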

6.
The innovations representation for a local linear trend can adapt to long-run secular and short-term transitory effects in the data. This is illustrated by the theoretical power spectrum for the model, which may possess considerable power at frequencies that might be associated with cycles of several years' duration. Whilst advantageous for short-term forecasting, the model may be of less use when interest is in the underlying long-run trend in the data. In this paper we propose a generalisation of the innovations representation for a local linear trend that is appropriate for representing short-, medium- and long-run trends in the data.

7.
This paper considers a panel data stochastic frontier model that disentangles unobserved firm effects (firm heterogeneity) from persistent (time-invariant/long-term) and transient (time-varying/short-term) technical inefficiency. The model gives us a four-way error component model, viz., persistent and time-varying inefficiency, random firm effects and noise. We use Bayesian methods of inference to provide robust and efficient methods of estimating inefficiency components in this four-way error component model. Monte Carlo results are provided to validate its performance. We also present results from an empirical application that uses a large panel of US commercial banks.

8.
We model panel data of crime careers of juveniles from a Dutch Judicial Juvenile Institution. The data are decomposed into a systematic and an individual-specific component, of which the systematic component reflects the general time-varying conditions including the criminological climate. Within a model-based analysis, we treat (1) shared effects of each group with the same systematic conditions, (2) strongly non-Gaussian features of the individual time series, (3) unobserved common systematic conditions, (4) changing recidivism probabilities in continuous time and (5) missing observations. We adopt a non-Gaussian multivariate state-space model that deals with all these issues simultaneously. The parameters of the model are estimated by Monte Carlo maximum likelihood methods. This paper illustrates the methods empirically. We compare continuous time trends and standard discrete-time stochastic trend specifications. We find interesting common time variation in the recidivism behaviour of the juveniles during a period of 13 years, while taking account of significant heterogeneity determined by personality characteristics and initial crime records.

9.
This paper reviews a spreadsheet-based forecasting approach which a process industry manufacturer developed and implemented to link annual corporate forecasts with its manufacturing/distribution operations. First, we consider how this forecasting system supports overall production planning and why it must be compatible with corporate forecasts. We then review the results of substantial testing of variations on the Winters three-parameter exponential smoothing model on 28 actual product family time series. In particular, we evaluate whether the use of damping parameters improves forecast accuracy. The paper concludes that a Winters four-parameter model (i.e. the standard Winters three-parameter model augmented by a fourth parameter to damp the trend) provides the most accurate forecasts of the models evaluated. Our application confirms that there are situations where the use of damped trend parameters in short-run exponential-smoothing-based forecasting models is beneficial.
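A damped-trend ("four-parameter") Winters model of this general form can be fitted, for example, with statsmodels' Holt-Winters implementation. The sketch below uses a placeholder monthly series and assumes statsmodels >= 0.12 (where the keyword is damped_trend); it is not the manufacturer's own spreadsheet model.

```python
# Minimal sketch: additive Holt-Winters with a damped trend on a placeholder monthly series.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
y = 100 + 10 * np.sin(2 * np.pi * np.arange(120) / 12) + rng.normal(0, 3, 120)  # placeholder demand

fit = ExponentialSmoothing(
    y,
    trend="add", damped_trend=True,        # the damping parameter is the "fourth" parameter
    seasonal="add", seasonal_periods=12,
).fit()
print(fit.forecast(12))                     # 12-step-ahead forecasts
```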

10.
We propose the construction of copulas through the inversion of nonlinear state space models. These copulas allow for new time series models that have the same serial dependence structure as a state space model, but with an arbitrary marginal distribution, and flexible density forecasts. We examine the time series properties of the copulas, outline serial dependence measures, and estimate the models using likelihood-based methods. Copulas constructed from three example state space models are considered: a stochastic volatility model with an unobserved component, a Markov switching autoregression, and a Gaussian linear unobserved component model. We show that all three inversion copulas with flexible margins improve the fit and density forecasts of quarterly U.S. broad inflation and electricity inflation.

11.
Economic Outlook, 1992, 17(1): 70-71.
Contents include: Some Key Global Adjustment Scenarios and Their Effects on Major Developing Country Regions; Forecasting Inflation from the Term Structure: A Cointegration Approach; An International CAPM for Bonds and Equities; Fiscal and Monetary Policy Under EMU: Credible Inflation Targets or Unpleasant Monetary Arithmetic?; Capital-Skill Complementarity and Relative Employment in West German Manufacturing; Estimating Long-run Relationships from Dynamic Heterogeneous Panels; Measuring and Forecasting Underlying Economic Activity (Discussion Paper No. 18-92); Price and Quantity Responses to Cost and Demand Shocks.

Abstract of Discussion Paper No. 18-92, Measuring and Forecasting Underlying Economic Activity: Recently, interest in the methodology of constructing coincident economic indicators has been revived by the work of Stock and Watson (1988, 1991). They adopt the state space form and Kalman filter framework to construct an optimal estimate of an unobserved component, interpreted as underlying economic activity derived from a set of observed indicator variables. In this paper we suggest a modification to the Stock and Watson approach which allows for cointegration between some of the variables. We also discuss the general relationship between cointegration and the appropriate specification of stochastic trend models. The technique is applied to the UK, where the observed indicator variables are those which make up the CSO coincident indicator, thereby constructing alternative measures of economic activity. Two of the calculated series are forecast using a systems VAR with error correction terms, where the VAR consists of the CSO longer leading indicator component variables plus a term structure variable. The derived forecasts represent an alternative longer leading economic indicator.

12.
Forecasting customer flow is key for retailers in making daily operational decisions, but small retailers often lack the resources to obtain such forecasts. Rather than forecasting stores' total customer flows, this research uses emerging third-party mobile payment data to provide participating stores with a value-added service by forecasting their share of daily customer flows. These mobile payment transactions can then be used to derive retailers' total customer flows indirectly, thereby overcoming the constraints that small retailers face. We propose a daily mobile payments forecasting solution, centered on a third-party mobile payment platform, based on an extension of the recently developed Gradient Boosting Regression Tree (GBRT) method that can generate multi-step forecasts for many stores concurrently. Using empirical forecasting experiments with thousands of time series, we show that GBRT, together with a strategy for multi-period-ahead forecasting, provides more accurate forecasts than established benchmarks. Pooling data from the platform across stores leads to benefits relative to analyzing the data individually, demonstrating the value of this machine learning application.
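A generic example of one multi-period-ahead strategy for gradient-boosted trees is sketched below: a "direct" approach that trains a separate model for each horizon on lagged values of a single store's series. The lag count, horizons, and use of scikit-learn's GradientBoostingRegressor are illustrative assumptions; the paper's own GBRT extension and cross-store pooling are not reproduced.

```python
# Minimal sketch: direct multi-step forecasting with gradient-boosted trees (one model per horizon).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def direct_multistep_gbrt(series: np.ndarray, n_lags: int = 14, horizons: int = 7):
    # rows of X contain n_lags consecutive observations; targets are shifted h steps ahead
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    last_window = series[-n_lags:].reshape(1, -1)
    forecasts = []
    for h in range(1, horizons + 1):
        y_h = series[n_lags + h - 1:]
        model = GradientBoostingRegressor().fit(X[: len(y_h)], y_h)
        forecasts.append(model.predict(last_window)[0])
    return np.array(forecasts)
```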

13.
We model the dynamic volatility and correlation structure of electricity futures of the European Energy Exchange index. We use a new multiplicative dynamic conditional correlation (mDCC) model to separate long-run from short-run components. We allow for smooth changes in the unconditional volatilities and correlations through a multiplicative component that we estimate nonparametrically. For the short-run dynamics, we use a GJR-GARCH model for the conditional variances and augmented DCC models for the conditional correlations. We also introduce exogenous variables to account for congestion and delivery date effects in short-term conditional variances. We find different correlation dynamics for long- and short-term contracts, and the new model achieves higher forecasting performance compared to a standard DCC model.
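As a small illustration of the short-run variance component only, a univariate GJR-GARCH(1,1) can be fitted with the arch package as below; the returns are simulated placeholders, and the multiplicative long-run component and the augmented DCC correlation step are not reproduced.

```python
# Minimal sketch: univariate GJR-GARCH(1,1) on placeholder returns (o=1 adds the asymmetry term).
import numpy as np
from arch import arch_model

returns = np.random.default_rng(4).standard_t(df=6, size=1000)   # placeholder futures returns
res = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")
print(res.params)   # mean, omega, alpha (ARCH), gamma (asymmetry), beta (persistence)
```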

14.
This paper aims to demonstrate a possible aggregation gain in predicting future aggregates under a practical assumption of model misspecification. Empirical analysis of a number of economic time series suggests that the use of the disaggregate model is not always preferred over the aggregate model in predicting future aggregates, in terms of an out-of-sample prediction root-mean-square error criterion. One possible justification of this interesting phenomenon is model misspecification. In particular, if the model fitted to the disaggregate series is misspecified (i.e., not the true data generating mechanism), then the forecast made by a misspecified model is not always the most efficient. This opens up an opportunity for the aggregate model to perform better, and it is of interest to find out when the aggregate model helps. In this paper, we study a framework where the underlying disaggregate series has a periodic structure. We derive and compare the efficiency loss in linear prediction of future aggregates using the adapted disaggregate model and the aggregate model. Some scenarios in which an aggregation gain occurs are identified. Numerical results show that the aggregate model helps over a fairly large region in the parameter space of the periodic model that we studied.

15.
We use a macro-finance model, incorporating macroeconomic and financial factors, to study the term premium in the US bond market. Estimating the model using Bayesian techniques, we find that a single factor explains most of the variation in bond risk premiums. Furthermore, the model-implied risk premiums account for up to 40% of the variability of one- and two-year excess returns. Using the model to decompose yield spreads into an expectations component and a term premium component, we find that, although this decomposition does not seem important for forecasting economic activity, it is crucial for forecasting inflation at most forecasting horizons.

16.
Differencing is a very popular stationary transformation for series with stochastic trends. Moreover, when the differenced series is heteroscedastic, authors commonly model it using an ARMA-GARCH model. The corresponding ARIMA-GARCH model is then used to forecast future values of the original series. However, the heteroscedasticity observed in the stationary transformation should be generated by the transitory and/or the long-run component of the original data. In the former case, the shocks to the variance are transitory and the prediction intervals should converge to homoscedastic intervals with the prediction horizon. We show that, in this case, the prediction intervals constructed from the ARIMA-GARCH models could be inadequate because they never converge to homoscedastic intervals. All of the results are illustrated using simulated and real time series with stochastic levels.
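The common practice the paper examines can be sketched with the arch package as follows: difference the series, fit an AR-GARCH model to the stationary transformation, and build interval forecasts from the conditional variance forecasts. The data, model orders, and 95% Gaussian intervals are illustrative assumptions; translating such intervals back to the level of the series is precisely where the paper argues the standard construction can be inadequate.

```python
# Minimal sketch: AR(1)-GARCH(1,1) on the differenced series, with illustrative 95% intervals.
import numpy as np
from arch import arch_model

y = np.cumsum(np.random.default_rng(2).normal(size=500))   # placeholder series with a stochastic level
dy = np.diff(y)                                             # differencing as the stationary transformation

res = arch_model(dy, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
fc = res.forecast(horizon=20)
mean_fc = fc.mean.values[-1]                                # forecasts of the differences
sd_fc = np.sqrt(fc.variance.values[-1])                     # forecast standard deviations
lower, upper = mean_fc - 1.96 * sd_fc, mean_fc + 1.96 * sd_fc
```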

17.
We introduce a class of semiparametric time series models (SemiParTS) driven by a latent factor process. The proposed SemiParTS class is flexible because, given the latent process, only the conditional mean and variance of the time series are specified. These are the primary features of SemiParTS: (i) no parametric form is assumed for the conditional distribution of the time series given the latent process; (ii) it is suitable for a wide range of data: non-negative, count, bounded, binary, and real-valued time series; (iii) it does not constrain the dispersion parameter to be known. Quasi-likelihood inference is employed to estimate the parameters in the mean function. We derive explicit expressions for the marginal moments and the autocorrelation function of the time series process, so that a method of moments can be used to estimate the dispersion parameter and the parameters related to the latent process. Simulation results that validate the proposed estimation procedure are presented. Forecasting procedures are proposed and evaluated on simulated and real data. Analyses of a series of hospital admissions due to asthma and a total insolation time series illustrate the potential of the proposed models in practical situations.

18.
This paper presents a data-driven approach applied to the long-term prediction of daily time series in the Neural Forecasting Competition. The proposal comprises the use of adaptive fuzzy rule-based systems in a top-down modeling framework: daily samples are aggregated to build weekly time series, and model optimization is performed at the weekly level, thus reducing the forecast horizon from 56 to 8 steps ahead. Two different disaggregation procedures are evaluated: the historical and daily top-down approaches. Data pre-processing and input selection are carried out prior to the model adjustment. The prediction results are validated using multiple time series, as well as rolling-origin evaluations with model re-calibration, and the results are compared with those obtained using daily models, allowing us to analyze the effectiveness of the top-down approach for longer forecast horizons.
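The aggregate-then-disaggregate logic can be illustrated with the sketch below, which aggregates daily data to weekly totals, forecasts at the weekly level (a naive forecast stands in for the fuzzy rule-based model), and disaggregates with historical within-week proportions; the function and its parameters are assumptions for illustration only.

```python
# Minimal sketch: top-down forecasting -- weekly aggregation, weekly forecast, proportional disaggregation.
import numpy as np

def top_down_forecast(daily: np.ndarray, weeks_ahead: int = 8):
    daily = daily[: len(daily) // 7 * 7]                      # trim to whole weeks
    weekly = daily.reshape(-1, 7).sum(axis=1)
    weekly_fcst = np.repeat(weekly[-1], weeks_ahead)          # naive placeholder for the weekly model
    day_shares = daily.reshape(-1, 7).mean(axis=0)
    day_shares = day_shares / day_shares.sum()                # historical within-week profile
    return np.concatenate([w * day_shares for w in weekly_fcst])   # 7 * weeks_ahead daily forecasts
```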

19.
Forecasting the outcome of outbreaks as early and as accurately as possible is crucial for decision-making and policy implementation. A significant challenge faced by forecasters is that not all outbreaks and epidemics turn into pandemics, making the prediction of their severity difficult. At the same time, decisions to enforce lockdowns and other mitigating interventions, weighed against their socioeconomic consequences, are not only hard to make but also highly uncertain. The majority of modeling approaches to outbreaks, epidemics, and pandemics take an epidemiological approach that considers biological and disease processes. In this paper, we accept the limitations of forecasting the long-term trajectory of an outbreak and instead propose a statistical, time series approach to modelling and predicting the short-term behavior of COVID-19. Our model assumes a multiplicative trend, aiming to capture the continuation of the two variables we predict (global confirmed cases and deaths) as well as their uncertainty. We present the timeline of producing and evaluating 10-day-ahead forecasts over a period of four months. Our simple model offers competitive forecast accuracy and estimates of uncertainty that are useful and practically relevant.
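In the same spirit (though not the authors' exact specification), a multiplicative-trend exponential smoothing model can produce 10-day-ahead forecasts as in the sketch below; the cumulative series is a simulated placeholder, not the actual COVID-19 data.

```python
# Minimal sketch: multiplicative-trend exponential smoothing for short-horizon forecasts.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

daily_new = np.random.default_rng(3).poisson(200, size=90)
cum_cases = np.cumsum(daily_new).astype(float)     # placeholder cumulative confirmed cases

fit = ExponentialSmoothing(cum_cases, trend="mul").fit()
print(fit.forecast(10))                            # 10-day-ahead point forecasts
```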

20.
In this paper we consider estimating an approximate factor model in which candidate predictors are subject to sharp spikes such as outliers or jumps. Given that these sharp spikes are assumed to be rare, we formulate the estimation problem as a penalized least squares problem by imposing a norm penalty function on those sharp spikes. Such a formulation allows us to disentangle the sharp spikes from the common factors and estimate them simultaneously. Numerical values of the estimates can be obtained by solving a principal component analysis (PCA) problem and a one-dimensional shrinkage estimation problem iteratively. In addition, it is easy to incorporate methods for selecting the number of common factors in the iterations. We compare our method with PCA by conducting simulation experiments in order to examine their finite-sample performances. We also apply our method to the prediction of important macroeconomic indicators in the U.S., and find that it can deliver performances that are comparable to those of the PCA method.
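The alternating PCA-plus-shrinkage scheme described above resembles the generic sketch below, which iterates a rank-r SVD approximation with elementwise soft-thresholding to separate sparse spikes from the low-rank factor component; the rank, penalty level, and iteration count are illustrative choices, not the paper's.

```python
# Minimal sketch: alternate a rank-r PCA step with a soft-thresholding (shrinkage) step.
import numpy as np

def soft_threshold(X, lam):
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def factor_plus_spikes(X, rank=3, lam=1.0, n_iter=50):
    S = np.zeros_like(X)                       # sparse spike component
    for _ in range(n_iter):
        # PCA step: rank-r approximation of the data with current spikes removed
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # shrinkage step: update the sparse spikes from the residual
        S = soft_threshold(X - L, lam)
    return L, S
```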
