Similar Documents
7 similar documents found.
1.
A single outlier in a regression model can be detected by the effect of its deletion on the residual sum of squares. An equivalent procedure is the simple intervention in which an extra parameter is added for the mean of the observation in question. Similarly, for unobserved components or structural time-series models, the effect of elaborations of the model on inferences can be investigated by the use of interventions involving a single parameter, such as trend or level changes. Because such time-series models contain more than one variance, the effect of the intervention is measured by the change in individual variances. We examine the effect on the estimated parameters of moving various kinds of intervention along the series. The horrendous computational problems involved are overcome by the use of score statistics combined with recent developments in filtering and smoothing. Interpretation of the resulting time-series plots of diagnostics is aided by simulation envelopes. Our procedures, illustrated with four examples, permit keen insights into the fragility of inferences to specific shocks, such as outliers and level breaks. Although the emphasis is mostly on parameter estimation, forecasts are also considered. Possible extensions include seasonal adjustment and detrending of series.
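The score-statistic machinery mentioned in the abstract is what makes scanning an intervention along the series cheap; the sketch below conveys the same idea by brute force, refitting a local level model with a level-shift dummy at each candidate date and recording the likelihood-ratio effect of the shift. The simulated series, the statsmodels local level model, and the LR measure are illustrative assumptions, not the paper's own algorithm.

```python
# A minimal sketch of moving a level-shift intervention along a series,
# assuming a local level model; brute-force refitting stands in for the
# paper's score-statistic shortcut.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(0)
n = 100
y = np.cumsum(rng.normal(0, 0.5, n)) + rng.normal(0, 1.0, n)
y[60:] += 3.0  # plant a level break at t = 60

base_llf = UnobservedComponents(y, level="local level").fit(disp=False).llf

effects = []
for t in range(5, n - 5):
    step = (np.arange(n) >= t).astype(float)  # level-shift dummy at time t
    res = UnobservedComponents(y, level="local level", exog=step).fit(disp=False)
    effects.append(2 * (res.llf - base_llf))  # LR statistic for the shift

print("most likely break at t =", 5 + int(np.argmax(effects)))
```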

2.
Since the 1990s, the Akaike Information Criterion (AIC) and its various modifications/extensions, including BIC, have found wide applicability in econometrics as objective procedures for selecting parsimonious statistical models. The aim of this paper is to argue that these model selection procedures invariably give rise to unreliable inferences, primarily because choosing within a prespecified family of models (a) assumes away the problem of model validation, and (b) ignores the relevant error probabilities. The paper argues for a return to the original statistical model specification problem, as envisaged by Fisher (1922), where the task is understood as one of selecting a statistical model in such a way as to render the particular data a truly typical realization of the stochastic process specified by the model in question. The key to addressing this problem is to replace the trade-off between goodness-of-fit and parsimony with statistical adequacy as the sole criterion for when a fitted model accounts for the regularities in the data.
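The contrast the paper draws can be made concrete with a toy experiment: rank candidate models by AIC, then separately check whether each model's residuals look like white noise. The AR candidates, the simulated data-generating process, and the Ljung-Box residual test below are illustrative stand-ins for a statistical-adequacy check, not the paper's own procedure.

```python
# A minimal sketch contrasting AIC-based selection with a residual-based
# adequacy check, under assumed AR candidates and a simulated AR(2) DGP.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
e = rng.normal(size=300)
y = np.zeros(300)
for t in range(2, 300):  # true DGP: AR(2)
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t]

fits = {p: AutoReg(y, lags=p).fit() for p in (1, 2, 3)}
best = min(fits, key=lambda p: fits[p].aic)
print("AIC picks AR(%d)" % best)

# Adequacy: do the residuals of each candidate look like white noise?
for p, res in fits.items():
    pval = acorr_ljungbox(res.resid, lags=[10]).iloc[0]["lb_pvalue"]
    print("AR(%d): Ljung-Box p = %.3f" % (p, pval))
```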

3.
When a large number of time series are to be forecast on a regular basis, as in large-scale inventory management or production control, the appropriate choice of a forecast model is important, as it has the potential for large cost savings through improved accuracy. A possible solution to this problem is to select one best forecast model for all the series in the dataset. Alternatively, one may develop a rule that will select the best model for each series. Fildes (1989) calls the former an aggregate selection rule and the latter an individual selection rule. In this paper we develop an individual selection rule using discriminant analysis and compare its performance to aggregate selection for the quarterly series of the M-Competition data. A number of forecast accuracy measures are used for the evaluation, and confidence intervals for them are constructed using bootstrapping. The results indicate that the individual selection rule based on discriminant scores is more accurate than any aggregate selection method, and sometimes significantly so.
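An individual selection rule of this kind can be sketched as a classification problem: describe each series by a few features and learn which forecaster tends to win. The two candidate methods (naive vs. single exponential smoothing), the features, and the synthetic panel below are assumptions for illustration; the paper's actual feature set and discriminant setup may differ.

```python
# A minimal sketch of an individual selection rule via discriminant analysis,
# assuming two candidate forecasters and simple hand-picked series features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

def ses_forecast(x, alpha=0.3):
    level = x[0]
    for v in x[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

def features(x):  # volatility of changes and lag-1 autocorrelation
    return [np.std(np.diff(x)), np.corrcoef(x[:-1], x[1:])[0, 1]]

X, labels = [], []
for _ in range(200):  # synthetic stand-in for the M-Competition panel
    trend = rng.uniform(0, 0.3)
    s = np.cumsum(rng.normal(trend, 1.0, 41))
    train, actual = s[:-1], s[-1]
    err_naive = abs(actual - train[-1])
    err_ses = abs(actual - ses_forecast(train))
    X.append(features(train))
    labels.append(int(err_ses < err_naive))  # 1 if SES wins on this series

rule = LinearDiscriminantAnalysis().fit(X, labels)
print("in-sample selection accuracy:", rule.score(X, labels))
```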

4.
This paper evaluates the properties of a joint and sequential estimation procedure for estimating the parameters of single and multiple threshold models. We initially proceed under the assumption that the number of regimes is known a priori, but subsequently relax this assumption via the introduction of a model-selection-based procedure that allows the unknown parameters and their number to be estimated jointly. Theoretical properties of the resulting estimators are derived and their finite sample properties investigated.
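The basic single-threshold step can be illustrated by least-squares estimation over a trimmed grid of candidate thresholds, with an information criterion deciding whether the extra regime is warranted. The SETAR(1) form, the trimming fractions, and the BIC-style penalty below are illustrative assumptions rather than the paper's exact procedure.

```python
# A minimal sketch of least-squares threshold estimation for a two-regime
# SETAR(1) model, with a BIC-style comparison against the linear model
# standing in for the selection of the number of regimes.
import numpy as np

rng = np.random.default_rng(3)
n = 400
y = np.zeros(n)
for t in range(1, n):  # true model: two AR(1) regimes split at 0.0
    phi = 0.8 if y[t - 1] <= 0.0 else -0.5
    y[t] = phi * y[t - 1] + rng.normal()

x, z = y[:-1], y[1:]

def ssr_linear():
    phi = (x @ z) / (x @ x)
    return np.sum((z - phi * x) ** 2)

def ssr_threshold(r):
    s = 0.0
    for m in (x <= r, x > r):  # fit each regime by OLS
        phi = (x[m] @ z[m]) / (x[m] @ x[m])
        s += np.sum((z[m] - phi * x[m]) ** 2)
    return s

grid = np.quantile(x, np.linspace(0.15, 0.85, 71))  # trimmed threshold grid
ssrs = [ssr_threshold(r) for r in grid]
r_hat = grid[int(np.argmin(ssrs))]

m = len(z)  # BIC-style check: does the extra regime pay for its parameters?
bic1 = m * np.log(ssr_linear() / m) + 1 * np.log(m)
bic2 = m * np.log(min(ssrs) / m) + 3 * np.log(m)  # 2 slopes + 1 threshold
print("threshold estimate %.2f; 2 regimes preferred: %s" % (r_hat, bic2 < bic1))
```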

5.
Traffic congestion has significant adverse implications for the environment and the economy. Many state and local transportation agencies have implemented traffic congestion management practices to alleviate the negative implications of urban traffic. A major drawback of these practices is that they do not account for socio-demographic and economic factors, which have a significant impact on traffic congestion. Understanding the influence of these factors is crucial because it can inform system performance management and target setting. Only a few studies have analyzed the relationship between traffic conditions (e.g., traffic demand and vehicular traveling speed) and a limited number of socio-economic factors. Moreover, most existing models ignore the temporal and spatial autocorrelations of traffic congestion, which may significantly limit their reliability and effectiveness. This study identifies the most relevant external factors that affect traffic congestion performance measures. To conduct the research, we used three urban congestion performance measures collected from 51 metropolitan areas across the U.S. over a four-year period, 2013–2016: travel time index, planning time index, and congested hours. We used multivariate time series models to account for the complex inter-relationships among the performance measures and socioeconomic factors and to identify the most influential factors affecting system performance. We then developed predictive models to estimate the traffic congestion measures using these factors. The modeling results show that the factors influencing the congestion measures are monthly average daily traffic (MADT), the number of employed persons, the rental vacancy rate, building permits, the fuel price index, and the Economic Conditions Index (ECI). The prediction models indicate that the effects of these factors are statistically significant and can be used to accurately forecast future trends in the three performance measures.
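A multivariate time-series model of the kind described can be sketched with a vector autoregression linking a congestion measure to external factors. The synthetic monthly data, the specific variables, and the fixed lag order below are illustrative assumptions; the paper's model specification and data are not reproduced here.

```python
# A minimal sketch of a VAR linking a congestion measure to external factors,
# assuming simulated monthly data over four years.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
n = 48  # four years of monthly observations
fuel = np.cumsum(rng.normal(0, 1, n))
madt = 100 + 0.5 * fuel + rng.normal(0, 1, n)
tti = 1.2 + 0.01 * madt + rng.normal(0, 0.05, n)  # travel time index

df = pd.DataFrame({"travel_time_index": tti,
                   "MADT": madt,
                   "fuel_price_index": fuel})

res = VAR(df).fit(2)  # fixed lag order of 2 for illustration
print(res.summary())
print(res.forecast(df.values[-res.k_ar:], steps=6))  # 6-month-ahead forecast
```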

6.
Least Squares Support Vector Machines (LS-SVM) are the state of the art in kernel methods for regression, and these models have been successfully applied to time series modelling and prediction. A critical issue for the performance of these models is the choice of the kernel parameters and the hyperparameters that define the function to be minimized. In this paper, a heuristic method is presented and evaluated for setting both the σ parameter of the Gaussian kernel and the regularization hyperparameter, based on information extracted from the time series to be modelled.
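The flavor of such a heuristic can be shown on a time-delay embedding: pick σ from the spread of the inputs and the regularization constant from an estimate of the noise level, then solve the standard LS-SVM linear system. The median-distance and differencing heuristics below are common rules of thumb assumed for illustration; they are not necessarily the paper's method.

```python
# A minimal sketch of LS-SVM regression on a time-delay embedding, with
# heuristic hyperparameters: sigma from the median pairwise input distance,
# gamma from a crude noise-level estimate.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)
t = np.arange(300)
series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.normal(size=300)

d = 4  # embedding dimension
X = np.array([series[i:i + d] for i in range(len(series) - d)])
y = series[d:]

sigma = np.median(pdist(X))                    # heuristic kernel width
noise = np.std(np.diff(series)) / np.sqrt(2)   # crude noise-level estimate
gamma = 1.0 / noise**2                         # heuristic regularization

K = np.exp(-squareform(pdist(X, "sqeuclidean")) / (2 * sigma**2))
n = len(y)  # LS-SVM dual: solve the KKT system for bias b and weights alpha
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

pred = K @ alpha + b  # in-sample one-step-ahead predictions
print("train RMSE: %.4f" % np.sqrt(np.mean((pred - y) ** 2)))
```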

7.
With the rapid growth of carbon trading, the development of carbon financial derivatives such as carbon options has become inevitable. This paper establishes a model based on GARCH and fractional Brownian motion (FBM), aiming to provide a reference for China's upcoming carbon option trading through research on carbon option price forecasting. The fractal character of carbon option prices indicates that it is reasonable to use FBM to predict them, while the GARCH model compensates for FBM's assumption of constant volatility. The daily closing prices of EUA option contracts on the European Energy Exchange are selected as the sample for price prediction: the GARCH model is used to estimate the return volatility, and the FBM is then used to compute the forecast price for the next 60 days. The results show that the predicted prices fit the actual prices well. The paper further compares the price predictions of this model with those of three other models using line graphs and error measures such as MAPE, MAE, and MSE, confirming that the predictions of the proposed model are the closest to the actual prices.
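One plausible reading of the GARCH-plus-FBM idea is sketched below: fit a GARCH(1,1) to returns, forecast volatility 60 steps ahead, and drive an FBM path with that volatility. The simulated returns, the assumed Hurst exponent H = 0.6, the illustrative return scaling, and the way the two components are coupled are all assumptions, not the paper's exact construction.

```python
# A minimal sketch of a GARCH(1,1) volatility forecast driving an FBM price
# path, under assumed data, Hurst exponent, and component coupling.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
returns = 0.5 * rng.standard_t(df=6, size=500)  # stand-in for EUA returns (%)
last_price, horizon, H = 10.0, 60, 0.6          # H = 0.6 assumed, not estimated

fit = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
var_fc = fit.forecast(horizon=horizon).variance.values[-1]  # 60-day variances

# FBM on a unit grid via the Cholesky factor of its covariance function.
s = np.arange(1, horizon + 1, dtype=float)
cov = 0.5 * (s[:, None] ** (2 * H) + s[None, :] ** (2 * H)
             - np.abs(s[:, None] - s[None, :]) ** (2 * H))
fbm = np.linalg.cholesky(cov) @ rng.standard_normal(horizon)

# Scale FBM increments by the GARCH volatility path to get log-price steps;
# the /100 converts percent volatility to decimal (illustrative scaling).
incr = np.diff(np.concatenate([[0.0], fbm])) * np.sqrt(var_fc) / 100
forecast_path = last_price * np.exp(np.cumsum(incr))
print(forecast_path[:5])
```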
