Similar Articles
20 similar articles found (search time: 187 ms)
1.
This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for one-step-ahead forecasts. Under continuous breaks, our approach largely recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for optimal weights in models with a single regressor, and asymptotically valid weights for models with more than one regressor. It is shown that in these cases the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust optimal weights is proposed. The relative performance of our proposed approach is investigated using Monte Carlo experiments and an empirical application to forecasting real GDP using the yield curve across nine industrial economies.
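The exponential smoothing weights that the abstract says are recovered under continuous breaks can be sketched in a few lines. This is an illustrative toy, not the paper's derivation: `lam` is a hypothetical discount parameter, and the weights decline geometrically with the age of each observation before being normalized to sum to one.

```python
def exp_smoothing_weights(T, lam):
    """Weights proportional to lam**(age of observation), normalized to sum to 1.

    t = 0 is the oldest observation; the most recent one gets the largest weight.
    """
    raw = [lam ** (T - 1 - t) for t in range(T)]
    s = sum(raw)
    return [w / s for w in raw]

def weighted_forecast(y, lam):
    """One-step-ahead forecast as a weighted average of past observations."""
    w = exp_smoothing_weights(len(y), lam)
    return sum(wi * yi for wi, yi in zip(w, y))

print(weighted_forecast([1.0, 2.0, 4.0, 3.0], lam=0.5))
```

With `lam = 0.5` the most recent observation carries roughly half of the total weight, which is the familiar exponential-smoothing behaviour the paper connects to optimal weighting.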

2.
This paper introduces a dynamic VaR model based on GARCH and a non-parametric method, the L_VaR model, which measures the combined magnitude of market risk and liquidity risk. Using high-frequency transaction data sampled from China's interbank overnight lending market, analyzed with SAS, we find that a GARCH(1,1) model fits the volatility of the overnight lending rate well, while the non-parametric (Bootstrap) method accurately estimates the volatility of liquidity in the lending market. The empirical results show that the dynamic VaR model delivers satisfactory short-term forecasts of the combined market and liquidity risk.
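The market-risk half of the model rests on the standard GARCH(1,1) conditional-variance recursion. The sketch below shows that recursion and a normal-approximation VaR; it is a generic illustration, not the paper's L_VaR model (the Bootstrap liquidity component is omitted), and all parameter values are hypothetical.

```python
import math

def garch11_variance(returns, omega, alpha, beta, sigma2_0):
    """GARCH(1,1) variance path: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    sigma2 = [sigma2_0]
    for r in returns:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

def normal_var(sigma2_next, z=1.645):
    """One-step VaR under a normal innovation assumption (z = 1.645 for 95%)."""
    return z * math.sqrt(sigma2_next)

path = garch11_variance([0.02, -0.01, 0.03], omega=1e-5, alpha=0.1, beta=0.85, sigma2_0=1e-4)
print(normal_var(path[-1]))
```

In practice the parameters would be fitted by maximum likelihood; here they are fixed only to show how the variance forecast feeds the VaR.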

3.
In this research, we propose a disaster response model combining preparedness and responsiveness strategies. The selective response depends on the level of accuracy that our forecasting models can achieve. In order to decide the right geographical space and time window of response, forecasts are prepared and assessed through a spatial–temporal aggregation framework, until we find the optimum level of aggregation. The research considers major earthquake data for the period 1985–2014. Building on the resulting forecasts, we develop a disaster response model. The model is dynamic in nature, as it is updated every time a new event is added to the database. Any forecasting model can be optimized through the proposed spatial–temporal forecasting framework, and as such our results can be easily generalized to other forecasting methods and other disaster response contexts.

4.
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties and we prove its consistency. A Monte Carlo study explores the finite sample performance of this procedure and evaluates the forecasting accuracy of models selected by this procedure. Two empirical applications confirm the usefulness of the model selection procedure proposed here for forecasting.
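Criterion-based selection of the kind the abstract describes reduces to computing a penalized fit measure per candidate and taking the minimum. The sketch below uses plain BIC over hypothetical candidates; the paper's hybrid two-step procedure with data-dependent penalties is more elaborate, so treat this as a generic baseline only.

```python
import math

def bic(rss, n, k):
    """Bayesian information criterion: n*log(RSS/n) + k*log(n)."""
    return n * math.log(rss / n) + k * math.log(n)

def select_by_bic(candidates, n):
    """candidates: list of (name, rss, n_params); return the name with the smallest BIC."""
    return min(candidates, key=lambda c: bic(c[1], n, c[2]))[0]

# A richer model barely improves the fit, so the penalty favors the small model.
print(select_by_bic([("small", 10.0, 1), ("big", 9.9, 5)], n=100))
```

A data-dependent penalty would replace the fixed `log(n)` term with a quantity estimated from the sample, which is exactly where the paper's hybrid criterion departs from this baseline.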

5.
A new semi-parametric expected shortfall (ES) estimation and forecasting framework is proposed. The proposed approach is based on a two-step estimation procedure. The first step involves the estimation of value at risk (VaR) at different quantile levels through a set of quantile time series regressions. Then, the ES is computed as a weighted average of the estimated quantiles. The quantile weighting structure is parsimoniously parameterized by means of a beta weight function whose coefficients are optimized by minimizing a joint VaR and ES loss function of the Fissler–Ziegel class. The properties of the proposed approach are first evaluated with an extensive simulation study using two data generating processes. Two forecasting studies with different out-of-sample sizes are then conducted, one of which focuses on the 2008 Global Financial Crisis period. The proposed models are applied to seven stock market indices, and their forecasting performances are compared to those of a range of parametric, non-parametric, and semi-parametric models, including GARCH, conditional autoregressive expectile (CARE), joint VaR and ES quantile regression models, and a simple average of quantiles. The results of the forecasting experiments provide clear evidence in support of the proposed models.
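The second step of the procedure, averaging estimated quantiles under a beta weight function, can be sketched directly. This is an illustration of the weighting idea only: the beta-density weights are normalized over a hypothetical discrete grid of quantile levels, and the optimization of the beta coefficients against the Fissler–Ziegel loss is not shown.

```python
import math

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x in (0, 1)."""
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * x ** (a - 1) * (1 - x) ** (b - 1)

def es_from_quantiles(var_estimates, levels, a, b):
    """ES as a weighted average of VaR estimates at several quantile levels,
    with weights from a beta density normalized over the chosen levels."""
    w = [beta_pdf(p, a, b) for p in levels]
    s = sum(w)
    return sum(wi * v for wi, v in zip(w, var_estimates)) / s

# With a = b = 1 the beta density is flat, so ES is the simple average of quantiles,
# the benchmark the paper compares against.
print(es_from_quantiles([1.0, 2.0, 3.0], [0.01, 0.025, 0.05], a=1.0, b=1.0))
```

Skewing `a` and `b` tilts the weight toward deeper tail quantiles, which is the flexibility the parsimonious beta parameterization buys.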

6.
Providing forecasts for ultra-long time series plays a vital role in various activities, such as investment decisions, industrial production arrangements, and farm management. This paper develops a novel distributed forecasting framework to tackle the challenges of forecasting ultra-long time series using the industry-standard MapReduce framework. The proposed model combination approach retains the local time dependency. It utilizes a straightforward splitting across samples to facilitate distributed forecasting by combining the local estimators of time series models delivered from worker nodes and minimizing a global loss function. Instead of unrealistically assuming the data generating process (DGP) of an ultra-long time series stays invariant, we only make assumptions on the DGP of subseries spanning shorter time periods. We investigate the performance of the proposed approach with AutoRegressive Integrated Moving Average (ARIMA) models using a real data application as well as numerical simulations. Our approach improves forecasting accuracy and computational efficiency in point forecasts and prediction intervals, especially for longer forecast horizons, compared to directly fitting the whole data with ARIMA models. Moreover, we explore some potential factors that may affect the forecasting performance of our approach.
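The split–estimate–combine pattern can be illustrated with a deliberately tiny model: an AR(1) slope estimated on each block and then combined by a block-length-weighted average. This is a sketch of the distributed idea only; the paper works with full ARIMA models and a global loss function, and the splitting/weighting choices here are illustrative assumptions.

```python
def local_ar1(y):
    """Least-squares AR(1) slope on one subseries (no intercept, for brevity)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def distributed_ar1(y, n_blocks):
    """Split across samples, estimate locally on each block (the 'map' step),
    then combine the local estimators by a length-weighted average (the 'reduce' step)."""
    k = len(y) // n_blocks
    blocks = [y[i * k:(i + 1) * k] for i in range(n_blocks)]
    ests = [local_ar1(b) for b in blocks]
    weights = [len(b) for b in blocks]
    return sum(w * e for w, e in zip(weights, ests)) / sum(weights)

y = [1.0, 0.5, 0.25, 0.125] * 2   # toy series, roughly AR(1) with slope 0.5
print(distributed_ar1(y, n_blocks=2))
```

Because each block is estimated independently, the map step parallelizes trivially across worker nodes, which is the source of the computational gains the abstract reports.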

7.
This paper develops a maximum likelihood (ML) method to estimate partially observed diffusion models based on data sampled at discrete times. The method combines two techniques recently proposed in the literature in two separate steps. In the first step, the closed form approach of Aït-Sahalia (2008) is used to obtain a highly accurate approximation to the joint transition probability density of the latent and the observed states. In the second step, the efficient importance sampling technique of Richard and Zhang (2007) is used to integrate out the latent states, thereby yielding the likelihood function. Using both simulated and real data, we show that the proposed ML method works better than alternative methods. The new method does not require the underlying diffusion to have an affine structure and does not involve infill simulations. Therefore, the method has a wide range of applicability and its computational cost is moderate.

8.
The well-developed ETS (ExponenTial Smoothing, or Error, Trend, Seasonality) method incorporates a family of exponential smoothing models in state space representation and is widely used for automatic forecasting. The existing ETS method uses information criteria for model selection by choosing an optimal model with the smallest information criterion among all models fitted to a given time series. The ETS method under such a model selection scheme suffers from computational complexity when applied to large-scale time series data. To tackle this issue, we propose an efficient approach to ETS model selection by training classifiers on simulated data to predict appropriate model component forms for a given time series. We provide a simulation study to show the model selection ability of the proposed approach on simulated data. We evaluate our approach on the widely used M4 forecasting competition dataset in terms of both point forecasts and prediction intervals. To demonstrate the practical value of our method, we showcase the performance improvements from our approach on a monthly hospital dataset.

9.
Copulas provide an attractive approach to the construction of multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of forecasting the tail-dependences explicitly. Most of the available approaches are only able to estimate tail-dependences and correlations via nuisance parameters, and cannot be used for either interpretation or forecasting. We propose a general Bayesian approach for modeling and forecasting tail-dependences and correlations as explicit functions of covariates, with the aim of improving the copula forecasting performance. The proposed covariate-dependent copula model also allows for Bayesian variable selection from among the covariates of the marginal models, as well as the copula density. The copulas that we study include the Joe-Clayton copula, the Clayton copula, the Gumbel copula and the Student’s t-copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the S&P 100 and S&P 600 stock indices. The forecasting performance of the proposed approach is compared with those of other modeling strategies based on log predictive scores. A value-at-risk evaluation is also performed for the model comparisons.  
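For two of the copula families named above, the tail-dependence coefficients have closed forms in the copula parameter, which is what makes modeling them as explicit functions of covariates feasible. The sketch below evaluates those standard formulas; it illustrates the quantities being forecast, not the paper's Bayesian covariate-dependent model.

```python
def clayton_lower_tail(theta):
    """Lower tail-dependence coefficient of the Clayton copula: 2**(-1/theta), theta > 0."""
    return 2.0 ** (-1.0 / theta)

def gumbel_upper_tail(theta):
    """Upper tail-dependence coefficient of the Gumbel copula: 2 - 2**(1/theta), theta >= 1."""
    return 2.0 - 2.0 ** (1.0 / theta)

print(clayton_lower_tail(1.0))   # moderate lower-tail dependence
print(gumbel_upper_tail(2.0))    # upper-tail dependence under stronger coupling
```

In a covariate-dependent specification, `theta` would itself be a function of covariates (e.g. through a link function), so these coefficients vary over time and can be forecast directly.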

10.
Quantile regression for dynamic panel data with fixed effects
This paper studies a quantile regression dynamic panel model with fixed effects. Panel data fixed effects estimators are typically biased in the presence of lagged dependent variables as regressors. To reduce the dynamic bias, we suggest the use of the instrumental variables quantile regression method of Chernozhukov and Hansen (2006) along with lagged regressors as instruments. In addition, we describe how to employ the estimated models for prediction. Monte Carlo simulations show evidence that the instrumental variables approach sharply reduces the dynamic bias, and the empirical levels for prediction intervals are very close to nominal levels. Finally, we illustrate the procedures with an application to forecasting output growth rates for 18 OECD countries.
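Every quantile regression, including the instrumental-variables variant used here, minimizes the asymmetric check (pinball) loss. The sketch below implements that loss function; it illustrates the objective only, not the Chernozhukov–Hansen IV estimator itself.

```python
def pinball_loss(y, yhat, tau):
    """Check loss for quantile level tau: tau*u if u >= 0, else (tau-1)*u, with u = y - yhat.

    Under-prediction is penalized tau/(1-tau) times as heavily as over-prediction,
    which is why minimizing it recovers the tau-th conditional quantile.
    """
    u = y - yhat
    return tau * u if u >= 0 else (tau - 1.0) * u

print(pinball_loss(2.0, 1.0, 0.5))   # symmetric at the median
print(pinball_loss(1.0, 2.0, 0.9))   # over-prediction is cheap at a high quantile
```

Averaging this loss over a panel and minimizing it in the model parameters (with instruments replacing the endogenous lagged regressors) is the estimation step the abstract summarizes.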

11.
We present a new approach to trend/cycle decomposition of time series that follow regime-switching processes. The proposed approach, which we label the “regime-dependent steady-state” (RDSS) decomposition, is motivated as the appropriate generalization of the Beveridge and Nelson decomposition [Beveridge, S., Nelson, C.R., 1981. A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle. Journal of Monetary Economics 7, 151–174] to the setting where the reduced-form dynamics of a given series can be captured by a regime-switching forecasting model. For processes in which the underlying trend component follows a random walk with possibly regime-switching drift, the RDSS decomposition is optimal in a minimum mean-squared-error sense and is more broadly applicable than directly employing an Unobserved Components model.
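The single-regime Beveridge–Nelson decomposition that RDSS generalizes has a closed form in the simplest case. If the first difference follows an AR(1) with mean `mu` and coefficient `phi`, the BN trend is `tau_t = y_t + (phi/(1-phi))*(Δy_t - mu)`. The sketch below computes that textbook case; the regime-switching RDSS extension is not shown, and the parameters are assumed known rather than estimated.

```python
def bn_trend_ar1(y, phi, mu):
    """Beveridge-Nelson trend when Δy_t - mu = phi*(Δy_{t-1} - mu) + e_t.

    tau_t = y_t + phi/(1-phi) * (Δy_t - mu); returned for t = 1..len(y)-1,
    since the first difference is undefined at t = 0.
    """
    coef = phi / (1.0 - phi)
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    return [y[t + 1] + coef * (d - mu) for t, d in enumerate(dy)]

# After a large positive increment, part of the rise is judged permanent trend.
print(bn_trend_ar1([0.0, 1.0, 1.2], phi=0.5, mu=0.0))
```

RDSS replaces the single long-run mean `mu` with a regime-dependent steady state, so the trend extraction adapts when the forecasting model switches regimes.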

12.
There are two potential directions of forecast combination: combining for adaptation and combining for improvement. The former direction targets the performance of the best forecaster, while the latter attempts to combine forecasts to improve on the best forecaster. It is often useful to infer which goal is more appropriate so that a suitable combination method may be used. This paper proposes an AI-AFTER approach that can not only determine the appropriate goal of forecast combination but also intelligently combine the forecasts to automatically achieve the proper goal. As a result of this approach, the combined forecasts from AI-AFTER perform well universally in both adaptation and improvement scenarios. The proposed forecasting approach is implemented in our R package AIafter, which is available at https://github.com/weiqian1/AIafter.
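Combining for adaptation is typically done with performance-weighted averaging: each forecaster's weight shrinks exponentially in its accumulated past error, so the combination tracks whichever forecaster has been best. The sketch below is a generic AFTER-style weighting toy, not the AI-AFTER algorithm itself, and the squared-error scoring is an illustrative assumption.

```python
import math

def after_weights(past_errors):
    """Weight each forecaster by exp(-sum of its squared past errors), normalized."""
    scores = [math.exp(-sum(e * e for e in errs)) for errs in past_errors]
    s = sum(scores)
    return [sc / s for sc in scores]

def combine(forecasts, weights):
    """Weighted-average combined forecast."""
    return sum(w * f for w, f in zip(weights, forecasts))

w = after_weights([[0.1, 0.0], [2.0, 1.5]])   # forecaster 1 has been far more accurate
print(combine([10.0, 14.0], w))               # combination stays close to forecaster 1
```

Combining for improvement, by contrast, may assign weights that beat even the best individual forecaster; inferring which regime applies is the question AI-AFTER automates.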

13.
Recent literature on panel data emphasizes the importance of accounting for time-varying unobservable individual effects, which may stem from either omitted individual characteristics or macro-level shocks that affect each individual unit differently. In this paper, we propose a simple specification test of the null hypothesis that the individual effects are time-invariant against the alternative that they are time-varying. Our test is an application of the testing procedure of Hausman (1978) and can be used for any generalized linear model for panel data that admits a sufficient statistic for the individual effect. This is a wide class of models which includes the Gaussian linear model and a variety of nonlinear models typically employed for discrete or categorical outcomes. The basic idea of the test is to compare two alternative estimators of the model parameters based on two different formulations of the conditional maximum likelihood method. Our approach does not require assumptions on the distribution of unobserved heterogeneity, nor does it require the latter to be independent of the regressors in the model. We investigate the finite sample properties of the test through a set of Monte Carlo experiments. Our results show that the test performs well, with small size distortions and good power properties. We use a health economics example based on data from the Health and Retirement Study to illustrate the proposed test.
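The mechanics of a Hausman-type comparison are simplest in the scalar case: contrast an estimator that is efficient under the null with one that stays consistent under the alternative, and scale the squared difference by the difference of their variances. The sketch below is the textbook scalar statistic, not the paper's conditional-ML construction; the estimator names are generic placeholders.

```python
def hausman_scalar(b_eff, v_eff, b_cons, v_cons):
    """Scalar Hausman statistic: (b_cons - b_eff)^2 / (v_cons - v_eff).

    b_eff/v_eff: estimator efficient under H0; b_cons/v_cons: estimator consistent
    under both H0 and H1. Under H0 the statistic is asymptotically chi-square(1).
    """
    return (b_cons - b_eff) ** 2 / (v_cons - v_eff)

# A small gap between the two estimates gives a small statistic: no evidence
# against time-invariant effects in this toy comparison.
print(hausman_scalar(b_eff=1.0, v_eff=0.5, b_cons=1.1, v_cons=1.5))
```

In the paper, both estimators come from different formulations of the conditional likelihood, so the same contrast works without distributional assumptions on the heterogeneity.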

14.
Cyberattacks in power systems that alter the input data of a load forecasting model have serious, potentially devastating consequences. Existing cyberattack-resilient work focuses mainly on enhancing attack detection. Although some outliers can be easily identified, more carefully designed attacks can escape detection and impact load forecasting. Here, a cyberattack-resilient load forecasting approach based on an adaptive robust regression method is proposed, where the observations are trimmed based on their residuals and the proportion of the trim is adaptively determined by an estimation of the contaminated data proportion. An extensive comparison study shows that the proposed method outperforms the standard robust regression in various settings.
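Residual-based trimming works in two passes: fit, rank observations by absolute residual, discard the worst fraction, and refit on the survivors. The sketch below shows that loop for a simple least-squares line with a fixed trim fraction; in the paper the trim proportion is chosen adaptively from an estimate of the contamination rate, which is not reproduced here.

```python
def ols_fit(x, y):
    """Simple least-squares line: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def trimmed_fit(x, y, trim):
    """Fit, rank by absolute residual, drop the worst `trim` fraction, refit."""
    a, b = ols_fit(x, y)
    resid = [abs(yi - (a + b * xi)) for xi, yi in zip(x, y)]
    keep = sorted(range(len(x)), key=lambda i: resid[i])[: int(len(x) * (1 - trim))]
    return ols_fit([x[i] for i in keep], [y[i] for i in keep])

# One corrupted observation (x = 2) distorts the plain fit; trimming removes it.
print(trimmed_fit([0.0, 1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 10.0, 3.0, 4.0], trim=0.2))
```

Stealthy attacks that inject only moderate distortions motivate the adaptive step: trimming too little leaves contamination in, trimming too much discards clean data.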

15.
Retailers supply a wide range of stock keeping units (SKUs), which may differ, for example, in terms of demand quantity, demand frequency, demand regularity, and demand variation. Given this diversity in demand patterns, it is unlikely that any single model for demand forecasting can yield the highest forecasting accuracy across all SKUs. To save costs through improved forecasting, there is thus a need to match any given demand pattern to its most appropriate prediction model. To this end, we propose an automated model selection framework for retail demand forecasting. Specifically, we consider model selection as a classification problem, where classes correspond to the different models available for forecasting. We first build labeled training data based on the models’ performances in previous demand periods with similar demand characteristics. For future data, we then automatically select the most promising model via classification based on the labeled training data. The performance is measured by economic profitability, taking into account asymmetric shortage and inventory costs. In an exploratory case study using data from an e-grocery retailer, we compare our approach to established benchmarks. We find promising results, but also that no single approach clearly outperforms its competitors, underlining the need for case-specific solutions.
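The asymmetric economic loss used to label and score models can be sketched directly: shortages and overstocks are penalized at different unit rates, so the "best" model for a SKU depends on which side of demand its errors fall. The function below is an illustrative newsvendor-style cost, with hypothetical unit costs, not the retailer's actual profitability measure.

```python
def inventory_cost(demand, order, shortage_cost, holding_cost):
    """Asymmetric economic loss: unmet demand costs shortage_cost per unit,
    excess stock costs holding_cost per unit."""
    if demand > order:
        return shortage_cost * (demand - order)
    return holding_cost * (order - demand)

# Under-forecasting by 3 units is twice as costly per unit as over-forecasting,
# so a model whose errors skew toward overstock would be preferred for this SKU.
print(inventory_cost(demand=10.0, order=7.0, shortage_cost=2.0, holding_cost=1.0))
print(inventory_cost(demand=5.0, order=7.0, shortage_cost=2.0, holding_cost=1.0))
```

Summing this cost over past periods per candidate model yields the class labels the framework trains its classifier on.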

16.
This paper is concerned with developing a semiparametric panel model to explain the trend in UK temperatures and other weather outcomes over the last century. We work with the monthly averaged maximum and minimum temperatures observed at the twenty-six Meteorological Office stations. The data form an unbalanced panel. We allow the trend to evolve in a nonparametric way so that we obtain a fuller picture of the evolution of common temperature in the medium timescale. Profile likelihood estimators (PLE) are proposed and their statistical properties are studied. The proposed PLE has improved asymptotic properties compared with sequential two-step estimators. Finally, forecasting based on the proposed model is studied.

17.
The realized volatility forecasting of energy sector stocks facilitates the establishment of corresponding risk warning mechanisms and investor decisions. In this paper, we collected two different energy sector indices and used different methods, namely principal component analysis (PCA) and sparse principal component analysis (SPCA), to extract features, and combined LSTM and GRU to construct 12 different models. The results show that the SPCA-LSTM model we constructed achieves the best performance in forecasting the realized volatility of the energy indices, and SPCA yields better forecasting results than PCA in the feature extraction stage. The results of the robustness test indicate that our results are robust.

18.
ARCH and GARCH models are widely used to model financial market volatilities in risk management applications. Considering a GARCH model with heavy-tailed innovations, we characterize the limiting distribution of an estimator of the conditional value-at-risk (VaR), which corresponds to the extremal quantile of the conditional distribution of the GARCH process. We propose two methods, the normal approximation method and the data tilting method, for constructing confidence intervals for the conditional VaR estimator and assess their accuracies by simulation studies. Finally, we apply the proposed approach to an energy market data set.

19.
DSGE models are useful tools for evaluating the impact of policy changes, but their use for (short-term) forecasting is still in its infancy. Besides theory-based restrictions, the timeliness of data is an important issue. Since DSGE models are based on quarterly data, they suffer from the publication lag of quarterly national accounts. In this paper we present a framework for the short-term forecasting of GDP based on a medium-scale DSGE model for a small open economy within a currency area. We utilize the information available in monthly indicators based on the approach proposed by Giannone et al. (2009). Using Austrian data, we find that the forecasting performance of the DSGE model can be improved considerably by incorporating monthly indicators, while still maintaining the story-telling capability of the model.

20.
Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and the composition of the data affect the factor estimates. In this paper, we ask whether using more series to extract the factors can make the resulting factors less useful for forecasting; the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is a dominated factor in a larger dataset. In a real time forecasting exercise, we find that factors extracted from as few as 40 pre-screened series often yield satisfactory or even better results than using all 147 series. Weighting the data by their properties when constructing the factors also leads to improved forecasts. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have stronger loadings on some groups of series than others. It thus allows us to better understand the properties of the principal components estimator in empirical applications.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号