Similar Documents
Found 20 similar documents.
1.
To implement mean variance analysis one needs a technique for forecasting correlation coefficients. In this article we investigate the ability of several techniques to forecast correlation coefficients between securities. We find that separately forecasting the average level of pair-wise correlations and the individual pair-wise deviations from that average improves forecasting accuracy. Furthermore, forming homogeneous groups of firms on the basis of industry membership or firm attributes (e.g. size) improves forecast accuracy. Accuracy is evaluated in two ways: first, in terms of the error in estimating future correlation coefficients; second, in terms of the characteristics of portfolios formed on the basis of each forecasting technique. The ranking of forecasting techniques is robust across both methods of evaluation, and the better techniques outperform prior suggestions in the financial economics literature.
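The "overall mean" idea above — forecast every pairwise correlation with the average of all historical pairwise correlations — can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's code; the function name and the random test data are ours.

```python
import numpy as np

def mean_correlation_forecast(returns):
    """Forecast every pairwise correlation with the average of all
    historical pairwise correlations (the 'overall mean' technique)."""
    corr = np.corrcoef(returns, rowvar=False)    # sample correlation matrix
    n = corr.shape[0]
    off_diag = corr[np.triu_indices(n, k=1)]     # each unique pair once
    avg = off_diag.mean()
    forecast = np.full((n, n), avg)
    np.fill_diagonal(forecast, 1.0)
    return forecast

rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 4))          # 250 days, 4 securities
f = mean_correlation_forecast(returns)
```

Grouping refinements (by industry or size) would replace the single average with one average per homogeneous block of the matrix.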

2.
A reliable crude oil price forecast is important for market pricing. Despite the widespread use of ensemble empirical mode decomposition (EEMD) in financial time series forecasting, a one-time decomposition of the entire time series lets the in-sample data be contaminated by the out-of-sample data, so forecasting accuracy is overstated. This study incorporates a rolling window into two prevalent EEMD-based modeling paradigms, decomposition-ensemble and denoising, to ensure that only the in-sample time series is processed by EEMD and used for model training. Given the time-consuming process of stepwise preprocessing and model fitting, two non-iterative machine learning algorithms, the random vector functional link (RVFL) neural network and the extreme learning machine (ELM), are used as predictors. We thus develop rolling decomposition-ensemble and rolling denoising paradigms, respectively. Contrary to the majority of prior studies, empirical results based on monthly spot price series for the Brent and West Texas Intermediate (WTI) markets indicate that EEMD plays a weak role in improving crude oil price forecasts when only the in-sample set is preprocessed. This is compatible with the weak form of the efficient market hypothesis (EMH). Nevertheless, the proposed rolling EEMD-denoising model has an advantage over the other models for long-term forecasting.
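The key leakage-avoidance point — preprocess only the data strictly before each forecast origin — can be illustrated with a rolling-origin loop. Here a trailing moving average stands in for EEMD denoising and a persistence rule stands in for RVFL/ELM, so this is a structural sketch only; names and parameters are illustrative.

```python
import numpy as np

def rolling_denoising_forecast(y, window=60, k=3):
    """Rolling one-step-ahead forecasts in which only the in-sample
    window is preprocessed, so no out-of-sample information leaks in.
    A trailing moving average stands in for the EEMD denoising step."""
    preds = []
    for t in range(window, len(y)):
        in_sample = y[t - window:t]                        # strictly past data
        smooth = np.convolve(in_sample, np.ones(k) / k, mode='valid')
        preds.append(smooth[-1])                           # persistence forecast
    return np.array(preds)

rng = np.random.default_rng(1)
price = 50.0 + np.cumsum(rng.standard_normal(120))
preds = rolling_denoising_forecast(price)
```

A one-time decomposition of all 120 points, by contrast, would let observations after `t` shape the components used to predict observation `t`.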

3.
Stock Price Forecasting Based on Information Granulation and Support Vector Machines
Information granulation is an effective tool for mining massive data sets and for fuzzy information processing. This paper proposes a stock price forecasting method based on information granulation and support vector machines. Using stock data for Changan Automobile, a regression model for forecasting the opening price is built; the model overcomes the limitation of traditional time series models, which apply only to linear systems. An application example shows that the method can effectively predict the range within which the stock price will vary.
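A common form of the fuzzy information granulation used in such studies summarizes each window of the series by a (Low, R, Up) triangular granule, which is what lets the model predict a range rather than a point. The sketch below shows only this granulation step (the SVM regression on the granules is omitted); the data are synthetic.

```python
import numpy as np

def granulate(series, win=5):
    """Fuzzy information granulation: split the series into windows and
    summarize each by (Low, R, Up) = (min, mean, max), a triangular
    granule describing the range of variation within the window."""
    n = len(series) // win
    windows = series[:n * win].reshape(n, win)
    return windows.min(axis=1), windows.mean(axis=1), windows.max(axis=1)

rng = np.random.default_rng(2)
opens = 10.0 + 0.1 * np.cumsum(rng.standard_normal(30))   # synthetic opening prices
low, r, up = granulate(opens, win=5)
```

A separate regressor would then be trained on each of the three granule series to forecast the next window's price range.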

4.
Abstract

On a high-frequency scale time series are not homogeneous, so standard correlation measures cannot be applied directly to the raw data. To deal with this problem the time series must either be homogenized through interpolation, or methods that can handle raw non-synchronous time series must be employed. This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series. The three methods are tested on simulated data and on actual trade time series.
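The simplest interpolation route mentioned above is previous-tick homogenization: project each irregular series onto a common grid by carrying the last observation forward, then correlate. A minimal sketch with made-up tick data:

```python
import numpy as np

def previous_tick(times, values, grid):
    """Homogenize a non-synchronous series onto a regular grid by
    carrying the most recent observation forward (previous-tick)."""
    idx = np.searchsorted(times, grid, side='right') - 1
    idx = np.clip(idx, 0, len(values) - 1)       # grid points before the first tick
    return values[idx]

# two irregularly observed price series (times in seconds)
t1 = np.array([0.0, 1.2, 3.5, 4.1, 7.9]); p1 = np.array([10.0, 10.1, 10.3, 10.2, 10.4])
t2 = np.array([0.5, 2.2, 4.4, 6.0, 7.0]); p2 = np.array([20.0, 20.2, 20.1, 20.4, 20.3])
grid = np.arange(1.0, 8.0, 1.0)                  # common 1-second grid
rho = np.corrcoef(previous_tick(t1, p1, grid), previous_tick(t2, p2, grid))[0, 1]
```

Linear interpolation is the other traditional choice; estimators that consume the raw tick times directly avoid the grid altogether.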

5.
Abstract

This paper presents a forecasting model of economic assumptions that are inputs to projections of the Social Security system. Social Security projections are made to help policy-makers understand the financial stability of the system. Because system income and expenditures are subject to changes in law, they are controllable and not readily amenable to forecasting techniques. Hence, we focus directly on the four major economic assumptions to the system: inflation rate, investment returns, wage rate, and unemployment rate. Population models, the other major input to Social Security projections, require special demographic techniques and are not addressed here.

Our approach to developing a forecasting model emphasizes exploring characteristics of the data. That is, we use graphical techniques and diagnostic statistics to display patterns that are evident in the data. These patterns include (1) serial correlation, (2) conditional heteroscedasticity, (3) contemporaneous correlations, and (4) cross-correlations among the four economic series. To represent patterns in the four series, we use multivariate autoregressive moving average (ARMA) models with generalized autoregressive conditional heteroscedasticity (GARCH) errors.

The outputs of the fitted models are the forecasts. Because the forecasts can be used for nonlinear functions such as discounting present values of future obligations, we present a computer-intensive method for computing forecast distributions. The computer-intensive approach also allows us to compare alternative models via out-of-sample validation and to compute exact multivariate forecast intervals, in lieu of approximate simultaneous univariate forecast intervals. We show how to use the forecasts of economic assumptions to forecast a simplified version of a fund used to protect the Social Security system from adverse deviations. We recommend the use of the multivariate model because it establishes important lead and lag relationships among the series, accounts for information in the contemporaneous correlations, and provides useful forecasts of a fund that is analogous to the one used by the Social Security system.
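The computer-intensive route to forecast distributions can be sketched in miniature with a univariate AR(1) and a residual bootstrap; the paper's multivariate ARMA-GARCH setting works the same way with more machinery. Everything here (model, data, function name) is illustrative.

```python
import numpy as np

def ar1_forecast_distribution(y, horizon=5, n_sims=2000, seed=0):
    """Fit an AR(1) by least squares, then simulate forecast paths by
    resampling residuals -- a minimal version of the computer-intensive
    approach to forecast distributions."""
    x, ynext = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, ynext, rcond=None)
    resid = ynext - X @ beta
    rng = np.random.default_rng(seed)
    paths = np.empty((n_sims, horizon))
    last = np.full(n_sims, y[-1])
    for h in range(horizon):
        shocks = rng.choice(resid, size=n_sims, replace=True)
        last = beta[0] + beta[1] * last + shocks
        paths[:, h] = last
    return paths

rng = np.random.default_rng(3)
y = np.zeros(200)                                # synthetic 'inflation rate' series
for t in range(1, 200):
    y[t] = 0.01 + 0.7 * y[t - 1] + 0.005 * rng.standard_normal()
paths = ar1_forecast_distribution(y)
interval = np.percentile(paths[:, -1], [2.5, 97.5])   # exact simulated 95% interval
```

Because the full simulated paths are retained, any nonlinear functional (e.g. a discounted fund value) can be evaluated path by path instead of relying on approximate univariate intervals.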

6.
Purpose: Supply chain finance (SCF) has been developing rapidly, and with it credit risk has emerged. This paper therefore applies an SVM optimized by the firefly algorithm, the firefly algorithm support vector machine (FA-SVM), to SCF credit risk evaluation with a purpose-built indicator set. Design/methodology/approach: We used FA-SVM to assess the credit risk of supply chain finance with an index set extracted through correlation and appraisal analysis, finally settling on 3 first-level indicators and 15 third-level indicators. In the application analysis, 39 SMEs (117 sample observations) from the computer and electronic communications manufacturing industry supplied the input variables, allowing us to verify the method's improvement over LIBSVM and its classification performance in SCF credit risk assessment. Findings: The results showed that FA-SVM improves classification accuracy compared with LIBSVM and decreases the rate at which credible enterprises are falsely classified as untrusted. Originality/value: This paper applies the firefly-algorithm support vector machine to supply chain financial evaluation for the first time. The output variable is described in greater detail in the index definition, and a random selection scheme is set in the FA-SVM training process.

7.
8.
Time-Varying Correlations among Non-ferrous Metal Prices Based on a TVP-VAR Model
Using a TVP-VAR model, this paper examines the time-varying correlations among non-ferrous metal prices. The results show significant positive correlations among copper, aluminium and zinc prices: when the price of one metal changes, the prices of the other two typically respond positively, and the strength of this response varies over time. Time-point impulse response functions indicate that the correlations among the metal prices differ across time points but are positive at most of them.

9.
High-order discretization schemes of SDEs using free Lie algebra-valued random variables are introduced by Kusuoka [Adv. Math. Econ., 2004, 5, 69–83], [Adv. Math. Econ., 2013, 17, 71–120], Lyons–Victoir [Proc. R. Soc. Lond. Ser. A Math. Phys. Sci., 2004, 460, 169–198], Ninomiya–Victoir [Appl. Math. Finance, 2008, 15, 107–121] and Ninomiya–Ninomiya [Finance Stochast., 2009, 13, 415–443]. These schemes are called KLNV methods. They involve solving the flows of vector fields associated with SDEs, which is usually done numerically. The authors have found a special Lie-algebraic structure on the vector fields in the major financial diffusion models. Using this structure, we can solve the flows associated with the vector fields analytically and efficiently. Numerical examples show that our method reduces the computation time drastically.

10.
Skewness of financial time series is a relevant topic, due to its implications for portfolio theory and for statistical inference. In the univariate case, its default measure is the third cumulant of the standardized random variable. It can be generalized to the third multivariate cumulant, a matrix containing all the centered third-order moments that can be obtained from a random vector. The present paper examines some properties of the third cumulant under the assumptions of the multivariate SGARCH model introduced by De Luca, Genton, and Loperfido [2006. A multivariate skew-GARCH model. Advances in Econometrics 20: 33–57]. In the first place, it allows for parsimonious modelling of multivariate skewness. In the second place, all its elements are either null or negative, consistently with previous empirical and theoretical findings. A numerical example with financial returns of France, Spain and the Netherlands illustrates the theoretical results in the paper.
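The third multivariate cumulant described above — the d x d² matrix collecting all centered third-order moments E[x_i x_j x_k] — is straightforward to estimate from data. A minimal numpy sketch (function name ours; for symmetric data the entries should be near zero):

```python
import numpy as np

def third_cumulant(x):
    """Sample third multivariate cumulant: a d x d^2 matrix collecting
    all centered third-order moments E[x_i x_j x_k] of a random vector."""
    xc = x - x.mean(axis=0)                       # center each component
    n, d = xc.shape
    k3 = np.einsum('ti,tj,tk->ijk', xc, xc, xc) / n
    return k3.reshape(d, d * d)

rng = np.random.default_rng(9)
x = rng.standard_normal((5000, 3))                # symmetric, so K3 should be ~ 0
K3 = third_cumulant(x)
```

Under the SGARCH assumptions the paper studies, this matrix takes a parsimonious form with only null or negative entries.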

11.
The paper is concerned with time series modelling of the foreign exchange rate of an important emerging economy, viz. India, with due consideration to possible sources of misspecification of the conditional mean such as serial correlation, parameter instability, omitted time series variables and nonlinear dependence. Since structural change is pervasive in economic time series relationships, the paper first studies this aspect of the exchange rate series in detail and finds four structural breaks. Accordingly, the entire sample period is divided into five sub-periods of stable parameters, and the appropriate mean specification for each sub-period is determined by incorporating functions of recursive residuals. Thereafter, GARCH and EGARCH models are considered to capture the volatility contained in the data. The estimated models suggest that the return on the Indian exchange rate series is marked by instabilities and that the appropriate volatility model is EGARCH. Further, the out-of-sample forecasting performance of the model has been studied by standard forecasting criteria and compared with that of an AR model; the findings are quite favorable for the former.
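To see why EGARCH suits asymmetric exchange-rate volatility, it helps to simulate one: the log-variance recursion lets negative shocks raise volatility more than positive ones when the leverage term is negative. This is a generic EGARCH(1,1) simulation with illustrative parameters, not the paper's fitted model.

```python
import numpy as np

def simulate_egarch(n, omega=-0.1, beta=0.95, alpha=0.1, gamma=-0.05, seed=0):
    """Simulate an EGARCH(1,1): log-variance follows
    ln s2[t] = omega + beta*ln s2[t-1] + alpha*(|z[t-1]| - E|z|) + gamma*z[t-1],
    so gamma < 0 makes negative shocks more volatility-raising."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    Ez = np.sqrt(2 / np.pi)                      # E|z| for a standard normal
    logvar = np.empty(n)
    logvar[0] = omega / (1 - beta)               # start at the unconditional level
    for t in range(1, n):
        logvar[t] = (omega + beta * logvar[t - 1]
                     + alpha * (abs(z[t - 1]) - Ez) + gamma * z[t - 1])
    returns = np.exp(logvar / 2) * z
    return returns, np.exp(logvar)

ret, var = simulate_egarch(500)
```

Because the recursion is in log-variance, positivity of the variance is automatic, with no parameter constraints of the kind plain GARCH requires.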

12.
In this paper we study time-varying coefficient (beta coefficient) models with a time trend function to characterize the nonlinear, non-stationary and trending phenomenon in time series and to explain the behavior of asset returns. The general local polynomial method is developed to estimate the time trend and coefficient functions. More importantly, a graphical tool, the plot of the kth-order derivative of the parameter versus time, is proposed to select the proper order of the local polynomial so that the best estimate can be obtained. Finally, we conduct Monte Carlo experiments and a real data analysis to examine the finite sample performance of the proposed modeling procedure and compare it with the Nadaraya–Watson method as well as the local linear method.
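The local linear case (polynomial order 1) of the method above reduces to kernel-weighted least squares at each target time point. A minimal sketch for a single time-varying coefficient beta(t), with our own names and a synthetic linear trend in beta:

```python
import numpy as np

def local_linear_beta(t_grid, t, x, y, h=0.1):
    """Kernel-weighted local linear estimate of a time-varying
    coefficient beta(t) in the model y[t] = beta(t) * x[t] + noise."""
    betas = []
    for t0 in t_grid:
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights
        X = np.column_stack([x, x * (t - t0)])   # beta(t0) + beta'(t0)*(t - t0)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        betas.append(beta[0])
    return np.array(betas)

rng = np.random.default_rng(10)
t = np.linspace(0.0, 1.0, 400)
x = rng.standard_normal(400)
beta_true = 0.5 + t                              # smoothly trending coefficient
y = beta_true * x + 0.1 * rng.standard_normal(400)
est = local_linear_beta(np.array([0.25, 0.75]), t, x, y)
```

Higher polynomial orders add further powers of (t - t0) to the design matrix; the paper's derivative-versus-time plot guides that order choice.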

13.
Abstract

This paper introduces nonlinear threshold time series modeling techniques that actuaries can use in pricing insurance products, analyzing the results of experience studies, and forecasting actuarial assumptions. Basic “self-exciting” threshold autoregressive (SETAR) models, as well as heteroscedastic and multivariate SETAR processes, are discussed. Modeling techniques for each class of models are illustrated through actuarial examples. The methods that are described in this paper have the advantage of being direct and transparent. The sequential and iterative steps of tentative specification, estimation, and diagnostic checking parallel those of the orthodox Box-Jenkins approach for univariate time series analysis.
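A basic two-regime SETAR model of the kind described above is "self-exciting" because the active regime is chosen by the series' own lagged value. For a known threshold, estimation is just a separate least-squares AR fit per regime; a minimal sketch with synthetic data (names and threshold ours):

```python
import numpy as np

def fit_setar(y, threshold):
    """Fit a two-regime SETAR(2;1,1): one AR(1) per regime, with the
    regime chosen by whether y[t-1] exceeds the threshold."""
    x, ynext = y[:-1], y[1:]
    params = []
    for mask in (x <= threshold, x > threshold):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, ynext[mask], rcond=None)
        params.append(beta)                       # (intercept, AR coefficient)
    return params

rng = np.random.default_rng(4)
y = np.zeros(300)
for t in range(1, 300):
    phi = 0.8 if y[t - 1] <= 0 else 0.3           # true regime-dependent dynamics
    y[t] = phi * y[t - 1] + rng.standard_normal()
low_params, high_params = fit_setar(y, threshold=0.0)
```

In practice the threshold itself is chosen by grid search over candidate values, minimizing the pooled residual sum of squares.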

14.
Bankruptcy prediction has received a growing interest in corporate finance and risk management recently. Although numerous studies in the literature have dealt with various statistical and artificial intelligence classifiers, their performance in credit risk forecasting needs to be further scrutinized compared to other methods. In the spirit of Chen, Härdle and Moro (2011, Quantitative Finance), we design an empirical study to assess the effectiveness of various machine learning topologies trained with big data approaches and qualitative, rather than quantitative, information as input variables. The experimental results from a ten-fold cross-validation methodology demonstrate that a generalized regression neural topology yields an accuracy measurement of 99.96%, a sensitivity measure of 99.91% and specificity of 100%. Indeed, this specific model outperformed multi-layer back-propagation networks, probabilistic neural networks, radial basis functions and regression trees, as well as other advanced classifiers. The utilization of advanced nonlinear classifiers based on big data methodologies and machine learning training generates outperforming results compared to traditional methods for bankruptcy forecasting and risk measurement.

15.
Quantitative Finance, 2013, 13(4), 303–314
Abstract

We generalize the construction of the multifractal random walk (MRW) due to Bacry et al (Bacry E, Delour J and Muzy J-F 2001 Modelling financial time series using multifractal random walks Physica A 299 84) to take into account the asymmetric character of financial returns. We show how one can include in this class of models the observed correlation between past returns and future volatilities, in such a way that the scale invariance properties of the MRW are preserved. We compute the leading behaviour of q-moments of the process, which behave as power laws of the time lag with an exponent ζq = p − 2p(p−1)λ² for even q = 2p, as in the symmetric MRW, and ζq = p + 1 − 2p²λ² − α for odd q = 2p + 1, where λ and α are parameters. We show that this extended model reproduces the ‘HARCH’ effect or ‘causal cascade’ reported by some authors. We illustrate the usefulness of this ‘skewed’ MRW by computing the resulting shape of the volatility smiles generated by such a process, which we compare with approximate cumulant expansion formulae for the implied volatility. A large variety of smile surfaces can be reproduced.

16.
This paper proposes a new methodology for modeling and forecasting market risks of portfolios. It is based on a combination of copula functions and Markov switching multifractal (MSM) processes. We assess the performance of the copula-MSM model by computing the value at risk of a portfolio composed of the NASDAQ composite index and the S&P 500. Using the likelihood ratio (LR) test by Christoffersen [1998. “Evaluating Interval Forecasts.” International Economic Review 39: 841–862], the GMM duration-based test by Candelon et al. [2011. “Backtesting Value at Risk: A GMM Duration-based Test.” Journal of Financial Econometrics 9: 314–343] and the superior predictive ability (SPA) test by Hansen [2005. “A Test for Superior Predictive Ability.” Journal of Business and Economic Statistics 23: 365–380] we evaluate the predictive ability of the copula-MSM model and compare it to other common approaches such as historical simulation, variance–covariance, RiskMetrics, copula-GARCH and constant conditional correlation GARCH (CCC-GARCH) models. We find that the copula-MSM model is more robust, provides the best fit and outperforms the other models in terms of forecasting accuracy and VaR prediction.
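The unconditional-coverage part of the Christoffersen LR test cited above checks whether the observed VaR violation rate matches the nominal level p. A minimal sketch (function name ours; the statistic is compared with the chi-squared(1) critical value 3.84):

```python
import numpy as np

def kupiec_lr(violations, p):
    """Unconditional-coverage LR statistic: tests whether the observed
    VaR violation rate equals the nominal level p. 'violations' is a
    boolean array marking days on which the loss exceeded VaR."""
    n1 = int(np.sum(violations))
    n0 = len(violations) - n1
    pi = n1 / (n0 + n1)                           # observed violation rate
    if pi in (0.0, 1.0):
        return np.inf                             # degenerate sample
    ll_null = n0 * np.log(1 - p) + n1 * np.log(p)
    ll_alt = n0 * np.log(1 - pi) + n1 * np.log(pi)
    return -2.0 * (ll_null - ll_alt)

rng = np.random.default_rng(5)
hits = rng.random(250) < 0.05                     # a well-calibrated 5% VaR
lr = kupiec_lr(hits, p=0.05)                      # compare with chi2(1): 3.84
```

The full Christoffersen test adds an independence component based on transitions between violation and non-violation days.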

17.
Abstract

For any continuous univariate population with finite variance there is a mathematical relation which expresses the variate-value z as a convergent series of Legendre polynomials in (2F − 1), where F = F(z) is the distribution function of the population, and the coefficients in this series are the expectations of homogeneous linear forms in the order statistics of random samples from the population. The relation is well adapted for estimating the median and other percentile points when nothing more is known about the population but a random sample from it is available. The variances of these estimates can be estimated from the data. A somewhat similar relation which expresses z as a series of Chebyshev polynomials is also discussed briefly. Finally, a modification of the Legendre polynomial relation enables prior knowledge of a finite extremity of the population range to be used, and a numerical illustration is given.
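The idea can be illustrated numerically: approximate F by the plotting positions of the order statistics, fit z as a Legendre series in (2F − 1), and evaluate the series at 2q − 1 to estimate the q-th percentile point. This sketch uses a plain least-squares fit rather than the paper's exact order-statistic coefficients, so it is only an approximation of the method.

```python
import numpy as np
from numpy.polynomial import legendre

def percentile_estimator(sample, deg=7):
    """Fit z as a Legendre series in (2F - 1), with F approximated by
    plotting positions of the order statistics; evaluating the fitted
    series at 2q - 1 estimates the q-th percentile point."""
    z = np.sort(sample)
    n = len(z)
    u = 2.0 * (np.arange(1, n + 1) - 0.5) / n - 1.0   # 2F - 1 in (-1, 1)
    coeffs = legendre.legfit(u, z, deg)
    return lambda q: legendre.legval(2.0 * q - 1.0, coeffs)

rng = np.random.default_rng(6)
est = percentile_estimator(rng.standard_normal(1000))
median_hat = est(0.5)                             # should be near 0 for this sample
```

For a standard normal sample the fitted series recovers the familiar quantile shape: est(0.84) sits near 1, est(0.5) near 0.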

18.
We show that log-dividends (d) and log-prices (p) are cointegrated, but, instead of de facto assuming the stationarity of the classical log dividend–price ratio, we allow the data to reveal the cointegration vector between d and p. We define the modified dividend–price ratio (mdp) as the long-run trend deviation between d and p. Using S&P 500 data for the period 1926 to 2012, we show that mdp provides substantially improved forecasting results over the classical dp ratio. Out of sample, while the dp ratio cannot outperform the “simplistic forecast” benchmark for any useful horizon, an investor who employs the mdp ratio will do significantly better in forecasting 3-, 5- and 7-year returns, with an out-of-sample R² (R²OS) of 7%, 26% and 31% respectively. In some sense mdp can be considered a de-noising of the classical ratio, as it addresses the major weakness of dp: its presumed inability to reveal business cycle variation in expected returns. Unlike dp, mdp exhibits positive correlation with the risk-free return component, and can discern whether a low-dividend state coincides with a low-yield state.
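The construction of mdp reduces to letting the data choose the cointegration slope: regress d on p and take the residual as the long-run trend deviation, whereas the classical dp ratio imposes a slope of exactly 1. A minimal sketch on synthetic data (names and parameters ours):

```python
import numpy as np

def modified_dp(log_d, log_p):
    """Estimate the cointegration vector between log-dividends and
    log-prices by OLS and return the trend deviation (mdp) plus the
    fitted (intercept, slope). The classical dp ratio fixes slope = 1."""
    X = np.column_stack([np.ones_like(log_p), log_p])
    beta, *_ = np.linalg.lstsq(X, log_d, rcond=None)
    return log_d - X @ beta, beta

rng = np.random.default_rng(7)
log_p = 3.0 + 0.02 * np.cumsum(rng.standard_normal(300))   # synthetic log-prices
log_d = 0.9 * log_p - 3.5 + 0.05 * rng.standard_normal(300)  # cointegrated, slope 0.9
mdp, beta = modified_dp(log_d, log_p)
```

With a true slope of 0.9 rather than 1, the classical ratio d − p inherits a trend, while the estimated mdp is stationary by construction.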

19.
Abstract

Extract

When we have made a regression analysis, for instance on the basis of time series, it is often of interest to know how the results would change if we take into account observations made later on. Because it seems that the whole work of solving the normal equations must be done over again, we seldom continue the calculations by taking later information into account. It is, however, easy to find the required adjustments by a method developed in this article. It is possible to obtain time series showing the development of the regression coefficients without formidable work. We can in this way gain a deeper insight into the problem under study than by performing the regression analysis only once. If the purpose of the regression analysis is to obtain formulas to be used for forecasting, time series of regression coefficients give a better starting point than regression coefficients for a single period only.
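In modern terms, the adjustment described here is recursive least squares: with the running inverse of the normal-equations matrix in hand, each new observation updates the coefficients by a rank-one formula, with no re-solving. A sketch under that assumption (names ours):

```python
import numpy as np

def rls_update(beta, P, x, y):
    """Update OLS coefficients for one new observation (x, y) without
    re-solving the normal equations; P is the running (X'X)^{-1}.
    This is the recursive-least-squares (Sherman-Morrison) recursion."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)                       # gain vector
    beta_new = beta + k * (y - x @ beta)          # correct by the new residual
    P_new = P - np.outer(k, Px)                   # rank-one update of (X'X)^{-1}
    return beta_new, P_new

rng = np.random.default_rng(8)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(50)

# batch fit on the first 40 points, then update recursively with the rest
X0, y0 = X[:40], y[:40]
P = np.linalg.inv(X0.T @ X0)
beta = P @ X0.T @ y0
for t in range(40, 50):
    beta, P = rls_update(beta, P, X[t], y[t])

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]  # full-sample batch fit
```

Recording `beta` after each update yields exactly the time series of regression coefficients the article recommends examining.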

20.
We propose a method to detect early signs of a potential major crash in the market from only the information of the time series representing its stock market data. As reinforcement of the abnormality test Test(ABN) developed in Okabe, Matsuura, and Klimek (International Journal of Pure and Applied Mathematics, 3, 443–484, 2002), we introduce in this paper a risk graph to measure abnormality of time series by using the non-linear prediction analysis in the theory of KM2O-Langevin equations. By applying it to real data of stock market indexes on the Black Monday of 1987 and those during the past 7 years from January 2000 to December 2006, we investigate whether we can detect early signs of a potential major crash in the market by watching the behavior of the risk graph. An erratum to this article can be found at


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号