Similar Literature
20 similar documents retrieved
1.
Scientific data are representations of the characteristics of objective phenomena in the natural world. In modern science, data are being collected at ever finer time scales, approaching differentiability in frequency and growing exponentially in volume. Since the 1990s, with the widespread use of modern computing technology in financial trading, trading systems have been able to deliver market participants' transaction data in real time, covering stocks, foreign exchange, futures, and financial derivatives, and the entire trading process is recorded and stored in real time, trade by trade or second by second. This gives rise to financial high frequency data (Financial High Frequency Data), which approach the differentiable limit. Financial high frequency data are massive in scale; minute-level data, for example, can reach the order of 1,000,000 observations within 10 years.
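A quick back-of-the-envelope check of that last order-of-magnitude claim (a sketch in Python; the trading-calendar figures are illustrative assumptions):

```python
# Rough count of 1-minute observations accumulated over 10 years
# for a single instrument (illustrative calendar assumptions).
minutes_per_day = 4 * 60         # assume a 4-hour trading session
trading_days_per_year = 250      # common equity-market convention
years = 10

n_obs = minutes_per_day * trading_days_per_year * years
print(n_obs)  # 600000, i.e. on the order of 10**6, as stated above
```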

2.
Investors wishing to achieve a particular level of diversification may be misled on how many stocks to hold in a portfolio by assessing the portfolio risk at different data frequencies. High frequency intradaily data provide better estimates of volatility, which translate to more accurate assessment of portfolio risk. Using 5-min, daily and weekly data on S&P500 constituents for the period from 2003 to 2011, we find that for an average investor wishing to diversify away 85% (90%) of the risk, equally weighted portfolios of 7 (10) stocks will suffice, irrespective of the data frequency used or the time period considered. However, to assure investors of a desired level of diversification 90% of the time (in contrast to on average), using low frequency data results in an exaggerated number of stocks in a portfolio when compared with the recommendation based on 5-min data. This difference is magnified during periods when financial markets are in distress, as much as doubling during the 2007–2009 financial crisis.
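A minimal sketch of the calculation underlying such recommendations: estimate a covariance matrix from returns sampled at some frequency, then read off the share of average single-stock risk removed by an equally weighted portfolio. The toy one-factor returns below are placeholders for actual 5-min, daily or weekly data.

```python
import numpy as np

def fraction_diversified(returns: np.ndarray) -> float:
    """Share of average single-stock variance removed by equal weighting.

    returns: (T, n) array of asset returns at some sampling frequency.
    """
    cov = np.cov(returns, rowvar=False)
    n = cov.shape[0]
    w = np.full(n, 1.0 / n)
    port_var = w @ cov @ w
    avg_var = np.mean(np.diag(cov))
    return 1.0 - port_var / avg_var

rng = np.random.default_rng(0)
# toy returns: one common factor plus idiosyncratic noise, 10 stocks
common = rng.normal(size=(1000, 1))
rets = 0.5 * common + rng.normal(size=(1000, 10))
print(f"{fraction_diversified(rets):.0%} of average stock risk diversified away")
```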

3.
This paper proposes a new class of dynamic copula models for daily asset returns that exploits information from high frequency (intra-daily) data. We augment the generalized autoregressive score (GAS) model of Creal et al. (2013) with high frequency measures such as realized correlation to obtain a “GRAS” model. We find that the inclusion of realized measures significantly improves the in-sample fit of dynamic copula models across a range of U.S. equity returns. Moreover, we find that out-of-sample density forecasts from our GRAS models are superior to those from simpler models. Finally, we consider a simple portfolio choice problem to illustrate the economic gains from exploiting high frequency data for modeling dynamic dependence.
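The score-driven updating idea can be sketched compactly. Below is a stylized GRAS-type recursion for a time-varying correlation in which a realized-correlation term augments the score step; the simple moment-based score proxy and the coefficient values are illustrative assumptions, not the paper's exact copula specification.

```python
import numpy as np

def gras_corr_path(ret_pair, realized_corr, omega=0.02, beta=0.95,
                   alpha=0.03, gamma=0.05):
    """Stylized GRAS-type recursion for a dynamic correlation.

    ret_pair:      (T, 2) standardized daily returns.
    realized_corr: (T,) realized correlations from intraday data.
    The 'score' below is a simple moment proxy (cross-product minus
    current correlation), not the exact copula score of the paper.
    """
    T = ret_pair.shape[0]
    rho = np.empty(T)
    rho[0] = realized_corr[0]
    for t in range(T - 1):
        score = ret_pair[t, 0] * ret_pair[t, 1] - rho[t]
        rho[t + 1] = omega + beta * rho[t] + alpha * score + gamma * realized_corr[t]
        rho[t + 1] = np.clip(rho[t + 1], -0.99, 0.99)  # keep a valid correlation
    return rho
```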

4.
Computing value at risk with high frequency data
We compare the computation of value at risk with daily and with high frequency data for the Deutsche mark–US dollar exchange rate. Among the main points considered in the paper are: (a) the comparison of measures of value at risk on the basis of multi-step volatility forecasts; (b) the computation of the degree of fractional differencing for high frequency data in the context of a Fractionally Integrated Generalized Autoregressive Conditional Heteroskedasticity (FIGARCH) model; and (c) the comparison between deterministic and stochastic models for the filtering of high frequency returns.
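A minimal illustration of point (a): turn a one-step conditional volatility forecast into a VaR number. For brevity this sketch filters volatility with a plain GARCH(1,1) recursion as a stand-in for the paper's FIGARCH model, and assumes Gaussian tails; all parameter values are placeholders.

```python
import numpy as np
from scipy.stats import norm

def garch_var(returns, omega=1e-6, alpha=0.05, beta=0.92, level=0.01):
    """One-step-ahead value at risk from a GARCH(1,1) filter.

    Stand-in for a FIGARCH model; returns VaR as a positive loss
    number at the given tail level (e.g. 1%), under Gaussian tails.
    """
    var_t = np.var(returns)          # initialize at the sample variance
    for r in returns:
        var_t = omega + alpha * r**2 + beta * var_t
    sigma_forecast = np.sqrt(var_t)
    return -norm.ppf(level) * sigma_forecast

rng = np.random.default_rng(1)
rets = 0.01 * rng.standard_normal(2000)   # toy daily returns
print(f"1-day 99% VaR: {garch_var(rets):.4%}")
```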

5.
Identifying VARS based on high frequency futures data
Using the prices of federal funds futures contracts, we measure the impact of the surprise component of Federal Reserve policy decisions on the expected future trajectory of interest rates. We show how this information can be used to identify the effects of a monetary policy shock in a standard VAR. This alternative approach to identification is quite different, and, we argue, more plausible, than the conventional identifying restrictions. We find that a usual recursive identification of the model is rejected, as is any identification that insists on a monetary policy shock having an exactly zero effect on prices contemporaneously. We nevertheless agree with the conclusion of much of the VAR literature that only a small fraction of the variance of output can be attributed to monetary policy shocks.
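The surprise component is conventionally backed out of the change in the current-month federal funds futures rate around the announcement, scaled up because the contract settles on the monthly average rate. A sketch of that Kuttner-style scaling (assumed here; the paper's exact construction may differ):

```python
def policy_surprise(rate_before: float, rate_after: float,
                    day_of_month: int, days_in_month: int) -> float:
    """Unexpected component of a target-rate change, backed out of the
    current-month fed funds futures rate (Kuttner-style scaling).

    The futures contract settles on the monthly average funds rate, so
    a change on day d only affects the remaining (D - d) days and must
    be scaled up. Last-day announcements (d == D) would need the
    next-month contract and are not handled in this sketch.
    """
    remaining = days_in_month - day_of_month
    scale = days_in_month / remaining
    return scale * (rate_after - rate_before)

# e.g. the futures rate rises 10bp on the 15th of a 30-day month:
print(policy_surprise(5.00, 5.10, 15, 30))  # 0.20 => a 20bp surprise
```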

6.
Global Finance Journal, 2009, 19(3): 319–336
This paper examines the pricing of Eurodollar futures and US dollar FRA contracts using a high frequency data set. I find the median futures/FRA differential is close to zero and the dispersion of the differential smaller than reported in prior studies using low frequency data and implied forward rates as proxies for forward rates. Arbitrage opportunities are linked to the presence of stale FRA quotes and the oscillatory behavior of FRA quotes. Inter-market information flows are found to be of much shorter duration than previously reported with the futures market playing the dominant role in the information transmission process in the shorter-dated maturities.
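Measuring the differential itself reduces to aligning the two quote series on a common timestamp grid; a brief sketch (pandas assumed, with forward-filling of the slower FRA quotes, which is also where the stale-quote effect noted above can enter):

```python
import pandas as pd

def futures_fra_differential(futures_rate: pd.Series, fra_rate: pd.Series) -> pd.Series:
    """Futures/FRA differential on a common high-frequency timestamp grid.

    Forward-fills the slower-moving FRA quotes; stale FRA quotes can
    therefore masquerade as arbitrage, as noted in the abstract above.
    """
    aligned = pd.concat({"fut": futures_rate, "fra": fra_rate}, axis=1).ffill().dropna()
    diff = aligned["fut"] - aligned["fra"]
    iqr = diff.quantile(0.75) - diff.quantile(0.25)
    print(f"median: {diff.median():.4f}  IQR: {iqr:.4f}")
    return diff
```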

7.
This paper empirically studies the role of macro-factors in explaining and predicting daily bond yields. In general, macro-finance models use low-frequency data to match with macroeconomic variables available only at low frequencies. To deal with this, we construct and estimate a tractable no-arbitrage affine model with both conventional latent factors and macro-factors by imposing cross-equation restrictions on the daily yields of bonds with different maturities, credit risks, and inflation indexation. The estimation results using both the US and the UK data show that the estimated macro-factors significantly predict actual inflation and the output gap. In addition, our daily macro-term structure model forecasts better than no-arbitrage models with only latent factors as well as other statistical models.
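The cross-equation restrictions come from each maturity's yield being the same affine function of a common factor vector. A minimal sketch of that mapping; the loadings would be produced by no-arbitrage recursions and are treated as inputs here:

```python
import numpy as np

def affine_yields(A: np.ndarray, B: np.ndarray, x_t: np.ndarray) -> np.ndarray:
    """Model-implied yields y_t(tau) = A(tau) + B(tau)' x_t.

    A:   (m,) intercepts, one per maturity.
    B:   (m, k) factor loadings (latent and macro factors stacked in x_t).
    x_t: (k,) the factor vector on a given day.
    The no-arbitrage restrictions tie A and B together across maturities.
    """
    return A + B @ x_t
```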

8.
In this article we introduce a linear–quadratic volatility model with co-jumps and show how to calibrate this model to a rich dataset. We apply GMM and more specifically match the moments of realized power and multi-power variations, which are obtained from high-frequency stock market data. Our model incorporates two salient features: the setting of simultaneous jumps in both return process and volatility process and the superposition structure of a continuous linear–quadratic volatility process and a Lévy-driven Ornstein–Uhlenbeck process. We compare the quality of fit for several models, and show that our model outperforms the conventional jump diffusion or Bates model. Besides that, we find evidence that the jump sizes are not normally distributed and that our model performs best when the distribution of jump-sizes is only specified through certain (co-) moment conditions. Monte Carlo experiments are employed to confirm this.
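The GMM moment targets are realized power and multipower variations built from intraday returns. A sketch of the two workhorse cases, realized variance and realized bipower variation, whose gap isolates the jump contribution:

```python
import numpy as np

def realized_variance(r: np.ndarray) -> float:
    """Sum of squared intraday returns; converges to integrated
    variance plus the jump variation."""
    return float(np.sum(r ** 2))

def bipower_variation(r: np.ndarray) -> float:
    """(pi/2) * sum |r_i||r_{i-1}|; robust to jumps, converges to
    integrated variance alone."""
    return float((np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1])))

# The jump contribution can then be gauged as max(RV - BV, 0).
```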

9.
This paper tests affine, quadratic and Black-type Gaussian models on Euro area triple A Government bond yields for maturities up to 30 years. Quadratic Gaussian models beat affine Gaussian models both in-sample and out-of-sample. A Black-type model best fits the shortest maturities and the extremely low yields since 2013, but worst fits the longest maturities. Even for quadratic models we can infer the latent factors from some yields observed without errors, which makes quasi-maximum likelihood (QML) estimation feasible. New specifications of quadratic models fit the longest maturities better than does the ‘classic’ specification of Ahn et al. [2002. ‘Quadratic Term Structure Models: Theory and Evidence.’ The Review of Financial Studies 15 (1): 243–288], but the opposite is true for the shortest maturities. These new specifications are more suitable to QML estimation. Overall quadratic models seem preferable to affine Gaussian models, because of superior empirical performance, and to Black-type models, because of superior tractability. This paper also proposes the vertical method of lines (MOL) to solve numerically partial differential equations (PDEs) for pricing bonds under multiple non-independent stochastic factors. ‘Splitting’ the PDE drastically reduces computations. Vertical MOL can be considerably faster and more accurate than finite difference methods.
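The affine/quadratic distinction being tested is simply whether yields are linear, or linear plus quadratic, in the Gaussian factors. A minimal sketch of the quadratic mapping (loadings again come from no-arbitrage recursions and are inputs here):

```python
import numpy as np

def quadratic_yield(a: float, b: np.ndarray, C: np.ndarray, x: np.ndarray) -> float:
    """Quadratic Gaussian model yield: y = a + b'x + x'Cx.

    Setting C = 0 recovers the affine Gaussian special case, which is
    why the two model classes can be compared on the same factor set.
    """
    return float(a + b @ x + x @ C @ x)
```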

10.
Current studies of financial market risk measures usually use daily returns based on GARCH-type models. This paper models the realized range using intraday high frequency data in the CARR framework and applies it to VaR forecasting. The Kupiec LR test and the dynamic quantile test are used to compare the VaR forecasting performance of the realized range model with that of an intraday realized volatility model and of daily GARCH-type models. Empirical results for Chinese stock indices show that the realized range model performs on par with the realized volatility model, and both perform much better than the daily models.
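The realized range aggregates intraday high-low ranges in place of squared returns. A sketch using the Parkinson scaling 1/(4 ln 2) per interval (this particular scaling is an assumption; the paper then fits CARR dynamics to the resulting daily series):

```python
import numpy as np

def realized_range(high: np.ndarray, low: np.ndarray) -> float:
    """Daily realized range from intraday interval highs and lows.

    Each interval contributes (ln H - ln L)^2 / (4 ln 2), the Parkinson
    scaling that makes the squared range unbiased for variance under
    driftless Brownian motion.
    """
    return float(np.sum(np.log(high / low) ** 2) / (4 * np.log(2)))
```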

11.
Extreme co-movement and extreme impact problems are inherently stochastic control problems, since they influence the decision taken today and ultimately a decision taken in the future. Extreme co-movements among financial assets have been reported in the literature. However, extreme impacts have not yet been carefully studied. In this paper, we use a newly developed methodology to further explore extreme co-movements and extreme impacts in financial markets. In particular, two FX spot rates are studied. Based on the results of our analysis of FX returns, we conclude that extreme co-movements and extreme impacts exist in FX returns and that care has to be taken when employing portfolio optimization models, especially models that cannot handle extreme dependencies.
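Extreme co-movement is commonly quantified via tail dependence. A sketch of the simple empirical upper-tail estimator at a high quantile q, the kind of diagnostic that flags dependence which mean-covariance portfolio models miss:

```python
import numpy as np

def upper_tail_dependence(x: np.ndarray, y: np.ndarray, q: float = 0.95) -> float:
    """Empirical estimate of P(Y > F_y^{-1}(q) | X > F_x^{-1}(q)).

    Values well above (1 - q) indicate extreme co-movement beyond
    what independence would produce.
    """
    x_thr, y_thr = np.quantile(x, q), np.quantile(y, q)
    exceed_x = x > x_thr
    if exceed_x.sum() == 0:
        return float("nan")
    return float(np.mean(y[exceed_x] > y_thr))
```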

12.
Granger causality tests are being supplanted by new methods such as the Lead-Lag Ratio, particularly in finance where data arrives at random times and systematic sampling often produces spurious results. Existing approaches are insufficient; outside of block-sampling using a bootstrap, the lead-lag ratio has generally been assessed against a benchmark of 1 without regard for statistical significance. We use simulations to generate a response surface for the Lead-Lag Ratio. Our modelled critical values are applied to reassess the findings of three previous studies of lead/lag relations between financial return series with high frequency data. Our response surface method proves to be a convenient and efficient alternative to using a bootstrap.
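One common definition of the Lead-Lag Ratio (following Huth and Abergel) compares squared cross-correlation mass at positive versus negative lags; whether the resulting statistic differs significantly from 1 is precisely what the response-surface critical values are for. A sketch under that definition:

```python
import numpy as np

def lead_lag_ratio(x: np.ndarray, y: np.ndarray, max_lag: int = 10) -> float:
    """LLR = sum_{k>0} corr(x_{t-k}, y_t)^2 / sum_{k>0} corr(x_{t+k}, y_t)^2.

    Values above 1 suggest x tends to lead y. Note the benchmark of 1
    by itself ignores sampling error, as discussed above.
    """
    def xcorr(k: int) -> float:
        if k > 0:                                  # x lagged: x leads y
            return np.corrcoef(x[:-k], y[k:])[0, 1]
        return np.corrcoef(x[-k:], y[:k])[0, 1]    # x led: y leads x
    num = sum(xcorr(k) ** 2 for k in range(1, max_lag + 1))
    den = sum(xcorr(-k) ** 2 for k in range(1, max_lag + 1))
    return num / den
```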

13.
In this paper the effective bid-ask spread is estimated using 12 high frequency Danish bond samples. A clear-cut MA(1) model for the mean of the return series, and a GARCH(1,1) model for the variance, are found. Roll's model is used as the basic framework, but three different methods of calculating the first-order autocovariance are suggested, which in turn produce three possible ways of estimating the effective bid-ask spread. First, Roll's original autocovariance estimate is used. Second, the autocovariance is calculated using the parameters of an estimated MA(1) model. Third, the autocovariance is obtained from the parameters of a joint MA(1)-GARCH(1,1) model. By means of bootstrapping, the standard errors of the bid-ask spread estimates are found. It is shown that the gain in efficiency, measured by the relative difference in the standard error of the estimates, is 29% when going from method one to method two, but only 1% when going from method two to method three. These results indicate that the extra gain in efficiency obtained by taking account of the MA(1) structure of the data is noteworthy, but the gain from incorporating the GARCH effects is negligible.
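The first two estimation routes described above are short enough to sketch directly; the MA(1) fit itself (and the GARCH extension of method three) is left to a standard library:

```python
import numpy as np

def roll_spread_from_cov(returns: np.ndarray) -> float:
    """Method 1: Roll's estimator, s = 2 * sqrt(-cov(r_t, r_{t-1})).

    Only defined when the sample autocovariance is negative, as the
    bid-ask bounce implies.
    """
    acov = np.cov(returns[1:], returns[:-1])[0, 1]
    return 2.0 * np.sqrt(-acov) if acov < 0 else float("nan")

def roll_spread_from_ma1(theta: float, sigma2: float) -> float:
    """Method 2: same formula, but with cov(r_t, r_{t-1}) = theta * sigma2
    implied by a fitted MA(1) model r_t = mu + e_t + theta * e_{t-1}."""
    acov = theta * sigma2
    return 2.0 * np.sqrt(-acov) if acov < 0 else float("nan")
```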

14.
We use high frequency data for the mark–dollar exchange rate for the period 1992–1995 to evaluate the effects of central bank interventions on the foreign exchange market. We estimate an unobserved component model that decomposes volatility into non-stationary and stationary parts. The stationary components are in turn decomposed into seasonal and non-seasonal intra-day parts. Our results confirm the view that interventions are not particularly effective. The exchange rate moves in the desired direction only about 50% of the time, and often with a substantial increase in volatility. The model suggests that the two components most affected by interventions are the permanent and the stochastic intra-day components.

15.
This paper introduces novel ‘doubly mean-reverting’ processes based on conditional modelling of model spreads between pairs of stocks. Intraday trading strategies using high frequency data are proposed based on the model. This model framework and the strategies are designed to capture ‘local’ market inefficiencies that are elusive for traditional pairs trading strategies with daily data. Results from real data back-testing for two periods show remarkable returns, even accounting for transaction costs, with annualized Sharpe ratios of 3.9 and 7.2 over the periods June 2013–April 2015 and 2008, respectively. By choosing the particular sector of oil companies, we also confirm the observation that the commodity price is the main driver of the share prices of commodity-producing companies at times of spikes in the related commodity market.
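The paper's doubly mean-reverting spread model is not reproduced here, but the trading logic such a model feeds is the familiar z-scored spread rule; a simplified intraday sketch with illustrative entry and exit thresholds:

```python
import numpy as np

def spread_signal(spread: np.ndarray, window: int = 60,
                  entry: float = 2.0, exit: float = 0.5) -> np.ndarray:
    """Position in the spread: -1 (short), 0 (flat), +1 (long).

    Enter when the rolling z-score is extreme, exit when it reverts.
    A simplified stand-in for the model-based spread of the paper.
    """
    pos = np.zeros(len(spread))
    for t in range(window, len(spread)):
        mu = spread[t - window:t].mean()
        sd = spread[t - window:t].std()
        z = (spread[t] - mu) / sd if sd > 0 else 0.0
        if pos[t - 1] == 0 and abs(z) > entry:
            pos[t] = -np.sign(z)          # fade the deviation
        elif pos[t - 1] != 0 and abs(z) < exit:
            pos[t] = 0                    # mean reversion realized
        else:
            pos[t] = pos[t - 1]
    return pos
```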

16.
N. Taylor & Y. Xu, Quantitative Finance, 2017, 17(7): 1021–1035
We develop a general form logarithmic vector multiplicative error model (log-vMEM). The log-vMEM improves on existing models in two ways. First, it is a more general form model as it allows the error terms to be cross-dependent and relaxes weak exogeneity restrictions. Second, the log-vMEM specification guarantees that the conditional means are non-negative without any restrictions imposed on the parameters. We further propose a multivariate lognormal distribution and a joint maximum likelihood estimation strategy. The model is applied to high frequency data associated with a number of NYSE-listed stocks. The results reveal empirical support for full interdependence of trading duration, volume and volatility, with the log-vMEM providing a better fit to the data than a competing model. Moreover, we find that unexpected duration and volume dominate observed duration and volume in terms of information content, and that volatility and volatility shocks affect duration in different directions. These results are interpreted with reference to extant microstructure theory.
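The core log-vMEM recursion is compact: each positive series (duration, volume, volatility) equals its conditional mean times a positive error, and the log conditional means follow a VARMA-type recursion, which is what delivers non-negativity without parameter restrictions. A stylized sketch with the coefficient matrices as inputs:

```python
import numpy as np

def log_vmem_means(x: np.ndarray, omega: np.ndarray,
                   A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Conditional means of a log-vMEM: x_t = mu_t * eps_t (elementwise),
    log mu_t = omega + A @ log x_{t-1} + B @ log mu_{t-1}.

    Because the recursion runs in logs, mu_t is positive for any
    omega, A, B, which is the point made in the abstract above.
    x: (T, k) strictly positive series (e.g. duration, volume, volatility).
    """
    T, k = x.shape
    log_mu = np.zeros((T, k))
    log_mu[0] = np.log(x.mean(axis=0))   # crude initialization
    for t in range(1, T):
        log_mu[t] = omega + A @ np.log(x[t - 1]) + B @ log_mu[t - 1]
    return np.exp(log_mu)
```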

17.
We generalize an empirical likelihood approach to deal with missing data to a model of consumer credit scoring. An application to recent consumer credit data shows that our procedure yields parameter estimates which are significantly different (both statistically and economically) from the case where customers who were refused credit are ignored. This has obvious implications for commercial banks as it shows that refused customers should not be ignored when developing scorecards for the retail business. We also show that forecasts of defaults derived from the method proposed in this paper improve upon the standard ones when refused customers do not enter the estimation data set.

18.
This paper explains the capability theory of how HFT firms make allocation decisions under uncertainty, and shows how capability maximization is precisely consistent with utility theory. The issue, however, is how these firms actually make allocation decisions in practice. Using the Gioia methodology, this paper presents evidence from interviews with HFT professionals and specialist media that suggest that these firms are capability satisficers. Capability theory is also consistent with bounded rationality and the adaptive markets hypothesis, and defines the point at which these firms reach a satisfactory solution. Thus, capability reconciles mainstream theory and the more realistic, behavioral theories based on observation of industry practice. The methodology developed can be applied to any firm that makes algorithmic decisions under uncertainty.

19.
As the example presented demonstrates, the analysis of cross-impacts among a group of related events is somewhat more complex than perhaps heretofore imagined. Impacts or effects between such events very often depend critically on their temporal sequence. The event outcome space is accordingly multiplied considerably over the case where such dependency is ignored. The result is that a high premium is placed on methods that seek to reduce analytical complexity without distorting essential event interactions.

20.
The increasing integration of computer technology for the processing of business transactions and the growing amount of financially relevant data in organizations create new challenges for external auditors. The availability of digital data opens up new opportunities for innovative audit procedures. Process mining can be used as a novel data analysis technique to support auditors in this context. Process mining algorithms produce process models by analyzing recorded event logs. Contemporary general purpose mining algorithms commonly use the temporal order of recorded events for determining the control flow in mined process models. The presented research shows how data dependencies related to the accounting structure of recorded events can be used as an alternative to the temporal order of events for discovering the control flow. The generated models provide accurate information on the control flow from an accounting perspective and show a lower complexity compared to those generated using timestamp dependencies. The presented research follows a design science research approach and uses three different real world data sets for evaluation purposes.
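The temporal-order baseline that the proposed approach is compared against can be sketched in a few lines: group events by case, sort by timestamp, and collect the directly-follows relation that standard discovery algorithms start from. The event-log field names below are illustrative; the paper's alternative would replace the timestamp sort with accounting data dependencies.

```python
from collections import defaultdict

def directly_follows(event_log):
    """Directly-follows counts from a list of event dicts with
    (illustrative) keys: 'case', 'activity', 'timestamp'.

    This is the temporal-order baseline; the approach described above
    would instead order events by their accounting data dependencies.
    """
    cases = defaultdict(list)
    for e in event_log:
        cases[e["case"]].append(e)
    df = defaultdict(int)
    for events in cases.values():
        events.sort(key=lambda e: e["timestamp"])
        for a, b in zip(events, events[1:]):
            df[(a["activity"], b["activity"])] += 1
    return dict(df)
```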
