Similar Documents
20 similar documents found.
1.
We present a new model to decompose total daily return volatility into high-frequency-based open-to-close volatility and a time-varying scaling factor. We use score-driven dynamics based on fat-tailed distributions to obtain robust volatility dynamics. Applying our new model to a 2001–2018 sample of individual stocks and stock indices, we find substantial in-sample variation of the daytime-to-total volatility ratio over time. We apply the model to out-of-sample forecasting, evaluated in terms of Value-at-Risk and Expected Shortfall. Models with a non-constant volatility ratio typically perform best, particularly in terms of Value-at-Risk. Our new model performs especially well during turbulent times. All results are generally stronger for individual stocks than for index returns.

2.
Volatility forecasts are important for a number of practical financial decisions, such as those related to risk management. When working with high-frequency data from markets that operate during a reduced time, an approach to deal with the overnight return volatility is needed. In this context, we use heterogeneous autoregressions (HAR) to model the variation associated with the intraday activity, with distinct realized measures as regressors, and, to model the overnight returns, we use augmented GARCH-type models. Then, we combine the HAR and GARCH models to generate forecasts for the total daily return volatility. In an empirical study, for returns on six international stock indices, we analyze the separate modeling approach in terms of its out-of-sample forecasting performance of daily volatility, Value-at-Risk and Expected Shortfall relative to standard models from the literature. In particular, the overall results are favorable for the separate modeling approach in comparison with some HAR models based on realized variance measures for the whole day and the standard GARCH model.
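The HAR component described above regresses realized variance on its own daily, weekly (5-day) and monthly (22-day) averages. The sketch below is an illustrative minimal version of that regression fitted by OLS on a simulated series; the function name, the toy data, and the estimation method are assumptions for the example, not the authors' exact specification.

```python
import numpy as np

def har_forecast(rv):
    """One-step-ahead HAR forecast of realized variance (Corsi-style sketch).

    Regressors: yesterday's RV and its trailing 5-day and 22-day averages.
    """
    rv = np.asarray(rv, dtype=float)
    d = rv[21:-1]                                                       # daily lag
    w = np.array([rv[i - 4:i + 1].mean() for i in range(21, len(rv) - 1)])   # weekly avg
    m = np.array([rv[i - 21:i + 1].mean() for i in range(21, len(rv) - 1)])  # monthly avg
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # OLS fit
    x_new = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return float(x_new @ beta)                          # next-day forecast

rng = np.random.default_rng(0)
rv = np.abs(np.cumsum(rng.normal(0, 0.01, 500))) + 0.1  # toy persistent positive series
print(har_forecast(rv))
```

In the separate modeling approach of the abstract, a forecast like this for the intraday part would then be combined with a GARCH-type forecast for the overnight return variance.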

3.
This article reviews the application of some advanced Monte Carlo techniques in the context of multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations, which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from coupled pairs in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider some Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies that facilitate the application of MLMC within these methods.
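The telescoping representation mentioned above is E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], with each correction estimated from a coupled fine/coarse pair driven by the same randomness. A minimal sketch for a geometric Brownian motion discretized by Euler steps (the example model, level count, and sample sizes are illustrative choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

def coupled_euler(level, n, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Fine (2**level steps) and coarse (half as many) Euler paths of a GBM,
    driven by the same Brownian increments so the pair is tightly coupled."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    xf = np.full(n, x0)
    for k in range(nf):
        xf = xf * (1.0 + mu * dt + sigma * dW[:, k])
    if level == 0:                       # no coarser level to couple with
        return xf, np.zeros(n)
    xc = np.full(n, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]      # coarse increments sum the fine ones
    for k in range(nf // 2):
        xc = xc * (1.0 + mu * 2 * dt + sigma * dWc[:, k])
    return xf, xc

# Telescoping sum: E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]
L, n = 5, 20_000
estimate = 0.0
for level in range(L + 1):
    fine, coarse = coupled_euler(level, n)
    estimate += (fine - coarse).mean()   # level-0 term is E[P_0] itself

print(estimate)                          # close to E[X_T] = x0 * exp(mu * T)
```

A real MLMC implementation would allocate fewer samples to finer (more expensive) levels, exploiting the small variance of the coupled differences; a constant sample size is used here only to keep the sketch short.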

4.
We develop a minimum amount of theory of Markov chains at as low a level of abstraction as possible in order to prove two fundamental probability laws for standard Markov chain Monte Carlo algorithms:
1. The law of large numbers explains why the algorithm works: it states that the empirical means calculated from the samples converge towards their "true" expected values, viz. expectations with respect to the invariant distribution of the associated Markov chain (=the target distribution of the simulation).
2. The central limit theorem expresses the deviations of the empirical means from their expected values in terms of asymptotically normally distributed random variables. We also present a formula and an estimator for the associated variance.
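Both laws can be illustrated with a minimal sketch, assuming a random-walk Metropolis chain targeting the standard normal: the ergodic average demonstrates the law of large numbers, and a batch-means estimator (one common choice of variance estimator, not necessarily the formula in the text) demonstrates the CLT scaling.

```python
import numpy as np

rng = np.random.default_rng(2)

def rw_metropolis(n, step=1.0):
    """Random-walk Metropolis chain targeting the standard normal."""
    x = 0.0
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()
        # accept with prob min(1, pi(prop)/pi(x)), pi(x) ∝ exp(-x^2/2)
        if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop):
            x = prop
        out[i] = x
    return out

chain = rw_metropolis(50_000)
mean = chain.mean()                       # LLN: converges to E[X] = 0

# CLT: batch-means estimate of the asymptotic variance sigma^2
# in sqrt(n) * (mean - mu) -> N(0, sigma^2)
b = 500                                   # batch size
batches = chain.reshape(-1, b).mean(axis=1)
asym_var = b * batches.var(ddof=1)
se = np.sqrt(asym_var / len(chain))       # standard error of the ergodic average
print(mean, se)
```

Note that `asym_var` exceeds the target variance divided by one, reflecting the autocorrelation of the chain; for i.i.d. samples the two would coincide.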

5.
Bayesian modification indices are presented that provide information for the process of model evaluation and model modification. These indices can be used to investigate the improvement in a model if fixed parameters are re-specified as free parameters. The indices can be seen as a Bayesian analogue to the modification indices commonly used in a frequentist framework. The aim is to provide diagnostic information for multi-parameter models where the number of possible model violations and the related number of alternative models is too large to render estimation of each alternative practical. As an example, the method is applied to an item response theory (IRT) model, that is, to the two-parameter model. The method is used to investigate differential item functioning and violations of the assumption of local independence.

6.
In many applications involving time-varying parameter VARs, it is desirable to restrict the VAR coefficients at each point in time to be non-explosive. This is an example of a problem where inequality restrictions are imposed on states in a state space model. In this paper, we describe how existing MCMC algorithms for imposing such inequality restrictions can work poorly (or not at all) and suggest alternative algorithms which exhibit better performance. Furthermore, we show that previous algorithms involve an approximation relating to a key prior integrating constant. Our algorithms are exact, not involving this approximation. In an application involving a commonly used U.S. data set, we present evidence that the algorithms proposed in this paper work well.

7.
A new framework for the joint estimation and forecasting of dynamic value at risk (VaR) and expected shortfall (ES) is proposed by incorporating intraday information into the generalized autoregressive score (GAS) model introduced by Patton et al. (2019) to estimate risk measures in a quantile regression set-up. We consider four intraday measures: the realized volatility at 5-min and 10-min sampling frequencies, and the overnight return incorporated into these two realized volatilities. In a forecasting study, the newly proposed semiparametric models are applied to four international stock market indices (S&P 500, Dow Jones Industrial Average, Nikkei 225 and FTSE 100) and are compared with a range of parametric, nonparametric and semiparametric models, including historical simulations, generalized autoregressive conditional heteroscedasticity (GARCH) models and the original GAS models. VaR and ES forecasts are backtested individually, and the joint loss function is used for comparisons. Our results show that GAS models, enhanced with the realized volatility measures, outperform the benchmark models consistently across all indices and various probability levels.

8.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
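To illustrate the kind of g-prior model comparison discussed above, the sketch below scores every regressor subset with the closed-form Bayes factor for Zellner's g-prior (in the form given by Liang et al., 2008) and normalizes under a uniform model prior. Brute-force enumeration substitutes for the MC3 sampler mentioned in the abstract, and the simulated data, the choice g = n, and the function name are illustrative assumptions.

```python
import itertools
import numpy as np

def log_bf_gprior(y, X_full, subset, g):
    """Log Bayes factor of the model using columns `subset` versus the
    intercept-only model under a Zellner g-prior:
    BF = (1+g)^((n-1-k)/2) / (1 + g(1-R^2))^((n-1)/2)."""
    n = len(y)
    yc = y - y.mean()
    if not subset:
        return 0.0                                   # null model: BF = 1
    Xc = X_full[:, subset] - X_full[:, subset].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    resid = yc - Xc @ beta
    r2 = 1.0 - (resid @ resid) / (yc @ yc)
    k = len(subset)
    return 0.5 * (n - 1 - k) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1 - r2))

rng = np.random.default_rng(3)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)   # true model: {0, 1}

g = float(n)            # unit-information prior, one common benchmark choice
models = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
logbf = np.array([log_bf_gprior(y, X, list(s), g) for s in models])
post = np.exp(logbf - logbf.max())
post /= post.sum()      # posterior model probabilities under a uniform prior
best = models[int(np.argmax(post))]
print(best, post.max())
```

With p regressors there are 2^p candidate models, which is why the paper resorts to MC3 sampling over model space rather than enumeration once p grows.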

9.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, a freely available software package. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and that model changes are easily implemented. Although WinBUGS may not be that efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible for applied researchers, and its generic structure allows for a lot of flexibility in model specification.

10.
There exists a common belief among researchers and regional policy makers that the current centralized system of Aeropuertos Españoles y Navegación Aérea (AENA) should be replaced by a more decentralized one in which airport managers have more autonomy. The main objective of this article is to evaluate the efficiency of Spanish airports using Markov chain Monte Carlo (MCMC) simulation to estimate a stochastic frontier analysis (SFA) model. Our results show the existence of a significant level of inefficiency in airport operations. Additionally, we provide efficient marginal cost estimates for each airport, which also cast some doubts on current pricing practices.

11.
With the growing study of long memory in economic and financial time series, estimation of the fractional integration order has become a focal issue in current theoretical research. Semiparametric estimators of the fractional integration order, represented by log-periodogram regression and local Whittle methods, are widely used in practice, but comparisons of the finite-sample properties of these two classes of semiparametric estimators are rare, which hampers the choice of estimator in applications. Using Monte Carlo simulation under different data-generating processes, our study of the finite-sample properties of these two semiparametric estimation methods shows that: under an ARFIMA(0, d, 0) process, LW-type estimators have good small-sample properties; under a stationary ARFIMA(1, d, 0) process, the QGPH estimator proposed in this paper has better finite-sample properties than the other log-periodogram estimators; and under non-stationary processes, MGPH has the smallest bias.

12.
A tutorial derivation of the reversible jump Markov chain Monte Carlo (MCMC) algorithm is given. Various examples illustrate how reversible jump MCMC is a general framework for Metropolis-Hastings algorithms where the proposal and the target distribution may have densities on spaces of varying dimension. It is finally discussed how reversible jump MCMC can be applied in genetics to compute the posterior distribution of the number, locations, effects, and genotypes of putative quantitative trait loci.

13.
In this paper we study the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present a simulation study.

14.
We investigate the optimal hedging strategy for a firm using options, where the roles of production and basis risk are considered. Contrary to the existing literature, we find that the exercise price which minimizes the shortfall of the hedged portfolio is primarily affected by the amount of cash spent on the hedging. Also, we decompose the effect of production and basis risk, showing that the former affects hedging effectiveness while the latter drives the choice of the optimal contract. Fitting the model parameters to match a financial turmoil scenario confirms that suboptimal option moneyness leads to a non-negligible economic loss.

15.
This paper analyzes the drivers of financial distress that were experienced by small Italian cooperative banks during the latest deep recession, focusing mainly on the importance of bank capital as a predictor of bankruptcy for Italian nonprofit banks. The analysis aims to build an early-warning model that is suitable for this type of bank. The results reveal non-monotonic effects of bank capital on the probability of failure. In contrast to distress models for for-profit banks, non-performing loans, profitability, liquidity, and management quality have a negligible predictive value. The findings also show that unreserved impaired loans have an important impact on the probability of bank distress. Moreover, the loan-loss provision ratio on substandard loans constitutes a suitable safeguard against bank distress. Overall, the results are robust in terms of both the methodology (i.e., frequentist and Bayesian approaches) and the sample used (i.e., cooperative banks in Italy and euro-area countries).

16.
Forecasting and turning point predictions in a Bayesian panel VAR model
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model, which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for hierarchical and for Minnesota-type priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.

17.
This paper is concerned with the Bayesian analysis of stochastic volatility (SV) models with leverage. Specifically, the paper shows how the often used Kim et al. [1998. Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies 65, 361–393] method that was developed for SV models without leverage can be extended to models with leverage. The approach relies on the novel idea of approximating the joint distribution of the outcome and volatility innovations by a suitably constructed ten-component mixture of bivariate normal distributions. The resulting posterior distribution is summarized by MCMC methods, and the small approximation error in working with the mixture approximation is corrected by a reweighting procedure. The overall procedure is fast and highly efficient. We illustrate the ideas on daily returns of the Tokyo Stock Price Index. Finally, extensions of the method are described for superposition models (where the log-volatility is made up of a linear combination of heterogeneous and independent autoregressions) and heavy-tailed error distributions (Student-t and log-normal).

18.
19.
In application areas which involve digitised speech and audio signals, such as coding, digital remastering of old recordings and recognition of speech, it is often desirable to reduce the effects of noise with the aim of enhancing intelligibility and perceived sound quality. We consider the case where noise sources contain non-Gaussian, impulsive elements superimposed upon a continuous Gaussian background. Such a situation arises in areas such as communications channels, telephony and gramophone recordings, where impulsive effects might be caused by electromagnetic interference (lightning strikes), electrical switching noise or defects in recording media, while electrical circuit noise or the combined effect of many distant atmospheric events leads to a continuous Gaussian component.
In this paper we discuss the background to this type of noise degradation and describe briefly some existing statistical techniques for noise reduction. We propose new methods for enhancement based upon Markov chain Monte Carlo (MCMC) simulation. Signals are modelled as autoregressive moving-average (ARMA), while noise sources are treated as discrete and continuous mixtures of Gaussian distributions. Results are presented for both real and artificially corrupted data sequences, illustrating the potential of the new methods.

20.
Markov chain Monte Carlo methods are frequently used in the analyses of genetic data on pedigrees for the estimation of probabilities and likelihoods which cannot be calculated by existing exact methods. In the case of discrete data, the underlying Markov chain may be reducible and care must be taken to ensure that reliable estimates are obtained. Potential reducibility thus has implications for the analysis of the mixed inheritance model, for example, where genetic variation is assumed to be due to one single locus of large effect and many loci each with a small effect. Similarly, reducibility arises in the detection of quantitative trait loci from incomplete discrete marker data. This paper aims to describe the estimation problem in terms of simple discrete genetic models and the single-site Gibbs sampler. Reducibility of the Gibbs sampler is discussed and some current methods for circumventing the problem outlined.
