Similar Articles
20 similar records found
1.
The nonnormal stable laws and Student t distributions are used to model the unconditional distribution of financial asset returns, as both models display heavy tails. The relevance of the two models is subject to debate because empirical estimates of the tail shape conditional on either model give conflicting signals. This stems from opposing bias terms. We exploit the biases to discriminate between the two distributions. A sign estimator for the second‐order scale parameter strengthens our results. Tail estimates based on asset return data match the bias induced by finite‐variance unconditional Student t data and the generalized autoregressive conditional heteroscedasticity process.

2.
In this paper we model Value‐at‐Risk (VaR) for daily asset returns using a collection of parametric univariate and multivariate models of the ARCH class based on the skewed Student distribution. We show that models that rely on a symmetric density distribution for the error term underperform with respect to skewed density models when the left and right tails of the distribution of returns must be modelled. Thus, VaR for traders having both long and short positions is not adequately modelled using usual normal or Student distributions. We suggest using an APARCH model based on the skewed Student distribution (combined with a time‐varying correlation in the multivariate case) to fully take into account the fat left and right tails of the returns distribution. This allows for an adequate modelling of large returns defined on long and short trading positions. The performances of the univariate models are assessed on daily data for three international stock indexes and three US stocks of the Dow Jones index. In a second application, we consider a portfolio of three US stocks and model its long and short VaR using a multivariate skewed Student density. Copyright © 2003 John Wiley & Sons, Ltd.
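The quantile logic behind long and short VaR can be sketched with standard tools. scipy has no skewed Student density, so a symmetric Student t is fitted here to simulated returns purely to show how the two VaR levels are read off opposite tails of the fitted distribution; all parameter values below are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated heavy-tailed daily returns (Student t, df = 5) -- stand-in data.
returns = stats.t.rvs(df=5, scale=0.01, size=2000, random_state=rng)

# Fit a Student t by maximum likelihood. The paper's skewed Student would
# let the two tails differ; the symmetric fit shown here cannot.
df, loc, scale = stats.t.fit(returns)

alpha = 0.05
var_long = -stats.t.ppf(alpha, df, loc=loc, scale=scale)      # left-tail loss (long position)
var_short = stats.t.ppf(1 - alpha, df, loc=loc, scale=scale)  # right-tail loss (short position)
```

With a skewed density the two numbers would generally differ; here they come out nearly equal because the fitted law is symmetric, which is precisely the inadequacy the abstract points to.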

3.
It is a matter of common observation that investors value substantial gains but are averse to heavy losses. Obvious as it may sound, this translates into an interesting preference for right-skewed return distributions, whose right tails are heavier than their left tails. Skewness is thus not only a way to describe the shape of a distribution, but also a tool for risk measurement. We review the statistical literature on skewness and provide a comprehensive framework for its assessment. Then, we present a new measure of skewness, based on the decomposition of variance into its upward and downward components. We argue that this measure fills a gap in the literature and show in a simulation study that it strikes a good balance between robustness and sensitivity.
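The variance-decomposition idea can be illustrated with a short sketch. This is one plausible normalised version of such a measure, assuming it contrasts upward and downward semivariances about the mean; the paper's exact definition may differ.

```python
import numpy as np

def updown_skew(x):
    """Skewness from the split of variance into upward and downward
    components about the mean; returns a value in [-1, 1]."""
    d = np.asarray(x, dtype=float) - np.mean(x)
    up = np.mean(np.where(d > 0.0, d, 0.0) ** 2)    # upward semivariance
    down = np.mean(np.where(d < 0.0, d, 0.0) ** 2)  # downward semivariance
    return (up - down) / (up + down)

rng = np.random.default_rng(1)
right_skewed = rng.lognormal(size=10_000)  # heavy right tail -> positive value
symmetric = rng.normal(size=10_000)        # symmetric -> value near zero
```

Bounding the measure in [-1, 1] is what gives robustness relative to the conventional third-moment skewness coefficient, which is unbounded and dominated by extreme observations.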

4.
Robust Likelihood Methods Based on the Skew-t and Related Distributions
The robustness problem is tackled by adopting a parametric class of distributions flexible enough to match the behaviour of the observed data. In a variety of practical cases, one reasonable option is to consider distributions which include parameters to regulate their skewness and kurtosis. As a specific representative of this approach, the skew‐t distribution is explored in more detail and reasons are given to adopt this option as a sensible general‐purpose compromise between robustness and simplicity, both of treatment and of interpretation of the outcome. Some theoretical arguments, outcomes of a few simulation experiments and various wide‐ranging examples with real data are provided in support of the claim.

5.
A wide class of prior distributions for the Poisson‐gamma hierarchical model is proposed. Prior distributions in this class carry vague information in the sense that their tails exhibit slow decay. Conditions for the propriety of the resulting posterior density are determined, as well as for the existence of posterior moments of the Poisson rate of either an observed or an unobserved unit.

6.
We construct a copula from the skew t distribution of Sahu et al. (2003). This copula can capture asymmetric and extreme dependence between variables, and is one of the few copulas that can do so and still be used in high dimensions effectively. However, it is difficult to estimate the copula model by maximum likelihood when the multivariate dimension is high, or when some or all of the marginal distributions are discrete‐valued, or when the parameters in the marginal distributions and copula are estimated jointly. We therefore propose a Bayesian approach that overcomes all these problems. The computations are undertaken using a Markov chain Monte Carlo simulation method which exploits the conditionally Gaussian representation of the skew t distribution. We employ the approach in two contemporary econometric studies. The first is the modelling of regional spot prices in the Australian electricity market. Here, we observe complex non‐Gaussian margins and nonlinear inter‐regional dependence. Accurate characterization of this dependence is important for the study of market integration and risk management purposes. The second is the modelling of ordinal exposure measures for 15 major websites. Dependence between websites is important when measuring the impact of multi‐site advertising campaigns. In both cases the skew t copula substantially outperforms symmetric elliptical copula alternatives, demonstrating that the skew t copula is a powerful modelling tool when coupled with Bayesian inference. Copyright © 2010 John Wiley & Sons, Ltd.

7.
We develop a novel high‐dimensional non‐Gaussian modeling framework to infer measures of conditional and joint default risk for numerous financial sector firms. The model is based on a dynamic generalized hyperbolic skewed‐t block equicorrelation copula with time‐varying volatility and dependence parameters that naturally accommodates asymmetries and heavy tails, as well as nonlinear and time‐varying default dependence. We apply a conditional law of large numbers in this setting to define joint and conditional risk measures that can be evaluated quickly and reliably. We apply the modeling framework to assess the joint risk from multiple defaults in the euro area during the 2008–2012 financial and sovereign debt crisis. We document unprecedented tail risks between 2011 and 2012, as well as their steep decline following subsequent policy actions. Copyright © 2016 John Wiley & Sons, Ltd.

8.
Many asset prices, including exchange rates, exhibit periods of stability punctuated by infrequent, substantial, often one‐sided adjustments. Statistically, this generates empirical distributions of exchange rate changes that exhibit high peaks, long tails, and skewness. This paper introduces a GARCH model, with a flexible parametric error distribution based on the exponential generalized beta (EGB) family of distributions. Applied to daily US dollar exchange rate data for six major currencies, evidence based on a comparison of actual and predicted higher‐order moments and goodness‐of‐fit tests favours the GARCH‐EGB2 model over more conventional GARCH‐t and EGARCH‐t model alternatives, particularly for exchange rate data characterized by skewness. Copyright © 2001 John Wiley & Sons, Ltd.
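The EGB2 error law is not available in common libraries, so as a hedged stand-in here is a GARCH(1,1) simulation with Student-t shocks: it shows how volatility clustering combined with heavy-tailed errors produces the high-peaked, long-tailed return distributions the abstract describes. All parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
T, omega, a, b, df = 20_000, 1e-6, 0.10, 0.85, 6

# Unit-variance Student-t shocks (t with df degrees of freedom has variance df/(df-2)).
eps = rng.standard_t(df, size=T) * np.sqrt((df - 2) / df)

r = np.empty(T)
sig2 = np.empty(T)
sig2[0] = omega / (1 - a - b)      # start at the unconditional variance
r[0] = np.sqrt(sig2[0]) * eps[0]
for t in range(1, T):
    sig2[t] = omega + a * r[t - 1] ** 2 + b * sig2[t - 1]  # GARCH(1,1) recursion
    r[t] = np.sqrt(sig2[t]) * eps[t]

kurt = np.mean(r**4) / np.mean(r**2) ** 2  # well above 3 signals fat tails
```

The t shocks alone would give kurtosis around 6; the GARCH recursion amplifies it further, which is the "high peaks, long tails" pattern the abstract starts from.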

9.
This paper derives a procedure for simulating continuous non‐normal distributions with specified L‐moments and L‐correlations in the context of power method polynomials of order three. It is demonstrated that the proposed procedure has computational advantages over the traditional product‐moment procedure in terms of solving for intermediate correlations. Simulation results also demonstrate that the proposed L‐moment‐based procedure is an attractive alternative to the traditional procedure when distributions with more severe departures from normality are considered. Specifically, estimates of L‐skew and L‐kurtosis are superior to the conventional estimates of skew and kurtosis in terms of both relative bias and relative standard error. Further, the L‐correlation is also demonstrated to be less biased and more stable than the Pearson correlation. It is also shown how the proposed L‐moment‐based procedure can be extended to the larger class of power method distributions associated with polynomials of order five.
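Sample L-moments are cheap to compute from order statistics. The following is a sketch of the standard direct estimators via unbiased probability-weighted moments (not the paper's own code), checked against a normal sample, for which the L-skew is 0 and the L-kurtosis is about 0.1226.

```python
import numpy as np

def l_moment_ratios(x):
    """Sample L-mean, L-scale, L-skew and L-kurtosis via the
    unbiased probability-weighted moments b0..b3."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = np.mean(x)
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0                            # L-mean
    l2 = 2 * b1 - b0                   # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2    # ratios l3/l2 = L-skew, l4/l2 = L-kurtosis

rng = np.random.default_rng(2)
l1, l2, lskew, lkurt = l_moment_ratios(rng.normal(size=50_000))
```

For the standard normal, the L-scale is 1/√π ≈ 0.5642; because each L-moment is linear in the order statistics, these estimates avoid the squaring and cubing that make conventional skew and kurtosis estimates so variable, which is the robustness advantage the abstract reports.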

10.
11.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small‐scale semistructural models at the national and Euro area level, institutional forecasts (Organization for Economic Co‐operation and Development), and pooling. Our small‐scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the relevant fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time‐series methods and pooling work well and are able to deliver unbiased, or slightly upward‐biased, forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic changes. A bootstrap experiment highlights that, even when the data are generated using the estimated small‐scale multi‐country model, simple time‐series models can produce more accurate forecasts, because of their parsimonious specification.

12.
This paper applies a novel and recently developed method of statistical analysis to the assessment of financial risk, namely regular vine copulas. Dependence modelling using copulas is a popular tool in financial applications but is usually applied to pairs of securities. Vine copulas offer greater flexibility and permit the modelling of complex dependence patterns using the rich variety of bivariate copulas that can be arranged and analysed in a tree structure to facilitate the analysis of multiple dependencies. We apply regular vine copula analysis to a sample of stocks comprising the Dow Jones index to assess their interdependencies and to assess how their correlations change in different economic circumstances, using three sample periods around the Global Financial Crisis (GFC): a pre‐GFC period (January 2005 to July 2007), the GFC period (July 2007 to September 2009) and a post‐GFC period (September 2009 to December 2011). The empirical results suggest that the dependencies change in a complex manner, and there is evidence of greater reliance on the Student‐t copula in the copula choice within the tree structures for the GFC period, which is consistent with fatter tails in the distributions of returns for this period. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co‐dependencies. The practical application of regular vine metrics is demonstrated via an example of the calculation of the Value at Risk of a portfolio of stocks.

13.
The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy‐to‐implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root‐n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data‐generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.

14.
The analysis of sports data, in particular football match outcomes, has long attracted immense interest among statisticians. In this paper, we adopt the generalized Poisson difference distribution (GPDD) to model the goal difference of football matches. We discuss the advantages of the proposed model over the Poisson difference (PD) model, which has also been used for this purpose. The GPDD model, like the PD model, is based on the goal difference in each game, which allows us to account for the correlation between the two scores without explicitly modelling it. The main advantage of the GPDD model is its flexibility in the tails: it accommodates both shorter and longer tails than the PD distribution. We carry out the analysis in a Bayesian framework in order to incorporate external information, such as historical knowledge or data, through the prior distributions. We model both the mean and the variance of the goal difference and show that such a model performs considerably better than a model with a fixed variance. Finally, the proposed model is fitted to the 2012–2013 Italian Serie A football data, and various model diagnostics are carried out to evaluate its performance.
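The PD baseline that the GPDD generalises is the Skellam distribution (difference of two independent Poissons), available in scipy; the extra GPDD dispersion parameter that fattens or thins the tails is not implemented there, so this sketch covers only the PD special case, with hypothetical scoring rates.

```python
from scipy import stats

mu_home, mu_away = 1.6, 1.1           # hypothetical Poisson scoring rates
gd = stats.skellam(mu_home, mu_away)  # goal difference = home goals - away goals

p_home_win = 1.0 - gd.cdf(0)  # P(difference >= 1)
p_draw = gd.pmf(0)
p_away_win = gd.cdf(-1)       # P(difference <= -1)
```

Working directly with the difference, as here, is what lets the model sidestep the correlation between the two teams' scores: any common shock that raises or lowers both scores cancels in the difference.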

15.
This work explores some distributional properties of aggregate output growth‐rate time series. We show that, in the majority of OECD countries, output growth‐rate distributions are well approximated by symmetric exponential power densities with tails much fatter than those of a Gaussian (but with finite moments of any order). Fat tails robustly emerge in output growth rates independently of: (i) the way we measure aggregate output; (ii) the family of densities employed in the estimation; (iii) the length of time lags used to compute growth rates. We also show that fat tails still characterize output growth‐rate distributions even after one washes away outliers, autocorrelation and heteroscedasticity. Copyright © 2008 John Wiley & Sons, Ltd.

16.
Univariate continuous distributions are one of the fundamental components on which statistical modelling, ancient and modern, frequentist and Bayesian, multi‐dimensional and complex, is based. In this article, I review and compare some of the main general techniques for providing families of typically unimodal distributions on the real line with one or two, or possibly even three, shape parameters, controlling skewness and/or tailweight, in addition to their all‐important location and scale parameters. One important and useful family is comprised of the ‘skew‐symmetric’ distributions brought to prominence by Azzalini. As these are covered in considerable detail elsewhere in the literature, I focus more on their complements and competitors. Principal among these are distributions formed by transforming random variables, by what I call ‘transformation of scale’—including two‐piece distributions—and by probability integral transformation of non‐uniform random variables. I also treat briefly the issues of multi‐variate extension, of distributions on subsets of the real line and of distributions on the circle. The review and comparison is not comprehensive, necessarily being selective and therefore somewhat personal. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute

17.
Most of the empirical applications of the stochastic volatility (SV) model are based on the assumption that the conditional distribution of returns, given the latent volatility process, is normal. In this paper, the SV model based on a conditional normal distribution is compared with SV specifications using conditional heavy‐tailed distributions, especially Student's t‐distribution and the generalized error distribution. To estimate the SV specifications, a simulated maximum likelihood approach is applied. The results based on daily data on exchange rates and stock returns reveal that the SV model with a conditional normal distribution does not adequately account for the two following empirical facts simultaneously: the leptokurtic distribution of the returns and the low but slowly decaying autocorrelation functions of the squared returns. It is shown that these empirical facts are more adequately captured by an SV model with a conditional heavy‐tailed distribution. It also turns out that the choice of the conditional distribution has systematic effects on the parameter estimates of the volatility process. Copyright © 2000 John Wiley & Sons, Ltd.
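The mechanism underlying the comparison above — conditionally normal returns that are nonetheless leptokurtic unconditionally — is easy to see by simulating a basic SV model with an AR(1) log-variance. Parameter values are merely illustrative, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
T, mu, phi, sigma_eta = 20_000, -9.0, 0.97, 0.2

h = np.empty(T)  # h_t = log variance, AR(1) around its mean mu
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()

returns = np.exp(h / 2) * rng.normal(size=T)  # conditionally normal given h_t

# Mixing the normal over a random variance makes the unconditional law fat-tailed.
kurtosis = np.mean(returns**4) / np.mean(returns**2) ** 2
```

The tension the abstract identifies arises because a single pair (phi, sigma_eta) must deliver both the observed kurtosis and the observed persistence of squared returns; a heavy-tailed conditional law decouples the two.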

18.
This paper focuses on the analysis of size distributions of innovations, which are known to be highly skewed. We use patent citations as one indicator of innovation significance, constructing two large datasets from the European and US Patent Offices at a high level of aggregation, and the Trajtenberg [1990, A penny for your quotes: patent citations and the value of innovations. Rand Journal of Economics 21(1), 172–187] dataset on CT scanners at a very low one. We also study self-assessed reports of patented innovation values using two very recent patent valuation datasets from the Netherlands and the UK, as well as a small dataset of patent licence revenues of Harvard University. Statistical methods are applied to analyse the properties of the empirical size distributions, where we put special emphasis on testing for the existence of ‘heavy tails’, i.e., whether or not the probability of very large innovations declines more slowly than exponentially. While overall the distributions appear to resemble a lognormal, we argue that the tails are indeed fat. We invoke some recent results from extreme value statistics and apply the Hill [1975. A simple general approach to inference about the tails of a distribution. The Annals of Statistics 3, 1163–1174] estimator with data-driven cut-offs to determine the tail index for the right tails of all datasets except the NL and UK patent valuations. On these latter datasets we use a maximum likelihood estimator for grouped data to estimate the tail index for varying definitions of the right tail. We find significantly and consistently lower tail estimates for the returns data than the citation data (around 0.6–1 vs. 3–5). The EPO and US patent citation tail indices are roughly constant over time, but the latter estimates are significantly lower than the former. 
The heaviness of the tails, particularly as measured by value indicators, we argue, has significant implications for technology policy and growth theory, since the second and possibly even the first moments of these distributions may not exist.
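The Hill estimator used above is simple to state: average the log-exceedances of the k largest observations over the (k+1)-th largest, and invert. A sketch with a fixed k (the paper uses data-driven cut-offs), checked on exact Pareto data where the true tail index is known:

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the tail index from the k largest observations."""
    x = np.sort(np.asarray(x, dtype=float))
    threshold = x[-k - 1]  # (k+1)-th largest order statistic
    return 1.0 / np.mean(np.log(x[-k:] / threshold))

rng = np.random.default_rng(4)
alpha_true = 3.0
# Classical Pareto with survival function x**(-alpha) on [1, inf).
sample = rng.pareto(alpha_true, size=100_000) + 1.0
alpha_hat = hill(sample, k=2000)
```

A tail index below 2 means the variance does not exist, and below 1 even the mean diverges, which is why the estimates around 0.6–1 for the returns data carry the policy implications noted above.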

19.
Univariate continuous distributions have three possible types of support, exemplified by the whole real line (−∞, ∞), the semi‐finite interval (0, ∞) and the bounded interval (0, 1). This paper is about connecting distributions on these supports via ‘natural’ simple transformations in such a way that tail properties are preserved. In particular, this work is focussed on the case where the tails (at ±∞) of densities are heavy, decreasing as a (negative) power of their argument; connections are then especially elegant. At boundaries (0 and 1), densities behave conformably with a directly related dependence on power of argument. The transformation from (0, 1) to (0, ∞) is the standard odds transformation. The transformation from (0, ∞) to the whole real line is a novel identity‐minus‐reciprocal transformation. The main points of contact with existing distributions are with the transformations involved in the Birnbaum–Saunders distribution and, especially, the Johnson family of distributions. Relationships between various other existing and newly proposed distributions are explored.
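The two transformations named in the abstract are easy to state concretely: the odds map u/(1−u) carries (0, 1) onto (0, ∞), and the identity-minus-reciprocal map y − 1/y carries (0, ∞) onto the whole real line; both are strictly increasing, which is what lets them transport distributions between the three supports. A quick numerical check:

```python
import numpy as np

def odds(u):
    """Odds transformation: (0, 1) -> (0, inf)."""
    return u / (1.0 - u)

def identity_minus_reciprocal(y):
    """Identity-minus-reciprocal transformation: (0, inf) -> (-inf, inf)."""
    return y - 1.0 / y

u = np.linspace(0.01, 0.99, 99)    # grid inside (0, 1)
y = odds(u)                        # lands in (0, inf)
z = identity_minus_reciprocal(y)   # lands on the whole real line
```

Near y = 0 the map behaves like −1/y and for large y like y itself, so power behaviour at the lower boundary is carried into the left tail and power behaviour in the upper tail is preserved, which is the tail-preservation property the paper exploits.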

20.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro‐area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro‐area‐wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss the use as indicators of the estimated factors from a dynamic factor model for all the indicators. Secondly, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors and methods of pooling the ex ante single‐indicator or factor‐based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.
