Similar Literature
20 similar documents found (search time: 15 ms)
1.
This paper presents a new method to validate risk models: the Risk Map. This method jointly accounts for the number and the magnitude of extreme losses and graphically summarizes all information about the performance of a risk model. It relies on the concept of a super exception, which is defined as a situation in which the loss exceeds both the standard Value-at-Risk (VaR) and a VaR defined at an extremely low probability. We then formally test whether the sequences of exceptions and super exceptions are rejected by standard model validation tests. We show that the Risk Map can be used to validate market, credit, operational, or systemic risk estimates (VaR, stressed VaR, expected shortfall, and CoVaR) or to assess the performance of the margin system of a clearing house.
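The super-exception logic can be sketched with historical quantiles. This is a minimal illustration on simulated losses; the 99% and 99.8% levels and the data are placeholders, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=1000)  # simulated daily losses

# Standard VaR and a VaR at an extremely low exceedance probability.
var_std = np.quantile(losses, 0.99)    # standard 99% VaR
var_ext = np.quantile(losses, 0.998)   # extreme-level VaR

exceptions = losses > var_std          # loss exceeds the standard VaR
super_exceptions = losses > var_ext    # loss also exceeds the extreme VaR

# The Risk Map jointly examines both exception counts.
print(int(exceptions.sum()), int(super_exceptions.sum()))
```

The paper's formal step, not shown here, is to test both sequences jointly against standard validation tests.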

2.
This paper proposes the SU-normal distribution to describe non-normality features embedded in financial time series, such as asymmetry and fat tails. Applying the SU-normal distribution to the estimation of univariate and multivariate GARCH models, we test its validity in capturing the asymmetry and excess kurtosis of heteroscedastic asset returns. We find that the SU-normal distribution outperforms the normal and Student-t distributions in describing both the entire shape of the conditional distribution and the extreme tail shape of daily exchange rates and stock returns. The goodness-of-fit (GoF) results indicate that skewness and excess kurtosis are better captured by the SU-normal distribution. The exceeding ratio (ER) test results indicate that the SU-normal is superior to the normal and Student-t distributions, which consistently underestimate both the lower and upper extreme tails, and tend to overestimate the lower tail in general.
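The SU-normal is Johnson's S_U distribution, which SciPy implements as `johnsonsu`. A minimal sketch of the fit comparison on simulated skewed, fat-tailed returns (the shape parameters are illustrative, not estimates from the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated skewed, fat-tailed "returns" drawn from a Johnson S_U law.
returns = stats.johnsonsu.rvs(a=-0.5, b=1.5, size=2000, random_state=rng)

# Fit Johnson S_U and normal by maximum likelihood.
su_params = stats.johnsonsu.fit(returns)
norm_params = stats.norm.fit(returns)

# Compare log-likelihoods: the S_U fit should dominate on this data.
ll_su = stats.johnsonsu.logpdf(returns, *su_params).sum()
ll_norm = stats.norm.logpdf(returns, *norm_params).sum()
print(ll_su > ll_norm)
```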

3.
This paper analyzes ΔCoVaR, proposed by Adrian and Brunnermeier (2011), as a tool for identifying and ranking systemically important institutions. We develop a test of significance of ΔCoVaR that allows one to determine whether or not a financial institution can be classified as systemically important on the basis of its estimated systemic risk contribution, as well as a dominance test of whether, according to ΔCoVaR, one financial institution is more systemically important than another. We provide an empirical application on a sample of 26 large European banks to show the importance of statistical testing when using ΔCoVaR, and more generally other market-based systemic risk measures, in this context.

4.
By assuming that a large share of investors (which we call fundamentalists) follows a fundamental approach to stock picking, we build a discounted cash flow (DCF) model and test, on a sample of high-tech stocks, whether the strong and the weak versions of the model are supported by data from the US and European stock markets. Empirical results show that “fundamental” earning price ratios explain a significant share of the cross-sectional variation of the observed E/P ratios, with other additional variables being only partially and weakly relevant. Within this general framework, valid both for Europe and the US, the empirical results outline significant differences between the two markets. The most notable is that the relationship between observed and fundamental E/P ratios is much weaker in Europe.

5.
Hill estimation (Hill, 1975), the most widespread method for estimating the tail thickness of heavy-tailed financial data, suffers from two drawbacks. One is that the optimal number of tail observations to use in the estimation is a function of the unknown tail index being estimated, which diminishes the empirical relevance of the Hill estimation. The other is that the hypothesis test of the underlying data lying in the domain of attraction of an α-stable law (α < 2) or of a normal law (α ≥ 2) is performed, even for finite samples, on the basis of the asymptotic distribution, which can differ from the finite-sample distribution. In this paper, using the Monte Carlo technique, we propose an exact test method for the stability parameter of α-stable distributions which is based on the Hill estimator, yet is able to provide exact confidence intervals for finite samples. Our exact test method automatically includes an estimation procedure which does not need the assumption of a known number of observations on the distributional tail. Empirical applications demonstrate the advantages of our new method in comparison with the Hill estimation.
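The classical Hill estimator underlying the test can be sketched directly; the choice k = 500 below illustrates exactly the arbitrary-tail-fraction problem the abstract describes:

```python
import numpy as np

def hill_estimator(data, k):
    """Hill (1975) tail-index estimate from the k largest observations."""
    x = np.sort(np.abs(data))[::-1]      # descending order statistics
    logs = np.log(x[:k]) - np.log(x[k])  # log-spacings over the k-th largest
    return 1.0 / logs.mean()             # alpha-hat = 1 / mean log-spacing

rng = np.random.default_rng(3)
# Pareto tail with true index alpha = 1.5: P(X > x) = x^(-1.5).
data = rng.pareto(1.5, size=20000) + 1.0
alpha_hat = hill_estimator(data, k=500)
print(round(alpha_hat, 2))
```

The estimate depends on k, whose optimal value depends on the unknown index itself; the paper's Monte Carlo procedure is designed to sidestep that circularity.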

6.
In order to provide reliable Value-at-Risk (VaR) and Expected Shortfall (ES) forecasts, this paper investigates whether an inter-day or an intra-day model provides accurate predictions. We investigate the performance of inter-day and intra-day volatility models by estimating the AR(1)-GARCH(1,1)-skT and the AR(1)-HAR-RV-skT frameworks, respectively. This paper is based on the recommendations of the Basel Committee on Banking Supervision. Regarding forecasting performance, the exploitation of intra-day information does not appear to improve the accuracy of the 10-step-ahead and 20-step-ahead VaR and ES forecasts at the 95%, 97.5% and 99% confidence levels. On the contrary, the GARCH specification, based on the inter-day information set, is the superior model for forecasting the multiple-days-ahead VaR and ES measurements. The intra-day volatility model is not as appropriate as expected for each of the different asset classes: stock indices, commodities and exchange rates.

The multi-period VaR and ES forecasts are estimated for a range of datasets (stock indices, commodities, foreign exchange rates) in order to provide risk managers and financial institutions with information relating to the performance of the inter-day and intra-day volatility models across various markets. The inter-day specification predicts the VaR and ES measures adequately at the 95% confidence level. Regarding the 97.5% confidence level, recently proposed in the revised 2013 version of Basel III, the GARCH-skT specification provides accurate forecasts of the risk measures for stock indices and exchange rates, but not for commodities (that is, Silver and Gold). At the 99% confidence level, we do not achieve sufficiently accurate VaR and ES forecasts for all the assets.
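The inter-day GARCH(1,1) variance recursion and the resulting one-step VaR and ES can be sketched as follows. This is a simplified version with normal innovations and fixed, illustrative parameters (the paper uses a skewed-t and estimates the full AR(1)-GARCH(1,1)-skT model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
returns = 0.01 * rng.standard_normal(500)   # placeholder daily returns

# GARCH(1,1) variance recursion with illustrative (not estimated) parameters.
omega, alpha, beta = 1e-6, 0.08, 0.90
h = np.empty(len(returns) + 1)
h[0] = returns.var()
for t in range(len(returns)):
    h[t + 1] = omega + alpha * returns[t] ** 2 + beta * h[t]

# One-step-ahead 95% VaR and ES under the normal innovation assumption.
sigma = np.sqrt(h[-1])
z = stats.norm.ppf(0.05)
var_95 = -sigma * z                          # 95% VaR (positive loss number)
es_95 = sigma * stats.norm.pdf(z) / 0.05     # 95% ES, always above the VaR
print(var_95 < es_95)
```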

7.
A global consistency result for the ML estimator of a misspecified two-parameter Pareto distribution is proved. The misspecification is due to the assumption of a wrong inflation rate, which violates the i.i.d. assumption in the model. We also investigate how far from the true parameters the ML estimator of the misspecified model lies (asymptotically, for a small misspecification r). Finally, for the case where the misspecification depends on the number of observations n, i.e., r = r_n with r_n → 0 as n → ∞, we prove a central limit theorem for the ML estimator.
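For reference, the (correctly specified) two-parameter Pareto ML estimator has a closed form, which is the baseline the paper perturbs. A minimal sketch on simulated data:

```python
import numpy as np

def pareto_mle(x):
    """Closed-form ML estimates for a two-parameter Pareto sample:
    scale x_m is the sample minimum; shape alpha is n / sum(log(x / x_m))."""
    xm_hat = x.min()
    alpha_hat = len(x) / np.log(x / xm_hat).sum()
    return xm_hat, alpha_hat

rng = np.random.default_rng(5)
# Pareto sample with true shape alpha = 2.5 and scale x_m = 3.
x = (rng.pareto(2.5, size=50000) + 1.0) * 3.0
xm_hat, alpha_hat = pareto_mle(x)
print(round(xm_hat, 2), round(alpha_hat, 2))
```

The paper's question is what happens to these estimates when each observation is deflated at a slightly wrong rate r, so the sample is no longer i.i.d. Pareto.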

8.
Following Hansen and Jagannathan (J. Finance 52 (1997) 557), Jagannathan and Wang (J. Finance 51 (1996) 3) propose a distance measure that estimates the maximum pricing error generated by a linear asset pricing model. Jagannathan and Wang propose a test of this HJ-distance using an empirical p-value as an alternative generalized method of moments (GMM) measure to Hansen's (Econometrica 50 (1982) 1029) GMM specification test. Using Monte Carlo analysis, we examine the finite-sample properties of these specification tests. While the Hansen test mildly overrejects correct models at commonly used sample sizes, the empirical p-value of the HJ-distance rejects correct models too severely in such samples to provide a valid test.
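The sample HJ-distance is the square root of the pricing errors weighted by the inverse second-moment matrix of returns. A minimal sketch on simulated gross returns, with an illustrative (not estimated) linear SDF:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_assets = 1000, 5
f = rng.standard_normal(T)                       # a single pricing factor
loadings = np.linspace(0.5, 1.5, n_assets)
R = (1.0 + 0.02 * np.outer(f, loadings)
         + 0.05 * rng.standard_normal((T, n_assets)))  # gross returns

m = 1.0 - 0.2 * f            # candidate linear SDF m_t = b0 + b1 * f_t

# Pricing errors E[m R] - 1 and second-moment weighting E[R R'].
g = (R * m[:, None]).mean(axis=0) - 1.0
G = (R.T @ R) / T
hj = np.sqrt(g @ np.linalg.solve(G, g))   # sample HJ-distance
print(round(hj, 4))
```

The paper's focus is on the finite-sample distribution of test statistics built from this quantity, which the sketch does not address.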

9.
This paper examines asset pricing theories for treasury bonds using longer maturities than previous studies and employing a simple multi-factor model. We allow bond factor loadings to vary over time according to term structure variables. The model examines not only the time variation in the expected returns of bonds but also their unexpected returns. This allows us to explicitly test some asset pricing restrictions which are difficult to study under existing frameworks. We confirm that the pure expectations theory of the term structure of interest rates is rejected by the data. Our empirical study of a two-factor model finds substantial evidence of time-varying term premiums and factor loadings. The fact that factor loadings vary with long-term interest rates and yield spreads suggests that bond return volatilities are sensitive to interest rate levels.

10.
Private information is a common problem in banking and corporate finance research. Heckman's (1979) two-step estimator is commonly used to test for sample selection using a simple t-test on the inverse Mills ratio (IMR) coefficient. Following Puri (1996), this test is often interpreted as a test for private information. We conduct a series of Monte Carlo simulations to show that researchers can reliably use the Heckman estimator to test for private information when this private information is random. However, private information often takes the form of an omitted variable with a deterministic relationship to selection and outcomes. In this case, we show that the IMR coefficient is biased and inconsistent and that t-tests lead to incorrect conclusions regarding the significance of private information as well as its impact on selection and outcomes. We illustrate our results using a unique case in prior literature in which a bank's prior information was revealed. In conclusion, the Heckman model cannot be interpreted as a test for private information (or sample selection) when private information takes the form of an omitted variable in the first-stage regression.
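The correction term at the center of this debate is the inverse Mills ratio, λ(z) = φ(z)/Φ(z), evaluated at the first-stage probit index. A minimal sketch of the quantity itself:

```python
import numpy as np
from scipy import stats

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z), the Heckman selection correction term.
    In the two-step estimator it is computed at the first-stage probit
    index and added as a regressor in the second-stage outcome equation."""
    return stats.norm.pdf(z) / stats.norm.cdf(z)

# Illustrative index values; lambda is positive and strictly decreasing.
z = np.array([-1.0, 0.0, 1.0, 2.0])
imr = inverse_mills_ratio(z)
print(np.round(imr, 3))
```

The paper's point is that a t-test on the coefficient of this term is only a valid test for private information under conditions that deterministic omitted variables violate.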

11.
We derive the default cascade model and the fire-sale spillover model in a unified interdependent framework. The interactions among banks include not only direct cross-holding, but also indirect dependency created by holding mutual assets outside the banking system. Using data extracted from the European Banking Authority, we present the interdependency network composed of 48 banks and 21 asset classes. For robustness, we employ three methods, called Anan, Hała and Maxe, to reconstruct the asset/liability cross-holding network. We then combine the external portfolio holdings of each bank to compute the interdependency matrix. The interdependency network is much denser than the direct cross-holding network, revealing the complex latent interactions among banks. Finally, we perform macroprudential stress tests for the European banking system, using the adverse scenario in the EBA stress test as the initial shock. For the different reconstructed networks, we illustrate the hierarchical cascades and show that the failure hierarchies are roughly the same except for a few banks, reflecting that overlapping portfolio holdings account for the majority of defaults. We also calculate systemic vulnerability and individual vulnerability, which provide important information for supervision and relevant management actions.

12.
We explore theoretically and empirically the relationship between firm productivity and liquidity management in the presence of financial frictions. We build a dynamic investment model and show that, counter to basic economic intuition, more productive firms could demand less capital assets and hold more liquid assets compared to less productive firms when financing costs are sufficiently high. We empirically test this prediction using a comprehensive dataset of Chinese manufacturers and find that more productive firms indeed hold less capital and more cash. We do not, however, observe this for US manufacturers. Our study suggests a larger capital misallocation problem in markets with significant financing frictions than previously documented.

13.
This study extends the HAR-RV model to compare in detail the roles of leverage effects, jumps, and overnight information in predicting the realized volatilities (RV) of 21 international equity indices. First, the in-sample results suggest that these three factors have a significantly negative impact in most international equity markets. Second, the out-of-sample results show that leverage effects and overnight information have stronger predictive power than jumps. Furthermore, we provide convincing evidence that using these three factors simultaneously produces the best predictions for almost all international equity markets at all forecast horizons. Finally, empirical results from an alternative prediction window, the Direction-of-Change test, the out-of-sample R2 test, alternative loss functions, and an alternative volatility estimator confirm that our results are robust.
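The baseline HAR-RV regression that the study extends projects next-day RV on daily, weekly (5-day) and monthly (22-day) RV averages. A minimal sketch with placeholder data (the leverage, jump, and overnight regressors of the extended model are omitted):

```python
import numpy as np

rng = np.random.default_rng(7)
rv = np.exp(0.3 * rng.standard_normal(600))   # placeholder realized variances

def rolling_mean(x, w):
    """Means of all length-w windows of x, via cumulative sums."""
    c = np.concatenate(([0.0], np.cumsum(x)))
    return (c[w:] - c[:-w]) / w

# Align regressors so each row ends at day t and the target is RV_{t+1}.
d = rv[21:-1]                      # RV_t
wk = rolling_mean(rv, 5)[17:-1]    # mean of RV_{t-4..t}
mo = rolling_mean(rv, 22)[:-1]     # mean of RV_{t-21..t}
y = rv[22:]                        # RV_{t+1}

X = np.column_stack([np.ones_like(d), d, wk, mo])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))
```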

14.
The paper introduces and estimates a multivariate level-GARCH model for the long rate and the term-structure spread, where the conditional volatility is proportional to the γth power of the variable itself (level effects) and the conditional covariance matrix evolves according to a multivariate GARCH process (heteroskedasticity effects). The long-rate variance exhibits heteroskedasticity effects and level effects in accordance with the square-root model. The spread variance exhibits heteroskedasticity effects but no level effects. The level-GARCH model is preferred over the GARCH model and the level model. GARCH effects are more important than level effects. The results are robust to the maturity of the interest rates.

15.
The market model is commonly used in finance to study events and to evaluate security performance. With daily data, it is not uncommon to find low R-squares, in the range 0–10%. Prior studies have attempted to improve the fit of the model by excluding observations associated with high trading volume. In this study, we compare the results of the high-volume-exclusion approach with the more direct firm-specific announcement-exclusion approach. The announcement approach excludes observations associated with Wall Street Journal Index news items regarding the firm. By excluding the [−1,0] days relative to such news in a sample of 68 firms, we find that R-squares increase significantly, by about 5%. By excluding the days relative to earnings announcements only, R-squares increase by about 4%. These results are then compared to the high-volume-exclusion approach, which is found to be more efficient, producing an 8% increase in R-squares.

The results of this study provide valuable evidence to empiricists by comparing the two approaches to improving the fit of the market model. The high-volume-exclusion approach provides higher R-squares. However, the relative efficiency of the two approaches should be balanced against the arguments for the methodologically correct approach. The advantage of using the firm-specific announcement-exclusion approach is that there is more confidence that only firm-specific movements are excluded from the estimation of the market model. It also allows a researcher to quickly and unambiguously identify the announcements and delete the corresponding observations. Furthermore, we find that about 50% of the improved fit, relative to the volume approach, can be accomplished by excluding earnings announcements alone. The methodological disadvantage of the high-volume-exclusion approach is that it is affected not only by firm-specific announcements but also by other factors, such as the heterogeneity of investor expectations. These factors may influence the choice of firm-specific announcements rather than the high-volume approach despite the lower increment in R-squares.
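The announcement-exclusion effect on market-model fit can be sketched on simulated data. The event dates, betas, and shock size below are all hypothetical, chosen only to show that dropping firm-specific announcement days raises the R-square:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 250
market = 0.01 * rng.standard_normal(n)
firm = 0.8 * market + 0.01 * rng.standard_normal(n)
event_days = np.array([30, 100, 180])      # hypothetical announcement dates
firm[event_days] += 0.08                   # large firm-specific news moves

def market_model_r2(y, x):
    """R-square of the market model y = a + b * x + e."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

keep = np.ones(n, dtype=bool)
keep[event_days] = False                   # drop announcement-day returns

r2_full = market_model_r2(firm, market)
r2_excl = market_model_r2(firm[keep], market[keep])
print(r2_excl > r2_full)
```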

16.
Numerical approximations are presented for the expected utility of wealth over a single time period for a small investor who apportions her or his available capital between a risk-free asset and a risky stock. The stock price is assumed to be a log-stable random variable. The utility function is logarithmic or isoelastic (y^q/q, q < 0). Analytic results are presented for special choices of model parameters, and for large and small time periods.

17.
18.
A number of applications presume that asset returns are normally distributed, even though they are widely known to be skewed, leptokurtic, and fat-tailed. This leads to the underestimation or overestimation of the true value-at-risk (VaR). This study utilizes a composite trapezoid rule, a numerical integration method, to estimate quantiles of the skewed generalized t distribution (SGT), which permits the return innovations to flexibly exhibit skewness, leptokurtosis and fat tails. Daily spot prices of thirteen stock indices in North America, Europe and Asia provide data for examining the one-day-ahead VaR forecasting performance of the GARCH model with normal, Student's t and SGT distributions. Empirical results indicate that the SGT provides a good fit to the empirical distribution of the log-returns, followed by the Student's t and normal distributions. Moreover, for all confidence levels, all models tend to underestimate real market risk. Furthermore, the GARCH-based model with the SGT distributional setting generates the most conservative VaR forecasts, followed by the Student's t and normal distributions, for a long position. Consequently, it appears reasonable to conclude that, from the viewpoint of accuracy, the influence of both skewness and fat-tail effects (SGT) is more important than the effect of fat tails alone (Student's t) on VaR estimates in stock markets for a long position.
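The composite-trapezoid quantile idea can be sketched as follows: integrate the density on a grid to build the CDF, then invert it by interpolation. Since the SGT has no SciPy implementation, a Student-t density stands in for it here; the grid and target probability are illustrative:

```python
import numpy as np
from scipy import stats

# Composite trapezoid rule: accumulate the pdf on a grid to get the CDF.
grid = np.linspace(-15, 15, 20001)
pdf = stats.t.pdf(grid, df=5)          # stand-in for the SGT density
dx = grid[1] - grid[0]
cdf = np.concatenate(([0.0], np.cumsum((pdf[1:] + pdf[:-1]) * dx / 2)))
cdf /= cdf[-1]                         # normalize for the truncated support

# Invert the CDF by interpolation to get the 1% quantile (a VaR point).
q01 = np.interp(0.01, cdf, grid)
print(round(q01, 3))
```

For densities with a closed-form quantile function this is redundant, but for the SGT the numerical inversion is what makes the VaR computation feasible.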

19.
In this paper, we propose a framework for the analysis of risk communication and an index to measure the quality of risk disclosure. Mainstream literature on voluntary disclosure has emphasized that quantity can be used as a sound proxy for quality. We contend that, in the analysis of the disclosure of risks made by public companies, attention has to be paid not only to how much is disclosed but also to what is disclosed and how.

We apply the framework to a sample of nonfinancial companies listed in the ordinary market on the Italian Stock Exchange. To verify that the framework and synthetic index are not influenced by the two factors recognized in the literature as the most powerful drivers of disclosure behavior for listed companies, we use an OLS model. The regression shows that the disclosure index is influenced by neither size nor industry. Thus, the synthetic measure can be used to rank the quality of the disclosure of risks.

20.
Efficient tests of stock return predictability
Conventional tests of the predictability of stock returns can be invalid (that is, reject the null too frequently) when the predictor variable is persistent and its innovations are highly correlated with returns. We develop a pretest to determine whether the conventional t-test leads to invalid inference, and an efficient test of predictability that corrects this problem. Although the conventional t-test is invalid for the dividend–price and smoothed earnings–price ratios, our test finds evidence for predictability. We also find evidence for predictability with the short rate and the long–short yield spread, for which the conventional t-test leads to valid inference.
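The problematic setting (a persistent predictor whose innovations are correlated with returns) and the conventional t-statistic can be sketched on simulated data. All parameters are illustrative, and the paper's pretest and efficient test are not implemented here:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 1000
u = rng.standard_normal(T)

# Persistent predictor (e.g. a dividend-price ratio), rho = 0.98.
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.98 * x[t - 1] + u[t]

# Returns with no true predictability, but innovations correlated with u:
# exactly the configuration under which the conventional test can over-reject.
r = -0.8 * u[1:] + 0.6 * rng.standard_normal(T - 1)

# Conventional predictive regression r_{t+1} = a + b * x_t + e_{t+1}.
X = np.column_stack([np.ones(T - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, r, rcond=None)
resid = r - X @ beta
se = np.sqrt(resid.var(ddof=2) * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se
print(round(t_stat, 2))
```

Repeating this simulation many times and counting how often |t| exceeds 1.96 would exhibit the over-rejection that motivates the paper's corrected test.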


Copyright©北京勤云科技发展有限公司  京ICP备09084417号