Similar Documents
 20 similar documents found.
1.
This paper develops a testing framework for comparing the predictive accuracy of competing multivariate density forecasts with different predictive copulas, focusing on specific parts of the copula support. The tests are framed in the context of the Kullback–Leibler Information Criterion, using (out-of-sample) conditional likelihood and censored likelihood in order to focus the evaluation on the region of interest. Monte Carlo simulations document that the resulting test statistics have satisfactory size and power properties for realistic sample sizes. In an empirical application to daily changes in yields on government bonds of the G7 countries, we obtain insights into why the Student-t and Clayton mixture copula outperforms the other copulas considered; mixing the Clayton copula with the t-copula is of particular importance for obtaining high forecast accuracy in periods of jointly falling yields.
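The censored-likelihood idea above can be illustrated in a minimal univariate sketch (the paper's tests are multivariate and copula-based; the threshold, forecast distributions, and sample below are purely illustrative): inside the region of interest the full log density is scored, while outside it only the forecast probability of the complement matters.

```python
import numpy as np
from scipy.stats import norm

def censored_ll_score(y, dist, threshold):
    """Censored likelihood score focused on the left-tail region A = (-inf, threshold]:
    full log density inside A, log forecast probability of A's complement outside."""
    in_region = y <= threshold
    return np.where(in_region,
                    dist.logpdf(y),
                    np.log(dist.sf(threshold)))

rng = np.random.default_rng(0)
y = rng.standard_normal(20000)      # realizations drawn from N(0, 1)

f_true = norm(0.0, 1.0)             # correctly specified density forecast
f_wrong = norm(0.0, 2.0)            # over-dispersed competitor

# Average score difference on the left tail; positive values favor the first forecast.
d = censored_ll_score(y, f_true, -1.0) - censored_ll_score(y, f_wrong, -1.0)
print(d.mean())
```

Because the censored likelihood score is a proper scoring rule on the region of interest, the correctly specified forecast attains the higher average score in large samples.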

2.
We extend the recently introduced latent threshold dynamic models to include dependencies among the dynamic latent factors which underlie multivariate volatility. With an ability to induce time-varying sparsity in factor loadings, these models now also allow time-varying correlations among factors, which may be exploited in order to improve volatility forecasts. We couple multi-period, out-of-sample forecasting with portfolio analysis using standard and novel benchmark neutral portfolios. Detailed studies of stock index and FX time series include: multi-period, out-of-sample forecasting, statistical model comparisons, and portfolio performance testing using raw returns, risk-adjusted returns and portfolio volatility. We find uniform improvements on all measures relative to standard dynamic factor models. This is due to the parsimony of latent threshold models and their ability to exploit between-factor correlations so as to improve the characterization and prediction of volatility. These advances will be of interest to financial analysts, investors and practitioners, as well as to modeling researchers.
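The latent thresholding mechanism behind the time-varying sparsity can be sketched in a few lines (parameters and the AR(1) loading process below are illustrative, not the paper's full model): the effective factor loading is shut off to exactly zero whenever the latent process falls below a threshold in absolute value.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 1000, 0.25                 # series length and threshold (illustrative)

# Latent AR(1) loading process b_t
b = np.empty(T)
b[0] = 0.0
for t in range(1, T):
    b[t] = 0.9 * b[t - 1] + 0.1 * rng.standard_normal()

# Latent thresholding: the effective loading is zeroed whenever |b_t| < d,
# inducing time-varying sparsity in the factor loading.
s = np.where(np.abs(b) >= d, b, 0.0)

sparsity = np.mean(s == 0.0)      # fraction of periods with the loading shut off
print(sparsity)
```

The loading thus moves in and out of the model over time, which is what lets these models stay parsimonious while still tracking between-factor correlations.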

3.
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X_t = γZ_t + Y_t, where Z_t belongs to a large class of deterministic regressors and Y_t is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression, and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that the associated tests are χ²-distributed.

4.
We address the problem of estimating risk-minimizing portfolios from a sample of historical returns, when the underlying distribution that generates returns exhibits departures from the standard Gaussian assumption. Specifically, we examine how the underlying estimation problem is influenced by marginal heavy tails, as modeled by the univariate Student-t distribution, and multivariate tail-dependence, as modeled by the copula of a multivariate Student-t distribution. We show that when such departures from normality are present, robust alternatives to the classical variance portfolio estimator have lower risk.

5.
This paper proposes a first-order zero-drift GARCH (ZD-GARCH(1, 1)) model to study conditional heteroscedasticity and heteroscedasticity together. Unlike the classical GARCH model, the ZD-GARCH(1, 1) model is always non-stationary regardless of the sign of the Lyapunov exponent γ0, but interestingly it is stable with its sample path oscillating randomly between zero and infinity over time when γ0 = 0. Furthermore, this paper studies the generalized quasi-maximum likelihood estimator (GQMLE) of the ZD-GARCH(1, 1) model, and establishes its strong consistency and asymptotic normality. Based on the GQMLE, an estimator for γ0, a t-test for stability, a unit root test for the absence of the drift term, and a portmanteau test for model checking are all constructed. Simulation studies are carried out to assess the finite sample performance of the proposed estimators and tests. Applications demonstrate that a stable ZD-GARCH(1, 1) model is more appropriate than a non-stationary GARCH(1, 1) model in fitting the KV-A stock returns in Francq and Zakoïan (2012).
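The role of the Lyapunov exponent can be sketched numerically. In ZD-GARCH(1,1), σ²_t = α y²_{t-1} + β σ²_{t-1} with y_t = σ_t ε_t, so log σ²_t is a random walk with drift γ0 = E log(α ε² + β); the sign of γ0 separates the explosive and collapsing regimes, with stability at γ0 = 0. The parameter values and Gaussian innovations below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
eps = rng.standard_normal(200000)    # innovation draws (Gaussian assumed here)

# sigma2_t = alpha * y_{t-1}^2 + beta * sigma2_{t-1}, y_t = sigma_t * eps_t,
# hence log sigma2_t drifts at gamma0 = E log(alpha * eps^2 + beta).
alpha, beta = 0.2, 1.1               # illustrative: gamma0 > 0, |y_t| diverges
gamma0_hat = np.mean(np.log(alpha * eps**2 + beta))
print(gamma0_hat)

alpha2, beta2 = 0.1, 0.85            # illustrative: gamma0 < 0, |y_t| collapses
gamma0_hat2 = np.mean(np.log(alpha2 * eps**2 + beta2))
print(gamma0_hat2)
```

With β > 1 the drift is necessarily positive (log(αε² + β) ≥ log β > 0), while in the second parameterization Jensen's inequality bounds the drift below log E(αε² + β) = log 0.95 < 0.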

6.
We evaluate the performance of several volatility models in estimating one-day-ahead Value-at-Risk (VaR) of seven stock market indices using a number of distributional assumptions. Because all return series exhibit volatility clustering and long-range memory, we examine GARCH-type models, including fractionally integrated models, under normal, Student-t and skewed Student-t distributions. Consistent with the idea that the accuracy of VaR estimates is sensitive to the adequacy of the volatility model used, we find that the AR(1)-FIAPARCH(1,d,1) model under a skewed Student-t distribution outperforms all the models we have considered, including widely used ones such as GARCH(1,1) or HYGARCH(1,d,1). The superior performance of the skewed Student-t FIAPARCH model holds for all stock market indices, and for both long and short trading positions. Our findings can be explained by the fact that the skewed Student-t FIAPARCH model jointly accounts for the salient features of financial time series: fat tails, asymmetry, volatility clustering and long memory. In the same vein, because it fails to account for most of these stylized facts, the RiskMetrics model provides the least accurate VaR estimates. Our results corroborate the calls for the use of more realistic assumptions in financial modeling.
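The distributional ingredient of such VaR comparisons can be sketched in isolation (the FIAPARCH recursion and the skewness component are omitted; the conditional volatility figure is an assumed placeholder): a standardized Student-t quantile, rescaled to unit variance, gives a more extreme 1% VaR than the normal quantile at the same volatility.

```python
import numpy as np
from scipy.stats import norm, t

def var_student_t(sigma, alpha, nu):
    """One-day-ahead VaR for a zero-mean return with conditional s.d. sigma,
    using a standardized (unit-variance) Student-t with nu > 2 degrees of freedom."""
    return sigma * t.ppf(alpha, nu) * np.sqrt((nu - 2.0) / nu)

sigma = 0.012                       # illustrative conditional volatility (1.2% daily)
var_norm = sigma * norm.ppf(0.01)   # Gaussian 1% VaR
var_t5 = var_student_t(sigma, 0.01, 5.0)

print(var_norm, var_t5)             # the fat-tailed model gives the more extreme VaR
```

This is why, per the abstract, distributions that capture fat tails matter as much as the volatility recursion itself for tail-risk forecasting.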

7.
We introduce a variant of the Adaptive Beliefs System (ABS) of Brock and Hommes (1998) based on returns instead of prices. Agents form their demands according to the degree to which they are trend-following or contrarian. Empirically, the model requires that agents’ demands be coerced by leverage constraints. Using five samples of US stock returns, we show that the fit to realized returns is essentially driven by the total dispersion of the model’s returns. We also find that the latter are more realistic when forecasts are based on short-term estimates and when trend-followers and contrarians have the same ex-ante importance. We then provide evidence that the model is able to mimic most stylized facts observed on financial markets (tail decay, volatility clustering and autocorrelation patterns) quite closely. Finally, we find that portfolio policies designed according to the model’s predictions outperform the naive 1/N portfolio out-of-sample by 2% per annum.

8.
It has been documented that random walk outperforms most economic structural and time series models in out-of-sample forecasts of the conditional mean dynamics of exchange rates. In this paper, we study whether random walk has similar dominance in out-of-sample forecasts of the conditional probability density of exchange rates, given that probability density forecasts are often needed in many applications in economics and finance. We first develop a nonparametric portmanteau test for optimal density forecasts of univariate time series models in an out-of-sample setting and provide simulation evidence on its finite sample performance. Then we conduct a comprehensive empirical analysis on the out-of-sample performance of a wide variety of nonlinear time series models in forecasting the intraday probability densities of two major exchange rates: Euro/Dollar and Yen/Dollar. It is found that some sophisticated time series models that capture time-varying higher order conditional moments, such as Markov regime-switching models, have better density forecasts for exchange rates than random walk or modified random walk with GARCH and Student-t innovations. This finding dramatically differs from that on mean forecasts and suggests that sophisticated time series models could be useful in out-of-sample applications involving the probability density.
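A basic building block of density forecast evaluation is the probability integral transform (PIT): if the forecast density is correct, z_t = F(y_t) is i.i.d. uniform. The paper's portmanteau test is nonparametric and more elaborate; the sketch below shows only the PIT diagnostic on simulated data, with both the data generator and the misspecified competitor assumed for illustration.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(3)
y = rng.standard_normal(5000)            # stand-in for realized returns

# Probability integral transforms under two candidate density forecasts.
z_correct = norm(0.0, 1.0).cdf(y)        # correct forecast -> z approximately U(0, 1)
z_wrong = norm(0.0, 2.0).cdf(y)          # over-dispersed forecast -> z piles up near 0.5

# Kolmogorov-Smirnov distance from uniformity for each PIT series.
ks_correct = kstest(z_correct, 'uniform').statistic
ks_wrong = kstest(z_wrong, 'uniform').statistic
print(ks_correct, ks_wrong)
```

Departures of the PIT series from uniformity (and from independence, not checked here) signal a misspecified density forecast.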

9.
10.
This paper addresses the question of whether dual long memory (LM), asymmetry and structural breaks in stock market returns matter when forecasting the value at risk (VaR) and expected shortfall (ES) for short and long trading positions. We answer this question for the Gulf Cooperation Council (GCC) stock markets. Empirically, we test for the occurrence of structural breaks in the GCC return data using Inclan and Tiao's (1994) algorithm, and we check the relevance of LM using Shimotsu's (2006) procedure, before estimating the ARFIMA-FIGARCH and ARFIMA-FIAPARCH models with different innovation distributions and computing the VaR and ES. Our results show that all the GCC markets' volatilities exhibit significant structural breaks, matching mainly the 2008–2009 global financial crisis and the Arab Spring. They are also governed by an LM process, either in the mean or in the conditional variance, which cannot be attributed to the occurrence of structural breaks. Furthermore, the forecasting-ability analysis shows that the FIAPARCH model under the skewed Student-t distribution turns out to improve substantially the VaR and ES forecasts.
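The Inclan–Tiao statistic referenced above is a centered cusum of squares: D_k = C_k/C_T − k/T with C_k the cumulative sum of squared returns, and sqrt(T/2) max|D_k| compared against an asymptotic critical value of about 1.358. A minimal sketch on simulated returns with a single variance break (break point and variance levels assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated returns with a variance break at t = 500 (s.d. jumps from 1 to 3).
y = np.concatenate([rng.standard_normal(500), 3.0 * rng.standard_normal(500)])

T = y.size
C = np.cumsum(y**2)                      # cumulative sum of squares C_k
k = np.arange(1, T + 1)
D = C / C[-1] - k / T                    # Inclan-Tiao centered cusum D_k
stat = np.sqrt(T / 2.0) * np.max(np.abs(D))
k_hat = int(np.argmax(np.abs(D))) + 1    # estimated break location

print(stat, k_hat)                       # stat far above the ~1.358 critical value
```

The argmax of |D_k| locates the break; the iterated (ICSS) version used in practice applies this statistic recursively to find multiple breaks.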

11.
In this paper, we suggest how to handle the issue of the heteroskedasticity of measurement errors when specifying dynamic models for the conditional expectation of realized variance. We show that either adding a GARCH correction within an asymmetric extension of the HAR class (AHAR-GARCH), or working within the class of asymmetric multiplicative error models (AMEM), greatly reduces the need for quarticity/quadratic terms to capture attenuation bias. This feature in AMEM can be strengthened by considering regime specific dynamics. Model Confidence Sets confirm this robustness both in- and out-of-sample for a panel of 28 big caps and the S&P500 index.
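The baseline HAR regression that the AHAR-GARCH extension builds on projects next-day realized variance on daily, weekly (5-day) and monthly (22-day) RV averages. The sketch below fits it by OLS on a simulated persistent RV series; the data generator is an assumption for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 2000
# Persistent log-realized-variance process (illustrative data generator).
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = 0.95 * x[t - 1] + 0.3 * rng.standard_normal()
rv = np.exp(x)

def lag_mean(v, h):
    """Rolling mean of v over h periods; entry j covers indices j..j+h-1."""
    return np.convolve(v, np.ones(h) / h, mode='valid')

# HAR design: daily, weekly (5-day) and monthly (22-day) averages through t-1,
# all aligned to predict rv[t] for t = 22, ..., T-1.
d_lag = rv[21:-1]
w_lag = lag_mean(rv, 5)[17:-1]
m_lag = lag_mean(rv, 22)[:-1]
target = rv[22:]

X = np.column_stack([np.ones_like(d_lag), d_lag, w_lag, m_lag])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
resid = target - X @ coef
r2 = 1.0 - resid.var() / target.var()
print(coef, r2)
```

Because realized variance is itself a noisy, heteroskedastic measurement of latent variance, the OLS coefficients here suffer exactly the attenuation bias that the paper's GARCH correction and AMEM specification are designed to absorb.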

12.
13.
We show that statistical inference on the risk premia in linear factor models that is based on the Fama–MacBeth (FM) and generalized least squares (GLS) two-pass risk premia estimators is misleading when the β’s are small and/or the number of assets is large. We propose novel statistics, based on the maximum likelihood estimator of Gibbons [Gibbons, M., 1982. Multivariate tests of financial models: A new approach. Journal of Financial Economics 10, 3–27], which remain trustworthy in these cases. The inadequacy of the FM and GLS two-pass t/Wald statistics is highlighted in a power and size comparison using quarterly portfolio returns from Lettau and Ludvigson [Lettau, M., Ludvigson, S., 2001. Resurrecting the (C)CAPM: A cross-sectional test when risk premia are time-varying. Journal of Political Economy 109, 1238–1287]. The power and size comparison shows that the FM and GLS two-pass t/Wald statistics can be severely size distorted. The 95% confidence sets for the risk premia in the above-cited work that result from the novel statistics differ substantially from those that result from the FM and GLS two-pass t-statistics. They show support for the human capital asset pricing model, although the 95% confidence set for the risk premium on labor income growth is unbounded. The 95% confidence sets show no support for the (scaled) consumption asset pricing model, since the 95% confidence set for the risk premium on scaled consumption growth consists of the whole real line, but they do not reject it either.
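The two-pass procedure under scrutiny can be sketched in its favorable case (betas well spread, factor strong); all simulation settings below are assumptions for illustration. The first pass estimates betas by time-series regression; the second pass runs a cross-sectional regression each period, and the FM t-statistic uses the time-series variability of the period-by-period premium estimates. The paper's point is that this t-statistic breaks down when the betas are small, which this sketch deliberately avoids.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 500, 25
lam = 0.5                               # true factor risk premium
beta = rng.uniform(0.5, 1.5, N)         # well-spread betas (the favorable case)
f = rng.standard_normal(T)              # factor realizations
e = rng.standard_normal((T, N))
r = (lam + f)[:, None] * beta[None, :] + e   # excess returns

# First pass: time-series regression of each asset's returns on the factor.
X1 = np.column_stack([np.ones(T), f])
beta_hat = np.linalg.lstsq(X1, r, rcond=None)[0][1]

# Second pass: cross-sectional regression each period; lambda_hat is the time
# average of the period estimates, with the Fama-MacBeth standard error.
X2 = np.column_stack([np.ones(N), beta_hat])
lam_t = np.array([np.linalg.lstsq(X2, r[t], rcond=None)[0][1] for t in range(T)])
lam_hat = lam_t.mean()
t_stat = lam_hat / (lam_t.std(ddof=1) / np.sqrt(T))
print(lam_hat, t_stat)
```

Shrinking the spread of `beta` toward zero in this sketch is exactly the regime in which the abstract reports that FM and GLS two-pass t/Wald statistics become severely size distorted.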

14.
15.
16.
We provide a new test for equality of two symmetric positive-definite matrices that leads to a convenient mechanism for testing specification using the information matrix equality or the sandwich asymptotic covariance matrix of the GMM estimator. The test relies on a new characterization of equality between two k-dimensional symmetric positive-definite matrices A and B: the traces of AB⁻¹ and BA⁻¹ are equal to k if and only if A = B. Using this simple criterion, we introduce a class of omnibus test statistics for equality and examine their null and local alternative approximations under some mild regularity conditions. A preferred test in the class with good omni-directional power is recommended for practical work. Monte Carlo experiments are conducted to explore performance characteristics under the null and local as well as fixed alternatives. The test is applicable in many settings, including GMM estimation, SVAR models and high dimensional variance matrix settings.
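The trace characterization is easy to verify numerically. Since the eigenvalues l_i of AB⁻¹ are positive, tr(AB⁻¹) + tr(BA⁻¹) = Σ(l_i + 1/l_i) ≥ 2k by AM–GM, with equality exactly when every l_i = 1, i.e. A = B. A small numpy check (random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
k = 4

def spd(rng, k):
    """Random symmetric positive-definite k x k matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

A = spd(rng, k)
B = spd(rng, k)

# Excess of tr(AB^-1) + tr(BA^-1) over 2k: strictly positive when A != B,
# and exactly zero (up to floating point) when the matrices coincide.
d_neq = np.trace(A @ np.linalg.inv(B)) + np.trace(B @ np.linalg.inv(A)) - 2 * k
d_eq = 2 * np.trace(A @ np.linalg.inv(A)) - 2 * k
print(d_neq, d_eq)
```

This nonnegative excess is what makes the criterion a natural basis for omnibus test statistics: any departure from equality pushes it above zero in the same direction.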

17.
Copulas provide an attractive approach to the construction of multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of forecasting the tail-dependences explicitly. Most of the available approaches are only able to estimate tail-dependences and correlations via nuisance parameters, and cannot be used for either interpretation or forecasting. We propose a general Bayesian approach for modeling and forecasting tail-dependences and correlations as explicit functions of covariates, with the aim of improving the copula forecasting performance. The proposed covariate-dependent copula model also allows for Bayesian variable selection from among the covariates of the marginal models, as well as the copula density. The copulas that we study include the Joe-Clayton copula, the Clayton copula, the Gumbel copula and the Student’s t-copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the S&P 100 and S&P 600 stock indices. The forecasting performance of the proposed approach is compared with those of other modeling strategies based on log predictive scores. A value-at-risk evaluation is also performed for the model comparisons.
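For the Clayton copula mentioned above, the lower tail dependence has the closed form λ_L = 2^(−1/θ), which can be checked by simulation via the standard Marshall–Olkin (gamma frailty) sampler; the θ value and quantile level below are illustrative, and none of this reproduces the paper's Bayesian covariate-dependent machinery.

```python
import numpy as np

rng = np.random.default_rng(8)
theta = 2.0
n = 500000

# Marshall-Olkin sampling of the Clayton copula: with V ~ Gamma(1/theta) and
# independent unit exponentials E_j, U_j = (1 + E_j/V)^(-1/theta) has Clayton dependence.
V = rng.gamma(1.0 / theta, 1.0, n)
E = rng.exponential(1.0, (n, 2))
U = (1.0 + E / V[:, None]) ** (-1.0 / theta)

# Lower tail dependence lambda_L = 2^(-1/theta), estimated empirically as
# P(U2 <= q | U1 <= q) for a small quantile q.
q = 0.01
lam_true = 2.0 ** (-1.0 / theta)
lam_emp = np.mean((U[:, 0] <= q) & (U[:, 1] <= q)) / np.mean(U[:, 0] <= q)
print(lam_true, lam_emp)
```

Having such explicit formulas is what makes it possible, as the abstract argues, to model and forecast tail dependence directly as a function of covariates rather than treating it as a nuisance parameter.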

18.
We consider a first-order autoregressive model with conditionally heteroskedastic innovations. The asymptotic distributions of least squares (LS), infeasible generalized least squares (GLS), and feasible GLS estimators and t statistics are determined. The GLS procedures allow for misspecification of the form of the conditional heteroskedasticity and, hence, are referred to as quasi-GLS procedures. The asymptotic results are established for drifting sequences of the autoregressive parameter ρ_n and the distribution of the time series of innovations. In particular, we consider the full range of cases in which ρ_n satisfies n(1 − ρ_n) → ∞ and n(1 − ρ_n) → h1 ∈ [0, ∞) as n → ∞, where n is the sample size. Results of this type are needed to establish the uniform asymptotic properties of the LS and quasi-GLS statistics.
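The efficiency gain from (infeasible) GLS over LS in this setting can be illustrated by simulation: with ARCH innovations and the true conditional variances used as weights, the weighted estimator of the autoregressive coefficient has smaller mean squared error. All parameter values below are illustrative assumptions, and only the stationary case ρ = 0.5 is shown.

```python
import numpy as np

rng = np.random.default_rng(9)

def one_draw(T=200, rho=0.5):
    """AR(1) with ARCH(1) innovations; returns (LS, infeasible-GLS) estimates of rho."""
    y = np.zeros(T + 1)
    u_prev = 0.0
    sig2s = np.empty(T)
    for t in range(1, T + 1):
        sig2 = 0.2 + 0.8 * u_prev**2          # true conditional variance (infeasible info)
        u_prev = np.sqrt(sig2) * rng.standard_normal()
        y[t] = rho * y[t - 1] + u_prev
        sig2s[t - 1] = sig2
    x, z = y[:-1], y[1:]
    ls = (x @ z) / (x @ x)
    w = 1.0 / sig2s                            # GLS weights from the true variances
    gls = ((w * x) @ z) / ((w * x) @ x)
    return ls, gls

est = np.array([one_draw() for _ in range(500)])
mse = ((est - 0.5) ** 2).mean(axis=0)
print(mse)   # infeasible GLS should show the smaller mean squared error
```

The paper's contribution concerns the harder questions this sketch ignores: feasible weights, misspecified variance models, and uniformity over drifting ρ_n up to the unit root.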

19.
We explore the validity of the 2-stage least squares estimator with l1-regularization in both stages, for linear triangular models where the numbers of endogenous regressors in the main equation and instruments in the first-stage equations can exceed the sample size, and the regression coefficients are sufficiently sparse. For this l1-regularized 2-stage least squares estimator, we first establish finite-sample performance bounds and then provide a simple practical method (with asymptotic guarantees) for choosing the regularization parameter. We also sketch an inference strategy built upon this practical method.
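A stripped-down version of the idea can be sketched with a single endogenous regressor, so that only the first stage needs the l1 penalty (the paper penalizes both stages and allows dimensions exceeding the sample size; the lasso solver here is a plain ISTA loop, and all design choices, including the penalty level, are illustrative assumptions rather than the paper's data-driven rule).

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 400, 50

# Sparse first stage: x is endogenous, driven by 3 of the 50 instruments.
Z = rng.standard_normal((n, p))
pi = np.zeros(p)
pi[:3] = 1.0
uv = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], n)
u, v = uv[:, 0], uv[:, 1]
x = Z @ pi + v
beta_true = 1.0
y = beta_true * x + u                     # u correlated with x: endogeneity

def lasso_ista(X, y, lam, iters=2000):
    """Plain ISTA for the lasso objective (1/(2n))||y - Xb||^2 + lam*||b||_1."""
    nobs = X.shape[0]
    L = np.linalg.eigvalsh(X.T @ X / nobs).max()   # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ b - y) / nobs               # smooth-part gradient
        b = b - g / L
        b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft threshold
    return b

# Stage 1: lasso of x on the many instruments. Stage 2: with one endogenous
# regressor the second stage reduces to OLS of y on the fitted value.
pi_hat = lasso_ista(Z, x, lam=0.1)
x_hat = Z @ pi_hat
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)
beta_ols = (x @ y) / (x @ x)
print(beta_ols, beta_2sls)   # naive OLS is biased by endogeneity; 2SLS is much closer to 1
```

The first-stage lasso is what lets the instrument set be high-dimensional: only the few instruments with nonzero estimated coefficients survive into the fitted value used in the second stage.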

20.