Similar Articles
Found 20 similar articles; search time: 62 ms
1.
Building on realized variance and bipower variation measures constructed from high-frequency financial prices, we propose a simple reduced form framework for effectively incorporating intraday data into the modeling of daily return volatility. We decompose the total daily return variability into the continuous sample path variance, the variation arising from discontinuous jumps that occur during the trading day, as well as the overnight return variance. Our empirical results, based on long samples of high-frequency equity and bond futures returns, suggest that the dynamic dependencies in the daily continuous sample path variability are well described by an approximate long-memory HAR–GARCH model, while the overnight returns may be modeled by an augmented GARCH type structure. The dynamic dependencies in the non-parametrically identified significant jumps appear to be well described by the combination of an ACH model for the time-varying jump intensities coupled with a relatively simple log-linear structure for the jump sizes. Finally, we discuss how the resulting reduced form model structure for each of the three components may be used in the construction of out-of-sample forecasts for the total return volatility.
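The two building blocks of this decomposition, realized variance and bipower variation, can be sketched as follows (a minimal illustration on simulated 5-minute returns; the 78-observation day, jump size and seed are arbitrary assumptions, not the paper's data):

```python
import numpy as np

def realized_variance(r):
    """Sum of squared intraday returns: a consistent estimator of total
    quadratic variation (continuous variance plus jump variance)."""
    return np.sum(r ** 2)

def bipower_variation(r):
    """Scaled sum of products of adjacent absolute returns; robust to jumps,
    so it estimates only the continuous sample-path variance."""
    mu1 = np.sqrt(2.0 / np.pi)  # E|Z| for Z ~ N(0, 1)
    return np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1 ** 2

# Simulated 5-minute returns for one trading day, plus one injected jump.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.001, size=78)
r[40] += 0.02  # the jump

rv = realized_variance(r)
bv = bipower_variation(r)
jump_var = max(rv - bv, 0.0)  # RV - BV estimates the jump contribution
```

The difference RV minus BV, truncated at zero, is the standard non-parametric estimate of the jump part of daily variability that the abstract refers to.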

2.
We show that the distribution of any portfolio whose components jointly follow a location–scale mixture of normals can be characterised solely by its mean, variance and skewness. Under this distributional assumption, we derive the mean–variance–skewness frontier in closed form, and show that it can be spanned by three funds. For practical purposes, we derive a standardised distribution, provide analytical expressions for the log-likelihood score and explain how to evaluate the information matrix. Finally, we present an empirical application in which we obtain the mean–variance–skewness frontier generated by the ten Datastream US sectoral indices, and conduct spanning tests.

3.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small- and medium-sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite-sample distribution of the long-run variance estimator used in the KPSS test, while the second source of the size distortions is the serial correlation not captured by the long-run variance estimator because of a too narrow choice of truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite-sample distribution of the KPSS test statistic, conditional on the time-series dimension and the truncation lag parameter, is used. Hence, finite-sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is a lower raw power against a non-stationary alternative hypothesis.
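The KPSS statistic and the role of the truncation lag discussed here can be sketched as follows (a generic level-stationarity implementation with a Bartlett-kernel long-run variance; the simulated series, lag choice and seed are illustrative assumptions):

```python
import numpy as np

def kpss_stat(y, lags):
    """KPSS level-stationarity statistic with a Bartlett-kernel (Newey-West)
    long-run variance estimator truncated at `lags`."""
    T = len(y)
    e = y - np.mean(y)        # residuals from the level-only regression
    S = np.cumsum(e)          # partial sums of residuals
    s2 = np.sum(e ** 2) / T   # long-run variance estimate ...
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)                 # Bartlett weight
        s2 += 2.0 * w * np.sum(e[l:] * e[:-l]) / T  # ... plus autocovariances
    return np.sum(S ** 2) / (T ** 2 * s2)

rng = np.random.default_rng(1)
stationary = rng.normal(size=500)
random_walk = np.cumsum(rng.normal(size=500))

stat_s = kpss_stat(stationary, lags=4)
stat_rw = kpss_stat(random_walk, lags=4)
# The 5% asymptotic critical value for the level case is 0.463; the paper's
# point is that finite-sample critical values can differ materially from it.
```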

4.
This paper considers a spatial panel data regression model with serial correlation on each spatial unit over time as well as spatial dependence between the spatial units at each point in time. In addition, the model allows for heterogeneity across the spatial units using random effects. The paper then derives several Lagrange multiplier tests for this panel data regression model including a joint test for serial correlation, spatial autocorrelation and random effects. These tests draw upon two strands of earlier work. The first is the LM tests for the spatial error correlation model discussed in Anselin and Bera [1998. Spatial dependence in linear regression models with an introduction to spatial econometrics. In: Ullah, A., Giles, D.E.A. (Eds.), Handbook of Applied Economic Statistics. Marcel Dekker, New York] and in the panel data context by Baltagi et al. [2003. Testing panel data regression models with spatial error correlation. Journal of Econometrics 117, 123–150]. The second is the LM tests for the error component panel data model with serial correlation derived by Baltagi and Li [1995. Testing AR(1) against MA(1) disturbances in an error component model. Journal of Econometrics 68, 133–151]. Hence, the joint LM test derived in this paper encompasses those derived in both strands of earlier works. In fact, in the context of our general model, the earlier LM tests become marginal LM tests that ignore either serial correlation over time or spatial error correlation. The paper then derives conditional LM and LR tests that do not ignore these correlations and contrast them with their marginal LM and LR counterparts. The small sample performance of these tests is investigated using Monte Carlo experiments. As expected, ignoring any correlation when it is significant can lead to misleading inference.

5.
This paper proposes a new test for jumps in asset prices that is motivated by the literature on variance swaps. Formally, the test follows from a direct application of Itô’s lemma to the semi-martingale process of asset prices and derives its power from the impact of jumps on the third and higher order return moments. Intuitively, the test statistic reflects the cumulative gain of a variance swap replication strategy, which is known to be minimal in the absence of jumps but substantial in their presence. Simulations show that the jump test has good finite-sample properties and is generally more powerful than the widely used bi-power variation test. An important feature of our test is that it can be applied, in analytically modified form, to noisy high-frequency data and still retain power. As a by-product of our analysis, we obtain novel analytical results regarding the impact of noise on bi-power variation. An empirical illustration using IBM trade data is also included.
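One common formalization of the variance-swap intuition is the gap between the swap variance, 2·Σ(R − r) with R the simple and r the log return, and realized variance; a minimal sketch on simulated returns (the jump size, series length and seed are illustrative assumptions, and this is only the raw statistic, not the full studentized test):

```python
import numpy as np

def swap_variance_gap(log_ret):
    """Swap variance minus realized variance. Since per observation
    2*(e^r - 1 - r) - r^2 = r^3/3 + O(r^4), the gap loads on third and
    higher moments: near zero without jumps, sizeable with them."""
    simple = np.expm1(log_ret)              # simple returns R = e^r - 1
    swv = 2.0 * np.sum(simple - log_ret)    # cumulative replication gain
    rv = np.sum(log_ret ** 2)               # realized variance
    return swv - rv

rng = np.random.default_rng(5)
smooth = rng.normal(0.0, 0.001, size=1000)  # diffusive path, no jumps
jumpy = smooth.copy()
jumpy[500] += 0.05                          # one positive jump

gap_smooth = swap_variance_gap(smooth)
gap_jumpy = swap_variance_gap(jumpy)
```

The single 5% jump moves the gap by roughly its cubed size over three, which dwarfs the diffusive contribution; that asymmetry is what gives the test its power.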

6.
Many interesting issues are posed by synchronization of cycles. In this paper, we define synchronization and show how the degree of synchronization can be measured. We propose heteroscedasticity and serial correlation robust tests of the hypotheses that cycles are either unsynchronized or perfectly synchronized.
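One common way to quantify the degree of synchronization of two cycles is the concordance index over binary phase indicators (a generic sketch; the indicator series below are made up for illustration):

```python
import numpy as np

def concordance(s1, s2):
    """Fraction of periods in which two binary cycle-phase indicators
    (1 = expansion, 0 = contraction) are in the same phase: 1 means
    perfect synchronization, about 0.5 means none."""
    s1, s2 = np.asarray(s1), np.asarray(s2)
    return np.mean(s1 * s2 + (1 - s1) * (1 - s2))

a = np.array([1, 1, 1, 0, 0, 1, 1, 0])  # hypothetical phase indicator 1
b = np.array([1, 1, 0, 0, 0, 1, 1, 1])  # hypothetical phase indicator 2
c = concordance(a, b)                    # agree in 6 of 8 periods
```

The robust tests in the paper then ask whether such a statistic is consistent with no synchronization or with perfect synchronization, allowing for heteroscedasticity and serial correlation in the indicators.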

7.
This study reconsiders the role of jumps in volatility forecasting by showing that jumps have a positive and mostly significant impact on future volatility. This result becomes apparent once volatility is separated into its continuous and discontinuous components using estimators which are not only consistent but also largely free of small-sample bias. To this end, we introduce the concept of threshold bipower variation, which is based on the joint use of bipower variation and threshold estimation. We show that its generalization (threshold multipower variation) admits a feasible central limit theorem in the presence of jumps and provides less biased finite-sample estimates of the continuous quadratic variation than standard multipower variation. We further provide a new test for jump detection which has substantially more power than tests based on multipower variation. Empirical analysis (on the S&P500 index, individual stocks and US bond yields) shows that the proposed techniques significantly improve the accuracy of volatility forecasts, especially in periods following the occurrence of a jump.
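The core idea of threshold bipower variation can be sketched as follows. Note that the simple global threshold used here (a multiple of the sample standard deviation) is a placeholder assumption; the paper's construction uses a local variance-based threshold:

```python
import numpy as np

def threshold_bipower_variation(r, c=3.0):
    """Bipower variation in which a product term is kept only if both
    returns lie below a threshold, reducing the upward bias that jumps
    induce in standard bipower variation in finite samples."""
    mu1 = np.sqrt(2.0 / np.pi)
    theta = (c * np.std(r)) ** 2            # crude global threshold (assumption)
    prod = np.abs(r[1:]) * np.abs(r[:-1])
    keep = (r[1:] ** 2 <= theta) & (r[:-1] ** 2 <= theta)
    return np.sum(prod[keep]) / mu1 ** 2

rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.001, size=300)        # true integrated variance ~ 3e-4
r[150] += 0.03                              # one large jump

bv_plain = np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / (2.0 / np.pi)
tbv = threshold_bipower_variation(r)
```

Standard bipower variation still carries the jump through the two product terms involving the jump return; thresholding removes exactly those terms, which is the small-sample bias reduction the abstract describes.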

8.
This article studies inference in a multivariate trend model when the volatility process is nonstationary. Within a quite general framework, we analyze four classes of tests based on least squares estimation, one of which is robust to both weak serial correlation and nonstationary volatility. The existing multivariate trend tests, which either use non-robust standard errors or rely on non-standard distribution theory, are generally non-pivotal, involving the unknown time-varying volatility function in the limit. Two-step residual-based i.i.d. bootstrap and wild bootstrap procedures are proposed for the robust tests and are shown to be asymptotically valid. Simulations demonstrate the effects of nonstationary volatility on the trend tests and the good behavior of the robust tests in finite samples.

9.
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared with a simulation-based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal and stable error distributions. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over 5-year subperiods from 1926 to 1995.
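The standard multivariate kurtosis criterion referred to here is Mardia's; a generic sketch on simulated residuals (sample size, dimension and seed are illustrative assumptions, and this omits the paper's Monte Carlo test step):

```python
import numpy as np

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis b_{2,p}: the average fourth power of
    the Mahalanobis distance of each row from the sample mean. Under
    multivariate normality its expectation is approximately p*(p + 2)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                       # ML covariance estimate
    S_inv = np.linalg.inv(S)
    d2 = np.einsum('ij,jk,ik->i', Xc, S_inv, Xc)  # squared Mahalanobis distances
    return np.mean(d2 ** 2)

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 3))              # simulated Gaussian residuals
b2 = mardia_kurtosis(X)
# For p = 3 the Gaussian benchmark is p*(p + 2) = 15; large deviations
# are evidence against normal errors.
```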

10.
Multivariate continuous time models are now widely used in economics and finance. Empirical applications typically rely on some process of discretization so that the system may be estimated with discrete data. This paper introduces a framework for discretizing linear multivariate continuous time systems that includes the commonly used Euler and trapezoidal approximations as special cases and leads to a general class of estimators for the mean reversion matrix. Asymptotic distributions and bias formulae are obtained for estimates of the mean reversion parameter. Explicit expressions are given for the discretization bias and its relationship to estimation bias in both multivariate and univariate settings. In the univariate context, we compare the performance of the two approximation methods relative to exact maximum likelihood (ML) in terms of bias and variance for the Vasicek process. The bias and the variance of the Euler method are found to be smaller than those of the trapezoidal method, which in turn are smaller than those of exact ML. Simulations suggest that when the mean reversion is slow, the approximation methods work better than ML, the bias formulae are accurate, and for scalar models the estimates obtained from the two approximate methods have smaller bias and variance than exact ML. For the square root process, the Euler method outperforms the Nowman method in terms of both bias and variance. Simulation evidence indicates that the Euler method has smaller bias and variance than exact ML, Nowman’s method and the Milstein method.
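The Euler-based mean-reversion estimate for the univariate Vasicek case can be sketched as follows (parameter values, sample size and seed are arbitrary illustrations; the paper's framework is considerably more general):

```python
import numpy as np

def simulate_vasicek(kappa, mu, sigma, r0, dt, n, rng):
    """Exact simulation of dr = kappa*(mu - r) dt + sigma dW at step dt,
    using the known Gaussian transition density."""
    r = np.empty(n + 1)
    r[0] = r0
    a = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1.0 - a ** 2) / (2.0 * kappa))
    for t in range(n):
        r[t + 1] = mu + a * (r[t] - mu) + sd * rng.normal()
    return r

def euler_kappa(r, dt):
    """Euler-approximation estimator of the mean reversion parameter:
    OLS of (r_{t+1} - r_t)/dt on r_t (with intercept); kappa = -slope."""
    x = r[:-1]
    dy = np.diff(r) / dt
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    return -beta[1]

rng = np.random.default_rng(3)
path = simulate_vasicek(kappa=0.5, mu=0.05, sigma=0.02,
                        r0=0.05, dt=1 / 12, n=6000, rng=rng)
kappa_hat = euler_kappa(path, dt=1 / 12)
```

The Euler estimator recovers (1 − e^(−κ·dt))/dt rather than κ itself, which is exactly the discretization bias the paper characterizes; at monthly sampling with κ = 0.5 that gap is small.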

11.
In this paper, we propose two estimators, an integral estimator and a discretized estimator, for the wavelet coefficient of regression functions in nonparametric regression models with heteroscedastic variance. These estimators can be used to test the jumps of the regression function. The model allows for lagged-dependent variables and other mixing regressors. The asymptotic distributions of the statistics are established, and the asymptotic critical values are analytically obtained from the asymptotic distribution. We also use the test to determine consistent estimators for the locations of change points. The jump sizes and locations of change points can be consistently estimated using wavelet coefficients, and the convergence rates of these estimators are derived. We perform some Monte Carlo simulations to check the powers and sizes of the test statistics. Finally, we give practical examples in finance and economics to detect changes in stock returns and short-term interest rates using the empirical wavelet method.

12.
This research empirically evaluates the potential diversification benefits of gold during the COVID-19 pandemic period, when including it in equity-based asset allocation strategies. This study proposes minimum-VaR portfolios, with monthly rebalancing and different wavelet scales (short-run, mid-run and long-run), conducting both an in-sample and an out-of-sample analysis. We find much more unstable weights as the frequency of the decomposition becomes lower, and strong evidence that the mid-run decompositions outperform both the other active management strategies and a passive buy-and-hold of the individual equity indices. Thus, our results shed some light on the role of gold as a safe haven when aggregated data are properly filtered.

13.
We decompose the squared VIX index, derived from US S&P500 options prices, into the conditional variance of stock returns and the equity variance premium. We evaluate a plethora of state-of-the-art volatility forecasting models to produce an accurate measure of the conditional variance. We then examine the predictive power of the VIX and its two components for stock market returns, economic activity and financial instability. The variance premium predicts stock returns while the conditional stock market variance predicts economic activity and has a relatively higher predictive power for financial instability than does the variance premium.

14.
This paper analyzes the S&P 500 index return variance dynamics and the variance risk premium by combining information in variance swap rates constructed from options and quadratic variation estimators constructed from tick data on S&P 500 index futures. Estimation shows that the index return variance jumps. The jump arrival rate is not constant over time, but is proportional to the variance rate level. The variance jumps are not rare events but arrive frequently. Estimation also identifies a strongly negative variance risk premium, the absolute magnitude of which is proportional to the variance rate level.

15.
Sample autocorrelation coefficients are widely used to test the randomness of a time series. Despite its unsatisfactory performance, the asymptotic normal distribution is often used to approximate the distribution of the sample autocorrelation coefficients. This is mainly due to the lack of an efficient approach in obtaining the exact distribution of sample autocorrelation coefficients. In this paper, we provide an efficient algorithm for evaluating the exact distribution of the sample autocorrelation coefficients. Under the multivariate elliptical distribution assumption, the exact distribution as well as exact moments and joint moments of sample autocorrelation coefficients are presented. In addition, the exact mean and variance of various autocorrelation-based tests are provided. Actual size properties of the Box–Pierce and Ljung–Box tests are investigated, and they are shown to be poor when the number of lags is moderately large relative to the sample size. Using the exact mean and variance of the Box–Pierce test statistic, we propose an adjusted Box–Pierce test that has a far superior size property than the traditional Box–Pierce and Ljung–Box tests.
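The two classical statistics compared in this paper can be sketched as follows (a minimal implementation; the simulated white-noise and AR(1) series, lag count and seed are illustrative assumptions, and the paper's exact-moment adjustment is not reproduced here):

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelation coefficients r_1, ..., r_max_lag."""
    y = y - np.mean(y)
    denom = np.sum(y ** 2)
    return np.array([np.sum(y[k:] * y[:-k]) / denom
                     for k in range(1, max_lag + 1)])

def box_pierce(y, m):
    """Q_BP = n * sum_{k<=m} r_k^2; asymptotically chi-square(m) under randomness."""
    n = len(y)
    return n * np.sum(sample_acf(y, m) ** 2)

def ljung_box(y, m):
    """Q_LB = n*(n+2) * sum_{k<=m} r_k^2/(n-k): the usual finite-sample reweighting."""
    n = len(y)
    r = sample_acf(y, m)
    k = np.arange(1, m + 1)
    return n * (n + 2) * np.sum(r ** 2 / (n - k))

rng = np.random.default_rng(4)
white = rng.normal(size=200)
q_bp = box_pierce(white, m=10)
q_lb = ljung_box(white, m=10)

ar = np.empty(200)            # strongly autocorrelated comparison series
ar[0] = 0.0
for t in range(1, 200):
    ar[t] = 0.7 * ar[t - 1] + rng.normal()
q_ar = ljung_box(ar, m=10)
```

Since (n + 2)/(n − k) > 1 for every lag, the Ljung–Box statistic always exceeds the Box–Pierce statistic on the same data; the paper's point is that even this reweighting controls size poorly when m is large relative to n.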

16.
We propose several connectedness measures built from pieces of variance decompositions, and we argue that they provide natural and insightful measures of connectedness. We also show that variance decompositions define weighted, directed networks, so that our connectedness measures are intimately related to key measures of connectedness used in the network literature. Building on these insights, we track daily time-varying connectedness of major US financial institutions’ stock return volatilities in recent years, with emphasis on the financial crisis of 2007–2008.
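Given a forecast-error variance decomposition, the connectedness measures are simple row/column aggregates; a minimal sketch (the 3-variable decomposition matrix below is hypothetical, not from the paper):

```python
import numpy as np

def connectedness_table(D):
    """From a row-normalized variance decomposition matrix D, where
    D[i, j] is the share of variable i's H-step forecast-error variance
    due to shocks to variable j, compute the directional 'from others'
    and 'to others' measures and the total connectedness index."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    off = D - np.diag(np.diag(D))     # keep only cross-variable shares
    from_others = off.sum(axis=1)     # row sums: spillovers received
    to_others = off.sum(axis=0)       # column sums: spillovers transmitted
    total = off.sum() / n             # system-wide connectedness
    return from_others, to_others, total

# Hypothetical 3-variable decomposition (each row sums to one).
D = [[0.80, 0.15, 0.05],
     [0.20, 0.70, 0.10],
     [0.10, 0.25, 0.65]]
frm, to, total = connectedness_table(D)
```

Because every off-diagonal entry is counted once as "received" and once as "transmitted", the aggregate received and transmitted spillovers always coincide; the directional split is what defines the weighted, directed network the abstract describes.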

17.
We investigate the estimation and inference in difference-in-differences econometric models used in the analysis of treatment effects. When the innovations in such models display serial correlation, commonly used ordinary least squares (OLS) procedures are inefficient and may lead to tests with incorrect size. Implementation of feasible generalized least squares (FGLS) procedures is often hindered by too few observations in the cross-section to allow for unrestricted estimation of the weight matrix without leading to tests with size distortions similar to those of conventional OLS-based procedures. We analyze the small sample properties of FGLS-based tests with a formal higher order Edgeworth expansion that allows us to construct a size corrected version of the test. We also address the question of optimal temporal aggregation as a method to reduce the dimension of the weight matrix. We apply our procedure to data on regulation of mobile telephone service prices. We find that a size corrected FGLS-based test outperforms tests based on OLS.

18.
Factor modelling of a large time series panel has widely proven useful to reduce its cross-sectional dimensionality. This is done by explaining common co-movements in the panel through the existence of a small number of common components, up to some idiosyncratic behaviour of each individual series. To capture serial correlation in the common components, a dynamic structure is used as in traditional (uni- or multivariate) time series analysis of second order structure, i.e. allowing for infinite-length filtering of the factors via dynamic loadings. In this paper, motivated from economic data observed over long time periods which show smooth transitions over time in their covariance structure, we allow the dynamic structure of the factor model to be non-stationary over time by proposing a deterministic time variation of its loadings. In this respect we generalize the existing recent work on static factor models with time-varying loadings as well as the classical, i.e. stationary, dynamic approximate factor model. Motivated from the stationary case, we estimate the common components of our dynamic factor model by the eigenvectors of a consistent estimator of the now time-varying spectral density matrix of the underlying data-generating process. This can be seen as a time-varying principal components approach in the frequency domain. We derive consistency of this estimator in a “double-asymptotic” framework of both cross-section and time dimension tending to infinity. The performance of the estimators is illustrated by a simulation study and an application to a macroeconomic data set.

19.
Second-order properties of estimators and tests offer a way of choosing among asymptotically equivalent procedures. This paper studies the second-order terms of two estimators of serial correlation in the linear model. Using these second-order approximations, the maximum likelihood estimator is judged to be superior in terms of bias and variance. A small Monte Carlo experiment is done to assess the accuracy of the results.

20.
We propose new spanning tests that assess if the initial and additional assets share the economically meaningful cost and mean representing portfolios. We prove their asymptotic equivalence to existing tests under local alternatives. We also show that, unlike two-step or iterated procedures, single-step methods such as continuously updated GMM yield a numerically identical overidentifying restrictions test, so there is arguably a single spanning test. To prove these results, we extend optimal GMM inference to deal with singularities in the long run second moment matrix of the influence functions. Finally, we test for spanning using size and book-to-market sorted US stock portfolios.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号