Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
We analyze the predictive performance of various volatility models for stock returns. To compare their performance, we choose loss functions for which volatility estimation is of paramount importance. We deal with two economic loss functions (an option pricing function and a utility function) and two statistical loss functions (a goodness-of-fit measure for a value-at-risk (VaR) calculation and a predictive likelihood function). We implement the tests for superior predictive ability of White [Econometrica 68 (5) (2000) 1097] and Hansen [Hansen, P. R. (2001). An unbiased and powerful test for superior predictive ability. Brown University]. We find that, for option pricing, simple models like the RiskMetrics exponentially weighted moving average (EWMA) or a simple moving average, which do not require estimation, perform as well as other more sophisticated specifications. For a utility-based loss function, an asymmetric quadratic GARCH seems to dominate, and this result is robust to different degrees of risk aversion. For a VaR-based loss function, a stochastic volatility model is preferred. Interestingly, the RiskMetrics EWMA model, proposed to calculate VaR, seems to be the worst performer. For the predictive likelihood-based loss function, modeling the conditional standard deviation instead of the variance seems to be a dominant modeling strategy.
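A minimal sketch of the RiskMetrics-style EWMA recursion mentioned above (our illustration: the decay λ = 0.94 is the conventional daily choice, and initializing with the first squared return is an arbitrary convention, neither fixed by the abstract):

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """EWMA variance recursion: sigma2[t] = lam*sigma2[t-1] + (1-lam)*r[t-1]**2."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r[0] ** 2  # arbitrary initialization with the first squared return
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1.0 - lam) * r[t - 1] ** 2
    return sigma2

rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, size=500)
s2 = ewma_variance(rets)  # one-step-ahead conditional variance path
```

No parameters are estimated here, which is precisely the property the abstract highlights: the filter only requires a fixed decay constant.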

2.
The generalised method of moments estimator may be substantially biased in finite samples, especially so when there are large numbers of unconditional moment conditions. This paper develops a class of first-order equivalent semi-parametric efficient estimators and tests for conditional moment restriction models based on a local or kernel-weighted version of the Cressie–Read power divergence family of discrepancies. This approach is similar in spirit to the empirical likelihood methods of Kitamura et al. [2004. Empirical likelihood-based inference in conditional moment restriction models. Econometrica 72, 1667–1714] and Tripathi and Kitamura [2003. Testing conditional moment restrictions. Annals of Statistics 31, 2059–2095]. These efficient local methods avoid the necessity of explicit estimation of the conditional Jacobian and variance matrices of the conditional moment restrictions and provide empirical conditional probabilities for the observations.

3.
We adapt the Bierens (1990) test to the I-regular models of Park and Phillips (2001). Bierens (1990) defines the test hypothesis in terms of a conditional moment condition. Under the null hypothesis, the moment condition holds with probability one. The probability measure used is that induced by the variables in the model, which are assumed to be strictly stationary. Our framework is nonstationary, so this approach is not always applicable. We show that the Lebesgue measure can be used instead in a meaningful way. The resultant test is consistent against all I-regular alternatives.

4.
In this paper, we consider bootstrapping cointegrating regressions. It is shown that the bootstrap, if properly implemented, generally yields consistent estimators and test statistics for cointegrating regressions. For cointegrating regression models driven by general linear processes, we employ the sieve bootstrap based on approximated finite-order vector autoregressions for the regression errors and the first differences of the regressors. In particular, we establish the bootstrap consistency of the OLS method. The bootstrap method can thus be used to correct for the finite sample bias of the OLS estimator and to approximate the asymptotic critical values of the OLS-based test statistics in general cointegrating regressions. The bootstrap OLS procedure, however, is not efficient. For efficient estimation and hypothesis testing, we consider the procedure proposed by Saikkonen [1991. Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7, 1–21] and Stock and Watson [1993. A simple estimator of cointegrating vectors in higher order integrated systems. Econometrica 61, 783–820] relying on the regression augmented with leads and lags of the differenced regressors. The bootstrap versions of their procedures are shown to be consistent and can be used to conduct asymptotically valid inference. A Monte Carlo study is conducted to investigate the finite sample performance of the proposed bootstrap methods.
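The resampling step of a sieve bootstrap can be sketched as follows (our simplified illustration: fit a finite-order autoregression to the errors by OLS, resample the centred residuals with replacement, and rebuild the series from the fitted recursion; the order `p` and all names are ours, not the paper's):

```python
import numpy as np

def sieve_bootstrap_replicate(u, p=2, seed=None):
    """One sieve-bootstrap replicate: fit an AR(p) to u by OLS, resample
    the centred residuals with replacement, rebuild the series recursively."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u, dtype=float)
    n = len(u)
    y = u[p:]
    # column j holds the (j+1)-th lag of u, aligned with y
    X = np.column_stack([u[p - 1 - j : n - 1 - j] for j in range(p)])
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ phi
    eps = rng.choice(resid - resid.mean(), size=n, replace=True)
    u_star = np.empty(n)
    u_star[:p] = u[:p]  # start the recursion from the observed initial values
    for t in range(p, n):
        u_star[t] = u_star[t - p : t][::-1] @ phi + eps[t]
    return u_star

rng = np.random.default_rng(1)
e = rng.normal(size=400)
u = np.empty(400)
u[0] = e[0]
for t in range(1, 400):  # simulate AR(1) errors with coefficient 0.5
    u[t] = 0.5 * u[t - 1] + e[t]
u_star = sieve_bootstrap_replicate(u, p=2, seed=2)
```

In the paper's setting this step would be applied jointly to the regression errors and the differenced regressors; the scalar version above only shows the mechanics.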

5.
We compare the powers of five tests of the coefficient on a single endogenous regressor in instrumental variables regression. Following Moreira [2003, A conditional likelihood ratio test for structural models. Econometrica 71, 1027–1048], all tests are implemented using critical values that depend on a statistic which is sufficient under the null hypothesis for the (unknown) concentration parameter, so these conditional tests are asymptotically valid under weak instrument asymptotics. Four of the tests are based on k-class Wald statistics (two-stage least squares, LIML, Fuller's [1977. Some properties of a modification of the limited information estimator. Econometrica 45, 939–953], and bias-adjusted TSLS); the fifth is Moreira's (2003) conditional likelihood ratio (CLR) test. The heretofore unstudied conditional Wald (CW) tests are found to perform poorly compared to the CLR test: in many cases, the CW tests have almost no power against a wide range of alternatives. Our analysis is facilitated by a new algorithm, presented here, for the computation of the asymptotic conditional p-value of the CLR test.

6.
7.
Given the specification of the lag length and functional form of a (non)linear time series regression, we propose a test of the null hypothesis that the expectation of the error conditional on the exogenous variables, all lagged exogenous variables, and all lagged dependent variables equals zero with probability 1. If the data-generating process is strictly stationary, this test is consistent against the alternative hypothesis that the null is false. The test is also applicable to a particular class of non-stationary time series regressions, although in that case consistency against all possible alternatives is no longer guaranteed. The test involved is a generalization of a test proposed in Bierens (1982b). Moreover, we also present a similar but simpler test of the hypothesis that the errors are martingale differences.

8.
Journal of Econometrics (2005), 128(1), 137–164
In this paper, we construct a new class of estimators for conditional quantiles in possibly misspecified nonlinear models with time series data. The proposed estimators belong to the family of quasi-maximum likelihood estimators (QMLEs) and are based on a new family of densities which we call 'tick-exponential'. A well-known member of the tick-exponential family is the asymmetric Laplace density, and the corresponding QMLE reduces to Koenker and Bassett's (Econometrica 46 (1978) 33) nonlinear quantile regression estimator. We derive primitive conditions under which the tick-exponential QMLEs are consistent and asymptotically normally distributed with an asymptotic covariance matrix that accounts for possible conditional quantile model misspecification and which can be consistently estimated by using the tick-exponential scores and Hessian matrix. Despite its non-differentiability, the tick-exponential quasi-likelihood is easy to maximize by using a 'minimax' representation not seen in the earlier work on conditional quantile estimation.
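For the asymmetric-Laplace member of the tick-exponential family, maximizing the quasi-likelihood is equivalent to minimizing the Koenker–Bassett check ("tick") loss. A toy sketch for a scalar quantile (the grid search is our simplification for illustration, not the paper's minimax representation):

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check ('tick') loss: u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def quantile_by_loss(y, tau, grid):
    """Pick the grid value minimising the average check loss; for this
    scalar problem the minimiser recovers the tau-quantile of y."""
    losses = [np.mean(check_loss(y - q, tau)) for q in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(1)
y = rng.normal(size=2000)
q50 = quantile_by_loss(y, 0.5, np.linspace(-3.0, 3.0, 601))  # ~ sample median
```

Replacing the scalar `q` with a parametric function of covariates turns this into the nonlinear quantile regression problem the abstract studies.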

9.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
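Extracting a conditional from an estimated Gaussian mixture is analytic: reweight the components by their marginal density at the conditioning value and combine the per-component Gaussian conditional means. A sketch for the bivariate (y, x) case only (the mixture here is handed in directly rather than estimated by MCMC as in the paper; all names are ours):

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Univariate normal density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def fmmn_conditional_mean(x, weights, means, covs):
    """E[y | x] under a bivariate mixture of normals on (y, x): reweight each
    component by its marginal density at x, then average the per-component
    Gaussian conditional means mu_y + cov(y, x) / var(x) * (x - mu_x)."""
    w = np.asarray(weights, dtype=float)
    post = np.array([w[k] * norm_pdf(x, means[k][1], covs[k][1, 1])
                     for k in range(len(w))])
    post /= post.sum()
    cond = np.array([means[k][0] + covs[k][0, 1] / covs[k][1, 1] * (x - means[k][1])
                     for k in range(len(w))])
    return float(post @ cond)
```

The same reweighting gives the full conditional density, not just its mean, which is what makes the FMMN approach usable as a flexible regression model.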

10.
We propose two new semiparametric specification tests which test whether a vector of conditional moment conditions is satisfied for any vector of parameter values θ0. Unlike most existing tests, our tests are asymptotically valid under weak and/or partial identification and can accommodate discontinuities in the conditional moment functions. Our tests are moreover consistent provided that identification is not too weak. We do not require the availability of a consistent first step estimator. Like Robinson [Robinson, Peter M., 1987. Asymptotically efficient estimation in the presence of heteroskedasticity of unknown form. Econometrica 55, 875–891] and many others in similar problems subsequently, we use k-nearest neighbor (knn) weights instead of kernel weights. The advantage of using knn weights is that local power is invariant to transformations of the instruments and that under strong point identification computation of the test statistic yields an efficient estimator of θ0 as a byproduct.

11.
This paper develops a modified version of the Sargan [Sargan, J.D., 1958. The estimation of economic relationships using instrumental variables. Econometrica 26 (3), 393-415] test of overidentifying restrictions, and shows that it is numerically equivalent to the test statistic of Hahn and Hausman [Hahn, J., Hausman, J., 2002. A new specification test for the validity of instrumental variables. Econometrica 70 (1), 163-189] up to a sign. The modified Sargan test is constructed such that its asymptotic distribution under the null hypothesis of correct specification is standard normal when the number of instruments increases with the sample size. The equivalence result is useful in understanding what the Hahn-Hausman test detects and its power properties.

12.
We use high-frequency intra-day realized volatility data to evaluate the relative forecasting performances of various models that are used commonly for forecasting the volatility of crude oil daily spot returns at multiple horizons. These models include the RiskMetrics, GARCH, asymmetric GARCH, fractional integrated GARCH and Markov switching GARCH models. We begin by implementing Carrasco, Hu, and Ploberger’s (2014) test for regime switching in the mean and variance of the GARCH(1, 1), and find overwhelming support for regime switching. We then perform a comprehensive out-of-sample forecasting performance evaluation using a battery of tests. We find that, under the MSE and QLIKE loss functions: (i) models with a Student’s t innovation are favored over those with a normal innovation; (ii) RiskMetrics and GARCH(1, 1) have good predictive accuracies at short forecast horizons, whereas EGARCH(1, 1) yields the most accurate forecasts at medium horizons; and (iii) the Markov switching GARCH shows a superior predictive accuracy at long horizons. These results are established by computing the equal predictive ability test of Diebold and Mariano (1995) and West (1996) and the model confidence set of Hansen, Lunde, and Nason (2011) over the entire evaluation sample. In addition, a comparison of the MSPE ratios computed using a rolling window suggests that the Markov switching GARCH model is better at predicting the volatility during periods of turmoil.
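The equal-predictive-ability comparison can be sketched with a bare-bones Diebold–Mariano statistic under squared-error loss (a minimal sketch: truncating the long-run variance at h−1 autocovariances is the usual choice for h-step forecasts, and the simulated errors and names are ours):

```python
import numpy as np

def diebold_mariano(e1, e2, h=1):
    """DM statistic for equal predictive accuracy under squared-error loss.
    The long-run variance is truncated at h-1 autocovariances."""
    d = np.asarray(e1, dtype=float) ** 2 - np.asarray(e2, dtype=float) ** 2
    n = len(d)
    dc = d - d.mean()
    lrv = dc @ dc / n  # sample variance of the loss differential
    for j in range(1, h):
        lrv += 2.0 * (dc[j:] @ dc[:-j]) / n  # add autocovariance terms
    return d.mean() / np.sqrt(lrv / n)

rng = np.random.default_rng(3)
e_bad = rng.normal(0.0, 2.0, size=500)   # forecast errors of the worse model
e_good = rng.normal(0.0, 1.0, size=500)  # forecast errors of the better model
dm = diebold_mariano(e_bad, e_good)      # large positive: reject equal accuracy
```

Under the null of equal accuracy the statistic is compared with standard normal critical values; the MSPE-ratio and model-confidence-set comparisons in the abstract refine this basic idea.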

13.
We define a new procedure for consistent estimation of nonparametric simultaneous equations models under the conditional mean independence restriction of Newey et al. [1999. Nonparametric estimation of triangular simultaneous equation models. Econometrica 67, 565–603]. It is based upon local polynomial regression and marginal integration techniques. We establish the asymptotic distribution of our estimator under weak data dependence conditions. Simulation evidence suggests that our estimator may significantly outperform the estimators of Pinkse [2000. Nonparametric two-step regression estimation when regressors and errors are dependent. Canadian Journal of Statistics 28, 289–300] and Newey and Powell [2003. Instrumental variable estimation of nonparametric models. Econometrica 71, 1565–1578].

14.
In this paper we propose a downside risk measure, the expectile-based Value at Risk (EVaR), which is more sensitive to the magnitude of extreme losses than the conventional quantile-based VaR (QVaR). The index θ of an EVaR is the relative cost of the expected margin shortfall and hence reflects the level of prudentiality. It is also shown that a given expectile corresponds to the quantiles with distinct tail probabilities under different distributions. Thus, an EVaR may be interpreted as a flexible QVaR, in the sense that its tail probability is determined by the underlying distribution. We further consider conditional EVaR and propose various Conditional AutoRegressive Expectile models that can accommodate some stylized facts in financial time series. For model estimation, we employ the method of asymmetric least squares proposed by Newey and Powell [Newey, W.K., Powell, J.L., 1987. Asymmetric least squares estimation and testing. Econometrica 55, 819–847] and extend their asymptotic results to allow for stationary and weakly dependent data. We also derive an encompassing test for non-nested expectile models. As an illustration, we apply the proposed modeling approach to evaluate the EVaR of stock market indices.
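Asymmetric least squares has a simple fixed-point characterisation that makes the θ-expectile of a sample easy to compute. This scalar routine is our illustration only; the paper's Conditional AutoRegressive Expectile models make the expectile dynamic:

```python
import numpy as np

def expectile(y, theta, tol=1e-10, max_iter=500):
    """theta-expectile by iteratively reweighted least squares: the fixed
    point of m = sum(w*y)/sum(w), with weight theta above m and 1-theta below."""
    y = np.asarray(y, dtype=float)
    m = y.mean()  # the 0.5-expectile is the mean, a natural starting value
    for _ in range(max_iter):
        w = np.where(y > m, theta, 1.0 - theta)
        m_new = float(w @ y / w.sum())
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(4)
y = rng.normal(size=1000)
```

A small θ penalises losses below the expectile more heavily, which is the sense in which EVaR with low θ is a prudential downside measure.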

15.
This paper considers Bayesian estimation of the threshold vector error correction (TVECM) model in moderate to large dimensions. Using the lagged cointegrating error as a threshold variable gives rise to additional difficulties that typically are solved by utilizing large sample approximations. By relying on Markov chain Monte Carlo methods, we are able to circumvent these issues and avoid computationally prohibitive estimation strategies like the grid search. Due to the proliferation of parameters, we use novel global-local shrinkage priors in the spirit of Griffin and Brown (2010). We illustrate the merits of our approach in an application to five exchange rates vis-à-vis the US dollar by means of a forecasting comparison. Our findings indicate that adopting a non-linear modeling approach improves the predictive accuracy for most currencies relative to a set of simpler benchmark models and the random walk.

16.
Newey and Powell [2003. Instrumental variable estimation of nonparametric models. Econometrica 71, 1565–1578] and Ai and Chen [2003. Efficient estimation of models with conditional moment restrictions containing unknown functions. Econometrica 71, 1795–1843] propose sieve minimum distance (SMD) estimation of both a finite-dimensional parameter (θ) and an infinite-dimensional parameter (h) that are identified through a conditional moment restriction model, in which h may depend on endogenous variables. This paper modifies their SMD procedure to allow for different conditioning variables to be used in different equations, and derives the asymptotic properties when the model may be misspecified. Under low-level sufficient conditions, we show that: (i) the modified SMD estimators of both θ and h converge to pseudo-true values in probability; (ii) the SMD estimators of smooth functionals, including the θ estimator and the average derivative estimator, are asymptotically normally distributed; and (iii) the estimators for the asymptotic covariances of the SMD estimators of smooth functionals are consistent and easy to compute. These results allow for asymptotically valid tests of various hypotheses on the smooth functionals regardless of whether the semiparametric model is correctly specified.

17.
Journal of Econometrics (2005), 124(1), 117–148
This paper discusses specification tests for diffusion processes. In the one-dimensional case, our proposed test is closest to the nonparametric test of Aït-Sahalia (Rev. Financ. Stud. 9 (1996) 385). However, we compare CDFs instead of densities. In the multidimensional and/or multifactor case, our proposed test is based on comparison of the empirical CDF of actual data and the empirical CDF of simulated data. Asymptotically valid critical values are obtained using an empirical process version of the block bootstrap which accounts for parameter estimation error. An example based on a simple version of the Cox et al. (Econometrica 53 (1985) 385) model is outlined and related Monte Carlo experiments are carried out.

18.
We generalize the weak-instrument-robust score (or Lagrange multiplier) and likelihood ratio instrumental variables (IV) statistics to multiple parameters and a general covariance matrix so that they can be used in the generalized method of moments (GMM). The GMM extension of Moreira's [2003. A conditional likelihood ratio test for structural models. Econometrica 71, 1027–1048] conditional likelihood ratio statistic preserves its expression, except that it becomes conditional on a statistic that tests the rank of a matrix. We analyze the spurious power decline of Kleibergen's [2002. Pivotal statistics for testing structural parameters in instrumental variables regression. Econometrica 70, 1781–1803; 2005. Testing parameters in GMM without assuming that they are identified. Econometrica 73, 1103–1124] score statistic and show that an independent misspecification pre-test overcomes it. We construct identification statistics that reflect whether the confidence sets of the parameters are bounded. A power study and the possible shapes of confidence sets illustrate the analysis.

19.
We consider the power properties of the CUSUM and CUSUM of squares (CUSQ) tests in the presence of a one-time change in the parameters of a linear regression model. A result due to Ploberger and Krämer [1990. The local power of the cusum and cusum of squares tests. Econometric Theory 6, 335–347.] is that the CUSQ test has only trivial asymptotic local power in this case, while the CUSUM test has non-trivial local asymptotic power unless the change is orthogonal to the mean regressor. The main theme of the paper is that such conclusions obtained from a local asymptotic framework are not reliable guides to what happens in finite samples. The approach we take is to derive expansions of the test statistics that retain terms related to the magnitude of the change under the alternative hypothesis. This enables us to analyze what happens for non-local to zero breaks. Our theoretical results are able to explain how the power function of the tests can be drastically different depending on whether one deals with a static regression with uncorrelated errors, a static regression with correlated errors, a dynamic regression with lagged dependent variables, or whether a correction for non-normality is applied in the case of the CUSQ. We discuss in which cases the tests are subject to a non-monotonic power function that goes to zero as the magnitude of the change increases, and uncover some curious properties. All theoretical results are verified to yield good guides to the finite sample power through simulation experiments. We finally highlight the practical importance of our results.

20.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small- and medium-sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite-sample distribution of the long-run variance estimator used in the KPSS test, while the second source of the size distortions is the serial correlation not captured by the long-run variance estimator because of a too narrow choice of truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite-sample distribution of the KPSS test statistic, conditional on the time-series dimension and the truncation lag parameter, is used. Hence, finite-sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is a lower raw power against a non-stationary alternative hypothesis.
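For reference, the level-stationarity KPSS statistic with a Bartlett-kernel long-run variance can be sketched as follows (a sketch with illustrative values and names; the truncation lag `lags` is exactly the tuning parameter whose choice drives the second source of size distortion discussed above):

```python
import numpy as np

def kpss_stat(y, lags=4):
    """Level-stationarity KPSS statistic: sum of squared partial sums of the
    demeaned series over n^2 times a Bartlett-weighted long-run variance."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    e = y - y.mean()
    s = np.cumsum(e)  # partial sums of the demeaned series
    lrv = e @ e / n
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)  # Bartlett kernel weight
        lrv += 2.0 * w * (e[j:] @ e[:-j]) / n
    return float(s @ s / (n ** 2 * lrv))

rng = np.random.default_rng(5)
wn = rng.normal(size=500)             # stationary: statistic should be small
rw = np.cumsum(rng.normal(size=500))  # unit root: statistic should be large
```

Comparing the statistic against finite-sample rather than asymptotic critical values, conditional on `n` and `lags`, is precisely the size-control remedy the abstract proposes.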

