20 similar documents found; search took 15 ms.
1.
This paper proposes a test of the null hypothesis of stationarity that is robust to the presence of fat-tailed errors. The test statistic is a modified version of the so-called KPSS statistic. The modified statistic uses the “sign” of the data minus the sample median, whereas KPSS used deviations from means. This “indicator” KPSS statistic has the same limit distribution as the standard KPSS statistic under the null, without relying on assumptions about moments, but a different limit distribution under unit root alternatives. The indicator test has lower power than standard KPSS when tails are thin, but higher power when tails are fat.
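The indicator statistic is simple to compute. Below is a minimal sketch, assuming the standard KPSS partial-sum form with the long-run variance replaced by the plain sample variance of the sign series (the paper's full treatment of the scale estimate is not reproduced):

```python
import numpy as np

def indicator_kpss(x):
    """Sketch of the 'indicator' KPSS statistic: the usual KPSS
    partial-sum statistic computed on sign(x_t - median(x)) instead of
    deviations from the mean.  Illustrative only: the scale estimate
    here is the naive sample variance of the sign series, not the
    long-run variance estimator used in the paper."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    s = np.sign(x - np.median(x))   # indicator series
    S = np.cumsum(s)                # partial sums
    scale = np.mean(s ** 2)         # naive variance of the sign series
    return np.sum(S ** 2) / (T ** 2 * scale)
```

Because the sign series is bounded, no moment conditions on the data are needed for this statistic to be well defined, which is the point of the construction.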
2.
Long-run variance estimation can typically be viewed as the problem of estimating the scale of a limiting continuous time Gaussian process on the unit interval. A natural benchmark model is given by a sample that consists of equally spaced observations of this limiting process. The paper analyzes the asymptotic robustness of long-run variance estimators to contaminations of this benchmark model. It is shown that any equivariant long-run variance estimator that is consistent in the benchmark model is highly fragile: there always exists a sequence of contaminated models with the same limiting behavior as the benchmark model for which the estimator converges in probability to an arbitrary positive value. A class of robust inconsistent long-run variance estimators is derived that optimally trades off asymptotic variance in the benchmark model against the largest asymptotic bias in a specific set of contaminated models.
3.
Cointegration, common cycle, and related test statistics are often constructed using logged data, even without a clear reason why logs should be used rather than levels. Unfortunately, standard data transformation tests, such as those based on Box–Cox transformations, cannot be shown to be consistent unless assumptions are made concerning whether the variables are I(0) or I(1). In this paper, we propose a simple randomized procedure for choosing between levels and log-levels specifications in the (possible) presence of deterministic and/or stochastic trends, and discuss the impact of incorrect data transformation on common cycle, cointegration and unit root tests.
4.
We propose a class of distribution-free rank-based tests for the null hypothesis of a unit root. This class is indexed by the choice of a reference density g, which need not coincide with the unknown actual innovation density f. The validity of these tests, in terms of exact finite-sample size, is guaranteed, irrespective of the actual underlying density, by distribution-freeness. These tests are locally and asymptotically optimal under a particular asymptotic scheme, for which we provide a complete analysis of asymptotic relative efficiencies. Rather than stressing asymptotic optimality, however, we emphasize finite-sample performances, which also depend, quite heavily, on initial values. It appears that our rank-based tests significantly outperform the traditional Dickey–Fuller tests, as well as the more recent procedures proposed by Elliott et al. (1996), Ng and Perron (2001), and Elliott and Müller (2006), for a broad range of initial values and for heavy-tailed innovation densities. Thus, they provide a useful complement to existing techniques.
5.
The paper proposes a novel inference procedure for long-horizon predictive regression with persistent regressors, allowing the autoregressive roots to lie in a wide vicinity of unity. The invalidity of conventional tests when regressors are persistent has led to a large literature dealing with inference in predictive regressions with local to unity regressors. Magdalinos and Phillips (2009b) recently developed a new framework of extended IV procedures (IVX) that enables robust chi-square testing for a wider class of persistent regressors. We extend this robust procedure to an even wider parameter space in the vicinity of unity and apply the methods to long-horizon predictive regression. Existing methods in this model, which rely on simulated critical values by inverting tests under local to unity conditions, cannot be easily extended beyond the scalar regressor case or to wider autoregressive parametrizations. In contrast, the methods developed here lead to standard chi-square tests, allow for multivariate regressors, and include predictive processes whose roots may lie in a wide vicinity of unity. As such they have many potential applications in predictive regression. In addition to asymptotics under the null hypothesis of no predictability, the paper investigates validity under the alternative, showing how balance in the regression may be achieved through the use of localizing coefficients and developing local asymptotic power properties under such alternatives. These results help to explain some of the empirical difficulties that have been encountered in establishing predictability of stock returns.
6.
In this paper we provide a joint treatment of two major problems that surround testing for a unit root in practice: uncertainty as to whether or not a linear deterministic trend is present in the data, and uncertainty as to whether the initial condition of the process is (asymptotically) negligible or not. We suggest decision rules based on the union of rejections of four standard unit root tests (OLS and quasi-differenced demeaned and detrended ADF unit root tests), along with information regarding the magnitude of the trend and initial condition, to allow simultaneously for both trend and initial condition uncertainty.
7.
A versatile and robust metric entropy test of time-reversibility, and other hypotheses
We examine the performance of a metric entropy statistic as a robust test for time-reversibility (TR), symmetry, and serial dependence. It also serves as a measure of goodness-of-fit. The statistic provides a consistent and unified basis in model search, and is a powerful diagnostic measure with surprising ability to pinpoint areas of model failure. We provide empirical evidence comparing the performance of the proposed procedure with some of the modern competitors in nonlinear time-series analysis, such as robust implementations of the BDS and characteristic function-based tests of TR, along with correlation-based competitors such as the Ljung–Box Q-statistic. Unlike our procedure, each of its competitors is motivated for a different, specific, context and hypothesis. Our evidence is based on Monte Carlo simulations along with an application to several stock indices for the US equity market.
8.
A new test is proposed for the weak white noise null hypothesis. The test is based on a new automatic selection of the order for a Box–Pierce (1970) test statistic or the test statistic of Hong (1996). The heteroskedasticity and autocorrelation-consistent (HAC) critical values from Lee (2007) are used, allowing for estimation of the error term. The data-driven order selection is tailored to detect a new class of alternatives with autocorrelation coefficients which can be o(n^{-1/2}) provided there are sufficiently many such coefficients. A simulation experiment illustrates the good statistical properties of the test both under the weak white noise null and the alternative.
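For reference, the underlying Box–Pierce statistic at a fixed order m can be sketched as follows; the automatic, data-driven choice of m and the HAC critical values that are the paper's contribution are omitted, so this is only the classical building block:

```python
import numpy as np

def box_pierce_q(x, m):
    """Classical Box-Pierce Q statistic at a fixed order m:
    Q = n * sum of the first m squared sample autocorrelations.
    Under a strong white noise null, Q is asymptotically chi-square(m)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    rho = np.array([np.sum(xc[k:] * xc[:-k]) / denom
                    for k in range(1, m + 1)])
    return n * np.sum(rho ** 2)
```

A data-driven procedure in the spirit of the paper would replace the fixed m with an order chosen to maximize a penalized version of Q over a grid of candidate orders.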
9.
Harvey, Leybourne and Taylor [Harvey, D.I., Leybourne, S.J., Taylor, A.M.R. 2009. Simple, robust and powerful tests of the breaking trend hypothesis. Econometric Theory 25, 995–1029] develop a test for the presence of a broken linear trend at an unknown point in the sample, whose asymptotic size is robust to whether the (unknown) order of integration of the data is zero or one. This test is not size controlled, however, when this order takes fractional values; its asymptotic size can be either zero or one in such cases. In this paper we suggest a new test, based on a sup-Wald statistic, which is asymptotically size-robust across fractional values of the order of integration (including zero and one). We examine the asymptotic power of the test under a local trend break alternative. The finite sample properties of the test are also investigated.
10.
We propose methods for constructing confidence sets for the timing of a break in level and/or trend that have asymptotically correct coverage for both I(0) and I(1) processes. These are based on inverting a sequence of tests for the break location, evaluated across all possible break dates. We separately derive locally best invariant tests for the I(0) and I(1) cases; under their respective assumptions, the resulting confidence sets provide correct asymptotic coverage regardless of the magnitude of the break. We suggest use of a pre-test procedure to select between the I(0)- and I(1)-based confidence sets, and Monte Carlo evidence demonstrates that our recommended procedure achieves good finite sample properties in terms of coverage and length across both I(0) and I(1) environments. An application using US macroeconomic data is provided which further evinces the value of these procedures.
11.
A formal test on the Lyapunov exponent, based on the Nadaraya–Watson kernel estimator of the Lyapunov exponent, is developed to distinguish a random walk model from a chaotic system. The asymptotic null distribution of our test statistic is free of nuisance parameters, and is simply given by the range of standard Brownian motion on the unit interval. The test is consistent against chaotic alternatives. A simulation study shows that the test performs reasonably well in finite samples. We apply our test to some standard macro and financial time series, finding no significant empirical evidence of chaos.
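The null distribution mentioned above, the range of a standard Brownian motion on [0,1], has no simple closed-form quantiles, but critical values are easy to simulate. A sketch (the path count and step size are arbitrary illustrative choices):

```python
import numpy as np

def brownian_range_quantile(q, n_paths=2000, n_steps=1000, seed=0):
    """Empirical quantile of range = max - min of standard Brownian
    motion on [0, 1], simulated on a discrete grid.  The starting
    value W(0) = 0 is included when taking the max and min."""
    rng = np.random.default_rng(seed)
    inc = rng.normal(scale=np.sqrt(1.0 / n_steps),
                     size=(n_paths, n_steps))
    paths = np.cumsum(inc, axis=1)
    ranges = (np.maximum(paths.max(axis=1), 0.0)
              - np.minimum(paths.min(axis=1), 0.0))
    return np.quantile(ranges, q)
```

The discrete grid slightly understates the true continuous-time range, so in practice one would use a fine grid or an analytical series expansion for the range distribution.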
12.
Recent approaches to testing for a unit root when uncertainty exists over the presence and timing of a trend break employ break detection methods, so that a with-break unit root test is used only if a break is detected by some auxiliary statistic. While these methods achieve near asymptotic efficiency in both fixed trend break and no trend break environments, in finite samples pronounced “valleys” in the power functions of the tests (when mapped as functions of the break magnitude) are observed, with power initially high for very small breaks, then decreasing as the break magnitude increases, before increasing again. In response to this problem, we propose two practical solutions, based either on the use of a with-break unit root test but with adaptive critical values, or on a union of rejections principle taken across with-break and without-break unit root tests. These new procedures are shown to offer improved reliability in terms of finite sample power. We also develop local limiting distribution theory for both the extant and the newly proposed unit root statistics, treating the trend break magnitude as local-to-zero. We show that this framework allows the asymptotic analysis to closely approximate the finite sample power valley phenomenon, thereby providing useful analytical insights.
13.
Y is conditionally independent of Z given X if Pr{f(y|X,Z)=f(y|X)}=1 for all y on its support, where f(·|·) denotes the conditional density of Y given (X,Z) or X. This paper proposes a nonparametric test of conditional independence based on the notion that two conditional distributions are equal if and only if the corresponding conditional characteristic functions are equal. We extend the test of Su and White (2005. A Hellinger-metric nonparametric test for conditional independence. Discussion Paper, Department of Economics, UCSD) in two directions: (1) our test is less sensitive to the choice of bandwidth sequences; (2) our test has power against deviations on the full support of the density of (X,Y,Z). We establish asymptotic normality for our test statistic under weak data dependence conditions. Simulation results suggest that the test is well behaved in finite samples. Applications to stock market data indicate that our test can reveal some interesting nonlinear dependence that a traditional linear Granger causality test fails to detect.
14.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort reduces enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates the simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the model’s mixture regimes’ parameters. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models like DCC or mixture GARCH models and a mixture instrumental variables model.
15.
A class of stochastic unit-root bilinear processes, allowing for GARCH-type effects with asymmetries, is studied. Necessary and sufficient conditions for the strict and second-order stationarity of the error process are given. The strictly stationary solution is shown to be strongly mixing under mild additional assumptions. It follows that, in this model, the standard (non-stochastic) unit-root tests of Phillips–Perron and Dickey–Fuller are asymptotically valid to detect the presence of a (stochastic) unit-root. The finite sample properties of these tests are studied via Monte-Carlo experiments.
16.
Perron [Perron, P., 1989. The great crash, the oil price shock and the unit root hypothesis. Econometrica 57, 1361–1401] introduced a variety of unit root tests that are valid when a break in the trend function of a time series is present. The motivation was to devise testing procedures that were invariant to the magnitude of the shift in level and/or slope. In particular, if a change is present it is allowed under both the null and alternative hypotheses. This analysis was carried out under the assumption of a known break date. The subsequent literature aimed to devise testing procedures valid in the case of an unknown break date. However, in doing so, most of the literature, and in particular the commonly used test of Zivot and Andrews [Zivot, E., Andrews, D.W.K., 1992. Further evidence on the great crash, the oil price shock and the unit root hypothesis. Journal of Business and Economic Statistics 10, 251–270], assumed that if a break occurs, it does so only under the alternative hypothesis of stationarity. This is undesirable since (a) it imposes an asymmetric treatment when allowing for a break, so that the test may reject when the noise is integrated but the trend is changing; (b) if a break is present, this information is not exploited to improve the power of the test. In this paper, we propose a testing procedure that addresses both issues. It allows a break under both the null and alternative hypotheses and, when a break is present, the limit distribution of the test is the same as in the case of a known break date, thereby allowing increased power while maintaining the correct size. Simulation experiments confirm that our procedure offers an improvement over commonly used methods in small samples.
17.
This paper demonstrates that the class of conditionally linear and Gaussian state-space models offers a general and convenient framework for simultaneously handling nonlinearity, structural change and outliers in time series. Many popular nonlinear time series models, including threshold, smooth transition and Markov-switching models, can be written in state-space form. It is then straightforward to add components that capture parameter instability and intervention effects. We advocate a Bayesian approach to estimation and inference, using an efficient implementation of Markov Chain Monte Carlo sampling schemes for such linear dynamic mixture models. The general modelling framework and the Bayesian methodology are illustrated by means of several examples. An application to quarterly industrial production growth rates for the G7 countries demonstrates the empirical usefulness of the approach.
18.
We consider a semiparametric cointegrating regression model, for which the disequilibrium error is further explained nonparametrically by a functional of distributions changing over time. The paper develops the statistical theories of the model. We propose an efficient econometric estimator and obtain its asymptotic distribution. A specification test for the model is also investigated. The model and methodology are applied to analyze how an aging population in the US influences the consumption level and the savings rate. We find that the impact of age distribution on the consumption level and the savings rate is consistent with the life-cycle hypothesis.
19.
Hotelling's T
2 statistic is an important tool for inference about the center of a multivariate normal population. However, hypothesis tests
and confidence intervals based on this statistic can be adversely affected by outliers. Therefore, we construct an alternative
inference technique based on a statistic which uses the highly robust MCD estimator [9] instead of the classical mean and
covariance matrix. Recently, a fast algorithm was constructed to compute the MCD [10]. In our test statistic we use the reweighted
MCD, which has a higher efficiency. The distribution of this new statistic differs from the classical one. Therefore, the
key problem is to find a good approximation for this distribution. Similarly to the classical T
2 distribution, we obtain a multiple of a certain F-distribution. A Monte Carlo study shows that this distribution is an accurate
approximation of the true distribution. Finally, the power and the robustness of the one-sample test based on our robust T
2 are investigated through simulation. 相似文献
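The robust statistic itself is straightforward to form once an MCD fit is available. A sketch using scikit-learn's `MinCovDet` (which computes a reweighted MCD); the F-type reference distribution derived in the paper is not reproduced, so this computes only the statistic, not the full test:

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_t2(X, mu0):
    """Hotelling-type T^2 with the reweighted MCD location and scatter
    plugged in for the classical sample mean and covariance matrix."""
    n = X.shape[0]
    mcd = MinCovDet(random_state=0).fit(X)
    diff = mcd.location_ - np.asarray(mu0, dtype=float)
    # n * diff' * Sigma^{-1} * diff, via a linear solve for stability
    return n * diff @ np.linalg.solve(mcd.covariance_, diff)
```

For p-values one would compare this statistic against the scaled F approximation derived in the paper rather than against the classical T² reference distribution.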
20.
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density and for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
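For the scalar case, the Gaussian quasi-likelihood idea can be sketched with first-order (Euler) moment approximations. Note this is a simplified illustration: the paper's approximating moments are exact for affine models and more accurate in general, whereas the Euler moments below are only first-order accurate in the time step:

```python
import numpy as np

def gaussian_qml_nll(theta, x, dt, drift, diffusion):
    """Negative Gaussian quasi-log-likelihood for discretely observed
    dX = drift(X) dt + diffusion(X) dW, with the conditional mean and
    variance approximated to first order in dt (Euler moments)."""
    x0, x1 = x[:-1], x[1:]
    mean = x0 + drift(x0, theta) * dt
    var = diffusion(x0, theta) ** 2 * dt
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x1 - mean) ** 2 / var)
```

Minimizing this criterion over theta (e.g. with a generic numerical optimizer) yields the quasi-maximum likelihood estimates.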