Similar documents
Found 20 similar documents (search time: 187 ms)
1.
In this paper, we propose several finite‐sample specification tests for multivariate linear regressions (MLR). We focus on tests for serial dependence and ARCH effects with possibly non‐Gaussian errors. The tests are based on properly standardized multivariate residuals to ensure invariance to error covariances. The procedures proposed provide: (i) exact variants of standard multivariate portmanteau tests for serial correlation as well as ARCH effects, and (ii) exact versions of the diagnostics presented by Shanken (1990) which are based on combining univariate specification tests. Specifically, we combine tests across equations using a Monte Carlo (MC) test method so that Bonferroni‐type bounds can be avoided. The procedures considered are evaluated in a simulation experiment: the latter shows that standard asymptotic procedures suffer from serious size problems, while the MC tests suggested display excellent size and power properties, even when the sample size is small relative to the number of equations, with normal or Student‐t errors. The tests proposed are applied to the Fama–French three‐factor model. Our findings suggest that the i.i.d. error assumption provides an acceptable working framework once we allow for non‐Gaussian errors within 5‐year sub‐periods, whereas temporal instabilities clearly plague the full‐sample dataset. Copyright © 2009 John Wiley & Sons, Ltd.

2.
This paper deals with the finite‐sample performance of a set of unit‐root tests for cross‐correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit root, while panel tests, by exploiting the large number of cross‐sectional units, have been shown to be a promising way of increasing the power of unit‐root tests. We investigate the finite sample properties of recently proposed panel unit‐root tests for cross‐sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed by a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N, and model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non‐stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross‐sectional units heterogeneously.
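The panel procedures surveyed above all build on univariate Dickey–Fuller regressions applied unit by unit. As a minimal numpy-only sketch of that building block (not of the panel tests themselves; the no-constant, no-lag variant below is a simplifying assumption made for brevity), one can contrast a simulated random walk with a stationary AR(1):

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic from the regression
    dy_t = rho * y_{t-1} + e_t (no constant, no lagged differences)."""
    dy = np.diff(y)
    ylag = y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(42)
n = 500
eps = rng.standard_normal(n)

random_walk = np.cumsum(eps)        # unit root: t-stat near zero or mildly negative
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]  # stationary: strongly negative t-stat

t_rw = df_tstat(random_walk)
t_ar1 = df_tstat(ar1)
```

Under the unit-root null this t-statistic follows the non-standard Dickey–Fuller distribution, so normal critical values do not apply; the stationary series produces a far more negative statistic than the random walk.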

3.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub‐sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]‐type statistics, our proposed tests are based on the corresponding functions of sub‐sample implementations of the well‐known maximal recursive‐estimates and re‐scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite‐sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.

4.
The effect of a program or treatment may vary according to observed characteristics. In such a setting, it may not only be of interest to determine whether the program or treatment has an effect on some sub‐population defined by these observed characteristics, but also to determine for which sub‐populations, if any, there is an effect. This paper treats this problem as a multiple testing problem in which each null hypothesis in the family of null hypotheses specifies whether the program has an effect on the outcome of interest for a particular sub‐population. We develop our methodology in the context of PROGRESA, a large‐scale poverty‐reduction program in Mexico. In our application, the outcome of interest is the school enrollment rate and the sub‐populations are defined by gender and highest grade completed. Under weak assumptions, the testing procedure we construct controls the familywise error rate—the probability of even one false rejection—in finite samples. Similar to earlier studies, we find that the program has a significant effect on the school enrollment rate, but only for a much smaller number of sub‐populations when compared to results that do not adjust for multiple testing. Copyright © 2013 John Wiley & Sons, Ltd.
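The familywise error rate mentioned above can be controlled, for example, by the classical Holm step-down procedure; the sketch below is an illustrative stand-in, not the finite-sample procedure constructed in the paper, and the per-sub-population p-values are invented:

```python
import numpy as np

def holm_reject(p_values, alpha=0.05):
    """Holm step-down procedure: controls the familywise error rate
    at alpha under arbitrary dependence among the tests.
    Returns a boolean array aligned with the input order."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):  # thresholds alpha/m, alpha/(m-1), ...
            reject[idx] = True
        else:
            break  # step-down stops at the first non-rejection
    return reject

# Hypothetical p-values, e.g. one test per gender x grade cell.
p_values = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
flags = holm_reject(p_values)
```

Unadjusted testing at the 5% level would reject four of these six hypotheses; the step-down procedure retains only two, echoing the abstract's finding that fewer sub-populations survive a multiplicity adjustment.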

5.
In this paper, we use Monte Carlo (MC) testing techniques for testing linearity against smooth transition models. The MC approach allows us to introduce a new test that differs in two respects from the tests existing in the literature. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Secondly, the test is not based on an auxiliary regression obtained by replacing the model under the alternative by approximations based on a Taylor expansion. We also apply MC testing methods to size‐correct the test proposed by Luukkonen, Saikkonen and Teräsvirta (Biometrika, Vol. 75, 1988, p. 491). The results show that the power loss implied by the auxiliary regression‐based test is non‐existent compared with a supremum‐based test but is more substantial when compared with the three other tests under consideration.
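The exactness property described above rests on the generic Monte Carlo test idea: when the statistic is pivotal under the null, a p-value built from B simulated draws yields a test whose size is exactly the nominal level whenever α(B + 1) is an integer. A toy sketch (the statistic and simulation design below are invented purely for illustration):

```python
import numpy as np

def mc_p_value(obs_stat, simulate_stat, n_sims=99, seed=0):
    """Monte Carlo p-value: with B simulated draws of the statistic
    under the null, p = (1 + #{sims >= obs}) / (B + 1); rejecting
    when p <= alpha has size exactly alpha if alpha*(B+1) is an integer."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_sims)])
    # Count simulated statistics at least as extreme as the observed one.
    return (1 + np.sum(sims >= obs_stat)) / (n_sims + 1)

# Toy statistic: n * xbar^2 is chi-square(1) under a standard normal null.
def sim_stat(rng, n=50):
    x = rng.standard_normal(n)
    return n * x.mean() ** 2

p = mc_p_value(obs_stat=5.0, simulate_stat=sim_stat, n_sims=199)
```

With 199 draws and a 5% level, 0.05 × 200 = 10 is an integer, so the randomized test is exact regardless of sample size.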

6.
There has been a substantial debate whether GNP has a unit root. However, statistical tests have had little success in distinguishing between unit‐root and trend‐reverting specifications because of poor statistical properties. This paper develops a new exact small‐sample, pointwise most powerful unit root test that is invariant to the unknown mean and scale of the time series tested, that generates exact small‐sample critical values, powers and p‐values, that has power which approximates the maximum possible power, and that is highly robust to conditional heteroscedasticity. This test decisively rejects the unit root null hypothesis when applied to annual US real GNP and US real per capita GNP series. This paper also develops a modified version of the test to address whether a time series contains a permanent, unit root process in addition to a temporary, stationary process. It shows that if these GNP series contain a unit root process in addition to the stationary process, then it is most likely very small. Copyright © 2001 John Wiley & Sons, Ltd.

7.
Empirical Bayes methods of estimating the local false discovery rate (LFDR) by maximum likelihood estimation (MLE), originally developed for large numbers of comparisons, are applied to a single comparison. Specifically, when assuming a lower bound on the mixing proportion of true null hypotheses, the LFDR MLE can yield reliable hypothesis tests and confidence intervals given as few as one comparison. Simulations indicate that constrained LFDR MLEs perform markedly better than conventional methods, both in testing and in confidence intervals, for high values of the mixing proportion, but not for low values. (A decision‐theoretic interpretation of the confidence distribution made those comparisons possible.) In conclusion, the constrained LFDR estimators and the resulting effect‐size interval estimates are not only effective multiple comparison procedures but might also replace p‐values and confidence intervals more generally. The new methodology is illustrated with the analysis of proteomics data.
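In the standard two-group model behind LFDR methods, the local false discovery rate at an observed z-value is π0·f0(z)/f(z), with π0 the mixing proportion of true nulls. The sketch below simply evaluates that formula for an assumed, known alternative density; the paper's estimator instead obtains the unknown quantities (with π0 subject to a lower bound) by maximum likelihood:

```python
from scipy.stats import norm

def local_fdr(z, pi0, mu_alt, sd_alt=1.0):
    """Two-group model LFDR: pi0 * f0(z) / f(z), where f0 is the N(0,1)
    null density, the alternative is N(mu_alt, sd_alt^2), and
    f = pi0 * f0 + (1 - pi0) * f1 is the marginal density."""
    f0 = norm.pdf(z)
    f1 = norm.pdf(z, loc=mu_alt, scale=sd_alt)
    f = pi0 * f0 + (1 - pi0) * f1
    return pi0 * f0 / f

# With a lower bound pi0 >= 0.5 (cf. the constrained-MLE idea),
# evaluate at z = 3 against a hypothetical N(3, 1) alternative.
lfdr = local_fdr(3.0, pi0=0.5, mu_alt=3.0)
```

A small LFDR value means the observed z is much better explained by the alternative component than by the null.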

8.
A test statistic is developed for making inference about a block‐diagonal structure of the covariance matrix when the dimensionality p exceeds n, where n = N − 1 and N denotes the sample size. The suggested procedure extends the complete independence results. Because the classical hypothesis testing methods based on the likelihood ratio degenerate when p > n, the main idea is to turn instead to a distance function between the null and alternative hypotheses. The test statistic is then constructed using a consistent estimator of this function, where consistency is considered in an asymptotic framework that allows p to grow together with n. The suggested statistic is also shown to be asymptotically normal under the null hypothesis. Some auxiliary results on the moments of products of multivariate normal random vectors and higher‐order moments of the Wishart matrices, which are important for our evaluation of the test statistic, are derived. We perform empirical power analysis for a number of alternative covariance structures.
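The distance-function idea is easy to illustrate at the population level: project the covariance matrix onto the set of block-diagonal matrices and measure the squared Frobenius distance, which is zero exactly under the null. (The paper's statistic is a consistent estimator of such a distance under joint (n, p)-asymptotics; the sketch below computes only the population quantity on a toy matrix.)

```python
import numpy as np

def block_diag_distance(S, blocks):
    """Squared Frobenius distance between a covariance matrix S and its
    projection onto block-diagonal matrices with the given index blocks;
    zero if and only if S is exactly block-diagonal."""
    P = np.zeros_like(S)
    for idx in blocks:
        P[np.ix_(idx, idx)] = S[np.ix_(idx, idx)]  # keep within-block entries
    return np.sum((S - P) ** 2)

S = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# S is block-diagonal with blocks {0, 1} and {2}, so the distance is zero.
d = block_diag_distance(S, blocks=[[0, 1], [2]])
```

Testing the finer partition {0}, {1}, {2} instead gives a positive distance (the two off-diagonal 0.5 entries contribute 2 × 0.25 = 0.5), so that null would be rejected given enough data.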

9.
In a cross‐section where the initial distribution of observations differs from the steady‐state distribution and initial values matter, convergence is best measured in terms of σ‐convergence over a fixed time period. For this setting, we propose a new simple Wald test for conditional σ‐convergence. According to our Monte Carlo simulations, this test performs well and its power is comparable with the available tests of unconditional convergence. We apply two versions of the test to conditional convergence in the size of European manufacturing firms. The null hypothesis of no convergence is rejected for all country groups, most single economies, and for younger firms of our sample of 49,646 firms.

10.
With cointegration tests often being oversized under time‐varying error variance, it is possible, if not likely, to confuse error variance non‐stationarity with cointegration. This paper takes an instrumental variable (IV) approach to establish individual‐unit test statistics for no cointegration that are robust to variance non‐stationarity. The sign of a fitted departure from long‐run equilibrium is used as an instrument when estimating an error‐correction model. The resulting IV‐based test is shown to follow a chi‐square limiting null distribution irrespective of the variance pattern of the data‐generating process. In spite of this, the test proposed here has, unlike previous work relying on instrumental variables, competitive local power against sequences of local alternatives in 1/T‐neighbourhoods of the null. The standard limiting null distribution motivates using the single‐unit tests in a multiple testing approach for cointegration in multi‐country data sets, combining P‐values from individual units. Simulations suggest good performance of the single‐unit and multiple testing procedures under various plausible designs of cross‐sectional correlation and cross‐unit cointegration in the data. An application to the equilibrium relationship between short‐ and long‐term interest rates illustrates the dramatic differences between results of robust and non‐robust tests.

11.
This paper illustrates that, under the null hypothesis of no cointegration, the correlation of p‐values from a single‐equation residual‐based test (i.e., ADF or ) with a system‐based test (trace or maximum eigenvalue) is very low even as the sample size gets large. With data‐generating processes under the null or ‘near’ it, the two types of tests can yield virtually any combination of p‐values regardless of sample size. As a practical matter, we also conduct tests for cointegration on 132 data sets from 34 studies appearing in this Journal and find substantial differences in p‐values for the same data set. Copyright © 2004 John Wiley & Sons, Ltd.

12.
This paper proposes an empirical asset pricing test based on the homogeneity of the factor risk premia across risky assets. Factor loadings are considered to be dynamic and estimated from data at higher frequencies. The factor risk premia are obtained as estimates from time series regressions applied to each risky asset. We propose Swamy‐type tests robust to the presence of generated regressors and dependence between the pricing errors to assess the homogeneity of the factor risk premia and the zero intercept hypothesis. An application to US industry portfolios shows overwhelming evidence rejecting the capital asset pricing model, and the three‐ and five‐factor models developed by Fama and French (Journal of Financial Economics, 1993, 33, 3–56; Journal of Financial Economics, 2015, 116, 1–22). In particular, we reject the null hypotheses of a zero intercept, homogeneous factor risk premia across risky assets, and the joint test involving both hypotheses.

13.
This article aims to provide some empirical guidelines for the practical implementation of right‐tailed unit root tests, focusing on the recursive right‐tailed ADF test of Phillips et al. (2011b). We analyze and compare the limit theory of the recursive test under different hypotheses and model specifications. The size and power properties of the test under various scenarios are examined and some recommendations for empirical practice are given. Some new results on the consistent estimation of localizing drift exponents are obtained, which are useful in assessing model specification. Empirical applications to stock markets illustrate these specification issues and reveal their practical importance in testing.
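The recursive right-tailed scheme can be sketched as a supremum of ADF t-statistics over expanding samples, in the spirit of the SADF statistic of Phillips et al. (2011b). The implementation below is a simplified illustration (constant-only regression, lag length fixed at zero, no critical values), not the authors' procedure:

```python
import numpy as np

def adf_tstat(y):
    """t-statistic on rho in dy_t = a + rho * y_{t-1} + e_t.
    Right-tailed use: large positive values suggest explosive behaviour."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sup_adf(y, min_window=40):
    """Forward recursive statistic: sup of ADF t-stats over y[0:k]."""
    return max(adf_tstat(y[:k]) for k in range(min_window, len(y) + 1))

rng = np.random.default_rng(7)
n = 120
rw = np.cumsum(rng.standard_normal(n))  # unit root: sup stat stays moderate

boom = np.ones(n)
for t in range(1, n):
    boom[t] = 1.05 * boom[t - 1] + rng.standard_normal()  # mildly explosive

stat_rw, stat_boom = sup_adf(rw), sup_adf(boom)
```

In practice the statistic is compared with right-tailed critical values from the sup distribution; here the explosive series simply produces a far larger statistic than the random walk.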

14.
Hinkley (1977) derived two tests for testing the mean of a normal distribution with known coefficient of variation (c.v.) for right alternatives. They are the locally most powerful (LMP) and the conditional tests based on the ancillary statistic for μ. In this paper, the likelihood ratio (LR) and Wald tests are derived for the one‐ and two‐sided alternatives, as well as the two‐sided version of the LMP test. The performances of these tests are compared with those of the classical t, sign and Wilcoxon signed rank tests. The latter three tests do not use the information on c.v. Normal approximation is used to approximate the null distribution of the test statistics except for the t test. Simulation results indicate that all the tests maintain the type I error rates, that is, the attained level is close to the nominal level of significance of the tests. The power functions of the tests are estimated through simulation. The power comparison indicates that for one‐sided alternatives the LMP test is the best test whereas for the two‐sided alternatives the LR or the Wald test is the best test. The t, sign and Wilcoxon signed rank tests have lower power than the LMP, LR and Wald tests at various alternative values of μ. The power difference is quite large in several simulation configurations. Further, it is observed that the t, sign and Wilcoxon signed rank tests have considerably lower power even for the alternatives which are far away from the null hypothesis when the c.v. is large. To study the sensitivity of the tests for the violation of the normality assumption, the type I error rates are estimated on the observations of lognormal, gamma and uniform distributions. The newly derived tests maintain the type I error rates for moderate values of c.v.
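The three classical competitors in this comparison (the t, sign and Wilcoxon signed rank tests, none of which exploits the known coefficient of variation) are available in scipy; the data below are simulated for illustration, with c.v. 0.2 and a right-sided alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Normal data with coefficient of variation sigma/mu = 0.2:
# true mean 10, sd 2; we test H0: mu = 9 against the right alternative.
x = rng.normal(loc=10.0, scale=2.0, size=100)
mu0 = 9.0

# One-sample t test (uses the sample standard deviation).
t_p = stats.ttest_1samp(x, mu0, alternative='greater').pvalue

# Wilcoxon signed rank test on the deviations from mu0.
w_p = stats.wilcoxon(x - mu0, alternative='greater').pvalue

# Sign test: binomial test on the number of observations above mu0.
s_p = stats.binomtest(int(np.sum(x > mu0)), n=len(x), p=0.5,
                      alternative='greater').pvalue
```

All three ignore the known c.v., which is exactly the information the LMP, LR and Wald tests of the paper exploit to gain power.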

15.
This paper proposes a new system‐equation test for threshold cointegration based on a threshold vector autoregressive distributed lag (ADL) model. The new test can be applied when the cointegrating vector is unknown and when weak exogeneity fails. The asymptotic null distribution of the new test is derived, critical values are tabulated and finite‐sample properties are examined. In particular, the new test is shown to have good size, so the bootstrap is not required. The new test is illustrated using the long‐term and short‐term interest rates. We show that the system‐equation model can shed light on both asymmetric adjustment speeds and asymmetric adjustment roles. The latter is unavailable in the single‐equation testing strategy.

16.
This paper introduces the notion of common non‐causal features and proposes tools to detect them in multivariate time series models. We argue that the existence of co‐movements might not be detected using the conventional stationary vector autoregressive (VAR) model as the common dynamics are present in the non‐causal (i.e. forward‐looking) component of the series. We show that the presence of a reduced rank structure makes it possible to identify purely causal and non‐causal VAR processes of order P>1 even in the Gaussian likelihood framework. Hence, usual test statistics and canonical correlation analysis can be applied, where either lags or leads are used as instruments to determine whether the common features are present in either the backward‐ or forward‐looking dynamics of the series. The proposed definitions of co‐movements are also valid for the mixed causal–non‐causal VAR, with the exception that a non‐Gaussian maximum likelihood estimator is necessary. This means, however, that one loses the benefits of the simple tools proposed. An empirical analysis on Brent and West Texas Intermediate oil prices illustrates the findings. No short run co‐movements are found in a conventional causal VAR, but they are detected when considering a purely non‐causal VAR.

17.
In this paper, we introduce several test statistics testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit‐root tests by Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346], and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

18.
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross‐equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared with a simulation‐based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal and stable error distributions. In the Gaussian case, finite‐sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi‐stage Monte Carlo test methods. For non‐Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The tests are applied to an asset pricing model with observable risk‐free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over 5‐year subperiods from 1926 to 1995.

19.
In this paper, we investigate a test for structural change in the long‐run persistence in a univariate time series. Our model has a unit root with no structural change under the null hypothesis, while under the alternative it changes from a unit‐root process to a stationary one or vice versa. We propose a Lagrange multiplier‐type test, a test with the quasi‐differencing method, and ‘demeaned versions’ of these tests. We find that the demeaned versions of these tests have better finite‐sample properties, although they are not necessarily superior in asymptotics to the other tests.

20.
Rationalizing non‐participation as a resource deficiency in the household, this paper identifies strategies for milk‐market development in the Ethiopian highlands. The additional amounts of covariates required for positive marketable surplus—‘distances‐to‐market’—are computed from a model in which production and sales are correlated; sales are left‐censored at some unobserved threshold; production efficiencies are heterogeneous; and the data are in the form of a panel. Incorporating these features into the modeling exercise is important because they are fundamental to the data‐generating environment. There are four reasons. First, because production and sales decisions are enacted within the same household, both decisions are affected by the same exogenous shocks, and production and sales are therefore likely to be correlated. Second, because selling involves time and time is arguably the most important resource available to a subsistence household, the minimum sales amount is not zero but, rather, some unobserved threshold that lies beyond zero. Third, the potential existence of heterogeneous abilities in management, ones that lie latent from the econometrician's perspective, suggest that production efficiencies should be permitted to vary across households. Fourth, we observe a single set of households during multiple visits in a single production year. The results convey clearly that institutional and production innovations alone are insufficient to encourage participation. Market‐precipitating innovation requires complementary inputs, especially improvements in human capital and reductions in risk. Copyright © 2008 John Wiley & Sons, Ltd.
