Similar Literature
20 similar documents found.
1.
In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose an integrated conditional moment (ICM) type predictive accuracy test similar in spirit to those developed by Bierens (J. Econometr. 20 (1982) 105; Econometrica 58 (1990) 1443) and Bierens and Ploberger (Econometrica 65 (1997) 1129). The test is consistent against generic nonlinear alternatives and is designed for comparing nested models. One important feature of our approach is that the same loss function is used for in-sample estimation and out-of-sample prediction. In this way, we rule out the possibility that the null model can outperform the nesting generic alternative model. It turns out that the limiting distribution of the ICM type test statistic that we propose is a functional of a Gaussian process with a covariance kernel that reflects both the time series structure of the data and the contribution of parameter estimation error. As a consequence, critical values are data dependent and cannot be directly tabulated. One approach in this case is to obtain critical value upper bounds using the approach of Bierens and Ploberger (Econometrica 65 (1997) 1129). Here, we establish the validity of a conditional p-value method for constructing critical values. The method is similar in spirit to that proposed by Hansen (Econometrica 64 (1996) 413) and Inoue (Econometric Theory 17 (2001) 156), although we additionally account for parameter estimation error. In a series of Monte Carlo experiments, the finite sample properties of three variants of the predictive accuracy test are examined. Our findings suggest that all three variants of the test have good finite sample properties when quadratic loss is specified, even for samples as small as 600 observations. However, non-quadratic loss functions such as linex loss require larger sample sizes (of 1000 observations or more) in order to ensure reasonable finite sample performance.
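A minimal sketch of an ICM-type statistic in the spirit of Bierens (1990) is given below: it averages squared weighted moments of the null model's out-of-sample errors over a grid of nuisance parameters. The bounded weight exp(γ·arctan(x)) and the grid are illustrative assumptions; the paper's actual statistic additionally corrects for parameter estimation error and uses conditional p-values, which this sketch omits.

```python
import numpy as np

def icm_statistic(u, x, gamma_grid):
    """Bierens-type ICM statistic: the average squared weighted moment
    m(gamma) = T^{-1/2} * sum_t u_t * exp(gamma * arctan(x_t)) over a grid
    of gamma values, where u are the null model's out-of-sample errors."""
    T = len(u)
    xs = (x - x.mean()) / x.std()             # standardize before the bounded map
    ms = np.array([(u * np.exp(g * np.arctan(xs))).sum() / np.sqrt(T)
                   for g in gamma_grid])
    return np.mean(ms ** 2)                   # discretized integral of m(gamma)^2

# Toy usage: errors that are predictable from x should inflate the statistic.
rng = np.random.default_rng(0)
x = rng.normal(size=600)
u_null = rng.normal(size=600)                 # unpredictable errors
u_alt = np.sin(2 * x) + rng.normal(size=600)  # neglected nonlinearity
grid = np.linspace(-3, 3, 25)
print(icm_statistic(u_null, x, grid), icm_statistic(u_alt, x, grid))
```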

2.
We propose a unit root test for panels with cross-sectional dependency. We allow a general dependency structure among the innovations that generate the data for each of the cross-sectional units. Each unit may have a different sample size, so unbalanced panels are also permitted in our framework. Yet the test is asymptotically normal and does not require any tabulation of critical values. Our test is based on nonlinear IV estimation of the usual augmented Dickey–Fuller type regression for each cross-sectional unit, using as instruments nonlinear transformations of the lagged levels. The actual test statistic is simply defined as a standardized sum of individual IV t-ratios. We show in the paper that such a standardized sum of individual IV t-ratios has a normal limiting distribution as long as the panels have large individual time series observations and are asymptotically balanced in a very weak sense. The number of cross-sectional units may be arbitrarily small or large. In particular, the usual sequential asymptotics, upon which most of the available asymptotic theories for panel unit root models heavily rely, are not required. The finite sample performance of our test is examined via a set of simulations and compared with that of other commonly used panel unit root tests. Our test generally performs better than the existing tests in terms of both finite sample size and power. We apply our nonlinear IV method to test the purchasing power parity hypothesis in panels.
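The statistic admits a compact sketch. The instrument w_t = y_{t-1}·exp(−c|y_{t-1}|) below is the integrable transformation familiar from this literature; using a plain Dickey–Fuller regression without augmentation terms and the value c = 0.5 are simplifying assumptions.

```python
import numpy as np

def iv_t_ratio(y, c=0.5):
    """IV t-ratio for rho in Delta y_t = rho * y_{t-1} + e_t, using the
    integrable instrument w_t = y_{t-1} * exp(-c * |y_{t-1}|)."""
    dy, ylag = np.diff(y), y[:-1]
    w = ylag * np.exp(-c * np.abs(ylag))
    rho = (w @ dy) / (w @ ylag)
    sigma = np.std(dy - rho * ylag, ddof=1)
    return (w @ dy) / (sigma * np.sqrt(w @ w))  # t-ratio evaluated under H0: rho = 0

def panel_iv_test(panel):
    """Standardized sum of the individual IV t-ratios; compare with N(0, 1)
    and reject the unit root null for large negative values. The series in
    `panel` may have different lengths (unbalanced panels)."""
    ts = np.array([iv_t_ratio(y) for y in panel])
    return ts.sum() / np.sqrt(len(ts))

rng = np.random.default_rng(1)
panel = [np.cumsum(rng.normal(size=200)) for _ in range(10)]  # unit root nulls
print(panel_iv_test(panel))                    # should rarely fall below -1.645
```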

3.
This paper reviews tests of equality between the sets of coefficients in two linear regression models, and examines the effect of heteroscedasticity in each model on the behaviour of one such test. A Monte Carlo evaluation, of the size in particular, shows that the usual Chow F-ratio is well behaved as long as the sample sizes in the two models are equal and the two models exhibit the same form of heteroscedasticity, which is moderate in the context of the values used. In such situations it is recommended that the test be used directly, avoiding the extra effort involved in transforming the original data to attain homoscedasticity.
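For reference, the usual Chow F-ratio discussed above can be computed as follows (a standard textbook construction, assuming homoscedastic errors under the null):

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def chow_f(X1, y1, X2, y2):
    """Chow F-ratio for H0: identical coefficient vectors in two regressions;
    compare with the F(k, n1 + n2 - 2k) distribution."""
    k = X1.shape[1]
    n1, n2 = len(y1), len(y2)
    pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    separate = rss(X1, y1) + rss(X2, y2)
    return ((pooled - separate) / k) / (separate / (n1 + n2 - 2 * k))

rng = np.random.default_rng(2)
X1 = np.column_stack([np.ones(50), rng.normal(size=50)])
X2 = np.column_stack([np.ones(50), rng.normal(size=50)])
y1 = X1 @ np.array([1.0, 2.0]) + rng.normal(size=50)
y2 = X2 @ np.array([1.0, 2.0]) + rng.normal(size=50)
print(chow_f(X1, y1, X2, y2))                  # moderate under H0: equal coefficients
```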

4.
Eunju Hwang, Dong Wan Shin. Metrika, 2017, 80(6–8): 767–787
Stationary bootstrapping is applied to a CUSUM test for common mean break detection in cross-sectionally correlated panel data. The asymptotic null distribution of the bootstrapped test is derived and shown to coincide with that of the original CUSUM test, which depends on the cross-sectional correlation parameter. A bootstrap test using the CUSUM statistic with bootstrap critical values is proposed and its asymptotic validity is proved. A finite sample Monte Carlo simulation shows that the proposed test has reasonable size while other existing tests suffer severe size distortion under cross-sectional correlation. The simulation also shows good power against non-cancelling mean changes, and that the theoretically justified stationary bootstrap CUSUM test has size and power comparable to other, theoretically unjustified, moving block or tapered block bootstrap CUSUM tests.
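A univariate sketch of the ingredients, assuming Politis–Romano stationary bootstrap resampling with geometric block lengths; the paper's panel statistic and its treatment of cross-sectional correlation are not reproduced here, and the naive variance in the denominator stands in for a long-run variance estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def cusum_stat(x):
    """CUSUM mean-break statistic: max_k |S_k - (k/T) S_T| / (sigma * sqrt(T))."""
    T = len(x)
    s = np.cumsum(x - x.mean())               # equals S_k - (k/T) S_T
    return np.max(np.abs(s)) / (x.std(ddof=1) * np.sqrt(T))

def stationary_bootstrap(x, p=0.1):
    """One Politis-Romano resample: wrapped blocks with geometric(p) lengths."""
    T = len(x)
    idx = np.empty(T, dtype=int)
    t = 0
    while t < T:
        start = rng.integers(T)
        length = rng.geometric(p)
        for j in range(min(length, T - t)):
            idx[t + j] = (start + j) % T
        t += length
    return x[idx]

x = rng.normal(size=200)                      # toy series under the no-break null
stat = cusum_stat(x)
boot = np.array([cusum_stat(stationary_bootstrap(x)) for _ in range(999)])
print(stat, np.quantile(boot, 0.95))          # reject if stat exceeds the quantile
```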

5.
Hypothesis tests on cointegrating vectors based on the asymptotic distributions of the test statistics are known to suffer from severe small-sample size distortion. In this paper an alternative bootstrap procedure is proposed and evaluated through a Monte Carlo experiment, finding that the Type I errors are close to the nominal significance levels but power may not be entirely adequate. It is then shown that a combined test based on the outcomes of both the asymptotic and the bootstrap tests has both correct size and low Type II error, thereby improving on the currently available procedures.
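The abstract spells out neither the bootstrap scheme nor the combination rule; the skeleton below shows a generic bootstrap p-value, with one hypothetical reading of the combined procedure (reject only when both tests reject) included as a labeled assumption:

```python
import numpy as np

def bootstrap_pvalue(stat_fn, data, resample_h0, B=999):
    """Generic bootstrap p-value: the share of null resamples whose statistic
    is at least as extreme as the observed one (with the usual +1 correction).
    `resample_h0` must generate data satisfying the null hypothesis."""
    observed = stat_fn(data)
    boot = np.array([stat_fn(resample_h0(data)) for _ in range(B)])
    return (1 + (boot >= observed).sum()) / (B + 1)

# Hypothetical combination rule (an assumption, not taken from the paper):
# reject only when the asymptotic and the bootstrap p-values both fall
# below the nominal level.
def combined_reject(p_asymptotic, p_bootstrap, alpha=0.05):
    return p_asymptotic < alpha and p_bootstrap < alpha
```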

6.
A procedure to test hypotheses about the population variance of a fuzzy random variable is analyzed. The procedure is based on the theory of UH-statistics. The variance is defined in terms of a general metric to quantify the variability of the fuzzy values about their (fuzzy) mean. An asymptotic one-sample test in a wide setting is developed, and a bootstrap test, which is more suitable for small and moderate sample sizes, is also studied. Moreover, the power function of the asymptotic procedure is analyzed through local alternatives. Some simulations showing the empirical behavior and consistency of both tests are carried out. Finally, some illustrative examples of the practical application of the proposed tests are presented.

7.
This paper studies subsampling VAR tests of linear constraints as a way of approximating their finite sample distributions in a manner valid regardless of the stochastic nature of the data generating processes. In computing the VAR tests with subsamples (i.e., blocks of consecutive time series observations), both the tests in their original form and the tests with the subsample OLS coefficient estimates centered at the full-sample estimates are used; subsampling with the latter is called centered subsampling in this paper. Both subsampling schemes are shown to provide asymptotic distributions equivalent to those of the VAR tests, and the tests using critical values from the subsamplings are shown to be consistent. The subsampling methods are applied to testing for causality. To choose the block sizes for subsample causality tests, the minimum volatility method, a new simulation-based calibration rule, and a bootstrap-based calibration rule are used. Simulation results in this paper indicate that centered subsampling with the simulation-based calibration rule for the block size is quite promising: it delivers stable empirical size and reasonably high-powered causality tests. Moreover, when the causality test has a chi-square distribution in the limit, the test using critical values from the centered subsampling has better size properties than the one using chi-square critical values. Centered subsampling with the bootstrap-based calibration rule for the block size also works well, but it is slightly inferior to the simulation-based rule.
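A generic illustration of centered subsampling for a scalar Wald statistic, standing in for the VAR tests; iterating over all blocks of consecutive observations and the toy statistic are assumptions:

```python
import numpy as np

def wald_mean(block, center=0.0):
    """Wald statistic for H0: E[x] = center; a scalar stand-in for the VAR tests."""
    b = len(block)
    return b * (block.mean() - center) ** 2 / block.var(ddof=1)

def centered_subsample_cv(x, b, alpha=0.05):
    """Critical value from centered subsampling: recompute the statistic on
    every block of b consecutive observations, centering each subsample
    estimate at the full-sample estimate, and take the (1 - alpha) quantile."""
    full_mean = x.mean()
    stats = [wald_mean(x[s:s + b], center=full_mean)
             for s in range(len(x) - b + 1)]
    return np.quantile(stats, 1 - alpha)

rng = np.random.default_rng(3)
x = rng.normal(size=500)
print(wald_mean(x), centered_subsample_cv(x, b=50))  # reject if stat > critical value
```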

8.
The performance on small and medium-size samples of several techniques for solving the classification problem in discriminant analysis is investigated. The techniques considered are two widely used parametric statistical techniques (Fisher's linear discriminant function and Smith's quadratic function) and a class of recently proposed nonparametric estimation techniques based on mathematical programming (linear and mixed-integer programming). A simulation study analyzes the relative performance of these techniques in the two-group case, for various small sample sizes, moderate group overlap, and six different data conditions. Training samples as well as validation samples are used to assess classificatory performance. The degree of group overlap and the sample sizes selected for analysis are of practical interest because they closely reflect the conditions of many real data sets. The results of the experiment show that Smith's quadratic function tends to be superior on both training and validation samples when the variances–covariances across groups are heterogeneous, while the mixed-integer technique performs best on the training samples when the variances–covariances are equal, and on validation samples with equal variances and discrete uniform independent variables. The mixed-integer technique and the quadratic discriminant function are also found to be more sensitive than the other techniques to sample size, giving disproportionately inaccurate results on small samples.

9.
Panel unit root tests under cross-sectional dependence
In this paper alternative approaches for testing the unit root hypothesis in panel data are considered. First, a robust version of the Dickey–Fuller t-statistic under contemporaneously correlated errors is suggested. Second, the GLS t-statistic is considered, which is based on the t-statistic of the transformed model. The asymptotic power of both tests is compared against a sequence of local alternatives. To adjust for short-run serial correlation of the errors, we propose a pre-whitening procedure that yields a test statistic with a standard normal limiting distribution as N and T tend to infinity. The test procedure is further generalized to accommodate individual specific intercepts or linear time trends. Our Monte Carlo simulations show that the robust OLS t-statistic performs well with respect to size and power, whereas the GLS t-statistic may suffer from severe size distortions in small and moderate sample sizes. The tests are applied to test for a unit root in real exchange rates.

10.
11.
Obtaining a statistically significant result does not necessarily tell us whether we would obtain significant results in other, similar studies, particularly if the original sample sizes were small. This is why we are supposed to replicate experiments. The present study concerns social science events that cannot be repeated by virtue of their being historically situated. Among social science events, many textual data are datable and, by definition, unrepeatable. One solution to this quandary lies in bootstrap replications, which are based on the original data. A case in point is that of founding political speeches such as those that buoy the construction of Europe. We analyze and compare 82 speeches made by President Delors over the period 1988–1994, and 28 by President Santer over the period 1995–1997. We have all these speeches (N = 110) concorded as to which words are used, how often, where, and when, with the help of a computer-aided content analysis package. We then test various hypotheses using bootstrap replication estimates, that is, by replicating the original sample a large number of times and recreating several thousand samples from the population so created.
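A minimal sketch of the replication bootstrap idea applied to word frequencies, with hypothetical per-speech counts standing in for the concorded data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-speech data: counts of one target word and total word counts
# for the 110 speeches (illustrative stand-ins, not the study's data).
word_counts = rng.poisson(5, size=110)
total_counts = rng.poisson(2000, size=110)

def rel_freq(idx):
    """Relative frequency of the target word in a resampled set of speeches."""
    return word_counts[idx].sum() / total_counts[idx].sum()

n = len(word_counts)
boot = np.array([rel_freq(rng.integers(n, size=n)) for _ in range(5000)])
print(np.quantile(boot, [0.025, 0.975]))      # percentile interval for the frequency
```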

12.
To examine complex relationships among variables, researchers in human resource management, industrial-organizational psychology, organizational behavior, and related fields have increasingly used meta-analytic procedures to aggregate effect sizes across primary studies to form meta-analytic correlation matrices, which are then subjected to further analyses using linear models (e.g., multiple linear regression). Because missing effect sizes (i.e., correlation coefficients) and different sample sizes across primary studies can occur when constructing meta-analytic correlation matrices, the present study examined the effects of missingness under realistic conditions and various methods for estimating sample size (e.g., minimum sample size, arithmetic mean, harmonic mean, and geometric mean) on the estimated squared multiple correlation coefficient (R2) and the power of the significance test on the overall R2 in linear regression. Simulation results suggest that missing data had a more detrimental effect as the number of primary studies decreased and the number of predictor variables increased. It appears that using second-order sample sizes of at least 10 (i.e., independent effect sizes) can improve both statistical power and estimation of the overall R2 considerably. Results also suggest that although the minimum sample size should not be used to estimate sample size, the other sample size estimates appear to perform similarly.
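For concreteness, the four sample-size summaries compared in the study, under their usual definitions:

```python
import numpy as np

def sample_size_estimates(ns):
    """The four sample-size summaries compared in the study, computed from the
    sample sizes of the primary studies behind one meta-analytic correlation."""
    ns = np.asarray(ns, dtype=float)
    return {"minimum": ns.min(),
            "arithmetic mean": ns.mean(),
            "harmonic mean": len(ns) / np.sum(1.0 / ns),
            "geometric mean": np.exp(np.mean(np.log(ns)))}

print(sample_size_estimates([40, 120, 300]))   # harmonic <= geometric <= arithmetic
```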

13.
We consider the uniformly most powerful unbiased (UMPU) one-sided test for the comparison of two proportions based on sample sizes m and n, i.e., the randomized version of Fisher's exact one-sided test. It is shown that the power function of the one-sided UMPU test based on sample sizes m and n can coincide, for certain levels, with the power function of the UMPU test based on sample sizes m+1 and n on the entire parameter space. A characterization of all such cases with identical power functions is derived. This characterization is closely related to number-theoretic problems concerning Fermat-like binomial equations. Some consequences for Fisher's original exact test are discussed as well.

14.
For reasons of time constraint and cost reduction, censoring is commonly employed in practice, especially in reliability engineering. Among various censoring schemes, progressive Type-I right censoring provides not only the practical advantage of known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. In this article, we consider a progressively Type-I censored life-test under the assumption that the lifetime of each test unit is exponentially distributed. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE under the condition that its existence is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we then discuss the construction of confidence intervals for the mean parameter and their performance is assessed through Monte Carlo simulations. Finally, an example is presented in order to illustrate all the methods of inference discussed here.
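For exponential lifetimes, the MLE of the mean is the total time on test divided by the number of observed failures; a sketch under a progressive Type-I scheme with withdrawals at fixed inspection times (the input layout is an assumption), conditional on at least one failure being observed:

```python
import numpy as np

def exp_mle_progressive_type1(failures, removal_times, removals, T_end, n_end):
    """MLE of the exponential mean under progressive Type-I censoring:
    total time on test / number of observed failures.
    failures:      observed failure times
    removal_times: inspection times tau_j at which units are withdrawn alive
    removals:      number R_j withdrawn at each tau_j
    T_end, n_end:  termination time and units still on test at termination"""
    d = len(failures)
    if d == 0:
        raise ValueError("MLE does not exist without at least one failure")
    ttt = (np.sum(failures)
           + np.sum(np.asarray(removals) * np.asarray(removal_times))
           + n_end * T_end)
    return ttt / d

# Hypothetical life-test: 10 units, failures at 0.8, 2.1, 3.5; two units
# withdrawn at time 2.0, one at 4.0; four survivors at termination time 5.0.
print(exp_mle_progressive_type1(
    failures=[0.8, 2.1, 3.5], removal_times=[2.0, 4.0],
    removals=[2, 1], T_end=5.0, n_end=4))
```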

15.
Yet another paper on fit measures? To our knowledge, very few papers discuss how fit measures are affected by error variance in the data generating process (DGP). The present paper deals with this. Based upon an extensive simulation study, it shows that the effects of increased error variance differ significantly across fit measures. In addition to error variance, the effects depend on sample size and severity of misspecification. The findings confirm the general notion that good fit as measured by the chi-square test, RMSEA, GFI, etc. does not necessarily mean that the model is correctly specified and reliable. One finding is that the chi-square test may support misspecified models when the level of error variance in the DGP is high and the sample size small. Another is that the chi-square test loses power even for large sample sizes when the model is negligibly misspecified. Incremental fit indices such as NFI and RFI prove to be more informative indicators under these circumstances. At the end of the paper we formulate some guidelines for the use of different fit measures.
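Under their usual definitions, the fit measures referred to above can be computed from the model and baseline (null) chi-squares; the numbers in the usage line are hypothetical:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Standard formulas for fit measures discussed in the abstract:
    chi2_m/df_m from the fitted model, chi2_b/df_b from the baseline
    (independence) model, n = sample size."""
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    nfi = (chi2_b - chi2_m) / chi2_b
    rfi = ((chi2_b / df_b) - (chi2_m / df_m)) / (chi2_b / df_b)
    return {"RMSEA": rmsea, "NFI": nfi, "RFI": rfi}

print(fit_indices(chi2_m=85.3, df_m=48, chi2_b=912.0, df_b=66, n=250))
```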

16.
Rainer Schwabe, Harro Walk. Metrika, 1996, 44(1): 165–180
Based on the idea of averaging, a new stochastic approximation algorithm was proposed by Bather (1989), which shows preferable performance for small to moderate sample sizes. In the present paper an almost sure representation is established for this procedure, which yields the optimal rate of convergence with minimal asymptotic variance. (Work partly supported by research grant Ku719/2-1 of the Deutsche Forschungsgemeinschaft.)
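Bather's procedure is related to iterate averaging; a generic averaged Robbins–Monro sketch is shown below (the target function, noise level, and step-size exponent are illustrative assumptions, not Bather's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_g(theta):
    """Noisy observation of g(theta) = theta - 1; the root is theta* = 1."""
    return (theta - 1.0) + rng.normal(scale=0.5)

theta, avg = 0.0, 0.0
for n in range(1, 10001):
    theta -= noisy_g(theta) / n ** 0.7        # slowly decaying Robbins-Monro steps
    avg += (theta - avg) / n                  # running average of the iterates
print(avg)                                    # averaged iterate, close to 1
```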

17.
This paper proposes a class of estimators of the finite population variance in successive sampling on two occasions and analyzes its properties. Isaki (J Am Stat Assoc 78:117–123, 1983) considered the problem of estimating the finite population variance in survey sampling; its extension to successive sampling is of considerable interest, and the theory developed here should be helpful to those undertaking such analyses in future. To our knowledge this is the first attempt made in this direction. An empirical study based on real populations and moderate sample sizes demonstrates the usefulness of the proposed methodology. The paper also presents a thorough review of successive sampling.

18.
In this paper, an alternative sampling procedure that is a mixture of simple random sampling and systematic sampling is proposed. It results in uniform inclusion probabilities for all individual units and positive inclusion probabilities for all pairs of units. As a result, the proposed procedure enables us to estimate the population mean unbiasedly using the ordinary sample mean, and to provide an unbiased estimator of its sampling variance. The suggested procedure is also found to perform well, especially when the size of the simple random sample is small.
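The exact mixing rule is not given in the abstract; one hypothetical version, drawing a systematic sample with probability p and a simple random sample otherwise, illustrates how such a mixture keeps first-order inclusion probabilities uniform at n/N while the SRS component keeps all pairwise inclusion probabilities positive:

```python
import numpy as np

rng = np.random.default_rng(3)

def mixed_sample(N, n, p=0.5):
    """Draw a sample of size n from {0, ..., N-1}: systematic with probability p,
    simple random sampling without replacement otherwise (hypothetical mix).
    Assumes N is a multiple of n so the systematic sample has exact size n."""
    if rng.random() < p:
        k = N // n                            # sampling interval
        start = rng.integers(k)               # random start
        return np.arange(start, N, k)[:n]
    return rng.choice(N, size=n, replace=False)

print(mixed_sample(N=100, n=10))
```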

19.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small‐ and medium‐sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite‐sample distribution of the long‐run variance estimator used in the KPSS test, while the second source of the size distortions is the serial correlation not captured by the long‐run variance estimator because of a too narrow choice of truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite‐sample distribution of the KPSS test statistic, conditional on the time‐series dimension and the truncation lag parameter, is used. Hence, finite‐sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is a lower raw power against a non‐stationary alternative hypothesis.
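The level-stationarity KPSS statistic with a Bartlett-kernel long-run variance, together with finite-sample critical values simulated conditional on T and the truncation lag, in the spirit of the article's recommendation (a minimal sketch using i.i.d. Gaussian null data):

```python
import numpy as np

def kpss_stat(x, lags):
    """KPSS level-stationarity statistic with Bartlett-kernel long-run variance."""
    T = len(x)
    e = x - x.mean()
    s = np.cumsum(e)
    lrv = e @ e / T
    for j in range(1, lags + 1):
        w = 1 - j / (lags + 1)                # Bartlett weights
        lrv += 2 * w * (e[j:] @ e[:-j]) / T
    return np.sum(s ** 2) / (T ** 2 * lrv)

def finite_sample_cv(T, lags, alpha=0.05, reps=5000, seed=0):
    """Simulate critical values conditional on T and the truncation lag."""
    rng = np.random.default_rng(seed)
    stats = [kpss_stat(rng.normal(size=T), lags) for _ in range(reps)]
    return np.quantile(stats, 1 - alpha)

print(finite_sample_cv(T=100, lags=4))        # vs. the asymptotic 5% value 0.463
```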

20.
The problem of testing for multiplicative heteroskedasticity is considered and a large sample test is proposed. The test statistic is based upon ordinary least squares results, so that only estimation under the null hypothesis of homoskedasticity is required. The test is, however, asymptotically equivalent to the likelihood ratio test and so has good asymptotic power properties. The finite sample behaviour of the test statistic is examined using Monte Carlo experiments which indicate that the test works well for quite small samples.
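A score test of this type needs only OLS residuals; the sketch below uses the Breusch–Pagan form, which under normality is the LM statistic against σ_t² = σ²·exp(z_t′α) (whether it matches the paper's exact statistic is an assumption):

```python
import numpy as np

def lm_multiplicative_het(X, y, Z):
    """Score (LM) test needing only OLS under H0: sigma_t^2 = sigma^2,
    against sigma_t^2 = sigma^2 * exp(z_t' alpha). Breusch-Pagan form:
    LM = ESS/2 from regressing e_t^2 / sigma_hat^2 on (1, z_t);
    asymptotically chi-square with Z.shape[1] degrees of freedom under H0."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    g = e ** 2 / (e @ e / len(y))             # scaled squared residuals
    Z1 = np.column_stack([np.ones(len(y)), Z])
    gamma, *_ = np.linalg.lstsq(Z1, g, rcond=None)
    fitted = Z1 @ gamma
    return 0.5 * np.sum((fitted - g.mean()) ** 2)

rng = np.random.default_rng(4)
n = 200
z = rng.normal(size=n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + np.exp(0.5 * z) * rng.normal(size=n)
print(lm_multiplicative_het(X, y, z.reshape(-1, 1)))  # compare with chi2(1): 3.84 at 5%
```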
