Similar documents
 20 similar documents found (search time: 15 ms)
1.
To examine complex relationships among variables, researchers in human resource management, industrial-organizational psychology, organizational behavior, and related fields have increasingly used meta-analytic procedures to aggregate effect sizes across primary studies to form meta-analytic correlation matrices, which are then subjected to further analyses using linear models (e.g., multiple linear regression). Because missing effect sizes (i.e., correlation coefficients) and different sample sizes across primary studies can occur when constructing meta-analytic correlation matrices, the present study examined the effects of missingness under realistic conditions and various methods for estimating sample size (e.g., minimum sample size, arithmetic mean, harmonic mean, and geometric mean) on the estimated squared multiple correlation coefficient (R2) and the power of the significance test on the overall R2 in linear regression. Simulation results suggest that missing data had a more detrimental effect as the number of primary studies decreased and the number of predictor variables increased. It appears that using second-order sample sizes of at least 10 (i.e., independent effect sizes) can improve both statistical power and estimation of the overall R2 considerably. Results also suggest that although the minimum sample size should not be used to estimate sample size, the other sample size estimates appear to perform similarly.
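The four sample-size estimates compared in the abstract are simple to state. The sketch below (Python; `pooled_n` is an illustrative helper name, not from the study) shows how a pooled n might be computed from the per-study sample sizes before the meta-analytic regression stage.

```python
import numpy as np

def pooled_n(sample_sizes, method="harmonic"):
    """Aggregate per-study sample sizes into a single n for the
    regression stage of a meta-analysis (illustrative helper)."""
    n = np.asarray(sample_sizes, dtype=float)
    if method == "minimum":
        return float(n.min())
    if method == "arithmetic":
        return float(n.mean())
    if method == "harmonic":
        # dominated by the smallest studies, hence more conservative
        return float(len(n) / np.sum(1.0 / n))
    if method == "geometric":
        return float(np.exp(np.mean(np.log(n))))
    raise ValueError(f"unknown method: {method}")
```

For two studies with n = 100 and n = 400, the arithmetic, geometric, and harmonic means are 250, 200, and 160, so the choice matters most when primary studies differ widely in size.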

2.
This paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a retail store. A panel of time series concerns the case where the cross‐sectional dimension and the time dimension are large. Often, there is no a priori reason to select a few series or to aggregate the series over the cross‐sectional dimension. The use of, for example, a vector autoregression or other types of multivariate models then becomes cumbersome. Panel models and associated estimation techniques are more useful. Due to the large time dimension, one should however incorporate the time‐series features. And, the models should not have too many parameters to facilitate interpretation. This paper discusses representation, estimation and inference of relevant models and discusses recently proposed modeling approaches that explicitly aim to meet these requirements. The paper concludes with some reflections on the usefulness of large data sets. These concern sample selection issues and the notion that more detail also requires more complex models.

3.
We propose a unit root test for panels with cross-sectional dependency. We allow general dependency structure among the innovations that generate data for each of the cross-sectional units. Each unit may have different sample size, and therefore unbalanced panels are also permitted in our framework. Yet, the test is asymptotically normal, and does not require any tabulation of the critical values. Our test is based on nonlinear IV estimation of the usual augmented Dickey–Fuller type regression for each cross-sectional unit, using as instruments nonlinear transformations of the lagged levels. The actual test statistic is simply defined as a standardized sum of individual IV t-ratios. We show in the paper that such a standardized sum of individual IV t-ratios has limit normal distribution as long as the panels have large individual time series observations and are asymptotically balanced in a very weak sense. We may have the number of cross-sectional units arbitrarily small or large. In particular, the usual sequential asymptotics, upon which most of the available asymptotic theories for panel unit root models heavily rely, are not required. Finite sample performance of our test is examined via a set of simulations, and compared with those of other commonly used panel unit root tests. Our test generally performs better than the existing tests in terms of both finite sample sizes and powers. We apply our nonlinear IV method to test for the purchasing power parity hypothesis in panels.
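The combination step described above, a standardized sum of individual IV t-ratios, can be sketched in a few lines (Python; computing each unit's nonlinear IV t-ratio is the substantive part of the method and is not reproduced here):

```python
import numpy as np

def panel_unit_root_stat(t_ratios):
    """Standardized sum of per-unit IV t-ratios; under the conditions
    in the paper this is asymptotically N(0, 1) under the unit-root
    null, so standard normal critical values apply directly."""
    t = np.asarray(t_ratios, dtype=float)
    return float(t.sum() / np.sqrt(len(t)))
```

The null is then rejected in favor of stationarity for large negative values of the statistic, with no tabulation of critical values required.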

4.
T. Shiraishi, Metrika (1990) 37(1): 189–197
Summary  For testing homogeneity in the multivariate k-sample model, robust tests based on M-estimators are proposed and their asymptotic χ²-distributions are investigated. Furthermore, M-tests in multivariate regression models are discussed.

5.
Two families of kurtosis measures are defined as K1(b) = E[a·b^(−|z|)] and K2(b) = E[a(1 − |z|^b)], where z denotes the standardized variable and a is a normalizing constant chosen such that the kurtosis is equal to 3 for normal distributions. K2(b) is an extension of Stavig's robust kurtosis. As with Pearson's measure of kurtosis β2 = E[z^4], both measures are expected values of continuous functions of z that are even, convex or linear, and strictly monotonic in ℜ− and in ℜ+. In contrast to β2, our proposed kurtosis measures give more importance to the central part of the distribution instead of the tails. Tests of normality based on these new measures are more sensitive with respect to the peak of the distribution. K1(b) and K2(b) satisfy Van Zwet's ordering and correlate highly with other kurtosis measures such as L-kurtosis and quantile kurtosis.
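As a hedged illustration, K1(b) can be estimated from data once the normalizing constant a is fixed; for K1 the normal-theory expectation E[b^(−|Z|)] has a closed form via the half-normal moment generating function, so a is available analytically (Python sketch; the function name is ours, not the paper's):

```python
import math
import numpy as np

def k1(x, b=math.e):
    """Sample analogue of K1(b) = E[a * b**(-|z|)], with a chosen so
    that K1 = 3 under normality. For c = ln(b), the normal expectation
    is E[b**(-|Z|)] = exp(c**2 / 2) * erfc(c / sqrt(2))."""
    c = math.log(b)
    a = 3.0 / (math.exp(c * c / 2.0) * math.erfc(c / math.sqrt(2.0)))
    z = (x - np.mean(x)) / np.std(x)
    return float(a * np.mean(b ** (-np.abs(z))))
```

Because b^(−|z|) is largest near z = 0, the measure weights the peak of the distribution rather than the tails, which is the contrast with β2 drawn above.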

6.
In this paper, we study the degree of business cycle synchronization by means of a small sample version of Harding and Pagan's [Journal of Econometrics (2006) Vol. 132, pp. 59–79] Generalized Method of Moments test. We show that the asymptotic version of the test gets increasingly distorted in small samples when the number of countries grows large. However, a block bootstrapped version of the test can remedy the size distortion when the time series length divided by the number of countries, T/n, is sufficiently large. Applying the technique to a number of business cycle proxies of developed economies, we are unable to reject the null hypothesis of a non-zero common multivariate synchronization index for certain economically meaningful subsets of these countries.
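The block bootstrap used to repair the size distortion resamples whole blocks so that short-run dependence within each series survives resampling. A generic moving-block replicate (Python; this sketches the resampling idea only, not the paper's full GMM test) looks like:

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Return one moving-block bootstrap replicate of a series: draw
    overlapping blocks with replacement and concatenate to length n."""
    x = np.asarray(x)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]
```

The test statistic is then recomputed on many such replicates to build a finite-sample reference distribution in place of the distorted asymptotic one.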

7.
This paper is concerned with inference about a function g that is identified by a conditional quantile restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test is not subject to the ill-posed inverse problem of nonparametric instrumental variable estimation. Under mild conditions, the test is consistent against any alternative model. In large samples, its power is arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is proportional to n^(−1/2), where n is the sample size. Monte Carlo simulations illustrate the finite-sample performance of the test.

8.
We consider the popular ‘bounds test’ for the existence of a level relationship in conditional equilibrium correction models. By estimating response surface models based on about 95 billion simulated F-statistics and 57 billion t-statistics, we improve upon and substantially extend the set of available critical values, covering the full range of possible sample sizes and lag orders, and allowing for any number of long-run forcing variables. By computing approximate P-values, we find that the bounds test can be easily oversized by more than 5 percentage points in small samples when using asymptotic critical values.

10.
The assumption of normality has underlain much of the development of statistics, including spatial statistics, and many tests have been proposed. In this work, we focus on the multivariate setting and first review the recent advances in multivariate normality tests for i.i.d. data, with emphasis on the skewness and kurtosis approaches. We show through simulation studies that some of these tests cannot be used directly for testing normality of spatial data. We further review briefly the few existing univariate tests under dependence (time or space), and then propose a new multivariate normality test for spatial data by accounting for the spatial dependence. The new test utilises the union-intersection principle to decompose the null hypothesis into intersections of univariate normality hypotheses for projection data, and it rejects the multivariate normality if any individual hypothesis is rejected. The individual hypotheses for univariate normality are conducted using a Jarque–Bera type test statistic that accounts for the spatial dependence in the data. We also show in simulation studies that the new test has a good control of the type I error and a high empirical power, especially for large sample sizes. We further illustrate our test on bivariate wind data over the Arabian Peninsula.
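The union-intersection construction reduces to: project, test each projection for univariate normality, and reject if any projection fails. The sketch below (Python) uses the classical i.i.d. Jarque–Bera statistic with a Bonferroni adjustment; the paper's statistic additionally corrects the Jarque–Bera moments for spatial dependence, which is omitted here.

```python
import math
import numpy as np

def jarque_bera(x):
    """Classical i.i.d. Jarque-Bera statistic, asymptotically chi2(2)."""
    z = (x - x.mean()) / x.std()
    s = np.mean(z ** 3)                # sample skewness
    k = np.mean(z ** 4)                # sample kurtosis
    return len(x) / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)

def ui_normality_reject(X, directions, alpha=0.05):
    """Union-intersection test: reject multivariate normality if any
    projection X @ d fails univariate normality. Bonferroni-adjusted;
    the chi2(2) upper quantile has the closed form -2 * ln(p)."""
    crit = -2.0 * math.log(alpha / len(directions))
    return any(jarque_bera(X @ d) > crit for d in directions)
```

With a spatially dependent sample, the same skeleton applies but the variance of the skewness and kurtosis estimators must be adjusted, which is the paper's contribution.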

11.
We study Ackerberg, Caves, and Frazer's (Econometrica, 2015, 83, 2411–2451; hereafter ACF) production function estimation method using Monte Carlo simulations. First, we replicate their results by following their procedure to confirm the existence of a spurious minimum in the estimation, as noted by ACF. In the population, or when sample sizes are sufficiently large, this “global” identification problem may not be a concern because the spurious minimum occurs only at extreme values of capital and labor coefficients. However, in finite samples, their estimator can produce estimates that may not be clearly distinguishable from the spurious ones. In our second experiment, we modify the ACF procedure and show that robust estimates can be obtained using additional lagged instruments or sequential search. We also provide some arguments for why such modifications help in the ACF setting.

12.
Lothar Heinrich, Metrika (1993) 40(1): 67–94
Summary  This paper presents a method for the estimation of parameters of random closed sets (racs) in ℝ^d based on a single realization within a (large) convex sampling window. The essential idea, first applied by Diggle (1981) in a special case, consists in defining the estimator by minimizing a suitably defined distance (called a contrast function) between the true and the empirical contact distribution function of the racs under consideration; the most relevant case of Boolean models is discussed in detail. The resulting estimates are shown to be strongly consistent (if the racs is ergodic) and asymptotically normal (if the racs is Boolean) when the sampling window expands unboundedly.

13.
This paper investigates the cointegration relationship among a group of international stock indices in light of new developments in econometric methods. Kasa (1992) first documented strong evidence for cointegration relations among five national stock indices, which suggests that there exists a common trend among those stock indices. Using the Johansen multivariate cointegration test, we find that his findings persist in a sample of longer periods and more countries. In order to investigate whether these results are driven by statistical biases related to the sample size, we apply Johansen's small sample correction factor to our tests. The results still point toward the existence of a cointegration relationship, but the evidence becomes much weaker. We next examine the empirical patterns that emerge from different lag specifications and argue that Kasa's findings are more likely due to size distortion in VAR models with extremely long lags. Indeed, when we employ a newly developed non-parametric test that does not require estimation of VAR models, the null hypothesis of no cointegration cannot be rejected for the original sample of Kasa's five-country stock indices from 1974 to 1990, nor for the extended period from 1970 to 2003.
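For concreteness, a degrees-of-freedom correction of the Reinsel–Ahn type scales the trace statistic by (T − nk)/T, where n is the number of variables and k the lag order; we state it here as a plausible form of a small-sample correction, not necessarily the exact factor used in the paper (Python):

```python
def corrected_trace_stat(trace_stat, T, n_vars, lag_order):
    """Shrink the Johansen trace statistic toward zero by the
    degrees-of-freedom ratio (T - n*k) / T (Reinsel-Ahn style).
    Hypothetical illustration of a small-sample correction."""
    return trace_stat * (T - n_vars * lag_order) / T
```

With T = 100 observations, n = 5 series, and k = 2 lags, the statistic is deflated by 10%, which shows how borderline cointegration evidence can become "much weaker" after correction, and why long-lag VARs (large k) are especially prone to size distortion.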

14.
This paper considers estimation and inference in linear panel regression models with lagged dependent variables and/or other weakly exogenous regressors when N (the cross-section dimension) is large relative to T (the time series dimension). It allows for fixed and time effects (FE-TE) and derives a general formula for the bias of the FE-TE estimator which generalizes the well-known Nickell bias formula derived for pure autoregressive dynamic panel data models. It shows that in the presence of weakly exogenous regressors, inference based on the FE-TE estimator will result in size distortions unless N/T is sufficiently small. To deal with the bias and size distortion of the FE-TE estimator, the use of a half-panel jackknife FE-TE estimator is considered and its asymptotic distribution is derived. It is shown that the bias of the half-panel jackknife FE-TE estimator is of order T^(−2), and for valid inference it is only required that N/T^3 → 0 as N, T → ∞ jointly. An extension to unbalanced panel data models is also provided. The theoretical results are illustrated with Monte Carlo evidence. It is shown that the FE-TE estimator can suffer from large size distortions when N > T, with the half-panel jackknife FE-TE estimator showing little size distortion. The use of the half-panel jackknife FE-TE estimator is illustrated with two empirical applications from the literature.
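The half-panel jackknife itself is a one-line bias correction: estimate on the full panel and on each half of the time dimension, then combine. A minimal sketch (Python; `estimate` stands for any panel estimator mapping an N × T array to a parameter estimate):

```python
import numpy as np

def half_panel_jackknife(estimate, y):
    """Half-panel jackknife: 2*theta_full - (theta_a + theta_b) / 2,
    where a and b are the first and second halves of the time
    dimension (T assumed even). Removes the O(1/T) bias term."""
    T = y.shape[1]
    half = T // 2
    theta_full = estimate(y)
    theta_a = estimate(y[:, :half])
    theta_b = estimate(y[:, half:])
    return 2.0 * theta_full - 0.5 * (theta_a + theta_b)
```

For an estimator that is exactly unbiased, such as the overall mean, the correction leaves the estimate unchanged; the gain appears for estimators with O(1/T) bias, like the FE-TE estimator with weakly exogenous regressors.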

15.
In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose an integrated conditional moment (ICM) type predictive accuracy test that is similar in spirit to that developed by Bierens (J. Econometr. 20 (1982) 105; Econometrica 58 (1990) 1443) and Bierens and Ploberger (Econometrica 65 (1997) 1129). The test is consistent against generic nonlinear alternatives, and is designed for comparing nested models. One important feature of our approach is that the same loss function is used for in-sample estimation and out-of-sample prediction. In this way, we rule out the possibility that the null model can outperform the nesting generic alternative model. It turns out that the limiting distribution of the ICM type test statistic that we propose is a functional of a Gaussian process with a covariance kernel that reflects both the time series structure of the data and the contribution of parameter estimation error. As a consequence, the critical values are data dependent and cannot be directly tabulated. One approach in this case is to obtain critical value upper bounds using the approach of Bierens and Ploberger (Econometrica 65 (1997) 1129). Here, we establish the validity of a conditional p-value method for constructing critical values. The method is similar in spirit to that proposed by Hansen (Econometrica 64 (1996) 413) and Inoue (Econometric Theory 17 (2001) 156), although we additionally account for parameter estimation error. In a series of Monte Carlo experiments, the finite sample properties of three variants of the predictive accuracy test are examined. Our findings suggest that all three variants of the test have good finite sample properties when quadratic loss is specified, even for samples as small as 600 observations. However, non-quadratic loss functions such as linex loss require larger sample sizes (of 1000 observations or more) in order to ensure reasonable finite sample performance.

16.
In dynamic panel regression, when the variance ratio of individual effects to the disturbance is large, the system-GMM estimator will have large asymptotic variance and poor finite sample performance. To deal with this variance ratio problem, we propose a residual-based instrumental variables (RIV) estimator, which uses the residual from regressing Δy_{i,t−1} on … as the instrument for the level equation. The RIV estimator proposed is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to the disturbance. Monte Carlo simulations show that the RIV estimator has better finite sample performance compared to alternative estimators. The RIV estimator generates less finite sample bias than the difference-GMM, system-GMM, collapsing-GMM and Level-IV estimators in most cases. Under RIV estimation, the variance ratio problem is well controlled, and the empirical distribution of its t-statistic is similar to the standard normal distribution for moderate sample sizes.

17.
Modeling conditional distributions in time series has attracted increasing attention in economics and finance. We develop a new class of generalized Cramér–von Mises (GCM) specification tests for time series conditional distribution models using a novel approach, which embeds the empirical distribution function in a spectral framework. Our tests check a large number of lags and are therefore expected to be powerful against neglected dynamics at higher order lags, which is particularly useful for non-Markovian processes. Despite using a large number of lags, our tests do not suffer much from loss of a large number of degrees of freedom, because our approach naturally downweights higher order lags, which is consistent with the stylized fact that economic or financial markets are more affected by recent past events than by remote past events. Unlike the existing methods in the literature, the proposed GCM tests cover both univariate and multivariate conditional distribution models in a unified framework. They exploit the information in the joint conditional distribution of underlying economic processes. Moreover, a class of easy-to-interpret diagnostic procedures are supplemented to gauge possible sources of model misspecifications. Distinct from conventional CM and Kolmogorov–Smirnov (KS) tests, which are also based on the empirical distribution function, our GCM test statistics follow a convenient asymptotic N(0,1) distribution and enjoy the appealing “nuisance parameter free” property that parameter estimation uncertainty has no impact on the asymptotic distribution of the test statistics. Simulation studies show that the tests provide reliable inference for sample sizes often encountered in economics and finance.

18.
Codependent cycles
This paper extends the work of Engle and Kozicki (1993) to test for co-movement in multiple time series when their cycles are not exactly synchronized. We call these codependent cycles and show that testing and estimation in this case will be a Generalized Method of Moments test and estimation procedure. We also show that the Tiao and Tsay (1985) proposed test for scalar components models of order (0, q) can be seen as a test for codependent cycles based on a consistent, but sub-optimal, estimate of the cofeature vector. We assess the small sample performance of the proposed tests through a series of simulations. Finally, we apply this test to investigate co-movement between durable and non-durable consumption expenditures.

19.
In this paper, we provide an intensive review of the recent developments for semiparametric and fully nonparametric panel data models that are linearly separable in the innovation and the individual-specific term. We analyze these developments under two alternative model specifications: fixed and random effects panel data models. More precisely, in the random effects setting, we focus our attention on the analysis of some efficiency issues that have to do with the so-called working independence condition. This assumption is introduced when estimating the asymptotic variance–covariance matrix of nonparametric estimators. In the fixed effects setting, to cope with the so-called incidental parameters problem, we consider two different estimation approaches: profiling techniques and differencing methods. Furthermore, we are also interested in the endogeneity problem and how instrumental variables are used in this context. In addition, for practitioners, we also show different ways of avoiding the so-called curse of dimensionality problem in pure nonparametric models. In this way, semiparametric and additive models appear as a solution when the number of explanatory variables is large.

20.
Monte Carlo Evidence on Cointegration and Causation
The small sample performance of Granger causality tests under different model dimensions, degree of cointegration, direction of causality, and system stability is presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
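The Wald/F-type comparison underlying these causality tests can be sketched for the simplest bivariate case (Python; a minimal OLS implementation of the restricted-versus-unrestricted comparison, not the MWALD procedure of the abstract, which additionally augments the VAR with redundant lags):

```python
import numpy as np

def granger_f(y, x, p=1):
    """F-test of 'x does not Granger-cause y' with p lags: compare the
    restricted regression of y_t on its own lags with the unrestricted
    one that adds lags of x (OLS via lstsq)."""
    T = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - j:T - j] for j in range(1, p + 1)])
    lags_x = np.column_stack([x[p - j:T - j] for j in range(1, p + 1)])
    const = np.ones((T - p, 1))
    Xr = np.hstack([const, lags_y])            # restricted model
    Xu = np.hstack([const, lags_y, lags_x])    # unrestricted model
    ssr = lambda A: float(np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2))
    ssr_r, ssr_u = ssr(Xr), ssr(Xu)
    df2 = (T - p) - Xu.shape[1]
    return ((ssr_r - ssr_u) / p) / (ssr_u / df2)
```

Large values of the statistic indicate that lags of x improve the fit of y beyond its own history; the paper's point is that the finite-sample distribution of such statistics depends heavily on model dimension and cointegration.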


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号