Similar Articles
20 similar articles found
1.
The nature and form of the restrictions implied by the rational expectations hypothesis are examined in a variety of models with expectations and the properties of appropriate test statistics are analyzed with Monte Carlo evidence. Specifically, we consider the implications of lagged variables, simultaneous equations, and future period expectations upon the number and functional form of the rational expectations restrictions. Two asymptotically equivalent test statistics, a likelihood ratio and a Wald test, are available for implementing a test of these restrictions. Monte Carlo evidence is offered to provide a comparison between the properties of the alternative test statistics in small samples.
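The small-sample gap between asymptotically equivalent statistics can be illustrated with a minimal Monte Carlo sketch (an illustration, not the paper's design): testing a zero slope in a Gaussian regression, where the Wald statistic dominates the likelihood ratio pointwise (W = n·x versus LR = n·ln(1+x) with x ≥ 0), so the Wald test's empirical size is always at least as large.

```python
import math
import random

def wald_lr_once(n, rng):
    """One replication: Wald and LR statistics for H0: slope = 0,
    with the null true and ML variance estimates."""
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [rng.gauss(0, 1) for _ in range(n)]  # H0 holds: y unrelated to x
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    rss_u = syy - sxy ** 2 / sxx          # unrestricted residual sum of squares
    rss_r = syy                           # restricted RSS (slope forced to 0)
    wald = n * (rss_r - rss_u) / rss_u
    lr = n * math.log(rss_r / rss_u)
    return wald, lr

rng = random.Random(42)
crit = 3.841                              # chi-square(1) 5% critical value
n, reps = 20, 2000
rej_w = rej_lr = 0
for _ in range(reps):
    w, lr = wald_lr_once(n, rng)
    rej_w += w > crit
    rej_lr += lr > crit
print(rej_w / reps, rej_lr / reps)        # empirical sizes; Wald >= LR by construction
```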

2.
Monte Carlo Evidence on Cointegration and Causation
The small sample performance of Granger causality tests under different model dimensions, degree of cointegration, direction of causality, and system stability are presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
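A simplified numpy sketch of a least-squares Wald test for Granger non-causality (one lag, levels VAR, simulated data; the study's MWALD uses a lag-augmented modified VAR, which this sketch omits):

```python
import numpy as np

def granger_wald(y, x):
    """Wald statistic for H0: lagged x does not help predict y (one lag).
    Chi-square(1) under the null."""
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y)), y[:-1], x[:-1]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (len(Y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[2] ** 2 / cov[2, 2]

rng = np.random.default_rng(0)
T = 200
x = rng.standard_normal(T)
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + e[t]   # x Granger-causes y by design

s_cause = granger_wald(y, x)   # should be large
s_rev = granger_wald(x, y)     # no feedback in the DGP, so typically small
print(s_cause, s_rev)
```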

3.
The ‘official’ (OPEC) prices of crude oil before the collapse in the oil market in the mid-1980s can be interpreted as contract prices and analysed on the basis of the theory of futures (or forward) markets. This paper uses the generalized method of moments estimation technique to test for efficiency in the relationship between the official prices and the ex-post spot prices at the time of delivery. Efficiency is rejected for the sample period 1978–1985 as a whole, but evidence is found of improvements over time. Further, the GMM Wald and Hansen tests, although asymptotically equivalent, are shown to differ greatly when applied to a small sample of monthly oil price data.

4.
The Neyman-Pearson Lemma describes a test for two simple hypotheses that, for a given sample size, is most powerful for its level. It is usually implemented by choosing the smallest sample size that achieves a prespecified power for a fixed level. The Lemma does not describe how to select either the level or the power of the test. In the usual Wald decision-theoretic structure there exists a sampling cost function, an initial prior over the hypothesis space and various payoffs to right/wrong hypothesis selections. The optimal Wald test is a Bayes decision rule that maximizes the expected payoff net of sampling costs. This paper shows that the Wald-optimal test and the Neyman-Pearson test can be the same and how the Neyman-Pearson test, with fixed level and power, can be viewed as a Wald test subject to restrictions on the payoff vector, cost function and prior distribution.
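The correspondence can be sketched numerically for N(0,1) versus N(1,1) with a single observation: the Neyman-Pearson cutoff at level α coincides with the Bayes cutoff once the loss ratio is chosen to match (the priors and losses below are illustrative assumptions, not the paper's setup):

```python
import math
from statistics import NormalDist

nd = NormalDist()
alpha = 0.05
z = nd.inv_cdf(1 - alpha)          # NP cutoff on x for testing N(0,1) vs N(1,1)

# Bayes rule: reject H0 when LR(x) = f1(x)/f0(x) = exp(x - 0.5) exceeds
# k = (prior0 * loss_type_I) / (prior1 * loss_type_II).
prior0 = prior1 = 0.5              # illustrative equal priors
k = math.exp(z - 0.5)              # the LR threshold that reproduces the NP cutoff
loss_ratio = k * prior1 / prior0   # implied loss_type_I / loss_type_II
bayes_cutoff = 0.5 + math.log(k)   # solve exp(x - 0.5) = k for x

print(z, bayes_cutoff, loss_ratio)
```

With these payoffs the two cutoffs agree exactly, illustrating the paper's point that the NP test is a Wald/Bayes test under restrictions on the payoffs and prior.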

5.
Many applied researchers have to deal with spatially autocorrelated residuals (SAR). Available tests that identify spatial spillovers as captured by a significant SAR parameter, are either based on maximum likelihood (MLE) or generalized method of moments (GMM) estimates. This paper illustrates the properties of various tests for the null hypothesis of a zero SAR parameter in a comprehensive Monte Carlo study. The main finding is that Wald tests generally perform well regarding both size and power even in small samples. The GMM-based Wald test is correctly sized even for non-normally distributed disturbances and small samples, and it exhibits a similar power as its MLE-based counterpart. Hence, for the applied researcher the GMM Wald test can be recommended, because it is easy to implement.

6.
7.
The negativity of the substitution matrix implies that its latent roots are non-positive. When inequality restrictions are tested, standard test statistics such as a likelihood ratio or a Wald test are not χ2-distributed in large samples. We propose a Wald test for testing the negativity of the substitution matrix. The asymptotic distribution of the statistic is a mixture of χ2-distributions. The Wald test provides an exact critical value for a given significance level. The problems involved in computing the exact critical value can be avoided by using the upper and lower bound critical values derived by Kodde and Palm (1986). Finally the methods are applied to the empirical results obtained by Barten and Geyskens (1975).
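The mixture distribution can be simulated directly (an illustration, not Kodde and Palm's tables): for p sign restrictions, the Wald distance from z ~ N(0, I_p) to the non-positive orthant is D = Σ max(z_i, 0)², whose null distribution is a binomially weighted mixture of χ2 variables. Its simulated 5% critical value should land between the one-restriction lower bound and the χ2(p) upper bound.

```python
import random

def chi_bar_sq_crit(p, alpha=0.05, reps=100_000, seed=1):
    """Simulated (1 - alpha) critical value of D = sum_i max(z_i, 0)^2,
    z ~ N(0, I_p): the chi-bar-square distribution for testing z <= 0."""
    rng = random.Random(seed)
    draws = []
    for _ in range(reps):
        d = 0.0
        for _ in range(p):
            z = rng.gauss(0, 1)
            if z > 0:                 # only violations of the sign restriction count
                d += z * z
        draws.append(d)
    draws.sort()
    return draws[int((1 - alpha) * reps)]

c = chi_bar_sq_crit(3)
print(c)   # between chi2(1) 5% (3.84) and chi2(3) 5% (7.81)
```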

8.
We consider the dynamic factor model and show how smoothness restrictions can be imposed on factor loadings by using cubic spline functions. We develop statistical procedures based on Wald, Lagrange multiplier and likelihood ratio tests for this purpose. The methodology is illustrated by analyzing a newly updated monthly time series panel of US term structure of interest rates. Dynamic factor models with and without smooth loadings are compared with dynamic models based on Nelson–Siegel and cubic spline yield curves. We conclude that smoothness restrictions on factor loadings are supported by the interest rate data and can lead to more accurate forecasts.

9.
This paper proposes a new method for testing the properties of time trends that combines nonlinear and linear models. We construct three Wald-type test statistics and one robust test statistic, derive their limiting distributions, and analyze their finite-sample properties. Applying the procedure to 24 key Chinese macroeconomic variables, we find that 22 of them display nonlinear smooth-transition features, with time trends that are deterministic.

10.
This paper deals with the issue of testing hypotheses in symmetric and log‐symmetric linear regression models in small and moderate‐sized samples. We focus on four tests, namely, the Wald, likelihood ratio, score, and gradient tests. These tests rely on asymptotic results and are unreliable when the sample size is not large enough to guarantee a good agreement between the exact distribution of the test statistic and the corresponding chi‐squared asymptotic distribution. Bartlett and Bartlett‐type corrections typically attenuate the size distortion of the tests. These corrections are available in the literature for the likelihood ratio and score tests in symmetric linear regression models. Here, we derive a Bartlett‐type correction for the gradient test. We show that the corrections are also valid for the log‐symmetric linear regression models. We numerically compare the various tests and bootstrapped tests through simulations. Our results suggest that the corrected and bootstrapped tests exhibit type I error probabilities closer to the chosen nominal level with virtually no power loss. The analytically corrected tests as well as the bootstrapped tests, including the Bartlett‐corrected gradient test derived in this paper, perform with the advantage of not requiring computationally intensive calculations. We present a real data application to illustrate the usefulness of the modified tests.
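The bootstrap idea of recalibrating an asymptotic test can be sketched in a few lines (a generic illustration with a plain t statistic, not the paper's symmetric-model tests): resample under the null and compare the observed statistic to the resampled distribution rather than to the chi-squared reference.

```python
import math
import random

def t_stat(sample, mu0):
    n = len(sample)
    m = sum(sample) / n
    s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
    if s2 == 0:
        return math.inf          # guard against a degenerate resample
    return (m - mu0) / math.sqrt(s2 / n)

def bootstrap_pvalue(sample, mu0, reps=2000, seed=7):
    """Bootstrap p-value for H0: mean = mu0, resampling data
    recentered so that the null holds in the resampling law."""
    rng = random.Random(seed)
    n = len(sample)
    m = sum(sample) / n
    centered = [x - m + mu0 for x in sample]   # impose H0
    t_obs = abs(t_stat(sample, mu0))
    hits = 0
    for _ in range(reps):
        boot = [rng.choice(centered) for _ in range(n)]
        hits += abs(t_stat(boot, mu0)) >= t_obs
    return hits / reps

data = [0.3, -0.1, 0.5, 0.2, 0.8, -0.4, 0.6, 0.1]
p = bootstrap_pvalue(data, 0.0)
print(p)
```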

11.
This Monte Carlo study examines the relative performance of sample selection and two-part models for data with a cluster at zero. The data are drawn from a bivariate normal distribution with a positive correlation. The alternative estimators are examined in terms of mean squared error, mean bias and pointwise bias. The sample selection estimators include LIML and FIML. The two-part estimators include a naive (the true specification, omitting the correlation coefficient) and a data-analytic (testimator) variant. In the absence of exclusion restrictions, the two-part models are no worse, and often appreciably better than selection models in terms of mean behavior, but can behave poorly for extreme values of the independent variable. LIML had the worst performance of all four models. Empirically, selection effects are difficult to distinguish from a non-linear (e.g., quadratic) response. With exclusion restrictions, simple selection models were significantly better behaved than a naive two-part model over subranges of the data, but were negligibly better than the data-analytic version.

12.
Incomplete correlated 2 × 2 tables are common in some infectious disease studies and two‐step treatment studies in which one of the comparative measures of interest is the risk ratio (RR). This paper investigates the two‐stage tests of whether K RRs are homogeneous and whether the common RR equals an arbitrary constant. On the assumption that K RRs are equal, this paper proposes four asymptotic test statistics: the Wald‐type, the logarithmic‐transformation‐based, the score‐type and the likelihood ratio statistics to test whether the common RR equals a prespecified value. Sample size formulae based on the hypothesis testing method and the confidence interval method are proposed in the second stage of the test. Simulation results show that sample sizes based on the score‐type test and the logarithmic‐transformation‐based test are more accurate to achieve the predesigned power than those based on the Wald‐type test. The score‐type test performs best of the four tests in terms of type I error rate. A real example is used to illustrate the proposed methods.
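The Wald-type and logarithmic-transformation statistics for a risk ratio can be sketched for the simpler case of two independent binomial samples (the paper's setting is incomplete correlated tables, which this sketch does not cover):

```python
import math

def rr_tests(a, n1, b, n2, rr0=1.0):
    """Log-transform and Wald-type statistics for H0: RR = rr0,
    comparing event counts a/n1 and b/n2 from independent samples."""
    rr_hat = (a / n1) / (b / n2)
    var_log = 1/a - 1/n1 + 1/b - 1/n2          # delta-method variance of log RR
    t_log = (math.log(rr_hat) - math.log(rr0)) ** 2 / var_log
    var_rr = rr_hat ** 2 * var_log             # delta method back to the RR scale
    t_wald = (rr_hat - rr0) ** 2 / var_rr
    return rr_hat, t_log, t_wald               # both statistics chi-square(1) under H0

rr, tl, tw = rr_tests(30, 100, 15, 100)
print(rr, tl, tw)
```

Note that the two statistics differ in finite samples (here 6.01 versus 3.13 against the same χ2(1) reference), which is exactly the kind of discrepancy the simulation study quantifies.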

13.
Journal of Econometrics, 1986, 31(2), 121–149
We investigate the problem of specification testing when the score vector evaluated at the restricted MLE is identically zero. Several econometric examples are provided. A general test procedure which generalizes the geometric principle of the score test is proposed. The Wald and the likelihood ratio tests are also analyzed. Even under such irregularities, the usual asymptotic distribution of the likelihood ratio statistics is still valid. However, the Wald-type statistics need to be modified. The generalized score, the likelihood ratio and the modified Wald tests are shown to be asymptotically equivalent. The asymptotic efficiency of the tests is derived.

14.
Since the long-run equilibrium and the short-run adjustment of economic variables may both be nonlinear, this paper extends existing threshold cointegration models to one in which both the cointegrating vector and the adjustment parameters are nonlinear, and focuses on tests for this model. The results show that, in testing for cointegration, the Wald statistic has good finite-sample properties; in testing the cointegrating relationship for nonlinearity, the LMW and LMG statistics have good size and power; and in testing the adjustment parameters for nonlinearity, the LMH statistic performs well in finite samples when the nonlinearity is pronounced.

15.
This paper studies instrumental variables (IV) estimation for an error component model with stationary and nearly nonstationary regressors. It is assumed that the numbers of cross section and time series observations are infinite. Furthermore, autoregressive disturbances are assumed for the error component model, the structure of which may vary with individuals. The estimators considered are the Within-IV-OLS, Within-IV-GLS and IV-GLS estimators. The GLS estimators use Gohberg's formula, which is particularly useful when autoregressive structures are imposed on the disturbance terms. Sequential limit theories for the estimators are derived, and it is shown that all of the estimators have normal distributions in the limit. Additionally, Wald tests for coefficient vectors are shown to have chi-square distributions in the limit. Simulation results regarding the estimator efficiency and the size of the Wald tests are also reported. The results show that the Within-IV-GLS and IV-GLS estimators are more efficient than the Within-IV-OLS estimator in most cases and that the Wald tests keep nominal size reasonably well. The relation between the trade and budget deficits of 23 OECD nations is examined using the panel IV estimators. The empirical results support the view that the budget and trade deficits move in the same direction.

16.
We derive computationally simple expressions for score tests of misspecification in parametric dynamic factor models using frequency domain techniques. We interpret those diagnostics as time domain moment tests which assess whether certain autocovariances of the smoothed latent variables match their theoretical values under the null of correct model specification. We also reinterpret reduced‐form residual tests as checking specific restrictions on structural parameters. Our Gaussian tests are robust to nonnormal, independent innovations. Monte Carlo exercises confirm the finite‐sample reliability and power of our proposals. Finally, we illustrate their empirical usefulness in an application that constructs a US coincident indicator.

17.
This paper illustrates the pitfalls of the conventional heteroskedasticity and autocorrelation robust (HAR) Wald test and the advantages of new HAR tests developed by Kiefer and Vogelsang in 2005 and by Phillips, Sun and Jin in 2003 and 2006. The illustrations use the 1993 Fama–French three‐factor model. The null that the intercepts are zero is tested for 5‐year, 10‐year and longer sub‐periods. The conventional HAR test with asymptotic P‐values rejects the null for most 5‐year and 10‐year sub‐periods. By contrast, the null is not rejected by the new HAR tests. This conflict is explained by showing that inferences based on the conventional HAR test are misleading for the sample sizes used in this application.
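The conventional HAR ingredient being critiqued can be sketched in a few lines: a Bartlett-kernel (Newey-West) long-run variance estimate and the resulting t-statistic for a mean, with a common rule-of-thumb bandwidth (toy data; the paper's tests concern regression intercepts, not a raw mean):

```python
import math

def newey_west_t(y, mu0=0.0, bandwidth=None):
    """Conventional HAR t-statistic for H0: E[y] = mu0, using a
    Bartlett-kernel (Newey-West) long-run variance estimate."""
    n = len(y)
    m = sum(y) / n
    u = [yi - m for yi in y]
    if bandwidth is None:
        bandwidth = int(4 * (n / 100) ** (2 / 9))   # rule-of-thumb lag choice
    lrv = sum(ui * ui for ui in u) / n              # lag-0 autocovariance
    for j in range(1, bandwidth + 1):
        w = 1 - j / (bandwidth + 1)                 # Bartlett weights
        gamma = sum(u[t] * u[t - j] for t in range(j, n)) / n
        lrv += 2 * w * gamma
    return (m - mu0) / math.sqrt(lrv / n)

y = [0.5, 0.6, 0.4, 0.7, 0.3, 0.5, 0.6, 0.4, 0.5, 0.7, 0.6, 0.4]
t_nw = newey_west_t(y)
print(t_nw)
```

The paper's point is that comparing such a statistic to asymptotic normal critical values is unreliable in samples like this; the fixed-b approach uses different reference distributions instead.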

18.
Hinkley (1977) derived two tests for testing the mean of a normal distribution with known coefficient of variation (c.v.) for right alternatives. They are the locally most powerful (LMP) and the conditional tests based on the ancillary statistic for μ. In this paper, the likelihood ratio (LR) and Wald tests are derived for the one‐ and two‐sided alternatives, as well as the two‐sided version of the LMP test. The performances of these tests are compared with those of the classical t, sign and Wilcoxon signed rank tests. The latter three tests do not use the information on c.v. Normal approximation is used to approximate the null distribution of the test statistics except for the t test. Simulation results indicate that all the tests maintain the type I error rates, that is, the attained level is close to the nominal level of significance of the tests. The power functions of the tests are estimated through simulation. The power comparison indicates that for one‐sided alternatives the LMP test is the best test whereas for the two‐sided alternatives the LR or the Wald test is the best test. The t, sign and Wilcoxon signed rank tests have lower power than the LMP, LR and Wald tests at various alternative values of μ. The power difference is quite large in several simulation configurations. Further, it is observed that the t, sign and Wilcoxon signed rank tests have considerably lower power even for the alternatives which are far away from the null hypothesis when the c.v. is large. To study the sensitivity of the tests for the violation of the normality assumption, the type I error rates are estimated on the observations of lognormal, gamma and uniform distributions. The newly derived tests maintain the type I error rates for moderate values of c.v.

19.
Receiver operating characteristic curves are widely used as a measure of accuracy of diagnostic tests and can be summarised using the area under the receiver operating characteristic curve (AUC). Often, it is useful to construct a confidence interval for the AUC; however, because there are a number of different proposed methods to measure variance of the AUC, there are thus many different resulting methods for constructing these intervals. In this article, we compare different methods of constructing Wald‐type confidence interval in the presence of missing data where the missingness mechanism is ignorable. We find that constructing confidence intervals using multiple imputation based on logistic regression gives the most robust coverage probability and the choice of confidence interval method is less important. However, when missingness rate is less severe (e.g. less than 70%), we recommend using Newcombe's Wald method for constructing confidence intervals along with multiple imputation using predictive mean matching.
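A complete-data sketch of one such Wald-type interval (no missingness or imputation, unlike the article's setting): the Mann-Whitney estimate of the AUC with the Hanley-McNeil variance, truncated to [0, 1].

```python
import math
from statistics import NormalDist

def auc_wald_ci(pos, neg, level=0.95):
    """Mann-Whitney AUC with a Wald interval using the
    Hanley-McNeil (1982) variance approximation."""
    n1, n2 = len(pos), len(neg)
    # count concordant pairs; ties count one half
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    a = wins / (n1 * n2)
    q1 = a / (2 - a)
    q2 = 2 * a * a / (1 + a)
    var = (a * (1 - a) + (n1 - 1) * (q1 - a * a)
           + (n2 - 1) * (q2 - a * a)) / (n1 * n2)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * math.sqrt(var)
    return a, max(0.0, a - half), min(1.0, a + half)

pos = [0.9, 0.8, 0.7, 0.6, 0.55]   # scores of diseased subjects (toy data)
neg = [0.5, 0.4, 0.65, 0.3, 0.2]   # scores of healthy subjects
a, lo, hi = auc_wald_ci(pos, neg)
print(a, lo, hi)
```

Note the truncation at the boundaries: with small samples and a high AUC the plain Wald interval spills past 1, one of the defects motivating alternatives such as Newcombe's method.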

20.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group to be large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions where the number of groups plays the role of sample size. Using a small number of groups is analogous to ‘fixed-b’ asymptotics of Kiefer and Vogelsang (2002, 2005) (KV) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence that demonstrates that the procedure substantially outperforms conventional inference procedures.
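A minimal sketch of a CCE-based t-statistic for a clustered mean, to be compared against a t distribution with G − 1 degrees of freedom (toy data; the groups and values are made up, and the variance formula is the simple cluster-sum estimator for a mean, not the paper's general regression case):

```python
import math

def cluster_t(groups, mu0=0.0):
    """t-statistic for H0: E[y] = mu0 with a cluster covariance (CCE)
    variance estimate; with few clusters, refer it to t with G-1 df."""
    all_obs = [y for g in groups for y in g]
    n = len(all_obs)
    ybar = sum(all_obs) / n
    # cluster scores: within-cluster sums of deviations from the overall mean
    scores = [sum(y - ybar for y in g) for g in groups]
    G = len(groups)
    var = G / (G - 1) * sum(s * s for s in scores) / n ** 2
    return (ybar - mu0) / math.sqrt(var), G - 1

groups = [[0.2, 0.1, 0.3], [-0.1, 0.0, 0.1], [0.4, 0.2], [0.0, -0.2, -0.1]]
t, df = cluster_t(groups)
print(t, df)
```

With G = 4 groups the reference distribution is t with 3 degrees of freedom, whose critical values are far larger than the normal ones, which is how the fixed-G approximation corrects the over-rejection of conventional cluster-robust inference.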
