Similar Articles
 20 similar articles found
1.
The assumption of normality has underlain much of the development of statistics, including spatial statistics, and many tests of normality have been proposed. In this work, we focus on the multivariate setting and first review recent advances in multivariate normality tests for i.i.d. data, with emphasis on the skewness and kurtosis approaches. We show through simulation studies that some of these tests cannot be used directly for testing normality of spatial data. We then briefly review the few existing univariate tests under dependence (in time or space) and propose a new multivariate normality test for spatial data that accounts for the spatial dependence. The new test uses the union-intersection principle to decompose the null hypothesis into an intersection of univariate normality hypotheses for projected data, and it rejects multivariate normality if any individual hypothesis is rejected. The individual univariate hypotheses are tested using a Jarque–Bera type statistic that accounts for the spatial dependence in the data. Simulation studies show that the new test maintains good control of the type I error and has high empirical power, especially for large sample sizes. We further illustrate our test on bivariate wind data over the Arabian Peninsula.
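A minimal sketch of the union-intersection idea, using the classical i.i.d. Jarque–Bera statistic on random projections. The paper's statistic instead adjusts the Jarque–Bera variance for spatial dependence; the random directions, the number of projections, and the Bonferroni correction below are illustrative assumptions, not the authors' choices.

```python
import numpy as np
from scipy import stats

def union_intersection_normality(X, n_proj=50, alpha=0.05, seed=0):
    """Union-intersection test of multivariate normality: project the
    data onto random directions, test each projection for univariate
    normality with Jarque-Bera, and reject if any projection rejects
    at a Bonferroni-adjusted level."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.standard_normal((n_proj, d))          # random unit directions
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    pvals = np.array([stats.jarque_bera(X @ u).pvalue for u in U])
    return pvals.min() < alpha / n_proj, pvals

X = np.random.default_rng(1).standard_normal((500, 2))
reject, _ = union_intersection_normality(X)
print("reject multivariate normality:", reject)
```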

2.
We prove asymptotic normality of a suitably standardized integrated square difference between a kernel-type error density estimator based on residuals and the expected value of the error density estimator based on innovations in GARCH models. This result parallels that of Bickel–Rosenblatt under the i.i.d. setup. Consequently, the goodness-of-fit test for the innovation density of GARCH processes based on this statistic is asymptotically distribution free, unlike tests based on the residual empirical process. A simulation study comparing the finite-sample behavior of this test with the Kolmogorov–Smirnov test and with the test based on the integrated square difference between the kernel density estimate and the null density shows some superiority of the proposed test.
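A hedged sketch of the integrated square difference statistic for GARCH residuals. For simplicity the residuals below are computed with the true GARCH(1,1) parameters; in practice one would use QMLE estimates, and the kernel bandwidth is left at the scipy default.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)

# Simulate a GARCH(1,1) process with standard normal innovations.
n, omega, a, b = 2000, 0.1, 0.1, 0.8
eps = rng.standard_normal(n)
sig2 = np.empty(n)
x = np.empty(n)
sig2[0] = omega / (1 - a - b)
x[0] = np.sqrt(sig2[0]) * eps[0]
for t in range(1, n):
    sig2[t] = omega + a * x[t - 1] ** 2 + b * sig2[t - 1]
    x[t] = np.sqrt(sig2[t]) * eps[t]

# Residuals (true parameters here; QMLE residuals in practice).
resid = x / np.sqrt(sig2)

# Integrated square difference between residual KDE and null density.
kde = gaussian_kde(resid)
grid = np.linspace(-5, 5, 1001)
T = trapezoid((kde(grid) - norm.pdf(grid)) ** 2, grid)
print("integrated square difference:", T)
```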

3.
This article proposes a class of asymptotically distribution-free specification tests for parametric conditional distributions. These tests are based on a martingale transform of a proper sequential empirical process of conditionally transformed data. Standard continuous functionals of this martingale provide omnibus tests, while linear combinations of the orthogonal components in its spectral representation form a basis for directional tests. Finally, Neyman-type smooth tests, a compromise between directional and omnibus tests, are discussed. As a special example, we study in detail the construction of directional tests for the null hypothesis of conditional normality against heteroskedastic contiguous alternatives. A small Monte Carlo study shows that our tests attain the nominal level even for small sample sizes.

4.
We evaluate conditional predictive densities for US output growth and inflation using a number of commonly used forecasting models that rely on large numbers of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out of sample. Our focus on predictive densities acknowledges the possibility that, although some predictors may improve or worsen point forecasts, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be approximated correctly by a normal density: the simple equal average when predicting output growth, and the Bayesian model average when predicting inflation.
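One standard way to check how well normal predictive densities fit realizations is via probability integral transforms (PITs). The sketch below is an illustrative proxy, not the specific battery of tests used in the paper; the forecast means and variances are placeholders.

```python
import numpy as np
from scipy import stats

def pit_normality_check(y, mu, sigma):
    """Probability integral transforms of realizations under normal
    predictive densities: if the densities are correctly specified,
    the PITs are i.i.d. Uniform(0,1). The KS test below ignores the
    serial dependence that formal out-of-sample tests account for."""
    pits = stats.norm.cdf(y, loc=mu, scale=sigma)
    return stats.kstest(pits, "uniform")

rng = np.random.default_rng(0)
mu, sigma = np.zeros(200), np.ones(200)   # placeholder density forecasts
y = 1.5 * rng.standard_normal(200)        # realizations with fatter tails
print(pit_normality_check(y, mu, sigma))  # should reject uniformity
```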

5.
The standard LM tests for spatial dependence in linear and panel regressions are derived under the assumptions of normality and homoskedasticity of the regression disturbances; hence, they may not be robust against non-normality or heteroskedasticity. Following Born and Breitung (2011), we introduce general methods to modify the standard LM tests so that they become robust against heteroskedasticity and non-normality. The idea behind the robustification is to decompose the concentrated score function into a sum of uncorrelated terms so that the outer product of gradients (OPG) can be used to estimate its variance. We also provide methods for improving the finite-sample performance of the proposed tests. These methods are then applied to several popular spatial models. Monte Carlo results show that they work well in finite samples.
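For reference, a sketch of the standard (non-robust) LM test for spatial error dependence, which is the starting point such robustifications modify; the weight matrix below is a toy nearest-neighbour choice, purely for illustration.

```python
import numpy as np
from scipy import stats

def lm_error_test(y, X, W):
    """Standard LM test for spatial error dependence in y = X b + u,
    derived under normality and homoskedasticity. The statistic is
    (e'We / sigma2)^2 / tr(W'W + WW), asymptotically chi-square(1)."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    sigma2 = e @ e / n
    lm = (e @ W @ e / sigma2) ** 2 / np.trace(W.T @ W + W @ W)
    return lm, stats.chi2.sf(lm, df=1)

# Toy example: row-normalized weight matrix with neighbours i-1, i+1.
rng = np.random.default_rng(0)
n = 100
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
print(lm_error_test(y, X, W))
```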

6.
Summary We prove that the tests of Csörgő (1986) and of Baringhaus and Henze (1988) for multivariate normality, both based on the empirical characteristic function, are consistent. Work partially supported by the Hungarian National Foundation for Scientific Research, Grants No. 1808/86 and 457/88.

7.
Hollander, Park and Proschan (1986) proposed a test of "new better than used at a specified age". It is based on the large-sample normality of the test statistic. There is, however, no study in the literature of its actual size for small and moderate sample sizes. To shed some light on this, the results of a Monte Carlo simulation study as well as two real-data examples are reported; these indicate that the test can have a quite liberal size, especially for small to moderate sample sizes. To improve on this weakness, a modified test is proposed and studied. It is noticed that this modified test seems to over-correct the original test, to the extent that it sometimes becomes unduly conservative. Hence we propose another modification that combines the original and modified tests; it turns out to have size quite close to the nominal level and is therefore preferable to both.

8.
Withdrawal from a longitudinal investigation is a common problem in epidemiological research. This paper describes a nonparametric method, based on a bootstrap approach, for assessing whether dropouts are missing at random. The basic idea is to compare scores of dropouts and non-dropouts at different assessments using a weighted nonparametric test statistic. A Monte Carlo investigation evaluates the power of the test under violations of population normality, using three commonly occurring distributions. The test proposed here is more powerful than its parametric counterpart under distributions with extreme skewness. The method is applied to a longitudinal community-based study of mental disorders. It is found that dropouts did not differ from the other subjects with respect to two psychological variables, although chi-square tests suggested otherwise.

9.
This paper focuses on nonparametric efficiency analysis based on robust estimation of partial frontiers in a complete multivariate setup (multiple inputs and multiple outputs). It introduces α-quantile efficiency scores. A nonparametric estimator is proposed that achieves strong consistency and asymptotic normality. If α increases to one as a function of the sample size, we recover the properties of the FDH estimator, but our estimator is more robust to perturbations in the data, since it attains a finite gross-error sensitivity. Environmental variables can be introduced to evaluate efficiencies, and a consistent estimator is proposed. Numerical examples illustrate the usefulness of the approach.
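A hedged single-output sketch of the contrast between full-frontier (FDH) and partial, order-α (quantile) efficiency scores; the paper treats the general multi-output case, and the data-generating process below is purely illustrative.

```python
import numpy as np

def efficiency_scores(X, y, alpha=0.95):
    """Output-oriented FDH efficiency scores and an order-alpha
    (quantile) analogue for a single output: for each unit, compare
    its output with those of units using no more of any input."""
    n = len(y)
    fdh, qa = np.empty(n), np.empty(n)
    for i in range(n):
        dom = np.all(X <= X[i], axis=1)        # input-dominating units
        ratios = y[dom] / y[i]
        fdh[i] = ratios.max()                  # full (FDH) frontier
        qa[i] = np.quantile(ratios, alpha)     # partial, order-alpha
    return fdh, qa

rng = np.random.default_rng(0)
X = rng.uniform(1, 2, size=(200, 2))
y = X.prod(axis=1) * rng.uniform(0.5, 1.0, 200)   # inefficient outputs
fdh, qa = efficiency_scores(X, y)
print("mean FDH score:", fdh.mean(), "| mean order-alpha score:", qa.mean())
```

Note that as α increases to 1 the quantile score approaches the FDH score, mirroring the asymptotic statement in the abstract.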

10.
Summary We propose a natural extension of Neyman’s smooth goodness of fit tests under composite hypotheses. The components of our smooth tests are immediate analogues of the corresponding components in the completely specified null case. We show that, when testing for univariate normality, one of our smooth tests is similar to D’Agostino and Pearson’s (1973) statistic; when testing for multivariate normality, our smooth test for skewness is identical to Mardia’s (1970) measure of multivariate skewness; and, when testing for exponentiality, our simplest smooth test is equivalent to Greenwood’s (1946) statistic.
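Since the smooth test for multivariate skewness coincides with Mardia's (1970) measure, here is a short sketch of that statistic; the chi-square calibration is the standard large-sample one.

```python
import numpy as np
from scipy import stats

def mardia_skewness_test(X):
    """Mardia's (1970) multivariate skewness b_{1,p} and the usual
    large-sample test: n*b/6 is asymptotically chi-square with
    p(p+1)(p+2)/6 degrees of freedom under normality."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    G = Xc @ S_inv @ Xc.T                  # Mahalanobis inner products
    b1 = (G ** 3).sum() / n ** 2
    stat = n * b1 / 6.0
    df = p * (p + 1) * (p + 2) // 6
    return stat, stats.chi2.sf(stat, df)

X = np.random.default_rng(0).standard_normal((300, 3))
print(mardia_skewness_test(X))
```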

11.
A normality assumption is usually made for discriminating between two stationary time series processes. A nonparametric approach is desirable whenever there is doubt about the validity of this normality assumption. In this paper a nonparametric approach is suggested based on kernel density estimation, applied first to (p+1) sample autocorrelations and second to (p+1) consecutive observations. A numerical comparison is made between Fisher's linear discrimination based on sample autocorrelations and kernel density discrimination for AR and MA processes with and without Gaussian noise. The methods are applied to some seismological data.

12.
We study forward curves formed from commodity futures prices listed on the Standard and Poor's-Goldman Sachs Commodities Index (S&P GSCI) using recently developed tools in functional time series analysis. Functional tests for stationarity and serial correlation suggest that log-differenced forward curves may generally be considered stationary and conditionally heteroscedastic sequences of functions. Several functional methods for forecasting forward curves that more accurately reflect the time to expiry of contracts are developed, and we find that these typically outperform their multivariate counterparts, with the best among them using the method of predictive factors introduced by Kargin and Onatski (2008).

13.
Fisher and Inference for Scores
This paper examines the work of Fisher and Bartlett on discriminant analysis, ordinal response regression and correspondence analysis. Placing these methods, together with canonical correlation analysis, in the context of the singular value decomposition of particular matrices, we use explicit models and vector space notation to unify them, understand Fisher's approach, understand Bartlett's criticisms of Fisher, and relate both to modern thinking. We consider in particular the formulation of certain hypotheses and Fisher's arguments for obtaining approximate distributions of tests of these hypotheses (without assuming multivariate normality), and we put these in modern notation. Using perturbation techniques pioneered by G.S. Watson, we give an asymptotic justification for Fisher's test for assigned scores and thereby resolve a long-standing conflict between Fisher and Bartlett.

14.
Ya. Yu. Nikitin, Metrika, 2018, 81(6): 609-618
We consider two scale-free tests of normality based on the characterization of the symmetric normal law by Ahsanullah et al. (Normal and Student's t-Distributions and Their Applications, Springer, Berlin, 2014). Both tests have a U-empirical structure, but the first is of integral type while the second is of Kolmogorov type. We discuss the limiting behavior of the test statistics and calculate their local exact Bahadur efficiency for location, skew and contamination alternatives.

15.
Journal of Econometrics, 2005, 124(1): 149-186
In this paper, we consider testing marginal normal distributional assumptions. More precisely, we propose tests based on moment conditions implied by normality. These moment conditions are known as the Stein (Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 2, pp. 583-602) equations. They coincide with the first class of moment conditions derived by Hansen and Scheinkman (Econometrica 63 (1995) 767) when the random variable of interest is a scalar diffusion. Among other examples, the Stein equation implies that the means of Hermite polynomials are zero. The GMM approach we adopt is well suited for two reasons. First, it allows us to study in detail the parameter uncertainty problem, i.e., when the tests depend on unknown parameters that have to be estimated. In particular, we characterize the moment conditions that are robust against parameter uncertainty and show that Hermite polynomials are special examples; this is the main contribution of the paper. Second, our tests are also valid for time series. In this case, we adopt a heteroskedasticity- and autocorrelation-consistent (HAC) approach to estimate the weighting matrix when the dependence of the data is unspecified. We also make a theoretical comparison of our tests with the Jarque and Bera (Econom. Lett. 6 (1980) 255) test and the OPG regression tests of Davidson and MacKinnon (Estimation and Inference in Econometrics, Oxford University Press, Oxford). Finite-sample properties of our tests are derived through a comprehensive Monte Carlo study. Finally, two applications to GARCH and realized volatility models are presented.
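A hedged sketch of Hermite-polynomial moment conditions combined with a HAC weighting, in the spirit of the paper; the choice of orders 3 and 4 and the Newey-West lag truncation are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy import stats

def hermite_moment_test(x, orders=(3, 4), lags=5):
    """Wald test of the Stein/Hermite moment conditions
    E[He_k(z)] = 0 for standardized z, with a Newey-West (HAC)
    estimate of the long-run variance to allow for dependence."""
    z = (x - x.mean()) / x.std()
    n = len(z)
    # He_3(z) = z^3 - 3z, He_4(z) = z^4 - 6z^2 + 3: dependence-robust
    # analogues of the skewness/kurtosis checks behind Jarque-Bera.
    G = np.column_stack([hermeval(z, [0] * k + [1]) for k in orders])
    gbar = G.mean(axis=0)
    Gc = G - gbar
    S = Gc.T @ Gc / n                      # Newey-West HAC covariance
    for l in range(1, lags + 1):
        C = Gc[l:].T @ Gc[:-l] / n
        S += (1 - l / (lags + 1)) * (C + C.T)
    wald = n * gbar @ np.linalg.solve(S, gbar)
    return wald, stats.chi2.sf(wald, df=len(orders))

x = np.random.default_rng(0).standard_normal(1000)
print(hermite_moment_test(x))
```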

16.
We consider estimation of nonparametric structural models under a functional-coefficient representation of the regression function. Under this representation, models are linear in the endogenous components, with coefficients given by unknown functions of the predetermined variables, a nonparametric generalization of random coefficient models. The functional-coefficient restriction is an intermediate approach between fully nonparametric structural models, which are ill posed when endogenous variables are continuously distributed, and partially linear models, over which it has appreciable flexibility. We propose two-step estimators that use local linear approximations in both steps: the first step estimates a vector of reduced-form regression models, and the second step is a local linear regression using the estimated reduced forms as regressors. Our large-sample results include consistency and asymptotic normality of the proposed estimators. The practical power of the estimators is illustrated via both a Monte Carlo simulation study and an application to returns to education.

17.
This study examines whether the same underlying structure of ability tests emerges when three different data analysis methods are used. The sample consisted of 335 examinees who applied for vocational guidance and were administered a battery of 17 tests. A matrix of intercorrelations between scores, based on the number of correct answers, was obtained. The matrix was subjected to factor analysis, Guttman's SSA, and tree analysis, essentially resulting in different structures. Comparisons were made, and the theoretical implications of the results are discussed in relation to various structural models of ability tests in the literature.

18.
N. Henze, Metrika, 1990, 37(1): 7-18
Summary The approach of Epps and Pulley (1983), based on the empirical characteristic function, is one of the most powerful tools for detecting departures from normality. We obtain the first four moments of the limiting null distribution of the Epps–Pulley statistic. Johnson and Pearson curve fitting yields excellent approximations to simulated quantiles, and by modifying the test statistic the procedure may be carried out easily, without the use of extensive tables, for all sample sizes. Research done while the author was on leave at the University of Gießen.
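A sketch of a BHEP-type statistic, of which the Epps–Pulley statistic is a special case: a weighted L2 distance between the empirical characteristic function of the standardized sample and the standard normal characteristic function, available in closed form under a Gaussian weight. The smoothing parameter β = 1 below is an illustrative choice and may not match the exact Epps–Pulley weighting.

```python
import numpy as np

def bhep_statistic(x, beta=1.0):
    """BHEP-type statistic: weighted L2 distance between the empirical
    characteristic function of the standardized sample and the standard
    normal characteristic function, in closed form under a Gaussian
    weight."""
    y = (x - x.mean()) / x.std()
    n = len(y)
    D2 = (y[:, None] - y[None, :]) ** 2
    b2 = beta ** 2
    t1 = np.exp(-b2 * D2 / 2).sum() / n
    t2 = 2 / np.sqrt(1 + b2) * np.exp(-b2 * y ** 2 / (2 * (1 + b2))).sum()
    t3 = n / np.sqrt(1 + 2 * b2)
    return t1 - t2 + t3

x = np.random.default_rng(0).standard_normal(200)
print("BHEP/Epps-Pulley type statistic:", bhep_statistic(x))
```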

19.
Economic and financial data often take the form of a collection of curves observed consecutively over time. Examples include intraday price curves, yield and term structure curves, and intraday volatility curves. Such curves can be viewed as a time series of functions. A fundamental issue that must be addressed before attempting to model such data statistically is whether these curves, perhaps suitably transformed, form a stationary functional time series. This paper formalizes the assumption of stationarity in the context of functional time series and proposes several procedures to test the null hypothesis of stationarity. The tests are nontrivial extensions of the broadly used tests in the KPSS family. The properties of the tests under several alternatives, including change-point and I(1) alternatives, are studied, and new insights, present only in the functional setting, are uncovered. The theory is illustrated by a small simulation study and an application to intraday price curves.
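A crude finite-dimensional proxy for the idea, not the paper's functional KPSS tests (which work with the full functional partial-sum process): project the curves onto their leading principal components and run a univariate KPSS test on each score series.

```python
import numpy as np
from statsmodels.tsa.stattools import kpss

def curve_scores_kpss(curves, n_comp=2):
    """Project observed curves (rows) onto their leading principal
    components and run a univariate KPSS test on each score series."""
    C = curves - curves.mean(axis=0)
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    results = []
    for k in range(n_comp):
        scores = C @ Vt[k]
        stat, pval, *_ = kpss(scores, regression="c", nlags="auto")
        results.append((stat, pval))
    return results

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
# 200 stationary random curves on a 50-point grid.
curves = rng.standard_normal((200, 1)) * np.sin(np.pi * grid) \
         + 0.1 * rng.standard_normal((200, 50))
print(curve_scores_kpss(curves))
```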

20.
Based on the well-known Karhunen–Loève expansion, it can be shown that many omnibus tests lack power against “high frequency” alternatives. The smooth tests of Neyman (1937) may be employed to circumvent this power deficiency, yet such tests may be difficult to compute in many applications. In this paper, we propose a more operational approach to constructing smooth tests. This approach hinges on a Fourier representation of the postulated empirical process with known Fourier coefficients, and the proposed test is based on the normalized principal components associated with the covariance matrix of finitely many Fourier coefficients. The proposed test thus requires only standard principal component analysis, which can be carried out with most econometric packages. We establish the asymptotic properties of the proposed test and consider two data-driven methods for determining the number of Fourier coefficients in the test statistic. Our simulations show that the proposed tests compare favorably with conventional smooth tests in finite samples.
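A minimal sketch of a Neyman-type smooth test built from Fourier (cosine) coefficients in the simple known-null case; the paper's contribution is the principal-component normalization needed when parameters are estimated, which is omitted here.

```python
import numpy as np
from scipy import stats

def smooth_test_uniform(u, K=4):
    """Neyman-type smooth test of H0: U ~ Uniform(0,1) with a cosine
    basis: the normalized coefficients c_k are asymptotically i.i.d.
    N(0,1) under H0, so sum(c_k^2) is approximately chi-square(K)."""
    n = len(u)
    ks = np.arange(1, K + 1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * np.outer(ks, u)).sum(axis=1)
    stat = (c ** 2).sum()
    return stat, stats.chi2.sf(stat, df=K)

# Normality check through the probability integral transform.
x = np.random.default_rng(0).standard_normal(500)
u = stats.norm.cdf(x)     # known-parameter null, for illustration
print(smooth_test_uniform(u))
```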

