Similar Literature
20 similar documents retrieved
1.
Volatility models have been playing important roles in economics and finance. Using a generalized spectral second order derivative approach, we propose a new class of generally applicable omnibus tests for the adequacy of linear and nonlinear volatility models. Our tests have a convenient asymptotic null N(0,1) distribution, and can detect a wide range of misspecifications for volatility dynamics, including both neglected linear and nonlinear volatility dynamics. Distinct from the existing diagnostic tests for volatility models, our tests are robust to time-varying higher order moments of unknown form (e.g., time-varying skewness and kurtosis). They check a large number of lags and are therefore expected to be powerful against neglected volatility dynamics that occur at higher order lags or display long memory properties. Despite using a large number of lags, our tests do not suffer much from the loss of a large number of degrees of freedom, because our approach naturally discounts higher order lags, which is consistent with the stylized fact that economic or financial markets are affected more by recent past events than by remote past events. No specific estimation method is required, and parameter estimation uncertainty has no impact on the convenient limit N(0,1) distribution of the test statistics. Moreover, there is no need to formulate an alternative volatility model, and only estimated standardized residuals are needed to implement our tests. We do not have to calculate tedious and model-specific score functions or derivatives of volatility models with respect to estimated parameters, which are required in some existing popular diagnostic tests for volatility models. We examine the finite sample performance of the proposed tests. It is documented that the new tests are rather powerful in detecting neglected nonlinear volatility dynamics which the existing tests can easily miss. They are useful diagnostic tools for practitioners when modelling volatility dynamics.
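As a rough illustration of the lag-discounting idea (a minimal sketch only, not the paper's generalized spectral derivative test; the Bartlett kernel, the bandwidth p, and the simplified centering and scaling constants are assumptions in the spirit of kernel-weighted portmanteau statistics), one can downweight higher-order autocorrelations of squared standardized residuals as follows:

import numpy as np

def bartlett_kernel(x):
    # Bartlett kernel: linear downweighting of lags, zero beyond the bandwidth.
    return np.where(np.abs(x) <= 1.0, 1.0 - np.abs(x), 0.0)

def weighted_portmanteau(z, p=20):
    """Kernel-weighted portmanteau check on squared standardized residuals z_t.

    Approximately N(0,1) under the null of no remaining volatility dynamics
    (simplified centering/scaling constants; finite-sample factors ignored).
    """
    T = len(z)
    s = z**2 - np.mean(z**2)
    lags = np.arange(1, T)
    # sample autocorrelations of z_t^2 at lags 1..T-1
    rho = np.array([np.sum(s[j:] * s[:-j]) for j in lags]) / np.sum(s**2)
    w = bartlett_kernel(lags / p)**2            # k^2(j/p): discounts higher-order lags
    center = np.sum(w)
    scale = np.sqrt(2.0 * np.sum(w**2))
    return (T * np.sum(w * rho**2) - center) / scale

# illustrative use on simulated i.i.d. "standardized residuals"
rng = np.random.default_rng(0)
print(weighted_portmanteau(rng.standard_normal(1000), p=20))

The sketch ignores parameter estimation effects and nonlinear dependence beyond squared residuals, both of which the paper's tests are designed to handle.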

2.
We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but as usual the asymptotic results do not require normally distributed innovations. Our tests differ from existing tests in two respects. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott et al. (1996). Second, our tests incorporate a “sign” restriction which generalizes the one-sided unit root test. We show that the asymptotic local power of the proposed tests dominates that of existing cointegration rank tests.

3.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case that the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
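For readers unfamiliar with corrected OLS, here is a minimal sketch of the moment-based correction under exponential errors (an illustration of the Richmond-type COLS idea mentioned above, not the paper's exact-inference procedure; the simulated data and names are invented for the example). The OLS intercept is biased downward by E[u] = 1/lambda, and lambda can be estimated from the residual variance because Var(u) = 1/lambda^2.

import numpy as np

def cols_exponential(y, X):
    """Corrected OLS for a deterministic frontier y = X b - u, u ~ Exp(lam).

    Moment-based correction: the OLS intercept estimates (alpha - E[u]);
    estimate lam from the OLS residual standard deviation, then shift the
    intercept up by 1/lam.
    """
    Xc = np.column_stack([np.ones(len(y)), X])      # intercept first
    b_ols, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ b_ols
    lam_hat = 1.0 / np.std(resid, ddof=Xc.shape[1])
    b_cols = b_ols.copy()
    b_cols[0] += 1.0 / lam_hat                      # shift intercept up to the frontier
    return b_cols, lam_hat

# illustrative use with simulated data (true frontier: 2 + 1.5 x, Exp(2) inefficiency)
rng = np.random.default_rng(1)
X = rng.uniform(1, 5, size=(200, 1))
u = rng.exponential(scale=0.5, size=200)
y = 2.0 + 1.5 * X[:, 0] - u
print(cols_exponential(y, X))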

4.
There is a need to test the hypothesis of exponentiality against a wide variety of alternative hypotheses, across many areas of economics and finance. Local or contiguous alternatives are the closest alternatives against which it is still possible to have some power. Hence goodness-of-fit tests should have some power against all, or a huge majority, of local alternatives. Such tests are often based on nonlinear statistics, with a complicated asymptotic null distribution. Thus a second desirable property of a goodness-of-fit test is that its statistic will be asymptotically distribution free. We suggest a whole class of goodness-of-fit tests with both of these properties, by constructing a new version of the empirical process that weakly converges to a standard Brownian motion under the hypothesis of exponentiality. All statistics based on this process will asymptotically behave as statistics from a standard Brownian motion and so will be asymptotically distribution free. We show that the form of the transformation is especially simple in the case of exponentiality. Surprisingly, there are only two asymptotically distribution free versions of the empirical process for this problem, and only this one has a convenient limit distribution. Many tests of exponentiality have been suggested based on asymptotically linear functionals from the empirical process. We illustrate that none of these can be used as goodness-of-fit tests, contrary to some previous recommendations. Of considerable interest is that a selection of well-known statistics all lead to the same test asymptotically, with negligible asymptotic power against a great majority of local alternatives. Finally, we present an extension of our approach that solves the problem of multiple testing, both for exponentiality and for other, more general hypotheses.
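As a point of comparison (not the transformation-based tests proposed in the paper), a common practical route for the composite null of exponentiality is a Cramér–von Mises statistic computed with the estimated rate, with critical values obtained by parametric bootstrap. A minimal sketch:

import numpy as np

def cvm_exponential(x):
    """Cramér–von Mises statistic for exponentiality with estimated rate."""
    x = np.sort(x)
    n = len(x)
    u = 1.0 - np.exp(-x / np.mean(x))               # fitted exponential CDF values
    i = np.arange(1, n + 1)
    return np.sum((u - (2 * i - 1) / (2 * n))**2) + 1.0 / (12 * n)

def bootstrap_pvalue(x, B=2000, seed=0):
    """Parametric-bootstrap p-value: resample from Exp(mean(x)) under the null."""
    rng = np.random.default_rng(seed)
    stat = cvm_exponential(x)
    boot = np.array([cvm_exponential(rng.exponential(np.mean(x), size=len(x)))
                     for _ in range(B)])
    return np.mean(boot >= stat)

x = np.random.default_rng(2).exponential(1.0, size=100)
print(bootstrap_pvalue(x))

Because the rate is re-estimated in every bootstrap sample, the bootstrap handles the composite null; the paper's approach instead transforms the empirical process so that the limit is a standard Brownian motion and no resampling is needed.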

5.
Modeling conditional distributions in time series has attracted increasing attention in economics and finance. We develop a new class of generalized Cramér–von Mises (GCM) specification tests for time series conditional distribution models using a novel approach, which embeds the empirical distribution function in a spectral framework. Our tests check a large number of lags and are therefore expected to be powerful against neglected dynamics at higher order lags, which is particularly useful for non-Markovian processes. Despite using a large number of lags, our tests do not suffer much from the loss of a large number of degrees of freedom, because our approach naturally downweights higher order lags, which is consistent with the stylized fact that economic or financial markets are more affected by recent past events than by remote past events. Unlike the existing methods in the literature, the proposed GCM tests cover both univariate and multivariate conditional distribution models in a unified framework. They exploit the information in the joint conditional distribution of underlying economic processes. Moreover, a class of easy-to-interpret diagnostic procedures is provided to gauge possible sources of model misspecification. Distinct from conventional CM and Kolmogorov–Smirnov (KS) tests, which are also based on the empirical distribution function, our GCM test statistics follow a convenient asymptotic N(0,1) distribution and enjoy the appealing “nuisance parameter free” property that parameter estimation uncertainty has no impact on the asymptotic distribution of the test statistics. Simulation studies show that the tests provide reliable inference for sample sizes often encountered in economics and finance.
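A simpler, related diagnostic that practitioners often run on conditional distribution models (not the GCM construction itself) is based on probability integral transforms: under correct specification the PITs u_t = F(y_t | past; theta_hat) are i.i.d. Uniform(0,1). A minimal sketch, where the PIT series is assumed to have already been computed from the fitted model:

import numpy as np

def pit_diagnostics(u, n_lags=5):
    """Crude checks on probability integral transforms u_t.

    Returns a 10-bin chi-square uniformity statistic (approximately chi2(9)
    if the parameters were known) and the first few autocorrelations of u_t.
    Parameter estimation effects, which the GCM tests handle formally, are
    ignored in this sketch.
    """
    u = np.asarray(u)
    counts, _ = np.histogram(u, bins=10, range=(0.0, 1.0))
    expected = len(u) / 10.0
    chi2 = np.sum((counts - expected)**2 / expected)
    uc = u - u.mean()
    acf = [float(np.sum(uc[j:] * uc[:-j]) / np.sum(uc**2)) for j in range(1, n_lags + 1)]
    return chi2, acf

# illustrative use with artificial PITs from a correctly specified model
print(pit_diagnostics(np.random.default_rng(4).uniform(size=500)))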

6.
This paper considers tests of seasonal integration and cointegration for multivariate unobserved component models. First, the locally best invariant (LBI) test of the null hypothesis of a deterministic seasonal pattern against the alternative of seasonal integration is derived for a model with Gaussian i.i.d. disturbances and deterministic trend. Then the null hypothesis of seasonal cointegration is considered and a test for common nonstationary components at the seasonal frequencies is proposed. The tests are subsequently generalized to account for stochastic trends, weakly dependent errors and unattended unit roots. Asymptotic representations and critical values of the tests are provided, while the finite sample performance is evaluated by Monte Carlo simulation experiments. Finally, the tests are applied to the series of industrial production of the four largest countries of the European Monetary Union. It is found that Germany does not appear to cointegrate with the other countries at most seasonal frequencies, while there seems to exist a common nonstationary seasonal component between France, Italy and Spain. Copyright © 2006 John Wiley & Sons, Ltd.  相似文献   

7.
Journal of Econometrics, 2005, 127(2): 179–199
Many tests of parameter change in dynamic models exhibit nonmonotonic power. An important source of the nonmonotonic power comes from the bias in estimating parameters when there is a change in the deterministic component. To avoid this bias, we propose a nonparametric test for changing trends based on nonparametrically detrended data. The tests are similar in spirit to nonparametric conditional moment tests such as Fan and Li (J. Nonparametr. Stat. 10 (1999a) 245; 11 (1999b) 251) and Zheng (J. Econometrics 75 (1996) 263). The resulting statistics have a standard normal distribution. A Monte Carlo experiment suggests that the tests have good power against changes in the deterministic component.
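For reference, the Zheng (1996) type of kernel-based conditional moment statistic cited above has the general form (notation illustrative; \hat{e}_i are residuals, x_i the conditioning variables of dimension q, K a kernel, h a bandwidth)

V_n = \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i} \frac{1}{h^{q}} K\!\left(\frac{x_i - x_j}{h}\right) \hat{e}_i \, \hat{e}_j ,

and, after scaling by n h^{q/2} and studentizing by a consistent estimate of its null standard deviation, the statistic is asymptotically standard normal under the null, which is the sense in which "the resulting statistics have a standard normal distribution."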

8.
Journal of Econometrics, 2005, 128(1): 165–193
We analyze OLS-based tests of long-run relationships, weak exogeneity and short-run dynamics in conditional error correction models. Unweighted sums of single equation test statistics are used for hypothesis testing in pooled systems. When model errors are (conditionally) heteroskedastic, tests of weak exogeneity and short-run dynamics are affected by nuisance parameters. Similarly, on the pooled level the advocated test statistics are no longer pivotal in the presence of cross-sectional error correlation. We prove that the wild bootstrap provides asymptotically valid critical values under both conditional heteroskedasticity and cross-sectional error correlation. A Monte Carlo study reveals that in small samples the bootstrap outperforms first-order asymptotic approximations in terms of the empirical size even if the asymptotic distribution of the test statistic does not depend on nuisance parameters. In contrast to feasible GLS methods, the approach does not require any estimate of cross-sectional correlation and copes with time-varying patterns of contemporaneous error correlation.
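A minimal sketch of the wild bootstrap resampling step described above (illustrative only: in a full implementation the bootstrap sample is rebuilt under the null and the model re-estimated before the statistic is recomputed, steps that stat_fn merely stands in for here):

import numpy as np

def wild_bootstrap_resample(resid, rng):
    """One wild-bootstrap draw: multiply residuals by external Rademacher weights.

    resid is a (T, N) array of residuals from N cross-sectional units.  Using a
    single weight per time period (broadcast across units) preserves both
    conditional heteroskedasticity and contemporaneous cross-sectional
    correlation in the resampled errors.
    """
    T = resid.shape[0]
    w = rng.choice([-1.0, 1.0], size=(T, 1))        # Rademacher weights, one per period
    return resid * w

def wild_bootstrap_pvalue(stat_fn, resid, B=999, seed=0):
    """Bootstrap p-value: recompute the test statistic on each resampled panel."""
    rng = np.random.default_rng(seed)
    stat = stat_fn(resid)
    boot = np.array([stat_fn(wild_bootstrap_resample(resid, rng)) for _ in range(B)])
    return np.mean(np.abs(boot) >= np.abs(stat))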

9.
Recent research has found that trend-break unit root tests derived from univariate linear models do not support the hypothesis of long-run purchasing power parity (PPP) for US dollar real exchange rates. In this paper univariate smooth transition models are utilized to develop unit root tests that allow, under the alternative hypothesis, for stationarity around a gradually changing deterministic trend function. These tests reveal statistically significant evidence against the null hypothesis of a unit root for the real exchange rates of a number of countries against the US dollar. However, restrictions consistent with long-run PPP are rejected for some of the countries for which a rejection of the unit root hypothesis is obtained. Copyright © 2005 John Wiley & Sons, Ltd.
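The abstract does not state the trend specification; in this literature the gradually changing deterministic component is commonly modelled with a logistic smooth transition (e.g., Leybourne, Newbold and Vougas, 1998), for instance

y_t = \alpha_1 + \alpha_2 \, S_t(\gamma, \tau) + \varepsilon_t, \qquad S_t(\gamma, \tau) = \left[ 1 + \exp\{-\gamma (t - \tau T)\} \right]^{-1},

where \tau \in (0,1) locates the midpoint of the transition, \gamma > 0 governs its speed, and the unit root test is applied to the residuals from a nonlinear least squares fit of the deterministic part. Whether the paper uses exactly this form, or a version that also allows transitions in trend slopes, is not specified in the abstract.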

10.
The paper is concerned with several kinds of stochastic frontier models whose likelihood function is not available in closed form. First, with output-oriented stochastic frontier models whose one-sided errors have a distribution other than the standard ones (exponential or half-normal). The gamma and beta distributions are leading examples. Second, with input-oriented stochastic frontier models which are common in theoretical discussions but not in econometric applications. Third, with two-tiered stochastic frontier models when the one-sided error components follow gamma distributions. Fourth, with latent class models with gamma distributed one-sided error terms. Fifth, with models whose two-sided error component is distributed as stable Paretian and the one-sided error is gamma. The principal aim is to propose approximations to the density of the composed error based on the inversion of the characteristic function (which turns out to be manageable) using the Fourier transform. Procedures that are based on the asymptotic normal form of the log-likelihood function and have arbitrary degrees of asymptotic efficiency are also proposed, implemented and evaluated in connection with output-oriented stochastic frontiers. The new methods are illustrated using data for US commercial banks, electric utilities, and a sample from the National Youth Longitudinal Survey.
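The core device is Fourier inversion of the characteristic function of the composed error eps = v - u. As a hedged illustration (the paper's own approximations are more elaborate and cover further cases), with normal noise v ~ N(0, sigma^2) and a gamma one-sided error u with shape k and scale theta, the characteristic function is phi(t) = exp(-sigma^2 t^2 / 2) (1 + i theta t)^(-k), and the density can be recovered by direct numerical integration:

import numpy as np

def composed_error_density(eps, sigma=1.0, k=2.0, theta=0.5,
                           t_max=50.0, n_grid=4000):
    """Density of eps = v - u, v ~ N(0, sigma^2), u ~ Gamma(k, theta),
    via numerical Fourier inversion of the characteristic function.

    cf of v:  exp(-sigma^2 t^2 / 2);  cf of -u:  (1 + i*theta*t)^(-k)
    f(eps) = (1/pi) * Integral_0^inf Re[ exp(-i t eps) * cf(t) ] dt
    """
    t = np.linspace(1e-8, t_max, n_grid)
    cf = np.exp(-0.5 * sigma**2 * t**2) * (1.0 + 1j * theta * t) ** (-k)
    integrand = np.real(np.exp(-1j * t * eps) * cf)
    dt = t[1] - t[0]
    return np.sum(integrand) * dt / np.pi           # simple Riemann sum suffices here

print(composed_error_density(-0.5))

The density values can then be plugged into a likelihood for numerical maximization; the parameter values in the example are arbitrary.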

11.
We characterize the class of dominant-strategy incentive-compatible (or strategy-proof) random social choice functions in the standard multi-dimensional voting model where voter preferences over the various dimensions (or components) are lexicographically separable. We show that these social choice functions (which we call generalized random dictatorships) are induced by probability distributions on voter sequences of length equal to the number of components. They induce a fixed probability distribution on the product set of voter peaks. The marginal probability distribution over every component is a random dictatorship. Our results generalize the classic random dictatorship result in Gibbard (1977) and the decomposability results for strategy-proof deterministic social choice functions for multi-dimensional models with separable preferences obtained in LeBreton and Sen (1999).

12.
Journal of Econometrics, 1986, 31(3): 341–361
A modified Lagrange multiplier test statistic is proposed which takes explicit account of the one-sided nature of the alternative in problems where the null hypothesis specifies that the true value of the parameter vector lies on the boundary of the parameter space. Computation of this statistic requires only the constrained maximum likelihood estimator. Conditions for the consistency of tests based on this statistic are examined and it is shown that the distribution of the statistic is not affected if nuisance parameters are allowed to lie on the boundary of the parameter space.
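For context, in the simplest case of a single scalar parameter on the boundary under the null, likelihood-based one-sided statistics have the well-known limiting mixture

\tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1,

that is, a point mass of one half at zero plus one half of a chi-square with one degree of freedom, so the 5% critical value is the 90% quantile of \chi^2_1 (about 2.71) rather than the usual 3.84. The paper's setting, with a parameter vector on the boundary and possible nuisance parameters also on the boundary, is more general than this textbook case.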

13.
We discuss how to test the specification of an ordered discrete choice model against a general alternative. Two main approaches can be followed: tests based on moment conditions and tests based on comparisons between parametric and nonparametric estimations. Following these approaches, various statistics are proposed and their asymptotic properties are discussed. The performance of the statistics is compared by means of simulations. An easy-to-compute variant of the standard moment-based statistic yields the best results in models with a single explanatory variable. In models with various explanatory variables the results are less conclusive, since the relative performance of the statistics depends on both the fit of the model and the type of misspecification that is considered.

14.
To test the null hypothesis of a Poisson marginal distribution, test statistics based on the Stein–Chen identity are proposed. For a wide class of Poisson count time series, the asymptotic distribution of different types of Stein–Chen statistics is derived, including the case where multiple statistics are applied jointly. The performance of the tests is analyzed with simulations, as is the question of which Stein–Chen functions should be used for which alternative. Illustrative data examples are presented, and possible extensions of the novel Stein–Chen approach are discussed as well.
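The Stein–Chen characterization states that X follows a Poisson(lambda) distribution if and only if E[lambda f(X+1)] = E[X f(X)] for all bounded functions f, so a test can be based on the sample analogue of E[lambda_hat f(X+1) - X f(X)]. A minimal i.i.d. sketch (the choice f(x) = min(x, 5), the bootstrap calibration, and the simulated data are assumptions for illustration; the paper derives asymptotic distributions for dependent count time series, which this sketch does not cover):

import numpy as np

def stein_chen_stat(x, f):
    """Sample analogue of E[lam*f(X+1) - X*f(X)], zero iff X ~ Poisson(lam);
    lam is replaced by the sample mean."""
    lam = np.mean(x)
    return np.mean(lam * f(x + 1) - x * f(x))

def stein_chen_pvalue(x, f=lambda v: np.minimum(v, 5.0), B=2000, seed=0):
    """Parametric-bootstrap p-value under the fitted Poisson null (i.i.d. sketch;
    dependent count series would require a variance correction or block methods)."""
    rng = np.random.default_rng(seed)
    stat = stein_chen_stat(x, f)
    lam = np.mean(x)
    boot = np.array([stein_chen_stat(rng.poisson(lam, size=len(x)), f)
                     for _ in range(B)])
    return np.mean(np.abs(boot) >= np.abs(stat))

x = np.random.default_rng(3).poisson(2.0, size=200)
print(stein_chen_pvalue(x))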

15.
The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline-pretreatment levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model and to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains.

16.
R. Gatto, Metrika, 2017, 80(6–8): 733–747
This article provides P values for two new tests on the mean direction of the von Mises–Fisher distribution. The test statistics are obtained from the exponent of the saddlepoint approximation to the density of M-estimators, as suggested by Robinson et al. (Ann Stat 31:1154–1169, 2003). These test statistics are chi-square distributed with asymptotically small relative errors. Despite the high dimensionality of the problem, the proposed P values are accurate and simple to compute. The numerical precision of the P values of the new tests is illustrated by some simulation studies.

17.
In this paper, we develop two cointegration tests for two varying coefficient cointegration regression models, respectively. Our test statistics are residual based. We derive the asymptotic distributions of the test statistics under the null hypothesis of cointegration and show that they are consistent against the alternative hypotheses. We also propose a wild bootstrap procedure, combined with the continuous moving block bootstrap method of Paparoditis and Politis (2001) and Phillips (2010), to rectify severe distortions found in simulations when the sample size is small. We apply the proposed test statistic to examine the purchasing power parity (PPP) hypothesis between the US and Canada. In contrast to the existing results from linear cointegration tests, our varying coefficient cointegration test does not reject that PPP holds between the US and Canada.

18.
In applied research in econometrics a general model determined from the current knowledge of economic theory often establishes a ‘natural’ method of embedding a number of otherwise non-nested hypotheses. Under these circumstances, significance tests of various hypotheses can be carried out within the classical framework, and tests of non-nested or separate families of hypotheses do not require the development of new statistical methods. The application of some suitable variant of the likelihood ratio testing procedure will be quite appropriate. There are, however, many occasions in applied econometrics where the hypotheses under consideration are intended to provide genuine rival explanations of the same given phenomenon and the state of economic theory is not such as to furnish us with a general model that contains both of the rival hypotheses in a ‘natural’ and theoretically consistent manner. A number of investigators have advocated that even when a ‘natural’ comprehensive model containing both of the hypotheses under consideration cannot be obtained from theoretical considerations, it is still appropriate to base significance tests of non-nested hypotheses upon a combined model ‘artificially’ constructed from the rival alternatives. Moreover, in a recent paper on the application of Lagrange Multiplier (LM) tests to model specification, T.S. Breusch and A.R. Pagan (1980) have claimed that Cox's test statistic is connected to an LM or ‘score’ statistic derived from the application of the LM method to an exponentially combined model earlier employed by A.C. Atkinson (1970). Although the use of ‘artificially’ constructed comprehensive models for testing separate families of hypotheses is analytically tempting, it is subject to two major difficulties. Firstly, in many cases of interest in econometrics, the structural parameters under the combined hypothesis are not identified. Secondly, the log likelihood function of the artificially constructed model has singularities under both the null and alternative hypotheses. The paper firstly examines the derivation of LM statistics in the case of non-nested hypotheses and shows that Atkinson's general test statistic, or Breusch and Pagan's result, can be regarded as an LM test if the parameters of the alternative hypothesis are known. The paper also shows that unless all the parameters of the combined model are identified, no meaningful test of the separate families of hypotheses by the artificial embedding procedure is possible; in the identified case, an expression for the LM statistic is obtained that avoids the problem of the singularity of the information matrix under the null and the alternative hypotheses. The paper concludes that none of the artificial embedding procedures is satisfactory for testing non-nested models and they should be abandoned. It emphasizes, however, that despite these difficulties associated with the use of artificial embedding procedures, Cox's original statistic (which is not derived as an LM statistic and does not depend on any arbitrary synthetic combination of hypotheses) can still be employed as a useful procedure for testing the rival hypotheses often encountered in applied econometrics.

19.
Journal of Econometrics, 2005, 124(2): 253–267
This paper suggests a procedure for the construction of optimal weighted average power similar tests for the error covariance matrix of a Gaussian linear regression model when the alternative model belongs to the exponential family. The paper uses a saddlepoint approximation to construct simple test statistics for a large class of problems and overcomes the computational burden of evaluating the complicated integrals arising in the derivation of optimal weighted average power tests. Extensions to panel data models are considered. Applications are given to tests for error autocorrelation in the linear regression model and in a panel data framework.

20.
We develop three corrected score tests for generalized linear models with dispersion covariates, thus generalizing the results of Cordeiro, Ferrari and Paula (1993) and Cribari-Neto and Ferrari (1995). We present, in matrix notation, general formulae for the coefficients which define the corrected statistics. The formulae only require simple operations on matrices and can be used to obtain analytically closed-form corrections for score test statistics in a variety of special generalized linear models with dispersion covariates. They also have advantages for numerical purposes since our formulae are readily computable using a language supporting numerical linear algebra. Two examples are given, namely, i.i.d. sampling without covariates on the mean or dispersion parameter, and one-way classification models. We also present some simulations where the three corrected tests perform better than the usual score test, the likelihood ratio test and its Bartlett corrected version. Finally, we present a numerical example for a data set discussed by Simonoff and Tsai (1994).
