Similar Documents (20 results)
1.
In applied research in econometrics, a general model determined from the current knowledge of economic theory often establishes a ‘natural’ method of embedding a number of otherwise non-nested hypotheses. Under these circumstances, significance tests of various hypotheses can be carried out within the classical framework, and tests of non-nested or separate families of hypotheses do not require the development of new statistical methods; the application of some suitable variant of the likelihood ratio testing procedure will be quite appropriate.

There are, however, many occasions in applied econometrics where the hypotheses under consideration are intended to provide genuine rival explanations of the same given phenomenon, and the state of economic theory is not such as to furnish us with a general model that contains both of the rival hypotheses in a ‘natural’ and theoretically consistent manner. A number of investigators have advocated that even when a ‘natural’ comprehensive model containing both of the hypotheses under consideration cannot be obtained from theoretical considerations, it is still appropriate to base significance tests of non-nested hypotheses upon a combined model ‘artificially’ constructed from the rival alternatives. Moreover, in a recent paper on the application of Lagrange Multiplier (LM) tests to model specification, T.S. Breusch and A.R. Pagan (1980) have claimed that Cox's test statistic is connected to an LM or ‘score’ statistic derived from the application of the LM method to an exponentially combined model earlier employed by A.C. Atkinson (1970).

Although the use of ‘artificially’ constructed comprehensive models for testing separate families of hypotheses is analytically tempting, it is nevertheless subject to two major difficulties. Firstly, in many cases of interest in econometrics, the structural parameters under the combined hypothesis are not identified.
Secondly, the log likelihood function of the artificially constructed model has singularities under both the null and alternative hypotheses.

The paper first examines the derivation of LM statistics in the case of non-nested hypotheses and shows that Atkinson's general test statistic, or Breusch and Pagan's result, can be regarded as an LM test if the parameters of the alternative hypothesis are known. The paper also shows that unless all the parameters of the combined model are identified, no meaningful test of the separate families of hypotheses by the artificial embedding procedure is possible; in the identified case, an expression for the LM statistic is obtained which avoids the problem of the singularity of the information matrix under the null and the alternative hypotheses.

The paper concludes that none of the artificial embedding procedures is satisfactory for testing non-nested models, and that they should be abandoned. It emphasizes, however, that despite these difficulties, Cox's original statistic (which is not derived as an LM statistic and does not depend on any arbitrary synthetic combination of hypotheses) can still be employed as a useful procedure for testing the rival hypotheses often encountered in applied econometrics.

2.
In this paper we propose a non-nested hypothesis test for testing the specification of a multivariate econometric model in the presence of an alternative model which purports to explain the same phenomenon. We demonstrate that the new test statistic tends to minus the same random variable as the CPD test statistic introduced by Pesaran and Deaton (1978), provided that the truth is ‘close’ to the null hypothesis. Since the new test is simpler to compute than the multivariate CPD test, it would seem to be the procedure of choice.

3.
The necessary and sufficient condition to test for ‘overall causality’, i.e., the presence of Granger-causality and instantaneous causal relations, in a bivariate and trivariate autoregressive model with recursive form is discussed. It is argued that the conventional AR model (the reduced-form AR) is a more straightforward and effective means of testing for ‘overall causality’. To detect instantaneous causality it is proposed to select the best subset system in a residual regression system in conjunction with model selection criteria. The Canadian money-income-bank rate system is re-examined in this way, and by using a previously proposed algorithm we identify the optimum multivariate subset AR with constraints to detect whether there is ‘overall causality’ in that system.
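As a concrete aside, the Granger-causality component of ‘overall causality’ can be illustrated with a standard lag-augmented regression F-test. The following is a minimal sketch in Python using only NumPy; the function name, lag length, and toy data are our own illustrative choices, not the paper's subset-AR selection procedure:

```python
import numpy as np

def granger_f_test(y, x, p=1):
    """F-test for Granger non-causality of x on y.

    Compares the restricted model (y on its own lags) with the
    unrestricted model (y on lags of both y and x). Large values
    of the statistic reject non-causality.
    """
    T = len(y)
    Y = y[p:]
    # Lag matrices: column k holds the k-th lag of the series
    lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))
    Xr = np.hstack([ones, lags_y])          # restricted regressors
    Xu = np.hstack([ones, lags_y, lags_x])  # unrestricted regressors
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_num = p
    df_den = (T - p) - Xu.shape[1]
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

# Toy bivariate system in which x genuinely Granger-causes y
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.empty(500)
y[0] = 0.0
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

F = granger_f_test(y, x, p=1)  # large, since x enters y's equation
```

In the simulated system y depends on lagged x, so the statistic is large and non-causality is rejected; the paper goes further by selecting an optimal multivariate subset AR rather than fixing the lag structure in advance.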

4.
Dr. K. Auinger, Metrika (1990) 37(1): 97-116
In this paper we propose a general method for the construction of tests that can be used for testing goodness of fit of lifetime distributions. The method is the following: first find an identity which holds for the survival function or the cumulative hazard function of the null distribution, then replace the function by a consistent estimate. The resulting statistic is asymptotically normal; estimating its asymptotic variance then gives a test statistic which is asymptotically χ²-distributed under H0. The method can be used for randomly (right) censored and single type-I (right) censored data. We apply this method to the following distributions: Weibull, log-logistic, log-normal, half-normal, Rayleigh, Gompertz, Pareto.
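To make the ‘replace the function by a consistent estimate’ step concrete, here is a minimal sketch of the Nelson-Aalen estimator of the cumulative hazard from right-censored data. This is our illustrative choice of consistent estimator; the paper's actual test construction builds further on such an estimate:

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard H(t) from
    right-censored data.

    times:  observed times (failure or censoring)
    events: 1 if the time is an observed failure, 0 if censored

    Ties are handled one observation at a time, a common
    simplification in teaching versions of the estimator.
    """
    order = np.argsort(times, kind="stable")
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    n = len(times)
    H, h = [], 0.0
    for i in range(n):
        at_risk = n - i  # subjects still at risk just before times[i]
        if events[i] == 1:
            h += 1.0 / at_risk
        H.append(h)
    return times, np.array(H)

# Five subjects: failures at 2, 3, 5, 8; one censored at 3
t, H = nelson_aalen([2, 3, 3, 5, 8], [1, 1, 0, 1, 1])
```

The censored observation contributes to the risk set but adds no hazard increment, which is exactly the feature that lets such estimates handle randomly right-censored data.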

5.
Although it has long been a consensus that intercoder reliability is crucial to the validity of a content analysis study, the choice among the many available indices has been debated. This study reviewed and empirically tested the most popular intercoder reliability indices, aiming to find the index most robust to prevalence and rater bias, by examining their relationships with response surface methodology in a Monte Carlo experiment. It was found that Maxwell's RE is superior to Krippendorff's α, Scott's π, Cohen's κ, Perreault and Leigh's Ir, and Gwet's AC1. More nuanced relationships among prevalence, sensitivity, specificity and the intercoder reliability indices were discovered through response surface plots. Both theoretical and practical implications are discussed.
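For readers unfamiliar with these indices, the simplest of them share an ‘observed versus chance agreement’ template. A minimal sketch of Cohen's κ for two coders follows (the toy codings are ours; this is the textbook formula, not the study's simulation design):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two coders over nominal categories:
    (observed agreement - chance agreement) / (1 - chance agreement),
    where chance agreement uses each coder's own marginal proportions.
    """
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Toy binary codings from two coders: 8/10 agreement
r1 = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]
r2 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
k = cohens_kappa(r1, r2)  # (0.8 - 0.52) / 0.48 = 7/12
```

Indices such as Scott's π and Krippendorff's α differ mainly in how the chance-agreement term pe is computed, which is precisely where prevalence and rater bias enter, and why the study compares the indices' robustness to both.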

6.
The theory of robustness modelling is essentially based on heavy-tailed distributions, because their longer tails place higher probability on extreme observations and are therefore better prepared to deal with diverse information such as outliers. There are many classes of distributions that can be regarded as heavy-tailed; some of them have interesting properties and are not explored in statistics. In the present work, we propose a robustness modelling approach based on the O-regularly varying (ORV) class, a generalization of the regular variation family; the ORV class allows more flexible tail behaviour, which can improve the way in which outlying information is discarded by the model. We establish sufficient conditions on the location and scale parameter structures which allow conflicts of information to be resolved automatically. We also provide a procedure for generating new distributions within the ORV class.

7.
Summary  In this paper the concept of 'rank-interaction' is introduced and a distribution-free method for testing against the presence of 'rank-interaction' is suggested for a two-way layout (classification) with m (> 1) observations per cell. Roughly speaking, rank-interaction can be understood as the phenomenon in which the ranks of the levels of some relevant variable differ across the classes of the other factor. The exact null distribution of the test statistic has been computed in some cases, and the asymptotic distribution under the null hypothesis has been derived. A test suggested by J.V. Bradley in his book 'Distribution-free Statistical Tests' [2] is discussed. In the opinion of the authors it is doubtful whether the asymptotic distribution of the test statistic under the null hypothesis, as given by Bradley, is correct. Bradley's test was intended to be sensitive to the presence of interactions defined in the usual way, and hence not only to 'rank-interaction'. The same applies to methods proposed by some other authors. We claim that situations exist where one should test against rank-interaction and not against the usual, more general alternative.

8.
This paper introduces tests for residual serial correlation in cointegrating regressions. The tests are devised in the frequency domain by using the spectral measure estimates. The asymptotic distributions of the tests are derived and test consistency is established. The asymptotic distributions are obtained by using the assumptions and methods that are different from those used in Grenander and Rosenblatt (1957) and Durlauf (1991). Small-scale simulation results are reported to illustrate the finite sample performance of the tests under various distributional assumptions on the data generating process. The distributions considered are normal and t-distributions. The tests are shown to have stable size at sample sizes as large as 50 or 100. Additionally, it is shown that the tests are reasonably powerful against the ARMA residuals. An empirical application of the tests to investigate the ‘weak-form’ efficiency in the foreign exchange market is also reported.

9.
Xu Zheng, Metrika (2012) 75(4): 455-469
This paper proposes a new goodness-of-fit test for parametric conditional probability distributions using the nonparametric smoothing methodology. An asymptotic normal distribution is established for the test statistic under the null hypothesis of correct specification of the parametric distribution. The test is shown to have power against local alternatives converging to the null at certain rates. The test can be applied to testing for possible misspecifications in a wide variety of parametric models. A bootstrap procedure is provided for obtaining more accurate critical values for the test. Monte Carlo simulations show that the test has good power against some common alternatives.

10.
This paper examines Durbin and Watson's (1950) choice of test statistic for their test of first-order autoregressive regression disturbances. Attention is focused on an alternative statistic, d'. Theoretical and empirical power properties of the d' test are compared with those of the Durbin-Watson test. The former is found to be locally best invariant while the latter is approximately locally best invariant. The d' test is also found to be more powerful than its counterpart against negative autocorrelation and for small values of the autocorrelation coefficient against positive autocorrelation. Selected bounds for significance points of d' are tabulated.
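The classical statistic the paper compares against is easy to state: d = Σ(e_t - e_{t-1})² / Σ e_t², computed on regression residuals. A minimal sketch follows (the alternative d' statistic itself is tabulated in the paper and not reproduced here):

```python
import numpy as np

def durbin_watson(e):
    """Durbin-Watson statistic d = sum((e_t - e_{t-1})^2) / sum(e_t^2).

    Values near 2 suggest no first-order autocorrelation;
    d well below 2 suggests positive autocorrelation,
    d well above 2 suggests negative autocorrelation.
    """
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Extreme positive autocorrelation: residuals never change sign or size
d_pos = durbin_watson([1.0, 1.0, 1.0, 1.0])          # d = 0

# Extreme negative autocorrelation: residuals alternate in sign
d_alt = durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])  # d = 20/6
```

Because d is a ratio of quadratic forms in the residuals, its null distribution depends on the regressor matrix, which is why bounds for significance points (for d, and for d' in the paper) are tabulated rather than given by a single distribution.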

11.
In this paper, we consider testing distributional assumptions in multivariate GARCH models based on empirical processes. Using the fact that the joint distribution carries the same amount of information as the marginal together with the conditional distributions, we first transform the multivariate data into univariate independent data based on the marginal and conditional cumulative distribution functions. We then apply Khmaladze's martingale transformation (K-transformation) to the empirical process in the presence of estimated parameters. The K-transformation eliminates the effect of parameter estimation, allowing a distribution-free test statistic to be constructed. We show that the K-transformation takes a very simple form for testing multivariate normal and multivariate t-distributions. The procedure is applied to a multivariate financial time series data set.

12.
Most of the empirical applications of the stochastic volatility (SV) model are based on the assumption that the conditional distribution of returns, given the latent volatility process, is normal. In this paper, the SV model based on a conditional normal distribution is compared with SV specifications using conditional heavy-tailed distributions, especially Student's t-distribution and the generalized error distribution. To estimate the SV specifications, a simulated maximum likelihood approach is applied. The results based on daily data on exchange rates and stock returns reveal that the SV model with a conditional normal distribution does not adequately account for the two following empirical facts simultaneously: the leptokurtic distribution of the returns and the low but slowly decaying autocorrelation functions of the squared returns. It is shown that these empirical facts are more adequately captured by an SV model with a conditional heavy-tailed distribution. It also turns out that the choice of the conditional distribution has systematic effects on the parameter estimates of the volatility process. Copyright © 2000 John Wiley & Sons, Ltd.
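A minimal simulation sketch of the comparison (the parameter values and function names are our illustrative choices, not the paper's estimates): returns are generated from a log-AR(1) volatility process with either conditional normal or unit-variance Student-t innovations, and the volatility mixing alone already produces excess kurtosis in the returns:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_sv(T, cond="normal", phi=0.95, sigma_eta=0.3, df=5):
    """Simulate returns from a basic stochastic-volatility model:

        h_t = phi * h_{t-1} + sigma_eta * eta_t   (log-volatility)
        r_t = exp(h_t / 2) * eps_t                (returns)

    with eps_t standard normal or Student-t scaled to unit variance.
    """
    h = np.zeros(T)
    for t in range(1, T):
        h[t] = phi * h[t - 1] + sigma_eta * rng.normal()
    if cond == "normal":
        eps = rng.normal(size=T)
    else:  # Student-t innovations rescaled to unit variance
        eps = rng.standard_t(df, size=T) * np.sqrt((df - 2) / df)
    return np.exp(h / 2) * eps

def excess_kurtosis(r):
    r = r - r.mean()
    return np.mean(r ** 4) / np.mean(r ** 2) ** 2 - 3.0

r_n = simulate_sv(20000, "normal")
r_t = simulate_sv(20000, "t")
```

The conditional-normal SV case is leptokurtic purely through mixing over the latent volatility; the conditional-t case fattens the tails further, which is in line with the paper's finding that conditional heavy-tailed specifications capture return kurtosis more adequately.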

13.
This paper analyses the ways in which a media organization implicated in a series of reputational scandals represents its own management in a comedy series. The organization in question is the BBC (British Broadcasting Corporation) and the comedy series is W1A, a mockumentary commissioned and screened by the BBC in 2014–17. Firstly, I discuss the ways in which W1A as a ‘text’ uses satirical devices to ridicule its own management as well as management fads and fashions. Secondly, I analyse W1A as the ‘intertext’, and consider the satirical representations of management in W1A against the backdrop of the BBC’s reputational scandals. I put forward an interpretation that the intertextual references in the comedy series break down the distance between ‘us’ and ‘the troubled organization’. I also argue that intertextual reading of the series (e.g. the analysis of allusions, cameo appearances, and parallels with the real BBC) throws an entirely different light on organizational wrongdoing, opening new possibilities for organizational reintegration and the repair of broken trust. Not only does the reading of W1A change when the audience considers what is happening in the real BBC, but also our interpretation of what is happening in the BBC may change when we watch W1A.

14.
This theoretical perspective paper interprets (un)known-(un)known risk quadrants as being formed from both abstract and concrete risk knowledge. It shows that these quadrants are useful for categorising risk forecasting challenges against the levels of abstract and concrete risk knowledge that are typically available, as well as for measuring perceived levels of abstract and concrete risk knowledge available for forecasting in psychometric research. Drawing on cybersecurity risk examples, a case is made for refocusing risk management forecasting efforts towards changing unknown-unknowns into known-knowns. We propose that this be achieved by developing the ‘boosted risk radar’ as organisational practice, where suitably ‘risk intelligent’ managers gather ‘risk intelligence information’, such that the ‘risk intelligent organisation’ can purposefully co-develop both abstract and concrete risk forecasting knowledge. We also illustrate what this can entail in simple practical terms within organisations.

15.
In this paper we apply results on regular economies to study equilibria and core in a non-differentiable framework. We show that the distributions of agents' characteristics of regular economies form a dense subset of all distributions of agents' characteristics. Therefore ‘most’ economies have equilibria which are contained in finitely many ε-balls. And the core of ‘most’ sufficiently large economies is contained in finitely many ε-balls centered at equilibrium allocations of these economies.

16.
In this paper, we examine the implications of imposing separability on the translog and three other flexible forms. Our results imply that the Berndt-Christensen ‘nonlinear’ test for weak separability tests not only for weak separability, but also imposes a restrictive structure on the macro and micro functions for all currently known ‘flexible’ functional forms. For example, testing for weak separability using the translog as an exact form is in fact equivalent to testing for a hybrid of strong (additive) separability and homothetic weak separability with Cobb-Douglas aggregator functions. Our results show that these ‘flexible’ functional forms are ‘separability-inflexible’. That is, they are not capable of providing a second-order approximation to an arbitrary weakly separable function in any neighbourhood of a given point.

17.
We consider nonparametric/semiparametric estimation and testing of econometric models with data dependent smoothing parameters. Most of the existing works on asymptotic distributions of a nonparametric/semiparametric estimator or a test statistic are based on some deterministic smoothing parameters, while in practice it is important to use data-driven methods to select the smoothing parameters. In this paper we give a simple sufficient condition that can be used to establish the first order asymptotic equivalence of a nonparametric estimator or a test statistic with stochastic smoothing parameters to those using deterministic smoothing parameters. We also allow for general weakly dependent data.

18.
Since the pioneering work by Granger (1969), many authors have proposed tests of causality between economic time series. Most of them are concerned only with “linear causality in mean”, that is, whether a series linearly affects the (conditional) mean of the other series. This is no doubt of primary interest, but dependence between series may be nonlinear, and/or may operate through more than the conditional mean; indeed, conditional heteroskedastic models have been widely studied recently. The purpose of this paper is to propose a nonparametric test for possibly nonlinear causality. Taking into account that dependence in higher-order moments is becoming an important issue, especially in financial time series, we also consider a test for causality up to the Kth conditional moment. Statistically, we can also view this test as a nonparametric omitted variable test in time series regression. A desirable property of the test is that it has nontrivial power against T^{1/2}-local alternatives, where T is the sample size. Also, we can form a test statistic accordingly if we have some knowledge on the alternative hypothesis. Furthermore, we show that the test statistic asymptotically includes most of the omitted variable test statistics as special cases. The null asymptotic distribution is not normal, but we can easily calculate the critical regions by simulation. Monte Carlo experiments show that the proposed test has good size and power properties.

19.
Summary This paper generalizes a result by Stadje (1984) by deriving conditions under which a general dependency structure for multivariate observations, given in Pavur (1987), yields a positive definite covariance structure. This general dependency structure allows the sample covariance matrix to be distributed as a constant times a Wishart random matrix. It is then demonstrated that the maximum squared-radii test and a test for equal population covariance matrices have null distributions which remain unchanged when the new general dependency structure, rather than the usual independence structure, is assumed for the vector observations. Moreover, under a general dependency structure for which the population covariance matrices are unequal, it is shown that the distribution of the test statistic for testing equal covariance matrices is identical to its distribution when the population covariance matrices are equal and the observations are independent.

20.
In nonparametric instrumental variable estimation, the function being estimated is the solution to an integral equation. A solution may not exist if, for example, the instrument is not valid. This paper discusses the problem of testing the null hypothesis that a solution exists against the alternative that there is no solution. We give necessary and sufficient conditions for existence of a solution and show that uniformly consistent testing of an unrestricted null hypothesis is not possible. Uniformly consistent testing is possible, however, if the null hypothesis is restricted by assuming that any solution to the integral equation is smooth. Many functions of interest in applied econometrics, including demand functions and Engel curves, are expected to be smooth. The paper presents a statistic for testing the null hypothesis that a smooth solution exists. The test is consistent uniformly over a large class of probability distributions of the observable random variables for which the integral equation has no smooth solution. The finite-sample performance of the test is illustrated through Monte Carlo experiments.
