Similar articles (20 results)
1.
We produce Monte Carlo evidence on the size and power of the RESET, a heteroscedasticity test, and a test for autocorrelation applied to realistic distributed-lag models. We find that the autocorrelation test has the correct size and high power to detect not only autocorrelation (given a correct model), but also the erroneous omission of several lags of an explanatory variable, whereas the RESET and heteroscedasticity tests are oversized in the presence of positive disturbance autocorrelation, especially when the regressors are also positively autocorrelated, and have no power to detect such misspecification errors. In large samples, size distortion may be avoided by using autocorrelation-robust methods.
Athanassios Stavrakoudis
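As a rough illustration of this kind of size experiment (not the authors' code; the sample size, AR(1) parameters and lag structure below are illustrative assumptions), the following Python sketch simulates a distributed-lag model with a positively autocorrelated regressor and disturbance, and records 5%-level rejection rates of the RESET, Breusch-Pagan and Breusch-Godfrey tests from statsmodels.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, acorr_breusch_godfrey
from statsmodels.stats.outliers_influence import reset_ramsey

rng = np.random.default_rng(0)
T, reps, rho_u, rho_x = 100, 1000, 0.8, 0.8      # assumed sample size and AR(1) parameters
reject = {"RESET": 0, "BP": 0, "BG": 0}

for _ in range(reps):
    # AR(1) regressor and AR(1) disturbance, both positively autocorrelated
    x = np.zeros(T); u = np.zeros(T)
    for t in range(1, T):
        x[t] = rho_x * x[t - 1] + rng.standard_normal()
        u[t] = rho_u * u[t - 1] + rng.standard_normal()
    y = 1.0 + 0.5 * x + 0.3 * np.roll(x, 1) + u   # distributed-lag DGP
    X = sm.add_constant(np.column_stack([x, np.roll(x, 1)]))
    res = sm.OLS(y[1:], X[1:]).fit()              # drop the first observation (np.roll wraps around)

    if reset_ramsey(res, degree=3).pvalue < 0.05:                 # RESET
        reject["RESET"] += 1
    if het_breuschpagan(res.resid, res.model.exog)[1] < 0.05:     # Breusch-Pagan LM p-value
        reject["BP"] += 1
    if acorr_breusch_godfrey(res, nlags=4)[1] < 0.05:             # Breusch-Godfrey LM p-value
        reject["BG"] += 1

print({k: v / reps for k, v in reject.items()})   # empirical rejection rates at the nominal 5% level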

2.
The dynamic CUSUM test for structural change proposed by Krämer, Ploberger and Alt (1988) is investigated when the errors are serially correlated in a linear dynamic model. We show that the dynamic CUSUM test can be modified to allow for serial correlation in the disturbance using the same procedure as in Kao and Ross (1995), and that the modified dynamic CUSUM test retains its asymptotic significance levels. Monte Carlo results suggest that the empirical size of the dynamic CUSUM test is highly distorted, while the empirical size of the modified dynamic CUSUM test is fairly robust to changes in the degree of autocorrelation. We also find that the power of the modified test essentially depends on the angle between the mean regressors and the structural shift. First version received: April 1997/Final version received: January 1998

3.
Following recent work of Franses, Hylleberg and Lee (FHL), this paper analyses the consequences of fitting a deterministic seasonal model to a quarterly time series which can be (at least approximately) described by a seasonal unit root(s) model. Besides the distribution of the coefficient of determination, the empirical distributions of two commonly used statistics are also investigated through Monte Carlo experiments for small, moderately large and large samples. FHL's work is also extended by allowing for residual autocorrelation corrections. The main conclusion that emerges from the results is that one should not try to measure the importance of deterministic seasonality, nor test for its presence, in the context of such (static) regression models, even when using some form of residual autocorrelation correction. A simple empirical application is provided to illustrate our results. First version received: July 1997/Final version received: July 1998

4.
We demonstrate that t ratios (the F statistic) for I(1) regressors in a model with an I(0) dependent variable will generally be oversized. This indicates that spurious significance occurs in a situation where it was not previously identified. We also compare the asymptotic rejection rates of t ratios for various combinations of I(1) and I(0) variables in the two-variable linear regression model. These rejection rates systematically increase with the degree of autocorrelation, yielding spurious significance, when both variables are either positively or negatively autocorrelated. In contrast, when one variable is negatively autocorrelated and the other is positively autocorrelated, the rejection rates systematically fall and are undersized.
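A minimal simulation sketch of the over-rejection described above, assuming an AR(1) coefficient of 0.8 for the I(0) dependent variable and an unrelated pure random walk as the I(1) regressor (illustrative values, not the paper's design):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, reps, rho_y = 300, 1000, 0.8                  # illustrative sample size and AR(1) coefficient
rejections = 0

for _ in range(reps):
    x = np.cumsum(rng.standard_normal(T))        # I(1) regressor: a random walk
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho_y * y[t - 1] + rng.standard_normal()   # I(0), positively autocorrelated, unrelated to x
    X = np.column_stack([np.ones(T), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (T - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t_ratio = beta[1] / se
    rejections += abs(t_ratio) > stats.t.ppf(0.975, T - 2)

print("empirical rejection rate:", rejections / reps)    # typically well above the nominal 0.05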

5.
This paper reexamines recent results on the predictability of nominal exchange rate returns by means of fundamental models. Using a monthly sample of the post-Bretton Woods period we show that the in-sample fit between long-horizon exchange rate returns and various models is not significant if we correct for the persistence that is caused by overlapping data and spurious regression phenomena. The long-horizon out-of-sample predictive power of the fundamental exchange rate models is found to be very weak. This is especially the case when we conduct the out-of-sample forecasting tests for a longer time span than that of earlier papers. We show that this failure in forecasting performance, resulting from extending the time span, is due to the absence of cointegration between exchange rates and structural exchange rate models. First version received: September 1997/Final version received: November 1998

6.
Empirical factor demand analysis is a topic in which a choice must be made among several competing non-nested functional forms. Each of the commonly used factor demand systems, such as Translog, Generalized Leontief, Quadratic, and Generalized McFadden, exhibits statistical inadequacy when tested for the absence of residual autocorrelation, homoskedasticity and normality. This does not necessarily imply that the whole system is invalid, especially if misspecification affects only a subset of the equations forming the entire system. Since there is no theoretical guidance on how to select the model which is most able to capture the relevant features of the data, formal testing procedures can be useful. In the literature, paired and joint univariate non-nested tests (e.g. Davidson-MacKinnon's J and P tests, the Bera-McAleer test and the Barten-McAleer test) have been discussed at length, whereas virtually no attention has been paid to multivariate non-nested tests. In this paper we show how multivariate non-nested tests can be derived from their univariate counterparts, and we apply these tests to compare alternative factor demand systems. Since the outcome of a non-nested test is likely to be influenced by the type of misspecification affecting the competing models, we investigate the empirical performance of a multivariate non-nested test using new Monte Carlo experiments. The competing models are compared indirectly via a statistically adequate model which is considered as if it were the DGP. Under such circumstances, the distribution of the non-nested test of an incorrect null, when it is evaluated at the DGP, tends to be closer to the distribution of the test under the correct null, at least in small samples. A non-nested test is expected to select the model which is closest to the DGP. Moreover, we investigate the empirical behaviour of a non-nested test when the DGP has, in turn, autoregressive, heteroskedastic and non-normal errors. Finally, we provide some suggestions for the applied researcher. First version received: November 1999/Final version received: May 2001

7.
Researchers have become increasingly interested in estimating mixtures of stochastic frontiers. Mester (1993), Caudill (1993), and Polachek and Yoon (1987), for example, estimate stochastic frontier models for different regimes, assuming sample separation information is given. Building on earlier work by Lee and Porter (1984), Douglas, Conway, and Ferrier (1995) estimate a stochastic frontier switching regression model in the presence of noisy sample separation information. The purpose of this paper is to extend earlier work by estimating a mixture of stochastic frontiers assuming no sample separation information. This case is more likely to occur in practice than even noisy sample separation information. In order to estimate a mixture of stochastic frontiers with no sample separation information, an EM algorithm to obtain maximum likelihood estimates is developed. The algorithm is used to estimate a mixture of stochastic (cost) frontiers using data on U.S. savings and loans for the years 1986, 1987, and 1988. Statistical evidence is found supporting the existence of a mixture of stochastic frontiers. First version received: 3/13/01 / Final version received: 6/17/02. I am grateful to Ram Acharya, Janice Caudill, and especially James R. Barth for several helpful comments on an earlier version of the paper. During the revision process I benefitted greatly from the suggestions of the Associate Editor and three anonymous referees.
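The EM idea can be illustrated with a deliberately simplified stand-in: a two-component mixture of linear regressions estimated with no sample separation information, where the E-step computes posterior regime probabilities and the M-step runs weighted least squares. This is not the paper's mixture-of-stochastic-frontiers likelihood (which has a composed normal/half-normal error); all data, starting values and parameters below are hypothetical.

import numpy as np
from scipy.stats import norm

def em_mixture_regression(y, X, n_iter=200, seed=0):
    """EM for a two-component mixture of linear regressions (simplified illustration)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    beta = rng.standard_normal((2, k))            # crude random initialisation
    sigma = np.array([y.std(), y.std()])
    pi1 = 0.5
    for _ in range(n_iter):
        # E-step: posterior probability that each observation comes from regime 1
        d1 = pi1 * norm.pdf(y, X @ beta[0], sigma[0])
        d2 = (1 - pi1) * norm.pdf(y, X @ beta[1], sigma[1])
        tau = d1 / (d1 + d2 + 1e-300)
        # M-step: weighted least squares per regime, then update variances and the mixing weight
        for j, w in enumerate((tau, 1 - tau)):
            beta[j] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            resid = y - X @ beta[j]
            sigma[j] = np.sqrt((w * resid**2).sum() / w.sum())
        pi1 = tau.mean()
    return beta, sigma, pi1, tau

# hypothetical usage on simulated data with two regimes and no separation information
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
regime = rng.random(n) < 0.4
y = np.where(regime, X @ [1.0, 2.0], X @ [3.0, -1.0]) + 0.5 * rng.standard_normal(n)
beta_hat, sigma_hat, pi_hat, tau_hat = em_mixture_regression(y, X)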

8.
The present paper uses Canadian data for the period 1947–1972 and three commodity groups to examine the empirical importance of restrictions imposed by autocorrelated disturbances on the static linear expenditure system, LES. For comparison a simple habit persistence model is also estimated. Results of applying likelihood ratio tests indicate that autocorrelation is present in the data, that a simple habit persistence hypothesis on the structure is implied, and that the restrictions imposed by the form of the utility function and maximization problem are inconsistent with the data, whether or not the errors are adjusted for autocorrelation. Despite these differences, the estimated price and income elasticities remain fairly constant across the various specifications of the LES that were considered.

9.
We show that when instruments are nearly exogenous, the two-stage least squares t-statistic unpredictably over-rejects or under-rejects the null hypothesis that the endogenous regressor is insignificant, and that the Anderson–Rubin test over-rejects the null. We prove that in the limit these tests are no longer nuisance-parameter free.
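A stylised sketch of this size problem (our own design, not the paper's asymptotic analysis): the instrument enters the structural error with a local-to-zero coefficient c/sqrt(n), the true coefficient on the endogenous regressor is zero, and the just-identified 2SLS t-test is evaluated at the nominal 5% level for a few assumed values of c.

import numpy as np
from scipy import stats

def rejection_rate(c, n=500, reps=2000, pi=0.5, rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    rej = 0
    for _ in range(reps):
        z = rng.standard_normal(n)
        e = rng.standard_normal(n)
        v = rho * e + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # correlated errors: endogeneity
        u = e + (c / np.sqrt(n)) * z        # "nearly exogenous" instrument: E[z*u] shrinks with n
        x = pi * z + v                      # first stage
        y = 0.0 * x + u                     # true structural coefficient is zero
        beta_iv = (z @ y) / (z @ x)         # just-identified IV / 2SLS estimator
        resid = y - x * beta_iv
        se = np.sqrt((resid @ resid / n) * (z @ z) / (z @ x) ** 2)
        rej += abs(beta_iv / se) > stats.norm.ppf(0.975)
    return rej / reps

for c in (-2.0, 0.0, 2.0):                  # illustrative magnitudes of the exogeneity violation
    print(c, rejection_rate(c))             # rejection rates drift away from 0.05 once c != 0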

10.
Testing for PPP: Should we use panel methods?
A common finding in the empirical literature on the validity of purchasing power parity (PPP) is that it holds when tested for in panel data, but not in univariate (i.e. country-specific) analysis. The usual explanation for this mismatch is that panel tests for unit roots are more powerful than their univariate counterparts. In this paper we suggest an alternative explanation. Existing panel methods assume that cross-unit cointegrating relationships, that would tie the units of the panel together, are not present. Using simulations, we show that if this important underlying assumption of panel unit root tests is violated, the empirical size of the tests is substantially higher than the nominal level, and the null hypothesis of a unit root is rejected too often even when it is true. More generally, this finding warns against the automatic use of panel methods for testing for unit roots in macroeconomic time series. First version received: November 2001/Final version received: October 2003
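The mechanism can be illustrated with a small simulation in which every unit of the panel shares a common stochastic trend, so cross-unit cointegrating relationships are present by construction. A Fisher-type (Maddala-Wu) combination of individual ADF p-values is used here as a simple stand-in for the panel unit root tests discussed in the paper; all parameter values are illustrative.

import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
N, T, reps = 10, 200, 200
reject = 0
for _ in range(reps):
    common = np.cumsum(rng.standard_normal(T))       # shared stochastic trend
    pvals = []
    for i in range(N):
        yi = common + rng.standard_normal(T)          # each unit is I(1); units are pairwise cointegrated
        pvals.append(adfuller(yi, autolag="AIC")[1])
    fisher = -2.0 * np.sum(np.log(pvals))             # ~ chi2(2N) only under cross-sectional independence
    reject += fisher > stats.chi2.ppf(0.95, 2 * N)
print("empirical size:", reject / reps)               # typically well above the nominal 0.05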

11.
We examine impacts of different types of environmental innovations on firm profits. Following Porter’s (Sci Am 264(4):168, 1991) hypothesis that environmental regulation can improve firms’ competitiveness, we distinguish between regulation-induced and voluntary environmental innovations. We find that innovations which do not improve firms’ resource efficiency do not provide positive returns to profitability. However, innovations that increase a firm’s resource efficiency in terms of material or energy consumption per unit of output have a positive impact on profitability. This positive result holds for both regulation-induced and voluntary innovations, although the effect is greater for regulation-driven innovation. We conclude that the Porter hypothesis does not hold in general for its “strong” version, but depends on the type of environmental innovation. Our findings rest on firm-level data from the German part of the Community Innovation Survey 2008 (CIS 2008).

12.
This paper provides closed-form formulae for computing the asymptotic covariance matrices of the estimated autocovariance and autocorrelation functions of stable VAR models by means of the delta method. These covariance matrices can be used to construct asymptotic confidence bands for the estimated autocovariance and autocorrelation functions to assess the underlying estimation uncertainty. The usefulness of the formulae for empirical work is illustrated by an application to inflation and output gap data for the U.S. economy indicating the existence of a significant short-run Phillips-curve tradeoff. First version received: November 2002/Final version received: September 2003
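The generic delta-method step behind such closed-form formulae can be written as follows (our notation, for intuition only; the paper's exact expressions for the VAR case are not reproduced here). If the vector of sample autocovariances satisfies
\[
\sqrt{T}\,(\hat{\gamma} - \gamma) \xrightarrow{d} N(0, \Sigma_{\gamma}),
\]
and the autocorrelations are a differentiable transformation $\rho = g(\gamma)$ (for instance $\rho_h = \gamma_h/\gamma_0$ in the univariate case), then
\[
\sqrt{T}\,(\hat{\rho} - \rho) \xrightarrow{d} N\bigl(0,\ G\,\Sigma_{\gamma}\,G'\bigr),
\qquad
G = \frac{\partial g(\gamma)}{\partial \gamma'},
\]
so a pointwise $(1-\alpha)$ asymptotic confidence band for $\rho_h$ is $\hat{\rho}_h \pm z_{1-\alpha/2}\,\sqrt{[\widehat{G}\widehat{\Sigma}_{\gamma}\widehat{G}']_{hh}/T}$.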

13.
This paper models the main stock index of the Vienna Stock Exchange with daily data from 1986 to 1992. We find that returns are nonnormal and show linear and nonlinear dependence. On that basis we compare the fit of alternative specifications of Generalized Autoregressive Conditional Heteroscedasticity (GARCH) to the Markov-Switching approach. The models are evaluated with diagnostic tests on the standardized residuals. We consider evidence for deterministic structures and for infinite variance. Our main result is that a parsimonious model from the GARCH class can generate the statistical properties of daily returns. The behavior of the two types of models with respect to temporal aggregation is found to differ significantly. First version received: January 1996/Final version received: December 1997
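A minimal sketch of fitting a parsimonious GARCH(1,1) with Student-t errors and extracting standardized residuals for diagnostics, using simulated placeholder returns and the Python arch package (an assumed tool for illustration, not the software used in the paper):

import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
returns = rng.standard_t(df=5, size=1500)        # placeholder for daily index returns (in percent)

am = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())

# standardized residuals for the kind of diagnostic tests mentioned in the abstract
std_resid = res.resid / res.conditional_volatility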

14.
The vast majority of randomized experiments in economics rely on a single baseline and single follow-up survey. While such a design is suitable for the study of highly autocorrelated and relatively precisely measured outcomes in the health and education domains, it is unlikely to be optimal for measuring noisy and relatively less autocorrelated outcomes such as business profits, and household incomes and expenditures. Taking multiple measurements of such outcomes at relatively short intervals allows one to average out noise, increasing power. When the outcomes have low autocorrelation and the budget is limited, it can make sense to do no baseline at all. Moreover, I show how for such outcomes, more power can be achieved with multiple follow-ups than by allocating the same total sample size over a single follow-up and baseline. I also highlight the large gains in power from ANCOVA analysis rather than difference-in-differences analysis when autocorrelations are low.
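Under a simple equicorrelated model with one baseline, one follow-up, per-arm sample size $n$, outcome variance $\sigma^2$ and baseline/follow-up autocorrelation $\rho$, the textbook variance comparison behind these power claims is (our sketch, not the paper's general multi-round formulae):
\[
\operatorname{Var}(\hat{\tau}_{\mathrm{POST}}) = \frac{2\sigma^2}{n},
\qquad
\operatorname{Var}(\hat{\tau}_{\mathrm{DiD}}) = \frac{4\sigma^2(1-\rho)}{n},
\qquad
\operatorname{Var}(\hat{\tau}_{\mathrm{ANCOVA}}) \approx \frac{2\sigma^2(1-\rho^2)}{n},
\]
so ANCOVA is never less precise than difference-in-differences because $1-\rho^2 = (1-\rho)(1+\rho) \le 2(1-\rho)$, with the largest gap when $\rho$ is small; in that case conditioning on the baseline also adds little relative to a post-only comparison.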

15.
In this paper, we consider anti-dumping (AD) duties as a tool to facilitate collusion between a domestic and a foreign firm in an infinitely repeated differentiated Bertrand game, where prices are publicly observable and each firm receives a privately observed i.i.d. cost shock in each period. We consider second-best scenarios, where market-share or production arrangements with side payments are not allowed. We show that there exist equilibrium-path reciprocal ADs. The collusive (trigger) price is distorted downward compared with the complete information benchmark as a trade-off between diminishing the incentive to deviate and ensuring off-schedule deviation gains when private cost shocks are highly favourable. The model differs from Green and Porter (1984) and Rotemberg and Saloner (1986) in that it is the private cost shocks, as opposed to public demand shocks, that necessitate modifications of collusion. In conclusion, AD policy may encourage collusion, and therefore, unless the source of market imperfection is carefully examined, laissez faire might be a better choice.

16.
It is acknowledged that purchasing power parity (PPP) fails in empirical tests. The position adopted here is that real factors are an omitted variable from the PPP relationship and are the cause of divergences from PPP. We model the real exchange rate as determined by supply and demand shift factors (as in Stockman, 1987, and Neary, 1988). We then empirically estimate a real exchange rate equation and use the fitted value as a generated regressor in tests of PPP. We demonstrate that when changes in the real exchange rate are incorporated into the PPP relationship, the performance of PPP improves.

17.
This paper tests a version of the rational expectations hypothesis using ‘fixed-event’ inflation forecasts for the UK. Fixed-event forecasts consist of a panel of forecasts for a set of outturns of a series at varying horizons prior to each outturn. The forecasts are the predictions of fund managers surveyed by Merrill Lynch. Fixed-event forecasts allow tests for whether expectations are unbiased in a similar fashion to the rest of the literature. But they also permit the conduct of particular tests of forecast efficiency (whether the forecasts make the best use of available information) that are not possible with rolling-event data. We find evidence of a positive bias in inflation expectations. Evidence for inefficiency is much less clear cut. First version received: June 2002/Final version received: November 2003. We would like to thank two anonymous referees and an editor for comments that have significantly improved the paper. The views expressed in this paper are those of the authors and do not necessarily reflect those of the Bank of England.

18.
Profitability in UK manufacturing collapsed in the early 1980s, but then recovered to 1970s levels. To account for the changes in profits we propose a series of extensions of the widely-used Cowling and Waterson (1976) model. Our extensions incorporate demand shocks, varying competition and collusion, and the role of unions. The resulting model encompasses Cowling/Waterson, the Kreps and Scheinkman (1983) varying competition model and the Green and Porter (1984) and Rotemberg and Saloner (1986) varying collusion models. Using a panel of 53 UK industries over 1973–1986, we estimate the encompassing model by generalised method of moments/instrumental variables. Our major findings are: (a) there is no substantial contribution to changes in aggregate profitability from the batting average effect of movements between sectors; (b) the collapse of profits in the early 1980s was mainly driven by the collapse in demand; (c) the fall in union density in the 1980s has increased profitability despite a fall in concentration. We also find tentative evidence suggesting that collusion is pro-cyclical as in Green and Porter (1984). For very useful comments I thank Josef Falkinger, Paul Geroski and David Audretsch. I thank Chris Martin for letting me use much of our joint work and Ian Small for the data.

19.
The modified logit model (Amemiya and Nold, 1975) is generalised to the case where the error term is autocorrelated. The asymptotic distribution (as n → ∞ and T → ∞) of a feasible GLS estimator of β is derived. Tests of linear restrictions on β and the significance of ρ are presented. The results of the applied work suggest that the factors which explain the pricing behaviour of manufacturing firms, as reported in the tendency survey conducted by the Australian Chamber of Commerce and Industry and the Westpac Banking Corporation, include historical inflation rates of up to 7 quarters and capacity utilisation. First version received: March 2001/Final version received: July 2002. The first draft of this paper was written while the author was on study leave at the Department of Econometrics, University of Sydney, Australia.
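A rough sketch of the feasible GLS step on a grouped ('modified') logit with AR(1) errors, using simulated placeholder data; it omits the Amemiya-Nold binomial heteroskedasticity weights and the paper's test statistics, and the regressors (lagged inflation, capacity utilisation) are hypothetical stand-ins.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 120
infl = rng.normal(0.7, 0.3, T)                   # placeholder quarterly inflation
caput = rng.normal(80.0, 5.0, T)                 # placeholder capacity utilisation
X = sm.add_constant(np.column_stack([infl, caput]))

# simulate the proportion of firms reporting a price rise from a latent index with AR(1) noise
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + rng.normal(scale=0.2)
index = -3.0 + 0.8 * infl + 0.03 * caput + u
p = 1.0 / (1.0 + np.exp(-index))

logit_p = np.log(p / (1 - p))                    # log-odds of the observed proportions
fgls = sm.GLSAR(logit_p, X, rho=1)               # linear model with AR(1) disturbances
res = fgls.iterative_fit(maxiter=10)             # alternate between estimating rho and beta
print(res.params, fgls.rho)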

20.
In previous studies, measures of technical inefficiency effects derived from stochastic production frontiers have been estimated from residuals which are sensitive to specification errors. This study corrects for this inaccuracy by extending the doubly heteroscedastic stochastic cost frontier suggested by Hadri (1999) to the model for technical inefficiency effects. This model is a stochastic frontier production function for panel data as proposed by Battese and Coelli (1995). The study uses, for illustration of the techniques, data on 101 mainly cereal farms in England. We find that the correction for heteroscedasticity is supported by the data. Both point estimates and confidence intervals for technical efficiencies are provided. The confidence intervals are constructed by extending the “Battese-Coelli” method reported by Horrace and Schmidt (1996) by allowing the technical inefficiency to be time varying and the disturbance terms to be heteroscedastic. The confidence intervals reveal the precision of technical efficiency estimates and show the deficiencies of making inferences based exclusively on point estimates. First version received: March 2000/Final version received: October 2001. The authors are grateful to the Economic and Social Research Council for access to their Data Archive which has provided the data for this research. We are indebted to Badi Baltagi and two anonymous referees for their helpful comments and suggestions. The usual caveat applies.
