Similar Literature
20 similar documents found (search time: 125 ms)
1.
The problem of testing non‐nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity–robust joint test against a combination of the artificial alternatives used for autocorrelation and non‐nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well‐behaved procedure and gives better control of finite sample significance levels than asymptotic critical values.

2.
This article proposes a Bayesian approach to examining money‐output causality within the context of a logistic smooth transition vector error correction model. Our empirical results provide substantial evidence that the postwar US money‐output relationship is nonlinear, with regime changes mainly governed by the output growth and price levels. Furthermore, we obtain strong support for nonlinear Granger causality from money to output, although there is also some evidence for models indicating that money is not Granger causal or long‐run causal to output.

3.
In this paper, we propose a fixed design wild bootstrap procedure to test parameter restrictions in vector autoregressive models, which is robust in cases of conditionally heteroskedastic error terms. The wild bootstrap does not require any parametric specification of the volatility process and takes contemporaneous error correlation implicitly into account. Via a Monte Carlo investigation, empirical size and power properties of the method are illustrated for the case of white noise under the null hypothesis. We compare the bootstrap approach with standard ordinary least squares (OLS)-based, weighted least squares (WLS) and quasi-maximum likelihood (QML) approaches. In terms of empirical size, the proposed method outperforms competing approaches and achieves size-adjusted power close to WLS or QML inference. A White correction of standard OLS inference is satisfactory only in large samples. We investigate the case of Granger causality in a bivariate system of inflation expectations in France and the United Kingdom. Our evidence suggests that the former are Granger causal for the latter while for the reverse relation Granger non-causality cannot be rejected.
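The fixed-design wild bootstrap idea above can be illustrated in a minimal univariate sketch: fit under the null, flip the restricted residuals with random signs while holding the regressors fixed, and compare the observed heteroskedasticity-robust statistic with its bootstrap distribution. All function names, the simple-regression setting, and the Rademacher sign choice are illustrative assumptions, not the article's exact multivariate procedure.

```python
import random, math

def ols_slope(x, y):
    """OLS intercept and slope for a simple regression y = a + b*x + u."""
    n = len(x)
    mx, my = sum(x)/n, sum(y)/n
    sxx = sum((xi - mx)**2 for xi in x)
    sxy = sum((xi - mx)*(yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b*mx, b

def hc_t_stat(x, y):
    """t-statistic for the slope with a White (HC0) standard error."""
    n = len(x)
    a, b = ols_slope(x, y)
    mx = sum(x)/n
    u = [yi - a - b*xi for xi, yi in zip(x, y)]
    sxx = sum((xi - mx)**2 for xi in x)
    var_b = sum(((xi - mx)*ui)**2 for xi, ui in zip(x, u)) / sxx**2
    return b / math.sqrt(var_b)

def wild_bootstrap_pvalue(x, y, B=499, seed=0):
    """Fixed-design wild bootstrap p-value for H0: slope = 0.
    Residuals are taken under the null (y regressed on a constant only)
    and multiplied by Rademacher draws; the regressors stay fixed."""
    rng = random.Random(seed)
    n = len(x)
    ybar = sum(y)/n
    u0 = [yi - ybar for yi in y]          # restricted (null) residuals
    t_obs = abs(hc_t_stat(x, y))
    exceed = 0
    for _ in range(B):
        ystar = [ybar + ui*(1 if rng.random() < 0.5 else -1) for ui in u0]
        if abs(hc_t_stat(x, ystar)) >= t_obs:
            exceed += 1
    return (exceed + 1) / (B + 1)
```

Because the volatility process is never modeled, the same code applies unchanged whether the errors are homoskedastic or conditionally heteroskedastic.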

4.
This study examines the causal relationship between interest rates and the exchange value of the dollar using Granger causality tests. Cointegration tests show that error correction models are not necessary in this case. The results suggest that the combination of short- and long-term U.S. interest rates, in nominal or real terms, Granger causes the exchange value of the dollar, and that the difference between nominal domestic and foreign rates does not Granger cause the exchange value of the dollar. This result supports the proposition that budget deficits contribute to trade deficits.

5.
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross‐equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared with a simulation‐based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal and stable error distributions. In the Gaussian case, finite‐sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi‐stage Monte Carlo test methods. For non‐Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The tests are applied to an asset pricing model with observable risk‐free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over 5‐year subperiods from 1926 to 1995.
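The simulation-based Monte Carlo test idea can be sketched in a univariate toy version: compute a skewness/kurtosis statistic on the residuals, then rank it against draws of the same statistic simulated under the hypothesized null distribution rather than against an asymptotic chi-squared approximation. The Jarque-Bera-style statistic and the standard-normal null below are illustrative assumptions; the article treats the multivariate, nuisance-parameter case.

```python
import random

def sk_ku_stat(resid):
    """Jarque-Bera-type combination of squared skewness and excess kurtosis."""
    n = len(resid)
    m = sum(resid)/n
    s2 = sum((r - m)**2 for r in resid)/n
    sk = (sum((r - m)**3 for r in resid)/n) / s2**1.5
    ku = (sum((r - m)**4 for r in resid)/n) / s2**2
    return n*(sk**2/6 + (ku - 3)**2/24)

def mc_pvalue(resid, n_sim=199, seed=0):
    """Monte Carlo p-value: rank of the observed statistic among statistics
    simulated from the hypothesized (here: standard normal) distribution."""
    rng = random.Random(seed)
    t_obs = sk_ku_stat(resid)
    n = len(resid)
    ge = sum(1 for _ in range(n_sim)
             if sk_ku_stat([rng.gauss(0, 1) for _ in range(n)]) >= t_obs)
    return (ge + 1)/(n_sim + 1)
```

With 199 simulations the p-value is exact in finite samples (its smallest attainable value is 1/200), which is the point of the Monte Carlo test construction.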

6.
Ordinary least squares estimation of an impulse‐indicator coefficient is inconsistent, but its variance can be consistently estimated. Although the ratio of the inconsistent estimator to its standard error has a t‐distribution, that test is inconsistent: one solution is to form an index of indicators. We provide Monte Carlo evidence that including a plethora of indicators need not distort model selection, permitting the use of many dummies in a general‐to‐specific framework. Although White's (1980) heteroskedasticity test is incorrectly sized in that context, we suggest an easy alteration. Finally, a possible modification to impulse ‘intercept corrections’ is considered.

7.
We provide an accessible introduction to graph‐theoretic methods for causal analysis. Building on the work of Swanson and Granger (Journal of the American Statistical Association, Vol. 92, pp. 357–367, 1997), and generalizing to a larger class of models, we show how to apply graph‐theoretic methods to selecting the causal order for a structural vector autoregression (SVAR). We evaluate the PC (causal search) algorithm in a Monte Carlo study. The PC algorithm uses tests of conditional independence to select among the possible causal orders – or at least to reduce the admissible causal orders to a narrow equivalence class. Our findings suggest that graph‐theoretic methods may prove to be a useful tool in the analysis of SVARs.

8.
In this paper, we propose a unified approach to generating standardized‐residuals‐based correlation tests for checking GARCH‐type models. This approach is valid in the presence of estimation uncertainty, is robust to various standardized error distributions, and is applicable to testing various types of misspecifications. By using this approach, we also propose a class of power‐transformed‐series (PTS) correlation tests that provides certain robustifications and power extensions to the Box–Pierce, McLeod–Li, Li–Mak, and Berkes–Horváth–Kokoszka tests in diagnosing GARCH‐type models. Our simulation and empirical example show that the PTS correlation tests outperform these existing autocorrelation tests in financial time series analysis. Copyright © 2008 John Wiley & Sons, Ltd.

9.
Two well‐established findings are apparent in the analyses of individual wage determination: cross‐section wage equations can account for less than half of the variance in earnings and there are large and persistent inter‐industry wage differentials. We explore these two empirical regularities using longitudinal data from the British Household Panel Survey (BHPS). We show that around 90% of the variation in earnings can be explained by observed and unobserved individual characteristics. However, small – but statistically significant – industry wage premia do remain, and there is also a role for a rich set of job and workplace controls.

10.
Most of the empirical applications of the stochastic volatility (SV) model are based on the assumption that the conditional distribution of returns, given the latent volatility process, is normal. In this paper, the SV model based on a conditional normal distribution is compared with SV specifications using conditional heavy‐tailed distributions, especially Student's t‐distribution and the generalized error distribution. To estimate the SV specifications, a simulated maximum likelihood approach is applied. The results based on daily data on exchange rates and stock returns reveal that the SV model with a conditional normal distribution does not adequately account for the two following empirical facts simultaneously: the leptokurtic distribution of the returns and the low but slowly decaying autocorrelation functions of the squared returns. It is shown that these empirical facts are more adequately captured by an SV model with a conditional heavy‐tailed distribution. It also turns out that the choice of the conditional distribution has systematic effects on the parameter estimates of the volatility process. Copyright © 2000 John Wiley & Sons, Ltd.

11.
Robust methods for instrumental variable inference have received considerable attention recently. Their analysis has raised a variety of problematic issues such as size/power trade‐offs resulting from weak or many instruments. We show that information reduction methods provide a useful and practical solution to this and related problems. Formally, we propose factor‐based modifications to three popular weak‐instrument‐robust statistics, and illustrate their validity asymptotically and in finite samples. Results are derived using asymptotic settings that are commonly used in both the factor and weak‐instrument literature. For the Anderson–Rubin statistic, we also provide analytical finite‐sample results that do not require any underlying factor structure. An illustrative Monte Carlo study reveals the following. Factor‐based tests control size regardless of instruments and factor quality. All factor‐based tests are systematically more powerful than standard counterparts. With informative instruments and in contrast to standard tests: (i) power of factor‐based tests is not affected by k even when large; and (ii) weak factor structure does not cost power. An empirical study on a New Keynesian macroeconomic model suggests that our factor‐based methods can bridge a number of gaps between structural and statistical modeling. Copyright © 2015 John Wiley & Sons, Ltd.

12.
Whether voluntary or mandatory in nature, most recent corporate governance codes of best practice assume that board structural independence, and the application by boards of outcome‐based incentive plans, are important boundary conditions for the enforcement of Chief Executive Officer (CEO) pay‐for‐firm‐performance; that is, for optimal contracting between owners and executive agents. We test this logic on a large Australian sample using a system Generalized Method of Moments (GMM) approach to dynamic panel data estimation. We find that Australian boards exhibiting best practice structural arrangements – those chaired by non‐executives and dominated by non‐executive directors at the full board and compensation committee levels – are no more adept at enforcing CEO pay‐for‐firm‐performance than are executive‐dominated boards. These findings suggest that policy makers' faith in incentive plans and the moderating influence of structural independence per se may be misplaced. Our findings also hold significant implications for corporate governance theory. Specifically, the findings lend further support to a contingency‐based understanding of board composition, reward choice and monitoring; an approach integrating the insights afforded by behavioural approaches to Agency Theory and by social‐cognitive and institutional understandings of director outlook, decision‐making and behaviour.

13.
The within‐group estimator (same as the least squares dummy variable estimator) of the dominant root in dynamic panel regression is known to be biased downwards. This article studies recursive mean adjustment (RMA) as a strategy to reduce this bias for AR(p) processes that may exhibit cross‐sectional dependence. Asymptotic properties for N, T → ∞ jointly are developed. When (log²T)(N/T) → ζ, where ζ is a non‐zero constant, the estimator exhibits nearly negligible inconsistency. Simulation experiments demonstrate that the RMA estimator performs well in terms of reducing bias, variance and mean square error both when error terms are cross‐sectionally independent and when they are not. RMA dominates comparable estimators when T is small and/or when the underlying process is persistent.

14.
Instrumental variable (IV) methods for regression are well established. More recently, methods have been developed for statistical inference when the instruments are weakly correlated with the endogenous regressor, so that estimators are biased and no longer asymptotically normally distributed. This paper extends such inference to the case where two separate samples are used to implement instrumental variables estimation. We also relax the restrictive assumptions of homoskedastic error structure and equal moments of exogenous covariates across two samples commonly employed in the two‐sample IV literature for strong IV inference. Monte Carlo experiments show good size properties of the proposed tests regardless of the strength of the instruments. We apply the proposed methods to two seminal empirical studies that adopt the two‐sample IV framework.

15.
Social and economic scientists are tempted to use emerging data sources like big data to compile information about finite populations as an alternative for traditional survey samples. These data sources generally cover an unknown part of the population of interest. Simply assuming that analyses made on these data are applicable to larger populations is wrong. The mere volume of data provides no guarantee for valid inference. Tackling this problem with methods originally developed for probability sampling is possible but shown here to be limited. A wider range of model‐based predictive inference methods proposed in the literature are reviewed and evaluated in a simulation study using real‐world data on annual mileages by vehicles. We propose to extend this predictive inference framework with machine learning methods for inference from samples that are generated through mechanisms other than random sampling from a target population. Describing economies and societies using sensor data, internet search data, social media and voluntary opt‐in panels is cost‐effective and timely compared with traditional surveys but requires an extended inference framework as proposed in this article.

16.
This paper considers a spatial panel data regression model with serial correlation on each spatial unit over time as well as spatial dependence between the spatial units at each point in time. In addition, the model allows for heterogeneity across the spatial units using random effects. The paper then derives several Lagrange multiplier tests for this panel data regression model including a joint test for serial correlation, spatial autocorrelation and random effects. These tests draw upon two strands of earlier work. The first is the LM tests for the spatial error correlation model discussed in Anselin and Bera [1998. Spatial dependence in linear regression models with an introduction to spatial econometrics. In: Ullah, A., Giles, D.E.A. (Eds.), Handbook of Applied Economic Statistics. Marcel Dekker, New York] and in the panel data context by Baltagi et al. [2003. Testing panel data regression models with spatial error correlation. Journal of Econometrics 117, 123–150]. The second is the LM tests for the error component panel data model with serial correlation derived by Baltagi and Li [1995. Testing AR(1) against MA(1) disturbances in an error component model. Journal of Econometrics 68, 133–151]. Hence, the joint LM test derived in this paper encompasses those derived in both strands of earlier works. In fact, in the context of our general model, the earlier LM tests become marginal LM tests that ignore either serial correlation over time or spatial error correlation. The paper then derives conditional LM and LR tests that do not ignore these correlations and contrast them with their marginal LM and LR counterparts. The small sample performance of these tests is investigated using Monte Carlo experiments. As expected, ignoring any correlation when it is significant can lead to misleading inference.

17.
In this article, a three‐regime multivariate threshold vector error correction model with a ‘band of inaction’ is formulated to examine uncovered interest rate parity (UIRP) and expectation hypothesis of the term structure (EHTS) of interest rates for Switzerland. Combining both UIRP and EHTS in a model that allows for nonlinearities, we investigate whether the Swiss advantage is disappearing with respect to Europe. Our results favour threshold cointegration and show that both hypotheses hold, at least in one of the three regimes of the process for Switzerland/Germany. The same is not true between Switzerland and the United States.

18.
The wild bootstrap is studied in the context of regression models with heteroskedastic disturbances. We show that, in one very specific case, perfect bootstrap inference is possible, and a substantial reduction in the error in the rejection probability of a bootstrap test is available much more generally. However, the version of the wild bootstrap with this desirable property lacks the skewness correction afforded by the currently most popular version of the wild bootstrap. Simulation experiments show that this does not prevent the preferred version from having the smallest error in rejection probability in small and medium-sized samples.
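The two wild bootstrap versions contrasted above correspond to the two standard auxiliary two-point distributions: the Rademacher law (no skewness correction) and Mammen's two-point law, whose third moment equals one and therefore preserves residual skewness. Their first three moments can be checked exactly from the definitions; the helper name below is my own.

```python
import math

def moments(points):
    """First three raw moments of a discrete distribution [(value, prob), ...]."""
    return tuple(sum(p*v**k for v, p in points) for k in (1, 2, 3))

SQRT5 = math.sqrt(5)

# Rademacher: +/-1 with equal probability -> moments (0, 1, 0).
rademacher = [(-1.0, 0.5), (1.0, 0.5)]

# Mammen's two-point distribution -> moments (0, 1, 1).
mammen = [(-(SQRT5 - 1)/2, (SQRT5 + 1)/(2*SQRT5)),
          ((SQRT5 + 1)/2, (SQRT5 - 1)/(2*SQRT5))]
```

Both laws have mean zero and unit variance, so either yields valid wild bootstrap errors; the choice only matters through the third moment, which is exactly the trade-off the simulations in the paper examine.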

19.
Consider a linear regression model and suppose that our aim is to find a confidence interval for a specified linear combination of the regression parameters. In practice, it is common to perform a Durbin–Watson pretest of the null hypothesis of zero first‐order autocorrelation of the random errors against the alternative hypothesis of positive first‐order autocorrelation. If this null hypothesis is accepted then the confidence interval centered on the ordinary least squares estimator is used; otherwise the confidence interval centered on the feasible generalized least squares estimator is used. For any given design matrix and parameter of interest, we compare the confidence interval resulting from this two‐stage procedure and the confidence interval that is always centered on the feasible generalized least squares estimator, as follows. First, we compare the coverage probability functions of these confidence intervals. Second, we compute the scaled expected length of the confidence interval resulting from the two‐stage procedure, where the scaling is with respect to the expected length of the confidence interval centered on the feasible generalized least squares estimator, with the same minimum coverage probability. These comparisons are used to choose the better confidence interval, prior to any examination of the observed response vector.

20.
Samples with overlapping observations are used for the study of uncovered interest rate parity, the predictability of long‐run stock returns and the credibility of exchange rate target zones. This paper quantifies the biases in parameter estimation and size distortions of hypothesis tests of overlapping linear and polynomial autoregressions, which have been used in target‐zone applications. We show that both estimation bias and size distortions of hypothesis tests are generally larger, if the amount of overlap is larger, the sample size is smaller, and the autoregressive root of the data‐generating process is closer to unity. In particular, the estimates are biased in a way that makes it more likely that the predictions of the Bertola–Svensson model will be supported. Size distortions of various tests also turn out to be substantial even when using a heteroskedasticity and autocorrelation‐consistent covariance matrix.
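The small-sample bias of an autoregressive estimate near unity is easy to reproduce by simulation: even without overlap, the OLS estimate of a persistent AR(1) root is biased toward zero, and overlap and shorter samples make this worse. The non-overlapping baseline below uses illustrative parameters of my own choosing.

```python
import random

def ols_ar1(y):
    """OLS estimate of rho in y_t = rho*y_{t-1} + e_t (no intercept)."""
    num = sum(y[t]*y[t-1] for t in range(1, len(y)))
    den = sum(y[t-1]**2 for t in range(1, len(y)))
    return num / den

def mean_ar1_estimate(rho=0.95, n=50, reps=500, seed=1):
    """Average OLS estimate of rho across many simulated short samples;
    the gap below rho is the downward small-sample bias."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        y = [rng.gauss(0, 1)]
        for _ in range(n - 1):
            y.append(rho*y[-1] + rng.gauss(0, 1))
        total += ols_ar1(y)
    return total / reps
```

With rho = 0.95 and n = 50 the average estimate falls visibly short of the true root, consistent with the direction of bias the paper quantifies for the overlapping case.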
