Similar Literature
20 similar documents found.
1.
This paper considers methods for estimating the slope coefficients in large panel data models that are robust to the presence of various forms of error cross-section dependence. It introduces a general framework where error cross-section dependence may arise because of unobserved common effects and/or error spill-over effects due to spatial or other forms of local dependencies. Initially, the paper focuses on a panel regression model where the idiosyncratic errors are spatially dependent and possibly serially correlated, derives the asymptotic distributions of the mean group and pooled estimators under heterogeneous and homogeneous slope coefficients, and proposes non-parametric estimators of their variance matrices. The paper then considers the more general case of a panel data model with a multifactor error structure and spatial error correlations. Under this framework, the Common Correlated Effects (CCE) estimator, recently advanced by Pesaran (2006), continues to yield estimates of the slope coefficients that are consistent and asymptotically normal. Small sample properties of the estimators under various patterns of cross-section dependence, including spatial forms, are investigated by Monte Carlo experiments. Results show that the CCE approach works well in the presence of weak and/or strong cross-sectionally correlated errors.
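For concreteness, the CCE mean group idea (augment each unit's regression with cross-sectional averages of the dependent variable and the regressors, then average the unit-specific slopes) can be sketched roughly as follows; the balanced-panel layout and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cce_mean_group(y, X):
    """Common Correlated Effects mean group estimator (illustrative sketch).

    y : (N, T) array of the dependent variable
    X : (N, T, k) array of regressors
    Returns the average of the unit-specific slope estimates.
    """
    N, T, k = X.shape
    y_bar = y.mean(axis=0)                 # (T,) cross-sectional average of y
    X_bar = X.mean(axis=0)                 # (T, k) cross-sectional averages of the regressors
    betas = np.empty((N, k))
    for i in range(N):
        # augment unit i's regression with an intercept and the cross-sectional averages
        Z = np.column_stack([X[i], np.ones(T), y_bar, X_bar])
        coef, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
        betas[i] = coef[:k]                # keep only the slopes on X
    return betas.mean(axis=0)              # CCE mean group estimate
```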

2.
A new empirical reduced-form model for credit rating transitions is introduced. It is a parametric intensity-based duration model with multiple states, driven by exogenous covariates and latent dynamic factors. The model has a generalized semi-Markov structure designed to accommodate many of the stylized facts of credit rating migrations. Parameter estimation is based on Monte Carlo maximum likelihood methods, the details of which are discussed in this paper. A simulation experiment is carried out to show the effectiveness of the estimation procedure. An empirical application is presented for transitions in a seven-grade rating system. The model includes a common dynamic component that can be interpreted as the credit cycle. Asymmetric effects of this cycle across rating grades and additional semi-Markov dynamics are found to be statistically significant. Finally, we investigate whether the common factor model suffices to capture systematic risk in rating transition data by introducing multiple factors into the model.

3.
This paper proposes a new testing procedure for detecting error cross-section dependence after estimating a linear dynamic panel data model with regressors using the generalised method of moments (GMM). The test is valid when the cross-sectional dimension of the panel is large relative to the time series dimension. Importantly, our approach allows one to examine whether any error cross-section dependence remains after including time dummies (or after transforming the data in terms of deviations from time-specific averages), which will be the case under heterogeneous error cross-section dependence. Finite sample simulation-based results suggest that our tests perform well, particularly the version based on the [Blundell, R., Bond, S., 1998. Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics 87, 115–143] system GMM estimator. In addition, it is shown that the system GMM estimator, based only on partial instruments consisting of the regressors, can be a reliable alternative to the standard GMM estimators under heterogeneous error cross-section dependence. The proposed tests are applied to employment equations using UK firm data, and the results show little evidence of heterogeneous error cross-section dependence.
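The abstract does not spell out the statistic; as a point of reference only, a Pesaran-type CD statistic computed from estimated panel residuals, to which tests of this kind are related, might be sketched as below (balanced panel assumed; this is not the paper's exact test).

```python
import numpy as np

def cd_statistic(resid):
    """Pesaran-type CD statistic from an (N, T) array of residuals (sketch)."""
    N, T = resid.shape
    R = np.corrcoef(resid)                      # N x N pairwise residual correlations
    iu = np.triu_indices(N, k=1)                # all pairs i < j
    rho_sum = R[iu].sum()
    # approximately N(0, 1) under cross-sectional independence
    return np.sqrt(2.0 * T / (N * (N - 1))) * rho_sum
```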

4.
Equilibrium business cycle models typically have fewer shocks than variables. As pointed out by Altug (1989, International Economic Review 30(4), 889–920) and Sargent (1989, Journal of Political Economy 97(2), 251–287), if variables are measured with error, this characteristic implies that the model solution for measured variables has a factor structure. This paper compares estimation performance for the impulse response coefficients based on a VAR approximation to this class of models with that of an estimation method that explicitly takes into account the restrictions implied by the factor structure. Bias and mean-squared error for both factor- and VAR-based estimates of impulse response functions are quantified using, as the data-generating process, a calibrated standard equilibrium business cycle model. We show that, at short horizons, VAR estimates of impulse response functions are less accurate than factor estimates, while the two methods perform similarly at medium and long horizons.

5.
The paper introduces a novel approach to testing for unit roots in panels, which takes a new contour drawn along the line given by the equi-squared-sum instead of the traditional one given by the equi-sample-size. We show that the distributions of the unit root tests are asymptotically normal along the new contour under both the null and the local-to-unity alternatives. Subsequently, we demonstrate that this startling finding may be exploited constructively to devise tools and methodologies for effective inference in panel unit root models. Simulations show that our approach works quite well in finite samples.

6.
This paper shows consistency of a two-step estimation of the factors in a dynamic approximate factor model when the panel of time series is large (n large). In the first step, the parameters of the model are estimated by OLS on principal components. In the second step, the factors are estimated via the Kalman smoother. The analysis develops the theory for the estimator considered in Giannone et al. (2004) and Giannone et al. (2008) and for the many empirical papers using this framework for nowcasting.
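A rough sketch of the first step under the usual principal-components normalization is given below; the second step would then run a Kalman smoother on the state space implied by these estimates. The dimensions, normalizations and the VAR(1) transition are illustrative choices, not necessarily those of the cited papers.

```python
import numpy as np

def first_step_pca(X, r):
    """Step 1 of the two-step procedure: estimate factors, loadings and
    factor VAR parameters by principal components and OLS (sketch).

    X : (T, n) standardized panel of time series
    r : number of static factors
    """
    T, n = X.shape
    S = X.T @ X / T                                # sample covariance of the panel
    eigval, eigvec = np.linalg.eigh(S)
    Lam = eigvec[:, ::-1][:, :r] * np.sqrt(n)      # loadings (n, r), PC normalization
    F = X @ Lam / n                                # estimated factors (T, r)
    # OLS for the factor VAR(1) used as the transition equation
    A, *_ = np.linalg.lstsq(F[:-1], F[1:], rcond=None)
    resid = F[1:] - F[:-1] @ A
    Q = resid.T @ resid / (T - 1)                  # transition innovation covariance
    E = X - F @ Lam.T                              # idiosyncratic component
    R = np.diag(np.diag(E.T @ E / T))              # diagonal measurement covariance
    return F, Lam, A.T, Q, R                       # inputs for the Kalman smoother step
```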

7.
This paper proposes a nonlinear panel data model which can endogenously generate both ‘weak’ and ‘strong’ cross-sectional dependence. The model’s distinguishing characteristic is that a given agent’s behaviour is influenced by an aggregation of the views or actions of those around them. The model allows for considerable flexibility in terms of the genesis of this herding or clustering type of behaviour. At an econometric level, the model is shown to nest various extant dynamic panel data models. These include panel AR models, spatial models, which accommodate weak dependence only, and panel models where cross-sectional averages or factors exogenously generate strong, but not weak, cross-sectional dependence. An important implication is that the appropriate model for the aggregate series becomes intrinsically nonlinear, due to the clustering behaviour, and thus requires the disaggregates to be considered simultaneously with the aggregate. We provide the associated asymptotic theory for estimation and inference. This is supplemented with Monte Carlo studies and two empirical applications which indicate the utility of our proposed model as a vehicle for modelling different types of cross-sectional dependence.

8.
We propose and illustrate a Markov-switching multifractal duration (MSMD) model for analysis of inter-trade durations in financial markets. We establish several of its key properties with emphasis on high persistence and long memory. Empirical exploration suggests MSMD’s superiority relative to leading competitors.
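The abstract gives no formulas; purely as an illustration of the multifractal mechanism, the sketch below simulates durations driven by a product of Markov-switching binomial multipliers times an exponential innovation. The parameterization (binomial multipliers, geometric renewal probabilities, exponential innovations) is an assumption in the spirit of Calvet–Fisher multifractal models, not the authors' exact specification.

```python
import numpy as np

def simulate_msmd(n, K=5, m0=1.4, gamma1=0.05, b=3.0, lam=1.0, seed=0):
    """Simulate durations from a multifractal-clock mechanism (illustrative).

    Each of K components takes values in {m0, 2 - m0}, is redrawn at each
    trade with component-specific probability gamma_k, and the duration is
    the product of the components times an exponential innovation.
    """
    rng = np.random.default_rng(seed)
    gammas = 1.0 - (1.0 - gamma1) ** (b ** np.arange(K))       # renewal probabilities
    M = rng.choice([m0, 2.0 - m0], size=K)                     # initial multipliers
    durations = np.empty(n)
    for i in range(n):
        renew = rng.random(K) < gammas
        M = np.where(renew, rng.choice([m0, 2.0 - m0], size=K), M)
        durations[i] = lam * M.prod() * rng.exponential(1.0)   # duration of trade i
    return durations
```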

9.
This paper focuses on the analysis of size distributions of innovations, which are known to be highly skewed. We use patent citations as one indicator of innovation significance, constructing two large datasets from the European and US Patent Offices at a high level of aggregation, and the Trajtenberg [1990, A penny for your quotes: patent citations and the value of innovations. Rand Journal of Economics 21(1), 172–187] dataset on CT scanners at a very low one. We also study self-assessed reports of patented innovation values using two very recent patent valuation datasets from the Netherlands and the UK, as well as a small dataset of patent licence revenues of Harvard University. Statistical methods are applied to analyse the properties of the empirical size distributions, where we put special emphasis on testing for the existence of ‘heavy tails’, i.e., whether or not the probability of very large innovations declines more slowly than exponentially. While overall the distributions appear to resemble a lognormal, we argue that the tails are indeed fat. We invoke some recent results from extreme value statistics and apply the Hill [1975. A simple general approach to inference about the tails of a distribution. The Annals of Statistics 3, 1163–1174] estimator with data-driven cut-offs to determine the tail index for the right tails of all datasets except the NL and UK patent valuations. On these latter datasets we use a maximum likelihood estimator for grouped data to estimate the tail index for varying definitions of the right tail. We find significantly and consistently lower tail estimates for the returns data than for the citation data (around 0.6–1 vs. 3–5). The EPO and US patent citation tail indices are roughly constant over time, but the latter estimates are significantly lower than the former. The heaviness of the tails, particularly as measured by value indicators, we argue, has significant implications for technology policy and growth theory, since the second and possibly even the first moments of these distributions may not exist.
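The Hill estimator itself is simple: average the log-excesses of the k largest observations over the (k+1)-th largest and invert. A minimal sketch follows; the data-driven choice of k used in the paper is not reproduced here.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill (1975) estimator of the tail index from the k largest observations.

    x : array of positive observations
    k : number of upper order statistics used (the cut-off)
    """
    x = np.sort(np.asarray(x, dtype=float))[::-1]     # descending order statistics
    logs = np.log(x[:k]) - np.log(x[k])               # log-excesses over the (k+1)-th largest
    return 1.0 / logs.mean()                          # estimated tail index alpha
```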

10.
Panel unit-root and no-cointegration tests that rely on cross-sectional independence of the panel units experience severe size distortions when this assumption is violated, as has, for example, been shown by Banerjee, Marcellino and Osbat [Econometrics Journal (2004), Vol. 7, pp. 322–340; Empirical Economics (2005), Vol. 30, pp. 77–91] via Monte Carlo simulations. Several studies have recently addressed this issue for panel unit-root tests using a common factor structure to model the cross-sectional dependence, but not much work has been done yet for panel no-cointegration tests. This paper proposes a model for panel no-cointegration using an unobserved common factor structure, following the study by Bai and Ng [Econometrica (2004), Vol. 72, pp. 1127–1177] for panel unit roots. We distinguish two important cases: (i) the case when the non-stationarity in the data is driven by a reduced number of common stochastic trends, and (ii) the case where common and idiosyncratic stochastic trends are both present in the data. We discuss the homogeneity restrictions on the cointegrating vectors resulting from the presence of common factor cointegration. Furthermore, we study the asymptotic behaviour of some existing residual-based panel no-cointegration tests, as suggested by Kao [Journal of Econometrics (1999), Vol. 90, pp. 1–44] and Pedroni [Econometric Theory (2004a), Vol. 20, pp. 597–625]. Under the data-generating processes (DGP) used, the test statistics are no longer asymptotically normal, and convergence occurs at rate T rather than at the rate obtained for independent panels. We then examine the possibilities of testing for various forms of no-cointegration by extracting the common factors and individual components from the observed data directly and then testing for no-cointegration using residual-based panel tests applied to the defactored data.
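The defactoring step can be illustrated, in the spirit of Bai and Ng (2004), by extracting principal components from first differences and cumulating the idiosyncratic residuals before applying residual-based panel tests; the sketch below assumes a balanced panel and a known number of factors, both simplifications.

```python
import numpy as np

def defactor_panel(Y, r):
    """Split an observed (T, N) panel into cumulated common and idiosyncratic parts.

    Y : (T, N) panel in levels
    r : assumed number of common factors
    """
    dY = np.diff(Y, axis=0)                            # work with first differences
    T1, N = dY.shape
    S = dY @ dY.T / (T1 * N)                           # T1 x T1 second-moment matrix
    eigval, eigvec = np.linalg.eigh(S)
    f = eigvec[:, ::-1][:, :r] * np.sqrt(T1)           # estimated differenced factors (T1, r)
    lam = dY.T @ f / T1                                # factor loadings (N, r)
    e = dY - f @ lam.T                                 # idiosyncratic differences
    F = np.cumsum(f, axis=0)                           # common stochastic trends
    E = np.cumsum(e, axis=0)                           # defactored (idiosyncratic) components
    return F, E                                        # feed E into residual-based panel tests
```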

11.
When endogenous growth models are confronted with the facts, one frequently encounters the prediction that the levels of economic variables, such as R&D expenditures, have lasting effects on the growth rate of an economy. As stylized facts show, research intensity in most advanced countries has increased dramatically, mostly by more than GDP. Yet growth rates have remained roughly constant or have even declined. In this paper we modify the Romer endogenous growth model and test our variant of the model using time series data. We estimate the market version for both the US and Germany for the time period January 1962 to April 1996. Our results demonstrate that the model is compatible with the time series for aggregate data in those countries. All parameters fall into a reasonable range.

12.
We show how a simple model with sign restrictions can be used to identify symmetric and asymmetric supply, demand and monetary policy shocks in an estimated two-country structural VAR for the UK and the euro area. The results can be used to address several issues that are relevant in the optimal currency area literature. We find an important role for symmetric shocks in explaining the variability of the business cycle in both economies. However, the relative importance of asymmetric shocks, at around 20% in the long run, cannot be ignored. Moreover, when we estimate the model for the UK and the US, the degree of business cycle synchronization appears to be higher. Finally, we confirm existing evidence that the exchange rate is an independent source of shocks in the economy.
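Sign-restriction identification is typically implemented by rotating a factorization of the reduced-form covariance with random orthogonal matrices and keeping draws whose implied responses satisfy the signs; a generic sketch follows (the sign pattern, shock ordering and the restriction to impact responses only are placeholders, not the paper's exact scheme).

```python
import numpy as np

def draw_sign_identified_impact(sigma_u, sign_pattern, rng, max_tries=10000):
    """Draw an impact matrix consistent with sign restrictions (sketch).

    sigma_u      : (n, n) reduced-form VAR residual covariance
    sign_pattern : (n, n) array of +1, -1 or 0 (0 = unrestricted), one column per shock
    """
    sign_pattern = np.asarray(sign_pattern)
    P = np.linalg.cholesky(sigma_u)                  # any factorization of sigma_u works
    n = sigma_u.shape[0]
    for _ in range(max_tries):
        Q, R = np.linalg.qr(rng.standard_normal((n, n)))
        Q = Q @ np.diag(np.sign(np.diag(R)))         # normalize so the draw is Haar-uniform
        B = P @ Q                                    # candidate impact matrix
        if np.all((sign_pattern == 0) | (np.sign(B) == sign_pattern)):
            return B                                 # impact responses satisfy the signs
    raise RuntimeError("no sign-consistent draw found")
```

In practice the restrictions are usually imposed on responses over several horizons and many accepted draws are collected to characterize the identified set.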

13.
This paper proposes a testing strategy for the null hypothesis that a multivariate linear rational expectations (LRE) model has a unique stable solution (determinacy) against the alternative of multiple stable solutions (indeterminacy). The testing problem is addressed by a misspecification-type approach in which the overidentifying restrictions test obtained from estimating the system of Euler equations of the LRE model by the generalized method of moments is combined with a likelihood-based test for the cross-equation restrictions that the model places on its reduced-form solution under determinacy. The resulting test has no power against a particular class of indeterminate equilibria, hence non-rejection of the null hypothesis cannot be interpreted conclusively as evidence of determinacy. On the other hand, this test (i) circumvents the nonstandard inferential problem generated by the presence of the auxiliary parameters that appear under indeterminacy and are not identifiable under determinacy, (ii) does not involve inequality parametric restrictions and hence the use of nonstandard inference, (iii) is consistent against dynamic misspecification of the LRE model, and (iv) is computationally simple. Monte Carlo simulations show that the suggested testing strategy delivers reasonable size coverage and power against dynamic misspecification in finite samples. An empirical illustration focuses on the determinacy/indeterminacy of a New Keynesian monetary business cycle model of the US economy.

14.
We model credit rating histories as continuous-time discrete-state Markov processes. Infrequent monitoring of the debtors’ solvency will result in erroneous observations of the rating transition times, and consequently in biased parameter estimates. We develop a score test against such measurement errors in the transition data that is independent of the error distribution. We derive the asymptotic χ²-distribution of the test statistic under the null by stochastic limit theory. The test is applied to an international corporate portfolio, while accounting for economic and debtor-specific covariates. The test indicates that measurement errors in the transition times are a real problem in practice.
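For context, when rating histories are observed continuously the maximum-likelihood estimator of the generator of a continuous-time discrete-state Markov chain is simply transition counts divided by time at risk, as in the sketch below; the paper's point is precisely that, with infrequent monitoring, the transition times feeding such counts are measured with error. The data layout is an illustrative assumption.

```python
import numpy as np

def estimate_generator(histories, n_states):
    """MLE of the generator matrix from fully observed rating histories (sketch).

    histories : list of per-debtor lists of (state, entry_time) pairs, ordered
                in time; the last pair only marks the end of observation.
    """
    counts = np.zeros((n_states, n_states))       # observed i -> j transitions
    exposure = np.zeros(n_states)                 # total time spent in each state
    for hist in histories:
        for (s, t), (s_next, t_next) in zip(hist[:-1], hist[1:]):
            exposure[s] += t_next - t
            if s_next != s:
                counts[s, s_next] += 1
    Q = np.zeros_like(counts)
    visited = exposure > 0
    Q[visited] = counts[visited] / exposure[visited, None]   # off-diagonal intensities
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))           # rows of a generator sum to zero
    return Q
```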

15.
This paper develops a very simple test for the null hypothesis of no cointegration in panel data. The test is general enough to allow for heteroskedastic and serially correlated errors, unit-specific time trends, cross-sectional dependence and unknown structural breaks in both the intercept and slope of the cointegrated regression, which may be located at different dates for different units. The limiting distribution of the test is derived, and is found to be normal and free of nuisance parameters under the null. A small simulation study is also conducted to investigate the small-sample properties of the test. In our empirical application, we provide new evidence concerning the purchasing power parity hypothesis.

16.
In this paper, a new model to analyze the comovements in the volatilities of a portfolio is proposed. The Pure Variance Common Features model is a factor model for the conditional variances of a portfolio of assets, designed to isolate a small number of variance features that drive all assets’ volatilities. It decomposes the conditional variance into a short-run idiosyncratic component (a low-order ARCH process) and a long-run component (the variance factors). An empirical example provides evidence that models with very few variance features perform well in capturing the long-run common volatilities of the equity components of the Dow Jones.

17.
We present new Monte Carlo evidence regarding the feasibility of separating causality from selection within non-experimental duration data, by means of the non-parametric maximum likelihood estimator (NPMLE). Key findings are: (i) the NPMLE is extremely reliable, and it accurately separates the causal effects of treatment and duration dependence from sorting effects, almost regardless of the true unobserved heterogeneity distribution; (ii) the NPMLE is normally distributed, and standard errors can be computed directly from the optimally selected model; and (iii) unjustified restrictions on the heterogeneity distribution, e.g., in terms of a pre-specified number of support points, may cause substantial bias.

18.
This paper proposes and analyses the autoregressive conditional root (ACR) time-series model. This multivariate dynamic mixture autoregression allows for non-stationary epochs. It proves to be an appealing alternative to existing nonlinear models, e.g. the threshold autoregressive or Markov-switching class of models, which are commonly used to describe nonlinear dynamics as implied by arbitrage in the presence of transaction costs. Simple conditions on the parameters of the ACR process and its innovations are shown to imply geometric ergodicity, stationarity and existence of moments. Furthermore, consistency and asymptotic normality of the maximum likelihood estimators are established. An application to real exchange rate data illustrates the analysis.
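The abstract describes the ACR class only verbally; the sketch below simulates a simple univariate version in which the autoregressive root switches between a stationary value and unity with a probability that depends on the lagged level. The logistic form of the mixing probability and the parameter values are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_acr(n, rho=0.7, a=-2.0, b=1.0, sigma=1.0, seed=0):
    """Simulate a univariate ACR-type process (illustrative sketch).

    With probability p_t = logistic(a + b * |y_{t-1}|) the process mean-reverts
    with root rho; otherwise it behaves as a unit-root process in that period.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        p = 1.0 / (1.0 + np.exp(-(a + b * abs(y[t - 1]))))   # prob. of the stationary regime
        root = rho if rng.random() < p else 1.0
        y[t] = root * y[t - 1] + sigma * rng.standard_normal()
    return y
```

The mechanism mimics arbitrage with transaction costs: mean reversion becomes more likely the further the process has wandered from its equilibrium level.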

19.
We develop a test for the linear no-cointegration null hypothesis in a threshold vector error correction model. We adopt a sup-Wald type test and derive its null asymptotic distribution. A residual-based bootstrap is proposed, and the first-order consistency of the bootstrap is established. A set of Monte Carlo simulations shows that the bootstrap corrects the size distortion of the asymptotic distribution in finite samples, and that its power against the threshold cointegration alternative is significantly greater than that of conventional cointegration tests. Our method is illustrated with used car price indexes.
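A rough univariate illustration of the sup-Wald construction: for each candidate threshold, regress the short-run changes on regime-split error-correction terms and a lagged change, compute the Wald statistic for the hypothesis that both error-correction coefficients are zero, and take the supremum over a trimmed threshold grid. The lag structure and trimming are illustrative, and in practice the critical values would come from the residual-based bootstrap described above.

```python
import numpy as np

def sup_wald_threshold_ecm(dy, w_lag, dy_lag, trim=0.15):
    """Sup-Wald statistic over candidate thresholds for a threshold ECM (sketch).

    dy     : (T,) changes of the variable of interest
    w_lag  : (T,) lagged error-correction term (candidate cointegrating residual)
    dy_lag : (T,) lagged changes included as a short-run control
    """
    T = dy.shape[0]
    grid = np.sort(w_lag)[int(trim * T):int((1 - trim) * T)]   # trimmed threshold grid
    best = -np.inf
    for gamma in grid:
        below = (w_lag <= gamma).astype(float)
        X = np.column_stack([w_lag * below, w_lag * (1 - below), dy_lag, np.ones(T)])
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ dy
        resid = dy - X @ beta
        s2 = resid @ resid / (T - X.shape[1])
        V = s2 * XtX_inv                                   # homoskedastic OLS covariance
        b_ec = beta[:2]                                    # the two error-correction coefficients
        wald = b_ec @ np.linalg.inv(V[:2, :2]) @ b_ec      # H0: both zero (no cointegration)
        best = max(best, wald)
    return best
```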

20.
This paper proposes maximum likelihood estimators for panel seemingly unrelated regressions with both spatial lag and spatial error components. We study the general case where spatial effects are incorporated via spatial error terms and via a spatial lag dependent variable, and where the heterogeneity in the panel is incorporated via an error component specification. We generalize the approach of Wang and Kockelman (2007) and propose joint and conditional Lagrange multiplier tests for spatial autocorrelation and random effects for this spatial SUR panel model. The small sample performance of the proposed estimators and tests is examined using Monte Carlo experiments. An empirical application to hedonic housing prices in Paris illustrates these methods. The proposed specification uses a system of three SUR equations corresponding to three types of flats within 80 districts of Paris over the period 1990–2003. We test for spatial effects and heterogeneity and find reasonable estimates of the shadow prices for housing characteristics.
