Similar Articles (20 results)
1.
This paper contributes to the econometric literature on structural breaks by proposing a test for parameter stability in vector autoregressive (VAR) models at a particular frequency ω, where ω ∈ [0, π]. When a dynamic model is affected by a structural break, the new tests allow for detecting which frequencies of the data are responsible for parameter instability. If the model is locally stable at the frequencies of interest, the whole sample size can then be exploited despite the presence of a break. The methodology is applied to analyse the productivity slowdown in the US, and the outcome is that local stability concerns only the higher frequencies of data on consumption, investment and output.

2.
The paper describes two automatic model selection algorithms, RETINA and PcGets, briefly discussing how the algorithms work and what their performance claims are. RETINA's Matlab implementation is explained; the program is then compared with PcGets on the data in Perez‐Amaral, Gallo and White (2005, Econometric Theory, Vol. 21, pp. 262–277), ‘A Comparison of Complementary Automatic Modelling Methods: RETINA and PcGets’, and Hoover and Perez (1999, Econometrics Journal, Vol. 2, pp. 167–191), ‘Data Mining Reconsidered: Encompassing and the General‐to‐specific Approach to Specification Search’. Monte Carlo simulation results assess the null and non‐null rejection frequencies of the RETINA and PcGets model selection algorithms in the presence of nonlinear functions.

3.
Several recent attempts to measure the quality of life directly are examined. The indicators developed for this purpose seem incapable of correctly ranking locations or time periods. Variable aggregation schemes have not summarized data in a way which is useful to policy makers. Failures arise partly because researchers do not agree on the set of variables to be measured. More importantly, the methodology used has been inconsistent with proposed theoretical models and has resulted in indicators which violate accepted laws of economics. For example, neither diminishing returns nor substitutability between quality of life “inputs” is recognized. Priority, it is believed, should be given to the solution of existing problems.

4.
This paper proposes a nonlinear panel data model which can endogenously generate both ‘weak’ and ‘strong’ cross-sectional dependence. The model’s distinguishing characteristic is that a given agent’s behaviour is influenced by an aggregation of the views or actions of those around them. The model allows for considerable flexibility in terms of the genesis of this herding or clustering type behaviour. At an econometric level, the model is shown to nest various extant dynamic panel data models. These include panel AR models, spatial models, which accommodate weak dependence only, and panel models where cross-sectional averages or factors exogenously generate strong, but not weak, cross-sectional dependence. An important implication is that the appropriate model for the aggregate series becomes intrinsically nonlinear, due to the clustering behaviour, and thus requires the disaggregates to be simultaneously considered with the aggregate. We provide the associated asymptotic theory for estimation and inference. This is supplemented with Monte Carlo studies and two empirical applications which indicate the utility of our proposed model as a vehicle to model different types of cross-sectional dependence.

5.
Quantile regression techniques have been widely used in empirical economics. In this paper, we consider the estimation of a generalized quantile regression model when data are subject to fixed or random censoring. Through a discretization technique, we transform the censored regression model into a sequence of binary choice models and further propose an integrated smoothed maximum score estimator by combining individual binary choice models, following the insights of Horowitz (1992) and Manski (1985). Unlike the estimators of Horowitz (1992) and Manski (1985), our estimators converge at the usual parametric rate through an integration process. In the case of fixed censoring, our approach overcomes a major drawback of existing approaches associated with the curse-of-dimensionality problem. Our approach for the fixed censored case can be extended readily to the case with random censoring for which other existing approaches are no longer applicable. Both of our estimators are consistent and asymptotically normal. A simulation study demonstrates that our estimators perform well in finite samples.
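As an editorial illustration of the building block behind this approach: the Horowitz (1992) smoothed maximum score objective for a single binary choice model replaces the indicator in Manski's score with a smooth distribution-function kernel and can be maximized numerically. The sketch below (Python, with our own variable names, scale normalized by fixing the first coefficient at one) shows only that single-model step, not the paper's integrated estimator or its censoring transformation.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def smoothed_max_score(y, X, h=0.1):
        """Smoothed maximum score for y = 1{x'b + u > 0}; coefficient on X[:, 0] fixed at 1."""
        def neg_score(b_rest):
            index = X[:, 0] + X[:, 1:] @ b_rest
            # smooth surrogate of Manski's score: mean of (2y - 1) * K(index / bandwidth)
            return -np.mean((2 * y - 1) * norm.cdf(index / h))
        res = minimize(neg_score, x0=np.zeros(X.shape[1] - 1), method="Nelder-Mead")
        return np.concatenate(([1.0], res.x))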

6.
We study regression models that involve data sampled at different frequencies. We derive the asymptotic properties of the NLS estimators of such regression models and compare them with the LS estimators of a traditional model that involves aggregating or equally weighting data to estimate a model at the same sampling frequency. In addition, we propose new tests to examine the null hypothesis of equal weights in aggregating time series in a regression model. We explore the above theoretical aspects and verify them via an extensive Monte Carlo simulation study and an empirical application.
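A rough sketch of the kind of mixed-frequency regression described here: high-frequency lags enter the low-frequency equation through a parametric weight function estimated by NLS, and setting the weight parameters to zero recovers the equal-weight (flat aggregation) benchmark whose restriction the new tests examine. The exponential Almon weighting, the simulated data and all names below are our own illustrative choices, not the authors' specification.

    import numpy as np
    from scipy.optimize import least_squares

    def almon_weights(theta, m):
        j = np.arange(1, m + 1)
        w = np.exp(theta[0] * j + theta[1] * j ** 2)
        return w / w.sum()                     # weights sum to one; theta = (0, 0) gives equal weights

    def residuals(params, y, Xhf):
        """y: low-frequency series (T,); Xhf: T x m matrix of high-frequency lags."""
        b0, b1, t1, t2 = params
        w = almon_weights(np.array([t1, t2]), Xhf.shape[1])
        return y - b0 - b1 * (Xhf @ w)

    rng = np.random.default_rng(0)
    T, m = 200, 12
    Xhf = rng.standard_normal((T, m))
    y = 0.5 + 1.0 * (Xhf @ almon_weights(np.array([0.3, -0.05]), m)) + 0.3 * rng.standard_normal(T)
    fit = least_squares(residuals, x0=np.array([0.0, 0.5, 0.0, 0.0]), args=(y, Xhf))   # NLS fit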

7.
We introduce a modified conditional logit model that takes account of uncertainty associated with mis‐reporting in revealed preference experiments estimating willingness‐to‐pay (WTP). Like Hausman et al. [Journal of Econometrics (1998) Vol. 87, pp. 239–269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non‐pesticide food. We compare the results of our model with the Hausman model. WTP estimates are produced for different groups of consumers and we find that modified estimates of WTP, which take account of mis‐reporting, are substantially revised downwards. We find a significant proportion of respondents mis‐reporting in favour of the non‐pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.

8.
Economic sentiment surveys are carried out by all European Union member states and are often seen as early indicators for future economic developments. Based on these surveys, the European Commission constructs an aggregate European Economic Sentiment Indicator (ESI). This paper compares the ESI with more sophisticated aggregation schemes based on statistical methods: dynamic factor analysis and partial least squares. The indicator based on partial least squares clearly outperforms the other two indicators in terms of comovement with economic activity. In terms of forecast ability, the ESI, constructed in a rather ad hoc way, can compete with the other indicators.
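For concreteness, a minimal sketch of a partial-least-squares composite of the kind compared with the ESI: a one-component PLS score extracted from survey balances against an activity target. It uses scikit-learn; the simulated data and variable names are ours and do not reproduce the paper's indicator.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    T, k = 120, 15
    surveys = rng.standard_normal((T, k))                 # monthly survey balances (stand-in data)
    activity = surveys[:, :3].mean(axis=1) + 0.5 * rng.standard_normal(T)   # activity target

    pls = PLSRegression(n_components=1).fit(surveys, activity)
    sentiment_pls = pls.transform(surveys).ravel()        # PLS-based composite sentiment indicator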

9.
This article considers the problem of testing for cross‐section independence in limited dependent variable panel data models. It derives a Lagrange multiplier (LM) test and shows that, in terms of the generalized residuals of Gourieroux et al. (1987), it reduces to the LM test of Breusch and Pagan (1980). Because of the tendency of the LM test to over‐reject in panels with large N (cross‐section dimension), we also consider the application of the cross‐section dependence (CD) test proposed by Pesaran (2004). In Monte Carlo experiments it emerges that for most combinations of N and T the CD test is correctly sized, whereas the validity of the LM test requires T (time series dimension) to be quite large relative to N. We illustrate the cross‐sectional independence tests with an application to a probit panel data model of roll‐call votes in the US Congress and find that the votes display a significant degree of cross‐section dependence.
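The two statistics compared here have simple closed forms when computed from an N x T matrix of (generalized) residuals; a minimal sketch follows, with our own function and variable names. Under the null of cross-section independence, the Breusch-Pagan LM statistic is chi-squared with N(N-1)/2 degrees of freedom and the Pesaran CD statistic is standard normal.

    import numpy as np

    def cd_and_lm_tests(resid):
        """Breusch-Pagan LM and Pesaran CD statistics from an N x T residual matrix."""
        N, T = resid.shape
        corr = np.corrcoef(resid)                 # pairwise residual correlations across units
        rho = corr[np.triu_indices(N, k=1)]       # the N(N-1)/2 pairs with i < j
        lm = T * np.sum(rho ** 2)                 # chi2(N(N-1)/2) under the null
        cd = np.sqrt(2.0 * T / (N * (N - 1))) * np.sum(rho)   # N(0,1) under the null
        return lm, cd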

10.
Recent literature on panel data emphasizes the importance of accounting for time-varying unobservable individual effects, which may stem from either omitted individual characteristics or macro-level shocks that affect each individual unit differently. In this paper, we propose a simple specification test of the null hypothesis that the individual effects are time-invariant against the alternative that they are time-varying. Our test is an application of the Hausman (1978) testing procedure and can be used for any generalized linear model for panel data that admits a sufficient statistic for the individual effect. This is a wide class of models which includes the Gaussian linear model and a variety of nonlinear models typically employed for discrete or categorical outcomes. The basic idea of the test is to compare two alternative estimators of the model parameters based on two different formulations of the conditional maximum likelihood method. Our approach does not require assumptions on the distribution of unobserved heterogeneity, nor does it require the latter to be independent of the regressors in the model. We investigate the finite sample properties of the test through a set of Monte Carlo experiments. Our results show that the test performs well, with small size distortions and good power properties. We use a health economics example based on data from the Health and Retirement Study to illustrate the proposed test.
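As a reminder of the generic contrast such a test builds on, the sketch below computes a Hausman (1978)-type statistic from two estimators of the same parameter vector, one of which is efficient under the null; it does not implement the paper's two conditional maximum likelihood estimators, and the function name is ours.

    import numpy as np

    def hausman_statistic(b_eff, V_eff, b_rob, V_rob):
        """Hausman contrast: b_eff efficient under H0, b_rob consistent under H0 and H1.
        Returns the statistic, asymptotically chi2 with len(b_eff) degrees of freedom under H0."""
        d = np.asarray(b_rob) - np.asarray(b_eff)
        Vd = np.asarray(V_rob) - np.asarray(V_eff)        # variance of the contrast under H0
        return float(d @ np.linalg.pinv(Vd) @ d)          # pinv guards against a singular Vd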

11.
Detecting and modeling structural changes in time series models have attracted great attention. However, relatively little attention has been paid to testing for structural changes in panel data models, despite their increasing importance in economics and finance. In this paper, we propose a new approach to testing structural changes in panel data models. Unlike the bulk of the literature on structural changes, which focuses on the detection of abrupt structural changes, we consider smooth structural changes for which model parameters are unknown deterministic smooth functions of time except for a finite number of time points. We use a nonparametric local smoothing method to consistently estimate the smoothly changing parameters and develop two consistent tests for smooth structural changes in panel data models. The first test checks whether all model parameters are stable over time. The second test checks for potential time-varying interaction while allowing for a common trend. Both tests have an asymptotic N(0,1) distribution under the null hypothesis of parameter constancy and are consistent against a vast class of smooth structural changes, as well as abrupt structural breaks with possibly unknown break points. Simulation studies show that the tests provide reliable inference in finite samples, and two empirical examples, concerning a cross-country growth model and a capital structure model, are discussed.
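The estimation step described here relies on kernel-weighted least squares in the time dimension; a stripped-down single-series sketch of such a local (Nadaraya-Watson-type) estimator of smoothly time-varying coefficients is given below. The kernel, bandwidth and data are illustrative choices of ours; the panel dimension and the test statistics themselves are not implemented.

    import numpy as np

    def local_constant_beta(y, X, grid, h):
        """Kernel-weighted LS estimates of beta(tau) in y_t = x_t' beta(t/T) + e_t."""
        T = X.shape[0]
        taus = np.arange(1, T + 1) / T
        betas = []
        for tau0 in grid:
            w = np.exp(-0.5 * ((taus - tau0) / h) ** 2)   # Gaussian kernel weights around tau0
            sw = np.sqrt(w)
            betas.append(np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0])
        return np.array(betas)

    rng = np.random.default_rng(0)
    T = 400
    x = rng.standard_normal((T, 1))
    beta_true = 1.0 + np.sin(np.linspace(0, np.pi, T))    # a smooth structural change
    y = x[:, 0] * beta_true + rng.standard_normal(T)
    est = local_constant_beta(y, x, grid=np.linspace(0.1, 0.9, 9), h=0.1)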

12.
This article shows that spurious regression results can occur for a fixed effects model with weak time series variation in the regressor and/or strong time series variation in the regression errors when the first‐differenced and Within‐OLS estimators are used. Asymptotic properties of these estimators and the related t‐tests and model selection criteria are studied by sending the number of cross‐sectional observations to infinity. It is shown that the first‐differenced and Within‐OLS estimators diverge in probability, that the related t‐tests are inconsistent, that R²s converge to zero in probability and that AIC and BIC diverge to −∞ in probability. The results of the article warn that one should not jump to the use of fixed effects regressions without considering the degree of time series variation in the data.
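A small Monte Carlo sketch of the setting the article warns about, under a data-generating process of our own choosing: the regressor has almost no within (time-series) variation, the errors are strongly persistent, and the naive within-OLS t-ratio is computed as if the classical assumptions held. It is meant only to make the setup concrete, not to reproduce the article's asymptotic results.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 200, 20
    alpha = rng.standard_normal((N, 1))                       # fixed effects
    x = alpha + 0.01 * rng.standard_normal((N, T))            # weak time-series variation in x
    e = np.cumsum(rng.standard_normal((N, T)), axis=1)        # strong time-series variation in errors
    y = alpha + e                                             # true slope on x is zero

    xd = x - x.mean(axis=1, keepdims=True)                    # within transformation
    yd = y - y.mean(axis=1, keepdims=True)
    b = (xd * yd).sum() / (xd ** 2).sum()                     # within-OLS slope
    u = yd - b * xd
    s2 = (u ** 2).sum() / (N * (T - 1) - 1)
    t_naive = b / np.sqrt(s2 / (xd ** 2).sum())               # unreliable t-ratio in this setting
    print(b, t_naive)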

13.
We examine the impact of time aggregation on price change estimates for 19 supermarket item categories using scanner data. Time aggregation choices lead to a difference in price change estimates for chained indexes which ranged from 0.28% to 29.73% for a superlative index and an incredible 14.88%-46,463.71% for a non-superlative index. Traditional index number theory appears to break down with weekly data, even for superlative indexes. Monthly and (in some cases) quarterly time aggregation were insufficient to eliminate downward drift in superlative indexes. To eliminate drift, a novel adaptation of a multilateral index number method is proposed.
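For readers unfamiliar with the indexes involved, here is a minimal sketch of a chained Törnqvist price index, the superlative chained type whose drift with weekly scanner data the paper documents, computed from price and quantity arrays; the function and array names are ours.

    import numpy as np

    def chained_tornqvist(p, q):
        """Chained Tornqvist price index; p and q are T x n arrays of prices and quantities."""
        T = p.shape[0]
        s = p * q / (p * q).sum(axis=1, keepdims=True)        # expenditure shares each period
        index = np.ones(T)
        for t in range(1, T):
            w = 0.5 * (s[t - 1] + s[t])                       # average shares of adjacent periods
            index[t] = index[t - 1] * np.exp(np.sum(w * np.log(p[t] / p[t - 1])))
        return index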

14.
In the Stackelberg duopoly experiments in Huck et al. (2001) , nearly half of the followers’ behaviours are inconsistent with conventional prediction. We use a test in which the conventional self‐interested model is nested as a special case of an inequality aversion model. Maximum likelihood methods applied to the Huck et al. (2001) data set reject the self‐interested model. We find that almost 40% of the players have disadvantageous inequality aversion that is statistically different from zero and economically significant, but advantageous inequality aversion is relatively unimportant. These estimates provide support for a more parsimonious model with no advantageous inequality aversion.

15.
In this paper we construct a model to estimate local employment growth in Italian local labour markets for the period 1991–2001. The model is constructed in a similar manner to the original models of Glaeser et al. (1992), Henderson et al. (1995) and Combes (2000). Our objective is to identify the extent to which the results estimated by these types of models are themselves sensitive to the model specification. In order to do this we extend the basic models by successively incorporating new explanatory variables into the model framework. In addition, and for the first time, we also estimate these same models at two different levels of sectoral aggregation, for the same spatial structure. Our results indicate that these models are highly sensitive to sectoral aggregation and classification and our results therefore strongly support the use of highly disaggregated data.

16.
The short end of the US$ term structure of interest rates is analysed allowing for the possibility of fractional integration and cointegration. This approach permits mean‐reverting dynamics for the data and the existence of a common long run stochastic trend to be maintained simultaneously. We estimate the model for the period 1963–2006 and find it compatible with this structure. The restriction that the data are I(1) and the errors are I(0) is rejected, mainly because the latter still display long memory. This result is consistent with a model of monetary policy in which the Central Bank operates affecting contracts with short term maturity, and the impulses are transmitted to contracts with longer maturities and then to the final goals. However, the transmission of the impulses along the term structure cannot be modelled using the Expectations Hypothesis.
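A standard semiparametric estimator of the memory parameter in this literature is the Geweke-Porter-Hudak log-periodogram regression; a compact sketch is given below for orientation. It is a generic textbook version, not the estimator or bandwidth choice used by the authors.

    import numpy as np

    def gph_estimate(x, m=None):
        """Log-periodogram (GPH) estimate of the fractional integration parameter d."""
        x = np.asarray(x, dtype=float)
        n = x.size
        m = int(n ** 0.5) if m is None else m                 # bandwidth: number of low frequencies
        lam = 2 * np.pi * np.arange(1, m + 1) / n             # Fourier frequencies near zero
        dft = np.fft.fft(x - x.mean())[1:m + 1]
        I = (np.abs(dft) ** 2) / (2 * np.pi * n)              # periodogram ordinates
        reg = -np.log(4 * np.sin(lam / 2) ** 2)               # GPH regressor
        coef = np.linalg.lstsq(np.column_stack([np.ones(m), reg]), np.log(I), rcond=None)[0]
        return coef[1]                                        # slope = estimate of d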

17.
Panel unit‐root and no‐cointegration tests that rely on cross‐sectional independence of the panel units experience severe size distortions when this assumption is violated, as has, for example, been shown by Banerjee, Marcellino and Osbat [Econometrics Journal (2004), Vol. 7, pp. 322–340; Empirical Economics (2005), Vol. 30, pp. 77–91] via Monte Carlo simulations. Several studies have recently addressed this issue for panel unit‐root tests using a common factor structure to model the cross‐sectional dependence, but not much work has been done yet for panel no‐cointegration tests. This paper proposes a model for panel no‐cointegration using an unobserved common factor structure, following the study by Bai and Ng [Econometrica (2004), Vol. 72, pp. 1127–1177] for panel unit roots. We distinguish two important cases: (i) the case when the non‐stationarity in the data is driven by a reduced number of common stochastic trends, and (ii) the case where we have common and idiosyncratic stochastic trends present in the data. We discuss the homogeneity restrictions on the cointegrating vectors resulting from the presence of common factor cointegration. Furthermore, we study the asymptotic behaviour of some existing residual‐based panel no‐cointegration tests, as suggested by Kao [Journal of Econometrics (1999), Vol. 90, pp. 1–44] and Pedroni [Econometric Theory (2004a), Vol. 20, pp. 597–625]. Under the data‐generating processes (DGP) used, the test statistics are no longer asymptotically normal, and convergence occurs at rate T rather than at rate √N T as for independent panels. We then examine the possibilities of testing for various forms of no‐cointegration by extracting the common factors and individual components from the observed data directly and then testing for no‐cointegration using residual‐based panel tests applied to the defactored data.
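The defactoring step mentioned at the end of the abstract can be sketched, in a much simplified balanced-panel form, as principal components applied to first differences and then cumulated back to levels (in the spirit of Bai and Ng, 2004); the idiosyncratic components are what residual-based tests are then applied to. The code below is our own schematic version, with no deterministic terms and an arbitrary number of factors r.

    import numpy as np

    def defactor_panel(Y, r=1):
        """Split a T x N nonstationary panel into estimated common and idiosyncratic parts."""
        dY = np.diff(Y, axis=0)                               # work with first differences
        u, s, vt = np.linalg.svd(dY, full_matrices=False)
        d_factors = u[:, :r] * s[:r]                          # differenced common factors
        loadings = vt[:r].T                                   # N x r factor loadings
        factors = np.vstack([np.zeros((1, r)), np.cumsum(d_factors, axis=0)])  # cumulate to levels
        common = factors @ loadings.T
        idiosyncratic = (Y - Y[0]) - common                   # defactored data, initialized at zero
        return factors, idiosyncratic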

18.
Recently, single‐equation estimation by the generalized method of moments (GMM) has become popular in the monetary economics literature, for estimating forward‐looking models with rational expectations. We discuss a method for analysing the empirical identification of such models that exploits their dynamic structure and the assumption of rational expectations. This allows us to judge the reliability of the resulting GMM estimation and inference and reveals the potential sources of weak identification. With reference to the New Keynesian Phillips curve of Galí and Gertler [Journal of Monetary Economics (1999) Vol. 44, 195] and the forward‐looking Taylor rules of Clarida, Galí and Gertler [Quarterly Journal of Economics (2000) Vol. 115, 147], we demonstrate that the usual ‘weak instruments’ problem can arise naturally, when the predictable variation in inflation is small relative to unpredictable future shocks (news). Hence, we conclude that those models are less reliably estimated over periods when inflation has been under effective policy control.
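The estimation step these models rely on is single-equation linear GMM with lagged variables as instruments; the sketch below implements the simplest (2SLS) version together with a first-stage F-statistic, a rough conventional gauge of the 'weak instruments' problem discussed here. It is a generic illustration with our own function name, not the authors' identification analysis, and it assumes a single endogenous regressor in the first column of X and a constant column included in Z.

    import numpy as np

    def tsls_with_first_stage_f(y, X, Z):
        """2SLS estimate of y = X b + u with instruments Z, plus a first-stage F for X[:, 0]."""
        Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)                # projection onto the instrument space
        b = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
        x1 = X[:, 0]                                          # the endogenous regressor
        fitted = Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]    # first-stage regression
        rss1 = ((x1 - fitted) ** 2).sum()
        rss0 = ((x1 - x1.mean()) ** 2).sum()
        n, k = Z.shape
        F = ((rss0 - rss1) / (k - 1)) / (rss1 / (n - k))      # overall first-stage F (Z has a constant)
        return b, F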

19.
I study inverse probability weighted M-estimation under a general missing data scheme. Examples include M-estimation with missing data due to a censored survival time, propensity score estimation of the average treatment effect in the linear exponential family, and variable probability sampling with observed retention frequencies. I extend an important result known to hold in special cases: estimating the selection probabilities is generally more efficient than if the known selection probabilities could be used in estimation. For the treatment effect case, the setup allows a general characterization of a “double robustness” result due to Scharfstein et al. [1999. Rejoinder. Journal of the American Statistical Association 94, 1135–1146].
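For the treatment-effect case mentioned in the abstract, here is a textbook sketch of inverse-probability-weighted estimation of the average treatment effect with propensity scores estimated by logit, illustrating that the estimated, rather than known, probabilities enter the weights. It uses scikit-learn and our own names, and it is not the paper's general M-estimation framework.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ipw_ate(y, d, X):
        """IPW estimate of the average treatment effect; y outcome, d binary treatment, X covariates."""
        p = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]  # estimated propensities
        p = np.clip(p, 1e-3, 1 - 1e-3)                        # guard against extreme weights
        return np.mean(d * y / p - (1 - d) * y / (1 - p))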

20.
This paper describes semiparametric techniques recently proposed for the analysis of seasonal or cyclical long memory and applies them to a monthly Spanish inflation series. One of the conclusions is that this series has long memory not only at the origin but also at some but not all seasonal frequencies, suggesting that the fractional difference operator (1 − L^12)^d should be avoided. Moreover, different persistent cycles are observed before and after the first oil crisis. Whereas the cycles seem stationary in the former period, we find evidence of a unit root after 1973, which implies that a shock has a permanent effect. Finally, it is shown how to compute the exact impulse responses and the coefficients in the autoregressive expansion of parametric seasonal long memory models. These two quantities are important for assessing the impact of random shocks, such as those produced by a change of economic policy, and for forecasting purposes, respectively.
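The autoregressive expansion mentioned in the last sentence rests on the binomial expansion of the seasonal fractional difference operator; the sketch below computes the coefficients of (1 − L^s)^d (shown for s = 12) with the standard recursion. It is a building-block illustration with our own names, not the authors' full parametric model.

    import numpy as np

    def seasonal_frac_diff_coeffs(d, s=12, max_lag=120):
        """Coefficients pi_j of (1 - L**s)**d = sum_j pi_j L**j; nonzero only at multiples of s."""
        c = np.zeros(max_lag // s + 1)
        c[0] = 1.0
        for k in range(1, c.size):
            c[k] = c[k - 1] * (k - 1 - d) / k                 # binomial expansion of (1 - z)**d
        pi = np.zeros(max_lag + 1)
        pi[::s] = c
        return pi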
