Similar Literature
20 similar documents found (search time: 31 ms)
1.
The paper questions the appropriateness of the practice known as ‘error‐autocorrelation correcting’ in linear regression, by showing that adopting an AR(1) error formulation is equivalent to assuming that the regressand does not Granger cause any of the regressors. This result is used to construct a new test for the common factor restrictions, as well as investigate – using Monte Carlo simulations – other potential sources of unreliability of inference resulting from this practice. The main conclusion is that when the Granger noncausality restriction is false, the ordinary least squares and generalized least squares estimators are biased and inconsistent, and using autocorrelation‐consistent standard errors does not improve the reliability of inference.
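Under the AR(1) error formulation y_t = b*x_t + u_t with u_t = rho*u_{t-1} + eps_t, the model reduces to the ADL(1,1) form y_t = rho*y_{t-1} + b*x_t - rho*b*x_{t-1} + eps_t, so the coefficient on x_{t-1} is constrained to equal minus the product of the other two (the common factor restriction). A minimal simulation sketch of this restriction; all data and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, rho = 20000, 2.0, 0.5

# Simulate y_t = beta*x_t + u_t with AR(1) errors u_t = rho*u_{t-1} + eps_t;
# x is strictly exogenous, so y does not Granger cause x and the
# common factor restriction holds.
x = rng.normal(size=n)
eps = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
y = beta * x + u

# Unrestricted ADL(1,1): regress y_t on y_{t-1}, x_t, x_{t-1}.
X = np.column_stack([y[:-1], x[1:], x[:-1]])
b_ylag, b_x, b_xlag = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# Common factor restriction: coefficient on x_{t-1} should equal
# minus (coefficient on y_{t-1}) times (coefficient on x_t).
print(b_xlag, -b_ylag * b_x)
```

When the regressand does Granger cause a regressor, this product restriction fails, and the AR(1) "correction" imposes a false constraint.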

2.
We characterize the restrictions imposed by the minimal I(2)‐to‐I(1) transformation that underlies much applied work, e.g. on money demand relationships or open‐economy pricing relationships. The relationship between the parameters of the original I(2) vector autoregression, including the coefficients of polynomially cointegrating relationships, and the transformed I(1) model is characterized. We discuss estimation of the transformed model subject to restrictions as well as the more commonly used approach of unrestricted reduced rank regression. Only a minor loss of efficiency is incurred by ignoring the restrictions in the empirical example and a simulation study. A properly transformed vector autoregression thus provides a practical and effective means for inference on the parameters of the I(2) model.

3.
In applications of structural VAR modeling, finite-sample properties may be difficult to obtain when certain identifying restrictions are imposed on lagged relationships. As a result, even though imposing some lagged restrictions makes economic sense, lagged relationships are often left unrestricted to make statistical inference more convenient. This paper develops block Monte Carlo methods to obtain both maximum likelihood estimates and exact Bayesian inference when certain types of restrictions are imposed on the lag structure. These methods are applied to two examples to illustrate the importance of imposing restrictions on lagged relationships.

4.
We present a sequential approach to estimating a dynamic Hausman–Taylor model. We first estimate the coefficients of the time‐varying regressors and subsequently regress the first‐stage residuals on the time‐invariant regressors. In comparison to estimating all coefficients simultaneously, this two‐stage procedure is more robust against model misspecification, allows for a flexible choice of the first‐stage estimator, and enables simple testing of the overidentifying restrictions. For correct inference, we derive analytical standard error adjustments. We evaluate the finite‐sample properties with Monte Carlo simulations and apply the approach to a dynamic gravity equation for US outward foreign direct investment.
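A stylized sketch of the two‐stage logic on hypothetical panel data; a simple within (fixed‐effects) estimator stands in for the instrumental‐variable machinery of the actual Hausman–Taylor first stage, and the individual effect is drawn independently of the time‐invariant regressor so plain OLS works in the second stage:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 500, 6  # individuals, time periods

# Hypothetical panel: y_it = 1.5*x_it + 0.8*z_i + alpha_i + e_it,
# with z_i time-invariant and alpha_i an individual effect.
alpha = rng.normal(size=N)
z = rng.normal(size=N)
x = rng.normal(size=(N, T))
y = 1.5 * x + (0.8 * z + alpha)[:, None] + 0.3 * rng.normal(size=(N, T))

# Stage 1: within transformation wipes out z_i and alpha_i and
# identifies the coefficient on the time-varying regressor.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_x = (xd * yd).sum() / (xd ** 2).sum()

# Stage 2: regress individual means of the first-stage residuals
# on the time-invariant regressor.
resid_bar = (y - b_x * x).mean(axis=1)
Z = np.column_stack([np.ones(N), z])
b_z = np.linalg.lstsq(Z, resid_bar, rcond=None)[0][1]

print(b_x, b_z)
```

The paper's analytical standard error adjustments account for the first‐stage estimation error carried into the second stage, which this sketch ignores.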

5.
We develop a general framework for analyzing the usefulness of imposing parameter restrictions on a forecasting model. We propose a measure of the usefulness of the restrictions that depends on the forecaster’s loss function and that could be time varying. We show how to conduct inference about this measure. The application of our methodology to analyzing the usefulness of no-arbitrage restrictions for forecasting the term structure of interest rates reveals that: (1) the restrictions have become less useful over time; (2) when using a statistical measure of accuracy, the restrictions are a useful way to reduce parameter estimation uncertainty, but are dominated by restrictions that do the same without using any theory; (3) when using an economic measure of accuracy, the no-arbitrage restrictions are no longer dominated by atheoretical restrictions, but for this to be true it is important that the restrictions incorporate a time-varying risk premium.
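As a sketch of one loss‐based usefulness measure, one can compare the average loss of restricted and unrestricted forecasts, in the spirit of a Diebold–Mariano comparison; the forecast errors below are hypothetical, not the paper's term‐structure application:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 1000

# Hypothetical one-step forecast errors: the restricted model is
# slightly more accurate here (smaller error variance).
e_restricted = rng.normal(0, 1.0, T)
e_unrestricted = rng.normal(0, 1.1, T)

# Usefulness under squared-error loss: average loss saved by imposing
# the restrictions (positive values mean the restrictions help).
d = e_unrestricted ** 2 - e_restricted ** 2
usefulness = d.mean()
se = d.std(ddof=1) / np.sqrt(T)
print(usefulness, usefulness / se)  # point estimate and a t-type ratio
```

Replacing the full-sample mean of `d` with a rolling mean gives the time‐varying version of such a measure.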

6.
We conduct an extensive Monte Carlo experiment to examine the finite sample properties of maximum‐likelihood‐based inference in the bivariate probit model with an endogenous dummy. We analyse the relative performance of alternative exogeneity tests, the impact of distributional misspecification and the role of exclusion restrictions to achieve parameter identification in practice. The results allow us to infer important guidelines for applied econometric practice.

7.
Standard inference in cointegrating models is fragile because it relies on an assumption of an I(1) model for the common stochastic trends, which may not accurately describe the data’s persistence. This paper considers low-frequency tests about cointegrating vectors under a range of restrictions on the common stochastic trends. We quantify how much power can potentially be gained by exploiting correct restrictions, as well as the magnitude of size distortions if such restrictions are imposed erroneously. A simple test motivated by the analysis in Wright (2000) is developed and shown to be approximately optimal for inference about a single cointegrating vector in the unrestricted stochastic trend model.

8.
In this paper, we derive restrictions for Granger noncausality in MS‐VAR models and show under what conditions a variable does not affect the forecast of the hidden Markov process. To assess the noncausality hypotheses, we apply Bayesian inference. The computational tools include a novel block Metropolis–Hastings sampling algorithm for the estimation of the underlying models. We analyze a system of monthly US data on money and income. The results of testing in MS‐VARs contradict those obtained with linear VARs: the money aggregate M1 helps in forecasting industrial production and in predicting the next period's state. Copyright © 2016 John Wiley & Sons, Ltd.

9.
This paper provides novel evidence on the effect of deregulating overtime hours restrictions on women by using the 1985 Amendments to the Labour Standards Act (LSA) in Japan as a natural experiment. The original LSA of 1947 prohibited women from working overtime exceeding two hours a day; six hours a week; and 150 hours a year. The 1985 Amendments exempted a variety of occupations and industries from such an overtime restriction on women. Applying a difference‐in‐difference model to census data, we find causal evidence pointing to the positive effect of this particular piece of labour market deregulation on the proportion of female employment. We then carry out a series of sensitivity analyses to ensure the robustness of our finding. In particular, we conduct a falsification test and an event study to show that our causal inference is not threatened by differential pretreatment trends. Finally, we use quantile regressions and find that the effect of the exemption from the overtime work restriction on women is larger for jobs with a more rapidly growing proportion of female employment.
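The difference‐in‐differences estimator underlying such a design compares the change in the outcome for exempted (treated) groups with the change for still‐restricted (control) groups, netting out group levels and common time trends. A minimal sketch with made‐up cell effects, not the paper's census data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000

# Hypothetical cells: treated occupations gain +0.05 in the female
# employment share after the reform, on top of a common time trend of
# +0.02 and a permanent treated-group level difference of +0.04.
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
share = (0.30 + 0.04 * treated + 0.02 * post
         + 0.05 * treated * post + 0.01 * rng.normal(size=n))

def cell_mean(t, p):
    m = (treated == t) & (post == p)
    return share[m].mean()

# (treated change) minus (control change) isolates the reform effect.
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(did)
```

The level difference (+0.04) and the common trend (+0.02) cancel in the double difference, leaving only the interaction effect.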

10.
Social and economic scientists are tempted to use emerging data sources like big data to compile information about finite populations as an alternative to traditional survey samples. These data sources generally cover an unknown part of the population of interest. Simply assuming that analyses made on these data are applicable to larger populations is wrong. The mere volume of data provides no guarantee for valid inference. Tackling this problem with methods originally developed for probability sampling is possible but shown here to be limited. A wider range of model‐based predictive inference methods proposed in the literature are reviewed and evaluated in a simulation study using real‐world data on annual mileages by vehicles. We propose to extend this predictive inference framework with machine learning methods for inference from samples that are generated through mechanisms other than random sampling from a target population. Describing economies and societies using sensor data, internet search data, social media and voluntary opt‐in panels is cost‐effective and timely compared with traditional surveys but requires an extended inference framework as proposed in this article.
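A minimal sketch of model‐based predictive inference for a population total under selective, non‐random coverage: fit a model on the covered units, predict the outcome for the uncovered rest, and add the two parts. All data and the coverage mechanism below are hypothetical, loosely echoing the vehicle‐mileage setting:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000  # finite population of vehicles

# Hypothetical population: annual mileage falls with vehicle age.
age = rng.uniform(1, 15, N)
mileage = 20000 - 800 * age + rng.normal(0, 2000, N)

# Selective coverage: newer vehicles are more likely to be observed
# (e.g. by a sensor), so the covered subset is not a random sample.
covered = rng.random(N) < 1 / (1 + np.exp(0.4 * (age - 6)))

# Naive estimate scales the covered mean up to N -- biased here,
# because covered vehicles are newer and drive more.
naive_total = mileage[covered].mean() * N

# Model-based prediction: fit on covered units, predict the rest.
Xc = np.column_stack([np.ones(covered.sum()), age[covered]])
beta = np.linalg.lstsq(Xc, mileage[covered], rcond=None)[0]
Xu = np.column_stack([np.ones((~covered).sum()), age[~covered]])
pred_total = mileage[covered].sum() + (Xu @ beta).sum()

true_total = mileage.sum()
print(naive_total / true_total, pred_total / true_total)
```

The predictive estimator is close to the truth because the model captures the variable driving selection; when it does not, richer (e.g. machine learning) models of the outcome are needed, which is the extension the article proposes.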

11.
This paper proposes a testing strategy for the null hypothesis that a multivariate linear rational expectations (LRE) model may have a unique stable solution (determinacy) against the alternative of multiple stable solutions (indeterminacy). The testing problem is addressed by a misspecification-type approach in which the overidentifying restrictions test obtained from the estimation of the system of Euler equations of the LRE model through the generalized method of moments is combined with a likelihood-based test for the cross-equation restrictions that the model places on its reduced form solution under determinacy. The resulting test has no power against a particular class of indeterminate equilibria, hence the non-rejection of the null hypothesis cannot be interpreted conclusively as evidence of determinacy. On the other hand, this test (i) circumvents the nonstandard inferential problem generated by the presence of auxiliary parameters that appear under indeterminacy and are not identifiable under determinacy, (ii) does not involve inequality parametric restrictions and hence the use of nonstandard inference, (iii) is consistent against dynamic misspecification of the LRE model, and (iv) is computationally simple. Monte Carlo simulations show that the suggested testing strategy delivers reasonable size coverage and power against dynamic misspecification in finite samples. An empirical illustration focuses on the determinacy/indeterminacy of a New Keynesian monetary business cycle model of the US economy.

12.
The generalised method of moments estimator may be substantially biased in finite samples, especially so when there are large numbers of unconditional moment conditions. This paper develops a class of first-order equivalent semi-parametric efficient estimators and tests for conditional moment restrictions models based on a local or kernel-weighted version of the Cressie–Read power divergence family of discrepancies. This approach is similar in spirit to the empirical likelihood methods of Kitamura et al. [2004. Empirical likelihood-based inference in conditional moment restrictions models. Econometrica 72, 1667–1714] and Tripathi and Kitamura [2003. Testing conditional moment restrictions. Annals of Statistics 31, 2059–2095]. These efficient local methods avoid the necessity of explicit estimation of the conditional Jacobian and variance matrices of the conditional moment restrictions and provide empirical conditional probabilities for the observations.

13.
In this paper, we propose a fixed design wild bootstrap procedure to test parameter restrictions in vector autoregressive models, which is robust to conditionally heteroskedastic error terms. The wild bootstrap does not require any parametric specification of the volatility process and takes contemporaneous error correlation implicitly into account. Via a Monte Carlo investigation, empirical size and power properties of the method are illustrated for the case of white noise under the null hypothesis. We compare the bootstrap approach with standard ordinary least squares (OLS)-based, weighted least squares (WLS) and quasi-maximum likelihood (QML) approaches. In terms of empirical size, the proposed method outperforms competing approaches and achieves size-adjusted power close to WLS or QML inference. A White correction of standard OLS inference is satisfactory only in large samples. We investigate the case of Granger causality in a bivariate system of inflation expectations in France and the United Kingdom. Our evidence suggests that the former are Granger causal for the latter while for the reverse relation Granger non-causality cannot be rejected.
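A minimal sketch of a fixed‐design wild bootstrap test of one zero restriction (Granger non‐causality) in a single equation drawn from a bivariate system. The data‐generating process is hypothetical, with a heteroskedastic error to motivate the method; Rademacher multipliers flip residual signs while the regressor matrix stays fixed:

```python
import numpy as np

rng = np.random.default_rng(4)
T, B = 400, 499

# Hypothetical data: y2 does NOT Granger cause y1, and the error
# variance of y1 moves with lagged y2 (conditional heteroskedasticity).
y2 = rng.normal(size=T)
scale = np.sqrt(0.5 + 0.5 * np.roll(y2, 1) ** 2)
y1 = np.zeros(T)
for t in range(1, T):
    y1[t] = 0.4 * y1[t - 1] + scale[t] * rng.normal()

Y = y1[1:]
X = np.column_stack([np.ones(T - 1), y1[:-1], y2[:-1]])

def wald_stat(Yv, Xv):
    """OLS Wald statistic for 'coefficient on lagged y2 equals zero'."""
    b = np.linalg.lstsq(Xv, Yv, rcond=None)[0]
    e = Yv - Xv @ b
    s2 = e @ e / (len(Yv) - Xv.shape[1])
    XtXi = np.linalg.inv(Xv.T @ Xv)
    return b[2] ** 2 / (s2 * XtXi[2, 2])

stat = wald_stat(Y, X)

# Restricted (null) fit: drop the lagged y2 regressor.
Xr = X[:, :2]
br = np.linalg.lstsq(Xr, Y, rcond=None)[0]
er = Y - Xr @ br

# Fixed-design wild bootstrap: regressors stay fixed, residuals are
# multiplied by random signs, and the statistic is recomputed.
boot = np.empty(B)
for i in range(B):
    Ystar = Xr @ br + er * rng.choice([-1.0, 1.0], size=T - 1)
    boot[i] = wald_stat(Ystar, X)

pval = (boot >= stat).mean()
print(round(pval, 3))
```

Because the multipliers preserve each residual's magnitude, the bootstrap distribution inherits the heteroskedasticity pattern without any parametric volatility model.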

14.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
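The valid frequentist covariance for the tau-th quantile regression estimator has the sandwich form tau(1-tau) D1^{-1} D0 D1^{-1} / n, with D0 = E[xx'] and D1 = E[f(0|x) xx']; the adjustment discussed in the paper rescales the posterior covariance toward this target. As a sketch, here is the sandwich itself for the intercept-only (sample median) case, where the asymptotic variance is known in closed form; the kernel density step is an illustrative choice of ours, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)
n, tau = 5000, 0.5
y = rng.normal(size=n)

# Intercept-only quantile regression: the tau-th sample quantile.
beta_hat = np.quantile(y, tau)
resid = y - beta_hat

# Sandwich pieces: D0 = mean(x x') = 1 here; D1 = f(0) * D0, with the
# error density at zero estimated by a Gaussian kernel on residuals.
h = 1.06 * resid.std() * n ** (-1 / 5)  # rule-of-thumb bandwidth
f0 = np.exp(-0.5 * (resid / h) ** 2).mean() / (h * np.sqrt(2 * np.pi))

var_sandwich = tau * (1 - tau) / (n * f0 ** 2)
var_asymptotic = np.pi / (2 * n)  # known value: median of N(0,1) data
print(var_sandwich, var_asymptotic)
```

The raw posterior covariance under the asymmetric Laplace working likelihood does not converge to this sandwich, which is why the unadjusted posterior intervals are invalid.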

15.
Steady‐state restrictions are commonly imposed on highly persistent variables to achieve stationarity prior to confronting rational expectations models with data. However, the resulting steady‐state deviations are often surprisingly persistent, indicating that some aspects of the underlying theory may be empirically problematic. This paper discusses how to formulate steady‐state restrictions in rational expectations models with latent forcing variables and test their validity using cointegration techniques. The approach is illustrated by testing steady‐state restrictions for alternative specifications of the New Keynesian model and shown to be able to discriminate between different assumptions on the sources of the permanent shocks.

16.
Long‐run restrictions have been used extensively for identifying structural shocks in vector autoregressive (VAR) analysis. Such restrictions are typically just‐identifying but can be checked by utilizing changes in volatility. This paper reviews and contrasts the volatility models that have been used for this purpose. Three main approaches have been used: exogenously generated changes in the unconditional residual covariance matrix, changing volatility modelled by a Markov‐switching mechanism, and multivariate generalized autoregressive conditional heteroskedasticity models. Using changes in volatility for checking long‐run identifying restrictions in structural VAR analysis is illustrated by reconsidering models for identifying fundamental components of stock prices.

17.
Record linkage is the act of bringing together records from two files that are believed to belong to the same unit (e.g., a person or business). It is a low‐cost way of increasing the set of variables available for analysis. Errors may arise in the linking process if an error‐free unit identifier is not available. Two types of linking error are an incorrect link (records belonging to two different units are linked) and a missed record (an unlinked record for which a correct link exists). Naively ignoring linkage errors may mean that analysis of the linked file is biased. This paper outlines a “weighting approach” to making correct inference about regression coefficients and population totals in the presence of such linkage errors. This approach is designed for analysts who do not have the expertise or time to use specialist software required by other approaches but who are comfortable using weights in inference. The performance of the estimator is demonstrated in a simulation study.

18.
We propose a novel identification‐robust test for the null hypothesis that an estimated New Keynesian model has a reduced form consistent with the unique stable solution against the alternative of sunspot‐driven multiple equilibria. Our strategy is designed to handle identification failures as well as the misspecification of the relevant propagation mechanisms. We invert a likelihood ratio test for the cross‐equation restrictions (CER) that the New Keynesian system places on its reduced‐form solution under determinacy. If the CER are not rejected, sunspot‐driven expectations can be ruled out from the model equilibrium and we accept the structural model. Otherwise, we move to a second step and invert an Anderson and Rubin‐type test for the orthogonality restrictions (OR) implied by the system of structural Euler equations. The hypothesis of indeterminacy and the structural model are accepted if the OR are not rejected. We investigate the finite‐sample performance of the suggested identification‐robust two‐step testing strategy by some Monte Carlo experiments and then apply it to a New Keynesian AD/AS model estimated with actual US data. In spite of some evidence of weak identification for the ‘Great Moderation’ period, our results offer formal support to the hypothesis of a switch from indeterminacy to a scenario consistent with uniqueness occurring in the late 1970s. Our identification‐robust full‐information confidence set for the structural parameters computed on the ‘Great Moderation’ regime turns out to be more precise than the intervals previously reported in the literature through ‘limited‐information’ methods. Copyright © 2014 John Wiley & Sons, Ltd.

19.
This paper applies the minimax regret criterion to choice between two treatments conditional on observation of a finite sample. The analysis is based on exact small sample regret and does not use asymptotic approximations or finite-sample bounds. Core results are: (i) Minimax regret treatment rules are well approximated by empirical success rules in many cases, but differ from them significantly, both in terms of how the rules look and in terms of maximal regret incurred, for small sample sizes and certain sample designs. (ii) Absent prior cross-covariate restrictions on treatment outcomes, they prescribe inference that is completely separate across covariates, leading to no-data rules as the support of a covariate grows. I conclude by offering an assessment of these results.
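For two Bernoulli treatments with n observations per arm, the empirical success rule picks the arm with the higher sample mean, and its exact (not asymptotic) regret can be enumerated over binomial outcomes. A sketch; the even tie‐breaking and the grid for the maximal‐regret search are our own illustrative choices:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def regret_empirical_success(n, p1, p2):
    """Exact expected regret of 'pick the arm with the higher sample
    mean, splitting ties 50/50', with n Bernoulli draws per arm."""
    best = max(p1, p2)
    prob_pick_1 = 0.0
    for k1 in range(n + 1):
        for k2 in range(n + 1):
            pr = binom_pmf(k1, n, p1) * binom_pmf(k2, n, p2)
            if k1 > k2:
                prob_pick_1 += pr
            elif k1 == k2:
                prob_pick_1 += 0.5 * pr
    expected = prob_pick_1 * p1 + (1 - prob_pick_1) * p2
    return best - expected

# Maximal regret of the rule over a grid of (p1, p2), n = 10 per arm.
grid = [i / 20 for i in range(21)]
max_regret = max(regret_empirical_success(10, p1, p2)
                 for p1 in grid for p2 in grid)
print(round(max_regret, 4))
```

A minimax regret rule is one minimizing this worst case over all feasible rules; the paper's point is that the empirical success rule comes close in many, but not all, designs.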

20.
This paper considers the location‐scale quantile autoregression in which the location and scale parameters are subject to regime shifts. The regime changes in lower and upper tails are determined by the outcome of a latent, discrete‐state Markov process. The new method provides direct inference and estimation for different parts of a non‐stationary time series distribution. Bayesian inference for switching regimes within a quantile, via a three‐parameter asymmetric Laplace distribution, is adapted and designed for parameter estimation. Using the Bayesian output, the marginal likelihood is readily available for testing the presence and the number of regimes. The simulation study shows that predictions of regimes and conditional quantiles based on the asymmetric Laplace likelihood are fairly comparable with those from the true model distributions. However, ignoring that autoregressive coefficients might be quantile dependent leads to substantial bias in both regime inference and quantile prediction. The potential of this new approach is illustrated in the empirical applications to the US inflation and real exchange rates for asymmetric dynamics and the S&P 500 index returns of different frequencies for financial market risk assessment.

