Similar literature
20 similar documents retrieved.
1.
Several authors have suggested using the jackknife technique to approximate a standard error for the Gini coefficient. It has also been shown that the Gini measure can be obtained simply from an artificial ordinary least squares (OLS) regression based on the data and their ranks. We show that obtaining an exact analytical expression for the standard error is actually a trivial matter. Further, by extending the regression framework to one involving seemingly unrelated regressions (SUR), several interesting hypotheses regarding the sensitivity of the Gini coefficient to changes in the data are readily tested in a formal manner.
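As a point of reference for the quantities discussed in this abstract, the following is a minimal Python sketch (not the paper's OLS/SUR framework) that computes the Gini coefficient from the standard rank-based identity and a jackknife standard error of the kind the exact analytical expression is meant to supersede; the data and function names are illustrative.

```python
import numpy as np

def gini(y):
    """Gini coefficient via the ranked-data identity
    G = 2*sum(i*y_(i)) / (n*sum(y)) - (n+1)/n, with y_(i) sorted ascending."""
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * y) / (n * y.sum()) - (n + 1.0) / n

def gini_jackknife_se(y):
    """Leave-one-out jackknife standard error -- the approximation that an
    exact analytical expression would replace."""
    y = np.asarray(y, dtype=float)
    n = y.size
    loo = np.array([gini(np.delete(y, k)) for k in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    incomes = rng.lognormal(mean=0.0, sigma=0.8, size=500)  # hypothetical income data
    print(f"Gini = {gini(incomes):.4f}, jackknife SE = {gini_jackknife_se(incomes):.4f}")
```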

2.
The potential shortcomings of the regression (or stochastic) approach for computing exact standard errors of the Gini coefficient using ordinary least squares or weighted least squares are pointed out.

3.
4.
We demonstrate that when testing for stochastic dominance of order three and above, using a weighted version of the Kolmogorov–Smirnov-type statistic proposed by McFadden [1989. In: Fomby, T.B., Seo, T.K. (Eds.), Studies in the Economics of Uncertainty. Springer, New York, pp. 113–134] is necessary for obtaining a non-degenerate asymptotic distribution. Since the asymptotic distribution is complex, we discuss a bootstrap approximation for it in the context of a real application.
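For intuition only, the sketch below illustrates the general construction behind such tests: repeated integration of empirical CDFs to obtain higher-order dominance functions, a sup-norm (Kolmogorov–Smirnov-type) contrast, and a bootstrap approximation of its distribution. It deliberately omits the weighting that the paper shows is needed for orders three and above, and it resamples from the pooled sample under equality as a simplification; all names and the data-generating process are hypothetical.

```python
import numpy as np

def dominance_fn(sample, grid, order):
    """D_1 is the empirical CDF; D_s is the repeated integral of D_{s-1},
    the object compared when testing s-th order stochastic dominance."""
    d = (sample[None, :] <= grid[:, None]).mean(axis=1)  # empirical CDF on the grid
    step = grid[1] - grid[0]
    for _ in range(order - 1):
        d = np.cumsum(d) * step
    return d

def ks_type_stat(x, y, grid, order):
    """One-sided sup-norm (KS-type) contrast of the two dominance functions."""
    n = min(len(x), len(y))
    return np.sqrt(n) * np.max(dominance_fn(x, grid, order) - dominance_fn(y, grid, order))

def bootstrap_pvalue(x, y, order=3, n_boot=499, seed=0):
    """Bootstrap approximation: resample from the pooled sample (a simplification)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), 200)
    stat = ks_type_stat(x, y, grid, order)
    pooled = np.concatenate([x, y])
    boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        boot[b] = ks_type_stat(xb, yb, grid, order)
    return stat, float(np.mean(boot >= stat))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.lognormal(0.0, 0.6, size=400)   # hypothetical income samples
    y = rng.lognormal(0.1, 0.6, size=400)
    stat, p = bootstrap_pvalue(x, y, order=3)
    print(f"KS-type statistic = {stat:.3f}, bootstrap p-value = {p:.3f}")
```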

5.
This paper addresses the problem of endogenous regressors arising either from unobserved heterogeneity that is correlated with the regressors or from measurement error in the regressors. A simple two‐stage testing procedure is proposed for identifying the underlying cause of correlation between the regressors and the error term. The statistical performance of the resulting sequential test is assessed using simulated data.

6.
This paper addresses the issue of testing the ‘hybrid’ New Keynesian Phillips curve (NKPC) through vector autoregressive (VAR) systems and likelihood methods, giving special emphasis to the case where the variables are non‐stationary. The idea is to use a VAR for both the inflation rate and the explanatory variable(s) to approximate the dynamics of the system and derive testable restrictions. Attention is focused on the ‘inexact’ formulation of the NKPC. Empirical results over the period 1971–98 show that the NKPC is far from providing a ‘good first approximation’ of inflation dynamics in the Euro area.

7.
Exchange rate forecasting is hard, and the seminal result of Meese and Rogoff [Meese, R., Rogoff, K., 1983. Empirical exchange rate models of the seventies: Do they fit out of sample? Journal of International Economics 14, 3–24] that the exchange rate is well approximated by a driftless random walk, at least for prediction purposes, still stands despite much effort at constructing other forecasting models. However, in several other macro and financial forecasting applications, researchers in recent years have considered methods that effectively combine the information in a large number of time series. In this paper, I apply one such method for pooling forecasts from several different models, Bayesian Model Averaging, to the problem of pseudo out-of-sample exchange rate prediction. For most currency–horizon pairs, the Bayesian Model Averaging forecasts, using a sufficiently high degree of shrinkage, give slightly smaller out-of-sample mean square prediction error than the random walk benchmark. The forecasts generated by this model averaging methodology are, however, very close to, though not identical to, those from the random walk forecast.
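A minimal sketch of the forecast-combination idea, assuming a linear model for the one-step exchange rate change: each regressor subset contributes a forecast whose OLS coefficients are shrunk toward zero (zero change being the random-walk benchmark), and model weights use a BIC approximation in place of the exact g-prior marginal likelihoods. The shrinkage parameter g, the data, and all names are hypothetical.

```python
import numpy as np
from itertools import combinations

def bma_forecast(y, X, x_new, g=1.0):
    """Average forecasts of the next-period change over all regressor subsets.
    Posterior-mean coefficients are the OLS estimates shrunk toward zero by
    g/(1+g), so heavy shrinkage (small g) pulls the forecast toward the
    random-walk (no-change) benchmark; model weights use a BIC approximation
    rather than the exact g-prior marginal likelihood."""
    n, k = X.shape
    shrink = g / (1.0 + g)
    forecasts, bics = [], []
    for r in range(1, k + 1):
        for cols in combinations(range(k), r):
            idx = list(cols)
            Xm = X[:, idx]
            beta = np.linalg.lstsq(Xm, y, rcond=None)[0]
            resid = y - Xm @ beta
            bics.append(n * np.log(resid @ resid / n) + len(idx) * np.log(n))
            forecasts.append(shrink * (x_new[idx] @ beta))
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))  # rescale before exponentiating
    w /= w.sum()
    return float(np.array(forecasts) @ w)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, k = 120, 4                                       # hypothetical monthly sample, 4 fundamentals
    X = rng.standard_normal((n, k))
    y = 0.05 * X[:, 0] + 0.5 * rng.standard_normal(n)   # weak predictability, as in the FX setting
    x_new = rng.standard_normal(k)
    print(f"BMA forecast of next change: {bma_forecast(y, X, x_new, g=1.0):+.4f}")
```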

8.
Maximization of utility implies that consumer demand systems have a Slutsky matrix which is everywhere symmetric. However, previous non- and semi-parametric approaches to the estimation of consumer demand systems do not give estimators that are restricted to satisfy this condition, nor do they offer powerful tests of this restriction. We use nonparametric modeling to test and impose Slutsky symmetry in a system of expenditure share equations over prices and expenditure. In this context, Slutsky symmetry is a set of nonlinear cross-equation restrictions on levels and derivatives of consumer demand equations. The key insight is that, due to the differing convergence rates of levels and derivatives and the fact that the symmetry restrictions are linear in derivatives, both the test and the symmetry-restricted estimator behave asymptotically as if these restrictions were (locally) linear. We establish large and finite sample properties of our methods, and show that our test has advantages over the only other comparable test. All methods we propose are implemented with Canadian micro-data. We find that our nonparametric analysis yields results that are statistically significantly and qualitatively different from those of traditional parametric estimators and tests.

9.
This paper proposes and analyses the autoregressive conditional root (ACR) time‐series model. This multivariate dynamic mixture autoregression allows for non‐stationary epochs. It proves to be an appealing alternative to existing nonlinear models, e.g. the threshold autoregressive or Markov switching class of models, which are commonly used to describe nonlinear dynamics as implied by arbitrage in the presence of transaction costs. Simple conditions on the parameters of the ACR process and its innovations are shown to imply geometric ergodicity, stationarity and existence of moments. Furthermore, consistency and asymptotic normality of the maximum likelihood estimators are established. An application to real exchange rate data illustrates the analysis.

10.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small‐ and medium‐sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite‐sample distribution of the long‐run variance estimator used in the KPSS test, while the second is the serial correlation not captured by the long‐run variance estimator because of too narrow a choice of the truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite‐sample distribution of the KPSS test statistic, conditional on the time‐series dimension and the truncation lag parameter, is used. Hence, finite‐sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is lower raw power against a non‐stationary alternative hypothesis.
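The objects involved are easy to make concrete. The sketch below computes the KPSS level-stationarity statistic with a Bartlett-kernel long-run variance estimator and simulates finite-sample critical values conditional on the sample size and truncation lag, here under an i.i.d. Gaussian null as a simplifying assumption; the AR(1) example series and all parameter values are illustrative.

```python
import numpy as np

def kpss_stat(y, lag):
    """KPSS level-stationarity statistic with a Bartlett-kernel long-run
    variance estimator truncated at `lag`."""
    y = np.asarray(y, dtype=float)
    T = y.size
    e = y - y.mean()
    S = np.cumsum(e)
    lrv = e @ e / T
    for j in range(1, lag + 1):
        w = 1.0 - j / (lag + 1.0)            # Bartlett weight
        lrv += 2.0 * w * (e[j:] @ e[:-j]) / T
    return (S @ S) / (T ** 2 * lrv)

def finite_sample_cv(T, lag, alpha=0.05, n_sim=5000, seed=0):
    """Simulated finite-sample critical value, conditional on T and the
    truncation lag, under an i.i.d. Gaussian null (a simplifying assumption)."""
    rng = np.random.default_rng(seed)
    stats = np.array([kpss_stat(rng.standard_normal(T), lag) for _ in range(n_sim)])
    return np.quantile(stats, 1.0 - alpha)

if __name__ == "__main__":
    T, lag = 100, 4
    rng = np.random.default_rng(3)
    y = np.empty(T); y[0] = 0.0
    for t in range(1, T):                    # illustrative AR(1): stationary but serially correlated
        y[t] = 0.7 * y[t - 1] + rng.standard_normal()
    print(f"KPSS stat = {kpss_stat(y, lag):.3f}, "
          f"asymptotic 5% cv = 0.463, simulated finite-sample 5% cv = {finite_sample_cv(T, lag):.3f}")
```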

11.
We conduct an extensive Monte Carlo experiment to examine the finite sample properties of maximum‐likelihood‐based inference in the bivariate probit model with an endogenous dummy. We analyse the relative performance of alternative exogeneity tests, the impact of distributional misspecification and the role of exclusion restrictions to achieve parameter identification in practice. The results allow us to infer important guidelines for applied econometric practice.

12.
This paper analyses the unbiasedness hypothesis between spot and forward volatility, using both the actual and the continuous path of realised volatility, and focusing on long-memory properties. For this purpose, we use daily realised volatility with jumps for the USD/EUR exchange rate negotiated in the FX market and employ fractional integration and cointegration techniques. Both series have long-range dependence, and so does the error correction term of their long-run relationship. Hence, deviations from equilibrium are highly persistent, and the effects of shocks affecting the long-run relationship dissipate very slowly. While for long-term contracts, there is some empirical evidence that the forward volatility unbiasedness hypothesis does not hold – and, thus, that forward implied volatility is a systematically downward-biased predictor of future spot volatility – for short-term contracts, the evidence is mixed.

13.
We provide analytical formulae for the asymptotic bias (ABIAS) and mean-squared error (AMSE) of the IV estimator, and obtain approximations thereof based on an asymptotic scheme which essentially requires the expectation of the first stage F-statistic to converge to a finite (possibly small) positive limit as the number of instruments approaches infinity. Our analytical formulae can be viewed as generalizing the bias and MSE results of [Richardson and Wu 1971. A note on the comparison of ordinary and two-stage least squares estimators. Econometrica 39, 973–982] to the case with nonnormal errors and stochastic instruments. Our approximations are shown to compare favorably with approximations due to [Morimune 1983. Approximate distributions of k-class estimators when the degree of overidentifiability is large compared with the sample size. Econometrica 51, 821–841] and [Donald and Newey 2001. Choosing the number of instruments. Econometrica 69, 1161–1191], particularly when the instruments are weak. We also construct consistent estimators for the ABIAS and AMSE, and we use these to further construct a number of bias corrected OLS and IV estimators, the properties of which are examined both analytically and via a series of Monte Carlo experiments.
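A small Monte Carlo sketch of the setting these approximations target: a single endogenous regressor with many weak instruments, where first-stage strength (and hence the expected first-stage F) is controlled by a scale parameter. It reports simulated OLS and 2SLS biases rather than the paper's analytical ABIAS/AMSE formulae; all parameter values are hypothetical.

```python
import numpy as np

def simulate_bias(n=200, n_instr=10, beta=1.0, rho=0.5, pi_scale=0.05,
                  n_rep=2000, seed=0):
    """Monte Carlo comparison of OLS and 2SLS bias for a single endogenous
    regressor with many instruments; pi_scale sets the first-stage strength,
    so small values keep the expected first-stage F close to a small limit."""
    rng = np.random.default_rng(seed)
    pi = np.full(n_instr, pi_scale)
    ols, iv, fstats = [], [], []
    for _ in range(n_rep):
        Z = rng.standard_normal((n, n_instr))
        u = rng.standard_normal(n)
        v = rho * u + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)  # endogeneity via rho
        x = Z @ pi + v
        y = beta * x + u
        ols.append((x @ y) / (x @ x))                    # OLS slope (no intercept)
        xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # first-stage fitted values
        iv.append((xhat @ y) / (xhat @ x))               # 2SLS slope
        rss1 = np.sum((x - xhat) ** 2)                   # first-stage F statistic
        fstats.append(((x @ x - rss1) / n_instr) / (rss1 / (n - n_instr)))
    return np.mean(ols) - beta, np.mean(iv) - beta, np.mean(fstats)

if __name__ == "__main__":
    b_ols, b_iv, f_bar = simulate_bias()
    print(f"mean first-stage F = {f_bar:.2f}, OLS bias = {b_ols:+.3f}, 2SLS bias = {b_iv:+.3f}")
```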

14.
Instrumental variable estimation in the presence of many moment conditions
This paper develops shrinkage methods for addressing the “many instruments” problem in the context of instrumental variable estimation. It has been observed that instrumental variable estimators may behave poorly if the number of instruments is large. This problem can be addressed by shrinking the influence of a subset of instrumental variables. The procedure can be understood as a two-step process of shrinking some of the OLS coefficient estimates from the regression of the endogenous variables on the instruments, then using the predicted values of the endogenous variables (based on the shrunk coefficient estimates) as the instruments. The shrinkage parameter is chosen to minimize the asymptotic mean square error. The optimal shrinkage parameter has a closed form, which makes it easy to implement. A Monte Carlo study shows that the shrinkage method works well and performs better in many situations than do existing instrument selection procedures.
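The two-step process described above is straightforward to sketch. In the toy version below, the first-stage OLS coefficients on all instruments beyond a kept subset are scaled by a common factor s before forming the fitted values used as the instrument; this uniform scaling of an arbitrarily designated subset is an illustrative stand-in for the paper's MSE-optimal shrinkage rule, and the data-generating process is hypothetical.

```python
import numpy as np

def shrinkage_2sls(y, x, Z, n_keep, s):
    """Two-step 'shrink then instrument' estimator for one endogenous regressor:
    (i) estimate the first stage x = Z @ pi by OLS and scale the coefficients on
    every instrument beyond the first n_keep by s in [0, 1]; (ii) use the
    resulting fitted values as the instrument.  s = 1 reproduces plain 2SLS and
    s = 0 drops the extra instruments entirely."""
    pi = np.linalg.lstsq(Z, x, rcond=None)[0]
    pi[n_keep:] *= s                      # shrink the 'doubtful' instruments
    xhat = Z @ pi
    return float((xhat @ y) / (xhat @ x))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n, k, n_keep, beta = 200, 20, 3, 1.0
    Z = rng.standard_normal((n, k))
    u = rng.standard_normal(n)
    pi_true = np.r_[np.full(n_keep, 0.4), np.full(k - n_keep, 0.02)]  # a few strong, many weak instruments
    x = Z @ pi_true + 0.6 * u + 0.8 * rng.standard_normal(n)
    y = beta * x + u
    for s in (1.0, 0.5, 0.0):
        print(f"s = {s:.1f}: beta_hat = {shrinkage_2sls(y, x, Z, n_keep, s):.3f}")
```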

15.
It is well-known that size adjustments based on bootstrapping the t-statistic perform poorly when instruments are weakly correlated with the endogenous explanatory variable. In this paper, we provide a theoretical proof that guarantees the validity of the bootstrap for the score statistic. This theory does not follow from standard results, since the score statistic is not a smooth function of sample means and some parameters are not consistently estimable when the instruments are uncorrelated with the explanatory variable.

16.
We investigate the economic significance of trading off empirical validity of models against other desirable model properties. Our investigation is based on three alternative econometric systems of the supply side, in a model that can be used to discuss optimal monetary policy in Norway. Our results caution against compromising empirical validity when selecting a model for policy analysis. We also find large costs from basing policies on the robust model, or on a suite of models, even when it contains the valid model. This confirms an important role for econometric modelling and evaluation in model choice for policy analysis.

17.
This paper extends the method of local instrumental variables developed by Heckman and Vytlacil [Heckman, J., Vytlacil, E., 2005. Structural equations, treatment effects, and econometric policy evaluation. Econometrica 73(3), 669–738] to the estimation of not only means, but also distributions of potential outcomes. The newly developed method is illustrated by applying it to changes in college enrollment and wage inequality using data from the National Longitudinal Survey of Youth of 1979. Increases in college enrollment cause changes in the distribution of ability among college and high school graduates. This paper estimates a semiparametric selection model of schooling and wages to show that, for fixed skill prices, a 14% increase in college participation (analogous to the increase observed in the 1980s) reduces the college premium by 12% and increases the 90–10 percentile ratio among college graduates by 2%.

18.
This paper considers semiparametric identification of structural dynamic discrete choice models and models for dynamic treatment effects. Time to treatment and counterfactual outcomes associated with treatment times are jointly analyzed. We examine the implicit assumptions of the dynamic treatment model using the structural model as a benchmark. For the structural model we show the gains from using cross-equation restrictions connecting choices to associated measurements and outcomes. In the dynamic discrete choice model, we identify both subjective and objective outcomes, distinguishing ex post and ex ante outcomes. We show how to identify agent information sets.

19.
A number of information-theoretic alternatives to GMM have recently been proposed in the literature. For practical use and general interpretation, the main drawback of these alternatives, particularly in the case of conditional moment restrictions, is that they give up the computational and interpretational simplicity of quadratic optimization. The main contribution of this paper is to analyze the informational content of estimating equations within the unified framework of Chi-square distance. Improved inference by control variables, closed form formulae for implied probabilities and information-theoretic interpretations of continuously updated GMM are discussed in the two cases of unconditional and conditional moment restrictions.
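As one concrete instance of the objects discussed, the sketch below writes down the continuously updated GMM objective for a simple linear model with unconditional moment restrictions, in which the weighting matrix is re-evaluated at every parameter value; this is only the unconditional special case, not the paper's conditional-moment or control-variable analysis, and the simulated data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cue_objective(theta, y, x, Z):
    """Continuously updated GMM objective for the linear moment condition
    E[z_i (y_i - x_i * theta)] = 0: the weighting matrix is re-evaluated at
    every theta, which underlies the chi-square-distance interpretation."""
    n = y.size
    e = y - theta * x
    g_bar = Z.T @ e / n                        # sample moments at theta
    W = (Z * e[:, None] ** 2).T @ Z / n        # moment covariance at theta
    return n * g_bar @ np.linalg.solve(W, g_bar)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, k, beta = 500, 5, 1.0                   # hypothetical simulated IV model
    Z = rng.standard_normal((n, k))
    u = rng.standard_normal(n)
    x = Z @ np.full(k, 0.4) + 0.5 * u + rng.standard_normal(n)
    y = beta * x + u
    res = minimize_scalar(cue_objective, bounds=(-5, 5), args=(y, x, Z), method="bounded")
    print(f"CUE estimate = {res.x:.3f}, J-statistic = {res.fun:.2f}")
```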

20.
This paper compares the economic questions addressed by instrumental variables estimators with those addressed by structural approaches. We discuss Marschak’s Maxim: estimators should be selected on the basis of their ability to answer well-posed economic problems with minimal assumptions. A key identifying assumption that allows structural methods to be more informative than IV can be tested with data and does not have to be imposed.
