Similar Articles
20 similar articles found.
1.
This paper analyzes many weak moment asymptotics under the possibility of similar moments. The possibility of highly correlated moments arises when there are many of them. Knight and Fu (2000) designate the issue of similar regressors as the “nearly singular” design in the least squares case. In the nearly singular design, the sample variance converges to a singular limit term. However, Knight and Fu (2000) assume that, on the nullspace of the limit term, the difference between the sample variance and the singular matrix converges in probability to a positive definite matrix when multiplied by an appropriate rate. We consider specifically the Continuous Updating Estimator (CUE) with many weak moments under the nearly singular design. We show that the nearly singular design affects the form of the many weak moment asymptotic limit introduced by Newey and Windmeijer (2009a). However, the estimator is still consistent and the Wald test has the standard χ² limit.
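For reference, the CUE solves a GMM objective whose weight matrix updates jointly with the parameter; a standard textbook formulation in our own notation (not reproduced from the paper) is

\hat{\theta}_{\mathrm{CUE}} \;=\; \arg\min_{\theta}\; n\,\bar{g}_n(\theta)^{\top}\,\hat{\Omega}(\theta)^{-1}\,\bar{g}_n(\theta),
\qquad
\bar{g}_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} g(z_i,\theta),

where \hat{\Omega}(\theta) is the sample variance matrix of the moment functions evaluated at the same \theta, so the weighting is "continuously updated" along with the parameter.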

2.
Testing with many weak instruments
This paper establishes the asymptotic distributions of the likelihood ratio (LR), Anderson–Rubin (AR), and Lagrange multiplier (LM) test statistics under “many weak IV asymptotics.” These asymptotics are relevant when the number of IVs is large and the coefficients on the IVs are relatively small. The asymptotic results hold under the null and under suitable alternatives. Hence, power comparisons can be made.
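As a point of reference, here is a minimal NumPy sketch of the classical (fixed number of IVs) Anderson–Rubin statistic; the many-weak-IV asymptotics studied in the paper rescale this object, and all names below are hypothetical:

import numpy as np

def anderson_rubin(y, X, Z, beta0):
    """Classical AR statistic for H0: beta = beta0 in y = X beta + u,
    with instrument matrix Z (n x k). A textbook sketch, not the
    paper's many-weak-IV normalization."""
    n, k = Z.shape
    u0 = y - X @ beta0                                # residuals under the null
    Pu = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u0)       # projection onto span(Z)
    ar = ((u0 @ Pu) / k) / ((u0 @ u0 - u0 @ Pu) / (n - k))
    return ar                                         # ~ F(k, n-k) under H0 with normal errors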

3.
We examine the econometric implications of the decision problem faced by a profit/utility-maximizing lender operating in a simple “double-binary” environment, where the two actions available are “approve” or “reject”, and the two states of the world are “pay back” or “default”. In practice, such decisions are often made by applying a fixed cutoff to the maximum likelihood estimate of a parametric model of the default probability. Following Elliott and Lieli (2007), we argue that this practice might contradict the lender’s economic objective and, using German loan data, we illustrate the use of “context-specific” cutoffs and an estimation method derived directly from the lender’s problem. We also provide a brief discussion of how to incorporate legal constraints, such as the prohibition of disparate treatment of potential borrowers, into the lender’s problem.
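To make the idea of a context-specific cutoff concrete, here is a minimal sketch of the textbook break-even calculation (illustrative payoffs, not taken from the paper): with profit b on a repaid loan and loss l on a default, approving is profitable exactly when the default probability is below b/(b+l).

def profit_cutoff(b, l):
    """Break-even default-probability cutoff for a lender earning b on a
    repaid loan and losing l on a default (hypothetical payoffs)."""
    return b / (b + l)

def decide(p_default, b=0.2, l=1.0):
    # approve iff expected profit (1 - p)*b - p*l > 0, i.e. p < b/(b+l)
    return "approve" if p_default < profit_cutoff(b, l) else "reject"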

4.
This paper deals with a special case of estimation with grouped data, where the dependent variable is only available for groups, whereas the endogenous regressors are available at the individual level. By estimating the first stage using the available individual data, and then estimating the second stage at the aggregate level, it might be possible to gain efficiency relative to the OLS and 2SLS estimators that use only grouped data. We term this the mixed-2SLS estimator (M2SLS). The M2SLS estimator is consistent and asymptotically normal. We also provide a test of the efficiency of M2SLS relative to the OLS and 2SLS estimators.
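A minimal sketch of the two-step idea under simplifying assumptions (scalar regressor and instrument, equal within-group weighting, y_group ordered by sorted group label; all names hypothetical):

import numpy as np

def m2sls(x_ind, z_ind, group_ind, y_group):
    """Sketch of the mixed-2SLS idea: fit the first stage on individual
    data, average fitted values by group, then run the second stage on
    group-level data."""
    # first stage: x on z at the individual level
    Z = np.column_stack([np.ones_like(z_ind), z_ind])
    pi = np.linalg.lstsq(Z, x_ind, rcond=None)[0]
    x_hat = Z @ pi
    # aggregate fitted values to group means (groups sorted, matching y_group)
    groups = np.unique(group_ind)
    x_bar = np.array([x_hat[group_ind == g].mean() for g in groups])
    # second stage: group outcome on group-mean fitted regressor
    X = np.column_stack([np.ones_like(x_bar), x_bar])
    return np.linalg.lstsq(X, y_group, rcond=None)[0]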

5.
I study inverse probability weighted M-estimation under a general missing data scheme. Examples include M-estimation with missing data due to a censored survival time, propensity score estimation of the average treatment effect in the linear exponential family, and variable probability sampling with observed retention frequencies. I extend an important result known to hold in special cases: estimating the selection probabilities is generally more efficient than if the known selection probabilities could be used in estimation. For the treatment effect case, the setup allows a general characterization of a “double robustness” result due to Scharfstein et al. [1999. Rejoinder. Journal of the American Statistical Association 94, 1135–1146].
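A minimal sketch of inverse probability weighting for the average treatment effect with estimated propensity scores (a generic illustration of the idea, not the paper's general M-estimation framework):

import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(y, d, X):
    """IPW estimate of the average treatment effect. y: outcome,
    d: binary treatment indicator, X: covariate matrix. The selection
    probabilities are estimated, per the efficiency result above."""
    p_hat = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
    # Horvitz-Thompson style weighting by estimated propensity scores
    return np.mean(d * y / p_hat - (1 - d) * y / (1 - p_hat))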

6.
This paper studies likelihood-based estimation and inference in parametric discontinuous threshold regression models with i.i.d. data. The setup allows heteroskedasticity and threshold effects in both mean and variance. By interpreting the threshold point as a “middle” boundary of the threshold variable, we find that the Bayes estimator is asymptotically efficient among all estimators in the locally asymptotically minimax sense. In particular, the Bayes estimator of the threshold point is asymptotically strictly more efficient than the left-endpoint maximum likelihood estimator and the newly proposed middle-point maximum likelihood estimator. Algorithms are developed to calculate asymptotic distributions and risk for the estimators of the threshold point. The posterior interval is proved to be an asymptotically valid confidence interval and is attractive in both length and coverage in finite samples.
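A generic form of such a model, in our own notation (the paper's exact parametrization may differ), with threshold variable q, threshold point \gamma, and regime-specific mean and variance parameters:

y_i \;=\; \bigl(x_i^{\top}\beta_1 + \sigma_1\,\varepsilon_i\bigr)\,\mathbf{1}\{q_i \le \gamma\} \;+\; \bigl(x_i^{\top}\beta_2 + \sigma_2\,\varepsilon_i\bigr)\,\mathbf{1}\{q_i > \gamma\},

so that both the conditional mean and the conditional variance shift discontinuously at \gamma.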

7.
The natural resource curse and economic transition
Using cross-country regressions, we examine the relationship between “point-source” resource abundance and economic growth, quality of institutions, investment in human and physical capital, and social welfare (life expectancy and infant mortality) for all countries and for the economies in transition. Contrary to most literature, we find little evidence of a natural resource curse for all countries. Only the “voice and accountability” measure of institutional quality is negatively and significantly affected by oil wealth. In the economies in transition, there is some evidence that natural resource wealth is associated with lower primary school enrollment and life expectancy and higher infant mortality compared to other resource rich countries. Compared to other economies in transition, however, natural resource abundant transitional economies are not significantly worse off with respect to our indicators.

8.
This paper presents an economic interpretation of the optimal “stopping” of perpetual project opportunities under both certainty and uncertainty. Prior to stopping, the expected rate of return from delay exceeds the rate of interest. The expected rate of return from delay is the sum of the expected rate of change in project value and the expected rate of change in the option premium associated with waiting. At stopping the expected rate of return from delay has fallen to the rate of interest. Viewing stopping in this way unifies the theoretical and practical insights of the theory of stopping under certainty and uncertainty.
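One stylized way to write the trade-off described here, in our own notation (V the project value, P the option premium of waiting, r the rate of interest):

R_{\mathrm{delay}} \;=\; \frac{\mathbb{E}[dV] + \mathbb{E}[dP]}{(V+P)\,dt},
\qquad
R_{\mathrm{delay}} > r \ \text{(continue waiting)},
\qquad
R_{\mathrm{delay}} = r \ \text{(stop)}.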

9.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs, aimed at comparing the second order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned implausible variances and/or distributional assumptions.
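A minimal sketch of the distributional squared error between a historical and a model-simulated series (the comparison ingredient only; the bootstrap critical values used for the actual test are not implemented here, and names are hypothetical):

import numpy as np

def cdf_sq_error(hist, sim, grid_size=200):
    """Mean squared distance between the empirical CDFs of a historical
    series and a simulated series, evaluated on a common grid."""
    grid = np.linspace(min(hist.min(), sim.min()),
                       max(hist.max(), sim.max()), grid_size)
    F_hist = np.searchsorted(np.sort(hist), grid, side="right") / hist.size
    F_sim = np.searchsorted(np.sort(sim), grid, side="right") / sim.size
    return np.mean((F_hist - F_sim) ** 2)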

10.
Structural vs. atheoretic approaches to econometrics
In this paper I attempt to lay out the sources of conflict between the so-called “structural” and “experimentalist” camps in econometrics. Critics of the structural approach often assert that it produces results that rely on too many assumptions to be credible, and that the experimentalist approach provides an alternative that relies on fewer assumptions. Here, I argue that this is a false dichotomy. All econometric work relies heavily on a priori assumptions. The main difference between structural and experimental (or “atheoretic”) approaches is not in the number of assumptions but in the extent to which they are made explicit.

11.
We provide a new framework for estimating the systematic and idiosyncratic jump tail risks in financial asset prices. Our estimates are based on in-fill asymptotics for directly identifying the jumps, together with Extreme Value Theory (EVT) approximations and methods-of-moments for assessing the tail decay parameters and tail dependencies. On implementing the procedures with a panel of intraday prices for a large cross-section of individual stocks and the S&P 500 market portfolio, we find that the distributions of the systematic and idiosyncratic jumps are both generally heavy-tailed and close to symmetric, and show how the jump tail dependencies deduced from the high-frequency data together with the day-to-day variation in the diffusive volatility account for the “extreme” joint dependencies observed at the daily level.
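One generic EVT ingredient for assessing tail decay is the Hill estimator; here is a minimal sketch applied to identified jump sizes (illustrative only, not the paper's full method-of-moments procedure):

import numpy as np

def hill_tail_index(jumps, k=50):
    """Hill estimator of the tail index from the k largest absolute
    jump sizes (requires k < len(jumps))."""
    x = np.sort(np.abs(jumps))[::-1]              # order statistics, descending
    return 1.0 / np.mean(np.log(x[:k] / x[k]))    # alpha = 1 / mean log-excess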

12.
This paper determines coverage probability errors of both delta method and parametric bootstrap confidence intervals (CIs) for the covariance parameters of stationary long-memory Gaussian time series. CIs for the long-memory parameter d0 are included. The results establish that the bootstrap provides higher-order improvements over the delta method. Analogous results are given for tests. The CIs and tests are based on one or other of two approximate maximum likelihood estimators. The first estimator solves the first-order conditions with respect to the covariance parameters of a “plug-in” log-likelihood function that has the unknown mean replaced by the sample mean. The second estimator does likewise for a plug-in Whittle log-likelihood.
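A generic parametric-bootstrap confidence interval skeleton, for orientation (the simulate and estimate callables are hypothetical stand-ins for the paper's Gaussian long-memory model and plug-in likelihood estimators):

import numpy as np

def parametric_bootstrap_ci(theta_hat, simulate, estimate, B=999, level=0.95):
    """Basic parametric-bootstrap CI: simulate B samples from the fitted
    model, re-estimate, and invert the distribution of est - theta_hat."""
    draws = np.array([estimate(simulate(theta_hat)) for _ in range(B)])
    lo, hi = np.quantile(draws - theta_hat, [(1 - level) / 2, (1 + level) / 2])
    return theta_hat - hi, theta_hat - lo   # basic bootstrap interval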

13.
14.
This paper proposes a test of the null hypothesis of stationarity that is robust to the presence of fat-tailed errors. The test statistic is a modified version of the so-called KPSS statistic. The modified statistic uses the “sign” of the data minus the sample median, whereas the standard KPSS statistic uses deviations from the mean. This “indicator” KPSS statistic has the same limit distribution as the standard KPSS statistic under the null, without relying on assumptions about moments, but a different limit distribution under unit root alternatives. The indicator test has lower power than standard KPSS when tails are thin, but higher power when tails are fat.
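A minimal NumPy sketch of the indicator statistic as described, with a Bartlett-kernel long-run variance and an illustrative bandwidth rule (both are assumptions, not taken from the paper):

import numpy as np

def indicator_kpss(y, lags=None):
    """Sign-based ("indicator") KPSS statistic: the usual KPSS partial sums
    are built from sign(y_t - median(y)) instead of demeaned data."""
    n = len(y)
    s = np.sign(y - np.median(y))                 # indicator series
    S = np.cumsum(s)                              # partial sums
    if lags is None:
        lags = int(np.floor(4 * (n / 100) ** 0.25))   # illustrative bandwidth
    lrv = np.mean(s ** 2)                         # Bartlett long-run variance
    for j in range(1, lags + 1):
        w = 1 - j / (lags + 1)
        lrv += 2 * w * np.mean(s[j:] * s[:-j])
    return np.sum(S ** 2) / (n ** 2 * lrv)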

15.
We generalize the weak instrument robust score or Lagrange multiplier and likelihood ratio instrumental variables (IV) statistics towards multiple parameters and a general covariance matrix so they can be used in the generalized method of moments (GMM). The GMM extension of Moreira's [2003. A conditional likelihood ratio test for structural models. Econometrica 71, 1027–1048] conditional likelihood ratio statistic towards GMM preserves its expression except that it becomes conditional on a statistic that tests the rank of a matrix. We analyze the spurious power decline of Kleibergen's [2002. Pivotal statistics for testing structural parameters in instrumental variables regression. Econometrica 70, 1781–1803, 2005. Testing parameters in GMM without assuming that they are identified. Econometrica 73, 1103–1124] score statistic and show that an independent misspecification pre-test overcomes it. We construct identification statistics that reflect if the confidence sets of the parameters are bounded. A power study and the possible shapes of confidence sets illustrate the analysis.

16.
This paper establishes the relatively weak conditions under which causal inferences from a regression–discontinuity (RD) analysis can be as credible as those from a randomized experiment, and hence under which the validity of the RD design can be tested by examining whether or not there is a discontinuity in any pre-determined (or “baseline”) variables at the RD threshold. Specifically, consider a standard treatment evaluation problem in which treatment is assigned to an individual if and only if V > v0, where v0 is a known threshold and V is observable. V can depend on the individual's characteristics and choices, but there is also a random chance element: for each individual, there exists a well-defined probability distribution for V. The density function, which is allowed to differ arbitrarily across the population, is assumed to be continuous. It is formally established that treatment status here is as good as randomized in a local neighborhood of V = v0. These ideas are illustrated in an analysis of U.S. House elections, where the inherent uncertainty in the final vote count is plausible, implying that the party that wins is essentially randomized among elections decided by a narrow margin. The evidence is consistent with this prediction, which is then used to generate “near-experimental” causal estimates of the electoral advantage to incumbency.
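A minimal sketch of the implied validity check, comparing means of a pre-determined covariate just below and just above the threshold (a simple local-means version for illustration; local-linear versions are standard in practice, and all names are hypothetical):

import numpy as np
from scipy import stats

def baseline_discontinuity_check(v, w, v0, h):
    """Test for a jump in a baseline covariate w at threshold v0 by
    comparing its means within a bandwidth h on either side."""
    below = w[(v >= v0 - h) & (v < v0)]
    above = w[(v >= v0) & (v <= v0 + h)]
    return stats.ttest_ind(below, above, equal_var=False)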

17.
We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, such as: the Kronecker covariance matrix required for the canonical correlation rank statistic of Anderson [Annals of Mathematical Statistics (1951), 22, 327–351]; the sensitivity to the ordering of the variables of the LDU rank statistic of Cragg and Donald [Journal of the American Statistical Association (1996), 91, 1301–1309] and Gill and Lewbel [Journal of the American Statistical Association (1992), 87, 766–776]; the limiting distribution that is not a standard chi-squared distribution for the rank statistic of Robin and Smith [Econometric Theory (2000), 16, 151–175]; the use of numerical optimization for the objective function statistic of Cragg and Donald [Journal of Econometrics (1997), 76, 223–250]; and the neglect of the non-negativity restriction on the singular values in Ratsimalahelo [2002. Rank test based on matrix perturbation theory. Unpublished working paper, U.F.R. Science Economique, University de Franche-Comté]. In the non-stationary cointegration case, the limiting distribution of the new rank statistic is identical to that of the Johansen trace statistic.
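For orientation, the generic ingredient such tests share: under the null that the matrix has rank r, the sample singular values beyond the first r should be near zero (illustration only; the paper's statistic additionally accounts for the estimator's covariance structure and the non-negativity of singular values):

import numpy as np

def smallest_singular_values(A, r):
    """Return the singular values of an estimated matrix A beyond the
    first r; a rank test asks whether these are jointly close to zero."""
    sv = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    return sv[r:]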

18.
This paper studies the decision-theoretic foundation for the notion of stability in the dynamic context of strategic interaction. We formulate this notion and show that common knowledge of rationality implies a “stable” pattern of behavior in extensive games with perfect information. In the “generic” case, our approach is consistent with Aumann’s [Aumann, R.J., 1995. Backward induction and common knowledge of rationality. Games and Economic Behavior 8, 6–19] result that common knowledge of rationality leads to the backward induction outcome.

19.
Consider a multivariate nonparametric model where the unknown vector of functions depends on two sets of explanatory variables. For a fixed level of one set of explanatory variables, we provide consistent statistical tests, called local rank tests, to determine whether the multivariate relationship can be explained by a smaller number of functions. We also provide estimators for the smallest number of functions, called the local rank, explaining the relationship. The local rank tests and the estimators of local rank are defined in terms of the eigenvalues of a kernel-based estimator of some matrix. The asymptotic behavior of the eigenvalues is established by using the so-called Fujikoshi expansion along with some techniques of the theory of U-statistics. We present a simulation study which examines the small sample properties of local rank tests. We also apply the local rank tests and the local rank estimators to a demand system given by a newly constructed data set. This work can be viewed as a “local” extension of the tests for the number of factors in a nonparametric relationship introduced by Stephen Donald.
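A stylized version of the kind of object examined: eigenvalues of a kernel-weighted moment matrix at a fixed point x0 (the Gaussian kernel and all names are illustrative assumptions, not the paper's estimator):

import numpy as np

def local_eigenvalues(X, Y, x0, h):
    """Eigenvalues of a kernel-weighted second-moment matrix of the
    responses Y (n x m) near X = x0; small eigenvalues suggest the local
    relationship is explained by fewer functions."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
    w = w / w.sum()
    M = (Y * w[:, None]).T @ Y               # Y' diag(w) Y, m x m
    return np.linalg.eigvalsh(M)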

20.
In this paper, we introduce a new flexible mixed model for multinomial discrete choice where the key individual- and alternative-specific parameters of interest are allowed to follow an assumption-free nonparametric density specification, while other alternative-specific coefficients are assumed to be drawn from a multivariate Normal distribution, which eliminates the independence of irrelevant alternatives assumption at the individual level. A hierarchical specification of our model allows us to break down a complex data structure into a set of submodels with the desired features that are naturally assembled in the original system. We estimate the model using a Bayesian Markov chain Monte Carlo technique with a multivariate Dirichlet Process (DP) prior on the coefficients with nonparametrically estimated density. We employ a “latent class” sampling algorithm, which is applicable to a general class of models, including non-conjugate DP base priors. The model is applied to supermarket choices of a panel of Houston households whose shopping behavior was observed over a 24-month period in 2004–2005. We estimate the nonparametric density of two key variables of interest: the price of a basket of goods based on scanner data, and driving distance to the supermarket based on their respective locations. Our semi-parametric approach allows us to identify a complex multi-modal preference distribution, which distinguishes between inframarginal consumers and consumers who strongly value either lower prices or shopping convenience.
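For intuition, the stick-breaking construction behind a (truncated) Dirichlet Process prior, which generates the mixture weights that let a coefficient density be estimated nonparametrically (a generic sketch, not the paper's latent-class sampler):

import numpy as np

def stick_breaking(alpha, n_atoms, rng=None):
    """Truncated stick-breaking draw of DP mixture weights:
    w_k = beta_k * prod_{j<k} (1 - beta_j), beta_k ~ Beta(1, alpha).
    The n_atoms weights sum to slightly less than 1 (truncation)."""
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining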

