Similar Articles
20 similar articles found (search time: 0 ms)
1.
This paper develops a bootstrap theory for models including autoregressive time series with roots approaching unity as the sample size increases. In particular, we consider processes with roots converging to unity at rates slower than n^{-1}. We call such processes weakly integrated processes. It is established that the bootstrap relying on the estimated autoregressive model is generally consistent for weakly integrated processes. Both the sample and bootstrap statistics of the weakly integrated processes are shown to yield the same normal asymptotics. Moreover, for the asymptotically pivotal statistics of the weakly integrated processes, the bootstrap is expected to provide an asymptotic refinement and give better approximations of the finite-sample distributions than first-order asymptotic theory. For the weakly integrated processes, the magnitudes of potential refinements by the bootstrap are shown to be proportional to the rate at which the root of the underlying process converges to unity. The order of bootstrap refinement can be as large as o(n^{-1/2+ε}) for any ε > 0. Our theory helps to explain the improvements, observed by many practitioners, from using the bootstrap to analyze models with roots close to unity.
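The estimated-model bootstrap described above can be sketched for a first-order autoregression: estimate the AR(1) coefficient by OLS, resample the centred residuals, and rebuild pseudo-series from the fitted model. This is a minimal illustration, not the paper's implementation; the function name and defaults are ours.

```python
import numpy as np

def ar1_bootstrap(y, n_boot=200, seed=0):
    """Residual-based bootstrap of the OLS AR(1) coefficient.

    A sketch of the estimated-model bootstrap; interface illustrative.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    y_lag, y_cur = y[:-1], y[1:]
    rho_hat = (y_lag @ y_cur) / (y_lag @ y_lag)      # OLS estimate
    resid = y_cur - rho_hat * y_lag
    resid = resid - resid.mean()                     # centre residuals
    boot = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=len(y))           # iid resampling
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):                   # rebuild pseudo-series
            yb[t] = rho_hat * yb[t - 1] + e[t]
        boot[b] = (yb[:-1] @ yb[1:]) / (yb[:-1] @ yb[:-1])
    return rho_hat, boot
```

For a weakly integrated process one would take the true root of the simulated data close to, but converging more slowly than n^{-1} towards, unity.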

2.
The wild bootstrap is studied in the context of regression models with heteroskedastic disturbances. We show that, in one very specific case, perfect bootstrap inference is possible, and that a substantial reduction in the error in the rejection probability of a bootstrap test is available much more generally. However, the version of the wild bootstrap with this desirable property lacks the skewness correction afforded by the currently most popular version of the wild bootstrap. Simulation experiments show that this does not prevent the preferred version from having the smallest error in rejection probability in small and medium-sized samples.
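For reference, a minimal sketch of a wild-bootstrap test of a zero coefficient under heteroskedasticity, using symmetric Rademacher weights (a version without a skewness correction, as discussed above). The interface and helper names are illustrative, not the paper's.

```python
import numpy as np

def wild_bootstrap_t(y, X, j=1, n_boot=499, seed=0):
    """Wild-bootstrap p-value for H0: beta_j = 0 with heteroskedastic
    errors, using Rademacher weights. A sketch; names illustrative."""
    rng = np.random.default_rng(seed)
    XtX_inv = np.linalg.inv(X.T @ X)

    def t_stat(yy):
        b = XtX_inv @ X.T @ yy
        r = yy - X @ b
        V = XtX_inv @ (X.T * r**2) @ X @ XtX_inv    # HC0 covariance
        return b[j] / np.sqrt(V[j, j])

    t0 = t_stat(y)
    X0 = np.delete(X, j, axis=1)                    # refit imposing the null
    b0 = np.linalg.lstsq(X0, y, rcond=None)[0]
    u0 = y - X0 @ b0
    ts = np.empty(n_boot)
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(y))    # Rademacher weights
        ts[b] = t_stat(X0 @ b0 + u0 * v)            # resampled pseudo-data
    return float(np.mean(np.abs(ts) >= abs(t0)))    # symmetric p-value
```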

3.
In this paper we consider the issue of unit root testing in cross-sectionally dependent panels. We consider panels that may be characterized by various forms of cross-sectional dependence, including (but not limited to) the popular common factor framework. We consider block bootstrap versions of the group-mean (Im et al., 2003) and the pooled (Levin et al., 2002) unit root coefficient DF tests for panel data, originally proposed for a setting of no cross-sectional dependence beyond a common time effect. The tests, suited for testing for unit roots in the observed data, can be easily implemented as no specification or estimation of the dependence structure is required. Asymptotic properties of the tests are derived for T going to infinity and N finite. Asymptotic validity of the bootstrap tests is established in very general settings, including the presence of common factors and cointegration across units. Properties under the alternative hypothesis are also considered. In a Monte Carlo simulation, the bootstrap tests are found to have rejection frequencies that are much closer to nominal size than the rejection frequencies of the corresponding asymptotic tests. The power properties of the bootstrap tests appear to be similar to those of the asymptotic tests.
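A sketch of the key resampling idea: drawing whole cross-sections in moving blocks preserves the (unspecified) cross-sectional dependence, and integrating the resampled differences imposes the unit root null. Details such as the block length are placeholders, not the authors' choices.

```python
import numpy as np

def block_bootstrap_panel(dY, block_len=10, seed=0):
    """Draw one moving-block bootstrap resample of a (T x N) panel of
    first differences, resampling whole cross-sections so that the
    cross-sectional dependence is preserved. Names illustrative."""
    rng = np.random.default_rng(seed)
    T = dY.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    rows = np.concatenate(
        [np.arange(s, s + block_len) for s in starts]
    )[:T]                                  # same block index for every unit
    dY_star = dY[rows]
    return np.cumsum(dY_star, axis=0)      # integrate: unit root imposed
```

The resampled panel would then be fed to the group-mean or pooled DF coefficient statistic to obtain bootstrap critical values.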

4.
A random sample drawn from a population would appear to offer an ideal opportunity to use the bootstrap in order to perform accurate inference, since the observations of the sample are IID. In this paper, Monte Carlo results suggest that bootstrapping a commonly used index of inequality leads to inference that is not accurate even in very large samples, although inference with poverty indices is satisfactory. We find that the major cause is the extreme sensitivity of many inequality indices to the exact nature of the upper tail of the income distribution. This leads us to study two non-standard bootstraps: the m out of n bootstrap, which is valid in some situations where the standard bootstrap fails, and a bootstrap in which the upper tail is modelled parametrically. Monte Carlo results suggest that accurate inference can be achieved with this last method in moderately large samples.
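The m-out-of-n idea can be illustrated with the Gini index: resample only m < n observations and read off percentile intervals. This is a hedged sketch; the choice of index, sample sizes, and interface are illustrative rather than taken from the paper.

```python
import numpy as np

def gini(x):
    """Gini index of a nonnegative sample (standard covariance formula)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def m_out_of_n_ci(x, m, n_boot=500, alpha=0.05, seed=0):
    """Percentile confidence interval for the Gini index from the
    m-out-of-n bootstrap: each resample has m < n observations.
    A sketch; names and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    stats = [gini(rng.choice(x, size=m, replace=True)) for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```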

5.
This paper analyzes the higher-order properties of the estimators based on the nested pseudo-likelihood (NPL) algorithm and the practical implementation of such estimators for parametric discrete Markov decision models. We derive the rate at which the NPL algorithm converges to the MLE and provide a theoretical explanation for the simulation results in Aguirregabiria and Mira [Aguirregabiria, V., Mira, P., 2002. Swapping the nested fixed point algorithm: A class of estimators for discrete Markov decision models. Econometrica 70, 1519–1543], in which iterating the NPL algorithm improves the accuracy of the estimator. We then propose a new NPL algorithm that can achieve quadratic convergence without fully solving the fixed point problem in every iteration and apply our estimation procedure to a finite mixture model. We also develop one-step NPL bootstrap procedures for discrete Markov decision models. The Monte Carlo simulation evidence based on a machine replacement model of Rust [Rust, J., 1987. Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher. Econometrica 55, 999–1033] shows that the proposed one-step bootstrap test statistics and confidence intervals improve upon the first-order asymptotics even with a relatively small number of iterations.

6.
This paper derives the limiting distribution of the Lagrange Multiplier (LM) test for threshold nonlinearity in a TAR model with GARCH errors when one of the regimes contains a unit root. It is shown that the asymptotic distribution is nonstandard and depends on nuisance parameters that capture the degree of conditional heteroskedasticity and the non-Gaussian nature of the process. We propose a bootstrap procedure for approximating the exact finite-sample distribution of the test for linearity and establish its asymptotic validity.

7.
Two new methodologies are introduced to improve inference in the evaluation of mutual fund performance against benchmarks. First, the benchmark models are estimated using panel methods with both fund and time effects. Second, the non-normality of individual mutual fund returns is accounted for by using panel bootstrap methods. We also augment the standard benchmark factors with fund-specific characteristics, such as fund size. Using a dataset of UK equity mutual fund returns, we find that fund size has a negative effect on the average fund manager's benchmark-adjusted performance. Further, when we allow for time effects and the non-normality of fund returns, we find no evidence that even the best performing fund managers can significantly outperform the augmented benchmarks after fund management charges are taken into account.

8.
This paper uses spatial empirical methods to detect and analyze trade patterns in a historical data set of Chinese rice prices. Our results suggest that spatial features were important for the expansion of interregional trade. Geography dictates, first, the distances over which trade was possible in different regions, because the costs of ship transport were considerably below those of land transport. Spatial features also influence the direction in which a trading network expands. Moreover, our analysis captures the impact of new trade routes both within and outside the trading areas.

9.
During the past two decades, innovations protected by patents have played a key role in business strategies. This fact has spurred studies of the determinants of patents and the impact of patents on innovation and competitive advantage. Sustaining competitive advantages is as important as creating them. Patents help sustain competitive advantages by increasing the production costs of competitors, by signaling a better quality of products, and by serving as barriers to entry. If patents are rewards for innovation, more R&D should be reflected in more patent applications, but this is not the end of the story. There is empirical evidence that patents are becoming easier to obtain over time and more valuable to the firm due to increasing damage awards from infringers. These facts call into question the constant and static nature of the relationship between R&D and patents. Furthermore, innovation creates important knowledge spillovers due to its imperfect appropriability. Our paper investigates these dynamic effects using US patent data from 1979 to 2000 with alternative model specifications for patent counts. We introduce a general dynamic count panel data model with dynamic observable and unobservable spillovers, which encompasses previous models, is able to control for the endogeneity of R&D, and therefore can be consistently estimated by maximum likelihood. Apart from allowing for firm-specific fixed and random effects, we introduce a common unobserved component, or secret stock of knowledge, that affects each firm's propensity to patent differently across sectors due to their different absorptive capacity.

10.
We consider a multiple mismeasured regressor errors-in-variables model. We develop closed-form minimum distance estimators from any number of estimating equations, which are linear in the third and higher cumulants of the observable variables. Using the cumulant estimators alters qualitative inference relative to ordinary least squares in two applications related to investment and leverage regressions. The estimators perform well in Monte Carlo simulations calibrated to resemble the data from our applications. Although the cumulant estimators are asymptotically equivalent to the moment estimators of Erickson and Whited (2002), the finite-sample performance of the cumulant estimators exceeds that of the moment estimators.

11.
The practical relevance of several concepts of exogeneity of treatments for the estimation of causal parameters based on observational data is discussed. We show that the traditional concepts, such as strong ignorability and weak and super-exogeneity, are too restrictive if interest lies in average effects (i.e. not in distributional effects of the treatment). We suggest a new definition of exogeneity, KL-exogeneity. It does not rely on distributional assumptions and is not based on counterfactual random variables. As a consequence it can be empirically tested using a proposed test that is simple to implement and is distribution-free.

12.
A nonparametric, residual-based stationary bootstrap procedure is proposed for unit root testing in a time series. The procedure generates a pseudoseries which mimics the original but ensures the presence of a unit root. Unlike many others in the literature, the proposed test is valid for a wide class of weakly dependent processes and is not based on parametric assumptions on the data-generating process. Large sample theory is developed and asymptotic validity is shown via a bootstrap functional central limit theorem. The case of a least squares statistic is discussed in detail, including simulations to investigate the procedure's finite sample performance.
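A sketch of the stationary-bootstrap resampling step (blocks with geometrically distributed lengths, wrapping around the residual series), followed by partial summation so the pseudoseries has a unit root by construction. The restart probability p and all names are illustrative, not the paper's choices.

```python
import numpy as np

def stationary_bootstrap_pseudoseries(u, p=0.1, seed=0):
    """One stationary-bootstrap resample of the residuals u: blocks of
    geometric length (mean 1/p), wrapping around the series, then
    cumulated so the pseudoseries has a unit root. A sketch only."""
    rng = np.random.default_rng(seed)
    n = len(u)
    out = np.empty(n)
    t = rng.integers(n)                 # random starting index
    for i in range(n):
        out[i] = u[t % n]               # wrap around the series
        # with probability p start a new block at a random index,
        # otherwise continue the current block
        t = rng.integers(n) if rng.random() < p else t + 1
    return np.cumsum(out)               # unit root imposed by partial sums
```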

13.
It is widely believed that investing in education could be an effective strategy to promote higher standards of living and equity. We empirically assess this claim by estimating returns to education across the whole earnings distribution in urban China and find supporting evidence. In particular, we find that returns to education are more pronounced for individuals in the lower tail of the distribution than for those in the upper tail, and that returns to education are uniformly larger for women than for men. We also find that returns to education increased over time across the whole earnings distribution.

14.
This paper extends the method of local instrumental variables developed by Heckman and Vytlacil [Heckman, J., Vytlacil, E., 2005. Structural equations, treatment effects, and econometric policy evaluation. Econometrica 73(3), 669–738] to the estimation of not only means, but also distributions of potential outcomes. The newly developed method is illustrated by applying it to changes in college enrollment and wage inequality using data from the National Longitudinal Survey of Youth of 1979. Increases in college enrollment cause changes in the distribution of ability among college and high school graduates. This paper estimates a semiparametric selection model of schooling and wages to show that, for fixed skill prices, a 14% increase in college participation (analogous to the increase observed in the 1980s) reduces the college premium by 12% and increases the 90–10 percentile ratio among college graduates by 2%.

15.
We examine the higher order properties of the wild bootstrap (Wu, 1986) in a linear regression model with stochastic regressors. We find that the ability of the wild bootstrap to provide a higher order refinement is contingent upon whether the errors are mean independent of the regressors or merely uncorrelated with them. In the latter case, the wild bootstrap may fail to match some of the terms in an Edgeworth expansion of the full sample test statistic. Nonetheless, we show that the wild bootstrap still has a lower maximal asymptotic risk as an estimator of the true distribution than a normal approximation, in shrinking neighborhoods of properly specified models. To assess the practical implications of this result we conduct a Monte Carlo study contrasting the performance of the wild bootstrap with a normal approximation and the traditional nonparametric bootstrap.

16.
This paper considers semiparametric identification of structural dynamic discrete choice models and models for dynamic treatment effects. Time to treatment and counterfactual outcomes associated with treatment times are jointly analyzed. We examine the implicit assumptions of the dynamic treatment model using the structural model as a benchmark. For the structural model we show the gains from using cross-equation restrictions connecting choices to associated measurements and outcomes. In the dynamic discrete choice model, we identify both subjective and objective outcomes, distinguishing ex post and ex ante outcomes. We show how to identify agent information sets.

17.
The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline-pretreatment levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model, to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains.

18.
This paper compares the economic questions addressed by instrumental variables estimators with those addressed by structural approaches. We discuss Marschak's Maxim: estimators should be selected on the basis of their ability to answer well-posed economic problems with minimal assumptions. A key identifying assumption that allows structural methods to be more informative than IV can be tested with data and does not have to be imposed.

19.
This paper considers alternative methods of testing cointegration in fractionally integrated processes, using the bootstrap. The investigation focuses on (a) choice of statistic, (b) use of bias correction techniques, and (c) designing the simulation of the null hypothesis. Three residual-based tests are considered: two of the null hypothesis of non-cointegration, the third of the null hypothesis that cointegration exists. The tests are compared in Monte Carlo experiments to throw light on the relative roles of issues (a)–(c) in test performance.

20.
This paper addresses the issue of optimal inference for parameters that are partially identified in models with moment inequalities. There currently exists a variety of inferential methods for use in this setting. However, the question of choosing optimally among contending procedures is unresolved. In this paper, I first consider a canonical large deviations criterion for optimality and show that inference based on the empirical likelihood ratio statistic is optimal. Second, I introduce a new empirical likelihood bootstrap that provides a valid resampling method for moment inequality models and overcomes the implementation challenges that arise as a result of non-pivotal limit distributions. Lastly, I analyze the finite sample properties of the proposed framework using Monte Carlo simulations. The simulation results are encouraging.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. (ICP license 京ICP备09084417号)