Similar Literature
20 similar records found
1.
Through Monte Carlo experiments the effects of a feedback mechanism on the accuracy in finite samples of ordinary and bootstrap inference procedures are examined in stable first- and second-order autoregressive distributed-lag models with non-stationary weakly exogenous regressors. The Monte Carlo is designed to mimic situations that are relevant when a weakly exogenous policy variable affects (and is affected by) the outcome of agents’ behaviour. In the parameterizations we consider, it is found that small-sample problems undermine ordinary first-order asymptotic inference procedures irrespective of the presence and importance of a feedback mechanism. We examine several residual-based bootstrap procedures, each of them designed to reduce one or several specific types of bootstrap approximation error. Surprisingly, the bootstrap procedure which only incorporates the conditional model overcomes the small sample problems reasonably well. Often (but not always) better results are obtained if the bootstrap also resamples the marginal model for the policymakers’ behaviour.
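The residual-based bootstrap for a conditional autoregressive model can be sketched as follows; this is a minimal illustration using a simple AR(1) regression rather than the paper's full ADL setup, and all function and variable names are our own:

```python
import numpy as np

def residual_bootstrap_ar1(y, n_boot=200, seed=0):
    """Residual-based bootstrap for the slope of an AR(1) regression (sketch)."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta = np.linalg.lstsq(X, y[1:], rcond=None)[0]   # (intercept, rho)
    resid = y[1:] - X @ beta
    resid -= resid.mean()                             # centre the residuals
    boot = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=len(resid), replace=True)
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):                    # rebuild the series recursively
            yb[t] = beta[0] + beta[1] * yb[t - 1] + e[t - 1]
        Xb = np.column_stack([np.ones(len(yb) - 1), yb[:-1]])
        boot[b] = np.linalg.lstsq(Xb, yb[1:], rcond=None)[0][1]
    return beta[1], boot

# simulate a stable AR(1) with rho = 0.5 and bootstrap a 95% interval
rng = np.random.default_rng(1)
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
rho_hat, boot = residual_bootstrap_ar1(y)
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Resampling only the conditional model in this way corresponds to the procedure the abstract finds works reasonably well; resampling the marginal model for the policy variable as well would require a second, analogous recursion.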

2.
The paper investigates the usefulness of bootstrap methods for small-sample inference in cointegrating regression models. It discusses the standard bootstrap, the recursive bootstrap, the moving block bootstrap and the stationary bootstrap methods. Some guidelines for bootstrap data generation and for the choice of test statistics are provided, and the simulation evidence presented suggests that bootstrap methods, when properly implemented, can provide significant improvements over asymptotic inference.

3.
This paper is a study of the application of Bayesian exponentially tilted empirical likelihood to inference about quantile regressions. In the case of simple quantiles we show the exact form for the likelihood implied by this method and compare it with the Bayesian bootstrap and with Jeffreys' method. For regression quantiles we derive the asymptotic form of the posterior density. We also examine Markov chain Monte Carlo simulations with a proposal density formed from an overdispersed version of the limiting normal density. We show that the algorithm works well even in models with an endogenous regressor when the instruments are not too weak. Copyright © 2009 John Wiley & Sons, Ltd.

4.
This paper examines the asymptotic and finite‐sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (Econometrica 1989; 57 : 307–333). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out‐of‐sample version of the two‐step testing procedure recommended by Vuong but also show that an exact one‐step procedure is sometimes applicable. When the models are overlapping, we provide a simple‐to‐use fixed‐regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two‐step procedure is conservative, while the one‐step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting US real gross domestic product growth. Copyright © 2013 John Wiley & Sons, Ltd.

5.
Monte Carlo evidence has made it clear that asymptotic tests based on generalized method of moments (GMM) estimation have disappointing size. The problem is exacerbated when the moment conditions are serially correlated. Several block bootstrap techniques have been proposed to correct the problem, including Hall and Horowitz (1996) and Inoue and Shintani (2006). We propose an empirical likelihood block bootstrap procedure to improve inference where models are characterized by nonlinear moment conditions that are serially correlated, possibly of infinite order. Combining the ideas of Kitamura (1997) and Brown and Newey (2002), the parameters of a model are initially estimated by GMM; these estimates are then used to compute the empirical likelihood probability weights of the blocks of moment conditions. The probability weights serve as the multinomial distribution used in resampling. The first-order asymptotic validity of the proposed procedure is proven, and a series of Monte Carlo experiments shows it may improve test sizes over conventional block bootstrapping.
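The weighting step described above can be sketched for a scalar mean-type moment condition; this is our own minimal illustration (the hypothesized value `theta0` stands in for the first-step GMM estimate, and the Newton solver is a standard textbook routine, not the authors' code):

```python
import numpy as np

def el_weights(g, tol=1e-10, max_iter=100):
    """Empirical-likelihood weights p_i for scalar moments g_i, so that
    sum_i p_i * g_i = 0 and sum_i p_i = 1 (Newton with a positivity guard)."""
    lam = 0.0
    for _ in range(max_iter):
        d = 1.0 + lam * g
        f = np.sum(g / d)                 # first-order condition in lambda
        fp = -np.sum(g**2 / d**2)
        step = f / fp
        while np.min(1.0 + (lam - step) * g) <= 0:
            step /= 2.0                   # damp the step to keep all weights positive
        lam -= step
        if abs(step) < tol:
            break
    return 1.0 / (len(g) * (1.0 + lam * g))

# Blocks of moment conditions g_t = x_t - theta0, weighted for resampling.
# theta0 is a hypothetical value standing in for the GMM estimate.
rng = np.random.default_rng(0)
x = rng.standard_normal(300) + 1.0
theta0 = 0.9
block_len = 5
g = (x - theta0).reshape(-1, block_len).mean(axis=1)      # block-level moments
p = el_weights(g)
idx = rng.choice(len(g), size=len(g), replace=True, p=p)  # multinomial resampling
```

The weights `p` then define the multinomial distribution over blocks used in resampling, as in the procedure described above.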

6.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model to analyse lifetime data. In this article we consider a semi‐parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inferences. Extensive simulations are carried out to examine the finite sample performance of the parametric and non‐parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.

7.
We examine the higher order properties of the wild bootstrap (Wu, 1986) in a linear regression model with stochastic regressors. We find that the ability of the wild bootstrap to provide a higher order refinement is contingent upon whether the errors are mean independent of the regressors or merely uncorrelated with them. In the latter case, the wild bootstrap may fail to match some of the terms in an Edgeworth expansion of the full sample test statistic. Nonetheless, we show that the wild bootstrap still has a lower maximal asymptotic risk as an estimator of the true distribution than a normal approximation, in shrinking neighborhoods of properly specified models. To assess the practical implications of this result we conduct a Monte Carlo study contrasting the performance of the wild bootstrap with a normal approximation and the traditional nonparametric bootstrap.
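The wild bootstrap itself is simple to implement; a minimal sketch for OLS with heteroskedastic errors, using Rademacher multipliers (one common choice of auxiliary distribution), might look like this (all names are our own):

```python
import numpy as np

def wild_bootstrap_se(X, y, n_boot=499, seed=0):
    """Wild-bootstrap standard errors for OLS with fixed regressors
    (Rademacher multipliers applied to the residuals)."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted = X @ beta
    resid = y - fitted
    boot = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher weights
        boot[b] = np.linalg.lstsq(X, fitted + resid * v, rcond=None)[0]
    return beta, boot.std(axis=0, ddof=1)

# heteroskedastic errors that are mean independent of the regressor
rng = np.random.default_rng(1)
x = rng.standard_normal(150)
e = rng.standard_normal(150) * (1.0 + 0.5 * np.abs(x))
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(150), x])
beta, se = wild_bootstrap_se(X, y)
```

Because each bootstrap sample keeps `X` fixed and only flips the sign of each residual, the scheme preserves the conditional heteroskedasticity pattern, which is what the refinement results discussed above rely on.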

8.
For reasons of time constraint and cost reduction, censoring is commonly employed in practice, especially in reliability engineering. Among various censoring schemes, progressive Type-I right censoring provides not only the practical advantage of known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. In this article, we consider a progressively Type-I censored life-test under the assumption that the lifetime of each test unit is exponentially distributed. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE under the condition that its existence is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we then discuss the construction of confidence intervals for the mean parameter and their performance is assessed through Monte Carlo simulations. Finally, an example is presented in order to illustrate all the methods of inference discussed here.
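For plain (non-progressive) Type-I censoring, the exponential MLE and a parametric-bootstrap percentile interval can be sketched as follows; this simplified setup omits the progressive removal of test units, and all names are illustrative:

```python
import numpy as np

def exp_mle_censored(times, censored):
    """MLE of the exponential mean under right censoring:
    total time on test divided by the number of observed failures."""
    return times.sum() / np.sum(~censored)

def parametric_bootstrap_ci(theta_hat, n, tau, n_boot=999, alpha=0.05, seed=0):
    """Parametric-bootstrap percentile interval (plain Type-I censoring at tau)."""
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        t = rng.exponential(theta_hat, size=n)
        cens = t > tau
        if np.sum(~cens) == 0:     # the MLE does not exist without failures
            continue
        boot.append(exp_mle_censored(np.minimum(t, tau), cens))
    q = 100.0 * np.array([alpha / 2, 1 - alpha / 2])
    return np.percentile(boot, q)

# a Type-I censored exponential life-test terminated at time tau
rng = np.random.default_rng(2)
n, tau = 60, 2.0
t = rng.exponential(1.5, size=n)          # true mean lifetime 1.5
cens = t > tau
theta_hat = exp_mle_censored(np.minimum(t, tau), cens)
lo, hi = parametric_bootstrap_ci(theta_hat, n, tau)
```

The skipped replicates with zero failures mirror the existence condition on the MLE that the abstract highlights; the article's exact-distribution approach addresses the same issue analytically.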

9.
We present finite sample evidence on different IV estimators available for linear models under weak instruments; explore the application of the bootstrap as a bias reduction technique to attenuate their finite sample bias; and employ three empirical applications to illustrate and provide insights into the relative performance of the estimators in practice. Our evidence indicates that the random‐effects quasi‐maximum likelihood estimator outperforms alternative estimators in terms of median point estimates and coverage rates, followed by the bootstrap bias‐corrected version of LIML and LIML. However, our results also confirm the difficulty of obtaining reliable point estimates in models with weak identification and moderate‐size samples. Copyright © 2007 John Wiley & Sons, Ltd.

10.
This article is concerned with inference on seemingly unrelated non‐parametric regression models with serially correlated errors. Based on an initial estimator of the mean functions, we first construct an efficient estimator of the autoregressive parameters of the errors. Then, by applying an undersmoothing technique and taking both the contemporaneous correlation among equations and the serial correlation into account, we propose an efficient two‐stage local polynomial estimator for the unknown mean functions. It is shown that the resulting estimator has the same bias as estimators which neglect the contemporaneous and/or serial correlation, but smaller asymptotic variance. The asymptotic normality of the resulting estimator is also established. In addition, we develop a wild block bootstrap test for the goodness‐of‐fit of models. The finite sample performance of our procedures is investigated in a simulation study, the results of which are very supportive, and a real data set is analysed to illustrate the usefulness of our procedures.

11.
We propose a fast resample method for two step nonlinear parametric and semiparametric models, which does not require recomputation of the second stage estimator during each resample iteration. The fast resample method directly exploits the score function representations computed on each bootstrap sample, thereby reducing computational time considerably. This method is used to approximate the limit distribution of parametric and semiparametric estimators, possibly simulation based, that admit an asymptotic linear representation. Monte Carlo experiments demonstrate the desirable performance and vast improvement in the numerical speed of the fast bootstrap method.

12.
Tsung-Shan Tsou, Metrika (2010) 71(1): 101–115
This paper introduces a way of modifying the bivariate normal likelihood function. One can use the adjusted likelihood to generate valid likelihood inferences for the regression parameter of interest, even if the bivariate normal assumption is fallacious. The retained asymptotic legitimacy requires no knowledge of the true underlying joint distributions so long as their second moments exist. The extension to multivariate situations is straightforward in theory and yet appears to be arduous computationally. Nevertheless, it is illustrated that the implementation of this seemingly sophisticated procedure is almost effortless, requiring only outputs from existing statistical software. The efficacy of the proposed parametric approach is demonstrated via simulations.

13.
This paper studies likelihood-based estimation and inference in parametric discontinuous threshold regression models with i.i.d. data. The setup allows heteroskedasticity and threshold effects in both mean and variance. By interpreting the threshold point as a “middle” boundary of the threshold variable, we find that the Bayes estimator is asymptotically efficient among all estimators in the locally asymptotically minimax sense. In particular, the Bayes estimator of the threshold point is asymptotically strictly more efficient than the left-endpoint maximum likelihood estimator and the newly proposed middle-point maximum likelihood estimator. Algorithms are developed to calculate asymptotic distributions and risk for the estimators of the threshold point. The posterior interval is proved to be an asymptotically valid confidence interval and is attractive in both length and coverage in finite samples.

14.
In this paper, we propose a fixed design wild bootstrap procedure to test parameter restrictions in vector autoregressive models, which is robust to conditionally heteroskedastic error terms. The wild bootstrap does not require any parametric specification of the volatility process and takes contemporaneous error correlation implicitly into account. Via a Monte Carlo investigation, empirical size and power properties of the method are illustrated for the case of white noise under the null hypothesis. We compare the bootstrap approach with standard ordinary least squares (OLS)-based, weighted least squares (WLS) and quasi-maximum likelihood (QML) approaches. In terms of empirical size, the proposed method outperforms competing approaches and achieves size-adjusted power close to WLS or QML inference. A White correction of standard OLS inference is satisfactory only in large samples. We investigate the case of Granger causality in a bivariate system of inflation expectations in France and the United Kingdom. Our evidence suggests that the former are Granger causal for the latter, while for the reverse relation Granger non-causality cannot be rejected.

15.
Cuizhen Niu, Xu Guo, Wangli Xu & Lixing Zhu, Metrika (2014) 77(6): 795–809
Due to its striking resemblance to normal theory and inference methods, the inverse Gaussian (IG) distribution is commonly applied to model positive and right-skewed data. As the shape parameter in the IG distribution is closely related to other important quantities such as the mean, skewness, kurtosis and the coefficient of variation, it plays an important role in distribution theory. This paper focuses on testing the equality of shape parameters in several inverse Gaussian distributions. Three tests are suggested: the exact generalized inference-based test, the asymptotic test and a test based on a parametric bootstrap approximation. Simulation studies are undertaken to examine the performance of these methods, and three real data examples are analyzed for illustration.
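A parametric-bootstrap version of such a test can be sketched for two samples; this is our own illustration (NumPy's `wald` generator is the inverse Gaussian parameterized by mean and shape), not the authors' exact procedure:

```python
import numpy as np

def ig_shape_mle(x):
    """MLE of the inverse-Gaussian shape parameter lambda."""
    return len(x) / np.sum(1.0 / x - 1.0 / x.mean())

def bootstrap_equal_shape_pvalue(x1, x2, n_boot=499, seed=0):
    """Parametric-bootstrap p-value for H0: lambda_1 = lambda_2
    (means left unrestricted; statistic is |log(lambda_1 / lambda_2)|)."""
    rng = np.random.default_rng(seed)
    l1, l2 = ig_shape_mle(x1), ig_shape_mle(x2)
    stat = abs(np.log(l1 / l2))
    n1, n2 = len(x1), len(x2)
    l0 = (n1 + n2) / (n1 / l1 + n2 / l2)        # pooled shape MLE under H0
    exceed = 0
    for _ in range(n_boot):
        b1 = rng.wald(x1.mean(), l0, size=n1)   # NumPy's Wald = inverse Gaussian
        b2 = rng.wald(x2.mean(), l0, size=n2)
        if abs(np.log(ig_shape_mle(b1) / ig_shape_mle(b2))) >= stat:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)

rng = np.random.default_rng(3)
x1 = rng.wald(1.0, 2.0, size=80)   # same shape lambda = 2 ...
x2 = rng.wald(1.5, 2.0, size=80)   # ... but different means
pval = bootstrap_equal_shape_pvalue(x1, x2)
```

Simulating under the pooled shape estimate, with each sample keeping its own mean, is what makes this a test of the shape parameters alone.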

16.
Bootstrapping Financial Time Series
It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behavior of returns, inference and prediction methods, based on independent and/or Gaussian observations may be inadequate. As bootstrap methods are not, in general, based on any particular assumption on the distribution of the data, they are well suited for the analysis of returns. This paper reviews the application of bootstrap procedures for inference and prediction of financial time series. In relation to inference, bootstrap techniques have been applied to obtain the sample distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility and the predictive ability of technical trading rules. On the other hand, bootstrap procedures have been used to estimate the distribution of returns which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers where the bootstrap procedures used are not adequate.
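The use of the bootstrap to approximate the distribution of a VaR estimate can be sketched as follows; note that this iid resample ignores the volatility clustering the abstract warns about, so it illustrates mechanics only (all names are our own):

```python
import numpy as np

def bootstrap_var(returns, level=0.99, n_boot=999, seed=0):
    """Bootstrap the sampling distribution of the historical VaR estimate."""
    rng = np.random.default_rng(seed)
    q = 100.0 * (1.0 - level)
    var_hat = -np.percentile(returns, q)       # 99% VaR reported as a positive loss
    boot = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(returns, size=len(returns), replace=True)
        boot[b] = -np.percentile(sample, q)
    return var_hat, boot

# heavy-tailed synthetic 'returns' with excess kurtosis
rng = np.random.default_rng(4)
r = 0.01 * rng.standard_t(df=5, size=1000)
var_hat, boot = bootstrap_var(r, level=0.99)
lo, hi = np.percentile(boot, [2.5, 97.5])
```

For dependent, heteroscedastic return series, a block or residual-based scheme of the kind surveyed in the paper would replace the iid `rng.choice` resampling step.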

17.
Probabilistic record linkage is the act of bringing together records that are believed to belong to the same unit (e.g., person or business) from two or more files. It is a common way to enhance dimensions such as time and breadth or depth of detail. Probabilistic record linkage is not an error-free process and may link records that do not belong to the same unit. Naively treating such a linked file as if it were linked without errors can lead to biased inferences. This paper develops a method of making inference with estimating equations when records are linked using algorithms that are widely used in practice. Previous methods for dealing with this problem cannot accommodate such linking algorithms. This paper develops a parametric bootstrap approach to inference in which each bootstrap replicate involves applying the said linking algorithm. This paper demonstrates the effectiveness of the method in simulations and in real applications.

18.
This paper presents results from a Monte Carlo study concerning inference with spatially dependent data. We investigate the impact of location/distance measurement errors upon the accuracy of parametric and nonparametric estimators of asymptotic variances. Nonparametric estimators are quite robust to such errors, method of moments estimators perform surprisingly well, and maximum likelihood estimators are very poor. We also present and evaluate a specification test based on a parametric bootstrap that has good power properties for the types of measurement error we consider.

19.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility models (SV), allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures–whose level can be controlled in finite samples–as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against a SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. 
Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor’s Composite Price Index.

20.
We propose a general class of models and a unified Bayesian inference methodology for flexibly estimating the density of a response variable conditional on a possibly high-dimensional set of covariates. Our model is a finite mixture of component models with covariate-dependent mixing weights. The component densities can belong to any parametric family, with each model parameter being a deterministic function of covariates through a link function. Our MCMC methodology allows for Bayesian variable selection among the covariates in the mixture components and in the mixing weights. The model’s parameterization and variable selection prior are chosen to prevent overfitting. We use simulated and real data sets to illustrate the methodology.

