Similar Literature
Found 20 similar records (search time: 31 ms)
1.
We discuss a method to estimate the confidence bounds for average economic growth, which is robust to misspecification of the unit root property of a given time series. We derive asymptotic theory for the consequences of such misspecification. Our empirical method amounts to an implementation of the subsampling procedure advocated in Romano and Wolf (Econometrica, 2001, Vol. 69, p. 1283). Simulation evidence supports the theory and also indicates the practical relevance of the subsampling method. We use quarterly postwar US industrial production for illustration and show that non-robust approaches can lead to conclusions on average economic growth that differ from those of our robust approach.
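The subsampling idea underlying this abstract can be sketched in a few lines. The following is a minimal illustration of a Romano–Wolf-style subsampling confidence interval for the mean of a time series, not the paper's exact implementation; the √n convergence rate and the choice of block length `b` are assumptions of this sketch.

```python
import numpy as np

def subsample_ci(x, b, alpha=0.05):
    """Subsampling confidence interval for the mean of a time series.

    Illustrative sketch of the subsampling principle: the statistic is
    recomputed on every block of b consecutive observations, and the
    empirical distribution of the centered, rescaled subsample statistics
    approximates the sampling distribution of the full-sample statistic.
    Assumes a sqrt(n) convergence rate.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = x.mean()
    # statistic on every overlapping block of b consecutive observations
    subs = np.array([x[i:i + b].mean() for i in range(n - b + 1)])
    # center at the full-sample estimate and rescale at rate sqrt(b)
    dist = np.sqrt(b) * (subs - theta_hat)
    lo_q, hi_q = np.quantile(dist, [alpha / 2, 1 - alpha / 2])
    return theta_hat - hi_q / np.sqrt(n), theta_hat - lo_q / np.sqrt(n)
```

The block length `b` must grow with `n` but satisfy `b/n → 0`; in practice it is a tuning parameter, which is exactly the calibration issue several of the papers below address.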

2.
This article reviews the content-corrected method for tolerance limits proposed by Fernholz and Gillespie (2001) and addresses some robustness issues that affect the length of the corresponding interval as well as the corrected content value. The content-corrected method for k-factor tolerance limits consists of obtaining a bootstrap corrected value p * that is robust in the sense of preserving the confidence coefficient for a variety of distributions. We propose several location/scale robust alternatives to obtain robust corrected-content k-factor tolerance limits that produce shorter intervals when outlying observations are present. We analyze the Hadamard differentiability to ensure bootstrap consistency for large samples. We define the breakdown point for the particular statistic to be bootstrapped, and we obtain the influence function and the value of the breakdown point for the traditional and the robust statistics. Two examples showing the advantage of using these robust alternatives are also included.

3.
This paper analyzes the properties of subsampling, hybrid subsampling, and size-correction methods in two non-regular models. The latter two procedures are introduced in Andrews and Guggenberger (2009a). The models are non-regular in the sense that the test statistics of interest exhibit a discontinuity in their limit distribution as a function of a parameter in the model. The first model is a linear instrumental variables (IV) model with possibly weak IVs estimated using two-stage least squares (2SLS). In this case, the discontinuity occurs when the concentration parameter is zero. The second model is a linear regression model in which the parameter of interest may be near a boundary. In this case, the discontinuity occurs when the parameter is on the boundary.

4.
This article proposes omnibus specification tests of parametric dynamic quantile models. In contrast to the existing procedures, we allow for a flexible specification, where a possible continuum of quantiles is simultaneously specified under fairly weak conditions on the serial dependence in the underlying data-generating process. Since the null limit distribution of the tests is not pivotal, we propose a subsampling approximation of the asymptotic critical values. A Monte Carlo study shows that the asymptotic results provide good approximations for small sample sizes. Finally, an application suggests that our methodology is a powerful alternative to standard backtesting procedures in evaluating market risk.

5.
We study a general family of Anderson–Rubin-type procedures, allowing for arbitrary collinearity among the instruments and endogenous variables. Using finite-sample distributional theory, we show that the proposed procedures, besides being robust to weak instruments, are also robust to the exclusion of relevant instruments and to the distribution of endogenous regressors. A solution to the problem of computing linear projections from general possibly singular quadric surfaces is derived and used to build finite-sample confidence sets for individual structural parameters. The importance of robustness to excluded instruments is studied by simulation. Applications to the trade-growth relationship and to education returns are presented.

6.
In the robustness framework, the parametric model underlying the data is usually embedded in a neighborhood of other plausible distributions. Accordingly, the asymptotic properties of robust estimates should be uniform over the whole set of possible models. In this paper, we study location M-estimates calculated with a previous generalized S-scale and show that, under some regularity conditions, they are uniformly asymptotically normal over contamination neighborhoods of known size. There is a trade-off between the size of the neighborhood and the breakdown point of the GS-scale, but it is possible to adjust the estimates so that they have a 50% breakdown point while uniform asymptotic normality is ensured over neighborhoods that contain up to 25% contamination. Alternatively, both the breakdown point and the size of the neighborhood can be chosen to be 38%. These results represent an improvement over those obtained recently by Salibian-Barrera and Zamar (2004). J. R. Berrendero was supported by Spanish Grant BFM2001-0169 and Grant 06/0050/2003 (Comunidad de Madrid). R. H. Zamar was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
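A location M-estimate computed from a preliminary robust scale can be illustrated as follows. This is a rough sketch only: it uses a Huber ψ-function and the normalized MAD as the preliminary scale, standing in for the generalized S-scale (GS) that the paper actually studies; `k = 1.345` is the conventional Huber tuning constant, not a value from the paper.

```python
import numpy as np

def huber_m_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Location M-estimate with Huber psi, via iteratively reweighted means.

    Illustrative sketch: the MAD (rescaled for consistency at the normal)
    replaces the generalized S-scale used in the paper.
    """
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu))   # normalized MAD scale
    for _ in range(max_iter):
        r = (x - mu) / s                     # standardized residuals
        # Huber weights: 1 inside [-k, k], k/|r| outside
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol * s:
            return mu_new
        mu = mu_new
    return mu
```

With 50% of the data replaced by outliers the MAD itself breaks down, which is the trade-off between scale breakdown point and neighborhood size that the abstract describes.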

7.
This paper studies robust inference for linear panel models with fixed effects in the presence of heteroskedasticity and spatiotemporal dependence of unknown forms. We propose a bivariate kernel covariance estimator that nests existing estimators as special cases. Our estimator improves upon existing estimators in terms of robustness, efficiency, and adaptiveness. For distributional approximations, we consider two types of asymptotics: the increasing-smoothing asymptotics and the fixed-smoothing asymptotics. Under the former asymptotics, the Wald statistic based on our covariance estimator converges to a chi-square distribution. Under the latter asymptotics, the Wald statistic is asymptotically equivalent to a distribution that can be well approximated by an F distribution. Simulation results show that our proposed testing procedure works well in finite samples.

8.
This paper studies subsampling VAR tests of linear constraints as a way of finding approximations of their finite-sample distributions that are valid regardless of the stochastic nature of the data-generating processes for the tests. In computing the VAR tests with subsamples (i.e., blocks of consecutive time series observations), both the tests in their original form and the tests with the subsample OLS coefficient estimates centered at the full-sample estimates are used. Subsampling using the latter is called centered subsampling in this paper. It is shown that both subsampling schemes provide asymptotic distributions equivalent to those of the VAR tests. In addition, the tests using critical values from the subsamplings are shown to be consistent. The subsampling methods are applied to testing for causality. To choose the block sizes for subsample causality tests, the minimum volatility method, a new simulation-based calibration rule and a bootstrap-based calibration rule are used. Simulation results in this paper indicate that centered subsampling using the simulation-based calibration rule for the block size is quite promising: it delivers stable empirical size and reasonably high-powered causality tests. Moreover, when the causality test has a chi-square distribution in the limit, the test using critical values from the centered subsampling has better size properties than the one using chi-square critical values. The centered subsampling using the bootstrap-based calibration rule for the block size also works well, but it is slightly inferior to that using the simulation-based calibration rule.

9.
This article studies inference on a multivariate trend model when the volatility process is nonstationary. Within a quite general framework, we analyze four classes of tests based on least squares estimation, one of which is robust to both weak serial correlation and nonstationary volatility. The existing multivariate trend tests, which either use non-robust standard errors or rely on non-standard distribution theory, are generally non-pivotal, involving the unknown time-varying volatility function in the limit. Two-step residual-based i.i.d. bootstrap and wild bootstrap procedures are proposed for the robust tests and are shown to be asymptotically valid. Simulations demonstrate the effects of nonstationary volatility on the trend tests and the good behavior of the robust tests in finite samples.

10.
Sonja Kuhnt, Metrika (2010) 71(3): 281–294
Loglinear Poisson models are commonly used to analyse contingency tables. So far, robustness of parameter estimators as well as outlier detection have rarely been treated in this context. We start with finite-sample breakdown points and show that the breakdown point of mean value estimators determines a lower bound for the masking breakdown point of a class of one-step outlier identification rules. Within a more refined breakdown approach, which takes account of the structure of the contingency table, a stochastic breakdown function is defined. It returns the probability that a given proportion of outliers is randomly placed at a pattern where breakdown is possible. Finally, the introduced breakdown concepts are applied to characterise the maximum likelihood estimator and a median-polish estimator.

11.
We provide an extensive evaluation of the predictive performance of the US yield curve for US gross domestic product growth by using new tests for forecast breakdown, in addition to a variety of in-sample and out-of-sample evaluation procedures. Empirical research over the past decades has uncovered a strong predictive relationship between the yield curve and output growth, whose stability has recently been questioned. We document the existence of a forecast breakdown during the Burns–Miller and Volcker monetary policy regimes, whereas during the early part of the Greenspan era the yield curve emerged as a more reliable model to predict future economic activity.

12.
A maxbias curve is a powerful tool to describe the robustness of an estimator. It is an asymptotic concept which tells how much an estimator can change due to a given fraction of contamination. In this paper, maxbias curves are computed for some univariate scale estimators based on subranges: trimmed standard deviations, interquantile ranges and the univariate Minimum Volume Ellipsoid (MVE) and Minimum Covariance Determinant (MCD) scale estimators. These estimators are intuitively appealing and easy to calculate. Since the bias behavior of scale estimators may differ depending on the type of contamination (outliers or inliers), expressions for both explosion and implosion maxbias curves are given. On the basis of robustness and efficiency arguments, the MCD scale estimator with 25% breakdown point can be recommended for practical use. Received: February 2000
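The recommended univariate MCD scale estimator is easy to compute exactly: in one dimension the minimizing h-subset is contiguous in the sorted sample, so a sliding-window search suffices. The sketch below returns the raw subset standard deviation; a consistency factor (omitted here) would be needed to make it unbiased for the normal scale.

```python
import numpy as np

def mcd_scale(x, breakdown=0.25):
    """Univariate MCD scale: std of the h-subset with smallest variance.

    Sketch of the univariate Minimum Covariance Determinant scale with the
    stated 25% breakdown point. In one dimension the optimal h-subset is
    contiguous in the sorted sample, so the sliding-window search is exact.
    Note: the consistency factor for the normal model is omitted.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    h = n - int(breakdown * n)          # subset size for the given breakdown
    # exhaustive search over all contiguous windows of length h
    return min(x[i:i + h].std(ddof=1) for i in range(n - h + 1))
```

With 10% of the sample replaced by gross outliers, the ordinary standard deviation explodes while this estimator barely moves, which is the "explosion" side of the maxbias behavior discussed above.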

13.
A major aim in recent nonparametric frontier modeling is to estimate a partial frontier well inside the sample of production units but near the optimal boundary. Two concepts of partial boundaries of the production set have been proposed: an expected maximum output frontier of order m=1,2,… and a conditional quantile-type frontier of order α∈]0,1]. In this paper, we answer the important question of how the two families are linked. For each m, we specify the order α for which both partial production frontiers can be compared. We show that even one perturbation in the data is sufficient for breakdown of the nonparametric order-m frontiers, whereas the global robustness of the order-α frontiers attains a higher breakdown value. Nevertheless, once the α frontiers break down, they become less resistant to outliers than the order-m frontiers. Moreover, the m frontiers have the advantage of being statistically more efficient. Based on these findings, we suggest a methodology for identifying outlying data points. We establish some asymptotic results, filling important gaps in the literature. The theoretical findings are illustrated via simulations and real data.
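The expected maximum output frontier of order m has a direct Monte Carlo interpretation that can be sketched briefly. The function below is an illustrative approximation (in the spirit of the order-m literature), not the estimator from this paper; the names `order_m_frontier`, `n_draws`, and the single-input setting are assumptions of this sketch.

```python
import numpy as np

def order_m_frontier(X, Y, x0, m, n_draws=200, seed=0):
    """Monte Carlo approximation of the expected order-m output frontier.

    At input level x0, the order-m frontier is the expected maximum output
    among m production units drawn at random from those using no more
    input than x0. For m -> infinity it approaches the full (FDH-type)
    frontier; for small m it stays well inside the sample, which is what
    makes it a *partial* frontier.
    """
    rng = np.random.default_rng(seed)
    y_pool = Y[X <= x0]                    # outputs of dominated-input units
    draws = rng.choice(y_pool, size=(n_draws, m), replace=True)
    return draws.max(axis=1).mean()        # average of the m-wise maxima
```

Because each evaluation averages maxima of only m resampled points, a single extreme output shifts the order-m frontier by a bounded amount for fixed m, illustrating the robustness trade-off the abstract describes.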

14.
This paper considers the estimation of likelihood-based models in a panel setting. That is, we have panel data, and for each time period separately we have a correctly specified model that could be estimated by MLE. We want to allow non-independence over time. This paper shows how to improve on the QMLE. It then considers MLE based on joint distributions constructed using copulas. It discusses the efficiency gain from using the true copula, and shows that knowledge of the true copula is redundant only if the variance matrix of the relevant set of moment conditions is singular. It also discusses the question of robustness against misspecification of the copula, and proposes a test of the validity of the copula. GMM methods are argued to be useful analytically, and also for reasons of efficiency if the copula is robust but not correct.

15.
Long-run variance estimation can typically be viewed as the problem of estimating the scale of a limiting continuous time Gaussian process on the unit interval. A natural benchmark model is given by a sample that consists of equally spaced observations of this limiting process. The paper analyzes the asymptotic robustness of long-run variance estimators to contaminations of this benchmark model. It is shown that any equivariant long-run variance estimator that is consistent in the benchmark model is highly fragile: there always exists a sequence of contaminated models with the same limiting behavior as the benchmark model for which the estimator converges in probability to an arbitrary positive value. A class of robust inconsistent long-run variance estimators is derived that optimally trades off asymptotic variance in the benchmark model against the largest asymptotic bias in a specific set of contaminated models.

16.
O. Arslan, O. Edlund, H. Ekblom, Metrika (2002) 55(1–2): 37–51
Constrained M-estimators for regression were introduced by Mendes and Tyler in 1995 as an alternative class of robust regression estimators with high breakdown point and high asymptotic efficiency. To compute the CM-estimate, the global minimum of an objective function with an inequality constraint has to be localized. To find the S-estimate for the same problem, we instead restrict ourselves to the boundary of the feasible region. The algorithm presented for computing CM-estimates can easily be modified to compute S-estimates as well. Testing is carried out with a comparison to the algorithm SURREAL by Ruppert.

17.
This paper studies the semiparametric binary response model with interval data investigated by Manski and Tamer (2002). In this partially identified model, we propose a new estimator based on Manski and Tamer's modified maximum score (MMS) method by introducing density weights to the objective function, which allows us to develop asymptotic properties of the proposed set estimator for inference. We show that the density-weighted MMS estimator converges at a nearly cube-root-n rate. We propose an asymptotically valid inference procedure for the identified region based on subsampling. Monte Carlo experiments provide support for our inference procedure.

18.
Harvey, Leybourne and Taylor [Harvey, D.I., Leybourne, S.J., Taylor, A.M.R. 2009. Simple, robust and powerful tests of the breaking trend hypothesis. Econometric Theory 25, 995–1029] develop a test for the presence of a broken linear trend at an unknown point in the sample whose size is asymptotically robust as to whether the (unknown) order of integration of the data is either zero or one. This test is not size controlled, however, when this order assumes fractional values; its asymptotic size can be either zero or one in such cases. In this paper we suggest a new test, based on a sup-Wald statistic, which is asymptotically size-robust across fractional values of the order of integration (including zero or one). We examine the asymptotic power of the test under a local trend break alternative. The finite sample properties of the test are also investigated.

19.
There is a need for tests that are derived from the ordinary least squares (OLS) estimators of regression coefficients and are useful in the presence of unspecified forms of heteroskedasticity and autocorrelation. A method that uses the moving block bootstrap and quasi-estimators in order to derive a consistent estimator of the asymptotic covariance matrix for the OLS estimators and robust significance tests is proposed. The method is shown to be asymptotically valid and Monte Carlo evidence indicates that it is capable of providing good control of significance levels in finite samples and good power compared with two other bootstrap tests.
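The moving block bootstrap step can be sketched as follows. This is a simplified illustration of resampling blocks of consecutive rows to estimate the covariance matrix of OLS coefficients under serial dependence; it omits the quasi-estimator construction the abstract refers to, and all names and defaults here are assumptions of this sketch.

```python
import numpy as np

def mbb_ols_cov(X, y, block_len, n_boot=500, seed=0):
    """Moving block bootstrap estimate of the OLS coefficient covariance.

    Sketch: blocks of block_len consecutive (row, response) pairs are drawn
    with replacement, concatenated into a pseudo-sample of length n, OLS is
    refit on each pseudo-sample, and the empirical covariance of the
    refitted coefficients is returned.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    n_blocks = int(np.ceil(n / block_len))
    max_start = n - block_len + 1          # admissible block starting points
    betas = []
    for _ in range(n_boot):
        starts = rng.integers(0, max_start, size=n_blocks)
        idx = np.concatenate(
            [np.arange(s, s + block_len) for s in starts]
        )[:n]                               # trim to the original length
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        betas.append(beta)
    return np.cov(np.array(betas), rowvar=False)
```

Keeping blocks of consecutive observations intact is what preserves the (unknown) short-run dependence structure within each pseudo-sample; `block_len` plays the same tuning role as the subsampling block size in the papers above.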

20.
The breakdown point in its different variants is one of the central notions to quantify the global robustness of a procedure. We propose a simple supplementary variant which is useful in situations where we have no obvious or only partial equivariance: extending the Donoho and Huber (The notion of breakdown point, Wadsworth, Belmont, 1983) Finite Sample Breakdown Point, we propose the Expected Finite Sample Breakdown Point to produce less configuration-dependent values while still preserving the finite sample aspect of the former definition. We apply this notion to joint estimation of scale and shape (with only scale-equivariance available), exemplified for generalized Pareto, generalized extreme value, Weibull, and Gamma distributions. In these settings, we are interested in highly-robust, easy-to-compute initial estimators; to this end we study Pickands-type and Location-Dispersion-type estimators and compute their respective breakdown points.
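The finite sample breakdown point being extended here can be computed by brute force for simple location estimators. The sketch below implements the replacement version for contamination pushed toward +∞ only (the general definition takes the worst case over all contamination configurations); the names `fsbp`, `big`, and `tol` are assumptions of this illustration.

```python
import numpy as np

def fsbp(estimator, x, big=1e12, tol=1e6):
    """Replacement finite-sample breakdown point, brute-force version.

    Returns the smallest fraction m/n of observations that, when replaced
    by an arbitrarily large value, drives the estimate beyond the bound
    tol. Sketch for one-sided contamination of location estimators only.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    for m in range(1, n + 1):
        y = x.copy()
        y[:m] = big                   # replace m observations with outliers
        if abs(estimator(y)) > tol:   # estimate has escaped every bound
            return m / n
    return 1.0
```

For a sample of size 11, the mean breaks down at 1/11 while the median resists until 6 of 11 points are replaced, the classic contrast the breakdown literature starts from.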


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号