Similar Literature (20 results)
1.
Standard model-based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit-level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit-level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set suspected to contain an outlier confirms this suspicion, correctly identifies the outlier, and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures, including the M-quantile method of Chambers & Tzavidis (2006), via simulations shows that our proposed method is as good as the others in terms of bias, variability, and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, our method and the Sinha–Rao method perform similarly, and both improve on the other methods. This dual (Bayes and frequentist) dominance should make our procedure attractive to all practitioners of small area estimation, Bayesian and frequentist alike.

2.
The effective use of spatial information in a regression-based approach to small area estimation is an important practical issue. One way to account for geographic information is to extend the linear mixed model to allow for spatially correlated random area effects. An alternative is to incorporate the spatial information through non-parametric mixed models. Another option is geographically weighted regression, where the model coefficients vary spatially across the geography of interest. Although these approaches estimate small area means efficiently under strict parametric assumptions, they can be sensitive to outliers. In this paper, we propose robust extensions of the geographically weighted empirical best linear unbiased predictor. In particular, we introduce robust projective and predictive estimators under spatial non-stationarity. Mean squared error estimation is performed by two analytic approaches that account for the spatial structure in the data. Model-based simulations show that the proposed methodology often leads to more efficient estimators. Furthermore, the analytic mean squared error estimators introduced have appealing stability and bias properties. Finally, we demonstrate in an application that the new methodology is a good choice for producing estimates of average apartment rents in urban planning areas in Berlin.

3.
Cyberattacks in power systems that alter the input data of a load forecasting model have serious, potentially devastating consequences. Existing cyberattack-resilient work focuses mainly on enhancing attack detection. Although some outliers can be easily identified, more carefully designed attacks can escape detection and impact load forecasting. Here, a cyberattack-resilient load forecasting approach based on an adaptive robust regression method is proposed, where the observations are trimmed based on their residuals and the proportion of the trim is adaptively determined by an estimation of the contaminated data proportion. An extensive comparison study shows that the proposed method outperforms the standard robust regression in various settings.
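The abstract above describes trimming observations by residual size, with the trim proportion determined adaptively from an estimate of the contamination proportion. The following is a minimal sketch of that idea, not the authors' exact estimator: the contamination share is proxied by the fraction of residuals beyond three robust (MAD-based) standard deviations, and a simple linear fit is iterated on the retained points. All function names are illustrative.

```python
import statistics

def fit_ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def adaptive_trimmed_fit(x, y, n_iter=10):
    """Refit after discarding points flagged as contaminated.

    The trim proportion is not fixed in advance: on each pass it equals the
    share of residuals lying beyond 3 robust standard deviations, a stand-in
    for the paper's contamination-proportion estimate.
    """
    keep = list(range(len(x)))
    for _ in range(n_iter):
        a, b = fit_ols([x[i] for i in keep], [y[i] for i in keep])
        res = [y[i] - (a + b * x[i]) for i in range(len(x))]
        med = statistics.median(res)
        mad = statistics.median(abs(r - med) for r in res)
        scale = 1.4826 * mad if mad > 0 else 1e-12
        new_keep = [i for i, r in enumerate(res) if abs(r) <= 3 * scale]
        if new_keep == keep:
            break
        keep = new_keep
    return a, b

# Clean line y = 2x + 1 with two corrupted ("attacked") observations.
x = [float(i) for i in range(20)]
y = [1.0 + 2.0 * xi for xi in x]
y[3] += 40.0
y[15] -= 35.0
a, b = adaptive_trimmed_fit(x, y)
```

Once the corrupted points are trimmed, the refit recovers the clean line almost exactly.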

4.
Robust normal reference bandwidth for kernel density estimation
Bandwidth selection is the central problem in kernel density estimation, the most popular method of density estimation. The classical normal reference bandwidth usually oversmooths the density estimate. Existing sophisticated bandwidth selectors suffer from computational problems (a solution may not even exist) and are not robust to outliers in the sample. A highly robust normal reference bandwidth is proposed, which adapts to different types of densities.
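The paper's own proposal is not reproduced here, but the classical robust variant of the normal reference rule illustrates the idea it builds on: replace the standard deviation in Silverman's rule of thumb by a scale estimate that a few outliers cannot inflate.

```python
import statistics

def robust_normal_reference_bandwidth(data):
    """Silverman-style normal reference bandwidth with a robust scale.

    Uses min(sample sd, IQR / 1.349), so that outliers inflating the sd
    cannot force oversmoothing. This is the textbook robust variant, not
    the specific bandwidth proposed in the paper above.
    """
    n = len(data)
    sd = statistics.stdev(data)
    q = statistics.quantiles(data, n=4)   # quartiles
    iqr = q[2] - q[0]
    scale = min(sd, iqr / 1.349)          # 1.349 = IQR of N(0,1)
    return 0.9 * scale * n ** (-1 / 5)

data = [float(v) for v in range(100)]
h_clean = robust_normal_reference_bandwidth(data)
h_outlier = robust_normal_reference_bandwidth(data + [1e6])
```

Adding one extreme point explodes the standard deviation but barely moves the IQR, so the bandwidth stays stable.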

5.
The problem of testing non-nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity-robust joint test against a combination of the artificial alternatives used for autocorrelation and non-nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well-behaved procedure and gives better control of finite sample significance levels than asymptotic critical values.

6.
Simultaneous confidence bands are versatile tools for visualizing estimation uncertainty for parameter vectors, such as impulse response functions. In linear models, it is known that the sup-t confidence band is narrower than commonly used alternatives such as Bonferroni and projection bands. We show that the same ranking applies asymptotically even in general nonlinear models, such as vector autoregressions (VARs). Moreover, we provide further justification for the sup-t band by showing that it is the optimal default choice when the researcher does not know the audience's preferences. Complementing existing plug-in and bootstrap implementations, we propose a computationally convenient Bayesian sup-t band with exact finite-sample simultaneous credibility. In an application to structural VAR impulse response function estimation, the sup-t band, which has been surprisingly overlooked in this setting, is at least 35% narrower than other off-the-shelf simultaneous bands.

7.
Forecast outliers commonly occur in economic, financial, and other forecasting applications. In the forecast combination literature, only a few studies have explored how to deal with them. In this work, we propose two robust combining methods based on the AFTER algorithm (Yang, 2004a). Our approach uses robust loss functions to reduce the influence of outliers. Oracle inequalities are obtained for certain versions of these methods, showing that the combined forecasts automatically perform as well as the best individual forecast in the pool. Systematic simulations and data examples show that the robust methods outperform the AFTER algorithm when outliers are likely to occur and perform on par with AFTER when there are none. Comparison of the robust AFTERs with some commonly used combining methods also shows their potential advantages.
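To give a feel for loss-based combining, here is a schematic AFTER-style recursion, not Yang's exact algorithm: each forecaster's weight is multiplied by exp(-λ·loss) of its last error, and using a bounded (Huber) loss in place of the squared loss is one way a single wild error is kept from destroying a good forecaster's weight. The loss and λ choices are illustrative assumptions.

```python
import math

def huber(r, c=1.345):
    """Huber loss: quadratic for small errors, linear in the tails."""
    a = abs(r)
    return 0.5 * r * r if a <= c else c * (a - 0.5 * c)

def combine(forecasts, actuals, loss=huber, lam=1.0):
    """Sequential exponential-weight combination (AFTER-style sketch).

    forecasts: list of per-forecaster lists of one-step forecasts.
    Returns the combined forecast for each period; weights are updated
    multiplicatively by exp(-lam * loss(error)) after each observation.
    """
    k = len(forecasts)
    w = [1.0 / k] * k
    combined = []
    for t, y in enumerate(actuals):
        combined.append(sum(wj * forecasts[j][t] for j, wj in enumerate(w)))
        w = [wj * math.exp(-lam * loss(forecasts[j][t] - y))
             for j, wj in enumerate(w)]
        s = sum(w)
        w = [wj / s for wj in w]
    return combined

# Forecaster 0 tracks the truth; forecaster 1 is biased upward.
actuals = [1.0] * 30
forecasts = [[1.0] * 30, [3.0] * 30]
out = combine(forecasts, actuals)
```

The first combined forecast averages both forecasters; by the end, nearly all weight has shifted to the accurate one.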

8.
The paper discusses methods of estimating univariate ARIMA models with outliers. The approach calls for a state vector representation of the time-series model, on which we can then operate using the Kalman filter. An additional advantage of the Kalman filter operating on the state vector representation is that the method and code can easily be adapted to ARIMA models with missing observations. The paper investigates ways to compute robust initial estimates of the parameters of the ARIMA model. The proposed method is based on results obtained by R.D. Martin (1980).

9.
In frontier analysis, most nonparametric approaches (DEA, FDH) are based on envelopment ideas, which assume that, with probability one, all observed units belong to the attainable set. For these deterministic frontier models, statistical theory is now largely available (Simar and Wilson, 2000a). In the presence of super-efficient outliers, envelopment estimators can deteriorate dramatically, since they are very sensitive to extreme observations. Recent results from Cazals et al. (2002) on robust nonparametric frontier estimators can be used to detect outliers by defining a new DEA/FDH-type deterministic estimator that does not envelop all the data points and is therefore more robust to extreme observations. In this paper, we summarize the main results of Cazals et al. (2002) and show how this tool can be used for detecting outliers when using the classical DEA/FDH estimators or any parametric technique. We propose a methodology implementing the tool and illustrate it with numerical examples on simulated and real data. The method should be used as a first step, as exploratory data analysis, before any frontier estimation.
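The contrast between full envelopment and a partial frontier can be sketched as follows (single-output, input-oriented case for brevity; a hypothetical Monte Carlo approximation in the spirit of the order-m estimators of Cazals et al. (2002), not their exact formulas). The plain FDH score benchmarks a unit against every dominating observation, so one extreme unit moves the whole frontier; the order-m score benchmarks against the best of m randomly drawn units, so extreme points carry little weight.

```python
import random

def fdh_input_efficiency(x, y, X, Y):
    """Input-oriented FDH score of unit (x, y) against the sample (X, Y).

    theta = min over units j with Y[j] >= y of max_k X[j][k] / x[k];
    theta < 1 flags inefficiency (inputs could be scaled down).
    """
    scores = [max(xjk / xk for xjk, xk in zip(Xj, x))
              for Xj, Yj in zip(X, Y) if Yj >= y]
    return min(scores)

def order_m_input_efficiency(x, y, X, Y, m=20, draws=500, seed=0):
    """Monte Carlo order-m score: a partial frontier that does not
    envelop all points, so single extreme units barely influence it."""
    rng = random.Random(seed)
    pool = [Xj for Xj, Yj in zip(X, Y) if Yj >= y]
    total = 0.0
    for _ in range(draws):
        sample = [rng.choice(pool) for _ in range(m)]  # draw m dominating units
        total += min(max(xjk / xk for xjk, xk in zip(Xj, x))
                     for Xj in sample)
    return total / draws

# Three units producing y = 1 with inputs 1, 2, 4; score the middle unit.
X, Y = [[1.0], [2.0], [4.0]], [1.0, 1.0, 1.0]
theta_fdh = fdh_input_efficiency([2.0], 1.0, X, Y)
theta_m = order_m_input_efficiency([2.0], 1.0, X, Y)
```

As m grows, the order-m score converges to the FDH score; comparing the two for each unit is one way to flag super-efficient outliers.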

10.
Material selection in the chemistry value chain involves many objectives, including cost, performance, health risk, and environmental impact. Alternatives assessment is an emerging tool for guiding complex decisions with respect to these goals. As a relatively new method, the process is not yet well developed, especially with respect to how trade-offs among objectives can be assessed accurately and inexpensively. Using a paint stripper alternatives assessment as an illustrative example, we show how an established decision-analytic method, known as comparative screening, allows for a multistep process with gradually increasing information needs. Compared with existing methodological approaches, comparative screening instills flexible and consistent treatment of trade-offs. This is important because it maximizes the potential for a robust assessment while minimizing arduous data collection. Further, its use in the alternatives assessment process can support the selection of more sustainable materials.

11.
12.
Prediction of economic and social systems is generally based on the assumption that the structure of the system does not vary over time. This assumption is seldom fulfilled: often, at least some of the behavioral relations of a system exhibit structural variability. The paper considers reduced form estimation and prediction in systems with such partial structural variability. A robust SF-RF strategy, combining direct, unrestricted and indirect, restricted reduced form estimation, is emphasized. This strategy, as well as the more commonly applied alternatives of L1- and L2-norm estimation, is studied numerically within a small interdependent system with partially occurring, discrete parameter shifts. The different estimation techniques are evaluated with respect to structural form estimation and prediction performance. The study indicates advantages of the SF-RF strategy over the considered alternatives.

13.
This paper contributes to the literature on forecast evaluation by conducting an extensive Monte Carlo experiment using the evaluation procedure proposed by Elliott, Komunjer and Timmermann. We consider recent developments in weighting matrices for GMM estimation and testing. We pay special attention to the size and power properties of variants of the J-test of forecast rationality. Proceeding from a baseline scenario to a more realistic setting, our results show that the approach leads to precise estimates of the degree of asymmetry of the loss function. For correctly specified models, we find the size of the J-tests to be close to the nominal size, while the tests have high power against misspecified models. These findings are quite robust to inducing fat tails, serial correlation and outliers.

14.
It is a widespread concern that schools and other public buildings are in poor condition. A popular explanation is that maintenance is given too little priority in the budgetary process because politicians are shortsighted. In this paper we investigate this hypothesis using two novel survey data sets on school and general building conditions in Norwegian local governments. We use political fragmentation as a proxy for myopic behavior and provide strong empirical evidence that a high degree of political fragmentation is associated with poor building conditions, both for schools and for buildings in general. The finding is robust to the handling of controls and outliers and to the choice of estimation method. We also provide evidence that lack of maintenance is the channel behind poor building conditions.

15.
Journal of Econometrics, 2002, 106(2): 297–324
The aim of this paper is to demonstrate how to obtain robust consistent estimates of the linear model when the fundamental orthogonality condition is not fulfilled. To this end, we develop two estimation procedures: two-stage generalized M (2SGM) and robust generalized method of moments (RGMM). Both estimators are B-robust, i.e. their associated influence functions are bounded, and both are consistent and asymptotically normally distributed. Our simulation results indicate that the relatively efficient RGMM estimator (in regressions with heteroskedastic and/or autocorrelated errors) provides accurate parameter estimates of a panel data model with all variables subject to measurement error, even if a substantial portion of the data is contaminated with aberrant observations. Traditional estimation techniques such as 2SLS and GMM break down when outliers corrupt the data.

16.
Stochastic FDH/DEA estimators for frontier analysis
In this paper we extend the work of Simar (J Prod Anal 28:183–201, 2007) introducing noise into nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods for estimating production efficiency. Specifically, our approach first allows for statistical noise, as in stochastic frontier analysis (and in an even more flexible way), and second, it models multiple-input, multiple-output technologies without imposing parametric assumptions on the production relationship, as in nonparametric methods such as Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating marginal effects on the inefficiency level jointly with marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main weaknesses of the original DEA/FDH estimators. The procedure performs well in various simulated cases and is also illustrated on several real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (2007) method by making the resulting frontier smoother, monotonic and, if desired, concave.

17.
This paper proposes a nonparametric simulated maximum likelihood (NPSML) estimator for smooth transition regression (STR) models, constructed via kernel estimation. We present the construction of the NPSML estimator, its asymptotic properties, and selection criteria for the kernel function and bandwidth, and improve the construction procedure with a moving-bandwidth algorithm. Monte Carlo experiments show that the method is reliable and that the estimator remains robust when the error term is serially correlated.

18.
Propensity score matching is a widely used method for measuring the effect of a treatment in the social and medical sciences. An important issue in propensity score matching is how to select the conditioning variables used to estimate the propensity scores. It is commonly recommended that variables affecting both program participation and outcomes be selected. Using Monte Carlo simulation, this paper shows that efficiency in estimating the Average Treatment Effect on the Treated can be gained if all available observed variables in the outcome equation are included in the estimation of the propensity scores. This result still holds in the presence of non-sampling errors in the observed control variables.
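For readers unfamiliar with the mechanics, the matching step itself is simple once propensity scores are in hand. The sketch below computes the Average Treatment Effect on the Treated (ATT) by one-nearest-neighbour matching with replacement, taking the propensity scores as given (however they were estimated); the function name and toy numbers are illustrative.

```python
def att_nearest_neighbor(scores_t, y_t, scores_c, y_c):
    """ATT via 1-nearest-neighbour propensity score matching with replacement.

    Each treated unit is matched to the control with the closest propensity
    score; the ATT is the mean treated-minus-matched-control outcome gap.
    """
    total = 0.0
    for s, y in zip(scores_t, y_t):
        j = min(range(len(scores_c)), key=lambda k: abs(scores_c[k] - s))
        total += y - y_c[j]
    return total / len(scores_t)

# Two treated units and three controls; matches are the score-closest controls.
att = att_nearest_neighbor(
    scores_t=[0.8, 0.6], y_t=[5.0, 4.0],
    scores_c=[0.79, 0.61, 0.2], y_c=[3.0, 2.0, 0.0],
)
```

Here both treated units match controls with nearly identical scores, giving an ATT of 2; the conditioning-variable question the abstract studies enters entirely through how the scores themselves are estimated.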

19.
In likelihood-based approaches to robustifying state space models, Gaussian error distributions are replaced by non-normal alternatives with heavier tails. Robustified observation models are appropriate for time series with additive outliers, while state or transition equations with heavy-tailed error distributions lead to filters and smoothers that can cope with structural changes in trend or slope caused by innovation outliers. As a consequence, however, conditional filtering and smoothing densities become analytically intractable. Various attempts have been made to deal with this problem, ranging from approximate conditional mean estimation to fully Bayesian analysis using MCMC simulation. In this article we consider penalized likelihood smoothers, that is, estimators which maximize penalized likelihoods or, equivalently, posterior densities. Filtering and smoothing for additive and innovation outlier models can be carried out by computationally efficient Fisher scoring steps or iterative Kalman-type filters. Special emphasis is on the Student-t family, for which EM-type algorithms to estimate unknown hyperparameters are developed. Operational behaviour is illustrated by simulation experiments and real data applications. Received: March 1998

20.
Kernel density estimation is a popular method of density estimation. Its main issue is bandwidth selection, a well-known problem that still frustrates statisticians. A robust least squares cross-validation bandwidth is proposed, which significantly improves on the classical least squares cross-validation bandwidth, reducing its variability and undersmoothing; it adapts to different kinds of densities and outperforms the existing bandwidths in the statistical literature and software.
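As background for the abstract above, here is the classical (non-robust) least squares cross-validation criterion it improves on, for a Gaussian kernel: LSCV(h) estimates the integrated squared error of the kernel density estimate, using the fact that the convolution of two Gaussian kernels is a Gaussian with standard deviation √2. This is the baseline method, not the paper's robust proposal; the grid search and data below are illustrative.

```python
import math
from statistics import NormalDist

def gauss(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def lscv_score(h, data):
    """Least squares cross-validation criterion for a Gaussian-kernel KDE:

    LSCV(h) = integral of fhat^2  -  (2/n) * sum_i fhat_{-i}(x_i),
    with the integral computed exactly via the kernel self-convolution.
    """
    n = len(data)
    conv = 0.0  # double sum for the integral of fhat^2
    loo = 0.0   # leave-one-out cross terms
    for i in range(n):
        for j in range(n):
            u = (data[i] - data[j]) / h
            conv += math.exp(-0.25 * u * u) / math.sqrt(4 * math.pi)
            if i != j:
                loo += gauss(u)
    return conv / (n * n * h) - 2.0 * loo / (n * (n - 1) * h)

def lscv_bandwidth(data, grid):
    """Pick the grid bandwidth minimizing LSCV; the paper's robust version
    replaces this criterion with one far less variable across samples."""
    return min(grid, key=lambda h: lscv_score(h, data))

# Deterministic pseudo-sample: 50 equally spaced normal quantiles.
data = [NormalDist().inv_cdf((i + 0.5) / 50) for i in range(50)]
grid = [round(0.05 * k, 2) for k in range(1, 21)]
h_star = lscv_bandwidth(data, grid)
```

Very small bandwidths are penalized by the diagonal term of the double sum, which is one way to see why the criterion guards against extreme undersmoothing even though, on noisy samples, it still undersmooths on average.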

