Similar literature: 20 results found
1.
The generalized ridge estimator, which considers generalizations of mean square error, is presented, and a mathematical rule for determining the optimal k-value is discussed. The generalized ridge estimator is examined in comparison with the least squares, the pseudoinverse, the James-Stein-type shrinkage, and the principal component estimators, with particular attention to improved adjustments for regression coefficients. An alternative estimation approach that better integrates a priori information is noted. Finally, combining the generalized ridge and robust regression methods is suggested.
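For concreteness, the following is a minimal sketch of the textbook generalized ridge estimator referred to above, (X'X + K)^{-1} X'y with a diagonal shrinkage matrix K; the optimal k-rule discussed in the paper is not reproduced, and the data and shrinkage values below are hypothetical.

```python
import numpy as np

def generalized_ridge(X, y, k):
    """Generalized ridge estimator: (X'X + K)^{-1} X'y.

    k may be a scalar (ordinary ridge) or a vector giving one shrinkage
    constant per coefficient (generalized ridge).
    """
    p = X.shape[1]
    K = np.diag(np.full(p, float(k)) if np.isscalar(k) else np.asarray(k, dtype=float))
    return np.linalg.solve(X.T @ X + K, X.T @ y)

# Hypothetical illustration: ordinary least squares is the k = 0 special case.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([2.0, 0.0, -1.0, 0.5]) + rng.normal(size=50)
beta_ols = generalized_ridge(X, y, 0.0)
beta_gr = generalized_ridge(X, y, [0.1, 5.0, 0.1, 5.0])  # heavier shrinkage on selected coefficients
```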

2.
Spatial autoregressive models are powerful tools in the analysis of data sets from diverse scientific areas of research such as econometrics, plant species richness, cancer mortality rates, image processing, analysis of functional Magnetic Resonance Imaging (fMRI) data, and many more. An important class within the host of spatial autoregressive models is the class of spatial error models, in which spatially lagged error terms are assumed. In this paper, we propose efficient shrinkage and penalty estimators for the regression coefficients of the spatial error model. We carry out asymptotic as well as simulation analyses to illustrate the gain in efficiency achieved by these new estimators. Furthermore, we apply the new methodology to housing price data and provide a bootstrap approach to compute prediction errors of the new estimators.

3.
We consider improved estimation strategies for the parameter matrix in multivariate multiple regression under a general and natural linear constraint. In the context of two competing models, where one model includes all predictors and the other restricts variable coefficients to a candidate linear subspace based on prior information, there is a need to combine the two estimation techniques in an optimal way. In this scenario, we suggest some shrinkage estimators for the targeted parameter matrix. We also examine the relative performance of the suggested estimators in the direction of the subspace and against candidate subspace-restricted type estimators. We develop a large-sample theory for the estimators, including derivation of the asymptotic bias and asymptotic distributional risk of the suggested estimators. Furthermore, we conduct Monte Carlo simulation studies to appraise the relative performance of the suggested estimators against the classical estimators. The methods are also applied to a real data set for illustrative purposes.

4.
In recent years, we have seen an increased interest in the penalized likelihood methodology, which can be efficiently used for shrinkage and selection purposes. This strategy can also result in unbiased, sparse, and continuous estimators. However, the performance of the penalized likelihood approach depends on the proper choice of the regularization parameter, so it is important to select it appropriately. To this end, the generalized cross-validation method is commonly used. In this article, we first propose new estimates of the norm of the error in the generalized linear models framework, through the use of Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector in penalized generalized linear models. Unlike the standard methods, the proposed method does not depend on resampling and therefore results in a considerable gain in computational time while producing improved results. A thorough simulation study is conducted to support the theoretical findings, and a comparison of the penalized methods with the L1, the hard thresholding, and the smoothly clipped absolute deviation penalty functions is performed for the cases of penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows.
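As background, here is a minimal sketch of the standard generalized cross-validation criterion mentioned above, written for ridge regression; it is not the Kantorovich-inequality-based selector proposed in the paper, and the grid of candidate values is a hypothetical choice.

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    """Select a ridge penalty by generalized cross-validation.

    GCV(lam) = n * ||(I - H)y||^2 / tr(I - H)^2 with H = X (X'X + lam I)^{-1} X'.
    """
    n, p = X.shape
    best_lam, best_score = None, np.inf
    for lam in lambdas:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        resid = y - H @ y
        score = n * np.sum(resid ** 2) / (n - np.trace(H)) ** 2
        if score < best_score:
            best_lam, best_score = lam, score
    return best_lam

# Hypothetical usage: lam_hat = gcv_ridge(X, y, np.logspace(-3, 3, 25))
```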

5.
It is well known that the maximum likelihood estimator (MLE) is inadmissible when estimating the multidimensional Gaussian location parameter. We show that the verdict is much more subtle for the binary location parameter. We study this problem in a regression framework by considering a ridge logistic regression (RR) with three alternative ways of shrinking the estimates of the event probabilities. While it is shown that all three variants reduce the mean squared error (MSE) of the MLE, there is at the same time, for every amount of shrinkage, a true value of the location parameter for which we are overshrinking, thus implying the minimaxity of the MLE in this family of estimators. Little shrinkage also always reduces the MSE of individual predictions for all three RR estimators; however, only the naive estimator that shrinks toward 1/2 retains this property for any generalized MSE (GMSE). In contrast, for the two RR estimators that shrink toward the common mean probability, there is always a GMSE for which even a minute amount of shrinkage increases the error. These theoretical results are illustrated on a numerical example. The estimators are also applied to a real data set, and practical implications of our results are discussed.
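To make the shrinkage directions concrete, the following sketch assumes a simple convex-combination form of probability shrinkage for illustration; the exact definitions of the three RR variants studied in the paper may differ.

```python
import numpy as np

def shrink_probabilities(p_hat, w, target="half"):
    """Shrink fitted event probabilities toward a target by a convex combination.

    p_hat  : fitted probabilities from, e.g., a logistic regression MLE
    w      : amount of shrinkage in [0, 1]; w = 0 returns the MLE fits
    target : "half" shrinks toward 1/2 (the 'naive' variant mentioned above),
             "mean" shrinks toward the common mean probability.
    """
    p_hat = np.asarray(p_hat, dtype=float)
    t = 0.5 if target == "half" else p_hat.mean()
    return (1.0 - w) * p_hat + w * t
```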

6.
Instrumental variable estimation in the presence of many moment conditions
This paper develops shrinkage methods for addressing the “many instruments” problem in the context of instrumental variable estimation. It has been observed that instrumental variable estimators may behave poorly if the number of instruments is large. This problem can be addressed by shrinking the influence of a subset of instrumental variables. The procedure can be understood as a two-step process of shrinking some of the OLS coefficient estimates from the regression of the endogenous variables on the instruments, then using the predicted values of the endogenous variables (based on the shrunk coefficient estimates) as the instruments. The shrinkage parameter is chosen to minimize the asymptotic mean square error. The optimal shrinkage parameter has a closed form, which makes it easy to implement. A Monte Carlo study shows that the shrinkage method works well and performs better in many situations than do existing instrument selection procedures.
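The two-step process described above can be sketched as follows for a single endogenous regressor; the per-instrument shrinkage factors are supplied by the user in this sketch, whereas the paper derives a closed-form, asymptotic-MSE-minimizing choice that is not reproduced here.

```python
import numpy as np

def shrinkage_iv(y, x, Z, s):
    """Two-step shrinkage IV sketch (single endogenous regressor x).

    Step 1: OLS of x on the instruments Z, with the first-stage coefficients
            multiplied elementwise by shrinkage factors s in [0, 1].
    Step 2: use the resulting fitted values x_hat as the instrument for x.
    """
    pi_hat = np.linalg.solve(Z.T @ Z, Z.T @ x)          # first-stage OLS coefficients
    x_hat = Z @ (np.asarray(s, dtype=float) * pi_hat)   # predictions from shrunken coefficients
    return float((x_hat @ y) / (x_hat @ x))             # IV estimate using x_hat as the instrument
```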

7.
Semi-parametric methods for estimating the long-memory exponent of a time series have been studied in several papers, some applied, others theoretical, some using Fourier methods, others using a wavelet-based technique. In this paper, we compare the Fourier and wavelet approaches to the local regression method and to the local Whittle method. We provide an overview of these methods, describe what has been done, and indicate the available results and the conditions under which they hold. We discuss their relative strengths and weaknesses from both a practical and a theoretical perspective. We also include a simulation-based comparison. The software written to support this work is available on demand, and we illustrate its use at the end of the paper.

8.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modelling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, considerable attention has recently been given to applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.

9.
Statistica Neerlandica, 2018, 72(2): 90-108
Variable selection and error structure determination for a partially linear model with time series errors are important issues. In this paper, we investigate regression coefficient and autoregressive order shrinkage and selection via the smoothly clipped absolute deviation (SCAD) penalty for a partially linear model with a divergent number of covariates and finite-order autoregressive time series errors. Both consistency and asymptotic normality of the proposed penalized estimators are derived, and the oracle property of the resulting estimators is proved. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure, and a real data analysis illustrates its usefulness.
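For reference, the SCAD penalty used above has the following standard closed form. This is a minimal sketch of the penalty function only, not of the paper's penalized estimation procedure for the partially linear model; the default a = 3.7 is the usual convention.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty, elementwise.

    Piecewise form with tuning constant a > 2:
      lam * |t|                                  if |t| <= lam
      -(t^2 - 2*a*lam*|t| + lam^2) / (2*(a-1))   if lam < |t| <= a*lam
      (a + 1) * lam^2 / 2                        if |t| > a*lam
    """
    t = np.abs(np.asarray(theta, dtype=float))
    mid = -(t ** 2 - 2 * a * lam * t + lam ** 2) / (2 * (a - 1))
    return np.where(t <= lam, lam * t, np.where(t <= a * lam, mid, (a + 1) * lam ** 2 / 2))
```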

10.
We consider improved estimation strategies for a two-parameter inverse Gaussian distribution and use a shrinkage technique for the estimation of the mean parameter. In this context, two new shrinkage estimators are suggested and shown to dominate the classical estimator under quadratic risk under realistic conditions. Furthermore, based on our shrinkage strategy, a new estimator is proposed for the common mean of several inverse Gaussian distributions, which uniformly dominates the Graybill–Deal type unbiased estimator. The performance of the suggested estimators is examined using simulated data, and our shrinkage strategies are shown to work well. The estimation methods and results are illustrated with two empirical examples.

11.
This paper proposes a simple estimation method for a class of panel data models with heterogeneous nonparametric time trends. Based on the idea of local polynomial regression, the time-trend component is first removed from the data; the common coefficients are then estimated by least squares, while a nonparametric estimate of the time-trend functions is obtained at the same time. Under some regularity conditions, the asymptotic properties of these estimators are studied: as the time dimension T and the cross-sectional dimension n tend to infinity simultaneously, the asymptotic consistency and asymptotic normality of each estimator are established. Finally, Monte Carlo simulations are used to examine the finite-sample properties of the proposed estimation method.

12.
Mann–Whitney-type causal effects are generally applicable to outcome variables with a natural ordering, have been recommended for clinical trials because of their clinical relevance and interpretability, and are particularly useful in analysing an ordinal composite outcome that combines an original primary outcome with death and possibly treatment discontinuation. In this article, we consider robust and efficient estimation of such causal effects in observational studies and clinical trials. For observational studies, we propose and compare several estimators: regression estimators based on an outcome regression (OR) model or a generalised probabilistic index (GPI) model, an inverse probability weighted estimator based on a propensity score model, and two doubly robust (DR), locally efficient estimators. One of the DR estimators involves a propensity score model and an OR model; it is consistent and asymptotically normal under the union of the two models and attains the semiparametric information bound when both models are correct. The other DR estimator has the same properties with the OR model replaced by a GPI model. For clinical trials, we extend an existing augmented estimator based on a GPI model and propose a new one based on an OR model. The methods are evaluated and compared in simulation experiments and applied to a clinical trial in cardiology and an observational study in obstetrics.

13.
Penalized Regression with Ordinal Predictors
Ordered categorical predictors are a common case in regression modelling. In contrast to the case of ordinal response variables, ordinal predictors have been largely neglected in the literature. In this paper, existing methods are reviewed and the use of penalized regression techniques is proposed. Based on dummy coding, two types of penalization are explicitly developed: the first imposes a difference penalty, the second is a ridge-type refitting procedure. A Bayesian motivation is also provided. The concept is generalized to the case of non-normal outcomes within the framework of generalized linear models by applying penalized likelihood estimation. Simulation studies and real-world data serve to illustrate the approaches and to compare them with methods often seen in practice, namely simple linear regression on the group labels and pure dummy coding. The proposed difference penalty in particular turns out to be highly competitive.
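The difference penalty mentioned above can be sketched as follows for a single dummy-coded ordinal predictor; the coding, the least-squares setting, and the variable names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def ordinal_difference_penalty_fit(X_dummies, y, lam):
    """Penalized least-squares fit for one dummy-coded ordinal predictor.

    X_dummies : n x (k-1) dummy matrix for an ordinal factor with k levels
                (lowest level taken as the reference category).
    The penalty lam * sum_j (beta_j - beta_{j-1})^2, with beta_0 = 0 for the
    reference level, smooths adjacent level effects toward each other.
    """
    p = X_dummies.shape[1]
    D = np.eye(p) - np.eye(p, k=-1)   # first-order differences of (0, beta_1, ..., beta_{k-1})
    return np.linalg.solve(X_dummies.T @ X_dummies + lam * (D.T @ D), X_dummies.T @ y)
```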

14.
Whether doing parametric or nonparametric regression with shrinkage, thresholding, penalized likelihood, or Bayesian posterior estimators (e.g., ridge regression, lasso, principal component regression, waveshrink, or Markov random fields), it is common practice to rescale covariates by dividing by their respective standard errors ρ. The stated goal of this operation is to provide unitless covariates so as to compare like with like, especially when penalized likelihood or prior distributions are used. We contend that this vision is too simplistic. Instead, we propose to take into account a more essential component of the structure of the regression matrix by rescaling the covariates based on the diagonal elements of the covariance matrix Σ of the maximum-likelihood estimator. We illustrate the differences between the standard ρ-rescaling and the proposed Σ-rescaling with various estimators and data sets.
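A minimal sketch contrasting the two rescalings described above, under the assumption of a linear model in which the ML/OLS estimator has covariance Σ = σ^2 (X'X)^{-1}; the exact direction and normalization used in the paper may differ.

```python
import numpy as np

def rescale_covariates(X, sigma2=1.0):
    """Contrast rho-rescaling with a Sigma-based rescaling (illustrative conventions).

    rho-rescaling   : divide each column by its sample standard deviation.
    Sigma-rescaling : scale each column by the standard error of its OLS/ML
                      coefficient, taken from diag(Sigma) with
                      Sigma = sigma2 * (X'X)^{-1}; the coefficient of a scaled
                      column is then beta_j / se_j.
    """
    X_rho = X / X.std(axis=0, ddof=1)
    Sigma = sigma2 * np.linalg.inv(X.T @ X)
    se_beta = np.sqrt(np.diag(Sigma))
    X_sigma = X * se_beta
    return X_rho, X_sigma
```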

15.
The Kelly portfolio, which is documented to have the highest long-run wealth growth rate of any portfolio, has highly risky and unstable performance in the short term. This paper offers a hybrid approach to address this problem by integrating the concepts of ridge regression and shrinkage estimation into a robustly modified Kelly portfolio. The proposed approach is a two-stage optimization process that not only takes into account the effect of estimation error but also alleviates the notorious conservatism introduced by the robust optimization method. By extending the worst-case scenarios considered by the robust Kelly portfolio, our approach significantly improves its out-of-sample performance without compromising risk reduction. In an extensive out-of-sample analysis with simulated and empirical data sets, we also characterize the impacts of the robustness level and the length of the rolling window on the final result. Moreover, we conduct a comparative study to confirm the validity of the proposed approach, and our model allows the investor to achieve a better risk-return trade-off than traditional models.
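As a rough illustration of combining ridge-type regularization with the Kelly criterion, the following sketch assumes a Gaussian approximation under which the unconstrained Kelly weights are Σ^{-1} μ; it is not the paper's two-stage robust optimization procedure, and all names are illustrative.

```python
import numpy as np

def ridge_kelly_weights(mu, Sigma, lam=0.0):
    """Ridge-regularized Kelly weights under a Gaussian growth approximation.

    mu    : vector of expected excess returns
    Sigma : return covariance matrix
    lam   : ridge parameter; lam = 0 gives the plain Kelly weights Sigma^{-1} mu,
            lam > 0 shrinks the weights and stabilizes the matrix inverse.
    """
    p = len(mu)
    return np.linalg.solve(Sigma + lam * np.eye(p), np.asarray(mu, dtype=float))
```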

16.
We consider nonlinear heteroscedastic single-index models where the mean function is a parametric nonlinear model and the variance function depends on a single-index structure. We develop an efficient estimation method for the parameters in the mean function using weighted least squares, and we propose a "delete-one-component" estimator for the single index in the variance function based on absolute residuals. Asymptotic properties of the estimators are also investigated. Estimation methods for the error distribution based on the classical empirical distribution function and on an empirical likelihood method are discussed; the empirical likelihood method allows the assumptions on the error distribution to be incorporated into the estimation. Simulations illustrate the results, and a real chemical data set is analyzed to demonstrate the performance of the proposed estimators.

17.
Motivated by the requirement of controlling the number of false discoveries that arises in several application fields, we study the behaviour of diagnostic procedures obtained from popular high-breakdown regression estimators when no outlier is present in the data. We find that the empirical error rates for many of the available techniques are surprisingly far from the prescribed nominal level. Therefore, we propose a simulation-based approach to correct the liberal diagnostics and reach reliable inferences. We provide evidence that our approach performs well in a wide range of settings of practical interest and for a variety of robust regression techniques, thus showing general appeal. We also evaluate the loss of power that can be expected from our corrections under different contamination schemes and show that this loss is often not dramatic. Finally, we detail some possible extensions that may further enhance the applicability of the method.

18.
In this paper, we propose an automatic selection of the bandwidth of recursive kernel estimators of a regression function defined by a stochastic approximation algorithm. We show that, using the selected bandwidth and the stepsize that minimize the mean weighted integrated squared error, the recursive estimator outperforms the non-recursive one in small-sample settings in terms of estimation error and computational cost. We corroborate these theoretical results through a simulation study and a real data set.
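One standard recursive form of a kernel regression estimator, in which a Nadaraya-Watson-type numerator and denominator are updated one observation at a time with a stepsize and a per-observation bandwidth, is sketched below; the paper's automatic bandwidth and stepsize selection is not reproduced, and fixed user-supplied sequences are assumed.

```python
import numpy as np

def recursive_kernel_regression(x_grid, X, Y, bandwidths, stepsizes):
    """Recursive (stochastic-approximation) kernel regression estimate on a grid.

    For each new observation (X_n, Y_n) with bandwidth h_n and stepsize g_n:
        f_n = (1 - g_n) * f_{n-1} + g_n * K_h(x - X_n)
        m_n = (1 - g_n) * m_{n-1} + g_n * K_h(x - X_n) * Y_n
    and the regression estimate is r_n = m_n / f_n (Gaussian kernel assumed).
    """
    def gauss(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

    f = np.zeros_like(x_grid, dtype=float)
    m = np.zeros_like(x_grid, dtype=float)
    for n, (xn, yn) in enumerate(zip(X, Y)):
        h, g = bandwidths[n], stepsizes[n]
        k = gauss((x_grid - xn) / h) / h
        f = (1.0 - g) * f + g * k
        m = (1.0 - g) * m + g * k * yn
    return m / np.maximum(f, 1e-12)
```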

19.
Multivariate panel data provide a unique opportunity to study the joint evolution of multiple response variables over time. In this paper, we propose an error component seemingly unrelated nonparametric regression model to fit multivariate panel data, which is more flexible than the traditional error component seemingly unrelated parametric regression. By applying an undersmoothing technique and taking the correlations both within and among responses into account, we propose an efficient two-stage local polynomial estimation for the unknown functions. It is shown that the resulting estimators are asymptotically normal and have the same biases as, but smaller asymptotic variances than, the standard local polynomial estimators, which are based on the individual responses only. The performance of the proposed procedure is evaluated through a simulation study and a real data set.

20.
Anomalies in the Foundations of Ridge Regression
Errors persist in ridge regression, its foundations, and its usage, as set forth in Hoerl & Kennard (1970) and elsewhere. Ridge estimators need not be minimizing, nor need a prospective ridge parameter be admissible. Conventional estimators are not Lagrangian solutions constrained to fixed lengths, as claimed, since such solutions are singular. Of a massive literature on estimation, prediction, cross-validation, choice of ridge parameter, and related issues, little emanates from constrained optimization that includes inequality constraints. The problem traces to a misapplication of Lagrange's principle, unrecognized singularities, and misplaced links between constraints and ridge parameters. Alternative principles, based on condition numbers, are seen to validate both conventional ridge regression and a surrogate ridge regression to be defined. Numerical studies illustrate that ridge regression as practiced often exhibits pathologies it is intended to redress.
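The condition-number perspective mentioned above can be illustrated with a short sketch: adding kI to X'X bounds its smallest eigenvalue away from zero and improves the conditioning of the normal equations. This illustrates only that single point, not the paper's analysis of Lagrangian constraints or of surrogate ridge regression.

```python
import numpy as np

def ridge_and_condition_numbers(X, y, k):
    """Conventional ridge fit and the condition numbers before/after regularization."""
    G = X.T @ X
    Gk = G + k * np.eye(X.shape[1])
    beta_k = np.linalg.solve(Gk, X.T @ y)   # conventional ridge estimator
    return beta_k, np.linalg.cond(G), np.linalg.cond(Gk)
```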
