Robert B. Gramacy (2020). Revue internationale de statistique, 88(2), 326–329.
I thoroughly enjoyed reading the article by Bhadra et al. (2020) and convey my congratulations to the authors for providing a comprehensive and coherent review of horseshoe-based regularization approaches for machine learning models. I am thankful to the editors for the opportunity to write a discussion of this useful article, which I expect will become a good guide for statisticians and practitioners alike. It is quite remarkable to see the rapid progress and the magnitude of work advancing the horseshoe regularization approach since the seminal paper by Carvalho et al. (2010); the current review article is a testimony to this. While I have been working primarily with continuous spike-and-slab priors for high-dimensional Bayesian modeling, I have been following the literature on horseshoe regularization with keen interest. In my comments on this article, I will focus on comparisons between these two approaches, particularly in terms of model building and methodology, and on some computational considerations. I would first like to offer some comments on performing valid inference within the horseshoe prior framework.
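For readers less familiar with the prior family discussed above, the standard horseshoe hierarchy introduced in Carvalho et al. (2010) can be sketched as follows; this uses generic notation, not necessarily the exact parameterization adopted in the review under discussion:

\[
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2), \qquad
\lambda_j \sim \mathrm{C}^{+}(0,1), \quad j = 1, \dots, p, \qquad
\tau \sim \mathrm{C}^{+}(0,1),
\]

where \(\mathrm{C}^{+}(0,1)\) denotes the standard half-Cauchy distribution. The local scales \(\lambda_j\) allow individual coefficients to escape shrinkage, while the global scale \(\tau\) pulls the bulk of the coefficients toward zero; a continuous spike-and-slab prior, by contrast, places a two-component scale mixture on each \(\beta_j\).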
David R. Hunter (2014). Revue internationale de statistique, 82(1), 76–79.
The paper by Lange, Chi and Zhou (hereafter 'the authors') serves not only as a useful reminder about the importance of optimization techniques in implementing modern statistical methods but also as a collection of various techniques along with citations for further study. The authors point out that it is useful to be able to 'mix and match' these techniques. My brief comments will focus on a few aspects of this mixing and matching.