Journal
COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
Volume 2017, Issue -, Pages -
Publisher
HINDAWI LTD
DOI: 10.1155/2017/2747431
Keywords
-
Funding
- National Natural Science Foundation of China [11671317, 11601412, 11501438]
- Natural Science Basic Research Plan in Shaanxi Province of China [2017JQ1034]
- Science Foundation of Xi'an University of Architecture and Technology [RC1438, QN1508]
Abstract
As a pivotal tool for building interpretable models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have attracted much interest because of their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique that combines subsampling with a base algorithm such as lasso, is an effective method for controlling the false discovery rate (FDR) and improving selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to variable selection problems in a Cox model. In our experience, it is crucial to set the regularization region Λ in lasso and the parameter λ_min properly for stability selection to work well. To the best of our knowledge, however, no existing literature addresses this problem explicitly. Therefore, we first provide a detailed procedure for specifying Λ and λ_min. Then, simulated and real-world data with various censoring rates are used to examine how well stability selection performs. Experimental results demonstrate that it achieves better or competitive performance compared with several other popular variable selection techniques.
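To illustrate the core idea of stability selection described above, the following is a minimal sketch in Python. It is not the authors' procedure for the Cox model: for simplicity it uses ordinary lasso on a linear regression toy problem, and the subsample count, regularization value `lam`, and threshold `threshold` are hypothetical choices, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy data: 100 samples, 20 features, only the first 3 are informative.
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, 2.0, 1.5]
y = X @ beta + rng.standard_normal(n)

def stability_selection(X, y, lam, n_subsamples=100, threshold=0.6):
    """Fit lasso on repeated half-size subsamples; return each feature's
    selection frequency and the set of features exceeding the threshold."""
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        model = Lasso(alpha=lam, max_iter=10000).fit(X[idx], y[idx])
        counts += model.coef_ != 0  # record which coefficients are nonzero
    freq = counts / n_subsamples
    return freq, np.flatnonzero(freq >= threshold)

freq, selected = stability_selection(X, y, lam=0.3)
print(selected)  # the informative features should dominate the selection
```

In the paper's setting, the single `lam` would instead range over a grid Λ down to λ_min, and the lasso fit would be replaced by a lasso-penalized Cox partial-likelihood fit (e.g., via glmnet or scikit-survival), with selection frequencies aggregated in the same way.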