Journal
JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
Volume 79, Issue -, Pages -
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jvcir.2021.103192
Keywords
Multi-source; Illumination discrimination; Salient object detection; Deep learning
Funding
- Tianjin Key Research Program, China [18ZXRHSY00190]
- Yunnan Key Research Project [2018IB007]
- National Natural Science Foundation of China [61771338]
A new salient object detection method, the IAN-MF-SOD network, is proposed; it intelligently exploits features from multiple sources and fuses them adaptively according to the illumination conditions.
Salient object detection (SOD) aims to outline the regions that most attract human visual attention and is widely used in computer vision. Because illumination may be insufficient in the application environment (e.g., at night or in a dim indoor scene), RGB images from the visible channel lose much of their usefulness, whereas thermal images can improve detection performance. A robust saliency detection method is therefore urgently needed that can handle complex illumination conditions and make intelligent use of features from multiple sources. Accordingly, we propose the illumination-based multi-source fused salient object detection network (IAN-MF-SOD network). Taking the illumination condition as a quantitative reference, we guide features from the two sources to fuse adaptively and intelligently, so that our method enhances the advantages of both. For different illumination conditions, we assign a different fusion weight to each RGB-thermal image pair. The well-fused images are then fed into a trained SOD network to obtain saliency maps. Guided by the analysis of our proposed IAN-score, our method performs favorably against traditional RGB-based SOD networks.
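The illumination-weighted fusion described above can be sketched as follows. The abstract does not give the actual IAN-score formula (it is a learned, quantitative illumination measure), so this sketch substitutes a simple mean-luminance proxy: bright scenes weight the RGB image more heavily, dark scenes weight the thermal image. The function names and the luma-based weight are illustrative assumptions, not the paper's method.

```python
import numpy as np

def illumination_weight(rgb, eps=1e-6):
    """Illustrative illumination score in (0, 1): mean Rec. 601 luma of the
    RGB image. NOTE: a stand-in for the paper's learned IAN-score."""
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(np.clip(luma.mean(), eps, 1.0 - eps))

def fuse_pair(rgb, thermal):
    """Pixel-level weighted fusion of an RGB-thermal pair:
    fused = w * RGB + (1 - w) * thermal, with w set per pair by illumination."""
    w = illumination_weight(rgb)
    thermal_3ch = np.repeat(thermal[..., None], 3, axis=-1)  # match RGB shape
    return w * rgb + (1.0 - w) * thermal_3ch

# Toy example: a dim visible frame paired with an informative thermal frame.
rgb = np.full((4, 4, 3), 0.1)   # dark RGB image, values in [0, 1]
thermal = np.full((4, 4), 0.8)  # single-channel thermal image
fused = fuse_pair(rgb, thermal)
print(fused.shape, fused[0, 0, 0])
```

In a dark scene the weight `w` is small, so the thermal channel dominates the fused image that is passed to the downstream SOD network; in a well-lit scene the RGB channel dominates.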
Authors