Article

A Dual-Generator Translation Network Fusing Texture and Structure Features for SAR and Optical Image Matching

Journal

REMOTE SENSING
Volume 14, Issue 12

Publisher

MDPI
DOI: 10.3390/rs14122946

Keywords

SAR-to-optical image translation; dual-generator; texture and structure fusing; SAR and optical image matching

Funding

  1. National Natural Science Foundation of China [41961053, 31860182]
  2. Yunnan Fundamental Research Projects [202101AT070102, 202101BE070001-037, 202201AT070164]


A dual-generator translation network is proposed to fuse the structure and texture features of SAR and optical images, with frequency-domain and spatial-domain loss functions introduced to reduce the differences between pseudo-optical and real optical images. Extensive experiments show that the method achieves state-of-the-art performance in terms of matching accuracy and keypoint repeatability.
The matching problem for heterologous remote sensing images can be simplified, via image translation, to the matching problem for pseudo-homologous remote sensing images, thereby improving matching performance. Among such applications, the translation between synthetic aperture radar (SAR) and optical images is the current focus of research. However, existing methods for SAR-to-optical translation have two main drawbacks. First, single generators usually sacrifice either structure or texture features to balance model performance against complexity, which often results in textural or structural distortion; second, due to the large nonlinear radiation distortions (NRDs) in SAR images, there are still visual differences between the pseudo-optical images generated by current generative adversarial networks (GANs) and real optical images. Therefore, we propose a dual-generator translation network for fusing structure and texture features. On the one hand, the proposed network has two generators, a texture generator and a structure generator, with good cross-coupling to obtain high-accuracy structure and texture features; on the other hand, frequency-domain and spatial-domain loss functions are introduced to reduce the differences between pseudo-optical images and real optical images. Extensive quantitative and qualitative experiments show that our method achieves state-of-the-art performance on publicly available optical and SAR datasets. Our method improves the peak signal-to-noise ratio (PSNR) by 21.0%, the chromatic feature similarity (FSIMc) by 6.9%, and the structural similarity (SSIM) by 161.7% in terms of the average metric values over all test images compared with the next best results. In addition, we present a before-and-after translation comparison experiment showing that our method improves the average keypoint repeatability by approximately 111.7% and the matching accuracy by approximately 5.25%.
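The combination of frequency-domain and spatial-domain losses described in the abstract can be sketched as follows. This is a minimal illustration only: the specific formulation (L1 distances, an FFT-magnitude frequency term, and the balancing weight `lam`) is an assumption for exposition, not the authors' actual implementation.

```python
import numpy as np

def spatial_loss(pseudo, real):
    # Pixel-wise L1 distance between the pseudo-optical and real optical image.
    return float(np.mean(np.abs(pseudo - real)))

def frequency_loss(pseudo, real):
    # L1 distance between the magnitudes of the 2-D FFTs of the two images,
    # penalizing differences in their frequency content.
    mag_pseudo = np.abs(np.fft.fft2(pseudo))
    mag_real = np.abs(np.fft.fft2(real))
    return float(np.mean(np.abs(mag_pseudo - mag_real)))

def total_loss(pseudo, real, lam=0.5):
    # Weighted sum of the two terms; lam is a hypothetical balancing weight.
    return spatial_loss(pseudo, real) + lam * frequency_loss(pseudo, real)
```

In a GAN training loop, a term of this form would typically be added to the adversarial loss of each generator; identical images yield a loss of zero, and the frequency term grows as the spectra of the two images diverge.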

