Article

A Dual-Generator Translation Network Fusing Texture and Structure Features for SAR and Optical Image Matching

Journal

REMOTE SENSING
Volume 14, Issue 12, Article 2946

Publisher

MDPI
DOI: 10.3390/rs14122946

Keywords

SAR-to-optical image translation; dual-generator; texture and structure fusing; SAR and optical image matching

Funding

  1. National Natural Science Foundation of China [41961053, 31860182]
  2. Yunnan Fundamental Research Projects [202101AT070102, 202101BE070001-037, 202201AT070164]

Abstract

A dual-generator translation network is proposed to fuse the structure and texture features of SAR and optical images, introducing frequency-domain and spatial-domain loss functions to reduce the differences between pseudo-optical and real optical images. Extensive experiments show that the method achieves state-of-the-art matching accuracy and keypoint repeatability.
The matching problem for heterologous remote sensing images can be simplified, via image translation, to a matching problem for pseudo-homologous remote sensing images, which improves matching performance. Among such applications, the translation between synthetic aperture radar (SAR) and optical images is the current focus of research. However, existing methods for SAR-to-optical translation have two main drawbacks. First, single generators usually sacrifice either structure or texture features to balance model performance against complexity, which often results in textural or structural distortion. Second, due to the large nonlinear radiation distortions (NRDs) in SAR images, there are still visual differences between the pseudo-optical images generated by current generative adversarial networks (GANs) and real optical images. We therefore propose a dual-generator translation network that fuses structure and texture features. On the one hand, the network has two well cross-coupled generators, a texture generator and a structure generator, to obtain high-accuracy structure and texture features; on the other hand, frequency-domain and spatial-domain loss functions are introduced to reduce the differences between pseudo-optical and real optical images. Extensive quantitative and qualitative experiments show that our method achieves state-of-the-art performance on publicly available optical and SAR datasets. Compared with the next best results, our method improves the peak signal-to-noise ratio (PSNR) by 21.0%, the chromatic feature similarity (FSIMc) by 6.9%, and the structural similarity (SSIM) by 161.7% in terms of average metric values over all test images. In addition, a before-and-after translation comparison experiment shows that our method improves the average keypoint repeatability by approximately 111.7% and the matching accuracy by approximately 5.25%.
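
The abstract does not spell out the loss formulations, but the idea of pairing a frequency-domain term with a spatial-domain term is easy to illustrate. Below is a minimal PyTorch sketch, assuming an L1 distance between 2-D Fourier spectra for the frequency-domain term and a pixel-space L1 distance for the spatial-domain term; the function names and the weights lambda_freq and lambda_spatial are illustrative placeholders, not values from the paper.

    import torch
    import torch.nn.functional as F

    def frequency_domain_loss(pseudo_optical, real_optical):
        # Compare the 2-D Fourier spectra of the generated (pseudo-optical)
        # and real optical images. torch.fft.fft2 returns complex tensors;
        # .abs() of their difference gives the complex L1 distance.
        pseudo_spec = torch.fft.fft2(pseudo_optical, norm="ortho")
        real_spec = torch.fft.fft2(real_optical, norm="ortho")
        return (pseudo_spec - real_spec).abs().mean()

    def spatial_domain_loss(pseudo_optical, real_optical):
        # Plain pixel-space L1 term, the usual spatial-domain
        # reconstruction loss.
        return F.l1_loss(pseudo_optical, real_optical)

    def translation_loss(pseudo_optical, real_optical,
                         lambda_freq=1.0, lambda_spatial=10.0):
        # Composite generator objective; the weights are assumptions for
        # illustration, not the paper's hyperparameters. A full training
        # loop would add an adversarial (GAN) term on top of this.
        return (lambda_freq * frequency_domain_loss(pseudo_optical, real_optical)
                + lambda_spatial * spatial_domain_loss(pseudo_optical, real_optical))

Penalizing spectral differences pushes the generator to match global frequency content (fine textures live in the high frequencies), which a pixel-space loss alone tends to blur; this is consistent with the abstract's motivation for combining both domains.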
