Article

TransMed: Transformers Advance Multi-Modal Medical Image Classification

Journal

DIAGNOSTICS
Volume 11, Issue 8

Publisher

MDPI
DOI: 10.3390/diagnostics11081384

Keywords

transformer; medical image classification; deep learning; multiparametric MRI; multi-modal

Funding

  1. National Natural Science Foundation of China [61902058, 61872075]
  2. Fundamental Research Funds for the Central Universities [N2019002, JC2019025]
  3. Natural Science Foundation of Liaoning Province [2019-ZD-0751]
  4. Medical Imaging Intelligence Research [N2124006-3]


Summary

The article discusses the advantages and limitations of convolutional neural networks (CNNs) and transformers in medical image analysis and proposes TransMed, a method that combines a CNN and a transformer for multi-modal medical image classification. The approach achieved significant performance improvements on two datasets and outperformed existing CNN-based models.

Abstract

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs excel at extracting local image features but, due to the locality of the convolution operation, cannot model long-range relationships well. Recently, transformers have been applied to computer vision and achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompted us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. We therefore propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets: parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
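The hybrid design described in the abstract can be illustrated schematically: a CNN backbone first compresses each imaging modality into a compact feature token, and a self-attention layer then lets every modality token attend to every other, which is where the cross-modal long-range dependencies are modeled. The snippet below is a toy NumPy sketch of that idea with random weights, not the authors' TransMed implementation; all shapes, names, and the single-head attention layout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k):
    # tokens: (num_tokens, d_model) -- one token per modality.
    # Random projection weights stand in for learned parameters.
    d_model = tokens.shape[1]
    Wq = rng.standard_normal((d_model, d_k)) * 0.1
    Wk = rng.standard_normal((d_model, d_k)) * 0.1
    Wv = rng.standard_normal((d_model, d_k)) * 0.1
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    # Scaled dot-product attention: every token attends to every other,
    # so information mixes across modalities regardless of "distance".
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V

# Pretend a CNN backbone already reduced each of 3 MRI modalities
# to a 64-dimensional low-level feature vector (one token each).
cnn_features = rng.standard_normal((3, 64))

fused = self_attention(cnn_features, d_k=32)    # cross-modal fusion, shape (3, 32)
pooled = fused.mean(axis=0)                     # aggregate the modality tokens
logits = pooled @ rng.standard_normal((32, 2))  # hypothetical 2-class head
print(logits.shape)
```

The key point the sketch captures is the division of labor: convolution handles local feature extraction where it is strong, while attention, whose cost grows with the number of tokens rather than spatial distance, handles the relationships between modalities.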

Authors



Recommended

Article Computer Science, Information Systems

Efficiently Mastering the Game of NoGo with Deep Reinforcement Learning Supported by Domain Knowledge

Yifan Gao, Lezhou Wu

Summary: This paper introduces NoGoZero+, a method that enhances the AlphaZero training process and improves performance in NoGo, a game similar to Go. The method significantly speeds up training and achieves good competition results under limited resources.

ELECTRONICS (2021)

Article Medicine, General & Internal

Multi-Focus Image Fusion Based on Convolution Neural Network for Parkinson's Disease Image Classification

Yin Dai, Yumeng Song, Weibin Liu, Wenhe Bai, Yifan Gao, Xinyang Dong, Wenbo Lv

Summary: This paper investigates the use of deep convolutional neural networks for multi-focus image fusion, merging MRI and PET images into multi-modal images to improve the accuracy of PD image classification. The results show that test accuracy on the multi-modal fusion dataset exceeds that on the single-modal MRI dataset, indicating the effectiveness of the multi-focus image fusion method for PD image classification.

DIAGNOSTICS (2021)

Proceedings Paper Computer Science, Artificial Intelligence

GomokuNet: A Novel UNet-style Network for Gomoku Zero Learning via Exploiting Positional Information and Multiscale Features

Yifan Gao, Lezhou Wu, Haoyue Li

Summary: This paper introduces a novel positional attention-based UNet-style model (GomokuNet) for Gomoku AI, which combines positional information modules and multiscale features to improve the performance of zero learning networks. The quantitative results from ablation analysis show that GomokuNet outperforms previous state-of-the-art zero learning networks, demonstrating the potential to enhance zero learning efficiency and AI engine performance.

2021 IEEE CONFERENCE ON GAMES (COG) (2021)
