Article: Data Paper

FIVES: A Fundus Image Dataset for Artificial Intelligence based Vessel Segmentation

Journal

Scientific Data
Volume 9, Issue 1

Publisher

Nature Portfolio
DOI: 10.1038/s41597-022-01564-3


Funding

  1. National Key Research and Development Program of China [2019YFC0118401]
  2. Zhejiang Provincial Key Research and Development Plan [2019C03020]
  3. Natural Science Foundation of Zhejiang Province [LQ21H120002]
  4. National Natural Science Foundation of China [U20A20386]

In this study, a color fundus image vessel segmentation dataset was collected, which the authors believe will benefit the further development of retinal vessel segmentation.

Retinal vasculature provides an opportunity for direct observation of vessel morphology, which is linked to multiple clinical conditions. However, objective and quantitative interpretation of the retinal vasculature relies on precise vessel segmentation, which is time-consuming and labor-intensive. Artificial intelligence (AI) has demonstrated great promise in retinal vessel segmentation. The development and evaluation of AI-based models require large numbers of annotated retinal images, but the public datasets usable for this task are scarce. In this paper, we collected a color fundus image vessel segmentation (FIVES) dataset. The FIVES dataset consists of 800 high-resolution multi-disease color fundus photographs with pixelwise manual annotation. The annotation process was standardized through crowdsourcing among medical experts, and the quality of each image was also evaluated. To the best of our knowledge, FIVES is currently the largest retinal vessel segmentation dataset, and we believe this work will be beneficial to the further development of retinal vessel segmentation.
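
Because each fundus photograph in the dataset is paired with a pixelwise vessel annotation, evaluating an AI-based segmentation model typically comes down to comparing a predicted binary vessel mask against the manual label. Below is a minimal sketch of that workflow using the Dice coefficient; the folder layout and file names are hypothetical placeholders, not the dataset's actual release structure.

    # Minimal sketch (not from the paper): load a fundus image with its pixelwise
    # vessel annotation and score a predicted segmentation with the Dice coefficient.
    # The paths and file names below are hypothetical placeholders.
    import numpy as np
    from PIL import Image

    def load_pair(image_path, mask_path):
        """Return a color fundus image and its binary vessel mask as NumPy arrays."""
        image = np.asarray(Image.open(image_path).convert("RGB"))
        mask = np.asarray(Image.open(mask_path).convert("L")) > 127  # boolean vessel map
        return image, mask

    def dice_score(pred, target, eps=1e-7):
        """Dice coefficient between two boolean vessel masks of equal shape."""
        pred = np.asarray(pred, dtype=bool)
        target = np.asarray(target, dtype=bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Hypothetical usage:
    # image, gt = load_pair("FIVES/Original/0001.png", "FIVES/GroundTruth/0001.png")
    # print(dice_score(model_prediction, gt))

Dice (equivalent to a pixelwise F1 score) is a common overlap metric for vessel segmentation; sensitivity and specificity can be computed from the same boolean masks.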
