BioVFM-21M: Benchmarking and Scaling Self-supervised Vision Foundation Models for Biomedical Image Analysis

Jiarun Liu, Hong Yu Zhou, Weijian Huang, Hao Yang, Dongning Song, Tao Tan, Yong Liang, Shanshan Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Scaling up model and data size has yielded impressive improvements across a wide range of tasks. Despite extensive studies on scaling behavior for general-purpose tasks, medical images differ substantially from natural data, and the key factors in developing medical vision foundation models at scale remain unclear. In this paper, we explore the scaling behavior across model sizes, training algorithms, data sizes, and imaging modalities in developing scalable medical vision foundation models via self-supervised learning. To support scalable pretraining, we introduce BioVFM-21M, a large-scale biomedical image dataset encompassing a wide range of biomedical imaging modalities and anatomies. We observe that scaling up does provide benefits, but these benefits vary across tasks. Additional analysis reveals several factors correlated with scaling benefits. Finally, we propose BioVFM, a large-scale medical vision foundation model pretrained on 21 million biomedical images, which outperforms previous state-of-the-art foundation models across 12 medical benchmarks. Our results highlight that while scaling up is beneficial for pursuing better performance, task characteristics, data diversity, pretraining methods, and computational efficiency remain critical considerations for developing scalable medical foundation models. We will release the dataset, model, and algorithms of this study on GitHub.

Original language: English
Title of host publication: Foundation Models for General Medical AI - 3rd International Workshop, MedAGI 2025, Held in Conjunction with MICCAI 2025, Proceedings
Editors: Won-Ki Jeong, Hyunwoo J. Kim, Zhongying Deng, Yiqing Shen, Angelica I Aviles-Rivero, Shaoting Zhang
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 23-33
Number of pages: 11
ISBN (Print): 9783032078445
DOIs
Publication status: Published - 2026
Event: 3rd International Workshop on Foundation Models for Medical Artificial General Intelligence, MedAGI 2025, Held in Conjunction with the 28th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2025 - Daejeon, Korea, Republic of
Duration: 27 Sept 2025 – 27 Sept 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 16112 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 3rd International Workshop on Foundation Models for Medical Artificial General Intelligence, MedAGI 2025, Held in Conjunction with the 28th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2025
Country/Territory: Korea, Republic of
City: Daejeon
Period: 27/09/25 – 27/09/25

Keywords

  • Benchmarking
  • Correlation analysis
  • Foundation models
  • Large medical dataset
  • Scaling law
  • Self-supervised learning
