Self-Supervised Molecular Pretraining Strategy for Low-Resource Reaction Prediction Scenarios

Zhipeng Wu, Xiang Cai, Chengyun Zhang, Haoran Qiao, Yejian Wu, Yun Zhang, Xinqiao Wang, Haiying Xie, Feng Luo, Hongliang Duan

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

In the face of low-resource reaction training samples, we construct a chemical platform for addressing small-scale reaction prediction problems. Using a self-supervised pretraining strategy called MAsked Sequence to Sequence (MASS), the Transformer model can absorb the chemical information of about 1 billion molecules and then be fine-tuned on small-scale reaction prediction tasks. To further strengthen the predictive performance of our model, we combine MASS with a reaction transfer learning strategy. Here, we show that the average accuracy improvements of the Transformer model reach 14.07, 24.26, 40.31, and 57.69% on the Baeyer-Villiger, Heck, C-C bond formation, and functional group interconversion reaction data sets, respectively, marking an important step toward low-resource reaction prediction.
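To illustrate the pretraining objective named in the abstract, the sketch below shows a MASS-style span-masking step on a tokenized SMILES string with a standard encoder-decoder Transformer: a contiguous span of the input is replaced by a mask token, and the decoder is trained to reconstruct that span. This is a minimal, hypothetical sketch, not the authors' implementation; the tokenization, vocabulary size, special-token ids, masking ratio, and model hyperparameters are illustrative assumptions.

```python
# Minimal MASS-style span-masking sketch for SMILES pretraining.
# All token ids, vocabulary size, and hyperparameters are placeholders.
import random
import torch
import torch.nn as nn

PAD, MASK, BOS = 0, 1, 2      # assumed special-token ids
VOCAB_SIZE = 64               # placeholder SMILES vocabulary size

def mass_mask(tokens, mask_ratio=0.5):
    """Mask one contiguous span of a tokenized SMILES sequence.

    Returns the corrupted encoder input, the right-shifted decoder input,
    and the decoder target (the original masked span).
    """
    n = len(tokens)
    span = max(1, int(n * mask_ratio))
    start = random.randint(0, n - span)
    enc_in = tokens[:start] + [MASK] * span + tokens[start + span:]
    target = tokens[start:start + span]
    dec_in = [BOS] + target[:-1]          # teacher forcing: shift right
    return enc_in, dec_in, target

class MassTransformer(nn.Module):
    """Encoder-decoder Transformer trained to reconstruct masked spans."""
    def __init__(self, d_model=256, nhead=8, layers=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=layers, num_decoder_layers=layers,
            batch_first=True)
        self.out = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, enc_in, dec_in):
        # Causal mask so the decoder cannot peek at future span tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(dec_in.size(1))
        h = self.transformer(self.embed(enc_in), self.embed(dec_in), tgt_mask=tgt_mask)
        return self.out(h)

# One pretraining step on a single (already tokenized) SMILES sequence.
tokens = [5, 9, 12, 7, 9, 9, 30, 7]       # placeholder token ids for one molecule
enc, dec, tgt = mass_mask(tokens)
model = MassTransformer()
logits = model(torch.tensor([enc]), torch.tensor([dec]))
loss = nn.functional.cross_entropy(logits.view(-1, VOCAB_SIZE), torch.tensor(tgt))
loss.backward()   # pretrained weights would later be fine-tuned on reaction data
```

Fine-tuning on a small reaction data set would then reuse these pretrained weights, with reactant SMILES as the encoder input and product SMILES as the decoder target.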

Original language: English
Pages (from-to): 4579-4590
Number of pages: 12
Journal: Journal of Chemical Information and Modeling
Volume: 62
Issue number: 19
DOIs
Publication status: Published - 10 Oct 2022
Externally published: Yes
