Self-Supervised Molecular Pretraining Strategy for Low-Resource Reaction Prediction Scenarios

Zhipeng Wu, Xiang Cai, Chengyun Zhang, Haoran Qiao, Yejian Wu, Yun Zhang, Xinqiao Wang, Haiying Xie, Feng Luo, Hongliang Duan

Research output: Article › peer-review

5 Citations (Scopus)

Abstract

Faced with low-resource reaction training samples, we construct a chemical platform for addressing small-scale reaction prediction problems. Using a self-supervised pretraining strategy called MAsked Sequence to Sequence (MASS), the Transformer model can absorb the chemical information of about 1 billion molecules and then be fine-tuned on small-scale reaction prediction tasks. To further strengthen the predictive performance of our model, we combine MASS with the reaction transfer learning strategy. Here, we show that the average improved accuracies of the Transformer model reach 14.07, 24.26, 40.31, and 57.69% in predicting the Baeyer-Villiger, Heck, C-C bond formation, and functional group interconversion reaction data sets, respectively, marking an important step toward low-resource reaction prediction.
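As a rough illustration of the MASS objective described in the abstract, the sketch below applies span masking to a SMILES string: the encoder input has a contiguous fragment replaced by mask tokens, and the decoder target is the masked fragment it must reconstruct. The character-level tokenizer, mask ratio, and mask symbol here are illustrative assumptions and are not taken from the paper or its code.

```python
import random

# Hypothetical illustration of MASS-style span masking on a SMILES string.
# The mask symbol and tokenization are assumptions, not the authors' implementation.
MASK = "<mask>"

def tokenize_smiles(smiles):
    """Naive character-level tokenization (a real setup would likely use a
    regex-based SMILES tokenizer; this is only a placeholder)."""
    return list(smiles)

def mass_mask(tokens, mask_ratio=0.5, rng=random):
    """Mask one contiguous span covering ~mask_ratio of the sequence.
    Returns the encoder input (span replaced by MASK tokens) and the
    decoder target (the original span), following the MASS objective."""
    span_len = max(1, int(len(tokens) * mask_ratio))
    start = rng.randrange(0, len(tokens) - span_len + 1)
    encoder_input = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    decoder_target = tokens[start:start + span_len]
    return encoder_input, decoder_target

if __name__ == "__main__":
    smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, used only as an example
    enc_in, dec_tgt = mass_mask(tokenize_smiles(smiles))
    print("encoder input :", "".join(t if t != MASK else "_" for t in enc_in))
    print("decoder target:", "".join(dec_tgt))
```

In pretraining, many such masked molecule sequences would be fed to a Transformer encoder-decoder; the pretrained weights would then be fine-tuned on the small reaction data sets mentioned above.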

Original language: English
Pages (from-to): 4579-4590
Number of pages: 12
Journal: Journal of Chemical Information and Modeling
Volume: 62
Issue number: 19
DOIs
Publication status: Published - 10 Oct 2022
Published externally
