Transformer-based multitask learning for reaction prediction under low-resource circumstances

Haoran Qiao, Yejian Wu, Yun Zhang, Chengyun Zhang, Xinyi Wu, Zhipeng Wu, Qingjie Zhao, Xinqiao Wang, Huiyu Li, Hongliang Duan

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


Effective and rapid deep-learning methods for predicting chemical reactions have recently become valuable aids to organic chemistry research and drug discovery. However, because related reaction data are often scarce, computer-assisted predictions trained on low-resource chemical datasets generally achieve low accuracy, despite the exceptional ability of deep learning in retrosynthesis and synthesis. To address this issue, we introduce two multitask models: the retro-forward reaction prediction transformer (RFRPT) and the multiforward reaction prediction transformer (MFRPT). These models integrate multitask learning with the transformer architecture to predict low-resource reactions in both forward reaction prediction and retrosynthesis. Our results demonstrate that introducing multitask learning significantly improves average top-1 accuracy: RFRPT (76.9%) and MFRPT (79.8%) outperform the transformer baseline model (69.9%). These results also show that a multitask framework can capture sufficient chemical knowledge and effectively mitigate the shortage of low-resource data in reaction prediction tasks. Both RFRPT and MFRPT substantially improve the predictive performance of transformer models and offer a powerful way to relax the constraint of limited training data.
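The abstract describes merging forward prediction and retrosynthesis into one transformer via multitask learning. A common way to realize this for sequence-to-sequence reaction models is to pool both directions into a single training set, marking each example with a task token so one shared model learns both mappings. The sketch below is an illustrative assumption, not the paper's actual code; the token names (`<forward>`, `<retro>`), the helper `make_multitask_pairs`, and the placeholder SMILES are all hypothetical.

```python
def make_multitask_pairs(reactions):
    """Build a joint multitask training set from (reactants, product) SMILES pairs.

    Each reaction yields two source->target examples, one per task, so a
    single sequence-to-sequence transformer is trained on both directions
    and the auxiliary task supplies extra signal in the low-resource setting.
    """
    pairs = []
    for reactants, product in reactions:
        # Forward reaction prediction: reactants -> product
        pairs.append((f"<forward> {reactants}", product))
        # Retrosynthesis: product -> reactants
        pairs.append((f"<retro> {product}", reactants))
    return pairs


# Toy example with placeholder SMILES (illustrative only)
data = [("CCO.CC(=O)O", "CC(=O)OCC")]
for src, tgt in make_multitask_pairs(data):
    print(src, "->", tgt)
```

Prefixing a task token is one simple design choice for sharing a transformer across tasks; alternatives include separate decoders or alternating task-specific training batches, and the paper's RFRPT/MFRPT variants may differ in these details.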

Original language: English
Pages (from-to): 32020-32026
Number of pages: 7
Journal: RSC Advances
Issue number: 49
Publication status: Published - 8 Nov 2022
Externally published: Yes
