Abstract
In real industrial scenarios, varying operating conditions and workloads give rise to multiple working-condition modes, resulting in significantly diverse feature spaces. The heterogeneity and complexity among these modes pose a challenge to traditional data processing methods. Therefore, this paper proposes the cross-modality manifold adaptive network (CMAN) to facilitate cross-mode information transfer and address multimode prediction problems. Specifically, CMAN divides the prediction process into two steps. First, the manifold discriminative autoencoder (MDAE) is proposed to extract both local and global manifold geometric structures. The MDAE loss function for mode recognition is formulated to minimize the ratio of within-mode to between-mode feature scatter. In this way, the autoencoder not only learns data representations but also learns to discriminate between data from different modes, laying the foundation for determining the fusion strategy between modes in the subsequent step. Second, during multimode prediction, CMAN shares features between modes through cross-connections, helping the model capture the mutual influences and dependencies among modes. It adaptively preserves task specificity while exploiting between-task correlations. The effectiveness of the proposed method is validated on the Tennessee Eastman (TE) benchmark case and an actual power plant case.
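The abstract names two reusable mechanisms: an LDA-style ratio criterion for the MDAE's mode-recognition loss, and cross-connections that let mode-specific branches exchange features. The following is a minimal PyTorch sketch of both ideas, not the paper's exact formulation; the class names, the scatter-based loss, and the cross-stitch-style mixing matrix are illustrative assumptions inferred from the abstract's description.

```python
import torch
import torch.nn as nn

class DiscriminativeRatioLoss(nn.Module):
    """Hypothetical sketch of the MDAE mode-recognition objective:
    minimize within-mode scatter divided by between-mode scatter
    of the latent features (an LDA-style ratio criterion)."""
    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, z: torch.Tensor, mode_labels: torch.Tensor) -> torch.Tensor:
        # z: (N, d) latent features; mode_labels: (N,) integer mode ids
        overall_mean = z.mean(dim=0)
        within, between = 0.0, 0.0
        for m in mode_labels.unique():
            zm = z[mode_labels == m]
            mean_m = zm.mean(dim=0)
            within = within + ((zm - mean_m) ** 2).sum()
            between = between + zm.shape[0] * ((mean_m - overall_mean) ** 2).sum()
        # Small ratio -> compact modes, well-separated mode centers.
        return within / (between + self.eps)

class CrossConnection(nn.Module):
    """Hypothetical cross-connection: each mode branch receives a learnable,
    weighted mixture of all branches' features (cross-stitch style), so the
    network can adaptively share or isolate information across modes."""
    def __init__(self, num_modes: int):
        super().__init__()
        # Mixing matrix initialized near identity: mostly mode-specific at first.
        self.alpha = nn.Parameter(
            torch.eye(num_modes) + 0.1 * torch.randn(num_modes, num_modes)
        )

    def forward(self, feats: list[torch.Tensor]) -> list[torch.Tensor]:
        # feats: list of (N, d) feature tensors, one per mode branch
        stacked = torch.stack(feats, dim=0)                 # (M, N, d)
        mixed = torch.einsum("ij,jnd->ind", self.alpha, stacked)
        return [mixed[i] for i in range(len(feats))]
```

In a full CMAN-style model, the ratio term would presumably be combined with the autoencoder's reconstruction loss, and `CrossConnection` layers would sit between the mode-specific prediction branches; those details are beyond what the abstract specifies and are omitted here.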
| Original language | English |
| --- | --- |
| Pages (from-to) | 7845-7854 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Automation Science and Engineering |
| Volume | 22 |
| DOIs | |
| Publication status | Published - 2025 |
| Externally published | Yes |
Keywords
- autoencoders
- cross-connections
- industrial soft sensors
- manifold learning
- multimode process