
Role-aware adapters for dialogue summarization in Seq2Seq models

Research output: Article, peer-reviewed

Abstract

Dialogue summarization aims to convert the content of a complex dialogue into a concise, focused text that allows the core elements of the dialogue to be grasped quickly. Traditional approaches summarize each role's speech independently and therefore often ignore the key contributions of non-primary roles, omitting important information. To address this problem, we propose an innovative Role-Aware Adapters (RAA) approach that focuses on the interactions between roles in a dialogue to distill and integrate each role's key information more comprehensively. RAA achieves this goal through three core mechanisms: role-aware semantic weighting reinforces the emphasis on important role interactions; local and global semantic weighting assesses the importance of each sentence in the dialogue and integrates the key information of each role; and adaptive dynamic weighting automatically adjusts to changes in dialogue content to highlight the most critical information. Our experiments on three publicly available datasets, CSDS, MC, and SAMSUM, show that RAA achieves significant performance improvements on several evaluation metrics compared to existing techniques. These results not only demonstrate the importance of including information about other roles, but also highlight the significant advantages of our approach in enriching summary content, enhancing semantic coherence, and improving the accuracy of the topic structure.
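The abstract names three weighting mechanisms without giving their formulas, so the following is only a minimal illustrative sketch, not the paper's actual method. It assumes utterances are already embedded as vectors; the function `role_aware_weights`, the role-centroid scoring, and the std-ratio gate are all hypothetical stand-ins for local/global semantic weighting and the adaptive blend described above.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def role_aware_weights(sent_embs, roles, global_emb):
    """Toy illustration (not the paper's method): score each utterance by
    (a) local salience within its own speaker's turns and (b) similarity
    to a global dialogue representation, then blend the two adaptively."""
    # Local salience: similarity of each utterance to its role's centroid,
    # so every role (including non-primary ones) contributes a signal.
    local = np.zeros(len(sent_embs))
    for r in set(roles):
        idx = [i for i, x in enumerate(roles) if x == r]
        centroid = sent_embs[idx].mean(axis=0)
        local[idx] = sent_embs[idx] @ centroid
    # Global salience: similarity to the whole-dialogue representation.
    glob = sent_embs @ global_emb
    # Adaptive gate (hypothetical): lean on whichever score spreads the
    # utterances apart more, mimicking "adaptive dynamic weighting".
    alpha = glob.std() / (glob.std() + local.std() + 1e-8)
    return softmax(alpha * glob + (1 - alpha) * local)
```

In this sketch the returned vector is a distribution over utterances that could, for example, re-weight encoder states in a Seq2Seq model before decoding the summary.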

Original language: English
Article number: 114293
Journal: Applied Soft Computing Journal
Volume: 187
DOIs
Publication status: Published - Feb 2026

