Role-aware adapters for dialogue summarization in Seq2Seq models

Research output: Contribution to journal › Article › peer-review

Abstract

Dialogue summarization aims to condense a complex dialogue into a concise, focused text that allows a quick understanding of the dialogue's core elements. By summarizing each role's speech independently, traditional approaches often ignore the key contributions of non-primary roles, resulting in the omission of important information. To address this problem, we propose an innovative Role-Aware Adapters (RAA) approach that focuses on the interactions between roles in a dialogue to more comprehensively distill and integrate the key information of each role. RAA achieves this through three core mechanisms: role-aware semantic weighting reinforces the emphasis on important role interactions; local and global semantic weighting assesses the importance of each sentence in the dialogue and integrates the key information of each role; and adaptive dynamic weighting automatically adjusts to changes in dialogue content to highlight the most critical information. Our experiments on three publicly available datasets, CSDS, MC, and SAMSUM, show that RAA achieves significant performance improvements on several evaluation metrics compared to existing techniques. These results not only demonstrate the importance of including information about other roles, but also highlight the advantages of our approach in enriching summary content, enhancing semantic coherence, and improving the accuracy of the topic structure.
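The three mechanisms can be illustrated with a minimal sketch. The function below is hypothetical and not the paper's implementation: it stands in for the learned scorers with a simple word-count heuristic, computes a global score over the whole dialogue and a local score within each role, and blends them with an adaptive coefficient `alpha` (all names are assumptions for illustration).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def role_aware_weights(utterances, alpha=0.5):
    """Hypothetical sketch of local/global semantic weighting.

    utterances: list of (role, text) pairs.
    alpha: adaptive blending coefficient between the per-role (local)
           and whole-dialogue (global) importance scores.
    The word-count score is a stand-in; RAA learns its scores.
    """
    # Global importance: one softmax over every utterance in the dialogue.
    global_scores = softmax([len(text.split()) for _, text in utterances])

    # Local importance: a separate softmax within each role's utterances,
    # so non-primary roles still receive emphasis.
    local_scores = [0.0] * len(utterances)
    for role in {r for r, _ in utterances}:
        idx = [i for i, (r, _) in enumerate(utterances) if r == role]
        per_role = softmax([len(utterances[i][1].split()) for i in idx])
        for i, w in zip(idx, per_role):
            local_scores[i] = w

    # Adaptive blend of local and global importance per utterance.
    return [alpha * l + (1 - alpha) * g
            for l, g in zip(local_scores, global_scores)]
```

Because each role's local softmax sums to one, the blended weights sum to `alpha * num_roles + (1 - alpha)`; a learned `alpha` would let the model shift emphasis between role-level and dialogue-level importance as the content changes.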

Original language: English
Article number: 114293
Journal: Applied Soft Computing Journal
Volume: 187
DOIs
Publication status: Published - Feb 2026

Keywords

  • Dialogue summarization
  • Natural language processing
  • Role aware
  • Seq2Seq

