TransHFC: Joints Hypergraph Filtering Convolution and Transformer Framework for Temporal Forgery Localization

Jiahao Huang, Xiaochen Yuan, Chan Tong Lam, Sio Kei Im, Fangyuan Lei, Xiuli Bi

Research output: Contribution to journal › Article › peer-review

Abstract

The authenticity of audio-visual content is being challenged by advanced multimedia editing technologies inspired by Artificial Intelligence-Generated Content (AIGC). Temporal forgery localization aims to detect suspicious content by locating forged segments. Most existing methods are based on Convolutional Neural Networks (CNNs) or Transformers, yet neither has fully considered the complex relationships within forged audio-visual content. To address this issue, we propose a novel method, named TransHFC, which introduces hypergraphs to model group relationships among segments while capturing point-to-point relationships through Transformers. Through its dual hypergraph filtering convolution branch, TransHFC captures group relationships at both the temporal and spatial levels, enhancing the representation of forged-segment features. Furthermore, we propose a new hypergraph filtering convolution Auto-Encoder that uses a multi-frequency filter bank for adaptive signal capture, compensating for the limitation of a single hypergraph filter. Extensive experiments on the Lav-DF, TVIL, Psynd, and HAD datasets demonstrate that TransHFC achieves state-of-the-art performance.
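The hypergraph convolution the abstract builds on can be illustrated with a generic HGNN-style operator, where segments are nodes and hyperedges group related segments. This is a minimal numpy sketch of the standard normalized propagation rule X' = Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2) X Θ, not the paper's exact TransHFC layer; the function name, the toy incidence matrix, and the weights below are illustrative assumptions.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One generic hypergraph convolution step (HGNN-style sketch):
        X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta
    X: (n, f) node (segment) features
    H: (n, e) incidence matrix, H[v, j] = 1 if node v is in hyperedge j
    Theta: (f, f_out) learnable projection
    edge_w: optional (e,) hyperedge weights (defaults to 1)
    """
    n, e = H.shape
    w = edge_w if edge_w is not None else np.ones(e)
    W = np.diag(w)
    Dv = H @ w                 # node degrees (weighted)
    De = H.sum(axis=0)         # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    # Symmetrically normalized propagation over hyperedge groups
    P = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return P @ X @ Theta

# Toy example: 4 temporal segments, two overlapping groups {0,1,2}, {1,2,3}
H = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)   # 4 segments, 3 features
Theta = np.ones((3, 2))                        # project to 2 output features
out = hypergraph_conv(X, H, Theta)             # shape (4, 2)
```

Unlike an ordinary graph edge, each hyperedge here connects a whole group of segments at once, which is why the abstract contrasts this group-level modeling with the point-to-point attention of Transformers.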

Original language: English
Journal: IEEE Transactions on Circuits and Systems for Video Technology
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • Hypergraph
  • Hypergraph Convolution
  • Temporal Forgery Localization
  • Transformer
