Image alignment-based multi-region matching for object-level tampering detection

Chi Man Pun, Caiping Yan, Xiao Chen Yuan

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

Tampering detection methods based on image hashing have been widely studied with continuous advancements. However, most existing models cannot generate object-level tampering localization results, because the forensic hashes attached to the image lack contour information. In this paper, we present a novel tampering detection model that can generate an accurate, object-level tampering localization result. First, an adaptive image segmentation method is proposed to segment the image into closed regions based on strong edges. Then, the color and position features of the closed regions are extracted as a forensic hash. Furthermore, a geometric invariant tampering localization model named image alignment-based multi-region matching (IAMRM) is proposed to establish the region correspondence between the received and forensic images by exploiting their intrinsic structure information. The model estimates the parameters of geometric transformations via a robust image alignment method based on triangle similarity; in addition, it matches multiple regions simultaneously by utilizing manifold ranking based on different graph structures and features. Experimental results demonstrate that the proposed IAMRM is a promising method for object-level tampering detection compared with the state-of-the-art methods.
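The two core mechanisms named in the abstract, geometric transformation estimation and manifold-ranking-based region matching, can be illustrated with brief sketches. The first snippet is a minimal, hypothetical Python illustration of estimating the parameters of a 2D similarity transformation from corresponding points (such as matched triangle vertices), using the standard Umeyama closed-form solution; it is not the paper's exact alignment procedure, and the function name and parameters are assumptions.

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Closed-form (Umeyama) estimate of a 2D similarity transform such that
    dst_i ≈ s * R @ src_i + t for corresponding points (e.g. triangle vertices).

    src, dst : (n, 2) arrays of corresponding points, n >= 2.
    Returns scale s, rotation matrix R (2x2), translation t (2,).
    """
    n = src.shape[0]
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / n                      # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    S = np.diag([1.0, sign])
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / n
    s = float(np.trace(np.diag(D) @ S)) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

The second snippet sketches the manifold ranking step: regions from the forensic and received images are placed on one graph, and the standard closed-form manifold ranking of Zhou et al., f* = (I - alpha * S)^{-1} y, scores all regions against a query region. The feature layout (e.g. mean color concatenated with region centroid), the Gaussian affinity, and the parameter values are illustrative assumptions rather than the published configuration; it reuses the numpy import above.

```python
def manifold_ranking_scores(features, query_idx, sigma=0.5, alpha=0.99):
    """Score all regions against a query region via closed-form manifold ranking.

    features : (n, d) array, one row per region (e.g. mean color + centroid).
    Returns an (n,) array of ranking scores; larger means more similar.
    """
    n = features.shape[0]
    # Gaussian affinity between region features, zero self-affinity.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized graph: S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Query indicator vector and closed-form solution f* = (I - alpha*S)^{-1} y.
    y = np.zeros(n)
    y[query_idx] = 1.0
    return np.linalg.solve(np.eye(n) - alpha * S, y)

def match_regions(forensic_feats, received_feats, sigma=0.5, alpha=0.99):
    """Assign each forensic region to its best-ranked region in the received image."""
    all_feats = np.vstack([forensic_feats, received_feats])
    n_f = forensic_feats.shape[0]
    matches = {}
    for q in range(n_f):
        scores = manifold_ranking_scores(all_feats, q, sigma, alpha)
        matches[q] = int(np.argmax(scores[n_f:]))  # index into received regions
    return matches
```

In this sketch each forensic region is ranked independently; the paper instead matches multiple regions simultaneously over different graph structures and features, which the simple per-query loop above does not capture.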

Original language: English
Article number: 7583645
Pages (from-to): 377-391
Number of pages: 15
Journal: IEEE Transactions on Information Forensics and Security
Volume: 12
Issue number: 2
DOIs
Publication status: Published - Feb 2017
Externally published: Yes

Keywords

  • Image alignment
  • Image hashing
  • Multi-region matching
  • Object-level tampering detection
