
Black-box reversible adversarial examples with invertible neural network

  • Jielun Huang
  • Guoheng Huang
  • Xuhui Zhang
  • Xiaochen Yuan
  • Fenfang Xie
  • Chi Man Pun
  • Guo Zhong
  • Guangdong University of Technology
  • Guangdong University of Foreign Studies
  • Sun Yat-Sen University
  • University of Macau

Research output: Article, peer-reviewed

8 citations (Scopus)

Abstract

Reversible Adversarial Examples (RAEs) have been widely researched for their ability to ensure authorized access while preventing unauthorized recognition. Existing RAE schemes focus on Reversible Data Hiding techniques and white-box attacks. However, white-box attacks may be impractical because the parameters of the target model are unknown. Moreover, these methods suffer substantial loss during the embedding of perturbations, which degrades the quality of the RAE. In this paper, we propose the I-RAE scheme to generate black-box RAEs with minimal loss based on an Invertible Neural Network (INN). Specifically, a Black-box Attack Flow (BAFlow) is introduced to generate perturbations on a Gaussian distribution that are more easily embeddable. Furthermore, to enhance the embedding capability of RAEs, we innovatively treat the embedding of the perturbation as an image-hiding task and propose a Perturbation Hiding Network (PHN) to reversibly hide the entire perturbation within the adversarial example. We also implement wavelet high-frequency hiding to reduce the degradation in the visual quality of the RAE. Experimental results on the ImageNet and CIFAR-10 datasets demonstrate that I-RAE achieves state-of-the-art black-box attack ability and visual quality.
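The reversibility claimed above comes from the coupling structure of invertible neural networks: every forward transform has an exact analytic inverse, so a hidden perturbation can be recovered without loss. The following is a minimal illustrative sketch of an additive coupling layer, the basic INN building block (this is not the authors' PHN/BAFlow implementation; the tiny `_net` function is a hypothetical stand-in for a learned sub-network):

```python
import numpy as np

def _net(x):
    # Hypothetical stand-in for a small learned network. Any function works
    # here: invertibility comes from the coupling structure, not from _net.
    return 0.5 * np.tanh(x)

def coupling_forward(x1, x2):
    # Split-and-shift: the second half is shifted by a function of the first.
    y1 = x1
    y2 = x2 + _net(x1)
    return y1, y2

def coupling_inverse(y1, y2):
    # Exact inverse: subtract the same shift, recovering the input losslessly.
    x1 = y1
    x2 = y2 - _net(y1)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
y1, y2 = coupling_forward(x1, x2)
r1, r2 = coupling_inverse(y1, y2)
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True: lossless recovery
```

Stacking such layers (alternating which half is shifted) yields an expressive yet exactly invertible mapping, which is why an INN-based hiding network can embed a full perturbation and later remove it without residual error.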

Original language: English
Article number: 105094
Journal: Image and Vision Computing
Volume: 147
DOIs
Publication status: Published - July 2024
