
Deep Watermarking Based on Swin Transformer for Deep Model Protection

  • Macao Polytechnic University

Research output: Article › peer-review

3 Citations (Scopus)

Abstract

This study improves existing protection strategies for image processing models by embedding invisible watermarks into model outputs to verify the sources of images. Most current methods rely on CNN-based architectures, which are limited by their local perception capabilities and struggle to capture global information effectively. To address this, we introduce the Swin-UNet, originally designed for medical image segmentation tasks, into the watermark embedding process. The Swin Transformer's ability to capture global information improves the visual quality of the embedded image compared to CNN-based approaches. To defend against surrogate attacks, data augmentation techniques are incorporated into the training process, strengthening the watermark extractor's robustness. Experimental results show that the proposed watermarking approach reduces the impact of watermark embedding on visual quality. On a deraining task with color images, the average PSNR reaches 45.85 dB, while on a denoising task with grayscale images, the average PSNR reaches 56.60 dB. Additionally, watermarks extracted from surrogate attacks closely match those from the original framework, with an accuracy of 99% to 100%. These results confirm the Swin Transformer's effectiveness in preserving visual quality.
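The abstract reports its results through two standard metrics: PSNR between the clean and watermarked outputs (visual quality) and bit accuracy between the embedded and extracted watermarks (robustness). As a minimal sketch of how these figures are typically computed (the image and watermark data below are toy placeholders, not the paper's datasets):

```python
import numpy as np

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means b is closer to a."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def bit_accuracy(wm_true, wm_extracted):
    """Fraction of matching bits between embedded and extracted watermarks."""
    return float(np.mean(wm_true == wm_extracted))

# Toy example: an image perturbed by a faint, invisible-scale residual.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
residual = rng.normal(0.0, 1.0, size=img.shape)  # ~1 gray-level of noise
print(f"PSNR: {psnr(img, img + residual):.2f} dB")

# Toy watermark check: flip 1% of bits to mimic near-perfect extraction.
wm = rng.integers(0, 2, size=256)
extracted = wm.copy()
flip = rng.choice(256, size=2, replace=False)
extracted[flip] ^= 1
print(f"Bit accuracy: {bit_accuracy(wm, extracted):.3f}")
```

A unit-variance residual on 8-bit images gives roughly 48 dB PSNR, which helps put the reported 45.85 dB (deraining) and 56.60 dB (denoising) averages in perspective: both correspond to per-pixel distortions around or below one gray level.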

Original language: English
Article number: 5250
Journal: Applied Sciences (Switzerland)
Volume: 15
Issue number: 10
DOIs
Publication status: Published - May 2025
