MVJSCC: Adaptive Lightweight DeepJSCC for Semantic Image Transmission

Research output: Contribution to journal › Article › peer-review


Abstract

Existing deep joint source-channel coding (DeepJSCC) schemes for orthogonal frequency division multiplexing (OFDM) systems face the problems of high computational complexity and limited dynamic adaptability over multipath fading channels, where the signal-to-noise ratio (SNR) and channel state information (CSI) vary with time. To address these challenges, we propose MVJSCC, a lightweight MobileViT-based joint source-channel coding scheme with a channel adaptation block mechanism for wireless semantic transmission. The proposed MVJSCC adopts a lightweight autoencoder structure integrated with OFDM transmission and achieves adaptation through a lightweight efficient channel attention (ECA) based channel adaptation block (CAB) mechanism. Specifically, the ECA module introduces a conditional attention mechanism in which the SNR controls the attention range and the CSI guides feature priority. Moreover, we apply the tensor-train (TT) decomposition method to further improve the computational efficiency of the MobileViT module used in MVJSCC. Numerical experiments demonstrate that the proposed MVJSCC achieves a 4 dB peak signal-to-noise ratio (PSNR) gain and a 0.25 improvement in the structural similarity index measure (SSIM) over conventional DeepJSCC, while reducing floating-point operations (FLOPs) by 50% and the number of parameters by 75%. Furthermore, MVJSCC is robust to different channel bandwidth ratios and datasets.
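As a rough illustration of the two mechanisms named in the abstract, the sketches below show (i) an ECA-style channel adaptation block whose attention output is modulated by the SNR and a CSI summary, and (ii) a tensor-train factorization of a linear layer. The letter itself does not publish code; all module names, conditioning layers, tensor shapes, and ranks here are assumptions chosen only to make the ideas concrete.

```python
import torch
import torch.nn as nn

class ChannelAdaptationBlock(nn.Module):
    """Hypothetical sketch of an ECA-based CAB. The abstract states only
    that SNR controls the attention range and CSI guides feature priority;
    the exact conditioning below (two small sigmoid MLPs) is an assumption."""

    def __init__(self, num_channels: int, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # ECA core: a cheap 1D convolution across the pooled channel descriptor.
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        # Assumed conditioning: scalar SNR and a 2-dim CSI summary are mapped
        # to per-channel gates that rescale the ECA attention.
        self.snr_mlp = nn.Sequential(nn.Linear(1, num_channels), nn.Sigmoid())
        self.csi_mlp = nn.Sequential(nn.Linear(2, num_channels), nn.Sigmoid())

    def forward(self, x, snr_db, csi):
        # x: (B, C, H, W); snr_db: (B, 1); csi: (B, 2), e.g. |h| and phase.
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)                     # channel descriptor
        attn = torch.sigmoid(self.conv(y)).view(b, c, 1, 1)
        attn = attn * self.snr_mlp(snr_db).view(b, c, 1, 1)  # SNR sets range
        attn = attn * self.csi_mlp(csi).view(b, c, 1, 1)     # CSI sets priority
        return x * attn
```

For the TT decomposition, a minimal two-core example replaces a dense (m1*m2) x (n1*n2) weight matrix with cores of sizes (m1, n1, r) and (r, m2, n2), cutting parameters and FLOPs; the modes and rank below are illustrative, not the paper's configuration for MobileViT.

```python
class TTLinear(nn.Module):
    """Illustrative two-core tensor-train linear layer (assumed shapes)."""

    def __init__(self, in_modes=(16, 16), out_modes=(16, 16), rank=4):
        super().__init__()
        m1, m2 = in_modes
        n1, n2 = out_modes
        self.core1 = nn.Parameter(torch.randn(m1, n1, rank) * 0.02)
        self.core2 = nn.Parameter(torch.randn(rank, m2, n2) * 0.02)
        self.in_modes = in_modes

    def forward(self, x):
        # x: (B, m1*m2) -> contract input modes against the two TT cores.
        b = x.shape[0]
        x = x.view(b, *self.in_modes)                       # (B, m1, m2)
        y = torch.einsum('bij,iar->bjar', x, self.core1)    # (B, m2, n1, r)
        y = torch.einsum('bjar,rjc->bac', y, self.core2)    # (B, n1, n2)
        return y.reshape(b, -1)
```

A quick usage check under these assumptions: `ChannelAdaptationBlock(64)(torch.randn(2, 64, 32, 32), torch.full((2, 1), 10.0), torch.rand(2, 2))` returns a tensor of the same shape as its input, and `TTLinear()` maps (B, 256) to (B, 256) with 2 * 16 * 16 * 4 = 2048 weights instead of 65536.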

Original language: English
Pages (from-to): 2516-2520
Number of pages: 5
Journal: IEEE Wireless Communications Letters
Volume: 14
Issue number: 8
DOIs
Publication status: Published - 2025

Keywords

  • OFDM
  • Semantic communications
  • adaptive
  • lightweight
  • lightweight attention mechanism

