
Comparative Study of Lightweight Deep Learning Models for Soccer Penalty Kick Image Classification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Image classification plays a crucial role in sports video analysis, providing insights into player performance and tactics. This study classifies soccer penalty kick images into target zones using lightweight deep learning models. A dataset of 3,000 images extracted from public match videos was labelled according to three penalty kick target regions. Four convolutional neural networks (ShuffleNet-v2, MobileNet-v3, EfficientNet-b0, and RegNetX-400MF) and a Simple Vision Transformer were evaluated. All models performed competitively, with EfficientNet-b0 achieving the highest accuracy of 84.11%. These results highlight the effectiveness of lightweight networks for soccer image classification and their potential in real-time sports analytics.

Original language: English
Title of host publication: 2025 5th International Conference on Digital Society and Intelligent Systems, DSInS 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 343-346
Number of pages: 4
ISBN (Electronic): 9798331587574
DOIs
Publication status: Published - 2025
Event: 5th International Conference on Digital Society and Intelligent Systems, DSInS 2025 - Haikou, China
Duration: 7 Nov 2025 - 9 Nov 2025

Publication series

Name: 2025 5th International Conference on Digital Society and Intelligent Systems, DSInS 2025

Conference

Conference: 5th International Conference on Digital Society and Intelligent Systems, DSInS 2025
Country/Territory: China
City: Haikou
Period: 7/11/25 - 9/11/25

Keywords

  • EfficientNet-B0
  • Image classification
  • Lightweight deep learning models
  • Soccer penalty kick
  • Sports Video Analysis
