YOLO–LaserGalvo: A Vision–Laser-Ranging System for High-Precision Welding Torch Localization

Jiajun Li, Tianlun Wang, Wei Wei

Research output: Contribution to journal › Article › peer-review

Abstract

A novel closed-loop visual positioning system, termed YOLO–LaserGalvo (YLGS), is proposed for the precise localization of welding torch tips in industrial welding automation. The system integrates a monocular camera, an infrared laser distance sensor coupled to a galvanometer scanner, and a customized deep-learning detector based on an improved YOLOv11 model. In operation, the vision subsystem first detects the approximate image location of the torch tip using the YOLOv11-based model. Guided by this detection, the galvanometer steers the infrared laser beam to that point and measures the distance to the torch tip. The distance feedback is then fused with the vision coordinates to compute the precise 3D position of the torch tip in real time. Experimental evaluation shows that the system outperforms traditional color-marker and ArUco-based methods in accuracy, robustness, and processing speed, and that it remains robust under complex illumination. This marker-free method provides high-precision torch positioning without requiring structured lighting or artificial markers. Its pedagogical implications for engineering education are also discussed. Potential future work includes extending the method to full 6-DOF pose estimation and integrating additional sensors for enhanced performance.
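As a minimal sketch of the fusion step described in the abstract: assuming the galvanometer-steered laser ray is calibrated to coincide with the camera ray through the detected pixel, the 3D torch-tip position follows from standard pinhole back-projection, with the unit-norm pixel ray scaled by the measured range. The intrinsics, function names, and values below are illustrative assumptions, not details from the paper.

# A sketch of the vision/laser fusion step: back-project the detected
# pixel through the camera intrinsics and scale the resulting ray by
# the laser range measurement. All numeric values are hypothetical.
import numpy as np

# Hypothetical camera intrinsics (fx, fy: focal lengths in pixels;
# cx, cy: principal point). Real values would come from calibration.
FX, FY, CX, CY = 1200.0, 1200.0, 640.0, 360.0

def fuse_pixel_and_range(u: float, v: float, range_m: float) -> np.ndarray:
    """Fuse a detected pixel (u, v) with a laser range reading.

    Assumes the laser ray coincides with the camera ray through
    (u, v) after extrinsic calibration, so the measured range is
    the distance along that ray.
    """
    # Unit-depth ray through the pixel in the camera frame.
    ray = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    # Scale the normalized ray by the measured range to obtain the
    # 3D torch-tip position in camera coordinates (meters).
    return range_m * ray / np.linalg.norm(ray)

# Example: a detection at pixel (700, 400) with a 0.85 m range reading.
tip_xyz = fuse_pixel_and_range(700.0, 400.0, 0.85)
print(tip_xyz)  # approx. [0.042, 0.028, 0.848]

In practice the laser and camera rays will not coincide exactly, so a calibrated extrinsic transform between the galvanometer and camera frames would replace the coincidence assumption made here.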

Original language: English
Article number: 6279
Journal: Sensors
Volume: 25
Issue number: 20
DOIs
Publication status: Published - Oct 2025

Keywords

  • deep learning detection
  • engineering education
  • laser galvanometer scanning
  • vision-based positioning
  • welding torch localization

