Enhancing Speaker Recognition with CRET Model: a fusion of CONV2D, RESNET and ECAPA-TDNN

Research output: Contribution to journal › Article › peer-review

Abstract

Speaker recognition plays an increasingly important role in today's society, and neural networks are now widely employed to extract speaker features. Although the Emphasized Channel Attention, Propagation, and Aggregation in Time Delay Neural Network (ECAPA-TDNN) model can capture temporal context to some extent through dilated convolution, it falls short of acquiring fully comprehensive speech features. To further improve accuracy, better capture temporal context, and make ECAPA-TDNN robust to small offsets in the frequency domain, we combine a two-dimensional convolutional network (Conv2D), a residual network (ResNet), and ECAPA-TDNN into a novel CRET model. In this study, two CRET models are proposed and compared with the baseline models Res2Net (a multi-scale backbone architecture) and ECAPA-TDNN across different channel widths and datasets. The experimental findings indicate that the proposed models perform strongly on both training and test sets, even when the network is deep. Our model performs best on the VoxCeleb2 dataset with 1024 channels, achieving an accuracy of 0.97828, together with an equal error rate (EER) of 0.03612 and a minimum detection cost function (MinDCF) of 0.43967 on the VoxCeleb1-O test set. This technology can improve public safety and service efficiency in smart-city construction, support applications in finance, education, and other fields, and bring greater convenience to people's lives.
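To make the fusion concrete, below is a minimal PyTorch sketch of the idea behind a Conv2D + ResNet front end feeding a 1-D TDNN-style back end. It is an illustrative assumption, not the authors' implementation: the module names (ResidualBlock, CRETFrontEnd), the layer sizes, and the 1024-channel output width are hypothetical. The 2-D convolutions operate on the time-frequency map, which is what absorbs small frequency offsets, before the frequency axis is folded into channels for an ECAPA-TDNN-style network.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic 2-D residual block (ResNet-style) over time-frequency feature maps."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection keeps gradients healthy in deep stacks

class CRETFrontEnd(nn.Module):
    """Hypothetical Conv2D + ResNet front end: 2-D convolutions smooth small
    frequency shifts, then the frequency axis is folded into channels so the
    output can feed a 1-D (ECAPA-TDNN-style) back end."""
    def __init__(self, n_mels=80, conv_channels=32, out_channels=1024):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, conv_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(conv_channels),
            nn.ReLU(),
        )
        self.res = nn.Sequential(ResidualBlock(conv_channels),
                                 ResidualBlock(conv_channels))
        # Project (channels x mel bins) down to the TDNN input width.
        self.proj = nn.Conv1d(conv_channels * n_mels, out_channels, kernel_size=1)

    def forward(self, x):               # x: (batch, n_mels, frames)
        x = x.unsqueeze(1)              # (batch, 1, n_mels, frames)
        x = self.res(self.stem(x))      # (batch, C, n_mels, frames)
        b, c, f, t = x.shape
        x = x.reshape(b, c * f, t)      # fold frequency into channels
        return self.proj(x)             # (batch, out_channels, frames)

features = torch.randn(4, 80, 200)      # batch of 4 log-mel spectrograms
print(CRETFrontEnd()(features).shape)   # torch.Size([4, 1024, 200])
```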
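For readers unfamiliar with the reported metric, the sketch below shows one common way to estimate the equal error rate from verification trial scores; the function and the toy data are illustrative and are not taken from the paper. EER is the operating point where the false-accept rate equals the false-reject rate, so lower values (such as the 0.03612 reported above) indicate better verification.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Estimate EER from trial scores.
    scores: similarity score per trial pair; labels: 1 = same speaker, 0 = different."""
    order = np.argsort(scores)[::-1]                 # sort trials by score, high to low
    labels = np.asarray(labels)[order]
    tar = np.cumsum(labels) / labels.sum()           # true-accept rate at each threshold
    far = np.cumsum(1 - labels) / (1 - labels).sum() # false-accept rate
    frr = 1 - tar                                    # false-reject rate
    idx = np.argmin(np.abs(far - frr))               # threshold where FAR ~= FRR
    return (far[idx] + frr[idx]) / 2

scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])
labels = np.array([1,   1,   0,   1,   0,   0])
print(equal_error_rate(scores, labels))              # ~0.333 on this toy data
```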

Original language: English
Article number: 9
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Volume: 2025
Issue number: 1
DOIs
Publication status: Published - Dec 2025

Keywords

  • Conv2D
  • ECAPA-TDNN
  • ResNet
  • Smart city
  • Speaker recognition
