Faster Inter Prediction by NR-Frame in VVC

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

VVC is the next-generation video coding standard, in which inter prediction plays an important role in reducing the redundancy between adjacent frames. Coding time increases because larger blocks and more extensive motion search are supported, and the accuracy of inter prediction is limited because the conventional algorithm uses only temporal information. This work makes use of YOLOv5 to refine inter prediction in VVC, introducing an architecture that combines detected objects and tracking results with the proposed NR-Frame, which performs faster prediction of coded blocks within the detected objects. Experimental results demonstrate that the proposed method achieves an average 11.45% (up to 13.27%) reduction in coding time under RA conditions compared to VTM-13.0.
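For illustration only (this is not the paper's implementation), the sketch below shows one way YOLOv5-style detection boxes could be mapped onto a grid of fixed-size coding blocks so that blocks overlapping a detected object can be flagged for a faster prediction path. The 128x128 block size matches the default VVC CTU size; the function name, the box format, and the simple overlap rule are assumptions made for this sketch.

    # Illustrative sketch (not the paper's method): given detection boxes
    # (x1, y1, x2, y2) for a frame, mark which fixed-size coding blocks
    # intersect at least one detected object so they could take a faster
    # prediction path. Only the 128x128 CTU size reflects VVC defaults.

    BLOCK = 128  # VVC CTU size in luma samples

    def blocks_overlapping_objects(frame_w, frame_h, boxes):
        """Return a set of (block_x, block_y) indices whose CTU area
        intersects at least one detected object box."""
        flagged = set()
        for x1, y1, x2, y2 in boxes:
            bx0, by0 = int(x1) // BLOCK, int(y1) // BLOCK
            bx1, by1 = int(x2) // BLOCK, int(y2) // BLOCK
            for by in range(max(by0, 0), min(by1, (frame_h - 1) // BLOCK) + 1):
                for bx in range(max(bx0, 0), min(bx1, (frame_w - 1) // BLOCK) + 1):
                    flagged.add((bx, by))
        return flagged

    # Example: a 1920x1080 frame with one detected object
    if __name__ == "__main__":
        boxes = [(300.0, 200.0, 650.0, 520.0)]  # x1, y1, x2, y2
        fast_path_blocks = blocks_overlapping_objects(1920, 1080, boxes)
        print(sorted(fast_path_blocks))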

Original language: English
Title of host publication: ICGSP 2023 - Proceedings of the 2023 7th International Conference on Graphics and Signal Processing
Publisher: Association for Computing Machinery
Pages: 24-28
Number of pages: 5
ISBN (Electronic): 9798400700460
DOIs
Publication status: Published - 23 Jun 2023
Event: 7th International Conference on Graphics and Signal Processing, ICGSP 2023 - Fujisawa, Japan
Duration: 23 Jun 2023 to 25 Jun 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 7th International Conference on Graphics and Signal Processing, ICGSP 2023
Country/Territory: Japan
City: Fujisawa
Period: 23/06/23 to 25/06/23

Keywords

  • Inter Prediction
  • Motion Searching
  • Neural Network
  • Versatile Video Coding (VVC)
