TY - JOUR
T1 - Strong generalization in quantum neural networks
AU - Jiang, Jinzhe
AU - Zhao, Yaqian
AU - Li, Rengang
AU - Li, Chen
AU - Guo, Zhenhua
AU - Fan, Baoyu
AU - Li, Xuelei
AU - Li, Ruyang
AU - Zhang, Xin
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2023/12
Y1 - 2023/12
N2 - Generalization is an important feature of neural networks (NNs), as it indicates their ability to predict new, unseen data. However, classical NNs struggle to overcome overfitting in applications due to their nonlinear characteristics, which reflects poor generalization. By combining quantum computing with NNs, quantum neural networks (QNNs) have more potential than classical NNs. In this work, we study the generalization of QNNs and compare it with that of classical NNs. We prove that QNNs have a generalization error bound and derive its theoretical value. We also show that QNNs perform almost identically on the training and test datasets, without the overfitting phenomenon. To validate our proposal, we simulate three QNN models on two public datasets and compare them with a traditional network model. The results demonstrate that QNNs have ideal generalization, much better than classical NNs. Finally, we run the experiment on a quantum processor to confirm the simulation results.
AB - Generalization is an important feature of neural networks (NNs), as it indicates their ability to predict new, unseen data. However, classical NNs struggle to overcome overfitting in applications due to their nonlinear characteristics, which reflects poor generalization. By combining quantum computing with NNs, quantum neural networks (QNNs) have more potential than classical NNs. In this work, we study the generalization of QNNs and compare it with that of classical NNs. We prove that QNNs have a generalization error bound and derive its theoretical value. We also show that QNNs perform almost identically on the training and test datasets, without the overfitting phenomenon. To validate our proposal, we simulate three QNN models on two public datasets and compare them with a traditional network model. The results demonstrate that QNNs have ideal generalization, much better than classical NNs. Finally, we run the experiment on a quantum processor to confirm the simulation results.
KW - Classification
KW - Generalization
KW - Neural network
KW - Quantum neural networks
UR - https://www.scopus.com/pages/publications/85178398194
U2 - 10.1007/s11128-023-04095-x
DO - 10.1007/s11128-023-04095-x
M3 - Article
AN - SCOPUS:85178398194
SN - 1570-0755
VL - 22
JO - Quantum Information Processing
JF - Quantum Information Processing
IS - 12
M1 - 428
ER -