Self-Adaptive Layer: An Application of Function Approximation Theory to Enhance Convergence Efficiency in Neural Networks

Research output: Conference contribution · Peer-reviewed

3 citations (Scopus)

Abstract

Neural networks provide a general architecture for modeling complex nonlinear systems, but the source data are often contaminated with noise and interfering information. One common remedy, which offers a smoother alternative during training, is to increase the number of neurons or layers. In this paper, a new self-adaptive layer is developed to overcome these problems, achieving faster convergence and avoiding local minima. We incorporate function approximation theory into the arrangement of the layer's elements, so that the training process and the network's approximation properties can be investigated via linear algebra, and the precision of adaptation can be controlled by the order of the polynomials used. Experimental results show that the proposed layer converges significantly faster and thus greatly enhances training accuracy. Moreover, the design and implementation can be easily deployed in most current systems.
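To make the core idea concrete, here is a minimal sketch of a polynomial-basis layer in the spirit the abstract describes: each unit's output is a linear combination of polynomial basis terms, so the coefficients can be fitted by linear algebra, and the polynomial order controls approximation precision. All names (`polynomial_layer`, the target function, the order) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def polynomial_layer(x, coeffs):
    """Evaluate a polynomial layer: the output is a linear combination of
    the basis terms 1, x, x^2, ..., x^d, with trainable coefficients.
    (Hypothetical sketch; not the paper's actual layer.)"""
    d = len(coeffs) - 1
    basis = np.vander(x, d + 1, increasing=True)  # columns: x^0 .. x^d
    return basis @ coeffs

# Fit the coefficients via least squares -- a linear-algebra problem,
# as the abstract notes -- to approximate a smooth nonlinearity.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
target = np.tanh(x)                           # nonlinearity to approximate
order = 5                                     # precision grows with the order
A = np.vander(x, order + 1, increasing=True)
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)

# Maximum approximation error over the sampled inputs
err = np.max(np.abs(polynomial_layer(x, coeffs) - target))
```

Because the layer is linear in its coefficients, the fit reduces to an ordinary least-squares problem, which is what allows the training behavior to be analyzed with standard linear algebra; raising `order` tightens the approximation at the cost of more coefficients.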

Original language: English
Host publication title: 34th International Conference on Information Networking, ICOIN 2020
Publisher: IEEE Computer Society
Pages: 447-452
Number of pages: 6
ISBN (electronic): 9781728141985
DOIs
Publication status: Published - January 2020
Event: 34th International Conference on Information Networking, ICOIN 2020 - Barcelona, Spain
Duration: 7 January 2020 – 10 January 2020

Publication series

Name: International Conference on Information Networking
2020-January
ISSN (print): 1976-7684

Conference

Conference: 34th International Conference on Information Networking, ICOIN 2020
Country/Territory: Spain
City: Barcelona
Period: 7/01/20 – 10/01/20
