Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 31, Issue 3, Pages 1022-1035
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2019.2916597
Keywords
Quaternions; Optimization; Convex functions; Recurrent neural networks; Calculus; Biological neural networks; Collective intelligence; Constrained convex optimization; generalized gradient inclusion; generalized Hamilton-real (GHR) calculus; Lyapunov function
Funding
- National Natural Science Foundation of China [11671361, 61833005, 61573096, 61573102]
- China Postdoctoral Science Foundation [2015M580378, 2016T90406]
- National Training Programs of Innovation and Entrepreneurship [201610345020]
- Jiangsu Provincial Key Laboratory of Networked Collective Intelligence [BM2017002]
- Natural Science Foundation of Jiangsu Province of China [BK20170019]
- Zhejiang Provincial Natural Science Foundation of China [LD19A010001]
Abstract
This paper proposes a quaternion-valued one-layer recurrent neural network for solving constrained convex optimization problems with quaternion variables. Leveraging the novel generalized Hamilton-real (GHR) calculus, quaternion gradient-based optimization techniques are developed so that the algorithms are derived directly in the quaternion field, rather than by decomposing the optimization problems into the complex or real domain. Using chain rules and the Lyapunov theorem, a rigorous analysis shows that the deliberately designed quaternion-valued one-layer recurrent neural network stabilizes the system dynamics, drives the states into the feasible region in finite time, and ultimately converges to the optimal solution of the considered constrained convex optimization problems. Numerical simulations verify the theoretical results.
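The core idea the abstract describes, gradient-based dynamics that first reach a convex feasible region and then settle at the optimum, can be illustrated with a minimal projected gradient descent sketch. This is not the paper's network or the GHR-calculus derivation; it is a toy example under simplifying assumptions: quaternions are stored as real 4-vectors, the objective is the convex cost |q - t|^2, and the feasible region is the unit ball (the function and constraint names are hypothetical).

```python
import numpy as np

def project(q, radius=1.0):
    """Project a quaternion (as a real 4-vector) onto the convex feasible ball |q| <= radius."""
    n = np.linalg.norm(q)
    return q if n <= radius else q * (radius / n)

def solve(target, steps=50, lr=0.1):
    """Projected gradient descent for the convex cost f(q) = |q - target|^2 over |q| <= 1.

    The real-vector gradient 2*(q - target) is proportional to the GHR gradient
    for this cost, so the iteration direction matches the quaternion-field view.
    """
    q = np.zeros(4)
    for _ in range(steps):
        grad = 2.0 * (q - target)      # gradient of |q - target|^2
        q = project(q - lr * grad)     # descent step, then projection onto the feasible set
    return q

# Infeasible target outside the unit ball; the constrained optimum is its
# radial projection [1, 0, 0, 0].
target = np.array([2.0, 0.0, 0.0, 0.0])
q_star = solve(target)
```

The continuous-time analog of this iterate-then-project loop is what the paper's recurrent network realizes as a dynamical system: the projection term forces the state into the feasible region in finite time, after which the gradient flow of the convex cost carries it to the optimizer.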