A learning algorithm for feedforward neural networks should converge quickly, have low computational complexity, and be stable. The gradient algorithm drives the error function down rapidly in the initial stage of network learning, while Newton's method improves the convergence rate in the later stage and exhibits second-order convergence. Exploiting these complementary properties, a coupled gradient-Newton learning algorithm is proposed. The method brings out the respective strengths of the two algorithms and compensates for their weaknesses, namely the sensitivity of Newton's method to the initial weights in the early stage of learning and the oscillation of the gradient algorithm in the later stage. Four schemes for determining the learning parameters are given: online optimization of the learning-rate parameter, a safeguarded quasi-Newton method, a gradient-Newton competitive method, and a gradient-Newton segmented method.
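To make the coupling concrete, here is a minimal sketch of the gradient-Newton segmented idea: run gradient descent while the error gradient is still large, then switch to a Newton step near the optimum. This is an illustration under assumed conventions, not the authors' implementation; the function name `gradient_newton_train`, the switching threshold `switch_tol`, and the learning rate `eta` are all hypothetical.

```python
import numpy as np

def gradient_newton_train(w, grad, hess, eta=0.1, switch_tol=1e-2,
                          max_iter=1000, conv_tol=1e-8):
    """Illustrative gradient-Newton segmented scheme (hypothetical
    parameter names): gradient descent while the gradient norm is large,
    then a Newton step near the optimum for second-order convergence."""
    for _ in range(max_iter):
        g = grad(w)
        gnorm = np.linalg.norm(g)
        if gnorm < conv_tol:
            break  # converged
        if gnorm > switch_tol:
            # Early phase: plain gradient descent drives the error down fast
            # and is insensitive to the choice of initial weights.
            w = w - eta * g
        else:
            # Late phase: Newton step, avoiding the gradient method's
            # oscillation near the minimum.
            H = hess(w)
            try:
                step = np.linalg.solve(H, g)
            except np.linalg.LinAlgError:
                step = eta * g  # safeguard: fall back if H is singular
            w = w - step
    return w

# Toy usage: minimize the quadratic error E(w) = 0.5 * w^T A w - b^T w,
# whose gradient is A w - b and whose Hessian is the constant matrix A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
w_opt = gradient_newton_train(
    np.zeros(2),
    grad=lambda w: A @ w - b,
    hess=lambda w: A,
)
print(w_opt)  # approaches the solution of A w = b
```

The switching rule above stands in for whichever of the four parameter-determination schemes is used; for instance, the competitive scheme would instead evaluate both candidate steps at each iteration and keep the one that reduces the error more.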