Newton’s learning algorithm for neural networks (NN) is presented and realized. In theory, a learning algorithm based on Newton’s method must converge faster than BP and other gradient-based learning algorithms, because the gradient method is only linearly convergent while Newton’s method has a second-order convergence rate. A fast algorithm for computing the Hessian matrix of the NN cost function is proposed, and it is the theoretical basis of the improved Newton’s learning algorithm. Simulation results show that the convergence rate of Newton’s learning algorithm is high and clearly faster than that of the traditional BP method, and that the robustness of Newton’s learning algorithm is also better than the BP method’s.
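The convergence contrast claimed above can be illustrated on the simplest possible case. The following is a minimal sketch, not the paper’s actual algorithm: it compares plain gradient descent with Newton’s update w ← w − H⁻¹g on the least-squares cost E(w) = ½ Σᵢ (w·xᵢ − yᵢ)² of a single linear neuron, where the cost is quadratic so Newton’s method reaches the minimizer in one step while gradient descent only shrinks the error by a constant factor per iteration. The data and step size here are illustrative choices.

```python
# Least-squares cost of a single linear neuron y_hat = w * x,
# with data generated by the true weight w* = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

def grad(w):
    """dE/dw = sum_i (w*x_i - y_i) * x_i."""
    return sum((w * x - y) * x for x, y in zip(xs, ys))

def hess(w):
    """d2E/dw2 = sum_i x_i^2 (constant, since E is quadratic in w)."""
    return sum(x * x for x in xs)

# Newton's method: w <- w - H^{-1} * g.
# On a quadratic cost this lands exactly on the minimizer in one step.
w_newton = 0.0
w_newton -= grad(w_newton) / hess(w_newton)

# Gradient descent: w <- w - eta * g.
# The error contracts by the constant factor (1 - eta*H) per step,
# i.e. only linear convergence; after 10 steps it is still nonzero.
w_gd, eta = 0.0, 0.05
for _ in range(10):
    w_gd -= eta * grad(w_gd)
```

For a multilayer network the scalar second derivative above becomes the full Hessian matrix of the cost with respect to all weights, which is why a fast Hessian computation (as the paper proposes) is the practical prerequisite for Newton-type training.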