Objective To determine the optimal hidden-layer structure of an artificial neural network trained with the back-propagation (BP) algorithm for function approximation. Methods Taking a typical n-input, single-output multilayer BP network as an example, typical continuous functions were approximated under several different hidden-layer structures, and the global output error of each network was analyzed. Results A BP network with four hidden layers showed the best learning convergence and function-approximation accuracy; each hidden layer should contain between 10 and 20 units. The single-hidden-layer network converged worst. Conclusions The optimal BP neural network for function approximation is a multilayer network with about four hidden layers, each containing a moderate number of units.
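The architecture reported above can be sketched in plain NumPy: a network with four hidden layers of 15 tanh units each (within the 10-20 range the abstract recommends) trained by gradient descent to approximate a continuous function. The target function sin(πx), the learning rate, the unit count of 15, and the tanh activation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1 input -> four hidden layers of 15 units -> 1 output, matching the
# abstract's finding (about four hidden layers, 10-20 units per layer).
layer_sizes = [1, 15, 15, 15, 15, 1]

def init_params(sizes):
    """Small random weights; one (W, b) pair per layer."""
    return [(rng.normal(0.0, np.sqrt(1.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Return the activations of every layer; tanh hidden units, linear output."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(z if i == len(params) - 1 else np.tanh(z))
    return acts

def backprop_step(params, x, y, lr=0.05):
    """One gradient-descent update on the (half) mean-squared error."""
    acts = forward(params, x)
    delta = (acts[-1] - y) / len(x)            # dE/dz at the linear output
    grads = []
    for i in range(len(params) - 1, -1, -1):
        W, _ = params[i]
        grads.append((acts[i].T @ delta, delta.sum(axis=0)))
        if i > 0:
            delta = (delta @ W.T) * (1.0 - acts[i] ** 2)  # tanh'(z) = 1 - tanh(z)^2
    grads.reverse()
    return [(W - lr * gW, b - lr * gb)
            for (W, b), (gW, gb) in zip(params, grads)]

# Approximate sin(pi * x) on [-1, 1] (an illustrative continuous target).
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * x)
params = init_params(layer_sizes)
for _ in range(5000):
    params = backprop_step(params, x, y)

mse = float(np.mean((forward(params, x)[-1] - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

Varying `layer_sizes` (e.g. a single hidden layer versus four) and comparing the final error is the kind of experiment the abstract summarizes; this sketch fixes one configuration only to keep the example short.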