Artificial neural networks (ANNs) have become important tools in many fields, such as pattern recognition, signal processing, and artificial intelligence, but many ANNs have defects. For example, the results of RBF networks depend heavily on the parameters of the basis functions, and when the gradient descent algorithm is used, the squared error often diverges if those parameters are not selected properly. This paper describes a new algorithm for training an RBF network. In function approximation, the algorithm has two main advantages: high accuracy and a stable learning process. In addition, it can serve as a good classifier in pattern recognition.
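To make the abstract's point concrete, here is a minimal sketch of a Gaussian RBF network (not the paper's algorithm). The centers and width of the basis functions are hand-picked, which is exactly the sensitivity the abstract describes; with those parameters fixed, the output weights can be found by linear least squares, avoiding the divergence risk of gradient descent.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian basis matrix: phi[i, j] = exp(-(x_i - c_j)^2 / (2 * width^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)                                  # target function to approximate

centers = np.linspace(0.0, 2.0 * np.pi, 10)   # hand-picked, evenly spaced centers
width = 0.8                                    # hand-picked basis-function width

phi = rbf_design(x, centers, width)
# With centers and width fixed, the output weights solve a linear
# least-squares problem, so the squared error cannot diverge the way
# gradient descent can with poorly chosen basis-function parameters.
w, *_ = np.linalg.lstsq(phi, y, rcond=None)

mse = np.mean((phi @ w - y) ** 2)
print(f"MSE: {mse:.2e}")
```

If the width or the number of centers is chosen badly (e.g. a width much smaller than the center spacing), the approximation degrades sharply, illustrating the parameter dependence the abstract highlights.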