A linearized layer-by-layer optimization training algorithm for the MLP (LOLL) is proposed. LOLL trains the connection weights of the MLP layer by layer in a cyclic fashion. When training the weights of a layer, the nonlinear activation function of each neuron is represented by its first-order Taylor series, linearizing the network and turning the MLP training problem into a linear one. To ensure that the linearization condition is not violated, LOLL modifies the network's error function by including part of the linearization error as a term that limits how far the parameters may change in each update. Experimental results show that the LOLL training algorithm is 4 times faster than the conventional BP algorithm, and a nonlinear speech-signal predictor built with it achieves good prediction performance.
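The core idea — a first-order Taylor expansion of the activation function turning one layer's weight update into a linear problem, with a penalty that keeps the update inside the region where the linearization holds — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the tanh activation, the single-output layer, and the quadratic penalty lam * ||w_new - w||^2 (standing in for the paper's linearization-error term) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # inputs to the layer
t = np.tanh(X @ rng.normal(size=8))    # synthetic targets for the layer's output
w = 0.1 * rng.normal(size=8)           # current weights (the linearization point)

# Assumed penalty weight: restrains ||w_new - w|| so that the first-order
# Taylor linearization of the activation stays approximately valid.
lam = 1.0
for _ in range(20):
    z0 = X @ w                         # pre-activations at the current weights
    a = np.tanh(z0)                    # f(z0)
    g = 1.0 - a ** 2                   # f'(z0) for tanh
    # First-order Taylor: f(X @ w_new) ≈ a + g * (X @ w_new - z0),
    # so matching the targets t becomes the linear system A @ w_new ≈ b.
    A = g[:, None] * X
    b = t - a + g * z0
    # Damped linear least squares (ridge toward the current w):
    #   minimize ||A @ w_new - b||^2 + lam * ||w_new - w||^2
    H = A.T @ A + lam * np.eye(X.shape[1])
    w = np.linalg.solve(H, A.T @ b + lam * w)

mse = float(np.mean((np.tanh(X @ w) - t) ** 2))
```

Each pass solves a small linear system instead of running gradient descent through the nonlinearity, which is where the speed advantage over BP comes from; the penalty term plays the role the abstract attributes to the modified error function, keeping each step small enough that the linearized model remains a faithful local approximation.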