Under the premise of adequate learning accuracy, a neural network should use as few neurons as possible (structural sparsification), so as to reduce cost and improve robustness and generalization accuracy. This paper studies the structural sparsification of feedforward neural networks by means of regularization. Besides the traditional L1 regularization used for sparsification, we mainly adopt the L1/2 regularization that has become popular in recent years. To overcome the problem that the L1/2 regularizer is non-smooth and therefore tends to cause oscillation in the iterative training process, we apply a smoothing technique in a small neighborhood of the non-smooth point to construct a smoothed L1/2 regularizer, aiming at a higher sparsification efficiency than L1 regularization. This paper surveys the authors' recent work on L1/2 regularization for sparsifying neural networks, covering BP feedforward neural networks, higher-order neural networks, double parallel feedforward neural networks, and the Takagi-Sugeno fuzzy model.
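For illustration, the smoothing idea can be sketched as follows. This sketch uses a generic C^2 piecewise-polynomial smoothing with an assumed half-width a > 0; the exact construction in the surveyed papers may differ. The standard L1/2 penalty on a weight vector w = (w_1, ..., w_n) is
\[
\lambda \sum_{i=1}^{n} |w_i|^{1/2},
\]
which is non-differentiable at w_i = 0. One way to smooth it is to replace |t| on the small interval (-a, a) by a polynomial that matches |t| and its first derivative at t = \pm a:
\[
f(t) =
\begin{cases}
|t|, & |t| \ge a,\\[4pt]
-\dfrac{t^4}{8a^3} + \dfrac{3t^2}{4a} + \dfrac{3a}{8}, & |t| < a.
\end{cases}
\]
Since f(t) \ge 3a/8 > 0 on (-a, a), the smoothed regularizer \(\lambda \sum_i f(w_i)^{1/2}\) is differentiable everywhere; gradient-based training then avoids the oscillation caused by the non-smooth point at the origin, while the sparsity-promoting behavior of the L1/2 penalty is preserved outside the smoothed neighborhood.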