CNN pruning is a well-established technique for optimizing neural network performance, most notably by reducing a network's computational cost. Current pruning approaches follow a fixed paradigm: first train a large, redundant network, then determine which units contribute least and remove them. CNNs now underpin a wide range of applications, including image recognition, pattern discovery in datasets, object classification in photographs, character-level text generation, and self-driving cars, and these applications benefit from such optimization to improve performance. This dissertation proposes and empirically evaluates a method that prunes neurons in layer patterns based on neuron decisions, treated as pre-conditions that imply a desirable output property such as network accuracy. The method combines post-pruning re-initialization with random weights, a search for the optimum number of neurons in each layer, and an assessment of each neuron's significance through its activation status ("on" or "off"). The method is evaluated on the MNIST dataset, using the 10,000-element test set during CNN refactoring, with a learning rate of 0.01 and a momentum of 0.05. The evaluation achieves 97% accuracy with 600 neurons per layer; the gap between the minimum and maximum performance values is 6.5%, and the average time per iteration is 25.99 seconds. It is concluded that the optimum number of neurons must be considered when assessing neural network performance, that extensive pre-training may be unnecessary for pruning algorithms to achieve higher accuracy under the same computation strategy, and that the appropriate number of neurons in a given layer varies with the model and dataset used.
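The abstract does not give implementation details, but its core loop (score each neuron as "on" or "off" by activation, prune the "off" neurons, then re-initialize with random weights) can be sketched concretely. The following is a minimal sketch under stated assumptions, not the thesis's actual implementation: it assumes PyTorch, a single fully connected hidden layer (since the abstract reports neuron counts per layer, e.g. 600), and a mean-ReLU-activation threshold as the "on"/"off" criterion. The names TwoLayerNet, activation_mask, prune_and_reinit, and the threshold parameter are all hypothetical.

```python
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    """Hypothetical stand-in for one prunable hidden layer."""
    def __init__(self, hidden=600):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, hidden)   # MNIST images are 28x28
        self.fc2 = nn.Linear(hidden, 10)        # 10 digit classes

    def forward(self, x):
        h = torch.relu(self.fc1(x.flatten(1)))
        return self.fc2(h), h                   # expose activations for pruning

def activation_mask(model, loader, threshold=0.0, device="cpu"):
    """Mark a neuron 'on' if its mean ReLU activation over the data
    exceeds `threshold`; neurons left 'off' are pruning candidates."""
    total = torch.zeros(model.fc1.out_features, device=device)
    batches = 0
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            _, h = model(x.to(device))
            total += h.mean(dim=0)
            batches += 1
    return (total / batches) > threshold

def prune_and_reinit(mask):
    """Keep only the 'on' neurons, then give the smaller network fresh
    random weights (one reading of post-pruning re-initialization)."""
    pruned = TwoLayerNet(hidden=int(mask.sum()))
    for layer in (pruned.fc1, pruned.fc2):
        nn.init.kaiming_normal_(layer.weight)   # random weight initialization
        nn.init.zeros_(layer.bias)
    return pruned
```

Retraining the pruned network would then use the hyper-parameters reported in the abstract, e.g. torch.optim.SGD(pruned.parameters(), lr=0.01, momentum=0.05) on an MNIST DataLoader. Note the design choice in prune_and_reinit: because weights are randomly re-initialized anyway, only the surviving neuron count is inherited from the mask, which matches the abstract's pairing of pruning with random re-initialization rather than weight transfer.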