Traditional incremental support vector machine (SVM) algorithms have two drawbacks: because each retraining step is based on a local optimum, non-support-vector samples that may carry hidden information can be discarded, and every newly arrived sample must be included in retraining. This paper proposes an SVM incremental learning algorithm based on the KKT conditions and shell vectors. The method uses the properties of shell vectors to retain those non-support vectors in the training set that may carry implicit information, and adds only the incremental samples that violate the KKT conditions to the new training set, which improves computational efficiency. Experiments on the public data sets Abalone and Balance Scale show that the new algorithm gives a more pronounced improvement in classification accuracy on data sets with more attribute columns.
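To make the procedure concrete, the following is a minimal sketch (not the authors' implementation) of the two ideas described above: retaining the shell (convex-hull) vectors of each class alongside the support vectors, and retraining only on incremental samples that violate the KKT conditions of the current classifier. It assumes a linear kernel, scikit-learn's SVC, and SciPy's ConvexHull; all function and variable names are illustrative.

    # Sketch of KKT + shell-vector incremental SVM training (assumptions noted above).
    import numpy as np
    from scipy.spatial import ConvexHull
    from sklearn.svm import SVC

    def shell_vectors(X, y):
        """Return indices of samples lying on the convex hull of each class."""
        keep = []
        for label in np.unique(y):
            idx = np.where(y == label)[0]
            if len(idx) <= X.shape[1] + 1:      # too few points to build a hull
                keep.extend(idx)
            else:
                hull = ConvexHull(X[idx])
                keep.extend(idx[hull.vertices])
        return np.array(sorted(set(keep)))

    def kkt_violators(model, X_new, y_new):
        """Indices of new samples with y * f(x) < 1, i.e. samples that violate
        the KKT conditions of the current classifier and must be retrained on."""
        margins = y_new * model.decision_function(X_new)
        return np.where(margins < 1.0)[0]

    def incremental_fit(model, X_old, y_old, X_new, y_new):
        # Keep the support vectors plus the shell vectors of the old set;
        # shell vectors are the non-support vectors most likely to become
        # support vectors once new data arrive.
        keep = np.union1d(model.support_, shell_vectors(X_old, y_old))
        # Only KKT-violating incremental samples enter the new training set.
        viol = kkt_violators(model, X_new, y_new)
        X_train = np.vstack([X_old[keep], X_new[viol]])
        y_train = np.concatenate([y_old[keep], y_new[viol]])
        return SVC(kernel="linear", C=1.0).fit(X_train, y_train)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X0 = rng.normal(size=(200, 2)) + np.array([[2.0, 2.0]])
        X1 = rng.normal(size=(200, 2)) - np.array([[2.0, 2.0]])
        X = np.vstack([X0, X1])
        y = np.array([1] * 200 + [-1] * 200)
        base = SVC(kernel="linear", C=1.0).fit(X[:300], y[:300])
        updated = incremental_fit(base, X[:300], y[:300], X[300:], y[300:])
        print("support vectors per class after update:", updated.n_support_)

Note that computing the exact convex hull becomes expensive as the number of attribute columns grows, so a practical implementation would typically approximate the shell-vector set; the exact hull is used here only for clarity of the sketch.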