Vehicle visual tracking is commonly affected by complex backgrounds, illumination changes, scale variation, and rotation, and existing tracking algorithms cope poorly with such disturbances and lack robustness. To address this problem, a vehicle target tracking algorithm based on sparsity constraints and deep learning is constructed. A denoising autoencoder neural network extracts features from a training set containing positive and negative samples. During forward propagation, a sparsity constraint is imposed on the hidden layers; during back-propagation fine-tuning, the connection matrices are sparsely adjusted through weight decay. This increases the robustness of the neural network and enables efficient extraction of features at the different hidden layers. The network output is fed to a logistic classifier to learn a vehicle classifier, and a particle filter is used to track the target online. Experimental results show that the denoising autoencoder with sparsity constraints on the connection matrices and hidden layers achieves high tracking accuracy and strong robustness, tracking the target well under drastic illumination changes, occlusion, three-dimensional rotation, scale variation, and fast motion. Its average center location error of only 2.3 pixels is far smaller than that of the compared methods: incremental visual tracking (IVT), online AdaBoost (OAB) tracking, and multiple instance learning (MIL) tracking yield average center location errors of 17.52, 28.76, and 17.66 pixels, respectively. The average overlap rate of the proposed method reaches 83%, which is 24.5%, 42.2%, and 28.8% higher than IVT, MIL, and OAB tracking, respectively, meeting the practical requirements of intelligent traffic surveillance.
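As a rough illustration of the feature-learning step described above, the following is a minimal NumPy sketch of the loss for one sparsity-constrained denoising autoencoder layer, assuming a standard formulation: a KL-divergence sparsity penalty on the average hidden-unit activations plus L2 weight decay on the connection matrices. The layer sizes, corruption level, and coefficients (rho, beta, lam) are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of one sparsity-constrained denoising autoencoder layer.
# Assumes the standard formulation (KL-divergence sparsity penalty on the
# hidden activations plus L2 weight decay on the connection matrices); the
# paper's exact layer sizes and hyperparameters are not specified here.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_loss(W1, b1, W2, b2, x, noise_std=0.1, rho=0.05, beta=3.0, lam=1e-4):
    """Reconstruction loss + sparsity penalty + weight decay for one layer.

    x: (n_samples, n_features) mini-batch of image patches, values in [0, 1].
    rho: target average activation of each hidden unit (sparsity level).
    beta: weight of the KL-divergence sparsity term.
    lam: weight-decay coefficient applied to the connection matrices.
    """
    # Corrupt the input (denoising criterion), then encode and decode.
    x_noisy = x + noise_std * rng.standard_normal(x.shape)
    h = sigmoid(x_noisy @ W1 + b1)      # hidden-layer features
    x_hat = sigmoid(h @ W2 + b2)        # reconstruction of the clean input

    recon = 0.5 * np.mean(np.sum((x_hat - x) ** 2, axis=1))

    # KL-divergence sparsity constraint on the average hidden activation.
    rho_hat = np.clip(h.mean(axis=0), 1e-6, 1 - 1e-6)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    # L2 weight decay on the connection matrices (back-propagation
    # fine-tuning then shrinks the weights toward a sparser solution).
    decay = 0.5 * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

    return recon + beta * kl + lam * decay

# Example: 100 random 16x16 patches, 64 hidden units.
n_vis, n_hid = 256, 64
W1 = 0.01 * rng.standard_normal((n_vis, n_hid)); b1 = np.zeros(n_hid)
W2 = 0.01 * rng.standard_normal((n_hid, n_vis)); b2 = np.zeros(n_vis)
x = rng.random((100, n_vis))
print(dae_loss(W1, b1, W2, b2, x))
```

In this kind of pipeline the hidden activations h would, after layer-wise pre-training and fine-tuning, be passed to a logistic classifier that scores candidate regions, and a particle filter would select the highest-scoring candidate as the tracked vehicle in each frame.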