This paper proposes a lip-motion feature extraction and recognition method based on chroma filtering. Chroma filtering of the lip region yields an enhanced lip-motion image; a deformable template then describes the mouth contour and extracts feature parameters, and a hidden Markov model (HMM) recognizes the lip-motion image sequence. The method is robust, places no stringent demands on illumination, and is speaker-independent, making it suitable for practical use under natural conditions; it also relaxes the deformable template's usual requirement for high-resolution target edges, which makes the approach more practical. The experiments use purely visual information (no audio-channel information): with no speech input and a test set limited to single vowels (simple finals), the recognition rate reaches 95.8%.
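The pipeline described above (chroma enhancement of the lip region, followed by HMM scoring of the resulting feature sequence) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pseudo-hue map `R/(R+G)` is a common stand-in for a lip chroma filter (lips are redder than surrounding skin), and `forward_log_prob` is the standard HMM forward algorithm over quantized template features; the function names and the discretization are assumptions for the sketch.

```python
import numpy as np

def chroma_filter(rgb):
    """Enhance lip pixels with a pseudo-hue map (red dominance over green).

    Lips have higher R/(R+G) than surrounding skin, so thresholding this
    map isolates the mouth region largely independently of brightness.
    (Illustrative stand-in for the paper's chroma filter.)
    """
    rgb = rgb.astype(np.float64)
    r, g = rgb[..., 0], rgb[..., 1]
    return r / (r + g + 1e-8)  # in [0, 1]; higher on the lip region

def forward_log_prob(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space.

    obs    : sequence of symbol indices (quantized mouth-shape features)
    log_pi : (N,)   log initial state probabilities
    log_A  : (N, N) log transition matrix
    log_B  : (N, M) log emission matrix
    Recognition picks the vowel whose HMM gives the highest score.
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)
```

In use, the deformable-template parameters (e.g. mouth width and opening height per frame) would be vector-quantized into the symbol alphabet, one HMM would be trained per vowel, and the sequence is labeled by the arg-max of `forward_log_prob` across the vowel models.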