As a typical biometric cue with great diversity, the smile is a highly influential signal in social interaction, revealing the emotional feeling and inner state of a person. Spontaneous and posed smiles, initiated by different brain systems, differ in both morphology and dynamics. Distinguishing the two types of smiles remains challenging because discriminative subtle changes must be captured that are not easily observed by the human eye. Most previous work on spontaneous versus posed smile recognition concentrates on extracting geometric features, while appearance features are not fully exploited, leading to a loss of texture information. In this paper, we propose a region-specific texture descriptor to represent local pattern changes of different facial regions and compensate for the limitations of geometric features. The temporal phases of each facial region are segmented by computing the intensity of that region itself, rather than the intensity of the mouth region alone. A mid-level fusion strategy based on support vector machines is employed to combine the two feature types. Experimental results show that both our proposed appearance representation and its combination with geometry-based facial dynamics achieve favorable performance on four baseline databases: BBC, SPOS, MMI, and UvA-NEMO.
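The fusion idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: all weights, feature values, and the linear scorers (stand-ins for trained SVM decision functions) are hypothetical. It only shows the mid-level structure, where per-modality decision values form a new feature vector for a final combining classifier.

```python
# Hedged sketch of mid-level fusion of two feature types (geometric
# dynamics and region-specific texture). The linear scorers below are
# stand-ins for trained SVM decision functions; all weights and feature
# values are hypothetical, chosen only to make the example runnable.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def modality_score(w, b, x):
    # Decision value of a linear classifier (a proxy for an SVM margin).
    return dot(w, x) + b

def classify_smile(geom_feats, tex_feats):
    # Hypothetical pre-trained weights for each modality.
    w_geom, b_geom = [0.8, -0.3, 0.5], -0.1
    w_tex, b_tex = [0.2, 0.9], 0.0

    # Mid-level fusion: per-modality decision values become a new
    # two-dimensional feature vector for a final linear combiner.
    scores = [
        modality_score(w_geom, b_geom, geom_feats),
        modality_score(w_tex, b_tex, tex_feats),
    ]
    fused = modality_score([0.6, 0.4], 0.0, scores)
    return "spontaneous" if fused > 0 else "posed"

print(classify_smile([1.0, 0.2, 0.4], [0.5, 0.1]))  # → spontaneous
```

In practice, each per-modality scorer and the final combiner would be SVMs trained on labeled smile sequences; the sketch only conveys how decision values from the geometric and texture streams are merged before the final classification.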