Estimation and analysis of facial animation parameter patterns

Ferda Ofli*, Engin Erzin, Yucel Yemez, A. Murat Tekalp

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)

Abstract

We propose a framework for estimation and analysis of temporal facial expression patterns of a speaker. The proposed system aims to learn personalized elementary dynamic facial expression patterns for a particular speaker. We use head-and-shoulder stereo video sequences to track lip, eye, eyebrow, and eyelid motion of a speaker in 3D. MPEG-4 Facial Definition Parameters (FDPs) are used as the feature set, and temporal facial expression patterns are represented by the MPEG-4 Facial Animation Parameters (FAPs). We perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of upper and lower facial expression features separately to determine recurrent elementary facial expression patterns for a particular speaker. These facial expression patterns, coded by FAP sequences and not necessarily tied to prespecified emotions, can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.
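As a rough illustration of the unsupervised segmentation step described in the abstract, the sketch below fits a Gaussian HMM to a FAP-style feature stream and collapses the decoded state sequence into temporal segments, each hidden state standing in for one recurrent elementary pattern. This is not the authors' implementation: the hmmlearn library, the synthetic data, the feature dimensionality, and the number of states are all assumptions for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's code): unsupervised temporal
# segmentation of a facial feature stream with a Gaussian HMM.
import numpy as np
from hmmlearn import hmm

# Synthetic stand-in for a lower-face FAP trajectory:
# T frames x D parameters (e.g., lip-related FAPs). Upper-face features
# would be segmented separately with a second model, as in the paper.
T, D = 600, 10
rng = np.random.default_rng(0)
fap_sequence = rng.normal(size=(T, D))

# Fit an HMM without labels; each hidden state is interpreted as one
# recurrent elementary expression pattern.
n_patterns = 5  # assumed number of elementary patterns
model = hmm.GaussianHMM(n_components=n_patterns, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(fap_sequence)

# Viterbi decoding yields the temporal segmentation: one state label per frame.
state_per_frame = model.predict(fap_sequence)

# Collapse runs of identical states into (pattern, start_frame, end_frame).
segments = []
start = 0
for t in range(1, T + 1):
    if t == T or state_per_frame[t] != state_per_frame[start]:
        segments.append((int(state_per_frame[start]), start, t - 1))
        start = t
print(segments[:10])
```

In practice the frame-wise features would come from the 3D FDP tracking stage rather than random noise, and model selection (the number of states) would be driven by the data.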

Original language: English
Title of host publication: 2007 IEEE International Conference on Image Processing, ICIP 2007 Proceedings
Publisher: IEEE Computer Society
Pages: 293-296
Number of pages: 4
ISBN (Print): 1424414377, 9781424414376
DOIs
Publication status: Published - 2007
Externally published: Yes
Event: 14th IEEE International Conference on Image Processing, ICIP 2007 - San Antonio, TX, United States
Duration: 16 Sept 2007 - 19 Sept 2007

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 4
ISSN (Print): 1522-4880

Conference

Conference: 14th IEEE International Conference on Image Processing, ICIP 2007
Country/Territory: United States
City: San Antonio, TX
Period: 16/09/07 - 19/09/07

Keywords

  • Dynamic facial expression analysis
  • Temporal patterns
