Multicamera audio-visual analysis of dance figures

F. Ofli*, Y. Demir, E. Erzin, Y. Yemez, A. M. Tekalp

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding; Conference contribution; peer-reviewed

5 Citations (Scopus)

Abstract

We present an automated system for multicamera motion capture and audio-visual analysis of dance figures. The multiview video of a dancing actor is acquired using eight synchronized cameras. The motion capture technique is based on 3D tracking of markers attached to the dancer's body, using stereo color information without the need for an explicit 3D model. The resulting set of 3D points is then used to extract body motion features as 3D displacement vectors, whereas mel-frequency cepstral coefficients (MFCC) serve as the audio features. In the first stage of multimodal analysis, we perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of the audio and body motion features, separately, to determine the recurrent elementary audio and body motion patterns. In the second stage, we investigate the correlation of body motion patterns with audio patterns, which can be used for estimation and synthesis of realistic audio-driven body animation.
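
The following is a minimal illustrative sketch (not the authors' code) of the kind of HMM-based unsupervised temporal segmentation of MFCC audio features described in the first analysis stage. It assumes the librosa and hmmlearn libraries; the file name, sampling rate, MFCC dimension, and number of hidden states are hypothetical choices, not values reported in the paper.

```python
# Sketch: unsupervised temporal segmentation of audio via a Gaussian HMM.
# Assumptions: librosa and hmmlearn are available; "dance_audio.wav",
# 16 kHz sampling, 13 MFCCs, and 8 hidden states are illustrative only.
import numpy as np
import librosa
from hmmlearn import hmm

# Load the audio track of the dance recording (hypothetical path).
y, sr = librosa.load("dance_audio.wav", sr=16000)

# MFCC features: one 13-dimensional vector per analysis frame.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # shape: (frames, 13)

# Fit an ergodic Gaussian HMM; each hidden state is interpreted as a
# recurrent elementary audio pattern.
model = hmm.GaussianHMM(n_components=8, covariance_type="diag", n_iter=100)
model.fit(mfcc)

# Decode the most likely state sequence; contiguous runs of the same
# state yield the temporal segmentation into audio patterns.
states = model.predict(mfcc)
boundaries = np.flatnonzero(np.diff(states)) + 1
print("segment boundaries (frame indices):", boundaries)
```

An analogous segmentation could be run on the 3D displacement features, after which the co-occurrence of audio and motion state labels over time would give the pattern correlation studied in the second stage.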

Original language: English
Title of host publication: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007
Publisher: IEEE Computer Society
Pages: 1703-1706
Number of pages: 4
ISBN (Print): 1424410177, 9781424410170
DOIs
Publication status: Published - 2007
Externally published: Yes
Event: IEEE International Conference on Multimedia and Expo, ICME 2007 - Beijing, China
Duration: 2 Jul 2007 - 5 Jul 2007

Publication series

Name: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007

Conference

Conference: IEEE International Conference on Multimedia and Expo, ICME 2007
Country/Territory: China
City: Beijing
Period: 2/07/07 - 5/07/07
