TY - GEN
T1 - Joint correlation analysis of audio-visual dance data
AU - Ofli, F.
AU - Demir, Y.
AU - Erzin, E.
AU - Yemez, Y.
AU - Tekalp, A. M.
PY - 2007
Y1 - 2007
N2 - In this paper, we present a framework for the analysis of dance figures from audio-visual data. Our audio-visual data is the multiview video of a dancing actor, acquired with 8 synchronized cameras. The multi-camera motion capture technique of this framework is based on 3D tracking of markers attached to the dancer's body, using stereo color information. The extracted 3D points are used to compute the body motion features as 3D displacement vectors, while mel-frequency cepstral (MFC) coefficients serve as the audio features. In the first stage of the two-stage analysis task, we perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of the audio and body motion features separately, to extract the recurrent elementary audio and body motion patterns. In the second stage, the correlation of the body motion patterns with the audio patterns is investigated to build a correlation model that can be used during the synthesis of audio-driven body animation.
UR - http://www.scopus.com/inward/record.url?scp=50249167786&partnerID=8YFLogxK
U2 - 10.1109/SIU.2007.4298617
DO - 10.1109/SIU.2007.4298617
M3 - Conference contribution
AN - SCOPUS:50249167786
SN - 1424407192
SN - 9781424407194
T3 - 2007 IEEE 15th Signal Processing and Communications Applications, SIU
BT - 2007 IEEE 15th Signal Processing and Communications Applications, SIU
PB - IEEE Computer Society
T2 - 2007 IEEE 15th Signal Processing and Communications Applications, SIU
Y2 - 11 June 2007 through 13 June 2007
ER -