Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move freely around the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance, and head pose can be very expensive. Instead, this work proposes a transfer learning approach. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions vary. The adaptation framework accounts for the reliability of the different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state of the art by 9.5% under relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic, multi-view head-pose dataset with ground truth, publicly available with this paper.
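To make the weighted-distance idea concrete, the following is a minimal, purely illustrative sketch (not the paper's actual method): a face descriptor is split into per-region features, each region carries a reliability weight, and the query pose is taken from the nearest training example under the weighted distance. Adapting the weights to a new target position would then amount to re-scaling the per-region weights. All names, descriptors, and weight values here are hypothetical.

```python
# Hypothetical sketch of weighted-distance nearest-neighbour head-pose
# estimation. Descriptors are tuples of per-region features; each region
# has a reliability weight. All data below is illustrative, not from DPOSE.

def weighted_distance(x, y, weights):
    """Sum of per-region squared differences, scaled by region weights."""
    return sum(w * (a - b) ** 2 for a, b, w in zip(x, y, weights))

def estimate_pose(query, train_set, weights):
    """Return the pose label of the training example nearest to the query."""
    best = min(train_set, key=lambda ex: weighted_distance(query, ex[0], weights))
    return best[1]

# Toy training set: (per-region descriptor, pose label).
train = [
    ((0.9, 0.1, 0.2), "frontal"),
    ((0.1, 0.8, 0.3), "left"),
    ((0.2, 0.2, 0.9), "right"),
]

# Adapted weights down-weight regions assumed less reliable at the
# target's new position (e.g. blurred or foreshortened regions).
weights = (1.0, 0.5, 0.25)

print(estimate_pose((0.85, 0.15, 0.25), train, weights))  # frontal
```

In the paper's setting the weights are learned at a fixed reference position and then adapted, rather than hand-set as in this toy example.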
|Title:||An Adaptation Framework for Head Pose Estimation in Dynamic Multi-view Scenarios|
|Publication date:||2012|
|Appears in collections:||4.1 Contribution in Conference Proceedings|