A robust real-time face detection and tracking system using standard
PC-based hardware was successfully developed during the course of
this project [9, 10, 17, 18].
In particular, methods were developed for
- learning face appearance models and models for real-time visual
motion estimation and clustering [9, 10],
- learning Gaussian-mixture-based colour models both for tracking
skin-tone objects and for multi-colour foreground and background
segmentation and tracking [22, 23, 24, 25],
- learning an adaptive temporal colour model to cope with extreme
lighting changes [19, 20].
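As an illustrative sketch of the Gaussian-mixture colour modelling idea above (the models in [22, 23, 24, 25] were learned from real image data; the two-component mixture, the synthetic chromaticity samples and the minimal EM implementation below are assumptions made purely for illustration):

```python
import numpy as np

def gauss_pdf(X, mu, cov):
    """Multivariate Gaussian density evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)) / norm

def fit_gmm(X, k=2, iters=50, seed=0):
    """Fit a k-component Gaussian mixture to colour samples X (n, d) via EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]               # initial means
    cov = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * k)  # initial covariances
    w = np.full(k, 1.0 / k)                               # mixing weights
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = p(component j | sample i)
        r = np.stack([w[j] * gauss_pdf(X, mu[j], cov[j]) for j in range(k)],
                     axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and covariances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            cov[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return w, mu, cov

def skin_likelihood(X, w, mu, cov):
    """Mixture likelihood used to score pixels as skin-tone candidates."""
    return sum(w[j] * gauss_pdf(X, mu[j], cov[j]) for j in range(len(w)))
```

A pixel would then be labelled as skin when its likelihood under the skin mixture exceeds that under a background mixture (or a fixed threshold).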
Particular effort was focused on developing colour models that can be
both learned and adapted. Colour offers many advantages over geometric
and motion information in dynamic vision, such as robustness under
partial occlusion, rotation in depth, and changes in scale and image
resolution.
The main difficulty in modelling colour robustly is the colour
constancy problem, which arises from variations in measured colour
values caused by lighting changes. We addressed this problem by
employing colour adaptation over time. Data fusion of motion and
colour cues in object detection and tracking was used to achieve the
required consistency in face tracking [16]. The system performs face
detection and tracking with the following capabilities:
- real-time detection and tracking of moving faces in cluttered
scenes,
- robust tracking of multiple moving faces,
- robust tracking under changes in lighting, scale and image resolution,
- robust tracking under ``facial distortions'' such as changes of
spectacles, facial hair and hair style.
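The adaptive temporal colour modelling mentioned above can be illustrated, under assumptions, as a recursive update in which the colour model at each frame is a weighted blend of the previous model and statistics estimated from the currently tracked pixels; the forgetting factor `alpha` below is a hypothetical value, not one taken from [19, 20]:

```python
import numpy as np

def adapt_colour_model(mu_prev, cov_prev, frame_pixels, alpha=0.1):
    """One adaptation step: blend the previous Gaussian colour model with
    the statistics of the pixels tracked in the current frame.

    alpha controls how quickly the model forgets old lighting conditions:
    alpha = 0 freezes the model, alpha = 1 uses only the current frame.
    """
    mu_new = frame_pixels.mean(axis=0)   # current-frame mean colour
    cov_new = np.cov(frame_pixels.T)     # current-frame colour covariance
    mu = (1.0 - alpha) * mu_prev + alpha * mu_new
    cov = (1.0 - alpha) * cov_prev + alpha * cov_new
    return mu, cov
```

Applied over a sequence of frames, repeated updates let the model track slowly drifting illumination while filtering out single-frame noise.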
Related to face detection and tracking, we also addressed real-time
head pose estimation [14], which is important for tracking moving
faces across views. We introduced a composite Gabor wavelet transform
as a representation scheme for capturing pose changes, and derived a
pose eigenspace based on principal components analysis to represent
and interpret the distribution of pose changes over continuous
sequences of face rotation in depth [5, 18].
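The pose-eigenspace construction can be sketched as follows; here PCA is computed with a plain SVD on hypothetical feature vectors standing in for the composite Gabor wavelet responses of [14], so the feature dimensionality and data are illustrative assumptions:

```python
import numpy as np

def pose_eigenspace(F, n_components=2):
    """Build a pose eigenspace from feature vectors F (one row per frame).

    Returns the mean feature vector, the leading principal components,
    and the projection of each frame into the eigenspace, where a smooth
    head rotation in depth traces a smooth trajectory.
    """
    mean = F.mean(axis=0)
    # SVD of the centred data gives the principal directions in Vt
    U, S, Vt = np.linalg.svd(F - mean, full_matrices=False)
    basis = Vt[:n_components]            # principal directions
    coords = (F - mean) @ basis.T        # per-frame eigenspace coordinates
    return mean, basis, coords
```

New frames are projected onto the same basis, and their position along the trajectory is then interpreted as the current head pose.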