Managing a large set of face images, whether in a biometric database, security and surveillance video, or a social networking site, presents unique challenges in automatic data extraction, fusion of many features, and effective user interfaces. Unlike traditional biometric recognition, where an image is used to search for a potential match, our goal in face search is to let users enter text queries and have the system return the most likely matches. We will further permit refinement of the query and, optionally, output of a 3D model of a generic face matching the query. This effort builds on the team's recent advances in efficient learning for automated face-feature extraction, indexing, and user interfaces, which led to the first-ever face-search engine. The effort will expand that prototype, adding features and addressing the critical question of multi-feature fusion needed to remain discriminative in larger databases. It will also define a Service-Oriented Architecture for systems integration. The system has already been demonstrated on more than 200,000 faces with a small set of features. The approach is designed for full scalability, and Phase I will extend testing to millions of faces, with more than 30 features computed per face and adaptive refinement of queries.
Keywords: Face Features, Biometrics, Face Database, Machine Learning, Surveillance, Face Synthesis, Face Recognition, User Interfaces