PUT Face Database
PUT Face Database Description
The human face is a highly non-rigid, complex 3D object. Its appearance is affected by many factors, such as pose variations, illumination changes, occlusions, and structural disturbances (e.g. make-up, glasses, facial hair). To devise a face recognition algorithm robust to these factors, one needs a database of significant size and diversity. Common, publicly available databases are required to provide test material for rigorous benchmarking of proposed algorithms. To meet that need we have created the PUT Face Database.
Our main goal was to provide credible data for the systematic performance evaluation of face localization, feature extraction, and recognition algorithms. To obtain statistically significant results, we gathered 9971 images of 100 people. As we focus on the development of validation algorithms, images were taken under partially controlled illumination conditions against a uniform background. The main source of face appearance variation was the change in head pose.
Images of each person were taken in five series:
- neutral face expression with the head turning from left to right (approx. 30 images)
- neutral face expression with the head nodding from the raised to the lowered position (approx. 20 images)
- neutral face expression with the raised head turning from left to right (approx. 20 images)
- neutral face expression with the lowered head turning from left to right (approx. 20 images)
- no constraints regarding the face pose or expression, some images with glasses (approx. 10 images)
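As a quick sanity check, the approximate per-series counts above are consistent with the stated database size; the sketch below uses illustrative series labels, not official identifiers from the database:

```python
# Approximate number of images per acquisition series (labels are illustrative)
series_counts = {
    "turn_left_right": 30,  # neutral expression, head turning left to right
    "nod_up_down": 20,      # neutral expression, head nodding
    "raised_turn": 20,      # raised head turning left to right
    "lowered_turn": 20,     # lowered head turning left to right
    "unconstrained": 10,    # free pose/expression, some with glasses
}

per_person = sum(series_counts.values())  # about 100 images per person
total = 100 * per_person                  # about 10000; the actual total is 9971
print(per_person, total)
```

The small gap between the rounded estimate (10000) and the actual count (9971) reflects the "approx." per-series figures.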
To facilitate the development and testing of algorithms, additional data is provided. All images are supplied with manually annotated rectangles containing the face, the eyes (left and right separately), the nose, and the mouth. Additionally, a set of 30 landmarks corresponding to the positions of facial features, such as eye corners, was defined. Each image comes with a file containing the positions of all visible landmarks. Finally, 194 control points defining a contour model were manually annotated on a subset of 2193 near-frontal images. These points form polylines representing:
- a face outline (41 points),
- a nose outline (17 points),
- eye outlines (20 points each),
- eyebrow outlines (20 points each),
- mouth outlines (inner and outer, 28 points each).
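For illustration, the per-image annotations described above could be held in a structure like the following; the class, field names, and contour labels are assumptions about how one might organize the data, not the database's actual file format:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[float, float]       # (x, y) image coordinates
Rect = Tuple[int, int, int, int]  # (x, y, width, height)

@dataclass
class FaceAnnotation:
    # Manually annotated rectangles: face, left eye, right eye, nose, mouth
    rects: Dict[str, Rect] = field(default_factory=dict)
    # Up to 30 facial-feature landmarks; only visible ones are stored
    landmarks: Dict[int, Point] = field(default_factory=dict)
    # Optional contour model (near-frontal images only): named polylines
    contours: Dict[str, List[Point]] = field(default_factory=dict)

# The per-polyline counts listed above sum to the stated 194 control points:
contour_sizes = {
    "face": 41,
    "nose": 17,
    "left_eye": 20, "right_eye": 20,
    "left_eyebrow": 20, "right_eyebrow": 20,
    "mouth_inner": 28, "mouth_outer": 28,
}
total_points = sum(contour_sizes.values())
print(total_points)  # 194
```

Keeping the landmarks in a dictionary keyed by landmark index makes it natural to store only the visible subset, as the annotation files do.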
The database also comes with two lists of selected face image subsets. The first, called the Learning Set (LS), consists of 2200 images (22 images of each person). The following images were selected:
- frontal view from the first sequence,
- 5 images with gradually increasing turn to the left from the central position,
- 5 images with gradually increasing turn to the right from the central position,
- frontal view from the second sequence,
- 5 images with gradually increasing tilt above the central position,
- 5 images with gradually increasing tilt below the central position.
The second, called the Testing Boundary Set (TBS), contains the following images of each person:
- 2 images with the head turned more to the left than in any image in the LS,
- 2 images with the head turned more to the right than in any image in the LS,
- 2 images with the head raised more than in any image in the LS,
- 2 images with the head lowered more than in any image in the LS,
- 3 frontal face images with unconstrained pose and facial expression.
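The sizes of the two subsets follow directly from the per-person counts listed above; a minimal arithmetic sketch (the function names are illustrative, not part of the database tooling):

```python
def learning_set_size(num_people: int = 100) -> int:
    # Sequence 1: frontal view + 5 left turns + 5 right turns = 11 images
    # Sequence 2: frontal view + 5 upward tilts + 5 downward tilts = 11 images
    per_person = (1 + 5 + 5) + (1 + 5 + 5)
    return num_people * per_person

def testing_boundary_set_size(num_people: int = 100) -> int:
    # 2 left + 2 right + 2 raised + 2 lowered + 3 unconstrained = 11 images
    per_person = 2 + 2 + 2 + 2 + 3
    return num_people * per_person

print(learning_set_size())          # 2200, matching the stated LS size
print(testing_boundary_set_size())  # 1100
```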
The following activities could benefit from using the PUT Face Database:
- evaluation of the robustness of face recognition algorithms to pose variations,
- evaluation of the performance of face pose estimation algorithms,
- evaluation of face recognition algorithms using image sequences as input,
- evaluation of face and facial feature localization algorithms,
- development of 2D or 3D statistical face shape models,
- development of algorithms estimating the 3D structure of a face from a sequence of images.