IAS-Lab RGB-D Face Dataset

The IAS-Lab RGB-D Face Dataset was created to measure the accuracy and precision of 2D and 3D face recognition algorithms based on data coming from consumer RGB-D sensors. The dataset was acquired with a Kinect v2 (also known as Kinect One) sensor.

The training dataset consists of 26 subjects captured in 13 different conditions (with pose, lighting and expression variations), standing 1 or 2 meters from the sensor.

In order to represent a typical service robotics scenario, where a few people have to be recognized and many others have to be classified as unknown, the testing dataset contains 19 subjects, only four of whom are also present in the training dataset. The other testing subjects can thus be considered unknown.
The testing set is further divided into five subsets, as explained in the following table:

[Table: composition of the five testing subsets]
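The known/unknown scenario described above corresponds to an open-set recognition protocol: a probe face is assigned to a known identity only if it matches well enough, and is rejected as "unknown" otherwise. A minimal sketch of such a decision rule, assuming similarity scores against a gallery of known subjects (the names, scores and threshold below are illustrative, not part of the dataset):

```python
def classify_open_set(scores, threshold):
    """Open-set decision rule: return the best-matching known identity
    if its similarity clears the threshold, otherwise 'unknown'.

    scores: dict mapping known identity -> similarity to the probe.
    """
    best_id = max(scores, key=scores.get)
    if scores[best_id] >= threshold:
        return best_id
    return "unknown"

# A confident match is accepted; a weak best match is rejected.
print(classify_open_set({"alice": 0.91, "bob": 0.34}, threshold=0.6))  # alice
print(classify_open_set({"alice": 0.41, "bob": 0.34}, threshold=0.6))  # unknown
```

With this rule, the four subjects shared between training and testing sets play the role of the gallery, while the remaining fifteen testing subjects should all be rejected.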

 

The IAS-Lab RGB-D Face Dataset provides two different files for every frame:

  • RGB image (1920x1080 resolution)
  • XYZRGB point cloud (960x540 resolution)

The point cloud is registered to the RGB image, with its resolution downsampled by a factor of two in each dimension. An example of the correspondence between RGB image and point cloud is available here.
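Because the cloud is registered and 2x-downsampled, each point of the organized 960x540 cloud corresponds to a 2x2 block of RGB pixels. A minimal sketch of this index mapping, assuming the cloud is stored row-major (the function name is illustrative):

```python
RGB_W, RGB_H = 1920, 1080      # full-resolution RGB image
CLOUD_W, CLOUD_H = 960, 540    # registered, 2x-downsampled point cloud

def rgb_pixel_to_cloud_index(u, v):
    """Map an RGB pixel (u, v) to the flat index of the corresponding
    point in the organized point cloud, assuming row-major storage."""
    if not (0 <= u < RGB_W and 0 <= v < RGB_H):
        raise ValueError("pixel outside the 1920x1080 RGB image")
    cu, cv = u // 2, v // 2    # downsampling factor of two per axis
    return cv * CLOUD_W + cu   # row-major flat index

# RGB pixel (100, 40) falls on cloud point (50, 20), i.e. index 19250.
idx = rgb_pixel_to_cloud_index(100, 40)
```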

The intrinsic parameters of the RGB camera are also provided together with the dataset in the "camera_info.yaml" file.
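With the intrinsics, 3D points from the cloud can be related to image coordinates through the standard pinhole model. A minimal sketch, assuming the usual camera_info parameter names (fx, fy, cx, cy) and ignoring distortion; the numeric values below are placeholders, not the dataset's actual calibration:

```python
def project(point, fx, fy, cx, cy):
    """Project a 3D point (X, Y, Z), expressed in the camera frame,
    onto the image plane of a distortion-free pinhole camera."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point on the optical axis projects to the principal point:
u, v = project((0.0, 0.0, 2.0), fx=1060.0, fy=1060.0, cx=960.0, cy=540.0)
# → (960.0, 540.0)
```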

 

Samples from the training and the testing set are reported below. The first four rows contain frames of the four known persons of the dataset, i.e., persons present in both the training and the testing dataset, while the remaining rows exemplify the "unknown" class.

[Figure: sample frames from the training set and the five testing subsets]

 

For obtaining the dataset, please refer to the Downloads section. 

 

For questions and remarks directly related to the IAS-Lab RGB-D Face Dataset, please contact the dataset authors.

 

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Copyright (c) 2016 Matteo Munaro.