Publication

C3I-SynFace: A synthetic head pose and facial depth dataset using seed virtual human models.

Basak, Shubhajit
Khan, Faisal
Javidnia, Hossein
Corcoran, Peter
McDonnell, Rachel
Schukat, Michael
Citation
Basak, Shubhajit, Khan, Faisal, Javidnia, Hossein, Corcoran, Peter, McDonnell, Rachel, & Schukat, Michael. (2023). C3I-SynFace: A synthetic head pose and facial depth dataset using seed virtual human models. Data in Brief, 48, 109087. https://doi.org/10.1016/j.dib.2023.109087
Abstract
This article presents C3I-SynFace: a large-scale synthetic human face dataset with corresponding ground-truth annotations of head pose and face depth, generated using the iClone 7 Character Creator "Realistic Human 100" toolkit with variations in ethnicity, gender, race, age, and clothing. The data is generated from 15 female and 15 male synthetic 3D human models exported from the iClone software in FBX format. Five facial expressions (neutral, angry, sad, happy, and scared) are applied to the face models to add further variation. An open-source Python data generation pipeline is proposed that imports these models into the 3D computer graphics tool Blender and renders the facial images along with the ground-truth annotations of head pose and face depth in raw format. The dataset contains more than 100k samples with ground-truth annotations. With the help of virtual human models, the proposed framework can generate extensive synthetic facial datasets (e.g., head pose or face depth datasets) with a high degree of control over facial and environmental variations such as pose, illumination, and background. Such large datasets can be used for the improved and targeted training of deep neural networks.
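Head-pose ground truth of this kind is typically expressed as Euler angles (pitch, yaw, roll) derived from the rendered head's rotation. As a minimal illustrative sketch only (not the authors' pipeline code, which drives Blender directly), a 3x3 rotation matrix could be converted to such angles, assuming a ZYX (yaw-pitch-roll) convention; conventions differ between tools, so this is one hypothetical choice:

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (pitch, yaw, roll) in degrees.

    Assumes the ZYX decomposition R = Rz(yaw) @ Ry(pitch) @ Rx(roll);
    other tools (Blender, iClone) may use different conventions.
    """
    sy = np.hypot(R[0, 0], R[1, 0])  # magnitude of cos(pitch)
    pitch = np.degrees(np.arctan2(-R[2, 0], sy))
    if sy > 1e-6:
        yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
        roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    else:  # gimbal lock: yaw and roll are coupled; fix yaw at 0
        yaw = 0.0
        roll = np.degrees(np.arctan2(-R[1, 2], R[1, 1]))
    return pitch, yaw, roll

# Example: a pure 30-degree rotation about the z-axis yields yaw = 30.
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
pitch, yaw, roll = rotation_to_euler(Rz)
```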
Publisher
Elsevier
Publisher DOI
https://doi.org/10.1016/j.dib.2023.109087
Rights
Attribution 4.0 International