High-accuracy facial depth models derived from 3D synthetic data
Khan, Faisal ; Basak, Shubhajit ; Javidnia, Hossein ; Schukat, Michael ; Corcoran, Peter
Publication Date
2020-08-31
Type
Conference Paper
Citation
Khan, Faisal, Basak, Shubhajit, Javidnia, Hossein, Schukat, Michael, & Corcoran, Peter. (2020). High-accuracy facial depth models derived from 3D synthetic data. Paper presented at the 31st Irish Signals and Systems Conference (ISSC), Letterkenny, Ireland, 11-12 June, DOI: 10.1109/ISSC49989.2020.9180166.
Abstract
In this paper, we explore how synthetically generated 3D face models can be used to construct high-accuracy ground-truth depth data, which in turn allows Convolutional Neural Networks (CNNs) to be trained for facial depth estimation. These models provide fine-grained control over image variations, including pose, illumination, facial expression and camera position. 2D training samples, typically in RGB format, can be rendered from these models together with corresponding depth information. Using synthetic facial animations, dynamic facial expression or facial action data can be rendered as a sequence of image frames together with ground-truth depth and additional metadata such as head pose and light direction. The synthetic data is used to train a CNN-based facial depth estimation system, which is validated on both synthetic and real images. Potential fields of application include 3D reconstruction, driver monitoring systems, robotic vision systems, and advanced scene understanding.
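The core idea in the abstract is that depth ground truth comes for free when rendering a 3D model: for every pixel, the renderer knows the distance from the camera to the nearest surface. The minimal sketch below illustrates this with a ray-cast sphere standing in for a 3D face mesh (the function name and parameters are illustrative, not from the paper); a production pipeline would instead render a textured face model and export the renderer's depth buffer alongside the RGB frame.

```python
import numpy as np

def render_depth_map(radius=1.0, size=64, camera_z=3.0):
    """Render a per-pixel ground-truth depth map of a sphere by ray casting.

    A stand-in for rendering depth from a full 3D face model: for each
    pixel, intersect a camera ray with the geometry and record the
    distance from the camera to the nearest surface point.
    """
    # Pixel grid in normalized image coordinates [-1, 1]
    xs = np.linspace(-1, 1, size)
    u, v = np.meshgrid(xs, xs)

    # Orthographic rays cast along -z from a camera plane at z = camera_z,
    # intersected with a sphere centered at the origin: x^2 + y^2 + z^2 = r^2
    r2 = radius**2 - (u**2 + v**2)
    hit = r2 > 0  # pixels whose ray actually strikes the sphere

    depth = np.full((size, size), np.inf)  # background = infinite depth
    # Nearest intersection is at z = +sqrt(r^2 - u^2 - v^2),
    # so the depth along the ray is camera_z - z
    depth[hit] = camera_z - np.sqrt(r2[hit])
    return depth

depth = render_depth_map()
```

Pairs of (rendered RGB image, depth map) produced this way, swept over pose, lighting and expression parameters, form the supervised training set for a CNN depth estimator.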
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Publisher DOI
10.1109/ISSC49989.2020.9180166
Rights
Attribution-NonCommercial-NoDerivs 3.0 Ireland