Publication

ChildGAN: large scale synthetic child facial data using domain adaptation in StyleGAN

Farooq, Muhammad Ali
Yao, Wang
Costache, Gabriel
Corcoran, Peter
Citation
Farooq, M. A., Yao, W., Costache, G., & Corcoran, P. (2023). ChildGAN: Large Scale Synthetic Child Facial Data Using Domain Adaptation in StyleGAN. IEEE Access, 11, 108775-108791. doi:10.1109/ACCESS.2023.3321149
Abstract
In this research work, we propose ChildGAN, a pair of GAN networks derived from StyleGAN2 for generating synthetic facial data of boys and girls. ChildGAN is built by performing smooth domain transfer using transfer learning, and it provides photo-realistic, high-quality data samples. A large-scale dataset is rendered with a variety of smart facial transformations: facial expressions, age progression, eye-blink effects, head pose, skin and hair color variations, and variable lighting conditions. The dataset comprises more than 300k distinct data samples. Further, the uniqueness and characteristics of the rendered facial features are validated by running different computer vision application tests, including a CNN-based child gender classifier, face localization and facial landmark detection, identity similarity evaluation using ArcFace, and eye detection with eye aspect ratio tests. The results demonstrate that high-quality synthetic child facial data offers an alternative to the cost and complexity of collecting a large-scale dataset from real children. The complete dataset along with the trained models is open-sourced on our GitHub page: https://github.com/MAli-Farooq/ChildGAN.
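The eye-blink validation mentioned in the abstract relies on the standard eye aspect ratio (EAR) measure. The sketch below is a minimal illustration, not code from the ChildGAN repository, of how such a test could be computed once six eye landmarks per eye have been extracted by a facial landmark detector; the function name, ordering convention, and sample coordinates are illustrative assumptions.

import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    `eye` is a 6x2 array ordered p1..p6 around the eye contour, following the
    standard EAR formulation: EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    A low EAR indicates a closed (blinking) eye; a higher EAR an open eye.
    """
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # |p2 - p6|
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # |p3 - p5|
    horizontal = np.linalg.norm(eye[0] - eye[3])   # |p1 - p4|
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# Illustrative usage with hypothetical landmark coordinates for an open eye
# and a nearly closed eye; a threshold around 0.2 is commonly used to flag
# a blink in EAR-based tests.
open_eye = np.array([[0, 5], [3, 8], [7, 8], [10, 5], [7, 2], [3, 2]], dtype=float)
closed_eye = np.array([[0, 5], [3, 5.5], [7, 5.5], [10, 5], [7, 4.5], [3, 4.5]], dtype=float)
print(eye_aspect_ratio(open_eye))    # relatively high EAR (eye open)
print(eye_aspect_ratio(closed_eye))  # low EAR (eye closed / blinking)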
Publisher
IEEE
Publisher DOI
https://doi.org/10.1109/ACCESS.2023.3321149
Rights
Attribution 4.0 International