Synthesizing game audio using deep neural networks
McDonagh, Aoife; Lemley, Joseph; Cassidy, Ryan; Corcoran, Peter
Publication Date
2018-08-15
Type
Conference Paper
Citation
McDonagh, Aoife, Lemley, Joseph, Cassidy, Ryan, & Corcoran, Peter. (2018). Synthesizing game audio using deep neural networks. Paper presented at the 2018 IEEE Games, Entertainment, Media Conference (GEM), Galway, Ireland, 15-17 August.
Abstract
High-quality audio plays an important role in gaming, contributing to player immersion during gameplay. Creating audio content that matches a game's overall theme and aesthetic is essential so that players can become fully engrossed in the game environment. Sound effects must also fit well with the visual elements of a game so as not to break player immersion. Producing suitable, unique sound effects requires a wide range of audio processing techniques. In this paper, we examine a method of generating in-game audio using Generative Adversarial Networks and compare it to traditional methods of synthesizing and augmenting audio.
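The record names Generative Adversarial Networks as the generation method but gives no architectural detail. Below is a minimal, hypothetical PyTorch sketch of a WaveGAN-style 1-D convolutional GAN for short raw-audio clips; the latent dimension, layer sizes, clip length, and training step are illustrative assumptions, not the configuration used in the paper.

# Hypothetical sketch: a 1-D convolutional GAN for 16384-sample mono audio
# clips, loosely in the style of WaveGAN. All hyperparameters below are
# illustrative assumptions; the paper's actual architecture is not given here.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z to a 16384-sample mono waveform in [-1, 1]."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 16)            # seed a length-16 feature map
        def up(c_in, c_out):                                  # 4x temporal upsampling block
            return nn.Sequential(
                nn.ConvTranspose1d(c_in, c_out, kernel_size=25, stride=4,
                                   padding=11, output_padding=1),
                nn.ReLU())
        self.net = nn.Sequential(
            up(256, 128), up(128, 64), up(64, 32), up(32, 16),
            nn.ConvTranspose1d(16, 1, kernel_size=25, stride=4,
                               padding=11, output_padding=1),
            nn.Tanh())                                        # waveform range [-1, 1]

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 16)
        return self.net(x)                                    # (batch, 1, 16384)

class Discriminator(nn.Module):
    """Scores a 16384-sample waveform as real or generated (raw logit)."""
    def __init__(self):
        super().__init__()
        def down(c_in, c_out):                                # 4x temporal downsampling block
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=25, stride=4, padding=11),
                nn.LeakyReLU(0.2))
        self.net = nn.Sequential(
            down(1, 16), down(16, 32), down(32, 64), down(64, 128), down(128, 256))
        self.fc = nn.Linear(256 * 16, 1)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

def train_step(G, D, opt_g, opt_d, real, latent_dim=100):
    """One illustrative non-saturating GAN update on a batch of real clips."""
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn(real.size(0), latent_dim)
    fake = G(z)

    # Discriminator: push real clips toward 1, generated clips toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(real.size(0), 1)) +
              bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator score fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

In practice, approaches in this family train on a corpus of short sound-effect recordings and then sample new latent vectors to produce novel clips, in contrast to the traditional synthesis and augmentation pipelines the abstract mentions.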
Publisher
Institute of Electrical and Electronics Engineers
Publisher DOI
10.1109/GEM.2018.8516448
Rights
Attribution-NonCommercial-NoDerivs 3.0 Ireland