G-MIND: Galway multimodal infrastructure node dataset for intelligent transportation systems
Molloy, Dara
George, Roshan
Brophy, Tim
Deegan, Brian
Mullins, Darragh
Ward, Enda
Horgan, Jonathan
Eising, Ciaran
Denny, Patrick
Jones, Edward
Glavin, Martin
Publication Date
2025-12-25
Type
journal article
Citation
Molloy, D., George, R., Brophy, T., Deegan, B., Mullins, D., Ward, E., Horgan, J., Eising, C., Denny, P., Jones, E., Glavin, M. (2026). G-MIND: Galway Multimodal Infrastructure Node Dataset for Intelligent Transportation Systems. IEEE Open Journal of Vehicular Technology, 7, 491-509. https://doi.org/10.1109/OJVT.2025.3648251
Abstract
Autonomous and semi-autonomous vehicles require accurate perception of their surrounding environment to ensure safe operation, yet onboard sensors frequently encounter occlusion challenges that result in incomplete dynamic environmental maps. Infrastructure-to-vehicle cooperative perception addresses this by deploying infrastructure nodes that monitor scenes and share reliable environmental maps with nearby vehicles via technologies such as C-V2X. However, existing infrastructure-perspective datasets lack diverse multimodal data and aerial footage, which are crucial for effectively determining the sensors necessary for safety-critical infrastructure node applications. This paper introduces G-MIND, a multimodal infrastructure node dataset supporting research into sensor suitability for infrastructure-assisted safety-critical applications. G-MIND is the first dataset to incorporate this comprehensive range of sensing modalities for infrastructure-based perception: RGB, FIR, and neuromorphic cameras, LiDARs, RADAR, and aerial drone footage. With 91,500 annotated frames, G-MIND offers a larger scale than existing infrastructure perception datasets such as Ko-PER (10k frames), CoopScenes (40k frames), and DAIR-V2X (71k frames), enabling more comprehensive training and evaluation. The dataset captures day and night conditions featuring cars, pedestrians, and cyclists across diverse traffic scenarios. Beyond standard perception benchmarking, G-MIND includes specialized collections designed to test perception system boundaries: maximum detection distance scenarios, far and occluded object scenarios, and pedestrian action prediction scenarios that challenge current algorithms. Additionally, this paper analyzes what constitutes an effective ITS infrastructure node sensor from a practical perspective, comparing modalities against technical criteria (field of view, spatial resolution, low-light performance, adverse weather resilience) and pragmatic criteria (cost, durability).
Publisher
Institute of Electrical and Electronics Engineers
Publisher DOI
https://doi.org/10.1109/OJVT.2025.3648251
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International