Many modern military situational awareness solutions employ sensor fusion and augmented reality to pack in a great deal of information and provide real-world context.
Some of these feeds are enhanced with audio, but new US Army research aims to prove that even more information can be packed into the audio signal itself by making it fully 3D.
Tagging a sound – for instance, a distinct noise signifying the movement of an enemy detachment – and placing it in 3D space enables a soldier to react immediately without having to read a label, even when the source is behind him. The sound can also appear to move in step with sensors tracking the item across the battlefield.
During military operations in urban environments, hearing often serves as a more immediate early-warning sense than vision. However, sounds such as gunshots can bounce off densely packed buildings, giving a false impression of their location; an artificial 3D cue delivered through headphones could correct this.
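The article does not describe how the Army system synthesizes its 3D cues, but spatial audio of this kind typically rests on the two binaural cues the brain uses to localize sound: interaural time difference (ITD) and interaural level difference (ILD). The sketch below, assuming textbook constants and Woodworth's spherical-head approximation rather than any actual system parameters, shows how a direction could be translated into per-ear gains and delays for a headphone alert:

```python
import math

# Illustrative sketch only: a simple spherical-head model for the binaural
# cues a 3D audio system could use to place a warning sound at a given
# direction. The constants and the 6 dB head-shadow factor are textbook
# approximations, not parameters from the Army research.

SPEED_OF_SOUND = 343.0  # m/s, air at ~20 degrees C
HEAD_RADIUS = 0.0875    # m, average adult head

def interaural_cues(azimuth_deg):
    """Return (ITD in seconds, ILD in dB) for a distant source at the
    given azimuth: 0 is straight ahead, +90 is directly to the right.

    ITD uses Woodworth's spherical-head formula r/c * (theta + sin theta);
    ILD uses a crude sinusoidal head-shadow approximation."""
    theta = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
    ild = 6.0 * math.sin(theta)
    return itd, ild

def render_direction(azimuth_deg, sample_rate=48000):
    """Translate the cues into per-ear gains and a whole-sample delay,
    the raw ingredients for panning a mono alert into stereo."""
    itd, ild = interaural_cues(azimuth_deg)
    far_ear_gain = 10.0 ** (-abs(ild) / 20.0)  # attenuate the shadowed ear
    delay_samples = round(abs(itd) * sample_rate)
    if azimuth_deg >= 0:  # source on the right: delay and attenuate the left ear
        return {"left_gain": far_ear_gain, "right_gain": 1.0,
                "left_delay": delay_samples, "right_delay": 0}
    return {"left_gain": 1.0, "right_gain": far_ear_gain,
            "left_delay": 0, "right_delay": delay_samples}
```

For a source straight ahead both cues vanish; at 90 degrees the model yields an ITD of roughly 0.65 ms, in line with the commonly cited maximum for a human head. A real system would use measured head-related transfer functions (HRTFs) rather than this closed-form approximation.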
The research also shows that using audio cues requires little or no training compared with interpreting readouts. By giving a perceived location that matches the actual one, a soldier instinctively knows where to turn and look.
Incorporating more informative 3D sounds into a Mixed Reality (MR) display could give the most complete solution for soldier situational awareness.