In our world, many visually spectacular events come with equally potent auditory counterparts: a truck crossing in front of you, for instance, or standing at the front of a crowd. Together, visual and auditory information help people perform spatial tasks such as localization, because the ability to detect both visual and auditory stimuli is critical to interacting safely with our environment. Obviously, we cannot count on this ideal situation all the time. There are two limitations: not every visual stimulus has an auditory counterpart, and not every person has 20/20 vision. Visual impairments exist, and they are our topic for today.
Researchers at the Department of Neuroscience of the University of Lethbridge in Alberta, Canada, have developed a system that transforms visual information into auditory stimuli, all thanks to the power of Augmented Reality. Using a neuromorphic camera (DAVIS 240B), the system detects changes in brightness intensity in the scene, which is precisely the signature of a visual event. Then, through processing based on logarithmic changes in intensity, the system converts this information into auditory stimuli in real time.
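To make the idea concrete, here is a minimal sketch of how a single camera event might be turned into sound. The paper's actual mapping is not described in this article, so the choices below (horizontal position to stereo pan, vertical position to pitch, polarity to loudness) are hypothetical, as are the function name and parameters; the DAVIS 240B's 240x180 resolution is real.

```python
import math

def event_to_tone(x, y, polarity, width=240, height=180,
                  f_min=300.0, f_max=3000.0, duration=0.05, rate=8000):
    """Map one camera event to a short stereo tone (hypothetical mapping).

    x, y: pixel coordinates of the brightness change (DAVIS 240B: 240x180).
    polarity: +1 for a brightness increase, -1 for a decrease.
    Returns (left, right) lists of samples.
    """
    # Vertical position sets pitch: higher in the image, higher frequency.
    freq = f_min + (1.0 - y / (height - 1)) * (f_max - f_min)
    # Horizontal position sets the stereo pan (0 = far left, 1 = far right).
    pan = x / (width - 1)
    # Polarity sets loudness: brightening events are played louder.
    amp = 0.8 if polarity > 0 else 0.4
    n = int(duration * rate)
    left, right = [], []
    for i in range(n):
        s = amp * math.sin(2 * math.pi * freq * i / rate)
        left.append(s * (1.0 - pan))
        right.append(s * pan)
    return left, right
```

In a real-time system, each event streamed from the camera would be fed through a mapping like this and mixed into the audio output with very low latency.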
For this study, participants were blindfolded and placed in front of a laptop running a custom MATLAB Psychophysics Toolbox script that generated still and moving objects for the system to detect. Participants were asked to use the device to detect new objects in the scene and to determine whether they were still or moving (and, if moving, in which direction). Two variables were evaluated to assess the system's performance: Hit Rate (HR), to determine how often participants were correct, and Reaction Time (RT), to estimate how much cognitive load the task required.
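The two metrics above can be sketched in a few lines. The trial format and the convention of averaging RT over correct trials only are assumptions for illustration; the study's exact scoring may differ.

```python
def summarize_trials(trials):
    """Compute hit rate and mean reaction time from a list of trials.

    Each trial is a dict like {"correct": bool, "rt": seconds}.
    Only correct trials contribute to the reaction-time average
    (a common convention, assumed here).
    """
    hits = [t for t in trials if t["correct"]]
    hit_rate = len(hits) / len(trials)
    mean_rt = sum(t["rt"] for t in hits) / len(hits) if hits else float("nan")
    return hit_rate, mean_rt

# Example: three trials, two correct.
trials = [{"correct": True, "rt": 1.2},
          {"correct": False, "rt": 2.0},
          {"correct": True, "rt": 1.6}]
hit_rate, mean_rt = summarize_trials(trials)  # hit_rate = 2/3, mean_rt = 1.4
```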
The results were impressive. HR was divided into Onset Detection HR and Motion Discrimination HR, which reached 95.6% (SD 9.1%) and 62.7% (SD 3.81%), respectively. Onset RT was also measured, averaging 1.38 s (SD 0.71 s). Other, more complex variables were considered in the study, but we believe these results are more than enough to show that the system fulfills its premise. Notably, these results were obtained without any extensive training, making it an important step in the right direction.
These results point to a feasible new system for patients with visual impairments, or even attention-orienting deficits. The system is aimed at the realization of a prosthetic device designed for people with special needs, and it is a predecessor of future models that will allow patients with severe impairments to live more functional and independent lives. It is a perfect example of how alternative technologies can open a new and interesting direction for medicine, changing people's lives for good thanks to Augmented Reality.