A new headset could revolutionize communication for locked-in individuals, people with ALS, and anyone who has suffered an injury that makes communication more difficult. In fact, research in this area is proceeding along several different paths. It may not be much longer before real-time speech communication is possible again for people who are now either silent or confined to the use of painfully slow alternatives.
The device, called AlterEgo (pictured above) and developed by MIT Media Lab researcher Arnav Kapur, wraps around the neck and detects the internal micro-movements of the larynx and vocal cords that occur when we think about speaking. It is not a true telepathic or thought-reading piece of equipment, but it does detect when you are thinking about speaking without actually vocalizing.
The video below has more information on the AlterEgo, as well as a demonstration of the hardware in action. It includes a demo of the hardware being used by an individual with amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease.
There’s new technological work being done in this area on multiple fronts. The AlterEgo device is non-invasive, but it may be limited for the same reason: if individuals lack sufficient control over the relevant muscles, the AlterEgo may not work at all. Researchers on a different project, however, published a paper in Nature last month detailing a system that translates brain activity directly into speech. It represents a fundamentally different approach from the AlterEgo’s in a number of respects, not least its use of electrodes implanted in the brain rather than a wearable strapped under one’s chin.
Still, the collective work being done here is impressive. The brain-decoding team writes in Nature:
Here we designed a neural decoder that explicitly leverages kinematic and sound representations encoded in human cortical activity to synthesize audible speech. Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement and then transformed these representations into speech acoustics. In closed vocabulary tests, listeners could readily identify and transcribe speech synthesized from cortical activity.
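The two-stage pipeline the abstract describes — one recurrent network decoding cortical activity into articulatory movements, a second transforming those movements into speech acoustics — can be sketched at a very high level. Everything below is illustrative only: the dimensions, the plain tanh RNN, and the random weights are stand-ins, not the paper's trained architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # One recurrent step: new hidden state from current input and previous state.
    return np.tanh(x @ Wx + h @ Wh + b)

def run_rnn(X, Wx, Wh, b):
    # Unroll the RNN over a sequence of feature vectors, returning all hidden states.
    h = np.zeros(Wh.shape[0])
    outputs = []
    for x in X:
        h = rnn_step(x, h, Wx, Wh, b)
        outputs.append(h)
    return np.stack(outputs)

# Illustrative sizes: 64 cortical channels, 12 articulator-movement features,
# 32 acoustic features, 100 time steps.
N_ECOG, N_KIN, N_ACOUSTIC, T = 64, 12, 32, 100

# Stage 1: cortical activity -> representations of articulatory movement.
Wx1 = rng.normal(scale=0.1, size=(N_ECOG, N_KIN))
Wh1 = rng.normal(scale=0.1, size=(N_KIN, N_KIN))
b1 = np.zeros(N_KIN)

# Stage 2: articulatory movement -> speech acoustics.
Wx2 = rng.normal(scale=0.1, size=(N_KIN, N_ACOUSTIC))
Wh2 = rng.normal(scale=0.1, size=(N_ACOUSTIC, N_ACOUSTIC))
b2 = np.zeros(N_ACOUSTIC)

ecog = rng.normal(size=(T, N_ECOG))            # stand-in for recorded cortical activity
kinematics = run_rnn(ecog, Wx1, Wh1, b1)       # decoded articulator movements
acoustics = run_rnn(kinematics, Wx2, Wh2, b2)  # decoded acoustic features

print(acoustics.shape)  # (100, 32)
```

In the real system, the decoded acoustic features would then drive a vocoder to produce an audible waveform; this sketch only shows why the intermediate articulatory representation sits between the neural recording and the acoustics.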
The technological research being done here is early in all cases. But work of this type clearly has the potential to let people who struggle with slow eye-tracking or muscle-movement-based communication devices “speak” again. The more viable approaches we can find, the more people we’ll be able to help.
- Review: Living With the Nuheara IQBuds Boost
- Modular AI Wheelchairs Can Watch for Obstacles, Incorporate Head Tracking
- Drug Delivery Implants, Electrical Stimulation Can Manage Chronic Pain