From the land of Salted Lakes and the great state of Utah comes a new way to translate thoughts into words - roughly 40 percent of the time.
Researchers at the University of Utah have begun work on a method of brain signal analysis that they hope will eventually allow those with "locked-in" syndrome to speak to the outside world, and let those who thought Betazoids on Star Trek were the greatest things ever get their freak on.
Right now, the technology is only in its infancy, but through their work with a highly epileptic patient, the researchers were able to determine that each word spoken by the patient - from a pre-selected list of "useful" words like yes, no, hot, cold, Slurpee and chili-cheese dog - produced a different set of signals in the brain that could be recorded.
Ok, so maybe one or two of those words weren't used. Still, we'd kill for a Slurpee if we'd been locked in our body for any period of time over about 5 minutes.
Using two grids of micro-electrodes placed on the brain - made easier in this case because the patient had part of his skull removed to limit seizures - the team was able to capture the signals generated by his brain.

(Microelectrodes: smaller than a quarter.)
The electrodes were placed over two distinct areas of the brain - the facial motor cortex, which controls the muscles used for speaking, and Wernicke's area, a portion of the brain tied to language and comprehension but in a way not even Wernicke himself understood. From these two areas, the team was able to gather enough data to create a reliable set of characteristics for each word spoken.
When comparing signals from one word to another, the team was able to pick which of the two words had been spoken 76 to 85% of the time. However, when the useful Utahns tried to pick the spoken word out of all ten, their accuracy dropped to 28%.
While this is better than the 10% accuracy that would result from random chance, being locked in and having your request for water interpreted as "lava", "motorcade" or "flintlock" roughly seven times out of ten might be a problem.
Once the researchers focused their attention on the five most accurate electrodes in the group, things got easier. Their accuracy bumped up to 48%, just shy of a coin toss.
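Why is telling two words apart so much easier than picking one word out of ten? A toy simulation makes the gap intuitive. Everything below is hypothetical - the synthetic "electrode" features, the nearest-mean classifier, and the noise levels are our own stand-ins, not the Utah team's actual analysis:

```python
import random

random.seed(0)

WORDS = list(range(10))  # stand-ins for the ten pre-selected words

def sample(word, noise=2.0, n_features=5):
    """Fake 'electrode' reading: each word shifts the mean of noisy features."""
    return [word + random.gauss(0, noise) for _ in range(n_features)]

def classify(obs, candidates):
    """Nearest-mean guess over the candidate word set."""
    return min(candidates, key=lambda w: sum((x - w) ** 2 for x in obs))

def accuracy(candidates, trials=2000):
    hits = 0
    for _ in range(trials):
        w = random.choice(candidates)
        hits += classify(sample(w), candidates) == w
    return hits / trials

pair_acc = accuracy([3, 7])  # distinguish just two words
ten_acc = accuracy(WORDS)    # pick one word out of all ten
print(f"pairwise: {pair_acc:.0%}, ten-way: {ten_acc:.0%}")
```

With two well-separated words, chance alone is 50% and the noise rarely pushes a reading past the midpoint, so pairwise accuracy comes out near perfect; with ten crowded candidates, chance drops to 10% and neighboring words constantly bleed into each other - the same pattern the study reported.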
Being scientists, this got them all gooey inside, and they're off and running with the next stage of their project, which will involve a far larger grid of electrodes and a guy with no face.
Ok, so the 121-electrode grids are real, but no-face man isn't. Well, not in the context of this study.
In time, the U of U team hopes that it will be able to increase the accuracy of spoken word identification after the fact, and also hopes that those who can't speak due to various syndromes still process language in the same way as the rest of us.
Sure, they can't read our minds just yet, but we think this is a great idea.
At least, we think we do.