Thanks to scientists at the University of Texas, the last private domain may soon be exposed for all the world to see — or read.
A team of researchers has developed a noninvasive means by which human thoughts can be converted into text. While currently clunky, the "semantic decoder" could one day be miniaturized and made mobile, such that the body's sanctum sanctorum could be spied upon virtually anywhere.
In their paper, published Monday in the journal Nature Neuroscience, the researchers noted that a "brain-computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications."
In addition to helping people who cannot speak communicate, the practical applications might include, as the MIT Technology Review has suggested, surveillance and interrogation. However, the present technology relies upon subject cooperation and can be consciously resisted.
Unlike previous brain-computer interfaces, which required invasive neurosurgery to decode speech articulation and other signals from intracranial recordings, this new decoder utilizes both functional magnetic resonance brain imaging and artificial intelligence.
The team, led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, trained GPT-1 — an early OpenAI language model from the same family as ChatGPT — on a data set containing English-language sentences from hundreds of narrative stories.
Test subjects lying in an fMRI scanner each listened to 16 hours of episodes of the New York Times’ "Modern Love" podcast, which featured the stories.
With this data, the researchers' AI model found patterns in brain states corresponding with specific words. Relying upon its predictive capability, it could then fill in the gaps by "generating word sequences, scoring the likelihood that each candidate evoked the recorded brain responses and then selecting the best candidate."
When the subjects were scanned again, the decoder was able to recognize and decipher their thoughts.
While the resultant translations were far from perfect, reconstructions left little thematically to the imagination.
For instance, one test subject listening to a speaker say, "I don't have my driver's license yet," had their thoughts decoded as, "she has not even started to learn to drive yet."
In another instance, a test subject comprehended the words, "I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’" and had those thoughts decoded as "Started to scream and cry, and then she just said, ‘I told you to leave me alone.’"
The Texas researchers' decoder was not only tested on reading verbal thoughts but on visual, non-narrative thoughts as well.
Test subjects viewed four Pixar short films, each four to six minutes long, which were "self-contained and almost entirely devoid of language." They then had their brain responses recorded to ascertain whether the thought decoder could make sense out of what they had seen. The model reportedly showed some promise.
"For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences," Huth told a University of Texas at Austin podcast.
"We’re getting the model to decode continuous language for extended periods of time with complicated ideas," added Huth.
The researchers are aware that the technology raises some ethical concerns.
"We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," said Tang. "We want to make sure people only use these types of technologies when they want to and that it helps them."
Even if bad actors got their hands on the technology today, it wouldn't yield tremendous results.
The decoder presently produces meaningful results only when attempting to analyze the thoughts of individuals it has already been trained on. Such training requires that a subject undergo scanning for several hours. Use of the decoder on an unwilling passerby, therefore, would produce only unintelligible results. However, a sufficiently extensive general dataset might eventually obviate the need for such subject-specific training.
Even if an authoritarian regime or a criminal code breaker today somehow got their hands both on this technology and on an individual it had been trained on, the captive would still have ways of defending their mental secrets.
According to the researchers, test subjects were able to actively resist the decoder's mind-reading efforts by thinking of animals or imagining telling their own story.
Despite the technology's current limitations and the ability to resist, Tang suggested that "it’s important to be proactive by enacting policies that protect people and their privacy. ... Regulating what these devices can be used for is also very important."
"Nobody's brain should be decoded without their cooperation," Tang told the MIT Technology Review.
TheBlaze reported in January on a World Economic Forum event that hyped the era of "brain transparency."
"What you think, what you feel: It's all just data," said Nita Farahany, professor of law and philosophy at Duke Law School and faculty chair of the Duke MA in bioethics and science policy. "And large patterns can be decoded using artificial intelligence."
Farahany explained in her Jan. 19 presentation, entitled "Ready for Brain Transparency?" that when people think or emote, "neurons are firing in your brain, emitting tiny little electrical discharges. As a particular thought takes form, hundreds of thousands of neurons fire in characteristic patterns that can be decoded with EEG (electroencephalography)- and AI-powered devices."
With optimism similar to that expressed by the UT researchers, Farahany said that the widespread adoption of these technologies will "change the way that we interact with other people and even how we understand ourselves."
After sexually explicit deepfake images of Taylor Swift broke the internet, Washington is finally stepping in. But how effective is its plan?
About a week ago, AI-generated pornographic images of Taylor Swift swept the internet at an alarming rate.
The fake pictures went so viral that one image was “seen 47 million times on X before it was removed” after only being up for “about 17 hours,” Hilary Kennedy tells Pat Gray.
After the images were discovered, X temporarily blocked searches related to Taylor Swift in an effort to prevent the explicit content from circulating even further.
The website responsible for publishing the images “has done this with lots of celebrities before,” says Hilary.
Because these deepfakes are so "incredibly convincing, people in Washington are finally trying to do something about [it]" via a bill called the Defiance Act, which was "introduced by Senate Judiciary Committee chairman Dick Durbin, Lindsey Graham, Senator Josh Hawley, and Senator Amy Klobuchar."
Senator Durbin stated that "sexually explicit deepfake content is often used to exploit and harass women — particularly public figures, politicians, and celebrities. ... Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real. Victims have lost their jobs, and they may suffer ongoing depression or anxiety. By introducing this legislation, we’re giving power back to the victims, cracking down on the distribution of deepfake images, and holding those responsible for the images accountable."
The Defiance Act “would enable people who are victims of this to be able to take civil action against anybody that produces it [or] possesses it with the intent to distribute it,” says Hilary.
Further, people who are in possession of deepfake images “knowing the victim did not consent” can also “be held liable.”
But Pat sees some holes in this new Defiance Act.
“For instance, if you got Taylor Swift deepfakes, you don't know for sure whether she said it's okay or not,” he says.
To hear more of the conversation, watch the clip below.