Scientists have developed an AI system that decodes thoughts and converts them into text



Thanks to scientists at the University of Texas, the last private domain may soon be exposed for all the world to see — or read.

A team of researchers has developed a noninvasive means by which human thoughts can be converted into text. While currently clunky, the "semantic decoder" could one day be miniaturized and mobilized such that the body's sanctum sanctorum can be spied on virtually anywhere.

In their paper, published Monday in the journal Nature Neuroscience, the researchers noted that a "brain-computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications."

In addition to helping people who cannot speak communicate, the practical applications might include, as the MIT Technology Review has suggested, surveillance and interrogation. However, the present technology relies on subject cooperation and can be consciously resisted.

Unlike previous brain-computer interfaces, which required invasive neurosurgery to decode speech articulation and other signals from intracranial recordings, this new decoder uses functional magnetic resonance imaging (fMRI) combined with artificial intelligence.

The team, led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, trained GPT-1, an early OpenAI language model from the same GPT family as ChatGPT, on a data set containing various English-language sentences from hundreds of narrative stories.

Test subjects lying in an fMRI scanner each listened to 16 hours of episodes of the New York Times' "Modern Love" podcast, which featured the stories.

With this data, the researchers' AI model found patterns in brain states corresponding to specific words. Relying on its predictive capability, it could then fill in the gaps by "generating word sequences, scoring the likelihood that each candidate evoked the recorded brain responses and then selecting the best candidate."
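The quoted passage describes what amounts to a beam search: a language model proposes candidate word sequences, an encoding model predicts the brain response each candidate would evoke, and the candidates whose predicted responses best match the recording survive to the next step. Below is a minimal Python sketch of that propose-score-select loop, with `propose_continuations` and `predict_response` as hypothetical stand-ins for the language model and the per-subject encoding model; this is an illustration of the described technique, not the authors' actual code.

```python
import numpy as np

def decode_recording(recorded_response, propose_continuations,
                     predict_response, beam_width=5, n_steps=50):
    """Beam-search decoding in the spirit of the paper's description.

    `propose_continuations(text)` stands in for the GPT-style language
    model that suggests likely next words; `predict_response(text)` stands
    in for the subject-specific encoding model that maps candidate text to
    a predicted fMRI response. Both are hypothetical placeholders.
    """
    beam = [""]  # start from an empty transcript
    for _ in range(n_steps):
        candidates = []
        for text in beam:
            for word in propose_continuations(text):
                candidate = (text + " " + word).strip()
                # Score each candidate by how closely the encoding model's
                # predicted brain response matches the recorded one.
                predicted = predict_response(candidate)
                score = -np.linalg.norm(predicted - recorded_response)
                candidates.append((score, candidate))
        # Keep only the best-scoring candidates for the next step.
        candidates.sort(key=lambda pair: pair[0], reverse=True)
        beam = [text for _, text in candidates[:beam_width]]
    return beam[0]
```

In the study itself the comparison presumably runs over predicted response time courses rather than a single vector, but the generate-score-select structure is the one the researchers describe.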

When the subjects were scanned again, the decoder was able to recognize and decipher their thoughts.

While the resulting translations were far from perfect, the reconstructions left little to the imagination thematically.

For instance, one test subject listening to a speaker say, "I don't have my driver's license yet," had their thoughts decoded as, "she has not even started to learn to drive yet."

In another instance, a test subject comprehended the words, "I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’" and had those thoughts decoded as "Started to scream and cry, and then she just said, ‘I told you to leave me alone.’"

The Texas researchers' decoder was not only tested on reading verbal thoughts but on visual, non-narrative thoughts as well.

Test subjects viewed four 4- to 6-minute Pixar short films, which were "self-contained and almost entirely devoid of language." They then had their brain responses recorded to ascertain whether the thought decoder could make sense of what they had seen. The model reportedly showed some promise.

"For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences," Huth told a University of Texas at Austin podcast.

"We’re getting the model to decode continuous language for extended periods of time with complicated ideas," added Huth.

The researchers are aware that the technology raises some ethical concerns.

"We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," said Tang. "We want to make sure people only use these types of technologies when they want to and that it helps them."

Even if bad actors got their hands on the technology today, it wouldn't yield tremendous results.

The decoder presently produces meaningful results only when analyzing the thoughts of individuals it has already been trained on. Such training requires that a subject undergo scanning for several hours. Used on an unwilling passerby, therefore, the decoder would produce only unintelligible output. However, a sufficiently extensive general dataset might eventually obviate the need for such familiarity.
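To see why hours of per-person scanning matter, note that decoders of this kind typically rest on a subject-specific encoding model fit to that individual's own brain data, and the learned weights do not transfer across brains. Here is a hypothetical sketch of fitting such a model; ridge regression is a common choice for fMRI encoding models, though the paper's exact method may differ.

```python
import numpy as np

def fit_subject_encoding_model(stimulus_features, voxel_responses, alpha=1.0):
    """Ridge regression mapping stimulus features to voxel responses.

    `stimulus_features`: (n_timepoints, n_features) language features for
    the training stories; `voxel_responses`: the matching
    (n_timepoints, n_voxels) fMRI data from one subject. The returned
    weights are specific to that subject's brain.
    """
    X, Y = stimulus_features, voxel_responses
    n_features = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha * I)^-1 X^T Y
    weights = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)
    return weights  # shape: (n_features, n_voxels)
```

Weights fit on one person's brain would predict a stranger's responses poorly, which is why the decoder's output for an untrained subject comes out as gibberish.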

Even if an authoritarian regime or a criminal codebreaker somehow got hold of both the technology and an individual it had been trained on, the captive would still have ways of defending their mental secrets.

According to the researchers, test subjects were able to actively resist the decoder's mind-reading efforts by thinking of animals or by imagining telling their own story.

Despite the technology's current limitations and the ability to resist, Tang suggested that "it’s important to be proactive by enacting policies that protect people and their privacy. ... Regulating what these devices can be used for is also very important."

"Nobody's brain should be decoded without their cooperation," Tang told the MIT Technology Review.

TheBlaze reported in January on a World Economic Forum event that hyped the era of "brain transparency."

"What you think, what you feel: It's all just data," said Nita Farahany, professor of law and philosophy at Duke Law School and faculty chair of the Duke MA in bioethics and science policy. "And large patterns can be decoded using artificial intelligence."

Farahany explained in her Jan. 19 presentation, titled "Ready for Brain Transparency?" that when people think or emote, "neurons are firing in your brain, emitting tiny little electrical discharges. As a particular thought takes form, hundreds of thousands of neurons fire in characteristic patterns that can be decoded with EEG, or electroencephalography, and AI-powered devices."

With optimism similar to that expressed by the UT researchers, Farahany said that the widespread adoption of these technologies will "change the way that we interact with other people and even how we understand ourselves."

