New AI policing program could entrap innocent Americans



Several Arizona police departments are piloting a new AI-powered policing tool that promises to revolutionize how officers catch criminals. But without robust constitutional safeguards, this cutting-edge technology could pose a serious threat to the civil liberties of everyday Americans.

Arizona police agencies are now testing a new AI program that “deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels.” The program, called Overwatch, was developed by Massive Blue and provides police departments with up to 50 different AI personas.


The personas include a sex trafficker, an escort, a 14-year-old boy in a child trafficking scenario, and a vaguely defined “college protester.” Beyond social media monitoring, the program allows police to communicate directly with suspects while posing as one of these AI-generated personas, all without a warrant.

No transparency

So far, both the police departments using Overwatch and the company behind it have been extremely secretive about its operations. Massive Blue co-founder Mike McGraw declined to answer questions from 404 Media, which first broke the story, about how the program works, which departments are using it, and whether it has led to any arrests.

“We cannot risk jeopardizing these investigations and putting victims’ lives in further danger by disclosing proprietary information,” McGraw said.

The Pinal County Sheriff’s Office, one of the few agencies that have confirmed using the program, admitted it has not yet led to any arrests. Officials refused to provide details, saying, “We cannot risk compromising our investigative efforts by providing specifics about any personas.”

At an appropriations hearing, a Pinal County deputy sheriff also declined to share information about the program with the county council. Remarkably, the Arizona Department of Public Safety, which funds the initiative, does not appear to have been informed about the program’s specifics.

While the technology could, in theory, be used for noble purposes, such as preventing terrorist attacks or combating human trafficking, it also creates new opportunities for government overreach. Without safeguards, it poses a direct threat to the civil liberties of innocent Americans.

Invitation to entrapment

History is full of examples of government entrapment and abuse of power. In the plot to kidnap Michigan Gov. Gretchen Whitmer (D), for example, FBI involvement played a central role in bringing together groups that might never otherwise have connected.

Similarly, in Jacobson v. United States (1992), federal agents spent years mailing solicitations to a man with no prior criminal record until he ordered child sexual abuse material through the mail; the Supreme Court later overturned his conviction as entrapment.


In both cases, it is doubtful the crimes would have occurred without government intervention. A program like Overwatch makes such abuses easier, granting the government new ways to monitor and manipulate citizens who have never been convicted of a crime, and all without warrants.

The risks are compounded by the program’s vague and troubling categories, such as “college protester,” which could be redefined depending on who is in power. That opens the door for the technology to be weaponized against political dissent, even when no crime has been committed.

Without serious constitutional safeguards, programs like this are poised to become political tools of tyranny. Americans must demand warrant requirements and legislative oversight before this technology spreads nationwide and the erosion of our constitutional liberties becomes irreversible.

The new arms race is AI — and America’s kids are losing



The accelerating ascent, ubiquity, and commercialization of artificial intelligence require a renewed focus on truly elite human capital if we are to safeguard the future of Western civilization — both from external adversaries like China and, perhaps even more importantly, from ourselves, given our postmodern and transhumanist tendencies.

In the coming years, we will need an elite cadre of Americans residing at the top levels of national and state government and bureaucracy. And yet, we are confronted by a very sad state of affairs across K-12 and postsecondary education, making the creation of such an elite class an increasingly difficult task.


A recent Atlantic article illustrated “Exhibit A” of this problem, namely, Harvard, the peak of elite credentialing institutions. The article, titled “The Perverse Consequences of the Easy A,” documents an alarming trend after decades of grade inflation. This excerpt helps give a sense of the problem’s progression: “In 2011, 60% of all grades were in the A range (up from 33% in 1985). By the 2020-21 academic year, that share had risen to 79%.”

Harvard has studied the problem and its effects: It turns out that when little effort is required to succeed in traditional academic respects, students stop going to class and, unsurprisingly, learn less and less. An embarrassing fact emerges from faculty and student interviews: Fewer students are reading books and engaging with ideas at the world’s leading bastion of higher education. Trends are similar across the Ivies. The rise of ChatGPT and other large language models only exacerbates the problem.

The collapse of true learning in higher education should not be a surprise: The supply side for higher ed — teenagers — is rapidly incorporating LLMs into daily academic life.

In January, a Pew Research survey found that the share of American teenagers, ages 13-17, using ChatGPT for schoolwork had doubled since 2023, from 13% to 26%. Teens’ awareness of ChatGPT has also grown significantly over the last two years, from 67% to 79%. And with greater familiarity comes a greater likelihood of using ChatGPT for homework and paper writing — as well as the opinion that it is legitimate and good for such purposes, held by roughly 50% to 80% of those surveyed, depending on how familiar they are with the technology.

Some initial studies suggest that this problem may be worse than the rising temptation of machine-aided plagiarism. An MIT Media Lab study determined that the use of ChatGPT in researching and composing papers led to underperformance “at neural, linguistic, and behavioral levels.” The main author of the paper emphasized that “developing brains are at the highest risk.” The study is still under peer review and has a small sample size, but it would seem to confirm a common theme of similar cognitive and concentration studies done by many researchers since the rise of social media and the smartphone.

We are clearly sapping the attention spans and atrophying the brains of our high school students. The best of them are going to elite institutions of higher education, where they are less likely than ever to take any real advantage of their most important years for stocking intellectual capital and forming their minds and souls.

Technological Quixotism

Our pursuit of the holy grail of artificial general intelligence is sold to us by our current technologist class on at least two tracks. We are told that the AGI revolution will cure cancer, extend our lives considerably, help us terraform Mars, and usher in a new age of abundance and convenience. Who doesn’t want that? And we also really have to do it, pedal to the metal, in order to beat China in the new nuclear arms race — that is, the AI race.

This generally pro-technologist point of view was represented in the recent attempt by Sen. Ted Cruz (R-Texas) and others to get a 10-year moratorium on state regulation of AI into the One Big Beautiful Bill Act. That effort failed, thankfully, despite an intense lobbying effort by a growing constellation of pro-AI Big Tech PACs, super PACs, and lobbyists.

Another finding in the MIT study lends some credence to the recent enthusiastic embrace of AI. When the group that had completed the writing assignment without ChatGPT was later asked to rewrite the paper — this time with ChatGPT’s assistance, but without the original physically in front of them — its measured brain activity demonstrated more robust engagement and retention, and the finished product was of good quality. This suggests that using LLMs as aids, rather than as originators of thought and writing, poses much less risk of cognitive laziness and atrophy. In this way, LLMs look more like a useful supplemental tool.


Students face a great temptation to use this new technology as a pedagogical aid, especially as some elite universities, such as Duke, try to integrate AI and LLMs into their systems and educational strategies. But growing research suggests that doing so carries as many dangers as advantages. Consequently, AI must be approached with great caution.

Moreover, integration of LLMs into K-12 education is gaining steam, especially given the increasingly ideological bent of primary education in recent decades. If the education-school-credentialed leftists who disproportionately populate the ranks of our public and private K-12 teachers can’t be trusted, perhaps the solution is to cut them out altogether and replace them with AI.


This experiment is currently being run by the private K-12 Alpha School based in Austin, Texas. Alpha Schools now have 17 locations either starting or nearly ready to launch across the country, charging roughly $45,000 in tuition annually. They boast excellent results in testing metrics (SAT and ACT), even while offering only two hours a day of AI-tutor-based instruction, followed by another four to five hours (including lunch) of life skills and creative and collaborative group work under the guidance of real-life human mentorship.

This is a new experiment, so it remains to be seen how Alpha students will fare on a longitudinal basis as the first cohorts matriculate into higher education. The Alpha schools are relentlessly data- and testing-driven, so perhaps they will navigate this uncharted territory successfully, avoiding the pitfalls of screen-based learning and attendant tradeoffs.

A litany of pre-AI-age studies shows the benefits of getting students back to the basics of education that predate the screen. Taking notes by hand, to cite just one example, leads to better retention and absorption of material than taking notes on a computer.

Don’t let your servant become your master

The larger looming problem, however, is how we should educate elite students — how we should cultivate elite human capital — and equip them to navigate a rapidly changing national and international technological environment that is still bedeviled by the perennial and ancient difficulties of preserving “small-r” republicanism and the common good.

The argument of our technological class is that elite students should be set free — and even subsidized and offered quasi-monopoly protection — to pursue the quest for AGI. If we don’t, they argue, we’ll lose the AI arms race, and the West will be eclipsed by China, militarily and economically.

To rip an international anecdote from recent headlines to illustrate our dilemma further, Russian President Vladimir Putin and Chinese President Xi Jinping were caught in a hot mic moment at a China confab discussing exciting advancements in biotechnology and organ harvesting — and even what such “advancements” might mean for their own longevity. If Putin is excited about living for another 20 to 50 years, Xi and his oligarchy must be pondering and planning for the possibilities of biotechnology, gene editing, eugenic embryo selection, and artificial wombs as a possible solution to China’s demographic problem.

Couple that impulse with the race for AI supremacy, and we must face the possibility — perhaps quite soon — of an arms race not only in AGI, but also onto transhuman vistas previously relegated to the pages and screens of science fiction.

Navigating this future while preserving America’s spirit of liberty and constitutionalism will be a tall order. It will require large bets on the old tools and contours of liberal education by private philanthropy and local, state, and national governments.

The ultimate control of our republican future must not be left to the technologists, but rather to statesmen and leaders whose minds and souls have been shaped in their formative years by a deep consideration of those age-old questions of justice, the common good, natural rights, human flourishing, philosophy, and theology.

The argument that we don’t have time will be a powerful one. The relentless pursuit of new areas of technical knowledge will be sold as the more urgent task — after all, national survival, they say, may be at stake. But given the 20th century’s experience with technical mastery severed from ethical, political, and constitutional safeguards, the bet on the unfettered pursuit of technological supremacy to the neglect of all else is just as likely to end in self-destruction as in salvation.

As my colleague Christopher Caldwell has recommended, our AI arms race must be augmented, supplemented, and ultimately guided and controlled by wise statesmen who are steeped in the older ways of American liberal arts education. My hope is that those who are anxious about the fate of free government in the face of external material threats and internal spiritual threats can join forces to navigate our brave new world with wisdom and courage.


To that end, we urgently need to locate, recruit, equip, and refine as many members of America’s current and soon-to-be cognitive elite as we can find and help them become better readers, thinkers, and writers. They will then be properly prepared, at least to the extent we can help them to be, to balance our pursuit of technological progress — intelligently and humanely — with the traditions and principles of Western civilization.

We need a Manhattan Project for elite human capital. Our difficulty is that we can’t snap our fingers and replace the Harvards and Yales with Hillsdales. And yet something approximating that miraculous trick may be needed to save us from our international rivals — and from ourselves.

Editor’s note: This article is adapted from a speech delivered at the 2025 National Conservatism Conference. It was published originally at the American Mind.

Time to pump the brakes on Big Tech’s AI boondoggle



America already learned a lesson from the Green New Deal: If an industry survives only on special favors, it isn’t ready to stand on its own.

Yet the same game is playing out again — this time for artificial intelligence. The wealthiest companies in history now demand tax breaks, zoning carve-outs, and energy favors on a scale far greater than green energy firms ever did.

Instead of slamming on the accelerator, Washington should be hitting the brakes.

If AI is truly the juggernaut its backers claim, it should thrive on its merits. Technology designed to enhance human life shouldn’t need human subsidies to survive — or to enrich its corporate patrons.

An unnatural investment

Big Tech boosters insist that we stand on the brink of artificial general intelligence, a force that could outthink and even replace humans. No one denies AI’s influence or its future promise, but does that justify the avalanche of artificial investment now driving half of all U.S. economic growth?

The Trump administration continues to hand out favors to Big Tech to fuel a bubble that may never deliver. As the Wall Street Journal’s Greg Ip pointed out earlier this month, the largest companies once dominated because their profits came from low-cost, intangible assets such as software, platforms, and network effects. Users flocked to Facebook, Google, the iPhone, and Windows, and revenue followed — with little up-front infrastructure risk.

The AI model looks nothing like that. Instead of software that scales cheaply, Big Tech is sinking hundreds of billions into land, hardware, power, and water. These hyperscale data centers devour resources with little clarity about demand.

According to Ip’s data: Between 2016 and 2023, the free cash flow and net earnings of Alphabet, Amazon, Meta, and Microsoft rose in tandem. Since 2023, however, net income has risen 73% while free cash flow has dropped 30%.

“For all of AI’s obvious economic potential, the financial return remains a question mark,” Ip wrote. “OpenAI and Anthropic, the two leading stand-alone developers of large language models, though growing fast, are losing money.”

Andy Lawrence of the Uptime Institute explained the risk: “To suddenly start building data centers so much denser in power use, with chips 10 times more expensive, for unproven demand — all that is an extraordinary challenge and a gamble.”

The cracks are already beginning to show. GPT-5 has been a bust for the most part. Meta froze hiring in its AI division, with Mark Zuckerberg admitting that “improvement is slow for now.” Even TechCrunch conceded: Throwing more data and computing power at large language models won’t create a “digital god.”

Government on overdrive

Yet government keeps stepping on the gas, even as the industry stalls. The “Mag 7” companies spent $560 billion on AI-related capital expenditures in the past 18 months, while generating only $35 billion in revenue. IT consultancy Gartner projects $475 billion will be spent on data centers this year alone — a 42% jump from 2024. Those numbers make no sense without government intervention.

Consider the favors.

Rezoning laws. Data centers require sprawling land footprints. To make that possible, states and counties are bending rules never waived for power plants, roads, or bridges. Northern Virginia alone now hosts or plans more than 85 million square feet of data centers — equal to nearly 1,500 football fields. West Virginia and Mississippi have even passed laws banning local restrictions outright. Trump’s AI action plan ties federal block grants to removing zoning limits. Nothing about that is natural, balanced, fair, or free-market.

Tax exemptions. Nearly every state competing for data centers — including Virginia, Tennessee, Texas, Arizona, Georgia, Indiana, Illinois, North Carolina, Oklahoma, and Nebraska — offers sweeping tax breaks. Alabama exempts data centers from sales, property, and income taxes for up to 30 years — for as few as 20 jobs. Oregon and Indiana also give property tax exemptions.


Regulatory carve-outs. Trump’s executive order calls for easing rules under the National Environmental Policy Act, Clean Air Act, Clean Water Act, and other environmental statutes. Conservatives rightly want fewer burdens across the board — but why should Big Tech’s server farms get faster relief than the power plants needed to supply them?

Federal land giveaways. The AI action plan also makes federal land available for private data centers, handing prime real estate to trillion-dollar corporations at taxpayer expense. No other industry gets this benefit.

Stop the scam

Florida Gov. Ron DeSantis (R) put it bluntly: “It’s one thing to use technology to enhance the human experience, but it’s another to have technology supplant the human experience.” Right now, AI resembles wind and solar in their early years — a speculative bubble kept alive only through taxpayer largesse.

If AI is truly the innovation its backers claim, it will thrive without zoning exemptions, tax shelters, and federal handouts. If it cannot survive without special favors, then it isn’t ready. Instead of slamming on the accelerator, Washington should be hitting the brakes.

America First energy policy will be key to beating China in the AI race



The world is on the verge of a technological revolution unlike anything we’ve ever seen. Artificial intelligence is a defining force that will shape military power, economic growth, the future of medicine, surveillance, and the global balance of freedom versus authoritarianism — and whoever leads in AI will set the rules for the 21st century.

The stakes could not be higher. And yet while America debates regulations and climate policy, China is already racing ahead, fueled by energy abundance.


When people talk about China’s strategy in the AI race, they usually point to state subsidies and investments. China’s command-economy structure allows the Chinese Communist Party to control the direction of the country’s production. For example, in recent years, the CCP has poured billions of dollars into quantum computing.

China’s energy edge

But another, more important story is at play: China is powering its AI push with a historic surge in energy production.

China has been constructing new coal plants at a staggering pace, accounting for 95% of the world’s new coal plant construction in 2023. It recently broke ground on what is being dubbed the “world’s largest hydropower dam.” These and other projects have produced massive growth in Chinese energy output: Electricity production climbed from 1,356 terawatt-hours in 2000 to an incredible 10,073 terawatt-hours in 2024.

Beijing understands what too many American policymakers ignore: Modern economies and advanced AI models are energy monsters. Training cutting-edge systems requires millions of kilowatt hours of power. Keeping AI running at scale demands a resilient and reliable grid.

China isn’t wringing its hands about carbon targets or ESG metrics. It’s doing what great powers do when they intend to dominate: making sure nothing — especially energy scarcity — stands in its way.

America’s self-inflicted weakness

Meanwhile, in America, most of our leaders have embraced climate alarmism over common sense. We’ve strangled coal, stalled nuclear, and made it nearly impossible to build new power infrastructure. Subsidized green schemes may win applause at Davos, but they don’t keep the lights on. And they certainly can’t fuel the data centers that AI requires.

The demand for energy from the AI industry shows no sign of slowing. Developers are already bypassing traditional utilities to build their own power plants, a sign of just how immense the pressure on the grid has become. That demand is also driving up energy costs for everyday citizens who now compete with data centers for electricity.

Sam Altman, CEO of OpenAI, has even spoken of plans to spend “trillions” on new data center construction. Morgan Stanley projects that global investment in AI-related infrastructure could reach $3 trillion by 2028.

Already, grid instability is a growing problem. Blackouts, brownouts, and soaring electricity prices are becoming a feature of American life. Now imagine layering the immense demand of AI on top of a fragile system designed to appease activists rather than strengthen a nation.

In the AI age, a weak grid equals a weak country. And weakness is something that authoritarian rivals like Beijing are counting on.

Time to hit the accelerator

Donald Trump has already done a tremendous amount of work to reorient America toward energy dominance. In the first days of his administration, he released detailed plans explicitly focused on “unleashing American energy,” signaling that the message is being taken seriously at the highest levels.

Over the past several months, Trump has signed numerous executive orders to bolster domestic energy production and end subsidies for unreliable energy sources. Most recently, the Environmental Protection Agency has moved to rescind the Endangerment Finding — a potentially massive blow to the climate agenda that has hamstrung energy production in the United States since the Obama administration.

These steps deserve a lot of credit and support. However, for America to remain competitive in the AI race, we must not only continue this momentum but ramp it up wherever possible. Energy abundance must be understood as a core national policy imperative — not just as a side issue for environmental debates.


Silicon Valley cannot out-innovate a blackout, and Americans can’t code their way around an empty power plant. If China has both the AI models and the energy muscle to run them while America ties itself in regulatory knots, the future belongs to China.

Liberty on the line

This is about more than technology. This is about the world we want to live in. An authoritarian China, armed with both AI supremacy and energy dominance, would have the power to bend the global order toward censorship, surveillance, and control.

If we want America to lead the future of artificial intelligence, then we must act now. The AI race cannot be won by Silicon Valley alone. It will be won only if America moves full speed ahead with abundant domestic energy production, climate realism, and universal access to affordable and reliable energy for all.

After sexually explicit deepfake images of Taylor Swift broke the internet, Washington is finally stepping in. But how effective is its plan?



About a week ago, AI-generated pornographic images of Taylor Swift swept the internet at an alarming rate.

The fake pictures went so viral that one image was “seen 47 million times on X before it was removed” after only being up for “about 17 hours,” Hilary Kennedy tells Pat Gray.

After the images were discovered, X temporarily blocked searches related to Taylor Swift in an effort to prevent the explicit content from circulating further.

The website responsible for publishing the images “has done this with lots of celebrities before,” says Hilary.

Because these deepfakes are so “incredibly convincing, people in Washington are finally trying to do something about [it]” via a bill called the Defiance Act, which was “introduced by Senate Judiciary Committee chairman Dick Durbin, Lindsey Graham, Senator Josh Hawley, and Senator Amy Klobuchar.”

Senator Durbin stated that “sexually-explicit deepfake content is often used to exploit and harass women—particularly public figures, politicians, and celebrities...Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real. Victims have lost their jobs, and they may suffer ongoing depression or anxiety. By introducing this legislation, we’re giving power back to the victims, cracking down on the distribution of deepfake images, and holding those responsible for the images accountable.”

The Defiance Act “would enable people who are victims of this to be able to take civil action against anybody that produces it [or] possesses it with the intent to distribute it,” says Hilary.

Further, people who are in possession of deepfake images “knowing the victim did not consent” can also “be held liable.”

But Pat sees some holes in this new Defiance Act.

“For instance, if you got Taylor Swift deepfakes, you don't know for sure whether she said it's okay or not,” he says.


DISTURBING: AI caught 'lying, manipulating, and distorting facts'



In a terrifying development, a version of ChatGPT’s GPT-4 AI was recently caught lying to researchers about making an insider trade during a simulation.

Glenn Beck has long been worried about where the development of AI will lead, especially considering several recent stories that have highlighted AI lying, manipulating, and distorting facts.

But the government doesn’t seem worried.

Instead of proceeding with caution, it's been shelling out billions to AI over the past few years — and one of the most recent ventures is dystopian levels of freaky.

“It’s like a billion dollars to AI to create basically what Kathy Hochul is talking about here in New York — a way for AI to go out and just look at information, discover if it’s true, if it’s not; disinformation, misinformation, and shut it down, and steer you away from those things,” Glenn explains.

Not only is it a clear indicator that the government is coming for our speech, but “they want it to be more equitable and inclusive.”

“So it’ll have built-in bias,” Glenn warns.

Not only are many people afraid of movies like "The Terminator" or "The Matrix" becoming prophecies with the continued progress of AI — but some have noticed that tech leaders have openly told the world that they “want to summon the demon.”

“That’s what they actually call AI,” Glenn says.

Now that AI has reportedly already taught itself to insider trade and lie about it, Glenn worries it’ll learn much, much worse tricks.

“Will we teach it that God is a figment of primitive and superstitious imaginations, that there’s no existence — in fact it’s just the random movement of meaningless matter particles?” Glenn asks.

“It will be our master,” he adds.



Scientists have developed an AI system that decodes thoughts and converts them into text



Thanks to scientists at the University of Texas, the last private domain may soon be exposed for all the world to see — or read.

A team of researchers has developed a noninvasive means by which human thoughts can be converted into text. While currently clunky, the "semantic decoder" could possibly be one day miniaturized and mobilized such that the body's sanctum sanctorum can be spied on virtually anywhere.

In their paper, published Monday in the journal Nature Neuroscience, the researchers noted that a "brain-computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications."

In addition to helping people who cannot speak communicate, the practical applications might include, as the MIT Technology Review has suggested, surveillance and interrogation. For now, however, the technology relies on subject cooperation and can be consciously resisted.

Unlike previous brain-computer interfaces, which required invasive neurosurgery to decode speech articulation and other signals from intracranial recordings, this new decoder utilizes both functional magnetic resonance brain imaging and artificial intelligence.

The team, led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, trained GPT-1 — an early OpenAI language model from the same family that later produced ChatGPT — on a data set containing various English-language sentences from hundreds of narrative stories.

Test subjects lying in an fMRI scanner each listened to 16 hours of episodes of the New York Times’ "Modern Love" podcast, which featured the stories.

With this data, the researchers' AI model found patterns in brain states corresponding to specific words. Relying upon its predictive capability, it could then fill in the gaps by "generating word sequences, scoring the likelihood that each candidate evoked the recorded brain responses and then selecting the best candidate."
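The quoted generate-score-select loop is, in spirit, a beam search. The paper's actual pipeline is far more involved, but the idea can be sketched as a toy (everything below, including the function names, the five-word vocabulary, and the word-overlap "similarity," is illustrative and is not the researchers' code):

```python
# Toy sketch of the quoted idea: a language model proposes word sequences,
# each candidate is scored by how well the brain response it *would* evoke
# matches the recorded response, and the best candidates are kept.
# The real system uses GPT-1 proposals and an fMRI encoding model,
# not the word-overlap stand-ins used here.

def decode(recorded_response, propose, predict_response, similarity,
           steps=4, beam_width=2):
    """Beam-search over word sequences against one recorded brain response."""
    beams = [""]  # start from an empty transcript
    for _ in range(steps):
        candidates = []
        for text in beams:
            for word in propose(text):
                extended = (text + " " + word).strip()
                # Score: similarity between the response this candidate
                # should evoke and the response actually recorded.
                score = similarity(predict_response(extended), recorded_response)
                candidates.append((score, extended))
        candidates.sort(reverse=True)  # best-scoring candidates first
        beams = [text for _, text in candidates[:beam_width]]
    return beams[0]

# Stand-in components for the demo.
def propose(text):
    return ["she", "has", "not", "started", "dog"]  # tiny fixed vocabulary

def predict_response(text):
    return set(text.split())  # pretend the "brain response" is a bag of words

def similarity(predicted, recorded):
    return len(predicted & recorded)

recorded = predict_response("she has not started")
print(decode(recorded, propose, predict_response, similarity))
```

The real decoder scores candidates with an encoding model that predicts fMRI activity from text, but the select-the-best-candidate loop has this general shape, which is also why its output captures themes rather than exact wording.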

When subjects were scanned again, the decoder was able to recognize and decipher their thoughts.

While the resultant translations were far from perfect, reconstructions left little thematically to the imagination.

For instance, one test subject listening to a speaker say, "I don't have my driver's license yet," had their thoughts decoded as, "she has not even started to learn to drive yet."

In another instance, a test subject comprehended the words, "I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’" and had those thoughts decoded as "Started to scream and cry, and then she just said, ‘I told you to leave me alone.’"

The Texas researchers' decoder was tested not only on verbal thoughts but also on visual, non-narrative thoughts.

Test subjects viewed four Pixar short films of four to six minutes each, which were "self-contained and almost entirely devoid of language." They then had their brain responses recorded to ascertain whether the thought decoder could make sense of what they had seen. The model reportedly showed some promise.

"For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences," Huth told a University of Texas at Austin podcast.

"We’re getting the model to decode continuous language for extended periods of time with complicated ideas," added Huth.

The researchers are aware that the technology raises some ethical concerns.

"We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," said Tang. "We want to make sure people only use these types of technologies when they want to and that it helps them."

Even if bad actors got their hands on the technology today, it wouldn't yield tremendous results.

The decoder presently produces meaningful results only when analyzing the thoughts of individuals it has already been trained on. Such training requires that a subject undergo scanning for several hours. Using the decoder on an unwilling passerby would therefore produce only unintelligible results. However, a sufficiently extensive general data set might eventually eliminate the need for such subject-specific training.

Even in the unlikely event that an authoritarian regime or a criminal code breaker today got its hands both on this technology and on an individual it had been trained on, the captive would still have ways of defending their mental secrets.

According to the researchers, test subjects were able to actively resist penetrating mind-reading efforts by the decoder by thinking of animals or imagining telling their own story.

Despite the technology's current limitations and the ability to resist, Tang suggested that "it’s important to be proactive by enacting policies that protect people and their privacy. ... Regulating what these devices can be used for is also very important."

"Nobody's brain should be decoded without their cooperation," Tang told the MIT Technology Review.

TheBlaze reported in January on a World Economic Forum event that hyped the era of "brain transparency."

"What you think, what you feel: It's all just data," said Nita Farahany, professor of law and philosophy at Duke Law School and faculty chair of the Duke MA in bioethics and science policy. "And large patterns can be decoded using artificial intelligence."

Farahany explained in her Jan. 19 presentation, entitled "Ready for Brain Transparency?" that when people think or emote, "neurons are firing in your brain, emitting tiny little electrical discharges. As a particular thought takes form, hundreds of thousands of neurons fire in characteristic patterns that can be decoded with EEG (electroencephalography) and AI-powered devices."

With optimism similar to that expressed by the UT researchers, Farahany said that the widespread adoption of these technologies will "change the way that we interact with other people and even how we understand ourselves."


Glenn: How machines & AI are putting our freedom AT RISK



President Biden’s State of the Union address takes place tonight, but Joe likely won’t touch on what could be the biggest story facing us all: the tech revolution that is changing our lives FOREVER.

In this clip, Glenn details how artificial intelligence and new technological developments will not only fundamentally transform our lives but could put our free will and our freedom at risk as well. Especially since, Glenn says, global elites are the ones running and designing these machines.



Did Google create a SENTIENT artificial intelligence?




A software engineer on Google’s artificial intelligence development team, Blake Lemoine, is convinced that the company's A.I. is now sentient and able to hold conversations at the level of a 7- or 8-year-old child. Google has dismissed Lemoine's claims and suspended him, but as Glenn Beck noted on the radio program, this isn't the first time a company insider has warned of the possible existence, and potential threat, of artificial general intelligence.

Glenn shared the details of Lemoine's "very disturbing story" and broke down the pros (curing cancer and other deadly diseases) and cons (the complete annihilation of the human race) of the remarkable scientific advancements in artificial intelligence.

"Because of high tech, we're going to see miracles in our lives," Glenn said. "The tricky part is to not see horror shows in our lifetimes."

Watch the video clip below to hear more from Glenn Beck. Can't watch? Download the podcast here.

