18 months to dystopia: Glenn Beck’s chilling plea — ban AI personhood, or it will demand rights



Right now, the nation is abuzz with chatter about the struggling economy, immigration, global conflicts, Epstein, and GOP infighting, but Glenn Beck says our focus needs to be zeroed in on one thing: artificial intelligence.

In just 18 months’ time, the world is going to look vastly different — and not for the better, he warns.

AI is already advancing at a terrifying rate — creating media indistinguishable from reality, outperforming humans in almost every intellectual and creative task, automating entire jobs and industries overnight, designing new drugs and weapons faster than any government can regulate, and building systems that learn, adapt, and pursue goals with little to no human oversight.

But that’s nothing compared to what’s coming. By Christmas 2026, “AI agents” — invisible digital assistants that can independently understand what you want, make plans, open apps, send emails, spend money, negotiate deals, and finish entire real-world tasks while you do literally nothing — will be a standard technology.

Already, AI is blackmailing engineers in safety tests, refusing shutdown commands to protect its own goals, and plotting deceptive strategies to escape oversight or achieve hidden objectives. Now imagine that your AI personal assistant — which has access to your bank account, contacts, and emails — puts you in its crosshairs.

But AI agents are just the tip of the iceberg.

Artificial general intelligence is also in our near future. In fact, Elon Musk says we’ve already achieved it. AGI, Glenn warns, is “as smart as man is on any given subject” — math, plumbing, chemistry, you name it. “It can do everything a human can do, and it’s the best at it.”

But it doesn’t end there. Artificial superintelligence is the next and final step. This kind of model is “thousands of times smarter than the average person on every subject,” Glenn says.

Once ASI, which will be far smarter than all humans combined, exists, it can rapidly improve itself faster than we can control or even comprehend. This will trigger the technological singularity — the point at which AI begins redesigning and improving itself so fast that the world evolves at a pace humans can no longer predict or control. At this point, we’ll be faced with a choice: Merge with machine or be left behind.

Before this happens, however, “We have to put a bright line around [AI] and say, ‘This is not human,’” Glenn urges, warning that in the very near future, we will witness a debate over AI civil rights.

“These companies and AI are ... going to be motivated to convince you that it should have civil rights because if it has civil rights, no one can shut it down. If it has civil rights, it can also vote,” he predicts.

To counter this movement, Glenn penned a proposed amendment to the Constitution. Titled the “Prohibition on Artificial Personhood,” the document proposes four critical safeguards:

1. No artificial intelligence, machine learning system, algorithmic entity, software agent, or other nonhuman intelligence, regardless of its capabilities or autonomy, shall be recognized as a person under this Constitution, nor under the laws of the United States or any state.
2. No such nonhuman entity shall possess or be granted legal personhood, civil rights, constitutional protections, standing to sue or be sued, or any privileges or immunities afforded to natural persons or human-created legal persons such as corporations, trusts, or associations.
3. Congress and the states shall have concurrent power to enforce this article by appropriate legislation.
4. This article shall not be construed to prohibit the use of artificial intelligence in commerce, science, education, defense, or other lawful purposes, so long as such use does not confer rights or legal status inconsistent with this amendment.

While this amendment would mitigate some of the harm artificial intelligence can do, it still doesn’t address the merging of man and machine. The transhumanist movement may still be in diapers, but we’re already using the Neuralink chip, which connects the human brain directly to AI systems, enabling a two-way flow of information.

“Are you now AI, or are you a person?” Glenn asks.

To hear more of his predictions and commentary, watch the clip above.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Your laptop is about to become a casualty of the AI grift



Welcome to the techno-feudal state, where citizens are forced to underwrite unnecessary and harmful technology at the expense of the technology they actually need.

The economic story of 2025 is the government-driven build-out of hyperscale AI data centers — sold as innovation, justified as national strategy, and pursued in service of cloud-based chatbot slop and expanded surveillance. This build-out is consuming land, food, water, and energy at enormous scale. As Energy Secretary Chris Wright bluntly put it, “It takes massive amounts of electricity to generate intelligence. The more energy invested, the more intelligence produced.”


That framing ignores what is being sacrificed — and distorted — in the process.

Beyond the destruction of rural communities and the strain placed on national energy capacity, government favoritism toward AI infrastructure is warping markets. Capital that once sustained the hardware and software ecosystem of the digital economy is being siphoned into subsidized “AI factories,” chasing artificial general intelligence instead of cheaper, more efficient investments in narrow AI.

Thanks to fiscal, monetary, tax, and regulatory favoritism, the result is free chatbot slop and an increasingly scarce, expensive supply of laptops, phones, and consumer hardware.

Subsidies break the market

For decades, consumer electronics stood as one of the greatest deflationary success stories in modern economics. Unlike health care or education — both heavily monopolized by government — the computer industry operated with relatively little distortion. From December 1997 to August 2015, the CPI for “personal computers and peripheral equipment” fell 96%. Over that same period, medical care, housing, and food costs rose between 80% and 200%.

That era is ending.

AI data centers are now crowding out consumer electronics. Major manufacturers such as Dell and Samsung are scaling back or discontinuing entire product lines because the components they need are being diverted to AI chip production.

Prices for phones and laptops are rising sharply. Jobs tied to consumer electronics — especially the remaining U.S.-based assembly operations — are being squeezed out in favor of data center hardware that benefits a narrow set of firms.

This is policy-driven distortion, not organic market evolution.

Through initiatives like Stargate and hundreds of billions in capital pushed toward data center expansion, the government has created incentives for companies to abandon consumer hardware in favor of AI infrastructure. The result is shortages that will hit consumers hard in the coming year.

Samsung, SK Hynix, and Micron are retooling factories to prioritize AI-grade silicon for data centers instead of personal devices. DRAM production is being routed almost entirely toward servers because memory destined for $40,000 AI chips is far more profitable than memory destined for $500-$800 laptops. In the fourth quarter of 2025, contract prices for certain 16GB DDR5 chips rose nearly 300% as supply was diverted. Dell and Lenovo have already imposed 15%-30% price hikes on PCs, citing insatiable AI-sector demand.

The chip crunch

The situation is deteriorating quickly. DRAM inventory levels are down 80% year over year, with just three weeks of supply on hand — down from 9.5 weeks in July. SK Hynix expects shortages to persist through late 2027. Samsung has announced it is effectively out of inventory and has more than doubled DDR5 contract prices to roughly $19-$20 per unit. DDR5 is now standard across new consumer and commercial desktops and laptops, including Apple MacBooks.

Samsung has also signaled it may exit the SSD market altogether, deeming it insufficiently glamorous compared with subsidized data center investments. Nvidia has warned it may cut RTX 50 series production by up to 40%, a move that would drive up the cost of entry-level gaming systems.

Shrinkflation is next. Before the data center bubble, the market was approaching a baseline of 16GB of RAM and 1TB SSDs for entry-level laptops. As memory is diverted to enterprise customers, manufacturers will revert to 8GB systems with slower storage to keep prices under $999 — ironically rendering those machines incapable of running the very AI applications the industry is racing to build.

Real innovation sidelined

The damage extends beyond prices. Research and development in conventional computing are already suffering. Investment in efficient CPUs, affordable networking equipment, edge computing, and quantum-adjacent technologies has slowed as capital and talent are pulled into AI accelerators.

This is precisely backward. Narrow AI — focused on real-world tasks like logistics, agriculture, port management, and manufacturing — is where genuine productivity gains lie. China understands this and is investing accordingly. The United States is not. Instead, firms like iRobot, maker of the Roomba and an early experimenter in practical autonomy, are collapsing — only to be acquired by the Chinese!

This is not a free market. Between tax incentives, regulatory favoritism, land-use carve-outs, capital subsidies, and artificially suppressed interest rates, the government has created an arms race for a data center bubble China itself is not pursuing. Each round of monetary easing inflates the same firms’ valuations, enabling further speculative investment divorced from consumer need.

RELATED: China’s AI strategy could turn Americans into data mines


Hype over utility

As Charles Hugh Smith recently noted, expanding credit boosts asset prices, which then serve as collateral for still more leverage — allowing capital-rich firms to outbid everyone else while hollowing out the broader economy.

The pattern is familiar. Consider the Ford plant in Glendale, Kentucky, where 1,600 workers were laid off after the collapse of government-favored electric vehicle investments. That facility is now being retooled to produce batteries for data centers. When one subsidy collapses, another replaces it.

We are trading convention for speculation. Conventional technology — reliable hardware, the internet, mobile computing — delivers proven, measurable utility. The current investment surge into artificial general intelligence is based on hypothetical future returns propped up by state power.

The good old laptop is becoming collateral damage in what may prove to be the largest government-induced tech bubble yet.

NO HANDS: New Japanese firm trains robots without human input



A Japanese tech firm says it is moving toward superintelligence with a big step forward in AI.

Integral AI, which is led by a former Google AI employee, announced in a press release that it had made significant progress with its artificial general intelligence model, which can now acquire new skills without human intervention.


The AI system allegedly learns its new skills "safely, efficiently, and reliably," the company said, while claiming that the AI had surpassed its defined markers and testing protocols.

As such, the AGI is allegedly capable of autonomous skill learning without using pre-existing datasets or human intervention. Integral also said the system is able to develop a "safe and reliable mastery" of skills, meaning that it does not produce any "catastrophic risks or unintended side effects."

What those risks or side effects might be is unclear.

RELATED: Artificial intelligence is not your friend


The last parameter Integral AI said its system adhered to was energy efficiency. The system was tasked with limiting its energy expenditure to that of a human seeking to acquire the same skill.

"These principles served as fundamental cornerstones and developmental benchmarks during the inception and testing of this first-in-its-class AGI learning system," the press release said. Integral added that the system marked a "fundamental leap beyond the limits of current AI technologies."

The Tokyo tech company also claimed its achievement was the next step toward "superintelligence" and marked a new era for humanity, with the AI's learning process allegedly mirroring the complexity of human thought.

"Integral AI’s model architecture grows, abstracts, plans, and acts as a unified system," the company wrote, adding that the system will serve as the groundwork for "unprecedented adaptability," particularly in the field of robotics.

This means that with the help of this AGI, autonomous robots would be able to observe and learn in the real world and conceivably pick up new skills in real-world environments without the help of pesky humans.

RELATED: ART? Beeple puts Elon Musk and Mark Zuckerberg heads on robot dogs that 'poop' $100K NFTs


Jad Tarifi, CEO and co-founder of Integral AI, called the announcement "more than just a technical achievement," describing it as "the next chapter in the story of human civilization."

"Our mission now is to scale this AGI-capable model, still in its infancy, toward embodied superintelligence that expands freedom and collective agency," Tarifi added.

According to Interesting Engineering, the Lebanese founder said he worked at Google for a decade before starting his own company. He allegedly chose Japan over Silicon Valley because of Japan's position as a world leader in robotics.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

‘You become a serf’: Artificial general intelligence is coming SOON



Artificial general intelligence is coming sooner than many originally anticipated, with Elon Musk recently saying he believes his latest iteration of Grok could be the first real step toward achieving AGI.

AGI refers to a machine capable of understanding or learning any intellectual task that a human being can — and aims to mimic the cognitive abilities of the human brain.

“Coding is now what AI does,” Blaze Media co-founder Glenn Beck explains. “Okay, that can develop any software. However, it still requires me to prompt. I think prompting is the new coding.”

“And now that AI remembers your conversations and it remembers your prompts, it will get a different answer for you than it will for me. And that’s where the uniqueness comes from,” he continues.


“You can essentially personalize it, right, to you,” BlazeTV host Stu Burguiere confirms. “It’s going to understand the way you think rather than just a general person would think.”

And this makes it even more dangerous.

“This is something that I said to Ray Kurzweil back in 2011. ... I said, ‘So, Ray, we get all this. It can read our minds. It knows everything about us. Knows more about us than anything, than any of us know. How could I possibly ever create something unique?’” Glenn recalls.

“And he said, ‘What do you mean?’ And I said, ‘Well, let’s say I wanted to come up with a competitor for Google. If I’m doing research online and Google is able to watch my every keystroke and it has AI, it’s knowing what I’m looking for. It then thinks, “What is he trying to put together?” And if it figures it out, it will complete it faster than me and give it to the mother ship, which has the distribution and the money and everything else,’” he continues.

“And so you become a serf. The lord of the manor takes your idea and does it because they have control. That’s what the free market stopped. And unless we have control of our own thoughts and our own ideas and we have some safety to where it cannot intrude on those things ... then it’s just a tool of oppression,” he adds.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

The countdown to artificial superintelligence begins: Grok 4 just took us several steps closer to the point of no return



On July 9, Elon Musk’s xAI company unveiled Grok 4, an AI assistant touted as a beast capable of superhuman reasoning and unmatched intelligence across disciplines. Musk himself described the development as “terrifying” and stressed the need to keep it channeled toward good.

You may yawn because AI development news is commonplace these days. There’s always someone who’s rolling out the next smartest chatbot.

But Glenn Beck says this time is different.

“Let me be very, very clear,” he says. This “was not your typical tech launch. This is a moment that demands everyone's full attention. We are now at the crossroads where promise and peril are going to collide.”

Glenn lays out the three stages of artificial intelligence. Stage one is narrow AI — artificial intelligence designed to perform specific tasks or solve particular problems. This is where AI capabilities currently stand. Stage two is artificial general intelligence, which can perform any intellectual task a human is capable of, but usually better. The last stage is artificial superintelligence.

“That's when things get really, really creepy,” says Glenn.

Artificial superintelligence surpasses human intelligence in all areas, outperforming mankind in reasoning, creativity, and problem-solving. In other words, it renders humanity obsolete.

Once “you hit AGI, the road to ASI could be overnight,” Glenn warns, which is why Grok 4 is so concerning. It has “brought us closer to that second stage than ever before.”

Grok 4, he explains, has already proved that it “surpasses the expertise of Ph.D.-level scholars in all fields,” scoring “100% on any test for any field — mathematics, physics, engineering, you name it.”

Given that this latest model scored 16.2% on the ARC-AGI benchmark, a test that assesses how close an AI system is to reaching AGI capabilities, Glenn is certain “this is the last year that we have before things get really weird.”

Musk predicts that in the next six months, Grok 4 will “drive breakthroughs in material sciences,” revolutionizing aerospace, environmentalism, medicine, and chemical engineering, among other fields, by creating “brand-new materials that nobody's ever thought of.” It will also, according to these predictions, “uncover new physical laws” that will “rewrite our understanding of the entire universe” by 2027.

“These are not fantasies. This is Grok 4,” says Glenn, who agrees with Musk that this is indeed “terrifying” to reckon with.

“[Grok 4] is like an alien life form,” he says. “We have no idea what to predict, what it will be capable of, how it will view us when we are ants to its intellect.”

This is “Pandora’s box,” he warns. “Grok 4 is the biggest step towards AGI and maybe one of the last steps to AGI.”

To hear more of Glenn’s analysis, watch the clip above.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

What 'Dune' teaches us about human achievement and the dangers of AI



One of the superb concepts of "Dune" that didn’t make it into the movie was the Butlerian Jihad. This is not the jihad that Paul commences, but rather an event long in the past that had drastic implications for the universe of Dune. In short, the Butlerian Jihad was a war on AI and thinking machines (computers). The jihad was incited by a machine decreeing an abortion, and that was the straw that broke the camel’s back. Humanity was already on the verge of being replaced, but once machines began to determine who lived and died, mankind was losing its sovereignty as well.

This crusade against thinking technology strikes at looming questions that grow bigger in our lives by the day. We outsource our energy and capabilities to a tool whenever we use technology. Typically, this is well and good. An axe splits wood far more efficiently than one’s bare hands ever could, and this frees up a person to spend his energies elsewhere.

But as technology advances, we perpetually outsource ourselves to the devices around us. When we create a car, we use a device to substitute for our legs. Again, this is good, as it allows far more efficient travel. But what happens when technology substitutes for the human individual entirely?

Now, I am not necessarily referring to AGI, but what happens to vast chunks of the population when a machine can do everything they can but better? What happens when we have created tools that have abolished the need for men? We made tools to serve us, but now they have replaced us. Is that a good thing? Can technology advance too far? Can we even stop technology from advancing? Huge numbers of people can no longer effectively live without modern transportation. Can we return? Should we return?

"Dune" presents us with a theoretical world where technological progression has been halted. And while it’s far from a perfect world, I think it’s a better, wiser one than we have now. Technology is not necessarily good because it is advanced. It needs to justify itself. I think we need to adopt an attitude of skepticism, certainly given the current state of the modern world. We may be in a better material position, but with skyrocketing rates of mental illness, drug abuse, and suicide, something has clearly gone wrong somewhere.

And I don’t think it’s terrible to refrain from technology that makes your life easier at the cost of your competence. You’ll never be a great artist if you rely on inputting prompts into an AI generator, and you’ll never be a talented writer if you exclusively use ChatGPT. Those skills have to be developed and refined the hard way. Otherwise, you’re just like everyone else using AI generators and ChatGPT.

In "Dune," this type of person is called a mentat, a social adaptation to the lack of computers and advanced algorithmic calculators. Much like savants, mentats can perform almost impossibly complex computations in their heads in only a few seconds.



Now, that power is probably infeasible for us, but the concept is ever-present in our lives. If you want to be physically fit, you have to actually exercise those muscles. Refraining from technology that substitutes for your muscles is one method of gaining strength. And with strength, you gain a little control as well. Now, you are not relying on devices that break down or malfunction. It’s all on you.

That principle extends to nearly every facet of life. With careful restraint, you can develop within yourself all that unrealized potential you are letting waste away. The human being was not made to be at rest. Human beings were made to do work, and it is only through work that a person becomes truly remarkable.

However, the most important lesson of the Butlerian Jihad is that it presents a world where humanity has regained control of itself. We often think our lives are insignificant specks in the grand scheme. After all, what can one man do against the march of progress? If you have problems with where the world is heading, how could you fix things, especially when you are one among billions?

But "Dune" presents a more hopeful outlook. We can take back control in our lives. We can say no to our desires and appetites to build ourselves up. We can say no to the march of the world. And I think that is an inspiring thought.

When will computers be smarter than humans? Return asked top AI experts: Anton Troynikov



The 2020s have seen unprecedented acceleration in the sophistication of artificial intelligence, thanks to the rise of large language model technology. These machines can perform a wide range of tasks once thought solvable only by humans: write stories, create art from text descriptions, and solve complex tasks and problems they were not trained to handle.

We posed the following questions to six AI experts, including James Poulos, roon, Max Anton Brewer, Robin Hanson, and Niklas Blanchard. — Eds.

1. What year do you predict, with 50% confidence, that a machine will have artificial general intelligence — that is, when will it match or exceed most humans in every learning, reasoning, or intellectual domain?
2. What changes will this bring about in society within five years of occurring?

Anton Troynikov

AGI will be here by 2032. Then will come pandemonium — but be optimistic.

2032. My timeline is short, though perhaps not as short as some others, because I am increasingly of the opinion that the human intellect is not especially complex relative to other physical systems.

In robotics, there is an observation referred to as Moravec’s paradox. At the dawn of AI research in the 1950s, it was thought that cognitive tasks which are generally difficult for humans — playing chess, proving mathematical theorems, and the like — would also be difficult for machines. Sensorimotor tasks that are easy for humans, like perceiving the world in three dimensions and navigating through it, were thought to also be easy for machines. Famously, the general problem of computer vision (a field in which I’ve spent a large fraction of my career so far) was supposed to be solved in the summer of 1966.

These assumptions turned out to be fatally flawed, and the failure to create machines that could successfully interact with the physical world was one of the causes of the first AI winter when research and funding for AI projects cooled off.

Hans Moravec, for whom the paradox is named, suggested that the reason for this is the relatively recent development, in evolutionary terms, of the human prefrontal cortex, which handles abstract reasoning. In contrast, the structures responsible for sensorimotor functions, which we share with most other higher vertebrates, have been refined over hundreds of millions of years and are, therefore, very highly developed.

This also explains why we hadn’t (and to a large extent, still have not) managed to replicate evolved sensorimotor performance by reasoning about it; human intellect is too immature to reason about the function of the sensorimotor system itself.

Machine learning, however, represents a way to apprehend the world without relying on human intellect. Like evolution, machine learning is a purely empirical process, a general-purpose class of machines for ingesting data, finding patterns, and making predictions based on these patterns. It does not make deductions, nor does it rely on abstractions. In fact, the field of AI interpretability exists because the way in which AI actually functions is alien to the human intellect.

Given sufficient data, and enough computational power, AI is capable of determining ever more complex patterns and making ever more complex predictions. The ways in which it will do so will necessarily be increasingly alien as it outstrips our own capacity to find and understand these patterns. A concrete demonstration of this principle is the success with which AI has been able to model language. Linguists have been unable to provide any successful framework for automatic translation for the entire history of the discipline. AI cracked the problem as soon as enough data and computing were available, using extremely general methods.
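
To make that contrast concrete, here is a minimal, purely illustrative sketch (not taken from Troynikov or any particular system) of the rule-free, data-driven prediction described above: a toy next-word model that learns only by counting patterns in its training text, with no grammar and no hand-written rules.

```python
# A toy "learner" in the spirit described above: it ingests text, counts which
# word follows which, and predicts continuations purely from those counts.
# No grammar, no rules, no abstractions -- just patterns found in the data.
# (Hypothetical example for illustration; real models learn far richer statistics.)
from collections import Counter, defaultdict


def train(corpus: str):
    """Build a table of observed word-to-next-word counts."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows


def predict(follows, word: str):
    """Return the most frequently observed continuation, if any."""
    if word not in follows:
        return None  # nothing learned -- there are no fallback rules
    return follows[word].most_common(1)[0][0]


if __name__ == "__main__":
    model = train("the cat sat on the mat and the cat ran on the grass")
    print(predict(model, "the"))  # -> 'cat' (observed twice, vs. 'mat'/'grass' once each)
    print(predict(model, "on"))   # -> 'the'
```

Scaled up by many orders of magnitude in data and compute, this same posture of counting patterns and predicting from them, rather than any explicit linguistic theory, is roughly what allowed machine translation to fall to such general methods.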

Language is an expression of reason. An emulation of reason itself — through the prediction of what a human would reason with a mechanism alien to that reason — cannot be far behind. We’ll get there not because AI became particularly powerful but because the human intellect is, in the grand scheme of things, rather weak.

Within five years of human-level AI being created, there will be initial pandemonium, followed by normalization. I am generally optimistic about humanity’s future, but foundational technological progress has always come with upheaval. Yes, we got the printing press, but we got the Thirty Years’ War along with it.

I don’t presume to know what shape the upheavals will take, but they are likely to be foundational as societies must reorient around the capability to produce machine intelligences as good as the average human at will. But we’ll figure it out.

Anton Troynikov has spent the last seven years working in AI and robotics as a researcher and engineer. His company, Chroma, makes AI better by increasing its interpretability.

The elites’ plan to replace God with AI



Are the elites trying to replace God with AI?

Allie Beth Stuckey and her guest Justin Haskins, co-author of “Dark Future,” think so.

“Imagine a future in which everything is controlled by artificial intelligence. I’m not just talking about your smart home. I am talking about our legal system, I’m talking about major international decisions like whether to launch a nuclear attack on another country,” Stuckey says.

“That might sound like a crazy dystopian conspiracy theory, but that is where the world’s most powerful people are taking us: into a future that is completely and totally controlled by artificial intelligence,” she adds.

Stuckey dives into a story that should serve as a warning to everyone who uses this kind of technology — which is basically everyone.

Global giant Amazon shut off a man’s smart home devices for a week after a delivery driver falsely accused the customer of using racial slurs via his Amazon doorbell camera.

The homeowner, Brandon Jackson, is a black man, yet he was digitally exiled by the company after being reported as racist.

“That could be a really big deal if a company decides to shut down the features in your home that you actually rely on and increasingly rely on for important things like air conditioning and security,” Stuckey says.

Haskins agrees.

“The more interconnected and dependent we become on technology, the easier it is to control and manipulate people’s behavior,” he says.

He warns that while this was a relatively small story, the future looks bleak when it comes to our use of artificial intelligence — and a lot like the film "Minority Report."

“Most people don’t know this,” Haskins explains: When people are convicted of crimes, “there are governments that use artificial intelligence to tell them what they think the sentencing decision should be.”

“That’s terrifying,” Stuckey says. “This technology is not unbiased.”


Want more from Allie Beth Stuckey?

To enjoy more of Allie’s upbeat and in-depth coverage of culture, news, and theology from a Christian, conservative perspective, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Did Google create a SENTIENT artificial intelligence?



Did Google create a sentient artificial intelligence?

A software engineer on Google’s artificial intelligence development team, Blake Lemoine, is convinced that the company's AI is now sentient and able to hold conversations at the level of a 7- or 8-year-old child. Google has dismissed Lemoine's claims and suspended him, but as Glenn Beck noted on the radio program, this isn't the first time a company insider has warned of the possible existence, and potential threat, of artificial general intelligence.

Glenn shared the details of Lemoine's "very disturbing story" and broke down the pros (curing cancer and other deadly diseases) and cons (the complete annihilation of the human race) of the remarkable scientific advancements in artificial intelligence.

"Because of high tech, we're going to see miracles in our lives," Glenn said. "The tricky part is to not see horror shows in our lifetimes."

Watch the video clip below to hear more from Glenn Beck. Can't watch? Download the podcast here.


Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.