Glenn Beck warns: Amazon layoffs & Bill Gates' climate flip signal the energy war splitting America in two



In September, Amazon raised warehouse worker pay to over $30/hour, framing the wage hike as an effort to enhance employees' experience. Earlier this week, however, the company undercut that human-centric initiative when it abruptly slashed 14,000 corporate jobs as part of its plan to invest heavily in artificial intelligence.

Longtime climate change fearmonger Bill Gates also published a memo on his Gates Notes blog, where he wrote: “Although climate change will have serious consequences — particularly for people in the poorest countries — it will not lead to humanity's demise” — a stunning contradiction of his yearslong alarmist rhetoric.

While Amazon and Gates’ shifting narratives may appear unrelated, Glenn Beck says they both hint at a dark future on the horizon.

And it all centers around power — but not the political or economic kind.

“I mean energy,” says Glenn. “The world is starving for energy.”

But energy means different things to different people. Amazon’s push for AI-driven commerce represents one side of the playing field — the side that craves unrestricted energy abundance via fossil fuels and nuclear power. Gates' long history of climate alarmism, though recently softened, embodies the other side's push for "green" energy only — restrictive renewables and emission caps that will surely starve innovation.

It all boils down to “global fascism on one side” and “Marxist degrowth” on the other, says Glenn, noting both frameworks are deeply flawed.

Both sides, however, will have good and bad parts. The Marxist degrowth crowd will favor human workers and real food but oppose capitalism and fossil fuels. The growth-centric fascist crowd will promote capitalism and oil drilling but also Big Ag and Big Pharma, unrestricted artificial intelligence, and other dystopian technologies, like digital IDs.

But where does that leave someone like Glenn, who’s pro-human workers, ethical AI, oil drilling, real food, and capitalism but anti-climate change, Marxism, and globalist initiatives, like digital IDs, 15-minute cities, and central bank digital currencies?

He warns we’re headed into a time where we’re going to be asked to choose between these two options.

“This is the split that is coming, and I believe the Marxist global warming side is going to be extraordinarily appealing to a lot of people,” says Glenn, warning that it’s “a utopia that can never survive.”

The other camp, however, is equally flawed. So what do we do?

We choose the “third way,” says Glenn.

“It's the U.S. Constitution.”

To hear more of Glenn’s analysis, watch the clip above.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Liberals, heavy porn users more open to having an AI friend, new study shows



A small but significant percentage of Americans say they are open to having a friendship with artificial intelligence, while some are even open to romance with AI.

The figures come from a new study by the Institute for Family Studies and YouGov, which surveyed American adults under 40. Their data revealed that while very few young Americans are already friends with some sort of AI, about 10 times that number are open to it.


Just 1% of Americans under 40 who were surveyed said they were already friends with an AI. However, a staggering 10% said they are open to the idea. With 2,000 participants surveyed, that's 200 people who said they might be friends with a computer program.

Liberals said they were more open to the idea of befriending AI (or are already in such a friendship) than conservatives were, to the tune of 14% of liberals vs. 9% of conservatives.

The idea of being in a "romantic" relationship with AI, not just a friendship, again produced some troubling — or scientifically relevant — responses.

RELATED: US Army says it is not replacing 'human decision-making' with AI after general admits to using chatbot


When it comes to young adults who are not married or "cohabitating," 7% said they are open to the idea of being in a romantic partnership with AI.

At the same time, a larger percentage of young adults think that AI has the potential to replace real-life romantic relationships; that number sits at a whopping 25%, or 500 respondents.
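The head counts cited alongside these percentages are simple arithmetic on the sample; as a quick back-of-the-envelope check (a sketch only, assuming the survey's stated sample of 2,000 respondents):

```python
# Back-of-the-envelope check of the survey counts reported above.
# Assumes the stated sample of 2,000 American adults under 40.
sample_size = 2000

open_to_ai_friendship = round(sample_size * 10 / 100)     # 10% open to an AI friendship
ai_could_replace_romance = round(sample_size * 25 / 100)  # 25% think AI could replace real relationships

print(open_to_ai_friendship)     # 200 people
print(ai_could_replace_romance)  # 500 people
```

These match the 200 and 500 figures cited in the article.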

There is a notable overlap with frequent pornography users: the more often respondents say they consume online porn, the more likely they are to be open to having an AI romantic partner or to already be in such a relationship.

Only 5% of those who said they never consume porn, or do so "a few times a year," said they were open to an AI romantic partner.

That number rises to 9% for those who watch porn anywhere from once or twice a month to several times per week. For those who watch online porn daily, the number was 11%.

Overall, young adults who are heavy porn users were the group most open to having an AI girlfriend or boyfriend, in addition to being the most open to an AI friendship.

RELATED: The laws freaked-out AI founders want won't save us from tech slavery if we reject Christ's message

Graphic courtesy Institute for Family Studies

"Roughly one in 10 young Americans say they’re open to an AI friendship — but that should concern us," Dr. Wendy Wang of the Institute for Family Studies told Blaze News.

"It signals how loneliness and weakened human connection are driving some young adults to seek emotional comfort from machines rather than people," she added.

Another interesting statistic to take home from the survey: young women were more likely than men to perceive AI as a threat in general, with 28% of women agreeing vs. 23% of men. Women are also less excited about AI's effect on society; just 11% of women were excited vs. 20% of men.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

A new study hints what happens when superintelligence gets brain rot — just like us



AI and LLMs appear to be in a bit of a slump. The latest revelation comes from a major study showing that large language models — the closest we’ve come yet to so-called artificial general intelligence — are degraded in their capacities when subjected to lo-fi, low-quality, “junk” content.

The study, from a triad of university computer science departments including the University of Texas, set out to determine the relationship between data quality and performance in LLMs. The scientists trained their LLMs on viral X.com/Twitter data, emphasizing high-engagement posts, and observed a more than 20% reduction in reasoning capacity, a 30% falloff in contextual memory tasks, and — perhaps most ominously, since the study tested for measurable personality traits like agreeableness and extraversion — a leap in output that can technically be characterized as narcissistic and psychopathic.

Sound familiar?

The paper analogizes the function of the LLM performance with human cognitive performance and refers to this degradation in both humans and LLMs as “brain rot,” a “shorthand for how endless, low-effort, engagement-bait content can dull human cognition — eroding focus, memory discipline, and social judgment through compulsive online consumption.”


There is no great or agreed-upon utility in cognition-driven analogies made between human and computer performance. The temptation persists for computer scientists and builders to read in too much, making categorical errors with respect to cognitive capacities, definitions of intelligence, and so forth. The temptation is to imagine that our creative capacities ‘out there’ are somehow reliable mirrors of the totality of our beings ‘in here,’ within our experience as humans.

We’ve seen something similar this year with the prevalence of so-called LLM psychosis, which — in yet another example of confused terminology applied to already confused problems — describes neither psychosis embedded in LLMs nor psychosis measured in their “behavior,” but rather the severe mental illness reported by many people after investing themselves, their attention, and their belief in computer-contained AI “personages” such as Claude or Grok. Why do they need names anyway? LLM 12-V1, for example, would be fine ...

The “brain rot” study rather proves, if anything, that the project of creating AI is getting a little discombobulated within the metaphysical hall of mirrors its creators, backers, and believers have so far barged their way into, heedless of old-school measures like maps, armor, transport, a genuine plan. The whole project reeks of hubris, reeks of avarice and power. Yet the integration of AI into society — into the project of terraforming the living earth — is treated as inevitable while no politically, or even financially, authoritative and responsible body is really approaching it: one that might perform the machine-yoking, human-compassion measures required if we’re to imagine ourselves marching together into and through that hall of mirrors to a hyper-advanced, technologically stable, and human-populated civilization.

RELATED: Intelligence agency funding research to merge AI with human brain cells

Photo by VCG / Contributor via Getty Images

So, when it’s observed here that AI seems to be in a bit of a slump — perhaps even a feedback loop of idiocy, greed, and uncertainty coupled, literally wired-in now, with the immediate survival demands of the human species — it’s not a thing we just ignore. A signal suggesting as much erupted last week from a broad coalition of high-profile media, business, faith, and arts voices brought under the aegis of the Statement on Superintelligence, which called for “a prohibition on the development of superintelligence, not lifted before there is 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in.”

There’s a balance, there are competing interests, and we’re all still living under a veil of commercial and mediated fifth-generation warfare. There’s a sort of adults-in-the-room quality we are desperately lacking at the moment. And the way the generational influences lie on the timeline isn’t helping: with boomers largely tech-illiterate but still hanging on, Xers tech-literate but stuck in the middle (as ever), and huge populations of highly tech-saturated Millennials, Zoomers, and so-called Generation Alpha waiting for their promised piece of the social contract, the friction heat is gathering. We would do well to recognize the stakes and thus honor the input of those future humans who shouldn’t have to be born into or navigate a hall of mirrors their predecessors failed to escape.

MIT professor’s 4 critical steps to stop AI from hijacking humanity



Artificial superintelligence is still a hypothetical, but we’re inching closer every day. What happens when we finally create a digital beast that vastly surpasses human intellect in all domains?

MIT physics professor Max Tegmark warns that if that day comes, we’ll be in deeper trouble than we can imagine.

Despite the evident dangers and widespread hesitation, people like OpenAI CEO Sam Altman, a leading figure in the AI boom, are determined to see it happen at any cost.

“Sam Altman believes he’s creating God. ... There’s a lot of people in Silicon Valley that want to meet God of their creation,” says Glenn Beck, who’s been warning for years about the dangers of an artificial intelligence takeover.

Tegmark is equally disturbed by Altman’s dystopian tech dreams, which go even beyond creating artificial superintelligence. In his 2017 essay "The Merge," Altman describes the fusing of man and machine as a necessary step to keep up with superhuman AI. He even suggests that we will be able to “design our own descendants.”

Most people want nothing to do with this transhumanist, cyborg future, but it’s looking like Altman and other tech billionaires are set on pushing humanity in that direction anyway.

“So how do you stop it?” Glenn asks.

On this episode of “The Glenn Beck Program,” Tegmark outlined four ways we can push back against the AI revolution.

1. Reject the ‘inevitable’ AI myth

“Lobbyists from these companies keep trying to convince us that it's unstoppable,” Tegmark says. “That's the number one psy-op trick in the book.”

Just because a technological advancement is possible doesn’t mean it will come to fruition, he explains. He gives the example of human cloning, which is technically feasible today but not practiced due to ethical, legal, and practical obstacles.

“The consensus around the world was we could lose control over our species if we start messing with ourselves in that way, and it became so stigmatized it just didn’t happen,” he says. There’s a chance ASI and cyborgs will be viewed similarly — technically possible but too risky to try, especially if people at large start rejecting the notion that these advancements are inevitable.

2. Control > chaos

Some will argue that the United States has to trudge forward in the AI race because we’re competing against China, but Tegmark reminds us that ASI is a “suicide race”: once we reach superintelligence, humans will become slaves to a digital master.

But China values only one thing more than technological dominance: control.

The United States, finally back on top as a global superpower thanks to President Trump, isn’t interested in losing control either. “The way the U.S. or China will compete for dominance is not by doing something that’s going to take away the power from both countries,” Tegmark says.

3. Call for government regulations

Glenn is still concerned that people like Sam Altman, who have unlimited money and resources, will continue to push AI to new heights, but Tegmark says their days as unrestricted tech pioneers are numbered.

“Once upon a time, there were no regulations on biotech. They could sell any medicine they wanted in the supermarket, and sometimes this caused tragedies,” Tegmark says.

He points to the 1950s and ’60s sedative thalidomide, which was prescribed to pregnant women to treat morning sickness. The medication proved so harmful — causing an estimated 10,000 severe birth defects — that the drug was not only banned, but the government began regulating the biotech industry as a whole to prevent future devastations.

“We’ve done the same thing with every other industry,” Tegmark says.

“So saying that AI companies should be the only companies in America that don’t have to meet any safety standards is really just asking for corporate welfare for AI companies,” he adds.

4. Amplify the public voice

Many people don’t voice their opposition to the AI race because they think either they’re powerless to stop it or that they’ll be condemned as Luddites. But Tegmark says neither is true.

“Less than 5% of Americans actually want a race to superintelligence,” he says.

And now our voices can be heard. Through his Future of Life Institute, Tegmark has created a petition aimed at holding AI developers accountable for the risks of advanced AI. Many high-profile people from both sides of the political spectrum have already signed it, including Glenn.

“I urge you to sign this,” Glenn says.

“This is the end of humanity if we lose control of our technology,” he adds.


Gen Z just outsmarted car dealers — using AI



There’s a new dynamic shaking up the auto industry, and car dealerships aren’t thrilled about it. Members of Generation Z — the most digital generation yet — aren't walking into showrooms unprepared. Instead, they’re bringing a secret weapon: artificial intelligence.

"Yesterday, ChatGPT helped my daughter save over $3,000 on a car purchase," trumpets one post on Reddit, going on to lay out the exact prompt she used to secure her deal.


This is far from an isolated anecdote — it’s the latest real-world shift in how people buy cars. Similar stories abound, including videos showing buyers walking sales reps through their own contracts.

New leverage

It's hard to blame the latest generation of first-time car buyers for using whatever leverage they can. Zoomers grew up during the 2008 financial crash, the pandemic, and the explosion of online scams. They’ve watched the economy fluctuate wildly, and they’ve seen how easily a “great deal” can turn into a financial trap. They’re cautious, analytical, and skeptical of traditional sales tactics — especially those that rely on confusion or pressure.

And let’s be honest — dealership contracts are notoriously dense. Between add-ons like extended warranties, gap insurance, and inflated “doc fees,” the cost of a new car can quietly balloon by thousands of dollars. But with a few taps, AI can highlight suspicious charges, flag high interest rates, and summarize legal terms that would take an average buyer hours to decipher.

Dealers can benefit too

It’s no wonder some salespeople are frustrated. They’re used to being the authority. But now, the balance of power is shifting toward the customer — especially younger ones who can instantly fact-check every claim.

Dealerships, however, are fighting back — adopting AI themselves to streamline inventory, analyze market data, and create transparent pricing that appeals to Gen Z’s preference for honesty and speed.

Those who resist change risk being left behind. If your customer knows more than your finance manager because the customer ran the numbers through an AI, that’s a wake-up call.

The smartest dealerships are adapting by embracing technology instead of fearing it. They’re using AI to enhance transparency — automating disclosures, simplifying pricing structures, and ensuring that every deal can stand up to digital scrutiny.

Trust in large institutions — from media to government to corporations — has eroded for years. The car-buying process, long viewed as opaque and stressful, is no exception. Gen Z’s approach reflects a cultural shift: Don’t rely on authority; verify with data.

AI provides a kind of digital ally — a second opinion that feels objective. It doesn’t care about commissions or quotas. It simply reads the fine print and reports back.

RELATED: Why we still need car dealerships

George Rose/Getty Images

More transparency?

Critics argue that depending on AI for financial decisions is risky. And they’re not wrong — AI isn’t infallible. It can misinterpret terms or overlook context. But for many buyers, even an imperfect tool feels safer than blind trust in a salesperson’s word.

This trend extends beyond cars. Gen Z uses AI for everything — evaluating rental agreements, comparing college loans, even cross-checking health care costs. To Zoomers, it’s not “cheating.” It’s being informed.

And while some mock the trend as overly cautious or robotic, it’s hard to argue with the results. When young buyers save thousands simply by questioning what’s in front of them, the lesson is clear: Transparency wins.

As AI continues to evolve, its role in consumer decision-making will only grow. Future dealership interactions may feature built-in AI advisers on both sides — buyers and sellers each leveraging data to find common ground faster.

It’s not far-fetched to imagine an industry where paperwork is pre-analyzed, financing terms are AI-generated, and negotiation becomes a transparent dialogue rather than a psychological battle.

For decades, dealerships relied on information asymmetry — the idea that they knew more than the buyer. That era is ending. The smartphone and now AI have leveled the playing field.

US Army says it is not replacing 'human decision-making' with AI after general admits to using chatbot



Certain decisions are best not left to machines, the Army has revealed.

A United States Army general made headlines last week when he told reporters at a media roundtable he had been using an AI chatbot to "build models to help all of us."


Major General William "Hank" Taylor told media at the annual Association of the United States Army conference that "Chat and I" have become "really close lately," prompting more questions than answers about the Army's use of AI.

Taylor is the top United States Army commander in South Korea and makes decisions for thousands of troops. He explained to reporters that he is indeed using the technology to make decisions that affect those under his command, though to what end was unclear.

Now, the Eighth Army office has revealed to Return what exactly the high-ranking officer meant. The office said that Taylor's remarks were actually regarding the Army's "ongoing modernization efforts," which specifically relate to how technology can assist leaders in making timely and informed decisions.

At the same time, the spokesperson said that the Army does not plan on replacing human decision-makers, especially in key areas.

RELATED: From West Point to Woke Point: The long march through the ranks

Photo by KIM Jae-Hwan/SOPA Images/LightRocket via Getty Images

"All operational and personnel decisions remain the sole responsibility of commanders and their staff, guided by Army policy, regulation, and professional judgment," media relations chief Jungwon Choi told Return.

He added that while Eighth Army recognizes the opportunities and risks associated with AI, it is only looking at how to integrate "trusted, secure, and compliant systems that enhance — not replace — human decision-making."

The Army reiterated that point, stating that Taylor does not use any AI-assisted tools to make personnel, operational, or command decisions, and his remarks were only referring to using "AI-assisted tools in a learning and exploratory capacity."

The Army is not looking at "delegating command authority to an algorithm or chatbot," either, Choi reinforced.

The Department of War is tinkering with AI chatbots for its forces on the ground, however. As Return previously reported, training scenarios have already included experimentation with an offline battle-ready chatbot.

The technology, called EdgeRunner AI, allows soldiers to get instant information about mission objectives, coordinates, and other details in an offline environment.

EdgeRunner recently wrapped up military exercises at Fort Carson, Colorado, and Fort Riley, Kansas.

RELATED: Democrats once undermined the Army. Now they undermine the nation.

Photo by JUNG YEON-JE/AFP via Getty Images

At the same time, Choi said that like many leaders, Major General Taylor has "experimented with publicly available AI-assisted tools to understand how generative AI functions, its potential uses, and the safeguards required for responsible employment."

Taylor has also explored HQDA-approved large language models to "assess how secure, compliant AI systems" can support leadership development or improve operational efficiency, for example.

The spokesman said Taylor does not endorse any specific commercial platform, and the Army did not say whether he was referring to ChatGPT when speaking to reporters, as tech outlet Futurism claimed last week.

"MG Taylor's engagement with HQDA-approved AI platforms reflects a forward-thinking approach to leadership and modernization," the army representative concluded. "By responsibly experimenting with these emerging tools, he is helping the Army explore how artificial intelligence can strengthen decision-making, improve efficiency, and prepare leaders for the evolving demands of the modern battlefield."


I'm with stupid: 'Dumb and Dumber' star plays pea-brained protest song



The star of “Dumb and Dumber” got ... even dumber?

Veteran actor Jeff Daniels has a regular side hustle as a cringeworthy MSNBC guest. He played a newsman on TV once, and now Daniels fancies himself a political wonk. Yeah, he’s the same guy who starred as James Comey in “The Comey Rule,” one of the most fact-free Hollywood productions ever.


And that’s saying plenty.

This week, Daniels broke out his guitar on MSNBC to serenade the channel’s dwindling audience. The song in question? A ditty that helps him cope with President Donald Trump’s second term.

“Crazy World” features lyrics like this: “It’s nice to know in a world full of hate, there’s someone out there still making love.”

Groovy, man!

Everybody was kung fu fighting

“Sweep the leg! Sweep the leeeeeeeeg!”

Everything old is newish again, which means a “Karate Kid” musical is on the way. The production is getting its feet wet overseas with a spring 2026 tour in the U.K. before later arriving on Broadway.

Robert Mark Kamen, who wrote the original “Karate Kid” all the way back in 1984, also penned the musical update.

The four-film saga remained dormant for years before getting a new lease on life from both the 2010 remake featuring Jackie Chan and the celebrated Netflix series “Cobra Kai.” That second wind couldn't keep this summer's “Karate Kid: Legends” from conking out in theaters. Guess fans weren’t interested in uniting Chan with original franchise star Ralph Macchio.

Somewhere, Sensei Kreese is smiling ...

'Witch' way, modern star?

"The Scarlet Witch" is casting a hex on streamers.

Elizabeth Olsen, who brought that MCU character to life in multiple films as well as Disney+’s “WandaVision,” is taking a stand for the theatrical experience. Olsen says she refuses to appear in any studio films bound for streaming-only venues.

“If a movie is made independently and only sells to a streamer, then fine. But I don’t want to make something where [streaming is] the end-all. ... I think it’s important for people to gather as a community, to see other humans, be together in a space.”

That’s noble, but she may be fighting a losing battle. We’ve recently seen a flood of studio films flop in theaters, including “Roofman,” “Good Fortune,” and “Tron: Ares.” The theatrical model is still struggling post-pandemic, and the allure of “Netflix and chill” can be irresistible.

Plus, major stars like Robert De Niro, Dwayne Johnson, and Gal Gadot routinely appear in major streaming films without a second thought. If Daniel Day-Lewis can memory-hole his retirement plans, here’s betting Olsen may have a backpedal of her own coming soon ...

'Battle' babble

Say what you want about Leonardo DiCaprio’s “One Battle After Another,” a film glorifying radical violence against a corrupt U.S. government. It’s a perfect fit for that cousin who spends days getting his No Kings poster art just right.

The film follows a group of pro-immigration activists who use any means necessary to free “undocumented immigrants.” Viva the revolution!

Just don’t call “OBAA” a “left-wing” film, argues Variety’s Owen Gleiberman:

"The real revolution going on in this country now is the Christian nationalist revolution — an attempt to upend the American dream and replace it with a theocracy."

Yeah, that’s the tone of this fever-dream screed, so you can imagine the rest. Once the scribe takes a long, hot bath, he’s going to get to work on his next think piece: how Antifa is just an “anti-fascist” MeetUp group.

RELATED: Hollywood’s newest star isn’t human — and why that’s ‘disturbing’

Blaze Media

Norwood scale

Kevin O’Leary is saying the quiet part out loud.

The “Shark Tank” honcho makes an appearance in “Marty Supreme,” an Oscar-bait movie coming this Christmas. Timothee Chalamet stars as a ping-pong prodigy trying to win the sport’s biggest prize. O’Leary, who knows the value of a dollar, said the project could have saved “millions” had it fallen back on AI extras instead of using actual people:

Almost every scene had as many as 150 extras. Now, those people have to stay awake for 18 hours, be completely dressed in the background. [They’re] not necessarily in the movie, but they’re necessary to be there moving around. And yet, it costs millions of dollars to do that. Why couldn’t you simply put AI agents in their place?

It's sacrilege in Hollywood circles to say that, but he’s probably not wrong. Hollywood is wrestling with the looming AI threat, including attacks on AI “actress” Tilly Norwood.

Let’s hope AI can’t train Tilly to scream, “Free Palestine!” at award shows. Then we’ll know Hollywood stars are really on the endangered species list.

Investigative journalist warns: We are being ‘harvested’ for a posthuman future



Tech developers have sold us artificial intelligence as the ultimate tool for human progress and convenience. But people would be wise to ask, “What’s the catch?”

In a recent interview with Glenn Beck, investigative journalist Whitney Webb answered that question. What she reveals is bone-chilling.

“They want to harvest us for data. … They want to use us as bootloaders for their digital intelligence. They can't continue to improve and feed the AI without us doing it for them,” she says.

In other words, the future of AI depends on human experimentation.

AI users have been shackled by comfort and convenience. Without even realizing it, they’ve agreed to be put in a “digital prison without walls,” says Webb.

She advises those who care about their freedom to “actively build alternatives,” like “local resilient networks that don't depend on [AI] infrastructure,” and to seek “open-source alternatives to a lot of the Big Tech platforms out there.”

If we don’t start pushing back (and soon), we will be launched into a “posthuman future,” she warns.

This elitist initiative to eradicate our humanity is evident in the fact that much of AI development targets art, music, and writing — the very things that make us human.

“These are the things that we're being told to outsource to artificial intelligence,” says Webb.

“So what's going to be left for us when we outsource this all to AI? Will we allow ourselves to be cognitively diminished to the point that we can't even create any more? What kind of humans are we at that point?” she asks.

Another act of rebellion we can all commit is refusing to relinquish creative work to AI and raising children who are “anchored in the real world,” meaning they can paint and draw better than they can navigate a tablet.

Webb warns that parents must be intentional if they want to guard their families against the encroachment of the digital age, because techno-dependency, especially when it comes to children, is a pillar in elites’ sinister plan to push us into posthumanism.

“There's these efforts to have domestic robots in the house. A lot of the ads show young children developing emotional relationships with these robots, saying, ‘I love you.’ … That is not good,” says Webb.

If you need even more evidence that the Big Tech world is against your children, Webb reveals that many of the top figures in the tech industry were friends with Jeffrey Epstein, a convicted pedophile.

“Do you want to trust those people to program stuff that's around your kids?” she asks.

She acknowledges that in the modern era, it’s exceedingly difficult to raise children without the help of technology — or to set limits for ourselves. That’s why so many people don’t bother trying. But they’ve fallen prey to the nefarious plot that undergirds the entire posthumanist movement: Create a society that worships convenience and comfort.

“The pull of AI is for us to be passive and do nothing and just let it wash over us,” says Webb.

“If we're not focused on the things that we like to create and that we like to do … we will recede, and that is how the posthuman future will happen.”

To hear more, watch the full interview above.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

US Army general reveals he's been using an AI chatbot to make military decisions



Even United States military brass is looking to AI for answers these days.

The top United States Army commander in South Korea revealed to reporters this week that he has been using a chatbot to help with decisions that affect thousands of U.S. soldiers.

'As a commander, I want to make better decisions.'

On Monday, Major General William "Hank" Taylor told the media in Washington, D.C., that he is using AI to sharpen decision-making, but not on the battlefield. The major general — the fourth-highest officer rank in the U.S. Army — is using the chatbot to assist him in daily work and command of soldiers.

Speaking to reporters at a media roundtable at the annual Association of the United States Army conference, Taylor reportedly said "Chat and I" have become "really close lately."

According to Business Insider, the officer added, "I'm asking to build, trying to build models to help all of us."

Taylor also confirmed that he is indeed using the technology to make decisions affecting the thousands of soldiers under his command, and he offered a blunt reason for doing so.

RELATED: The government's anti-drone energy weapons you didn't know existed


"As a commander, I want to make better decisions," the general explained. "I want to make sure that I make decisions at the right time to give me the advantage."

In a seemingly huge admission for an Army officer, Taylor also acknowledged that it has been a challenge to keep up with the rapidly developing technology.

At the same time, tech outlet Futurism claimed that the general is in fact using ChatGPT, warning that the AI has been found to generate false information regarding basic facts "over half the time."

ChatGPT is not mentioned in Business Insider's report.

Return reached out to Army officials to ask if the quotes attributed to Taylor were accurate, if he is actually using ChatGPT, and if they believe there to be inherent risks in doing so. An official Pentagon account acknowledged the request, but did not respond to the questions. This article will be updated with any applicable responses.

Return recently reported that the military is already tinkering with a chatbot of its own.

RELATED: Zuckerberg's vision: US military AI and tech around the world


Recent military exercises at Fort Carson, Colorado, and Fort Riley, Kansas, made use of an offline chatbot called EdgeRunner AI.

EdgeRunner CEO Tyler Saltsman told Return that his company is currently testing the chatbot with the Department of War to deliver real-time data and mission strategy to soldiers on the ground. The chatbot can be installed on a wide variety of devices and used without an internet connection, to avoid interception by the enemy.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!