‘Get behind me, Satan!’ — Glenn Beck reacts to Zuckerberg’s new ‘AI friends’



Depression, anxiety, substance abuse, and suicide are rampant these days, especially among youth. Studies have shown time and time again that these issues are largely due to social isolation, thanks to society’s addiction to social media that keeps us glued to our screens instead of engaging with others in person.

Thank goodness Meta CEO and Facebook founder Mark Zuckerberg has a solution to the “loneliness epidemic” he’s played a key role in creating: AI friends!

What could go wrong?

In a recent podcast interview with Dwarkesh Patel, Zuckerberg said, “The average American has, I think, it's fewer than three friends ... and the average person has demand for meaningfully more. ... There are all these things that are better about kind of physical connections when you can have them, but the reality is that people just don't have the connection, and they feel more alone.”

AI friends, he argued, could fill that gap.

Glenn Beck’s response? “Get behind me, Satan!”

“From the people who brought you 'Kill yourself because you've been on Facebook too much' brings you new AI friends,” he mocks.

Co-host Stu Burguiere cites a study that assessed how much social interaction dropped between 2003 and 2023 in each age group. “Every single group has massive drops. ... Ages 15-to-24-year-olds, 35% down,” he says.

Zuckerberg’s suggestion to cure this epidemic, however, is to essentially imbibe more of the same poison that landed us in this predicament in the first place.

Is he really that stupid?

Glenn says no. He’s not stupid; he’s disguising his ill intentions by wrapping them in a “beautiful, shiny package” of false concern for others’ loneliness.

“Let me be crystal clear — AI cannot, must not, and will never be your friend,” he warns, “and if you buy into that fantasy, you're opening a door to a world of manipulation, isolation, and control that make some of the darkest days of history look pretty tame.”

To hear Glenn’s predictions about the damage AI friends will do to the human psyche and spirit, watch the clip above.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Dead shooting victim makes AI-generated court appearance, giving some the willies



An Arizona man sort of came back from the dead to address his killer in court, thanks to artificial intelligence, leaving his family with a sense of peace but others with a sense of horror.

On Monday, the friends and family of the late Christopher Pelkey gathered in the courtroom of Maricopa County Superior Court Judge Todd Lang for the sentencing of Gabriel Horcasitas, who had been convicted of manslaughter in Pelkey's death.

In November 2021, the two men got into a road-rage battle that escalated quickly. At a red light, Pelkey, 37, exited his vehicle and confronted Horcasitas, 50, outside of his car, at which point Horcasitas drew a weapon and fired several rounds at Pelkey, killing him, KNXV-TV reported at the time.

Now, three and a half years later, Pelkey made a return of sorts after his sister, Stacey Wales; her husband, Tim; and Tim's friend Scott Yentzer created a video that included a simulacrum of Pelkey giving a statement written by Wales.

"I'm a version of Chris Pelkey recreated through AI that uses my picture and my voice profile," the simulacrum said on a clip that can be viewed here.

The simulated Pelkey then addressed Horcasitas, offering forgiveness and a message of hope and redemption.

"To Gabriel Horcasitas, the man who shot me: It is a shame we encountered each other that day in those circumstances," Pelkey's image said. "In another life, we probably could have been friends."

"I believe in forgiveness and God who forgives. I always have, and still do," the image continued.

The simulated Pelkey even drew attention to his digitally aged appearance, encouraging listeners not to lament the natural aging process. "This is the best I can ever give you of what I would have looked like if I got the chance to grow old," AI Pelkey said. "Remember, getting old is a gift that not everybody has, so embrace it and stop worrying about those wrinkles."


Even though Pelkey's loved ones submitted nearly 50 victim impact statements, "there was one missing piece," Wales said. "There was one voice that was not in those letters."

So Wales' husband, Tim, and his friend Yentzer, who have worked with artificial intelligence technology for years, put their heads together to find a way to give Chris a voice. Wales said the two men created "a Frankenstein of love" using several different tools mashed together. The result is believed to be the first AI-generated victim impact statement used in an American court.

The video of Pelkey's image had a profound emotional impact on many in the courtroom, including Pelkey's brother John. "To see his face and to hear his voice [talk about forgiveness], just waves of healing washed over my soul. Because that was the man that I knew," John said.

Judge Lang was likewise moved. While prosecutors had recommended a nine-and-a-half-year sentence, Lang sentenced Horcasitas to 10 and a half years and professed admiration for the AI-generated victim statement.

"I love that AI," the judge said at the hearing. "Thank you for that. I felt like that was genuine, that his obvious forgiveness of Mr. Horcasitas reflects the character I heard about today."

Not everyone was so enthusiastic about the video, however. The New York Post described it as "eerie," and Arizona Chief Justice Ann Timmer noted that while AI may have an important role to play in American courtrooms going forward, it should be monitored carefully.

"AI can ... hinder or even upend justice if inappropriately used," Timmer explained in a statement. "A measured approach is best. Along those lines, the court has formed an AI committee to examine AI use and make recommendations for how best to use it. At bottom, those who use AI — including courts — are responsible for its accuracy."

Others online were similarly wary:

  • "This is highly inappropriate and I cannot believe it was allowed to happen in a court of law," quipped Michigan journalist James David Dickson.
  • "This doesn't seem ... healthy for anyone or anything involved," said the Scottish Law Librarians Group.
  • "The most distressing technological development of the year," added another user.
  • "Not sure what's worse: the sister presenting the AI of her dead brother in court [or] the judge's reaction to it," said yet another user.

One X user warned loved ones against even thinking about pulling such a stunt. "If I die and you do this s*** to me I will haunt your a** like you would not believe," the user promised. "I'm talkin VERY spooky antics."

Peter Gietl, managing editor of Return at Blaze Media, is likewise disgusted that an "AI ghost" was admitted into an American court of law.

"I can’t believe a judge would allow a kangaroo court to occur in a court of law. The defendant should try to get a mistrial declared," he told Blaze News in a statement. "Unfortunately, these AI ghosts are going to become more common. However, they aren’t the real person any more than a cartoon is."

"An AI creation of a deceased family member is not your family member, and disturbing at a core level of our humanity."

Still, Wales seems at peace with the video and statement she helped create on her late brother's behalf. "I want the world to know Chris existed," she said. "If one person hears his name or sees this footage and goes to his Facebook page or looks him up on YouTube, they will hear Chris’ love."


AI Child Sexual Abuse Images Would Normalize Pedophilia, Not Cure It

In a world where AI child porn is normalized as a therapeutic tool, we'll have pro-pedophile activists normalizing pedophilia itself.

Forget service with a smile — these days I'd settle for service from a human



After a week of dealing with service calls to my internet company and having to go to many more stores than usual, I suspect there’s a coordinated campaign to prevent humans from talking to each other.

I’m not entirely kidding. Have you noticed, especially since the “pandemic,” that it’s becoming the new normal to be stopped from speaking to other people? We’re now directed to “interface” with machines. It happens on the phone, at gas stations, at grocery stores, at restaurants.


Have you been handed a piece of paper with a QR code on it when you’re seated at a restaurant and told to “scan this for the menu”? Have you been told (not “asked”) to scan your own groceries, bag them, and punch your payment into the register?

How about the robotic phone tree lady that prevents you from speaking to a person at the gas company, the bank, or any other business you call?

Phoning it in

People have been complaining about the decline in customer service since at least the 1980s. The worst offender was the then-newly invented phone tree.

Phone trees have always been irritating, but they’re out of control now: There is no human-staffed department to which you can be directed. Worse, companies deliberately restrict the subjects you can “ask” about by leaving them off the menu options, and the systems hang up on you if you try to get a human agent.

It’s getting infinitely worse with the overnight adoption of shiny, glittery-new AI technology. In the past month, I finally stopped doing business with my old internet company — a huge multinational company that you have heard of, and not in fond terms — because it has programmed its AI “customer service rep” to blatantly refuse to connect customers with a human.

Call waiting (and waiting)

Here’s how these online chats go:

AI agent: Please choose from billing, technical support, or new sales.

Me: Need more options. Need agent.

AI: Please choose from billing, technical support, or new sales.

Me: Agent.

AI: I’m sorry, please choose from ...

Me: Agent! I need an agent! My question is not listed!

AI: I’m sorry, but I cannot connect you to an agent until you follow the suggested steps above. Goodbye.

And then the chat window closes, or the call disconnects.

Yes, I’m serious. The robots now brazenly hang up on you if you don’t obey their commands. How did customers suddenly end up having to take orders from company devices instead of the other way around?

Inconvenience store

It’s no better in person, and I’m sorry to say that human behavior is just as bad as robotic misconduct. This week, I needed a five-gallon jug of kerosene. I heat and light my home in cold weather with restored antique kerosene lamps. These aren’t the small "Little House on the Prairie" oil lamps you’re thinking of; they’re big thirsty bad boys that put out major light and heat.

So I go to the farm store, where they sell kerosene in large jugs at 40% less than other stores. When I walk over to the shelf, there’s nothing there. Damn. Now I have to weigh whether to talk to a staff member.

Fifteen years ago, this wasn’t a hard decision — in fact, it wasn’t a decision at all. But today? The most common response I get from store staff when asking for help is a facial expression that communicates irritation and an attitude meant to express, “You, customer, are inconveniencing me.”

It’s most pronounced in anyone under 40, as Millennials and Gen Zers were not taught things like “doing your job” or “not being awful to the customers who pay your wages.”

I chance it and ask the frazzled 22-year-old at the register. He won’t make eye contact with me, of course. “Hi there. I see that the kerosene isn’t in its usual spot. Could you please tell me if you have it in stock, or when you will have it in stock again?”

Without looking at me, he replies, “I don’t know.” What am I supposed to say to this? Wouldn’t you take that as another way of saying, “I’m not going to answer your question, and I want you to go away?”

So I say, “Right. Could you please tell me who might know or how I will be able to find out whether I will be able to buy kerosene here and when that might be?”

Annoyed, the cashier makes an exasperated noise and says, “They don’t tell us what’s coming on the truck. All I know is that it comes on Tuesdays and Thursdays — check back then.”

When I worked retail, had my boss observed me speak to a patron like this, I would have been fired on the spot.

No talking

My last stop on this outing is to grab some lunch. There’s a brand-new gas station/convenience store/truck stop that just opened two miles up the road from where I live in Vermont. It’s sort of like a northern version of the famous Buc-ee’s truck stop “malls” you see in the South. You can get hot and cold food, soft drinks, beer, liquor, small electronics accessories, motor oil, and toys to keep the kids quiet.

Sadly, “make the customer do the store’s job” has metastasized to the corner store, too.

This place is all self-checkout. There’s something so off about walking up to the register, while one lone employee stands in front of the cigarette case and monitors you while you do his job. There’s no etiquette for it. The employees don’t greet you, leaving you wondering if they’re afraid you’ll ask them to do something if they signal that they’re aware of your presence.

I am prepared for that. I am not prepared for having to do the same thing for a sandwich.

I stand at the deli counter for about two minutes, while two employees stand behind the counter 20 feet away chatting with each other as if I were not there. Then, it dawns on me. There is that bank of iPads blazing out saturated color. I, the customer, am forced to punch a touchscreen on the machine to put in my order. There is to be no talking to other humans.

The device has every annoyance, starting with the fact that the customer is forced to learn a new, company-bespoke set of “buttons” and software, adding frustration and time to what ought to be a simple request.

Employees won’t talk to you, of course, even when they know you’re having trouble. After I finally (I think) place my order, dramatic pipe organ music starts blaring from a hidden speaker. It’s playing a plagal cadence, the part at the end of a church hymn that goes “aaaaa-men.” Apparently, this signals that one’s order has been sent to St. Peter and will be delivered shortly.

The younger of the two counter staffers looks at me briefly while the fanfare echoes against the tile walls. I say, “Am I allowed to talk to you?”

She just stares at me.

And My App! Palantir’s Quest To Give AI a Moral Purpose

In a hole in the ground there lived some hobbits. Not a nasty, dirty, wet hole. Silicon Valley isn’t known for those. But a respectable place in a respectable town—and yet, somehow, these hobbits ended up going out on a great adventure. They may have lost the neighbors’ respect, but they gained … well, you will see whether they gained anything in the end.


Singularity: The elites' dystopian view of human beings



The singularity has been on the tips of many tech-savvy and global-elitist tongues of late — and its implications are more than a little frightening.

According to Justin Haskins, president of Our Republic and senior fellow at the Heartland Institute, the singularity is a "hypothetical moment off into the future when technology advances to a point where it just is completely transformative for humanity.”

“Typically, the way it's talked about is artificial intelligence — or just machines in general — become more intelligent than human beings,” Haskins tells Allie Beth Stuckey of “Relatable.” He goes on to say that some people describe the singularity as the time when AI "has the ability to sort of continue to redesign itself."


While Haskins notes that some of the consequences of the singularity are positive — like the potential to cure cancer — it also creates all kinds of ethical problems.

“What happens when a lot of employees are no longer needed because HR and loan officers and all these other big gigantic parts of businesses can just be outsourced to an artificial intelligence system?” he asks.

Haskins answers his own question: “There’ll be massive disruptions in the job market.”

Stuckey herself is wary of the small issues we have now that might grow into bigger problems.

“People have posted their interactions with different kinds of AI, whether it's ChatGPT or Grok,” she explains.

She continues, “I've seen people post their conversations of saying like, ‘Would you rather’ — asking the AI bot — ‘Would you rather misgender someone, like misgender Bruce Jenner, or kill a thousand people,’ and it will literally try to give some nuanced take about how misgendering is never okay.”

“And I know that we’re talking beyond just these chat bots. We’re talking about something much bigger than that, but if that’s what's happening on a small scale, we can see a peek into the morality of artificial intelligence,” she adds.

“If all of this is being created and programmed by people with particular values, that are either progressive or just pragmatists, like if they’re just like, 'Yeah, whatever we can do and whatever makes life easier, whatever makes me richer, we should just do that’ — there will be consequences of it,” she says.

Stuckey also notes that she recently heard someone of importance discussing the loss of jobs and what people will do as a result, and she found the answer concerning.

“It was some executive that said, ‘I’m not scared about AI killing 150 million jobs. That’s actually why we are creating these very immersive video games — so that when people lose their jobs, they can just play these video games and they can be satisfied and fulfilled that way,’” Stuckey explains.

“That is a very dystopian look at the future,” she continues, adding, “And yet, that tells us the mind of a lot of the people at WEF, a lot of the people at Davos, a lot of the people in Silicon Valley. That’s really how they see human beings.”

“Whether you’re talking about the Great Reset, whether you’re talking about singularity, they don’t see us as people with innate worth; they see us as cogs in a wheel,” she adds.

Want more from Allie Beth Stuckey?

To enjoy more of Allie’s upbeat and in-depth coverage of culture, news, and theology from a Christian, conservative perspective, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Report: AI Company DeepSeek ‘Funnels’ American User Data To Red China

The Chinese artificial intelligence (AI) company DeepSeek is allegedly siphoning American user data to China’s communist government, according to a new congressional report. Released on Wednesday by the House Select Committee on the Chinese Communist Party (CCP), the 16-page analysis contends that the China-based AI firm “collects detailed user data, which it transmits via backend […]

How Trump’s Antitrust Agenda Can Tackle Some Of America’s Biggest Problems

By working with U.S. companies, rather than against them, the administration can preserve American values, security, and global leadership.

Tech elites warn ‘reality itself’ may not survive the AI revolution



When Elon Musk warns that money may soon lose its meaning and Dario Amodei speaks of an AI-driven class war, you might think the media would take notice. These aren’t fringe voices. Musk ranks among the world’s most recognizable tech leaders, and Amodei is the CEO of Anthropic, a leading artificial intelligence company developing advanced models that compete with OpenAI.

Together, they are two of the most influential figures shaping the AI revolution. And they’re warning that artificial intelligence will redefine everything — from work and value to meaning and even our grasp of reality.

But the public isn’t listening. Worse, many hear the warnings and choose to ignore them.

Warnings from inside the machine

At the 2025 Davos conference, hosted by the World Economic Forum, Amodei made a prediction that should have dominated headlines. Within a few years, he said, AI systems will outperform nearly all humans at almost every task — and eventually surpass us in everything.

“When that happens,” Amodei said, “we will need to have a conversation about how we organize our economy. How do humans find meaning?”


The pace of change is alarming, but the scale may be even more so. Amodei warns that if 30% of human labor becomes fully automated, it could ignite a class war between the displaced and the privileged. Entire segments of the population could become economically “useless” in a system no longer designed for them.

Elon Musk, never one to shy away from bold predictions, recently said that AI-powered humanoid robots will eliminate all labor scarcity. “You can produce any product, provide any service. There’s really no limit to the economy at that point,” Musk said.

“Will money even be meaningful?” Musk mused. “I don’t know. It might not be.”

Old assumptions collapse

These tech leaders are not warning about some minor disruption. They’re predicting the collapse of the core systems that shape human life: labor, value, currency, and purpose. And they’re not alone.

Former Google CEO Eric Schmidt has warned that AI could reshape personal identity, especially if children begin forming bonds with AI companions. Filmmaker James Cameron says reality already feels more frightening than “The Terminator” because AI now powers corporate systems that track our data, beliefs, and movements. OpenAI CEO Sam Altman has raised alarms about large language models manipulating public opinion, setting trends, and shaping discourse without our awareness.

Geoffrey Hinton — one of the “Godfathers of AI” and a former Google executive — resigned in 2023 to speak more freely about the dangers of the technology he helped create. He warned that AI may soon outsmart humans, spread misinformation on a massive scale, and even threaten humanity’s survival. “It’s hard to see how you can prevent the bad actors from using [AI] for bad things,” he said.

These aren’t fringe voices. These are the people building the systems that will define the next century. And they’re warning us — loudly.

We must start the conversation

Despite repeated warnings, most politicians, media outlets, and the public remain disturbingly indifferent. As machines advance to outperform humans intellectually and physically, much of the attention remains fixed on AI-generated art and customer service chatbots — not the profound societal upheaval industry leaders say is coming.

The recklessness lies not only in developing this technology, but in ignoring the very people building it when they warn that it could upend society and redefine the human experience.

This moment calls for more than fascination or fear. It requires a collective awakening and urgent debate. How should society prepare for a future in which AI systems replace vast segments of the workforce? What happens when the economy deems millions of people economically “useless”? And how do we prevent unelected technocrats from seizing the power to decide those outcomes?

The path forward provides no room for neutrality. Either we begin serious conversations about protecting liberty and individual autonomy in an AI-driven world, or we allow a small group of global elites to shape the future for us.

The creators of AI are sounding the alarm. We’d better start listening.

Weekend Beacon 3/30/25

As peace remains elusive in Ukraine, it's worth reflecting on a time when the region was similarly witness to carnage—but with the Russians fighting the British, French, Turks, and... Sardinians?
