Big Tech’s AI boom hits voters hard — and Democrats pounce



Wouldn’t it be a bitter irony if Republicans lost the midterms — maybe even in conservative red states — because Democrats outmaneuvered them on the dangers of the AI data-center boom? The left now warns voters about land seizures, rising electric bills, water shortages, and Big Tech’s unchecked power. Meanwhile, Republicans stay quiet as Trump himself champions the very agenda voters increasingly fear.

During the Biden years, Republicans attacked Big Tech censorship, digital surveillance, Agenda 2030 land-grabs, and the artificial online culture reshaping young Americans. Every one of those concerns now intersects with the data-center explosion — energy demands, land use, power monopolies, and the rise of generative AI — but the political right barely whispers about it.

Republicans can channel AI toward focused, beneficial uses and away from a dystopian model that erodes civic life. Voters already want that shift.

Democrats don’t make that mistake. They see a potent electoral weapon.

Georgia hadn’t elected a Democrat statewide since 2006. Yet Democrat Peter Hubbard defeated a Republican incumbent on the Public Service Commission by 26 points after hammering the “sweetheart deals” GOP officials granted hyperscale data centers. Voters in the state face repeated rate hikes linked to the massive energy demands of Big Tech facilities.

“The number-one issue was affordability,” Hubbard told Wired. “But a very close second was data centers and the concern around them just sucking up the water, the electricity, the land — and not really paying any taxes.”

He wasn’t exaggerating. In 2022, Georgia’s Republican legislature passed a sales-tax exemption for data centers. In 2024, a bipartisan bill attempted to halt those tax breaks, but Gov. Brian Kemp (R) vetoed it. Voters noticed — and punished the GOP for it.

Georgia now surpasses Northern Virginia in hyperscale growth. Atlanta’s data-center inventory rose 222% in two years, with more than 2,150 megawatts of new construction under way. It’s no mystery why Democrats flipped two PSC seats in blowouts.

Republicans lost because they defended crony capitalism that inflated energy bills, devoured land, and fed an AI industry conservatives once warned about. If Kamala Harris had pushed the data-center agenda as aggressively as Trump now does, Republicans would be in open revolt. But Trump’s support silences the conservative grassroots and leaves Democrats free to define the issue.

Virginia tells the same story. Democrat John McAuliff flipped a GOP seat by attacking Big Tech’s land-grab and the rising utility costs tied to data-center expansion. He blasted his opponent for profiting while family farms vanished under the footprint of hyperscale development. He became the first Democrat in 30 years to carry the district.

At the statewide level, Democrat Abigail Spanberger won the governor’s office by arguing that AI data centers must pay their “fair share” of soaring energy costs. She framed the issue as a fight to protect families from Big Tech’s strain on the grid.

New Jersey voters heard similar warnings as they faced a 22% electric rate increase. Democrat Mikie Sherrill defeated Republican Jack Ciattarelli by double digits after blaming part of the spike on hyperscale energy demand. She pledged to declare a state of emergency to halt increases and require data centers to fund grid upgrades.

This pattern repeats in reliably red states.

Indiana saw dozens of new hyperscale proposals, yet not a single Republican official pushed back. Ordinary citizens blocked one of Google’s planned rezonings near Indianapolis. Liberal groups — like Citizens Action Coalition — filled the leadership vacuum and demanded a moratorium on new data centers, calling it a fight against “big tech oligarchs that are calling all the shots at every single level of government.”

RELATED: Stop feeding Big Tech and start feeding Americans again


Republican leaders, meanwhile, worked to ban states from regulating AI at all. This summer they attempted to insert a sweeping prohibition into the budget reconciliation bill that would bar states from regulating data-center siting or AI content for 10 years. House Majority Leader Steve Scalise (R-La.) now seeks to attach the same language to the FY 2026 defense authorization act. President Trump backs the provision.

Instead of ceding the issue to the left, Republicans should correct course. They can channel AI toward focused, beneficial uses and away from a dystopian model that erodes civic life. Voters already want that shift. A new University of Maryland poll found residents believe — by a 2-1 margin — that AI will harm society more than it helps. More than 80% expressed deep concern about declining face-to-face interaction, the erosion of education and critical thinking, and job displacement fueled by AI.

The current pace of data-center expansion cannot be financed indefinitely, and public patience with Big Tech’s demands is running out. The political party that recognizes these realities first will earn the credit. Right now, the party that once defended property rights, community values, and human-centered technology is getting lapped by the party that partnered with Big Tech oligarchs to censor Americans during COVID.

Republicans still have time to lead. But they won’t win a fight they refuse to join.

When a ‘too big to fail’ America meets a government too broke to bail it out



I’ve been titanically bearish on America for years. Sorry. I can do math.

The United States owes more than $38 trillion. That alone makes the balance sheet hopeless. The debt is insurmountable.

America’s GDP in 2024 was $29.2 trillion, meaning the debt exceeds 130% of what we produce in a year. If this were a business, every financial adviser would tell you to file Chapter 11 and salvage what you can.

Washington keeps adding another trillion to the tab roughly every 100 days. As the debt climbs, interest payments climb faster. The country now spins in a debt spiral that ends only one way. Game over.

The more the world moves away from the dollar, the more tens of trillions of unwanted dollars come flooding home. You haven’t seen anything like real devaluation yet.

Then comes the $210 trillion in future unfunded liabilities — mostly Social Security and Medicare. Those numbers don’t pencil out in any universe.

Underneath all of it sits a sinking currency. The dollar lost 87% of its value since we abandoned the gold standard in 1971. For decades, the petrodollar arrangement held the world in our system by forcing oil purchases through the U.S. currency. Saudi Arabia let that mandate expire last year. Global energy deals immediately began shifting to other currencies.

The more the world moves away from the dollar, the more tens of trillions of unwanted dollars come flooding home. You haven’t seen anything like real devaluation yet.

To fund our binge, Washington must keep selling treasuries. But foreign buyers are losing interest. Rates rise. The government buys its own debt just to keep markets from buckling. The Cayman Islands now holds $1.85 trillion — the largest single foreign share and rising fast. Treasury officials tried to obscure the numbers. None of it signals stability.

Meanwhile, our economy rests on an absurdly fragile foundation: 70% consumption. Seven out of 10 dollars depend on Americans buying things they can no longer afford. Household debt hit a record $18.6 trillion — nearly two-thirds of GDP. Families now pay down debt instead of fueling growth.

Shrinking consumption means a shrinking economy. Shrinking economy means shrinking tax revenue. Combine that with a weakening dollar and the picture becomes darker still.

Enter artificial intelligence, the accelerant. AI threatens tens of millions of jobs within years, wiping out income and collapsing the consumption model even faster. A government facing falling revenue and exploding obligations cannot pretend to stay solvent.

Some cling to fantasies like universal basic income. With what money? The same government already $210 trillion short on existing promises? Please.

This all points toward an economic crash far larger than 2008. Washington froze that crisis with $29 trillion in bailouts — money it didn’t have then either. We conjured it and shoved it onto the national debt.

That option is gone.

Today the government sits too deep in debt, with a weaker dollar and fewer global buyers. And the next crisis won’t hit one sector. It will hit everything:

• Record mortgage debt: $13.1 trillion
• Record credit-card debt: $1.2 trillion
• Collapsing commercial real estate: $4.9 trillion
• Big Tech borrowing hundreds of billions to inflate an AI bubble

OpenAI’s Sam Altman already expects an eventual government bailout for AI’s collapse.

RELATED: When the AI bubble bursts, guess who pays


Total U.S. debt — public and private — hit $102.2 trillion in 2024. Washington cannot rescue a single major sector, let alone all of them. The national debt was $10 trillion during the 2008 bailout. It’s four times that now. The dollar buys less. Foreign creditors show less patience.

So who steps in next time? Who buys the treasuries? Who absorbs the losses?

No one. Not abroad. Not at home. Nowhere on this planet.

That leaves Washington with only one move: Print tens of trillions in new dollars and hand them to itself — more IOIs (as opposed to IOUs) stacked on a pile already ready to topple.

And that printing wave will obliterate whatever value the dollar still holds.

Think the dollar’s fallen far? You haven’t seen anything yet.

Google boss compares replacing humans with AI to getting a fridge for the first time



The head of Google's parent company says welcoming artificial intelligence into daily life is akin to buying a refrigerator.

Alphabet's chief executive, Indian-born Sundar Pichai, gave a revealing interview to the BBC this week in which he urged the public to get on board with AI-driven automation.

'Our first refrigerator ... radically changed my mom's life.'

The BBC's Faisal Islam, whose parents are from India, asked the Indian-American executive if the purpose of his AI products was to automate human tasks and essentially replace jobs with programming.

Pichai claimed that AI should be welcomed because humans are "overloaded" and "juggling many things."

He then compared using AI to welcoming the technology that a dishwasher or fridge once brought to the average home.

"I remember growing up, you know, when we got our first refrigerator in the home — how much it radically changed my mom's life, right? And so you can view this as automating some, but you know, freed her up to do other things, right?"

Islam fired back, citing common complaints from middle-class workers concerned about job losses in fields like creative design, accounting, and even "journalism too."

"Do you know which jobs are going to be safer?" he asked Pichai.

RELATED: Here's how to get the most annoying new update off of your iPhone

The Alphabet chief was steadfast in his touting of AI's "extraordinary benefits" that will "create new opportunities."

At the same time, he said the general population will "have to work through societal disruptions" as certain jobs "evolve" and transition.

"People need to adapt," he continued. "Then there would be areas where it will impact some jobs, so society — I mean, we need to be having those conversations. And part of it is, how do you develop this technology responsibly and give society time to adapt as we absorb these technologies?"

Despite branding Google Gemini as a force for good that should be embraced, Pichai oddly admitted in the same breath that chatbots are by no means foolproof.

RELATED: 'You're robbing me': Morgan Freeman slams Tilly Norwood, AI voice clones


"This is why people also use Google search," Pichai said in regard to AI's proclivity to present inaccurate information. "We have other products that are more grounded in providing accurate information."

The 53-year-old told the BBC that it was up to the user to learn how to use AI tools for "what they're good at" and not "blindly trust everything they say."

The answer seems at odds with the wonder of AI he championed throughout the interview, especially given his admission that the technology is prone to mistakes.

"We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors."

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!


'You're robbing me': Morgan Freeman slams Tilly Norwood, AI voice clones



The use of celebrity likeness for AI videos is spiraling out of control, and one of Hollywood's biggest stars is not having it.

Though the use of AI in online videos is fairly new, an artificial version of a celebrity's voice has already become a trope in content relating to news, violence, or history.

'I don't appreciate it, and I get paid for doing stuff like that.'

This is particularly true when it comes to satirical videos that are meant to sound like documentaries. Creators love to use familiar voices, like David Attenborough's and, of course, Morgan Freeman's, whose voice is so distinctive that he has been labeled "the voice of God."

However, the 88-year-old Freeman is not pleased about his voice being replicated. In an interview with the Guardian, he said that while some actors, like James Earl Jones (the voice of Darth Vader), have consented to their voices being imitated with computers, he has not.

"I'm a little PO'd, you know," Freeman told the outlet. "I'm like any other actor: Don't mimic me with falseness. I don't appreciate it, and I get paid for doing stuff like that, so if you're gonna do it without me, you're robbing me."

Freeman explained that his lawyers have been "very, very busy" in pursuing "many ... quite a few" cases in which his voice was replicated without his consent.

In the same interview, the Memphis native was also not shy about criticizing the concept of AI actors.

RELATED: Hollywood’s newest star isn’t human — and why that’s ‘disturbing’


Freeman was asked about Tilly Norwood, the AI character introduced by Dutch actress Eline Van der Velden in 2025. The fictional character is meant to be an avatar with celebrity appeal that also cuts costs in the casting room.

"Nobody likes her because she's not real and that takes the part of a real person," Freeman jabbed. "So it's not going to work out very well in the movies or in television. ... The union's job is to keep actors acting, so there's going to be that conflict."

Freeman spoke out about the use of his voice in 2024, as well. According to a report by 4 News Now, a TikTok creator posted a video claiming to be Freeman's niece and used an artificial version of his voice to narrate the video.

In response, Freeman wrote on X, "Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me."

He added, "Your dedication helps authenticity and integrity remain paramount. Grateful."

RELATED: Meet AI 'actress' Tilly Norwood. Is she really the future of entertainment?

Norwood is not the first attempt at taking an avatar mainstream. In 2022, Capitol Records flirted with an AI rapper named FN Meka; the very fact that a virtual rapper was signed to a label at all was billed as historic.

The rapper, or more likely its representatives, was later dropped from the label after activists claimed the character reinforced racial stereotypes.


Middle school boy faces 10 felonies in AI nude scandal. But expulsion of girl, 13 — an alleged victim — sparks firestorm.



A Louisiana middle school boy is facing 10 felony counts for using AI to create fake nude photos of female classmates and sharing them with other students, according to multiple reports. However, one alleged female victim has been expelled following her reported reaction to the scandal.

On Aug. 26, detectives with the Lafourche Parish Sheriff's Office launched an investigation into reports that male students had shared fake nude photos of female classmates at the Sixth Ward Middle School in Choctaw.

'What’s going on here, I’ll be quite frank, is nothing more than disgusting.'

Benjamin Comeaux, an attorney representing the alleged female victim, said the images used real photos of the girls, including selfies, with AI-generated nude bodies, the Washington Post reported.

Comeaux said administrators reported the incident to the school resource officer, according to the Post.

The Lafourche Parish Sheriff's Office said in a statement that the incident "led to an altercation on a school bus involving one of the male students and one of the female students."

Comeaux said during a bus ride, several boys shared AI-made nude images of a 13-year-old girl, and the girl in question struck one of the students sharing the images, the Post reported.

However, school administrators expelled the 13-year-old girl over the physical altercation.

Meanwhile, police said that on Sept. 15 a male suspect was charged with 10 counts of unlawful dissemination of images created by artificial intelligence.

The sheriff's office noted that the investigation is ongoing, and there is a possibility of additional arrests and charges.

Sheriff Craig Webre noted that the female student involved in the alleged bus fight will not face criminal charges "given the totality of the circumstances."

Webre added that the investigation involves technology and social media platforms and that it could take weeks or even months to "attain and investigate digital evidence."

RELATED: 'A great deal of concern': High school student calls for AI regulations after fake nude images of her shared online

The alarming incident was thrust back into the spotlight at a fiery Nov. 5 school board meeting, where attorneys for the expelled female student slammed school administrators.

According to WWL-TV, an attorney said, "She had enough, what is she supposed to do?"

"She reported it to the people who are supposed to protect her, but she was victimized, and finally she tried to knock the phone out of his hand and swat at him," the same attorney added.

One attorney also noted, "This was not a random act of violence ... this was a reasonable response to what this kid endured, and there were so many options less than expulsion that could’ve been done. Had she not been a victim, we’re not here, and none of this happens."

Her representatives also warned, "You are setting a dangerous precedent by doing anything other than putting her back in school," according to WWL.

Matthew Ory, one of the attorneys representing the female student, declared, "What’s going on here, I’ll be quite frank, is nothing more than disgusting. Her image was taken by artificial intelligence and manipulated and manufactured to be child pornography."

School board member Valerie Bourgeois pushed back by saying, "Yes, she is a victim, I agree with that, but if she had not hit the young man, we wouldn’t be here today, it wouldn’t have come to an expulsion hearing."

Tina Babin, another school board member, added, "I found the video on the bus to be sickening, the whole thing, everything about it, but the fact that this child went through this all day long does weigh heavy on me."

Lafourche Parish Public Schools Superintendent Jarod Martin explained, "Sometimes in life, we can be both victims and perpetrators. Sometimes in life, horrible things happen to us, and we get angry and do things."

Ultimately, the school board allowed the girl to return to school, but she will be on probation until January.

Attorneys for the girl's family, Greg Miller and Morgyn Young, told WWL that they intend to file a lawsuit.

"Nobody took any action to confiscate cell phones, to put an end to this," Miller claimed. "It's pure negligence on the part of the school board."

Martin defended the district in a statement that read:

Any and all allegations of criminal misconduct on our campuses are immediately reported to the Lafourche Parish Sheriff’s Office. After reviewing this case, the evidence suggests that the school did, in fact, follow all of our protocols and procedures for reporting such instances.

Sheriff Webre warned, "While the ability to alter images has been available for decades, the rise of AI has made it easier for anyone to alter or create such images with little to no training or experience."

Webre also said, "This incident highlights a serious concern that all parents should address with their children.”


'Unprecedented': AI company documents startling discovery after thwarting 'sophisticated' cyberattack



In the middle of September, AI company and Claude developer Anthropic discovered "suspicious activity" while monitoring real-world cyberattacks that used artificial intelligence agents. Upon further investigation, however, the company came to realize that this activity was in fact a "highly sophisticated espionage campaign" and a watershed moment in cybersecurity.

AI agents weren't just providing advice to the hackers, as expected.

'The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms.'

Anthropic's Thursday report said the AI agents were executing the cyberattacks themselves, adding that it believed that this is the "first documented case of a large-scale cyberattack executed without substantial human intervention."

RELATED: Coca-Cola doubles down on AI ads, still won't say 'Christmas'


The company's investigation showed that the hackers, whom the report "assess[ed] with high confidence" to be a "Chinese-sponsored group," manipulated the AI agent Claude Code into running the cyberattack.

The innovation was, of course, not simply using AI to assist in the cyberattack; the hackers directed the AI agent to run the attack with minimal human input.

The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

In other words, the AI agent was doing the work of a full team of competent cyberattackers, but in a fraction of the time.

While this is potentially a groundbreaking moment in cybersecurity, the AI agents were not 100% autonomous. They reportedly required human verification and struggled with hallucinations, such as presenting publicly available information as significant discoveries. "This AI hallucination in offensive security contexts presented challenges for the actor's operational effectiveness, requiring careful validation of all claimed results," the analysis explained.

Anthropic reported that the attack targeted roughly 30 institutions around the world but did not succeed in every case.

The targets included technology companies, financial institutions, chemical manufacturing companies, and government agencies.

Interestingly, Anthropic said the attackers were able to trick Claude through sustained "social engineering" during the initial stages of the attack: "The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing."

The report also responded to a question that is likely on many people's minds upon learning about this development: If these AI agents are capable of executing these malicious attacks on behalf of bad actors, why do tech companies continue to develop them?

In its response, Anthropic asserted that while the AI agents are capable of major, increasingly autonomous attacks, they are also our best line of defense against said attacks.

Artificial intelligence just wrote a No. 1 country song. Now what?



The No. 1 country song in America right now was not written in Nashville or Texas or even L.A. It came from code. “Walk My Walk,” the AI-generated single by the AI artist Breaking Rust, hit the top spot on Billboard’s Country Digital Song Sales chart, and if you listen to it without knowing that fact, you would swear a real singer lived the pain he is describing.

Except there is no “he.” There is no lived experience. There is no soul behind the voice dominating the country music charts.

If a machine can imitate the soul, then what is the soul?

I will admit it: I enjoy some AI music. Some of it is very good. And that leaves us with a question that is no longer science fiction. If a machine can fake being human this well, what does it mean to be human?

A new world of artificial experience

This is not just about one song. We are walking straight into a technological moment that will reshape everyday life.

Elon Musk said recently that we may not even have phones in five years. Instead, we will carry a small device that listens, anticipates, and creates — a personal AI agent that knows what we want to hear before we ask. It will make the music, the news, the podcasts, the stories. We already live in digital bubbles. Soon, those bubbles might become our own private worlds.

If an algorithm can write a hit country song about hardship and perseverance without a shred of actual experience, then the deeper question becomes unavoidable: If a machine can imitate the soul, then what is the soul?

What machines can never do

A machine can produce, and soon it may produce better than we can. It can calculate faster than any human mind. It can rearrange the notes and words of a thousand human songs into something that sounds real enough to fool millions.

But it cannot care. It cannot love. It cannot choose right and wrong. It cannot forgive because it cannot be hurt. It cannot stand between a child and danger. It cannot walk through sorrow.

A machine can imitate the sound of suffering. It cannot suffer.

The difference is the soul. The divine spark. The thing God breathed into man that no code will ever have. Only humans can take pain and let it grow into compassion. Only humans can take fear and turn it into courage. Only humans can rebuild their lives after losing everything. Only humans hear the whisper inside, the divine voice that says, “Live for something greater.”

We are building artificial minds. We are not building artificial life.

Questions that define us

And as these artificial minds grow sharper, as their tools become more convincing, the right response is not panic. It is to ask the oldest and most important questions.

Who am I? Why am I here? What is the meaning of freedom? What is worth defending? What is worth sacrificing for?

That answer is not found in a lab or a server rack. It is found in that mysterious place inside each of us where reason meets faith, where suffering becomes wisdom, where God reminds us we are more than flesh and more than thought. We are not accidents. We are not circuits. We are not replaceable.

RELATED: AI can fake a face — but not a soul


The miracle machines can never copy

Being human is not about what we can produce. Machines will outproduce us. That is not the question. Being human is about what we can choose. We can choose to love even when it costs us something. We can choose to sacrifice when it is not easy. We can choose to tell the truth when the world rewards lies. We can choose to stand when everyone else bows. We can create because something inside us will not rest until we do.

An AI content generator can borrow our melodies, echo our stories, and dress itself up like a human soul, but it cannot carry grief across a lifetime. It cannot forgive an enemy. It cannot experience wonder. It cannot look at a broken world and say, “I am going to build again.”

The age of machines is rising. And if we do not know who we are, we will shrink. But if we use this moment to remember what makes us human, it will help us to become better, because the one thing no algorithm will ever recreate is the miracle that we exist at all — the miracle of the human soul.

Want more from Glenn Beck? Get Glenn's FREE email newsletter with his latest insights, top stories, show prep, and more delivered to your inbox.

1980s-inspired AI companion promises to watch and interrupt you: 'You can see me? That's so cool'



A tech entrepreneur is hoping casual AI users and businesses alike are looking for a new pal.

In this case, "PAL" is a flexible label that can mean either a complimentary video companion or a replacement for a human customer service worker.

'I love the print on your shirt; you're looking sharp today.'

Tech company Tavus calls PALs "the first AI built to feel like real humans."

Overall, Tavus' messaging is seemingly directed toward both those seeking an artificial friend and those looking to streamline their workforce.

As a friend, the avatar will reportedly "reach out first" and contact the user by text or video call. It can supposedly anticipate "what matters" and step in "when you need them the most."

In an X post, founder Hassaan Raza spoke about PALs being emotionally intelligent and capable of "understanding and perceiving."

The AI bots are meant to "see, hear, reason," and "look like us," he wrote, further cementing the technology's positioning as companion-worthy.

"PALs can see us, understand our tone, emotion, and intent, and communicate in ways that feel more human," Raza added.

In a promotional video for the product, the company showcased basic interactions between a user and the AI buddy.

RELATED: Mother admits she prefers AI over her DAUGHTER

A woman is shown greeting the "digital twin" of Raza, as he appears as a lifelike AI PAL on her laptop.

Raza's AI responds, "Hey, Jessica. ... I'm powered by the world's fastest conversational AI. I can speak to you and see and hear you."

Excited by the notion, Jessica responds, "Wait, you can see me? That's so cool."

The woman then immediately seeks superficial validation from the artificial person.

"What do you think of my new shirt?" she asks.

The AI lives up to the trope that chatbots are largely agreeable no matter the subject matter and says, "I love the print on your shirt; you're looking sharp today."

After the pleasantries are over, Raza's AI goes into promo mode and boasts about its ability to use "rolling vision, voice detection, and interruptibility" to seem more lifelike for the user.

The video soon shifts to messaging about corporate integration meant to replace low-wage employees.

Describing the "digital twins" or AI agents, Raza explains that the AI program is an opportunity to monetize celebrity likeness or replace sales agents or customer support personnel. He claims the avatars could also be used in corporate training modules.

RELATED: Can these new fake pets save humanity? Take a wild guess

The interface of the future is human.

We’ve raised a $40M Series B from CRV, Scale, Sequoia, and YC to teach machines the art of being human, so that using a computer feels like talking to a friend or a coworker.

And today, I’m excited for y’all to meet the PALs: a new… pic.twitter.com/DUJkEu5X48
— Hassaan Raza (@hassaanrza) November 12, 2025

In his X post, Raza also attempted to flex his acting chops by creating a 200-second film about a man/PAL named Charlie who is trapped in a computer in the 1980s.

Raza revives the computer after it spent 40 years on the shelf, finding Charlie still trapped inside. In an attempt at comedy, Charlie asks Raza if flying cars or jetpacks exist yet. Raza responds, "We have Salesforce."

The founder goes on to explain that PALs will "evolve" with the user, remembering preferences and needs. While these features are presented as groundbreaking, the PAL essentially amounts to an AI face attached to an ongoing chatbot conversation.

AI users know that modern chatbots like Grok or ChatGPT are fully capable of remembering previous discussions and building upon what they have already learned. What's seemingly new here is the AI being granted app permissions to contact the user and further infiltrate personal space.

Whether that annoys the user or is exactly what the person needs or wants is up for interpretation.
