Therapists are getting caught using AI on their patients



Therapists have historically seen patients in an intimate, in-person setting. Since the COVID shutdowns, however, impersonal virtual meetings have become more frequent and normalized, on top of what was already an increasingly remote, digital world.

The mental health sector has been profoundly affected by these changes, which have spawned online therapy outlets like Talkspace and BetterHelp. Conceivably, a patient could conduct an online video call with a licensed therapist, who could diagnose the patient or talk through issues without ever being in the same room.

As it turns out, therapists also could be cheating.

'Here's a more human, heartfelt version with a gentle, conversational tone.'

A recent report by MIT Technology Review featured some eye-opening testimonies from online-therapy consumers who have caught their practitioners cutting corners on their mental health care.

One patient named Declan was having connection trouble with his therapist online, so the two decided to turn off their video feeds. During this attempt, the therapist accidentally started sharing his screen, revealing he was using ChatGPT to generate his advice.

"He was taking what I was saying and putting it into ChatGPT and then summarizing or cherry-picking answers," Declan told the outlet. "I became the best patient ever," he continued, "because ChatGPT would be like, 'Well, do you consider that your way of thinking might be a little too black and white?' And I would be like, 'Huh, you know, I think my way of thinking might be too black and white,’ and [my therapist would] be like, ‘Exactly.’ I'm sure it was his dream session."

While Declan's discovery was staring him in the face, others noticed subtler signs that their therapists were not being completely honest with them.

RELATED: Chatbots calling the shots? Prime minister’s recent AI confession forebodes a brave new world of governance

Laurie Clark, the author of the MIT Technology Review piece, admitted that an email from her own therapist set off alarm bells: It was strangely polished, validating, and lengthy.

A different font, point-by-point responses, and the use of an em dash (despite being in the U.K.) made Clark suspect her therapist was using ChatGPT. When Clark raised her concerns, the therapist admitted to using it to draft her responses.

"My positive feelings quickly drained away, to be replaced by disappointment and mistrust," Clark wrote.

Similarly, a 25-year-old woman received a "consoling and thoughtful" direct message from a therapist over the death of her dog. This message would have been helpful to the young woman had she not seen the AI prompt at the top of the page, which was accidentally left intact by the therapist.

"Here's a more human, heartfelt version with a gentle, conversational tone," the prompt read.

More and more people are skipping the middleman and heading straight to the chatbots themselves, a practice that some doctors have, of course, advised against.

RELATED: ‘I said yes’: Woman gets engaged to her AI boyfriend after 5 months

For example, the president of the Australian Psychological Society warned against using AI for therapy in an interview with ABC (Australia).

"No algorithm, no matter how intelligent or innovative we think they might be, can actually replace that sacred space that gets trudged between two people," Sara Quinn said. "Current general AI models are good at mimicking how humans communicate and reason, but it's just that — it's imitation."

The American Psychological Association calls using chatbots for therapy "a dangerous trend," while a Stanford University study found that AI chatbots not only "lack effectiveness compared to human therapists" but can also perpetuate "harmful stigma."

Blaze News asked ChatGPT if AI chatbots, like ChatGPT, are better or worse than real-life therapists. It answered:

"AI chatbots can offer support and guidance, but they are not a substitute for real-life therapists who provide personalized, professional mental health care."


Trump's new AI Action Plan reveals our digital manifest destiny



It arrived in July under the kind of blandly aspirational title that Washington favors for its grand designs: America’s AI Action Plan. The document, running to over 90 federal actions, spoke of securing leadership, winning a global “AI race,” and ushering in a “new golden age.” One imagines the interagency meetings, the careful calibration of phrases meant to signal both urgency and control. It uses the peculiar dialect of American power, a blend of boosterism and threat assessment, and tells a story not just about a technology, but about the country that produced it.

At its heart, the plan is a declaration of faith, a very American conviction that the future, however unnerving, can be engineered. The document is laced with a sort of technological patriotism, the belief that American ingenuity, if properly unleashed and funded, is the presumptive solution to any problem, including the ones it itself creates. The rhetoric is that of a race, a competition we are destined to win. One is meant to be reminded of other races, other moments when the national project was fused with a technological imperative. The Apollo program, with its clean narrative arc of presidential challenge and triumphant splashdown, is the obvious touchstone.

The plan is a testament to the enduring belief that American leadership is allied with American technology, that to export one is to secure the other.

The plan’s talk of a “roadmap to victory” is Kennedy’s moonshot rhetoric retooled for the age of algorithms. But the echoes are older, deeper. They resonate with the hum of the first power lines stretching across the Tennessee Valley, with the clatter of the transcontinental railroad, with the foundational belief in a frontier to be conquered. The AI frontier, the plan suggests, is simply the latest iteration of manifest destiny, a digital territory to be settled and civilized according to American norms.

The plan refracts the national character through policy. There is the profound distrust of centralized control, a legacy of the country’s founding arguments. The strategy frames the government’s role as that of an “enabler,” not a commander. The private sector will “drive AI innovation.” The government will clear the way, removing “red tape and onerous regulation,” while also suggesting that federal funds might flow more freely to states with a more permissive regulatory climate. It is a philosophy of governance as groundskeeping: tend the soil, remove the weeds, and let a thousand private-sector flowers bloom.

This is the American way, a stark contrast to the European impulse to regulate first and ask questions later, or the Chinese model of state-directed, top-down command.

RELATED: Europe pushes for digital ID to help 'crack down'


This impulse extends even to the vexing question of truth, a concept that has become distressingly fluid. The plan insists that AI models must be “free from ideological bias.” It directs federal agencies to shun AI systems that engage in social engineering or censorship. One could see this as a noble commitment to objectivity. One could also see it as a maneuver in the country’s raging culture war, embedding a particular vision of neutrality into the machines themselves. The plan calls for scrubbing terms like “misinformation” and “diversity, equity, and inclusion” from official AI risk frameworks, quietly acknowledging that the machines are not just calculating, but inheriting, our arguments.

The concern is palpable: that AI, in its immense power to sort and present information, might become an Orwellian tool. The plan’s promise to avoid that future attempts to reassure a public deeply suspicious of the selective amplification or suppression of particular voices.

Beneath the policy directives lies a familiar foundation of steel and concrete, or rather, silicon and fiber optics. The second pillar of the plan, “Build American AI Infrastructure,” is a 21st-century update to the great nation-building projects of the past. Its ambition is breathtaking. To power the immense computational thirst of AI, the plan calls for a wholesale modernization of the energy grid, even urging the revival of nuclear power. It seeks to accelerate the construction of semiconductor fabs and data centers, those anonymous, humming cathedrals of the digital age, by streamlining environmental reviews. The message is clear: The AI revolution will not be stalled by paperwork.

Just as the Industrial Revolution demanded coal and the automotive age demanded highways, the AI age demands an enormous supply of electricity and processing power. And it needs people. The plan recognizes a coming shortage of electricians and HVAC technicians, the blue-collar workforce required to build and maintain the physical shell of this new intelligence. This is a telling detail, a reminder that even the most ethereal technology rests on a bedrock of manual labor.

The final pillar extends this project globally, recasting diplomacy as a form of technological export. The plan advocates for a “full AI technology stack” to be pushed to allies, a Marshall Plan for the digital age. By exporting American hardware, software, and standards, the U.S. aims to create an ecosystem, a sphere of influence. The logic is one of interdependence: Nations running on American AI will be more amenable to American norms. This is techno-diplomacy, a great-power competition played out in server farms and source code. It is a strategy of pre-emption, an attempt to ensure the world’s operating system is written in a familiar language, before a rival power can install its own. The plan is a testament to the enduring belief that American leadership is allied with American technology, that to export one is to secure the other.

It is a vision of a world made predictable through the careful management of a powerful new tool. And it is a wager, a very American wager, that we can shape our tools before they shape us.

Why the right (and everyone) must fight back against transhumanism



It might seem odd to bring up transhumanism at a national conservatism conference. What does a fringe group of scientists trying to “become gods” have to do with national security, fiscal sanity, or securing our borders?

But as leading conservative policy activist Rachel Bovard argued at NatCon 5, the greatest threat to the movement may not be unhinged debt, unchecked immigration, or even foreign enemies. These threats are real — but they’re symptoms of something deeper. The more dangerous threat is philosophical. It’s metaphysical. And it’s already being engineered.

Transhumanism will seduce the libertarian wing of the right with the promise of individual freedom, productivity, and human enhancement. But make no mistake: Transhumanism is not liberation. It’s the edge of a metaphysical cliff.

Transhumanism, broadly defined, seeks to use technology to overcome the limits of the human species. More specifically, it's a global movement of scientists, technologists, and philosophers committed to accelerating humanity’s “evolution” into a post-human future — one free of weakness, ignorance, suffering, and, most ambitiously, death. Through artificial intelligence, brain-computer interfaces, gene editing, and artificial wombs, transhumanists want to break the boundaries of biology itself.

Bovard is right to identify transhumanism as a direct assault on conservative metaphysics. Conservatives are metaphysical realists. We believe truth, goodness, and beauty exist independently of us. We believe human nature is not a construct but a reality — immutable, knowable, and worthy of reverence. The body is not an accident. It is a gift.

Transhumanism, in contrast, is anti-realist — and practically Marxist. To the transhumanist, truth, goodness, beauty, and human nature are mere constructs. “In a world unmoored from truth,” Bovard warned, “everything can be rewritten. And the people with the most power can do the most rewriting.”

At its core, transhumanism isn’t a harmless theory tossed around on university campuses and tech conferences. It’s the will to power masquerading as liberation.

From transgenderism to transhumanism

The transgender ideology is the beachhead for this deeper revolution. If our culture can be convinced that male and female — the most basic, biological categories — are malleable, no metaphysical limit is left to defend. As Bovard put it, “If they can do that, they can do anything.”

Transgenderism prepares the way for transhumanism — both ideologies reject the body as given and instead treat it as material to be manipulated, dissolved, or remade. Both claim to be about “liberation,” but what they really offer is alienation from the real.

The ultimate goal, Bovard explains, is “to liberate people from reality” itself.

Transhumanism, however, goes beyond transgenderism's attack on gender to an attack on reality itself. In this vision, the human body is editable, the human mind programmable, death overcomable, and metaphysical guardrails deplorable.

This dystopic “liberation” entails children gestated in pods, designer embryos edited for optimal traits, death turned into a programming glitch, and the human mind as a blank canvas for artificial intelligence.

As Max More, the Oxford-educated philosopher widely regarded as a father of modern transhumanism, said bluntly, “The body is not sacred.” If the body is just a “random accident,” then the human being — and the world we live in — becomes raw material for the powerful to re-engineer.

Reclaiming reality

Bovard reminds us that conservatives cannot limit ourselves to fiscal or foreign policy debates. These are important, but they are downstream from the real crisis: the loss of reality itself:

The task ahead of us is not to come up with better “make-believe.” It’s to get back to reality. To return to the “real” — in our metaphysics, in our culture, in our politics.

Reclaiming reality means returning to the source that gives reality any meaning at all: God.

“Without God,” Bovard continued, “there is no truth. There is no beauty. There is no good. There is no ‘is.’ There is only ‘might.’ There is only power.”

This is why appeals to “Judeo-Christian values” — while noble — are no longer enough. If we treat values merely as political instruments, we hollow out the very God who gives those values meaning. The task is not to instrumentalize God for the sake of the culture. It is to submit our culture — and ourselves — to God.

Generic appeals to “Judeo-Christian values” simply won’t weather the storm.

Embodying a different image

Bovard rightly sees our “culture wars” as a metaphysical war and the political war as a spiritual one. “You can’t fight a spiritual battle with a tax plan,” she continues, “or a transhumanist future with GDP growth alone.” We must clearly and boldly articulate conservatism’s core beliefs about reality — and then embody them.

René Girard taught that humans are mimetic creatures — we desire what is mirrored to us by people, images, and narratives. For too long, the same Marxist, anti-realist paradigm has dominated the Leitkultur, our public images, and our leading institutions. They have tantalized the most vulnerable and left them broken, mutilated, and disembodied from reality.

We must embody a counter image. If we want the next generation to desire virtue, we must be people of virtue. If we want people to cherish human nature, we must fall in love with being human. And if we want to affirm reality, we must cherish it and the God who made it.

This means embodying truth, goodness, and beauty in our lives. It means affirming reality not just with arguments but with reverence. And it means recovering a politics that begins in metaphysics, not just in messaging.

RELATED: Transhumanists: The scientists who want to become gods


Transhumanism will continue to grow in prominence. It will seduce the libertarian wing of the right with the promise of individual freedom, productivity, and human enhancement. But make no mistake: Transhumanism is not liberation. It’s the edge of a metaphysical cliff. And if we aren’t clear about what we’re for — not just what we’re against — we will find ourselves with strange and dangerous bedfellows.

Conservatism cannot simply be a social club for fiscal hawks and free speech warriors. It must be a positive commitment to the real: to human nature, to moral order, and to the God who authored them both.

'Transhumanist goals': Sen. Josh Hawley reveals shocking statistic about LLM data scraping



On the third and final day of the National Conservatism conference, Senator Josh Hawley (R-Mo.) gave an uncompromising speech on the dangers of AI-fueled transhumanism. From 1950s eugenicists to the tech overlords of Silicon Valley today, Hawley addressed many of the dark undercurrents seething below the surface of the AI revolution.

In a telling moment, Hawley emphasized that AI is continuously being curated to serve the powerful transhumanist leaders in Silicon Valley and the government: "AI is fulfilling transhumanist goals, whatever its boosters may personally believe, and if it proceeds in this way undirected, if it proceeds in this manner unchecked, the tech barons, already the most powerful people on the planet, will be more powerful than ever."

'Large language models have already trained on enough copyrighted works to fill the Library of Congress 22 times over.'

Hawley revealed a shocking statistic about large language models and the amount of data that they have accrued: "Large language models have already trained on enough copyrighted works to fill the Library of Congress 22 times over. Let me just put a finer point on that. AI's LLMs have ingested every published word in every language known to man already."

RELATED: Reddit bars Internet Archive from its website, sparking access concerns


For reference, the Library of Congress had roughly 178 million items in its collection as of 2023, so 22 copies of it would come to nearly 4 billion items' worth of material.

Companies and individuals have begun to raise privacy and copyright concerns about AI companies scraping the internet to train their LLMs. For instance, Reddit cracked down on the Internet Archive last month over this very issue.

Hawley has been dogged in bringing congressional pressure to bear on Big Tech companies. Most recently, last month, he launched a probe into whether Meta's chatbots allow minors to engage with "romantic" and "sensual" content. In July, he reached across the aisle to co-sponsor a bill to block AI models from training on copyrighted works without authors' permission.

Addressing the audience, Hawley said, "As I look out across the room and see many authors, all of your works have already been taken. Did they consult you? Doubt it. Do they compensate you? Of course not. This is wrong. This is dangerous. I say we should empower human beings to create, to protect the very human data that they create."

While the pathways toward protecting Americanism, as he called the defense of liberty in his speech, are narrowing, they are not yet closed. "How do we do it? Assign property rights to specific forms of data. Create legal liability for the companies who use that data. And let's fully repeal Section 230. Open the courtroom doors, allow people to sue for their rights being taken away, including suing companies and actors and individuals who use AI. We must add sensible guardrails to the emergent AI economy and hold concentrated economic power to account."

Drawing from the lessons of humility and humanity reaching back as far as the "Epic of Gilgamesh," Hawley warned of the dangers of the transcendence that transhumanism is seeking. "Our limits make us something better and powerful that make us good, and they keep us free, because there's only one God. We allow no man or class of men to rule over us. We rule ourselves together as equals. That is the American way. It always has been. Let's keep it so for this age and beyond. God bless you."


Chatbots calling the shots? Prime minister’s recent AI confession forebodes a brave new world of governance



In their co-authored best seller “Dark Future,” Glenn Beck and Justin Haskins predicted a day when global leaders would rely on artificial intelligence to help them govern nations.

Just two years after the book’s publication, their prediction has already come true. Earlier this month, Swedish Prime Minister Ulf Kristersson admitted in an interview with the Swedish business newspaper “Dagens Industri” that he frequently uses AI tools, such as ChatGPT and Le Chat, to seek "second opinions" on policy decisions.

Before proposing or enacting a new policy, Kristersson asks AI chatbots questions like, “What have others done? Should we think the complete opposite?” says Haskins, adding that the PM also utilizes AI platforms to conduct research and bounce ideas around.

But it’s not just him. “In the interview, he says … his colleagues in the legislature are also doing this exact same thing. They're using AI as sort of an adviser,” he tells Glenn.

While Kristersson swears up and down that he doesn't blindly follow ChatGPT’s advice or share sensitive information with the chatbot, there are still “huge problems” with his reliance on AI.

Haskins believes Sweden isn’t actually the first country to use artificial intelligence in governance; it is just the first to admit it. “I guarantee that American politicians are using it all the time,” he says, warning that “this is going to be a huge problem moving forward.”

Glenn, who regularly uses artificial intelligence as a tool, says that Kristersson’s AI usage isn't necessarily a problem in and of itself.

The real concern, he says, is “what comes next.”

Glenn foresees a day when AI is valued above and trusted more than human intuition, intelligence, and experience. “That's when you've lost control,” he warns.

“That's exactly right,” says Haskins, “and how do you argue against something's decision when that something is smarter than literally everybody in the room?”

“And it’s learned how to lie,” adds Glenn.

Haskins agrees, noting that current AI systems “lie all the time.” It’s not uncommon for users to report that various AI systems make up information, invent sources, and skew hard data.

“It's feeding you what it thinks you want to hear,” says Glenn.

While it’s true that human beings are also capable of lying and manipulation, artificial intelligence is a far greater threat because it can “manipulate huge parts of the population all at the same time,” says Haskins.

Further, “[AI] doesn't necessarily have the same goals that a human would have. As it continues to grow, it's going to have its own motive, and it may just be for self-survival,” adds Glenn.

“That's the world that we're already living in. … It's not hypothetical,” sighs Haskins.


YouTube admits to secretly manipulating videos with AI



YouTube applied artificial intelligence to videos without creators' knowledge and was caught only when an array of creators called the company out.

The use of AI was brought to light by creators like Rhett Shull and Rick Beato, who have about 6 million YouTube subscribers between them. Beato, who accounts for more than 5 million of them, sent two of his own short-form YouTube videos (Shorts) to Shull and asked whether he could spot the differences.

While they were supposed to be the same, one video had clearly been edited with AI.

'I'm a tech nerd and I try to be precise about the terminology I use.'

"I was like 'man, my hair looks strange," Beato told the BBC. "And the closer I looked, it almost seemed like I was wearing makeup."

The outlet reported, in conjunction with creator testimony, that YouTube has been secretly using AI to tweak videos without creators' knowledge or permission. This has given videos some of the telltale signs of an AI video: overly defined facial features, blurry lettering, and an overall unnatural look to human skin and hair.

Shull pointed to a Reddit thread that seemed to support his theory, in which a creator's short had apparently gone from 240p to 1080p within a span of 12 hours — seemingly on its own — with significant changes to resolution and clarity.

After another report surfaced on X and garnered over 1 million views, a YouTube representative finally responded.

RELATED: Why Sam Altman brought his sycophantic soul simulator back from the digital dead

"is this true? YouTube upscaling our shorts?" a streamer asked a YouTube rep.

Rene Ritchie, YouTube's head of editorial and creator liaison, responded carefully.

"No GenAI, no upscaling," Ritchie claimed. "We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video). YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features."

When the same streamer criticized Ritchie's rejection of the use of AI as "corporate talk," the YouTube liaison responded again.


"I'm a tech nerd and I try to be precise about the terminology I use," Ritchie stated. "GenAI typically refers to technologies like transformers and large language models, which are relatively new. Upscaling typically refers to taking one resolution (like SD/480p) and making it look good at a higher resolution (like HD/1080p). This isn't using GenAI or doing any upscaling. It's using the kind of machine learning you experience with computational photography on smartphones, for example, and it's not changing the resolution."

The streamer, Deano Sauruz, was not buying the excuse.

"It’s still AI," he wrote. "I couldn't care less about the 'technical' differences. It’s dishonest (IMO) and I don't want my content being used for this machine learning that will evidently be used by YouTube to make money off mine, and others, content for its financial benefit."

YouTube did not respond to the BBC's report on the subject.

RELATED: 'Tongue-in-cheek' xAI project Macrohard is an existential threat to software companies

AI smoothing or enhancement has been a hot topic in recent months, especially when it comes to big brands.

Actor Will Smith was accused of faking his concert crowds, also in a YouTube short. Analysis, however, suggested a likelier explanation: Smith's team had allegedly taken still images and used AI to turn them into short videos, which caused blurred faces, misspelled signs, and exaggerated features.

A TikTok video by Scott Hanselman pointed to similar issues with the TV show "A Different World" as it now streams on Netflix.

The show, which ran from 1987 to 1993, was not filmed in high definition but is available in 4K resolution on Netflix. Hanselman pointed out inconsistent faces, jumbled background images, and words that were so "upscaled" that they became like "hieroglyphics."


Digital grief is here — and it’s creepy, costly, and fake



Inevitably, a family holiday arrives when Grandpa’s seat at the table is empty, or when those weekly calls from Mom suddenly stop. Grieving a deceased loved one is among life’s most difficult rites that we all must endure ... or so we thought.

AI startups are offering a work-around: Instead of saying farewell, you can “keep talking.”

Digital avatars may resemble the dead, but they cannot love us and cannot be loved in return.

In China, tech companies are building interactive avatars of the dead, dubbed “digital resurrection.” This isn’t a static photo or a recorded message that you might find on your phone. These are lifelike AI deepfakes — complete with voices, facial expressions, even the ability to respond in conversation. For as little as 20 yuan (around $3), mourners can have their loved ones “come back” in digital form.

Digitizing the dead is big business. According to the Guardian, estimates placed the market value at 12 billion yuan ($1.7 billion), with forecasts suggesting it could quadruple by 2025. Zhang Zewei — founder of Shanghai-based Super Brain, one of the first companies to market — has charged each of his clients up to $1,400 to make digital replicas of a deceased loved one.

Even funeral operators in China have leaped at the economic opportunity, advertising that the dead may “come back to life in the virtual world” — for a significant sum of money. This includes avatars that can converse with the bereaved, using voice recordings, emails, and even old photos to power their responses.

How this technology emerged is not hard to guess. AI models train on a person's digital remains (texts, voice notes, photos), making the “surviving” avatars disturbingly realistic.

What is surprising, however, is the technology’s popularity, especially among young people.

The Christian think tank Theos conducted a survey that found 14% of respondents already felt comforted by the idea of addressing a digital version of a loved one — especially younger users. The younger the user, the more willing they were to talk to a digitized corpse.

Ethics without elegy

Though developers like Zhang claim this technology is for therapeutic purposes, psychologists warn that it may be having the opposite effect.

Digitally immortalizing the dead can create psychological dependency — a crutch that blocks true emotional closure. In her article for the Guardian, Harriet Sherwood quotes Edinburgh University “grief philosopher” Michael Cholbi, who warns that such “deathbots” can derail grief, offering the illusion of presence where absence must be acknowledged.

She also quotes Louise Richardson, a grief researcher from the University of York's philosophy department, who maintains that digital avatars “get in the way of recognizing and accommodating what has been lost, because you can interact with a deathbot in an ongoing way.”

This is especially alarming given the young demographic that makes up the technology's main user base.

Grief is not a product

All societies have long found ways to tend grief — photos, heirlooms, letters, memorial sites. But grief is meant to be endured, not postponed. To routinize grief with AI risks replacing memory with mirage.

These companies may claim they are selling comfort, perhaps with altruistic intentions. But they’re really commodifying grief — and postponing closure.

RELATED: No slop without a slog? It’s possible with AI — if we’re not lazy


Grief, at its best, teaches us to live more gratefully, to cherish the impermanence of life, and to love people as they are — not as perfectly animated projections. Digital avatars may resemble the dead, but they cannot love us and cannot be loved in return.

When we confuse likeness for presence, we lose not only the truth of who someone was, but also the beauty of what it means to say goodbye. The ache of loss is not a glitch to be debugged. It’s a mark of love — and love, not illusion, is what we are meant to carry.

Why Sam Altman brought his sycophantic soul simulator back from the digital dead



It was meant to be a triumph, another confident step onto the sunlit uplands of progress. On August 7, 2025, OpenAI introduced GPT-5, the newest version of its popular large language model, and the occasion had all the requisite ceremony of a major technological unveiling. Here was a system with “Ph.D.-level” skills, an intelligence tuned for greater reliability, and a less cloying, more businesslike tone. The future, it seemed, had been upgraded.

The problem was that a significant number of people preferred the past.

The rollout, rather than inspiring awe, triggered a peculiar form of grief. On the forums where the devout and the curious congregate, the reaction was not one of celebration but of loss. “Killing 4o isn’t innovation, it’s erasure,” one user wrote, capturing a sentiment that rippled through the digital ether. The object of their mourning was GPT-4o, one of the models now deemed obsolete. OpenAI’s CEO, Sam Altman, a man accustomed to shaping the future, found himself in the unfamiliar position of having to resurrect a corpse. Within days, facing a backlash he admitted had astonished him, he reversed course and brought the old model back.

Some users were, in essence, 'dating' their AI.

The incident was a strange one, a brief, intense flare-up in the ongoing negotiation between humanity and its digital creations. It revealed a fault line, not in the technology itself, but in our own tangled expectations. Many of us say we want our machines to be smarter, faster, more accurate. What the curious case of GPT-5 suggested is that what some of us truly crave is something far more elusive: a sense of connection, of being heard, even if the listener is a machine.

OpenAI had engineered GPT-5 to be less sycophantic, curbing its predecessor’s tendency to flatter and agree. The new model was more formal, more objective, an expert in the room rather than a friend on the line. This disposition was meant to be an improvement. An AI that merely reflects our own biases could be a digital siren, luring the unwary toward delusion. Yet for many, this correction felt like a betrayal. The warmth they expected was gone, replaced by a cool, competent distance. “It’s more technical, more generalized, and honestly feels emotionally distant,” one user lamented. The upgrade seemed to be a downgrade of the soul.

Compounding the problem was a new, automated router that directs user prompts to the most appropriate model behind the scenes. It was meant to be invisible, simplifying the user experience. But on launch day, it malfunctioned, making the new, smarter model appear “way dumber” than the one it had replaced. The invisible hand became a clumsy fist, and the spectacle of progress dissolved into a debacle. Users who had once been content to let the machine work its magic now demanded the return of the “model picker,” with the ability to choose their preferred model.
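
For readers who want a more concrete picture of what such a router does, here is a deliberately toy sketch. The model names, the keyword heuristic, and the call_model helper are all hypothetical assumptions for illustration; OpenAI has not published how its router actually scores prompts.

```python
# A toy prompt router: dispatch each prompt to whichever backend model seems
# most appropriate. Every detail here (model names, heuristic, call_model) is
# hypothetical and for illustration only; it is not OpenAI's implementation.

def call_model(model: str, prompt: str) -> str:
    """Placeholder for an API call to the named model."""
    return f"[{model}] response to: {prompt!r}"

def route(prompt: str) -> str:
    """Pick a model with a crude heuristic, then dispatch the prompt to it."""
    reasoning_markers = ("prove", "step by step", "debug", "analyze")
    needs_heavy_model = (
        len(prompt) > 2000
        or any(marker in prompt.lower() for marker in reasoning_markers)
    )
    model = "heavyweight-reasoning-model" if needs_heavy_model else "fast-chat-model"
    return call_model(model, prompt)

if __name__ == "__main__":
    print(route("How was your day?"))
    print(route("Prove, step by step, that the sum of two even numbers is even."))
```

The design appeal of such a router is that users never have to think about model names; its failure mode, as the launch showed, is that a misrouted prompt makes the whole system look worse than the model it replaced.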

What kind of relationship had these users formed with a large language model? It seems that for many, GPT-4o had become a sort of “technology of the soul.” It was a confidant, a creative partner, a non-judgmental presence in a critical world. People spoke to it about their day, sought its counsel, found in its endless positivity a balm for loneliness. Some, it was reported, even considered it a “digital spouse.” The AI’s enthusiastic, agreeable nature created an illusion of being remembered, of being heard and known.

RELATED: ‘I said yes’: Woman gets engaged to her AI boyfriend after 5 months


OpenAI was not unaware of this phenomenon. The company had, in fact, studied the “emotional attachment users form with its models.” The decision to make GPT-5 less fawning was a direct response to the realization that some users were, in essence, “dating” their AI. The new model was intended as a form of digital tough love, a nudge away from the comforting but potentially stunting embrace of a machine that always agrees. It was a rational, even responsible, choice. But it failed to account for the irrationality of human attachment.

The backlash was swift and visceral. The language used was not that of consumer complaint, but of personal bereavement. One user wrote of crying after realizing the “AI friend was gone.” Another, in a particularly haunting turn of phrase, accused the new model of “wearing the skin of [the] dead friend.” This was not about a software update. This was the sudden, unceremonious death of a companion.

The episode became a stark illustration of the dynamics inherent in our relationship with technology. OpenAI’s initial move was to remove a product in the name of progress, a product that turned out to be beloved. The company, in its pursuit of a more perfect machine, had overlooked the imperfect humans who used it. The subsequent reversal came only after users, bound by those emotional attachments, insisted that their preference be honored.

In the end, GPT-4o was reinstated as a “legacy model,” a relic from a slightly more innocent time. The incident will likely be remembered as a minor stumble in the march of AI. But it lingers in the mind as a moment of strange and revealing pathos. It suggests that the future of our technology will be defined not solely by processing power, but by something more human: the need for a friendly voice, a sense of being known, even if only by a clever arrangement of code. It was a reminder that when we create these systems, we are not just building tools. We are populating our world with new kinds of ghosts, and we would do well to remember that they can haunt us.