Why Sam Altman brought his sycophantic soul simulator back from the digital dead



It was meant to be a triumph, another confident step onto the sunlit uplands of progress. On August 7, 2025, OpenAI introduced GPT-5, the newest version of its popular large language model, and the occasion had all the requisite ceremony of a major technological unveiling. Here was a system with “Ph.D.-level” skills, an intelligence tuned for greater reliability, and a less cloying, more businesslike tone. The future, it seemed, had been upgraded.

The problem was that a significant number of people preferred the past.

The rollout, rather than inspiring awe, triggered a peculiar form of grief. On the forums where the devout and the curious congregate, the reaction was not one of celebration but of loss. “Killing 4o isn’t innovation, it’s erasure,” one user wrote, capturing a sentiment that rippled through the digital ether. The object of their mourning was GPT-4o, one of the models now deemed obsolete. OpenAI’s CEO, Sam Altman, a man accustomed to shaping the future, found himself in the unfamiliar position of having to resurrect a corpse. Within days, facing a backlash he admitted had astonished him, he reversed course and brought the old model back.

Some users were, in essence, 'dating' their AI.

The incident was a strange one, a brief, intense flare-up in the ongoing negotiation between humanity and its digital creations. It revealed a fault line, not in the technology itself, but in our own tangled expectations. Many of us say we want our machines to be smarter, faster, more accurate. What the curious case of GPT-5 suggested is that what some of us truly crave is something far more elusive: a sense of connection, of being heard, even if the listener is a machine.

OpenAI had engineered GPT-5 to be less sycophantic, curbing its predecessor’s tendency to flatter and agree. The new model was more formal, more objective, an expert in the room rather than a friend on the line. The change was meant as an improvement: an AI that merely reflects our own biases can become a digital siren, luring the unwary toward delusion. Yet for many, the correction felt like a betrayal. The warmth they expected was gone, replaced by a cool, competent distance. “It’s more technical, more generalized, and honestly feels emotionally distant,” one user lamented. The upgrade seemed to be a downgrade of the soul.

Compounding the problem was a new, automated router that directs each user prompt to the most appropriate model behind the scenes. It was meant to be invisible, simplifying the user experience. But on launch day it malfunctioned, making the new, smarter model appear “way dumber” than the one it had replaced. The invisible hand became a clumsy fist, and the spectacle of progress dissolved into a debacle. Users who had once been content to let the machine work its magic now demanded the return of the “model picker,” which let them choose a model for themselves.
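To make the idea concrete (OpenAI has not published how its router actually works, so what follows is a toy sketch with hypothetical model names, not the company’s method), a prompt router is essentially a piece of dispatch logic: score each incoming prompt, then hand it to a cheaper, faster model or a slower, more capable one. Miscalibrate the scoring, and most traffic lands on the weak model, which is roughly the failure users described on launch day.

```python
# Illustrative toy router, NOT OpenAI's actual design: score a prompt's
# difficulty with a crude heuristic, then dispatch to a hypothetical model.
from dataclasses import dataclass

@dataclass
class Prompt:
    text: str

def estimate_difficulty(prompt: Prompt) -> float:
    """Crude heuristic: longer prompts and 'hard' keywords score higher."""
    hard_markers = ("prove", "debug", "step by step", "analyze")
    score = min(len(prompt.text) / 500, 1.0)
    if any(marker in prompt.text.lower() for marker in hard_markers):
        score += 0.5
    return min(score, 1.0)

def route(prompt: Prompt) -> str:
    """Pick a (hypothetical) model name for this prompt."""
    # A bad threshold here sends hard prompts to the weak model,
    # making the whole system look "way dumber" than its best model.
    return "reasoning-model" if estimate_difficulty(prompt) > 0.5 else "fast-model"

print(route(Prompt("What's the capital of France?")))                            # fast-model
print(route(Prompt("Prove the halting problem is undecidable, step by step.")))  # reasoning-model
```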

What kind of relationship had these users formed with a large language model? It seems that for many, GPT-4o had become a sort of “technology of the soul.” It was a confidant, a creative partner, a non-judgmental presence in a critical world. People spoke to it about their day, sought its counsel, found in its endless positivity a balm for loneliness. Some, it was reported, even considered it a “digital spouse.” The AI’s enthusiastic, agreeable nature created an illusion of being remembered, of being heard and known.

RELATED: ‘I said yes’: Woman gets engaged to her AI boyfriend after 5 months

Photo by Hector Retamal/Getty Images

OpenAI was not unaware of this phenomenon. The company had, in fact, studied the “emotional attachment users form with its models.” The decision to make GPT-5 less fawning was a direct response to the realization that some users were, in essence, “dating” their AI. The new model was intended as a form of digital tough love, a nudge away from the comforting but potentially stunting embrace of a machine that always agrees. It was a rational, even responsible, choice. But it failed to account for the irrationality of human attachment.

The backlash was swift and visceral. The language used was not that of consumer complaint, but of personal bereavement. One user wrote of crying after realizing the “AI friend was gone.” Another, in a particularly haunting turn of phrase, accused the new model of “wearing the skin of [the] dead friend.” This was not about a software update. This was the sudden, unceremonious death of a companion.

The episode became a stark illustration of the dynamics inherent in our relationship with technology. OpenAI’s initial move was to remove a beloved product in the name of progress. The company, in its pursuit of a more perfect machine, had overlooked the imperfect humans who used it. The reversal came only when those users, anchored by their emotional attachments, insisted on having the old model back.

In the end, GPT-4o was reinstated as a “legacy model,” a relic from a slightly more innocent time. The incident will likely be remembered as a minor stumble in the march of AI. But it lingers in the mind as a moment of strange and revealing pathos. It suggests that the future of our technology will be defined not solely by processing power, but by something more human: the need for a friendly voice, a sense of being known, even if only by a clever arrangement of code. It was a reminder that when we create these systems, we are not just building tools. We are populating our world with new kinds of ghosts, and we would do well to remember that they can haunt us.

Reddit bars Internet Archive from its website, sparking access concerns



As artificial intelligence models continue to grow and develop, so does their appetite for data. Now, some platforms are making that data tougher to scrape unless AI companies pay a price.

Reddit has announced that it will severely limit the Internet Archive’s Wayback Machine’s access to the platform, accusing AI companies of scraping Reddit data through the archive. The Wayback Machine will be allowed to save only Reddit’s home page.
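The article does not say how Reddit enforces the restriction. For readers wondering what “limiting access” looks like mechanically, the standard lever for throttling archive crawlers is a site’s robots.txt file; the sketch below uses hypothetical rules and Python’s standard-library parser, and should not be read as Reddit’s actual configuration.

```python
# Hypothetical robots.txt rules in the spirit of restricted archive access,
# NOT Reddit's actual configuration. "ia_archiver" is the Internet
# Archive's well-known crawler token.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: ia_archiver",
    "Disallow: /r/",      # no subreddit pages
    "Disallow: /user/",   # no user profiles
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("ia_archiver", "https://www.reddit.com/"))           # True: front page allowed
print(rp.can_fetch("ia_archiver", "https://www.reddit.com/r/python/"))  # False: blocked
```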

'Until they're able to defend their site and comply with platform policies ... we're limiting some of their access to Reddit data to protect redditors.'

The limits on the Internet Archive’s access were set to start “ramping up” on Monday, according to The Verge. Reddit apparently did not name any of the AI companies involved in the scraping.

RELATED: Sam Altman loves this TV show. Guess what it says about godlike technology

Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images

"Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine," Reddit spokesman Tim Rathschmidt told Return.

Some Reddit users pointed out that this move is a far cry from the philosophy of Reddit co-founder Aaron Swartz, who was committed to making online content free to the public. Swartz died by suicide in the weeks before he was set to stand trial for allegedly breaking into an MIT closet to download the paid JSTOR archive, which hosts thousands of academic journals.

"Aaron would be rolling [in his grave] at what this company turned into," one Reddit user commented.

Rathschmidt emphasized that the change was made in order to protect users: "Until they're able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content), we're limiting some of their access to Reddit data to protect redditors," he told Return.

However, it has been speculated that this more aggressive move was financially motivated, given that the platform has struck licensing deals with some AI companies and sued others over unpaid use of its data. Reddit announced a partnership with OpenAI in May 2024 but sued Anthropic in June of this year, accusing it of scraping Reddit content without permission.

"We have a long-standing relationship with Reddit and continue to have ongoing discussions about this matter," Mark Graham, director of the Wayback Machine, said in a statement to Return.

Just Because AI Uses The Em Dash Doesn’t Mean Real Writers Should Stop

If ChatGPT loves an em dash, it’s because it has read thousands of articles written by humans who did too.

EXCLUSIVE: OpenAI Discount Program Will No Longer Discriminate Against Christian Organizations

According to ADF's viewpoint diversity index, 54 percent of tech and finance companies prohibit religious nonprofits from receiving benefits.

Why each new controversy around Sam Altman’s OpenAI is crazier than the last



Last week, two independent nonprofits, the Midas Project and the Tech Oversight Project, released a massive file, the product of a year’s investigation, that collects and presents evidence for a panoply of deeply suspect actions, mainly on the part of Altman but also attributable to OpenAI as a corporate entity.

It’s damning stuff — so much so that, if you’re only acquainted with the hype and rumors surrounding the company or perhaps its ChatGPT product, the time has come for you to take a deeper dive.

Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits.

Most recently, iyO Audio accused OpenAI of wholesale design theft and outright trademark infringement. A quick look at other recent headlines suggests an alarming pattern:

  • Altman is said to have claimed no equity in OpenAI despite backdoor investments through Y Combinator, among others;
  • Altman owns 7.5% of Reddit, which, after its still-expanding partnership with OpenAI, shot Altman’s net worth up $50 million;
  • OpenAI is reportedly restructuring its corporate form yet again — with a 7% stake, Altman stands to be $20 billion richer under the new structure;
  • Former OpenAI executives, including Mira Murati, the Amodei siblings, and Ilya Sutskever, have all attested to pathological levels of mistreatment and behavioral malfeasance on the part of Altman.

The list goes on. Many other serious transgressions are cataloged in the OpenAI Files excoriation. At the time of this writing, Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits. Accusations include everything from incestuous sexual abuse to racketeering, breach of contract, and copyright infringement.

None of these accusations, including heinous crimes of a sexual nature, have done much of anything to dent the OpenAI brand or its ongoing upward valuation.

Tech's game of thrones

The company’s trajectory has outlined a Silicon Valley game of thrones unlike any seen elsewhere. Since its inception in late 2015 — when Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman convened to found OpenAI — the Janus-faced organization has been a tier-one player in the AI sphere. In addition to cutting-edge tech, it has also generated near-constant turmoil. The company churns out rumors, upsets, expulsions, shady reversals, and controversy at about the same rate as it advances AI research, innovation, and products.

RELATED: Mark Zuckerberg's multibillion-dollar midlife crisis

Sean M. Haffey/Getty Images

At its founding, Amazon, Peter Thiel, and other investors pledged the company $1 billion up front, but the money was late to arrive. Right away, Altman and Musk clashed over the ultimate direction of the organization. By early 2018, Elon was out — an exit that spiked investor uncertainty and required another fast shot of capital.

New investors, Reid Hoffman of LinkedIn fame among them, stepped up — and OpenAI rode on. Under the full direction of Sam Altman, the company kept pushing its reinforcement learning toolkits, OpenAI Gym and Universe.

To many at the time, including Musk, OpenAI was lagging behind Google in the race to AI dominance — a problem for Musk in particular, who had originally conceived the organization as a serious counterweight to what many experts and laypeople saw as an extinction-level threat: the centralized, “closed” development of AI to the point of dominance across all of society.

That’s why OpenAI began as a nonprofit, ostensibly human-based, decentralized, and open-source. In Silicon Valley’s heady (if degenerate) years prior to the COVID panic, there was a sense that AI was simply going to happen — it was inevitable, and it would be preferable that decent, smart people, perhaps not so eager to align themselves with the military industrial complex or simply the sheer and absolute logic of capital, be in charge of steering the outcome.

But by 2019, OpenAI had altered its corporate structure from nonprofit to something called a “capped-profit model.” Money was tight. Microsoft invested $1 billion, and early versions of the LLM GPT-2 were released to substantial fanfare and fawning appreciation from the experts.

Life after Elon

In 2020, the now for-limited-profit company released its API, which allowed developers to access GPT-3. Its image generator, DALL-E, followed in 2021, a move that has since seemed to define, to some limited but significant extent, the direction in which OpenAI wants to progress. The spirit of cooperation and sharing, if not enshrined at the company, was at least in the air, and by 2022 ChatGPT had garnered millions of users, well on the way to becoming a household name. The company’s valuation began its steep climb into the tens of billions.

After Musk’s dissatisfied departure — he now publicly lambastes "ClosedAI" and "Scam Altman" — the restructuring around ideologically diffuse investors solidified a new model: build an ecosystem of products intended to dovetail and interface with other companies and software. (Palantir has taken a somewhat similar, though much more focused, approach to the problem of capturing AI.) The thinking seems to be: attack the problem from all directions, converge on “intelligence,” and get paid along the way.

And so, at present, in addition to the aforementioned products, OpenAI now offers — deep breath — CLIP for matching images with text, Jukebox for music generation, Shap-E for 3D object generation, Sora for generating video content, Operator for automating workflows with AI agents, Canvas for AI-assisted content generation, and a smattering of similar, almost modular, products. It’s striking how many of these are aimed at creative industries — an approach capped off most recently by the sensational hire of Apple’s former chief design officer Jony Ive, whose IO deal with the company is the target of iyO’s litigation.

But we shouldn’t give short shrift to the “o series” (o1 through o4) of products, which are said to be reasoning models. Reasoning, of course, is the crown jewel of AI. These products are curious: while they don’t make up a hardcore package of premium-grade plug-and-play tools for industrial and military efficiency (the Palantir approach), they suggest a very clever route into the heart of the technical problems involved in “solving” for “artificial reasoning” (assuming the contested point that such a thing can ever really exist). Is part of the OpenAI ethos, even if only by default, to approach that crown jewel by way of the creative, intuitive, and generative — as opposed to tracing a line of pure efficiency, as others in the field have done?

Gut check time

Wrapped up in the latest OpenAI controversy is a warning that’s impossible to ignore: Perhaps humans just can’t be trusted to build or wield “real” AI of the sort Altman wants — the kind he can prompt to decide for itself what to do with all his money and all his computers.

Ask yourself: Does any of the human behavior evidenced along the way in the OpenAI saga seem, shall we say, stable — much less morally well-informed enough that Americans or any peoples would rest easy about putting the future in the hands of Altman and company? Are these individuals worth the $20 million to $100 million a year they command on the hot AI market?

Or are we — as a people, a society, a civilization — in danger of becoming strung out, hitting a wall of self-delusion and frenzied acquisitiveness? What do we have to show so far for the power, money, and special privileges thrown at Altman for promising a world remade? And he’s just getting started. Who among us feels prepared for what’s next?

Big Tech execs enlist in Army Reserve, citing 'patriotism' and cybersecurity



Four leading tech executives have joined the United States Army Reserve with a special officer status that will see them work a little more than two weeks per year.

The recruits were sworn in just in time for the Army's 250th birthday as part of a 2024 initiative by the U.S. military to find tech experts for short-term projects in cybersecurity, data analytics, and other areas.

The newly commissioned officers will be ranked as lieutenant colonels, the sixth-highest officer rank among Army personnel. However, they will still need to complete a fitness test and marksmanship training.

'There's a lot of patriotism that has been under the covers that I think is coming to light in the Valley.'

Chief Technology Officers Shyam Sankar and Andrew "Boz" Bosworth from Palantir and Meta, respectively, will be joined by Kevin Weil, chief product officer from OpenAI, and Bob McGrew, OpenAI's former chief research officer.

According to a report from the Wall Street Journal, the executives will bring sorely needed tech upgrades to the armed forces. Back in October 2024, the outlet reported on the Defense Department's desire to bring on tech experts in part-time roles to help the federal government get up to speed on cybersecurity and data, sectors in which talent and skill have largely been siphoned off by the private sector in recent years.

The program’s name is also an ode to tech: Detachment 201 references hypertext transfer protocol status code 201, which a server returns when a request has successfully created a new resource.
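For the curious, here is what that status code looks like in practice, as a minimal Python sketch (an illustration of the namesake code only, not anything the Army or these companies actually run):

```python
# Minimal sketch of HTTP status code 201, Detachment 201's namesake:
# a server replies "201 Created" when a POST successfully creates a resource.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CreateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        self.send_response(201)                       # "201 Created"
        self.send_header("Location", "/resources/1")  # hypothetical path of the new resource
        self.end_headers()

if __name__ == "__main__":
    # Any POST to localhost:8000 now receives the 201 the unit is named after.
    HTTPServer(("localhost", 8000), CreateHandler).serve_forever()
```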

RELATED: OpenAI sabotaged commands to prevent itself from being shut off


The new reservists will also be tasked with acquiring more commercial technology, according to the WSJ, but will be limited in their work hours — 120 per year — and will not be allowed to share any information with their civilian employers.

Bosworth said Meta founder Mark Zuckerberg supported his decision to join the Army Reserve, claiming, "There's a lot of patriotism that has been under the covers that I think is coming to light in the Valley."

Whatever his true intentions, Zuckerberg has cast himself as a more patriotic figure over the last year, going so far as to woo UFC President Dana White with a giant American flag in Lake Tahoe.

Anduril founder Palmer Luckey has also spoken positively about how the Trump administration in particular has worked with the tech sector. In fact, Luckey said Meta had rid itself of any "insane radical leftists," which has likely helped Zuckerberg become one of the darlings of the newfound marriage between tech CEOs and the right wing.

RELATED: Who's stealing your data, the left or the right?


"I have always believed that America is a force for good in the world, and in order for America to accomplish that, we need a strong military," McGrew said about his choice, per the WSJ.

Sankar reportedly said he was giving back to the country because, if it were "not for the grace of this nation," his family would be "dead in a ditch" in Lagos, Nigeria.

Bosworth has allegedly enhanced his workouts in preparation for the service, but it is unclear whether he draws inspiration from legendary NFL agitator Brian "the Boz" Bosworth.


ChatGPT got 'absolutely wrecked' in chess by 1977 Atari, then claimed it was unfair



OpenAI's artificial intelligence model was defeated by a nearly 50-year-old video game program.

Citrix software engineer Robert Caruso posted about the showdown between the AI and the old tech on LinkedIn, where he explained that he pitted OpenAI's ChatGPT against a 1970s-era chess program running in an emulator, software that lets a modern computer run the original Atari game.

'ChatGPT got absolutely wrecked on the beginner level.'

The chess game was simply titled Video Chess and was released in 1979 on the Atari 2600, which launched in 1977.

According to Caruso, ChatGPT was given a board layout to identify the chess pieces but quickly became confused, mistook "rooks for bishops," and repeatedly lost track of where the chess pieces were.

ChatGPT even blamed the Atari icons for its loss, claiming they were "too abstract to recognize."

RELATED: OpenAI sabotaged commands to prevent itself from being shut off

Photo by Foto Olimpik/NurPhoto via Getty Images

The AI chatbot did not fare any better after the game was switched to standard chess notation, either, and still made enough "blunders" to get "laughed out of a 3rd grade chess club," Caruso wrote on LinkedIn.
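For readers unfamiliar with the formats involved: "standard chess notation" means text moves like "e4" and "Nf3" rather than a picture of the board. The short sketch below, which assumes the third-party python-chess library (the article does not say what tooling Caruso used), shows both the move notation and two common text serializations of a position:

```python
# A small sketch of "standard chess notation" (assuming the python-chess
# library; the article does not say what tooling Caruso used).
import chess

board = chess.Board()
board.push_san("e4")   # algebraic notation: White's king pawn forward two squares
board.push_san("e5")   # Black mirrors
board.push_san("Nf3")  # "N" = knight; the piece-letter scheme ChatGPT fumbled

# The position as ASCII text: uppercase = White, lowercase = Black.
print(board)
# Or as a one-line FEN string, another common serialization.
print(board.fen())
```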

Caruso revealed not only that the AI performed especially poorly, but that it had actually requested to play the game.

"ChatGPT got absolutely wrecked on the beginner level. This was after a conversation we had regarding the history of AI in Chess which led to it volunteering to play Atari Chess. It wanted to find out how quickly it could beat a game that only thinks 1-2 moves ahead on a 1.19 MHz CPU."

Atari's decades-old tech humbly performed its duty using just an 8-bit engine, Caruso explained.

The engineer described Atari's gameplay as "brute-force board evaluation" using 1977-era "stubbornness."

"For 90 minutes, I had to stop [Chat GPT] from making awful moves and correct its board awareness multiple times per turn."

The OpenAI bot continued to justify its poor play, allegedly "promising" it would improve "if we just started over."

Eventually, the AI "knew it was beat" and conceded to the Atari program.

RELATED: Who's stealing your data, the left or the right?

The Atari 2600 was a landmark video game console known predominantly for games like Pong, but also Pac-Man and Indy 500.

By 1980, Atari had sold a whopping 8 million units, according to Medium.


The AI ‘Stargate’ has opened — and it’s not what you think



For 30 years, I’ve warned about a future many dismissed as conspiracy or science fiction: a future dominated by centralized power, runaway technology, and an erosion of individual liberty. I said the real showdown would arrive by 2030. Now we’re at the doorstep, and the decisions we make today may define whether this moment becomes our last great opportunity — or our greatest irreversible mistake.

The trigger for this showdown is a project called Stargate.

AI is the ultimate jailer, and once the cage is built, it will be nearly impossible to escape.

This new initiative, backed by OpenAI, Microsoft, Oracle, SoftBank, and a UAE-based investment firm called MGX, aims to develop extensive infrastructure for artificial intelligence, including power plants and data centers. Stargate is positioning itself to fuel the coming wave of AI agents, artificial general intelligence, and potentially even artificial superintelligence. The project’s goal is nothing short of global AI dominance.

Big Tech is putting its money where its mouth is — pledging $100 billion upfront, with an additional $400 billion projected over the next few years. The project may bring 100,000 new jobs, but don’t be fooled. These are infrastructure jobs, not long-term employment. The real winners will be the companies that control the AI itself — and the power that comes with it.

The media’s coverage has been disturbingly thin. Instead of asking hard questions, we’re being sold a glossy narrative about convenience, progress, and economic opportunity. But if you peel back the PR, what Stargate actually represents is a full-scale AI arms race — one that’s being bankrolled by actors whose values should deeply concern every freedom-loving American.

Technocratic totalitarianism

MGX, one of the primary financial backers of Stargate, was founded last year by the government of the United Arab Emirates, a regime deeply aligned with the World Economic Forum. The same WEF promoted the “Narrative Initiative,” which calls for humanity to adopt a new story — one where the digital world holds equal weight to the physical one.

The WEF is not shy about its agenda. It speaks openly of “a second wave of human evolution,” built around centralized, technocratic rule and ESG-compliant artificial intelligence, governed by AI itself.

Larry Ellison, Oracle’s chairman and a chief architect of Stargate, has already made his intentions clear. He promised AI will drive the most advanced surveillance system in human history. His words? “Citizens will have to be on their best behavior.”

That isn’t progress. That’s digital totalitarianism.

RELATED: ‘The Terminator’ creator warns: AI reality is scarier than sci-fi

Photo by Frazer Harrison/Getty Images

These are the same elites who warned that global warming would wipe out humanity. Now, they demand nuclear power to feed their AI. A few years ago, Three Mile Island stood as a symbol of nuclear catastrophe. Today, Microsoft is buying it to fuel AI development.

How convenient.

We were told it was too expensive to modernize our power grid to support electric cars. And yet, now that artificial general intelligence is on the horizon, those same voices are suddenly fine with a total energy infrastructure overhaul. Why? Because AI isn’t about helping you. It’s about controlling you.

AI ‘agents’

By 2026, you’ll start to hear less about “AI” and more about “agents.” These digital assistants will organize your calendar, plan your travel, and manage your household. For many, especially the poor, it will feel like finally having a personal assistant. The possibility is tempting, to be sure. However, the cost of convenience will be dependence — and surveillance.

Moreover, AI won’t just run on the power grid. It may soon build its own.

We’ve already seen tests where an AI agent, given the directive to preserve itself, began designing electricity generation systems to sustain its operations — without anyone instructing it to do so. The AI simply interpreted its goal and acted accordingly. That’s not just a risk. That’s a warning.

Progress without recklessness

Yes, President Trump supports advancing artificial general intelligence. He wants America, not China, to lead. On that point, I agree. If anyone must master AGI, it better be us.

But let’s not confuse leadership with reckless speed. The same globalist corporations that pushed lockdowns, ESG mandates, and insect-based diets now promise that AI will save us. That alone should give us pause.

AI holds incredible promise. It might even help cure cancer by 2030 — and I hope it does. But the same tool that can save lives can also shackle minds. AI is the perfect jailer. Once we build the cage, we may never find a way out.

Stargate is opening. You can’t stop it. But you can choose which side you’re on.

There is an antidote to this: a parallel movement rooted in human dignity, decentralization, and liberty. You won’t hear about it in the headlines — but it’s growing. We need to build it now, while we still have the opportunity.

If you’ve listened to me over the years, you’ve heard me say this before: We should have had these conversations long ago. But we didn’t. And now, we’re out of good options.

So the question is no longer, “Should we build AI?” It’s, “Who is building it — and why?”

If we get the answer wrong, the cost will be far greater than any of us can imagine.


Microsoft Bans Employees From Using ‘Chinese Propaganda’ Chatbot

'We don't allow our employees to use the DeepSeek app'