Trump tech czar slams OpenAI scheme for federal 'backstop' on spending — forcing Sam Altman to backtrack



OpenAI is under the spotlight after seemingly asking for the federal government to provide guarantees and loans for its investments.

Now, as the company is walking back its statements, a recent OpenAI letter has resurfaced that may prove it is talking in circles.

'We're always being brought in by the White House ...'

The artificial intelligence company is predominantly known for its free and paid versions of ChatGPT. Microsoft is its key investor, having sunk more than $13 billion into the company for a roughly 27% stake.

The recent controversy stems from an interview OpenAI chief financial officer Sarah Friar gave to the Wall Street Journal. In the interview, published Wednesday, Friar said OpenAI aimed to buy up the latest computer chips before its competitors could, which would require sizable investment.

"This is where we're looking for an ecosystem of banks, private equity, maybe even governmental ... the way governments can come to bear," Friar said, per Tom's Hardware.

Reporter Sarah Krouse asked for clarification on the topic, which is when Friar expressed interest in federal guarantees.

"First of all, the backstop, the guarantee that allows the financing to happen, that can really drop the cost of the financing but also increase the loan to value, so the amount of debt you can take on top of an equity portion for —" Friar continued, before Krouse interrupted, seeking clarification.

"[A] federal backstop for chip investment?"

"Exactly," Friar said.

Krouse bored in further, asking whether Friar had been speaking to the White House about how to "formalize" the "backstop."

"We're always being brought in by the White House, to give our point of view as an expert on what's happening in the sector," Friar replied.

After these remarks were publicized, OpenAI immediately backtracked.

RELATED: Stop feeding Big Tech and start feeding Americans again


On Wednesday night, Friar posted on LinkedIn that "OpenAI is not seeking a government backstop" for its investments.

"I used the word 'backstop' and it muddied the point," she continued. She went on to claim that the full clip showcased her point that "American strength in technology will come from building real industrial capacity which requires the private sector and government playing their part."

On Thursday morning, David Sacks, President Trump's special adviser on crypto and AI, stepped in to quash any OpenAI hopes of government guarantees, even merely alleged ones.

"There will be no federal bailout for AI," Sacks wrote on X. "The U.S. has at least 5 major frontier model companies. If one fails, others will take its place."

Sacks added that the White House does want to make power generation easier for AI companies, but without increasing residential electricity rates.

"Finally, to give benefit of the doubt, I don't think anyone was actually asking for a bailout. (That would be ridiculous.) But company executives can clarify their own comments," he concluded.

The saga was far from over, though, as OpenAI CEO Sam Altman seemingly dug the hole even deeper.

RELATED: Artificial intelligence is not your friend


By Thursday afternoon, Altman had released a lengthy statement starting with his rejection of the idea of government guarantees.

"We do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work," he wrote on X.

He went on to call the idea of the company being bailed out an "unequivocal no." "If we screw up and can't fix it, we should fail."

It wasn't long before the online community started claiming that OpenAI was indeed asking for government help as recently as a week prior.

As originally noted by the X account hilariously titled "@IamGingerTrash," OpenAI has a letter posted on its own website that appears to directly ask for government guarantees. However, as Sacks noted, the request does seem to relate to powering servers and providing electrical capacity.

Dated October 27, 2025, the letter was directed to the U.S. Office of Science and Technology Policy from OpenAI Chief Global Affairs Officer Christopher Lehane. It asked the OSTP to "double down" and work with Congress to "further extend eligibility to the semiconductor manufacturing supply chain; grid components like transformers and specialized steel for their production; AI server production; and AI data centers."

The letter then said, "To provide manufacturers with the certainty and capital they need to scale production quickly, the federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity and resilience."

Altman has yet to address the letter.


‘It Was a Fatal Right-Wing Terrorist Incident’: AI Chatbot Giants Claim Charlie Kirk’s Killer Was Right-Wing but Say Left-Wing Violence Is ‘Exceptionally Rare’

The major AI platforms—which have emerged as significant American news sources—describe Charlie Kirk’s assassination as motivated by "right-wing ideology" and downplay left-wing violence as "exceptionally rare," according to a Washington Free Beacon analysis.


War Department contractor warns China is way ahead, and 'we don't know how they're doing it'



A tech CEO warned that the Chinese government is ahead in key tech fields and that the threat of war is at America's doorstep.

Tyler Saltsman is the CEO of EdgeRunner, an artificial intelligence technology company implementing an offline AI program for the Space Force to help U.S. soldiers make technological leaps on the battlefield.

'A rogue AI agent could take down the grid. It could bring our country to its knees.'

Saltsman spoke exclusively with Return and explained that the new battlefield tools are sorely needed by U.S. military forces, particularly considering the advancements that have been made in China.

"The Department of War has been moving at breakneck speed," Saltsman said about the need to catch up. "China is ahead of us in the AI race."

On top of doing "a lot more" with a lot less, Saltsman said, the Chinese government has developed its AI to perform in ways that Western allies cannot quite explain.

"They're doing things, that we don't know how they're doing it, and they're very good," the CEO said of the communist government. "We need to take that seriously and come together as a nation."

When asked whether China is able to gain an advantage by blatantly spying on its population to feed its AI more information, Saltsman pointed instead to the country's disregard for copyright.

"China doesn't care about copyright laws," he said. "If you use copyright data while training an AI, litigation could be coming [if you're] in the U.S."

But in China, feeding copyright-protected data into AI models during training is par for the course, Saltsman went on.

While the contractor believes the enemy's AI advancements pose a great threat, he said China's ability to control another key sector should raise even louder alarm bells.

RELATED: 'They want to spy on you': Military tech CEO explains why AI companies don't want you going offline


China's ability to take Taiwan should be one of the most discussed issues, if not the paramount issue, Saltsman explained.

"If China were to take Taiwan, it's all-out war," he said.

"All that infrastructure and all those chips — and the chips power everything from data centers to missiles to AI — ... that right there is a big problem."

He continued, "That's the biggest threat to the world. If China were to take Taiwan, then all bets are off."

Saltsman also viewed rogue AI agents as a strong possibility for how China could attack its enemies, most narrowly by going after power grids.

"A rogue AI agent could take down the grid. It could bring our country to its knees," he warned, which would result in "total chaos."

The entrepreneur cited the faulty CrowdStrike update that crippled airport systems in July 2024. If something that small could bring the world to its knees for three days, Saltsman said, then it is "deeply concerning" what China could be capable of in its pursuit of superintelligence through AI.

RELATED: Can Palantir defeat the Antifa networks behind trans terror?

Tyler Saltsman, CEO of EdgeRunner AI. Photo provided by EdgeRunner

Saltsman was also not shy about criticizing domestic AI companies and putting their ethics in direct sunlight. On top of claiming most commercial AI merchants are spying on customers — the main reason they do not offer offline models — Saltsman denounced the development of AI that does not keep humans in the loop.

"My biggest fear with Big Tech is they want to replace humans with [artificial general intelligence]. What does AGI even mean?"

Google defines AGI as "a machine that possesses the ability to understand or learn any intellectual task that a human being can" and "a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain."

Saltsman, on the other hand, defines AGI as "an AI that can invent new things to solve problems."

An important question to ask these companies, according to Saltsman, is, "Why would AGI make you money if it was an all-intelligent, all-powerful being? It would see humans as a threat."

For these reasons, Saltsman is serious about developing AI that can work in disconnected environments and work only for the user while keeping humans at the forefront.

As he previously said, "We don't want Big Tech having all of this data and having all this control. It needs to be decentralized."


'Reliable' Al Jazeera Is Top Source for OpenAI and Google-Powered AI News Summaries on Israel and Gaza: Hamas-Tied Qatar’s News Outlet Dominates AI Search Results

Al Jazeera, the virulently anti-Israel news outlet controlled by Qatar, is one of the two top sources used by leading artificial intelligence chatbots—OpenAI’s ChatGPT, Google Gemini, and Perplexity AI—to answer questions and write news summaries about the Israeli-Palestinian conflict, a Washington Free Beacon analysis has found.


Why Sam Altman brought his sycophantic soul simulator back from the digital dead



It was meant to be a triumph, another confident step onto the sunlit uplands of progress. On August 7, 2025, OpenAI introduced GPT-5, the newest version of its popular large language model, and the occasion had all the requisite ceremony of a major technological unveiling. Here was a system with “Ph.D.-level” skills, an intelligence tuned for greater reliability, and a less cloying, more businesslike tone. The future, it seemed, had been upgraded.

The problem was that a significant number of people preferred the past.

The rollout, rather than inspiring awe, triggered a peculiar form of grief. On the forums where the devout and the curious congregate, the reaction was not one of celebration but of loss. “Killing 4o isn’t innovation, it’s erasure,” one user wrote, capturing a sentiment that rippled through the digital ether. The object of their mourning was GPT-4o, one of the models now deemed obsolete. OpenAI’s CEO, Sam Altman, a man accustomed to shaping the future, found himself in the unfamiliar position of having to resurrect a corpse. Within days, facing a backlash he admitted had astonished him, he reversed course and brought the old model back.

Some users were, in essence, 'dating' their AI.

The incident was a strange one, a brief, intense flare-up in the ongoing negotiation between humanity and its digital creations. It revealed a fault line, not in the technology itself, but in our own tangled expectations. Many of us say we want our machines to be smarter, faster, more accurate. What the curious case of GPT-5 suggested is that what some of us truly crave is something far more elusive: a sense of connection, of being heard, even if the listener is a machine.

OpenAI had engineered GPT-5 to be less sycophantic, curbing its predecessor’s tendency to flatter and agree. The new model was more formal, more objective, an expert in the room rather than a friend on the line. This disposition was anticipated to be an improvement. An AI that merely reflects our own biases could be a digital siren, luring the unwary toward delusion. Yet for many, this correction felt like a betrayal. The warmth they expected was gone, replaced by a cool, competent distance. “It’s more technical, more generalized, and honestly feels emotionally distant,” one user lamented. The upgrade seemed to be a downgrade of the soul.

Compounding the problem was a new, automated router that directs user prompts to the most appropriate model behind the scenes. It was meant to be invisible, simplifying the user experience. But on launch day, it malfunctioned, making the new, smarter model appear “way dumber” than the one it had replaced. The invisible hand became a clumsy fist, and the spectacle of progress dissolved into a debacle. Users who had once been content to let the machine work its magic now demanded the return of the “model picker,” with the ability to choose their preferred model.
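OpenAI has not published how that router actually works, so the following is only a schematic sketch; the model names, threshold, and keyword cues are invented for illustration:

    # A schematic prompt router. Names and heuristics are invented here;
    # this is not OpenAI's actual routing logic.
    def route(prompt: str, pinned_model: str | None = None) -> str:
        # A "model picker" is simply a user override of the router.
        if pinned_model is not None:
            return pinned_model
        # Crude heuristic: long or reasoning-heavy prompts go to the slower,
        # more capable model; everything else goes to a fast, cheap one.
        reasoning_cues = ("prove", "step by step", "derive")
        if len(prompt) > 2000 or any(cue in prompt.lower() for cue in reasoning_cues):
            return "big-reasoning-model"  # hypothetical name
        return "fast-cheap-model"         # hypothetical name

A bug anywhere in heuristics like these sends hard prompts to the cheap model, making the system as a whole look less capable than its predecessor, which is exactly the failure users described on launch day.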

What kind of relationship had these users formed with a large language model? It seems that for many, GPT-4o had become a sort of “technology of the soul.” It was a confidant, a creative partner, a non-judgmental presence in a critical world. People spoke to it about their day, sought its counsel, found in its endless positivity a balm for loneliness. Some, it was reported, even considered it a “digital spouse.” The AI’s enthusiastic, agreeable nature created an illusion of being remembered, of being heard and known.

RELATED: ‘I said yes’: Woman gets engaged to her AI boyfriend after 5 months

Photo by Hector Retamal/Getty Images

OpenAI was not unaware of this phenomenon. The company had, in fact, studied the “emotional attachment users form with its models.” The decision to make GPT-5 less fawning was a direct response to the realization that some users were, in essence, “dating” their AI. The new model was intended as a form of digital tough love, a nudge away from the comforting but potentially stunting embrace of a machine that always agrees. It was a rational, even responsible, choice. But it failed to account for the irrationality of human attachment.

The backlash was swift and visceral. The language used was not that of consumer complaint, but of personal bereavement. One user wrote of crying after realizing the “AI friend was gone.” Another, in a particularly haunting turn of phrase, accused the new model of “wearing the skin of [the] dead friend.” This was not about a software update. This was the sudden, unceremonious death of a companion.

The episode became a stark illustration of the dynamics inherent in our relationship with technology. OpenAI's initial move was to remove a product in the name of progress, a product that turned out to be beloved. The company, in its pursuit of a more perfect machine, had overlooked the imperfect humans who used it. The reversal came only because those users, bound by their emotional attachments, insisted on having their preference restored.

In the end, GPT-4o was reinstated as a “legacy model,” a relic from a slightly more innocent time. The incident will likely be remembered as a minor stumble in the march of AI. But it lingers in the mind as a moment of strange and revealing pathos. It suggests that the future of our technology will be defined not solely by processing power, but by something more human: the need for a friendly voice, a sense of being known, even if only by a clever arrangement of code. It was a reminder that when we create these systems, we are not just building tools. We are populating our world with new kinds of ghosts, and we would do well to remember that they can haunt us.

Reddit bars Internet Archive from its website, sparking access concerns



As artificial intelligence models continue to grow and develop, their demand for ever more data also increases rapidly. Now, some companies are making it tougher for AI scraping to happen unless the scrapers pay a price.

Reddit has announced that it will severely limit the Internet Archive's Wayback Machine's access to the communication platform, after accusing AI companies of scraping the archive for Reddit data. The platform will allow the Internet Archive to save only the home page of its website.

'Until they're able to defend their site and comply with platform policies ... we're limiting some of their access to Reddit data to protect redditors.'

The limits on the Internet Archive's access were set to start "ramping up" on Monday, according to the Verge. Reddit apparently did not name any of the AI companies involved in these data scrapes.
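Reddit has not said how the restriction is implemented, but a homepage-only policy is the sort of rule a site's robots.txt file can express. A hypothetical sketch, assuming the "ia_archiver" user-agent token historically associated with the Internet Archive's crawling, and noting that the "$" end-of-URL anchor is a common extension rather than part of the original robots.txt standard:

    # Hypothetical robots.txt entries; Reddit's actual mechanism is not public.
    User-agent: ia_archiver
    Allow: /$        # match only the bare home-page URL
    Disallow: /      # block every other path from being archived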

RELATED: Sam Altman loves this TV show. Guess what it says about godlike technology

Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images

"Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine," Reddit spokesman Tim Rathschmidt told Return.

Some Reddit users pointed out that this move is a far cry from Reddit co-founder Aaron Swartz's philosophy. Swartz committed suicide in the weeks before he was set to stand trial for allegedly breaking into an MIT closet to download the paid JSTOR archive, which hosts thousands of academic journals. He was committed to making online content free for the public.

"Aaron would be rolling [in his grave] at what this company turned into," one Reddit user commented.

Rathschmidt emphasized that the change was made in order to protect users: "Until they're able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content), we're limiting some of their access to Reddit data to protect redditors," he told Return.

However, there has been speculation that this more aggressive move was financially motivated, given that the platform has struck deals with some AI companies in the past but sued others for not paying its fees. Reddit announced a partnership with OpenAI in May 2024 but sued Anthropic in June of this year for not complying with its demands.

"We have a long-standing relationship with Reddit and continue to have ongoing discussions about this matter," Mark Graham, director of the Wayback Machine, said in a statement to Return.

Just Because AI Uses The Em Dash Doesn’t Mean Real Writers Should Stop

If ChatGPT loves an em dash, it’s because it has read thousands of articles written by humans who did too.

EXCLUSIVE: OpenAI Discount Program Will No Longer Discriminate Against Christian Organizations

According to ADF's viewpoint diversity index, 54 percent of tech and finance companies prohibit religious nonprofits from receiving benefits.