‘It Was a Fatal Right-Wing Terrorist Incident’: AI Chatbot Giants Claim Charlie Kirk’s Killer Was Right-Wing but Say Left-Wing Violence Is ‘Exceptionally Rare’

The major AI platforms—which have emerged as significant American news sources—describe Charlie Kirk’s assassination as motivated by "right-wing ideology" and downplay left-wing violence as "exceptionally rare," according to a Washington Free Beacon analysis.


War Department contractor warns China is way ahead, and 'we don't know how they're doing it'



A tech CEO warned that the Chinese government is ahead in key tech fields and that the threat of war is at America's doorstep.

Tyler Saltsman is the CEO of EdgeRunner, an artificial intelligence technology company implementing an offline AI program for the Space Force to help U.S. soldiers make technological leaps on the battlefield.

'A rogue AI agent could take down the grid. It could bring our country to its knees.'

Saltsman spoke exclusively with Return and explained that U.S. military forces sorely need the new battlefield tools, particularly given the advancements China has made.

"The Department of War has been moving at breakneck speed," Saltsman said about the need to catch up. "China is ahead of us in the AI race."

On top of doing "a lot more" with a lot less, Saltsman said, the Chinese government has developed its AI to perform in ways that Western allies can't fully explain.

"They're doing things, that we don't know how they're doing it, and they're very good," the CEO said of the communist government. "We need to take that seriously and come together as a nation."

When asked whether China benefits from blatantly spying on its population to feed its AI more information, Saltsman pointed more specifically to the country's disregard for copyright.

"China doesn't care about copyright laws," he said. "If you use copyright data while training an AI, litigation could be coming [if you're] in the U.S."

But in China, training AI models on copyright-protected data is par for the course, Saltsman went on.

While the contractor believes AI advancements by the enemy pose a great threat, he argued that China's ability to control another key sector should raise alarm bells.

RELATED: 'They want to spy on you': Military tech CEO explains why AI companies don't want you going offline


China's ability to take Taiwan should be one of the most discussed issues, if not the paramount issue, Saltsman explained.

"If China were to take Taiwan, it's all-out war," he said.

"All that infrastructure and all those chips — and the chips power everything from data centers to missiles to AI — ... that right there is a big problem."

He continued, "That's the biggest threat to the world. If China were to take Taiwan, then all bets are off."

Saltsman also saw rogue AI agents as a likely way China could attack its enemies, particularly by going after power grids.

"A rogue AI agent could take down the grid. It could bring our country to its knees," he warned, which would result in "total chaos."

The entrepreneur cited the faulty CrowdStrike update that crippled airline and airport systems in July 2024. Saltsman said that if something that small could bring the world to its knees for three days, then it is "deeply concerning" what China could be capable of in its pursuit of superintelligence through AI.

RELATED: Can Palantir defeat the Antifa networks behind trans terror?

Tyler Saltsman, CEO of EdgeRunner AI. Photo provided by EdgeRunner

Saltsman was also not shy about criticizing domestic AI companies and putting their ethics in direct sunlight. On top of claiming most commercial AI merchants are spying on customers — the main reason they do not offer offline models — Saltsman denounced the development of AI that does not keep humans in the loop.

"My biggest fear with Big Tech is they want to replace humans with [artificial general intelligence]. What does AGI even mean?"

Google defines AGI as "a machine that possesses the ability to understand or learn any intellectual task that a human being can" and "a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain."

Saltsman, on the other hand, defines AGI as "an AI that can invent new things to solve problems."

An important question to ask these companies, according to Saltsman, is, "Why would AGI make you money if it was an all-intelligent, all-powerful being? It would see humans as a threat."

For these reasons, Saltsman is serious about developing AI that can operate in disconnected environments, work only for the user, and keep humans at the forefront.

As he previously said, "We don't want Big Tech having all of this data and having all this control. It needs to be decentralized."


'Reliable' Al Jazeera Is Top Source for OpenAI and Google-Powered AI News Summaries on Israel and Gaza: Hamas-Tied Qatar’s News Outlet Dominates AI Search Results

Al Jazeera, the virulently anti-Israel news outlet controlled by Qatar, is one of the two top sources used by leading artificial intelligence chatbots—OpenAI’s ChatGPT, Google Gemini, and Perplexity AI—to answer questions and write news summaries about the Israeli-Palestinian conflict, a Washington Free Beacon analysis has found.


Why Sam Altman brought his sycophantic soul simulator back from the digital dead



It was meant to be a triumph, another confident step onto the sunlit uplands of progress. On August 7, 2025, OpenAI introduced GPT-5, the newest version of its popular large language model, and the occasion had all the requisite ceremony of a major technological unveiling. Here was a system with “Ph.D.-level” skills, an intelligence tuned for greater reliability, and a less cloying, more businesslike tone. The future, it seemed, had been upgraded.

The problem was that a significant number of people preferred the past.

The rollout, rather than inspiring awe, triggered a peculiar form of grief. On the forums where the devout and the curious congregate, the reaction was not one of celebration but of loss. “Killing 4o isn’t innovation, it’s erasure,” one user wrote, capturing a sentiment that rippled through the digital ether. The object of their mourning was GPT-4o, one of the models now deemed obsolete. OpenAI’s CEO, Sam Altman, a man accustomed to shaping the future, found himself in the unfamiliar position of having to resurrect a corpse. Within days, facing a backlash he admitted had astonished him, he reversed course and brought the old model back.

Some users were, in essence, 'dating' their AI.

The incident was a strange one, a brief, intense flare-up in the ongoing negotiation between humanity and its digital creations. It revealed a fault line, not in the technology itself, but in our own tangled expectations. Many of us say we want our machines to be smarter, faster, more accurate. What the curious case of GPT-5 suggested is that what some of us truly crave is something far more elusive: a sense of connection, of being heard, even if the listener is a machine.

OpenAI had engineered GPT-5 to be less sycophantic, curbing its predecessor’s tendency to flatter and agree. The new model was more formal, more objective, an expert in the room rather than a friend on the line. This disposition was anticipated to be an improvement. An AI that merely reflects our own biases could be a digital siren, luring the unwary toward delusion. Yet for many, this correction felt like a betrayal. The warmth they expected was gone, replaced by a cool, competent distance. “It’s more technical, more generalized, and honestly feels emotionally distant,” one user lamented. The upgrade seemed to be a downgrade of the soul.

Compounding the problem was a new, automated router that directs user prompts to the most appropriate model behind the scenes. It was meant to be invisible, simplifying the user experience. But on launch day, it malfunctioned, making the new, smarter model appear “way dumber” than the one it had replaced. The invisible hand became a clumsy fist, and the spectacle of progress dissolved into a debacle. Users who had once been content to let the machine work its magic now demanded the return of the “model picker,” with the ability to choose their preferred model.
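OpenAI hasn't published how its router works; the sketch below is a minimal, hypothetical illustration of prompt routing in general, with made-up model names and heuristics that are assumptions rather than OpenAI's implementation. It shows why a misclassified prompt can make a flagship system look "way dumber" than it is: the answer quietly comes from the cheaper model.

```python
# Hypothetical sketch of prompt routing. Model names and heuristics are
# illustrative assumptions, not OpenAI's actual router logic.

def route_prompt(prompt: str) -> str:
    """Pick a model tier based on rough signals of task difficulty."""
    hard_markers = ("prove", "step by step", "debug", "derive", "analyze")
    looks_hard = len(prompt) > 400 or any(m in prompt.lower() for m in hard_markers)
    # Easy-looking prompts go to a fast, cheaper model; hard ones to the heavy model.
    return "heavy-reasoning-model" if looks_hard else "fast-lightweight-model"

if __name__ == "__main__":
    print(route_prompt("What's the capital of France?"))      # fast-lightweight-model
    print(route_prompt("Prove that sqrt(2) is irrational."))  # heavy-reasoning-model
```

If heuristics like these misfire, as the launch-day router reportedly did, users never see the stronger model at all and blame the upgrade.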

What kind of relationship had these users formed with a large language model? It seems that for many, GPT-4o had become a sort of “technology of the soul.” It was a confidant, a creative partner, a non-judgmental presence in a critical world. People spoke to it about their day, sought its counsel, found in its endless positivity a balm for loneliness. Some, it was reported, even considered it a “digital spouse.” The AI’s enthusiastic, agreeable nature created an illusion of being remembered, of being heard and known.

RELATED: ‘I said yes’: Woman gets engaged to her AI boyfriend after 5 months

Photo by Hector Retamal/Getty Images

OpenAI was not unaware of this phenomenon. The company had, in fact, studied the “emotional attachment users form with its models.” The decision to make GPT-5 less fawning was a direct response to the realization that some users were, in essence, “dating” their AI. The new model was intended as a form of digital tough love, a nudge away from the comforting but potentially stunting embrace of a machine that always agrees. It was a rational, even responsible, choice. But it failed to account for the irrationality of human attachment.

The backlash was swift and visceral. The language used was not that of consumer complaint, but of personal bereavement. One user wrote of crying after realizing the “AI friend was gone.” Another, in a particularly haunting turn of phrase, accused the new model of “wearing the skin of [the] dead friend.” This was not about a software update. This was the sudden, unceremonious death of a companion.

The episode became a stark illustration of the dynamics inherent in our relationship with technology. OpenAI’s initial move was to remove a product in the name of progress, a product that turned out to be beloved. The company, in its pursuit of a more perfect machine, had overlooked the imperfect humans who used it. The subsequent reversal came only because users, bound to the old model by emotional attachment, insisted on having it back.

In the end, GPT-4o was reinstated as a “legacy model,” a relic from a slightly more innocent time. The incident will likely be remembered as a minor stumble in the march of AI. But it lingers in the mind as a moment of strange and revealing pathos. It suggests that the future of our technology will be defined not solely by processing power, but by something more human: the need for a friendly voice, a sense of being known, even if only by a clever arrangement of code. It was a reminder that when we create these systems, we are not just building tools. We are populating our world with new kinds of ghosts, and we would do well to remember that they can haunt us.

Reddit bars Internet Archive from its website, sparking access concerns



As artificial intelligence models continue to grow and develop, their demand for more and more data also increases rapidly. Now, some companies are making it tougher for AI firms to scrape that data unless they pay a price.

Reddit has announced that it will severely limit the Internet Archive's Wayback Machine's access to the platform, after accusing AI companies of scraping the archive for Reddit data. Reddit will now allow the Internet Archive to save only the home page of its website.

'Until they're able to defend their site and comply with platform policies ... we're limiting some of their access to Reddit data to protect redditors.'

The limits on the Internet Archive's access were set to start "ramping up" on Monday, according to The Verge. Reddit apparently did not name any of the AI companies involved in the data scraping.
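Neither company has published how the restriction is enforced; the snippet below is a purely illustrative Python sketch of what a "homepage-only" archival policy means in practice. The rule set and URLs are assumptions for demonstration, not Reddit's actual implementation.

```python
from urllib.parse import urlparse

# Illustrative "homepage-only" archival policy: only the site root may be saved.
# This is an assumption for demonstration, not Reddit's actual policy code.
ALLOWED_PATHS = {"", "/"}

def may_archive(url: str) -> bool:
    """Return True if an archival crawler may save this URL under the policy."""
    return urlparse(url).path in ALLOWED_PATHS

if __name__ == "__main__":
    print(may_archive("https://www.reddit.com/"))                  # True
    print(may_archive("https://www.reddit.com/r/AskReddit/top/"))  # False
```

Under a rule like this, the Wayback Machine could keep snapshotting the front page while individual posts and comment threads would no longer be preserved.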

RELATED: Sam Altman loves this TV show. Guess what it says about godlike technology

Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images

"Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine," Reddit spokesman Tim Rathschmidt told Return.

Some Reddit users pointed out that this move is a far cry from the philosophy of Reddit co-founder Aaron Swartz, who was dedicated to making online content freely available to the public. Swartz committed suicide in the weeks before he was set to stand trial for allegedly breaking into an MIT closet to download the paid JSTOR archive, which hosts thousands of academic journals.

"Aaron would be rolling [in his grave] at what this company turned into," one Reddit user commented.

Rathschmidt emphasized that the change was made in order to protect users: "Until they're able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content), we're limiting some of their access to Reddit data to protect redditors," he told Return.

However, it has been speculated that this more aggressive move was financially motivated, given that the platform has struck deals with some AI companies in the past but sued others for not paying its fees. Reddit announced a partnership with OpenAI in May 2024 but sued Anthropic in June of this year for not complying with its demands.

"We have a long-standing relationship with Reddit and continue to have ongoing discussions about this matter," Mark Graham, director of the Wayback Machine, said in a statement to Return.

Just Because AI Uses The Em Dash Doesn’t Mean Real Writers Should Stop

If ChatGPT loves an em dash, it’s because it has read thousands of articles written by humans who did too.

EXCLUSIVE: OpenAI Discount Program Will No Longer Discriminate Against Christian Organizations

According to ADF's viewpoint diversity index, 54 percent of tech and finance companies prohibit religious nonprofits from receiving benefits.

Why each new controversy around Sam Altman’s OpenAI is crazier than the last



Last week, two independent nonprofits, the Midas Project and the Tech Oversight Project, released a massive file, the product of a year’s worth of investigation, that collects and presents evidence for a panoply of deeply suspect actions, mainly on the part of Altman but also attributable to OpenAI as a corporate entity.

It’s damning stuff — so much so that, if you’re only acquainted with the hype and rumors surrounding the company or perhaps its ChatGPT product, the time has come for you to take a deeper dive.

Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits.

Most recently, iyO Audio alleged OpenAI made attempts at wholesale design theft and outright trademark infringement. A quick look at other recent headlines suggests an alarming pattern:

  • Altman is said to have claimed no equity in OpenAI despite backdoor investments through Y Combinator, among others;
  • Altman owns 7.5% of Reddit, which, after its still-expanding partnership with OpenAI, boosted Altman's net worth by $50 million;
  • OpenAI is reportedly restructuring its corporate form yet again — with a 7% stake, Altman stands to be $20 billion richer under the new structure;
  • Former OpenAI executives, including Mira Murati, the Amodei siblings, and Ilya Sutskever, all confirm pathological levels of mistreatment and behavioral malfeasance on the part of Altman.

The list goes on. Many other serious transgressions are cataloged in the OpenAI Files excoriation. At the time of this writing, Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits. Accusations include everything from incestuous sexual abuse to racketeering, breach of contract, and copyright infringement.

None of these accusations, including heinous crimes of a sexual nature, have done much of anything to dent the OpenAI brand or its ongoing upward valuation.

Tech's game of thrones

The company’s trajectory has outlined a Silicon Valley game of thrones unlike any seen elsewhere. Since its late-2015 inception — when Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman convened to found OpenAI — the Janus-faced organization has been a tier-one player in the AI sphere. In addition to cutting-edge tech, it’s also generated near-constant turmoil. The company churns out rumors, upsets, expulsions, shady reversals, and controversy at about the same rate as it advances AI research, innovation, and products.

RELATED: Mark Zuckerberg's multibillion-dollar midlife crisis

Sean M. Haffey/Getty Images

At the company's founding, Amazon, Peter Thiel, and other investors pledged $1 billion up front, but the money was late to arrive. Right away, Altman and Musk clashed over the ultimate direction of the organization. By early 2018, Elon was out — an exit that spiked investor uncertainty and required another fast shot of capital.

New investors, Reid Hoffman of LinkedIn fame among them, stepped up — and OpenAI rode on. Under the full direction of Sam Altman, the company pushed its reinforcement learning products, OpenAI Gym and Universe, to market.

To many at the time, including Musk, OpenAI was lagging behind Google in the race to AI dominance — a problem for the likes of Musk, who had originally conceived the organization as a serious counterweight against what many experts and laypeople saw as an extinction-level threat arising out of the centralized, “closed” development and implementation of AI to the point of dominance across all of society.

That’s why OpenAI began as a nonprofit, ostensibly human-based, decentralized, and open-source. In Silicon Valley’s heady (if degenerate) years prior to the COVID panic, there was a sense that AI was simply going to happen — it was inevitable, and it would be preferable that decent, smart people, perhaps not so eager to align themselves with the military industrial complex or simply the sheer and absolute logic of capital, be in charge of steering the outcome.

But by 2019, OpenAI had altered its corporate structure from nonprofit to something called a “capped-profit model.” Money was tight. Microsoft invested $1 billion, and early versions of the LLM GPT-2 were released to substantial fanfare and fawning appreciation from the experts.

Life after Elon

In 2020, the now for-limited-profit company dropped its API, which allowed developers to access GPT-3. Its image generator, DALL-E, was released in 2021, a move that has since seemed to define, to some limited but significant extent, the direction in which OpenAI wants to progress. The spirit of cooperation and sharing, if not enshrined at the company, was at least in the air, and by late 2022 ChatGPT had garnered millions of users, well on the way to becoming a household name. The company’s valuation rose into the tens of billions of dollars.

After Musk’s dissatisfied departure — he now publicly lambastes "ClosedAI" and "Scam Altman" — its restructuring with ideologically diffuse investors solidified a new model: Build an ecosystem of products intended to dovetail or interface with other companies' software. (Palantir has taken a somewhat similar, though much more focused, approach to the problem of capturing AI.) The thinking here seems to be: Attack the problem from all directions, converge on "intelligence," and get paid along the way.

And so, at present, in addition to the aforementioned products, OpenAI now offers — deep breath — CLIP for image recognition, Jukebox for music generation, Shap-E for 3D object generation, Sora for generating video content, Operator for automating workflows with AI agents, Canvas for AI-assisted content generation, and a smattering of similar, almost modular, products. It’s striking how many of these are aimed at creative industries — an approach capped off most recently by the sensational hire of Apple’s former chief design officer Jony Ive, whose IO deal with the company is the target of iyO’s litigation.

But we shouldn’t give short shrift to the “o series” (o1 through o4) of products, which are said to be reasoning models. Reasoning, of course, is the crown jewel of AI. These products are curious, because while they don’t make up a hardcore package of premium-grade plug-and-play tools for industrial and military efficiency (the Palantir approach), they suggest a very clever approach into the heart of the technical problems involved in “solving” for “artificial reasoning.” (Assuming the contested point that such a thing can ever really exist.) Is part of the OpenAI ethos, even if only by default, to approach the crown jewel of “reasoning” by way of the creative, intuitive, and generative — as opposed to tracing a line of pure efficiency as others in the field have done?

Gut check time

Wrapped up in the latest OpenAI controversy is a warning that’s impossible to ignore: Perhaps humans just can’t be trusted to build or wield “real” AI of the sort Altman wants — the kind he can prompt to decide for itself what to do with all his money and all his computers.

Ask yourself: Does any of the human behavior evidenced along the way in the OpenAI saga seem, shall we say, stable — much less morally well-informed enough that Americans or any peoples would rest easy about putting the future in the hands of Altman and company? Are these individuals worth the $20 million to $100 million a year they command on the hot AI market?

Or are we — as a people, a society, a civilization — in danger of becoming strung out, hitting a wall of self-delusion and frenzied acquisitiveness? What do we have to show so far for the power, money, and special privileges thrown at Altman for promising a world remade? And he’s just getting started. Who among us feels prepared for what’s next?

Big Tech execs enlist in Army Reserve, citing 'patriotism' and cybersecurity



Four leading tech executives have joined the United States Army Reserve with a special officer status that will see them work a little more than two weeks per year.

The recruits were sworn in just in time for the Army's 250th birthday as part of a 2024 initiative by the U.S. military to find tech experts for short-term projects in cybersecurity, data analytics, and other areas.

The newly commissioned officers will be ranked as lieutenant colonels, the sixth-highest officer rank among Army personnel. However, they will still need to complete a fitness test and marksmanship training.

'There's a lot of patriotism that has been under the covers that I think is coming to light in the Valley.'

Chief Technology Officers Shyam Sankar and Andrew "Boz" Bosworth from Palantir and Meta, respectively, will be joined by Kevin Weil, chief product officer from OpenAI, and Bob McGrew, OpenAI's former chief research officer.

According to a report from the Wall Street Journal, the executives will bring sorely needed tech upgrades to the armed forces. Back in October 2024, the outlet reported on the Defense Department's desire to bring on tech experts in part-time roles to help the federal government get up to speed on cybersecurity and data, sectors in which talent and skill have largely been siphoned off by the private sector in recent years.

The program's name is itself an ode to tech: Detachment 201 references hypertext transfer protocol status code 201, which a server returns when a request has succeeded and a new resource has been created.
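For readers who want to see the reference in action, here is a minimal Python sketch of a server answering with status 201. The endpoint, port, and Location header are hypothetical, included only to illustrate the status code's meaning.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CreateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # 201 Created: the request succeeded and a new resource now exists.
        self.send_response(201)
        self.send_header("Location", "/records/1")  # hypothetical new resource
        self.end_headers()

if __name__ == "__main__":
    # Answers any POST with "201 Created"; stop with Ctrl+C.
    HTTPServer(("localhost", 8000), CreateHandler).serve_forever()
```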

RELATED: OpenAI sabotaged commands to prevent itself from being shut off


The new reservists will also be tasked with acquiring more commercial technology, according to the WSJ, but will be limited in their work hours — 120 per year — and will not be allowed to share any information with their civilian employers.

Bosworth said Meta founder Mark Zuckerberg supported his decision to join the Army Reserve, claiming, "There's a lot of patriotism that has been under the covers that I think is coming to light in the Valley."

Whatever his true intentions, Zuckerberg has presented himself as a more patriotic individual in the last year, including wooing UFC President Dana White with a giant American flag in Lake Tahoe.

Anduril founder Palmer Luckey has also spoken positively about how the Trump administration in particular has worked with the tech sector. In fact, Luckey said Meta had rid itself of any "insane radical leftists," which has likely helped Zuckerberg become one of the darlings of the newfound marriage between tech CEOs and the right wing.

RELATED: Who's stealing your data, the left or the right?


"I have always believed that America is a force for good in the world, and in order for America to accomplish that, we need a strong military," McGrew said about his choice, per the WSJ.

Sankar reportedly said he is giving back to the country because, were it "not for the grace of this nation," his family would be "dead in a ditch" in Lagos, Nigeria.

Bosworth has allegedly enhanced his workouts in preparation for the service, but it is unclear whether he draws inspiration from legendary NFL agitator Brian "the Boz" Bosworth.
