'Swarms of killer robots': Former Biden official says US military is afraid of using AI



A former Biden administration official working on cyber policy says the United States military would have a problem controlling its soldiers' use of artificial intelligence.

Mieke Eoyang, the deputy assistant secretary of defense for cyber policy during the Joe Biden administration, said that current AI models are poorly suited for use in the U.S. military and would be dangerous if implemented.

'There are any number of things that you might be worried about.'

Citing claims of "AI psychosis" and fears of killer robots, Eoyang said the military cannot simply take an existing, public AI agent and adapt it for military use. Doing so would, of course, involve giving a chatbot leeway to suggest the use of violence, or even the killing of a target.

Allowing for such capabilities is cause for alarm in the Department of Defense, now the Department of War, Eoyang claimed.

"A lot of the conversations around AI guardrails have been, how do we ensure that the Pentagon's use of AI does not result in overkill? There are concerns about 'swarms of AI killer robots,' and those worries are about the ways the military protects us," she told Politico.

"But there are also concerns about the Pentagon's use of AI that are about the protection of the Pentagon itself. Because in an organization as large as the military, there are going to be some people who engage in prohibited behavior. When an individual inside the system engages in that prohibited behavior, the consequences can be quite severe, and I'm not even talking about things that involve weapons, but things that might involve leaks."

Perhaps unbeknownst to Eoyang, the Department of War is already developing an internal AI system.

RELATED: War Department contractor warns China is way ahead, and 'we don't know how they're doing it'


According to EdgeRunner CEO Tyler Saltsman, not only is the Department of War not afraid of AI, but it's "all about it."

Saltsman just wrapped up a test run with the Department of War during military exercises at Fort Carson, Colorado, and Fort Riley, Kansas. He recently told Blaze News about his offline chatbot, EdgeRunner AI, which is modernizing the delivery of information to troops on the ground.

"The Department of War is trying to fortify what their AI strategy looks like; they're not afraid of it," Saltsman told Blaze News in response to Eoyang's claims.

He added, "It's concerning that folks who are clueless on technology were put in such highly influential positions."

In her interview, Eoyang — a former MSNBC contributor — also raised concerns about operational security, warning that "malicious actors" could get "their hands on" AI tools used by the U.S. military.

"There are any number of things that you might be worried about. There's information loss; there's compromise that could lead to other, more serious consequences," she said.

RELATED: 'They want to spy on you': Military tech CEO explains why AI companies don't want you going offline


These valid concerns were seemingly put to bed by Saltsman when he previously revealed to Blaze News that EdgeRunner AI would remain completely offline.

The entrepreneur even advocated that publicly available AI models offer an offline version users can pay for and keep. Alternatives, he explained, "want your data, they want your prompts, they want to learn more about you."

"They want to spy on you," he added.

Saltsman recently announced a partnership with Mark Zuckerberg's Meta that will see the technology shared with military allies across the world.

"It's important for the government to partner with industry and academia and have joint-force operations in this field," he told Blaze News. "I'm thankful for Secretary of War Pete Hegseth and all he is doing to reshape the DOW and help it become more effective."


Phones and drones expose the cracks in America’s defenses



In June, Israel embarrassed Iran’s ruling class, killing generals, politicians, and nuclear scientists with precision strikes. Tehran’s top brass thought they were safe. They weren’t.

Why? Their bodyguards and drivers carried cell phones that gave them away. That’s all it took for Israel to trace them and unleash devastation. The supreme leader only survived because President Donald Trump ordered Israel not to pull the trigger on him.

Phones in pockets and drones in the sky may not look like weapons, but they’re deadly if left unchecked.

The Israelis achieved this feat by identifying the weak link and exploiting it.

“We know senior officials and commanders did not carry phones, but their interlocutors, security guards, and drivers had phones; they did not take precautions seriously, and this is how most of them were traced,” an Iranian analyst told the New York Times.

Iran’s failure should be America’s wake-up call — because we share the same blind spots.

The weakest link in US security

The U.S. government spends billions on cybersecurity. All it takes is one careless employee with a smartphone in his pocket to blow it all up.

Even when not in use, phones emit wireless signals that can be detected, tracked, or exploited, potentially allowing adversaries to locate classified sites or intercept top-secret communications.

Most sensitive government facilities ban phones, but bans mean nothing without enforcement. Few have the tools to actually detect compromising phone use.

The solution already exists: wireless intrusion detection systems. Think of them as radar for the invisible spectrum. They pick up unauthorized devices, expose the threat, and let security teams act before adversaries do.
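In rough terms, such a system is a packet sniffer married to an allowlist. Below is a minimal sketch in Python of the core idea, assuming a Linux host with root privileges, a Wi-Fi adapter already in monitor mode (named wlan0mon here), and the scapy packet library; the authorized-device list is purely hypothetical.

```python
# Minimal wireless-intrusion-detection sketch: sniff 802.11 frames and
# flag any transmitting device whose MAC address is not on an allowlist.
# Assumes a Linux host, root privileges, a Wi-Fi adapter already in
# monitor mode (named "wlan0mon" here), and the scapy library.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11

# Hypothetical allowlist of devices authorized inside the facility.
AUTHORIZED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

seen = set()  # remember devices we have already flagged

def check_frame(pkt):
    """Alert once per unauthorized transmitter observed on the air."""
    if pkt.haslayer(Dot11) and pkt[Dot11].addr2:
        mac = pkt[Dot11].addr2.lower()
        if mac not in AUTHORIZED_MACS and mac not in seen:
            seen.add(mac)
            print(f"ALERT: unauthorized transmitter detected: {mac}")

# Capture indefinitely, handing each frame to check_frame.
sniff(iface="wlan0mon", prn=check_frame, store=False)
```

A deployed system would add distributed sensors and direction-finding to physically locate a rogue device, but the core detection logic is no more exotic than this.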

Washington wastes trillions on bureaucratic nonsense, but it can’t make sure the guy walking into a sensitive compartmented information facility isn’t carrying a digital beacon for the Chinese Communist Party? That’s how empires fall.

The new terrorist weapon

Drone technology is also changing the game.

In 2020, Azerbaijan crushed Armenia with cheap drones. Ukraine used $1,000 drones to destroy billions of dollars’ worth of Russian aircraft during Operation Spider’s Web. A hundred hobby drones, a few bombs, and some know-how — that’s all it took to humiliate the Kremlin.

RELATED: Does anyone think we’re up to the task of controlling AI?


Now imagine what Iran, China, or even a terrorist cell on U.S. soil could do using the same playbook. Hackers can override “no-fly” geofencing software in minutes. That means no city, power plant, or military base is truly safe.

Stopping this requires ripping China out of our drone supply chains and arming American law enforcement with real anti-drone defenses. Anything less is a gamble with American lives.

Adapt or die

War evolves, technology evolves, and America must evolve with them. Phones in pockets and drones in the sky may not look like weapons, but they’re deadly if left unchecked.

America doesn’t need more bloated Pentagon reports or blue-ribbon commissions. We need decisive action — mandating wireless intrusion detection systems in every secure facility, hardening our skies against drones, and cutting China out of the equation entirely.

The Israelis exploited Iran’s weakness. Tomorrow, someone will exploit ours — unless we fix our weaknesses now.

Adapt or lose. That’s the choice.

New AI policing program could entrap innocent Americans



Several Arizona police departments are piloting a new AI-powered policing tool that promises to revolutionize how officers catch criminals. But without robust constitutional safeguards, this cutting-edge technology could pose a serious threat to the civil liberties of everyday Americans.

Arizona police agencies are now testing a new AI program that “deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels.” The program, called Overwatch, was developed by Massive Blue and provides police departments with up to 50 different AI personas.

While the technology could, in theory, be used for noble purposes, ... it also creates new opportunities for government overreach.

These personas include a sex trafficker, an escort, a 14-year-old boy in a child trafficking scenario, and a vaguely defined “college protester.” Beyond social media monitoring, the program allows police to communicate directly with suspects while posing as one of these AI-generated personas, all without a warrant.

No transparency

So far, both the police departments using Overwatch and the company behind it have been extremely secretive about its operations. Massive Blue co-founder Mike McGraw declined to answer questions from 404 Media, which first broke the story, about how the program works, which departments are using it, and whether it has led to any arrests.

“We cannot risk jeopardizing these investigations and putting victims’ lives in further danger by disclosing proprietary information,” McGraw said.

The Pinal County Sheriff’s Office, one of the few agencies that have confirmed using the program, admitted it has not yet led to any arrests. Officials refused to provide details, saying, “We cannot risk compromising our investigative efforts by providing specifics about any personas.”

At an appropriations hearing, a Pinal County deputy sheriff also declined to share information about the program with the county council. Remarkably, the Arizona Department of Public Safety, which funds the initiative, does not appear to have been informed about the program’s specifics.

While the technology could, in theory, be used for noble purposes, such as preventing terrorist attacks or combating human trafficking, it also creates new opportunities for government overreach. Without safeguards, it poses a direct threat to the civil liberties of innocent Americans.

Invitation to entrapment

History is full of examples of government entrapment and abuse of power. In the plot to kidnap Michigan Gov. Gretchen Whitmer (D), for example, FBI involvement played a central role in bringing together groups that may never otherwise have connected.

Similarly, in Jacobson v. United States (1992), federal agents sent child sexual abuse material through the mail to a man with no prior criminal record, leading to his conviction, which was later overturned.

RELATED: Netflix’s chilling new surveillance tools are watching you


In both cases, it is doubtful the crimes would have occurred without government intervention. A program like Overwatch makes such abuses easier, granting the government new ways to monitor and manipulate citizens who have never been convicted of a crime, and all without warrants.

The risks are compounded by the program’s vague and troubling categories, such as “college protester,” which could be redefined depending on who is in power. That opens the door for the technology to be weaponized against political dissent, even when no crime has been committed.

Without serious constitutional safeguards, programs like this are poised to become political tools of tyranny. Americans must demand warrant requirements and legislative oversight before this technology spreads nationwide and the erosion of our constitutional liberties becomes irreversible.

College student trash-talks ChatGPT after allegedly confessing to mass vandalism: 'go f**k urslef'



A sophomore from Missouri State University allegedly confessed his crimes to a chatbot just minutes after committing them.

The student, Ryan Schaefer, is accused by the Springfield Police Department of vandalizing 17 vehicles in a university parking lot in the early hours of August 28.

'Yeah go f**k urslef. thats why i f**ked up all those useless f**kers cars.'

According to a police report obtained by the Smoking Gun, the damage included shattered windshields, ripped-off windshield wipers, dented hoods, and torn-away side mirrors.

The trove of alleged evidence includes Schaefer's shoe prints, cellphone data, security footage, and even witness statements, but the more compelling part of the story is Schaefer's alleged conversation with ChatGPT after the crimes occurred.

Schaefer reportedly consented to a search of his phone, which, according to police, showed that just 10 minutes after the incident, he asked the ChatGPT app on his phone, "how f**ked am i bro."

The conversation with the AI is riddled with spelling mistakes and is reproduced here as written.

ChatGPT gave the user advice about the potential consequences of getting caught for "vreaking the windzhaileds or random cars," to which Schaefer allegedly responded, "what if i smashed the s**t oitta multipls cars."

Schaefer then allegedly asked ChatGPT if the MSU freshman parking lot has cameras, while also allegedly saying, "i mean i was being chull ab it but i was smahisng the winshikefs of random fs cars."

The chat continued, "Well they dont know it was me, there was a pfff campus oarty at artifacts. and yhen they f**ked uppp da cars at artifacts and it was me bc they has two cops here but they eventually left."

Police then said at that point, "It appears that Schaefer begins to spiral."

RELATED: Chatbots calling the shots? Prime minister’s recent AI confession forebodes a brave new world of governance


Police wrote that ChatGPT began to express concern and allegedly asked Schaefer to stop talking about harming people and property.

Now seemingly antagonistic toward the AI, the user wrote, "smd p***y," before citing troubling details about his freshman year. In summary, the user said he was hazed by his brothers and that his girlfriend was "raped" the previous school year.

But the user continued, seemingly confident that police would not find the suspect:

"smd ikl text y tmr cu i wont get in no trouble bc if i get in groubke for doung s**t i will kill all u fi kers."

The user continued with threats toward the chatbot along with more statements about not getting caught.

"Yeah go f**k urslef. thats why i f**ked up all those useless f**kers cars, cuz they all dexerve to get raped and murdered, exactly like u."

The messages continued, "i dont give a f**k shut the f**k up until dumb n****r try and get me in trouble for the shi i didn't tn u wont ill do it f**king again."

RELATED: ‘AI psychosis’ is sending men to the hospital


Schaefer's alleged conversation showed that he was very confident that authorities would not recognize him, even if he was shown on camera.

Police described an interview with Schaefer in his residence during which he said, "I can see it, I guess, the resemblance," while looking at screenshots from security footage.

Police seized his shoe and his iPhone as evidence; Schaefer later agreed to let police search the phone.

A witness told police that the suspect in police photos "was possibly Ryan Schaefer" and matched the description of the suspect who was on camera. Another witness told police that Schaefer had told them in recent weeks that he had smashed a windshield while he walked home. Schaefer denied any involvement in the incident and denied making any such admission.

The Smoking Gun reported that Schaefer was jailed on $7,500 bond. Upon his release, he will reportedly be barred from any premises "where the primary item for sale is alcoholic beverages" and will be required to submit to random drug and alcohol testing. Additionally, he will reportedly be fitted with a GPS monitoring device.


Does anyone think we’re up to the task of controlling AI?



So many slide decks and white papers promise a future of AI under human control, a project framed not as a technological sprint but as a long journey. The language is meant to reassure, a steady hand on the shoulder of a jittery public. Yet the very premise of the journey implies a certain departure, a recognition that the systems we are building now operate at a speed and complexity that have outstripped our capacity to oversee them easily. One might nervously wonder whether the center will hold.

One answer to this predicament is “interpretability,” a technique for examining an AI model to figure out why it did what it did. It’s the equivalent of reading a plane’s flight recorder after a crash. But a system making thousands of autonomous decisions a second offers no time for such leisurely forensics. A failure may not be an event but a condition, a constant state of potential deviation.

The new thinking, then, is to move from forensics to architecture. The goal is to build in the oversight, to treat governance, not as a secondary analysis, but as a foundational requirement, an immutable audit trail that logs not just a model’s output but its entire lineage: the data it was fed, the model version that made the call, the key inputs that shaped its rationale. We are no longer merely watching the machine; we are building a watchtower.
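To make the watchtower concrete, here is a minimal sketch in Python of what one such lineage record might look like; the field names and the hash-chaining scheme are illustrative assumptions, not any deployed system's schema.

```python
# Illustrative sketch of an append-only audit record for a single model
# decision. Field names and the hash-chaining scheme are assumptions for
# illustration, not any deployed system's schema.
import hashlib
import json
import time

def audit_record(model_version, dataset_hash, inputs, output, prev_hash):
    """Build one tamper-evident log entry for a single model decision."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,  # the model version that made the call
        "dataset_hash": dataset_hash,    # fingerprint of the data it was fed
        "inputs": inputs,                # the key inputs that shaped the call
        "output": output,                # what the model actually decided
        "prev_hash": prev_hash,          # link to the previous entry
    }
    # Hashing the entry (including the previous hash) chains the log:
    # altering any past record breaks every hash that follows it.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# Example: log one decision, chained to a genesis value.
record = audit_record(
    model_version="model-v3.2",
    dataset_hash="sha256:ab12cd34",
    inputs={"amount": 950, "country": "US"},
    output="flag_for_review",
    prev_hash="0" * 64,
)
print(record["entry_hash"])
```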

In the loop?

At the heart of this new architecture is the "human-in-the-loop," a concept whose neatness belies a deeper anxiety. The human, we are told, will shift from passive reviewer to active designer, engaging in a continuous loop of governance that sets the boundaries and defines the goals. But the very act of depending on these systems can engender a state of cognitive offloading, a subtle atrophy of our own critical faculties. We are asked to be the system's ultimate arbiter at the very moment the system is eroding the instincts required for the job.

The friction is everywhere. We see it in the laboratory, when a researcher at the University of Washington uses deep learning to design functional proteins that have never existed in nature, opening doors to novel medicines and biosensors. We see it in the game of Go, when a machine makes a move that defies centuries of human wisdom, a move of startling, alien creativity. The promise is one of discovery, of accelerating the scientific method. The possible reality is a “theory glut,” a condition in which the bottleneck shifts from ideation to validation. We find ourselves in a world that can generate hypotheses at a superhuman rate, but our capacity to test them, to ground them in the physical world, remains stubbornly, irreducibly human. We might drown in brilliant answers to questions we have not yet learned how to ask.

This dissonance echoes in the most intimate spaces of our lives. We are offered “digital twins,” virtual replicas of our own physiology, updated in real time, upon which a surgeon can rehearse a procedure in a risk-free environment. We are told that AI copilots will save the legal profession a great number of hours per year, freeing lawyers from the drudgery of document review to focus on the higher arts of deepening client relationships.

RELATED: Female avatar appointed as Europe's first AI government official

Photo by SFOTO / Contributor via Getty Images

Free and fragile

The narrative is one of liberation, of efficiency begetting connection. And yet, this reclaimed time exists within a system of escalating expectations. The Jevons paradox, a 19th-century economic observation, finds its modern footing here: As efficiency increases, so sometimes does demand. The two hours a sales professional saves each day are not banked for leisure; they are reinvested into the pursuit of higher quotas. The freedom from menial tasks does not lead to rest, but to the creation of new, more complex work.

And beneath it all, there is a persistent hum of vulnerability. The very transparency we engineer for control becomes a new attack surface. An adversary can engage in “data poisoning,” slipping malicious information into a training set to warp a model’s output in subtle, insidious ways. The system built for auditability becomes uniquely susceptible to a kind of attack that leaves no obvious trace, a hidden vulnerability that could lie dormant for years in a system that guides autonomous vehicles or calibrates antibiotic dosages. The solution, it turns out, has problems of its own.
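The mechanics can be seen in miniature. Below is a toy sketch in Python, using scikit-learn on synthetic data, in which an attacker flips a fraction of the training labels; real poisoning is far subtler than random flips, but the warping of the model's output is the point.

```python
# Toy data-poisoning sketch: flip a fraction of training labels and
# compare a classifier trained on clean labels against one trained on
# the poisoned set. Requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class dataset, split into train and test halves.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The "attack": flip 30% of the training labels at random.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

# The poisoned model's accuracy on untouched test data degrades.
print("clean model accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned model accuracy:", poisoned_model.score(X_te, y_te))
```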

The long journey points not toward a destination but toward a state of perpetual negotiation. The most critical constraint is not hardware or networking or power. It is talent. The crisis is human. The skill gap between the demand for those who can manage these systems and the available supply is the true bottleneck. The government may frame this challenge as a matter of national security, an imperative to maintain a competitive advantage. But it seems to be something more fundamental. A controllable AI future is not about building smarter machines. It is about the far more complex and uncertain project of building a more resilient and healthy human society, one capable of managing the strange and brilliant weather of its own creation.

Here's how a penguin avatar will be the new leader of a Japanese political party



A political party in Japan is turning to a non-human leader after failing to gain any seats in a recent election.

The Path to Rebirth party was launched in January after a former small-town mayor, Shinji Ishimaru, shocked Japan by coming in second in Tokyo's 2024 gubernatorial race.

'Legally, the representative must be a natural person.'

Despite not having any policies, platforms, or member guidelines, the party had hoped to gain traction in Japan's House of Councillors election. However, the Path to Rebirth party failed to pick up any of the 124 seats that were up for grabs in the election.

Ishimaru quit following the massive defeat, the Japan Times reported, and now a new leader has been installed.

Doctoral student Koki Okumura won the party's leadership race but decided he was not fit for the job, and last week he tapped a new party leader.

RELATED: Female avatar appointed as Europe's first AI government official

Shinji Ishimaru, former mayor of Akitakata City, speaks during the WebX2024 in Tokyo, Japan. Photographer: Kiyoshi Ota/Bloomberg via Getty Images

"The new leader will be AI," the 25-year-old declared.

Describing himself as the assistant to the artificial intelligence, the Kyoto University student said the AI will be a penguin avatar, a nod to Japan's love for animals.

Okumura said that while the party will "entrust decision-making to AI," he will be the formal figurehead because party leaders in Japan must be human.

"Legally, the representative must be a natural person, so formally, a human serves as the representative," he explained.

In an interview with CNN, Okumura said he believes AI will eventually take over all the decision-making for the Path to Rebirth party.

"I believe it has the potential to achieve things with greater precision than humans. This approach allows us to carefully consider voices that are often overlooked by humans, potentially creating a more inclusive and humane environment for political participation," Okumura added.

Perhaps surprisingly, the AI penguin is not the first major appointment for a non-human entity this month.

RELATED: Can these new fake pets save humanity? Take a wild guess

Albania's new AI-generated minister "Diella" "speaks" during a parliamentary session. Photo by ADNAN BECI/AFP via Getty Images

Just a week prior, Albania announced that public tenders — bids made by companies to supply goods or services to the government — will be handled by an AI minister.

The AI, named Diella, meaning "Sun," was already in use as a virtual assistant for Albania's online public services website that deals with digital documents. Prime Minister Edi Rama boldly claimed that the AI will be "100% free of corruption."

As for the penguin leader, Okumura said there is no timeline for its formal implementation, and its appearance has not been revealed.


Wired In

A trained computational biologist — one who discovers biological truths through simulations rather than physical experiments — Arbesman volunteers as our guide. With software now embedded in our daily routines, he rests uneasily knowing that only the technologically savvy wield its full creative potential. He envisions a world in which everyone possesses this power. Thanks to recent advances in generative artificial intelligence, such as ChatGPT, that vision is more plausible than ever.
