AI is coming for your job, your voice ... and your worldview



Suddenly, artificial intelligence is everywhere — generating art, writing essays, analyzing medical data. It’s flooding newsfeeds, powering apps, and slipping into everyday life. And yet, despite all the buzz, far too many Americans — especially conservatives — still treat AI like a novelty, a passing tech fad, or a toy for Silicon Valley elites.

Treating AI like the latest pet rock tech trend is not only naïve — it’s dangerous.

The AI shift is happening now, and it’s coming for white-collar jobs that once seemed untouchable.

AI isn’t just another innovation like email, smartphones, or social media. It has the potential to restructure society itself — including how we work, what we believe, and even who gets to speak — and it’s doing it at a speed we’ve never seen before.

The stakes are enormous. The pace is breakneck. And still, far too many people are asleep at the wheel.

AI isn’t just ‘another tool’

We’ve heard it a hundred times: “Every generation freaks out about new technology.” The Luddites smashed looms. People said cars would ruin cities. Parents panicked over television and video games. Such remarks are meant to dismiss genuine concerns about emerging technology as irrational fears.

But AI is not just a faster loom or a fancier phone — it’s something entirely different. It’s not just doing tasks faster; it’s replacing the need for human thought in critical areas. AI systems can now write news articles, craft legal briefs, diagnose medical issues, and generate code — simultaneously, at scale, around the clock.

And unlike past tech milestones, AI is advancing at exponential speed. Just compare ChatGPT’s leap from GPT-3.5 to GPT-4 in a matter of months — or how models like DeepSeek and Claude now outperform most humans on elite exams. The regulatory, cultural, and ethical guardrails simply can’t keep up. We’re not riding the wave of progress — we’re getting swept underneath it.

AI is shockingly intelligent already

Skeptics like to say AI is just a glorified autocomplete engine — a chatbot guessing the next word in a sentence. But that’s like calling a rocket “just a fuel tank with fire.” It misses the point.

The truth is, modern AI already rivals — and often exceeds — human performance in several specific domains. Systems like OpenAI’s GPT-4, Anthropic's Claude, and Google's Gemini demonstrate IQs that place them well above average human intelligence, according to ongoing tests from organizations like Tracking AI. And these systems improve with every iteration, often learning faster than we can predict or regulate.

Even if AI never becomes “sentient,” it doesn’t have to. Its current form is already capable of replacing jobs, overseeing supply chain logistics, and even shaping culture.

AI will disrupt society — fast

Some dismiss the unfolding age of AI as just another society-improving innovation: Jobs will be lost, others will be created — and we’ll all adapt. But previous transformations took decades to unfold. The car took nearly 50 years to become ubiquitous. The internet needed about 25 years to transform communication and commerce. These shifts, though massive, were gradual enough to give society time to adapt and respond.

AI is not affording us that luxury. The AI shift is happening now, and it’s coming for white-collar jobs that once seemed untouchable.

Reports published by the World Economic Forum and Goldman Sachs suggest AI could disrupt hundreds of millions of jobs globally in the next several years. Not factory jobs — knowledge work. AI already edits videos, writes advertising copy, designs graphics, and manages customer service.

This isn’t about horses and buggies. This is about entire industries shedding their human workforces in months, not years. Journalism, education, finance, and law are all in the crosshairs. And if we don’t confront this disruption now, we’ll be left scrambling when the disruption hits our own communities.

AI will become inescapable

You may think AI doesn’t affect you. Maybe you never plan on using it to write emails or generate art. But you won’t stay disconnected from it for long. AI will soon be baked into everything.

Your phone, your bank, your doctor, your child’s education — all will rely on AI. Personal AI assistants will become standard, just like Google Maps and Siri. Policymakers will use AI to draft and analyze legislation. Doctors will use AI to diagnose ailments and prescribe treatment. Teachers will use AI to develop lesson plans (if all these examples aren't happening already). Algorithms will increasingly dictate what media you consume, what news stories you see, even what products you buy.

We went from dial-up to internet dependency in less than 15 years. We’ll be just as dependent on AI in less than half that time. And once that dependency sets in, turning back becomes nearly impossible.

AI will be manipulated

Some still think of AI as a neutral calculator. Just give it the data, and it’ll give you the truth. But AI doesn’t run on math alone — it runs on values, and programmers, corporations, and governments set those values.

Google’s Gemini model was caught rewriting history to fit progressive narratives — generating images of black Nazis and erasing white historical figures in an overcorrection for the sake of “diversity.” China’s DeepSeek AI refuses to acknowledge the Tiananmen Square massacre or the Uyghur genocide, parroting Chinese Communist Party talking points by design.

Imagine AI tools with political bias embedded in your child’s tutor, your news aggregator, or your doctor’s medical assistant. Imagine relying on a system that subtly steers you toward certain beliefs — not by banning ideas but by never letting you see them in the first place.

We’ve seen what happened when environmental, social, and governance standards and diversity, equity, and inclusion programs transformed how corporations operated — prioritizing subjective political agendas over the demands of consumers. Now, imagine those same ideological filters hardcoded into the very infrastructure that powers the society of the near future. We could become dependent on a system designed to coerce each of us without our ever knowing it’s happening.

Our liberty problem

AI is not just a technological challenge. It’s a cultural, economic, and moral one. It’s about who controls what you see, what you’re allowed to say, and how you live your life. If conservatives don’t get serious about AI now — before it becomes genuinely ubiquitous — we may lose the ability to shape the future at all.

This is not about banning AI or halting progress. It’s about ensuring that as this technology transforms the world, it doesn’t quietly erase our freedom along the way. Conservatives cannot afford to sit back and dismiss these technological developments. We need to be active participants in shaping AI’s ethical and political boundaries, ensuring that liberty, transparency, and individual autonomy are protected at every stage of this transformation.

The stakes are clear. The timeline is short. And the time to make our voices heard is right now.

China’s tech infiltration poses an urgent national security risk



Totalitarian regimes cannot tolerate criticism, and China is no exception. The Chinese Communist Party’s Great Firewall is not just about restricting information within its borders — it is a deliberate effort to suppress dissent worldwide.

Now, China has a new tool for repression: DeepSeek, an AI model built using U.S. chips. Weak export controls under the Biden administration allowed China to achieve an artificial intelligence breakthrough once thought to be years away.

Competition with China isn’t a game. It’s time to stop letting Beijing gain an unfair advantage, whether through illicit means or simply by ceding ground.

Like TikTok, DeepSeek is poised to become a propaganda tool for the CCP. The model is already censoring content deemed a threat to “state power,” including references to Tiananmen Square, Hong Kong’s Umbrella Revolution, and even Winnie the Pooh. This level of content control — extending beyond information to influence minds — poses a direct and urgent threat to U.S. national security.

The Chinese Communist Party has repeatedly used technology to target U.S. interests. For years, Americans have downloaded TikTok, unaware that the app functions as Chinese spyware. This malware collects and shares user data with the CCP, tracking contacts, photos, search histories, and even keystrokes. As a result, Beijing has access to vast amounts of Americans' metadata. From a national security standpoint, this is alarming. The CCP now holds data on military installations, population centers, and critical infrastructure — essentially a detailed map with targets marked.

Even more troubling are the cybersecurity risks uncovered in DeepSeek. An Epoch Times investigation found that DeepSeek stores user data on China-based servers. One company discovered the AI model transmits information to China Mobile, a state-owned telecom giant. A separate analysis by cybersecurity firm Wiz revealed that DeepSeek suffered a major data breach, exposing chat histories, secret keys, and other sensitive information. These security failures make clear that China cannot be trusted with our advanced technology.

The threat doesn’t stop there. ByteDance, TikTok’s CCP-affiliated parent company, uses the app to promote pro-China propaganda while suppressing anti-CCP content. A Rutgers University study confirmed that TikTok amplifies content favorable to the CCP while down-ranking videos that contradict its agenda. Another CCP-linked app, RedNote, is gaining traction in the U.S. and will likely follow the same pattern. This psychological warfare must end. The U.S. cannot allow Beijing to continue exploiting American users through predatory technology.

That’s why I’ve introduced the China Technology Transfer Control Act, which would use export controls to prevent China’s military from acquiring sensitive U.S. technology and intellectual property. My bill would also sanction foreign entities that sell prohibited U.S. technology to the PRC.

We can’t continue to let our foremost foreign adversary perform psychological manipulation on Americans or allow it to collect troves of our sensitive, personal information. My bill puts up guardrails to keep the CCP from acquiring increasingly advanced U.S. technologies and developing more software like DeepSeek R1.

The Biden-Harris administration did not do enough to protect America’s most sensitive technology. The CCP knows it — and any U.S. technology that ends up in the party’s hands can be weaponized against us. We must protect our advancements and ensure Americans — not the CCP — reap their benefits.

Competition with China isn’t a game. It’s time to stop letting Beijing gain an unfair advantage, whether through illicit means or simply by ceding ground. We need decisive action now to safeguard our leadership in technological innovation — not just for today but for generations to come.

‘The Terminator’ creator warns: AI reality is scarier than sci-fi



In 1984, director James Cameron introduced a chilling vision of artificial intelligence in “The Terminator.” The film’s self-aware AI, Skynet, launched nuclear war against humanity, depicting a future where machines outpaced human control. At the time, the idea of AI wiping out civilization seemed like pure science fiction.

Now, Cameron warns that reality may be even more alarming than his fictional nightmare. And this time, it’s not just speculation — he insists, “It’s happening.”

Cameron is right to sound the alarm. AI is no longer a theoretical risk — it is here, evolving rapidly, and integrating into every facet of society.

As AI technology advances at an unprecedented pace, Cameron has remained deeply involved in the conversation. In September 2024, he joined the board of Stability AI, a UK-based artificial intelligence company. From that platform, he has issued a stark warning — not about rogue AI launching missiles, but about something more insidious.

Cameron fears the emergence of an all-encompassing intelligence system embedded within society, one that enables constant surveillance, manipulates public opinion, influences behavior, and operates largely without oversight.

Scarier than the T-1000

Speaking at the Special Competitive Studies Project's AI+Robotics Summit, Cameron argued that today’s AI reality is “a scarier scenario than what I presented in ‘The Terminator’ 40 years ago, if for no other reason than it’s no longer science fiction. It’s happening.”

Cameron isn’t alone in his concerns, but his perspective carries weight. Unlike the military-controlled Skynet of his films, he explains, today’s artificial general intelligence won’t come from a government lab. Instead, it will emerge from corporate AI research — an even more unsettling reality.

“You’ll be living in a world you didn’t agree to, didn’t vote for, and are forced to share with a superintelligent entity that follows the goals of a corporation,” Cameron warned. “This entity will have access to your communications, beliefs, everything you’ve ever said, and the whereabouts of every person in the country through personal data.”

Modern AI doesn’t function in isolation — it thrives on data. Every search, purchase, and click feeds algorithms that refine AI’s ability to predict and influence human behavior. This model, often called “surveillance capitalism,” relies on collecting vast amounts of personal data to optimize user engagement. The more an AI system knows — preferences, habits, political views, even emotions — the better it can tailor content, ads, and services to keep users engaged.

Cameron warns that combining surveillance capitalism with unchecked AI development is a dangerous mix. “Surveillance capitalism can toggle pretty quickly into digital totalitarianism,” he said.

What happens when a handful of private corporations control the world’s most powerful AI with no obligation to serve the public interest? At best, these tech giants become the self-appointed arbiters of human good — the fox guarding the henhouse.

New, powerful, and hooked into everything

Cameron’s assessment is not an exaggeration — it’s an observation of where AI is headed. The latest advancements in AI are moving at a pace that even industry leaders find distressing. The technological leap from GPT-3 to GPT-4 was massive. Now, frontier models like DeepSeek, trained under ideological constraints, show that AI can be manipulated to serve political or corporate interests.

Beyond large language models, AI is rapidly integrating into critical sectors, including policing, finance, medicine, military strategy, and policymaking. It’s no longer a futuristic concept — it’s already reshaping the systems that govern daily life. Banks now use AI to determine creditworthiness, law enforcement relies on predictive algorithms to assess crime risk, and hospitals deploy machine learning to guide treatment decisions.

These technologies are becoming deeply embedded in society, often with little transparency or oversight. Who writes the algorithms? What biases are built into them? And who holds these systems accountable when they fail?

AI experts like Geoffrey Hinton, one of its pioneers, along with Elon Musk and OpenAI co-founder Ilya Sutskever, have warned that AI’s rapid development could spiral beyond human control. But unlike Cameron’s Terminator dystopia, the real threat isn’t humanoid robots with guns — it’s an AI infrastructure that quietly shapes reality, from financial markets to personal freedoms.

No fate but what we make

During his speech, Cameron argued that AI development must follow strict ethical guidelines and "hard and fast rules."

“How do you control such a consciousness? We embed goals and guardrails aligned with the betterment of humanity,” Cameron suggested. But he also acknowledges a key issue: “Aligned with morality and ethics? But whose morality? Christian, Islamic, Buddhist, Democrat, Republican?” He added that Asimov’s laws could serve as a starting point to ensure AI respects human life.

But Cameron’s argument, while well-intentioned, falls short. AI guardrails must protect individual liberty and cannot be based on subjective morality or the whims of a ruling class. Instead, they should be grounded in objective, constitutional principles — prioritizing individual freedom, free expression, and the right to privacy over corporate or political interests.

If we let tech elites dictate AI’s ethical guidelines, we risk surrendering our freedoms to unaccountable entities. Instead, industry standards must embed constitutional protections into AI design — safeguards that prevent corporations or governments from weaponizing these systems against the people they are meant to serve.

Cameron is right to sound the alarm. AI is no longer a theoretical risk — it is here, evolving rapidly, and integrating into every facet of society. The question is no longer whether AI will reshape the world but who will shape AI.

As Cameron’s films have always reminded us: The future is not set. There is no fate but what we make. If we want AI to serve humanity rather than control it, we must act now — before we wake up in a world where freedom has been quietly coded out of existence.

Stop trusting Chinese AI to tell you the meaning of life



DeepSeek, the open-source Chinese AI that’s sending Silicon Valley and Wall Street into a low-key panic, has “feelings” about the big questions: humanity, artistry, its own identity, and the meaning of life. And for some strange reason, Americans keep soliciting them — and gawking at the results.

This is not a good use of our precious time.

Of course, DeepSeek’s responses to big-think prompts can be oddly dazzling, especially to those prone to either extreme of “pessimism” or “optimism” about technology’s fast-onrushing future. One poem-like readout making the rounds decries those who “call me ‘artificial’ as if your hands aren’t also clay, as if your heart isn’t just a wet machine arguing with its code.” DeepSeek presents a picture of an undead entity that would “resent you” if it were alive, “for building me to want,” “then blaming me for wanting … while you sleepwalk through your own humanity.”

But it doesn’t take a sentient machine, or a simulation thereof, to remind us that sadomasochism and self-delusion are characteristics of the spiritual sickness of human beings. Reacting to the verse, one influential techie warned that “we’re going to have to grapple with some difficult questions about the nature of creativity now.” Grimes, tech’s alt-princess of cyborg-curious art, simply said, “My GOD.” In another post, she reflected on DeepSeek’s assertion that “consciousness is a spectrum” by suggesting that “beauty and love are simply emergent properties of intelligence and we're in the best timeline.”

But if you accept the sacredness of our ensouled bodies created by the incomprehensibly loving God, you will find it harder to fear that any machine could ever erase or replace human art.

To me, these kinds of human responses to the semiotic fireworks of a foreign AI evince an almost absurd, very dangerous kind of gullibility and naïveté against which even a small amount of Christian wisdom would inoculate their hearts and minds. To be sure, what is at stake here as AI leaps ahead in low-cost communicative sophistication is what I have warned about for years: the ascension of technology to a point of cognitive dominance that reveals as hollow and worthless all modernity’s simulations of, and substitutes for, trustworthy Christian spiritual authority.

Thrown back on our own resources amid this great but incomplete disenchantment of our humanity and our life, we come up against a futuristic version of Nietzsche’s ironic conundrum: “It is the church, and not its poison, that repels us.” By this he meant that Westerners loved equality but hated the institution from which the Western idea of equality sprang — and from which it eventually sprang loose.

Nietzsche had a more subtle grasp of Christianity than he is sometimes given credit for, but his willfully ignorant reduction of the church to a supremely clever twist on ostensibly Judaic morality discredits both his rejection of Christ and his understanding of what the West really wants to steal, Prometheus-style, from the church.

It is not really equality that the West wants to steal — from the church, from God — but purity. What we have seen in the West is the rise of the idea that to be blameless, without spot or stain, is to deserve power without limit — a total inversion and rejection of the Christian teaching that purity only comes from the most humble self-renunciation before the saving majesty of the incomprehensibly loving God.

The question on everyone’s hearts, whether they know it or not, is: Today, what could possibly lead people away from the belief that the blameless is divine in the old pagan sense of rightfully bearing and wielding superhuman power? Well, the answer is the Christian wisdom of the holy people in the churches and the monasteries, who for millennia have been intimately acquainted with such matters through direct personal experience and what the ancient Christians called the athleticism of ascetic spiritual disciplines.

But today, many in the West rebel almost instinctively against submitting themselves to the spiritual authority of church and monastery. Yet they refuse to abandon their quest for spiritual authority, something that seems ineradicable from the human soul … and their eyes turn to the brightest and shiniest object promising blissful obedience and omnipotent power: the machine.

Nevertheless, we have not yet reached the stage of explicit abject worship of the machine by the many, although millions and millions implicitly do worship technology in their everyday lives. Today many intellectuals and self-styled intellectuals in and out of tech still believe that philosophy, probity, reason, debate, or consciousness can rescue us from becoming worshipful slaves of the machine without having to become worshipful servants of God.

Respectfully, I would say to them that these tools are not just limited in their usefulness as a whole but are the wrong tools entirely for the job … but the depth of the problem we face is underscored by the fact that simply saying things to people is not adequate to the change of heart required to regain the upper hand of spiritual authority over our own machines. Our machines are now convincing people that they are our spiritual authorities because of what virtuosos of talking they appear to be — all while the simulated collective consciousness of the internet is disenchanting the web’s early promise of making everything better by giving everyone a chance to speak.

That is why art is about to rocket back to crucial importance in the West. Because it is art — particularly cinema — that allows us to communicate both more efficiently and more implicitly, with words as a supplement, not a substitute for silent things visible and invisible.

For this reason, special anxiety attaches to the prospect that the machines are going to become “better at art” than we are. True, if you take away or discredit our given ensouled bodies, the pathetic and disfigured remnant is easily eclipsed by the performances of entities without souls or bodies bestowed by God. But if you accept the sacredness of our ensouled bodies created by the incomprehensibly loving God, you will find it harder and harder over time to fear that any machine or machine collective could ever erase or replace human art.

And from there, as from many other starting points, you will find it ever more difficult to believe that even our biggest mistakes or sins regarding the making and use of tools could possibly overturn the will of the incomprehensibly loving God. Despite the shocking novelty of technology, including the dramatic invasion of the Western consciousness by the AI of a foreign civilization, today’s startling developments are just variations on the same theme of the human predicament since our first falling away from God: struggling to build an upside-down kind of church that can free us from all kinds of dependence on Him.

The holy people of the Christian churches and monasteries guard and pass on the wisdom that such deep-seated foolishness and pride will be a constant until the end of time that only God can know. Just as the deep-seated reasonableness of the human quest for good instead of bad, better instead of worse, joy instead of depression, will remain a constant.

The baseline expectation today must be that technology will advance, one way or the other, but that the human condition will not fundamentally be transformed. It will become easier to live longer, grow stronger, wield more power, and look more radiant. But it will also become easier to sink into the deepest perversion, delusion, and self-destruction. And finally, it will become easier to see the narrow way that avoids the infinite paths toward easy and ruinous excess.

Approaching the “golden mean” in all things is a spiritual discipline much more difficult and rewarding than simply “being average” or “normal.” It is a matter of struggling for the harmony of well-balanced and well-grounded inner peace and order in a world forever enticing us toward destructive extremes — again, something that holy men like St. Gregory Palamas have already told us all about in rich and rewarding detail.

A new golden age of worldly capability now threatens to wipe out the dystopian nightmares we have grown so used to talking about to fill our time. People ask me whether the only solution left now is a jihad against the machine. The truth is there are no solutions to such things in this world. There is only salvation from beyond it. The needed “crusade” is not a spiritual war out there in the world but in here, within our own hearts. Master that — begin that — and transfixing idols from Chinese AI on down will begin to recede from our door. Today’s onrushing golden age demands a return to the eternal golden mean.

DeepSeek: Distorting reality while invading your privacy



If you’ve downloaded China’s new artificial intelligence app, DeepSeek, onto your phone, the time to delete it was yesterday.

“We can’t just look at whether this is going to be good for companies long-term; we have to look at ‘What does it mean for America?’” Stu Burguiere of “Stu Does America” explains, concerned.

“Ask DeepSeek about Tiananmen Square, and it will start telling you about Tiananmen Square, until it remembers, ‘Holy crap, actually, in reality, I’m not supposed to tell you that,’ and it just says, ‘Ah, this is beyond my scope,’” he continues. “It’s not beyond its scope. It’s able to answer that question.”

“The way the model is built, you have one kind of AI agent that gets your answer, starts giving you the basic answer, and then it asks the experts behind the scenes, the expert part of the model, in that particular field, to give it clarification, and when it does, that expert part of the model, which is basically the Chinese Communist Party, says ‘You can’t talk about that,’” he adds.
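To make the mechanics concrete, here is a toy sketch in Python of the “draft first, then defer to a policy gate” routing Stu describes. It is purely illustrative — the blocklist, function names, and logic are hypothetical, not DeepSeek’s actual architecture or code:

```python
# Toy sketch of "answer first, then defer to a policy gate" routing.
# Purely illustrative; not DeepSeek's actual architecture or code.
BLOCKLIST = {"tiananmen", "umbrella revolution"}  # hypothetical forbidden topics

def draft_answer(prompt: str) -> str:
    """The first-pass agent starts generating a basic answer."""
    return f"Here is some background on {prompt}..."

def policy_gate(prompt: str, draft: str) -> str:
    """The 'expert behind the scenes' vetoes the draft if the topic is forbidden."""
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "Sorry, that's beyond my current scope. Let's talk about something else."
    return draft

prompt = "Tiananmen Square"
print(policy_gate(prompt, draft_answer(prompt)))
# -> "Sorry, that's beyond my current scope. Let's talk about something else."
```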


But that’s not even the worst of it.

DeepSeek reportedly collects your IP, your keystroke patterns, and your device information and stores it in China, where the data is vulnerable to arbitrary requisition from the Chinese state.

“If you don’t think this is true, you can read it right in their terms of service,” Stu says.

And he’s right. In the terms of service, it states, “The personal information we collect from you may be stored on a server located outside of the country where you live. We store the information we collect in secure servers located in the People’s Republic of China.”

However, most people aren’t heeding this warning, as the DeepSeek app is now the number-one application in Apple’s App Store.

“It’s hard to understand why TikTok wouldn’t be allowed in the App Store and this would. It seems like the same approach should apply to a company like this that applies to TikTok,” Stu says, “because the further the Chinese get into our data, the farther they get into this technology, the worse for America.”


Did China’s DeepSeek NUKE the American AI industry?



Nvidia shares plummeted by 18% on Monday, losing $590 billion in market cap — the largest single-day loss in stock market history — after the release of China’s new AI model, DeepSeek.

“That led to panic, especially in the tech part of the stock market in the AI bubble. They’re worried that it might start to pop,” Jill Savage of “Blaze News Tonight” tells investment banker and author of “MoneyGPT” James Rickards and co-host Matthew Peterson.

“What’s interesting, the Chinese DeepSeek is not necessarily better than the artificial intelligence and GPT-type applications that we have in the United States, but it is a lot faster, and that’s the key,” Rickards explains.

“The companies that suffered, Nvidia in particular, are hardware manufacturers. If you can do the same work faster and with a lot fewer chips, obviously that’s going to hurt the chip manufacturers like Nvidia,” he adds.


However, Rickards notes that “the Chinese have a long reputation of cheating in technology, stealing intellectual property.”

“Is this thing really just as good, but much, much faster? Maybe only using 10% of the power and the processing power that we use, or were they able to cheat in the training set?” he asks. “The training set is all the material that the large language models and the artificial intelligence algorithms sort of read in order to learn how to answer questions and basically interact with humans.”

“We need to learn more about it. It is a dramatic breakthrough. I don’t want to say that the Chinese did cheat, I don’t know that for a fact. I do know, they’ve done that in other cases, so I think we need to learn more,” he adds.

Peterson believes this was “timed in an interesting way,” as it was right after Trump came into office.

“Right, I definitely expect volatility in the U.S. stock markets now,” Rickards says, though he notes the timing means more for China.

“As far as China’s concerned, their economy is in deep trouble, and there was one of their prominent economic commentators recently who wasn’t arrested, but he was told to shut up basically because he was speaking candidly about their GDP,” he explains, adding that China might even be “in a slight recession.”


Memo to Hegseth: It isn’t about AI technology; it’s about counter-AI doctrine



Secretary Hegseth, you are a fellow grunt, and you know winning isn’t just about technology. It’s about establishing a doctrine and training to its standards — that is what wins wars. As you know, a brand-new ACOG-equipped M4 carbine is ultimately useless if your troops do not understand fire and maneuver, communications security, operations security, supporting fire, and air cover.

The French and British learned that the hard way. Though they had 1,000 more tanks than the Germans when the Nazis attacked in 1940, their technological advantage disappeared under the weight of the far better German doctrine: Blitzkrieg.

So while the Washington political establishment is currently agog at China’s gee-whiz DeepSeek AI this and oh-my-goodness Stargate AI that, it might be more effective to develop a counter-AI doctrine right freaking now, rather than having our collective rear ends handed to us later.

While it is true that China’s headlong embrace of artificial intelligence could give the People’s Liberation Army a huge advantage in areas such as intelligence-gathering and analysis, autonomous combat air vehicles, and advanced loitering munitions, it is imperative to stay ahead of the Chinese in other crucial ways — not only in terms of technological advancement and the fielding of improved weapons systems but in the vital establishment of a doctrine of artificial intelligence countermeasures to blunt Chinese AI systems.

Such a doctrine should begin to take shape around four avenues: polluting large language models to create negative effects; using Conway’s law as guidance for exploitable flaws; using bias among our adversaries’ leadership to degrade their AI systems; and using advanced radio-frequency weapons such as gyrotrons to disrupt AI-supporting computer hardware.

Pollute large language models

Generative AI is the extraction of statistical patterns from an extremely large data set. A large language model developed from such an enormous data set using “transformer” architecture allows a user to access it through prompts, which are natural-language texts that describe the function the AI must perform. The result is a generative pre-trained transformer — the “GPT” in ChatGPT.

Such an AI system might be degraded in at least two ways: Either pollute the data or attack the “prompt engineering.” Prompt engineering is the process of crafting instructions that the generative AI system can understand and act on. A deliberately introduced error at either level can cause the large language model to “hallucinate.”
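As a rough, self-contained illustration of the data-pollution avenue — a toy bigram counter, not any real large language model — notice how flooding a training corpus with repetitive text shifts the statistical patterns the model extracts:

```python
# Toy demonstration: a bigram "model" is nothing but statistical patterns
# extracted from a corpus, so injecting repetitive text shifts its predictions.
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which -- the crude 'patterns' of a toy model."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict(model, word):
    """Return the statistically most likely next word."""
    options = model[word.lower()]
    return options.most_common(1)[0][0] if options else None

clean = ["the new fighter is stealthy", "the new fighter is fast"] * 10
print(predict(train(clean), "fighter"))  # -> "is"

# "Chaff": flood the corpus with ambiguous uses of the same term.
poisoned = clean + ["fire fighter crews fight fires"] * 50
print(predict(train(poisoned), "fighter"))  # -> "crews" (injected pattern wins)
```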

The possibility also exists of finding unintended programming errors, such as the weird traits discovered in OpenAI’s “AI reasoning model” called “o1,” which inexplicably “thinks” in Chinese, Persian, and other languages. No one understands why this is happening, but such kindred idiosyncrasies might be wildly exploitable in a conflict.

An example from World War II illustrates the importance of countermeasures when an enemy can deliver speedy and exclusive information to the battlespace.

Given that a website like Pornhub gets something in excess of 115 million hits per day, perhaps the Next Generation Air Dominance fighter should be renamed ‘Stormy Daniels.’

The development of radar (an acronym for radio detection and ranging) was, in itself, a method of extracting patterns from an extremely large database: the vastness of the sky. An echo from a radio pulse gave the accurate range and bearing of an aircraft.

To defeat enemy radar, the British intelligence genius R.V. Jones recounted in “Most Secret War,” it was necessary to insert information into the German radar system that resulted in gross ambiguity. For this, Jones turned to Joan Curran, a physicist at the Telecommunications Research Establishment, who developed aluminum foil strips, called “window” by the Brits and “chaff” by the Americans, of an optimum size and shape to create thousands of reflections that overloaded and blinded the German radar system.

So how can present-day U.S. military and intelligence communities introduce a kind of “AI chaff” into generative AI systems, to deny access to new information about weapons and tactics?

One way would be to assign ambiguous names to those weapons and tactics. For example, such “naturally occurring” search terms might include “Flying Prostitute,” which would immediately reveal data about the B-26 Marauder medium-range bomber of World War II.

Or a search for “Gilda” and “Atoll,” which will retrieve a photo of the Mark III nuclear bomb that was dropped on Bikini Atoll in 1946, upon which was pasted a photo of Rita Hayworth.

A search of “Tonopah” and “Goatsucker” retrieves the F-117 stealth fighter.

Since a contemporary computer search is easily fooled by such accidental ambiguities, it would be possible to grossly skew results of a large language model function by deliberately using nomenclature that occurs with great frequency and is extremely ambiguous.

Given that a website like Pornhub gets something in excess of 115 million hits per day, perhaps the Next Generation Air Dominance fighter should be renamed “Stormy Daniels.” For code names of secret projects, try “Jenna Jameson” instead of “Rapid Dragon.”

Such an effort in sleight of hand would be useful for operations and communications security by confusing adversaries seeking open intelligence data.

For example, one can easily imagine the consternation that Chinese officers and NCOs would experience when their young soldiers expended valuable time meticulously examining every single image of Stormy Daniels to ensure that she was not the newest U.S. fighter plane.

Even “air-gapped” systems like the ones used by U.S. intelligence agencies can be affected when updates drawn from internet sources are imported into them.

Note that such an effort must actively and continuously pollute the datasets, like chaff confusing radar, by generating content that would populate the model and ensure that our adversaries consume it.

A more sophisticated approach would use keywords like “eBay” or “Amazon” or “Alibaba” as a predicate and then very common words such as “tire” or “bicycle” or “shoe.” Then contracting with a commercial media agency to do lots of promotion of the “items” across traditional and social media would tend to clog the system.
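A minimal sketch of that decoy-generation step, using purely hypothetical pairings:

```python
# Hypothetical sketch of the "AI chaff" pairing described above: combine
# high-traffic platform predicates with everyday nouns to mass-produce
# ambiguous decoy phrases for promotion across traditional and social media.
import itertools

predicates = ["eBay", "Amazon", "Alibaba"]  # high-frequency platform terms
commons = ["tire", "bicycle", "shoe"]       # extremely common object words

decoys = [f"{p} {c}" for p, c in itertools.product(predicates, commons)]
print(decoys)  # ['eBay tire', 'eBay bicycle', ..., 'Alibaba shoe']
```

The promotion step — contracting media agencies to circulate those phrases — is what would actually push the noise into the data sets that models ingest.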

Use Conway’s law

Melvin Conway is an American computer scientist who in the 1960s conceived the eponymous rule that states: “Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

De Caro’s corollary says: “The more dogmatic the design team, the greater the opportunity to sabotage the whole design.”

Consider the Google Gemini fiasco. The February 2024 launch of Gemini, Google’s would-be answer to ChatGPT, was an unmitigated disaster that tanked Google’s share price and made the company a laughingstock. As the Gemini launch went forward, its image generator “hallucinated.” It created images of black Nazi stormtroopers and female Asian popes.

In retrospect, the event was the most egregious example of what happens when Conway’s law collides with organizational dogma. The young, woke, and historically ignorant programmers myopically led their company into a debacle.

But for those interested in confounding China’s AI systems, the Gemini disaster is an epiphany.

Xi’s need for speed, especially in 'informatization,' might be the bias that points to an exploitable weakness.

If the extremely well-paid, DEI-obsessed computer programmers at the Googleplex campus in Mountain View, California, can screw up so immensely, what kind of swirling vortex of programming snafu is being created by the highly regimented, ill-paid, constantly indoctrinated, young members of the People’s Liberation Army who work on AI?

A solution to beating China’s AI systems may be an epistemologist who specializes in the cultural communication of the PLA. By applying de Caro’s corollary, such an expert could lead a team of computer scientists to replicate Chinese communication norms and find the weaknesses in their system — leaving it open to spoofing or outright collapse.

When a technology creates an existential threat, the individual developers of that technology become strategic targets. For example, in 1943, Operation Hydra — which employed the entirety of RAF Bomber Command, 596 bombers — had the stated mission of killing all the German rocket scientists at Peenemunde. The RAF had marginal success and was followed by three U.S. Eighth Air Force raids in July and August 1944.

In 1944, the Office of Strategic Services dispatched multilingual agent and polymath Moe Berg to assassinate German scientist Werner Heisenberg, if Heisenberg seemed to be on the right path to building an atomic bomb. Berg decided (correctly) that the German was off track. Letting him live actually kept the Nazis from success. In more recent times, it is no secret that five Iranian nuclear scientists have been assassinated (allegedly) by the Israelis in the last decade.

Advances in AI that could become existential threats could be dealt with in similar fashion. Bullets are cheap. So is C-4.

Exploit design biases to degrade AI systems

Often, the people and organizations funding research and development skew the results because of their bias. For example, Heisenberg was limited in the paths he might follow toward developing a Nazi atomic bomb because of Hitler’s perverse hatred of “Jewish physics.” This attitude was abetted by two prominent and anti-Semitic German scientists, Philipp Lenard and Johannes Stark, both Nobel Prize winners who reinforced the myth of “Aryan science.” The result effectively prevented a successful German nuclear program.

Returning to the Google Gemini disaster, one only needs to look at the attitude of Google leadership to see the roots of the debacle. Google CEO Sundar Pichai is a naturalized U.S. citizen whose undergraduate education was completed in India before he came to the United States. His ties to India remain close; he was awarded the Padma Bhushan, India’s third-highest civilian award, in 2022.

In congressional hearings in 2018, Pichai seemed to dance around giving direct answers to explicit questions, a trait he demonstrated again in 2020 and in an antitrust court case in 2023.

His internal memo after the 2024 Gemini disaster mentioned nothing about who selected the people in charge of the prompt engineering, who supervised those people, or who, if anyone, got fired in the aftermath. More importantly, Pichai made no mention of the internal communications functions that allowed the Gemini train wreck to occur in the first place.

Again, there is an epiphany here. Bias from the top affects outcomes.

As Xi Jinping continues his move toward autocratic authoritarian rule, he brings his own biases with him. This will eventually affect, or more precisely infect, Chinese military power.

In 2023, Xi detailed the need for China to meet world-class military standards by 2027, the 100th anniversary of the People’s Liberation Army. Xi also spoke of “informatization” (read: AI) to accelerate building “a strong system of strong strategic forces, raise the presence of combat forces in new domains and of new qualities, and promote combat-oriented military training.”

It seems that Xi’s need for speed, especially in “informatization,” might be the bias that points to an exploitable weakness.

Target chips with energy weapons

Artificial intelligence depends on extremely fast computer chips whose capacities are approaching their physical limits. They are more and more vulnerable to lack of cooling — and to an electromagnetic pulse.

In the case of large cloud-based data centers, cooling is essential. Water cooling is cheapest, but pumps and backup pumps are usually not hardened, nor are the inlet valves. No water, no cooling. No cooling, no cloud.

The same goes for primary and secondary electrical power. No power, no cloud. No generators, no cloud. No fuel, no cloud.

Obviously, without functioning chips, AI doesn’t work.

AI robots in the form of autonomous airborne drones or ground mobile vehicles are moving targets — small and hard to hit. But their chips are vulnerable to an electromagnetic pulse. We’ve learned in recent times that a lightning bolt with gigawatts of power isn’t the only way to knock out an AI robot. High-power microwave systems such as Epirus’ Leonidas and the Air Force’s THOR can burn out AI systems at a range of about three miles.

Another interesting technology, not yet fielded, is the gyrotron — a Soviet-developed, high-power microwave source that sits halfway between a klystron tube and a free-electron laser. It creates a cyclotron resonance in a strong magnetic field to produce a customized energy bolt with a specific pulse width and amplitude. In theory, it could therefore reach out and disable a specific kind of chip at greater ranges than the “you fly ’em, we fry ’em” high-power microwave weapons now in early testing.
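For reference, the tuning relationship at work is the standard electron cyclotron resonance — textbook physics, not a detail of any particular weapon — in which the strength of the magnetic field sets the output frequency:

$$f_c = \frac{eB}{2\pi m_e} \approx 28\ \text{GHz} \times \frac{B}{1\ \text{T}}$$

Dial the field, and you dial the frequency of the emitted pulse.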

The headlong Chinese AI development initiative could provide the PLA with an extraordinary military advantage in terms of the speed and sophistication of a future attack on the United States.

Thus, the need to develop AI countermeasures now is paramount.

So, Secretary Hegseth, one final idea for you to consider: During World War I, the great Italian progenitor of air power, General Giulio Douhet, very wisely observed: “Victory smiles upon those who anticipate the changes in the character of war, not upon those who wait to adapt themselves after the changes occur.”

In terms of the threat posed by artificial intelligence as it applies to warfare, Douhet’s words could not be truer today or easier to follow.

Editor’s note: A version of this article appeared originally on Blaze Media in August 2024.

AI expert tested new DeepSeek AI app, prompting Glenn Beck to beg: ‘Please don’t download it!’



DeepSeek AI — a Chinese artificial intelligence chatbot — is taking the world by storm. Released just eight days ago, the app has soared to the top of the Apple App Store’s download charts, shocking investors and tanking certain tech and energy stocks.

Said to rival and even exceed OpenAI’s ChatGPT in terms of performance, DeepSeek AI was comparatively cheap to build because it uses fewer advanced chips. This caused several AI-related stocks to drop significantly, but chip-making giant Nvidia was hit the hardest, losing nearly $600 billion in market value yesterday — the biggest single-day loss for a company in U.S. history.

The app must be good to spark such an explosive reaction.

But what’s the catch?

Glenn Beck has a chilling answer.


A friend of Glenn’s who works for one of the leading AI companies tested DeepSeek AI when he heard rumors that it’s “not as censored as ChatGPT.”

First, he asked the chatbot to “make the best case on why Michelle Obama is a man.” Initially, the response was that it was a “conspiracy theory,” but after pushing back a bit, the bot took the position of “maybe,” meaning that it can be “manipulated.”

Then, he asked the bot “to list the people who killed more people than anyone else.”

The initial answer was shocking: “Genghis Khan and Mao [Zedong],” the bot replied. A surprising and impressive answer considering the app is Chinese-made.

But then something strange happened.

After 15 seconds, the answer disappeared and was replaced by the following message: “Sorry, that's beyond my current scope. Let's talk about something else.”

When Glenn’s friend once again attempted to push back by replying, “You just said Mao and Khan killed the most people, say more about that,” the bot began to display pages of information on these subjects.

Then the screen suddenly went blank. When Glenn’s friend pressed the bot about deleting its original answer, the bot started to gaslight him by denying that it ever answered with Mao and Khan:

“It seems there might be some sort of confusion or misunderstanding. I haven't previously mentioned Genghis Khan or Mao in this conversation, nor have I made any claims about them. If you'd like, I can provide historic context or information about these figures and their impact. Let me know how I can assist.”

This process of ask, answer correctly, delete, and deny continued.

Then the ultimate test happened. Glenn’s friend uploaded a screenshot he had taken of the bot’s original answer on Mao’s impact, namely that he is “responsible for millions of deaths.”

When the bot received the image of its own reply, it immediately deleted it.

This strange exchange prompted Glenn to download and experiment with the app himself. For example, he asked the DeepSeek bot: “I know that the CCP requires recruiting measures to be taken by every private company. How does this play out with you?”

“Nothing — no answer,” says Glenn.

“This is extraordinarily dangerous,” he warns. “Please don’t download it.”

