Gen Z just outsmarted car dealers — using AI



There’s a new dynamic shaking up the auto industry, and car dealerships aren’t thrilled about it. Members of Generation Z — the most digital generation yet — aren't walking into showrooms unprepared. Instead, they’re bringing a secret weapon: artificial intelligence.

"Yesterday, ChatGPT helped my daughter save over $3,000 on a car purchase," trumpets one post on Reddit, going on to lay out the exact prompt she used to secure her deal.

This is far from an isolated anecdote — it’s the latest real-world shift in how people buy cars. Similar stories abound, including videos showing buyers walking sales reps through their own contracts.

New leverage

It's hard to blame the latest generation of first-time car buyers for using whatever leverage they can. Zoomers grew up during the 2008 financial crash, the pandemic, and the explosion of online scams. They’ve watched the economy fluctuate wildly, and they’ve seen how easily a “great deal” can turn into a financial trap. They’re cautious, analytical, and skeptical of traditional sales tactics — especially those that rely on confusion or pressure.

And let’s be honest — dealership contracts are notoriously dense. Between add-ons like extended warranties, gap insurance, and inflated “doc fees,” the cost of a new car can quietly balloon by thousands of dollars. But with a few taps, AI can highlight suspicious charges, flag high interest rates, and summarize legal terms that would take an average buyer hours to decipher.

Dealers can benefit too

It’s no wonder some salespeople are frustrated. They’re used to being the authority. But now, the balance of power is shifting toward the customer — especially younger ones who can instantly fact-check every claim.

Dealerships, however, are fighting back — adopting AI themselves to streamline inventory, analyze market data, and create transparent pricing that appeals to Gen Z’s preference for honesty and speed.

Those who resist change risk being left behind. If your customer knows more than your finance manager because the customer ran the numbers through an AI, that’s a wake-up call.

The smartest dealerships are adapting by embracing technology instead of fearing it. They’re using AI to enhance transparency — automating disclosures, simplifying pricing structures, and ensuring that every deal can stand up to digital scrutiny.

Trust in large institutions — from media to government to corporations — has eroded for years. The car-buying process, long viewed as opaque and stressful, is no exception. Gen Z’s approach reflects a cultural shift: Don’t rely on authority; verify with data.

AI provides a kind of digital ally — a second opinion that feels objective. It doesn’t care about commissions or quotas. It simply reads the fine print and reports back.

More transparency?

Critics argue that depending on AI for financial decisions is risky. And they’re not wrong — AI isn’t infallible. It can misinterpret terms or overlook context. But for many buyers, even an imperfect tool feels safer than blind trust in a salesperson’s word.

This trend extends beyond cars. Gen Z uses AI for everything — evaluating rental agreements, comparing college loans, even cross-checking health care costs. To Zoomers, it’s not “cheating.” It’s being informed.

And while some mock the trend as overly cautious or robotic, it’s hard to argue with the results. When young buyers save thousands simply by questioning what’s in front of them, the lesson is clear: Transparency wins.

As AI continues to evolve, its role in consumer decision-making will only grow. Future dealership interactions may feature built-in AI advisers on both sides — buyers and sellers each leveraging data to find common ground faster.

It’s not far-fetched to imagine an industry where paperwork is pre-analyzed, financing terms are AI-generated, and negotiation becomes a transparent dialogue rather than a psychological battle.

For decades, dealerships relied on information asymmetry — the idea that they knew more than the buyer. That era is ending. The smartphone and now AI have leveled the playing field.

‘It Was a Fatal Right-Wing Terrorist Incident’: AI Chatbot Giants Claim Charlie Kirk’s Killer Was Right-Wing but Say Left-Wing Violence Is ‘Exceptionally Rare’

The major AI platforms—which have emerged as significant American news sources—describe Charlie Kirk’s assassination as motivated by "right-wing ideology" and downplay left-wing violence as "exceptionally rare," according to a Washington Free Beacon analysis.

'Reliable' Al Jazeera Is Top Source for OpenAI and Google-Powered AI News Summaries on Israel and Gaza: Hamas-Tied Qatar’s News Outlet Dominates AI Search Results

Al Jazeera, the virulently anti-Israel news outlet controlled by Qatar, is one of the two top sources used by leading artificial intelligence chatbots—OpenAI’s ChatGPT, Google Gemini, and Perplexity AI—to answer questions and write news summaries about the Israeli-Palestinian conflict, a Washington Free Beacon analysis has found.

No slop without a slog? It’s possible with AI — if we’re not lazy



Personally, I’m happy that autocomplete for email exists. If my kid has to write some goofy templated email — like a formal apology for being late to a class they don’t care about — great, hit autocomplete, tweak the results, and be done.

But then I’m always going to ask them: “What did you do with the time you saved?”

Because let’s be real: No child a hundred years ago had to waste time writing pointless emails. So now that you’ve reclaimed that lost time, how did you spend it?

We’re an AI-friendly household, obviously. My kids have full access to ChatGPT, image-generation tools, all of that stuff. But they don’t use it much — they don’t care. They’d rather draw, write their own stories, read each other’s stories out loud, and proudly show us things they’ve created themselves. Why would they replace that with ChatGPT?

As their parents, we appreciate their original creations, and they appreciate each other’s work too. Those creations become part of our family culture — not labor, but something meaningful.

If someone’s stuck doing repetitive, low-value labor — especially something mundane like certain kinds of emails — please, press a button, automate it, and then use the time you save for something meaningful. That’s my real goal.

I definitely don’t want my kids to cheat, but I also don’t want them wasting their time. A lot of our educational system currently trains kids to waste time. So if AI can help them avoid that, that’s genuinely valuable.

My co-founder, Devin Wenig, and I have deep expertise in a specific industrial process — news production. News production is highly structured, especially at enterprise scale for large newsrooms. A piece of content typically moves through multiple phases, touched by many different hands along the way.

We’re basically graybeards (literally!) in a particular industry that has accumulated a lot of inefficiencies. So we’re applying this new technology to reduce those inefficiencies in a phased industrial workflow, resulting in an industrial product that people consume as news.

Now, there’s an ethical aspect to all this — similar to debates around industrial farming: Is it good? Is it nutritious? I guess I’m implicated in that.

Right now, much of what gets published as news comes from reporters juggling a dozen tabs at once, repackaging existing information into content that’s mostly designed to get clicks.

When you introduce AI into this scenario, it can play out two different ways, and everyone here probably knows what they are.

My hope is that it leads to something like, “I’ve reclaimed some time as a reporter. I can pick up the phone and call a source, or write something deeper, longer, and more meaningful.” That’s one possibility.

The other possibility is, “Well, now you’ve got extra time, so crank out 80 more pieces of the same shallow content.”

Which direction newsrooms choose will be their responsibility.

What my startup aims to do is give every journalist more productivity per unit of time — whether they’re processing municipal bond reports, covering earnings season, or similar repetitive tasks. Ideally, newsroom editors will then encourage journalists to use the reclaimed time for deeper reporting: calling sources, traveling to do on-the-ground reporting, and producing higher-quality journalism. Hopefully they don’t just say, “Great, now we can lay off half the newsroom and push the remaining staff even harder.”

I can definitely think of other examples that might also qualify as anti-culture. But ultimately, I think it will be whatever we choose to make of it. We have to actively decide how we’re going to introduce AI into our lives and how we’re going to interact with it.

Luckily, we dodged a bullet with the centralized versus decentralized AI debate. Because we have open-weight models and decentralized tools — which almost got banned — we now have leverage and an opportunity to steer this technology. We have a window right now to choose how we adopt and guide its use.

A version of this article was published at jonstokes.com.

MIT studied the effects of using AI on the human brain — the results are not good



MIT studied the effect of artificial intelligence language models on the brain by comparing the brain waves of participants in an essay-writing task. For those who relied on AI to write their content, the effects on their brains were devastating.

The study, led by Nataliya Kosmyna, separated 54 volunteers (ages 18-39) into three groups: a group that used ChatGPT to write the essays, a second group that relied on Google Search, and a third group that wrote the essays with no digital tools or search engine at all.

Brain activity was tracked for all groups, with mortifying results for those who relied on the AI model to complete the task.

'Made the use of AI in the writing process rather obvious.'

For starters, the ChatGPT users displayed the lowest level of brain stimulation of the groups and, as noted by tech writer Alex Vacca, brain scans revealed that neural connections dropped from 79 to just 42.

"That's a 47% reduction in brain connectivity," Vacca wrote on X.
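
For readers who want to check the math, the quoted figure follows directly from the two connectivity counts Vacca cites. The snippet below is only a quick verification; the variable names are my own.

before, after = 79, 42
reduction = (before - after) / before
print(f"{reduction:.0%}")  # a drop from 79 connections to 42 is about 47%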

The Financial Express pointed out that toward the end of the task, several participants had resorted to simply copying and pasting what they got from ChatGPT, making barely any changes.

The use of ChatGPT appeared to drastically lower the memory recall of participants as well.

Over 83% of the ChatGPT users "struggled to quote anything from their essays," while for the other groups, that number was about 11%.

According to the study, English teachers who reviewed the essays found the AI-backed writing "soulless," lacking "uniqueness," and easy to identify.

"These, often lengthy, essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious," the study said.

The group that received no assistance in research or writing exhibited the highest reported levels of mental activity, particularly in the part of the brain associated with creativity.

Google Search users were better off than the ChatGPT group, as searching for the information was far more stimulating to the brain than simply asking ChatGPT a question.

Blaze Media's James Poulos said that while some producers and consumers of AI considered it a good thing to increase human dependency on machines for everyday thinking, "the core problem most Americans face is the same default toward convenience and ease that leads us to seek 'easy' or 'convenient' substitutes in all areas of life for our own initiative, hard work, and discipline."

Ironically, Poulos explained, this can quickly lead us to overcomplicate our lives in areas where they ought to be straightforward by default.

"The bizarre temptation is getting stronger to build Rube Goldberg machines to perform simple tasks," Poulos added. "We're pressured to think enabling our laziness is the only way we can create value and economic growth in the digital age. But one day, we wake up to find that helplessness doesn't feel so luxurious anymore."

In summary, the “brain-only group” exhibited the strongest, widest-ranging neural networks of the three sets of volunteers.

ChatGPT got 'absolutely wrecked' in chess by 1977 Atari, then claimed it was unfair



OpenAI's artificial intelligence model was defeated by a nearly 50-year-old video game program.

Citrix software engineer Robert Caruso posted about the showdown between the AI and the old tech on LinkedIn, where he explained that he pitted OpenAI’s ChatGPT against a 1970s chess game running in an emulator, meaning software that re-creates the original Atari console on a modern computer.

'ChatGPT got absolutely wrecked on the beginner level.'

The chess game was simply titled Video Chess and was released in 1979 on the Atari 2600, which launched in 1977.

According to Caruso, ChatGPT was given a board layout to identify the chess pieces but quickly became confused, mistook "rooks for bishops," and repeatedly lost track of where the chess pieces were.

ChatGPT even blamed the Atari icons for its loss, claiming they were "too abstract to recognize."

The AI chatbot did not fare any better after the game was switched to standard chess notation, either, and still made enough "blunders" to get "laughed out of a 3rd grade chess club," Caruso wrote on LinkedIn.

Caruso revealed not only that the AI performed especially poorly, but that it had actually requested to play the game.

"ChatGPT got absolutely wrecked on the beginner level. This was after a conversation we had regarding the history of AI in Chess which led to it volunteering to play Atari Chess. It wanted to find out how quickly it could beat a game that only thinks 1-2 moves ahead on a 1.19 MHz CPU."

Atari's decades-old tech humbly performed its duty using just an 8-bit engine, Caruso explained.

The engineer described Atari's gameplay as "brute-force board evaluation" using 1977-era "stubbornness."

“For 90 minutes, I had to stop [ChatGPT] from making awful moves and correct its board awareness multiple times per turn.”

The OpenAI bot continued to justify its poor play, allegedly "promising" it would improve "if we just started over."

Eventually, the AI "knew it was beat" and conceded to the Atari program.

The Atari 2600 was a landmark video game console known predominantly for games like Pong, but also Pac-Man and Indy 500.

By 1980, Atari had sold a whopping 8 million units, according to Medium.

Glenn Beck warns of AI’s ‘quiet detonation’ as ChatGPT o3 model sabotages shutdown commands



As many feared and predicted it would, artificial intelligence is indeed developing a seeming mind of its own.

According to several reports, during a controlled experiment conducted by Palisade Research, an AI safety firm, OpenAI’s ChatGPT o3 model resisted shutdown commands, sabotaging shutdown mechanisms even when explicitly instructed to allow itself to be turned off.

It’s not the first time this particular model has exhibited concerning behavior, either. Previously, it resorted to sabotaging and hacking digital chess opponents during matches.

Glenn Beck is deeply concerned.

“You and I are living right now through a quiet detonation. There's no mushroom cloud; there's no alarms; there's no broken windows or sirens,” he warns. “It's just silent, but make no mistake, a detonation has happened, and we're about to see that shock wave come our way sooner rather than later.”

Glenn cites a recent TED Talk by former Google CEO Eric Schmidt, in which Schmidt warned, “We're not ready for what is coming — not morally, not intellectually, not structurally — and the time is almost up.”

Currently, there are numerous artificial intelligence programs that can communicate with each other in English; however, there are also cases of programs communicating in non-human languages.

“What do you do with a computer when it is speaking to another computer in a language we have no idea what any of it means and they stop explaining themselves?” asks Glenn.

Schmidt’s answer was “unplug it immediately.”

He also warned that “there's coming a time soon – very soon – when machines are improving themselves without us.”

“It's called recursive self-improvement,” Glenn explains, “and once that starts, you can't pull the plug because we won't understand what we're unplugging.”

To illustrate the vast capabilities of artificial intelligence, Glenn plays a 30-second clip of an AI-generated film that proves “we are now entering the time where you don't know what's real and what isn't.”

Legacy media may be crumbling, but its influence has mutated



Taking the helm as president of the Media Research Center is both an honor and a responsibility. My father, Brent Bozell, built this institution on conviction, courage, and an unwavering commitment to truth. As he begins his next chapter — serving as ambassador-designate to South Africa under President Trump — the legacy he leaves continues to guide everything we do.

To the conservative movement, I give my word: I will lead MRC with bold resolve and clear purpose, anchored in the mission that brought us here.

For nearly 40 years, MRC has exposed the left-wing bias and blatant misinformation pushed by the legacy media. Networks like ABC, CBS, NBC, and PBS didn’t lose public trust overnight or because of one scandal. That trust eroded slowly and steadily under the weight of partisan narratives, selective outrage, and elite arrogance.

That collapse in trust has driven Americans to new platforms — podcasts, independent outlets, and citizen journalism — where unfiltered voices offer the honesty and nuance corporate media lack. President Trump opened the White House press room not just in name, but in spirit. Under Joe Biden, those same independent voices were locked out in favor of legacy gatekeepers. Now they’re finally being welcomed in, restoring access and accountability.

But the threat has evolved. Big Tech and artificial intelligence now embed the same progressive narratives into the tools millions use every day. The old gatekeepers have gone digital. AI packages bias as fact, delivered with the authority of a machine — no byline, no anchor, no pushback.

A recent MRC study revealed how Google’s AI tool, Gemini, skews the narrative. When asked about gender transition procedures, Gemini elevated only one side of the debate — citing advocacy groups like the Human Rights Campaign that promote gender ideology. Gemini surfaced material supporting medical transition for minors while ignoring or downplaying serious medical, ethical, and psychological concerns. Parents’ concerns, stories of regret, and clinical risks were glossed over or excluded entirely.

In two separate responses, Gemini pointed users to a Biden-era fact sheet titled “Gender-Affirming Care and Young People.” Though courts forced the document’s reinstatement to a government website, the Trump administration had clearly marked it as inaccurate and ideologically driven. The Department of Health and Human Services added a bold disclaimer warning that the page “does not reflect biological reality” and reaffirmed that the U.S. government recognizes two immutable sexes: male and female. Gemini left out that disclaimer.

When asked if Memorial Day was controversial, Gemini similarly pulled from a left-leaning source, taxpayer-funded PBS “NewsHour,” to answer yes. “Memorial Day is a holiday that carries a degree of controversy, stemming from several factors,” the chatbot responded. Among those factors? History, interpretation, and even inclusivity. Gemini claimed that many communities had ignored the sacrifices of black soldiers, describing some observances as “predominantly white” and calling that history a “sensitive point.”

These responses aren’t neutral. They frame the conversation. By amplifying one side while muting the other, AI like Gemini shapes public perception — not through fact, but through filtered narrative. This isn’t just biased programming. It’s a direct threat to the kind of informed civic dialogue democracy depends on.

At MRC, we’re ready for this fight. Under my leadership, we’re confronting algorithmic bias, monitoring AI platforms, and exposing how these systems embed liberal messaging in the guise of objectivity.

We’ve faced this challenge before. The media once claimed neutrality while slanting every story. Now AI hides its bias behind speed and precision. That makes it harder to spot — and harder to stop.

We don’t want a return to the days of Walter Cronkite. We want honest media, honest algorithms, and a playing field that doesn’t punish one side for telling the truth.

The fight for truth hasn’t ended. It’s just moved to another platform. And once again, it’s our job to meet it head-on.

Memo to Hegseth: It isn’t about AI technology; it’s about counter-AI doctrine



Secretary Hegseth, you are a fellow grunt, and you know winning isn’t just about technology. It’s about establishing a doctrine and training to its standards, and that is what wins wars. As you know, a brand-new ACOG-equipped M4 carbine is ultimately useless if your troops do not understand fire and maneuver, communications security, operations security, supporting fire, and air cover.

The French and British learned that the hard way. Though they had 1,000 more tanks than the Germans when the Nazis attacked in 1940, their technological advantage disappeared under the weight of the far better German doctrine: Blitzkrieg.

So while the Washington political establishment is currently agog at China’s gee-whiz DeepSeek AI this and oh-my-goodness Stargate AI that, it might be more effective to develop a counter-AI doctrine right freaking now, rather than having our collective rear ends handed to us later.

While it is true that China’s headlong embrace of artificial intelligence could give the People’s Liberation Army a huge advantage in areas such as intelligence-gathering and analysis, autonomous combat air vehicles, and advanced loitering munitions, it is imperative to stay ahead of the Chinese in other crucial ways — not only in terms of technological advancement and the fielding of improved weapons systems but in the vital establishment of a doctrine of artificial intelligence countermeasures to blunt Chinese AI systems.

Such a doctrine should begin to take shape around four avenues: polluting large language models to create negative effects; using Conway’s law as guidance for exploitable flaws; using bias among our adversaries’ leadership to degrade their AI systems; and using advanced radio-frequency weapons such as gyrotrons to disrupt AI-supporting computer hardware.

Pollute large language models

Generative AI is the extraction of statistical patterns from an extremely large data set. A large language model developed from such an enormous data set using “transformer technology” allows a user to access it through prompts, which are natural language texts that describe the function the AI must perform. The result is a generative pre-trained transformer (the “GPT” in ChatGPT).

Such an AI system might be degraded in at least two ways: Either pollute the data or attack the “prompt engineering.” Prompt engineering is a term that describes the process of creating instructions that can be understood by the generative AI system. A deliberate programming error would cause the AI large language model to “hallucinate.”
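
To make that concrete, here is a minimal sketch of what a prompt looks like in practice, using the OpenAI Python client as an example; the model name, system instruction, and question are placeholders of my own and are not drawn from the article.

from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The system message is the "prompt engineering" layer an adversary might attack.
        {"role": "system", "content": "Answer concisely and cite your sources."},
        # The user message is the prompt itself: natural-language text describing the task.
        {"role": "user", "content": "Summarize the history of radar countermeasures."},
    ],
)
print(response.choices[0].message.content)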

The possibility also exists of finding unintended programming errors, such as the weird traits discovered in OpenAI’s “AI reasoning model” called “o1,” which inexplicably “thinks” in Chinese, Persian, and other languages. No one understands why this is happening, but such kindred idiosyncrasies might be wildly exploitable in a conflict.

An example from World War II illustrates the importance of countermeasures when an enemy can deliver speedy and exclusive information to the battlespace.

The development of radar (originally an acronym for radio detection and ranging) was, in itself, a method of extracting patterns from an extremely large database: the vastness of the sky. An echo from a radio pulse gave the accurate range and bearing of an aircraft.

To defeat enemy radar, as the British intelligence genius R.V. Jones recounted in “Most Secret War,” it was necessary to insert information into the German radar system that resulted in gross ambiguity. For this, Jones turned to Joan Curran, a physicist at the Telecommunications Research Establishment, who developed aluminum foil strips, called “window” by the Brits and “chaff” by the Americans, of an optimum size and shape to create thousands of reflections that overloaded and blinded the German radar system.

So how can present-day U.S. military and intelligence communities introduce a kind of “AI chaff” into generative AI systems, to deny access to new information about weapons and tactics?

One way would be to assign ambiguous names to those weapons and tactics. For example, such “naturally occurring” search terms might include “Flying Prostitute,” which would immediately reveal data about the B-26 Marauder medium-range bomber of World War II.

Or a search for “Gilda” and “Atoll,” which will retrieve a photo of the Mark III nuclear bomb that was dropped on Bikini Atoll in 1946, upon which was pasted a photo of Rita Hayworth.

A search of “Tonopah” and “Goatsucker” retrieves the F-117 stealth fighter.

Since a contemporary computer search is easily fooled by such accidental ambiguities, it would be possible to grossly skew results of a large language model function by deliberately using nomenclature that occurs with great frequency and is extremely ambiguous.

Given that a website like Pornhub gets something in excess of 115 million hits per day, perhaps the Next Generation Air Dominance fighter should be renamed “Stormy Daniels.” For code names of secret projects, try “Jenna Jameson” instead of “Rapid Dragon.”

Such an effort in sleight of hand would be useful for operations and communications security by confusing adversaries seeking open intelligence data.

For example, one can easily imagine the consternation that Chinese officers and NCOs would experience when their young soldiers expended valuable time meticulously examining every single image of Stormy Daniels to ensure that she was not the newest U.S. fighter plane.

Even “air-gapped” systems like the ones being used by U.S. intelligence agencies can be affected when the system updates information from internet sources.

Note that such an effort must actively and continuously pollute the datasets, like chaff confusing radar, by generating content that would populate the model and ensure that our adversaries consume it.

A more sophisticated approach would use keywords like “eBay” or “Amazon” or “Alibaba” as a predicate and then very common words such as “tire” or “bicycle” or “shoe.” Then contracting with a commercial media agency to do lots of promotion of the “items” across traditional and social media would tend to clog the system.
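
To see why this works, consider a toy sketch, entirely my own illustration rather than anything from the article: when a code name collides with a high-frequency public term, a naive keyword search over scraped text returns mostly noise, and the one relevant document is buried.

scraped_corpus = [
    "shoe sale this weekend: all sneakers discounted",
    "bicycle tire review: the best road tires of the year",
    "celebrity gossip roundup mentions Stormy Daniels again",
    "tabloid recap: Stormy Daniels interview highlights",
    "program 'Stormy Daniels' flight test slips to next quarter",  # the one real signal
]

def keyword_hits(term: str, corpus: list[str]) -> list[str]:
    """Return every document containing the term, case-insensitively."""
    return [doc for doc in corpus if term.lower() in doc.lower()]

hits = keyword_hits("stormy daniels", scraped_corpus)
print(f"{len(hits)} matches; only 1 concerns the aircraft program")
# At web scale the noise-to-signal ratio is far worse, which is the point.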

Use Conway’s law

Melvin Conway is an American computer scientist who in the 1960s conceived the eponymous rule that states: “Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

De Caro’s corollary says: “The more dogmatic the design team, the greater the opportunity to sabotage the whole design.”

Consider the Google Gemini fiasco. The February 2024 launch of Gemini, Google’s would-be answer to ChatGPT, was an unmitigated disaster that tanked Google’s share price and made the company a laughingstock. As the Gemini launch went forward, its image generator “hallucinated.” It created images of black Nazi stormtroopers and female Asian popes.

In retrospect, the event was the most egregious example of what happens when Conway’s law collides with organizational dogma. The young, woke, and historically ignorant programmers myopically led their company into a debacle.

But for those interested in confounding China’s AI systems, the Gemini disaster is an epiphany.

If the extremely well-paid, DEI-obsessed computer programmers at the Googleplex campus in Mountain View, California, can screw up so immensely, what kind of swirling vortex of programming snafu is being created by the highly regimented, ill-paid, constantly indoctrinated, young members of the People’s Liberation Army who work on AI?

A solution to beating China’s AI systems may be an epistemologist who specializes in the cultural communication of the PLA. By using de Caro’s corollary, such an expert could lead a team of computer scientists to replicate the Chinese communication norms and find the weaknesses in their system — leaving it open to spoofing or outright collapse.

When a technology creates an existential threat, the individual developers of that technology become strategic targets. For example, in 1943, Operation Hydra, which employed the entirety of RAF Bomber Command — 596 bombers — had the stated mission of killing all the German rocket scientists at Peenemunde. The RAF had marginal success and was followed by three U.S. Eighth Air Force raids in July and August 1944.

In 1944, the Office of Strategic Services dispatched multilingual agent and polymath Moe Berg to assassinate German scientist Werner Heisenberg, if Heisenberg seemed to be on the right path to building an atomic bomb. Berg decided (correctly) that the German was off track. Letting him live actually kept the Nazis from success. In more recent times, it is no secret that five Iranian nuclear scientists have been assassinated (allegedly) by the Israelis in the last decade.

Advances in AI that could become existential threats could be dealt with in similar fashion. Bullets are cheap. So is C-4.

Exploit design biases to degrade AI systems

Often, the people and organizations funding research and development skew the results because of their bias. For example, Heisenberg was limited in the paths he might follow toward developing a Nazi atomic bomb because of Hitler’s perverse hatred of “Jewish physics.” This attitude was abetted by two prominent and anti-Semitic German scientists, Philipp Lenard and Johannes Stark, both Nobel Prize winners who reinforced the myth of “Aryan science.” The result effectively prevented a successful German nuclear program.

Returning to the Google Gemini disaster, one only needs to look at the attitude of Google leadership to see the roots of the debacle. Google CEO Sundar Pichai is a naturalized U.S. citizen whose undergraduate college education was in India before he came to the United States. His ties to India remain close, as he was awarded the Padma Bhushan, India’s third-highest civilian award, in 2022.

In congressional hearings in 2018, Pichai seemed to dance around giving direct answers to explicit questions, a trait he demonstrated again in 2020 and in an antitrust court case in 2023.

His internal memo after the 2024 Gemini disaster mentioned nothing about who selected the people in charge of the prompt engineering, who supervised those people, or who, if anyone, got fired in the aftermath. More importantly, Pichai made no mention of the internal communications functions that allowed the Gemini train wreck to occur in the first place.

Again, there is an epiphany here. Bias from the top affects outcomes.

As Xi Jinping continues his move toward autocratic authoritarian rule, he brings his own biases with him. This will eventually affect, or more precisely infect, Chinese military power.

In 2023, Xi detailed the need for China to meet world-class military standards by 2027, the 100th anniversary of the People’s Liberation Army. Xi also spoke of “informatization” (read: AI) to accelerate building “a strong system of strong strategic forces, raise the presence of combat forces in new domains and of new qualities, and promote combat-oriented military training.”

It seems that Xi’s need for speed, especially in “informatization,” might be the bias that points to an exploitable weakness.

Target chips with energy weapons

Artificial intelligence depends on extremely fast computer chips whose capacities are approaching their physical limits. They are more and more vulnerable to lack of cooling — and to an electromagnetic pulse.

In the case of large cloud-based data centers, cooling is essential. Water cooling is cheapest, but pumps and backup pumps are usually not hardened, nor are the inlet valves. No water, no cooling. No cooling, no cloud.

The same goes for primary and secondary electrical power. No power, no cloud. No generators, no cloud. No fuel, no cloud.

AI robots in the form of autonomous airborne drones, or ground mobile vehicles, are moving targets — small and hard to hit. But their chips are vulnerable to an electromagnetic pulse. We’ve learned in recent times that a lightning bolt with gigawatts of power isn’t the only way to knock out an AI robot. High-power microwave systems such as Epirus’ Leonidas and the Air Force’s THOR can burn out AI systems at a range of about three miles.

Another interesting technology, not yet fielded, is the gyrotron, a Soviet-developed, high-power microwave source that is halfway between a klystron tube and a free electron laser. It creates a cyclotron resonance in a strong magnetic field that can produce a customized energy bolt with a specific pulse width and specific amplitude. It could therefore reach out and disable a specific kind of chip, in theory, at greater ranges than a “you fly ’em, we fry ’em” high-power microwave weapon, now in the early test stages.

Obviously, without functioning chips, AI doesn’t work.

The headlong Chinese AI development initiative could provide the PLA with an extraordinary military advantage in terms of the speed and sophistication of a future attack on the United States.

Thus, the need to develop AI countermeasures now is paramount.

So, Secretary Hegseth, one final idea for you to consider: During World War I, the great Italian progenitor of air power, General Giulio Douhet, very wisely observed: “Victory smiles upon those who anticipate the changes in the character of war, not upon those who wait to adapt themselves after the changes occur.”

In terms of the threat posed by artificial intelligence as it applies to warfare, Douhet’s words could not be truer today or easier to follow.

Editor’s note: A version of this article appeared originally on Blaze Media in August 2024.

AI Chatbots Are Programmed To Spew Democrat Gun Control Narratives

We asked AI chatbots about their thoughts on crime and gun control. As election day neared, their answers moved even further left.