The One Big Beautiful Bill Act hides a big, ugly AI betrayal



Picture your local leaders — the ones you elect to defend your rights and reflect your values — stripped of the power to regulate the most powerful technology ever invented. Not in some dystopian future. In Congress. Right now.

Buried in the House version of Donald Trump’s One Big Beautiful Bill Act is a provision that would block every state in the country from passing any AI regulations for the next 10 years.


An earlier Senate draft took a different route, using federal funding as a weapon: States that tried to pass their own AI laws would lose access to key resources. But the version the Senate passed on July 1 dropped that language entirely.

Now House and Senate Republicans face a choice — negotiate a compromise or let the "big, beautiful bill" die.

The Trump administration has supported efforts to bar states from imposing their own AI regulations. But with the One Big Beautiful Bill Act already facing a rocky path through Congress, President Trump is likely to sign it regardless of how lawmakers resolve the question.

Supporters of a federal ban on state-level AI laws have made thoughtful and at times persuasive arguments. But handing Washington that much control would be a serious error.

A ban would concentrate power in the hands of unelected federal bureaucrats and weaken the constitutional framework that protects individual liberty. It would ignore the clear limits the Constitution places on federal authority.

Federalism isn’t a suggestion

The 10th Amendment reserves all powers not explicitly granted to the federal government to the states or the people. That includes the power to regulate emerging technologies, such as artificial intelligence.

For more than 200 years, federalism has safeguarded American freedom by allowing states to address the specific needs and values of their citizens. It lets states experiment — whether that means California mandating electric vehicles or Texas fostering energy freedom.

If states can regulate oil rigs and wind farms, surely they can regulate server farms and machine learning models.

A federal case for caution

David Sacks — tech entrepreneur and now the White House’s AI and crypto czar — has made a thoughtful case on X for a centralized federal approach to AI regulation. He warns that letting 50 states write their own rules could create a chaotic patchwork, stifle innovation, and weaken America’s position in the global AI race.


Those concerns aren’t without merit. Sacks underscores the speed and scale of AI development and the need for a strategic, national response.

But the answer isn’t to strip states of their constitutional authority.

America’s founders built a system designed to resist such centralization. They understood that when power moves farther from the people, government becomes less accountable. The American answer to complexity isn’t uniformity imposed from above — it’s responsive governance closest to the people.

Besides, complexity isn’t new. States already handle it without descending into chaos. The Uniform Commercial Code offers a clear example: It governs business law across all 50 states with remarkable consistency — without federal coercion.

States also have interstate compacts (official agreements between states) on several issues, including driver’s licenses and emergency aid.

AI regulation can follow a similar path. Uniformity doesn’t require surrendering state sovereignty.

State regulation is necessary

The threats posed by artificial intelligence aren’t theoretical. Mass surveillance, cultural manipulation, and weaponized censorship are already at the doorstep.

In the wrong hands, AI becomes a tool of digital tyranny. And if federal leaders won’t act — or worse, block oversight entirely — then states have a duty to defend liberty while they still can.


From banning AI systems that impersonate government officials to regulating the collection and use of personal data, local governments are often better positioned to protect their communities. They’re closer to the people. They hear the concerns firsthand.

These decisions shouldn’t be handed over to unelected federal agencies, no matter how well intentioned the bureaucracy claims to be.

The real danger: Doing nothing

This is not a question of partisanship. It’s a question of sovereignty. The idea that Washington, D.C., can or should prevent states from acting to protect their citizens from a rapidly advancing and poorly understood technology is as unconstitutional as it is unwise.

If Republicans in Congress are serious about defending liberty, they should reject any proposal that strips states of their constitutional right to govern themselves. Let California be California. Let Texas be Texas. That’s how America was designed to work.

Artificial intelligence may change the world, but it should never be allowed to change who we are as a people. We are free citizens in a self-governing republic, not subjects of a central authority.

It’s time for states to reclaim their rightful role and for Congress to remember what the Constitution actually says.

Study: Using ChatGPT To Write Essays May Increase ‘Cognitive Debt’

A recent study out of the MIT Media Lab shows that students using ChatGPT and other AI tools to write essays may be acquiring “cognitive debt” at a higher rate than students using search engines or only their brains. According to the study, “Cognitive debt defers mental effort in the short term but results in long-term […]

If AI isn’t built for freedom, it will be programmed for control



Once the domain of science fiction, artificial intelligence now shapes the foundations of modern life. It governs how we access information, interact with institutions, and connect with one another. No longer just a tool, AI is becoming infrastructure — an embedded force with the potential to either safeguard our liberty or quietly dismantle it.

In a deeply divided political climate, it is rare to find an issue that unites Americans across ideological lines. But when it comes to AI, something extraordinary is happening: Americans agree that these systems must be designed to protect our most basic rights.


A new Rasmussen poll reveals that 77% of likely voters, including 80% of Republicans and 77% of Democrats, support laws that would require developers and tech companies to design AI systems to uphold constitutional rights such as freedom of speech and freedom of religious expression. Such a consensus is practically unheard of in today’s political climate.

The same poll found that more than 70% of voters are concerned about the growing role of AI in our economy and society. And that concern isn’t limited to any one party: 74% of Democrats and 70% of Republicans say they are “very” or “somewhat concerned.”

Americans are watching the AI revolution unfold, and they’re sending a clear message: If we’re going to let these systems shape our future, they must be governed by the same principles that have preserved freedom for generations.

Why it matters now

That concern is more than hypothetical. We are already seeing the consequences of AI systems that reflect narrow ideological agendas rather than broad constitutional values.

Google’s Gemini AI made headlines last year when it produced historically inaccurate images of black Founding Fathers and Asian Nazi soldiers. This wasn’t a technical glitch. It was the direct result of ideological programming that prioritized “diversity” over truth.

In China, the DeepSeek AI model was trained to avoid any criticism of the Chinese Communist Party. Ask it about the Tiananmen Square massacre, and it refuses to give you an answer at all. When models are trained to serve power rather than seek truth, they become tools of suppression.

If left unchecked, agenda-driven AI systems in the United States could soon shape what news we see, what content is amplified — or buried — on social media, and what opinions are allowed in public discourse, thereby conforming society to their pre-programmed ideals.

Biased AI systems could even influence public policy debates by skewing public opinion toward "solutions" that optimize for social or environmental justice goals. These constitutionally unaligned AI systems may quietly reshape society with complete disregard for liberty, consent, and due process.

Regulation for freedom’s sake

Some conservatives bristle at the word “regulation,” and rightly so. But what we're talking about here isn’t micromanagement or bureaucratic control. It’s the same kind of constraint our Founders placed on government power: constitutional guardrails that prevent abuse and preserve freedom.

When AI is unbound by those principles, it doesn’t become neutral — it becomes ideological. It doesn’t protect liberty; it calculates outcomes. And in doing so, it can rationalize censorship, coercion, and discrimination, all in the name of “progress.”


This is why Americans are right to demand action now. The window for shaping AI's trajectory is still open, but it won’t remain open forever. As these systems become more advanced and more embedded in our institutions, retrofitting them to respect liberty will become harder, not easier.

Don’t let the opportunity slip away

We are living through a rare moment of political clarity. Voters from both parties recognize that AI must be built to reflect the values that make us free. They want systems to protect speech, not suppress it. They want AI to respect human conscience, not override it. They want AI to serve the people, not manage them.

This is not a partisan issue. It is a moral one. And it’s an opportunity we must seize before the future is decided for us.

AI doesn’t have to be our master. But it must be taught to serve what makes us free.

Memo to Hegseth: It isn’t about AI technology; it’s about counter-AI doctrine



Secretary Hegseth, you are a fellow grunt, and you know winning isn’t just about technology. It’s about establishing a doctrine and training to its standards; that is what wins wars. As you know, a brand-new ACOG-equipped M4 carbine is ultimately useless if your troops do not understand fire and maneuver, communications security, operations security, supporting fire, and air cover.

The French and British learned that the hard way. Though they had 1,000 more tanks than the Germans when the Nazis attacked in 1940, their technological advantage disappeared under the weight of the far better German doctrine: Blitzkrieg.

So while the Washington political establishment is currently agog at China’s gee-whiz DeepSeek AI this and oh-my-goodness Stargate AI that, it might be more effective to develop a counter-AI doctrine right freaking now, rather than having our collective rear ends handed to us later.

While it is true that China’s headlong embrace of artificial intelligence could give the People’s Liberation Army a huge advantage in areas such as intelligence-gathering and analysis, autonomous combat air vehicles, and advanced loitering munitions, it is imperative to stay ahead of the Chinese in other crucial ways — not only in terms of technological advancement and the fielding of improved weapons systems but in the vital establishment of a doctrine of artificial intelligence countermeasures to blunt Chinese AI systems.

Such a doctrine should begin to take shape around four avenues: polluting large language models to create negative effects; using Conway’s law as guidance for exploitable flaws; using bias among our adversaries’ leadership to degrade their AI systems; and using advanced radio-frequency weapons such as gyrotrons to disrupt AI-supporting computer hardware.

Pollute large language models

Generative AI is the extraction of statistical patterns from an extremely large data set. A large language model built from such an enormous data set using “transformer” technology can be accessed through prompts, which are natural language texts that describe the function the AI must perform. The result is a generative pre-trained transformer — the “GPT” in ChatGPT.

Such an AI system might be degraded in at least two ways: pollute the data or attack the “prompt engineering.” Prompt engineering describes the process of crafting instructions that the generative AI system can understand. A deliberately introduced error at either point would cause the large language model to “hallucinate.”

The possibility also exists of finding unintended programming errors, such as the weird traits discovered in OpenAI’s “AI reasoning model” called “o1,” which inexplicably “thinks” in Chinese, Persian, and other languages. No one understands why this is happening, but such kindred idiosyncrasies might be wildly exploitable in a conflict.

An example from World War II illustrates the importance of countermeasures when an enemy can deliver speedy and exclusive information to the battlespace.


The development of radar (originally an acronym for radio detection and ranging) was, in itself, a method of extracting patterns from an extremely large database: the vastness of the sky. An echo from a radio pulse gave the accurate range and bearing of an aircraft.

To defeat enemy radar, the British intelligence genius R.V. Jones recounted in “Most Secret War,” it was necessary to insert information into the German radar system that resulted in gross ambiguity. For this, Jones turned to Joan Curran, a physicist at the Telecommunications Research Establishment, who developed aluminum foil strips, called “window” by the Brits and “chaff” by the Americans, of an optimum size and shape to create thousands of reflections that overloaded and blinded the German radar system.

So how can present-day U.S. military and intelligence communities introduce a kind of “AI chaff” into generative AI systems, to deny access to new information about weapons and tactics?

One way would be to assign ambiguous names to those weapons and tactics. For example, such “naturally occurring” search terms might include “Flying Prostitute,” which would immediately reveal data about the B-26 Marauder medium-range bomber of World War II.

Or a search for “Gilda” and “Atoll” retrieves a photo of the Mark III nuclear bomb dropped on Bikini Atoll in 1946, a bomb onto which a photo of Rita Hayworth had been pasted.

A search of “Tonopah” and “Goatsucker” retrieves the F-117 stealth fighter.

Since a contemporary computer search is easily fooled by such accidental ambiguities, it would be possible to grossly skew results of a large language model function by deliberately using nomenclature that occurs with great frequency and is extremely ambiguous.
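To see why sheer frequency can hijack a statistical model’s associations, here is a minimal sketch. The corpus, the counts, and the toy bigram counter are hypothetical stand-ins (a real large language model uses transformer networks trained on billions of documents), but the underlying mechanic is the same: the model reproduces whatever pattern dominates its data.

```python
# Illustrative sketch only: a toy bigram model, not a real LLM.
# The corpus, the term, and the counts are hypothetical.
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which word most often follows each word in the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            follows[current_word][next_word] += 1
    return follows

def most_likely_next(model, word):
    """Return the statistically most likely next word, if any."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# "Clean" data: the ambiguous term is associated with an aircraft.
clean = ["the goatsucker fighter flew from tonopah"] * 3

# "Polluted" data: flood the corpus with an unrelated, high-frequency meaning.
polluted = clean + ["the goatsucker bird sings at night"] * 50

print(most_likely_next(train_bigram_model(clean), "goatsucker"))     # -> fighter
print(most_likely_next(train_bigram_model(polluted), "goatsucker"))  # -> bird
```

Multiply that skew across billions of scraped web pages and the “AI chaff” idea comes into focus: the model’s statistics, not its reasoning, decide what a term like “Goatsucker” means.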

Given that a website like Pornhub gets something in excess of 115 million hits per day, perhaps the Next Generation Air Dominance fighter should be renamed “Stormy Daniels.” For code names of secret projects, try “Jenna Jameson” instead of “Rapid Dragon.”

Such an effort in sleight of hand would be useful for operations and communications security by confusing adversaries seeking open intelligence data.

For example, one can easily imagine the consternation that Chinese officers and NCOs would experience when their young soldiers expended valuable time meticulously examining every single image of Stormy Daniels to ensure that she was not the newest U.S. fighter plane.

Even “air-gapped” systems like the ones being used by U.S. intelligence agencies can be affected when the system updates information from internet sources.

Note that such an effort must actively and continuously pollute the datasets, like chaff confusing radar, by generating content that would populate the model and ensure that our adversaries consume it.

A more sophisticated approach would use keywords like “eBay” or “Amazon” or “Alibaba” as a predicate and then very common words such as “tire” or “bicycle” or “shoe.” Then contracting with a commercial media agency to do lots of promotion of the “items” across traditional and social media would tend to clog the system.

Use Conway’s law

Melvin Conway is an American computer scientist who in the 1960s conceived the eponymous rule that states: “Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

De Caro’s corollary says: “The more dogmatic the design team, the greater the opportunity to sabotage the whole design.”

Consider the Google Gemini fiasco. The February 2024 launch of Gemini, Google’s would-be answer to ChatGPT, was an unmitigated disaster that tanked Google’s share price and made the company a laughingstock. As the Gemini launch went forward, its image generator “hallucinated.” It created images of black Nazi stormtroopers and female Asian popes.

In retrospect, the event was the most egregious example of what happens when Conway’s law collides with organizational dogma. The young, woke, and historically ignorant programmers myopically led their company into a debacle.

But for those interested in confounding China’s AI systems, the Gemini disaster is an epiphany.


If the extremely well-paid, DEI-obsessed computer programmers at the Googleplex campus in Mountain View, California, can screw up so immensely, what kind of swirling vortex of programming snafu is being created by the highly regimented, ill-paid, constantly indoctrinated, young members of the People’s Liberation Army who work on AI?

A solution to beating China’s AI systems may be an epistemologist who specializes in the cultural communication of the PLA. By using de Caro’s corollary, such an expert could lead a team of computer scientists to replicate the Chinese communication norms and find the weaknesses in their system — leaving it open to spoofing or outright collapse.

When a technology creates an existential threat, the individual developers of that technology become strategic targets. For example, in 1943, Operation Hydra, which employed the entirety of RAF Bomber Command — 596 bombers — had the stated mission of killing all the German rocket scientists at Peenemunde. The RAF had marginal success and was followed by three U.S. Eighth Air Force raids in July and August 1944.

In 1944, the Office of Strategic Services dispatched multilingual agent and polymath Moe Berg to assassinate German scientist Werner Heisenberg, if Heisenberg seemed to be on the right path to building an atomic bomb. Berg decided (correctly) that the German was off track. Letting him live actually kept the Nazis from success. In more recent times, it is no secret that five Iranian nuclear scientists have been assassinated (allegedly) by the Israelis in the last decade.

Advances in AI that could become existential threats could be dealt with in similar fashion. Bullets are cheap. So is C-4.

Exploit design biases to degrade AI systems

Often, the people and organizations funding research and development skew the results because of their bias. For example, Heisenberg was limited in the paths he might follow toward developing a Nazi atomic bomb because of Hitler’s perverse hatred of “Jewish physics.” This attitude was abetted by two prominent and anti-Semitic German scientists, Philipp Lenard and Johannes Stark, both Nobel Prize winners who reinforced the myth of “Aryan science.” The result effectively prevented a successful German nuclear program.

Returning to the Google Gemini disaster, one only needs to look at the attitude of Google leadership to see the roots of the debacle. Google CEO Sundar Pichai is a naturalized U.S. citizen whose undergraduate college education was in India before he came to the United States. His ties to India remain close, as he was awarded the Padma Bhushan, India’s third-highest civilian award, in 2022.

In congressional hearings in 2018, Pichai seemed to dance around giving direct answers to explicit questions, a trait he demonstrated again in 2020 and in an antitrust court case in 2023.

His internal memo after the 2024 Gemini disaster mentioned nothing about who selected the people in charge of the prompt engineering, who supervised those people, or who, if anyone, got fired in the aftermath. More importantly, Pichai made no mention of the internal communications functions that allowed the Gemini train wreck to occur in the first place.

Again, there is an epiphany here. Bias from the top affects outcomes.

As Xi Jinping continues his move toward autocratic authoritarian rule, he brings his own biases with him. This will eventually affect, or more precisely infect, Chinese military power.

In 2023, Xi detailed the need for China to meet world-class military standards by 2027, the 100th anniversary of the People’s Liberation Army. Xi also spoke of “informatization” (read: AI) to accelerate building “a strong system of strong strategic forces, raise the presence of combat forces in new domains and of new qualities, and promote combat-oriented military training.”

It seems that Xi’s need for speed, especially in “informatization,” might be the bias that points to an exploitable weakness.

Target chips with energy weapons

Artificial intelligence depends on extremely fast computer chips whose capacities are approaching their physical limits. They are more and more vulnerable to lack of cooling — and to an electromagnetic pulse.

In the case of large cloud-based data centers, cooling is essential. Water cooling is cheapest, but pumps and backup pumps are usually not hardened, nor are the inlet valves. No water, no cooling. No cooling, no cloud.

The same goes for primary and secondary electrical power. No power, no cloud. No generators, no cloud. No fuel, no cloud.


AI robots in the form of autonomous airborne drones, or ground mobile vehicles, are moving targets — small and hard to hit. But their chips are vulnerable to an electromagnetic pulse. We’ve learned in recent times that a lightning bolt with gigawatts of power isn’t the only way to knock out an AI robot. High-power microwave systems such as Epirus’ Leonidas and the Air Force Research Laboratory’s THOR can burn out AI systems at a range of about three miles.

Another interesting technology, not yet fielded, is the gyrotron, a Soviet-developed, high-power microwave source that is halfway between a klystron tube and a free electron laser. It creates a cyclotron resonance in a strong magnetic field that can produce a customized energy bolt with a specific pulse width and specific amplitude. It could therefore reach out and disable a specific kind of chip, in theory, at greater ranges than a “you fly ’em, we fry ’em” high-power microwave weapon, now in the early test stages.

Obviously, without functioning chips, AI doesn’t work.

The headlong Chinese AI development initiative could provide the PLA with an extraordinary military advantage in terms of the speed and sophistication of a future attack on the United States.

Thus, the need to develop AI countermeasures now is paramount.

So, Secretary Hegseth, one final idea for you to consider: During World War I, the great Italian progenitor of air power, General Giulio Douhet, very wisely observed: “Victory smiles upon those who anticipate the changes in the character of war, not upon those who wait to adapt themselves after the changes occur.”

In terms of the threat posed by artificial intelligence as it applies to warfare, Douhet’s words could not be truer today or easier to follow.

Editor’s note: A version of this article appeared originally on Blaze Media in August 2024.

Why ChatGPT Could Actually Be A Good Thing For High School English

The more we write, the stronger our minds become, making us better communicators, planners, analysts, creators, critics — just better human beings overall.