How to power the AI race without losing control



The artificial intelligence revolution is here, and it arrives with the capacity to change society fundamentally, for better or worse.

America is currently leading the world in AI development. U.S. companies are building the most advanced models, attracting the most capital, and designing the infrastructure that will shape the next century. But there is one increasingly obvious constraint standing in the way: access to electricity.

Energy scarcity is only half the story. Even if we succeed in generating the power required to fuel the AI revolution, we must confront a deeper challenge. The same technology that promises medical breakthroughs and economic growth also carries profound societal and even existential risk.

If America wants to win the AI race, we will need a massive expansion of energy production and an equally massive expansion of vigilance.

The energy bottleneck

Modern AI models are trained and deployed in massive data centers packed with tens of thousands of high-performance graphics processing units running continuously. Training a single frontier model can require weeks or months of nonstop computation, while everyday AI tools used by millions of people must process queries around the clock.

These facilities consume electricity at industrial scale, rivaling entire cities in their power demands. In fact, the hyperscale Stargate data center in Saline Township, Michigan, is projected to consume as much electricity as 1.17 million homes.

The industry is still coming to grips with just how much energy the AI revolution will demand. Just a few years ago, Silicon Valley leaders were still thinking in megawatts.

Meta CEO Mark Zuckerberg, speaking on a podcast less than two years ago, said his company would build larger AI clusters “if we could get the energy to do it,” describing 50-to-100-megawatt facilities and speculating that 1-gigawatt data centers were probably inevitable someday.

Today, 1-gigawatt facilities are on the smaller end of planned AI infrastructure, with projects of up to 5 gigawatts already in motion across the United States.

And those headline projects barely scratch the surface. Dozens more large-scale facilities are planned or under construction across the country, and every one of them will require an enormous, reliable flow of electricity to operate.

Elon Musk recently stated at Davos that “the limiting factor for AI deployment is, fundamentally, electrical power.” He warned that while AI chip production is increasing exponentially, electricity generation is not.

“Very soon, maybe even later this year,” Musk said, “we will be producing more chips than we can turn on.”

In Santa Clara, California, reports indicate newly built data centers may sit idle for years because the local grid cannot handle the load.

According to a report published by the global consulting group McKinsey & Company, U.S. demand for AI-ready data center capacity could grow from roughly 60 gigawatts today to between 170 and 298 gigawatts by 2030.

The International Energy Agency reports that data centers consumed more than 4% of total U.S. electricity in 2024. This amounts to 183 terawatt-hours. IEA projections suggest this number could increase by 133% to 426 TWh by 2030.

To put that in perspective, 426 TWh is roughly equivalent to the annual electricity consumption of more than 40 million American homes.
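The household equivalence and the growth percentage cited above can be checked with simple arithmetic. In the sketch below, the per-household figure of roughly 10,500 kWh per year is my assumption (in line with published EIA residential averages), not a number from the IEA report:

```python
# Back-of-envelope check on the IEA figures cited above.
# ASSUMPTION: average U.S. household electricity use of ~10,500 kWh/year
# (an estimate consistent with EIA residential averages, not an IEA figure).

US_DATA_CENTER_TWH_2024 = 183   # IEA, 2024 consumption
US_DATA_CENTER_TWH_2030 = 426   # IEA projection for 2030
KWH_PER_HOME_PER_YEAR = 10_500  # assumed average household usage

growth_pct = (US_DATA_CENTER_TWH_2030 - US_DATA_CENTER_TWH_2024) \
             / US_DATA_CENTER_TWH_2024 * 100
# Convert TWh to kWh (x 1e9), then divide by per-home usage.
homes_equivalent = US_DATA_CENTER_TWH_2030 * 1e9 / KWH_PER_HOME_PER_YEAR

print(f"Projected growth: {growth_pct:.0f}%")                        # ~133%
print(f"Household equivalent: {homes_equivalent / 1e6:.1f} million")  # ~40.6 million
```

Both of the article's figures check out: the jump from 183 to 426 TWh is about a 133% increase, and 426 TWh covers the annual usage of roughly 40 million homes under this assumption.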

The dilemma is obvious. If we do not have reliable energy, AI innovation will be compromised and could potentially migrate elsewhere. Worse, American households could find themselves competing with Big Tech for increasingly scarce power, driving up electricity costs for families and small businesses.

But energy is only the first layer of this story.


The promise and the disruption

AI is not your typical technological advancement. It is a general-purpose intelligence system capable of transforming nearly every sector of society. In the coming years, AI could accelerate drug discovery, personalize medicine, supercharge logistics, automate research, and unlock new materials and engineering breakthroughs, just to name a few potential benefits. The economic upside is staggering.

Artificial intelligence is a powerful tool and a dangerous weapon. While promising efficiency and innovation, AI also threatens disruption on a historic scale. Job displacement could occur faster than in previous technological revolutions. Entire professions, from legal research to software development, could be reshaped or automated.

If widespread job displacement occurs, there will inevitably be calls for sweeping government intervention. The political consequences of rapid automation could be just as transformative as the technology itself.

Technological leaps have reshaped political life throughout history. As a recent example, social media algorithms have dominated political discourse over the past decade, and polarization has skyrocketed as people on all sides of the aisle are trapped in online echo chambers and subjected to a panopticon of surveillance.

Artificial intelligence has the frightening capability of supercharging mass surveillance while amplifying preconceived biases that have no basis in objective truth.

There is certainly reason for concern about the potential bias and coercive nature of AI. In recent years, we have already witnessed how tech companies can shape narratives and suppress viewpoints on popular media platforms. Embedding ideological bias into AI systems would mean embedding that bias into education, finance, health care, and governance.

If AI becomes the invisible infrastructure of society, who writes its rules? Who determines its boundaries? And who holds it accountable?

Playing with probabilities

Beyond economic and cultural disruption lies an even deeper uncertainty.

We are introducing a form of intelligence that even its creators admit they do not fully understand. There are already documented cases of advanced AI systems behaving in deceptive or strategically manipulative ways. In controlled environments, some models have been observed lying to human evaluators, scheming to achieve assigned goals, or resisting shutdown instructions.

OpenAI’s stated ambition is to create artificial superintelligence — systems that surpass human capability across virtually every domain. There is no telling where this path may lead. Humanity has never had to grapple with the prospect of a man-made intelligence that is superior to our own.

And remarkably, some of the leading figures in the field openly discuss the possibility of catastrophic outcomes.

Elon Musk has suggested there is “only a 20% chance of annihilation.” Anthropic CEO Dario Amodei has estimated roughly a 25% chance that AI development goes “really, really badly.” Geoffrey Hinton, often referred to as the “godfather of AI,” has placed the odds of extinction-level consequences somewhere between 10% and 20% over the coming decades.

Those numbers still imply that positive outcomes are more likely than not. But when the downside is losing human civilization itself, percentages matter.

We are advancing a technology with transformative power while relying largely on unchecked corporate discretion to steer its trajectory. Humanity finds itself fiddling with the key to Pandora’s box, and we have no rational means of gauging what will happen if the box is opened.


Power and prudence

As stalwart advocates for smaller government, we hesitate to call for slamming the brakes on AI development, but sober discernment is essential moving forward. America is in a strategic competition with geopolitical rivals who would gladly dominate the field, and us along with it, if we retreat.

Reliable energy production is necessary to promote competition and American innovation. Yet it is arguably more important that society engage in serious dialogue about this emerging technology. Government cannot, and should not, be the only voice in this conversation.

Independent institutions dedicated to transparency, accountability, and the defense of individual liberty need to rise and challenge the current trajectory.

Technological revolutions have always reshaped society. The difference this time is scale and speed. AI is a decision-making engine that may soon operate faster and more broadly than any human institution.

America can power the AI revolution. The real question is whether we can power it without surrendering control over our economy, institutions, and ultimately, our freedom.

The future may well belong to artificial intelligence. But whether that future advances prosperity or undermines humanity depends on the vigilance we exercise today.

Elon Musk's Terafab is coming, and you're not ready



The announcement of Terafab was made at a decommissioned power plant, reflecting Elon Musk’s understanding of stagecraft: The ruined infrastructure of one era makes a convenient altar for the next. On March 21 and 22, 2026, at the Seaholm Power Plant in Austin, Musk presented Terafab. It is either the most ambitious semiconductor manufacturing project in history or a very expensive vision that may never materialize.

Terafab is a plan to build vertically integrated chip-manufacturing capacity in Austin, combining under one roof the design, fabrication, packaging, and testing of advanced semiconductors. Tesla, SpaceX, and xAI are the collaborating entities. The announced investment figure is $20 billion. The stated long-run target is one terawatt of compute capacity per year, a number that converts the language of performance into the language of power.

Measuring compute in watts means that the limiting factor is energy throughput. The International Energy Agency has described data centers as a fast-growing fraction of global electricity demand; by 2030, in its base case, that demand could roughly double.

The technical core of Terafab is its most defensible part. The pitch is about iteration speed: If you can design a chip, fabricate it, package it, test it, and revise the mask, all inside one building, without shipping components between specialized facilities in different countries, you can improve faster than anyone who does not. In conventional semiconductor manufacturing, these functions are geographically and organizationally scattered. A mask set travels; a wafer ships; a packaged part crosses an ocean. Each journey is a delay, and delay is the enemy of the feedback loop. Terafab is a wager that learning velocity beats static node leadership.

A factory within a factory

Advanced fabs are among the most expensive and complex structures human beings have ever built, typically costing $10 billion and taking several years for a single facility, dependent on supply chains for equipment that cannot be wished into existence by ambition or capital alone. Extreme ultraviolet lithography machines, to name one critical dependency, cost hundreds of millions of dollars apiece and are manufactured by a single Dutch company. The closed loop is a compelling engineering idea, but realizing it will mean contending with equipment lead times, utility provisioning, yield learning curves, and the peculiar physics of building things in the real world.

There is a second Terafab nested inside the first. The announcement includes chips, named D3, designed for space environments, paired with a vision of solar-powered orbital compute satellites, initially around 100 kilowatts and scaling toward the megawatt range. Terrestrial compute is constrained by land, power, cooling, and local political opposition to enormous data centers. Space has sunlight and no neighbors to complain about the noise.


Of course, space also has no air. In vacuum, heat cannot leave a system by convection, only by radiation, which requires very large radiator surfaces at high power levels. The International Space Station’s thermal control system requires radiators the size of tennis courts to reject the heat generated by its systems. Radiation poses its own complications: The energetic particles of the space environment induce bit flips and long-term degradation in electronics not specifically hardened against them. The orbital vision is not impossible. It is simply a different problem than the earthbound one, even when presented in the same breath, as though the same momentum carries the project from Austin to low Earth orbit without friction.
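The radiator problem can be made concrete with the Stefan-Boltzmann law, which governs how much heat a surface can shed into vacuum by radiation alone. The sketch below is illustrative only; the emissivity and radiator temperature are assumed values, not figures from the Terafab announcement:

```python
# In vacuum, waste heat leaves only by radiation, governed by the
# Stefan-Boltzmann law: P = epsilon * sigma * A * T^4.
# ASSUMPTIONS (illustrative, not from the announcement): emissivity 0.9,
# radiator surface temperature 300 K, single-sided radiating area.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 * K^4)
EPSILON = 0.9       # emissivity of a typical radiator coating (assumed)
T_RADIATOR = 300.0  # radiator temperature in kelvin (assumed)

def radiator_area_m2(waste_heat_watts: float) -> float:
    """Radiating area needed to reject a given heat load at equilibrium."""
    return waste_heat_watts / (EPSILON * SIGMA * T_RADIATOR ** 4)

# The announced ~100 kW starting point vs. a megawatt-scale follow-on:
for load in (100e3, 1e6):
    print(f"{load / 1e3:.0f} kW -> ~{radiator_area_m2(load):.0f} m^2 of radiator")
```

Under these assumptions, a 100-kilowatt satellite needs on the order of a couple hundred square meters of radiator, and a megawatt-class one needs ten times that, which is why the orbital pitch is a thermal engineering problem before it is a compute problem.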

The future needs power

Terafab’s “everything under one roof” approach has an ancestor in the great vertical integration projects of industrial capitalism, such as Ford’s River Rouge complex, which turned raw materials into finished automobiles inside a single, vast geography, its own power plant humming at the center.

The global semiconductor supply chain is highly concentrated: Roughly 92% of the world’s most advanced chip manufacturing capacity sits in Taiwan. To build end-to-end domestic capability is simultaneously a resilience project and a power project, a bid to internalize a strategic resource inside one corporate constellation rather than depend on the broader market of specialized suppliers.

Terafab is a cultural event as much as a technical announcement, and its cultural work is to naturalize a particular diagnosis: that intelligence is infrastructure, infrastructure is energy, and energy is the horizon of meaning for civilizational progress. Whether or not the fab gets built on schedule, whether or not the orbital satellites ever achieve megawatt-scale compute, the frame has been installed. The factory is where the future lives, and the future needs power.

Right-wing billionaires are barking up the wrong tree



Democrats are currently on track to take the House of Representatives in the 2026 midterms. If this happens, they will empower resistance bureaucrats to slow down all Trump administration initiatives. Of course, they’ll not only impeach Trump, but will also pursue impeachment proceedings against many Trump officials. This will substantially drain momentum from the administration and increase it for Democrats heading into the crucial 2028 presidential election.

The Democrats are already putting together plans, formulating a narrative, and accumulating evidence, which they will use against Republicans should they retake power. We’ve seen this movie before.

The Marxist machine has had time to learn from its mistakes during 2020-2024. The Democrats will likely pursue criminal prosecution against key targets in the MAGA orbit, including big donors like Elon Musk, the DOGE bros, and even junior Trump staffers. We’ve already seen in Arctic Frost an effort to spy on sitting Republican United States senators — they’ll be on the target list, too.

This is power. Force is power. Politics is the management of force. For his tech-oriented publication Pirate Wires, Mike Solana recently published “Theory of Power,” which outlines how the left will replicate California’s wealth tax to target billionaires nationwide. He believes that the left is targeting billionaires because wealth is power. He’s half right.

Wealth itself is not power — it is the means to power. The left seeks to redistribute the wealth of the billionaire class to the people living in America in exchange for power. Leftists are not targeting the billionaires because their wealth poses a threat to the left’s power — they want to seize the power of that wealth for themselves. Since the billionaires do not know how to wield their potential power, they have become targets. If they did, the California wealth tax wouldn’t even be an issue.

Wealth cannot protect its holder from force. If politics is the management of force, then political influence is power. There are plenty of people with political influence and no wealth who have more power than billionaires. There are 20-something political staffers who have more political power than billionaires. There is a legion of bureaucrats with more political power than billionaires. Who has more power, a billionaire or the IRS lawyer investigating him? Of course, it’s the IRS lawyer, because the IRS lawyer is backed by regime power.

The billionaire class has largely abdicated regime power — the question of who is in charge — with a few notable exceptions, such as Elon Musk’s 2024 election engagement and purchase of Twitter. The wealthy are quite good at influencing politics for their discrete business interests, with one analysis finding that they receive a 220-times return on investment through their lobbying efforts (other analyses attribute the rise in corporate profits to lobbying).

However, regime politics is not fundamentally about lobbying for an appropriation or a carve-out in the tax code, which puts generating wealth above gaining political power. Machiavelli warned against this in “The Prince”:

And, on the contrary, it is seen that when princes have thought more of ease than of arms, they have lost their states. And the first cause of your losing it is to neglect this art.

Wielding political influence for higher corporate profits to buy another jet or a fifth vacation home is thinking of ease more than of arms.

If politics is the management of force, then political influence is the “arms.” The billionaires are on track to lose their “state,” because they’ve neglected the art of influencing regime politics.


For all its faults, the left understands regime politics. Billionaire wealth extraction is just one part of its plan to sustain and deepen its regime-level power. If its only opposition, the MAGA political class, is destroyed by regime politics, the left’s wealth extraction scheme is not only inevitable, but it will also be the least of the billionaires’ worries.

All of this means that right-aligned billionaires should move immediately to gain regime-level political influence. To be clear, wealth can be a strong amplifier of political influence. Still, political influence has a simple recipe: It requires access, credibility, leverage, and the ability to change behavior. In other words, donating to campaigns is not enough. Elected officials must be lobbied to act in the interest of those who support them, or someone else will lobby them for their own interests.

Before a politician is elected, the benefactor has the leverage. But once the politician has regime-level power, the benefactor is subject to the beneficiary’s power. If right-wing billionaires want to survive what’s coming, they must have a well-run machine to influence politicians after they are elected. Solana makes this point — with which I fully agree: They must “respond as if [their lives depend] on it, because my reading of what these people are saying, casually, cheerfully, and increasingly out loud, is…it does.”

But power is fickle. Any billionaires who wield political influence strictly for their own benefit rather than on behalf of the people will find themselves burdened with all the paranoia and stress of a tyrant. To that end, Xenophon’s “On Tyranny” provides relevant advice: “Consider the fatherland to be your estate, the citizens your comrades, friends your own children, your sons the same as your life, and try to surpass all these in benefactions. For if you prove superior to your friends in beneficence, your enemies will be utterly unable to resist you.”

Editor’s note: This article appeared originally at the American Mind.

AI’s PR is in the toilet — for good reason



It may be one of the most remarkable technological breakthroughs in human history. Ask the American public, though, and you’ll hear something else entirely about artificial intelligence.

A recent NBC News survey asked registered voters how they feel about a range of public figures and political topics. The results were striking. While Pope Leo posted a net favorability rating of +34, artificial intelligence came in at -20. That puts AI near the bottom of the list, ranking ahead of only the Democratic Party and Iran. According to the poll, only 26% responded “positive” to AI, while 46% responded “negative.”

Think about that for a moment.

A technology widely touted as capable of curing diseases, discovering new materials, and unlocking unprecedented productivity is viewed more negatively than every U.S. politician and institution included in the poll.

Artificial intelligence may be revolutionary, but unless its architects confront the distrust surrounding it, AI risks losing the public confidence it will ultimately depend on.

A perfect storm of distrust

As someone who follows AI closely, I can’t point to a single cause of the unease. It looks more like a perfect storm.

For decades, science fiction trained audiences to associate AI with dystopia. From “2001: A Space Odyssey” to “The Terminator,” AI often appears as the moment humanity loses control of its own creation. Fiction isn’t the whole story, but it primes the public to expect the worst.

Many Americans also worry about what AI will do to the workforce. Automation has threatened certain industries for years, but AI scales the threat. It now appears poised to hit huge swaths of white-collar work, including creative fields and even decision-making roles once assumed to require human judgment.

Then came the explosion of what critics call “AI slop.” Across the internet, AI-generated articles, videos, images, and posts flood the feed. Much of it is low-effort content built to attract clicks, not provide value. The internet already buckles under misinformation and spam. AI has supercharged this problem.

Americans also distrust the companies building these systems. The left has long been skeptical of massive corporations wielding too much power. The right grew more suspicious after years of fights over social media censorship and ideological activism. ESG efforts, which used corporate power to reshape incentives around political priorities, only reinforced the sense that tech and finance elites want to run the country by proxy.

In short, both sides now distrust many of the institutions developing artificial intelligence. That is a bad position for an industry trying to introduce world-changing technology.

When the experts sound the alarm

Public unease also draws fuel from the people closest to the machine. Several prominent voices in the AI world have issued stark warnings about risk.

Elon Musk has suggested there may be “only a 20% chance of annihilation” from future advanced AI systems. Anthropic CEO Dario Amodei has cited a 25% chance AI development goes “really, really badly.” Geoffrey Hinton, often called the “godfather of AI,” has floated human extinction-level risk in the 10% to 20% range over the coming decades.

When the builders of a technology openly speculate about catastrophic outcomes, it’s not surprising the public grows uneasy. To the average voter, it can sound like civilization is playing Russian roulette — and the people loading the cylinder are asking to be trusted.


Power, control, and fear of the unknown

Beyond jobs and misinformation, a deeper concern lies underneath: AI is becoming an infrastructure of decision-making.

Algorithms already shape what news we see, what products we buy, and what ideas spread online. As AI grows more capable, it will influence public opinion, political discourse, and cultural norms even more.

In authoritarian systems, that becomes an obvious tool of surveillance and control. But even in a constitutional republic, concentrating that much power in a handful of corporations — or in government — raises hard questions. Who designs the systems? Whose values do they embed? Who gets accountability when they fail? The public does not have satisfying answers, and the industry hasn’t given them many.

The AI industry should pay attention

Despite the excitement in Silicon Valley and Washington, the NBC poll reveals a simple truth: Much of the public does not trust AI. For the companies racing to build ever more powerful systems, that should be a wake-up call.

The industry often sells AI in near-utopian terms: medicine, energy breakthroughs, scientific discovery. Those gains may come. But many Americans see something else. They see massive data centers consuming energy while the internet fills with synthetic garbage. They see tech firms raising and spending billions while ordinary life gets harder. They see executives talking openly about betting civilization on tools they admit they don’t fully control.

If AI’s architects want public buy-in, they will have to address these fears directly.

A good place to start would be a clear public commitment to the constitutional principles Americans still expect: free speech, individual liberty, and personal autonomy. If AI will play a larger role in shaping information and decisions, the public needs confidence that these systems will protect fundamental freedoms rather than erode them.

AI will be shaped, in part, by trust. Right now, that trust is in short supply.
