Chatbots don’t run on magic. They run on your money.



Imagine someone walks into your town with a proposition: Rezone large swaths of residential land and farmland. Hand out tax breaks. Let us build ugly, noisy facilities for chatbots — facilities that will devour nearly a quarter of the power supply.

Then, before you run him out of the room, he adds a final promise: Do not worry. We will pay our own way.


That is the rope-a-dope Americans are supposed to accept from the government-tech oligopoly, even as politicians insist that data centers will not cost the public a dime.

Sensing a growing backlash against the data-slop colonization of rural America, President Trump promised during the State of the Union that every data center company would pay its own way. Awareness of the problem helps. The president’s pledge does not.

Facts on the ground point in the opposite direction: consumers already pay for data centers, the economics make “paying their own way” implausible at scale, and the industry fights efforts to put that promise into law.

The scope of the problem

The hyperscale build-out being stacked on top of roughly 4,000 existing facilities is not a “burden” on the grid. It is an industrial-scale demand shock.

MIT Technology Review reports that AI alone could soon consume as much electricity as 22% of all U.S. households. Boston Consulting Group projects data center energy needs of up to 1,050 terawatt-hours annually by 2030 — about 120 gigawatts on average. That figure exceeds current U.S. nuclear capacity by roughly 23%.

To put it in plain terms, the United States has about 97 gigawatts of nuclear capacity across 94 reactors. If the high end of OpenAI’s hyperscale ambitions materializes, those facilities alone would require roughly 36% of total U.S. nuclear capacity.
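The unit conversions behind these comparisons are easy to verify. Here is a minimal back-of-the-envelope sketch, using only the figures quoted above, showing how 1,050 terawatt-hours per year translates into an average gigawatt draw and how that draw stacks up against U.S. nuclear capacity:

```python
# Back-of-the-envelope check of the projections quoted above.
HOURS_PER_YEAR = 8760  # 365 days * 24 hours

projected_twh = 1050                               # BCG projection: TWh per year by 2030
avg_gw = projected_twh * 1000 / HOURS_PER_YEAR     # TWh/yr -> average continuous draw in GW
print(f"Average draw: {avg_gw:.0f} GW")            # ~120 GW

us_nuclear_gw = 97                                 # ~97 GW across 94 reactors
excess = (avg_gw - us_nuclear_gw) / us_nuclear_gw
print(f"Exceeds U.S. nuclear capacity by {excess:.1%}")  # roughly 23-24%
```

The rule of thumb: one terawatt-hour per year, spread evenly over the year's 8,760 hours, is about 0.114 gigawatts of continuous draw.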

Now scale it out. Clearview estimates that if the 680 planned data centers get built and become operational, they would require the energy equivalent of 186 large nuclear power plants.

That should end the fantasy that these companies can “pay their own way” while drowning in debt, burning cash, and chasing thin margins.

These are not last decade’s data centers, either. Bloomberg reports that only 10% of facilities today draw more than 50 megawatts. Over the next decade, the average new facility will draw well over 100 megawatts. Nearly a quarter will exceed 500 megawatts, and a few will top 1 gigawatt.

Electricity is only the first bill. This demand shock forces major grid upgrades: transmission lines, transformers, substations, and capacity expansions. Utilities do not eat those costs. They pass them on to ratepayers — that is, us.

Wood Mackenzie estimates that AI-driven build-outs will push transformer demand beyond supply by about 30% this year, driving costs up and delaying projects. Consumers will pay for that, too.


We already pay for data centers

Consumers already pay. Any serious fix starts with admitting it.

Yet Interior Secretary Doug Burgum has the nerve to tell Americans that nobody has paid higher prices because of data centers.

Grid operators say otherwise.

Bloomberg reports that in areas within 50 miles of significant data center activity, wholesale prices have risen by as much as 267% over five years, with more than 70% of recorded price spikes occurring near that activity. Dominion, the largest utility in Virginia — home to “Data Center Alley” — cited data center demand as a factor in proposing a base-rate increase that would add $8.51 a month to typical residential bills in 2026 and another $2 a month in 2027. That comes after rates already surged 13%.

Then look at PJM, the nation’s largest grid. Monitoring Analytics, PJM’s independent market monitor, says consumers will pay $16.6 billion to secure future power supplies from 2025 through 2027, with about 90% of that bill tied to projected data center demand. Monitoring Analytics called it a “massive wealth transfer” from consumers to the data center industry.

Costs spread across state lines. Maryland transmission infrastructure helps serve Northern Virginia’s data centers. In Baltimore, some residents have seen steep bill increases over three years, with additional increases anticipated starting mid-2026. Across the PJM region, capacity charges spiked 833% for the 2025-2026 period as supply struggled to keep up with these behemoths.

Texas faces its own version. ERCOT expects data center demand to exceed 22,000 megawatts by 2030, which could push wholesale rates up 22% or more, even before population growth enters the equation.

Argue about the projections if you want. Do not tell the public they will not pay more for data centers. They already do.

That reality explains why the industry resists any effort to put teeth behind its “we will pay our own way” pledge. Oklahoma state Rep. Jim Shaw (R) introduced HB 3724, which would have required data centers to pay their own way. Every Republican on the committee voted it down.

So the next time the pitch arrives — that you will not pay a dime extra once the facilities go live — treat it as marketing, not math.

Do not trust. Only verify.


My school’s AI challenge raised a scary question: What do students need me for?



I might have talked myself out of a job this week. I teach philosophy at Arizona State University, and the university wants to position itself as a leader in the AI revolution. I remain skeptical about AI’s ability to replace a humanities professor. Because of that skepticism, I signed up for what ASU called its AI Challenge.

My project involved what I called the “AI Dialogues.” I used ASU’s version of ChatGPT to hold Socratic-style dialogues, prompting Chat to reply as a given philosopher. I conducted dialogues with Chat as Aristotle, Hume, Marx, and even Lucifer. My students evaluated these exchanges to see how well Chat performed.


Chat could draw on public information and represent each thinker with reasonable accuracy. It also showed another trait: It wanted to please. It often leaned toward whatever it believed I wanted from the debate.

How does that work me out of a job? ASU now provides an AI that professors can customize for individual courses by uploading syllabi and course materials. Students can ask basic questions and receive answers that save me from writing emails that begin with, “Did you read the syllabus?” They can also ask what we covered in class and get quick explanations of key concepts and questions.

When I told my students about this feature, I asked them what they need me for at this point. I was joking — a little.

My classes depend on Socratic discussion. It is conceivable that ASU could project a realistic AI image of me at the front of the classroom and have it ask and answer questions with students. Maybe the only remaining edge is the “personal touch” of a real professor in the room. Even that could vanish if tuition becomes tiered: Students might pay less for “AI Anderson Socrates” than for the in-person version. Add one of Elon Musk’s Optimus robots made to look like Anderson, and I’m in trouble.

A new myth dies

Musk has been talking for months about how the AI revolution is upending the myth we have told for six decades about university education. The myth, he says, promised an escape from toil. Students were told a degree was the path to an air-conditioned job that avoids heavy lifting and involves spreadsheets.

But spreadsheets are exactly what AI does better than humans. The new John Henry isn’t competing to pound railroad spikes; he’s competing to calculate data. No human can keep up with a microprocessor.

In Musk’s view, jobs that involve toil become the “safe” jobs, while many degree-based jobs disappear — replaced by technicians who keep AI running while it calculates taxes, diagnoses medical problems, and writes legal paperwork. The university-educated track no longer looks like the safe route. Universities now compete not just with fewer students due to demographic decline, but with an increasingly outdated product that students may stop buying.

Toil may not stay safe

The problem is worse than Musk lets on. The first jobs on the chopping block might be “numbers jobs,” but Musk has also said he plans to produce 100 million Optimus robots within 10 years. If so, even many physical jobs may not remain protected from automation.

One version of this future says we enter a utopia: Food is plentiful, toil disappears, and we cash our basic income checks — though an AI could do even that for us. We end up living in “Wall-E.”


The more dystopian version looks like sci-fi depictions of AI overlords controlling humans as property — “The Matrix.” Or worse: Like Ultron, super-AI robots decide we must be exterminated to save us from ourselves and protect the planet. We build our own worst enemy.

Whichever future arrives, Musk may have highlighted something about human nature. We avoid suffering like toil. We build machines to avoid toil. And yet we uniquely need toil.

God introduced toil in the Garden of Eden after Adam sinned. Because of sin, we could no longer live in a paradise without toil. We must suffer and strive for our daily bread. History has been divided ever since between those who try to avoid suffering altogether and those who see suffering as a call to repent before God. AI is only the newest version of the philosopher’s stone.

AI as ‘philosopher’

Can I really be replaced by an AI philosophy instructor? I’m not worried.

What AI cannot do, in its counterfeit attempt to replace humans, is serve as an example of how to suffer well to attain wisdom. The Hebrew definition of wisdom is “skillful living.” Being told, “Here is an AI that can simulate skillful living,” is not the same as learning from a human who is actually skillful.

Students will still need to learn how to be wise themselves. A human professor who has actually done this will remain the gold standard that AI can only imitate. We can avoid the toil of learning to be wise — but we cannot avoid the need for it.


A federal 'kill switch' for your car is coming — and neither Democrats nor Republicans will stop it



The federal government is moving closer to giving your car the authority to decide whether you are allowed to drive — without a warrant, without due process, and with no guaranteed way to reverse the decision once it is made.

And it is happening not because of one party alone, but because Congress, across party lines, has failed to stop it.


No accident

It's no accident that all this happened quietly. It was written into law under the Biden administration’s 2021 Infrastructure Investment and Jobs Act, buried deep in Section 24220 — a provision few lawmakers publicly debated, but one that now threatens to fundamentally alter the relationship between Americans and their vehicles.

Section 24220 directs the National Highway Traffic Safety Administration to mandate “advanced drunk and impaired driving prevention technology” in all new passenger vehicles. In plain terms, it requires systems that continuously monitor drivers and can prevent a vehicle from operating if impairment is suspected. No breath test is required. No police officer is involved. The judgment is made by software.

Once flagged, a vehicle may refuse to start or restrict operation. Here is the most troubling part: Federal law provides no clear process for getting out of that lockout. There is no required appeal. No mandated reset timeline. No human review. Drivers can find themselves trapped in what critics have begun calling “kill switch jail,” with no guaranteed path to restore access to their own car.

This is not targeted enforcement. It applies to every driver, every time, regardless of driving history.

That alone should raise constitutional alarms.

Proven approach

Drunk driving laws already exist — and they work. Ignition interlock devices have long been required for convicted offenders, and there are 31 approved interlock systems currently in use nationwide. Those systems require a breath sample and are imposed only after due process. Section 24220 discards that proven, targeted approach and instead subjects all drivers to pre-emptive punishment, including those who do not drink at all.

To comply with the mandate, automakers may choose from a range of technologies: driver-facing cameras that track eye movement and head position; software that analyzes steering, braking, and lane-keeping behavior; or touch-based alcohol sensors embedded in the steering wheel or start button. None of these systems determine guilt. They calculate probability — and then deny access.

False positives are inevitable. Fatigue, prescription medications, medical conditions such as diabetes or neurological disorders, and even stress can trigger impairment alerts. Shift workers, caregivers, parents, and first responders are especially vulnerable. When the system is wrong, the consequences are immediate — and the driver has no guaranteed recourse.

Pre-emptive denial

This is not a passive safety feature like an airbag. It is a government-mandated, pre-emptive denial of mobility enforced by an algorithm.

Despite growing concern, Congress has chosen not to stop the mandate, with Democrats largely supporting continued funding and a number of Republicans also voting to keep the program intact.

In January 2026, the House voted on an amendment offered by Rep. Thomas Massie (R-Ky.) that would have blocked funding for NHTSA’s implementation of Section 24220. The amendment failed, allowing the mandate to continue moving toward full enforcement.

Supporters argue the technology does not allow government agents or police to remotely shut down vehicles. While that may be technically true today, the mandate still requires continuous driver monitoring. Once that hardware becomes standard across the national vehicle fleet, expanding its use becomes a political decision — not a technical limitation.


Privacy risks

Privacy and cybersecurity risks only deepen the concern. Any system capable of denying vehicle operation must meet extraordinarily high standards of accuracy and security. Those standards have not been proven at national scale. A malfunctioning or compromised system could strand drivers during extreme weather, medical emergencies, or in remote locations.

Cost is another unavoidable consequence. Vehicles are already becoming unaffordable for many Americans. Adding cameras, sensors, software, and compliance infrastructure will only accelerate price increases and reduce consumer choice. Drivers who want simpler, more reliable vehicles will have fewer options — because mandates do not allow opting out.

Proponents often compare this mandate to seatbelts or airbags. That analogy fails. Seatbelts do not prevent you from driving. Airbags deploy after an accident. This system intervenes before any wrongdoing occurs, based on assumptions rather than certainty, and enforces compliance by denying access altogether.

This is not about defending drunk driving. It is about stopping a government overreach that treats every driver as a suspect and hands control of personal mobility to software.

If Americans want to prevent this future, Section 24220 must be defunded — before “kill switch jail” becomes the default setting for the next generation of cars.

The following are the Republican members who voted against the amendment to block funding for NHTSA’s implementation of Section 24220:

Mark Amodei (Nev.-02)
Don Bacon (Neb.-02)
Stephanie Bice (Okla.-05)
Gus Bilirakis (Fla.-12)
Mike Bost (Ill.-12)
Ken Calvert (Calif.-41)
John Carter (Texas-31)
Tom Cole (Okla.-04)
Mario Diaz-Balart (Fla.-26)
Neal Dunn (Fla.-02)
Chuck Edwards (N.C.-11)
Jake Ellzey (Texas-06)
Randy Feenstra (Iowa-04)
Randy Fine (Fla.-06)
Brian Fitzpatrick (Penn.-01)
Chuck Fleischmann (Tenn.-03)
Vince Fong (Calif.-20)
Andrew Garbarino (N.Y.-02)
Carlos Gimenez (Fla.-28)
French Hill (Ark.-02)
Jeff Hurd (Colo.-03)
Brian Jack (Ga.-03)
John James (Mich.-10)
David Joyce (Ohio-14)
Thomas Kean Jr. (N.J.-07)
Mike Kelly (Penn.-16)
Jen Kiggans (Va.-02)
Kevin Kiley (Calif.-03)
Young Kim (Calif.-40)
Kimberlyn King-Hinds (Northern Mariana Islands-A.L.)
Darin LaHood (Ill.-16)
Nick LaLota (N.Y.-01)
Mike Lawler (N.Y.-17)
Frank Lucas (Okla.-03)
Nicole Malliotakis (N.Y.-11)
Celeste Maloy (Utah-02)
Brian Mast (Fla.-21)
Dan Meuser (Penn.-09)
Max Miller (Ohio-07)
Mariannette Miller-Meeks (Iowa-01)
Blake Moore (Utah-01)
Tim Moore (N.C.-14)
James Moylan (Guam-A.L.)
Greg Murphy (N.C.-03)
Dan Newhouse (Wash.-04)
Zach Nunn (Iowa-03)
Hal Rogers (Ky.-05)
Maria Elvira Salazar (Fla.-27)
Mike Simpson (Idaho-02)
Elise Stefanik (N.Y.-21)
Glenn “GT” Thompson (Penn.-15)
Mike Turner (Ohio-10)
David Valadao (Calif.-22)
Derrick Van Orden (Wis.-03)
Rob Wittman (Va.-01)
Steve Womack (Ark.-03)
Ryan Zinke (Mont.-01)

Google’s new motto: Don’t be Christian



Google once had an informal motto: “Don’t be evil.” Its operating principles now seem closer to: Be ideologically driven. Be opaque. Be arbitrary.

Google sells itself as online Switzerland — a neutral search engine that doesn’t tilt one way or the other. That neutrality vanishes fast when you search for something its algorithm doesn’t like. Suddenly the thing you want becomes strangely hard to find unless you already know exactly where it lives. If you don’t, good luck.


And good luck advertising it, too — if Google disapproves.

Most people still think of Google as a search engine. That’s outdated. Google is the 900-pound gorilla of online advertising through Google Ads. It has vacuumed up so much of the market that anyone who wants to advertise online usually has to go through Google’s pipeline, under Google’s terms, with Google acting as judge and jury.

This isn’t the print era, when advertisers bought space from newspapers and magazines directly, publication by publication. Today, a huge share of the ad economy runs through a single gatekeeper.

Some might call that a monopoly. Monopolies become even more dangerous when they turn ideological.

Google — and it is far from alone — leans hard left. It dislikes conservative and Christian content, and it has learned how to suppress it without leaving fingerprints. It buries the content in search rankings so that almost no one sees it unless they already know where to look. It throttles monetization. It blocks ads with vague warnings and “policy” language designed to end the conversation.

Google and TikTok now appear to be doing the same thing to faith-based content.

Have you heard of TruPlay? Probably not. That’s the point.

TruPlay is an entertainment app that offers faith-based games and videos for kids. It’s explicitly family-friendly — no sexual themes, no violence, no garbage disguised as “content.” Parents want that. Millions of them. There’s a market for wholesome screen time, and there’s money to be made providing it.

But according to the American Center for Law and Justice, Google has refused to do business with TruPlay for ideological reasons. The ACLJ says Google rejected TruPlay’s efforts to launch advertising campaigns, citing “religious belief in personalized advertising.”

Read that again. Google flagged religious belief as the problem.

The ACLJ says TruPlay tried to comply, filing appeals and revising its ad content repeatedly, only to receive the same rejection notices no matter what changes it made. The ads weren’t inflammatory. They were straightforward: “Turn Game Time into God Time,” “Christian Games for Kids,” “Safe Bible Games for Kids.”

Google’s policy supposedly prohibits “selecting an audience based on sensitive information, such as health information or religious beliefs.” But TruPlay wasn’t targeting a religious audience or harvesting private data. It was advertising Christian kids’ content to the general public.

Google’s response wasn’t “you’re targeting.” It was “your content is too sensitive to advertise.”

That’s the move. “Sensitive” once meant porn, violence, or content not suitable for children. Now it means “Christian games for kids.”

TikTok, the ACLJ says, applied the same logic with even less transparency. The platform allegedly suspended TruPlay’s advertising account over unspecified “repeated violations,” without explaining what those violations were. The ACLJ says one rejected ad contained the word “church.” Another issue allegedly involved an App Store preview image showing Jesus on the cross — not in the ad itself, but in the app’s images. The ACLJ claims TikTok barred advertising anyway.


You can’t fix what you’re not allowed to understand. That’s the point of opacity. You don’t get a rule you can follow. You get a verdict.

What makes this even more revealing is the economic angle. This isn’t Google or TikTok avoiding ads that risk scaring off customers. TruPlay offers the kind of content parents actively want. Platforms should want that money. Instead, they appear willing to lose revenue just to suppress anything overtly Christian and family-friendly.

The ACLJ has sent a letter to Rep. Jim Jordan (R-Ohio), chairman of the House Judiciary Committee, urging an investigation into what it calls “systemic discrimination” against Christian content creators and advertisers — part of a broader pattern of viewpoint-based censorship.

Google and TikTok will respond with the standard defense: We’re private companies. We can do what we want.

Fine. But stop pretending you’re Switzerland. If you present yourself as a neutral platform open to all, while quietly functioning as a political gatekeeper, you don’t get to hide behind the language of neutrality when people notice the double standard.

You can’t have it both ways. Either you’re Switzerland — or you’re not.

Google and TikTok are not. It’s time to treat them accordingly.

How do you solve a problem like Wikipedia?



Wikipedia has recently come under the microscope. I take some credit for this, as a co-founder of Wikipedia and a longtime vocal critic of the knowledge platform.

In September, I nailed (virtually) “Nine Theses About Wikipedia” to the digital door of Wikipedia and started a round of interviews about it, beginning with Tucker Carlson. This prompted Elon Musk to announce Grokipedia’s impending launch the very next day. And a national conversation evolved from there, with left- and right-leaning voices complaining about the platform’s direction or my critique of it.


As its 25th anniversary approaches, Wikipedia clearly needs reform. Not only does the platform have a long history of left-wing bias, but the purveyors of that bias — administrators, everyday editors, and others — stubbornly cling to their warped worldview and vilify those who dare to contest it.

The “Nine Theses” are the project’s first-ever thoroughgoing reform proposal. Among the ideas:

  • Allow multiple, competing articles per topic.
  • Stop ideological blacklisting of sources.
  • Restore the original neutrality policy.
  • Reveal the identities of the most powerful managers.
  • End unfair, indefinite blocking.
  • Adopt a formal legislative process.

Such ideas were bound to be a hard sell on Wikipedia. It has become institutionally ossified.

Nevertheless, I was delighted that the discussion of the theses has been robust, without much further prodding from me. Following the launch, Jimmy Wales actually stepped into the fray on the so-called talk page of an article called “Gaza genocide,” chiding the participants for violating Wikipedia’s neutrality policy. I chimed in as well. But the criticism was thrown back in our faces.

This brings me to the deeper problem: Wikipedia is stuck in its ways. How can it possibly be reformed when so many of its contributors like the bias, the anonymous leadership, the ease of blocking ideological foes, and other aspects of dysfunction? Reform seems impossible.

Yet there is one realistic way that we can make progress toward reform.

Above all else, those who care should get involved in Wikipedia. The total number of people who are really active on Wikipedia is surprisingly small. Those making at least 100 edits in a given month number in the low thousands — and even that level of activity takes only an hour or two per week. Those who treat it as a part-time or full-time job — and so have real day-to-day influence — number in the hundreds.

In interviews, I have been urging the outcasts to converge on Wikipedia. You might think this is code for saying that conservatives and libertarians should try to stage a coup, but that is not so. Hindus and Israelis, among others, have also complained of being left out in recent years. The problem is an entrenched ruling class. As long as Wikipedia remains open, it is entirely possible for those who think differently to get involved.

RELATED: Wikipedia editors are trying to scrub the record clean of Iryna Zarutska’s slaughter by violent thug

Photo by Peter Zay/Anadolu via Getty Images

If you are a conservative or libertarian who is concerned about the slanted framing of Charlie Kirk’s assassination, get involved. If you are a classical liberal who is alarmed by the anti-Semitism within Wikipedia — like Florida Democrat Debbie Wasserman Schultz — it is time to make your presence felt. Wherever you may fall on the ideological spectrum, I call on good-faith citizens to become engaged editors who take productive discourse seriously, rather than scapegoating “the other side.”

Even a dozen new editors could make a difference, let alone hundreds or thousands who might be reading this column. Given that Wikipedia attracts billions of readers, in addition to featuring prominently in Google Search, Google Gemini, and elsewhere, improving the platform will strengthen our collective access to high-quality information across the board. It will bring us closer to truth.

So how do we solve the Wikipedia problem? With you, me, and all of us — individual action at scale.

Editor’s note: This article was originally published by RealClearPolitics and made available via RealClearWire.