The third way: Navigating AI’s knife edge



When it comes to the impending AI takeover, two main camps of belief get the most attention: those who welcome technological singularity, believing it will deliver humanity into a utopia of universal basic income, freedom, and prosperity, and those who deeply oppose it, fearing it will render humanity useless and usher in the apocalypse.

But is there a middle ground — a reasonable center that embraces the good AI offers but opposes the dystopia it threatens?

BlazeTV hosts Christopher Rufo and Jonathan Keeperman believe there is.

On a recent episode of “Rufo & Lomez,” the duo spoke with Samuel Hammond, an artificial intelligence researcher at the Foundation for American Innovation, about the “sweet middle ground” of artificial intelligence.

Hammond acknowledges the dual nature of artificial intelligence. “It's the thing that's going to build us all-new efficient defended software, but also in the meantime enable hackers to hack that software; it's a thing that will discover new drugs but also create new viruses. And to be able to hold both those realities in your mind is incredibly taxing.”

In the same way that the Industrial Revolution created both wealth and the administrative and welfare states, so the AI takeover will have both benefits and drawbacks, he says.

Keeperman inquires about the regulatory measures being taken by AI developers to mitigate the potential damage.

Hammond admits that regulation is difficult because of the sheer scope of AI. Like electricity, “it’s this massive umbrella term,” he says.

“The areas where people have legitimate concerns are easier to gerrymander, right? It's things like designing novel bioweapons or very powerful, autonomous malware that could hack into your program and go rogue. These things are difficult to keep in a box,” he explains.

On the upside, however, “getting to advanced AI first will have major national security implications.”

“The fact that we have a friendly U.S.-based company that built a system like Mythos first that could, in principle, hack into all these different critical pieces of infrastructure is an incredible fortune for us, right?” says Hammond, noting that this allows the U.S. to “patch up and harden [its] systems” before other countries reach the same capabilities.

On the other hand, the U.S. government currently has little control over the companies that are leading AI development.

As of now, these companies “are being benevolent with their use of this and certainly have the intentions to try to be sort of trustworthy and good stewards of this technology, but as a matter of state governance, do we actually have any greater control over this technology than, let's say, China?” Keeperman asks.

Hammond admits that we’re on precarious terrain.

“I think of us as sort of on this knife edge between a Chinese-style panopticon or some kind of anarchy where things kind of fall apart,” he says, advocating for a “third way.”

“We need a strong state to enforce property and contract and our rights, but that state can't be completely divorced from rule of law,” he says. At the same time, however, “democracies have committed genocide,” whereas “private corporations just want to maximize shareholder value.”

In the end, Hammond urges us to reject both utopian dreams and apocalyptic fears in favor of a pragmatic middle course: building institutions strong enough to govern AI’s immense power, yet constrained enough to prevent it from becoming a tool of tyranny or disorder.

Want more from Rufo & Lomez?

To enjoy more of the news through the anthropological lens of Christopher Rufo and Lomez, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Kids are being 'discipled by AI' — a Baptist pastor says he has the solution



Whether children like using artificial intelligence chatbots is no longer in question; the question now is what they are using them for.

According to recent polling, the majority of teens use AI chatbots for homework or as a search engine.

'People's children are being discipled by AI.'

Generating summaries, creating images, or just generic "fun" are listed in 2025 polling as the next most frequent uses. Another 10% of children ages 13 to 17 say AI does most or all of their school work.

At the same time, nearly 75% of U.S. teens said in a survey last year that they have tried out AI companions. It is that large number of American youth that Pastor Erik Reed was concerned about when he created Dominion, a theological chatbot.

"People's children are being discipled by AI," Reed told Baptist News. "Many young people seek out companionship or counseling from bots, and some models have been built to offer constant feedback loops of affirmation and love, giving users an addictive dopamine hit. They're going to flatter you at every turn."

The solution, the Southern Baptist leader said, is a competitor with the same level of functionality but with "Christian guardrails to safeguard what it's feeding back to people."

The head of the Journey Church in Lebanon, Tennessee, said that AI should be brought under "the Lordship of Christ," and thus he built the chatbot to exist only within "the authority and sovereignty of God."

RELATED: 'I wanted to thank God in public': Fighting tears, Victor Glover gives legendary speech on return to Earth

Jon Cherry/Getty Images

The chatbot was trained on selected theological texts, verses, catechisms, and traditional logic, Reed stated. It is protected by internal checks and balances that the user cannot influence, which is easier said than done.

The chatbot reportedly prioritizes "first-tier issues," defined as things that all Christians find to be true, over second-tier issues that may differ per denomination. Third-tier issues were listed as almost all politics.

A demo of the product says that everything discussed with the chatbot "happens inside an environment that filters out unbiblical counsel and keeps the focus on wisdom, holiness, and discipleship."

RELATED: These Apple privacy perks won't hide you from the Feds

JEAN-FRANCOIS MONIER/AFP/Getty Images

The demo showed that Dominion can summarize news aggregated over a 24-hour period, for example, and that it can also give advice on personal matters, which it presented from a religious point of view.

Co-founder Brandon Maddick describes his work as a "Christian responsibility" to shape minds in truth rather than let them be shaped by AI.

"We believe faithfulness for the Christian is to redeem AI for the glory of God," he said.

Notably, Maddick calls his congregation “the least SBC-looking church you’ll find," with female deacons and "Reformed-ish theology."

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

The crazy reason some AI obsessives love it when their chatbot talks like a caveman



Coders using Claude, AI giant Anthropic's leading large language model, discovered a shortcut that saves them money and simplifies the entire engagement with the LLM down to mere syllables.

The protocol, since made into an app, is called Caveman.

Caveman makes it possible to save money without sacrificing output by reducing the linguistic sophistication of the LLM. The logic is simple: The less the AI has to talk to you in fully conversant language, the less compute it demands. And the less compute it demands, the fewer "tokens" it costs. Like all LLMs, Claude runs on tokens, which users buy with dollars from the chatbot's maker.

As the world of the printing press is forgotten, communication transforms.

It’s a crazy workaround, but it pays whopping dividends. If you can tolerate talking to a digital Neanderthal, you can save up to 75% on operating costs.

Devolution?

With that, we’re face to face with the raw evidence that tech doesn’t transcend our culture’s many cautionary refrains. Garbage in, garbage out. Easy come, easy go. Live by the gun, die by the gun. In other words, “It’s about the financial system and the soul,” to quote Ardian Tola, founder of the Bitcoin-powered platforms Canonic and Ark.

To give an example of what's going on here, consider the coder at his desk prompting Claude to, say, reconfigure some corporate software to a new spec. The coder used to do this work himself: Going into the alien lines of code and drawing on his experience, knowledge, creative problem-solving, and time, he could make these alterations in various ways and to various levels of elegance. For the past several decades, that coder commanded, and deserved, a substantial salary: It took real skill and know-how to move with speed and efficiency.

That kind of coder and tech worker is being closed out now. The 80,000 layoffs and counting in the industry this year send a pretty clear message about where this is headed. Corporate reliance (and crucially, dependence) on AI is just about baked in. Companies like Oracle and Stripe are letting go of workers right after they complete their final task — of training their LLMs to do their job.

RELATED: Trump administration has a job opportunity for adult video gamers

Emanuele Cremaschi/Getty Images

Today the coder clinging to his mid-tier salary prompts an LLM to alter the code, and he is “spending” tokens with each word and symbol required to perform these prompts. So if a prompt drags on — like “Claude, move the header up and replace it with the PayPal button, and let me see what they look like if everything is balanced in mobile view” — it is going to cost the corporation or the contract coder more than if the prompt were something closer to “Switch header w/ pay button.”
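The arithmetic behind that trade-off can be sketched in a few lines. This is a rough illustration only, using a common rule of thumb of about four characters per token and a made-up per-token price; it is not Anthropic's actual tokenizer or rate card.

```python
# Illustrative only: why terse prompts cost less than verbose ones.
# Assumptions (hypothetical): ~4 characters per token, and an invented
# price of $0.000003 per input token. Real tokenizers and prices differ.

PRICE_PER_TOKEN = 0.000003  # hypothetical rate, not Anthropic's

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, round(len(text) / 4))

def estimate_cost(text: str) -> float:
    """Estimated dollar cost of sending this text as a prompt."""
    return estimate_tokens(text) * PRICE_PER_TOKEN

verbose = ("Claude, move the header up and replace it with the PayPal button, "
           "and let me see what they look like if everything is balanced "
           "in mobile view")
terse = "Switch header w/ pay button."

savings = 1 - estimate_cost(terse) / estimate_cost(verbose)
print(f"verbose: ~{estimate_tokens(verbose)} tokens, "
      f"terse: ~{estimate_tokens(terse)} tokens")
print(f"estimated savings: {savings:.0%}")
```

Under these toy numbers the terse prompt comes out dramatically cheaper, which is the whole Caveman wager: multiply that per-prompt saving across thousands of daily prompts and the discount becomes material.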

In terms of efficiency, for a while anyway, this probably adds a layer of challenge for the coder, works the old brain plasticity, and, all-importantly, looks good to accounting.

Our souls at stake

One way to read what "the financial system and the soul" means now is this: If we, as a species, determine that cost efficiency and capital concentration are the supreme values, against which all others will be tested and into which they will be subsumed, we would be wise to be very honest about our view of the human soul.

That’s because we’d be saying, again as a species, that the soul is secondary to money at best and probably doesn’t matter or even exist. While individuals, you and I, may disagree immediately (and others may weigh in with seemingly very judicious but ultimately jejune statements with regard to complexity, progress, and sacrifice), the order of the value system is still coldly simple: money over soul in the end. There’s no workaround.

It might come fast or it might take some years.

Marshall McLuhan and intellectual heirs like Walter Ong theorized decades ago that tech would impose a “new orality” as literacy fades. After all, humanity existed prior to the printing press too. Print literacy greased the wheels of our communication with respect not just to facts but to each other and our own inner reality — our soul.

Most of that theoretical work boils down to the notion that our technologically enhanced means and methods of communicating will slip away from literacy into something more offhand, flexible, vibey. The rise of “vibe coding” provides strong confirmation: As the world of the printing press is forgotten, communication transforms.

The issues here are manifold and of grave concern. You cannot vibe Mass or liturgy, though you can feel it. In this oncoming diminution of the human, where trade-offs are determined by that same money-over-soul diktat, every individual may have to fight, day in and day out, merely to preserve his value system.

Whether that system is inherited and carried over ages of ages, or is just something as temporal as a preference for '80s comedy films, the choices made at the ultra-ubiquitous-tech layer are not going to “align.”

Care must be taken when wandering into the future, wielding, as we do, these handheld high-caliber military industrial complex-made weapons. And just wait until the AI innovators deliver handsfree products intended to replace the smartphone. By itself, coders and prompters regressing to oral communication is fine, passable for certain applications, but the slackening and homogenization of human communication into sheer memery, coupled with the time pressure we all feel daily now, is powered by a force that wants to invade all human territories, including true creativity, religion, and the family. In short, it wants to invade the soul. If we let that happen, what will become of our already beleaguered society and country?

The divisive issue that could decide the midterms now has $200 million on the line



A bet on artificial intelligence is driving a nine-figure investment in the political world ahead of the midterms.

With millions of dollars on hand, one super PAC insistent on pushing artificial intelligence is injecting cash into political campaigns across the country.

'About half of Americans are more concerned than excited about the increased use of AI in daily life.'

With the help of some generous venture capitalists, the super PAC Leading the Future has reportedly surpassed $140 million raised in just about a year and a half.

The latest donations have added to the $125 million raised in 2025.

Leading the Future — which says it is focused on "advancing a positive, forward-looking agenda for AI innovation in Washington, D.C." — has been willing to pump money into candidates from either party and has done so in states like Illinois, New York, and Texas.

Business Insider reports that the PAC generally pushes candidates who show broad support of AI and tech innovation, while keeping regulations light.

This included $1.4 million to Texas Republican candidates across four districts: Tom Sell, Jace Yarbrough, Jessica Steinmann, and Chris Gober.

For Democrats, $1.1 million was reportedly provided to former Rep. Melissa Bean, with $1.4 million going to Jesse Jackson Jr., both in Illinois.

The PAC is also supporting Democrat Alex Bores' run to replace Rep. Jerry Nadler (D) in New York, according to NOTUS.

RELATED: Catastrophic new iPhone threat leaked to hackers — are you safe?

Mark Felix/AFP/Getty Images

The jury is still out on public support for AI overall, with opinion split roughly 50/50 between skepticism and acceptance.

Pew polling from 2025 showed that about half of Americans are more concerned than excited about the increased use of AI in daily life. About half of respondents also said AI will worsen the ability to think creatively and form meaningful relationships.

The data also had Republicans and Democrats split on their concern. Half of respondents from both parties said they were more concerned than excited about the increased use of AI.

About 10% from both parties said they were more excited than concerned.

Favorability floats around 50% in 2026 polling from Data for Progress. AI is viewed most favorably by black respondents (61%), those under 45 (61%), and men (57%), and mostly unfavorably by women (51%) and those over 45 (52%).

RELATED: Video: Why is a Chinese robot chasing wild boars in Poland?

Roberto Salomone/Bloomberg/Getty Images

Public skepticism may be the biggest hurdle for the super PAC to overcome, but it is also facing opposition money.

Another network called Public First is pledging $50 million to candidates who support regulation, in either party, in 2026.

Public First positions itself as representing American voters who have concerns about "the impacts of AI on kids, workers, consumers, and the American economy."


Video: Why is a Chinese robot chasing wild boars in Poland?



A popular Chinese robot is going viral for a video showing it chasing wild boars, but many aren't sure why that happened.

The robot itself, nicknamed Edward Warchocki, is a Unitree G1 model available for public purchase that popped up in Poland.

'Older ladies or gentlemen love talking with him.'

The Chinese-made robots go for a whopping $23,809 for the basic model, all the way up to $58,365 for the "ultimate edition."

Recently, this particular model was seen running through the streets of Warsaw, Poland, chasing wild boars. While hilarious, there is actual serious context behind the content.

Major Polish cities like Krakow have endured a sprawling wild boar issue — even in city centers — for years, resulting in authorities urging their population to resist feeding the somewhat approachable beasts.

Other cities have resorted to planting flowers with vivid colors and sweet scents in order to deter the pigs.

Since at least 2019, there have been warnings about disease spread by the animals, which reportedly carry illnesses like African swine fever and hepatitis E, prompting calls to cull thousands of them.

Enter Edward the robot, who was recently seen shooing the animals away from downtown Warsaw.

RELATED: Man vs. machine: Chinese robots will compete against humans in Beijing half-marathon


As reported by Interesting Engineering, Edward is a Chinese humanoid that operates mostly on its own: It is not remotely controlled, is described as unscripted, reacts dynamically to its surroundings, and holds adaptive conversations in Polish using AI.

Edward is most popular with Polish Boomers, its owners say, as they are excited to interact with a robot for the first time.

Radosław Grzelaczyk and business partner Bartosz Idzik started in cryptocurrency, but they now try to create viral videos with their robo-friend.

"Personally, the sight of this robot chasing boars does not surprise me anymore," Grzelaczyk told TVP World.

"Older ladies or gentlemen love talking with him," Grzelaczyk added. "These people are always delighted that they lived to see times in which robots move through the streets."

RELATED: China debuts 'scary' martial arts robots capable of backflips and weapons training

Kevin Frayer/Getty Images

In China, the robots have been shown to be capable of advanced feats. Last year, they competed in a half-marathon and were showcased in February performing kung-fu, gymnastics, and weapons work.

The focus during China's annual CCTV Spring Festival gala was innovation in multi-robot coordination and fault recovery, referring to a robot's ability to get up after tumbling down. China showed the robots in choreographed performances and dancing as well.

There are also definitive warning signs of spying by Chinese robot manufacturers. Axios reported on two security researchers who found that Unitree Robotics had allegedly pre-installed a backdoor on its Go1 robot dogs that allowed customer surveillance.

Other research warned about exploits that allowed for remote takeover of the humanoid bots, among other models.

Neither Edward nor his owners responded to Return's request for comment.


Anthropic says its own new model is too dangerous for the public — but not these Big Tech companies



Anthropic is sending out a warning that its artificial intelligence model is sophisticated enough to undo decades of research.

The company behind Claude, the AI chatbot that has been ripped off and turned into a free, public model, is hoping to work with a consortium of tech companies to button up security measures ahead of its new model's release.

'It has found vulnerabilities, and in some cases crafted exploits.'

Anthropic's Mythos model of Claude AI will only be available to 40 select companies to be used for the power of good, the company claims.

It represents "the starting point for what we think will be an industry change point, or reckoning, with what needs to happen now," said Logan Graham, head of Anthropic's vulnerability testing team.

The company fears that its new AI model is so good at finding cracks in cybersecurity that it must only be shared with companies it deems capable and responsible enough to prepare for possible attacks when Mythos goes public.

"This model is good at finding vulnerabilities that would be well understood and findable by security researchers," Graham said. "At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them."

RELATED: How to power the AI race without losing control

Samyukta Lakshmi/Bloomberg/Getty Images

Anthropic will reportedly commit up to $100 million in credits to the project, meaning it will forgo the amount it would typically charge for that volume of its chatbot's usage.

Labeled Project Glasswing, the initiative to shore up cybersecurity will grant Mythos access to handpicked companies chosen largely from Big Tech like Amazon, Apple, Google, and Microsoft. The group is rounded out by internet infrastructure and cybersecurity giants like Broadcom, Cisco, CrowdStrike, Nvidia, and Palo Alto Networks, along with financial titan JPMorgan Chase and key open-source nonprofit the Linux Foundation.

This is not the first time an AI company has warned that its product is too dangerous for the public, and the track record gives readers a way to gauge whether Claude is as dangerous as its creators purport it to be.

In 2019, OpenAI sent out a warning ahead of its release of GPT-2, claiming that its capabilities — now vastly eclipsed by later models — could be used to mass-produce propaganda or misleading text.

As Wired reported at the time, OpenAI said GPT-2 was too risky to be released to the general public.

RELATED: Claude, Anthropic's AI assistant, slammed by Elon Musk for anti-white responses to simple prompts

Claude has been in the news for alleged missteps, leaks, and accidental postings throughout the past year, and while it may not be a household name yet, it has raced its way through the tech sector as a go-to for "agentic" work building software, apps, and even companies.

In addition to its model being open-sourced and used by the general public for free, the company has been noted for "accidental" postings of its own code.

Anthropic "accidentally uploaded a file to a public repository that's just meant to help developers understand how to use their product" and "exposed some of the source code of Claude," reporter Aaron Holmes explained recently.

Proprietary information was further leaked in another alleged accidental posting, this time through a blog draft that revealed "internal source code."

The company seems poised for consistent marketing battles, both willing and unwilling, from its high-stakes lawsuit against the federal government labeling it a supply chain risk to the blowback it has received from putting a woman closely linked to the cultish Effective Altruism movement in charge of its AI's "Constitution."


The top 5 dangers of UBI



Social media is rife with warnings that AI will take everyone’s jobs within the next one to five years. If true, mass unemployment will become a mainstay of modern life, sparking questions as to how civilization as we know it will survive. The big-brained elite think they have a solution through universal basic income — with some optimists like Elon Musk claiming that high basic income is the wave of the future — but this idealistic concept poses several dangers severe enough that they could dismantle America and bring about the end of the world.

1. The death of capitalism

Let’s get this one out of the way first: UBI is a gateway to socialism. In a world where the people earn nothing and everything of value is handed down from on high, the capitalist system that made this country great ceases to exist.

Forced dependence, by any other name, is a form of slavery.

Without a consistent job or a way to earn a steady salary, the people must become dependent on the elite who control the money and dole it out at their discretion. Who exactly is expected to do this honestly and fairly? The government has shown itself to be an unreliable steward, especially on the left as the pursuit of equity ensures some groups — like white, straight men — are intentionally marginalized in favor of minority groups. Private companies don’t seem like good benefactors either, as many of them are currently firing employees in favor of AI, simply to keep more money for themselves.

Even if the UBI rollout magically goes off without a hitch, capitalism faces another hurdle: People are less likely to buy products and services when they live on a fixed basic income. A 2024 study found that UBI recipients mostly spent the money on necessities like food and transportation while withholding their dollars from the more frivolous expenses that drive the American economy.

2. Financial inequity

The left’s disdain for wealthy Americans is well-known, with politicians regularly calling for the rich to “pay their fair share,” because why should you keep your money when the government can have it instead? Right now, the left tries to confiscate as much of the people’s earnings as possible through taxes — like California’s outrageous wealth tax — and if given the chance, they’d gladly redistribute those funds to groups that didn’t earn it.

RELATED: Why doesn't money make you happy?

MicrovOne/Getty Images

Universal basic income would install a fast lane to the left’s unofficial wealth redistribution program. Once in power, they would get to decide which groups receive UBI, as well as the amounts that are distributed. In a left-leaning world, that could mean minority groups get more basic income while “privileged” groups receive less, finally giving them the power to push the “equity” they’ve chased since the Biden administration.

3. The end of the American dream

While Elon Musk’s “high basic income” is a novel idea, the reality of a socialist system means that most of us will get a meager allowance while the elite keep the lion’s share for themselves. In doing so, this will create a larger divide between the upper class and lower class. At the same time, the middle class who can’t work, can’t earn money, and can’t get a leg up will also fall into the lower-class bracket.

Under UBI, the middle class will be hollowed out, permanently relegating the majority of Americans to poverty. Even worse, this new system will ensure that no one can escape the lower class simply because they don’t have a way to earn more money than the elites are willing to give. Job scarcity and financial dependence will keep the poor in check, and the American dream will cease to exist.

4. Freedom isn’t free

Our forefathers promised the people life, liberty, and the pursuit of happiness. They made a social contract, one that still stands to this day. But if the jobs go away, UBI is instated, and the people must depend on someone else for their next paycheck, the Declaration of Independence loses its power.

Simply put, the people can’t be free if we’re forced to depend on politicians, benefactors, or elitists to provide our way of life. Forced dependence, by any other name, is a form of slavery. Universal basic income gives the elite the power to take our rights and render our founding documents null and void.

5. One step closer to the end times

Last but not least, UBI is one of the final levers required to spread the mark of the beast, the precursor to the end times.

In the New International Version of the Bible, Revelation 13:16-17 says: “It also forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads, so that they could not buy or sell unless they had the mark, which is the name of the beast or the number of its name.”

This doesn’t just mean you can’t buy or sell products unless someone says so. It also means you would need the mark to receive UBI payments.

To put it bluntly, it’s easier to force the people to sell their souls when their means to work, earn money, and be free are all taken away. Even if UBI isn’t the mark itself, it’s a Trojan horse that will usher in top-down control that can be exploited by the most evil forces our world has ever known. It’s exactly what the devil wants and needs before the book of Revelation comes to pass.

Is universal basic income inevitable?

In a word: no, not yet. The dangers above can only materialize if both of the following claims about the ongoing AI race are true:

  • AI will be effective enough to fully replace human jobs, a feat that’s proving difficult with continuous hallucinations, mistakes, and more.
  • AI will have the power to produce endless mountains of cash. There can only be enough basic income for everybody — even in small amounts — if AI can print infinite money.

Assuming these are true, more roadblocks stand in the way of an AI-controlled economy.

A crippled economy

Businesses are currently run by people who buy products and services from other human-led companies. Some businesses sell products to each other (B2B), while other businesses sell straight to consumers (B2C). This cycle is the beating heart of capitalism.

If companies are suddenly all run by the same AI platforms, they’ll no longer need to buy digital services from each other to get work done. They can simply use AI to build custom versions for their own companies at little or no extra cost, thus cutting out third-party vendors and partners, which will ultimately make some companies obsolete. In fact, this loophole has the power to take down the entire digital B2B market.

On the commerce side, consumers face a different problem. They can’t use AI to manufacture physical products for themselves — like iPhones, PCs, and game consoles — but under the universal basic income strategy, they are more likely to hold their money for necessary purchases than to spend it like they do today. This monumental shift in spending habits could also cripple companies and the market, or at the very least, it could stifle year-over-year growth.

In short, universal basic income, ushered in by the revolution of AI, would be a huge disaster for American workers, the American economy, and the American dream. All of it is in jeopardy unless the government passes regulations that prevent mass job loss. Luckily, after kneecapping the states’ ability to regulate AI via executive order, the federal government is finally stepping up by introducing the National AI Legislative Framework and the Trump America AI Act. More on that soon.

West Virginia Republicans are betraying their voters for AI special interests



There is a reason why most red-state Republican leaders fail to reflect the political values of their constituents. They represent the special interests they work for rather than the whole of the people.

Nowhere is this more evident than with the ravaging of West Virginia by generative AI data centers, promoted by people like House of Delegates Speaker Roger Hanshaw, who legally represents special interest groups fighting poor, local communities in court.

The same man who was instrumental in stripping localities of their ability to block data centers is now representing the people behind those data centers in court.

Remember the provision in the One Big Beautiful Bill Act of 2025 that originally attempted to strip all state and local governments of any ability to block data centers from being built? Well, last year, West Virginia enacted just such a ban at the state level. Hanshaw shepherded HB 2014 to Republican Gov. Patrick Morrisey’s desk.

Among many special tax and regulatory favors offered to data centers, this bill removed local jurisdiction over the siting, zoning, and operating of certified high-impact data centers and microgrids.

Thus, companies like Google, Meta, and OpenAI could work with state politicians bought into their pay-for-play and force their way into any community. And what better person to be fighting for them than the speaker of the House?

While serving as speaker, Hanshaw filed a notice of appearance in the appeal to the Department of Environmental Protection’s Air Quality Board on behalf of his client MGS CNP1 LLC, which is an affiliate of Houston-based Fidelis New Energy working on a data center project in Mason County.

This was in the middle of the session and just one week after the state House of Delegates passed legislation making it easier for these projects to obtain certification with the Department of Commerce.

Then, just two days after the session ended, Hanshaw took on a case through his work at Bowles Rice for Fundamental Data, the company working on powering the data center bonanza in Tucker County.

So the same man who was instrumental in stripping localities of their ability to block data centers is now representing the people behind those data centers in court against local community groups appealing the DEP’s permit issuance.

It was the Tucker County fight that led me to speak out nationally against this mindless business model of raping red-state land, power, and water for a form of generative AI that serves nothing but chatslop and the surveillance state.

Last August, I vacationed in Tucker County, home to the gorgeous Blackwater Falls State Park and Canaan Valley. The county voted for Trump by a 50-point margin, and its people are the forgotten men that MAGA was supposed to represent.

RELATED: How to power the AI race without losing control


I spoke with several locals who were irate beyond words about the injustice occurring in a state with barely any Democrat elected officials.

What’s worse is that West Virginia is also being violated with endless transmission lines to power the blue-state “data center alley” in northern Virginia. According to a report from the Institute for Energy Economics and Financial Analysis, West Virginia energy consumers will be expected to pay $572 million in higher rates to fund the rope to hang themselves.

What is so offensive is that these projects are not even creating jobs. According to the February JOLTS report from the BLS, construction remains in its deepest slump since the Great Recession, despite these so-called data center projects. Oracle, which is at the center of the cloud computing behind these data centers, is laying off 18% of its workforce.

Shockingly, Hanshaw and his minions attempted to pass even greater handouts for data centers, offered to no other industry, in addition to what was in HB 2014.

This session, they introduced SB 623, which offered a complete property tax exemption and sales tax exemption on all data center equipment. They also introduced HB 4013, which would have created a new tax credit available to data centers to offset all state income, sales/use, franchise, and payroll withholding taxes based on capital investments, construction costs, and wages.

How many jobs did they have to create to qualify? Just 10! Which, of course, is a tacit admission that these behemoths don’t create many jobs, despite their enormous footprint, cost, and consumption of power.

In other words, Agenda 2030 is being fulfilled right under our noses in a state where Republicans control both houses of the legislature with 32-2 and 91-9 majorities.

What West Virginia, with its mind-numbing GOP majorities, shows is that the lack of conservative outcomes under GOP control is not due to a lack of power or votes but too much access to money and special interests.

How to power the AI race without losing control



The artificial intelligence revolution is here, and it arrives charged with the capacity to fundamentally change society for better or worse.

America is currently leading the world in AI development. U.S. companies are building the most advanced models, attracting the most capital, and designing the infrastructure that will shape the next century. But one increasingly obvious constraint stands in the way: access to electricity.

The political consequences of rapid automation could be just as transformative as the technology itself.

Energy scarcity is only half the story. Even if we succeed in generating the power required to fuel the AI revolution, we must confront a deeper challenge. The same technology that promises medical breakthroughs and economic growth also carries profound societal and even existential risk.

If America wants to win the AI race, we will need to consider a massive expansion of energy production and an equally massive expansion of vigilance.

The energy bottleneck

Modern AI models are trained and deployed in massive data centers packed with tens of thousands of high-performance graphics processing units running continuously. Training a single frontier model can require weeks or months of nonstop computation, while everyday AI tools used by millions of people must process queries around the clock.

These facilities consume electricity at industrial scale, rivaling entire cities in their power demands. In fact, the hyperscale Stargate data center in Saline Township is projected to consume the same amount of electricity as 1.17 million homes.
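As a rough back-of-envelope sketch of what that comparison implies (assuming, as a hypothetical round figure, about 10,500 kWh of annual electricity use per average U.S. home; the per-home number is not from the article), the claim works out to roughly 1.4 gigawatts of continuous draw:

```python
# Translate "as much electricity as 1.17 million homes" into annual
# energy and average power. The per-home figure is an assumption.
homes = 1_170_000
kwh_per_home_per_year = 10_500  # assumed average U.S. household usage

annual_twh = homes * kwh_per_home_per_year / 1e9   # kWh -> TWh
hours_per_year = 8760
avg_gw = annual_twh * 1e12 / hours_per_year / 1e9  # TWh/yr -> avg watts -> GW

print(f"{annual_twh:.1f} TWh per year")    # prints "12.3 TWh per year"
print(f"{avg_gw:.1f} GW continuous draw")  # prints "1.4 GW continuous draw"
```

Under that assumption, the Saline Township figure sits squarely in the gigawatt class of facilities described in the following paragraphs.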

The understanding of just how much energy is needed to power the AI revolution is still unfolding across the industry. Just a few years ago, Silicon Valley leaders were still thinking in megawatts.

Meta CEO Mark Zuckerberg, speaking on a podcast less than two years ago, said his company would build larger AI clusters “if we could get the energy to do it,” describing 50-to-100-megawatt facilities and speculating that 1-gigawatt data centers were probably inevitable someday.

Today, 1-gigawatt facilities are on the smaller end of planned AI infrastructure, with projects of up to 5 gigawatts already in motion throughout the United States.

And those projects barely scratch the surface. Dozens more large-scale facilities are planned or under construction across the country, and every single one of them will require enormous flows of reliable electricity to operate.

Elon Musk recently stated at Davos that “the limiting factor for AI deployment is, fundamentally, electrical power.” He warned that while AI chip production is increasing exponentially, electricity generation is not.

“Very soon, maybe even later this year,” Musk said, “we will be producing more chips than we can turn on.”

In Santa Clara, California, reports indicate newly built data centers may sit idle for years because the local grid cannot handle the load.

According to a report published by the global consulting group McKinsey & Company, U.S. demand for AI-ready data center capacity could grow from roughly 60 gigawatts today to 170 to 298 gigawatts by 2030.

The International Energy Agency reports that data centers consumed more than 4% of total U.S. electricity in 2024. This amounts to 183 terawatt-hours. IEA projections suggest this number could increase by 133% to 426 TWh by 2030.

To put that in perspective, 426 TWh is roughly equivalent to the annual electricity consumption of more than 40 million American homes.
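Quick arithmetic bears out both figures (using, as an assumed benchmark, roughly 10,500 kWh of annual electricity use per average U.S. home; the per-home number is not from the IEA report):

```python
# Verify the IEA projection cited above: a 133% increase on 183 TWh.
current_twh = 183
growth_pct = 133
projected_twh = current_twh * (1 + growth_pct / 100)
print(round(projected_twh))  # prints 426

# Express 426 TWh in household equivalents, assuming ~10,500 kWh
# per home per year (an approximate figure, not from the article).
kwh_per_home = 10_500
homes_millions = projected_twh * 1e9 / kwh_per_home / 1e6
print(f"{homes_millions:.1f} million homes")  # prints "40.6 million homes"
```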

The dilemma is obvious. If we do not have reliable energy, AI innovation will be compromised and could potentially migrate elsewhere. Worse, American households could find themselves competing with Big Tech for increasingly scarce power, driving up electricity costs for families and small businesses.

But energy is only the first layer of this story.

RELATED: States should work with AI, not against it


The promise and the disruption

AI is not your typical technological advancement. It is a general-purpose intelligence system capable of transforming nearly every sector of society. In the coming years, AI could accelerate drug discovery, personalize medicine, supercharge logistics, automate research, and unlock new materials and engineering breakthroughs, just to name a few potential benefits. The economic upside is staggering.

Artificial intelligence is a powerful tool and a dangerous weapon. While promising efficiency and innovation, AI also threatens disruption on a historic scale. Job displacement could occur faster than in any previous technological revolution. Entire professions, from legal research to software development, could be reshaped or automated.

If widespread job displacement occurs, there will inevitably be calls for sweeping government intervention. The political consequences of rapid automation could be just as transformative as the technology itself.

Technological leaps have reshaped political life throughout history. As a recent example, social media algorithms have dominated political discourse over the past decade. Political polarization has subsequently skyrocketed as people on all sides of the aisle are trapped in online echo chambers and subjected to a panopticon of surveillance.

Artificial intelligence has the frightening capability to supercharge mass surveillance while amplifying preconceived biases that have no objective basis in truth.

There is certainly reason for concern about the potential bias and coercive nature of AI. In recent years, we have already witnessed how tech companies can shape narratives and suppress viewpoints on popular media platforms. Embedding ideological bias into AI systems would mean embedding that bias into education, finance, health care, and governance.

If AI becomes the invisible infrastructure of society, who writes its rules? Who determines its boundaries? And who holds it accountable?

Playing with probabilities

Beyond economic and cultural disruption lies an even deeper uncertainty.

We are introducing a form of intelligence that even its creators admit they do not fully understand. There are already documented cases of advanced AI systems behaving in deceptive or strategically manipulative ways. In controlled environments, some models have been observed lying to human evaluators, scheming to achieve assigned goals, or resisting shutdown instructions.

OpenAI’s stated ambition is to create artificial superintelligence — systems that surpass human capability across virtually every domain. There is no telling where this path may lead. Humanity has never had to grapple with the prospect of a man-made intelligence that is superior to our own.

And remarkably, some of the leading figures in the field openly discuss the possibility of catastrophic outcomes.

Elon Musk has suggested there is “only a 20% chance of annihilation.” Anthropic CEO Dario Amodei has estimated roughly a 25% chance that AI development goes “really, really badly.” Geoffrey Hinton, often referred to as the “godfather of AI,” has placed the odds of extinction-level consequences somewhere between 10 and 20% over the coming decades.

Those numbers still imply that positive outcomes are more likely than not. But when the downside is losing human civilization itself, percentages matter.

We are advancing a technology with transformative power while relying largely on overzealous corporate discretion to steer its trajectory. Humanity finds itself fiddling with the key to Pandora’s box, and we have no rational means of gauging what will happen if the box is opened.

RELATED: AI’s PR is in the toilet — for good reason


Power and prudence

As stalwart advocates for smaller government, we hesitate to call for slamming the brakes on AI development, but it is important to have sober discernment moving forward. America is in a strategic competition with geopolitical rivals who would gladly dominate both this field and us if we retreat.

Reliable energy production is necessary to promote competition and American innovation. Yet it is arguably more important that society engages in serious dialogue surrounding this emerging technology. Government cannot, and should not, be the only voice in this conversation.

Independent institutions dedicated to transparency, accountability, and the defense of individual liberty need to rise and challenge the current trajectory.

Technological revolutions have always reshaped society. The difference this time is scale and speed. AI is a decision-making engine that may soon operate faster and more broadly than any human institution.

America can power the AI revolution. The real question is whether we can power it without surrendering control over our economy, institutions, and ultimately, our freedom.

The future may well belong to artificial intelligence. But whether that future advances prosperity or undermines humanity depends on the vigilance we exercise today.

Sam Altman described as 'sociopath' by board member in brutal insider report: 'He's unconstrained by truth'



OpenAI CEO Sam Altman was dragged through the mud in a new in-depth report that features former colleagues and current board members referring to him as a sociopath and a liar.

Altman, 40, has yet to respond to claims made in a recent report, some of which were uncovered in secret memos to OpenAI's board members.

'He is a sociopath. He would do anything.'

According to the New Yorker, OpenAI's chief scientist, Ilya Sutskever, sent the memos to three other board members in 2023. One of the memos about Altman began with a list titled "Sam exhibits a consistent pattern of." The first item on the list was "lying."

The memos also alleged that Altman misrepresented facts to executives and board members while deceiving them about safety protocols. Unfortunately for Altman, the claims did not stop there.

"He's unconstrained by truth," a board member told the New Yorker. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

The outlet said that the unnamed board member was not the only person to describe Altman as "sociopathic" without being prompted. Not long before his 2013 suicide, according to the New Yorker, coder Aaron Swartz warned at least one friend about Altman, whom Swartz had known from their time together at Y Combinator. His warning: "You need to understand that Sam can never be trusted. He is a sociopath. He would do anything."

Sutskever additionally implied that he did not think Altman should have power over others, saying, "I don't think Sam is the guy who should have his finger on the button."

Others described him as more ambitious than anything else.

RELATED: Sam Altman tells BlackRock he wants AI on a meter 'like electricity or water'

The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed "Ilya Memos," and Dario Amodei's 200+ pages of private notes. It's the most detailed account yet of the pattern of behavior that led to Sam's firing and… pic.twitter.com/vX5xIp5DnI
— Ryan (@ohryansbelt) April 6, 2026

Former OpenAI board member Sue Yoon said Altman was "not this Machiavellian villain" but was able to convince himself of his own sales pitches.

"He's too caught up in his own self-belief," she reportedly said. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."

Other anonymous colleagues cited by the New Yorker said that Sutskever and similar detractors were simply aspiring to take Altman's throne. Still, even many neutral comments did not help Altman's portrayal in the report.

"He's unbelievably persuasive. Like, Jedi mind tricks," a tech executive colleague of Altman's reportedly said. "He's just next-level."

At the same time, OpenAI is allegedly in the midst of unleashing superintelligence that Altman himself says will be so disruptive that it will require a new social contract.

RELATED: Sexting with chatbots is too far, OpenAI decides


Altman told Axios that there would be widespread job loss and a threat of cyberattacks coupled with social unrest.

"I suspect in the next year," he said, "we will see significant threats we have to mitigate from cyber."

Altman proposed a new deal with citizens that includes a public wealth fund, taxes on "automated labor," a 32-hour workweek, and the "right to AI."

That confirms previous reports that Altman wanted to put AI on a meter like electricity or water, to both democratize its usage and limit the possibility of overburdening the electrical grid.

OpenAI did not respond to Return's request for comment on the claims made about Altman or on who was making them.
