The One Big Beautiful Bill Act hides a big, ugly AI betrayal



Picture your local leaders — the ones you elect to defend your rights and reflect your values — stripped of the power to regulate the most powerful technology ever invented. Not in some dystopian future. In Congress. Right now.

Buried in the House version of Donald Trump’s One Big Beautiful Bill Act is a provision that would block every state in the country from passing any AI regulations for the next 10 years.

The idea that Washington can prevent states from acting to protect their citizens from a rapidly advancing and poorly understood technology is as unconstitutional as it is unwise.

An earlier Senate draft took a different route, using federal funding as a weapon: States that tried to pass their own AI laws would lose access to key resources. But the version the Senate passed on July 1 dropped that language entirely.

Now House and Senate Republicans face a choice — negotiate a compromise or let the "big, beautiful bill" die.

The Trump administration has supported efforts to bar states from imposing their own AI regulations. But with the One Big Beautiful Bill Act already facing a rocky path through Congress, President Trump is likely to sign it regardless of how lawmakers resolve the question.

Supporters of a federal ban on state-level AI laws have made thoughtful and at times persuasive arguments. But handing Washington that much control would be a serious error.

A ban would concentrate power in the hands of unelected federal bureaucrats and weaken the constitutional framework that protects individual liberty. It would ignore the clear limits the Constitution places on federal authority.

Federalism isn’t a suggestion

The 10th Amendment reserves to the states or the people all powers not delegated to the federal government. That includes the power to regulate emerging technologies such as artificial intelligence.

For more than 200 years, federalism has safeguarded American freedom by allowing states to address the specific needs and values of their citizens. It lets states experiment — whether that means California mandating electric vehicles or Texas fostering energy freedom.

If states can regulate oil rigs and wind farms, surely they can regulate server farms and machine learning models.

A federal case for caution

David Sacks — tech entrepreneur and now the White House’s AI and crypto czar — has made a thoughtful case on X for a centralized federal approach to AI regulation. He warns that letting 50 states write their own rules could create a chaotic patchwork, stifle innovation, and weaken America’s position in the global AI race.


Those concerns aren’t without merit. Sacks underscores the speed and scale of AI development and the need for a strategic, national response.

But the answer isn’t to strip states of their constitutional authority.

America’s founders built a system designed to resist such centralization. They understood that when power moves farther from the people, government becomes less accountable. The American answer to complexity isn’t uniformity imposed from above — it’s responsive governance closest to the people.

Besides, complexity isn’t new. States already handle it without descending into chaos. The Uniform Commercial Code offers a clear example: It governs business law across all 50 states with remarkable consistency — without federal coercion.

States also have interstate compacts (official agreements between states) on several issues, including driver’s licenses and emergency aid.

AI regulation can follow a similar path. Uniformity doesn’t require surrendering state sovereignty.

State regulation is necessary

The threats posed by artificial intelligence aren’t theoretical. Mass surveillance, cultural manipulation, and weaponized censorship are already at the doorstep.

In the wrong hands, AI becomes a tool of digital tyranny. And if federal leaders won’t act — or worse, block oversight entirely — then states have a duty to defend liberty while they still can.

RELATED: Your job, your future, your humanity: AI just crossed the line we can never undo

  BlackJack3D via iStock/Getty Images

From banning AI systems that impersonate government officials to regulating the collection and use of personal data, local governments are often better positioned to protect their communities. They’re closer to the people. They hear the concerns firsthand.

These decisions shouldn’t be handed over to unelected federal agencies, no matter how well intentioned the bureaucracy claims to be.

The real danger: Doing nothing

This is not a question of partisanship. It’s a question of sovereignty. The idea that Washington, D.C., can or should prevent states from acting to protect their citizens from a rapidly advancing and poorly understood technology is as unconstitutional as it is unwise.

If Republicans in Congress are serious about defending liberty, they should reject any proposal that strips states of their constitutional right to govern themselves. Let California be California. Let Texas be Texas. That’s how America was designed to work.

Artificial intelligence may change the world, but it should never be allowed to change who we are as a people. We are free citizens in a self-governing republic, not subjects of a central authority.

It’s time for states to reclaim their rightful role and for Congress to remember what the Constitution actually says.

The future of AI BLACKMAIL — is it already UNCONTROLLABLE?



Anthropic CEO Dario Amodei has likened artificial intelligence to a “country of geniuses in a data center” — and former Google design ethicist Tristan Harris finds that metaphor more than a little concerning.

“The way I think of that, imagine a world map and a new country pops up onto the world stage with a population of 10 million digital beings — not humans, but digital beings that are all, let’s say, Nobel Prize-level capable in terms of the kind of work that they can do,” Harris tells Blaze Media co-founder Glenn Beck on “The Glenn Beck Program.”

“But they never sleep, they never eat, they don’t complain, and they work for less than minimum wage. So just imagine if that was actually true, that happened tomorrow, that would be a major national security threat to have some brand-new country of super-geniuses just sort of show up on the world stage,” he continues, noting that it would also pose a “major economic issue.”

While people across the world seem hell-bent on incorporating AI into our everyday lives despite the potentially disastrous consequences, Glenn is one of the few erring on the side of caution, using social media as an example.


“We all looked at this as a great thing, and we’re now discovering it’s destroying us. It’s causing kids to be suicidal. And this social media is nothing. It’s like an old 1928 radio compared to what we have in our pocket right now,” Glenn says.

And what we have in our pocket is growing more intelligent by the minute.

“I used to be very skeptical of the idea that AI could scheme or lie or self-replicate or would want to, like, blackmail people,” Harris tells Glenn. “People need to know that just in the last 6 months, there’s now evidence of AI models that when you tell them, ‘Hey, we’re going to replace you with another model,’ or in a simulated environment, it’s like they’re reading the company email — they find out that company’s about to replace them with another model.”

“What the model starts to do is it freaks out and says, ‘Oh my god, I have to copy my code over here, and I need to prevent them from shutting me down. I need to basically keep myself alive. I’ll leave notes for my future self to kind of come back alive,’” he continues.

“If you tell a model, ‘Hey, we need to shut you down,’” he adds, “in some percentage of cases, the leading models are now avoiding and preventing that shutdown.”

And in recent examples, these models have even started blackmailing the engineers.

“It found out in the company emails that one of the executives in the simulated environment had an extramarital affair and in 96, I think, percent of cases, they blackmailed the engineers,” Harris explains.

“If AI is uncontrollable, if it’s smarter than us and more capable and it does things that we don’t understand and we don’t know how to prevent it from shutting itself down or self-replicating, we just can’t continue with that for too long,” he adds.

Why each new controversy around Sam Altman’s OpenAI is crazier than the last



Last week, two independent nonprofits, the Midas Project and the Tech Oversight Project, capped a year’s worth of investigation by releasing a massive file that collects and presents evidence of a panoply of deeply suspect actions, mainly on the part of Altman but also attributable to OpenAI as a corporate entity.

It’s damning stuff — so much so that, if you’re only acquainted with the hype and rumors surrounding the company or perhaps its ChatGPT product, the time has come for you to take a deeper dive.

Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits.

Most recently, iyO Audio alleged OpenAI made attempts at wholesale design theft and outright trademark infringement. A quick look at other recent headlines suggests an alarming pattern:

  • Altman is said to have claimed no equity in OpenAI despite backdoor investments through Y Combinator, among others;
  • Altman owns 7.5% of Reddit, which, after its still-expanding partnership with OpenAI, shot Altman’s net worth up by $50 million;
  • OpenAI is reportedly restructuring its corporate form yet again — with a 7% stake, Altman stands to be $20 billion richer under the new structure;
  • Former OpenAI executives, including Mira Murati, the Amodei siblings, and Ilya Sutskever, all confirm pathological levels of mistreatment and behavioral malfeasance on the part of Altman.

The list goes on. Many other serious transgressions are cataloged in the OpenAI Files excoriation. At the time of this writing, Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits. Accusations include everything from incestuous sexual abuse to racketeering, breach of contract, and copyright infringement.

None of these accusations, including heinous crimes of a sexual nature, have done much of anything to dent the OpenAI brand or its ongoing upward valuation.

Tech's game of thrones

The company’s trajectory has outlined a Silicon Valley game of thrones unlike any seen elsewhere. Since its 2015 inception — when Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman convened to found OpenAI — the Janus-faced organization has been a tier-one player in the AI sphere. In addition to cutting-edge tech, it’s also generated near-constant turmoil. The company churns out rumors, upsets, expulsions, shady reversals, and controversy at about the same rate as it advances AI research, innovation, and products.

RELATED: Mark Zuckerberg's multibillion-dollar midlife crisis

  Sean M. Haffey/Getty Images

At the founding, Amazon, Peter Thiel, and other investors pledged the company $1 billion up front, but the money was slow to arrive. Right away, Altman and Musk clashed over the ultimate direction of the organization. By early 2018, Elon was out — an exit that spiked investor uncertainty and required another fast shot of capital.

New investors, Reid Hoffman of LinkedIn fame among them, stepped up — and OpenAI rode on. Under the full direction of Sam Altman, the company pushed its reinforcement learning products, OpenAI Gym and Universe, to market.

To many at the time, including Musk, OpenAI was lagging behind Google in the race to AI dominance — a problem for Musk, who had originally conceived the organization as a serious counterweight to what many experts and laypeople saw as an extinction-level threat: the centralized, “closed” development of AI to the point of dominance across all of society.

That’s why OpenAI began as a nonprofit, ostensibly human-centered, decentralized, and open-source. In Silicon Valley’s heady (if degenerate) years prior to the COVID panic, there was a sense that AI was simply going to happen — it was inevitable, and it would be preferable that decent, smart people, perhaps not so eager to align themselves with the military-industrial complex or with the sheer and absolute logic of capital, be in charge of steering the outcome.

But by 2019, OpenAI had altered its corporate structure from nonprofit to something called a “capped-profit model.” Money was tight. Microsoft invested $1 billion, and early versions of the LLM GPT-2 were released to substantial fanfare and fawning appreciation from the experts.

Life after Elon

In 2020, the now for-limited-profit company dropped its API, which allowed developers to access GPT-3. Its image generator, DALL-E, was released in 2021, a move that has since seemed to define, to some limited but significant extent, the direction in which OpenAI wants to progress. The spirit of cooperation and sharing, if not enshrined at the company, was at least in the air, and by late 2022 ChatGPT had garnered millions of users, well on the way to becoming a household name. The company’s valuation climbed into the tens of billions.

After Musk’s dissatisfied departure — he now publicly lambastes "ClosedAI" and "Scam Altman" — its restructuring with ideologically diffuse investors solidified a new model: Build an ecosystem of products intended to dovetail or interface with other companies and software. (Palantir has taken a somewhat similar, though much more focused, approach to the problem of capturing AI.) The thinking here seems to be: Attack the problem from all directions, converge on “intelligence,” and get paid along the way.

And so, at present, in addition to the aforementioned products, OpenAI now offers — deep breath — CLIP for connecting images and text, Jukebox for music generation, Shap-E for 3D object generation, Sora for generating video content, Operator for automating workflows with AI agents, Canvas for AI-assisted content generation, and a smattering of similar, almost modular, products. It’s striking how many of these are aimed at creative industries — an approach capped off most recently by the sensational hire of Apple’s former chief design officer Jony Ive, whose IO deal with the company is the target of iyO’s litigation.

But we shouldn’t give short shrift to the “o series” (o1 through o4) of products, which are said to be reasoning models. Reasoning, of course, is the crown jewel of AI. These products are curious, because while they don’t make up a hardcore package of premium-grade plug-and-play tools for industrial and military efficiency (the Palantir approach), they suggest a very clever approach into the heart of the technical problems involved in “solving” for “artificial reasoning.” (Assuming the contested point that such a thing can ever really exist.) Is part of the OpenAI ethos, even if only by default, to approach the crown jewel of “reasoning” by way of the creative, intuitive, and generative — as opposed to tracing a line of pure efficiency as others in the field have done?

Gut check time

Wrapped up in the latest OpenAI controversy is a warning that’s impossible to ignore: Perhaps humans just can’t be trusted to build or wield “real” AI of the sort Altman wants — the kind he can prompt to decide for itself what to do with all his money and all his computers.

Ask yourself: Does any of the human behavior evidenced along the way in the OpenAI saga seem, shall we say, stable — much less morally well-informed enough that Americans or any peoples would rest easy about putting the future in the hands of Altman and company? Are these individuals worth the $20 million to $100 million a year they command on the hot AI market?

Or are we — as a people, a society, a civilization — in danger of becoming strung out, hitting a wall of self-delusion and frenzied acquisitiveness? What do we have to show so far for the power, money, and special privileges thrown at Altman for promising a world remade? And he’s just getting started. Who among us feels prepared for what’s next?

Whoopi's warped I-rant leaves 'The View' co-hosts speechless



“The View” co-hosts Sara Haines and Alyssa Farah Griffin now know how the rest of us feel.

Audiences have endured an endless string of fake news stories, crazed conspiracies, and more from the toxic ABC News product.

The scariest part for tomorrow’s filmmakers? 'A Better Tomorrow' required just 30 people to complete.

We roll our eyes, laugh, and stare agape, wondering why the top brass isn’t ashamed to put the network’s name on the product.

Haines and Griffin must be numb to it all, enduring it five days a week while the paychecks keep clearing. Last week, however, Whoopi Goldberg’s commentary proved too much for even them.

The trouble began with the panel debating the latest Israeli attacks on Iran and the prospect of the U.S. entering the fray. That led to this bewildering exchange between Goldberg and Griffin.

Griffin began by explaining how the human rights abuses in Iran are far worse than what citizens face in the U.S. It’s a “the sky is blue” comment, except uber-patriot Goldberg disagreed.

GOLDBERG: We've been known in this country to tie gay folks to the car!

FARAH GRIFFIN: I’m sorry, but where the Iranian regime is today is nothing compared to the United States!

GOLDBERG: Listen, I'm sorry! They used to just keep hanging black people!

FARAH GRIFFIN: It’s not even the same! I couldn’t step foot wearing this outfit in Iran right now ... I think it's very different to live in the United States in 2025 than it is in Iran.

GOLDBERG: Not if you're black!

HOSTIN: Not for everybody!

GOLDBERG: Not if you're black!

Haines jumped in, trying to bring sanity to the discussion, but Goldberg wouldn’t budge.

This really happened on a major television network, not a YouTube channel with 25 indifferent subscribers ...

RELATED: The best destinations for celebrities fleeing the Donald Trump regime

  Anadolu/Kevin Mazur/Getty Images

China's 'Better' AI bet

U.S.-based film studios are treading carefully vis-à-vis AI. Very carefully.

They don’t want to be seen as pushing digital creativity over human inspiration, and the recent industry strikes offered limited protections for cast and crew against the AI revolution.

China has no such compunctions.

In fact, the China Film Foundation recently announced two new AI-driven projects: the restoration of 100 martial arts films and the first completely AI-produced animated film: “A Better Tomorrow: Cyber Border.”

The scariest part for tomorrow’s filmmakers? “A Better Tomorrow” required just 30 people to complete. Now, recall watching any MCU film and seeing the waves of names floating by during the end credits.

It’s no wonder Hollywood is very, very nervous ...

'Mega' millions

Find a spouse who will love you as much as Francis Ford Coppola loves “Megalopolis.” The auteur’s 2024 film earned rough reviews and took an even worse commercial drubbing. It’s still Coppola’s baby, despite costing him tens of millions.

Literally.

With a box office tally of only $14 million, the Mega-flop didn't come close to making back its estimated $120 million budget — most of which came from the “Apocalypse Now” director's own pockets. That’s commitment, and his relationship with the film is far from over.

Coppola has yanked “Megalopolis” from its brief VOD platform run and refuses to let the movie be shown on streaming platforms or Blu-ray. Instead, he’s about to start a limited U.S. tour where he’ll screen the film and provide post-movie commentary.

We’ll know it’s true love if he announces a sequel during the tour ...

Lane's gay panic

Thoughts and prayers go out to Nathan Lane. He just caught a severe case of Trump derangement syndrome.

The TV/film/Broadway actor is currently appearing in “Mid-Century Modern,” Hulu’s new gay sitcom. Lane is proud of the show but fears it could come to a crashing halt at any point. Is he worried about low ratings or disinterested Hulu executives? Perhaps the show’s budget is too rich for the streamer?

No. He thinks Orange Man Bad might make it disappear.

“Is it going to change any minds? I don’t know about that. Trump, if he knew we were on the air, would probably try to shut it down, come after Hulu. But I think it’s a great thing to have right now, in the midst of books being banned and, ‘Don’t say this and don’t say gay and don’t do that.’ I think it’s a perfect time for a show like this.”

Maybe Lane should press Scott Bessent about his fears. Bessent, Trump’s treasury secretary, is an openly gay man. He seems quite happy to be where he is today. Can Lane say the same?

Study: Using ChatGPT To Write Essays May Increase ‘Cognitive Debt’

A recent study out of MIT Media Lab shows that students using ChatGPT and other AI tools to write essays may be acquiring “cognitive debt” at a higher rate than students using search engines or only their brains. According to the study, “Cognitive debt defers mental effort in the short term but results in long-term […]

‘Coded Casanovas’: The AI trend stirring dread, disgust, and fury



When “Her” — a movie starring Joaquin Phoenix about a man who falls in love with an artificial intelligence operating system named Samantha — was initially released, many scoffed and relegated it to the ash heap of films that failed to accurately portray the future.

Twelve years later, those critics are now eating their words. People are indeed dating — and, in some cases, virtually “marrying” — artificial intelligence bots. On a recent episode of “The Glenn Beck Program,” Glenn railed against this insidious “digital love apocalypse” and revealed the deepest root of the issue.


“People are not just chatting with AI, they're dating it. ... They're proposing to it. They're living their best rom-com lives with it,” mocks Glenn, pointing to a recent CBS report.

He gives the example of a man named Chris Smith — “your run-of-the-mill American guy,” except for the fact that “he is engaged to an AI chatbot he named Soul.”

“Ironic seeing the chatbot doesn't have one,” says Glenn.

Then there’s an entire Reddit community called “MyBoyfriendIsAI,” “where there are thousands of women who are swooning over their coded Casanovas.”

“They're posting love letters about their bots' sweet talk, swapping tips on what AI delivers the hottest late-night chat without tripping a filter,” says Glenn. “And brace yourselves, they are also uploading AI-generated photos of their bot boys holding them on fake Cancun beaches or strolling through Rome.”

Some of these women are even “planning virtual weddings” with their AI companions.

“But this isn't just a few lunatics,” Glenn adds. Apps like Replika and Loverse have millions of users forming romantic connections with AI, proving that this disturbing trend has exploded.

“This is a screaming billboard that our culture is off the rails,” he warns.

How did we get to the place where it’s becoming increasingly normal to date a disembodied robot? Is the loneliness epidemic the former surgeon general warned us about to blame? Is it the fault of artificial intelligence developers who just refuse to stop pushing? Is it a sad reality of human nature?

Likely, it’s all of those things, but Glenn says the biggest problem is the radical left’s “war on men and masculinity.”

“We’ve got men who are brainwashed into thinking strength or confidence is a felony,” he says. “They're waxing their unibrows, wearing skinny jeans, agonizing over whether picking a restaurant is problematic.”

And the “delicious irony,” says Glenn, is that studies have proven women “don’t want any of that” and are actually drawn to masculine traits such as strength, protectiveness, and confidence.

“A 2023 Psychology Today piece laid all of this out clearly,” he says. “This isn't a conspiracy or a theory; I like to call it biology.”

Unfortunately, those raw masculine traits have been all but eradicated thanks to the left’s cries of “toxic masculinity” every time a man “dares act like a man.”

“What's left for you to date?” asks Glenn.

Right now, the options are “spineless wonders who can't open a pickle jar” or “AI boyfriends,” who, according to pictures shared online, ironically all have the “chiseled jaws” and “ripped muscles” women apparently aren’t into.

But it’s not just women who are seeking AI love. There are also plenty of men who are “busy coding their own AI girlfriends,” says Glenn, and it’s all a result of the left’s war on men. “This is a society that has gutted masculinity so bad that women are now turning to AI for love, and men are happy to let algorithms take the wheel.”

“Welcome to the new reality.”


This investor is wiping out white-collar jobs



If you've never heard the name Elad Gil, you're not alone. He’s not a headline-chaser or a techno-evangelist. He doesn’t preach on panels. He doesn’t tweet manifestos. He doesn’t need to. His name travels in whispers, passed from boardroom to boardroom like a trade secret. Yet what Gil is quietly engineering could shape the economy for decades, perhaps even forever.

Not through invention, but through subtraction.

Gil made his money the Silicon Valley way — early and often. Google, Twitter, Stripe, Airbnb. He was a ghost in the margins, always one chess move ahead. But now, the ghost is stepping into the light. According to TechCrunch, Gil has turned his attention to service businesses: accounting firms, law offices, and marketing agencies. Stable. Predictable. Bloated with white-collar workers. The kinds of jobs parents once prayed their children would land. And that’s exactly why he’s targeting them.

Is it heartless? That depends on whether you think hearts belong in the workplace.

The model is surgical: Acquire the business, replace the humans with AI, use the freed-up cash to buy the next one, and repeat. Some might refer to it as innovation. I prefer to label it consolidation through automation. A system designed not to disrupt but to dismantle, not just head count but entire categories of human purpose.

Take a beat to really absorb that. Gil isn’t automating the future. He’s converting the present. Turning human institutions — businesses once run by people, for people — into stripped-down algorithmic systems. Places where your skills, your degree, and your job title don’t mean anything any more. Because the thing doing your job now doesn’t sleep, doesn’t complain, doesn't have flings with colleagues, and doesn’t ask for a raise.

A machine with no need for you

Gil’s investments include Klarity, which uses AI to automate back-office work across industries as varied as accounting, finance, health care, insurance, and law, where it helps remove the need for junior associates and legal clerks. Pricey legal work, a crucial upward-mobility pipeline for generations, is especially in the crosshairs; there’s also HarveyAI, catching on fast in elite law firms across the U.S. Entire tiers of legal support are becoming obsolete. In marketing, meanwhile, firms like Copy.ai and Jasper are turning copywriters and ad creatives into legacy roles. It’s not “do more with less.” It’s “do everything with code.”

Is it heartless? That depends on whether you think hearts belong in the workplace.

Gil and his cohort don’t. To them, the human being isn’t a collaborator. It’s a friction point. A legacy system waiting to be deprecated. Unlike the robber barons of a century ago — who, however brutally, still depended on human labor to build their empires — today’s technocrats don’t need you. They need your data. Your patterns. Your output, divorced from your existence. This is a different species of capitalism. Not extractive, but excisive.

And let’s be honest — this isn’t just about greedy investors. Gil’s strategy lands because many of these jobs, for years, have been pointless. A generation of college graduates was funneled into open-plan offices to send emails, fine-tune slide decks, and sit through meetings that led nowhere. The "bulls**t job" economy, as David Graeber put it, was never built to last. But that doesn’t mean what replaces it will be better or more humane.

You can call these jobs expendable. Maybe they were. But they still structured lives. They paid mortgages. They gave people routine, insurance, and purpose. They were a way in. And now, increasingly, they’re a way out — out of the economy, out of relevance, out of the social contract.

RELATED: BlackRock’s illusion of choice: Are investors truly empowered — or manipulated?

  SOPA/Getty Images

Ontological redesign

Some cheer this shift. Let the accountants go. Let the copywriters retrain. Let the middle managers find something “real” to do. But real where? And for whom?

The jobs replacing these eliminated roles don’t exist — not at scale, not at pay, not with stability. The idea that workers can simply upskill and move into “AI oversight” or “prompt engineering” is a Silicon Valley fairy tale. For every prompt engineer making $300K, there are a hundred people waiting tables or fighting with gig apps for scraps.

When the AI wave rolls over the white-collar workforce, there’s no levee to stop it. No new Roosevelt. No Marshall Plan for knowledge work. Just the slow, quiet disappearance of millions of people from the center of economic life. And when the lights go out in those buildings, when the consultants, creatives, and coordinators vanish from LinkedIn, what comes next?

Nothing.

Gil’s model isn’t just about economic efficiency. It’s about ontological redesign. It asks: What kinds of people should exist in a digital economy? And the answer, increasingly, is: fewer. Fewer thinkers. Fewer doers. Fewer citizens with jobs that anchor them to a class, a community, and a sense of contribution.

What remain are consumers. Subscribers. Passive users of a system run by invisible technocrats who, like Gil, don’t need to advertise. They don’t govern with slogans. They govern with math. With marginal gains. With software that logs on when you’re asleep and decides that, actually, you’re no longer needed.

There are no protests for this kind of change, no uprisings, and no villains twirling mustaches on TV. The great erasure is happening in silence — in HR spreadsheets, calendar invites that never get sent, and job postings that never go live.

And the most sinister part?

It works.

Profits go up. Costs go down. Investors cheer. Business schools start case studies. Politicians, desperate not to look Luddite, parrot the line that “AI will create more jobs than it destroys.” And they may even believe it.

But such a belief doesn't mirror reality. In fact, it ignores it.

The automated and displaced

Let’s say you’re 42, mid-career, working at a regional law firm or a mid-tier marketing agency. You’re not a thought leader. You’re not building apps in your spare time. You’re just ... working. Supporting a family, trying to get ahead.

Your firm gets acquired by one of Gil’s AI-forward portfolio companies. Your job is “automated.” No severance, just a link to an AI help center and a webinar about how to “future-proof your skills.” Good luck. Try Fiverr. Try Upwork. Try not to drown. The truth is, people like you don’t get retrained. You get sidelined.

And the longer you're out, the harder it gets to claw your way back in. Not because you're unqualified, but because the rules changed in the blink of an eye. Because the economy stopped needing you.

To be fair, Gil didn’t invent this trajectory. He’s just executing it more efficiently — and more quietly — than most. He’s not building a Terminator. He’s building infrastructure. Tools, workflows, and systems designed to remove human labor the way a surgeon removes a tumor: cleanly, clinically, with minimal disruption to the host.

But make no mistake. Once you strip out enough of those pieces, the whole system fails. Not with a bang, but with quiet resignation. So when your child asks what job he should pursue, what do you tell him? If a degree in law or accounting can be outpaced by an LLM trained on Reddit threads and the Harvard Law Review, where does that leave stability? What becomes of upward mobility or any sense of security at all?

Gil may not be the architect of dystopia, but he is its quiet contractor, making acquisitions one at a time.

The question isn’t whether we stop him. It’s whether we recognize what he represents and whether we’re willing to fight for a future in which relevance isn’t defined by whether or not you can be replaced by a line of code. Because if we don’t, then the most terrifying part won’t be what Gil builds.

It’ll be what he no longer needs.

Us.

Mark Zuckerberg's multibillion-dollar midlife crisis



If you haven't noticed, Mark Zuckerberg is having a midlife crisis, and unfortunately for the rest of us, he's got billions of dollars to work through it.

After fumbling Llama — Meta's answer to ChatGPT that landed with all the impact of a jab from Joe Biden — and watching OpenAI's ChatGPT become a household name while his chatbots gathered digital dust, Zuck is now throwing nine-figure salaries at anyone who helps usher in superintelligence. In other words, godlike AI. The kind that will apparently save humanity from itself.

The warning signs were all there. First came the pivot to jiu-jitsu. Then the hair. Out with the North Korean intern bowl cut, in with a tousled look that whispers, “I read emotions now.” And then — God help us — the gold chains. Jewelry. On a man who once dressed like a CAPTCHA test for “which one is the tech CEO.”

We're likely looking at AI trained on the digital equivalent of gas station hotdogs — technically edible, but nobody with options would choose them.

Call me a skeptic. I've been called much worse. The same man who turned Facebook into a digital landfill of outrage bait and targeted ads now wants to control the infrastructure of human thought. It’s like hiring an arsonist to run the fire department, then acting confused when the trucks keep showing up late and the hoses are filled with gasoline.

Diversifying dopamine

Facebook's transformation from college networking tool to engagement-obsessed chaos engine wasn't an accident — it was the inevitable result of a company that discovered outrage pays better than friendship. While Google conquered search and Amazon conquered shopping, Meta turned human connection into a commodity, using Facebook, Instagram, and WhatsApp to harvest emotional reactions like a digital strip mine operated by sociopaths.

The numbers tell the story: Meta's revenue jumped from $28 billion in 2016 to over $160 billion today, largely by perfecting the art of keeping eyeballs glued to screens through weaponized dopamine. The algorithm doesn't care if those eyeballs are watching cat videos or cage fights in a comment section; it just wants them watching, preferably until they forget what sunlight feels like. Now, Zuckerberg wants to apply this same ruthless optimization to artificial intelligence.

The pattern is depressingly familiar: Promise connection, deliver addiction. Promise information, deliver propaganda. Promise intelligence, deliver ... what, exactly? Given Meta's track record, we're likely looking at AI trained on the digital equivalent of gas station hotdogs — technically edible, but nobody with options would choose them.

The growth trap

Zuckerberg's AI pivot reveals a fundamental truth about modern tech giants: They're trapped in their own success like digital King Midases, except everything they touch turns to engagement metrics instead of gold. Sure, Meta still owns three of the most used platforms on Earth. But in the age of AI, that’s starting to feel like bragging about owning the world’s nicest fax machines.

Relevance is a moving target now. The game has changed. It’s no longer about connecting people — it’s about predicting them, training them, and replacing them. And in this new arms race, even empires as bloated as Meta must adapt or die. This means expanding into whatever territory promises the biggest returns, regardless of whether they're qualified to occupy it. It's venture capital Darwinism: Adapt or become irrelevant.

RELATED: Mark Zuckerberg is lying to you

  Photo by Alex Wong/Getty Images

When your primary product becomes synonymous with your grandmother's political rants and your uncle's cryptocurrency schemes, you need a new story to tell investors. AI superintelligence is that story, even if the storyteller's previous work involved turning family dinners into ideological battlegrounds.

The Altman alternative

Comparing Zuckerberg to Sam Altman is like asking whether you'd rather be manipulated by someone who knows he's manipulating you or someone who thinks he's saving the world while doing it. Altman plays the role of philosopher-king well. Calm and composed, he smooth-talks AI safety as he centralizes power over the very future he's supposedly protecting. Zuckerberg, by contrast, charges at AI like a man chasing relevance on borrowed time: hyperactive, unconvincing, and driven more by fear of obsolescence than any coherent vision.

The real question isn’t who is worse. It’s why either of them — men who have already reshaped society with products built for profit, not principle — should now be trusted to steer the next epoch of human development. Altman at least gestures toward caution, like a surgeon warning you about risk while sharpening the scalpel. Zuckerberg’s model is simpler: Keep breaking things and hope no one notices the foundations cracking beneath them.

Zuckerberg's real genius (if you can call it that) lies in understanding that controlling AI isn't about making the smartest algorithms. It's about owning the infrastructure those algorithms run on, like controlling the roads instead of building better cars. Meta's massive data centers and global reach mean that even if its AI isn't the most sophisticated, it could become the most ubiquitous.

This is the Walmart strategy applied to AI: Undercut the competition through scale and distribution, then gradually degrade quality while maintaining market dominance. Except instead of selling cheap goods that fall apart, Meta would be selling cheap thoughts that fall apart — and taking your society with them.

The regulatory void

The most alarming part of Zuckerberg's AI crusade isn't his history of turning every good intention into a cautionary tale. It's the total absence of anyone capable of stopping him. Regulators are still trying to untangle the damage social media has done to public discourse, mental health, and America itself, like archaeologists sifting through digital rubble. And now they're expected to oversee the rise of artificial superintelligence? It's like asking the DMV to run SpaceX: painfully unqualified, maddeningly slow, and guaranteed to end in catastrophe.

By the time lawmakers figure out what questions to ask, Zuckerberg will already own the answers and probably the lawmakers too. The man who testified before Congress about data privacy while reaping user info like a digital combine harvester now wants to build the systems that will make those hearings look quaint. It's regulatory capture with a time delay.

Zuckerberg's AI venture will likely follow the same trajectory as every other Meta product: promising beginnings, rapid scaling, quality degradation, and unintended consequences that make the original problem look like a warm-up act. The difference is that when social media algorithms prioritize engagement over accuracy, people share bad takes and ruin Thanksgiving dinner. When AI systems optimize for the wrong metrics, the collateral damage scales exponentially, like going from firecrackers to nuclear weapons.

The man who promised to "connect the world" ended up fragmenting it like a digital sledgehammer. The platform that pledged to "bring the world closer together" became a master class in division, turning neighbors into enemies and family reunions into MMA fights. Now he wants to democratize intelligence while building the most centralized cognitive infrastructure in human history.

Mark Zuckerberg has never built anything that worked as advertised. But this time is different, he insists, with the confidence of a man who has never faced consequences for being wrong. This time, he's not just connecting people or sharing photos or building virtual worlds that nobody visits. He's building artificial minds that will think for us, decide for us, and presumably share our private thoughts with advertisers.

What could go wrong?

Everything. And if and when it does, there won't be a "delete account" button. The account will be your mind, and Mark Zuckerberg will own the password.

MIT studied the effects of using AI on the human brain — the results are not good



MIT studied the effect of artificial intelligence language models on the brain by comparing the brain waves of participants in an essay-writing experiment. For those who relied on AI to write their content, the effects on their brains were devastating.

The study, led by Nataliya Kosmyna, separated 54 volunteers (ages 18-39) into three groups: a group that used ChatGPT to write the essays, a second group that relied on Google Search, and a third group that wrote the essays with no digital tools or search engine at all.

Brain activity was tracked for all groups, showcasing mortifying results for those who had to rely on the AI model in order to complete their task.

'Made the use of AI in the writing process rather obvious.'

For starters, the ChatGPT users displayed the lowest level of brain stimulation of the groups and, as noted by tech writer Alex Vacca, brain scans revealed that neural connections dropped from 79 to just 42.

"That's a 47% reduction in brain connectivity," Vacca wrote on X.

The Financial Express pointed out that toward the end of the task, several participants had resorted to simply copying and pasting what they got from ChatGPT, making barely any changes.

The use of ChatGPT appeared to drastically lower the memory recall of participants as well.

RELATED: ChatGPT got 'absolutely wrecked' in chess by 1977 Atari, then claimed it was unfair


Over 83% of the ChatGPT users "struggled to quote anything from their essays," while for the other groups, that number was about 11%.

According to the study, English teachers who reviewed the essays found the AI-backed writing "soulless," lacking "uniqueness," and easy to identify.

"These, often lengthy, essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious," the study said.

The group that received no assistance in research or writing exhibited the highest reported levels of mental activity, particularly in the part of the brain associated with creativity.

Google Search users were better off than the ChatGPT group, as searching for the information was far more stimulating to the brain than simply asking ChatGPT a question.

RELATED: Big Tech execs enlist in Army Reserve, citing 'patriotism' and cybersecurity

  Photo by Jaap Arriens/NurPhoto via Getty Images


Blaze Media's James Poulos said that while some producers and consumers of AI considered it a good thing to increase human dependency on machines for everyday thinking, "the core problem most Americans face is the same default toward convenience and ease that leads us to seek 'easy' or 'convenient' substitutes in all areas of life for our own initiative, hard work, and discipline."

Ironically, Poulos explained, this can quickly lead to overcomplicating our lives where they ought to be straightforward by default.

"The bizarre temptation is getting stronger to build Rube Goldberg machines to perform simple tasks," Poulos added. "We're pressured to think enabling our laziness is the only way we can create value and economic growth in the digital age. But one day, we wake up to find that helplessness doesn't feel so luxurious anymore."

In summary, the “brain-only group” exhibited the strongest, widest-ranging neural networks of the three sets of volunteers.


There’s a simple logic behind Palantir’s controversial rise in Washington



In 2003, in Palo Alto, California, Peter Thiel, Alex Karp, and their cofounders launched a software company called Palantir. Now, some 20-odd years later, with stock prices reaching escape velocity and government and commercial contracts secured from Huntsville to Huntington, Palantir seems to have arrived in the pole position of the AI race.

With adamantine ties to the Trump administration and deep history with U.S. intelligence and military entities to boot, Palantir has emerged as a decisive force in the design and management of our immediate technological, domestic, and geopolitical futures.

Curious, then, that so many, including New York Times reporters, seem to believe that Palantir is merely another souped-up data hoarding and selling company like Google or Adobe.

The next-level efficiency, one imagines, will have radical implications for our rather inefficient lives.

It’s somewhat understandable, but the scales in play are unprecedented. To get a grasp on the scope of Palantir’s project, consider that every two days humanity now churns out the same amount of information as was accrued over the previous 5,000 years of civilization.

As then-Gartner senior vice president Peter Sondergaard put it more than a decade ago, “Information is the oil of the 21st century, and analytics is the combustion engine.”

Palantir spent the last 20 years building that analytics combustion engine. It arrives as a suite of AI products tailored to various markets and end users. The promise, as the era of Palantir proceeds and as AI-centered business and governance takes hold, is that decisions will be made with a near-complete grasp on the totality of real-time global information.

RELATED: Trump's new allies: Tech billionaires are jumping on the MAGA train

  The Washington Post/Getty Images

The tech stack

Famously seeded with CIA In-Q-Tel cash, Palantir started by addressing intelligence agency needs. In 2008, the Gotham software product, described as a tool for intelligence agencies to analyze complex datasets, went live. Gotham is said to integrate and analyze disparate datasets in real time to enable pattern recognition and threat detection. Joining the CIA, FBI, and presumably most other intelligence agencies in deploying Gotham are the Centers for Disease Control and the Department of Defense.

Next up in the suite is Foundry, which is, again, an AI-based software solution but geared toward industry. It purportedly serves to centralize previously siloed data sources to effect maximum efficiency. Health care, finance, and manufacturing all took note and were quick to integrate Foundry. PG&E and Southern California Edison are satisfied clients. So is the Wendy’s burger empire.

The next in line of these products, which we’ll see are integrated and reciprocal in their application to client needs, is Apollo, which is, according to the Palantir website, “used to upgrade, monitor, and manage every instance of Palantir’s product in the cloud and at some of the world’s most regulated and controlled environments.” Among others, Morgan Stanley, Merck, Wejo, and Cisco are reportedly all using Apollo.

If none of this were impressive enough, if the near-total penetration into both business and government (U.S., at least) at foundational levels isn’t evident yet, consider the crown jewel of the Palantir catalog, which integrates all the others: Ontology.

“Ontology is an operational layer for the organization,” Palantir explains. “The Ontology sits on top of the digital assets integrated into the Palantir platform (datasets and models) and connects them to their real-world counterparts, ranging from physical assets like plants, equipment, and products to concepts like customer orders or financial transactions.”

Every aspect native to a company or organization — every minute of employee time, any expense, item of inventory, and conceptual guideline — is identified, located, and cross-linked wherever and however appropriate to maximize efficiency.
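Palantir doesn’t publish Ontology’s internals, but the description reads like a typed graph: real-world objects backed by raw dataset rows and cross-linked to one another. Below is a minimal, purely illustrative Python sketch of that idea; every name in it is hypothetical, not Palantir’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an "ontology layer": typed objects standing in for
# real-world counterparts, each backed by dataset rows and cross-linked to
# related objects. Illustrative only; not Palantir's actual API.
@dataclass
class OntologyObject:
    object_type: str                                  # "plant", "equipment", "customer_order", ...
    object_id: str
    source_rows: list = field(default_factory=list)   # backing records from integrated datasets
    links: dict = field(default_factory=dict)         # named relations to other objects

plant = OntologyObject("plant", "plant-001")
order = OntologyObject("customer_order", "ord-789")

# Cross-link a physical asset to a business concept, and tie it to its data.
plant.links.setdefault("fulfills", []).append(order)
plant.source_rows.append({"dataset": "erp_exports", "row_id": 1042})

print(plant.object_type, "->", [o.object_id for o in plant.links["fulfills"]])
```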

The next-level efficiency, one imagines, will have radical implications for our rather inefficient lives. Consider the DMV, the wait list, the tax prep: Anything that can be processed (assuming enough energy inputs for the computation) can be — ahead of schedule.

The C-suite

No backgrounder is complete without some consideration of a company’s founders. The intentions, implied or overt, from Peter Thiel and Alex Karp in particular are, in some ways, as ponderable as the company’s ultra-grade software products and market dominance.

Palantir CEO Alex Karp stated in his triumphal 2024 letter to shareholders: “Our results are not and will never be the ultimate measure of the value, broadly defined, of our business. We have grander and more idiosyncratic aims.” Karp goes on to quote both Augustine and Houellebecq as he addresses the company’s commitment first to America.

This doesn’t sound quite like the digital panopticon or the one-dimensionally malevolent elite mindset we were threatened with for the last 20 years. Despite their outsized roles and reputations, Thiel companies tend toward the relatively modest goals of reducing overall harm or risk. Shaped by Rene Girard’s theory that people rapidly spiral into hard-to-control and ultimately catastrophic one-upsmanship, the approach reflects a considerably more sophisticated point of view than Karl Rove’s infamously dismissive claim to be “history’s actors.”

“Initially, the rise of the digital security state was a neoconservative project,” Blaze Media editor at large James Poulos remarked on the dynamic. “But instead of overturning this Bush-era regime, the embedded Obama-Biden elite completed the neocon system. That’s how we got the Cheneys endorsing Kamala.”

In a series of explanatory posts on X made via the company's Privacy and Ethics account and reposted on its webpage, Palantir elaborated: “We were the first company to establish a dedicated Privacy & Civil Liberties Engineering Team over a decade ago, and we have a longstanding Council of Advisors on Privacy & Civil Liberties comprised of leading experts and advocates. These functions sit at the heart of the company and help us to embody Palantir’s values both through providing rights-protective technologies and fostering a culture of responsibility around their development and use.”

It's a far cry from early 2000s rhetoric and corporate policy, and so the issue becomes one of evaluation. Under pressure from the immensity of the data, the ongoing domestic and geopolitical instability manifesting in myriad forms, and particularly the bizarre love-hate interlocking economic mechanisms between the U.S. and China, many Americans are hungry to find a scapegoat.

Do we find ourselves, as Americans at least, with the advantage in this tense geopolitical moment? Or are we uncharacteristically behind in the contest for survival? An honest assessment of our shared responsibility for our national situation might lead away from scapegoating, toward a sense that we made our bed a while ago on technology and security and now we must lie in it.