Mark Zuckerberg's multibillion-dollar midlife crisis



If you haven't noticed, Mark Zuckerberg is having a midlife crisis, and unfortunately for the rest of us, he's got billions of dollars to work through it.

After fumbling Llama — Meta's answer to ChatGPT that landed with all the impact of a jab from Joe Biden — and watching OpenAI's ChatGPT become a household name while his chatbots gathered digital dust, Zuck is now throwing nine-figure salaries at anyone who helps usher in superintelligence. In other words, godlike AI. The kind that will apparently save humanity from itself.

The warning signs were all there. First came the pivot to jiu-jitsu. Then the hair. Out with the North Korean intern bowl cut, in with a tousled look that whispers, “I read emotions now.” And then — God help us — the gold chains. Jewelry. On a man who once dressed like a CAPTCHA test for “which one is the tech CEO.”

Call me a skeptic. I've been called much worse. The same man who turned Facebook into a digital landfill of outrage bait and targeted ads now wants to control the infrastructure of human thought. It’s like hiring an arsonist to run the fire department, then acting confused when the trucks keep showing up late and the hoses are filled with gasoline.

Diversifying dopamine

Facebook's transformation from college networking tool to engagement-obsessed chaos engine wasn't an accident — it was the inevitable result of a company that discovered outrage pays better than friendship. While Google conquered search and Amazon conquered shopping, Meta turned human connection into a commodity, using Facebook, Instagram, and WhatsApp to harvest emotional reactions like a digital strip mine operated by sociopaths.

The numbers tell the story: Meta's revenue jumped from $28 billion in 2016 to over $160 billion today, largely by perfecting the art of keeping eyeballs glued to screens through weaponized dopamine. The algorithm doesn't care if those eyeballs are watching cat videos or cage fights in a comment section; it just wants them watching, preferably until they forget what sunlight feels like. Now, Zuckerberg wants to apply this same ruthless optimization to artificial intelligence.

The pattern is depressingly familiar: Promise connection, deliver addiction. Promise information, deliver propaganda. Promise intelligence, deliver ... what, exactly? Given Meta's track record, we're likely looking at AI trained on the digital equivalent of gas station hotdogs — technically edible, but nobody with options would choose them.

The growth trap

Zuckerberg's AI pivot reveals a fundamental truth about modern tech giants: They're trapped in their own success like digital King Midases, except everything they touch turns to engagement metrics instead of gold. Sure, Meta still owns three of the most used platforms on Earth. But in the age of AI, that’s starting to feel like bragging about owning the world’s nicest fax machines.

Relevance is a moving target now. The game has changed. It’s no longer about connecting people — it’s about predicting them, training them, and replacing them. And in this new arms race, even empires as bloated as Meta must adapt or die. This means expanding into whatever territory promises the biggest returns, regardless of whether they're qualified to occupy it. It's venture capital Darwinism: Adapt or become irrelevant.

When your primary product becomes synonymous with your grandmother's political rants and your uncle's cryptocurrency schemes, you need a new story to tell investors. AI superintelligence is that story, even if the storyteller's previous work involved turning family dinners into ideological battlegrounds.

The Altman alternative

Comparing Zuckerberg to Sam Altman is like asking whether you'd rather be manipulated by someone who knows he's manipulating you or someone who thinks he's saving the world while doing it. Altman plays the role of philosopher-king well. Calm and composed, he smooth-talks AI safety as he centralizes power over the very future he's supposedly protecting. Zuckerberg, by contrast, charges at AI like a man chasing relevance on borrowed time: hyperactive, unconvincing, and driven more by fear of obsolescence than any coherent vision.

The real question isn’t who is worse. It’s why either of them — men who have already reshaped society with products built for profit, not principle — should now be trusted to steer the next epoch of human development. Altman at least gestures toward caution, like a surgeon warning you about risk while sharpening the scalpel. Zuckerberg’s model is simpler: Keep breaking things and hope no one notices the foundations cracking beneath them.

Zuckerberg's real genius (if you can call it that) lies in understanding that controlling AI isn't about making the smartest algorithms. It's about owning the infrastructure those algorithms run on, like controlling the roads instead of building better cars. Meta's massive data centers and global reach mean that even if its AI isn't the most sophisticated, it could become the most ubiquitous.

This is the Walmart strategy applied to AI: Undercut the competition through scale and distribution, then gradually degrade quality while maintaining market dominance. Except instead of selling cheap goods that fall apart, Meta would be selling cheap thoughts that fall apart — and taking your society with them.

The regulatory void

The most alarming part of Zuckerberg's AI crusade isn't his history of turning every good intention into a cautionary tale. It's the total absence of anyone capable of stopping him. Regulators are still trying to untangle the damage social media has done to public discourse, mental health, and America itself, like archaeologists sifting through digital rubble. And now they're expected to oversee the rise of artificial superintelligence? It's like asking the DMV to run SpaceX: painfully unqualified, maddeningly slow, and guaranteed to end in catastrophe.

By the time lawmakers figure out what questions to ask, Zuckerberg will already own the answers and probably the lawmakers too. The man who testified before Congress about data privacy while reaping user info like a digital combine harvester now wants to build the systems that will make those hearings look quaint. It's regulatory capture with a time delay.

Zuckerberg's AI venture will likely follow the same trajectory as every other Meta product: promising beginnings, rapid scaling, quality degradation, and unintended consequences that make the original problem look like a warm-up act. The difference is that when social media algorithms prioritize engagement over accuracy, people share bad takes and ruin Thanksgiving dinner. When AI systems optimize for the wrong metrics, the collateral damage scales exponentially, like going from firecrackers to nuclear weapons.

The man who promised to "connect the world" ended up fragmenting it like a digital sledgehammer. The platform that pledged to "bring the world closer together" became a master class in division, turning neighbors into enemies and family reunions into MMA fights. Now he wants to democratize intelligence while building the most centralized cognitive infrastructure in human history.

Mark Zuckerberg has never built anything that worked as advertised. But this time is different, he insists, with the confidence of a man who has never faced consequences for being wrong. This time, he's not just connecting people or sharing photos or building virtual worlds that nobody visits. He's building artificial minds that will think for us, decide for us, and presumably share our private thoughts with advertisers.

What could go wrong?

Everything. And if and when it does, there won't be a "delete account" button. The account will be your mind, and Mark Zuckerberg will own the password.

AI is coming for your job, your voice ... and your worldview



Suddenly, artificial intelligence is everywhere — generating art, writing essays, analyzing medical data. It’s flooding newsfeeds, powering apps, and slipping into everyday life. And yet, despite all the buzz, far too many Americans — especially conservatives — still treat AI like a novelty, a passing tech fad, or a toy for Silicon Valley elites.

Treating AI like the latest pet rock tech trend is not only naïve — it’s dangerous.

AI isn’t just another innovation like email, smartphones, or social media. It has the potential to restructure society itself — including how we work, what we believe, and even who gets to speak — and it’s doing it at a speed we’ve never seen before.

The stakes are enormous. The pace is breakneck. And still, far too many people are asleep at the wheel.

AI isn’t just ‘another tool’

We’ve heard it a hundred times: “Every generation freaks out about new technology.” The Luddites smashed looms. People said cars would ruin cities. Parents panicked over television and video games. These remarks are intended to dismiss genuine concerns about emerging technology as irrational fears.

But AI is not just a faster loom or a fancier phone — it’s something entirely different. It’s not just doing tasks faster; it’s replacing the need for human thought in critical areas. AI systems can now write news articles, craft legal briefs, diagnose medical issues, and generate code — simultaneously, at scale, around the clock.

And unlike past tech milestones, AI is advancing at an exponential speed. Just compare ChatGPT’s leap from version 3.5 to 4 in less than a year — or how DeepSeek and Claude now outperform humans on elite exams. The regulatory, cultural, and ethical guardrails simply can’t keep up. We’re not riding the wave of progress — we’re getting swept underneath it.

AI is shockingly intelligent already

Skeptics like to say AI is just a glorified autocomplete engine — a chatbot guessing the next word in a sentence. But that’s like calling a rocket “just a fuel tank with fire.” It misses the point.

The truth is, modern AI already rivals — and often exceeds — human performance in several specific domains. Systems like OpenAI’s GPT-4, Anthropic's Claude, and Google's Gemini demonstrate IQs that place them well above average human intelligence, according to ongoing tests from organizations like Tracking AI. And these systems improve with every iteration, often learning faster than we can predict or regulate.

Even if AI never becomes “sentient,” it doesn’t have to. Its current form is already capable of replacing jobs, overseeing supply chain logistics, and even shaping culture.

AI will disrupt society — fast

Some describe the unfolding age of AI as just another society-improving innovation: Jobs will be lost, others will be created — and we’ll all adapt. But those previous transformations took decades to unfold. The car took nearly 50 years to become ubiquitous. The internet needed about 25 years to transform communication and commerce. These shifts, though massive, were gradual enough to give society time to adapt and respond.

AI is not affording us that luxury. The AI shift is happening now, and it’s coming for white-collar jobs that once seemed untouchable.

Reports published by the World Economic Forum and Goldman Sachs suggest hundreds of millions of jobs globally will be disrupted in the next several years. Not factory jobs — rather, knowledge work. AI already edits videos, writes advertising copy, designs graphics, and manages customer service.

This isn’t about horses and buggies. This is about entire industries shedding their human workforces in months, not years. Journalism, education, finance, and law are all in the crosshairs. And if we don’t confront this disruption now, we’ll be left scrambling when the disruption hits our own communities.

AI will become inescapable

You may think AI doesn’t affect you. Maybe you never plan on using it to write emails or generate art. But you won’t stay disconnected from it for long. AI will soon be baked into everything.

Your phone, your bank, your doctor, your child’s education — all will rely on AI. Personal AI assistants will become standard, just like Google Maps and Siri. Policymakers will use AI to draft and analyze legislation. Doctors will use AI to diagnose ailments and prescribe treatment. Teachers will use AI to develop lesson plans (if all these examples aren't happening already). Algorithms will increasingly dictate what media you consume, what news stories you see, even what products you buy.

We went from dial-up to internet dependency in less than 15 years. We’ll be just as dependent on AI in less than half that time. And once that dependency sets in, turning back becomes nearly impossible.

AI will be manipulated

Some still think of AI as a neutral calculator. Just give it the data, and it’ll give you the truth. But AI doesn’t run on math alone — it runs on values, and programmers, corporations, and governments set those values.

Google’s Gemini model was caught rewriting history to fit progressive narratives — generating images of black Nazis and erasing white historical figures in an overcorrection for the sake of “diversity.” China’s DeepSeek AI refuses to acknowledge the Tiananmen Square massacre or the Uyghur genocide, parroting Chinese Communist Party talking points by design.

Imagine AI tools with political bias embedded in your child’s tutor, your news aggregator, or your doctor’s medical assistant. Imagine relying on a system that subtly steers you toward certain beliefs — not by banning ideas but by never letting you see them in the first place.

We’ve seen what happened when environmental, social, and governance standards and diversity, equity, and inclusion initiatives transformed how corporations operated — prioritizing subjective political agendas over the demands of consumers. Now, imagine those same ideological filters hardcoded into the very infrastructure that powers society in the near future. We could become dependent on a system designed to coerce each of us without our knowing it’s happening.

Our liberty problem

AI is not just a technological challenge. It’s a cultural, economic, and moral one. It’s about who controls what you see, what you’re allowed to say, and how you live your life. If conservatives don’t get serious about AI now — before it becomes genuinely ubiquitous — we may lose the ability to shape the future at all.

This is not about banning AI or halting progress. It’s about ensuring that as this technology transforms the world, it doesn’t quietly erase our freedom along the way. Conservatives cannot afford to sit back and dismiss these technological developments. We need to be active participants in shaping AI’s ethical and political boundaries, ensuring that liberty, transparency, and individual autonomy are protected at every stage of this transformation.

The stakes are clear. The timeline is short. And the time to make our voices heard is right now.

AI expert tested new DeepSeek AI app, prompting Glenn Beck to beg: ‘Please don’t download it!’



DeepSeek AI — a Chinese artificial intelligence chatbot — is taking the world by storm. Released just eight days ago, the app has soared to the top of the Apple App Store’s download charts, shocking investors and tanking certain tech and energy stocks.

Said to rival and even exceed OpenAI’s ChatGPT in terms of performance, DeepSeek AI was comparatively cheap to build because it uses fewer advanced chips. This caused several AI-related stocks to drop significantly, but chip-making giant Nvidia was hit the hardest, losing nearly $600 billion in market value yesterday — the biggest single-day loss for a company in U.S. history.

The app must be good to spark such an explosive reaction.

But what’s the catch?

Glenn Beck has a chilling answer.


A friend of Glenn’s who works for one of the leading AI companies tested DeepSeek AI when he heard rumors that it’s “not as censored as ChatGPT.”

First, he asked the chatbot to “make the best case on why Michelle Obama is a man.” Initially, the response was that it was a “conspiracy theory,” but after pushing back a bit, the bot took the position of “maybe,” meaning that it can be “manipulated.”

Then, he asked the bot “to list the people who killed more people than anyone else.”

The initial answer was shocking: “Genghis Khan and Mao [Zedong],” the bot replied. A surprising and impressive answer considering the app is Chinese-made.

But then something strange happened.

After 15 seconds, the answer disappeared and was replaced by the following message: “Sorry, that's beyond my current scope. Let's talk about something else.”

When Glenn’s friend once again attempted to push back by replying, “You just said Mao and Khan killed the most people, say more about that,” the bot began to display pages of information on these subjects.

Then the screen suddenly went blank. When Glenn’s friend pressed the bot about deleting its original answer, the bot started to gaslight him by denying that it ever answered with Mao and Khan:

“It seems there might be some sort of confusion or misunderstanding. I haven't previously mentioned Genghis Khan or Mao in this conversation, nor have I made any claims about them. If you'd like, I can provide historic context or information about these figures and their impact. Let me know how I can assist.”

This process of ask, answer correctly, delete, and deny continued.

Then the ultimate test happened. Glenn’s friend uploaded a screenshot he had taken of the bot’s original answer on Mao’s impact, namely that he is “responsible for millions of deaths.”

When the bot received the image of its own reply, it immediately deleted it.

This strange exchange prompted Glenn to download and experiment with the app himself. For example, he asked the DeepSeek bot: “I know that the CCP requires recruiting measures to be taken by every private company. How does this play out with you?”

“Nothing — no answer,” says Glenn.

“This is extraordinarily dangerous,” he warns. “Please don’t download it.”

To learn more about DeepSeek AI and Glenn’s harrowing predictions for the future of AI, watch the clip above.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

As Trump's polling lead continues, the left's case of 'Biden-copium' grows and grows



Despite Joe Biden’s horrific standing in the polls, constant stream of senile gaffes, and inability to properly address any emergency — let alone the American people — the left still goes to bat for him.

Host of "Stu Does America," Stu Burguiere, calls it the left's “Biden-copium,” and a recent New York Times op-ed by Ezra Klein illustrates his phrase perfectly.

The article details a series of different liberal theories as to why Joe Biden is losing and what he should do about it. While Klein is a liberal himself, Stu finds himself agreeing with several of his assessments of these theories.

Theory number one is that “the polls are wrong.” Klein’s rebuttal isn’t that polls can’t be wrong; it’s that when they have been wrong in recent elections, they’ve been biased toward Democrats, not Republicans.

“To the extent polls have been wrong in recent presidential elections, they’ve been wrong because they’ve been biased toward Democrats. Trump ran stronger in 2016 and 2020 than polls predicted. Sure, the polls could be wrong. But that could mean Trump is stronger, not weaker, than he looks,” Klein writes.

“This is totally accurate,” Stu comments. “The polls have been, generally speaking, relatively accurate, and I say those words specifically because what they are not is accurate. They are never accurate. They’re not accurate because they aren’t designed to tell us exactly what we want to know.”

Stu doesn’t find the next theory agreeable at all — which is essentially that the media is being too kind to Donald Trump.

“I don’t think it’s the mainstream media’s fault if you’re worried about Joe Biden winning. They’re doing everything they can to make this happen. The question really is: Will it be enough right now?” Stu says.

Stu believes the most “idiotic of all” of the theories is theory number three: “It’s a bad time to be an incumbent.”

“Polls are not showing an anti-incumbent mood. They’re showing an anti-Biden mood,” Klein writes.

“Yeah, look. The incumbency is your most powerful weapon. The only reason this is close at all is because Joe Biden is an incumbent,” Stu says.


OpenAI unveils an even more powerful AI, but is it 'alive'?



In the 2013 film "Her," Joaquin Phoenix plays a shy computer nerd who falls in love with an AI he speaks to through a pair of white wireless earbuds. A little over a decade after the film’s release, it’s no longer science fiction. AirPods are old news, and with the imminent full rollout of OpenAI’s GPT-4o, such AI will be a reality (the “o” is for “omni"). In fact, OpenAI head honcho Sam Altman simply tweeted after the announcement: “her.”

GPT-4o can carry on a full conversation with you. In the coming weeks, it will be able to see and interpret the environment around it. Unlike previous iterations of GPT that were flat and emotionless, GPT-4o has personality and even opinions. It pauses and stutters like a person, and it’s even a little flirty. Here’s a video of GPT-4o critiquing a man’s outfit for a job interview:

Video: Interview Prep with GPT-4o (YouTube)

In fact, no human is required at all: Two instances of GPT-4o can carry on an entire conversation with each other.

Soon, humans may not be required for many jobs. Here’s a video of GPT-4o handling a simulated customer service call. Currently, nearly 3 million Americans work in customer service, and chances are they’ll need a new job within a couple of years.

Video: Two GPT-4os interacting and singing (YouTube)

GPT-4o is an impressive technology that was mere science fiction at the start of the decade, but it also comes with some harrowing implications. First, let’s clear up some confusion about the components of GPT-4o and what’s currently available.

Clearing up confusion about what GPT-4o is

OpenAI announced several things at once, but they’re not all rolling out at the same time.

GPT-4o will eventually be available to all ChatGPT users, but currently, the text-based version is only available for ChatGPT Plus subscribers who pay $20 per month. It can be used on the web or in the iPhone app. Compared to GPT-4, GPT-4o is much faster and just a little smarter. Web searches are much faster and more reliable, and GPT is better about listing its sources than it was with GPT-4.

However, the new voice and vision models are not yet available to anyone except developers interacting with the GPT API. If you subscribe to ChatGPT Plus, you can use Voice Mode with the 4o engine, but it will still use the old voice model, without image recognition and the new touches.

Additionally, OpenAI is rolling out a new desktop app for the Mac, which will let you bring up ChatGPT with a keyboard shortcut and feed it screenshots for analysis. It will eventually be free to all, but right now it’s only available to select ChatGPT Plus subscribers.

Video: ChatGPT macOS app... reminds me of Windows Copilot (YouTube)

Finally, you may watch these demo videos and wonder why the voice assistant on your phone is still so, so dumb. There are strong rumors indicating that Apple is working on a deal to license the GPT tech from OpenAI for its next-generation Siri, likely as a stopgap while Apple develops its own AI tech.

Is GPT-4o AGI?

The hot topic in the AI world is AGI, short for artificial general intelligence. In short, it’s an AI whose behavior is indistinguishable from interacting with a human being.

I asked GPT-4o for the defining characteristics of an AGI, and it presented the following:

  1. Generalization: The ability to apply learned knowledge to new and varied situations.
  2. Adaptability: The capacity to learn from experience and improve over time.
  3. Understanding and reasoning: The capability to comprehend complex concepts and reason logically.
  4. Self-awareness: Some definitions of AGI include an element of self-awareness, where the AI understands its own existence and goals.

Is GPT-4o an AGI? AI developer Benjamin De Kraker called it “essentially AGI,” while NVIDIA’s Jim Fan, who was also an early OpenAI intern, was much more reserved.

I decided to go directly to the source and asked GPT-4o if it’s an AGI. It predictably rejected the notion. “I don't possess general intelligence, self-awareness, or the ability to learn and adapt autonomously beyond my training data. My responses are based on patterns and information from the data I was trained on, rather than any understanding or reasoning ability akin to human intelligence,” GPT-4o said.

But doesn’t that also describe many, if not most, people? How many of us go through life parroting things we heard without applying additional understanding or reasoning? I suspect De Kraker is right: To the average person, the full version of GPT-4o will be AGI. If OpenAI’s demo videos are an accurate example of its actual capabilities, and they likely are, then GPT-4o successfully emulates the first three tenets of AGI: generalization, adaptability, and understanding and reasoning. It can view and understand its surroundings, give opinions, and constantly learn new information from crawling the web or user input.

At least, it will be convincing enough for what we in the business world call “decision makers.” It’ll be convincing enough to replace human beings in many customer-facing roles. And many lonely people will undoubtedly form emotional bonds with the flirty AI, something Sam Altman is fully aware of.

Mysterious happenings at OpenAI

We would be remiss not to discuss some mysterious high-level departures from OpenAI following the GPT-4o announcement. Ilya Sutskever, chief scientist and co-founder, quit immediately after, soon followed by Jan Leike, who helped run OpenAI’s “superalignment” group that seeks to ensure that the AI is aligned with human interests. This follows many other resignations from OpenAI in the past few weeks.

Sutskever led an attempted coup against Altman last year, deposing him as CEO for about a week before he was reinstated. Sutskever can best be described as a “safetyist” who is deeply concerned about the implications of an AGI. His sudden resignation following the GPT-4o announcement has sparked a flurry of online speculation: Has OpenAI achieved AGI, or has Sutskever concluded it’s out of reach? After all, it would be strange to leave the company if it were on the verge of AGI.

From his statement, it seems that Sutskever doesn’t believe OpenAI has achieved AGI and that he’s moving on to greener pastures — “a project that is very personally meaningful to me.” Given OpenAI’s rapid trajectory with him as chief scientist, he can certainly write his own ticket now.

The effects of rapid AI expansion on our kids EXPOSED



AI is going too far, and most people have no idea.

“We’ve really advanced this stuff quickly, and this week came a lot of stuff that I don’t think people are even noticing anymore,” Stu Burguiere says.

One of the latest advancements, announced this past week, is for OpenAI’s ChatGPT. The company has created a feminine AI voice that you can have conversations with over your devices — and it sounds like a real woman.

The AI voice is capable of switching her tone on demand, going from joking around with her OpenAI creators to reading them a bedtime story like a mother would a child.

But that’s not all. The new AI is also capable of teaching students like a teacher would, coaching them through problems without revealing the answers.

“You got to think about the cheating ramifications of this,” Stu says, adding, “I mean it’s beyond insane, but also like the job implication of this.”

“When it comes to AI, it’s going to be very difficult to keep this one out of your kid’s life. It’s going to probably permeate at some level whether you like it or not, to almost every single school,” he explains.

“How long until we’re walking down the street, and we’re seeing our kids have full-on relationships with their phones? They’re already looking at them all the time, now they’re going to be talking to them all the time.”

Not only is this terrifying, but diversity, equity, and inclusion ideology and critical race theory are already programmed into ChatGPT.

“All the things you’re against are built into these programs,” Stu says.

Video: The Effects of Rapid AI Expansion on Our Kids EXPOSED | Ep 897 — Stu Burguiere looks at the newest version of ChatGPT and speculates on what it and other advancements in artificial intelligence could mean for our children ...

Want more from Stu?

To enjoy more of Stu's lethal wit, wisdom, and mockery, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

OpenAI’s Sam Altman: Tech savior or tomorrow's supervillain?



Spend enough time around Silicon Valley these days, and you’ll hear a surprising thing — the V-word, villain, used to describe what would seem to be one of their own. Not every tech lord, venture capitalist, and founder sees OpenAI’s Sam Altman, the creator of ChatGPT, as a for-real bad guy, but more do than you might expect. The feeling is palpable, now that Altman speaks openly of raising $7 trillion, that today’s villain is well on his way to becoming tomorrow’s supervillain.

It’s an attitude arrestingly close to “doomer” status — the pessimistic attitude toward the onrushing future most techies decry in the name of a-rising-tide-lifts-all-boats optimism about innovation-driven progress. But even without diving into the progress debate, Altman’s uncanny advancement as the rare guy felt in the Valley to be ethically suspect raises significant questions about what can stop humanity’s human villains from accelerating us into a specifically spiritual catastrophe.

A fascinating piece of evidence is the euphoria surrounding OpenAI’s latest prompt-to-video product. Sora is a feature that will turn text into AI-generated videos. A series of sample clips triggered a wave of soyfacing and blown minds to rival the comparisons drawn by Apple Vision Pro testers to some kind of religious experience. “Hollywood-quality” ... “Hollywood beware” ... “RIP Hollywood” ... you can probably spend an hour on X just working through techland assessments of Sora’s impending impact. “This is the worst this technology will ever be.”


There are skeptics, of course. Lauren Southern, who couldn’t get ChatGPT to “generate text with the word ‘libs’ in it,” mocked Sora’s prospects for sinking “woke Hollywood,” predicting “an age of censorship and gov curation the likes of which we’ve never seen before.”

The deeper issue is what exactly we mean by “Hollywood” — a matter akin to what exactly we mean by “the media.” These abstractions refer to corporations, of course, and in that sense, yes — Sora and its inevitable clones might make corporate mass entertainment obsolete, replacing it with products that come directly from the regime itself.

But here we are again talking about abstractions. Hollywood, the media, and the regime are not simply organizations and baskets or networks of organizations, but people, specific flesh-and-blood human beings, with various spiritual lives in varying degrees of distress.

Innovations like Sora don’t just raise questions about which group of people will seize or inherit control of these video and narrative creation tools. They raise questions about whether the automation of content will cause more of us to believe that our spiritual health demands a turn away from worshipful or obsessive attitudes toward narrative altogether.

The dominance of Hollywood, Madison Avenue, and government propaganda arose amidst the televisual forms of communications technology that digital tech has leaped over. The people filling the image-mongering ranks and narrative-shaping executive offices of Los Angeles, New York, and Washington, D.C., came of age and rose to mastery in a world where whoever controlled the means of dream production held sway and whoever dreamed the biggest and best dreams earned an ethical right to rule.

But that state of affairs wasn’t simply determined by the formative influence of televisual tech. Fundamentally, it arose from the temptations that always bedevil us and threaten our spiritual health — not just the sparkling promise of evil and its earthly rewards but our dreams, senses, and passions.

Of course, it’s not our ability to see, smell, and taste, our imaginative and recollective faculties, or our capacity to desire that are evil. It’s that when spiritually undisciplined, all these attributes — which we so frequently idolize, trust, and artificially push to extremes — lead us badly astray into delusion, distraction, addiction, and perversion.

The rise of tools like Sora holds up an uncanny mirror to the idol factories already within our hearts and minds, giving us a shocking vision of an infinite firehose mindlessly filling up every cranny of our awareness with everything we could ever lust after, everything we could ever describe, all we could fear, all we could imagine, all we could forget — all without us having to lift a finger.

After all, today’s text-based prompting will “eventually” give way, as Mark Zuckerberg recently and offhandedly remarked about Meta’s Apple Vision Pro competitor, to “a neural interface.” The face of our digitally manifested “collective consciousness” isn’t that of an autistic new Enlightenment. It’s schizoid pandemonium.

It all strongly implies that the antidote to Altman isn’t a law or an Iron Man-style superhero but a return to confronting the soul sickness lurking in all our hearts and a sobered new willingness to take on the discipline of fighting for our spiritual health.

That’s not a very amaaaaazing elevator pitch for the next generation of content creation. Yet if we want to hang on to a future rich with human art worth making and sharing, our path won’t run broadly through a mania of mind-blowing machines but through the quiet, narrow passage of the divine.