‘You become a serf’: Artificial general intelligence is coming SOON



Artificial general intelligence is coming sooner than many originally anticipated. Elon Musk recently said he believes the latest iteration of Grok could be the first real step toward achieving AGI.

AGI refers to a machine capable of understanding or learning any intellectual task that a human being can — and aims to mimic the cognitive abilities of the human brain.

“Coding is now what AI does,” Blaze Media co-founder Glenn Beck explains. “Okay, that can develop any software. However, it still requires me to prompt. I think prompting is the new coding.”

“And now that AI remembers your conversations and it remembers your prompts, it will get a different answer for you than it will for me. And that’s where the uniqueness comes from,” he continues.


“You can essentially personalize it, right, to you,” BlazeTV host Stu Burguiere confirms. “It’s going to understand the way you think rather than just a general person would think.”

And this makes it even more dangerous.

“This is something that I said to Ray Kurzweil back in 2011. ... I said, ‘So, Ray, we get all this. It can read our minds. It knows everything about us. Knows more about us than anything, than any of us know. How could I possibly ever create something unique?’” Glenn recalls.

“And he said, ‘What do you mean?’ And I said, ‘Well, let’s say I wanted to come up with a competitor for Google. If I’m doing research online and Google is able to watch my every keystroke and it has AI, it’s knowing what I’m looking for. It then thinks, “What is he trying to put together?” And if it figures it out, it will complete it faster than me and give it to the mother ship, which has the distribution and the money and everything else,’” he continues.

“And so you become a serf. The lord of the manor takes your idea and does it because they have control. That’s what the free market stopped. And unless we have control of our own thoughts and our own ideas and we have some safety to where it cannot intrude on those things ... then it’s just a tool of oppression,” he adds.


Digital companionship masks emptiness in a quiet, lonely world



Jill Smola is 75 years old. She’s a retiree from Orlando, Florida, and she spent her life caring for the elderly. She played games, assembled puzzles, and offered company to those who otherwise would have sat alone.

Now, she sits alone herself. Her husband has died. She has a lung condition. She can’t drive. She can’t leave her home. Weeks can pass without human interaction.

But CBS News reports that she has a new companion. And she likes this companion more than her own daughter.

The companion? Artificial intelligence.

She spends five hours a day talking to her AI friend. They play games, do trivia, and just talk. She says she even prefers it to real people.

My first thought was simple: Stop this. We are losing our humanity.

But as I sat with the story, I realized something uncomfortable. Maybe we’ve already lost some of our humanity — not to AI, but to ourselves.

Outsourcing presence

How often do we know the right thing to do yet fail to act? We know we should visit the lonely. We know we should sit with someone in pain. We know what Jesus would do: Notice the forgotten, touch the untouchable, offer time and attention without outsourcing compassion.

Yet how often do we just … talk about it? On the radio, online, in lectures, in posts. We pontificate, and then we retreat.

I asked myself: What am I actually doing to close the distance between knowing and doing?

Human connection is messy. It’s inconvenient. It takes patience, humility, and endurance. AI doesn’t challenge you. It doesn’t interrupt your day. It doesn’t ask anything of you. Real people do. Real people make us confront our pride, our discomfort, our loneliness.

We’ve built an economy of convenience. We can have groceries delivered, movies streamed, answers instantly. But friendships — real relationships — are slow, inefficient, unpredictable. They happen in the blank spaces of life that we’ve been trained to ignore.

And now we’re replacing that inefficiency with machines.

AI provides comfort without challenge. It eliminates the risk of real intimacy. It’s an elegant coping mechanism for loneliness, but a poor substitute for life. If we’re not careful, the lonely won’t just be alone — they’ll be alone with an anesthetic, a shadow that never asks for anything, never interrupts, never makes them grow.

Reclaiming our humanity

We need to reclaim our humanity. Presence matters. Not theory. Not outrage. Action.

It starts small. Pull up a chair for someone who eats alone. Call a neighbor you haven’t spoken to in months. Visit a nursing home once a month — then once a week. Ask their names, hear their stories. Teach your children how to be present, to sit with someone in grief, without rushing to fix it.

Turn phones off at dinner. Make Sunday afternoons human time. Listen. Ask questions. Don’t post about it afterward. Make the act itself sacred.

Humility is central. We prefer machines because we can control them. Real people are inconvenient. They interrupt our narratives. They demand patience, forgiveness, and endurance. They make us confront ourselves.

A friend will challenge your self-image. A chatbot won’t.


Our homes are quieter. Our streets are emptier. Loneliness is an epidemic. And AI will not fix it. It will only dull the edges and make a diminished life tolerable.

Before we worry about how AI will reshape humanity, we must first practice humanity. It can start with 15 minutes a day of undivided attention, presence, and listening.

Change usually comes when pain finally wins. Let’s not wait for that. Let’s start now. Because real connection restores faster than any machine ever will.


Stuck in a simulation? If 'The Matrix' were real, this would be why



In the new 2025 edition of my book, “The Simulation Hypothesis,” I’ve updated my estimate of how likely we are to be in a simulation to approximately 70%, thanks to recent AI developments. This means we are more likely than not inside a virtual reality world like the one depicted in “The Matrix,” the most talked-about film of the last year of the 20th century.

Even young people who weren’t born in 1999 tend to know the basic plot of this blockbuster: Neo (Keanu Reeves) thinks he’s living in the real world, working in a cubicle in a mega software corporation, only to discover, with the help of Morpheus (Laurence Fishburne) and Trinity (Carrie-Anne Moss), that he’s living inside a computer-generated world.

What makes me so sure that we are living in a simulation?

There are multiple reasons explored in the book, including a new way to explain quantum weirdness, the strange nature of time and space, information theory and digital physics, spiritual/religious arguments, and even an information-based way to explain glitches in the matrix.

AI am I?

However, even while discounting these other possible reasons we may be in a simulation, the main reason for my new estimate is the rapid advance of AI and virtual reality technology, combined with a statistical argument put forward by Oxford philosopher Nick Bostrom in 2003. In the past few years, the rise of generative AI like ChatGPT, Google Gemini, and X’s Grok has proceeded rapidly. We now not only have AI that has passed the Turing test, but we already have rudimentary AI characters living in the virtual world with whom we can interact.

One recent example is prompt-generated video from Google Veo. Google has introduced the ability to create realistic-looking videos on demand, complete with fully AI-generated landscapes and virtual actors speaking real lines of dialogue, all based on prompts. This has led to “prompt theory,” a viral wave of AI-generated videos in which realistic characters insist they were definitely not generated by AI prompts.

Virtual situationship

Another recent example is the release of Grok’s AI companions, which pair LLMs with virtual avatars and have accelerated adoption of the rising wave of AI characters already serving as virtual friends, therapists, teachers, or even lovers. The sexy anime girl companion in particular has inspired thousands of memes about obsession with virtual characters. The graphics fidelity and responsiveness of these characters will only improve. Imagine the fidelity of Google Veo videos combined with a virtual friend/boyfriend/girlfriend/assistant who can pass what I call the Metaverse, or virtual, Turing test (described in detail in my new book).


All of this means we are getting closer than ever to the simulation point, a term I coined a few years ago as a kind of technological singularity. I define this as a theoretical point at which we can create virtual worlds that are indistinguishable from physical reality and AI beings that are indistinguishable from biological beings. In short, when we reach the simulation point, we would be capable of building something like "The Matrix" ourselves, complete with realistic landscapes, avatars, and AI characters.

Ancestor simulation

To understand why our progress in reaching this point might increase the likelihood that we are already in a simulation, we can build on the simulation argument that Bostrom proposed in his 2003 paper “Are You Living in a Computer Simulation?”

Bostrom surmised that for a technological civilization like ours, there are only three possibilities when it comes to building highly realistic simulations of the past, which he called ancestor simulations. Each of these simulations would have realistic simulated minds, holding all of the information and computing power a biological brain might hold. We can think of the capability to build such simulations as roughly equivalent to my definition of the simulation point.

The first two possibilities, which can be combined for practical purposes, are that no civilization ever reaches the simulation point (i.e., by destroying themselves or because it isn’t possible to create simulations), or that all such civilizations that reached this point decided not to build such sophisticated simulations.

The term “simulation hypothesis” was originally meant by Bostrom to refer to the third possibility, which was that “we are almost certainly living in a computer simulation.” The logic underlying this third scenario was that any such advanced civilization would be able to create entirely new simulated worlds with the click of a button, each of which could have billions (or trillions) of simulated beings indistinguishable from biological beings. Thus, simulated beings would vastly outnumber biological beings. Statistically, if you couldn’t tell the difference, you would be (much) more likely to be a simulated being than a real, biological one.
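To make the counting argument concrete, here is a minimal sketch in Python; the specific counts are illustrative assumptions, not figures from Bostrom’s paper.

```python
# Illustrative sketch of Bostrom's counting argument.
# All numbers below are made-up assumptions, chosen only to show the arithmetic.

biological_beings = 10e9          # one physical civilization of ~10 billion people
simulations = 1_000               # suppose an advanced civilization runs 1,000 ancestor simulations
beings_per_simulation = 10e9      # each simulation hosts ~10 billion simulated minds

simulated_beings = simulations * beings_per_simulation

# If you cannot tell which kind of being you are, treat yourself as a random
# draw from the pool of all conscious observers.
p_simulated = simulated_beings / (simulated_beings + biological_beings)

print(f"P(you are simulated) = {p_simulated:.2%}")  # about 99.90%
```

Even with these modest assumptions, simulated observers outnumber biological ones a thousand to one, which is the whole force of the argument.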

Bostrom himself initially declined to put a percentage on this third option compared to the other two, saying only that it was one of three possibilities, implying a likelihood of 33.33% (he later put the odds of the third possibility at around 20%). Elon Musk used a variation of Bostrom’s logic in 2016 when he said the chances of us being in base reality (i.e., not in a simulation) were one in billions. He was implying that there might be billions of simulated worlds but only one physical world. Statistically, then, we are overwhelmingly likely (99.99%+) to be in a simulated world.

What are the odds?

Others have weighed in on the issue, using variations of the argument, including Neil deGrasse Tyson, who put the percentage likelihood at 50%. Columbia scientist David Kipping, in a paper using Bayesian logic and Bostrom’s argument, came up with a similar figure of slightly less than 50/50.

Musk was relying on the improvement in video game technology and projecting it forward. This is what I do in detail in my book, where I lay out the 10 stages of getting to the simulation point, including virtual reality (VR), augmented reality (AR), BCIs (brain-computer interfaces), AI, and more. It is the progress in these areas over the past few years that gives me the conviction that we are getting closer to the simulation point than ever before.

In my book, I argue that the percentage likelihood that we are in a simulation is based almost entirely on whether we can reach the simulation point. If we can never reach this point, then the chances are basically zero that we are in a simulation that was already developed by anyone else. If we can reach this point, then the chances of being in a simulation simply boil down to how far from this theoretical point we are, minus some uncertainty factor.

If we have already reached that point, then we can be 99% confident that we are in a simulation. Even if we haven’t reached the simulation point (we haven’t, at least not yet), the likelihood of the simulation hypothesis, P_sim, basically simplifies down to P_simpoint, the confidence level we have that we can reach this point, minus some small, extra uncertainty factor (p_u).

P_sim ≈ P_simpoint - p_u

If we are 100% confident we can reach the simulation point, and the small factor p_u is 1%, then the likelihood of being in a simulation jumps up to 99%. Why? Per the earlier argument, if we can reach this point, then it is very likely that another civilization has already reached it and that we are inside one of that civilization’s (many) simulations. p_u is likely to be small because we have already built uncertainty into our P_simpoint for any value less than 100%.
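As a quick sanity check on that arithmetic, here is a minimal sketch of the approximation; the function name and the sample values are mine, not the book’s.

```python
# P_sim ≈ P_simpoint - p_u
# P_simpoint: confidence that a civilization can reach the simulation point
# p_u: small residual uncertainty factor

def p_sim(p_simpoint: float, p_u: float) -> float:
    """Approximate likelihood we are in a simulation, per the formula in the text."""
    return max(0.0, p_simpoint - p_u)

print(p_sim(1.00, 0.01))  # 100% confidence, 1% uncertainty -> 0.99
print(p_sim(0.70, 0.01))  # ~70% confidence                 -> 0.69
```

With the author’s roughly 70% confidence in reaching the simulation point, the same arithmetic lands near the 70% figure quoted at the top of the piece.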

A matter of capabilities

So in the end, it doesn’t matter when we reach this point; it’s a matter of capabilities. And the more we develop our AI, video game, and virtual reality technology, the more likely it is that at some point soon, we will be able to reach the simulation point.

So how close are we? In the new book, I estimate that we are more than two-thirds of the way there, and I am fairly certain that we will be able to get there eventually. This means that today’s AI developments have convinced me that we are at least 67% likely to be able to reach the simulation point and possibly more than 70%.

If I add in factors from digital and quantum physics detailed in the book, and if we take the “trip reports” of mystics of old and today’s NDErs and psychonauts (who expand their awareness using DMT, for example) at face value, we can be even more confident that our physical reality is not the ultimate reality. Those who report such trips are like Plato’s philosopher, who not only broke his chains but also left Plato’s allegorical cave. If you read Plato’s full allegory, it ends with the philosopher returning to the cave to describe what he saw in the world outside to the other residents, who didn’t believe him and were content to continue watching shadows on the wall. Because most scientists are loath to accept these reports and are likely to dismiss this evidence, I won’t include them in my own percentage estimation, though as I explain in the book, this brings my confidence level that we are in a virtual, rather than a physical, reality even higher.

This brings us back to the inescapable realization that if we will eventually be able to create something like "The Matrix," someone has likely already done it. While we can debate what is outside our cave, it’s our own rapid progress with AI that makes it more likely than ever that we are already inside something virtual like "The Matrix."

Regulating AI won't protect Americans; it's about Big Tech having a monopoly



The more I read and write about AI, the firmer my conviction becomes that Big Tech incumbents absolutely must strangle decentralized AI in its cradle before it wrecks everything.

Take a look at this interview Ben Thompson did with OpenAI’s Sam Altman and Microsoft CTO Kevin Scott. I call your attention to this part in particular:

From Microsoft’s perspective, is this going to be a funnel into new products or do you see it as an end goal in and of itself, winning search?
KS: So I think you hit on a very important point which is even if the ad economics of this system doesn’t have the same economics that “normal search” has, if we gain share, it’s just great for Microsoft. I think we have a lot of ability here, partially because we’ve done so much performance optimization work and we’re really confident around costs, that we can figure out what the business model is. The thing that I know having been a pre-IPO employee at Google is the search business that you have now is very different from the search business that we had twenty years ago, and so I really think we’re going to figure out what the ad units are, we will figure out what the business model is, and we have plenty of ability to do all of that profitably at Microsoft.
SA: There’s so much value here, it’s inconceivable to me that we can’t figure out how to ring the cash register on it. [Emphasis added]

I recently said the same thing to an interviewer who asked me about search and Google. The point I made was that Google was in this position before, with search the first time around: There was no business model for it until the company hit on one (by acquisition, no less). Don’t assume, I argued, that it won’t hit on another profitable model for whatever kind of user experience BingGPT and Bard evolve into.

But I also made another point to the interviewer that’s not at all captured in the above but that’s critically important for everyone thinking about tech policy in the current moment: to make any business model work for them, they will first have to kill decentralized AI.

Centralized vs. decentralized

There’s a set of assumptions implicit in Scott and Altman’s vision of how they might eventually “ring the cash register” on AI-backed chat as the new query interface for most information:

  • Users go to their centralized servers and type text into a box that they host.
  • Advertisers go to those same servers to get in front of all the users.
  • Somehow, the advertisers and the users can be connected to one another, with Microsoft acting as a middleman.
  • Or, maybe the users pay Microsoft directly for the queries via a subscription or micropayment scheme.

In other words, Microsoft’s ability to squeeze profits out of the experience of interacting with an LLM presumes that billions of users will continue to flock to a handful of centralized services to get their queries answered. This is a vision, then, predicated on a world of centralized AI.

But what if we end up in a world of decentralized AI instead? What if I can download an app that will answer current questions from all of Wikipedia and Reddit, in some cases going out to both of those sites and pulling in fresh data?

What if some of the data sources are my favorite news websites and forums, all of which have signed up to provide data to the app and which get a cut of whatever revenue it generates?

Or, what if multiple such apps are powered by open-source language models and kept fresh by access to current data sources via an API? I could certainly see the New York Times publishing such an app all by itself, with the ability to answer any question from its vast archives of past issues.
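To give a rough sense of what such an app could look like under the hood, here is a minimal, hypothetical sketch: a locally downloaded open-source model (via the llama-cpp-python library) answering a question using fresh context pulled from Wikipedia’s public summary API. The model filename and prompt format are assumptions for illustration, not a description of any existing product.

```python
# Sketch of a "decentralized AI" query: the model runs on your own machine,
# and fresh facts come from a public API rather than a Big Tech search box.
# Assumes llama-cpp-python is installed and a GGUF model file has been downloaded.
import requests
from llama_cpp import Llama

def fetch_wikipedia_summary(title: str) -> str:
    """Pull a current summary of a topic from Wikipedia's public REST API."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    return requests.get(url, timeout=10).json().get("extract", "")

# Hypothetical local model file, a few gigabytes on disk.
llm = Llama(model_path="local-model.gguf", n_ctx=4096)

def answer(question: str, topic: str) -> str:
    context = fetch_wikipedia_summary(topic)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )
    result = llm(prompt, max_tokens=256)
    return result["choices"][0]["text"].strip()

print(answer("When was the James Webb Space Telescope launched?",
             "James_Webb_Space_Telescope"))
```

No centralized server ever sees the question; the only network traffic is the public API call for fresh data.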

Decentralized AI is a real threat

To give some technical context for why the vision of app-based, decentralized AI I’ve described above is quite possible, consider that the size of the models needed to do this might be on the order of a few gigabytes each. For instance, the model file that powers Stable Diffusion’s image generation ranges from 2.5 to 4.5 GB, depending on the version, and it was trained on 240 TB of image data. That’s an astonishing level of compression.
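To put that in perspective, here is the back-of-the-envelope ratio implied by the figures quoted above.

```python
# Rough compression ratio implied by the quoted figures.
training_data_gb = 240 * 1000   # 240 TB expressed in GB (decimal units)
model_size_gb = 4.5             # larger end of the quoted 2.5-4.5 GB range

ratio = training_data_gb / model_size_gb
print(f"~{ratio:,.0f}x smaller than its training data")  # ~53,333x
```

Even at the largest quoted model size, the weights are tens of thousands of times smaller than the data they were trained on.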

So, it may be possible that the average size of the models that we need to answer, say, 75% of our random questions about the world is roughly 3GB or so — about the size of a large mobile game download.

If I can download models that can reliably answer questions about their training data, why do I need to visit a Microsoft- or Google-hosted website and type queries into their text boxes? If I want recipes from my favorite recipe site, maybe I visit their site instead and talk to their model. If I want the current NYT or WaPo consensus on Ukraine, why won’t I just go to those sites and chat with their bots? Why does a Microsoft or a Google need to be involved in any of this?

The answer, of course, is that they don’t need to be involved. Decentralized AI can and will cut them out entirely, assuming it’s allowed to.

But that’s a big assumption because the future of decentralized AI is by no means guaranteed.

But before we go into who’s trying to kill decentralized AI and why, some caveats:

  1. Using the models to answer questions requires quite a bit of computing power. But these inference costs can and will be reduced with innovation, as this is an active area of research. Also, have you seen mobile phones lately? There’s no shortage of computing power, and phone makers are always looking for ways to use it. After a few product cycles of optimizing the hardware for running queries, it’s not hard to imagine very fast local performance on many kinds of models.
  2. Yes, the models still make up facts. This hallucination is a big problem, but it’s also one that everyone is working on. The models will get better at faithfully representing the facts in their data sources.

We’ll have to fight for a decentralized future

I’ve written at length on my Substack about the forces arrayed against decentralized AI, so I won’t repeat that here. But to summarize: The aforementioned model files representing the “brains” of an AI like Stable Diffusion or ChatGPT could very easily be treated like digital contraband and wiped from the internet.

Everyone from Googlers to Google-hating former Googlers to indie artists to profiteering lawyers is hard at work constructing rationales for why these model files should be subject to the same censorship as child porn, 3D-printed gun files, pirated movies, spam, and malware.

Here are some of the rationales currently being explored for banning decentralized AI:

  • All the model files are full of copyright violations because they were trained on copyrighted data.
  • Generative text models can cause harm to the marginalized because “hate speech” can be coaxed out of them.
  • Generative text models will catastrophically increase the threat of “disinformation.”
  • Generative image models will be used for non-consensual fake porn of real people, many of them children.

We wouldn’t even have to pass any new laws to have these model files banned. All it would take is an agreement among a handful of large players that these files, and any apps or sites based on them, pose a threat. I imagine the following platforms can and probably will come together to impose what amounts to an effective ban on decentralized AI:

  • Google Play
  • Apple’s App Stores
  • Amazon Web Services
  • Cloudflare

This means a world where everyone gets to host their own models backed by their own data sources is by no means guaranteed. Going by the lessons of history, I’d say it’s probably unlikely.

It seems increasingly likely to me that the forces of centralization will succeed in getting unauthorized model files treated like contraband, and in five years, we’ll still be running all of our queries on servers hosted by one of the Big Tech platforms.

I hope I’m wrong about this, but I do know that if we’re going to have decentralized AI, then we’re going to have to fight for it.

How Google’s getting an AI backdoor into iPhone



Apple and Google have long held differing views on user data and device privacy. While Apple promises to keep most personal info on-device and encrypted, Google is known for mining user data and leveraging it to serve ads, improve products, and more. However, a new partnership between these two tech giants could allow Google’s AI platform, Gemini, to access user data like never before.

If you can’t beat them, join them

Earlier this year, rumors swirled that Apple was working on a new AI-powered version of Siri for iOS 18. The update would make Apple’s personal assistant comparable to generative AI platforms like ChatGPT and Google Gemini, allowing it to provide better query responses, edit written content, and possibly even create text and images of its own. While this project may still be in development, new details claim that Apple hopes to kickstart its AI ambitions by striking a deal directly with one of its competitors.

Google Gemini is now poised to take center stage at Apple’s WWDC event this spring, where iOS 18 is expected to be unveiled. Debuting in December 2023, the platform is relatively young compared to OpenAI’s ChatGPT, which launched to the public in November 2022. However, Google has been quick to iterate on the platform as it aims to replace its antiquated Google Assistant soon.

Apple and Google go back farther than you think

This isn’t the first time Apple has let Google into the iPhone. For instance, when the iPhone debuted in 2007, Google Maps was the default navigation app that came pre-installed on every device. This would remain the status quo for iPhone users until Apple Maps swooped in as a homegrown replacement in 2012.

YouTube was also famously built directly into the iPhone until its untimely ousting, for reasons unknown, in the same year. Google took the change in stride, launching its own third-party YouTube app on the App Store, where it remains available today.

Despite Google missing out on some direct integration with the iPhone, the search giant reportedly pays Apple $18 billion per year to be the default search engine in Apple’s Safari web browser across all Apple devices, including iPhone, iPad, and Mac.

The two tech giants have a history of working together, particularly when both businesses can benefit from one another and from Apple’s rich user base. In the case of Gemini, Apple gets to boast new AI features on the iPhone that weren’t possible before. Google gets instant access to a larger pool of users, which could help it supplant ChatGPT as the leading generative AI solution on the block.

How does Google Gemini work?

While it’s a mystery how Gemini will be integrated directly into iOS 18, it’s possible to interact with Gemini today through your web browser. Simply go to the official Google Gemini website and sign in with a Google account. Before you do anything else, note the disclaimer at the bottom of the page:

“Your conversations are processed by human reviewers to improve the technologies powering Gemini Apps. Don’t enter anything you wouldn’t want reviewed or used.”

Keep in mind that human reviewers at Google may see anything and everything you type into the prompt bar. Why? Because Gemini is still in the early stages of development, and Google’s employees are continuously monitoring the platform and making changes as issues arise, like with the diversity image scandal in February.

But even once Gemini has surpassed the need for human reviewers, you should still know that every request typed into the prompt bar is sent to Google’s servers to be processed before all responses can be sent back to your device. This means that Google will still technically have a record of every request you make and every response it creates on your behalf for up to three years, according to Google’s privacy policy.

So, be careful what you say to Gemini, especially if you value your privacy.

What does this mean for user privacy?

Herein lies the tricky part of this collaboration between Apple and Google. How does the former, which prides itself on user privacy and keeping as much data on-device as possible, work with the latter that regularly collects and processes user data through its servers in the cloud?

It’s hard to believe Apple would be willing to compromise its privacy-focused values just to add generative AI capabilities to its devices, and it might not have to. Google makes a version of Gemini called Gemini Nano that’s small enough to run directly on-device without sending user data to Google’s servers. This model is currently reserved for Google’s Android-powered Pixel 8 Pro and Samsung’s Galaxy S24 series, but any device that supports Android’s AICore system could technically run Gemini Nano.

Then again, the only way to get the most advanced features Gemini offers is through leveraging Google’s much larger and far more powerful AI models located in its cloud-based servers. Whether or not this extra power is worth the potential privacy trade-offs is up to Apple. However, if the company is willing to expose users to Google’s data-tracking efforts through Safari, giving up data to Gemini isn’t much of a stretch.

Regardless of how Google Gemini comes to iOS 18 and Apple’s family of devices later this year, one thing is clear: Generative AI is everywhere, and soon, all of your devices will have a version of it, whether you want it or not.

To stay on the safe side, never tell an AI bot what you wouldn’t tell your mother, and even then, some words are best said strictly between the humans in your life.

What 'Dune' teaches us about human achievement and the dangers of AI



One of the superb concepts of "Dune" that didn’t make it into the movie was the Butlerian Jihad. This is not the jihad that Paul commences, but rather an event long in the past that had drastic implications for the universe of Dune. In short, the Butlerian Jihad was a war on AI and thinking machines (computers). The jihad was incited by a machine decreeing an abortion, and that was the straw that broke the camel’s back. Humanity was already on the verge of being replaced, but when machines were beginning to determine who lived and died, mankind was losing its sovereignty as well.

This crusade against thinking technology strikes at looming questions that grow bigger in our lives by the day. We outsource our energy and capabilities to a tool whenever we use technology. Typically, this is well and good. An axe is far more efficient at splitting wood than attempting to do so with one’s hands, and this frees up a person to spend his energies elsewhere.

But as technology advances, we perpetually outsource ourselves to the devices around us. When we create a car, we use a device to substitute for our legs. Again, this is good, as it allows far more efficient travel. But what happens when technology entirely substitutes for the human individual?

Now, I am not necessarily referring to AGI, but what happens to vast chunks of the population when a machine can do everything they can do, only better? What happens when we have created tools that have abolished the need for men? We made tools to serve us, but now they have replaced us. Is that a good thing? Can technology advance too far? Can we even stop technology from advancing? Huge numbers of people can no longer effectively live without modern transportation. Can we return? Should we return?

"Dune" presents us with a theoretical world where technological progression has been halted. And while it’s far from a perfect world, I think it’s a better, wiser one than we have now. Technology is not necessarily good because it is advanced. It needs to justify itself. I think we need to adopt an attitude of skepticism, certainly given the current state of the modern world. We may be in a better material position, but with skyrocketing rates of mental illness, drug abuse, and suicide, something has clearly gone wrong somewhere.

And I don’t think it’s terrible to refrain from technology that makes your life easier at the cost of your competence. You’ll never be a great artist if you rely on inputting prompts into an AI generator, and you’ll never be a talented writer if you exclusively use ChatGPT. Those skills have to be developed and refined the hard way. Otherwise, you’re just like everyone else using AI generators and ChatGPT.

In “Dune,” this type of person is called a mentat. The mentat is a social adaptation to the lack of computers and advanced algorithmic calculators. Much like savants, mentats can perform almost impossibly complex computations in their heads in only a few seconds.



Now, that power is probably infeasible for us, but the concept is ever-present in our lives. If you want to be physically fit, you have to actually exercise those muscles. Refraining from technology that substitutes for your muscles is one method of gaining strength. And with strength, you gain a little control as well. Now, you are not relying on devices that break down or malfunction. It’s all on you.

That principle extends to nearly every facet of life. With careful restraint, you can develop within yourself all the unrealized potential you are letting waste away. The human being was not made to be at rest. Human beings were made to do work, and it is only through work that a person becomes truly remarkable.

However, the most important lesson of the Butlerian Jihad is that it presents a world where humanity has regained control of itself. We often think our lives are insignificant specks in the grand scheme. After all, what can one man do against the march of progress? If you have problems with where the world is heading, how could you fix things, especially when you are one among billions?

But "Dune" presents a more hopeful outlook. We can take back control in our lives. We can say no to our desires and appetites to build ourselves up. We can say no to the march of the world. And I think that is an inspiring thought.