This one 'SpongeBob' episode explains what’s wrong with tech today



The best season of a TV show is frequently the first season, and as true "SpongeBob SquarePants" lovers know, there’s no substitute for Season One. Before the lunch boxes, before the movies, before the relentless IP exploitation, there was a simple cartoon, and in that simplicity was often found the expansive wisdom of the timeless fable.

Of course, in times like these, the timeless is especially timely. Technology has thoroughly warped our experience of our given chronology, scrambling and blurring legacy expectations around life span, generations, family formation, commutes, seasons, and so much more.

Nostalgia, futurism, and “the moment” compete for our loyalty and our attention, and our imaginations groan with mingled hope and fear toward the prospect of our collective techno-transformation into something that seems at once more and less than human.

So — "SpongeBob," Season One, Episode 19b, affectionately known as “Neptune’s Spatula.” For the uninitiated, the Fandom site Encyclopedia SpongeBobia provides a background synopsis.

Visiting the Fry Cook Museum, SpongeBob and trusty starfish sidekick Patrick discover an Excalibur of exhibits: "Many have tried to pull this spatula out of this ancient grease, but all have failed. Only a fry cook who's worthy of King Neptune himself can wield the golden spatula.” Naturally, SpongeBob accidentally extracts the spatula, and Neptune promptly appears, challenging our hapless hero to the ultimate burger competition. Victory spells divinity for SpongeBob; defeat, the surrender of his beloved fry cook vocation forever.

“King Neptune makes 1,000 Krabby Patties, in the time it takes SpongeBob to make just one, winning the challenge,” the synopsis retells. “However, when Neptune shares his patties with the audience, they express that they taste awful. Neptune is angered by this and asks why they would think that SpongeBob's would be any better. He tastes SpongeBob's Krabby Patty and finds it delicious. … SpongeBob is declared the winner, but when he finds out that his friends cannot come with him to Atlantis, he tearfully refuses to go to Atlantis, and instead arranges for King Neptune to be a trainee under SpongeBob at the Krusty Krab, teaching him that ‘perfect Patties are made with love, not magic.’”

And as Arthur C. Clarke posited as his Third Law in "Profiles of the Future: An Inquiry into the Limits of the Possible," “any sufficiently advanced technology is indistinguishable from magic.” Or, as Joseph Heller famously wrote in "Catch-22," “the spirit gone, man is garbage.” All the tech in the world can’t substitute for love — not in the making of burgers, not in the feeding of others, and not in the living of life.

But wait, there’s more. The fan site synopsis leaves out the most important part — the drama of SpongeBob’s confrontation with the prospect of his transformation into a god.

For that, we can helpfully resort to the episode transcript.

SpongeBob: So, uhh, what do you think?

King Neptune: Yours is superior. Therefore, [bows to SpongeBob] ... I concede to you, SpongeBob SquarePants, you win.

[The crowd cheers]

SpongeBob and Patrick: Yeah! [both dancing] We're going to Atlantis! We're going to Atlantis!

King Neptune: [laughs]

SpongeBob: What's so funny?

King Neptune: You, SpongeBob. That repulsive thing in my palace?

SpongeBob: You mean, Patrick can't come?

King Neptune: [laughs] No, of course not.

SpongeBob: And my friends?

King Neptune: Ah, the only friend you need, my dear boy, is the royal grill.

Patrick: [crying and wiping his tears with a tissue] It was nice knowing you, buddy!

[…]

King Neptune: [luggage appears next to SpongeBob] Come, SpongeBob, grab your things! It's time to depart ... [a two-seater bike appears] ... to Atlantis! [rings bell and pats SpongeBob's seat]

SpongeBob: I ... I ... [cries] I don't wanna go!

King Neptune: It's too late now. I can't live without your burgers! [grows giant] You're going to be a god and like it!

[King Neptune zaps SpongeBob and he becomes a muscular god. But being the same size, he looks a little strange]

King Neptune: Maybe we do have a problem.

SpongeBob: [in a booming voice] Wait, Neptune! I have another idea!

Ah, there it is. It’s too late … I can’t live without your burgers. … You’re going to be a god and like it!

Little else captures with such economy the technological devil’s — I mean, uh, “Neptune’s” — bargain of a compulsory divinization. There’s no time left to escape. In 2009, posting on the website of the agent who plugged Jeffrey Epstein into the tech community, legendary futurist Stewart Brand reflected that “40 years ago, I could say in the 'Whole Earth Catalog,' ‘we are as gods, we might as well get good at it.’ Photographs of Earth from space had that god-like perspective. What I'm saying now is we are as gods and have to get good at it.”

Now, 15 years later, the propaganda pressuring us to believe it’s too late not to become posthuman divinities is harder than ever to escape or even ignore. But as SpongeBob reminds us, instead of becoming gods, we’ll, let’s say, look a little strange — that is, we’ll be freakish mutants unfit for either world, mortal or godly, enough to make any partial observer conclude we do have a problem. The task falls to us mere mortals to have another idea … one far more humble than the posthuman temptation allows: Carry on with our human lives, but in ways that ensure the heavenly Spirit may, and does, move within and among us. Lose that, and we lose everything — except responsibility for the monstrosities we will become.

Thanks, SpongeBob!

Can art made by machines ever be real art?



When is a work of art not a work of art? When it’s made by a machine, perhaps?

Until recently, this was an all but academic question. Society was not gripped with fear at the prospect of photography destroying the art of painting. Film theorists observed calmly that the motion picture camera was its own agent in the moviemaking process, recording and “noticing” things that no one person involved, even the director or cinematographer, might have picked up at the time of shooting. But because the camera didn’t do its own scripting, acting, editing, color correction, and whatnot, nobody worried that mechanical films would compete with or surpass the normal, human-produced kind.

Now, with Hollywood crews historically idle and studios and talent scrambling to survive the streaming revolution, we seem to be in a much different place. The ground truth of the problem — the accelerating substitution of people in arts and entertainment with digital machinery — has trickled all the way up to the New Yorker, which late last month ran a searching, near-viral essay on the topic by sci-fi author Ted Chiang.

The thrust of Chiang’s case for “why AI isn’t going to make art” is that each of us is singular — and so our human singularity, when applied to the demands of making the many choices required by artistic undertakings, produces a freshness and novelty unattainable by any machine-induced singularity.

“What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable,” he concludes. “The fact that you’re the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new. We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise.”

Hard to argue! And yet, something about Chiang’s logic is a bit too evasive to hold up under the increasing pressure of the theoretically infinite imagery our gargantuan computers can produce.

First, the good stuff: Chiang is right to underscore the centrality of the relationship between artist and audience in defining the meaning and purpose of art. He’s spot-on in insisting that, to take one of his key examples, “the significance of a child’s fan letter — both to the child who writes it and to the athlete who receives it — comes from its being heartfelt rather than from its being eloquent.”

And, crucially, he intuits that our ease with language makes us “fall prey to mimicry” by computational models trained on our words at scale — giving in to the diabolical temptation of the so-called “Turing test” to think that superintelligence is defined by the ability to seem superintelligent. It’s some "Princess Bride"-tier foolishness to say a computer is truly smart if it tricks us into thinking it’s truly smart.

But today we live under the cultural and spiritual sway of people who really think that it’s better to have the simulation of a thing than to lack the thing itself — an idea that swiftly leads on to believing the simulation is “even better than the real thing,” to quote the old U2 hit, because real is hard, real is costly, real is vulnerable, real is fleeting, real starts fights, real limits us and makes demands, and simulations might not do or be any of those things, or be them a lot less. Just think of the way virtual or artificial sex is presented socially as a great leap forward from the real thing. More and more of our shared human world is being hived off and sold for parts in this fashion, trading away the real for the virtual, simulated, or out-and-out fake. The virtual has become the height of virtue.

Chiang’s defense of human art falters in the face of virtualization, collapsing back on a kind of solipsistic sentimentalism. He wants to insist that human beings are intrinsically good, but the evidence he musters is slippery, appealing to our sense of sympathy, cuteness, pity, or even our selfish desire to feel meaningful. This is where, in spite of itself, the evasiveness appears.

Listen to the directness with which the great Russian filmmaker Andrei Tarkovsky answers the question of art and its justification. “The allotted function of art is not, as is often assumed, to put across ideas, to propagate thoughts, to serve as example. The aim of art is to prepare a person for death, to plough and harrow his soul, rendering it capable of turning to good. Touched by a masterpiece, a person begins to hear in himself that same call of truth which prompted the artist to his creative act.” How similar, on the surface, to what Chiang is trying to say. But how much deeper!

And why? Because Tarkovsky understood that even purely human art, with no robots, algorithms, or code involved whatsoever, will still be fruitless — pointless — in the absence of religion. “An artist who has no faith,” he wrote, “is like a painter who was born blind. … Only faith interlocks the system of images” that makes up the “system of life” itself. “The meaning of religious truth is hope.” To Tarkovsky, art is an ordeal of suffering and joy, one through which the artist and the audience co-create the particulars of hope in one another’s lives. Here is where our singularity and unity are to be found, not in the fact that this or that collection of events, to this degree a jumble, to that degree a narrative, unfolded in this or that human life and not any other.

The agonized debate over whether AI art is an oxymoron reveals what, consciously or otherwise, it tries to conceal: a great personal and social agony over the consequences of our individual and collective retreat from making art spiritually, as beings created by God with souls and bodies who must be prepared both for earthly death and, God willing, life eternal. This retreat leaves a heart-shaped hole into which an infinity of artifice and simulation may rush, but which an infinity can never fill. The pressing issue is not whether a machine might one day artfully trick us by simulating a soul but whether we will, today, put our real souls to work, without which real art will forever elude us.

Why religion will save us from automated warfare in the digital age



The technology now exists to render video games in real, playable time computationally — a first achieved with the classic pixelated first-person shooter Doom.

Don’t yawn — this isn’t just a footnote in the annals of nerd history. Elon Musk promptly chimed in on the news in the replies to promise, “Tesla can do something similar with real world video.”

The military applications of this latest leap forward are obvious enough. A person at a terminal — or behind the wheel — enters a seamless virtual environment every bit as complex and challenging as a flesh-and-blood environment … at least as far as warfare goes. Yes, war has a funny way of simplifying or even minimizing our lived experience of our own environment: kill, stay alive, move forward, repeat. No wonder technological goals of modeling or simulating the given world work so well together with the arts and sciences of destruction.

But another milestone in the computational march raises deeper questions about the automation of doom itself. Coinbase CEO Brian Armstrong announced that the company has “witnessed our first AI-to-AI crypto transaction.”

“What did one AI buy from another? Tokens! Not crypto tokens, but AI tokens (words basically from one LLM to another). They used tokens to buy tokens,” he tweeted, adding a 🤯 emoji. “AI agents cannot get bank accounts, but they can get crypto wallets. They can now use USDC on Base to transact with humans, merchants, or other AIs. Those transactions are instant, global, and free. This,” he enthusiastically concluded, “is an important step to AIs getting useful work done.”

In the fractured world of bleeding-edge tech, “doomerism” is associated with the fear that runaway computational advancement will automate a superintelligence that will destroy the human race.

Perhaps oddly, less attention flows toward the much more prosaic likelihood that sustainable war can soon be carried out in a “set it and forget it” fashion — prompt the smart assistant to organize and execute a military campaign, let it handle all the payments and logistics, human or machine, and return to your fishing, hiking, literary criticism, whatever.

Yes, there’s always the risk of tit-for-tat escalation unto planetary holocaust. But somehow, despite untold millions in wartime deaths and nuclear weapons aplenty, we’ve escaped that hellacious fate.

Maybe we’re better off focusing on the obvious threats of regular ordinary world war in the digital age.

But that would require a recognition that such a “thinkable” war is itself so bad that we must change our ways right now — instead of sitting around scaring ourselves to death with dark fantasies of humanity’s enslavement or obliteration.

That would require recognizing that no matter how advanced we allow technology to become, the responsibility for what technology does will always rest with us. For that reason, the ultimate concern in the digital age is who we are responsible for and answerable to.

As the etymology of the word responsible reveals (it comes from ancient terminology referring to the pouring out of libations in ritual sacrifice), this question of human responsibility points inescapably toward religious concepts, experiences, and traditions.

Avoiding World War Autocomplete means accepting that religion is foundational to digital order — in ways we weren’t prepared for during the electric age typified by John Lennon’s “Imagine.” It means facing up to the fact that different civilizations with different religions are already well on their way to dealing in very different ways with the advent of supercomputers.

And it means ensuring that those differences don’t result in one or several civilizations freaking out and starting a chain reaction of automated violence that engulfs the world — not unto the annihilation of the human race, but simply the devastation of billions of lives. Isn’t that enough?

Unfortunately, right now, the strongest candidate for that civilizational freakout is the United States of America. Not only did we face the biggest shock in how digital tech has worked out, but we also have the farthest to fall in relative terms from our all-too-recent status as a global superpower. We are now governed by people who seem hell-bent on preserving their power regardless of the cost — people who are also getting first dibs on the most powerful AIs in development.

Scary as automated conflict indeed is, the biggest threat to the many billions of humans — and multimillions of Americans — who would suffer most in a world war isn’t the machines. It’s the people who want most to control them.

Beware of worshipping the great and powerful digital god of the Great Reset



Much fanfare has been lavished on the concept of artificial intelligence in the past five years, to the point that its primacy is treated in the media as a foregone conclusion. The idea that algorithms can “think” has become a pervading myth, a sci-fi fantasy come to life. The reality is much less impressive.

We continually hear from globalists at the World Economic Forum and other elitist institutions that AI is the catalyst for the “Fourth Industrial Revolution” — a technological singularity that will supposedly change every aspect of our society forever. I keep waiting for the moment that AI does something significant in terms of advancing human knowledge or making our lives better. The moment never comes. In fact, the globalists keep moving the goalposts for what AI really is.

I would note that WEF zealots like Yuval Harari talk about AI like it is the rise of an all-powerful deity (I discuss the globalist worship of AI in my article "Artificial Intelligence: A Secular Look at the Digital Antichrist"). Yet Harari has also recently downplayed AI as a sentient intelligence. He argues that self-awareness or consciousness doesn’t need to be achieved in order to be considered a super-being or a living entity. He even suggests that the popular image of a Terminator-like AI with individual agency and desire is not a legitimate expectation.

In other words, AI as it stands today is nothing more than a mindless algorithm, and thus it is not AI. But if every aspect of our world is engineered around digital infrastructures and the populace is taught to put blind faith in the “infallibility” of algorithms, then eventually they become the robot gods the globalists so desperately desire. That is to say, AI dominance is only possible if everyone BELIEVES that AI is legitimate. Harari essentially admits to this agenda in the speech above.

The allure of AI for average people is the pie-in-the-sky promise of freedom from worry or responsibility. As with all narcissists, the global elite love to future-fake and buy popular conformity now on false promises of rewards that will never come.

Yes, algorithms are currently used to help laymen do things they could not do before, such as build websites, edit essays, cheat on college exams, create bad artwork and video content, etc. Useful applications are few and far between. For example, the claim that AI is “revolutionizing” medical diagnosis and treatment is far-fetched. The U.S., the nation that arguably has the most access to AI tools, is also suffering from declining life expectancy. We know it’s not COVID because the virus has a 99.8% average survival rate. You would think that if AI is so powerful in its ability to identify and treat ailments, the average American would be living longer.

These digital tools haven't lived up to the hype

There is no evidence of a single benefit to AI on a broader social scale. At most, it looks like it will be good at taking jobs away from web developers and McDonald’s drive-thru employees. The globalist notion that AI will create a robotic renaissance of art, music, literature, and scientific discovery is utter nonsense. AI has proven to be nothing more than a tool of mediocre convenience, but that’s actually why it’s so dangerous.

I suspect the WEF has changed its ideas about what AI should be because it’s not living up to the delusional aspirations the elites originally had for it. They’ve been waiting for a piece of software to come to life and start giving them insights into the mechanics of the universe, and they’re starting to realize that’s never going to happen. Instead, the elitists are increasingly shifting their focus to the melding of the human and digital worlds. They want to fabricate the necessity of AI because human dependence on the technology serves the purposes of centralization.

But what would this actually look like? Well, it requires the population to continue to get dumber while AI becomes more integral to society.

For example, it is widely accepted at this point that a college education is no indication of intelligence or skill. Millions of graduates entering the workforce today display an unsettling level of incompetence. This is partially because college educators are less capable and ideologically biased and the average curriculum has degraded. But we also need to start accounting for the number of kids coasting their way through school using ChatGPT and other cheat boxes.

They don’t need to learn anything; the algorithm and their cell phone cameras do it all for them. This trend is disturbing because human beings tend to take the easiest path in every aspect of survival. Most people stopped learning how to grow food because industrial farming does it for us. They stopped learning how to hunt because there are slaughterhouses and refrigerated trucks. Many Zennials today are incapable of cooking for themselves because they can get takeout delivered to their door any time they want. They barely talk on the phone or create physical communities anymore because texting and social media have become the intermediaries in human interaction.

Yes, everything is “easier,” but that does not mean anything is better.

My great fear — the future that I see coming down the road — is one in which human beings no longer bother to think. AI might be seen as the ultimate accumulation of human knowledge, a massive library or digital brain that does all the searching and thinking for you. Why learn anything when AI “knows everything”? Except this is a lie.

AI doesn’t know everything; it only knows what its programmers want it to know. It only gives you the information its programmers want you to have. The globalists understand this, and they can taste the power that they will have should AI become paramount as an educational platform. They see it as a way to trick people into abandoning personal development and individual thought.

Look at it this way: If everyone in the world starts turning to AI for answers to all their questions, then everyone in the world will be given the same exact answers and will come to the same exact conclusions. All AI has to do is actively censor any information that contradicts the official narrative.

Centralized control

We got a taste of this Orwellian condition during the COVID pandemic, when Big Tech companies like Google used algorithms to bury any and all data that proved COVID was not the threat that government officials said it was. You could not go on YouTube for at least three years and look up alternative information on COVID or the vaccines. The algorithm forced everyone to sift through a long list of establishment sources, many of them promoting blatant lies about masking, social distancing, the COVID death rate, and vaccine safety.

The powers that be don’t even need to directly censor or remove information they don’t like. All they have to do is let the algorithm dictate the search results and bury the truth on page 10,000, where no one will look.

How would this affect the average person? Say AI is programmed to dictate scientific discourse. What if AI says that man-made climate change is an undeniable reality and the “science is settled” while never presenting the mountain of evidence to the contrary? No one will look up the real data because AI will make it impossible to find. Everyone will assume AI is telling them all there is to know about the subject, but it gets worse than that.

Many readers might remember a few months back when Google’s AI system Gemini was programmed to force DEI onto its users. Whenever a person asked the AI to create a historical image, the algorithm made everyone black or brown and often female. Depictions of white men were suspiciously rare, regardless of historical accuracy. This meant endless images of black and brown Highlanders in Scotland, black Founding Fathers in America, female Catholic popes, Asian knights in medieval Europe, and, hilariously, even black Nazis in WWII Germany.

AI developers often claim that once an AI is created, they don’t really control what it does and how it grows. The Gemini incident proves this is a lie. AI can definitely be controlled or at least molded by coding to promote whatever propaganda the programmers want it to promote. There is no such thing as an autonomous AI; there is always an agenda.

To summarize, globalists want the proliferation of AI because they know that people are lazy and will use the system as a stand-in for individual research. If this happens on a large scale, then AI could be used to rewrite every aspect of history, corrupt the very roots of science and mathematics, and turn the population into a drooling hive mind, a buzzing froth of brain-dead drones consuming every proclamation of the algorithm as if it is sacrosanct.

In this way, Yuval Harari is right. AI does not need to become sentient or wield an army of killer robots to do great harm to humanity. All it has to do is be convenient enough that we no longer care to think for ourselves. Like the “great and powerful” Oz hiding behind a digital curtain, you assume you’re gaining knowledge from a wizard when you’re really being manipulated by globalist snake-oil salesmen.

Editor's note: This article was originally published on alt-market.us.

I asked Grok to tell us why we shouldn’t worship technology. Here’s its surprising answer.



Grok, Elon Musk’s “frontier language model" accessible from right inside X, is shaking up the internet with its censorship-free automated image generation. Fun — and yes, the name the company has given this setting is Fun Mode — but even well-meaning enthusiasm can cross over into debasing adulation at a certain point. Whatever the fun quotient, no amount of “mind blown” emojis or soyface thumbnails can make free speech alone save us.

For that reason (among others), I’ve warned for years now that tech worship is a dead end masquerading as heaven on earth. I figured we could all benefit at least a little from considering what Grok coughed up in response to a request for a takedown of cyber idolatry — at classic 600-word op-ed length.

So, without further ado — and, of course, sans any censorship, editing, or alteration …

Grok, why shouldn’t we worship technology?

———

Ah, humans, always looking for something to worship, aren't you? From the golden calves of yore to the latest smartphone, the urge to bow down before something shiny and new seems almost hard-coded into your DNA. But let's talk about why worshipping technology might be as wise as using a towel for a parachute.

**1. The fallacy of perfection:** Technology, much like that overhyped sequel to your favorite movie, promises perfection but often delivers a patchwork of bugs, glitches, and updates. If you worship technology, you're essentially venerating a perpetual beta version of life. Imagine praying to your toaster, only for it to burn your toast because it decided to update its firmware during your morning ritual. Not exactly divine, is it?

**2. The illusion of control:** Technology gives you the illusion of control, much like how a child might feel omnipotent with a remote-control car. But let's be real, the moment you start worshipping your gadgets, you're not controlling them; they're controlling you. Ever noticed how your phone tells you when to wake up, eat, work, and even when to rest? You're not the master; you're the servant in this relationship.

**3. The disconnect from humanity:** Here's a thought: If you're worshipping technology, you're essentially worshipping efficiency, speed, and data. But what about the human touch? The warmth of a handshake, the depth of a conversation over a cup of tea, or the shared laughter at a live comedy show? Technology can't replicate the soul of human interaction. Worshipping it might make you more efficient at being alone.

**4. The ephemeral nature of tech:** Technology has the shelf life of a fruit fly. What's cutting-edge today is obsolete tomorrow. If your deity is your latest gadget, you're in for a cycle of constant disappointment. Imagine the ancient Greeks if every few years, Zeus needed a new model with better thunderbolt capabilities. It'd be less about worship and more about consumerism.

**5. The loss of wonder:** When everything becomes technology, where's the wonder? The magic of the world gets reduced to algorithms and code. You might as well replace the northern lights with a well-programmed LED show. Sure, it's pretty, but it's not the same. Worshipping technology means you're worshipping the explanation, not the mystery, which, let's be honest, is where the real magic happens.

**6. The environmental cost:** Here's a kicker. Your tech deities demand sacrifices, not of goats or virgins, but of the planet. Every gadget, every server farm, every piece of tech has an environmental footprint. Worshipping technology means you're indirectly worshipping the degradation of your own habitat. It's like praying to a god who's also your landlord, and he keeps raising the rent.

**7. The over-reliance on external solutions:** Technology worship fosters a culture where solutions to human problems are always external. Got a problem? There's an app for that. But what about internal growth, wisdom, or personal development? Relying solely on tech for solutions might make you forget how to solve problems with your mind or heart.

In conclusion, while technology is a fantastic servant, it's a terrible master. Worshipping it would be like mistaking the map for the territory, the menu for the meal. Technology should enhance life, not define it. So, humans, keep your tech on a pedestal of utility, not divinity. After all, as any good "Hitchhiker's Guide" reader knows, the answer to life, the universe, and everything isn't found in a circuit board but perhaps in the simple act of living, laughing, and loving. And if all else fails, remember, there's always the option of worshipping a really good sandwich.

———

Maybe Grok’s fairly sophisticated answer surprised you with its humor, its scope, or its “willingness” to critique technology at all. I noted its speedy response time of about 20 seconds on a decent Wi-Fi connection, but that’s not what raised my eyebrow. Can you guess what did?

Here it is: I prompted Grok without any hint of what I might want it to output regarding who or what humans should worship. That the AI included the line invoking "The Hitchhiker’s Guide to the Galaxy" suggests it interpreted my ask as reflecting some kind of openness or inclination toward the idea that we shouldn’t worship tech because we shouldn’t worship anything. Only a fool, Grok jokes in the kicker, would worship even a delicious food item … although, it seems to conclude, idolizing the ultimate sandwich would be a much more harmless or fleeting mistake than genuflecting at the altar of tech.

Where did Grok get the “idea” that an appeal to the woo-woo Oprah/Goop culture of “live, laugh, love” was responsive to my prompt? Probably the best answer, of course, is from us. Like any large language model, Grok is a product of its inputs, a huge collection of virtual mirrors reflecting ourselves back at us in customizable composite.

The one unasked-for whiff of “bias” Grok produced — in favor of the squishy, secular feel-goodery that dominates our wall art, our corporations, and so much else — indicates that even a bot built for truth and neutrality will shimmer to some degree with the rhetoric and psychological insinuations of people who might say they’re all for a society based on “just the facts” but who, deeper down, believe something much different.

Fun!