'There's nowhere to go': Will Elon Musk stop the AI Antichrist — or become it?



Peter Thiel is going viral all over again in a new video interview with the New York Times' Ross Douthat.

The Catholic conservative columnist threw Thiel huge theological questions about transhumanism, AI, and the Antichrist — all topics Thiel has weighed in on with increasing intensity. But in the course of the conversation, Thiel dropped a shocking story about a recent discussion he had with Elon Musk about the viability of Mars as an escape from Earth and its very human predicaments.

'Elon came to believe that if you went to Mars, the socialist US government, the woke AI would follow you to Mars.'

Among numerous conversations last year, Thiel revealed, "I had the seasteading version with Elon where I said: If Trump doesn’t win, I want to just leave the country. And then Elon said: There’s nowhere to go. There’s nowhere to go."

"It was about two hours after we had dinner and I was home that I thought of: Wow, Elon, you don’t believe in going to Mars any more. 2024 is the year where Elon stopped believing in Mars — not as a silly science tech project but as a political project. Mars was supposed to be a political project; it was building an alternative. And in 2024 Elon came to believe that if you went to Mars, the socialist U.S. government, the woke AI would follow you to Mars."

Follow the leader

The stunning revelation came about during an earlier meeting between Musk and DeepMind CEO Demis Hassabis brokered by Thiel. As Thiel paraphrased the exchange between the two, Demis told Musk he was "working on the most important project in the world," namely "building a superhuman AI," to which Musk replied it was he who was working on the most important project in the world, "turning us into interplanetary species." As Thiel recounted, "Then Demis said: Well, you know my AI will be able to follow you to Mars. And then Elon went quiet."

Assuming Thiel has conveyed pretty much the truth, the whole truth, and nothing but the truth about the episode, the ramifications extend in many directions, including toward Musk's repeated meltdowns (or crashouts, as the Zoomers say) about the One Big Beautiful Bill Act and the potential implosion of the American political economy due to runaway debt and deficit spending.

But the main point, of course, pertains to Mars itself, which represents in the visions of many more people than just Elon Musk the idea of the ultimate, last-ditch, fail-safe escape from the "pale blue dot" of planet Earth.

RELATED: There’s a simple logic behind Palantir’s controversial rise in Washington

  Alex Karp. Kevin Dietsch/Getty Images

A backup civilization

As someone who has covered the Mars dream off and on for almost 10 years, beginning around 2016 with an op-ed on how Mars colonization would not succeed without Christian underpinnings, I raised both eyebrows at Thiel's anecdote because of the way it indicated a growing spiritual sense in both tech titans of the risk of an inescapable final showdown on Earth in our lifetimes.

Musk gave an important speech at the World Governments Summit a few years ago in which he argued reasonably that one global government is bad because it invites world collapse. Allowing multiple civilizations to exist politically and share space on Earth was good because history proves that even, or especially, the biggest and best civilizations eventually collapse. If you don't want human civilization as a whole to suffer the same fate, you probably want to hedge your bets and have backups.

Unfortunately, by way of example, he suggested that the fall of Rome was mitigated by the rise of the Islamic empires. In reality, the Ottoman Turks — and all too many Crusaders — destroyed the Eastern Roman Empire, which had prevailed for many centuries after the fall of Rome itself. The logic of bet-hedging with multiple civilizations isn't much helped by the example of civilization-destroying wars.

Mars attacks ... or not

That problem stuck out to me once again because of how central to Musk's logic for colonizing Mars was the idea that tomorrow's Martians could come back and save Earth if things went in too wrong a direction. Now, Musk seems to be stuck with the risk that Mars can’t escape Earth's problems because Martians can't escape Earthlings' AI, negating their planetary potential as a hedged bet against bad Earth outcomes.

Musk’s apparent concerns seem to indicate a lack of confidence that the right kind of AI — such as his own xAI? — can beat the wrong kind. That would seem to indicate logically that AI itself is the problem, because even or especially the best AI must tend severely toward total dominance over the whole world, putting all our civilizational eggs in a newly extreme way into just one civilizational basket.

No control without Christ

To me, at least, the challenge strengthens my thesis from almost 10 years ago that taking Christianity out of the discussion results in a dead end. Christ's admonition that His kingdom is "not of this world" is significant because human Christians with spiritual authority over AIs will shape them in ways that discourage their consolidation and dominance over all places humans ever go — making it possible for Mars not to be controlled by an AI that controls Earth, in the same way that it would be made possible for, say, America not to be controlled by Chinese AI, or vice versa.

Absent a human spiritual authority granted by a God whose kingdom is not of this world, it just seems very difficult for human beings to find a way to stop AI from becoming not just a temporal power but itself also a spiritual authority — making it the lord of the world, to borrow the title of a famous novel about the triumph of the Antichrist.

RELATED: Why each new controversy around Sam Altman’s OpenAI is crazier than the last

  Justin Sullivan/Getty Images

Putting the AI in Antichrist

This dynamic is probably behind Thiel's uneasy remarks to Douthat when pressed about the problem of the Antichrist and the likelihood of his earthly appearance sooner rather than later. Douthat pointedly expressed concern that despite Thiel's insistence that he was working to discourage the rise of the Antichrist, a potential Antichrist might well look at Thiel's technological feats and embrace them as the best and quickest path to the most complete world domination.

Various wits online have noted that because the Antichrist is expected to be welcomed rapturously by the world, the controversial Thiel must therefore not be the Antichrist.

Our better natures

But the deeper question remains as to what could possibly lead someone to be rapturously welcomed as the lord of the world if not the only thing that seems capable of ruling the entire world plus Mars — that is, AI.

I think Thiel's remarks in the interview make it pretty clear that his goals with Palantir and related efforts have to do with reducing the risk that the wrong kind of person takes over the world with one AI. That kind of person, following the above logic, would not be a controversial and divisive person but someone who could be rapturously received as a figure who frees the world from having to do what Jesus teaches in order to become as gods.

That puts the spotlight on the transhumanism question, which Douthat also pressed with Thiel, who insisted throughout the interview that the "Judeo-Christian" approach to such matters is to forge forward trying not to settle for mere bodily transformation but transformation of soul as well.

Thiel emphasized in making this point that the word "nature" does not appear in the Old Testament. And it does seem that the long-term Western effort has pretty much failed to get past the destructive difficulty of rival interpretations of the Bible by pivoting to the so-called "Book of Nature" to scientifically converge on one universally legitimate interpretation of God's creation.

But an open question remains. Which is more plausible: (1) the worship of nature, which Thiel represents as personified by Greta Thunberg, leads to a rapturous embrace of a Greta-ish Antichrist's rule over all AI and the whole world; or (2) the worship of technology, which we might personify by someone who believes, as Musk says, that "physics sees through all lies," leads to a rapturous embrace of a Musk-like Antichrist's rule over all AI and the whole world?

Not by works alone

Musk and Thiel both seem to find themselves drawn into the AI game at the highest levels out of a feeling that they have little choice but to try to create some alternatives to worse AIs with more power to tempt people to consolidate all humanity under one bot to rule them all.

From an outside perspective, it seems sort of crazy to think that Christ's church — an institution not of this world — offers people an escape from AI bondage that even the hardest-working and best-intentioned secular geniuses on Earth can't provide.

But as the stakes keep rising and our most distinctive tech minds shudder in the face of AI's civilizational challenge, it seems less and less crazy by the day.


Can artificial intelligence help us want better, not just more?



The notification chimes. Another algorithmically selected product appears in your feed, something you never knew you wanted until this moment. You pause, finger hovering over the “buy now” button. Is this truly what you desire or just what the algorithm has decided you should want?

We’re standing at a fascinating turning point in human history. Our most advanced technologies — often criticized for trapping us in cycles of shallow wants and helpless determinism — could offer us unprecedented freedom to rediscover what we truly desire. “Agentic AI” — those systems that can perceive, decide, and act on their own toward goals — isn't just another tech advancement. It might actually liberate our attention and intention.

Rather than passively accepting AI's influence, we can actively shape AI systems to reflect and enhance our deeply held values.

So what exactly is agentic AI? Think of it not just as a fancy calculator or clever chatbot, but as a digital entity with real independence.

These systems perceive their environment, make decisions, and take actions with significant autonomy. They learn from experiences, adapt to new information on the fly, and pursue complex goals without our constant direction. Self-driving cars navigate busy streets, trading algorithms make split-second financial decisions, and research systems discover scientific principles on their own.

These aren't just tools any more. They're becoming independent actors in our world.
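The perceive-decide-act loop that defines these systems can be sketched in a few lines. This is a toy illustration under my own assumptions (a thermostat standing in for the agent, with all names invented for this example), not any real agentic AI framework:

```python
# A toy perceive-decide-act loop: the basic shape of an "agentic" system.
# All names here are invented for illustration; no real framework is implied.

class ThermostatAgent:
    """Perceives its environment, decides, and acts toward a goal temperature."""

    def __init__(self, goal_temp: float):
        self.goal_temp = goal_temp

    def perceive(self, environment: dict) -> float:
        # Read the current state of the world.
        return environment["temp"]

    def decide(self, temp: float) -> str:
        # Choose an action in pursuit of the goal, with a small tolerance band.
        if temp < self.goal_temp - 1:
            return "heat"
        if temp > self.goal_temp + 1:
            return "cool"
        return "idle"

    def act(self, action: str, environment: dict) -> None:
        # Change the world, not just report on it.
        if action == "heat":
            environment["temp"] += 0.5
        elif action == "cool":
            environment["temp"] -= 0.5


def run(agent: ThermostatAgent, environment: dict, steps: int = 20) -> float:
    # The loop itself: perceive, decide, act, repeat, without outside direction.
    for _ in range(steps):
        action = agent.decide(agent.perceive(environment))
        agent.act(action, environment)
    return environment["temp"]
```

Left to run, the agent drives its environment toward its goal on its own; the "agentic" claim about modern AI is essentially this loop scaled up to far richer perceptions, decisions, and actions.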

To understand this shift, I want to introduce you to two key thinkers: Marshall McLuhan, who famously said “the medium is the message,” and René Girard, who revealed how we tend to want what others want — a phenomenon he called “mimetic desire.” Through their insights, we can see how agentic AI works as both a medium and a mediator, reshaping our reality while influencing what we desire. If we understand how agentic AI will continue to shape our world, we can maintain our agency even as it does so.

McLuhan: AI as medium

McLuhan showed us that technology’s structure, scale, and speed shape our consciousness more profoundly than whatever content it carries. The railway didn’t just introduce transportation; it created entirely new kinds of cities and work.

Similarly, agentic AI isn't just another tool. It's becoming an evolving environment whose very existence transforms us.

McLuhan offers the example of electric light. It had no “content” in the conventional sense, yet it utterly reshaped human existence by eliminating darkness. Agentic AI similarly restructures our world through its core qualities: autonomy, adaptability, and goal-directedness. We aren't just using agentic AI; we’re increasingly living inside its operational logic, an environment where non-human intelligence shapes our decisions, actions, and realities.

Neil Postman, who built on McLuhan’s work, reminds us that while media environments powerfully shape us, we aren't just passive recipients: “Media ecology looks into how media of communication affect human perception, understanding, feeling, and value.” By understanding these effects, we can maintain our agency within them. We can be active readers of the message rather than just being written by it.

One big impact is on how we make sense of the world. As agentic AI increasingly filters, interprets, and generates information, it becomes a powerful participant in constructing our reality. The challenge is maintaining shared reality while technology increasingly forges siloed, personalized worlds. While previous technological advances contributed to this siloing, AI offers the possibility of connectivity. Walter Ong's concept of "secondary orality" suggests AI might help create new forms of connection that overcome the isolating aspects of earlier digital technologies.

Girard: AI as mediator of desire

While McLuhan helps us understand how agentic AI reshapes our perception, René Girard offers a framework for understanding how it reshapes what we want.

Girard’s theory of mimetic desire suggests that human desire is rarely spontaneous. Instead, we learn what to want by imitating others — our "models." This creates a triangle: us, the model we imitate, and the object of desire.

Now, imagine agentic AI entering this dynamic. If human history has been a story of desire mediated by parents, peers, and advertisements, agentic AI is becoming a significant new mediator in our digital landscape. Its ability to learn our preferences, predict our behavior, and present curated choices makes it an influential model, continuously shaping our aspirations.

RELATED: If AI isn’t built for freedom, it will be programmed for control

  Photo by Lintao Zhang/Getty Images

Peter Thiel, who studied under Girard at Stanford, suggests awareness of these dynamics can lead to more authentic choices. “The most successful businesses come from unique, non-mimetic insights,” Thiel observes. By recognizing how AI systems influence our desires, we can more consciously choose which influences to embrace and which to question, moving toward greater authenticity.

Look at recommendation engines, the precursors to full-blown agentic AI. They already operate on Girardian principles by showing us what others have bought or liked, making those items more desirable to us. Agentic AI takes this further. Through its autonomous actions and pursuit of goals, it can demonstrate desirability.
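That mimetic mechanic is easy to see in miniature. Here is a hedged sketch, a toy popularity ranking of my own devising, not how any production recommender actually works:

```python
# Toy illustration of mimetic amplification in a recommender:
# an item ranks higher simply because more people already chose it.
from collections import Counter


def recommend(purchase_log: list, top_n: int = 3) -> list:
    """Rank items by how many others already wanted them."""
    counts = Counter(purchase_log)
    return [item for item, _ in counts.most_common(top_n)]


log = ["hat", "book", "hat", "lamp", "hat", "book"]
# "hat" rises to the top not on any merit of its own, but because
# prior buyers modeled the desire for everyone who follows.
```

Each purchase makes the next one more likely, which is exactly the feedback loop Girard's triangle of desire predicts.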

The key question becomes: Is your interest in a hobby, conviction about an issue, or lifestyle aspiration truly your own? And more importantly, can you tell the difference, and does it matter if it brings you genuine fulfillment?

A collaborative future

The convergence of AI as both medium and mediator creates unprecedented possibilities for human-AI partnership.

Andrew Feenberg's critical theory of technology offers a constructive path forward. He argues that technologies aren't neutral tools but are laden with values. However, he rejects technological determinism, emphasizing that these values can be redesigned through what he calls “democratic rationalization,” the process by which users reshape technologies to better reflect their values.

“Technology is not destiny but a scene of struggle,” Feenberg writes. "It is a social battlefield on which civilizational alternatives are debated and decided." Rather than passively accepting AI's influence, we can actively shape AI systems to reflect and enhance our deeply held values.

This vision requires thoughtful design guided by human wisdom. The same capabilities that could liberate us could create more sophisticated traps. The difference lies not in the technology itself but in the values and intentions that shape its development. By drawing on insights from McLuhan, Girard, Postman, Ong, Thiel, Feenberg, and others, we can approach this evolving medium not with fear or passive acceptance, but with creative engagement.

The future of agentic AI isn't predetermined. It’s ours to shape as a technology that enhances rather than diminishes our humanity, that serves as a partner rather than a master in our ongoing quest for meaning, connection, and flourishing.

There’s a simple logic behind Palantir’s controversial rise in Washington



In 2003, in Palo Alto, California, Peter Thiel, Alex Karp, and cohorts founded a software company called Palantir. Now, these 20-odd years later, with stock prices reaching escape velocity and government and commercial contracts secured from Huntsville to Huntington, Palantir seems to have arrived in the pole position of the AI race.

With adamantine ties to the Trump administration and deep history with U.S. intelligence and military entities to boot, Palantir has emerged as a decisive force in the design and management of our immediate technological, domestic, and geopolitical futures.

Curious, then, that so many, including New York Times reporters, seem to believe that Palantir is merely another souped-up data hoarding and selling company like Google or Adobe.

The next-level efficiency, one imagines, will have radical implications for our rather inefficient lives.

It’s somewhat understandable, but the scales and scopes in play are unprecedented. To get a grasp on the scope of Palantir’s project, consider that every two days now humanity churns out the same amount of information that was accrued over the previous 5,000 years of civilization.

As then-Gartner senior vice president Peter Sondergaard put it more than a decade ago, “Information is the oil of the 21st century, and analytics is the combustion engine.”

Palantir spent the last 20 years building that analytics combustion engine. It arrives as a suite of AI products tailored to various markets and end users. The promise, as the era of Palantir proceeds and as AI-centered business and governance takes hold, is that decisions will be made with a near-complete grasp on the totality of real-time global information.

RELATED: Trump's new allies: Tech billionaires are jumping on the MAGA train

  The Washington Post/Getty Images

The tech stack

Famously seeded with CIA In-Q-Tel cash, Palantir started by addressing intelligence agency needs. In 2008, the Gotham software product, described as a tool for intelligence agencies to analyze complex datasets, went live. Gotham is said to integrate and analyze disparate datasets in real time to enable pattern recognition and threat detection. Joining the CIA, FBI, and presumably most other intelligence agencies in deploying Gotham are the Centers for Disease Control and Prevention and the Department of Defense.

Next up in the suite is Foundry, which is, again, an AI-based software solution but geared toward industry. It purportedly serves to centralize previously siloed data sources to effect maximum efficiency. Health care, finance, and manufacturing all took note and were quick to integrate Foundry. PG&E and Southern California Edison are both satisfied clients. So is the Wendy’s burger empire.

The next in line of these products, which we’ll see are integrated and reciprocal in their application to client needs, is Apollo, which is, according to the Palantir website, “used to upgrade, monitor, and manage every instance of Palantir’s product in the cloud and at some of the world’s most regulated and controlled environments.” Among others, Morgan Stanley, Merck, Wejo, and Cisco are reportedly all using Apollo.

If none of this were impressive enough, and if the near-total penetration into both business and government (U.S., at least) at foundational levels isn't evident yet, consider the crown jewel of the Palantir catalog, which integrates all the others: Ontology.

“Ontology is an operational layer for the organization,” Palantir explains. “The Ontology sits on top of the digital assets integrated into the Palantir platform (datasets and models) and connects them to their real-world counterparts, ranging from physical assets like plants, equipment, and products to concepts like customer orders or financial transactions.”

Every aspect native to a company or organization — every minute of employee time, any expense, item of inventory, and conceptual guideline — is identified, located, and cross-linked wherever and however appropriate to maximize efficiency.
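Palantir's own description of Ontology is abstract, but the underlying idea, an object layer binding real-world entities to the datasets behind them, can be sketched in miniature. The class and field names below are my own invention for illustration, not Palantir's actual API:

```python
# Toy sketch of an "ontology" layer: named real-world counterparts
# (assets and concepts) linked to the data sources that describe them.
# Names invented for illustration; this is not Palantir's API.
from dataclasses import dataclass, field


@dataclass
class OntologyObject:
    name: str        # real-world counterpart, e.g. a plant or a customer order
    kind: str        # "asset" (physical) or "concept" (abstract)
    sources: list = field(default_factory=list)  # backing datasets and models


@dataclass
class Ontology:
    objects: dict = field(default_factory=dict)

    def register(self, obj: OntologyObject) -> None:
        self.objects[obj.name] = obj

    def lookup(self, name: str) -> OntologyObject:
        return self.objects[name]


onto = Ontology()
onto.register(OntologyObject("Plant-7", "asset", ["sensor_feed", "maintenance_db"]))
onto.register(OntologyObject("Order-1001", "concept", ["orders_table"]))
```

Cross-linking every entity through one layer like this is what would let a single query span previously siloed systems.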

The next-level efficiency, one imagines, will have radical implications for our rather inefficient lives. Consider the DMV, the wait list, the tax prep: Anything that can be processed (assuming enough energy inputs for the computation) can be — ahead of schedule.

The C-suite

No backgrounder is complete without some consideration of a company’s founders. The intentions, implied or overt, from Peter Thiel and Alex Karp in particular are, in some ways, as ponderable as the company’s ultra-grade software products and market dominance.

Palantir CEO Alex Karp stated in his triumphal 2024 letter to shareholders: “Our results are not and will never be the ultimate measure of the value, broadly defined, of our business. We have grander and more idiosyncratic aims.” Karp goes on to quote both Augustine and Houellebecq as he addresses the company’s commitment first to America.

This doesn’t sound quite like the digital panopticon or the one-dimensionally malevolent elite mindset we were threatened with for the last 20 years. Despite their outsized roles and reputations, Thiel companies tend toward the relatively modest goals of reducing overall harm or risk. Shaped by René Girard’s theory that people rapidly spiral into hard-to-control and ultimately catastrophic one-upmanship, the approach reflects a considerably more sophisticated point of view than Karl Rove’s infamously dismissive claim to be “history’s actors.”

“Initially, the rise of the digital security state was a neoconservative project,” Blaze Media editor at large James Poulos remarked on the dynamic. “But instead of overturning this Bush-era regime, the embedded Obama-Biden elite completed the neocon system. That’s how we got the Cheneys endorsing Kamala.”

In a series of explanatory posts on X made via the company's Privacy and Ethics account and reposted on its webpage, Palantir elaborated: “We were the first company to establish a dedicated Privacy & Civil Liberties Engineering Team over a decade ago, and we have a longstanding Council of Advisors on Privacy & Civil Liberties comprised of leading experts and advocates. These functions sit at the heart of the company and help us to embody Palantir’s values both through providing rights-protective technologies and fostering a culture of responsibility around their development and use.”

It's a far cry from early 2000s rhetoric and corporate policy, and so the issue becomes one of evaluation. Under pressure from the immensity of the data, the ongoing domestic and geopolitical instability manifesting in myriad forms, and particularly the bizarre love-hate interlocking economic mechanisms between the U.S. and China, many Americans are hungry to find a scapegoat.

Do we find ourselves, as Americans at least, with the advantage in this tense geopolitical moment? Or are we uncharacteristically behind in the contest for survival? An honest assessment of our shared responsibility for our national situation might lead away from scapegoating, toward a sense that we made our bed a while ago on technology and security and now we must lie in it.

Let the golden age begin



“Welcome to the new age,” as Imagine Dragons sings, and so here we are. It is a strange and uncanny time, as befits the long-deferred rise to power of America’s strange and special Gen X cohort. They are a generation — especially the so-called Xennials on the cusp — for whom the drama of their lives has entailed a special kind of mystical belief and experience.

While the mysticism of the standard Millennial is still of the kids’ table variety — Harry Potter and the zodiac of identity — and the Boomer variety remains, now more than ever, one that pairs radical skepticism toward authority with credulous speculation, Xers as a whole have always found themselves in the shadowy borderlands between the two.

Gen X is more skeptical than Boomers and Millennials about the magic of imagination, yet more savvy about the power of meme magic. They’re more attuned to the spiritual pull of technology, whether utopian or dystopian, yet distinctly more attracted to the high-church Christianity that stands as the last bulwark against the post-human gnostic heresies that tempt their elders and youngers.

For these reasons, I have flagged the importance to future human events of the relationship between Gen Xers and their children, who straddle the ostensible generational divide between Zoomers and Alphas. In "Human Forever," I wrote that generations as Boomers understand them ain’t what they used to be, in large part because the triumph of digital technology over the intimacies of everyday life has aroused spiritual sensibilities to which people are now increasingly drawn regardless of age or cohort.

I don’t get everything right — God forbid — but here I’ve been vindicated. The Trump coalition is dominated spiritually and generationally by the Xer-Zoomer alliance, and because of this, the varieties of spiritual mysticism among Gen X men and their heirs are weighing heavily in the balance amid the onrushing future of technological advancements so profound and pervasive that only trustworthy spiritual authorities can rise above it to guide the many lost, confused, exhausted, battered, broken, and tempted among us.

Same as it ever was, of course. It has always been thus with trustworthy spiritual authorities — the only difference is the rejection and rebellion against them taken up in earnest over the course of modern Western history. That approach has clearly burned itself out, with the remaining well-organized options being, for the vast majority of Americans, two: the out-and-out worship of tech or the worship of God under the guidance of the church’s spiritual authorities.

The X-Z alliance has an outsized influence and responsibility in choosing carefully — not just because of their dominance in the Trump era, but because America can’t endure if Americans succumb to the theocratic temptation, whether in the form of an empire with an established church of tech or in a turbo-trad throne-and-scepter Leviathan.

Nor will things shake out too well if, instead of these established churches, the many simply lose confidence in “the American idea” and run for the exits — into the kinds of techno monasteries Elon Musk has referred to in typical winking fashion or into the real and ancient monasteries. I believe it’s very likely that lots of people will go into these latter monasteries and that the ancient church must be more than ready to receive them. But the life of the monastery is just about the total opposite of the life of the American dream, and in pain and love for the American people, the sudden implosion of American socioeconomic order can’t be desired or encouraged.

For that reason, ruling Xers, particularly those drawn to high-church Christianity, must take their people where they find them and avoid thrusting them into strange and new situations to which they are unaccustomed and which will break them instead of guide them. Even one small step in a scary direction is a profound spiritual and practical challenge, and it is in this way that people are most often led toward reliable and lasting improvement.

But this is tough counsel for Xers who understand that we are in a long-overdue regime change or refounding moment, when swift and decisive action really does seem to be necessary on a paradoxically prudential basis. The same goes for Xers who grasp that the technological leap that must be made to compensate for the precipitous decay in America of basic competence and functionality is a practical necessity given the ugly alternatives — such as the uncontrolled demolition of the so-called “global American empire,” which would introduce a degree of chaos at home and abroad that seems sure to spiral swiftly into anarchy or oblivion.

And yet one more difficulty intrudes. Even more troubling than the prospect of catastrophic meltdown is that of a golden age in the bad sense, that of an artificial Avalon constructed by an AI-powered Antichrist. The bad or satanic golden age is actually now more plausible, and the apocalyptic end the church anticipates when the logic of simulating God (the better to replace him) is pushed to the limit now looms even in the minds of some of the leading tech figures, such as Peter Thiel.

So our ruling Xers find themselves faced with the double challenge of avoiding false Avalon and real apocalypse — all while preserving America instead of forcing it back to Old World forms or simulating it (the better to replace it) in cyberspace or on Mars ... without entombing America in a kind of sociopolitical embalming fluid.

Glad it’s not my job, as I sometimes like to joke. And yet in a very real sense, it is all of our jobs — especially those of us Xers who know from long experience that life simply cannot be reduced to mathematical technique or to power politics, even though these things cannot be expunged from the world through sheer force of intelligence or will. Many such Xers have themselves reached a midlife point at which the intellectual pursuits they adopted to survive the cataclysmic sequence of 9/11, the financial crisis, and the COVID lockdowns now seem inadequate to the moment. People really are getting burned out on merely intellectual content — and the expert explainers, critics, interpreters, and talkers who churn it out instant by instant.

The intellectuals, even those who are the most right most of the time about the most things, just can’t do what needs to be done to escape the bad golden age — in fact, they are leading us all too much, whether intentionally or not, toward just that future. To forge ahead in the right directions, fruitful directions, we need people with competence and clarity not just in intellectual and spiritual pursuits but in artistic ones. Soulful art that scales is what gives the many the ability to transition to what is coming in a way they can survive — gaining confidence, courage, and health relatively gradually at a time that seems always to be screaming at them for the kind of immediate radical transformation that shatters people instead of sculpting them.

Of course, art can induce cathartic change — that’s one of the main reasons people often seek it out. But far more important is that art communicates in ways people are starved for: in silence, in mood, in subtext, in the implicit, without explicit elaboration or expert explanation. This is, of course, the mode of communication that is ultimately to be found and sought out in communion with God and in church life, whether in the cathedral or in the monastery. But if it vanishes from public life, our social communication will be dominated by will and intelligence alone, and our given humanity will swiftly disappear or become unrecognizable.

Tacitly, almost instinctively, artists understand this. Unfortunately, art over the past decade or more has become so colonized by ideology or false idol worship that many have lost faith in the ability of artists to serve, as Marshall McLuhan said, as society’s “early warning systems,” or to share, as Andrei Tarkovsky said, “the misery and joy of bringing an image into being.”

Beck, one of the great Gen X artists, understood this well, and expressed it implicitly in “The Golden Age,” the opening track off of "Sea Change," his sumptuous and desolate record, also suitably titled for our moment.

“Put your hands on the wheel,” he sings. “Let the golden age begin.” Initially, it seems fabulous, freeing: “Let the window down, feel the moonlight on your skin / The desert wind cool your aching head / The weight of the world drift away instead.” But the good aspect of his golden age is tangled with the bad, in a way no man can tease apart: “It's a treacherous road with a desolated view / There's distant lights, but here they're far and few / The sun don't shine, even when it's day / Gotta drive all night just to feel like you're OK.”

In typical Gen-X style, Beck wrote these lines about a breakup. But they apply now to the specter of a national, social, personal, and spiritual crack-up. So much fear of the bad golden age permeates life, and so many explicators and elaborators focus our attention on the prospects of fighting the fire of will and intelligence with the fire of will and intelligence.

The church, by contrast, conveys to us that the push for golden ages, with all the good and bad they bring, will never end until the end times, which will come at a time none of us may know. It is safe to assume that technology will advance, that wild doctrines will proliferate, that people will do what we do as we always have, just all the more so. In this sense, and not just the celebratory one, it is time to let the golden age begin — and to focus, not only through will and intelligence but through art and soul — on surviving and thriving amid it, come what may.
