Silicon Valley 'Christian' goes viral for chilling AI-Antichrist theory. Should we listen to him?



Peter Thiel might be the biggest head-scratcher in Silicon Valley. He’s a billionaire, a Trump-Vance-supporting Republican, a married gay man, a transhumanism enthusiast, and ... drum roll ... a “Christian.”

He’s publicly declared that Christianity is true and that Christ is the best role model; he’s deeply involved in various Christian organizations; and yet he’s openly admitted his affinity for transhumanism, believing that the future of humanity is a world where man conquers mortality by fusing with technology. It’s a twisted, human-centric version of the transformed, glorified body Christians are promised after death, says BlazeTV host Allie Beth Stuckey.

Recently Thiel has been in the headlines for his seminars on the Antichrist, which are a bizarre blend of theology and his controversial views on technology and transhumanism. In short, Thiel speculates that Revelation’s beast will be deeply connected to artificial intelligence. Whether it takes the form of a human leveraging AI for control, a pseudo-human system, or an AI-driven global order, Thiel is confident that artificial intelligence will play a key role in the end times.

And he’s not the first to suggest this. The idea that AI and the Antichrist are inextricably connected — and maybe even synonymous — is a theory that has gained traction in recent years. When you think about it, the proposition isn’t all that crazy. AI’s capacity for global control, deception, economic dominance through digital systems, and false promises of salvation uncannily mirrors Revelation’s description of the Antichrist’s deceptive, totalitarian rule.

Despite Thiel’s theological waywardness, is there merit to his Antichrist warnings? Should we take him seriously?

BlazeTV host Allie Beth Stuckey dove into Thiel’s Antichrist theory on a recent episode of “Relatable.” Her conclusion? It’s complicated.

In an interview with New York Times opinion columnist Ross Douthat on the “Interesting Times” podcast, Thiel described the Antichrist as “a potential systemic threat rather than a literal individual, suggesting it could manifest as a one-world totalitarian state that promises peace and safety but suppresses freedom,” says Allie.

He explained that the Antichrist might weaponize fearmongering about technology’s dangers, like rogue AI, to trick people into accepting a powerful, centralized (likely AI-enabled) authority. In other words, he (or it) would convince the globe that the only way to avoid technology-induced apocalyptic scenarios and ensure safety and peace for all is to consolidate power, including technological power, under a global regime.

But some have noticed a strange incongruence. Thiel co-founded Palantir Technologies, which develops and produces the very types of technology he claims the Antichrist could wield against humanity.

Douthat called him out on this contradiction in their interview. “You're an investor in AI; you're deeply invested in Palantir, in military technology, in technologies of surveillance, in technologies of warfare, and so on, right? And it just seems to me that when you tell me a story about the Antichrist coming to power and using the fear of technological change to sort of impose order on the world, I feel like that Antichrist would maybe be using the tools that you are building,” he said.

Another glaring contradiction is Thiel’s support for transhumanism — the merging of man and machine to achieve immortality. This is, again, the very type of technology he warns could be monopolized and weaponized by the Antichrist.

What gives?

When Allie heard Thiel’s Antichrist theory, a red flag immediately went up. Thiel’s prediction seems to suggest that because the Antichrist will promote “technological stagnation” in order to gather power to himself, the best way to prevent such a scenario is to keep investing in and advancing technology — even merging with it.

“It is interesting and maybe questionable that someone who makes a lot of money through technology would say that stopping technological innovation is actually going to, you know, usher in the Antichrist,” she says.

But more importantly, does Thiel’s prediction square with scripture’s accounts of the Antichrist?

The Bible outlines the Antichrist as a “man of lawlessness” (2 Thessalonians 2) who will exercise authority over “every tribe and people and language and nation” (Revelation 13) and eventually declare himself God. He is the evil harbinger of Christ’s second coming.

“So the debate that Peter Thiel is wading into is what is the means by which this person will be able to convince so many people that he is powerful and needs to have all this authority,” says Allie.

“Is it possible that this person uses the threat and the fear of AI-powered Armageddon to gain his power? I would say that is possible. … But is he some kind of metaphor for technological stagnation or climate change or whatever it is? No. [The Antichrist] is an actual man,” she explains.

“I do think it's interesting that Peter Thiel is talking about something like this. I would recommend that he and every single person get right with God.”

To hear more on Thiel’s Antichrist theories and Allie’s thorough analysis, watch the episode above.

Want more from Allie Beth Stuckey?

To enjoy more of Allie’s upbeat and in-depth coverage of culture, news, and theology from a Christian, conservative perspective, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Sam Altman Has Some Unfinished Business

Just a few months after OpenAI released ChatGPT—the viral artificial intelligence (AI) chatbot that uses "generative pre-trained transformers" (GPTs) to hold human-like conversations and has become the go-to source of assumed-accurate information for people across the globe—journalists Berber Jin and Keach Hagey published a profile of Big Tech’s fastest-rising star: OpenAI chief Sam Altman. The Wall Street Journal article, "The Contradictions of Sam Altman, AI Crusader," appeared in the spring of 2023, and just over two years later, that profile has morphed into Hagey’s new book, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future.


Biotech founder sliced open his own legs on camera to prove his product is safe for US troops



Jake Adler, founder of the medical startup Pilgrim, was willing to bleed to show investors he was serious about his product.

At just age 21, the biotech entrepreneur is so convinced his product has legs that he wounded his own.

In a video sent to investors, Adler sterilized his thighs before reminding viewers that his product is intended to undergo proper and rigorous clinical investigations. But that didn't stop him from testing it on himself first.

'I'm allowed to do anything to my own body.'

Adler reportedly numbed his legs with lidocaine before using a punch biopsy tool to create two "scientifically precise wounds."

Adler then applied his product, called Kingsfoil, to one of the open wounds. The other wound was left undressed as a control.

Kingsfoil is a clay-based hemostatic dressing that turns into a gel-like substance when it touches the skin. It is designed to help close wounds and aid in healing.

The product seemingly stalled the bleeding on the wound it was applied to, according to Business Insider, which reviewed the video.

"I was very cautious," he told the outlet.

"When I looked through the laws, there was nothing that inherently said I couldn't do a test on myself."

Adler added, "In the same way you can get a tattoo, I'm allowed to do anything to my own body."

With a warning not to try this at home, Adler showed he was willing to go to any length to get his product to market. A few huge investments later, the young entrepreneur is pushing toward what he has been primed to do for years.

RELATED: Bioscience secrets reveal you need to chew harder — and use better gum


Adler got a head start in 2023, earning a Thiel Fellowship just a year after graduating from high school. The fellowship, backed by billionaire Peter Thiel, funds young people who "want to build new things instead of sitting in a classroom."

"Two years. $200,000. Some ideas can't wait," the website reads.

By March 2025, Pilgrim had raised $3.25 million in investment, a figure that had ballooned to $4.3 million in seed funding at the time of this writing.

Now, Adler openly recognizes how his fellowship absorbed some of the initial costs that cause so many startups to stumble out of the gate. He says that while it can take most companies many more months to gain approval, Pilgrim has been able to accelerate Kingsfoil’s timeline thanks to partnerships with the Department of Defense.

RELATED: This 'Star Wars' vehicle is now real, and you don't need a license to fly one

Adler named Kingsfoil after the healing herb in "The Lord of the Rings."

The tech space is rife with these types of references to the J.R.R. Tolkien corpus: Alex Karp's Palantir is named after a seeing stone, Palmer Luckey's tech company Anduril refers to a sword, and Luckey's crypto-bank startup Erebor takes its name from a mountain in the same lore.

While Adler admits that most of his ideas can be credited to works of fantasy, the unofficial banner under which these startups are named immediately evokes an elevated standard: A startup with one of these fantasy-themed monikers is expected to be both serious and promising.


Adler explained in a March interview that his aspirations are focused on helping U.S. armed forces increase their readiness when it comes to defense, not weaponry.

For example, in addition to Kingsfoil, he has looked into the possibilities of controlling "sleep architecture" so that soldiers can feel as if they have slept for five hours when they have only slept for three. Adler does not want soldiers to rely on pharmaceuticals for rest or alertness.

The biotech entrepreneur also said he wants to build soldier readiness when it comes to chemical threats and create a system that can detect airborne pathogens or poisons. According to Business Insider, that system, dubbed ARGUS, would be coupled with Voyager, an inhaled mist to help the body neutralize chemicals (such as nerve agents) before they reach the bloodstream.

Pilgrim is just a five-person team, however, and these products are still prototypes or in the research and development stages.

As for Kingsfoil, its only current known side effect is minor skin irritation.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

Palantir insiders aim to make Hollywood great again



A group of Palantir insiders close to co-founder Peter Thiel are raising funds for a new film studio — one that aims to reawaken the bolder, more unapologetically American spirit of Hollywood classics like "The Searchers" and "First Blood."

The privately circulated pitch deck for Founders Films — led by Palantir CTO Shyam Sankar, Palantir employee Ryan Podolsky, and investor Christian Garrett — states the problem bluntly. After years of cultural drift, “movies have become ideological, more cautious, and less entertaining.”

First, rebuild what he calls the American Cinematic Universe, a category that includes 'Red Dawn,' 'Rocky IV,' 'Top Gun,' and 'The Hunt for Red October.'

Couldn’t agree more.

The American Cinematic Universe

So far, those involved are all playing it close to the chest. But we can surmise a few things from Sankar's public remarks to date.

Founders Films has a twofold strategy: First, rebuild what Sankar calls the American Cinematic Universe, a category that includes "Red Dawn," "Rocky IV," "Top Gun," and "The Hunt for Red October." Solid list.

Second: Take Hollywood back altogether — from foreign influence; from heavy-handed, overtly divisive progressive ideology; and perhaps, if he pulls it off and if we’re lucky, from the sort of suffocating, entirely numbers-driven decision-making that for too long has hamstrung real filmmakers and pushed millions away from cinema altogether.

Very interesting indeed, then, that from the otherwise confidential slides of the circulating pitch deck, we get this bit, which sounds like a promise: “Back artists unconditionally, take risk on novel IP.”

Now you’re talking. America needs to back artists. Period.

The return of fun

Of course, not every last American is going to care deeply about every one of Founders Films' potential projects, but that's the case with every movie house and every movie! American filmgoers and cinephiles have for almost two decades lived in an ever-shrinking box populated only with childish drivel, rehashed/remixed IP, and (face it) corporate progressive propaganda.

RELATED: Don Jr., other fans react to Hulk Hogan's death: 'A true American patriot!'


With luck, Founders Films garners investment, takes its shot, and succeeds. And you should want this, even if you don't find yourself on the so-called tech right or have any interest in Palantirian sorts of worldviews, for the following reason: The pendulum must swing for any real cinema to flourish again. Set aside high-minded cinema for a moment: When was the last time you saw a fun (or funny, for that matter) movie in a theater?

Founders Films, operating like a depth charge, may be just the thing to shake everyone out of their tedium and agitate the medium. Progressive narratives have dominated long enough that the entire exercise, even for progressives, has been emptied of vital spark.

Vision first

Hollywood needs the proverbial shot in the arm, even if it hurts: a genuine quasi-ideological counterweight, placing the director's or writer’s vision behind the wheel. Lefties have fought for this exact treatment since the 1970s.

That kind of shake-up may be the absolute best thing that can happen to Hollywood, regardless of politics. A plurality of voices, apolitical and political alike, should have a shot at prying loose the death grip of progressive control over the arts. That alone may change the entirety of the game.

Milius-pilled

We don’t know much at this point. There are rumors that Elon Musk is in the shadows of the project. A variety of military-based or -adjacent projects have been floated, and Sankar, writing on Substack, has hailed the work of legendary director/writer John Milius. If that's the direction Founders intends, cinema could heat up fast.

Can we assume the incredible technological power running under the Palantir umbrella will be reapplied to film? Likely so. If (wisely) they bring in organic, human-oriented leavening agents in the form of true artists (rather than compliant, meek strivers) to offset the use of AI, the impact on the industry and creatives nationwide could be powerful.

While there’s a baked-in audience for defiant right-coded material and Founders should serve them, with luck, Founders Films will consider the whole of the American film ecosystem and take a broad, long-term approach — launching propaganda-geared works designed to counter ideological opposites, sure, but also backing art “unconditionally” to open up the whole creative spectrum.

Let everyone take a shot. Allow non-ideological cinema a chance to flourish again: indies, new IP from dangerous artists, edgy comedy, the works. This was the '80s, the zeitgeist behind all those right-coded classics. Bring it back.

'There's nowhere to go': Will Elon Musk stop the AI Antichrist — or become it?



Peter Thiel is going viral all over again in a new video interview with the New York Times' Ross Douthat.

The Catholic conservative columnist threw Thiel huge theological questions about transhumanism, AI, and the Antichrist — all topics Thiel has weighed in on with increasing intensity. But in the course of the conversation, Thiel dropped a shocking story about a recent discussion he had with Elon Musk about the viability of Mars as an escape from Earth and its very human predicaments.

'Elon came to believe that if you went to Mars, the socialist US government, the woke AI would follow you to Mars.'

Recalling one of numerous conversations last year, Thiel said: "I had the seasteading version with Elon where I said: If Trump doesn’t win, I want to just leave the country. And then Elon said: There’s nowhere to go. There’s nowhere to go."

"It was about two hours after we had dinner and I was home that I thought of: Wow, Elon, you don’t believe in going to Mars any more. 2024 is the year where Elon stopped believing in Mars — not as a silly science tech project but as a political project. Mars was supposed to be a political project; it was building an alternative. And in 2024 Elon came to believe that if you went to Mars, the socialist U.S. government, the woke AI would follow you to Mars."

Follow the leader

The stunning shift came about during an earlier meeting between Musk and DeepMind CEO Demis Hassabis, brokered by Thiel. As Thiel paraphrased the exchange between the two, Demis told Musk he was "working on the most important project in the world," namely "building a superhuman AI," to which Musk replied that it was he who was working on the most important project in the world, "turning us into interplanetary species." As Thiel recounted, "Then Demis said: Well, you know my AI will be able to follow you to Mars. And then Elon went quiet."

Assuming Thiel has conveyed pretty much the truth, the whole truth, and nothing but the truth about the episode, the ramifications extend in many directions, including toward Musk's repeated meltdowns (or crashouts, as the Zoomers say) about the One Big Beautiful Bill Act and the potential implosion of the American political economy due to runaway debt and deficit spending.

But the main point, of course, pertains to Mars itself, which represents in the visions of many more people than just Elon Musk the idea of the ultimate, last-ditch, fail-safe escape from the "pale blue dot" of planet Earth.

RELATED: There’s a simple logic behind Palantir’s controversial rise in Washington

Alex Karp. Kevin Dietsch/Getty Images

A backup civilization

As someone who has covered the Mars dream off and on for almost 10 years, beginning around 2016 with an op-ed on how Mars colonization would not succeed without Christian underpinnings, I raised both eyebrows at Thiel's anecdote because it indicated a growing spiritual sense in both tech titans of the risk of an inescapable final showdown on Earth in our lifetimes.

Musk gave an important speech at the World Governments Summit a few years ago in which he argued, reasonably, that one global government is bad because it invites worldwide collapse. Allowing multiple civilizations to exist politically and share space on Earth is good because history proves that even, or especially, the biggest and best civilizations eventually collapse. If you don't want human civilization as a whole to suffer the same fate, you probably want to hedge your bets and have backups.

Unfortunately, by way of example, he suggested that the fall of Rome was mitigated by the rise of the Islamic empires. In reality, the Ottoman Turks — and all too many Crusaders — destroyed the Roman Empire, which had prevailed in the East for many centuries after Rome's fall. The logic of bet-hedging with multiple civilizations isn't much helped by the example of civilization-destroying wars.

Mars attacks ... or not

That problem stuck out to me once again because of how central to Musk's logic for colonizing Mars was the idea that tomorrow's Martians could come back and save Earth if things went too far wrong. Now, Musk seems to be stuck with the risk that Mars can’t escape Earth's problems because Martians can't escape Earthlings' AI, negating their planetary potential as a hedged bet against bad Earth outcomes.

Musk’s apparent concerns seem to indicate a lack of confidence that the right kind of AI — such as his own xAI — can beat the wrong kind. That would seem to suggest that AI itself is the problem, because even or especially the best AI must tend severely toward total dominance over the whole world, putting all our civilizational eggs into just one basket in a newly extreme way.

No control without Christ

To me, at least, the challenge strengthens my thesis from almost 10 years ago that taking Christianity out of the discussion results in a dead end. Christ's admonition that His kingdom is "not of this world" is significant because human Christians with spiritual authority over AIs will shape them in ways that discourage their consolidation and dominance over all places humans ever go — making it possible for Mars not to be controlled by an AI that controls Earth, in the same way that it would be made possible for, say, America not to be controlled by Chinese AI, or vice versa.

Absent a human spiritual authority granted by a God whose kingdom is not of this world, it just seems very difficult for human beings to find a way to stop AI from becoming not just a temporal power but itself also a spiritual authority — making it the lord of the world, to borrow the title of a famous novel about the triumph of the Antichrist.

RELATED: Why each new controversy around Sam Altman’s OpenAI is crazier than the last


Putting the AI in Antichrist

This dynamic is probably behind Thiel's uneasy remarks to Douthat when pressed about the problem of the Antichrist and the likelihood of his earthly appearance sooner rather than later. Douthat pointedly expressed concern that despite Thiel's insistence that he was working to discourage the rise of the Antichrist, a potential Antichrist might well look at Thiel's technological feats and embrace them as the best and quickest path to the most complete world domination.

Various wits online have noted that because the Antichrist is expected to be welcomed rapturously by the world, the controversial Thiel must therefore not be the Antichrist.

Our better natures

But the deeper question remains as to what could possibly lead someone to be rapturously welcomed as the lord of the world if not the only thing that seems capable of ruling the entire world plus Mars — that is, AI.

I think Thiel's remarks in the interview make it pretty clear that his goals with Palantir and related efforts have to do with reducing the risk that the wrong kind of person takes over the world with one AI. That kind of person, following the above logic, would not be a controversial and divisive person but someone who could be rapturously received as a figure who frees the world from having to do what Jesus teaches in order to become as gods.

That puts the spotlight on the transhumanism question, which Douthat also pressed with Thiel, who insisted throughout the interview that the "Judeo-Christian" approach to such matters is to forge ahead, striving not to settle for mere bodily transformation but to seek transformation of soul as well.

Thiel emphasized in making this point that the word "nature" does not appear in the Old Testament. And it does seem that the long-term Western effort to get past the destructive difficulty of rival interpretations of the Bible, by pivoting to the so-called "Book of Nature" in hopes of scientifically converging on one universally legitimate interpretation of God's creation, has pretty much failed.

But an open question remains. Which is more plausible: (1) the worship of nature, which Thiel represents as personified by Greta Thunberg, leads to a rapturous embrace of a Greta-ish Antichrist's rule over all AI and the whole world; or (2) the worship of technology, which we might personify by someone who believes, as Musk says, that "physics sees through all lies," leads to a rapturous embrace of a Musk-like Antichrist's rule over all AI and the whole world?

Not by works alone

Musk and Thiel both seem to find themselves drawn into the AI game at the highest levels out of a feeling that they have little choice but to try to create some alternatives to worse AIs with more power to tempt people to consolidate all humanity under one bot to rule them all.

From an outside perspective, it seems sort of crazy to think that Christ's church — an institution not of this world — offers people an escape from AI bondage that even the hardest-working and best-intentioned secular geniuses on Earth can't provide.

But as the stakes keep rising and our most distinctive tech minds shudder in the face of AI's civilizational challenge, it seems less and less crazy by the day.


Can artificial intelligence help us want better, not just more?



The notification chimes. Another algorithmically selected product appears in your feed, something you never knew you wanted until this moment. You pause, finger hovering over the “buy now” button. Is this truly what you desire or just what the algorithm has decided you should want?

We’re standing at a fascinating turning point in human history. Our most advanced technologies — often criticized for trapping us in cycles of shallow wants and helpless determinism — could offer us unprecedented freedom to rediscover what we truly desire. “Agentic AI” — those systems that can perceive, decide, and act on their own toward goals — isn't just another tech advancement. It might actually liberate our attention and intention.

Rather than passively accepting AI's influence, we can actively shape AI systems to reflect and enhance our deeply held values.

So what exactly is agentic AI? Think of it not just as a fancy calculator or clever chatbot, but as a digital entity with real independence.

These systems perceive their environment, make decisions, and take actions with significant autonomy. They learn from experiences, adapt to new information on the fly, and pursue complex goals without our constant direction. Self-driving cars navigate busy streets, trading algorithms make split-second financial decisions, and research systems discover scientific principles on their own.

These aren't just tools anymore. They're becoming independent actors in our world.

To understand this shift, I want to introduce you to two key thinkers: Marshall McLuhan, who famously said “the medium is the message,” and René Girard, who revealed how we tend to want what others want — a phenomenon he called “mimetic desire.” Through their insights, we can see how agentic AI works as both a medium and a mediator, reshaping our reality while influencing what we desire. If we understand how agentic AI will continue to shape our world, we can maintain our agency in a world increasingly shaped by technological advances.

McLuhan: AI as medium

McLuhan showed us that technology’s structure, scale, and speed shape our consciousness more profoundly than whatever content it carries. The railway didn’t just introduce transportation; it created entirely new kinds of cities and work.

Similarly, agentic AI isn't just another tool. It's becoming an evolving environment whose very existence transforms us.

McLuhan offers the example of electric light. It had no “content” in the conventional sense, yet it utterly reshaped human existence by eliminating darkness. Agentic AI similarly restructures our world through its core qualities: autonomy, adaptability, and goal-directedness. We aren't just using agentic AI; we’re increasingly living inside its operational logic, an environment where non-human intelligence shapes our decisions, actions, and realities.

Neil Postman, who built on McLuhan’s work, reminds us that while media environments powerfully shape us, we aren't just passive recipients: “Media ecology looks into how media of communication affect human perception, understanding, feeling, and value.” By understanding these effects, we can maintain our agency within them. We can be active readers of the message rather than just being written by it.

One big impact is on how we make sense of the world. As agentic AI increasingly filters, interprets, and generates information, it becomes a powerful participant in constructing our reality. The challenge is maintaining shared reality while technology increasingly forges siloed, personalized worlds. While previous technological advances contributed to this siloing, AI offers the possibility of connectivity. Walter Ong's concept of "secondary orality" suggests AI might help create new forms of connection that overcome the isolating aspects of earlier digital technologies.

Girard: AI as mediator of desire

While McLuhan helps us understand how agentic AI reshapes our perception, René Girard offers a framework for understanding how it reshapes what we want.

Girard’s theory of mimetic desire suggests that human desire is rarely spontaneous. Instead, we learn what to want by imitating others — our "models." This creates a triangle: us, the model we imitate, and the object of desire.

Now, imagine agentic AI entering this dynamic. If human history has been a story of desire mediated by parents, peers, and advertisements, agentic AI is becoming a significant new mediator in our digital landscape. Its ability to learn our preferences, predict our behavior, and present curated choices makes it an influential model, continuously shaping our aspirations.

RELATED: If AI isn’t built for freedom, it will be programmed for control


Peter Thiel, who studied under Girard at Stanford, suggests awareness of these dynamics can lead to more authentic choices. “The most successful businesses come from unique, non-mimetic insights,” Thiel observes. By recognizing how AI systems influence our desires, we can more consciously choose which influences to embrace and which to question, moving toward greater authenticity.

Look at recommendation engines, the precursors to full-blown agentic AI. They already operate on Girardian principles by showing us what others have bought or liked, making those items more desirable to us. Agentic AI takes this further. Through its autonomous actions and pursuit of goals, it can itself model desirability.

The key question becomes: Is your interest in a hobby, conviction about an issue, or lifestyle aspiration truly your own? And more importantly, can you tell the difference, and does it matter if it brings you genuine fulfillment?

A collaborative future

The convergence of AI as both medium and mediator creates unprecedented possibilities for human-AI partnership.

Andrew Feenberg's critical theory of technology offers a constructive path forward. He argues that technologies aren't neutral tools but are laden with values. However, he rejects technological determinism, emphasizing that these values can be redesigned through what he calls “democratic rationalization,” the process by which users reshape technologies to better reflect their values.

“Technology is not destiny but a scene of struggle,” Feenberg writes. "It is a social battlefield on which civilizational alternatives are debated and decided." Rather than passively accepting AI's influence, we can actively shape AI systems to reflect and enhance our deeply held values.

This vision requires thoughtful design guided by human wisdom. The same capabilities that could liberate us could create more sophisticated traps. The difference lies not in the technology itself but in the values and intentions that shape its development. By drawing on insights from McLuhan, Girard, Postman, Ong, Thiel, Feenberg, and others, we can approach this evolving medium not with fear or passive acceptance, but with creative engagement.

The future of agentic AI isn't predetermined. It’s ours to shape as a technology that enhances rather than diminishes our humanity, that serves as a partner rather than a master in our ongoing quest for meaning, connection, and flourishing.

There’s a simple logic behind Palantir’s controversial rise in Washington



In 2003, in Palo Alto, California, Peter Thiel, Alex Karp, and a handful of cohorts founded a software company called Palantir. Now, some 20-odd years later, with stock prices reaching escape velocity and government and commercial contracts secured from Huntsville to Huntington, Palantir seems to have taken pole position in the AI race.

With adamantine ties to the Trump administration and deep history with U.S. intelligence and military entities to boot, Palantir has emerged as a decisive force in the design and management of our immediate technological, domestic, and geopolitical futures.

Curious, then, that so many, including New York Times reporters, seem to believe that Palantir is merely another souped-up data hoarding and selling company like Google or Adobe.


The confusion is somewhat understandable, but the scale in play is unprecedented. To grasp the scope of Palantir’s project, consider that humanity now churns out every two days the same amount of information that was accrued over the previous 5,000 years of civilization.

As then-Gartner senior vice president Peter Sondergaard put it more than a decade ago, “Information is the oil of the 21st century, and analytics is the combustion engine.”

Palantir spent the last 20 years building that analytics combustion engine. It arrives as a suite of AI products tailored to various markets and end users. The promise, as the era of Palantir proceeds and as AI-centered business and governance takes hold, is that decisions will be made with a near-complete grasp on the totality of real-time global information.



The tech stack

Famously seeded with CIA In-Q-Tel cash, Palantir started by addressing intelligence agency needs. In 2008, the Gotham software product, described as a tool for intelligence agencies to analyze complex datasets, went live. Gotham is said to integrate and analyze disparate datasets in real time to enable pattern recognition and threat detection. Joining the CIA, FBI, and presumably most other intelligence agencies in deploying Gotham are the Centers for Disease Control and Prevention and the Department of Defense.

Next up in the suite is Foundry, again an AI-based software solution but geared toward industry. It purportedly centralizes previously siloed data sources to effect maximum efficiency. Health care, finance, and manufacturing all took note and were quick to integrate Foundry. PG&E and Southern California Edison are satisfied clients. So is the Wendy’s burger empire.

Next in this line of products, which we’ll see are integrated and reciprocal in their application to client needs, is Apollo, which is, according to the Palantir website, “used to upgrade, monitor, and manage every instance of Palantir’s product in the cloud and at some of the world’s most regulated and controlled environments.” Among others, Morgan Stanley, Merck, Wejo, and Cisco are reportedly all using Apollo.

If none of this were impressive enough, if the near-total penetration of both business and government (U.S., at least) at foundational levels weren’t evident yet, consider the crown jewel of the Palantir catalog, which integrates all the others: Ontology.

“Ontology is an operational layer for the organization,” Palantir explains. “The Ontology sits on top of the digital assets integrated into the Palantir platform (datasets and models) and connects them to their real-world counterparts, ranging from physical assets like plants, equipment, and products to concepts like customer orders or financial transactions.”

Every aspect native to a company or organization — every minute of employee time, any expense, item of inventory, and conceptual guideline — is identified, located, and cross-linked wherever and however appropriate to maximize efficiency.

The next-level efficiency, one imagines, will have radical implications for our rather inefficient lives. Consider the DMV, the wait list, the tax prep: Anything that can be processed (assuming enough energy inputs for the computation) can be — ahead of schedule.

The C-suite

No backgrounder is complete without some consideration of a company’s founders. The intentions, implied or overt, from Peter Thiel and Alex Karp in particular are, in some ways, as ponderable as the company’s ultra-grade software products and market dominance.

Palantir CEO Alex Karp stated in his triumphal 2024 letter to shareholders: “Our results are not and will never be the ultimate measure of the value, broadly defined, of our business. We have grander and more idiosyncratic aims.” Karp goes on to quote both Augustine and Houellebecq as he addresses the company’s commitment first to America.

This doesn’t sound quite like the digital panopticon or the one-dimensionally malevolent elite mindset we were threatened with for the last 20 years. Despite their outsized roles and reputations, Thiel companies tend toward the relatively modest goals of reducing overall harm or risk. Informed by René Girard’s theory that rivalry rapidly spirals into hard-to-control and ultimately catastrophic one-upmanship, the approach reflects a considerably more sophisticated point of view than Karl Rove’s infamously dismissive claim to be “history’s actors.”

“Initially, the rise of the digital security state was a neoconservative project,” Blaze Media editor at large James Poulos remarked on the dynamic. “But instead of overturning this Bush-era regime, the embedded Obama-Biden elite completed the neocon system. That’s how we got the Cheneys endorsing Kamala.”

In a series of explanatory posts on X made via the company's Privacy and Ethics account and reposted on its webpage, Palantir elaborated: “We were the first company to establish a dedicated Privacy & Civil Liberties Engineering Team over a decade ago, and we have a longstanding Council of Advisors on Privacy & Civil Liberties comprised of leading experts and advocates. These functions sit at the heart of the company and help us to embody Palantir’s values both through providing rights-protective technologies and fostering a culture of responsibility around their development and use.”

It's a far cry from early 2000s rhetoric and corporate policy, and so the issue becomes one of evaluation. Under pressure from the immensity of the data, the ongoing domestic and geopolitical instability manifesting in myriad forms, and particularly the bizarre love-hate interlocking economic mechanisms between the U.S. and China, many Americans are hungry to find a scapegoat.

Do we find ourselves, as Americans at least, with the advantage in this tense geopolitical moment? Or are we uncharacteristically behind in the contest for survival? An honest assessment of our shared responsibility for our national situation might lead away from scapegoating, toward a sense that we made our bed a while ago on technology and security and now we must lie in it.