Why each new controversy around Sam Altman’s OpenAI is crazier than the last



Last week, two independent nonprofit watchdogs, the Midas Project and the Tech Oversight Project, released the fruit of a yearlong investigation: a massive file collecting and presenting evidence of a panoply of deeply suspect actions, mainly on the part of Altman but also attributable to OpenAI as a corporate entity.

It’s damning stuff — so much so that, if you’re only acquainted with the hype and rumors surrounding the company or perhaps its ChatGPT product, the time has come for you to take a deeper dive.


Most recently, iyO Audio alleged that OpenAI engaged in wholesale design theft and outright trademark infringement. A quick look at other recent headlines suggests an alarming pattern:

  • Altman is said to have claimed no equity in OpenAI despite holding backdoor investments through Y Combinator, among others;
  • Altman owns 7.5% of Reddit, and the company’s still-expanding partnership with OpenAI shot his net worth up by $50 million;
  • OpenAI is reportedly restructuring its corporate form yet again — with a 7% stake, Altman stands to be $20 billion richer under the new structure;
  • Former OpenAI executives, including Mira Murati, the Amodei siblings, and Ilya Sutskever, have all attested to pathological mistreatment and behavioral malfeasance on Altman’s part.

The list goes on. Many other serious transgressions are cataloged in the OpenAI Files. At the time of this writing, Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits. Accusations range from incestuous sexual abuse to racketeering, breach of contract, and copyright infringement.

None of these accusations, including heinous crimes of a sexual nature, has done much of anything to dent the OpenAI brand or its ongoing upward valuation.

Tech's game of thrones

The company’s trajectory has outlined a Silicon Valley game of thrones unlike any seen elsewhere. Since its late-2015 inception — when Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman convened to found OpenAI — the Janus-faced organization has been a tier-one player in the AI sphere. In addition to cutting-edge tech, it’s also generated near-constant turmoil. The company churns out rumors, upsets, expulsions, shady reversals, and controversy at about the same rate as it advances AI research, innovation, and products.

RELATED: Mark Zuckerberg's multibillion-dollar midlife crisis


Back in 2015, Amazon, Peter Thiel, and other early backers pledged the company $1 billion up front, but the money was slow to arrive. Right away, Altman and Musk clashed over the ultimate direction of the organization. By early 2018, Elon was out — an exit that spiked investor uncertainty and required another fast shot of capital.

New investors, Reid Hoffman of LinkedIn fame among them, stepped up — and OpenAI rode on. Under the full direction of Sam Altman, the company pushed its reinforcement learning products, OpenAI Gym and Universe, to market.

To many at the time, including Musk, OpenAI was lagging behind Google in the race to AI dominance — a problem for Musk in particular, who had originally conceived the organization as a serious counterweight to what many experts and laypeople saw as an extinction-level threat: the centralized, “closed” development and deployment of AI to the point of dominance across all of society.

That’s why OpenAI began as a nonprofit, ostensibly human-based, decentralized, and open-source. In Silicon Valley’s heady (if degenerate) years prior to the COVID panic, there was a sense that AI was simply going to happen — it was inevitable, and it would be preferable that decent, smart people, perhaps not so eager to align themselves with the military industrial complex or simply the sheer and absolute logic of capital, be in charge of steering the outcome.

But by 2019, OpenAI had altered its corporate structure from nonprofit to something called a “capped-profit model.” Money was tight. Microsoft invested $1 billion, and early versions of the LLM GPT-2 were released to substantial fanfare and fawning appreciation from the experts.

Life after Elon

In 2020, the now for-limited-profit company released its API, which allowed developers to access GPT-3. Its image generator, DALL-E, followed in 2021, a move that has since seemed to define, to some limited but significant extent, the direction in which OpenAI wants to progress. The spirit of cooperation and sharing, if not enshrined at the company, was at least in the air, and by 2022 ChatGPT had garnered millions of users, well on its way to becoming a household name. The company’s valuation climbed into the tens of billions of dollars.

After Musk’s dissatisfied departure — he now publicly lambastes "ClosedAI" and "Scam Altman" — the company’s restructuring around ideologically diffuse investors solidified a new model: Build an ecosystem of products intended to dovetail and interface with other companies and software. (Palantir has taken a somewhat similar, though much more focused, approach to the problem of capturing AI.) The thinking here seems to be: Attack the problem from all directions, converge on “intelligence,” and get paid along the way.

And so, at present, in addition to the aforementioned products, OpenAI now offers — deep breath — CLIP for matching images with text, Jukebox for music generation, Shap-E for 3D object generation, Sora for video generation, Operator for automating workflows with AI agents, Canvas for AI-assisted content creation, and a smattering of similar, almost modular, products. It’s striking how many of these are aimed at creative industries — an approach capped off most recently by the sensational hire of Apple’s former chief design officer Jony Ive, whose IO deal with the company is the target of iyO’s litigation.

But we shouldn’t give short shrift to the “o series” (o1 through o4) of products, which are billed as reasoning models. Reasoning, of course, is the crown jewel of AI. These products are curious: While they don’t make up a hardcore package of premium-grade plug-and-play tools for industrial and military efficiency (the Palantir approach), they suggest a very clever route into the heart of the technical problems involved in “solving” for “artificial reasoning” (assuming, as remains contested, that such a thing can ever really exist). Is part of the OpenAI ethos, even if only by default, to approach the crown jewel of “reasoning” by way of the creative, intuitive, and generative, as opposed to tracing a line of pure efficiency as others in the field have done?

Gut check time

Wrapped up in the latest OpenAI controversy is a warning that’s impossible to ignore: Perhaps humans just can’t be trusted to build or wield “real” AI of the sort Altman wants — the kind he can prompt to decide for itself what to do with all his money and all his computers.

Ask yourself: Does any of the human behavior evidenced along the way in the OpenAI saga seem, shall we say, stable — much less morally well-informed enough that Americans, or any people, would rest easy putting the future in the hands of Altman and company? Are these individuals worth the $20 million to $100 million a year they command on the hot AI market?

Or are we — as a people, a society, a civilization — in danger of becoming strung out, hitting a wall of self-delusion and frenzied acquisitiveness? What do we have to show so far for the power, money, and special privileges thrown at Altman for promising a world remade? And he’s just getting started. Who among us feels prepared for what’s next?

Can artificial intelligence help us want better, not just more?



The notification chimes. Another algorithmically selected product appears in your feed, something you never knew you wanted until this moment. You pause, finger hovering over the “buy now” button. Is this truly what you desire or just what the algorithm has decided you should want?

We’re standing at a fascinating turning point in human history. Our most advanced technologies — often criticized for trapping us in cycles of shallow wants and helpless determinism — could offer us unprecedented freedom to rediscover what we truly desire. “Agentic AI” — those systems that can perceive, decide, and act on their own toward goals — isn't just another tech advancement. It might actually liberate our attention and intention.


So what exactly is agentic AI? Think of it not just as a fancy calculator or clever chatbot, but as a digital entity with real independence.

These systems perceive their environment, make decisions, and take actions with significant autonomy. They learn from experiences, adapt to new information on the fly, and pursue complex goals without our constant direction. Self-driving cars navigate busy streets, trading algorithms make split-second financial decisions, and research systems discover scientific principles on their own.

These aren't just tools anymore. They're becoming independent actors in our world.

To understand this shift, I want to introduce you to two key thinkers: Marshall McLuhan, who famously said “the medium is the message,” and René Girard, who revealed how we tend to want what others want — a phenomenon he called “mimetic desire.” Through their insights, we can see how agentic AI works as both a medium and a mediator, reshaping our reality while influencing what we desire. If we understand how agentic AI will continue to shape our world, we can maintain our agency in a world increasingly shaped by technological advances.

McLuhan: AI as medium

McLuhan showed us that technology’s structure, scale, and speed shape our consciousness more profoundly than whatever content it carries. The railway didn’t just introduce transportation; it created entirely new kinds of cities and work.

Similarly, agentic AI isn't just another tool. It's becoming an evolving environment whose very existence transforms us.

McLuhan offers the example of electric light. It had no “content” in the conventional sense, yet it utterly reshaped human existence by eliminating darkness. Agentic AI similarly restructures our world through its core qualities: autonomy, adaptability, and goal-directedness. We aren't just using agentic AI; we’re increasingly living inside its operational logic, an environment where non-human intelligence shapes our decisions, actions, and realities.

Neil Postman, who built on McLuhan’s work, reminds us that while media environments powerfully shape us, we aren't just passive recipients: “Media ecology looks into how media of communication affect human perception, understanding, feeling, and value.” By understanding these effects, we can maintain our agency within them. We can be active readers of the message rather than just being written by it.

One big impact is on how we make sense of the world. As agentic AI increasingly filters, interprets, and generates information, it becomes a powerful participant in constructing our reality. The challenge is maintaining shared reality while technology increasingly forges siloed, personalized worlds. While previous technological advances contributed to this siloing, AI offers the possibility of connectivity. Walter Ong's concept of "secondary orality" suggests AI might help create new forms of connection that overcome the isolating aspects of earlier digital technologies.

Girard: AI as mediator of desire

While McLuhan helps us understand how agentic AI reshapes our perception, René Girard offers a framework for understanding how it reshapes what we want.

Girard’s theory of mimetic desire suggests that human desire is rarely spontaneous. Instead, we learn what to want by imitating others — our "models." This creates a triangle: us, the model we imitate, and the object of desire.

Now, imagine agentic AI entering this dynamic. If human history has been a story of desire mediated by parents, peers, and advertisements, agentic AI is becoming a significant new mediator in our digital landscape. Its ability to learn our preferences, predict our behavior, and present curated choices makes it an influential model, continuously shaping our aspirations.

RELATED: If AI isn’t built for freedom, it will be programmed for control


Peter Thiel, who studied under Girard at Stanford, suggests awareness of these dynamics can lead to more authentic choices. “The most successful businesses come from unique, non-mimetic insights,” Thiel observes. By recognizing how AI systems influence our desires, we can more consciously choose which influences to embrace and which to question, moving toward greater authenticity.

Look at recommendation engines, the precursors to full-blown agentic AI. They already operate on Girardian principles: By showing us what others have bought or liked, they make those items more desirable to us. Agentic AI takes this further. Through its autonomous actions and pursuit of goals, it can itself demonstrate desirability.

The key question becomes: Is your interest in a hobby, conviction about an issue, or lifestyle aspiration truly your own? And more importantly, can you tell the difference, and does it matter if it brings you genuine fulfillment?

A collaborative future

The convergence of AI as both medium and mediator creates unprecedented possibilities for human-AI partnership.

Andrew Feenberg's critical theory of technology offers a constructive path forward. He argues that technologies aren't neutral tools but are laden with values. However, he rejects technological determinism, emphasizing that these values can be redesigned through what he calls “democratic rationalization,” the process by which users reshape technologies to better reflect their values.

“Technology is not destiny but a scene of struggle,” Feenberg writes. "It is a social battlefield on which civilizational alternatives are debated and decided." Rather than passively accepting AI's influence, we can actively shape AI systems to reflect and enhance our deeply held values.

This vision requires thoughtful design guided by human wisdom. The same capabilities that could liberate us could create more sophisticated traps. The difference lies not in the technology itself but in the values and intentions that shape its development. By drawing on insights from McLuhan, Girard, Postman, Ong, Thiel, Feenberg, and others, we can approach this evolving medium not with fear or passive acceptance, but with creative engagement.

The future of agentic AI isn't predetermined. It’s ours to shape as a technology that enhances rather than diminishes our humanity, that serves as a partner rather than a master in our ongoing quest for meaning, connection, and flourishing.

Under The Guise Of ‘Preventative Medicine’ For IVF, Eugenics Is Back

The libertarian tech bros creating neo-eugenics startups seem totally unaware of the moral monstrosities they are conjuring up.

Silicon Valley's 'demons': Transhumanists possessed by something 'anti-human'



One of the foremost thought leaders in AI and transhumanism is Joe Allen, who now serves as the transhumanism editor for "Bannon’s War Room" — and he warns that transhumanism isn’t exactly a thing of the future, but rather it’s happening right now.

Transhumanism is the merging of humans with machines, and in the present moment, that consists of billions of people obsessively checking their iPhones. That addiction does not bode well for mankind.

While Allen believes “the power is in the transhumanists' court,” Shanahan — who was embedded in Silicon Valley long enough to be fully immersed in it — believes there is still power in the natural.


“I’ve been surrounded by this world for 15 years now and was always kind of beloved,” Shanahan tells Allen. “Beloved because I was very organic, not augmented in any way. Maybe I used Botox for a few years to try it out, but I stopped all of that.”

“I really love natural human biology. I think it is incredibly beautiful. I think it actually makes an individual beautiful and desirable because there’s something innate in every living being. And I think that this is the piece of the future where there will be mass desire, and this is talked about in 'Mad Max 2,' but for fully organic earthly women,” she continues.

“That never goes away, and I’ve seen a preview of that, having lived in Silicon Valley for as long as I have. I’ve seen that preview. I’ve seen these very powerful men seek out the most organic female, a female that almost reminds them of Greek oracles. So, brilliant, connected to God, channeling information, visionary, but also physically pure,” she adds.

She’s noticed that these tech elites “spiral” and become “greedy” in search of these kinds of women, which Allen chimes in to call “crunchy harems.”

An example of this, Shanahan says, is the Burning Man festival.

“Burning Man is a simulation of that world, of that future, of these very powerful elite men going to Burning Man, and all of these young beautiful women going to Burning Man, and creating these miniature harems around these men. I mean, that’s what Burning Man has become, unfortunately,” she tells Allen.

“You’ve been around a lot of these guys,” Allen says. “I know every person’s different, but by and large, is it misguided goodwill at the heart of the tech elite transhuman dream, or is there a touch of malevolence, or is there deep malevolence?”

“A bit of their humanity is possessed by something very anti-human,” Shanahan answers, adding, “They’re so manipulative; they’re trained in humanity.”

While Shanahan admits she doesn’t “understand it all,” she does “see where the humanity is and what is interfering with that humanity.”

“And I don’t know precisely what that thing is. I know Christians have a word for it,” she continues.

“Demon sounds about right to me,” Allen adds.

Want more from Nicole Shanahan?

To enjoy more of Nicole's compelling blend of empathy, curiosity, and enlightenment, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

Singularity: The elites' dystopian view of human beings



The singularity has been at the tip of many tech-savvy and global-elitist tongues as of late — and its implications are more than a little frightening.

According to Justin Haskins, president of Our Republic and senior fellow at the Heartland Institute, the definition of the singularity is a "hypothetical moment off into the future when technology advances to a point where it just is completely transformative for humanity.”

“Typically, the way it's talked about is artificial intelligence — or just machines in general — become more intelligent than human beings,” Haskins tells Allie Beth Stuckey of “Relatable.” He goes on to say that some people describe the singularity as the time when AI "has the ability to sort of continue to redesign itself."


While Haskins notes that some of the consequences of the singularity are positive — like the potential to cure cancer — it also creates all kinds of ethical problems.

“What happens when a lot of employees are no longer needed because HR and loan officers and all these other big gigantic parts of businesses can just be outsourced to an artificial intelligence system?” he asks.

In response, Haskins says, “There’ll be massive disruptions in the job market.”

Stuckey herself is wary of the small issues we have now that might grow into bigger problems.

“People have posted their interactions with different kinds of AI, whether it's ChatGPT or Grok,” she explains.

She continues, “I've seen people post their conversations of saying like, ‘Would you rather’ — asking the AI bot — ‘Would you rather misgender someone, like misgender Bruce Jenner, or kill a thousand people,’ and it will literally try to give some nuanced take about how misgendering is never okay.”

“And I know that we’re talking beyond just these chat bots. We’re talking about something much bigger than that, but if that’s what's happening on a small scale, we can see a peek into the morality of artificial intelligence,” she adds.

“If all of this is being created and programmed by people with particular values, that are either progressive or just pragmatists, like if they’re just like, 'Yeah, whatever we can do and whatever makes life easier, whatever makes me richer, we should just do that’ — there will be consequences of it,” she says.

Stuckey also notes that she had recently heard someone of importance discussing the loss of jobs and what people will do as a result, and the answer to that was concerning.

“It was some executive that said, ‘I’m not scared about AI killing 150 million jobs. That’s actually why we are creating these very immersive video games — so that when people lose their jobs, they can just play these video games and they can be satisfied and fulfilled that way,’” Stuckey explains.

“That is a very dystopian look at the future,” she continues, adding, “And yet, that tells us the mind of a lot of the people at WEF, a lot of the people at Davos, a lot of the people in Silicon Valley. That’s really how they see human beings.”

“Whether you’re talking about the Great Reset, whether you’re talking about singularity, they don’t see us as people with innate worth; they see us as cogs in a wheel,” she adds.


Bill Gates Version 1.0

Before Jeff Bezos could create Amazon, or Mark Zuckerberg could create Facebook, or Larry Page and Sergey Brin could create Google, someone had to build the foundation of modern technology that now dominates every aspect of our lives.


Google founder's ex-wife speaks out about evils of ‘tech mafia’



The Big Tech elites have been laying “groundwork” to enable the policies of the Great Reset, and no one knows it better than Silicon Valley attorney, entrepreneur, RFK Jr. running mate, and ex-wife of Google co-founder Sergey Brin — Nicole Shanahan.

“Their money especially was being conscripted to set the groundwork for the Great Reset, specifically through a network of NGO advisors, relationship with Hollywood, relationship with Davos, and their own companies,” Shanahan told Allie Beth Stuckey in a recent interview on “Relatable.”

“If you look at who’s on these boards, who hangs out with each other, how the culture of tech wealth works,” Shanahan continued, “it’s a really small group of people, and it’s a really small group of people making these decisions.”


Glenn Beck of “The Glenn Beck Program” is well aware of plans for the Great Reset, but he’s shocked that Shanahan is warning about them.

“It is amazing to go from five years ago, everybody saying, ‘That’s crazy, that’s not happening,’ to the former wife of the head of Google coming out and saying, ‘Yeah, this was all orchestrated, we didn’t even know what we were into as wives of the Silicon Valley mafia wives,’ as she calls them,” Glenn tells Stuckey.

“She said that she really saw the reality of evil, the reality of hell, when she was deep into politics, and that kind of started to shift her perspective on, ‘Wait, who are the bad guys here? What’s going on? All of this evil is being done under the guise of really good intentions, especially in Silicon Valley,’” Stuckey explains.

And when Shanahan’s daughter was diagnosed with autism, she started attempting to figure out what could have caused it.

“As she was digging into the research, she found some things that kind of have been dubbed as right-wing conspiracy theories about different environmental factors, even pharmaceutical factors that could possibly cause some symptoms of autism,” Stuckey says.

“But she had a hard time researching because the search engine that almost everyone uses censors that kind of information. And, well, she was married to the co-founder of Google, who was playing a part in censoring that information, not only inhibiting her research for her daughter, but research for the effects of the COVID-19 vaccine,” she continues.

“And she shared that that caused, understandably, a lot of conflict in her life and still does,” she adds.


Zorp Corp: Decentralizing critical infrastructure



Is humanity doomed to be subjugated to technology and bureaucratic power? As cryptocurrency slips closer to the hands of governments and tech giants, it may seem that the future of tech is already written in stone. However, some people are proposing a freer, more independent future in the digital landscape, refusing to bend the knee to the authoritarian trends in our society.

On “Zero Hour,” Logan Allen, entrepreneur, software developer, and founder and CEO of Zorp Corp, sat down with James Poulos to discuss cryptocurrency, the importance of critical infrastructure, and the future of technology.

Allen talked about the competing “cults” currently vying for power in the age of cryptocurrency. According to him, Silicon Valley represents the worship of technology, while the D.C. “Paper Belt” represents the worship of bureaucracy. Allen, however, said he is placing his bets on a third option, the worship of God: “All you need is a cult that truly believes in God, in competence, and in generational transfer of knowledge. If you have that, you win on a long enough timeline.”

Unfortunately, these other cults recognize the power of cryptocurrency and are seeking to take control of it. Currently, the industry is not as free as it may sound: “Most of the cryptocurrency industry is centered around building a series of virtual scam games, where the game is to play a lottery where you’re guaranteed to lose money if you’re not an insider.”

Zorp Corp, which finds itself at the intersection of critical infrastructure and decentralized currency, recognized a serious problem that needed a solution: “The problem is that we are not training new people to understand the infrastructure that keeps our water clean, that keeps our power plants running, that keeps our trains from derailing, that keeps our supply chains working. We’re not training new people to do these things because the people that are smart are instead being trained to send emails to each other. This is a civilizational killer.”

Logan Allen’s company seeks to provide a solution to the generational skills problem as well as an alternative future of technology to the ones Silicon Valley and the Paper Belt are proposing: “We’re trying to make tools that allow software developers to build things with 100 times less effort and man-hours. We want to make it so that people can build tools that are more secure, more stable, require fewer updates, and require small organizations to maintain and keep running.”

To hear more about Zorp Corp, zero-knowledge proofs, the future of cryptocurrency, and the battle for supremacy between technology and bureaucracy, watch the full episode of “Zero Hour” with James Poulos.