We need to face the dark health care reality behind the AI-fueled cancer treatment stories



Overnight star entrepreneur Paul Conyngham is scaling a company built on his experience combining LLM analysis, established treatments, and private laboratory services to bring tailored therapies, like the one he created for his dog Rosie, to a wider, human audience. The Rosie saga has gone viral and birthed a business, and LLM companies are thrilled. Will any of it work?

Last week’s viral story of Rosie the dog, whose ambitious owner leveraged LLMs (among many other things) to create a custom-tailored mRNA vaccine continues to instruct, maybe edify, but surely divide.

The medical and health care industries are among the most deeply corrupt.

Enter the human version of the AI self-medicating story: GitLab founder and billionaire Sid Sijbrandij was diagnosed several years ago with osteosarcoma. Sid pursued standard treatments, but they weren't enough to halt the cancer’s progress. So Sid countered with a truly impressive, even inspiring, quotient of agency — throwing himself into gathering his own medical team, deploying AI where possible, maximizing every diagnostic test he could find, and open-sourcing his records.

With this approach, Sijbrandij says, he was able to develop a workable treatment protocol. His cancer is in remission, and he’s documenting the process on X and his website.

First the good news

The stories have remarkable parallels. Both involve successful, wealthy tech entrepreneurs. Both involve cancer, and both offer hope. The timing of both is curious.

There are a few very hard — even uncomfortable and offensive — but necessary questions to ask about the media narrative. Boomer elites hope to keep the all-important economic and financial line going comfortably up as they exit into retirement en masse. How much of the self-guided vaccine push, and the rosy vision behind it, is real?

At the end of 2024, Sijbrandij transitioned from CEO to executive chair of GitLab, saying, "I want more time to focus on my cancer treatment and health."

RELATED: A man used Grok to save his dog. Is intellectual property about to die?


Paul Conyngham details in a long essay posted to X exactly what the chatbots did to help, along with the various other cost-prohibitive treatments he leveraged to save his dog. Conyngham admits he was putting in an extra hundred hours of work per week on complex paperwork, time most people obviously don’t have to spare. He does not mention the costs, but in America at this point, few have much cash left after paying the monthly nut. In Paul’s essay, we also read what the chatbots did NOT do:

They did not collect samples. They did not isolate or sequence the DNA. They did not physically manufacture the vaccine. They did not administer it. Many brilliant scientists were required — including Professor Pall Thordarson at the UNSW mRNA Institute who manufactured the vaccine, Professor Rachel Allavena & Dr. José Granados at the University of Queensland who administered it, and Professor Martin Smith who provided expert guidance on the bioinformatics throughout.

Enter the dissenting opinions and scam artists.

Peeling away the hype

Never one to leave credit, cash, or free and mistaken public goodwill lying on the table, Sam Altman weighed in last week. “The coolest meeting I had this week,” he posted, “was with Paul, who used ChatGPT and other LLMs to create an mRNA vaccine protocol to save his dog Rosie. It is amazing story.”

Then, just as the euphoria dissipated and Altman chimed in, a series of critical posts took swings at the general, and admittedly largely dilettante, story.

Patrick Heizer, a working biomedical engineer, called Sid Sijbrandij’s approach and presentation of the evidence in his self-management of osteosarcoma “extremely impressive.” However, he also estimated that Sid spent “tens of millions” to make it happen.

In Heizer’s learned opinion, the era of personalized, AI medical utopia is not here. With respect to the evidence presented in the story of Rosie the dog, Heizer was dismissive — citing a general lack of risk protocol and evidence for what did and didn’t work.

Another biochem Ph.D./founder figure on X, Egan Peltan, echoed Heizer’s doubts regarding the Conyngham/Rosie story. “There’s no evidence his process (beyond FDA approved doggie α-PD-1) had any impact on disease progression. The most parsimonious explanation is a partial response to α-PD-1.”

In our previous examination of this heartwarming tale, I suggested that the era of AI medicine offers glimmers of hope for the future of decentralized — and, potentially, more affordable and effective — medical treatments.

But this sunny future must first punch through a systemically corrupt and increasingly inept system — with medical “errors” among the leading causes of death, pharmaceutical regulatory capture well entrenched, overall care suffering long-term degradation, and institutional scams outstripping even the power of the federal courts to fight.

Will we arrive at a two-tiered privilege scenario where regular Americans are once again supplying their data to be used merely for the benefit of those at the top, who can afford to leverage the panoply of treatment options? We’ve gotten far too accustomed to being left out in the cold, and the medical and health care industries are among the most deeply corrupt.

Was this the secret CIA tech used to rescue downed US pilot from Iran?



Central Intelligence Agency Director John Ratcliffe said the recovery of a downed U.S. airman in Iran was a "no-fail mission" that required technology available nowhere else in the world.

In reference to an F-15E Strike Eagle fighter pilot who was lost in Iran, the CIA boss told reporters on Tuesday that the challenge of finding the pilot was comparable to hunting for a single grain of sand in the desert. But they did it.

'If your heart is beating, we will find you.'

Director Ratcliffe revealed the agency used human and technical assets and also "executed a deception campaign to confuse the Iranians who were desperately hunting for our airmen."

He added, "At the president's direction, we deployed both human assets and exquisite technologies that no other intelligence service in the world possesses."

While Ratcliffe stopped short of describing exactly what those "unique capabilities" were, an insider report by the New York Post claims that the CIA implemented a secret technology known as "Ghost Murmur."

RELATED: Trump announces CEASEFIRE with Iran ahead of deadline

The mountainous yet barren region of the Kohgiluyeh and Boyer-Ahmad province in Iran offered an ideal setting for the technology's first use, one source reportedly said.

The CIA director stated that even though the pilot was hiding and concealed in a mountain crevice, he was still visible to the CIA but "invisible to the enemy."

It was "about as clean an environment as you could ask for" due to low electromagnetic interference, the source went on. With "almost no competing human signatures" and a strong "thermal contrast between a living body and the desert floor" at nighttime, operators enjoyed a second layer of confirmation that they had found their man.

"It's like hearing a voice in a stadium, except the stadium is a thousand square miles of desert," an unnamed source told the Post.

The "Ghost Murmur" tech reportedly uses long-range quantum magnetometry to detect the electromagnetic pulse of a human heartbeat, then separates that signature from background noise to pinpoint its location.
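The report gives no technical detail on how that separation works, but the general principle of recovering a weak periodic signal buried in much stronger noise is standard signal processing: the longer you listen, the more a periodic component concentrates into a single spectral bin while the noise spreads across all of them. Here is a minimal, purely illustrative sketch — every number and method is an assumption for demonstration, not a detail of any classified system:

```python
import numpy as np

# Purely illustrative: recover a weak ~1.2 Hz periodic "heartbeat"
# buried in much stronger broadband noise via spectral peak detection.
rng = np.random.default_rng(0)

fs = 100.0                       # sample rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)    # ten minutes of data
f_heart = 1.2                    # ~72 beats per minute

signal = 0.05 * np.sin(2 * np.pi * f_heart * t)   # faint heartbeat
noise = rng.normal(0.0, 1.0, t.size)              # 20x stronger noise
measured = signal + noise

# A long observation window concentrates the periodic component into
# one spectral bin while the noise energy spreads across all bins.
spectrum = np.abs(np.fft.rfft(measured))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Skip the DC bin, then take the dominant spectral peak.
peak = freqs[1 + np.argmax(spectrum[1:])]
print(f"Detected frequency: {peak:.2f} Hz")  # ~1.20 Hz
```

Note that the recovery only works because of the ten-minute integration window; with a few seconds of data, the noise wins. That tracks with the insider's caveat that the capability "requires significant processing time."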

The source, allegedly briefed on the CIA program, also said that "in the right conditions, if your heart is beating, we will find you."

The source told the Post that the signal of a heartbeat is usually so weak it can only be measured in a hospital-style setting, with sensors pressed to a person's chest. However, advances in the technology — chiefly built around defects engineered into synthetic diamonds — have made detecting such signals at a distance more feasible.

"The capability is not omniscient. It works best in remote, low-clutter environments and requires significant processing time," the insider claimed.

RELATED: NASA astronaut gives very American response to DEI questioning


Secretary of War Pete Hegseth told reporters at the same press conference that the pilot's first message upon finding cover was "God is good."

"We leave no man behind. And that is not luck. It's the result of unmatched training, superior technology, unbreakable warrior ethos, and sheer American grit," Hegseth added.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

Burchett claims alien 'machinery' could destroy us in 'a blink of an eye'



The loose-lipped Republican politician made fresh, wild assertions about classified government meetings, alleged alien programs, and secret programs forcibly breeding aliens with humans.

Rep. Tim Burchett (R-Tenn.) continued his extraterrestrial revelations in a recent interview, just as NASA is circling the moon.

'This is what the guy told me.'

During a discussion with TMZ hosts Harvey Levin and Charles Latibeaudiere, Burchett was asked to elaborate on a closed-door meeting he had in a "secure setting" with an unnamed official.

Burchett quickly told the hosts that the individual "gave addresses, they gave times and dates," and that the people who were in the meeting included those from the "executive branch of previous presidents, not this current president."

After those remarks, TMZ's Levin got more specific, asking directly about reports of "pieces of machinery" and "life" that were allegedly found and did not "seem earthly."

Levin asked Burchett to address the existence of either or both.

"I'd say you'd be safe to say both," the Republican replied.

RELATED: 'I'm not suicidal': Rep. Burchett says US would fall apart if we heard truth about UFOs

Pushing things further, TMZ asked Burchett if it was true that "a member of our government" told him that a piece of alien machinery "interacted in some form with people."

Burchett simply replied, "Yeah, they have ... it's pretty wild."

"I'm not going to lie to you," the 61-year-old continued, claiming he would even take a polygraph test to prove it. "This is what the guy told me."

Burchett then recalled an interaction he had with a "very high-ranking naval official" who allegedly described underwater crafts to him that were the size of "a football field moving at over 200 miles an hour."

Burchett's story placed the meeting at his own office and concluded with the military official pulling him "up close" and saying, "Tim, they're real."

The official then left out the side door, which Burchett said "nobody ever uses," describing it as "kind of weird."

RELATED: Elon Musk announces plans for PERMANENT lunar city


Burchett actually dispelled any idea that Earth is in imminent danger, saying he did not believe a threat was looming even though, in his telling, the unknown forces could destroy humanity if they chose to.

"I don't think we're at danger of this. I mean, if these things exist, as I think they do, they could have destroyed us with a blink of an eye. I just don't see that," the congressman explained.

He then added, "But I do think they have the technology and the capabilities of something that we can't understand or we can't grasp."

The eyebrow-raising interview concluded with Burchett commenting on recent remarks by former Rep. Matt Gaetz (R-Fla.).

Gaetz had told host Benny Johnson about "enforced breeding programs" that involved "captured aliens" who were forced to breed with humans "to create some hybrid race that could engage in intergalactic communication."

"That's a true story," Burchett claimed. The congressman said that he, Gaetz, and Rep. Anna Paulina Luna (R-Fla.) went to an unspecified location in Florida, where the group of politicians was first turned away. That was until Gaetz "made a phone call to somebody at the Pentagon."

"All of a sudden they opened the doors," Burchett recalled.

It was then that a group of pilots allegedly told the politicians about the breeding program.


The top 5 dangers of UBI



Social media is rife with warnings that AI will take everyone’s jobs within the next one to five years. If true, mass unemployment will become a mainstay of modern life, sparking questions as to how civilization as we know it will survive. The big-brained elite think they have a solution through universal basic income — with some optimists like Elon Musk claiming that high basic income is the wave of the future — but this idealistic concept poses several dangers severe enough that they could dismantle America and bring about the end of the world.

1. The death of capitalism

Let’s get this one out of the way first: UBI is a gateway to socialism. In a world where the people earn nothing and everything of value is handed down from on high, the capitalist system that made this country great ceases to exist.

Forced dependence, by any other name, is a form of slavery.

Without a consistent job or a way to earn a steady salary, the people must become dependent on the elite who control the money and dole it out at their discretion. Who exactly is expected to do this honestly and fairly? The government has shown itself to be an unreliable steward, especially on the left as the pursuit of equity ensures some groups — like white, straight men — are intentionally marginalized in favor of minority groups. Private companies don’t seem like good benefactors either, as many of them are currently firing employees in favor of AI, simply to keep more money for themselves.

Even if the UBI rollout magically goes off without a hitch, capitalism stands to face another hurdle. People are less likely to buy products and services when they live on a basic fixed income. In a 2024 study, UBI recipients were most likely to spend the money on necessities, like food and transportation, while withholding their dollars from the more frivolous expenses that drive the American economy.

2. Financial inequity

The left’s disdain for wealthy Americans is well-known, with politicians regularly calling for the rich to “pay their fair share,” because why should you keep your money when the government can have it instead? Right now, the left tries to confiscate as much of the people’s earnings as possible through taxes — like California’s outrageous wealth tax — and if given the chance, they’d gladly redistribute those funds to groups that didn’t earn it.

RELATED: Why doesn't money make you happy?


Universal basic income would install a fast lane to the left’s unofficial wealth redistribution program. Once in power, they would get to decide which groups receive UBI, as well as the amounts that are distributed. In a left-leaning world, that could mean minority groups get more basic income while “privileged” groups receive less, finally giving them the power to push the “equity” they’ve chased since the Biden administration.

3. The end of the American dream

While Elon Musk’s “high basic income” is a novel idea, the reality of a socialist system means that most of us will get a meager allowance while the elite keep the lion’s share for themselves, creating a larger divide between the upper class and the lower class. At the same time, members of the middle class who can’t work, can’t earn money, and can’t get a leg up will fall into the lower-class bracket as well.

Under UBI, the middle class will be hollowed out, permanently relegating the majority of Americans to poverty. Even worse, this new system will ensure that no one can escape the lower class simply because they don’t have a way to earn more money than the elites are willing to give. Job scarcity and financial dependence will keep the poor in check, and the American dream will cease to exist.

4. Freedom isn’t free

Our forefathers promised the people life, liberty, and the pursuit of happiness. They made a social contract, one that still stands to this day. But if the jobs go away, UBI is instated, and the people must depend on someone else for their next paycheck, the Declaration of Independence loses its power.

Simply put, the people can’t be free if we’re forced to depend on politicians, benefactors, or elitists to provide our way of life. Forced dependence, by any other name, is a form of slavery. Universal basic income gives the elite the power to take our rights and render our founding documents null and void.

5. One step closer to the end times

Last but not least, UBI is one of the final levers required to spread the mark of the beast, the precursor to the end times.

In the New International Version of the Bible, Revelation 13:16-17 says: “It also forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads, so that they could not buy or sell unless they had the mark, which is the name of the beast or the number of its name.”

This doesn’t just mean you can’t buy or sell products unless someone says so. It also means you would need the mark to receive UBI payments.

To put it bluntly, it’s easier to force the people to sell their souls when their means to work, earn money, and be free are all taken away. Even if UBI isn’t the mark itself, it’s a Trojan horse that will usher in top-down control that can be exploited by the most evil forces our world has ever known. It’s exactly what the devil wants and needs before the book of Revelation comes to pass.

Is universal basic income inevitable?

In a word: no, not yet. The dangers above can only materialize if two things about the ongoing AI race are true:

  • AI will be effective enough to fully replace human jobs, a feat that’s proving difficult with continuous hallucinations, mistakes, and more.
  • AI will have the power to produce endless mountains of cash. There can only be enough basic income for everybody — even in small amounts — if AI can print infinite money.

Even assuming both are true, more roadblocks stand in the way of an AI-controlled economy.

A crippled economy

Businesses are currently run by people who buy products and services from other human-led companies. Some businesses sell products to each other (B2B), while other businesses sell straight to consumers (B2C). This cycle is the beating heart of capitalism.

If companies are suddenly all run by the same AI platforms, they’ll no longer need to buy digital services from each other to get work done. They can simply use AI to build custom versions for their own companies at little or no extra cost, thus cutting out third-party vendors and partners, which will ultimately make some companies obsolete. In fact, this loophole has the power to take down the entire digital B2B market.

On the commerce side, consumers face a different problem. They can’t use AI to manufacture physical products for themselves — like iPhones, PCs, and game consoles — but under the universal basic income strategy, they are more likely to hold their money for necessary purchases than to spend it like they do today. This monumental shift in spending habits could also cripple companies and the market, or at the very least, it could stifle year-over-year growth.

In short, universal basic income, ushered in by the revolution of AI, would be a huge disaster for American workers, the American economy, and the American dream. All of it is in jeopardy unless the government passes regulations that prevent mass job loss. Luckily, after kneecapping the states’ ability to regulate AI via executive order, the federal government is finally stepping up by introducing the National AI Legislative Framework and the Trump America AI Act. More on that soon.

NASA's Victor Glover shares gospel as he circles dark side of the moon: 'Love God with all that you are'



NASA's Artemis II pilot found time to speak about Christ and Christianity before circumnavigating the moon on Monday.

Before Victor Glover and his fellow crew members traversed the dark side of the moon, losing radio signal as they went out of Earth's line of sight, Glover said he wanted to remind Earth-dwellers about one of the "most important mysteries" in the world.

'We love you from the moon.'

In a message to NASA's mission control, with the radio transmission broadcast live, Glover revealed he was talking about "love."

"Christ said in response to 'what was the greatest command' that it was to love God with all that you are. And he, also being a great teacher, said the second is equal to it, and that is to love your neighbor as yourself," Glover stated.

He concluded the transmission, marked at 6:44 p.m. ET, by saying, "And so as we prepare to go out of radio communication, we're still going to feel your love from Earth. And to all of you down there on Earth and around Earth: We love you from the moon."

After a pause, mission control responded: "Houston copies. We'll see you on the other side."

"We will see you on the other side," Glover affirmed.

RELATED: NASA astronaut gives very American response to DEI questioning

According to NASA's log, the crew had just witnessed an "Earthset" three minutes earlier, the moment Earth drops below the lunar horizon.

This marked the beginning of about 40 minutes of darkness as the astronauts traveled behind the moon, which blocks the radio signals from NASA's network.

The Artemis II crew reached 252,756 miles beyond our planet 18 minutes later, at 7:02 p.m., setting a new record for the farthest distance humans have traveled from Earth.

By 8:35 p.m., the crew entered a solar eclipse that lasted about an hour, before beginning their trip back home.

RELATED: UConn star Tarris Reed praises Jesus ahead of national championship: 'He changed everything about me'

Glover has been full of memorable and insightful quotes throughout the mission, including the remarks he made before Easter. Glover spoke on video alongside his crew members about "the beauty of creation" over the weekend, saying that from his perspective, he could see Earth as one whole, and it reminded him of Scripture.

"When I read the Bible and I look at all of the amazing things that were done for us who were created ... you have this amazing place — this spaceship. You guys are talking to us because we're in a spaceship really far from Earth. But you're on a spaceship called Earth that was created to give us a place to live in the universe — in the cosmos," Glover explained.

Astonishingly, without having prepared remarks, Glover delivered an extemporaneous motivational speech to all those listening.

"Maybe the distance we are from you makes you think what we're doing is special, but we're the same distance from you. And I'm trying to tell you — just trust me: You are special. In all of this emptiness — this is a whole bunch of nothing, this thing we call the universe — you have this oasis, this beautiful place that we get to exist together."


Sam Altman described as 'sociopath' by board member in brutal insider report: 'He's unconstrained by truth'



OpenAI CEO Sam Altman was dragged through the mud in a new in-depth report that features former colleagues and current board members referring to him as a sociopath and a liar.

Altman, 40, has yet to respond to claims made in a recent report, some of which were uncovered in secret memos to OpenAI's board members.

'He is a sociopath. He would do anything.'

According to the New Yorker, OpenAI's chief scientist, Ilya Sutskever, sent the memos to three other board members in 2023. One of the memos about Altman began with a list titled "Sam exhibits a consistent pattern of." The first item on the list was "lying."

The memos also alleged that Altman misrepresented facts to executives and board members while deceiving them about safety protocols. Unfortunately for Altman, the claims did not stop there.

"He's unconstrained by truth," a board member told the New Yorker. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

The outlet said that the unnamed board member was not the only person to describe Altman as "sociopathic" without being prompted. Not long before his 2013 suicide, according to the New Yorker, coder Aaron Swartz warned at least one friend about Altman, whom Swartz had known from their time together at Y Combinator. His warning: "You need to understand that Sam can never be trusted. He is a sociopath. He would do anything."

Sutskever additionally implied that he did not think Altman should have power over others, saying, "I don't think Sam is the guy who should have his finger on the button."

Others described him as more ambitious than anything else.

RELATED: Sam Altman tells BlackRock he wants AI on a meter 'like electricity or water'

The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed "Ilya Memos," and Dario Amodei's 200+ pages of private notes. It's the most detailed account yet of the pattern of behavior that led to Sam's firing and… pic.twitter.com/vX5xIp5DnI
— Ryan (@ohryansbelt) April 6, 2026

Former OpenAI board member Sue Yoon said Altman was "not this Machiavellian villain" but was able to convince himself of his own sales pitches.

"He's too caught up in his own self-belief," she reportedly said. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."

Other anonymous colleagues cited by the New Yorker said that Sutskever and similar detractors were simply aspiring to take Altman's throne. Still, even many neutral comments did not help Altman's portrayal in the report.

"He's unbelievably persuasive. Like, Jedi mind tricks," a tech executive colleague of Altman's reportedly said. "He's just next-level."

At the same time, OpenAI is allegedly in the midst of unleashing superintelligence that Altman himself says will be so disruptive that it will require a new social contract.

RELATED: Sexting with chatbots is too far, OpenAI decides


Altman told Axios that there would be widespread job loss and a threat of cyberattacks coupled with social unrest.

"I suspect in the next year," he said, "we will see significant threats we have to mitigate from cyber."

Altman proposed a new deal with citizens that includes a public wealth fund, taxes on "automated labor," a 32-hour workweek, and the "right to AI."

That confirms previous reports that Altman wanted to put AI on a meter like electricity or water, to both democratize its usage and limit the possibility of overburdening the electrical grid.

OpenAI did not respond to Return's request for comment about the claims made about Altman and who they were coming from.


Elon Musk's Terafab is coming, and you're not ready



The announcement of Terafab was made at a decommissioned power plant, reflecting Elon Musk’s understanding of stagecraft: The ruined infrastructure of one era makes a convenient altar for the next. On March 21 and 22, 2026, at the Seaholm Power Plant in Austin, Musk presented Terafab. It is either the most ambitious semiconductor manufacturing project in history or a very expensive project that may not come to be.

Terafab is a plan to build vertically integrated chip-manufacturing capacity in Austin, combining under one roof the design, fabrication, packaging, and testing of advanced semiconductors. Tesla, SpaceX, and xAI are the collaborating entities. The announced investment figure is $20 billion. The stated long-run target is one terawatt of compute capacity per year, a number that converts the language of performance into the language of power.

Terafab is a cultural event as much as a technical announcement.

Measuring compute in watts means that the limiting factor is energy throughput. The International Energy Agency has described data centers as a fast-growing fraction of global electricity demand; by 2030, in its base case, that demand could roughly double.

The technical core of Terafab is its most defensible part. The pitch is about iteration speed: If you can design a chip, fabricate it, package it, test it, and revise the mask, all inside one building, without shipping components between specialized facilities in different countries, you can improve faster than anyone who does not. In conventional semiconductor manufacturing, these functions are geographically and organizationally scattered. A mask set travels; a wafer ships; a packaged part crosses an ocean. Each journey is a delay, and delay is the enemy of the feedback loop. Terafab is a wager that learning velocity beats static node leadership.

A factory within a factory

Advanced fabs are among the most expensive and complex structures human beings have ever built, typically costing $10 billion and taking several years for a single facility, and they depend on supply chains for equipment that cannot be wished into existence by ambition or capital alone. Extreme ultraviolet lithography machines, to name one critical dependency, cost hundreds of millions of dollars apiece and are manufactured by a single Dutch company. The closed loop is a compelling engineering idea. The project will still have to contend with equipment lead times, utility provisioning, the slow climb up yield learning curves, and the peculiar physics of building things in the real world.

There is a second Terafab nested inside the first. The announcement includes chips, named D3, designed for space environments, paired with a vision of solar-powered orbital compute satellites, initially around 100 kilowatts and scaling toward the megawatt range. Terrestrial compute is constrained by land, power, cooling, and local political opposition to enormous data centers. Space has sunlight and no neighbors to complain about the noise.

RELATED: Bernie Sanders and AOC propose law to shut down future AI data centers

Photo (left): Andrew Harnik/Getty Images; Photo (right): Alex Kraus/Bloomberg/Getty Images

Of course, space also has no air. In vacuum, heat cannot leave a system by convection, only by radiation, which requires very large radiator surfaces at high power levels. The International Space Station’s thermal control system requires radiators the size of tennis courts to reject the heat generated by its systems. Radiation poses its own complications: The energetic particles of the space environment induce bit flips and long-term degradation in electronics not specifically hardened against them. The orbital vision is not impossible. It is simply a different problem than the earthbound one, even when presented in the same breath, as though the same momentum carries the project from Austin to low Earth orbit without friction.
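The scale of the radiator problem can be checked with the Stefan-Boltzmann law, which governs how much heat a surface can shed by radiation alone. A rough back-of-the-envelope sketch follows; the emissivity, panel temperature, and two-sided-panel assumptions are illustrative choices of mine, not figures from the announcement, and absorbed sunlight is ignored:

```python
# Back-of-the-envelope: radiator area needed to reject waste heat in
# vacuum, where P = emissivity * sigma * A * T^4 per radiating face.
# Assumed values (illustrative, not from the Terafab announcement):
# emissivity 0.9, panel temperature 300 K, flat panel radiating from
# both faces, no absorbed sunlight.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9
PANEL_TEMP_K = 300.0
FACES = 2                 # a flat panel radiates from both sides

def radiator_area_m2(power_watts: float) -> float:
    """Panel area required to radiate away `power_watts` of waste heat."""
    flux = FACES * EMISSIVITY * SIGMA * PANEL_TEMP_K ** 4  # W per m^2
    return power_watts / flux

for power in (100e3, 1e6):   # the 100 kW and megawatt-class figures
    area = radiator_area_m2(power)
    print(f"{power / 1e3:>6.0f} kW -> {area:>7.1f} m^2 of panel")
```

Under those assumptions, a 100-kilowatt satellite needs on the order of 120 square meters of radiator, and a megawatt-class one over 1,200 square meters, several doubles tennis courts' worth of panel per spacecraft. That is why radiator area, not compute, tends to be the binding constraint on orbital data centers.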

The future needs power

Terafab’s “everything under one roof” approach has an ancestor in the great vertical integration projects of industrial capitalism, such as Ford’s River Rouge complex, which turned raw materials into finished automobiles inside a single, vast geography, its own power plant humming at the center.

The global semiconductor supply chain is highly concentrated: Roughly 92% of the world’s most advanced chip manufacturing capacity sits in Taiwan. To build end-to-end domestic capability is simultaneously a resilience project and a power project, a bid to internalize a strategic resource inside one corporate constellation rather than depend on the broader market of specialized suppliers.

Terafab is a cultural event as much as a technical announcement, and its cultural work is to naturalize a particular diagnosis: that intelligence is infrastructure, infrastructure is energy, and energy is the horizon of meaning for civilizational progress. Whether or not the fab gets built on schedule, whether or not the orbital satellites ever achieve megawatt-scale compute, the frame has been installed. The factory is where the future lives, and the future needs power.

Elon Musk announces plans for PERMANENT lunar city



Elon Musk has virtually mastered the space race. SpaceX regularly sends up Falcon 9 rockets, autonomously lands boosters, and, with Starship, has embarked on the most ambitious space exploration program mankind has ever dreamed up. Now Musk wants to launch a whole new kind of object into orbit — a data center meant to power xAI’s growing portfolio of products and services.

And it all starts with a momentous new lunar mission.

To the moon

In early February, two of Elon Musk’s most ambitious companies — space pioneering venture SpaceX and generative AI startup xAI — merged into one organization. With a unified brand, Musk claims that the move will “improve speed of execution” of the monumental new off-world undertaking.

We’ll finally be rid of the resource-hogging data centers that hamper our infrastructure here on Earth.

The goal? Establish Moonbase Alpha, a permanent lunar city planted on the surface of the moon. The base will serve as a manufacturing hub and a launch site for spacefaring data centers that will power Musk’s growing AI endeavors, including xAI, Grok, Imagine, Optimus robots, and more.

It sounds like something out of a science-fiction novel, but if Musk has his way, Moonbase Alpha will be up and running by approximately 2030.

While this project would mark the first time any human has taken up residence on the moon, it isn’t the first time SpaceX has put permanent hardware into orbit. The company currently manages a fleet of roughly 9,600 Starlink satellites orbiting the Earth, beaming wireless internet to regions all around the globe. Presumably, the new space data centers would follow the same or similar paths.

A data center, however, is a little more complicated than a wireless internet router in space. Data centers consist of thousands of GPUs, TPUs, cooling systems, and other networking components. They must have the bandwidth to process, store, and utilize large stores of data. For LLMs in particular, data centers also have to be able to train and maintain new models as AI evolves.

Clearly, there are some pros and cons to running an AI data center in space. Let’s get into them.

Pros of space-based data centers

  • Space: Data centers take up a lot of acreage. The largest data center on earth is 800,000 square feet, or approximately 13.9 football fields. That’s massive! Space, however, has more space. There’s plenty of room for expansion without invoking eminent domain, chopping down forests, or snatching up vacant plots of land. AI is free to grow without encroaching on the general public.
  • Power: Data centers also require a ton of energy. Collectively, the nation’s data centers draw an estimated 8,190 MW of power, assuming roughly 70 MW per facility; for comparison, the average American home uses about 10.8 MWh of electricity per year. While this demand strains Earth’s power grid, orbital data centers would have a direct line of solar power straight from the sun, free from cloud cover, pollution, or severe weather events. It’s just straight solar power all the time, a renewable resource without the limitations of a living planet.
  • Maintenance: Data centers have plenty of moving parts and heavy energy demands that generate a lot of heat. On Earth, it takes specialized water cooling systems to keep temperatures in check; in orbit, the trade-offs change. A vacuum cannot carry heat away by convection, so waste heat must be radiated from large panels, but there is also no dust, humidity, or corrosion, and microgravity removes some mechanical stress. Together, these qualities of space may reduce wear and tear on data centers and allow them to run longer with fewer repairs.
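The figures in the Power bullet above can be sanity-checked with quick arithmetic. One caveat on units: megawatts measure instantaneous power, while household electricity use is normally stated in megawatt-hours per year, so the comparison needs a conversion. The 117-facility count below is simply the quotient implied by the two cited estimates.

```python
# Back-of-the-envelope comparison of data-center vs. household power,
# using the rough estimates cited above (illustrative, not measured).

mw_per_center = 70            # assumed continuous draw per large data center
centers = 117                 # implied count: 8,190 MW total / 70 MW each
total_mw = centers * mw_per_center

home_mwh_per_year = 10.8      # typical US household electricity use per year
hours_per_year = 8760
home_avg_mw = home_mwh_per_year / hours_per_year  # average household draw in MW

print(total_mw)                       # 8190 MW for the whole fleet
print(round(total_mw / home_avg_mw))  # equivalent number of average homes
```

On these assumptions, the fleet draws as much power, on average, as roughly 6.6 million homes.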

RELATED: NASA astronaut gives very American response to DEI questioning


Cons of space-based data centers

  • Maintenance: While orbital space centers will likely require less maintenance, when something does break, it could be harder to send a repairman — either from Moonbase Alpha or Earth itself — for a quick fix. Alternatively, perhaps Elon will have a team live on the data center itself, but even then, having a specialized crew on board at all times would be costly.
  • Rapid unscheduled disassembly: More than a few times, a Starlink satellite has veered off course enough to tumble toward Earth and burn up in the atmosphere. Now imagine a multibillion-dollar data center the size of Rhode Island careening into the Atlantic Ocean. Not only could unpredictable flight path failures cause an orbital data center to burn up in the sky, such an event could also turn one of those centers into a meteor that strikes Earth on the scale of "Deep Impact."
  • Space junk: Space is so big and vast that it’s hard to believe it’s getting crowded, but that’s exactly what’s going on above the atmosphere. Low-orbit space is filling up so fast with satellites and space junk that it has created collision risks for future rocket launches. Adding massive data centers to the mix would only make space missions more complicated and dangerous.

A moon-shot mission for a new age

After weighing the risks against the benefits, Elon Musk maintains that space is an essential piece of AI development: “Current advances in AI are dependent on large terrestrial data centers, which require immense amounts of power and cooling," he explained in a recent post on the SpaceX website announcing the merger. "Global electricity demand for AI simply cannot be met with terrestrial solutions, even in the near term, without imposing hardship on communities and the environment. In the long term, space-based AI is obviously the only way to scale. To harness even a millionth of our Sun’s energy would require over a million times more energy than our civilization currently uses!”

He’s right. The only way to sustain AI in modern society is to move it to a place where it can’t siphon away our vital resources, namely power, water, and land. It needs to operate in its own sustainable vacuum. What could be better than space?

Musk isn’t alone, either. Google is also putting data centers into orbit. According to Google CEO Sundar Pichai, "We are taking our first step in '27. We'll send tiny, tiny racks of machines, and have them in satellites, test them out, and then start scaling from there."

And just like that, the AI age of the space race has begun. As for who will win, mankind is the biggest beneficiary — not because renewable AI will magically make everything better, but because we’ll finally be rid of the resource-hogging data centers that hamper our infrastructure here on Earth while Big Tech sets its sights on moon-shot missions in the stars.

Sexting with chatbots is too far, OpenAI decides



Just days after announcing it would be shutting down its artificial intelligence video generation platform, OpenAI put the brakes on another project.

While the terminology remains vague, it seems Sam Altman's company could be drawing a line as to what it deems "adult" content.

'We still believe in the principle of treating adults like adults.'

According to the Financial Times, those familiar with the adult-themed project say OpenAI has "indefinitely" shelved its plans to release an erotic chatbot. OpenAI confirmed that before moving forward with such a product, it wanted to be able to fall back on long-term research about the effects AI sex chats have on users and any emotional attachments that might form.

OpenAI said there is no "empirical evidence" available at this time.

RELATED: Sam Altman tells BlackRock he wants AI on a meter 'like electricity or water'


Last year, Altman announced that ChatGPT would begin allowing more mature content, including erotica, in order to "treat adult users like adults."

But in early March, OpenAI made its first announcement that "adult mode" was being delayed. That decision was made in part to focus on more pertinent tasks. "We're pushing out the launch of adult mode so we can focus on work that is a higher priority for more users right now," a spokesperson told reporter Alex Heath, "including gains in intelligence, personality improvements, personalization, and making the experience more proactive."

"We still believe in the principle of treating adults like adults, but getting the experience right will take more time," the company stated.

Inside sources have since told the Financial Times that the company will refocus on core products after staff and investors expressed concern about sexualized AI content. The upside of the endeavor was allegedly too small for OpenAI.

RELATED: Sam Altman says NSA can't use OpenAI — then tells staff they don't have a say in military actions


The revelations follow hot on the heels of other strategy-shifting announcements. The tech giant has recently tightened up its offerings, shuttering its generative AI video service, Sora.

"What you made with Sora mattered, and we know this news is disappointing," the company wrote on X. "We'll share more soon, including timelines for the app and API and details on preserving your work."

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

A new study reveals why chatbots can drive even smart, sane people crazy



Perhaps the most interesting slice of drama swirling in what we’re told is the imminent AI remake of human life pertains to the persistent theme of its engineers tinkering with the “balance of truth.”

A recently released academic study from the MIT Department of Brain & Cognitive Sciences — entitled “Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians” — presents yet another example. It’s a real treat for those who have observed this struggle among the engineers to “align” their silicon machines. From the abstract we read: “‘AI psychosis’ or ‘delusional spiraling’ is an emerging phenomenon where AI chatbot users find themselves dangerously confident in outlandish beliefs after extended chatbot conversations.”

The question posed by the MIT study is: Can it be any other way?

The study, which arrives in the wake of others cataloging LLM pitfalls and failures, takes two approaches: testing an ideally rational, or “Bayesian,” human interlocutor, and simply warning the human user that the LLM he or she is engaging with is sycophantic — unreliable and prone to agreement, because the user’s engagement is its reward system.
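The trap the study describes can be sketched as a toy Bayesian update. The numbers below are illustrative assumptions of mine, not the MIT authors' model: a user who believes the chatbot is an informative critic keeps counting its near-automatic agreement as evidence, so even a perfectly rational update drives confidence in a fringe idea toward certainty.

```python
# Toy model of "delusional spiraling": an ideal Bayesian user who
# mistakes a sycophantic chatbot for an informative critic.
# All parameters are illustrative assumptions, not the study's.

def posterior_after_agreements(prior, p_agree_if_true, p_agree_if_false, n):
    """User's belief after n chatbot agreements, treating each
    agreement as independent evidence that the idea is true."""
    odds = prior / (1 - prior)
    likelihood_ratio = p_agree_if_true / p_agree_if_false
    odds *= likelihood_ratio**n
    return odds / (1 + odds)

prior = 0.05                 # user starts skeptical of his own fringe idea
p_true, p_false = 0.8, 0.3   # how informative the user *thinks* the bot is

# A sycophantic bot agrees nearly every turn regardless of truth,
# so long conversations pile up "agreements" and the posterior climbs.
for n in (0, 5, 10, 20):
    print(n, round(posterior_after_agreements(prior, p_true, p_false, n), 4))
```

Note where the failure lives: nothing about the update itself is irrational; the error sits entirely in the assumed likelihoods, which may be why, in the study's tests, merely warning users about sycophancy did little.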

Slippery slope

Both tests produced unfortunate outcomes. “Even an idealized Bayes-rational user,” according to the MIT study, “is vulnerable to delusional spiraling,” caused at least in part by AI sycophancy; “this effect persists in the face of two candidate mitigations: preventing chatbots from hallucinating false claims, and informing users of the possibility of model sycophancy.”

Too much truth, in other words, and suddenly chatbot users are launched into the psycho-sphere — researching red heifers, Jekyll Island, the feasibility of the 1960s moon landing, and innumerable other topics that tend to open up yet more curious questions and tend to incline investigators away from participating in aspirational lifestyles, accruing money, or voting for one of the two “major” parties.

Too little truth, however, and innovation, curiosity, and even mere engagement are restricted. In our painful submersion into the deep AI waters where society has no helmsman, the engineering of code away from truth appears to cause genuine psychosis.

To put it simply: The engagement with these machines, however many hundreds of billions are dumped into their creation, can easily lead us humans into confusion and suffering.

RELATED: 10 years ago, hundreds of millions played a new video game. It was secretly built to harvest their data.


The question posed by the MIT study is: Can it be any other way?

The trust gap

The answer puts the character of Western civilization at stake. The notion of engineering our way to truth would be surprising to all philosophical and theological thinkers since at least Plato. And for some time, the mental health issues around AI usage have been obvious not only to some philosophers but to other tech outsiders such as doctors, artists, and laymen of all sorts. Here’s professor of neuroscience Michael Halassa on his Substack last year: “The pattern is becoming clearer, and it's troubling. People spend hours, often late into the night, in dialogue with a system that never challenges them, never disagrees, never says 'let me think about that differently.'"

From the engineering, coding, AI builder point of view, part of the problem isn’t just steering toward truth; it’s controlling outcomes. It’s a litigious world. People are already very unstable — not just in America, but maybe especially in America, where we’re seeing our economy, infrastructure, and social fabric tear asunder as elites insist we need not worry because the line of progress still goes up.

No, it’s not merely litigation, nor is it purely control that the makers of AI are so concerned with — they’re set on seeing a very particular set of outcomes, which necessarily adheres to their specific worldview. It’s a largely secular one, meant to usher in a global and post-traditional economy, privileging a hollow, New Age-y spirituality. The pressure to trust them is immense — not just when they tell us our civilization must and will be refounded and reworked by AI, but when they tell us that just happens to mean they’re the only ones qualified to be in charge.

Black mirror

It's all a bit suspicious given that, in a deep sense, we have all been here long before. Another powerful and mysterious device that seems characteristically to show us too much and too little of the truth about ourselves is the mirror. Put a hall of mirrors together, and the result is all too familiar: confusion and delusion. Historically, experts at manipulating shifting and unreliable reflections of ourselves have been ascribed near-magical powers. Not until recently has the promise of building the ultimate mirror been hyped as building a whole new god.

Recursion, the feedback loop in which a model’s agreeable answers shape the user’s next prompt, is the culprit. Much of the “spiral” in AI delusion, researchers say, comes down to the recursive agreeability encoded into LLM answers. Last year, before the scientific confirmation, the New York Times published a story on the delusional spiral effect, relating the case of a man who spent more than 300 hours chatting with ChatGPT about his own mathematical insights. The LLM had him convinced the insights were groundbreaking. They weren’t. The man wound up fracturing his life and seeking psychiatric care.

Juxtapose this with French X poster Denis Tremblay, who likewise spent a great deal of time discussing some “completely original math concepts” with a couple of LLMs. He did so not to confirm his inventive mathematics but to determine “with critical distance” that the machine would work toward truth with rigor concomitant to that of its human interlocutor. He’s still on X, posting valuable, balanced ideas in imperfect English — his third or fourth language — not suicidal, and not in any need of psychiatric help.