Tech elites warn ‘reality itself’ may not survive the AI revolution
When Elon Musk warns that money may soon lose its meaning and Dario Amodei speaks of an AI-driven class war, you might think the media would take notice. These aren’t fringe voices. Musk ranks among the world’s most recognizable tech leaders, and Amodei is the CEO of Anthropic, a leading artificial intelligence company whose advanced models compete with OpenAI’s.
Together, they are two of the most influential figures shaping the AI revolution. And they’re warning that artificial intelligence will redefine everything — from work and value to meaning and even our grasp of reality.
But the public isn’t listening. Worse, many hear the warnings and choose to ignore them.
Warnings from inside the machine
At the 2025 Davos conference, hosted by the World Economic Forum, Amodei made a prediction that should have dominated headlines. Within a few years, he said, AI systems will outperform nearly all humans at almost every task — and eventually surpass us in everything.
“When that happens,” Amodei said, “we will need to have a conversation about how we organize our economy. How do humans find meaning?”
The pace of change is alarming, but the scale may be even more so. Amodei warns that if 30% of human labor becomes fully automated, it could ignite a class war between the displaced and the privileged. Entire segments of the population could become economically “useless” in a system no longer designed for them.
Elon Musk, never one to shy away from bold predictions, recently said that AI-powered humanoid robots will eliminate all labor scarcity. “You can produce any product, provide any service. There’s really no limit to the economy at that point,” Musk said.
“Will money even be meaningful?” Musk mused. “I don’t know. It might not be.”
Old assumptions collapse
These tech leaders are not warning about some minor disruption. They’re predicting the collapse of the core systems that shape human life: labor, value, currency, and purpose. And they’re not alone.
Former Google CEO Eric Schmidt has warned that AI could reshape personal identity, especially if children begin forming bonds with AI companions. Filmmaker James Cameron says reality already feels more frightening than “The Terminator” because AI now powers corporate systems that track our data, beliefs, and movements. OpenAI CEO Sam Altman has raised alarms about large language models manipulating public opinion, setting trends, and shaping discourse without our awareness.
Geoffrey Hinton — one of the “Godfathers of AI” and a former Google executive — resigned in 2023 to speak more freely about the dangers of the technology he helped create. He warned that AI may soon outsmart humans, spread misinformation on a massive scale, and even threaten humanity’s survival. “It’s hard to see how you can prevent the bad actors from using [AI] for bad things,” he said.
These aren’t fringe voices. These are the people building the systems that will define the next century. And they’re warning us — loudly.
We must start the conversation
Despite repeated warnings, most politicians, media outlets, and the public remain disturbingly indifferent. As machines advance to outperform humans intellectually and physically, much of the attention remains fixed on AI-generated art and customer service chatbots — not the profound societal upheaval industry leaders say is coming.
The recklessness lies not only in developing this technology, but in ignoring the very people building it when they warn that it could upend society and redefine the human experience.
This moment calls for more than fascination or fear. It requires a collective awakening and urgent debate. How should society prepare for a future in which AI systems replace vast segments of the workforce? What happens when the economy deems millions of people economically “useless”? And how do we prevent unelected technocrats from seizing the power to decide those outcomes?
The path forward provides no room for neutrality. Either we begin serious conversations about protecting liberty and individual autonomy in an AI-driven world, or we allow a small group of global elites to shape the future for us.
The creators of AI are sounding the alarm. We’d better start listening.
Google founder's ex-wife speaks out about evils of ‘tech mafia’
The Big Tech elites have been laying “groundwork” to enable the policies of the Great Reset, and no one knows it better than Silicon Valley attorney, entrepreneur, RFK Jr. running mate, and ex-wife of Google co-founder Sergey Brin — Nicole Shanahan.
“Their money especially was being conscripted to set the groundwork for the Great Reset, specifically through a network of NGO advisors, relationship with Hollywood, relationship with Davos, and their own companies,” Shanahan told Allie Beth Stuckey in a recent interview on “Relatable.”
“If you look at who’s on these boards, who hangs out with each other, how the culture of tech wealth works,” Shanahan continued, “it’s a really small group of people, and it’s a really small group of people making these decisions.”
Glenn Beck of “The Glenn Beck Program” is well aware of plans for the Great Reset, but he’s shocked that Shanahan is warning about them.
“It is amazing to go from five years ago, everybody saying, ‘That’s crazy, that’s not happening,’ to the former wife of the head of Google coming out and saying, ‘Yeah, this was all orchestrated, we didn’t even know what we were into as wives of the Silicon Valley mafia wives,’ as she calls them,” Glenn tells Stuckey.
“She said that she really saw the reality of evil, the reality of hell, when she was deep into politics, and that kind of started to shift her perspective on, ‘Wait, who are the bad guys here? What’s going on? All of this evil is being done under the guise of really good intentions, especially in Silicon Valley,’” Stuckey explains.
And when Shanahan’s daughter was diagnosed with autism, she started attempting to figure out what could have caused it.
“As she was digging into the research, she found some things that kind of have been dubbed as right-wing conspiracy theories about different environmental factors, even pharmaceutical factors that could possibly cause some symptoms of autism,” Stuckey says.
“But she had a hard time researching because the search engine that almost everyone uses censors that kind of information. And, well, she was married to the co-founder of Google, who was playing a part in censoring that information, not only inhibiting her research for her daughter, but research for the effects of the COVID-19 vaccine,” she continues.
“And she shared that that caused, understandably, a lot of conflict in her life and still does,” she adds.
Utah requires app stores to verify ages in trailblazing child safety law
Utah Governor Spencer Cox (R) signed new legislation on Wednesday that requires mobile app stores, including Apple and Google, to implement a user age verification process to protect children online.
The law, sponsored by Sen. Todd Weiler (R) and Rep. James Dunnigan (R), passed earlier this month. The bill takes effect on May 7.
Instead of age checks at app download, Utah's law mandates that app stores verify ages up front. The App Store Accountability Act, a first-of-its-kind law, requires providers to confirm users' age categories, secure parental consent for minors, and share that data with app developers. A minor may download or purchase an app or make in-app purchases only with consent from a linked parental account.
The act prohibits app stores from enforcing contracts against minors who did not receive parental consent or from "misrepresenting parental content disclosures."
Utah's Division of Consumer Protection has been tasked with establishing age verification standards.
Additionally, Utah's new legislation "creates a private right of action for parents of harmed minors," "provides a safe harbor for compliant developers," and "includes a severability clause."
The law permits parents to sue app providers that violate the act, claiming $1,000 per violation or actual damages.
Meta, X, and Snap Inc. issued a joint statement praising Utah's new legislation.
We applaud Governor Cox and the State of Utah for being the first in the nation to empower parents and users with greater control over teen app downloads, and urge other states to consider this groundbreaking approach. Parents want a one-stop-shop to oversee and approve the many apps their teens want to download, and Utah has led the way in centralizing it within a device's app store. This approach spares users from repeatedly submitting personal information to countless individual apps and online services. We are committed to safeguarding parents and teens, and look forward to seeing more states adopt this model.
A February report from the Wall Street Journal found that at least eight other states — Alabama, Alaska, Hawaii, Kentucky, New Mexico, South Carolina, South Dakota, and West Virginia — were considering similar legislation.
Terry Schilling, the president of the American Principles Project, told Blaze News that Utah's new bill is "a very strong law" and a "good first step."
Schilling outlined the major threats facing children online.
"You want to protect children anywhere where people can get access to them," Schilling explained. "The apps are the first main gateway to how you protect children. So that's why I think it's a really great first step."
"Then next, we've got to start protecting kids from porn online directly by forcing the porn companies to do age verification," he continued, noting that 20 states have already implemented this requirement. "You've got to start protecting children and doing age verification for social media accounts in general."
Schilling told Blaze News that he anticipates that other states will soon enforce legislation similar to Utah's to protect children online.
"There is a huge movement of people in America that want to protect kids online, and it's now being translated to the political class — to the politicians and their staff," he said. "That is so critical and important to actually getting things done. You can't just change the culture or people's hearts and minds; you've actually got to legislate it."
Apple and Google did not respond to a request for comment from Business Insider.
Both have previously expressed privacy concerns regarding age verification laws for app stores.
Last month, Apple stated that “the right place to address the dangers of age-restricted content online is the limited set of websites and apps that host that kind of content.”
On March 12, Google’s director of public policy, Kareem Ghanem, stated, “These proposals introduce new risks to the privacy of minors, without actually addressing the harms that are inspiring lawmakers to act. Google is proposing a more comprehensive legislative framework that shares responsibility between app stores and developers and protects children’s privacy and the decision rights of parents.”
Google unveils new AI models to control robots, but the company is not telling the whole truth
Google announced two artificial intelligence models to help control robots and have them perform specific tasks like categorizing and organizing.
Google described Gemini Robotics as an advanced vision-language-action model built on Gemini 2.0, the company's AI chatbot and language model. The company touted physical actions as a new output modality for the purpose of controlling robots.
Gemini Robotics-ER, with "ER" meaning embodied reasoning, as Google explained in a press release, was developed for advanced spatial understanding and to enable roboticists to run their own programs.
The announcement touted the robots as being able to perform a "wider range of real-world tasks" with both clamp-like robot arms and humanoid-type arms.
"To be useful and helpful to people, AI models for robotics need three principal qualities: they have to be general, meaning they’re able to adapt to different situations; they have to be interactive, meaning they can understand and respond quickly to instructions or changes in their environment," Google wrote.
The company added, "[Robots] have to be dexterous, meaning they can do the kinds of things people generally can do with their hands and fingers, like carefully manipulate objects."
Attached videos showed robots responding to verbal commands to organize fruit, pens, and other household items into different sections or bins. One robot was able to adapt to its environment even when the bins were moved.
Other short clips in the press release showcased the robot(s) playing cards or tic-tac-toe and packing food into a lunch bag.
The company went on, "Gemini Robotics leverages Gemini's world understanding to generalize to novel situations and solve a wide variety of tasks out of the box, including tasks it has never seen before in training."
"Gemini Robotics is also adept at dealing with new objects, diverse instructions, and new environments," Google added.
What they're not saying
Tesla robots displayed similar capabilities near the start of 2024. Photo by John Ricky/Anadolu via Getty Images
Google did not explain to the reader that this is not new technology, nor are the innovations particularly impressive given what is known about advanced robotics already.
In fact, it was mid-2023 when a group of scientists and robotics engineers at Princeton University showcased a robot that could learn an individual's cleaning habits and techniques to properly organize a home.
The bot could also throw out garbage, if necessary.
The "Tidybot" had users input text describing their preferences for where items should go, with examples like "yellow shirts go in the drawer, dark purple shirts go in the closet." A language model summarized these preferences, and the robot supplemented its database with images found online, comparing them against objects in the room in order to properly identify what exactly it was looking for.
The bot was able to fold laundry, put garbage in a bin, and organize clothes into different drawers.
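The pipeline described above (free-text preferences distilled into placement rules, then matched against the objects a robot encounters) can be sketched in miniature. This is an illustrative toy, not Tidybot's actual code: the real system relies on a large language model to summarize preferences and on web-image retrieval to recognize objects, both of which are stood in for here by simple keyword matching.

```python
# Toy sketch of the Tidybot idea: turn preference sentences into
# placement rules, then route new items by keyword overlap.
# (A stand-in for the real system's LLM summarization and image matching.)

def parse_preferences(sentences):
    """Parse 'X go in the Y' sentences into {item description: place}."""
    rules = {}
    for sentence in sentences:
        item, sep, place = sentence.partition(" go in the ")
        if sep:  # keep only sentences that match the pattern
            rules[item.strip().lower()] = place.strip().rstrip(".")
    return rules

def place_item(item, rules):
    """Pick the rule whose description shares the most words with the item."""
    item_words = set(item.lower().split())
    best_place, best_overlap = "unsorted bin", 0
    for description, place in rules.items():
        overlap = len(item_words & set(description.split()))
        if overlap > best_overlap:
            best_place, best_overlap = place, overlap
    return best_place

rules = parse_preferences([
    "yellow shirts go in the drawer",
    "dark purple shirts go in the closet",
])
print(place_item("yellow shirts", rules))       # drawer
print(place_item("a dark purple shirt", rules)) # closet
```

In the research system, the summarization step is what lets the robot generalize from a few stated examples to unseen items; the keyword overlap above is only a crude approximation of that behavior.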
About six or seven months later, Tesla revealed similar technology when it showed its robot, "Tesla Optimus," removing a T-shirt from a laundry basket before gently folding it on a table.
Essentially, Google appears to have connected its language model to existing technology to simply allow for speech-to-text commands for a robot, as opposed to entering commands through text solely.
App Store Accountability Act aims to put parents back in the driver’s seat
Families with young children face a daunting challenge: navigating app stores controlled by large companies that place profit over safety. The lack of age verification, privacy protections, and simplified parental tools puts children at serious risk. Fortunately, Sen. Mike Lee (R-Utah) and Rep. John James (R-Mich.) are working to keep kids safe online.
Although the Senate and House versions differ slightly, the App Store Accountability Act includes vital reforms. The measure would require app stores to securely verify users’ ages, mandate parental approval for new downloads, and increase parents’ access to accurate app-specific information. These features would help families understand and control access to apps that jeopardize children’s digital health.
The bill does not prohibit any form of speech. It merely establishes guardrails such as requiring age verification.
Ensuring that this system works requires accountability from the companies that own major app stores. Currently, Google and Apple operate their platforms with minimal oversight, making and arbitrarily enforcing rules for maximum profit. In the absence of accountability, these digital giants leave families and children vulnerable to toxic and disturbing content.
Crucially, the bills include provisions for holding violators responsible through private rights of action (Senate) or by enforcing Federal Trade Commission laws on unfair and deceptive practices (House), each with corresponding penalties. Companies could no longer hide behind opaque app store systems that distribute malicious or poorly vetted apps, exposing children to explicit, violent, or otherwise harmful material. This legislation would protect children and hold violators liable.
Opponents of the measure misleadingly claim these common-sense protections violate the First Amendment. As the executive director of the ACLJ and a strong supporter of free speech rights, I take the First Amendment seriously and advocate for its proper interpretation. Generally, the Constitution protects free speech, but it recognizes long-standing exceptions.
The Supreme Court has ruled that obscenity lacks constitutional protection, and certain sexually explicit content can be deemed obscene for minors. In Ginsberg v. New York, the court upheld a state law restricting the distribution of offensive material to young people. Supreme Court jurisprudence is clear: Obscenity is not protected by the First Amendment, and minors warrant special protection from sexually inappropriate content.
Yet, to be clear, the App Store Accountability Act does not prohibit any form of speech. It merely establishes guardrails such as requiring age verification for app stores to protect young people from inappropriate and harmful content. Far from censoring specific content, this proposal empowers parents to have better tools to make their own decisions about what apps are appropriate for their children.
Further, the burden on companies that oversee app stores would be minimal; the data needed to verify user ages is already in their possession. Apple and Google both offer parental approval controls if parents decide to turn this feature on. Lee and James’ bill would simply make this optional feature required, taking the burden off parents to navigate a vast web of optional parental consent tools.
In the modern online age, children spend significant time online. Parents deserve to be given streamlined tools to ensure that the content their children access is safe, transparent, and protective of their well-being.
Congress should pass the App Store Accountability Act to provide families with the key tools and information necessary to protect their children’s access to online enrichment while keeping them safe from unseen dangers. Our children’s online welfare and safety depend on it.
Eyes everywhere: The AI surveillance state looms
Rapid advancements in artificial intelligence have produced extraordinary innovation, but they also raise significant concerns. Powerful AI systems may already be shaping our culture, identities, and reality. As technology continues to advance, we risk losing control over how these systems influence us. We must urgently consider AI’s growing role in manipulating society and recognize that we may already be vulnerable.
At a recent event at Princeton University, former Google CEO Eric Schmidt warned that society is unprepared for the profound changes AI will bring. Discussing his recent book, “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” Schmidt said AI could reshape how individuals form their identities, threatening culture, autonomy, and democracy. He emphasized that “most people are not ready” for AI’s widespread impact and noted that governments and societal systems lack preparation for these challenges.
Schmidt wasn’t just talking about potential military applications; he was talking about individuals’ incorporation of AI into their daily lives. He suggested that future generations could be influenced by AI systems acting as their closest companions.
“What if your best friend isn’t human?” Schmidt asked, highlighting how AI-driven entities could replace human relationships, especially for children. He warned that this interaction wouldn’t be passive but could actively shape a child’s worldview — potentially with a cultural or political bias. If these AI entities become embedded in daily life as educational tools, digital companions, or social media curators, they could wield unprecedented power to shape individual identity.
This idea echoes remarks made by OpenAI CEO Sam Altman in 2023, when he speculated about the potential for AI systems to control or manipulate content on platforms like Twitter (now X).
“How would we know if, like, on Twitter we were mostly having LLMs direct the … whatever’s flowing through that hive mind?” Altman asked, suggesting it might be impossible for users to detect whether the content they see — whether trending topics or newsfeed items — was curated by an AI system with an agenda.
He called this a “real danger,” underscoring AI’s capacity to subtly — and without detection — manipulate public discourse, choosing which stories and events gain attention and which remain buried.
Reshaping thought, amplifying outrage
The influence of AI is not limited to identity alone; it can also extend to the shaping of political and cultural landscapes.
In the 2019 edition of its Global Risks Report, the World Economic Forum emphasizes how mass data collection, advanced algorithms, and AI pose serious risks to individual autonomy. A section of the report warns that AI and algorithms can be used effectively to monitor and shape our behavior, often without our knowledge or consent.
The report highlights that AI has the potential to create “new forms of conformity and micro-targeted persuasion,” pushing individuals toward specific political or cultural ideologies. As AI becomes more integrated into our daily lives, it could make individuals more susceptible to radicalization. Algorithms can identify emotionally vulnerable people, feeding them content tailored to manipulate their emotions and sway their opinions, potentially fueling division and extremism.
We have already seen the devastating impact of similar tactics in the realm of social media. In many cases, these platforms use AI to curate content that amplifies outrage, stoking polarization and undermining democratic processes. The potential for AI to further this trend — whether in influencing elections, radicalizing individuals, or suppressing dissent — represents a grave threat to the social fabric of modern democratic societies.
In more authoritarian settings, governments could use AI to tighten control by monitoring citizens’ every move. By tracking, analyzing, and predicting human actions, AI fosters an environment ripe for totalitarian regimes to grow.
In countries already compromising privacy, AI’s proliferation could usher in an omnipotent surveillance state where freedoms become severely restricted.
Navigating the AI frontier
As AI continues to advance at an unprecedented pace, we must remain vigilant. Society needs to address the growing potential for AI to influence culture, identity, and politics, ensuring that these technologies are not used for manipulation or control. Governments, tech companies, and civil society must work together to create strong ethical frameworks for AI development and deployment that are devoid of political agendas and instead embrace individual liberty and autonomy.
The challenges are complex, but the stakes are high. Schmidt, Altman, and others in the tech industry have raised alarms, and it is crucial that we heed their warnings before AI crosses an irreversible line. We need to establish global norms that safeguard privacy and autonomy, promoting transparency in how AI systems are used and ensuring that individuals retain agency over their own lives and beliefs.